Tag: Collaborative Environment

3 posts tagged with "Collaborative Environment"

Enabling DEMO:POLIS

“Enabling DEMO:POLIS” is a participatory urban planning installation, presented as part of the DEMO:POLIS exhibition at the Akademie der Künste in Berlin (https://www.adk.de/demopolis, 11.3.2016 - 29.5.2016). The installation engages the public in the design of open space and consists of six terminals that run a custom, interactive software application.

The software leads the user through a number of typical urban design tools (space allocation, streets, buildings, landscape, etc.) and concludes with a fly-through of the generated 3D scenario, in this case the Rathausforum / Alexanderplatz area in Berlin.
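As a rough illustration of this staged workflow, the sketch below steps through the design tools in the order described above and ends with the fly-through. The stage names follow the description; the enum and driver are purely illustrative and are not taken from the actual application code.

```java
/**
 * Illustrative sketch of the staged design workflow described above.
 * Stage names follow the text; the code itself is hypothetical and not
 * part of the actual installation software.
 */
public final class DesignWorkflowSketch {

    enum Stage { SPACE_ALLOCATION, STREETS, BUILDINGS, LANDSCAPE, FLY_THROUGH }

    public static void main(String[] args) {
        for (Stage stage : Stage.values()) {
            // each terminal guides the visitor through these steps in order
            System.out.println("Entering stage: " + stage);
        }
        System.out.println("Design complete; rendering fly-through of the 3D scenario");
    }
}
```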

The following video demonstrates a full cycle of a possible design.

https://youtu.be/sWgARvrcgxk

Open Source

Source code, data and a binary build are available at: https://github.com/arisona

Credits

Concept: Stefan Arisona, Ruth Conroy Dalton, Christoph Hölscher, Wilfried Wang

Data & Coding: Stefan Arisona, Simon Schubiger, Zeng Wei

Support: Akademie der Künste Berlin, FHNW Switzerland (Institute of 4D Technologies), ETH Zürich (Future Cities Laboratory and Chair of Cognitive Science), Northumbria University (Architecture and Built Environment).

Data & Software Workflow

Enabling DEMO:POLIS builds on Open Data, in particular the publicly available 3D models of central Berlin provided by the Senatsverwaltung für Stadtentwicklung und Umwelt (https://www.stadtentwicklung.berlin.de/planen/stadtmodelle/).

The original 3D models were first imported into Autodesk AutoCAD for layer selection and coordinate system adjustments, then brought into Autodesk Maya for data cleaning and corrections. In a final step, the data was processed in Esri CityEngine for remaining adjustments, merging and labelling, and exported as OBJ files. The software application is written in Java and based on the 3D graphics library/engine ether. As indicated above, all source code and data are available as open source.
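The exported OBJ files are plain-text geometry (vertex positions and faces), which the application then loads and renders. As a minimal, hypothetical sketch of what such an import involves (this is not the ether engine's actual loader), a few lines of Java are enough to read the positions and face indices:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal OBJ geometry reader, illustrating the kind of data the CityEngine
 * export produces (vertex positions and polygon faces). Illustrative sketch
 * only; not the loader used by the installation software.
 *
 * Usage: java ObjSketch model.obj
 */
public final class ObjSketch {
    public static void main(String[] args) throws IOException {
        List<float[]> vertices = new ArrayList<>();
        List<int[]> faces = new ArrayList<>();
        for (String line : Files.readAllLines(Path.of(args[0]))) {
            String[] t = line.trim().split("\\s+");
            if (t[0].equals("v")) {
                // vertex position: "v x y z"
                vertices.add(new float[] { Float.parseFloat(t[1]),
                                           Float.parseFloat(t[2]),
                                           Float.parseFloat(t[3]) });
            } else if (t[0].equals("f")) {
                // face: "f v1[/vt/vn] v2 ..." - keep the vertex indices only (1-based in OBJ)
                int[] idx = new int[t.length - 1];
                for (int i = 1; i < t.length; i++)
                    idx[i - 1] = Integer.parseInt(t[i].split("/")[0]) - 1;
                faces.add(idx);
            }
        }
        System.out.println(vertices.size() + " vertices, " + faces.size() + " faces");
    }
}
```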

Read more →

The Value Lab Asia

The Value Lab Asia is a collaborative, digitally augmented environment for a wide range of applications, such as participatory urban planning and design, stakeholder communication, information visualisation and discovery, and remote teaching and conferencing. It includes a 33 megapixel video wall, three large displays with touch overlays, a number of smaller mobile multi-touch displays, and extensive video conferencing capabilities. The Value Lab Asia is the younger sibling of the Value Lab Zurich, built at ETH Zurich’s ScienceCity by Gerhard Schmitt, Remo Burkhard, Jan Halatsch and Antje Kunze of the Chair of Information Architecture in 2007/08. It therefore borrows many of the concepts of the Value Lab Zurich, such as being set in a friendly environment that operates in daylight conditions, but comes with updated state-of-the-art hardware and a different look.

The Value Lab Asia was conceived in the second half of 2011 and built in only two months, from January to March 2012. It has been in regular operation since then. We are currently working on more extensive documentation. Below you will find the basic technical specifications.

If you are interested in visiting and/or using the facility, please feel free to contact me at arisona@arch.ethz.ch.

Brief Technical Specifications

  • Video wall, 4.9 x 2.7 m, running at its native resolution of 7680 x 4320 (roughly 33 megapixels) and driven from a single machine, so most applications run out of the box.
  • Display wall with three 82" multi-touch displays, also driven from a single machine.
  • Several 40" mobile multi-touch units, each including a mini-PC and thus completely autonomous.
  • Tandberg-based video conferencing with one fixed and one mobile camera. Can be flexibly configured to run on the video wall as well as the display wall.
  • Integrated video recording and production.

Institution: ETH Zurich’s Future Cities Laboratory
Location: Singapore
Period: Since 2011
Project lead, concept, system specification: Stefan Arisona
Interior design: Stefan Arisona in collaboration with Plasmadesign
System integration: PAVE System Pte Ltd

Read more →

The Allosphere Research Facility

I worked on the AlloSphere project during my stay at UCSB, mostly experimenting with projection warping in a full-dome environment and with bringing in content, especially through the Soundium platform.
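For context, projection warping in a dome means resampling flat or fisheye content so that it appears undistorted on the curved display surface. The sketch below shows one simplified version of the underlying mapping, assuming an angular (equidistant) fisheye master image and a coarse warp mesh; it is an illustration of the general technique, not the actual AlloSphere or Soundium code.

```java
/**
 * Minimal sketch of fisheye-style dome mapping, of the kind used when warping
 * content for a spherical display surface. Illustrative simplification only:
 * it assumes an angular (equidistant) fisheye master image and computes, for
 * each node of a coarse warp mesh, the texture coordinate to sample from.
 */
public final class DomeWarpSketch {

    /** Map a view direction to angular-fisheye texture coordinates in [0,1]^2. */
    static double[] directionToFisheyeUV(double x, double y, double z) {
        // theta: angle from the dome zenith (+z), phi: azimuth around it
        double theta = Math.acos(Math.max(-1, Math.min(1, z)));
        double phi = Math.atan2(y, x);
        // equidistant fisheye: radius proportional to theta, rim = 90 degrees from zenith
        double r = theta / (Math.PI / 2.0);
        return new double[] { 0.5 + 0.5 * r * Math.cos(phi),
                              0.5 + 0.5 * r * Math.sin(phi) };
    }

    public static void main(String[] args) {
        int cols = 8, rows = 8; // coarse warp mesh for illustration
        for (int j = 0; j <= rows; j++) {
            for (int i = 0; i <= cols; i++) {
                // toy projector model: normalized screen coords mapped onto the upper hemisphere
                double sx = 2.0 * i / cols - 1.0;
                double sy = 2.0 * j / rows - 1.0;
                double z = Math.sqrt(Math.max(0, 1 - sx * sx - sy * sy));
                double[] uv = directionToFisheyeUV(sx, sy, z);
                System.out.printf("node(%d,%d) -> uv(%.3f, %.3f)%n", i, j, uv[0], uv[1]);
            }
        }
    }
}
```

In practice the projector model would come from a calibration step that measures where each projector pixel actually lands on the sphere; the principle of precomputing a mesh of texture coordinates stays the same.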

The AlloSphere is a large, immersive, multimedia and multimodal instrument for scientific and artistic exploration being built at UC Santa Barbara. Physically, the AlloSphere is a three-story cubic space housing a large perforated metal sphere that serves as a display surface. A bridge cuts through the center of the sphere from the second floor and comfortably holds up to 25 participants. The first-generation AlloSphere instrumentation design includes 14 high-resolution stereo video projectors to light up the complete spherical display surface, 256 speakers distributed outside the surface to provide high-quality spatial sound, a suite of sensors and interaction devices to enable rich user interaction with the data and simulations, and the computing infrastructure to enable the high-volume computations necessary to provide a rich visual, aural, and interactive experience for the user. When fully equipped, the AlloSphere will be one of the largest scientific instruments in the world; it will also serve as an ongoing research testbed for several important areas of computing, such as scientific visualization, numerical simulations, large-scale sensor networks, high-performance computing, data mining, knowledge discovery, multimedia systems, and human-computer interaction. It will be a unique immersive exploration environment, with a full surrounding sphere of high-quality stereo video and spatial sound and user tracking for rich interaction, supporting exploration by a single user, small groups, or small classrooms.

The AlloSphere differs from conventional virtual reality environments, such as a CAVE or a hemispherical immersive theater, by its seamless surround-view capabilities and its focus on multiple sensory modalities and interaction. It enables much higher levels of immersion and user/researcher participation than existing immersive environments. The AlloSphere research landscape comprises two general directions: (1) computing research, which includes audio and visual multimedia visualization, human-computer interaction, and computer systems research focused largely on the AlloSphere itself – i.e., pushing the state of the art in these areas to create the most advanced and effective immersive visualization environment possible; and (2) applications research, which applies AlloSphere technologies to scientific and engineering problems to produce domain-specific applications for analysis and exploration in areas such as nanotechnology, biochemistry, quantum computing, brain imaging, geoscience, and large-scale design. These two types of research activities are different yet tightly coupled, feeding back to one another: computing research produces and improves the enabling technologies for the applications research, which in turn drives, guides, and re-informs the computing research.

https://www.allosphere.ucsb.edu/

Location: California NanoSystems Institute, University of California, Santa Barbara
Period: 2007 - 2008

Read more →