Multi-Projector-Mapper (MPM): Open-Source 3D Projection Mapping Software Framework
Introduction
The multi-projector-mapper (MPM) is an open-source software framework for 3D projection mapping using multiple projectors. It contains a basic rendering infrastructure and interactive tools for projector calibration. Calibration uses the method given in Appendix A of Oliver Bimber and Ramesh Raskar’s book Spatial Augmented Reality.
The framework is the outcome of the “Projections of Reality” cluster at smartgeometry 2013 and should be seen as a prototype for developing specialized projection mapping applications. Alternatively, the projector calibration method alone can be used to output the OpenGL projection and modelview matrices, which can then be used by other applications. In addition, the more generic code within the framework may also serve as a starting point for those who want to dive into ‘pure’ Java / OpenGL coding (e.g. when coming from Processing).
We continue to work on the code at ETH Zurich’s Future Cities Laboratory. Upcoming features include the integration of the 3D scene analysis component, which has so far been realised as a separate application. Your suggestions and feedback are welcome!
1 Source Code Repository @ GitHub
The framework is available as open-source (BSD licensed). Jump to GitHub to get the source code:
https://github.com/arisona/mpm
The repository contains an Eclipse project, including dependencies such as JOGL. Thus, the code should run out of the box on Mac OS X, Windows and Linux.
2 Usage
The framework supports an arbitrary number of projectors - as many as your computer can drive. At smartgeometry, we used an AMD HD 7870 Eyefinity 6 with six Mini DisplayPort outputs, of which four were used for projection mapping and one as control output:

2.1 Configuration
The code allows opening an OpenGL window for every output. For projection mapped scenes, windows without decorations are used; they can be placed full screen at the appropriate positions of the virtual desktop:
public MPM() {
    // The calibration model defines the virtual reference shape (by default
    // a cube) whose points are matched to the physical rig during calibration.
    ICalibrationModel model = new SampleCalibrationModel();
    scene = new Scene();
    scene.setModel(model);
    // One view per output: a decorated control view at (0, 10) and two
    // undecorated projection views at (530, 0) and (530, 530), each 512x512.
    scene.addView(new View(scene, 0, 10, 512, 512, "View 0", 0, 0.0, View.ViewType.CONTROL_VIEW));
    scene.addView(new View(scene, 530, 0, 512, 512, "View 1", 1, 0.0, View.ViewType.PROJECTION_VIEW));
    scene.addView(new View(scene, 530, 530, 512, 512, "View 2", 2, 90.0, View.ViewType.PROJECTION_VIEW));
    ...
}
The above code opens three windows: one control view (with window decorations) and two projection views (without decorations). The coordinates and window sizes in this example are just samples and need to be adjusted for the concrete setup (i.e. depending on the virtual desktop configuration).
2.2 Launching the Application & Calibration
Once the application launches, all views show the default scene. The control view additionally shows the available keystrokes. Pressing “2” switches to calibration mode. The views now show the calibration model with its calibration points. Note that in calibration mode, all projection views are blanked unless their window is activated (e.g. by clicking into the window).
For calibration, six circular calibration points in 3D space need to be matched to their physical counterparts. Thus, when using the default calibration model, which is a cube, a physical calibration rig corresponding to the cube needs to be used:

The individual points can now be matched by dragging them with the mouse. For fine-tuning, use the cursor keys. As soon as the sixth point is matched, the scene automatically adjusts.
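Under the hood, the Bimber/Raskar method estimates a 3x4 perspective projection matrix from the 3D-2D point correspondences and then decomposes it into OpenGL projection and modelview matrices. The following sketch illustrates only the estimation step via the direct linear transform (DLT), using the Apache Commons Math3 library that the framework already depends on. It is a simplified illustration, not MPM’s actual code:

import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.SingularValueDecomposition;

public class DltSketch {
    // Estimates the 3x4 projection matrix (returned as a row-major
    // 12-vector) from n >= 6 correspondences between 3D world points
    // (world[i] = {x, y, z}) and 2D image points (image[i] = {u, v}).
    static double[] estimateProjection(double[][] world, double[][] image) {
        int n = world.length;
        double[][] a = new double[2 * n][];
        for (int i = 0; i < n; i++) {
            double x = world[i][0], y = world[i][1], z = world[i][2];
            double u = image[i][0], v = image[i][1];
            a[2 * i]     = new double[] { x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u };
            a[2 * i + 1] = new double[] { 0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v };
        }
        // The homogeneous solution minimising |A p| subject to |p| = 1 is the
        // right singular vector belonging to the smallest singular value.
        RealMatrix m = new Array2DRowRealMatrix(a, false);
        return new SingularValueDecomposition(m).getV().getColumn(11);
    }
}

Decomposing the resulting matrix into intrinsic and extrinsic parameters, and from there into the OpenGL matrices, is the second step described in the book’s appendix.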
For first-time setup, there is also a ‘fill mode’, which simply projects a white filled rectangle (with a crosshair in the middle) for each projector. This allows for easy rough adjustment of each projector. Hit “3” to activate fill mode.
Once calibration is complete, press “S” to save the configuration and “1” to return to navigation / rendering mode. After the next restart, press “L” to load the saved configuration. In rendering mode, the actual model is shown; by default this is just a placeholder, i.e. a piece of application-specific code. The renderer includes shadow volume rendering (press “0” to toggle shadows); however, the code is not optimised at this point.

Note that it is not necessary to use a cube as the calibration rig: basically any 3D shape can be used for calibration, as long as you have matching virtual and physical models. Simply replace the SampleCalibrationModel shown above with an instance of your custom ICalibrationModel implementation.
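As an illustration, a custom model could look roughly as follows. This is a hypothetical sketch: the method name is an assumption, so consult the actual ICalibrationModel interface in the repository for the real contract:

// Hypothetical sketch only: the real ICalibrationModel interface in the
// MPM repository may declare different methods.
public class PyramidCalibrationModel implements ICalibrationModel {
    // Six reference points (x, y, z) on a pyramid, matching a physical rig.
    private final float[] calibrationPoints = {
        0.0f, 0.0f, 0.0f,    // base corner A
        1.0f, 0.0f, 0.0f,    // base corner B
        1.0f, 1.0f, 0.0f,    // base corner C
        0.0f, 1.0f, 0.0f,    // base corner D
        0.5f, 0.5f, 0.8f,    // apex
        0.25f, 0.25f, 0.4f   // midpoint of the slant edge from A to the apex
    };

    // Assumed accessor: the points the calibration UI lets the user drag.
    public float[] getCalibrationPoints() {
        return calibrationPoints;
    }
}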
The following YouTube video provides a short overview of the calibration and projection procedure:
3 Code Internals and Additional Features
The code is written in Java, using the JOGL OpenGL bindings for rendering and the Apache Commons Math3 library for the matrix decomposition. Most of it is rather straightforward, as it is intentionally kept clean and modular. Rendering to multiple windows makes use of OpenGL shared contexts. We are currently working on the transition towards OpenGL 3.2, which will replace the fixed-function pipeline code.
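For those unfamiliar with shared contexts: the idea is that several windows render from a single set of GL resources (textures, buffer objects), so scene data is uploaded only once. Below is a minimal, generic sketch using JOGL’s NEWT windows rather than MPM’s actual window setup, assuming a recent JOGL (2.2+ for setSharedAutoDrawable; package names as of 2.3):

import com.jogamp.newt.opengl.GLWindow;
import com.jogamp.opengl.GLCapabilities;
import com.jogamp.opengl.GLProfile;

public class SharedContextSketch {
    public static void main(String[] args) {
        GLCapabilities caps = new GLCapabilities(GLProfile.getDefault());

        // Master window; its context owns the shared GL resources.
        GLWindow master = GLWindow.create(caps);
        master.setSize(512, 512);
        master.setVisible(true); // realises the master's GL context

        // Second window shares textures, buffers etc. with the master.
        GLWindow second = GLWindow.create(caps);
        second.setSharedAutoDrawable(master);
        second.setSize(512, 512);
        second.setVisible(true);
    }
}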
In addition, the code contains a simple geometry server, which listens via UDP or UDP/OSC for lists of coloured triangles that are then fed into the renderer. Using this mechanism, we built a system at smartgeometry consisting of multiple machines performing 3D scanning, geometry analysis and rendering, and exchanging geometry data over the network. Note that this is prototype code that will be replaced with a more systematic approach in the future.
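To give an idea of the mechanism, a client could stream a triangle to such a server roughly as follows. The wire format, host and port below are invented for illustration; the actual protocol is defined in the MPM source:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class TriangleSender {
    public static void main(String[] args) throws Exception {
        // One coloured triangle: three vertices (x, y, z) plus an RGB colour.
        float[] triangle = {
            0f, 0f, 0f,   1f, 0f, 0f,   0f, 1f, 0f, // vertices
            1f, 0f, 0f                              // colour: red
        };
        ByteBuffer buf = ByteBuffer.allocate(triangle.length * Float.BYTES);
        for (float f : triangle)
            buf.putFloat(f);
        byte[] payload = buf.array();
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("localhost"), 7000)); // assumed port
        }
    }
}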
4 Credits & Further Information
Concept & projection setup: Eva Friedrich & Stefan Arisona. Partially based on earlier discussions and work by Christian Schneider & Stefan Arisona.
Code: MPM was written by Stefan Arisona, with contributions by Eva Friedrich (early prototyping, and shadow volumes) and Simon Schubiger (OSC).
Support: This software was developed in part at ETH Zurich’s Future Cities Laboratory in Singapore.
A general overview of the work at smartgeometry'13 is available at the “Projections of Reality” page.

Summer School "Future Cities: Networks and Grammars" (24 June - 12 July 2013)
From 24 June to 12 July, the Future Cities Laboratory will host the “Future Cities 2013” summer school on “Networks and Grammars”. As part of the programme, my colleagues and I will be teaching hands-on crash courses using Esri CityEngine.
More information: https://www.sustainability.ethz.ch/lehre/Sommerakademien/so2013/index_EN
Visualizing Interchange Patterns in Massive Movement Data (EuroVis 2013)
Authors: Wei Zeng, Chi-Wing Fu, Stefan Arisona, Huamin Qu

Abstract: Massive amounts of movement data, such as daily trips made by millions of passengers in a city, are widely available nowadays. They are a highly valuable means not only for unveiling human mobility patterns, but also for assisting transportation planning, in particular for metropolises around the world. In this paper, we focus on a novel aspect of visualizing and analyzing massive movement data, i.e., the interchange pattern, aiming at revealing passenger redistribution in a traffic network. We first formulate a new model of circos figure, namely the interchange circos diagram, to present interchange patterns at a junction node in a bundled fashion, and optimize the color assignments to respect the connections within and between junction nodes. Based on this, we develop a family of visual analysis techniques to help users interactively study interchange patterns in a spatiotemporal manner: 1) multi-spatial scales: from network junctions such as train stations to people flow across and between larger spatial areas; and 2) temporal changes of patterns from different times of the day. Our techniques have been applied to real movement data consisting of hundreds of thousands of trips, and we also present two case studies on how transportation experts worked with our interface.
https://www.youtube.com/watch?v=_QWnA1k2ZrU
Title: Visualizing Interchange Patterns in Massive Movement Data
Authors: Wei Zeng, Chi-Wing Fu, Stefan Arisona, Huamin Qu
Journal: Computer Graphics Forum
Publisher: Wiley
Year: 2013
Volume: 32(3)
Pages: 271-280
DOI: 10.1111/cgf.12114
Link: https://onlinelibrary.wiley.com/doi/10.1111/cgf.12114/abstract
Projections of Reality (smartgeometry'13)
Projections of Reality engages in augmenting design processes involving physical models with real time spatial analysis. The work was initiated at smartgeometry'13 at UCL London in April 2013 and is currently continued at ETH Zurich’s Value Lab Asia in Singapore.
The work explores techniques of real time spatial analysis of architectural and urban design models: how physical models can be augmented with real-time 3D capture and analysis, enabling the architect to interact with the physical model whilst obtaining feedback from a computational analysis. We investigate possible workflows that close the cycle of designing and model making, analysing the design and feeding results back into the design cycle by projecting them back onto the physical model.
On a technical level, we address the challenge of dealing with complex, unstructured data from real-time scanning, e.g. a point cloud, which needs processing before it can be used in urban analysis. We investigate strategies to extract information about urban form targeted at design processes, using real-time scanning devices (Microsoft Kinect, hand-held laser scanners, etc.), open-source 3D reconstruction software (reconstructMeQT) and post-processing of the input through parametric modelling (e.g. Processing, Rhino scripting, Generative Components).
The goal of the workshop at smartgeometry'13 was to create a working prototype of a physical urban model that is augmented with real-time analysis. We worked with the cluster participants to formulate a design concept of the prototype and to break down the challenge into individual modules, e.g. analysing and cleaning the geometry, algorithms for real time analysis, projecting imagery onto the physical model.
Cluster Champions: Stefan Arisona, Eva Friedrich, Bruno Moser, Dominik Nüssen, Lukas Treyer
Cluster Participants: Piotr Baszynski, Moritz Cramer, Peter Ferguson, Rich Maddock, Teruyuki Nomura, Jessica In, Graham Shawcross, Reeti Sompura, Tsung-Hsien Wang
Year: 2013

Open Call: Art/Science Residency at the Future Cities Laboratory
ASR 2013 Call For Proposals
Please find full call and application forms here: https://www.digitalartweeks.ethz.ch/web/DAW13/ASR2013
Arts/Science Residency with focus on Transmedia at ETH Zurich’s Future Cities Laboratory
The Singapore-ETH Centre, in collaboration with the Arts and Creativity Lab & the Interactive and Digital Media Institute, is pleased to announce a 2013 Arts/Science Residency at ETH Zurich’s Future Cities Laboratory (FCL). The selected artist will be invited to spend two months working at the FCL with researchers, students and the local arts community while he or she conducts a project exploring and making connections between art and science.
The artist will be invited to present the project at ETH Zurich’s Digital Art Weeks Festival (May 6 - 19, 2013); thus, the residency must start no later than the beginning of May 2013.
The Art/Science Residency is made possible with the support of ETH Zurich’s Future Cities Laboratory and IDMI Art/Science Residency Programme.
Theme: Explorations in Transmedia for Urban Research
The Future Cities Laboratory (FCL) is a transdisciplinary research centre focused on urban sustainability in a global frame. It is the first research programme of the Singapore-ETH Centre for Global Environmental Sustainability (SEC). It is home to a community of over 100 PhD, postdoctoral and Professorial researchers working on diverse themes related to future cities and environmental sustainability.
In September 2013, the 3rd FCL Forum will take place at the NRF CREATE Campus in Singapore. The event is planned and realised through three main pillars: a conference, an exhibition, and a library. All pillars collect and showcase FCL work established over the last three years.
The goal of this Art/Science Residency is to propose and realise a bridge that connects these pillars. The general topic of investigation is the use of transmedia storytelling approaches to support large, heterogeneous and complex research projects, i.e. to coherently integrate the overall mission, research questions, works in progress and results across multiple platforms and formats. Consequently, proposals should radically question and innovatively revise current standards in academic communication. While they may include web- and game-based transmedia approaches, as typically known from advertising, they should go beyond the norm of such techniques.
In particular, we are looking for proposals that include other areas and formats, and adhere to the following guidelines:
- Proposals should include use of the Value Lab Asia, a large collaborative, digitally augmented space, equipped with several multi-touch surfaces and displays, a 33 megapixel high-resolution video wall, and video conferencing systems. It is used by the FCL researchers for urban visualization, scenario planning and stakeholder participation applications.
- Proposals should be open to incorporating output from ongoing design research studios, seminars and research projects.
- Proposals should incorporate the evolving Future Cities Laboratory exhibition and the upcoming September 2013 conference, and the outcome of the project should be directly applicable for the exhibition and conference.
- Proposals may include design and production of physical models through digital fabrication.
For all formats and areas you will work closely with FCL faculty and PhD students, and will have access to FCL space and technical infrastructure, including the Value Lab and the FCL model-making workshop.
