Tag: Portfolio
41 posts tagged with "Portfolio"
Computer-assisted Media Authoring Techniques
Stefan Arisona, research fellowship at UCSB, 2007 - 2008
Project Summary
The volume of digital media content has grown enormously in recent years: emerging internet-based applications, such as YouTube or MySpace, are media-rich, emphasise content sharing, and attract huge communities. As the volume of content further expands, we see an increasing need not only to annotate and retrieve content, but also to produce and edit content in a computer-assisted manner, employing (semi-)automated methods. However, existing work does not satisfy these needs – neither in terms of usable software instruments nor in the underlying theoretical frameworks. This viewpoint is also acknowledged by upcoming European Union IST projects: the Framework Programme 7 (FP7) calls for projects addressing Intelligent Content and Semantics and targets advanced authoring environments for interactive and expressive content [ICT Programme Committee, 2007, Objective ICT-2007.4.2].
This project aims at a systematic classification and implementation of novel content authoring techniques. By “mixed-media content authoring” we understand the interactive editing, or “composition”, of content for multiple modalities (e.g., auditory and visual). The project emphasises applying computational methods in order to achieve a high degree of automation of the authoring process. It further targets techniques concerned with expressive human-computer interaction, and techniques that must operate in real-time where needed.
Central to the project is the hypothesis that existing traditional composition techniques from individual domains (e.g., music, graphics, video) can be identified, classified, and generalised into a foundational framework of interactive media composition techniques that is applicable to automated authoring tasks. We claim that the implementation of these techniques will lead to more effective methods for simultaneous mixed-media authoring. The project starts out by establishing a catalogue of media composition techniques. This includes collecting and classifying existing techniques from specific modalities, identifying common methodologies, and filling gaps and building bridges with new techniques where needed. Before the composition techniques can be implemented, an appropriate real-time content rendering infrastructure is required. Here, the project benefits from preliminary work carried out at the former Multimedia Laboratory of the University of Zurich and at the Computer Systems Institute of ETH Zurich. As its central part, the project implements relevant composition techniques as part of an interactive media authoring instrument. Special attention is given to a human-centred approach, in order to prevent results that are visibly shaped by the software, and to provide the highest possible degree of expressiveness to content authors. At its final stage, the project empirically verifies the results by means of concrete application scenarios from media art and entertainment.
The project brings together elements from many different fields, primarily software engineering, human-computer interaction, and the foundations of multimedia, but also music composition, film editing theory, and aesthetics. The intellectual merit of the project is to provide systematic foundational work on mixed-media composition techniques by exploiting computational methods, and to validate the practicability of the theory in terms of usable software instruments. Besides the dissemination of scientific findings, the broader impacts of the project are expected to be significant for a wide range of applications that depend on computer-assisted and automated content creation techniques: example beneficiaries include the entertainment industry and commercial content providers, who can benefit from novel expressive authoring instruments, as well as emerging services beyond IPTV, where automated content composition tasks help support whole communities. Collectively, the project aims at advancing the state of the art of interactive media content authoring to a new level.

Project: Post-doctoral research fellowship, fully funded by the Swiss National Science Foundation (“Fellowship for Advanced Researchers”)
Location: Media Arts & Technology, University of California, Santa Barbara
Period: 2007 - 2008
The Allosphere Research Facility
I worked on the AlloSphere project during my stay at UCSB, mostly experimenting with projection warping in a full-dome environment and with bringing in content, especially through the Soundium platform.
The AlloSphere is a large, immersive, multimedia and multimodal instrument for scientific and artistic exploration being built at UC Santa Barbara. Physically, the AlloSphere is a three-story cubic space housing a large perforated metal sphere that serves as a display surface. A bridge cuts through the center of the sphere from the second floor and comfortably holds up to 25 participants. The first-generation AlloSphere instrumentation design includes 14 high-resolution stereo video projectors to light up the complete spherical display surface, 256 speakers distributed outside the surface to provide high-quality spatial sound, a suite of sensors and interaction devices to enable rich user interaction with the data and simulations, and the computing infrastructure to enable the high-volume computations necessary to provide a rich visual, aural, and interactive experience for the user. When fully equipped, the AlloSphere will be one of the largest scientific instruments in the world; it will also serve as an ongoing research testbed for several important areas of computing, such as scientific visualization, numerical simulations, large-scale sensor networks, high-performance computing, data mining, knowledge discovery, multimedia systems, and human-computer interaction. It will be a unique immersive exploration environment, with a fully surrounding sphere of high-quality stereo video and spatial sound and user tracking for rich interaction, supporting exploration by a single user, small groups, or small classrooms.
The AlloSphere differs from conventional virtual reality environments, such as a CAVE or a hemispherical immersive theater, through its seamless surround-view capabilities and its focus on multiple sensory modalities and interaction. It enables much higher levels of immersion and user/researcher participation than existing immersive environments. The AlloSphere research landscape comprises two general directions: (1) computing research, which includes audio and visual multimedia visualization, human-computer interaction, and computer systems research focused largely on the AlloSphere itself – i.e., pushing the state of the art in these areas to create the most advanced and effective immersive visualization environment possible; and (2) applications research, which applies AlloSphere technologies to scientific and engineering problems to produce domain-specific applications for analysis and exploration in areas such as nanotechnology, biochemistry, quantum computing, brain imaging, geoscience, and large-scale design. These two types of research activities are different yet tightly coupled, feeding back into one another: computing research produces and improves the enabling technologies for the applications research, which in turn drives, guides, and re-informs the computing research.
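Projection warping for a spherical display involves mapping view directions onto the curved projection surface. As a rough illustration of the kind of computation involved (not the AlloSphere's actual pipeline, which is not described here), the following sketch maps a 3D view direction to normalized coordinates on an equidistant fisheye ("dome master") image covering a hemisphere; the function name and conventions are our own:

```python
import math

def dome_uv(x, y, z):
    """Map a 3D view direction to normalized (u, v) coordinates on an
    equidistant ('dome master') fisheye image covering a hemisphere.
    +z points at the dome zenith; directions below the rim are rejected."""
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    psi = math.acos(z)                 # polar angle measured from the zenith
    if psi > math.pi / 2:
        return None                    # direction falls outside the hemisphere
    r = psi / (math.pi / 2)            # equidistant mapping: radius ~ angle
    az = math.atan2(y, x)              # azimuth around the dome axis
    return 0.5 + 0.5 * r * math.cos(az), 0.5 + 0.5 * r * math.sin(az)
```

The zenith direction (0, 0, 1) lands at the image center (0.5, 0.5), and directions at the dome rim land on the unit circle bounding the image; a real warping pipeline would additionally correct for each projector's pose and lens distortion.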
https://www.allosphere.ucsb.edu/
Location: California NanoSystems Institute, University of California, Santa Barbara
Period: 2007 - 2008
Mobile Systems Architectures Lecture
The lecture “Mobile Systems Architectures 1 & 2”, taught at the Computer Systems Institute of ETH Zurich in 2006 and 2007, was realised in collaboration with Swisscom and took an integral approach to mobile computing from a “systems view”. The course was targeted at master programme students with an interest in mobile applications and systems design. Its goal was to provide in-depth knowledge of all architectural aspects of today’s mobile systems and to prepare the students for taking a leading role in designing and implementing tomorrow’s mobile systems and application models. The course directly benefited from the Institute’s internationally acknowledged research competence in systems construction. Swisscom contributed its competence and technical skills as a leading mobile network operator.
Important Information
Please Note: Simon left Swisscom Innovations in July 2007 and I left ETH Zurich’s Computer Systems Institute in September 2007; the course has been suspended since the end of summer term 2007. We thank all students who participated in and contributed to the course.
The slides are available here, but please be aware that they have not been updated since 2007 and some chapters are outdated (e.g. who’s interested in Symbian nowadays?). However, much of the conceptual and historical content is still valid. If you are interested in the original PowerPoint files for your own course, please get in touch! We will be happy to give permission to use the material as long as the origin and authorship are mentioned.
Organisation & Lecturers
Dr. Stefan Arisona, Computer Systems Institute, Department of Computer Science, ETH Zürich
Dr. Simon Schubiger, Swisscom Innovations
Slide Download
Mobile Systems Architectures 1: https://robotized.arisona.ch/doc/msa/msa_1_slides.zip
Mobile Systems Architectures 2: https://robotized.arisona.ch/doc/msa/msa_2_slides.zip
Industry Compact Course: https://robotized.arisona.ch/doc/msa/cc_slides.zip
Course Materials: MSA I
General Information
Title: Mobile Systems Architectures I / Mobile System-Architekturen I
Number: 251-0279-00L
Schedule: Winter Term 2006/2007, Wednesdays 9 - 11 (Lecture) and 11 - 12 (Exercises)
Location: IFW C42
Credit Points (ECTS): 5
Language: German, English (if requested)
Enrollment: Web
Description
The lecture Mobile Systems Architectures I & II, which is realised in collaboration with Swisscom, takes an integral approach to mobile computing from a “systems view”. The course is targeted at master programme students with an interest in mobile applications and systems design. Its goal is to provide in-depth knowledge of all architectural aspects of today’s mobile systems and to prepare the students for taking a leading role in designing and implementing tomorrow’s mobile systems and application models.
The first part of the course, Mobile Systems Architectures I [251-0279-00], focuses on mobile devices, their operating systems, and how these devices communicate locally. In addition, we will present various widely used SPIs (System Programming Interfaces), APIs (Application Programming Interfaces), and corresponding development environments, which are essential for the mobile service and application designer/developer.
The material is presented in a top-down approach, starting with the high-level Java 2 Microedition. Then various operating systems are presented (Symbian, Windows CE). Typical mobile hardware is presented in terms of a case study of TI’s OMAP platform. Local communication is covered by NFC, IrDA, Bluetooth, AT commands, and OBEX. At the end of the course we will present SyncML and OMA DM, which can be seen as the crossing point to remote communication, covered in Mobile Systems Architectures II (Summer term 2007).
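To give a flavour of the protocol-level material covered, here is a small sketch of the fixed fields of an OBEX CONNECT request as defined in the public OBEX specification: opcode, big-endian packet length, protocol version, flags, and maximum packet size. The helper names are ours, and the sketch omits the optional headers that a real session would carry:

```python
import struct

OBEX_CONNECT = 0x80  # CONNECT opcode with the final bit set

def build_connect(max_packet: int = 0x2000, flags: int = 0x00) -> bytes:
    """Build a minimal OBEX CONNECT request (no optional headers).
    Layout: opcode, packet length (big-endian), version, flags, max size."""
    body = struct.pack(">BBH", 0x10, flags, max_packet)  # version 1.0 = 0x10
    length = 3 + len(body)                               # 3-byte prefix + fields
    return struct.pack(">BH", OBEX_CONNECT, length) + body

def parse_connect(pkt: bytes) -> dict:
    """Parse the fixed fields back out of a CONNECT request."""
    opcode, length, version, flags, max_packet = struct.unpack(">BHBBH", pkt[:7])
    assert opcode == OBEX_CONNECT and length == len(pkt)
    return {"version": version, "flags": flags, "max_packet": max_packet}
```

Note that the packet length field counts the whole packet including the 3-byte opcode/length prefix, a detail that trips up many first implementations in the lab exercises.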
It is our intent to present the course in a broad manner. At the same time, we will provide in-depth information at certain points, typically in terms of accompanying lab exercises. The practical application of the presented material is regarded as an integral part of the course, and successful solutions will be rewarded in the final credit point accreditation.
Principal Objectives
Introduction to the foundations of mobile systems architectures, with particular focus on mobile end devices such as mobile phones and PDAs, their operating systems, and their local communication capabilities. Become acquainted with development and simulation environments. Realisation of practical application examples.
Requirements
- Basic knowledge of operating systems and computer networks.
- Basic C++ and Java programming skills.
Efficiency Control and Examination
- All exercises (except those marked as optional) have to be completed and sent to the assistant by email (deadlines are given below). Please form groups of two.
- End-term oral examination.
Schedule, Handouts and Exercises
25.10.2006: Introduction and Development Environments
Exercise 1: J2ME Development Environment Setup. Optional.
01.11.2006: Developing for Mobile Devices: Java 2 Microedition (1)
Exercise 2: J2ME Address Book.
08.11.2006: Developing for Mobile Devices: Java 2 Microedition (2)
Exercise 2: J2ME Address Book (continued).
15.11.2006: Developing for Mobile Devices: Symbian (1)
Exercise 3: Native Hello World. Optional.
22.11.2006: Developing for Mobile Devices: Symbian (2)
Exercise 4: Native Extended Address Book.
29.11.2006: Developing for Mobile Devices: MSHELL & Python
Exercise 4: Native Extended Address Book (continued).
06.12.2006: Developing for Mobile Devices: Windows CE
Exercise 4: Native Extended Address Book (continued).
13.12.2006: Mobile Linux / Mobile Device Hardware: OMAP
Exercise 4: Native Extended Address Book (continued)
20.12.2006: Local Communication: NFC
Exercise 5: NFC Exercise. Deadline: 10.01.2007
03.01.2007: Local Communication: IrDA
Exercise 6: Dump Analysis. Deadline: 17.01.2007
10.01.2007: Local Communication: IrDA, IrCOMM, OBEX
17.01.2007: Local Communication: AT Commands
Exercise 7: Address Book + OBEX. Deadline: 31.01.2007
24.01.2007: Local Communication: Bluetooth 1
31.01.2007: Local Communication: Bluetooth 2. Summary.
Course Materials MSA II
General Information
Title: Mobile Systems Architectures II / Mobile System-Architekturen II
Number: 251-0280-00
Schedule: Summer Term 2007, Wednesdays 9 - 11 (Lecture) and 11 - 12 (Exercises)
Location: IFW B42
Credit Points (ECTS): 5
Language: German, English (if requested)
Enrollment: Web
Description
The lecture Mobile Systems Architectures I & II, which is realised in collaboration with Swisscom, takes an integral approach to mobile computing from a “systems view”. The course is targeted at master programme students with an interest in mobile applications and systems design. Its goal is to provide in-depth knowledge of all architectural aspects of today’s mobile systems and to prepare the students for taking a leading role in designing and implementing tomorrow’s mobile systems and application models.
The second part of the course, Mobile Systems Architectures II [251-0280-00], focuses on current and upcoming remote communication capabilities of mobile devices, cellular networks and mobile service infrastructures. The course does not deal with low level wireless communication theory, which is well covered by other courses offered at ETHZ.
The starting point will be currently deployed (2G) GSM networks, including GSM voice, SMS, MMS, and WAP services. We will cover 2.5G and 2.75G packet-based services such as GPRS and EDGE. As a next step we will introduce UMTS, with a side glance at other 3G standards. In addition, an introduction to WLAN and WiMAX technologies is given. Finally, the course will present techniques for the successful design of mobile client-server applications.
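As an illustration of the bit-level detail the GSM chapters go into, the following sketch packs text into the 7-bit septet encoding of GSM 03.38, as used in the SMS TP-User-Data field. It is deliberately simplified: it handles only the basic ASCII-overlapping subset of the default alphabet and ignores extension tables and the carriage-return padding rule for a trailing 7-bit gap:

```python
def pack_gsm7(text: str) -> bytes:
    """Pack characters into GSM 03.38 septet-packed octets (simplified:
    assumes every character maps to a single 7-bit code point)."""
    bits = 0      # bit accumulator, filled LSB-first
    nbits = 0     # number of valid bits currently in the accumulator
    out = bytearray()
    for c in text:
        bits |= (ord(c) & 0x7F) << nbits   # append septet above existing bits
        nbits += 7
        while nbits >= 8:                  # emit complete octets
            out.append(bits & 0xFF)
            bits >>= 8
            nbits -= 8
    if nbits:                              # flush the remaining partial octet
        out.append(bits & 0xFF)
    return bytes(out)
```

Note the LSB-first bit order: each septet is appended above the bits already accumulated, which is why the five characters of "hello" pack into the five octets E8 32 9B FD 06 rather than a simple truncation of the ASCII bytes.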
It is our intent to present the course in a broad manner. At the same time, we will provide in-depth information at certain points, typically in terms of accompanying lab exercises. The practical application of the presented material is regarded as an integral part of the course, and successful solutions will be rewarded in the final credit point accreditation.
Principal Objectives
Introduction to the foundations of mobile systems architectures with particular focus on currently deployed and future remote communication technologies and mobile application services. Become acquainted with development and simulation environments. Realisation of practical application examples.
Requirements
- Basic knowledge of operating systems and computer networks.
- Basic C++ and Java programming skills.
- Mobile Systems Architectures I recommended but not required.
Efficiency Control and Examination
- All exercises (except those marked as optional) have to be completed and sent to the assistant by email (deadlines are given below). Please form groups of two.
- End-term oral examination.
Schedule, Handouts and Exercises
21.03.2007: Introduction
28.03.2007: GSM: Introduction and Protocols
04.04.2007: GSM Protocols continued
11.04.2007: GSM Voice/Data and SMS
Exercise 1 is at the end of the slides. Deadline is 24.4.2007
18.04.2007: GPRS & EDGE
25.04.2007: UMTS 1
02.05.2007: UMTS 2
CDMA Link: https://www.dpunkt.de/mobile/code/cdma.html
09.05.2007: Location Based Services
Exercise 2 is at the end of the slides. Deadline is 24.5.2007
16.05.2007: End-to-End Application Design I
23.05.2007: Emission (Guest Lecturer)
30.05.2007: End-to-End Application Design II
06.06.2007: Outlook Asia (Japan / South Korea) (Guest Lecturer)
13.06.2007: HSDPA / VoIP / Case Studies
Note: The case studies are presented in the lecture only.
20.06.2007: Alternatives: WLAN, WiMAX, VoIP… (Guest Lecturer)
SQEAK - Real-time Multiuser Interaction Using Cellphones
This research project explored one approach to providing mobile phone users with a simple, low-cost, real-time user interface that allows them to control highly interactive public-space applications, whether for a single user or for a large number of simultaneous users.
In order to accurately sense the real-time hand gestures of mobile phone users, the method uses miniature accelerometers that send the orientation signals over the network’s audio channel to a central computer for signal processing and application delivery. This ensures minimal delay, minimal connection protocol incompatibility, and minimal discrimination between mobile phone types or versions. Without the need for mass user compliance, large numbers of users could begin to control public-space cultural and entertainment applications using simple gesture movements.
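SQEAK's exact signal processing is not spelled out here, but as an illustration of how orientation can be recovered from miniature accelerometers, the following sketch estimates pitch and roll from a static 3-axis reading, assuming gravity is the only acceleration acting on the sensor (a common simplification; a real gesture pipeline would also filter the signal and segment gestures over time):

```python
import math

def tilt_angles(ax: float, ay: float, az: float):
    """Estimate pitch and roll (in degrees) from a static 3-axis
    accelerometer sample, with +z pointing out of the device face
    and gravity as the only acceleration present."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

A device lying flat (reading roughly (0, 0, 1) g) yields zero pitch and roll, while tilting it nose-up toward (-1, 0, 0) g drives the pitch toward 90 degrees.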

D. Majoe, S. Schubiger-Banz, A. Clay, and S. Arisona. 2007. SQEAK: A Mobile Multi Platform Phone and Networks Gesture Sensor. In: Proceedings of the Second International Conference on Pervasive Computing and Applications (ICPCA07). Birmingham, UK, July 26 - 27.
Research: Project carried out at ETH Zurich
Location: ETH Zurich, Switzerland
Timeframe: 2006 - 2007
Realisation: Stefan Arisona, Simon Schubiger, Dennis Majoe, Art Clay
Collaborator: Swisscom Innovations
The Digital Marionette
The interactive installation Digital Marionette impressively shows the audience the look and feel of a puppet in the multimedia era: the nicely dressed wooden marionette is replaced by a Lara Croft-like character, and the traditional strings attached to the puppet control handles give way to a network of computer cables. The installation is currently exhibited at the Ars Electronica Center in Linz.
The installation consists of a projection of a digital face, which can be controlled by the visitors. The puppet can be made to talk via speech input, and the classical puppet controls serve as controllers for head direction and facial emotions, such as joy, anger, or sadness. The whole artistic concept was designed and realised in an interdisciplinary manner, incorporating art-historical facts about marionettes, the architectural space, interaction design, and state-of-the-art research results from computer graphics and speech recognition.

Concept
The translation from old to new, from analogue to digital, takes place via the most popular computer input device: the mouse. The puppet control handles are attached to sliding strips of mousepads and eight computer mice track movements of the individual strings. This approach is at the same time efficient, low-cost and easily understandable by the non-expert visitor. Speech input is realised via speech recognition, where the recognised phonemes are mapped to a set of facial expressions and visemes.
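The phoneme-to-viseme step can be sketched as a simple lookup from recognised phonemes to mouth shapes; the categories and mapping below are invented for illustration and are not the installation's actual table:

```python
# Hypothetical, heavily simplified phoneme-to-viseme table for illustration.
VISEMES = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "a": "open", "e": "mid", "i": "spread", "o": "round", "u": "round",
}

def to_visemes(phonemes):
    """Map a sequence of recognised phonemes to mouth shapes,
    silently dropping phonemes the table does not cover."""
    return [VISEMES[p] for p in phonemes if p in VISEMES]
```

In the installation, each viseme would then drive a blend shape or bone pose of the animated face, interpolated over time so the mouth moves smoothly through the recognised phoneme stream.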
Exhibition at Museum Bellerive, Zurich, Switzerland, 2004
The first exhibition of the Marionette was realised in 2004 by the Corebounce Art Collective in cooperation with Christian Iten (interface realisation), Swisscom Innovations (Swiss-German voice recognition), ETH Zürich (real-time face animation), and Eva Afuhs and Sergio Cavero (curators, Museum Bellerive, Zürich).
Exhibition at Ars Electronica Centre, Linz, Austria, 2006 - 2008
An augmented permanent version of the installation was presented in the entrance hall of the world-famous Ars Electronica Center in Linz. It was realised by the Corebounce Art Collective with technical support from Gerhard Grafinger of the Ars Electronica Center. We further thank Ellen Fethke, Gerold Hofstadler and Nicoletta Blacher of Ars Electronica, and Jürg Gutknecht and Luc Van Gool of ETH Zürich.
Bellerive Installation Video
3D Concept Video
Additional Information
Exhibition: Museum Bellerive
Location: Zurich, Switzerland
Period: Jun 11 - Sep 12 2004
Exhibition: Ars Electronica Center
Location: Linz, Austria
Period: Sep 2006 - Sep 2008
Concept and realisation: Corebounce Art Collective (Pascal Mueller, Stefan Arisona, Simon Schubiger, Matthias Specht)
