Tag: UCSB
2 posts tagged with "UCSB"
Computer-assisted Media Authoring Techniques
Stefan Arisona, research fellowship at UCSB, 2007 - 2008
Project Summary
The volume of digital media content has grown enormously in recent years: Emerging internet-based applications, such as YouTube or MySpace, are media-rich, emphasise content sharing, and attract huge communities. As the volume of content further expands, we see an increasing need not only to annotate and retrieve content, but also to produce and edit it in a computer-assisted manner, employing (semi-)automated methods. However, existing work does not satisfy these needs, both in terms of usable software instruments and the underlying theoretical frameworks. This viewpoint is also acknowledged by upcoming European Union IST projects: The Framework Programme 7 (FP7) calls for projects addressing Intelligent Content and Semantics and targets advanced authoring environments for interactive and expressive content [ICT Programme Committee, 2007, Objective ICT-2007.4.2].
This project aims at a systematic classification and implementation of novel content authoring techniques. By “mixed-media content authoring” we understand the interactive editing, or the “composition”, of content for multiple modalities (e.g., auditory and visual). The project emphasises applying computational methods in order to achieve a high degree of automation of the authoring process. It further targets techniques for expressive human-computer interaction, as well as techniques that need to operate in real time.
Central to the project is the hypothesis that existing traditional composition techniques from individual domains (e.g., music, graphics, video) can be identified, classified, and generalised into a foundational framework of interactive media composition techniques that is applicable to automated authoring tasks. We claim that the implementation of these techniques will lead to more effective methods for simultaneous mixed-media authoring. The project starts out by establishing a catalogue of media composition techniques: collecting and classifying existing techniques from specific modalities, identifying common methodologies, and filling gaps and building bridges with new techniques where needed. Before the composition techniques can be implemented, an appropriate real-time content rendering infrastructure is required. Here, the project benefits from preliminary work carried out at the former Multimedia Laboratory of the University of Zurich and at the Computer Systems Institute of ETH Zurich. As its central part, the project implements relevant composition techniques as part of an interactive media authoring instrument. Special attention is given to a human-centred approach, in order to prevent results that carry the stigma of the software used, and to provide content authors with the highest possible degree of expressiveness. In its final stage, the project validates the results empirically by means of concrete application scenarios from media art and entertainment.
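To give a flavour of what such a generalised framework could look like in code, the following minimal Python sketch models composition techniques as objects that drive modality-agnostic parameters, so that a crossfade works identically on an audio gain and on a visual opacity. All names and interfaces here are hypothetical illustrations, not the project’s actual design.

```python
from abc import ABC, abstractmethod

class Parameter(ABC):
    """A modality-agnostic parameter (e.g. audio gain, layer opacity)."""
    @abstractmethod
    def set_value(self, value: float) -> None: ...

class AudioGain(Parameter):
    def set_value(self, value: float) -> None:
        print(f"audio gain -> {value:.2f}")

class LayerOpacity(Parameter):
    def set_value(self, value: float) -> None:
        print(f"layer opacity -> {value:.2f}")

class CompositionTechnique(ABC):
    """Base class for composition techniques that generalise across modalities."""
    @abstractmethod
    def apply(self, param: Parameter, t: float) -> None: ...

class Crossfade(CompositionTechnique):
    """One ramp, many modalities: a fade-in on audio gain is the same
    operation as a fade-in on visual opacity."""
    def __init__(self, start: float, duration: float):
        self.start, self.duration = start, duration

    def apply(self, param: Parameter, t: float) -> None:
        x = (t - self.start) / self.duration  # linear ramp over [start, start + duration]
        param.set_value(min(max(x, 0.0), 1.0))

# The same technique instance drives both modalities in lockstep.
fade = Crossfade(start=0.0, duration=2.0)
for t in (0.0, 1.0, 2.0):
    fade.apply(AudioGain(), t)
    fade.apply(LayerOpacity(), t)
```

The point of the sketch is the separation: the technique encodes the compositional gesture once, and binding it to different parameters is what makes it a mixed-media operation.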
The project brings together elements from many different fields, primarily from software engineering, human-computer interaction, and the foundations of multimedia, but also from music composition, film editing theory, and aesthetics. The intellectual merit of the project is to provide systematic foundational work on mixed-media composition techniques by exploiting computational methods, and to validate the practicability of the theory in terms of usable software instruments. Besides the dissemination of scientific findings, the broader impacts of the project are expected to be significant for a wide range of applications that depend on computer-assisted and automated content creation: Example beneficiaries include the entertainment industry and commercial content providers, who can benefit from novel expressive authoring instruments, as well as emerging services beyond IPTV, where automated content composition helps support entire communities. Collectively, the project aims to advance the state of the art of interactive media content authoring.

Project: Post-doctoral research fellowship, fully funded by the Swiss National Science Foundation (“Fellowship for Advanced Researchers”)
Location: Media Arts & Technology, University of California, Santa Barbara
Period: 2007 - 2008
The Allosphere Research Facility
I worked on the AlloSphere project during my stay at UCSB, mostly experimenting with projection warping in a full-dome environment and with bringing in content, especially through the Soundium platform.
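The AlloSphere’s actual calibration and warping pipeline is not described here; as a rough, illustrative sketch of the kind of computation involved in dome projection warping, the following Python snippet builds a warp mesh for one projector under simplifying assumptions: a spherical screen centred at the origin, an angular-fisheye (“dome master”) source image, and an off-centre pinhole projector looking along +z. All parameter values are made-up placeholders.

```python
import math

def ray_sphere_t(origin, direction, radius):
    """Smallest positive t with |origin + t * direction| = radius (unit direction)."""
    b = sum(o * d for o, d in zip(origin, direction))
    c = sum(o * o for o in origin) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b + math.sqrt(disc)  # from inside the sphere, the far root is the screen hit
    return t if t > 0.0 else None

def dome_warp_mesh(proj_pos=(0.0, 0.0, -2.0), fov_deg=60.0, aspect=16 / 9,
                   grid=16, radius=5.0, aperture=math.pi):
    """Map a grid of projector pixels to angular-fisheye texture coordinates.
    The projector sits at proj_pos inside a spherical screen of the given
    radius, looking along +z toward the dome pole."""
    tan_half = math.tan(math.radians(fov_deg) / 2.0)
    mesh = []
    for j in range(grid + 1):
        for i in range(grid + 1):
            su, sv = i / grid, j / grid                # projector screen coords in [0, 1]
            x = (2.0 * su - 1.0) * tan_half * aspect   # pinhole ray through this pixel
            y = (2.0 * sv - 1.0) * tan_half
            n = math.sqrt(x * x + y * y + 1.0)
            d = (x / n, y / n, 1.0 / n)
            t = ray_sphere_t(proj_pos, d, radius)
            if t is None:
                continue                               # ray misses the screen surface
            px, py, pz = (p + t * k for p, k in zip(proj_pos, d))
            # Direction from dome centre to hit point -> angular fisheye lookup,
            # where the fisheye radius is proportional to the angle off the +z axis.
            theta = math.atan2(math.hypot(px, py), pz)
            phi = math.atan2(py, px)
            r = theta / (aperture / 2.0)
            tu = 0.5 + 0.5 * r * math.cos(phi)
            tv = 0.5 + 0.5 * r * math.sin(phi)
            mesh.append((su, sv, tu, tv))
    return mesh
```

In practice such a mesh would be derived per projector from measured calibration data rather than an idealised model, uploaded once, and applied each frame as a textured grid on the GPU, with blend masks smoothing the overlap regions between adjacent projectors.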
The AlloSphere is a large, immersive, multimedia and multimodal instrument for scientific and artistic exploration being built at UC Santa Barbara. Physically, the AlloSphere is a three-story cubic space housing a large perforated metal sphere that serves as a display surface. A bridge cuts through the center of the sphere from the second floor and comfortably holds up to 25 participants. The first-generation AlloSphere instrumentation design includes 14 high-resolution stereo video projectors to light up the complete spherical display surface, 256 speakers distributed outside the surface to provide high-quality spatial sound, a suite of sensors and interaction devices to enable rich user interaction with the data and simulations, and the computing infrastructure to enable the high-volume computations necessary to provide a rich visual, aural, and interactive experience for the user. When fully equipped, the AlloSphere will be one of the largest scientific instruments in the world; it will also serve as an ongoing research testbed for several important areas of computing, such as scientific visualization, numerical simulation, large-scale sensor networks, high-performance computing, data mining, knowledge discovery, multimedia systems, and human-computer interaction. It will be a unique immersive exploration environment, with a fully surrounding sphere of high-quality stereo video and spatial sound, and user tracking for rich interaction. It will support interaction and exploration by a single user, small groups, or small classrooms.
The AlloSphere differs from conventional virtual reality environments, such as a CAVE or a hemispherical immersive theater, by its seamless surround-view capabilities and its focus on multiple sensory modalities and interaction. It enables much higher levels of immersion and user/researcher participation than existing immersive environments. The AlloSphere research landscape comprises two general directions: (1) computing research, which includes audio and visual multimedia visualization, human-computer interaction, and computer systems research focused largely on the AlloSphere itself, i.e., pushing the state of the art in these areas to create the most advanced and effective immersive visualization environment possible; and (2) applications research, which applies AlloSphere technologies to scientific and engineering problems to produce domain-specific applications for analysis and exploration in areas such as nanotechnology, biochemistry, quantum computing, brain imaging, geoscience, and large-scale design. These two types of research activities are distinct yet tightly coupled, feeding back to one another: computing research produces and improves the enabling technologies for the applications research, which in turn drives, guides, and re-informs the computing research.
https://www.allosphere.ucsb.edu/
Location: California NanoSystems Institute, University of California, Santa Barbara
Period: 2007 - 2008
