MET++

MET++ (Multimedia ET++) is a general compositional environment for multimedia with interactive editing facilities, in which multimedia presentations are regarded as hierarchical compositions of time objects. MET++ was developed at the University of Zurich by Philipp Ackermann [Ackermann, 1994a]. It was published and distributed as Free Software in 1996, but no updates have been released since then, as the author joined a startup company shortly afterwards and discontinued its support. Nevertheless, the framework has been used in several developments and remains a reference for multimedia software design.

Much of its impact is due to the fact that the media framework is based on the ET++ framework, written in C++ and also developed at the University of Zurich. ET++ [Weinand et al., 1989] is an OO framework that integrates user interface building blocks, basic data structures, support for object input/output, printing, and high-level application framework components. ET++ is a classical reference in the OO literature for having served as a test-bed for software engineering practices such as design patterns and refactoring. To the support for 2D objects already present in ET++, MET++ added support for 3D graphical objects, camera objects, and audio and music objects.

MET++ is an object-oriented application framework that addresses the main difficulties encountered when designing applications for multimedia authoring or editing, namely: the integration of several media types with different real-time constraints at low-level device interfaces; support for media compositions that define high-level inter-media synchronization and real-time control; intuitive user interface interactions with direct manipulation of graphical representations and high semantic feedback; and flexibility and portability across different hardware configurations and platforms.

Although the main development platform of MET++ was a Silicon Graphics Indigo workstation, the framework is cross-platform: hardware dependencies are hidden in a portability layer that provides abstract interfaces to the operating system, the window system, and audio and MIDI input/output.

Perhaps the most important aspect of the framework is the way it handles time synchronization [Ackermann, 1994b]. Temporal information is represented through a hierarchical composition that includes information about part-of relations, grouping, and temporal constraints, and allows automatic time calculation. The components in the structure are modelled as time events that have their own starting point, duration, and virtual timeline. The base class of the time-dependent objects is TEvent. High-level synchronization accuracy is based on a millisecond time resolution. In a multimedia presentation, temporal relationships are configured by composing universal time objects such as TSequence, TimeLine, TSynchro, TShift, and TLoop with special media objects. Any time-dependent object inheriting from TEvent can be inserted into the composition.
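To make the idea of automatic time calculation over such a hierarchy concrete, here is a minimal self-contained sketch. The class names TEvent, TSequence, and TSynchro come from the text above; the members, method names, and the TMedia leaf class are illustrative assumptions, not the actual MET++ API.

```cpp
#include <algorithm>
#include <iostream>
#include <memory>
#include <vector>

class TEvent {                        // base class of time-dependent objects
public:
    virtual ~TEvent() = default;
    virtual long Duration() const = 0;   // duration in milliseconds
};

class TMedia : public TEvent {        // hypothetical leaf with fixed duration
    long ms_;
public:
    explicit TMedia(long ms) : ms_(ms) {}
    long Duration() const override { return ms_; }
};

class TComposite : public TEvent {    // common base for compositions
protected:
    std::vector<std::unique_ptr<TEvent>> children_;
public:
    void Add(std::unique_ptr<TEvent> e) { children_.push_back(std::move(e)); }
};

class TSequence : public TComposite { // children play one after another
public:
    long Duration() const override {
        long total = 0;
        for (const auto& c : children_) total += c->Duration();
        return total;
    }
};

class TSynchro : public TComposite {  // children start together
public:
    long Duration() const override {
        long longest = 0;
        for (const auto& c : children_)
            longest = std::max(longest, c->Duration());
        return longest;
    }
};

int main() {
    auto seq = std::make_unique<TSequence>();
    seq->Add(std::make_unique<TMedia>(3000));   // e.g. a title animation
    auto sync = std::make_unique<TSynchro>();
    sync->Add(std::make_unique<TMedia>(5000));  // e.g. a video clip
    sync->Add(std::make_unique<TMedia>(4000));  // e.g. a sound track
    seq->Add(std::move(sync));
    // total duration is computed automatically from the composition:
    std::cout << "total: " << seq->Duration() << " ms\n";  // 3000 + 5000
}
```

The key design point is that the composition objects derive their own timing from their children, so editing a leaf's duration automatically propagates through the whole presentation.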

The dynamic temporal behavior of a time wrapper is supported through time functions expressing actions such as fading, scaling, or positioning. A time function can also manipulate time itself, so that every object can have its own local time. In real-time presentations there is usually a Conductor object that periodically sends Perform messages to the time-dependent objects in the hierarchy. Media objects compute only the data necessary for the next interval; continuous media data is computed incrementally ahead into a buffer. Interactions such as fast-forward or slow motion are realized by changing the parameters of the Perform message: start time, presentation duration, and real-time duration (the behavior of a media object depends on the way it responds to the relation between the latter two).
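The following sketch illustrates how the ratio between presentation duration and real-time duration yields normal speed, slow motion, or fast-forward. The Perform parameters follow the description above, but the struct, signatures, and the VideoClip class are assumptions for illustration, not the actual MET++ interface.

```cpp
#include <iostream>

struct PerformMsg {
    long start;         // presentation start time of this interval (ms)
    long presDuration;  // how much presentation time to render (ms)
    long realDuration;  // how much wall-clock time it will take (ms)
};

class TimeDependent {
public:
    virtual ~TimeDependent() = default;
    virtual void Perform(const PerformMsg& m) = 0;
};

class VideoClip : public TimeDependent {
public:
    void Perform(const PerformMsg& m) override {
        // presentation time advancing slower than real time => slow motion
        double rate = double(m.presDuration) / double(m.realDuration);
        std::cout << "render frames for [" << m.start << ", "
                  << m.start + m.presDuration << ") ms at rate "
                  << rate << "\n";
    }
};

int main() {
    VideoClip clip;                // a Conductor would drive this periodically
    clip.Perform({0, 40, 40});     // normal speed: rate 1.0
    clip.Perform({40, 20, 40});    // half speed (slow motion): rate 0.5
    clip.Perform({60, 80, 40});    // double speed (fast-forward): rate 2.0
}
```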

The Intensity class represents values for amplitude, gain, volume, etc., and can operate on and convert itself to byte, integer, floating-point, decibel, and MIDI values. The Pitch class handles information about pitch key and frequency; it uses the tonal system to map symbolic keys to Hz. The Beat class models beat and bar properties and delegates quantization and the mapping from symbolic beat measures to physical seconds to the MusicalContext class. The PitchScale class can represent symbolic scales (chromatic, dorian, etc.) or physical intervals for tuning purposes.
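As a sketch of the kind of conversions these value classes encapsulate, the snippet below maps decibels to a linear amplitude and to a MIDI velocity, and a symbolic MIDI key to Hz in twelve-tone equal temperament. The conversion formulas are standard; the class shapes and method names are illustrative guesses, not the MET++ originals.

```cpp
#include <cmath>
#include <iostream>

class Intensity {
    double linear_;  // linear amplitude in [0, 1]
public:
    explicit Intensity(double linear) : linear_(linear) {}
    static Intensity FromDecibel(double db) {
        return Intensity(std::pow(10.0, db / 20.0));   // dB -> linear
    }
    double AsDecibel() const { return 20.0 * std::log10(linear_); }
    int    AsMidiVelocity() const { return int(linear_ * 127.0 + 0.5); }
};

class Pitch {
    int midiKey_;    // symbolic key, e.g. 69 = A4
public:
    explicit Pitch(int midiKey) : midiKey_(midiKey) {}
    // equal-tempered mapping with A4 = 440 Hz
    double AsHertz() const {
        return 440.0 * std::pow(2.0, (midiKey_ - 69) / 12.0);
    }
};

int main() {
    Intensity i = Intensity::FromDecibel(-6.0);
    std::cout << "-6 dB ~ velocity " << i.AsMidiVelocity() << "\n";  // ~64
    std::cout << "A4 = " << Pitch(69).AsHertz() << " Hz\n";          // 440
    std::cout << "C4 = " << Pitch(60).AsHertz() << " Hz\n";          // ~261.6
}
```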

Audio resources are modelled with the abstract AudioUnit class as modules in a source-filter-sink architecture. The output of one AudioUnit can be sent to another, so that connected units define an audio signal flow graph. Audio files can be played in real time from disk or converted to a Sound object.
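A minimal sketch of such a pull-based source-filter-sink graph follows. AudioUnit is named in the text; SetInput, NextSample, and the concrete SineSource and Gain units are invented for illustration and do not reflect the actual MET++ API.

```cpp
#include <cmath>
#include <iostream>

class AudioUnit {
protected:
    AudioUnit* input_ = nullptr;
public:
    virtual ~AudioUnit() = default;
    void SetInput(AudioUnit* in) { input_ = in; }  // wire units into a graph
    virtual float NextSample() = 0;                // pull-based processing
};

class SineSource : public AudioUnit {              // source: 440 Hz sine
    static constexpr double kPi = 3.14159265358979323846;
    double phase_ = 0.0;
    double step_ = 2.0 * kPi * 440.0 / 44100.0;    // phase increment per sample
public:
    float NextSample() override {
        float s = float(std::sin(phase_));
        phase_ += step_;
        return s;
    }
};

class Gain : public AudioUnit {                    // filter: scale amplitude
    float gain_;
public:
    explicit Gain(float g) : gain_(g) {}
    float NextSample() override { return gain_ * input_->NextSample(); }
};

int main() {
    SineSource src;
    Gain gain(0.5f);
    gain.SetInput(&src);            // src -> gain defines the flow graph
    for (int i = 0; i < 4; ++i)     // a sink would pull samples like this
        std::cout << gain.NextSample() << "\n";
}
```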

The interpretation of a musical performance is modelled in the time-dependent MusicalContext class, which holds information about the tonal system, tonality, signature, measure, and tempo. The MusicPlayer interprets its notes in this context and delegates note playing to the abstract MusicInstrument class. The concrete classes MIDIInstrument, CSoundInstrument, and Synthesizer (which allocates Oscillators) map the note representation to device-specific synthesis parameters.
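The sketch below illustrates this player/instrument split: the player interprets symbolic notes in a musical context and delegates playing to an abstract instrument back end. The class names MusicalContext, MusicInstrument, MIDIInstrument, and MusicPlayer follow the text; the methods and the tempo mapping are assumptions made for the example.

```cpp
#include <iostream>

struct MusicalContext {        // holds tempo etc.; here only beats -> seconds
    double bpm = 120.0;
    double BeatsToSeconds(double beats) const { return beats * 60.0 / bpm; }
};

class MusicInstrument {        // abstract synthesis back end
public:
    virtual ~MusicInstrument() = default;
    virtual void PlayNote(int key, double seconds) = 0;
};

class MIDIInstrument : public MusicInstrument {
public:
    void PlayNote(int key, double seconds) override {
        // a real implementation would emit NoteOn/NoteOff MIDI messages
        std::cout << "MIDI note " << key << " for " << seconds << " s\n";
    }
};

class MusicPlayer {
    const MusicalContext& ctx_;
    MusicInstrument& instr_;
public:
    MusicPlayer(const MusicalContext& c, MusicInstrument& i)
        : ctx_(c), instr_(i) {}
    void Play(int key, double beats) {   // symbolic -> physical mapping
        instr_.PlayNote(key, ctx_.BeatsToSeconds(beats));
    }
};

int main() {
    MusicalContext ctx;        // 120 bpm: one beat = 0.5 s
    MIDIInstrument midi;
    MusicPlayer player(ctx, midi);
    player.Play(60, 1.0);      // middle C, one beat
    player.Play(64, 0.5);      // E4, half a beat
}
```

Because MusicInstrument is abstract, swapping MIDI output for CSound or a software synthesizer only requires a different concrete instrument; the player and the musical context remain unchanged.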
