Technology of Virtual Reality

Overview of VRML

By Azura Mat Salim

Virtual Reality Modeling Language (VRML) is an international standard file format for describing interactive 3D multimedia on the Internet. It is not a general-purpose language like C or C++; rather, it is a scene description language that describes the behavior of objects in 3D worlds, and it does not define an application programmer interface (API). VRML 1.0 was created based on the Open Inventor file format. Its capabilities were further extended with the release of VRML 2.0, and after minor improvements to the working draft, VRML97 replaced VRML 2.0 as the ISO standard.

Features

VRML defines the most common attributes found in 3D applications, such as transformations, texture mapping, viewpoints and materials. VRML 1.0 was used to create static scenes and objects; the VRML 2.0 specification made them more interactive. VRML 1.0 lacked dynamics and multi-user support, as well as support for curved objects (e.g. NURBS). The introduction of behaviors into scenes was the major enhancement over version 1.0: it allows dynamically changing geometry to be shared and promotes greater interactivity between the user and the scene. VRML 2.0 further included sensors for interaction between objects and events (e.g. collision and viewpoint sensors).

As with HTML, it is relatively easy to compose objects for VRML worlds using a generic text editor. The VRML 2.0 specification can be used both for defining VRML documents and as a file interchange format. VRML files can refer to many other file formats: JPEG, BMP and GIF images can be used as texture maps, and Java was chosen as an independent standard to be used with VRML. VRML also supports various scripting languages (e.g. JavaScript) and custom protocols.

VRML uses a hierarchical scene graph to describe 3D objects and worlds, which makes it easier to build large worlds from smaller parts. Nodes in the scene graph hold their data in fields and communicate with each other through events; events are generated by sensors, which are the basic user-interaction and animation primitives of VRML. The VRML 2.0 design provides a prototyping mechanism, the PROTO statement, that encapsulates scene graphs and promotes their reuse. For extensibility, the EXTERNPROTO statement additionally allows node definitions to reside outside the VRML file.
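As a concrete illustration of this hierarchy, the short Java program below emits a minimal VRML 2.0 world: a red box nested inside a Transform node. The VRML node syntax follows the VRML 2.0/VRML97 specification; generating the text from Java is simply a convenient way to show it here, and the class name is invented for the example.

```java
// Build a minimal VRML 2.0 world as text: a Shape node (appearance +
// geometry) nested under a Transform node, showing the hierarchical
// scene graph structure that VRML files are made of.
public class VrmlWorld {
    public static String minimalWorld() {
        return String.join("\n",
            "#VRML V2.0 utf8",
            "Transform {",
            "  translation 0 1 0",
            "  children [",
            "    Shape {",
            "      appearance Appearance {",
            "        material Material { diffuseColor 1 0 0 }",
            "      }",
            "      geometry Box { size 2 2 2 }",
            "    }",
            "  ]",
            "}");
    }

    public static void main(String[] args) {
        System.out.println(minimalWorld());
    }
}
```

Saved with a `.wrl` extension, a file with this content can be opened in any VRML 2.0 browser.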

A VRML browser is required to view VRML objects and worlds. It can be a Web browser plug-in (e.g. Cortona by Parallel Graphics, or Cosmo Player) or a standalone application that can view and manipulate VRML worlds (e.g. Open World, or FreeWRL, an open-source VRML browser for Linux).

VRML is most commonly used to create virtual worlds for architectural walkthroughs, scientific visualization, entertainment and industrial design. With this comes the need for multi-user worlds where people can meet and collaborate in various fields. At present, however, VRML still lacks the networking and database protocols necessary to create true multi-user interactive 3D worlds; today's multi-user worlds require integrating VRML with other languages such as Java. The two key elements in creating a virtual world are a text editor or build tools, and a VRML browser to view the world. Basic knowledge of VRML is sufficient to create a decent world, but some amount of complex programming, experience and artistic flair is necessary to create a fast, efficient and striking one.

Java 3D

By Fadlynna Ilyani Zulkarim

Java 3D is a full-featured 3D graphics API (Application Programming Interface) that includes the essential features found in similar graphics rendering tools. The Java 3D implementation is layered on top of native low-level rendering APIs, namely OpenGL and Direct3D. With its comprehensive library of 3D classes, it is part of the Java Media family of APIs. It is considered a high-level API because it is based on Java and shields users from low-level rendering details such as hardware acceleration. It also supports high levels of optimization and multiprocessor rendering.

Java 3D allows us to easily create virtual worlds that are immersive and even interactive. Like other graphics APIs, it lets users work with lighting, texture mapping and various other behaviors.

Java 3D has powerful yet easily mastered graphics capabilities. Thanks to the underlying Java platform, it can also support applications that operate on a variety of output devices, and it inherits Java's portability and networking capabilities. A Java 3D applet or application may run on different operating systems, different low-level graphics APIs and different graphics hardware; thus it provides a platform-independent mechanism for developing 3D graphics.

Features

  • Scene graph programming model: Java 3D is based on a high-level scene graph programming model. Scene graphs are tree-like data structures used to store, organize and render 3D information, and Java 3D’s scene graphs are made of objects called ‘nodes’. A 3D space containing 3D objects is called a ‘virtual universe’, which can be viewed on a display device. We attach the scene graph to the virtual universe, and changes are made by calling methods on the objects.
  • Java 3D rendering control: Java 3D has its own rendering control: the renderer traverses a Java 3D scene graph, displays its visible geometry, processes user input and performs behaviors. Thus, the developer retains full control over the rendering process.
  • Scalability: Java 3D was built with scalability in mind. As a high-level graphics API, it takes advantage of hardware acceleration and high-level optimization features (e.g. view culling and parallel rendering). Like Java, Java 3D supports multithreading, whereby threads (lightweight processes) allow a program to be divided into smaller tasks that can be executed independently.
  • Convenience and utility classes: Basic scene graph construction, mouse and keyboard navigation behaviors and more may be found in Sun’s Java 3D convenience classes. These effectively save developers time and effort, letting them concentrate on enhancing the program itself.
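The scene graph model above can be sketched in plain Java. This is a conceptual illustration only, not the Java 3D API itself: the class and method names below are invented for the sketch, whereas in Java 3D the corresponding roles are played by classes such as VirtualUniverse, BranchGroup and TransformGroup.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of a scene graph: nodes form a tree, and a renderer
// traverses the tree depth-first to reach every visible piece of geometry.
public class SceneGraphSketch {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
        Node add(Node child) { children.add(child); return this; }
    }

    // Depth-first traversal, as a renderer performs when drawing the scene.
    static void traverse(Node node, List<String> visited) {
        visited.add(node.name);
        for (Node child : node.children) traverse(child, visited);
    }

    public static void main(String[] args) {
        // Universe -> transform -> geometry, mirroring the Java 3D layering.
        Node root = new Node("VirtualUniverse")
            .add(new Node("TransformGroup")
                .add(new Node("ColorCube")));
        List<String> visited = new ArrayList<>();
        traverse(root, visited);
        System.out.println(visited); // prints [VirtualUniverse, TransformGroup, ColorCube]
    }
}
```

Changing the scene then amounts to calling methods on nodes in the tree, which is exactly the editing model the bullet list describes.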

Java 3D is, in essence, the Java programming interface for interactive 3D graphics. Naturally, it gives developers and users platform-independent, high-performance applications and applets. Because it uses a scene graph programming model, it greatly simplifies programming compared with lower-level APIs such as OpenGL and DirectX. It is an optional package installed on top of Java 2.

Reference

Walsh, A. E. and Gehringer, D., Java 3D API Jump-Start (2002)

MPEG-4

By Chieng Chin Yi

MPEG-4 is an ISO/IEC standard developed by MPEG (Moving Picture Experts Group), the committee that also developed the standards known as MPEG-1 and MPEG-2. These standards are what made interactive video on CD-ROM and Digital Television possible. MPEG-4 is the result of another international effort involving hundreds of researchers and engineers from all over the world. MPEG-4, whose formal ISO/IEC designation is ISO/IEC 14496, was finalized in October 1998 and became an International Standard in the first months of 1999. The fully backward compatible extensions under the title of MPEG-4 Version 2 were frozen at the end of 1999, to acquire the formal International Standard Status early in 2000.

MPEG-4 builds on the proven success of three fields:

  • Digital television;
  • Interactive graphics applications (synthetic content);
  • Interactive multimedia (World Wide Web, distribution of and access to content)

MPEG-4 provides the standardized technological elements enabling the integration of the production, distribution and content access paradigms of the three fields.

Features

The MPEG-4 standard provides a set of technologies to satisfy the needs of authors, service providers and end users alike.

  • For authors, MPEG-4 enables the production of content that has far greater reusability and flexibility than is possible today with individual technologies such as digital television, animated graphics, and World Wide Web (WWW) pages and their extensions. It also makes it possible to better manage and protect content owners’ rights.
  • For network service providers, MPEG-4 offers transparent information that can be interpreted and translated into the appropriate native signaling messages of each network with the help of relevant standards bodies. The foregoing, however, excludes Quality of Service considerations, for which MPEG-4 provides a generic QoS descriptor for the different MPEG-4 media. The exact translations from the QoS parameters set for each medium to the network QoS are beyond the scope of MPEG-4 and are left to network providers. Signaling the MPEG-4 media QoS descriptors end-to-end enables transport optimization in heterogeneous networks.
  • For end users, MPEG-4 brings higher levels of interaction with content, within the limits set by the author. It also brings multimedia to new networks, including low-bitrate and mobile ones. An MPEG-4 applications document on the MPEG home page describes many end-user applications, including interactive multimedia broadcast and mobile communications.

For all parties involved, MPEG seeks to avoid a multitude of proprietary, non-interworking formats and players.

MPEG-4 achieves these goals by providing standardized ways to:

  • Represent units of aural, visual or audiovisual content, called "media objects". These media objects can be of natural or synthetic origin; this means they could be recorded with a camera or microphone, or generated with a computer;
  • Describe the composition of these objects to create compound media objects that form audiovisual scenes;
  • Multiplex and synchronize the data associated with media objects, so that they can be transported over network channels providing a QoS appropriate for the nature of the specific media objects; and
  • Interact with the audiovisual scene generated at the receiver’s end.
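To make the multiplexing and synchronization point concrete, the sketch below interleaves timestamped access units from several media objects into one stream ordered by presentation time, which is the basic idea behind transporting a composed audiovisual scene. It is a conceptual illustration only: the class names are invented, and it does not follow the actual MPEG-4 Systems syntax.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Conceptual sketch: access units from separate media objects are merged
// into one multiplexed stream, ordered by timestamp, so the receiver can
// resynchronize the objects when composing the audiovisual scene.
public class MuxSketch {
    static class AccessUnit {
        final String mediaObject;  // e.g. "audio", "video"
        final int timestampMs;     // presentation time stamp
        AccessUnit(String mediaObject, int timestampMs) {
            this.mediaObject = mediaObject;
            this.timestampMs = timestampMs;
        }
    }

    static List<AccessUnit> multiplex(List<List<AccessUnit>> streams) {
        List<AccessUnit> muxed = new ArrayList<>();
        for (List<AccessUnit> stream : streams) muxed.addAll(stream);
        muxed.sort(Comparator.comparingInt((AccessUnit au) -> au.timestampMs));
        return muxed;
    }

    public static void main(String[] args) {
        List<AccessUnit> video = List.of(new AccessUnit("video", 0), new AccessUnit("video", 40));
        List<AccessUnit> audio = List.of(new AccessUnit("audio", 20));
        for (AccessUnit au : multiplex(List.of(video, audio)))
            System.out.println(au.mediaObject + " @ " + au.timestampMs + " ms");
    }
}
```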

Implementation

MPEG-4 Video offers technology that covers a large range of existing applications as well as new ones. Low-bit-rate and error-resilient coding allows robust communication over limited-rate wireless channels, useful for applications such as mobile videophones and space communication. There may also be a role in surveillance data compression, since a very low or variable frame rate is possible. At high bit rates, tools are available for the transmission and storage of high-quality video suitable for the studio and other very demanding content creation applications. It is likely that the standard will eventually support data rates well beyond those of MPEG-2.

A major application area is interactive web-based video. Software that provides live MPEG-4 video on a web page has already been demonstrated. There is much room for applications to make use of MPEG-4's object-based characteristics. The binary and grayscale shape-coding tools allow arbitrary-shaped video objects to be composed together with text and graphics, providing a rich interactive experience for web-based presentations and advertising; the same scenario also applies to set-top box applications. Additionally, the scalability tools allow the user experience to be smoothly matched to terminal and data link capabilities.

MPEG-4 video has already been used to encode video captured with a hand-held camera. This form of application is likely to grow in popularity given its fast and easy transfer to a web page, and may also make use of the MPEG-4 still-texture mode for still-frame capture. The games market is another area where MPEG-4 video, still textures, interactivity and SNHC show much promise, with 3-D texture mapping of still images, live video, or extended pre-recorded video sequences enhancing the player experience. Adding live video of the players enriches multi-player 3-D games, as does the use of arbitrary-shaped video, where transparency can be combined artistically with 3-D video texture mapping.

Overview of Open Inventor

By Ernie Darlina Taib

Open Inventor is an object-oriented 3D toolkit that includes a powerful yet easy-to-use API for developing interactive 3D graphics applications. It presents a programming model based on a 3D scene database that dramatically simplifies graphics programming. It includes a rich set of objects such as cubes, polygons, text, materials, cameras, lights, trackballs, handle boxes, 3D viewers, and editors, which shortens programming time and extends 3D programming capability.

The Open Inventor file format is the basis of VRML (Virtual Reality Modeling Language) for extending the World Wide Web to incorporate 3D graphics. Because it is based on OpenGL, it can take advantage of OpenGL hardware acceleration when available. And like OpenGL, Open Inventor is platform/window-system independent but provides a rich suite of window system specific components that encapsulate commonly used tasks in an object-oriented fashion. Written in C++, Open Inventor also defines a standard file format for exchanging 3D data among applications and platforms.

This toolkit simplifies the software development process and allows very rapid development of graphics applications. Simple enough to be an integral part of many computer graphics courses taught at the university level, Open Inventor powers thousands of production-quality applications used in almost every industry including CAD, geosciences, medicine, academia, chemistry and movie production.

History

Open Inventor came to life about ten years ago. The first beta version of Inventor (then known as Scenario) appeared in 1991. IRIS Inventor 1.0 was released in 1992; it was based on GL and hence was not platform independent. Inventor was used in Showcase™, a multimedia authoring and presentation tool, where it provided support for the 3D Gizmo (3D objects, textures, materials, lights, shadows, customizable extruded models, etc.). After a major revision and rewrite, Open Inventor 2.0 was introduced in 1994; it was now based on OpenGL and was licensed to third parties for porting to other platforms. The same year, a subset of its file format with some additions was proposed to the Internet community as a possible scene description language for 3D over the Internet, and it was accepted as the VRML standard. The realization of VRML indirectly popularized Inventor among thousands of people all over the world, and most VRML authoring applications and browsers are written using Inventor-like APIs.

Over the years SGI received numerous requests for a GNU/Linux® version of Open Inventor. By open-sourcing the toolkit, Silicon Graphics Inc. (SGI) made it available on Linux and at the same time enabled this large and very active user community to study, understand and enhance Open Inventor. Earlier in 2000, SGI had released the source code of the OpenGL Sample Implementation to the open source community, clearing the way for high-quality OpenGL implementations on Linux. The release of the Open Inventor source code to the community in August 2000 further highlighted SGI's commitment to providing hardware and software technologies that are relevant to graphics developers.

Design & Architecture

The Open Inventor toolkit includes:

  • a 3D scene database that includes shape, property, group, engine, and sensor objects, used to create a hierarchical 3D scene;
  • a set of node kits that provide a convenient mechanism for creating pre-built groupings of Inventor nodes;
  • a set of manipulators, including the handle box and trackball, which are objects in a scene database that the user can interact with directly; and
  • an Inventor Component Library for Xt, including a render area (a window used for rendering), material editor, viewers, and utility functions, used to provide some high-level interactive tasks.

For display and interaction, the Component Library is the most important part of Inventor. The component library is window-system dependent but is available for most platforms, thus maintaining a common look and feel for applications across platforms. On X Window systems, the Inventor Xt Component Library makes heavy use of Xt/Motif for windows and events, and OpenGL is used for all shading, lighting, and drawing via the GLX extension. An Inventor scene is an ordered collection of objects (nodes), and the scene database recognizes all registered Inventor nodes. New nodes and behaviors can be added by the programmer in a number of ways. By using dynamic shared objects (DSOs), new nodes can be made available to any Inventor application for reading and rendering. This provides a powerful plug-and-play ability that allows data and code to be shared among applications. Even if a DSO is not provided for a new node, alternate representations that use existing nodes can be specified for use with other applications.
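The plug-and-play idea can be sketched in plain Java: node types are registered by name so that a file reader can instantiate any registered node it encounters. This is a conceptual illustration only; the class names are invented, and Inventor's actual mechanism loads C++ node classes from DSOs at run time.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Conceptual sketch of Inventor-style node registration: new node types
// are registered under a type name, so a scene reader can create any
// registered node by name -- the plug-and-play idea behind DSO nodes.
public class NodeRegistry {
    interface SceneNode { String typeName(); }

    private final Map<String, Supplier<SceneNode>> factories = new HashMap<>();

    void register(String name, Supplier<SceneNode> factory) {
        factories.put(name, factory);
    }

    SceneNode create(String name) {
        Supplier<SceneNode> factory = factories.get(name);
        if (factory == null) throw new IllegalArgumentException("unknown node: " + name);
        return factory.get();
    }

    public static void main(String[] args) {
        NodeRegistry registry = new NodeRegistry();
        // A hypothetical extension registers its node type by name.
        registry.register("Cube", () -> () -> "Cube");
        System.out.println(registry.create("Cube").typeName()); // prints Cube
    }
}
```

If no factory exists for a node name, a real toolkit can fall back to an alternate representation built from existing nodes, as described above.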

Other Languages for VR

By Norkhairul Wahab

AC3D

AC3D is a popular 3D object/scene modeler available for Linux, Windows 95/NT, and SGI. It is very easy to use, yet powerful. It outputs POV-Ray, VRML (1 and 2), RenderMan, Dive, Massive and other formats.

Features
  • Multi-platform program: the AC3D file format is compatible across platforms
  • Easy-to-use, intuitive interface.
  • Built-in fast OpenGL 3D renderer with adjustable field-of-view - instantly see results of your actions in 3D. Spin the model or switch into 'walk mode' for Quake-style control.
  • Headlight and up to seven positionable lights.
  • 24-bit color palette with adjustable diffuse, ambient, emissive, shininess and transparency values
  • Attach URLs to objects for use in VRML files

Minimal Reality (MR)