ASME:

International Mechanical Engineering Congress & Exhibition

November 2002 New Orleans

Paper Number: IMECE2002-32470

Copyright © #### by ASME

A Supervisory Data-Traffic Controller in Collaborative Virtual Reality Simulations

Ali Akgunduz
Analyst-Research and Development
Information Systems Division
United Airlines
P.O. Box 66100
Chicago, IL 60666-0100
Phone: (847) 700-1373 / Pat Banerjee[*]
Department of Mechanical and
Industrial Engineering
University of Illinois at Chicago
842 W. Taylor St.
2039 Engineering Research Facility, M/C 251
Chicago, IL 60607
Phone: (312) 413-3619
Fax: (312) 413-0447


ABSTRACT

In this paper, an efficient technique for distributing data in collaborative virtual reality is presented. The described technique incorporates the culling and level-of-detail concepts of virtual reality to obtain cell-based bounding volumes in each virtual environment. These bounding volumes are used to filter the data transferred between the different virtual environments in the simulation system. To orchestrate the communication between virtual environments and their bounding volumes, a supervisory control system is presented in the text.

1.  INTRODUCTION

The major advantages of Virtual Reality (VR) over traditional simulation techniques are that it enables users to be present in an immersive environment, to feel their surroundings, to interact with objects, and to receive outputs in both mathematical and visual forms. This tool, still developing in both industry and academia, gives designers, engineers, and others a means to replicate real systems more efficiently and at less expense. Recent studies in this field have focused on collaborative virtual reality simulations [1, 2].

Not many tasks in real life are performed by a single person. Activities such as product design, personnel training, and combat/flight simulation require a number of people to interact in a shared environment. The fundamental goal of VR is to replicate real systems in virtual worlds so that real-life actions can be simulated more cheaply, quickly, and accurately. From this perspective, a quality VR simulation should allow multiple users to interact within the same environment. Traditionally, however, a VR system is designed for a single user to interact with the virtual world, while most of the areas (some of them given above) where VR is utilized as a simulation tool require a number of persons to interact in the same environment. Since the virtual world is positioned on a VR display with respect to a single user's position and rotation, more than one user in the same physical space would create a conflict in positioning. Yet the necessity of having a number of users interacting in the same virtual domain to perform a common goal cannot be ignored in real-time simulations. The collaborative virtual reality concept is the solution to this drawback of VR: a collection of networked virtual worlds creates a shared work domain in which many collaborators can be present in the same immersive environment to design and manufacture new products, train people on a safer platform, or pursue any other useful application. Such systems are called Collaborative Virtual Reality (CVR).

Today, VRML and many other Web3D applications, such as X3D and Shockwave Director, provide a practical platform for customers and companies to meet in the same virtual world over the Internet. A large number of studies have been presented in the field, most of them focusing on networking [3, 4, 10]. In CVR simulations, data distribution efficiency becomes an important factor as the complexity of the simulation increases, yet few works in the literature address efficient data distribution among Virtual Environments (VEs). This paper describes a novel technique for determining when, and what kind of, data needs to be transferred to which VEs during a real-time simulation, facilitating faster rendering with no loss of data. The bounding-volume concept is utilized to achieve this goal. In this technique, VEs are divided into cells (we term these Virtual Cells) using the Level-Of-Detail (LOD) and culling concepts of VR, and each user is assigned its own bounding volume. Data that defines the changes in a particular cell is distributed among VEs when a user's bounding volume overlaps with the cell. If more than one user's bounding volume intersects the same cell, then frame-by-frame communication takes place between these users. Otherwise, the data transfer between the collaborators is delayed until the user visits the Virtual Cell (VC) later in the simulation. For this procedure we developed a supervisory controller, termed the "data controller," that decides when a data segment/block should be delayed or distributed.
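The delay-or-distribute decision described above can be sketched in a few lines of C++. This is an illustrative sketch, not the paper's implementation: the struct name, the fixed cell count, and the integer user IDs are all assumptions for demonstration.

```cpp
#include <set>

// Sketch of the "data controller" decision rule: an update originating
// in a virtual cell is distributed frame-by-frame only to users whose
// bounding volumes currently overlap that cell; for everyone else it
// is held back until they visit the cell later in the simulation.
struct DataController {
    // cellOccupants[c] = users whose bounding volume overlaps cell c
    std::set<int> cellOccupants[64];

    // Should an update in `cell` made by `sender` be sent to
    // `receiver` this frame, or delayed?
    bool sendNow(int cell, int sender, int receiver) const {
        const std::set<int>& occ = cellOccupants[cell];
        return occ.count(sender) > 0 && occ.count(receiver) > 0;
    }
};
```

With two users overlapping cell 3, updates flow between them every frame; a third user elsewhere receives nothing until its bounding volume reaches that cell.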

2.  AN OVERVIEW OF COLLABORATIVE VIRTUAL REALITY

Communication is the critical aspect of CVR simulations. The basic idea in CVR is to maintain an identical virtual world across all the collaborating VEs. In an environment where each collaborator interacts with the environment independently, predicting collaborator behavior and the consequences of their actions is still in its infancy. Communication between VEs is therefore essential for maintaining a common, identical virtual domain among the many collaborators. Some of the critical issues in data transfer in collaborative virtual reality are discussed below.

2.1  Data transfer between virtual environments

There are a number of ways of establishing communication between collaborators in CVR; the main techniques can be classified as broadcasting, multicasting, and point-to-point communication. Broadcasting transmits the information to all the users currently connected to the network, regardless of whether they need, want, or should receive the data. The second technique, multicasting, sends the data to a specific address, and whoever needs the data contacts that address to receive the information [9]. This technique is more appropriate for CVR than broadcasting: it reduces the amount of data transferred between users, yet the data is still sent to the multicast address in every frame regardless of whether there is any need for it. The third technique, point-to-point networking, establishes a direct connection between each pair of users. Even though this technique can be very efficient in CVR, the number of connections increases rapidly as the number of users grows [7]. Point-to-point networking is therefore generally preferred in small-scale simulations; for large-scale projects it is not a feasible technique.
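The scaling problem with point-to-point networking follows from simple arithmetic: every pair of users needs its own link, so n users require n(n-1)/2 connections, whereas a multicast group is shared by all. A one-line sketch (not from the paper) makes the growth concrete:

```cpp
// Number of direct links needed for full point-to-point connectivity
// among `users` participants: one link per pair, i.e. n(n-1)/2.
int pointToPointLinks(int users) {
    return users * (users - 1) / 2;
}
```

Four collaborators need only 6 links, but a 100-participant combat simulation would need 4950, which is why the technique does not scale.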

2.2  The need for Data Controller in Collaborative Virtual Manufacturing

In traditional CVR simulations, when a user interacts with the local environment (walking, changing the position of objects in the virtual world, triggering a machining operation, etc.), the data that describes these interactions is transferred to the neighboring VEs in each frame. In this way, all the collaborators share the same identical virtual world during the simulation. However, sharing data with all the other collaborators decreases the overall performance of VR simulations by slowing the frame update rate. In general, real-time rendering requires the generation of at least about 20 frames per second [3].

In simulations of large systems such as military combat and flight exercises, there can be a large number of participants. During the simulation, we cannot expect all the participants to be in positions where they can see or hear each other and each other's activities. From this point of view, transferring the data that describes every action to all the neighboring VEs just to keep the virtual worlds identical would not add any quality to the simulation, but it would decrease rendering speed because of the heavy data traffic. A quality CVR compiler should therefore be able to identify which data should be transmitted, when, and to which user. In this paper, we introduce a methodology that avoids frame-by-frame data transfer between all the collaborators while still obtaining a realistic CVR simulation. By using this method, data broadcasting can be delayed until it is necessary.
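Delaying a broadcast is cheap because, as Section 2.3 explains, object positions are absolute: only the latest state of each object matters, so a backlog for an unseen cell never grows beyond one entry per object. The following sketch (illustrative names and types, not the paper's code) buffers updates per cell and flushes them when a user's bounding volume finally enters the cell:

```cpp
#include <map>
#include <string>
#include <vector>

// Buffer of delayed updates, keyed by virtual cell. Because poses are
// absolute, each new update for an object simply overwrites the old
// one; intermediate states need not be replayed on arrival.
struct DelayedUpdates {
    // pending[cell][objectId] = latest serialized state of that object
    std::map<int, std::map<int, std::string>> pending;

    void buffer(int cell, int objectId, const std::string& state) {
        pending[cell][objectId] = state;  // keep only the newest state
    }

    // Called when a user's bounding volume enters `cell`: deliver the
    // backlog and clear it.
    std::vector<std::string> flush(int cell) {
        std::vector<std::string> out;
        for (const auto& kv : pending[cell]) out.push_back(kv.second);
        pending.erase(cell);
        return out;
    }
};
```

A cell that accumulated a hundred frames of motion for one object still flushes a single message, which is the bandwidth saving the data controller exploits.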

2.3  Object Data Structures in Virtual Reality

Before we present how data is shared between different VEs to sustain identical platforms on each VE, it is important to understand the relationships of Virtual Objects (VOs) with each other and with the environment. In general, all the objects in VR are attached to a main parent. This main parent has its own coordinate system and is positioned in 3D space with respect to the World Coordinate System (WCS). All other objects in the VR simulation are children of the main parent, and these objects are located in the virtual world with respect to the Parent Coordinate System (PCS). Some of them might in turn be parents of other objects. For example, a car can be attached to a main parent as a child; the car can also be considered a parent when tires are attached to it. The tires are then placed in the 3D world with respect to the Car Coordinate System (CCS), and the car is located in the 3D world with respect to the main parent's coordinate system. The object-environment relationship for the car example is illustrated in the IRIS Performer convention in Figure 1.

Figure 1 presents how objects are related to each other in VR. In the figure, PCS represents a main parent to which a car is attached. For the VR engine, the car's motion consists of translational and rotational changes relative to the main parent. In the same manner, tire motion is computed with respect to its parent, the CCS. Since the tire is a child of the car, when the car moves, the tires move along with it. VR animations are created based on this child-parent relationship: in each frame, objects are positioned in the virtual world with respect to their own and their parent's coordinate systems. The current position of a Virtual Object does not depend on its previous position. From this we can conclude that the animation of an object can be ignored until a user has visual contact with the object. All software used for creating virtual environments uses some format to describe these relationships between objects and their parents. The scenegraph is the most commonly used data structure for representing objects in a VE: each object is represented by a node in the scenegraph, and these nodes contain information such as position, rotation, and scaling to describe the object-environment relationship.
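The child-parent relationship of the car example can be sketched as a minimal scenegraph node. This is a simplified illustration (rotation omitted, names invented), not the IRIS Performer API: each node stores its position in its parent's coordinate system, and a world position is obtained by walking up the tree.

```cpp
// Minimal scenegraph node: position is stored relative to the parent's
// coordinate system; the world position is the sum of offsets along
// the chain up to the main parent (whose parent is nullptr).
struct Node {
    const Node* parent;  // nullptr for the main parent
    double x, y, z;      // position in the parent's coordinate system

    void worldPos(double& wx, double& wy, double& wz) const {
        wx = x; wy = y; wz = z;
        for (const Node* p = parent; p != nullptr; p = p->parent) {
            wx += p->x;
            wy += p->y;
            wz += p->z;
        }
    }
};
```

Moving the car node automatically moves the tire's world position, without touching the tire's own (relative) coordinates, which is exactly why sharing only the parent's move suffices (Section 3.1).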

In collaborative VR simulations, when an object's position changes relative to its own or its parent's coordinate system, and this animation is not seen by a neighboring user for the reasons given above, transferring the data that describes the animation in each frame is not required to sustain continuity of motion. Since animations in VR are created by projecting a sequence of discrete pictures on the VR displays, displaying the current position of an object in a neighboring VE becomes necessary only when the local user there is in a position to see the virtual object. One of the simplest yet most efficient ways of detecting the objects in a user's range of view is to divide the virtual environment into cells. When a user is in a particular cell, we assume all the objects in the cell are in the user's field of view; therefore, when these objects are manipulated by other, neighboring users, their movements should be transferred to this VE.
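The cell test itself can be as simple as mapping a position to a grid index. The sketch below assumes a uniform square grid; the cell size and grid width are arbitrary illustration values, not parameters from the paper.

```cpp
#include <cmath>

// Illustrative uniform-grid cell division of a virtual environment.
const double CELL_SIZE = 10.0;  // edge length of one virtual cell
const int GRID_WIDTH  = 8;      // cells per row of the grid

// Map a ground-plane position (x, z) to a virtual-cell index.
int cellIndex(double x, double z) {
    int cx = static_cast<int>(std::floor(x / CELL_SIZE));
    int cz = static_cast<int>(std::floor(z / CELL_SIZE));
    return cz * GRID_WIDTH + cx;
}

// With point-sized bounding volumes, two users require frame-by-frame
// communication only when they occupy the same cell.
bool sameCell(double x1, double z1, double x2, double z2) {
    return cellIndex(x1, z1) == cellIndex(x2, z2);
}
```

Real bounding volumes span several cells, so a full implementation would test overlap against every cell the volume touches rather than a single index.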

3.  SUPERVISORY DATA-TRAFFIC CONTROLLER

Supervisory controllers are smart information-assessment programs that usually run in the background of a system. They continuously analyze the system's inputs and outputs and make decisions as a result of their assessment. The supervisory data-traffic controller discussed in this text is designed to observe users' activities in the local VEs and the incoming information from other users in the system. Based on the nature of a signal received from outside or of an activity performed in the local VE, the supervisory data-traffic controller determines what data should be received from which neighbor, or which data should be transmitted to which neighbor. In Section 3.1, the basic sources of activity in VR simulations are presented.

3.1  Source of Data: What is to be shared among VEs

·  Avatars:

Avatars are human replicas in virtual reality. A core objective of CVR simulations is that users want to communicate with each other, and representing the different users in each VE by avatars is critical for achieving a higher level of realism in CVR. For simplicity, detailed user motions are often ignored in CVR simulations, and the only information kept about a user is his/her location and perhaps his/her rotation [7]. This is not sufficient for CVR simulations such as product design, assembly testing, and remote surgery, however, since the movement of the hands is critical in these kinds of simulations.

In many immersive environments, such as the CAVE, trackers attached to the user's hands and head track the user's movements. In his work, Luciano [8] shows that it is possible to animate human hand and upper-body motions with avatars using three trackers attached to the user (two on the hands, one on the head). Therefore, the data received from these three trackers needs to be shared with all the neighboring VEs in CVR.
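The per-frame avatar payload implied by this three-tracker setup is small. The layout below is an assumption for illustration (field names and types are invented, not taken from the paper or from any tracker API); it simply makes the bandwidth cost concrete.

```cpp
// Hypothetical per-frame avatar update for the three-tracker setup:
// position plus orientation for the head and both hands.
struct TrackerSample {
    float x, y, z;  // tracker position
    float h, p, r;  // heading, pitch, roll
};

struct AvatarUpdate {
    int userId;
    TrackerSample head, leftHand, rightHand;
};
```

Under this layout an avatar costs on the order of 76 bytes per frame, so at 20 frames per second each remote avatar consumes only a few kilobytes per second; the heavier traffic the data controller must manage comes from object updates.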

·  Object Movements:

In VR, objects can be moved in two different ways.

i.  Parent moves

ii.  Object moves

Each parent has an independent coordinate system. When a parent is rotated in 3D space, the rotation of its children is executed with respect to the parent coordinate system; similarly, under translation, child objects move along with the parent. Therefore, when a parent is moved locally in one VE, sharing the data that describes the parent's move is sufficient for replicating the same move in the neighboring VEs. Movements of objects inside the parent (e.g., tire rotation in the car) are performed with respect to the parent coordinate system; for such movements we do not consider the movement of the object with respect to the world. So the only data that needs to be shared is that which defines the child's move in the parent coordinate system.

IRIS Performer uses objects' final coordinates and rotations to locate them in the virtual world. Therefore, to determine the current location of an object in each frame, the previous location of the object need not be known. We take advantage of this transformation technique in the data-sharing format described later in the text. IRIS Performer uses the following format to position objects in the virtual space.

setRot(α, β, γ)

setTrans(X, Y, Z)

where α, β, and γ are the current rotations about the X, Y, and Z axes, respectively, and X, Y, and Z are the current position of the object in the Cartesian coordinate system.
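Because setRot and setTrans take absolute values, a pose message needs only the object's current rotation and translation; the receiver keeps no history. The sketch below illustrates this property with invented names (it is not the Performer API): applying the newest message is all that is ever required, so dropped or delayed intermediate messages do no harm.

```cpp
// An absolute pose: everything a neighbor VE needs to place the object.
struct Pose {
    double a, b, g;  // rotations about the X, Y, Z axes (alpha, beta, gamma)
    double x, y, z;  // translation
};

// Applying a received pose ignores the current state entirely: the
// absolute values fully determine the object's placement, which is
// what lets the data controller delay or skip intermediate updates.
Pose applyLatest(const Pose& current, const Pose& received) {
    (void)current;  // previous state is irrelevant for absolute poses
    return received;
}
```

This idempotence is the property the delayed-distribution scheme of Section 2.2 relies on: a cell's backlog can always be collapsed to one message per object.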