Everything is Alive Project
Craig Thompson
Computer Science and Computer Engineering Department, University of Arkansas
The Everything is Alive (EiA) project at the University of Arkansas is using virtual world technology to explore pervasive computing in a future smart world where every object can have identity and can communicate with humans and other smart objects. In that sense, all objects are agents. Furthermore, objects can have metadata, which constitutes an open-ended ontology of things (knowledge representation for smart objects, the concrete nouns). We are also working on planning that results in workflows describing the behavioral aspects of situations involving objects (the action verbs). We observe that the real world can be viewed as a very high-definition 3D virtual world, so many of our results apply regardless of whether we are connected to the real world or to any of various virtual worlds. The EiA project website is http://vw.ddns.uark.edu.
FVWC11 Submissions
Ontology Service for Virtual Worlds
Smart Objects for Virtual Worlds
Workflow for Virtual Worlds
Talking to Things
Stutterbots
Other EiA projects you may be interested in
Not all of our virtual world projects are AI-related, so we submitted just some of the ones that are. In addition to the project descriptions submitted, we have also developed:
· search engine for virtual worlds - we developed a search engine in which avatarbots act as search spiders to visit regions, interact with objects, and record everything they see. It operates in Second Life and OpenSim. Try it out at http://csce.uark.edu/~jeno/vwsearch. A dissertation describing the work is available at http://vw.ddns.uark.edu/content/2010-12--Dissertation--Josh-Eno--Final-Draft.pdf. The contact is Josh Eno.
· a robot assembly language in Second Life - robots can be tasked to go somewhere (they find a path using A* route planning) and can pick up and carry things; a minimal A* sketch appears after this list. This was used in one of our first workflow demos. The contact is Nick Farrer and a reference is here: http://vw.ddns.uark.edu/content/2009-02--SL-Robot-Command-Language-v0--Nicholas-Farrer.doc. You can see and command our robots on University of Arkansas island in Second Life.
· mirror world demos - a person moves an item of apparel from one rack to another in a department store (simulated at our University of Arkansas RFID Center, funded by Wal-Mart) and a Real Time Locating System captures the change, which is translated into a virtual world model of the store where the corresponding object is moved accordingly. See the video: http://csce.uark.edu/~cwt/COURSES/2010-08--CSCE-4013--Virtual%20Worlds/ASSIGNMENTS/G--Mirror-world--RTLS%20Item%20Movement.mp4
· Immortal Avatar - we capture real life experiences of a person and use chatbot technology to deliver those experiences (even after the person dies), providing an interactive form of biography. The chatbot is made to look like the person.
· Games in Second Life - a term paper analyzes what games can be implemented effectively in Second Life. See http://csce.uark.edu/~cwt/COURSES/2010-08--CSCE-4013--Virtual%20Worlds/ASSIGNMENTS/SUMMARY--VW%20TERM%20PROJECTS--Fall%202010.htm#_Toc279933829
· Other projects are also described on our EiA website: http://VW.ddns.uark.edu. Recent class projects are described here: http://csce.uark.edu/~cwt/COURSES/2010-08--CSCE-4013--Virtual%20Worlds/ASSIGNMENTS/SUMMARY--VW%20TERM%20PROJECTS--Fall%202010.htm.
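As a rough illustration of the route planning the robots use, here is a minimal A* sketch in Python over a grid map with blocked cells; the grid encoding and the function name astar are illustrative assumptions, not part of the actual robot command language.
# Minimal A* route planning sketch on a grid (0 = open cell, 1 = blocked).
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, cost_so_far = {}, {start: 0}
    while frontier:
        _, g, current, parent = heapq.heappop(frontier)
        if current in came_from:
            continue                          # already expanded
        came_from[current] = parent
        if current == goal:                   # reconstruct the path
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            r, c = nxt
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                new_g = g + 1
                if new_g < cost_so_far.get(nxt, float("inf")):
                    cost_so_far[nxt] = new_g
                    heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, current))
    return None

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))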
NOTE: you can consider our project's submissions individually or as a team project.
Ontology Service for Virtual Worlds
Craig Thompson, Andrew Tackett, Randall Tolleson, Quincy Ward
Abstract
Like the semantic web, semantic worlds (real and virtual) involve annotating things with metadata (type, supertype, API, attributes, cost, owner, location, …). Our project developed two ontology services: one takes Second Life labels, looks them up in the WordNet ontology, and then overlays the objects with metadata from WordNet; the other is an annotation service that relies on crowdsourcing so that any user can add metadata to any object. The ontology service can operate agnostic to whether it is connected to the real world or to any particular virtual world. Our implementation operates in Second Life; with RFID and smart phones we could do the same in the real world. This ontology service is a step toward knowledge overlays for virtual worlds, a currently missing component that could enable a wide variety of location- and context-aware applications.
Problem
Humans can tell the difference between a door and a castle. But real world objects are not labeled with explicit identity, type, location, or ownership. 3D virtual worlds may provide explicit (transient) identity, location, and ownership, but labels for naming objects are often not used. To build a smart, semantic world where every object is labeled and associated with knowledge, we need some way of associating object types, descriptions, APIs, and knowledge with *things*.
Objective
This project demonstrated an ontology service that associates information with virtual world objects.
Description
Two ontology services are demonstrated. Both use a common interface in Second Life: a HUD-like remote control carrying the ontology service script, worn by an avatar (at present it appears as a box on the avatar's head). The HUD displays augmented reality labels above nearby objects. The information displayed comes from either of two ontology sources. In the first case, the label on an object is used to access the concept-level WordNet ontology and retrieve type and other information. In the second case, if an object's identity has already been associated with information by a previous annotator, that information is retrieved; otherwise the HUD wearer is given the opportunity to annotate the object, and that annotation is displayed.
This is just a first step toward making many different kinds of information available in association with objects. For instance, supertypes, APIs, current status, repair manuals, calorie counts, etc. can be associated with things, and a more sophisticated object inspector interface could be used to retrieve this additional information.
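As a rough illustration of the WordNet half of the service, the Python sketch below looks up an object's label and returns type, supertype, and definition metadata. The function name lookup_label and the use of NLTK's WordNet corpus are our illustrative assumptions; the in-world HUD, the HTTP plumbing, and the crowd-sourced annotation store are not shown.
# Minimal sketch of the WordNet lookup behind the ontology service.
# Assumes the NLTK WordNet corpus is installed: nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def lookup_label(label):
    """Return type metadata for an object label, or None if WordNet has no entry."""
    synsets = wn.synsets(label, pos=wn.NOUN)
    if not synsets:
        return None                          # fall back to the annotation service
    synset = synsets[0]                      # take the most common sense
    hypernyms = synset.hypernyms()           # supertypes, e.g. door -> movable_barrier
    return {
        "label": label,
        "type": synset.name(),               # e.g. 'door.n.01'
        "supertype": hypernyms[0].name() if hypernyms else None,
        "definition": synset.definition(),
    }

if __name__ == "__main__":
    print(lookup_label("door"))
    print(lookup_label("castle"))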
Contribution to AI and Virtual Worlds
The ontology service demonstrates the need to add a critical missing ingredient to virtual worlds: a knowledge overlay (plugin). As such, this project extends the so-called semantic web to semantic worlds. The ontology service is agnostic as to whether it is communicating with a virtual or real world.
By coupling the ontology service with our search engine work, we are exploring ways to use the labels that do exist on some things (such as parcels or objects) to label, with some confidence, many unlabeled things, and to group terms to locate kinds of places (to find restaurants, classrooms, …). If scaled up, this could produce product ontologies for millions of real world objects (and we have connections to Wal-Mart and Home Depot that could help us do this!).
Project Documentation and Demo Availability
A paper, presentation and code describing the project including screenshots is available here: http://csce.uark.edu/~cwt/COURSES/2010-08--CSCE-4013--Virtual%20Worlds/ASSIGNMENTS/SUMMARY--VW%20TERM%20PROJECTS--Fall%202010.htm#_Toc279933834
The demo runs on University of Arkansas island in Second Life. At present, the service requires a remote database and the remote WordNet service, and it is available by contacting the submitters.
Smart Objects for Virtual Worlds
Craig Thompson, Akihiro Eguchi
Abstract
We identify protocols that can be added to a real or virtual world object that make it into a smart(er) object. We describe how to do this in the real world using technologies like RFID and smart phones but demonstrate this using 3D virtual worlds where an avatar passing by smart objects can use a universal remote (which could be a smart phone in the real world) to read an object’s API and control the object. We demonstrate this in a healthcare setting in Second Life.
Problem
Currently in the real world, most objects are passive and not very smart. But in 3D virtual worlds, objects can have associated behaviors (scripts). What is it about objects that makes them smart, and how can we make real and virtual worlds where objects can get smarter?
Objective
Identify protocols that make an object into a smart(er) object. Use 3D virtual worlds to demonstrate and explore such protocols. Translate these protocols to the real world.
Description
Back in the 1990s, a puzzle for the AI agent community was what distinguished an agent as an intelligent or mobile agent. Since 3D virtual worlds can be viewed as composed of objects (agents), the same puzzle occurs. Our resolution is to identify a family of protocols that, if followed, enable us to classify objects as more or less smart. Example protocols are explicit identity, message-based communication, API, security, associated 3D models and ontologies, and others.
To demonstrate these ideas, we developed a demonstration in Second Life on University of Arkansas island of a baby mannequin that nurses can use to learn how to help infants needing special care. Such babies stay in warming beds with several smart features. We visited our School of Nursing, modeled an infant bed, and then attached scripts to operate the bed. By itself, that was not our main contribution; it just set the stage for understanding smart objects.
In order to understand how to make “smart objects,” we associated APIs with objects and then added the ability to discover the APIs of nearby smart objects. When a (virtual) remote control device is near objects, it “discovers” their APIs and imports them to a display so that each object can be operated by remote control. This works with all objects that follow the API discovery protocol. We also identified several other protocols that make an object smart, or rather smarter. A companion submission describes one of these, an ontology service that associates discoverable knowledge with things. Another companion submission provides a way for humans to communicate with smart objects using a variant of menu-based natural language interfaces.
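To make the discovery protocol concrete, here is a hedged Python sketch with invented names (SmartObject, UniversalRemote, describe_api); in Second Life the same information is exchanged by scripted messages, which are not shown here.
# Sketch of the API discovery protocol: objects advertise their operations,
# and a universal remote imports and invokes them by name.
class SmartObject:
    """A smart(er) object: explicit identity plus a discoverable, message-based API."""
    def __init__(self, object_id, label, api):
        self.object_id = object_id
        self.label = label
        self._api = api          # operation name -> callable

    def describe_api(self):
        """Discovery step: return the operations this object understands."""
        return {"id": self.object_id, "label": self.label, "operations": list(self._api)}

    def invoke(self, operation, *args):
        return self._api[operation](*args)

class UniversalRemote:
    """The avatar's remote control: discovers nearby objects and drives them by name."""
    def __init__(self, nearby_objects):
        self.catalog = {obj.object_id: obj for obj in nearby_objects}

    def discover(self):
        return [obj.describe_api() for obj in self.catalog.values()]

    def control(self, object_id, operation, *args):
        return self.catalog[object_id].invoke(operation, *args)

if __name__ == "__main__":
    bed = SmartObject("bed-01", "infant warming bed", {
        "set_temperature": lambda celsius: f"warmer set to {celsius} C",
        "raise_rail": lambda: "rail raised",
    })
    remote = UniversalRemote([bed])
    print(remote.discover())
    print(remote.control("bed-01", "set_temperature", 36.5))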
Contribution to AI and Virtual Worlds
We humans are pretty self-centered. We think being smart distinguishes us from reactive objects like thermostats and pets, and from passive objects like chairs. But this is about to change as we associate knowledge, action, and rules with objects in the world around us. We can learn how to do this using 3D virtual worlds, and that in turn helps us understand how to translate these capabilities to the real world: by putting RFID tags (or barcodes) on objects, we can use smart phones to identify objects and associate information with them, and that information can include the means to control the objects as well as security APIs to ensure that only authorized personnel can fire a gun or turn up the thermostat.
Project Documentation and Demo Availability
The project is described in detail in a paper written for the X10 Workshop on Extensible Virtual Worlds: see http://vw.ddns.uark.edu/X10/content/CAPABILITIES--Smart-Objects-in-a-Virtual-World--Eguchi-Thompson.pdf
A YouTube video explains the ideas - see “Healthcare Remote Control and Smart Objects” demo on our project website: http://vw.ddns.uark.edu/index.php?page=media
The smart objects demo requires some setup on University of Arkansas island in Second Life. Live demonstrations are available upon request from Akihiro Eguchi.
Workflow for Virtual Worlds
Craig Thompson, Keith Perkins
Abstract
Second Life supports scripts associated with objects, but a higher level language is needed to sequence a collection of avatars if one wants to build simulations where avatarbots (avatars driven by programs) work together to perform a heart operation or produce a play (two of our demonstrations). We have developed a workflow language layer that contains primitive operations and sequencing operations so that we can describe complex scenarios. Such a workflow can be viewed as a plan, which could be the result of a planning layer (which we are now working on).
Problem
In 3D virtual worlds, objects can have associated scripts and avatars can have associated animations, but it is very difficult to build complex behavioral simulations using collections of avatarbots (avatars driven by programs rather than humans). Higher level controls are needed to script complex behavioral simulations like a medical operation or a play.
Objective
Develop a workflow layer for Second Life.
Description
Workflow is a technology developed in the 1990s that provides higher level representations for collections of behaviors. The ingredients are roles, tasks, and the arguments (objects) that participate, and the workflow recipe provides a sequencing of the behaviors.
Our basic contribution so far is to explore how to add a workflow layer to Second Life. The workflow layer is demonstrated as a complex simulation with a collection of avatarbots playing different roles and sequencing through several behaviors. An example is the “Catheterization Lab Demo” on our demo page: http://vw.ddns.uark.edu/index.php?page=media.
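The sketch below gives a minimal Python picture of what such a workflow layer looks like; the names (Task, Workflow, the Cath Lab steps) are illustrative stand-ins, not our actual workflow language.
# Minimal workflow layer: primitive tasks bound to roles, plus a sequencing operator.
class Task:
    def __init__(self, role, action, *args):
        self.role, self.action, self.args = role, action, args

    def run(self, cast):
        """Dispatch the primitive action to whichever avatarbot plays the role."""
        avatarbot = cast[self.role]
        print(f"{avatarbot} ({self.role}): {self.action} {' '.join(map(str, self.args))}")

class Workflow:
    """A sequencing operator over tasks; sub-workflows could nest the same way."""
    def __init__(self, *steps):
        self.steps = steps

    def run(self, cast):
        for step in self.steps:
            step.run(cast)

if __name__ == "__main__":
    cath_lab = Workflow(
        Task("nurse", "prep", "patient"),
        Task("surgeon", "insert", "catheter"),
        Task("surgeon", "inflate", "balloon"),
        Task("nurse", "monitor", "vitals"),
    )
    cath_lab.run({"surgeon": "AvatarBot-1", "nurse": "AvatarBot-2"})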
Contribution to AI and Virtual Worlds
We have explored or are exploring several extensions to the basic workflow project. Workflow can be viewed as leaf-level behavior, the result of AI planning. We developed a Prolog program that uses rules to describe the tasks of the Cath Lab operation so that we could associate a “reason” with each workflow step. We also logged the workflow steps in a remote DBMS so that we could analyze an operation to provide “after action” reporting. An extension we are considering is how to add exceptions to our workflows so that if something goes wrong (e.g., the surgeon drops an instrument), a corrective action can compensate.
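For the exception extension, the idea is simply that a workflow step can carry a compensating step that runs if the original step fails; the following tiny, self-contained Python illustration uses invented names and plain callables as stand-ins for workflow tasks.
# Illustrative sketch of exception handling with compensation in a workflow step.
def run_with_compensation(step, compensation):
    """step and compensation are plain callables standing in for workflow tasks."""
    try:
        step()
    except Exception as err:
        print(f"exception during step: {err}")
        compensation()   # the corrective action compensates for the failed step

if __name__ == "__main__":
    def drop_instrument():
        raise RuntimeError("surgeon dropped the catheter")

    run_with_compensation(drop_instrument,
                          lambda: print("nurse fetches a sterile replacement"))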
Project Documentation and Demo Availability
The project is described in more detail in a paper written for the X10 Workshop on Extensible Virtual Worlds: see http://vw.ddns.uark.edu/X10/content/CAPABILITIES--Workflow-in-a-Virtual-World--Perkins-Thompson.pdf. The paper describes three workflow simulations (heart operation, robot nurses, supply chain) and gives URLs for YouTube videos that demonstrate the workflow capabilities. A fourth YouTube video demonstrates a workflow for a scene from Romeo and Juliet: http://vw.ddns.uark.edu/index.php?page=media
The workflow demos require some setup on University of Arkansas island in Second Life. Live demonstrations of the workflow demos are available upon request from Keith Perkins.
Talking to Things
Craig Thompson, Tanmaya Kumar
Abstract
Humans do not know what to say to objects. Objects like thermostats might understand queries to find out the temperature or commands to raise or lower it. They might recall the history of past temperatures or know how to schedule the temperature for different times of day during the week. We describe an expansion of the Second Life pie menu that uses menu-based natural language interfaces: it reads grammars associated with things and translates the resulting commands to each object's API, so that users can command, control, and query those objects using domain-restricted natural language.
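The Python sketch below suggests the flavor of such an interface for a thermostat; the grammar format, the Thermostat class, and the command names are invented for illustration and are not our actual implementation.
# Menu-based NL sketch: a grammar of phrases the thing understands, each mapped
# to a call on the thing's API.
GRAMMAR = {
    "what is the temperature": ("get_temperature", []),
    "raise the temperature": ("adjust", [+1]),
    "lower the temperature": ("adjust", [-1]),
}

class Thermostat:
    """Stand-in smart object; its methods are the API the grammar maps onto."""
    def __init__(self):
        self.setting = 20

    def get_temperature(self):
        return f"{self.setting} C"

    def adjust(self, delta):
        self.setting += delta
        return f"setting is now {self.setting} C"

def menu_phrases(grammar):
    """The menu only ever offers phrases the thing understands."""
    return sorted(grammar)

def interpret(thing, grammar, phrase):
    """Translate a chosen phrase into a call on the thing's API."""
    method, args = grammar[phrase]
    return getattr(thing, method)(*args)

if __name__ == "__main__":
    thermostat = Thermostat()
    for i, phrase in enumerate(menu_phrases(GRAMMAR), 1):
        print(f"{i}. {phrase}")
    print(interpret(thermostat, GRAMMAR, "raise the temperature"))
    print(interpret(thermostat, GRAMMAR, "what is the temperature"))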
Problem
In an increasingly smart world filled with increasingly smart objects, how can we humans learn to talk to things around us? We’d like to limit our conversations to topics the things know about so that we can command, control, and query any smart things including things we have never seen before.