CS560 Proposal – Pervasive Framework

Introduction

For decades, computing has been centered on machines. We work to understand the hardware they use and, to get even the smallest thing done, we must learn the languages they speak and the software they run. In effect, we have been serving them: in trying to make our lives easier, we have often made them harder by adapting to the machines' ways. This is where Pervasive Computing comes into the picture; it refocuses computation on humans in order to make their lives easier. Computing is to be available everywhere, like batteries and power sockets, or the oxygen in the air we breathe. It will enter our lives so that, rather than us working for the machines, they will work for us. We would simply be the subjects, while the computers and intelligent devices in our environment do the understanding. This new environment would let us carry generic, configurable devices and would bring computation to us. It would adapt to our multiple identities in different environments (e.g., friend in a club, manager at the office, patient at the hospital, or customer in a shop) and cater to our needs and preferences. According to researchers at the MIT AI Lab, "We won't have to type, click, or learn new computer jargon. Instead, we'll communicate naturally, using speech and gestures that describe our intent, and leave it to the computer to carry out our will." This project at MIT is called Project Oxygen; the name comes from the fact that the pervasive computing environment is going to be all around us the way oxygen is.

This sounds very appealing until we consider the practical implementation of such an environment. The issues involved can be divided into three major categories corresponding to the technologies that must come together to bring this system to life: Distributed Computing, Mobile Computing, and Pervasive Computing. Concentrating only on the issues of Pervasive Computing, these can be divided roughly into:

  1. Smart Spaces
  2. Invisibility
  3. Localized Stability
  4. Privacy and Security
  5. Implementation issues:
     • User Intent
     • Cyber Foraging
     • Client Thickness
     • Context Awareness

This project cuts the overall problem space down to isolated autonomous systems. A huge amount of information is available on the Web that can be organized and put to use. The project proposes assigning a unique identity to every human and device, and using the Semantic Web to share information over the Web while protecting the privacy of the individual. To begin with, it suggests that every autonomous system (AS), and every component belonging to it down to the finest granularity (e.g., a sensor device), will have a unique URI as its identification. Associated with each URI is a personal profile, a collection of all the information about that URI, such as its security policies. Personal servers are the devices that serve the profile for a URI. Since the profile establishes the identity of a device or a human, it makes use of biometric authentication: the profile supplies the user authentication inputs and is authenticated either locally or globally. We also introduce the concept of Pervasive Services, which are further classified into Atomic Services and Semantic Services. Atomic services are the basic sensors and devices that perform the final, end-level task and return results, while semantic services interpret the semantics of a request and drive the atomic services together to provide services within the Autonomous System.
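As a rough illustration of what such a URI-keyed profile might look like, the sketch below models a personal profile as a simple Python data structure. The class name (PersonalProfile), the field names, and the example URIs are our own placeholders, not part of any standard; it is a minimal sketch, assuming an in-memory personal server.

```python
# A minimal sketch of a URI-keyed personal profile, assuming a simple
# in-memory representation; all names here are illustrative placeholders.

from dataclasses import dataclass, field


@dataclass
class PersonalProfile:
    uri: str                                  # unique identifier of the human or device
    owner: str                                # the AS this entity belongs to
    security_policies: dict = field(default_factory=dict)
    biometric_template: bytes = b""           # reference data for biometric authentication

    def authenticate(self, biometric_sample: bytes) -> bool:
        """Toy local check; a real system would use a proper biometric matcher."""
        return biometric_sample == self.biometric_template


# A personal server could simply map URIs to profiles.
personal_server = {
    "urn:as:hospital-1:sensor:bp-monitor-12": PersonalProfile(
        uri="urn:as:hospital-1:sensor:bp-monitor-12",
        owner="urn:as:hospital-1",
        security_policies={"read": ["doctor", "nurse"]},
    )
}
```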

Project Goal

The overall goal is to build a generic, adaptable framework for pervasive computing that facilitates the integration of various devices into a smart environment that proactively responds to user needs. The idea is to integrate a user's context in an environment with the workflow associated with a service provided to that user. Furthermore, we wish to compose such workflows dynamically and optimally from a given set of underlying services. We also wish to achieve a truly global pervasive environment by separating the various pervasive environments into Autonomous Systems that are independent, have their own policies, and can interact with other Autonomous Systems to share information about users. In a hospital environment, such an Autonomous System can be used to schedule resources, provide user-specific services, and expedite the overall functioning of the hospital.

Thus, our overall goal is to model the following Autonomous System.

Figure 1: Model of a pervasive environment Autonomous System (AS).

Figure 2: Global Pervasive environment

Specific Objectives

Our specific objectives are the following:

  • Modeling the User Context: In any pervasive environment the user context plays a critical role. The context contains attributes, such as the user's location, that can affect the workflow. Thus, to build a proactive system that responds to a user's needs, we will have to build a very effective user model and context model. The user and context models could include concepts specific to the hospital domain.
  • Modeling AS Policies: Each AS head will have to maintain a set of policies or rules to be followed within the AS, for instance restricting the use of a device to a certain set of people. These serve as restrictions on the set of services that can be offered to a user.
  • Creating the User Model: This will help in understanding the needs of the various types of users of the system and will often define the set of services that should be provided to each user. An accurate user model can help in providing services customized at a per-user level. We could use domain-specific information about a hospital, for instance about doctors and nurses, to refine the user model.
  • Service Markup: Our framework abstracts every device as a service. It is essential to identify candidate devices and create appropriate service wrappers. The services can be very specific, for instance a particular HP printer (atomic services), or abstract, for instance a printing service (semantic services). The semantic services will need an inference engine to determine which specific services to use. These services will also have to use the user context to determine the optimal workflow (see the sketch after this list).
  • Design of the Service Coordinator: The service coordinator will coordinate the interaction with the services. It will parse the high-level job assigned to it and determine the services to be invoked. It will also ensure proper chaining of inputs and outputs between services.
  • Design of the Event Coordinator: The framework is event based, and the workflow is established by raising various events. The event coordinator will handle all the events generated by the framework and will interact with the service coordinator to invoke the various services.
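To make the service markup objective more concrete, the sketch below shows one possible way to model atomic and semantic services in Python. The class names (AtomicService, SemanticService) and the context-based selection logic are illustrative assumptions of ours, not a commitment to a particular design or to the DAML-S markup we may eventually use.

```python
# Illustrative sketch: atomic vs. semantic services, assuming services are
# plain Python objects and the user context is a dictionary.

class AtomicService:
    """Wraps a single concrete device, e.g. one specific printer."""

    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities    # e.g. {"type": "print", "location": "ward-2"}

    def invoke(self, request):
        # A real wrapper would drive the actual device here.
        return f"{self.name} handled {request['job']}"


class SemanticService:
    """Abstract service that picks an underlying atomic service using context."""

    def __init__(self, service_type, candidates):
        self.service_type = service_type
        self.candidates = candidates        # list of AtomicService

    def invoke(self, request, context):
        # Trivial 'inference': prefer a device located near the user.
        for svc in self.candidates:
            if svc.capabilities.get("location") == context.get("location"):
                return svc.invoke(request)
        return self.candidates[0].invoke(request)   # fall back to any candidate


laser = AtomicService("hp-laserjet-3", {"type": "print", "location": "ward-2"})
line = AtomicService("line-printer-1", {"type": "print", "location": "front-desk"})
printing = SemanticService("print", [laser, line])

print(printing.invoke({"job": "discharge-summary.pdf"}, {"location": "ward-2"}))
```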

In the hospital scenario, the various devices found in a typical hospital will have to be studied and their service wrappers designed. The hospital employee hierarchy will have to be studied and incorporated into the AS policies model. In case of an emergency, the overall system should act as a decision support system; the emergency can be thought of as a special event for the pervasive framework to act on.

Related Work

This section presents research being done in the field of pervasive computing. The three major research projects considered here are Project Cobra by the ebiquity group at UMBC [1], Project Oxygen at MIT [2], and Project Aura at CMU [3].

The Cobra project, one of the more recent approaches in this field, is agent based: pervasive computing is to be achieved using agents that model humans, devices, and other concepts. The original idea [4][5] was to provide an autonomous agent called a broker that facilitates the interactions between these agents, giving different agents a framework in which to share contextual information and interact according to a common understanding of these contexts. The work in [6] describes how pervasive computing can be achieved using agent teamwork and specifically considers the challenges of agent teamwork in pervasive computing. One important factor in pervasive computing using agent teamwork is the sharing of context information, where context can be defined as a collection of information that characterizes the situation of a person or a computing entity [7]. Central to this architecture is the broker agent that maintains the shared model of the context for all the computing entities in the space and enforces the privacy policies defined by users and devices [4][5]. This approach provides better support for knowledge sharing and context reasoning using a common ontology expressed in explicit semantics [4]. Furthermore, it explores the use of Semantic Web technologies (i.e., languages, logic inference, and programming tools) for supporting context-aware systems in smart spaces [8]. This agent-based approach requires advance knowledge of agent modeling, which could be a big disadvantage for practical, large-scale application development and deployment.

Another significant approach in this field is the Oxygen project carried out at MIT. Oxygen aims to provide user interaction through perceptual technologies such as speech and vision, along with individualized knowledge access and collaboration technologies that perform a wide variety of tasks. Oxygen is based on computational devices called Enviro21s (E21s), embedded in homes, offices, and cars, that sense and affect the user's immediate environment; handheld devices called Handy21s (H21s), which let users communicate and compute no matter where they are; dynamic, self-configuring networks (N21s) that help user machines locate each other as well as the people, services, and resources they want to reach; and software (O2S) that adapts to changes in the environment or in user requirements to help users do what they want, when they want to do it [2]. More details on this approach can be found in the Oxygen brochure [9]. The approach depends on the H21 handheld device being available everywhere, like batteries or power outlets. The H21, a powerful machine combining the functionality of a cell phone, PDA, camera, and television, performing a range of functions and supporting many communication protocols, is still a long way from becoming widely available. This is the biggest flaw in the overall approach.

Another approach is the Aura project [3] at CMU. This approach [10][11] proposes an architectural framework that addresses two of the hard problems in developing software systems for pervasive computing. First, it focuses on allowing a user to preserve continuity in his or her work when moving between different environments. The key advantage of this framework over more traditional approaches is that it allows the system to tailor the user's task to the resources in the environment. Second, it addresses the problem of adapting the ongoing computation of a particular environment in the presence of dynamic resource variability: as resources come and go, the computations can adapt appropriately. The key ingredients of the architectural framework are explicit representations of user tasks as collections of services, context observation that allows the task to be configured in a way appropriate to the environment, and environment management that assists with resource monitoring and adaptation. Each of these capabilities is encapsulated in a component of the architectural framework (the task manager, environment manager, and context observer, respectively). The services needed to support a user's task are carried out by a set of components termed service suppliers, which are typically implemented as wrappers around more traditional applications and services. Finally, explicit connectors that hide the details of distribution and heterogeneity of service suppliers carry out the interactions between the parts.

Robert Grimm et al. [12] describe a system architecture for pervasive computing called one.world, which provides an integrated and comprehensive framework for building pervasive applications. Existing distributed environments have the following drawbacks:

  • They seek to hide distribution. This makes applications less adaptable, since changes to the environment are hidden.
  • They compose applications through programmatic interfaces. This forces a very tight coupling between application components, so it becomes quite difficult to add new behavior.
  • Distributed applications encapsulate both data and functions into objects; this complicates the sharing, searching, and filtering of data, which are essential in a distributed application.

The World Wide Web is an architecture that is not built like a traditional distributed system, but it still has a few drawbacks, namely:

  • It requires connected operation.
  • It has no provision for adapting to changes, nor does it make it easy to build adaptive applications.

The one.world Architecture

In this architecture, each device runs an instance of one.world. Applications store and communicate data as tuples, and the applications themselves are composed of components. The tuples are self-describing: an application can dynamically determine a tuple's fields and types. Components implement functionality by importing and exporting event handlers.

Environments provide structure and control. They are containers for tuples, components and other environments. Leases control access to both local and remote resources.
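As a rough, illustrative rendering of these ideas (not the actual one.world API), the fragment below sketches how a self-describing tuple and a component that exports an event handler might look in Python; all names here are placeholders of our own.

```python
# Illustrative sketch only; this is not the one.world API, just a toy
# Python rendering of self-describing tuples and event-handler components.

class Tuple(dict):
    """A self-describing record: fields and their types can be inspected at runtime."""

    def describe(self):
        return {name: type(value).__name__ for name, value in self.items()}


class Component:
    """Implements functionality by exporting event handlers under symbolic names."""

    def __init__(self):
        self.exported = {}

    def export(self, event_name, handler):
        self.exported[event_name] = handler

    def handle(self, event_name, data):
        return self.exported[event_name](data)


reading = Tuple(sensor="bp-monitor-12", systolic=128, diastolic=84)
print(reading.describe())   # {'sensor': 'str', 'systolic': 'int', 'diastolic': 'int'}

logger = Component()
logger.export("sensor.reading", lambda t: print("logged:", dict(t)))
logger.handle("sensor.reading", reading)
```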

One.world also provides a set of services that can serve as building blocks for pervasive applications. They are:

  • Migration: provides the ability to move or copy an environment and its contents to another node.
  • Remote Event Passing: provides the ability to send events to remote receivers.
  • Replication: makes tuples accessible on several nodes.
  • Checkpointing: captures the current state of an environment and saves it in a tuple.

What we learnt from this paper

  • Data should be abstracted and maintained as a separate entity. This led us to move the abstract data tree for the user profile from the server end to a separate shared entity.
  • The paper influenced our decision to try to implement the entire application as components.

Proposed Solution

The proposed solution seeks to create a generic framework for a pervasive environment that can handle services and devices that are plugged into it, and can respond to external events that are detected and recognized by the framework.

In the Medical Emergency Framework, events and services are primarily defined in the medical domain. This is in addition to the services and events that are generic to the framework, such as output services (print, display, project) and exception events (raised when a device does not service a request). The framework catches an event, looks up the event definition in the knowledge base, and invokes the most appropriate of the available services. The service is invoked, again, by looking up the appropriate invocation procedure and parameters in the knowledge base.
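A minimal sketch of this event-to-service lookup follows, assuming the knowledge base is a simple dictionary keyed by event type; the names (dispatch, knowledge_base, PrintingService) and the event types shown are illustrative assumptions, not a fixed design.

```python
# Minimal sketch of event dispatch via a knowledge-base lookup. The knowledge
# base here is just a dictionary; a real system would use a semantic store
# with richer invocation descriptions.

knowledge_base = {
    "output.print": {
        "service": "printing-service",
        "procedure": "submit_job",
        "parameters": ["document", "location"],
    },
    "medical.cardiac-arrest": {
        "service": "emergency-response-service",
        "procedure": "page_team",
        "parameters": ["patient_id", "ward"],
    },
}

services = {}   # service name -> object exposing the named procedures


def dispatch(event_type, event_data):
    """Look up the event definition and invoke the mapped service."""
    entry = knowledge_base.get(event_type)
    if entry is None:
        raise LookupError(f"unrecognized event: {event_type}")
    service = services[entry["service"]]
    args = {p: event_data[p] for p in entry["parameters"]}
    return getattr(service, entry["procedure"])(**args)


class PrintingService:
    def submit_job(self, document, location):
        print(f"printing {document} at {location}")


services["printing-service"] = PrintingService()
dispatch("output.print", {"document": "report.pdf", "location": "ward-2"})
```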

A service may or may not be abstract. An abstract service is a wrapper over a collaboration of one or more abstract or physical services; the wrapper provides an added decision-making layer over those services. The abstract service checks some parameters of the invocation and, depending on the context, directs it to the most appropriate services in the subordinate layer. Abstract services do not communicate directly with any other service. Instead, they throw an event that represents the present status of the request, after it has been intelligently redirected to the services or devices that can process it most effectively (for example, the print service may redirect a print request to a line printer if the laser printer is unavailable). The controller in the framework (we refer to it as Mr. X in our discussions) catches the event and resolves it by referring to the knowledge base, thereafter invoking the service that most accurately maps to the request.
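As one hedged illustration of such a wrapper, the fragment below sketches an abstract print service that checks device availability, redirects to a fallback printer, and emits a status event rather than invoking the device directly. The event name ("print.redirected") and the emit_event callback are assumptions of ours.

```python
# Sketch of an abstract print service: it never drives a device directly,
# it only decides where the request should go and emits a status event.
# The event name and emit_event callback are illustrative assumptions.

class AbstractPrintService:
    def __init__(self, printers, emit_event):
        self.printers = printers        # e.g. {"laser-printer": False, "line-printer": True}
        self.emit_event = emit_event    # callback into the framework (Mr. X)

    def handle(self, request):
        # Prefer the laser printer, fall back to the line printer if unavailable.
        target = "laser-printer" if self.printers.get("laser-printer") else "line-printer"
        self.emit_event({
            "type": "print.redirected",
            "target": target,
            "job": request["job"],
        })


service = AbstractPrintService({"laser-printer": False, "line-printer": True}, print)
service.handle({"job": "lab-results.pdf"})   # emits a print.redirected event
```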

This sort of architecture implies dynamic linking between services. Since services do not communicate with each other directly, each service only needs enough decision-making ability to process the requests that map to it. A service simply processes the request and outputs an event afterwards (processing means mapping requests to events in abstract services, and mapping requests to output within physical devices). Mr. X is the entity that catches events and dynamically creates the links between different services. When exception events are thrown, corresponding exception services may be called. If an event is not recognized by the framework, it may report a failure or handle the event in whatever manner it has been coded to do.
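The Mr. X control loop might then look something like the following sketch, where events are pulled from a queue and resolved against the knowledge base, with unrecognized events reported as failures; the queue-based design and the handler conventions are our assumptions, shown only to illustrate the dynamic chaining of services.

```python
# Sketch of the Mr. X control loop: catch events, resolve them through the
# knowledge base, and link services dynamically. The queue-based design and
# handler conventions are assumptions made for illustration.

from queue import Queue, Empty

event_queue = Queue()


def mr_x_loop(knowledge_base, services):
    while True:
        try:
            event = event_queue.get(timeout=1.0)
        except Empty:
            break                                   # idle; a real loop would keep waiting
        entry = knowledge_base.get(event["type"])
        if entry is None:
            print("failure: unrecognized event", event["type"])
            continue
        handler = services[entry["service"]]
        result_event = handler(event)               # services return follow-up events
        if result_event is not None:
            event_queue.put(result_event)           # dynamic linking: chain the workflow
```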

Therefore the framework becomes a dynamically composed, multi-layered architecture that creates a workflow to process an event. Every event is interpreted as a request; even requests to produce output are given to the framework as "output" events.

To achieve the architecture defined above, we need to perform a semantic markup of services into semantic services and atomic services; we feel that DAML-S may be useful for such a task. Policies can be expressed as a set of rules that control the access to, execution of, and direction of services for the various users of the autonomous system. These policies can also direct workflows between different autonomous systems or between different services of a single autonomous system. Such policies may be enforced on a per-event basis, since events requiring similar services can still differ from each other through differentiating parameters. Context may be defined in a manner similar to policy definition: since context is also used to control the workflow of the entire process, the framework will use a context much as it uses a policy. The entire architecture, by definition, becomes event and rule based. Rules define the possible pathways a workflow may take, and events are the outputs of services (or the initial input to the framework) on the basis of which a runtime workflow is decided by composing services according to the rules.
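As a final hedged illustration, per-event policy rules of this kind could be represented as simple condition checks evaluated against the user context before a service is allowed into a workflow. The rule format below is an assumption of ours, not a DAML-S construct, and the hospital-specific rules are invented examples.

```python
# Illustrative per-event policy rules, represented as applicability patterns
# plus conditions on the user context. This rule format is our own assumption
# and is not a DAML-S construct.

import fnmatch

policies = [
    {
        "description": "only doctors and nurses may use the drug dispenser",
        "applies_to": "service.drug-dispenser",
        "condition": lambda ctx: ctx.get("role") in ("doctor", "nurse"),
    },
    {
        "description": "the operating-theatre projector is reserved for surgeons",
        "applies_to": "output.ot-projector",
        "condition": lambda ctx: ctx.get("role") == "surgeon",
    },
]


def allowed(service_name, context):
    """An event may invoke a service only if every applicable rule permits it."""
    applicable = [p for p in policies if fnmatch.fnmatch(service_name, p["applies_to"])]
    return all(p["condition"](context) for p in applicable)


print(allowed("service.drug-dispenser", {"role": "nurse"}))     # True
print(allowed("service.drug-dispenser", {"role": "visitor"}))   # False
```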