Hancock Interaction with Automation 1

Adaptive Automation

Instructor: Dr. Peter Hancock

Lecture Overview

In the next three hours we will seek to cover three complementary topics.

- The use of automation in general: when is it appropriate to use automation, is it always appropriate, and what are the goals and desired (hierarchical?) relations between humans and machines?

- Define Adaptive Automation and examine its problems and promises.

- Define adaptable interfaces and the difference between adaptable and adaptive.

Unfortunately, each one of these topics can be the focus of an entire course or at least deserves a lecture of its own. It is therefore important that you use the lecture reading list as a “starter” to get you more familiar with the various topics we will discuss in class. Here again I have chosen key articles, which I hope will lead you to others.

Some concepts in automation that we will discuss briefly in class are:

·  MABA-MABA (Men Are Better At / Machines Are Better At)

·  Supervisory control

·  Level of automation

·  Level of control: management by consent (MBC) or management by exception (MBE)

·  Humans and automation: Use, misuse, disuse, abuse (after Parasuraman & Riley, 1997)

What is adaptive automation?

Adaptive automation (AA) is an approach to automation design where tasks are dynamically allocated over time between humans and machines in cooperative systems for the purpose of optimizing overall system performance.

The main goal of research in AA is to address the underlying problems of automation through the development of systems that adaptively allocate tasks to humans, machines, or both, as appropriate. One can distinguish two main research questions in AA that together might offer such solutions:

(1) Effects of automation on overall system performance

This is a mainly empirically driven effort, concerned with constructs such as trust, skill, workload, situation awareness, mode awareness, over- and underuse, boredom, stress, and automation-induced complacency.

(2) Determining successful triggering mechanisms and transition criteria

This is a theory-driven effort, drawing on theories of transparency, machine autonomy, task switching, responsibility, "human in the loop"-ness, and delegation strategy (a toy illustration follows).
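
To make the notion of a triggering mechanism concrete, here is a minimal Python sketch of a threshold-based trigger with hysteresis. Everything in it is an illustrative assumption: the normalized workload estimate stands in for any empirical index (psychophysiological, performance-based, or model-based), and the threshold values are placeholders that a real design would have to validate.

```python
from dataclasses import dataclass

@dataclass
class AllocationTrigger:
    """Hypothetical threshold-based trigger for adaptive task allocation."""
    offload_threshold: float = 0.8  # above this, shift the task to the machine
    reload_threshold: float = 0.4   # below this, return the task to the human
    allocation: str = "human"       # current holder of the reallocable task

    def update(self, workload_estimate: float) -> str:
        """Decide allocation from a normalized workload estimate in [0, 1].

        The gap between the two thresholds acts as hysteresis, a simple
        transition criterion that keeps control from oscillating between
        agents when the estimate hovers near a single cut-off.
        """
        if self.allocation == "human" and workload_estimate > self.offload_threshold:
            self.allocation = "machine"
        elif self.allocation == "machine" and workload_estimate < self.reload_threshold:
            self.allocation = "human"
        return self.allocation

trigger = AllocationTrigger()
for w in (0.3, 0.85, 0.7, 0.35):
    print(w, "->", trigger.update(w))
# 0.3 -> human, 0.85 -> machine, 0.7 -> machine (hysteresis), 0.35 -> human
```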

Some challenges:

·  Can machines learn to cooperate with humans?

The increasing intelligence of machines shifts human-machine interaction toward human-machine cooperation. There is a need for humans and machines to comprehend each other's reasoning and behavior, a point first raised by Hollnagel and Woods (1983). If the machines were replaced by humans, this would simply be described as a need for cooperation (Hoc, 2001). An interesting idea is to have the machine learn to cooperate with humans. To do that, a machine should be able to assess, and adapt to, the goals of a human.

·  How does one measure "human in the loop"-ness and estimate its desired level?

The purpose of automation is to let machines do, at least as effectively, what humans formerly did, so one would expect engineers to automate as much as possible. An important disadvantage, however, is that in particular situations the cooperating human ends up too far "out of the loop": in case of an incident, the operator has limited situation awareness and is unable to cope with the situation. The challenge is, for each of those situations, to measure "human in the loop"-ness, estimate its desired level, and find ways to reach that level.

·  How do trust and transparency relate in human-machine cooperation?

Adaptive automation causes humans and machines to gain or lose control over tasks. If either a human or a machine does not expect such a reallocation of control to be effective, the allocation is likely to be overruled because of a lack of trust in the other agent, in itself, or in the allocator. When such distrust is unwarranted, ways of helping agents calibrate their trust in each other are welcome. For instance, this can be done by automatically adapting the transparency level of the different task execution processes, as in the sketch below.
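
As a toy illustration of that last point, the sketch below maps a running trust estimate to a transparency level. The three levels, the numeric thresholds, and the idea of estimating trust from the operator's acceptance/override history are assumptions made for illustration, not a validated scheme from the literature.

```python
# Hypothetical sketch: adapting the transparency level of a task
# execution process to the operator's estimated trust. The levels,
# thresholds, and trust estimate are illustrative assumptions.

TRANSPARENCY_LEVELS = {
    1: "report outcome only",
    2: "report outcome and current reasoning",
    3: "report outcome, reasoning, and projected consequences",
}

def transparency_for(trust_estimate: float) -> int:
    """Choose a transparency level from a trust estimate in [0, 1].

    Low (possibly miscalibrated) trust triggers fuller explanation,
    giving the operator evidence with which to recalibrate; well-placed
    high trust earns terser output.
    """
    if trust_estimate < 0.3:
        return 3
    if trust_estimate < 0.7:
        return 2
    return 1

for t in (0.2, 0.5, 0.9):
    level = transparency_for(t)
    print(f"trust={t:.1f} -> level {level}: {TRANSPARENCY_LEVELS[level]}")
```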

Adaptable Interfaces

Adaptive interfaces are a relatively new attempt to overcome problems arising from the increasing complexity of human-computer interaction. They are designed to tailor a system's interactive behavior to both the individual needs of human users and changing conditions within an application environment. The broader approach of intelligent user interfaces includes adaptive characteristics as a major source of its intelligent behavior.

In dynamic situations where task requirements and operator states can change from moment to moment, such as battle conflicts or agile work environments, a fixed interface provides the best mapping between user and technology only over the narrow range fixed during design, producing suboptimal performance outside this design envelope. Given the dynamic, ever-changing situations addressed by both the military and global industry, a real-time, rapidly customizable interface between the human and the technology is required to continuously maintain the best match between these entities and to maximize both personnel and technology investments.

To be useful, an adaptive model must meet a set of criteria (a minimal structural sketch follows the list). The model must:

·  incorporate representations of operator states.

·  represent interface features which are adaptable.

·  represent cognitive processing in a computational framework.

·  provide measures of merit for matches between the input state variables and interface adaptation permutations.

·  operate in a real-time mode.

·  incorporate self-evolving mechanisms.
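
The skeleton below gathers these criteria into a single structure. It is a sketch only: the class name, state variables, interface features, and the toy merit function are all placeholder assumptions made for illustration, not a real operator model.

```python
from itertools import product

class AdaptiveInterfaceModel:
    """Structural sketch of the criteria above; all values are toy stand-ins."""

    def __init__(self):
        # (1) representation of operator states
        self.operator_state = {"workload": 0.5, "fatigue": 0.0}
        # (2) representation of adaptable interface features
        self.features = {"alert_modality": ["visual", "auditory"],
                         "detail_level": [1, 2, 3]}

    def predict_performance(self, state, config) -> float:
        # (3) stand-in for a computational model of cognitive processing;
        # toy rule: high workload favors auditory alerts and low detail.
        score = 1.0 - state["workload"] * (config["detail_level"] / 3)
        if state["workload"] > 0.7 and config["alert_modality"] == "auditory":
            score += 0.2
        return score

    def best_configuration(self, state):
        # (4) measure of merit scored over all adaptation permutations
        configs = [dict(zip(self.features, values))
                   for values in product(*self.features.values())]
        return max(configs, key=lambda c: self.predict_performance(state, c))

    def step(self, sensed_state):
        # (5) intended to run on each cycle of a real-time loop;
        # (6) a self-evolving mechanism would additionally refine
        #     predict_performance from observed outcomes here.
        self.operator_state.update(sensed_state)
        return self.best_configuration(self.operator_state)

model = AdaptiveInterfaceModel()
print(model.step({"workload": 0.9}))  # -> {'alert_modality': 'auditory', 'detail_level': 1}
```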

Current Learning Objectives

·  To understand the basic concepts in automated human-machine systems.

·  To understand how human operators are affected by automation implementation in real-world systems.

·  To comprehend how human performance measurement may be used to enhance human-machine system design.

Lecture Readings


Automation

Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an "aid" can (and should) go unused. Human Factors, 35(2), 221-242.

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230-253.

Sheridan, T.B. (2002). Humans and automation: System design and research issues. Santa Monica, CA: Wiley.

Sheridan, T.B. (1992). Telerobotics, automation, and human supervisory control. Cambridge, MA: MIT Press.

Adaptive Automation – Foundations

Byrne, E., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.

Hancock, P.A., & Chignell, M.H. (1987). Adaptive control in human-machine systems. In P.A. Hancock (Ed.), Human factors psychology (pp. 305-345). Amsterdam: North-Holland.

Hancock, P.A., & Scallen, S.F. (1996). The future of function allocation. Ergonomics in Design, 4(4), 24-29.

Hollnagel, E., & Woods, D.D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18(6), 583-600.

Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.

Rouse, W.B. (1994). Twenty years of adaptive aiding: Origins of the concept and lessons learned. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 28-32). Hillsdale, NJ: Erlbaum.

Wiener, E.L. (1973). Adaptive measurement of vigilance decrement. Ergonomics, 16, 353-363.

Adaptive Automation – Recent Work

Hoc, J.M. (2001). Towards a cognitive approach to human-machine cooperation in dynamic situations. International Journal of Human-Computer Studies, 54(4), 509-540.

Inagaki, T. (2003). Adaptive automation: Sharing and trading of control. In E. Hollnagel (Ed.), Handbook of cognitive task design (pp. 147-169). Mahwah, NJ: Lawrence Erlbaum Associates.

Parasuraman, R., Sheridan, T.B., & Wickens, C.D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 30(3), 286-297.

Adaptable Interfaces

http://www.sift.info/English/publications/PDF/MFGWM-AugCog-DecMak.pdf

Oppermann, R. (Ed.) (1994). Adaptive user support: Ergonomic design of manually and automatically adaptable software. Hillsdale, NJ: LEA Publishers. Available online at http://www.questia.com/PM.qst?a=o&d=78542879