
AIDING DECISIONS BY RECOGNISING UNEXPECTED SITUATIONS

Simon Grant

1 INTRODUCTION

The idea that accidents happen in unexpected situations seems obvious. If we expect a dangerous situation, that implies that we accept a degree of risk. Attempts to minimise risk require that the likelihood of foreseeable dangerous situations is minimised.

It is interesting to reflect on the widely reported fact that the majority of accidents have human error as a prime cause. It is more difficult to foresee the possible mental states of an operator than to foresee the possible physical states of the system that he or she is controlling. And even if one could list the possible mental states of a process operator, pilot, captain, or whoever, together with the effect of those mental states on the actions performed, techniques of estimating the likelihood of those mental states are at best in their infancy.

Training, it could be said, exists at least partly so that in particular situations, an operator has an appropriate mental state, which includes the knowledge (explicit and implicit) necessary to deal effectively with the situation. This view would be consistent with the idea that if humans were well-trained, their behaviour would fit in with the system design to the extent that the designer could reasonably expect operators to be predictable. Clearly, if humans sometimes do things that the designers did not envisage (which could be a designer's view of human error) it is a major problem both for the actual safety of the complex systems and the estimation and quantification of their reliability.

If we take the point of view of the operator of a complex system, accidents will generally occur in unexpected situations. In the case of accidents attributed to operator error, it is the fact of being unexpected to the operator that matters, even if the situation has been considered by the designer. In situations that have been deeply analysed during design, what is unexpected to the designer may simply be the fact that the operator in the loop does not expect the situation. Considerations like these suggest two alternative approaches to unexpectedness: from the point of view of a designer, and from the point of view of an operator. The first aim of this paper is to distinguish between these two approaches, showing how important it is to consider the operator's expectations. The two forms of unexpectedness then reveal two ways of describing the space in which tasks are performed.

This paper explores the specification of the requirements for a system to manage situations unexpected to the operator but otherwise expected to the designers. It is argued that any system to help reduce the incidence of surprised operators must relate the technical view of a designer with a view of the cognition of an operator.

2 TWO FORMS OF UNEXPECTED SITUATION

The concept of unexpectedness hides a very important distinction. A simple way of appreciating this is to ask who it is that does not expect the situation. It is almost unknown for complex technological systems such as aircraft, ships, process plant or traffic control systems to be designed and operated by the same people. This in itself suggests a distinction. Even if the designer and operator were the same person, the designer's task and the operator's task tend to have different characteristics. For example, the task of operating a complex system is dynamic in a way in which the task of designing it is not, and the time constraints of the two tasks differ. What is unexpected in a short time-frame can in some cases be expected given enough time for reflection.

Considering these two aspects of unexpectedness, from the points of view of the designer and the operator, gives in principle three kinds of unexpected situation. Situations that are unexpected by the designers, but expected by the operator, are potentially interesting, but will not be considered here as they raise other issues. The two remaining kinds of unexpected situation are both unexpected from the operator's point of view. One is also unexpected by the designers, the other is expected. It is these two that will be considered in more detail.

2.1 SITUATIONS UNEXPECTED IN DESIGN

A little imagination can produce any number of situations that would probably not have been anticipated and planned for in the course of design of a complex system. It is obviously unreasonable to plan for such freak events as a wing being torn off by a falling meteorite, but experience, or detailed research, can sometimes reveal situations that have not been considered, but ought to have been.

As an example in the field of civil aviation, we may reflect on the Mount Erebus disaster (Vette 1985), where the pilot was unfamiliar with the phenomenon of "white-out" at very high latitudes. This, combined with other events, led to a crash on the mountain (in Antarctica) that appeared to be completely unexpected by the pilots until moments before the impact, when the ground proximity warning system sounded, too late. No training or procedures had been devised to cope with this situation.

It is not difficult to imagine another kind of situation where features that are not dangerous in themselves combine in unforeseen ways to create a real danger. Other examples could be suggested that involve organisational or management factors, which are becoming more clearly recognised as important factors in the causes of accidents.

Though design, management and organisational errors are very important, they can usefully be treated separately from errors in the operation of complex systems. In the two settings the kinds of risk and of time pressure differ, and because of these differences it is likely that cognitive strategies differ. It follows that findings about human error in one domain probably do not generalise to the other. This is one reason for focusing on the operational view of unexpectedness.

Another reason is that in this paper we are considering what kinds of aids can be designed and developed. We could (later) discuss how these would be developed, and at that stage a discussion of possible errors in design would be appropriate. But to discuss what kind of tools should be developed we need to imagine the design process as perfect, which leaves us to focus on situations unexpected from the operational point of view.

2.2 SITUATIONS UNEXPECTED IN OPERATION

What reasons could there be for situations being unexpected for an operator, even though the situations were within the normal operational envelope from a technical point of view? One which will not be dealt with here is that of inadequate training. Remembering the assumption that the situation is within the normal operational boundary, the problem must then be that the operator thinks that something is happening when in fact something else is happening. The operator could have forgotten, or never realised, what the situation really is.

A classic example of this is controlled flight into terrain, where a pilot is not aware of the true flight-path, and hence unaware of the need for corrective action. The impact with the ground is unexpected (at least until it is too late to take corrective action) because the situation is misconstrued. This can certainly be seen as loss of 'situation awareness' (a term used widely in aviation to describe a pilot's awareness of what is happening), regardless of the difficulties in defining situation awareness positively (Sarter & Woods 1991). Similar examples could be constructed or identified in other complex tasks. The common factor would be that, after the event, the operator would recognise that he or she did not at the time correctly recognise what was happening.

This kind of unexpectedness is a challenge for design, because in these situations the human operator is limited, and does not at all times have the whole picture of what is happening that a designer could expect to have, given sufficient time to think over all the possibilities. One way of responding to the challenge would be to design systems and tasks in such a way that the operator is less able to lose track of what is happening. Perhaps more realistically, we may assume that from time to time human operators are going to lose situation awareness, whether by distraction or for other reasons, and the challenge is then to recognise when that happens, alerting the operator as soon as possible.

3 MANAGING UNEXPECTED SITUATIONS

If we are to design a system to manage unexpected situations in the way suggested, we must somehow integrate enough of the cognitive view of the operator's mental processes into the framework of a design view. The challenge is clarified by considering the different representations of the task which would be suitable for, on the one hand, a design view and, on the other hand, a view of the cognitive processes of the operator.

It should be clear from the foregoing discussion that, to understand and model the processes that may cause loss of situation awareness and unexpected situations, not only must the objective situation itself be modelled, but also the attention and instantaneous knowledge of the operator. Unfortunately, there is no easy way to detect what is in the operators' minds at any particular time. Any designed system functions on the basis of information that is available, and it is highly implausible in most cases to expect the operators continually to be updating a system with the knowledge of what they are attending to. To build a good working support system, the system itself must be able, during the execution of the task, to derive an adequate model of what an operator is attending to or has in mind.

A first step towards the design of an advanced system would be to consider how we may represent continuous situation awareness in a pilot, process operator, or whoever, and to relate that to a global view of what is happening. The global view would be an engineering view, useful in design, and would be of one large task space, the dimensions of which would be at least all the relevant variables that may affect the desired task behaviour. But if humans could attend to all this possibly relevant information at once, we would not suffer from the second type of unexpectedness. In contrast, a representation of the cognition of the operator would have to include, at any particular time, the variables that were salient for that stage of the task, or which were being taken into consideration or were the object of attention. At any particular stage of the task, this set of variables would be a subset of the whole. Interesting evidence about these subsets of variables in a dynamic simulation can be found elsewhere (Grant, 1990, §7.2).
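To make the contrast concrete, a minimal sketch follows (in Python, purely for illustration; the class names and example variables are assumptions made here, not taken from any actual system). It contrasts an engineering task space carrying every relevant variable with a cognitive view that exposes only the subset salient at a given stage of the task.

    # Illustrative sketch only: a global "engineering" task space holding every
    # relevant variable, versus a cognitive view that attends only to the subset
    # salient at the current stage of the task. All variable names are hypothetical.

    class EngineeringTaskSpace:
        """Design-time view: one large, continuous space of all relevant variables."""
        def __init__(self):
            self.variables = {}                     # e.g. {"altitude": 3500.0, ...}

        def update(self, name, value):
            self.variables[name] = value

    class CognitiveView:
        """Operator-centred view: at each task stage, only a salient subset is attended to."""
        def __init__(self, salient_by_stage):
            # e.g. {"approach": {"altitude", "airspeed"}}
            self.salient_by_stage = salient_by_stage

        def attended_variables(self, stage, task_space):
            """A discrete, 'lumpy' slice of the continuous engineering space."""
            salient = self.salient_by_stage.get(stage, set())
            return {name: task_space.variables.get(name) for name in salient}

    space = EngineeringTaskSpace()
    space.update("altitude", 3500.0)
    space.update("airspeed", 180.0)
    space.update("flap_setting", 2)

    view = CognitiveView({"approach": {"altitude", "airspeed"}})
    print(view.attended_variables("approach", space))   # only the salient subset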

    unexpected in design        | unexpected in operation
    ----------------------------|------------------------
    machine                     | human
    engineering view            | cognitive view
    smooth space                | lumpy space
    continuous representation   | discrete representation

The table above brings together some of the distinctions being made here, under the headings of the main distinction drawn in this paper. The table shows two quite different sides, and the differences mean that it is difficult, and probably unhelpful, to reduce the cognitive viewpoint to the technical one.

The way in which the cognitive viewpoint is represented is closely related to the implicit or explicit model within a system designed to manage unexpectedness for operators. Managing these unexpected situations requires that the system has some model of what the human has, or does not have, in mind. One existing system which warns pilots in this way is the ground proximity warning system (GPWS). For warning systems such as the GPWS, or simpler systems, the model is very basic, and implicit in the way that such systems are currently designed.

A GPWS is an example of a system that recognises a particular kind of hazardous situation (in this case, imminent collision with the ground). The implicit model of the pilot's cognition is simply that he is, or is not, attending to the identified danger. Nothing else is considered relevant to the GPWS. Whether the pilot is aware of the danger is inferred very simply: if the situation is dangerous in the specified way, the pilot cannot have noticed it, because if he had, he would not have allowed the situation to remain in that condition.

We may consider any other simple alarm similarly. It is assumed that if the operator knew the value of a particular variable, he or she would not have allowed the dangerous situation to arise. Since the danger has arisen, the operator must not be aware of the problem. We can immediately point out problems with this argument: firstly, if many alarms are active at once, the operator has only limited attentional resources to deal with them. Furthermore, in abnormal situations the operator may be perfectly aware of some variable going out of its normal range, but be unable or unwilling to do anything about it immediately. In these situations the alarm is counterproductive, which suggests that the naive implicit model of the operator's cognition is both unrealistic and unsatisfactory.
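A minimal sketch of this implicit reasoning might look as follows (Python, illustrative only; the variable names and thresholds are invented here). The point is that the inference "danger present, therefore operator unaware" is hard-wired, with no representation of attention or of the possibility that the operator already knows about the problem.

    # Illustrative sketch of the implicit operator model behind a simple alarm.
    # The inference is crude: if a variable is outside its normal range, the
    # operator is assumed not to have noticed it, and an alarm is raised.
    # Variable names and thresholds are hypothetical.

    NORMAL_RANGES = {
        "radio_altitude_ft": (500, float("inf")),
        "oil_pressure_psi":  (25, 90),
    }

    def naive_alarms(readings):
        """Return the variables the system implicitly assumes the operator has not noticed."""
        unnoticed = []
        for name, value in readings.items():
            low, high = NORMAL_RANGES[name]
            if not (low <= value <= high):
                # Implicit model: "if the operator knew, this would not have happened."
                unnoticed.append(name)
        return unnoticed

    # With several variables out of range at once, every one triggers,
    # regardless of what the operator is actually attending to.
    print(naive_alarms({"radio_altitude_ft": 320, "oil_pressure_psi": 18}))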

A different, and stronger, way to manage unexpected situations would be to identify unexpected situations at the earliest practical opportunity. In current aeronautical terms, we could see this as identifying loss of situation awareness, or, more generally, identifying a 'breakdown', a suggestive term borrowed by Johnston (1995) from Heidegger. Since there is no way directly to read the mind of the operator, this identification must be done on the basis of observable behaviour (which includes inaction).

The limits of what is possible can be grasped by imagining what could be done by a very knowledgeable human. If we imagine that an extra 'safety' pilot has been observing the flying pilot carefully, and understands his particular ways of doing things, the safety pilot could, for every action of the flying pilot, recognise whether that action was a reasonable thing to do in that situation. If it was not, the safety pilot could either ask something like "what are you doing that for?", which would initiate a corrective dialogue, or perhaps even simply say "look at the altimeter" or otherwise direct the flying pilot's attention to some overlooked piece of information that would correct the flying pilot's situation awareness.
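The role of such an observer, whether human or engineered, can be sketched as an interface with just two capabilities (the following Python is a hypothetical sketch, not a specification): judging whether an observed action is reasonable in the real situation, and intervening when it is not.

    # Sketch of the interface an ideal 'safety observer' would need, whether
    # human or engineered. Names and structure are assumptions made for illustration.

    from abc import ABC, abstractmethod

    class SafetyObserver(ABC):
        @abstractmethod
        def is_reasonable(self, action, situation) -> bool:
            """Would this action make sense to someone aware of the real situation?"""

        @abstractmethod
        def intervene(self, action, situation) -> None:
            """Open a corrective dialogue ("what are you doing that for?") or
            direct attention to the overlooked information ("look at the altimeter")."""

    def monitor(observer, action, situation):
        """For every observed action, check its plausibility; intervene if it is not reasonable."""
        if not observer.is_reasonable(action, situation):
            observer.intervene(action, situation)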

Why, then, not just have an extra pilot doing exactly that? Would that not make flying safer? A detailed answer is beyond the scope of this paper. Briefly, the issues include cost, which is one reason why modern flight decks tend to have two rather than three people, and reliability: if we want to help avoid human errors, it seems less than ideal to rely entirely on another human who could be subject to similar errors.

If we accept the idea that an engineered system could potentially fill this role, there is a phrase that serves to describe the principle on which it would be based: "Situation Awareness Correspondence between Human and Engineered system", or SACHE as a mnemonic. To create a system incorporating SACHE is a major challenge, a large part of which is to develop a model of cognition to be incorporated in the engineered system, capable of representing what is necessary of the operator's cognition.

4 TOWARDS A MODEL OF COGNITION

To build a system to support safe operation in the way outlined above, we have to know what observable behaviour is expected of the operator in the various situations, and we have to be able to recognise the onset of each situation as the task progresses. This can only be on the basis of a continual monitoring of what is happening, together with a rich and detailed model of an operator's cognitive processes while performing the given task. The model would need to be rich enough to support a simulation of operator behaviour, in the tradition of COSIMO, a COgnitive SImulation MOdel developed at JRC Ispra (Cacciabue et al., 1992).

A simple, and perhaps obvious, way of modelling loss of situation awareness, and unexpected situations, would be based on a cognitive unit comparable with the frame in COSIMO. The architecture of COSIMO was not explicitly built for the purpose of cognitive realism, but the choice of the frames concept seems reasonably appropriate and fits in with many other approaches. Models and simulations in cognitive science tend to have some kind of cognitive modularity, or unit (Grant, 1994). We will assume here that the model is based on the kind of contextual modularity described in the paper cited (Grant 1994), and the basic unit in the model will be called a Personal Immediate Cognitive Task Unit, or PICTU. The origin of this term is described elsewhere (Grant 1995).

If a situation is unexpected, in the manner under discussion, the operator mistakes the context. In terms of the model, the operator's cognitive state would be associated with one PICTU, while the actual situation required a different PICTU. A simulation of possible cognition would have to track which PICTUs were acceptable in the real situation and observe the actual behaviour, while at the same time predicting the expected observable behaviour. Comparing what was observed with what was expected would give the first possible chance to alert the operator to the possibility that he or she may have lost situation awareness.
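The comparison just described might be sketched roughly as follows (Python, for illustration only; the reduction of a PICTU to an applicability test plus a set of predicted actions is an assumption made for this sketch, not the actual model). The simulation maintains the set of PICTUs acceptable in the real situation, predicts the observable behaviour each would produce, and flags a possible loss of situation awareness as soon as the observed behaviour fits none of them.

    # Rough sketch of the comparison step described above. A PICTU is reduced here
    # to an applicability test plus the observable behaviour it predicts; the real
    # model would be far richer. All structure is assumed for illustration only.

    class PICTU:
        def __init__(self, name, applies_to, predicted_actions):
            self.name = name
            self.applies_to = applies_to                  # function: situation -> bool
            self.predicted_actions = predicted_actions    # set of acceptable observable actions

    def appropriate_pictus(pictus, situation):
        """The PICTUs acceptable in the actual, objective situation."""
        return [p for p in pictus if p.applies_to(situation)]

    def check_awareness(pictus, situation, observed_action):
        """Return None if the behaviour fits some appropriate PICTU, else a warning string."""
        candidates = appropriate_pictus(pictus, situation)
        if any(observed_action in p.predicted_actions for p in candidates):
            return None     # behaviour consistent with the real situation
        # The observed behaviour matches no appropriate PICTU: the operator may be
        # acting within a PICTU belonging to a different, mistaken situation.
        return "possible loss of situation awareness: %r fits none of %s" % (
            observed_action, [p.name for p in candidates])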

Such a system could not be expected to be perfect, bearing in mind the complexity of many tasks and the lack of a single normative behaviour pattern in many situations. Thus, in some instances it would be the system that had lost situation awareness, rather than the human. It would be important that these instances did not occur too often, otherwise the system may be perceived as a nuisance. When loss of system situation awareness did occur, however, it could be used as the basis for progressive refinement of the cognitive model, so that the problem did not occur again.

The focal point in the development of a safety system of this kind is the construction of the cognitive simulation. This is both because the simulation is a fundamental part of the system, and because it is relatively easy to test. A simulation of cognitive processes should produce similar output for similar input, within defined ranges of situation and of scope of the simulation. Previous work using the COSIMO approach has, for example, produced output capable of controlling a simulated aircraft over a small part of its mission. Code for this was written in Smalltalk, and it is serving as a starting-point for a redesign of the architecture, also in Smalltalk. Following reimplementation, a scenario will be chosen to test the capabilities of the architecture to produce the range of phenomena of interest, specifically including the phenomena of unexpectedness described in this paper.

5 CONCLUSION

Recognising unexpected situations is important to creating support systems that stand a good chance of averting some of the less easily tractable kinds of (human) error. We cannot afford to treat all unexpected situations in the same way, because they have different causes and responsibility for them lies with different people. Situations unexpected by the operator involve loss of awareness of what the situation really is. Understanding these situations needs a representation of the world different from the one appropriate to design: more discrete, and taking into account attention and the perceived context. If, on this basis, a system recognised unexpected situations, it could serve as a safety system that alerted operators to situations involving loss of situation awareness earlier than traditional alarms based on crossing a global performance envelope. Implementation is the only satisfactory way of validating such a model and simulation, and further discussion will follow the construction and trial of such a simulation.

ACKNOWLEDGEMENTS

I would like to thank Anne-Laure Amat for detailed collaborative comments, and Neil Johnston for help with references.
