
SAFETY SYSTEMS AND COGNITIVE MODELS

1 Introduction

The concept of a safety system presupposes the existence of risk or danger, and the kind of danger that is relevant to this paper is the kind that is familiar in the aerospace and other industries: incidents or accidents involving injury or loss of life, and damage to or loss of things that are expensive. Since accidents have increasingly been attributed to human error, there is an ongoing need for safety systems aimed at human error in particular.

AI and cognitive science offer the possibility of creating and using some kind of cognitive simulation or model of the operator engaged in a complex task. Various cognitive models have been proposed for complex tasks [CACCIABUE 94], and the main suggested uses for them include interface design, analysis of human errors and reliability, and design of safety systems that compensate for the inherent weak points of human cognition. A model of cognition could potentially support a safety system in a number of diverse ways, depending both on the kind of error that is to be counteracted, and on the nature of the safety system.

How does the contribution of a cognitive model depend on the kind of error? Even a basic model of human perception and attention could help the design of warning signals to counteract lapses of attention. In contrast, diagnosing errors of intention ('mistakes' [REASON 90]), or loss of 'situation awareness' [SARTER 91] for an aircraft pilot, requires a more powerful model embedded in the safety system. An analysis of types of human error is therefore relevant to the requirements of a model of cognition. Some work on this is presented elsewhere [GRANT 95], but will not be taken up here.

On the other hand, how does the contribution of a cognitive model depend on the kind of proposed safety system? For example, the design of a fixed written warning notice may benefit from basic knowledge of the mental capabilities of the person that it is there to warn, even down to the level of whether the human knows the meaning of the symbols or words on the notice. But the design of an effective interactive warning system would also benefit from knowledge of the appropriate information content and timing of the warning, which would be found only in a more complex model. Thus an analysis of the type of safety system is also relevant to the development and use of models of cognition.

This paper provides an original analysis of some types of engineered safety system, from the point of view of the need for, and the possible contribution of, models of cognition; but also bearing in mind the relevance of the systems to coping with human error. It should be noted that many other issues can and do arise from consideration of the different human and machine elements of a safety system, but this paper focuses on the engineered parts of such systems alone, in the belief that these are simpler to analyse, and provide a valid first step towards dealing with the larger issues.

None of the cognitive simulations or models proposed up to now has been associated with a comprehensive analysis of the kind of system for which it would be suitable. The analysis presented here should make it easier for those who wish to use cognitive models to consider exactly what kind of safety system they are concerned with, and, from that, what kind of simulation or model of cognition would be appropriate. To this end, it is hoped that this analysis distinguishes classes of safety system that are homogeneous with respect to their requirements of models of cognition.

A natural next step after constructing the analytical framework is to select the category of greatest interest for the application of cognitive techniques, and briefly to point to what could be done. On the basis of assumptions that are made explicit below, a framework for a model (and simulation) of cognition is presented, together with a conceptual design outline for a new generation of safety system that uses that model framework. The safety system concept is called SACHE, for Situation Awareness Correspondence between Human and Engineered system. In order to situate these ideas further in a potential practical context, a suggestion is made on how such a system could operate in conjunction with other systems.

In this paper, examples focus on the pilot on the flight deck or in the cockpit. However, much of the discussion could be applied to many complex tasks, including the field of process control in general, and for that reason the humans that control the complex systems are referred to as operators (a general term), which is intended to include pilots, air traffic controllers, and other people that could be performing any one of a range of complex tasks.

2 Classification of safety systems

The main aim of this classification is to distinguish systems that provide alerts or warnings, as suggested above, as a counter-measure to human error, for the purpose of assessing the different possible contributions of models of cognition to the different classes of system. As well as warning systems, other safety systems are also included in the analysis for illustration. Taking technological systems in general, the analysis proceeds as shown in Figure 1. At each level, the left-hand branch is terminated with an evaluation of the relevance of cognitive models to that class. It will be seen that a prime area of application of models of cognition is the last category in the figure.

Figure 1: A classification of systems for safety and cognition

2.1 The dimension passive / active

Beginning the classification of safety systems, the dimension passive / active depends on whether the system takes any actions or not. Passive systems are put into place and then remain there, whether or not they are used, whereas active systems take some kind of action.

Examples of passive systems include fixed written warning notices and mechanical interlocks.

Passive systems present the operator with fixed information, or constraints. A good example of a passive system in aviation would be a flap system interlock, which makes it impossible to select the flaps at an airspeed at which they would be damaged. Modelling the cognition of the humans involved may be relevant, but it must be done at the design stage, and the system, being passive, would only cope with the human responses that had been explicitly considered. Since there is no variation in the behaviour of a passive system, this kind of system could not make use of an 'embedded' simulation of the operator's cognition, and there would be no possibility of discriminating the moments when the operator was more or less likely to err. The analysis of human and system interaction could be carried out in the form of an event tree, and would be reasonable in just those cases where contextual effects are unlikely to affect that analysis.
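As a purely illustrative aside, the fixed, context-independent character of such a constraint can be sketched in a few lines of Python; the speed limit and the function name are assumptions invented for this example, not figures for any real aircraft.

```python
# Hypothetical sketch of a fixed, passive constraint such as a flap interlock.
# The speed limit and names are invented for illustration.
MAX_FLAP_EXTENSION_SPEED_KT = 180.0

def flap_selection_permitted(airspeed_kt: float) -> bool:
    """The constraint never varies: it depends on airspeed alone, with no
    reference to the operator's state or the wider context."""
    return airspeed_kt <= MAX_FLAP_EXTENSION_SPEED_KT
```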

The strength of passive safety systems is that they are invariant across a very wide range of possible situations, and this invariance means that they are difficult to circumvent. There are issues concerning the cognition of the other people involved in designing, testing, checking, and maintaining these systems, and potentially coping with their failure; these issues are not the subject of the current discussion. Since the application of cognitive simulation is limited here, we continue by considering active systems, those that are not passive.

2.2 The dimension responsive / autonomous

Systems that are fundamentally responsive are defined here as those that need human intervention to operate at all. Autonomous systems, on the other hand, are more powerful in that they can function without being explicitly called into action by humans. The main area of interest in this paper is not in fully automatic systems, which are autonomous, but rather in those systems that interact with the human by providing information of some kind, especially warnings or alerts. Many of these latter systems, less obviously, are here classed as autonomous, since they operate independently of being requested. The present meaning of the term autonomous is intended to correspond to its use elsewhere (e.g. [PREVOT 93]).

Since the term 'interactive' covers a range of responsive and autonomous systems, it is not useful in the analysis at this point, and in particular cannot be used instead of the term 'responsive'. Systems that are not interactive have correspondingly no use for models of human cognition.

Examples of responsive systems include information systems that supply data only on request, and safety devices, such as seat-belts, that the operator must choose to use.

Typical situations that would suit responsive systems are those where their need is completely clear, and no conflicting tasks will interfere with either the human's recognition of the need to use the system, or the human's activation of it. Responsive systems are necessary where use of the safety system has far-reaching consequences that might be regretted later, unless the system has sufficient knowledge to ensure that it will be deployed only in the correct situations.

The other side of this is that responsive systems share the problem that they may not be selected at the appropriate time. This is easiest to see for general information systems, where vital safety-related information may well be available, but unknown to the operator. As the effectiveness of these systems depends on their being chosen, an analysis of their effectiveness would have to include the criteria that could affect whether the human would actually choose to use them in the situations for which they were designed. This is in the end little or no advance on the problems of reliability analysis in purely human safety systems. Furthermore, as with purely human systems, there can be no guarantee that the problem of human error is significantly reduced.

Responsive information systems, because they do what they are asked, also do not protect the operator against information overload: they simply give the operator everything that is asked for.

Since responsive systems suffer from all these limitations, adequately broad approaches to the problem of human error in complex control tasks are more likely to come from autonomous (and interactive) safety systems, and we continue with further classification of these.

2.3 The dimension usual / unusual

In the course of this analysis, the relevance of the categories of safety systems to human cognition is increasing. The dimension of usual / unusual would have little meaning apart from the consideration of cognition. The adaptivity of the human is such that whether a system is experienced as usual or unusual makes a difference. In the case of systems that are usually in operation, the human will adapt ways of operation to take into account the effects of the system, and this may cause problems.

For a system to be classified in this paper as usual, it must be one where the operator is familiar with the system's operation. There is a continuum of familiarity, from systems which are never encountered either in training or in operations, to systems that are in active use for the majority of the time. It is important to realise that if an abnormal or emergency procedure is well-practised, it becomes usual in the sense meant here. This is one reason for not using the term 'normal'.

One may not wish to call the normally operative systems 'safety systems' at all, since they form part of the basic operational design of the system, and it is difficult to distinguish safety-related elements of the design from elements related to other considerations. It is therefore difficult to give clear examples of separate safety systems that would be classified as usual. In aircraft, one could think of the flight management or control systems in general.

To illustrate an important aspect of usual systems, let us take the example of an anti-skid system on a car or aeroplane, which prevents the wheels from locking and causing a skid with serious loss of friction between the tyres and the ground. Proponents of the concept of risk homeostasis [WILDE 88] suggest that humans have a certain personally acceptable level of risk, and that if an extra safety system is introduced, the human will change behaviour so as to tend to counteract the increase in safety. We can easily imagine this in the case of an anti-skid system: if a driver or pilot knows that an anti-skid system is in operation, there is no need to bear in mind the risk of skidding, and this may tempt the human to treat casually situations where skidding is likely. Another commonly cited example is car seat-belts, though seat-belts belong to the passive or responsive categories. It may be that the feeling of safety gained by wearing a seat-belt encourages a driver to feel safe driving faster, and thus the risk of injury may not be decreased as much as was expected at the design stage. The concept of risk homeostasis has been criticised, but it is nevertheless taken seriously, and this is enough to show at least that people may adapt to usual systems (which could include normally used information systems, assistant systems, or decision support systems) in ways that interfere with the intended or predicted effect of a safety system.

Perhaps it could be argued that models of cognition should be used to predict the ways in which a human would be likely to adapt to a usual system. But if we wanted to analyse human adaptation to usual systems, from the point of view of risk and safety, at the design stage, we would need a very sophisticated model of human learning, which is both presently beyond the state of the art, and in any case more difficult to construct than the kind of models deemed here to be appropriate to some latent safety systems (see below).

Another point to make on the subject of usual systems is this. One of the main purposes of safety systems should be to cope with unexpected situations, and it is often in unexpected situations that human error is seen to occur. But usual systems, particularly when the humans who usually use them have adapted to them, are adapted just to usual situations, and it could well be that the resultant human-machine safety system would be ineffective in situations that are unpractised, since (for complex tasks) the variety of unpractised situations is much greater than the variety encountered in normal operation, to which the usual systems are well-adapted.

The analysis proceeds with what are termed here 'unusual' systems, which are those to which the operator has not adapted in the context of normal operation.

2.4 The dimension dormant / latent

As this classification proceeds to a finer level, the internal nature of the safety system is also becoming more prominent. The aspect of internal nature under consideration at this point is related to the complexity of the triggering of an unusual system, and the distinction has implications both for what the safety system can do, and how much of a cognitive model is required to do it.

Among unusual systems, it is fairly easy to recognise those that are usually inactive, but brought into activity by a simply-defined or simply-measured condition. For these systems, the term 'dormant' is chosen to suggest that while in the dormant state, no relevant activity is taking place, but the dormant system can be 'woken up' by a clearly-defined, context-independent condition. On the other hand, what are here called latent systems are also activated by triggering events, but in this case the triggers are context-dependent. Because of this context-dependency, latent systems continually need to process information even while not active; this is necessary to determine the correct moment for activation of the system.

Some systems that can be classified as dormant are independent of cognition, such as a car airbag system. These could be designed to be set off by certain vibrations in or distortions of the car, and in the circumstances in which they are activated, cognition is largely irrelevant. For that reason they are passed over here.

There are also dormant systems that interact with cognition. In this category come alerting and warning systems in general, that are set off by particular conditions occurring in the controlled system. To be classified as dormant rather than usual, a warning system must be activated only in unusual situations, rather than as a part of normal operation. Part of the current definition of dormant is that dormant systems are activated independently of context. In aviation, a reasonable example of a system that could be regarded as dormant would be a stall warning device. Stalling conditions are relatively easy to detect, since they depend on the relationship of the airflow with the wing, and other factors are irrelevant to the stalling condition. A stall is equally a stall on the final approach, or during steep turning in an emergency manoeuvre, though the airspeed and other conditions may be different. The simplicity of the condition is testified to by the fact that, even in the most elementary flying training, stall recovery becomes a conditioned reflex, cutting through whatever else may be happening at the time. Note also that the avoidance of stalling is a function that can clearly be automated, as in current fly-by-wire aircraft.
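To make concrete how simple a dormant trigger can be, here is a minimal Python sketch of a stall-warning condition that depends on a single, context-independent measurement; the angle-of-attack values and names are assumptions for illustration only.

```python
# Hypothetical sketch of a dormant trigger: the warning depends on a single,
# context-independent condition. The angle-of-attack figures are illustrative
# only, not values for any real aircraft.
CRITICAL_AOA_DEG = 15.0     # assumed critical angle of attack
WARNING_MARGIN_DEG = 2.0    # warn a little before the critical angle

def stall_warning_active(angle_of_attack_deg: float) -> bool:
    """Fires regardless of flight phase, altitude, or anything else."""
    return angle_of_attack_deg >= CRITICAL_AOA_DEG - WARNING_MARGIN_DEG
```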

If there are only a few dangerous conditions associated with dormant warnings of this kind, either the operator could practise responses so much that they become automatised, or the responses could actually be automated. However, when there are many such dangerous conditions linked to dormant warning systems, the overall situation is different. It is important to note that the human response to warnings or alarms is dependent on aspects of the cognitive context, and in particular, the general workload and stress at the time. A classic illustrative example would be the renowned case of Three Mile Island, where at one point over one hundred alarms were sounding [BIGNELL 84]. In this situation, the effectiveness of another added alarm could actually be negative, in that it could hinder rather than help the operators cope with the emergency.

Thus, to be effective in a system with many potential warnings, the dormant warnings would have to be mediated by some kind of system that filtered them in accordance with the cognitive situation; perhaps for example dependent on something like the 'control modes' idea of Hollnagel [HOLLNAGEL 93], which deals with the relationship between the way in which cognitive processes are co-ordinated and factors such as the time pressure or cognitive demands. The suggestion of this general kind of approach is not new. But could this filtering itself actually be a dormant system, responding to a simple condition? Most likely not: in the more complex situations it would have to perform significant processing, including some aspects of cognitive simulation, in order effectively to determine which warnings to present.
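A minimal sketch of such a filtering layer might look as follows; the workload estimate, priority scale, and thresholds are all assumptions introduced here for illustration, and are not drawn from Hollnagel's work or any existing system. How the workload estimate would itself be obtained is precisely the part that would need some element of cognitive simulation.

```python
# Hypothetical sketch of a filtering layer that mediates dormant warnings
# according to an estimate of the operator's current workload. The workload
# scale, thresholds, and priority values are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Alert:
    message: str
    priority: int  # 1 = advisory ... 5 = immediate danger (assumed scale)

def filter_alerts(pending: list[Alert], estimated_workload: float) -> list[Alert]:
    """Present everything when workload is low; under high workload, pass only
    the most urgent alerts, most urgent first."""
    if estimated_workload < 0.5:        # low workload: show all
        threshold = 1
    elif estimated_workload < 0.8:      # moderate workload
        threshold = 4
    else:                               # very high workload
        threshold = 5
    return sorted((a for a in pending if a.priority >= threshold),
                  key=lambda a: -a.priority)
```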

This brings us naturally forward to latent systems. In contrast with dormant systems, since latent systems have a continuously active monitoring function, they are able to activate themselves in particular situations that may be contextually dependent, or may be difficult to measure directly. A final distinction among latent systems will now be made, which has further implications for the degree of involvement of a model of cognition.

2.5 The dimension danger-centred / cognition-centred

Latent safety systems are illustrated here by the common (and sometimes controversial) topic of Ground Proximity Warning Systems (GPWS). Two dimensions of the task state space are illustrated in Figure 2 for two possible designs of GPWS.

Figure 2. A small part of task space for a GPWS (schematic)

If a GPWS were designed as a dormant system, as defined above, it would be of little use. The most obvious single variable to warn about would be the distance from the ground, but if a system were set to sound an alarm at, say, 500 feet above the ground (horizontal line in Figure 2), it would fall short of being useful in two ways. Firstly, it would give an alarm in many normal situations where it was not wanted (and hence it would be a usual system), and secondly, in conditions with steep, high mountains, a warning at 500 feet above ground level could be too late. A more important factor than simple height above ground is the time before collision, and the simplest reasonable GPWS could be based on this (diagonal line in Figure 2). In order to implement this last design, the GPWS would have to use information both about the height above the ground and about the closing speed. The time to potential impact would be the height divided by the closing speed, so the GPWS would have to monitor both quantities continuously and calculate that time.
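The difference between the two designs in Figure 2 can be made concrete with a short sketch; the 500 ft and 30-second thresholds are illustrative assumptions, not figures from any actual GPWS.

```python
# Illustrative comparison of the two GPWS designs discussed above. The 500 ft
# and 30 s thresholds are assumptions made for the example, not real figures.

FIXED_HEIGHT_THRESHOLD_FT = 500.0    # first design: fixed height above ground
TIME_TO_IMPACT_THRESHOLD_S = 30.0    # second design: time before collision

def fixed_height_warning(height_above_ground_ft: float) -> bool:
    """First design: warn purely on height above ground."""
    return height_above_ground_ft <= FIXED_HEIGHT_THRESHOLD_FT

def time_to_impact_warning(height_above_ground_ft: float,
                           closing_speed_ft_per_s: float) -> bool:
    """Second design: warn when (height / closing speed) falls below a time
    threshold. A non-positive closing speed means the ground is not closing."""
    if closing_speed_ft_per_s <= 0:
        return False
    time_to_impact_s = height_above_ground_ft / closing_speed_ft_per_s
    return time_to_impact_s <= TIME_TO_IMPACT_THRESHOLD_S
```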

This latter GPWS design could be called danger-centred, as the principle is to identify situations that are dangerous or potentially dangerous, and to install a warning system to alert the operator in time to take recovery action to avoid the dangerous state. In terms of the diagram, it looks effective, but the diagram is misleading, because it draws only two dimensions of the task space. In fact, relevant to whether a GPWS should give a warning are factors such as the stage of the flight. If it is known that the aircraft is coming in to land at an airport, then having the GPWS sounding just before landing could be an unwanted distraction. On the other hand, when flying level over high mountains at 30000 feet, the pilot knows perfectly well that the aircraft cannot collide with the mountains: but a steeply rising mountain could still set off this GPWS. In these ways, the second design illustrated is still not sufficiently context-sensitive.

A better system, it might be thought, would be one that took into account more variables, and therefore eliminated the known examples of false positive warnings and missed warnings. But in complex systems, the total number of variables is very large. How can anyone be sure that all the appropriate dangerous situations have been dealt with? How can the designer be sure that what is regarded as dangerous from the point of view of design is also the point at which operators should be warned? We can see a progression from technical to cognitive: in the case of stall warning, the technical definition can be accepted as the correct one. For a GPWS, it is not so clear, because cognitive issues seem to be more important. For a better, more comprehensive system, we need to consider carefully whether cognitive issues should be more in the foreground.

Another consideration that arises with any complex system is that of understandability. If an active unusual system is to be understandable, and to fit into the task, then the basis on which it acts must be compatible with the way that the person sees the task cognitively.

3 Some theory for cognition-centred safety systems

The complete task space for a complex task is very large, because typically there are very many variables which are relevant to the performance of the task, which usually involves a complex system. This huge task space can be divided up conceptually into a number of distinct regions, which are illustrated in Figure 3.

Figure 3. Schematic diagram for cognition in a complex task (see text)

Firstly there is the overall boundary to the space, which is dictated by the static configuration and physical constraints of the controlled system and its environment. Aircraft, for example, have a height ceiling above which they simply cannot fly. Next, there are parts of the state space that are associated with, or lead to, accidents or incidents that are undesirable or even disastrous (one is drawn in Fig. 3). Bounding these dangerous zones, we may expect to see operating rules forbidding the approach to danger. For example, an airframe may be dangerously damaged if a certain g-force is exceeded, and there may be a rule never to exceed a somewhat lower figure. Alternatively, instead of an explicit rule, with advanced flight control systems the system itself may be configured so that certain regions of task space cannot be reached.

But even after excluding these areas that are known to be dangerous, there will typically still remain a very large space of possible states of the system and environment. We may reasonably assume that only some of these states will be usual and familiar to the human in control. Based on this assumption, it becomes apparent that there is an alternative approach to the one, indicated above, of identifying known errors and setting warnings at appropriate places in the task space.

The alternative approach is based on the idea of alerting the human in control when the current state moves away from the usual areas. That is, instead of warning when the system state approaches danger, the system warns when the state moves away from a familiar situation. For this to be feasible, the safety system would have to contain some kind of model of which areas of task space are usual or familiar. So far the discussion has been general; to explore possibilities further, certain assumptions need to be clarified.

The models and simulations of cognition that have been applied to the area of complex tasks have generally had some units of task knowledge linked together in some way. For example, Boy's knowledge blocks [BOY 91] are units of task-related knowledge that are linked by achievement of goals or by recognition of abnormal conditions. Bainbridge's Cognitive Processing Elements [BAINBRIDGE 92, 94] are similarly units of task procedural knowledge that invoke each other in the style of a network. Hollnagel bases his model [HOLLNAGEL 93] on the distinction between competence and control, and there are units of competence which are linked together according to the control mode. Given these examples, it is not unreasonable to take as an assumption that the usual, familiar parts of task cognition can be represented by discrete, separate parts of task space.

The next assumption, based on the intention of the model, is that there is a correspondence between the parts of task space that are recognisably usual or familiar, and units of structure within human task cognition, considered both for long-term knowledge and for immediate execution. Clearly there will never be a complete representation of a particular human's task knowledge, but the important question is whether, for the areas of the task that are investigated, human task cognition can be represented in this way adequately: adequately for modelling the information use and actions relevant to particular situations, and the changes between different perceived situations, following internal or external cues.

A full understanding of Figure 3 depends on this assumption, where the ovals are intended equally to represent both familiar, usual regions of the task in terms of system variables, and the units of immediate cognition, which could be called cognitive task units or cognitive contexts. For a complex task there may be thousands of these units. The arrows represent the usual transitions between these units, where usual includes both normal operation and practised abnormal situations. The extensive space between the cognitive task units represents that part of the task space that is not usually encountered (in reality or training) and is not familiar, but nevertheless is not identifiably 'unsafe' in the sense that it would be if it lay across some boundary set to guard against a known dangerous part of task space. In terms of this diagram, we can identify kinds of error which would not be picked up by a danger-centred safety system. Warnings about these errors will then be made the basis of the cognition-centred safety system.

The model diagram is misleading in that it represents task space as one continuum. Cognitively, the dimensions of the immediate task space vary with the situation. Thus, from a cognitive point of view, each oval in the diagram properly represents not so much a part of a fixed global task space as a separate task space with its own small number of dimensions. The relevant variables in each unit are likely to differ, and there is no guarantee that any two individual operators will have the same units, or that the variables relevant to those units will be the same: thus they are personal. Hence a good name for what is represented as an oval would be a 'personal immediate cognitive task unit', or PICTU for short. This abbreviation will be used from here on, for brevity and to avoid confusion with other related concepts.
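One possible way of representing a PICTU in software is sketched below; the field names and the interval representation are assumptions made here for illustration, and are not part of the models cited above.

```python
# Hypothetical sketch of a data structure for a PICTU: a small set of relevant
# variables with their usual ranges, the cued transitions to other PICTUs, and
# the actions expected within the unit. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class PICTU:
    name: str
    # each relevant variable mapped to the (low, high) range that counts as usual
    usual_ranges: dict[str, tuple[float, float]]
    # cue description -> name of the PICTU that the cue should lead to
    transitions: dict[str, str] = field(default_factory=dict)
    # actions (physical or verbal) expected or allowed within this unit
    allowed_actions: set[str] = field(default_factory=set)

    def state_is_usual(self, state: dict[str, float]) -> bool:
        """True only if every relevant variable is present and within range."""
        return all(var in state and low <= state[var] <= high
                   for var, (low, high) in self.usual_ranges.items())
```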

Two kinds of basic error category may be identified in terms of this view of task cognition.

  • The operator may let the system state go outside the usual, familiar regions, and he or she may be aware of this or not. This may be caused, for example, by making an unusual control action, or failing to make a usual one.
  • The operator may mistake a transition between two PICTUs, either by changing the personally perceived PICTU when the task situation does not call for a change, or by failing to change the PICTU when the task situation requires a change. This is likely to be caused not directly by an erroneous control action, but by the misidentification of a cue to change context. A transition could be missed when a task goal is reached, but also, in a multi-tasking situation, the transitions between the two or more tasks may be mistaken while they are all in progress.
  • Further consideration leads to the identification of a compound error combining these two basic types. An operator may miss a transition; this may lead to the wrong variables being monitored, and the task situation could wander seriously away from the usual without the operator knowing.

Let us return briefly to the classification of safety systems introduced above. If we can in fact, as suggested here, identify the usual regions of task space, then an autonomous safety system that gives a warning as soon as the state diverges from them has as tight a boundary as an unusual system could possibly have: the warning system would stand at the exact limit of the usual. Because that boundary cannot be defined in terms of simply measurable quantities, such a system must be latent rather than dormant, in terms of the definitions above.

Such a safety system could also be regarded as ensuring that the system state stays within an envelope, but instead of the envelope being technical, as is the case in modern avionics, it would be cognitive: an envelope around the known, the usual, the familiar. Safety systems based on these principles would deserve to be called cognition-centred.
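Under the assumptions above, the monitoring behind such a cognitive envelope might look roughly as sketched below, reusing the hypothetical PICTU structure from the previous sketch; the logic is a simplification intended only to show where the two basic error types identified earlier would be detected.

```python
# Hypothetical monitoring step for a cognition-centred system, reusing the
# PICTU sketch above. It flags the two basic error types identified earlier:
# the state leaving the usual region, and a cued transition that has not been made.

def check_state(current_pictu: PICTU, state: dict[str, float],
                observed_cues: set[str]) -> list[str]:
    alerts = []
    # Error type 1: the system state has drifted outside the usual, familiar region.
    if not current_pictu.state_is_usual(state):
        alerts.append(f"state has left the usual region of '{current_pictu.name}'")
    # Error type 2: a cue for a transition has occurred, but the operative
    # context is still the old unit, so the transition may have been missed.
    for cue, target in current_pictu.transitions.items():
        if cue in observed_cues:
            alerts.append(f"cue '{cue}' suggests a transition to '{target}'")
    return alerts
```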

4 SACHE: a concept for a cognition-centred safety system

Only an outline of the possible implementation of this class of system can be given here. The purpose of this section is no more than to give an outline specification of the kind of system that could be classified as cognition-centred. In particular, the question of whether it would be worthwhile implementing such a system is left open: this paper does not attempt to evaluate that, but is limited to exploring the possibilities in a way that clarifies the various options.

Leaving aside the definition of situation awareness itself (which is problematic [SARTER 91]), in terms of this model, loss of situation awareness can be taken as either a missed transition between PICTUs or the deviation from a PICTU without noticing what is happening. To counter loss of situation awareness, pilots would need alerting when their operative PICTU did not correspond to the one that was appropriate for the actual situation, that is, either when a transition was missed or when the situation went out of usual bounds. Hence, the name given here to the safety system concept is SACHE: Situation Awareness Correspondence between Human and Engineered system. 'SACHE' refers to a particular cognition-centred design concept.

To implement such a system would need a very large amount of work. It would require at least the following (a minimal sketch of how the last two requirements might fit together is given after the list):

1. dividing up the task space to give boundaries of recognisable units, PICTUs;
2. knowing what prompts the transition from one PICTU to another;
3. knowing what actions (physical and verbal) are expected or allowed within the different PICTUs;
4. tracking PICTUs, actions, and words;
5. issuing a warning when there is evidence of a lack of correspondence.
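As promised above, the following rough sketch shows how requirements 4 and 5 might fit together, reusing the hypothetical PICTU structure from the sketches in Section 3; it is an assumption-laden simplification, not a design.

```python
# Hypothetical sketch of the tracking and warning steps (requirements 4 and 5),
# reusing the PICTU structure sketched in Section 3. Purely illustrative.

def sache_step(expected_pictu: PICTU, state: dict[str, float],
               observed_actions: set[str]) -> list[str]:
    warnings = []
    # Evidence of a lack of correspondence: actions not expected in this unit.
    unexpected = observed_actions - expected_pictu.allowed_actions
    if unexpected:
        warnings.append(f"actions {sorted(unexpected)} are not expected "
                        f"within '{expected_pictu.name}'")
    # Evidence of a lack of correspondence: state outside the unit's usual bounds.
    if not expected_pictu.state_is_usual(state):
        warnings.append(f"system state is outside the usual bounds "
                        f"of '{expected_pictu.name}'")
    return warnings
```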

The units required from the analysis (the actions, words, and transitions) would not necessarily be the same for different individuals, so there may be a choice between implementing a system for one individual and implementing a less precise system aimed at the points common to individuals. It would be a mistake simply to base this analysis on standard operating procedures (SOPs), because there are often usual situations where people deviate from SOPs, and these are not necessarily dangerous; a more careful analysis of the procedures actually used needs to be made. There seems to be no agreed method for the knowledge acquisition that would provide the basis for such a system, which could be called by a term that has appeared from time to time in the past [GRANT 91]: cognitive task analysis.

The difficulty with tracking PICTUs and actions arises from the obvious fact that one cannot directly detect, from the aircraft's engineered systems, what the pilot is thinking, and therefore cannot directly detect whether there is loss of situation awareness or not. A SACHE system needs to infer from system-observable behaviour what the situation is according to the pilot. Eye-tracking, if feasible, would make more of the operator's information-gathering behaviour system-observable. Even with the best practical information gathering and processing, it is still quite possible that the aircrew would need to give more information to the SACHE system for it to function effectively. It would have to be seen in practice how much load this would be, or, alternatively, how much the system would be compromised by the unnecessary warnings that would arise if it had only an unaided representation of pilot knowledge.
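One very simple, and purely hypothetical, way to infer which PICTU best matches the pilot's observable behaviour is to score candidate units by how well the observed actions and the current state fit them, as sketched below; the scoring and its weights are assumptions made only to illustrate the idea.

```python
# Hypothetical sketch of inferring the pilot's apparent PICTU from
# system-observable behaviour, by scoring how well each candidate unit fits
# the observed actions and the current state. The weights are arbitrary.

def infer_pictu(candidates: list[PICTU], state: dict[str, float],
                observed_actions: set[str]) -> PICTU:
    def fit(p: PICTU) -> float:
        action_fit = len(observed_actions & p.allowed_actions)
        state_fit = 1.0 if p.state_is_usual(state) else 0.0
        return action_fit + 2.0 * state_fit    # weights are assumptions
    return max(candidates, key=fit)
```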

Other work has also recognised the need for this kind of system to have the ability to model the intentions of the pilot (e.g. [PREVOT 93]), but details of how this is to be done are harder to obtain. In any case, some of the more detailed knowledge is likely to come only with trials of a prototype system.

There are various ways in which a SACHE system could alert the pilot to the situation if the need arose. Perhaps the most sophisticated would be first to reason whether the pilot's neglect of one piece of information could explain the pilot's actions observed by the system. If this were the case, it is that information in particular that would be highlighted. An alternative way would be possible for those PICTUs that were recognised by the system and by the pilots, and which had been given an agreed, readily understood, name: SACHE could inform the pilot of the (named) PICTU that SACHE reckoned was appropriate, and which the pilot's actions did not correspond to.

Implementing a SACHE system needs a cognitive simulation framework that can adequately represent the PICTU structures and their transitions. Current research is proceeding to assess existing frameworks, such as those mentioned above [BAINBRIDGE 92, BOY 91, CACCIABUE 92, HOLLNAGEL 93] and to improve on them as necessary, producing a model which as far as possible subsumes their major characteristics.

It is perhaps worth noting at this point that validating the kind of cognitive model that would underlie a SACHE system is troublesome. Using the model as a simulation is a good strategy, but if a simulation responds differently from a human in the same situation, it is still difficult to know whether it is just the elicited knowledge that is at fault, or whether there is a problem with the architecture of the cognitive system.

In practice, a SACHE system by itself can only help to keep a pilot within the boundaries of his or her expertise. If unusual situations arise which cannot be easily brought back within the bounds of the usual, there must be safety systems to deal with these situations as well, which may not be based on the same level of cognitive model as the SACHE system. This would involve integrating the SACHE system with other, more traditional systems, in a way that would use the SACHE system only as the first line of defence.

5 Conclusion

This paper has proposed a broad classification scheme for safety systems, and explored the role of models of cognition, in particular for cognition-centred systems. It is hoped that the examples and illustrations give a sense both of the interesting nature of the possibilities, and also of the challenges that accompany the attempt to create and implement such a system. The argument points to the conclusion that safety systems to counteract human errors in the performance of complex tasks ideally would be of this cognition-centred type. The safety system application is one which challenges the forefront of research in models of cognition.

In terms of the proposed classification, passive systems may use cognitive models at the design stage, but this is limited. Responsive systems share problems with purely human systems. Usual systems suffer from operator adaptation, and it would be very difficult to model this. Dormant systems, and danger-centred latent systems, do not take cognition into account, but they could use a model of human processing capacity and workload. Cognition-centred systems need the most sophisticated model of cognition, aspects of which have been outlined. Whatever the progress towards these new kinds of safety system, this classification and analysis of safety systems should give some guidance and clarity to anyone considering the application or relevance of models of cognition to safety systems.

There appears to be much work to do in this area, and among the challenges touched on in this paper are the following.

  • Methodology for the kind of cognitive task analysis proposed needs much further development.
  • In particular, approaches need to be explored for verifying the cognitive models that result from the cognitive task analysis.
  • It is known that individual operators differ in their cognitive task knowledge for the same complex task. It remains a challenge to find how much can be covered by a common model, and how to relate a common model to individually variant extensions. This may well vary between different domains.
  • If it emerges that there is little in common and much individual difference, there will be a practical challenge to develop methods for eliciting individual variations that are not prohibitively time-consuming. This may require development of machine learning techniques.
  • The investigation of other practical questions is a challenge awaiting implementation of such a system, and its testing in simulator environments. There are many more such questions than the two given here.

  • It is not known how much extra information a safety system would need to be able to act effectively in the way described for the SACHE concept.
  • Nor is it known what is the best manner for a SACHE system to give warnings.
In identifying these challenges, this paper represents a first step towards investigating them and then developing systems built on that basis.

Acknowledgements

Anne-Laure Amat and Gordon Rugg have given detailed useful criticism which has led to several improvements. Thanks also are due to Peter Ayton for pointers towards relevant areas of literature, and to an anonymous reviewer for criticism.
