The contextual modularity of complex cognition

Simon Grant, February 1994

Abstract

Modularity in models of complex cognition can be achieved through either functional or contextual differentiation. Complementing Cooper's treatment (this volume) of functional modularity, contextual modularity is examined here. The threads in cognitive science that bear on contextual modularity have, to varying degrees, on the one hand clearly defined cognitive modules, and on the other clearly defined means of coordinating or articulating those modules. The present paper argues that the two have not been successfully integrated, and proposes a model embodying that integration. It recapitulates an empirical study which viewed cognition as having a clearly contextually modular structure, where at any time the user or operator is in one or other of the contextual modules. Each module has specific rules governing decisions or actions, a specific cognitive representation of the relevant variables in the context, and specific sources of information which are used in the derivation of those variables. How are these contextual cognitive modules coordinated? The hypothesis here is that there are two radically different kinds of transition from one module to another: a learned, context-dependent mechanism, and an associative mechanism dealing with situations that have not often been met before. Since this is a new approach, it is shown how it may deal with some of the outstanding problems in earlier work. A computational model of these processes is under construction, using the language SCEPTIC, already widely used for cognitive modelling.

1. Introduction

Models of cognition able to deal with the details of information use are particularly useful and necessary where a large amount of information is needed to perform a task, and where the pressures on task performance allow the consideration of only a small part of that information at once [Grant and Mayes 1991]. Such complex tasks often allow inter-individual differences in performance, and therefore such models as are made of task knowledge generally address the overall structure of that knowledge. In particular cases, researchers may attempt to fill in the detail, and this is in principle easier if a framework is first established. Should such a framework have some kind of modularity, and if so, what kind?

Modularity in models of complex cognition can be achieved through either functional, or contextual, differentiation. Since the main focus of this paper is on contextual modularity, first functional modularity will be discussed to clear the ground.

1.1 Functional modularity

It is common practice to divide cognitive faculties into functional modules, not unlike the block diagrams that have been used in the design of the structure of computer software, or the very common data-flow diagrams used similarly in structured systems analysis. Where researchers come to divide up cognitive function by way of top-down decomposition, it is not surprising that such a structure should result. I will give two examples here of predominantly functional modularity in models of cognition.

An interesting example of a functionally modular model of cognition is Barnard's Interacting Cognitive Subsystems [Barnard 1987]. The subsystems (visual, propositional, motor, etc.) are described as independently operating, connected by a data-bus-like structure. Widespread academic discussion of this model suggests that it is a promising way of giving modularity to cognition, but it does not address all the questions necessary to explain complex task performance. Quite apart from anything else, there is no attempt to explain why some subsystems should be used in particular situations, while others may be used in similar situations.

The work of Shallice and others [Shallice 1988] gives another view on modularity. Here a main motivation is to form models capable of explaining phenomena of impaired cognition. Clearly, if cognition is impaired in a particular way across all cognition, we are looking for an explanation in terms of a functional module, which may be associated with a particular location in the brain. This does not address, however, the performance of perfectly normal and competent people who happen to do things they might later regret. That needs a contextual view.

It should be clear that most models of cognition have modularity both of function and of context. The difference between the two groups identified here is only in the emphasis given to one or other.

1.2 Contextual modularity

In contrast with the functional view, contextual modularity here is taken to mean the division of cognitive structure into modules of similar functionality, but differing context, such that each module performs an analogous role in its own specific context.

Contextual modularity has no very obvious correlate in general-purpose computer systems, where large amounts of immediately accessible memory, along with virtual memory management, mean that a great deal of information is simultaneously available. But from basic knowledge about memory (e.g., popularly, [Baddeley 1983]), it is clear that humans do not function in this way. The small size of the typical contents of human working memory suggests a small basic unit, or module, of knowledge, since it is implausible to have all of complex cognition in one large unit. The view that human knowledge structures are divided into such small units is the basic assumption underlying a contextually modular view.

There are many strands to the modelling of cognition that could be seen as coming from a contextually modular position. Taking some of the seminal ideas from the literature, we could consider any schema theory (e.g., [Bartlett 1932]) to be a form of this division, as could frame theories (from Minsky) and script-like theories (from Schank). On the other hand, theories such as Anderson's ACT* have little in the way of contextual modularity. In the case of Soar the only true contextual modularity is in terms of separate ``problem spaces''.

There is clearly no close agreement among cognitive models on the issue of contextual modularity. In view of this, a useful starting point for contextually modular modelling would be twofold.

  1. The clear definition of units or modules of some kind: this concerns the nature of the modules.
  2. A mechanism for the interrelationship of these units: this includes the shifting between them, but eventually also their creation and modification.

A challenge for any cognitive model is to account not only for the regular performance of human skill, but also for the less regular features of human performance, including errors, interruption, and biases. What is argued in this paper is that, in particular, this requires attention to be given to the transitions between contextual modules, and that this could be a way forward to more thorough and powerful models, particularly of complex task performance.

2. Analysis of previous themes in contextual modularity

Introduced above was the idea that there are two facets to modularity: the nature of the modules by themselves; and their interrelationship. In the case of contextual modularity, the question of the nature of the modules includes discussion of exactly what is contained within or associated with a module, with respect to both quantity and quality. The interrelationship of the modules seems to be a more difficult question. Unlike the case in functional modularity, where the functional relationship between the modules is part of the reason for their existence, for contextual modularity we must consider how, during cognition, the modules are selected or replaced, since, if they perform essentially the same function, not all of them (possibly only one) can be primarily operating at once.

With a model of contextual modularity, there is the question of whether these two facets are addressed, and what the balance is between them. Here, we shall consider each facet in turn, relating them to classic models of the past.

2.1 Models where the nature of the modules is clearly defined

Bartlett [1932] is one of the originators of a view of memory as having schemata, where each schema is a known pattern. Bartlett's schema is ``an active organisation of past reactions'', but he gives no indication of how the schemata are switched between in the course of complex task performance. In a similar vein, Schank's scripts and MOPs (memory organisation packets) have been based on the understanding of stories, rather than the performance of tasks [Schank and Abelson 1977; Schank 1982]. The scripts or packets themselves are clearly defined, but the ideas on switching are very weak, and the theories do not explain complex task performance. Minsky's concepts of frames and agents [Minsky 1975, 1987] also clearly have much of the same character as contextual modules. But again here, much more effort is put into the delineation of the structure of the frames and agents, and very little into detailing their interrelationship.

In an attempt to produce models of behaviour in a complex task using rule-induction techniques, Grant [1990] investigated dividing up human actions according to the information that was available at the time the actions were taken. This led to a clearly contextually modular structure, where at any time the user or operator is in one or other ``context'', as the contextual modules were referred to. Each context has specific rules governing decisions or actions, a specific cognitive representation of the relevant variables in the context, and specific sources of information which are used in the derivation of the relevant variables. In the study, the mechanism for transition between contextual modules was not very clearly defined, but it was suggested at least that there may be learned cues, which had to be related to the currently observable quantities being monitored at the time.
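
To make this characterisation concrete, the structure of a contextual module might be sketched as follows. This is a minimal illustration, in Python for concreteness; the class, the field names, and the approach-to-landing example are invented here, and do not come from the original study.

    # A minimal sketch of a contextual module ('context'): local rules,
    # a local representation of relevant variables, and the information
    # sources from which those variables are derived. Names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Context:
        name: str
        information_sources: list                            # where variables come from
        representation: dict = field(default_factory=dict)   # relevant variables
        rules: list = field(default_factory=list)            # (condition, action) pairs

    approach = Context(
        name="final_approach",
        information_sources=["altimeter", "airspeed indicator"],
        representation={"altitude_ft": 900, "airspeed_kt": 140},
        rules=[(lambda r: r["altitude_ft"] < 1000, "lower landing gear")],
    )

    # Only the rules of the current context are considered:
    actions = [a for c, a in approach.rules if c(approach.representation)]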

The difficulty for any model lacking a clear account of module interrelationship is that, although it might simulate a part of a task well enough, it would rapidly get lost when trying to switch between appropriate modules. In the dynamic control literature, this is sometimes referred to as ``situation awareness'', and approaches to modelling it are not immediately obvious.

2.2 Models that have clearly defined interrelationships

In contrast, here we see the other side of contextual modularity. The examples chosen are working computational models, and this is not surprising: in order to make a working computational model one must have a clear execution model, including effective transition between whatever contextual modules there are.

Anderson's ACT* and PUPS [Anderson 1983, 1989] have clearly defined models of learning, but not such clear models of the execution of tasks using the learned knowledge. Procedural memory, modelled as production rules, is not explicitly divided into contextual modules: as in many production systems, all of the productions are considered at each cognitive cycle. The semantic network of declarative memory works on the basis of spreading activation, rather than context. This leaves the contextual granularity at the level of the single production or semantic unit, which is smaller than is suggested by the previously cited models.

Soar [Newell 1990] has a clear execution model within a problem space. The model is highly generalised and unified, and this is made explicit through stated assumptions such as the problem space hypothesis and the universal subgoaling hypothesis. Unfortunately, this strictly tree-structured hierarchy does not appear to correspond well with human complex task performance (see, e.g., [Bainbridge 1993]). Soar has a concept of context, which is associated with the goal stack, and this is the context in which a particular production fires or not; but it is not a cognitive context in the sense of being responsible for context effects such as priming. Chunking in Soar concerns the replacement of a sequence of cognitive processing with one cognitive operation, which, again, is not the same as a contextual module. Perhaps the closest correspondence with the contextual module is the problem space itself, as there is a different problem space for each problem; but Soar is devoted primarily to the mechanisms of problem solving and learning within a problem space, and hence moving between problem spaces is given little attention.

The current models specifying the interrelationship of contextual modules only really deal with predictable behaviour, not exceptions. One difficulty appears to be that emphasis on switching between the modules makes the nature of the modules less clear. This may be because one effective switching and control mechanism, once devised, is capable of dealing with a variety of data, so the format of the data is less obviously a problem. Without a clear commitment to cognitive plausibility, it is easy to choose a representation that is computationally convenient rather than cognitively accurate.

3. Putting together nature and interrelationship of modules

The foregoing discussions of modular models may be put together thus. If, in a model, the nature of the module is the focus of the theory, the interrelationship between the modules can easily be glossed over, and vice versa. Selection of a particular module nature may easily have consequences for the associated model of interrelationship, and again, vice versa. To illustrate this, consider two modular theories with different-sized modules: the transitions between the modules would not be the same. Again, the transitions between modules would be very different depending on whether the modules themselves contained information concerning the transitions, or whether all transitions were managed by a separate function. This latter point will be taken up shortly.

What is needed are theories and models that consider equally the nature of the modules and their interrelationship. To be cognitively plausible, rather than just an exercise in AI, the modules must be of such a nature and size as is compatible with known contextual effects; and the modules must be interrelated by transitions which are compatible with what is known about human switching of context. This switching may be particularly significant in the discussion of ``human error''.

The work of Lisanne Bainbridge is an important step towards just this kind of model, dealing with both nature and interrelationship. Bainbridge, in her PhD study [Bainbridge 1972, 1974], developed on paper a model of the process-control skill of a steel-worker performing a realistic simulation task. This model takes the form of a complex flow-chart, in which the cognitive processes are detailed to a level that allows estimation of the load on what she terms ``working storage''. The cognitive processes are divided into what Bainbridge now terms ``routines'' and ``sequencers'', both of which correspond to recurrent patterns of action (and associated verbal protocol). The interrelationship between these is very straightforward: the routines are called by the sequencers, and return a value to them along with control. An example of a routine, in the steel-works task, is the decision of which furnace to cut the power to. In contrast, the sequencers do not return control; rather they pass control on to another sequencer.
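
The calling discipline Bainbridge describes can be sketched in code. The following fragment, in Python with invented names and task details, illustrates only the control structure: routines return a value and control to their caller, while sequencers never return control, but hand it on (modelled here as a trampoline loop).

    # Routines return value and control; sequencers pass control on.
    def which_furnace_to_cut(furnaces):
        # A routine: a local decision, returning a value to its caller.
        return min(furnaces, key=lambda f: f["priority"])["name"]

    def monitor_power(state):
        # A sequencer: it never returns control to a caller; instead it
        # names the next sequencer to take over.
        if state["demand"] > state["limit"]:
            state["cut"] = which_furnace_to_cut(state["furnaces"])
            return adjust_power
        return monitor_power

    def adjust_power(state):
        state["demand"] -= 3            # details of execution omitted
        return monitor_power            # pass control on, do not return it

    def run(sequencer, state, steps=6):
        for _ in range(steps):          # control moves sequencer to sequencer
            sequencer = sequencer(state)

    run(monitor_power, {"demand": 12, "limit": 10,
                        "furnaces": [{"name": "A", "priority": 2},
                                     {"name": "B", "priority": 1}]})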

This style of model is still, it seems, unique in the analysis of complex tasks. It offers a plausible model of a particular operator's usual performance, in a way that takes into account cognitive capacity. However, it does not deal with the kind of module transition that is unexpected. For this, we have to suppose another mechanism, and this invites a thorough look at how the nature of contextual modules may interact with these two kinds of transition between them.

4. Two mechanisms for transition between contextual modules

How are these cognitive contextual modules coordinated? Everyday experience of human tasks yields two important and relevant observations.

The first is that, as people practise performing a task, they are increasingly able to move smoothly and without apparent effort between different parts of the task. Using Rasmussen's ideas [Rasmussen 1986], we could say that information processing becomes increasingly dominated by the skill-based level, where the information is perceived as signals. Specific cues are learned for many parts of a task, and we may well suppose that this includes moving from one stage of the task to another: one contextual module to another. What becomes clear in many tasks is that the cue itself does not determine the destination module. To take a very common example, the same bell may, dependent on context, cue the human into very different activities. This strongly suggests the involvement of the contextual modules themselves.

The second observation is that there are many situations that may be imagined where an unexpected event, or the observation of an unexpected value, may completely interrupt a task, and replace the actions that were to be carried out by very different ones. An explosion might be a good example in many settings.

The hypothesis here is that there are two fundamentally different kinds of transition from one contextual module to another, corresponding to the two observations above: firstly, a learned, context-dependent mechanism; and secondly, an associative mechanism dealing with situations that have not often been met before.

4.1 A learned mechanism for change of context

The essence of the contextually modular view is that regularities appropriate to certain contexts are stored together, and are accessible together. Where decisions or actions are taken, the rules for these would be included in the module. It is only a short step from here to including the regularities governing transitions between modules. There could be, for example, rules of the same form as decision rules. This kind of transition rule would be reliable, and suitable for automatising in the course of development of a skill.

One advantage of having transition rules associated with contextual modules is that the rules can be much simpler than they would be in a context-free system. In a particular context, taking the next left turn when driving may make sense; but clearly there cannot be a general rule active to take the next left turn. A driver would not get very far on that basis. If one wished to get the same effect in a non-contextual way, the same instruction would have to be stated much more verbosely, to prevent the rule from firing at an inappropriate moment.

Transition rules of this kind are more like ``goto'' instructions than like procedure calls, and for this reason, they may be imagined in the form of a transition network, rather than as a tree-structured hierarchy of goals. If a graphical notation such as this is to be used, it must be remembered that a contextual module has internal structure, and is not a single atomic entity. Arrows may then be drawn starting from a particular place in a module, which would be associated with a condition being fulfilled, and leading to the next appropriate module.
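
A minimal sketch of such a network, with invented driving-task names, might look like this in Python. Each module carries its own transition rules, so a rule such as ``take the next left turn'' exists only within the context in which it makes sense.

    # Learned transitions as module-local rules forming a transition
    # network rather than a call hierarchy. All names are illustrative.
    modules = {
        "drive_to_junction": {
            "actions": ["keep lane", "match speed"],
            "transitions": [
                # (condition on the local representation, destination)
                (lambda s: s.get("junction_visible"), "turn_left"),
            ],
        },
        "turn_left": {
            "actions": ["signal", "slow down", "turn"],
            "transitions": [
                (lambda s: s.get("turn_complete"), "drive_to_junction"),
            ],
        },
    }

    def step(current, state):
        # A learned transition fires like a goto: control is handed on,
        # not returned, and no global rule base is searched.
        for condition, destination in modules[current]["transitions"]:
            if condition(state):
                return destination
        return current

    print(step("drive_to_junction", {"junction_visible": True}))  # turn_left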

The learned mechanism for transition between contextual modules is executed by transition rules that are specific to each module. But the problem with a learned mechanism is that it cannot deal with the multiplicity of events that would actually be a radical change of context to a human. This is not unlike the frame problem. When driving, it is clearly implausible to suggest that all the events that could stop one driving were explicitly encoded. Hence the hypothesis of an alternative mechanism for just such unexpected situations.

4.2 An associative mechanism for change of context

Where a specific transition rule has not yet been learned, there must be a mechanism to get the human out of inappropriate perseverance within that module. The model framework suggested here is that this happens in two stages. In the first stage, the human detects some relevant condition that makes the current state lie outside the normal operational envelope. Note particularly that the terms in which the current state is described---the local representation---differ between modules. The exact conditions in which this happens probably vary between individuals and between situations, but it is easy to think of several events that would stop us from driving normally: from strange loud sounds from the engine, to the sight of a tree fallen across the road ahead, to the perception of other drivers not following expected behaviour rules---driving on the wrong side of the road, for instance.
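
The detection stage might be sketched, speculatively, as each module carrying expected ranges for the variables of its local representation. The variables and thresholds below are invented for illustration only.

    # First stage of the associative mechanism: detect that the current
    # state lies outside the module's normal operational envelope.
    def outside_envelope(module, state):
        for variable, (low, high) in module["envelope"].items():
            value = state.get(variable)
            if value is None or not (low <= value <= high):
                return True
        return False

    driving = {"envelope": {"engine_noise_db": (40, 75),
                            "lane_offset_m": (-0.5, 0.5)}}
    # A strange loud sound from the engine falls outside the envelope:
    assert outside_envelope(driving, {"engine_noise_db": 95,
                                      "lane_offset_m": 0.1})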

In these circumstances, the human is aware that the situation may be inappropriate for the set of rules that are currently in operation, but no learned cue has been encountered that would have led to another contextual module in a routine way. Something must be done, however, and the problem for the human (and our models of the human) is how the next module is selected. This may be envisaged in terms of the state transition network introduced above. These transitions come from recognising that one has moved out of the range of one module, but not hit on a prepared path that leads to another.

At this point, an associative mechanism would fit the requirement. It could be that the known states of affairs are matched against the characteristics of possible other contextual modules: these characteristics could include the salience, recency, frequency of encounter, and even associated affect of the modules, as well as the match of their typical features with the features of the current situation.

Because of the range of features that may be used, and their variability, this associative mechanism is likely to be unreliable, and to give different results dependent on chance circumstances. In this way it is very different from the learned mechanism. However, as the same situation is encountered repeatedly, the association would become routine, and a transition rule would be learned.

The distinction between the two transition mechanisms is thus not yet entirely clear. What has been described here are the two extreme cases: of a well-learned transition rule; and of a situation that has never been encountered before. There must also be intermediate stages of some kind. This invites future work to be done in testing whether these mechanisms do in fact represent what happens, and if so, how one form of transition develops into the other.

5. Computational modelling

The language SCEPTIC has been used to begin to specify and implement the ideas presented above. Representing facts and rules in such a PROLOG-like language is commonplace, and causes little difficulty. Three main areas of challenge to the computational modelling of the concepts introduced in this paper will be briefly discussed here.

  1. The overall structure of rules, etc.
  2. How, in particular, to model the two types of transition between contextual modules.
  3. How to implement the deletion of one set of rules and the introduction of another, which accompanies transition between contextual modules.

5.1 Overall structure

There are currently envisaged to be three kinds of rule; this gives much extensibility and flexibility, as well as providing a focus for rules particular to the task. A sketch in code follows the list below.

Task rules.
These correspond to the kind of rules that operators would naturally suggest, and are described in the (probably high-level) language familiar to the operators. They are particular to contextual modules, rather than being generally available, as in many production rule systems. The format is expected to be some sort of condition-action pairs, as is commonplace in cognitive modelling, with the extra proviso that the conditions and actions are specified in arbitrarily high-level concepts. It is highly plausible that task knowledge can be easily transferred to situations where the evidence or execution differ: hence the need for separate high-level task rules.
Evidence rules.
These are where the conditions are defined in terms of the evidence which relates to each one. Taken to lower and lower levels, this would lead back to the field of perception, where there is a great deal of current research.
Execution rules.
These relate high-level actions to their method of execution. To analyse these completely would need study of the physiology of motor actions: walking has for example been extensively researched in this way.
The task rules are specific to a contextual module, whereas the evidence and execution rules could be shared, or there could be specific variants of a general rule in particular modules.
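
As a rough illustration of how the three kinds of rule might be kept separate, consider the following sketch, in Python for concreteness. The task content and names are invented here, and nothing in it is prescribed by the model: the point is only that the task rule is stated in high-level terms, with evidence and execution rules mediating between those terms and the world.

    # Evidence rule: derives a high-level condition from lower-level evidence.
    def furnace_too_hot(evidence):
        return evidence["thermocouple_reading_c"] > 1600

    # Execution rule: expands a high-level action into lower-level steps.
    def cut_power(effectors):
        effectors.extend(["open breaker", "log the cut"])

    # Task rules: condition-action pairs in the operators' own terms,
    # local to one contextual module.
    task_rules = [(furnace_too_hot, cut_power)]

    def cycle(evidence, effectors):
        for condition, action in task_rules:
            if condition(evidence):
                action(effectors)

    steps = []
    cycle({"thermocouple_reading_c": 1650}, steps)  # steps now holds the execution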

5.2 Transitions between modules

Learned transitions are modelled simply as task rules: when particular conditions are met, the transition rule fires, and the set of rules for a different module is put in place.

Associative transitions are more difficult, and await fuller modelling. The current idea is to have some kind of quantitative activation value for each contextual module, which could involve affect, recency, familiarity, priming, etc. This would be combined with a pattern-matching mechanism, which brings up contextual modules associated with the current perceived and memorised state of affairs. The module selected would be the one scoring highest on the combined criteria of match and activation.
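
A first approximation to this selection might be sketched as follows, again in Python with invented activation values and features; a serious model would need empirically motivated weightings.

    # Associative selection: combine each module's activation (standing in
    # for recency, familiarity, affect, priming) with the match between its
    # typical features and the currently perceived state of affairs.
    def feature_match(module, situation):
        typical = module["typical_features"]
        return len(typical & situation) / len(typical)

    def select_module(candidates, situation):
        return max(candidates,
                   key=lambda m: m["activation"] + feature_match(m, situation))

    candidates = [
        {"name": "drive_on", "activation": 0.9,
         "typical_features": {"engine_ok", "road_clear"}},
        {"name": "emergency_stop", "activation": 0.2,
         "typical_features": {"loud_bang", "smoke"}},
    ]
    # An unexpected situation pulls up a rarely used module:
    print(select_module(candidates, {"loud_bang", "smoke"})["name"])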

These mechanisms all have some roots in current modelling, so the suggestions here do not rely on completely new ideas; but nowhere else are they put together in this way.

5.3 How to implement transitions

This is an interesting problem, tied to the nature of current languages. The effect wanted is rather like a PROLOG retract of a whole set of rules and facts, followed by asserting the ones appropriate to the next module. Ideally, when there is a good modelling system that is object-oriented as well as dealing with logic, this may be easier. Currently it is the object of experiment.
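
In the meantime, the intended effect can be approximated in a conventional language. The sketch below, with invented rule content, simply replaces the whole active rule set wholesale, which is what the retract-and-assert sequence is meant to achieve.

    # The analogue of retracting one module's rules and asserting the
    # next module's: the active rule set is replaced wholesale.
    rule_base = {
        "monitoring": ["watch temperature", "watch power demand"],
        "power_cut": ["choose furnace", "open breaker"],
    }
    active_rules = []

    def enter_module(name):
        active_rules.clear()                  # 'retract' the current set
        active_rules.extend(rule_base[name])  # 'assert' the new one

    enter_module("monitoring")
    enter_module("power_cut")  # a learned transition rule would call this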

The ideas here are clearly still at an early stage of development, but it can already be said that, in the limited time devoted to this modelling and implementation effort, the use of SCEPTIC has been very fruitful in guiding the model towards clarity and consistency: more so, perhaps, than if standard PROLOG had been chosen as the basis for implementation, and certainly more so than if Lisp or a lower-level language had been employed.

6. Discussion

The issues raised in this paper suggest two complementary questions that may be asked about any model of cognition in complex tasks. If a theory has yet to be implemented computationally, one may ask how it would be done. On the basis of current examples, this is likely to reveal major problems. On the other hand, if a model is already executable, one may ask, do the data structures of the model tally with what we know about human memory and skill? Again, current models have tended to invite the answer, no.

The most important issue to keep in view is that of cognitive plausibility. The claim in this paper is that to get a good model of complex tasks, one needs both the right modularity and the right interrelationship between those modules, and that the most fruitful way of addressing these two is together. Whatever the virtues or otherwise of the model presented here, the discussion highlights several important issues that need further work, and this paper will have served its purpose if these issues are brought higher up the agenda for the task of modelling cognition.

The first issue is granularity of contextual module. The two mechanisms proposed give two, not necessarily identical, partitions of task knowledge into contextual modules. The routine transitions emphasise a granularity based on current decision or action rules, and current local representation of the task state space. The associative transitions emphasise a granularity based on points at which the task may be joined, and sections of the task which do not permit restarting other than at the beginning. Further empirical work clearly needs to be undertaken to clarify this granularity.

The second issue is the proceduralisation of task knowledge, and the development, in Rasmussen's terms, from knowledge-based behaviour through rule-based to skill-based behaviour. How can this be modelled? It is clearly an important question, and it is hoped that this paper's setting out of the two kinds of transition, with the associative used more by the novice, and the learned by the expert, may help to focus the issue.

A third topic for investigation, following from the discussion here, is how interruption and reorientation mechanisms work. Experiments could be designed to test the model presented here, in which interruption is an associative transition while routine continuation follows learned transitions. The model also implies that reorientation to a task starts at the boundary of a contextual module as described here.

Acknowledgements

This paper has benefited from extensive general discussion with Lisanne Bainbridge and Rick Cooper.

References

John R. Anderson. The Architecture of Cognition. Harvard University Press, Cambridge, MA, 1983.

John R. Anderson. A theory of the origins of human knowledge. Artificial Intelligence, 40:313--351, 1989.

Alan Baddeley. Your Memory: A User's Guide. Penguin, Harmondsworth, England, 1983.

Lisanne Bainbridge. An Analysis of a Verbal Protocol from a Process Control Task. PhD thesis, Faculty of Science, University of Bristol, England, 1972.

Lisanne Bainbridge. Analysis of verbal protocols from a process control task. In Elwyn Edwards and Frank P. Lees, editors, The Human Operator in Process Control, pages 146--158. Taylor & Francis, London, England, 1974.

Lisanne Bainbridge. Types of hierarchy imply types of model. Ergonomics, 36(11):1399--1412, 1993.

Philip J. Barnard. Cognitive resources and the learning of human-computer dialogs. In John M. Carroll, editor, Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, chapter 6. MIT Press, Cambridge, MA, 1987.

F. C. Bartlett. Remembering. Cambridge University Press, Cambridge, England, 1932.

Jerry A. Fodor. The Modularity of Mind. MIT Press, Cambridge, MA, 1983.

A. Simon Grant. Modelling Cognitive Aspects of Complex Control Tasks. PhD thesis, Department of Computer Science, University of Strathclyde, Glasgow, 1990. Available from the author.

A. Simon Grant and J. Terry Mayes. Cognitive task analysis? In George R. S. Weir and James L. Alty, editors, Human-Computer Interaction and Complex Systems, chapter 6. Academic Press, London, 1991.

M. Minsky. A framework for representing knowledge. In P. H. Winston, editor, The Psychology of Computer Vision. McGraw-Hill, New York, 1975.

M. Minsky. The Society of Mind. Heinemann, London, 1987.

Allen Newell. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, 1990.

Jens Rasmussen. Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. North-Holland, New York, 1986.

R. C. Schank and R. Abelson. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, NJ, 1977.

Roger C. Schank. Dynamic memory: A theory of reminding and learning in computers and people. Cambridge University Press, Cambridge, England, 1982.

Tim Shallice. From Neuropsychology to Mental Structure. Cambridge University Press, Cambridge, 1988.



