The Logic of Competence
By: Simon Grant. Series initiated 2010-11-19
I wrote these pieces on the Cetis blog, mainly in 2011, to set out the logic of competence as I saw it.
What do I mean by the logic of competence? I'm writing about the logical structure – and some of the philosophy – involved in writing statements about the competences that people have, or don't have — what they can do effectively, or not. This is highly significant in many areas of life, and thus people say many things about their own, or other people's, competence. This kind of analysis is necessary in order to represent statements about competence coherently in a way that can be effectively processed by ICT systems. It may be done implicitly, but that is more prone to hidden misunderstanding, and failures of interoperability. I've done it here explicitly, in the hope of contributing to clarity, mutual understanding, and thus interoperability, among other things.
Here is what I wrote, now (2023) gathered together into a single web page. I've left the date of each piece at the top. Please note that the information I refer to or give was current at the time of writing, but is not necessarily current now. Equally, I have tried to update the links, but some may have become, or remained, dead.
This branch of work was intended to feed into several activities in which I was taking part, including InLOC, eCOTOOL, ICOPER, MedBiquitous Competencies WG, Competence Structures for E-Portfolio Tools, and the CEN WS-LT Competency SIG, which had its 3rd Annual meeting in Berlin near the beginning of the series. It builds on and complements Rowin's and my 2010 paper; rather than setting out an academic case, which we did in that paper, it aims to detail the logic, which can be evaluated on its own terms, requiring reference only to common language and practice.
The first step is to express a working definition, and a logical basis for further discussion, which is that it is expressions like claims to competence, rather than competency definitions, that are logically prior.
See № 1, “The basis of competence ideas”.
The series continues by considering:
- how transferability gives a competence concept its logical identity
- how the analysis of just what a competence claim is claiming results in various possible structures for the competence-related concepts
- how to make sense of levels of competence
- how to make sense of criteria, conditions or context
- basic tree structuring of competence concepts
- desirable variants of tree structures (including more on levels)
- representing the commonality in different structures of competence
- other less precise cross-structure relationships
- definitions, and a map, of several of the major concepts used, together with logically related ones.
Continuing towards practical implementations:
- the requirements for implementing the logic of competence
- representing the interplay between concept definitions and structures
- representing structural relationships
- different ways of representing the same logic
- optional parts of competence
- the logic of National Occupational Standards
- the logic of competence assessability
- representing level relationships
- more and less specificity in competence definitions
- the logic of tourism as an analogy for competence
- the pragmatics of InLOC competence logic
- InLOC as a cornerstone for other initiatives
- InLOC and open badges: a reprise
- open frameworks of learning outcomes
- why frameworks of skill and competence?
- how to do InLOC
- the key to competence frameworks.
Where possible, each point is motivated and illustrated by reference to examples, drawn from (then) existing published materials. Many of these materials are no longer to be found on the web. You may be able to find them on The Internet Archive.
The intention was to put together a full paper incorporating and crediting ideas from other people; however, there was not a lot of that material, and so I am just reproducing the posts here as I wrote them.
I made an offer which still stands: get in touch with me and I will talk you through any of this material you are interested in, while at the same time I will try to understand where you are coming from, and what is easier or harder for you to grasp. That will help me to express myself more clearly and simply, where I have not yet achieved clarity.
2010-11-24 (1st in my logic of competence series)
The basis of competence ideas
Let's start with a deceptively simple definition. Competence means the ability to do what is required. It is the unpacking of “what is required” that is not simple.
I don't want to make any claims for that particular form of words — there are any number of definitions current, most of them quite reasonable in their own way. But, in the majority of definitions, you can pick out two principal components: here, they are “the ability to do” and “what is required”. Rowin's and my earlier paper does offer some other reasonable definitions of what competence means, but I wanted here to start from something as simple-looking as possible.
If the definition is to be helpful, “the ability to do” has to be something simpler than the concept of competence as a whole. And there are many statements of basic, raw ability that would not normally be seen as amounting to competence in any distinct sense. The answers to questions like “can you perform this calculation in your head”, “can you lift this 50 kg weight” and “can you thread this needle” are generally taken as matters of fact, easily testable by giving people the appropriate equipment and seeing if they can perform the task.
What does “what is required” mean, then? This is where all the interest and hidden complexity arises. Perhaps it is easiest to go back to the basic use of competence ideas in common usage. For a job – with an employer, perhaps, or just getting a tradesperson to fix something – “what is required” is that the person doing the job is competent at the role he or she is taking on. Unless we are recruiting someone, we don't usually think this through in any detail. We just want “a good gardener”, or to go to “a good dentist”, without knowing exactly what being good at these roles involves. We often just go on reputation: has that person done a good job for someone we know? would they recommend them?
The idea is similar from the other point of view. If I want a job as a gardener or a dentist, at the most basic level I want to claim (and convince people) that I am a good gardener, or a good dentist. Exactly what that involves is open to negotiation. What I'm suggesting is that these are the absolute basics in common usage and practice of concepts related to competency. It is, at root, all about finding someone, or claiming to be the kind of person, who fulfils a role well, according to what is generally required.
People claim, or require, a wide range of things that they “can do” or “are good at”. At the most familiar end of the spectrum, we think of people's ability or competence for example at cooking, housework, child care, driving, DIY. There are any number of sports and pastimes that people may be more or less good at. At the formal and organisational end of the spectrum, we may think of people as more or less good at their particular role in an organisation — a position for which they may be employed, and which might consist of various sub-roles and tasks. The important point to base further discussion on is that we tend normally to think about people in these quite general terms, and people's reputation tends to be passed on in these quite general terms, often without explicit analysis or elaboration, unless specific questions are raised.
When either party asks more specific questions, as might happen in a recruitment situation, it is easy to imagine the kind of details that might come up. Two things may happen here. First, questions may probe deeper than the generic idea of competence, to the specifics of what is required for this particular job or role. And second, the issue of evidence may come up. I'll address these questions later; next I want to discuss how competence concepts are identified in terms of transferability.
But the point I have made here is that all this analysis is secondary. Because common usage does not rely on it, we must take the concept of competence as resting primarily just on the claim and on the requirement for a person to fill a role.
2010-12-07 (2nd in my logic of competence series)
Competence concepts and competence transfer
If we take competence as the ability to do what is required in a particular situation, then there is a risk that competence concepts could proliferate wildly. This is because “what is required” is rarely exactly the same in different kinds of situations. Competence concepts group together the abilities to do what is required in related situations, where there is at least some correlation between the competence required in the related situations — sometimes talked about in terms of transfer of competence from one situation to another.
For example, horticulture can reasonably be taken as an area of competence, because if one is an able horticulturalist in one area — say growing strawberries — there will be some considerable overlap in one's ability in another, less practiced area — say growing apples. Yes, there are differences, and a specialist in strawberries may not be very good with apples. But he or she will probably be much better at it than a typical engineer. Surgery might be a completely different example. A specialist in hip replacements might not be immediately competent in kidney transplants, but the training necessary to achieve full competence in kidney transplants would be very much shorter than for a typical engineer.
Some areas of competence, often known as “key skills”, appear across many different areas of work, and probably transfer well. Communication skills, team working skills, and other such generic areas play a part in the full competence of many different roles, though the communication skills required of a competent diplomat may be at a different level to those required of a programmer. Hence, we can meaningfully talk about skill, or competence, or competency, in team work. But if we consider the case of “dealing with problems” (and that may reasonably be taken as part of full competence in many areas) there is probably very little in common between those different areas. We therefore do not tend to think of “dealing with problems” as a skill in its own right.
But we do recognise that the competence in dealing with problems in, say, horticultural contexts shares something in common, and when someone shows themselves able to deal with problems in one situation, probably we only need to inform them of what problems may occur and what action they are meant to take, and they will be able to take appropriate actions in another area of horticulture. As people gain experience in horticulture, one would expect that they would gain familiarity with the general kinds of equipment and materials they have to deal with, although any particularly novel items may need learning about.
Clearing and preparing sites for crops may well have some similarity to other tasks or roles in production horticulture and agriculture more generally, but is unlikely to have much in common with driving or surgery. The more skills or competences in two fields have in common, the more that competence in one field is likely to transfer to competence in another.
So, we naturally accept competence concepts as meaningful, I'm claiming, in virtue of the fact that they refer to types of situation where there is at least some substantial transfer of skill between one situation and another. The more that we can identify transfer going on, the more naturally we are inclined to see it as one area of competence. Conversely, to the extent to which there is no transfer, we are likely to see competences as distinct. This way of doing things naturally supports the way we informally deal with reputation, which is generally done in as general terms as seems to be adequate. Though this failure to look into the details of what we mean to require does lead to mistakes. How did we not know that the financial adviser we took on didn't know about the kind of investments we really wanted, or was indeed less than wholly ethical in other ways?
Having a clearer idea of what a competence is prepares the way for thinking more about the analysis and structure of competence.
2011-01-04 (3rd in my logic of competence series)
Analysis and structure of competence
I have suggested that the natural way of identifying competence concepts relates to the likely correlation of “the ability to do what is required” between different tasks and situations that may be encountered, requiring similar competence. Having identified an area of competence in this way, how could it best be analysed and structured?
First, we should make a case that analysis is indeed needed. Without analysis of competence concepts, we would have to assume that going through any relevant education, training or apprenticeship, leading to recognition, or a relevant qualification, gives people everything they need for competence in the whole area. If this were true, distinguishing between, say, the candidates for a job would not be on the basis of an analysis of their competence, but on the basis of personal attributes, or reputation, or recommendation. While this is indeed how people tend to handle getting a tradesperson to do a private job, it seems unlikely that it would be appropriate for specialist employees. Thus, for example, many IT employers do not just want “a programmer”, but one who has experience or competence in particular languages and application areas.
On the other hand, it would not be much use only to recruit people who had experience of exactly the tasks or roles required. For a new role, there will naturally not be anyone with that exact prior experience. And equally obviously, people need to develop professionally, gaining new skills. So we need ways of measuring and comparing ability that are not just in terms of time served on the job. In any case, time served on a job is not a reliable indicator of competence. People may learn from experience at different rates, as well as learning different things, even from the same experience. This all points to the need to analyse competence, but how?
We should start by recognising the fact that there are at present no universally accepted rules for how to analyse competence concepts, or what their constituent parts should look like. Instead of imagining some ideal a priori analytical scheme, it is useful to start by looking at examples of how competence has been analysed in practical situations. First, back to horticulture…
The relevant source materials I have to hand happen to be the UK National Occupational Standards (NOSs) produced by LANTRA (UK’s Sector Skills Council for land-based and environmental industries). The “Production Horticulture” NOSs have 16 “units” specific to production horticulture, such as “Set out and establish crops”, “Harvest and prepare intensive crops”, and “Identify and classify plants accurately using their botanical names”. Alongside these specialist units, there are 21 other units either borrowed from, or shared with, other NOSs, such as “Monitor and maintain health and safety”, “Receive, transmit and store information within the workplace”, and “Provide leadership for your team”. At this “unit” level, the analysis of what it takes to be good at production horticulture seems understandable, with a good degree of common sense. Most areas of expertise can be broken down in this way to the kind of level where one sees individual roles, jobs or tasks that could in principle be allocated to different people. And there is often a logic to the analysis: to get crops, you have to prepare the ground, then plant, look after, and harvest the crops. That much is obvious to anyone. More detailed, less obvious analysis could be given by someone with relevant experience.
Even at this level of NOS units, there is some abstraction going on. LANTRA evidently chose not to create separate units or standards for growing carrots, cabbages and strawberries. Going back to the ideas on competence correlation, we infer that there is much in common between competence at growing carrots and strawberries, even if there are also some differences. This may be where “knowledge” comes into play, and why occupational standards seem universally to have knowledge listed as well as skills. If someone is competent at growing carrots, then perhaps simply their knowledge of what is different between growing carrots and growing strawberries goes much of the way towards their competence in growing strawberries. But how far? That is less clear.
Abstraction seems to be even more extensive at lower levels. Taking an arbitrary example, the first, fairly ordinary unit in “Production Horticulture” is “Clear and prepare sites for planting crops”, and is subdivided into two elements, PH1.1 “Clear sites ready for planting crops” and PH1.2 “Prepare sites and make resources available for planting crops”. PH1.2 contains lists of 6 things that people should be able to do, and 9 things that they should know. The second item in the list of things that people need to be able to do is “place equipment and materials in the correct location ready for use”, which self-evidently requires a knowledge of what the correct location is. The fifth item is to “keep accurate, legible and complete records”. This is supported by an explicit knowledge requirement, documented as “the records which are required and the purpose of such records”.
This is quite a substantial abstraction, as these examples could make equal sense in a very wide range of occupational standards. In each case, the exact nature of these abilities needs to be filled out with the relevant details from the particular area of application. But no formal structure is given for these abstractions, here or, as far as I know, in any occupational standard, and this leads to problems.
For example, there is no way of telling, from the standard documentation, the extent to which proving the ability to keep accurate records in one domain is evidence of the ability to keep accurate records in another domain; and indeed no way is provided to document views about the relationship between various record-keeping skills. When describing wide competences, this may be somewhat less of a problem, because when two skills or competences are analysed explicitly, one can at least compare their documented parts to arrive at some sense of the degree of similarity, and the degree to which competence in one might predict competence in another. But at the narrowest, finest grained level documented – in the case of NOSs, the analysis of a unit or element into items of skill and items of knowledge – it means that, though we can see the abstractions, it is not obvious how to use them, and in particular it is not clear how to represent them in information systems in a way that they might be automatically compared, or otherwise managed.
There has been much written, speculatively, about how competence descriptions and structures might effectively be used with information systems, for example acting as the common language between the outcomes of learning, education and training on the one hand, and occupational requirements on the other. But to make this effective in practice, we need to get to grips properly with these questions of abstraction, structure and representation, to move forward from the common sense but informal abstractions and loose structures presently in use, to a more formally structured, though still flexible and intuitive approach.
The next two blog entries will attempt to explore two possible aspects of formalisation: level, and other features often left out from competence definitions, including context or conditions.
2011-01-07 (4th in my logic of competence series)
Levels of competence
Specifications, to gain acceptance, have to reflect common usage, at least to a reasonable degree. The reason is not hard to see.
If a specification fails to map common usage in an understandable way, people using it will be confused, and could try to represent common usage in unpredictable ways, defeating interoperability. The abstractions that are most important to formalise clearly are thus those in common usage.
It does seem to be very common practice that competence in many fields comes to be described as having levels. The logic of competence levels is very simple: a higher level of competence subsumes – that is, includes – lower levels of the same competence. In any field where competence has levels, in principle this allows graded claims, where there may be a career progression from lower to higher level, along with increasing knowledge, practice, and experience. Individuals can claim competence at a level appropriate to them; if a search system represents levels of competence effectively, employers or others seeking competent people will not miss people whose level of competence is greater than the one they give as the minimum.
For example, the Skills Framework for the Information Age (SFIA) is a UK-originated framework for the IT sector, founded in 2003 by a partnership including the British Computer Society. This gives 7 “levels of responsibility”, and different roles in the industry are represented at one or more levels. The level labels are: 1 Follow; 2 Assist; 3 Apply; 4 Enable; 5 Ensure, advise; 6 Initiate, influence; 7 Set strategy, inspire, mobilise. These levels are given fuller general definitions in terms of degrees of autonomy, influence, complexity, and business skills. There are around 87 separate skills defined, and for each skill, there is a description of what is expected at each level defined for that skill — of which there are between 1 and 6.
The European e-Competence Framework (e-CF), on which work began in 2007, was influenced by SFIA, but has just 5 “proficiency levels”, simply termed e-1 to e-5. The meaning of each level is given within each e-competence. There are 36 e-competences, grouped into 5 areas.
The e-CF refers to the cross-subject European Qualifications Framework, which has 8 levels. Level e-1 corresponds to EQF level 3; e-2 to EQF 4 and 5; e-3 to EQF 6; e-4 to EQF 7; and e-5 to EQF 8. However, the relationships between e-CF and SFIA, and between SFIA and EQF, are not as clear cut. The EQF gives descriptors for each of three categories at each level: “Knowledge”, “Skills”, and “Competence”: that is, 24 descriptors in all.
This small selection of well-developed frameworks is enough to show conclusively that there is no universally agreed set of levels. In the absence of such agreement, levels only make sense in terms of the framework that they belong to. All these frameworks give descriptors of what is expected at each level, and the process of assigning a level will essentially be a process of gauging which descriptor best fits a particular person's performance in a relevant setting. While this is not a precise science, the kind of descriptors used suggest that there might be a reasonable degree of agreement between assessors about the level of a particular individual in a particular area.
For comparison, it is worth mentioning some other frameworks. (Here are just two more to broaden the scope of the examples; but there are very many others throughout the professions, and in learning, education and training.)
In the UK, the National Health Service has a Knowledge and Skills Framework (NHS KSF) published in 2004. It is quite like the e-CF in structure, in that there are 30 areas of knowledge and skill (called, perhaps confusingly, “dimensions”), and for each “dimension” there are descriptors at four levels, from the lowest 1 to the highest 4. As with all level structures, higher level competence in one particular “dimension” seems to imply coverage of the lower levels, though a level on one “dimension” has no obvious implication about levels in other “dimensions”.
A completely different application of levels is seen in the Europass Language Passport. This offers 6 levels for each of 5 linguistic areas, as a way of self-assessing the levels of one's linguistic abilities. The areas are: listening; reading; spoken interaction; spoken production; and writing. The levels are in three groups of two: basic user A1 and A2; independent user B1 and B2; proficient user C1 and C2. At each level, for each area, there is a descriptor of the ability in that area at that level. That is 30 different descriptors. All of this applies equally to any language, so the particular languages do not need to appear in the framework.
Overall, there is a great deal of consistency in the kind of ways in which levels are described and used. Given that they have been in use now for many years, it makes clear sense for any competence structure to take account of levels, by allowing a competence claim, or a requirement, to specify a level as a qualifier to the area of competence, with that level tied to the framework to which it belongs, and where it is defined in terms of a descriptor. This use of level will at least make processing of competence information a little easier.
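To make that subsumption logic concrete, here is a minimal sketch in Python. All the names are hypothetical (nothing here is taken from SFIA, the e-CF or any other framework); the point is only that a level is meaningless without its framework, and that matching should treat a claimed level as covering all lower levels of the same scale.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Level:
    framework: str   # levels only make sense within their own framework
    rank: int        # position on that framework's ordered scale

@dataclass(frozen=True)
class Claim:
    competence: str  # identifier (ideally a URI) for the area of competence
    level: Level

def satisfies(claim: Claim, requirement: Claim) -> bool:
    """A claim meets a requirement only within the same framework,
    and a higher level subsumes the lower levels."""
    return (claim.competence == requirement.competence
            and claim.level.framework == requirement.level.framework
            and claim.level.rank >= requirement.level.rank)

# A search for SFIA level 3 should not miss someone claiming level 5.
requirement = Claim("programming", Level("SFIA", 3))
candidate = Claim("programming", Level("SFIA", 5))
assert satisfies(candidate, requirement)
```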
But beyond level it seems to get harder. The next topic to be covered will be other attributes including conditions or context of competence.
2011-01-19 (5th in my logic of competence series)
Other competence attributes
Beyond levels, there are still many aspects to the abstractions that can be seen in competence definitions and structures in common use. What else can we identify? We could think of these as potential attributes of competence, though the term “attributes” is far from ideal.
Just as levels don't appear in competence definitions themselves, when an attribute is abstracted from a competence definition it usually does not appear explicitly. But unlike levels, which are set out plainly in frameworks, other abstracted features take some alertness to spot.
I started this series by saying that the logical starting points for talking about competence are the claim and the requirement. And indeed, we see many implicit or explicit claims to competence with plenty of detail — for example in the form of covering letters accompanying CVs in support of applications for opportunities. Following this through, it is relevant to consider what else could be said in support of a claim to competence, going into more detail than, say, official recognition that a particular standard of competence has been reached.
Imagine that you had done a course on “production horticulture” (the topic mentioned in previous entries) and received a certificate, perhaps even with an attached Europass Certificate Supplement describing the “skills and competences” that you are expected to have acquired by the end of this course. Alternatively, your certificate may not have come as a direct result of a course, but instead from “APEL” — the accreditation of prior experiential learning. That would mean that you had plenty of experience as a horticulturalist, and it had been assessed that you had the skills and competence covered for a particular certificate that covers production horticulture. Now, if you were applying for a job, as well as citing your certificates, maybe attaching the information in any supplements, and stating the level of competence you believe you have achieved, what else would you be likely to want to say to a prospective employer, e.g. in a covering letter, or at an interview?
The most immediately obvious extra feature of one's own experience might be, for horticulture, what kind of crops you have experience in growing. Stating this is a natural consequence of the fact that the LANTRA standards do not refer to the kind of crops. Next, the NOS documentation explicitly mentions indoor (perhaps greenhouse) and outdoor growing, though these terms are not used in the actual definitions. Which do you have experience of? Or, broadening that out, what kind of farms have you worked on? Soon afterwards, the documentation goes on to talk about equipment, without mentioning the types of equipment explicitly. Can you drive a tractor? Beyond this, I'm ignorant about what kinds of specialist equipment are used for different kinds of cultivation, but more relevant questions could be asked, as it might be important to know whether someone is experienced in using whatever equipment is used in the job being offered. Most workers these days will not be experienced in using hand ploughs or ox-drawn ploughs… And after equipment, the list of attributes that are abstracted out — left out from documented competence definitions — continues.
Some differences in skills and competence are less significant, and you may not need to mention them explicitly, because it is understood that any farmhand could pick up the ability quickly. It is the skills that take longer to learn that will be more significant for recruitment purposes, and more likely to be mentioned in a covering letter or checked at interview.
One of the key facts to recognise here is that the boundary is essentially arbitrary between, on the one hand, what is specified in documentation like the LANTRA NOSs, and on the other hand, what is left for individuals to fill in. Where the boundary is set depends entirely on the authority. While LANTRA standards do not specify particular crops or equipment, we could imagine that there was a professional association of, say, strawberry growers, that published a set of more detailed standards about what skills and competences are needed to grow just strawberries. Quite possibly that would mention the specific equipment that is used in strawberry growing. (As it happens, there is for example a Florida Strawberry Growers Association, but it doesn't seem to set out competence standards.)
Different occupational areas will reveal a similar pattern. It is unlikely, for instance, that ICT skills standards (e.g. SFIA, e-CF) will specify particular programming languages, but it is one attribute that programmers regularly mention in their CVs or covering letters when seeking employment. Or, take the case of health professionals. There are several generic types of situations with different demands. What “is required” of a competent health professional may well differ between first aid situations with no equipment; at the scene of accidents; in emergency units in hospitals; and for outpatient clinics. Some skills or abilities may be present in different forms in these different situations, and we can perhaps imagine someone mentioning in a covering letter the kinds of situation in which they had experience, if these were particularly relevant to a post applied for.
It should be clear at this point that extra detail claimed will naturally fill in for what is left out of the skills or competence standard documentation. But because what is sometimes left out is equally sometimes left in the documentation, it would make a great deal of sense for industry sectors to set out a full appropriate terminology for their domain — a “domain ontology” if you like. (Though don't be using that “O” word in the wrong company…) Those terms may then be used either within competence definitions, or for individuals to supplement the competence definitions within their own claims. Typically we could expect common industry terms to include a set of roles, and a range of tools, equipment or materials, but of course, these sets will differ between occupations. They may also differ between different countries, cultures and jurisdictions. As well as roles and equipment, any occupational area could easily have its own set of terms that have no corresponding set in different areas. We saw this above with indoor and outdoor growing. For plumbing, there are, for example, compression fittings and soldered fittings. For musicians, there are different genres of music each with their own style. For builders, building methods and parts differ between different countries, so could be documented somewhere. And so on.
There is a very wide range of particular attributes that could be dealt with in this way, but it is probably worth mentioning a few particular generic concepts that may be of special interest. First, let us consider context. For standard descriptions of competence, it will be the contexts that are met with repeatedly that are of interest, because it is those where experience may be gained and presented that may match with a job requirement. To call something a context, all that is needed is to be able to say that a skill was learned, or practiced, in the context of — whatever the context is. A context could be taken as a set of conditions that in some way frame, or surround, or define, a kind of situation of practice. If we have a good domain ontology, we would expect to find the common contexts of the domain formulated and documented.
Second, what about conditions? We can refer back to more informal usage, where someone might say, for instance, I can plough that field, or write that program, as long as I have this particular tool (agricultural or software). It makes a lot of sense to say that this is a condition of the competence. Conditions can really be almost anything one can think of that can affect performance of a task. As suggested in the discussion of context, a set of stable and recognisable conditions could be taken to constitute a context. But the term “conditions” generally seems to be wider. It means, literally, anything that I can “say” that affects the competence. As such, we are probably more likely to meet conditions in the clarification of an individual claim than in standard competence documentation. That still means that there is value to assembling terminology for any conditions that are understandable to many people in a domain. It may be that a job requirement specifies conditions that are not in the standard competence definitions, and if those conditions are in a domain ontology, they can potentially be matched automatically to the claims of individuals referring to the same conditions.
Assessment methods should also specify conditions under which the assessment is to take place. The relevance of an assessment may depend at least partly on how closely the conditions for the assessment reflect the conditions under which relevant tasks are performed. And talking about assessment, it is perhaps worth pointing out that, though assessment criteria are logically separate from the definitions of the skills and competence that are being assessed, there is still a fluid boundary between what is defined by the competence documentation, what is claimed by an individual, and what appears as an assessment condition or criterion. The conditions of an assessment may add detail to a competence in such a way that the individual no longer needs to detail something in a claim. An assessment criterion may fairly obviously point to a level, but, given that a level is also sometimes wrapped in with a competence definition, the criterion may take over something of the competence definition itself. It would be expected that assessment criteria also use the same domain terminology as can be used, both for competence definitions, and within claims.
If the picture that emerges is rather confused, that seems unfortunately realistic. The fluid boundaries that I have discussed here are perhaps a natural result of the desire to specify and detail skill and competence in whatever way is most convenient, but that does not add any clarity to distinctions between context, conditions, criteria, levels, and other possible features or attributes of competence. On the other hand, this lack of clarity makes it paradoxically easier to represent the information. If we have no clear distinction between these different concepts, then we can use a minimal number of ways of representing them.
So, how should competence attributes, including context, conditions and criteria, be represented?
- To do this most usefully, a domain ontology / classification / glossary / dictionary needs to exist. It doesn't matter what it is called, but it does matter that each term is defined, related where possible to the other terms, and given a URI. This doesn't need to be a monolithic ontology. It could be just a set of relevant defined terms in vocabularies. And there is every reason to reuse common terms, vocabularies and classification schemes across different domains.
- There is one major logical distinction to be made. Some terms are strictly ordered on a scale: these are levels or like levels. Other terms are not on a scale, and are not ordered. These are all the rest, covering what has been discussed above as context, conditions, criteria.
- Competence definitions, assessment specifications, job requirements and individual claims can all use this set of domain-related terms. The more thoroughly this is done, the more possibilities there will be to do automatic matching, or at least for the ICT systems to be as helpful as possible when people are searching for jobs, when employers are searching for people, or anything related. (A minimal sketch of this representation follows below.)
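As a sketch of how little machinery this requires, the following Python fragment represents a handful of horticultural terms. The URIs and definitions are invented for illustration; the only structural distinction is the one just described, between ordered terms (levels) and unordered ones (contexts, conditions, criteria).

```python
BASE = "https://example.org/horticulture/terms/"

# Unordered terms: contexts, conditions and criteria can all be handled alike.
vocabulary = {
    BASE + "indoor-growing":  "growing crops under glass or other cover",
    BASE + "outdoor-growing": "growing crops in open ground",
    BASE + "tractor-driving": "operating a standard agricultural tractor",
}

# Ordered terms: a level scale, where position in the list is meaningful
# (here loosely modelled on the NHS KSF's four levels, purely as an example).
ksf_levels = [BASE + f"ksf-level-{n}" for n in range(1, 5)]

# Definitions, assessments, job requirements and claims can all cite the URIs,
# so matching can be automatic: here the claim covers the whole requirement.
job_requirement = {BASE + "outdoor-growing", BASE + "tractor-driving"}
individual_claim = {BASE + "outdoor-growing", BASE + "tractor-driving",
                    BASE + "indoor-growing"}
assert job_requirement <= individual_claim
```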
Having sorted out this much, we are free to consider the basic structures into which competence concepts and definitions seem to fit.
2011-02-03 (6th in my logic of competence series)
Basic structures of competences
In the earlier post on structure, I was looking for the structure of a single “definition” of “what is required”. In following that line of enquiry, I drew attention to one of the UK National Occupational Standards (NOSs), in horticulture as it happened. Other UK NOSs share a similar structure, and each one of these could be seen as setting out a kind of relationship structure between competences in that occupational area. In each case we see an overall area (in the case cited, “production horticulture”), which is broken down into units, where each unit seems to correspond roughly to an occupational role — one of a set of roles that could be distributed between employees. Then, each unit is broken down into what the person with that role has to be able to do, and what they need to know to provide a proper basis for that ability.
This is clearly a kind of tree structure, but it is not immediately obvious what kind of tree. Detailed consideration of a few examples is instructive. A first point to note is that NOS units may occur within several different occupational areas. This is particularly true of generic competences such as health and safety, but also applies to some specific units of skill and competence that just happen to play a part in several occupational areas, or several careers if you like. So, a particular unit does not necessarily have a single place on “the tree”. A second point emerges from consideration of different trees. UK NOSs have a common structure of, roughly: responsible body (usually a Sector Skills Council); occupational area; unit; skill or knowledge. But this is not always the case with structures that are not NOSs. For example, the “Tuning” work on “educational structures in Europe” includes “generic competences” that are given just as headings, from “capacity for analysis and synthesis” to “will to succeed”, and there is no attempt to break these down into smaller components.
Tuning's specific competences have the same depth of tree structure as their generic ones, still unlike NOSs. For instance, the “business-specific competences” have items such as “identify and operate adequate software”, which looks a bit like some of the things that NOSs specify that people have to do, but also items such as “understand the principles of law and link them with business / management knowledge”, which seems to correspond more with NOS knowledge items. Some Tuning items straddle both ability and knowledge. In all Tuning cases, the tree structure is shallower than for NOSs. You may find many other such tree-structures of competences, but I doubt you will find any reliable correspondence between the kinds of thing that appear at different points on different trees. This is a natural consequence of the logical premise of this whole series: that it is the claim and the requirement that are the logical starting point. Yes, we may well see correspondence at that level of job requirement, and much common practice; but any commonality here will not extend to other levels, because people analyse claims and requirements in their own different ways. It's not just that some trees leave out particular kinds of branch, but rather that, to go with the natural analogy, branches come in all thicknesses, with no clear dividing line between say a branch and a twig.
Even for the same subject area, there are quite different structures. As well as NOSs, the UK has what are called “subject benchmarks”, which are more for academic courses rather than purely vocational ones. The QAA's Subject benchmark statement for “Agriculture, horticulture, forestry, food and consumer sciences” has this structure:
- 8 very general “abilities and skills”, such as “understand the provisional nature of information and allow for competing and alternative explanations within their subject”
- other generic skills, divided into:
  - intellectual skills
  - practical skills
  - numeracy skills
  - communication skills
  - information and communication technology (ICT) skills
  - interpersonal/teamwork skills
  - self-management and professional development skills
- subject-specific knowledge and understanding, expressed as what a graduate “will be able to”, in three areas:
  - “agriculture and horticulture”
  - “the agricultural sciences”
  - “food science and technology”.
Both the subject-specific and the generic skills have descriptions for what is expected at three levels: “threshold”, “typical”, and “excellent”. While this is an interesting and reasonable structure, the details of the structure do differ from the NOSs in the same area.
We have also to reckon with the fact that just about any of a tree's smallest branches can in principle be extended to even more detailed and smaller ones by adding thinner twigs. It might be tempting to try this with the Tuning competences, as talk about the “principles of law”, and how they link with other “knowledge”, begs the question of what principles we are talking about and indeed how they are linked. However, in practice this is unlikely, because the Tuning work is intended as a synthesis and reference point for diverse academic objectives, and typically every academic institution will structure their own version of these competences in their own different ways. Another way in which two similar trees may differ is the number of intermediate layers, together with the branching factor. One tree may have twenty “thinner” branches coming off a “thicker” one; another tree may cover the same twenty by first having four divisions, each with five sub-divisions. There is no right or wrong here, just variants.
A simple way of representing many tree structures is to document the relationship between elements that are immediately larger and smaller, or broader and narrower. And recently, there seems to be a significant consensus building up that relationships from the SKOS Simple Knowledge Organization System are a good start, and may be the best known and most widely used relationships that fit. SKOS has the relationships “broader” and “narrower”: the broader competence, skill, or knowledge is the one that covers a set of narrower ones. The only thing to be careful about is that the SKOS terms come from the librarian's BT and NT — that is, if we write “A broader B” it does not mean “A is broader than B”, but the opposite, that A is associated with a broader term, and that broader term is B. Thus B is a broader concept than A. Then, to use SKOS in the way it is designed to be used, we need identifiers for all the terms that might occur as “A” or “B” here. Each identifier would most reasonably be a URI, and needs to be clearly associated with its description.
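As an illustration, here is how those relations might be written down using the Python rdflib library. This is a sketch only: the URIs are invented, and real NOS concepts would need properly assigned identifiers.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("https://example.org/nos/")  # invented namespace

g = Graph()
g.add((EX.ClearAndPrepareSites, SKOS.prefLabel,
       Literal("Clear and prepare sites for planting crops", lang="en")))
g.add((EX.ProductionHorticulture, SKOS.prefLabel,
       Literal("Production horticulture", lang="en")))

# Mind the direction: “A skos:broader B” asserts that B is the broader concept.
g.add((EX.ClearAndPrepareSites, SKOS.broader, EX.ProductionHorticulture))
# The same relationship seen from the broader concept's side:
g.add((EX.ProductionHorticulture, SKOS.narrower, EX.ClearAndPrepareSites))

print(g.serialize(format="turtle"))
```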
This general purpose structure of URIs and SKOS relations seems to be sufficient to represent the basics of most of the competence structures I have mentioned or referred to, beyond the concepts and definitions themselves. We will next look at more advanced considerations.
2011-02-07 (7th in my logic of competence series)
Advanced structures for competences
In my previous post, I explained how SKOS relationships can be used to represent the basics of competence structures. But in one of the examples cited, the QAA Subject Benchmark Statement for honours level agriculture related studies, the aspect of level of attainment is present, and this is not easily covered by the SKOS broader and narrower relations just by themselves. Let me explain in some more detail.
In this particular Subject Benchmark, the skills, knowledge and understanding were (at the time of writing) described at three levels: “threshold”, “typical”, and “excellent”. As a first example, in one of the generic skills, (communication skills), under “threshold” one item reads “make contributions to group discussions”; under “typical” the corresponding item reads “contribute coherently to group discussions”; and under “excellent” it reads “contribute constructively to group discussions”. Or take an example from the “subject specific knowledge and understanding in agriculture and horticulture” — threshold: “demonstrate some understanding of the scientific factors affecting production”; typical: “demonstrate understanding of the scientific factors limiting production”; excellent: “demonstrate understanding of the scientific factors limiting production and their interactions”. Leaving aside difficulties in clarifying and assessing exactly what these mean, it is clear that there is a level structure, as illustrated in my earlier post. In both cases, the three descriptions are neither identical nor unrelated — higher levels encompass lower ones. (But note also that benchmark statements in different subjects have different structures.)
Can one represent these attainment levels in a tree structure? One option might be to have three benchmark statements presented separately, one each for threshold, typical and excellent. However this would miss the obvious connections between the elements within each level. A more helpful approach might be to describe the common headings with the finest reasonable granularity, and then distinguish the descriptors for different attainment levels at this granularity. This would need a slight restructuring of this statement, because common headings finer-grained than the ones given are possible. For instance, “subject specific knowledge and understanding in agriculture and horticulture” could easily be subdivided into something like these (using words that appear in each level):
- “science and management of sustainable production systems”
- “social, economic, legal, scientific and technological principles underlying the business management of farm or horticultural enterprises”
- “range of concepts, theories and methods drawn from the constituent disciplines”
At a still finer level, the descriptors mostly share many words, with just the detail differing to reflect the different levels, as exemplified above. In the example above, the common wording is “understanding of the scientific factors affecting production”. Headings could be created from common wording. Then there is still the issue of relating the three described levels into the structure as a whole. Threshold, typical, and excellent are not three components of one higher level entity, they are different levels of the same entity. These levels are one kind of variant.
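In other words, the restructured statement might pair each fine-grained heading with its three attainment descriptors. The layout below is my own sketch, not anything the QAA publishes; the descriptor texts are the ones quoted above.

```python
# One fine-grained common heading, with the benchmark's three attainment
# descriptors attached as alternative levels of the same entity.
heading = "understanding of the scientific factors affecting production"

descriptors = {
    "threshold": "demonstrate some understanding of the scientific factors "
                 "affecting production",
    "typical":   "demonstrate understanding of the scientific factors "
                 "limiting production",
    "excellent": "demonstrate understanding of the scientific factors "
                 "limiting production and their interactions",
}
```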
Variants more generally are not always easy to see in common definitions, perhaps because part of the point of having standards is to reduce variability. For a clearer example from a broader perspective, we may consider areas not documented by occupational or educational standards. Consider skill and competence at management. The literature suggests several distinct styles of management: autocratic, democratic, laissez-faire, paternalistic, etc. It is probably obvious that to be an effective manager, one does not have to be able to manage according to all these styles. Perhaps just one may be good enough for any particular management position, though different ones may be needed in different contexts. Having chosen a management style, each will have a different range of component skills. If one wished to create a tree structure to represent management competences, what would the relationship be between a reasonable topmost node, perhaps called just “management”, and the four or more styles? It is rather similar to the issue with the levels we saw above, but at a different granularity. As another alternative example, look at the broader issue of developing competence in agriculture or horticulture. Probably no one is an expert in growing everything. Anyone wanting to be a farmer or grower will at some point need to decide what to specialise in, if not in academic study, then at least in terms of practical experience and expertise. There are clear choices, and the range of skills and competence needed for different specialisms will of course differ. Being a competent farmer does not mean being competent at growing all crops in the world. You have to choose.
The basic structures mentioned in my previous post start out with the idea of “broader” and “narrower” concepts. Is it reasonable to say that management competence in general is a broader concept than competence as a democratic manager? Or can one say that graduate level competence in agriculture is a broader concept than being assessed as threshold, typical or excellent? Does it really help to say simply that horticulture is a broader concept than growing grapes?
What seems to emerge on thinking this through is that there are at least two kinds of “broader” (and equally two kinds of “narrower”) with different logic. One type is like whole-part relationships. We saw this in the UK National Occupational Standards units, which were composed of things that a person needs to be able to do, alongside things that the person needs to know. In principle all parts are needed to constitute the whole. If we imagine, say, a personal development or learning tracking system that helps you with your learning, and you are working towards a unit of competence, then the system could keep track of which parts you say you have done, and perhaps remind you to complete the remaining ones.
On the other hand, the other type of relationship (illustrated above) is “style” or “variant” rather than “part”. If we imagine a system to help with professional development, and you wanted to develop your management skill, it is at least plausible that you could be asked at the outset which style of management you would like to improve your skill in. Having chosen one (or more) the rest would be put aside. You would work towards the constituent knowledge and skills for the chosen ones, and the system would not bother you with the knowledge and skills needed for the styles you had chosen not to learn more about. Similarly, a general horticulture skill aid would have to start by getting you to select the kind of crops you wanted to grow. And for the other example, with the attainment standards of the Subject Benchmark, we can imagine selecting a topic and then being asked what level you believe you have attained on this topic, so again there is a selection process instead of simply the combination of parts.
One could indeed imagine all of these features together in a tool that helped with personal development. The system could ask you what level you believe you have attained already, and what level you are working towards, for fine-grained knowledge and skills, and then remind you to work at the identified gaps. At the same time, which fine-grained areas you work at will depend on your more coarse-grained choices, like which styles of the competence you want to acquire, and which options you will specialise in.
It may help to compare these two kinds of relationship with ones that are very common elsewhere. UML distinguishes various relationships within class diagrams by graphical symbols, and two of the most common are called “composition” and “generalization”. Composition is very close to the kind of basic relationship in competence where component skills and knowledge are required to make up a wider competence, or where various competences are required to qualify as a certain grade of professional. On the other hand, the broad concept of management competence could be seen as a generalisation of the more specific competences in various styles of management. A word of caution, however: UML is designed specifically for use in systems analysis and design, or software engineering, so it should not be surprising if the match with representing competence is not exact.
Even though the two kinds of relationship I have been talking about are well known in many fields, SKOS does not make an explicit distinction between them. Logic seems to lead to the idea (which I have heard SKOS experts suggesting) that it is up to others to define more specific relationships than (specialisations of) SKOS's “broader” and “narrower” to represent these two kinds of relationship. We don't want to deprive SKOS of the right to be called “Simple”.
However we represent these two kinds of relationship, if we are going to represent them in a way which is useful for tools to help people manage their competence, their learning towards competence, and their self-assessment of competence (perhaps leading to external assessment), then it does seem entirely appropriate to represent them differently. Very simply, there are times when you need all of a set of components, and there are times when you need to choose which of a group of options you are going to take: “and” and “or”; and both kinds of relationship are of great practical use.
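A sketch of what that distinction could look like in practice, in hypothetical Python (none of these class or field names come from any specification), with the two relationship kinds held in separate fields so that a tool can treat them differently:

```python
from dataclasses import dataclass, field

@dataclass
class Competence:
    label: str
    parts: list["Competence"] = field(default_factory=list)     # "and": all needed
    variants: list["Competence"] = field(default_factory=list)  # "or": choose

# Variant structure: pick a management style before going any deeper.
management = Competence("management", variants=[
    Competence("autocratic management"),
    Competence("democratic management"),
    Competence("laissez-faire management"),
])

# Part structure: a NOS-style unit needs all of its elements.
unit = Competence("clear and prepare sites for planting crops", parts=[
    Competence("clear sites ready for planting crops"),
    Competence("prepare sites and make resources available for planting crops"),
])

def still_to_do(c: Competence, achieved: set) -> list:
    """An 'and' structure is tracked to completion; an 'or' structure
    would instead prompt the user to choose among c.variants first."""
    return [p.label for p in c.parts if p.label not in achieved]

print(still_to_do(unit, {"clear sites ready for planting crops"}))
```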
Addition, 2011-06-22 On the other hand, people seem to easily mix compulsory and optional parts in the same structure. This is extremely widespread in the definition of qualifications, which are still a very important proxy for or indicator of abilities and competence. So, rather than needing necessarily to separate out the two kinds of structural relationship, we can simply be liberal about accepting whatever combinations people want to represent. If a certain ability has both necessary and optional parts, it is still very easy to understand what that means in practice, and to follow through the implications.
2011-07-04 And I give a detailed argument for the reasons behind optionality in post 15 of this series.
That may be a good place to stop for defining generic structure within a single framework of skills or competence. But what I have not covered so far is relationships between different competence structures. One thing this is needed for is the reuse of common elements between definitions, which follows here …
2011-02-15 (8th in my logic of competence series)
Representing common structures
In the last two posts, I've set out some logic for simple competence structures and for more complex cases. But we still need to consider how to link across different structures, because only then will the structures start to become really useful.
If you look at various related UK National Occupational Standards (NOSs), you will notice that typically, each published document, containing a collection of units, has some units specific to that collection and some shared in common with other collections. Thus, LANTRA's Production Horticulture NOSs (October 2008) include 17 common units that are shared between different LANTRA NOSs, and Agricultural Crop Production NOSs (May 2007) include 18 of these units. Ten of them appear in both sets. Now if, for instance, you happen to have studied Production Horticulture and you wanted to move over to Agricultural Crop Production, it would be useful to be able to identify the common ground so that you didn't have to waste your time studying things you know already. And, if you want to claim competence in both agriculture and horticulture, it would be useful to be able to use the same evidence for common requirements.
How can what is in common between two such competence structures be clearly identified? There are currently common codes (CU2, CU5, etc.) that identify the common units; and units imported from other Sector Skills Councils (as frequently happens) are identified by their unit code from the originating NOSs. However, there are no guarantees. And if you look hard, you sometimes find discrepancies. CU5, for example, “Develop personal performance and maintain working relationships”, is divided into two elements, “Maintain and develop personal performance” and “Establish and maintain working relationships with others”. In both sets, “others” are defined as
- colleagues
- supervisors and managers
- persons external to the team, department or organisation
- people for whom English is not their first language.
But when the unit CU5 appeared in Veterinary Nursing NOSs in 2006, non-native English speakers were not explicitly specified. Do we have to regard the units as slightly different? We can imagine what has happened — presumably someone has recognised an omission, and put in what was missing. But what if that has been reflected in the training delivered? Would it mean that people trained in 2006 would not have been introduced to issues with non-native speakers? And does that mean that they really should be given some extra training? And later the plot thickens… LANTRA's “Veterinary nursing and auxiliary services” NOSs from July 2010 have CU5, “Maintain and develop personal performance”, and CU5A, “Establish and maintain working relationships with others”. This seems to follow a pattern of development in which the NOS units are simplified and separated. The (same) different kinds of “others” are now just included in the overview paragraph at the beginning of CU5A.
I hope it's worth going through this exercise in possible confusion to underline the need for links across structures. Ideally, an occupational standard should be able to include a unit from elsewhere by referring to it, not by copying it; and there would need to be clear versions with clearly marked changes. But if people insist on copying (as they currently often do), at least there could be very clear indications given about when something is intended to be exactly the same, and when it is pretty close even though not exactly the same.
Back in the simple competence structures post, I introduced the SKOS relationships “broader” and “narrower”. There are other SKOS relationships that seem perfectly suited for this job of relating across different competence structures. These are the SKOS Mapping Properties. It would seem natural to take skos:exactMatch to mean that this competence definition I have here is intended to be exactly the same as that one over there, and skos:closeMatch would serve well for “pretty much the same”, or “practically the same”. If these relationships were properly noted, there could be ICT support for the kinds of tasks mentioned above — e.g. working out what counted as evidence of what competence, and what you still needed to cover in a new course that you hadn't covered in an old course, or gained from experience.
And if all parts of competence structures were given globally unique IDs, ideally in the form of URIs, then this same process could apply at any granularity. It would be easy to make it clear even to machines that this NOS unit was the same as that one, right down to the fine granularity of a particular knowledge or ability item being the same as one in a different unit. An electronic list of competence concepts would have alongside it an electronic list of relationships — a kind of “map” — that could show both the internal “skos:broader” and “skos:narrower” relations, and the external “skos:exactMatch” and “skos:closeMatch” equivalencies.
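As a sketch of that idea, the combined list and “map” could be as simple as this (the URIs are placeholders, since no official URIs have been assigned to these NOS units):

```python
# A sketch of an electronic "map" of relationships alongside the concepts.
# All URIs are invented placeholders, purely for illustration.
relationships = [
    # internal structure within one set of NOSs
    ("http://example.org/ph/CU5", "skos:narrower", "http://example.org/ph/CU5.1"),
    # intended to be exactly the same unit, copied into another collection
    ("http://example.org/ph/CU5", "skos:exactMatch", "http://example.org/acp/CU5"),
    # pretty much, though not exactly, the same (like the 2006 veterinary variant)
    ("http://example.org/ph/CU5", "skos:closeMatch", "http://example.org/vn/CU5"),
]

def evidence_transfers(a, b, rels):
    """Evidence for a counts directly for b if they are declared exact matches."""
    return (a, "skos:exactMatch", b) in rels or (b, "skos:exactMatch", a) in rels
```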
This gives us a reasonable basis for reuse of parts, at any level, of different structures, but we haven't yet considered comparison of competence structures where there aren't any direct equivalence mappings, which comes next.
2011-02-16 (9th in my logic of competence series)
Other cross-structure relationships
My previous post covered how to do common competence features in different structures, typically where the structures share context. But what about when the two structures are from quite different starting points? Equivalences are harder to identify, but it may be useful to document other relationships.
My earlier post on the basic structures taken separately contrasted the UK LANTRA NOSs with the QAA's Subject benchmark statement in the area. The way in which these are put together is quite different, and the language used is far from the same.
But there may be a good case for relating these two. What would happen if someone who has a qualification based on NOSs wanted to give evidence that they have attained Subject Benchmarks? Or, more likely, what if someone who has a vocational qualification in, say, agriculture wants to select modules from a degree course in agriculture, where the intended learning outcomes of the university's degree course refer to the appropriate Subject Benchmark Statement? Even if there are no equivalences to note (as discussed in the previous post), we may see other useful relationships, such as when something in one structure is clearly part of something else in another structure, or where two things are not equivalent but are meaningfully related. Let's see what we can find for the (not atypical) examples we have been looking at.
Starting hopefully on familiar ground, let's look at the generic skills related to the LANTRA unit CU5 that I've mentioned before. Element CU5.1, or unit CU5 in the 2010 Veterinary NOSs, is called “Maintain and develop personal performance”, and this seems related to the Benchmark's “Self-management and professional development skills”. They appear not to be equivalent, so we aren't justified in creating a skos:exactMatch or skos:closeMatch relationship between those two structures, but we could perhaps use skos:relatedMatch (another SKOS Mapping Property) to indicate that there is a meaningful relationship, even if not precisely specified. This might then be a helpful pointer to people about where to start looking for similar skill definitions, when comparing the two structures. The Benchmark seems to be generally wider than the NOS unit, and perhaps this would be expected, given that graduate level skills in agriculture should cover something that vocational skills do not. Here, “moral and ethical issues” and “professional codes of conduct” are not covered in the NOSs. Perhaps the closest correspondence can be seen with the Benchmark's “targets for personal, career and academic development”, prefaced at “threshold” level by “identify…”, “typical” level by “identify and work towards…” and “excellent” level by “identify and work towards ambitious…”. In the NOS, the individual must be able to: “agree personal performance targets with the appropriate person”; “agree your development needs and methods of meeting these needs with the appropriate person”; “develop your personal performance according to your agreed targets, development needs and organisational requirements”; and “review personal performance with the appropriate person at suitable intervals”. They must also know and understand (among other things) “how to determine and agree development needs and personal targets”. Personally, I'm not sure whether anything deserves a skos:closeMatch property — probably what we would need to do would be to get the relevant people together to discuss the kinds of behaviour covered, and see if they actually agree or not on whether there was any practical equivalence worthy of a skos:closeMatch.
There is also a definite relationship between the Benchmark's “Interpersonal and teamwork skills” and the NOS's “Establish and maintain working relationships with others”. Again, it is difficult to identify any very clear relationships between the component parts of these, but despite this lack of correspondence at fine granularity, it seems to me that the five ability points from the NOS are more than covered by the five points appearing at the “typical” level of the Benchmark. There are two other SKOS Mapping Properties that might help us here: skos:broadMatch and skos:narrowMatch. These correspond to skos:broader and skos:narrower, but applied across different structures, rather than within one structure. Thus we could potentially represent that LANTRA CU5A (2010) has a “skos:broadMatch” in the Benchmark's Interpersonal and teamwork skills, “typical” level. Conversely, that “typical” Benchmark component has a “skos:narrowMatch” in LANTRA's CU5A.
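Written out as triples, the relationships suggested in these two paragraphs might look like this (URIs again invented, since none have been officially assigned):

```python
# Cross-structure mappings sketched as triples; all URIs are placeholders.
mappings = [
    # meaningfully related, but not established as equivalent
    ("ex:lantra/CU5.1", "skos:relatedMatch", "ex:qaa/self-management"),
    # CU5A has a broader match in the Benchmark's "typical" teamwork component...
    ("ex:lantra/CU5A", "skos:broadMatch", "ex:qaa/teamwork-typical"),
    # ...and conversely that component has a narrower match in CU5A
    ("ex:qaa/teamwork-typical", "skos:narrowMatch", "ex:lantra/CU5A"),
]
```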
On the subject-specific end, again there are plenty of areas where you can see some connection, but it is hard to see very clear, distinct relationships. As you might expect, there is a tendency for the NOSs to deal with specific skills, while the Benchmark deals in more general knowledge and understanding. The horticultural PH16 NOS unit is titled “Respond to legislation affecting consumer rights”, while the Benchmark has various “subject-specific knowledge and understanding” to do with “social, economic, legal and technological principles underlying the business management of farm or horticultural enterprises”. Probably, people meeting this part of the Benchmark standard at a good enough level have skills that include that unit of the NOS, so we could in theory note a skos:broadMatch relationship between the NOS unit and that part of the Benchmark. But we could only do that (for any area) if we had URI identifiers available to mark the relevant sections unambiguously, and at present there are few if any competence structures where URIs have been officially assigned to the parts of the structure.
It seems unlikely that an agriculture graduate would be wanting accreditation of a LANTRA NOS unit, but if someone did, supporting systems could potentially make use of these relationships represented as SKOS Mapping Properties. More likely, someone who has covered the LANTRA NOS would be able to save a lot of time in putting together a shortened agriculture degree programme if all the skos:broadMatch relationships were documented, as it would be relatively easy to design a tool that allows efficient comparison of the relevant documentation, as a support to deciding whether a particular module at degree level needs to be taken, or not. This seems likely to be a similar process to Accreditation of Prior Learning (APL) in which the university accredits previous attainment in terms of their degree programme. It could also be adapted to APEL (E = “Experiential”) if the individual brought along a portfolio of evidence for attaining relevant NOSs. These processes are important in the likely future world where tailoring of degree courses becomes more common.
It looks like I have finished the coverage of the essential logical features of competence structures that I believe could usefully be incorporated in an interoperability specification. To repeat a point I have inserted in the introduction to this series, I would be delighted to discuss any of these posts one-to-one with interested people. It remains to bring all these points together in a way that is easier to follow, through the judicious use of diagrams, to discuss other emergent issues, and to talk about how we could work towards the practical implementation of such competence structures. The first diagram offered is a concept map, together with definitions.
2011-03-31 (10th in my logic of competence series)
Competence concepts mapped
In this series of posts I've used many terms as a part of my attempts to communicate on these topics. Now I offer definitions for or notes about both the concepts I've used in the blog posts so far, and related ones drawn from a range of other work, and I link to posts where the ideas behind these concepts are discussed or used prominently. Then, towards the end of this post (placed there solely for readability) there is a map of how the concepts I've used relate to each other.
There are two main sources for borrowed definitions: first, the European Qualifications Framework (EQF); and second, the European Standard that is currently in the process of being published, EN 15981, “European Learner Mobility Achievement Information”, and its published precursor, CEN Workshop Agreement CWA 16133. While I had nothing to do with the creation of the EQF, I am a joint author of CWA 16133 and EN 15981.
Definitions and notes
term | in | definition and notes
--- | --- | ---
ability | 1; 2; 3 | something that a person is able to do. (Abilities cover both skills and competences, and are normally expressible in the form of a clause starting with an active verb. EQF uses the word “ability” in both definitions. Many learning outcomes are also abilities.)
assessing body | | organisation that assesses or evaluates the actions or products of learners that indicate their knowledge, skill, competence, or any expected learning outcome [CWA 16133]
assessment process | | process of applying an assessment specification to a specific learner at a specific time or over a specific time interval [CWA 16133]
assessment result | 5 | recorded result of an assessment process [EN 15981]
assessment result pattern | | People most often look for patterns in assessment results, like “over 70%” or “rated at least as adequate”, rather than specific results themselves: not many people are interested in whether someone has scored exactly 75%. This concept represents the idea of what people are looking for in terms of assessment results.
assessment specification | | description of methods used to evaluate learners' achievement of expected learning outcomes [CWA 16133] (This covers all the documentation, or the implicit understanding, that defines an assessment process.)
awarding body | | organisation that awards credit or qualifications [EN 15981]
common contextual term | 3; 4; 5 | In any domain, or any context, there are concepts (at various levels of abstraction) that are shared by the people in that domain, and that serve as a vocabulary. It is important that the terms used within a domain for the related frameworks, standards, ability definitions, criteria and conditions are consistent in their meaning. This box indicates the need for these concepts to be common, and that terms should not be defined differently for different purposes within a domain.
criterion or condition of performance or assessment | 5 | (see below)
educational level | | one of a set of terms, properly defined within a framework or scheme, applied to an entity in order to group it together with other entities relevant to the same stage of education [EN 15981]
effect, product, material evidence | | material results of a person's activity. (If something material endures, it can be used as evidence. If there is nothing enduring, the original evidence needs to be observed by witnesses, after which the witness statements substitute for the evidence.)
employer | | agent employing an individual
employer activity | | actions of the employer
framework or occupational standard | 3; 4 | description of an occupational or industry area, conceivably including or related to job profiles, occupational standards, occupational levels or grades, competence requirements, contexts, tools, techniques or equipment within the industry
generic work role | | what is signified by an appropriate simple phrase appearing in a job advertisement, job specification, or occupational standard
industry sector | 4 | system of employers, employees and jobs working in related areas that share some of: common concepts and terminology; contexts; a framework or standards; or job requirements
job description or requirement | 1; 3 | expression used to describe what abilities are required to perform a particular job or undertake a particular role
knowledge / understanding | | outcome of the assimilation of information through learning [EQF] (Knowledge is the body of facts, principles, theories and practices that is related to a field of work or study. In the context of the European Qualifications Framework, knowledge is described as theoretical and/or factual.)
level | 4 | educational level (q.v.) or occupational level (q.v.)
material and social reality | | This means all of the common objective world, whether described scientifically, or according to social convention, or in any way.
occupational level | 4 | one of a set of terms, properly defined within an occupational framework, associated with criteria that distinguish different stages of development within an area of competence. (This is often related to responsibility and autonomy, as with the EQF concept of competence. There may be some correlation or similarity between the criteria distinguishing the same level in different competence areas.)
person as agent | | This represents the active, conscious, rational aspect of the individual.
personal activity | | set or sequence of actions by a person, intended or taken as a whole. (An activity may be focused on the performance of a task, or may be identified by location, time, or context. Activities may require abilities.)
personal claim | 1; 5 | statement that an individual is able to do specified things
practiced skill | | ability to apply knowledge and use know-how to complete tasks and solve problems [EQF] (In the context of the European Qualifications Framework, skills are described as cognitive, involving the use of logical, intuitive and creative thinking, or practical, involving manual dexterity and the use of methods, materials, tools and instruments.)
qualification | | status awarded to or conferred on a learner (Many formal learning opportunities are designed to prepare learners for the assessment that may lead to an awarding body awarding them a qualification.) [latest draft of MLO: prEN 15982]
record of experience or practice | 3 | (This refers to any record or reflection about things done, but particularly in this context about tasks undertaken.)
task | | specification for learner activity, including any constraints, performance criteria or completion criteria. (Performance of a task may be assessed or evaluated. Specified tasks are usually part of job descriptions.)
Criteria and conditions
One particular area that is harder than most to understand is represented by the box called “criterion or condition of performance or assessment” — and this is evidently fairly central to the map below, being the most connected box, and directly connected to the concepts which I originally proposed as logically basic: personal claims may be about meeting these conditions or criteria; job descriptions or requirements may have them included.
Assessment and performance criteria and conditions as general terms are fairly easy to understand in themselves. For assessment, they specify either the conditions under which the assessment takes place, or the criteria by which the assessment is measured. For performance, conditions in effect specify the task that is to be undertaken, while criteria specify what counts as successful performance.
What is less easy to see is the dividing line between these and the ability concepts and definitions themselves, and perhaps this is due to the same fact that we have reckoned with earlier — that how much is abstracted in an ability concept or definition is essentially arbitrary. One can easily read, or imagine, definitions of ability that include conditions and performance criteria; but some do not.
For the purposes of the concept map below, perhaps the best way of understanding this concept is to think of it as containing all the conditions or criteria that are not specified by the ability concept or definition itself; recognising that the boundary line is arbitrary.
To make common sense and to be usable, conditions and criteria have to be grounded in material or social reality — they have to be based on things that are commonly taken to be observable, rather than being based on theoretical constructs.
Concept map
The following diagram maps out several of the ways that the concepts above can be understood as relating to one another. Note that generic language is used in a neutral way, in that for instance the verbs are all in the present tense. However, many of these relationships are in fact tentative or possible, rather than definite, and they may be singular or plural.
The diagram is a concept map constructed with CmapTools, and includes various other concepts that I haven't discussed explicitly, but on which I have suggested definitions or notes above. I reckoned that these other concepts might help explain how it all fits together. As always with these large diagrams, a few words of caution are in order.
- This is of course only a small selection of what could be represented.
- It is from a particular point of view, and cannot be perfect.
- Such a map is best looked at a little at a time. Focus on one thing of interest, and follow through the connections from that.
I hope that the definitions and the concept map are of interest and of use.
What the map does not clarify sufficiently is the detailed structure and relationships of ability concepts and structures that contain several of them. This will follow later, but before that, I will review the requirements I have collected for implementation.
2011-05-12 (11th in my logic of competence series)
Requirements for implementing the logic of competence
Having discussed, defined, and mapped the principal features of the concepts of ability and competence, we are left with the challenge of working towards “the practical implementation of such competence structures” (ninth post) by looking at the “detailed structure and relationships of ability concepts and structures that contain several of them” (tenth post) and working towards a particular formalisation that represents those concepts adequately for the uses that are envisaged.
At this point, I'll look back over the posts so far to collect what look like the principal requirements for implementing representations of competence in an interoperable way.
The first post in the series noted that the basis of “what is required” is logically the claim of, or the need for, an ability or competence. Thus an implementation should represent the analysis of “what is required” in terms of abilities. On reaching the sixth post, it was clear that the description of what is required can be formalised to an arbitrary degree, and analysed to an arbitrary granularity, so the formal structures used will need to be flexible rather than rigid.
The second post in the series briefly discusses the issue of transferability or commonality between different roles. Any formalisation should NOT try to answer questions of transferability, but rather provide a good basis for posing and answering those questions within their own domains.
The third post introduces the idea of abstractions in competence or ability definitions, and “common language between the outcomes of learning, education and training on the one hand, and occupational requirements on the other”. A common language is a language that is reused in different contexts. Particularly when concepts are used in different contexts, it is vital to identify them clearly, so that there is a minimum of ambiguity. This is not the place to argue that the obvious choice of unambiguous identifier is the URI, but that is what I assume. A URI needs to be given to any ability or competence concept or framework that may plausibly need to be referred to across different contexts or applications. This obviously includes both the case from the second post of transferring between different occupational contexts, and the case from the third and later posts of what is learned in education or training contexts being used in occupational ones.
The third post also started to look at some of the large body of UK National Occupational Standards (NOSs). One common-sense requirement is that any common representation needs to relate to existing relevant materials. Doing this sets up the possibility of broad and fast adoption (politics and other factors being favourable, and with a fair wind) whereas failure to do this sets up the barrier of having to revise existing materials before adoption. Each NOS is clearly a hierarchically structured document, so a common representation must at least deal with simple hierarchical structures.
The fourth post on levels suggests that a simple hierarchy will often not be sufficient. Both claims and requirements need to be able to include levels, and the representation of levels must allow automatic inferences about higher and lower levels.
The fifth post proposes the requirement for a formal representation to cover the kinds of conditions cited in personal claims and job specifications, that go beyond and detail abstract definitions.
The sixth post starts to suggest some technology ideas for the formal structures, starting with SKOS.
The seventh post points out that decomposition is not the only way of analysing competence concepts. We also need the idea of style, variant, or approach to doing “what is required”. (Though this post did not finally resolve how variants, optionality and levels relate to each other.)
The eighth and ninth posts recognise the value in being able to represent equivalencies and comparisons, across different structures or frameworks as well as within them, and propose using the SKOS Mapping Properties for this purpose.
Listing these requirements in brief, we seem to have something like this:
- represent competence concepts suitably for reuse
- represent analysis of competence in terms of abilities
- deal with levels helpfully
- cover claims and occupational requirements
- use SKOS as a basis
- represent styles or approaches as well as decomposition
- represent relations across different frameworks
Putting all my proposals for meeting these requirements here would make this post uncomfortably long, so instead I'll break it down into more bite-sized chunks. (If I change my mind on how to structure the following posts, I'll change it here as well, and in any case I'll link from here to following posts when written.)
First, I'll deal with how we can formally represent individual competence concepts and frameworks so that the structures contain existing materials, can work well together, and can be fully reused.
Next, I'll put forward my developed ideas on how to represent the structural relationships between competence concepts, and tag on dealing with categories.
Later, I'll deal properly with the tricky area of levels, for which up to now I have not come across any really convincing solutions.
I'll do these all with the help of diagrams, representing not the conceptual connections of the previous post, but information modelling connections. This will come together in a big diagram.
I also want to compare and contrast with diagrams representing other past attempts to represent these things, but I haven't yet decided whether to try to cover that bit by bit while first putting forward the ideas, or to do a big post that covers several alternatives.
After that, for real implementation, we would need to discuss the “binding” question — that is, the different ways of representing this emerging information model, particularly looking at XML, Atom, RDF triples, and XHTML+RDFa. [Note: although the work on JSON-LD had started, it did not become a W3C Recommendation until 2014, which was too late for InLOC. This series doesn't have any substantial contribution to binding.]
At that point, I hope to be able to conclude the series, having outlined a fair solution to the practical representation of the logic of competence!
Now, on to the question of representing how definitions and structures relate to each other.
2011-05-16 (12th in my logic of competence series, edited 2011-06-17)
Representing the interplay between competence definitions and structures
One of the keys to a fuller understanding of the logic of competence is the interplay between, on the one hand, the individual definition of an ability or competence concept, and on the other hand, a framework or structure containing several related definitions. Implementing a logically sound representation of competence concepts depends on this fuller understanding, and this was one of the requirements presented in the previous post.
A framework by its very nature “includes” (in some sense to be better understood later) several concept definitions. Equally, when one analyses any competence concept, it is necessarily in terms of other concepts at least related to competence. There would seem to be little difference in principle between these more and less extensive structures.
When we consider a small structure, it is easier to see that small structures usually double as concepts in their own right. To illustrate this, let's consider UK National Occupational Standards (NOSs) again. Back in post number 3 we met some LANTRA standards, where an example of a unit, within the general topic of “Production Horticulture”, is the one with the name “Set out and establish crops”. In this unit, there is a general description — which is a description or definition of the competence concept, not a definition of its components — and then lists of “what you must be able to do” and “what you must know and understand”. The ability items are probably not the kinds of things you would want to claim separately (there are too many of them), but nevertheless they could easily be used in checklists both for learning and for assessment. My guess is that a claim of competence would be much more likely to reference the unit title in this case.
From this it does appear that a NOS unit simultaneously defines a competence concept definition, and also gives structure to that concept.
It is when you consider the use of these competence-related structures that this point reveals its real significance. Perhaps the most important use of these structures by individuals is in their learning, education and training, and in the assessment of what they have learned. In learning a skill or competence, or learning what is required to fulfil an occupational role, the learner has many reasons to have some kind of checklist with which to monitor and measure progress. What exactly appears on that checklist, and how granular it is, makes a lot of difference to a learner. There can easily be too much or too little. Too few items on the list would mean that each item covers a lot of ground, and it may not be clear how they should assess their own ability in that area. To take the LANTRA example above, it is not clear to a learner what “Set out and establish crops” involves, and learners may have different ideas. The evidence a learner brings up may not satisfy an employer. At the other extreme, I really wouldn't want a system to expect me to produce separate evidence for whether I can start a tractor engine, find the gears, and steer the machine. That would be onerous and practically useless.
Structures for practical use in ICT tools need, therefore, to be clear about what is included as a separate competence-related definition within the structure, and what is not included, or is included only as part of the description of a wider definition.
The “Set out and establish crops” LANTRA unit does have a clear structure, and the smallest parts of that structure are the individual ability and knowledge items — what someone is expected to be “able to do”, and to “know and understand”. And let us suppose that we formalise that unit in that way, so that an ICT learning or assessment support tool allowed learners to check off or provide evidence for the separate items — e.g. that they could “ensure the growing medium is in a suitable condition for planting” and that they knew and understood the “methods of preparing growing media for planting relevant to the enterprise”.
Then, suppose we wanted to include this unit in another structure or framework. Would we want, perhaps, just one “box” to be ticked only for the unit title, “Set out and establish crops”; or would we want two boxes corresponding to the elements of the unit, “Set out crops in growing medium” and “Establish crops in growing medium”; or would we rather have all the ability and knowledge items as separate boxes? None of these options are inherently implausible.
To put this in a different way: when we want to include one competence-related structure within another one, how do we say whether we just want it as a single item, or whether we are including all the structure that comes with it? The very fact that this is a meaningful question relies on the dual nature of a definition, as either a “stand-alone” concept, or structure.
The solution I propose here is that we have two identifiers, one for the concept definition itself, and one for the structure definition that includes the concept. I understand these as closely related. But the description of the structure would perhaps more naturally talk about what it is used for, and wouldn't be the kind of thing that you can claim, while the description of the concept would indeed describe something that one could claim or require. The structure is structured, whereas the definition of the concept by itself is not, and the structure relies on definitions of subsidiary concepts, that are the parts of the whole original concept.
I illustrated a change of wording in the eighth post, raising, but not answering, the question of whether the concept remained unchanged across the changed wording of the definition.
Let's look at this from another angle. If you author a framework or structure, in doing that you are at the same time authoring at least one competence-related concept definition, and there may be one main concept for that structure or framework. You will often be authoring other subsidiary definitions, which are parts of your structure. You may also include, in your structure or framework, definitions authored by others, or at another time. Indeed, it is possible that all the other components come from elsewhere, in which case you will be authoring only the minimal one concept definition.
One more question that I hope will clarify things still further. What is the title of a structure? The LANTRA examples illustrate that a structure may have no title separate from the title of the main concept definition contained in it. But really, and properly, the title of the structure should be something like “the structure for <this competence concept>”.
Contrast this with the subsidiary concept definitions given in the structure. Their titles and descriptions clearly must be different. They may be defined at the same time as the structure, or they may have been defined previously, and be being reused by the structure.
Exactly how all this is represented is a matter for the “binding” technology adopted. Representing in terms of an XML schema will look quite different from RDF triples, or XHTML with RDFa. I'll deal with those matters in a later post. But, however implemented, I do think we have the beginnings of a list of information model features that are necessary (or at least highly desirable) for representing this interplay between competence definitions and structures. (I will here assume that identifiers are URIs.)
- The structure (or framework) has its own URI.
- The structure often has a “main” concept that represents the structure taken as a separate concept.
- The structure cannot be represented without referring to the concept definitions that are structured by it.
- Each concept definition, including the main concept and subsidiary concepts, has its own URI.
I've tried to represent this in a small diagram. See what you think… (note that the colours bear no relation to colours in my other concept maps)
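To complement the diagram, here is the same model as a minimal data sketch, using the “Set out and establish crops” unit (URIs and field names invented for illustration):

```python
# A sketch of the four information-model features listed above.
framework = {
    "uri": "ex:structures/set-out-and-establish-crops",         # its own URI
    "main_concept": "ex:concepts/set-out-and-establish-crops",  # the "main" concept
    "concepts": [                   # the definitions structured by the framework
        "ex:concepts/set-out-and-establish-crops",
        "ex:concepts/set-out-crops-in-growing-medium",
        "ex:concepts/establish-crops-in-growing-medium",
    ],
}
```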
Of course, as well as the URIs, titles and descriptions, there is much more from structures or frameworks to represent, particularly about the relations between the concepts. So it is to the practical implementation of this that I turn next.
2011-05-24 (13th in my logic of competence series)
Structural relations for competence concepts
My recent thoughts on how to represent the interplay between competence definitions and structures seem to be stimulating but probably not convincing enough. This post tries to clarify the issues more thoroughly, at the same time as introducing more proposals about the structural relationships themselves.
Clear explanation seems always to benefit from a good example. Let me introduce my example first of all. It is based on a real ability — the ability to make pizzas — but in the interests of being easy to follow, it is artificially simplified. It is unrealistic to expect anyone to define a structure for something that is so straightforward. But I hope it illustrates the points fairly clearly.
I'm imagining a British Society of Home and Amateur Pizza Makers (BSHAPM), dedicated to promoting pizza making competence at home, through various channels. (The Associazione Verace Pizza Napoletana does apparently exist, however, and organises training in pizza making, which does have some kind of syllabus, but it is not fully defined on their web site.) BSHAPM decides to create and publish a framework for pizza making competence, liberally licenced, so that it can be referred to by schools, magazines, and cookery and recipe sites. A few BSHAPM members own traditional wood-fired pizza ovens which they occasionally share with other members. There are also some commercial pizza outlets that have an arrangement with the BSHAPM.
The BSHAPM framework is about their view of competence in making pizzas. In line with an “active verb” approach to defining skills, it is simply entitled “Make pizza”. The outline of the BSHAPM framework is here:
- prepare pizza dough
  - with fresh yeast
  - with dried yeast
  - with non-yeast raising agents
- form dough into a pizza base
  - by hand in the air
  - with a rolling pin on a work surface
- prepare a pizza base sauce from available ingredients
- select, prepare and arrange toppings according to eater's needs and choices
- prepare complete pizza for baking
- bake pizza
  - in kitchen oven
  - in a traditional wood-fired oven
  - in a commercial pizza oven
The framework goes into more detail than that, which will not be needed here. It also specifies several knowledge items, both for the overall making pizza framework, and for the subsidiary abilities.
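For what follows, it may help to see the outline sketched as a nested data structure (identifiers invented; how best to represent the relations is exactly the question taken up below):

```python
# The BSHAPM outline as a nested (tree-like) structure: each entry is an
# ability identifier plus its list of alternative ways of doing it.
make_pizza = ("ex:make-pizza", [
    ("ex:prepare-dough", ["ex:fresh-yeast", "ex:dried-yeast", "ex:non-yeast"]),
    ("ex:form-base", ["ex:by-hand-in-air", "ex:rolling-pin"]),
    ("ex:prepare-sauce", []),
    ("ex:arrange-toppings", []),
    ("ex:prepare-for-baking", []),
    ("ex:bake", ["ex:kitchen-oven", "ex:wood-fired-oven", "ex:commercial-oven"]),
])
```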
Clearly the ability detailed in the BSHAPM “make pizza” framework is a different ability to several other abilities that could also be called “make pizzas” — for instance, the idea that making pizzas involves going to a shop, buying ready-made pizzas, and putting them in the oven for an amount of time specified on the packaging.
As well as the BSHAPM framework, we could also imagine related structures or frameworks, for:
- all food preparation
- general baking
- making bread
- preparing dough for bread etc.
So, let's start on what I hope can be common ground, clarifying the previous post, and referring to the pizza example as appropriate.
Point 1: Frameworks can often be seen as ability definitions. The BSHAPM concept of what it takes to “make pizza” represents an ability that people could potentially want to claim, provide evidence for, or (as employers) require from potential employees. It could be given a long description that explains all the aspects of making pizza, including the various subsidiary abilities. At the same time, it defines a way of analysing the ability to make pizza in terms of those subsidiary abilities. In this case, these are different aspects or presentations of the same concept.
Point 2: Each component concept definition may be used by itself. While it is unlikely that someone would want to evidence those subsidiary abilities, it is perfectly reasonable to suppose that they either could form part of a course curriculum, or could be items to check off in a computer-based system for someone to develop their own pizza-making skills. They are abilities in their own right. On the other hand, it is highly plausible that some other curriculum, or some other learning tracking system, might want not to represent the subsidiary abilities as separate items, particularly in cases where the overall competence claimed were at a higher level. In this case (though not generally) the subsidiary abilities are reasonably related to steps in the pizza making process, and we could imagine a pizza production line with the process at each stage focusing on just one subsidiary ability.
Point 3: The structure information can in principle be separated from the concept definitions. Each ability definition, including the main one of “make pizza” and each subsidiary one, can be defined in its own right and quoted separately. The “rest” of the framework in this case is simply documenting what is seen as part of what: the fact that preparing pizza dough is part of making pizza; baking the pizza is part of making pizza, etc., etc.
Point 4: The structure information by itself is not a competence concept, and does not look like a framework. One cannot claim, produce evidence for, nor require the structural links between concepts, but only items referred to by that structure information. It is stretching a point, probably beyond breakage, to call the structural information by itself a framework.
Point 5: Including a structured ability in a framework needs a choice between including only the self-standing concept and including all the subsidiary definitions. To illustrate this, if one wanted to include “make pizza” in a more general framework for baking skills, there is the question of the level of granularity that is desired. If, for example, the subsidiary skills are regarded as too trivial to evidence, train, assess, etc., perhaps because it would be part of a higher level course, then “make pizza” by itself would suffice. It is clearly the same concept though, because it would be understood in the same way as it is with its subsidiary abilities separately defined. But if all the subsidiary concepts are wanted, it is in effect including a structure within a structure.
These initial points may be somewhat obscured by the fact that some frameworks are very broad — too broad to be mastered by any one person, or perhaps too broad to have any meaningful definition as a self-standing competence-related concept. Take the European Qualifications Framework (EQF), for example, which has been mentioned in previous posts (4; 10). We don't think of the EQF as being a single concept. But that is fine, because the EQF doesn't attempt to define abilities in themselves, just level characteristics of those abilities.
There are other large frameworks that might be seen as more borderline. Medicine and the health-related professions provide many examples of frameworks and structures. The UK General Medical Council (GMC) publishes Good Medical Practice (GMP), a very broad framework covering the breadth of being a medical practitioner. It could represent the structure of the GMC's definition of what it is to “practise medicine well”, though that idea may take some getting used to. The question of how to include GMP in a broader framework will never practically arise, because it is already quite large enough to fill any professional life completely. (Ars longa, vita brevis…)
It is for the narrower ranges of ability, skill or competence that the cases for points 1 and 5 are clearest. This is why I have chosen a very narrow example, of making pizza. For this, we can reflect on two questions about representation, and the interplay between frameworks and self-standing concept definitions.
- Question A: What would be a good representation of a structure to be included within a wider structure?
- Question B: What difference is there between that and just the main self-standing concept being included?
So let's try to elaborate and choose a method for Point 3 — separating self-standing concept definitions from structural information. Representing the self-standing concepts is relatively clear: they need separate identifiers so that they can be referred to separately, and reused in different structures. The question to answer first, before addressing A and B, is how to represent the structure information.
- Option one is to strip out all the relations, and bundle them together separately from all the concept definitions. “Make pizza” at the broadest, and the other narrower abilities including “bake pizza”, would all be separate concepts; the “structure for” making pizza would be separately identified. The “make pizza” framework would then be the ensemble of concept definitions and structure.
- Option two is to give an identifier to the framework, where the framework consists of the concepts plus the structure information, and not give an identifier to the structure information by itself.
Let's look again at this structural information with an eye on whether or not it could need an identifier. The structural information within the imagined BSHAPM framework for making pizza would contain the relations between the various ability concepts. There would be necessary part and optional part relations. A necessary part of making pizza the BSHAPM way is to make the dough, but how the dough is made has three options. Another necessary part is to form the pizza base, and that has two options. And so on.
So, perhaps now we are ready to compare the answers to the questions A and B asked above. To include one self-standing concept in another framework requires that the main concept is represented with its own identifier, because both an identifier for the framework, and an identifier for the structural information, would imply the inclusion of subsidiary abilities, and those are not wanted. To include the framework as a whole, on the other hand, there is a marked difference between options one and two. In option one, both the identifier for the main concept and the identifier for the structural information need to be associated with the broader concept, to indicate that the whole structure, not just the one self-standing concept, is included in the broader framework. Even if we still have the helpful duality of concept and structure, the picture looks something like this (representing option 1):
If we had to represent the concept and structure entirely separately, the implementation would surely look still worse.
Moving forward, option two looks a lot neater. If the framework URI is clearly linked (through one structural relation) to the main concept definition, there is no need for extra optional URIs in the relations. It's worth saying at this point that, just as with so many computational concepts, it is possible to do it in many ways, but it is the one that makes most intuitive sense, and is easiest to work with, that is likely to be the one chosen. So here is my preferred solution (representing option 2):
To propose a little more detail here: the types of relationship could cover which concepts are defined within the framework, and specifying the main concept, as well as the general broader and narrower relationships, in two variants — necessaryPart and optionalPart. (I've now added more detailed justification for the value of this distinction in post 15 in this series.)
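Sketched as data, option two might then look something like this (the relation names follow the suggestions just made, but are otherwise invented):

```python
# Option 2: the framework has its own URI, and relations link it to the
# concepts defined within it; necessaryPart and optionalPart carry the
# "and"/"or" structure. All names here are illustrative only.
fw = "ex:bshapm/make-pizza-framework"
relations = [
    (fw, "definesConcept", "ex:make-pizza"),
    (fw, "mainConcept", "ex:make-pizza"),
    ("ex:make-pizza", "necessaryPart", "ex:prepare-dough"),
    ("ex:prepare-dough", "optionalPart", "ex:fresh-yeast"),
    ("ex:prepare-dough", "optionalPart", "ex:dried-yeast"),
    ("ex:prepare-dough", "optionalPart", "ex:non-yeast"),
    # ...and so on for forming the base, the sauce, toppings and baking
]
```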
One of the practical considerations for implementation is: what has to happen when something that is likely to change, changes? What can and cannot change independently? It seems clear (to me at least) that if a framework changes substantially, so that it no longer expresses the same ability, the main concept expressed by that framework must also change. Evidence for a “make pizza” concept whose structure does not include preparing dough doesn't provide full evidence for the BSHAPM concept. It is a different concept. On the other hand, if the long description of the concept remains the same, it is possible to have a different structure expressing the same concept. One obvious way in which this is possible is that one structure for BSHAPM pizza making could include just the abilities listed above, while a different structure for exactly the same ability concept could include a further layer of detail, for example spelling out the subsidiary abilities needed to make a pizza base in the air without a rolling pin. (It looks tricky: I've seen it once but never tried it!) Nothing has actually changed, but the structure is more detailed, with more of the subsidiary abilities made explicit.
These arrangements still support the main idea, valuable for reuse, that the concept definitions can remain the same even if they are combined differently in a different framework.
There are two other features that, for me, reinforce the desirability of option 2 over option 1. They are, first, that various metadata can be helpfully shared, and second, that a range of subsidiary competence concepts can be included in a framework. Explanation follows here.
First, I am saying that you can't change a framework structure substantially without changing the nature of the main competence concept, that is, the concept that stands for competence in all of the framework's abilities taken together.
The structure or framework would probably be called something like “the … framework”, where the title of the main concept goes in place of the dots. The two titles are not truly independent, but need differentiation, because of the different usage (and indeed meaning) of the competence structure and the competence concept.
Second, if we have an identified framework including a main concept definition as in option 2, there seems no reason why it should not, in the same way, include all the other subsidiary definitions that are defined within the framework. This seems to me to capture very well the common-sense idea that the framework is the collection of things defined within it, plus their relationships. Concepts imported from elsewhere would be clearly visible. In contrast, if the structural information alone is highlighted as in option 1, there is no obvious way of telling, without an extra mechanism, which of the URIs used in the structure are native to this framework, and which are imported foreign ones.
There are probably more reasons for favouring option 2 over option 1 that I have not thought of for now — if on the other hand you can think of any arguments pointing the other way, please let me know.
If I had more time to write this, it would probably be more coherent and persuasive. Which reminds me, I'm open to offers of collaboration for turning these ideas into more tightly argued and defended cases for publication somewhere.
But there is more to finish off — I would like to cover the rest of the relationships that I see as important in the practical as well as logical representation of competence.
However, after writing the original version of this post, I have had some very useful discussions with others involved in this area, reflections on which are given in the next post.
2011-06-08 (14th in my logic of competence series)
Different ways to represent the same logic
Earlier this week I was at a meeting where we were talking about interoperability for abilities, and there was much discussion about the niceties of representation. Human readability is significant — whether the representation reflects what is in people's minds. The same logic can be represented in radically different ways that are still logically equivalent (and so interoperable); there remains the question of what is identified by identifiers.
A relatively well-known example of variation of readability between different representations involves RDF. RDF/XML has a tendency to make people run a mile, as it can be difficult to comprehend what is represented. Triples formats (e.g. Turtle) at least have a great simplicity to them, because you can see clearly the mapping between the triples and the RDF “graph” (of blobs and arrows) that represents your little corner of the Semantic Web. (I am one of those who only started to appreciate RDF after recognising that RDF/XML is not the only way.) The problem with triples formats is that the knowledge structure is finely fragmented, so you don't get a clear overview from the triples of what is being expressed: you still need a diagram that represents that overall structure. This is not a surprise — it is generally very hard to serialise a network structure in a comprehensible way. Only particular forms lend themselves to serialisation: e.g. strict tree structures.
In the case of the logic of competence, as I discussed in post 12 in the series, we want to represent both individual competence concepts (or abilities) and structures or frameworks that include several of them.
Published competence frameworks generally use plain text as a medium — they are not primarily graphical or diagrammatic — and have therefore in a sense already been serialised by the publishers. They come across overall rather like a tree structure, though there are very often cross-references and/or repetitions that betray the fact that the information is in reality more complex than a simple tree. But the structure is close enough to a tree to tempt people to want to represent it as such. This is nicely illustrated in Alan Paull's comment on post 12. Alan's ability definitions are nested within each other: a depth-first traversal serialisation of the tree if you like.
As Alan and I have agreed in conversation, it is possible to convert a tree-like competence structure to and from other forms. I'll now give three other forms, and explain how the conversion can be done, and following that I'll discuss the implications. I'll call the forms: Atom-like; disjoint; and triples.
First, it can be transformed to a format similar to Atom, where each separate thing (for Atom: entry; here: ability) is given in a flat list of things, each thing including the links between it and other things. (Atom is the format we adopted for Leap2A.) To do this, you take each ability from the tree structure and put it into a flat list, replacing the relation to the nested ability with one using only its identifier, and adding a reverse link from the narrower ability to the broader ability it was removed from. It is also possible to reverse this procedure — start by finding the broadest ability definition (that is, the one which has no broader links) and then replace narrower links by the whole narrower ability definition, removing the narrower ability's link to the broader ability. If a narrower ability has already been put in place, leave the reference in place, to avoid duplication.
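A sketch of that tree-to-flat conversion, assuming a nested form with invented field names “id”, “title” and “children”:

```python
# Convert a nested ability tree into a flat, Atom-like list of entries,
# each holding "narrower" links by identifier plus a reverse "broader" link.
def flatten(tree, broader=None, out=None):
    out = {} if out is None else out
    entry = {"id": tree["id"], "title": tree["title"],
             "narrower": [c["id"] for c in tree.get("children", [])]}
    if broader is not None:
        entry["broader"] = broader        # the added reverse link
    out[tree["id"]] = entry
    for c in tree.get("children", []):
        flatten(c, tree["id"], out)
    return out
```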
Second, it can be transformed into a disjoint structure with all the relationships separated out. It's perhaps easiest to imagine this starting from the Atom-like format, as in the Atom-like format each ability has already been separated out, and there are fewer steps to reach the disjoint form. For each link within each ability, convert it to a separate relationship whose subject is the ability where it is defined, and whose object is the ability referenced. Separate the relationships, leaving the ability definitions with no relationship information included within their structure. An extra step of de-duplication can then occur, because probably the Atom-like format had two representations of each relationship: A narrower B and also the equivalent B broader A. Only one of each pair like this is needed to represent the structure fully.
As in the previous case, it is straightforward to reverse this transformation. For each ability, find the relationships which involve that ability identifier. If the relationship has the ability identifier in the subject position, include a link to the object ability within the ability. If the ability identifier is in the object position, include a link with the reciprocal relationship to the other ability.
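The forward step, sketched under the same assumptions (the reverse is the regrouping just described):

```python
# Separate all relationship information out of the flat entries,
# de-duplicating by keeping only the "narrower" direction of each pair.
def to_disjoint(flat):
    abilities = {k: {"id": e["id"], "title": e["title"]}
                 for k, e in flat.items()}
    rels = [(k, "narrower", n)
            for k, e in flat.items() for n in e.get("narrower", [])]
    return abilities, rels
```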
Third, it can be transformed by being broken right down into RDF triples. As before, it is easiest to start with the nearest other form — in this case the disjoint one. Take each disjoint ability definition (without relationships). This should convert to a set of triples each with the ability identifier as subject, and probably a literal object. The separate relationships are already in a triple-like format, so they can be converted very easily. To reverse this transformation, examine each triple in turn. If subject and object are ability identifiers, turn the triple into a relationship. Then, for each ability, find all triples that have the identifier of that ability as the subject, and have a literal object, and build a single ability structure out of that set.
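And the last step, again as a sketch, with title standing in for all the literal properties:

```python
# Break the disjoint form down into triples: literal properties keep the
# ability identifier as subject; the relationships are already triple-shaped.
def to_triples(abilities, rels):
    triples = [(k, "title", a["title"]) for k, a in abilities.items()]
    return triples + list(rels)
```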
Now we've seen that these different formats are interconvertible, so which one you use does not impede the communication of a complete ability or competence framework. Where they do differ, however, is in what identifiers are seen to identify, and that does have implications, at least for human use.
Identifiers in RDF triples don't really identify anything by themselves. An RDF resource is simply a node, with a URI as an identifier. RDF relationships have been called predicates or properties, which is nicely ambivalent about how tight the relationship is. RDF doesn't tell you which relations relate to things that should be considered as part of the essence of the identified “resource” — or what is inside the “skin” of the resource, if you like. The only thing you can say, when grouping RDF triples together, is that literal properties don't make any sense by themselves, so they can be seen as attached to, or hang off, a “resource”. In the discussion above, we have assumed that the abilities are the only kind of resources we are dealing with, and that will guide the conversion from the “triples” form to the “disjoint” form.
In the disjoint form, literal properties are grouped with the abilities they are properties of. These properties are likely to include the very well-known ones of title and description at least. The fact that relations are listed separately implies that the relationships are less essential to the nature of an ability than its title and description. In the Atom-like form, an identifier looks like it refers to an ability together with all of its immediate relationships. But in the tree-like form, the identifier of a broader ability seems to refer to the complete structure branching down from it.
Which of these is the most useful or flexible way to identify abilities? That is a real question, and I believe it was the question implicitly underlying much of the discussion at the meeting I participated in earlier this week.
One way of tackling the question of the most useful approach to identifiers is to look at when you would want to change an identifier. There's not much one can say about this for RDF triples. For the disjoint form, an identifier would typically want to change when the title or description changes. For the Atom-like form, the identifier might reasonably change if any of the direct relationships changed. For broader tree-like structures, the implication is that the identifier should change if any of the structure changes.
The point at which an ability identifier changes is significant. Effective connection between what is taught, learned, assessed, required, claimed or evidenced is only assured if the same identifier is used. If different ones are used for essentially the same ability, extra provision needs to be made to ensure, e.g., that evidence for the ability under one ID can be used to fulfil requirements under a different ID. That provision might be in terms of declaring that two ability IDs actually are equivalent. So, generally speaking, it is reasonable to have ability identifiers changing only when necessary — when what the ability means in practice has actually changed.
So now we can ask: which approach to structuring ability or competence definitions delivers this outcome of needing changed identifiers no more (and no less) than necessary?
The first sub-question I'd like to address is: should changing structure always require changing identifier? My answer is clearly, no, not always, and this is the reasoning. Yes, of course you should change the identifier if the content has changed. But structure change does not strictly imply content change. After thinking about this for a long time, I find the clearest example is with intermediate layers of structure. And, happily, this is illustrated in real life by several UK National Occupational Standards. OK, so imagine we have a three-layer competence / ability structure.
Top ability A has two sub-abilities, B and C. B is further divided into P, Q and R, while C is further divided into X, Y and Z. (In real life, there would usually be more.)
The body that defines the structure decides that the justification for the intermediate layer is rather flimsy, and removes it, leaving the structure in which ability A has direct sub-abilities P, Q, R, X, Y and Z. The title and long definition of A are unchanged. Is A the same ability? I would answer, unequivocally, yes it is, because all evidence for the former A is equally evidence for the latter A.
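To put this concretely in terms of the “disjoint” representation sketched earlier, the change is entirely in the relationship list, while the definition, title and identifier of A are untouched:

```python
# The three-layer example before and after removing the intermediate
# layer. Only the relationship list changes; A itself does not.

before = [("A", "narrower", "B"), ("A", "narrower", "C"),
          ("B", "narrower", "P"), ("B", "narrower", "Q"), ("B", "narrower", "R"),
          ("C", "narrower", "X"), ("C", "narrower", "Y"), ("C", "narrower", "Z")]

after = [("A", "narrower", n) for n in ("P", "Q", "R", "X", "Y", "Z")]
```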
Or apply this to the BSHAPM pizza making ability example. A stands for the ability to make pizza the BSHAPM way. B could be baking pizza. P, Q and R could be the three approaches to pizza baking. The BSHAPM could decide that, for simplicity, they wanted to eliminate the node of baking pizza as a separate ability, and instead represent the three approaches to baking pizza as direct sub-abilities of pizza making.
Now if you cling to the view that changes in structure must result in changes of identifier, this means that you will need to declare, and process, a whole extra kind of relationship: that the former A is equivalent to the latter A. This strikes me as unnecessary and quite possibly confusing. Possible: yes; ideal: no. The same example also goes against the Atom-style idea of ability identity. The immediate relationships of ability A change in this scenario, without the ability itself changing at all.
Thus, if we still want to deliver the outcome of changing the identifier only as much as necessary, not more, we are driven to the next type of structural representation, the “disjoint” one. But this comes with a caution. If we are not including the structure as an essential part of the ability or competence definition, we need to be sure that we aren't cutting corners, and omitting to give a full description of the ability that we can use as a proper definition. Sometimes this may happen in cases where the structure is defined at the same time as the contained abilities. We may simply say that ability A is defined as the sum of abilities B, C and D. Then we risk not noticing that the substance, the content, of an ability has changed, when we change it to being composed of B, C, D and E. So, there is a requirement, to use this “disjoint” approach, that we properly define the ability, in such a way that if an extra component is added, we feel we need to change the definition, and thus the identifier with it. I would say that amounts to no more than good practice. At the very least, we should have a long description that states that ability A consists of B, C and D (or B, C, D and E). Or we may choose to make explicit, in text that is not formally structured, the fact that ability A is actually made up of the things that are grouped together under the headings B, C, etc. Usually, ability A will actually have more to it than simply the sum of the parts. One would expect at least that ability A would include the ability to recognise when abilities B, C, etc. are appropriate, and apply them accordingly, or something like that. So, again, failing to write a full definition or long description is laziness and bad practice.
This reflects back on what I said earlier about a structure doubling as a concept in its own right, or in other words a framework doubling as an ability definition (which I have now actually changed, so as not to leave too much hanging around that I no longer believe). Perhaps that needs qualifying and clarifying now. The way I would put it now is that in authoring a competence structure, I am usually implicitly defining a competence concept, but good practice demands that I define that concept explicitly in its own right. It is then true to say that the structure “gives structure to” the concept, in the sense that it details a certain set of narrower parts that the broader concept “contains”. But that is certainly not the only way of structuring the concept. My example based on real NOS cases is only the tip of the iceberg — it is very easy indeed to make up endless examples where the same broad ability is structured in different ways.
It is also not true that a structure necessarily defines a clear single concept. In many cases (such as my BSHAPM pizza making ability) it may, but in very broad cases it may not do. We cannot have that as a requirement for a representation. Thus, contrary to what I wrote at one point previously, it is plausible to have a structure or framework title and definition that is independent of an ability title or definition. It's just that you can't use one as the other, and it's more usual, in less broad cases, to have the structure and the ability concept closely related, perhaps even sharing the title. The structure should not, however, have a long description anything like that of an ability concept.
Thus, the structure “gives structure to” the concept, and the concept “is structured by” the structure.
Perhaps it is worth remembering that a major envisaged use of these structures, in their structured electronic (rather than less explicitly formatted document) form, is to give learners a set of discrete concepts to which evidence can apply, which can be self-assessed or assessed by others, which can be claimed, or required. At the least, some kind of “container” element is needed, within which an ability either is or is not contained. The container seems to be exactly the explicit framework structure. In the three-layer example above, the definition, identifier and title of ability A can remain the same, while the framework structure can change from containing, in the first case, A, B, C, P, Q, R, X, Y and Z, to containing, in the second case, A, P, Q, R, X, Y and Z. Applied to pizza making, the frameworks would have to be given better titles than “structure for baking pizzas (layers)” and “structure for baking pizzas (flat)”!
I'd like to conclude by pointing out the trade-off involved in taking the different paths.
- if you proliferate identifiers by changing them each time the structure changes, you'll need extra mechanisms to pin together different “versions” of ability definitions that differ only in their structure, or the extent of their structure, not in their substance or content;
- if you want to keep the same ability identifier when the structure changes but not the content, you'll need to take care to make long descriptions explicit; and to separate identifiers for structure and ability, pinning them closely together where appropriate.
With the latter option, I'm claiming that the ability identifier will naturally change in just those cases where one would want it to change. The cost is getting one's head round the difference between the ability and the structure. I firmly believe it is worth the effort for system designers to do this, so that the software can handle things properly behind the scenes, while not needing to trouble the end users with thinking about these matters. I'm also suggesting that the requirements of the latter option embody good practice, where those of the former do not.
This gives us one step forward on the structure diagram from the last two posts, 12 and 13.

Next I'll firm up on why allowing optionality in competences structures is a good idea, before going on to saying more about how to represent level attributions and level definitions.
2011-07-04 (15th in my logic of competence series)
Optional parts of competence
Discussion suggests that it is important to lay out the argument in favour of optionality in competence structures. I touched on this in post 7 of this series, talking about component parts, and styles or variants. But here I want to challenge the “purist” view that competence structures should always be definite, and never optional.
Leaving aside the inevitable uncertainty at the edges of ability, what a certain person can do is at least reasonably definite at any one time. If you say that a person can do “A or B or C”, it is probably because you have not found out which of them they can actually do. A personal claim explicitly composed of options would seem rather strange. People do, however, claim broader abilities that could be fulfilled in diverse narrower ways, without always specifying which way they do it.
Requirements, on the other hand, involve optionality much more centrally. Because people vary enormously in the detail of their abilities, it is not hard to fall into the trap of over-specifying job requirements, to the point where no candidate fits the exact requirements (unless you have, unfairly, carefully tailored the requirements to fit one particular person who you want to take on). So, in practice, many detailed job requirements include reasonable alternatives. If there are three different reasonable ways to build a brick wall, I am unlikely to care too much which one a builder uses. What I care about is whether the end product is of good quality. There are not only different techniques in different trades and crafts, but also different approaches in professional life, to management, or even to competences like surgery.
When we call something an ability or competence, sometimes we ignore the fact that different people have their own different styles of doing it. If you look in sufficiently fine detail, it could be that most abilities differ between different people. Certainly, the nature of complex tasks and problems means that, if people are not channelled strongly into using a particular approach, each person's exploration, and the different past experiences they bring, tends to lead to idiosyncrasies in the approach that they develop. Perhaps because people are particularly complex, managing people is one area that tends to be done in different ways, whether in the workplace or the school classroom. There are very many other examples of complex tasks done in (sometimes subtly) different ways by different people.
Another aspect of optionality is apparent in recruitment practice. Commonly, some abilities are classified as “essential”, others are “desirable”. The desirable ones are regarded as optional in terms of the judgement of whether someone is a good candidate for a particular role. Is that so different from “competence”? One of the basic foundations I proposed for the logic of competence is the formulation of job requirements.
Now, you may still argue that this does not mean that the definitions of particular abilities necessarily involve optionality. And it is true that it is possible to avoid the appearance of optionality, by never naming an ability together with its optional constituents. But perhaps the question here should not be whether such purity is possible, but rather what approach most naturally represents how people think, while not straying from other important insights. Purity is often reached only at the cost of practicality and understandability.
What does seem clear is that if we allow optionality in structural relationships for competence definitions, we can represent with the same structure personal claims without options, requirements that have options, and teaching, learning, assessment and qualification structures, which very often have options. If, on the other hand, we were to choose a purist approach that disallows optionality for abilities as pure concepts, we would still have to introduce an extra optionality mechanism for requirements, for teaching structures, for learning structures, for assessment structures, and for qualification structures, all of which are closely related to competence. Having the possibility of optionality does not mean that optionality is required, so an approach that allows optionality will also cover non-optional personal abilities and claims. So why have two mechanisms when one will serve perfectly well?
To have optionality in competence structures, we have to have two kinds of “part of” relationship, or if you prefer, two kinds of both broader and narrower. Here I'll call them “necessary part of” and “optional part of”, because to me, “necessary” sounds more generally applicable and natural than “mandatory” or “compulsory”.
If you have a wider ability, and if in some authority's definition that wider ability has necessary narrower parts, then (according to that authority) you must have all the necessary parts. It doesn't always work the other way round. If you have all the necessary parts, you may still need some of the optional parts; or indeed there may be some aspects of the broader ability that are not easy to represent as distinct narrower parts at all.
Obviously, the implications between a wider ability and its optional parts are less definite. The way that optional parts contribute towards the whole broader ability is something that may best be described in the description of the broader ability. OK, you may be tempted to write down some kind of formula, but do you really need to write a machine-readable formula, when it may not be so clear in reality what the optional parts are? My guess is, no. So stating the relationship in a long description would seem to me to work just fine.
Real examples of this kind of optionality are easy to find in course and qualification structures, but I can't find good simple examples in real-life competence structures. Instead, to give a flavour of an example, consider the pizza making example I wrote about in post 13.
I might claim I can make pizza. That is a reasonable claim, but there are many options about what exactly it might entail. Someone might want to employ a pizza maker, and it will be entirely up to them what exactly they require — they may or may not accept a number of different options.
In the fictional BSHAPM pizza making framework, there is optionality in the sense that there are three different ways to prepare dough, two different ways to form the dough into the base, and three different ways of baking the pizza. Defining each of these as options still leaves it open how they are used. For example, it may be that the BSHAPM award a certificate for demonstrating just one of each set of options, but a diploma for demonstrating all of the options in each case. I could claim some or all of the options. An employer could well require some or all of the options.
In the end, marking parts as options in a framework does not limit how they can be used. You could still superimpose necessity and optionality on top of a framework that was constructed simply in terms of broader and narrower. What it does do is allow tools that use a framework to calculate when something deemed necessary has been missed out, as sketched below. But marking a part as necessary doesn't mean that a training course, or an assessment, necessarily has to cover that part. The training or assessment can be deliberately incomplete. Where optionality is really useful is in specifying job requirements, course and assessment coverage, and the abilities that a qualification signifies in general, leaving it to individual records to specify which options have been covered by a particular person, and leaving it to individual people to claim particular sets of detailed abilities.
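As an illustrative sketch of that kind of calculation, using made-up identifiers based on the BSHAPM pizza example (the relationship names “necessary” and “optional” are mine):

```python
# A sketch of checking a record of coverage against a framework that
# marks parts as necessary or optional. Identifiers are made up.

framework = {
    ("make_pizza", "necessary", "prepare_dough"),
    ("make_pizza", "necessary", "form_base"),
    ("make_pizza", "necessary", "bake_pizza"),
    ("bake_pizza", "optional", "bake_wood_fired"),
    ("bake_pizza", "optional", "bake_electric"),
    ("bake_pizza", "optional", "bake_gas"),
}

def missing_necessary(ability, covered, framework):
    """List the necessary parts of `ability` not in the covered set."""
    return [part for (whole, kind, part) in framework
            if whole == ability and kind == "necessary"
            and part not in covered]

print(missing_necessary("make_pizza", {"prepare_dough", "bake_pizza"}, framework))
# -> ['form_base']
```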
Perhaps the argument concludes there: it is useful (I have argued) for people constructing competence structures to be able to indicate which parts are logically necessary and which parts are logically optional. Other tools and applications can build on that.
Does that make sense? Over to you, readers…
Meanwhile, next, I'll write something about the logic of National Occupational Standards, because they are a very important source of existing structured competence-related definitions.
2011-08-18 (16th in my logic of competence series)
The logic of National Occupational Standards
I've mentioned NOSs (UK National Occupational Standards) many times in earlier posts in this series (3, 5, 6, 8, 9, 12, 14), but last week I was fortunate to visit a real SSC (Sector Skills Council) — LANTRA — to talk to some very friendly and helpful people there and elsewhere, and to reflect further on the logic of NOSs.
One thing that became clear is that NOSs have specific uses, not exactly the same as some of the other competence-related concepts I've been writing about. Following this up, on the UKCES website I soon found the very helpful “Guide to Developing National Occupational Standards” (pdf) by Geoff Carroll and Trevor Boutall, written quite recently: March 2010. For brevity, I'll refer to this as “the NOS Guide”.
The NOS Guide
I won't review the whole NOS Guide, beyond saying that it is an invaluable guide to current thinking and practice around NOSs. But I will pick out a few things that are relevant: to my discussion of the logic of competence; to how to represent the particular features of NOS structures; and towards how we represent the kinds of competence-related structures that are not part of the NOS world.
The NOS Guide distinguishes occupational competence and skill. Its definitions aren't watertight, but generally they are in keeping with the idea that a skill is something that is independent of its context, not necessarily in itself valuable, whereas an occupational competence in a “work function” involves applying skills (and knowledge). Occupational competence is “what it means to be competent in a work role” (page 7), and this seems close enough to my formulation “the ability to do what is required”, and to the corresponding EQF definitions. But this doesn't help greatly in drawing a clear line between the two. What is considered a work function might depend not only on the particularities of the job itself, but also on the detail in which it has been analysed for defining a particular job role. In the end, while the distinction makes some sense, the dividing line still looks fairly arbitrary, which justifies my support for not making a distinction in representation. This seems confirmed by the fact that, later, when the NOS Guide discusses Functional Analysis (more of which below), the competence/skill distinction is barely mentioned.
The NOS Guide advocates a common language for representing skill or occupational competence at any granularity, ideally involving one brief sentence, containing:
- at least one action verb;
- at least one object for the verb;
- optionally, an indication of context or conditions.
Some people (including M. David Merrill, and following him, Lester Gilbert) advocate detailed vocabularies for the component parts of this sentence. While one may doubt the practicality of ever compiling complete general vocabularies, perhaps we ought to allow at least for the possibility of representing verbs, objects and conditions distinctly, for any particular domain, represented in a domain ontology (a sketch follows below). If it were possible, this would help with:
- ensuring consistency and comprehensibility;
- search and cross-referencing;
- revision.
But it makes sense not to make these structures mandatory, as most likely there are too many edge cases.
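For a domain where such structuring is practical, a minimal sketch might look like this; the class and field names are my own, not drawn from any agreed vocabulary:

```python
# A sketch of the short-sentence pattern with verbs, objects and
# optional conditions held distinctly. Names are illustrative only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AbilityStatement:
    verbs: List[str]                 # at least one action verb
    objects: List[str]               # at least one object for the verb
    conditions: List[str] = field(default_factory=list)  # optional context

    def sentence(self) -> str:
        parts = [" and ".join(self.verbs), " and ".join(self.objects)]
        return " ".join(parts + self.conditions)

print(AbilityStatement(["maintain"], ["the separation of aircraft"],
                       ["on or near the ground"]).sentence())
# -> maintain the separation of aircraft on or near the ground
```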
The whole of Section 2 of the NOS Guide is devoted to what the authors refer to as “Functional Analysis”. This involves identifying a “Key Purpose”, the “Main Functions” that need to happen to achieve the Key Purpose, and subordinate to those, the possible NOSs that set out what needs to happen to achieve each main function. (What is referred to in the NOS Guide as “a NOS” has also previously been called a “Unit”, and for clarity I'll refer to them as “NOS units”.) Each NOS unit in turn contains performance criteria, and necessary supporting “knowledge and understanding”. However, these layers are not rigid. Sometimes, a wide-reaching purpose may be analysed by more than one layer of functions, and sometimes a NOS unit is divided into elements.
It makes sense not to attempt to make absolute distinctions between the different layers. (See also my post #14.) For the purposes of representation, this implies that each competence concept definition is represented in the same way, whichever layer it might be seen as belonging to; layers are related through “broader” and “narrower” relationships between the competence concepts, but different bodies may distinguish different layers. In eCOTOOL particularly, I've come to call competence concept definitions, in any layer, “ability items” for short, and I'll use this terminology from here.
One particularly interesting section of the NOS Guide is its Section 2.9, where attention turns to the identification of NOS units themselves, as the component parts of the Main Functions. In view of the authority of this document, it is highly worthwhile studying what the Guide says about the nature of NOS units. Section 2.9 directly tackles the question of what size a NOS should be. Four relevant points are made, of which I'll distinguish just two.
First, there is what we could call the criterion of individual activity. The Guide says: “NOS apply to the work of individuals. Each NOS should be written in such a way that it can be performed by an individual staff member.” I look at this both ways for complementary views. When two aspects of a role may reasonably and justifiably be performed separately by separate individuals, there should be separate NOS units. Conversely, when two aspects of a role are practically always performed by the same person, they naturally belong within the same NOS unit.
Second, I've put together manageability and distinctness. The Guide says that, if too large, the “size of the resulting NOS … could result in a document that is quite large and probably not well received by the employers or staff members who will be using them”, and also that it matters “whether or not things are seen as distinct activities which involve different skills and knowledge sets.” These seem to me both to be to do with fitting the size of the NOS unit to human expectations and requirements. In the end, however, the size of NOS units is a matter of good practice, not formal constraint.
Section 3 of the NOS Guide deals with using existing NOS units, and given the good sense of reuse, it seems right to discuss this before detailing how to create your own. The relationship between the standards one is creating and existing NOS units could well be represented formally. Existing NOS units may be
- “imported” as is, with the permission of the originating body
- “tailored”, that is modified slightly to suit the new context, but without any substantive change in what is covered (again, with permission)
- used as the basis of a new NOS unit.
In the first two cases, the unit title remains the same; but in the third case, where the content changes, the unit title should change as well. Interestingly, there seems to be no formal way of stating that a new NOS unit is based on an existing one, but changed too much to be counted as “tailored”.
Section 4, on creating your own NOSs, is useful particularly from the point of view of formalising NOS structures. The “mandatory NOS components” are set out as:
- Unique Reference Number
- Title
- Overview
- Performance Criteria
- Knowledge and Understanding
- Technical Data
and I'll briefly go over each of these here.
It would be so easy, in principle, to recast a Unique Reference Number as a URI! However, the UKCES has not yet mandated this, and no SSC seems to have taken it up either. (I'm hoping to persuade some.) If a URI was also given to the broader items (e.g. key purposes and main functions) then the road would be open to a “linked data” approach to representing the relationships between structural components.
Title is standard Dublin Core, while Overview maps reasonably to dcterms:description.
Performance criteria may be seen as the finest granularity ability items represented in a NOS, and are strictly parts of NOS units. They have the same short sentence structure as both NOS units and broader functions and purposes. In principle, each performance criterion could also have its own URI. A performance criterion could then be treated like other ability items, and further analysed, explained or described elsewhere. An issue for NOSs is that performance criteria are not identified separately, and therefore there is no way within a NOS structure to indicate similarity or overlap between performance criteria appearing in different NOS units, whether or not the wording is the same. On the other hand, if NOS structures could give URIs to the performance criteria, they could be reused, for example to suggest that evidence for one NOS unit would also provide useful evidence for a different NOS unit.
Performance criteria within NOS units need to be valid across a sector. Thus they must not embody methods, etc., that are fine for one typical employer but wrong for another. They must also be practically assessable. These are reasons for avoiding evaluative adverbs, like the Guide's example “promptly”, which may be evaluated differently in different contexts. If there are going to be contextual differences, they need to be more clearly signalled by referring e.g. to written guidance that forms part of the knowledge required.
Knowledge and understanding are clearly different from performance criteria. Items of knowledge are set out like performance criteria, but separately, in their own section within a NOS unit. As hinted just above, including explicit knowledge means that a generalised performance criterion can often work, with the context-dependent knowledge factored out, in places where there would otherwise be no common approach to assessment.
In principle, knowledge can be assessed, but the methods of assessment differ from those for performance criteria. Action verbs such as “state”, “recall”, “explain”, “choose” (on the basis of knowledge) might be introduced, but perhaps are not absolutely essential, in that a knowledge item may be assessed on the basis of various behaviours. Knowledge is then treated (by eCOTOOL and others) as another kind of ability item, alongside performance criteria. The different kinds of ability item may be distinguished — for example following the EQF, as knowledge, skills, and competence — but there are several possible categorisations.
The NOS Guide gives the following technical data as mandatory:
- the name of the standards-setting organisation
- the version number
- the date of approval of the current version
- the planned date of future review
- the validity of the NOS: “current”; “under revision”; “legacy”
- the status of the NOS: “original”; “imported”; “tailored”
- where the status is imported or tailored, the name of the originating organisation and the Unique Reference Number of the original NOS.
These could very easily be incorporated into a metadata schema. For imported and tailored NOS units, a way of referring to the original could be specified, so that web-based tools could immediately jump to the original for comparison. The NOS Guide goes on to give more optional parts, each of which could be included in a metadata schema as optional.
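Just as an illustrative sketch, such a schema might look like the following; the field names are mine, not an agreed standard:

```python
# A sketch of the mandatory technical data as a metadata schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class NOSTechnicalData:
    standards_setting_organisation: str
    version_number: str
    approval_date: str                 # date of approval of the current version
    planned_review_date: str           # planned date of future review
    validity: str                      # "current" | "under revision" | "legacy"
    status: str                        # "original" | "imported" | "tailored"
    # required only where the status is imported or tailored:
    originating_organisation: Optional[str] = None
    original_reference_number: Optional[str] = None  # URN of the original NOS
```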
Issues emerging from the NOS Guide
One of the things that is stressed in the NOS Guide (e.g. page 32) is that the Functional Analysis should result in components (main functions, at least) that are both necessary and sufficient. That's quite a demand — is it realistic, or could it be characterised as reductionist?
Optionality
The issue of optionality has been covered in the previous post in this series. Clearly, if NOS structures are to be necessary and sufficient, logically there can be no optionality. It seems that, practically, the NOS approach avoids optionality in two complementary ways. Some options are personal ways of doing things, at levels more finely grained than NOS units. Explicitly, NOS units should be written to be inclusive of the diversity of practice: they should not prescribe particular behaviours that represent only some people's ways of doing things. Other options involve broader granularity than the NOS unit. The NOS Guide implies this in the discussion of tailoring. It may be that one body wants to create a NOS unit that is similar to an existing one. But if the “demand” of the new version NOS unit is not the same as the original, it is a new NOS unit, not a tailored version of the original one.
The NOS Guide does not offer any way of formally documenting the relationship between variant ways of achieving the same aim, or function (other than, perhaps, simple reference). This may lead to some inefficiencies down the line, when people recognise that achieving one NOS unit is really good evidence for reaching the standard of a related NOS unit, but there is no general and automatic way of documenting that or taking it into account. We should, I suggest, be aiming at an overall structure, and strategy, that documents as many of the relationships as we can reliably represent. This suggests allowing for optionality in an overall scheme, but leaving it out for NOSs.
Levels and assessability
The other big issue is levels. The very idea of level is somehow anathema to the NOS view. A person either has achieved a NOS, and is competent in the area, or has not yet achieved that NOS. There is no provision for grades of achievement. Compare this with the whole of the academic world, where people almost always give marks and grades, comparing and ranking people's performance. The vocational world does have levels — think of the EQF levels, that are intended for the vocational as well as the academic world — but often in the vocational world a higher level is seen as the addition of other separate skills or occupational competences, not as improving levels of the same ones.
A related idea came to me while writing this post. NOSs rightly and properly emphasise the need to be assessable — to have an effective standard, you must be able to tell if someone has reached the standard or not — though the assessment method doesn't have to be specified in advance. But there are many vaguer competence-related concepts. Take “communication skills” as a common example. It is impossible to assess whether someone has communication skills in general, without giving a specification of just what skills are meant. Every wakeful person has some ability to communicate! But we frequently see cases where that kind of unassessably vague concept is used as a heading around which to gather evidence. It does make sense to ask a person about evidence for their “communication skills”, or to describe them, and then perhaps to assess whether these are adequate for a particular job or role.
But then, thinking about it, there is a correspondence here. A concept that is too vague to assess is just the kind of concept for which one might define (assessable) levels. And if a concept has various levels, it follows that whether a person has the (unlevelled) concept cannot be assessed in the binary way of “competent” and “not yet competent”. This explains why the NOS approach does not have levels, as levels would imply a concept that cannot be assessed in the required binary way. Rather than call unlevelled concepts “vague”, we could just call them something like “not properly assessable”, implying the need to add extra detail before the concept becomes assessable. That extra detail could be a whole level scheme, or simply a specification of a single-level standard (i.e. one that is simply reached or not yet reached).
In conclusion, I cannot see a problem with specifying a representation for skill and competence structures that includes non-assessable concepts, along with levels as one way of detailing them. The “profile” for NOS use can still explicitly exclude them, if that is the preferred way forward.
Update 2011-08-22 and later
After talking further with Geoff Carroll I clarified above that NOSs are to do specifically with occupational competence rather than, e.g., learning competence. And having been pushed into this particular can of worms, I'd better say more about assessability to get a clear run-up to levels.
2011-08-31 (17th in my logic of competence series)
The logic of competence assessability
The discussion of NOSs in the previous post clearly brought in assessability. Actually, assessment has been on the agenda right from the start of this series: claims and requirements are for someone “good” for a job or role. How do we assess what is “good” as opposed to “poor”? The logic of competence partly relies on the logic of assessability, so the topic deserves a closer look.
“Assessability” isn't a common word. I mean, as one might expect, the quality of being assessable. Here, this applies to competence concept definitions. Given a definition of skill or competence, will people be able to use that definition to consistently assess the extent to which an individual has that skill or competence? If so, the definition is assessable. Particular assessment methods are usually designed to be consistent and repeatable, but in all the cases I can think of, a particular assessment procedure implies the existence of a quality that could potentially be assessed in other ways. So “assessability” doesn't necessarily mean that one particular assessment method has been defined, but rather that reliable assessment methods can be envisaged.
The contrast between outcomes and behaviours / procedures
One of the key things I learned from discussion with Geoff Carroll was the importance to many people of seeing competence in terms of assessable outcomes. The NOS Guide mentioned in the previous post says, among other things, that “the Key Purpose statement must point clearly to an outcome” and “each Main Function should point to a clear outcome that is valued in employment.” This is contrasted with “behaviours” — some employers “feel it is important to describe the general ways in which individuals go about achieving the outcomes”.
How much emphasis is put on outcomes, and how much on what the NOS Guide calls behaviours, depends largely on the job, and should determine the nature of the “performance criteria” written into a related standard. Moreover, I think that this distinction between “outcomes” and “behaviours” is quite close to the very general distinction between “ends” and “means” that crops up as a general philosophical topic. To illustrate this, I'll try giving two example jobs that differ greatly along this dimension: writing commercial pop songs; and flying commercial aeroplanes.
You could write outcome standards for a pop songwriter in terms of the song sales. It is very clear when a song reaches “the charts”, but how and why it gets there are much less clear. What is perhaps more clear is that the large majority of attempts to write pop songs result in — well — very limited success (i.e. failure). And although there are some websites that give e.g. Shortcuts to Hit Songwriting (126 Proven Techniques for Writing Songs That Sell), or How to Write a Song, other commentators e.g. in the Guardian are less optimistic: “So how do you write a classic hit? The only thing everyone agrees on is this: nobody has a bloody clue.”
The essence here is that the “hit” outcome is achieved, if it is achieved at all, through means that are highly individual. It seems unlikely that any standards setting organisation will write an NOS for writing hit pop songs. (On the other hand, some of the composition skills that underlie this could well be the subject of standards.)
Contrast this with flying commercial aeroplanes. The vast majority of flights are carried out successfully — indeed, flight safety is remarkable in many ways. Would you want your pilot to “do their own thing”, or try out different techniques for piloting your flight? A great deal of basic competence in flying is accuracy and reliability in following set procedures. (Surely set procedures are essentially the same kind of thing as behaviours?) There is a lot of compliance, checking and cross-checking, and little scope for creativity. Again it is interesting to note that there don't seem to be any NOSs for airline pilots. (There are for ground and cabin staff, maintained by GoSkills. In the “National Occupational Standards For Aviation Operations on the Ground, Unit 42 – Maintain the separation of aircraft on or near the ground”, out of 20 performance requirements, no fewer than 11 start “Make sure that…”. Following procedures is explicitly a large part of other related NOSs.)
However, it is clear that there are better and worse pop songwriters, and better and worse pilots. One should be able to write some competence definitions in each case that are assessable, even if they might not be worth making into NOSs.
What about educational parallels for these, as most of school performance is assessed? Perhaps we could think of poetry writing and mathematics. Probably much of what is good in poetry writing is down to individual inspiration and creativity, tempered by some conventional rules. On the other hand, much of what is good in mathematics is the ability to remember and follow the appropriate procedures for the appropriate cases. Poetry, closely related to songwriting, is mainly to do with outcomes, and not procedures — ends, not means; mathematics, closer to airline piloting, is mainly to do with procedures, with the outcome pretty well assured as long as you follow the appropriate procedure correctly.
Both extremes of this “outcome” and “procedure” spectrum are assessable, but they are assessable in different ways, with different characteristics.
- Outcome-focused assessment (getting results, main effects, “ends”) allows variation in the component parts that are not standardised. What may be specified are the incidental constraints, or what to avoid.
- Assessment on procedures and conformance to constraints (how to do it properly, “means”, known procedures that minimise bad side effects) tends to have little variability in component procedural parts. As well as airline pilots, we may think of train drivers, power plant supervisors, captains of ships.
Of course, there is a spectrum between these extremes, with no clear boundary. Where the core is procedural conformance, handling unexpected problems may also feature (often trained through simulators). Coolness under pressure is vital, and could be assessed. We also have to face the philosophical point that someone's ends may be another's means, and vice versa. Only the most menial of means cannot be treated as an end, and only the greatest ends cannot be treated as a means to a greater end.
Outcomes are often quantitative in nature. The pop song example is clear — measures of songs sold (or downloaded, etc.) allow songwriters to be graded into some level scheme like “very successful”, “fairly successful”, “marginally successful” (or whatever levels you might want to establish). There is no obvious cut-off point for whether you are successful as a hit songwriter, and that invites people to define their own levels. On the other hand, conformance to defined procedures looks pretty rigid by comparison. Either you followed the rules or you didn't. It's all too clear when a passenger aeroplane crashes.
But here's a puzzle for National Occupational Standards. According to the Guide, NOSs are meant to be to do with outcomes, and yet they admit no levels. If they acknowledged that they were about procedures, perhaps together with avoiding negative outcomes, then I could see how levels would be unimportant. And if they allowed levels, rather than being just “achieved” or “not yet achieved” I could see how they would cover all sorts of outcomes nicely. What are we to do about outcomes that clearly do admit of levels, as do many of the more complex kind of competences?
The apparent paradox is that NOSs deny the kind of level system that would allow them properly to express the kind of outcomes that they aspire to representing. But maybe it's no paradox after all. It seems reasonable that NOSs actually just describe the known standards people need to reach to function effectively in certain kinds of roles. That standard is a level in itself. Under that reading, it would make little sense for a NOS to be subject to different levels, as it would imply that the level of competence for a particular role is unknown — and in that case it wouldn't be a standard.
Assessing less assessable concepts
Having discussed assessable competence concepts from one extreme to the other, what about less assessable concepts? We are mostly familiar with the kinds of general headings for abilities that you get with PDP (personal/professional development planning) like teamwork, communication skills, numeracy, ICT skills, etc. You can only assess a person as having or not having a vague concept like “communication skills” after detailing what you include within your definition. With a competence such as the ability to manage a business, you can either assess it in terms of measurable outcomes valued by you (e.g. the business is making a profit, has grown — both binary — or perhaps some quantitative figure relating to the increase in shareholder value, or a quantified environmental impact) or in terms of a set of abilities that you consider make up the particular style of management you are interested in.
These less assessable concepts are surely useful as headings for gathering evidence about what we have done, and what kinds of skills and competences we have practised, which might be useful in work or other situations. It looks to me as though they can be made more assessable in one of a few ways.
- Detailing assessable component parts of the concept, in the manner of NOSs.
- Defining levels for the concept, where each level definition gives more assessable detail, or criteria.
- Defining variants for the concept, each of which is either assessable, or broken down further into assessable component parts.
- Using a generic level framework to supply assessable criteria to add to the concept.
Following this last possibility, there is nothing to stop a framework from defining generic levels as a shorthand for what needs to be covered at any particular level of any competence. While NOSs don't have to define levels explicitly, it is still potentially useful to be able to have levels in a wider framework of competence.
Note that generic levels designed to add assessability to a general concept may not themselves be assessable without the general concept.
Assessability and values in everyday life
Defined concepts, standards, and frameworks are fine for established employers in established industries, who may be familiar with and use them, but what about for other contexts? I happen to be looking for a builder right now, and while my general requirements are common enough, the details may not be. In the “foreground”, so to speak, like everyone else, I want a “good” quality job done within a competitive time interval and budget. Maybe I could accept that the competence I require could be described in terms of NOSs, while price and availability are to do with the market, not competence per se. But when it comes to more “background” considerations, it is less clear. How do I rate experience? Well, what does experience bring? I suspect that experience is to do with learning the lessons that are not internalised in an educational or training setting. Perhaps experience is partly about learning to avoid “mistakes”. But, what counts as mistakes depends on one's values. Individuals differ in the degree to which they are happy with “bending rules” or “cutting corners”. With experience, some people learn to bend rules less detectably, others learn more personal and professional integrity. If someone's values agree with mine, I am more likely to find them pleasant.
There's a long discussion here, which I won't go into deeply, involving professional associations, codes of conduct and ethics, morality, social responsibility and so on. It may be possible to build some of these into performance criteria, but opinions are likely to differ. Where a standard talks about procedural conformance, it can sometimes be framed as knowing established procedures and then following them. A generic competence at handling clients might include the ability to find out what the client's values are, and to go along with those to the extent that they are compatible with one's own values. Where they aren't, a skill in turning away work needs to be exercised in order to achieve personal integrity.
Conclusions
It's all clearly a complex topic, more complex indeed than I had reckoned back last November. But I'd like to summarise what I take forward from this consideration of assessability.
- Less assessable concepts can be made more assessable by detailing them in any of several ways (see above).
- Goals, ends, aims, outcomes can be assessed, but say little about constraints, mistakes, or avoiding occasional problems. In common usage, outcomes (particularly quantitative ones) may often have levels.
- Means, procedures, behaviours, etc. can be assessed in terms of (binary) conformity to prescribed pattern, but may not imply outcomes (though constraints may be able to be formulated as avoidance outcomes).
- In real life we want to allow realistic competence structures with any of these features.
In the next post, I'll take all these extra considerations forward into the question of how to represent competence structures, partly through discussing more about what levels are, along with how to represent them. Being clear about how to represent levels will leave us also clearer about how to represent the less precise, non-assessable concepts.
2011-09-06 (18th in my logic of competence series)
Representing level relationships
Having prepared the ground, I'm now going to address in more detail how levels of competence can best be represented, and the implications for the rest of representing competence structures. Levels can be represented similarly to other competence concept definitions, but they need different relationships.
I've written about how giving levels to competence reflects common usage, at least for competence concepts that are not entirely assessable, and that the labels commonly used for levels are not unique identifiers; about how defining levels of assessment fits into a competence structure; and lately about how defining levels is one approach to raising the assessability of competence concepts.
Shortly after first writing this, I put together the ideas on levels more coherently in a paper and a presentation for the COME-HR conference, Brussels.
Some new terms
Now, to take further this idea of raising assessability of concepts, it would be useful to define some new terms to do with assessability. It would be really good to know if anyone else has thought along this direction, and how their thoughts compare.
First, may we define a binarily assessable concept, or “binary” for short, as a concept typically formulated as something that a person either has or does not have, and where there is substantial agreement between assessors over whether any particular person actually has or does not have it. My understanding is that the majority of concepts used in NOSs are intended to be of this type.
Second, may we define a rankably assessable concept, or “rankable” for short, as a concept typically formulated as something a person may have to varying degrees, and where there is substantial agreement between assessors over whether two people have a similar amount of it, or who has more. IQ might be a rather old-fashioned and out-of-favour example of this. Speed and accuracy of performing given tasks would be another very common example (and widely used in TV shows), though that would be more applicable to simpler skills than occupational competence. Sports have many scales of this kind. On the occupational front, a rankable might be a concept where “better” means “more additional abilities added on”, while still remaining the same basic concept. Many complex tasks have a competence scale, where people start off knowing about it and being able to follow someone doing it, then perform the tasks in safe environments under supervision, working towards independent ability and mastery. In effect, what is happening here is that additional abilities are being added to the core of necessary understanding.
Last, may we define an unordered assessable concept, or “unordered” for short, as any concept that is not binary or rankable, but still assessable. For it to remain assessable despite possible disagreement about who is better, there at least has to be substantial agreement between assessors about the evidence which would be relevant to an assessment of the ability of a person in this area. In these cases, assessors would tend to agree about each other's judgements, though they might not come up with the same points. Multi-faceted abilities would be good examples: take management competence. I don't think there is just one single accepted scale of managerial ability, as different managers are better or worse at different aspects of management. Communication skills (with no detailed definition of what is meant) might be another good example. Any vague competence-related concept that is reasonably meaningful and coherent might fall into this category. But it would probably not include concepts such as “nice person”, where people would disagree even about what evidence would count in its support.
Defining level relationships
If you allow these new terms, definitions of level relationships can be more clearly expressed. The clearest and most obvious scenario is that levels can be defined as binaries related to rankables. Using an example from my previous post, success as a pop songwriter based on song sales/downloads is rankable, and we could define levels of success in that in terms of particular sales, hits in the top ten, etc. You could name the levels as you liked — for instance, “beginner songwriter”, “one hit songwriter”, “established songwriter”, “successful songwriter”, “top flight songwriter”. You would write the criteria for each level, and those criteria would be binary, allowing you to judge clearly which category would be attributed to a given songwriter. Of course, to recall, the inner logic of levels is that higher levels encompass lower levels. We could give the number 1 to beginner, up to number 5 for top flight.
To start formalising this, we would need an identifier for the “pop songwriter” ability, and then to create identifiers for each defined level. Part of a pop songwriter competence framework could be the definitions, along with their identifiers, and then a representation of the level relationships. Each level relationship, as defined in the framework, would have the unlevelled ability identifier, the level identifier, the level number and the level label.
If we were to make an information model of a level definition/relationship as an independent entity, this would mean that it would include:
- the fact that this is a level relationship;
- the levelled, binary concept ID;
- the framework ID;
- the level number;
- the unlevelled, rankable concept ID;
- the level label.
If this is represented within a framework, the link to the containing framework is implicit, so might not show clearly. But the need for this should be clear if a level structure is represented separately.
As well as defining levels for a particular area like songwriting, it is possible similarly (as many actual level frameworks do) to define a set of generic levels that can apply to a range of rankable, or even unordered, concepts. This seems to me to be a good way of understanding what frameworks like the EQF do. Because there is no specific unlevelled concept in such a framework, we have to make inclusion of the unlevelled concept within the information model optional. The other thing that is optional is the level label. Many levels have labels as well as numbers, but not all. The number, however, though it is frequently left out from some level frameworks, is essential if the logic of ordering is to be present.
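An illustrative sketch of such an information model, using the songwriter levels above (the names are mine, purely illustrative):

```python
# A sketch of a level definition/relationship as an independent
# entity. The record type itself captures the fact that this is a
# level relationship; the last two fields are the optional ones.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LevelRelationship:
    levelled_concept_id: str                     # the binary, levelled concept
    framework_id: str                            # needed when represented separately
    level_number: int                            # essential for the logic of ordering
    unlevelled_concept_id: Optional[str] = None  # omitted in generic level frameworks
    level_label: Optional[str] = None            # labels are common but not universal

established = LevelRelationship(
    levelled_concept_id="ex:songwriter-level-3",
    framework_id="ex:pop-songwriting-framework",
    level_number=3,
    unlevelled_concept_id="ex:pop-songwriter",
    level_label="established songwriter")
```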
Level attribution
A conviction that has been growing in me is that relationships for level attribution and level definition need to be treated separately. In this context, the word “attribution” suggests that a level is an attribute, either of a competence concept or of a person. It feels quite close to other sorts of categorisation.
Representing the attribution of levels is pretty straightforward. Whether levels are educational, professional, or developmental, they can be attributed to competence concepts, to individual claims and to requirements. Such an attribution can be expressed using the identifier of the competence concept, a relationship meaning “… is attributed the level …”, and an identifier for the level.
If we say that a certain well-defined and binarily assessable ability is at, say, EQF competence level 3, it is an aid to cross-referencing; an aid to locating that ability in comparison with other abilities that may be at the same or different levels.
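As a minimal sketch, such an attribution could be written down as a triple; all the URIs here are placeholders, not real identifiers:

```python
# A level attribution: the competence concept's identifier, a
# relationship meaning "is attributed the level", and the level's
# identifier. All URIs are illustrative placeholders.

attribution = ("http://example.org/ability/manage-accounts",
               "http://example.org/rel/isAttributedLevel",
               "http://example.org/EQF/competence/level3")
```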
A level can be attributed to:
- a separate competence concept definition;
- an ability item claimed by an individual;
- an ability item required in a job specification;
- a separate intended learning outcome for a course or course unit;
- a whole course unit;
- a whole qualification, though care needs to be exercised, as many qualifications have components at mixed levels.
An assessment can result in the assessor or awarding body attributing an ability level to an individual in a particular area. This means that, in their judgement, that individual's ability in the area is well described by the level descriptors.
Combining generic levels with areas of skill or competence
Let's look more closely at combining generic levels with general areas of skill or competence, in such a way that the combination is more assessable. A good example of this is associated with the Europass Language Passport (ELP) that I mentioned in post 4. The Council of Europe's “Common European Framework of Reference for Languages” (CEFRL), embodied in the ELP, makes little sense without the addition of specific languages in which proficiency is assessed. Thus, the CEFRL's “common reference levels” are not binarily assessable, just as “able to speak French” is not. The reference levels are designed to be independent of any particular language.
Thus, to represent a claim or a requirement for language proficiency, one needs both a language identifier and an identifier for the level. It would be very easy in practice to construct a URI identifier for each combination. The exact method of construction would need to be widely agreed, but just as an example, we could define a URI for the CEFRL – e.g. http://example.eu/CEFRL/
– and then binary concept URIs expressing levels could be constructed something like this:
http://example.eu/CEFRL/language/mode/level#number
where “language” is replaced by the appropriate IETF language tag; “mode” is replaced by one of “listening”, “reading”, “spoken_interaction”, “spoken_production” or “writing” (or agreed equivalents, possibly in other languages); “level” is replaced by one of “basic_user”, “independent_user”, “proficient_user”, “A1”, “A2”, “B1”, “B2”, “C1”, “C2”; and “number” is replaced by, say, 10, 20, 30, 40, 50 or 60, corresponding to A1 through to C2. (These numbers are not part of the CEFRL, but are needed for the formalisation proposed here.) A web service would be arranged where putting the URI into a browser (making an HTTP request) would return a page with a description of the level and the language, plus other appropriate machine-readable metadata, including links to components that are not binarily assessable in themselves. “Italian reading B1” could be a short description, generated by formula rather than separately, and a long description could also be generated automatically, combining the descriptions of the language, of reading, and of the level criteria.
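A small sketch of that construction in Python follows. The base URI and path scheme are taken from the example above; the function name and the dictionaries are my own illustrative choices (the broad levels like “basic_user” are omitted for brevity).

```python
# Constructing binary concept URIs for language/mode/level combinations,
# following the example scheme above. Base URI and mappings are illustrative.
CEFRL_BASE = "http://example.eu/CEFRL"
MODES = {"listening", "reading", "spoken_interaction",
         "spoken_production", "writing"}
LEVEL_NUMBERS = {"A1": 10, "A2": 20, "B1": 30, "B2": 40, "C1": 50, "C2": 60}

def cefrl_uri(language: str, mode: str, level: str) -> str:
    """Build the URI for one combination; language is an IETF language tag."""
    if mode not in MODES or level not in LEVEL_NUMBERS:
        raise ValueError("unrecognised mode or level")
    return f"{CEFRL_BASE}/{language}/{mode}/{level}#{LEVEL_NUMBERS[level]}"

print(cefrl_uri("it", "reading", "B1"))   # "Italian reading B1"
# http://example.eu/CEFRL/it/reading/B1#30
```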
In principle, a similar approach could be taken for any other level system. The defining authority would define URIs for all separate binarily assessable abilities, and publish a full structure expressing how each one relates to the others. Short descriptions of the combinations could simply combine the titles or short descriptions of each component. No new information is needed to combine a generic level with a specific area. With a new URI to represent the combination, a request for information about that combination can return information already available elsewhere about the generic level and the specific area. If a new URI for the combination is not defined, it is not possible to represent the combination formally. What one can do instead is to note a claim or a requirement for the generic level, and give the particular area in the description. This seems like a reasonable fall-back position.
Relating levels to optionality
Optionality was one of the less obvious features discussed previously, as it does not occur in NOSs. It's informative to consider how optionality relates to levels.
I'm not certain about this, but I think we would want to say that if a definition has optional parts, it is not likely to be binarily assessable, while levelled concepts normally are binarily assessable. A definition with optional parts is more likely to be rankable than binary, and it could even fail to be rankably assessable, being merely unordered instead. So, on the whole, defining levels should surely reduce, and ideally eliminate, optionality: levelled concepts should ideally have no optionality, or at least less than the “parent” unlevelled concept.
Proposals
So, in conclusion, here are my proposals for representing levels, as level-defining relations (sketched in code after this list).
- Use of levels Use levels as one way of relating binarily assessable concepts to rankable ones.
- The framework Define a set of related levels together in a coherent framework. Give this framework a URI identifier of its own. The framework may or may not include definitions of the related unlevelled and levelled concepts.
- The unlevelled concept Where the levels you define are levels of some more general concept, ensure that unlevelled concept has a single URI of its own. In a generic framework, there may be no such concept.
- The levels Represent each level as a competence concept in its own right, complete with short and long descriptions, and a URI as identifier.
- Level numbering Give each level a number, such that higher levels have higher numbers. Sometimes consecutive numbers from 0 or 1 will work, but if you think judgements of personal ability may lie in between the levels you define, you may want to choose numbers that make good sense to people who will use the levels.
- Level labels If you are trying to represent levels where labels already exist in common usage, record these labels as part of the structured definition of the appropriate level. Sometimes these labels may look numeric, but (as with UK degree classes) the numbers may be the wrong way round, so they really are labels, not level numbers. Labels are optional: if a separate label is not defined, the level number is used as the label.
- The level relationships These should be represented explicitly as part of the framework. This can either be separately, or within a hierarchical structure.
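Pulling the proposals together, a framework following them might look something like this sketch; all URIs, numbers, labels and descriptions are invented for illustration.

```python
# A level framework following the proposals above; everything is invented.
framework = {
    "id": "http://example.org/frameworks/songwriting-levels",
    "unlevelled_concept": "http://example.org/concepts/songwriting",  # may be absent
    "levels": [
        {"id": "http://example.org/concepts/songwriting/level#10",
         "number": 10,                  # higher levels have higher numbers
         "label": "beginner",           # optional; number used if absent
         "description": "Can write a simple song …"},
        {"id": "http://example.org/concepts/songwriting/level#20",
         "number": 20,
         "label": "accomplished",
         "description": "Can write songs in a range of styles …"},
    ],
}
```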
Representing level definitions allows me to add to the diagram that last appeared at the bottom of post 14, showing what should be there to represent levels. The diagram includes defining level relationships, but not yet attributing levels (which is more like other kinds of categorisation).
Now, some long-delayed extra thoughts on specificity, questions and answers related to competence.
2012-04-12 (19th in my logic of competence series.)
More and less specificity in competence definitions
Descriptions of personal ability can serve either as claims, like “This is what I am good at …”, or as answers to questions like “What are you good at?” or “can you … ?” In conversations — whether informally, or formally as in a job interview — the claims, questions, and answers may be more or less specific. That is a necessary and natural feature of communication. It is the implications of this that I want to explore here, as they bear on my current work, in particular including the InLOC project.
This is a new theme in my logic of competence series. Since the previous post in this series, I had to focus on completing the eCOTOOL competence model and managing the initial phases of InLOC, which left little time for following up earlier thinking. But there were ideas clearly evident in my last post in this series (representing level relationships), and now is the time for follow-up and development. The terms introduced there can be linked to this new idea of specificity. Simply: binarily assessable concepts are ones that are defined specifically enough for a yes/no judgement about a person's ability; rankably assessable concepts have an intermediate degree of specificity, and are complemented by level definitions; while unorderly assessable concepts are ones that are less specifically defined, requiring more specificity to be properly assessable. (See that previous post for explanation of those terms.) The least specific competence-related concepts are not properly assessable at all, but serve as tags or headings.
As well as giving weight and depth to this idea of specificity in competence definitions, in this post I want to explore the connection between competence definitions and answering questions. I think this will help to explain the ideas, because it is relatively straightforward to understand that questions and answers can be more or less specific.
Since the previous post in the series, my terminology has shifted slightly. The goals of InLOC — Integrating Learning Outcomes and Competences — have made it plain that we need to deal equally with learning outcomes and with competence or ability concepts. So I include “learning outcomes” more liberally, always meaning intended learning outcomes.
Job interviews
Imagine you are interviewing someone for a job. To make it more interesting, let's make it an informal one: perhaps a mutual business contact has introduced you to a promising person at a business event. Add a little pressure by imagining that you have just a few minutes to make up your mind whether you want to ask this person to go through a longer, formal process. How would you structure the interview, and what questions would you ask?
As I envisage the process, one would probably start off with quite general, less specific questions, and then go into more detail where appropriate, where it mattered. So, for instance, one might ask “are you a software developer?”, and if the answer was yes, go into more detail about languages, development environments, length of experience, type of experience, etc. etc. The useful detail in this case would depend entirely on the circumstances of the job. For a graduate to be recruited into a large company, what matters might be aptitude, as it would be likely that full training would be supplied (which you could perhaps see as a kind of technical “enculturation”). On the other hand, for a specialist to join a short-term high-stakes project, even small details might matter a lot, as learning time would probably be minimal.
In reality, most job interviews start, not from a blank sheet, but from the basis of a job advert, and an application form, or CV and covering letter. A job advert may specify requirements; an application form may contain specific questions for which answers are expected; but in the absence of an application form, a CV and covering letter need to try to answer, concisely, some of the key questions that would be asked first in an informal, unprepared job interview. This naturally explains the universal advice that CVs should be designed specifically for each job application. What you say about yourself unprompted not only reveals that information itself, but also says much about what you expect the other person to reckon as significant or interesting.
So, in the job interview, we notice the natural importance of varying specificity in descriptions and questions about abilities and experience.
Recruitment
This then carries over to the wider recruitment process. Potential employers often formulate a list of what is required of prospective employees, in terms of which abilities and experience are essential or desirable, but the detail and specificity of each item will naturally vary. The evidence for a less specific requirement may be assessed at interview with some quick general questions, but a more exacting requirement may want harder evidence such as a qualification, certificate or testimonial from an expert witness.
For example, in a regulated field such as pesticide use, which I wrote about recently, an employer might well want a prospective employee to have obtained a relevant certificate or qualification, so that they can legally do their job. Even when a certificate is not a legal requirement, some are widely asked for. A prospective sales employee with a driving licence, or an office employee with an ICDL, might be preferred over one without, and it would be perfectly reasonable for an employer to insist that non-native speakers had obtained a given certified level of proficiency in the principal workplace language. In each case, because the certificate is awarded only to people who have passed a carefully controlled test, the test result serves to answer many quite specific questions about the holder's abilities, as well as the potential legal fact of their being allowed to perform certain actions in regulated occupations.
Vocational qualifications often detail quite specifically what holders are able to do. This is clearly the intention of the Europass Certificate Supplement (ECS), and has been the approach in the UK, through the system of National Vocational Qualifications, relying on National Occupational Standards. So we could expect that employers with specific learning outcome or competence requirements may specify that candidates should have particular vocational qualifications; but what about less specific requirements? My guess is that those employers who have little regard for vocational qualifications are just those whose requirements are less specific. Time was when many employers looked only for a “good degree”, which in the UK often meant a “2:1”, an upper second class. This was supposed to answer generic questions, as typically the specific subject of the degree was not specified. Now there is a growing emphasis on the detail of the degree transcript or Europass Diploma Supplement (EDS), from which a prospective employer can read at least assessment results, if not yet explicit details of learning outcomes or competences. There is also an increasing trend towards making explicit the intended learning outcomes of courses at all levels, so the course information might be more informative than the transcript or EDS.
Interestingly, the CVs of many technical workers contain highly unspecific lists of programming languages that the individual implicitly claims, stating nothing about the detailed abilities and experience. These lists answer only the most general questions, and serve effectively only to open a conversation about what the person's actual experience and achievements have been in those programming languages. At least for human languages there is the increasingly used CEFR; there does not appear to be any such widely recognised framework for programming languages. Perhaps, in the case of programming languages, it would be clumsy and ineffective to give answers to more detailed questions, because the individual does not know what those detailed questions would be.
Specificity in frameworks
Frameworks seem to gravitate towards specificity. Given that some people want to know the answers to specific questions, this is quite reasonable; but where does that leave the expression of less specific requirements? For examples of curriculum frameworks, there is probably nowhere better than the American Achievement Standards Network (ASN). Here, as in many other places, learning outcomes are defined at only one or two levels. The ASN transcribes documents faithfully, then among many other things marks the “indexing status” of the various components. For an arbitrary example, see Earth and Space Science, which is a topic heading and not “indexable”. The heading below it just states what the topic is about, and is also not “indexable”. It is below this that the content becomes “indexable”, with first some less specific statements about what should be achieved by the end of fourth grade, broken down into the smallest components, such as Identify characteristics of soils, minerals, rocks, water, and the atmosphere. It looks like it is just the “indexable” resources that are intended to represent intended learning outcome definitions.
At fourth grade, this is clearly nothing to do with employment, but even so, identifying characteristics of soils etc. is something that students may or may not be able to do, and this is part of the less specifically defined (but still “indexable”) “understanding of the characteristics of earth materials”. It strikes me that the item about identifying characteristics would fit reasonably (in my scheme of the previous post) as a “rankably assessable” concept, and its parent item about understanding might be classified (in my scheme) as unorderly assessable.
How to represent varying specificity
Having pointed out some of the practical examples of varying specificity in definitions of learning outcome or competence, I turn to the important issue for work such as InLOC: providing some way of representing not only different levels of specificity, but also how those levels relate to one another.
An approach through considering questions and answers
Any concept that is related to learning outcomes or competence can provide the basis for questions of an individual. Some of these questions have yes/no answers; some invite answers on a scale; some invite a longer, less straightforward reply, or a short reply that invites further questions. A stated concept can be both the answer to a question, and the ground for further questions. So, to go back to some of the above examples, a CV might somewhere state “French” or “Java”. These might be answers to the questions “what languages have you studied?” or “what languages do you use?” They also invite further questions, such as “how well do you know …?”, or “how much have you used …, and in what contexts?”, or “how good are you at …?” – which, if there is an appropriate scale, could be reformulated as “what level is your ability in …?”
Questions could be found corresponding to the ASN examples as well. “Identify characteristics of soils, minerals, rocks, water, and the atmosphere” has the same format that allows “can you …?” or “I can …”. The less specific statement — “By the end of fourth grade, students will develop an understanding of the characteristics of earth materials,” — looks like it corresponds with questions more like “what do you understand about earth materials?”.
As well as “summative” questions, there are related questions that are used in other ways than assessment. “How confident are you of your ability in …?” and “is your ability in … adequate in your current situation?” both come to mind (stimulated by considerations in LUSID).
What I am suggesting here is that we can adapt some of the natural properties of questions and answers to fit definitions of competence and ability. So what properties do I have in mind? Here is a provisional and tentative list, with a sketch in code after it.
- Questions can be classified as inviting one of four kinds of answer:
- yes or no;
- a value on a (predefined) scale;
- examples;
- an explanation that is more complex than a simple value.
- These types of answer probably need little explanation – many examples can readily be imagined.
- The same form of answer can relate to more than one question, but usually the answer will mean different things. To be fully and clearly understood, an answer should relate to just one question. Using the above example, “French” as the answer to “what languages have you studied?” means something substantially different from “French” as the answer to “what languages are you fluent in?”
- A more specific question may imply answers to less specific questions. For example, “what programming languages have you used in software development?” implies answers such as “software development” to the question “what competences do you have in ICT?” Many such implied questions and answers can be formulated. What matters in a particular framework is which of its other answers can be inferred.
- An answer to a less specific question may invite further more specific questions.
- Conversely to the example just above, if the question “what competences do you have in ICT?” includes the answer “software development”, a good follow-up question might be “what programming languages have you used in software development?” Similar patterns could be seen for any technical specialty. Often, answers like this may be taken from a known list of options. There are only so many languages, both human and computer.
- Where an answer is a rankable concept, questions about the level of that ability are invited. For instance, the question “what foreign languages can you speak?”, answered with “French” and “Italian”, invites questions such as “what is your European Language Passport level of ability in spoken interaction in French?”
- Where an answer has been analysed into its component parts, questions about each component part make sense. For example, if the answer to “are you able to clear sites for tree planting?”, following the LANTRA Treework NOS (2009) was “yes”, that invites the narrower implied questions set out in that NOS, like “can you select appropriate clearance methods …?” or “do you understand the potential impacts of your work on the environment …?”
- Unless the question is fully specific, admitting only the answers yes and no (and often even then), it is nearly always possible to ask further questions, and give further answers. But everyone's interest in detail stops sooner or later. The place to stop asking more specific questions is when the answer does not significantly affect the outcome you are looking for. And that varies between different interested parties.
- Questions may be equivalent to other questions in other frameworks. This will come out from the answers given. If the answers given by the same person in the same context are always the same for two questions, they are effectively equivalent. It is genuinely helpful to know this, as it means that one can save time not repeating questions.
- Answers to some questions may imply answers to other questions in different frameworks, without being equivalent. The answers may contain, or be contained by, their counterparts. This is another way of linking together questions from different frameworks, and saving asking unnecessary extra questions.
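To summarise the list, here is a tentative sketch of a question-and-answer model in Python; the class and enum names are mine, for illustration only.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class AnswerKind(Enum):
    """The four kinds of answer a question can invite (see the list above)."""
    YES_NO = auto()
    SCALE_VALUE = auto()   # a value on a predefined scale
    EXAMPLES = auto()
    EXPLANATION = auto()

@dataclass
class Question:
    text: str
    kind: AnswerKind
    follow_ups: list["Question"] = field(default_factory=list)

# An answer like "French" to the first question invites the second.
spoken = Question("What foreign languages can you speak?", AnswerKind.EXAMPLES)
spoken.follow_ups.append(Question(
    "What is your level of ability in spoken interaction in French?",
    AnswerKind.SCALE_VALUE))
```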
That covers a view of how to represent varying specificity in questions and answers, but not yet frameworks as they are at present.
Back to frameworks as they are at present
At present, it is not common practice to set out frameworks of competence or ability in terms of questions and answers, but only in terms of the concepts themselves. But, to me, it helps understanding enormously to imagine the frameworks as frameworks of questions, and the learning outcome or competence concepts as potential answers. In practice, all you see in the frameworks is the answers to the implied questions.
Perhaps this has come about through a natural process of doing away with unnecessary detail. The overall question in occupational competence frameworks is, “are you competent to do this job?”, so it can go unstated, with the title of the job standing in for the question. The rest of the questions in the framework are just the detailed questions about the component parts of that competence (see Carroll and Boutall's ideas of Functional Analysis in their Guide to Developing National Occupational Standards). The formulation with action verbs helps greatly in this approach. To take NOS examples from way back in the 3rd post in this series, the units themselves and the individual performance criteria share a similar structure. Less specifically, “set out and establish crops” relates both to the question “are you able to set out and establish crops” and the competence claim “I am able to set out and establish crops”. More specifically, “place equipment and materials in the correct location ready for use” can be prefixed with “are you able to …” for a question, or “I am able to …” as a claim. Where all the questions take a form that invites answers yes or no, one really does not need to represent the questions at all.
With a less uniform structure, one would need mentally to remove all the questions to get a recognisable framework; or conversely, to understand a framework in terms of questions, one needs to add in those implied questions. This is not as easy, and perhaps that is why I have been drawn to elaborating all those structuring relationships between concepts.
We are left in a place very close to where we were in the previous post. At simplest, we have the individual learning outcome or competence definitions (which are the answers) and the frameworks, which show how the answers connect up, without explicitly mentioning the questions themselves. The relations between the concepts can be factored out, and presented either together in the framework, or separately, together with the concepts they relate.
If the relationships are simply “broader” and “narrower”, things are pretty straightforward. But if we admit less specific concepts and questions, because the questions are not explicitly represented, the structure needs a more elaborate set of relationships. In particular, we have to make particular provision for rankable concepts and levels. I'll leave detailing the structures we are left with for later.
Before that, I'd like to help towards a better grasp of the ideas through the analogy with tourism.
2012-05-01 (20th in my logic of competence series.)
The logic of tourism as an analogy for competence
Modelling competence is too far removed from common experience to be intuitive. So I've been thinking of what analogy might help. How about the analogy of tourism? This may help particularly with understanding the duality between competence frameworks (like tourist itineraries) and competence concept definitions (like tourist destinations).
The analogy is helped by the fact that last week I was in Lisbon for the first time, at work (the CEN WS-LT and TC 353), but also more relevantly as a tourist. (If you don't know Lisbon, think of equivalent examples from a place you know better.) I'll start with the aspects of the analogy that seem to be most straightforward, and go on to more subtle features.
First things first, then: a tourist itinerary includes a list of destinations. This can be formalised as a guided tour, or left informal as a “things you should see” list given by a friend who has been there. A destination can be in any number of itineraries, or none. An itinerary has to include some destinations, but in principle it has no upper limit: it could be a very detailed itinerary that takes a year to properly acquaint a newcomer with the ins and outs of the city. Different itineraries for the same place may have more, or fewer, destinations within that place. They may or may not agree on the destinations included. If there were destinations included by a large majority of guides, another guide could select these as the “essential” Lisbon, or wherever. In this case, perhaps that would include visiting the Belem tower; the Castle of St George; Sintra; experiencing Fado; sampling the local food, particularly fish dishes; and a ride on one of the funicular trams that climb the steep hills. Or maybe not, in each case. There again, you could debate whether Sintra should be included in a guide to Lisbon, or just mentioned as a day trip.
A small itinerary could be made for a single destination, if desired. Some guides may just point you to a museum or destination as a whole; others may give detailed suggestions for what you should see within that destination. A cursory guide might say that you should visit Sintra; a detailed one might say that you really must visit the Castle of the Moors, as well as other particular places in Sintra. A very detailed guide might direct you to particular things to see in the Castle of the Moors itself.
It should be clear from the above discussion that a place to visit should not be confused with an itinerary for that place. Any real place has an unlimited number of possible itineraries for it. An itinerary for a city may include a museum; an itinerary for a museum may include a painting; there may sometimes even be guides to a painting that direct the viewer to particular features of that painting. The guide to the painting is not the painting; the guide to the museum is not the museum; the guide to the city is not the city.
There might also be guides that do not propose particular itineraries, but list many places you might go, and you select yourself. In these cases, some kind of categorisation might be used to help you select the places of interest to you. What period of history do they come from? Are they busy or quiet? What do they cost? How long do they take to visit? Or a guide with itineraries may also categorise attractions, and make them explicitly optional. Optionality might be particularly helpful in guided tours, so that people can leave out things of less interest.
A set of guides covering several whole places, not just one, may make comparisons across the different places. If you liked the Cathar castles in the South of France, you may like the Castle of the Moors in Sintra. Those who like stately homes, on the other hand, may be given other suggestions.
A guide to a destination may also contain more than an itinerary of included destinations within it. A guidebook may give historical or cultural background information, which goes beyond the description of the destinations. Guides may also propose a visit sequence, which is not inherent in the destinations.
The features I have described above are reasonably replicated in discussion of competence. A guide or itinerary corresponds to a competence framework; a destination corresponds to a competence concept. This is largely intended to throw further light on what I discussed in number 12 in this series, Representing the interplay between competence definitions and structures.
Differences
One difference is that tourist destinations have independent existence in the physical world, whereas competence concepts do not. It may therefore be easier to understand what is being referred to in a guide book, from a short description, than in a competence framework. Both guide book and competence framework may rely on context. When a guide book says “the entrance”, you know it means the entrance to the location you are reading about, or may be visiting.
Physical embodiment brings clarity and constraints. Smaller places may be located within larger places, and this is relatively clear. But it is less clear whether lesser competence concepts are part of greater competence concepts. What one can say (and this carries through from the tourism analogy) is that concepts are included in frameworks (or not), and that any concept may be detailed by (any number of) frameworks.
Competence frameworks and concepts are more dependent on the words used in description, and because a description necessarily chooses particular words, it is easy to confuse the concept with the framework if they use the same words. It is easy to use the words of a descriptive framework to describe a concept. It is not so common, though perfectly possible, to use the description of an itinerary as a description of a place. It is because of this greater dependence on words (compared with tourist guides) that it may be more necessary to clarify the context of a competence concept definition, in order to understand what it actually means.
Where the analogy with competence breaks down more seriously is that high stakes decisions rarely depend on exactly where someone has visited. But at a stretch of the imagination, they could: recruitment for a relief tour guide could depend on having visited all of a given set of destinations, and being able to answer questions about them. What high stakes promotes is the sense that a particular structure (as defined or adopted by the body controlling the high-stakes decisions) defines a particular competence concept. Despite that, I assert that the competence structure and the separate competence concept remain strictly separate kinds of thing.
Understanding the logic of competence through this analogy
The features of competence models that are illustrated here are these.
- Competence frameworks or structures may include relevant competence concepts, as well as other material. (See № 12.)
- Competence concept definitions may be detailed by a framework structure for that competence concept. Nevertheless the structure does not fully define the concept. (See № 12 and № 13.)
- Competence frameworks may include optional competences (as well as necessary or mandatory ones). (See № 15 and № 7.)
- Both frameworks and concepts may be categorised. (See also № 5.)
- Frameworks may contain sub-frameworks (just as itineraries may contain sub-itineraries).
- But frameworks don't contain concepts in the same way: they just include them (or not).
- A framework may be simply an unstructured list of defined concepts.
I hope that helps anyone to understand more of the logic of competence, and I hope that also helps InLOC colleagues come to consensus on the related matters.
2013-02-27 (21st in my logic of competence series.)
The pragmatics of InLOC competence logic
Putting together a good interoperability specification is hard, and especially so for competence. I've tried to work into InLOC as many of the considerations in this Logic of Competence series as I could, but these are all limited by the scope of a pragmatically plausible goal. My hypothesis is that it's not possible to have a spec that is at the same time both technically simple and flexible, and intuitively understandable to domain practitioners.
Here I'll write about why I believe that, and later follow on by finalising the pragmatics of the logic of competence as represented by InLOC.
Doing a specification like InLOC gives one an opportunity to attract all kinds of critique from people, much of it constructive. No attempts to do such a spec in the past have been great successes, and one wonders why that is. Some of the criticism I have heard has helped me to formulate the hypothesis above, and I'll try to explain my reasoning here.
Turn the hypothesis on its head. What would make it possible to have a spec that is technically simple, and at the same time intuitively understandable to domain practitioners? Fairly obviously, there would have to be a close correspondence between the objects of the domain of expertise, and the constructs of the specification.
For each reader, there may appear to be a simple solution. Skills, competences, learning outcomes, etc., have this structure — don't they? — and so one just has to reproduce that structure in the information model to get a workable interoperability spec that is intuitively understandable to people — well, like me. Well, “Not.”, as people now say as a one-word sentence.
Actually, there is great diversity in the ways people conceive of and structure learning outcomes, competences and the like. Some structures have different levels of the same competence, others do not. Some competences are defined in a binary fashion, that allows one to say “yes” or “no” to whether people have that competence; other competences are defined in a way that allows people to be ranked in order of that competence. Some competence structures are quite vague, with what look like a few labels that give an indication of the kinds of quality that someone is looking for, without defining what exactly those labels mean. Some structures — particularly level frameworks like the EQF — are deliberately defined in generic terms that can apply across a wide range of areas of knowledge and skill. And so on.
This should really be no surprise, because it is clear from many people's work (e.g. my PhD thesis) that different people simplify complex structures in their own different ways, to suit their own purposes, and in line with their own backgrounds and assumptions. There is, simply, no way in which all these different approaches to defining and structuring competence can be represented in a way that will make intuitive sense to everyone.
What one can do is to provide a relatively simple abstract representation that can cover all kinds of existing structures. This is just what InLOC is aiming to do, but up to now we haven't been quite clear enough about that. To get to something that is intuitive for domain practitioners, one needs to rely on tools being built that reflect, in the user interface, the language and assumptions of that particular group of practitioners. The focus for the “direct” use of the spec then clearly shifts onto developers. What, I suggest, developers need is a specification adapted to their needs — to build those interfaces for domain practitioners. The main requirements of this seem to me to be that the spec:
- gives enough structure so that developers can map any competence structure into that format;
- does not have any unnecessary complexity;
- gives a readily readable format, debuggable by developers (not domain practitioners).
So, you know the aims against which to evaluate InLOC. InLOC offers no magic wand to bring together incompatible views of diverse learning outcome and competence structures. But it does offer a relatively simple technical solution that allows developers who have little understanding of competence domains to develop tools that really do match the intuitions of various domain practitioners.
2013-07-31 (22nd in my logic of competence series.)
Open Badges, xAPI, LRMI could have used InLOC as one cornerstone
There had been much discussion at the time of writing about (originally Mozilla) Open Badges, xAPI (Experience API, alias “Tin Can API”) and LRMI, as new and interesting specifications to help bring standardization particularly into the world of technology and resources involved with people and their learning. They had all reached their “version 1” that year, along with InLOC.
InLOC could have quietly served as a cornerstone of all three, providing a specification for one of the important things they may all want to refer to. InLOC allows documentation of frameworks of learning outcomes, competencies, abilities (whatever you call them) that describe what people need to know and be able to do.
Mozilla had been given, and devoted, plenty of resource to their OpenBadges effort, and as a result it is widely known about, though not so well known is the rapid and impressive development of the actual specification. The key part of the spec is how OpenBadges represents the “assertions” that someone has achieved something. The thing that people achieve (rather than its achievement) could well be represented in an InLOC framework.
Tin Can / Experience API (I'll use the customary abbreviation “xAPI”) had also been talked about widely, as a successor to SCORM. The xAPI “makes it possible to collect the data about the wide range of experiences a person has (online and offline)”. This clearly includes “experiences” such as completing a task or attaining a learning outcome. But xAPI does not deal with the relationships between these. If one greater learning outcome was composed of several lesser ones, it wouldn't be natural to represent that fact in xAPI itself. That is where InLOC naturally could have come in.
LRMI (“Learning Resource Metadata Initiative”) is, as one would expect, designed to help represent metadata about learning resources, in a way that is integrated with schema.org. What if many of those learning resources are designed to help a learner achieve an intended learning outcome? LRMI can naturally refer to such a learning outcome, but is not designed to represent the structures themselves. Again, InLOC can do that.
It would be disappointing if these three specifications, each one potentially very useful in its own way, all specified their own, possibly incompatible ways of representing the structures or frameworks that are often created to bring common ground and order to this whole area of life. [2023: I did not verify the extent to which this has happened. The current expert on this is Phil Barker.]
I hoped that would not happen! Instead, I believe we should be using InLOC for what it is good at, leaving each other spec to handle its own area, and no one would need to “reinvent the wheel”.
Draft proposals
These proposals were only initial proposals. The other specifications have moved on, and there was little discussion with other people involved with or interested in the other three specifications. I leave this section in simply for historical interest, as it is no longer accurate.
OpenBadges
The Assertions page gives the necessary detail of how the OpenBadges spec works.
- The BadgeClass criteria property means the “URL of the criteria for earning the achievement.” If there is an InLOC LOCdefinition or LOCstructure that represents these criteria, as there could well be, then the natural mapping would be for the criteria property simply to hold the URI, either of the (single) LOCdefinition, or of the LOCstructure that comprises all of the definitions together.
- The BadgeClass alignment property gives a list of “objects describing which educational standards this badge aligns to, if any.” In cases where there is no LOCdefinition or LOCstructure representing the whole of the badge criteria, it seems natural to put a set of LOCdefinition URIs into the (multiple) objects of this property — which are AlignmentObjects.
- Each AlignmentObject has the following properties, which map directly onto InLOC.
- name: this could be the title of a LOCdefinition
- url: this could be the id of the same LOCdefinition
- description: this could be the description of the same LOCdefinition
One could also potentially take both approaches at the same time.
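As an illustration, a BadgeClass taking both approaches might look like the following sketch, trimmed to just the properties discussed here; all the URIs and values are hypothetical.

```python
# A BadgeClass fragment mapping criteria and alignment to InLOC.
# Only the properties discussed above are shown; all URIs are hypothetical.
badge_class = {
    "name": "Crop establishment",
    "criteria": "http://example.org/loc/structures/crop-establishment",
    "alignment": [
        {  # one AlignmentObject per component LOCdefinition
            "name": "Set out and establish crops",                         # LOCdefinition title
            "url": "http://example.org/loc/defs/set-out-establish-crops",  # LOCdefinition id
            "description": "Able to set out and establish crops.",        # LOCdefinition description
        },
    ],
}
```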
xAPI
This is also known as the Experience API.
[I was commenting on the specification v1.0.1, 2013-10-01 — there are more recent versions, which may differ.]
xAPI is based around the statement. This is defined as “a simple construct consisting of <actor (learner)> <verb> <object>, with <result>, in <context> to track an aspect of a learning experience.” There are a number of ways in which a statement could relate to a learning outcome or competence. How might these correspond to InLOC?
- If the statement “verb” is something like completed, mastered, or passed, the “object” could well be something like a learning outcome, or an assessment directly related to a learning outcome. The object has two properties on top of the expected objectType:
- id: this can be the same as a LOC id in InLOC
- definition: this in turn has recommended properties of:
- name: this is proposed as the LOC title
- description: this is proposed as the LOC description
- type: this is proposed as the URI for LOCdefinition or LOCstructure
The statement could be that some experiences were had (e.g. an apprenticeship), and the result was the learning outcome or competence. It might therefore be useful to give the URI of an InLOC-formatted learning outcome as part of an xAPI result. Unfortunately, none of the specified properties of the Result object have a URI type, so the URI of a LOC definition would have to go in the extensions property of the result.
Often in personal or professional development planning, it is useful to record what is planned. An example of how to represent this, with the object as a sub-statement, is given in the spec section 4.1.4.3, page numbered 20. The sub-statement can be something similar to the first option above.
A learning outcome may form part of the context of an activity in diverse ways. If it is not one of the above, it may be possible to use the context property of a statement, either as a statement reference in the statement property of the context, or as part of the context's extensions.
In essence, the clearest and most straightforward way of linking to an InLOC LOCstructure or LOCdefinition is as a statement object, rather than as its result or context. Allowing all the other options as well could be seen as offering too many choices, which may lead away from useful interoperability.
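Putting that recommended option into a sketch: an xAPI statement whose object is an InLOC definition might look like this. The verb is one of the standard ADL verbs; all the example.org URIs are hypothetical, and the type URI in particular is only a placeholder, not a defined InLOC URI.

```python
# An xAPI statement whose object refers to an InLOC LOC definition.
# All example.org URIs are hypothetical placeholders.
statement = {
    "actor": {"objectType": "Agent", "mbox": "mailto:learner@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/mastered",
             "display": {"en": "mastered"}},
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/loc/defs/set-out-establish-crops",  # the LOC id
        "definition": {
            "name": {"en": "Set out and establish crops"},                  # LOC title
            "description": {"en": "Able to set out and establish crops."},  # LOC description
            "type": "http://example.org/inloc/LOCdefinition",               # placeholder type URI
        },
    },
}
```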
LRMI
The documentation for the Learning Resource Metadata Initiative is at https://www.dublincore.org/specifications/lrmi/. The specification, and its correspondence with InLOC, is very simple. All the properties are naturally understood as properties of a learning resource. The property relevant to InLOC is educationalAlignment, whose object is an AlignmentObject.
Here, the LRMI AlignmentObject properties are mapped to LOCdefinition properties (sketched after the list).
- targetURL: LOCdefinition id
- targetName: LOCdefinition title
- targetDescription: LOCdefinition description
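In schema.org terms (where the property is spelled targetUrl), the mapping might be sketched like this; the resource and all URIs are invented for illustration.

```python
# An LRMI/schema.org-style description of a learning resource aligned
# to an InLOC LOCdefinition. All URIs are invented for illustration.
learning_resource = {
    "@type": "LearningResource",
    "name": "Crop establishment tutorial",
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "targetUrl": "http://example.org/loc/defs/set-out-establish-crops",
        "targetName": "Set out and establish crops",
        "targetDescription": "Able to set out and establish crops.",
    },
}
```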
What this all means
xAPI and LRMI
The implications for xAPI and LRMI are just that they could suggest InLOC as a possible format for the publication of frameworks that they may want to refer to. Neither spec has pretensions to cover this area of frameworks, and the existence of InLOC should help to prevent people inventing diverse solutions, when we really want one standard approach to help interoperability.
A question remains about what a suitable binding of InLOC would be for both specs. In many ways it should not matter, as it will be the URIs and some values that will be used for reference from xAPI and LRMI, not any of the InLOC syntax. However, it might be useful to remember that xAPI's native language is JSON, and LRMI's is HTML, with added schema.org markup using microdata or RDFa. Neither of these bindings has been finalised for InLOC, so an opportunity exists to ensure that suitable bindings are agreed, while still conforming to the InLOC information model in one or other form.
OpenBadges
The case of Mozilla Open Badges is perhaps the most interesting. Clearly, there is a potential interest for badges to link to representations of learning outcomes or competences as defined by relevant authorities. It is so much more powerful when these representations reside in a common space that can be referred to by anyone (including e.g. xAPI and LRMI users, personal development, portfolio, and recruitment systems). It is easy to see how badges could usefully become “metadata-infused” tokens of the achievement of something that is already defined elsewhere. Redefining those things would simply confuse people.
InLOC solves several problems that OpenBadges should not have to worry about. One is representing equivalence (or not) between different competencies. That is provided for straightforwardly within InLOC, and should be done by the authorities defining the competencies, whether or not they are the same people as those who define and issue the badges.
Second, InLOC gives a clear, comprehensive and predefined vocabulary for how different competencies relate to each other. Mozilla's Web Literacy Standard defines a tree structure of “literacies”, “competencies” and “skills”. Other frameworks and standards use other terms and concepts. InLOC is generic enough to represent all the relationships in all of these structures. As with equivalencies, the badge issuer should not have to define, for example, what roles require what skills and what knowledge. That should be up to occupational domain experts.
But OpenBadges do require some way to represent the fact that one, greater, badge can stand for a number of lesser badges. This is necessary to avoid being drowned in a flood of badges each one so small that it is unrecognisable or insignificant.
While so many frameworks have not been expressed in a machine-processable format like InLOC, there will remain a requirement for an internal mechanism within OpenBadges to specify precisely which set of lesser badges is represented by a single greater badge. But when the InLOC structures are in place, and all the OpenBadges in question refer to InLOC URIs for their criteria, we can look forward to automatic consistency checking of super-badges. To check a greater badge against a set of lesser component badges, check that the criteria structure or definition for the greater badge has parts (as defined by InLOC relationships) which are each the criteria of one of the set of lesser badges; a sketch follows.
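Here is a minimal sketch of that check, assuming the part-whole relationships have already been extracted from published InLOC structures into a dictionary; all identifiers are invented.

```python
# Check a greater badge against its lesser component badges: every part of
# the greater badge's criteria (per InLOC relationships) must be the
# criteria of one of the lesser badges. All identifiers are invented.
def consistent_super_badge(greater_criteria: str,
                           lesser_criteria: set[str],
                           parts_of: dict[str, list[str]]) -> bool:
    parts = parts_of.get(greater_criteria, [])
    return all(part in lesser_criteria for part in parts)

parts_of = {
    "http://example.org/loc/structures/crop-establishment": [
        "http://example.org/loc/defs/set-out-crops",
        "http://example.org/loc/defs/establish-crops",
    ],
}
print(consistent_super_badge(
    "http://example.org/loc/structures/crop-establishment",
    {"http://example.org/loc/defs/set-out-crops",
     "http://example.org/loc/defs/establish-crops"},
    parts_of))  # True
```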
As with xAPI, JSON is the native language of OpenBadges, so one task that remains to be completed is to ensure that there is a JSON binding of InLOC that satisfies both the OpenBadges and the Tin Can communities.
That should have been it! [I did later draft a JSON-LD binding for InLOC but as far as I know it has never been used.]
2013-10-07 (23rd in my logic of competence series.)
InLOC and OpenBadges: a reprise
InLOC is well designed to provide the conceptual “glue” or “thread” for holding together structures and planned pathways of achievement, which can be represented by Mozilla OpenBadges.
Since the previous post — the last of the previous academic year, also about OpenBadges and InLOC — I have been invited to talk at the Open Badges in Scottish Education Group. This is a great opportunity, because it involves engaging with a community with real aspirations for using Open Badges. One of the things that interests people in OBSEG is setting up combinations of lesser badges, or pathways for several lesser badges to build up to greater badges. I imagine that if badges are set up in this way, the lesser badges are likely to become the stepping stones along the pathway, while it is the greater badge that is likely to be of direct interest to, e.g., employers.
All this is right in the main stream of what InLOC addresses. Remember that, using InLOC, one can set out and publish a structure or framework of learning outcomes, competenc(i)es, etc., (called “LOC definitions”) each one with its own URL (or IRI, to be technically correct), with all the relationships between them set out clearly (as part of the “LOC structure”).
The way in which these Scottish colleagues have been thinking of their badges brings home another key point to put the use of InLOC into perspective. As with so many certificates, awards, qualifications etc., part of the achievement is completion in compliance with the constraints or conditions set out. These are likely not to be learning outcomes or competences in their own right.
The simplest of these non-learning-outcome criteria could be attendance. Attendance, you might say, stands in for some kind of competence; but the kind of basic timekeeping and personal organisation ability that is evidenced by attendance is very common in many activities, so is unlikely to be significant in the context of a Badge awarded for something else. Other such criteria could be grouped together under “ability to follow instructions” or something similar. A different kind of criterion could be the kinds of character “traits” that are not expected to be learned. A person could be expected to be cheerful; respectful; tall; good-looking; or a host of other things not directly under their control, and either difficult or impossible to learn. These non learning outcome aspects of criteria are not what InLOC is principally designed for.
Also, over the summer, Mozilla's Web Literacy Standard (“WebLitStd”) has been progressing towards version 1.0, to be featured in the upcoming MozFest in London. I have been tracking this with the help of Doug Belshaw, who after great success as an Open Badges evangelist has been focusing on the WebLitStd as its main protagonist. I'm hoping soon (hopefully by MozFest time) to have a version of the WebLitStd in InLOC, and this brings to the fore another very pragmatic question about using InLOC as a representation.
Many posts ago, I was drawing out the distinction between LOC (that is, Learning Outcome or Competence) definitions that are, on the one hand, “binary”, and on the other hand, “rankable”. This is written up in the InLOC documentation. “Binary” ones are the ones for which you can say, without further ado, that someone has achieved this learning outcome, or not yet achieved it. “Rankable” ones are ones where you can put people in order of their ability or competence, but there is no single set of criteria distinguishing two categories that one could call “achieved” and “not yet achieved”.
In the WebLitStd, it is probably fair to say that none of the “competencies” are binary in these terms. One could perhaps characterise them as rankable, though perhaps not fully, in that there may be two people with different configurations of that competency, as a result perhaps of different experiences, each of whom is better in some ways than the other, and conversely less good in other ways. It may well be similar in some of the Scottish work, or indeed in many other Badge criteria. So what to do for InLOC?
If we recognise a situation where the idea is to issue a badge for an achievement that is clearly not a binary learning outcome, we can outline a few stages of development of the frameworks involved, which would result in a progressively tighter matching to an InLOC structure or InLOC definitions. I'll take the WebLitStd as illustrative material here.
First, someone may develop a badge for something that is not yet well-defined anywhere — it could have been conceived without reference to any existing standards. An example of a title illustrating this case could be “using Web sites”. There is no single component of the WebLitStd that covers “using the web”, and yet “using” the web doesn't really cover Web literacy as a whole. In this case, the Badge criteria would need to be detailed by the Badge awarder, specifically for that badge. What can still be done within OpenBadges is that there could be alignment information; however, it is not always entirely clear what the relationship is meant to be between a badge and a standard it is “aligned” to. The simplest possibility is that the alignment is to some kind of educational level. Beyond this it gets trickier.
A second possibility for a single badge would be to refer to an existing “rankable” definition. For example, consider the WebLitStd skill, “co-creating web resources”, which is part of the “sharing & collaborating” competency of the “Connecting” strand. To think in detail about how this kind of thing could be badged, we need to understand what would count (in the eye of the badge issuer) as “co-creating web resources”. There are very many possible examples that readily come to mind, from talking about what a web page could have on it, to playing a vital part in a team building a sophisticated web service. One may well ask, “what experiences do you have of co-creating web resources?” and, depending on the answer, one could roughly rank people in some kind of order of amount and depth of experience in this area. To create a meaningful badge, a more clearly cut line needs to be drawn. Just talking about what could be on a web page is probably not going to be very significant for anyone, as it is an extremely common experience. So what counts as significant? It depends on the badge issuer, of course, and to make a meaningful badge, the badge issuer will need to define what the criteria are for the badges to be issued.
A third and final stage, ideal for InLOC, would be if a badge is awarded with clearly binary criteria. In this case there is nothing standing in the way of having the criteria property of the Badge holding a URL for a concept directly represented as a binary InLOC LOCdefinition. There are some WebLitStd skills that could fairly easily be seen as binary. Take “distinguishing between open and closed licensing” as an example. You show people some licenses; either they correctly identify the open ones or they don't. That's (reasonably) clear cut. Or take “understanding and labeling the Web stack”. Given a clear definition of what the “Web stack” is, this appears to be a fairly clear-cut matter of understanding and memory.
Working back again, we can see that in the third stage, a Badge can have criteria (not just alignments) which refer directly to InLOC information. At the first and second stages, badge criteria need something more than is clearly set out in InLOC information already published elsewhere. So the options appear to be:
- describing what the criteria are in plain text, with reference to InLOC information only through alignment; and
- defining an InLOC structure specifically for the badge, detailing the criteria.
The first of these options has its own challenges. It will be vital to coherence to ensure that the alignments are consistent with each other. This will be possible, for example, if the aspects of competence covered are separate (independent; orthogonal even). So, if one alignment is to a level, and the second to a topic area, that might work. But it is much less promising if more specific definitions are referred to.
(I'd like to write an example at this point, but can't decide on a topic area — I need someone to give me their example and we can discuss it and maybe put it here.)
From the point of view of InLOC, the second option is much more attractive. In principle, any badge criteria could be analysed in sufficient detail to draw out the components which can realistically be thought of as learning outcomes — properties of the learners — that may be knowledge, skill, competence, etc. No matter how unusual or complex these are, they can in principle be expressed in InLOC form, and that will clarify what is really “aligned” with what.
I'll say again, I would really like to have some well-worked-out examples here. So please, if you're interested, get in touch and let's talk through some of interest to you. I hope to be starting that in Glasgow this week.
2014-03-12 (24th in my logic of competence series.)
The growing need for open frameworks of learning outcomes
(A contribution to Open Education Week — see note at end.)
What is the need?
Imagine what could happen if we had really good sets of usable open learning outcomes, across academic subjects, occupations and professions. It would be easy to express and then trace the relationships between any learning outcomes. To start with, it would be easy to find out which higher-level learning outcomes are composed, in a general consensus view, of which lower-level outcomes.
Some examples … In academic study, for example around a more complex topic from calculus, perhaps it would be made clear what other mathematics needs to be mastered first (see this recent example, which lists, but does not structure). In management, it would be made clear, for instance, what needs to be mastered in order to be able to advise on intellectual property rights. In medicine, to pluck another example out of the air, it would be clarified what the necessary components of competent dementia care are. Imagine this is all done, and each learning outcome or competence definition, at each level, is given a clear and unambiguous identifier. Further, imagine all these identifiers are in HTTP IRI/URI/URL format, as is envisaged for Linked Data and the Semantic Web. Imagine that putting the URL into your browser leads you straight to results giving information about that learning outcome. And in time it would become possible to trace not just what is composed of what, but other relationships between outcomes: equivalence, similarity, origin, etc.
It won't surprise anyone who has read other pieces from me that I am putting forward one technical specification as part of an answer to what is needed: InLOC.
So what could then happen?
Every course, every training opportunity, however large or small, could be tagged with the learning outcomes that are intended to result from it. Every educational resource (as in “OER”) could be similarly tagged. Every person's learning record, every person's CV, people's electronic portfolios, could have each individual point referred, unambiguously, to one or more learning outcomes. Every job advert or offer could specify precisely which are the learning outcomes that candidates need to have achieved, to have a chance of being selected.
All these things could be linked together, leading to a huge increase in clarity, a vast improvement in the efficiency of relevant web-based search services, and generally a much better experience for people in personal, occupational and professional training and development, and ultimately in finding jobs or recruiting people to fill vacancies, right down to finding the right person to do a small job for you.
So why doesn't that happen already? To answer that, we need to look at what is actually out there, what it doesn't offer, and what can be done about it.
What is out there?
Frameworks, that is, structures of learning outcomes, skills, competences, or similar things under other names, are surprisingly common in the UK. For many years now, Sector Skills Councils (SSCs) and other similar bodies have been producing National Occupational Standards (NOSs), which provided the basis for all National Vocational Qualifications (NVQs). In theory at least, this meant that the industry representatives in the SSCs made sure that the needs of industry were reflected in the assessment criteria for awarding NVQs, generally regarded as useful and prized qualifications, at least in occupations that are not classed as “professional”.
NOSs have always been published openly, and they are still available to be searched and downloaded at the NOS site. The site provides a search facility. As one of my current interests is corporate governance, I put that phrase into the search box, giving several results, including a NOS called CFABAI131 Support corporate decision-making. It's a short document, with a few lines of overview, six performance criteria, each expressed as one sentence, and 15 items of knowledge and understanding, which is what is seen to be needed to underpin competent performance. It serves to let us all know what industry representatives think is important in that support function.
In professional training and development, practice has been more diverse. At one pole, the medical profession has been very keen to document all the skills and competences that doctors should have, and keen to ensure that these are reflected in medical education. The GMC used to publish Tomorrow's Doctors, introduced as follows:
The GMC sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.
Tomorrow's Doctors covers the outline of the whole syllabus. It prepares the ground for doctors to move on to working in line with Good Medical Practice — in essence, the GMC's list of requirements for someone to be recognised as a competent doctor.
The medical field is probably the best developed in this way. Some other professions, for example engineering and teaching, have some general frameworks in place. Yet others may only have paper documentation, if any at all.
Beyond the confines of such enclaves of good practice, yet more diverse structures of learning outcomes can be found, which may be incoherent and conflicting, particularly where there is no authority or effective body charged with bringing people to consensus. There are few restrictions on who can now offer a training course, and ask for it to be accredited. It doesn't have to be consistent with a NOS, let alone have the richer technical infrastructure hinted at above. In Higher Education, people have started to think in terms of learning outcomes (see e.g. the excellent Writing and using good learning outcomes by David Baume), but, lacking sufficient motivation to do otherwise, intended learning outcomes tend to be oriented towards institutional assessment processes, rather than to the needs of employers, or learners themselves. In FE, the standardisation influence of NOSs has been weakened and diluted.
In schools in the UK there is little evidence of useful common learning outcomes being used, though (mainly) for the USA there exists the Achievement Standards Network (ASN), documenting a very wide range of school curricula and some other things. It has recently been taken over by private interests (Desire2Learn) because no central funding is available for this kind of service in the USA.
What do these not offer?
The ASN is a brilliant piece of work, considering its age. Also related to its age, it has been constructed mainly through processing paper-style documentation into the ASN web site, which includes allocating ASN URIs. It hasn't been used much by authorities constructing their own learning outcome frameworks, with URIs belonging to their own domains, though in principle it could be.
Apart from ASN, practically none of the other frameworks that are openly available (and none that are not) have published URIs for every component. Without these URIs, it is much harder to identify, unambiguously, which learning outcome one is referring to, and virtually impossible to check that automatically. So the quality of any computer assisted searching or matching will inevitably be at best compromised, at worst non-existent.
As learning outcomes are not easily searchable (outside specific areas like NOSs), the tendency is to reinvent them each time they are written. Even similar outcomes, whatever the level, routinely seem to be reinvented and rewritten without cross-reference to ones that already exist. Thus it becomes impossible in practice to see whether a learning opportunity or educational resource is roughly equivalent to another one in terms of its learning outcomes.
Thus, there is little effective transparency, no easy comparison, only the confusion of it being practically impossible to do the useful things that were envisaged above.
What is needed?
What is needed is, on the one hand, much richer support for bodies to construct useful frameworks, and on the other hand, good examples leading the way, as should be expected from public bodies.
And as a part of this support, we need standard ways of modelling, representing, encoding, and communicating learning outcomes and competences. It was just towards these ends that InLOC was commissioned. There's a hint in the name: Integrating Learning Outcomes and Competences. InLOC is also known as ELM 2.0, where ELM stands for European Learner Mobility, within which InLOC represents part of a powerful proposed infrastructure. It has been developed under the auspices of the CEN Workshop, Learning Technologies, and funded by (what was) the DG Enterprise's ICT Standardization Work Programme.
InLOC, fully developed, would really be the icing on the cake. Even if people did no more than publish stable URIs to go with every component of every framework or structure of learning outcomes or competencies, that would be a great step forward. The existence and openness of InLOC provides some of the motivation and encouragement for everyone to get on with documenting their learning outcomes in a way that is not only open in terms of rights and licences, but open in terms of practice and effect.
Open Education Week web site
“its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.
Cetis staff supported Open Education Week by publishing a series of blog posts about open education activities. Cetis have had long-standing involvement in open education and have published a range of papers which cover topics such as OERs (Open Educational Resources) and MOOCs (Massive Open Online Courses).
2014-05-19 (25th in my logic of competence series.)
Why, when and how should we use frameworks of skill and competence?
When we understand how frameworks could be used for badges, it becomes clearer that we need to distinguish between different kinds of ability, and that we need tools to manage and manipulate such open frameworks of abilities. InLOC gives a model, and formats, on which such tools can be based.
I'll be presenting this material at the Crossover Edinburgh conference, 2014-06-05, though my conference presentation will be much more interactive and open, and without much of this detail below.
What are these frameworks?
Frameworks of skill or competence (under whatever name) are not as unfamiliar as they might sound to some people at first. Most of us have some experience or awareness of them. Large numbers of people have completed vocational qualifications – e.g. NVQs in England – which for a long time were each based on a syllabus taken from what are called National Occupational Standards (NOSs). Each NOS is a statement of what a person has to be able to do, and what they have to know to support that ability, in a stated vocational role, or job, or function. The scope of NOSs is very wide – to list the areas would take far too much space – so the reader is asked to take a look at the national database of current NOSs.
Several professions also have good reason to set out standards of competence for active members of that profession. One of the most advanced in this development, perhaps because its members' competence can be a matter of life and death, is the medical profession. Good Medical Practice, published by the General Medical Council, starts by addressing doctors:
Patients must be able to trust doctors with their lives and health. To justify that trust you must show respect for human life and make sure your practice meets the standards expected of you in four domains.
and then goes on to detail those domains:
- Knowledge, skills and performance
- Safety and quality
- Communication, partnership and teamwork
- Maintaining trust
The GMC also published the related “Tomorrow's Doctors”, in which it
sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.
These are the kinds of “framework” that we are discussing here. The constituent parts of these frameworks are sometimes called “competencies”, a term that is intended to cover knowledge, skills, behaviours, attitudes, etc.; but as that word is a little unfriendly, and bearing in mind that practical knowledge is shown through the ability to put that knowledge into practice, I'll use “ability” as a catch-all term in this context.
Many larger employers have good reasons to know just what the abilities of their employees are. Often, people being recruited into a job are asked in person, and employers have to go through the process of weighing up the evidence of a person's abilities. A well managed HR department might go beyond this to maintaining ongoing records of employees' abilities, so that all kinds of planning can be done, skills gaps identified, people suggested for new roles, and training and development managed. And this is just an outsider's view!
Some employers use their own frameworks, and others use common industry frameworks. One industry where common frameworks are widely used is information and communications technology. SFIA, the Skills Framework for the Information Age, sets out all kinds of skills, at various levels, that are combined together to define what a person needs to be able to do in a particular role. Similar to SFIA, but simpler, is the European e-Competence Framework, which has the advantage of being fully and openly available without charge or restriction.
Some frameworks are intended for wider use than just employment. A good example is Mozilla's Web Literacy Map, which is “a map of competencies and skills that Mozilla and our community of stakeholders believe are important to pay attention to when getting better at reading, writing and participating on the web.” They say “map”, but the structure is the same as other frameworks. Their background page sets out well the case for their common framework. Doug Belshaw suggests that you could use the Web Literacy Map for “alignment” of the kind of Open Badges that are also promoted by Mozilla.
Links to badges
You can imagine having badges for keeping track of people's abilities, where the abilities are part of frameworks. To help people move between different roles, from education and training to work, and back again, having their abilities recognised, and not having to retrain on abilities that have already been mastered, those frameworks would have to be openly published, able to be referenced in all the various contexts. It is open frameworks that are of particular interest to us here.
Badges are typically issued by organisations to individuals. Different organisations relate to abilities differently. Some organisations, doing business or providing a service, just use employees' abilities to deliver products and services. Other organisations, focusing around education and training, just help people develop abilities, which will be used elsewhere. Perhaps most organisations, in practice, are somewhere on the spectrum between these two, where abilities are both used and developed, in varied proportions. Looking at the same thing from an individual point of view, in some roles people are just using their abilities to perform useful activities; in other roles they are developing their abilities to use in a different role. Perhaps there are many roles where, again, there is a mixture between these two positions. The value of using the common, open frameworks for badges is that the badges could (in principle) be valued across different kinds of organisation, and different kinds of role. This would then help people keep account of their abilities while moving between organisations and roles, and have those abilities more easily recognised.
The differing nature of different abilities
However, maybe we need to be more careful than simply taking every open framework and turning it into badges. If all the abilities that were used in all roles and organisations had separate badges, vast numbers of badges would exist, and we could imagine the horrendous complexity of maintaining and managing them. So it might make sense to select the most appropriate abilities for badging, as follows.
- Some abilities are plentiful, and don't need special training or rewarding — maybe organisations should just take them for granted, perhaps checking that what is expected is there.
- Some abilities are hard, or impossible, to develop: you have them or you don't. In this case, using badges would risk being discriminatory. Badges for e.g. how high a person can reach, or how long they can be in the sun without burning, would be unnecessary as well as seriously problematic; and one can think of many other personal characteristics, potentially framed as abilities, which might be less visible on the surface, but could lead to discrimination, as people can't just change them.
- Some abilities might only be able to be learned within a specific role. There is little point in creating badges for these abilities, if they do not transfer from role to role.
- Some abilities can be developed, are not abundant, and can be transferred substantially from one role to another. These are the ones that deserve to be tracked, and for which badges are perhaps most worth developing. This still leaves open the question of the granularity of the badges.
Practical considerations governing the creation and use of frameworks
It's hard to create a good, generally accepted common skills or competence framework. In order to do so, one has to put together several factors.
- The abilities have to be sufficiently common to a number of different roles, between which people may want to move.
- The abilities have to be described in a way that makes sense to all collaborating parties.
- It must be practical to include the framework into other tools.
- The framework needs to be kept up to date, to reflect changing abilities needed for actual roles.
- In particular, as the requirements for particular jobs vary, the components of a framework need to be presented in such a way that they can be selected, or combined with components of other frameworks, to serve the variety of roles that will naturally occur in a creative economy.
- Thus, the descriptions of the abilities, and the way in which they are put together, all need to be compatible.
Let's look at some of this in more detail. What is needed for several purposes is the ability to create a tailored set of abilities. This would be clearly useful in describing both job opportunities, and actual personal abilities. It is of course possible to do all of this in a paper-like way, simply cutting and pasting between documents. But realistically, we need tools to help. As soon as we introduce ICT tools, we have the requirement for standard formats which these tools can work with. We need portability of the frameworks, and interoperability of the tools.
For instance, it would be very useful to have a tool or set of tools which could take frameworks, either ones that are published, or ones that are handed over privately, and manipulate them, perhaps with a graphical interface, to create new, bespoke structures.
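A rough sketch of that kind of manipulation follows, with invented framework contents; a real tool would fetch and parse published InLOC documents from their URLs rather than use hard-coded data.

```python
# Sketch of the kind of tool operation meant here: select components
# from two published frameworks and combine them into a bespoke
# structure. Framework contents are invented for illustration.

def select(framework: dict, wanted: set) -> list:
    """Pick out the components of a framework whose ids were asked for."""
    return [c for c in framework["components"] if c["id"] in wanted]

framework_a = {"components": [
    {"id": "https://example.org/fwA/programming", "title": "Programming"},
]}
framework_b = {"components": [
    {"id": "https://example.org/fwB/ux", "title": "User experience design"},
]}

bespoke_role = {
    "title": "Bespoke role profile",
    "components": select(framework_a, {"https://example.org/fwA/programming"})
                + select(framework_b, {"https://example.org/fwB/ux"}),
}
```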
Contrast with the actual position now. Current frameworks rarely attempt to use any standard format, as there are no very widely accepted standards for such a format. Within NOSs, there are some standards; the UK government has a list of their relevant documents including “NOS Quality Criteria” and a “NOS Guide for Developers” (by Geoff Carroll and Trevor Boutall). But outside this area practice varies widely. In the area of education and training, the scene is generally even less developed. People have started to take on the idea of specifying the “learning outcomes” that are intended to be achieved as a result of completing courses of learning, education or training, but practice is patchy, and there is very little progress towards common frameworks of learning outcomes.
We need, therefore, a uniform “model”, not for skills themselves, which are always likely to vary, but for the way of representing skills, and for the way in which they are combined into frameworks.
The InLOC format
Between 2011 and 2013 I led a team developing a specification for just this kind of model and format. The project was called “Integrating Learning Outcomes and Competences”, or InLOC for short.
The content and much extra background material is available on my copy of the InLOC project web site. This post is not the place to explain InLOC in detail, but anyone interested is welcome to contact me directly for assistance.
What can people do in the meanwhile?
I've proposed elsewhere often enough that we need to develop tools and open frameworks together, to achieve a critical mass where there are enough frameworks published to make it worthwhile for tool developers, and sufficiently developed tools to make it worth the extra effort to format frameworks in the common way (hopefully InLOC) that will work with the tools.
There will be a point at which growth and development in this area will become self-sustaining. But we don't have to wait for that point. This is what I think we could usefully be doing in the meanwhile, if we are in a position to do so.
- 1. Build your own frameworks
- It's a challenge if you haven't been involved in skill or competence frameworks before, but the principles are not too hard to grasp. Start out by asking what roles, and what functions, there are in your organisation, and try to work out what abilities, and what supporting knowledge, are needed for each role and for each function. You really need to do this, if you are to get started in this area. Or, if you are a microbusiness that really doesn't need a framework, perhaps you can build one for a larger organisation.
- 2. Use parts of frameworks that are there already, where suitable
- It may not be as difficult as you thought at first. There are many resources out there, such as NOSs, and the other frameworks mentioned above. Search, study, see if you can borrow or reuse. Not all frameworks allow it, but many do. So, some of your work may already be done for you.
- 3. Publish your frameworks, and their constituent abilities, each with a URL
- This is the next vital step towards preparing your frameworks for open use and reuse. The constituent abilities (and levels, see the InLOC documentation) really need their own identifiers, as well as the overall frameworks, whether you call those identifiers URLs, URIs or IRIs.
- 4. Use the frameworks consistently throughout the organisation
- To get the frameworks to stick, and to provide the motivation for maintaining them, you will have to use them in your organisation. I'm not an expert on this side of practice, but I would have thought that the principles are reasonably obvious. The more you have a uniform framework in use across your organisation, the more people will be able to see possibilities for transfer of skills, flexible working, moving across roles, job rotation, and other similar initiatives that can help satisfy employees.
- 5. Use InLOC if possible
- It really does provide a good, general purpose model of how to represent a framework, so that it can be ready for use by ICT systems. Just ask if you need help on this!
- 6. Consider integrating open badges
- It makes sense to consider your badge strategy and your framework strategy together. You may also find this older post of mine helpful.
- 7. Watch for future development of tools, or develop some yourself!
- If you see any, try to help them towards being really useful, by giving constructive feedback. I'd be happy to help any tool developers “get” InLOC.
I hope these ideas offer people some pointers on a way forward for skill and competence frameworks. See my other posts for related ideas. Comments or other feedback would be most welcome!
2014-12-02 (26th in my logic of competence series.)
How do I go about doing InLOC?
It's been three years now since the European expert team started work on InLOC, working out a good model for representing structures and frameworks of learning outcomes, skill and competence. As can be expected of forward-looking, provisional work, there has not yet been much take-up, but it's all in place, and more timely now than ever.
Then yesterday I received a most welcome call from a training company involved in one particular sector, who are interested in using the principles of InLOC to help their LMS map course and module information to qualification frameworks. “Yes!” I enthusiastically replied.
What might help people in that situation is a simple, basic approach that sets you on the right path for doing things the InLOC way. I realised that this isn't so easy to find in the main documentation, so here I set out this basic approach, which will reliably get anyone started on mapping anything to the InLOC model, cross-referencing the InLOC documentation along the way.
One description of what to do is documented in the section How to follow InLOC, but, for all the reasons above, here I will try going back to basics and starting again, in the hope that describing the approach in a different way may be helpful.
LOC definitions
The most basic feature that occurs many many times in any published framework is called, by InLOC, a “LOC definition”. This is, simply, any concept, described by any form of words, that indicates an ability – whether it be knowledge, skill, competence or any other learning outcome – that can be attributed to an individual person, and in some way – any way – assessed. It's hard to define more clearly or succinctly than that, and to get a better understanding you may want to look at examples.
In the documentation, the best place to start is probably the section on InLOC explained through example. In that section, a framework (the European e-Competence Framework, e-CF) is thoroughly analysed. You can see in Figure 2 how, for just one page of the documentation, each LOC definition has been picked out separately.
The class of LOC definitions includes at least these overlapping classes of concept:
- anything that is listed as a learning outcome, a skill, a competency, an ability;
- any separate parts of any learning outcomes;
- anything that expresses an assessment criterion;
- any level of any outcome, skill, competence, etc. (at any granularity);
- a generic definition of what is required by a level.
Pieces of text that relate to the same concept – e.g. title and description of the same thing – are treated together. Everything that can be assessed separately is treated as a separate LOC definition. The grammatical structure of the text is of little importance. Often, though, in amongst the documentation, you read text that is not to do with abilities. Just pass over this for the moment.
One thing I've noticed sometimes is that some concepts, which could have their own LOC definitions, are implied but not explicit in the documentation. In yesterday's discussion, one example was the levels of the unit as a whole. Assessment criteria are often specified for different levels of particular abilities, but the level as a whole is implied.
The first step, then, is to look for all the LOC definitions in your documentation, and any implied ones that are not explicitly documented. ANY piece of text that represents something that could potentially be assessed as an outcome of learning is most likely a LOC definition.
Binary and rankable
If you've looked through the documentation, you've probably come across this distinction, and it is very helpful if you are going to structure something in the InLOC way. But when I was writing the documentation, I don't think I had grasped quite how central it is. It is so central that more recently I have come to present it as a vital first concept to grasp. Very recently I quickly put together a slide deck about this, now on Slideshare under the title Distinguishing binary and rankable definitions is key to structuring competence frameworks.
I first publicly clarified this distinction in a blog post before InLOC even started: Representing level relationships; and I mentioned it more recently in InLOC and OpenBadges: a reprise.
In essence: a binary learning outcome or competence (LOC) concept is one where it makes sense to ask, have you reached this level or standard? Are you as good as this? The answer gives a binary distinction between “yes”, for those people who have reached the level, and “not yet” for those who have not. The example I give in the recent slide deck is “can touch type in English at 60 wpm with fewer than 1 mistake per hundred words”. The answer is clearly yes or no. Or, “can juggle with three juggling balls for a minute or longer” (which I can't yet).
On the other hand, a rankable concept is one where there is no clear binary criterion, but instead you can rank people in order of their ability in that concept. A rankable concept related to the previous binary one would simply be “touch typing” or “can touch type”. A good question for juggling would be “how well can you juggle?” You may want to analyse this more finely, and distinguish different independent dimensions of juggling ability, but more probably I guess you would be content to roughly rank people in order of a general juggling ability.
The second step is to look at all the LOC definitions you have isolated, and judge whether they are binary or (at least roughly) rankable.
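Here is a minimal sketch of these first two steps in Python, reusing my typing and juggling examples; the identifiers are invented for illustration.

```python
# Steps one and two in miniature: isolate the LOC definitions, then
# judge each one binary or rankable. Identifiers are invented.
from dataclasses import dataclass

@dataclass
class LOCDefinition:
    id: str        # ideally a dereferenceable HTTP URI
    title: str
    binary: bool   # True for a binary criterion, False for rankable

definitions = [
    LOCDefinition("https://example.org/loc/typing", "Can touch type", binary=False),
    LOCDefinition("https://example.org/loc/typing-60wpm",
                  "Can touch type in English at 60 wpm with fewer than "
                  "1 mistake per hundred words", binary=True),
    LOCDefinition("https://example.org/loc/juggling", "Can juggle", binary=False),
    LOCDefinition("https://example.org/loc/juggling-3balls-1min",
                  "Can juggle with three juggling balls for a minute or longer",
                  binary=True),
]
```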
Relating LOC definitions together
The third step is to relate all the LOC definitions you found to each other. Frameworks commonly have a hierarchical structure: an ability at a “high” level (of granularity) involves many abilities at “lower” levels. The simplest way of representing this is that the wider definition “has parts”, which are the narrower definitions, perhaps the products of “functional analysis” of the wider definition. InLOC allows you to relate definitions in this way, using the relationship “hasLOCpart”.
But InLOC also allows several other relationships between LOC definitions. These can be seen in the three tables on the relationships page in the documentation. To see how the relationships themselves are related, look at the third table, “ontology”. The tables together give you a clear and powerful vocabulary for describing relationships between LOC definitions. Naturally, it has been carefully thought through, and is a vital part of InLOC as a whole.
Very simple structures can be described using only the “hasLOCpart” relationship. However, when you have levels, you will need at least the “hasDefinedLevel” relationship as well. Broadly speaking, it will be a rankable LOC definition that “hasDefinedLevel” of a binary definition. Find these connections in particular!
For the other relationships, decide whether “hasLOCpart” is a good enough representation, or whether you need “hasNecessaryPart”, “hasOptionalPart” or “hasExample”. Each of these has a different meaning in the real world. Mostly, you will probably find that rankable definitions have rankable parts, and binary definitions have binary parts.
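Continuing the sketch, the third step can be represented as simple (subject, relationship, object) triples, using the InLOC relationship names; note the rankable-to-binary “hasDefinedLevel” links.

```python
# Step three in miniature: relationships as (subject, relationship,
# object) triples. A rankable definition hasDefinedLevel of a binary one.
relationships = [
    ("https://example.org/loc/typing", "hasDefinedLevel",
     "https://example.org/loc/typing-60wpm"),
    ("https://example.org/loc/juggling", "hasDefinedLevel",
     "https://example.org/loc/juggling-3balls-1min"),
]
```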
There is more related discussion in another of the blog posts from my “logic of competence” series, More and less specificity in competence definitions.
Putting together the LOC structure
In InLOC, a “LOC structure” is the collection of LOC definitions along with the relationships between them. Relationships between LOC definitions are only defined in LOC structures. This is to allow LOC definitions to appear in different structures, potentially with different relationships. You may think you know what comprises, for example, communication skills, but other people may have different opinions, and classify things differently.
A LOC structure often corresponds to a complete documented scheme of learning outcomes, and often has a name which is clearly not something that is a LOC definition, as described previously. You can't assess how good someone is at “the European e-Competence Framework” (the e-CF) itself, unless you mean knowledge of that framework; but you can assess how good people are at its component parts, the LOC definitions (for rankable ones), or whether they reach the defined levels (for binary ones).
And the e-CF, analysed in detail in the InLOC documentation, is a good example where you can trace the structure down in two ways: either by topic, then later by levels; or by level, and then levelled (binary) topic definitions that are part of those levels.
Your aim is to document all the relationships between LOC definitions that are relevant to your application, and wrap those up with other related information in a LOC structure.
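Continuing the same sketch, the wrap-up could look like this; the point to notice is that the relationships are carried by the structure, not by the definitions themselves. The identifying information is, again, invented.

```python
# A LOC structure wraps the definitions and the relationships together,
# with identifying information of its own. Relationships live in the
# structure, not in the definitions, so the same definition can appear
# in other structures with different relationships.
loc_structure = {
    "id": "https://example.org/loc/structures/demo",  # invented
    "title": "Demonstration mini-framework",
    "definitions": definitions,        # from the earlier sketch
    "relationships": relationships,    # from the earlier sketch
}
```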
What you will have gained
The task of creating an InLOC structure is more than simply creating a file that can potentially be transmitted between web applications, and related to, or referred to by, other structures that you are dealing with. It is also an exercise that can reveal more about the structure of the framework than was explicitly written into it. Often one finds oneself making explicit the relationships that are documented implicitly in terms of page and table layout. Often one fills in LOC definitions that have been left out. Whichever way you do it, you will be left with firmer, more principled structures on which to build your web applications.
We expect that sooner or later InLOC will be adopted as at least the basis of a model underlying interoperable and portable representations of frameworks of learning outcomes, skills, competences, abilities, and related knowledge structures. Much of the work has been done, but it may need revising in the light of future developments.
2017-08-18 (27th in my logic of competence series.)
The key to competence frameworks
So here I am (in 2017!) … reflecting on the thread of the logic of competence, nearly 7 years on. I'm delighted to see renewed interest from several quarters in the field of competence frameworks. There's work being done by the LRMI; and much potential interest from those interested in various kinds of soft skills. And some kinds of “badges” – open credentials intended to be displayed and easily recognised – often rely on competence definitions for their award criteria.
I just have to say to everyone who explores this area: beware! There are two different kinds of thing that go by similar names: “competencies”; “competences”; “competence definitions”; skills; etc.
- There is one kind: statements of ability that people either measure up to or not. My favourite simple, understandable examples are things like “can juggle 5 balls for a minute without dropping any” or “can type at 120 words per minute from dictation making fewer than 10 mistakes”. But there are many less exact examples of similar things, that clearly either do or do not apply to individuals at a given time of testing. “Knows how to solve quadratic equations using the formula” or “can apply Pythagoras' theorem to find the length of the third side of a right-angled triangle” might be two from mathematics. There are many more from the vocational world, but they would mean less to those not in that profession or occupation.
- Then there is another kind, more of a statement indicating an ability or area of competence in which someone can be more or less proficient. Taking the examples above, these might be: “can juggle” or “juggling skills”; “can type” or “typing ability”; “knows about mathematics” or “mathematical ability”. There are vast numbers of these, because they are easier to construct than the other kind. “Can manage a small business”; “good communicator”; “can speak French”; “good at knitting”; “a good diplomat”; “programming”; “chess”; you think of your own.
What you can see quite plainly, on looking, is that with the first kind of statement, it is possible to say whether or not someone comes up to that standard; while with the second kind of phrase, either there is no standard defined, or the standard is too vague to judge whether someone “has” that ability — it's more like, how much of that ability do you have?
In the past, I've called the first kind of form of words a “binary” competence definition, and the second kind “rankable”. But these terms are so unmemorable that even I forgot what I had called them. I'm looking for better names, that people (including myself) can easily remember.
Woe betide anyone who mixes the two kinds without realising what they are doing! Woe betide also anyone who uses one kind only, and imagines that the other kind either don't exist or don't matter.
The world is full of lists of skills which people should have some of. “Communication skills”. “Empathy”. “Resilience”. Loads of them. And in most cases, these are just of the second kind. They have not defined any particular level of the skill, and expect people to produce evidence about how good they are at the given skill, when asked.
In the vocational world of occupations and professions, however, we see very many well-defined statements that are of the first kind. This is to be expected, because to give someone a professional qualification requires that they are assessed as possessing skills to a certain, sufficient level.
The two kinds of statements are intimately related. Take any statement of the first kind. What would be better, or not so good? Juggling 3 balls for 30 seconds? Typing at 60 words per minute? These belong, as points on scales, respectively, of juggling skills and typing ability. Thus, every statement of the first kind has at least one scale that it is a point on. Conversely, every scale description, of the second kind, can, with sufficient insight, be detailed with positions on that scale, which will be statements of the first kind.
In the InLOC information model, these reciprocal relationships are given the identifiers hasDefinedLevel and isDefinedLevelOf. This is perhaps the most essential and vital pair of relationships in InLOC.
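As a tiny sketch (with invented identifiers) of how mechanical that reciprocity is: given the hasDefinedLevel links, the isDefinedLevelOf links can be derived by simple inversion.

```python
# Given hasDefinedLevel links, the reciprocal isDefinedLevelOf links
# can be derived mechanically. Identifiers are invented.
has_defined_level = {
    "https://example.org/loc/juggling":
        ["https://example.org/loc/juggling-3balls-1min"],
}

def invert(links: dict) -> dict:
    """Swap subjects and objects, turning a relationship into its reciprocal."""
    out: dict = {}
    for subject, objects in links.items():
        for obj in objects:
            out.setdefault(obj, []).append(subject)
    return out

is_defined_level_of = invert(has_defined_level)
```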
So what about competence frameworks? Well, a framework, whether explicitly or implicitly, is about relating these two kinds of statements together. It is about defining areas of ability that are important, perhaps to an activity or a role; and then also defining levels of those abilities that people can be assessed at. It's only when these levels are defined that one has criteria, not only for passing exams or recruiting employees, but also for awarding badges. And the interest in badges has held this space open for the seven years I've been writing about the logic of competence. Thank you, those working with badges!
Now I've explained this again, could you help me by saying which pair of terms would best describe for you the two kinds of statements, better than “binary” and “rankable”? I'd be most grateful.