
14/08/15
The annual furore over school league tables arrived recently.  League tables report the proportion of pupils in a school who attain five or more good GCSE exam results, including English and maths.  Though this year the main controversy was over which exams count towards the tables - a decision that left prestigious schools such as Eton and Harrow at the bottom - it again raises the question of how accurate and meaningful such results are.
 
Challenges to the validity of league tables typically centre on them being too narrow a measure of school performance; surely schools are not there solely to produce children with five or more GCSEs?  Yet given the focus on exam attainment, a crucial question is consistently overlooked: how accurate are the exam results such tables are based on?
 
Here's a thought experiment for you.  Imagine taking an object such as a table and asking 20 people to measure how long it is to the nearest millimetre.  What would you expect to see?  Would each of the 20 people come up with the same result or would there be some variation?
 
When I've asked people to consider this or even do it, there is always some variation in the measurements.  Judgements tend to cluster together - presumably indicating the approximate length of the table - but they are not exactly the same.  There's error in any measurement, as captured in the saying 'measure twice, cut once'.
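 
If you'd rather not round up 20 people and a tape measure, a few lines of Python make the same point.  This is a minimal simulation; the true length (1,200 mm) and the size of the human error are assumptions chosen purely for illustration:

```python
import random

# Simulate 20 people measuring the same table to the nearest mm.
# Assumed values: true length 1200 mm, human error with a spread
# of about 1.5 mm around the truth.
random.seed(42)

TRUE_LENGTH_MM = 1200.0
NOISE_MM = 1.5

measurements = [round(random.gauss(TRUE_LENGTH_MM, NOISE_MM)) for _ in range(20)]

print(sorted(measurements))  # values cluster near 1200 but are not identical
mean = sum(measurements) / len(measurements)
print(f"mean = {mean:.1f} mm, range = {min(measurements)}-{max(measurements)} mm")
```

Each run shows exactly the pattern described above: a cluster around the true length, never perfect agreement.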
 
If there are errors and inconsistencies in measuring a physical object such as a table, what happens when we measure knowledge and abilities using exams or other forms of assessment?  Such assessments work by asking a series of questions designed to tap into relevant knowledge and to show understanding and application of principles.  From the responses given we make judgements about where respondents stand on the constructs we are interested in measuring; in the case of GCSEs the constructs reflect the amount of knowledge and understanding retained from instruction and learning.
 
This process necessarily involves a degree of judgement and inference.  Markers have to evaluate the adequacy of responses given to questions and, given an exam only covers a limited area of any syllabus, from this make an inference about the overall level of attainment in the subject area. 
 
Understanding the degree of error in any assessment is one of the cornerstones of psychometrics.  Error is captured in the concept of 'reliability', which describes how accurate our assessment tools are.  Effective use of tests should acknowledge error, make it explicit and treat test scores accordingly: any test score should be treated as an indicator, not an absolute measure.  This is a clear strength of the psychometric approach, which is open and honest about the limitations of measurement technology and incorporates them into any consideration of outcomes.  Educational exams, however, consistently fail to acknowledge error publicly.
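 
For those who want the mechanics, psychometrics quantifies this error with the standard error of measurement (SEM), derived directly from a test's reliability.  A minimal sketch, using assumed figures rather than those of any real exam:

```python
import math

# Standard error of measurement: SEM = SD * sqrt(1 - reliability).
# The score scale (SD = 15) and the reliability (0.90) are assumed,
# illustrative figures, not those of any real exam.
def sem(sd: float, reliability: float) -> float:
    return sd * math.sqrt(1.0 - reliability)

error = sem(sd=15.0, reliability=0.90)
observed_mark = 62.0
low, high = observed_mark - 1.96 * error, observed_mark + 1.96 * error

print(f"SEM = {error:.1f} marks")
print(f"95% band around a mark of {observed_mark:.0f}: {low:.1f} to {high:.1f}")
```

Even with a reliability of 0.90 - high by exam standards - a mark of 62 could plausibly lie anywhere from the low 50s to the low 70s, comfortably wide enough to straddle a grade boundary.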
 
Though exam results do not come with an error warning, issues with their accuracy are regularly highlighted.  Each year after exam results are announced, the media picks up on 'failings' in the exam system.  Usually these centre on re-marking, where parents have appealed against their child's marks and a different grade has been awarded after the exam paper has been reviewed.  Such stories clearly highlight the issue of reliability, but quickly disappear, only to be dusted off and resurface the following year.  The exam system invariably takes these criticisms on the chin, wheels out the usual defences, then goes back to business as usual.
 
Returning to exam league tables, individual results contain error and so do groups of results.  The degree of error varies with the size of the group - here, the number of children in a school - but this is another factor conveniently ignored in league tables.  As consumers of league tables we are misled.  Whatever our views on the emphasis given to exam results over other factors in assessing school performance, until league tables acknowledge the error in exam results they cannot be fit for purpose.
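 
The statistics behind that last point are simple: the error in a school's average shrinks only with the square root of the cohort size.  Another hedged sketch with assumed figures:

```python
import math

# The error in a school's average mark falls with the square root
# of the number of pupils. The pupil-level SD of 15 is an assumed,
# illustrative figure.
PUPIL_SD = 15.0

for cohort in (30, 120, 270):
    se = PUPIL_SD / math.sqrt(cohort)
    print(f"{cohort:4d} pupils: average accurate to about +/- {1.96 * se:.1f} marks")
```

A 30-pupil cohort carries roughly three times the error of a 270-pupil one, yet league tables rank both with the same misplaced confidence.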

  
02/06/15
"If you only assess one thing, use an assessment of cognitive ability."

In this post I highlight the effectiveness of ability tests in determining candidate aptitude when used within the recruitment process.
09/02/15
Notable pieces of academic research too often fail to receive the public attention they deserve.  Not so with a recent article entitled 'Computer-based personality judgments are more accurate than those made by humans', which has attracted a flurry of media interest.
 
Online behaviour is increasingly being mined by psychologists, as the vast amounts of information gathered through our everyday use of the internet are readily available to test out their ideas.  The team in this study used data from Facebook to predict users' personality on the Big Five dimensions (Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism).  Using a sample of over 80,000 Facebook users, they first asked them to complete a Big Five personality questionnaire and then modelled the relationship between those scores and their Facebook 'likes'.  Having established links between the Big Five and Facebook likes in part of the sample, they then used those links to predict the personality of users in further samples from their likes alone.
 
There is much existing research comparing our judgements of our own personality with judgements made by others such as friends, colleagues and partners.  This provided a useful benchmark against which the predictions generated from Facebook likes could be compared.  The research team found that computer-based predictions of users' personality were significantly more accurate than human judgements.  The accuracy of prediction varied according to the number of likes a user had, but it took only around 227 likes for the computer to predict a user's personality as well as a spouse - the best human judge.  It took only 10 likes for the prediction to be better than that of the average work colleague.
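 
For readers curious about the mechanics, the general approach is to reduce the huge, sparse matrix of user-by-like data to a handful of components and regress trait scores on them.  The sketch below uses synthetic data and a deliberately simplified pipeline (scikit-learn's SVD plus linear regression); it illustrates the idea, not the research team's exact method:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: 1,000 users x 500 possible likes (0/1),
# plus a self-reported trait score (say, Extraversion) to predict.
rng = np.random.default_rng(0)
likes = rng.integers(0, 2, size=(1000, 500)).astype(float)
trait = likes[:, :50].mean(axis=1) + rng.normal(0, 0.1, size=1000)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

# Compress the like matrix to 50 components, then fit a linear model.
model = make_pipeline(TruncatedSVD(n_components=50, random_state=0),
                      LinearRegression())
model.fit(X_train, y_train)

# Accuracy is reported, as in the study, as the correlation between
# predicted and self-reported trait scores.
predicted = model.predict(X_test)
print(f"r = {np.corrcoef(predicted, y_test)[0, 1]:.2f}")
```

Because the synthetic trait here is built from the likes, the correlation comes out high; with real data the signal is weaker, which is why the number of likes available matters so much.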
 
So we're constantly told that our online behaviour reveals much about us: be careful with your passwords and bank account details, and don't post anything too offensive as you never know who's looking.  But what about something as personal as our preferences and characteristics, an important part of what makes us unique individuals?
 
Well, the answer is that Facebook behaviour revealing something about our personality should come as no surprise at all if we stop and think about how personality assessments work. 
 
The areas of personality assessed by questionnaires - often referred to as 'constructs' or 'factors' - cannot be directly observed as they lie deep within our minds.  However, where we stand on these constructs, whether we are high, low or somewhere in the middle, influences our behaviour.  Anyone who has completed a personality questionnaire will probably remember being faced with a long list of questions asking about their typical behaviours, preferences and similar.  Each question aims to get at a specific behaviour and tells us a little more about the respondent.  Together they build an accurate picture of the respondent's personality.  If personality assessment relies on knowing about behaviour, it should be of little surprise that our behaviour tells us something about our personality.  If it didn't, personality assessment wouldn't work.
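 
To make this concrete, here is a minimal sketch of how item responses become a trait score.  The items, the respondent's answers and the scoring scheme are all invented for illustration:

```python
# Each item is a statement plus a flag for reverse-keyed wording.
# Items and responses below are invented for illustration only.
EXTRAVERSION_ITEMS = [
    ("I am the life of the party", False),
    ("I keep in the background", True),               # reverse-keyed
    ("I start conversations", False),
    ("I dislike drawing attention to myself", True),  # reverse-keyed
]

responses = [4, 2, 5, 1]  # one respondent, 1-5 Likert scale

def scale_score(items, answers, scale_max=5):
    total = 0
    for (_, reverse), answer in zip(items, answers):
        # Flip reverse-keyed items so a high score always means
        # more of the trait being measured.
        total += (scale_max + 1 - answer) if reverse else answer
    return total / len(items)

print(f"Extraversion score: {scale_score(EXTRAVERSION_ITEMS, responses):.2f} / 5")
```

Each answer on its own says little; averaged across many such items, the score becomes a usable estimate of where the respondent sits on the trait.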
 
Like any behaviour, how we interact with technology such as Facebook reveals something about our personality.  It's no great surprise that if you observe enough of a person's behaviour - whether by spending time with them, analysing their likes on Facebook or following other online activity - you come to understand their characteristics.  This is, in fact, one of the main reasons that personality assessments are useful: by knowing a person's characteristics you are able to predict with moderate certainty how they are likely to behave across a range of situations.  Without this element of prediction, personality assessment would be no more useful than a parlour game.

  
15/01/15
Whether you prefer to call it ‘downsizing’, ‘rightsizing’, ‘restructuring’ or simply making redundancies, organisations continue to shed staff in an attempt to deal with the economic slowdown. For those who have to make the difficult decision of who to keep and who to let go, the importance of objective information on which to base these decisions will be all too apparent. But how many organisations have this information readily available? And if they do, it is at times like this that its quality really comes into question.

We can rely on the usefulness of psychometric tests during selection, but they should not be used when we have information on actual job performance. In the case of redundancies we have (or at least should have) information on how employees perform, and we need to base our decisions on this. For those decisions to be defensible, however, they must rest on sound, objective measurement. Faced with these difficult decisions, our annual appraisal information is all too often not up to the task.

Realise Potential have experience in all aspects of employee evaluation and can provide the information you need to make robust and fair decisions about your employees for any purpose. To find out how we can help you establish rigorous performance evaluations and provide objective information to support all employee decisions, please contact us.
15/01/15
Recruitment may be down but the war for talent remains as fierce as ever. Indeed, some employers see these difficult times as an opportunity to target competitors' employees who have been made redundant. That is one of the many interesting findings from a recent survey commissioned by StepStone recruitment (see here for the full article).

The survey goes on to identify the vital role that HR can play in identifying and nurturing talent. However, with the focus ever more on the bottom line, functions such as HR are increasingly challenged to demonstrate their return on investment (ROI).

Demonstrating ROI is an ongoing challenge. Though it is relatively simple to look at the bottom line or other indicators of performance at an organisational level, identifying the impact of individuals is much more of a challenge.

Consider this question: What constitutes 'success' for particular job roles in your organisation?

And now this one: How well do you measure individuals against these 'success' criteria?

These can be challenging questions, but it is exactly these types of question that our expertise in assessment and in measuring people's performance can help you answer. If you are interested in finding out how you can identify talent and demonstrate your return on investment, please contact us.