What is the value of training?
This week I decided to collect data on the value of my engagement with colleagues through training sessions.
On this occasion I ‘collected’ data that had already been gathered about me without my involvement: I requested the feedback that is automatically collected when colleagues attend training. The questionnaire is standardised and automatically issued, and the responses are automatically collected, analysed and stored. It covers what is thought to capture the value of my training: relevance, organisation, effectiveness and increased confidence.
Results and Analysis
This is data that is collected for internal reporting and decision making; some aspects can be used for marketing.
I am visualising my data in a standardised format, for purposes of comparison and quick analysis.
This depiction of value is inspired by the barcode; designed to encode data for quick and easy scanning by a non-human reader, from object to processor.
The question of the value of this part of my work is answered in an automated way, by the system. The data is not intended to be ‘read’ by me, hence it is not depicted in a human-readable format.
Again, as with the last Governance visualisation, the aim is depersonalisation, not personalisation. For this reason my visualisation is made by hand but not hand drawn, to remove any sense of humanity.
To be coded, questions have had to be phrased in such a way that the responses could be ‘measured’, either against a standard or against each other as a rank order. This reduces the data through simplification and also changes it: only certain responses, expressed in a certain way, are valid. To what extent does the process of measuring influence the outcome, and how do those involved understand and take responsibility for this?
This is the data I could access; there is probably more. How does sight of this data affect me… should it? How much weight does this data have, and should I know that and/or be able to influence that? How did we move from someone mooting the idea of collecting data this way, to it being standard procedure?
How does this relate to governance?
- These data are ‘...products of complex assemblages of technology, people and policies…’ that ‘…construct the infrastructure of accountability…’ which ‘...shapes what and who count…’ [Anagnostopoulos et al., 2013].
- Anagnostopoulos et al. describe this infrastructure as ‘…sunken into objects...’ where it would ‘…recede into the background…’. Such is the case here, which makes it less likely to be noticed and therefore questioned.
- Standardised performance metrics like this can be made to ‘…appear as objective facts…’ [Anagnostopoulos et al., 2013], again making them less likely to be questioned.
- This routine of performance measurement suggests that matters can be easily understood, and denies the underlying ‘…complexities and uncertainties...’ [Anagnostopoulos et al., 2013].
- All of these points make practices like this powerful; the question is then to whom (or what) this ‘informatic’ power [Anagnostopoulos et al., 2013] accrues, especially considering the role of such metrics in policy and decision making [Fontaine, 2016].
- Because context has been removed from these performance metrics, the role of bias in their production is denied [Bennett, 1982; Boring, 2017], which in turn fails to recognise how their use may reinforce existing inequalities and social injustice [Fontaine, 2016].
- Although this infrastructure may be expressed as being for the benefit of learners, those whose actions are being measured can interpret it as surveillance over which they have little or no control. Rather than acting to improve the metrics by ‘improving’ their performance, they may instead act in ways that resist surveillance [Fontaine, 2016].
Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R., 2013. Introduction: Mapping the information infrastructure of accountability. In: Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.
Bennett, S.K., 1982. Student perceptions of and expectations for male and female instructors: Evidence relating to the question of gender bias in teaching evaluation. Journal of Educational Psychology, 74(2), p. 170.
Boring, A., 2017. Gender biases in student evaluations of teaching. Journal of Public Economics, 145, pp. 27-41.
Fontaine, C., 2016. The Myth of Accountability: How Data (Mis)Use is Reinforcing the Problems of Public Education. Data and Society Working Paper, 08.08.2016.