Block 3: Week 9 Visualisation

What is the value of a meeting?

This week I decided to collect data on the value of my engagement with colleagues through meetings.

Methodology

For five days I recorded all my meeting activity (Eynon's (2015) 'what') and categorised it by aspects that might relate to the value of my engagement: type of activity, numbers involved, and immediacy of impact.
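
A minimal sketch of the kind of record this produced might look like the following (the field names and categories here are illustrative, not the actual coding I used):

    from dataclasses import dataclass
    from enum import Enum

    class Immediacy(Enum):
        """Judged immediacy of a meeting's impact (a category, not a measurement)."""
        IMMEDIATE = "immediate"
        SHORT_TERM = "short-term"
        LONG_TERM = "long-term"

    @dataclass
    class Meeting:
        """One self-reported record in the week's meeting diary."""
        activity_type: str      # e.g. "briefing", "planning", "one-to-one"
        attendees: int          # numbers involved
        duration_minutes: int
        immediacy: Immediacy    # judged immediacy of impact

    # The diary is simply a list of such records, one per meeting.
    week_log = [
        Meeting("planning", attendees=6, duration_minutes=60,
                immediacy=Immediacy.SHORT_TERM),
    ]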

Results and Analysis

I am required to submit data on my work activity to be analysed and compiled into internal reports, and possibly also to be used for external reporting.

I am visualising my meeting data with a similar situation in mind: those analysing the data need a simplified representation that might be standardised for purposes of comparison (and prediction), and that is based on typical commercial characteristics of value (e.g. activity rather than sentiment).

[Figure: a scale measuring the weight of a gemstone. Caption: What is the value of a meeting?]

This depiction of value is inspired by:

  • weighing: a method to measure value, through its comparison to a standard;
  • gems: something to which we ascribe value (the stuff of which it is made usually having little intrinsic value) based on multiple characteristics, but which can also be susceptible to trends.

I consider my work valuable, but how is it valued by others when its context is removed? The weight of a gem is only one part of its value; why would you choose to value something by a single characteristic? Are large meetings (larger gems) considered more valuable because they get the message across to more people at one time, or are they thought too expensive in terms of staff time? Are meetings thought to be replaceable by email (especially if you disregard the details)?
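
If one did want to reduce a meeting to a single 'carat weight' in the way the visualisation implies, the calculation could be as crude as the following sketch (the formula, simple person-hours, is hypothetical, and is exactly the kind of single-characteristic valuation being questioned here):

    def meeting_carats(attendees: int, duration_minutes: int) -> float:
        """Reduce a meeting to one 'weight', as a gem is reduced to carats.

        Hypothetical formula: person-hours, i.e. one 'carat' per hour of
        one person's attendance. Like weighing a gem, it ignores every
        other characteristic of the meeting.
        """
        return attendees * duration_minutes / 60

    # A one-hour meeting of six people 'weighs' the same as a six-hour
    # meeting of one person: 6.0 'carats'.
    print(meeting_carats(attendees=6, duration_minutes=60))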

In contrast to the two previous viewpoints, Learning and Teaching, here, where we consider the view of Governance, the aim is not personalisation but depersonalisation.

To make the visualisation suitable for governance, the data has been coded so that it could be compared with data from other staff, in other roles, in other institutions. All context that potentially prevents this has been removed: the person is reduced to a number, the profession to a function. Some data I did not need to collect by hand, as it is logged automatically: all meetings were booked with one tool and used one platform; this, along with attendee and time data, might have been added to my self-reported data without my knowing.
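
A minimal sketch of the sort of coding this implies (the code table, field names and staff number are invented for illustration; the real processing belongs to the institution, not to me):

    # Hypothetical activity codes: the profession reduced to a function.
    ACTIVITY_CODES = {"briefing": "A1", "planning": "A2", "one-to-one": "A3"}

    def depersonalise(record: dict, staff_id: int) -> dict:
        """Strip context so a record can be compared across staff, roles
        and institutions. Anything outside the standard schema is discarded."""
        return {
            "staff": staff_id,  # the person reduced to a number
            "activity": ACTIVITY_CODES.get(record["activity_type"], "A0"),
            "attendees": record["attendees"],
            "minutes": record["duration_minutes"],
        }

    coded = depersonalise(
        {"activity_type": "planning", "attendees": 6, "duration_minutes": 60,
         "notes": "why this meeting mattered"},  # context, silently dropped
        staff_id=1042,
    )
    print(coded)  # {'staff': 1042, 'activity': 'A2', 'attendees': 6, 'minutes': 60}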

The institution may process this data still further, in a way which suits its objectives. Though I could add contextual data to aid its understanding, there is no assurance that it would be used (in fact, the method of data collection might prevent anything other than the data requested being input).

Knowing what data is being collected, and in what form, focuses attention on that (and away from other matters). Knowing that decisions could be made on the basis of this data could cause me to change my behaviour (to change the data), beyond changes that might improve my situation or performance. Again, if I am aware that my data will be compared with that of others, I may change my data so that I appear to be acting correctly (even if I know the comparison is inappropriate and the change does not lead to improvement). Thus it becomes control and surveillance, even if it is self-surveillance.

How does this relate to governance?

  • Requests for (or automatic collection of) performance data, along with judgements and ranking [Espeland & Sauder, 2016] to contribute to decision-making [Esposito & Stark, 2019], are a common part of ‘normal’ practice in higher education. This can also include rating ‘value’ [Burrows, 2012].
  • Those involved in governance will help to determine the 'problem' this data is going to solve and to inform the means by which this will be done [Williamson, 2017a]. This may involve other actors of whom those whose data it is may not be completely aware.
  • Processing that simplifies data to a large extent will mean that some aspects are lost completely, since their value (to the person whose data it is) is not shared by those in governance. This is not a neutral act [Williamson, 2017b].
  • When data is processed for governance, students and staff are represented as 'thin descriptions' [Ozga et al., 2011] and dehumanised, possibly affecting how this data, only a proxy for what it is claimed to represent [Williamson et al., 2020], is seen and therefore how it is used.
  • Standardising data to make it context-free is argued to make it objective and to enable comparisons between subjects [Ozga, 2016], but risks rendering it too generic to be of real use with subjects that can vary widely.
  • Even if students or staff have some control over what data is shared and how, all the important decisions have already been taken. That the institution's practices are data-led, which, it is said, increases efficiency [Williamson, 2019], may not be up for discussion [Ozga, 2016]. And if outside agents are involved, they may be making use of the data for their own purposes [Robertson, 2019].
  • Whether or not staff or students know what is done with, and on the basis of, their data, this is part of its coding; the coding is '…structured and structuring…' such that they could be '…driven by analysis of performance data…' [Ozga, 2016].

References

Burrows, R., 2012. Living with the h-index? Metric assemblages in the contemporary academy. The Sociological Review, 60 (2), pp. 355–372.

Espeland, W. N., and Sauder, M., 2016. Engines of anxiety: Academic rankings, reputation, and accountability. New York, NY: Russell Sage Foundation.

Esposito, E., and Stark, D., 2019. What’s observed in a rating? Rankings as orientation in the face of uncertainty. Theory, Culture and Society, 36 (4), pp. 3–26. https://doi.org/10.1177/0263276419826276

Eynon, R., 2015. The quantified self for learning: critical questions for education. Learning, Media and Technology, 40 (4), pp. 407–411. https://doi.org/10.1080/17439884.2015.1100797

Ozga, J., 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15 (1), pp. 69–81.

Ozga, J., Dahler-Larsen, P., Segerholm, C., et al. (eds), 2011. Fabricating Quality in Education: Data and Governance in Europe. London: Routledge, pp. 127–150.

Robertson, S., 2019. Comparing platforms and the new value economy in the academy. In R. Gorur, S. Sellar, and G. Steiner-Khamsi (eds), Comparative Methodology in the Era of Big Data and Global Networks. London: Routledge, pp. 169–186.

Williamson, B., 2017a. Conceptualising digital data, in Big Data in Education: The digital future of learning, policy and practice. Sage.

Williamson, B., 2017b. Digital Education Governance: political analytics, performativity and accountability, in Big Data in Education: The digital future of learning, policy and practice. Sage.

Williamson, B., 2019. Policy networks, performance metrics and platform markets: Charting the expanding data infrastructure of higher education. British Journal of Educational Technology, 50 (6), pp. 2794–2809. https://doi.org/10.1111/bjet.12849

Williamson, B., Bayne, S., and Shay, S., 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25 (4), pp. 351–365.

2 Replies to “Block 3: Week 9 Visualisation”

  1. This is a really interesting set of reflections on questions of performance data and value. Many news stories over the last year have reported increased employee performance measurement as work has moved into the home. The 'productivity score' algorithm has become 'the boss' in a new culture of 'workplace surveillance', according to some accounts: https://www.theguardian.com/technology/2020/nov/26/microsoft-productivity-score-feature-criticised-workplace-surveillance.

    I think you are right to highlight that this kind of performance measurement can work as a governing technology in education too, inciting behaviours that ‘react’ to the measurement. I presume the goal is to create reactive behaviours that optimize productivity or performance – governing how people behave towards better outcomes. But might other reactive behaviours occur as a result, too? Espeland and Sauder call performance measurement technologies in HE ‘engines of anxiety’ that incite activities that should satisfy the performance scoring methodology but may undermine the values and missions of institutions too. If you can this week, have a look at some of the HESA data on UK HE and think about whether these might generate similar problems, even as they are clearly intended to promote improved performance outcomes.

  2. Thanks again, Ben.

    It is perhaps not surprising that we see instances of the same approach towards employees as towards students, presumably due to similar levels of mistrust in others.

    Added to this are those who see the possibility of manufacturing a problem from which they can profit by providing a solution.
