Critical Data and Education: final reflections


My final reflections below focus on the themes from the course that I found most interesting. They do not aim to cover everything.  The socio-emotional learning musical playlist below is designed to accompany them: it’s composed of music that I listened to across the course as I was working. Our socio-emotional lives play a significant role when we are learning: they’re a strong source of intrinsic motivation, driving curiosity, for example, and are closely bound up with our aesthetic relationship to the work we’re reading and producing. Music also wields enormous power over our emotions, including when we’re learning, affecting the way we feel in all sorts of ways; that’s why I included multimodal musical experiences at many points across my blog and why I’m doing so here as well. No two people have exactly the same feelings about the sounds that they hear; maybe that partly explains the delight we often feel when we meet someone who likes similar music to us.

I like Brian Foo’s work that turns, for example, stellar data, or climate change data, into music. You might interpret this playlist’s ‘musical data’ as telling a story about the different ways our social and emotional lives engage when we read, when we write, when we converse together online, and when we think alone.  Alternatively, you might also repurpose these musical data, interpret them in a different way, and tell a different, but equally true story about their meaning. I’m not going to tell you that second story here because it’s private, and my choice not to disclose it is just that — an exercise and assertion of choice, of control, of agency, of autonomy, and, ultimately, of power. Still, I hope you enjoy listening to the first story that goes with my final reflections. You’ll find those below.



1. Personalisation

Influenced by Bulger [2014: 4], I took a shot at defining personalisation during our first tweetorial: it involves dynamic adaptation to a person’s particular aims, interests, and levels and kinds of competences. In the context of education, the person in question is usually assumed to be the student. Bulger [2014: 4] also observes that ‘there are no established standards for describing or evaluating the extent to which a learning experience is personalised’; the significant difference between responsiveness and adaptiveness in product descriptions is often ignored [Bulger 2014: 5-6]. Ideally, so-called personalised learning systems, platforms, apps and tools don’t just respond to, but actually adapt to, a person’s goals, interests, and competences –

‘Adaptive systems aim to functionally mirror and support the learning process, which is a flexible and changing, rather than fixed, process. Responsive systems are more limited, essentially offering an interface to pre-determined content…In comparison to truly adaptive systems, responsive systems are further from the neurological processes of teaching and learning, offering something much closer to an interactive textbook than a tutor.’  [Bulger 2014: 5]

The credibility of the claims made by learning personalisation’s cheer-leaders and investors rests on the assumption that learning personalisation delivers something like an adaptive personal tutor rather than a merely responsive one. For the most part, learning personalisation does not deliver genuine adaptiveness; it merely delivers responsiveness [Bulger 2014: 6]. Supposedly ‘adaptive learning environments’ are often just trussed-up recommendation systems like Spotify, Netflix, and Amazon. Here, despite the rhetoric, learning personalisation is not in fact adaptive, since the content does not change in response to the students, as it often does when a student interacts with a human teacher. Ideologically and normatively, if the thought is that we ought to replace, or reduce the numbers of, human teachers in favour of systems advertised as ‘learning personalisation systems’, this is tantamount to advocating the replacement of teaching that is more competent to support learning with teaching that is less competent to support learning. In the process, the role of the teacher is conceptually, normatively, and practically reengineered – and without any serious public consultation.
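Bulger’s distinction can be made concrete with a minimal, hypothetical sketch. Nothing here comes from a real product: the class names, content strings, and adaptation rules are all my own inventions, chosen only to show the structural difference. A responsive system is a lookup over pre-determined content; an adaptive one maintains a changing model of the learner and alters what it offers as that model changes.

```python
# Hypothetical sketch of Bulger's responsive-vs-adaptive distinction.
# All names, content, and rules are illustrative, not from any real platform.

RESPONSIVE_CONTENT = {  # fixed, pre-authored branches
    "wrong": "Re-read section 2.1",
    "right": "Proceed to section 2.2",
}

def responsive_next(last_answer_correct: bool) -> str:
    """An interface to pre-determined content: the same input always
    yields the same output, much like an interactive textbook."""
    return RESPONSIVE_CONTENT["right" if last_answer_correct else "wrong"]

class AdaptiveTutor:
    """Maintains a changing model of the learner; what it offers
    depends on the accumulated history, not just the last response."""
    def __init__(self):
        self.history = []

    def next_step(self, answer_correct: bool) -> str:
        self.history.append(answer_correct)
        recent = self.history[-3:]
        if len(recent) == 3 and all(recent):
            return "Introduce a harder, untaught variation"
        if not any(recent):
            return "Switch strategy: try a worked example instead"
        return "Continue current sequence"
```

However elaborate its branching, the responsive system is still a lookup; the adaptive one changes its own behaviour over time, which is the (much harder) thing the marketing rhetoric implies.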

2. Data

In many contexts, data are treated as the evidential basis upon which a system dynamically adapts to one set of aims, interests, and competences rather than another. Big Data – high volumes and varieties of data, produced at high velocity in real time, and assumed to be exhaustive, i.e. to ‘capture’ an entire system [Kitchin and McArdle, 2016] – are produced by learning management systems and platforms, apps and digital tools, social media sites, and so on. But although data are often regarded as neutral or ‘raw’, the visualisation tasks demonstrated the plausibility of Gitelman and Jackson’s [2013: 2] claim that data are not raw, neutral, or objective. Williamson [2017: 29] reminds us that data are “social products”, and this in turn means that their production is subject to the negotiation and power dynamics inflecting most social relations. I demonstrated that power and knowledge relations are often asymmetric enough to undermine informed consent to the use of data. Tsai, Perotta, and Gasevic [2020: 562] suggest that when it comes to data practices in education, the lack of meaningful choice, and, ultimately, lack of meaningful and informed consent, diminishes learner autonomy.
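The claim that data are not ‘raw’ can be illustrated with a toy example. The event log below is entirely invented; the point is only that the same log supports two different, equally ‘accurate’ summaries, and that choosing between them is an interpretive, social act rather than a neutral one.

```python
# Toy illustration (invented data): one log, two equally 'accurate' framings.
events = [
    {"student": "A", "action": "post", "n": 12},
    {"student": "A", "action": "read", "n": 3},
    {"student": "B", "action": "post", "n": 1},
    {"student": "B", "action": "read", "n": 40},
]

def engagement_as_posting(log):
    """Framing 1: 'engagement' means producing visible content."""
    return {e["student"]: e["n"] for e in log if e["action"] == "post"}

def engagement_as_attention(log):
    """Framing 2: 'engagement' means time spent reading."""
    return {e["student"]: e["n"] for e in log if e["action"] == "read"}
```

Framing 1 ranks student A far above student B; framing 2 reverses the ranking. Neither summary falsifies the log, but each encodes a prior decision about what counts as engagement.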

As my playlist above suggests, our learning processes are not just cognitive; they are also social and often charged with emotion. In Conceptual Space, I suggested that data-driven personalisation neither captures this nor accommodates the value of communication with others, of teamwork, and of what Bulger calls students’ ‘need for relatedness’ [Bulger, 2014: 13]. Learning becomes a lonely, boring, solipsistic endeavour. While I don’t fully agree with Friesen’s [2019] claim that the one-to-one teaching relationship is a ‘myth’ – in some contexts, it can be an intense, highly productive form of collaboration with someone you are in tune with – it is plausible that those promoting AI and other data-driven tech in education ignore the importance of communication, collaboration, and community, making the mistake of assuming that ‘proper’ learning is *only ever* one-to-one.

3. Difficult to datify

Many of my visualisations explore dimensions of learning that are difficult to measure, difficult to quantify, difficult to categorise, difficult to interpret, and, above all, difficult to datify. I considered the implications of these difficulties for learning, teaching, and governing with data. In particular, datification — ‘the rendering of social and natural worlds in machine readable digital format’ [Williamson et al, 2020: 351] — brings with it the risk that ‘only that learning that can be datified is considered valuable’ [Williamson et al, 2020: 358]. For example, in Listening on Twitter, I explored selective and careful listening on the platform: I argued that such listening should not be devalued as passive ‘lurking’ but seen as what it is: an active and crucial part of learning, often overlooked by methods that equate ‘active behaviours’ with behaviours ‘producing data’ on software platforms. In The Whig View of Learning, I suggested that some dimensions of learning are non-linear. This poses problems for learning platforms using ‘smart’ learning algorithms to adapt content to meet the requirements of an individual learner [Kitchin, 2017: 16], since many implicitly assume that there is a ‘correct order’ in which the learner should proceed. Finally, I noted that the capacity, and opportunity, to carefully attend to, pursue, and sustain an extended line of thought is a significant dimension of learning. However, while many analytic systems claim to be interested in learning, their constant notifications and interruptions don’t seem to accommodate such ‘behaviour’.
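The point about listening being invisible to platform metrics can be sketched in a few lines. The event names and scoring rule below are hypothetical, invented purely to show the shape of the problem: an analytics routine that equates ‘active behaviour’ with data-producing behaviour assigns a careful reader an activity score of zero.

```python
# Hypothetical sketch: a metric that equates 'active' with 'data-producing'.
# Event names and the scoring rule are invented for illustration.

reader = ["read"] * 50        # fifty acts of careful, selective listening
poster = ["post", "post"]     # two quick replies

DATA_PRODUCING = {"post", "like", "reply"}

def activity_score(actions):
    """Counts only behaviours that produce data on the platform,
    as many analytics methods implicitly do."""
    return sum(1 for a in actions if a in DATA_PRODUCING)
```

On this metric the attentive reader registers as entirely inactive, while two hasty replies count as twice as much ‘learning’: the datified proxy inverts the thing it claims to measure.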

4. The self

I’m most intrigued by multi-dimensional questions bound up with the nature of the self. [Brown 2020] and [Williamson et al 2020: 358] argue that dashboards, for example, limit how teachers ‘see’ students by repeatedly directing their attention to features like ‘platform engagement’. They also risk teachers failing to see students in relevant ways (e.g. as productive deviants, as risk-takers). A student’s self-concept is also limited when dashboards encourage students to see themselves in a reductive way, usually in competition with other students. Eynon [2015: 409-410] worries that data-driven ‘edtech’ involving the excessive quantification of the self risks changing a learner’s self-concept. Pushing this thought further, one might think that excessive quantification of the self involves a kind of self-shaping without the meaningful consent of the one whose self is being shaped. This seems to be the way the ‘nudging’ technology discussed by [Knox et al, 2020] is designed to work. What’s more, even if you leave aside the question of consent, and even if you argue that our selves have always been shaped relationally by others, you could argue that the real problem with this data-driven ‘nudging’ and shaping of the self is that the systems shaping us are not actually adaptive, but merely responsive. They are, in other words, incompetent shapers of selves, in contrast with human beings, who at least generally, if not perfectly, have some track-record as reasonably competent shapers of other selves [McGeer, 2015]. Just because personalised data-driven systems that are not in fact adaptive are promoted in terms of ‘enhancement’ does not make them competent.

Finally, just because data collected about you are labelled ‘personal’ does not mean that you alone own them. At least some sensitive biometric data collected in education fall into this category: your genetic data are not yours alone. They are also the data of your closest kin. There is a serious sense in which the data we think of as ours are often not only ours; in so far as we have ownership over them, it is more plausible to think of that ownership as collective.
