Critical Data and Education: final reflections


My final reflections below focus on the themes from the course that I found most interesting. They do not aim to cover everything.  The socio-emotional learning musical playlist below is designed to accompany them: it’s composed of music that I listened to across the course as I was working. Our socio-emotional lives play a significant role when we are learning: they’re a strong source of intrinsic motivation, driving curiosity, for example, and are closely bound up with our aesthetic relationship to the work we’re reading and producing. Music also wields enormous power over our emotions, including when we’re learning, affecting the way we feel in all sorts of ways; that’s why I included multimodal musical experiences at many points across my blog and why I’m doing so here as well. No two people have exactly the same feelings about the sounds that they hear; maybe that partly explains the delight we often feel when we meet someone who likes similar music to us.

I like Brian Foo’s work that turns, for example, stellar data, or climate change data, into music. You might interpret this playlist’s ‘musical data’ as telling a story about the different ways our social and emotional lives engage when we read, when we write, when we converse together online, and when we think alone.  Alternatively, you might also repurpose these musical data, interpret them in a different way, and tell a different, but equally true story about their meaning. I’m not going to tell you that second story here because it’s private, and my choice not to disclose it is just that — an exercise and assertion of choice, of control, of agency, of autonomy, and, ultimately, of power. Still, I hope you enjoy listening to the first story that goes with my final reflections. You’ll find those below.



1. Personalisation

Influenced by Bulger [2014: 4], I took a shot at defining personalisation during our first tweetorial: it involves the dynamic adaptation to a person’s particular aims, interests, and levels and kinds of competences. In the context of education, the person in question is usually assumed to be a student. Bulger [2014: 4] also observes that ‘there are no established standards for describing or evaluating the extent to which a learning experience is personalised’; the significant difference between responsiveness and adaptiveness in product descriptions is often ignored [Bulger 2014: 5-6]. Ideally, so-called personalised learning systems, platforms, apps and tools don’t just respond to, but actually adapt to, a person’s goals, interests, and competencies –

‘Adaptive systems aim to functionally mirror and support the learning process, which is a flexible and changing, rather than fixed, process. Responsive systems are more limited, essentially offering an interface to pre-determined content…In comparison to truly adaptive systems, responsive systems are further from the neurological processes of teaching and learning, offering something much closer to an interactive textbook than a tutor.’  [Bulger 2014: 5]

The credibility of the claims of learning personalisation cheer-leaders and investors rests on the assumption that learning personalisation delivers something like an adaptive personal tutor rather than a merely responsive one. For the most part, learning personalisation does not deliver genuine adaptiveness; it merely delivers responsiveness [Bulger 2014: 6]. Supposedly ‘adaptive learning environments’ are merely trussed-up recommendation systems like Spotify, Netflix, and Amazon. Here, despite the rhetoric, learning personalisation is not in fact adaptive, since the content does not change in response to the student, as, in contrast, it often does when a student interacts with a human teacher. Ideologically and normatively, if the thought is that we ought to replace, or reduce the numbers of, human teachers in favour of systems advertised as ‘learning personalisation systems’, this is tantamount to advocating the replacement of teaching that is more competent to support learning with teaching that is less competent to support learning. In the process, the role of the teacher is conceptually, normatively, and practically reengineered – and without any serious public consultation.
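Bulger’s responsive/adaptive distinction can be made concrete with a little code. The sketch below is purely illustrative (the names and the update rule are my own inventions, not any vendor’s product or Bulger’s own example): a responsive system maps performance onto a fixed menu of pre-determined content, while an adaptive system maintains a changing model of the learner and generates its next offering from that model.

```python
# Illustrative sketch only: a 'responsive' system is a fixed lookup over
# pre-determined content -- Bulger's 'interactive textbook'. An 'adaptive'
# system keeps a running model of the learner and derives its next
# offering from that model, so its behaviour changes as the learner does.

RESPONSIVE_CONTENT = {
    "low": "remedial worksheet",
    "high": "extension worksheet",
}

def responsive_next(score):
    """Responsive: same score in, same pre-determined content out."""
    return RESPONSIVE_CONTENT["high" if score >= 0.5 else "low"]

class AdaptiveTutor:
    """Adaptive: maintains and updates a model of the learner."""
    def __init__(self):
        self.estimate = 0.5   # running estimate of competence
        self.interests = []   # grows as the learner reveals them

    def observe(self, score, topic):
        # Update the learner model with each interaction.
        self.estimate = 0.7 * self.estimate + 0.3 * score
        if topic not in self.interests:
            self.interests.append(topic)

    def next_activity(self):
        # Content is derived from the current model, not picked from a
        # fixed menu, so it shifts with the learner's history.
        level = "stretch" if self.estimate >= 0.6 else "consolidate"
        focus = self.interests[-1] if self.interests else "general"
        return f"{level} task on {focus}"
```

Two learners with identical latest scores always receive identical content from the responsive lookup; the adaptive tutor’s suggestions drift with each learner’s history. The gap between those two behaviours is the gap the marketing rhetoric papers over.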

2. Data

In many contexts, data are treated as the evidential basis upon which a system dynamically adapts to one set of aims, interests, and competences rather than another. Big Data – high volumes and varieties of data, produced at high velocity in real time, and assumed to be exhaustive, i.e. to ‘capture’ an entire system [Kitchin and McArdle, 2016] – are produced by learning management systems and platforms, apps and digital tools, social media sites, and so on. But although data are often regarded as neutral or ‘raw’, the visualisation tasks demonstrated the plausibility of Gitelman and Jackson’s [2013: 2] claim that data are not raw, neutral, or objective. Williamson [2017: 29] reminds us that data are ‘social products’, which in turn means that their production is subject to the negotiation and power dynamics inflecting most social relations. I demonstrated that power and knowledge relations are often asymmetric enough to undermine informed consent to the use of data. Tsai, Perrotta, and Gasevic [2020: 562] suggest that when it comes to data practices in education, the lack of meaningful choice, and, ultimately, of meaningful and informed consent, diminishes learner autonomy.

As my playlist above suggests, our learning processes are not just cognitive; they are also social and often charged with emotion. In Conceptual Space, I suggested that data-driven personalisation neither captures this nor accommodates the value of communication with others, of teamwork, and of what Bulger calls students’ ‘need for relatedness’ [Bulger, 2014: 13]. Learning becomes a lonely, boring, solipsistic endeavour. While I don’t fully agree with Friesen’s [2019] claim that the one-to-one teaching relationship is a ‘myth’ – in some contexts, it can be an intense, highly productive form of collaboration with someone you are in tune with – it is plausible that those promoting AI and other data-driven tech in education ignore the importance of communication, collaboration, and community, mistakenly assuming that ‘proper’ learning is *only ever* one-to-one.

3. Difficult to datify

Many of my visualisations explore dimensions of learning that are difficult to measure, difficult to quantify, difficult to categorise, difficult to interpret, and, above all, difficult to datify. I considered the implications of these difficulties for learning, teaching, and governing with data. In particular, datification – ‘the rendering of social and natural worlds in machine readable digital format’ [Williamson et al, 2020: 351] – brings with it the risk that ‘only that learning that can be datified is considered valuable’ [Williamson et al, 2020: 358]. For example, Listening on Twitter explored selective and careful listening on Twitter: I argued that such listening should not be devalued as passive ‘lurking’ but seen as what it is: an active and crucial part of learning, often overlooked by methods that equate ‘active behaviours’ with behaviours ‘producing data’ on software platforms. In The Whig View of Learning, I suggested that some dimensions of learning are non-linear. This poses problems for learning platforms using ‘smart’ learning algorithms to adapt content to meet the requirements of an individual learner [Kitchin, 2017: 16], since many implicitly assume that there is a ‘correct order’ in which the learner should proceed. Finally, I noted that the capacity, and opportunity, to carefully attend to, pursue, and sustain an extended line of thought is a significant dimension of learning. However, while many analytics systems claim to be interested in learning, their constant notifications and interruptions don’t seem to accommodate such ‘behaviour’.

4. The self

I’m most intrigued by multi-dimensional questions bound up with the nature of the self. Brown [2020] and Williamson et al [2020: 358] argue that dashboards, for example, limit how teachers ‘see’ students by repeatedly directing their attention to features like ‘platform engagement’. They also risk teachers failing to see students in relevant ways (e.g. as productive deviants, as risk-takers). A student’s self-concept is also limited when dashboards encourage students to see themselves in a reductive way, usually in competition with other students. Eynon [2015: 409-410] worries that data-driven ‘edtech’ involving the excessive quantification of the self risks changing a learner’s self-concept. Pushing this thought further, one might think that excessive quantification of the self involves a kind of self-shaping without the meaningful consent of the one whose self is being shaped. This seems to be the way the ‘nudging’ technology discussed by Knox et al [2020] is designed to work. What’s more, even if you leave aside the question of consent, and even if you argue that our selves have always been shaped relationally by others, you could argue that the real problem with this data-driven ‘nudging’ and shaping of the self is that the systems shaping us are not actually adaptive, but merely responsive. They are, in other words, incompetent shapers of selves, in contrast with human beings, who at least generally, if not perfectly, have some track-record as reasonably competent shapers of other selves [McGeer, 2015]. Just because personalised data-driven systems that are not in fact adaptive are promoted in terms of ‘enhancement’ does not make them competent.

Finally, just because data collected about you is labelled ‘personal’ does not mean that you alone own it. At least some sensitive biometric data collected in education falls into this category: your genetic data is not yours alone. It is also the data of your closest kin. There is a serious sense in which data we think of as ours is often not only ours; in so far as we have ownership over it, it is more plausible to think of that ownership as collective.

Block 3: Governing with data: Wrap-up

My data visualisations on governing with data represented (1) daydreaming, (2) emotions and physical feelings, and (3) conceptual space. Daydreaming, an activity sometimes regarded as unproductive and unworthwhile because it is hard to measure, may in fact play an essential part in the process of innovation, one of the central drivers of productivity. I argued that education policy-making intended to measure performance and productivity may have the unintended, damaging consequence of undermining them. The discussion of the visualisation depicting emotions and feelings worried about the effects of an intrusive ‘database government’ [Williamson, 2017: 73] not only on behaviour but on one’s physical, emotional, and internal mental life, where even the most subjective and private dimensions of learning are effectively policed by unaccountable others who take themselves to be entitled to manipulate and push another person’s mental life in one direction rather than another.

My final visualisation on conceptual space aimed to demonstrate how the same data on socio-emotional learning processes can be visualised in a different way and recycled for different purposes. This raises questions about the trustworthiness of actors who repurpose data that parents, for example, have consented to give about their child for uses to which no consent has been given. Williamson [2017: 80-81] observes that the UK’s National Pupil Database (NPD) makes pupil data, including sensitive data, available for third-party analysis, some of which is even released to the media.

Williamson [2017: 81] claims that the issues at stake here are trust and privacy. But this is neither a full nor an accurate characterisation of what the targets of concern and analysis need to be. Persson [2016] (also quoted by Williamson) comes closer to the mark, zooming in not on trust but on trustworthiness. She writes:

“The trustworthiness of pupil data collection…depends on the limitation of the future scope of what purposes data will be used for and who will access them. Scope creep is not fiction, but very real, and today’s use of data by government can mean that what we sign up to, does not stay what we signed up to. Data handed into schools by parents and pupils before 2012 are now used for entirely different purposes since legislation was changed to permit the release of individual pupil data…The release of identifiable children’s confidential data without consent to companies and journalists is stunning”

NPD-style data collection processes in education that are shaped by government legislation allowing the use and repurposing of data in this manner are untrustworthy. And this means that it is perfectly rational to distrust them. In addition, trust, including public trust, is not something that others can ‘build’ as Williamson seems to think [Williamson, 2017, 71] but something that is given or refused. If anything is to be ‘built’ it is trustworthiness — the target of intelligent and well-placed trust. And when political institutions, data collection processes in education, and the imaginaries framing them, are trustworthy, and demonstrably so, trust can easily be given, well-placed, and appropriate. Otherwise, it’s rational to distrust what Persson is describing above. Finally, on privacy, it is not clear, as Williamson seems to be suggesting, that it is simply the privacy of individual children that’s at stake here since biometric data, for example, also concerns others (such as a child’s nearest kin). I’ll say more about that in my final reflections.

I’ll sign off this block by leaving you with this talk on trust and trustworthiness by Onora O’Neill. Enjoy.

Block 3. Week 11. Governing. Conceptual space reloaded

My first data visualisation represented how a student’s conceptual space could expand in response to provocations made via a range of media. I used it to demonstrate how our learning processes are not just cognitive or social but also laden with emotion. My last visualisation aims to demonstrate how the same data on socio-emotional learning processes can be visualised in a different way and recycled for different purposes, as Williamson [2017: 81] observes. For example, the UK’s National Pupil Database makes pupil data, including sensitive data, available for third-party analysis. One can easily imagine these data being used in a completely different context by a health insurance broker to assess someone’s mental health and then to inform decisions about the cost of their insurance premium.

Williamson [2017: 77] writes that images of data are ‘powerful explanatory and persuasive devices’. I’d add that sometimes the design of the visualisation itself constrains what data are represented. For example, I wanted this sea-like design (influenced by this) to be very simple, to contrast with the earlier, more complex representation of conceptual space. But this made it harder to represent the duration of thinking and the platform used, so I excluded data representing them. Given that, as Williamson [2017: 77] notes, data visualisations and displays are themselves used as policy instruments, the visualisations chosen serve rhetorical purposes and can be used to leave certain data out or make other data more salient than they might otherwise be. Furthermore, we might present so-called ‘intimate data’ [Williamson 2017: 82] on, for example, emotional responses in such an aesthetically charming way that we ignore what’s left out, or ignore ethical questions about the appropriateness of gathering intimate data on people’s emotional and mental lives. Data may sometimes be beautiful; that doesn’t make them good. Finally, you might think that the collection of data purporting to represent a person’s intimate emotional and mental states, and the physical states of their body, is a form of data colonialism of a distinctively bodily kind, undertaken without the meaningful consent of the person represented. This suggests, in turn, a kind of corporate sense of entitlement to a person’s body that the one represented is not able to resist or even contest, since it is ‘concerned with the external appropriation of data on terms that are partly or wholly beyond the control of the person to whom the data relates’ [Couldry and Mejias, 2018; Prinsloo, 2020: 367].

Block 3. Week 10. Governing: emotions and feelings

This week I recorded my perception of the emotions and physical feelings of members of staff and students I was in contact with. The easiest thing in the world would have been to represent each person using emoji. I rejected that mode of visualisation because emoji don’t represent emotions; they represent facial expressions, and facial expressions are not a reliable guide to emotional states [Crawford, 2021]. Instead, I’ve represented each person as a flower because we are part of the natural world (I don’t believe in immaterial ‘souls’ – sorry!) and because human physical, emotional, and mental lives are actually quite fragile, a fragility that sometimes leaves us vulnerable to manipulation [Coons and Weber, 2014]. I have not indicated who each person is, to preserve their privacy.

This visualisation could be used to provoke a discussion of the collection of data on children’s emotions and well-being in schools (and universities) and how it is shaped by government agendas [Williamson, 2017: 131]. This might turn into a deeply intrusive and constant governmental audit or ‘database government’ [Williamson, 2017: 73] not only of behaviour but of one’s physical, emotional, and internal mental life. [To get a feel for how affective detection systems work, have a play with this – but it will want access to your camera.] Do you have what policy-makers regard as the “right” emotional states for teaching or learning? If not, should we nudge you (or, perhaps, give you a really sharp elbow?) in what we regard as the “right” direction? Anagnostopoulos, Rutledge, and Jacobsen [2013: 1-2] tell us that policy-makers use standardized performance data to distribute money and evaluate pay for teachers. If you are not evaluated as a sufficiently chipper teacher, will you be docked pay or just not be promoted?

Block 3. Week 9. Governing: Daydreams

This data visualisation represents the time I spent daydreaming during the time I set aside to study. It doesn’t represent every daydream, focusing simply on those daydreams about what I’d do with my blog had I more (1) technical skill and (2) time. [See the postscript below for details.] Each daydream is represented by a shape that could be interpreted either as a little fluffy cloud or a thought-bubble. I’ve intentionally left this ambiguity unresolved to show how the same visualisation can be interpreted in more than one way.

This visualisation could be used to provoke a discussion about the complexity of the relationship between creativity, measurement, performance and productivity, and innovation in education. There is next to no ‘performance’ data on these daydreams: not one has been implemented. Williamson [2017: 75] observes that ‘performativity makes the question of what counts as a worthwhile activity in education in to the question of what can be counted and of what account can be given for it’. By these lights, we might wonder whether daydreaming is a worthwhile activity – especially since daydreaming ‘behaviour’ is not public, and, therefore, not observable or measurable. Neither is there yet any ‘evidence that proves [the] effectiveness’ [Williamson, 2017: 75] of daydreams. However, daydreaming is also associated with creativity, innovation, and motivation. When we daydream about what might be possible, we imagine ways the world might be, often by visualisation or simulation. Such simulations are often precursors to innovation, which is a significant driver of productivity. Ironically, daydreaming, sometimes regarded as an unproductive, unworthwhile activity because it is hard to measure, may in fact play an essential part in the process of innovation, one of the central drivers of productivity. Policy-making in education that fixates on measuring productivity may, paradoxically, have the unintended consequence of undermining it.


The daydreams represented in the visualisation include (1) a link (or something like that) from my blog to a virtual reality dinosaur visit for our class to attend together; (2) inspired by this: developing the virtual reality platform so that it afforded us experiences of smells as well as sights and sounds, both to appreciate the sheer stinkiness of dinosaurs and to appreciate smell as a modality in digital education – it’s mostly ignored in favour of the visual and auditory; (3) developing a bot – Dinobot – that helps children apply measurements (for more details about Dinobot, ask James and Huw; he was described in detail on my IDEL blog); (4) a link to our own Minecraft garden party (or perhaps a rave?); (5) a class trip to a Berliner Philharmoniker digital concert; (6) any one of these virtual tours. And of course we could then study the data collected by some of these platforms and think through the implications of each data gathering together.

Teaching with data: Wrap-up

Photo credit: The author of this blog, Impressions and Ideas, Newport-on-Tay, 2021.

My data visualisations on teaching with data represented (1) reading ‘off-piste’, (2) conversations, and (3) self-censorship. In the comments, Jeremy Knox usefully crystallised reading off-piste as productive deviation. Deviating from the set reading list can be productive and enjoyable – but it’s also risky, inefficient, and stressful. Risky: you need to make a judgement-call about what to read and another about when to stop – sometimes you get this right; sometimes you don’t. Inefficient: when you don’t get the judgement-call right, time is wasted; reading off-piste while also doing the set readings means that you fall behind schedule with writing, and although you learn a lot, there’s inevitably some material you read that you don’t use in writing. Stressful: catching up with the visualisations and writing is stressful.

Still, I’m inclined to think it’s worth it, overall. Though inefficient and stressful to read on top of everything else in the short term (from week to week), parts of Wu’s and Véliz’s books have been helpful over the longer term. Data dashboards and software purporting to track student engagement don’t show teachers this kind of reading-off-piste ‘behaviour’. And so they show a teacher neither a student’s risk-taking, short-term inefficiency, and stress, nor the longer-term benefits of reading this way; a lot is happening, but dashboards don’t register it. In contrast, a short conversation between a teacher and student might easily elicit this kind of information – and do so without requiring the gathering of data on, for example, stress levels, or mood, or ‘grit’, using the socio-emotional learning technologies discussed by Knox et al [2020: 40-41]. Overly focusing on what the dashboard suggests not only risks teachers seeing students in limited ways [Brown, 2020; Williamson et al, 2020: 358]; it risks teachers failing to see students in relevant ways too (as productive deviants, as risk-takers, and so on).

The ‘conversations’ visualisation plays with an observation that Harrison et al [2019] make about conversation as a methodology. My discussion pushed beyond that paper, suggesting that while a dashboard, for example, may help a teacher know that a conversation is happening, without further detail or context it’s going to be hard for a teacher to interpret, especially since conversations are often subtle and ambiguous. Jeremy plausibly suggested that this is why analytics systems seem to focus on the easier task of analysing behaviour rather than language (and, perhaps, I think, communication more generally, including visual communication such as the data visualisations themselves). I’m inclined to think that this starts to get closer to understanding the limitations of the use of analytics systems in education where, traditionally at least, both learning and teaching have been conversation-driven.

Furthermore, we might observe that conversations often have dynamics that are not immune to wider socio-political power-dynamics [Austin, 1962; Langton, 2009]. In the context of education, this might surface in situations where, for example, students sometimes dismiss or harass each other (and/or their teachers) verbally, or where racial and gendered slurs are the norm. In addition, as anyone who has ever grown up in a highly authoritarian socio-political world will tell you, what one might wish to communicate in conversation is not only sometimes deliberately self-censored; even when not self-censored, what is communicated often outstrips what is (literally) said [Grice, 1975]: in some contexts, what we choose not to say often gives others more of a clue to what we think and feel than what we literally say – you just need to be inculcated into the relevant conversational norms to pick up on it [Grice, 1975; Lewis, 1979]. (Poets and playwrights often use this last method of communicating what they think and feel too.) The upshot of all this is that teachers cannot let their professional judgement simply dance to the tune of analytics systems that glibly infer (as many seem to) that verbal silence (silent ‘behaviour’?), for example, straightforwardly indicates a lack of communication and, therefore, a lack of engagement. Emphasising this point reveals a further way to reinforce Gourlay’s [2020: 5] concern that ‘interaction’ and ‘participation’ have come to stand as proxies for learning itself. Learning may well require communication, but not all communication is explicit and verbal (or typed on a forum, for example); such communication won’t be picked up by analytics systems – but that doesn’t mean it’s not relevant, and even significant, to learning, and to teachers trying to interpret what’s going on in their classrooms.


I’m still deeply puzzled by the definition of critical big data literacy given by Sander [2020], not only for the workload-related reasons mentioned here but also because the focus on data literacy seems to sideline the significance of the knowledge and professional judgement teachers build up through qualification and experience. Do I really need extensive training in data literacy to understand students when I’ve already got training and qualifications to teach, a pile of experience, a sprinkling of intuition and emotional intelligence, and good old-fashioned professional judgement?

Anyway, I’ve run out of time, and that makes it seem fitting to end with this. Cheers.

Block 2. Week 8. Teaching: self-censoring

In The Platform Society, Van Dijck, Poell, and de Waal observe that ‘platformization is profoundly affecting the very idea of education as a common good on both sides of the Atlantic’ [2018: 117]. In response, I tracked the platforms I used during the time I put aside for studying for this course. I tracked the amount of time I spent on each and the number of times I self-censored on each. I spent more time on Twitter and less on Facebook, but self-censored far more on Twitter than on Facebook (where I didn’t self-censor at all). On Twitter, I can’t shake the feeling of being in the public eye, and I’m very aware that my comments are public. I also don’t understand the norms of communication there (honestly!). Result: I self-censor. A lot. I’ve tried to visualise those self-censorings here. In contrast, on Facebook, I tend to feel and behave as if I’m in a private space. I’m more talkative and so are my friends: they’re smart, gregarious, and hilariously funny. On Twitter, we’re more buttoned down. I’ve attempted to represent the elusive feelings of being in public and in private in the visualisation, even though they are subjective and therefore hard (impossible?) to quantify.

The data visualisation could be used to demonstrate how students communicate differently on platforms they perceive to be more open and public in contrast with those they perceive to be closed. It suggests that students might be more inclined to self-censor in spaces they perceive to be more open and public. The phenomenon of self-censorship poses a problem for teaching on open platforms like Twitter, since it means that teachers are less likely to hear what students think. As a result, many debates – especially on controversial topics – will probably be more stifled, and less freewheeling and spontaneous, than they might otherwise be. Teaching on Twitter also puts teachers under the public gaze even more than they already are from teaching observations, learning walks, students recording them on mobile phones, data reporting, and so on [Page, 2016]. I’ve visualised the data gathered as an eye to remind us that the Facebook (and Twitter) spaces increasingly used in education for reading groups, student societies, tutorials, and so on, are not private spaces even if we feel and behave as if they are. It’s worth remembering too that Facebook and Twitter do not serve institutional or public goods. They serve the ends of surveillance capitalism [Zuboff, 2019; Williamson et al, 2020: 360-361]. You don’t pay for the services they offer because you and your data are the service they offer to others.

On meaningful consent

I’ve been thinking on and off since early February about this conversation that Tracey, Enoch, Huw and I had on Twitter. It started when Huw tweeted —

Tracey was amazed that even 3% read the terms and conditions, and Huw doubted that the 3% actually read them — they just say they do. Enoch then dug out this website, which had succeeded in persuading 99% of survey respondents to surrender sensitive things like the naming rights to their first-born child. Anyway, this was my response to Huw’s question above:

My suggestion was that corporations like Microsoft, Apple, Facebook and so on intentionally make the terms and conditions of their apps too long and complex for most readers in order to undermine the possibility of meaningful consent. This idea — drowning people in information resulting in the bypassing of meaningful consent — came up again when I was thinking about the use of digital platforms, apps, and, especially, facial recognition software, in education. But this time the information overload worry and its connection to the undermining of consent came from a different angle. I started reading this piece

Under the pressure of the rapidly escalating data requirements of deep learning, it is researchers — not corporations this time — who gradually, and then completely, abandoned asking for people’s consent to read their faces. For example, this new study by Inioluwa Deborah Raji and Genevieve Fried shows that personal photos of private citizens (often including children) are increasingly being incorporated into systems of surveillance without their knowledge. The research practice of teaching machines to read human faces using deep-learning-based facial recognition has not only eroded our privacy and created a culture of excessive surveillance without the knowledge of those under surveillance. It has also completely disrupted our norms of consent. Yet again, meaningful consent is being bypassed, even if researchers, for example, do not intend to bypass it. Given the deluge of data, researchers are simply no longer able to keep up with the amount of consent required for this amount of data. They’re no longer able to ask for consent.

Raji contrasts earlier times in data collection when researchers exercised extreme caution about collecting, documenting, and verifying face data with how things are now, commenting that —

“Now we don’t care anymore. All of that has been abandoned…You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”

In the context of education, we’re already seeing some companies boasting that their AI can now read children’s emotions as they learn. In this case, we were lucky to have Kate Crawford (below) to set the record straight. But it’s far from obvious that such Twitter interventions will be enough to stop the installation of facial recognition software in the classroom.

And what will happen to all of the data on children that will be collected as they learn? Well, remember what Raji said above: “You can’t even pretend that you have control.” Think about that.

Sander on critical big data literacy

So in my last visualisation, on conversations, I suggested that Sander’s [2020b: 3] definition of data literacy — that is, of critical big data literacy — was too demanding. Sander argues that data literacy needs to evolve from a narrow definition in terms of data skills into something much more ambitious — critical big data literacy — which encompasses, in practice (and the ‘in practice’ bit is important):

“…an awareness, understanding and ability to critically reflect upon big data collection practices, data uses, and the possible implications that come with these practices, as well as the ability to implement this knowledge for a more empowered internet usage [see also Sander, 2020a].” [My emphasis]

It seems to me that if teachers, for example, are supposed to become data literate enough to implement this ambitious level of knowledge effectively in practice, serious adjustments will have to be made to their already overloaded timetables. Otherwise, Sander’s ambitious definition of critical big data literacy will have to be rejected as too demanding, and not feasible to implement in practice, as long as the rest of a teacher’s schedule stays the same. None of this, of course, means that the narrow definition of data literacy is acceptable either.

Block 2. Week 7. Teaching: Conversations

This week I tracked the conversations I had with people on our course. I chose to represent each conversation – with Colin, Tracey, Ben, and Jeremy – as a butterflyesque creature, playing on the idea of ‘social butterflies’ to emphasise the social dimension to both learning and teaching, and on the idea that conversation itself can be a useful methodology for eliciting someone’s experience of learning, or, indeed, of teaching [Harrison et al., 2019].

The data visualisation tells you who I’m talking with, the subject we’re talking about, and the platform (University email, Twitter). It could be used to illustrate that there’s no obvious way for a teacher to interpret these data. X talked with Y on platform P about subject S – but so what? Was it a long conversation? Was it just about something relatively minor? Was there, for example, any argy-bargy in the conversation with Ben? (Answer = no!)

In addition, conversations are often nuanced, subtle, and ambiguous (even for their participants!) in a way that makes them hard for teachers – or anyone else – to interpret without a whole lot of contextual detail. That detail, of course, is what a teacher would need to make proper sense of the conversations represented in the visualisation, and to judge properly whether or not they were helping or hindering a student’s learning. However, even in a small class of 25 students, it’s not obvious that any teacher would have time to interpret and evaluate more detailed data on every student conversation without adding considerably to their workload. This in turn suggests that the demand that teachers become more data literate, in the sense of Sander’s [2020b: 3] definition, may in fact turn out to be unreasonably demanding given the time available (see here for my further discussion). It’s far from obvious to me that such detailed data interpretation should be part of a teacher’s role in the first place.
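To make the ‘so what?’ point concrete, here is a minimal sketch in Python of the kind of record my visualisation encodes. The names come from the conversations above, but the subject and platform values are hypothetical stand-ins, not my actual data; the point is only that a who/subject/platform tuple, however you count or group it, carries none of the contextual detail a teacher would need.

```python
# Each conversation is reduced to three fields: who, what, where.
# Hypothetical records mirroring the structure of the visualisation.
conversations = [
    {"with": "Colin", "subject": "data ethics", "platform": "Twitter"},
    {"with": "Ben", "subject": "assessment", "platform": "University email"},
]

# A teacher can easily count and group records like these...
by_platform = {}
for c in conversations:
    by_platform.setdefault(c["platform"], []).append(c["with"])

# ...but nothing here records length, tone, nuance, or ambiguity:
# the contextual detail needed to judge whether a conversation
# helped or hindered learning is simply absent from the data.
print(by_platform)
```

Whatever aggregation you run over records like these, the output can only ever recombine the three fields that went in; the interpretive work still has to be done by a person with time and context.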