For my first data visualization, I decided to analyze the process of reading for this course, as it makes up a major part of learning for postgraduate online students like us. In particular, I focused on what distracts me from reading and how often. It is also worth noting that I collected data for around two hours a day from Tuesday to Saturday. In my distractogram, each line represents an hour, except the fifth line, which represents 30 minutes. Overall, it covers 6.5 hours of reading. Importantly, I was reading during my working day, which is rather flexible but still fits into the 10 am to 7 pm interval when most people are at work. The literature that I covered:
Lines 1-2: Tsai, Y.-S., Perrotta, C. & Gašević, D. 2020. 'Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics'.
Lines 2-4: Bulger, M. 2016. 'Personalized Learning: The Conversations We're Not Having'. Data & Society working paper.
Lines 5-6: Knox, J., Williamson, B. & Bayne, S. 2019. 'Machine behaviourism: Future visions of "learnification" and "datafication" across humans and digital technologies'.
The main conclusion I can draw here is that I did get distracted quite often during my reading. To be exact, my eyes left the page 93 times during 390 minutes of reading, which means I got distracted roughly once every four minutes. Looks disastrous, doesn't it? However, I didn't track the exact intervals between distractions, as that would have been hard to organize and would have made the process feel even more unnatural. Sometimes I suspected that I was getting distracted even more often than usual precisely because I knew I was being 'observed'.
If I were to classify the data, I would point out four categories:
* Useful distractors that contribute to learning (consulting a dictionary or additional reading on the topic) – 33%
* Predictable distractors (pop-up notifications about new messages in Outlook, Skype and Teams, and phone calls) – 49%
* Unpredictable distractors (food delivery, a table assembler) – 3%
* Idle distractors (snacks, starting a robot vacuum cleaner, a bit of exercise) – 15%
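Out of curiosity, the arithmetic behind these figures can be sketched in a few lines of Python. Note that the per-category counts below are my own back-calculated approximations (only the 93-event total over 390 minutes comes directly from my log), so the printed shares match the reported split only roughly:

```python
# Sketch of the distraction arithmetic. The per-category counts are
# approximations reconstructed from the reported percentages; only the
# total (93 events over 390 minutes) comes directly from my log.
TOTAL_MINUTES = 390

distractions = {
    "useful": 31,        # dictionary look-ups, extra reading
    "predictable": 46,   # Outlook/Skype/Teams pop-ups, phone calls
    "unpredictable": 3,  # food delivery, table assembler
    "idle": 13,          # snacks, robot vacuum, a bit of exercise
}

total = sum(distractions.values())  # 93 events in the log
minutes_per_distraction = TOTAL_MINUTES / total

print(f"One distraction every {minutes_per_distraction:.1f} minutes")
for name, count in distractions.items():
    # Shares come out close to the reported 33/49/3/15 split.
    print(f"{name}: {100 * count / total:.0f}%")
```

Even this toy tally makes the headline figure concrete: 390 / 93 is about 4.2, hence "distracted roughly every four minutes".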
There is one more distractor I could have added to the idle category, but it is really difficult to track, which is why I didn't: I'd call it daydreaming. It's when you are reading about machine behaviourism and then catch yourself thinking about your daughter's upcoming birthday. Would an algorithm be able to spot that? I doubt it.
According to Gašević, the main purpose of datafication is 'understanding and optimizing learning and the environments in which it occurs' (p. 528). With the same aim, there are now many programs that track students' engagement with educational content by measuring time spent on a page or recording whether students switch pages or browse the web while working on a task. Besides, some apps can read people's faces and make assumptions about learners' moods and feelings. Looking at my data, I tried to predict what a system like that could conclude about me as a learner:
- I have a limited attention span (around 4 mins)
- My English needs improving because I turn to the dictionary too often
- I multitask, so the reading is not very effective
- I’m not interested in the content, or it is too difficult for me
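For illustration only, the kind of crude rule-based profiling I imagine such a system performing might look like the sketch below. Every function name, threshold and rule here is my own invention, not the logic of any real analytics product:

```python
# A deliberately crude, hypothetical rule engine of the sort I imagine
# an analytics system applying to my reading log. All thresholds and
# labels are invented for illustration.
def profile_learner(minutes_read, distraction_events, dictionary_share):
    """Return the blunt conclusions a naive system might draw."""
    conclusions = []
    avg_interval = minutes_read / distraction_events
    if avg_interval < 5:
        conclusions.append("limited attention span")
    if dictionary_share > 0.2:
        conclusions.append("language support recommended")
    if distraction_events > 50:
        conclusions.append("multitasking: reading may be ineffective")
    # Note what such rules *cannot* see: interest, mood, or whether
    # anything was actually understood.
    return conclusions

print(profile_learner(390, 93, 0.33))
```

Fed my week's numbers, this toy profiler fires all three rules, which is exactly the discouraging verdict I sketched above.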
Based on this information, the system would probably conclude that my learning this week hasn’t been very effective, and would likely recommend that I take an English course and stop multitasking. I know that this message would make me feel discouraged.
How can the algorithm know what’s best for me? In my defense, I would say that I’m pretty comfortable with multitasking. Being involved in many projects at work, I’m used to switching from one activity to another without additional stress. As for English, I have a habit of collecting good collocations and new phrases to expand my vocabulary. Indeed, reading for university during a working day may not be the best idea, but I do it because I know from long experience that my brain is much more receptive to new ideas during the daytime. Hence, for me, it’s better to put off some routine work tasks until the evening and prioritize reading. Would the technology listen to my self-justification? Would it even care? Or would it label me a poor student or an irresponsible employee and annoy me with its one-size-fits-all guidance?
Moreover, there is only a slight chance that the machine would find out much about my learning this week from these data. Have I grasped and internalized the new concepts? Will I be able to use the new knowledge to understand the world around me better? These questions remain open.
At the same time, I would be happy to discuss these data with a tutor, for instance, because that would allow personalization in the real sense of the word. Overall, I like the idea expressed by Tsai et al. that ‘learning analytics needs to leverage rather than replace human contact’ (p. 564). However, as we all know, datafication often pursues the opposite aim.
I must confess that I feel a bit concerned about sharing these data about myself, even though they don’t seem too personal. Having said that, I am not quite sure that the instructional benefit I could receive from this kind of learning analytics would outweigh the unpleasant emotions I experienced while collecting and opening up these data.
I would like to finish with the conclusion of Tsai et al., who argue that ‘it is crucially important to acknowledge and address the conflicting beliefs about data-based personalization, surveillance and agency when introducing learning analytics as an equitable solution to educational challenges’ (p. 565).