End-of-Block1 Reflections

Collecting data about my own learning and creating hand-drawn visualizations turned out to be a beneficial exercise. It demonstrated that I was only able to capture behaviour-related signals that were easy to measure, such as time spent on the task, the tools and people involved, or the number and kinds of messages. However, those ‘indicators of learning’ are only the tip of the iceberg, whilst the complex cognitive, social and emotional processes that are part-and-parcel of knowledge construction remain unnoticed.

Although hand-drawn data visualization seems to have fewer constraints and biases than the learning analytics presented by conventional digital tools, it became obvious that any data representation is subjective, partial and devoid of context. No matter how much data you capture, there will always be more. In other words, data are always a reduction of a complex reality, in which we intentionally leave out whatever we consider less important.

However, when students are responsible for data-related choices (collecting, visualizing, interpreting), as in our case, there are two apparent benefits. First, in this setting, learning analytics can become truly empowering and has the potential to enhance students’ agency, which, as Tsai et al. (2020) describe in their research, is not always the case with data-intensive technology in the world of learning. Secondly, managing one’s own data removes the acute problem of ‘dataveillance’, also described in the above-mentioned article, from the educational agenda.

To conceptualize learning from the data-driven perspective, it was essential to realize that the hype around learning analytics is connected to the utopian imagery of personalized learning as able ‘to fix the outmoded management and practices of educational institutions at various levels’ (Friesen, p. 142). Interestingly, this vision has lived in educational discourse for centuries. With dreams of ‘Aristotle for every Alexander’ (Suppes), the ‘2-sigma benefit’ (Bloom) and ‘following one’s own bent’ (Asimov) widely exploited by tech companies, few stakeholders tend to question the benefits of personalized learning. However, as Bulger (2016) reveals, there is very little research into ‘what personalized learning systems actually offer and whether they improve the learning experiences and outcomes for students’ (p. 3).

Conceiving of learning in the algorithmic age, it is essential to keep in mind that learning is no longer only about living beings. In their work on machine behaviourism, Knox et al. (2020) stress that machine learning systems ‘appear to work against notions of student autonomy and participation, seeking to intervene in educational conduct and shaping learner behaviour towards predefined aims’ (p. 32). As AI technologies become more common in education, they transform the learner from the central figure and consumer of educational services described by Biesta (2013) into, as Knox et al. put it, ‘prized products, from which valuable behaviours can be extracted and consumed by ever-improving algorithmic systems’ (p. 35).

Overall, the readings and my attempts to quantify learning in this block have deepened my understanding of how data-intensive technologies have been modifying the concept of learning. Even though the big promises of personalized learning are inspired by age-old ideas, they can never be fully realized, since education-related questions are part of a bigger agenda concerned with the distribution of power, success and the meaning of life.


Biesta, G. 2013. “Giving Teaching Back to Education: Responding to the Disappearance of the Teacher.” Phenomenology & Practice 6(2): 35–49.

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available at: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Friesen, N. 2019. “The Technological Imaginary in Education, or: Myth and Enlightenment in ‘Personalised Learning’.” In M. Stocchetti (ed.), The Digital Age and Its Discontents. University of Helsinki Press.

Knox, J., Williamson, B. & Bayne, S. 2020. “Machine Behaviourism: Future Visions of ‘Learnification’ and ‘Datafication’ across Humans and Digital Technologies.” Learning, Media and Technology 45(1): 31–45.

Tsai, Y.-S., Perrotta, C. & Gašević, D. 2020. “Empowering Learners with Personalised Learning Approaches? Agency, Equity and Transparency in the Context of Learning Analytics.” Assessment & Evaluation in Higher Education 45(4): 554–567. DOI: 10.1080/02602938.2019.1676396

Week 5: One-to-one dialogues

This theme was inspired by Norm Friesen’s article, in which he emphasizes the central role of dialogue in the imaginary of educational technology. The author traces the long history of the dialogical method of teaching and links it to the promises of personalized learning. As a student, teacher and family member, I decided to analyze how much one-to-one communication I have in my work-from-home life, and whether it has any educational or enlightening impact on me.

So I tracked all my one-to-one conversations for four working days (Tue–Fri) and focused on the following: 1) who I’m talking to; 2) how we do this (media); 3) how long we talk; 4) whether I’m learning or developing in the process of our talk.

At the start, I also tried to note down the topics we covered, but it turned out to be next to impossible to keep track of them, as they change quickly and are sometimes even difficult to define. For instance, when talking about education at work (I’m an educator), are we talking about ‘work’ or ‘education’? If I failed at this, a speech recognition system would do even worse. As Friesen concludes, dialogue is ‘a ubiquitous yet irreducible experience…’ that ‘cannot be reduced to the requirements and use cases of engineering nor the certainties and probabilistic measurements of the natural sciences’ (p. 155).

As a result of my mini-research, I haven’t identified any ‘enlightening conversations’ in my experience this week. However, I completed training at work, read for university and reflected on new ideas. I believe that even without one-to-one tutoring I did quite well as a learner this week, but if a system had tracked my dialogical activities, it might not have arrived at the same conclusion, as it tends to count ‘what is easy to be counted’ (Selwyn et al., p. 534) and lingers on the level of behaviour and words.

Tracking Personal Data

I’ve had smart scales at home for a year now, but I had never tried their ‘smart functions’ before I was tasked with tracking some personal data last week. The app turned out to be really user-friendly. I was surprised to discover how much it ‘knows’ about me and my family.
Overall, I was pleased to find out that I’m right in the middle of their normal band. However, I have never managed to understand how they measure normality and biometric indicators like protein, body fat, etc. Judging by the phrase ‘you are 45% lighter than other users’, it seems they compute the arithmetic average of user weights, and the crowd defines normality. I hope the manufacturer also draws on some healthcare metrics, but I failed to find any relevant details on their site either. Does it mean that if the majority of users of my age and height are overweight, the norm will shift? From my experience, though, I can say that people who use health trackers of all sorts are usually health-conscious individuals, for whom the trackers, like learning journals for autonomous students, are intended for celebrating new achievements. In the same vein, meeting an overweight person in a fitness club is a rare event.
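To illustrate my worry about a crowd-defined norm, here is a toy sketch. Both the figures and the averaging method are my own guesses, not the vendor’s documented approach, which I could not find:

```python
# Toy illustration: if 'normal' is just the average of user weights,
# the norm drifts with the user base. All figures are invented.
def crowd_norm(weights):
    """A 'norm' defined purely as the arithmetic average of users."""
    return sum(weights) / len(weights)

health_conscious_users = [58, 62, 65, 70, 72]                 # kg
mixed_population_users = [58, 62, 65, 70, 72, 95, 102, 110]   # kg

print(crowd_norm(health_conscious_users))   # 65.4
print(crowd_norm(mixed_population_users))   # 79.25
```

If heavier users join, the very same weight suddenly sits on the other side of ‘normal’, which is exactly why a healthcare-based benchmark would be more trustworthy than a crowd average.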
Thus, I have a feeling that smart scales promote thinness rather than health. Look at the girl who advertises the scales on their site (see the screenshot below)! And what about the visualization of my weight changes? I put on 200 g, and the diagram shows a huge vertical jump. Weight gain is marked with a red arrow, weight loss with a green one. Since I view those fluctuations critically, the visualization doesn’t make any difference to me. But maybe it will for a teenage girl, for whom such emphases can become a source of lifelong anxiety.
Nevertheless, the motivating function of the smart scales shouldn’t go unnoticed. Last week, thanks to this data tracking experiment of mine, my husband discovered that he is 10 kilos overweight. He never paid any attention to my delicate hints, but was quick to trust a piece of plastic. As a result, he has reconsidered his diet, which I’m absolutely happy about.
It was curious to find out that one of our peers proved in his thesis that smart scales can leak personal data. Besides, it’s essential to point out that their use of variables is limited and largely black-boxed, so using smart scales as the only source of data to inform your decisions can be risky to your mental and physical health.

Week 4: Feedback Comments

I decided to analyze my feedback comments to peers during the second week and to try to answer the following questions:
1) Does this activity enhance learning?
2) Besides the number of comments or words, what else can be quantified in a written message?
3) Is tracking comments a good way of measuring learning? Can technology perform this well?
From the perspective of contemporary learning theories, like constructivism and connectivism, writing feedback comments can be part-and-parcel of learning, since it involves participation and connecting to ‘more knowledgeable others’, interpretation and meaning-making in the process of reflection, and, in the ideal world, leads to creating a community of learners. I also find contributing to my peers’ blogs beneficial for my learning, since it enables me to extend my understanding of the subject, revise some ideas, and gain new insights and inspiration.
In digital educational formats, like MOOCs, automated tracking of feedback comments is one of the few ways to measure students’ engagement with the learning content, which has obvious limitations in terms of evaluating the quality and relevance of messages. In my self-reporting, I tried to address this by depicting meaning and implying that 100% of my comments are relevant. Is technology able to do this today? Maybe not at this stage. And even if it were, would these data enhance learning? Perhaps comment analysis is not that helpful for students, but it can be insightful for teachers in the form of signals that need investigation: abuse, the sudden disappearance of all comments, ignoring particular learners, too many identical comments (maybe a bot), etc.
In conclusion, tracking comments may be more informative than tracking the time spent on the platform in terms of measuring learners’ engagement, but it is still a ‘trade-off between the reliability of the data that can be collected and the richness of what can be measured’ (Eynon, p. 408), and should never be used as the only criterion for assessment or feedback.

Week 3: Distractogram

For my first data visualization, I decided to analyze the process of reading for this course, as it makes up the major part of learning for postgraduate online students like us. In particular, I focused on what distracts me from reading and how often. It is also worth noting that I collected data for around two hours a day from Tuesday to Saturday. In my distractogram, each line represents an hour, and the fifth line represents 30 minutes. Overall, it covers 6.5 hours of reading. Importantly, I was reading during my working day, which is rather flexible but still fits into the interval from 10 am to 7 pm, when most people are at work. The literature that I covered:

Lines 1–2: Tsai, Y.-S., Perrotta, C. & Gašević, D. 2020. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics

Lines 2–4: Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper.

Lines 5–6: Knox, J., Williamson, B. & Bayne, S. 2020. ‘Machine behaviourism: Future visions of “learnification” and “datafication” across humans and digital technologies’

The main conclusion I can draw here is that I did get distracted quite often during my reading. To be exact, my eyes left the page 93 times during 390 minutes of reading, which means I got distracted roughly every four minutes. Looks disastrous, doesn’t it? However, I didn’t track the exact time intervals between distractions, as it was challenging to organize and would have made the process feel even more unnatural. Sometimes I thought that, perhaps, I got distracted even more often than usual because I knew I was being ‘observed’.
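As a quick sanity check on that figure, the arithmetic can be sketched in a few lines of Python; this is purely illustrative, using only the numbers reported above:

```python
# Back-of-the-envelope check of the distraction rate reported above.
reading_minutes = 6.5 * 60   # 6.5 hours of tracked reading = 390 minutes
distractions = 93            # times my eyes left the page

minutes_per_distraction = reading_minutes / distractions
print(round(minutes_per_distraction, 1))   # 4.2 -> roughly one every 4 minutes
```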

If I were to classify the data, I would point out four categories:

* Useful distractors that contribute to learning (consulting a dictionary or additional reading on the topic) – 33%

* Predictable distractors (pop-up windows about new messages in Outlook, Skype, Teams, and phone calls) – 49%

* Unpredictable distractors (food delivery, a table assembler) – 3%

* Idle distractors (snacks, starting a robot vacuum cleaner, a bit of exercise) – 15%

There is another example that I could have added to the idle category, but it is really difficult to track, which is why I didn’t: I’d call it daydreaming. It’s when you are reading about machine behaviourism and then catch yourself thinking about your daughter’s upcoming birthday. Would an algorithm be able to spot it? I doubt it.

According to Gašević, the main purpose of datafication is ‘understanding and optimizing learning and the environments in which it occurs’ (p. 528). With the same aim, there are many programs today that track students’ engagement with educational content by measuring time spent on the page or recording whether students change pages or surf the Net while performing a task on a computer. Besides, some apps are able to read people’s faces and make assumptions about learners’ moods and feelings. Looking at my data, I tried to predict what a system like that could conclude about me as a learner:

  1. I have a limited attention span (around 4 mins)
  2. My English needs improving because I turn to the dictionary too often
  3. I multitask, so the reading is not very effective
  4. I’m not interested in the content, or it’s too difficult for me

Based on this information, the system would probably claim that my learning this week hasn’t been very effective, and would recommend that I take a course in English and stop multitasking. I know that this message would make me feel discouraged.

How can the algorithm know what’s best for me? In my defense, I would say that I’m pretty comfortable with multitasking. Being involved in many projects at work, I’m used to switching from one activity to another without additional stress. In terms of English, I have a habit of collecting good collocations and new phrases to expand my vocabulary. Indeed, reading for university during a working day may not be the best idea, but I do it because I know from long experience that my brain is much more receptive to new ideas during the daytime. Hence, for me, it’s better to put off some routine work tasks till the evening and prioritize reading. Would the technology listen to my self-justification? Would it even care? Or would it label me a poor student or an irresponsible employee and annoy me with its one-size-fits-all guidance?

Moreover, there is only a slight chance that the machine would find out much about my learning this week from these data. Have I grasped and internalized the new concepts? Will I be able to use the new knowledge to understand the world around me better? These questions remain open.

At the same time, I would be happy to discuss these data with a tutor, for instance, because that would allow personalization in the real sense of the word. Overall, I like the idea expressed by Tsai et al. that ‘learning analytics needs to leverage rather than replace human contact’ (p. 564). However, as we all know, datafication often pursues the opposite aim.

I must confess that I feel a bit concerned about sharing these data about myself, even though they don’t seem too personal. Having said that, I am not quite sure that the instructional benefit I could receive from this learning analytics would outweigh the unpleasant emotions I experienced while collecting and opening up these data.

I would like to finish with the conclusion of Tsai et al., who argue that ‘it is crucially important to acknowledge and address the conflicting beliefs about data-based personalization, surveillance and agency when introducing learning analytics as an equitable solution to educational challenges’ (p. 565).

A try-out data visualization

Hello, everyone! A managerial position in a global IT company involves working with a plethora of reports, learning analytics and stats of all sorts on a daily basis. Hence, looking into numbers, discussing them with my colleagues and making decisions based on what I see is part-and-parcel of my daily routine. That’s why, for my ‘practice visualization’, I opted for tracking the instances when I interact with in-company analytics. Have I discovered anything revolutionary? Not really. The data show that analyzing numbers is indeed a regular activity for me. Most often I read reports or discuss the insights with colleagues. Perhaps last week was more data-intensive than usual, since I’m getting ready to promote some of my team members and had to check many reports.

The topic of my first visualization may also explain why this course is of particular interest to me. I work in an environment where data are considered the supreme good in all spheres, and education is no exception. Thus, I feel it is crucial to develop a critical understanding of what data really are and what benefits and risks they offer. Working with teachers and students, I would really like to understand how I could empower them with the data at our disposal rather than use the analytics as a stick, something that Selwyn and Gašević were talking about. As a learner, a teacher and a manager, I’m looking forward to considering datafication from all three standpoints, and it looks like the upcoming blocks will give us an excellent chance to do that.