Block 1 Summary

Learning and data have a complex relationship. Learning involves a process of questioning and acquiring knowledge. Answers to questions, or knowledge itself, can be received from a “learned” person (the teacher). Yet not all questions can be answered by existing knowledge, and unanswered questions drive observation and experimentation, which generate data that can give rise to new knowledge that supports learning in return.

While teachers’ use of data (e.g. field notes, journals, assessments, students’ records) to gain insights about their students has a long history, with learning being increasingly carried out online, we now see institutionalised use of data analytics to monitor students’ behaviours within digital learning environments. This development fuels the broader agenda to optimise learning for individual students (Tsai, Perrotta & Gasevic, 2020; Eynon, 2015).

In Block 1 of CDE, I started my data collection journey by looking at the log data of my online activities from my Chrome browser history, WhatsApp messages, Twitter history and email records, which are similar in nature to the log data in learning management systems. This exercise confirmed my understanding that reliance on digital footprints within learning management systems fuels a behaviourist approach to education (Knox, Williamson & Bayne, 2019). For instance, a less-than-ideal pattern of participation in online learning activities could be treated as grounds for viewing behaviour modification as a way to optimise learning.

In the latter part of Block 1, I started collecting data about my attention and thought process while completing the prescribed reading. In collecting and visualising these data, I had to assign proxies or categories for a visualisation to be possible; I thus experienced first-hand how data collection is a process of taking snapshots or creating proxies that represent a phenomenon. This provided proof-of-principle for a saying I had previously learnt: “‘raw data’ is an oxymoron” (Gitelman, 2013). In addition, employing a “field-work” approach to data collection modified the learning activity itself, making me acutely aware of being tracked. The act of recording my thought process while reading an article may even have converted that learning activity into a “pseudo-mindfulness exercise” (focusing on, accepting and recording all my thoughts and distractions). Overall, the data on my distraction patterns and thought process supported my metacognition, giving me more insight into my learning (Eynon, 2015).

Data-driven personalised learning requires pre-defined models of learning behaviour (Bulger, 2016) in order to support decisions on instructional modifications and nudges for students who, by that definition, are regarded as less than ideal. Unavoidably, this approach favours a few learning behaviours while shunning others. I would also argue that, much as the master-apprentice model of learning has been idealised since aeons ago (Friesen, 2020), students have always been benchmarked against ideal models, including their masters or the more diligent/capable/smart/hardworking/polite kid in the same neighbourhood, long before the advent of data-driven personalised learning. Perhaps this is precisely the sentiment that charted the development of personalised learning.

(Word count: 508)

References

Bulger, M. (2016). Personalized learning: The conversations we’re not having. Data and Society, 22(1), 1-29.

Eynon, R. (2015). The quantified self for learning: Critical questions for education. Learning, Media and Technology, 40(4), 407-411.

Friesen, N. (2020). The technological imaginary in education: Myth and enlightenment in “Personalized Learning”. In Stocchetti, M. (Ed.), The Digital Age and its Discontents. Helsinki University Press.

Gitelman, L. (Ed.). (2013). Raw data is an oxymoron. MIT Press.

Knox, J., Williamson, B., & Bayne, S. (2019). Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), 31-45.

Tsai, Y. S., Perrotta, C., & Gašević, D. (2020). Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45(4), 554-567.

Week 5 Drawing

[Figure: week 5 drawing]

For this week’s drawing, I decided to collect qualitative data about my thoughts while reading the paper by Eynon (2015). I employed a “thinking-out-loud” approach, recording the thoughts I had along with the time points at which I had them. After the data collection, I developed five categories to describe my thoughts and plotted them as a simple one-dimensional graph. Although this drawing may look similar to week 4’s drawing, which aimed to visualise the pattern of my distractions while reading, the data collection method was very different.

In week 4, I employed simple logging, categorising each distraction immediately at the recording stage, resulting in data like “11:40 – non-work, 11:42 – data collection, 11:45 – data collection, 11:55 – work”.

For week 5, however, the data collection was descriptive, leading to data like “5:41 – am I sabotaging my reading by making my data collection this way? amused by myself; 5:44 – any way I can experiment with my data collection exercise?”. The categories were developed only after reading through the data once.
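To make the difference concrete, here is a minimal Python sketch of how a week-4-style log could be parsed and rendered as a one-dimensional timeline. The log lines are the examples above; the parsing and plotting choices are my own illustration, not how the hand drawing was actually produced.

```python
# A minimal sketch (assumes matplotlib is available; the times and category
# names are the illustrative examples from the post, not my full dataset).
from datetime import datetime
import matplotlib.pyplot as plt

raw_log = """11:40 – non-work
11:42 – data collection
11:45 – data collection
11:55 – work"""

def parse_log(text):
    """Split each '<HH:MM> – <category>' line into a (time, category) pair."""
    entries = []
    for line in text.strip().splitlines():
        stamp, category = (part.strip() for part in line.split("–", 1))
        entries.append((datetime.strptime(stamp, "%H:%M"), category))
    return entries

entries = parse_log(raw_log)
categories = sorted({c for _, c in entries})
colours = {c: f"C{i}" for i, c in enumerate(categories)}  # one colour each

fig, ax = plt.subplots(figsize=(8, 1.5))
for t, c in entries:
    ax.plot(t, 0, "o", color=colours[c], label=c)
ax.set_yticks([])  # one-dimensional: only the time axis carries meaning
handles, labels = ax.get_legend_handles_labels()
unique = dict(zip(labels, handles))  # de-duplicate repeated category labels
ax.legend(unique.values(), unique.keys(), loc="upper right", frameon=False)
plt.show()
```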

Week 4 Drawing

[Figure: week 4 drawing]

For the past week, my data collection focused on the big picture of my learning, providing additional context such as work and personal life. This week, I decided to go more microscopic and manually record the distractions I experienced while reading the Bulger (2016) paper and the Tsai, Perrotta & Gasevic (2020) paper.

The way I recorded distractions was similar to keeping a time sheet: I logged the time I started reading the paper concerned and, while reading, recorded the timepoints whenever I perceived I got distracted. Whenever a distraction caused me to consciously put down the paper and do something else, I logged the times at which I put down and picked up the paper, and also recorded some qualitative details about the distraction (or rather, “derailment”). The distractions were categorised based on their nature (work-related, non-work-related, etc.).
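For clarity, the record structure this scheme implies might look like the sketch below; the field names are hypothetical labels of my own, not anything from the original time sheet.

```python
# A hypothetical record structure for the distraction log described above;
# field names are my own invention for illustration.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Distraction:
    noticed_at: datetime           # when I perceived I got distracted
    put_down: Optional[datetime]   # when I put the paper down, if I did
    picked_up: Optional[datetime]  # when I picked the paper back up
    category: str                  # e.g. "work-related", "non-work-related"
    note: str = ""                 # qualitative details about the "derailment"
```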

I felt my scientist self also prompted me to experiment a bit in this data collection exercise. I have tried multiple times to get used to reading journal articles on electronic devices, as opposed to printing them out; I tried this during IDEL’s week 1 but failed miserably. The “experimental conditions” were as follows:

Bulger (2016): reading on OneNote with my hybrid laptop, sitting on my bed

Tsai, Perrotta & Gasevic (2020): reading on printed copy, sitting on my bed

However, there were also limitations to this experiment: (1) Bulger (2016) was significantly longer than Tsai, Perrotta & Gasevic (2020); (2) I started reading Tsai, Perrotta & Gasevic (2020) much later in the day than Bulger (2016), and on a different day.

The design rationale of the visualisation is based on the literal meaning of “distraction”: the phenomenon of my attention being taken away from what I was trying to do (Oxford Learner’s Dictionaries). Hence I used a time axis to symbolise the task at hand, and visualised each distraction as a curly line that points out from the time axis (i.e. my thought leaving the task at hand) and gets dragged back towards the axis (i.e. regaining attention on the task at hand). I also used icons as proxies for qualitative details about my derailments.
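The same design idea could also be reproduced programmatically. The sketch below is a rough approximation under my own assumptions (invented interval values, smooth arcs standing in for the hand-drawn curly lines), not the actual drawing.

```python
# A rough programmatic take on the design: the horizontal axis is the task,
# each derailment an arc that leaves the axis and returns where attention
# resumed. The intervals and labels below are invented examples.
from datetime import datetime
import matplotlib.pyplot as plt

def t(s):
    return datetime.strptime(s, "%H:%M")

# (put-down time, pick-up time, category) – illustrative values only
derailments = [("11:42", "11:45", "data collection"),
               ("11:48", "11:55", "non-work")]

fig, ax = plt.subplots(figsize=(8, 2))
ax.plot([t("11:40"), t("12:00")], [0, 0], color="black", lw=1.5)  # the task

for left, right, label in derailments:
    # an arc bulging above the axis: attention leaves, then is dragged back
    ax.annotate("", xy=(t(right), 0), xytext=(t(left), 0),
                arrowprops=dict(arrowstyle="-",
                                connectionstyle="arc3,rad=0.6"))
    mid = t(left) + (t(right) - t(left)) / 2
    ax.text(mid, 0.3, label, ha="center", fontsize=8)

ax.set_yticks([])
ax.set_ylim(-0.2, 0.7)
plt.show()
```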

A common pattern is that I tend to have a cluster of distractions at the beginning of a task. The data also show that I experienced more derailments while reading Bulger (2016), including making a note of a reference I located for my manuscript and deciding to send Bulger (2016) to my work colleagues.

In contrast, I experienced fewer derailments while reading Tsai, Perrotta & Gasevic (2020). This was likely because I was conscious of the data being recorded and worked harder to block out distracting thoughts. Towards the end, however, I dozed off, which may suggest that reading at 11pm is counterproductive.

Overall, I have used this week’s data collection and visualisation exercise to reflect on my learning behaviour and attempt to derive insights from it. It taught me to expect a cluster of distractions at the start of a task, and hence that additional effort is needed at the beginning to make sure the task can be completed.

Week 3 Data Drawing

[Figure: data drawing for week 3]

This is the data drawing I produced for week 3, visualising the timing of my activities within the Critical Data in Education (CDE) course, in the context of work and other commitments. The data were collected in multiple ways: Google Chrome history, Twitter history, Signal app history and recall. The drawing paints a picture of how I fit the activities of CDE on top of my existing commitments (hence the placement of CDE activities at the top of the timelines) and around significant transitions in my life (starting a new relationship!).

This drawing provides a broad overview of my overall wellbeing. It seems to show that I managed some balance between my work, study, relationship and family. It may suggest I am doing okay in terms of social wellbeing (e.g. staying in touch with family and friends, and spending a substantial amount of time communicating with my new girlfriend).

I felt this type of data collection exercise (i.e. time tracking) would be a good exercise for students’ wellbeing, providing students not only with information on how much time they have spent studying, but also on whether they have spent a decent amount of time maintaining their social wellbeing.