Block 1 Reflection

Over these past few weeks, I have attempted to quantify some aspects of my daily activity to explore the relationship between data and its ‘insights’ within the context of learning.

I started the block thinking it would be an easy task – to record data and visualize it – but I quickly realized that it would not be. Each week I faced a unique challenge of deciding what and how to record data for the particular activity. In deciding the what, metadata was often omitted, reduced, or regrouped as the week progressed, especially while drafting visualizations. Here we can begin to understand data as partial and selective according to the context of its collection and end use (Michael, 2016; Williamson, 2017). As Kitchin states, as cited by Williamson (2017), the term ‘capta’ should be used rather than ‘data’, as data are “inherently partial, selective and representative, and the distinguishing criteria used in their capture has consequences”.

The visualization of the data has an impact on what conclusions can be drawn. For example, my first visualization, of a week of walking, emulated a chat log in which messages were grouped temporally. Consequently, some of the metadata that I would have liked to include was omitted to keep the visualization easy to read. Rather than grouping messages temporally, I could have grouped them by category (e.g. code, simulations, lab manuals) or by sender. Breaking away from the chat log style, a series of pie charts or scatter plots could have been developed with more of a numerical focus. Or, all of these visualizations could have been developed together to provide a range of perspectives on the same data set, in the hope of revealing insights that might not be obtainable without the collection and analysis of the data – which is an aim of learning analytics (Knox, 2020).

Regarding the numerical focus, which seems to be the trend in the collection of student data, Bulger (2016) warns that students urged towards a quantified outcome will focus more on reaching that value than on the process of learning itself. While I agree with Bulger, I find this not to be unique to personalized learning analytics, but common to most educational practices where grades are assigned – itself a form of reducing learning to a data point.

Target values are often predetermined, and students vary in their approach to a specific task, subsequently meeting, not meeting, or surpassing these values, which can lead to false positive or negative outcomes (Bulger, 2016). Knox (2020) highlights a new form of ‘hypernudge’ platform that nudges students towards these predetermined values. In these systems of realigning students to predetermined values or trajectories, student agency is attacked (Bulger, 2016; Tsai, 2020).

Throughout the data collection process I was both the subject and the data recorder, and so had a deep understanding of the data that was collected, but unfortunately this is not the norm. As Tsai (2020) notes, the lack of full transparency around data and algorithms can lead to distrust of the analytics and can further erode student agency, preventing students from directly challenging the precision of the analytics. How data is collected, used, manipulated, and shared needs to be transparent and open to interrogation by educators and students – the ‘black box’ needs to be opened (ibid).


Bulger, M., 2016. Personalized learning: The conversations we’re not having. Data and Society, 22(1), pp.1-29.

Knox, J., Williamson, B. and Bayne, S., 2020. Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), pp.31-45.

Michael, M. and Lupton, D., 2016. Toward a manifesto for the ‘public understanding of big data’. Public Understanding of Science, 25(1), pp.104-116.

Tsai, Y.S., Perrotta, C. and Gašević, D., 2020. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45(4), pp.554-567.

Williamson, B., 2017. Big data in education: The digital future of learning, policy and practice. Sage.

1 thought on “Block 1 Reflection”

  1. ‘Each week I faced a unique challenge of what and how to record data for the particular activity.’

    This might sound like a rather obvious observation, but I think it is really important. When we look at data visualisations I think we tend to assume we’re looking at ‘raw facts’, and forget that decisions have been made at every stage. I think you sum this up well here, drawing on useful literature.

    ‘While I agree with Bulger, I find this to not be unique to personalized learning analytics, but rather to most educational practices where grades are assigned, which is a form of reducing learning to a data point.’

    Good point here too. I think one of the reasons data often ‘fits in’ to education so well is that measurement and quantification are already quite engrained in many educational approaches. In this sense, ‘big data’ aren’t all that ‘new’.

    ‘the ‘black box’ needs to be opened (ibid)’

    I think this is a useful conclusion – although, I do think a certain amount of knowledge is needed to really understand what can be seen ‘inside the box’, and that requires education of a particular sort.
