Over the last four weeks, I have tracked:
Each data collection has been bound by a sense of time, in that each visualisation had a foundation in time (a time interval, or a day of the week). The visualisations have been structured, have used common symbols, and have included the elapsed time interval to show progression through the week.
From a personal perspective, the data that I collected reinforced what I already knew about myself… and along the same lines, the visualisations highlighted aspects of my personality. My biggest observation is that completing these visualisations became a moment of pause and self-reflection. In those moments of reflection, the data points in context became more personal than I could have imagined.
This reflection, especially around my own personality shining through in the visualisation, comes as a direct result of sitting with the data and producing the visualisation by hand. The phone usage data that I analysed through my iPhone dashboard did not have the same personality. It was simplified down to standard charts and graphs. The data could have been anyone’s. In a way, I was anonymous when looking at that dashboard, even though one could argue that a phone is now one of life’s most personal possessions.
Over the course of time (and the introduction of technology), we have been desensitised by continuous streams of statistics and data. This desensitisation means we likely pause to think ‘huh, that’s interesting/sad/exciting/etc.’ for only a split second, but rarely take in the true meaning or impact of the data.
COVID-19 is the harshest daily reminder of this. Since January 2020, we’ve had a continuous stream of data points related to COVID-19. At first, fear increased alongside the case count and number of deaths. Everywhere you looked in the media, there was a story of someone’s daughter, son, wife, husband, parent, grandparent, teacher, or colleague. Today, 13 months later, we see far fewer of these personal stories, and we have become desensitised because we had to find a way to cope and continue.
The readings for Block 1 have highlighted that when it comes to the relationship between data and learning for students, it’s vital to collect and provide data that will demonstrate learning. Our discussions on Twitter demonstrated that many questions around learning analytics force us back to the question – “how do we define ‘learning’?” We’re reminded that the definition of learning is multi-faceted, and that any data collection intimately linked to it should serve a purpose. As highlighted by Bulger (2016), the collection and use of student data may infringe upon the student’s right to privacy. We also have to consider that a data point may suggest that a student has grasped a concept, but only when read in context.
For example, a student may have answered a multiple-choice question correctly, but without further evidence, how do we know whether the student knew the answer or simply guessed? Submitting calculations for a maths question is easy proof, but proving understanding of the theme of a novel, or emotional intelligence, is not an easy task.
If a student is using data-driven technology and happens to guess right several times in a row, this may lead them down a path that was not intended. We have to ask whether the technology includes the ability to course-correct, and if so, how easily (or quickly) it can do so. Which data points in this scenario would help identify the need to course-correct? These would not necessarily be the same data points as the ones collected to prove understanding or learning of a concept.
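The guessing concern above can be made concrete with a little arithmetic: on a four-option multiple-choice question, the chance of guessing correctly n times in a row is 0.25^n, so a short streak is weak evidence of mastery. A minimal sketch of this reasoning (the function names and the threshold are illustrative assumptions, not taken from any real learning-analytics system):

```python
def guess_probability(streak: int, options: int = 4) -> float:
    """Probability that a streak of correct answers on
    multiple-choice questions could be pure guessing."""
    return (1 / options) ** streak

def likely_mastery(streak: int, options: int = 4,
                   threshold: float = 0.01) -> bool:
    """Hypothetical rule: treat a streak as evidence of mastery
    only when guessing is very unlikely (threshold is an assumption)."""
    return guess_probability(streak, options) < threshold

print(guess_probability(3))   # 0.015625 — a 3-streak could still be luck
print(likely_mastery(3))      # False
print(likely_mastery(4))      # True: 0.25**4 ≈ 0.0039 < 0.01
```

The point is not the specific threshold, but that "proof of learning" data and "need to course-correct" data answer different questions and may require different signals.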
Bulger, M. (2016). Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf