Block 1: Week 5 Summary
Learning
Aware that I would only be able to collect a limited amount of self-reported data, and that it would be made public without much context, I expected this to limit its application. There was a temptation, however, to think that this limitation could be overcome by collecting more data, of greater depth; this is not a unique phenomenon, as the same thinking drives the collection of ever larger amounts of (student) data, especially where collection is automated and storage is vast [Williamson, 2017].
It is, however, always the case that there is too little data: some data is never selected, some cannot be detected, and some is never considered. I realised the limits of my ability to collect all the data I ‘needed’ and adjusted my expectations accordingly. From these three weeks, I can see that my data is clearly insufficient to provide personalised guidance.
A health app I used could not go beyond nudges towards a general ideal, since I limited the data it had to work with (for the sake of data security). Neither the data the app could collect nor the feedback it gave ever went beyond the individual, revealing that it had been built without awareness of the influence of context on personal behaviour, and that it expected all change to come through individual effort [Eynon, 2015; Tsai et al., 2020].
Likewise, learning systems built this way can only be recommender systems, nudging towards ‘ideals’ [Knox et al., 2020]; their advice may be trusted because of the apparent authority of the system, yet misunderstood because of its inscrutability and unreliable because of its inbuilt bias [Tsai et al., 2020].
A counterbalance is the demand for more personal (‘bodily’, ‘emotional’) data [Knox et al., 2020], but this moves learning systems into the role of surveillance and control, and away from personalisation for student empowerment [Tsai et al., 2020].
My choice of data was limited by my research question. In the case of a learning system, the data that best answers a question may not be available, so the available data is used as a proxy [Bulger, 2016]. Alternatively, the available data might be used as a way of framing the research question [Eynon, 2013]; this suggests an assumption that data is valuable in itself, and that it is only a matter of making use of it [Bulger, 2016].
Though aware of the data gathering, I cannot be sure it has not influenced my actions; hence the data captures not natural behaviour but a performance. Likewise, if students know they are being observed, this could influence their behaviour; however, that does not necessarily mean their learning approaches have changed, only that they have perhaps learned to behave (in terms of the data collected) like the ideal they have been nudged towards [Eynon, 2015].
Data visualisation is not neutral; it involves acts of reduction and transformation to make data comprehensible and influential [Williamson, 2017]. It may be unrealistic to expect to find trends in my data, but there is nothing to stop me manipulating it until I ‘reveal’ them. It is troubling to consider data visualisation being used to provide ‘neat’ solutions to ‘messy’ situations [Eynon, 2013].
Word count: 503
References
Bulger, M., 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf
Eynon, R., 2013. The rise of Big Data: what does it mean for education, technology, and media research? Learning, Media and Technology, 38 (3), pp. 237-240.
Eynon, R., 2015. The quantified self for learning: critical questions for education. Learning, Media and Technology, 40 (4), pp. 407-411, DOI: 10.1080/17439884.2015.1100797
Knox, J., Williamson, B. & Bayne, S., 2020. Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45 (1), pp. 1-15.
Tsai, Y-S., Perrotta, C. & Gašević, D., 2020. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45 (4), pp. 554-567. DOI: 10.1080/02602938.2019.1676396
Williamson, B., 2017. Conceptualising Digital Data. In: Big Data in Education: The digital future of learning, policy and practice. Sage.
Plenty of evidence of engagement with the dataviz tasks and with the set course readings in this very good commentary. You have clearly grasped some of the key critical arguments related to data use in ‘learning’, especially the way it might affect how learners ‘perform’. This is also, of course, a major concern related to the use of data to measure and assess teachers: that they perform in such a way as to satisfy the demands of the measurement instrument, rather than to serve other purposes. You are also clearly interrogating the widespread assumption that more and better data will lead to ‘neater’ and ‘cleaner’ explanations of learning, an assumption that, as you note, appears to neglect the social and personal contexts in which learning, and teaching, take place. Those are the kinds of issues we will be confronting in the next blocks. I look forward to more of your visualizations and reflections.
Thank you, Ben. Data is certainly a very messy business; simple answers are probably best avoided.