
This week I collected data on my browsing history, covering 18–25 March 2021. Ozga (2016), Fontaine (2016) and Anagnostopoulos et al. (2013a:12) highlight the focus on outcomes, such as performance in tests, rather than inputs. This output-focused data governance is bound to be somewhat one-sided: you can only act on what you measure, so how do you deal with institutions that fall behind? Often they are punished, but if machine behaviourism is like human behaviourism then punishment will not work (Knox et al. 2019; Skinner 2002). Indeed, the schools which are punished become less likely to help those who need it most (Anagnostopoulos et al. 2013b:217). Informatic power is misused (ibid.).

The purpose of my data collection is to determine whether collecting data on my inputs into teaching and learning would prove more useful in governing educational outcomes. My browsing history could easily be made available to my university, as all the browsing was done on a university machine. I split my analysis between learning and teaching, the MSc (Digital Education), and personal browsing. I counted the number of ‘hits’ on each platform, noting that this does not necessarily represent the time spent on each platform. As this was a teaching-heavy week, the load can be seen on LMS platforms such as Moodle and FutureLearn, as well as on Zoom and Mentimeter, which aid my teaching along with business case and simulation platforms. My study time is split between the WordPress sites of each of my courses and Miro, where some of my culture course is also happening. I also use Google Photos to capture my data visualisations for this course and artwork for the culture course. Finally, a very small amount of browsing went to checking email and LinkedIn.
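For illustration, here is a minimal sketch of how such a hit count could be produced, assuming (hypothetically) that the browser history were exported as a CSV with a url column and that domains were assigned to categories by hand; the file name and the mapping below are illustrative, not my actual coding scheme.

```python
# Minimal sketch of the hit-counting exercise described above.
# Assumes a hypothetical CSV export of browser history with a "url" column;
# the domain-to-category mapping is illustrative only.
import csv
from collections import Counter
from urllib.parse import urlparse

CATEGORIES = {
    "moodle": "teaching", "futurelearn.com": "teaching",
    "zoom.us": "teaching", "mentimeter.com": "teaching",
    "wordpress.com": "learning", "miro.com": "learning",
    "photos.google.com": "learning",
    "linkedin.com": "personal", "mail.google.com": "personal",
}

def categorise(url: str) -> str:
    """Map a visited URL to a teaching/learning/personal category."""
    domain = urlparse(url).netloc.lower()
    for fragment, category in CATEGORIES.items():
        if fragment in domain:
            return category
    return "other"

def count_hits(history_csv: str) -> Counter:
    """Count raw hits per category; note this counts visits, not time spent."""
    hits = Counter()
    with open(history_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            hits[categorise(row["url"])] += 1
    return hits

if __name__ == "__main__":
    print(count_hits("browsing_history.csv"))  # hypothetical export file
```

Even this toy version makes the limitation visible: everything hangs on the hand-built mapping and on what the browser happens to record, which is part of why the counts say so little about the quality of the work behind them.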
Should the university collect data like these, they could be misleading and highly variable from week to week. If I choose to read a paper article or a textbook, there are no digital traces. The data are simplistic at best. However, as governing data they do describe inputs, which might be compared with outputs. This would be very likely to lead to feelings of mistrust and dataveillance (Williamson et al. 2020; Edwards 2020).
Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013a. Introduction: Mapping the Information Infrastructure of Accountability. In Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.), The Infrastructure of Accountability: Data use and the transformation of American education. Cambridge, MA: Harvard Education Press.
Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013b. Conclusion: The Infrastructure of Accountability: Tensions, Implications and Concluding Thoughts. In Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.), The Infrastructure of Accountability: Data use and the transformation of American education. Cambridge, MA: Harvard Education Press.
Fontaine, C. 2016. The Myth of Accountability: How Data (Mis)Use is Reinforcing the Problems of Public Education. Data & Society Working Paper, 08.08.2016.
Knox, J., Williamson, B. & Bayne, S. 2019. ‘Machine behaviourism: Future visions of “learnification” and “datafication” across humans and digital technologies’. Learning, Media and Technology, 45(1), pp. 1-15.
Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp. 69-81.
Skinner, B.F., 2002. Beyond freedom and dignity. Hackett Publishing.
Williamson, B. 2017. Digital Education Governance: Political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. London: Sage.
A behaviourism-inspired technology would, I suppose, aim to reward you for your “engagement” with various platforms/services, and then “nudge” you to optimize your use of them! My view is that we do see some hints of this emerging. On the reward side, for example, I came across a “digital currency” called an “edCoin”, part of a “reward system” on a digital education platform: https://help.libereka.com/en/article/what-is-an-edcoin-1c3a9tv/. As I understand it, students might “earn” edCoins to exchange for other courses or for cash. On the nudge side, various analytics could suggest “personalized” recommendations for better studying. So your example here really does collapse together issues of learning, teaching, and governing: how one learns is shaped by new kinds of “teaching machines”, and both are governed by technology-based organizations and the infrastructures they’ve built to enable the extraction and analysis of the resulting data.