Week 12: Final commentary

Before creating any visualisation, it was necessary to consider what data to collect.

It is suggested that data has intrinsic value; it is only a matter of using it [Bulger, 2016]. Among those who see education as broken, 'big data solutionism' [Fuchs, 2019], the proposal of big data as a 'fix', is behind the drive to collect ever larger amounts of personal data, especially where collection is automated and storage vast [Williamson, 2017a]. As well as data from institutional platforms, the use of social media data has been proposed because it is 'cheap' and plentiful [Bartlett et al., 2014]. However much data there is, though, it is never sufficient: some data is never selected, some cannot be detected, and some is never considered.

In this case, I was in control of the data I collected and shared.

Acquisition of performance data for staff and students, along with judgement and ranking [Espeland and Sauder, 2016] to contribute to decision-making [Esposito and Stark, 2019], is common practice in higher education. Wherever the data originates, it can only describe behaviour, and only behaviour that leaves a digital trace. The result is a data set that captures 'what' but not 'why' [Eynon, 2015], and not learning but its proxy [Bulger, 2016]. The demand to collect ever more personal data [Knox et al., 2020] risks moving institutions towards a surveillance role, and disempowering the individuals within them [Tsai et al., 2020].

I was able to decide the research question and my methodology.

Those charged with governance determine the 'problem' and the methodology [Williamson, 2017b]. This can also involve actors beyond the institution, for instance platform vendors, which gives them considerable influence over policy [Van Dijck et al., 2018]. The available data is used to frame the research question [Eynon, 2013], which is problematic when that data is unrepresentative of learning or of the student population [O'Neil, 2016; Schradie, 2017].

Collecting manually put a limit on the data recorded, in both amount and quality.

To be used, data is: (a) reduced: simplified for easy evaluation, meaning aspects are lost if their value is not appreciated; and (b) standardised: made context-free for easy comparison [Ozga, 2016], obscuring its in-built bias [Boring, 2017]. Though they may be made to '…appear as objective facts…', these data are '…products of complex assemblages…' that '…construct the infrastructure of accountability…' which '…shapes what and who count…' [Anagnostopoulos et al., 2013]. This infrastructure can '…recede into the background…' [Anagnostopoulos et al., 2013], making it unlikely to be noticed or questioned.

I was in control of how my data was collected, but typically students would have their data collected automatically, based on their interaction with a learning system.

Learning systems are powered by biased algorithms [Noble, 2018] and fuelled by highly processed data. That they provide personalised learning is an illusion: at most they are recommender systems, nudging students towards 'ideals' [Knox et al., 2020], and, with no awareness of the influence of context on behaviour, they position all change as a matter of individual effort [Eynon, 2015; Tsai et al., 2020]. Despite this, such systems may be trusted because of their apparent authority, and may be used by students without teacher input.

Again, I was in control of the process by which I visualised my data. Students typically would not be: their data from learning systems would be used to create a report or visualised via a dashboard.

Dashboards are offered as a way to make a complex situation easy to understand [Williamson, 2017a], but that is not necessarily positive. Data visualisation is not neutral; it requires reduction and transformation to make data comprehensible [Williamson, 2017a] and provides 'neat' solutions for 'messy' situations [Eynon, 2013]. The decisions behind what is presented may not be explicit [Williamson et al., 2020]. If the underlying mechanism were known, this could help teachers, whose practice it can influence [Wise and Jung, 2019], understand how to judge its value [Brown, 2020]. It has been suggested that, to do this, teachers need critical data literacy [Raffaghelli & Stewart, 2020; Sander, 2020]; but if the teacher is disempowered, and the learning system uses its data to self-regulate, this will not help [Williamson et al., 2020].

Dashboards may appear to offer personalisation to the teacher, but this too is an illusion, limited to a choice of pre-selected options. Teachers may have some control over the data gathered, and over whether students can see it, but they must then deal with the effect that taking a surveillance role [Tsai et al., 2020] has on their relationship with students [Williamson et al., 2020] and on how they see themselves as teachers [Harrison et al., 2020]. True empowerment may only be possible if teachers have control over the decision to collect data at all.

I could choose to limit the data I shared through my visualisations (especially important as I knew they would be publicly available); this is not the case with my performance data at work, which is collected automatically.

Whether or not staff and students know what is done on the basis of their data, it is '…structured and structuring…' such that they could be '…driven by analysis of performance data…' [Ozga, 2016]. Even where they have some control, all the important decisions have already been taken; that practices are data-led, say to increase efficiency [Williamson, 2019], may not be open to debate [Ozga, 2016].

Observation of the dashboard becomes a proxy for observation of learning. If students know they are being observed, it could influence their behaviour. However, that may not mean their learning approaches have changed, just that they have learned to behave (in terms of data collected) like the ideal they have been nudged towards [Eynon, 2015].

With the dashboard visible to the institution, a teacher may fear it being seen as a proxy for their teaching [Williamson et al., 2020]. Instead of encouraging improved metrics through 'improved' performance, the dashboard may nudge them towards an 'ideal' [Eynon, 2015]. They may change their teaching in order to influence the dashboard, say by shifting activity to where data is collected, rather than for pedagogic reasons [Brown, 2020; Harrison et al., 2020]. At worst this can become an 'engine of anxiety' [Espeland and Sauder, 2016], leading to reactions which might even counteract institutional aims.

Visualisations, formed of partial, simplified data, represent people as 'thin descriptions' [Ozga et al., 2011]. Depersonalised, this proxy of what it claims to represent is used to value people and their achievements [Burrows, 2012], conversely managing to devalue them in the process.

Word count: 1074

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In, Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.

Bartlett, J., Miller, C., Reffin, J., Weir, D. and Wibberley, S., 2014. Vox Digitas. London: Demos.

Boring, A., 2017. Gender biases in student evaluations of teaching. Journal of public economics, 145, pp.27-41.

Brown, M., 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education. 25 (4), pp. 384-400.

Bulger, M., 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Burrows, R., 2012. Living with the h-index? Metric assemblages in the contemporary academy. The Sociological Review, 60 (2), pp. 355–372.

Espeland, W. N., and Sauder, M., 2016. Engines of anxiety: Academic rankings, reputation, and accountability. New York, NY: Russell Sage Foundation.

Esposito, E., and Stark, D., 2019. What’s observed in a rating? Rankings as orientation in the face of uncertainty. Theory, Culture and Society, 36 (4), pp. 3–26. https://doi.org/10.1177/0263276419826276

Eynon, R., 2013. The rise of Big Data: what does it mean for education, technology, and media research? Learning, Media and Technology, 38 (3), pp. 237-240.

Eynon, R., 2015. The quantified self for learning: critical questions for education. Learning, Media and Technology, 40 (4), pp. 407-411, DOI: 10.1080/17439884.2015.1100797.

Fontaine, C. 2016. The Myth of Accountability: How Data (Mis)Use is Reinforcing the Problems of Public Education, Data and Society Working Paper 08.08.2016.

Fuchs, C. 2019. Beyond Big Data Capitalism, Towards Dialectical Digital Modernity: Reflections on David Chandler’s Chapter. In: Chandler, D. and Fuchs, C. (eds.) Digital Objects, Digital Subjects: Interdisciplinary Perspectives on Capitalism, Labour and Politics in the Age of Big Data. pp. 43–51. London: University of Westminster Press. DOI: https://doi.org/10.16997/book29.c. License: CC‐BY‐NC‐ND 4.0.

Harrison, M.J., Davies, C., Bell, H., Goodley, C., Fox, S. & Downing, B., 2020. (Un)teaching the 'datafied student subject': perspectives from an education-based masters in an English university. Teaching in Higher Education, 25 (4), pp. 401-417. DOI: 10.1080/13562517.2019.1698541.

Knox, J., Williamson, B. & Bayne, S., 2020. Machine behaviourism: Future visions of 'learnification' and 'datafication' across humans and digital technologies. Learning, Media and Technology, 45 (1), pp. 1-15.

Noble, S. U., 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

O’Neil, C., 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. USA: Penguin Random House.

Ozga, J., 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15 (1), pp. 69-81.

Ozga J., Dahler-Larsen P., Segerholm C., et al. (eds), 2011. Fabricating Quality in Education: Data and Governance in Europe. London: Routledge, pp.127–150.

Raffaghelli, J.E. & Stewart, B., 2020. Centering complexity in 'educators' data literacy' to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25 (4), pp. 435-455. DOI: 10.1080/13562517.2019.1696301.

Sander, I., 2020. What is critical big data literacy and how can it be implemented? Internet Policy Review, 9 (2). DOI: 10.14763/2020.2.1479.

Schradie, J., 2017. Big Data is Too Small: research implications of class inequality for online data collection. In: Deery, J. and Press, A. (eds.) Media and Class: TV, Film and Digital Culture. Abingdon, UK: Taylor & Francis.

Tsai, Y-S., Perrotta, C. & Gašević, D., 2020. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45 (4), pp. 554-567. DOI: 10.1080/02602938.2019.1676396.

Van Dijck, J., Poell, T. and De Waal, M., 2018. The platform society: Public values in a connective world. Oxford University Press.

Williamson B., 2019. Policy networks, performance metrics and platform markets: Charting the expanding data infrastructure of higher education. British Journal of Educational Technology, 50 (6), pp. 2794–2809. doi:10.1111/bjet.12849.

Williamson, B., 2017a. Conceptualising Digital Data in Big Data in Education: The digital future of learning, policy and practice. Sage.

Williamson, B., 2017b. Digital Education Governance: political analytics, performativity and accountability, in Big Data in Education: The digital future of learning, policy and practice. Sage.

Williamson, B., Bayne, S. & Shay, S., 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25 (4), pp. 351-365.

Wise A. F., and Jung Y., 2019. Teaching with Analytics: Towards a Situated Model of Instructional Decision-Making. Journal of Learning Analytics. 6(2), pp. 53-69. http://dx.doi.org/10.18608/jla.2019.62.4.
