The data visualization blog has been an exciting opportunity to explore the complexities, limitations, promises, and potential of big data within education and society, and an exercise that I hope to continue past this course. By recording and visualizing data by hand, I became intimately involved in every stage of datafying my behavior, from determining what would be recorded to deciding how it would be visualized. In this final post, I reflect on my experiences from the exercise and highlight some of my conclusions from the course.
As I mentioned in my Block 1 reflection, I thought the data visualization exercise would be a relatively easy one. After my first visualization, however, I realized that working with social data poses a unique set of challenges: deciding what data to collect, establishing a methodology to guide collection, choosing how to analyze the data, and developing a visualization that relays the dataset efficiently while still providing a deeper layer of context.
Data are often assumed to be objective, truthful, and neutral (Kitchin, 2014; Williamson, 2017) and, subsequently, data are promised as a method to gain otherwise unobtainable insights (Knox et al., 2020) which afford educators the ability to personalize, measure, predict, and explain student, teacher, and institution performance. However, as Kitchin (2014) argues, data are not inherently objective, truthful, or neutral; rather, they are partial, selective, and representative. Data are social products (Williamson, 2017), and those who decide what data to collect and the methodology for collecting them consequently imprint their values within the dataset (Williamson et al., 2020).
Even in my own manual collection of data, these partial and selective practices persisted. While preparing for each visualization, I would first identify what I wanted to focus on for the week and then outline a methodology with some sample data. Influencing these decisions were concerns about privacy (what data did I want to be public?), how quantifiable the data were, and the expected volume of data. These concerns affected both the type of data collected and its scope. For example, during the weeks when I measured my Discord activity, I only recorded conversation history from direct messages with coworkers and from our main staff server; I did not include data from personal direct messages or the other servers I am a part of. Additionally, I only recorded data during working hours (08:00–17:00).
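To make that selectivity concrete, here is a minimal sketch of how such scoping rules might look if they were encoded programmatically. This is purely illustrative: my actual collection was done by hand, and the records, channel names, and fields below are hypothetical, not drawn from my real Discord data.

```python
from datetime import time

# Hypothetical message records; every field and value here is illustrative only.
messages = [
    {"channel": "staff-server", "author": "coworker", "sent_at": time(9, 15)},
    {"channel": "personal-dm", "author": "friend", "sent_at": time(12, 30)},
    {"channel": "staff-server", "author": "coworker", "sent_at": time(19, 45)},
]

# The scoping decisions from my methodology, encoded as constants:
# only work-related channels, only working hours (08:00-17:00).
INCLUDED_CHANNELS = {"staff-server", "coworker-dm"}
WORK_START, WORK_END = time(8, 0), time(17, 0)

def in_scope(msg):
    """Return True only for messages the methodology chooses to record."""
    return (msg["channel"] in INCLUDED_CHANNELS
            and WORK_START <= msg["sent_at"] <= WORK_END)

recorded = [m for m in messages if in_scope(m)]
print(len(recorded))  # 1 of the 3 sample messages survives the filter
```

The point of the sketch is that everything filtered out never enters the dataset at all: the resulting "week of Discord" is partial by construction, before any analysis or visualization begins.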
While I had complete control over what data were collected, students, teachers, and staff do not necessarily have the same control. Educational data are often determined by software developers, administrators, or politicians and their perceptions of what learning is, rather than by teachers, and often without the consent of the students (Raffaghelli and Stewart, 2020; van Dijck et al., 2018). The data that are collected focus on behavior that can be easily counted and accounted for (i.e. easily quantifiable data) and de-emphasize behavior that cannot be effectively measured or that does not produce positive results (Williamson, 2017).
While collecting data for the visualizations, I sometimes became aware of my own self-tracking and would find myself altering my behavior so that a "better" dataset could be produced. During "a week of beverages", I consumed more water than I regularly would have because I knew that the data would be public and open to inspection by potentially anyone. When students and teachers are aware their behaviors are being monitored, they are likely to adapt their behavior as well. Knox et al. (2020) describe this behavioral adaptation as nudging, where certain behaviors from students are replaced with more "desirable" or "preferable" outcomes. Similarly, student performance has become a proxy measure for teacher performance, which can lead to teachers urging students toward particular outcomes, such as "teaching to the (standardized) test" (Fontaine, 2016; Bulger, 2016; Williamson et al., 2020). Good data, then, becomes a priority, and both students and teachers focus on particular outcomes rather than on the learning process itself (Bulger, 2016).
Collapsing the data collected throughout the week and drafting the visualization presented another set of challenges. Preparing the data for visualization required me to strip further context from the data and assign each item a discrete value. During "a week of walking", I recorded what I was primarily thinking about in relation to certain landmarks along my route. Under this methodology, many tangential thoughts were ignored because they were fleeting or not substantial enough for me to remember to record when I finished my walk. These decisions about what metadata to include, combine, or remove are influenced by the type of visualization desired and the conclusions the designer wants to convey.
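As a rough illustration of this collapsing step, the sketch below maps hypothetical free-form walk notes onto a handful of discrete categories. Everything here is invented for the example; the category scheme itself is exactly the kind of design choice I am describing, since it decides which distinctions survive into the visualization.

```python
from collections import Counter

# Hypothetical walk log: (landmark, primary thought) pairs noted after the walk.
# Fleeting tangents were never written down, so they are absent by design.
walk_log = [
    ("bridge", "work deadline"),
    ("park", "dinner plans"),
    ("church", "work meeting"),
    ("river", "weekend trip"),
]

# Collapsing free-form thoughts into a few discrete categories; this mapping
# is a made-up design choice that shapes what the final chart can show.
CATEGORIES = {
    "work deadline": "work",
    "work meeting": "work",
    "dinner plans": "home",
    "weekend trip": "leisure",
}

counts = Counter(CATEGORIES.get(thought, "other") for _, thought in walk_log)
print(counts)  # Counter({'work': 2, 'home': 1, 'leisure': 1})
```

Once the thoughts become counts, the landmarks, wording, and everything that fell into "other" are gone; the visualization can only ever relay what the collapsing step preserved.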
Student performance data is often relayed to teachers in the form of learning dashboards, which reduce and quantify student-student and teacher-student relationships and provide summaries of, and suggestions for, student learning (Brown, 2020). Just as the developers behind the data collection algorithms decide what data is collected, they also decide how that data is used, manipulated, and transformed. Consequently, teachers can be uncertain about what meaning can be drawn from the data (Brown, 2020). Dashboards can also direct attention to specific areas that are perceived as "learning" (Williamson, 2017) and impose limits on how teachers see their students (Williamson et al., 2020). A critical understanding of data and data literacy has been suggested as a way to help teachers understand and interrogate these data systems (Raffaghelli and Stewart, 2020; Williamson et al., 2020), but this requires the underlying dashboard mechanisms and algorithms to be more transparent and accessible.
Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.
Bulger, M. 2016. Personalized Learning: The Conversations We're Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf
Fawns, T., Aitken, G. and Jones, D. 2020. Ecological teaching evaluation vs the datafication of quality: understanding education with, and around, data. Postdigital Science and Education, pp. 1-18.
Fontaine, C. 2016. The Myth of Accountability: How Data (Mis)Use is Reinforcing the Problems of Public Education. Data & Society working paper, 08.08.2016.
Kitchin, R. 2014. The Data Revolution: Big Data, Open Data, Data Infrastructures and their Consequences. London: Sage.
Knox, J., Williamson, B. and Bayne, S. 2020. Machine behaviourism: future visions of 'learnification' and 'datafication' across humans and digital technologies. Learning, Media and Technology, 45(1), pp. 31-45.
Raffaghelli, J.E. and Stewart, B. 2020. Centering complexity in 'educators' data literacy' to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4), pp. 435-455. DOI: 10.1080/13562517.2019.1696301
van Dijck, J., Poell, T. and de Waal, M. 2018. Education. In: The Platform Society. Oxford: Oxford University Press.
Williamson, B. 2017. Digital education governance: political analytics, performativity and accountability. In: Big Data in Education: The Digital Future of Learning, Policy and Practice. London: Sage.
Williamson, B., Bayne, S. and Shay, S. 2020. The datafication of teaching in higher education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.