Block 3: governing with data

Governing education.

To ‘govern education’ means to control and direct the public business of education. This implies that education is a public good (Williamson et al 2020), a stance the vast majority would agree with. Thus, governing education should lie in the hands of public institutions, which, in democratic countries, we indirectly control.

Big data’s impact on governing education.

Data technologies are meant to help governing bodies with this process. They promise to make it more effective and cheaper. However, close inspection reveals that data interventions in education rarely achieve either of these goals. For example, the IMPACT assessment and feedback tool gathered data on student performance and improvement to evaluate teacher job performance (O’Neil 2016), but did so with extreme margins of error. In fact, standardised student performance tests used to monitor broadly understood educational quality are a common manifestation of big data (Anagnostopoulos et al 2013; Ozga 2016). They form the basis of improvement programmes such as IMPACT and No Child Left Behind (Anagnostopoulos et al 2013), as well as comparison tools like college rankings (O’Neil 2016), PISA (Ozga 2016) or OECD country rankings (Williamson 2017).

This has multiple negative effects. Firstly, the legislative process is changed: numbers seem easily understood, so decision making can be, and is, sped up in an unprecedented way (ibid.). This leads to suboptimal legislative outcomes. Secondly, in many cases the data-driven tools create their own vicious cycles. On a small scale, this manifests in preparing students to study for the test rather than develop well-rounded knowledge. The large scale mirrors this. For example, countries introduce educational reforms to match OECD criteria, not in their own best interest (ibid.). Universities hire researchers to boost their position in college rankings, not because they fit their pedagogical mission (O’Neil 2016). Hence, these data-driven tools become self-serving. This occurs because we aren’t discussing nearly enough where education should be directed: whether what is being measured is actually what society values (Biesta 2013). The last data visualisation of this block shows values that most would agree should be taught, but are never datafied.

In this process, even the foundation of governing is cracking, as control passes from public bodies to the private entities who gather and utilise the data (Williamson 2017). This loss of control over data technology especially threatens the Global South, which already has a long colonial history of being ‘civilised’ with the ‘newest technology’ (Prinsloo 2020). Many of these countries yield to the trend for fear of being left behind (Biesta 2013).

Autistic students in data-driven education.

For autistic students, much progress has been made in terms of inclusion (Tops et al 2017). This progress is measured with numerical data, i.e. student participation. But as big data looms over schools, that participation is under threat. The first visualisation of this block demonstrates how the trends described above can solidify barriers for ASD students. The second depicts the discrepancies in numbers caused by currently insufficient scientific knowledge and diagnostic tools. We simply cannot get better data at this moment, but autistic students (including the vast numbers of undiagnosed ones) are sitting in classrooms every day. Autism in education is one example of why we should be extremely careful not to over-rely on data-driven solutions.

Sources.

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (2013). Introduction: Mapping the Information Infrastructure of Accountability. In, Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.

Biesta, G. (2013) Good Education in an Age of Measurement, University of Ljubljana, Faculty of Education

Knox, J., Williamson, B. & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), pp. 31-45. DOI: 10.1080/17439884.2019.1623251

O’Neil, C. (2016) Weapons of Math Destruction. Random House Audio (Audible release date 09-06-2016)

Ozga, J. (2016). Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1) pp.69-81

Prinsloo, P. (2020). Data frontiers and frontiers of power in (higher) education: a view of/from the Global South. Teaching in Higher Education, 25(4) pp.366-383

Tops, W., Van Den Bergh, A., Noens, I. & Baeyens, D. (2017) A multi-method assessment of study strategies in higher education students with an autism spectrum disorder. Learning and Individual Differences, 59, pp. 141-148

Williamson, B. (2017) Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

Williamson, B., Bayne, S. & Shay, S. (2020) The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365

The unquantifiable value of my teachers.

Rationale.

This visualisation follows up on the question that emerged last week: if the data we use in education policy making is unreliable, what should be done about it? The quickest answer is to get better data (O’Neil 2016). As Williamson (2017) shows, this approach has been undertaken by a number of agents, from the OECD to the National Pupil Database in the UK. They gather all sorts of metrics on students and teachers, in the hope that the more varied and detailed the data, the better it is. That, however, might be erroneous. As Kearns & Roth (2020) show, data-based solutions such as algorithmic tools generate unpredictable results and fail to preserve certain core values. Furthermore, data collection itself introduces new practices to education, thereby changing its nature (Williamson 2017). This nullifies the supposed role of data metrics as a neutral observer. Finally, data gathering is not objective in itself (O’Neil 2016). It is also superficial (Ozga 2016).

My data visualisation this week shows the unquantifiable value of teachers. I juxtaposed the numerical information that usually represents teachers’ performance for governing purposes (Williamson 2017), namely their students’ grades, with everything else I learned (or not) from these teachers that data are blind to.

Design.

For this final visualisation, I returned to the Dear Data design (below). This time I used the mini blackboard in my room, as I felt this, together with the theme, would be a nice closure to the visualisation blog. The number of arrows is based on my subjective evaluation. All these teachers taught at the same time and institution.

Sources.

Kearns, M. & Roth, K. (2020) The Ethical Algorithm; Audible

Lupi, G. & Posavec, S. (2016) Dear Data. Princeton Architectural Press

O’Neil, C. (2016) Weapons of Math Destruction. Random House Audio (Audible release date 09-06-2016)

Ozga, J. (2016). Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1) pp.69-81

Williamson, B. (2017) Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

The data that governs me.

To govern with data, decision makers first have to collect data sets, for example through public censuses. Algorithms are fed this data as the starting point of their operations (O’Neil 2016). Combined, these form the new tools governing bodies employ in decision-making (Ozga 2016).

All of us are part of these data sets. Programmes such as No Child Left Behind (Anagnostopoulos et al 2013) mean that educational bodies pay close attention to those students who traditionally struggle at school, for example those with ASD. We strive for fairness and inclusion in education, and for that we need statistics on the number of autistic students. This affects them directly in the wake of standardised tests (ibid.), something last week’s data showed. It also affects teachers, whose job is now evaluated based on test scores, as with IMPACT (O’Neil 2016).

This week’s visualisation shows just how unreliable the data is. It depicts statistics on autism prevalence in different groups. Depending on the source, the numbers vary. The difference might seem small: for any given school, it’s a handful of students. But if we multiply it by the number of schools, we get hundreds of students who are, well, left behind, and for whom schools aren’t getting funding. More importantly, if the data is so unreliable, how can decisions made on its basis be fair, trustworthy, and right?
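As a back-of-the-envelope illustration of how a small per-school discrepancy compounds, a minimal sketch (every figure below — the two prevalence rates, the school size and the number of schools — is an assumption for the sake of the arithmetic, not a number from the cited sources):

```python
# Hypothetical calculation: two sources report different autism
# prevalence rates, and the gap compounds at scale.
prevalence_a = 1 / 54   # assumed rate from one source
prevalence_b = 1 / 68   # assumed rate from another source

school_size = 500       # assumed average number of students per school
num_schools = 200       # assumed number of schools in a region

# Per school, the disagreement amounts to roughly two students...
per_school_gap = school_size * (prevalence_a - prevalence_b)

# ...but across the region it adds up to hundreds of uncounted students,
# for whom schools receive no funding.
regional_gap = per_school_gap * num_schools
print(f"per school: {per_school_gap:.1f}, region: {regional_gap:.0f}")
```

Under these assumed figures, a gap of under two students per school becomes several hundred students region-wide.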

(Employment data unavailable for most countries, and only gathered by a handful of organisations SOURCE)

BONUS:
Until recently, I was not even part of the official data sets. Like most women on the spectrum without clear impairment (Rynkiewicz et al 2019), I was diagnosed later in life:

Sources.

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (2013). Introduction: Mapping the Information Infrastructure of Accountability. In, Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.

APA By the numbers: Autism rate increases

Centers for Disease Control and Prevention. ASD Data & Statistics – Prevalence

Hines, E. Rates of Autism Spectrum Disorder Diagnosis by Age and Gender at AUCD

Loomes, R., Hull, L. & Mandy, W.P.L. (2017) What Is the Male-to-Female Ratio in Autism Spectrum Disorder? A Systematic Review and Meta-Analysis. J Am Acad Child Adolesc Psychiatry, 56(6), pp. 466-474. DOI: 10.1016/j.jaac.2017.03.013. PMID: 28545751

O’Neil, C. (2016) Weapons of Math Destruction. Random House Audio (Audible release date 09-06-2016)

Ozga, J. (2016). Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1) pp.69-81

Rynkiewicz, A., Janas-Kozik, M. & Słopień, A. (2019) Girls and women with autism. Psychiatria Polska

Zhang, Y., Li, N., Li, C. et al. (2020) Genetic evidence of gender difference in autism spectrum disorder supports the female-protective effect. Transl Psychiatry 10, 4 https://doi.org/10.1038/s41398-020-0699-8

The data gates in education that I have crossed.

Rationale.

Using data to quantify students is a crucial part of education (Brown 2020). Education is also a hierarchical social structure, so assessing and quantifying are key. In fact, most people would associate words such as ‘assessment’ and ‘grades’ with schooling. This week’s data focuses on the times that formalised quantification took place along my own path through education. These tests, especially those at the end of each school stage, have a tremendous impact on one’s life, both short and long term. Since I finished school, these assessments have become increasingly automated. For example, ‘matura’ (the Polish equivalent of A levels) went from a free writing task with lots of time to spare to a test with multiple-choice elements and fixed answers. There is, however, little publicly known about how the datafication of these gates is conceived (Kitchin 2017). I think a lot about the new rigidity these calculations impose on students, because if I had been subject to them, I would have failed far more than I did. These evaluations and exams are becoming increasingly problematic even for highly intelligent students on the autistic spectrum. Indeed, more make it to university, but they fare worse than their neurotypical counterparts (Tops et al 2017). Datafication of education gives the governing bodies a new tool to rule, and in this new order, some groups can be unintentionally disadvantaged.

Design.

I chose to draw a fairly traditional timeline. Because of its prevalence, it is clear and easy to understand (Healy 2019).

Sources.

Brown, M. (2020). Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400

Healy, K. (2019) Data Visualisation. A practical introduction. Princeton Press

Kitchin, R. (2017). Thinking Critically about Researching Algorithms. Information, Communication and Society, 20(1), 14-29, DOI: 10.1080/1369118X.2016.1154087

Tops, W., Van Den Bergh, A., Noens, I. & Baeyens, D. (2017) A multi-method assessment of study strategies in higher education students with an autism spectrum disorder. Learning and Individual Differences, 59, pp. 141-148

Block 2: teaching summary

What is teaching?

Teaching is clearly different from learning, but whereas the latter receives a lot of attention, the former does not. In traditional models of education, learning happened as a result of teaching. However, with the commercialisation and platformisation of education, we increasingly view it as a commodity rather than a public good (Williamson et al 2020; van Dijck 2018). Teaching is seen as responsive to learning: it ‘responds to learners’ errors and misunderstandings’ (Crook & Sutherland 2017, p. 20). In this sense, the teacher is a facilitator of learning. The student and their personalised learning take the front stage, a process which Gert Biesta names ‘learnification’ (Biesta 2017). According to Biesta, teaching is a process of creation that navigates between self- and world-destruction (Biesta 2012). It reflects the idea that education has a value in itself, crucial for society, which exceeds the individualistic benefit of learning. Teachers are a key component of this.

How does data impact teaching?

Datafication of education stands at odds with this view. For teachers, it is changing the nature of the profession in a number of ways.

Firstly, datafication targets the job itself and how good performance is perceived. Many algorithmic tools are developed to monitor students’ performance as a measurement of the quality of teaching. Teachers are now quantified (Raffaghelli & Stewart 2020) and reviewed based on the scores of their students. As a result, teachers can lose, and have lost, jobs unfairly, as in the case of the IMPACT assessment and feedback tool (O’Neil 2016).

Secondly, the value of teachers is brought into question. When the focus is redirected to learning and teachers are seen as mere facilitators of it, they lose their unique role. They can now be compared to other facilitators of learning, of which we seemingly have many thanks to modern technology. In this obsession with quantifying and numerical rankings, teachers are disadvantaged (Williamson et al 2020). The second visualisation of this block shows this well by contrasting knowledge gained directly from teachers with that from online resources. What it does not show, of course, is how teachers contributed to sparking interest in, or encountering, those resources. This is a known problem with data pointed out by Eynon (2013): data gives us the what, but not necessarily the why.

Finally, dashboards created with learning analytics now require teachers to be data analysts (Brown 2020), often without adequate training. As Brown points out (ibid.), using data to teach is nothing new. However, as I observed in the last data visualisation of this block, teachers previously took an active part in data gathering, input and interpretation. Nowadays this is not the case. Many of the algorithms are obscure (O’Neil 2016). The assumption is that those algorithms render fairer results than implicit-bias-ridden humans. However, data can be as biased as people, because people write the algorithms and decide on the base data set input (ibid.). Algorithms can indeed be designed for fairness, but deciding on a definition of fairness is not straightforward (Kearns & Roth 2020). Teachers are often excluded from the design, and only see the data generated at the other end.
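A minimal sketch of why defining fairness is hard (the groups, labels and predictions below are invented toy data, not an example from Kearns & Roth): a single classifier can satisfy one common definition of fairness while violating another.

```python
# Toy example: one classifier, two fairness definitions, conflicting verdicts.
# Groups, true labels and predictions are invented for illustration.
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
truth = [1,   1,   0,   0,   1,   0,   0,   0]
pred  = [1,   0,   1,   0,   1,   1,   0,   0]

def positive_rate(g):
    """Share of group g receiving a positive prediction (demographic parity)."""
    idx = [i for i, x in enumerate(group) if x == g]
    return sum(pred[i] for i in idx) / len(idx)

def true_positive_rate(g):
    """Share of truly-positive members of g predicted positive (equal opportunity)."""
    idx = [i for i, x in enumerate(group) if x == g and truth[i] == 1]
    return sum(pred[i] for i in idx) / len(idx)

# Demographic parity holds: both groups get positives at the same rate...
print(positive_rate("A"), positive_rate("B"))            # 0.5 0.5

# ...yet equal opportunity fails: deserving members of A are found less often.
print(true_positive_rate("A"), true_positive_rate("B"))  # 0.5 1.0
```

Whichever definition the designers pick, someone outside the design process — often the teachers — inherits the trade-off.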

Sources.

Biesta, G. (2012) Giving teaching back to education: responding to the disappearance of the teacher. Phenomenology & Practice, 6(2), pp. 35-49

Biesta, G. (2017) ‘The Beautiful Risk of Education’: https://www.youtube.com/watch?v=QMqFcVoXnTI

Brown, M. (2020) Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data, Teaching in Higher Education, 25:4, 384-400

Crook, C. & Sutherland, R. (2017) Technology and Theories of Learning. In: Duval et al (Eds.) Technology Enhanced Learning. Springer

Eynon, R. (2013). The rise of Big Data: what does it mean for education, technology, and media research? Learning, Media and Technology, 38(3), pp. 237-240.

Kearns, M. & Roth, K. (2020) The Ethical Algorithm; Audible

Raffaghelli, J.E. & Stewart, B. (2020) Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4), pp. 435-455

Williamson, B., Bayne, S. & Shay, S. (2020) The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365

van Dijck, J., Poell, T. & de Waal, M. (2018) Chapter 6: Education. In The Platform Society. Oxford University Press

The data I have been guilty of.

Rationale.

Gathering and keeping data about students isn’t new. In my career, most of the data was based on the CEFR framework, which is widely used. It is, however, not ideal, as language skills are notoriously hard to quantify. Based on the CEFR, I, the teacher, was responsible for collecting, inputting and interpreting the data. The combination of these two unreliable factors demonstrates the issue of flawed data sets that could then feed algorithms (Kearns & Roth 2020; O’Neil 2016) and inform decisions.

Data.

I calculated the rough number of students I taught at each institution. ‘Profile’ refers to a written paragraph about the student that is saved and passed on. Awareness of data gathering was based on explicitness: whether students were explicitly informed that their data was being saved and processed.

Design.

I experimented with an infographic format I saw in other blogs and in McCandless (2012, below).

Conclusions.

This data demonstrates a few worrying trends, apart from the aforementioned lack of objectivity. Students were not explicitly told about the data gathering. Worse yet, as time progressed, there was more automated and saved data, but less information given to students. This coincides with the dawn of platforms: more platforms, less awareness (van Dijck et al 2018). Although not directly visible from this visualisation, it made me realise that in the places where data was more standardised, processed, and formed the basis for teaching, the faculty had no guidance or training on how to use it (Brown 2020).

Sources.

Brown, M. (2020). Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400

Kearns, M. & Roth, K. (2020) The Ethical Algorithm; Audible

McCandless, D. (2012) Information is Beautiful. HarperCollins Publishers

O’Neil, C. (2016) Weapons of Math Destruction. Random House Audio (Audible release date 09-06-2016)

van Dijck, J., Poell, T. & de Waal, M. (2018) Chapter 6: Education. In The Platform Society. Oxford University Press