Block 3 summary

For the ‘governing with data’ block I tracked activity during our Tweetorial, the digital traces I leave while studying, and, finally, my performance at work, hoping to gain more insight into the role of data in governing.

The literature for this block has revealed how policy making has moved away from political actors to a wide range of global actors, ranging from private sector companies to think tanks and independent experts (Williamson, 2017). Initiatives such as Pearson’s Learning Curve Data Bank show how influential commercial businesses have become by being able to ‘identify policy problems for national schooling systems, from which [they] also [have] the potential to profit by selling policy solutions’ (Williamson 2017, p. 23). This raises questions as to how valid these policy problems are, or whether they were actively ‘created’ in order to profit from offering solutions.

The task of visualising public data such as Education GPS, a database by the OECD, gave insight into the vast amount of educational data that is available for anyone to analyse and interpret. Although the website includes a note that ‘[t]hese values should be interpreted with care since they are influenced by countries’ specific contexts and trade-offs’, it is easy to see how tools like these can be used to produce impressive reports and recommendations based on the perceived objectivity of data.

The issue of the objectivity of data links to the subject of accountability, which was a recurring topic in this block’s literature. Anagnostopoulos et al.’s (2013) chapters on test-based accountability have highlighted the trend in educational policy towards measuring, monitoring and regulating. Tracking my ‘performance’ at work in week 11 emphasised the potential shortcomings of using standardised measures to determine performance. Anagnostopoulos et al. (2013, p. 15) raise important questions such as ‘Who determines the tests and algorithms used to quantify student learning and teacher quality, who creates them, and who is left out of such decisions?’ This may not only be an issue for assessing students’ and teachers’ performance but also for other parts of education. Ozga (2016), for example, describes how a ‘shift away from ideas, possibilities and informed expert analysis in shaping the knowledge-governing relationship and towards the application of rules derived from recurring data patterns’ can create tensions amongst school inspectors.

Big data and developments in data-processing software have changed the educational governance landscape, replacing ‘slow-paced bureaucratic policy processes’ with practices that claim to ‘make all educational problems measurable, calculable and knowable, and thereby solvable at high speed’ (Williamson 2017, p. 25). From what we have learned in the first two blocks of this course, it is important to consider issues such as ethics and privacy, data literacy and bias when relying on data for decision-making. The visualisations have, once again, highlighted how subjective and selective data collection and analysis can be.

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds.) (2013). The Infrastructure of Accountability: Data use and the transformation of American education. Harvard Education Press.

Ozga J. (2016). Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal. 15(1):69-81. doi:10.1177/1474904115616629

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. Sage.

Week 11: a week of performance

For my final data visualisation, I decided to track my performance at work. The difficulty was determining how to measure ‘performance’. Due to the nature of my job, it isn’t easy to say how many tasks I have completed each day: I work on big projects and am fairly autonomous in what I do each day. I wondered how a machine would measure how I did. It is easy to count how many meetings I attended, how many emails I sent and how much time I spent on my computer. But how can this information give insight into how well I’m doing my job?
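To make this concrete, here is a crude sketch of how such a machine measure might work. Everything in it is invented for illustration: the metrics, the weights and the numbers are my own assumptions, not anything a real monitoring tool uses.

```python
# A hypothetical 'machine' view of performance, built only from
# easily countable proxies. All metrics and weights are invented.
from dataclasses import dataclass


@dataclass
class WorkDay:
    meetings: int        # meetings attended
    emails_sent: int     # emails sent
    screen_hours: float  # hours active at the computer


def activity_score(day: WorkDay) -> float:
    """Combine countable proxies into one number.

    The weights are arbitrary -- which is precisely the problem:
    none of these proxies captures how *well* the work was done.
    """
    return 2.0 * day.meetings + 0.5 * day.emails_sent + 1.0 * day.screen_hours


# A meeting-heavy Monday versus a quiet, focused Thursday.
monday = WorkDay(meetings=5, emails_sent=20, screen_hours=8.0)
thursday = WorkDay(meetings=1, emails_sent=4, screen_hours=6.0)

# The 'busiest' day wins, regardless of what was actually achieved.
print(activity_score(monday) > activity_score(thursday))  # True
```

The sketch rewards visible busyness: a day full of meetings and emails scores higher than a day of deep, uninterrupted project work, which matches my own experience below of the data contradicting my perception.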

Week 11 visualisation
Legend

I chose to explore this issue after reading how problematic performance measuring can be as part of the information infrastructure of test-based accountability. According to Anagnostopoulos et al. (2013), there are questions around how well performance measures can represent teaching, learning and schooling: ‘As [standardised tests, mathematical models, and computing technologies] define what kind of knowledge and ways of thinking matter and who counts as “good” teachers, students, and schools, these performance metrics shape how we practice, value, and think about education.’ The perceived objectivity of data therefore leads to a shift of power away from traditional actors in educational governing.

Looking back at my visualisation, it seems as if Monday (top left) was my most productive day, although I perceived Thursday (bottom right) as the day I achieved most. Although this is a very small sample, it shows how difficult it is to measure performance purely by looking at data.

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds.) (2013). The Infrastructure of Accountability: Data use and the transformation of American education. Harvard Education Press.

Week 10: a week of traces

Week 10 visualisation
Legend

This week I tracked the digital ‘traces’ I leave while studying for this course. As Williamson (2017) describes, with the rise of big data, governments are increasingly monitoring the digital traces of their citizens, resulting in new forms of ‘data-driven governance’ and ‘evidence-based policymaking’. Ozga (2016), however, describes the potential issues arising from using data instead of expert knowledge for governance. In her research on the role of digital data in school inspections, Ozga highlights the tensions between seemingly ‘objective’ and ‘transparent’ data processes and knowledge creation through expert analysis. While my visualisation may not give away much about my performance, collecting such data at a large scale has become very valuable, not only for institutions and edtech companies but also for governing purposes.

Evidence of how valuable educational data has become may be found in the increasing number of actors now involved in policy-making, for example ‘private sector and civil society organizations, including businesses, consultants, entrepreneurs, think tanks, policy innovation labs, charities and independent experts’ (Williamson, 2017). Data have enabled these actors to exert power over what information is collected and how it is used, while projecting particular values and ways of thinking (Anagnostopoulos et al., 2013). With some of these actors increasingly including global and commercial stakeholders, I wonder what impact they are having on education in a local context. Is there a danger that we lose local, specialised knowledge in favour of global, standardised processes?

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds.) (2013). The Infrastructure of Accountability: Data use and the transformation of American education. Harvard Education Press.

Ozga J. (2016). Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal. 15(1):69-81. doi:10.1177/1474904115616629

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. Sage.

Week 9: a week of Twitter

Even though I had a really busy week at work and wasn’t able to participate in the Tweetorial, I monitored the tweets regularly and tried to visualise them. I wanted to illustrate the richness of the discussion so I chose to pull out the main terms from the various tweets.
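Pulling out the main terms can be sketched as a simple word count. The tweets and stop-word list below are invented examples, not the actual Tweetorial data, and a real analysis would use a fuller stop-word list:

```python
# A minimal sketch of extracting the most frequent terms from a set
# of tweets. The tweets and the stop-word list are invented examples.
from collections import Counter
import re

STOP_WORDS = {"the", "is", "a", "of", "and", "to", "in", "with", "are"}


def top_terms(tweets, n=5):
    """Count words across all tweets, ignoring case, punctuation and
    common stop words, and return the n most frequent terms."""
    words = []
    for tweet in tweets:
        words += [w for w in re.findall(r"[a-z']+", tweet.lower())
                  if w not in STOP_WORDS]
    return Counter(words).most_common(n)


tweets = [
    "Data are never neutral: bias shapes what is counted.",
    "Accountability through data raises questions of bias.",
    "Who decides what data count in policy making?",
]
print(top_terms(tweets, 3))
```

Even this toy example surfaces the themes of the (invented) discussion: ‘data’ and ‘bias’ rise to the top, much as the key terms did in my visualisation.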

Legend: Question 1 / Question 2 / Question 3

Looking at the visualisation, key themes and issues become clear even for those who aren’t familiar with any of the literature on ‘governing’ with data.

As so often when data are involved, the terms ‘objective’ and ‘impartial’ appear. In the same breath, however, ‘bias’ and ‘subjectivity’ are mentioned, reminding us that data aren’t neutral. When it comes to data and governing, ‘accountability’ is a term that arose frequently in both the tweets and the literature. Big data and the associated notions of countability, numbering and statistical knowledge give rise to new forms of ‘data-driven governance’ with an emphasis on ‘evidence-based policymaking’ (Williamson, 2017). This shift raises questions of whether bias is considered in policy making and which actors are involved.

An example of how powerful global policy actors have become is the OECD. Outcomes of its Programme for International Student Assessment (PISA) can lead countries to change their education policies in order to perform better in the assessment (Liss, 2013). Germany was one of the countries that performed badly in 2001, and the resulting ‘PISA shock’ led to steps being taken to improve test results. While this was achieved, the inequality gap in Germany has widened (Davoli and Entorf, 2018), reminding us of the potential issues of standardisation in education.

References

Davoli, M. & Entorf, H. (2018). The PISA Shock, Socioeconomic Inequality, and School Reforms in Germany. IZA Policy Papers 140, Institute of Labor Economics (IZA).

Liss, J. (2013). Creative destruction and globalization: The rise of massive standardized education platforms. Globalizations, 10(4), 557-570.

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. Sage.