For the governing with data block, I tracked activity during our Tweetorial, the digital traces I left while studying, and finally my performance at work, hoping to gain more insight into the role of data in governing.
The literature for this block has revealed how policy making has moved away from political actors to a wide range of global actors, ranging from private-sector companies to think tanks and independent experts (Williamson, 2017). Initiatives such as Pearson’s Learning Curve Data Bank show how influential commercial businesses have become by being able to ‘identify policy problems for national schooling systems, from which [they] also [have] the potential to profit by selling policy solutions’ (Williamson, 2017, p. 23). This raises questions as to how valid these policy problems are, or whether they were actively ‘created’ in order to profit from offering solutions.
The task of visualising public data such as Education GPS, a database published by the OECD, gave insight into the vast amount of educational data that is available for anyone to analyse and interpret. Although the website includes a note that ‘[t]hese values should be interpreted with care since they are influenced by countries’ specific contexts and trade-offs’, it is easy to see how tools like these can be used to produce impressive reports and recommendations that trade on the perceived objectivity of data.
The issue of the objectivity of data links to the subject of accountability, which was a recurring topic in this block’s literature. Anagnostopoulos et al.’s (2013) chapters on test-based accountability highlight the trend in educational policy towards measuring, monitoring and regulating. Tracking my ‘performance’ at work in week 11 emphasised the potential shortcomings of using standardised measures to determine performance. Anagnostopoulos et al. (2013, p. 15) raise important questions such as ‘Who determines the tests and algorithms used to quantify student learning and teacher quality, who creates them, and who is left out of such decisions?’ This may not only be an issue for assessing students’ and teachers’ performance but also for other parts of education. Ozga (2016), for example, describes how a ‘shift away from ideas, possibilities and informed expert analysis in shaping the knowledge-governing relationship and towards the application of rules derived from recurring data patterns’ can create tensions amongst school inspectors.
Big data and developments in data-processing software have changed the educational governance landscape, replacing ‘slow-paced bureaucratic policy processes’ with practices that claim to ‘make all educational problems measurable, calculable and knowable, and thereby solvable at high speed’ (Williamson, 2017, p. 25). From what we have learned in the first two blocks of this course, it is important to consider issues such as ethics and privacy, data literacy and bias when relying on data for decision-making. The visualisations have, once again, highlighted how subjective and selective data collection and analysis can be.
References
Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds.) (2013). The Infrastructure of Accountability: Data use and the transformation of American education. Harvard Education Press.
Ozga, J. (2016). Trust in numbers? Digital education governance and the inspection process. European Educational Research Journal, 15(1), 69–81. doi:10.1177/1474904115616629
Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. Sage.