Governing Overview Reflections

Block 3: Summary

During the ‘governing’ block, I created visualisations on technology-enabled interactions, getting help while injured, and anxiety across three different categories of my life – personal, work, and school. The visualisations and readings focused my thoughts on three main themes with policymaking and governance in mind:

  1. The purpose of data: What is the purpose of the data, and is the data ‘good’?
  2. The context/value of data: How can the context or value of the data be better included in the outcome?
  3. The power of data: Who holds the power of data?

What is the purpose of the data and is the data ‘good’?

Policymaking and governance rely on data to provide insights, serve as evidence, and enhance transparency for the purpose of decision-making (Ozga, 2016). The ultimate goal is to know what is ‘good’ and what is ‘bad’, e.g. are the students learning, do they have the skills deemed necessary to advance, are the teachers effective, which schools are doing well, etc. Ironically, however, “Rather than empowering the people, the data may constrain what people know and how they think about their schools” (Anagnostopoulos et al., 2013). This is why getting to an answer on purpose, and on what counts as ‘good’ data, is important.

Note: the definition of ‘good’ for this discussion is what is useful and true. In an ideal scenario, this would also include data that does not infringe on someone’s privacy; however, certain private data points may be useful and true for policymaking.

As a result of a ‘need to know’ culture and pressure to create policy and governance, the process appears to start with the end result rather than starting with the data and, through analysis, finding an outcome regardless of ‘good’ or ‘bad’. While this mirrors the scientific process, i.e. stating a hypothesis, making a prediction, and testing to determine the outcome, the readings give a sense that iteration is limited when it comes to creating education policy. ‘Fast policy’ is the result of the increased number of actors (human and non-human) in education policymaking (Williamson, 2017). More data is being collected, enabled by the increased use of technology and improved infrastructure; however, the context seems to be forgotten as a game of telephone is played with the data after collection (Anagnostopoulos et al., 2013). The data travels through many actors and processes by the time it reaches those using it for policymaking and/or reporting. It is also invariably influenced by those doing the collecting (Ozga, 2016).

Williamson (2017) quotes Bowker (2008: 30), “A good citizen of the modern state is a citizen who can well be counted – along numerous dimensions, on demand.” This statement assumes all aspects of us as individuals can be quantified, yet this is not true. There are aspects of us as individuals that cannot be neatly quantified, defined, or categorised, as evidenced by my own attempt to track anxiety. As a result, determining what ‘good’ data is, is complex and needs iteration and agility. ‘Fast policy’ and the use of technology may enable this iteration, if we assume that policymakers are willing to be equally agile and change existing policy as new information becomes available. The ideal for many would be that the data serves the education system (and its policymaking and governance) rather than a political or material purpose, which is often the case (Prinsloo, 2020).

How can the context or value of the data be better included in the outcome?

Anagnostopoulos et al. (2013: 7) state, “Assigning students, teachers, and schools a numerical rating and classifying them as ‘failing’ or ‘effective’ says little about the causes of failure or what can or should be done about it and about school success and how to maintain it.” Context is important in understanding the data, but the context cannot always become a data point itself. Not all context is a quantifiable data point that can be added to, or understood by, a technology tool; examples include emotions and skills that are difficult to categorise neatly, like creativity and emotional intelligence.

In my own visualisations during this block, context became key to understanding my own data: simply looking at the data points without knowing that, for example, I had been injured one week would dramatically change the interpretation and outcome. Imagine if the data were collected on a student, but the student was unable to provide that data point because it wasn’t possible in the system, or available as a question. The policy created from these data points, which become an indicator of performance, would likely not be ideal.

The statement made by Anagnostopoulos et al. aligns well with this: “As they define what kind of knowledge and ways of thinking matter and who counts as ‘good’ teachers, students, and schools, these performance metrics shape how we practice, value, and think about education” (2013: 11).

Who holds the power of data?

The data now collected is controlled not only by governments, but also by non-governmental organisations such as private sector companies (Williamson, 2017). These organisations have increasing influence over education, as they have a seat at the table to decide what can be inputted into the systems, the research that should be done, who (or what) completes the analysis of the data, and who will have access to it.

Anagnostopoulos et al. (2013: 7) state, “Determining what kind of information about the nation’s students, teachers, and schools is collected and how it is processed, disseminated, and used, by whom, in what form, and for what purposes involve questions of power. They also reflect and privilege particular ways of thinking, particular values, and particular notions of worth.” What this highlights is that the student, the teacher, and the school that the data is collected on no longer hold the power over their data. The power is held by the non-governmental organisations and governments analysing and reporting on it. This was also a reason why I personally didn’t want to collect or highlight certain things in my own visualisations: as soon as the data has left my hands, the power over it has also left.

Taking the ‘infrastructural perspective’ approach (Anagnostopoulos et al., 2013), more time should be spent on identifying what data is collected for what purpose, as well as how it is collected and, ultimately, pushed upstream to the end consumer. This large-scale datafication process involves countless actors (human and non-human), and the outcomes are now often readily available to those far beyond the school where the data was collected (Williamson, 2017). Ultimately, there is a danger of a layer of abstraction: the data can become vague or general, be interpreted from numerous perspectives, and end up being used in ways that were not originally intended (Anagnostopoulos et al., 2013). This is a key point when thinking about policymaking and governance in education. The hope, nonetheless, is that the policies and governance enacted benefit those in the education system, rather than limit or hinder them in any way.



Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds) The Infrastructure of Accountability: Data use and the transformation of American education.

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp.69-81.

Prinsloo, P. 2020. Data frontiers and frontiers of power in (higher) education: a view of/from the Global South. Teaching in Higher Education, 25(4), pp.366-383.

Williamson, B. 2017. Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.


Tracking Anxiety

Anxiety over a week during lockdown

This week it felt like a natural step to track anxiety as the final visualisation. As a student, the time leading up to the end of the year can be filled with excitement about what’s next, but it can also be filled with stress and anxiety around studying for exams and doing well.

What I wanted to highlight visually in this visualisation was the weight and continuous cycle you can find yourself in once you become anxious about something… one thing can often lead to many others piling on. For me, the early part of the week was work focused and the end of the week more personally focused. Sunday scaries anyone?

When thinking about how this relates to governance, where does collecting data on anxiety fit? As a non-educator, my first instinct is that it is most useful to think about from both a learning and a teaching perspective.

One theme that has stood out in this governance block is that data is collected with the end result in mind. An outcome ‘should‘ prove or disprove something so that action (i.e. policy and governance) can be taken. It can be forgotten, however, that the data itself, in isolation, can be personal and individualistic. Anxiety, for example, is an emotion that likely has different meanings for different individuals. It’s difficult to track because we cannot determine its meaning in absolutes, as we can for a maths equation. This is one reason that, today, tracking anxiety may not be valuable in terms of governance and policy.

When combined for the purpose of analysis, the collection of data has a chance of becoming an average, and the average in turn is what the policy is based on. An important note is that something ambiguous like anxiety can have many causes. For example, while it may be fair to say that most students feel anxious about exams, students may feel that anxiety for very different reasons, e.g. a fear of failure, a desire to make parents proud, or even a recent breakup or other personal matter that has significantly impacted their ability to focus on studying.

This may or may not impact student grades. The student looking at their grade can relate to and understand the correlation. However, if those grades are used to measure a teacher’s performance, a school administrator would see the ‘average‘ picture. The assumption here is that there are enough students to provide data that lessens the extremes. Another reason I considered tracking anxiety is this past pandemic year: I can imagine anxiety had an overall impact on the education system, as students found themselves in an unknown and uncertain time, and that could have an impact on governance and policy.
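A minimal sketch of the point about averages, using entirely invented numbers and causes (a hypothetical 1–10 anxiety scale for five students before an exam):

```python
# Hypothetical anxiety scores (1-10) for five students before an exam.
# All numbers and causes are invented for illustration; each student's
# anxiety (or lack of it) has a different underlying reason.
scores = {
    "fear of failure": 9,
    "desire to make parents proud": 8,
    "recent breakup": 9,
    "well prepared": 2,
    "exam does not matter to them": 2,
}

average = sum(scores.values()) / len(scores)
print(f"average anxiety: {average}")  # average anxiety: 6.0
```

A policy built on the 6.0 average addresses none of the five underlying causes and describes none of the five students; the extremes on both ends disappear into the mean.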

If we could accurately quantify, describe, and standardise anxiety in order to collect data on it, this may prove valuable from a governance and policy perspective in the future, if we face a similar situation again. A likely scenario is using technology to do this, as artificial intelligence models continue to be created and trained for the purpose of tracking emotions.


Getting Help throughout the week

Last Sunday, I hurt my finger and as a result needed to wear a bandage through Friday of last week. In light of my predicament, I decided to track the number of times that I needed help with something in comparison to when I could still do something myself.

I’m usually not one to ask for help, so having an impaired finger stretched me outside of my comfort zone. I tracked this from three perspectives – personal, household, and work for five days.

  • The personal activities were limited to getting coffee/water, hair brushing, hand washing, and texting/calling.
  • The household activities were limited to dishes, laundry, taking the trash out, and mopping/sweeping.
  • Most of the work activities I could do by myself with the exception of sending the two parcels that I needed to send last week.

In reflection (and in reality), the daily activities that I could have tracked are countless, but some are ones that I don’t necessarily want to make public (like help getting dressed), connecting us to the privacy theme explored in the previous blocks.

When I was thinking about how to visualise the activities earlier in the week, I did a quick Google search and checked out the images tab for inspiration. One of the images that I stumbled upon was related to a marketing persona with a timeline of a user’s activity on a website. Looking at that detailed audit trail was part of the reason that I didn’t want to track every activity. On a certain level, it starts to feel creepy, just like your phone recommending a new friend on Facebook when you’ve only had a conversation about them with a friend…

The other thing that I wanted to explore was what I could do by myself versus what I needed help with. The idea was to spend some time reflecting on comparison, as one reason we collect data is for comparison (and ultimately ranking for decision-making around policy and governance), e.g. one class is doing better than another, one teacher has more engagement in their class, etc.

In particular, this week made me reflect on several points made in the Ozga (2016) reading:

  • What is ‘good’ data? Where could needing help fall on the spectrum for collecting and reflecting on good data?
  • Does needing help rank well or poorly if it ultimately achieves the same outcome, like the laundry being done?
  • If the context of being hurt was left out, how does this change the perception of the data?
  • If I was ranking how much, or the value, of the help that I received, how would I write the descriptions for ‘outstanding’, ‘good’, ‘needs improvement’, and ‘inadequate’?

Maybe a more useful way to have visualised this week’s data would have been a timeline, showing the trend of needing less help throughout the week, rather than categorisation? Would that make the data appear more ‘good’, or make it rank better?

These are just a few questions that I would have for someone responsible for collecting and visualising student data, if the goal was decision making, policy and governance.


Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1) pp.69-81


A Week of Technology-enabled Personal Interactions

This week I tracked the personal interactions I had with family, friends, and my partner through technology. In this case, technology means phone calls, FaceTime hangouts, and text messages.

My focus was to analyse how I interact with loved ones, and to keep this separate from the technology-enabled interactions that I have at work. Side note – if I were tracking interactions like Slack messages at work, we would likely need a small booklet of paper for the visual.

The interactions are very text-message heavy and light on phone calls. I used colours to represent who I was interacting with and symbols to denote the type of interaction.

I decided to place the symbols on three lines radiating out, symbolising the interactions radiating out from me, i.e. I was the one initiating the interactions (calling, initiating a FaceTime, or sending a text message). Each radiating line has slashes to represent day breaks, starting with Monday as the first day of the week. I chose not to track the specific time, as that data point seemed ‘too much’.

In reflection, these three lines with symbols could in reality represent anything – watching videos, engaging on online forums, sending tweets, etc. The only thing that ‘makes’ them interactions is the key of the visualisation. This highlighted the following:

Labelling, understanding the label, and determining the value of the data points is key when using a data set like this (i.e. a count) for policy and governance.

For example, there is nothing to denote whether these interactions were positive or negative. The symbols and colours only show that they occurred.

In this visualisation, the value of each interaction is missing.

For example, just because I used text messaging the most, does that mean it is the most valuable? And because I only made three phone calls, does that indicate that I don’t like phone calls?
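The count-without-value problem above can be sketched in a few lines (the log entries are invented; only the type of each interaction is recorded, as in my visualisation):

```python
from collections import Counter

# Hypothetical one-day interaction log: only the type of each
# interaction survives; who it was with, its content, and whether
# it was positive or negative are not recorded.
interactions = ["text", "text", "call", "text", "facetime", "text", "text"]

counts = Counter(interactions)
print(counts.most_common())  # [('text', 5), ('call', 1), ('facetime', 1)]
```

The tally says texting dominated the day, but it cannot say whether any of those five texts mattered more than the single call – exactly the value that is missing when a count alone feeds into policy.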

My conclusion from this week’s visualisation is that data is just that – data. Without context, it’s difficult to demonstrate value. Conversely, too much context may spill into bias. Culturally, we often try very hard to find labels and categories to fit everything into a neat box, e.g. the box-ticking exercise on any standardised test (age, gender, ethnicity, parents’ income level, etc.).

The ironic thing is that life isn’t a neat box. It’s messy. I’d argue education is the same – messy. Education is where you are meant to make mistakes, learn by doing, practice and constantly build upon what you know. The learning process is messy, yet it appears that we are trying to fit it into a box when ‘everything’ becomes data (datafication) for the sake of policy and governance.