Block 3 Reflection

The data imaginary, as termed by Beer (2018), refers to the marketed promises and potential of datafication and data analytics, alongside the fear of “missing out” or “acting too slowly”. The data imaginary is used to expand the boundaries of, or intensify, what is acceptable to datafy, while simultaneously reducing resistance and increasing adoption. These new, uncharted data territories are thought of as data frontiers (Beer, 2019; Prinsloo, 2020), and the expansion into these territories leads to the idea of data colonialism (Couldry and Mejias, 2020; Prinsloo, 2020).

In data colonialism, social data (much like land, people, and resources in historical colonialism) are viewed as “just there”, a “raw material” that can be extracted, appropriated, and commodified (Couldry and Mejias, 2020; Couldry and Mejias, 2019; Prinsloo, 2020). Making this colonization possible, through the data imaginary, is the promise that data are speedy, accessible, revealing, panoramic (data can see everything), prophetic (data can give insight and foresight), and smart (Beer, 2019; Prinsloo, 2020).

Education is one frontier that has been intensified over the last decade or two. New, large-scale data systems and infrastructures have been developed and installed by state governments, education corporations, and education institutions themselves to monitor, collect, analyze, forecast, and report data about schools, teachers, and students (Anagnostopoulos et al., 2013; Fontaine, 2016; Williamson, 2017). These data have become a key component in developing educational policy (Williamson, 2017).

At the heart of educational data are metrics of performance and productivity from education institutions and their staff, faculty, and students (Williamson, 2017). Closely linked to performativity is accountability: measures of effectiveness and efficiency focused on the quantifiable (Williamson, 2017; Fontaine, 2016). Accountability takes an instrumental view of learning and positions it as an output, allowing meaningful relationships to be drawn between inputs (e.g., funding, pedagogy, and curriculum) and outputs (Fontaine, 2016).

Educational policy that emphasizes performance and productivity reorients educators toward things that can be quantified, and quantified positively (Williamson, 2017). Standardized tests, for example, are a common source of data that promise to hold institutions accountable for student learning. Consequently, schools and educators focus on raising or maintaining high test scores, which can lead to “teaching to the test” by altering the curriculum (Fontaine, 2016).

Intertwined with the notions of datafication and data colonialism is neoliberalism, which further concentrates focus on measures of productivity, engagement, and inputs and outputs (Fawns et al., 2020; Ozga and Segerholm, 2015). Ozga and Segerholm (2015) and Ozga (2016) note that neoliberal policy emphasizes making performance data transparent to the public and comparable with other local, national, and international schools. These (inter)national data reports can result in competition and school ranking systems. Consequently, education systems can be pressured to improve their data (and rankings), which further prioritizes staff, faculty, and student data collection and policy intervention (Williamson, 2017).


Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R., 2013. Conclusion: The infrastructure of accountability: Tensions, implications and concluding thoughts. In: Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R. (eds.) The Infrastructure of Accountability: Data Use and the Transformation of American Education.

Beer, D., 2018. The data gaze: Capitalism, power and perception. Sage.

Couldry, N. and Mejias, U.A., 2019. Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), pp.336-349.

Couldry, N. and Mejias, U.A., 2020. The Costs of Connection: How Data Are Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.

Fawns, T., Aitken, G. and Jones, D., 2020. Ecological teaching evaluation vs the datafication of quality: Understanding education with, and around, data. Postdigital Science and Education, pp.1-18.

Fontaine, C. 2016. The Myth of Accountability: How Data (Mis)Use is Reinforcing the Problems of Public Education, Data and Society Working Paper 08.08.2016.

Ozga, J. and Segerholm, C., 2015. Neo-liberal agenda(s) in education. Governing by Inspection, pp.27-37.

Ozga, J., 2016. Trust in numbers? Digital education governance and the inspection process. European Educational Research Journal, 15(1), pp.69-81.

Prinsloo, P., 2020. Data frontiers and frontiers of power in (higher) education: a view of/from the Global South. Teaching in Higher Education, 25(4), pp.366-383.

Williamson, B., 2017. Digital education governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The Digital Future of Learning, Policy and Practice. Sage.

a week of measuring performance

At work, we are beginning to go through our employee evaluation period, and I want to explore the types of data that could be used to measure my performance. Standardized, quantifiable, and easily comparable data are prioritized in performance and accountability policies (Anagnostopoulos et al., 2013; Ozga, 2016; Williamson, 2017). I decided to use my Git commit history as a metric for my performance, as it satisfies the three characteristics above. At the end of each day, I recorded the total number of additions, deletions, and files modified. I did not record any contextual information about the modifications, as quantifiable data often require the removal of supplemental context. Using this data, the following visualization was developed.
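As a rough sketch of how such daily totals could be gathered automatically rather than by hand, the snippet below sums the output of git log --numstat for a local repository; the repository path, date range, and author filter are illustrative assumptions, not part of the original tracking process.

```python
# Sketch: tally additions, deletions, and file touches for one day's commits.
# Assumes a local Git repository; path, date, and author below are illustrative.
import subprocess

def daily_commit_stats(repo_path, since="midnight", author=None):
    """Sum additions, deletions, and files touched since `since` (git date syntax)."""
    cmd = ["git", "-C", repo_path, "log", "--numstat",
           "--pretty=format:", f"--since={since}"]
    if author:
        cmd.append(f"--author={author}")
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    totals = {"additions": 0, "deletions": 0, "files": 0}
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3:
            continue  # skip the blank separator lines between commits
        added, deleted, _filename = parts
        # Binary files report "-" instead of a count; treat those as zero.
        totals["additions"] += int(added) if added.isdigit() else 0
        totals["deletions"] += int(deleted) if deleted.isdigit() else 0
        totals["files"] += 1  # counts file touches, not unique files
    return totals

print(daily_commit_stats(".", since="midnight"))
```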

Like other types of performance and accountability data, the process of committing is susceptible to manipulation, allowing the data to be quantified positively. For example, when I make substantial changes to a file, I often duplicate the file and save it as filename_old while working on the original. If I do not finish the modifications right away, I will commit both files to the repository, artificially increasing the number of additions.

Reducing the history to quantifiable data and removing contextual information can result in misleading conclusions, as the data say little about the quality of the code written, the completeness of the project (which can often be difficult to assess), and the time needed to troubleshoot, find solutions, and test or play around with code. This could foster a culture that incentivizes poorly written and purposely lengthened code, which may not be noticeable to the administrators responsible for implementing these types of policies.


Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R., 2013. Conclusion: The infrastructure of accountability: Tensions, implications and concluding thoughts. In: Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R. (eds.) The Infrastructure of Accountability: Data Use and the Transformation of American Education.

Ozga, J., 2016. Trust in numbers? Digital education governance and the inspection process. European Educational Research Journal, 15(1), pp.69-81.

Williamson, B., 2017. Digital education governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The Digital Future of Learning, Policy and Practice. Sage.

another week of discord

This week I returned to Discord to track my conversations at work. During the first Discord data-tracking activity, I focused on recording who sent a message, when a message was sent, what type of space a message was going to (private or “public”), and how I viewed a message. For this activity, I focused on the context of the messages and noted whether a message was (i.e., could be perceived as) “on” or “off” task; whether the message contained a question, a file, or an image; and whether the message received emojis. Using this data, the following visualization was developed.


The shift to remote working has intensified and accelerated employers' use of surveillance software on their employees and, for many, the boundaries between personal and professional lives have been blurred, if not destroyed. Employee surveillance software isn't necessarily new; keyloggers and web traffic monitoring have been implemented in offices for quite some time. AI-powered software is quickly being adopted and advertised as able to provide deeper insights into employees' mental states and satisfaction, and can be used to schedule check-in/intervention meetings, identify areas of improvement (and success), or manage employee workloads. Data such as Discord messages are a prime target for these surveillance tools, as bots can be easily integrated into the platform to monitor employee communications in real time.
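To illustrate how little effort such monitoring takes, here is a minimal sketch of a logging bot, assuming the discord.py library (2.x), a bot token supplied via an environment variable, and a crude keyword list standing in for the far more sophisticated AI classification these tools advertise; the keywords, file name, and token variable are all illustrative.

```python
# Sketch: a Discord bot that logs every workplace message it can see.
# Assumes discord.py 2.x and a token in the DISCORD_BOT_TOKEN environment variable.
import csv
import os

import discord

OFF_TASK_WORDS = {"lunch", "weekend", "game"}  # hypothetical "off-task" markers

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author.bot:
        return  # ignore other bots (and this one)
    flagged = any(word in message.content.lower() for word in OFF_TASK_WORDS)
    # Append timestamp, author, channel, and a crude on/off-task label to a CSV.
    with open("message_log.csv", "a", newline="") as f:
        csv.writer(f).writerow([
            message.created_at.isoformat(),
            str(message.author),
            str(message.channel),
            "off-task" if flagged else "on-task",
        ])

client.run(os.environ["DISCORD_BOT_TOKEN"])
```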

There are significant limitations, however: language is complicated and nuanced, which often leads to misinterpretations and false conclusions. For example, last year a chess podcast was automatically removed by YouTube's auto-moderator due to the frequency of “black”, “white”, “attack”, and “dominates”, with the algorithm interpreting the conversation as potentially racist. In a professional setting, misinterpretations such as these could have significantly more serious consequences.

Within the realm of education, similar language-monitoring software has been suggested to assist instructors in grading and giving feedback on students' writing skills (argument, vocabulary, syntax, style, etc.) and to develop personalized curricula. Educational organizations such as ETS already have software ready for students and educators to evaluate student writing and language learning. Similar software could be integrated into discussion forums to analyze student questions and comments and provide “feedback” on student thinking, depth of understanding, and levels of engagement and satisfaction. These metrics, alongside other forms of assessment, could be used to set standards or policy and to evaluate teachers. Additionally, and this could be a bit too Orwellian, lectures given by teachers and student engagement levels (a combination of web traffic and geolocation) could be analyzed in real time and used for teacher performance evaluations (or to monitor the pace, difficulty, or content of the course).

a week of assessments

Last week we administered our lab skills assessments (LSAs) through our in-house assessment platform. While this is our fifth time administering the LSAs virtually, I thought it would be interesting to see the types of inquiries our teaching assistants raise during assessment periods and how long it takes the teaching lab staff to begin resolving those inquiries. The response time was recorded using the message timestamps from Slack. A few notes were recorded about the context of each message, and categories were developed while drafting the visualization.
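For transparency, the response-time calculation itself is simple; below is a small sketch of the arithmetic, assuming timestamps copied from Slack in a plain date-time format. The example inquiry and response times are hypothetical, not actual data from the assessment period.

```python
# Sketch: minutes between an inquiry and the first staff response,
# using timestamps copied from Slack. The example values are hypothetical.
from datetime import datetime

def minutes_between(inquiry_ts, response_ts, fmt="%Y-%m-%d %H:%M"):
    """Minutes elapsed between an inquiry and the first staff response."""
    t0 = datetime.strptime(inquiry_ts, fmt)
    t1 = datetime.strptime(response_ts, fmt)
    return (t1 - t0).total_seconds() / 60

# Hypothetical example: an inquiry at 09:14 answered at 09:21.
print(minutes_between("2021-03-08 09:14", "2021-03-08 09:21"))  # 7.0
```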


At a quick glance, the visualization demonstrates that the lab staff are generally quick to respond to inquiries from teaching assistants, and nearly half of the inquiries were to ensure a student had properly submitted their assessment. For many inquiries, the response from the lab staff included a resolution or answer, allowing teaching assistants to relay the information more quickly. However, as the visualization below indicates, the time until resolution was generally longer than the initial response time, which makes sense, as additional information may be needed and a conversation between multiple people emerges.

This data also highlights areas where the implementation of the assessments could improve, and consequently influence departmental or internal lab staff policy. For example, nearly a quarter of inquiries stemmed from the shared campus server hosting our platform reaching its maximum allocated connections. This information could be used to justify allocating funds to purchase a dedicated server, providing more control and reliability for our assessment platform. Similarly, inquiries identifying question mistakes or clarifying question wording could be reduced by requiring additional reviews from the lab staff – a policy that has already been decided on for the next term.