Categories
Overview Reflections

Final Visualisation Blog Summary

Considering data from the perspective of learning, teaching, and governance has been a useful exercise, in which I have reflected on the collection, analysis, and presentation of data in education. In the first block, I was reminded that data in isolation may not demonstrate learning, even though sometimes some data is better than no data (Brown, 2020). In the second block, it became clear that the datafication and commodification of education is changing the role of the teacher (Williamson et al., 2020). Lastly, in the third block, the power of data was apparent as the governance perspective shed light on the “questions of power” in relation to the type of data collected, how it is understood and communicated, and for what purposes (Anagnostopoulos et al., 2013: 7).

In the first block, I tracked music habits, Twitter notifications, phone usage, emails and questions. Each visualisation was bound by time, highlighting the passing of time through data in a way that became unexpectedly personal. When comparing the visualisations, I was reminded that “what counts as education when it comes to digital data is what can be counted” (Williamson, 2017: 46). For example, counting emails is simple; however, tracking emotional engagement with Twitter is complex because it can be fluid, and is not easily bound by time. The question is whether either demonstrates learning, and if so, whether one is preferable to the other. Working in technology, it is a good reminder that data is personal, needs context, and that not everything can be counted.

Furthermore, there should be careful consideration of the type of data collected and its purpose, as certain data impacts privacy (Bulger, 2016). As technology usage increases, data privacy concerns will become increasingly complex, because artificial intelligence and other technologies can be used to track a student’s every move. For example, facial recognition can be used to read and understand student emotions (Chan, 2021), and wearables augment the type of data points available about the human body while learning (Knox et al., 2019). In my personal data reflection in Week 4, I analysed the data collected by my iPhone. While I was pleased to see that I spent 2 hours exercising in the dashboard for my “quantified self” (Eynon, 2015), the phone only tracks app usage time, not fitness value, unless a wearable technology is integrated. From a learning perspective, this highlights that further reflection is needed to assess whether the “simple act of using numbers” does indeed demonstrate learning, or simply highlights that something happened, as the 2-hour block in my iPhone dashboard shows (Eynon, 2015).

In the second block, I tracked sleep, emotions, and distractions under the assumption that these impact student engagement. Behavioural data could be used for gamification and personalised, or adaptive, learning if artificial intelligence or wearable technology were integrated into learning platforms (van Dijck et al., 2018); however, these are not data tracked by learning platforms today, and they raise data privacy concerns. In my technology experience, engagement is tracked by mouse clicks, time, comments, etc. From a teaching perspective, this highlights the importance of selecting valuable data points, because a dashboard can limit the view that a teacher has, and in turn, impact their perspective of students (Williamson et al., 2020). For example, my visualisations provide a limited view of what impacts my ability to engage. Additionally, the dashboard could unknowingly limit teaching methods rather than positively impact them (Brown, 2020). As highlighted by Bulger (2016: 4), in classrooms, teachers leverage learner-centered instruction and personalise teaching based on “interpersonal cues… subject matter expertise… knowledge of how people learn, and knowledge of each student, to determine individual needs, adjusting their lessons in response to questions and behaviors.” In the remote classroom, this is not as easily accomplished. The teacher needs dashboards to bridge the gap, from both a learning and a teaching perspective.

Similarly, this limited perspective may be transferred to teachers if the data is used for performance purposes, because the data can become “proxy measures of the performance of staff, courses, schools, and institutions as a whole” (Williamson et al., 2020: 354). A distinction is needed between the data collected to demonstrate learning and the data collected to demonstrate teaching methods. This is an interesting consideration when remembering that the data actors are not always educators, but technology companies and other non-governmental organisations (Williamson, 2017). From a technology perspective, more data is an easy upsell, which translates to additional revenue and happy shareholders. From a teaching perspective, more data is not always beneficial when teachers may lack the necessary skills to analyse dashboards and recognise bias in the algorithm that produced them (Brown, 2020).

In the third block, I tracked technology-enabled interactions, getting help, and anxiety with a focus on the purpose, value, and power of data through the governance perspective. The physical act of collecting data and creating the visualisation demonstrated all three, as it became the beginning of understanding the complex layer of abstraction that influences governance, which in turn is pushed back down to teaching and learning. As such, learning, teaching, and governance become a cycle of, and for, data. How data is collected is invariably influenced by its collectors (Ozga, 2016), but even more important is acknowledging the number of actors (human and non-human) that interact with the data before it becomes a performance metric, or a report (Anagnostopoulos et al., 2013). Technology improvements and actors like the ‘Big Five tech companies’ (van Dijck et al., 2018) have enabled the datafication and commodification of education, giving rise to ‘fast policy’ and influence over the education system (Williamson, 2017). This results in “questions of power” from the initial collection through to the dissemination of the data (Anagnostopoulos et al., 2013: 7).

In summary, a simplistic view of data in education is that it provides an opportunity to demonstrate learning, assign a value to teaching, and serve as insight or transparency for governance (Ozga, 2016). The visualisation task enabled a view into data for the purpose of learning, teaching, and governance, highlighting that this simplistic view is far from the truth. The data process – from collection to dissemination – most importantly highlighted the separation of the student from the data and the risk of generalisation and unintended perspectives (Anagnostopoulos et al., 2013). Lastly, it continuously reinforced that what matters is what can be counted (Williamson, 2017), and ultimately, that data impacts “how we practice, value, and think about education” because it allows for the categorisation of the good and the bad (Anagnostopoulos et al., 2013: 11).

Word Count: 1008 without citations, 1087 with citations

———————————————————————–

Sources

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In, Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.

Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Chan, Milly. (2021, February 21). This AI reads children’s emotions as they learn. CNN Business. Retrieved from https://edition.cnn.com

Eynon, R. 2015. The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4), pp. 407-411. DOI: 10.1080/17439884.2015.1100797

Knox, J., Williamson, B. & Bayne, S. 2019. ‘Machine behaviourism: Future visions of “learnification” and “datafication” across humans and digital technologies’. Learning, Media and Technology, 45(1), pp. 1-15.

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp. 69-81.

van Dijck, J., Poell, T., & de Waal, M. 2018. Chapter 6: Education, In The Platform Society, Oxford University Press

Williamson, B. 2017. Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

Williamson, B., Bayne, S. & Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.


Block 3: Summary

During the ‘governing’ block, I created visualisations on technology-enabled interactions, getting help while injured, and anxiety sparked from three different categorisations of my life – personal, work, and school. The visualisations and readings focused my thoughts on three main themes with policymaking and governance in mind:

  1. The purpose of data: What is the purpose of the data, and is the data ‘good’?
  2. The context/value of data: How can the context or value of the data be better included in the outcome?
  3. The power of data: Who holds the power of data?

What is the purpose of the data and is the data ‘good’?

Policymaking and governance are reliant on data to provide insights, serve as evidence, and enhance transparency for the purpose of decision-making (Ozga, 2016). The ultimate goal is to know what is ‘good’ and what is ‘bad’, e.g. are the students learning, do they have the skills deemed necessary to advance, are the teachers effective, which schools are doing well, etc. Ironically, however, “Rather than empowering the people, the data may constrain what people know and how they think about their schools” (Anagnostopoulos et al., 2013). This is why answering the question of purpose, and of what counts as ‘good’ data, is important.

Note: the definition of ‘good’ for this discussion is what is useful and true. In an ideal scenario, this would also include data that does not infringe on anyone’s privacy; however, certain private data points may be useful and true for policymaking.

As a result of a ‘need to know’ culture and pressure to create policy and governance, the process appears to start with the end result, rather than starting with the data and, through analysis, finding an outcome regardless of ‘good’ or ‘bad’. While this mirrors the scientific process, i.e. stating a hypothesis, making a prediction, and testing to determine the outcome, the readings give a sense that the process of iteration is limited when it comes to creating education policy. ‘Fast policy’ is the result of the increased number of actors (human and non-human) in education policymaking (Williamson, 2017). More data is being collected, enabled by the increased use of technology and improved infrastructure; however, the context seems to be forgotten as a game of telephone is played with the data after collection (Anagnostopoulos et al., 2013). The data collected travels through many actors and processes by the time it reaches those using it for policymaking and/or reporting. It is also invariably influenced by those doing the collecting (Ozga, 2016).

Williamson (2017) quotes Bowker (2008: 30), “A good citizen of the modern state is a citizen who can well be counted – along numerous dimensions, on demand.” This statement assumes all aspects of us as individuals can be quantified, yet this is not true. There are aspects of us as individuals that cannot be neatly quantified, defined, or categorised, as evidenced by my own attempt to track anxiety. As a result, determining what ‘good’ data is, is complex, and requires iteration and agility. ‘Fast policy’ and the use of technology may enable this iteration, if policymakers are willing to be equally agile and change existing policy as new information becomes available. The ideal for many would be that the data serves the education system (and its policymaking and governance) rather than a political or material purpose, which is often the case (Prinsloo, 2020).

How can the context or value of the data be better included in the outcome?

Anagnostopoulos et al. (2013: 7) state, “Assigning students, teachers, and schools a numerical rating and classifying them as ‘failing’ or ‘effective’ says little about the causes of failure or what can or should be done about it and about school success and how to maintain it.” Context is important in understanding the data, but the context cannot always become a data point itself. For example, not all context is a quantifiable data point that can be added to, or understood by, a technology tool. Examples include emotions and skills that are difficult to categorise neatly, like creativity and emotional intelligence.

In my own visualisations during this block, context became key to understanding my own data, since looking at the data points without knowing that I had, for example, been injured one week would dramatically change the interpretation and outcome. Imagine if the data was collected on a student, but the student was unable to provide that data point because it wasn’t possible in the system, or available as a question. The policy created from these data points, which become an indicator of performance, would likely not be ideal.

The statement made by Anagnostopoulos et al. aligns well with this: “As they define what kind of knowledge and ways of thinking matter and who counts as ‘good’ teachers, students, and schools, these performance metrics shape how we practice, value, and think about education” (2013: 11).

Who holds the power of data?

The data that is now collected is controlled not only by governments, but also by non-governmental organisations like private sector companies (Williamson, 2017). These non-governmental organisations have increasing influence over education, as they have a seat at the table to decide what can be inputted into the systems, what research should be done, who (or what) completes the analysis of the data, and who will have access to it.

Anagnostopoulos et al. (2013: 7) state, “Determining what kind of information about the nation’s students, teachers, and schools is collected and how it is processed, disseminated, and used, by whom, in what form, and for what purposes involve questions of power. They also reflect and privilege particular ways of thinking, particular values, and particular notions of worth.” What this highlights is that the students, teachers, and schools that the data is collected on no longer hold the power over their data. The power is held by the non-governmental organisations and governments who are analysing and reporting on the data. This was also a reason why I personally didn’t want to collect or highlight certain things in my own visualisations. As soon as the data left my hands, so did the power over it.

Taking the ‘infrastructural perspective’ approach (Anagnostopoulos et al., 2013), more time should be spent on identifying what data is collected for what purpose, as well as how it is collected and, ultimately, pushed upstream to the end consumer. This large-scale datafication process involves countless actors (human and non-human), and the outcomes are now often readily available to those far beyond the school where the data was collected (Williamson, 2017). Ultimately, there is a danger in this layer of abstraction, as the data can become vague or general, be interpreted from numerous perspectives, and end up being used in ways that were not originally intended (Anagnostopoulos et al., 2013). This is a key point when thinking about policymaking and governance in education. The hope, nonetheless, is that the policies and governance enacted benefit those in the education system, rather than limit or hinder them in any way.

————————————————-

Sources:

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In, Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1) pp.69-81

Prinsloo, P. 2020. Data frontiers and frontiers of power in (higher) education: a view of/from the Global South. Teaching in Higher Education, 25(4), pp. 366-383.

Williamson, B. 2017. Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.


Block 2: Summary

In the last block, we focused on teaching with data. My goal was to consider the data collection and visualisation through the perspective of an educator because my professional life is devoted to platform selling and creating dashboards for platform users. I also wanted to understand a bit more about the perspective of educators on what data is important and why.

In this block, the themes that emerged for me include:

  • “Some data was better than no data – sometimes” (Brown, 2020)
  • It’s important to know who the big players are, and to dig deeper into why they may want to play in the education space (van Dijck et al., 2018)
  • The data points collected are often behavioural and can be used for adaptive learning, but they may not always correlate directly with learning (van Dijck et al., 2018)
  • The data going in affects the outcome of the algorithms. Do educators have the skills and knowledge to determine how biased an algorithm may be and how to adapt their dashboards to it (Brown, 2020)?
  • The trend towards more datafication and commodification of education is changing the role of the teacher and perhaps how they are evaluated (Williamson et al., 2020).

In my own visualisations, I tracked sleep, emotions, and distractions. These, I believed, were things the teacher on the other side of a Zoom screen might want to be aware of, as they could impact student engagement. The data points may give insight into the student’s well-being, but they may also infringe upon students’ data privacy, as highlighted by van Dijck et al. (2018), because they would require minute-by-minute tracking of the students.

Working in technology, I have an assumed trust of certain players and distrust of others. In reflection, I asked myself:

If I were an educator, would I need these data points to influence my lesson plan, would I see them as superfluous, or would they change the way that I teach and ‘know’ my students?

The key takeaway here was that while I am very conscious of how I am tracked online through the use of cookies and the apps I use, I hadn’t taken the same level of data privacy into account from an education perspective. While some data may be better than none (Brown, 2020), does knowing whether a student slept well, or is anxious, radically change the lesson plan, or the way I would teach? Moreover, would I have the skill set needed to critically understand the dashboard and adapt accordingly, or would the data unknowingly limit my teaching methods (Brown, 2020)?

The behavioural data points may assist from a gamification standpoint and lead to personalised, or adaptive, learning, as highlighted by the education examples from van Dijck et al. (2018); however, we have to think critically about which behavioural data points actually correlate with engagement and, ultimately, learning. From understanding how the majority of platforms are configured, I can attest that how we track data is limited. For example, engagement is likely tracked from a simple mouse click. You can track that a video’s play button was clicked and at what time the video was stopped, but unless you are videotaping the user’s face as they watch the video, you don’t know what happened after they hit ‘play’.
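To make concrete how thin this click-stream signal is, here is a minimal sketch. The event names and structure are entirely hypothetical, not any real platform’s API; the point is that a play/pause log can only ever yield elapsed play time, never attention:

```python
from dataclasses import dataclass

# Hypothetical player event log: names and fields are illustrative,
# not taken from any real learning platform.
@dataclass
class PlayerEvent:
    action: str       # "play" or "pause"
    timestamp: float  # seconds since the session started

def watched_seconds(events: list) -> float:
    """Sum the intervals between each 'play' and the following 'pause'.

    This elapsed play time is the only 'engagement' a click stream can
    support; whether anyone actually watched is unknowable from it.
    """
    total, play_started = 0.0, None
    for event in events:
        if event.action == "play":
            play_started = event.timestamp
        elif event.action == "pause" and play_started is not None:
            total += event.timestamp - play_started
            play_started = None
    return total

log = [PlayerEvent("play", 0.0), PlayerEvent("pause", 90.0),
       PlayerEvent("play", 120.0), PlayerEvent("pause", 150.0)]
print(watched_seconds(log))  # 120.0
```

A dashboard built on this would report the same 120 seconds of ‘engagement’ whether the student took notes throughout or left the room at second one – exactly the gap between what is counted and what is learned.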

Taking this a step further, why would we want to track every action? Well, from the technology perspective, you need to find a way to keep selling software. More data equals product enhancements and new technologies, which equals more revenue and happy shareholders. The data collected can also be sold for profit. At first glance, it is easy to trust the ‘Big Five tech companies’ (van Dijck et al., 2018). The marketing places a positive spin on the data collection, product enhancements, and new solutions. It’s only when the data becomes creepy (e.g. speaking about something, only for it to show up as an ad on Facebook two minutes later), or there’s a breach, that most become aware they’re being tracked and have a problem with it. Do we want children to be tracked every minute from the moment they enter the school system? Personally, I would hate it. Having old pictures and memories show up on Facebook is already more than enough. The readings and visualisations this week have made me reconsider whether I would want this tracked by anyone other than myself.

Lastly, we should consider how the data collected and the dashboards impact the role of the teacher and how they are evaluated (Williamson et al., 2020). As highlighted by Williamson et al. (2020), the data points can become “proxy measures of the performance of staff, courses, schools, and institutions as a whole”. But is education a place where we should focus on customer satisfaction? Few K-12 students would be able to separate their anger at a bad grade, possibly caused by their own lack of preparation, from their satisfaction with how the teacher taught the material and the teacher’s skill as an educator.

Sources:

Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

van Dijck, J., Poell, T., & de Waal, M. 2018. Chapter 6: Education, In The Platform Society, Oxford University Press

Williamson, B., Bayne, S. & Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.


Block 1 Summary

Over the last four weeks, I have tracked my music habits, Twitter notifications, phone usage, emails, and questions.

Each data collection has been bound by a sense of time, in that each visualisation had a foundation in time (a time interval, or a day of the week). The visualisations have been structured, used common symbols, and included the elapsed time interval to show progression through the week.

From a personal perspective, the data that I collected reinforced what I already knew about myself… and along the same lines, the visualisations highlighted aspects of my personality. The biggest observation is that completing these visualisations became a moment of pause and self-reflection. In those moments of reflection, the data points in context became more personal than I could have imagined.

This reflection, especially around my own personality shining through in the visualisation, comes as a direct result of sitting with the data and producing the visualisation by hand. The phone usage data that I analysed through my iPhone dashboard did not have the same personality. It was simplified down to standard charts and graphs. The data could have been anyone’s data. In a way, I was anonymous when looking at that dashboard, even though one could argue that a phone is now one of life’s most personal possessions.

This exercise has changed my perspective on the relationship between data and learning in that it has reminded me that people are often the source of the data points; the data that I tracked over the past four weeks put me at the centre, as the source of the data.

Over the course of time (and the introduction of technology), we have been desensitised by continuous streams of statistics and data. This desensitisation likely only makes us pause to think ‘huh, that’s interesting/sad/exciting/etc’ for a split second, but rarely forces us to take in the true meaning or impact of the data.

COVID-19 is the harshest daily reminder of this. Since January 2020, we’ve had a continuous stream of data points related to COVID-19. At first, fear increased alongside the case count and number of deaths. Everywhere you looked in the media, there was a story of someone’s daughter, son, wife, husband, parent, grandparent, teacher, or colleague. Today, 13 months later, we see significantly fewer of these personal stories, and we have become desensitised because we had to find a way to cope and continue.

The readings for Block 1 have highlighted that when it comes to the relationship between data and learning for students, it’s vital to collect and provide data that will demonstrate learning. Our discussions on Twitter demonstrated that many questions around learning analytics force us back to the question – “how do we define ‘learning’?” We’re reminded that the definition of learning is multi-faceted, and that the data collection intimately linked to it should serve a purpose. As highlighted by Bulger (2016), the collection and use of student data may infringe upon the student’s right to privacy. Additionally, we have to consider that a data point may highlight that a student grasped a concept, but only in context.

For example, a student may have answered a multiple choice question correctly, but without evidence, will we know if the student answered correctly, or guessed correctly? Submitting calculations for a maths question is easy proof, but proof of understanding the theme of a novel, or of having emotional intelligence, is not as easy.

If a student is using data-driven technology and happens to guess right several times in a row, this may lead them down a path that was not intended. We have to ask if the technology includes the ability to course correct, and if so, how easily (or quickly) it can do so. Which data points in this scenario would help identify the need to course correct? These would not necessarily be the same data points as the ones collected to prove understanding or learning of a concept.

In summary, the collection of data, its use, and, ultimately, the questions that it is trying to answer are complex. The exercise has forced me to take a step out of my technology profession (and technology-led life).

It’s been a healthy reminder to pause and reflect on what data needs to be collected, what it could (and does not) demonstrate, and why it is being collected. Most importantly, this is because behind the data point, there is a student.

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf


First Data Visualisation

Last week, I decided to track my music habits. Music is something that I listen to every day. It keeps me focused, lifts me up when I need a mood or energy boost, and often is selected based on what I’m doing.

Throughout the week as I was adding the data points, I kept coming back to the following in Chapter 2 of Big data in education: the digital future of learning, policy and practice:

From the eighteenth-century perspective, ‘data are apparently before the fact: they are the starting point for what we know, who we are, and how we communicate’, and were often perceived as transparent, self-evident, neutral and objective, ‘the fundamental stuff of truth itself’ (Gitelman and Jackson 2013: 2-3).

Williamson 2017 (p. 29)

Music, in the same way as data is described above, is what I know, who I am, and how I communicate with myself. What I was not expecting was that the visualisation of something that was meant to be ‘transparent’ or ‘objective’ would feel personal. When looking at the visualisation, I’m looking at what I needed, or how I felt, at a certain point in time through the perspective of music.

In the uploaded image, I tracked which device I listened on and the four playlists that were on rotation this week. I have Spotify on every device I own. I also decided to include a data point for any other time that I was wearing my headphones and not listening to music – when I sat in meetings.

In looking at this image and analysing the week, I find that almost 50% of my days last week were filled with meetings, and I wear my headphones more than I probably should… I also rely heavily on faster paced music to get me motivated for a workout, or when I need to get something done. Weekends, however, were filled with slower paced music while I was cooking brunch or cleaning up around the flat.

During working hours, I rely on my headphones. Alexa is rarely, if ever playing music as I work in the same room as my partner. At night, she is queen. My headphones and playlists are a way for me to transport myself to somewhere completely different when my partner is on a call (and I’m not).

As I couldn’t completely tear myself away from my list-focused and schedule-oriented self, I decided to include a Y and Z-axis for the days of the week and rough waking hours to make the visualisation easier to read (as well as track). I knew that I would struggle to remember to jot down when I was listening to music, so I set an alarm on my phone three times a day to help me get into the habit. Maybe by the end of these next 9 weeks, data tracking will become a habit!

Looking through the Dear Data visualisations once more, I am amazed at how creative Giorgia and Stefanie are. It took me a while to come up with the symbols for meetings, phone, computer, and Alexa. I’m secretly hoping this exercise also sparks more creativity as I think through the data over the course.


Hello world!

Welcome to Critical Data and Education.

This is the beginning of a data journey. Over the next few months, I’ll be diving into the data collection process and attempting to make sense of data as it relates to education.