Categories
Overview Reflections

Final Visualisation Blog Summary

Considering data from the perspectives of learning, teaching, and governance has been a useful exercise, in which I have reflected on the collection, analysis, and presentation of data in education. In the first block, I was reminded that data in isolation may not demonstrate learning, even though sometimes some data is better than no data (Brown, 2020). In the second block, it became clear that the datafication and commodification of education is changing the role of the teacher (Williamson et al., 2020). Lastly, in the third block, the power of data was apparent as the governance perspective shed light on the “questions of power” in relation to the type of data collected, how it is understood and communicated, and for what purposes (Anagnostopoulos et al., 2013: 7).

In the first block, I tracked music habits, Twitter notifications, phone usage, emails and questions. Each visualisation was bound by time, highlighting the passing of time through data in a way that became unexpectedly personal. When comparing the visualisations, I was reminded that “what counts as education when it comes to digital data is what can be counted” (Williamson, 2017: 46). For example, counting emails is simple; however, tracking emotional engagement with Twitter is complex because it can be fluid, and is not easily bound by time. The question is whether either demonstrates learning, and if so, whether one is preferable to the other. Working in technology, it is a good reminder that data is personal, needs context, and that not everything can be counted.

Furthermore, there should be careful consideration of the type of data collected and its purpose, as certain data impacts privacy (Bulger, 2016). As technology usage increases, data privacy concerns will become increasingly complex, because artificial intelligence and other technologies can be used to track a student’s every move. For example, facial recognition can be used to read and understand student emotions (Chan, 2021), and wearables augment the type of data points available about the human body while learning (Knox et al., 2019). In my personal data reflection in Week 4, I analysed the data collected by my iPhone. While I was pleased to see in the dashboard for my “quantified self” (Eynon, 2015) that I spent 2 hours exercising, the phone only tracks app usage time, not the fitness value, which would require integrating a wearable. From a learning perspective, this highlights that further reflection is needed to assess whether the “simple act of using numbers” does indeed demonstrate learning, or simply highlights that something happened, as the 2-hour block in my iPhone dashboard shows (Eynon, 2015).

In the second block, I tracked sleep, emotions, and distractions under the assumption that these impact student engagement. Behavioral data could be used for gamification and personalised, or adaptive, learning if artificial intelligence or wearable technology were integrated into learning platforms (van Dijck et al., 2018); however, these data are not tracked by learning platforms today, and they raise data privacy concerns. In my technology experience, engagement is tracked by mouse clicks, time, comments, etc. From a teaching perspective, this highlights the importance of selecting valuable data points, because a dashboard can limit the view that a teacher has and, in turn, impact their perspective of students (Williamson et al., 2020). For example, my visualisations provide a limited view of what impacts my ability to engage. Additionally, the dashboard could unknowingly limit teaching methods rather than positively impact them (Brown, 2020). As highlighted by Bulger (2016: 4), in classrooms, teachers leverage learner-centered instruction and personalise teaching based on “interpersonal cues… subject matter expertise… knowledge of how people learn, and knowledge of each student, to determine individual needs, adjusting their lessons in response to questions and behaviors.” In the remote classroom, this is not as easily accomplished; the teacher needs dashboards to bridge the gap from both a learning and a teaching perspective.

Similarly, this limited perspective may be transferred to teachers if the data is used for performance purposes, because the data can become “proxy measures of the performance of staff, courses, schools, and institutions as a whole” (Williamson et al., 2020: 354). A distinction is needed between the data collected to demonstrate learning and the data collected to demonstrate teaching methods. This is an interesting consideration when remembering that the data actors are not always educators, but technology companies and other non-governmental organisations (Williamson, 2017). From a technology perspective, more data is an easy upsell, which translates to additional revenue and happy shareholders. From a teaching perspective, more data is not always beneficial when teachers may lack the necessary skills to analyse dashboards and recognise bias in the algorithm that produced them (Brown, 2020).

In the third block, I tracked technology-enabled interactions, getting help, and anxiety with a focus on the purpose, value, and power of data through the governance perspective. The physical act of collecting data and creating the visualisation demonstrated all three, as it became the beginning of understanding the complex layer of abstraction that influences governance, which in turn is pushed back down to teaching and learning. As such, learning, teaching, and governance become a cycle of, and for, data. How data is collected is invariably influenced by the collectors (Ozga, 2016), but even more important is acknowledging the number of actors (human and non-human) that interact with the data before it becomes a performance metric or a report (Anagnostopoulos et al., 2013). Technology improvements and actors like the ‘Big Five tech companies’ (van Dijck et al., 2018) have enabled the datafication and commodification of education, giving rise to ‘fast policy’ and influence over the education system (Williamson, 2017). This results in “questions of power” from the initial collection through to the dissemination of the data (Anagnostopoulos et al., 2013: 7).

In summary, a simplistic view of data in education is that it provides an opportunity to demonstrate learning, assign a value to teaching, and serve as insight or transparency for governance (Ozga, 2016). The visualisation task enabled a view into data for the purposes of learning, teaching, and governance, highlighting that this simplistic view is far from the truth. Most importantly, the data process – from collection to dissemination – highlighted the separation of the student from the data and the risk of generalisation and unintended perspectives (Anagnostopoulos et al., 2013). Lastly, it continuously reinforced that what counts is what can be counted (Williamson, 2017), and ultimately, that data impacts “how we practice, value, and think about education” because it allows for the categorisation of the good and the bad (Anagnostopoulos et al., 2013: 11).

Word Count: 1008 without citations, 1087 with citations

———————————————————————–

Sources

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In, Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.

Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Chan, M. 2021. This AI reads children’s emotions as they learn. CNN Business, 21 February. Available: https://edition.cnn.com

Eynon, R. 2015. The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4), pp. 407-411. DOI: 10.1080/17439884.2015.1100797

Knox, J., Williamson, B. & Bayne, S. 2019. Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), pp. 1-15.

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp. 69-81.

van Dijck, J., Poell, T., & de Waal, M. 2018. Chapter 6: Education, In The Platform Society, Oxford University Press

Williamson, B. 2017. Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

Williamson, B., Bayne, S. & Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.

Categories
Governing Overview Reflections

Block 3: Summary

During the ‘governing’ block, I created visualisations on technology-enabled interactions, getting help while injured, and anxiety sparked from three different categorisations of my life – personal, work, and school. The visualisations and readings focused my thoughts on three main themes with policymaking and governance in mind:

  1. The purpose of data: What is the purpose of the data and is the data ‘good’?
  2. The context/value of data: How can the context or value of the data be better included in the outcome?
  3. The power of data: Who holds the power of data?

What is the purpose of the data and is the data ‘good’?

Policymaking and governance rely on data to provide insights, serve as evidence, and enhance transparency for the purpose of decision-making (Ozga, 2016). The ultimate goal is to know what is ‘good’ and what is ‘bad’, e.g. are the students learning, do they have the skills deemed necessary to advance, are the teachers effective, which schools are doing well, etc. Ironically, however, “Rather than empowering the people, the data may constrain what people know and how they think about their schools” (Anagnostopoulos et al., 2013). This is why getting to the answer of purpose, and of what counts as ‘good’ data, is important.

Note: the definition of ‘good’ for this discussion is what is useful and true. In an ideal scenario, this would also include data that does not infringe on someone’s privacy; however, certain private data points may be useful and true for policymaking.

As a result of a ‘need to know’ culture and pressure to create policy and governance, the process appears to start with the end result, rather than starting with the data and, through analysis, finding an outcome regardless of whether it is ‘good’ or ‘bad’. While this resembles the scientific process, i.e. stating a hypothesis, making a prediction, and testing to determine the outcome, the readings give a sense that iteration is limited when it comes to creating education policy. ‘Fast policy’ is the result of the increased number of actors (human and non-human) in education policymaking (Williamson, 2017). More data is being collected, enabled by the increased use of technology and improved infrastructure; however, the context seems to be forgotten as a game of telephone is played with the data after collection (Anagnostopoulos et al., 2013). The data collected travels through many actors and processes by the time it reaches those using it for policymaking and/or reporting. It is also invariably influenced by those doing the collecting (Ozga, 2016).

Williamson (2017) quotes Bowker (2008: 30), “A good citizen of the modern state is a citizen who can well be counted – along numerous dimensions, on demand.” This statement assumes all aspects of us as individuals can be quantified, yet this is not true. There are aspects of us as individuals that cannot be neatly quantified, defined, or categorised, as evidenced by my own attempt to track anxiety. As a result, determining what ‘good’ data is, is complex and needs iteration and agility. ‘Fast policy’ and the use of technology may enable this iteration, assuming policymakers are willing to be equally agile and change existing policy as new information becomes available. The ideal for many would be that the data serves the education system (and its policymaking and governance) rather than a political or material purpose, which is often the case (Prinsloo, 2020).

How can the context or value of the data be better included in the outcome?

Anagnostopoulos et al. (2013: 7) state, “Assigning students, teachers, and schools a numerical rating and classifying them as ‘failing’ or ‘effective’ says little about the causes of failure or what can or should be done about it and about school success and how to maintain it.” Context is important in understanding the data, but the context cannot always become a data point itself. For example, not all context is a quantifiable data point that can be added to, or understood by, a technology tool. Examples include emotions and skills that are difficult to categorise neatly, like creativity and emotional intelligence.

In my own visualisations during this block, context became key to understanding my own data: simply looking at the data points without knowing that I had, for example, been injured one week would dramatically change the interpretation and outcome. Imagine if the data were collected on a student, but the student was unable to provide that data point because it wasn’t possible in the system, or available as a question. The policy created from these data points, which become an indicator of performance, would likely not be ideal.

The statement made by Anagnostopoulos et al. aligns well with this: “As they define what kind of knowledge and ways of thinking matter and who counts as ‘good’ teachers, students, and schools, these performance metrics shape how we practice, value, and think about education” (2013: 11).

Who holds the power of data?

The data that is now collected is not controlled only by government, but also by non-governmental organisations such as private sector companies (Williamson, 2017). These non-governmental organisations have increasing influence over education, as they have a seat at the table to decide what can be inputted into the systems, what research should be done, who (or what) completes the analysis of the data, and who will have access to it.

Anagnostopoulos et al. (2013: 7) state, “Determining what kind of information about the nation’s students, teachers, and schools is collected and how it is processed, disseminated, and used, by whom, in what form, and for what purposes involve questions of power. They also reflect and privilege particular ways of thinking, particular values, and particular notions of worth.” What this highlights is that the students, teachers, and schools that the data is collected on no longer hold the power over their data. The power is held by the non-governmental organisations and governments analysing and reporting on the data. This is also a reason why I personally didn’t want to collect or highlight certain things in my own visualisations. As soon as the data has left my hands, the power over it has also left.

Taking the ‘infrastructural perspective’ approach (Anagnostopoulos et al., 2013), more time should be spent on identifying what data is collected for what purpose, as well as how it is collected and, ultimately, pushed upstream to the end consumer. This large-scale datafication process involves countless actors (human and non-human), and the outcomes are now often readily available to those far beyond the school where the data was collected (Williamson, 2017). Ultimately, there is a danger of a layer of abstraction, as the data can become vague or general, be interpreted from numerous perspectives, and end up being used in ways that were not originally intended (Anagnostopoulos et al., 2013). This is a key point when thinking about policymaking and governance in education. The hope, nonetheless, is that the policies and governance enacted benefit those in the education system, rather than limit or hinder them in any way.

————————————————-

Sources:

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In, Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1) pp.69-81

Prinsloo, P. 2020. Data frontiers and frontiers of power in (higher) education: a view of/from the Global South. Teaching in Higher Education, 25(4), pp. 366-383.

Williamson, B. 2017. Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

Categories
Governing

Tracking Anxiety

Anxiety over a week during lockdown

This week it felt like a natural step to track anxiety as the final visualisation. As a student, the time leading up to the end of the year can be filled with excitement about what’s next, but it can also be filled with stress and anxiety around studying for exams and doing well.

What I wanted to highlight visually in this visualisation was the weight and continuous cycle you can find yourself in once you become anxious about something… one thing can often lead to many others piling on. For me, the early part of the week was work-focused and the end of the week more personally focused. Sunday scaries, anyone?

When thinking about how this relates to governance, is collecting data on anxiety useful? As a non-educator, my first instinct is that it is more useful to think about from both a learning and a teaching perspective.

One theme that has stood out in this governance block is that data is collected with the end result in mind. An outcome ‘should’ prove or disprove something so that action (i.e. policy and governance) can be created. It can often be forgotten, however, that the data itself, in isolation, can be personal and individualistic. Anxiety, for example, is an emotion that likely has different meanings for different individuals. It is difficult to track because we cannot determine its meaning in absolutes, as we can for a maths equation. This is one reason why, today, tracking anxiety may not be valuable in terms of governance and policy.

When combined for the purpose of analysis, the collected data has a chance of becoming an average, and the average, in turn, is what the policy is based on. An important note is that something as ambiguous as anxiety can have many causes. For example, while it may be fair to say that most students feel anxious about exams, students may feel that anxiety for very different reasons, e.g. a fear of failure, a desire to make parents proud, or even a recent breakup or other personal matter that has significantly impacted their ability to focus on studying.
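To make the averaging point concrete, here is a minimal Python sketch using hypothetical self-reported anxiety scores (the numbers are invented for illustration): two groups can share the same average even though the underlying experiences are completely different, which is exactly what a policy built on the average would miss.

```python
from statistics import mean, stdev

# Hypothetical anxiety scores (1-10) from two groups of six students.
# Group A is uniformly mildly anxious; Group B is split between calm
# students and highly anxious ones.
group_a = [5, 5, 5, 5, 5, 5]
group_b = [1, 1, 1, 9, 9, 9]

print(mean(group_a), mean(group_b))    # identical averages: 5 and 5
print(stdev(group_a), stdev(group_b))  # very different spreads: 0.0 vs ~4.38
```

A dashboard or report showing only the mean would treat these two groups as interchangeable; the spread (and the individual causes behind it) is where the context lives.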

This may or may not impact student grades. The student looking at the grade can relate to and understand the correlation. However, if those grades are used to measure a teacher’s performance, a school administrator would see the ‘average’ picture. The assumption here is that there are enough students to provide data that lessens the extremes. Another reason I considered tracking anxiety is that, during this past pandemic year, I can imagine anxiety had an overall impact on the education system, because students found themselves in an unknown and uncertain time. It could have an impact on governance and policy.

If we could accurately quantify, describe and standardise anxiety to collect data on it, this may prove valuable from a governance and policy perspective in the future, if we face a similar situation again. A likely scenario is using technology to do this as artificial intelligence models continue to be created and trained for the purpose of tracking emotions.

Categories
Governing

Getting Help throughout the week

Last Sunday, I hurt my finger and as a result needed to wear a bandage through Friday. In light of my predicament, I decided to track the number of times I needed help with something compared to when I could still do it myself.

I’m usually not one to ask for help, so having an impaired finger stretched me outside my comfort zone. I tracked this from three perspectives – personal, household, and work – for five days.

  • The personal activities were limited to getting coffee/water, hair brushing, hand washing, and texting/calling.
  • The household activities were limited to dishes, laundry, taking the trash out, and mopping/sweeping.
  • Most of the work activities I could do by myself with the exception of sending the two parcels that I needed to send last week.

In reflection (and in reality), the daily activities that I could have tracked are countless, but some are ones that I don’t necessarily want to make public (like help getting dressed), connecting us to the privacy theme explored in the previous blocks.

When I was thinking about how to visualise the activities earlier in the week, I did a quick Google search and checked the images tab for inspiration. One of the images I stumbled upon was related to a marketing persona, with a timeline of a user’s activity on a website. Looking at that detailed audit trail was part of the reason I didn’t want to track every activity. On a certain level, it starts to feel creepy, just like your phone recommending a new friend on Facebook when you’ve only had a conversation about them with a friend…

The other thing I wanted to explore was what I could do by myself versus what I needed help with. The idea was to spend some time reflecting on comparison, as one reason we collect data is for comparison (and ultimately ranking, for decision-making around policy and governance), e.g. one class is doing better than another, one teacher has more engagement in their class, etc.

In particular, this week made me reflect on several points made in the Ozga (2016) reading:

  • What is ‘good’ data? Where could needing help fall on the spectrum for collecting and reflecting on good data?
  • Does needing help rank well or poorly if it ultimately achieves the same outcome, like the laundry being done?
  • If the context of being hurt was left out, how does this change the perception of the data?
  • If I was ranking how much, or the value, of the help that I received, how would I write the descriptions for ‘outstanding’, ‘good’, ‘needs improvement’, and ‘inadequate’?

Maybe a more useful way to have visualised this week’s data would have been a timeline, showing the trend of needing less help throughout the week, rather than categorisation. Would that make the data appear more ‘good’, or make it rank better?

These are just a few questions that I would have for someone responsible for collecting and visualising student data, if the goal was decision making, policy and governance.

Sources:

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1) pp.69-81

Categories
Governing

A Week of Technology-enabled Personal Interactions

This week I tracked the personal interactions I had with family, friends, and my partner through technology. In this case, technology means a phone call, a FaceTime hangout, or text messages.

My focus was to analyse how I interact with loved ones, and to keep this separate from the technology-enabled interactions I have at work. Side note – if I were tracking interactions like Slack messages at work, we would likely need a small booklet of papers for the visual.

The interactions are very text-message heavy and light on phone calls. I used colours to represent who I was interacting with and symbols to denote the type of interaction.

I decided to place the symbols on three lines radiating out, symbolising the interactions radiating out from me, i.e. I was the one initiating them (calling, starting a FaceTime, or sending a text message). Each radiating line has slashes to represent day breaks, starting with Monday as the first day of the week. I chose not to track the specific time, as that data point seemed ‘too much’.

In reflection, these three lines with symbols could in reality represent anything – watching videos, engaging on online forums, sending tweets, etc. The only thing that ‘makes’ them interactions is the key of the visualisation. This highlighted the following:

Labelling, understanding the label, and determining the value of the data points are key when using a data set like this (i.e. a count) for policy and governance.

For example, there is nothing to denote whether these interactions were positive or negative. The symbols and colours only show that they occurred.

In this visualisation, the value of each interaction is missing.

For example, just because I used text messaging the most, does that mean it is the most valuable? Because I only made three phone calls, does that indicate that I don’t like phone calls?

My conclusion with this week’s visualisation is that data is just that – data. Without context, it’s difficult to demonstrate value. Conversely, too much context may spill into bias. Culturally, we often try very hard to find labels and categories to fit everything into a neat box, e.g. the box-ticking exercise on any standardised test (age, gender, ethnicity, parents’ income level, etc.).

The ironic thing is that life isn’t a neat box. It’s messy. I’d argue education is the same – messy. Education is where you are meant to make mistakes, learn by doing, practice and constantly build upon what you know. The learning process is messy, yet it appears that we are trying to fit it into a box when ‘everything’ becomes data (datafication) for the sake of policy and governance.

Categories
Overview Reflections What data points are needed to demonstrate learning?

Block 2: Summary

In the last block, we focused on teaching with data. My goal was to consider data collection and visualisation through the perspective of an educator, because my professional life is devoted to selling platforms and creating dashboards for platform users. I also wanted to understand a bit more about educators’ perspectives on what data is important and why.

In this block, the themes that emerged for me include:

  • “Some data was better than no data – sometimes” (Brown, 2020)
  • It’s important to know who the big players are, and to dig deeper into why they may want to play in the education space (van Dijck et al., 2018)
  • The data points collected are often behavioral and can be used for adaptive learning, but they may not always be directly correlated with learning (van Dijck et al., 2018)
  • The data going in affects the outcome of the algorithms. Do educators have the skills and knowledge to determine how biased an algorithm may be and how to adapt their dashboards to it (Brown, 2020)?
  • The trend towards more datafication and commodification of education is changing the role of the teacher, and perhaps how they are evaluated (Williamson et al., 2020)

In my own visualisations, I tracked sleep, emotions, and distractions. These, I believed, were things the teacher on the other side of a Zoom screen may want to be aware of, as they could impact student engagement. The data points may give insight into a student’s wellbeing, but they may also infringe upon the students’ data privacy, as highlighted by van Dijck et al. (2018), because they would require minute-by-minute tracking of the students.

Working in technology, I have an assumed trust of certain players and distrust of others. In reflection, I asked myself:

If I were an educator, would I need these data points to influence my lesson plan, would I see them as superfluous, or would they change the way that I teach and ‘know’ my students?

The key takeaway here was that while I am very conscious of how I am tracked online through cookies and the apps I use, I hadn’t taken the same level of data privacy into account from an education perspective. While some data may be better than none (Brown, 2020), does knowing that a student slept well, or is anxious, radically change the lesson plan or the way I would teach? Moreover, would I have the skill set needed to critically understand the dashboard and adapt accordingly, or would the data unknowingly limit my teaching methods (Brown, 2020)?

The behavioral data points may assist from a gamification standpoint and lead to personalised, or adaptive, learning, as highlighted by the education examples from van Dijck et al. (2018); however, we have to think critically about which behavioral data points actually correlate with engagement and, ultimately, learning. From understanding how the majority of platforms are configured, I can attest that how we track data is limited. For example, engagement is likely tracked from a simple mouse click. You can track that a video’s play button was clicked and at what time the video was stopped, but unless you are videotaping the user’s face as they watch, you don’t know what happened after they hit ‘play’.
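To illustrate how little a click-based metric captures, here is a minimal Python sketch of the kind of event log described above. The event names and fields are hypothetical, not any real platform’s API; the point is that “watch time” computed from play/pause events records only that playback ran, not that anyone was watching or learning.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    user_id: str
    action: str        # e.g. "play" or "pause" (hypothetical event names)
    timestamp: datetime

def watch_time_seconds(events: list[Event]) -> float:
    """Sum the time between each 'play' and the next 'pause'.

    This is all the platform can infer from clicks: that playback ran.
    Attention, comprehension, and engagement are invisible to it.
    """
    total, started = 0.0, None
    for e in events:
        if e.action == "play":
            started = e.timestamp
        elif e.action == "pause" and started is not None:
            total += (e.timestamp - started).total_seconds()
            started = None
    return total

log = [
    Event("student_1", "play",  datetime(2021, 5, 3, 10, 0, 0)),
    Event("student_1", "pause", datetime(2021, 5, 3, 10, 12, 30)),
]
print(watch_time_seconds(log))  # 750.0 seconds of playback, attention unknown
```

A dashboard built on this metric would report 12.5 minutes of “engagement” whether the student took careful notes or left the room entirely.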

Taking this a step further, why would we want to track every action? Well, from the technology perspective, you need to find a way to keep selling software. More data equals product enhancements and new technologies, which equal more revenue and happy shareholders. The data collected can also be sold for profit. At first glance, it is easy to trust the ‘Big Five tech companies’ (van Dijck et al., 2018). The marketing places a positive spin on the data collection, product enhancements, and new solutions. It’s only when the data becomes creepy (i.e. speaking about something only for it to show up as an ad on Facebook two minutes later), or there’s a breach, that most people become aware they’re being tracked and have a problem with it. Do we want children to be tracked every minute from the moment they enter the school system? Personally, I would hate it. Having old pictures and memories show up on Facebook is already more than enough. The readings and visualisations this week have made me reconsider whether I would want this tracked by anyone other than myself.

Lastly, we should consider how the data collected and the dashboards impact the role of the teacher and how they are evaluated (Williamson et al., 2020). As highlighted by Williamson et al. (2020), the data points can become “proxy measures of the performance of staff, courses, schools, and institutions as a whole”. But is education a place where we should focus on customer satisfaction? Few K-12 students would be able to distinguish their anger at a bad grade, possibly due to their own lack of preparation, from their satisfaction with how the teacher taught the material and their skill set as an educator.

Sources:

Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

van Dijck, J., Poell, T., & de Waal, M. 2018. Chapter 6: Education, In The Platform Society, Oxford University Press

Williamson, B., Bayne, S. & Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.

Categories
Teaching & Data

A Week of Distractions

This week I attempted to track my distractions while working and reading for the course. I tracked when I found myself picking up the phone because of a notification, getting up for the doorbell, feeling hungry and getting food (or tea), and lastly, when my partner asked me a question or started a conversation. Note: this is a simplification, as the list of possible distractions could be infinite.

The goal was to put myself into the shoes of a student doing remote learning and attempt to track the distractions that my teacher may want to have insight into. As we explored in the Learning Block, just because the video is playing doesn’t mean that a student is engaged in the content.

A Week of Distractions

My first reflection was that distractions are everywhere you look. If you take your eyes off the screen for a second, there’s a distraction. There are distractions in the traditional classroom as well, but over the pandemic, students have struggled even more to stay engaged in the lesson while at home. If everyone in the room is doing the same thing, you don’t have your phone, no one is allowed to start a conversation with you, and there’s no doorbell, I’d say you’ve got a much higher chance of staying focused.

A second reflection is that a lot of the distractions are muscle memory. I’m sure I missed tracking several distractions as a result. Picking up and putting down the phone while doing three other things at once has become the new norm.

The idea behind minimising distractions is that it enhances engagement, and thus, learning. My assumption was that gathering this in a dashboard could be useful for the teacher to understand gaps in engagement.
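As a rough illustration of what such a dashboard might be built on, here is a minimal sketch in Python. The event log, day labels, and categories are all my own hypothetical examples, not data from the actual week tracked:

```python
from collections import Counter

# Hypothetical self-reported distraction log: (day, category) pairs.
# The categories mirror the ones tracked this week.
events = [
    ("Mon", "phone"), ("Mon", "doorbell"), ("Mon", "food"),
    ("Tue", "phone"), ("Tue", "phone"), ("Tue", "conversation"),
    ("Wed", "food"), ("Wed", "phone"),
]

# Tally per (day, category) - the raw counts a dashboard widget could chart.
tally = Counter(events)

# Total distractions per day highlights which sessions were most disrupted.
per_day = Counter(day for day, _ in events)

print(per_day)                   # e.g. Counter({'Mon': 3, 'Tue': 3, 'Wed': 2})
print(tally[("Tue", "phone")])   # 2
```

Even this toy version makes the self-reporting burden obvious: every row in `events` is a moment the student had to stop and log instead of learning.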

One thing we would have to consider is how the data was collected – by hand and self-reported, or using a technology. By hand would place a large responsibility on the student, in addition to their ‘job’, i.e. to learn. Tracking distractions using technology could infringe upon their data privacy, as it arguably “yields an abundance of data beyond mere academic test results” (van Dijck et al, pg. 125, 2018). Moreover, what do we do with that data once the school year is over? Is it the teacher’s responsibility to ‘get rid of it’?

The key question here is: ‘if distractions are tracked, what use is it for the teacher on a dashboard?’ As hinted at by Brown (2020), some data may be better than no data, but only if teachers truly understand it and know what to do with it.

Lastly, in a remote learning environment, there is not much the teacher can do, with or without technology, to minimise the distractions. Trying to do so could be seen as surveillance rather than helping the learning process (Lupton and Williamson, 2017).

Sources:

Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

Lupton, D. and Williamson, B. 2017. The datafied child: The dataveillance of children and implications for their rights. New Media & Society, 19(5), pp. 780-794. doi:10.1177/1461444816686328

van Dijck, J., Poell, T., & de Waal, M. 2018. Chapter 6: Education, In The Platform Society, Oxford University Press

Categories
Teaching & Data

Student Performance Dashboards

Student Performance Dashboard

In my professional life, one of the things I often do before demoing a platform product is create a dashboard to highlight the value brought about by the software. For example, the time saved by automation, the number of cases created and closed, and the increase in clicks or actions on something. As a sales tool, the dashboard shows the ‘art of the possible’. To the paying stakeholder, it shows what they’re paying for. To the user, it should demonstrate that the platform can both create visually appealing reports and provide the opportunity for further investigation, such as into a case gone wrong, if needed. The types of reports that I can create are limited by the data points available. The last sales trick I’ll mention is that you always want to show a wide range of reports to capture the audience’s eye and leave them with something visually pleasing – the software should look good.

When creating the above dashboard, I had a few initial reactions:

  1. I found it much harder to create dashboards in Excel than in the platforms I’ve used.
  2. I immediately started looking at individual data points. I didn’t pause to reflect on the names of the students, i.e. the fact that they were students became surprisingly irrelevant to me very quickly.
  3. If I had more time on my hands, I would have preferred to do this in another tool because the above is not visually appealing. The sales engineer in me would not be proud to show this to a client.

Working in technology, I have found several readings negative and very distrusting of my day job. During this exercise, I became intimately aware that I took on this task with my ‘work hat’ on. I went into design mode with the goal that, through the dashboard, I could tell a story about the class and demonstrate the value of the platform providing the data. Going through some of my notes from the readings, this really made me pause and consider the teacher’s perspective. Teachers should be frustrated by this disconnect – that they’re not involved in the process. The engineers and data scientists should be continuously reminded of the students, and of what would be helpful to the teacher, during the design and build process.

One thing I had failed to do by putting my ‘work hat’ on was to consider what the students were being tracked against – in other words, what learning had been achieved, if any. In taking a step back and considering the story I would tell, I realised that the completion data point, for example, doesn’t actually mean much in isolation. As a teacher, I would want to understand more around the following:

  • ‘What’ did the student complete?
  • Does completion mean learning, or just ‘I completed it’?
  • How quickly did the student complete ‘it’? How does that compare to the average completion time and rate?
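The comparison in that last question could be sketched very simply. This is a minimal illustration with made-up student names, times, and thresholds – none of it comes from the actual dashboard data:

```python
from statistics import mean

# Hypothetical completion times (minutes) for one activity.
completion_minutes = {"Student A": 12, "Student B": 45, "Student C": 30}

avg = mean(completion_minutes.values())  # class average: 29 minutes here

# Flag completions far from the average: a very fast time may mean
# clicking through ("I completed it"), a very slow one may mean a struggle.
for student, minutes in completion_minutes.items():
    if minutes < 0.5 * avg:
        note = "much faster than average - completed, or clicked through?"
    elif minutes > 1.5 * avg:
        note = "much slower than average - worth a closer look"
    else:
        note = "near average"
    print(student, minutes, note)
```

Even this sketch shows the limit: the flag tells the teacher *something* happened, but still not whether learning did.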

Just as the reports that I can create are limited by the data available, so would be the value of the dashboards for the teachers. Those questions cannot be answered through a dashboard without the right data points, collected at the right time.

On a higher level, I also considered whether these data points, or this datafication of the students, actually provide a foundation for personalisation. I think both the technologist and the teacher would argue ‘no’ – much more data and context is needed. If left as is, the teacher would likely not have much use for it other than reporting final grades and providing data to administrators.

The idea that a teacher would be a “dashboard controller” (van Dijck et al, pg. 123, 2018) is interesting to me – is the teacher not already a type of “classroom (or even student) controller”? Many of the data points included in the dashboard would be ones that the teacher would have, or need to collect, anyway, such as the grade on each test.

In thinking around the idea of value, it also stood out to me that this dashboard is significantly simpler than the one a teacher would have at the AltSchool as described by van Dijck, Poell and de Waal (2018). This dashboard does not contain detailed data on the students’ minute-by-minute activity, engagement, performance, or behavior. Many would, however, question the point of tracking all that data.

At the end, that’s what it comes down to – the data.

From a teaching perspective: What data is actually needed? Why is it useful? How do we map it to learning (both for the students, and for teacher method improvement)?

From a technology perspective: How do we track difficult data points like emotional intelligence, or empathy? What are the data privacy and security concerns for each data point? Does the student have the right to be forgotten?

Sources:

van Dijck, J., Poell, T., & de Waal, M. 2018. Chapter 6: Education, In The Platform Society, Oxford University Press

Categories
Teaching & Data

A Week of Emotions

Last weekend, I stumbled upon Zoo Tycoon in the Xbox Store. Within a few minutes of playing, I was reminded of the ‘Sims’-like indicators that highlight how the guests and the animals are feeling. I found myself fascinated by the idea of the indicators in relation to the visualisations from a teaching standpoint:

what is the value of a teacher having similar indicators for students to highlight if students are happy, sad, angry, or simply feeling 'meh' (i.e. feeling the pandemic wall)? 

From this context, I tried to track my own emotions for the week. For inspiration, I referred to Week 11 of Dear Data, in particular Stefanie’s drawing of colored lines.

Week 7 Visualisation: Week of Emotions

In comparison to Stefanie’s drawing, I decided to create the visualisation with colors flowing from one to the other, because emotions:

  1. are not perfect or precise in timing
  2. are sometimes fleeting, and other times long-lasting
  3. can be a direct result of something, or seem spontaneous to others because they were brought on by a thought

Datafication of emotions is a difficult task for this very reason – emotions are personal; however, facial recognition in education is on the rise (Williamson et al, 2020). Looking back at my week, there are gaps in the data and estimations of when I shifted from one emotion to the other.

As a technologist, I often wonder not if, but how long it will take for facial recognition AI to reach a 99% accuracy rating across a spectrum of emotions. Even more interesting: could you ‘trick’ the AI, and could it know whether the student on camera is indeed a student and not a deepfake?

Rather than limit or reduce the view of students from a teacher’s perspective (Williamson et al, 2020), I am hoping a dashboard highlighting emotion would prompt action or provide a different perspective. For example, it could help identify if just one student is anxious, or if the class as a whole is anxious.

As highlighted by Bulger (pg. 4, 2016), in classrooms teachers are able to leverage learner-centered instruction and personalise their teaching based on “interpersonal cues…. subject matter expertise… knowledge of how people learn, and knowledge of each student, to determine individual needs, adjusting their lessons in response to questions and behaviors”. A major concern here, as with the sleep data I considered last week, is data privacy and its ethical implications. A teacher may ask if certain emotions should allow for more lenient grading, or how teachers themselves can remain objective while constantly being exposed to the emotions of their students.

From a personal standpoint, I have seen many instances over the last year where the data in the dashboards I present to clients is seen as ‘useless’ because of COVID-19. With emotions, I wonder whether this data would be truly useful to the teacher. However, one thing I have learned through experience is that in a remote world we need to over-communicate every action and emotion, even to those close to us, because what we are going through and feeling is unprecedented.

As a final reflection for this emotional data, it may be more important that the teacher have high emotional intelligence and/or understanding of the emotions tracked rather than a deep understanding of the facial recognition AI and training data sets behind it.

Sources:

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Lupi, Giorgia and Stefanie Posavec. Dear Data Project (2015). Accessed via http://www.dear-data.com/all

Williamson, B., Bayne, S. and Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.

Categories
Uncategorized

Tracking Sleep

This week, I tracked sleep quantity and attempted to relate it to the amount of caffeine consumed throughout the day. Having a good night’s sleep can make a world of difference in my own day-to-day, and correlates with whether or not I need more than one cup of coffee to feel human.

Shifting from the learning perspective to the teaching perspective when considering the importance of data, I considered this week what teachers would be interested in knowing about their students. Sleep, I imagined, would be one of them. It has been shown that lack of sleep, or poor quality of sleep, can negatively impact a student’s ability to focus and do well in school (Sharma, 2014). When interacting face-to-face, teachers can easily pick up on body language and other clues to determine how well rested a student is feeling. In a virtual classroom, using the same clues becomes quite difficult, especially if the student is unable (or doesn’t want) to use their camera. In the virtual class session this week, Ben Williamson shared a recent article in CNN Business (Chan, 2021) stating that emotion recognition AI may help teachers identify if students are happy, sad, angry, surprised, or fearful. With this advancement, it’s likely only a matter of time before we can add ‘sleep deprived’ to the list.

When creating the visualisation, I put on my technology hat to put together an easily understood sleep diagram – something that a teacher could use to determine within a few seconds whether or not the student was well rested, or if lack of sleep could be a reason for their lack of motivation.

One thing that I do in my professional life is create dashboards for different data collected on a business’ physical locations, such as review ratings. One of the ways we can visualise the ratings in the tool I work with is a geographical map that uses a RAG status to highlight which locations are doing well and which could use some improvement, by coloring the location dot red, yellow, or green depending on the average review rating (on a 1-5 scale, with 1 being the worst).
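The RAG mapping described above boils down to a simple threshold function. Here is a minimal sketch; the 3.0 and 4.0 cut-offs are my own illustrative assumptions, not the thresholds the actual tool uses:

```python
def rag_status(avg_rating: float) -> str:
    """Map an average review rating (1-5, 1 worst) to a RAG colour.

    Thresholds are illustrative assumptions, not the real tool's.
    """
    if avg_rating < 3.0:
        return "red"    # needs improvement
    if avg_rating < 4.0:
        return "amber"  # doing okay
    return "green"      # doing well

print(rag_status(2.4))  # red
print(rag_status(4.6))  # green
```

The same pattern would apply to sleep: swap the rating for hours slept and the teacher gets the split-second red/amber/green read described below – which is exactly why the choice of thresholds, and who sets them, matters.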

My first instinct was to use the same method for my own sleep pattern for the week, imagining that a teacher could look at the size and the color of the circle to determine within a split second whether or not I was well rested. As highlighted by Williamson, Bayne, and Shay (2020), teachers often have a limited view of students in large, or online, programs. The goal of this dashboard was to improve that view and be a window into one thing they would likely notice if interacting with the student face to face.

Sleep by RAG status and caffeine consumption

In reality, there are concerns with tracking sleep. One is that it ‘may change how teachers view [students]’, also highlighted by Williamson, Bayne and Shay (2020). I would argue that when it comes to sleep quality, a teacher viewing students differently could be a positive thing, in that the teacher may be able to adapt teaching methods, or dig deeper into why a student is not getting enough sleep. As an educator, I might ask whether this is my responsibility.

Another concern is data privacy. Should sleep quality be a data point that teachers have access to for their students? How would students report the data – self created sleep diaries, or through a wearable device? Should sleep data be considered under GDPR?

From a student’s perspective, if the teacher shares a clear correlation between sleep and their performance, could it contribute to additional sleep disturbance by giving the student anxiety? I’m sure we can all relate to lying in bed at night desperately wanting to fall asleep because we have a big day ahead.

In hindsight, while I can relate the increased caffeine consumption to my own lack of sleep, it wouldn’t necessarily be relevant to a teacher. Many children, I would hope, don’t consume caffeine to make up for lack of sleep like adults do. Rather, I imagine teachers would be interested in knowing whether or not high-stress events are directly contributing to the lack of sleep.

In a world where we are now hiding behind the Zoom camera, these data points could help the teacher understand the student in more context, even if the additional dashboard (or data point) may unfortunately contribute to additional ‘datafication’.

Sources:

Chan, Milly. (2021, February 21). This AI reads children’s emotions as they learn. CNN Business. Retrieved from https://edition.cnn.com

Sharma, M. 2014. G153(P) A study of role of sleep on health and scholastic performance among children. Archives of Disease in Childhood, 99, A68.

Williamson, B., Bayne, S. and Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.