
Tracking Anxiety

Anxiety over a week during lockdown

This week it felt like a natural step to track anxiety as the final visualisation. As a student, the time leading up to the end of the year can be filled with excitement about what’s next, but it can also be filled with stress and anxiety around studying for exams and doing well.

What I wanted to highlight visually in this visualisation was the weight and continuous cycle you can find yourself in once you become anxious about something… one thing can often lead to many others piling on. For me, the early part of the week was work-focused and the end of the week more personally focused. Sunday scaries, anyone?

When thinking about how this relates to governance, the question becomes: how useful is collecting data on anxiety? As a non-educator, my first instinct is to imagine that it’s more useful to think about it from both a learning and a teaching perspective.

One theme that has stood out in this governance block is that data is collected with the end result in mind. An outcome ‘should’ prove or disprove something so that action (i.e. policy and governance) can be created. It can often be forgotten, however, that the data itself, in isolation, can be personal and individualistic. Anxiety, for example, is an emotion that likely has different meanings for different individuals. It’s difficult to track because we cannot determine its meaning in absolutes, as we can for a maths equation. This is one reason that, today, tracking anxiety may not be valuable in terms of governance and policy.

When combined for the purpose of analysis, the collection of data has a chance of becoming an average. The average, in turn, is what the policy is based on. An important note is that something ambiguous like anxiety can have many causes. For example, while it may be fair to say that most students feel anxious about exams, students may feel that anxiety for very different reasons, e.g. a fear of failure, a desire to make parents proud, or even a recent breakup or other personal matter that has significantly impacted their ability to focus on studying.

This may or may not impact student grades. A student looking at their own grade can relate to and understand the correlation. However, if those grades are used to measure a teacher’s performance, a school administrator would see the ‘average’ picture. The assumption here is that there are enough students to provide data that lessens the extremes. Another reason I considered tracking anxiety is that, during this past pandemic year, I can imagine anxiety had an overall impact on the education system, because students found themselves in an unknown and uncertain time. That, in turn, could have an impact on governance and policy.
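To make that ‘average picture’ concrete, here is a minimal sketch (with made-up numbers, not real student data) of how two very different classes can collapse into the same policy-level statistic:

```python
# Hypothetical sketch: two classes with very different anxiety
# distributions produce the same average, which is all an
# administrator or policy maker would see.
class_a = [5, 5, 5, 5, 5]   # everyone moderately anxious
class_b = [1, 1, 5, 9, 9]   # calm students alongside highly anxious ones

mean_a = sum(class_a) / len(class_a)
mean_b = sum(class_b) / len(class_b)

print(mean_a, mean_b)  # 5.0 5.0 -- the extremes disappear in the average
```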

If we could accurately quantify, describe and standardise anxiety in order to collect data on it, this may prove valuable from a governance and policy perspective in the future, if we face a similar situation again. A likely scenario is using technology to do this, as artificial intelligence models continue to be created and trained for the purpose of tracking emotions.


Getting Help throughout the week

Last Sunday, I hurt my finger and, as a result, needed to wear a bandage through Friday of last week. In light of my predicament, I decided to track the number of times that I needed help with something compared to the times I could still do something myself.

I’m usually not one to ask for help, so having an impaired finger stretched me outside of my comfort zone. I tracked this from three perspectives – personal, household, and work – for five days.

  • The personal activities were limited to getting coffee/water, hair brushing, hand washing, and texting/calling.
  • The household activities were limited to dishes, laundry, taking the trash out, and mopping/sweeping.
  • Most of the work activities I could do by myself, with the exception of the two parcels that I needed to send last week.

In reflection (and in reality), the daily activities that I could have tracked are countless, but some are ones that I don’t necessarily want to make public (like help getting dressed), connecting us to the privacy theme explored in the previous blocks.

When I was thinking about how to visualise the activities earlier in the week, I did a quick Google search and checked out the images tab for inspiration. One of the images that I stumbled upon was related to a marketing persona, with a timeline of a user’s activity on a website. Looking at that detailed audit trail was part of the reason that I didn’t want to track every activity. On a certain level, it starts to feel creepy, just like your phone recommending a new friend on Facebook when you’ve only had a conversation about them with a friend…

The other thing that I wanted to explore was what I could do by myself versus what I needed help with. The idea behind this was to spend some time reflecting on comparison, as one reason we collect data is to compare (and ultimately rank, for decision making around policy and governance), e.g. one class is doing better than another, one teacher has more engagement in their class, etc.

In particular, this week made me reflect on several points made in the Ozga (2016) reading:

  • What is ‘good’ data? Where could needing help fall on the spectrum for collecting and reflecting on good data?
  • Does needing help rank well or poorly if it ultimately achieves the same outcome, like the laundry being done?
  • If the context of being hurt was left out, how does this change the perception of the data?
  • If I were ranking the amount, or the value, of the help that I received, how would I write the descriptions for ‘outstanding’, ‘good’, ‘needs improvement’, and ‘inadequate’?

Maybe a more useful way to have visualised this week’s data would have been on a timeline, to show the trend of needing less help throughout the week, rather than as categories? Would that make the data appear more ‘good’, or make it rank better?
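As a minimal sketch of that alternative (the daily counts below are hypothetical, not my actual tally), a simple timeline makes the downward trend visible in a way that categories don’t:

```python
import matplotlib.pyplot as plt

# Hypothetical daily counts: "needed help" trending down as the finger heals.
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
needed_help = [7, 6, 4, 3, 1]
did_myself = [3, 4, 6, 7, 9]

plt.plot(days, needed_help, marker="o", label="Needed help")
plt.plot(days, did_myself, marker="s", label="Did it myself")
plt.ylabel("Number of activities")
plt.title("Help needed over the week (hypothetical counts)")
plt.legend()
plt.show()
```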

These are just a few questions that I would have for someone responsible for collecting and visualising student data, if the goal was decision making, policy and governance.

Sources:

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp. 69-81.


A Week of Technology-enabled Personal Interactions

This week I tracked the personal interactions that I had with family, friends, and my partner through technology. In this case, technology means phone calls, FaceTime hangouts, and text messages.

My focus was to analyze how I interact with loved ones, and to keep this separate from the technology-enabled interactions that I have at work. Side note – if I were tracking interactions like Slack messages at work, we would likely need a small booklet of papers for the visual.

The interactions are very text-message heavy and light on phone calls. I used colors to represent who I was interacting with and symbols to denote the type of interaction.

I decided to place the symbols on three lines radiating outwards, symbolising the interactions radiating out from me, i.e. I was the one initiating the interactions (calling, starting a FaceTime, or sending a text message). Each radiating line has slashes to represent day breaks, starting with Monday as the first day of the week. I chose not to track the specific time, as that data point seemed ‘too much’.

In reflection, these three lines with symbols could in reality be anything – watching videos, engaging on online forums, sending tweets, etc. The only thing that ‘makes’ them interactions is the key of the visualisation. This point highlighted the following:

Labeling, understanding the label, and determining the value of the data points is key when using a data set like this (i.e. a count) for policy and governance.

For example, there is nothing to denote whether these interactions were positive or negative. The symbols and colors only show that they occurred.

In this visualisation, the value of each interaction is missing.

For example, just because I used text messaging the most, does that mean it is the most valuable? Because I only made 3 phone calls, does that indicate that I don’t like phone calls?
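To make the point concrete, here is a minimal sketch of what a count-based data set like this actually contains (the events are hypothetical): each record carries a person and a type, but no field for how the interaction felt, so any ‘value’ has to be imposed from outside the data.

```python
from collections import Counter

# Hypothetical log of a week's interactions: who and what,
# but nothing about whether the exchange was positive or negative.
interactions = [
    {"person": "partner", "type": "text"},
    {"person": "family",  "type": "text"},
    {"person": "friend",  "type": "facetime"},
    {"person": "family",  "type": "call"},
    {"person": "partner", "type": "text"},
]

by_type = Counter(event["type"] for event in interactions)
print(by_type)  # Counter({'text': 3, 'facetime': 1, 'call': 1})
# The count says texting dominates; it cannot say texting mattered most.
```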

My conclusion with this week’s visualisation is that data is just that – data. Without context, it’s difficult to demonstrate value. Conversely, too much context may spill into bias. Culturally, we often try very hard to find labels and categories to fit everything into a neat box, e.g. the box-ticking exercise on any standardised test (age, gender, ethnicity, parents’ income level, etc.).

The ironic thing is that life isn’t a neat box. It’s messy. I’d argue education is the same – messy. Education is where you are meant to make mistakes, learn by doing, practice and constantly build upon what you know. The learning process is messy, yet it appears that we are trying to fit it into a box when ‘everything’ becomes data (datafication) for the sake of policy and governance.


Block 2: Summary

In the last block, we focused on teaching with data. My goal was to consider data collection and visualisation from the perspective of an educator, because my professional life is devoted to selling platforms and creating dashboards for platform users. I also wanted to understand a bit more about educators’ perspectives on what data is important and why.

In this block, the themes that emerged for me include:

  • “Some data was better than no data – sometimes” (Brown, 2020)
  • It’s important to know who the big players are, and dig deeper into why they may want to play in the education space (van Dijck et al, 2018)
  • The data points collected are often behavioral and can be used for adaptive learning, but they may not always be directly correlated to learning (van Dijck et al, 2018)
  • The data going in affects the outcome of the algorithms. Do educators have the skills and knowledge to determine how biased the algorithm may be, and how to adapt their dashboards to it (Brown, 2020)?
  • The trend towards more datafication and commodification of education is changing the role of the teacher and perhaps how they are evaluated (Williamson et al, 2020).

In my own visualisations, I tracked sleep, emotions, and distractions. These, I believed, were things the teacher on the other side of a Zoom screen may want to be aware of, as they could impact student engagement. The data points may give insight into the students’ well-being, but they may also infringe upon the students’ data privacy, as highlighted by van Dijck et al (2018), because they would require minute-by-minute tracking of the students.

Working in technology, I have an assumed trust of certain players and distrust of others. In reflection, I asked myself:

If I were an educator, would I need these data points to influence my lesson plan, would I see them as superfluous, or would they change the way that I teach and ‘know’ my students?

The key takeaway here was that while I am very conscious of how I am tracked online through cookies and the apps I use, I hadn’t taken the same level of data privacy into account from an education perspective. While some data may be better than none (Brown, 2020), does knowing that a student slept well, or is anxious, radically change the lesson plan or the way I would teach? Moreover, would I have the skill set needed to critically understand the dashboard and adapt accordingly, or would the data unknowingly limit my teaching methods (Brown, 2020)?

The behavioral data points may assist from a gamification standpoint and lead to personalised, or adaptive, learning, as highlighted by the education examples in van Dijck et al (2018); however, we have to think critically about which behavioral data points actually correlate with engagement and, ultimately, learning. From understanding how the majority of platforms are configured, I can attest that how we track data is limited. For example, engagement is likely being tracked from a simple mouse click. You can track that a video’s play button was clicked and at what time the video was stopped, but unless you are videotaping the user’s face as they watch, you don’t know what happened after they hit ‘play’.
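As a rough sketch of what that click-level tracking looks like (the event names and fields here are hypothetical, not any specific platform’s schema), ‘engagement’ is typically just the gap between two timestamped events:

```python
from datetime import datetime

# Hypothetical click-stream events -- all a platform typically records.
events = [
    {"event": "video_play",  "ts": datetime(2021, 4, 12, 10, 0, 0)},
    {"event": "video_pause", "ts": datetime(2021, 4, 12, 10, 8, 30)},
]

watch_time = events[1]["ts"] - events[0]["ts"]
print(f"Inferred 'engagement': {watch_time}")  # 0:08:30
# The video may have played to an empty chair; the events can't tell us.
```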

Taking this a step further, why would we want to track every action? Well, from the technology perspective, you need to find a way to keep selling software. More data equals product enhancements and new technologies, which equals more revenue and happy shareholders. The data collected can also be sold for profit. At first glance, it is easy to trust the ‘Big Five tech companies’ (van Dijck et al, 2018). The marketing places a positive spin on the data collection, product enhancements and new solutions. It’s only when the data becomes creepy (e.g. speaking about something only for it to show up as an ad on Facebook two minutes later), or there’s a breach, that most people become aware they’re being tracked and have a problem with it. Do we want children to be tracked every minute from the moment they enter the school system? Personally, I would hate it. Having old pictures and memories show up on Facebook is already more than enough. The readings and visualisations this week have made me reconsider whether I would want this tracked by anyone other than myself.

Lastly, we should consider how the data collected and the dashboards impact the role of the teacher and how they are evaluated (Williamson et al, 2020). As highlighted by Williamson et al (2020), the data points can become “proxy measures of the performance of staff, courses, schools, and institutions as a whole”. But is education a place where we should focus on customer satisfaction? Few K-12 students would be able to separate their anger at a bad grade (possibly due to their own lack of preparation) from their ‘customer satisfaction’ with how the teacher taught the material and the teacher’s skill as an educator.

Sources:

Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

van Dijck, J., Poell, T. and de Waal, M. 2018. Chapter 6: Education. In: The Platform Society. Oxford University Press.

Williamson, B., Bayne, S. and Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.


A Week of Distractions

This week I attempted to track my distractions while working and reading for the course. I tracked when I found myself picking up the phone because of a notification, getting up for the doorbell, feeling hungry and getting food (or tea), and lastly, when my partner asked me a question or started a conversation. Note: this is a simplification, as the list of distractions could be infinite.

The goal was to put myself into the shoes of a student doing remote learning and attempt to track the distractions that my teacher may want to have insight into. As we explored in the Learning Block, just because the video is playing doesn’t mean that a student is engaged in the content.

A Week of Distractions

My first reflection was that distractions are everywhere you look. If you take your eyes off the screen for a second, there’s a distraction. There are distractions in the traditional classroom as well, but over the pandemic, students have struggled even more to stay engaged in the lesson while at home. If everyone in the room is doing the same thing, you don’t have your phone, no one is allowed to start a conversation with you, and there’s no doorbell, I’d say you’ve got a higher chance of staying focused.

A second reflection is that a lot of the distractions are muscle memory. I’m sure I missed tracking several distractions as a result. Picking up and putting down the phone while doing three other things at once has become the new norm.

The idea behind minimising distractions is that it enhances engagement, and thus, learning. My assumption was that gathering this in a dashboard could be useful for the teacher to understand gaps in engagement.

One thing we would have to consider is how the data was collected – by hand and self-reported, or using technology. By hand would place a large responsibility on the student, in addition to their ‘job’, i.e. to learn. Tracking distractions using technology could infringe upon their data privacy, as it arguably “yields an abundance of data beyond mere academic test results” (van Dijck et al, pg. 125, 2018). Moreover, what do we do with that data once the school year is over? Is it the teacher’s responsibility to ‘get rid of it’?

The key question here is: if distractions are tracked, of what use are they to the teacher on a dashboard? As hinted at by Brown (2020), some data may be better than no data, but only if teachers truly understand it and know what to do with it.

Lastly, in a remote learning environment, there is not much the teacher can do, with or without technology, to minimise the distractions. Trying to could be seen as surveillance rather than helping the learning process (Lupton and Williamson, 2017).

Sources:

Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

Lupton, D. and Williamson, B. 2017. The datafied child: The dataveillance of children and implications for their rights. New Media & Society, 19(5), pp. 780-794. doi:10.1177/1461444816686328

van Dijck, J., Poell, T. and de Waal, M. 2018. Chapter 6: Education. In: The Platform Society. Oxford University Press.


Student Performance Dashboards

Student Performance Dashboard

In my professional life, one of the things I often do before demoing a platform product is to create a dashboard highlighting the value brought about by the software – for example, the time saved by automation, the number of cases created and closed, and the increase in clicks or actions on something. As a sales tool, the dashboard shows the ‘art of the possible’. To the paying stakeholder, it shows what they’re paying for. To the user, it should demonstrate that the platform can both create visually appealing reports and provide the opportunity for further investigation, like a case gone wrong, if needed. The types of reports that I can create are limited by the data points. The last sales trick that I’ll mention is that you always want to show a wide range of reports to capture the audience’s eye and leave them with something visually pleasing – the software should look good.

When creating the above dashboard, I had a few initial reactions:

  1. I found it much harder to create dashboards in Excel than in the platforms I’ve used.
  2. I immediately started looking at individual data points. I didn’t pause to reflect on the names of the students, i.e. the fact that they were students became surprisingly irrelevant to me very quickly.
  3. If I had more time on my hands, I would have preferred to do this in another tool because the above is not visually appealing. The sales engineer in me would not be proud to show this to a client.

Working in technology, I have found several readings negative and very distrusting of my day job. During this exercise, I became intimately aware that I took on the task with my ‘work hat’ on. I went into design mode, the goal being that, through the dashboard, I could tell a story about the class and demonstrate the value of the platform that was providing the data. Going back through some of my notes from the readings really made me pause and consider the teacher’s perspective. There must surely be frustration at this disconnect – that teachers are not involved in the process. The engineers and data scientists should be continuously reminded of the students, and of what would be helpful to the teacher, during the design and build process.

One thing that I had failed to do by putting my ‘work hat’ on was to consider what the students were being tracked against – or, in other words, what learning had been achieved, if any. In taking a step back and considering the story that I would tell, I realized that, for example, the completion data point in isolation doesn’t actually mean much. As a teacher, I would want to understand more around the following:

  • ‘What’ did the student complete?
  • Does completion mean learning, or just ‘I completed it’?
  • How quickly did the student complete ‘it’? How does that compare to the average completion time and rate?

Just as the reports that I can create are limited by the data available, so too would be the value of the dashboards for teachers. Those questions cannot be answered through a dashboard without the right data points, collected at the right time.
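As a minimal sketch of what answering those questions could look like with the right data points (the column names and values below are hypothetical):

```python
import pandas as pd

# Hypothetical per-student records: completion alone says little,
# but paired with time-on-task it supports comparison to the class.
df = pd.DataFrame({
    "student": ["A", "B", "C", "D"],
    "completed": [True, True, True, False],
    "minutes_to_complete": [25, 60, 90, None],
})

completion_rate = df["completed"].mean()
avg_minutes = df["minutes_to_complete"].mean()

print(f"Completion rate: {completion_rate:.0%}")          # 75%
print(f"Average completion time: {avg_minutes:.0f} min")  # 58 min
# Student A finished far faster than average -- but speed still
# doesn't tell us whether anything was learned.
```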

On a higher level, I also considered whether these data points – this datafication of the students – actually provide a foundation for personalisation. I think both the technologist and the teacher would argue ‘no, it doesn’t’ – much more data and context is needed. If left as is, the teacher would likely not have much use for it other than reporting final grades and providing data to administrators.

The idea that a teacher would be a “dashboard controller” (van Dijck et al, pg. 123, 2018) is interesting to me – is the teacher not already a type of “classroom (or even student) controller”? Many of the data points included in the dashboard are ones that the teacher would have, or would need to collect, anyway, such as the grade on each test.

It also stood out to me, in thinking about the idea of value, that this dashboard is significantly simpler than one a teacher would have at the AltSchool as described by van Dijck, Poell and de Waal (2018). This dashboard does not contain detailed data on the students’ minute-by-minute activity, engagement, performance, or behavior. Many would, however, question the point of tracking all that data.

At the end, that’s what it comes down to – the data.

From a teaching perspective: What data is actually needed? Why is it useful? How do we map it to learning (both for the students, and for teacher method improvement)?

From a technology perspective: How do we track difficult data points like emotional intelligence, or empathy? What are the data privacy and security concerns for each data point? Does the student have the right to be forgotten?

Sources:

van Dijck, J., Poell, T. and de Waal, M. 2018. Chapter 6: Education. In: The Platform Society. Oxford University Press.


A Week of Emotions

Last weekend, I stumbled upon Zoo Tycoon in the Xbox Store. Within a few minutes of playing, I was reminded of the ‘Sims’-like indicators that highlight how the guests and the animals are feeling. I found myself fascinated by the idea of the indicators in relation to the visualisations from a teaching standpoint:

What is the value of a teacher having similar indicators for students, highlighting whether students are happy, sad, angry, or simply feeling ‘meh’ (i.e. feeling the pandemic wall)?

With this in mind, I tried to track my own emotions for the week. For inspiration, I referred to Week 11 of Dear Data, in particular Stefanie’s drawing of colored lines.

Week 7 Visualisation: Week of Emotions

In comparison to Stefanie’s drawing, I decided to create the visualisation with colors flowing from one to the other, as emotions are:

  1. not perfect or precise in timing
  2. sometimes fleeting, and other times long-lasting
  3. sometimes a direct result of something, and other times seemingly spontaneous to others, because they were brought on by a thought

Datafication of emotions is a difficult task for this very reason – emotions are personal; however, facial recognition in education is on the rise (Williamson et al, 2020). Looking back at my week, there are gaps in the data and estimations of when I shifted from one emotion to the other.

As a technologist, I often wonder not if, but how long it will take for facial recognition AI to have an accuracy rating of 99% across a spectrum of emotions. Even more interesting: could you ‘trick’ the AI, and could it know whether the student on camera is indeed a student and not a deepfake?

Rather than limit or reduce the view of students from a teacher’s perspective (Williamson et al, 2020), I am hoping a dashboard highlighting emotion would prompt action or provide a different perspective. For example, it could help identify if just one student is anxious, or if the class as a whole is anxious.

As highlighted by Bulger (pg. 4, 2016), in classrooms teachers are able to leverage learner-centered instruction and personalise their teaching based on “interpersonal cues…. subject matter expertise… knowledge of how people learn, and knowledge of each student, to determine individual needs, adjusting their lessons in response to questions and behaviors”. A major concern here, as with the sleep data I considered last week, is data privacy and the ethical implications. A teacher may ask whether certain emotions should allow for more lenient grading, or how teachers themselves can remain objective while constantly being exposed to the emotions of their students.

From a personal standpoint, I have seen many instances over the last year where the data in the dashboards that I present to clients is seen as ‘useless’ because of COVID-19. With emotions, I wonder whether this data is truly useful to the teacher. However, one thing that I have learned through experience is that in a remote world we need to over-communicate every action and emotion, even to those close to us, because what we are going through and feeling is unprecedented.

As a final reflection on this emotional data, it may be more important that the teacher have high emotional intelligence and/or an understanding of the emotions tracked than a deep understanding of the facial recognition AI and the training data sets behind it.

Sources:

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Lupi, G. and Posavec, S. 2015. Dear Data Project. Accessed via http://www.dear-data.com/all

Williamson, B., Bayne, S. and Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.