Categories
Teaching & Data

A Week of Distractions

This week I attempted to track my distractions while working and reading for the course. I tracked when I found myself picking up the phone because of a notification, getting up for the doorbell, feeling hungry and getting food (or tea), and lastly, when my partner asked me a question or started a conversation. Note: this is a simplification, as the list of possible distractions could be infinite.
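To make the exercise concrete, here is a minimal sketch of how such a self-tracked log could be tallied by category. The timestamps and entries below are invented for illustration; only the categories mirror the ones I actually tracked.

```python
from collections import Counter
from datetime import datetime

# Hypothetical log of distraction events: (timestamp, category).
# The categories mirror the ones I tracked; the entries are made up.
log = [
    (datetime(2021, 3, 1, 9, 12), "phone notification"),
    (datetime(2021, 3, 1, 9, 40), "doorbell"),
    (datetime(2021, 3, 1, 10, 5), "food or tea"),
    (datetime(2021, 3, 1, 10, 31), "phone notification"),
    (datetime(2021, 3, 1, 11, 2), "partner conversation"),
]

# Tally how often each category interrupted the session.
counts = Counter(category for _, category in log)
for category, n in counts.most_common():
    print(f"{category}: {n}")
```

Even this toy version shows the burden: every row has to be entered by hand, at the moment of distraction.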

The goal was to put myself into the shoes of a student doing remote learning and attempt to track the distractions that my teacher may want to have insight into. As we explored in the Learning Block, just because the video is playing doesn’t mean that a student is engaged in the content.

A Week of Distractions

My first reflection was that distractions are everywhere you look. If you take your eyes off the screen for a second, there’s a distraction. There are distractions in the traditional classroom as well, but over the pandemic, students have struggled even more to stay engaged in the lesson while at home. If everyone in the room is doing the same thing, you don’t have your phone, no one is allowed to start a conversation with you, and there’s no doorbell, I’d say you’ve got a higher chance of staying focused.

A second reflection is that a lot of the distractions are muscle memory. I’m sure I missed tracking several distractions as a result. Picking up and putting down the phone while doing three other things at once has become the new norm.

The idea behind minimising distractions is that it enhances engagement, and thus, learning. My assumption was that gathering this data in a dashboard could be useful for the teacher to understand gaps in engagement.

One thing we would have to consider is how the data was collected – by hand and self-reported, or using technology. By hand would place a large responsibility on the student in addition to their ‘job’, i.e. to learn. Tracking distractions using technology could infringe upon their data privacy as it arguably “yields an abundance of data beyond mere academic test results” (van Dijck et al., 2018, p. 125). Moreover, what do we do with that data once the school year is over? Is it the teacher’s responsibility to ‘get rid of it’?

The key question here is ‘if distractions are tracked, what use are they for the teacher on a dashboard?’ As hinted at by Brown (2020), some data may be better than no data, but only if teachers truly understand it and know what to do with it.

Lastly, in a remote learning environment, there is not much the teacher can do, with or without technology, to minimise the distractions. Trying to do so could be seen as surveillance rather than support for the learning process (Lupton and Williamson, 2017).

Sources:

Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education. 25(4), pp. 384-400.

Lupton, D. and Williamson, B. 2017. The datafied child: The dataveillance of children and implications for their rights. New Media & Society. 19(5), pp. 780-794. doi:10.1177/1461444816686328

van Dijck, J., Poell, T. and de Waal, M. 2018. Chapter 6: Education. In: The Platform Society. Oxford University Press.


Student Performance Dashboards

Student Performance Dashboard

In my professional life, one of the things I often do before demoing a platform product is create a dashboard to highlight the value delivered by the software – for example, the time saved by automation, the number of cases created and closed, or the increase in clicks or actions on something. As a sales tool, the dashboard shows the ‘art of the possible’. To the paying stakeholder, it shows what they’re paying for. To the user, it should demonstrate that the platform can both create visually appealing reports and provide opportunities for further investigation, like a case gone wrong, if needed. The types of reports that I can create are limited by the available data points. The last sales trick I’ll mention is that you always want to show a wide range of reports to capture the audience’s eye and leave them with something visually pleasing – the software should look good.

When creating the above dashboard, I had a few initial reactions:

  1. I found it much harder to create dashboards in Excel than in the platforms I’ve used.
  2. I immediately started looking at individual data points. I didn’t pause to reflect on the names of the students, i.e. the fact that they were students became surprisingly irrelevant to me very quickly.
  3. If I had more time on my hands, I would have preferred to do this in another tool because the above is not visually appealing. The sales engineer in me would not be proud to show this to a client.

Working in technology, I have found several readings negative and deeply distrusting of my day job. During this exercise, I became intimately aware that I took on this task with my ‘work hat’ on. I went into design mode with the goal that, through the dashboard, I could tell a story about the class and demonstrate the value of the platform providing the data. Going through some of my notes from the readings, this really made me pause and consider the teacher’s perspective. There must be frustration at this disconnect – that teachers are not involved in the process. The engineers and data scientists should be continuously reminded of the students, and of what would be helpful to the teacher, during the design and build process.

One thing that I had failed to do by putting my ‘work hat’ on was to consider what the students were being tracked against – in other words, what learning had been achieved, if any. In taking a step back and considering the story that I would tell, I realized that, for example, the completion data point in isolation doesn’t actually mean much. As a teacher, I would want to understand more around the following:

  • ‘What’ did the student complete?
  • Does completion mean learning, or just ‘I completed it’?
  • How quickly did the student complete ‘it’? How does that compare to the average completion time and rate?
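To illustrate the last question, here is a rough sketch of how a student’s completion time could be put in context against the class average. All names, numbers, and the flagging threshold are invented for illustration.

```python
from statistics import mean

# Hypothetical completion times (in minutes) for one assignment.
completion_minutes = {
    "Student A": 42,
    "Student B": 18,
    "Student C": 55,
    "Student D": 5,  # suspiciously fast: completed, or just clicked through?
}

class_average = mean(completion_minutes.values())

for student, minutes in completion_minutes.items():
    relative = minutes / class_average
    # Arbitrary cut-off: well under a third of the average completion
    # time is worth a second look before 'completed' is read as 'learned'.
    flag = " <-- far below average" if relative < 0.3 else ""
    print(f"{student}: {minutes} min ({relative:.0%} of average){flag}")
```

Even this derived metric only hints at a question for the teacher; it cannot say whether Student D already knew the material or simply skipped it.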

Just as the reports that I can create are limited by the data available, so would be the value of the dashboards for the teachers. Those questions cannot be answered through a dashboard without the right data points, collected at the right time.

On a higher level, I also considered whether these data points – this datafication of the students – actually provide a foundation for personalisation. I think both the technologist and the teacher would argue ‘no’ – much more data and context are needed. If left as is, the teacher would likely not have much use for the dashboard other than reporting final grades and providing data to administrators.

The idea that a teacher would be a “dashboard controller” (van Dijck et al., 2018, p. 123) is interesting to me – is the teacher not already a type of “classroom (or even student) controller”? Many of the data points included in the dashboard are ones the teacher would have, or need to collect anyway, such as the grade on each test.

Thinking about value, it also stood out to me that this dashboard is significantly simpler than the one a teacher would have at AltSchool, as described by van Dijck, Poell and de Waal (2018). This dashboard does not contain detailed data on the students’ minute-by-minute activity, engagement, performance, or behavior. Many would, however, question the point of tracking all that data.

At the end, that’s what it comes down to – the data.

From a teaching perspective: What data is actually needed? Why is it useful? How do we map it to learning (both for the students, and for teacher method improvement)?

From a technology perspective: How do we track difficult data points like emotional intelligence, or empathy? What are the data privacy and security concerns for each data point? Does the student have the right to be forgotten?

Sources:

van Dijck, J., Poell, T. and de Waal, M. 2018. Chapter 6: Education. In: The Platform Society. Oxford University Press.


A Week of Emotions

Last weekend, I stumbled upon Zoo Tycoon in the Xbox Store. Within a few minutes of playing, I was reminded of the ‘Sims’-like indicators that highlight how the guests and the animals are feeling. I found myself fascinated by the idea of these indicators in relation to the visualisations from a teaching standpoint:

What is the value of a teacher having similar indicators for students, highlighting whether students are happy, sad, angry, or simply feeling ‘meh’ (i.e. hitting the pandemic wall)?

From this context, I tried to track my own emotions for the week. For inspiration, I referred to Week 11 of Dear Data, in particular Stefanie’s drawing of colored lines.

Week 7 Visualisation: Week of Emotions

In contrast to Stefanie’s drawing, I decided to create the visualisation with colors flowing from one to the other, because emotions are:

  1. not perfect or precise in timing
  2. sometimes fleeting, and other times long-lasting
  3. sometimes a direct result of something, and other times seemingly spontaneous to others because they were brought on by a thought
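The gaps and estimated transitions show up immediately if you try to model the week as data. Here is a crude sketch, assuming sparse self-reported samples and a simple ‘last reported emotion’ step model – the times and emotions are invented:

```python
# Sparse self-reported emotion samples over a day: (hour, emotion).
# Between samples the true transition point is unknown -- exactly the
# estimation problem I ran into when drawing the visualisation.
samples = [(9, "content"), (12, "anxious"), (15, "content"), (20, "tired")]

def emotion_at(hour, samples):
    """Return the last reported emotion at or before `hour`.

    A crude step model: it assumes the emotion held steady until the
    next report, which is the simplification the drawing avoids by
    letting colors flow into each other.
    """
    current = None
    for t, emotion in sorted(samples):
        if t <= hour:
            current = emotion
        else:
            break
    return current

print(emotion_at(10, samples))  # content
print(emotion_at(13, samples))  # anxious
```

Before the first sample of the day, the model simply has no answer – a data gap that a drawn line can paper over but a dashboard cannot.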

Datafication of emotions is a difficult task for this very reason – emotions are personal; however, facial recognition in education is on the rise (Williamson et al, 2020). Looking back at my week, there are gaps in the data and estimations of when I shifted from one emotion to the other.

As a technologist, I often wonder not if, but how long it will take for facial recognition AI to reach an accuracy rating of 99% across a spectrum of emotions. Even more interesting: can you ‘trick’ the AI, and can it know whether the student on camera is indeed a student and not a deepfake?

Rather than limit or reduce the view of students from a teacher’s perspective (Williamson et al, 2020), I am hoping a dashboard highlighting emotion would prompt action or provide a different perspective. For example, it could help identify if just one student is anxious, or if the class as a whole is anxious.
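As a sketch of that distinction, such a dashboard could compare individual readings against the class as a whole. The readings and the cut-off below are invented purely for illustration:

```python
# Hypothetical per-student emotion readings for one lesson.
emotions = {
    "Student A": "anxious",
    "Student B": "content",
    "Student C": "content",
    "Student D": "anxious",
    "Student E": "anxious",
}

# Which students register as anxious, and what share of the class is that?
anxious = [s for s, e in emotions.items() if e == "anxious"]
share = len(anxious) / len(emotions)

# Arbitrary cut-off: above 50% we treat it as a class-wide signal
# (e.g. an upcoming exam) rather than an individual concern.
if share > 0.5:
    print(f"Class-wide signal: {share:.0%} of students anxious")
else:
    print(f"Individual concern: {', '.join(anxious)}")
```

The same reading thus prompts very different actions: checking in with one student, or rethinking the lesson for everyone.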

As highlighted by Bulger (2016, p. 4), in classrooms teachers are able to leverage learner-centered instruction and personalise their teaching based on “interpersonal cues…. subject matter expertise… knowledge of how people learn, and knowledge of each student, to determine individual needs, adjusting their lessons in response to questions and behaviors”. A major concern here, as with the sleep data I considered last week, is data privacy and its ethical implications. A teacher may ask whether certain emotions should allow for more lenient grading, or how teachers themselves can remain objective while constantly being exposed to the emotions of their students.

From a personal standpoint, I have seen many instances over the last year where the data in the dashboards I present to clients is seen as ‘useless’ because of COVID-19. With emotions, I similarly wonder whether this data is truly useful to the teacher. However, one thing I have learned through experience is that in a remote world we need to over-communicate every action and emotion, even to those close to us, because what we are going through and feeling is unprecedented.

As a final reflection for this emotional data, it may be more important that the teacher have high emotional intelligence and/or understanding of the emotions tracked rather than a deep understanding of the facial recognition AI and training data sets behind it.

Sources:

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available at: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Lupi, G. and Posavec, S. 2015. Dear Data Project. Accessed via http://www.dear-data.com/all

Williamson, B., Bayne, S. and Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education. 25(4), pp. 351-365.