Tracking Sleep

This week, I tracked sleep quantity and attempted to relate it to the amount of caffeine I consumed throughout the day. A good night’s sleep can make a world of difference in my own day-to-day, and correlates with whether or not I need more than one cup of coffee to feel human.

Shifting from the learning perspective to the teaching perspective when considering the importance of data, I found myself wondering this week what teachers would be interested in knowing about their students. Sleep, I imagined, would be one of them. Research has shown that lack of sleep, or poor quality sleep, can negatively impact a student’s ability to focus and do well in school (Sharma, 2014). When interacting face-to-face, teachers can easily pick up on body language and other cues to determine how well rested a student is feeling. In a virtual classroom, reading those same cues becomes quite difficult, especially if the student is unable to (or doesn’t want to) use their camera. In the virtual class session this week, Ben Williamson shared a recent article in CNN Business (Chan, 2021) stating that emotion recognition AI may help teachers identify whether students are happy, sad, angry, surprised, or fearful. With this advancement, it’s likely only a matter of time before we can add sleep deprived to the list.

When creating the visualisation, I put on my technology hat to put together an easily understood sleep diagram – something a teacher could use to determine within a few seconds whether the student was well rested, or whether lack of sleep could explain a lack of motivation.

One thing that I do in my professional life is create dashboards for data collected on a business’ physical locations, such as review ratings. One of the ways we can visualise the ratings in the tool I work with is a geographical map that uses a RAG (red/amber/green) status to highlight which locations are doing well and which could use some improvement, coloring each location’s dot red, yellow, or green depending on its average review rating (on a 1–5 scale, with 1 being the worst rating).
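As a minimal sketch of this kind of rating-to-colour mapping – with hypothetical thresholds, since the real cut-offs are configured in the tool I use – the logic might look like:

```python
def rag_colour(avg_rating: float) -> str:
    """Map an average review rating (1-5, 1 = worst) to a RAG status.

    The thresholds below are illustrative, not the tool's actual ones.
    """
    if avg_rating >= 4.0:
        return "green"   # doing well
    if avg_rating >= 3.0:
        return "yellow"  # could use some improvement
    return "red"         # needs attention

# A location averaging 4.6 stars would show as a green dot on the map.
print(rag_colour(4.6))
```

The point of the RAG encoding is exactly this reduction: a viewer doesn’t need the underlying numbers to spot which dots need attention.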

My first instinct was to use the same method for my own sleep pattern for the week, imagining that a teacher could look at the size and the color of the circle to determine within a split second whether or not I was well rested. As highlighted by Williamson, Bayne, and Shay (2020), teachers often have a limited view of students in large, or online, programs. The goal of this dashboard was to improve that view and be a window into one thing they would likely notice if interacting with the student face to face.

Sleep by RAG status and caffeine consumption

In reality, there are concerns with tracking sleep. One, also highlighted by Williamson, Bayne and Shay (2020), is that it ‘may change how teachers view [students]’. I would argue that when it comes to sleep quality, a teacher viewing a student differently could be a positive thing, in that the teacher may be able to adapt teaching methods, or dig deeper into why the student is not getting enough sleep. As an educator, though, I may be asking whether this is my responsibility.

Another concern is data privacy. Should sleep quality be a data point that teachers have access to for their students? How would students report the data – self created sleep diaries, or through a wearable device? Should sleep data be considered under GDPR?

From a student’s perspective, if the teacher shares a clear correlation between sleep and their performance, could it give the student anxiety and contribute to additional sleep disturbance? I’m sure we can all relate to a scenario where we lie in bed at night desperately wanting to fall asleep because we have a big day the following day.

In hindsight, while I can relate my increased caffeine consumption to my own lack of sleep, it wouldn’t necessarily be relevant for a teacher. Many children, I would hope, don’t consume caffeine to make up for lack of sleep the way adults do. Rather, I can imagine that teachers would be interested in knowing whether or not high-stress events are directly contributing to the lack of sleep.

In a world where we are now hiding behind the Zoom camera, these data points could help the teacher understand the student in more context, even if the additional dashboard (or data point) may unfortunately contribute to additional ‘datafication’.


Chan, M. (2021, February 21). This AI reads children’s emotions as they learn. CNN Business. Retrieved from

Sharma, M. (2014). G153(P): A study of the role of sleep on health and scholastic performance among children. Archives of Disease in Childhood, 99, A68.

Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351–365.

Overview Reflections

Block 1 Summary

Over the last four weeks, I have tracked:

Each data collection has been bound by a sense of time, in that each visualisation had a foundation in time (a time interval, or a day of the week). The visualisations have been structured, used common symbols, and included the elapsed time interval to show progression through the week.

From a personal perspective, the data that I collected reinforced what I already knew about myself… and along the same lines, the visualisations highlighted aspects of my personality. The biggest observation is that completing these visualisations became a moment of pause and self-reflection. In those moments of reflection, the data points in context became more personal than I could have imagined.

This reflection, especially around my own personality shining through in the visualisation, comes as a direct result of sitting with the data and producing the visualisation by hand. The phone usage data that I analysed through my iPhone dashboard did not have the same personality. It was simplified down to standard charts and graphs. The data could have been anyone’s. In a way, I became anonymous when looking at that dashboard, even though one could argue that a phone is now one of life’s most personal possessions.

This exercise has changed my perspective on the relationship between data and learning by reminding me that people are often the source of the data points: the data that I tracked over the past four weeks put me at the centre, as the source of the data.

Over the course of time (and with the introduction of technology), we have been desensitised by continuous streams of statistics and data. This desensitisation likely makes us pause to think ‘huh, that’s interesting/sad/exciting/etc.’ for only a split second, but rarely forces us to take in the true meaning or impact of the data.

COVID-19 is the harshest daily reminder of this. Since January 2020, we’ve had a continuous stream of data points related to COVID-19. At first, fear increased alongside the case count and number of deaths. Everywhere you looked in the media, there was a story of someone’s daughter, son, wife, husband, parent, grandparent, teacher, or colleague. Today, 13 months later, we see far fewer of these personal stories, and we have become desensitised because we had to find a way to cope and continue.

The readings for Block 1 have highlighted that when it comes to the relationship between data and learning for students, it’s vital to collect and provide data that will demonstrate learning. Our discussions on Twitter demonstrated that many questions around learning analytics force us back to the question – “how do we define ‘learning’?” We’re reminded that the definition of learning is multi-faceted, and that the data collection intimately linked to it should serve a purpose. As highlighted by Bulger (2016), the collection and use of student data may infringe upon the student’s right to privacy. We also have to consider that a data point may show that a student grasped a concept, but only in context.

For example, a student may have answered a multiple choice question correctly, but without evidence, will we know whether the student answered correctly or guessed correctly? Submitted calculations for a maths question are easy proof, but proving understanding of the theme of a novel, or emotional intelligence, is not an easy task.

If a student is using data-driven technology and happens to guess right several times in a row, this may lead them down a path that was not intended. We have to ask whether the technology includes the ability to course correct, and if so, how easily (or quickly) it can do so. Which data points in this scenario would help identify the need to course correct? These would not necessarily be the same data points as the ones collected to prove understanding or learning of a concept.

In summary, the collection of data, its use, and ultimately the questions it is trying to answer are complex. The exercise has forced me to take a step out of my technology profession (and technology-led life).

It’s been a healthy reminder to pause and reflect on what data needs to be collected, what it does (and does not) demonstrate, and why it is being collected. Most importantly, this is because behind the data point, there is a student.

Bulger, M. (2016). Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available:

What data points are needed to demonstrate learning?

Week of Questions

This week I tracked the number of questions that I was writing down on a notepad throughout the week. I categorised them into three categories – work, personal, and school related.

When selecting the type of data that I wanted to focus on this week, I wanted to be intentional about not tracking something related to time. Instead of looking at the time interval when I had a question, I wanted to see the higher level theme of where my train of thought flowed on a daily basis as well as throughout the week.

The visualisation reads from top to bottom, with a few questions likely missing from Sunday as I’m writing this around 3–4 PM. There is no mention of time; rather, the symbols are representative of my question list for the day. To use Monday as an example, I started the day by writing down a personal question, then opened my laptop to begin working. As I worked, I wrote down a few work-related questions. As the day passed, you’ll notice that most of the questions later in the day were personal again.

As a first reflection, this was harder to track than I originally thought. There were several moments this week when I stopped myself to ask whether I had remembered to write every question down. I am sure there are a few that I missed – questions that were in the back of my mind but that I simply forgot to write down, or that slipped past when multiple things were happening at once, distracting me from the data collection task.

This reflection reminded me of an article that I read back in November about how the pandemic is likely affecting memory. At the time, I felt like I was losing it because I was forgetting things left and right. In reading about memory, I stumbled on the article and bookmarked it to remember the tips given. Hammond (2020) reported that the pandemic is likely affecting our memory because our day-to-day has become so monotonous: we have fewer events to anchor our memories to, less social interaction, and an overall lack of variety. These factors have in the past shown a correlation with worsening memory, so it’s not unusual that we question our memories more the longer the pandemic continues.

Secondly, the number of personal questions stood out to me as I was reflecting on the day, but it doesn’t actually come as a surprise. This was not a usual work week, in that I was participating in an online conference that had several presentations focused on us as individuals, e.g. a motivational speaker. As a result, this week had a big focus on asking personal questions, e.g. ‘what do I want to…’, ‘can I…’, ‘should I…’, etc.

In a previous post, I mentioned that I try my best to separate my work and personal life. This week, that separation broke down, as the presentations I participated in had a personal focus even though they would be classified as a work activity.

Over the past three weeks, the most important reflection for me is a renewed appreciation for how ‘personal’ the data actually is when looking at data points from the perspective of, or at the level of, an individual.

It’s also highlighted the importance of identifying the question you want to answer prior to embarking on the journey of data collection.

As someone who doesn’t work as an educator, each data collection has provided an opportunity for self-reflection and for learning about my own habits and behavior. Focusing on questions this week dug deeper than tracking something ‘surface level’. By this I mean, for example, the distinction between talking about the weather and asking a question to truly understand how someone is feeling. It’s made me reflect on whether or not a student should have access to the list of data points collected, as well as the questions being asked (i.e. what counts as learning).

Additional Reflections:

  1. I’d like to do more of a deep dive into the questions, but with the public nature of this post, I have chosen to keep it high level and not share my sub-categories, or the questions themselves as that would be sharing what I see to be personal data.
  2. One way to view the ‘missing questions’ is human error… and I have to admit, I find tracking data using technology much easier than relying on myself to do it properly.
  3. Looking back at the questions, some days it’s in reality more like a to-do list than a list of questions that require me to figure something out, get an answer, or reflect.

Hammond, C. (2020, November 16). Lockdown has affected your memory – here’s why. BBC. Retrieved from

Data & Technology

Thinking about emails

Last week, in line with collecting data using technology, I was also tracking work emails for my visualisation. I wanted to understand how many emails were coming in and going out, as well as how many of those were related to calendar invites.

In a previous post looking at my music habits, I realized 50% of my time is spent in meetings. So when looking at emails, rather than trying to categorize them into several buckets, I wanted to simply track which ones were invitations to meetings. Below is my visualisation of my attempt to track my work emails.

Work emails for the week – incoming vs outgoing from a calendar invite perspective

My work doesn’t require me to spend a lot of time in email; the organization prefers using Slack to communicate. Since the shift to Slack at the start of the pandemic, a lot of what used to arrive by email now arrives in Slack. That left me with a total of 86 incoming emails last week, 32 of which were related to a calendar invite – meaning 37% of my incoming email is a calendar invite.
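That 37% is simply the invite count over the total, rounded to the nearest percent – a quick sanity check of the numbers above:

```python
incoming = 86           # total incoming emails last week
calendar_invites = 32   # of those, related to a calendar invite

share = calendar_invites / incoming * 100
print(f"{share:.0f}% of incoming email was a calendar invite")  # 37%
```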

For the visualisation, I decided on squares to represent an incoming email. If the email was related to a calendar invite, I added a ‘C’ to the square. For sent emails, I chose circles. An ‘S’ inside the circle represents a calendar invite that I responded to (rather than a sent email). The vertical columns read Monday to Sunday, left to right.
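A rough way to express that encoding (with made-up sample data – the real diagram is drawn by hand) would be:

```python
def symbol(direction: str, is_invite: bool) -> str:
    """Squares for incoming emails, circles for sent ones.

    'C' marks an incoming calendar invite; 'S' marks an invite I responded to.
    """
    if direction == "in":
        return "[C]" if is_invite else "[ ]"
    return "(S)" if is_invite else "( )"

# Hypothetical Monday: two incoming emails (one an invite),
# one sent email, and one invite response
monday = [("in", False), ("in", True), ("out", False), ("out", True)]
print(" ".join(symbol(d, c) for d, c in monday))  # [ ] [C] ( ) (S)
```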

One of the first things that I realized when embarking upon this week’s visualisation is that I am tied to the idea of when something happens in the data collected. Looking back, each one of my visualisations is related to a time interval. Only adding squares and circles as they came in without a dimension of time seemed almost frightening; hence the sun rising, midday sun, and moon on the left hand side.

In reflecting on this throughout the week, I am coming to the realisation that this is most likely because I try my best to have a work life balance – a time for work and a time for ‘me’. For example, I have a separate work phone and computer, so once they’re switched off, they remain switched off until the next work day.

As you can see in the visualisation, last week’s incoming emails were consistent throughout the day, even with some extending late into the evening on Friday. As I work for a US-based company, my inbox often collects emails after UK work hours. The one thing I can visually see in this data collection is that my direct colleagues are great at not sending emails on the weekends unless it’s necessary.

While I set out to only look at calendar emails, I realized this week that a calendar email often comes alongside a ‘case’ email, meaning the calendar invite is only 1% of the process. The other 99% is the prep that leads up to the meeting with the customer, and that information is in the case email. In hindsight, I should have also tracked how many case emails I received, and perhaps even color coded the emails to relate each to a specific customer meeting.

This reflection left me thinking back to the idea that “…’what counts’ as education when it comes to digital data is what can be counted” (Williamson, 2017, p. 46).

What if what counted for me was emails because this is one of the few things that can be counted in my work?

Or, alternatively, the number of meetings I attended? If this was the case, I would most likely have a poor performance.

Williamson, B. (2017). Big Data in Education: The Digital Future of Learning, Policy and Practice. London: SAGE.

Personal Data Reflections

Thinking about Personal Data

Like many others in the course, I’ve been confined to a radius that I can explore on foot (or by bike, if I’m feeling adventurous!). I spend the majority of my time in my apartment either working from home or trying to stay busy by reading, watching a show, cooking, or talking to family. All of these activities require me to use either my computer, TV, or phone. You may be asking: why is she using her phone when she cooks? I’m not a very talented chef, so inspiration as well as instructions are crucial to my success in the kitchen. I usually find both through the Pinterest app on my phone.

With the ability to use technology to track personal data, I decided it would be interesting to really dig deep into my screen time. Screen Time has long been a feature of iOS, but I’ve never actually sat down to think about it beyond the occasional – ‘oh wow, I spent 35% more time on my phone this week!?’

When looking into Screen Time, you can see a weekly as well as a daily overview. Both overviews include screen time in minutes, the number of pickups, and notifications. The weekly overview tracks Sunday through Saturday.

The statistics are as follows:

  • Daily average usage – 2 hours 49 minutes
  • Total Screen time – 19 hours 45 minutes
    • This was up 11% from the previous week
  • Time breakdown by type of app:
    • Social – 8 hours 34 minutes
    • Health & Fitness – 2 hours 1 minute
    • Productivity & Finance – 1 hour 21 minutes
  • 5 Most used applications, ranked most to least:
    • Instagram
    • SWEAT
    • Chrome
    • WhatsApp
    • Email
  • Average daily pickups – 78 times per day
    • Most pickups on Tuesday
  • Average daily notifications – 99 per day
    • Most notifications on Wednesday
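As a quick check, the reported daily average is consistent with the weekly total:

```python
total_minutes = 19 * 60 + 45   # total screen time: 19 h 45 min
daily_avg = total_minutes / 7  # the overview covers Sunday-Saturday

hours, minutes = divmod(round(daily_avg), 60)
print(f"daily average: {hours} h {minutes} min")  # daily average: 2 h 49 min
```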


The good:

  • I spent two hours working out.
    • Great reminder that even though I feel like a couch potato, I did do something good for myself this week.
  • Sundays are my ‘phone’ quiet day.
    • Most of the time, I leave my phone in another room unconsciously on Sundays. I tend to do most of my house chores on Sundays and will use Alexa to play music. Because I don’t hear the notifications, or have it close, I don’t use it as much.
  • The notifications I’m really interested in are the ones from my email, messaging apps and Instagram… not the news apps or Fitbit notifications, as I had thought.

The bad:

  • I spend way too much time on Instagram.
    • This may be a new goal of mine over the next few weeks – to turn off notifications on this app and try to remind myself that I can be doing something else to fill the time, like reading.
  • I get a ridiculous amount of emails.
    • Note – the phone that I was tracking this on was my personal phone, not my work phone. So not a single one of the 220+ emails that I got over the week was a work email… they were mostly newsletters and sale emails.
  • Picking up my phone 78 times a day on average is somewhat embarrassing to see.
    • I am someone who doesn’t think they are dependent on their phone, but this says otherwise.

Reflecting on the article by Eynon (2015), I can now see how tracking specific data points could impact (positively or negatively) the person trying to understand the data on the other side. Looking at my own personal data, there were things that surprised me and others that I was embarrassed to admit. If a student is in a similar situation, the data could similarly shape how they behave in the future.

For certain data points, educators and parents are sure to want to reinforce positive trends and prevent negative ones. Either way, the question remains whether this tracking positively influences the student’s sense of self, their development, and their creativity. In thinking about my own personal data, I want to limit the time I spend on Instagram, but in many ways it is also a source of positivity and laughter during a time when these are scarce. Feeling embarrassed about the time spent will now impact how I act in the future.

Overall, this exercise was a great self reflection on where I spend my lockdown life time. Over the next few weeks, this is definitely something I’ll come back to, especially to see if I’ve managed to limit my Instagram time.

Eynon, R. 2015. The quantified self for learning: critical questions for education, Learning, Media and Technology, 40:4, pp.407-411, DOI: 10.1080/17439884.2015.1100797