Block: ‘Learning’ with Data / Week 5
This week, I recorded every instance in which my image was reflected back to me by software while I worked from home, through Zoom and Microsoft Teams. I classified these instances by type of activity; whether my camera was on or off (with a profile picture on display); and audience, by number and type (internal team members or faculty). For background, I am a learning technologist, and these measures are a personal codification of my work this week.
I plotted these instances along a subjective “fidget meter”. After each Zoom or Teams session, I noted how often I thought I had fidgeted (made movements with my hands, arms, face, or back) during that time.
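A logging scheme like the one above could be sketched in a few lines of Python. Everything here is hypothetical: the field names, session entries, and fidget scores are illustrative stand-ins, not my actual data.

```python
from collections import defaultdict

# Hypothetical session log; fields mirror the categories described above:
# activity type, camera state, audience, and a subjective fidget score.
sessions = [
    {"activity": "team meeting", "camera": "on", "audience": "internal", "fidget": 8},
    {"activity": "journal club", "camera": "on", "audience": "internal", "fidget": 9},
    {"activity": "consultation", "camera": "on", "audience": "faculty", "fidget": 2},
    {"activity": "workshop",     "camera": "on", "audience": "faculty", "fidget": 3},
]

def mean_fidget_by(sessions, key):
    """Average the subjective fidget score for each value of `key`."""
    groups = defaultdict(list)
    for s in sessions:
        groups[s[key]].append(s["fidget"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Grouping by audience surfaces the pattern noted below: higher scores
# with internal colleagues, lower scores with faculty.
print(mean_fidget_by(sessions, "audience"))
```

Grouping by `"activity"` or `"camera"` instead would give the other cuts of the data.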
Some things I learned:
- I appear to fidget the most during internal team meetings.
- I appear to fidget the least when I’m working with faculty members in consultations or leading workshops.
- The main way I appear to meet with faculty is through consultations and workshops, not meetings.
- I do not turn my camera off while leading consultations or workshops.
Fidgeting is used as an indicator of lower engagement by some algorithms (Chang et al. 2018), so you could say that I’m not very engaged in most meetings. However, the session in which I fidgeted the most was a meeting of our team’s “journal club”. I might have fidgeted a lot, but I was also actively participating in the discussion. I think I fidgeted so much because I was with people I was comfortable with, discussing a more casual topic, in contrast to workshops or consultations with faculty, where I have to act a certain way, or be more ‘switched on’.
I use the phrase ‘switched on’ to imply a different level of engagement, but also a performance. By paying attention to my fidgeting this week, I likely changed how often I fidgeted, performing for my data visualisation, my reflection, and my real-time audience. Meeting people through software like Zoom and Teams, which reflects your image back to you (even with your camera off, you may display a profile picture or your name), also seems to encourage an awareness of self-image that differs from offline meetings.
This awareness or gaze uncovers a performative aspect to the surveillance of bodily data that resists the idea that ‘correct’ forms of engagement can be measured through the body. If we are aware that bodily data, like other learning analytics measures, serve as “proxies for, but not accurate representations of, attentional focus” (Bulger 2016, p.16), we can perhaps find ways to perform focus to surveillance algorithms in queer, resisting or dissembling ways.
Bulger, M., 2016. Personalized learning: The conversations we’re not having. Data and Society, 22(1), pp. 1–29.
Chang, C., Zhang, C., Chen, L. and Liu, Y., 2018. An ensemble model using face and body tracking for engagement detection. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp. 616–622.