This week I returned to Discord to track my conversations at work. During the first Discord data-tracking activity I focused on recording who sent a message, when it was sent, what type of space it was going to (private or “public”), and how I viewed it. For this activity I focused on the context of the messages and noted whether a message was (i.e. could be perceived as) “on” or “off” task; whether it contained a question, a file, or an image; and whether it received emojis. Using this data, I developed the following visualization.
![](https://cde21.education.ed.ac.uk/dgidcumb/wp-content/uploads/sites/5/2021/03/week-10_data-vis-793x1024.jpg)
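To make the coding scheme concrete, here is a minimal sketch in Python of the kind of record this tracking produces per message. The field names are my own illustrative choices, not a real schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record structure for one tracked Discord message.
# Field names are illustrative, not an actual coding sheet.
@dataclass
class TrackedMessage:
    sender: str                  # who sent the message
    sent_at: datetime            # when it was sent
    space: str                   # "private" or "public"
    on_task: bool                # could it be perceived as on-task?
    has_question: bool = False   # does it contain a question?
    has_file: bool = False       # does it contain a file?
    has_image: bool = False      # does it contain an image?
    emoji_reactions: int = 0     # how many emoji reactions it received

# Example entry:
msg = TrackedMessage(
    sender="colleague_a",
    sent_at=datetime(2021, 3, 15, 9, 42),
    space="public",
    on_task=True,
    has_question=True,
)
```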
The shift to remote working has intensified and accelerated employers’ use of surveillance software on their employees and, for many, the boundaries between personal and professional lives have been blurred, if not destroyed. Employee surveillance software isn’t necessarily new; keyloggers and web-traffic monitoring have been implemented in offices for quite some time. AI-powered software is quickly being adopted and advertised as able to provide deeper insights into employees’ mental states and satisfaction, and can be used to schedule check-in/intervention meetings, identify areas of improvement (and success), or manage employee workloads. Data such as Discord messages are a prime target for these surveillance tools, as bots can be easily integrated into the platform to monitor employee communications in real time.
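To illustrate how low the technical barrier is, the sketch below uses the discord.py library to log exactly this kind of message metadata as it arrives. It is a minimal, hypothetical example: the token, the keyword list, and the “off-task” rule are placeholders, not any real product’s logic.

```python
import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read message text
client = discord.Client(intents=intents)

# Hypothetical "off-task" keywords an employer might scan for.
OFF_TASK_WORDS = {"lunch", "game", "weekend"}

@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        return
    # Log who said what, where, and when -- the same metadata
    # tracked in this activity, but captured silently and at scale.
    record = {
        "sender": str(message.author),
        "channel": str(message.channel),
        "sent_at": message.created_at.isoformat(),
        "off_task": any(w in message.content.lower() for w in OFF_TASK_WORDS),
    }
    print(record)  # in practice: shipped to an analytics dashboard

client.run("BOT_TOKEN")  # placeholder token
```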
There are significant limitations, however: language is complicated and nuanced, which often leads to misinterpretations and false conclusions. For example, last year a chess podcast was automatically removed by YouTube’s auto-moderation due to the frequency of words like “black”, “white”, “attack”, and “dominates”, with the algorithm apparently interpreting the conversation as potentially racist. In a professional setting, misinterpretations such as these could have significantly more serious consequences.
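A toy example makes the failure mode concrete. The flagger below is purely hypothetical (it is not YouTube’s actual system), but it shows how context-free keyword matching can condemn an innocent chess sentence:

```python
# Hypothetical keyword-based flagger -- not YouTube's moderation
# system, just an illustration of context-free matching misfiring.
FLAGGED_TERMS = {"black", "white", "attack", "dominates"}

def looks_problematic(text: str, threshold: int = 3) -> bool:
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = sum(1 for w in words if w in FLAGGED_TERMS)
    return hits >= threshold

chess_commentary = "White attacks the black queen and dominates the center."
print(looks_problematic(chess_commentary))  # True: an innocent sentence flagged
```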
Within the realm of education, similar language-monitoring software has been suggested to assist instructors in grading and giving feedback on students’ writing skills (argument, vocabulary, syntax, style, etc.) and to develop personalized curricula. Educational organizations such as ETS already offer software for students and educators to evaluate student writing and language learning. Similar software could be integrated into discussion forums to analyze student questions and comments and provide “feedback” on student thinking, depth of understanding, and levels of engagement and satisfaction. These metrics, alongside other forms of assessment, could be used to set standards or policy and to evaluate teachers. Additionally, and this may be a bit too Orwellian, lectures given by teachers and student engagement levels (a combination of web traffic and geolocation data) could be analyzed in real time and used in teacher performance evaluations (or to monitor the pace, difficulty, or content of a course).
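As a rough illustration of what such writing-evaluation metrics can rest on, here is a crude sketch of a single lexical feature, vocabulary diversity measured as a type–token ratio. Real systems such as ETS’s combine many richer features; this is not their implementation:

```python
# Crude sketch of one lexical feature automated writing-evaluation
# tools draw on (among many richer ones): vocabulary diversity,
# measured here as a simple type-token ratio.
def type_token_ratio(text: str) -> float:
    tokens = [w.strip(".,;:!?").lower() for w in text.split()]
    tokens = [t for t in tokens if t]
    return len(set(tokens)) / len(tokens) if tokens else 0.0

essay = "The data show the trend. The trend in the data is clear."
print(round(type_token_ratio(essay), 2))  # lower ratio = more repetitive vocabulary
```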
‘For this activity I focused on the context of the messages and noted whether a message was (i.e. could be perceived as) “on” or “off” task; whether it contained a question, a file, or an image; and whether it received emojis’
Useful idea to return to previous data-tracking activities, particularly now that we’re focused on the ‘governing’ theme.
This is a really neat visualisation, and one can see the relationship between ‘work’ and ‘off task’ activities really clearly. It is great to see you reflecting on the implications of increased surveillance due to the shift to remote working, and one can clearly see how visualisations such as this might be put to use in categorising ‘on-‘ and ‘off-task’ behaviour.
The link to education is really important here, and I agree that one could see this kind of tracking being used to track students. I wonder if we should resist the idea of being ‘on-task’ in education, though. Being ‘on-‘ or ‘off-task’ might make sense in a work environment, but can we as easily define what is engaged learning and what is not? Seems like the aim here would be to make education more efficient, but is that how learning works?
Of course, thinking about the ‘meaning’ behind such behaviours is important:
‘Similar software could be integrated into discussion forums to analyze student questions and comments and provide “feedback” on student thinking, depth of understanding, and levels of engagement and satisfaction’
And this seems like a logical step. However, I wonder if we then need to question the ways such software would categorise what students are doing – your example of misinterpretation seems pertinent here, and points to the ways in which such systems aren’t always very good at understanding context.