Lindsay's Data Visualisation Blog

4 – incomplete tasks …

  • Week 6, visualisation 4

So … I tried to do something different this week. I'm not quite sure it was successful, but it was an interesting process to reflect on in the context of 'teaching with data'.

All the data I tracked in the first block were measurable, positive things – e.g. reading, interactions and questions. It got me thinking that the absence of some data invited an assumption that an alternative, perhaps less desirable, behaviour was there. For example, with interactions, if I wasn't engaging over a certain period of time, someone could assume (at the extreme) that I was being anti-social. So this week, I tried to track a negative behaviour that I felt might be relevant from a teacher's perspective. I chose instances where I planned to complete a task but did not finish it. My assumption was that, even from a progress perspective, a teacher accessing a dashboard that displays class completeness might use this data to inform decisions about class and learner progression in their learning overall. I think I perhaps came up against a limit here, with the learning data I gathered being "what can be known and is knowable" (Williamson et al., 2020, p. 352).

My first reflection was around the volume of data. Over the four-day period I didn't track the instances where I did complete the task; it was the behaviour of 'not getting it done as intended' that I focused on. This links to the responsibilities of organisations tracking data on learners, and ensuring that the data on which decisions are made is as full as possible. It wasn't always possible for me to complete these tasks – what might have been quite nice was to highlight, at the end of the week, those that remained 'incomplete'. My assumption is that an LA system wouldn't track that I started a task, then revisited it (x times) to get it complete – a simple system would just look for the task to be complete and would report that to the teacher as good progress. Trends in incomplete tasks, by contrast, might cause a teacher to consider an intervention, or even just to check in to support the learner, if the tasks were considered achievable in 'one sitting'.
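To make that distinction concrete, here's a minimal sketch in Python – the names (TaskLog and so on) are my own invention, not any real LA system – of the gap between a dashboard that only checks final completion and one that also surfaces revisits:

```python
from dataclasses import dataclass

# Hypothetical log of one learner's attempts at a single task.
@dataclass
class TaskLog:
    task_id: str
    attempts: int = 0      # times the task was started or revisited
    complete: bool = False

    def record_attempt(self, finished: bool) -> None:
        self.attempts += 1
        self.complete = self.complete or finished

def simple_dashboard_view(log: TaskLog) -> str:
    # A simple system only looks at the final state...
    return "good progress" if log.complete else "incomplete"

def revisit_aware_view(log: TaskLog) -> str:
    # ...whereas a revisit-aware system surfaces the effort behind it.
    if log.complete and log.attempts > 1:
        return f"complete, but took {log.attempts} sittings"
    return simple_dashboard_view(log)

# A task I started, left, and only finished on the third go.
log = TaskLog("week6-reading")
log.record_attempt(finished=False)
log.record_attempt(finished=False)
log.record_attempt(finished=True)
print(simple_dashboard_view(log))  # -> good progress
print(revisit_aware_view(log))     # -> complete, but took 3 sittings
```

Both views are built from the same underlying log; what changes is simply which part of it the teacher is shown.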

Secondly, perhaps by the nature of procrastination, I found it much harder to consciously track a negative behaviour than I had in my weeks tracking more positive behaviours. It didn't encourage me to do things differently – I carried on leaving tasks incomplete as I moved through the week for a range of reasons: distraction, procrastination, laziness (!). The motivational effect of the data on the learner is important to the communicative aspects of the teaching. Without data technologies, similar processes exist in the interpersonal interactions between the teacher and the learner, which are adapted to the learner's moods, for example (Bulger, 2016). If there were data visibility for the learner as well as the teacher on these progress points, there might be an expectation of the instantaneous communication that technology provides. A speedy intervention might have a greater impact for some learners, in combination with the LA. Also, what is the impact of this data on the interactions and relationship between teacher and learner – could it have the potential to change that? I'd say so.

Thirdly, I wondered if procrastination really mattered. If I got the task done, would it be important for the teacher to know that it took me a couple of goes, or, in the context of smaller, broken-down tasks, is it just important that I got there – especially on a volume-based dashboard layout? I felt this mattered less towards the end of the week.

I wondered how I might adapt this data visualisation to be more learner-facing. I know I'm blending the blocks here, but I feel that's useful in my overall thinking about critical data. I wondered if a shift from the objective/numerical to the more qualitative/descriptive on the Y axis could have adequately satisfied the change in audience from teacher to learner.
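As a rough illustration of what I mean – the numbers and band labels below are invented for the sketch, not my actual tracked data – something like this would swap the counted Y axis for descriptive bands a learner might find friendlier:

```python
import matplotlib.pyplot as plt

days = ["Mon", "Tue", "Wed", "Thu"]
incomplete = [3, 1, 2, 0]  # invented counts of incomplete tasks per day

# Qualitative bands replacing the raw numbers on the Y axis.
bands = ["all done", "nearly there", "a few left", "lots left"]

def to_band(n: int) -> int:
    # Clamp a count into the available descriptive bands.
    return min(n, len(bands) - 1)

fig, ax = plt.subplots()
ax.plot(days, [to_band(n) for n in incomplete], marker="o")
ax.set_yticks(range(len(bands)))
ax.set_yticklabels(bands)  # descriptive labels instead of counts
ax.set_title("Incomplete tasks, described rather than counted")
plt.show()
```

The underlying data is unchanged; only the framing of the axis shifts to suit the learner rather than the teacher.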

In short, I'm not sure I'd try to track data like this again – or if I did, I would focus on one 'descriptor', e.g. incomplete work tasks, and track more detailed information, such as what happened and whether the task was eventually completed. From a teaching perspective, this could inform intervention.

1 Comment

  1. Jeremy Knox

    'the absence of some data invited an assumption that an alternative, perhaps less desirable, behaviour was there.'

    I think this is a really important point. Data can never represent *everything* and visualisation inevitably leaves out particular things. However, the absence of a particular behaviour can often appear more significant than it is, because visualisation favours the present and visible.

    ‘So this week, I tried to track a negative behaviour that I felt might be relevant from a teacher’s perspective’

    Really interesting approach! Although, I wonder what this would say about teachers – are they more interested in ‘seeing’ the negative behaviour than the positive?

    'Trends in incomplete tasks, by contrast, might cause a teacher to consider an intervention, or even just to check in to support the learner, if the tasks were considered achievable in "one sitting".'

    Really great reflection here on how this system might be made useful for a teacherly gaze.

    'If there were data visibility for the learner as well as the teacher on these progress points, there might be an expectation of the instantaneous communication that technology provides.'

    This identifies a key aspect of our shift from the theme of 'learning' to that of 'teaching': there is a significant difference between a student responding to a 'self-tracked' data visualisation and a student responding to a teacher 'intervention' prompted by the data. The first leaves the student to interpret the visualisation, and hopefully be 'motivated' by the results. However, the 'teacher dashboard' might suggest that students don't ever see any data, but rather continue a face-to-face relationship with the teacher, only one that is informed by what the data suggested (and is interpreted by the teacher only). It strikes me that there are really significant differences that would play out here.

    'I wondered how I might adapt this data visualisation to be more learner-facing. I know I'm blending the blocks here, but I feel that's useful in my overall thinking about critical data.'

    I think that is a very useful reflective exercise!
