This week I tried to set a few simple rules for the work, study and personal aspects of my day, with the idea of measuring how well I executed those rules. I didn’t gather any additional data on why a certain rule was not followed. The following legend identifies the six rules to be monitored:
- Need to stop working after 6pm – Work Rule
- Camera must be always on for work calls – Work Rule
- Stop using the phone at least 1hr before sleeping – Personal Rule
- No eating carbohydrates for a week – Personal Rule
- Coffee and tea should be limited to four cups or fewer, with four considered borderline – Personal Rule
- Study for at least 1hr a day – University Rule
I used the traffic light data visualisation to show adherence: Green – 100% following the rule, Red – 100% missing the rule, and Amber for anything in between. The data was collected from Sunday to Thursday, my complete working week (no weekends).
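The encoding above is simple enough to express in a few lines of code. The following is a minimal sketch in Python (the data values are made up for illustration, not my actual week):

```python
def traffic_light(adherence: float) -> str:
    """Map a rule-adherence fraction (0.0 to 1.0) to a traffic light colour.

    100% adherence -> Green, 0% -> Red, anything in between -> Amber.
    """
    if adherence >= 1.0:
        return "Green"
    if adherence <= 0.0:
        return "Red"
    return "Amber"


# Hypothetical scores for one rule across my Sunday-Thursday working week
week = {"Sun": 1.0, "Mon": 0.5, "Tue": 0.0, "Wed": 1.0, "Thu": 0.5}

# Build the dashboard view: one colour per day
dashboard = {day: traffic_light(score) for day, score in week.items()}
print(dashboard)
```

This collapses a continuous measure into just three states, which is exactly what makes the visualisation quick to read and, as discussed below, what strips away the story behind the numbers.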
Other than the rule about keeping the camera on for work (on Thursday it was a group call and the camera was not mandatory, so I had it off), I have not been following any of the other rules! For work, I had a heavy start: I’m going on leave the following week, so I was motivated to work late hours to finish more pending work. For coffee I was mostly Amber – exactly four cups a day. For university studies I fell behind, especially as I was working more. Wednesday was a good day: I was able to catch up on work and studies, but I cheated on food.
We use the traffic light indicator visualisation a lot at my work to provide a dashboard performance view of sales, revenues and work-related KPIs. It is an oldish system that is good at giving a quick single-measure update but does not really tell the story. One can sometimes notice trends in traffic light systems. For example, at the beginning of the week I was more stressed with work, so I drank more coffee than towards the end, when I was relatively more relaxed and the week was nearly over. We can also link late working hours to the lack of studying.
From a governing-with-data point of view, I wanted to approach this from the perspective of learning or educational institutions whose administrations aim to govern through similar dashboards, passing judgement on teachers and students using a singular view of the data. Many rules in governance models concerning compliance or adherence to learning objectives could be formulated without looking deeper or understanding the story behind the results.
There are many factors that can have a significant impact on the results and the way they are interpreted. In my data collection, I deliberately didn’t collect any additional information about why each target was or wasn’t met, the time of day, mood or emotional status, level of stress, external factors, environment (being home all the time), etc. Contextual information and knowledge are important in education to drive meaningful and relevant learning policies, rather than “quick” and maybe “cost-effective” mass policy formation and governance models dependent on massive collections of single data points. I found the following paragraph from Ozga (2016) relevant to this topic.
Statistical data reduce the complexities of new national and local education practices through their selection of key indicators on the basis of which schools may be compared, and these ‘thin descriptions’, stripped of contextual complexity, make statistical data a key governing device (Ozga et al., 2011). Furthermore, because there is such a strong emphasis from policy-makers on ensuring that these data enable comparisons to be made (whether of pupil performance, teachers, pupil types), the knowledge claims that are most powerful are those that are de-contextualised, trans-historical and trans-situational, indeed:
“…the decline or loss of the context-specificity of a knowledge claim is widely seen as adding to the validity, if not the truthfulness, of the claim” (Grundmann and Stehr, 2012: 3).
Ozga, J. (2016). Trust in numbers? Digital education governance and the inspection process. European Educational Research Journal, 15(1), pp. 69–81.
Another inventive approach to the dataviz task, and a very engaging commentary. It’s great to see you drawing on the Ozga paper and her notion of dataviz as “thin descriptions” that work effectively as governing devices. As you yourself have observed, traffic light visualizations can indeed act as thinly descriptive governing devices and are attractive to policy makers as a kind of “source of truth”. But what sort of truth-telling is going on here, and with what effects for those who are governed by such devices? I look forward to your final reflections on governing and policy with data.
Thank you Ben for your feedback, and I’m looking forward to the end of this Block. I’m not sure about keeping up, as I’m on personal leave this week and travelling with family – overwhelming amounts of data and so little time!!
Very interesting. Traffic lights are also widely used for immediate feedback in young learner classrooms. Usually it’s green – I understood; yellow – not sure about some aspect; red – I didn’t understand. I never liked it, for the reasons you mention after Ozga: it’s superficial. I did use it in assessed classes though, because it ‘ticks the box’. I don’t think I have to pass comment on that one!
As someone who works as a health coach helping people develop habits that stick, I’d love to see the reasons why. But also, could the number of new rules be the reason you struggled with so many? Usually the best approach is to change one thing at a time, slowly.
Thank you for your feedback. My whole job revolves around traffic light reporting, and it takes me double the effort to explain the reasoning behind each target measurement… as you said, it gives you the tick in the box. I agree about the rules and trying to change one at a time, but I needed data to show some relations or correlations.