This week I tried to set a few simple rules covering the work, study, and personal aspects of my day, with the idea of measuring how well I executed them. I did not gather any additional data on why a certain rule was not followed. The following legend identifies the six rules to be monitored:
- Need to stop working after 6pm – Work Rule
- Camera must be always on for work calls – Work Rule
- Stop using the phone at least 1hr before sleeping – Personal Rule
- No eating carbohydrates for a week – Personal Rule
- Coffee and tea limited to 4 cups or fewer a day, with 4 considered borderline – Personal Rule
- Study for at least 1hr a day – University Rule
I used a traffic light visualisation to show compliance: Green – 100% following the rule, Red – 100% missing the rule, and Amber for anything in between. The data was collected from Sunday to Thursday, my complete working week (no weekends).
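The traffic-light mapping above can be sketched in a few lines of code. This is a minimal illustration, not the tool I actually used; the rule names and compliance percentages below are hypothetical placeholders, not my recorded values.

```python
# Minimal sketch of a traffic-light compliance mapping.
# All rule names and percentages are hypothetical examples.

def traffic_light(compliance: float) -> str:
    """Map a 0-100% compliance score to a traffic-light colour."""
    if compliance >= 100:
        return "Green"   # 100% following the rule
    if compliance <= 0:
        return "Red"     # 100% missing the rule
    return "Amber"       # anything in between

# One entry per day, Sunday to Thursday (hypothetical data).
week = {
    "Stop working after 6pm": [0, 0, 50, 100, 0],
    "Camera on for work calls": [100, 100, 100, 100, 50],
}

for rule, days in week.items():
    colours = [traffic_light(c) for c in days]
    print(f"{rule}: {colours}")
```

The single threshold function is what makes the indicator "quick but thin": it collapses every day's context into one of three colours, which is exactly the limitation discussed below.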
Other than the rule about keeping the camera on for work (on Thursday it was a group call where the camera was not mandatory, so I had it off), I have not been following any of the other rules! For work, I had a heavy start: I was going on leave the following week, so I was motivated to work late hours to clear more pending work. On coffee I was mostly Amber – exactly 4 cups a day. On university studies I fell behind, especially as I was working more. Wednesday was a good day, since I was able to catch up on work and studies, but I cheated on food.
We use the traffic light indicator a lot at my work to provide a dashboard performance view of sales, revenues, and work-related KPIs. It is an oldish system that is good at giving a quick single-measure update but does not really tell the story. One can sometimes notice trends in traffic light systems; for example, at the beginning of the week I was more stressed with work, so I drank more coffee than towards the end, when I was relatively more relaxed and the week was winding down. We can also link late working hours to the lack of study time.
From a governing-with-data point of view, I wanted to approach this from the perspective of learning or educational institutions whose administrations aim to govern through similar dashboards, passing judgement on teachers and students using a singular view of the data. Many governance rules on compliance or adherence to learning objectives could be formulated without looking deeper or understanding the story behind the results.
There are many factors that can have a significant impact on the results and the way they are interpreted. For my data collection, I deliberately did not record any additional information about why each target was or was not met: time of day, mood or emotional state, level of stress, external factors, environment (being at home all the time), etc. Contextual information and knowledge are important in education for driving meaningful and relevant learning policies, as opposed to "quick" and perhaps "cost-effective" mass policy formation and governance models dependent on massive single points of data. I found the following paragraph from Ozga (2016) relevant to this topic.
Statistical data reduce the complexities of new national and local education practices through their selection of key indicators on the basis of which schools may be compared, and these ‘thin descriptions’, stripped of contextual complexity, make statistical data a key governing device (Ozga et al., 2011). Furthermore, because there is such a strong emphasis from policy-makers on ensuring that these data enable comparisons to be made (whether of pupil performance, teachers, pupil types), the knowledge claims that are most powerful are those that are de-contextualised, trans-historical and trans-situational, indeed:
“…the decline or loss of the context-specificity of a knowledge claim is widely seen as adding to the validity, if not the truthfulness, of the claim.” (Grundmann and Stehr, 2012: 3)

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp. 69-81.