Commentary on the data visualisation task

All data visualisations produced in this course

A quick summary of all the data visualisations I created for CDE:

In week 3, I used log data from the Chrome browser and from social media platforms to visualise how much time I spent studying over a week.

In week 4, I manually collected data about my distractions while completing the prescribed reading.

In week 5, I manually collected data about my thought process while completing the prescribed reading.

In weeks 6 and 7, I recorded how I used a highlighting pen while completing the prescribed reading. This was informed by the highlighting and annotating features of the Kindle e-reader.

In week 8, I counted all the emails I sent at work and visualised them by category and by the day on which they were sent.

In week 9, I visualised all the emails I replied to at work according to how long I took to reply.

In week 10, I visualised the times at which I accessed Moodle for work during the week.

In week 11, I visualised my eye gaze and head movements while watching an instructional video.

Overall, I went through 36 sheets of paper for drafting and 14 sheets for the finalised drawings I produced each week.

Reflections around data collection and visualisation

This data collection and visualisation task was mainly inspired by Lupi and Posavec (2016), who committed to exchanging postcards of hand-drawn data about their day-to-day lives over a period of 52 weeks. Like Lupi and Posavec (2016), I found that doing this data collection made me more “in tune” with myself. Over weeks 4 and 5, the data collection was manual and real-time, and it made me acutely aware of being tracked. The act of recording my thought process while reading an article forced me to focus on that process, accepting and recording all my thoughts and distractions. For weeks 4 and 5, the data collection itself supported my metacognition, giving me more insight into my learning behaviour (Eynon, 2015). However, I am also aware of my bias towards choosing data collection methods that involve less of myself. I felt this was heavily influenced by my Biomedical Science background, where I am used to being the objective observer and to thinking that data would be more accurate if collected without my knowledge. Admittedly, my data collection for some weeks was not very well thought through, and as a result some of it was retrospective and reliant on log data from digital platforms. My occasional success in salvaging visualisations from such log data made me realise how much of my online behaviour is already being tracked without my knowing.

In the process of collecting and visualising data throughout this course, I used pre-assigned categories for my data collection rather than writing elaborate “field notes”. While that made the data collection process more manageable, it meant that the data were essentially screened and bracketed into categories as they were being collected. This experience demonstrated to me that data collection is itself a process of creating proxies for a phenomenon, and hence that “raw data” is an oxymoron (Gitelman, 2013). Similarly, the datafication of education means that the teaching and learning process is being microdissected into data points (Williamson, Bayne & Shay, 2020). This ushers in a reductionist understanding of students’ learning, where the instruments of data collection define what can be measured and discard what cannot (Raffaghelli & Stewart, 2020).

My data collection in weeks 3 and 6 to 11 focused heavily on my use of digital platforms for teaching and learning, including learning management systems, email, social networking, video platforms and e-reading. This heavy focus was not initially intended; however, I feel it reflects the platformization of education itself, where education is being assimilated and integrated into the platform ecosystems of Facebook, Google, edX, Coursera and the like (van Dijck, Poell & de Waal, 2018). My data visualisations are a testament to how much of my life, work and study is integrated within a handful of digital platforms. Technologies and platforms are often seen as merely passive tools; however, this instrumentalist understanding of technology (Hamilton & Friesen, 2013) overlooks how technologies can influence human behaviour. For instance, my data visualisations are on the whole not very creative, being limited to scatter plots, icons, bar charts and the like, forms heavily informed by commonly used digital visualisations of data.

In weeks 4 to 7 and week 11, I collected and visualised data about my learning behaviour, and data of this kind are readily trackable within existing digital learning environments. The use of “learning data” from a teacher’s point of view connects strongly with my day-to-day work, where I see how data from the learning management system can provide teachers with more insight into individual students. My day-to-day work is also a proof of principle of how reliance on data analytics could fuel a behaviourist approach to education, drawing the focus onto students’ behaviour itself (Knox, Williamson & Bayne, 2019). For instance, students who show a less-than-ideal pattern of participation in online learning activities could be flagged by their teachers for intervention. While at my workplace this is done manually, such flagging can be done automatically via algorithms that draw on a pre-defined model of learning behaviour (Bulger, 2016) to decide which group(s) of students should be flagged as less than ideal; a rough sketch of how such a rule might look is given below. Automated systems could go one step further and define what instructional modifications and nudges are needed for each group of students.
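To make this concrete, here is a minimal, purely illustrative sketch of the kind of rule-based flagging described above. The table of weekly LMS activity, the column names and the thresholds are all my own assumptions for the sake of illustration; they are not drawn from Bulger (2016) or from any real system.

```python
import pandas as pd

# Hypothetical weekly LMS activity export: one row per student per week.
# Column names and values are illustrative assumptions, not a real export.
activity = pd.DataFrame({
    "student_id":     ["s01", "s01", "s02", "s02", "s03", "s03"],
    "week":           [1, 2, 1, 2, 1, 2],
    "logins":         [5, 4, 1, 0, 3, 1],
    "forum_posts":    [2, 3, 0, 0, 1, 0],
})

# A pre-defined "model" of expected participation: simple minimum thresholds.
MIN_LOGINS_PER_WEEK = 2
MIN_FORUM_POSTS_PER_WEEK = 1

def flag_students(df: pd.DataFrame) -> pd.DataFrame:
    """Flag students whose average weekly participation falls below the thresholds."""
    summary = df.groupby("student_id")[["logins", "forum_posts"]].mean()
    summary["flagged"] = (
        (summary["logins"] < MIN_LOGINS_PER_WEEK)
        | (summary["forum_posts"] < MIN_FORUM_POSTS_PER_WEEK)
    )
    return summary

print(flag_students(activity))
```

Even a toy rule like this shows how the chosen thresholds, rather than the students themselves, end up defining what counts as “less than ideal” participation.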

My data collection and visualisation in weeks 8 and 9 also reminded me of my past experience with performance reviews and day-to-day progress meetings, where I was required to present summarised data or data visualisations as evidence of my progress and performance, reducing months of work to statements like “supported x number of teachers in developing digital learning resources in the last month”. For academic staff, institutions already look at publication output as a measure of research performance. With the increased emphasis on student-centred learning, I can see learning analytics also being used at the institutional level as key performance indicators for teachers, where a teacher’s competence is measured by tracking students’ engagement, learning and satisfaction (Williamson, Bayne & Shay, 2020).

Similarly, Anagnostopoulos et al. (2013) describe how public test performance is converted, through numbers and data, into published ratings and rankings that in turn hold schools accountable for their effectiveness and productivity (test-based accountability). While an institutional (or even departmental) framework for learning analytics may still be a long way off, we can see the potential for learning analytics, LMS access, the number of digital learning resources produced and so on to be used as measures of teachers’ engagement with online teaching, which would in turn inform institutional decision-making and performance tracking. Learning analytics could therefore become increasingly important for teachers’ ongoing professional development. Just as test-based accountability can prompt school practices like “teaching to the test” (Anagnostopoulos et al., 2013, p. 14), an institutional learning analytics framework, or any form of learning analytics-based accountability, could prompt “teaching to the analytics” or even “learning to the analytics” practices among teachers and students.

In summary, this exercise has not only given me a chance to understand myself and my learning behaviour; it has also allowed me to draw on my prior knowledge of research output tracking and existing performance management practices in higher education institutions to reflect on how “learning data” can be used by teachers as well as institutions for their agendas in teaching and governance.

(Word count: 1052)

References

Anagnostopoulos, D., Rutledge, S. A., & Jacobsen, R. (2013). Introduction: Mapping the information infrastructure of accountability. In D. Anagnostopoulos, S. A. Rutledge, & R. Jacobsen (Eds.), The infrastructure of accountability: Data use and the transformation of American education.

Bulger, M. (2016). Personalized learning: The conversations we’re not having. Data and Society, 22(1), 1-29.

Eynon, R. (2015). The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4).

Gitelman, L. (Ed.). (2013). Raw data is an oxymoron. MIT Press.

Hamilton, E., & Friesen, N. (2013). Online education: A science and technology studies perspective. Canadian Journal of Learning and Technology, 39(2).

Knox, J., Williamson, B., & Bayne, S. (2019). Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), 31-45.

Lupi, G., & Posavec, S. (2016). Dear Data. Chronicle Books.

Raffaghelli, J. E., & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: A systematic review of the literature. Teaching in Higher Education, 25(4), 435-455.

van Dijck, J., Poell, T., & de Waal, M. (2018). Education (Chapter 6). In The Platform Society. Oxford University Press.

Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.
