Block 2 summary

A: Which platform should we use for Professor ___________’s activity? Shall we use this [open source e-learning authoring tool]?

B: We can’t! Remember, there are no analytics in that tool? How can we track students?!

Above is a typical conversation I have with my colleagues. This short excerpt shows two key issues with data that I come across on a day-to-day basis at work: the “datafication” and “platformization” of teaching and instructional design.

van Dijck, Poell & de Waal (2018) described the platformization of education as a phenomenon in which education is assimilated into the platform ecosystems of Facebook, Google, edX, Coursera and the like, and they argued that platforms uproot fundamental values of public education, such as teachers’ autonomy. At a more microscopic, day-to-day level, I can see that instructional design decisions are often shackled to existing online learning platforms (Moodle quizzes, SCORM packages, H5P interactive content etc.). While platforms can be seen as merely passive tools, this instrumentalist understanding of online learning technology (Hamilton & Friesen, 2013) overlooks how platform features often influence the way we go about teaching and instructional design. For instance, my choice of data collection in weeks 5 and 6 was heavily influenced by the highlighting and annotating features of the Kindle e-reader.

Education is also said to be subject to “datafication”, where students’ experience (or, more likely, behaviours that are considered to be evidence of learning) is microdissected into data points: proxy measures of phenomena in a learning environment (Williamson, Bayne & Shay, 2020). Raffaghelli & Stewart (2020) argued that datafication also acts as a reductive lens for understanding students’ learning, as the instruments for data collection set the boundary of what can be measured (“knowable unknowns”) while discarding what cannot be measured (“unknown unknowns”). Automated learning analytics dashboards risk further reducing the nuanced understanding of such “knowable unknowns” by aggregating data while promising automated flagging of outlier students. Depending on the algorithmic setup of the dashboard, aggregating data from my drawings in weeks 5 and 6 may risk the identification of “false normals”, where extreme values arithmetically cancel each other out.
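The “false normal” risk above is easy to demonstrate with a toy calculation. The sketch below uses invented engagement scores, not my actual drawing data: two students at opposite extremes produce a class average that a naive dashboard would read as perfectly ordinary.

```python
# Toy illustration of the "false normal" risk: extreme values at opposite
# ends of a scale cancel out under simple mean aggregation.
# All numbers are invented for illustration.

def mean(scores):
    """Arithmetic mean, as a simple dashboard might aggregate data."""
    scores = list(scores)
    return sum(scores) / len(scores)

# Hypothetical weekly engagement scores (0-100) for four students.
engagement = {
    "student_a": 95,   # extremely high
    "student_b": 5,    # extremely low -- arguably should be flagged
    "student_c": 50,
    "student_d": 50,
}

class_average = mean(engagement.values())
print(class_average)  # 50.0 -- the two outliers cancel each other out

# A dashboard that only flags classes whose average drifts from the norm
# would report nothing unusual here, despite student_b's score.
flagged = class_average < 30 or class_average > 70
print(flagged)  # False
```

The individual extremes only reappear if the dashboard also surfaces a dispersion measure (range, standard deviation) or per-student values, which is exactly the nuance that aggregation throws away.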

Students’ data from learning analytics are increasingly important parameters for teachers’ ongoing professional development. Together with the emphasis on student-centred “learning”, a teacher’s competence is often measured by tracking students’ engagement, learning and satisfaction (Williamson, Bayne & Shay, 2020). In higher education settings, academic staff often have an obligation to do 100% teaching, 50% teaching + 50% research, or even 100% research. Data tracking could potentially help a teacher measure their efforts and output in teaching and/or learning, so as to inform their professional development. For example, in my week 8 drawing, I collected data about my email-sending behaviour to estimate my efforts across teaching / instructional design, administrative tasks and research. I can look at such a data visualisation and see how I allocated my time to the different domains of my work. Such data can also be leveraged by university senior leaders for performance management of their staff, an issue I look forward to revisiting in block 3.

(Word count: 525)


Hamilton, E. & Friesen, N. (2013). Online Education: A Science and Technology Studies Perspective. Canadian Journal of Learning and Technology, 39(2).

Raffaghelli, J.E. & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4), 435-455.

van Dijck, J., Poell, T. & de Waal, M. (2018). Chapter 6: Education. In The Platform Society. Oxford University Press.

Williamson, B., Bayne, S. & Shay, S. (2020). The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.

Week 8 drawing

Work emails sent by Enoch in week 8

This week I moved in a new direction for my data collection and started looking at how I, as an instructional designer, juggle several domains of my work: teaching / instructional design, admin, and research / scholarship of teaching and learning. I manually counted and categorised all the emails I sent during week 8. From this drawing, one can see that I was busiest sending emails on Monday, with the volume gradually decreasing through to Friday.

The number of emails sent as a parameter must be looked at with a critical lens, and ideally in conjunction with other parameters, such as timesheets, emails received, phone calls, diaries and many more. I can see that the number of emails sent is particularly weak at representing my research workload, as the few emails I sent on Tuesday and Wednesday were mainly me communicating with a co-investigator about writing grant applications. It did not capture the time I spent writing the grant application itself, nor research manuscript writing. The large number of emails sent on Monday was primarily due to back-and-forth emails.
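The tally behind this drawing can be sketched in a few lines. The figures below are illustrative stand-ins, not my actual week-8 counts; the point is how readily such a tally becomes dashboard material, and how small the “research” slice looks even when substantial research time was spent off-email.

```python
# Sketch of the week-8 exercise: counting sent emails per weekday and per
# work domain. The sample records are invented for illustration.
from collections import Counter

# Each record is (weekday, domain) for one sent email.
sent_emails = (
    [("Mon", "admin")] * 9 + [("Mon", "teaching")] * 6 +
    [("Tue", "research")] * 2 + [("Wed", "research")] * 2 +
    [("Thu", "teaching")] * 4 + [("Fri", "admin")] * 2
)

by_day = Counter(day for day, _ in sent_emails)
by_domain = Counter(domain for _, domain in sent_emails)

print(by_day)     # Monday dominates; Friday is quietest
print(by_domain)  # "research" looks tiny, despite off-email grant writing
```

A visualisation built on `by_day` and `by_domain` alone would inherit exactly the blind spot described above: hours spent writing the grant application leave no trace in sent-email counts.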

The central message of this drawing is to highlight the necessity of a critical lens when collecting and visualising data for a teacher’s professional development. A lack of critical data literacy risks serious abuse of data collection and analytics, leading to reductionist or downright wrong measurement of one’s performance or competency.

Week 7 Drawing

Highlighting pattern while reading Brown (2020)

Similar to week 6, I tracked my highlighting pattern while reading the Brown (2020) paper. Each vertical line represents a page, and a three-colour system was employed for my highlighting:

  • Green: key ideas
  • Red: ideas where I found a tangible connection with my day-to-day practice
  • Blue: key terms I learnt in this paper

In conceptualising my data tracking exercise this week, I found I had practically designed myself a learning activity: to annotate the Brown (2020) paper with the prescribed system. As the “teacher”, I prescribed a model way of reading; as the “student”, I had to look out for key ideas, connect those ideas with my day-to-day practice and identify new terms, with the act of highlighting endorsed as the official sign of engagement.

At a personal level, a teacher could look at this drawing and see that I simply omitted “Data collection”, “Data analysis” and “Limitations”; identified almost no newly learnt keywords; and managed to connect this paper to my day-to-day practice. A teacher may choose to nudge me if they see this deviating from their ideal way of reading the paper.

As mentioned in week 6, aggregating this data for a group of students allows “frequently highlighted text” to be surfaced for a paper. If this feature were deployed in the e-reading app for the whole class, students could be nudged by the “frequently highlighted text” to pay extra attention to those parts of the text.

Week 6 Drawing

Pattern of text highlighting while reading Williamson, Bayne & Shay (2020)

This week, I decided to record how I used a highlighter pen to mark text while reading the Williamson, Bayne & Shay (2020) paper. Each column represents a page, and each mark represents approximately three lines of text. I used a traffic-light system to indicate how much text was highlighted. This recording was inspired by the text-highlighting functionality of common e-reading software such as the Kindle. At a functional level, records like this give a teacher some insight into what their students might think are the important messages in a prescribed reading. Similar to Kindle, collating such data from a group of students allows features like “frequently highlighted text” to be offered for a reading activity.
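The collation step can be sketched in a few lines. This is a guess at how such a feature might work, in the spirit of Kindle’s popular highlights rather than its actual implementation; the passage identifiers and the “more than half the class” threshold are my own assumptions.

```python
# Sketch of computing "frequently highlighted text" by collating
# highlights from a class. Passage ids (page, block) and the sample
# data are invented for illustration.
from collections import Counter

# Each student's highlights, as a set of passage identifiers.
highlights = {
    "student_1": {(1, 2), (3, 1), (5, 4)},
    "student_2": {(1, 2), (3, 1)},
    "student_3": {(1, 2), (7, 0)},
}

# Count how many students highlighted each passage.
counts = Counter(p for marks in highlights.values() for p in marks)

# Surface passages highlighted by more than half of the class.
threshold = len(highlights) / 2
popular = sorted(p for p, n in counts.items() if n > threshold)
print(popular)  # [(1, 2), (3, 1)]
```

Note that the feature only ever sees what was highlighted: a passage a student read closely but did not mark is invisible to it, which foreshadows the reductionism discussed below.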

However, as suggested by Williamson, Bayne & Shay (2020), datafication increases the risk of pedagogic reductionism as well as limiting the way a teacher sees their students. If a teacher uses text highlighting as a parameter to measure students’ intellectual engagement, it also defines that behaviour as the model way a student should interact with a reading material. Teachers may be prompted to reinforce the e-reading app as the “official”, or indeed approved, way to read one’s course reading. Students might also be prompted to look good in their data by simply highlighting most (if not all) of the text, gaming the system to generate an all-green pattern.


Williamson, B., Bayne, S. & Shay, S. (2020). The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.