Commentary on the data visualisation task

All data visualisations produced in this course

A quick summary of all data visualisations I created for CDE

In week 3, I used log data from the Chrome browser and from social media platforms to visualise how much time I spent studying in a week.

In week 4, I manually collected data about my distractions while completing the prescribed reading.

In week 5, I manually collected data about my thought process while completing the prescribed reading.

In weeks 6 and 7, I recorded how I used a highlighting pen while completing the prescribed reading. This was informed by the highlighting and annotating features of the Kindle e-reader.

In week 8, I counted all the emails I sent at work and visualised them by category and by the day on which they were sent.

In week 9, I visualised all the emails I replied to at work, based on how long I took to reply.

In week 10, I visualised the times at which I accessed Moodle for work during the week.

In week 11, I visualised my eye gaze and head movements while watching an instructional video.

Overall, I used 36 sheets of paper for drafting and 14 sheets for the finalised weekly drawings.

Reflections around data collection and visualisation

This data collection and visualisation task was mainly inspired by Lupi and Posavec (2016), who committed to exchanging hand-drawn postcards of data about their day-to-day lives over a period of 52 weeks. Like Lupi and Posavec (2016), I found that doing this data collection made me more “in-tune” with myself. In weeks 4 and 5, the data collection was manual and real-time, and it made me acutely aware of being tracked. The act of recording my thought process while reading an article caused me to focus on that thought process, accepting and recording all my thoughts and distractions. For weeks 4 and 5, the data collection itself supported my metacognition, giving me more insight into my learning behaviour (Eynon, 2015). However, I am also aware of my bias towards choosing data collection methods that involve less of myself. I felt this was heavily influenced by my Biomedical Science background, where I am used to being the objective observer, assuming that data would be more accurate if collected without my knowledge. Admittedly, my data collection for some weeks was not very well thought through, and as a result some of it was retrospective and reliant on log data from digital platforms. My occasional success in salvaging visualisations from such logs made me realise how much of my online behaviour is already being tracked without my knowing.

In the process of collecting and visualising data during this whole course, I used pre-assigned categories for my data collection rather than writing elaborate “field notes”. While that made the data collection process more manageable, it meant that the data I collected were essentially screened and bracketed into categories as they were being collected. This experience demonstrated to me that data collection is itself a process of creating proxies for a phenomenon, and hence “‘raw data’ is an oxymoron” (Gitelman, 2013). Similarly, the datafication of education means that the teaching and learning process is being microdissected into data points (Williamson, Bayne & Shay, 2020). This ushers in a reductionist understanding of students’ learning, where the instruments for data collection define what can be measured and discard what cannot be measured (Raffaghelli & Stewart, 2020).

My data collection in weeks 3 and 6 to 11 focused heavily on my use of digital platforms for teaching and learning, including learning management systems, email, social networking, video platforms and e-reading. This heavy focus was not intended initially; however, I feel it is connected to the platformization of education itself, where education is being assimilated / integrated into the platform ecosystem of Facebook, Google, edX, Coursera and so on (van Dijck, Poell & de Waal, 2018). My data visualisation is a testimony to how much of my life, work and study is integrated within a handful of digital platforms. Technologies and platforms can often be seen as merely passive tools, but this instrumentalist understanding of technology (Hamilton & Friesen, 2013) overlooks how technologies can influence human behaviour. For instance, my data visualisations are, overall, not especially creative and are limited to scatter plots, icons, bar charts and the like, which are heavily informed by commonly used digital visualisations of data.

In weeks 4, 5, 6, 7 and 11, I collected and visualised data about my learning behaviour, and such data are readily trackable within existing digital learning environments. The use of “learning data” from a teacher’s point of view connects very heavily with my day-to-day work, where I see how data from the learning management system can potentially provide teachers with more insight into individual students. My day-to-day work is also a proof-of-principle of how reliance on data analytics could fuel a behaviourist approach to education, drawing focus onto students’ behaviour itself (Knox, Williamson & Bayne, 2019). For instance, students who show a less-than-ideal pattern of participation in online learning activities could be flagged by their teachers for intervention. While at my workplace this is done manually, such flagging can also be done automatically via algorithms that draw on a pre-defined model of learning behaviour (Bulger, 2016) to decide which group(s) of students are to be flagged as less than ideal. Automated systems could go one step further and define what instructional modifications and nudges are needed for each group of students.
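To make this concrete, below is a minimal, hypothetical sketch of the kind of rule such an automated flagging system might apply. The activity fields (weekly logins and forum posts) and the thresholds are purely illustrative assumptions on my part, not a description of any real LMS or of my workplace’s process.

```python
from dataclasses import dataclass

@dataclass
class WeeklyActivity:
    student_id: str
    logins: int        # number of LMS logins this week (illustrative proxy)
    forum_posts: int   # number of discussion forum posts this week

# Hypothetical thresholds encoding a "pre-defined model" of expected behaviour.
MIN_LOGINS = 3
MIN_FORUM_POSTS = 1

def flag_for_intervention(records: list[WeeklyActivity]) -> list[str]:
    """Return IDs of students whose weekly activity falls below the model's thresholds."""
    return [
        r.student_id
        for r in records
        if r.logins < MIN_LOGINS or r.forum_posts < MIN_FORUM_POSTS
    ]

if __name__ == "__main__":
    week = [
        WeeklyActivity("s001", logins=5, forum_posts=2),
        WeeklyActivity("s002", logins=1, forum_posts=0),   # would be flagged
        WeeklyActivity("s003", logins=4, forum_posts=0),   # would be flagged
    ]
    print(flag_for_intervention(week))  # ['s002', 's003']
```

The point of the sketch is that the thresholds themselves encode the “pre-defined model of learning behaviour”: a student who studies offline, or who reads without posting, is simply invisible to (or penalised by) the rule.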

My data collection and visualisation in weeks 8 and 9 also reminded me of my past experience with performance reviews and day-to-day progress meetings, where I was required to present summarised data or data visualisations as evidence of my progress and performance, reducing months of work into statements like “supported x number of teachers in developing digital learning resources in the last month”. For academic staff, institutions already look at publication output as a measure of research performance. With the increased emphasis on student-centred learning, I can see learning analytics also being used at the institutional level as key performance indicators for teachers, where a teacher’s competence is measured by tracking students’ engagement, students’ learning and students’ satisfaction (Williamson, Bayne & Shay, 2020).

Similarly, Anagnostopoulos et al. (2013) described how public test performance is converted, through numbers and data, into published ratings and rankings that in turn hold schools accountable for their effectiveness and productivity (test-based accountability). While an institutional (or even departmental) framework for learning analytics may still be a long way off, we can see the potential for learning analytics, LMS access, the number of digital learning resources produced and so on to be used as measures of teachers’ engagement with online teaching, which would in turn inform institutional decision-making and performance tracking. Learning analytics can therefore become increasingly important for teachers’ ongoing professional development. Just as test-based accountability can prompt school practices like “teaching to the test” (Anagnostopoulos et al., 2013, p. 14), an institutional learning analytics framework, or any form of learning analytics-based accountability, could prompt “teaching to the analytics” or even “learning to the analytics” practices among teachers and students.

In summary, this exercise has not only given me a chance to understand myself and my learning behaviour, but has also allowed me to draw on my previous knowledge of research output tracking and existing performance management practices in higher education institutions to reflect on how “learning data” can be utilised by teachers as well as institutions for their agendas in teaching and governance.

(Word count: 1052)

References

Anagnostopoulos, D., Rutledge, S. A., & Jacobsen, R. (2013). Introduction: Mapping the information infrastructure of accountability. In D. Anagnostopoulos, S. A. Rutledge, & R. Jacobsen (Eds.), The Infrastructure of Accountability: Data use and the transformation of American education.

Bulger, M. (2016). Personalized learning: The conversations we’re not having. Data and Society, 22(1), 1-29.

Eynon, R. (2015). The quantified self for learning: Critical questions for education. Learning, Media and Technology, 40(4).

Gitelman, L. (Ed.). (2013). Raw data is an oxymoron. MIT Press.

Hamilton, E., & Friesen, N. (2013). Online education: A science and technology studies perspective. Canadian Journal of Learning and Technology, 39(2).

Knox, J., Williamson, B., & Bayne, S. (2019). Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), 31-45.

Lupi, G., & Posavec, S. (2016). Dear data. Chronicle Books.

Raffaghelli, J. E., & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: A systematic review of the literature. Teaching in Higher Education, 25(4), 435-455.

van Dijck, J., Poell, T., & de Waal, M. (2018). Education (Chapter 6). In The Platform Society. Oxford University Press.

Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.

Block 3 Summary

As I was wrapping up block 2, I briefly touched on the issue of performance management by university management, and decided to track my digital footprint in my capacity as instructional designer / pedagogical assistant in weeks 9 and 10: I tracked how long I took to reply to all of my emails at work, and how often I accessed Moodle. As mentioned by Williamson (2017, p. 75), there has been a move towards increasing measurement of the performance and productivity of educational institutions; in turn, increased tracking also causes individuals or institutions to “change their practice to ensure the best possible measures of performance”.

As I mentioned in week 10, with LMS access being constantly tracked by default, institutions could very well track LMS access for all their teaching staff and use such metrics as key performance indicators for teachers’ engagement with online teaching. Increasingly, the annual performance review process for university staff involves presenting summarised data as evidence of performance (e.g. how many courses I converted to blended delivery this year; how many articles I published in peer-reviewed journals this year), which has a tangible impact on continuation of employment or promotion.

Anagnostopoulos et al. (2013, p. 14) described an information infrastructure consisting of quantification, standardisation and classification processes. These processes transform raw information, through numbers and data, into performance metrics, ratings, rankings and so on within a regime of test-based accountability. In these processes, standardised measures are devised, and people and phenomena are fitted into categories within a classification system. Such processes feed into the establishment of national standards that can in turn be used to hold schools accountable for their effectiveness and productivity. Anagnostopoulos et al. (2013, pp. 15-16) also problematised the fact that these processes occur without public scrutiny, generating metrics that are mere simplifications of the complexities of teaching, learning and the institutions themselves, and that often fail to address deeper questions.

In week 11, I tracked my behaviour while watching an instructional video. From this exercise, I reflected on how video hosting platforms (e.g. Panopto) often come with analytics functions “out of the box”, allowing users’ consumption of videos to be tracked. I can see learning analytics being problematised in a similar manner to test-based accountability above. Learning analytics, or more broadly the datafication of education, microdissects students’ experience or behaviour into data points (Williamson, Bayne & Shay, 2020), which Raffaghelli & Stewart (2020) argued draws a boundary around what can be measured (“knowable unknowns”) while discarding what cannot be measured (“unknown unknowns”).

Given the uneven adoption of learning analytics, we might still be far away from an institutional (or even departmental) framework for learning analytics. However, we already see learning analytics being used to inform institutional decision-making and performance tracking. Just as test-based accountability can prompt school practices like “teaching to the test” (Anagnostopoulos et al., 2013, p. 14), I would argue that an institutional learning analytics framework, or any potential learning analytics-based accountability, could prompt “teaching to the analytics” or even “learning to the analytics” practices among teachers and students.

On a side note, learning analytics can also be used to justify educational development initiatives within institutions. In a university I worked at previously, learning analytics were taken as measures of the attractiveness and/or effectiveness of digital learning resources, and such parameters were used as key performance indicators for the educational development initiatives themselves.

(Word count: 567)

References

Anagnostopoulos, D., Rutledge, S. A., & Jacobsen, R. (2013). Introduction: Mapping the information infrastructure of accountability. In D. Anagnostopoulos, S. A. Rutledge, & R. Jacobsen (Eds.), The Infrastructure of Accountability: Data use and the transformation of American education.

Raffaghelli, J. E., & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: A systematic review of the literature. Teaching in Higher Education, 25(4), 435-455.

Williamson, B. (2017). Digital education governance: Political analytics, performativity and accountability. In Big Data in Education: The digital future of learning, policy and practice (Chapter 4). Sage.

Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.

Week 11 Drawing

Tracking my video watching behaviour

This week, as the final drawing for Block 3, I chose to carry out an experiment on a topic that has much relevance to my workplace (the higher education sector) – video watching behaviour. It involved playing a 20-minute instructional video from YouTube on an unfamiliar topic while recording my screen as well as myself via webcam. Afterwards, I watched the recording of myself and noted the time points at which I faced away, closed my eyes, clicked fast forward or rewind, or stopped the video. In my data visualisation, I made colour-coded markings on a timeline to show when these behaviours occurred. This experiment was inspired by discussions about tracking students’ eye gaze and head movements in e-learning platforms to obtain data about their engagement (Asteriadis et al., 2009).
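As an aside, the kind of time-stamped behaviour log I coded by hand could be laid out digitally in a few lines. The sketch below assumes a hypothetical, manually coded event list and uses matplotlib purely for illustration; it is not how my hand-drawn visualisation was produced.

```python
import matplotlib.pyplot as plt

# Hypothetical, manually coded event log: (seconds into the video, behaviour).
events = [
    (95, "faced away"), (310, "closed eyes"), (612, "fast forward"),
    (700, "faced away"), (1015, "rewind"), (1140, "stopped video"),
]

colours = {
    "faced away": "tab:orange", "closed eyes": "tab:red",
    "fast forward": "tab:blue", "rewind": "tab:green", "stopped video": "black",
}

fig, ax = plt.subplots(figsize=(8, 1.5))
for t, behaviour in events:
    # One colour-coded vertical mark per observed behaviour, as in the drawing.
    ax.axvline(t, color=colours[behaviour], label=behaviour)

# De-duplicate legend entries so each behaviour appears once.
handles, labels = ax.get_legend_handles_labels()
unique = dict(zip(labels, handles))
ax.legend(unique.values(), unique.keys(), loc="upper right", fontsize=6)

ax.set_xlim(0, 1200)  # 20-minute video
ax.set_yticks([])
ax.set_xlabel("Seconds into the video")
plt.tight_layout()
plt.show()
```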

Such data collection and visualisation empowers new modes of governance in higher education settings. Visualised at the population level, such data may show university management that students mostly manage to sustain their attention for the first 10 minutes of an instructional video. If this turns out to be a common pattern, institutions may put forward guidelines encouraging videos shorter than 10 minutes. In a more hardline approach, institutions could use such data to sanction one form of instructional video while rejecting others. Video hosting platforms could even be hard-coded to reject videos longer than 10 minutes if the institution so wished.

On a day-to-day basis, institutions already track their students’ progress in watching lecture recordings and instructional videos. Without eye gaze and head movement data, we can normally only look at how much of a video is played by each student’s account and, at best, whether the browser tab with the video remained on screen or was minimised. With more intimate tracking, we can better understand how students sustain their attention while watching a video. However, as demonstrated in this experiment, such tracking involves real-time video recording of the viewer, and how that footage is used is entirely at the mercy of whoever holds the data. If such data were employed for continuous assessment, the power imbalance between the assessor and the assessed might force students to yield and surrender their data, which would be a deeply concerning development.

References:

Asteriadis, S., Tzouveli, P., Karpouzis, K., & Kollias, S. (2009). Estimation of behavioral user state based on eye gaze and head pose—application in an e-learning environment. Multimedia Tools and Applications, 41(3), 469-493.

Week 10 Drawing

Moodle access tracking on week 10

This week I chose to visualise how often I accessed my university’s Moodle for work. These data were obtained by manually counting my visits to Moodle in my Google Chrome history. Each stroke on the drawing represents one instance of access to Moodle recorded by Chrome. The record is exhaustive, as I use Chrome as my sole web browser.
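For illustration, this manual counting could in principle be automated. The sketch below assumes a hypothetical CSV export of browser history with `visit_time` (ISO timestamps) and `url` columns, a made-up Moodle address and a made-up filename; it does not read Chrome’s internal history database.

```python
import csv
from collections import Counter
from datetime import datetime

MOODLE_DOMAIN = "moodle.example.edu"   # hypothetical institutional Moodle address

def moodle_visits_per_day(history_csv: str) -> Counter:
    """Count history rows whose URL points at Moodle, grouped by calendar day."""
    per_day = Counter()
    with open(history_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):   # expects 'visit_time' and 'url' columns
            if MOODLE_DOMAIN in row["url"]:
                day = datetime.fromisoformat(row["visit_time"]).date()
                per_day[day] += 1
    return per_day

if __name__ == "__main__":
    for day, count in sorted(moodle_visits_per_day("chrome_history_week10.csv").items()):
        print(day, "#" * count)   # one stroke per recorded access, as in the drawing
```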

In an institution that has any form of blended learning initiative or strategy, a learning management system (LMS) tends to become the official space for the dissemination of learning resources.

For teachers, as more learning takes place in the LMS, there could be an expectation that they spend a certain level of effort there, such as putting together activities, moderating forums and answering questions. Data visualisations like mine could be utilised to monitor teachers’ engagement in the LMS. With a population-level analysis, teachers who spend little to no time in the LMS could be flagged by their institution as resistant to teaching innovation or even disengaged from their teaching duties. This could have very real implications for performance management, or be used as evidence when considering promotions or contract renewals.

Likewise, students can be monitored in a similar manner. Students who access the LMS and watch lectures with their study group (rather than on their own) are unlikely to be picked up by such analytics, and may be wrongly labelled as “disengaged students”. Such labelling could have longer-term impacts on students’ welfare (e.g. special consideration for assessments, moderation of assessment marks).

Week 9 Drawing

Time taken by Enoch to reply to his work emails in weeks 8 and 9

This week I started looking at how long I took to reply to emails at work during weeks 8 and 9. This was done by going through all the emails I replied to in these two weeks and measuring the difference between the time each email was received and the time my reply was sent. Based on this drawing, I managed to reply to all my work emails within 24 hours; in fact, I replied to most within 60 minutes. There were several outliers, which were emails received on a Sunday.
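The measurement itself is simple arithmetic on timestamps; the sketch below uses made-up received/replied times rather than my actual email data.

```python
from datetime import datetime

# Hypothetical (received, replied) timestamp pairs pulled from a mail client.
email_pairs = [
    ("2021-05-10 09:05", "2021-05-10 09:32"),
    ("2021-05-11 14:10", "2021-05-11 16:45"),
    ("2021-05-09 20:30", "2021-05-10 08:50"),  # received on a Sunday -> an outlier
]

FMT = "%Y-%m-%d %H:%M"
for received, replied in email_pairs:
    delta = datetime.strptime(replied, FMT) - datetime.strptime(received, FMT)
    print(f"received {received} -> replied after {delta.total_seconds() / 3600:.1f} h")
```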

This idea was inspired by recalling my undergraduate studies – how certain lecturers would explicitly tell students that they would reply to each email within one or two days. As an undergraduate, I remember liking the lecturers who replied quickly and disliking those who never replied – little did I know that a lecturer’s mailbox is often inundated with emails, and I certainly know better nowadays.

Nonetheless, for an institution or university, encouraging teachers’ or student-support staff’s timely replies to students’ emails can be crucial for ensuring student satisfaction. As such, an institution could survey its staff for data similar to what I have shown here and estimate how long staff generally take to reply to each email. While difficult to enforce, an institution could put forward a “soft policy” to encourage timely feedback. It is common practice to include student satisfaction as one of the parameters informing teachers’ performance reviews and promotion decisions; if teachers could see a tangible correlation between timely replies to student emails and student satisfaction, they would probably be encouraged to reply within, say, 24 hours.

Block 2 summary

A: Which platform should we use for Professor ___________’s activity? Shall we use this [open source e-learning authoring tool]?

B: We can’t! Remember there is no analytics in that tool? How can we track students!?

Above is one of the typical conversations I have with my colleagues. This short excerpt shows two key issues with data that I come across on a day-to-day basis at work – the “datafication” and “platformization” of teaching and instructional design.

van Dijck, Poell & de Waal (2018) described the platformization of education as a phenomenon in which education is being assimilated / integrated into the platform ecosystem of Facebook, Google, edX, Coursera and so on, and they argued that platforms uproot fundamental values of public education, such as teachers’ autonomy. At a more microscopic, day-to-day level, I can see that instructional design decisions are often shackled to existing online learning platforms (Moodle quizzes, SCORM packages, H5P interactive content etc.). While platforms can be seen as merely passive tools, this instrumentalist understanding of online learning technology (Hamilton & Friesen, 2013) overlooks how platform features can influence the way we go about teaching and instructional design. For instance, my choice of data collection in weeks 6 and 7 was heavily influenced by the highlighting and annotating features of the Kindle e-reader.

Education is also said to be subject to “datafication”, where students’ experience (or, more likely, behaviours that are considered to be evidence of learning) becomes microdissected into data points, which are proxy measures of phenomena in a learning environment (Williamson, Bayne & Shay, 2020). Raffaghelli & Stewart (2020) argued that datafication also acts like a reductive lens on the understanding of students’ learning, as the instruments for data collection set the boundary of what can be measured (“knowable unknowns”) while discarding what cannot be measured (“unknown unknowns”). Automated learning analytics dashboards also risk further reducing any nuanced understanding of such “knowable unknowns” by aggregating data while promising automated flagging of outlier students. Depending on the algorithmic setup of the dashboard, aggregating data from my drawings in weeks 6 and 7 may risk producing “false normals”, where extreme values arithmetically cancel each other out.
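A toy numeric example of such a “false normal”, with entirely made-up highlighting amounts, shows how an aggregate can hide an all-or-nothing pattern:

```python
from statistics import mean

# Toy example: how much text two students highlighted per page (hypothetical 0-10 scale).
erratic = [8, 0, 8, 0, 8, 0]   # extremes: heavy highlighting on some pages, none on others
steady  = [4, 4, 4, 4, 4, 4]   # moderate, even highlighting throughout

# The aggregate cannot tell the two patterns apart: both means are 4.0.
print(mean(erratic), mean(steady))
```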

Students’ data from learning analytics are increasingly important parameters for teachers’ ongoing professional development. Together with the emphasis on student-centred “learning”, a teacher’s competence is often measured by tracking students’ engagement, students’ learning and students’ satisfaction (Williamson, Bayne & Shay, 2020). In higher education settings, academic staff often have an obligation to do 100% teaching, 50% teaching + 50% research, or even 100% research. Data tracking could potentially help a teacher measure their effort and output in teaching and/or learning, so as to inform their professional development. For example, in my week 8 drawing I collected data about my email sending behaviour to provide an estimate of my effort across teaching / instructional design, administrative tasks and research. I can look at such a data visualisation and see how I allocated my time across the different domains of my work. Such data can also be leveraged by university senior leaders for performance management of their staff, and I look forward to revisiting this issue in block 3.

(Word count: 525)

References:

Hamilton, E., & Friesen, N. (2013). Online education: A science and technology studies perspective. Canadian Journal of Learning and Technology, 39(2).

Raffaghelli, J. E., & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: A systematic review of the literature. Teaching in Higher Education, 25(4), 435-455.

van Dijck, J., Poell, T., & de Waal, M. (2018). Education (Chapter 6). In The Platform Society. Oxford University Press.

Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.

Week 8 drawing

Work emails sent by Enoch on week 8

This week I moved in a new direction for my data collection and started looking at how I, as an instructional designer, juggle several domains of my work: teaching / instructional design, admin, and research / scholarship of teaching and learning. I manually counted and categorised all the emails I sent during week 8. From this drawing, one can see that I was busiest sending emails on Monday, with the volume gradually decreasing through to Friday.

The number of emails sent must be looked at through a critical lens as a parameter, and ideally in conjunction with other parameters, such as timesheets, emails received, phone calls, diaries and many more. I can see that the number of emails sent is particularly weak at representing my research workload, as the few emails I sent on Tuesday and Wednesday were mainly me communicating with a co-investigator about writing grant applications; the count does not reflect the time I spent writing the grant application itself, or on research manuscript writing. The large number of emails sent on Monday was primarily due to back-and-forth exchanges.
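For illustration, the counting and categorising step amounts to a simple tally over a hand-labelled log; the sketch below uses a hypothetical sent-items list rather than my real emails.

```python
from collections import Counter

# Hypothetical sent-items log: (weekday, category) per email, hand-categorised.
sent = [
    ("Mon", "instructional design"), ("Mon", "admin"), ("Mon", "instructional design"),
    ("Tue", "research"), ("Wed", "research"), ("Thu", "admin"), ("Fri", "admin"),
]

by_day = Counter(day for day, _ in sent)          # emails sent per weekday
by_day_and_category = Counter(sent)               # emails sent per (weekday, category)

print(by_day)               # e.g. Counter({'Mon': 3, 'Tue': 1, ...})
print(by_day_and_category)  # e.g. ('Mon', 'instructional design'): 2
```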

The central message of this drawing is to highlight the necessity of a critical lens when collecting and visualising data for a teacher’s professional development. A lack of critical data literacy risks serious misuse of data collection and analytics, leading to reductionist or downright wrong measurements of a person’s performance or competence.

Week 7 Drawing

Highlighting pattern while reading Brown (2020)

Similar to week 6, I tracked my highlighting pattern while reading the Brown (2020) paper. Each vertical line represents a page, and a three-colour system was employed for my highlighting:

  • Green: key ideas
  • Red: ideas where I found a tangible connection with my day-to-day practice
  • Blue: key terms I learnt in this paper

In conceptualising my data tracking exercise this week, I found I had practically designed myself a learning activity: to annotate the Brown (2020) paper with the prescribed system. As a “teacher”, I prescribed a model way of reading; as a student, I had to look out for key ideas, connect those ideas with my day-to-day practice and identify new terms, with the act of highlighting endorsed as the official sign of engagement.

At a personal level, a teacher could look at this drawing and see that I simply omitted “Data collection”, “Data analysis” and “Limitations”; identified almost no newly learnt key terms; and managed to connect this paper to my day-to-day practice. A teacher might choose to nudge me if they saw this as deviating from their ideal way of reading the paper.

As mentioned in week 6, aggregating this data across a group of students allows “frequently highlighted text” to be surfaced for a paper. Were this feature deployed in the e-reading app for the whole class, students could be nudged by the “frequently highlighted text” to pay extra attention to those parts of the text.
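A minimal sketch of how such aggregation might work, assuming hypothetical per-student sets of highlighted paragraph numbers and an arbitrary “more than half the class” threshold:

```python
from collections import Counter

# Hypothetical data: for each student, the set of paragraph numbers they highlighted.
highlights_by_student = {
    "s001": {2, 3, 7},
    "s002": {3, 7, 9},
    "s003": {3, 7},
    "s004": {1, 3},
}

# Tally how many students highlighted each paragraph.
counts = Counter(p for paragraphs in highlights_by_student.values() for p in paragraphs)

# "Frequently highlighted" = highlighted by more than half the class (an assumed cut-off).
threshold = len(highlights_by_student) // 2
frequently_highlighted = sorted(p for p, n in counts.items() if n > threshold)

print(frequently_highlighted)   # [3, 7] – paragraphs the app would surface to everyone
```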

Week 6 Drawing

Pattern of text highlighting while reading Williamson, Bayne & Shay (2020)

This week, I decided to record how I used a highlighting pen to mark text while I was reading the Williamson, Bayne & Shay (2020) paper. In the drawing, each column represents a page and each mark represents approximately three lines of text. I used a traffic-light system to indicate how much text was highlighted. This recording was inspired by the text-highlighting functionality of common e-reading software such as Kindle. At a functional level, records like this give a teacher some insight into what their students consider to be the important messages in a prescribed reading. As with Kindle, collating such data from a group of students enables features like “frequently highlighted text” for a reading activity.

However, as suggested by Williamson, Bayne & Shay (2020), datafication increases the risk of pedagogic reductionism and limits the way a teacher sees their students. If a teacher uses text highlighting as a parameter to measure students’ intellectual engagement, it also defines that behaviour as the model way a student should interact with a reading material. Teachers may be prompted to enforce the e-reading app as the “official”, or indeed approved, way to do one’s course reading. Students might also be prompted to make their data look good by simply highlighting most (if not all) of the text, gaming the system to generate an all-green pattern.

Reference:

Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.

Block 1 Summary

Learning and data have a complex relationship with each other. Learning involves a process of questioning and acquiring knowledge. Answers to questions or knowledge itself can be received from a “learned” person (the teacher). Yet, not all questions can be answered by existing knowledge, and unanswered questions drive observations and experimentations, which creates data that can give rise to new knowledge that supports learning in return.

While teachers’ use of data (e.g. field notes, journals, assessments, student records) to gain insights about their students has a long history, as learning is increasingly carried out online we see growing institutionalised use of data analytics to monitor students’ behaviours within digital learning environments. This development fuels the broader agenda to optimise learning for individual students (Tsai, Perrotta & Gasevic, 2020; Eynon, 2015).

In Block 1 of CDE, I started my data collection journey by looking at the log data of my online activities from my Chrome browser history, WhatsApp messages, Twitter history and email records, which are of a similar nature to log data in learning management systems. This exercise confirmed my understanding that reliance on digital footprints within learning management systems can fuel a behaviourist approach to education (Knox, Williamson & Bayne, 2019). For instance, a less-than-ideal pattern of participation in online learning activities could be treated as grounds for viewing behaviour modification as a way to optimise learning.

In the latter part of Block 1, I started collecting data about my attention and thought process while completing the prescribed reading. In the process of collecting and visualising these data, I had to assign proxies or categories in order for a visualisation to be possible; hence I experienced first-hand that data collection is a process of taking snapshots or creating proxies that represent a phenomenon. This provided a proof-of-principle for a saying I had previously learnt – “‘raw data’ is an oxymoron” (Gitelman, 2013). In addition, employing a “field-work” approach to data collection modified the learning activity itself, making me acutely aware of being tracked. The act of recording my thought process while reading an article may even have converted that learning activity into a “pseudo-mindfulness exercise” (by focusing on my thoughts, accepting and recording all my thoughts and distractions). Overall, the data on my distraction patterns and thought process supported my metacognition, providing me with more insight into my learning (Eynon, 2015).

Data-driven personalised learning requires a pre-defined model of learning behaviour (Bulger, 2016) in order to support decisions on instructional modifications and nudges for students who, by that definition, are regarded as less than ideal. Unavoidably, this approach favours a few learning behaviours while shunning others. I would also argue that, just as the master-apprentice model of learning has been idealised since aeons ago (Friesen, 2020), students have always been benchmarked against various ideal models – their masters, or the more diligent/capable/smart/hardworking/polite kid in the same neighbourhood – long before the advent of data-driven personalised learning. Perhaps this is precisely the sentiment that charted this development in personalised learning.

(Word count: 508)

References

Bulger, M. (2016). Personalized learning: The conversations we’re not having. Data and Society, 22(1), 1-29.

Eynon, R. (2015). The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4).

Friesen, N. (2020). The technological imaginary in education: Myth and enlightenment in “Personalized Learning”. In M. Stocchetti (Ed.), The Digital Age and its Discontents. University of Helsinki Press.

Gitelman, L. (Ed.). (2013). Raw data is an oxymoron. MIT press.

Knox, J., Williamson, B., & Bayne, S. (2019). Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), 31-45.

Tsai, Y. S., Perrotta, C., & Gašević, D. (2020). Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45(4), 554-567.