Block 3 Summary

As I was wrapping up Block 2, I briefly touched on the issue of performance management by university management, and decided to track my digital footprint in my capacity as an instructional designer / pedagogical assistant in weeks 9 and 10: I tracked how long I took to reply to my emails at work, and how often I accessed Moodle. As Williamson (2017, p. 75) notes, there has been a move towards increasing measurement of the performance and productivity of educational institutes; in turn, increased tracking also causes individuals or institutes to “change their practice to ensure the best possible measures of performance”.

As I mentioned in week 10, with LMS access being constantly tracked by default, institutes could very well track LMS access for all their teaching staff, and use such metrics as key performance indicators for teachers’ engagement with online teaching. Increasingly, the annual performance review process for university staff involves presenting summarised data as evidence of performance (e.g. how many courses I converted to blended delivery this year; how many articles I published in peer-reviewed journals this year), which has a tangible impact on continuation of employment or promotion.

Anagnostopoulos et al. (2013, p. 14) described an information infrastructure consisting of quantification, standardisation and classification processes. These processes transform raw information into numbers and data, and then into performance metrics, ratings, rankings and so on within a system of test-based accountability. In these processes, standardised measures are devised, and people and phenomena are fitted into categories within a classification system. Such processes feed into the establishment of national standards that can in turn be used to hold schools accountable for their effectiveness and productivity. Anagnostopoulos et al. (2013, pp. 15-16) also problematised the fact that these processes occur without public scrutiny, generating metrics that are mere simplifications of the complexities of teaching, learning and the institutes themselves, and that often fail to address deeper questions.

In week 11, I tracked my own behaviour while watching an instructional video. From this exercise, I reflected on how video hosting platforms (e.g. Panopto) often come with analytics functions “out of the box”, allowing users’ consumption of videos to be tracked. I can see learning analytics being problematised in a similar manner to test-based accountability above. Learning analytics, or more broadly the datafication of education, microdissects students’ experiences and behaviours into data points (Williamson, Bayne & Shay, 2020), which Raffaghelli & Stewart (2020) argued draws a boundary around what can be measured (“knowable unknowns”) while throwing out what cannot be measured (“unknown unknowns”).

Given the uneven adoption of learning analytics, we might still be far away from reaching an institutional (or even departmental) framework for learning analytics. However, we already see learning analytics being used to inform institutional decision-making and performance tracking. Similar to how test-based accountability can prompt school practices like “teaching to the test” (Anagnostopoulos et al., 2013, p. 14), I would argue that a learning analytics framework, or any potential learning analytics-based accountability, could prompt “teaching to the analytics” or even “learning to the analytics” practices among teachers and students.

On a side note, learning analytics can also be used to justify educational development initiatives in institutions. In a university I worked at previously, learning analytics were taken as measures of the attractiveness and/or effectiveness of digital learning resources. These parameters were then used as key performance indicators for the educational development initiatives themselves.

(Word count: 567)

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (2013). Introduction: Mapping the information infrastructure of accountability. In D. Anagnostopoulos, S.A. Rutledge & R. Jacobsen (Eds.), The Infrastructure of Accountability: Data Use and the Transformation of American Education.

Raffaghelli, J.E. & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4), 435-455.

Williamson, B. (2017). Digital education governance: political analytics, performativity and accountability. In Big Data in Education: The Digital Future of Learning, Policy and Practice (Chapter 4). Sage.

Williamson, B., Bayne, S. & Shay, S. (2020). The datafication of teaching in higher education: critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.

Week 11 Drawing

Tracking my video watching behaviour

This week, as the final drawing for Block 3, I chose to carry out an experiment on a topic that has much relevance to my workplace (the higher education sector): video watching behaviour. It involved me playing a 20-minute instructional video from YouTube on an unfamiliar topic, while recording my screen as well as myself using a webcam. Afterwards, I watched the webcam recording to note the time points at which I looked away, closed my eyes, clicked fast forward or rewind, or paused the video. In my data visualisation, I made colour-coded markings on the timeline to show the timing of these behaviours. This experiment was inspired by discussions about tracking students’ eye gaze and head movements in e-learning platforms to obtain data about their engagement (Asteriadis et al., 2009).
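As an illustration of how such a timeline could be assembled, below is a minimal Python/matplotlib sketch that plots a colour-coded event log against a video timeline. The event times and behaviour labels are hypothetical placeholders; my own data points were transcribed by hand from the webcam footage.

```python
# A minimal sketch of the colour-coded timeline in this drawing. The event log
# below is illustrative only; in my experiment the time points were transcribed
# by hand from the webcam footage.
import matplotlib.pyplot as plt

VIDEO_LENGTH = 20 * 60  # 20-minute video, in seconds

# (time in seconds, behaviour observed); hypothetical values
events = [
    (95, "looked away"),
    (210, "fast-forward"),
    (345, "closed eyes"),
    (400, "rewind"),
    (640, "paused"),
    (730, "looked away"),
]

colours = {
    "looked away": "tab:orange",
    "closed eyes": "tab:red",
    "fast-forward": "tab:blue",
    "rewind": "tab:green",
    "paused": "tab:purple",
}

fig, ax = plt.subplots(figsize=(10, 1.5))
ax.hlines(0, 0, VIDEO_LENGTH, color="lightgrey", linewidth=6)  # the video timeline
for t, behaviour in events:
    ax.vlines(t, -0.4, 0.4, color=colours[behaviour], linewidth=2, label=behaviour)

# De-duplicate legend entries so each behaviour appears only once
handles, labels = ax.get_legend_handles_labels()
unique = dict(zip(labels, handles))
ax.legend(unique.values(), unique.keys(), loc="upper right", ncol=3, fontsize=8)
ax.set_xlabel("Video time (seconds)")
ax.set_yticks([])
plt.tight_layout()
plt.show()
```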

Such data collection and visualisation empowers new forms of governance in higher education settings. When visualised at a population level, university management may see that students mostly manage to sustain their attention for the first 10 minutes of an instructional video. If this is shown to be a common pattern, institutes may put forward guidelines encouraging videos to be shorter than 10 minutes. In a more hardline approach, institutes could use such data to officially sanction one form of instructional video while rejecting others. Video hosting platforms could even be hard-coded to reject videos longer than 10 minutes if the institute so wished.
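To make the population-level reasoning concrete, here is a small hypothetical sketch, assuming each student’s “attention drop-off point” has already been extracted from tracking data; all values are invented.

```python
# A sketch of the kind of population-level summary described above, assuming
# each student's "attention drop-off point" (the first time they disengaged)
# has already been extracted from tracking data. All values are invented.
import statistics

# Hypothetical drop-off times in minutes, one per student
drop_off_minutes = [8.5, 9.2, 11.0, 7.8, 10.4, 9.9, 12.3, 8.1, 9.6, 10.1]

median_drop_off = statistics.median(drop_off_minutes)
within_10_min = sum(1 for m in drop_off_minutes if m <= 10) / len(drop_off_minutes)

print(f"Median attention drop-off: {median_drop_off:.1f} minutes")
print(f"Share of students disengaging within 10 minutes: {within_10_min:.0%}")
# A management report built on figures like these could then justify a
# "keep videos under 10 minutes" guideline, regardless of pedagogical nuance.
```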

On a day-to-day basis, institutes track their students’ progress in watching lecture recordings and instructional videos. Normally, without eye gaze and head movement data, we can only look at how much of a video each student’s account has played and, at best, whether the browser tab remained on screen or was minimised. With more intimate tracking, we can better understand how students sustain their attention while watching a video. However, as this experiment demonstrated, such tracking involves real-time video recording of the viewer, and how that footage is used is entirely at the mercy of whoever holds the data. If such data were employed for continuous assessment, the power imbalance between the assessor and the assessed may force students to yield and surrender their data, which would be a deeply concerning phenomenon.

References:

Asteriadis, S., Tzouveli, P., Karpouzis, K., & Kollias, S. (2009). Estimation of behavioral user state based on eye gaze and head pose—application in an e-learning environment. Multimedia Tools and Applications, 41(3), 469-493.

Week 10 Drawing

Moodle access tracking in week 10

This week I chose to visualise how often I accessed my university’s Moodle for work. This data was obtained by manually counting my visits to Moodle in my Google Chrome history. Each stroke in the drawing represents one instance of access to Moodle recorded by Chrome. The count is exhaustive, as I use Chrome as my sole web browser.
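For those curious about automating this count, the following is a minimal sketch that queries Chrome’s local History database (a SQLite file inside the browser profile). The profile path and Moodle hostname below are placeholder assumptions, and in practice I simply counted the entries by hand.

```python
# A sketch of how the manual count could be automated by querying Chrome's
# history database directly. Chrome keeps browsing history in a SQLite file
# called "History" inside the profile folder; the path below is typical for a
# Linux install and the Moodle hostname is a placeholder. Chrome should be
# closed (or the file copied) first, as the database is locked while it runs.
import sqlite3
from datetime import datetime, timedelta, timezone

HISTORY_DB = "/home/me/.config/google-chrome/Default/History"  # assumed path
MOODLE_HOST = "moodle.example.edu"                             # placeholder hostname

def chrome_time_to_datetime(chrome_us: int) -> datetime:
    """Chrome stores visit times as microseconds since 1601-01-01 (UTC)."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=chrome_us)

conn = sqlite3.connect(HISTORY_DB)
rows = conn.execute(
    """
    SELECT visits.visit_time
    FROM visits
    JOIN urls ON urls.id = visits.url
    WHERE urls.url LIKE ?
    ORDER BY visits.visit_time
    """,
    (f"%{MOODLE_HOST}%",),
).fetchall()
conn.close()

visit_days = [chrome_time_to_datetime(t).date() for (t,) in rows]
print(f"Total Moodle visits recorded: {len(visit_days)}")
for day in sorted(set(visit_days)):
    print(day, visit_days.count(day))  # one line per day: date and visit count
```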

In an institute that has any form of blended learning initiative or strategy, a learning management system (LMS) tends to become the official space for the dissemination of learning resources.

For teachers, as more learning takes place in the LMS, there could be an expectation that they spend a certain level of effort there, such as putting together activities, moderating forums and answering questions. Such data visualisation can be utilised to monitor teachers’ engagement in the LMS. With a population-level analysis, teachers who spend little to no time in the LMS could be flagged by their institute as being resistant to teaching innovation or even disengaged from their teaching duties. This could have very real implications for performance management, or be used as evidence in promotion and contract renewal decisions.

Likewise, students can be monitored in a similar manner. Students who go to the LMS and watch lectures together with their study group (rather than on their own accounts) are likely not picked up by such data analytics, and may be wrongly labelled as “disengaged students”. Such labelling could have a longer-term impact on students’ welfare (e.g. special consideration for assessments, moderation of assessment marks, etc.).

Week 9 Drawing

Time taken by Enoch to reply to his work emails in weeks 8 and 9

This week I looked at how long I took to reply to emails at work during weeks 8 and 9. This was done by going through all the emails I replied to in these two weeks and measuring the difference between the time each email was received and the time my reply was sent. Based on this drawing, I managed to reply to all my work emails within 24 hours; in fact, I replied to most emails within 60 minutes. There were several outliers, which were emails received on a Sunday.
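The measurement itself is simple arithmetic on pairs of timestamps. Below is a small sketch of the calculation, using invented example timestamps; my own figures were read off manually from my mail client rather than exported programmatically.

```python
# A sketch of the measurement behind this drawing: for each email replied to,
# take the difference between when it arrived and when the reply was sent.
# The timestamps below are invented examples.
from datetime import datetime

# (received, replied) pairs; hypothetical values
emails = [
    ("2021-03-01 09:12", "2021-03-01 09:40"),
    ("2021-03-02 14:05", "2021-03-02 14:31"),
    ("2021-03-07 16:20", "2021-03-08 09:05"),  # received on a Sunday, hence an outlier
]

fmt = "%Y-%m-%d %H:%M"
for received, replied in emails:
    delta = datetime.strptime(replied, fmt) - datetime.strptime(received, fmt)
    hours = delta.total_seconds() / 3600
    print(f"Received {received}, replied after {hours:.1f} h")
```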

This idea was inspired by recalling my undergraduate studies, when certain lecturers would explicitly tell students that they would reply to every email within one or two days. As an undergraduate, I remember liking the lecturers who replied quickly and disliking those who never replied. Little did I know that a lecturer’s mailbox is often inundated with emails; I certainly know better nowadays.

Nonetheless, for an institute or university, encouraging teachers or student-support staff to reply to students’ emails in a timely manner can be crucial for ensuring student satisfaction. As such, an institute could survey its staff for data similar to what I have shown here, and estimate how long staff generally take to reply to each email. While difficult to enforce, an institute could put forward a “soft policy” to encourage timely feedback. It is common practice to include student satisfaction as one of the parameters informing teachers’ performance reviews and promotion decisions; if teachers could see a tangible correlation between timely replies to students’ emails and their students’ satisfaction, they would probably be encouraged to reply within, say, 24 hours.
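If an institute did hold such paired data, the correlation could be estimated very simply. The sketch below uses invented figures for each teacher’s average reply time and student satisfaction score; a correlation alone would of course not establish causation.

```python
# A rough sketch of the correlation reasoning above, assuming an institute has
# paired each teacher's average email reply time with a student satisfaction
# score. Both lists are invented. A strongly negative coefficient would support
# the "reply faster, happier students" narrative, though correlation is not
# causation.
from statistics import correlation  # Pearson's r; requires Python 3.10+

avg_reply_hours = [2.0, 5.5, 12.0, 24.0, 48.0, 1.5, 8.0, 36.0]  # per teacher
satisfaction = [4.6, 4.2, 3.9, 3.5, 3.0, 4.7, 4.0, 3.2]         # e.g. 1-5 scale

r = correlation(avg_reply_hours, satisfaction)
print(f"Pearson correlation between reply time and satisfaction: {r:.2f}")
```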