Block 2 summary

A: Which platform should we use for Professor ___________’s activity? Shall we use this [open source e-learning authoring tool]?

B: We can’t! Remember there are no analytics in that tool? How would we track students?!

Above is a typical conversation I have with my colleagues. This short excerpt illustrates two key data-related issues I come across day to day at work – the “datafication” and “platformization” of teaching and instructional design.

van Dijck, Poell and de Waal (2018) described the platformization of education as a phenomenon in which education is assimilated into the platform ecosystem of Facebook, Google, edX, Coursera and the like, and they argued that platforms uproot fundamental values of public education, such as teachers’ autonomy. At a more microscopic, day-to-day level, I can see that instructional design decisions are often shackled to existing online learning platforms (Moodle quizzes, SCORM packages, H5P interactive content etc.). While platforms can be seen as merely passive tools, this instrumentalist understanding of online learning technology (Hamilton & Friesen, 2013) overlooks how platform features often influence the way we go about teaching and instructional design. For instance, my choice of data collection in weeks 5 and 6 was heavily influenced by the highlighting and annotating features of the Kindle e-reader.

Education is also said to be subject to “datafication” – where students’ experiences (or, more likely, behaviours that are considered evidence of learning) become microdissected into data points, which serve as proxy measures of phenomena in a learning environment (Williamson, Bayne & Shay, 2020). Raffaghelli and Stewart (2020) argued that datafication also acts as a reductive lens on the understanding of students’ learning, as instruments for data collection set the boundary of what can be measured (“knowable unknowns”) while throwing out what cannot be measured (“unknown unknowns”). Automated learning analytics dashboards risk further reducing the nuanced understanding of such “knowable unknowns” by aggregating data while promising automated flagging of outlier students. Depending on the algorithmic setup of the dashboard, aggregating data from my drawings in weeks 5 and 6 may risk identifying “false normals”, when extreme values arithmetically cancel each other out.

Students’ data from learning analytics are increasingly important parameters for teachers’ ongoing professional development. Together with the emphasis on student-centred “learning”, a teacher’s competence is often measured by tracking students’ engagement, learning and satisfaction (Williamson, Bayne & Shay, 2020). In higher education settings, academic staff often have workload obligations of 100% teaching, 50% teaching plus 50% research, or even 100% research. Data tracking could potentially help a teacher measure their efforts and output in teaching and/or learning, so as to inform their professional development. For example, in my drawing in week 8, I collected data about my email-sending behaviour to provide an estimate of my efforts across teaching / instructional design, administrative tasks and research. Looking at such a data visualisation, I can see how I allocated my time to different domains of my work. Such data can also be leveraged by university senior leaders for performance management of their staff, and I look forward to revisiting this issue in block 3.

(Word count: 525)

References:

Hamilton, E., & Friesen, N. (2013). Online Education: A Science and Technology Studies Perspective. Canadian Journal of Learning and Technology, 39(2).

Raffaghelli, J. E., & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4), 435–455.

van Dijck, J., Poell, T., & de Waal, M. (2018). Education. In The Platform Society (Chapter 6). Oxford University Press.

Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), 351–365.

1 thought on “Block 2 summary”

  1. Really great summary here Enoch, where you focus on two key concerns of ‘platformisation’ and ‘datafication’, and connect these ideas to issues related to teaching. Linking your reflections on these topics to your data visualisation practice was also excellent, and demonstrates your thoughtful approach to the task.

    ‘instruments for data collection sets the boundary of what can be measured (“knowable unknowns”), while throwing out what cannot be measured (“unknown unknowns”)’

    This is a key point, I think, where you focus on ‘knowing’ students as a key aspect of teaching, and how this is rigidly categorised through data.

    ‘The use of data tracking could potentially help a teacher measure their efforts and output in teaching and/or learning, so as to inform their professional development.’

    I like this idea, and I think we do need to think of ways that data might more authentically empower teachers, not just diminish their professionalism. I’d certainly like something that could quantify my teaching!
