Block 1 reflections

For this block I tried to visualise three different aspects of my ‘learning’: reading, motivation and interactions. By collecting, visualising and analysing my own data, I was hoping to gain deeper insights into the relationships between data and learning.

One of the challenges I faced every week was deciding which variables to consider for my visualisations. Is it best to collect as much data as possible, or should I trust my instinct and log what feels relevant to each theme? It became clear that more sophisticated technologies such as learning analytics systems face similar issues: although they can collect and analyse far more data, there are many aspects of learning that can't be captured or analysed. Eynon (2015, p. 409) also warns of the danger that '[h]ours spent revising, numbers of words written per day, multiple choice questions answered in half an hour, can all become the most important metric, rather than the quality of the writing, the mathematical thinking or the cognitive process.'

While I recorded data, I realised that the data I collected is in no way comparable to big data (BD), as 'BD promises ultrarapid, updatable profiling of individuals to enable interventions and action to make each individual, their thoughts and behaviours, actionable in a variety of ways' (Thompson & Cook, 2017, p. 743). In comparison, my efforts were slow and selective, but this raised the question of whether big data is the only way to paint a meaningful, rich picture of the learner. During my self-recording, I often felt that it was context and personal circumstances that had an impact on my actions, yet these variables are difficult to measure.

During the last three weeks I frequently asked myself whether collecting data can somehow improve how I learn. In education, after all, learning analytics promises insights into learning that would otherwise be unobtainable (Knox et al., 2019).

One such promise is that students gain a greater sense of agency, with data used to make informed decisions during the learning process (Tsai et al., 2020). Interestingly, however, Tsai et al. (2020, p. 562) surface that student agency may be diminished 'through constant surveillance in online learning environments.' I certainly felt conscious of my actions being recorded (albeit by myself) and could imagine how constant monitoring might affect how I behave. While it could lead to increased self-motivation, I could also see my focus shifting to simply completing tasks without caring too much about how well I performed in them.

Digital data is often seen as a solution to various problems in education (Selwyn & Gašević, 2020). For data to be used to enhance 'learning', I suppose we need to assume that how or what we learn needs improving. Although I am not able to offer a definition of learning, I believe that it is very personal, so the thought of learning being tailored to each student and offering them the best possible 'learning journey' seems intriguing. Reflecting on this block's literature on personalisation, however, there seems to be a conflict between personalised learning systems being beneficial to students and teachers, and having the potential to 'disempower through opaque processes and prescriptive formats' (Bulger, 2016, p. 19).

What has become clear during this block is that there are many conflicts between data and ‘learning’. I’m hoping to continue to explore these conflicts along with the relationships between data and education during the next block.

References

Bulger, M. (2016). Personalized Learning: The Conversations We're Not Having. Data & Society working paper. Available at: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Eynon, R. (2015). The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4), pp. 407-411. DOI: 10.1080/17439884.2015.1100797

Knox, J., Williamson, B. & Bayne, S. (2019). Machine behaviourism: future visions of 'learnification' and 'datafication' across humans and digital technologies. Learning, Media and Technology, 45(1), pp. 1-15.

Selwyn, N. & Gašević, D. (2020). The datafication of higher education: discussing the promises and problems. Teaching in Higher Education, 25(4), pp. 527-540. DOI: 10.1080/13562517.2019.1689388

Thompson, G. & Cook, I. (2017). The logic of data-sense: thinking through learning personalisation. Discourse: Studies in the Cultural Politics of Education, 38(5), pp. 740-754.

Tsai, Y-S., Perrotta, C. & Gašević, D. (2020). Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45(4), pp. 554-567. DOI: 10.1080/02602938.2019.1676396

Week 5: a week of interactions

This week I tracked my interactions with others from the course on our blogs. I chose to represent the data as a sort of network, as this is what came to mind when thinking about engaging with others in a community. As in previous weeks, there are variables that, in hindsight, I could have included and which may have provided a more detailed picture. For example, I could have tracked the type of comment (general or question) or all interactions between the other course students. On reflection, however, I'm not sure this would have given me any more insight.

Week 5 visualisation
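
For anyone curious how such a network could be reproduced programmatically, below is a minimal sketch using Python's networkx library. The participants and comment counts are invented placeholders rather than my actual log.

```python
# Sketch: blog-comment interactions as a directed graph.
# Names and comments below are hypothetical placeholders.
import networkx as nx
import matplotlib.pyplot as plt

# One tuple per comment: (commenter, blog owner).
comments = [
    ("me", "student_a"), ("me", "student_b"), ("me", "student_c"),
    ("student_a", "me"),  # a reply I received
    ("me", "student_d"),
]

G = nx.DiGraph()
for source, target in comments:
    # Accumulate a weight so repeated interactions thicken the edge.
    if G.has_edge(source, target):
        G[source][target]["weight"] += 1
    else:
        G.add_edge(source, target, weight=1)

widths = [G[u][v]["weight"] for u, v in G.edges()]
nx.draw(G, with_labels=True, node_color="lightblue",
        node_size=1500, width=widths)
plt.show()
```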

I consider myself more of a lurker, so engaging with others doesn't come naturally. I wanted to see, however, whether my efforts to communicate with others would be rewarded with a higher level of engagement in general. My visualisation shows that the majority of comments were initiated by me and that I only received a response to around half of them. While we could draw conclusions from this, we need to consider that there are many variables that can't be tracked. The data don't give much away in terms of why my visualisation looks like this. Perhaps my comments weren't very thought-provoking, maybe people hadn't had the chance to look at their blogs during the week, or maybe we can only achieve high engagement if participation is assessed. Likewise, when it comes to learning analytics systems, it should be considered that the use of digital data 'relies on making a number of assumptions that do not necessarily reflect the complexities of social life' (Selwyn, 2015, p. 75). What came to mind when thinking about my data and my learning was the concept of communities of practice (Wenger-Trayner & Wenger-Trayner, 2011), which to them 'reflects the fundamentally social nature of human learning'. According to them:

Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.

(Wenger-Trayner & Wenger-Trayner, 2015)

Does my picture show an ideal community of practice? Perhaps not. What the visualisation doesn't show, however, is what I have learnt by simply reading the other blogs. I might not have achieved the engagement I initially expected, but I still feel that the exercise was beneficial to my learning.

References

Selwyn, N. (2015). Data entry: towards the critical study of digital data and education. Learning, Media and Technology, 40(1), pp. 64-82.

Wenger-Trayner, E. & Wenger-Trayner, B. (2015). Introduction to Communities of Practice. Available at: https://wenger-trayner.com/introduction-to-communities-of-practice/ (Accessed 13 February 2021).

Wenger-Trayner, E. & Wenger-Trayner, B. (2011). What is a community of practice? Available at: https://wenger-trayner.com/resources/what-is-a-community-of-practice/ (Accessed 13 February 2021).

Week 4: a week of motivation

Data Visualisation: motivation

This week I decided to track my motivation for studying. It was definitely more difficult than simply recording tasks or durations, but I actually found the daily self-reflection very interesting and hope it can be beneficial for my learning. Deciding which variables to include was probably the most challenging part. Motivation can be affected by so many internal and external factors that it was hard to decide which were the most important. Before this exercise I always thought that I was more motivated in the mornings than in the evenings, so I included time of day. My visualisation, however, doesn't show a clear correlation between motivation and time of day, although a bigger sample would certainly be useful.
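
To test this kind of hunch a little more systematically, the daily ratings could be tallied in a few lines of code. The sketch below assumes motivation was scored on a simple 1-5 scale and uses invented sample entries rather than my real log.

```python
# Sketch: average motivation rating per time of day.
# Entries are hypothetical placeholders, not my real data.
from statistics import mean

log = [
    ("morning", 4), ("evening", 2), ("morning", 3),
    ("afternoon", 3), ("evening", 4), ("morning", 4),
]

by_time = {}
for time_of_day, rating in log:
    by_time.setdefault(time_of_day, []).append(rating)

for time_of_day, ratings in by_time.items():
    print(f"{time_of_day}: mean motivation {mean(ratings):.1f} (n={len(ratings)})")
```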

To better understand motivation, I explored Ryan and Deci's (2000) Self-Determination Theory. It describes how three psychological needs – competence, autonomy and relatedness – need to be met to enhance self-motivation and mental wellbeing. Looking at my visualisation, I can see how feeling competent has potentially influenced my motivation. I struggled with the Friesen article and had to read it over several days (Monday-Wednesday). On the other hand, I found the Eynon article much easier, both in terms of content and language, and felt motivated when I finished reading it. I chose bars because they let me easily show the change in motivation: even when I wasn't motivated at the beginning, I sometimes got more enthusiastic once I felt engaged with the task.

What I realised when drawing was how subjective my data visualisations are. I select what I log based on what I think makes sense, or even on the number of colours or shapes I want to use. While learning analytics systems may be capable of recording far more data, there is only so much that can be expressed on a dashboard, for example. And then there is the question of who chooses the data. If it is mainly software engineers, as surfaced in Selwyn and Gašević (2020), how can we be sure that what is being collected, analysed and presented is beneficial to students' learning?

References

Ryan, R. & Deci, E. (2000). Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. American Psychologist, 55(1), pp. 68-78.

Selwyn, N. & Gašević, D. (2020). The datafication of higher education: discussing the promises and problems. Teaching in Higher Education, 25(4), pp. 527-540.

A week of reading

Data Visualisation: reading

For my first data visualisation I chose to log my course-related reading. Initially, I thought I would mainly log my reading of the course literature, but it soon turned out that I would end up with quite a few different reading categories. I assumed it would be fairly straightforward, but once I started, I questioned how much detail to include and whether to consider other factors as well. Is it important to record exactly how long I'm reading? And should I log which article I'm reading, or is it enough to state that I'm reading some of the course readings?

The time factor, in particular, kept playing on my mind, and I was conscious that readers might judge me based on the amount of time I spent on each 'task' or overall. What would a learning analytics system think of my efforts? It would probably agree with my findings in that my reading is very fragmented. However, while I attribute this to the exceptional circumstances in which I'm trying to combine work, study, childcare and home-schooling, an automated system would most likely not take these factors into account. Recording time spent on certain resources is easy to do, but as Bulger (2016, p. 16) states, 'time itself is not a significant indication of engagement, but rather how that time is spent.'
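
The fragmentation itself, at least, is easy to quantify even if the reasons behind it are not. A sketch of the kind of summary an automated system might produce (with invented entries standing in for my real week) could look like this:

```python
# Sketch: quantifying fragmented reading from a session log.
# Many short sessions per category suggest fragmented reading.
# Entries are hypothetical placeholders, not my real week.
reading_log = [
    ("course literature", 25), ("blogs", 10), ("course literature", 15),
    ("news", 5), ("course literature", 10), ("blogs", 20),
]  # (category, minutes per session)

totals = {}
for category, minutes in reading_log:
    sessions, total = totals.get(category, (0, 0))
    totals[category] = (sessions + 1, total + minutes)

for category, (sessions, total) in totals.items():
    print(f"{category}: {total} min across {sessions} sessions "
          f"(avg {total / sessions:.0f} min/session)")
```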

Regarding the data visualisation itself, it was tempting to use traditional tools such as charts or graphs, but I wanted to challenge myself and represent the data in a different way. I hope my visualisation has achieved this. It was interesting to experiment with shapes and colour, and I'm hoping to make my visualisations more creative over the next few weeks.

References

Bulger, M. (2016). Personalized Learning: The Conversations We're Not Having. Data & Society working paper. Available at: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf