Lindsay's Data Visualisation Blog

3 – questions

For week 5 (visualisation 3), I tried to track the questions I asked – in the spirit of continuing my visualisations about my own learning in this first block. I had also recently read about Marvin, a learning program that learned complex concepts by asking questions and drawing on easier concepts it already knew (Sammut & Banerji, 1986).
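As I understand it, the core idea is that the learner checks its own guesses by putting yes/no questions to a trainer. Here is a toy sketch of that question-asking loop – just an illustration of learning by asking, not Marvin's actual algorithm:

```python
# Toy illustration: a learner pins down a hidden concept
# ("is x big enough?") purely by asking a trainer yes/no questions.

def trainer_says_yes(x, hidden_threshold=42):
    # Stands in for the human trainer answering the learner's question
    return x >= hidden_threshold

low, high = 0, 100
while low < high:
    guess = (low + high) // 2
    # The learner's question: does this example belong to the concept?
    if trainer_says_yes(guess):
        high = guess        # yes: the boundary is at or below the guess
    else:
        low = guess + 1     # no: the boundary is above the guess

print(f"Learned concept: x >= {low}")
```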

I suppose the initial caveat is that I do not believe I captured all the questions I verbalised, never mind those I did not! It highlights the limits of capturing learning data points such as engagement (or, in this case, questions) when they are part of a complex learning strategy.

[Figure: My visualisation for a week of questions]
[Figure: The key to deciphering!]

Similarly, what appears to be a normal number of questions for me might not be for someone else – I’m the sort of person who asks my peers a lot for advice, but is less likely to ask an ‘expert’ for advice or clarification. When it comes to Learning Analytics (LA) system design, this felt like a lived version of the point Monica Bulger makes about an algorithm’s judgement (not sure if that’s the right word) of whether a learner is progressing well: students use a myriad of strategies with varying degrees of success, so how can we measure that accurately, if that is part of what the LA is trying to do? (Bulger, 2016).

If I were tracking this data again, beyond trying to capture more of the questions that I asked, I wondered if it would be more useful to capture when I asked them. Maybe days were irrelevant here and the hour of the day was a more telling and informative aspect of the data. So, was I more inquisitive at a particular point in the day? I’m not sure there is any trend to be identified in having asked fewer questions on a Tuesday – I work a half day on a Tuesday, so maybe that’s why. Again, tracking back to last week’s blog, context is always important.
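For what it’s worth, here is a rough sketch of how that capture might look, assuming each question were logged with a timestamp (the timestamps below are invented for illustration):

```python
from collections import Counter
from datetime import datetime

# Invented log: one ISO timestamp per question asked during the week
question_log = [
    "2021-10-18T09:15:00",
    "2021-10-18T09:40:00",
    "2021-10-19T14:05:00",
    "2021-10-21T10:30:00",
    "2021-10-21T10:45:00",
]

# Count questions per hour of the day, ignoring which day they fell on
by_hour = Counter(datetime.fromisoformat(ts).hour for ts in question_log)

# A tiny text 'visualisation': one question mark per question in that hour
for hour in sorted(by_hour):
    print(f"{hour:02d}:00  {'?' * by_hour[hour]}")
```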

Layout was difficult, and I think I oversimplified! Really, some of these questions could be repeated, linked or unresolved, and I wasn’t able to track those links. I did want to capture that my questions are not a neat end point, that they cascade, but without the data capture (as well as the data visualisation) getting far more complicated, I’m not sure how I could get there. So maybe this is a snapshot of the questions I ask – rather than a track like my reading week.
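If I did want to record those links, one rough sketch might treat each question as a node and each follow-up as a link between nodes (the questions, links and statuses below are entirely made up):

```python
# Made-up sketch: questions as nodes, follow-up questions as links
questions = {
    "q1": {"text": "What counts as a question?", "status": "unresolved"},
    "q2": {"text": "Do rhetorical questions count?", "status": "resolved"},
    "q3": {"text": "Should I log questions I never say aloud?", "status": "unresolved"},
}

# Each pair records that the first question led on to the second
links = [("q1", "q2"), ("q1", "q3")]

# Trace each cascade from its starting question
for parent, child in links:
    print(f"{questions[parent]['text']} -> {questions[child]['text']}"
          f" [{questions[child]['status']}]")
```

Even that little structure starts to show a cascade rather than a neat end point.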

I learned, from tracking and visualising this data, that I ask more questions in response to information than, say, in conversation. I’m not sure, if you’d asked me at the start of the week, that I’d have said that was true. Also, I thought I asked WAY more rhetorical questions, but from the data it appears that maybe I ask them to myself in a way that isn’t verbalised.

I mean, I suppose overall I now think that the question is a consequential part of the more important thing that I learn from, communication with others – and that perhaps I wouldn’t flourish under Socrates’ dialogical method! (Friesen, 2020)

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society Working Paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Friesen, N. 2020. ‘The technological imaginary in education, or: Myth and enlightenment in “Personalised Learning”.’ In M. Stocchetti (Ed.), The Digital Age and Its Discontents. Helsinki University Press.

Sammut, C. and Banerji, R. B. 1986. ‘Learning concepts by asking questions’. In Machine Learning: An Artificial Intelligence Approach, Vol. 2. Morgan Kaufmann, pp. 167-192.

3 Comments

  1. Jeremy Knox

    Another striking visualisation here, nice work! The question mark serves as a rather straightforward indication of a question, but also works really well aesthetically, particularly as there are so many of them! Tracking ‘questions’ is in itself quite a useful critical approach to analytics, which seems much more concerned with students getting answers correct, or not. As you also reflect upon here, asking questions is such an important part of learning, whether or not, I’d argue, we actually find an answer.

    ‘like a lived version of the point Monica Bulger makes about an algorithm’s judgement (not sure if that’s the right word) of whether a learner is progressing well: students use a myriad of strategies with varying degrees of success, so how can we measure that accurately, if that is part of what the LA is trying to do? (Bulger, 2016).’

    So if I understand you correctly here, the point is that a ‘progression’ or ‘performance’ measure in learning analytics is such a narrow rendition of much more complex student activity? I’d agree, and I think in many ways these kinds of ‘at risk’ measurements aren’t really interested in what students are actually doing at all. Rather, they are concerned with ‘converting’ potential failures into ‘passes’ – this reminds me of the way Zuboff (2015) discusses the ‘formal indifference’ of tech companies to the context behind the data they collect. In this sense, to the ‘at risk’ analytics system, it doesn’t really matter what a student is doing, or how much effort they are putting in, as long as they are producing the data points that avoid the ‘at risk’ calculation.

    Zuboff, S. 2015. Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), pp. 75-89.

  2. tmadden

    I love it! So much going on, so much recorded, but basically… what does it all mean?! I am embracing the messiness of it all – the messiness that LA is trying to untangle.
    If an algorithm is going to try to draw you towards what has worked for others, who is to say that’s right for you?

