Interactivity is considered essential across a range of domains (Capriotti & Moreno, 2007).

I have collected data about my lessons and how interactive they were. I consider this to be presenting myself in an ‘intimate account’ (Anagnostopoulos et al., 2013) with high accountability (Williamson, 2017), because my workplace strongly encourages interactivity. Such data can help make governance-related decisions, and educational institutions often see data as a solution to problems (Ozga, 2016).
Each circle represents a lesson; circles were chosen instead of squares because they look smoother. The circles are separated into two age groups: children (under 14 years old) and adults (14 and older). An interactive lesson is defined as one in which online interactive educational resources are used, such as ESLBrains, Kahoot!, Wordwall, or slides provided by my workplace. The colour of each interactive activity was chosen based on the material’s logo or its most frequently occurring colour. An empty circle means that nothing that could be considered an ‘interactive resource’ was used.
Key findings: 66% of the lessons included interactive material, and 25% of the lessons included two resources. The various platforms were used a similar number of times, though I used fewer games with adults than with children.
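For transparency about how figures like these can be produced, below is a minimal sketch of the kind of tally behind them. The lesson log and every entry in it are hypothetical examples rather than my actual data; only the structure (one record per lesson, with an age group and a list of interactive resources) mirrors the visualisation described above.

```python
# A minimal sketch with hypothetical data: tallying interactivity from a lesson log.
from collections import Counter

# One record per lesson: the learners' age group and the interactive
# resources used (an empty list means no interactive resource was used).
lessons = [
    {"age_group": "children", "resources": ["Kahoot!", "Wordwall"]},
    {"age_group": "adults", "resources": ["ESLBrains"]},
    {"age_group": "adults", "resources": []},
    {"age_group": "children", "resources": ["workplace slides"]},
]

total = len(lessons)
interactive = sum(1 for lesson in lessons if lesson["resources"])
two_resources = sum(1 for lesson in lessons if len(lesson["resources"]) == 2)

print(f"Interactive lessons: {interactive / total:.0%}")
print(f"Lessons using two resources: {two_resources / total:.0%}")

# Platform use split by age group, e.g. to compare games used with
# children versus adults.
platform_use = Counter(
    (lesson["age_group"], resource)
    for lesson in lessons
    for resource in lesson["resources"]
)
for (age_group, resource), count in platform_use.most_common():
    print(f"{age_group}: {resource} used in {count} lesson(s)")
```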
I did not consider how much of the class time was actually interactive; that would probably have been a more effective measure. Many other qualitative factors could also have been considered. However, a quantitative approach was chosen for presenting the data. A clear shift from qualitative to quantitative governance can be seen among educational institutions (Williamson, 2017), and many educators believe that data can speak for itself (Anderson, 2008, cited in Ozga, 2016).
This visualisation could be used for governance purposes at my school, as it shows how interactive my lessons are. Global North tech giants such as Google or Microsoft could use it too: these companies already see African countries as ‘data frontiers’ (Beer, 2019, cited in Prinsloo, 2020), and they are highly influential in the Lithuanian market as well. Collected data can generate high profits (Ozga, 2016). So who has the power to use the collected data: the tech giants, or the data producers themselves (Beer, 2019, cited in Prinsloo, 2020)?
In my case, Google and Microsoft can learn about my preferences through my use of these platforms. However, it is not only educational data that can be collected: behavioural or nutritional data, which can be even more personal, can be gathered as well (Prinsloo, 2020).
Conclusion: I can improve the interactivity of my lessons. This exercise ‘made me up’, not only by revealing this gap in my teaching but also by making me think about what could be changed (Williamson, 2014, cited in Ozga, 2016).
References
Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (2013). Introduction: Mapping the Information Infrastructure of Accountability. In Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.), The Infrastructure of Accountability: Data use and the transformation of American education.
Capriotti, P. & Moreno, Á. (2007). Corporate citizenship and public relations: The importance and interactivity of social responsibility issues on corporate websites. Public Relations Review, 33(1), pp. 84–91.
Ozga, J. (2016). Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp. 69–81.
Prinsloo, P. (2020). Data frontiers and frontiers of power in (higher) education: a view of/from the Global South. Teaching in Higher Education, 25(4), pp. 366–383.
Williamson, B. (2017). Digital education governance: Political analytics, performativity and accountability. In Big Data in Education: The digital future of learning, policy and practice. London: SAGE Publications Ltd, p. 65.
One reply on “How interactive are my lessons?”
‘I consider this to be presenting myself in an ‘intimate account’ (Anagnostopoulos et al., 2013) with high accountability (Williamson, 2017), because my workplace strongly encourages interactivity.’
Excellent link to the readings, and some important reflection here on how this kind of data might be seen as relating very directly to your professional conduct, and therefore to how you are ‘seen’ and valued by management.
‘Key findings: 66% of the lessons included interactive material, and 25% of the lessons included two resources. The various platforms were used a similar number of times.’
So, do you think this kind of profile would be ‘enough’ to satisfy an institution that you are making your teaching interactive?
‘I did not consider how much of the class time was actually interactive.’
Great point. So we can clearly see the limitations of this data, which is highly significant where important decisions about members of staff might be made. If one were to include a five-minute interactive session in each lesson, this visualisation wouldn’t be able to flag up any concerns.
‘Many other qualitative factors could also have been considered. However, a quantitative approach was chosen for presenting the data.’
And this is another key concern, isn’t it? The underlying question here is *how* interactivity is being defined. This kind of visualisation would show that some kind of interactive resource is being used by the teacher, but it is not able to depict what kind of interaction each student is experiencing. Driving key decisions about the conduct of professionals using this kind of approach would seem to have the potential to overlook much key information, and perhaps conceal circumstances where little interaction was actually happening?