Final Reflections

As the authors of ‘Dear Data’ noted, we are ‘living in the age of “Big Data”, where algorithms and computation are seen as the new keys to universal questions, and where a myriad of applications can detect, aggregate, and visualize our data for us to help us become these efficient super-humans’ (Lupi and Posavec, 2016, p.11). The central role of datafication in different spheres of life, including education, is, indeed, difficult to deny. However, the ‘dataist trust’ (Williamson, 2017, p.40) in numbers and their potential to revolutionize education shouldn’t be taken for granted. My involvement in every stage of data collection, visualization and interpretation over the past nine weeks enabled me to look ‘under the hood’ of the ‘massive data producing machine’ and uncover some tensions between big data promises and their practical realities.

The idea of hand-drawn data presentation was brilliant in the sense that it made it possible to take on the roles of the different actors usually involved in generating digital visualizations. As Williamson argues, ‘the visualization of the world … is a complex sociotechnical act involving a variety of actors and technologies with the persuasive power to shape people’s engagement and interaction with the world itself’ (Williamson, 2017, p.37). Thus, by drawing my data, I managed to minimize the impact of code, algorithms, the people who write them, and digital media on my data. As a result, my visualizations became the materialization of my own data-related choices, understanding of education, creativity and drawing skills. This also suggests that even in this exercise, data visualization is still ‘no neutral accomplishment’, as Kitchin et al. put it (cited in Williamson, 2017, p.36). What is essential, though, is that it was me who decided which aspects of behaviour are important to capture when conceiving of teaching and learning and which can stay unknown, which data about myself I felt comfortable sharing with the public and what I considered private. In the online world, those choices are often taken away from us.

When thinking of how I could present my numbers, it turned out to be quite challenging to disregard conventional forms of data visualization, like charts and graphs. I took it for granted that PowerPoint-like visualizations are strongly associated with objectivity, clarity and universality. However, as I further discovered, ‘visualization acts as a way of simplifying and reducing the complexity of the interaction of variables to graphical and diagrammatic form’ (Williamson, 2017, p.35), which means that digital tools impose constraints and may distort data to make them fit the pattern. Trying to be creative, I also experimented with Lego to visualize my data. The limitations of this medium quickly became obvious: much as with software tools, the baseboard and blocks of standardized size and shape forced me to compromise on a number of variables and meanings. Interestingly, my last data visualization looks more traditional than the rest. Maybe, being more relaxed by the end of the semester, I yielded to the mere exposure effect and subconsciously reproduced something familiar and trustworthy.

One more thing that influenced my visualization choices was collecting data through self-reporting. It took me some effort to free myself from social desirability bias when logging my data. This made me think of teachers who, in their new roles of ‘data collectors’ (Williamson, 2017, p.82), are sometimes forced to use gaming strategies to avoid the negative consequences of governing with data (Fontaine, 2016). Moreover, within my data collection practices, I discovered that there were aspects I wasn’t able to capture, like quickly changing meanings in a conversation, my feelings, daydreaming, etc. Maybe technology, through semantic and sentiment analysis, would have done better in this case. What I did with those tricky parameters was simply leave them out, demonstrating the dangerous ‘count what can be counted’ tendency often employed in learning analytics.

Working on my weekly tasks, I tried to think metaphorically. According to Lakoff and Johnson (2003), ‘the essence of metaphor is understanding and experiencing one kind of thing in terms of another’ (p.5). Visualizing distractions as a cardiogram, looking at teaching through my parental duties and a game of darts, etc. helped me better realize the complex nature of educational processes and compare it with ‘the enduringly partial nature of whole datasets’ (Tsai et al., 2020, p.556) intended to reflect them.

The most important takeaway from my data visualization work from the perspective of a learner was that education-related data have very little to say about learning per se. The analytics we can afford today only serve as ‘indicators’ of students’ behaviour, limited in explanatory power, whilst the cognitive, social and emotional processes that constitute learning are too challenging to capture and are thus left uncovered. Describing datafication in education, Tsai et al. (2020) mention the ‘phenomenon of information asymmetry’ (p.562) between data collectors and students caused by power imbalance, which turns the latter into ‘prized products, from which valuable behaviours can be extracted and consumed by ever-improving algorithmic systems’ (Knox, Williamson and Bayne, 2020, p.35). This raises numerous questions of students’ privacy, agency and equity, the very values that education should promote.

The teaching block revealed that the usefulness of LA for educators is often overstated. As Brown’s article (2020) demonstrates, the use of LA tools in the classroom is often imposed on teachers and can undermine pedagogical strategies while bringing little value in return. Most importantly, students’ performance data are becoming a source of teacher appraisal, with negative consequences that educators find difficult to push back against. As a solution, Sander (2020) suggests developing a critical understanding of digital data practices. While it is obvious that this competency will help teachers become more informed citizens, whether it could help them resist the ‘side effects’ of governing with data remains an open question.

In block three, I tracked performance indicators of the kind that is becoming the main source for digital governance. Through the choices I had to make, it became obvious that quantitative, decontextualized ‘thin descriptions’ (Ozga, 2016, p.70) infuse the current policies that are used to facilitate decision-making and that impact educators. This transforms how we understand good teaching and prioritize certain school practices. Perhaps my greatest discovery was realizing how new ‘fast policies’ (Williamson, 2017, p.67) exacerbate the very problems they were designed to eradicate (Anagnostopoulos et al., 2013).

Overall, my nine-week-long data work and the recommended literature turned out to be particularly useful for deepening my understanding of the complex processes related to data collection, visualization and their further use. As a result, I have developed a much more critical approach to datafication in education, which will guide me through my learning, teaching and managerial practices.

References:

Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R. (2013) Conclusion: The Infrastructure of Accountability: Tensions, Implications and Concluding Thoughts. In Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R. (Eds) The Infrastructure of Accountability: Data use and the transformation of American education. Cambridge, Mass: Harvard Education Press, pp. 213-228

Brown, M. (2020) Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400. DOI: 10.1080/13562517.2019.1698540

Fontaine, C. (2016) The Myth of Accountability: How Data (Mis)Use is Reinforcing the Problems of Public Education, Data and Society Working Paper 08.08.2016

Knox, J., Williamson, B. and Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), pp. 31-45. DOI: 10.1080/17439884.2019.1623251

Lakoff, G. and Johnson, M. (2003) Metaphors We Live By, with a new Afterword. Chicago, Ill.: University of Chicago Press

Lupi, G. and Posavec, S. (2016) Dear Data. London: Particular Books

Ozga, J. (2016) Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp. 69-81

Sander, I. (2020) What is critical big data literacy and how can it be implemented? Internet Policy Review, 9(2). DOI: 10.14763/2020.2.1479

Tsai, Y-S., Perrotta, C. and Gašević, D. (2020) Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment and Evaluation in Higher Education, 45(4), pp. 554-567. DOI: 10.1080/02602938.2019.1676396

Williamson, B. (2017) Conceptualizing digital data: Data mining, analytics and imaginaries. Chapter 2 in Williamson, B. Big Data in Education: The digital future of learning, policy and practice. London: SAGE Publications, pp. 25-48

Williamson, B. (2017) Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Williamson, B. Big Data in Education: The digital future of learning, policy and practice. London: SAGE Publications, pp. 65-96

End-of-Block 3 Reflections

For my visualizations in this block, I decided to track my efficiency, punctuality and one aspect of well-being – data that are easily quantified, collected and defined, and which can potentially inform policy and governance in education. Students’ test scores, teachers’ interventions and learners’ course reviews are some more examples of indicators that have become important drivers of policy development. These ‘proxies’ of learning and teaching reflect what Ozga (2016) calls ‘thin descriptions, stripped of contextual complexity’ (p.71), which facilitate and accelerate decision-making and enable comparison of educators, pupils and institutions across the globe.

The purposes of increased data use for governance are multiple and seem quite reasonable. ‘Data mining and big data supposedly enhance efficiency, increase transparency, enable greater competitiveness, and evaluate the performance of schools and teachers’ (Ozga, 2016, p.70). It may sound like a dream come true to parents who can now participate in the life of schools and get ‘a trip advisor view’ (Ozga, 2016, p.74) of educational institutions ‘from their sofa’. Pursuing the same goals, policy-makers can rationalize predetermined courses of action and shift responsibility to the ‘standards and metrics that appear as outside politics’ (Anagnostopoulos et al., 2013, p.11). Novice teachers may find data-intense environments stimulating and supportive.

However, the miraculous power of data, its objectivity in particular, is highly contentious. This becomes obvious if you look into the complex information infrastructure of accountability that has emerged around data use in education. According to Anagnostopoulos et al. (2013), ‘complex assemblages of technology, people, and policies’ (p.2) constitute the infrastructure of accountability. The authors emphasize two important things. First, the obscurity of the system: the complex infrastructure remains largely unseen, like ‘water systems that run beneath our streets and into our homes’ (p.3). Second, the number of state, private, human and non-human stakeholders that participate in creating the datasets required for governing with data is huge. Interestingly, some of them, like an algorithm developer or a private foundation, may have a very limited understanding of education. Nevertheless, all parts of the complex infrastructure affect decision-making to one extent or another. Hence, when drawing on performance data to punish or reward teachers, it is essential to keep in mind that such data are ‘a product of myriad decisions’ (Anagnostopoulos et al., 2013, p.2).

Although using data for policy creation does not solve the problem of bias in decision-making, it has definitely contributed to creating more agile, networked and fast policies (Williamson, 2017). The time for the feedback loop between data collection and policy modification has decreased, which creates a feeling of the automatic governance that some parties are probably aiming at in the future. Moreover, big data in education makes it possible to locate low performers and best practices quite quickly and take action.

Having partly addressed some of the existing issues, digital governance has also generated new problems in education. Gaming and performativity are the side effects of management with data. Prioritizing particular subjects, teaching to the test and pushing low performers out of school are the manipulations that some schools resort to in order to improve their ratings (Anagnostopoulos et al., 2013). As a result, data-infused policies and the culture of accountability tend to exacerbate the problems of inequality, trust and customization while purporting to address them.

In conclusion, big data transforms not only the processes of policy creation, but also teaching practices and how we conceive of quality education.

 

Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R. (2013) Introduction: Mapping the Information Infrastructure of Accountability. In Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R. (Eds) The Infrastructure of Accountability: Data use and the transformation of American education. Cambridge, Mass: Harvard Education Press

Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R. (2013) Conclusion: The Infrastructure of Accountability: Tensions, Implications and Concluding Thoughts. In Anagnostopoulos, D., Rutledge, S.A. and Jacobsen, R. (Eds) The Infrastructure of Accountability: Data use and the transformation of American education. Cambridge, Mass: Harvard Education Press, pp. 213-228

Ozga, J. (2016) Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp. 69-81

Williamson, B. (2017) Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Williamson, B. Big Data in Education: The digital future of learning, policy and practice. London: SAGE Publications, pp. 65-96

Week 11: Efficient or not efficient?

For my last visualization, I decided to track my own efficiency during a working day. I work in education in the IT sector, where reporting time and task progress in dedicated digital systems is a daily routine for most employees. However, our training department does not employ these task/time management practices yet. As an experiment, I logged the time of my work/non-work activities and ticked the task boxes for 5 days during regular working hours. The issues that concerned me in the process:

1) I devote 1 hour of my working day to reading (professional development). As a rule, time for professional development is not counted as working hours, at least in my environment. Needless to say, professional development is essential for both employee motivation and quality teaching, so why educators need to sacrifice their sleep or family hours to stay professional remains unclear.
2) The numbers show that I am ‘underutilized’, as I don’t log 8 working hours a day (a rough sketch of this arithmetic follows this list). I must confess that it made me feel very uneasy, and the temptation to tweak the numbers was huge. It is unfashionable not to be busy these days.
3) Some tasks weren’t completed for reasons beyond my control. However, for the employer or the customer, it is usually a yes/no question (done/not done). To emphasize this message, I used black and white colours for the graphs with time and tasks. As Anagnostopoulos et al. noted, ‘data, in themselves, do not hold meaning’ (2013, p.219). For the indicators on the right, by contrast, I chose colour coding to emphasize the fact that those who look at my data, or the algorithm that processes them, will construct their meaning in accordance with their own values and priorities.
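
To make the ‘underutilization’ verdict concrete, here is a minimal sketch of the arithmetic such a reporting system might run on my log. The activities, durations and the 8-hour norm are illustrative assumptions, not my actual data:

```python
from datetime import timedelta

# Hypothetical one-day log: (activity, duration, counts_as_work).
day_log = [
    ("online lessons", timedelta(hours=3), True),
    ("lesson preparation", timedelta(hours=1, minutes=30), True),
    ("professional reading", timedelta(hours=1), False),  # not counted as work
    ("reporting and email", timedelta(hours=1, minutes=45), True),
    ("lunch and breaks", timedelta(hours=1), False),
]

EXPECTED = timedelta(hours=8)  # the standard working day the system assumes

worked = sum((d for _, d, is_work in day_log if is_work), timedelta())
print(f"Logged work: {worked}, utilization: {worked / EXPECTED:.0%}")
# -> Logged work: 6:15:00, utilization: 78%
# A dashboard would flag anything below 100% as 'underutilized',
# regardless of why the remaining hours are not logged as work.
```

Note how the professional reading simply disappears from the ‘work’ total: the system’s definition of work, not mine, decides what counts.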

Thinking of education from this perspective: today teachers are turning into ‘data collectors’ and ‘data entry clerks’ (Williamson, 2017) who are also expected to log their daily activities, so that once their students’ test scores arrive, the interested parties can measure their effectiveness and define ‘best practices’ for further scaling. As Fontaine explains, ‘teaching and learning are increasingly being measured and quantified to enable analysis of the relationship between inputs (e.g., funding) and outputs (e.g., student performance) with the goal of maximizing economic growth and productivity and increasing human capital’ (2016, p.2). It is noteworthy that ‘measuring teaching with the same ruler’ as IT work (e.g. code writing) is a disputable practice. The same applies to insisting on meaningful causal relationships between teaching and students’ test scores. In reality, and research supports this, ‘student performance is more closely linked to socioeconomic status’ (Fontaine, 2016, p.2) than to teaching effort.

Week 10: Meet or miss

This week I tracked how punctual I am in my working and personal contexts. I captured time-related data for 5 days, 17 hours a day. Here’s what I found out:
 
1) My patterns of behavior are similar in different contexts: I joined 67% of work events in advance or on time, and the figure for private matters is 68% (a sketch of how such a score could be computed follows this list).
2) I stick to my plans and schedules better at the beginning of the week. It gets more challenging to keep on track by Friday.
3) Whether I join a meeting in advance or a few minutes late mostly depends on my role in it. If I’m the presenter, I always start the event a bit in advance. If I’m one of 30 participants in FYI mode, being a bit late is absolutely fine; nobody starts on time in that setting.
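
For illustration, here is a minimal sketch of how a tracking system might boil my week down to a single punctuality score. The event log and minute values are invented placeholders:

```python
# Hypothetical log: (context, scheduled_minute_of_day, joined_minute_of_day).
events = [
    ("work", 600, 598), ("work", 660, 663), ("work", 840, 840),
    ("private", 540, 539), ("private", 1020, 1031), ("private", 780, 778),
]

def on_time_rate(log, context):
    """Share of events in a context joined at or before the scheduled time."""
    relevant = [e for e in log if e[0] == context]
    on_time = sum(1 for _, sched, joined in relevant if joined <= sched)
    return on_time / len(relevant)

for ctx in ("work", "private"):
    print(f"{ctx}: {on_time_rate(events, ctx):.0%} on time")
```

The single percentage this produces is exactly the kind of ‘thin description’ discussed above: my role in the meeting, the local norms and the reasons for any delay are all stripped away before the number reaches a dashboard.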
 
My third point is about the context that remains unknown if you only look at my visualization. Is being late a norm or misconduct in my cultural and working settings? What does my contract say, and what unwritten rules operate in my department? These data are obviously incomplete and decontextualized, and, like other kinds of statistics, they reduce the complexities of real life. However, as Grundmann and Stehr (2012) note, ‘the decline or loss of the context-specificity of a knowledge claim is widely seen as adding to the validity, if not the truthfulness, of the claim’ (cited in Ozga, 2016, p.71).
Potentially, in a neoliberal society pressing for efficiency, such indicators can become part of an algorithm that measures teachers’ or school managers’ performance and discipline. In the online world, tracking working hours and punctuality is another simple measurement that can shape policies and impact ratings.
This tracking exercise made me recollect some unpleasant stories connected with firing teachers. The thing is that if you have to dismiss an instructor for poor teaching here, it’s next to impossible to do it for the real reason, because ‘low-quality teaching’ is badly defined, has a plethora of interpretations and sounds very subjective. Hence, it is not in any way described in the contract. So, in search of evidence of misbehavior, what we did back in pre-Covid times was simply request the clock-in/clock-out data from the security service. Late arrivals and early leaves could be easily detected and found fault with, since they are regulated by internal policies. As ridiculous as it may seem, a manager can rely on this kind of ‘objective’ data to make life-changing decisions. In effect, punctuality and working hours are prioritized over quality teaching, just because they are easy to measure and define.

Week 9: My diet

For my 7th visualization, I chose to track what I eat and drink during the day and when. At first sight, it seems that this kind of personal data can hardly be linked to the topic of digital governance. However, I could relate them to two issues: teachers’ or students’ well-being and their efficiency. Interestingly, these data can be interpreted in opposite ways. On the one hand, my diet looks more or less balanced, and it’s obvious that I eat regularly (even more often than needed :). This may suggest, quite indirectly though, that I have the potential to be efficient at work, since hunger is not what will take my attention away. On the other hand, I eat or drink almost every hour during the working day, which is distracting and can play against my efficiency. So what decisions can these data inspire? Or would it be necessary to link these numbers to another data set, like my performance at work, to be able to conclude anything?

In 2020, many people switched to working from home, and some companies, especially in IT, are now struggling to decide whether WFH is as productive as working from the office, so that they can adapt their policies by the time the pandemic fades away. Gauging employees’ behavior (engagement) when working from home, quantifying it and comparing it to their time in the office could help institutions arrive at more ‘objective’ decisions that people would trust. In the time of ‘fast policies’ (Williamson, 2017), you can never predict what data will become part of the algorithm that will determine the fate of millions.

P.S. They say ‘we are what we eat’. I felt quite uneasy sharing my personal data with the public. I would most probably feel the same if I were expected to log information about myself or my work so that it could be used by ‘I don’t know who’ and ‘I don’t know how’. It made me think of teachers who are often forced to live and work ‘behind a glass wall’ to serve the current values of transparency, openness and participation.

End-of-Block 2 Reflections

In this block, I aimed to deepen my understanding of how the ubiquitous processes of datafication and personalization in education are affecting the concept of teaching and the role of the teacher. It turned out that the current trends impacting teaching in the data-intense society are similar to those we came across in the discourses about learning.

First off, there is a conflict between a ‘dataist’ trust in the ‘magic of digital quantification’ (Williamson et al., 2020, p.352) and the reductive, subjective nature of education-related data collection and visualization. Learning analytics and educational platforms have become new ‘must-haves’ not only for commercial vendors but for public institutions as well. Big data advocates argue that digital tools can augment and facilitate pedagogical practices or, more radically, replace the traditional educator.

To realize the limited potential of LA to enhance instruction, it is essential to look into the data they present. As a rule, they measure students’ engagement (attendance, emotions, time) or predict learners’ performance using algorithms. These metrics are highly contentious and can hardly reflect the complexity of the teaching/learning process. ‘What is learner engagement?’ and ‘whose opinions are embedded in the algorithm code?’ (O’Neil, 2016) are far from the full list of questions that should be raised before using LA as ‘proxy measures of the performance of staff, courses, schools, and institutions as a whole’ (Williamson et al., 2020, p.354).

The assessment of teacher efficiency via students’ quantified behavior risks transforming pedagogy ‘to ensure it “fits” on the digital platforms that are required to generate the data demanded to assess students’ ongoing learning’ (Williamson et al., 2020, p.358). Brown’s research on the use of LADs in the classroom (2020) demonstrates that instructors ‘expressed frustration with the ways that data displays undermined their existing pedagogical strategies’ (p.384) while bringing little value in return. According to Prinsloo, learning analytics technologies seldom align with pedagogical conceptions and theories, stemming mainly from developers’ priorities rather than educational processes (cited in Raffaghelli and Stewart, 2020, p.439). Thinking of why teachers have little say in the process of education datafication, it is essential to remember who ‘sets the tone’ in this domain and what their driving values are (Van Dijck et al., 2018).

Faced with all the pressures of the ‘ranking’ society and numerous instances of data harm (O’Neil, 2016), educators should be looking for ways to resist the adverse effects of datafication. Some academics are starting ‘with a restatement of the inherent social and public good of higher education’ (Williamson et al., 2020, p.362). Interestingly, Van Dijck et al. (2018) describe how the core values of public education have recently been compromised by the Big Five, yet little is said about how these alarming tendencies can be resisted.

Maybe understanding what is happening and why is a good first step. More and more researchers (Sander, 2020; Raffaghelli and Stewart, 2020) emphasize the importance of developing critical data skills for educators to push back on technocratic control and unwelcome surveillance.

Drawing on the insights from the readings as well as my own data collection practices, I can conclude that datafication and personalization ‘as the mantras of a new educational paradigm’ (Van Dijck et al., 2018, p.18) are reshaping the role of teachers: ‘from classroom directors to dashboard controllers’ (Van Dijck et al., 2018, p.7), from ‘sages on stage’ to ‘sages engaged in data-informed divination’ (Brown, 2020, p.396). This simplified vision of the teacher’s functions is affecting pedagogy, sense-making and decision-making at all levels and can be destructive.

References:

Brown, M. (2020) Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400

O’Neil, C. (2016) Weapons of Math Destruction. Talks at Google. Available at: https://www.youtube.com/watch?v=TQHs8SA1qpk

Raffaghelli, J.E. and Stewart, B. (2020) Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4), pp. 435-455. DOI: 10.1080/13562517.2019.1696301

Sander, I. (2020) What is critical big data literacy and how can it be implemented? Internet Policy Review, 9(2). DOI: 10.14763/2020.2.1479

Van Dijck, J., Poell, T. and de Waal, M. (2018) Chapter 6: Education. In The Platform Society. Oxford: Oxford University Press

Williamson, B., Bayne, S. and Shay, S. (2020) The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365

Week 8: My Browser Fingerprint

The importance of data literacy for a 21st-century teacher is already taken for granted. However, according to Raffaghelli and Stewart (2020) and Sander (2020), the current approach to data literacy is fairly instrumentalist: it focuses on efficient data use in teaching and education management. As an alternative, the authors emphasize ‘the concept of an extended critical big data literacy that places awareness and critical reflection of big data systems at its centre’ (Sander, 2020, p.1).
 
Unfortunately, the critical perspective on big data practices has never been part of my educational agenda, extensive as that agenda has been. So I decided to start my own development in this direction by visualizing my ‘browser fingerprint’ and looking at myself as ‘a data subject’ (Brown, 2020) leaving traces on the global web. For this purpose, I decided to track my browser history (a simplified version) and start noticing how well the sites I use ‘know me’ through ads and recommended content. I found the tools explored by Ina Sander, like myshadow.org and donottrack-doc.com, valuable; they enriched my understanding of how the tracking mechanisms work and what risks they may increase.
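
As a side note, what struck me most is how little machinery browser fingerprinting actually needs. The sketch below is a deliberately simplified illustration with invented attribute values; real fingerprinting scripts combine far more signals (canvas rendering, installed fonts, plugins, etc.):

```python
import hashlib

# Hypothetical attributes of the kind a fingerprinting script can read
# from any visiting browser without asking for consent.
attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "language": "en-GB",
    "timezone": "Europe/London",
    "screen": "1920x1080x24",
    "do_not_track": "unset",
}

# Concatenate and hash: no cookie is stored, yet the result is often
# stable enough to recognize the same browser across visits and sites.
fingerprint = hashlib.sha256("|".join(attributes.values()).encode()).hexdigest()
print(fingerprint[:16])  # a pseudonymous ID derived purely from my setup
```

Unlike cookies, there is no banner to decline here, which is precisely why critical awareness of such mechanisms matters.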
 
My major observations:
– I use more or less the same online resources every day, and they ‘know me’ pretty well: roughly 80% of the content and ads I was offered seemed relevant. Yet I know close to nothing about what data they collect, how the data are used and how this affects my experience.
– I was asked to accept cookies only 3 times. Having become a little more concerned about the collection of my data, I didn’t blindly press ‘ok’ as I most probably did in the past.
– Facebook, WhatsApp and Google, notorious for data manipulations, are among my most frequently visited platforms (the Big Five guys). According to donottrack-doc.com, I could use more data-conscious services such as DuckDuckGo or Telegram for the same purposes.
– By often using internet resources uncritically, I put at risk not only my own privacy, but also the privacy and data safety of my students, since we now work online.
 
It’s no secret that teaching, as well as learning, is becoming more and more technologically intense, which means that for any teacher, developing a critical understanding of data has become no less essential than learning how to use a computer. Ideally, the two should come as ‘one package’ within professional and secondary education agendas.

DIY Dashboards

Context
I used the second Sample Data file, which provides data about a group of students taking a blended English course (corporate training) consisting of VLE activities and online lessons of equal importance.

Questions I had to answer:

1) What data will be useful for an English teacher and could help them plan necessary interventions or inform their teaching practices?
*attendance and names of the students (attending classes is essential in such courses, as it is the time for practicing the new language) – chart 1
*VLE data and names of the students (I included only the total number of posts on the platform (posts + replies), as it suggests students’ ‘production’ activities as opposed to time spent on the platform) – chart 2
*test results and names of the students (no benchmarks are provided for the teacher, leaving the results open to the teacher’s interpretation) – chart 3
*students’ performance = average test result ((test1+test2+test3+test4)/4) – chart 4 (a sketch of this computation follows the list)
*based on the indicators of all the students, high/low/average performance/attendance/VLE engagement were defined. Chart 4 can serve as one of the ‘data-informed portraits of individual students’, as Brown (2020) put it (p.396).
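
Here is a minimal sketch of the arithmetic behind chart 4. The student rows are invented placeholders, not the actual Sample Data, and the low/average/high banding is my own assumption about how such labels could be derived:

```python
# Invented sample rows standing in for the Sample Data file.
students = {
    "Student A": {"tests": [72, 80, 65, 83]},
    "Student B": {"tests": [55, 48, 60, 52]},
    "Student C": {"tests": [88, 91, 85, 90]},
}

def band(value, values):
    """Label a value low/average/high relative to this group only."""
    lo, hi = min(values), max(values)
    third = (hi - lo) / 3 or 1  # avoid division by zero if all values are equal
    return ["low", "average", "high"][min(2, int((value - lo) / third))]

# performance = (test1 + test2 + test3 + test4) / 4, as in chart 4
averages = {name: sum(d["tests"]) / 4 for name, d in students.items()}
for name, avg in averages.items():
    print(f"{name}: average {avg:.1f} -> {band(avg, averages.values())} performance")
```

Note that the bands are relative to this particular group, which is exactly the concern raised in constraint 3 below: the same student could be ‘high’ in one cohort and merely ‘average’ in another.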

2) What data can be left out?
*VLE logs, forum views, pdf views – these data say very little about engagement with the course. They are more about learning habits or the students’ context.

Benefits for teachers:
1) A quick visual summary of test results and attendance, which saves the teacher’s time
2) Such data enable teachers to see the students who need more support
3) Students’ profiles (chart 4) can be useful when teachers have many students, as a quick summary of their activity and performance on the course
4) Such charts may provide more visibility to other stakeholders, like parents or the students’ managers, which is often expected of teachers
5) Numbers presented in well-known forms will be trusted more than teachers’ notes or free-form feedback
 
Constraints:
1) These data can only serve as indicators of the situation, which still need to be linked to the context to make sense
2) The issues of privacy and surveillance: who do these data belong to? Are teachers aware of what data are collected, how, and who might use them?
3) Defining what counts as low/average/high performance and engagement based on the indicators of one particular group is contentious. It reminds me of the assessment results adjustment in the UK, when the measurements of the previous cohort impacted the future of the next generation.
4) Chart 4 looks very primitive. It suggests that the person achieved top results in attendance and performance, when in reality they didn’t. This is either because ‘the designer of the dashboard’ has limited tech expertise or because the tools I could use had their constraints.

Conclusion:

One of the conclusions drawn by Brown in his research (2020) was that ‘some data was better than no data’ (p.392). I believe this is also true of the usefulness of my DIY dashboards. Overall, they have little potential to revolutionize one’s pedagogical strategies. Still, teachers might find them handy for lesson planning, drawing some actionable insights and, maybe, reflecting on their own efficiency. Having students’ profiles at hand can also be of help when reporting on students’ achievements or preparing for individual consultations with learners.

As many researchers argue, ‘instructors appear responsive to data about teaching when they can identify useful connections to their daily work and when the data is framed as legitimate by their professional or disciplinary beliefs’ (cited in Brown, 2020, p.385). So before introducing any kind of dashboard, it is essential to ensure that educators understand how the algorithms work and how these data can inform their day-to-day practices.

Week 7: Faces on/off

The idea to count how often my colleagues use their cameras during online meetings struck me during our last tutorial, when few people showed their faces. Since the phenomenon of pervasive online teaching is relatively young, ‘Zoom ethics’ has not yet fully formed, so the rules of online behavior differ from teacher to teacher and from institution to institution.

In my working setting, learners are strongly encouraged to turn on their cameras in the virtual language classroom. Facial expressions enable the teacher to receive immediate non-verbal feedback, check engagement and sentiment, and react accordingly. Speaking to 10 frozen avatars is no fun at all.

However, there are many arguments against introducing mandatory video policies in education. Recent research describes a wide range of reasons why video conferencing can be uncomfortable or disadvantageous for students. At the same time, little is said about learners who hide behind avatars because they want to use their phones during the lesson or enjoy their morning coffee. Does such behavior influence engagement and learning outcomes? I believe it does. Nevertheless, keeping your camera on doesn’t guarantee students’ engagement or success either.

How can these data be used in teaching: for empowering or for controlling teachers? On the one hand, looking into why students use or don’t use cameras during online lessons can be beneficial, as it may help the teacher understand the learners’ contexts better. It is noteworthy, though, that these data don’t provide any ‘whys’, and the sense-making is still on the teacher. On the other hand, using these data as an indicator of students’ participation/engagement to measure instructors’ efficiency is contentious, first of all because of the superficial nature of these data. Unfortunately, we can’t be sure this will never happen, since this aspect of online behavior is very easy to put into numbers and employ in more sophisticated algorithms.

Week 6: Mother’s Verbal Nudges

Since I’m not teaching this semester, I decided to focus on how efficient I am as a parent. I aimed to count how many times I need to say something to make my 5-year-old daughter do what I want (verbal nudges). All in all, I tried to track all my action-encouraging requests/demands/orders/hints for 4 days, around 2-3 hours a day. I based my visualization on the game of darts, imagining myself as a player who aims to hit the bull’s-eye as quickly as possible. However, sometimes it takes too many attempts to reach the target. If you have kids, I’m sure you know what I’m talking about.

My speculations:

  1. Unlike learning, teaching makes little sense when considered in isolation from its counterpart. Even though the tendency to assess educators’ efficiency based on their students’ performance is questionable (Williamson, 2019), when conceiving of successful teachers we first of all conceive of their successful students. For a similar reason, my data collection involves my kid’s reactions to my verbal stimuli.
  2. In an automated version of this visualization, the conclusions drawn from the data would depend on the pedagogical benchmarks, social values or political aims built into the algorithm. As Jasanoff (2018) argues, ‘data do not simply represent the reality of the world independent from human thought but are constructions about the world that have been assembled for specific purposes’. In my case, if my mom looked at my data, she would claim that I’m a poor parent, since I often have to repeat myself many times before my daughter does what I request. At the same time, some of my friends would label me a very demanding mother, based on their own experience with their kids and their beliefs about happy parenting. So who will decide what it means to be a good teacher/parent? In any case, most parents suffer from the ‘bad mother complex’, so any tracking system like this risks increasing their anxiety.
  3. This is a very simplistic way to visualize mother-child or learner-teacher relationships. Even requests/tasks alone are so varied, sometimes not formulated as requests at all. Besides, there is a plethora of non-verbal communication between people. Moreover, there is so much context in any conversation that technology will never be able to grasp and properly relate, and will thus leave it out. Teaching as a ‘gift’ that a teacher grants to students, as Biesta described it, seems too challenging to quantify and represent as a graph. In the same vein, my daughter is not a board that is predictably responsive to darts…
  4. Data collection can be intrusive, and surveillance is annoying. Understanding very little of what I was doing, my daughter felt irritated every time I wrote things down about her. Thinking of students, they would probably feel the same if data systems did their tracking in a more explicit way. However, the fact that we don’t notice them doesn’t mean that data collection is not taking place.