Final post

Introduction

Producing nine data visualisations seemed like a daunting task at the beginning. After a while, however, I felt more confident, and I hope my development is reflected in the visualisations. Rather than tracking everyday tasks, I tried to focus on data related to my studies. Initially, I thought that this would give me better insight into how data are used for learning and teaching, but towards the end of the nine weeks I came to realise that it was the act of collecting data and drawing that mattered, not the subject.

Visualising data and its challenges

For me, the main challenge of this task wasn’t usually how to present the data but what to measure. I often felt restricted by what I could record, as manually collecting data can be time-consuming. At other times, I was mindful that if I tracked a lot of data, my visualisations might become too complex. What this taught me, however, was that data scientists and other professionals working with educational data are likely to face similar restrictions. While they have the ability to collect vast amounts of data, they still have to decide what a student dashboard displays, for example.

Block 1 – ‘Learning’ with data

The first block challenged us to think critically about how student learning is influenced by data-driven technologies and processes, how data can enhance student learning, and what issues may arise from these practices.

Learning analytics and data science can give students, teachers and institutions valuable insights into their behaviour and practices. Knox et al. (2020, p.34) highlight that ‘the promotion of learning analytics is often premised upon its ability to reveal insights about learning unobtainable without the collection and analysis of learner-data’. Others argue that ‘simply observing learning events is not revealing of successful and unsuccessful learning patterns’ (Selwyn & Gašević 2020, p. 529). While drawing up my data visualisations, I was often mindful of how revealing they really were. After selecting what data to analyse and how to present them, I was aware of the number of choices I made during the process and the effect this had on objectivity.

During this course, I was also often aware of how much information about myself I am giving away and who might be able to access it. This raised the question of how we can find a balance between learning analytics enhancing student agency and the increased surveillance and control of students (Tsai et al. 2020). As surfaced in Tsai et al. (2020, p. 556), there are further ethical concerns such as ‘the dangerous tendency of predictive modelling to reproduce existing biases based on race, gender and class’.

Block 2 – ‘Teaching’ with data

This block looked at the role of data in teaching and their impact on teachers and teaching practices. Due to the rise of big data and learning analytics, the use of data goes far beyond providing teachers with information about their students’ learning progress. As surfaced in Williamson et al. (2020, p.354), ‘measures of student performance, sentiment, engagement, and satisfaction are also treated as proxy measures of the performance of staff, courses, schools, and institutions as a whole, leading to new claims that HE quality can be adduced from the analysis of large-scale student data.’ This development may have serious implications for teachers’ autonomy and creativity, particularly since ‘not all forms of learning can be quantified and analysed. And this means, potentially, that not all forms of teaching and learning will ‘count’ in terms of how teachers and students are measured and assessed’ (Williamson et al. 2020, p. 358).

Institutions should be careful to align learning analytics technologies with pedagogical theories and not only consider developers’ priorities which often have profits in mind (Raffaghelli & Stewart, 2020).

Block 3 – ‘Governing’ with data

By the time I reached block 3, I struggled to think of what data to visualise. Once I found a topic, however, the task of collecting information and producing my visualisation felt more natural.

What particularly stood out for me in my visualisation of our Tweetorial was that governing in education is no longer only a political matter. Organisations such as Microsoft, Google and the OECD were mentioned during our exercise, indicating that a variety of actors are now involved in policy-making. Williamson illustrates that ‘alongside the rising use of data, education has experienced a ‘governance turn’ which sees authority over education redistributed from central governments and their agencies to a much wider array of private sector and civil organizations, including businesses, consultants, entrepreneurs, think tanks, policy innovation labs, charities and independent experts, many of them tangled together in networks of relationships’ (Williamson 2017, p.67). We have to ask ourselves what motives are behind this investment in education and what the consequences are for teachers and learners.

Other issues that became apparent during this block were the reliance on data to assess performance (Ozga, 2016) and the rise of standardisation (Anagnostopoulos et al., 2013). While introducing standardised tests and policies can highlight opportunities in educational policies, there are also risks. ‘As large-scale information systems produce increasingly precise measurements of student, teacher, and school performance, they risk substituting precision for validity and distracting from important issues, such as educational equity, diversity, and social justice, that are not easily reduced to or redressed by standardized metrics’ (Anagnostopoulos et al. 2013, p. 16).

Summary

Each block has highlighted different aspects and issues of data in education, but one of the recurring themes was the question of objectivity. Data, and the way they are presented, are often perceived to be neutral and are used to back up findings and produce recommendations. Deciding which data to collect and how to visualise them strongly emphasised that data carry a large amount of subjectivity. They contain our personal values, beliefs and knowledge, and visualisations are therefore selective and biased.

Examining the bigger picture of educational data in the governing block, I became aware of the increasing power data have in education. Large organisations such as the OECD now have the ability to influence policies on a global scale, potentially putting specialised knowledge at risk.

I have enjoyed examining the role of data in education from a critical perspective and now feel better equipped to recognise potential issues. However, as someone working in digital education, I am still unsure of how to use data well. Nonetheless, not blindly trusting data and questioning practices is something I am now able to do, and it should be a skill for everyone working in education.

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (2013). Introduction: Mapping the Information Infrastructure of Accountability. In Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds.) The Infrastructure of Accountability: Data use and the transformation of American education. Harvard Education Press.

Knox, J., Williamson, B. & Bayne, S. (2020). ‘Machine behaviourism: Future visions of “learnification” and “datafication” across humans and digital technologies’, Learning, Media and Technology, 45(1), pp. 1-15.

Ozga J. (2016). Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal. 15(1), 69-81. doi:10.1177/1474904115616629

Raffaghelli, J.E. & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature, Teaching in Higher Education, 25:4, 435-455, DOI: 10.1080/13562517.2019.1696301.

Selwyn, N. & Gašević, D. (2020). The datafication of higher education: discussing the promises and problems, Teaching in Higher Education, 25:4, 527-540, DOI: 10.1080/13562517.2019.1689388

Tsai, Y-S., Perrotta, C. & Gašević, D. (2020). Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics, Assessment & Evaluation in Higher Education, 45(4), pp. 554-567, DOI: 10.1080/02602938.2019.1676396

Williamson, B. (2017). Big data in education: The digital future of learning, policy and practice. SAGE Publications Ltd, https://www.doi.org/10.4135/9781529714920

Williamson, B., Bayne, S. & Shay, S. (2020). The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.

Block 3 summary

For the governing with data block I tracked activity during our Tweetorial, the digital traces I’m leaving while studying, and finally my performance at work, hoping to gain more insight into the role of data in governing.

The literature for this block has revealed how policy-making has moved away from political actors to a wide range of global actors, ranging from private sector companies to think tanks and independent experts (Williamson, 2017). Initiatives such as Pearson’s Learning Curve Data Bank show how influential commercial businesses have become by being able to ‘identify policy problems for national schooling systems, from which [they] also [have] the potential to profit by selling policy solutions’ (Williamson 2017, p. 23). This raises questions as to how valid these policy problems are, or whether they were actively ‘created’ in order to profit from offering solutions.

The task of visualising public data such as Education GPS, a database by the OECD, gave insight into the vast amount of educational data that is available for anyone to analyse and interpret. Although the website includes a note that ‘[t]hese values should be interpreted with care since they are influenced by countries’ specific contexts and trade-offs’, it is easy to see how tools like these can be used to produce impressive reports and recommendations based on the perceived objectivity of data.

The issue of objectivity of data links to the subject of accountability, which was a recurring topic in this block’s literature. Anagnostopoulos et al.’s (2013) chapters on test-based accountability have highlighted the trend in educational policy towards measuring, monitoring and regulating. Tracking my ‘performance’ at work in week 11 emphasised the potential shortcomings of using standardised measures to determine performance. Anagnostopoulos et al. (2013, p.15) raise important questions such as ‘Who determines the tests and algorithms used to quantify student learning and teacher quality, who creates them, and who is left out of such decisions?’ This may not only be an issue for assessing students’ and teachers’ performance but also for other parts of education. Ozga (2016), for example, describes how a ‘shift away from ideas, possibilities and informed expert analysis in shaping the knowledge-governing relationship and towards the application of rules derived from recurring data patterns’ can create tensions amongst school inspectors.

Big data and developments in data-processing software have changed the educational governance landscape, replacing ‘slow-paced bureaucratic policy processes’ with practices that claim to ‘make all educational problems measurable, calculable and knowable, and thereby solvable at high speed’ (Williamson 2017, p. 25). From what we have learned in the first two blocks of this course, it is important to consider issues such as ethics and privacy, data literacy and bias when relying on data for decision-making. The visualisations have, once again, highlighted how subjective and selective data collection and analysis can be.

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds.) (2013). The Infrastructure of Accountability: Data use and the transformation of American education. Harvard Education Press.

Ozga J. (2016). Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal. 15(1):69-81. doi:10.1177/1474904115616629

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. Sage.

Week 11: a week of performance

For my final data visualisation, I decided to track my performance at work. The difficulty was to determine how to measure ‘performance’. Due to the nature of my job, it isn’t easy to say how many tasks I have completed each day. I’m working on big projects and am fairly autonomous as to what I’m doing each day. How would a machine measure how I did, I wondered? It’s easy to see how many meetings I attended, how many emails I sent and how much time I’m spending on my computer. But how can this information give insight into how well I’m doing my job?

Week 11 visualisation
Legend

I chose to explore this issue after reading how problematic performance measuring can be as part of the information infrastructure of test-based accountability. According to Anagnostopoulos et al. (2013), there are questions around how well performance measures can represent teaching, learning and schooling. ‘As [standardised tests, mathematical models, and computing technologies] define what kind of knowledge and ways of thinking matter and who counts as “good” teachers, students, and schools, these performance metrics shape how we practice, value, and think about education.’ The perceived objectivity of data therefore leads to a shift of power away from traditional actors in educational governing.

Looking back at my visualisation, it seems as if Monday (top left) was my most productive day although I perceived Thursday (bottom right) as the day I achieved most. Although this is a very small sample, it shows how difficult it is to measure performance by purely looking at data.

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds.) (2013). The Infrastructure of Accountability: Data use and the transformation of American education. Harvard Education Press.

Week 10: a week of traces

Week 10 visualisation
Legend

This week I tracked the digital ‘traces’ I’m leaving while studying for this course. As surfaced in Williamson (2017), through the rise of big data, governments are increasingly monitoring the digital traces of their citizens resulting in new forms of ‘data-driven governance’ and ‘evidence-based policymaking’. Ozga (2016), however, describes the potential issues arising from using data instead of expert knowledge for governance. In her research on the role of digital data for school inspections, Ozga highlights the tensions between seemingly ‘objective’ and ‘transparent’ data processes, and knowledge creation through expert analysis. While my visualisation may not give away much in terms of my performance, collecting data at a large scale has become very valuable not only for institutions and edtech companies but also for governing purposes.

Evidence of how valuable educational data has become may be found in the increasing number of actors now involved in policy-making, for example ‘private sector and civil society organizations, including businesses, consultants, entrepreneurs, think tanks, policy innovation labs, charities and independent experts’ (Williamson, 2017). Data have enabled these actors to exert power over what information is being collected and how it is used, while projecting particular values and ways of thinking (Anagnostopoulos et al., 2013). With some of these actors increasingly including global and commercial stakeholders, I wonder what impact they are having on education in a local context. Is there a danger that we lose local, specialised knowledge in favour of global, standardised processes?

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds.) (2013). The Infrastructure of Accountability: Data use and the transformation of American education. Harvard Education Press.

Ozga J. (2016). Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal. 15(1):69-81. doi:10.1177/1474904115616629

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. Sage.

Week 9: a week of Twitter

Even though I had a really busy week at work and wasn’t able to participate in the Tweetorial, I monitored the tweets regularly and tried to visualise them. I wanted to illustrate the richness of the discussion so I chose to pull out the main terms from the various tweets.

Legend: Question 1 / Question 2 / Question 3

Looking at the visualisation, key themes and issues become clear even for those who aren’t familiar with any of the literature on ‘governing’ with data.

As so often when data are involved, the terms ‘objective’ and ‘impartial’ appear. In the same breath, however, ‘bias’ and ‘subjectivity’ are mentioned, reminding us that data aren’t neutral. When it comes to data and governing, ‘accountability’ is a term that frequently arose in both tweets and literature. Big data and the associated notions of countability, numbering and statistical knowledge, give rise to new forms of ‘data-driven governance’ with an emphasis on ‘evidence-based policymaking’ (Williamson, 2017). This shift raises questions of whether bias is considered in policy making and which actors are involved.

An example of how powerful global policy actors have become is the OECD. Outcomes of their Programme for International Student Assessment (PISA) can result in countries changing their education policies in order to perform better in the assessment (Liss, 2013). Germany was one of the countries that performed badly in 2001 and the resulting ‘PISA shock’ led to steps being taken to improve test results. While this was achieved, the inequality gap in Germany has widened (Davoli and Entorf, 2018), reminding us of the potential issues of standardisation in education.

References

Davoli, M. & Entorf, H. (2018) ‘The PISA Shock, Socioeconomic Inequality, and School Reforms in Germany’, IZA Policy Papers 140, Institute of Labor Economics (IZA).

Liss, J. (2013) ‘Creative destruction and globalization: The rise of massive standardized education platforms’, Globalizations, Vol. 10, No. 4, pp. 557-570.

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. Sage.

Block 2 reflections

For the teaching block I chose to visualise data illustrating my use of different platforms, my habits when reading, and my wellbeing. The first visualisation was closely linked to The Platform Society’s chapter on education by van Dijck et al. (2018, p. 119). The authors suggest that platformisation has implications for education as a common good as it introduces tensions ‘between two […] ideological sets of values: Bildung vis-à-vis skills, education versus learnification, teachers’ autonomy versus automated data analytics, and public institutions versus corporate platforms’. Like so many other aspects of our society, education relies on a wide range of technologies, developed and marketed by powerful global organisations such as Google, Microsoft and Amazon. This is leading to fears about the ‘adoption of commercial digital learning solutions whose design might not always be driven by best pedagogical practices but their business model that leverages user data for profit-making’ (Teräs et al. 2020, p.863). Will this development potentially reduce the teacher’s role to that of a facilitator? After all, it is pedagogical knowledge that makes teachers invaluable.

Tools such as learning analytics are often regarded as objective and neutral, yet the creation and application of technology solutions are indeed based on individuals’ behaviours, knowledge, norms and values (Lupton & Williamson 2017). Increasing use of learning analytics in teaching can be problematic as ‘the literature has pointed out how seldom learning analytics technology align with pedagogical conceptions and theories, stemming mainly from developers’ priorities rather than educational processes’ (Raffaghelli & Stewart 2020, p.439). Week after week, I am conscious of how my environment, experiences and opinions are impacting on my data tracking. Even though large-scale data collection is likely to be more representative, data will never be unbiased. This has to be taken into consideration when using data instead of teachers for assessment or monitoring of student learning.

While the data visualisations I have produced for this course are hand-drawn, the use of dashboards is becoming increasingly popular, whether for student- or teacher-facing purposes. As surfaced in Brown (2020), teachers can find it difficult to make sense of the data and often struggle to make the connection between the dashboard and their pedagogical philosophy. Perhaps this is a result of not being involved in the creation of these dashboards or not having the necessary skills to interpret the visualisations. Raffaghelli & Stewart (2020), for example, criticise the lack of faculty data literacy that goes beyond technical abilities.

Higher education is a competitive market and instructors play an important part in it. Williamson et al. (2020, p.354) remind us that ‘[m]easures of student performance, sentiment, engagement, and satisfaction are also treated as proxy measures of the performance of staff, courses, schools, and institutions as a whole, leading to new claims that HE quality can be adduced from the analysis of large-scale student data’. What implications does this development have for teachers’ flexibility and creativity? Could the pursuit of high rankings lead to a loss of originality?

During the last three weeks I learned that data can be very helpful in giving teachers greater insight into their students’ behaviours and may help them to change their teaching in order to improve learners’ understanding as well as their wellbeing. As demonstrated above, however, the collection and analysis of data can be problematic for teachers and more questions should be asked.

References

Brown, M. (2020). Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

Lupton, D., & Williamson, B. (2017). The datafied child: The dataveillance of children and implications for their rights. New Media & Society, 19(5), 780–794.

Raffaghelli, J. E. & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4), 435-455, DOI: 10.1080/13562517.2019.1696301

Teräs, M., Suoranta, J., Teräs, H. & Curcher M. (2020). Post-Covid-19 Education and Education Technology ‘Solutionism’: a Seller’s Market. Postdigital Science and Education, 2,863–878.

van Dijck, J., Poell, T., & de Waal, M. (2018). Chapter 6: Education, In The Platform Society, Oxford University Press.

Williamson, B., Bayne, S. & Shay, S. (2020). The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25:4, 351-365, DOI: 10.1080/13562517.2020.1748811

Week 8: a week of wellbeing

This week was Moray House’s Health and Wellbeing week which prompted me to track my own wellbeing for my data visualisation. Given the uncertainties, isolation and move to online learning during the COVID-19 pandemic, mental health and general wellbeing are now more important than ever (Grubic et al. 2020).

Week 8 visualisation
legend

I tried to include as many factors as possible that I thought contributed to my wellbeing. Some of the data, for example the data for screen time, may not be accurate, as my daughters sometimes use my phone to watch programmes or phone their grandparents on WhatsApp. Nonetheless, I still thought that the time I spent in front of a screen (which didn’t include screen time for work) was very high, yet I couldn’t find the time to exercise.

While there are many offerings from schools and universities to support wellbeing, I was thinking about how teachers would use data in order to address the wellbeing of their students.

Teachers can play an active part in student wellbeing by considering ‘changes to syllabus, curriculum, and university culture itself’ (Baik et al. 2019, p.676). Baik et al. also point out that being approachable and presenting learning materials clearly were important contributors to students’ wellbeing. There is therefore a direct link between teachers and wellbeing, and having access to data may help to improve students’ happiness.

Being aware of privacy and surveillance issues can also be a factor in wellbeing. Students who feel that they are constantly being monitored, may feel anxious or more under pressure. Of course, data like this may include sensitive information and students might not want to share such personal details with their teachers. Teachers, on the other hand, need to understand how to handle and interpret data in order to make meaningful decisions. As surfaced in Raffaghelli & Stewart (2020), teachers should be equipped with data literacy skills that look beyond technical skills and address datafication in education.

References

Baik, C., Larcombe, W. & Brooker, A. (2019). How universities can enhance student mental wellbeing: the student perspective. Higher Education Research & Development, 38(4), 674-687, DOI: 10.1080/07294360.2019.1576596

Grubic, N., Badovinac, S. & Johri, A.M. (2020). Student mental health in the midst of the COVID-19 pandemic: A call for further research and immediate solutions. International Journal of Social Psychiatry, 66(5), 517-518. doi:10.1177/0020764020925108

Raffaghelli, J. E. & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4), 435-455, DOI: 10.1080/13562517.2019.1696301

Week 7: a week of highlights

No doubt, this week’s highlight has been the return of my girls to nursery/school. For this week’s data visualisation, however, I decided to visualise my highlighting of one of the core readings (Williamson, Bayne & Shay, 2020). Each rectangle represents a page. I always print out my reading material but it would also be possible to use one of the many highlighting tools for marking text online or in PDFs for example.

Week 7 visualisation

I normally don’t use different colours but have done so for this exercise, and I think it’s something I will keep up, in particular highlighting sections that I may want to refer to in my summaries or assignment.

I can see the potential benefits for learners of analysing how highlighting is used. The data may tell me if I have understood everything or whether I need to go away and do further research. It may also be useful in identifying which texts are particularly relevant for assignments, if highlighting has been done with this in mind. Highlighting text is highly individual, though, and representing the data in a dashboard, for example, would most likely not be valid.

In terms of teaching, however, I’m not sure how useful this data would be. Is it more of a box-ticking exercise to indicate that students have read the text, similar to the measuring of attendance described by Brown (2020)? Could teachers actually see how students have engaged with the text? I’m not sure data like this would make an impact on teachers’ pedagogic responses, due to a lack of quality. It is also questionable whether teachers would have time to look at this data given their already-stretched workload.

Aside from issues regarding quality and validity, instructors may also lack the required data literacy to interpret results. As surfaced in Raffaghelli & Stewart (2020, p.435), ‘most approaches to educators’ data literacy address management and technical abilities, with less emphasis on critical, ethical and personal approaches to datafication in education.’ In a world where data are becoming increasingly important, there should be an emphasis on debates around privacy, ethics and equality (Raffaghelli & Stewart, 2020) for teachers, institutions and students.

References

Brown, M. (2020). Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), 384-400, DOI: 10.1080/13562517.2019.1698540

Williamson, B., Bayne, S. & Shay, S. (2020). The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365, DOI: 10.1080/13562517.2020.1748811

Week 6: a week of platforms

After reading The Platform Society chapter, I decided to track which platforms I was using this week. I only tracked when I was at my desk and only recorded the platforms I engaged with most. Nonetheless, my visualisation shows how often I rely on various platforms on a daily basis.

Week 6 visualisation
Legend

Van Dijck et al. (2018) highlight the potential implications of platformisation on education. With increasingly powerful tech companies, there are fears that education will be more and more governed by big corporations, ‘propelled by algorithmic architectures and business models’.

Platformisation and increased use of technologies in education also raises issues of privacy and surveillance. Being constantly monitored can have an impact on both students and teachers. As surfaced in Brown (2020), dashboards, for example, may have an impact on instructors’ pedagogical strategies.

COVID-19 is likely to have exacerbated the issue of platformisation, as the educational technology sector is one of the few industries to profit from the pandemic. Dominated by powerful technical platforms, public education could see long-term consequences as state governance becomes less significant (Williamson et al. 2020). Adopting technologies without challenging the motives of big corporations could see global commercial platforms being incorporated into public education which, in turn, may be a risk to education as a public good (ibid.).

Technologies and practices that were introduced during the pandemic are often regarded as emergency or temporary measures, however, some researchers point out that ‘[a]s these tools become rooted in teaching practice, it will become difficult to go back’ (Teräs et al. 2020, p.870).

My visualisation only shows a fraction of what a machine could have recorded but it nonetheless gives an insight into how entwined various platforms and our daily lives are.

References

Brown, M. (2020). Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

Teräs, M., Suoranta, J., Teräs, H. & Curcher M. (2020). Post-Covid-19 Education and Education Technology ‘Solutionism’: a Seller’s Market. Postdigital Science and Education, 2,863–878.

van Dijck, J., Poell, T., & de Waal, M. (2018). Chapter 6: Education, In The Platform Society, Oxford University Press.

Williamson, B. Eynon, R. & Potter, J. (2020). Pandemic politics, pedagogies and practices: digital technologies and distance education during the coronavirus emergency. Learning, Media and Technology, 45:2, 107-114.

Block 1 reflections

For this block I tried to visualise three different aspects of my ‘learning’: reading, motivation and interactions. By collecting, visualising and analysing my own data, I was hoping to gain deeper insights into the relationships between data and learning.

One of the challenges I faced every week was to decide which variables to consider for my visualisations. Is it best to collect as much data as possible or should I trust my instinct and log what I feel is relevant for each theme? It became clear that more sophisticated technologies such as learning analytics systems face similar issues. Despite being able to collect and analyse more data, there are many aspects of learning that can’t be captured or analysed. Eynon (2015, p. 409) also warns of the danger that ‘[h]ours spent revising, numbers of words written per day, multiple choice questions answered in half an hour, can all become the most important metric, rather than the quality of the writing, the mathematical thinking or the cognitive process.’

While recording data, I realised that the data I collected is in no way comparable to big data (BD), as ‘BD promises ultrarapid, updatable profiling of individuals to enable interventions and action to make each individual, their thoughts and behaviours, actionable in a variety of ways’ (Thompson & Cook 2017, p. 743). In comparison, my efforts were slow and selective, but this raised the question of whether big data is the only way to paint a meaningful, rich picture of the learner. During my self-recording, I often felt that it was context and personal circumstances that had an impact on my actions, yet these variables are difficult to measure.

During the last three weeks I frequently asked myself whether collecting data can somehow improve how I learn. In education, after all, learning analytics promises insights into learning that would otherwise be unobtainable (Knox et al., 2019).

One of the promises is that students gain a greater sense of agency, with data used to make informed decisions during the learning process (Tsai et al., 2020). Interestingly, however, Tsai et al. (2020, p. 562) suggest that student agency may be diminished ‘through constant surveillance in online learning environments.’ I certainly felt conscious of my actions being recorded (albeit by myself) and could imagine how constant monitoring may affect how I behave. While it could lead to increased self-motivation, I could also see how my focus could shift to simply completing tasks without caring too much about how well I performed in them.

Digital data is often seen as a solution to various problems in education (Selwyn & Gašević, 2020). For data to be used to enhance ‘learning’, I suppose we need to assume that how or what we learn needs to be improved. Although I am not able to offer a definition of learning, I believe that it is very personal. So the thought of learning being tailored to each student, offering them the best possible ‘learning journey’, seems intriguing. Reflecting on this block’s literature regarding personalisation, however, there seems to be a conflict between personalised learning systems being beneficial to students and teachers, and having the potential to ‘disempower through opaque processes and prescriptive formats’ (Bulger, 2016, p. 19).

What has become clear during this block is that there are many conflicts between data and ‘learning’. I’m hoping to continue to explore these conflicts along with the relationships between data and education during the next block.

References

Bulger, M. (2016). Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Eynon, R. (2015). The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4), pp. 407-411. DOI: 10.1080/17439884.2015.1100797

Knox, J., Williamson, B. & Bayne, S. (2019). Machine behaviourism: Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), pp. 1-15.

Selwyn, N. & Gašević, D. (2020). The datafication of higher education: discussing the promises and problems. Teaching in Higher Education, 25(4), pp. 527-540. DOI: 10.1080/13562517.2019.1689388

Thompson, G. & Cook, I. (2017). The logic of data-sense: thinking through learning personalisation. Discourse: Studies in the Cultural Politics of Education, 38(5), pp. 740-754.

Tsai, Y-S., Perrotta, C. & Gašević, D. (2020). Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45(4), pp. 554-567. DOI: 10.1080/02602938.2019.1676396