9 Weeks of Data Visualisations

The course requirement of selecting, recording, visualising and reflecting on discrete data points every week for 9 weeks was a huge learning curve, full of differentiated and interactive learning experiences. Thinking about what data to capture and how to represent them was a “learning with data” approach in itself. Lupi and Posavec’s ‘Dear Data’ project was an eye-opener on hand-recorded data visualisations, but it also set a high bar in terms of what data are available and how to generate interesting, creative visualisations while in a pandemic lockdown. The exercises made me appreciate data more and recognise the considerations behind data collection and visualisation, compared with what I am familiar with. In the first half of this blog, I reflect on the data capturing and visualisation learning experience; in the second half, I focus on how data visualisation helped me comprehend the course objectives.


Data Visualisation Exercises, Findings and Reflections

For each week, I adopted a process focused first on building a plan of the data set to be collected and its likely linkage to the theme of that week/block. The presentation took a few iterations, but drawing it and reflecting on it became the most creative and interesting part of all. Alongside my process of plan, define, collect, represent and reflect, here are some of the findings from the weekly visualisations.
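To make this process concrete, here is a minimal sketch in Python of how a week’s plan and hand-recorded observations could be modelled. It is purely illustrative: names such as WeeklyPlan and DataPoint are my own hypothetical labels, and the actual recording was done by hand on paper.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataPoint:
    """One hand-recorded observation for the week's question."""
    day: date
    attribute: str   # e.g. "coffee cups", "distraction", "space"
    value: float
    note: str = ""   # the context a bare number would otherwise lose

@dataclass
class WeeklyPlan:
    theme: str       # the week/block theme the data should link to
    question: str    # the question the week starts with
    attributes: list[str] = field(default_factory=list)
    records: list[DataPoint] = field(default_factory=list)

    def collect(self, point: DataPoint) -> None:
        # Attributes may grow mid-week: the scope is allowed to shift
        # as the data takes the week in a new direction.
        if point.attribute not in self.attributes:
            self.attributes.append(point.attribute)
        self.records.append(point)

# Example: a week that starts with one question and widens its scope.
week = WeeklyPlan(theme="Learning with data", question="What distracts me?")
week.collect(DataPoint(date(2021, 2, 15), "distraction", 4, "phone, during reading"))
```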

  • Scope definition: I started each week with a question in mind for the data collection and, at times, the data took me in other directions. It is important to keep the objectives in mind, but it is equally important to look at the data with a fresh eye and adjust the scope as needed.

“First, the purpose of learning analytics is not tracking. Second, learning analytics does not stop at data collection and analysis but rather aims at ‘understanding and optimizing learning and learning environments’. Instead, there is a clear focus on improving learning.” (Selwyn & Gašević 2020)

An example was Week 4, “My Teaching Roles”: I started the week with general data about what I do on a daily basis, then shifted towards the “teaching” category of my role, and I was able to reflect on the data visualisations not only from a role perspective but also in terms of what it means to be monitored as a teacher.

“Teachers, too, are increasingly known, evaluated and judged through data, and come to know themselves as datafied teacher subjects.” (Williamson et al. 2020)

  • Iterative process: in many cases I added or changed data attributes during the collection process, either because the data lacked the depth needed for a better visualisation or to improve the messaging on educational themes. Going back to the drawing board was interesting, but it was only feasible because the work was hand-drawn; from a data-systems point of view, such an iterative process would not be nearly as easy or flexible.

“Data and metrics do not just reflect what they are designed to measure, but actively loop back into action that can change the very thing that was measured in the first place.” (Williamson et al. 2020)

  • Data reliability: being the sole producer/owner of the data made me believe that the transparency and openness conditions for producing authentic learning data were addressed (Tsai et al. 2020). However, I noticed that it was extremely hard to be fully inclusive of all data while capturing and tracking them accurately and without bias. “How reliable are the data being presented each week?” is a tough question to answer. I reflect on this in more detail in the Learning with Data blog.

“There is likely to be a kind of trade-off between the reliability of the data that can be collected and the richness of what can be measured.” (Eynon 2015)

  • Learning from others: the most fascinating part was looking into others’ data visualisations and reflections. In many cases we collected the same data points, e.g., drinking coffee, distractions, spaces, study material, etc.; however, the vast differences in approach, depth and artwork were insightful and demonstrated how similar data can be visualised from many different perspectives. It is real testimony that data are not just data: they carry personal preferences/biases, environments, locations and many other external factors, all of which can shape a data point as simple as the number of coffee cups a day.

“Data do not exist independently of the ideas, instruments, practices, contexts and knowledges used to generate, process and analyse them.” (Kitchin 2014)


Learning, Teaching and Governing with Data 

Although every week/block had a specific theme and readings, in many instances I found that the same data sets could be used to interpret and tackle multiple themes. This became more apparent during the Teaching and Governing with Data blocks. At the end of the 9 weeks, I can easily say that the three themes are interlinked and interdependent, and that focusing on one without understanding the implications for the other two would affect how we approach data in the educational sector.

Looking at week 7’s data visualisation, “A Week of Communication”, the exercise could be replicated for all three themes. From a learning-with-data perspective, the data can be used to explore how a student understands his/her learning communications, in order to determine effective methods to prioritise and manage learning objectives. From a teaching-with-data perspective, the same data can be used to understand which means of communication are effective and how students respond to each method, and to decide on the right communication method for each student. Finally, from a governing-with-data angle, the data can be used to govern learning and teaching communication platforms and to set policies on learning environments and communication methods.

Each block presented a set of questions about how data are defined, produced and analysed from educational perspectives, in an attempt to understand how current data-driven technologies and systems/platforms are affecting overall educational governance, including teaching and learning. The analysis and interpretation of data can serve different objectives and motivations, not necessarily pedagogical ones, especially when considering predictive and AI-based modelling of educational data.

“[As] machines are tasked with learning, attention needs to be given to the ways learning itself is theorised, modelled, encoded, and exchanged between students and progressively more ‘intelligent’, ‘affective’, and interventionist educational technologies.” (Knox et al. 2020)

There are benefits to be gained from the “datafication of Higher Education” when analysing educational data and gaining insightful knowledge/information. However, some questions persist: What instruments are being used? What are the design principles? What educational expertise and knowledge are used to design/build these technologies? What are the underpinning infrastructures? And who are the actors?

These questions are too broad to analyse fully in this blog, so I will conclude with the following from Williamson et al. (2020):

“Datafication brings the risk of pedagogic reductionism as only that learning that can be datafied is considered valuable […] There is a clear risk here that pedagogy may be reshaped to ensure it ‘fits’ on the digital platforms that are required to generate the data demanded to assess students’ ongoing learning.”

Here is a video of all my data visualisations.

References

  • Eynon, R. (2015) The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4), 407-411. DOI: 10.1080/17439884.2015.1100797
  • Kitchin, R. (2014) The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences. Sage.
  • Knox, J., Williamson, B. & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), 31-45. DOI: 10.1080/17439884.2019.1623251
  • Lupi, G. & Posavec, S. (2018) Dear Data. Flow Press Media.
  • Selwyn, N. & Gašević, D. (2020) The datafication of higher education: discussing the promises and problems. Teaching in Higher Education, 25(4), 527-540. DOI: 10.1080/13562517.2019.1689388
  • Tsai, Y-S., Perrotta, C. & Gašević, D. (2020) Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45(4), 554-567. DOI: 10.1080/02602938.2019.1676396
  • Williamson, B., Bayne, S. & Shay, S. (2020) The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365. DOI: 10.1080/13562517.2020.1748811

Governing with Data – Blog Post

During the “governing with data” week, I tried to reflect on some of the governing with data I experience at work, especially in my first two weeks’ data visualisations, which dealt with communication- and rules-related data. In all sectors, including education, data are heavily used not only to measure and monitor performance but also to build a “data-driven” policy development engine endorsed by “advanced technology” and/or “scientific” approaches backed up by data. According to Williamson’s (2017) book Big Data in Education:

“Studies of educational policy, for example, have already begun to engage with the software packages and data infrastructures that enable policy information to be collected, and that also allow policies to penetrate into institutional practices.”

From the readings and discussions of this block, there are issues inherent in adopting data-driven policies that affect decisions regarding students, teachers and educational institutions, with profound implications for the educational sector as a whole (Anagnostopoulos et al. 2013). I would like to use this blog to emphasise some of them:

• Non-contextual policy formation: many policies are developed from singular data points without taking into consideration contextual data, external factors or special circumstances. What Ozga (2016) described as “‘thin descriptions’ stripped of contextual complexity, make statistical data a key governing device” is what I reflected on in my second week’s visualisations. Traffic-light KPI performance reporting has become key in many institutions and is heavily used to drive business decisions and policies that are not necessarily applicable to, or reflective of, realities (see the sketch after this list).

• Non-educational actors: predictive and AI-driven approaches to educational governance depend heavily on code, algorithms and digital platforms managed by commercial actors that influence learning and teaching policies (Williamson 2017): “Digital software allows institutions, practices and people in education to be constantly observed and recorded as data; those data can then be utilized by learning machines to generate insights, produce ‘actionable’ intelligence, or even prescribe recommendations for active intervention.”

• Educational policy colonisation: adopting ‘Global North’ data-driven policies in other countries/regions, with the promise of improved educational systems, better student results and cost effectiveness, does not consider local gaps and specific educational needs and requirements (Prinsloo 2020). Many countries face capacity and know-how challenges in building their own educational data and platforms, and depend on global players who assume ownership through the power of data.

• Educational infrastructure accountability: according to Anagnostopoulos et al. (2013), test-based data are creating a large-scale information system dependent on data being gathered, processed and released not only to students, teachers and/or educational institutions: “Data from these systems are made available to ever-widening audiences and used to inform decisions across and beyond the educational system.” This is an issue because these data are used to drive policies and make decisions without the educational sector being the driver or an owner/co-owner of these infrastructures.
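To illustrate the ‘thin descriptions’ concern from the first point above, here is a minimal sketch of traffic-light KPI reporting. The metric and thresholds are hypothetical stand-ins of my own, not from any real reporting system; the point is what the classifier never sees.

```python
def traffic_light(value: float, amber: float, green: float) -> str:
    """Map a single KPI value to red/amber/green against fixed thresholds.

    Note what is *not* a parameter here: contextual data, external
    factors, special circumstances. The 'thin description' is built in.
    """
    if value >= green:
        return "green"
    if value >= amber:
        return "amber"
    return "red"

# A policy decision driven by one stripped-down data point:
attendance_rate = 0.72  # hypothetical weekly figure
print(traffic_light(attendance_rate, amber=0.75, green=0.90))
# -> "red", regardless of why attendance dropped that week
```

Everything that might explain the number, from a local outage to a public holiday, sits outside the function signature, which is precisely how the statistical data become a governing device.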

Other issues were highlighted with respect to fast policy, political analytics, open data, and rising privacy and trust concerns, all of which affect the educational sector and how it is governed by data.

The question is: how do we build data-driven educational policy frameworks and platforms that are grounded in educational-sector stakeholders, industry knowledge and ownership; inclusive of and contextual to learning and teaching needs; and flexible enough to allow continuous improvement and innovation, underpinned by trusted and ethical infrastructures?

References
• Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (2013) Introduction: Mapping the Information Infrastructure of Accountability. In Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (eds.) The Infrastructure of Accountability: Data Use and the Transformation of American Education.
• Ozga, J. (2016) Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), 69-81.
• Prinsloo, P. (2020) Data frontiers and frontiers of power in (higher) education: a view of/from the Global South. Teaching in Higher Education, 25(4), 366-383.
• Williamson, B. (2017) Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

Teaching with Data – Blog Post

For this block’s data visualisation exercises, it was harder to collect data from a “teaching with data” perspective. My aim was to approach the block from three angles: the role of the teacher, student performance measurement and dashboards, and learning platforms.

For the first visualisation, my focus was to reflect on one of the questions from the block’s overview: “What are the implications of increased data tracking on the role of teachers in educational institutions?” It was difficult to answer this question from the “My Teaching Role” visualisation alone; however, what I learnt from the data collected is that measuring performance through specific data points can be misleading and does not produce an accurate assessment without contextual information. For example, measuring time spent on activities other than teaching during working or “class” hours might lead to the conclusion that the teacher is not performing well or is wasting student teaching time. As Williamson et al. (2020) outline, technologies that use “data-based” measurement dictate what is visible to others, impact how decisions are made through automation, and affect the ways people feel, act and behave.

I noticed this impact on people’s (teachers’ and students’) behaviour during the second data visualisation exercise, “A Week of Performance Tracking”, when my performance was measured using data captured against a predefined benchmark or target. In the middle of that week, I noticed that some activities were underperforming, so I acted on them and changed my behaviour to align with the expected targets. Although they were my own data and targets, visualising the data prompted me to change course. Without an in-depth understanding of where the data come from and how the targets are defined, such behavioural changes could have a positive or negative connotation from a teaching-with-data standpoint, depending on what is displayed and measured.
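As a minimal sketch of the feedback loop I experienced (the activity names and targets below are hypothetical stand-ins for my own tracking, not any real system), a simple mid-week comparison of logged totals against pro-rated targets is enough to flag “underperformance” and prompt a change of behaviour:

```python
def underperforming(logged: dict[str, float],
                    targets: dict[str, float],
                    week_fraction: float) -> list[str]:
    """Flag activities behind their pro-rated target at this point in the week."""
    return [activity for activity, target in targets.items()
            if logged.get(activity, 0.0) < target * week_fraction]

# Hypothetical mid-week totals (hours) against predefined weekly targets.
logged = {"reading": 3.0, "writing": 1.0, "peer feedback": 0.5}
targets = {"reading": 5.0, "writing": 4.0, "peer feedback": 2.0}

# Mid-week (fraction = 0.5): the flags, not the learning, drive the correction.
print(underperforming(logged, targets, week_fraction=0.5))
# -> ['writing', 'peer feedback']
```

Nothing in the sketch asks whether the targets are meaningful or where they came from; the flags alone steer the correction.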

This leads me to teaching dashboards, the question of how ideas such as “data-driven decision-making” shape teaching practices and professional responsibilities, and how data-driven teaching dashboards might improve the learning process. In Brown’s (2020) case study, teachers had little knowledge of how dashboards could assist in teaching and/or academic planning. The answer could lie in a lack of understanding of the information displayed, or of how the data are captured and organised.

To make proper decisions or use teaching dashboards effectively, teachers should be able to configure, define and manage targets and build their own practices into the learning dashboards (Brown 2020). This assumes that teachers have the ‘data literacy’ required to make sound decisions and interventions (Raffaghelli & Stewart 2020). Data literacy here is not only about technical skills and data-science capabilities, but about “broader epistemological frameworks than a technical, instrumentalist focus on performance management, efficiencies, or evidence can offer.”

Learning platforms generate large amounts of data (van Dijck et al. 2018), leading to economic and commercial decision-making that takes little account of the role of the teacher and of specific learners’ needs. My third visualisation assesses the various platforms for reading resources; the question is how adaptable these platforms are in allowing teachers to intervene, revise and personalise individual learning resources and inject new or revised material.

In conclusion, teaching dashboards and data-driven educational technologies affect teaching and the role of the teacher, who becomes an increasingly “datafied subject” (Williamson et al. 2020). I will close with the following from van Dijck et al. (2018):

“The changing role of teachers from classroom directors to dashboard controllers, mediated by numbers and analytical instruments, is a major issue; professionals may feel that the core of educational activities—assessment and personalized attention—gets outsourced to algorithms and engineers.”

References 
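• Brown, M. (2020) Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4).
• Raffaghelli, J.E. & Stewart, B. (2020) Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4).
• van Dijck, J., Poell, T. & de Waal, M. (2018) The Platform Society: Public Values in a Connective World. Oxford University Press.
• Williamson, B., Bayne, S. & Shay, S. (2020) The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), 351-365.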

Learning with Data – Blog Post

During the “Learning with Data” block, I focused my data visualisation assignments on capturing data that would be relevant to a student. I captured three elements: distractions, learning spaces and emotions, with the intention of building a holistic view of the physical, digital and emotional conditions of learning. For sure, more data would need to be captured for me to develop more realistic observations and findings and to construct a more critical analysis of learning with data. Being the designer, producer and recorder of the data made me believe that the transparency and openness conditions for producing authentic learning data (Tsai et al. 2020) were addressed. However, during the data collection phase, I noticed that it was extremely hard to be fully inclusive of all data while capturing and tracking them accurately and without bias. I reflect on these concerns below.

Inclusive Data

The question here is: how do you ensure that all the needed data are captured? Although it was a manual process, there were data elements that were not captured, or were forgotten or neglected. An automated data-capturing system/technology could resolve this issue; however, the data-collection triggers it depends on might not be as inclusive or well defined (a sketch after the quotation below illustrates this). During my personal data tracking, some data capture was automated, but there were many opportunities to change or skip entries. To build a learning opportunity from data-driven technologies, it is important to capture comprehensive data.

Data may restrict the kinds of questions we can ask and the analyses and recommendations generated (Eynon 2015).
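As a minimal sketch of this trigger problem (the event names and trigger list are hypothetical, not from any real platform), an automated logger records only what its designers anticipated; everything outside the trigger list is silently dropped and never becomes “data”:

```python
# Hypothetical triggers chosen at design time by the system's builders.
TRIGGERS = {"page_view", "quiz_submit", "video_play"}

def capture(event: str, log: list[str]) -> None:
    """Record an event only if it matches a predefined trigger."""
    if event in TRIGGERS:
        log.append(event)
    # Anything else -- an offline discussion, a sketch on paper,
    # a pause to think -- is silently dropped.

log: list[str] = []
for event in ["page_view", "offline_discussion", "quiz_submit", "daydreaming"]:
    capture(event, log)

print(log)  # -> ['page_view', 'quiz_submit']: the record looks complete, but is not
```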

Accuracy 

Accuracy matters at all levels of data capturing, recording and analysis. The captured data might not reflect the real situation and can be subject to time, location and other external factors. I noticed that the data I collected contained elements of both intentional and unintentional error.

According to Eynon (2015): “There is likely to be a kind of trade-off between the reliability of the data that can be collected and the richness of what can be measured.”

This affects the learning process and the aspired benefits of learning analytics; the risk is that it might achieve the opposite, jeopardising learning outcomes and putting the learner at a disadvantage.

Biased Data

Bias was also infused into my data selection. It is strange to think that one can be biased towards oneself, but the bias here lay in the selection of learning activities and the choices I made before and after data capture. If I had used a specific technology to capture the same data, would the result have been the same? The answer is no!

From a learning-with-data perspective, bias can be built into the algorithms and predictive analytics that are designed to influence and shape the learning process and behaviours. So

“what constitutes the ‘correct’, ‘preferable’, or ‘desirable’ behaviours for learning” (Knox et al. 2020)?

I believe these concerns could be addressed by data-driven learning technologies; however, the question is how these technologies shape the learning process and what design principles are embedded in them. As Bulger (2016) highlighted, the goal is to demand transparency, openness and accuracy in what is being collected, and to understand the built-in assumptions and specifications meant to support students’ learning opportunities.

References
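• Bulger, M. (2016) Personalized Learning: The Conversations We’re Not Having. Data & Society Research Institute.
• Eynon, R. (2015) The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4), 407-411.
• Knox, J., Williamson, B. & Bayne, S. (2020) Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), 31-45.
• Tsai, Y-S., Perrotta, C. & Gašević, D. (2020) Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45(4), 554-567.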