9 Weeks of Data Visualisations

The course requirement of selecting, recording, visualising and reflecting on discrete data points every week for nine weeks was a steep learning curve, full of differentiated and interactive learning experiences. Thinking about what data to capture and how to represent it was a "learning with data" approach in itself. Lupi and Posavec's 'Dear Data' project was an eye-opener for hand-recorded data visualisations, but it set a high bar in terms of what data is available and how to generate interesting and creative visualisations while in a pandemic lockdown. The exercises made me appreciate data more and realise how much contemplation goes into the data collection and visualisation I am familiar with. In the first half of this blog, I reflect on the data capturing and visualisation learning experience; in the second half, I focus on how data visualisation helped me comprehend the course objectives.


Data Visualisation Exercise, Findings and Reflections

Each week I adopted a process that focused first on building a plan for the data set to be collected and its likely linkage to the theme of that week/block. The presentation took a few iterations, but drawing it and reflecting on it became the most creative and interesting part of all. Alongside my process of plan, define, collect, represent and reflect, here are some of the findings from the weekly visualisations.

  • Scope definition: I started each week with a question in mind for the data collection and, at times, the data took me in other directions. It is important to keep the objectives in mind, but equally important to look at the data with fresh eyes and adjust the scope as needed.

“First, the purpose of learning analytics is not tracking. Second, learning analytics does not stop at data collection and analysis but rather aims at ‘understanding and optimizing learning and learning environments’. Instead, there is a clear focus on improving learning.” (Selwyn & Gašević 2020)

An example was Week 4, “My Teaching Roles”. I started the week with general data about what I do on a daily basis, then shifted towards the “teaching” category of my role, and I was able to reflect on the data visualisations not only from a role perspective but also on what it means to be monitored as a teacher.

“Teachers, too, are increasingly known, evaluated and judged through data, and come to know themselves as datafied teacher subjects.” (Williamson et al. 2020)

  • Iterative process: in many cases I added or changed data attributes during the collection process, either because the data lacked the depth to allow for a better visualisation or to improve the messaging on educational themes. Going back to the drawing board made it interesting, but it was only feasible because the visualisations were hand-drawn. From a data-systems point of view, the implications of an iterative process would not be that easy or flexible.

“Data and metrics do not just reflect what they are designed to measure, but actively loop back into action that can change the very thing that was measured in the first place.” (Williamson et al. 2020)

  • Data reliability: being the sole producer/owner of the data made me believe that the transparency and openness conditions for producing authentic learning data were addressed (Tsai et al. 2020). However, I noticed that it was extremely hard to be fully inclusive of all data while capturing and tracking it accurately and without bias. How reliable is the data presented each week? That is a tough question to answer. I reflected on this in more detail in the learning with data blog.

“There is likely to be a kind of trade-off between the reliability of the data that can be collected and the richness of what can be measured.” (Eynon 2015)

  • Learning from others: the most fascinating part was looking into others’ data visualisations and reflections. In many cases we were collecting the same data points, e.g. drinking coffee, distractions, spaces, study material, etc.; however, the vast differences in approach, depth and artwork were insightful and demonstrated how similar data can be visualised from many different perspectives. It is a real testament that data is not just data but holds personal preferences/biases, environments, locations and many other external factors affecting a data point like the number of coffee cups a day.

“Data do not exist independently of the ideas, instruments, practices, contexts and knowledges used to generate, process and analyse them.” (Kitchin 2014)


Learning, Teaching and Governing with Data 

Although every week/block had a specific theme and readings, in many instances I found that one can use the same data sets to interpret and tackle multiple themes. This became more apparent during the Teaching and Governing with Data blocks. At the end of the nine weeks, I can easily say that the three themes are interlinked and interdependent, and that focusing on one without understanding the implications for the other two would affect how we approach data in the educational sector.

Looking at the Week 7 data visualisation, A Week of Communication, it could be replicated for all three themes. From a learning with data perspective, the data can be used to understand how a student handles their learning communications, in order to determine effective methods to prioritise and manage learning objectives. From a teaching with data perspective, the same data can be used to understand which means of communication are effective and how students respond to each method, and to decide on the right communication method for each student. Finally, from a governing with data angle, the data can be used to govern learning and teaching communication platforms and to set policies on learning environments and communication methods.

Each block presented a set of questions related to how data is defined, produced and analysed from educational perspectives, in an attempt to understand how current data-driven technologies and systems/platforms are affecting overall educational governance, including teaching and learning. The analysis and interpretation of data can be subject to different objectives and motivations, not necessarily pedagogical ones, especially when considering predictive and AI-based modelling of educational data.

“[As] machines are tasked with learning, attention needs to be given to the ways learning itself is theorised, modelled, encoded, and exchanged between students and progressively more ‘intelligent’, ‘affective’, and interventionist educational technologies.” (Knox et al. 2020)

There are benefits to be gained from the “datafication of Higher Education” when analysing educational data and gaining insightful knowledge/information. However, some questions persist: What instruments are being used? What are the design principles? What educational expertise and knowledge were used to design and build these technologies? What are the underpinning infrastructures? And who are the actors?

These questions are too broad to analyse fully in this blog, so I will conclude with the following from Williamson et al. (2020):

“Datafication brings the risk of pedagogic reductionism as only that learning that can be datafied is considered valuable […] There is a clear risk here that pedagogy may be reshaped to ensure it ‘fits’ on the digital platforms that are required to generate the data demanded to assess students’ ongoing learning.”

Here is a video of all my data visualisations.

References

  • Williamson, B., Bayne, S. & Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp.351-365. DOI: 10.1080/13562517.2020.1748811
  • Knox, J., Williamson, B. & Bayne, S. 2020. Machine behaviourism: future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies. Learning, Media and Technology, 45(1), pp.31-45. DOI: 10.1080/17439884.2019.1623251
  • Kitchin, R. 2014. The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences. Sage.
  • Lupi, G. & Posavec, S. 2018. Dear Data. Flow Press Media.
  • Selwyn, N. & Gašević, D. 2020. The datafication of higher education: discussing the promises and problems. Teaching in Higher Education, 25(4), pp.527-540. DOI: 10.1080/13562517.2019.1689388
  • Eynon, R. 2015. The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4), pp.407-411. DOI: 10.1080/17439884.2015.1100797
  • Tsai, Y-S., Perrotta, C. & Gašević, D. 2020. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45(4), pp.554-567. DOI: 10.1080/02602938.2019.1676396

Governing with Data – Blog Post

During the “governing with data” week, I tried to reflect on some of the governing-with-data practices I experience at work, especially in my data visualisations of the first two weeks, which worked with communication- and rules-related data. In all sectors, including education, data are heavily used not only to measure and monitor performance but also to build a “data-driven” policy development engine, endorsed by “advanced technology” and/or “scientific” approaches backed up by data. According to Williamson’s 2017 book, Big Data in Education:

“Studies of educational policy, for example, have already begun to engage with the software packages and data infrastructures that enable policy information to be collected, and that also allow policies to penetrate into institutional practices.”

From the readings and discussions of this block, there are issues with adopting data-driven policies that affect decisions about students, teachers and educational institutions, with profound implications for the educational sector as a whole (Anagnostopoulos et al. 2013). I would like to use this blog to highlight some of them:

• Non-contextual policy formation: many policies are developed from singular data points without taking into consideration contextual data, external factors or special circumstances. What Ozga (2016) described as “‘thin descriptions’ stripped of contextual complexity, make statistical data a key governing device” is what I reflected on in my second week’s visualisations. Traffic-light KPI performance reporting has become key in many institutions and is heavily used to drive business decisions and policies that are not necessarily applicable or reflective of realities.

• Non-educational actors: predictive and AI-driven decision-making approaches to educational governance demonstrate great dependencies on code, algorithms and digital platforms managed by commercial actors, which are influencing learning and teaching policies (Williamson 2017): “Digital software allows institutions, practices and people in education to be constantly observed and recorded as data; those data can then be utilized by learning machines to generate insights, produce ‘actionable’ intelligence, or even prescribe recommendations for active intervention”

• Educational policy colonisation: adopting “global north” data-driven policies in other countries/regions, with the promise of improved educational systems, better student results and cost effectiveness, does not consider local gaps and specific educational needs and requirements (Prinsloo 2020). Many countries have capacity and know-how challenges in building their own educational data and platforms, and depend on global players who assume ownership with the power of data.

• Educational infrastructure accountability: according to Anagnostopoulos et al. (2013), test-based data are creating a large-scale information system dependent on data being gathered, processed and released not only to students, teachers and/or educational institutions: “Data from these systems are made available to ever-widening audiences and used to inform decisions across and beyond the educational system”. This is an issue because these data are being used to drive policies and make decisions without the educational sector being the driver or an owner/co-owner of these infrastructures.

There are other issues highlighted with respect to fast policy, political analytics, open data and rising privacy and trust concerns that affect the educational sector and how it is governed by data.

The question is how to build data-driven educational policy frameworks and platforms that are grounded in educational-sector stakeholders, industry knowledge and ownership; that are inclusive of and contextual to learning and teaching needs; and that are flexible enough to allow continuous improvement and innovation, underpinned by trusted and ethical infrastructures.

References
• Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In, Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.
• Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1) pp.69-81
• Prinsloo, P. 2020. Data frontiers and frontiers of power in (higher) education: a view of/from the Global South. Teaching in Higher Education, 25(4) pp.366-383
• Williamson, B. 2017. Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

A Week of Jet Lag

This week I was travelling to the USA on a personal vacation, and it is hard to imagine the amount of data available for collection; at the same time, I was trying to get some quality time visiting my son at his university. For me, this trip meant a 12-hour timezone difference, and jet lag kept creeping in, especially at night, when my body was used to being fully awake.

I started logging my waking hours and estimated my sleeping times to within +/- 10 minutes. The data gathered was simple: time of falling asleep, time of waking, and the depth of sleep. The thicker the line, the deeper the sleep or the more alert the waking. The data was collected Saturday to Friday, from 7pm to 7am.
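As a rough sketch, the log above could be kept as simple records of sleep segments. The field names and the numeric 1-5 depth scale here are my own assumptions; the hand-drawn chart encoded depth as line thickness instead:

```python
from dataclasses import dataclass

@dataclass
class SleepSegment:
    day: str        # e.g. "Saturday"
    start: float    # hour fallen asleep, 24h clock (accurate to ~10 min)
    end: float      # hour woken up
    depth: int      # assumed scale: 1 = light sleep ... 5 = deep sleep

def hours(seg: SleepSegment) -> float:
    """Length of a segment, handling the overnight wrap past midnight."""
    length = seg.end - seg.start
    return length if length >= 0 else length + 24

def deep_sleep_hours(log: list[SleepSegment], threshold: int = 4) -> float:
    """Total hours at or above the given depth threshold."""
    return sum(hours(s) for s in log if s.depth >= threshold)

# An invented jet-lagged night: short, fragmented stretches of deep sleep.
night = [
    SleepSegment("Saturday", 22.0, 23.5, 2),  # light doze
    SleepSegment("Saturday", 1.0, 3.0, 5),    # deep sleep
    SleepSegment("Saturday", 4.5, 5.5, 4),    # deep-ish
]
print(deep_sleep_hours(night))  # 3.0
```

This is only a sketch of the recording protocol, not the tool I used; the actual chart was drawn by hand.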

The Legend

Here is my sleeping / waking-up data visualisation chart during a week of jet lag.

After looking at the chart, I tried to make sense of it or find a pattern, but it was really hard. I was not getting any daytime sleep, as we were out and about. By 7pm, the sleep-struggle cycle started. I tried to push my sleeping time into different time blocks, like on the second day, but I was still not able to sleep through the night. My average deep sleep is only about 2-3 hours, and not always in a continuous stretch.

Turning to the science: a “typical sleep cycle” includes five stages of sleep, with stages 1-2 being light sleep, 3-4 deep sleep, and the fifth stage REM (rapid eye movement) sleep, according to sleepcycle.com.

https://www.sleepcycle.com/how-sleep-cycle-works/

Comparing my chart to the typical sleep cycle, I might have had the same number of hours of deep sleep, but I was not able to maintain the cyclic pattern, especially when I got wide awake between sleep periods, which made it harder to get back to sleep and continue the cycle.

Most of my hours were either light sleep or not fully awake, which is a sign of jet lag, and also of getting distracted by phone messages arriving on my home country’s time zone.

Of course, there are many environmental factors affecting sleep, like the bed, pillows, noise level and temperature, all of which I experienced during my hotel stay. Comparing with my non-jet-lag sleeping patterns, I am usually a light sleeper, waking up more frequently at night; age also affects sleeping patterns, especially for a woman in her 40s. According to Psychology Today: “about 31 percent of women say they have trouble staying asleep at least four nights a week and wake in the morning feeling tired, rather than rested”.


How can I take this to a governing with data angle? I have just finished the Anagnostopoulos et al. (2013) reading and would like to reflect on the sentence: “Rather than empowering the people, the data may constrain what people know and how they think about their schools”, in reference to how the information that test-based infrastructures make available can only provide partial views of schools, teachers and students. The challenge is that by setting ideal performance and testing standards that could be used for building educational policies and regulations, we are looking at numerical ratings that say “little about the causes or failures”. How can we measure qualitative teaching and learning data like “creativity, and commitment, social capabilities”? We keep going back to a “one size fits all” (one standard sleeping cycle) governing-by-data approach. The question is how we can design and construct information infrastructures that measure contextual learning and teaching data, to provide true value and inform inclusive policy development.


References

  1. Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In, Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data use and the transformation of American education.
  2. The sleeping cycle website and app – https://www.sleepcycle.com/how-sleep-cycle-works/
  3. Psychology Today website – https://www.psychologytoday.com/us/blog/sleep-newzzz/201910/what-your-sleep-is-in-your-40s-and-50s

A Week of Rules

This week I tried to set a few simple rules for the work, study and personal aspects of my day; the idea was to measure how well I executed those rules. I did not gather any additional data on why a certain rule was not followed. The following legend identifies the six rules to be monitored:

  • Stop working after 6pm – Work Rule
  • Camera must always be on for work calls – Work Rule
  • Stop using the phone at least one hour before sleeping – Personal Rule
  • No eating carbohydrates for a week – Personal Rule
  • Coffee and tea limited to four cups or fewer, with four considered borderline – Personal Rule
  • Study for at least one hour a day – University Rule

I used a traffic-light data visualisation: Green for 100% following a rule, Red for 100% missing it, and Amber for anything in between. The data was collected from Sunday to Thursday, my complete working week (no weekends).
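The traffic-light rule above can be sketched as a simple mapping from a compliance ratio to a colour. The rule names and ratios below are illustrative, not my exact collected data:

```python
def traffic_light(compliance: float) -> str:
    """Map a 0.0-1.0 compliance ratio to a traffic-light colour:
    Green = fully followed, Red = fully missed, Amber = in between."""
    if compliance >= 1.0:
        return "Green"
    if compliance <= 0.0:
        return "Red"
    return "Amber"

# One working week (Sunday-Thursday) per rule: fraction of days followed.
week = {"coffee": 0.8, "camera_on": 1.0, "stop_6pm": 0.0}
print({rule: traffic_light(v) for rule, v in week.items()})
# {'coffee': 'Amber', 'camera_on': 'Green', 'stop_6pm': 'Red'}
```

The sketch also shows the weakness discussed below: a single ratio collapses the whole week into one colour, losing the story behind each day.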

Other than the rule about keeping the camera on for work (on Thursday it was a group call where cameras were not mandatory, so I had mine off), I did not follow any of the other rules! For work, I had a heavy start, considering I was going on leave the following week, so I was motivated to work late hours to finish more pending work. For coffee, I was mostly Amber: exactly four cups a day. For university studies, I fell behind, especially as I was working more. Wednesday was a good day, since I caught up with work and studies, but I cheated on food.

We use traffic-light indicator visualisations a lot at my work to provide a dashboard performance view of sales, revenues and work-related KPIs. It is an oldish system that is good at giving a quick single-measure update but does not really tell the story. One can sometimes notice trends in traffic-light systems; for example, at the beginning of the week I was more stressed with work, so I drank more coffee than towards the end, when I was relatively more relaxed and the week was nearly over. We can also link late working hours to the lack of studying.


From a governing with data point of view, I wanted to approach this from the perspective of learning or educational-institution administrations aiming to govern through similar dashboards, passing judgement on teachers and students using a singular view of data. Many rules in governance models regarding compliance or adherence to learning objectives could be formulated without looking deeper or understanding the story behind the results.

There are many factors that can significantly affect the results and the way they are interpreted. For my data collection, I deliberately did not collect any additional information on why each target was or was not met, the time of day, mood or emotional status, level of stress, external factors, environment (being home all the time), etc. Contextual information and knowledge are important in education to drive meaningful and relevant learning policies, rather than “quick” and maybe “cost effective” mass policy formation and governance models dependent on massive single points of data. I found the following paragraph from Ozga (2016) relevant to this topic:

Statistical data reduce the complexities of new national and local education practices through their selection of key indicators on the basis of which schools may be compared, and these ‘thin descriptions’, stripped of contextual complexity, make statistical data a key governing device (Ozga et al., 2011). Furthermore, because there is such a strong emphasis from policy-makers on ensuring that these data enable comparisons to be made (whether of pupil performance, teachers, pupil types), the knowledge claims that are most powerful are those that are de-contextualised, trans-historical and trans-situational, indeed:

“…the decline or loss of the context-specificity of a knowledge claim is widely seen as adding to the validity, if not the truthfulness, of the claim. (Grundmann and Stehr, 2012: 3)”

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1), pp.69-81.

A Week of Communication

Visualisation of personal communications by platform vs source

This week I captured direct communication data points. “Direct” means I am communicating with someone, or a communication is directed at me only. Thus, mass messages and emails are removed from the counts of work emails and WhatsApp/message/email groups. I focused on four elements of the captured data.

The legend

  • Communication source: who I am communicating with. I identified four main groups: Family (husband, kids and siblings), Work, Friends, and Others (this week, some university communications and one face-to-face communication with a retailer).
  • Communication platform: face to face, video call, voice call and messages.
  • Communication impact: whether a particular communication had a positive or negative impact (or none) on how I felt, from before the communication to after.
  • Timing of the communication: morning or afternoon (12pm being the cut-off).

I developed the visualisation around two focus areas: one with the sources of communication as the main pillars, and another with the platforms as the pillars.
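The two focus areas above are really the same records tallied twice: once grouped by source and once by platform. A minimal sketch, using invented example records rather than my actual week's data:

```python
from collections import Counter

# Invented example records with the four captured elements described above.
records = [
    {"source": "Family",  "platform": "face to face", "impact": "positive"},
    {"source": "Family",  "platform": "voice call",   "impact": "no impact"},
    {"source": "Work",    "platform": "video call",   "impact": "no impact"},
    {"source": "Work",    "platform": "messages",     "impact": "negative"},
    {"source": "Friends", "platform": "messages",     "impact": "negative"},
    {"source": "Friends", "platform": "face to face", "impact": "positive"},
]

def pivot(records, key):
    """Tally communication impacts grouped by the chosen attribute
    ("source" or "platform"), mirroring the two chart variants."""
    table = {}
    for r in records:
        table.setdefault(r[key], Counter())[r["impact"]] += 1
    return table

by_source = pivot(records, "source")      # pillars = Family, Work, Friends
by_platform = pivot(records, "platform")  # pillars = face to face, etc.
print(by_platform["messages"]["negative"])  # 2
```

The same six records feed both pivots, which is why the two charts can support the same reflections from different angles.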

Reflection points:

  • Family is the biggest source of my communications, with face to face and voice calls making up the most interactions. The impact of communications is almost equally split, with many “no impact” communications: catching-up and status-update conversations.
  • Work is very focused on video calls, which is expected while working from home. The impacts also tend to be equally split; however, there are more positive communications with video calls compared to messages.
  • For friends, there are a lot of positive communications, especially the face to face ones.

Reflection points:

  • Face to face communications have a more positive effect, as a percentage, than all other platforms.
  • Video calls had no impact when it came to work and a more positive impact when it came to family.
  • Voice calls are the least used of all, mainly with family and friends.
  • Messages remain the highest contributor, especially with friends, followed by work and family.
  • I need to call friends more often instead of depending on messages, as messages tend to create a negative impact while calls almost always give a positive one.
  • Messages tend to have more negative or no impact across all sources, especially friends.

Reflecting on the Governing with Data blog and one of the “Tweetorial” questions, “What policy problems might big data be used to address in education, or what new problems might governing with data generate?”, I wanted to extract some communication policies from my visualisations and assess their validity where possible.

A communication policy that could be deduced from this data set is: in order to improve motivation at work, messages should be eliminated as a means of communication, and face to face and/or voice call communications should be introduced instead.

This could be a sample policy deduced by looking at the communication sources and platforms through a single attribute: creating a positive impact on the communicator. However, the policy does not take into account that there is currently no face to face data to support this argument in the work environment. It also completely ignores the many “no impact” message communications, which could be more than sufficient in the workplace. The policy is built on a specific set of collected data and ignores other sources and validations of what could lie behind the impact of a communication.

This is a basic sample of policy development based on a set of data where little contextual or additional information is considered. We can also see that technology platforms (in this case phone, video conferencing and messaging apps) affect policy development and measurement. According to Williamson’s concept of “data instrumentation”:

“Understood as digital policy instruments, data processing technologies can therefore be seen as digital policy instruments that reproduce and reinforce existing […]”

Williamson, B. Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

From an educational perspective, what would be the best communication platform to improve learning and teaching experiences, especially given the increased dependency on digital education and hybrid learning environments? And what data do we need to collect and analyse to drive policies on communication platforms that govern the new learning environments?