Block 2 Reflection

Education has increasingly become datafied – complex learning processes and teacher-student and student-student relationships are reduced to quantifiable data. Student behavior and performance are measured and analyzed, with the results marketed as insights unobtainable without the data (Knox, 2020). These insights are often presented to educators in the form of learning dashboards, where student learning is summarized and personalized learning paths (or other interventions) are recommended (Brown, 2020).

Which aspects of student learning are datafied, and how that data is collected, analyzed, and visualized, is unfortunately not decided by every instructor in every classroom but by the developers behind the algorithms of the learning dashboard (Raffaghelli, 2020; van Dijck, 2018). Consequently, instructors can be uncertain about how data was collected and what meaning can be drawn from it (Brown, 2020). This is especially concerning because the predictive capabilities of these systems could “radically affect” a student’s educational career (Williamson, 2020), and educators need a critical understanding of the reductive and instrumental nature of data (Raffaghelli, 2020).

Dashboards direct an instructor’s attention to the specific areas that developers perceive as ‘learning’ (Williamson, 2020; Raffaghelli, 2020), possibly distracting the instructor from other aspects of their students’ learning. Both Williamson (2020) and Brown (2020) note that instructors may also use these dashboards to classify and categorize students in order to deliver targeted interventions, but this could lead to preferential treatment (or dismissal) of certain classes of students. Algorithmic culture already reinforces digital inequalities that may remain invisible to the systems that depend on these algorithms.

Instructors, too, are becoming subjected to these datafied systems. Measures of student performance are often treated as proxy measures of instructor performance (Williamson, 2020). Good instructors, then, are the producers of ‘good’ student data. Data becomes the focus, and instructors urge students toward particular outcomes rather than toward the learning process itself (Bulger, 2016). Instructors come to know themselves and their practice through data (Harrison, 2020).

Subjecting instructors to these datafied systems risks their autonomy, as their pedagogy may be (forcibly) reshaped to be dashboard friendly (Williamson, 2020). Interestingly, Brown (2020) comments that the instructors he surveyed did not rely on the dashboard to plan their teaching. Rather, they could not identify productive strategies to incorporate it into their teaching, relegating the dashboard to a glorified polling system and a tool for identifying unfamiliar students (ibid). Similarly, the role of the instructor is shifting – personalized interventions and the assessment of students are outsourced to the algorithm (and its developers), and the teacher assumes the role of a dashboard monitor (van Dijck, 2018).


Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), pp. 384-400.

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Harrison, M.J., Davies, C., Bell, H., Goodley, C., Fox, S. & Downing, B. 2020. (Un)teaching the ‘datafied student subject’: perspectives from an education-based masters in an English university. Teaching in Higher Education, 25(4), pp. 401-417. DOI: 10.1080/13562517.2019.1698541

Raffaghelli, J.E. & Stewart, B. 2020. Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25(4), pp. 435-455. DOI: 10.1080/13562517.2019.1696301

van Dijck, J., Poell, T. & de Waal, M. 2018. Chapter 6: Education. In: The Platform Society. Oxford University Press.

Williamson, B., Bayne, S. & Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.

a week of platforms

Methodology

For the final week of the teaching block, I recorded the software/applications that I used each day to investigate which platforms/tech ecosystems I use regularly, reflect on those I could not separate myself from, and identify ways I could make the apps that I use more “efficient”.

Each day, I recorded the apps that I was opening or focusing on, along with whether each app is free, open-source, and/or used primarily for work or school activities. The resulting visualization was inspired by the Dear Data “Week 50 – A week of our phones” postcard by Giorgia Lupi.
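As a rough illustration of what this log looks like once typed up, a day’s entries reduce to a handful of flags per app, and the vendor tally behind the visualization becomes a one-liner. This is a hypothetical sketch in Python – the sample entries and the vendor mapping are invented, not my actual data:

```python
from collections import Counter

# One day's log (invented sample entries):
# (app, free, open_source, work_or_school)
monday = [
    ("Google Docs", True, False, True),
    ("Microsoft Word", False, False, True),
    ("Discord", True, False, False),
]

# Tallying apps per vendor, as in the visualization:
vendors = {"Google Docs": "Google", "Microsoft Word": "Microsoft",
           "Discord": "Discord Inc."}
print(Counter(vendors[app] for app, *_flags in monday))
# Counter({'Google': 1, 'Microsoft': 1, 'Discord Inc.': 1})
```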

Discussion

The visualization shows that I primarily use applications/services from Google and Microsoft – nearly half of all the applications, and half of the applications I used every day, came from these two companies. However, this representation ignores some of the hidden infrastructure behind the other applications:

  • Amazon Web Services (AWS), Amazon’s on-demand cloud computing platform, dominates the web server industry, controlling roughly 40 percent of the cloud market. Disney+, Reddit, Spotify, Zoom, Facebook, Twitch, LinkedIn, and Instagram all depend on AWS, and other apps like Moodle and WordPress can be hosted on it.
  • Chromium is a free and open-source browser initially developed by Google (through the Chromium project). Electron is a software framework that embeds the Chromium browser to render desktop GUI applications such as Visual Studio Code, WordPress Desktop, Slack, Discord, and Teams.
  • React and Angular are two popular open-source web application frameworks maintained by Facebook and Google, respectively. Many websites and applications built with Electron (such as Discord) use one of these frameworks.

This all goes to show that the “Big Five” (van Dijck, 2018) influence more web services/applications than is immediately obvious. It is not hard to imagine the ease with which activity from a group of seemingly unrelated websites/applications could be collected and analyzed to give holistic insights into our “data doubles” (Williamson, 2020) for monetization purposes. van Dijck (2018) highlights that learning data has become increasingly valuable for completing our “data double”. Promises of personalization and the democratization of education distract from concerns about privacy, security, and the commodification of student data, and from the potential for increased surveillance.

Educational platforms rely on these promises of personalization and democratization and are quickly being dominated by the “Big Five”. These platforms are impacting teaching and learning practices and autonomy, and they risk producing a “one-size-fits-all” approach to education (van Dijck, 2018) that is globalized and potentially lacking in local and cultural values. Creating free and open-source educational material, platforms, and data has been offered as a way to democratize education and push back against corporate platformization. Unfortunately, these initiatives are often quite costly and require time and expertise to develop.


van Dijck, J., Poell, T. & de Waal, M. 2018. Chapter 6: Education. In: The Platform Society. Oxford University Press.

Williamson, B., Bayne, S. & Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.

a week of “engagement”

Methodology

Over the past week, I recorded the UCR-affiliated websites that I accessed (those ending in ucr.edu, except for one of our two email accounts, which is hosted by Gmail). At each site, I categorized the website, recorded the time the page finished loading, how I arrived at the site (typed the URL, followed a link from another site, etc.), and when I left the page. I also recorded whether I was required to enter a two-factor authentication password, when I connected to the campus VPN, and whether I downloaded any files.
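Each observation, then, can be thought of as a record like the one sketched below (the field names are my own hypothetical choices, not those of any actual tracking system):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SiteVisit:
    """One UCR-affiliated page visit (hypothetical field names)."""
    url: str
    category: str        # e.g. "email", "LMS", "library"
    referral: str        # e.g. "typed URL", "link from another site"
    loaded_at: datetime  # time the page finished loading
    left_at: datetime    # time I left the page
    two_factor: bool     # was a two-factor password required?
    on_vpn: bool         # was I connected to the campus VPN?
    downloads: int       # number of files downloaded during the visit

    @property
    def seconds_on_page(self) -> float:
        """Time spent on the page - the closest thing to 'engagement' here."""
        return (self.left_at - self.loaded_at).total_seconds()
```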

The majority of my day-to-day work is web development, and we are undergoing major version updates. Consequently, I access our git repository, university databases, and test versions of our projects many times a day. I did not record this data as it would quickly have become large in volume; this is a limitation of the methodology and the resulting visualization.

Discussion

In this activity, engagement was measured as visiting a website, and there is little information about what I did on each site (other than total time spent and files downloaded). As Brown (2020) notes, instructors are often faced with similar data, and they often (1) assume that students who accessed the LMS regularly were more familiar with course rubrics, deadlines, and materials, and (2) change their interactions with students based on engagement with the LMS. The latter is especially concerning, as these interactions (or their absence) could “radically affect” the future of a student (Williamson, 2020).

In collecting data and developing the visualizations, I omitted some data that would lead to “false positives”, e.g. duplicate two-factor authentication messages, refreshing the same page, or accidentally downloading multiple files. While false positives like these may be identified and algorithmically removed from learning analytics data sets (a sketch of such a filter follows below), other types might not be – Brown (2020) recalls watching students take out multiple clickers, presumably to also give their friends attendance points. This raises concerns about the quality, validity, and trustworthiness of these data sets. Measures such as student attendance and student engagement are often also treated as proxy measures of instructor performance (Williamson, 2020). Instructor B in Brown (2020) is concerned about end-of-year evaluations that include a review of student data, namely attendance data, as they do not require attendance. Could these datafied performance evaluations lead to instructors changing their policies to ensure favorable evaluations?
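For the simplest false positives, a filter might look like the sketch below, which keeps only the first event in any rapid burst of repeats for the same page. This is a Python sketch under my own assumptions – the 30-second window and the event format are arbitrary choices, not a standard from the literature:

```python
from datetime import datetime, timedelta

def drop_repeats(events, window=timedelta(seconds=30)):
    """Keep only the first event in any burst of repeats for the same URL,
    e.g. a page refresh or a duplicate two-factor prompt.
    `events` is a list of (timestamp, url) pairs sorted by timestamp."""
    kept, last_kept = [], {}
    for ts, url in events:
        prev = last_kept.get(url)
        if prev is None or ts - prev > window:
            kept.append((ts, url))
            last_kept[url] = ts
    return kept

# Two loads of the same page ten seconds apart collapse into one visit:
t0 = datetime(2021, 2, 26, 9, 0)
log = [(t0, "lms.example.edu"), (t0 + timedelta(seconds=10), "lms.example.edu")]
assert len(drop_repeats(log)) == 1
```

The clicker scenario, by contrast, produces records that look entirely legitimate and would pass straight through a filter of this kind.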


Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education. 25(4), pp. 384-400

Williamson, B. Bayne, S. Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education. 25(4), pp. 351-365.

a week of beverages

To start the new block, ‘Teaching’ with Data, I decided to record my drinking habits throughout the week. For this block I will also try to transform the hand-drawn visualizations into a more “traditional” data analytics dashboard.

Methodology

Throughout the week, I recorded what I drank, whether I purchased the beverage from a restaurant/cafe, whether I also had a snack/meal with the beverage, and a non-numerical consumption amount (e.g. a few sips, a large tea, a glass of juice).

I did not fully record what I drank on Thursday – the last entry recorded is the water I had with dinner. I noticed this error on Friday morning and did not back-fill the data, since I had not recorded it on Thursday. This is an attempt at modeling tracking applications, as many do not allow you to amend the data yourself either. It also hints at the fact that data sets are often merely representative and limited.

The data for the week resulted in the following visualizations.

Block 2: Week 6 Visualization – a week of beverages
Sample Teacher’s Dashboard

Design and Discussion

The dashboard view was developed using Metabase (backed by a local SQLite database) and Adobe XD. The hand-drawn visualizations were drawn using GoodNotes on my iPad.

In designing both visualizations, I found the most difficult aspect to convey was the amount consumed. Since I did not record numerical values, I had to estimate one for each entry, which carries some assumptions: (1) I drank the entire bottled or purchased beverage, (2) my “sips” and “gulps” follow average volumes, and (3) the initial volume was consistent across non-bottled beverages. For the hand-drawn visualization, I settled on the dashed-line system above the drink category, where a single line is 4-9 fl oz, a double line is 10-15 fl oz, and a triple line is 16 or more fl oz.
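Translated into code, the binning is straightforward (a minimal sketch; how to handle amounts under 4 fl oz, such as “a few sips”, is my own assumption since the encoding leaves it unspecified):

```python
def dash_count(fl_oz: float) -> int:
    """Map an estimated volume to the dashed-line encoding:
    1 dash = 4-9 fl oz, 2 dashes = 10-15 fl oz, 3 dashes = 16+ fl oz."""
    if fl_oz >= 16:
        return 3
    if fl_oz >= 10:
        return 2
    if fl_oz >= 4:
        return 1
    return 0  # below the smallest bin (e.g. "a few sips"); unspecified in the original encoding

assert dash_count(12) == 2  # e.g. a standard canned drink
```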

For the dashboard view, I was limited not only by the lack of precise numerical values, but also by the customization capabilities of the software and by the translation of the data into a database table. Consequently, I decided to use the number of dashes as the amount consumed. I also kept the schema relatively simple – a single table with the columns: drink (text), purchased (boolean), amount (integer), food (boolean), and day (text). In a more expansive system (with more robust database queries), some, but not all, of the customization limitations could be overcome. However, in the classroom, teachers may not have access to modify or even view the structure of the backend. This restriction hinders a true and complete understanding of how a visualization is developed and what conclusions can be drawn from it.
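For concreteness, a minimal version of that backend might look like the sketch below (SQLite via Python; the sample row and the aggregate query are my own inventions, not the actual Metabase setup):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # the real dashboard used a local SQLite file
con.execute("""
    CREATE TABLE beverages (
        drink     TEXT,     -- e.g. 'tea', 'water', 'juice'
        purchased BOOLEAN,  -- bought from a restaurant/cafe?
        amount    INTEGER,  -- number of dashes (1-3), standing in for volume
        food      BOOLEAN,  -- accompanied by a snack/meal?
        day       TEXT      -- e.g. 'Monday'
    )
""")
con.execute("INSERT INTO beverages VALUES ('tea', 1, 2, 0, 'Monday')")

# The kind of aggregate a dashboard tile would run:
for drink, total in con.execute(
        "SELECT drink, SUM(amount) FROM beverages GROUP BY drink"):
    print(drink, total)  # tea 2
```

Even a toy schema like this makes the point: every design decision (what becomes a column, what becomes an integer) is invisible from the dashboard side.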