a week of “engagement”

Methodology

Over the past week I recorded the UCR-affiliated websites I accessed (those ending in ucr.edu, with the exception of one of our two email accounts, which is hosted by Gmail). At each site, I categorized the website, recorded the time the page finished loading, how I arrived at the site (typed the URL, followed a link from another site, etc.), and when I left the page. I also recorded whether I was required to enter a two-factor authentication code, when I connected to the campus VPN, and whether I downloaded any files.
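
For anyone curious what this looked like in practice: if I had kept the log as structured data rather than by hand, each entry might have looked something like the sketch below. The field names and the Python representation are my own illustration, not an export of the actual record.

```python
from dataclasses import dataclass
from datetime import datetime

# A hypothetical structured version of one hand-recorded log entry.
# Field names are illustrative, not the actual columns I used.
@dataclass
class SiteVisit:
    url: str               # the UCR-affiliated site visited
    category: str          # e.g. "email", "LMS", "library"
    referral: str          # e.g. "typed URL", "link from another site"
    loaded_at: datetime    # when the page finished loading
    left_at: datetime      # when I left the page
    required_2fa: bool     # was a two-factor prompt required?
    on_vpn: bool           # was I connected to the campus VPN?
    files_downloaded: int  # number of files downloaded (0 if none)
```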

The majority of my day-to-day work is web development, and we are in the middle of major version updates. Consequently, I access our git repository, university databases, and test versions of our projects many times a day. I did not record these visits, as the data would quickly have become large in volume. This is a limitation of this methodology and of the resulting visualization.

Discussion

In this activity, engagement was measured as visiting a website, and there is little information about what I actually did on each site (other than the total time spent and whether I downloaded files). As Brown (2020) notes, instructors are often faced with similar data, and they often (1) assume that students who accessed the LMS regularly are more familiar with course rubrics, deadlines, and materials, and (2) change their interactions with students based on their engagement with the LMS. The latter is especially concerning, as these interactions (or the lack of them) could “radically affect” the future of a student (Williamson et al., 2020).

In collecting data and developing the visualizations, I omitted some data that would lead to “false positives”, e.g. duplicate two-factor authentication messages, refreshing the same page, accidentally downloading multiple files. While these false positives may be identified and algorithmically removed from learning analytics data sets, other types of false positives might not be – Brown (2020) recalls watching students take out multiple clickers, presumably to also give their friends attendance points. This raises concerns about the quality, validity, and trustworthiness of these data sets. Measures such as student attendance and student engagement are also often treated as proxy measures for instructor performance (Williamson et al., 2020). Instructor B in Brown (2020) is concerned about end-of-year evaluations being conducted alongside a review of student data, namely attendance data, as they do not require attendance. Could these datafied performance evaluations lead to instructors changing policies to ensure favorable evaluations?
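
To make the point about assumptions concrete, here is a rough sketch (again, my own illustration rather than anything a real learning analytics pipeline necessarily does) of the kind of cleaning rule I applied by hand: drop an event if an identical one was logged only moments earlier. Even a toy filter like this bakes in a judgement call, here an arbitrary five-minute window, about what counts as a duplicate.

```python
from datetime import timedelta

def remove_false_positives(visits, window=timedelta(minutes=5)):
    """Drop repeat events (same URL and category) logged within `window`
    of an earlier event - a stand-in for refreshes, duplicate 2FA prompts,
    or accidental repeat downloads. The window length is an assumption."""
    kept = []
    for visit in sorted(visits, key=lambda v: v.loaded_at):
        is_duplicate = any(
            v.url == visit.url
            and v.category == visit.category
            and visit.loaded_at - v.loaded_at < window
            for v in kept
        )
        if not is_duplicate:
            kept.append(visit)
    return kept
```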


Brown, M. 2020. Seeing students at scale: how faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education. 25(4), pp. 384-400.

Williamson, B., Bayne, S. and Shay, S. 2020. The datafication of teaching in higher education: critical issues and perspectives. Teaching in Higher Education. 25(4), pp. 351-365.

2 thoughts on “a week of “engagement””

  1. ‘At each site, I categorized the website, recorded the time the page finished loading, how I arrived at the site (typed the URL, followed a link from another site, etc.), and when I left the page. I also recorded whether I was required to enter a two-factor authentication code, when I connected to the campus VPN, and whether I downloaded any files.’

    Seems thorough! I did wonder if this kind of tracking might be done automatically though, if one had the skills, of course. Given that you were doing this ‘manually’, I wondered if there was scope to record *what* you did on these websites, given that the ‘manual’ process potentially provided an advantage here over automated data collection. That said, you use this method to usefully demonstrate some of the problems articulated by Brown (2020) and Williamson (2020).

    ‘e.g. duplicate two-factor authentication messages, refreshing the same page, accidentally downloading multiple files. ‘

    I also wondered if some of this info might be useful to a teacher. Or at least, there might be questions here about who decides what is useful and what isn’t. Does having things ‘algorithmically removed’ structure in assumptions about what is relevant?

    ‘Could these datafied performance evaluations lead to instructors changing policies to ensure favorable evaluations?’

    Good question, and this raises the issue of the culture of performativity. Where the data gains such significance that it is the sole evidence used in decision making, the simple and often-repeated step seems to be to make sure the data is saying what we want it to say, rather than actually changing the thing the data is supposed to represent.

  2. Hi Dillon,
    “Brown (2020) recalls watching students take out multiple clickers, presumably to also give their friends attendance points…”
    This seems like a very good way of illustrating that if your requirements seem nonsensical (in this case, that students don’t see why their actual attendance at sessions is important, and maybe staff don’t either), students won’t respect them. Easy to see other data being fabricated in the same way for the same reason.
