I’ve been thinking on and off since early February about a conversation that Tracey, Enoch, Huw and I had on Twitter. It started when Huw tweeted —
Tracey was amazed that even 3% read the terms and conditions, and Huw doubted that the 3% actually read them — they just say they do. Enoch then dug out a website that had succeeded in persuading 99% of survey respondents to surrender sensitive things like the naming rights to their first-born child. Anyway, this was my response to Huw’s question above:
My suggestion was that corporations like Microsoft, Apple and Facebook intentionally make the terms and conditions of their apps too long and complex for most readers, in order to undermine the possibility of meaningful consent. This idea — that drowning people in information bypasses meaningful consent — came up again when I was thinking about the use of digital platforms, apps and, especially, facial recognition software in education. This time, though, the worry about information overload and its connection to the undermining of consent came from a different angle. I started reading this piece —
Under the pressure of the rapidly escalating data requirements of deep learning, it is researchers — not corporations this time — who have gradually, and then completely, abandoned asking for people’s consent to read their faces. For example, a new study by Inioluwa Deborah Raji and Genevieve Fried shows that personal photos of private citizens (often including children) are increasingly being incorporated into systems of surveillance without their knowledge. The research practice of teaching machines to read human faces using deep-learning-based facial recognition has not only eroded our privacy and created a culture of excessive surveillance without the knowledge of those under surveillance. It has also completely disrupted our norms of consent. Yet again, meaningful consent is being bypassed, even though researchers do not intend to bypass it. Given the deluge of data, researchers simply cannot keep up with the consent-seeking that collection on this scale would require. They’re no longer able to ask for consent.
Raji contrasts the early days of data collection, when researchers exercised extreme caution about collecting, documenting, and verifying face data, with how things are now, commenting that —
“Now we don’t care anymore. All of that has been abandoned…You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”
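To get a rough feel for why consent stops scaling, here’s a back-of-envelope sketch in Python. Every figure in it is an assumption I’ve picked purely for illustration; none of the numbers come from Raji and Fried’s study.

```python
# Back-of-envelope: what would individually sought consent cost for a
# million-face dataset? All figures below are illustrative assumptions,
# not measurements from any real study.

dataset_faces = 1_000_000     # faces in the dataset
minutes_per_request = 10      # locate subject, explain the use, record an answer
team_size = 5                 # researchers handling consent full-time
hours_per_year = 1_600        # working hours per researcher per year

total_hours = dataset_faces * minutes_per_request / 60
years_needed = total_hours / (team_size * hours_per_year)

print(f"Total effort: {total_hours:,.0f} person-hours")
print(f"With a team of {team_size}: roughly {years_needed:.0f} years")
```

Even on these deliberately generous assumptions, a five-person team would spend roughly two decades doing nothing but seeking consent for a single million-face dataset. That is the arithmetic behind “you just can’t keep track of a million faces.”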
In the context of education, we’re already seeing companies boast that their AI can read children’s emotions as they learn. In this case, we were lucky to have Kate Crawford (below) to set the record straight. But it’s far from obvious that such Twitter interventions will be enough to stop the installation of facial recognition software in the classroom.
And what will happen to all of the data on children that will be collected as they learn? Well, remember what Raji said above: “You can’t even pretend that you have control.” Think about that.