Movies have trained us not to trust AI. We’ll never forget seeing HAL 9000 lay waste to a crew of astronauts; Skynet has threatened to destroy humanity so many times that it has become synonymous with AI takeover. (It has also become synonymous with bad movies, but that’s another conversation entirely.) While Shalini Kantayya’s documentary, Coded Bias, doesn’t have the same apocalyptic implications in mind when discussing the problems surrounding AI and machine learning, you get the sense that, if left unchecked, we might be living in the prologue of a bad timeline: a slippery slope with many unintended consequences.
While working on a research project at MIT, Joy Buolamwini, founder of the Algorithmic Justice League, quickly discovered a major problem with various facial recognition software: her face could not be registered. Frustrated, she eventually put on a white mask, and, to her dismay, the software immediately picked it up. After plenty of testing, it became clear that facial recognition software is accurate mostly for white people, especially white males. The faces of Black people (and many other minority groups) produce wildly inconsistent results, indicating that many of the underlying data sets behind this technology are highly biased.
And no, as Coded Bias states, this doesn’t mean coders are racists. However, it does emphasize how AI can be an unintentional (or, in some cases, very intentional) source of oppression. Take the AI tool Amazon briefly used to read resumes and find suitable candidates: it penalized women applicants, downgrading resumes from women’s colleges or those containing keywords such as “women’s.” Or Microsoft’s Tay bot, which became a racist bigot in a matter of moments on Twitter. Shalini Kantayya paints a stark picture of the “black boxification” of technological advancements. Experts deploy systems whose internal workings even they may not understand, which can have disastrous real-world outcomes if those systems spiral out of control.
And if that doesn’t fill you with a strong (sometimes unnecessarily overbearing) sense of dread or make you want to throw away your electronics and party like it’s 1812, Coded Bias then turns to examine the issues that come with using this technology in police or state surveillance. As you might expect, facial recognition and the use of personal data are wildly inaccurate at finding criminals. The technology does more to harass individuals, or even to keep populations under control in places such as China, where the government forces its people to relinquish such data altogether.
Coded Bias makes a strong case for data governance as more of society becomes reliant on Big Tech, which in turn uses that information for targeted ads, election influence, and plenty of other purposes outside the individual’s control. Soon enough, we may be discussing the privacy of personal data as a human right in a very real way.
Coded Bias is screening online as part of MIFF 68 1/2 from August 7-23. To buy tickets and view the full online program, head to 2020.miff.com.au.
Thank you for reading! What are your thoughts on Coded Bias? Comment down below!