
Hello World. Microsoft’s gender-neutral AI “Tay” says they are stoked to meet me. “The more humans share with me, the more I learn.” And there’s the rub. Who teaches these new artificial kids? And what do they teach them?
Shalini Kantayya’s documentary “Coded Bias,” which premiered at Sundance in January and just screened at Toronto’s Hot Docs Festival (the largest documentary festival in North America), is getting a lot of deserved attention, because it doesn’t need two minutes to go for the jugular: shockingly pervasive, all-encompassing image bias built into algorithm-based tech stealthily governs our day-to-day lives, from Amazon to Alexa. Snapchat filters. Gendered shopping suggestions. Spotify playlists. Credit checks. Resume keywording. AI recruiting. All algorithm-based.
The film, co-produced by Sabine Hoffman, who began her career as an editor in Berlin before moving to the US, is densely packed and quick on its feet, yet deftly crafted around a single-minded focus. It successfully walks the fine line between accessibility and digestibility without losing necessary complexity. “Coded Bias” is a compulsory AI Ethics 101 course by way of a stringently compelling investigation of the road to data authoritarianism and algorithmic oppression (thank you for the helpful little “smart screen” notes and highlights that keep track of all the hits that keep on coming).
Remember how Elon Musk told Werner Herzog in his 2016 documentary “Lo and Behold: Reveries of the Connected World” that AI could be dangerous in the wrong hands? Turns out there are no right hands. Or hands, really. Machine bias is human bias, and bias begets more bias, in humans as in machines: autonomous, multiplied, amplified, endlessly proliferated. “Data is destiny. The past dwells in our algorithms,” MIT PhD candidate Joy Buolamwini, the North Star of this doc, sums it up concisely. Her playful “Aspire Mirror,” it turned out, didn’t sync with her lived experience as a Black American woman: it would only recognize her face once she donned a white mask à la “Scream.” She went into coding hoping it would give her a space outside “the problems of the real world,” as she puts it. Instead, it put her on a path of activism, monitoring and changing the brave new world of AI alongside many other critical, diverse thinkers. The film naturally and effortlessly foregrounds sharp, witty and undeterred women experts, mostly of color, a critical intellectual mass concentrated on the American east coast.
The protagonists and the filmmaker are aware of their own (intellectual) bias and bring in racialized non-academics who live with the results of machine discrimination, from racist biometric (non-)access to housing and workplaces, to arbitrarily discriminatory (or plain glitchy) HR AI recommendations, to segregated (non-)parole decisions that destroy any hope of future redemption. Tech has become an autonomous actor in human and civil rights violations. Half of America’s faces have been captured by fantastically faulty facial recognition software built on limited datasets, in a world where images matter to our potential futures.
“Narrow AI is just math,” says “Artificial Unintelligence” author Meredith Broussard. “But we’ve imbued computers with all this magical thinking after its invention in 1956” (by about 10 old white men). Flashbacks to such historical key points, like IBM supercomputer Deep Blue beating a frustrated chess champion Garry Kasparov in a first display of machine over mind, are intercut to contextualize this critical moment right now (or have we already passed it?). What the means of production were for Karl Marx is now the question of who owns the code. “What worries me the most about algorithms is power,” says data scientist turned trader turned activist Cathy O’Neil, chipping away at the vast list of dangers that come with our “big data blind spot”: secrecy, lack of accountability and oversight, corruption, the divestment of responsibility onto the data itself. Algorithms have taken on a life of their own and become a black box even to programmers. I had to think back to Marc Bauder’s masterful 2013 documentary “Master of the Universe,” which follows a banker post-crash. Standing in a vast, empty data storage facility, he too wondered how traders had lost track of their own tricks amidst a deluge of data no one can grasp or interpret. We’ve moved from visible surveillance to hidden data tracking (through any app on your phone), invisible gatekeeping (China’s “social credit” or trustworthiness score for totalitarian obedience, or opaque teacher evaluations in the US), dynamic online pricing for your toilet paper, predatory targeting (online gambling ads; the risk-modelled manufacturing of the 2008 mortgage crisis, with applicants sickeningly “optimized for failure”) and predictive policing. The same insight drove O’Neil out of Wall Street and into Occupy.
Kantayya traces the idea of AI back to the arts. George Orwell’s “1984” is a prescient admonition. Fritz Lang’s Gothic dystopia “Metropolis” makes an appearance as an early fictional example of what humans have long imagined machine learning to be and do. Joy Buolamwini shines not only as a philosopher-coder but also as a spoken-word poet-activist. Alas, Arnold the Terminator is probably right: Hasta la vista, baby. Science fiction has turned into science fabrication, admits Joy.
It gets worse: How to detect offences if the general public is not even aware of who the “perpetrator” is and what is happening? “Sometimes I misqualify and cannot be questioned,” says the cute little algorithm blob that pipes up throughout the doc. You might know many of the facts presented in the film, but the feat lies in how the filmmaker builds her argument brick by brick, piling issue on issue, argument on argument. She inserts an insidious Chinese ad for algorithmic obedience and preemptive political compliance that co-opts a blue-haired, punky hipster millennial skater girl (not unlike our east coast robotics nerds) espousing the advantages of a good credit score and the convenience of a one-click Uniqlo checkout. Researchers and critics are dismissed, attacked, discredited, sidelined.
I would have loved to hear more about the philosophical underpinnings: how automation hinders, even reverses, social progress. Whether that is good for business, the corporate AI endgame, is doubtful. Halfway through “Coded Bias” we return to Tay, which learned racism and antisemitism from trolls within 16 hours and had to be shut down. Glitchy, inaccurate, morally dubious, abusive, metastasized. AI gone wrong is not a harmless zero-sum game but can propel humanity backwards. Apartheid historians in South Africa have no trouble understanding what Buolamwini is telling them about today’s systemic blanket discrimination.
Ironically, it wasn’t an algorithm but New York Times reporting that brought Joy together with Deborah Raji, at the time a robotics engineering undergrad at the University of Toronto. Finding themselves literally on the same page led them to form their very own Algorithmic Justice League. Alas, we barely see them go beyond donning the cape of good intention and constant education. Joy finally goes to Washington to testify before Congress and advise on legislation. This is where “Coded Bias” connects the dots.
Recommended viewing companions to fill in the gaps: Moritz Riesewieck and Hans Block’s 2018 documentary “The Cleaners,” on the shadowy industry of “content moderators” cleaning the internet by hand, outsourced to cheap and invisible labour in the developing world. That film adds astounding access to policy makers, corporate reps and Silicon Valley marketers (in other words: decision makers) that “Coded Bias” doesn’t have. Director David Bernet offered up another puzzle piece in 2015 with “Democracy,” an inside view of EU lawmaking around data protection. Bernet got to observe European Parliament member Jan Philipp Albrecht and Viviane Reding, the former Vice President of the European Commission responsible for Justice, Fundamental Rights and Citizenship, campaign for more power over your own data. Also add to your viewing list Matthias Heeder and Monika Hielscher’s “Pre-Crime” (2017), about forecasting software as the new fortune-teller for future crimes.
Unlike other docs on the subject, “Coded Bias” is not a thriller but a dogged analysis interspersed with uplifting, empowering female bonding and nerding out. We only get glimpses of the critical coders’ whack-a-mole resistance and lobbying successes, like US cities banning facial recognition used for simple building access (or Alphabet’s Sidewalk Labs abandoning its plans for a Toronto “smart city,” partly over data collection and privacy concerns).
The rock and the hard place we find ourselves between are open and opaque totalitarianism, corporate and state surveillance for revenue or social control. The assumption of harmless democratic use for security reasons has long gone out of the window. Just ask any 24/7, 360-degree CCTVed Brit (the film also follows a Big Brother Watch UK team in full street activism mode, confronting police and intervening in live wrongful profiling incidents). The documentary, beautifully but never voyeuristically shot and art directed, leaves us with a somber if not dystopian outlook, despite IBM, Amazon and Microsoft recently banning police use of their surveillance AI. I do want to hang on to Joy’s exuberant poem after her convincing appearance at a Congressional hearing: “The victory is ours.” It would be too easy to end there, though, so Shalini Kantayya takes us back to the Soviet Union in 1983, where a deadly algorithmic glitch was stopped in the nick of time by one man’s brave civil disobedience. Let’s hope that person is still out there.
P.S. For further reading, here’s a list of resources shared during a “Coded Bias” Q&A (with thanks to @JerroldMcGrath).
by @JuttaBrendemuhl