Readers of Black Perspectives are undoubtedly aware of conversations around tech, gender, and race. (And if you are unfamiliar with the conversation, Shalini Kantayya’s documentary Coded Bias is not to be missed.) In tech and Digital Humanities (DH) circles, discussions about tech bias have been ongoing for several years but are only slowly starting to reach the public. For instance, in 2018, Amazon came under fire for an AI recruitment tool that was shown to be biased against women. Amazon’s algorithm had trained itself to downgrade résumés that mentioned clubs and organizations with the word “women’s” in them, and even to eliminate candidates from women’s colleges.
The recruitment bias was disturbing, particularly because women of color–and especially Black women–already face additional biases in hiring and career advancement. Last December, Timnit Gebru, one of the leading experts in AI, tweeted that she had been fired by Google after she refused to remove her name from a paper critical of the company’s handling of racial bias in its AI program. And in the age of COVID, there has been additional attention to anti-Black bias in healthcare technology. The pulse oximeter, a device that clips over a patient’s fingertip to read blood-oxygen levels, was shown to register inaccurate readings for Black patients, potentially under-diagnosing patients who were not getting enough oxygen and contributing to higher rates of Black COVID deaths.
A recent news story discussed the development of an AI-powered screener for detecting skin cancer, which empowers users to scan and track suspicious lesions using their smartphones. While the researcher insisted that it was merely a pre-screen and not a replacement for a physician’s care, studies have shown alarming disparities in early detection and outcomes by race. Because early detection is a major predictor of cancer survival, potential reliance on a tool that has been shown to have severe limitations in reading Black skin is troubling.
Because of the ties between tech (and its biases) and digital humanities, considerations of race and gender spill into DH scholarship. There have been important conversations led by Kim Gallon, Jessica Marie Johnson, Roopika Risam, Jessica Wu, Dorothy Kim, Adeline Koh, Alex Gil, Caitlin Pollock, and others around decolonizing the field and making DH more responsive to race and exclusion. Kim Gallon’s essay, “Making a Case for the Black Digital Humanities,” provides one of the critical frameworks for discussions around tech and the humanities. Among other things, she rightly notes that history and DH both could “further expose humanity as a racialized social construct.” Digital humanities, Gallon argues, is a “technology of recovery that undergirds Black digital scholarship, showing how it fills the apertures between Black studies and digital humanities.”
She is right. Taken together with Black Studies, DH is another tool that humanities scholars can deploy to recover the past. Yet the ongoing evolution of increasingly powerful and economically accessible DH tools necessitates vigilance around ethics and race. To wit, as Jacque Wernimont notes, just because we can does not mean that we always should. It is a lesson the tech industry seems reluctant to embrace, which means that digital humanists must be cautious and ethical in their use of these tools.
In late February 2021, Twitterstorians and DH scholars were buzzing about a tool from MyHeritage, which runs on the D-ID platform and uses AI to generate faces. Results have ranged from eighteenth-century royals and their courtiers to Frederick Douglass. The tool guesses at what individuals looked like based on their face shape, using a portrait, drawing, or photograph to create an approximation. Frederick Douglass famously declined to sit for portraits out of concern that cartoonish racial tropes involving features like his lips, hair, or nose would make their way into the image. He preferred to be photographed instead, which gave him more control over his own image and made him one of the most photographed men of the nineteenth century.
Tools like artificial intelligence and VR/AR technologies hold remarkable promise for more, and different, types of public-facing digital humanities projects: recovering Black spaces, and perhaps experiences, in ways that traditional text-based scholarship cannot. Responsible use of these tools, however, means taking care not to engage in digital blackface or to further Black trauma. Alondra Nelson’s The Social Life of DNA, which discusses the big business of DNA ancestry programs, offers a cautionary tale about the desire to recover disrupted racial pasts. Beyond the fact that DNA results do not inherently correlate with broader discussions of Blackness, there have been valid concerns about the ways DNA ancestry companies might use their data for medical research or law enforcement, yet another way in which Black people have been subjected to non-consensual experimentation by medical science as well as to inequities in the criminal justice system.
In our eagerness to recover pasts, and particularly racialized pasts, it is therefore necessary to balance that potential with a duty not to amplify Black trauma or engage in digital blackface. Virtual reality, augmented reality, and AI all offer enormous potential for new ways to access and engage with the Black past. But the potential for harm is just as great. Perhaps it is time for a code of ethics that furthers Black DH’s important work as a tool for social justice while also trying to mitigate that harm.