🐱 How Google’s Autocomplete Predictions Encouraged Transphobic Searches

CatParty



When the world has a question, we Google it. Processing more than three billion searches a day, the most popular search engine on Earth practically reads our thoughts. And for nearly two decades, it’s been in the eerie business of analyzing everything people type into its search box to predict what users might be thinking about next.

You enter “chicken teriyaki” and Google suggests you add “near me.” Start typing “do astronauts” and Google suggests you ask if they “wear diapers.”

Most people are familiar with this predictive search behavior even if they don’t know the term for it. With every character you type, Google’s artificial intelligence guesses what you might be looking for. And although it serves up plenty of helpful suggestions, Google’s predicted search terms also often mirror the internet’s most vile and discriminatory tendencies.
Up until this week, that would have been immediately clear when you searched for almost any transgender celebrity and saw top suggestions rife with transphobia.

Over the past several months, I tracked autocomplete suggestions from major search engines for a wide array of trans and gender nonconforming public figures. (To circumvent search personalization and disguise my IP address, I used private browsing windows and a virtual private network.) With almost every name I searched — including Laverne Cox, Angelica Ross, Tommy Dorfman, and more — autofill prompted me to ask questions with transphobic undertones, or to probe a celebrity's pre-transition name, appearance, gender identity, and so forth.

These predictions included terms like “[X] before transition photos,” “[X] as a guy,” “[X] before surgery,” “is [X] post op,” “is [X] on hormones,” and “is [X] a dude.” Other predictions prompted me to try to figure out their previous name (also known as a “deadname”) or to misgender them, such as adding “man” after a trans woman’s name and “woman” after a trans man’s name.
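
My own process was manual, using private browsing and a VPN as noted above, but the gist of such a survey can be sketched in a few lines of Python. The snippet below is only an illustration: it leans on the unofficial suggestqueries.google.com completion endpoint, which Google neither documents nor guarantees, and the names and flag terms are placeholder examples rather than my actual data set.

```python
# Illustrative sketch only: queries the unofficial Google suggestion endpoint
# (not a supported API; format, availability, and rate limits may change) and
# flags completions containing the kinds of terms described above.
import json
import urllib.parse
import urllib.request

SUGGEST_URL = "https://suggestqueries.google.com/complete/search?client=firefox&q="

# Placeholder inputs for illustration, not the actual data set.
NAMES = ["Tommy Dorfman", "Lilly Wachowski"]
FLAG_TERMS = ["before transition", "before surgery", "as a guy",
              "birth name", "real name", "deadname"]

def suggestions_for(query: str) -> list[str]:
    """Return the autocomplete suggestions for a query string."""
    url = SUGGEST_URL + urllib.parse.quote(query)
    with urllib.request.urlopen(url, timeout=10) as resp:
        # The "firefox" client returns JSON shaped like [query, [suggestion, ...]].
        payload = json.loads(resp.read().decode("utf-8", errors="replace"))
    return payload[1]

for name in NAMES:
    for suggestion in suggestions_for(name):
        if any(term in suggestion.lower() for term in FLAG_TERMS):
            print(f"{name}: flagged suggestion {suggestion!r}")
```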

I frequently found these among the top suggestions within autocomplete predictions as well as “people also ask” and “related search” suggestions on the first page of Google’s search results. And while Google is the dominant player in the search engine space, claiming around 86% of market share, I also reproduced the problem on much less popular search engines, including Microsoft Bing and Yahoo, which offered similarly transphobic autocomplete suggestions.

Overall, I documented more than 225 instances of transphobic Google search suggestions for more than 50 trans celebrities in the past three months — essentially a who’s who list, including names such as Chaz Bono, Lana and Lilly Wachowski, Lia Thomas, Indya Moore, Chelsea Manning, Elliot Page, Michaela Jaé Rodriguez, Hunter Schafer, and dozens more. And this was the work of a single journalist, which means I likely only scratched the surface of the problems with the search results for the world’s leading trans figures.

For example, when I first typed “is Tommy Dorfman” into the search box, Google suggested I search “is Tommy Dorfman a man” and “is Tommy Dorfman on hormones.” When I typed “Tommy Dorfman” by itself, it suggested “Tommy Dorfman before and after.” For Lilly Wachowski — co-creator of The Matrix franchise — Google suggested I add “as a guy,” “before surgery,” and “man.”


A dominant theme I found among search predictions for trans celebrities — including actor Elliot Page, singer Kim Petras, activist Chelsea Manning, and many others — was a prompt to search for their “dead name,” “birth name,” or “real name.” Another theme that was particularly pronounced in the results for trans women — including Alexandra Billings and Michaela Jaé Rodriguez — was misgendering in the form of suggestions like “man.” And for nearly all trans people I searched, predictions like “before,” “before surgery,” and “before transition” appeared among the top suggestions.

Within a day of my flagging this issue with Google’s public relations team, the company began removing some of the most blatant examples of misgendering in the search suggestions for prominent trans figures. (At this point, Google had not yet offered me an interview or a public statement.)

But the scholars and mental health professionals I spoke with before contacting Google were concerned that the ubiquity of this predictive search behavior has already had far-reaching consequences.

“These search suggestions are a manifestation of the pervasive transphobia in our current cultural, social, and political climates,” says scholar and licensed therapist Alex Iantaffi, a leading researcher in trans health and a clinical member of the World Professional Association for Transgender Health. “As such, they do contribute to the struggle that many trans and/or nonbinary people face when exploring their gender, or sharing their identity and/or expression with the world.”

Iantaffi contextualizes this predictive search behavior within a larger system of gender policing on the internet that often disregards and delegitimizes trans people’s humanity. They say encountering transphobia repeatedly during a day-to-day task like Googling can contribute to higher levels of depression, anxiety, substance use, and suicidality.

“These higher levels of negative mental health outcomes are common across many minoritized communities, not because of people's marginalized status,” explains Iantaffi, “but because of the systemic oppression faced even when engaging in the simplest of tasks such as searching for information, walking down the street, or going to the bathroom.”


When the world’s most powerful search engine nudges users to search a famous trans celebrity’s “real name” or to ask if a certain public figure “is male,” it implies that these are valid questions to ask about any trans person, Iantaffi and other experts tell me. It perpetuates the discriminatory and hostile environment that trans people experience when they log online.

And despite its power and profits, Google hasn’t taken any effective action to prevent such transphobia from showing up in its predictive search suggestions. Instead, the company relies on a deficient automated filtering system in conjunction with after-the-fact reporting, like this very article, followed by manual fixes.

Reactionary fixes that break again and again

Google operates an automated system to prevent its AI from making suggestions that violate its policies. While this system catches a lot of issues, it’s not trained to recognize the full nuances of hate speech within various marginalized communities.

Google periodically implements fixes to address some of the most publicized issues with autocomplete. But for many marginalized communities, the burden of reporting troubling search predictions seems, in effect, to rest almost entirely in the hands of the public.

Indeed, since 2017, Google appears to have chiefly been relying on users to report problematic suggestions by clicking inconspicuous “feedback” and “report inappropriate suggestions” links — rendered in tiny gray text at the bottom of autocomplete lists or alongside search suggestions on the results pages. When following these links, users are prompted to indicate which searches have raised concerns. But contact from journalists has historically sparked more visible action.

The day after I reached out to Google for comment on this issue, I monitored the autocomplete suggestions for several trans celebrities and watched the predictions begin to change.

Some of the most egregious search suggestions no longer seemed visible. In the case of Lilly Wachowski, for example, previous top suggestions like “as a guy” and “man” had been entirely filtered out. I had suspected this might be the case, given how quickly Google had acted in response to prior reporting from outlets such as the Guardian and Wired, which previously pointed out that the search engine made autocomplete suggestions like “Hitler is my hero.”

The company has historically taken action after backlash on social media, or when journalists begin investigating problematic suggestions. Following the aforementioned Guardian report about discriminatory search suggestions, Google “took action within hours of being notified,” a spokesperson said. And in the case of Wired’s 2017 report, Google responded with fixes to remove certain suggestions flagged by the media outlet, as writer Issie Lapowsky noted in updates to the story.

Clearly, Google’s automated system has a history of failing to catch discriminatory suggestions. Rather than making more systemic changes, the company appears to have made quick tweaks to its algorithms, filtering out certain search classes that violate its policies.

“While we automatically filter many violations, we identified an issue where, regretfully, our systems were not working as intended,” a Google spokesperson told me in a statement three days after I initially reached out for comment. “We have removed these and related predictions under our policies.”

The Google spokesperson indicated that its system was designed to identify terms associated with “sensitive characteristics like gender,” but that certain search terms like “before” and “surgery” were not “recognized by those classifiers as related to gender.” (As of this writing, terms like “before surgery,” “gender at birth,” “is [X] on hormones,” and “birth name” still appear in the suggestions for many trans celebrities.)

Google’s spokesperson noted that implementing a universal fix for terms like “before” or “birth name” might make search suggestions less useful when searching for cisgender celebrities.
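
To see the tradeoff the spokesperson is describing, it helps to imagine the bluntest possible fix. The sketch below is my own illustration, not Google’s system: a flat phrase blocklist applied to predictions over-blocks ordinary queries about cisgender public figures while still missing transphobic phrasings it has never seen.

```python
# A deliberately naive illustration (not Google's actual system): filtering
# predictions with a flat phrase blocklist both over- and under-blocks.
BLOCKED_PHRASES = {"birth name", "before surgery", "before transition"}

def naive_filter(predictions: list[str]) -> list[str]:
    """Drop any prediction that contains a blocked phrase."""
    return [p for p in predictions
            if not any(phrase in p.lower() for phrase in BLOCKED_PHRASES)]

# Over-blocking: an ordinary query about a cisgender figure disappears
# because it happens to contain "birth name".
print(naive_filter(["freddie mercury birth name", "freddie mercury songs"]))
# -> ['freddie mercury songs']

# Under-blocking: transphobic phrasings absent from the list pass through.
print(naive_filter(["is [X] a dude", "[X] as a guy"]))
# -> ['is [X] a dude', '[X] as a guy']
```

Google’s production classifiers are of course far more sophisticated than this, but the spokesperson’s own examples suggest the same tension plays out at scale.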

The official statement from the Google spokesperson went on to say that the company is “working to implement stronger automated protections” to address specific concerns like these, although they did not provide details on how it would implement such protections or whether it has plans to consult with the trans community in the creation of those protections.

“While Google may not have direct control over the social ills reflected in its search suggestions, it did, in fact, design the mirror.”

But such reactionary fixes are bound to be brittle, says researcher Os Keyes, a Ph.D. candidate at the University of Washington’s Department of Human Centered Design & Engineering.

“Precisely because Google relies on after-the-fact reporting to reform its autocomplete system, what queries get blocked and when is dependent in part on the power and visibility of the population harmed by them,” says Keyes. “If you’re a small community, and one often dismissed or given short shrift in decision-making processes, as [trans people] are, you're at the back of the queue.”

Google itself, as its spokespeople will be the first to remind you, did not specifically code transphobic results into its suggestions. The company’s past statements on problematic search suggestions tend to underscore that the automated results reflect what people search for.

On one level, it’s true that Google is not responsible for the discriminatory speech that its search results frequently reflect. “In a white supremacist, cisnormative, heterosexist, ableist, fatphobic, capitalist, settler-colonial society, people’s searches do reflect all of those forms of structural and cultural inequality,” explains scholar Sasha Costanza-Chock, author of Design Justice and research director at the Algorithmic Justice League.

If we eradicated all forms of anti-trans violence and gender-based discrimination at all levels of society, search behavior and interests would, of course, change for the better. “Then, autocomplete would be less likely to suggest dead-naming people or searching for pictures of them before surgery, or searching for their ‘real name,’” they tell me.

But while Google may not have direct control over the social ills reflected in its search suggestions, it did, in fact, design the mirror. “Programmed inequality in sociotechnical systems like search interfaces doesn't give a pass to the companies and the product teams that are making them, because they know about these problems,” says Costanza-Chock.

Google’s spokesperson acknowledged this line of criticism and told me the search issues were not due to a lack of caring about the people harmed by its product’s failures. But this raises the question: Is Google doing enough to mitigate this harm before it arises?

Google has known about autocomplete’s horrors for years

Predictive search began in an era of techno-optimism focused on the ways in which young upstarts like Google would ostensibly improve people’s lives. In 2004, Google engineer Kevin Gibbs wrote a brief two-paragraph blog post to announce the newest Google Labs experiment: Google Suggest, as the autocomplete feature was known at launch.

“Google Suggest not only makes it easier to type in your favorite searches (let's face it — we're all a little lazy),” Gibbs wrote, “but also gives you a playground to explore what others are searching about.”

In 2008, autocomplete became Google’s default search mode. In the years since, predictive search has become one of those microinteractions that feels so simple and inherent to the way we search for information that users might sometimes forget that there are algorithms at work behind the scenes.

“The horrors of autocomplete often disproportionately impact marginalized groups.”

“[Google’s AI is] analyzing people’s search strings and assigning a probability or likelihood that a sequence of characters will be followed by another sequence of characters or words,” says Costanza-Chock. According to Google, autocomplete algorithms determine what predictions to show based on several factors including common searches, the language and location of the user, trending searches, and the user’s individual search history.
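
Stripped of scale and personalization, the mechanic Costanza-Chock describes is a ranking problem: score candidate completions by how often full queries beginning with the typed prefix appear in aggregate logs, then blend in the other signals Google lists. A toy version of just the frequency part might look like the sketch below; it is my illustration of the general idea, not Google’s implementation, and the query log is invented.

```python
# Toy prefix completion illustrating the general idea described above
# (not Google's implementation). Candidates are ranked purely by how often
# full queries starting with the typed prefix appear in an invented log.
from collections import Counter

QUERY_LOG = Counter({
    "chicken teriyaki near me": 1400,
    "chicken teriyaki recipe": 900,
    "chicken teriyaki bowl": 300,
    "do astronauts wear diapers": 800,
    "do astronauts get paid": 500,
})

def complete(prefix: str, k: int = 3) -> list[str]:
    """Return the k most frequent logged queries that start with the prefix."""
    prefix = prefix.lower()
    matches = Counter({q: n for q, n in QUERY_LOG.items() if q.startswith(prefix)})
    return [q for q, _ in matches.most_common(k)]

print(complete("chicken teriyaki"))
# -> ['chicken teriyaki near me', 'chicken teriyaki recipe', 'chicken teriyaki bowl']
```

Whatever shows up in the log is what gets read back out as a suggestion, which is why the filtering question above matters so much.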

Google’s public liaison for search, Danny Sullivan, has written that systems are in place “designed to recognize terms and phrases that might be violent, sexually-explicit, hateful, disparaging or dangerous.” But time and time again, those systems appear to fail.

The horrors of autocomplete often disproportionately impact marginalized groups. Safiya Umoja Noble’s 2018 book Algorithms of Oppression documents a decade of research into how Google’s search algorithms reinforce anti-Black racism and sexism. A 2013 United Nations campaign used Google search predictions to show the prevalence of misogyny and homophobia.

In 2016, an investigation from the Guardian found that right-wing groups had “been able to ‘game’” Google’s algorithms to display search suggestions like “Jews are evil.” The following year, Wired found Google was still making “vile suggestions” rooted in Islamophobia, antisemitism, sexism, racism, and beyond.

The results for trans celebrities are only an additional indicator that Google hasn’t found a way to solve a problem that experts have been calling attention to for years.

Experts raise alarms about the harms of Google’s AI

The societal harm caused or worsened by algorithmic systems has become one of today’s most urgent technological and political problems. From YouTube’s AI cultivating extremism to Facebook undermining democracy, an entire academic field now focuses on such questions, with researchers repeatedly demonstrating the considerable power that tech giants have to not only influence individual behavior but reshape our civic and social spheres.

With Google search, studies have shown that the mere order of results on search pages can sway perception and potentially even election outcomes. That’s commonly known as the “search engine manipulation effect,” referring to how changes in the top search rankings can have a “significant impact” on everything from consumer choice to voting behavior.

It’s not a wild leap from this body of research to conclude that autocomplete suggestions might have similarly powerful consequences.

For years, LGBTQ+ mental health professionals have raised alarms about how negative representation of trans and queer people online can affect wellbeing and self-image. These Google search predictions factor into these concerns, says Iantaffi. “These transphobic undertones in searches reinforce this process of dehumanization,” they tell me in an email.

“This is indeed particularly impactful for young people,” they say.

Google’s prior statements suggest that problems like these are too big to solve, with one executive saying its algorithm would never be “perfect.” But for many scholars and activists, the question isn’t one of perfection, but whether Google is doing enough to take these problems seriously and to reduce harm before it is caused.

“[Google has] started to sort of pay lip service to the idea that they need to be developing products in ways that are more ethical and accountable,” Costanza-Chock says. “But when push comes to shove, a lot of it seems to be for show.”

“The search engines have had decades now of people telling them about all the ways that different aspects of search can be harmful.”

Timnit Gebru, Google’s star AI ethicist and a leading researcher in the field, was forced out of the company in December 2020. Her work exposed risks of large language models — deep-learning algorithms “trained on massive amounts of text data,” as MIT Technology Review puts it — like those powering modern search features. The research paper at the center of her ouster found that, in addition to the environmental and misinformation costs, large language models get trained on “racist, sexist, and otherwise abusive language” by virtue of their sheer size and are prone to fail at grasping the nuances of what constitutes discriminatory speech.

“The search engines have had decades now of people telling them about all the ways that different aspects of search can be harmful,” Costanza-Chock says.

Those alarms have only grown louder, leading to significant activism and calls for government regulation. A network of like-minded organizations — many driven by the work of pioneering Black women in AI ethics — now focuses on these issues, including Gebru’s Distributed AI Research Institute and Joy Buolamwini’s Algorithmic Justice League.

“Google didn’t launch yesterday,” Costanza-Chock says. “If they wanted to, they could choose to do the right thing, and invest significant resources in trying to really deeply understand this problem and find ways to address it.”
 
I really hate it when I spell something a certain way intentionally because that's what I'm looking for and Google corrects it to some common spelling that fucks up my results.

Things like "chicken teriyaki near me" also screw up my results. I don't need Google to fill in the blanks for me. I tend to click pretty fast so I get unwanted searches because I accidentally clicked their suggestion.

Sorry to sperg but I find the feature to be a hindrance rather than a help.
In 2016, an investigation from the Guardian found that right-wing groups had “been able to ‘game’” Google’s algorithms to display search suggestions like “Jews are evil.” The following year, Wired found Google was still making “vile suggestions” rooted in Islamophobia, antisemitism, sexism, racism, and beyond.

They're doing that for lulz. Nothing else. If they want to type the whole thing in themselves they still can.

Also, trannies are funny and deserve to be made fun of.
 
That's a funny thing to complain about given that if you try to Google stats of trans crime it will only give you search results implying trans people are the most oppressed ever.
Any viewpoints that are against (or even just questioning) gender ideology are typically buried or redirected to pro-trans articles.
But sure, you're totally oppressed by the autocomplete function.
 
Reminder that if someone has the crazy money to make an ai driven search engine that doesn't censor itself (or at least further than user preferences) google is literally hindering themselves for the thing that makes them useful yet again
 
Sorry to sperg but I find the feature to be a hindrance rather than a help.
Glad I'm not the only one that spends far too long every day wishing for an off switch for that damned autosuggest and spelling "correction" in search.

OK, we knew that Google coddles trannies and gender speshuls but having proof written out so blatantly is still irritating as fuck. What happened to journalists only reporting the news. (I already know, it's rhetorical)
 
I didn't know Chaz Bono was a tranny. Way to out them to me. Disgusting.
 
I don't know how much difference there is between amazon and google autocorrect but I get much better results phonefagging and using androids google based autocorrect than the shit amazon has on their fire tablets. My phone will autocorrect "ducking" to "fucking" and knows when I type "tr-" I'm most likely about to say either trannies or troons.

The tablet on the other hand, I still have yet to break its spirit. Give me time.
 

Reactionary fixes that break again and again

Google operates an automated system to prevent its AI from making suggestions that violate its policies.
What they mean here is that the AI comes to one conclusion, based on a set of data, and they don’t like that conclusion and so they put in a ‘fix’ to hobble the AI. That’s an interesting thing to think about.
 
Bull fucking shit. Autocomplete shows trending and other globohomo massaged entries before you type a single letter. 2016 election was when it went into overtime. I switched to brave search mostly cause it doesn’t show me trending next to some doodle celebrating insignificant muzzie in a burqa #65
 
At this point, why not just ask Google to discontinue any kind of search indexing and just tell them to make it into a directory where only pre approved results show up?
Hell, they are already partially there. If shit keeps going the way it is, I predict that within a decade Google will only give results from a machine learning algorithm that is highly curated. They already pulled similar shit with YouTube. Any semblance of search neutrality is long gone.
 
What they really want it Google to not allow the question to be asked, of course.

Concur.

This isn’t about the terrible damage done by people googling mean things, this is part of an increasingly normalized demand that the flow of information be curated so that your fake virtual self may become reified in material reality. The goal is to stop the knowledge that someone is trans entirely. To completely destroy the mental categories male and female.
 
Iantaffi contextualizes this predictive search behavior within a larger system of gender policing on the internet that often disregards and delegitimizes trans peoples’ humanity. They say encountering transphobia repeatedly during a day-to-day task like Googling can contribute to higher levels of depression, anxiety, substance use, and suicidality.
Imagine being so mentally unwell that you get depressed and want to off yourself over an autocompleted search suggestion. These people need therapy and psychopharmaceuticals, not hormones and surgical genital mutilation. YWNBAW, and even the AI knows it.
 
I didn't know Chaz Bono was a tranny. Way to out them to me. Disgusting.
I knew that like 10 years ago before the tranny shit was even mainstream. I had a very celebrity obsessed girlfriend at the time though. That's basically all I know about Chaz Bono though, that they're famous and also a tranny, I have no idea why they're a famous person.

More on topic with the article, who would have guessed that google was promoting the cis white male heteronormative transphobic patriarchy. I guess google is based. 🤔
 