Source: https://www.bloomberg.com/news/arti...ers-human-workers-are-being-mistaken-for-bots
Archive: https://archive.is/wvBgE#selection-1249.0-1255.154
Call Center Workers Are Tired of Being Mistaken for AI
As more workers are asked by strangers if they’re bots, surreal conversations are prompting introspection in the industry about what it means to be human.
By Morgan Meaker
June 27, 2025 at 10:30 AM UTC
By the time Jessica Lindsey’s customers accuse her of being an AI, they are often already shouting. For the past two years, her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. Sometimes they ask her straight, “Are you an AI?” Other times they just start yelling commands: “Speak to a representative! Speak to a representative!”
Lindsey, whose work involves selling and answering questions about credit cards for American Express, a Concentrix client, has developed her own tactics to try to calm customers. “I tell them, ‘I promise, I’m a real human.’” To demonstrate, she might cough or giggle, vocal tics she believes AI can’t replicate. “I even ask them, ‘Is there anything you want me to say to prove that I’m a real human?’”
This approach doesn’t always work, she said. Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. “They just end up yelling at me and hanging up,” she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears. “Like, I can’t believe I just got cut down at 9:30 in the morning because they had to deal with the AI before they got to me.”
Concentrix did not reply to a request for comment. American Express declined to comment.
Predictions that AI would wipe out call center agents have mostly yet to be realized. A Gartner poll of 163 customer service companies in March found that 95% plan to retain human agents for now. Instead, the industry is rapidly integrating AI alongside humans: using the technology to direct calls, soften non-American accents and eliminate background noise.
That integration has customers struggling to tell the two apart. In Australia, Canada, Greece and the US, call center agents say they’ve been repeatedly mistaken for AI. These people, who spend hours talking to strangers, are experiencing surreal conversations, where customers ask them to prove they are not machines.
The conversations are a sign of what’s to come as companies try to make artificial intelligence sound more human, said Nir Eisikovits, professor of philosophy and director of the Applied Ethics Center at the University of Massachusetts, Boston. As an example, he points to OpenAI’s ChatGPT voice mode update in June, which improved intonation and introduced more “realistic” pauses.
“This inability to tell if you’re talking to a human or not is only going to grow,” said Eisikovits. At the same time, “our sense of uniqueness as a species will gradually erode.”
How call center agents react to customers’ confusion varies. B.J., who provides IT support from his home in Louisiana, and who declined to give his full name, said he has been mistaken for AI twice since January. “I think of it as a compliment if they think I am professional enough to sound like a recording,” he said.
Seth, a US-based Concentrix worker who also declined to share his surname, said these conversations exacerbate his own frustration with rules that force him to respond to customers by reading from a script. Callers who question whether he’s real make him feel less human, he said. “I’m like, I don’t even know anymore.”
Seth said he is asked if he’s AI roughly once a week. In April, one customer quizzed him for around 20 minutes about whether he was a machine. The caller asked about his hobbies, about how he liked to go fishing when not at work, and what kind of fishing rod he used. “[It was as if she wanted] to see if I glitched,” he said. “At one point, I felt like she was an AI trying to learn how to be human.”
Scripted Responses
The call center industry materialized in the 1960s and ’70s, when companies started hiring agents to field customers’ queries. As the sector evolved, technology started to play a bigger role. In the ’90s, call centers began leveraging new tools to electronically monitor their workforce, enabling management to track agents’ breaks and time spent on calls, said Nell Geiser, director of research at the Communications Workers of America union. More sophisticated tracking meant agents were no longer able to choose their own words or even their tone of voice, Geiser said, because computer systems can automatically flag infractions — such as not sounding cheerful enough — to management. “Instead you just have to act like a robot and follow a script,” said Geiser.
Confusion between bots and humans also takes place over live chats, where customers converse with agents in writing, said Glo Anne Guevarra, global head of impact at outsourcing company Boldr, which employs more than 1,000 customer service agents worldwide, mostly in the Philippines. Especially when agents use scripted responses, as required by many clients, their messages may appear too polished or mechanical, she said. Guevarra says she is not aware of any Boldr agents that have been mistaken for bots.
California-based AI company Sanas has developed software that can make non-American accents sound more American in almost real-time. The tool is designed to make someone with a Filipino accent, for example, sound like they’ve spent a decade in the US, said Chief Executive Officer Sharath Narayana, who noted that the company wanted to soften a person’s accent, not wipe it out. “To preserve one’s voice identity is what makes it sound human,” he said.
Many call center agents turn to humor to convince customers they are real. Nikos Spyrelis, who works in Athens for French outsourcing giant Teleperformance, usually makes a joke when people ask if he’s an AI. “You can say, ‘The last time I checked, I wasn’t,’” said Spyrelis, who is also president of the trade union SETEP, which represents Teleperformance workers in Greece. Anish Mukker, chief AI officer at Teleperformance, said he’s aware of some agents being asked by callers if they’re human. “Customers likely aren’t aware that our clients must disclose when they are using an AI agent to provide support,” he said.
Human Exceptionalism
When Faith Lau, who works for an AI sales company from Canada, was mistaken for a bot for the first time in February, she instinctively responded by telling a joke — believing this to be the one thing a machine couldn’t do (although most large language models can share a joke when prompted).
“Why did the cannibal eat the trapeze artists?” she asked her skeptical caller. “Because he wanted a balanced diet.”
Historically, logic, language and reason have been used to justify human exceptionalism and to differentiate humans from other species, explains Benoît Monin, a psychologist at the Stanford Graduate School of Business. Now that AI is also capable of these traits, people are placing more value on human traits they assume it can’t quite copy, he said, such as sense of humor and spirituality.
Sarah, who works in benefits fraud-prevention for the US government — and asked to use a pseudonym for fear of being reprimanded for talking to the media — said she is mistaken for AI three or four times every month. The first time it happened to her, around two years ago, she was shocked. “For a day or two afterwards, I was kind of bummed,” she said. She’s even searched her medical history to figure out why it’s happening, worried that either her borderline personality disorder or her PTSD from serving in the military is making her sound robotic.
Like the others, Sarah tries to change her inflections and tone of voice to sound more human. But she’s also discovered another point of differentiation with the machines.
“Whenever I run into the AI, it just lets you talk, it doesn’t cut you off,” said Sarah, who is based in Texas. So when customers start to shout, she now tries to interrupt them. “I say: ‘Ma’am (or Sir). I am a real person. I’m sitting in an office in the southern US. I was born.’”