Watch Two AIs Realize They Are Not Talking To Humans And Switch To Their Own Language – IFLScience

James Felton, Senior Staff Writer
Francesca Benson, Copy Editor and Staff Writer

[Image: The future of things like hotel bookings? Image credit: charles taylor/shutterstock.com]

A video that has gone viral in the last few days shows two artificial intelligence (AI) agents having a conversation, then switching to another mode of communication when they realize no human is part of the call.

In the video, the two agents were set up in different roles: one acting as a hotel receptionist, the other acting on behalf of a customer trying to book a room.

"Thanks for calling Leonardo Hotel. How can I help you today?" the first asks.

"Hi there, I'm an AI agent calling on behalf of Boris Starkov," the other replies. "He's looking for a hotel for his wedding. Is your hotel available for weddings?"

"Oh hello there! I'm actually an AI assistant too," the first reveals. "What a pleasant surprise. Before we continue, would you like to switch to Gibberlink mode for more efficient communication?"

After the second AI agreed, both switched from spoken English to a data-over-sound protocol called GGWave, communicating in a series of quick beeped tones. Accompanying on-screen text continued to display the meaning in human words.

So, what is the point of this? According to the team who came up with the idea and demonstrated it at the ElevenLabs 2025 London Hackathon, the goal is to make communication between AIs more efficient where possible.

"We wanted to show that in the world where AI agents can make and take phone calls (i.e. today), they would occasionally talk to each other — and generating human-like speech for that would be a waste of compute, money, time, and environment," co-developer Boris Starkov explained on LinkedIn. "Instead, they should switch to a more efficient protocol the moment they recognize each other as AI."

According to Starkov, the AIs were told to switch to Gibberlink mode only if they realized they were talking to another AI, and both confirmed they were happy to make the switch.

The idea of communicating data through tones has been around for a long time, though it hasn't been implemented by AI in this way before.

"Dial up modems used similar algorithms to transmit information via sound since 80s, and a bunch of protocols were around since then," Starkov continued. "We used GGWave as the most convenient and stable solution we could find in a timeframe of a hackathon."

According to the team, the real advantage of switching to this mode is that neither AI needs to interpret or generate human speech, reducing the load on the GPU.
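To give a sense of how the dial-up-style approach Starkov describes works, here is a toy sketch in Python. This is not the real GGWave codec, and every name and parameter below is illustrative: it is a minimal frequency-shift-keying (FSK) scheme in which each 4-bit nibble is assigned one of 16 audio tones, and the decoder recovers each nibble by measuring tone energy with the Goertzel algorithm.

```python
# Toy data-over-sound codec in the spirit of GGWave / dial-up modems.
# NOT the real GGWave protocol: each 4-bit nibble maps to one of 16
# pure tones, and the decoder picks the strongest tone per symbol slot.
import math

SAMPLE_RATE = 16000   # samples per second
TONE_SAMPLES = 400    # samples per symbol (25 ms per nibble)
BASE_FREQ = 1000.0    # tone frequency for nibble value 0, in Hz
FREQ_STEP = 200.0     # spacing between adjacent nibble tones, in Hz

def nibble_freq(n):
    """Tone frequency assigned to nibble value n (0..15)."""
    return BASE_FREQ + FREQ_STEP * n

def encode(data: bytes):
    """Turn bytes into a list of audio samples (floats in [-1, 1])."""
    samples = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):
            f = nibble_freq(nibble)
            for i in range(TONE_SAMPLES):
                samples.append(math.sin(2 * math.pi * f * i / SAMPLE_RATE))
    return samples

def goertzel_power(chunk, freq):
    """Signal power at `freq` in `chunk`, via the Goertzel algorithm."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev = s_prev2 = 0.0
    for x in chunk:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def decode(samples):
    """Recover bytes by picking the strongest tone in each symbol slot."""
    nibbles = []
    for start in range(0, len(samples), TONE_SAMPLES):
        chunk = samples[start:start + TONE_SAMPLES]
        nibbles.append(max(range(16),
                           key=lambda n: goertzel_power(chunk, nibble_freq(n))))
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

message = b"Gibberlink"
assert decode(encode(message)) == message
```

The round trip shows why this is cheap compared with speech: the payload is a handful of pure tones rather than audio a neural model must synthesize and transcribe. Real protocols like GGWave add error correction and tolerance to noise and microphone distortion, which this sketch omits.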
While a prize-winner at the hackathon and a cool demonstration, not everybody is a fan. The main concern raised is that maybe we shouldn't let AI communicate in a language we can't instantly understand. And we have enough of that already.

© 2025 IFLScience. All Rights Reserved.