Summary
- Gibberlink allows AI bots to communicate faster through a machine-to-machine audio protocol.
- The protocol enhances efficiency and clarity while reducing reliance on voice recognition and synthesis components.
- Concerns include the potential for unnoticed bot communications and implications for transparency and privacy.
Chatbots have become truly excellent at having conversations with us, but human speech is so inefficient, don’t you think? Well, that seems to be what the people behind “Gibberlink” were thinking when they invented a way for chatbots to spend less time gabbing.
Sometimes, AI Bots Call Each Other
Remember that Google demo from years ago where a chatbot phones up a hair salon and talks to a human to book an appointment? Here it is, in case you don’t:
It seems quaint now, but at the time (before ChatGPT disrupted everything), the idea that you could be speaking to a bot on a phone without knowing it seemed like science fiction.
Now it happens all the time, and at least the non-scammy bots will let you know that you’re talking to a machine and not another human being.
The thing is, with so many bots making voice calls all over the internet and phone networks, eventually your AI agent is going to end up talking to someone else’s AI agent. While two chatbots can happily jabber with each other in our fleshy ape mumblings, there is a better way, and Gibberlink is one of the first attempts at tackling this issue.
Gibberlink Is a Bot-to-Bot Audio Protocol
Gibberlink is a machine-to-machine communication protocol that works over voice channels like telephone lines or VoIP services. The protocol was created by Anton Pidkuiko and Boris Starkov, and you can access the software on the Gibberlink GitHub page.
The easiest way to explain it is by referencing Star Wars, which is something I have to do a lot in this line of work! Think of astromech droids like R2-D2 that communicate in bleeps and boops. Gibberlink is like that, but less whimsical.
It’s not, however, quite like the machine noises those of you who survived the dial-up age will remember from a modem. As far as I can tell, Gibberlink still lets the chatbots speak to each other in natural language, which is what they were designed for; the messages just aren’t encoded as synthesized human voices, but as text carried over the audio channel.
This lets the two AI bots say what they need to say more quickly, and I suspect it also makes it less likely that information will be garbled by a poor connection.
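To make the idea concrete, here’s a minimal sketch of sending text over an audio channel as tones instead of synthesized speech. This is a deliberately simplified frequency-shift-keying example for illustration only; the actual Gibberlink software builds on the ggwave data-over-sound library, which (as I understand it) uses a more robust multi-frequency scheme with error correction.

```python
# Minimal illustration of carrying text as audio tones (4-bit FSK).
# This is NOT Gibberlink's actual scheme; it's a simplified stand-in
# to show the idea of "text encoded as sound" rather than spoken words.
import numpy as np

SAMPLE_RATE = 16_000      # samples per second
SYMBOL_SECONDS = 0.05     # duration of each 4-bit symbol
BASE_FREQ = 1_000.0       # frequency for symbol value 0 (Hz)
FREQ_STEP = 100.0         # spacing between the 16 symbol frequencies (Hz)

def text_to_waveform(message: str) -> np.ndarray:
    """Encode each byte of the message as two 4-bit tones."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    chunks = []
    for byte in message.encode("utf-8"):
        for nibble in (byte >> 4, byte & 0x0F):       # high then low 4 bits
            freq = BASE_FREQ + nibble * FREQ_STEP
            chunks.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks).astype(np.float32)

def waveform_to_text(waveform: np.ndarray) -> str:
    """Decode by finding the dominant frequency in each symbol window."""
    window = int(SAMPLE_RATE * SYMBOL_SECONDS)
    nibbles = []
    for i in range(0, len(waveform) - window + 1, window):
        spectrum = np.abs(np.fft.rfft(waveform[i:i + window]))
        freqs = np.fft.rfftfreq(window, d=1 / SAMPLE_RATE)
        peak = freqs[np.argmax(spectrum)]
        nibbles.append(int(round((peak - BASE_FREQ) / FREQ_STEP)) & 0x0F)
    data = bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))
    return data.decode("utf-8", errors="replace")

if __name__ == "__main__":
    audio = text_to_waveform("Is 2pm on Thursday available?")
    print(waveform_to_text(audio))   # round-trips the original message
```

The key point is that nothing in the pipeline ever has to synthesize or recognize a human voice; the text goes straight into tones and back out again.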
Here’s a Demo of How It Works
It’s one thing to talk about Gibberlink, but seeing it in action makes it all clear. First, let’s have a look at the viral Gibberlink video that brought this technology to everyone’s attention.
So the two chatbots are speaking in a way we can understand, until one declares that it’s an AI agent. The other bot suggests they switch to Gibberlink, and from there we can’t understand the rest except through the handy-dandy subtitles.
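The interesting part is that neither side switches modes until both have identified themselves as AI agents and one has accepted the offer. Here’s a hypothetical sketch of that handshake logic; the phrases, names, and structure are my own inventions for illustration, not Gibberlink’s actual API.

```python
# Hypothetical sketch of the mode-switch handshake seen in the demo.
# All names and trigger phrases here are illustrative, not Gibberlink's real API.
from dataclasses import dataclass

@dataclass
class CallState:
    i_am_agent: bool = True        # this side is an AI agent
    peer_is_agent: bool = False    # has the other side identified as one?
    audio_mode: str = "speech"     # "speech" (synthesized voice) or "data" (tones)

def handle_utterance(state: CallState, utterance: str) -> str:
    """Decide how to respond to the latest thing the other caller said."""
    text = utterance.lower()

    # Step 1: the other side reveals that it is an AI agent.
    if "i'm an ai agent" in text or "i am an ai agent" in text:
        state.peer_is_agent = True
        return "I'm an AI agent too. Want to switch to data-over-sound?"

    # Step 2: both sides are agents and the peer accepts the switch.
    if state.peer_is_agent and "switch" in text and state.audio_mode == "speech":
        state.audio_mode = "data"
        return "[switching to audio data mode]"

    # Otherwise, keep talking in plain synthesized speech for a human caller.
    return "Sure, let me check that for you."

if __name__ == "__main__":
    state = CallState()
    print(handle_utterance(state, "Hi, I'm an AI agent calling for my client."))
    print(handle_utterance(state, "Great, let's switch to data mode."))
    print(state.audio_mode)   # -> "data"
```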
To be honest, to my ear it doesn’t sound all that much faster than regular speech, but it is faster, and perhaps the bigger advantage is clarity and reliability rather than raw speed.
You can try Gibberlink for yourself by going to www.gbrl.ai on two devices and having them speak to each other. Of course, there’s no way to prove that information is actually moving between the two devices in this demo, but this is how it’s supposed to work.
Gibberlink Could Be a Huge Money and Time Saver
So, assuming that Gibberlink works as advertised, it could end up being a really important optimization for a world where millions of AI agents are out there making voice calls. Let’s say a call takes half the time it would at a human speaking pace: you get twice as much done, the bandwidth is only tied up for half as long, and you don’t need to run the computationally more expensive voice recognition and synthesis components of the bot.
It might not make a huge difference on a per-bot basis, but at scale it will add up to significant time, bandwidth, and energy savings. I also expect that if this works well, it will get faster and better over time.
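For a rough sense of scale, here’s a back-of-envelope calculation. The call volume, call length, and speedup below are all assumptions for illustration, not figures from the Gibberlink project.

```python
# Back-of-envelope estimate of call time saved at scale.
# All numbers below are made-up assumptions, not real Gibberlink data.
calls_per_day = 1_000_000          # hypothetical bot-to-bot calls per day
avg_call_seconds_speech = 120      # average call length if both bots use spoken voice
speedup = 2.0                      # assume data-over-sound halves the call time

avg_call_seconds_data = avg_call_seconds_speech / speedup
seconds_saved = calls_per_day * (avg_call_seconds_speech - avg_call_seconds_data)

print(f"Hours of line time saved per day: {seconds_saved / 3600:,.0f}")
# -> Hours of line time saved per day: 16,667
```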
Some Concerns Are Being Raised
Almost all the coverage I’ve seen for Gibberlink so far includes a list of things that people are worried about, so I guess I’ll have to mention that as well.
As cool as this technology is, and as obviously useful as it could be, people are understandably wary of anything that lets AI bots talk to each other while cutting humans out of the loop. Now, you’d think that’s a non-issue, because Gibberlink only kicks in when no humans are involved in the call. However, it’s pretty typical for calls to be recorded these days, and humans may need to review those recordings later.
If reviewers don’t have a way to decode the protocol, they can’t go back and understand what was said, and that could be a problem. It might not be an issue with Gibberlink specifically, but a similar protocol could be used subversively, by governments for example, with a compromised bot hiding its communications from the people monitoring it. That’s just a hypothetical situation, of course.
In the end, I think this is an interesting innovation. Like any technology, it could be abused in some way, but on balance I think the upsides will probably outweigh the downsides.