
ChatGPT could soon start telling us how to build new bioweapons. Credit: Cat, Shutterstock
Playing with fire: OpenAI admits its future AI could help build new bioweapons.
Brace yourselves, folks — the brains behind ChatGPT have just made a confession that’s part tech breakthrough, part science fiction nightmare. OpenAI, the AI powerhouse backed by Microsoft, has admitted its upcoming artificial intelligence models could help create new bioweapons. Yes, you read that right. The machines are getting clever enough to help cook up killer bugs.
In a blog post so casual it might as well have come with popcorn, OpenAI revealed it’s racing ahead with AI that could revolutionise biomedical research — and also, potentially, the next global pandemic.
“We feel a duty to walk the tightrope between enabling scientific advancement while maintaining the barrier to harmful information,” the company wrote.
Translation? They’re inventing digital Frankensteins and hoping the lab doors hold.
AI: from helping doctors to helping doomsday preppers?
OpenAI’s head of safety, Johannes Heidecke, told Axios the company doesn’t believe its current tech can invent new viruses from scratch just yet — but warned the next generation might help “highly skilled actors” replicate known bio-threats with terrifying ease.
“We’re not yet in the world where there’s like novel, completely unknown creation of biothreats that have not existed before,” Heidecke admitted. “We are more worried about replicating things that experts already are very familiar with.”
In other words, AI isn’t inventing zombie viruses yet — but it might soon become the world’s most helpful lab assistant for bioterrorists.
OpenAI’s bold plan
The company insists its approach is all about prevention. “We don’t think it’s acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards,” the blog post reads. But critics say that’s exactly what’s happening — build now, worry later.
To keep the bots from going rogue, Heidecke says their safety systems need to be almost perfect.
“This is not something where like 99 percent or even one in 100,000 performance is sufficient,” he warned.
Sounds reassuring… until you remember how often tech goes glitchy.
Biodefence or biotrap?
OpenAI says its models could be used in biodefence. But some experts fear these “defensive” tools could fall into the wrong hands — or be used offensively by the right ones. Just imagine what a government agency with a murky track record could do with AI that knows how to tweak pathogens.
And if history has taught us anything, it’s that the road to hell is paved with good scientific intentions.
Chatbot of doom? How one AI nearly helped build a bioweapon in 2023
As reported by Bloomberg, back in late 2023, a former UN weapons inspector, Rocco Casagrande, walked into a secure White House-adjacent building carrying a small black box. No, this wasn’t a spy movie. It was Washington, and what was inside that box left officials stunned.
The box held synthetic DNA — the kind that, assembled correctly, could mimic components of a deadly bioweapon. But it wasn’t the contents that shook people. It was how the ingredients had been chosen.
The inspector, working with AI safety firm Anthropic, had used its chatbot, Claude, to role-play a bioterrorist. The AI not only suggested which pathogens to synthesise, but how to deploy them for maximum damage. It even offered suggestions on where to buy the DNA — and how to avoid getting caught doing so.
AI chatbots and the bioweapon threat
The team spent over 150 hours probing the bot’s responses. The findings? It didn’t just answer questions — it brainstormed. And that, experts say, is what makes modern chatbots more dangerous than search engines. They’re creative.
“The AI offered ideas they hadn’t even thought to ask,” said Bloomberg journalist Riley Griffin, who broke the story.
The US government responded weeks later with an executive order demanding tighter oversight of AI and government-funded science. Then-Vice President Kamala Harris warned of “AI-formulated bioweapons” capable of endangering millions.
Should AI be regulated like a biohazard?
As regulators rush to catch up, scientists are urging caution. Over 170 researchers signed a letter promising to use AI responsibly, arguing its potential for medical breakthroughs outweighs the risks.
Still, Casagrande’s findings sparked real concern: AI doesn’t need a lab to do damage — just a laptop and a curious mind.
“The real fear isn’t just AI,” said Griffin. “It’s what happens when AI and synthetic biology collide.”
The biosecurity blind spot no one’s talking about
Smaller companies handling sensitive biological data weren’t part of the government briefings that followed Casagrande’s demonstration. That, experts warn, leaves a dangerous blind spot.
Anthropic says it’s patched the vulnerabilities. But the black box moment was a wake-up call: we’re entering an age where chatbots might not just help us cure disease — they might teach us how to spread it.
Not a doomsday scenario yet. But definitely a new kind of arms race.
This isn’t just a theoretical risk. If models like GPT-5 or beyond end up in the wrong hands, we could be looking at a digital Pandora’s box: instant access to step-by-step instructions for synthesising viruses, altering DNA, or bypassing lab security.
“Those barriers are not absolute,” OpenAI admits. Which, frankly, is the tech equivalent of saying, “The door’s locked — unless someone opens it.”
The verdict: smarter tech, scarier future?
OpenAI wants to save lives with science. But it’s also inching towards a future where anyone with a laptop and a grudge could play God. Is this innovation — or a slow-motion disaster in progress?
For now, we’re left with one burning question: If your AI might help someone make a bioweapon, should you really be building it at all?