Today’s AI Could Make Pandemics 5 Times More Likely, Experts Predict

Recent developments in AI could mean that human-caused pandemics are five times more likely than they were just a year ago, according to a study of top experts’ predictions shared exclusively with TIME.

The data echoes concerns raised by AI companies OpenAI and Anthropic in recent months, both of which have warned that today’s AI tools are reaching the ability to meaningfully assist bad actors attempting to create bioweapons.

Read More: Exclusive: New Claude Model Triggers Bio-Risk Safeguards at Anthropic

It has long been possible for biologists to modify viruses using laboratory technology. The new development is the ability for chatbots—like ChatGPT or Claude—to give accurate troubleshooting advice to amateur biologists trying to create a deadly bioweapon in a lab. Safety experts have long viewed the difficulty of this troubleshooting process as a significant bottleneck on the ability of terrorist groups to create a bioweapon, says Seth Donoughe, a co-author of the study. Now, he says, thanks to AI, the expertise necessary to intentionally cause a new pandemic “could become accessible to many, many more people.”

Between December 2024 and February 2025, the Forecasting Research Institute asked 46 biosecurity experts and 22 “superforecasters” (individuals with a high success rate at predicting future events) to estimate the risk of a human-caused pandemic. The average survey respondent predicted the risk of that happening in any given year was 0.3%.

Crucially, the surveyors then asked another question: how much would that risk increase if AI tools could match the performance of a team of experts on a difficult virology troubleshooting test? If AI could do that, the average expert said, then the annual risk would jump to 1.5%—a fivefold increase.

What the forecasters didn’t know was that Donoughe, a research scientist at the pandemic prevention nonprofit SecureBio, was testing AI systems for that very capability. In April, Donoughe’s team revealed the results of those tests: today’s top AI systems can outperform PhD-level virologists at a difficult troubleshooting test.

Read More: Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears

In other words, AI can now do the very thing that forecasters warned would increase the risk of a human-caused pandemic fivefold. (The Forecasting Research Institute plans to re-survey the same experts in the future to track whether their risk estimates have risen as predicted, but said that follow-up research would take months to complete.)

To be sure, there are a couple of reasons to be skeptical of the results. Forecasting is an inexact science, and it is especially difficult to accurately predict the likelihood of very rare events. Forecasters in the study also underestimated the rate of AI progress. (For example, when asked, most said they did not expect AI to surpass human performance on the virology test until after 2030, while Donoughe’s testing showed that bar had already been met.) But even if the numbers themselves are taken with a pinch of salt, the authors of the paper argue, the results as a whole still point in an ominous direction. “It does seem that near-term AI capabilities could meaningfully increase the risk of a human-caused epidemic,” says Josh Rosenberg, CEO of the Forecasting Research Institute.

The study also identified ways of reducing the bioweapon risks posed by AI. Those mitigations broadly fell into two categories.

The first category is safeguards at the model level. In interviews, researchers welcomed efforts by companies like OpenAI and Anthropic to prevent their AIs from responding to prompts aimed at building a bioweapon. The paper also identifies restricting the proliferation of “open-weights” models and adding protections against jailbreaking as measures likely to reduce the risk of AI being used to start a pandemic.

The second category of safeguards involves imposing restrictions on companies that synthesize nucleic acids. Currently, it is possible to send one of these companies a genetic sequence and receive biological materials corresponding to that sequence, and the companies are not obliged by law to screen the sequences they receive before synthesizing them. That’s potentially dangerous because these synthesized genetic materials could be used to create mail-order pathogens. The authors of the paper recommend that synthesis companies screen incoming sequences for harmfulness and implement “know your customer” procedures.

Taken together, all these safeguards—if implemented—could bring the risk of an AI-enabled pandemic back down to 0.4%, the average forecaster said. (That is only slightly higher than the 0.3% baseline the forecasters estimated before they knew today’s AI could help create a bioweapon.)

“Generally, it seems like this is a new risk area worth paying attention to,” Rosenberg says. “But there are good policy responses to it.”
