But what I do think is that acquiring particular reagents, and trying to develop pathogens or chemical compositions that cause harm, would be difficult - even a multi-step process is likely monitored.
What I do note is AI people taking this stance - we see many articles about "Godfather of AI Geoffrey Hinton" (it always says godfather, whatever that means) and his fears. But Hinton has never really articulated anything beyond "I was important at the nascent stage of an industry," and his fears stem from that. Perhaps Hinton is a latter-day Faraday, but equally perhaps not.
Perhaps they are saying these things to get out of Dodge in the event AI leads to a Nuremberg Trials kind of situation - but that is unlikely, and hilariously arrogant coming from the creators of tools that can, and on multiple occasions have, hallucinated the wrong answer.
The brutal truth is that LLMs are cool and intriguing, but they aren't "agentic capable". They might be able to do some basic CS work in the next 10 years - the blocker there isn't the tech, it's the customer - and if the tool can't grapple with that, then the tool isn't good enough.