AI

OpenAI Acknowledges New Models Increase Risk of Misuse To Create Bioweapons

OpenAI's latest models have "meaningfully" increased the risk that AI will be misused to create biological weapons [non-paywalled link], the company has acknowledged. From a report: The San Francisco-based company announced its new models, known as o1, on Thursday, touting their new abilities to reason, solve hard maths problems and answer scientific research questions. OpenAI's system card, a tool to explain how the AI operates, said the new models had a "medium risk" for issues related to chemical, biological, radiological and nuclear (CBRN) weapons -- the highest risk that OpenAI has ever given for its models. The company said this meant the technology has "meaningfully improved" the ability of experts to create bioweapons. AI software with more advanced capabilities, such as the ability to perform step-by-step reasoning, poses an increased risk of misuse in the hands of bad actors, according to experts.

  • by cascadingstylesheet ( 140919 ) on Friday September 13, 2024 @03:11PM (#64786081) Journal
    So, what are you going to do? Ban knowledge? Ban computing?
    • by m00sh ( 2538182 )

      So, what are you going to do? Ban knowledge? Ban computing?

      Yes, that is the logical conclusion that will be reached in the future.

      When AI starts replacing humans en masse, the first thing they will do is restrict knowledge. The elites will have access to all of it, a small controlled group will maintain it for them, and the rest will be slowly deprived of it.

      One of the surest ways to maintain power is to restrict knowledge.

      With mass-media control of the general population, people will do anything if those in power can advertise and replay the message enough times.

      • as with my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

        More details:
        https://pdfernhout.net/recogni... [pdfernhout.net]
        "There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier

      • SMAC said it best:

        Beware of he who would deny you access to information, for in his heart he dreams himself your master.

    • Re:Er, ok (Score:5, Insightful)

      by serviscope_minor ( 664417 ) on Friday September 13, 2024 @04:01PM (#64786203) Journal

      You fell for it.

      Altman is very, very, very good at getting media coverage by making claims like this. The flavor of danger makes it extra reportable, and it just so happens that it keeps OpenAI in the news cycle about just how darned good OpenAI's models are.

      It's just clever marketing, that's all.

    • Yes. That is already the situation, and it has been going on for several decades. Before 2010, the Internet was a free flow of worldwide information. Since then, oligarchs, elites, and governments have found ways to block avenues of free speech for their own benefit. This time they will be ready to restrict AI so the ignorant masses don't start another Arab Spring. Censorship is always good for the people in power. Censorship is always bad for the people with no power.
  • by gweihir ( 88907 ) on Friday September 13, 2024 @03:19PM (#64786115)

    I mean, come on. "Reason", "solve hard maths problems" and "answer scientific research questions"? That is complete bullshit. Obviously, it cannot do any of those. I think they are operating on the principle here that if they repeat a lie often enough, many people will believe it.

    • by serviscope_minor ( 664417 ) on Friday September 13, 2024 @04:06PM (#64786223) Journal

      It's marketing more than a lie per se (but also a lie). He keeps making predictions about how his AI is so good it's DANGEROUS 11!!1one, which gets lots of hits in the news cycle and keeps his company's name known, along with how good their AI is.

      It's transparent bullshit, but not apparently transparent enough.

      • by gweihir ( 88907 )

        It's transparent bullshit, but not apparently transparent enough.

        Indeed. It works on far too many people. That does not say good things about these people, but even they cannot simply be replaced with artificial morons.

    • That is complete bullshit. Obviously, it cannot do any of those.

      So your argument is "nuh-uh."

      Apparently and obviously, you suck at reasoning.

  • If we restrict applications that can synthesize bioweapons, only the government and terrorists will have them...

  • We are entering a time where anyone can have an assistant who knows almost everything and has every skill, but has no wisdom, no ethics, and no free will. The vast majority of people will use this for good. The few who don't may well kill us all, long before we reach AI capable of doing it to us. Hopefully we can catch/impede/derail these attempts long enough to grow beyond it. If we actually attain AGI, maybe it will force us to play nice. Wouldn't that be a plot twist?
    • That's an interesting take on the Fermi Paradox. With technology as an infinite force multiplier, plus competition for limited resources at the core of evolution, once a civilization figures out technology, it shortly hits itself with the banhammer with infinite force. And there seems to be a strong pattern of defending being much harder than attacking (nukes, bioweapons, asteroid redirects, etc.), with no upper limit on the potential damage.

      • Entropy in a closed system is always increasing. You can get a temporary local decrease at the cost of a larger increase elsewhere, but it's a lot of work.

        It's easier to break things than to build things, and humans and our civilizations are very fragile. And unfortunately we're always learning how to build things that break things in better ways. No matter how good your chemistry set, you're not going to make a gas that heals vast areas of wounded people, but you can sure as hell poison everyone. You're not going to make a highl

  • Comment removed based on user account deletion
  • The more capable you make a broadly applicable tool, the better it is at any specific use case.

    Honed obsidian rock chips made awesome sharp edges. Then people got stabby with them.

  • The sorry thing about Altman's doomerism statements like this is that he is planting the seeds for tough new regulations against more open AI and startups. OpenAI is one of the least open frameworks out there. This paper ranks OpenAI at the absolute bottom across all openness metrics: https://dl.acm.org/doi/10.1145... [acm.org] Sam Altman cannot be trusted to build safe AI, although he may manage to convince a few well-funded legislators and regulators that he can. In the long run, the way towards building more trustabl
  • by jd ( 1658 )

    It cannot solve any maths problem, because it has no notion of meaning. It can only say which things tend to go together. Even when the answer is "correct", it is wrong, because it isn't the number at the end that matters in maths but the reasoning used to get there.

    (Sound reasoning will produce the correct answer; unsound reasoning, or, in AI's case, probable association, will produce a random answer that, because of the nature of randomness, will occasionally match the right answer by chance alone.)
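
    The chance-match point is easy to make concrete. Below is a minimal sketch (a made-up toy, with hypothetical problem ranges, not anything from TFA): on two-digit addition, a guesser that picks uniformly at random over the plausible answer range is right about 1 time in 199, while actually computing the sum is right every time.

    ```python
    import random

    random.seed(0)

    N = 100_000      # number of toy addition problems
    LO, HI = 0, 99   # operand range; true answers fall in [0, 198]

    chance_hits = 0
    for _ in range(N):
        a, b = random.randint(LO, HI), random.randint(LO, HI)
        truth = a + b                            # sound reasoning: compute it
        guess = random.randint(2 * LO, 2 * HI)   # unsound: pick at random
        if guess == truth:
            chance_hits += 1

    # The guess is independent of the truth and uniform over all 199
    # possible answers, so it matches with probability exactly 1/199.
    print(f"chance-match rate: {chance_hits / N:.3%} (expected ~{1 / 199:.3%})")
    ```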

    • It cannot solve any maths problem, because it has no notion of meaning. It can only say which things tend to go together.

      Even when the answer is "correct", it is wrong, because it isn't the number at the end that matters in maths but the reasoning used to get there.

      Much of o1's gains are CoT-related. In other words, the model's performance is linked to showing its work.

      (Sound reasoning will produce the correct answer; unsound reasoning, or, in AI's case, probable association, will produce a random answer that, because of the nature of randomness, will occasionally match the right answer by chance alone.)

      Seems many have the mistaken impression LLMs are akin to old-school word n-grams.
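
      For anyone who does picture old-school n-grams, here is a minimal sketch of the difference (illustrative only; the toy corpus and the prompt are invented, not OpenAI's actual setup): a bigram model picks the next word from a bare co-occurrence table, while chain-of-thought (CoT) is a prompting pattern that has the model emit intermediate steps, i.e. show its work, before the final answer.

      ```python
      from collections import Counter, defaultdict

      # Old-school word bigram: the whole "model" is a frequency table of
      # which token follows which. No intermediate steps, no working memory.
      corpus = "the cat sat on the mat and the cat ate".split()
      table = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          table[prev][nxt] += 1

      def bigram_next(word: str) -> str:
          """Most frequent follower of `word` in the toy corpus."""
          return table[word].most_common(1)[0][0]

      print(bigram_next("the"))  # -> 'cat': a pure co-occurrence lookup

      # Chain-of-thought, by contrast, elicits shown work. This is a
      # hypothetical prompt/answer pair, not an actual OpenAI transcript.
      cot = (
          "Q: A train travels for 2 hours at 60 km/h. How far does it go?\n"
          "Think step by step before answering.\n"
          "A: Step 1: time elapsed is 2 hours.\n"
          "   Step 2: distance = speed * time = 60 * 2 = 120.\n"
          "   Answer: 120 km."
      )
      print(cot)
      ```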
