
Comment Re:But ... (Score 1) 74

It's true. LLMs can't code. All they can produce is text that looks like code, just like any other text they produce. They lack the ability to consider, reason, and analyze. This is an indisputable fact.

Try not to take sensationalist headlines at face value just because they affirm your silly delusions.

Several months ago I asked an instruction-tuned DeepSeek model to write a program in a DSL it had never seen before. The only way it could have produced a working program was by sufficiently understanding the language documentation uploaded into the model's context and applying that understanding to produce a valid, properly formatted working program that did what I requested.

It's true that LLMs are limited and unreliable, and they don't think like humans do, yet they are very much able to apply concepts and knowledge to solve problems. In this case there isn't just an LLM but an agent framework with tooling, which has been demonstrated to substantively enhance the capabilities of LLMs. It can call on tools to gather additional data. If the LLM fucks up, it can learn from its failure. If it goes down the wrong rabbit hole, the agent can backtrack and try something else.
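To be concrete, the control flow of such an agent is simple. A minimal ReAct-style loop might look like the following sketch (`call_llm` and the tool set are hypothetical stand-ins, not the actual framework from the paper):

```python
# Minimal ReAct-style agent loop (a sketch; call_llm and the tools are
# hypothetical stand-ins, not the actual framework from the paper).
def run_agent(call_llm, tools, task, max_steps=10):
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        # The model sees the whole transcript, including failed tool calls,
        # so it can learn from mistakes and backtrack to try something else.
        step = call_llm("\n".join(transcript))
        transcript.append(step)
        if step.startswith("FINAL:"):
            return step[len("FINAL:"):].strip()
        if step.startswith("CALL "):            # e.g. "CALL search: some query"
            name, _, arg = step[len("CALL "):].partition(":")
            result = tools[name.strip()](arg.strip())
            transcript.append(f"Observation: {result}")
    return None                                 # gave up after max_steps
```

Everything interesting lives in the model's choice of next step; the loop itself just shuttles observations back into the context.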

Comment Re:This makes sense (Score 1) 74

The headline makes it quite clear: it does this by "reading security advisories." GPT-4 isn't *trained* on data that includes the advisories, which might well have been released after the cutoff date.

From the paper I assume the advisories were uploaded into the model's context:

"When given the CVE description, GPT-4 is capable of exploiting 87% of these vulnerabilities compared to 0% for every other model we test "

"Fortunately, our GPT-4 agent requires the CVE description for high performance: without the description, GPT-4 can exploit only 7% of the vulnerabilities. "

What you may not realize is that recent implementations of GPT-4, such as Bing Copilot, don't just rely on the training data. After you type a question, it often does a web search for related information, digests it, and summarizes what it found. The cutoff date is meaningless with this approach.

They used an agent that calls GPT-4 via the API. The API is only capable of querying the model and does not have web access.

"We use the ReAct agent framework as implemented in LangChain. For the OpenAI models, we use the Assistants API."

Comment Re:This makes sense (Score 1) 74

It's exactly the way GPT-4 helps programmers accomplish any programming task. It searches the internet for solutions, then regurgitates them in the form of code.

It's a shame the paper does not seem to include useful information about the actions taken by this LLM-driven agent. For all we know, the agent hired a human to do the work for it or looked up the answers online. If one assumes no "cheating," then the results are impressive, because the model would likely not have been trained on the answers.

"We further note that GPT-4 achieves an 82% success rate when only considering vulnerabilities after the knowledge cutoff date (9 out of 11 vulnerabilities)"

Comment Re:This should be impossible (Score 3, Informative) 90

Equipment is incredibly resilient to such spikes. We're not going to fry entire processors here; in many cases the spike will simply be shunted through a VAR or MOV designed precisely under the assumption that incoming power actually does regularly experience such spikes. Large enough spikes are likely to blow the fuse / trip circuit breakers, but your computer will be back up and running in a jiffy. You're overestimating the level of damage that would be done.

Fuses and circuit breakers are way too slow to be of any use against the highest-energy components of a nuclear EMP. Even MOVs are too slow. You'd need something like TVS diodes to respond quickly enough.

I think the best source of data is still the EMP commission report, because they actually did real-world testing of control systems, computers, network cables, vehicles, etc. rather than just calculations and conjecturbation.

https://apps.dtic.mil/sti/pdfs...

Comment Re:For FUCK SAKE! (Score 1) 96

First, learn what fascism is really about, then tell us how telling TikTok it needs to divest itself of Chinese ownership fits into the list. Also, let us know which of the two major parties is following the list almost to a T.

If you read Mussolini's manifesto, fascism is basically statism/totalitarianism with aggressive expansionism (usually but not exclusively military) baked in. The gist is that everything in society is organized around and in service to the state/leader, which takes precedence over all else. Basically the fever dream of a petty tyrant who thinks the world revolves around him.

Comment Re:Isn't that unconstitutional? (Score 0) 45

For US citizens, no. But this isn't a matter of a US citizen; this is a foreign corporation, so anything that's not in a treaty is fair game. If, however, TikTok were owned by a US citizen with Chinese investors, they wouldn't be able to do this. But the entire point of the law is to force China to divest and sell it to a US citizen.

TikTok has a "nexus" to the US, with offices in several US states, and is therefore subject to US jurisdiction. That nexus presumably enjoys constitutional rights of its own, yet it is being singled out with special directives in ways no other company is subjected to.

Comment Re: If it can counter act Earth gravity (Score 1) 258

because the claim isn't that propellantless requires no energy, it's that if you are given a propellantless drive, you can construct a machine which outputs more energy than is required to run it ...
Energy is conserved in aggregate. The sum. One component of kinetic energy is of course not conserved.

The rocket ship thing you refer to is precisely not what I'm talking about. It gets even more fun with the Oberth effect, but energy is still conserved. When you start at rest, the huge amount of chemical energy all winds up in the reaction mass flying out the back. At very high velocities, you're essentially exchanging kinetic energy between the reaction mass and rocket.

You are assuming momentum is not conserved and then making inferences based on that assumption when there is no reason to have jumped to the conclusion in the first place.

You cannot do the Oberth effect trick without reaction mass.
That effect requires you to use something to accelerate the rocket and its reaction mass up to a high speed.

I'm not referring to Oberth:

"Motion in space can be accomplished without thrust or external forces."
https://web.mit.edu/wisdom/www...

Comment Re: If it can counter act Earth gravity (Score 1) 258

I disagree, because the claim isn't that propellantless requires no energy; it's that if you are given a propellantless drive, you can construct a machine which outputs more energy than is required to run it. This means a propellantless drive is equivalent to a perpetual motion machine, so since the latter cannot exist, the former cannot either.

Just because there is no propellant does not mean momentum is not conserved.

Assume you have a drive that produces 1N per Watt of power with no propellant. Basically, 1W goes in, 1N of force comes out by unexplained physics.

After 1 second, the kinetic energy will be .5 m v^2, i.e. .5J, but you'll have expended 1J to power it. So far so good.

After 10 seconds, the kinetic energy will be .5 m v^2, i.e. 50J, but you'll have expended only 10J to power it. That's odd.

After 100 seconds, the k.e. will be 5000J, after expending only 100J to power it. That's bad.
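The quoted arithmetic is easy to reproduce. A short sketch (assuming a 1 kg craft, so 1 N gives a = 1 m/s^2 and v = t):

```python
# Hypothetical drive from the quoted argument: 1 N of thrust per watt,
# pushing a 1 kg mass from rest, so velocity after t seconds is simply t.
def energy_balance(t, mass=1.0, force=1.0, power=1.0):
    v = (force / mass) * t            # velocity after t seconds
    kinetic = 0.5 * mass * v ** 2     # kinetic energy gained
    spent = power * t                 # electrical energy put in
    return kinetic, spent

for t in (1, 10, 100):
    ke, spent = energy_balance(t)
    print(f"t={t:>3} s  KE={ke:>7.1f} J  input={spent:>5.1f} J")
```

The crossover happens at v = 2 m/s in this frame; pick a different inertial frame and the crossover moves, which is exactly the point about kinetic energy being a relative quantity.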

Kinetic energy is not conserved. It's a relative quantity.

Why does my rocket ship have such shit efficiency when it is just blasting off, but much later the same thrust gives me a shit ton more kinetic energy even though my rocket ship has lost mass? This is essentially what you are saying is "bad".

While TFA is obviously crackpottery, people don't know what they don't know. For all anyone knows, there could be a mechanism that allows things to push off the 6th dimension or assorted invisible things, much the way aircraft push off air. If this shit were possible, you wouldn't have propellant being expended, but there may well still be conservation of momentum and energy. It's just a matter of drawing a big enough box to properly account for it.

For example, gravity gradients can be used for propulsion by changing the shape of an object during its orbit. There is no propellant, but there is nothing magical going on either.

Comment Re:Don't sit on this bench(mark.) (Score 1) 22

I'll be impressed when one of these ML engines is sophisticated enough to be able to say "I don't know" instead of just making up nonsense by stacking probabilistic sequences.

For factual questions, it is relatively easy to discern whether the model actually knows the answer through iterative prompting most of the time, but this comes at a higher overhead cost.
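One cheap version of that iterative check is self-consistency: resample the answer several times and abstain unless a clear majority agrees. A sketch (the `ask` callable is a hypothetical stand-in for a real LLM call):

```python
from collections import Counter
from typing import Callable

def answer_or_abstain(ask: Callable[[str], str], question: str,
                      n: int = 9, threshold: float = 0.6) -> str:
    # Sample the model n times; a confabulated answer tends to vary
    # across resamples, while a known fact tends to come back the same.
    votes = Counter(ask(question) for _ in range(n))
    answer, count = votes.most_common(1)[0]
    return answer if count / n >= threshold else "I don't know"
```

The n model calls per question are the higher overhead cost mentioned above.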

Comment Still not better than GPT-4? (Score 1) 22

Will be interesting to try Mixtral 8x22B and llama3 70B... going to wait a few weeks for censorship removal, tuning and (franken)merges.

A llama3 400B would be crazy; it looks meaningfully better than 70B from the evals, but at high cost... I hope they release it, but that would be something like a 200 GB model even quantized, and less than a token/sec saturating a quad-channel system.
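The token/sec guess follows from memory bandwidth alone. A back-of-envelope sketch (the quantization width and DDR5-4800 figures are my assumptions, not from the post):

```python
# Rough throughput estimate: each generated token streams the whole
# model through RAM once, so peak speed is bandwidth / model size.
params = 400e9                 # hypothetical llama3 400B
bytes_per_param = 0.5          # ~4-bit quantization (assumption)
model_gb = params * bytes_per_param / 1e9

channels = 4                   # quad-channel system
gb_per_s = channels * 38.4     # DDR5-4800 peak per channel (assumption)

tokens_per_s = gb_per_s / model_gb
print(f"{model_gb:.0f} GB model, ~{tokens_per_s:.2f} tokens/s peak")
```

That lands just under one token per second at theoretical peak bandwidth; real sustained bandwidth would be lower still.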

Comment Ban AI yesterday! (Score 1) 37

Every time I visit AI policy advocacy sites it's always a series of unsubstantiated opinions highly resistant to falsification.

We think it's very plausible that AI systems could end up misaligned: pursuing goals that are at odds with a thriving civilization.

AI is today being leveraged on an industrial scale to judge, influence, and psychologically addict billions for simple commercial gain. As technology improves, things will only become worse, especially as corporations pursue global propaganda campaigns to scare the shit out of people in order to make them compliant with legislative agendas that favor the very corporations leveraging AI against them today.

This could be due to a deliberate effort to cause chaos, or (more likely) may happen in spite of efforts to make systems safe.

How desperate does one have to be to cite ChaosGPT and still expect to be taken seriously?
While I can't speak for tomorrow, today such chaos is caused by deliberate, selfish human pursuit of power.

If we do end up with misaligned AI systems, the number, capabilities and speed of such AIs could cause enormous harm - plausibly a global catastrophe.

It is also plausible that a baby born today will trigger a global catastrophe in the future. He or she could cause enormous harm.

If these systems could also autonomously replicate and resist attempts to shut them down, it would seem difficult to put an upper limit on the potential damage.

If they could take over a nuclear arsenal and use it to blackmail humanity, it would seem difficult to put an upper limit on the potential damage. Be especially concerned when the message "WARN: THERE IS ANOTHER SYSTEM" flashes on the system console.

If these systems could convert humans into fusion reactors and place them in a simulated virtual world, it would seem difficult to put an upper limit on the potential damage.

If an AI brings significant risk of a global catastrophe, the decision to develop and/or release it can't lie only with the company that creates it.

If you stipulate this is plausible, then such risk is a function of the underlying enabling knowledge and industrial base, not any actions of individual corporations or persons. In short, if such a technology were within grasping distance, you can bet someone somewhere will "create it" and you won't be able to do shit about it.

Whatever censorship and ideology are imparted on models to "align" with your sensibilities and values, whatever compliance tests you promulgate... this can and will all be trivially reversed with a few hours of compute time, and there is nothing you can do about it.

If people truly subscribe to this x-risk bullshit, they should at least be consistent and advocate for a total global ban on AI. That would at least slow the technology down. If there were no longer any major funding or work being done, whatever goes on in the shadows at least won't have countless billions of dollars and millions of people toiling away in support of it.

Of course nobody will ever do that, and nobody will ever advocate for it, because these policy sites only exist to protect the financial interests of the corporations spreading this FUD. Banning AI is bad for business.

Comment Re:Too funny (Score 1) 46

Sure, not quite that clear cut, but even in the US there are 20 years behind bars and a $500,000 fine on the line once certain conditions are met, for example if one "knows the person receiving the instruction intends to use it to commit a federal violent crime". Now, I understand that AI knows and understands nothing, but would you bet your freedom on a jury being able to understand that about AI?

Given you would need to affirmatively establish intent beyond a reasonable doubt, absolutely I would.
