Comment Re:Good (Score 1) 127
Probably a lot of hacking tools will break.
Other than that, I think the Mozilla team is pretty competent and can implement fixes with proper testing.
That's a valid opinion for previous LLM models but more recent ones (especially Anthropic's new model) have larger context windows and better parsing of code which lets them find issues that aren't "simple toy examples with obvious specifications."
You don't need a formal spec to determine that a webpage shouldn't crash the web browser. There are certain vulnerabilities which are "obvious" to determine the program shouldn't be doing that once found.
Now for logical bugs (e.g. the program does a valid action but not the expected one) that aren't vulnerabilities you're right, there's not really any way to determine those without knowing the expected result.
I imagine they could go into lots of other fields with cheap space flight. They already have Starlink, which makes them their own customer for launches, and they could move into something like asteroid mining to justify more.
That's kind of a weird concept though - Can you remove a license from code by just passing it through an LLM?
Problem is when people started labelling anything made by AI as slop regardless of quality. Ironically even stuff made by humans that is too good is sometimes labeled as slop because people suspect AI was involved.
It's definitely an appropriate word for the large amount of AI-generated content being put out on the Internet, but it's also overused in a lot of cases.
You mean you must submit good code. Whether it's created by an AI model or a human, it needs to be reviewed and of good quality. Letting an AI submit piles of slop code is bad, but not all AI code is slop.
That's true, but I'm not sure how much that applies. There was undoubtedly human input, and code is a bit different from artwork. If you take something that doesn't have copyright applied to it and then modify it, it seems like the result would be able to be licensed.
That's no problem, I'll just have my firefox AI agent...wait a minute!
I think you're missing the point here.
If you google "hack the Mexican government" you're not going to get any meaningful results, but if you prompt an advanced LLM to do so then apparently it can deliver results. Claude is like a script kiddie on steroids in this case since it knows all the existing vulnerabilities and tools to exploit them which will do the trick for a lot of targets.
It's lowering the bar for hackers so they don't even have to know much about computer systems.
It is actually since this is discussing a different set of safety guidelines.
Good point! Developers never created insecure or inefficient code before AI so this could be a major problem
Regardless of training the AI on it, they obviously made a copy of the book when they digitized it.
That's half of what people use generative AIs for, though: making up stories or code or whatever.
I think that was just a comment on the first known usage of the word (mud) rather than the later pig-slop sense, which is pretty clearly listed in the definition.
Yes, that's the origin of the word. Maybe you can see the connection to the more recent variation of the definition?
Hard work never killed anybody, but why take a chance? -- Charlie McCarthy