
Comment Re:AI is just Wikipedia (Score 1) 20

I've probably done tens of thousands of legit, constructive edits, but even I couldn't resist the temptation to prank it at one point. The article was on the sugar apple (Annona squamosa), and at the time, there was a big long list of the names of the fruit in different languages. I wrote that in Icelandic, the fruit was called "Hvaðerþetta", which means "What's that?", as in, "I've never seen that fruit before in my life" ;) Though the list disappeared from Wikipedia many years ago (as it shouldn't have been there in the first place), even to this day, I find tons of pages listing that in all seriousness as the Icelandic name for the fruit.

Comment Re:Sure, let someone else be the gatekeeper (Score 2) 115

A friend's mother bought a laptop that had Windows preinstalled but crashed every time during setup. She asked me to take a look, and I couldn't get it past a certain point in the setup either. So I told her I could find a copy of Windows, wipe the laptop, and reinstall. She said she'd heard of Linux and wanted to try it anyway, so maybe I could put that on? I installed a copy of Ubuntu, and the only call I've gotten since was when she wanted to hook it up to the TV and Ubuntu defaulted to dual screen instead of mirroring.

My mother wanted to get a Chromebook. One of her friends had one, and she liked how simple and easy it was to use. She didn't like how complicated "Windows" was. Turns out she was mad at Office, not Windows itself. I suggested that instead of getting a new Chromebook she confine herself to using things in the browser, showed her how Google Docs works, and she's perfectly happy. So not actually using Linux, but might as well be.

Comment Nonsense (Score 1) 20

The author has no clue what they're talking about:

Meta said the 15 trillion tokens on which it's trained came from "publicly available sources." Which sources? Meta told The Verge that it didn't include Meta user data, but didn't give much more in the way of specifics. It did mention that it includes AI-generated data, or synthetic data: "we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3." There are plenty of known issues with synthetic or AI-created data, foremost of which is that it can exacerbate existing issues with AI, because it's liable to spit out a more concentrated version of any garbage it is ingesting.

1) *Quality classifiers* are not themselves training data. Think of it as a second program that you run on your training data before training your model, to look over the data and decide how useful it looks and thus how much to emphasize it in the training, or whether or not to just omit it.
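
To make that concrete, here's a minimal sketch of the idea - my own toy illustration, not Meta's actual pipeline. `quality_score` here is a stand-in heuristic; the real thing would be a trained classifier (e.g., one trained on Llama 2 quality judgments), but the role is the same: score each document before training, then filter or re-weight the corpus accordingly.

```python
def quality_score(doc: str) -> float:
    """Heuristic stand-in for a trained text-quality classifier."""
    if not doc.strip():
        return 0.0
    letter_frac = sum(c.isalpha() for c in doc) / len(doc)
    score = min(len(doc) / 500, 1.0)   # favor substantial documents
    score *= 0.5 + 0.5 * letter_frac   # penalize symbol/spam soup
    return score

corpus = [
    "The blue whale is the largest animal known to have ever existed.",
    "click here click here FREE $$$ click here",
    "",
]

# Hard-filter here for simplicity; a real pipeline might instead
# up- or down-weight samples rather than drop them outright.
training_data = [doc for doc in corpus if quality_score(doc) > 0.1]
print(training_data)  # only the whale sentence survives
```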

2) Synthetic training data *very much* can be helpful, in a number of different ways.

A) It can diversify existing data. E.g., instead of just a sentence "I was on vacation in Morocco and I got some hummus", maybe you generate different versions of the same sentence ("I was traveling in Rome and ordered some pasta", "I went on a trip to Germany and had some sausage", etc.), to deemphasize the specifics (Morocco, hummus, etc.) and focus on the generalization. One example can turn into millions, thus rendering rote memorization during training impossible. (A toy sketch of this follows these points.)

B) It allows for programmatic filtration stages. Let's say that you're training a model to extract quotes from text. You task an LLM with creating training examples for your quote-extracting LLM (synthetic data). But you don't just blindly trust the outputs - first you do a text match to see if what it quoted is actually in the text and whether it's word-for-word right. Maybe you do a fuzzy match, and if it just got a word or two off, you correct it to the exact match, or whatnot. But the key is: you can postprocess the outputs to do sanity checks on them, and since those programmatic steps are deterministic, you can guarantee that the training data meets certain characteristics. (A sketch of this check, too, follows these points.)

C) It allows for the discovery of further interrelationships. Indeed, this is a key thing that we as humans do - learning from things we've already learned by thinking about them iteratively. If a model learned "The blue whale is a mammal" and it learned "All mammals feed their young with milk", a synthetic generation might include "Blue whales are mammals, and like all mammals, feed their young with milk". The new model now directly learns that blue whales feed their young with milk, and might chain new deductions off *that*.

D) It's not only synthetic data that can contain errors, but non-synthetic data as well. The internet is awash in wrong things; a random thing on the internet is competing with a model that's been trained on reams of data and has high-quality / authoritative data boosted and garbage filtered out. Things being wrong in the training data is normal, expected, and fine, so long as the overall picture is accurate. If there's 1000 training samples that say that Mars is the fourth planet from the sun, and one that says that the fourth planet from the sun is Joseph Stalin, it's not going to decide that the fourth planet is Stalin - it's going to answer "Mars".

Indeed, the most common examples I see of "AI being wrong" that people share virally on the internet are actually RAG (Retrieval Augmented Generation), where it's tasked with basically googling things and then summing up the results - and the "wrong content" is actually things that humans wrote on the internet.

That's not to say that you should rely only on generated data when building a generalist model (it's fine for a specialist). There may be specific details that the generating model never learned, or got wrong, or new information that's been discovered since then; you always want an influx of fresh data.
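
To illustrate point A, a toy example of my own (the template and word lists are made up) showing how one seed sentence fans out into many variants:

```python
# Fan one seed sentence out into many variants so the model learns
# the pattern ("traveled somewhere, ate something"), not the specifics.
from itertools import product

TEMPLATE = "I was traveling in {place} and had some {food}."
PLACES = ["Morocco", "Rome", "Germany", "Japan", "Peru"]
FOODS = ["hummus", "pasta", "sausage", "ramen", "ceviche"]

variants = [TEMPLATE.format(place=p, food=f) for p, f in product(PLACES, FOODS)]
print(len(variants))  # 25 variants from one seed; scale the lists and it's millions
print(variants[0])    # I was traveling in Morocco and had some hummus.
```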
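
And for point B, a minimal sketch of that kind of deterministic sanity check - again my own illustration; the function and its 0.85 threshold are arbitrary choices, not anyone's production code:

```python
# Keep exact quotes, snap near-misses to the exact source span,
# and discard synthetic examples that don't match the source at all.
from difflib import SequenceMatcher

def verify_quote(source: str, quote: str, threshold: float = 0.85):
    """Return the exact matching span from `source`, or None to discard."""
    if quote in source:
        return quote  # word-for-word: keep the synthetic example as-is
    words = source.split()
    q_len = len(quote.split())
    best_span, best_ratio = None, 0.0
    # Slide a window of the quote's word length across the source
    # and keep the best-scoring candidate span.
    for i in range(len(words) - q_len + 1):
        span = " ".join(words[i:i + q_len])
        ratio = SequenceMatcher(None, span, quote).ratio()
        if ratio > best_ratio:
            best_span, best_ratio = span, ratio
    # Close enough: correct the quote to the exact source words.
    return best_span if best_ratio >= threshold else None

source = "The committee concluded that the results were inconclusive."
print(verify_quote(source, "the results were"))              # exact match: kept
print(verify_quote(source, "the results was inconclusive"))  # corrected to source
print(verify_quote(source, "the data was fabricated"))       # rejected: None
```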

3) You don't just randomly guess whether a given training methodology (such as synthetic data - which, I'll reiterate, Meta did not say they used, although they might have) is having a negative impact. Models are assessed with a whole slew of evaluation metrics that measure how well and how accurately they respond to different queries. And LLaMA 3 scores superbly relative to model size.

I'm not super-excited about LLaMA 3 simply because I hate the license - but there's zero disputing that it's an impressive series of models.

Comment Re: Cue all the people acting shocked about this.. (Score 1) 41

Under your (directly contradicting their words) theory, creative endeavour on the front end SHOULD count: if the person writes a veritable short story as the prompt, then that SHOULD count. It does not. Because according to the Copyright Office, while the user controls the general theme, they do not control the specific details.

"Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output."

"if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style.[29] But the technology will decide the rhyming pattern, the words in each line, and the structure of the text.[30]"

It is the fact that the user does not control the specific details, only the overall concept, that (according to them) makes it uncopyrightable.

Comment Re:Tell me again why we have these goofballs on to (Score 1) 149

People with unconventional personalities tend to surround themselves with similar people. Of my close friends, most are self-employed, one works about half the year and travels the rest, and one is actively trying to get fired so he can collect unemployment while working on starting his own thing.

Which is why research is important. I don't understand why someone would sit eight-plus hours a day at a job they actively dislike, but most people do. Read the comments on any of the return-to-work stories here on Slashdot; even many Slashdotters *want* to go back.

I think working from home during the pandemic gave a lot of people a taste of one of the ultimate perks. Problem is, it's a perk that cancels out most of the others - the ones current management sacrificed to get. And that big monitor in that sunny office is pretty pointless when none of your underlings are around to appreciate it.

Comment Re:Not a full test (Score 1) 69

Hey, I'm agreeing with you. The ideal number of g's is clearly more than you or I can take, but just about exactly what a fighter pilot can. I have no doubt most fighter pilots would be quite happy with that claim. And thrust vectoring totally has nothing to do with making maneuvers tighter, because you totally wouldn't want to do that. They're clearly ideal already.

Comment Re: Cue all the people acting shocked about this.. (Score 1) 41

Based on the Office's understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.[28] For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style.[29] But the technology will decide the rhyming pattern, the words in each line, and the structure of the text.[30]

Compare with my summary:

" their argument was that because the person doesn't control the exact details of the composition of the work"

I'll repeat: I accurately summed up their argument. You did not.

Comment Re:The future of war is video games? (Score 2) 69

Sure. We have a Mario Kart war and you lose. Now I'm in charge. Wait, you don't like that? You're going to send your AI kill bots after me instead? Okay, I sent my AI kill bots after you and I won again. Now I'm in charge? No? Okay, infantry invasion, and I win again. Wait, your populace doesn't like that, so they're fighting a guerilla war with kitchen knives and farm equipment?

Winning a war is about removing the other guy's ability to fight. Automated weapons just add additional layers until you find out how far down their "ability to fight" extends. They're great for democracies waging foreign wars though, because the ability of democracies to fight those is generally limited by money and coffins.

Comment Re:AI Incest (Score 2, Interesting) 41

Yes, "you've been told" that by people who have no clue what they're talking about. Meanwhile, models just keep getting better and better. AI images have been out for years now. There's tons on the net.

First off, old datasets don't just disappear. So the *very worst case* is that you just keep developing your new models on pre-AI datasets.

Secondly, there is human selection on things that get posted. If humans don't like the look of something, they don't post it. In many regards, an AI image is replacing what would have been a much crappier alternative choice.

Third, dataset gatherers don't just blindly use a dump of the internet. If there's a place that tends to be a source of crappy images, they'll just exclude or downrate it.

Fourth, images are scored with aesthetic gradients before they're used. That is, humans train models to assess how much they like images, and then those models look at all the images in the dataset and rate them. Once again, crappy images are excluded / downrated.

Fifth, trainers do comparative training, look at image loss rates, and automatically exclude problematic samples. For example, if you have a thousand images labeled "watermelon" but one is actually a zebra, the zebra will have an anomalous loss spike that warrants more attention (either from humans or in an automated manner; see the sketch after this list). Loss rates can also be compared between data +sources+ - whole websites or even whole datasets - and whatever is working best gets used.

Sixth, trainers also do direct blind human comparisons for evaluation.
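
To make the fifth point concrete, a toy sketch of my own (the loss numbers are fabricated; a real trainer would log per-sample losses during training): flag samples whose loss is an extreme outlier among same-labeled peers.

```python
# Flag samples whose training loss is an extreme outlier relative to
# other samples with the same label -- e.g., a zebra photo that snuck
# into a batch of images labeled "watermelon".
from statistics import mean, stdev

def flag_anomalies(losses: dict, z_cutoff: float = 3.0) -> list:
    """Return sample IDs whose loss z-score exceeds the cutoff."""
    values = list(losses.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [sid for sid, loss in losses.items() if (loss - mu) / sigma > z_cutoff]

# Per-sample losses for 1000 images labeled "watermelon" (made-up numbers).
watermelon = {f"img_{i:03d}": 0.90 + 0.01 * (i % 7) for i in range(999)}
watermelon["img_999"] = 9.5  # the mislabeled zebra

print(flag_anomalies(watermelon))  # ['img_999'] -- route to review or drop
```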

This notion that AIs are just going to get worse and worse because of training on AI images is just ignorant. And demonstrably false.

Comment Re:Next up: Swarms (Score 1) 69

This doesn't seem terribly likely. Missile release mechanisms aren't the expensive part. Fighters are bigger and more expensive than missiles because they have more range, they can land, and they have more and better sensors. It doesn't make sense to crash all that expensive hardware into the enemy, so you build a reusable carrier with all the expensive bits that launches cheap bombs attached to rockets.
