Comment Re:This should be impossible (Score 1) 87

Checking out in a power failure has only gotten harder over time. It's now well beyond just having someone who can do arithmetic. None of the prices are actually on the items, so without the scanner and the POS looking them up in a database, the cashier has no way to know the price other than having someone go look at the shelf (assuming they can FIND the correct price there). Once it's all totaled up (perhaps an hour or two later), there's no way to accept a card payment. If the outage is widespread, the customer can't get cash either, even if they have plenty in the bank. In some areas, even knowing the different tax rates on different classes of items would be an issue (school supplies vs. staple goods vs. 'luxury' goods).

Comment Re:This should be impossible (Score 1) 87

At least the equipment that would be fried on the local distribution side is easy to come by. The big transformers, on the other hand, would have to be rebuilt, since there are no spares, and there's currently nobody in the U.S. prepared to do such a rebuild.

If Congress is REALLY worried about any sort of strategic resiliency, that needs to be addressed. There should be spares and on-shore capability to manufacture and re-manufacture that equipment.

Comment Re: 20% survival is pretty good (Score 1) 57

If I understand your argument properly, you're suggesting that things will be OK with the reefs because "survival of the fittest" will produce a population of corals better adapted to warmer conditions.

Let me first point out that this isn't really an argument; it's a hypothesis. In fact, this is the very question that actual *reef scientists* are raising: the ability of reefs to survive as an ecosystem under survival pressure. There's no reason to believe reefs will survive just because fitter organisms will *tend* to reproduce more; populations perish all the time. When the population that perishes is a keystone species, its ecosystem collapses. There is no invisible hand here steering things to any preordained conclusion.

So arguing over terminology here is really just an attempt to distract (name calling even more so) from your weak position on whether reefs will survive or not.

However, returning to that irrelevant terminology argument: you are undoubtedly making an evolutionary argument. You may be thinking that natural selection won't produce a new taxonomic *species* for thousands of generations, and you'd be right. However, it will produce a new *clade*. When a better-adapted clade emerges due to survival pressures, that is evolution by natural selection. Whether we call that new clade a "species" is purely a human convention, adopted and managed to facilitate scientific communication.

You don't have to take my word for any of this. Put it to any working biologist you know.

Comment Re: If it can counter act Earth gravity (Score 1) 257

Now run the numbers for an electric motor where the rotor is a satellite with an electromagnet and the stator is the Earth.
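For a sense of scale, here's a minimal back-of-the-envelope sketch of that calculation; the dipole moment and field strength below are assumed, illustrative values, not figures from TFA:

```python
# Rough numbers for a satellite "rotor" coupling to Earth's magnetic field
# (the "stator"), magnetorquer-style. All values are illustrative assumptions.
import math

m = 100.0            # satellite coil's magnetic dipole moment, A*m^2 (assumed)
B = 30e-6            # Earth's field in low Earth orbit, roughly 30 microtesla
theta = math.pi / 2  # assume the dipole sits perpendicular to the field

torque = m * B * math.sin(theta)   # |tau| = |m x B| = m*B*sin(theta)
print(f"Torque on the satellite: {torque:.1e} N*m")   # about 3e-3 N*m
```

Tiny, but nonzero, and the momentum is exchanged with the Earth through the field rather than by throwing propellant overboard.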

Note carefully that I am not claiming the propellantless drive in TFA actually does anything but get warm (if that), just that a theoretical propellantless drive need not intrinsically violate thermodynamics.

Comment Re:AI is just Wikipedia (Score 1) 25

I've probably done tens of thousands of legit, constructive edits, but even I couldn't resist the temptation to prank it at one point. The article was on the sugar apple (Annona squamosa), and at the time, there was a big long list of the names of the fruit in different languages. I wrote that in Icelandic, the fruit was called "Hva[TH]er[TH]etta" (eth and thorn don't work on Slashdot), which means "What's that?", as in, "I've never seen that fruit before in my life" ;) Though the list disappeared from Wikipedia many years ago (as it shouldn't have been there in the first place), even to this day I find tons of pages listing that in all seriousness as the Icelandic name for the fruit.

Comment Nonsense (Score 1) 25

The author has no clue what they're talking about:

Meta said the 15 trillion tokens on which its trained came from "publicly available sources." Which sources? Meta told The Verge that it didn't include Meta user data, but didn't give much more in the way of specifics. It did mention that it includes AI-generated data, or synthetic data: "we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3." There are plenty of known issues with synthetic or AI-created data, foremost of which is that it can exacerbate existing issues with AI, because it's liable to spit out a more concentrated version of any garbage it is ingesting.

1) *Quality classifiers* are not themselves training data. Think of a classifier as a second program that you run over your training data before training your model, to look at each document and decide how useful it looks, and thus how much to emphasize it in training, or whether to omit it entirely.
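A minimal sketch of what that looks like in practice; the scoring heuristic and threshold here are hypothetical stand-ins, not anything Meta described:

```python
# Hypothetical illustration of a text-quality classifier used to filter or
# weight documents before pretraining. This is not Meta's actual pipeline.

def quality_score(document: str) -> float:
    """Stand-in for a trained quality classifier; returns a score in [0, 1]."""
    # A real classifier would be a trained model; this toy heuristic just
    # rewards longer, more substantial documents.
    return min(len(document.split()) / 100.0, 1.0)

def filter_corpus(documents, threshold=0.5):
    """Keep only the documents the classifier considers high quality."""
    return [doc for doc in documents if quality_score(doc) >= threshold]

corpus = ["BUY CHEAP PILLS NOW", "a long, carefully written article " * 20]
print(len(filter_corpus(corpus)))  # prints 1: only the substantial text survives
```

The synthetic data Meta mentioned trains the classifier, not the model itself; only the classifier's verdicts about which documents to keep or boost feed into the pretraining mix.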

2) Synthetic training data *very much* can be helpful, in a number of different ways.

A) It can diversify existing data. E.g., instead of just the sentence "I was on vacation in Morocco and I got some hummus", maybe you generate different versions of the same sentence ("I was traveling in Rome and ordered some pasta", "I went on a trip to Germany and had some sausage", etc.), to de-emphasize the specifics (Morocco, hummus, etc.) and focus on the generalization. One example can turn into millions, thus rendering rote memorization during training impossible.

B) It allows for programmatic filtration stages. Let's say that you're training a model to extract quotes from text. You task an LLM with creating training examples for your quote-extracting LLM (synthetic data). But you don't just blindly trust the outputs: first you do a text match to see whether what it quoted is actually in the text, word for word. Maybe you do a fuzzy match, and if it just got a word or two off, you correct it to the exact match, or whatnot. The key is that you can postprocess the outputs to sanity-check them, and since those programmatic steps are deterministic, you can guarantee that the training data meets certain characteristics (a minimal sketch of this kind of check appears at the end of this point).

C) It allows for the discovery of further interrelationships. Indeed, this is a key thing that we as humans do: learning from things we've already learned by thinking about them iteratively. If a model learned "The blue whale is a mammal" and it learned "All mammals feed their young with milk", a synthetic generation might include "Blue whales are mammals, and like all mammals, feed their young with milk". The new model now directly learns that blue whales feed their young with milk, and might chain new deductions off *that*.

D) It's not only synthetic data that can contain errors; non-synthetic data contains them as well. The internet is awash in wrong things, and any single wrong thing on the internet is competing with a model that's been trained on reams of data, with high-quality / authoritative data boosted and garbage filtered out. Things being wrong in the training data is normal, expected, and fine, so long as the overall picture is accurate. If there are 1000 training samples that say that Mars is the fourth planet from the sun, and one that says the fourth planet from the sun is Joseph Stalin, the model is not going to decide that the fourth planet is Stalin - it's going to answer "Mars".

Indeed, the most common examples of "AI being wrong" that I see people share virally on the internet are actually RAG (Retrieval-Augmented Generation) cases, where the model is tasked with basically googling things and then summing up the results - and the "wrong content" is actually something a human wrote on the internet.

That's not to say that you should rely only on generated data when building a generalist model (it's fine for a specialist). There may be specific details that the generating model never learned, or got wrong, or new information that's been discovered since; you always want an influx of fresh data.
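As promised under point B, here's a minimal sketch of that kind of deterministic post-check; the validation function and the example strings are hypothetical, and the generating model itself is left out entirely:

```python
# Hypothetical sketch of point B: validate synthetic quote-extraction examples
# with a deterministic post-check before they enter the training set.
import difflib

def validate_example(source_text: str, extracted_quote: str) -> str | None:
    """Return the quote (corrected if needed) when it matches the source, else None."""
    if extracted_quote in source_text:
        return extracted_quote                         # exact match: keep as-is
    # Fuzzy match: compare against every window of the same word count and
    # keep the best-scoring span if it is close enough.
    words = source_text.split()
    n = len(extracted_quote.split())
    best_span, best_ratio = None, 0.0
    for i in range(len(words) - n + 1):
        span = " ".join(words[i:i + n])
        ratio = difflib.SequenceMatcher(None, span, extracted_quote).ratio()
        if ratio > best_ratio:
            best_span, best_ratio = span, ratio
    return best_span if best_ratio >= 0.9 else None    # drop hopeless examples

source = "The committee said the plan was ambitious but achievable overall."
synthetic = "the plan was ambitious but achevable"   # generator dropped a letter
print(validate_example(source, synthetic))  # corrected to the exact source span
```

The same pattern works for any task whose outputs can be checked mechanically: the generator can be sloppy, because the deterministic check is the gatekeeper.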

3) You don't just randomly guess whether a given training methodology (such as synthetic data, which, I'll reiterate, Meta did not say it used for LLaMA 3 itself - although it might have) is having a negative impact. Models are run through a whole slew of evaluation benchmarks to assess how well and how accurately they respond to different kinds of queries. And LLaMA 3 scores superbly relative to its model size.
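As a toy illustration of what one such evaluation looks like (the questions and the ask_model() stub below are made-up placeholders, not any real benchmark or Meta's harness):

```python
# Toy illustration of benchmark-style evaluation: run a fixed question set
# through the model and score the answers. The items and ask_model() are
# hypothetical placeholders, not a real benchmark or API.

eval_set = [
    {"question": "What is the fourth planet from the sun?", "answer": "Mars"},
    {"question": "What is the chemical symbol for gold?", "answer": "Au"},
]

def ask_model(question: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    canned = {
        "What is the fourth planet from the sun?": "Mars",
        "What is the chemical symbol for gold?": "Ag",   # a deliberate mistake
    }
    return canned[question]

correct = sum(
    ask_model(item["question"]).strip().lower() == item["answer"].lower()
    for item in eval_set
)
print(f"Exact-match accuracy: {correct / len(eval_set):.0%}")   # 50% here
```

Real evaluation suites do this across thousands of items and many task types, which is how you actually detect whether a data choice hurt the model.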

I'm not super-excited about LLaMA 3 simply because I hate the license - but there's zero disputing that it's an impressive series of models.

Comment Re: Cue all the people acting shocked about this.. (Score 1) 41

Under your theory (which directly contradicts their words), creative endeavour on the front end SHOULD count. If the person writes a veritable short story as the prompt, then that SHOULD count. It does not, because according to the Copyright Office, while the user controls the general theme, they do not control the specific details.

"Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output."

"if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style.[29] But the technology will decide the rhyming pattern, the words in each line, and the structure of the text."

It is the fact that the user does not control the specific details, only the overall concept, that (according to them) makes it uncopyrightable.
