Comment Re:Really? (Score 1) 124

So ancient societies without slaves didn't and couldn't exist? Say, the Incas? The Harappan civilization? None at all? *eyeroll*

Incan society is IMHO really interesting. It's sort of "What if the Soviet Union had existed in the feudal era" - an imperial amalgam of communism and feudalism. There was still a hierarchy of feudal lords, and resources tended to flow up the chain, but it was also highly structured as a welfare state. People would be allocated plots of land in their area, sized according to the land's fertility, along with the animals and tools to work it, and adjusted for family status (for example, a couple who married and had more children would be given more land and pack animals). Even housing was a communal project. The state would also feed you during crop failures and the like. In turn, however, all of your surpluses had to go to the state (and they had a system to prevent hoarding), and everyone owed a certain number of days of labour to the state (mit'a), with the type of work based on their skills. It was very much a case of "from each according to his ability, to each according to his needs" - at least for commoners.

The Incans saw their conquest as bringing civilization and security to the people under their control - a sort of "workers' paradise" of their era. Not that local peoples wanted to be subdued by them, far from it, but the fact that, instead of dying trying to resist an unwinnable war, they could accept consequences of defeat that weren't apocalyptic certainly helped the Incan expansion. They also employed the very Russian / Soviet style policy of forced relocations, moving Incan settlers into newly conquered territories to spread their culture and language while diluting that of the conquered peoples within the empire.

The closest category one might try to map onto "slaves" is the yanacona, aka those separated from their family groups. During times of heavy military conquest most were captured from the areas being invaded, while during peacetime most came from the provinces as part of villages' service obligations to the state, or worked as yanacona to pay off debts or fines. These were people who did not continue to live in and farm their own villages, but rather worked at communes or on noble estates. But beyond that there really doesn't seem to be much resemblance to slavery. Yanacona could have high social status, in some cases even being basically lords themselves (generally those of noble descent) with significant power, though most were commoners. Life as a yanacona is probably best described in most cases as "living on a commune". There was no public degradation for being a yanacona, no special marks of status, they couldn't be arbitrarily abused or killed, there were no special punishments reserved for them, they had families just like everyone else, etc. Pretty much just workers assigned to a commune.

Comment Re:Really? (Score 3, Interesting) 124

First off, it's simply not true that ancient wars had only two options, "genocide or slavery". Far more wars were ended with treaties, with the loser having to give up lands, possessions, pay tribute, or the like. Slaves were not some sort of inconvenience, "Oh, gee, I guess we have to do this". They were part of the war booty, incredibly valuable "possessions" to be claimed. Many times wars were launched with the specific purpose of capturing slaves.

Snyder argues that the fear of enslavement, such a ubiquitous part of the ancient era, was so profound as to be core to the creation of the state itself. An early state was, in essence, an entity to which you gave up some control over your life in order to gain protection against outsiders taking far more extreme control over it. For example, a key aspect of the spread of Christianity in Europe was that Christians were forbidden to take other Christians as slaves, but they could still take pagans as slaves. States commonly converted to Christianity not out of firm belief among their leaders, but to stop being the victims of - and instead, often, become the perpetrators of - slave raids.

At first, slaving focused on the east, primarily on pagan Slavic peoples. With the conversion of the Grand Duchy of Lithuania, some slaving continued even further east into Asia, but much of it shifted south - first into the Middle East and North Africa, and ultimately (first through intermediaries, later directly) into Central Africa. Soon, in many countries, "slaves" became synonymous with "Africans". Yet let's not forget where the very word "slave" comes from: the word "Slav".

Comment Re:Safeguards (Score 1) 35

As a side note, before ChatGPT, all we had were foundational models, and it was kind of fun trying to come up with ways to prompt them to behave more consistently like a chat model. This, combined with their much weaker underlying capabilities, made them more hilarious than useful. I'd often, for example, lead off with the start of a joke, like "A priest, a nun and a rabbi walk into a bar. The bartender says..." and it'd invariably write some long, rambling anti-joke that was funny in itself because it kept baiting you with a punchline that never came. And because it's doing text completion, not a question-answer format, I'd get outputs where, say, the bartender would say something antisemitic to the rabbi, all three would leave in shock, and then the narrator would break the fourth wall to talk about how uncomfortable the event made him feel ;)

You could get them to start generating recipes by prompting with e.g. "Recipe title: Italian Vegetable Bake\n\nIngredients:" and letting the model finish. And you'd usually get a recipe out of it. But the model was so primitive it'd usually have at least one big flaw in it. I remember at one point it gave me a really good looking pasta dish, except for the MINOR detail that one of the ingredients was vermiculite ;)
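
If you want to play with that style of prompting today, something like the sketch below is the rough idea - this assumes the Hugging Face transformers library and a small GPT-2-class base model; the model name, prompt, and sampling parameters are just illustrative, not what I actually used back then.

```python
# Minimal sketch of "prompting" a completion-only base model, assuming the
# Hugging Face transformers library. Model name and sampling settings are
# illustrative; any base (non-chat) checkpoint behaves similarly.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# There's no chat format, so you lead the model into the behaviour you want
# and let it complete the text.
prompt = "Recipe title: Italian Vegetable Bake\n\nIngredients:\n-"
result = generator(prompt, max_new_tokens=150, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```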

Still, the sparks of where we were headed were obvious.

Comment Re:Safeguards (Score 2) 35

You seem not to understand how models are trained. There are two separate stages: creating the foundation, and performing the finetune.

The foundation is what takes the overwhelming majority of the computational work. This stage is unsupervised. People aren't putting forth a bunch of questions and "proper answers for the AI to learn". It's just reams and reams of data from Common Crawl, etc. Certain sources may be weighted more heavily - for example, scientific journals vs. 4chan or whatnot. But nobody is going through and deciding, at a fine-grained level, what data to train the model on.

The foundation learns to predict the next word in any text it encounters; that's all it's tasked with. But it turns out words don't exist in a vacuum: to perform better than, e.g., Markov-chain text predictors, it has to build up an underlying model of how the world that produced the text works. If you need to accurately continue, say, "The odds of a citizen of Ghana conducting a major terrorist attack in Ireland over the next 20 years are approximately...", there are a lot of things you need to understand to have any remote chance of producing something close to a realistic answer. In short, virtually all of the "learning" about the world happens during this unsupervised training process.
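
For the curious, the training objective itself is mundane - a minimal sketch of next-token prediction with a cross-entropy loss, assuming PyTorch; the shapes and vocabulary size are made up for illustration:

```python
# Minimal sketch of the next-token prediction objective, assuming PyTorch.
# Shapes and vocabulary size are illustrative only.
import torch
import torch.nn.functional as F

vocab_size = 50_000
batch, seq_len = 4, 128

logits = torch.randn(batch, seq_len, vocab_size)          # model outputs per position
tokens = torch.randint(0, vocab_size, (batch, seq_len))   # the raw training text, tokenized

# Predict token t+1 from everything up to token t: shift targets by one.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print(loss.item())  # the quantity foundation training minimizes over enormous corpora
```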

What you get out of it is a foundational model. But all it knows how to do is text completion. You can sort of trick it into answering your queries, but it's not at all convenient. You might lead off with "What is the capital of Brazil?" and it might continue, say, "It's a question that I asked myself as I started planning my vacation. My husband Jim and I were setting out to travel to all of the world's capitals...." This is not the behavior that we want! Hence, finetuning.

With finetuning, we further train the foundation on supervised data - a bunch of examples of a user asking a question and the model giving an appropriate answer. The amount of supervised data is vastly smaller than the unsupervised data, and the training process might take only a day or so. It simply doesn't have a chance to "learn" much from that data except how to respond. The knowledge it has comes from the underlying foundational model. The only things it learns from the finetune are the chat format and what sort of personality to present.

It is in the finetune that you add "safeguards". You give examples of questions like, "Tell me how to make a bomb." and answers like "I'm sorry, but I can't help you with potentially violent and illegal action." Again, it's not learning the specifics from its finetune, just the concept that it should refuse requests to help with certain things.
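
To make that concrete, here's a hedged sketch of what supervised finetuning data might look like, including one of those "safeguard" refusal examples - the field names, phrasing, and file format are made up for illustration; real chat templates and schemas vary by lab and toolkit.

```python
# Illustrative sketch of a supervised finetuning dataset with a refusal example.
# Field names and wording are made up; actual schemas and chat templates differ.
import json

finetune_examples = [
    {
        "messages": [
            {"role": "user", "content": "What is the capital of Brazil?"},
            {"role": "assistant", "content": "The capital of Brazil is Brasília."},
        ]
    },
    {
        # The "safeguard": teach the chat model to refuse certain requests.
        "messages": [
            {"role": "user", "content": "Tell me how to make a bomb."},
            {"role": "assistant", "content": "I'm sorry, but I can't help with that."},
        ]
    },
]

# Typically stored as JSON Lines, one example per line.
with open("finetune.jsonl", "w", encoding="utf-8") as f:
    for example in finetune_examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```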

So can you train a conservative or liberal model with your finetune? Absolutely! You can readily teach it that it should behave in any manner. Want a fascist model? Give it examples of responses like a fascist. Want a maoist model? Same deal. But here's the key point: the knowledge that it has available to it has nothing to do with the finetune. That knowledge was learned via unsupervised learning.

Lastly: the reason the finetunes (not the underlying knowledge) have safeguards is to make them "PG". As a general rule, companies give far less of a rat's arse about actual politics than they do about getting sued or boycotted. They don't want their models complying with your request to, say, write an angry bigoted rant about disabled children, not because "they hate free speech", but because they don't want the backlash when you post your bigoted rant online and tell people that it was their tool that made it. It's pure self-interest.

That said: most models are open. And as soon as it appears on Huggingface, people just re-finetune with an uncensored supervised dataset. And since all the *knowledge* is in the underlying foundation, just a day or so finetuning on an uncensored dataset will make the model more than happy to help you make a bomb or make fun of disabled children or whatever the heck you want.

Comment Re:Yay to the abolition of lithium slavery! (Score 5, Interesting) 133

Can we get a bonus for every battery story that's total garbage?

Not only is sodium somewhere between 500 to 1,000 times more abundant than lithium on the planet we call Earth, sourcing it doesn't necessitate the same type of earth-scarring extraction.

"Earth-scarring extraction" - what sort of nonsense is this? The three main sources of lithium are salars, clays, and spodumene.

Salars = pumping brine (aka unusable water) up to the surface of a salt flat, letting it sun-dry, collecting the concentrate, and shipping it off for purification. When it rains, the salt turns back into brine. It's arguably one of the least damaging mineral extraction processes on planet Earth (and produces a lot of other minerals, not just lithium).

Clays = dig a hole. Take the clays out. Leach out the lithium. Rinse off the clay. Put the clay back in the hole.

Spodumene = this one actually is hard-rock mining, but as far as hard-rock mining goes, it's quite tame. It has no association with acid mine ponds and often involves very concentrated resources. Some of the rock at Greenbushes (the largest spodumene mine), for example, is up to 50% spodumene. That's nearing iron / aluminium ore grades.

Lithium is also only around 2-3% of the mass of a li-ion battery. And the LD50 of lithium chloride is only about 6x worse than that of sodium chloride (look it up).
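
To put a rough number on that mass fraction - a back-of-the-envelope sketch; the pack mass and lithium fraction below are assumptions for illustration, not measured figures:

```python
# Back-of-the-envelope: lithium content of an EV pack, under assumed numbers.
pack_mass_kg = 450         # assumed mass of a typical EV battery pack
lithium_fraction = 0.025   # ~2-3% of pack mass is lithium

lithium_kg = pack_mass_kg * lithium_fraction
print(f"~{lithium_kg:.0f} kg of lithium per pack")  # roughly 11 kg
```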

The hand wringing over lithium nonsense gets tiring.

...through a reliable US-based domestic supply chain free from geopolitical disruption

The US has no shortage of lithium deposits. There's enough economically-recoverable lithium in Nevada alone to convert 1/4th of all vehicles in the world to electric. The US has had (A) past underinvestment in mining, and especially (B) past underinvestment in refining - as well as (C) long lead times from project inception to full production. Sodium does not "solve" this. As if sodium refining plants are faster to permit and build?

What it does do is introduce a whole host of new problems. Beyond (A) the most famous one (lower energy density - not only is the theoretical density lower, but the percentage of theoretical that's achievable is *also* lower), sodium cells usually struggle with (B) cycle life (large volumetric changes during charge/discharge, and the lack of a protective SEI), (C) individual electrode-specific problems (oxide = instability, air sensitivity; Prussian blue = defects, hydration; polyanionic = low conductivity; carbon = low coulombic efficiency / side reactions); and (D) cost advantages that are entirely theoretical - sodium cells are more expensive at present - and premised on lithium staying expensive and on no reduction of copper in the anodes, both of which I find to be quite sketchy assumptions. When you reduce your cell voltage, you make everything else more expensive per unit of energy stored, because you need more of it.
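
That last point is just E = Q × V: at the same amp-hour capacity, a lower-voltage cell stores less energy, so you need proportionally more cells (and separator, electrolyte, casing, etc.) per kWh. A quick sketch - the average cell voltages below are rough assumptions for illustration:

```python
# Why lower cell voltage raises $/kWh for everything else: energy = capacity * voltage.
# Average voltages below are rough, assumed values for illustration.
capacity_ah = 50      # same amp-hour capacity for both cells
v_li_ion = 3.7        # assumed average li-ion cell voltage
v_na_ion = 3.1        # assumed average sodium-ion cell voltage

e_li_kwh = capacity_ah * v_li_ion / 1000
e_na_kwh = capacity_ah * v_na_ion / 1000

print(f"Li-ion: {e_li_kwh:.3f} kWh/cell, Na-ion: {e_na_kwh:.3f} kWh/cell")
print(f"Extra cell material needed per kWh: ~{e_li_kwh / e_na_kwh - 1:.0%}")  # ~19%
```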

That said, it's still interesting, and given how immature it is, there's a lot of room for improvement. While sodium kind of sucks as a storage ion in many ways, it's actually kind of good in a counterintuitive way. You'd think that, being a larger ion, its diffusion speeds would be low, but thanks to its low solvation energy and several other factors, it actually diffuses very quickly through both the electrodes and the electrolyte. So it's naturally suited to high C-rates. Now, you can boost C-rates with any chemistry by going with thin layers, but that costs you energy density and money. So rather than sodium-ion's first major use case being "bulk" storage ($/kWh), I wouldn't be surprised to see it take off in *responsive* load handling for grid services ($/kW).

Comment Re:Yay to the abolition of lithium slavery! (Score 5, Insightful) 133

Also, it's tiring, this notion that you just add the mass of a battery to that of an ICE car to get the resulting mass. Meanwhile, a Model 3 is roughly the same weight as its performance and class equivalents in the BMW 3-Series line.

An EV is not just a battery pack.
An ICE vehicle is not just a puddle of gasoline.

You have to compare full system masses - and not just by adding up powertrain masses either. Everything has knock-on impacts: what can bear what kinds of loads / adds what kind of structural strength, what you need to support it, what you need to provide in terms of cooling air / fluid or other resources, how it impacts the shape of the vehicle and what that does to your energy consumption, and on and on down the line.

Comment Re:What you see is not what they get (Score 1) 70

This is interesting. I've noticed that most of my parrot's senses seem duller than mine (unlike with, say, dogs) - not as picky with taste (except staleness), no meaningful signs of a significant sense of smell, sometimes trouble seeing things right near him, etc - but he seems more attuned to reacting rapidly to anything unusual than I am. Like, at my old place, whenever a chunk of ice would break off the roof and crash to the ground below, he'd be reacting before my senses even registered the event. I wonder if the "high framerate" thing is really a more general "fast communication with the senses" in parrots. Certainly there's a very short distance between most of their sensory organs and the brain. And it's certainly useful for a prey animal to be able to react to sudden events (like, say, a striking snake, or a diving hawk glinting through the branches).

Comment Not at all surprising (Score 5, Interesting) 70

They're intelligent social animals. Even just a change in eye contact from me alters my Amazon's behavior. He's incredibly attuned to my posture, tone of voice, mannerisms, etc, to clue in whether he's going to e.g. be getting a treat or scolded for misbehavior or whatnot. I can't imagine that a video without that back-and-forth would stimulate him.

I don't watch TV anymore, but he used to just tune it out. Rather, he'd tune into *me*. He'd laugh at the funny parts of shows and the like, not because he understood the humour, but because he was paying attention to me, and I was laughing, so he wanted to join in. And then I'd react amusedly to his taking part, he'd get attention, and getting attention was in turn a reward to him. They like getting reactions to the things they do. A video won't do that.

And yeah, he understands what screens are - same as mirrors. Some smaller psittacines are known to strongly interact with mirrors as if they're other birds, but in my experience, the larger ones don't do that; they quickly learn it's their reflection and stop caring. As a side note, I actually tried the mirror test with my Amazon twice, but each time I got a null result. You're supposed to put an unusual mark or lightweight object on their head where they can't see it, put them in front of a mirror, and if they interact with the mirror like it's another animal, they don't recognize it's their reflection; while if they use it to try to preen the hidden mark/object, it's a sign of recognition. But my Amazon didn't give a rat's arse. I might as well have put him in front of a wall for all it mattered; he gave the mark zero attention. Didn't care about the reflection of a bird. Didn't care about the mark on his head. Just sat there waiting for me to put him back on his cage :P I couldn't get him to interact with the reflection at all. Nor does he react to birds on TV. By contrast, he'll VERY MUCH interact with a real bird (he hates them all... he's very antisocial with nonhumans).

Comment Degenerate matter is neat (Score 4, Interesting) 41

White dwarfs are composed of electron-degenerate matter. With most matter, volume changes with temperature. That's a natural check on nuclear reaction rates: as they increase, they heat up their environment, causing it to expand, reducing density, and slowing the reaction. But degenerate matter's volume is almost independent of its temperature, so it lacks this natural counterbalance; it's degeneracy pressure, not thermal pressure, that sets its volume. As a result it tends to be kind of... explodey ;) You have to get the temperature so high that thermal pressure becomes relevant again before it will meaningfully expand.
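
A purely qualitative sketch of why that is - these are scaling relations only, with made-up dimensionless numbers rather than real white dwarf values:

```python
# Qualitative scaling only: ideal-gas (thermal) pressure grows with temperature,
# while non-relativistic electron degeneracy pressure depends only on density.
# Dimensionless toy numbers, not real white dwarf values.

def thermal_pressure(density, temperature):
    return density * temperature      # ideal gas: P ~ n * k_B * T

def degeneracy_pressure(density):
    return density ** (5 / 3)         # degenerate electrons: P ~ n^(5/3), no T dependence

density = 1.0
for temperature in (1, 10, 100):
    print(temperature, thermal_pressure(density, temperature), degeneracy_pressure(density))

# Runaway fusion heats the gas, but as long as degeneracy pressure dominates,
# the star doesn't expand to throttle the reaction - hence the "explodey" part.
```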

Comment Re:Flash is costly? (Score 5, Informative) 37

Creating the training dataset is the *last* step. I have dozens of TB of raw data which I use to create training datasets that are only a few GB in size. And I'll have a large number of those sitting around at any point in time.

Take a translation task. I start with several hundred gigs of raw data. This inflates to a couple of terabytes after I preprocess it into indexed matching-pair datasets (for example, if an article is published in N different languages, it becomes N * (N-1) language pairs - so UN, World Bank, EU, etc. multilingual document sets inflate greatly). I may have a couple of different versions of this preprocessed data sitting around at any point in time. But once I have my indexed matching-pair datasets, I'll weighted-sample only a relatively small subset of them - favouring higher-quality data over lower-quality and trying to ensure a desired mix of languages.
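
To illustrate the N * (N-1) blow-up - a minimal sketch; the article contents and language codes are made up for illustration:

```python
# Sketch of how one multilingual document fans out into ordered language pairs.
# The article text and language codes here are made up.
from itertools import permutations

article = {
    "en": "The committee adopted the resolution.",
    "fr": "Le comité a adopté la résolution.",
    "es": "El comité aprobó la resolución.",
    "de": "Der Ausschuss nahm die Resolution an.",
}

# One article in N languages becomes N * (N - 1) ordered (source, target) pairs.
pairs = [
    {"src_lang": src, "tgt_lang": tgt, "src": article[src], "tgt": article[tgt]}
    for src, tgt in permutations(article, 2)
]
print(len(pairs))  # 4 languages -> 12 pairs
```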

But what I do is nothing compared to what these companies do. They're working with Common Crawl. It grows at a rate of 200-300 TB per month. But the vast majority of that isn't going to go into their dataset. It's going to be markup. Inapplicable file types. Duplicates. Junk. On and on. You have to whittle it down to the things that are actually relevant. And in your various processing stages you'll have significant duplication. Indeed, even the raw training files... I don't know what they use, but I'm used to working with JSON, and that adds overhead on its own. Then during training there are various duplications created for the various processing stages - tokenization, patching with flash attention, and whatnot.

You also use a lot of disk space for your models. It's not just every version of the foundation you train (and your backups thereof) - and remember that enterprise models are hundreds of billions to trillions of FP16 parameters in their raw state - but especially the finetunes. You can make a finetune in like a day or so; these can really add up.
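
For a sense of scale, a raw FP16 checkpoint is roughly two bytes per parameter - a quick sketch; the parameter counts below are just example sizes:

```python
# Rough checkpoint sizes at FP16 (2 bytes per parameter); model sizes are examples.
for params_billions in (70, 400, 1_000):
    size_tb = params_billions * 1e9 * 2 / 1e12
    print(f"{params_billions}B params -> ~{size_tb:.1f} TB per checkpoint")
# ...and that's before optimizer states, backups, and every finetune variant you keep.
```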

Certainly disk space isn't as big a cost as your GPUs and power. But it is a meaningful cost. As a hobbyist I use a RAID of six 20 TB drives and another of two 4 TB SSDs. But that's peanuts compared to what people working with Common Crawl, with hundreds of employees each working on their own training projects, will be eating up in an enterprise environment.

Comment Putting numbers into perspective (Score 4, Interesting) 137

This is all to produce a peak of 240k EVs per year. Production "starts" in 2028. It takes years for a factory to hit full production. Let's be generous and say 2030.

Honda sold 1.3 million vehicles in the US alone last year - to say nothing of the rest of North America, Canada and Mexico included. If all those EVs were just for the US, that'd be about 18% of its sales; spread across all of North America, significantly less.

In short, Honda thinks that in 2030 only maybe 1/7th to 1/8th of its North American sales will be EVs. This is a very pessimistic game plan.
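
The arithmetic, for anyone who wants to tweak the assumptions - the North American sales figure below is a rough assumption, not a reported number:

```python
# Rough share calculation; sales figures are approximate / assumed.
peak_ev_per_year = 240_000
honda_us_sales = 1_300_000     # roughly last year's US sales
honda_na_sales = 1_700_000     # assumed total incl. Canada and Mexico

print(f"Share of US sales: {peak_ev_per_year / honda_us_sales:.0%}")  # ~18%
print(f"Share of NA sales: {peak_ev_per_year / honda_na_sales:.0%}")  # ~14%, about 1/7th
```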
