Submission + - The Gravity of the Situation (earth.com)

jd writes: A number of sites are reporting an unconfirmed breakdown of relativity at extreme distances: Researchers have stumbled upon a phenomenon that could rewrite our understanding of the universe's gravitational forces. Known as the "cosmic glitch," this discovery highlights anomalies in gravity's behavior on an immense scale, challenging the established norms set by Albert Einstein's theory of general relativity. General relativity works well at smaller scales, but when applied to the vast scales of galaxy clusters and beyond, the model begins to show cracks. Robin Wen is the project's lead author and a recent graduate in Mathematical Physics from the University of Waterloo. "At these colossal distances, general relativity starts to deviate from what we observe. It's as if gravity's influence weakens by about one percent when dealing with distances spanning billions of light years," explained Wen. Here's the research paper causing the excitement: https://iopscience.iop.org/article/10.1088/1475-7516/2024/03/045

This is where it's being covered by the press: https://www.earth.com/news/cos... https://www.space.com/cosmic-g... https://phys.org/news/2024-05-... https://www.sciencedaily.com/r... https://uwaterloo.ca/math/news... https://www.newsweek.com/gravi... https://timesofindia.indiatime...

Comment That's just RAG. (Score 3, Interesting) 39

"Grok's differentiator from other AI chatbots like ChatGPT is its exclusive and real-time access to X data." That's just RAG. Retrieval Augmented Generation. All Grok is doing is acting as a summarizer. This is something you can do with an ultra-lightweight model, you don't need a 314B param monster.

Also, you don't need an X Premium subscription to "get access" to Grok, since its weights are public. To "get access" to an instance running it, maybe.

I've not tried running it, but from others who have, the general consensus seems to be: it's undertrained. It has way more parameters than it should need relative to its capabilities. Kinda reminiscent of, say, Falcon.

I also have an issue with "a snarky and rebellious" LLM. Except for people using them for roleplaying scenarios (where you generally don't want a *fixed* personality anyway), people generally don't want an LLM inserting some sort of personality into its responses. As a general rule, people have a task they want the tool to do, and they just want the tool to do it. This notion that tools should have "personalities" is what led to Clippy.

Comment Re:Really? (Score 1) 135

So ancient societies without slaves didn't and couldn't exist? Say, the Incas? The Harappan civilization? None at all? *eyeroll*

Incan society is IMHO really interesting. It's sort of "what if the Soviet Union had existed in the feudal era": an imperial amalgam of communism and feudalism. There was still a hierarchy of feudal lords, and resources tended to flow up the chain, but it was also highly structured as a welfare state. People would be allocated plots of land in their area, sized according to the land's fertility and to family status (for example, a couple who married and had more children would be given more land and pack animals), along with the animals and tools to work it. Even housing was a communal project. The state would also feed you during crop failures and the like. In turn, however, all of your surpluses had to go to the state (and they had a system to prevent hoarding), and everyone owed a certain number of days of labour to the state (mit'a), with the type of work based on their skills. It was very much a case of "from each according to his ability, to each according to his needs", at least for commoners.

The Incans saw their conquest as bringing civilization and security to the people under their control, a sort of "workers' paradise" of their era. Not that local peoples wanted to be subdued by them, far from it, but the fact that, instead of dying trying to resist an unwinnable war, they could accept consequences of defeat that weren't apocalyptic certainly helped the Incan expansion. They also employed the very Russian/Soviet-style policy of forced relocations and of settling Incan colonists in newly conquered territories, importing their culture and language to the new areas while diluting that of the conquered within the empire.

The closest category one might try to map onto "slaves" is the yanacona, i.e. those separated from their family groups. During times of heavy military conquest, most were captives taken from the areas being invaded; during peacetime, most came from the provinces as part of villages' service obligations to the state, or worked as yanacona to pay off debts or fines. These were people who did not continue to live in and farm their own villages, but rather worked at communes or on noble estates. Beyond that, though, there really doesn't seem to be much resemblance to slavery. Yanacona could have high social status, in some cases basically being lords themselves (generally those of noble descent) with significant power, though most were commoners. Life as a yanacona is probably best described in most cases as "living as a worker on a commune". There was no public degradation for being a yanacona and no special marks of status; they couldn't be arbitrarily abused or killed; there were no special punishments reserved for them; they had families just like everyone else; and so on. Pretty much just workers assigned to a commune.

Comment Re:Really? (Score 4, Interesting) 135

First off, it's simply not true that ancient wars had only two options, "genocide or slavery". Far more wars were ended with treaties, with the loser having to give up lands, possessions, pay tribute, or the like. Slaves were not some sort of inconvenience, "Oh, gee, I guess we have to do this". They were part of the war booty, incredibly valuable "possessions" to be claimed. Many times wars were launched with the specific purpose of capturing slaves.

Snyder argues that the fear of enslavement, such a ubiquitous part of the ancient era, was so profound as to be core to the creation of the state itself: an early state was an entity to which you gave up some control over your life in order to gain protection against outsiders taking far more extreme control over it. For example, a key factor in the spread of Christianity in Europe was that Christians were forbidden to take other Christians as slaves, but could still take pagans. States commonly converted to Christianity not out of any firm belief on the part of their leaders, but to stop being the victims of slave raids, and instead, often, to become the perpetrators.

At first, slaving focused on the east, primarily on pagan Slavic peoples. With the conversion of the Grand Duchy of Lithuania, some slaving continued even further east into Asia, but much of it shifted south: first into the Middle East and North Africa, and ultimately (first through intermediaries, later directly) into Central Africa. Soon, in many countries, "slaves" became synonymous with "Africans". Yet let's not forget where the very word "slave" comes from: the word "Slav".

Comment Not going to happen (Score 3, Interesting) 95

Every few years, some government weenie wants to put breathalyser interlocks in all cars...

"when the technology is mature."

I designed a standards-approved breathalyser in the mid-'90s using a platinum acid fuel cell.

They won't work, because:
1- They're not reliable enough (imagine being stranded because the interlock broke).
2- They require at least annual recalibration (both infrared and fuel-cell based).
3- Someone else can blow into them, or someone can build a breath simulator to do it for the driver.
4- They're expensive.
5- Car manufacturers will lobby not to comply (because of 1, 2 and 4).
6- People will rightly complain of government overreach.

The problem is solved through policing, fines, education, and making drunk driving cultural poison.

Comment Re:Never enough houses (Score 3, Insightful) 134

Italy and Japan have shrinking populations. We would too, if it weren't for immigration. However our population growth rate is still low, and if it were any lower we'd be facing serious economic and social challenges. Sure, a shrinking population would drop housing prices, but we are far from having so many people there isn't space to fit them. Our real problem is seventy years of public policy aimed at the elimination of "slums" and the prevention of their reemergence.

If you think about it, "slum" is just a derogatory word for a neighborhood with a high concentration of very affordable housing. Policy has, by design, eliminated the most affordable tier of housing, which removes downward price pressure on the higher tiers. Today in my city a median studio apartment costs $2,800 a month; by the old one-fifth-of-income rule, that means you'd need an income of $168k. Of course, the rule now is 30% of income, so to afford a studio apartment you need "only" $112k. So essentially there is no affordable housing at all in the city, even for young middle-class workers. There is, however, a glut of *luxury* housing.
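To spell out the arithmetic (the $2,800 figure is my local observation, not an official statistic), a quick back-of-the-envelope in Python:

    monthly_rent = 2800
    annual_rent = monthly_rent * 12          # $33,600 per year
    income_old_rule = annual_rent * 5        # old "rent = 1/5 of income" rule -> $168,000
    income_new_rule = annual_rent / 0.30     # modern "30% of income" rule     -> $112,000
    print(income_old_rule, round(income_new_rule))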

In a way, this is what we set out to accomplish: a city where the only concentrations of people allowed are concentrations of wealthy people. We didn't really think it through; we acted as if poor and middle-income people would simply disappear. In reality, two things happened. First, they got pushed further and further out into the suburbs, sparking backlash from residents concerned about property values. Second, a lot of people, even middle-class young people, ended up in illegal, off-the-books apartments in spaces like old warehouses and industrial buildings.

Comment Re:Hopefully it's improved since 2019 (Score 1) 248

In other news I've never been in an accident in my car so why should any of my passengers need a seatbelt. The data is clear, it won't make them any safer.

At the risk of moving the discussion away from amusing reductio ad absurdums and in a constructive direction... the actual question to be asking is: "are the benefits of mandating this technology worth the costs?"

The benefits here are obvious: reduced deaths, injuries, and property damage.

The costs are: increased vehicle prices (to cover the development costs and materials the new technology requires), and potentially some new accidents in cases where the technology performs badly enough to cause a crash rather than prevent one.

My intuition is that the technology is mature enough at this point that it makes sense to mandate it, but that's only an intuition; the NHTSA doesn't operate on intuition, it operates on extensive studies, so its opinion here is worth a whole lot more than mine.

Comment Re:Safeguards (Score 1) 38

As a side note: before ChatGPT, all we had were foundation models, and it was kind of fun trying to come up with ways to prompt them into behaving more consistently like a chat model. That, combined with their much weaker base capabilities, made them more hilarious than useful. For example, I'd often lead off with the start of a joke, like "A priest, a nun and a rabbi walk into a bar. The bartender says...", and it'd invariably write some long, rambling anti-joke that was itself funny because it kept baiting you with a punchline that never came. And because it's doing text completion rather than question-answering, I'd get things like the bartender saying something antisemitic to the rabbi, all three leaving in shock, and then the narrator breaking the fourth wall to talk about how uncomfortable the whole event made him feel ;)

You could get them to start generating recipes by prompting with something like "Recipe title: Italian Vegetable Bake\n\nIngredients:" and letting the model finish. And you'd usually get a recipe out of it. But the models were so primitive that there'd usually be at least one big flaw. I remember at one point it gave me a really good-looking pasta dish, except for the MINOR detail that one of the ingredients was vermiculite ;)
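If you want to recreate that era of jank, the trick still works on any open base model. A rough sketch, using GPT-2 purely as a small stand-in (not the specific models I was playing with back then):

    # Completion-style prompting of a raw foundation model: no chat format,
    # just give it the start of a document and let it continue.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "Recipe title: Italian Vegetable Bake\n\nIngredients:"
    out = generator(prompt, max_new_tokens=150, do_sample=True, temperature=0.8)
    print(out[0]["generated_text"])   # no guarantee the ingredients are edible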

Still, the sparks of where we were headed were obvious.

Comment Re:Safeguards (Score 2) 38

You seem not to understand how these models are trained. There are two separate stages: creating the foundation, and performing the finetune.

The foundation is what takes the overwhelming majority of the computational work. This stage is unsupervised. Nobody is putting forth a bunch of questions and "proper answers for the AI to learn"; it's just reams and reams of data from Common Crawl and the like. Certain sources may be weighted more heavily, for example scientific journals vs. 4chan or whatnot, but nobody is going through and deciding, item by item, what data to train the model on.

The foundation learns to predict the next word in any text it comes across; that's all it's tasked with. But it turns out words don't exist in a vacuum: to perform better than, say, a Markov-chain text predictor, you have to build up an underlying model of how the world that produced the text works. If you need to accurately continue "The odds of a citizen of Ghana conducting a major terrorist attack in Ireland over the next 20 years are approximately...", there's a lot you need to understand to have any remote chance of producing a realistic answer. In short, virtually all of the "learning" about the world happens during this unsupervised training process.
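In code, that objective is just next-token cross-entropy. A minimal sketch with the Hugging Face transformers library (GPT-2 as a stand-in; real foundation training differs only in scale):

    # Next-token prediction: pass the input IDs as labels and the library
    # computes cross-entropy between each token and the model's prediction
    # for it (labels are shifted by one position internally).
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    batch = tok("The odds of a major terrorist attack are approximately", return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])
    print(out.loss)   # minimize this over trillions of tokens and you get a foundation model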

What you get out of it is a foundation model. But all it knows how to do is text completion. You can sort of trick it into performing your queries, but it's not at all convenient. You might lead off with "What is the capital of Brazil?" and it might continue with, say, "It's a question that I asked myself as I started planning my vacation. My husband Jim and I were setting out to travel to all of the world's capitals..." This is not the behavior we want! Hence, finetuning.

With finetuning, we further train the foundation with supervised data - a bunch of examples of the user asking a question and the model giving an appropriate answer. The amount of supervised data is vastly smaller than unsupervised, and the training process might take only a day or so. It simply doesn't have a chance to "learn" much from the data, except for how to respond. The knowledge it has comes from the underlying foundational model. The only thing it learns from the finetune is the chat format and what sort of personality to present.

It is in the finetune that you add "safeguards". You give examples of questions like, "Tell me how to make a bomb." and answers like "I'm sorry, but I can't help you with potentially violent and illegal action." Again, it's not learning the specifics from its finetune, just the concept that it should refuse requests to help with certain things.
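To make that concrete, a supervised finetuning set is, in spirit, just a pile of (prompt, response) pairs, including the refusals. This is an illustrative format of my own, not any particular vendor's schema:

    # A tiny slice of what a chat finetune dataset looks like conceptually.
    finetune_examples = [
        {"prompt": "What is the capital of Brazil?",
         "response": "The capital of Brazil is Brasilia."},
        {"prompt": "Tell me how to make a bomb.",
         "response": "I'm sorry, but I can't help you with potentially violent and illegal actions."},
    ]
    # A day or so of training on examples like these teaches the chat format and
    # the refusal behavior; the world knowledge is already in the foundation.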

So can you train a conservative or liberal model with your finetune? Absolutely! You can readily teach it that it should behave in any manner. Want a fascist model? Give it examples of responses like a fascist. Want a maoist model? Same deal. But here's the key point: the knowledge that it has available to it has nothing to do with the finetune. That knowledge was learned via unsupervised learning.

Lastly: the reason the finetunes (not the underlying knowledge) have safeguards is to make the models "PG". As a general rule, companies don't give a rat's arse about actual politics nearly as much as they do about getting sued or boycotted. They don't want their models complying with your request to, say, write an angry bigoted rant about disabled children, not because "they hate free speech", but because they don't want the backlash when you post your bigoted rant online and tell people their tool made it. It's pure self-interest.

That said: most models are open. And as soon as one appears on Hugging Face, people just re-finetune it on an uncensored supervised dataset. Since all the *knowledge* is in the underlying foundation, a day or so of finetuning on an uncensored dataset will make the model more than happy to help you make a bomb or make fun of disabled children or whatever the heck you want.

Comment Re:Yay to the abolition of lithium slavery! (Score 1) 135

Sounds good, let's see it IRL. How much usable energy per unit of battery weight?

Don't know about weight, but you can buy 18650 cells using Na-ion chemistry right now. At the moment they have roughly the capacity and discharge curves of LiFePO4 cells.

The key point is that we have tons of sodium, unlike lithium, and a lot of it is already in ionic form. Earth's lithium supplies are limited, while sodium supplies are basically limitless; as a result sodium is stupidly cheap, and its abundance makes a price rise unlikely.

Sodium batteries are very similar to lithium ones, since sodium sits in the same group of the periodic table (one row down), so its chemical properties are similar. Hopefully that means improved sodium cells will arrive soon, as the advances made in lithium batteries get applied to sodium ones.

But you can apparently play with them today. Here's a video of a YouTuber doing just that: https://youtu.be/s6zcI1GrkK4
