Comment Re:Computers don't "feel" anything (Score 1) 40

It's different from humans in that human opinions, expertise, and intelligence are rooted in experience. Good or bad, and inconsistent as it is, that is far, far more stable than AI. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough and you'll see the faults in his reasoning, sure, but it stays just as good or bad as it was at the beginning.

Comment Re:Computers don't "feel" anything (Score 2) 40

Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychology terms, "confabulation": they patch up holes in their knowledge by making up plausible-sounding facts.

I have experimented with AI assistance for certain tasks, and find that generative AI absolutely passes the Turing test for short sessions -- if anything it's too good; too fast; too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.

This is because, under the hood, what the AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is that the context takes up a lot of expensive, high-performance video RAM, and every user only gets so much of it. When you run out of space, the older material drops out of the context. This is why coherence degrades the longer a session runs. You start with a nice empty context, you bring in some Internet search results and run them through the model, and it all makes sense. Once you start throwing out parts of the context, what's left turns into inconsistent mush.
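The eviction mechanism described above can be sketched in a few lines. This is a toy illustration, not how any particular vendor actually manages its context window; the turn texts and token counts are made up:

```python
# Toy sketch of context-window eviction: when the token budget is
# exceeded, the oldest turns are dropped, so later answers are produced
# without the information those turns contained.

def fit_context(turns, budget):
    """Keep the most recent turns whose combined token counts fit the budget.

    turns: list of (text, token_count) pairs, oldest first.
    Returns the surviving suffix of the conversation, oldest first.
    """
    kept = []
    total = 0
    # Walk backwards from the newest turn, keeping as much as fits.
    for text, tokens in reversed(turns):
        if total + tokens > budget:
            break
        kept.append((text, tokens))
        total += tokens
    kept.reverse()
    return kept

session = [
    ("system: you are a helpful assistant", 8),
    ("user: my project uses PostgreSQL 14", 9),
    ("assistant: understood, will target that database", 7),
    ("user: now refactor the query layer", 8),
]

# With a generous budget, everything survives.
assert len(fit_context(session, budget=40)) == 4

# With a tight budget, the early turn stating the database version is
# evicted -- the model would now answer without ever having "seen" it.
survivors = fit_context(session, budget=20)
assert all("PostgreSQL 14" not in text for text, _ in survivors)
```

The point of the sketch is the second assertion: nothing "forgets" in a human sense; the fact simply stops being part of the input, and the model confabulates around the hole.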

Comment Re:The price of wealth (Score 1) 80

Does a story like this make anybody else wonder if the lifestyle cost of wealth is too high?

The problem in this story is not the wealth, but its form. Cryptocurrency transactions are generally irreversible and not subject to the layers of process and protection that have been built up around large banking transactions. Keep your money in banks and brokerages like a sensible person and you don't have much risk.

Comment Re:Huh? Where? (Score 1) 59

No it's far from the most expensive option

Uh, yes, the 24-hour cancellation option is always the most expensive one for a given room (ignoring rate add-ons like included breakfast or bonus points). What other option would be more expensive? The one that gives the consumer the most flexibility is the one with the highest risk to the property, and that's priced in.

TFA postulates a scenario where the cancellations have disappeared.

Yeah, TFA overstated it. Though if you're not booking through the chain directly, in many cases it is hard to get a 24-hour cancellation policy. Many of the travel aggregator services hide them.

Comment Re:way more than some irrationality (Score 1) 55

The AI thing absolutely is a bubble, but it's not "sand-castle based or vapor based". It's very real. The problem is that the massive wave of investment is going to have to start generating returns within the next 3-4 years or else the financial deals that underpin it all will collapse. That doesn't mean the technology will disappear; it just means that the current investors will lose their shirts, other people will scoop up their assets at fire-sale prices, and those people will figure out how to deploy it effectively and create trillions in economic value.

The problem is that the investors - and lenders - who stand to lose their shirts include major international banks and pension funds, not just private shareholders. A recent J.P. Morgan analysis estimated that at least $650 billion in annual revenue will be required to deliver a mere 10% return on the projected AI spend. And banks like Deutsche Bank are already looking to hedge their lending exposure to AI-related projects.

If the AI bubble crashes hard, it could be a repeat of the 2007 global financial crisis.

Yep. That's all true even if AI is the most transformative technology ever invented, even if it generates trillions per year in economic output -- it might not do it soon enough to prevent another crash. You don't have to believe that AI is "sand-castle based or vapor based" (which it's really not) to see a big problem coming.

Comment Re:way more than some irrationality (Score 1) 55

Here is the thing, you are posting on Slashdot. Don't tell me you are not sharp enough to find a broker, and buy some long dated at the money PUTS either on the AI and AI adjacent firms or just the market over all with funds like SPY / QQQ.

The market can remain irrational longer than you can remain solvent.

The better strategy, IMO, is to keep your money safe and wait for the bubble to burst, then pile in for the recovery. Where to keep money safe is a good question, though. Just holding cash might be risky if inflation comes back, and the current administration seems anxious to pump up inflation.
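For anyone unfamiliar with the put idea the parent mentions, the payoff at expiry is simple arithmetic. The strike and premium below are made-up numbers for illustration, not a recommendation:

```python
# Payoff at expiry of a long put: you profit only if the underlying
# falls far enough below the strike to cover the premium you paid.

def put_pnl(strike, premium, price_at_expiry):
    """Profit/loss per share of one long put held to expiry."""
    intrinsic = max(strike - price_at_expiry, 0.0)
    return intrinsic - premium

# Hypothetical numbers: an at-the-money put on a $500 index fund share,
# with a $25 premium for a long-dated expiry.
assert put_pnl(500, 25, 400) == 75.0   # crash: put pays off
assert put_pnl(500, 25, 500) == -25.0  # flat market: premium lost
assert put_pnl(500, 25, 600) == -25.0  # rally: loss capped at premium
```

The middle case is exactly the Keynes problem: if the bubble inflates for longer than your expiry, you eat the premium over and over while waiting to be right.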

Comment Re:way more than some irrationality (Score 1) 55

It is quite clear to everybody it is a bubble and a lot of the AI stuff is sand-castle based or vapor based... At least those of us understanding what the current crop of AI does

There are two seriously bad assumptions underlying your analysis:

(1) What AI does right now is all it's going to do. Given the way capabilities have grown recently, this is a ludicrous assumption. Keep in mind that ChatGPT was launched November 30, 2022... it's less than three years old! And the reasoning models are barely a year old. There is no reason whatsoever to assume that this technology has peaked.

(2) We already know how to take full advantage of AI. Every time a new technology comes along it takes decades for us to fully understand how to effectively use it, and to deploy it everywhere it is useful. I'd say we still haven't fully incorporated the Internet into our society, and we've been working on that for over 30 years now. We're barely beginning to understand how to use what AI we've already got, and it'll take years, if not decades, for the full economic benefits to be achieved -- and in the meantime AI is probably going to continue improving.

The AI thing absolutely is a bubble, but it's not "sand-castle based or vapor based". It's very real. The problem is that the massive wave of investment is going to have to start generating returns within the next 3-4 years or else the financial deals that underpin it all will collapse. That doesn't mean the technology will disappear; it just means that the current investors will lose their shirts, other people will scoop up their assets at fire-sale prices, and those people will figure out how to deploy it effectively and create trillions in economic value.

Well, assuming AI doesn't just kill us all.

Submission + - NASA Is Tracking a Vast Anomaly Growing in Earth's Magnetic Field (sciencealert.com)

alternative_right writes: For years, NASA has monitored a strange anomaly in Earth's magnetic field: a giant region of lower magnetic intensity in the skies, stretching out between South America and southwest Africa.

This vast, developing phenomenon, called the South Atlantic Anomaly, has intrigued and concerned scientists for decades, and perhaps none more so than NASA researchers.

The space agency's satellites and spacecraft are particularly vulnerable to the weakened magnetic field within the anomaly, and the resulting exposure to charged particles from the Sun.

Submission + - Children With Autism, ADHD, And Anorexia Share a Common Microbe Imbalance (sciencealert.com)

alternative_right writes: A new, small study suggests children with autism, ADHD, and anorexia share similarly disrupted gut microbiomes, which, by some measures, have more in common with each other than with those of their healthy, neurotypical peers.

Led by researchers from Comenius University in Slovakia, the study used stool samples to assess the gut microbiomes of 117 children.

The exploratory study included 30 boys with autism spectrum disorder (ASD), 21 girls with anorexia nervosa, and 14 children with attention deficit hyperactivity disorder (ADHD). The remaining samples were from age- and sex-matched healthy and neurotypical children, providing a control group.

Submission + - AI-induced psychosis: The danger of humans and machines hallucinating together (phys.org)

alternative_right writes: These may be extreme cases, but clinicians are increasingly treating patients whose delusions appear amplified or co-created through prolonged chatbot interactions. Little wonder, when a recent report from ChatGPT-creator OpenAI revealed that many of us are turning to chatbots to think through problems, discuss our lives, plan futures and explore beliefs and feelings.

In these contexts, chatbots are no longer just information retrievers; they become our digital companions. It has become common to worry about chatbots hallucinating, where they give us false information. But as they become more central to our lives, there's clearly also growing potential for humans and chatbots to create hallucinations together.

Comment Re:Hardware will be fine (Score 3, Insightful) 55

This is a decent point, though one supposes the rush to build datacentres would slow further, so it won't all be gravy for the hardware companies either.

At some point, though, there has to be actual utility in the software running on that hardware, because a significant amount of what it's being used for now has zero, or even negative, utility. On the upside, once the realignment starts, some people may get access to compute power more cheaply than they could have previously.

It's like the railroads. Enormous fortunes were made and then lost as the railroad boom played out and then the bubble burst. When people were driving hard to push rails across the continental US, the business case for doing so wasn't there. Yes, linking the east and west coasts had some value, but not much, since there really wasn't that much on the west coast. And there was a whole lot of nothing in between. But it was obvious to everyone that when the railroads connected the coasts and opened access to the interior, there would be enormous value. What exactly, no one knew, in the sense that no one knew where all of the railroad-enabled interior cities would be constructed or what kinds of things they would do. But it was clear that there was value in access to all of that land and that someone would do something with it.

On the other hand, realizing that value didn't happen right away. It took decades for all of the land granted to the railroads to become really valuable, because it wasn't valuable until people came and built farms, dug mines, established ranches and generally built lives and industry. The return on that massive investment was there... but it came far too late for most of the people who made the investment. Lots of bankruptcies resulted, and others swooped in and snapped up the resources at bargain-basement prices, and they're the ones who became incredibly wealthy (well, they and the ones who supplied the steel, e.g. Carnegie).

It's been the same with pretty much every technology-driven bubble. Remember the telecom/dot-com bubble of the 90s, with all of the "dark fiber" that was laid everywhere? Bankruptcies and consolidations resulted, and all of that fiber got lit up and used. That bubble built the Internet, and huge fortunes were made as a result -- the top half-dozen most valuable companies on the planet are all a direct result.

OpenAI and Anthropic are betting that this time will be different, that the payoff will come fast enough to pay back the investment. Google is making the same bet to a degree, but Google has the scale, diversity and resources to weather a bust -- and might be well positioned to snap up the depreciated investments made by others. If history is any guide, OpenAI and Anthropic are wrong. But, then again, AI is fundamentally different from every other technology we've created.

Comment Re:Thanks for the research data (Score 1) 116

It also corresponds to a time when the US was a lot Whiter, but I'm pretty sure that's a "coincidence" you don't want to discuss.

Like most racists, you let overwhelming confirmation bias shove aside your critical thinking skills (assuming you have them). Otherwise, you'd have noticed that the US was also a lot whiter before the Pendleton Act, and that the post-Pendleton boom continued and even accelerated after the Civil Rights Acts and a large influx of non-white immigrants. We became the world's sole superpower and extended our economic, political and cultural dominance as a diverse, melting-pot society. The rise of China as an economic power (oh, wait... they're not white, how can they possibly do well?) has flummoxed us somewhat, but even with Trump beginning to throw away the apolitical civil service, our international partnerships and, well, the rule of law as a whole, we're still on top. But the decline is beginning, and it's not the brown-skinned immigrants taking us down; it's the white nationalist administration.

If you could discard your biases and examine the situation objectively and critically, you would notice that the timeline you're referring to completely and utterly refutes the conclusion that you're trying to draw.

Submission + - Owning a Cat Could Double Your Risk of Schizophrenia, Research Suggests (sciencealert.com) 1

schwit1 writes: Having a cat as a pet could potentially double a person's risk of schizophrenia-related conditions, according to an analysis of 17 studies.

Psychiatrist John McGrath and colleagues at the Queensland Centre for Mental Health Research in Australia looked at papers published over the last 44 years in 11 countries, including the US and the UK.

Their 2023 review found "a significant positive association between broadly defined cat ownership and an increased risk of schizophrenia-related disorders."

The suspected culprit is Toxoplasma gondii, a mostly harmless parasite that can be transmitted through undercooked meat or contaminated water. It can also be transmitted through an infected cat's feces.

Estimates suggest that T. gondii infects about 40 million people in the US, typically without any symptoms. Meanwhile, researchers keep finding more strange effects that infections may have.

Once inside our bodies, T. gondii can infiltrate the central nervous system and influence neurotransmitters. The parasite has been linked to personality changes, the emergence of psychotic symptoms, and some neurological disorders, including schizophrenia.

Comment Re:Huh? Where? (Score 2) 59

Literally every hotel I've booked in both Marriott or Hilton chains has a cancellation policy including night before. Literally. Every. Single. One. I only have about 500 nights in a hotel since 2018 including plenty in several states in America. Is this some hyper localised trend where the writer lives or something?

That's because you're taking the default, most expensive, booking option. On hilton.com, which I almost always use for business travel, click through the "more rates" link and you'll typically see rates for prepayment with no cancellation, rates with 2-3 day cancellation and rates with 24-hour cancellation. Also rates with free breakfast, rates with double points, etc.
