
Comment Re:Why would they admit this? (Score 2) 39

Because he's trying to fix it. He's saying the lower-level officers who handle the FOI requests are being too cautious to the point of self-censoring for fear of triggering a backlash:

Duffy said: "I see these letters and these FOI requests and I've got great volumes of them, and I see local officers going through quite contorted processes not to answer when they know, often, the answer but it's embarrassing.

They do it because they are frightened. They are worried about revealing the true state of what's going on, they're worried about reaction from NGOs and others, and possibly from the government, about the facts of the situation. And they're often working at a local level but in a very nationally charged political environment, which is very difficult for them."

Reminiscent of occurrences in the US:

An earlier draft of the news release, written by researchers, was sanitized by Trump administration officials, who removed references to the dire effects of climate change after delaying its release for several months, according to three federal officials who saw it. The study, published in the journal Scientific Reports, showed that California, the world's fifth-largest economy, would face more than $100 billion in damages related to climate change and sea-level rise by the end of the century. It found that three to seven times more people and businesses than previously believed would be exposed to severe flooding.

"It's been made clear to us that we're not supposed to use climate change in press releases anymore. They will not be authorized," one federal researcher said, speaking anonymously for fear of reprisal.

https://www.science.org/conten...

Comment Re:Same thing happening in Australia (Score 1) 305

The panels just get a little warmer (but never warmer than they'd get mounted in the sun but not yet wired up, which they can withstand just fine)

Does being hotter make them less efficient when turned back on?

(Not that it's a big deal - I'm sure a passing cloud decreases output much more...)
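For scale, here's a rough sketch of the heat penalty. It assumes the commonly quoted power temperature coefficient of about -0.4%/°C for crystalline silicon; the exact figure is on each panel's datasheet and typically falls between -0.3 and -0.5%/°C:

```python
# Rough estimate of panel output vs. cell temperature, relative to the
# 25 degC standard test condition (STC). The -0.4%/degC coefficient is a
# typical value for crystalline silicon, not a spec for any specific panel.

def derated_power(p_stc_watts, cell_temp_c, coeff_per_c=-0.004):
    """Estimate panel output at a given cell temperature."""
    return p_stc_watts * (1.0 + coeff_per_c * (cell_temp_c - 25.0))

# A 400 W panel with its cells at 60 degC loses about 14%:
print(round(derated_power(400, 60), 1))  # -> 344.0
```

So yes, hotter panels produce measurably less - but as noted, a passing cloud costs far more.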

Comment Re:Eh (Score 3, Interesting) 100

It's an interesting shift in perspective.

If you searched for "cheese not sticking to pizza" and got this result on reddit, it wouldn't be surprising - after all, it's "relevant" (i.e. something somebody said about the given topic) and that's all we expect from a search engine.

But with a conversational agent, expectations shift - you expect the response to be "true" or "good," which is a much higher standard.

Comment Re:I'm glad someone is saying it. (Score 3, Insightful) 78

yet for the most part you get your Sam Altmans (Gavin Newsomes) of the world saying that we just need to throw more processing power at them and we'll get to AGI!

That's not correct. In practice, few AI systems are just a bare LLM - ChatGPT included.

An LLM in itself does not have persistent memory. But all you have to do is keep a transcript and re-submit it with each prompt and bam - now it has persistent memory.
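To make that concrete, here's a toy sketch of the transcript trick. Everything here is made up for illustration (the `fake_llm` stub stands in for a real chat-model API call; real chat endpoints accept a message list in roughly this shape):

```python
# "Memory via transcript": the model itself is stateless, but re-submitting
# the full conversation with every prompt makes earlier turns visible.

def fake_llm(messages):
    # Stand-in for a real model call; just proves the old context
    # is still visible on every turn.
    return "seen: " + str(len(messages)) + " messages"

transcript = []

def chat(user_text):
    transcript.append({"role": "user", "content": user_text})
    reply = fake_llm(transcript)  # the full history goes in every time
    transcript.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Ada.")
print(chat("What's my name?"))  # the model sees the earlier turn in context
```

The "memory" lives entirely in the client-side transcript, not in the model.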

An LLM is just a large "Language" model. But many deep nets today (including notably ChatGPT) are multi-modal, working not just with words but how they relate to sounds (speech) and imagery.

A passive LLM relies on the data it is fed - but situate it in an interactive environment, like an LLM conversing with people on a website or in an app - and now it can do active learning, ask questions, do experiments such as A/B testing of ads to see what works best.

An LLM doesn't interact with the physical world, but the same or similar network architecture can be trained to move and act within the physical world - a self-driving car for example is an instance of this.

An LLM by itself cannot plan hierarchically, but many familiar AI systems such as deep reinforcement learners can; they are just using the deep net to evaluate each option, with an algorithm on top to search possible future paths.
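A toy sketch of that "net evaluates, algorithm searches" split - the `value_net` here is a stand-in for a learned evaluator (a fixed function so the example runs), with a plain depth-limited search layered on top:

```python
# Depth-limited lookahead over a toy state space: the "net" scores states,
# the search algorithm on top explores possible future paths.

def value_net(state):
    # Stand-in for a trained evaluator; pretend the goal is reaching 10.
    return -abs(state - 10)

def moves(state):
    return [state + 1, state - 1, state * 2]

def search(state, depth):
    """Return the best (score, first_move) found by depth-limited lookahead."""
    if depth == 0:
        return value_net(state), None
    best = (float("-inf"), None)
    for m in moves(state):
        score, _ = search(m, depth - 1)
        if score > best[0]:
            best = (score, m)
    return best

score, first_move = search(3, depth=3)
print(first_move)  # -> 4 (the path 3 -> 4 -> 5 -> 10 reaches the goal)
```

Swap the toy evaluator for a deep net and the toy search for MCTS and you have the AlphaZero-style recipe.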

An LLM cannot do much logic on its own, including even sorting a long list of numbers, but it can call the appropriate tool to do so, or write code to do so, like a person does. In a rule-bound setting like chess, AIs can out-logic humans many times over.
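A sketch of what that tool dispatch looks like. The `TOOLS` table and the JSON call format are made up for illustration; real systems (function calling and the like) differ in detail but share the shape - the model emits a structured request, and the runtime executes real code:

```python
# Tool use: instead of sorting in-context, the model emits a tool call
# and the runtime runs actual code.

import json

TOOLS = {"sort_numbers": lambda xs: sorted(xs)}

def handle_model_output(text):
    """If the model emitted a tool call, run it; otherwise pass text through."""
    try:
        call = json.loads(text)
    except json.JSONDecodeError:
        return text
    fn = TOOLS.get(call.get("tool"))
    return fn(call["args"]) if fn else text

# The model, rather than sorting itself, emits:
model_output = '{"tool": "sort_numbers", "args": [5, 3, 9, 1]}'
print(handle_model_output(model_output))  # -> [1, 3, 5, 9]
```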

Does anything I said above mean we DO have all the ingredients for AGI? Not at all! But identifying what is missing and how to fix it is not as simple as listing all these limitations of a bare LLM. These observations are about as trivial as pointing out that a state machine with no memory (i.e. a Turing machine without a tape) cannot do general computation.

Comment Re:Not what the study says (Score 1) 287

For some strange reason, the inability to hear cars approaching has never caused massive casualties in the deaf community. How on earth do they do it?

Did you perhaps not check before assuming that?

RESULTS Rates of injury treatment in children with hearing loss were more than twice that of the control group (17.72 vs 8.58 per 100, respectively). The relative rate (RR) remained significantly higher (RR = 1.51, 95% confidence interval, 1.30–1.75) after adjusting for age, race, sex, and the number of hospital or emergency department encounters for treatment of non–injury-related conditions. Children with hearing loss had significantly higher treatment rates for every injury type, bodily location, and external cause, with a cell size sufficient for valid comparison.

https://www.ncbi.nlm.nih.gov/p...

Comment Re:Losing sight (Score 1) 25

By "focus on core business" I don't think they're talking about paying attention to things, but rather about ceasing to dump money into what are essentially a bunch of startups. As independent enterprises, most will soon fail. To survive, they would have to sell themselves to VCs all over again, which is now much harder with higher interest rates.

Comment Re:The other MeToo movement (Score 4, Interesting) 25

AI really does change the landscape for search - that is, for google. So often now you can get an answer more quickly by asking ChatGPT. And if you want it to base its answer on relevant web pages and link its response to where it found the answer, it'll do that, too.

The biggest hope for google is that ChatGPT's business model is unsustainable, precisely because it isn't thoroughly infused with ads and spam the way google is. But if competing means google would have to dial back monetization to re-prioritize the user experience, that would be an extremely painful proposition for what has become a profit-bloated company.

Comment Re:What's the reason? (Score 0) 54

Well that's good, I hope the mainstream pendulum will swing back to something closer to balance.

For myself, I'm looking forward to being able to converse with ChatGPT (or something) on long drives, and I don't envision it displacing or becoming my family, like "Her."
