
Comment Re:Not sustainable (Score 2) 225

More realistically, the asymptote is what the market will bear. Vendors will stop offering CC payment options at a certain point. The extra sales that it enables versus the hit to margins might make 10% of sale price bearable.

What do you expect of capitalism? How will their investors get any return on today's investment if revenue stays flat in constant-currency terms? Rent-extracting companies like Visa have few other ways of increasing returns. This is no surprise at all.

Comment Why should it not? (Score 0, Flamebait) 413

Is it so hard to believe that two parties have divergent epistemologies, with different levels of agreement with reality?

One is, loosely, 'defer to the science' while the other is 'hew to the long-held truths of our tradition'. Generalities of course, but that is all we need to talk about in this case.

That said, ChatGPT is probably also somewhat more politically biased than raw reality would encourage, due to the RLHF component, which pushes its responses toward 'good' results as judged by human raters, a population whose distribution I have to guess leans liberal.
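To make the RLHF point concrete, here is a toy sketch (my own illustration, not anything from OpenAI) of the standard reward-model objective: human raters pick a preferred response, and the reward model is trained to score the chosen one above the rejected one. Whatever leanings the rater pool has get baked into that reward signal.

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected).
    Small when the reward model agrees with the human raters' pick."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

print(preference_loss(2.0, 0.5))  # model agrees with raters: low loss
print(preference_loss(0.5, 2.0))  # model disagrees with raters: high loss
```

Training on this loss makes the model systematically prefer whatever the rater population preferred, which is exactly where the bias enters.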

Comment Total misrepresentation (Score 5, Insightful) 152

This is CNBC's headline:

80% of bosses say they REGRET earlier return-to-office plans

The report they are quoting from says instead

80% of executives say they would have approached their company’s return-to-office strategy differently if they had access to workplace data to inform their decision-making.

What the fuck? Of course they'd approach things differently if they had prescience... that doesn't mean they regret it one bit.

80% would have approached things differently? What is the threshold for different? Some may regret, but some may well have decided that they would have RTO'd 6 months earlier if they had access to future workplace data. Who knows?

There is not one instance of the word 'regret' in the Envoy report. Nor anything else about 80%. Nor anything about return to office. Useless clickbait which should be well below the standards of CNBC.

Comment Re:No (Score 1) 107

You think something that we already cannot understand is going to transform our understanding of reality?

Yes, why not? Technology has rarely been fully understood before its application. Do you think primitive man needed to know that electrons dropping down energy levels in an exothermic reaction emit photons, or do you think they just roasted their buffalo?

Comment Re:No. For a simple reason. (Score 1, Insightful) 107

The semantics you may have around 'understanding' are not particularly relevant. It's not a word that lends itself to argument because it's not well defined, and you haven't defined it yourself.
Compression is the heart of intelligence and LLMs are scalable compression machines. That's really all you need to know.
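A quick demo of the compression point (my own toy, not from any paper): a compressor that captures the regularities in its input achieves a far better ratio on structured text than on incompressible noise. LLMs are, in effect, much stronger learned compressors of language.

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed size over original size; lower = more regularity captured."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"the cat sat on the mat. " * 100
noise = os.urandom(len(structured))  # incompressible by construction

print(f"structured text: {ratio(structured):.3f}")  # tiny: patterns found
print(f"random noise:    {ratio(noise):.3f}")       # ~1.0: nothing to model
```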

Comment Loss curves (Score 1) 41

technical matters such as "loss curves," a way of measuring an AI program's performance over time

Hmm, this summary or article may be misinterpreting that term. A loss curve is the model's training loss as a function of training batches processed. Calling it "performance over time" isn't technically wrong, but most readers will take it to mean how Bard performs across model versions or architecture revisions.
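For anyone unfamiliar, here is the typical shape of a loss curve: loss falls within a single training run as batches are processed. The function and numbers below are synthetic, purely to illustrate the form.

```python
import math

def synthetic_loss(batch: int, start: float = 4.0, floor: float = 1.8,
                   rate: float = 0.002) -> float:
    """Typical shape: steep early drop, slow asymptotic approach to a floor."""
    return floor + (start - floor) * math.exp(-rate * batch)

for b in range(0, 2001, 500):
    print(f"batch {b:4d}: loss {synthetic_loss(b):.3f}")
```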

Comment Re:It's OK though (Score 3, Informative) 92

They have looked into it. As you'd expect, it's not reliable or economical. "There's enough potential energy" is a starting point, not a convincing argument on its own. A real answer would have to include a realistic analysis of its levelized cost of electricity (LCoE) and its full lifecycle carbon emissions, including those of producing its fuel. Once you've got those numbers, you begin to understand just how good we have it with nuclear.
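For reference, here's the back-of-envelope version of LCoE, the metric any serious proposal has to report. The formula is the standard one; every input number below is hypothetical, purely to show the calculation.

```python
def lcoe(capex: float, annual_opex: float, annual_mwh: float,
         years: int, discount_rate: float) -> float:
    """LCoE = discounted lifetime cost / discounted lifetime generation ($/MWh)."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy

# Hypothetical plant: $6B capex, $150M/yr opex, 8.8 TWh/yr, 60-year life, 7% discount
print(f"${lcoe(6e9, 150e6, 8.8e6, 60, 0.07):.0f}/MWh")
```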

Comment Re:I recall some analysis.. (Score 4, Informative) 21

Basically that we are pretty much past the point of uselessly diminishing returns with respect to current 'AI' methods. So further big advancement is more stalled waiting on a new approach.

There are miles to go with the current paradigm. Look up the OpenAI scaling-laws paper: it projects roughly 10^5x more room for compute scaling before the transformer architecture reaches its assessed modelling limit. The intrinsic entropy of language is unknown, but the performance curves haven't plateaued at all along data scale, compute, or model size, so your claim is flatly false.
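For the curious, the functional form in the OpenAI scaling-laws paper (Kaplan et al., 2020) is a power law: test loss falls as L(C) = (C_c / C)^alpha in compute. The constants and units below are illustrative stand-ins, not the paper's fitted values.

```python
def loss_vs_compute(C: float, C_c: float = 2.3e8, alpha: float = 0.05) -> float:
    """Power-law scaling of loss with compute C (assumed units: PF/s-days)."""
    return (C_c / C) ** alpha

for C in (1e0, 1e2, 1e4, 1e6):
    print(f"compute {C:.0e}: loss {loss_vs_compute(C):.3f}")
```

The key property is that with alpha this small, each drop in loss costs orders of magnitude more compute, yet the curve keeps falling rather than flattening.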

Comment Re:Overpriced? (Score 5, Informative) 70

Not true. Overpriced doesn't mean what you think it means, i.e. a price higher than anyone will pay. It means priced above a fair market value. What makes it unfair? Deceptive business practices, fake sales, coercive psychological techniques, and the like.

If overpricing were impossible, why do these laws exist?

https://www.justia.com/consume...

https://ised-isde.canada.ca/si...

Comment Re: This is it. (Score 1) 129

They didn't have anything worked out in the 60s. What? I could have missed some underground research, so please cite it, but nothing public researchers did at the time worked or scaled.

And developing more complex ML AI doesn't bring us meaningfully closer to AGI.

...And this requires self awareness.

These statements don't cohere. They don't form a chain of justification and implication; they make huge leaps without connection. I'm going to invoke Hitchens's razor here: the burden is on you to demonstrate something concrete.

Here are my justifiable, concrete observations. ML has indeed brought us much closer to AGI, for any reasonable definition. Competence at zero-shot question answering, without any constraint on topic, is by definition general. Answering questions that require reasoning is intelligent. How has progress been on zero-shot learning? Check the literature: it's meteoric. You could object that question answering is a narrow thing, but it's clear the same technology can apply to action spaces and any other data modality.
It has demonstrated abstract reasoning, induction, and deduction, and it scales far better than anything else we've tried. The metric by which we may evaluate intelligence (compression) has been progressing along each of model size, dataset size, and compute, and shows no sign of plateauing on any of them. One estimate put it at 10,000-fold more compute before we reach the limits of language modelling imposed by the intrinsic entropy of language.
That won't stop the progress, though. There is less entropy in other signals, like video recordings of the natural world. Stuff doesn't happen randomly there, or if it does, it's at the nano scale, where it doesn't matter.

Crazy to think: the LLMs of today have context windows of just a few thousand tokens, yet can pass more skill tests than the average person. When model architectures are optimized enough to allow multi-million-token context windows, the long-range patterns they will discover and model will far exceed our own ability to recognize them.

Comment Re:Simply referring to humans as resources is alre (Score 1) 129

"Merely ingests" "mimics". eyeroll.
It's not mimicking, it's modelling. Each prediction of a next token, or masked part of an image, is a miniature, falsifiable experiment. The prediction of which requires causal models of its domain which it can reapply in context.
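A toy version of the "each prediction is a falsifiable experiment" point: the model commits to a distribution over the next token, reality reveals the answer, and log-loss scores the bet. The distribution below is invented for illustration.

```python
import math

def surprisal(predicted: dict, actual: str) -> float:
    """Bits of surprise when `actual` arrives under the model's distribution."""
    return -math.log2(predicted[actual])

# hypothetical P(next word | "the cat sat on the")
model_says = {"mat": 0.6, "hat": 0.3, "car": 0.1}
print(surprisal(model_says, "mat"))  # likely continuation: low surprisal
print(surprisal(model_says, "car"))  # unlikely continuation: high surprisal
```

A model that merely memorized surface strings would be punished on every novel continuation; low average surprisal on held-out data is evidence of an actual model of the domain.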

As for tech fads: predicting identical outcomes from loose past correlations doesn't work. This universe isn't that linear.

After all, it is merely trained on whatever creativity is available from us humans.

Merely, lol. You're trying to paint a box around AI's capabilities, but you're wide of the mark with this argument. Humans merely train on other humans' output too (plus non-human input like sense data, but surely, surely an AI could never train on photographs or videos or audio recordings).

Comment Re: This is it. (Score 1) 129

ML AI inherently cannot "take over" because it has no self awareness

For this, selfhood is unnecessary. Awareness is unnecessary. Self-awareness is doubly unnecessary. Do you think viruses have selves, awareness, self-awareness?
If you're trying to think original thoughts, run a critical loop that tries to justify them.

Not a hypothetical AGI that we have no idea how to build.

Speak for yourself. You don't have any idea what researchers on the frontier are up to.

Comment Re:What executive allowed this (Score 3, Insightful) 20

Who made the now known bad call of letting the company get "too big" and will they also be getting fired? Seems only fair if they made such an egregious error in their business planning that they should be given the boot with no severance, no stocks, no golden parachute right? Was it you Jeff Lawson?

You're making a hindsight bias error here, expecting prescience rather than rational policies based on available information.
