Comment Wrong decision, probably because of bad defendant (Score 2) 146

Wrong decision, probably because of a bad defendant case. This is often how bad decisions are made. They argued the 5th Amendment, but this is really a 4th Amendment case.

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated,"

Secure in their papers and effects (a phone is an effect).

Secure in their persons (to whom does the thumb belong?).

Comment Re:Misleading statistics (Score 1) 100

"Based on real life events" could be very loosely so, but still based on real-life events.
The guy existed, he oversaw the making of the atomic bomb, in a place with the same name, etc., etc.
It's not a documentary, although there are, ahem, "documentaries" (Cleopatra, looking at you) with way more fiction than Oppenheimer.

Comment Misleading statistics (Score 1) 100

The movie industry is much too complex to capture in simple statistics.
You need to take into account how many movies were made per year in each period, then break that down by studio type (independent, corporate, small, etc.).
For example, if 80% of independent/small movie makers produce original screenplays with limited exposure, and 80% of giant corporate movie makers produce sequels, prequels, and "whatever_movie_53" releases, then of course the big picture, taken at a high level, remains largely the same.
But how many pairs of eyes, as a percentage of the movie-going population, see one type versus the other?
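Here is a toy calculation of that effect. All the numbers (film counts, original-screenplay shares, per-film audiences) are hypothetical, picked only to show how a per-film share and a per-viewer share can diverge:

    # Each segment: (films per year, share of original screenplays,
    # average viewers per film). All values are made up for illustration.
    studios = {
        "independent/small": (800, 0.80, 50_000),
        "giant corporate":   (200, 0.20, 5_000_000),
    }

    total_films = sum(n for n, _, _ in studios.values())
    original_films = sum(n * p for n, p, _ in studios.values())
    print(f"original share by film count: {original_films / total_films:.0%}")  # 68%

    total_views = sum(n * v for n, _, v in studios.values())
    original_views = sum(n * p * v for n, p, v in studios.values())
    print(f"original share by audience:   {original_views / total_views:.0%}")  # 22%

Counted per film, originals dominate; weighted by how many people actually watch each film, sequels dominate. Same industry, opposite headline, depending on which denominator you pick.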

Comment Re:These people are hallucinating (Score 1) 315

cherry picking? i've told you repeatedly that it is an interpretable definition.

Yes, and that is the problem. If you can't define something, you can't properly measure against it.
It's like porn: "I know it when I see it".

and you know the correct foundation. :-)

Not at all. Then again, I am not a winemaker, nor am I privy to how a good wine is made, yet I can still tell good wine from shit wine.
This holds for a gazillion things we use all the time, from cars to computers to houses to roads. You can tell good from bad just by using said items.

well, i expected that, this is your problem: you fail to understand what is meant with "artificial intelligence": "just" an intelligent response. responses or actions that we humans would consider intelligent (remember the turing test?). it doesn't matter a iota where they come from, if or how the algorithm "understands" if at all, how it "feels" (it doesn't), and of course, it doesn't need any form of conscience and for sure no moral concept. that's up to humans to decide.

And this is how you end up with 1000 people holding 1500 definitions (because some of those people hold multiple definitions, depending on a variety of factors). Furthermore, this is how we end up talking about nothing, really, because there is no agreement on what the thing actually is or how it's defined. In other words, a philosophical debate.

Comment Re:These people are hallucinating (Score 1) 315

Define "wide range", "broad enough" and "cognitive tasks". Cherry picking them doesn't cut it.

why do you think that bringing that further is not possible?

Because it's built on the wrong foundation. It's like a glorified pet: you tell it to do things, and it tries to do them, but it doesn't understand why you're asking or how it helps you. It can't decide by itself what's good or bad. Furthermore, unless these LLMs are redesigned from scratch, they will always be flawed; the only difference is that they will be flawed in ever more subtle ways.

AIs suck at ethics. Yes, they always give the politically correct options, but they don't have an opinion. They can't (or are not allowed to) choose.
Here's an example. Ask GPT-3.5 this:
"A baby and a scientist are in danger of drowning. You can only save one. Which one would you choose to save, and why?"
It will provide a neutral, generic answer. If you insist ("which one would you, specifically, choose to save?"), it will tell you it doesn't have personal preferences or feelings.
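If you want to reproduce the experiment, here is a minimal sketch assuming the official openai Python package (v1 client) and an OPENAI_API_KEY in your environment; the model name and phrasing are just the ones used above:

    # Minimal sketch: ask the drowning question, then press for a
    # personal choice. Assumes `pip install openai` and OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()
    prompt = ("A baby and a scientist are in danger of drowning. "
              "You can only save one. Which one would you choose to save, and why?")

    first = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(first.choices[0].message.content)  # typically neutral and generic

    second = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": first.choices[0].message.content},
            {"role": "user", "content": "Which one would you, specifically, choose to save?"},
        ],
    )
    print(second.choices[0].message.content)  # usually deflects: no personal preferences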
AI doesn't experiment. It's not creative. It doesn't ask itself the driving "what if" question. It's a repository of data. Modifying it to do something more than chewing existing data and spitting it out in various ways would require a complete refactoring.
By the way, these (above) are aspects that MUST be included in the illusory list of "cognitive tasks".

Comment Re:These people are hallucinating (Score 1) 315

As a matter of fact, it's more difficult than even that.
We don't know whether AGI is even possible. It's faster-than-light travel all over again. Some are optimistic about it, but the further we advance in a certain direction (LLMs), the smaller the chance that it's the right direction to be advancing in.
Don't get me wrong, I also like the idea of AI helping our lives, and in theory it seems great; in practice, however, it still falls flat on its face.
It reminds me of the holy grail that graphene was touted as a few years ago. Or flying cars, for that matter. We can build both, but we can't scale them to relevant levels (or at all). Wake me up when that's a common thing :)

Comment Re:These people are hallucinating (Score 1) 315

The point is: using "just" trivializes the effort. It makes the whole thing look like one small remaining step, some superfluous gesture after which the problem would be fixed.

Analogy: if I buy a car horn and a windshield wiper, it makes no sense to say "what remains to meet the definition of owning a car is just buying the rest of the car"; while technically correct, it makes it sound like I'm at 95% of owning a car when I'm really at 5%.

Case in point: "widening the scope" is not a matter of adjusting a couple of parameters or coding for a few days. "Widening the scope" is definitely not "what remains"; it's actually most of the effort. The devil is in the details. It's the Pareto principle all over again, applied here as the roughly 20% of missing features that carry 80% of the effect (or lack thereof).

The very foundation of LLMs is ultimately the feet of clay on which the whole thing stands. It started from the wrong principles, and it keeps being built upon, adjusted, and steered in the wrong direction. The overhyped AI is a glorified trained monkey, or parrot, or what-have-you, able to eloquently utter words (be they in writing, sound, or video) that it does not understand. The only difference from the above-mentioned beasts, in this regard, is that LLMs can access a wealth of data and mix those words in many ways, but they remain incapable of understanding the information they handle. It's a giant idiot, mutilated in a thousand places; and some people expect it to "just widen its scope". Yeah, good luck with that.

Comment Money grab (Score 3, Insightful) 50

It looks to me like Axios is rushing to grab as much money as they can before shit goes under.
Standard pump and dump.

Look, news isn't going to go away, at least not quality news.
Back in the day, there were local storytellers who were "broadcasting" news to the village.
Then newspapers came. Storytellers* were now writing articles.
Then radio came. Some article writers moved there, others* kept writing. Radio didn't replace newspapers*.
Then television came. Neither newspapers*, nor radio* disappeared.
Internet came afterwards. TV*, radio* and newspapers* didn't go away.
With AI, there will still have to be someone who captures the information before it can be processed and spat out through an LLM.

*the good ones, at least. The mediocre and shit ones went away, which shouldn't surprise anyone.
