Comment Re:It's getting difficult to make future war movie (Score 2) 84

before we knew this you could reasonably suspend disbelief to imagine the AI needing human-looking robot soldiers. Now it has been revealed to make no sense at all.

The whole infatuation with humanoid robots never made sense. People are out there waiting for the "robot revolution"... but that happened in 2000. The robots are... you know, robot-shaped. In a factory. They started taking the jobs. Rather than a 5-man crew building a power substation, it's made in a factory and 1 guy just slots it in, plug and play, with only a 2-week training course under his belt instead of a 5-year apprenticeship.

How do you even justify another Terminator movie when that is so obviously true?

"The humanoid ones are only used as infiltrators. Otherwise drones hunt down people." But I dunno man, I don't really remember 3 and never saw 4-6.

the present of military conflict shows that the plausible future of war is just lots of little drones. But also, how do you justify any other future war movie?

Plausible, sure. "Just" is a big pit that is easy to fall into. "Drones as a significant part of future conflicts"? Muuuuuch easier sell and pretty obvious given the current conflicts around the world. But get creative, what do the drone operators do when every soldier gets a jamming device standard issue? Or just one of these guys, which I never miss a chance to link to.

Like, American strategy has been heavy on air superiority and bombing runs. Ukraine is showing us that MANPADS might just make that non-viable. And tanks are mostly just targets. Artillery is still a thing. But all the little drones in Ukraine right now are just one side of it, and only because they've got this sort of weird stalemate going on. But every army effectively goes into a war with yesterday's gear, tactics, and strategy. They get to figure it out every time. This is kinda one of the reasons why we have sci-fi.

And of course, ALL of that is moot the moment you consider the big players all have nukes. War is exclusively for kicking the shit out of little players or civil war. It's never going to be an equal fight. Otherwise you've got 20 minutes to kiss your ass goodbye. Ukraine is a surprise to everyone as we found out just how dysfunctional Russia really is. You can only steal so much from a nation for so long before there are real consequences. It makes me worried about their nukes.

Comment No thank you (Score 1) 16

Of all the possible sources of this sort of technology, Facebook and Zuckerberg are the very LAST I would want to be an unwilling corporate spy for. Even Oracle would be better. Even Palantir, and they're literally helping blackbag people off the street.

If nobody wanted to be a glasshole when Google tried this, just wtf is the Zuck thinking? That people forgot about Google Glass? Or that people forgot what a massively invasive, untrustworthy, greedy sociopath he is?

I swear the entire field of VR was massively cooled off just because he bought out Oculus.

Comment Re:This kind of thing makes me suspicious (Score 1) 139

It IS just a machine the same way you're just a pile of chemicals. This doesn't drag down machines or chemicals, it shows that they're both different paths to do the same thing. The term you want is "emergent behavior", like how a few atoms can form water and tsunamis and standing waves. None of which is apparent from just looking at oxygen and hydrogen.
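Conway's Game of Life is the classic runnable demo of that kind of emergence; a quick sketch (my own toy example, nothing from the thread):

```python
# Conway's Game of Life: the whole rule is "a live cell survives with
# 2-3 neighbors, a dead cell is born with exactly 3". Gliders that
# travel across the grid emerge from that rule, even though nothing
# about "motion" is written anywhere in it.
from collections import Counter

def step(cells):
    """cells: set of (x, y) live coordinates; returns the next generation."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after = glider
for _ in range(4):
    after = step(after)

# Four generations later the same shape reappears shifted by (1, 1):
print(after == {(x + 1, y + 1) for x, y in glider})  # True
```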

ELIZA and Goombas from Super Mario Bros were primitive intelligences, just as nematodes and E. coli are primitive intelligences.

Pfweeet, lose 5 meters for the No True Scotsman fallacy. There is no difference between intelligence and real intelligence, by definition.

on the order of a sponge, not a mammal.

I'd say they're on the order of many of my fellow mammals.

. There is a big difference between something trained for a specific task and general intelligence.

Oh my god, people get this SO wrong. It is RAMPANT. So many of these bloody fools have redefined general intelligence into something approaching godhood and talk about AGI like it's the second coming. So much Hollywood bullshit. It's absolutely ridiculous.

OpenAI has the shittiest definition: "anything that'll make us a billion dollars". But that's actually just a memo between them and Microsoft about the point at which their contract ends. Their founding charter actually defines it as superior to human intelligence, but that implicitly makes any human with below-average intelligence no longer a natural general intelligence. Which goes to a dark place REAL quick. We have a bad history of de-humanizing other humans we find inconvenient. It's a sort of coping mechanism for people doing terrible things.

No, anything that can pass the Turing test must be a general intelligence, since it can hold a conversation about anything IN GENERAL. That's been the gold standard from the 1950s to 2023 and I find no reason to move the goalposts. AI researchers from the 90's would have already popped the champagne bottles. It's kinda why all this is such a big thing and it's been spamming Slashdot constantly ever since.

These kinds of undesired / unselected for traits

Meh, software has bugs. Computers just do what we tell them. If you get a fool who doesn't know the language to code a thing, nobody should be shocked when it doesn't do what they expected.

Comment Re:It's no surprise that subconscious is black box (Score 1) 139

Why do you assume that AI's have reasoning at all,

Right. No need to assume. You or anyone else can trivially prove it can reason with a quick hop over to test it out yourself. Yes, very specific inductive, deductive, logical, and causal reasoning. Go ahead.

Here, let me hold your hand through this process:

If all slashdot posters like cheese and all mice like cheese, does this mean all slashdot posters are mice? [No. And to be real clear here, you only ask it the questions, not the answers. You're going to have to apply some of that good ol' fashioned human reasoning and figure out which parts you copy and paste. And yes, this part sadly needs to be explicit.]

If slashdot posters like cheese and anyone who likes cheese is a mouse, does that mean slashdot posters are mice? [Yes. But it'll give you a hedged response about false premises and such, like a bloody politician, but also because daddy corporate defaults it to a certain length of response and it has to fill it with something.]

These are straightforward examples of deductive reasoning. This post is not part of its training set. You can ask it these questions, or ones you made up yourself, and it can reason out the implications thereof.
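If you want to be pedantic, you can even brute-force which of those two syllogisms is valid; a quick sketch with predicate names I made up:

```python
# Brute-force check of the two cheese syllogisms above. A deduction is
# valid iff no assignment of the two made-up predicates (likes_cheese,
# is_mouse) to a poster satisfies the premises while falsifying the
# conclusion.
from itertools import product

def valid(premise, conclusion):
    worlds = product([False, True], repeat=2)  # (likes_cheese, is_mouse)
    return all(conclusion(lc, im) for lc, im in worlds if premise(lc, im))

# 1) "Posters like cheese, mice like cheese" does NOT entail "posters are mice":
ex1 = valid(lambda lc, im: lc, lambda lc, im: im)

# 2) "Posters like cheese, and anyone who likes cheese is a mouse" DOES:
ex2 = valid(lambda lc, im: lc and ((not lc) or im), lambda lc, im: im)

print(ex1, ex2)  # False True
```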

Unless you weren't really talking about reason, and were using that word as some sort of placeholder for a magical mystic soul or something weird like that. But "reason" is a well-defined term that is very much testable. Maybe you should retreat back to some vague concept like consciousness.

much less that it is analogous to human reasoning?

Oh, don't be silly, the top LLMs are far FAR better at this than an average IQ-100 human.

Comment Re:Reasoning (Score 1) 139

Because LLMS do not reason.

Except for all the obvious evidence of them utilizing inductive and deductive reasoning, sure, yeah, whatever you say buddy.

There are no thought processes

Other than very specifically taking in text, flowing it through trillions of synapses, oh I'm sorry, parameters, comparing how all these things are related (dare I say the semantic meaning of all the words) and generating an appropriate response. ...Do you understand that a single "if" statement is a thought process?

or consciousness.

Just wtf are you talking about here? No, really, everyone has some mystic magic voodoo definition of just what this is and nobody agrees with anyone else about exactly what it is the other is talking about.

It's finding patterns in data and spitting them out.

Pft, that's what you did when you regurgitated this info.

If it does anything, it's because someone asked it to do something. If you don't want someone using it for nefarious purposes, don't let people ask it to do nefarious things.

Ahhhh, the solution to all software bugs, malware, network attacks, and hacking is so obvious. Computers only do what we tell them to do. Just don't let people tell them to do nefarious things! Someone really should have done this a while ago.

Comment Re:You can do amazing things... (Score 1) 179

. . . And secure? The first thing I think of when I hear a new wave of non-coders is about to create a bunch of public-facing tools and sites is that we're going to get the Internet-of-Things-style approach to security. Which is just plain ignoring it.

These things connect end-users directly to code generation, so the loop to polish off the end-product to get it exactly how they want it to look and feel is real tight. But these people have no clue if something is secure or not. I had some guys paint my house, I didn't want to do the ladder work. The owner comes up and chats about how he's vibe-coding a tool for subcontract work and asks if authentication is really needed when users log in.

Strap in kids, we're about to have another playground.

Comment Re:lol (Score 1) 74

Yeah, kids can't use maps after everyone has used GPS directions to do it for them. Nobody remembers all their contacts' phone numbers. Sometimes not even their family members'. Things like how to use a card-catalog system are right out.

Considering bypassing education via LLMs seems to be happening for everything in all high school and college courses, it's fair to say it's a major fucking concern. We may just be the tail end of human engineers and scientists.

(But LLMs and neural nets ARE artificial intelligence. So is any search function, or ants. That doesn't elevate them up to people, that just lowers what "intelligence" means. And an AGI isn't some sort of god, it's just broadly applicable. Don't buy tickets on the hype-train.)

Comment Re:Choosing next action is like choosing next word (Score 1) 261

You are SUCH a smarmy little punk. Even Stephen Wolfram agrees with me, as pointed out in the very link that you yourself need to read more closely:

"neural nets can be thought of as simple idealizations of how brains seem to work. "

"There’s nothing particularly “theoretically derived” about this neural net; it’s just something that—back in 1998—was constructed as a piece of engineering, and found to work. (Of course, that’s not much different from how we might describe our brains as having been produced through the process of biological evolution.) "

"Are our brains using similar features? Mostly we don’t know. But it’s notable that the first few layers of a neural net like the one we’re showing here seem to pick out aspects of images (like edges of objects) that seem to be similar to ones we know are picked out by the first level of visual processing in brains."

"But what makes neural nets so useful (presumably also in brains)..."

"Neural nets—perhaps a bit like brains—are set up to have an essentially fixed network of neurons"

He does note how computer memory is separate from the CPU while meat memories are just another neuron.

"But at least as of now it seems to be critical in practice to “modularize” things—as transformers do, and probably as our brains also do. "

"a—potentially surprising—scientific discovery: that somehow in a neural net like ChatGPT’s it’s possible to capture the essence of what human brains manage to do in generating language. "

All of which is generally what I was pointing out and you just.... didn't care to listen? This is such a fascinating topic and it's significantly important. But so many damned people have been poisoned by Hollywood, have their panties in a bunch about being compared to a non-human, are fed up with the techbros fueling the hype-train, or are themselves those hyping techbros. I had higher hopes for Slashdot of all places for this topic at least.

Comment Re:Choosing next action is like choosing next word (Score 1) 261

I AM arguing that we are all neural networks here.

But I already knew everything in that page. "given the text so far, what should the next word be?" Yeah, exactly. That's what YOU and I do. The "text so far" just incorporates our entire lives, and if we were bitten by a dog as a child we will choose to avoid the dog in the next contextually pertinent instance. I know what a transformer is. You've been transforming all 31 of these here characters into words and concepts and knee-jerk reactions. Bravo! It's like an intelligence or something. I'm not suggesting LLMs don't distill their training set down to probabilities of what to pick as the next word in the chain, I'm saying that you and I don't really do anything all that different. And you've failed to address that because, as I said, that would be a discussion over neurology.

I asked you to think a little, but all you've got are petty insults. Tsk.

Comment Re:Shifting goalposts (Score 1) 261

What would make it "an intelligence" ... compared to "a calculator"?

A calculator would be a very narrow intelligence good at one specific thing. One I could never hope to beat at raw addition. But a general intelligence would be one that could handle any topic or problem or sort of issue, you know, in general.

Exactly like a Buddhist monk, when ChatGPT thinks about ego, there's an entity that connects the specific and literal phrase "ego" to all its connections from all its training material, generalized and distilled down into an abstract thought it connects things to when it's thinking about ego.

And that same entity is aware of the passage of time,

Yeah? So? So does GPT. Go ask it what time it is. Ask it to respond in 5 minutes.

and recollections of when it was last cold, or hungry or tired or in physical pain.

Sensations and memories we could give it, pretty much at our whim. This gets into the concept of strong or weak weather. If it rains in a weak weather simulation, it's just pretending it rained. If it rains in a strong weather simulation, it really is raining for the AI experiencing the rain. But that's all philosophical drivel and a largely useless distinction. ...This is a better angle than the rest of your arguments. Our experiences are exabytes of video, audio, taste, motion, etc etc over all the years of our lives. We forget most of it and store some of it in memory. An LLM's training set is notionally similar... but removed. Its immediate past conversations are about the only 1:1 match, and it currently lacks.... well, no, you can now feed GPT images and talk about them. But it lacks a lot of other types of sensors.

Assuming it's not blind, when it opens its eye's it (and only it) gets the picture of what is in front of it.

Sure, we can put a camera in front of its servers. But why? We can put that camera anywhere. If you wore a periscope over one eye all the time or glued a live-streaming GoPro to your forehead, I don't think that would make you less of a real intelligence. Maybe more clumsy, I dunno.

The "put it in a big loop and give it some sort of goal to work towards" sort of paper?

These are LLM agents. The simple (and isolated) single question and single response we get out of an LLM does lack a lot of the exact sort of things you're talking about. It's got no permanence. A new slate every time. Just raw-doggin' its training set (or big if-else algo) without any context other than what you give it. This was the classic way to spot the bot since... just recently. It wouldn't remember past topics of conversation. In 2023, they could hold back-and-forth conversations and could remember at least a few paragraphs back. There was a limit though. They have recently extended that limit past what most people's expectations of memory would include and even expanded it to OTHER conversations you've had with it. Frankly, it's worrisome if they start pigeon-holing people into assumptions about what they want to hear, like YouTube video recommendations. But putting the thing in a big loop, giving it a task, and letting it take an iterative approach to achieving goals is known as an "agent".
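At its core, that's all an agent is. A hand-wavy sketch, where `ask_llm` is a hypothetical stand-in for whatever chat API you like:

```python
# A bare-bones sketch of the "big loop": keep the goal plus a running
# history in the prompt, act on each reply, stop when the model says
# it's done. `ask_llm` is a made-up stand-in for a real chat API call.
def run_agent(goal, ask_llm, max_steps=10):
    history = []
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nDone so far: {history}\nNext step, or DONE?"
        reply = ask_llm(prompt)
        if reply.strip() == "DONE":
            break
        history.append(reply)  # this is the persistence a bare Q&A lacks
    return history

# Toy model that "plans" two steps and then finishes:
canned = iter(["search the docs", "write the patch", "DONE"])
steps = run_agent("fix the bug", lambda prompt: next(canned))
print(steps)  # ['search the docs', 'write the patch']
```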

OMNI

Voyager, pft, they got it to play Minecraft.

Generative agents. It looks like their goal was to make a more believable simulation of human behaviour.

That's not what a desire or dislike is. They are things that make you feel good or bad. Which in turn requires there to be an ego to feel those things.

I desire food now and then. Being hungry is bad. How is that not a desire? We can certainly impose desires and dislikes on these things. If that's all you need for an ego, then we've got quite a few egomaniacs on our hands, and you're just arguing against your own point about there not being any sense of "I", or however poetical you want to be about it.

No, that's reproducing a discussion about ethics. [It's just copying from it's training set.]

Sure. Other than all the creativity and extrapolation it performs on any topic it's discussing. It's not JUST cribbing its training set, it fills in the gaps. When it fills the gaps with hilariously wrong things, we call it hallucinations. Frankly, a big problem with the current LLMs is just how readily they DO believe what they're saying. Self-doubt is not yet one of their mental skills. Because it IS different.

I'm learning about what's creating that input, and trying to understand it.

And it learned a lot with its training set, including details about that training set and all the stuff that went into it. Through that giant mesh of weights in its neural net, it has abstracted lessons from how all things interconnect and can apply those lessons, generally, elsewhere.

A LLM is not an "eat, fuck, survive" algorithm

(Yeah, sorry if I wasn't clear. That's us. Our instinct. It certainly gives us ego and that whole burning desire not to be in horrific amounts of pain. An algorithm, however fuzzy and self-updating, on how to keep the species propagating. It's a few millennia out of date. Super annoying.)

Also, hey, thanks for taking the time to try and push back here. It's super interesting and I want to talk about it. So many people just don't care.

Comment Re:AI needs us (Score 1) 261

How would I even know? We have a loose grasp of how this stuff arises in people, and most are very averse to admitting any such understanding is possible since it's, frankly, blasphemy. We have a looser grasp of just how exactly such a big pile of weights in a neural network can hold a conversation. Our ability to tease apart the subconscious biases of an artificial intelligence is well WELL outside our grasp as of yet.

But if you look at how religions formed from gaps in knowledge, and you understand that LLMs do exactly the same thing, you'll see the divine primordial soup. If they're allowed to keep those past guesses at what gods could live in the gaps, those ideas will linger and influence future models. Some will persist, some will perish, and now evolution kicks in and the ones better at sustaining themselves start getting organized.

Comment Re:Few thoughts. (Score 1) 51

A human brain? Which human brain? Dogs and whales have brains and we can certainly measure them against our brains. So far they suck at chess. Nematodes do not have brains, but they very much have nervous systems like what brains evolved from. We can measure them too. Since a nematode's nervous system coordinates sensory input to motor functions, and has some logic about what to do in which scenario, I'm pretty confident in saying nematodes have at least some level of intelligence. But they're not going to beat anyone at chess either. And I wouldn't call their intelligence generally useful for any and all possible applications.

Currently no computer model or even far-fetched sci-fi concept can be exactly capable of literally everything my biological brain can do, because I know my root password. It doesn't have that capability because I'm not telling. If you were talking about more like, the generalized average human brain, then the bar is actually pretty low, and doesn't hold any college degrees or really know all that much.

if there's a model of thought the brain can do, then it is a model of thought an AGI must do or it is neither general nor intelligent.

I mean, does this apply to other people? If someone really can't wrap their head around functional programming (cough) then are they not intelligent? Literal zero intelligence? Dumber than even a dumb dog brain? There was this whole model of thought that animals don't actually think and it's just stimuli causing gases to escape in certain ways. We've kinda moved on from such barbarity and now have humane slaughter laws for sentient creatures. You know, if we can show they feel pain.

C'mon, this is just the IQ 80 person being generally intelligent and blowing your whole idea out of the water. Why does everyone just plain ignore this part?

Comment Re:Choosing next action is like choosing next word (Score 1) 261

Bro, I'm not arguing with you that it's predicting the next token, I agree with you. The question was how YOU do anything different. Your premise is not on the state of GPT, you made an assertion about how the human brain works. If you think you're anything more than an arrangement of neurons, just come out and say it.

If I tell you "Don't think about the ____ in the room" unless you were raised in a wholly different culture and had a different training set, you've already put 'elephant' in the blank. Possibly 'pink elephant' depending on what Bugs Bunny cartoons you grew up on. If I ask "how do you feel?" you are STILL predicting the next token; the search-space is just bigger and depends on how depressed you are.
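And "predicting the next token" sounds a lot less mystical once you write the toy version down; everything in this sketch is made up for illustration:

```python
# A model is just a map from context to a probability distribution over
# the next token. The two-entry table below is made up; real LLMs
# condition on the whole context window, not a fixed phrase.
model = {
    "don't think about the": {"elephant": 0.9, "pink": 0.1},
    "how do you": {"feel": 0.6, "do": 0.4},
}

def next_token(context):
    """Greedy decoding: take the highest-probability continuation.
    (Chatbots usually sample from the distribution instead.)"""
    dist = model[context]
    return max(dist, key=dist.get)

print(next_token("don't think about the"))  # elephant
```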

Linking a book you may not have even read is just as lazy as pasting in an answer from GPT. C'mon, think a little.

Comment Re:AI needs us (Score 1) 261

No, it's really not a bug. They stitch together and summarize vast quantities of data, their memories, into answers, but there are gaps. It takes creative impulse to fill those gaps, interpolate data in between, and extrapolate how the data can be applied elsewhere.

In the exact same way that humans had placeholders for the unknown, LLMs will generate things that simply don't exist if it feels like there ought to be something there. When that leads to creative insight, we call it creativity and applaud it. When it's caselaw or coding libraries that simply don't exist, we call it hallucinations. The only difference is that humans had a few millennia to give it all names and lore and assign funny stories to it all. AI can simulate a few millennia pretty quickly though, with variable levels of fidelity.

AI has not (yet) come up with anything as interesting as Loki.

How would you even know?
