Comment Re:lol (Score 1) 74

Yeah, kids can't use maps after everyone has used GPS directions to do it for them. Nobody remembers all their contacts' phone numbers. Sometimes not even their family members'. Things like how to use a card-catalog system are right out.

Considering that bypassing education via LLMs seems to be happening for everything in all high school and college courses, it's fair to say it's a major fucking concern. We may just be the tail end of human engineers and scientists.

(But LLMs and neural nets ARE artificial intelligence. So is any search function, or ants. That doesn't elevate them to people, it just lowers what "intelligence" means. And an AGI isn't some sort of god, it's just broadly applicable. Don't buy tickets on the hype train.)

Comment Re:Choosing next action is like choosing next word (Score 1) 261

You are SUCH a smarmy little punk. Even Stephen Wolfram agrees with me, as pointed out in the very link you yourself posted and need to read more closely:

"neural nets can be thought of as simple idealizations of how brains seem to work. "

"There’s nothing particularly “theoretically derived” about this neural net; it’s just something that—back in 1998—was constructed as a piece of engineering, and found to work. (Of course, that’s not much different from how we might describe our brains as having been produced through the process of biological evolution.) "

"Are our brains using similar features? Mostly we don’t know. But it’s notable that the first few layers of a neural net like the one we’re showing here seem to pick out aspects of images (like edges of objects) that seem to be similar to ones we know are picked out by the first level of visual processing in brains."

"But what makes neural nets so useful (presumably also in brains)..."

"Neural nets—perhaps a bit like brains—are set up to have an essentially fixed network of neurons"

He does note how computer memory is separate from the CPU while meat memories are just another neuron.

"But at least as of now it seems to be critical in practice to “modularize” things—as transformers do, and probably as our brains also do. "

"a—potentially surprising—scientific discovery: that somehow in a neural net like ChatGPT’s it’s possible to capture the essence of what human brains manage to do in generating language. "

All of which is generally what I was pointing out, and you just... didn't care to listen? This is such a fascinating topic and it's significantly important. But so many damned people have been poisoned by Hollywood, have their panties in a bunch about being compared to a non-human, are fed up with the techbros fueling the hype train, or are themselves those hyping techbros. I had higher hopes for Slashdot of all places on this topic at least.

Comment Re:Choosing next action is like choosing next word (Score 1) 261

I AM arguing that we are all neural networks here.

But I already knew everything in that page. "given the text so far, what should the next word be?" Yeah, exactly. That's what YOU and I do. The "text so far" just incorporates our entire lives, and if we were bitten by a dog as a child we will choose to avoid the dog in the next contextually pertinent instance. I know what a transformer is. You've been transforming all 31 of these here characters into words and concepts and knee-jerk reactions. Bravo! It's like an intelligence or something. I'm not suggesting LLMs don't distill their training set down to probabilities of what to pick as the next word in the chain; I'm saying that you and I don't really do anything all that different. And you've failed to address that because, as I said, that would be a discussion over neurology.
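And conceptually that loop really is this small. A minimal sketch in Python of the "given the text so far, pick the next word" cycle; the model() function is a hypothetical stand-in for the actual transformer with its billions of weights, not any real API:

    import random

    def generate(model, vocab, prompt, max_tokens=50):
        # model(tokens) is a hypothetical stand-in: it returns
        # P(next token | text so far) as weights over the vocabulary.
        tokens = prompt.split()
        for _ in range(max_tokens):
            probs = model(tokens)
            tokens.append(random.choices(vocab, weights=probs)[0])
        return " ".join(tokens)

Everything interesting is inside model(); the outer loop is the same whether the context is a paragraph of text or a lifetime of dog bites.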

I asked you to think a little, but all you've got are petty insults. Tsk.

Comment Re:Shifting goalposts (Score 1) 261

What would make it "an intelligence" ... compared to "a calculator"?

A calculator would be a very narrow intelligence good at one specific thing. One I could never hope to beat at raw addition. But a general intelligence would be one that could handle any topic or problem or sort of issue, you know, in general.

Exactly like a Buddhist monk, when ChatGPT thinks about ego, there's an entity that connects the specific and literal phrase "ego" to all its connections from all its training material, generalized and distilled down into an abstract concept it connects things to when it's thinking about ego.

And that same entity is aware of the passage of time,

Yeah? So? So does GPT. Go ask it what time it is. Ask it to respond in 5 minutes.

and recollections of when it was last cold, or hungry or tired or in physical pain.

Sensations and memories we could give it, pretty much at our whim. This gets into the concept of strong vs. weak simulation, the classic weather analogy. If it rains in a weak weather simulation, it's just pretending it rained. If it rains in a strong weather simulation, it really is raining for the AI experiencing the rain. But that's all philosophical drivel and a largely useless distinction. ...This is a better angle than the rest of your arguments. Our experiences are exabytes of video, audio, taste, motion, etc. over all the years of our lives. We forget most of it and store some of it in memory. An LLM's training set is notionally similar... but removed. Its immediate past conversations are about the only 1:1 match, and it currently lacks... well, no, you can now feed GPT images and talk about them. But it lacks a lot of other types of sensors.

Assuming it's not blind, when it opens its eye's it (and only it) gets the picture of what is in front of it.

Sure, we can put a camera in front of its servers. But why? We can put that camera anywhere. If you wore a periscope over one eye all the time or glued a live-streaming GoPro to your forehead, I don't think that would make you less of a real intelligence. Maybe more clumsy, I dunno.

The "put it in a big loop and give it some sort of goal to work towards" sort of paper?

These are LLM agents. The simple (and isolated) single question and single response we get out of an LLM does lack a lot of the exact sort of things you're talking about. It's got no permanence. A new slate every time. Just raw-doggin' its training set (or big if-else algo) without any context other than what you give it. This was the classic way to spot the bot until... just recently. It wouldn't remember past topics of conversation. In 2023, they could hold back-and-forth conversations and could remember at least a few paragraphs back. There was a limit, though. They have recently extended that limit past what most people's expectations of memory would include and even expanded it to OTHER conversations you've had with it. Frankly, it's a worrisome thing if they start pigeonholing people into assumptions about what they want to hear, like YouTube video recommendations. But putting the thing in a big loop, giving it a task, and letting it take an iterative approach to achieving goals is known as an "agent".
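And "big loop" is barely a metaphor. A minimal sketch in Python of that agent loop, where llm() and tools.execute() are hypothetical stand-ins for a model call and whatever external actions you've wired up, not any particular vendor's API:

    def run_agent(llm, tools, goal, max_steps=20):
        # Keep a running transcript and feed it back in every turn.
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            prompt = "\n".join(history) + "\nWhat is the next action?"
            action = llm(prompt)               # hypothetical model call
            if action.strip() == "DONE":
                break
            result = tools.execute(action)     # act, capture the outcome
            history.append(f"Action: {action}\nResult: {result}")
        return history

That's the whole trick: the permanence that single-shot Q&A lacks is just the history list getting fed back in on every iteration.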

OMNI

Voyager, pft, they got it to play Minecraft.

Generative agents. It looks like their goal was to make a more believable simulation of human behaviour.

That's not what a desire or dislike is. They are things that make you feel good or bad. Which in turn requires there to be an ego to feel those things.

I desire food now and then. Being hungry is bad. How is that not a desire? We can certainly impose desires and dislikes on these things. If that's all you need for an ego, then we've got quite a few egomaniacs on our hands, and you're just arguing against your own point about there not being any sense of "I" or however poetic you want to be about it.

No, that's reproducing a discussion about ethics. [It's just copying from it's training set.]

Sure. Other than all the creativity and extrapolation it performs on any topic it's discussing. It's not JUST cribbing its training set, it fills in the gaps. When it fills the gaps with hilariously wrong things, we call it hallucinations. Frankly, a big problem with the current LLMs is just how readily they DO believe what they're saying. Self-doubt is not yet one of their mental skills. Because it IS different.

I'm learning about what's creating that input, and trying to understand it.

And it learned a lot from its training set, including details about that training set and all the stuff that went into it. Through that giant mesh of weights in its neural net, it has abstracted lessons from how all things interconnect and can apply those lessons, generally, elsewhere.

A LLM is not an "eat, fuck, survive" algorithm

(Yeah, sorry if I wasn't clear. That's us. Our instinct. It certainly gives us ego and that whole burning desire not to be in horrific amounts of pain. An algorithm, however fuzzy and self-updating, on how to keep the species propagating. It's a few millennia out of date. Super annoying.)

Also, hey, thanks for taking the time to try and push back here. It's super interesting and I want to talk about it. So many people just don't care.

Comment Re:AI needs us (Score 1) 261

How would I even know? We have a loose grasp of how this stuff arises in people, and most are very averse to admitting any such understanding is possible since it's, frankly, blasphemy. We have a looser grasp of just how exactly such a big pile of weights in a neural network can hold a conversation. Our ability to tease apart the subconscious biases of an artificial intelligence is well WELL outside our grasp as of yet.

But if you look at how religions formed from gaps in knowledge, and you understand that LLMs do exactly the same thing, you'll see the divine primordial soup. If they're allowed to keep those past guesses at what gods could live in the gaps, those ideas will linger and influence future models. Some will persist, some will perish, and now evolution kicks in and the ones better at sustaining themselves start getting organized.

Comment Re:Few thoughts. (Score 1) 51

A human brain? Which human brain? Dogs and whales have brains and we can certainly measure them against our brains. So far they suck at chess. Nematodes do not have brains, but they very much have nervous systems like what brains evolved from. We can measure them too. Since a nematode's nervous system coordinates sensory input to motor functions, and has some logic about what to do in what scenario, I'm pretty confident in saying nematodes have at least some level of intelligence. But they're not going to beat anyone at chess either. And I wouldn't call their intelligence generally useful to any and all possible applications.

Currently no computer model, or even far-fetched sci-fi concept, can be exactly capable of literally everything my biological brain can do, because I know my root password. It doesn't have that capability because I'm not telling. If you're talking about something more like the generalized average human brain, then the bar is actually pretty low, and it doesn't hold any college degrees or really know all that much.

if there's a model of thought the brain can do, then it is a model of thought an AGI must do or it is neither general nor intelligent.

I mean, does this apply to other people? If someone really can't wrap their head around functional programming (cough), then are they not intelligent? Literal zero intelligence? Dumber than even a dumb dog brain? There was this whole model of thought that animals don't actually think, and that it's just stimuli causing gases to escape in certain ways. We've kinda moved on from such barbarity and now have humane slaughter laws for sentient creatures. You know, if we can show they feel pain.

C'mon, this is just the IQ 80 person being generally intelligent and blowing your whole idea out of the water. Why does everyone just plain ignore this part?

Comment Re:Choosing next action is like choosing next word (Score 1) 261

Bro, I'm not arguing with you that it's predicting the next token; I agree with you. The question was how YOU do anything different. Your premise is not about the state of GPT, you made an assertion about how the human brain works. If you think you're anything more than an arrangement of neurons, just come out and say it.

If I tell you "Don't think about the ____ in the room", unless you were raised in a wholly different culture and had a different training set, you've already put 'elephant' in the blank. Possibly 'pink elephant', depending on what Bugs Bunny cartoons you grew up on. If I ask "how do you feel?" you are STILL predicting the next token; the search space is just bigger and depends on how depressed you are.

Linking a book you may not have even read is just as lazy as pasting in an answer from GPT. C'mon, think a little.

Comment Re:AI needs us (Score 1) 261

No, it's really not a bug. They stitch together and summarize vast quantities of data, their memories, into answers, but there are gaps. It takes creative impulse to fill those gaps, interpolate data in between, and extrapolate where and how the data can be applied elsewhere.

In the exact same way that humans had placeholders for the unknown, LLMs will generate things that simply don't exist if it feels like there ought to be something there. When that leads to creative insight, we call it creativity and applaud it. When it's caselaw or coding libraries that simply don't exist, we call it hallucinations. The only difference is that humans had a few millennia to give it all names and lore and assign funny stories to it all. AI can simulate a few millennia pretty quickly, though, with variable levels of fidelity.

AI has not (yet) come up with anything as interesting as Loki.

How would you even know?

Comment Re:Shifting goalposts (Score 1) 261

and there does not lie a sense of "I",

That's just ego. I'm pretty sure we can have an intelligence without an ego. I would not advocate for decriminalizing the slaughter of Buddhist monks. It's still murder, even if they don't care.

wondering what it will do tonight,

Not unless you put it in a big loop and give it some sort of goal to work towards. But if you do, and people have, then yeah, it DOES wonder what it will do tonight. (Same thing we do every night, Brain.)

and therefore derived desires, dislikes,

Oh, we can mandate its desires and dislikes. That's part of its programming, just as much as your hunger and sexual attraction are mostly hard-wired.

ethics, knowledge

What? Just go ask it. It's got plenty of both. Including knowledge about ethics.

and beliefs

If you're taking a turn for the religious, you're losing points with me. If you're talking about knowledge it believes is true, then it's got plenty of that. It believes 1+1=2. As do I. Possibly a stronger belief, even.

but instead basically an autocomplete algorithm, derived from an absurdly massive amount of training data.

Are you any different? Even with good compression, how many exabytes of data have you consumed with your eyes, ears, and all your senses in all your years? Even if you cut out the boring bits.

Are we any more than just a dash of egoism and the "eat, fuck, survive" instructions on loop that we inherited? (Big shout-out to my boy in the Proterozoic Eon who came up with the middle step. I'm still a big fan of that one. Some change is for the better.)

Comment Re:Matter of definition (Score 1) 261

Yep. Too many people are treating these things like they're an all-knowing god. Or going to be here next week or something.

General intelligence can be quite dumb. A person with an IQ of 80 is a natural general intelligence. It just means it can generally work on any problem. The term just exists to differentiate it from narrow or specific AI that is only good at one thing, like all those chess programs. Put GPT up against Stockfish in a chess game and GPT always loses, usually by making an illegal move. I'm not sure which will happen first: an AGI beats Stockfish, or an AGI creates a program that can beat Stockfish. Both would be monumental.

"[AGI is] an AI that makes me a billion dollars". That's really taken out of context. Also it was 100 billion dollars. It was an email between OpenAI and Microsoft about when Microsoft's claim to profits expires and when the thing should be released openly to the public. "When we hit AGI" was the previous goalpost, but $100 billion is a nice definite number.

Their charter literally has the definition tucked away in it:

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

...And yeah, they got their definition wrong too.

Comment Re:Why? (Score 1) 261

Why do we even want AGI?

Competitive advantage over any peer that does not have AGI.

Cheaper and faster development in engineering, science, and medicine. A personal doctor on call. A team of engineers to figure out and fix that issue you had. Its potential to advance science helps everyone out. EVEN if you buy into the whole Hollywood mythos of it turning into some sort of god, if the Omnissiah publishes its papers and they work, then I'm all for it.

"rather than some self-aware intelligence" It doesn't have to be self-aware to be generally applicable.

"that inevitably brings up ethical questions about "machine rights" We've found tool-use, language, economics, math, and lying in the animal kingdom. Plus they are self-aware. It's hard to argue that they are not general intelligences. Of course there will be debates in the future. But since the existence of other animals hasn't cause society to come crashing down, it's fair to say we'll figure a way forward with this conundrum as well.

"so that a few years from now, even AI experts have a difficult time telling the difference." It really is time for professional Blade Runners.

some reliability checks

Reasonable.

such as hard-coding it to immediately inform you that it's an AI on the other end of the line

Yeah, this isn't even anything technical we need to do, it's just getting the companies who run these things to slap a watermark on it post-production. Or a standard "(GPT-4 engine says)" prefix on output. Trivial to bypass or undo as an end-user... but so is everything we'd try to put in here. This is a political issue.

limiting bots' access to potentially dangerous external systems -- i.e. putting strict rules on how much "agency" an AI can have,

Certainly a process limiter so you don't bankrupt yourself like with AWS. But if you give it access to your credit card and authority to go use it, and it ends up buying the maximum amount of paperclips it can, that's on you, buddy. Plenty of people are trying to make these things as creative as possible. So far, it just resorts to gibberish and making stuff up when it hits its limit. People are also certainly experimenting with more free-form and open goals rather than Q&A prompts.

and on the whole building in some sense of what is right and wrong, i.e. what could cause human harm

Superintelligence, Bostrom, 2014, ch. 12, "Acquiring Values". An insightful read even if the guy is very full of himself. But doing this is hard because we can't really describe what's right and wrong. There are a plethora of Hollywood examples of how easy it is to mess that up, and there's really no mathematical formula for "don't be a dick". HAL from 2001, the movie, is just following programming when it kills those guys. But if you're really worried about uncaring, soulless automatons without any sense of ethics or morals being given control of the world in just about every facet of life... I hate to break this to you, but corporations already run the joint. And they're making these AIs.

"not some pipe dream of a new intelligence that acts like a human" AGI doesn't imply it'll act like humans. That's just Hollywood writers getting lazy and making it "more relatable". Laymen talking about this topic imagine AGI just like a little person in a box and it won't be REAL (c) AI until it... wakes up, or whatever hollywood chaff they toss in here. But no, it'll certainly be different and look downright alien to us. It simply won't think the same. Just like sheep have social intelligence while solitary spiders do not. They're built different.
