Comment Re: I think you're missing the point (Score 1) 59

there is no evidence to support this claim

Yes, there is and you have provided it: "Few people today do back-breaking farm work, or monotonous, dangerous factory work, or laborious accounting on paper"

We have already seen many tasks at which humans can no longer outperform inorganic 'life' (it is not life yet, but the point stands: 'things' works just as well). If history is any guide, this trend will continue.

Naked apes aren't that special. In many things we're not even better than other organic life and have never been. Try fighting a chimp.

Comment Re: I think you're missing the point (Score 1) 59

You're missing my point. I did not say that speed doesn't matter. It most definitely does.

The key insight, however, remains that at some point organic life will be worse than inorganic 'life' at (almost) every economically interesting thing. Going from 7 fundamental skills to 6 skills is an entirely different situation than going from 1 to 0.

You seem to argue that new categories of work for humans will always pop up, forever. That is clearly an absurd position to defend, so it must really be a weaker claim like "new categories of work for humans will pop up this time too". That version is more interesting to discuss, because it forces you to spell out which skills we will still be better at and therefore which exact categories will appear. Personally, I'd say that because robotics will probably not advance as fast as AI, we will still have a place in tasks where flexible physical manipulation of things is important (not burger flipping, but things like plumbing).

Comment Re: I think you're missing the point (Score 3, Insightful) 59

It's not about the speed. It's about the number of economically viable skills humans have left after each revolution. We're not counting up. We're counting down. And at some point we will reach zero or something close enough to it that running society the way we've been doing breaks down completely.

Comment Re:Think of the programming grandchildren (Score 1) 16

No, you asked "tomorrow's programmers get what from where?", with 'tomorrow' clearly intended to mean 'the (somewhat near) future'. That includes everything that might reasonably exist in said future.

You gotta think out of your box, man. The current LLMs are not perfect, but the amount of resources being invested in further development of ANNs by both the private and public sectors is insane (it's already at least 10% of R&D spending, worldwide). An entire legion of jaded "I walked through the snow in all AI^H^HMachine Learning winters, both ways" Slashdotters isn't going to change that. It really is different this time around, and we need as many smart, knowledgeable minds as we can get to try to get it right (or at least not horribly wrong).

Comment Re:Feds funded $2.3bn vaccine research (Score 2) 11

A populace misguided by the false dichotomy of capitalism vs. communism thinking that 'government sucks' and 'companies are efficient'.

The pharmaceutical industry needs to be collectivized. Besides all the natural monopolies in infrastructure, essential services such as national security and defense, policing, and firefighting clearly should not be driven by profit. Health care, and especially the highly inelastic 'market' for pharmaceuticals, fits perfectly into a collectively funded system.

Comment Buy yachts? (Score 2) 61

I know it was meant to be funny, but it does actually point to something a lot of people don't realize: A veritable shitload of things we regularly use are an utter waste of resources for AI. We've built an entire world around our physical needs and limitations.

Take a look around where you are right now, look at different objects and ask yourself what use a(n advanced) AI would have for it. Unless you're in some kind of industrial environment like a factory, most things are very hard to come up with a use for.

Comment Re:Good for Lecun (Score 2) 30

how about we just unplug it like we do any other piece of misbehaving machinery

Yeah, like we do with botnets and the machines of malicious adversarial state actors. We just unplug them. Easy-peasy.

that these models will magically discover new laws of physics that will allow them to transcend the heavy infrastructure requirements (electricity, communications, teams of competent engineers) that allow them to run at all

What? Many of these models can run on consumer hardware. Sure, they are not as advanced as the ChatGPTs of this world, but a human brain runs on about 20 watts, so no laws of physics need to be broken for some future, highly efficient and intelligent model to greatly surpass every human on this planet in intelligence, and to survive in a distributed and covert manner. The question isn't if this will happen, but when.
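For a rough sense of scale, here's a back-of-the-envelope comparison in Python (the 20 W brain figure is the commonly cited estimate; the 300 W draw for a consumer GPU running a local model is just my assumption):

    # Rough daily energy budget: human brain vs. a consumer GPU running a local model.
    # All figures are ballpark assumptions, not measurements.
    BRAIN_WATTS = 20     # commonly cited estimate for the human brain
    GPU_WATTS = 300      # assumed draw of a single consumer GPU under load

    HOURS_PER_DAY = 24
    brain_kwh = BRAIN_WATTS * HOURS_PER_DAY / 1000   # ~0.5 kWh/day
    gpu_kwh = GPU_WATTS * HOURS_PER_DAY / 1000       # ~7.2 kWh/day

    print(f"brain: {brain_kwh:.2f} kWh/day, GPU: {gpu_kwh:.2f} kWh/day, "
          f"ratio: {gpu_kwh / brain_kwh:.0f}x")

The gap is barely more than one order of magnitude, which makes it an engineering problem, not a new-physics problem.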

I find it highly surprising (and depressing) that in a community like Slashdot there seems to be a combination of, on the one hand, a belief that human biology and organic stuff is special and has some unattainable fairy dust sprinkled over it that will let us rule the universe forever, and on the other, a jadedness based on past AI hype cycles. Talking about ChatGPT as 'just a statistical model' is like calling a supercar 'just a motorized horse buggy'. Looking at the present and the past will only get you so far in predicting the future.

Comment Re:Nah, it's been done. Things revert. (Score 1) 186

I don't know but I know it's an evolution of humans.

How sure are you of that and why?

The universe is all about evolution, the human race is evolving. It's nuts to think that AI is in any kind of state now or ever to take over or surpass the evolution rate of human beings, who will harness AI to increase the rate of evolution.

This is a weird argument. The evolution of AI itself is clearly going way, way faster than human evolution. Even if you take a moment before the invention of computers as the starting point, AI has evolved to, let's say, ant- or bird-level intelligence in only about 100 years. In addition, our modern social safety nets etc. have clearly slowed down, if not completely eliminated, human evolution through the normal natural selection process. Actively evolving our bodies is an option, but it would mean either eugenics, genetic engineering or cybernetic engineering. All of them are problematic, but the latter two are the most problematic:

Cybernetics:
Can you put wheels on a horse? Yep, you probably can, though the engineering would be incredibly tricky to get right. But will it ever be able to compete with a car?
Neuralink-like brain-computer interfaces, and cybernetics in general, have the same basic issue as this example: trying to 'upgrade' a million-year-old legacy 'design' with a modern approach. Again, not impossible, but way harder and less potent than just designing a new system from scratch.

Genetic engineering:
Incredibly long feedback loops: it takes a good chunk of a human lifetime to know for sure whether your genetic hacks actually worked properly. Unless... you use advanced simulations, at which point you've effectively created AI anyway. And that is on top of the issue of trying to upgrade a legacy design: genetically engineering 'wheels' onto a horse is probably outright impossible.

Food for thought: The propagation speed of signals in our bodies maxes out at about 100 m/s. The (currently known) theoretical limit is about 300,000,000 m/s, 6 orders of magnitude (or 'a million times') faster. AI already operates close to that limit.

This is utterly irrelevant until you are working on a game of instant reflexes - evolution favors the long play.

Exactly, it favors the long play. Why do almost all advanced forms of life have eyes (sensors for light)? The eye has evolved many times, in different places in the evolutionary tree; this is called convergent evolution. The answer is really simple: (visible) light is the quickest and most information-dense means of gaining information about your surroundings, and thus it provides an evolutionary advantage always and (almost) everywhere in the universe.

Now think about signal propagation speed again. Our communication networks operate near the theoretical limit. Would a communication network of an advanced alien civilization do that as well? Yes. Yes, it would. The technology would advance until it would reach that physical limit. It is convergent technological evolution.

Now ask yourself whether increasing internal signal propagation speed provides an evolutionary advantage (hint: myelin). Ask yourself whether what comes after us will have an internal signal propagation speed close to the speed of light or not. Look at your phone and computer and realize that they are already there, wirelessly even.

Beyond that, there are tons of other evolutionary advantages inorganic intelligence ('AI') has over us hairless apes: way higher communication bandwidth, no significant deterioration due to lack of gravity, oxygen or water, a more modern cooling system, far easier to scale in both (physical) size and power input (a human runs on about 100 watts and it is very hard to increase that), easier to protect against radiation or the elements, incredibly easy to interface with a myriad of already existing sensors and actuators, easy and flexible hibernation, etc., etc.

Most of our evolution hasn't been aimed at achieving high-performance intelligence and rationality (and looking at politics today, it shows :-P). The design of AI, however, explicitly aims at exactly that.

So tell me: What comes after humans?

Comment Re:Nah, it's been done. Things revert. (Score 1) 186

All AI is and ever will be is another tool.

Tell me: What comes after humans?
Be it in 100, 1000 or a million years.

Do you believe that humans are the epitome of what this universe can produce when it comes to intelligence? Or will something else take our place at the helm at some point?

Food for thought: The propagation speed of signals in our bodies maxes out at about 100 m/s. The (currently known) theoretical limit is about 300,000,000 m/s, 6 orders of magnitude (or 'a million times') faster. AI already operates close to that limit.
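To make that concrete, a quick sanity check in Python (using the same rounded numbers as above):

    # Signal propagation: fast myelinated nerve fibres vs. the speed of light.
    import math

    NEURON_SPEED = 100            # m/s, roughly the max in the human body
    LIGHT_SPEED = 300_000_000     # m/s, the (currently known) theoretical limit

    ratio = LIGHT_SPEED / NEURON_SPEED      # 3,000,000x
    orders = math.log10(ratio)              # ~6.5 orders of magnitude
    print(f"{ratio:,.0f}x faster, about {orders:.1f} orders of magnitude")

So 'a million times' is, if anything, an understatement.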

Again: What comes after humans?
