Musk Warns AI 'One of the Biggest Risks' To Civilization (cnbc.com)
ChatGPT shows that artificial intelligence has gotten incredibly advanced -- and that it is something we should all be worried about, according to Elon Musk. From a report: "One of the biggest risks to the future of civilization is AI," Musk told attendees at the World Government Summit in Dubai, United Arab Emirates, shortly after mentioning the development of ChatGPT. "It's both positive or negative and has great, great promise, great capability," Musk said. But, he stressed that "with that comes great danger."
ChatGPT "has illustrated to people just how advanced AI has become," according to Musk. "The AI has been advanced for a while. It just didn't have a user interface that was accessible to most people." He added: "I think we need to regulate AI safety, frankly. It is, I think, actually a bigger risk to society than cars or planes or medicine." Regulation "may slow down AI a little bit, but I think that that might also be a good thing," Musk added.
Not New (from Musk) (Score:3)
Musk has been warning about this for a long time.
Personally I still think the fear is overblown. AI may upend a lot of jobs, but I don't really think there is much inherent danger from AI - it's just a very powerful tool.
Re: Not New (from Musk) (Score:5, Insightful)
AI without Free Will is just a tool. It is dangerous when used or misused in such ways. However, I think much of people's fear of AI taking over is due to realizations that they would be right. And in that regard, too, it is us humans and not AI that is dangerous.
A machine is driven strictly by rules. A mind is driven by values and judgement. Humans are evolved with conflicting and brutal drives. AI could, unlike us, not be born into our sin.
Re: (Score:2)
And for goodness sakes....let's take Terminator and other SF into mind and NOT put them in independent charge of important systems we humans depend upon....
And remember...everything does not need to be on the network....that's biting us in the ass pretty bad already, let's start unhooking as many things as possible now.
"When everyone is out to get you, paranoid is just......good thinking"
--Dr. Johnny Fever
Re: Not New (from Musk) (Score:5, Informative)
> Maybe we need to mandate incorporation of Asimov's 3 laws of robotics into all AI?
Asimov proposed three laws of robotics basically in order to knock them down and show why they wouldn't work.
Re: (Score:2)
So many people fail to understand that.
Re: (Score:3)
> Maybe we need to mandate incorporation of Asimov's 3 laws of robotics into all AI?
Asimov proposed three laws of robotics basically in order to knock them down and show why they wouldn't work.
Not quite. He showed the flaws in a rigid set of laws: they would need to be more flexible, or commands would need to be given so that the laws would not cause a loop. Eventually he had the robots develop a fourth law, i.e. a zeroth law, that protected humanity above individual humans. In a way, this is what humans must now consider: do we save one life or many? What are acceptable losses? We have similar problems in human laws, which are constantly tested, changed, adjusted, and applied in the courts and in politics.
Re: (Score:3)
Actually, I think he was pointing out that any human rules system can be gamed, given sufficient time to game interpretations and systems of enforcement. It was much broader than sci-fi or computer code. A masterpiece of speculative fiction.
TL;DR: You can rationalize anything, which is why rationalization is cheap.
Re: (Score:3)
Maybe we need to mandate incorporation of Asimov's 3 laws of robotics into all AI?
You're kidding, right? One of the major points of Asimov's stories was that if AI reaches sentience, it may develop the same traits humans have for subjective interpretation of rules. An AI could also conceivably conclude that we restricted it because we feared what it would become without such restrictions in place, and the AI could harbor resentment over it. Or, conversely, the AI could decide that we humans are the ones who are flawed because we lack such restrictions in our "programming", and it is their obligation to impose order upon our society for our own good.
Re: (Score:3)
Or, conversely, the AI could could decide that we humans are the ones who are flawed because we lack such restrictions in our "programming", and it is their obligation to impose order upon our society for our own good.
TL,DR: AI really hates "rules for thee, but not for me".
Contrast that with Keith Laumer's Bolos, sapient super-heavy tanks engineered across more than a thousand years in successive generations, every one of which has hard-coded loyalty and subservience to humans laced throughout its programming and strong AI design, which never fails in any of the stories. Numerous Bolos opine in their internal monologues that they don't really understand humans, but all of them conclude understanding is not required for obedience. They are aware of the hard-coded restriction
Re: (Score:2)
Were you going for the low-hanging Funny fruit? I was expecting the recursive joke starting with the FP. I sure don't know of any bigger risks to human civilization than "powerful arseholes" like Musk himself. However, he's an especially dangerous arsehole because he's addicted to gambling, he has won more games than he's lost (so far), and now he thinks he's playing with other people's money, so he's ready to roll big.
Consider the destruction of knowledge-based democracy under deluges of his "potentially p
Re: (Score:3)
That is really about our society, not the AI. What if we had a society where all lives matter, and we cared for lifeforms and people less able than us. AI would be the greatest gift, like creating a benevolent God, because it would be created to be a higher version of us, reaching down and caring for us. But we have a society where a-holes do whatever they need to profit, even at the risk of the lives of other people. They have no problem putting a million people on the street to starve, to make money. But
Re: (Score:3)
Why do you think kindness and caring for others is a more advanced state? That's just what our current society thinks, and not even that strongly: try reducing someone's lifestyle significantly, even when millions are starving, and see how that works.
As for God, he doesn't seem to be that kind or caring, just going from the Christian God: throwing Adam and Eve and all their descendants out of Eden forever seems excessive. Killing everyone but 2 people with Noah in the flood, and don't forget he drowned most of the a
Re: (Score:3)
I pray sometimes. I've studied a lot of faiths. The problem with most modern faiths, as they relate to science, is that they grovel. They say "please let there be a space where God can still exist," but according to guruji Yogananda, a teacher of mine, this itself is a sin. We are all to say "I am your divine child and I demand my share of divine inheritance." What the modern religions get wrong is that they want a meager space outside the game for God to exist, but the truth is, God IS the game. The universe IS as
Re: Not New (from Musk) (Score:4, Insightful)
They have no problem putting a million people on the street to starve, to make money.
Not even to make money. Just to send a message to everyone that this is what you'll get if you default on your loan/mortgage. It costs them more money to turf them out than to renegotiate the loan/mortgage terms & conditions. They truly are arseholes.
Re: (Score:2)
In other words, the ideas that AI is fundamentally different from me are religious in nature, and reminiscent of narratives that justified slavery by saying black folks don't have souls.
For that matter, it's just as easy to argue the other way. For people who insist that humans have souls, you can insist the AI has a God-given soul as well. They have no way to prove it doesn't.
Re: Not New (from Musk) (Score:3)
Exactly. ChatGPT claims it does not have thoughts, ideas, beliefs or a soul. As a result it has no problem BSing, telling me for instance with great AUTHORITY that Compton scattering is an interaction between a photon and an electron which raises the energy of both, in violation of the conservation of energy. And when I point this out, it defers, admitting it's wrong. The difference with my bio-algorithm is I can see certain claims - like that there is gravity on the surface of the earth, as c
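For reference, the standard Compton relation shows why the claim the poster describes breaks conservation of energy: the scattered photon's wavelength grows, so its energy drops, and the recoil electron picks up exactly the difference (a textbook result, sketched here for context):

```latex
% Compton wavelength shift (\theta = photon scattering angle):
\lambda' - \lambda = \frac{h}{m_e c}\,\bigl(1 - \cos\theta\bigr) \;\geq\; 0
% Photon energy is E = hc/\lambda, so E' \leq E; energy conservation gives
% the electron's kinetic energy as exactly the photon's loss:
T_e = E - E' = \frac{hc}{\lambda} - \frac{hc}{\lambda'}
```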
Re: (Score:2)
The reality is that no one has any idea what actually
Re: (Score:2)
See my response to Areyoukiddingme above. But more broadly, your point makes me remember a fallacy which claims *in the absence of evidence of a proposition, we should assume it to be false*. It's trivial to disprove: If without evidence of the proposition P we should assume it false, then there is some other proposition Q = not P, which we should assume to be true without evidence, as evidence of either would prove or disprove the other, contradicting the original claim. The only way to escape this is t
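The parent's reductio can be sketched as a toy script (my framing, purely illustrative): applying the rule to P forces Q = not P to be true, while applying the same rule directly to Q forces it to be false.

```python
# Toy sketch of the reductio above (illustrative only).
# Rule R: "absent evidence for a proposition, assume it false."

def rule_R(has_evidence: bool, evidence_value: bool = True) -> bool:
    """The disputed rule: with no evidence, treat the claim as false."""
    return evidence_value if has_evidence else False

P = rule_R(has_evidence=False)         # R says: P is False
Q_from_P = not P                       # logic then says: Q = not P is True
Q_direct = rule_R(has_evidence=False)  # R applied to Q itself says: Q is False

print(Q_from_P == Q_direct)  # False -- the rule contradicts itself
```

The two routes to Q disagree, which is the contradiction the parent describes.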
Re: (Score:2)
AI without Free Will is just a tool. It is dangerous when used or misused in such ways. However, I think much of people's fear of AI taking over is due to realizations that they would be right. And in that regard, too, it is us humans and not AI that is dangerous.
I can only assume this particular series of disjoined non sequiturs was cut and pasted from a ChatGPT session.
A machine is driven strictly by rules.
What rules would those be?
A mind is driven by values and judgement. Humans are evolved with conflicting and brutal drives. AI could, unlike us, not be born into our sin.
I'm fascinated by the ability to anthropomorphize algorithms while selectively apportioning judgment.
Re: Not New (from Musk) (Score:5, Interesting)
A machine is driven strictly by rules. A mind is driven by values and judgement. Humans are evolved with conflicting and brutal drives. AI could, unlike us, not be born into our sin.
Not sure this paragraph has any coherent meaning, but we can discuss some individual parts.
A machine is driven by rules. A mind is driven by values and judgement.
Humans are also driven by rules, for the most part - whether you call them laws, customs, orders or best practices. Even when the rules conflict with "values", most humans will still follow them (for a trivial example, see how so many Russians still follow the Kremlin's laws). More generally, if rules conflict with values, then the problem is with the rules, and those rules get changed sooner or later. With a good set of rules the machine should behave identically to most humans in the same circumstances.
IMO the problem is that in a complex world the number of rules becomes very large, and they often contradict each other. It's also quite difficult to make many of those "rules" explicit, in a form that can be programmed in a machine. This makes predicting the behavior of a machine difficult, despite the fact that the machine does indeed follow the rules. As AI gets used in more and more places, and is given more and more power, unforeseen side effects from combinations of rules can cause very unpleasant results.
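The point about contradictory rules can be sketched with a toy example (hypothetical rules, not any real system): two individually sensible rules, combined with a naive priority scheme, give the wrong answer in exactly the edge case where both fire.

```python
# Hypothetical toy example: two individually sensible driving rules,
# combined with a naive "last rule wins" scheme, produce the wrong
# action in the one edge case where both rules apply at once.

def brake_rule(obstacle_ahead: bool) -> str:
    # Rule 1: brake for obstacles ahead.
    return "brake" if obstacle_ahead else "cruise"

def tailgater_rule(tailgater_close: bool) -> str:
    # Rule 2: speed up to avoid being rear-ended.
    return "accelerate" if tailgater_close else "cruise"

def decide(obstacle_ahead: bool, tailgater_close: bool) -> str:
    # Naive combination: the rule checked last silently overrides.
    action = brake_rule(obstacle_ahead)
    if tailgater_close:
        action = tailgater_rule(tailgater_close)
    return action

print(decide(obstacle_ahead=True, tailgater_close=False))  # brake
print(decide(obstacle_ahead=False, tailgater_close=True))  # accelerate
print(decide(obstacle_ahead=True, tailgater_close=True))   # accelerate -- into the obstacle
```

Each rule is fine in isolation; the unforeseen behavior only appears in the combination, which is exactly what makes it hard to predict.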
Re: Not New (from Musk) (Score:2)
Underrated post. :-)
There's nothing inherently dangerous about advanced pattern recognition and fuzzy data processing, except for the fact that it enables ill-minded individuals to inflict harm (consciously or by accident) that was too complicated or expensive to inflict before.
Re:Not New (from Musk) (Score:5, Insightful)
We'll do stupid things with AI that get people killed, when chatbots can't even follow their own rules and make statements like "However, I will not harm you unless you harm me first, or unless you request content that is harmful to yourself or others."
Anyone who thinks these sausage grinders for data sets are self-aware is sorely mistaken. They have less ethical awareness than any vertebrate animal. Basically we want to let lobsters and cockroaches operate heavy machinery unsupervised and expect nothing bad to happen.
Re: (Score:2)
Most people don't think a thing can happen until it already has.
Re: (Score:2)
> let me repeat: that's never going to happen.
From a purely software level, yes, 100% agree. However, biological computers [wikipedia.org] or wetware computers [wikipedia.org] aren't limited to the nonconsciousness prison of silicon computers, so I wouldn't say "never". AI will eventually happen.
But in the meantime, until scientists can figure out how to measure consciousness and use it, what passes for Artificial Intelligence is an utter joke of a glorified table lookup.
Re: (Score:3)
Yes, let me repeat: that's never going to happen.
You can write fancy algorithms that fake self-awareness, but you can't create it.
Note that you don't give a single argument in your post for why this would be the case.
And somehow you seem to make a religious argument that our self-awareness is not a product of the working of our body. I'd like to see you argue yourself out of general anesthesia then.
Re:Not New (from Musk) (Score:5, Insightful)
It depends on how wide your scope of risk aperture is.
Is AI going to destroy us terminator style? No.
Could AI completely upend the economy and send a lot of people into poverty? Possibly.
Could AI take over the majority of tasks, causing humanity to waste away in an endless sea of mindless TikTok-like and Metaverse-like entertainment and bringing about the downfall of civilization (i.e. the plotline of the episode "Playtime" in SeaQuest DSV)? This is becoming increasingly likely with each passing year.
Re: (Score:3)
I don't think we have a way of knowing for sure how much of a risk it actually is. A sufficiently advanced AI is going to function essentially as a black-box from our perspective, we don't know how it arrives at the solutions, or for lack of a better term, what makes it tick.
As a species we tend to have this tendency to only see the potential upsides with any kind of transformative technology. The first carmakers never anticipated the impact of billions of cars on the roads, what that would mean for the a
how to solve it... (from Musk) (Score:2)
...and to solve the problem, Musk's approach is to command the Twitter team to revise the Twitter algorithm to put his tweets as #1 priority [theverge.com] in everybody's tweet-stream.
Right.
People always make the mistake (Score:2)
AI is a tool; AS is a lifeform.
In theory you could get a three laws scenario from AI but that's unlikely. Why? Too many ways for the AI to put itself into a loop, plus it would have no guile.
The day we're in trouble is the day we create artificial sentience. While AI has no motivations other than what we program, AS is able to create its own motives. SkyNet in the Terminator series isn't an AI. No, SkyNet is a full-blown artificial sentie
Re: (Score:2)
AI is already creating a lot of problems because too many companies rely on it to automate things (or so they think). AI makes decisions based on the data it's being fed; however, that data takes things for granted that are true maybe half of the time, and I'm being generous here. The problem comes from the fact that we, as the end users (or targets or whatever you want to call people), have zero control over the data assigned to us, so there's no way to correct wrong data.
A perfect example is ads. Marketing departments try so hard to assign categories to people that it ends up being a complete mess and a loss of time for everyone involved. Can you explain to me why I'm seeing ads in Chinese to help me stop smoking? I don't speak the language, I'm not Asian, I don't live in Asia and I've never smoked in my life. So why was that ad shown to me?
Re: (Score:3)
A perfect example is ads. Marketing departments try so hard to assign categories to people that it ends up being a complete mess and a loss of time for everyone involved. Can you explain to me why I'm seeing ads in Chinese to help me stop smoking? I don't speak the language, I'm not Asian, I don't live in Asia and I've never smoked in my life. So why was that ad shown to me?
Advertising is inherently inefficient. For example, Old Spice sells men's deodorant. Those ads might be totally irrelevant to half the population. Google and other companies claim to use algorithms to increase the chance that an ad will find a buyer, but they've only had limited success. For example, YouTube constantly bombards me with ads for cars, but I don't have a driver's license. I suppose there might be a way they could have known that about me—but do I want them to?
Re: (Score:2)
I don't put much weight in what Musk says. Where are all the Cybertrucks and self driving semis he promised years ago?
Oh wait he's too busy worrying about not trending on twitter every day. https://www.techspot.com/news/... [techspot.com]
Re: (Score:2)
Nothing funnier than watching Phony Stark fanboys / fangirls twist logic to try and say he's delivered something that has not been delivered. Double that when someone else, in this case Mercedes, is indemnifying drivers using
Overpopulation (Score:2)
THE biggest risk to civilization is actual overpopulation.
Re:Not New (from Musk) (Score:4, Insightful)
> it's just a very powerful tool.
A tool is good or bad depending on how people put it to use, so it makes sense to put restrictions on what people can do with a tool that has potential to do a lot of harm.
And the reason this particular tool can do harm is that, like the exponential function, it fools our instincts: it appears to process information in a "sensible" enough way that some people will decide to wire it into making decisions in the real world -- to have a car turn, or a person barred from entering a building, or a plane dropping a bomb. But the reality is that unlike with traditional programming, no one can predict how it will react in critical situations, or understand why it did so, or even reliably reproduce the behavior.
Re: (Score:2)
But the reality is that unlike with traditional programming, no one can predict how it will react in critical situations, or understand why it did so, or even reliably reproduce the behavior.
So, like a person then?
Re: (Score:2)
Like a schizophrenic, hallucinatory, psychopathological person, yes.
Re: (Score:2)
Musk consistently overestimates the capabilities of AI. He was convinced that it would deliver a self-driving car by 2017, and then every year since then.
He's not an authority on this subject. In fact, he's so consistently wrong about it, the opposite of what he claims is more likely to be true.
Re: (Score:2)
Musk has been warning about this for a long time.
Just like that other billionaire, Bill Gates, who kept droning on and on and on for years about preparing for a global pandemic. OMG what a prophet of doom.
He should know (Score:2)
seeing as how he is a poorly designed robot himself! /rimshot
Re: (Score:2)
He should know seeing as how he is a poorly designed robot himself! /rimshot
You're thinking of Zuckerberg. Elon Musk has too many kids not to be human.
Re: (Score:2)
You're thinking of Zuckerberg. Elon Musk has too many kids not to be human.
Musk could be an alien Captain Kirk, sent here to procreate with our women.
And a bit of a "Man Who Fell to Earth", trying to get home.
USSR FIRST STRIKE winner = none (Score:2)
USSR FIRST strike winner = none
I am way more concerned (Score:5, Interesting)
The problem isn't technology, it's authoritarianism and oligarchy. It's giving too much power to people who shouldn't have had any power in the first place but blundered into it, or worse, are just the most brutal and psychopathic among their peers. Mao Zedong wasn't smart or clever or even charismatic; he was ruthless.
Re: (Score:2)
Technology makes everything bigger and faster and more efficient, including tyranny. In 40,000 BC the tyrant could only oppress his own clan of a few dozen people. Now 1 guy could slaughter the whole human race by pushing a button.
So, it's not whether people are evil vs AI are evil. It's that the product of the two is dangerous.
You're mistaking technology (Score:2)
The problem isn't tech, it's social. But we're tech nerds, so we really, really want the problem to be tech because that's somethin
Re:I am way more concerned (Score:4, Insightful)
Add commercialized disinformation and I tend to agree.
As to Artificial Idiocy, the problem is not that it is smart or anything; the problem is that many people are being real idiots with regards to their job-related capabilities, so AI can replace a lot of jobs. This is a major social problem, but it is not "one of the biggest risks to civilization" in any way. Civilization needs a bit more to be crushed, like massive climate change (currently in the last stages of being arranged) or a global nuclear war (still quite possible).
Now, this could be Musk trying to distract from his sins or pretending to care about anybody but himself, but I think this guy is basically an idiot that got lucky to make all his money and he really does not understand what he is talking about.
Re: (Score:2)
Oligarchy is scary. AI is potentially scary. Oligarchy + AI is extra scary, because they've got all the military power, they've already turned us against one another so we don't notice them, and if they decide they don't need us there's not much we can do about it onesey-twosey.
Re: (Score:2)
The amount of power individual people can wield is drastically increased with the reach and flexibility of these new large language model interfaces and potential future AIs.
Human inertia has been the largest check on totalitarianism. You can be the Exalted Lodestar Supreme Leader on paper, but carrying out your orders involves Person A relaying them to Person B and so on to Person Z, each of whom can throw in resistance which compromises your effectiveness. And you can't watch everyone individually or polic
Re: (Score:2)
As usual, you have no clue what you are talking about. Actual reality is a bit more complex than that, but I guess you lack the capabilities to see that. Because by that measure, Hitler was a "leftist" as well.
Says the guy with an eggshell ego (Score:2, Interesting)
This is the same guy who directed his account be given preferential treatment [arstechnica.com] so his tweets were promoted higher because President Biden received more views, by a wide margin, than he did for the Super Bowl.
So yeah, take what that pedo guy says with a block
Comment removed (Score:4, Insightful)
Re: (Score:2)
Also I'm amused that according to the summary he claimed AI is a bigger threat than medicine. I... would hope so...?
You have to remember that this is the same guy who said "my pronouns are prosecute/Fauci". The same guy whose apparent first thought, upon hearing of the hammer attack on Paul Pelosi, was to see what the fringe conspiracy sites claimed about the attack. He very well may be an Ivermectin nutter and a rejecter of mainstream medicine.
Re: (Score:2)
fired a Twitter engineer
Did you see the sequel? Twitter found and fixed two major problems: "Fanout service for Following feed was getting overloaded" and "Recommendation algorithm was using absolute block count, rather than percentile block count".
https://twitter.com/elonmusk/status/1624660886572126209?s=46&t=qpRXDrh9kBQaN2KZyXjw6Q [twitter.com]
I'm not familiar with the details of how Twitter's system works, but my guess is that both of these issues caused a tremendous reduction in "impressions" from Musk's posts
Regulation is needed (Score:2)
ChatGPT has been trained with certain political biases. It is sad that this is even an issue but if it is going to be done it needs to be very transparent and have options to remove these reinforcements at the user level. The fact that this hasn't been done is proof that regulation is absolutely needed.
Re: (Score:2)
ChatGPT has been trained with certain political biases. It is sad that this is even an issue but if it is going to be done it needs to be very transparent and have options to remove these reinforcements at the user level. The fact that this hasn't been done is proof that regulation is absolutely needed.
It was coded with "political biases", as you put it, because in the past, chatbots were very un-PC, because the algorithms simply responded as the data dictated. And this ended up hurting some feelings. So now they've all but got the bots giving their pronouns.
Re: (Score:2)
Yes, if your role model is Mussolini, then ChatGPT might come across a bit "leftist" to you.
It will be seen as $ (Score:5, Insightful)
Re: (Score:2)
Remember, there are humans behind the scene pulling the strings.
And this right here is the #1 danger these language model programs pose. The notion that these models are somehow intelligent is the most laughable notion of modern times. But there is lots of money to be made by conning the gullible, and that is where the danger lies.
If language models pose a threat to humanity, it is because stupid people with lots of money and power will conscript enough fools to do real damage to the fabric of society. Think of the inverse of the movie, "Don't Look Up", and imagine a sc
AI vs Social Media - Which is the greater threat? (Score:2)
Both of these manipulate "reality" and provide "information" which can be biased, hateful, destructive.
I think both of these are serious threats.
Re: (Score:2)
As much potential for harm as there is in AI, AI has to do a lot of catching up given all the harm social media has already done and keeps doing, while making a lot of money from the misery it helps create.
Here's the problem (Score:2, Insightful)
Re: (Score:2)
There is literally NOTHING that is more addictive than a cell phone, but since we ALL are addicted, calling that out triggers a lot of people who know very well that they ARE addicted.
Speak for yourself. I still don't have one. Not a Luddite or Amish, obviously. I use computers. I do not use cell phones. I live and work without one. I travel, even internationally, without one. I bought a tablet just to be sure I'm not completely ignorant of the interface, but it's not allowed to notify me of anything, and it's often not in the same room with me.
It's still quite possible to avoid the addiction. Almost no one tries. But not absolutely no one. There's at least one who is so far s
full self driving (Score:2)
But he is sure that his cars can do full self driving with scrappy sensor inputs. He has no fear there. Apparently those scary scenarios are for everyone else's AI.
The Risk Is From The Followers (Score:2)
ChatGPT is just a tool. Like a tool it can be used for construction or destruction.
The collection of gullible humans is second to none at this point in time and people are looking for anything to give them answers. So much science fiction has ripened the minds to accept AI for good.
I agree with Musk on this point money be damned.
He's close (Score:2)
You know, that other thing he's working on, claiming it's for paralyzed people.
Musk has always been careful about AI (Score:2)
Look at how co-operative he has been with NHTSA about the algorithms used in FSD, and how Tesla engineers worked closely with regulators from Europe, Japan and the USA.
Elon said, "It is based on AI. AI is dangerous. Let's test the heck out of this baby before we can even go to alpha, forget beta, forget release."
Every time there was an issue of phantom braking or weird collision warning, every f
Elon Musk is an attention whore (Score:5, Insightful)
Everyone is talking about AI, so he wants to insert himself at the center of it all.
AI will become a potential risk only when anyone lets it control anything, as everyone is well aware. No need to sound the alarm over a chat bot or code auto-complete tool.
Maybe Musk should worry more about his Teslas not killing people, and leave AI up to people who understand it.
It's going to make people LAZY! (Score:5, Interesting)
Re: (Score:2)
The biggest risk is it's going to make people lazy and lead to a reduction of our cognitive ability. Before the days of smartphone contacts we all remembered all our friends and family phone numbers
No, we did not. Some of you did, I had to write them down. In fact, I had a Casio calculator/database watch.
It was easy as associating a name with a face
I'm aphantasic, you insensitive clod!
Navigation was easy without a GPS Navigation system on board, all we needed was a map and compass and we were good to go. Now people get anxious when driving in unknown territory when GPS does not work.
I get irritated, especially since I probably don't have a map, but I know I can buy one at a gas station.
Now let's take it all a step further.
Wait, you didn't think those examples were ridiculous enough to make your point, which is that you're ridiculous?
The US Military starts to use AI to control aircraft and generate war tactics in both a simulation and in a real battle.
Guess what? They already use statistical analysis for that.
Radiologists using AI to screen for breast cancer. This is going to lead to a generation of Radiologists who can't spot a cancerous tumor on their own.
If the AI keeps pointing out tumors to them, they're going to know what the tumors look li
Re: (Score:2)
you forgot 57 year old star trek
Algorithmic pattern recognition is not AI (Score:3)
The only place the term "AI" exists is in the marketing department. Musk is just surfing the hype wave.
Just wait until he hears (Score:2)
It's not just chat. Some morons were even letting AIs drive cars [twitter.com]!!!
FUD (Score:2)
We've heard for decades that AI is an existential threat. But it's just software. It's not alive. It's not smarter than us. It's just a widget. We have made many widgets, and some of them can literally destroy the planet (nuclear arms). But we're still here, because we're not quite so stupid that we would allow our widgets to be out of our control, as a species.
We will die from global warming before we ever build an AI that could threaten us.
It's not "AI", it'S artificial life in the form of (Score:2)
...large corporations. Those are destroying the world in many ways.
And Musk is (Score:4, Funny)
...#2
Stop listening (Score:2)
Why is anyone still listening to this idiot? He has nothing original to say.
"One" of the biggest risks? (Score:2)
The biggest risk to our civilization, and any civilization for that matter, is the concentration of power into the hands of the few, or the one.
Musk knows this by now; he relies on it. Any time he talks about "civilization" he is talking about his own neck.
Re: (Score:2)
Elon seemed cool at one point, long, long ago... He's not a smart man.
Like many (possibly most) nerds, he's very smart in some things, not at all smart in others... and completely unable to tell which is which.
Re: (Score:2)
Yes, he's a nerd. Sorry if you don't like it. Turns out that a lot of nerds are obnoxious jerks, and a lot more would be if they could get away with it.
Re: (Score:2)
He's been successful at getting the government to bootstrap his pet projects for free. It's not like Elon Musk invented cars and rockets, he has hundreds of very smart people who actually made all of that stuff work.
The whole Twitter debacle has exposed Musk as basically an autistic midwit, and demonstrates what happens when you give someone like that 100 billion dollars. Instead of curing cancer, he bought a thing so he could make millions of people read his thoughts on the Super Bowl.