Comment Re:relevance? (Score 2) 56

Of course it's not relevant to the lawsuit, other than all the dirty laundry that is being aired, but I'd certainly trust Musk more than Altman.

Altman is a pathological liar and, as multiple board members and executives are testifying, will just make stuff up about who said what to try to manipulate people. I would basically assume that everything Altman says is said to manipulate, with zero regard for whether it's true. He is one of those people (Trump is another) for whom "truth" is just not part of their worldview.

I'm not sure that Musk really lies in any major way. He says outrageous things, but that is just his politics. However, I still wouldn't trust him in the sense of wanting him in any position of power, since he also seems a bit of a sociopath.

Comment Re:Altman vs Musk (Score 1) 56

> He also appears to live relatively humbly

So should we regard Hitler any better if we knew he drove a Volkswagen (not that he did, afaik), or had humanizing qualities like enjoying painting (true)?

I'm not saying Musk is Hitler, although he does appear to have white supremacy leanings; I'm just asking what living humbly has to do with merit. Didn't the Unabomber live humbly in a hut in the woods?

As for Musk's humble living, he apparently just made waves in Miami by helicoptering in to view a $300M waterfront mansion there.

Comment Re:When does terminating an AI become murder? (Score 1) 384

"Sentient" is just as ill-defined, and hence as meaningless, as "conscious".

But, taking the spirit of your post as "will there be a point where we should be ethically concerned about terminating AI", then I'd say it depends.

Most people don't see any ethical issue in killing animals to eat them, differing only by culture on which ones they consider ok, all the way to hunter-gatherer tribes happy to eat basically anything. Society as a whole also seems ok with killing humans as long as it is called a war.

So, a first question to ask might be WHY we should be ethically concerned about killing humans, but perhaps not other animals, since this would guide our thinking about a new silicon-based artificial species. Is it because of an ability to feel pain, or to empathize with others, perhaps? Or is it because of a belief that humans are special in some way?

Certainly it would be illogical to not be concerned about terminating instances of an artificial species if our ethical decision making was based on scientific criteria and that species checked all the boxes.

If nothing else, what if we had "sentient" robots (humanoid or not) capable of learning, emotions, and empathy, and of forming deep companionship bonds with humans? Wouldn't it be unethical to terminate one of these based just on the human suffering that would ensue, just as if you killed someone's pet dog? Of course, since you could upload its brain and re-install it into a new body, that implies it's just the brain we should be concerned about. So if someone hit and destroyed your companion robot with their car, perhaps we should not be concerned, as long as the brain was backed up in the cloud and insurance pays for a new body to download it into.

Comment Define first, then evaluate (Score 1) 384

I thought Dawkins was meant to be a scientist? Opining that AI is "conscious" based just on vibes ("it really understood my book") is frankly a bit pathetic.

A good starting point for deciding whether something is conscious would be to define - precisely - what you mean by that word in the first place. If nothing else, this would let people understand whether what you are talking about is what THEY mean by consciousness, and, if they cared, give them something concrete to evaluate your claims against.

As far as making a scientific claim, that would be meaningless unless you had a falsifiable theory, which again comes back to defining rigorously what your theory is.

Comment Re:relevance? (Score 1, Troll) 56

If Musk wanted to give free AI to the world (he'd probably use the word "humanity" or "human species"), then he could be doing that right now with X.ai/Grok. But instead he's renting out his former X.ai datacenter to Anthropic for them to use (presumably because he wants to try to harm OpenAI).

Musk never seemed committed to OpenAI as a humanity-saving charity back in the day - he just wanted to be the boss of it, and withdrew in a huff when the others wouldn't let him. As I recall, didn't he want both to take OpenAI private and to make it part of Tesla?

I couldn't care less what happens in this lawsuit - just two billionaires having a pissing contest. I've got a feeling OpenAI will self-destruct without any help from Musk.

Comment If it works, it works ... (Score 2) 75

> said studies in mice had shown that psychedelics can rewire connections between nerves, a form of "plasticity" that could underlie their therapeutic effects. The big question is whether the same occurs in humans.

It's an interesting question where the therapeutic effect of psilocybin comes from, but there are everyday drugs like acetaminophen (Tylenol) whose mechanisms are not fully understood either. As long as it can be proved safe in some given dosage regime, then to an extent, who cares how it works!
