
Comment Re:But the real cost is increased service prices (Score 1) 72

> Nuclear reactors mostly use surface water, not ground water.

Datacentres are no pickier. You can even cool a datacentre with saltwater, you just need a heat exchanger.

> Also, closed loop does not evaporate. The loop is not closed if stuff escapes from it.

You're arguing with the actual terminology used in the nuclear industry. "Closed loop" or "closed cycle" designs have the water pumped in a cycle through cooling towers. The towers lose water to evaporation, taking heat with them, but the rest of the water is returned to be reheated again. "Open loop" or "open cycle" designs have no cooling towers. The water is heated and just discharged hot. They consume much more water (over an order of magnitude more), but most of that is returned. Closed loop are more common, but you see open loop in some older designs, and in seawater-cooled reactors.

Comment Re:According to the summary... (Score 1) 107

I've printed many hundreds of kg on my P1S, thanks.

I do not consider having to write data out to a card and transport it back and forth between the printer and the computer to be the pinnacle of convenience. That's something that would be considered embarrassingly inconvenient for a 1980s printer, let alone a modern net-connected device. And it's designed to be inconvenient for non-cloud prints for a reason.

Comment Re:But the real cost is increased service prices (Score 1) 72

Also, anything sounds big when you put it in gallons. It doesn't sound so big when you mention that it's 92 acre-feet, the amount used by less than 20 acres / 8 hectares of alfalfa per year. Or when you mention that a typical *closed loop* 1GW nuclear reactor uses 6-20 billion gallons of cooling water per year (once-through uses 200-500 billion gallons, though most of that is returned, whereas closed loop evaporates it).
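A quick back-of-the-envelope sketch of that conversion, assuming the figure under discussion is roughly 30 million gallons (which is what 92 acre-feet works out to) and using the standard US definition of 325,851 gallons per acre-foot:

```python
# Sanity-check the gallons vs. acre-feet comparison above.
# 1 acre-foot = 325,851 US gallons (standard US definition).
GALLONS_PER_ACRE_FOOT = 325_851

def gallons_to_acre_feet(gallons: float) -> float:
    """Convert US gallons to acre-feet."""
    return gallons / GALLONS_PER_ACRE_FOOT

# ~30 million gallons is the ~92 acre-feet cited above.
print(round(gallons_to_acre_feet(30e6)))  # -> 92

# A closed-loop 1GW reactor at 6-20 billion gallons/year,
# expressed in acre-feet for comparison:
low = gallons_to_acre_feet(6e9)
high = gallons_to_acre_feet(20e9)
print(f"{low:,.0f} to {high:,.0f} acre-feet/year")
```

By the same math, a single such reactor evaporates hundreds of times the 92 acre-feet being described as enormous.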

Comment Re:That makes sense. (Score 4, Interesting) 76

I don't think it has anything to do with that. As soon as I saw the headline, my mind went "cohort study". And sure enough, yeah, it's a cohort study. Remember that big thing about how wine improves your health, and then it turned out to just be that people who drink wine tend to be wealthier and thus have better health outcomes? And also, the "sick quitter" effect, where people who are in worse health would tend to stop drinking, so you ended up with extra sick people in the non-wine group? Same sort of thing. This study says they're controlling for a wide range of factors, but I'd put money on it just being the same sort of spurious correlations.

Comment Re:Stop purchasing Bambu products (Score 2) 107

They've made a nice easy-to-use ecosystem. For $400 you can get a P1S that supports adding an AMS, auto bed leveling, enclosed-chamber printing, high precision, high print speeds, and 300/100C nozzle/plate temps, and has an easy cloud print service and a robust ecosystem of models you can just download and print with no extra config straight from the app.

But yeah, their behavior is increasingly entering bad-actor territory. I wonder how long it'll be before they lock entry-level printers into their branded filament?

Comment Re:relevance? (Score 2) 59

Of course it's not relevant to the lawsuit, other than all the dirty laundry that is being aired, but I'd certainly trust Musk more than Altman.

Altman is a pathological liar and, as multiple board members and executives are testifying, will just make stuff up about who said what to try to manipulate people. I would basically assume that everything Altman says is said to manipulate, with zero regard for whether it's true or not. He is one of those people (Trump is another) for whom "truth" is just not part of their worldview.

I'm not sure that Musk really lies in any major way. He says outrageous things, but that is just his politics. However, I still wouldn't trust him in the sense of wanting him in any position of power, since he also seems a bit of a sociopath.

Comment Re:Altman vs Musk (Score 1) 59

> He also appears to live relatively humbly

So should we regard Hitler any better if we knew he drove a Volkswagen (not that he did, afaik), or had humanizing qualities like enjoying painting (true)?

I'm not saying Musk is Hitler, although he does appear to have white supremacy leanings; I'm just asking what living humbly has to do with merit. Didn't the Unabomber live humbly in a hut in the woods?

As far as Musk's humble living, he apparently just made waves in Miami by helicoptering in to view a $300M waterfront mansion there.

Comment Re:When does terminating an AI become murder? (Score 1) 400

"Sentient" is just as ill-defined, and hence just as meaningless, as "conscious".

But, taking the spirit of your post as "will there be a point where we should be ethically concerned about terminating AI", then I'd say it depends.

Most people don't see any ethical issue in killing animals to eat them, differing only by culture on which ones they consider ok, all the way to hunter-gatherer tribes happy to eat basically anything. Society as a whole also seems ok with killing humans as long as it is called a war.

So, a first question to ask might be WHY we should be ethically concerned about killing humans, but perhaps not other animals, since this would guide our thinking about a new silicon-based artificial species. Is it because of an ability to feel pain, or to empathize with others, perhaps? Or is it because of a belief that humans are special in some way?

Certainly it would be illogical to not be concerned about terminating instances of an artificial species if our ethical decision making was based on scientific criteria and that species checked all the boxes.

If nothing else, what if we had "sentient" robots (humanoid or not) capable of learning, emotion, and empathy, and of forming deep companionship bonds with humans? Wouldn't it then be unethical to terminate one, if only because of the human suffering that would ensue, just as if you killed someone's pet dog? Of course, since you could upload its brain and re-install it into a new body, that implies it's just the brain we should be concerned about. So if someone hit and destroyed your companion robot with their car, perhaps we should not be concerned, as long as the brain was backed up in the cloud and insurance pays for a new body to download it into.

Comment Define first, then evaluate (Score 1) 400

I thought Dawkins was meant to be a scientist? Opining that AI is "conscious" just based on vibes ("it really understood my book") is frankly a bit pathetic.

A good starting point to decide if something is conscious would be to define - precisely - what you mean by that word in the first place. If nothing else this would then let people understand if what you are talking about is what THEY mean by consciousness, and if they cared then have something concrete to evaluate your claims against.

As far as making a scientific claim goes, that would be meaningless unless you had a falsifiable theory, which again comes back to rigorously defining what your theory is.
