
Comment Re:Need more nuclear fission power plants. (Score 1) 109

Although that is true, it does not refute MacMann's claim. The study might have covered only the recent period from 2018 onward, or it might have concluded that lignite and coal use would have dropped even further had the nuclear closures been postponed. If lignite/coal use had dropped further, it is reasonable to assume fewer deaths from air pollution. Therefore the claim that the nuclear closures caused additional deaths seems reasonable.

But you have shown that the nuclear closures are not that damaging, provided the recent upward trend in lignite/coal use reverses.

Comment Re:Need more nuclear fission power plants. (Score 1) 109

Why do you assume so?
This article was about premature deaths from particulate pollution. Fossil fuel plants emit far more particulates than nuclear plants, so an increase in deaths is expected when nuclear plants are replaced with coal plants. MacMann's claim seems believable; yours seems pulled out of thin air. Maybe you can provide some reasoning...

Comment Re:OK (Score 4, Informative) 109

EVs are not significantly heavier than fossil cars.

EVs are about 30% heavier than corresponding combustion engine cars:

  • Ford F-150 truck: electric, 6,015 pounds; gas-powered, 4,060 pounds
  • Hyundai: electric, 3,715 pounds; gas-powered, 2,899 pounds
  • Volvo: electric, 4,662 pounds; gas-powered, 3,726 pounds

Whether that is significant is subjective. EVs also have much better acceleration, which leads to more road and tire wear and therefore more particulate pollution. The increased EV weight has only a small impact on infrastructure; the more concerning negative impact is reduced safety in a collision.

https://www.politifact.com/art...
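The "about 30%" figure can be sanity-checked from the curb weights listed above (a quick sketch; the weights are as quoted, specific model trims are not given in the source):

```python
# Curb weights in pounds, as quoted above: (EV, gas-powered).
pairs = {
    "Ford F-150": (6015, 4060),
    "Hyundai":    (3715, 2899),
    "Volvo":      (4662, 3726),
}

for name, (ev, ice) in pairs.items():
    extra = (ev / ice - 1) * 100  # percent heavier than the gas version
    print(f"{name}: EV is {extra:.0f}% heavier")
```

The two car-sized pairs come in around 25–28% heavier, while the F-150 is an outlier at roughly 48%, so "about 30%" is a fair overall summary.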

Comment Re:8.1 billion sooper-intelligunt computers (Score 1) 186

OK, got it. Thanks.
Although I personally believe humans are mere machines (there is no soul), that does not mean we can build AGI any time soon, if ever. A machine can be too complicated for us to reproduce. Still, I think it is more probable that we will get there eventually.

Comment Re:No intelligence in the article (Score 2) 186

I would say that deductive reasoning is not enough for AGI. It is rather simple: just find all the consequences of the initial axioms under the inference rules. Yes, the state space can explode, but it is still only a search through a state space and a check whether a given statement is in it (i.e. whether it is valid given the initial axioms and the inference rules).
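That kind of search can be sketched in a few lines, e.g. as forward chaining over propositional Horn rules (a toy illustration of the "closure of the axioms" idea, not a claim about any particular prover):

```python
def forward_chain(axioms, rules):
    """Compute the set of all statements derivable from `axioms`.

    Each rule is (premises, conclusion): once every premise has been
    derived, the conclusion is derived too. The loop stops when no rule
    adds anything new, i.e. the closure has been reached.
    """
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# A statement is "valid" exactly when it lands in the closure.
closure = forward_chain({"A"}, [({"A"}, "B"), ({"A", "B"}, "C")])
print("C" in closure)  # prints True
```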

I think that a program must be able to perform useful inductive reasoning to qualify as AGI. It must be able to derive new models from observed data, and the models must be useful, i.e. have good predictive ability and provide value toward achieving the required goals.

E.g. Newton created a useful model of gravity. We have better models now, but Newton's model is still useful and still used, since it is simple. The model is an answer to the goal of explaining how celestial bodies move in our solar system. Well, except for Mercury; for that we need Einstein's more complicated model.

True AGI should be able to derive Newton's model of gravity given only a record of celestial body positions over time (i.e. without any prior knowledge of the already existing model). That is, provided the goal given to the AGI is: "Simplify/compress this record of celestial body positions over time."
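A much smaller version of that task is already doable with plain curve fitting: given only orbital radii and periods, recover the exponent in Kepler's third law (which follows from Newton's model). Of course this is fitting a known functional form to data, not inventing the model, and inventing the model is the hard part:

```python
import math

# Semi-major axis (AU) and orbital period (years) for six planets.
data = [
    ("Mercury", 0.387, 0.241),
    ("Venus",   0.723, 0.615),
    ("Earth",   1.000, 1.000),
    ("Mars",    1.524, 1.881),
    ("Jupiter", 5.203, 11.862),
    ("Saturn",  9.537, 29.457),
]

# Fit log T = k * log r by least squares; Kepler/Newton predict k = 3/2.
xs = [math.log(r) for _, r, _ in data]
ys = [math.log(t) for _, _, t in data]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"fitted exponent: {k:.3f}")  # close to 1.5
```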

Comment Re:An excellent testing ground.. (Score 1) 82

There are some promising technologies, like the recently demonstrated injection of cooled brine, and we'll probably need to mandate a standardized water inlet valve for firefighters that goes straight into the battery, similar to what mining equipment has.

Here is some information about it. There is also a picture of the water inlet on mining equipment later in the video.

Comment Re:First they laughed at it... (Score 1) 174

I agree that ChatGPT is very limited, especially while its feedback loop goes through output in textual form and it is trained not to generate "temporary garbage" in that output. That discourages the network from using its output as a temporary memory scratchpad.

I think it is hard to tell how much it can actually do. E.g. it is likely that, regardless of training, it cannot balance parentheses nested deeper than the number of its attention heads. That is, provided it cannot generate "temporary garbage" in its output that would track the current nesting depth. No surprise the stephenwolfram.com sample with 8 attention heads started to fail more often once the nesting level exceeded 8 :)
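For reference, tracking nesting depth is trivial once you are allowed a single scratch counter, which is exactly the kind of "temporary garbage" the clean textual output forbids (toy sketch):

```python
def max_nesting(s):
    """Return the maximum parenthesis nesting depth of s.

    One running counter (the scratchpad) is enough for any depth;
    mismatched input is not validated here.
    """
    depth = best = 0
    for ch in s:
        if ch == "(":
            depth += 1
            best = max(best, depth)
        elif ch == ")":
            depth -= 1
    return best

print(max_nesting("((()))"))  # prints 3
```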
