Comment Re:Evidence based slashdot comment (Score 2) 95

Avi Loeb is no whacko. But he seems less "scientist" and more "attention whore". His M.O. is to publish a huge number of short grad-student-level papers (more "essays" than "studies") on a wide variety of topics in order to pad his publication record and make himself look like an expert on everything, while making a big, very public, angry fuss about other scientists tending to be too "conservative" and not "open-minded". We just "need to learn from the evidence", he says.

My favorite bit is when he had an argument with Jill Tarter and said "why should we fund searches for dark matter and not for technological civilizations as part of the mainstream, that's my argument, and I find it really surprising that I get opposition from you to that notion". Jill Tarter is the former director of the Center for SETI Research (Search for Extraterrestrial Intelligence). Certainly Jill wasn't saying there shouldn't be funding for a search for technological civilizations --- she was saying that she gets "pissed off" about Loeb "throwing the scientific culture under the bus". Loeb just seems unable to avoid being outspoken, oppositional and attention-grabbing.

Comment The obvious solution... (Score 2) 48

Seems to me that Android should highlight (with a bright red icon/text) the dangers of granting certain permissions, e.g. by saying "granting access to SMS will allow this app to see one-time passwords sent by your bank or other accounts. Only grant permission if you're sure you can trust it." Not sure why this should be limited to sideloading, even if Google does have some systems designed to detect trojans/malware.

I'm still sore that Google decided to grant internet access to everything without the user's permission, even including keyboard apps that see your passwords as you type them. (I mean, I get that apps are ad-supported. But is it really that nobody makes ad-free FOSS Android apps, or is it that only ad-supported apps have the SEO money they need to make themselves easier to find?)

Comment Is this about protecting children, really? (Score 3, Insightful) 101

AI-generated imagery and other forms of deepfakes depicting child sexual abuse (CSA) could be criminalized

Who's old enough to remember the long arguments we had about whether video-game violence caused real-life violence? Or (more on the nose) about whether the "rape" fantasies on porn sites caused real rapes? AFAIK there was never any scientific conclusion that games/fantasies lead to real-life crimes. My conclusions were (1) the evidence we have is for a weak correlation, with no evidence of causation, and (2) for video games there is an extremely high ratio of people who play violent games to people who commit violent crimes, so banning them is unjustified. And personally, when I finally obtained access to porn in ~1995, it didn't make me have more real-life sex - in fact I had none at all for many, many years afterward.

So it's obvious that groups supporting these policies hate pedophiles, but not that they care about protecting children. Think about it: imagine the web disappears tomorrow and there's no more p0rn. Does this really make you less likely to seek out real-life sex? That's the theory needed to support a law like this, and I think it's exactly backwards. Pedophiles know perfectly well that it's wrong to [you know] but human sex drive is powerful. I think many of them would accept a substitute if they could, but laws and enforcement against fictional child p0rn have gotten tighter over the years. Meanwhile, real-life children are no more rare than before.

Something else. If a 16-year-old wanks on camera, that's illegal production of child porn under typical laws (though curiously nobody seems to get prosecuted for it?). Meanwhile, two 16-year-olds having sex is perfectly legal, but if they make a record of it, it's a serious crime. I bring this up because while these two cases may be serious crimes of "child pornography", it would be quite a stretch to call them "CSAM". Yet this is precisely what activist groups want. Two examples:

United States federal law defines child pornography as any visual depiction of sexually explicit conduct involving a minor [....] NCMEC chooses to refer to these images as Child Sexual Abuse Material (CSAM) to most accurately reflect what is depicted

While the term child pornography is still widely used by the public, it's more accurate to call it what it is: evidence of child sexual abuse. That's why RAINN and others have stopped using the term child pornography and switched to referring to it as CSAM -- child sexual abuse materials.

While some of the pornography online depicts adults who have consented to be filmed, that's never the case when the images depict children. Just as kids can't legally consent to sex, they can't consent to having images of their abuse recorded and distributed. Every explicit photo or video of a kid is actually evidence that the child has been a victim of sexual abuse.

Nowhere does RAINN's article mention teenagers; they present only a "child-adult" dichotomy. They do say "In about four out of 10 cases, there was more than one minor victim, ranging from two to 440 children", which makes it clear that "child" is meant as a synonym for "minor" and so includes teenagers.

Since activist groups encourage everyone to sed s/child porn(ography)?/CSAM/g, when Apple or Google talks about their "CSAM" detection system, this seems to actually be a system for detecting porn (or simple nudity, or medical pictures) involving minors, which they call CSAM because activists insist on it.

This is an example of a more general phenomenon I call "casting negative affect": using words to create negative feelings in the listener. For example, calling Martin Luther King a "criminal" because he was put in jail 29 times, convicted of contempt of court, and convicted of disobeying a police order and fined $14. Likewise: suggesting that 16-year-olds (or AI, or a hebephile with a box of pencils) can't make child porn, only Child Sex Abuse Material.

Comment Re:Long story short... (Score 1) 158

I know people don't like [...] the annual Lazard reports on LCOE [...] (for some reason I don't quite understand yet)

As I recall, Lazard tells you the cost of a power plant but (i) excludes the cost of transmission lines (more of which have to be built for spread-out renewables than for more concentrated forms of energy), and (ii) doesn't tell you the value of the energy. Because energy must be sold in the same moment it is generated, the value of solar and wind will decrease as time goes on: solar competes with whatever other energy happens to be generated in that same instant -- and we're planning to build more solar, so that self-competition increases over time, as does cross-competition between solar and wind. Firm power, on the other hand, can strategically choose when to generate in order to increase revenue per unit of energy. So nothing's wrong with Lazard, it's just not telling the whole story. (Even I'm not telling the whole story, because it's a long story.)
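To make the value point concrete, here's a toy calculation with invented numbers (the periods, prices and generation profile are all made up for illustration; real markets clear on thousands of hourly prices):

```python
# Toy "capture price" calculation: the revenue-weighted price a solar plant
# actually earns vs. the flat average price. All numbers are invented.

prices = [50, 40, 20, 20, 40, 60]   # $/MWh over six periods; midday prices depressed
solar_gen = [0, 1, 3, 3, 1, 0]      # MWh of solar output, peaking exactly at midday

solar_revenue = sum(p * g for p, g in zip(prices, solar_gen))
solar_capture = solar_revenue / sum(solar_gen)   # $/MWh solar actually earns
flat_average = sum(prices) / len(prices)         # $/MWh a flat generator earns

print(solar_capture)                             # 25.0
print(round(flat_average, 2))                    # 38.33
print(round(solar_capture / flat_average, 2))    # 0.65 -- solar's "capture rate"

# A firm plant generating only in the two priciest periods earns far more per MWh:
firm_capture = (50 + 60) / 2                     # 55.0 $/MWh
```

Building more solar depresses the midday prices further, which lowers solar's capture rate while leaving firm power's peak revenue largely intact -- that's the self-competition effect described above.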

Nuclear is a special case: it can theoretically choose when to generate, but in traditional plants the fuel is cheap and the plant is expensive, so the owners are tempted to generate at all times (baseload). And if I were a solar/wind owner, I might be tempted to lobby against nuclear (as well as potential competing solar projects) for this reason. However, Molten Salt Reactors can avoid such "overgeneration" using molten salt energy storage: run your reactor all the time, but put the heat into giant salt tanks instead of directly generating electricity. Then build three times as much generating capacity as the reactor needs, so that when electricity prices are high, the plant can dump lots of energy onto the grid and make a profit. (This wouldn't work as well for traditional reactors, because their heat is only 300 Celsius vs 650 Celsius for MSRs; at such a temperature, dramatically more salt would be required, and it would probably be tricky to keep it from solidifying.)
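The "three times as much generating capacity" arithmetic can be sketched like this (all figures are hypothetical; a real plant would size turbines and salt tanks against an actual price curve):

```python
# Toy dispatch model for a reactor with molten-salt storage (invented numbers).
reactor_mw = 100      # reactor runs flat out at constant thermal-equivalent power
turbine_mw = 300      # generating capacity 3x the reactor's steady output
hours_per_day = 24
peak_hours = 8        # hypothetical daily window of high electricity prices

energy_per_day = reactor_mw * hours_per_day    # 2400 MWh banked in the salt tanks
peak_discharge = energy_per_day / peak_hours   # 300 MW if all of it is sold at peak
print(peak_discharge == turbine_mw)            # True: 3x capacity fits an 8-hour peak
```

In other words, a 3x turbine lets the plant sell a full day's heat inside an 8-hour high-price window instead of dribbling it out as baseload.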

One problem is it includes a single very bad dam break in China in the hydroelectric numbers, a single event that skews everything considerably, not exactly relevant to safety in the USA. Nuclear power safety numbers include Chernobyl, but not Fukushima

It's kind of weird to point out that bad communist engineering isn't relevant to the USA, without considering that bad communist engineering also caused Chernobyl. Chernobyl-type RBMK reactors did not have containment buildings, and they used a combination of materials (graphite moderator + light water + natural uranium) that was very cheap but also unstable (the technical term being "positive void coefficient of reactivity"). So by all means exclude the Banqiao dam disaster -- but then exclude Chernobyl for the same reason. RBMK-type designs have never been legal in the West.

Fukushima deaths caused by radiation may be zero so far. For all I know, there could be hundreds of eventual deaths from increased cancer (though they try to compensate with increased cancer screenings), but I don't know of any researchers suggesting that Fukushima was anywhere near as bad as Chernobyl. Reportedly, over a thousand deaths were caused by the "stress" of the population relocation, mostly among elderly people -- certainly more elderly deaths than the radiation would have caused if most people hadn't relocated, or had been allowed to return home in a timely manner. But don't take my word for it: look at studies, or cross-reference figures for radiation avoided (Table 5) against the risks of radiation according to NASA, or watch this video.

Comment The big secret about melatonin (Score 1) 143

...is that you don't need much.

The medically correct dose of melatonin is 0.3 mg (or at most 1mg), but someone patented the correct dose and ever since then supplement makers have avoided the patent by selling very large doses.

It's very convenient: a typical melatonin tablet can be broken in 4 to 8 pieces and retain full efficacy, making melatonin possibly the world's cheapest medicine.

Before I knew this, I was sometimes taking 20 mg at once. There were no side effects, but it didn't necessarily work either - melatonin seems to be only half of what the body needs for sleep. It's like a binary switch: either the body recognizes its presence, or not.

Almost anything becomes toxic when the overdose is large enough, and melatonin seems unusually harmless in that regard, but TFA does not say how large a toxic dose would be. If, for the sake of argument, the threshold of toxicity were 100x the effective dose, then given that many people take doses that are 20x too large, they would only need to take 5 of those doses at once to reach toxic levels.
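The back-of-envelope math, spelled out (the 100x toxicity threshold is purely hypothetical; TFA gives no number):

```python
# Ratio argument: how many oversized tablets would reach an assumed toxic level?
effective_dose_mg = 0.3                            # medically effective dose cited above
typical_dose_mg = 6.0                              # a common OTC tablet, ~20x too large
hypothetical_toxic_mg = 100 * effective_dose_mg    # = 30 mg, assumed for argument only

doses_to_toxicity = hypothetical_toxic_mg / typical_dose_mg
print(doses_to_toxicity)   # 5.0 -- just five oversized tablets to hit the assumed threshold
```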

Comment This makes no sense as a "colorblind mode" (Score 1) 19

Color blindness doesn't mean you can't see color. Even if it did, the game would still be playable in black & white.

Rather, people with color-blindness only perceive two primary colors rather than three. Typically red and green look the same, so we may say they have yellow and blue as their primary colors, and only two "main" hues in total, in contrast to the 6 "main" hues that most people see, namely red, yellow, green, cyan, blue and magenta.
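A crude way to see the red/green collapse in code. This is a naive sketch, not a faithful simulation -- proper dichromacy simulation works in LMS cone space -- but it illustrates the idea that red and green become indistinguishable while blue survives:

```python
# Naive red-green "collapse": average the R and G channels into one shared value.
def naive_deuteranopia(rgb):
    r, g, b = rgb
    rg = (r + g) // 2      # red and green map to the same shared channel
    return (rg, rg, b)     # blue is preserved

pure_red = (255, 0, 0)
pure_green = (0, 255, 0)
print(naive_deuteranopia(pure_red) == naive_deuteranopia(pure_green))  # True
print(naive_deuteranopia((0, 0, 255)))                                 # (0, 0, 255)
```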

This doesn't even make sense as a mode for people who have severely blurry vision. If your vision is blurry, the horizontal and vertical lines will blend together into blobs of gray, so overall the game screen will look... like a whole lot of gray. Maybe something else would've been helpful, like making one character dark and the other bright so that the vision-impaired person can more easily tell which one is which. But while coloring the background gray might be helpful, making the foreground also look gray surely isn't.

Comment No, 55% of DOE budget is not commercial generation (Score 1) 331

He claims that

A key point to remember about the US DOE is that 55% of its budget is related to commercial nuclear generation. The other 45% covers dams, geothermal, wind, solar, tidal, wave, biomass and biofuel energy.

But if I Google for '55% of US DOE budget is related to commercial nuclear generation', this article itself is the top search result and no other page appears to support the assertion.

So what does the DOE budget actually contain? I easily found this 2023 budget request document which begins with a pie chart showing a breakdown of the $52 billion 2024 budget request, with the following main budget items:

  • 46% for the "National Nuclear Security Administration",
  • 17% for "Environmental Management",
  • 17% for "Office of Science",
  • 7% for Energy Efficiency and Renewable Energy,
  • 3% for "Nuclear Energy"

What does the NNSA do? Well, the NNSA is "a semi-autonomous Department of Energy agency responsible for enhancing national security through the military application of nuclear science".

A table down on page 8 provides a breakdown of the actual 2022 budget. It has the following main items:

  • $16.0 billion for "Energy programs", including $7.5 billion for "Science" (no further breakdown), $1.5 billion for "Nuclear Energy", $0.86 billion for "Uranium Enrichment Decontamination and Decommissioning", $0.03 billion for "Nuclear Waste Fund Oversight" and nothing else that looks related to nuclear energy.
  • $28.8 billion for "Atomic Energy Defense Activities", including $20.4 billion for NNSA, of which the main expense is $15.9 billion for "Weapons Activities" (no further breakdown). The entire remaining $8.3 billion is for "Defense Environmental Cleanup, Other Defense Activities and Defense Uranium Enrichment".

So "55% is related to commercial nuclear generation" is a misrepresentation with no obvious relation to the real budget. It's as if he looked at the numbers and said "oh, nuclear weapons activities? I'll just put that down in the "commercial nuclear generation" column!

Comment Wow. (Score 1, Troll) 14

So Sam Altman did something that concerned the board so much that they fired him. Now all four of the board members that voted him out have been removed themselves. Sam once said:

Q: Even you would acknowledge, you have an incredible amount of power at this moment in time. Why should we trust you?
A: You shouldn't. [...] No one person should be trusted here. I don't have supervoting shares. I don't want them. The board can fire me, I think that's important...

I always thought Sam seemed like a responsible guy. But whatever happened does suggest that Sam is not really accountable to the safety-focused nonprofit board. And this came soon after OpenAI's core values were changed to this:

AGI focus

We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity's future.

Anything that doesn't help with that is out of scope.

Comment Re:It didn't "create" the new materials (Score 1) 28

Comment How it's going (Score 2) 28

Before AI: "we've used computer simulations to predict 4 materials that may have a certain property in a computer! Our team is now working to design lab processes that will try to produce each of these materials in a lab and see which of them actually works... this time next year we hope to announce something exciting!"

After AI: "we had the computer check out hundreds of millions of possible materials. It thought 2.2 million of them might be stable, so we had it pick 700 of the most promising. Then we had the computer design manufacturing processes for the most promising and, uh, yeah, so we made 700 new materials last week and, uh, yeah, we just have a couple hundred technicians here checking all of those in real life now. So, yeah, I'm hoping to sign a ten-billion-dollar production deal and maybe retire to the Bahamas by Christmas, fingers crossed!"

Almost makes me wanna be an AI engineer, before the AI engineers are replaced by AGIs. And hey, don't worry, the AGIs will never harm us. That would be too sci-fi! Now, I know, technically psychopathy is defined by what is absent, not by what is present, but billions of investment dollars are flowing into building the first AGI. Surely our business executives will produce a perfectly reliable morality module that cannot be bypassed even by an egregious mistake in the config file! To do otherwise would just be inviting lawsuits!

Comment Re:My perspective as an effective altruist (Score 1) 80

Sure. I would refer you to this summary of my own story (watch out for the part about the CES letter).

If you are LDS, you may prefer a less dry/reductionist (and more gradual/meandering/detailed) approach to this topic. In that case, please watch this, or possibly this.

Comment My perspective as an effective altruist (Score 1) 80

I signed the GWWC pledge to give 10% of my income to charity for the rest of my life. I used to have a religion, but I found out it was false (any LDS/Mormon folks reading this can ask me how I know, if they dare) so I decided I would no longer donate 10% tithing to missionary work, Books of Mormon, temples and the like. Now I give instead to things like cost-effective malaria nets which save one child's life per $5,500 spent, encouraging clean energy R&D, and with this whole AI thing heating up I would consider donating to an AGI safety organization, but I haven't decided which one.

I disagree with the characterization of EA as a "religion". In fact, I find Effective Altruism to be a refreshingly secular, rational, and diverse group of people (including not just atheists but Christians, Jews, and even *gasp* non-utilitarians).

Effective Altruism started in the SF Bay Area, and it seems like every movement must have its detractors, so now we face

  • a professional philosopher suggesting that donating money cost-effectively does "serious harm"
  • conservatives suggesting that people in extreme poverty should pull themselves up by their own bootstraps / not have their children kept alive by un-earned anti-malaria nets
  • venture capitalists like Vinod Khosla and Marc Andreessen (who hope to make a fortune on AI and AGI) telling everyone that EAs asking for a temporary pause in AGI development are just "religious", without making any counterarguments against the risk factors.

I love these new AIs. As an "old" software developer (age 43, but started programming at age 11) the AI capabilities that have suddenly appeared in the last few years are exciting and I look forward to using AI models as a professional developer. Video and audio deepfakes, image generation, cracking captchas, writing unique poetry instantly, GPT4 passing some versions of the Turing test... wow. They'll be used for huge disinformation campaigns, but they're fun, amazing and very useful.

I'm also excited about AGI. I'd love to have my own personal AGI assistant modeled after Data from Star Trek TNG. But at the same time, there is a potential for these things to be really f**king dangerous, so I think well-thought-out regulations are needed and a culture of caution is good. GPT5 won't be what kills us all, because GPT5 won't be AGI. But tens of billions of dollars are being invested in AI, much of that going to OpenAI, whose mission statement was changed to say "Anything that doesn't help with [AGI] is out of scope". Basically there are two ways this can go: either humans are able to control the AI agents, or they are not (AGIs control themselves). Both possibilities could go very badly, and if we are able to fully control AGI v1.0, that doesn't prove v3.0 is safe too.

Now I (like most EAs) think that most likely everything will turn out okay, at least at first. I'm guessing there's roughly a 30% chance of catastrophe before 2100; many others think it's not that bad. But if there's even a 1% chance of AGI causing catastrophe, isn't that reason enough to proceed with caution and fund safety research?

Meanwhile, a key group opposed to "AGI alarmist" EAs is e/acc or "Effective Accelerationism". e/accs have "faith" in the goodness of "the singularity":

e/acc is about having faith in the dynamical adaptation process and aiming to accelerate the advent of its asymptotic limit; often reffered to as the technocapital singularity

Effective accelerationism aims to follow the "will of the universe": leaning into the thermodynamic bias towards futures with greater and smarter civilizations that are more effective at finding/extracting free energy from the universe and converting it to utility at grander and grander scales

e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism

Parts of e/acc (e.g. Beff) consider ourselves post-humanists; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates

No need to worry about creating "zombie" forms of higher intelligence, as these will be at a thermodynamic/evolutionary disadvantage compared to conscious/higher-level forms of intelligence

Oh but do go on about EA being a "religion".

Comment Re:Workers collect less than 1% of revenue? Um no. (Score 5, Informative) 148

And Google's revenue per employee is $2,020,329 in 2022 (says Zippia), while the average salary at Google is $123,944 (says PayScale.com) which is about 6% (and for software engineers "The median compensation package totals $275K", says levels.fyi).
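Checking those percentages, using only the figures cited above:

```python
# Ratios of the quoted figures (sources as cited: Zippia, PayScale, levels.fyi).
revenue_per_employee = 2_020_329   # Google revenue per employee, 2022
average_salary = 123_944           # average Google salary
median_swe_comp = 275_000          # median software-engineer total compensation

print(round(100 * average_salary / revenue_per_employee, 1))   # 6.1 -- the "about 6%"
print(round(100 * median_swe_comp / revenue_per_employee, 1))  # 13.6 for median SWE comp
```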

So... is Varoufakis lying?

But Google also has to pay for data centers, electricity, office buildings, taxes and so on. And we could say the same for other tech companies. So a more reasonable thing to measure would be the profit per employee, which is said to be $158,000 for Alphabet (Google) as of 2019 and higher or lower for others.

And this "techno-feudalism" concept seems to uniquely describe Amazon and perhaps a few other "walled-garden" cash cows like Apple's app store. Google or Microsoft just seem like normal capitalist entities to me. Too powerful, perhaps, but not "feudalist".

Comment Was Covid bad? Not if you don't believe it was bad (Score 5, Interesting) 274

On one hand, you have statistics saying a million people died of Covid in the U.S. alone. On the other hand, some people have feelings. I'd like to share the story of my uncle Bert, who died "with" Covid in Alberta, my aunt Elaine, and my father Don who lives 5000 km away in Hawaii. All three of them became anti-vaxxers after the pandemic started because their right-wing sources told them about the evils of vaccines in general and (once mRNA vaccines got the EUA) mRNA vaccines in particular.

A key part of this belief system is that Covid isn't so bad (as long as you have ivermectin anyway). My aunt wrote this on Facebook:

Bert is in a ventilator in ICU in Lethbridge. He is in a deep sleep, seemingly unaware of his surroundings or anyone's touch. A team of 4 turns him every afternoon from his back to his tummy which seems to increase his oxygen level, and then in the morning they move him back onto his back. - Elaine

Last night, Bert's brother Don phoned me to say Bert is being treated for the wrong condition. He feels that Bert has suffered a stroke. He was in the front yard, watering the flowers when he fell, and was unable to get up. This is similar to other incidents that have occurred recently, and Bert has called me on his cell, so I rushed out to help him stand up. I don't know why we didnt do more than help him into the house so he could sit in his "lazy boy" for a while. This time a young couple driving by saw him fall and rescued him before I could get to him - hence his trip to the hospital and a diagnosis of covid 19.

So: Bert has fallen repeatedly in the garden. This time when he fell, passers-by called an ambulance. When he got to the hospital, he was tested for Covid and it came back positive. Perhaps due to this, Elaine wasn't allowed to see him (she was also infected, but had a very mild case). Later, he was placed on a ventilator (a common treatment for severe Covid). Don, from 5000 km away, diagnoses him with a stroke. The hospital told Elaine he died of Covid, but Don still thinks he died of a stroke. Elaine is inclined to believe Don, though she told me later, her voice breaking, that maybe if they had allowed her to give him ivermectin, he'd still be alive today.

And you know what, I do wish the staff had given him ivermectin. He would've still died, but at least Elaine wouldn't be left with the impression that doctors are the enemy. As for my father -- the man I remember is gone, replaced by a mystery man who sent me messages reciting the talking points given to him by the TV, never with evidence to back them up. He ignored all of my replies. Now we don't talk anymore.
