Comment Appeal (Score 1) 202

Setting aside the fact that Ctrl-Z is "suspend process," not "undo," this is a poor decision by the judge for myriad reasons. Under common law, scrivener's errors can be corrected at any time, so the divorce should be invalidated on those grounds alone. Beyond that, a valid contract requires both intent and consent, which were apparently lacking here, so the divorce "agreement" should not be binding.

Comment Re: These people are hallucinating (Score 1) 315

I'm not sure that intelligence is the bottleneck of technological (or any other) progress that many people seem to believe it is. I think this is the view of people for whom technology is inscrutable, but most progress is predicated on research, where the biggest bottlenecks are time and the adequate application of resources (and convincing people to give you those resources). It's not clear to me how a "super" intelligent AI would immediately change that, unless perhaps people trusted it implicitly, so it was consequently better able to allocate resources than we do at present.

In any case, AI makes mistakes, and there's no reason to believe that mistakes diminish as intelligence increases, so trusting AI in that way probably wouldn't be prudent. In other words, reliability/trustworthiness is its own thing, its own obstacle, and only tangentially related to intelligence, if at all. There are highly intelligent liars, for example; and conversely, if you give a principled, intelligent person flawed information, they will naturally arrive at flawed conclusions. The quality/trustworthiness of information is at least as important as the capacity to analyze it intelligently, and the way to establish the quality of information is through research, not by "being smarter."

Granted, ML algorithms can potentially expedite analysis, but they're still limited by the quality of the data, which is not something I believe intelligence can inherently improve. I am open to that possibility; I just haven't really seen anyone explain how it might happen (let alone provide a testable explanation). Most people just wave a magic wand and say smarter = faster.

Comment Re: I've always felt the great filter (Score 1) 315

It can only happen that way because that's the way it happened. I believe that's called confirmation bias.

In any case, we already have access to essentially unlimited energy through fission. Before that we had inexhaustible (on the timeline of centuries) geothermal energy. It wasn't exploited earlier or more extensively because we had hydrocarbons, which were portable and thus doubled as convenient fuel for vehicles. But in the absence of abundant hydrocarbons, we might have developed a more robust electrified transport system. In fact, this was one competing vision back when motorized transport began. The fact that hydrocarbon-based transportation won the day doesn't mean electrified transport was infeasible, or that technological progress would have stagnated.

Progress in the absence of natural repositories of hydrocarbons might have taken longer (on human timescales), but not necessarily, and in any case the difference likely would have been insignificant on geological timelines.

Comment Re:This is not news. (Score -1) 188

All of this is true, but Havana Syndrome has all the hallmarks of mass hysteria, and no plausible explanation that doesn't either venture into the realm of science fiction, or else seriously misapprehend the laws of physics with regard to RF propagation and physical effects. Just because they're out to get you doesn't mean everything you experience is a consequence of that.

Comment Re:geolocation redundancy (Score 2) 99

Not really -- especially for something where you need (or benefit from) a lot of expertise in one place, it's an effective use of resources to concentrate in one geographical location. We see this with everything from chocolate to finance to aerospace. You can try to spread things out artificially, but whichever location starts doing better will create a positive feedback loop, drawing in more talent and eventually playing an outsized role.

Comment Re:Could they even do it? (Score 1) 32

"It would end up like the crypto thing," is a better analogy. Encryption was classified as a "munition" until 1996, and restricted for export. Technically there are still export controls for new encryption schemes today, but it's a paper tiger at this point. It's likely that any enforcement attempts would be successfully met with free speech challenges, as with DeCSS (although who knows what the SCOTUS would decide on anything these days).

Comment What does "behind" even mean? (Score 2) 32

The barrier to entry for AI (generative or otherwise) is incredibly low, and mostly consists of harvesting large quantities of training data (which China should excel at, TBH) and access to enough compute to train on it. OpenAI doesn't have any magical insight into how a trained model will behave compared to anyone else in the field -- anyone with money could be up to speed in a month or two, tops. The algorithms themselves are well-established at this point, and the "secret sauce" is not very secret either: increase the resolution of your data and add more processing power.

I suspect the reason we don't see the field flooded with more startups (if you don't consider it saturated already) is that the business case is just not there. Let OpenAI and others take the risk that there's no pot of gold at the end of the rainbow. If an opportunity presents itself, it would be easy enough to start competing then -- no more difficult than starting today, and certainly cheaper to wait.

Comment Re: Legislators have never written software (Score 1) 50

Yeah, the bill is basically garbage. It's unconstitutionally vague -- abstract descriptions of who it covers and how -- and ignores First Amendment protections for developers.

It's also rather hyperbolic, as LLMs are not capable of reasoning, and "more powerful" not-reasoning doesn't magically reach some tipping point of competence. It just gets faster or more efficient at doing the same crap. GPAI will require a paradigm shift -- assuming such a thing is even theoretically possible (as in, we have a plausible theory of how to create it) -- at which point a bill won't stop it.
