
Comment Re:Question (Score 1) 20

retains access to the AI startup's technology until 2032, including models that achieve AGI

Exactly how do they envision an autocomplete gaining sentience?

It hasn't been "autocomplete" in a long time. Sure, there's a training step based on a corpus of Human language, and the autoregressive process outputs a single token at a time, but reinforcement learning trains specific behaviors beyond merely completing a sentence.
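
To make "one token at a time" concrete, here is a minimal sketch of an autoregressive decoding loop, assuming a Hugging Face-style causal LM API ("gpt2" is just a convenient stand-in model, not a claim about any particular product):

    # Minimal autoregressive decoding sketch: the model only ever scores
    # the next token; the "sentence" emerges from repeating that step.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The claim that LLMs are autocomplete", return_tensors="pt").input_ids
    for _ in range(20):                    # emit 20 tokens, one per step
        logits = model(ids).logits[0, -1]  # scores for the *next* token only
        next_id = torch.argmax(logits)     # greedy pick; real systems sample
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tok.decode(ids[0]))

Reinforcement learning doesn't change this loop at all - it changes the weights that produce the logits, which is exactly why the output can be steered toward behaviors that plain corpus completion never would produce.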

Besides, the best way to write something indistinguishable from what a Human might write is to, well, "think" like a Human.

Comment Will we finally learn our lesson? (Score 1) 32

Are we, as a sapient species facing an uncertain prospect of continuance in a world full of rapidly advancing bullshit, going to learn from this catastrophic and absurdly predictable failure of information security, personal and professional ethics, civilian government, market economics, basic common sense, and consumer psychology?

Eight-Ball-Based-On-Cursory-Reading-Of-Literally-Any-Slice-of-Human-History says "no".

What do you say, and why is it also "no"?

Comment Re:C'mon, Saudi (Score 5, Informative) 92

Nothing can “help get a little closer to making it a reality” if it’s not physically possible, and there’s a very strong argument that that’s the case. If nothing else, the maximum specific tensile strength allowed by covalent bonding (fundamental physics we can’t change), combined with the reality of defects in a 36,000 km cable, puts real materials far below what’s needed to build a space elevator in Earth gravity. It might be possible to build a space elevator on the Moon, or even (in the far future) on Mars, because their gravity is weak enough that real materials could potentially do the job. But that involves bootstrapping an entire offworld industry, which is far beyond anything even the most advanced nations can currently manage, let alone a technologically stunted oil state.
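
To put rough numbers on that claim, here is a back-of-the-envelope calculation of the required cable taper ratio (the strength/density figures are ballpark values for illustration, not measured data):

    # Taper ratio = exp(rho * dPhi / sigma), where dPhi is the difference in
    # effective potential (gravity minus centrifugal) between surface and GEO.
    import math

    mu    = 3.986e14   # Earth's gravitational parameter, m^3/s^2
    omega = 7.292e-5   # Earth's rotation rate, rad/s
    R_e   = 6.378e6    # equatorial radius, m
    R_geo = 4.2164e7   # geostationary radius, m

    def phi(r):        # effective potential per unit mass, J/kg
        return -mu / r - 0.5 * omega**2 * r**2

    dPhi = phi(R_geo) - phi(R_e)   # ~4.8e7 J/kg

    # (tensile strength Pa, density kg/m^3) - ballpark figures
    materials = {
        "steel":                          (5e9,  7900),
        "Kevlar":                         (3.6e9, 1440),
        "defect-free CNT (theoretical)":  (1e11, 1300),
    }

    for name, (sigma, rho) in materials.items():
        print(f"{name}: taper ratio ~ e^{rho * dPhi / sigma:.1f}")

Steel comes out around e^77 and Kevlar around e^19, i.e. hopeless; even the theoretical defect-free nanotube case (taper ratio under 2) assumes a flawless 36,000 km cable. A realistic defect rate knocks an order of magnitude off the usable strength and pushes the taper ratio back into the hundreds, which is the whole problem.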

Comment Separate from the rebranding of covid.gov... (Score 5, Insightful) 213

...an article worth considering from Princeton University's Zeynep Tufekci:

We Were Badly Misled About the Event That Changed Our Lives

Since scientists began playing around with dangerous pathogens in laboratories, the world has experienced four or five pandemics, depending on how you count. One of them, the 1977 Russian flu, was almost certainly sparked by a research mishap. Some Western scientists quickly suspected the odd virus had resided in a lab freezer for a couple of decades, but they kept mostly quiet for fear of ruffling feathers.

Yet in 2020, when people started speculating that a laboratory accident might have been the spark that started the Covid-19 pandemic, they were treated like kooks and cranks. Many public health officials and prominent scientists dismissed the idea as a conspiracy theory, insisting that the virus had emerged from animals in a seafood market in Wuhan, China. And when a nonprofit called EcoHealth Alliance lost a grant because it was planning to conduct risky research into bat viruses with the Wuhan Institute of Virology - research that, if conducted with lax safety standards, could have resulted in a dangerous pathogen leaking out into the world - no fewer than 77 Nobel laureates and 31 scientific societies lined up to defend the organization.

So the Wuhan research was totally safe, and the pandemic was definitely caused by natural transmission - it certainly seemed like consensus.

We have since learned, however, that to promote the appearance of consensus, some officials and scientists hid or understated crucial facts, misled at least one reporter, orchestrated campaigns of supposedly independent voices and even compared notes about how to hide their communications in order to keep the public from hearing the whole story. And as for that Wuhan laboratory's research, the details that have since emerged show that safety precautions might have been terrifyingly lax.

Full article

Comment Re:A question for AI crazy management. (Score 1) 121

This matches how I use it. I’ll add a few other points:

4. Writing the first core version of a service or UI. I’ll typically use close to 100% of those generated lines, and then continue building with LLM assistance where it makes sense. It makes a big difference to development velocity.
5. Finding bugs. If a bug isn’t obvious to me, I provide the code to an LLM and describe the problem. Its success rate is high.
6. Working with tech I’m not particularly familiar with (an extension of your #3, i.e. learning).
7. Writing documentation.
8. Reverse engineering existing code, i.e. describe some code to me so I don’t have to dig through it in detail.
9. Writing unit tests.

Comment Re:Cannot wait... (Score 1) 159

This is why code-generating LLMs need to make heavy use of external tools.

Are you saying that ChatGPT, Claude, DeepSeek, etc. “make heavy use of external tools” to write code? Because they all write pretty good code, up to a certain size of program. Certainly far better than the average human, who can’t code at all, or the average software developer, who isn’t really very good.
