Comment Re:Double standard (Score 5, Insightful) 38

The problem here is that developers can take responsibility for the action while AI cannot. Humans do make mistakes and that's OK; best practice is not to just can employees for messing up. Once is a mistake. Twice is an HR event. When someone does something dumb we forgive, but we also insist that meaningful steps are taken to prevent that problem in the future. AI can't really take those steps because AI can't be accountable for "don't do it again." Taking down production because you dropped a table once is forgivable. Taking it down twice for the same reason is a different matter.

The developer can be accountable. And if HR fails to hold them to account for it, HR is accountable. And if HR isn't held accountable, leadership is. And if leadership isn't held accountable, the board is. And if the board isn't held accountable, the stockholders have some hard decisions to make. And if they choose not to make them, then it wasn't really that big a deal, was it?

But with an AI the option is "we stop using AI" or "we live with the result."

Comment The problem isn't technical; it's legal/ethical (Score 2) 147

Everyone is so excited about not having to pay software engineers to write code that they've forgotten what engineers actually do. It's less common in the software world but go find a civil engineer or an electrical engineer or an aerospace engineer and follow them around for a week.

At some point, there's going to be a document in front of them laying out how something is going to be built and they're going to be asked to approve it. And when they do that they're taking responsibility for the design. If it falls down, if it catches on fire, or if it crashes into the mountains and kills people, they're the name on the form saying that won't happen. They're responsible.

Claude 4.5 Opus is very impressive, but if it writes a software application that kills people it can't take responsibility. It can't be punished. It can't even really be sued.

I just don't see how we, as a society, can trust fundamentally unaccountable entities to build systems that can do real harm if they go wrong. I suppose the alternative is that Anthropic accepts full legal liability for everything its models do. Their unwillingness to make that move tells you all you probably need to know about their own internal confidence in those models.

Comment Re:We have lost our ability to debate and decide (Score 1) 77

One thing the science does tell us is that we all have a very hard time separating the world that existed when we were children from our perception of that world through the eyes of a child.

Ask nearly any population in the United States when this country was best and you'll get a majority who'll swear to you it was when they were teenagers. The age of the group doesn't matter. You get the same result from 20 year olds as 40 year olds as 60 year olds as 80 year olds. And what you're seeing is people looking back to a time when they had lots of free time, lots of freedom, and most of their income was disposable and thinking "that was pretty great." And it was.... except they were living under a roof someone else paid for and still experiencing the risks and complexities of the world through the filter and safety net provided by their parents.

And since we're being scientific about this: yes, obviously not everyone. I'm sure someone reading this right now is thinking "I had a tough childhood." And I'm sure they did but anecdotes are not data.

The 1980s were -- and I say this as both a historian and someone who lived through them -- fucked. Reagan torched the New Deal consensus. The AIDS crisis was literally laughed out of the White House press room. Our government perpetuated a long string of dirty intelligence/foreign-policy interventions. The wealthy and powerful were juiced to the gills on cocaine.

There was a sense of decorum which has since evaporated from American politics, but that's about it.

Comment Re:Be careful what you ask for. (Score 1) 49

The Foundation TV series has been a lot of fun, but I just can't shake how very much it is NOT ASIMOV'S FOUNDATION. Not even a little bit. It's fine that they didn't want to tell the Foundation story. Honestly, I'm not sure a faithful adaptation would make good TV. But... why set yourself up for failure like that? It's not like the majority of the people watching it are 1940s-era sci-fi fans.

Comment Re:That much? (Score 2) 24

Inside the /. bubble, sure, that makes sense. But crypto badly wants to be mainstream and, demographically, it's a lot younger than this community is. You might be surprised at how few people under 30 maintain bookmarks or consume news from specific outlets intentionally.

Comment Containers (Score 3, Interesting) 16

I'm increasingly convinced that if you're running an AI agent at all, it needs to live in a container. Somehow the sci-fi wisdom of "no seriously, don't give an AI access to the internet" flew right out the window once AI could tell us whether our boss's emails actually had something in them worth reading. I get that, but ESPECIALLY for software developers, if you're going to make use of agentic AI systems, you need to have a metaphorical (if not literal) moat around the agent before you turn it loose.

That was true before we started talking about the security implications of an AI with privileged access coming under attack.
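For the shell-literate, the "moat" can be sketched with Docker's own isolation flags. This is a minimal illustrative sketch, not a hardened sandbox; "agent-image" is a hypothetical image name standing in for whatever agent runtime you've built.

```shell
# Run an agent with no network access, a read-only root filesystem,
# no Linux capabilities, and exactly one writable project directory.
# "agent-image" is a placeholder; substitute your own agent image.
docker run --rm \
  --network none \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  -v "$PWD/workspace:/workspace" \
  -w /workspace \
  agent-image
```

The key line is `--network none`: the agent can read and write the mounted project directory but can't exfiltrate anything or fetch instructions from the outside, which is most of what the sci-fi wisdom was warning about.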

Comment Re:Pure fusion bombs... (Score 2) 31

Sure, but "pure fusion" bombs are pretty much science fiction at this point. Igniting fusion in lithium deuteride requires a tremendous amount of energy and a not-insubstantial flux of neutrons. You're not getting either of those things in a profile suitable for military deployment without a fission primary. It might be a small (maybe even sub-kiloton) primary, but you're not getting the plutonium or uranium (or maybe neptunium in some cases) out of there any time soon.

Comment Re:How would a jammer work ? (Score 1) 131

It can't be that fruitless. Green Bank, West Virginia, which houses the Green Bank Observatory, sits inside the National Radio Quiet Zone, which imposes all kinds of restrictions on what devices and equipment are allowed within some distance of the telescopes, since they interfere with the signals.

I will concede that the radio sources the observatory is looking for are rather weaker and more distant than broadcasting satellites in low Earth orbit.

Submission + - Betterment's Financial App Sends Customers a $10,000 Crypto Scam Message (theverge.com)

An anonymous reader writes: Betterment, a financial app, sent a sketchy-looking notification on Friday asking users to send $10,000 to Bitcoin and Ethereum crypto wallets and promising to “triple your crypto,” according to a thread on Reddit. The Betterment account says in an X thread that this was an “unauthorized message” that was sent via a “third-party system.”

Comment AI Reaction (Score 5, Insightful) 23

This was an inevitability, and not just because of the tightening job market. The real gasoline dumped on the tire fire of employment (especially tech employment) in the United States is AI. It used to be that, when I posted a job opening, I got maybe 100 applications over the course of a few weeks from people who moderately embellished their resumes. And that's because the conventional wisdom was to spend the time to put together a really class-A application to the jobs that were the best fit for you.

But AI has changed the math.

Now the hours-long process of fine-tuning your resume to fit a job description and crafting a nice cover letter to go with it is the work of a click or two. Candidates have every reason in the world to apply to as many jobs as they can. It's a classic prisoner's dilemma: sure, everyone is worse off if the HR departments are flooded, but if everyone else does it and you don't, you're never going to land a job.

And so the firehose opens and the job ad that used to get me 100 resumes over the course of a month gets me 1,000 resumes over the course of an hour.

And, just for funsies, most of them are wildly unqualified. AI is happy to lie on your resume for you, and getting it not to do that is hard. So now not only do I have a ton of resumes to go through, I have a crisis of trust on my hands. Who, out of all of these applicants, is telling the truth, and who's basically echoing the job ad back at me?

In that situation, it makes a ton of sense to lean back on trust relationships. Harvard, MIT, Stanford, etc have reputations to uphold. I can count on them to police their candidates. My local university wants to maintain a good relationship with area businesses; if their graduates are BSing me in their resumes I can meet the head of their career center for coffee and get them to deal with it.

There is no solution here. The employment market is a feedback loop. The bigger the applicant-to-job ratio grows, the harder it is for employers to adequately consider and respond to applications, turning the process into something more akin to buying a lottery ticket than submitting a proper application. The more converting an application into a job feels like random chance, the more incentivized applicants are to prioritize quantity over quality, driving the ratio ever higher.

Nothing short of a sudden and profound cratering of the unemployment rate is going to slow this arms race. This isn't the end state of the market; it's going to get worse before it gets better.

Comment Re:Should dumb people get degrees? (Score 1) 93

They strive to build The Everyman, but fail at the core mission more often than we'd like.

But... shouldn't they? I graduated from college in 2002. Today I run software development teams that build cloud-hosted machine learning models, pretty much none of which was a thing in 2002. Even if we want to imagine that college is essentially a trade school with ivy and columns, the fact remains that your average college graduate's degree far outlives the value of most of what they were taught, unless they majored in history.

We certainly want people to exit college with the skills they need to enter either the workforce or grad school, but those skills could be acquired for a whole heck of a lot less time and money than a four-year degree. The college degree is about critical thinking skills, managing complexity, proving a capability for self-education and improvement, etc. Or it should be, anyway.

Comment Re: Shades Of The 2008 Financial Crisis (Score 1) 39

The infection was already there. Anyone who'd invested in any mortgage-derived securities was compromised, because the underwriting and scoring process was fundamentally fraudulent. Securities are rated and, at least in theory, those ratings are supposed to reflect the degree of risk. But the ratings agencies essentially became captives of the folks minting the securities and handed out great ratings like Halloween candy.

The Fed and the money printing and all that -- concerning as it all was -- really had more to do with making sure the national banking industry and therefore the currency didn't fail. There was a moment there when it really felt like a financial 9/11... like everything could come crashing down all across the world as a result of something that happened in Manhattan.

It's just an anecdote, but I vividly remember the short-term credit markets freezing up and everyone collectively realizing that nearly every business in the country uses short-term credit to manage cash flow, decoupling the timing of employee paychecks and infrastructure purchases from sales and client payments. Most people just didn't know that, and without credit to make the accounting process move fluidly, half the country was looking down the barrel of "well, we'll pay you when we know we have money in the accounts."

It's easy to run the Fed down now, but we were one or two days away from a grim fable with an unhappy, bloody ending.
