
Comment Like 16th Century Americas (Score 1) 116

Just a bit more than five hundred years ago Cortes & Co. arrived in the Americas. They were riding horses, wearing steel armor, wielding firearms, and spreading diseases for which the natives of the western hemisphere had no defenses. When two previously unconnected networks of similar entities encounter each other, there is conflict, and one "giant component" emerges. The natives that are left are perhaps 1% of their former number and in general they subsist at the edges of a transplanted European society.

AI has reached the point where it's hard to tell meat from machine, and the internet is now having that same experience. These attempts to create human-only networking are going to crush the life out of existing social media KPIs, and I think it'll be good for the Fediverse. Bot operators don't want to manually work their way through archipelagos of tiny spaces that do NOT want them. There's a political repression angle to the identity verification as well - if you want to manipulate the masses, you gotta herd 'em into a space where you can DO that. Ten thousand digital islands are frightful when you have clear memories of being able to operate in a few globally flat spaces like Facebook and Twitter.

I've done computational social sciences stuff with a heavy conflict component. The day Musk took over Twitter was the equivalent of the Titanic bumping that iceberg. The sinking took about six months and I'm glad I made it to a life boat. But the really frightful thing here?

The same dynamics that apply to these social sites today are coming for white collar jobs and this isn't going to be measured in decades, it's going to happen in at most a few quarters. I hope my health care startup is about to get funded, because the alternatives for me are pretty grim. As for the vast majority of people who don't have a computer science background and the autistic focus superpower? I imagine what they feel is akin to the mood in Tenochtitlan in the early 1520s.

Comment Re:Working with other people's code (Score 0) 150

Yes. So far, the LLM tools seem to be much more useful for general research purposes, analysing existing code, or producing example/prototype code to illustrate a specific point. I haven't found them very useful for much of my serious work writing production code yet. At best, they are hit and miss with the easy stuff, and by the time you've reviewed everything with sufficient care to have confidence in it, the potential productivity benefits have been reduced considerably. Meanwhile, even the current state-of-the-art models are worse than useless for the more research-level stuff we do. We try them out fairly regularly, but they make many bad assumptions and then completely fail to generate acceptable quality code when told that those assumptions are not acceptable and that they really do need to produce a complete, robust solution to the original problem that is suitable for professional use.

Comment Re: sure (Score 2) 150

But one of the common distinctions between senior and junior developers -- almost a litmus test by now -- is their attitude to new, shiny tools. The juniors are all over them. The seniors tend to value demonstrable results and as such they tend to prefer tried and tested workhorses to new shiny things with unproven potential.

That means if and when the AI code generators actually start producing professional standard code reliably, I expect most senior developers will be on board. But except for relatively simple and common scenarios ("Build the scaffolding for a user interface and database for this trivial CRUD application that's been done 74,000 times before!") we don't seem to be anywhere near that level of competence yet. It's not irrational for seniors to be risk averse when someone claims to have a silver bullet, and both the seniors' own experience and increasing amounts of more formal study are suggesting that Brooks remains undefeated.

Comment Unsurprising (Score 3, Informative) 49

There is nothing at all surprising about this; you have to look at what AI-fluent operators can DO with frontier LLMs.

I have a health care startup that has been enabled by Anthropic's AI. The $100/month I pay for Claude Max gets me the full time equivalent of a really smart (but completely unseasoned) developer, and a half time MBA research assistant. I spend time every day trying to figure out how to employ the 40% of my weekly allocation that currently goes unused.

Clawdbot and its successors are sketchy AF, but I did just give Claude Code the run of a one liter HP EliteDesk with a Proxmox cloud install. No way would I trust it with production systems, but for exploring new stuff it'll get the job done, so long as I stand over it.

If you're any sort of knowledge worker and you can't tell a similar story to this, your career is pretty much cooked.

Comment Startup economics (Score 1) 112

Right now I run with a $100/month Anthropic Max subscription, and the net effect is that I have a really smart (but completely unseasoned) Ph.D. in computer science who works for me full time, and a very organized generalist MBA research assistant who's roughly half time. There are a couple of gratis services in that mix, Exa and Perplexity, that I will start paying for in April. Overall, this $200-ish monthly expense would cost me around a quarter million annually if I had to hire humans to replace it. And even then I wouldn't get someone who matches the 16x7 focus I bring to getting my startup moving.

We are about to hit a hard haves/have nots boundary on this stuff. I've already accepted that AI access is like a turn of the century professional cell phone bill and by summer it's going to match the cost of the sort of luxury sedan an enterprise sales wiz would select. Come next fall I think the choices will be pretty stark - be ready for an inference bill similar in size to the rent on the cute SoMa studio I'm sitting in as I write this, or ... the price of failure is just too ugly to contemplate.

Comment Re:Please don't use Paramount+ Platform (Score 3, Interesting) 55

(+1, Truth)

Of all the major streaming platforms, Paramount+ stands alone in how often it just doesn't work. It doesn't work reliably on state-of-the-art streaming boxes. It doesn't work reliably on desktop PCs. In fact, of all the devices we have in our household, it works reliably on a total of zero of them.

We have several of the other commercial streaming platforms plus the apps or online services for several of our main national TV channels as well and almost all of them work almost all of the time. It's bizarre how bad Paramount+ manages to be compared to literally everyone else. It must be hurting their bottom line to some degree or surely will do soon if they don't get a handle on it, because why pay for something you literally can't watch?

Comment Re: Interesting Summary (Score 1) 58

There's a difference between not using AI tools at all and not using code generated by AIs.

The latter involves a lot of risks that aren't well understood yet -- some technical, some legal, some ethical -- and it's entirely possible that some of those risks are going to blow up in the face of the gung-ho adopters with existential consequences for their businesses.

I mostly work with clients in industries where quality matters. Think engineering applications where equipment going wrong destroys things or kills people and where security vulnerabilities are a proxy for equipment going wrong.

I know plenty of smart, capable people working in this part of the industry who are totally fine with blanket banning the use of AI-generated code on these jobs. A lot of that code simply isn't up to the required standards anyway, but even if it does produce something you could actually use, there are still all the same costs for review and certification that any other code incurs. That includes the need for at least one human reviewer to work out why the AI wrote what it did, which may or may not have any better answer than "statistically, it seemed like a good idea at the time".

Comment Re:Interesting Summary (Score 2) 58

The claims also seem a bit sus. "Eighty percent of new developers on GitHub use Copilot within their first week." Is this the same statistic someone was debunking recently where anyone who had done something really basic (it might have been using the search facility?) was counted as "using Copilot"? A lot of organisations seem to be cautious about using code generated by AIs, or even imposing a blanket ban, so things must be very different in other parts of the industry if that 80% is also representative of professional developers using Copilot significantly for real work.

Comment An assault on reality (Score 1) 63

AI is crossing a sort of digital Rubicon, in that it's engaging in an outright assault on objective reality.

It *seems* clever to use AI to screen resumes. Then AI gets democratized and the candidates are using it. So the AI screening gets amped up, no AI submissions. And all the while the Anthropic "agent employees" are moving in for the kill. The slop benefits the machine, not the meat.

This happened on Xitter from 2022 to 2024. It had been insanely toxic for years, but the arrival of automation was really obvious. I used to do fire watch here in NorCal, live tweeting urban fire evacuations and stuff. I stated an opinion on an unrelated matter, in the middle of the night, and within three minutes an obvious bot insulted me based on an episode from years in the past. Once is an accident, twice a coincidence, three times is enemy action. This happened often enough that it accelerated my exit from the platform.

The thing that is just starting to emerge is that when environments get past a certain level of gaminess, people just opt out. Once out, they will start sorting things into human vs. machine. We're going to end up with a well funded "corporate reality" and a whole bunch of people in an "underemployed poor reality".

A lot of grim stuff flows from this starting point ...

Comment Obvious profiling for repression (Score 5, Insightful) 62

Sorry, maybe y'all are new here, but this is an old, familiar pattern.

Platform used by social movements to organize protests becomes highly effective.

But think of the children gets trotted out, new regulations under a plausible guise.

And then suddenly the would-be civil society participants are finding ICE kicking in their doors.

Have seen this during Iran's Green Revolution, the Arab Spring, Occupy, Black Lives Matter - same crap over and over and over and over, and people just keep going for it.

Comment Re: I think it's worse than that (Score 2) 54

> and making sure the code is both commented
> well and self-commenting where
> that's possible -

I can and do agree with you 100% on the rest, but this made me LoL. Enterprise code being documented is a unicorn.

Frankly, I've thought about seeing what caliber of docs come out of handing some AI a codebase. Even if it's half-wrong, it'd beat the nuthin' I usually get.

Comment brain damage measurable via MRI (Score 1) 31

They are well and truly caught - there's brain damage that's visible via MRI. Doomscrolling is the cognitive equivalent of that Hitachi wand that some women come to regret owning. At a macro level it's a bit like big tobacco in the late 20th century, only this time the addictive thing is also what we use to conduct political debates. That's a flavor of weird the dystopian authors of yesteryear never really contemplated. If only Aldous Huxley were alive to see what we've become.

We are going to have to protect preteens with stern regulation. And that will immediately open the question ... why aren't teens protected? And adults older than twenty-five will get seen to right after that.

I liked having programmatic access to vast English datasets of political and social commentary via Twitter's streaming API. But the crack house atmosphere that evolved in the teens is just ... icky. Maybe if there is to be any social media at all, it's gotta be Fediverse and locally owned, so we don't get the manipulation and chronic overstimulation.

Comment managing humans/agents (Score 2) 15

Anyone who wants to be in a managerial role is going to be managing both humans and agents. This is the new normal; the people who get it quickly will continue to have jobs, and a whole lot of the corporate bench are going to be put out.

If you've ever worked in corporate America tech you know how it goes - lots of people around for day to day, but when TSHTF there's that small group that goes into a conference room, they do NOT take the procedures manuals with them, and when they come out it's fixed.

Those actual builders, Nate B. Jones calls them "tiger teams", are gonna have ongoing employment, plus some folks who get AI who will be handling the day to day agent tooling. Any of the steady state day to day folks who want to continue working are going to have to adapt to this new normal. Most will not. There will be organizational politics trying to kill AI that works, and I expect a lot of companies will be culturally incapable of making the transition, so they will go bankrupt, get bought, etc.
