
Comment 486 seemed magically advanced in the mid 1990s. (Score 2) 128

My first Linux installation was Red Hat 3.03 on a 16MHz 386/SX system in mid-1995. For those of you without an AARP card, that's a 32-bit CPU with a 16-bit bus, which Intel released to cannibalize the market for the 286, a chip that lacked the memory management hardware needed for paging. That meant no swapping: run out of RAM and it was game over.

I think the 486/25 that replaced the 386/SX arrived in ... 1996 ... and it had an astonishing *eight megabytes* of memory. I had kept a one-megabyte LIM/EMS 4.0 physical memory card from my 286 when I got the 386/SX, and that actually mattered with Windows 3.x. I put it in the 486, but next to that vast eight-megabyte expanse of DRAM it didn't last long.

Then in late 1997 my employer went bankrupt, and as part of the dissolution I brought home the dual Pentium 133 system with 32 megabytes of RAM. I remember all my IRC friends were so jealous of that monster ...

Comment Re:LinkedIn issue (Score 1) 90

I'm late in my career, with only a few years left -- and almost all of my jobs have come through connections with people I actually know or have worked for, so I'm not worried.

I think LinkedIn was 'ok' when it started, but then it filled up with spammers, bots, clueless headhunters, obnoxious self-promoters with made-up titles/terms, people trying to be 'influencers' (hate that term; more like hucksters), etc. To use Doctorow's term, it got totally 'enshittified'. I don't miss it at all.

Comment LinkedIn issue (Score 2) 90

I deleted all my data and closed my LinkedIn account years ago (their security was [is?] atrocious).

I was surprised to see that people sometimes put salary history on LinkedIn, which seems like a bad idea to me for various reasons. For one thing, you are tipping your hand to prospective employers.

Though, I guess people could lie and pad their salary, claiming they made $X + $15k, for example. I don't know whether a prospective employer can ask your current employer what you make; I'm guessing that varies from state to state, and even country to country.

Comment Re: prediction (Score 1) 39

I was thinking the same: carefully crafted echo chambers and intentional confirmation bias. The irony of the name 'social media' just keeps growing, as it continues to drive wedges between people instead of bringing them together -- or, at a minimum, supporting a live-and-let-live, 'agree to disagree' mentality. It's really a dystopian nightmare in the making.

Comment Like 16th Century Americas (Score 1) 116

Just a bit more than five hundred years ago Cortés & Co. arrived in the Americas. They were riding horses, wearing steel armor, wielding firearms, and spreading diseases against which the natives of the western hemisphere had no defenses. When two previously unconnected networks of similar entities encounter each other, there is conflict, and one "giant component" emerges. The natives who are left number perhaps 1% of their former population, and in general they subsist at the edges of a transplanted European society.

AI has reached the point where it's hard to tell meat from machine, and the internet is now having that same experience. These attempts to create human-only networking are going to crush the life out of existing social media KPIs, and I think it'll be good for the Fediverse. Bot operators don't want to manually work their way through archipelagos of tiny spaces that do NOT want them. There's a political-repression angle to the identity verification as well: if you want to manipulate the masses, you've gotta herd 'em into a space where you can DO that. Ten thousand digital islands are frightful when you have clear memories of being able to operate in a few globally flat spaces like Facebook and Twitter.

I've done computational social science work with a heavy conflict component. The day Musk took over Twitter was the equivalent of the Titanic bumping that iceberg. The sinking took about six months, and I'm glad I made it to a lifeboat. But the really frightful thing here?

The same dynamics that apply to these social sites today are coming for white collar jobs and this isn't going to be measured in decades, it's going to happen in at most a few quarters. I hope my health care startup is about to get funded, because the alternatives for me are pretty grim. As for the vast majority of people who don't have a computer science background and the autistic focus superpower? I imagine what they feel is akin to the mood in Tenochtitlan in the early 1520s.

Comment Re:Working with other people's code (Score 0) 150

Yes. So far, the LLM tools seem to be much more useful for general research purposes, analysing existing code, or producing example/prototype code to illustrate a specific point. I haven't found them very useful yet for much of my serious work writing production code. At best, they are hit and miss with the easy stuff, and by the time you've reviewed everything with sufficient care to have confidence in it, the potential productivity benefits have been reduced considerably. Meanwhile, even the current state-of-the-art models are worse than useless for the more research-level stuff we do. We try them out fairly regularly, but they make many bad assumptions, and when told that those assumptions are not acceptable and that they really do need to produce a complete, robust solution to the original problem suitable for professional use, they completely fail to generate code of acceptable quality.

Comment Re: sure (Score 2) 150

But one of the common distinctions between senior and junior developers -- almost a litmus test by now -- is their attitude to shiny new tools. The juniors are all over them. The seniors tend to value demonstrable results, and as such they prefer tried and tested workhorses to shiny new things with unproven potential.

That means that if and when the AI code generators actually start producing professional-standard code reliably, I expect most senior developers will be on board. But except for relatively simple and common scenarios ("Build the scaffolding for a user interface and database for this trivial CRUD application that's been done 74,000 times before!"), we don't seem to be anywhere near that level of competence yet. It's not irrational for seniors to be risk-averse when someone claims to have a silver bullet, and both the seniors' own experience and a growing body of more formal study suggest that Brooks remains undefeated.

Comment Unsurprising (Score 3, Informative) 49

There is nothing at all surprising about this; you have to look at what AI-fluent operators can DO with frontier LLMs.

I have a health care startup that has been enabled by Anthropic's AI. The $100/month I pay for Claude Max gets me the full-time equivalent of a really smart (but completely unseasoned) developer, plus a half-time MBA research assistant. I spend time every day trying to figure out how to employ the 40% of my weekly allocation that currently goes unused.

Clawdbot and its successors are sketchy AF, but I did just give Claude Code the run of a one-liter HP EliteDesk with a Proxmox cloud install. No way would I trust it with production systems, but for exploring new stuff it'll get the job done, so long as I stand over it.

If you're any sort of knowledge worker and you can't tell a similar story to this, your career is pretty much cooked.

Comment Startup economics (Score 1) 112

Right now I run with a $100/month Anthropic Max subscription, and the net effect is that I have a really smart (but completely unseasoned) computer science Ph.D. who works for me full time, and a very organized generalist MBA research assistant who is roughly half time. There are a couple of gratis services in that mix, Exa and Perplexity, which I will start paying for in April. Overall this $200-ish monthly expense would cost me around a quarter million annually if I had to hire humans to replace it, and even then I wouldn't get someone who matches the 16x7 focus I bring to getting my startup moving.

We are about to hit a hard haves/have-nots boundary on this stuff. I've already accepted that AI access is like a turn-of-the-century professional cell phone bill, and by summer it's going to match the cost of the sort of luxury sedan an enterprise sales wiz would select. Come next fall I think the choices will be pretty stark: be ready for an inference bill similar in size to the rent on the cute SoMa studio I'm sitting in as I write this, or ... the price of failure is just too ugly to contemplate.

Comment Re:Please don't use Paramount+ Platform (Score 3, Interesting) 55

(+1, Truth)

Of all the major streaming platforms, Paramount+ stands alone in how often it just doesn't work. It doesn't work reliably on state-of-the-art streaming boxes. It doesn't work reliably on desktop PCs. In fact, of all the devices we have in our household, it works reliably on a total of zero of them.

We have several of the other commercial streaming platforms plus the apps or online services for several of our main national TV channels as well and almost all of them work almost all of the time. It's bizarre how bad Paramount+ manages to be compared to literally everyone else. It must be hurting their bottom line to some degree or surely will do soon if they don't get a handle on it, because why pay for something you literally can't watch?

Comment Re: Interesting Summary (Score 1) 58

There's a difference between not using AI tools at all and not using code generated by AIs.

The latter involves a lot of risks that aren't well understood yet -- some technical, some legal, some ethical -- and it's entirely possible that some of those risks are going to blow up in the faces of the gung-ho adopters, with existential consequences for their businesses.

I mostly work with clients in industries where quality matters. Think engineering applications where equipment going wrong destroys things or kills people and where security vulnerabilities are a proxy for equipment going wrong.

I know plenty of smart, capable people working in this part of the industry who are totally fine with blanket banning the use of AI-generated code on these jobs. A lot of that code simply isn't up to the required standards anyway, but even if it does produce something you could actually use, there are still all the same costs for review and certification that any other code incurs. That includes the need for at least one human reviewer to work out why the AI wrote what it did, which may or may not have any better answer than "statistically, it seemed like a good idea at the time".

Comment Re:Interesting Summary (Score 2) 58

The claims also seem a bit sus. "Eighty percent of new developers on GitHub use Copilot within their first week." Is this the same statistic someone was debunking recently, where anyone who had done something really basic (it might have been using the search facility?) was counted as "using Copilot"? A lot of organisations seem to be cautious about using code generated by AIs, or even impose a blanket ban, so things must be very different in other parts of the industry if that 80% is actually representative of professional developers using Copilot significantly for real work.
