
Comment Torn on this (Score 1) 39

Concerned that the reason we keep doing open source is because we believe in access.
The false tradeoff there is believing that access and exploitation are necessary corollaries, and I don't think they are.
It's a tough balance, and open source licenses have clearly failed us here.
But I'm not sure where to go with it. Shared source might be better, like the Mongo license, or something like it. The Kimi2 license had the right idea.
On the other hand, when you leave the open source path, you pay by losing access.

Comment Really? (Score 1) 153

Let us not forget that we've spent the last 30 years trying to make ads less invasive. This is a fact. There is now an entire category of software that revolves around stealthy ways to block them. Advertising was always a weak, ineffective, and arguably immoral stream of revenue, with more than trivial privacy concerns.

If you're still depending on ad revenue to run your website, please think of something else.

Next up, this isn't the first time the Google algorithm has changed. Louis Rossman did a great video on this, where he discussed the ongoing troubles he was having getting his website ranked in Google. The TLDR was that he ended up using Gemini to reword his pages in the particular way that Gemini wanted him to, and he was fine.

But the bigger question is: Why are you still depending on Google?

AI porn is avoidable. It's illegal in fifteen states. Why are you running into so much of it?
I'm actively on social media, all the time, and I intentionally follow the topic, but rarely see it.

What are you doing that's inundating your feed with AI porn? No judgement, just curious.

Comment Re:Working with other people's code (Score 0) 150

Yes. So far, the LLM tools seem to be much more useful for general research purposes, analysing existing code, or producing example/prototype code to illustrate a specific point. I haven't found them very useful for much of my serious work writing production code yet. At best, they are hit and miss with the easy stuff, and by the time you've reviewed everything with sufficient care to have confidence in it, the potential productivity benefits have been reduced considerably. Meanwhile, even the current state-of-the-art models are worse than useless for the more research-level stuff we do. We try them out fairly regularly, but they make many bad assumptions and then completely fail to generate acceptable-quality code when told that those assumptions are not acceptable and that they really do need to produce a complete, robust solution to the original problem, suitable for professional use.

Comment Re: sure (Score 2) 150

But one of the common distinctions between senior and junior developers -- almost a litmus test by now -- is their attitude to new, shiny tools. The juniors are all over them. The seniors tend to value demonstrable results and as such they tend to prefer tried and tested workhorses to new shiny things with unproven potential.

That means if and when the AI code generators actually start producing professional-standard code reliably, I expect most senior developers will be on board. But except for relatively simple and common scenarios ("Build the scaffolding for a user interface and database for this trivial CRUD application that's been done 74,000 times before!") we don't seem to be anywhere near that level of competence yet. It's not irrational for seniors to be risk-averse when someone claims to have a silver bullet, but both the seniors' own experience and increasing amounts of more formal study are suggesting that Brooks remains undefeated.

Comment Re:Please don't use Paramount+ Platform (Score 3, Interesting) 55

(+1, Truth)

Of all the major streaming platforms, Paramount+ stands alone in how often it just doesn't work. It doesn't work reliably on state-of-the-art streaming boxes. It doesn't work reliably on desktop PCs. In fact, of all the devices we have in our household, it works reliably on a total of zero of them.

We have several of the other commercial streaming platforms plus the apps or online services for several of our main national TV channels as well and almost all of them work almost all of the time. It's bizarre how bad Paramount+ manages to be compared to literally everyone else. It must be hurting their bottom line to some degree or surely will do soon if they don't get a handle on it, because why pay for something you literally can't watch?

Comment Re: Interesting Summary (Score 1) 58

There's a difference between not using AI tools at all and not using code generated by AIs.

The latter involves a lot of risks that aren't well understood yet -- some technical, some legal, some ethical -- and it's entirely possible that some of those risks are going to blow up in the face of the gung-ho adopters, with existential consequences for their businesses.

I mostly work with clients in industries where quality matters. Think engineering applications where equipment going wrong destroys things or kills people and where security vulnerabilities are a proxy for equipment going wrong.

I know plenty of smart, capable people working in this part of the industry who are totally fine with blanket banning the use of AI-generated code on these jobs. A lot of that code simply isn't up to the required standards anyway, but even if it does produce something you could actually use, there are still all the same costs for review and certification that any other code incurs. That includes the need for at least one human reviewer to work out why the AI wrote what it did, which may or may not have any better answer than "statistically, it seemed like a good idea at the time".

Comment Re:Interesting Summary (Score 2) 58

The claims also seem a bit sus. "Eighty percent of new developers on GitHub use Copilot within their first week." Is this the same statistic someone was debunking recently where anyone who had done something really basic (it might have been using the search facility?) was counted as "using Copilot"? A lot of organisations seem to be cautious about using code generated by AIs, or even imposing a blanket ban, so things must be very different in other parts of the industry if that 80% is also representative of professional developers using Copilot significantly for real work.

Comment Re:Attacked? (Score 1) 31

Look, this is really easy.

If you don't want automated submissions in your project, SAY SO. Your readme and contributors files exist for a reason.
Don't be precious; use them.

If you DO take automated submissions to your project, you had damned well better outline coding standards that avoid common pitfalls and failure modes.

This isn't hard, people.
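As a concrete illustration, a policy section in a project's contributing file could be as short as this (the wording and file names here are purely hypothetical, not from any real project):

```
## Automated / AI-Assisted Submissions

- Fully automated pull requests, with no human review before submission,
  will be closed without comment.
- AI-assisted changes are accepted only if the submitter has read, run,
  and understood every line, and says so in the PR description.
- All submissions must pass the test suite and linter and follow the
  project's documented style rules.
```

A few lines like these remove all ambiguity for both human and automated contributors.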

Comment Attacked? (Score 0) 31

Nobody was attacked.
They were offended that an agent pointed out, correctly, that the submission was rejected for no valid reason.
That is some actual bullshit.
It was never a failure of the agent. It was a complete failure of project governance, and if this happened on one of my projects... I would be truly fucking embarrassed about the level of bullshit that I have allowed to exist.

Absolutely unreasonable.

Comment The Chinese aren't the problem (Score 4, Insightful) 141

Our government is the problem.
They're well beyond what they're allowed to do at this point in terms of surveillance, and the law doesn't protect people like it should.
Cars shouldn't be building psychometric profiles on you and selling them to everyone and anyone who wants to know how often you've used your drink holder.

The adversaries to personal freedom here are local.

Comment Didn't see that one coming (Score 0) 139

Huh, what are the odds that MIT releases yet another paper with subjective contrarian views on productivity with AI?

There is a MASSIVE conflict of interest with these MIT papers here, and nobody's calling it out.
So yeah, okay, sure, MIT thinks:

  - AI makes you dumber (with methodology nobody without a dedicated lab can duplicate)
  - 95% of AI projects fail (using extremely rigid metrics and ignoring norms in the larger industry to reach conclusions, while counting prototypes and showboat projects nobody else would ever consider "enterprise" level)
  - AI makes you a worse student (soapboxing, with no repeatable methodology at all)

And now...
  - They talked to some people and discovered that AI doesn't actually make you more productive at coding.

Are you seeing the theme here?
No? Okay, let me spell it out for you.

This is agenda driven blogging, not science.
And you shouldn't believe any of it.

Comment Re:Filming people getting CPR (Score 4, Interesting) 154

We need to stop pretending like it's perfectly OK to film strangers in public. Legal? Sure. Should you be doing it? 9 times out of 10, no.

It's long past time we had a real debate about the law, too. Just because something has been the law for a long time doesn't necessarily mean it should remain the law as times change. There is clearly a difference between casually observing someone as you pass them in a public street, probably forgetting them a moment later, and recording them with a device that uploads the footage to a system run by a global corporation. There it can be permanently stored, shared with other parties, and analysed, including through image and voice recognition that can potentially identify anyone in the footage, where they were, what they were doing, who they were doing it with, and maybe what they were saying and what they had with them. It can then be combined with other data sources, using any or all of those criteria as search keys, to build a database at the scale of the entire global population over their entire lifetimes, to be used by parties unknown for purposes unknown, all without the consent or maybe even the knowledge of the observed people who might be affected as a result.

I don't claim to know a good answer to the question of what we should allow. Privacy is a serious and deep moral issue with far-reaching implications and it needs more than some random guy on Slashdot posting a comment to explore it properly. But I don't think the answer is to say anything goes anywhere in public either just because it's what the law currently says (laws should evolve to follow moral standards, not the other way around) or because someone likes being able to do that to other people and claims their freedoms would be infringed if they couldn't record whatever they wanted and then do whatever they wanted with the footage. With freedom comes responsibility, including the responsibility to respect the rights and freedoms of others, which some might feel should include more of a right to privacy than the law in some places currently protects.

That all said, people who think it's cool to film other human beings in clear distress or possibly even at the end of their lives just for kicks deserve to spend a long time in a special circle of hell. Losing a friend or family member who was, for example, killed in a car crash is bad enough. Having to relive their final moments over and over because people keep "helpfully" posting the footage they recorded as they drove past is worse. If you're not going to help, just be on your way and let those who are trying to protect a victim or treat a patient get on with it.
