Comment Leaving for less bad experience (Score 4, Informative) 63

I was one of the people who left GitHub this week.

Originally I created a GitHub account (back before Microsoft bought it) because that is where the developers were. People kept complaining to me that they wanted to contribute to my projects, but didn't want to do something so difficult as e-mailing a patch or using "old" technology like svn. The new devs are all about pull requests.

So I made a GitHub account and uploaded some of my projects there, and I did get the pull requests. But over time it's become a worse and worse experience. I get almost no pull requests from real developers fixing real problems anymore. It's all AI slop, harassment, trolls, and nagging from Microsoft to enable 2FA, to switch to tokens, to upgrade to an Enterprise account. The GitHub experience is now almost all pain and next to no benefit.

I've started migrating my projects to another platform that doesn't demand 2FA for a fun weekend project, doesn't try to up-sell, and doesn't push its automated crap into my projects.

Comment Is that really falling for it? (Score 1) 151

The summary says people will typically click on the links in phishing e-mails to learn about things like changes to the vacation policy. But is that really "falling" for the phishing attack? The employee isn't putting in any information, isn't giving away any secrets, isn't trying to log in to anything. They're just clicking a link to see where it goes.

It's not _good_ that the employee is getting that far along in the process, but it's hard to say that just clicking the link is falling victim to a phishing attack if they aren't doing anything that gives up information or affects the company. Now, if you got them to sign in to the fake website, or to take action within the company based on the info on the fake website, then I'd agree the employees fell for the attack.

Comment Re:Replace Yourself (Score 1) 34

Someone trying to train AI agents on bad data soon won't have a job. It's not an unsupervised position where anyone can just toss bad data into the machine. Typically there are a couple of layers to the process: person A provides training data, person B makes corrections, and person C verifies those corrections. To get bad data into the system you'd need all three randomly assigned people working together to provide bad training data. Anyone not working in good faith would soon be caught and fired.

Comment Re:Superman is not an interesting character (Score 1) 124

Superman _can_ be an interesting character, when written well. The problem is studios usually want to show the big, flashy punch-up scene, and that is pretty boring when it's Superman.

Superman is at his best, I think, when we see behind the scenes. Who he is as Clark, how he navigates balancing his double identities, how he resists doing things the "easy way" with his god-like powers.

The New Adventures Of Superman (for all its faults) got that right. It focused on Clark most of the time and brought out Superman sparingly. We got to see him trying to be a good guy in an imperfect world, not just a super-strong guy who could lift stuff.

Comment Test it occasionally (Score 1) 248

I've tested a variety of chat bots on a regular basis to see how they perform at coding or suggesting logic solutions. I've tried ChatGPT, Copilot, and Llama. Most of the time they are pretty bad, particularly at anything above entry-level.

So I could see them being useful for a complete beginner (as long as the beginner checks the results) or as a way to fill in some common boilerplate, but they're not really useful for anything beyond that.

Most of the code LLMs have given me (in Bash, C, Java, and Python) doesn't compile or doesn't run properly or is _close_ but has the logic backwards. It looks okay, but it's not actually functional for the task at hand.
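To make that "close, but logic backwards" point concrete, here's a made-up sketch (not taken from any actual chatbot transcript) of the kind of inverted-condition bug I mean. The task, the function names, and the sample data are all hypothetical:

```python
# Hypothetical task: drop lines that start with '#' (i.e., strip comments).

def strip_comments_llm(lines):
    # The kind of slip an LLM makes: the condition is inverted,
    # so this KEEPS the comment lines instead of dropping them.
    # The code runs fine and looks plausible at a glance.
    return [line for line in lines if line.startswith("#")]

def strip_comments_fixed(lines):
    # Corrected logic: keep only the lines that do NOT start with '#'.
    return [line for line in lines if not line.startswith("#")]

sample = ["# header", "value = 1", "# note", "value = 2"]
print(strip_comments_llm(sample))    # ['# header', '# note'] -- backwards
print(strip_comments_fixed(sample))  # ['value = 1', 'value = 2']
```

That's exactly why it's dangerous for beginners: the wrong version doesn't crash, it just quietly does the opposite of what was asked.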

Where I have found LLMs semi-useful is in brainstorming. If I give it a problem I'm working on and ask for a couple of approaches it'll usually give me one way to solve the problem that makes sense. It usually can't actually code the solution properly, but it'll give me some ideas and then I can code the solution myself.

Comment Re:"You are not crazy," the AI told him. (Score 1) 175

Of course it's not illegal. Anyone can tell another person they don't think they're crazy. I tell my friends they are not crazy on a regular basis.

What would be illegal is pretending to be a doctor and claiming that, in your medical opinion, a person is/isn't crazy. But the AI isn't a person, it can't impersonate a doctor because it's clearly not a person. Much like how Monopoly money isn't counterfeit because no one would believe it was real money.

Comment Re:Needs sufficient oversight (Score 1) 80

> In Canada the law wasn't supposed to allow MAID for people with only mental health conditions, but an inquiry determined that the system was approving it for practically anyone who asked.

Good! How is that a bad thing? It doesn't matter why someone wants to end their life. If they are capable of making the choice, then let them make it. No one else should have the right to interfere or to tell them they can't decide when to live and when to die.

How about it's none of your business why someone wants to die, and you just let them get on with their own affairs, which have nothing to do with you.

Comment Re:Ripping the junk out (Score 2) 57

The problem is the AI bots often get the content wrong and strip away important pieces of context. People might be getting a "nicer" experience, but they are getting a much less accurate one. It's not going to be good for people needing accurate information rather than just mindless entertainment.

Comment Re:Lightspeed (Score 1) 21

I wouldn't expect it to be, but if I were working at Red Hat I'd be pretty embarrassed. The difference is _most_ AI chatbots will talk about anything - any topic, any field - so it makes more sense that they would make mistakes.

Red Hat's Lightspeed _only_ talks about Red Hat software and related topics and _still_ gets it wrong. It's an almost infinitely smaller scope, but its answers are still terrible.

Comment Lightspeed (Score 4, Interesting) 21

It should be mentioned that Lightspeed only answers questions about Red Hat products and related topics, like compilers and open source software. It also does not always answer truthfully. For example, I asked Lightspeed about working with boot environments on RHEL, and it told me this feature was available by default (false) and provided me with a link to online documentation that went to an invalid URL (HTTP error 404). It also gave incorrect instructions for enabling extra repositories like EPEL and RPMFusion, which is pretty basic stuff.

Comment Oh Apple (Score 4, Insightful) 60

Only Apple would create multiple virtual machines and call them Containers. I just know in a few months I'm going to hear some Apple fanboi brag that they don't use virtual machines anymore, they use Containers, because containers are lightweight and efficient. And the reality distortion field grows stronger.
