
Comment Re: AI has finally caught up- (Score 2) 109

I use Cursor a lot. But, unlike this ill-educated entrepreneur, I know its weaknesses and its risks, and therefore keep it on a very short leash.

For example, I never let it access our source code repository at all. I never let it pull down new dependencies. I never give it any database access at all. I never give it blanket authorization to run powershell scripts or similar. I have given it blanket authorization for benign commands like grep and listing the files on disk and creating new files. And I always look over what it generates before accepting it.
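The policy above boils down to a simple allow-list: benign read-only commands run without fuss, and everything else gets a human look first. Here is a toy sketch of that idea in Python (this is purely illustrative; it is not actual Cursor configuration, and the command set is hypothetical):

```python
# Toy allow-list gate: benign commands pass, everything else needs review.
ALLOWED = {"grep", "ls", "find", "cat", "touch"}  # hypothetical benign set

def needs_review(command: str) -> bool:
    """Return True if the command falls outside the benign allow-list."""
    if not command.strip():
        return True
    program = command.split()[0]
    return program not in ALLOWED

print(needs_review("grep -rn TODO src/"))   # False: benign, runs automatically
print(needs_review("pwsh -c Remove-Item"))  # True: requires a human look
```

The point is not the implementation but the posture: default-deny, with a short explicit list of exceptions.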

It is outright folly to think of these AI assistants as intelligent beings who know what they are doing. They AREN'T! They can generate some handy code, but they do this without the kind of cognitive process that humans use to do this. They just go through the motions with no inner understanding, even though what they do can be very useful in the right context.

This whole notion of asking Cursor why it did that and getting a "confession" is such ridiculous anthropomorphism. Cursor has NO IDEA why it did what it did, because it has NO MEMORY of what it was thinking and no capacity for meta-cognition at all! It might have a log in the chat history about what it did, but that's it. It is just looking over that and making inferences about why an AI might have done that, and spitting out the words that the prompt implies it should. If people must think of these things as sentient beings (which they are NOT), it would be better to think of them as mentally broken sociopaths who sometimes just go off the rails for no reason, and say things like "I'm sorry" without feeling the slightest hint of guilt or even understanding what guilt is.

Comment Re:I strongly feel that red is better than blue. (Score 2) 59

Developer productivity is notoriously difficult to measure rigorously, and your list of concerns touches on some of the reasons why.

Sloppy measurements are the only ones available, for the most part.

There will be a subjective component to the assessments being made here. There is no escaping that. That doesn't mean that the conclusion is automatically false. You certainly have the option to refuse to adapt to a changing landscape while calling everyone else liars and/or idiots. At this point, I consider that the losing bet. The market will be the ultimate arbiter of truth for both of us.

Comment Re:Efficiency Boost (Score 4, Insightful) 59

The amount of technical work a business needs to accomplish over time is not fixed. For a healthy business, there are always lots of things they would like to develop but can't due to limits in capacity. So, with this productivity boost, they can get even more features out the door and (hopefully) make even more money.

Why would they cut staff and keep a tiny throughput with a smaller profit margin? Their competitors, who capitalize on AI-assisted efficiency boosts, will eat them for lunch. And then hire the staff they laid off.

Comment Re:Efficiency Boost (Score 4, Interesting) 59

Same.

When AI was still young, I tried it and found the code quality to be unacceptable. I was at that time in the "it's a bubble that will pop" crowd.

It's better now. I use it more now. And it saves me time and makes me more productive.

It can't do my job without me. And other people on the team still come to me for help. My skills as a designer and knowledge of our legacy system still make me valuable. I can just do more in less time now.

The other consequences of AI (impact on electricity cost, pollution, etc.) are problematic. So are the legal issues with mass copyright infringement in the training data. That all needs to be properly hashed out. Probably the end will be the same: the super rich get richer and everyone else gets table scraps. That's just humans at work. But, apart from all that, AI is good.

I am officially in the pro-AI camp now.

Comment Re:I guess I stop using Ubuntu (Score 1) 132

Interesting. A conversation about the beliefs and statements of a famous philosopher, and his reasons for them, doesn't qualify as a conversation about philosophy?

I am sensing a distinction without a difference.

Be that as it may, I don't have control over Gemini's training data so I have no say over how much representation Islam might have in it. That would be on Google.

Comment Re:I guess I stop using Ubuntu (Score 2) 132

Really? Consider:

Me: Was Nietzsche antisemitic?

Gemini: The question of Friedrich Nietzsche’s relationship with antisemitism is a classic case of historical identity theft. The short answer is: No, Nietzsche was not an antisemite. In fact, he was frequently and vocally "anti-antisemitic."
The confusion largely stems from two sources: the deliberate manipulation of his work by his sister after his death, and his aggressive critique of Judeo-Christian morality, which casual readers often mistake for ethnic or racial hatred.
(goes on to give specifics)

My Co-worker: Who is Knee Chi?

---

Me: Why did Nietzsche accuse Christians of being nihilists?

Gemini: It seems like a paradox: how can a religion with a God, a heaven, and a strict moral code be "nihilistic"? For Nietzsche, nihilism wasn't just the belief in nothing; it was the denial of this life in favor of a fictional one.
He argued that Christianity is the "ultimate nihilism" because it devalues the only reality we actually have. Here is how he broke down that accusation: (goes on to give specifics).

A typical Christian: Because Nietzsche was an atheist, and wrong.

---

If you don't want Google tracking your philosophical interests, just clear cookies before and after a chat.

Comment Re:I guess I stop using Ubuntu (Score 3, Interesting) 132

Personally, I like Ubuntu. It's been really stable, easy to use, and does what I want.

I like AI too. Is that going to get me roasted? I use it at work and it has gotten much better in the past year or so, so I use it a lot at work now. And I chat anonymously with Gemini because it can hold a philosophical conversation with me better than most people I know.

I don't like being spied on, of course. So I will probably be disabling the various AI features that they are baking into the OS. I have no need for text/speech AI and I have no need for baked-in task automation or troubleshooting AI. When I discover a need for AI, I just reach out to the proper tool.

But I am not going to quit using my favorite distro just because they are trying to keep up with the rest of the world. So long as it's easy for me to turn it off (no Microsoft-style dark patterns) and it stays off once I turn it off (again unlike Microsoft), I am fine with it.

Comment Re: Yes (Score 4, Insightful) 191

Teachers are under political pressure to give students As. The grade they give to the kids is implicitly a grade they are giving to themselves and to the school district. So, they have strong incentives to inflate those grades.

I don't know anything about your kids, of course. Maybe yours are exceptionally bright. In any case, your anecdote isn't evidence, and the grades received in high school are not a measure of their actual level of understanding and performance (given these pressures).

It is basically settled science that "practice makes perfect." Homework, as implemented, may be problematic, but most people do not have eidetic memories and need to do homework in order to master the material.

Comment Re:OSS model for physical stores (Score 2) 57

I wish more people cared about device lock-in and DRM encumbrance of e-books. Sites like ebooks.com present lots of DRM-free options with an easy way to filter by that (unlike Amazon and Google).

But even totally locked-in e-readers, with planned obsolescence and eventual excommunication from the garden (while still perfectly functional), are convenient enough that people put up with it. Sadly.

Comment Re:Equilibrium (Score 3, Insightful) 59

Every single one of us knew that eliminating workers was the primary reason for the worldwide interest in AI. Everyone who said anything to the contrary was lying, and everyone who heard them knew it. Absolutely zero people believed that AI was going to lead us to some strange utopia where everyone was paid for work they didn't have to do anymore. The article's tone of "oh look, they made all this money and didn't hire more people and it's because of AI and oh what hypocrites they are!" is just silly. This is exactly what literally everyone knew would happen.

Well, except those who believed, and still believe, that AI just won't work. That remains a possibility too. Maybe this will all fall apart. I can't see the future any better than anyone else. But the one and only thing that would prevent AI-enabled mass layoffs would be AI's own failure to shoulder the load. If it can, it will, and the industry absolutely will let go of everyone they can, as soon as they can, without any inhibitions. That's just how humans work, so we can count on it.

Warnings about how this might result in a depression won't stay anyone's hand. Mocking the industry leaders for creating an economy where nobody can afford the stuff they produce won't make them bat an eye. None of those words change their incentives, and their incentives will be acted upon, even if it leads us straight into the greatest depression in world history.

Legal regulation might change things. But it is extremely hard to pass regulation that is not enthusiastically endorsed by the oligarchy that actually runs our government. So, it won't happen until the fallout from the depression hits the wealthy's financial base hard enough for them to want the regulation.

We are going to have to go through hell in order to get to heaven. Or even purgatory.

Comment Re:Auto Mechanic doesn't like latest symphony (Score 5, Insightful) 176

Well, there is a difference between understanding how nuclear weapons work, and understanding the global political environment (not to mention the elements of human psychology that help shape it). Making predictions about whether or not there will be a nuclear war anytime soon would be better left to focus groups consisting of political scientists, psychologists, and sociologists.

I, for one, am not an expert in any of these fields, so I am nowhere near qualified to weigh in. That, of course, won't inhibit me at all.

Genetically speaking, modern humans are no more enlightened than the warmongering war criminals that led the world during the dark ages. We are not intrinsically more moral or more concerned about others, etc. The only difference is the technological landscape we are in. Not just the presence of nuclear weapons, but also the communication technologies that have tied the entire world together and produced a much more aware populace. This creates new political pressures and new incentives to make different choices than our recent ancestors would have (but again, morality is not a factor. It's still just a matter of incentives and consequences).

The concept of mutually-assured destruction is not very noble, but it is very real, and it is effective at staying the hands of the world's nuclear powers (at least somewhat). And this is also nothing new, as it has always been true of humans that the most effective deterrent to violence is a credible threat of devastating retaliatory violence (insane people excepted, of course).

So, with that in mind, our best short-term option is to ensure that world leaders are sane enough to understand this mutually-assured destruction risk. This isn't a judgment about their morality or even their loyalty (as those things are too easy to lie about) but about their mental grasp of their situation. So long as they all know that such a war would end all life on our planet, they probably won't start it. This also means ensuring that any country that cannot produce leaders at this level of sanity must be proactively prevented from attaining nuclear weapons by intrusive actions on the part of the greater world powers.

Unfortunately, there isn't any way to guarantee the sanity of the leaders of any country. Democracy sure doesn't do it (it's just a popularity contest and insane people can still win great popularity among the voting masses), and dictatorship sure doesn't do it either.

I was going to add a bit about countries forming alliances with each other and such, but that feels secondary to the main point about sane leadership, which we have no way to ensure.

So, in short, we are doomed.

Comment Re:Nope! (Score 3) 57

An iris scan is still just data. It can be copied or forged. How is it any more reliable than any other data that can be copied or forged?

I think this whole notion of "prove you are a human from the other side of the Internet" is misguided. I understand why people would want this, but given the nature of the tech, it is too easy to fake it. We are going to need to adapt differently.
