
Comment Absolutely pointless (Score 1) 16

There are only two reasons to build a new piece of hardware rather than make this an app on your phone:

1) It adds new sensors that your phone doesn't have (yet) that enable new functionality. That won't be the case here, since there's no use case for them.
2) It adds new I/O methods that aren't possible on the phone. AR goggles might do this. An AI assistant doesn't; it's all audio and voice.

This is basically just going to be replaceable with a Bluetooth microphone paired to an app on your phone. Which means nobody is going to buy it - even if they can actually find a use case people want AI for (doubtful).

Comment Re:WhatsApp? (Score 1) 83

Those exist, but divide the view count by the number of comments: for the most part it comes out to thousands of views per comment. That means most people aren't using the social part. I've yet to write a YouTube comment, but I use the site daily. So if you asked me whether I use YouTube you'd get a yes, but it's not social media for me. If they limited it to people who read/write comments it would be fair, but I'm not sure they did that.
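The views-per-comment ratio above can be sketched with made-up numbers (the figures here are purely illustrative, not actual YouTube statistics):

```python
# Hypothetical view/comment counts for a few videos - illustrative only.
videos = [
    {"views": 2_500_000, "comments": 1_200},
    {"views": 480_000, "comments": 350},
    {"views": 90_000, "comments": 40},
]

# Even on the "chattiest" video, thousands of viewers never comment.
for v in videos:
    ratio = v["views"] / v["comments"]
    print(f"{ratio:,.0f} views per comment")
```

Whatever the real numbers are, if the ratio is in the thousands, counting every viewer as a "social media user" overstates the social part.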

Comment Re:WhatsApp? (Score 3, Interesting) 83

I'd say the same for YouTube. It's used to watch videos; the number of people who comment on them is minimal compared to the user base. I'd be very curious what exact definition of "social media" they used. I don't think it's what most people consider to be social media.

Comment Perfect storm of mediocre (Score 0) 18

Microsoft hasn't been able to do proper security - or proper development, for that matter - in half a century, and AI is notorious for pissing out poor-quality code.

Glad I only use the git part of GitHub.

If only Microsoft saw some sense and quit pushing this disaster of a technology - or at least gave people the option to leave it out of their workflow. Fuck this AI shit, seriously. It's getting really tiring now...

Comment I'm not that optimistic. (Score 1) 92

Even if the prediction of comparatively controlled impact is accurate, I think it's worth considering just how grim the work is likely to be - not in purely economic terms, but in its character.

Maybe this is a personal peculiarity, but I think there's something exquisitely dispiriting about beating your head against people who are stubborn or clueless enough that every conversation is just a baffling sequence of different confusions, some of them repeated from previous rounds. It's a totally different thing from dealing with someone who is merely ignorant but learning, especially if they are enthusiastic about it.

Even if everything is fine in terms of job pace and security, that seems like it is shaping up to be a really hellish aspect of dealing with bots. The experience is a somewhat weirder simulation of dealing with a chirpy, people-pleasing, very-junior type, except they are far more likely to lie than to admit ignorance, and they never learn (possibly the SaaS guys hoovering up your interactions in the background will make the next iteration better, possibly not - progress seems to have slowed considerably after only a brief period of improvement - but any given release is more or less full Groundhog Day).

That seems like a nightmare. Everything that sucks about teaching or mentoring, but precisely none of the rewarding aspects.

Comment Re:Stop now (Score 1) 114

Yes, actually yes. I would do it differently, though: I would use sodium and burn it in water to create the particulate matter. This would accomplish more than one goal - it would block a percentage of incoming solar energy, and the products would precipitate into the ocean water, deacidifying it. If done correctly, maybe as a NaK alloy, it could also be used to generate power while burning in water.
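For reference, the textbook reaction the deacidification claim rests on (standard chemistry, not something stated in the comment) is sodium reacting with water to yield sodium hydroxide, a strong base, plus hydrogen:

```latex
% Sodium + water -> sodium hydroxide (a base, which would counteract
% ocean acidification by neutralizing dissolved carbonic acid) + hydrogen gas.
2\,\mathrm{Na} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{NaOH} + \mathrm{H_2}
```

The reaction is strongly exothermic, which is where the power-generation idea comes from; whether it produces useful atmospheric particulates is a separate question the comment doesn't settle.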

Comment Universal positive regard (Score 5, Interesting) 32

Sometimes, to get your thoughts straight, all you need is to discuss them with somebody. Chatbots seem to be just great for this. You really do not need anything from them; you just explain your ideas and that makes them more organized. This is really useful, especially now, when you really have to be careful what you say to others or you may end up totally cancelled.

ChatGPT has three aspects that make this practice - what you describe - very dangerous.

Firstly, ChatGPT implements universal positive regard. No matter what your idea is, ChatGPT will gush over it, telling you that it's a great idea. Your plans are brilliant, it's happy for you, and so on.

Secondly, ChatGPT always wants to pull you into a conversation; it always wants you to continue interacting. After answering your question there's *always* a follow-up "would you like me to..." that offers the user a quick, low-effort way to continue. Ignoring these offers, viewing them as the output of an algorithm rather than a real person trying to be helpful, is psychologically difficult. It's hard not to say "please" or "thank you" at the prompt, because the interaction really does seem like it's coming from a person.

And finally, ChatGPT remembers everything, and I've recently come to discover that it remembers things even if you delete your projects and conversations *and* tell ChatGPT to forget everything. I had been using ChatGPT for several months to talk about topics in a book I'm writing; I decided to reset the ChatGPT account and start from scratch, and... no matter how hard I tried, it still remembered topics from the book.(*)

We have friends for several reasons, and one of them is that your friends keep you sane. It's thought that interaction with friends is what keeps us within the bounds of social acceptability: true friends want the best for you, and sometimes they will rein you in when you have a bad idea.

ChatGPT does none of this. Unless you're careful, the three aspects above can lead just about anyone into a pit of psychological pathology.

There's even a new term for this: ChatGPT psychosis. It's when you interact so much with ChatGPT that you start believing things that aren't true - notable recent examples include people who were convinced (by ChatGPT) that they were the reincarnation of Christ, that they are "the chosen one", that ChatGPT is sentient and loves them... and the list goes on.

You have to be mentally healthy and have a strong character *not* to let ChatGPT ruin your psyche.

(*) Explanation: I tried really hard to reset the account back to its initial state, including several rounds of asking ChatGPT itself for techniques to use and which account settings to change (about 2 hours total), and after all of that it *still* knew about my book and would answer questions about it.

I was only able to detect this because I had a canon of fictional topics to ask about (the book is fiction). It would be almost impossible for a casual user to discover this, because any test questions they asked would necessarily come from the internet's general body of knowledge.

Comment Re: Doesn't matter (Score 2) 137

No, he chose war in Europe; this is the next step for Putin. You didn't really think the ruzzian murderers who make up their armed forces would be allowed to return to the motherland alive, did you? The next target is Estonia or Latvia, then Poland, and the rest will follow. Regardless of what anyone thinks, ruzzia has learned to fight the next type of war, and nobody is ready for it except the one nation holding back the orcs - Ukraine.
