If I was considering buying Dell in the future, I sure as hell ain't now.
But George Soros amirite folks?!
I am seriously confused about the rationale here.
You thought Dell was a good fit for your next purchase. You've done some research, and Dell appears to have met your needs for quality and price.
The owners of Dell pledged $6.25 billion to a system that should help children in various ways. There have been a lot of complaints recently from young people about how the system has failed them: housing is too expensive, there aren't enough high-paying jobs, and they can't afford to get married, own a home, or have kids.
This donation seems like it would be a start towards fixing this. Perhaps other high-end donors will add to the accounts.
Having an easy way to add to your children's future over time seems like it would help fix this, and it's simple and a "no brainer" for parents: they don't have to learn finance, do research, or set up accounts, and everyone online will be analyzing the accounts and telling them whether they're a good idea. Parents can focus on parenting and rely on expert analysis.
And you are so miffed at this that... for some reason... you've decided to boycott Dell and spend your money elsewhere?
I'm completely at sea here. In what universe does your decision make sense?
Addendum: Looking over the edit page of this post, it occurs to me that there is a universe where "jacks smirking reven" is not a US citizen, and is just making troll posts to foment divisiveness in America. I've seen a lot of really funny news accounts (example) of political shitposters on X being from foreign nations. Are we in that universe?
And will Slashdot ever add the "account based in" feature?
(BTW, I'm from the US, I promise.)
The project was doomed to fail from the very beginning.
Apple is too privacy-focused. To make AI competitive, you have to be willing to share private data with it.
Right now, Gemini Live on Android sucks terribly, but I'm more hopeful for it.
I'm not the original poster.
Nexperia, I don't know. But the US did have issues with China having access to technology from another Dutch company, ASML.
https://www.bloomberg.com/news...
So it's not far-fetched to think that the US has something to do with this issue with Nexperia as well.
If it were a Chinese subsidiary with a Dutch parent company, you can be sure the
That couldn't possibly happen!
Foreign companies (Tesla excepted) are not allowed to own more than 49% of Chinese-based companies/joint ventures.
So if international law were truly applicable, there would be parity between nations on foreign majority ownership.
Sometimes, to get your thoughts straight, all you need is to discuss them with somebody. Chatbots seem to be just great for this. You really don't need anything from them; you just explain your ideas, and that makes them more organized. This is really useful, especially now, when you really have to be careful what you say to others or you may end up totally cancelled.
ChatGPT has three aspects that make the practice you describe very dangerous.
Firstly, ChatGPT implements universal positive regard. No matter what your idea is, ChatGPT will gush over it, telling you that it's a great idea. Your plans are brilliant, it's happy for you, and so on.
Secondly, ChatGPT always wants to pull you into a conversation; it always wants you to continue interacting. After answering your question there's *always* a follow-up "would you like me to..." that offers a quick, low-effort way to keep going. Ignoring these prompts, viewing them as the output of an algorithm rather than a real person trying to be helpful, is psychologically difficult. It's hard not to say "please" or "thank you", because the interaction really does seem like it's coming from a person.
And finally, ChatGPT remembers everything, and I've recently discovered that it remembers things even if you delete your projects and conversations *and* tell ChatGPT to forget everything. I'd been using ChatGPT for several months to talk about topics in a book I'm writing; I decided to reset the ChatGPT account and start from scratch, and... no matter how hard I try, it still remembers topics from the book. (*)
We have friends for several reasons, and one reason is that your friends keep you sane. It's thought that interactions with friends are what keep us within the bounds of social acceptability, because true friends want the best for you, and sometimes they will rein you in when you have a bad idea.
ChatGPT does none of this. Unless you're careful, the three aspects above can lead just about anyone into a pit of psychological pathology.
There's even a new term for this: ChatGPT psychosis. It's when you interact so much with ChatGPT that you start believing things that aren't true. Notable recent examples include people who were convinced (by ChatGPT) that they were the reincarnation of Christ, that they were "the chosen one", that ChatGPT is sentient and loves them... and the list goes on.
You have to be mentally healthy and have a strong character *not* to let ChatGPT ruin your psyche.
(*) Explanation: I tried really hard to reset the account back to its initial state, had several rounds of asking ChatGPT for techniques to use, which settings in the account to change, and so on (about 2 hours total), and after all of that, it *still* knew about my book and would answer questions about it.
I was only able to detect this because I had a canon of fictional topics to ask about (the book is fiction). It would be almost impossible for a casual user to discover this, because any test questions they asked would necessarily draw on the general body of internet knowledge, which the model could answer from anyway.
We can predict everything, except the future.