Comment Re:Unsurprising (Score 1) 110

Wave function collapse implies that the wave collapses to an actual state at the point where the photon hits the screen after going through the slits. But what if it didn't collapse? What if the wave just starts interfering with the wave of the screen itself, combining into a single, far more complex wave? And then that wave, which now includes the photon being absorbed and re-emitted, continues propagating until it interacts with the wave of our detection apparatus (or our eyes)? What I'm describing is generally called quantum decoherence. Can't this explain the effect of an observer without any magic? The only downside is that the wave function now includes the results of what would happen at every possible location where the photon could have interacted with the screen, which is just the many-worlds interpretation: admittedly very wasteful, but still a valid explanation. (Not my idea... I heard this from Sean Carroll.) The question then becomes: why do we seem to experience only one projection of this wave function, the projection that is our universe?
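
A rough way to write that down (my own sketch of the textbook decoherence picture, not something taken from Carroll): instead of collapsing, the photon's superposition over screen positions just keeps entangling with bigger and bigger systems:

    |\psi\rangle = \sum_i c_i |x_i\rangle
        \to \sum_i c_i |x_i\rangle |S_i\rangle               % photon entangles with the screen
        \to \sum_i c_i |x_i\rangle |S_i\rangle |A_i\rangle   % ...and then with the apparatus/observer

Here |S_i> and |A_i> are the screen and apparatus states correlated with the photon landing at x_i. Once those environment states are effectively orthogonal, the branches can no longer interfere with each other, and each |A_i> records exactly one outcome, which is the many-worlds reading of why we only ever see one result.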

Comment Re:Unsurprising (Score 3, Interesting) 110

The thing is, our observations agree with quantum mechanics to as many decimal places as we can measure, and we can measure to a lot of decimal places now. So there's a lot of evidence that the wave function describes the behavior of reality. And since reality really does seem to behave this way, we're left to ask interesting philosophical questions, like whether there really is no free will, or whether reality is non-local. These are fascinating and important questions.

Comment Re:AI coding (Score 4, Insightful) 54

It's not even doing that. It's more like... if you studied hundreds of thousands of Chinese books and meticulously worked out the probabilities of certain characters following other characters, and used that knowledge to repeatedly choose the next character in an ongoing "conversation" with someone who knew Chinese. That person might actually think you knew Chinese, but you really don't.
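
As a toy sketch of the mechanism (hypothetical code; real LLMs use neural networks over tokens at vastly larger scale, but the "pick the next symbol by learned probability" idea is the same, and the corpus filename is a placeholder):

    import random
    from collections import defaultdict, Counter

    def train_bigrams(text):
        """Count how often each character follows each other character."""
        counts = defaultdict(Counter)
        for prev, nxt in zip(text, text[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts, start, length=50):
        """Repeatedly choose the next character, weighted by those counts."""
        out = [start]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break
            chars, weights = zip(*followers.items())
            out.append(random.choices(chars, weights=weights)[0])
        return "".join(out)

    # "Study" the books, then hold a "conversation" without understanding a word:
    model = train_bigrams(open("chinese_books.txt", encoding="utf-8").read())
    print(generate(model, start="你"))

The output can look statistically plausible to a reader, but nothing in the program knows what any of it means.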

Comment Re:AI can't do much for work yet (Score 1) 65

I've been trying to use AI for coding, and I've been following a bunch of people who blog about AI coding or make YouTube videos about it. The consensus is that it doesn't actually help you very much, and in some cases it slows you down. Most of what we're hearing about it in the news is hype, and the media is complicit in it. Think about it... if you owned a newspaper, wouldn't you like your journalists to think they were on the verge of being replaced by a toaster? It makes salary negotiations so much simpler.

Comment Re:Project Managers (Score 2) 48

Being a middle manager is an interesting exercise in strategy. You spend most of your time trying to position yourself close to projects that might succeed... close enough to take some credit when they do. But you also need to keep yourself far enough removed from each project that you can claim you had almost nothing to do with it if it crashes and burns. It's a lot of work, honestly.

Comment Re:AI can't do much for work yet (Score 1) 65

Except that people don't review the AI-generated text they're sending out. Lawyers are getting in trouble for submitting hallucinated precedents to judges. The US health department got busted for releasing a report that cited papers that don't exist. Journalists have published AI-generated "top 10 books" articles that include books the cited authors never wrote, or that don't exist at all.

My wife is a psychologist. The industry is switching to a technology that takes recorded audio of a session, runs speech-to-text on it, uses an LLM to summarize the session into notes, and then deletes the recording for patient confidentiality reasons. The psychologist is responsible for reviewing the summary for accuracy, but we all know many won't do it, out of laziness or naivety. That means session summaries are going to be saved with inaccurate information. LLMs are thus rewriting history. This is very dangerous, and it's going to take years before regulators react to all this nonsense.
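
The shape of that pipeline is what worries me. A sketch (made-up function names, not any vendor's actual product) of why the deletion step is the dangerous part:

    import os

    def transcribe(audio_path):
        # stand-in for a real speech-to-text step
        return "...verbatim transcript of the session..."

    def summarize(transcript):
        # stand-in for the LLM call; this is the step that can hallucinate
        return "...LLM-written session notes..."

    def process_session(audio_path):
        notes = summarize(transcribe(audio_path))
        os.remove(audio_path)  # recording destroyed for confidentiality
        # From this point on, the (possibly wrong) notes are the only record
        # of the session; there is no source left to check them against.
        return notes

If nobody reviews the notes before the audio is gone, there is no way to correct them afterward.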

Comment Re: He's right.. (Score 1) 154

People like Trump do not have a concept of reality. It's either what they want, or it's not.

While there is a theory that there is "one God" and therefore "one truth" - because it's "what God sees/saw" - they do not believe that. They may claim to, but they also do not have a concept of "understanding" - again, there is only what they want, or what they don't: their brains can't get past this barrier.

They are certainly not Christians or Jews in the normal, everyday meanings of those words.
They are, in fact, idiots.

Comment Re:AI can't do much for work yet (Score 3, Informative) 65

If I search Google for "job description for a sql developer -ai", the top result is this. That's information curated by an actual human being who knows what the job entails. And I got there really fast. Why would I want to take the extra time to get an LLM to generate the content for me when I know it's likely to hallucinate wrong information? Seriously, LLMs are for people who are too stupid to use a search engine effectively, which I am sorry to learn is a lot of people.
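
For anyone who wants to reuse that trick programmatically, here's the kind of thing I mean (just a sketch; the "-ai" exclusion term appended to the query is the only interesting part, the rest is ordinary URL building):

    from urllib.parse import urlencode

    def google_url(query):
        """Build a Google search URL with "-ai" appended to suppress AI summaries."""
        return "https://www.google.com/search?" + urlencode({"q": query + " -ai"})

    print(google_url("job description for a sql developer"))
    # https://www.google.com/search?q=job+description+for+a+sql+developer+-ai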

Comment Assumption (Score 5, Informative) 65

This summary seems to assume that AI adoption is a good thing. There's ample evidence that claims of productivity increases are overblown and mostly hype. And the energy consumption of AI systems is really, really high. Maybe lower adoption rates are a good thing? I learned recently that you can put "-ai" at the end of your Google query and it stops Google from including AI summaries in your search results. The bonus? It's much faster.
