
Comment Re:I swear (Score 2) 27

All captive markets should crumble. They do nothing beneficial to the consumer.

In this case I think the Play store's "captiveness" is beneficial to the consumer in one important way: Google does a much better job of policing the Play store for malware than most third party app stores do. The extra hoops that users have to jump through to use third party stores do keep most users "captive", but they also keep them fairly safe. The fact that users can easily turn on the ability to sideload other apps or app stores, though, means that they're not really captive. I think this is the right level of friction, though obviously the courts disagreed.

Unless Google can find a way to effectively police malware in the third party app stores (which will be hard) that it's now going to be required to distribute through Google Play, I predict that this will be pretty bad for Android users. Google could try to put warnings on third party app stores and leave it up to the user to decide, but the courts may not allow that, and it's not really a good solution anyway because when given a choice between security and something they want right now, nearly all users ignore security. I think there needs to be a little more friction than clicking through a warning.

This court ruling is really good for Android malware authors and somewhat good for Epic, but I think it's a net negative for Android users. I hope I'm wrong!

(Disclosure: I work for Google, on Android Platform Security, but not on the anti-malware team. I do below-the-OS security stuff.)

Comment Re:Unsurprising (Score 1) 91

Wave function collapse implies that the wave collapses to an actual state at the point where the photon hits the screen after going through the slits. But what if it didn't collapse? What if the wave just starts interfering with the wave of the screen itself, combining into a single, far more complex wave? And then that wave, which includes the wave function of the photon being absorbed and re-emitted, continues propagating until it interacts with the wave of our detection apparatus (or our eyes)? What I'm describing is generally called quantum decoherence. Can't this explain the effect of an observer without any magic? The only downside is that the wave function now includes the results of what would happen in all the different possible locations where the photon interacted with the screen, which is just the many-worlds interpretation; admittedly very wasteful, but also a valid explanation. (Not my idea... I heard this from Sean Carroll.) The question then becomes: why do we seem to experience only one projection of this wave function (the projection of our universe)?

Comment Re:Unsurprising (Score 3, Interesting) 91

The thing is, our observations agree with quantum mechanics to as many decimal places as we can measure, and we can measure to a lot of decimal places now. So there's a lot of evidence that the wave function describes the behavior of reality. And since reality really seems to behave this way, we're left to ask interesting philosophical questions, like whether there really is no free will, or if reality is non-local. These are fascinating and important questions.

Comment Re:If... (Score 1) 43

You assume I have not been using "AI". I have. It sucks. I've done all the things you suggest.

Good for you, I guess.

YMMV, especially if you suck at your programming job.

Well... I'm a Staff SWE at Google with >35 years of professional software development experience, and my code is running in the core OS of 3B devices. By the standards of most people, I'm a skilled, experienced, and highly productive programmer. Maybe your standards are higher.

Comment Re:If... (Score 1) 43

Oh, one more suggestion (because I just did it in the other window): Do ask the LLM to make code modifications for you. Suppose you're changing a method signature in a somewhat-complex way such that you can't just search & replace, or let the IDE refactoring tool do it. Tell the LLM to find and fix all calls. Often you can be that vague, too: "Find and fix all calls to my_func()". Sometimes you have to specify more precisely... but always start by telling the LLM to do it the same way you'd tell a human junior engineer, rather than working harder to spell it out precisely.

Oh, one more yet :-): run "git add ." before every command to the LLM so you can "git diff" to see exactly what it changed. This is useful even if the AI integration into your IDE highlights the changes.

Comment Re:If... (Score 1) 43

Nice try. It's been shown that using "AI" can actually slow down productive programmers, because they have to do more work to get the "AI" to produce a usable result. I find "AI" rather exhausting, honestly. Overall it's a net negative for a lot of programmers. It may not seem that way, but it is that way.

It definitely makes me faster if I use it correctly. If it slows you down, don't use it... but you may find yourself falling behind your peers who have figured out how to be productive with it.

Some things I find helpful:

Don't ask it to write your core code. Trying to explain in English what you want some complex function to do is often harder than just writing the code yourself. Definitely don't re-prompt more than a couple of times if the output isn't right -- write or fix it yourself. The goal is to get the work done, not to find the magic incantation that makes the LLM do it correctly by itself.
Do ask the LLM to write boilerplate for you, and to implement methods that have a clear contract and simple interface. It's usually very good at this.
Do ask it to write unit tests for you (this alone can be an incredible time-saver).
Don't bother reading the LLM's code right away (except a quick skim to see if it looks vaguely correct). Instead, tell the LLM to compile it and run the tests, and to fix any compilation errors or test failures. Once the code builds and the tests pass, then it's worth your time to look at it.
Do ask it to debug problems. It will often surprise you how quickly and accurately it can find the root cause of a test failure. If it fails, you only lost a few seconds.
While the LLM is working, do other stuff. Email, chats, reading documentation, whatever.
Do ask it to explain code to you, but don't just believe it. I find that asking it to explain a complex pile of code and then checking it is a great way to understand a codebase quickly. It's also a good way to understand specifications if what you're working on is an implementation of a formal specification document.

I think LLM usage is probably more effective if you're working in a statically-typed language. The LLM needs those guardrails even more than human programmers do.

Comment Re:AI coding (Score 4, Insightful) 47

It's not even doing that. It's more like... if you studied hundreds of thousands of Chinese books and meticulously worked out the probabilities of certain characters following other characters, and used that knowledge to repeatedly choose the next character in an ongoing "conversation" with someone who knew Chinese. That person might actually think you knew Chinese, but you really don't.

Comment Re:Analogies (Score 1) 49

To clarify: The handrail is already there

It's really not. Or if we want to continue the analogy, the handrail is there, but it has gaps through which you can still fall. It's your responsibility to know where the gaps are and to grasp the handrail in the right places -- and many of the gaps are subtle.

I've been writing C++ for 35 years, and have been a huge fan of Modern C++ since its introduction. The combination of RAII and move semantics is incredibly powerful and represents an enormous advance in efficient memory safety. But Rust takes all of the safety-related ideas that C++ has and significantly raises the bar. Not only does Rust actively discourage you from using unsafe practices (unlike C++, which requires you to actively choose the safer ones), its borrow checker also goes far beyond what any C++ compiler can do to diagnose subtle mistakes that could lead to memory errors, at zero runtime cost (though compilation is slower).

Here's an example:

#include <iostream>
#include <vector>
int main() {
    std::vector<int> vec = {1, 2, 3, 4, 5};
    auto it = vec.begin();
    vec.push_back(6); // may reallocate, invalidating 'it'
    std::cout << *it << std::endl; // undefined behavior if 'it' was invalidated
}

Obviously, you should not use vector iterators after you've made a change to the vector that might cause a reallocation (or, equivalently, grab a reference or pointer to an element of a vector and then use it after doing something that might cause a reallocation). But the point is that you have to know and remember this rule, along with a lot of other rules about what you should and shouldn't do. Further, in more complex code it can get really hard to tell if you're following the rules or not (which is a hint that you should simplify the code, but that's a separate issue). Complicating things further, the above code might work "correctly" most of the time, because it only fails when push_back reallocates the vector. Also, when it fails, it may still appear to work most of the time, because the reallocation might not have changed the values in the referenced memory, and the iterator might still be able to find the "right" value even though it's getting it from deallocated heap storage. This makes for intermittent heisenbugs that can be very hard to find.
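
For illustration (my own sketch, not part of the original example), one way to stay on the right side of that rule is to reserve capacity up front so the later push_back can't reallocate; re-acquiring the iterator after any potentially reallocating call works too:

#include <iostream>
#include <vector>
int main() {
    std::vector<int> vec = {1, 2, 3, 4, 5};
    vec.reserve(vec.size() + 1); // guarantees the next push_back won't reallocate
    auto it = vec.begin();
    vec.push_back(6); // capacity is sufficient, so 'it' stays valid
    std::cout << *it << std::endl; // well-defined: prints 1
}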

In Rust, you can't do this sort of thing. The compiler won't let you. In the equivalent Rust code, getting the iterator takes an immutable reference to the vector, and then trying to call push (Rust's push_back) requires also taking a mutable reference, but the borrow checker won't let you take a mutable reference while another reference is live. Note that the borrow checker's conservatism means that it also often calls out code that is actually fine; one common example is taking mutable references to different parts of a struct. So you occasionally have to do a little extra work (which gets optimized away in every case I've examined, so it rarely has a runtime cost) to work around the borrow checker, which can be annoying. But unless you use unsafe, you know you can't make this sort of error.
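
A minimal sketch of what the equivalent Rust looks like (my own example, using a plain Vec<i32>); the compiler rejects it outright:

fn main() {
    let mut vec = vec![1, 2, 3, 4, 5];
    let mut it = vec.iter(); // immutable borrow of `vec` starts here
    vec.push(6); // error[E0502]: cannot borrow `vec` as mutable...
    println!("{:?}", it.next()); // ...because the immutable borrow is still used here
}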

Rust also takes on concurrency errors, providing deep compiler and library support for safe concurrency, which is something the C++ type system doesn't address at all. Rust doesn't fully solve safe concurrency, unfortunately, because deadlocks and livelocks are still possible, but it makes unsafe concurrent memory accesses just as impossible as it makes other memory errors.
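
As a small illustration (my own sketch, not a claim about any particular codebase): sharing a counter across threads only compiles once you wrap it in Arc<Mutex<_>>, because thread::spawn requires its closure to be 'static and Send, and handing each thread a bare mutable reference to a local counter instead simply won't compile:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1; // a data race is impossible: the lock is mandatory
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // prints 4
}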

Further, in most areas (not all), Rust has better ergonomics than C++, which makes it more productive and -- IMO -- more fun to use. I still like C++, and when I use it I use the Modern C++ style, structuring my code very similarly to how I'd write it in Rust. But I still use valgrind on the resulting C++ binaries to check for memory bugs, because they are still possible, mostly through subtle reference aliasing or integer over/underflow, and I'm never as confident of the correctness of the result as I am with Rust.

Comment Re:AI can't do much for work yet (Score 1) 64

I've been trying to use AI for coding, and I've been following a bunch of people who blog about AI coding or make youtube videos about it. The consensus is that it doesn't actually help you very much, and in some cases it actually slows you down. Most of what we're hearing about it in the news is hype, and the media is complicit in it. Think about it... if you owned a newspaper wouldn't you like your journalists to think they were on the verge of being replaced by a toaster? It makes salary negotiations so much simpler.

Comment Re:Project Managers (Score 2) 48

Being a middle manager is an interesting exercise in strategy. You spend most of your time trying to position yourself close to projects that might potentially succeed... close enough to take some credit when they're successful. But you also need to keep yourself far enough removed from each project that you can claim you had almost nothing to do with it if it crashes and burns. It's a lot of work, honestly.

Comment Re:If... (Score 1) 43

It's not "saving" you anything. You are still required to work the standard 8 hours per day like most people are required to do.

If you're a contractor, you could use the savings to work less. If you're an employee, your employer would reap the rewards of your higher productivity. In either case, it's rather odd to argue that you should try to be less productive. If that's your position, you should avoid using high level languages, IDEs, debuggers, etc. Why not write your code in machine language (not assembler -- that would make you more efficient) on paper, do all of your testing with hand simulation, then use toggle switches to enter it, one word at a time?

With "AI" you're also required to check everything the LLM spits out for errors, which is way too often, and is also a lot of work if you don't want to create more problems for yourself later.

Without AI you're also required to check everything you write for errors, and if you work in a good shop you're also regularly checking your co-workers' output for errors (i.e. doing code reviews). That's just a normal part of the job. With an LLM you write less and check more... but the net is higher total output of good code. Yes, you can let it produce crappy code, but you could also just write crappy code.

generally your cognitive load with them isn't any less, and in fact it can be more of a cognitive load because you need to read and understand what the "AI" spit out, and that isn't free, it has a real cost in developer time.

That is not my experience at all. Reviewing AI-generated code is basically the same process as reviewing another engineer's code, or reading code that you're trying to understand because you need to modify it. If you can't read code fluently, you're not a good developer; it's an essential skill.

And even worse if you don't understand the language you are having it write.

That would certainly be dumb. It would be more productive than trying to write in the language you don't understand yourself, though. Yes, you need to know the tools you're using, with or without AI, if you want the output to be anything other than crap. Though with AI, you could probably get some mostly-working crap. Keep in mind we're talking about the situation in July 2025. In July 2026 the LLMs will be better. Possibly a lot better.

Comment Re:They should do this over the San Joaquin (Score 1) 80

I think we're talking about different altitudes. I look at the weather and see clouds move from the West to the East. If those clouds are salty, that's bad for anything to the East.

Well, the spray only goes up a few hundred feet. I suppose it's possible that the minerals get lifted higher in some cases, but I don't think they would be lifted thousands of feet, up to where the wind direction shifts. Looking at aviation wind maps, it looks like you have to get up to about 3000 feet above sea level before the wind over So Cal shifts.

Still, it's a valid point that research is needed to see how far the salt might be carried. Maybe they need to be 100 nm offshore, or 200 -- keeping in mind that the low-lying winds are going to push it away from land for a while before it gets high enough to be carried toward land. Also, while ongoing deposition of salt would be bad, the harm done by a small amount wouldn't be permanent, so this is the kind of thing that could be observed, measured and reacted to with changes in approach.

Comment Re:AI can't do much for work yet (Score 1) 64

Except that people don't review the AI-generated text they're sending out. Lawyers are getting in trouble for submitting hallucinated precedents to judges. The US health department got busted for releasing a report that cited papers that don't exist. Journalists have published AI-generated "top 10 books" articles containing books that the cited authors never wrote, or that don't exist at all. My wife is a psychologist. The industry is switching to a technology that takes recorded audio of a session, runs speech-to-text on it, uses an LLM to summarize the session into notes, and then deletes the recording for patient confidentiality reasons. The psychologist is responsible for reviewing the summary for accuracy, but we all know there are many who won't do it, out of laziness or naivety. That means session summaries are going to be saved with inaccurate information. LLMs are thus rewriting history. This is very dangerous, and it's going to take several years before regulators react to all this nonsense.

Comment Re:AI can't do much for work yet (Score 3, Informative) 64

If I search in google for "job description for a sql developer -ai" then the top result is this. That's information that's curated by an actual human being who knows what the job entails. And I got there really fast. Why would I want to take the extra time to get an LLM to generate the content for me when I know that it's likely to hallucinate the wrong information? Seriously, LLMs are for people who are too stupid to use a search engine effectively, which I am sorry to learn is a lot of people.
