
Comment Re:The Photophone (Score 1) 23

I used to work at an outfit that had a big conference room, with big beautiful windows, that faced out across an airfield into a wooded area (good hiding places). In order to mitigate such optical surveillance, the windows were equipped with small piezoelectric speakers, driven with (I'm guessing) white noise.

If I'm understanding the article correctly, the conference room window mitigation wouldn't work against this, because the technique doesn't rely on vibrations of the windows. Instead, you'd just need a piece of paper inside the room, lit by ordinary lamps. As long as the light reflecting off the paper can pass through the windows unmodified (i.e., the windows provide clear visibility), the white-noise vibrations of the windows would have no effect.

On the other hand, lightweight curtains that blocked the view through the window would stop this technique, but probably wouldn't significantly reduce what was detectable from a laser bounced off the windows (assuming no white noise).

Comment Re:I swear (Score 1) 41

You didn't read correctly.

I think we're talking past one another. I'll try to be clearer.

I said, that if you think Play is keeping you safe, nobody prevents you from only using *Play*.

Sure, but that's not the point. The point is that Android does prevent most users from using anything other than Play. Not by actually blocking them from using other app stores but by simply not offering the option. And that's a good thing, because most users have no idea how to decide whether or not something is safe.

I think perhaps the confusion here is because you and I are looking at this from different directions. You seem to be looking at it from the perspective of what you or I might want to choose. I'm looking at it from the perspective of an engineer whose job is to keep 3B users safe, most of whom have no idea how to make judgments about what is safe and what isn't. Keeping them within the fenced garden (it's a low fence, but still a fence) allows them to do what they want without taking much risk. The fact that the fence is easily stepped over preserves the freedom of more clueful and/or adventurous users to take greater risks. I think this has been a good balance.

And while you are usually (not sure for all manufacturers) not prevented from using other stores

I'm pretty sure that the ability to allow unknown sources is required by the Android Compatibility Definition Document, and that a manufacturer who disables it is not allowed to call their device Android, or to pre-install the Google apps or Play.

Google does a few things to make it uncomfortable. Trusting the store is a one-time thing, but you still have to acknowledge every app install twice and updates require confirming you really want to update the app, while Play can update apps in the background, optionally without even notifying you.

That will last until Epic decides that they want their store to be able to install and update as seamlessly as Play can, and gets a court to order that. Still, your point is valid: there is some friction for other stores. Is it enough? I guess we'll find out. Will it be allowed to remain? I guess we'll find that out, too.

Comment Re:whats the harm (Score 1) 19

How much could it possibly be costing them to keep this service alive... they could have it in a holding pattern for another 15 years and then kill it when it's really no longer being used and it would cost them pennies.

goo.gl links are a significant abuse vector, so Google has to maintain a non-trivial team to monitor and mitigate the abuse. I'll bet there are several full-time employees working on that, and that the total annual cost is seven figures.

Even if it weren't an abuse vector, the nature of Google's internal development processes means that no service can be left completely unstaffed. The environment and libraries are constantly evolving, and all services require constant attention to prevent bit rot. A fraction of one engineer's time would probably be enough for something like goo.gl if it weren't abused, but that's still six figures per year, not pennies.

Comment Re:I swear (Score 1) 41

Nobody prevents you from only installing stuff from Play.

This isn't true for the vast majority of Android users. To a first approximation, all Android users are using devices that have "unknown sources" disabled, so they can only get stuff from Play. Of course, it's trivial to find out how to enable unknown sources and install stuff from other places and I'd expect that nearly all slashdotters who use Android have at least experimented with that, even if they don't use f-droid or whatever on a regular basis. But slashdotters are not remotely a good representative sample of Android users.

I mean for other software you probably also have a selection of sites you trust and avoid others.

If you're talking about desktop/laptop software, sure... but most Android users don't use a desktop or a laptop and are accustomed to expecting that anything they can install is safe. And even among those who do use a non-mobile device, people expect mobile devices to be safer, because they are. This court ruling may change that, to some degree. The result will probably be good for Apple, since Android insecurity will drive people to the safety of Apple's walled garden.

Comment Re:I swear (Score 1) 41

I mean, the ultimate way to ensure your protection would be to place you in a padded room with a straitjacket and never let you out. /s Stop trying to enslave others because you're too scared to make your own decisions. That's literally the most charitable benefit of the doubt I can give you on this one.

Delegating security decisions to users is the best way to ensure that users have no security. I'm all for enabling users who understand what they're doing, and who are willing to accept the consequences, to make their own choices. But the vast, vast majority don't understand security or the consequences of their security decisions, especially not in the face of clever attackers who are quite good at making malware appear completely innocuous. Even a knowledgeable security professional can't reliably distinguish malware from a legitimate app without deep and very specific expertise, and not always even then. And you think your grandma can?

There are three billion Android devices in the world; it's used by approximately one-third of all living people, and they put a lot of very important information about themselves on their devices. Android platform security decisions have enormous consequences. Android has gradually gotten more opinionated about user security because we've found time and again that if you ask users, they don't understand the implications and they make bad choices.

Many people think that unlockable bootloaders and the developer options are bad choices, and suggest that we should push the Android ecosystem into the Apple model of closed, locked-down hardware and a closed app ecosystem. I disagree, and I've worked hard to make sure that the ability of people to run the software they want on the hardware they own is not restricted. For example, I have regular meetings with the leaders of various Android ROMs, including Lineage, Graphene, Calyx, etc., to help them navigate the security hardware changes that we make. This isn't something I do because my management tells me to; it's something I do on my own because I think it's important.

User freedom is deeply important to me... and so is user security, but these things are in tension. To a first approximation, increasing one decreases the other. IMO, Android has struck the right balance. By default, devices are locked down and software comes from a controlled source, but users who know what they're doing have the right and ability to remove the restrictions (mostly; low-level firmware is locked down -- I would like to see Android gain a "dev screw" capability like ChromeOS to completely open it up in a safe way). This court ruling seems likely to upset that balance in a direction that endangers users who don't know what they're doing -- and it doesn't provide any additional capabilities to users who do. It's all risk, no benefit.

Even more so if your disclosure is real.......

Try a web search for my username and "Android". Or look for "swillden" in the AOSP codebase and commit logs. Seriously, why would you imply that I'm lying when it's extremely easy to verify? And if you think that I made up a /. username to match some rando Android engineer, look at my /. UID. I've been on /. since before Android even existed.

Comment Re:I swear (Score 1) 41

Google does a much better job of policing the Play store for malware than most third party app stores do

A logical equivalent to your sentence is that some third party app store do a better job than Google. That alone is an argument to allow the third party stores. People are not obliged to use them, but at least they have a choice to have better than Google.

Very, very unlikely -- the resources required to do good malware detection at any sort of scale are enormous -- but also irrelevant. The issue isn't what the best app store does, it's what the worst does. Users who would choose an app store because it does extremely good vetting are users who would be careful what they install regardless of how careful their store is. It's the users who aren't cautious that will be harmed by Google being required to give them access to many app stores.

Comment Re:I swear (Score 3, Interesting) 41

All captive markets should crumble. They do nothing beneficial to the consumer.

In this case I think the Play store's "captiveness" is beneficial to the consumer in one important way: Google does a much better job of policing the Play store for malware than most third party app stores do. The extra hoops that users have to jump through to use third party stores do keep most users "captive", but they also keep them fairly safe. The fact that users can easily turn on the ability to sideload other apps or app stores, though, means that they're not really captive. I think this is the right level of friction, though obviously the courts disagreed.

Unless Google can find a way to effectively police malware in the third party app stores (which will be hard) that they're now going to be required to distribute through Google Play, I predict that this will be pretty bad for Android users. Play could try to put warnings on third party app stores and leave it up to the user to decide, but the courts may not allow that, and it's not really a good solution anyway: when given a choice between security and something they want right now, nearly all users ignore security. I think there needs to be a little more friction than clicking through a warning.

This court ruling is really good for Android malware authors and somewhat good for Epic, but I think it's a net negative for Android users. I hope I'm wrong!

(Disclosure: I work for Google, on Android Platform Security, but not on the anti-malware team. I do below-the-OS security stuff.)

Comment Re:If... (Score 1) 43

You assume I have not been using "AI". I have. It sucks. I've done all the things you suggest.

Good for you, I guess.

YMMV, especially if you suck at your programming job.

Well... I'm a Staff SWE at Google with >35 years of professional software development experience, and my code is running in the core OS of 3B devices. By the standards of most people, I'm a skilled, experienced and highly productive programmer. Maybe your standards are higher.

Comment Re:If... (Score 1) 43

Oh, one more suggestion (because I just did it in the other window): Do ask the LLM to make code modifications for you. Suppose you're changing a method signature in a somewhat-complex way, such that you can't just search & replace or let the IDE refactoring tool do it. Tell the LLM to find and fix all calls. Often you can be that vague, too: "Find and fix all calls to my_func()". Sometimes you have to specify more precisely... but always start by telling the LLM to do it the same way you'd tell a human junior engineer, rather than working harder to spell it out precisely.

Oh, one more yet :-): "git add ." before every command to the LLM so you can "git diff" to see exactly what it changed. This is useful even if the AI integration into your IDE highlights the changes.

Comment Re:If... (Score 1) 43

Nice try. It's been shown that using "AI" can actually slow down productive programmers, because they have to do more work to get the "AI" to produce a usable result. I find "AI" rather exhausting, honestly. Overall it's a net negative for a lot of programmers. It may not seem that way, but it is that way.

It definitely makes me faster if I use it correctly. If it slows you down, don't use it... but you may find yourself falling behind your peers who have figured out how to be productive with it.

Some things I find helpful:

Don't ask it to write your core code. Trying to explain in English what you want some complex function to do is often harder than just writing the code yourself. Definitely don't re-prompt more than a couple of times if the output isn't right -- write or fix it yourself. The goal is to get the work done, not to find the magic incantation that makes the LLM do it correctly by itself.
Do ask the LLM to write boilerplate for you, and to implement methods that have a clear contract and simple interface. It's usually very good at this.
Do ask it to write unit tests for you (this alone can be an incredible time-saver).
Don't bother reading the LLM's code right away (except a quick skim to see if it looks vaguely correct). Instead, tell the LLM to compile it and run the tests, and to fix any compilation errors or test failures. Once the code builds and the tests pass, then it's worth your time to look at it.
Do ask it to debug problems. It will often surprise you how quickly and accurately it can find the root cause of a test failure. If it fails, you only lost a few seconds.
While the LLM is working, do other stuff. Email, chats, reading documentation, whatever.
Do ask it to explain code to you, but don't just believe it. I find that asking it to explain a complex pile of code and then checking it is a great way to understand a codebase quickly. It's also a good way to understand specifications if what you're working on is an implementation of a formal specification document.

I think LLM usage is probably more effective if you're working in a statically-typed language. The LLM needs those guardrails even more than human programmers do.

Comment Re:Analogies (Score 3, Informative) 49

To clarify: The handrail is already there

It's really not. Or if we want to continue the analogy, the handrail is there, but it has gaps through which you can still fall. It's your responsibility to know where the gaps are and to grasp the handrail in the right places -- and many of the gaps are subtle.

I've been writing C++ for 35 years, and have been a huge fan of Modern C++ since its introduction. The combination of RAII and move semantics is incredibly powerful and represents an enormous advance in efficient memory safety. But Rust takes all of the safety-related ideas that C++ has and significantly raises the bar. Not only does Rust actively discourage unsafe practices (unlike C++, which requires you to actively choose the safer ones), its borrow checker goes far beyond what any C++ compiler can do to diagnose subtle mistakes that could lead to memory errors, at zero runtime cost (though compilation is slower).

Here's an example:

#include <iostream>
#include <vector>
int main() {
    std::vector<int> vec = {1, 2, 3, 4, 5};
    auto it = vec.begin(); // iterator into vec's current buffer
    vec.push_back(6); // may reallocate, invalidating 'it'
    std::cout << *it << std::endl; // undefined behavior if reallocation occurred
}

Obviously, you should not use vector iterators after you've made a change to the vector that might cause a reallocation (or, equivalently, grab a reference or pointer to an element of a vector and then use it after doing something that might cause a reallocation). But the point is that you have to know and remember this rule, along with a lot of other rules about what you should and shouldn't do. Further, in more complex code it can get really hard to tell whether you're following the rules (which is a hint that you should simplify the code, but that's a separate issue). Further complicating the issue is that the above code might work "correctly" most of the time, because it only fails when push_back reallocates the vector. And even when it fails, it may still appear to work, because the reallocation might not have changed the values in the referenced memory, so the iterator might still find the "right" value even though it's reading deallocated heap storage. This makes for intermittent heisenbugs that can be very hard to find.

In Rust, you can't do this sort of thing; the compiler won't let you. In the equivalent Rust code, getting the iterator takes an immutable reference to the vector, and then calling push (Rust's equivalent of push_back) would require also taking a mutable reference, but the borrow checker won't let you take a mutable reference while another reference is live. Note that the borrow checker's conservatism means that it also often calls out code that is actually fine; one common example is taking mutable references to different parts of a struct. So you occasionally have to do a little extra work (which gets optimized away in every case I've examined, so it doesn't often have a runtime cost) to work around the borrow checker, which can be annoying. But unless you use unsafe, you know you can't make this sort of error.
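Here's a minimal sketch of the Rust equivalent (my own illustration; I'm using first() instead of a full iterator to keep it short):

fn main() {
    let mut vec = vec![1, 2, 3, 4, 5];
    let first = vec.first().unwrap(); // immutable borrow of vec begins here
    vec.push(6); // error[E0502]: cannot borrow `vec` as mutable because it is also borrowed as immutable
    println!("{}", first); // the immutable borrow is still live here
}

rustc rejects this at compile time; the C++ version above compiles silently and fails (sometimes) at runtime.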

In addition, Rust takes on concurrency errors, providing deep compiler and library support for safe concurrency, which is something C++ doesn't address at all. Rust doesn't fully solve safe concurrency, unfortunately, because deadlocks and livelocks are still possible, but it makes unsafe concurrent memory accesses just as impossible as it makes other memory errors.
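To give a flavor of what that looks like (again, my own minimal sketch, not anything from TFA): shared mutable state has to be wrapped in something thread-safe, like Arc<Mutex<...>>, before the compiler will let it cross a thread boundary.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // A bare &mut i32 can't be captured by spawned threads; Arc<Mutex<_>> can.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1; // lock() is the only path to the data
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // always prints 4; no data race possible
}

Remove the Mutex and try to mutate the integer from those threads, and the program simply won't compile; the data race becomes a compile-time error instead of a runtime heisenbug.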

Further, in most areas (not all), Rust has better ergonomics than C++, which makes it more productive and -- IMO -- more fun to use. I still like C++, and when I use it I use the Modern C++ style, structuring my code very similarly to how I'd write it in Rust. But I still use valgrind on the resulting C++ binaries to check for memory bugs, because they are still possible, mostly through subtle reference aliasing or integer over/underflow, and I'm never as confident of the correctness of the result as I am with Rust.

Comment Re:If... (Score 1) 43

It's not "saving" you anything. You are still required to work the standard 8 hours per day like most people are required to do.

If you're a contractor, you could use the savings to work less. If you're an employee, your employer would reap the rewards of your higher productivity. In either case, it's rather odd to argue that you should try to be less productive. If that's your position, you should avoid using high level languages, IDEs, debuggers, etc. Why not write your code in machine language (not assembler -- that would make you more efficient) on paper, do all of your testing with hand simulation, then use toggle switches to enter it, one word at a time?

With "AI" you're also required to check everything the LLM spits out for errors, which is way too often, and is also a lot of work if you don't want to create more problems for yourself later.

Without AI you're also required to check everything you write for errors, and if you work in a good shop you're also regularly checking your co-workers' output for errors (i.e. doing code reviews). That's just a normal part of the job. With an LLM you write less and check more... but the net is higher total output of good code. Yes, you can let it produce crappy code, but you could also just write crappy code.

generally your cognitive load with them isn't any less, and in fact it can be more of a cognitive load because you need to read and understand what the "AI" spit out, and that isn't free, it has a real cost in developer time.

That is not my experience at all. Reviewing AI-generated code is basically the same process as reviewing another engineer's code, or reading code that you're trying to understand because you need to modify it. If you can't read code fluently, you're not a good developer; it's an essential skill.

And even worse if you don't understand the language you are having it write.

That would certainly be dumb. It would be more productive than trying to write in the language you don't understand yourself, though. Yes, you need to know the tools you're using, with or without AI, if you want the output to be anything other than crap. Though with AI, you could probably get some mostly-working crap. Keep in mind we're talking about the situation in July 2025. In July 2026 the LLMs will be better. Possibly a lot better.

Comment Re:They should do this over the San Joaquin (Score 1) 80

I think we're talking about different altitudes. I look at the weather and see clouds move from the West to the East. If those clouds are salty, that's bad for anything to the East.

Well, the spray only goes up a few hundred feet. I suppose it's possible that the minerals get lifted higher in some cases, but I don't think they would be lifted thousands of feet, up to where the wind direction shifts. Looking at aviation wind maps, it looks like you have to get up to about 3000 feet above sea level before the wind over So Cal shifts.

Still, it's a valid point that research is needed to see how far the salt might be carried. Maybe they need to be 100 nautical miles offshore, or 200 -- keeping in mind that the low-lying winds are going to push it away from land for a while before it gets high enough to be carried toward land. Also, while ongoing deposition of salt would be bad, the harm done by a small amount wouldn't be permanent, so this is the kind of thing that could be observed, measured and reacted to with changes in approach.
