Comment Better Solution (Score 1) 28

Actually, it occurs to me that there is a technological solution to this problem. Simply have camera makers sign their output using some kind of secure hardware key, so the receiver can verify that the video really is what the camera on an X laptop (or whatever) saw. Of course, you still need to guard against attacks that stick a screen in front of the camera, but that's doable if the camera has any focusing information or uses IR to reconstruct 3D information.

I'm sure there will be all sorts of clever ways to sign the image stream that are robust to a degree of re-encoding or other changes. The simplest option: video calls display the signed still image, and maybe YouTube videos display some uncompressed signed frame every so often, or players check that it matches. But I suspect you can do something cleverer than that.
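
A minimal sketch of the basic signing step, assuming the per-device private key actually lives in secure hardware (here it's just a software Ed25519 key from the Python cryptography library, and the frame, timestamp, and function names are made up for illustration):

```python
# Sketch of per-frame signing. On a real device the private key would live in
# a TPM/secure enclave and never be exportable; the public key would ship with
# a vendor certificate so receivers can check which device model signed it.
from hashlib import sha256
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # stand-in for the hardware-resident key
device_pub = device_key.public_key()

def sign_frame(frame_bytes: bytes, timestamp_ns: int) -> bytes:
    # Bind the signature to both the frame content and the capture time so a
    # signed frame can't be replayed with a different timestamp.
    digest = sha256(timestamp_ns.to_bytes(8, "big") + frame_bytes).digest()
    return device_key.sign(digest)

def verify_frame(frame_bytes: bytes, timestamp_ns: int, signature: bytes) -> bool:
    digest = sha256(timestamp_ns.to_bytes(8, "big") + frame_bytes).digest()
    try:
        device_pub.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

frame = bytes(640 * 480 * 3)                # fake raw RGB frame
sig = sign_frame(frame, 1_700_000_000_000_000_000)
print(verify_frame(frame, 1_700_000_000_000_000_000, sig))  # True
```

The hard part, as noted above, is that a raw-frame signature like this breaks as soon as the video is re-encoded; making the scheme robust to compression is where the cleverness would have to go.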

But algorithmic detection is unlikely to be the answer unless it's an interactive situation where the detector can issue queries.

Comment Re:Laser focused? (Score 1) 28

At worst it's like we are back in the 1900s before we had easy access to video and audio recordings. People managed pretty well then. I think it will be less disruptive than you suggest even if -- just like now -- older folks who aren't used to the new dangers are vulnerable to scams.

But I think we can solve this by just having cameras and audio recording devices sign their output using hardware keys.

Comment More Harm Than Good? (Score 4, Interesting) 28

The problem with any technology like this that can be run cheaply by the end user is that more advanced attackers can just take that software and train models to specifically trick it. Sure, maybe it catches the low-effort attacks, but at the cost of potentially helping the more advanced attacks seem more legitimate when they don't trigger the fake detection.
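
To make the worry concrete, here is a minimal sketch of the standard gradient-based evasion trick (FGSM), assuming the detector is a differentiable PyTorch model the attacker can run locally; `detector` and the function name are hypothetical:

```python
# Minimal FGSM-style evasion sketch against a locally runnable deepfake detector.
# `detector` is a hypothetical torch.nn.Module producing one logit per image:
# higher = more likely "fake".
import torch

def evade(detector: torch.nn.Module, fake_frame: torch.Tensor,
          epsilon: float = 2 / 255) -> torch.Tensor:
    """Nudge the fake frame just enough to lower the detector's 'fake' score."""
    x = fake_frame.clone().detach().requires_grad_(True)
    fake_logit = detector(x.unsqueeze(0)).squeeze()
    fake_logit.backward()                     # gradient of the "fake" score w.r.t. pixels
    # Step against that gradient, clipped so the result is still a valid image.
    x_adv = (x - epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

A more determined attacker would iterate this (PGD) or train the generator directly against the detector; the point is just that once the detector can be run locally, evading it is cheap.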

The real solution is the same one we used back before photography, audio and video were common and people could pretend to be anyone they wanted in a letter. People need to be skeptical and authenticate interactions in other ways -- be it via shared knowledge or cryptography.

---

Yes, if you only run the detection on a server and limit reporting -- for instance, only report a fake/non-fake determination after several minutes of video -- and don't share information about how the technology works, adversarial training might be difficult, but that has its own problems. If researchers can't put the reliability of the security to the test, there is every incentive to half-ass it, and eventually attackers will figure out the vulnerabilities, as with most security through obscurity.

Comment Clever Move (Score 1) 49

That's a clever move. China knows that the US is turning to Intel out of concern that China could either destroy TSMC (e.g. in an invasion) or has agents who can report on its vulnerabilities to them. Taking action to limit Intel sales in China is an effective way to handicap the US's attempt to protect its access to high-end lithography.

Sure, it's not like Intel doesn't have security issues or problems. But from a security POV they are no worse than AMD and probably no worse than companies like Apple (whose chips may seem more secure because information is more restricted and security researchers have fewer tools).

Comment Right Conclusion, Wrong Argument (Score 2) 119

I agree with the conclusion, but the argument is wrong. Remember, what Apple refused to do was create software that would allow it to work around the limit on password guessing so the FBI could brute-force the device password. The fact that they refused implies that they *could* create that kind of software. Presumably, nation states like China could -- at least with access to the appropriate Apple secret keys -- create the same kind of workaround. A system where Apple used a secret key on an air-gapped, sealed cryptographic module to create per-device law-enforcement decryption keys would be no less secure.
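
To be clear about what I mean (this is a hypothetical sketch, not Apple's actual design): a master secret that never leaves the air-gapped module derives a distinct wrapping key per device, which encrypts that device's decryption key. The names and parameters below are all made up for illustration:

```python
# Hypothetical per-device escrow sketch -- NOT Apple's actual system.
# MASTER_SECRET would exist only inside an air-gapped HSM.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

MASTER_SECRET = os.urandom(32)   # stand-in for the HSM-resident master key

def per_device_wrapping_key(device_id: bytes) -> bytes:
    # Derive a distinct wrapping key for each device from the master secret.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"escrow-v1:" + device_id).derive(MASTER_SECRET)

def escrow_device_key(device_id: bytes, device_key: bytes) -> bytes:
    # At manufacture: wrap (encrypt) the device key under its derived key.
    nonce = os.urandom(12)
    ct = AESGCM(per_device_wrapping_key(device_id)).encrypt(nonce, device_key, device_id)
    return nonce + ct

def unwrap_for_warrant(device_id: bytes, blob: bytes) -> bytes:
    # Inside the HSM, only after a legal request has been verified.
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(per_device_wrapping_key(device_id)).decrypt(nonce, ct, device_id)
```

Technically this is no harder to secure than the signing keys Apple already guards; whether it should exist at all is the legal question below.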

The real danger is that the second you create that legal precedent, Apple isn't going to be able to pick and choose which law enforcement requests it complies with -- be it from some random judge who issues the order ex parte (say, for a device image taken without your knowledge) without you having the chance to contest it, or a request from judges in China. The danger here is mostly legal, not technical.

Indeed, the greater hacking risk is probably someone breaking into a local police department and changing the account ID requested in a warrant, then getting access to your iCloud backups that way, rather than hacking a well-designed system that allowed Apple to issue secondary per-device decryption keys to law enforcement.

Comment Fucking Finally (Score 1) 47

This is the feature that makes smart glasses worth using -- giving you seamless information on who you are talking to -- it's just that all the tech makers have been too cowardly to actually enable it.

And no bullshit about how this harms privacy. Your privacy is as much or perhaps more invaded by the tons of cameras in stores, ATMs, etc. recording and saving your image. This just makes that fact salient.

I agree it's important to make sure the devices alert when they are storing recordings, but facial recognition is useful and not particularly privacy invasive.

Comment Meaningless Platitudes (Score 2) 47

If you actually look at the pledge, the content is a bunch of meaningless platitudes. Specifically, it requires:

1. Transparency: in principle, AI systems must be explainable;
2. Inclusion: the needs of all human beings must be taken into consideration so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop;
3. Responsibility: those who design and deploy the use of AI must proceed with responsibility and transparency;
4. Impartiality: do not create or act according to bias, thus safeguarding fairness and human dignity;
5. Reliability: AI systems must be able to work reliably;
6. Security and privacy: AI systems must work securely and respect the privacy of users.

They might as well have pledged to "only do the AI things we think we should do" for all the content it has. If you think some information shouldn't be released, you don't call it non-transparency, you call it privacy. When you think a decision is appropriate, you don't call it bias, you call it responding to evidence, and you wouldn't describe it as failing to take someone's interests into account if you think you balanced those interests appropriately.

The only requirement that even has the possibility of real bite is #1, explainability, but saying "in principle" makes it trivial, since literally all computer programs are in principle explainable (here's the machine code and the processor architecture manual).

Comment Objecting to military use is just selfish (Score 1) 308

I respect the objections that a committed pacifist (or opponent of standing armies) might have to their company taking on military contracts -- even if I disagree. But anyone else is just being a selfish fucker. They are saying: yes, I agree that we need to maintain a military, so someone needs to sell them goods and services, but I want it to be someone else so I don't have to feel guilty.

Doing the right thing is often hard. Sometimes it means doing things that make you feel uncomfortable or icky because you think through the issue and realize it's the right thing to do. Avoiding doing anything that makes you feel icky or complicit doesn't make you the hero -- it makes you the person who refused to approve of interracial or gay marriages back in the day because you went with what made you feel uncomfortable rather than what made sense.

Comment Nowhere Near Kolmogorov Complexity (Score 1) 22

We are likely nowhere near the true Kolmogorov complexity. Note the restriction on running time/space. Kolmogorov complexity is defined without regard for running time, and in all likelihood the optimal description corresponds to an algorithm that is hugely super-exponential (with a large constant) in time and space.
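
To spell out the distinction (standard definitions, my notation):

```latex
% Plain Kolmogorov complexity of a string x with respect to a universal machine U:
K_U(x) = \min\{\, |p| : U(p) = x \,\}
% Resource-bounded variant: the program must also halt within t(|x|) steps:
K_U^{t}(x) = \min\{\, |p| : U(p) = x \text{ in at most } t(|x|) \text{ steps} \,\}
% Adding a resource bound can only exclude programs, so
K_U^{t}(x) \;\ge\; K_U(x).
```

A benchmark that caps decompression time or memory is measuring something like K^t, which can sit far above the true K.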

Comment Fundamentally Useless Without Threat Model (Score 1) 23

Also, data regulations are fundamentally useless as long as we can't agree on (or even really try to define) a threat model. Mostly, what the regulations do now is just limit what kind of creepy ads you might get, which just gives people a false sense of security.

If your concern is people breaking the law to use data for blackmail and other bad acts, then regulations that limit what data is actively placed into corporate databases are useless when it can still be harvested and reconstructed from raw weblogs. If you're concerned about the ability of governments to use data against citizens, then you don't want to be passing laws about data collection (the government can collect data in a dark program) but about encryption and guarantees of anonymity. If you're concerned about the loss of 'privacy' (really pseudo-anonymity in public), then the regulations need to be more focused on what *other* people can post to the web (e.g. people streaming cameras covering public spaces). If you're concerned about the ability of hackers to gain illicit access to data, then the focus needs to be on meaningful security for the organizations that hold sensitive information (require regular red-team attacks, not just meaningless standards), not on its collection.

However, no one seems really interested in taking these questions seriously so we just get more useless laws that impose barriers to entry for small companies and help concentrate more power into the hands of a few tech giants -- which tends to make the serious concerns even more troubling.

Comment Count on the EU for bad regs (Score 1) 23

I suspect we can once again count on the EU for more stupid regulation here: more in the line of data-privacy protections that are deeply concerned about the US not including certain formal legal protections but have no issue with Chinese firms that practically don't protect data at all. Not to mention the inconsistent, silly cookie regulations that do nothing to protect real privacy but make us all click through dumb consent screens.

Comment Misses the point (Score 1) 45

I don't find arguments for AI alignment x-risk very compelling; however, the whole point of those arguments is to suggest that AI won't just sorta get things wrong in the ways that governments or corporations might, but that it will be highly systematic and unstoppably effective in pursuing goals that take no note of human concerns. The whole argument is that it won't just be like a government or corporation that gets too wrapped up in profit, but that it will turn the world to ash to build more paperclips.

I think the concern about kludgily misaligned AI is far more compelling and probable, but it's misleading not to phrase this as an argument against AI risk and make that argument explicitly.

After all, this suggests we should be pretty hopeful. Sure, no one would claim there are no issues with corporations or states, but it's hard to deny that humans are better off now than they have been at any point in the past. Billions have been lifted out of poverty; we have antibiotics, air-conditioning, labor-saving devices, etc. A far smaller percentage of our populations die in violence now than did in hunter-gatherer populations or even in the ancient world. And if we don't find the x-risk stuff compelling, we should be pretty optimistic we'll be able to do the same with AI.

Comment Just Bureaucratic Stupidity (Score 1) 78

Bernstein seems to be correct that NIST did something dumb in calculating the time needed to break this algorithm. Basically, they said each iteration requires an expensive giant array access, which takes about the time needed for 2^35 bit operations, and that each iteration also requires 2^25 bit operations. However, rather than adding the cost of the memory access to the cost of the bit operations in each iteration, they multiplied them. That's bad [1].
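
Concretely, with the numbers as Bernstein reports them:

```latex
% Adding the per-iteration costs (the sensible accounting):
2^{35} + 2^{25} \approx 2^{35}
% Multiplying them instead, as NIST apparently did:
2^{35} \times 2^{25} = 2^{60}
% i.e. the per-iteration cost -- and hence the estimated attack cost -- gets
% inflated by a factor of roughly 2^{25} \approx 3 \times 10^{7}.
```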

But then Bernstein has to imply that this isn't just your usual bureaucratic stupidity but that it's somehow an attempt by the NSA to weaken our encryption standards. Yes, of course NIST consulted with the NSA, because that's what they should do, and it's part of the NSA's job to protect the security of US information. Any algorithm that NIST approves is going to be used by all sorts of government agencies and government contractors, and we'd be horribly unsafe if that algorithm could be cracked. That's presumably why the NSA helped with the design of the S-boxes in DES, to make them more resistant to differential cryptanalysis before the rest of the world knew about that technique.

This isn't even the *kind* of error that would be beneficial to the NSA. We aren't going to trick the Chinese or Russians into using an algorithm by multiplying numbers that should have been added. Even if the NSA were trying to make the new algorithm breakable, they'd want to inject a flaw that only someone with secret knowledge they hold could exploit, not just encourage NIST to adopt an algorithm where the best *public* attack takes fewer operations than they say it does.

--

[1]: Though it's still possible that the issue is more subtle than this and the claim is that each operation requires one of these expensive array accesses, but I can't find the source for these numbers so can't check. That doesn't seem quite right, but it might be how this thing got confused.

Comment Better Corporate governance (Score 1) 43

This is why we need to make it easier for shareholders to control executives or easier to execute a hostile takeover. Otherwise, the incentive for executives is just to use all that cash they control to expand their business in dumb ways that let the execs feel like they have more influence or control more things.

Sure, Netflix merchandise associated with their shows isn't a terrible idea, but why open your own stores? Just sell it online or via existing outlets.

Comment Sue Kodak (Score 1) 89

You could say the same thing about a bunch of other technologies. Photography and cameras made it much easier to stalk people as well. Messaging apps made it easier to send people unwanted messages, and so on.

All new technologies come with upsides and downsides and I think it's an awful precedent to set to suggest that anyone who introduces a new technology is responsible for mitigating any downsides that technology might have. It's particularly ridiculous in a country that accepts the idea that gun makers aren't liable for the foreseeable fact that people might use guns in an illegal fashion.

Realistically, I think Apple has done more than can be expected from someone introducing a new product. It shouldn't be their legal responsibility to head off all the bad ways someone could use a new technology, yet they included warnings about nearby AirTags and other mitigating measures. Morally, I'm glad they did so, but I don't see what more could reasonably be demanded of them without adopting a rule that you can't make life better for most people in any way that might also harm some others.

Slashdot Top Deals

Their idea of an offer you can't refuse is an offer... and you'd better not refuse.

Working...