Comment Re:Stop companies using AI to replace jobs (Score 1) 94

And it would be pretty hypocritical to try to stop it now, after we've spent hundreds of years insisting that people who work with their hands accept that technology might cost them their jobs. But yes, the question of whether we are going to use the resulting productivity gains to improve the general welfare or to further empower the wealthy is a big one. I'm hoping this time we can get it right.

Comment Ohh Now It's Your Job It's Different? (Score 2) 94

Lots of this anger seems to stem from the fear that AI will take the jobs of programmers and artists. And while I sympathize (it might destroy my vocation as well), for centuries we've been asking craftspeople and those who do manual labor to accept that their careers can be upended in the name of economic progress. The term "sabotage" itself (supposedly) arose out of anger over automatic looms hundreds of years ago. Now the same kind of automation that took jobs from people who worked with their hands is coming for white-collar jobs, and it would be pretty hypocritical to suddenly call a halt now.

But maybe this time we can actually try to make the economic benefits work for the welfare of society as a whole, e.g., by using taxes to distribute them.

Comment Re:Only bad one (Score 3, Insightful) 94

What you're missing is the fact that lots of people use it responsibly and no one notices. If you are using it right, e.g., to help you write a bunch of documents (it helped immeasurably with getting the right tone and suggesting phrases for all the paperwork for my wife's tenure application), no one notices because you don't just copy-paste whatever it produces. Exactly *because* people are so hostile, everyone using it in a responsible way goes unnoticed, and all you hear about are the idiots who copy-paste it without thinking.

For instance, if you look closely you can see all sorts of great uses on YouTube, where channels suddenly have great animations helping explain what they're talking about (e.g., continental drift or engine parts), but those go unnoticed while everyone complains about the slop.

Comment Re:It's automated plagiarism (Score 3, Interesting) 94

You mean because it learns just like people do from our common cultural heritage?

Remember, the point of intellectual property is to incentivize creation, not to allow authors to block the creation of new works. That's why it's only supposed to last for a limited time and why we have exceptions for sufficiently transformative uses.

Comment Ridiculous Politics (Score 4, Insightful) 94

Why not strip an award because the creator has the wrong political affiliation, or anything else you don't like? Game awards should just evaluate the quality of the game, not make political statements.

If you are really convinced generative AI makes games shitty, then what's the problem? Presumably those games won't win awards because they suck. The only reason for this policy is that you think it *will* make for good games but you want to stop its use anyway.

Comment We Don't Know How To Regulate Yet. (Score 1) 50

The issue isn't that AI doesn't need any regulation. It's that we have no idea yet how to regulate it in a way that makes sense. All regulation now would do is create hurdles that shut out small competitors and open-source alternatives and centralize power in the few people deciding what we get to do with AI. That's the truly scary outcome. Right now, regulation would just end up being based on ideas from sci-fi films.

I mean, the real problems the internet created -- the ones we care about now -- aren't the ones that seemed important in the 90s (they weren't wrong that people would find porn; it just doesn't seem like a big deal anymore).

Comment Better Solution (Score 1) 28

Actually, it occurs to me that there is a technological solution to this problem. Simply have camera device makers sign their output using some kind of secure hardware key so the receiver can verify that the video really was the input as seen by the camera on an X laptop or whatever. Of course, you still need to guard against attacks that put a screen in front of the camera, but that's doable if the camera has any focusing information or uses IR to reconstruct 3D information.

I'm sure there are all sorts of clever ways to sign the image stream that would be robust to a degree of encoding or other changes. The simplest option: video calls display a signed still picture, and maybe YouTube videos display some uncompressed frame every so often or check that it matches. But I suspect you can do something cleverer than that.
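A minimal sketch of the idea, with the caveat that a real camera would sign inside secure hardware (e.g., a TPM or secure enclave) using an asymmetric key whose public half is certified by the maker; the symmetric HMAC here is just a stand-in, and all the names are illustrative:

```python
import hmac
import hashlib

# Stand-in for a key burned into the camera's secure hardware.
# A real design would use an asymmetric key pair so receivers
# can verify without being able to forge signatures.
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_frame(frame_bytes: bytes, frame_index: int) -> bytes:
    """Sign a raw frame together with its index so frames can't be
    reordered or replayed at a different position in the stream."""
    msg = frame_index.to_bytes(8, "big") + hashlib.sha256(frame_bytes).digest()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, frame_index: int, sig: bytes) -> bool:
    """Check that a frame matches what the camera signed."""
    return hmac.compare_digest(sign_frame(frame_bytes, frame_index), sig)
```

Note this only authenticates the raw frame bytes; surviving lossy re-encoding would need something like a signed perceptual hash, which is the genuinely hard part.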

But algorithmic detection is unlikely to be the answer unless it's an interactive situation where the detector can issue queries.

Comment Re:Laser focused? (Score 1) 28

At worst, it's like we are back in the early 1900s before we had easy access to video and audio recordings. People managed pretty well then. I think it will be less disruptive than you suggest, even if -- just like now -- older folks who aren't used to the new dangers are vulnerable to scams.

But I think we can solve this by just having cameras and audio recording devices sign their output using hardware keys.

Comment More Harm Than Good? (Score 4, Interesting) 28

The problem with any technology like this that can be run cheaply by the end user is that the more advanced attackers can just take that software and train models to specifically trick it. Sure, maybe it catches the low-effort attacks, but at the cost of potentially making the more advanced attacks seem more legitimate when they don't trigger the fake detector.

The real solution is the same one we used back before photography, audio, and video recordings were common and people could pretend to be anyone they wanted in a letter. People need to be skeptical and authenticate interactions in other ways -- be it via shared knowledge or cryptography.
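The cryptographic version of "shared knowledge" can be as simple as a challenge-response over a pre-shared secret. A toy sketch (the secret, names, and protocol here are illustrative, not any real standard):

```python
import hmac
import hashlib
import secrets

# A secret the two parties agreed on out-of-band, e.g., in person.
SHARED_SECRET = b"agreed upon in person beforehand"

def make_challenge() -> bytes:
    """Fresh random nonce so responses can't be replayed later."""
    return secrets.token_bytes(16)

def respond(secret: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def check(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Verify the other party's response against our own computation."""
    return hmac.compare_digest(respond(secret, challenge), response)
```

A deepfaked caller who looks and sounds right still fails the check, because no amount of generated video yields the secret.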

---

Yes, if you only run the detection on a server and limit reporting -- for instance, only reporting a fake/non-fake determination after several minutes of video -- and don't share information about how the technology works, adversarial training might be difficult. But that has its own problems: if researchers can't put the reliability of the security to the test, there is every incentive to half-ass it, and eventually attackers will figure out the vulnerabilities, like with most security through obscurity.

Comment Clever Move (Score 1) 49

That's a clever move. China knows that the US is turning to Intel out of concern that China could either destroy TSMC (e.g., in an invasion) or has agents who can report on its vulnerabilities. Taking action to limit Intel sales in China is an effective way to handicap the US's attempt to protect its access to high-end lithography.

Sure, it's not like Intel doesn't have security issues or problems. But from a security POV they are no worse than AMD and probably no worse than companies like Apple (whose chips may seem more secure only because information is more restricted and security researchers have fewer tools).

Comment Right Conclusion, Wrong Argument (Score 2) 119

I agree with the conclusion, but the argument is wrong. Remember, what Apple refused to do was create software that would allow it to work around the limit on password guessing so the FBI could brute-force the device password. The fact that they refused implies that they *could* create that kind of software. Presumably, nation-states like China could -- at least with access to the appropriate Apple secret keys -- create the same kind of workaround. A system where Apple used a secret key on an air-gapped, sealed cryptographic module to create per-device law enforcement decryption keys would be no less secure.
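To make the per-device key idea concrete, here is a minimal sketch of deriving distinct keys from a master secret held only on the air-gapped module. This is my illustration of the general technique, not Apple's actual design; every name here is an assumption:

```python
import hmac
import hashlib

# Hypothetical master secret that never leaves the sealed, air-gapped
# cryptographic module; only derived per-device keys are released.
MASTER_SECRET = b"held only inside the sealed HSM"

def per_device_key(device_serial: str) -> bytes:
    """Derive a distinct 256-bit key for one device. Compromising one
    derived key reveals nothing about any other device's key, since
    inverting HMAC would require recovering the master secret."""
    return hmac.new(MASTER_SECRET, device_serial.encode(), hashlib.sha256).digest()
```

The security rests on the master secret's isolation, which is exactly why the remaining danger is legal (who can compel a derivation) rather than technical.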

The real danger is that the second you create that legal precedent, Apple isn't going to be able to pick and choose which law enforcement requests it complies with -- be it an order from some random judge issued ex parte (say, for a device image taken without your knowledge), which you never get the chance to contest, or a request from judges in China. The danger here is mostly legal, not technical.

Indeed, the greater hacking risk is probably someone breaking into a local police department, changing the account ID requested in a warrant, and getting access to your iCloud backups that way, rather than hacking a well-designed system that allowed Apple to issue secondary per-device decryption keys to law enforcement.

Comment Fucking Finally (Score 1) 47

This is the feature that makes smart glasses worth using -- giving you seamless information on who you're talking to -- it's just that all the tech makers have been too cowardly to actually enable it.

And no bullshit about how this harms privacy. Your privacy is invaded as much or more by the tons of cameras in stores, ATMs, etc., recording and saving your image. This just makes that fact salient.

I agree it's important to make sure the devices alert people when they are storing recordings, but facial recognition is useful and not particularly privacy-invasive.

Comment Meaningless Platitudes (Score 2) 47

If you actually look at the pledge the content is a bunch of meaningless platitudes. Specifically, it requires

1. Transparency: in principle, AI systems must be explainable;
2. Inclusion: the needs of all human beings must be taken into consideration so that everyone
can benefit and all individuals can be offered the best possible conditions to express
themselves and develop;
3. Responsibility: those who design and deploy the use of AI must proceed with responsibility
and transparency;
4. Impartiality: do not create or act according to bias, thus safeguarding fairness and human
dignity;
5. Reliability: AI systems must be able to work reliably;
6. Security and privacy: AI systems must work securely and respect the privacy of users.

They might as well have pledged to "only do AI things we think we should do" for all the content it has. If you think some information shouldn't be released, you don't call it non-transparency, you call it privacy. When you think a decision is appropriate, you don't call it bias, you call it responding to evidence, and you wouldn't describe it as failing to take someone's interests into account if you think you balanced those interests appropriately.

The only requirement with even the possibility of real bite is #1, explainability, but saying "in principle" makes it trivial, since literally all computer programs are explainable in principle (here's the machine code and the processor architecture manual).

Comment Objecting to military use is just selfish (Score 1) 308

I respect the objections that a committed pacifist (or opponent of standing armies) might have to their company taking on military contracts -- even if I disagree. But anyone else is just being a selfish fucker. They are saying: yes, I agree that we need to maintain a military, so someone needs to sell it goods and services, but I want it to be someone else so I don't have to feel guilty.

Doing the right thing is often hard. Sometimes it means doing things that make you feel uncomfortable or icky because you think through the issue and realize it's the right thing to do. Avoiding anything that makes you feel icky or complicit doesn't make you the hero -- it makes you the person who refused to sanction interracial or gay marriages back in the day because you went with what made you feel uncomfortable rather than what made sense.

Comment Nowhere Near Kolmogorov Complexity (Score 1) 22

We are likely nowhere near the true Kolmogorov complexity. Note the restriction on running time/space. Kolmogorov complexity is defined without regard to running time, and in all likelihood the true minimal program uses some algorithm that's hugely super-exponential (with large constants) in time and space.
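The gap is easy to illustrate: any practical compressor only gives an upper bound on K(x), since K(x) ≤ len(compress(x)) + O(1) but the true shortest program is uncomputable. A quick sketch using zlib as the stand-in compressor:

```python
import zlib

def kolmogorov_upper_bound(data: bytes) -> int:
    """Compressed size is an upper bound on K(data), up to an additive
    constant for the decompressor itself. The true K may be far smaller,
    and no algorithm can find it in general."""
    return len(zlib.compress(data, 9))

# Highly repetitive data: the true K is tiny ("print 'ab' 10000 times"),
# and zlib gets reasonably close for this easy pattern.
repetitive = b"ab" * 10_000

# But for, say, the first million digits of pi, zlib would do far worse
# than the true (short) generating program -- that's the resource bound
# in action: cheap compression time buys a much looser bound.
```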
