Yeah, me too. Good memories of something that in retrospect was not that great.
The title should have read "Carefully crafted decoy using massive computation resources can fool not up-to-date AI".
Here's how it works:
1. Get access to the AI model you want to fool (and only this one). You don't necessarily need the source code, but you do need to be able to query the model for as long as you want.
2. Solve a rather complex optimization problem to generate the decoy.
3. Use your decoy in very controlled conditions (as stated in the linked paper).
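The optimization in step 2 is typically a white-box gradient attack. Here's a toy sketch in the spirit of FGSM against a hypothetical logistic-regression "model" (the weights, input, and epsilon are all made up for illustration; a real attack would target an actual network):

```python
import numpy as np

# Toy "model": logistic regression on a flattened 64-pixel image.
rng = np.random.default_rng(0)
w = rng.normal(size=64)          # hypothetical trained weights
x = rng.normal(size=64)          # the input we want to turn into a decoy

def predict(x):
    # P(class = 1) under the toy model
    return 1.0 / (1.0 + np.exp(-w @ x))

def fgsm(x, y, eps=0.25):
    # One FGSM-style step: nudge the input along the sign of the loss
    # gradient. For logistic loss with true label y, the gradient of
    # the loss w.r.t. x is (p - y) * w.
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

y_true = 1.0
x_adv = fgsm(x, y_true)
# The per-pixel perturbation is tiny (at most eps), yet the score drops.
print(predict(x), predict(x_adv))
```

Note that computing `grad` requires full access to `w`, which is exactly the first point: without the model in hand, you can't run this.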
While the method for fooling the model is fine (and similar work has been buzzing lately), the conclusions are much weaker than you'd expect. First, if you don't have the actual model, you can't do this: you need to run the very model you are trying to fool, which rules out all remote systems with rate-limited access. Second, you rely on tiny variations that can be more sensitive than real-world variation. Take the sticker on a road sign: if you took the picture on a sunny day, the decoy will very likely not work on a rainy day or at night. Third, if the model evolves, you have to update the decoy. Here's the thing about statistical learning systems: they learn. It's very likely the model got updated by the time you finished the computation and printed the sticker. Many people believe future industrial systems will perform online learning, which renders these static methods useless.
So yeah, an actual research model can be fooled in very specific cases. However, it's not as bad as some articles try to make it sound. I'm not saying it won't happen, I'm saying it's not as bad as you think it is. Hey, if you want to impersonate somebody, put on some makeup, and if you want people to crash their cars, cover the road signs with paint. There you have it: humans are easily fooled by some paint.
Just like regular humans. People almost never question the religion they were born with, or their views on race and culture for that matter.
Joanna Bryson, a computer scientist at the University of Bath and a co-author, warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases.
"unlike some humans"
There, fixed that for you. Or even better: "like most humans".
Statistical learning makes inferences based on what humans produced. If humans are crap, do not expect something better than crap.
What if I could purchase a robot that could go out and earn a living for me?
You can. You just have to buy shares of a company and vote for a board that will fire employees and replace them with machines and algorithms in order to increase dividends.
The caveat is that you need to have so much money that you already don't need to work. If you don't, then you'd better vote for universal basic income, because those who have will do anything to increase their dividends, including replacing you with machines and algorithms.
Climatologists are not mechanical engineers, they are PhDs. I agree, engineers are very careful about the details. However, PhDs don't have life-risk to consider. In fact, there is overt manipulation of the data upon which most (if not all) of the climate "conclusions" are based.
Oh yeah, like the engineers at Volkswagen who cared about the details of their vehicles' emissions.
Engineers are in the front row when it comes to cooking the data to fit the specifications. Dishonesty is the same everywhere, as long as there's a gain involved. Most people don't care about being right or wrong; they care more about themselves than about the facts. It's putting feelings over reality. Which might explain why you elected Trump.
Because when you mix two things at temperature T, you don't get a thing at temperature 2T. Don't mistake temperature for energy.
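Concretely, the equilibrium temperature of a mixture is a heat-capacity-weighted average of the inputs, never their sum. A minimal sketch (the masses, specific heats, and temperatures are made-up illustrative values):

```python
def mix_temperature(m1, c1, T1, m2, c2, T2):
    """Equilibrium temperature of two bodies brought into thermal
    contact (assuming no phase change and no heat loss): the heat
    lost by one equals the heat gained by the other, so T_final is
    the weighted average of T1 and T2, weighted by m * c."""
    return (m1 * c1 * T1 + m2 * c2 * T2) / (m1 * c1 + m2 * c2)

# Two equal cups of water at 40 degrees mix to 40 degrees, not 80.
print(mix_temperature(1.0, 4186, 40.0, 1.0, 4186, 40.0))  # → 40.0
```

Energy, by contrast, is extensive: the total heat content of the two cups does double, while the temperature stays put.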
Same question as who's liable for the decisions pets make. When your dog escapes from the house and causes carnage in a nearby kindergarten, you are liable. If you feel you're not cut out to keep the things you own under control, don't buy a dog or a robot.
We're still a lot farther away from truly autonomous cars than most people tend to think. Sure, driver assist and souped-up cruise control are coming in, and will be great on clearly marked and standardized interstates. But good luck trying to get a computer to navigate the old back roads of some city or country backwater.
Hell, Alexa still can't even understand a lot of basic questions I ask her. I'm sure as shit not about to let that bitch drive.
Inferring the performance of vision-based vehicle control software from the performance of natural language processing software is about as relevant as saying that all hammers are flawed because your screwdriver isn't working properly.
The lawsuit raises interesting questions, such as whether employment law requires corporations to have the sort of common decency we expect from individuals.
They don't. And that's exactly why they were created in the first place: to avoid pesky human feelings from hindering business.
I think it's not so much because of the ratings as because of cynical politics. You see, lowering the grades lets you temporarily reduce unemployment by producing a whole generation with higher degrees than the previous one. Temporarily, because of course they don't have the associated knowledge, but that doesn't really matter since the degrees weren't necessary in the first place, and someday someone will figure this out. Politicians understood that some time ago and profit hugely from it.
Last I checked, master's degrees were awarded to about 25% of a generation in France (i.e., 25% of the people born in 1990 have a master's degree). It's awesome from a societal perspective, but do you really believe we need that many people with that much qualification (assuming they actually have it)? I think not. In less than 10 years, I bet you that master's degrees will be awarded to half of a generation, and that the PhD will be the next hype degree.
No society needs that many MScs and PhDs, for sure. You have to realize we are already in an employment bubble. We have automated things so well that a non-negligible and growing portion of the population is useless. Since we don't know how to build a society outside of employment, we have to give them fake degrees leading to fake jobs. We have to say that it now takes an engineer to fill in a spreadsheet. Of course it doesn't, but guess what: the guy doing it isn't really an engineer, and anyway the spreadsheet is useless for the project it relates to.
This is a bubble, and I think it will burst sooner than expected, with investors refusing to fund fake jobs once it's obvious that they are what they are.
That's a question I usually ask myself when the holidays kick in. The answer has yet to be found.
Apparently you missed one very important word in my sarcastic post: 'most'. That, and the fact it was sarcastic, like the previous comment.
Some students are doing just fine, like your daughter, because they have caring parents who gave them a good basic education to build on. They even tend to do better thanks to the effectiveness of current technology. Good for them.
Most students, however, are just plain disastrous. They'll never become bilingual, and trilingual isn't even considered. For most younger people, technology is not an aiding tool, it's a replacing tool. They don't use the tool to help them do better (more efficiently, more correctly); they use the tool to do something they are completely incapable of doing without it. They don't even know that they are incapable of doing those things.
What online search and social media have done is widen the gap between the good and the ugly. People who were on top are now on a higher top; people who were at the bottom are now at a deeper bottom, and they account for the majority. We believed technology would allow for cheap mass education and that we would rapidly become a society of geniuses. Turns out it doesn't work that way, and you can't solve a social problem with a technical solution (that has never worked). Because nobody wanted to pay taxes high enough for a decent education system, we now have a mass of incompetent dudes who don't even realize they're incompetent, but who still have high expectations of what their job (or better, salary) should be because they're in college. You have the illusion they're doing fine, but it's the technology that's doing fine. With growing automation, they'll rapidly be out of business entirely.
Of course it's not as black and white as I'm putting it here, but you get the idea.
He's probably right; eventually, taking your glasses off will be like suffering from some kind of learning disability. All text you see automatically scanned and available for perfect recall, the name of every person you meet whispered in your ear in case you forgot, any equation instantly solved... And an unquenchable thirst for Pepsi, an uncontrollable urge to buy a Tesla.
It's already sort of the case. Most modern students are incapable of doing anything unless they have Facebook to ask older friends what to search for on Google. And then they have an unquenchable thirst for Pepsi. Conclusion: you don't need a brain interface to sell crap and render people useless.
I haven't seen anyone come up with a good reason people wouldn't use basic income to work less and be lazy. I can tell you, if I had guaranteed income for life, I would probably not ever work again.
And then natural selection kicks in. If you sit on your butt all day watching dumbed-down TV while eating greasy food, you're likely to die very early. Good riddance! UBI is actually a way to get rid of all the lazy useless assholes while committing no crime. I think it's priceless.
To be even more effective, I think we should double the amount of UBI if you opt in for sterilization.
"They that can give up essential liberty to obtain a little temporary saftey deserve neither liberty not saftey." -- Benjamin Franklin, 1759