"Weak" AI (and that is what we are talking about here) cannot "learn from mistakes".
Your definition of "Weak" AI is not standard and is not how machine learning works.
For example, we only observe actual intelligence in connection with consciousness. Treating them as separate is hence not a scientifically sound approach.
I don't agree. There are very few things we call intelligent. I'm sure they have lots of incidental correlations between them.
And we have even less of an idea what consciousness is. According to the current scientific state of the art, there is no known physical mechanism for consciousness, yet it clearly exists.
This is a good point. We have no scientific definition for intelligence or consciousness. Trying to reason about them is just an exercise in contradiction and equivocation.
How do you program for even every physical condition a stop sign may find itself in?
This assumes the AI even needs to see the stop sign. A driverless car has many advantages over a human. It can have a database of the locations of all stop signs. It can have telemetry information from other nearby cars. It can have 360-degree sensors that include cameras and lidar. It doesn't get tired or drunk. It can receive updates based on "mistakes" made by other driverless cars.
Even if there are problems with some of the information, the system can still perform an action, based on the total information, that is safe for the people in the situation. For example, even if it doesn't see a new stop sign, it might still have enough information to see that there is another car entering the intersection.
Of course, it will make mistakes, but it just has to make significantly fewer mistakes than humans. Honestly, given the pace of progress, that doesn't seem too hard.
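The "act on the union of sources" idea above can be sketched in a few lines. This is a toy illustration, not any real autonomy stack; the function and parameter names are mine:

```python
# Hypothetical sketch: a driverless car can act on the union of several
# information sources, so missing one (e.g. an unseen new stop sign) need
# not cause an unsafe action.
def should_stop(sign_in_map, sign_seen_by_camera, cross_traffic_detected):
    # Stop if ANY source indicates the intersection is controlled or occupied.
    return sign_in_map or sign_seen_by_camera or cross_traffic_detected

# The map is stale (new stop sign) and the camera misses it, but telemetry
# from a nearby car reveals cross traffic:
print(should_stop(sign_in_map=False, sign_seen_by_camera=False,
                  cross_traffic_detected=True))   # -> True
```

A real system would weight noisy, possibly conflicting evidence rather than take a simple boolean OR, but the redundancy argument is the same.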
You wouldn't even go about training a machine learning algorithm that way, as it would be pointless. The idea is to let it make better predictions, not train it to make the same predictions as an existing person.
Actually, much of machine learning is about trying to do as well as a human. Humans are expensive. Google could hire lots of professional language translators to handle every query, but it would cost a lot of money. Ideally, you want the algorithm to do as well as the existing people who created the gold standard training data. But not only does the algorithm do worse, it also reflects the bias of the training data.
Rejected applications are pointless for training, as you don't know whether they were a good or bad rejection, whereas if you just give it approved loans and the outcome (i.e., whether the loan was defaulted on), then the AI can try to develop a set of rules.
What you suggest would create badly biased (in the statistical sense) data. You need to do something more sophisticated. Maybe create a hold-out set where you approve everybody and see how they do. This would be great for this problem, as it would remove any human bias in the labels. A less expensive option (in terms of money lost on defaults) is to use a more sophisticated algorithm that does more than simple batch induction. Perhaps a contextual bandit algorithm or an apple tasting algorithm...
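The exploration idea can be sketched with a toy epsilon-greedy policy: with some small probability, approve an applicant regardless of the model's score, so outcomes are eventually observed even for applicants the model would have rejected. All names and numbers here are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical sketch of an epsilon-greedy loan policy. With probability
# EPSILON we approve regardless of the model's score, so we collect outcome
# labels even for would-be rejections, partially de-biasing future training data.
EPSILON = 0.1

def decide(score, threshold=0.5, epsilon=EPSILON):
    """Return (approved, explored)."""
    if random.random() < epsilon:
        return True, True           # exploration: approve to observe the outcome
    return score >= threshold, False  # exploitation: follow the model

decisions = [decide(random.random()) for _ in range(10_000)]
explored = sum(1 for _, e in decisions if e)
print(f"explored fraction: {explored / len(decisions):.2f}")
```

A real contextual bandit would also use the collected outcomes to update the scoring model and weight samples by their probability of being observed; this only shows the data-collection side.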
If you truly wanted to avoid racial or gender bias you would just remove that information from what you feed into the algorithm, at which point it can't a priori be biased against anyone because it can't even evaluate them based on those criteria.
In general, it depends on the labels. If a human labeled the data and has a bias then the hypothesis learned will reflect those biases. As explained in the article, for complex problems based on ideas such as word embeddings, these biases can also show up as a result of things not obviously connected to labels.
I do agree it's a good idea to remove features that can be used for bias. A machine learning algorithm can use any features that are correlated with the label. Even if we are dealing with simple batch learning and unbiased labels, "bad" features can make the learned hypothesis biased. Assume race is correlated with poverty which is correlated with loan default rate. If there is a race feature, the algorithm might give some influence/weight to that feature. Now we have a model that is biased. A black man might just miss the cutoff because of his race, while he would have gotten the loan if he was white. This might even be logical when given a Bayesian interpretation; given a lack of other information, the algorithm uses the prior information associated with his race to infer this missing information and determine he is a loan risk.
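The correlation argument above is easy to demonstrate numerically. In this invented simulation, the model never sees the protected attribute, yet because the attribute is correlated with income and the model uses income, approval rates still differ by group:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical simulation (all numbers invented): a protected attribute
# "group" is correlated with income, and income predicts default. A model
# that never sees "group" but uses income will still treat groups differently.
group = rng.integers(0, 2, n)                       # 0 or 1
income = 50 + 20 * (1 - group) + rng.normal(0, 10, n)

# A trivial "model": approve anyone above the median income.
approved = income > np.median(income)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")   # high
print(f"approval rate, group 1: {rate_1:.2f}")   # low
```

Dropping the group feature itself changes nothing here, which is the point: the bias rides in on the correlated feature.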
But let's suppose you do that and then look at the results after the fact, add that data back in and come to the startling conclusion that your AI is disproportionately rejecting candidates from some group. It can't possibly be because it knows they're a member of that group, but because that group happens to have worse outcomes.
If the labels are biased then the model is probably biased. Even if you remove "biased" features, the algorithm might learn a model that is based on features that are correlated to your biased labels. For a simple batch induction problem, it might be enough to remove any biased features and to ensure that you have labels that are generated by some type of unbiased process.
But the authors of the article are making such a statement, they just have nature completely backwards. They believe mankind, separated from "society" is naturally non-racist, non-sexist, non-gendered even, and that the outcomes of race, gender, or class groups is imposed on the formless humans by society, to where the concepts themselves of race and gender are "social constructs," and if we smash them everything will just...be great.
I would actually claim the opposite. Man can be racist, sexist, etc., but "good" societies set up rules to prevent those qualities from discriminating against people. This seems consistent with the article.
Smash the Patriarchy and gender equality will simply emerge. If it doesn't, well, it must be because there's still evil sexists hiding around here and they need to be identified and purged.
This is a good point. Someone can always point out differences, and this is not a solid argument that things are unfair. I think people need to be reasonable and logical in coming up with rules of society to try to make things fair.
I think that although we presented this pretty liberally, we were also pretty open-minded and clear about the fact that language communicates all associations. Learning those associations is called "bias" in ML, and bias is what you need: it's the signal you've found in all the noise of the universe.
While you can call that bias, the term is already pretty overloaded in ML. I first learned bias in the sense of Tom Mitchell's inductive bias work. Here the basic idea is to get around the No Free Lunch theorems by assuming things about the problem, e.g., restricting the concept space. An older ML-related definition comes from statistics, in terms of the bias-variance decomposition...
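For the statistical sense, the decomposition MSE(T) = bias(T)^2 + Var(T) for an estimator T of a parameter theta can be checked numerically. Here T is the sample mean of n draws from Uniform(0, 1), so theta = 0.5 (my choice of example, not from any of the cited work):

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical check of the statistical decomposition: for an estimator T of a
# parameter theta, MSE(T) = bias(T)^2 + Var(T). Here T is the sample mean of
# n draws from Uniform(0, 1), so theta = 0.5 and T is (nearly) unbiased.
theta = 0.5
n, trials = 20, 100_000
estimates = rng.uniform(0, 1, (trials, n)).mean(axis=1)

bias = estimates.mean() - theta
var = estimates.var()
mse = ((estimates - theta) ** 2).mean()
print(f"bias^2 + var = {bias**2 + var:.6f}")
print(f"MSE          = {mse:.6f}")
```

Mitchell's inductive bias is a different animal entirely: an assumption that restricts the hypothesis space rather than a number you can decompose.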
I hope you mean the Guardian article not the Science article?
Unfortunately, I only had a chance to read the Guardian article. Still it seemed fairly reasonable. One concern was their claim that humans might lie about why they made a biased decision. I would think it's more likely that they don't know why they made a decision and just rationalized an answer when questioned. This is part of the reason why expert systems failed so badly.
The other thing that seems hard, which they acknowledge, is how to correct for bias. You can remove features that could directly lead to bias, such as race or gender, but ML is all about correlation. The system might learn concepts that are correlated with race, but still not causal. For example, it could learn that people who eat sauerkraut are horrible drivers and should pay higher insurance rates.
No, they're still paying more taxes. Far more, in most cases. The fact that those taxes make up a lower fraction of their income does not mean they are paying less taxes than those with lower income.
Compared to the value they get for those taxes, which does not vary much from one individual to another based on income, they are significantly overpaying.
This is debatable. I would say the US spends a lot of money in the interest of rich people. Around 40% of income taxes are spent on the military, which protects the assets of rich people (among other things). If someone conquered the US, it's doubtful they would let Richie Rich keep his mansion.
Moreover, that portion they don't spend on taxable goods is being invested, which does far more good for society than one could reasonably expect to result from handing it over to the government.
There must be a limit. As this process concentrates wealth, we must eventually reach this limit. Does the money leave the country to invest in other opportunities? Is this better than letting the government redistribute the money, which helps drive our own economy?
You're proposing to seize those "excess" earnings and distribute them as a handout, which at best would just drive up prices.
Did the parent propose that? Before we talk inflation, why don't we start by paying down the national debt...
Punishing saving and investment in particular is a lousy way to help the average citizen, ensuring that the next generation will be worse off than its predecessors.
Some income redistribution could help direct this investment. By creating extra demand for less expensive products, businesses would have an incentive to help the average citizen.
Go ahead and explain why those with the capacity to produce should be supporting others who do not have that capacity.
Because that is a purpose of most democracies. My country allows people to live on its land and use its resources. You are not an island. You live in a society. You alone are not allowed to set the rules.
Animals that cannot feed themselves die off, that is the nature of things.
So you justify how to create a society by looking at common animals.
Of course they can try and steal, that is expected. Of course those, who have something of value will protect themselves, that is also the nature of things. But to feed and to shelter and to entertain your would be assailants because they want what you have? That IS perversion.
You talk about the natural ways that animals behave, while most talk about how people should behave. A society is built to enforce rules for the common good of people. If that means taking away some fraction of the resources of the rich then that is justified. You're kidding yourself if you think that they didn't earn those resources off the backs of others. Capitalism is a government created compromise for the betterment of society. It is not one of your natural laws.
I suppose *some* level of voluntary charity always existed and will exist in the future, however beyond some voluntary charity and beyond the threat of violence what else do you actually think is there? Religion? There is no god, religion is a useful political tool to keep the poor at bay (a threat of everlasting violence after death scares a large number of human animals).
Do you feel superior? Careful, there is always someone smarter than you. Maybe someday you won't make your cut.
So what is your idea, why should a newly born person be entitled to the productive output of an existing person?
So how would your great society work? The devil is in the details.
While I agree with your main point, I think these types of articles are important for people to realize the costs of pollution. The reality is that the balance between the people and the corporations is heavily stacked in favor of the corporations.
In more detail, companies have no incentive to control their pollution, so the government has to step in. It's a classic tragedy of the commons. As we can see, the corporations just buy off the politicians and the people don't know enough to fight back. Ironically, in a true free market, the polluters don't even gain much. The government needs to step in to prevent a race to the bottom.
Another problem is that alternatives that pollute less are disadvantaged because of the externalities that the polluters are exploiting. The government tries to step in and offer incentives to balance things but then the right starts crying about how this is against the free market and that regulations are killing jobs. As technology advances, there should be a constant increase in regulation to replace these polluters with cleaner technology and level the playing field.
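The externality argument is just arithmetic. All numbers below are invented, but they show why the dirty option looks cheaper to the firm even when it costs society more, and how a tax equal to the external cost realigns the two views:

```python
# Toy arithmetic sketch (all numbers hypothetical): an externality makes
# the dirty option look cheaper to the firm even when society pays more.
dirty_private_cost = 80
dirty_external_cost = 40     # pollution cost borne by everyone else
clean_private_cost = 100

# The firm's view (no tax): dirty wins.
print(dirty_private_cost < clean_private_cost)                        # True

# Society's view: clean wins.
print(dirty_private_cost + dirty_external_cost > clean_private_cost)  # True

# A tax equal to the external cost makes the firm's view match society's.
tax = dirty_external_cost
print(dirty_private_cost + tax > clean_private_cost)                  # True
```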
That sample group actually represents less than 0.00004225% of the population that makes up the NRA membership.
It's actually quite interesting how multiplicative and additive arguments are easy to mix up. You're essentially making a multiplicative argument that a percentage is relevant, but in this case it's the additive argument that matters. I don't care about the size of the population; all I care about is that I have a big enough sample size to detect what I care about. For example, it might be a sample of 100 people, but I don't care if I've got a population of a thousand people or a trillion people.
On the other issues you are correct. How you formulate the questions and how you sample the population can have a huge effect on the results.
Mathematicians stand on each other's shoulders while computer scientists stand on each other's toes. -- Richard Hamming