And this after the average Christian has heard a PhD's worth of instruction (i.e. 3 hours every Sunday ≈ 150 hours per year, times 12 years = 1,800 hours of instruction).
Gonna need a citation for that. Because I know lots of Christians, but very few who spend 3 hours every Sunday at church.
In which case, you walk out the door and go to the nearest policeman or police station... I'm sure they would like to hear about these threats.
Not if those police are anything like you. They would just tell the girls they got what they deserved. And the girls probably expected exactly that.
For example:
OpenCV - Great, it can recognize a face; however, the models were largely trained on white faces, so they have a white bias when detecting faces.
Humans are notoriously bad at recognizing people from other races. "They all look the same" has been a punchline for a long time. Failing the same way humans do, and for the same reason, seems like a vote in favor of the deep learning solutions.
They are all universally designed for commercial applications (e.g. phone IVRs), so there is no standardization, and you end up retraining your models, wasting months of processing time, whenever a better NN vocoder or synth comes out.
Should we be looking for standardization at this point? I could see arguments on either side. We want to try lots of things vs. we need to be able to compare the different things we're doing.
Also, they use very low-quality inputs, which results in some really low-quality voice synths that "sound a little better than telephone conversations."
So we need better inputs. That means the pretty-impressive results we're already getting will only get better.
The AI can eventually figure out how to solve these games better than a human because it's FASTER at making decisions, not because it's better.
Chess masters study previous games and situations so that when they see an arrangement on the board it looks like a solution they've already studied. How is that different from the AI doing it in real time?
Chatbots - Cannot solve customers' issues; they are primarily designed to play queue-bounce. Chatbots could be designed to help customers pick the right solution, but they (like the websites of the same companies) are largely designed to bury human contact by pushing customers to help themselves, and the real result is more frustration.
Many CSRs work from scripts designed to do the exact same thing. Is there a functional difference between a chatbot that isn't able to improvise and a human who isn't allowed to?
Deep Learning, however, has no plasticity once it's put into production. Quite literally, when it's not in training mode, it can't learn.
On this one I completely agree with you. As long as the hardware required for training is significantly greater than the hardware required to run the agent, the agent is going to run up against edge cases it can never handle.
Does it come with BASIC and Turtle Graphics?
And graph paper mapping the screen so you can plan your images?
If a candidate wins an election with 53 percent of the vote, that would be a decisive victory. If a probability model gives a candidate a 53 percent chance of winning, that means that if we ran simulations of the election 100 times, that candidate would win 53 times and the opponent 47 times -- almost equal odds.
That's a bad comparison. A probability model would actually report something more like: There is a 90% probability that the candidate will get between 51% and 55% of the vote. A 90% probability of victory should absolutely not be interpreted to mean it's predicting the candidate will get 90% of the vote.
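A toy simulation makes the distinction concrete. The numbers below are illustrative assumptions chosen to match the example in this comment (vote share centered at 53% with a 90% interval of roughly 51%–55%), not output from any real forecasting model:

```python
import random

random.seed(0)

# Assumed toy model: the candidate's true vote share is uncertain,
# centered at 53% with a 1.2-point standard deviation (illustrative).
N = 100_000
shares = sorted(random.gauss(0.53, 0.012) for _ in range(N))

mean_share = sum(shares) / N
win_prob = sum(s > 0.50 for s in shares) / N
lo, hi = shares[int(0.05 * N)], shares[int(0.95 * N)]

print(f"average vote share:     {mean_share:.1%}")      # ~53%
print(f"90% interval for share: {lo:.1%} to {hi:.1%}")  # ~51% to ~55%
print(f"probability of winning: {win_prob:.1%}")        # ~99%
```

Even though the expected vote share is only about 53%, the candidate wins nearly every simulated election, because the uncertainty is small relative to the 3-point lead. Win probability and vote share are different quantities, and conflating them is exactly the error the quoted passage makes.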
The concept of the government being able to dictate where you can move about, for a risk this small, is sickening.
There's something sickening, but travel restrictions aren't it.
It works on two levels, get it? It's an insult - implying that your opinion is sickening - and it's describing the virus, which is sickening millions of people worldwide.
It's unfortunate that so many died from this disease, but the experience taught us who is vulnerable and who is not, and which treatments work better than others; and (again, it is unfortunate these people died) those who died cleared the population of those most likely to spread the disease.
Have you seen evidence that those who die from it are also most likely to spread it to others? Because I haven't seen anyone claiming that.
If we assume the elderly and compromised will die from the same level of infection that a young, healthy person would recover from, I would expect those young, healthy people to be out and about more than the elderly both before and after they show symptoms. So those who survive would be most likely to spread it.
I'd be happy to see good research saying the opposite.