Overpopulation is not simply continent-wide population density. It is, among other things, the ability of a population to feed, water, clothe, and house itself relative to its environment. It is affected by infrastructure, geography, land use, technology level, government, and other factors.
Europe (as a continent) may have a higher population density, and (IMHO) is somewhat overpopulated. However, overall it is doing fine in terms of feeding and watering itself, it is past the demographic transition, and it looks like its population will be decreasing over time.
Africa, on the other hand, is wildly diverse in terms of local overpopulation, with some areas experiencing huge demographic momentum (i.e., a large percentage of young people below reproductive age), terrible infrastructure, and governments that cannot cope.
So, yes, broadly speaking, Africa has an overpopulation problem.
I get a stacktrace that includes:
Could not find function foo in com.lete.ool
I then want to search specifically for the com.lete.ool package, with the periods in there. (It's the Object Orientation Library from the company LETE.) I do _NOT_ want to get back something matching completetool.
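The problem is that in most search tools a bare `.` is a wildcard that matches any character, so the pattern happily matches `completetool` (com·p·lete·t·ool). Escaping the periods, and adding word boundaries so longer identifiers don't match, fixes it. A minimal Python sketch:

```python
import re

log_line = "Could not find function foo in com.lete.ool"

# Naive pattern: unescaped '.' matches any character, so this also
# hits unrelated strings like "completetool" (a false positive).
assert re.search("com.lete.ool", "completetool")

# Escaped pattern with word boundaries matches only the literal
# package name. re.escape() would produce the same escaping.
literal = re.compile(r"\bcom\.lete\.ool\b")
assert literal.search(log_line)
assert not literal.search("completetool")
```

On the command line, `grep -F 'com.lete.ool'` searches for the fixed string (no wildcard interpretation), or you can backslash-escape the dots in an ordinary grep pattern.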
I suspect most
What? Why do you suspect that? I want laws against money laundering.
and don't want the reporting requirements.
I don't like the reporting requirement, but I understand why it was there.
You may disagree, I may disagree, but you are assuming too much in assuming the people here want roughly the same law and just disagree on means.
You totally didn't answer the parent's question: "What do you propose?" Are you proposing removing the reporting requirements entirely without replacing them, thereby making money laundering much easier? Why?
they should just devise a better contest quite frankly, with combination categories or lists of "what's in the picture in relation to each other", like "wine in a glass" vs. "wine glass and a wine bottle"
Yes, they should 'just' create a better contest. The issue is that creating a contest (identifying objects, writing labels, testing, error-correcting, etc.) is a slow, expensive, and unglamorous process. The ILSVRC is only a couple of years old, and already it is showing its age; I don't think its creators expected it to be nearly solved this soon.
So, what's next in terms of contests? Probably a multi-object challenge, where a picture can have many objects; alternately, the task would be to label not only the main object but also its parts. The previous contests were limited because there was a single primary labeled object; ILSVRC doesn't even use a bounding box (which PASCAL VOC did). So the next step is to create a data set with lots of objects, have them all labeled, and require the computer to draw the boundary (not just the bounding box) around each object.
Scoring performance is a pain in many of these contests, and eventually it becomes somewhat arbitrary. How do you decide that a predicted bounding box correctly covers the ground-truth bounding box? Any threshold (e.g., 50% overlap) is going to be arbitrary. Doing it for free-form object boundaries will be even harder.
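For context, the usual overlap measure (the one PASCAL VOC thresholds at 50%) is intersection-over-union: the area both boxes share divided by the area either covers. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if no overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction shifted halfway off the ground truth scores only 1/3,
# which is why a 0.5 cutoff (or any cutoff) feels arbitrary:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

The same idea extends to pixel masks for object boundaries, but there the annotation noise alone makes the threshold debate worse.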
Okay, so we have a benchmark where the bog-standard human being scores 94.9%.
Yes, and now the algorithms are better. More importantly, the 'standard human' only does that while paying attention, which they can't do for more than 15 minutes or so. The computer does it day in, day out, forever. And it will get better over time.
Then in February (that's three months ago), Microsoft reports hitting 95.06%; the first score to edge the humans. Then in March, Google notches 95.18%. Now it's May, and Baidu puts up a 95.42%. Meh. Swinging dicks with big iron are twiddling with their algorithms to squeeze out incremental, marginal improvements on an arbitrary task.
You denigrate their work, but that's the way science works: incrementally almost all the time. In any field, you will see tweaking, slight improvements, variations, and a couple of new ideas. And then one of the researchers will hit on the next big idea. So what? What the hell have you done? You're just being a dick.
“Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project.
I presume that next month it will be IBM boasting about "leading the race" and being "much greater than their competitors". The month after that it will be Microsoft's turn again. Google will be back on top in August or so...unless, of course, some other benchmark starts getting some press.
First, what they are doing is very hard. So, yeah, doing 0.25% better than someone else is a big deal. Let's see you do better.
Second, look at the performance over time. There were the NIST handwriting sets, and then the Stanford data sets; then the 'standard' was the PASCAL Visual Object Classes challenge, and people slowly improved to the point that someone needed to step up and provide a better benchmark (more categories and more examples of each). That was the ILSVRC, and now we're down to the last couple of percent on that one. The next set will be bigger and harder, and performance will improve on it too. That's expected, and a good thing. Image recognition is stunningly hard; thanks to the hard work of these researchers, it's gotten a lot better.
here's your obligatory XKCD
As someone who was involved in the previous neural network hype cycle (late 80s, early 90s), I'd have to agree with him that we go through these cycles, where a particular approach gains ascendancy, is shown to not work as well as the hype, and then gets rejected. On the inside, however, lots of good work continues to be done. The press (and then popular opinion) keeps saying 'this is it, we're really close to AI' or something similar, and when it doesn't pan out, it is considered a bust. But we are making progress: we know more than we did last year, and a lot more than 10 years ago. It is just that the problem is hard, and we're still trying to figure out some basic principles, so don't expect us to be there yet.
Apache has a number of vital, rapidly improving projects. The one that I'm using currently is Apache Spark. We use Solr and Nutch, and they are being actively developed. We're excited about Calcite getting to the point that it is fully featured and stable, and that's progressing.
There are plenty of projects that have moved to the Attic, which is where they go for the long, slow retirement and death. And many of the projects are, I would say, lethargic and not frequently updated, because they are large, stable, and feature complete, but likely to be replaced by other projects. Maven is a good example: I think there is something better out there, but there is a large installed userbase that Apache supports.
Based on his (vague) project description, it sounds like Apache might be perfect for it.
What we really need is a human body simulator, down to the molecules.
That would be nice, but rather unrealistic currently. We are working on a worm; you can see progress at http://www.openworm.org/ . It's cool, cutting edge, open source, and all that, but 1) the models are really complicated and we don't know all the parameters, and 2) they take a long time to run. In a couple of years, we should (fingers crossed) be able to see the effect of chemicals on a nematode, so if it gets sick, we can simulate treating it.
Please note that C. elegans has 959 somatic cells, of which 302 are neurons. Humans have on the order of 100 billion neurons. We're still many, many orders of magnitude away from simulating the effect of drugs on a human body.
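To put a number on "many, many orders of magnitude", here's the back-of-the-envelope arithmetic, using the cell and neuron counts from the comment above:

```python
import math

c_elegans_cells = 959   # somatic cells in the adult hermaphrodite
human_neurons = 100e9   # rough neuron count cited above

ratio = human_neurons / c_elegans_cells
print(f"~{math.log10(ratio):.0f} orders of magnitude")  # ~8 orders of magnitude
```

And that only counts neurons; counting all human cells (tens of trillions) adds several more orders on top, before you even get to molecular detail.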
1) Slavishly reimplement millions of models in the new medium's physical construction, to emulate the quirks and behaviors of the target system's physical construction, wasting huge amounts of energy and making a system that is actually *MORE* complex than the original....
2) Deconstruct all the mechanisms at work in the physical system that currently performs $BAR to get $FOO, evaluate which of these are hardware dependent and can be removed/adapted to high-efficiency analogues on the new hardware platform, and produce only the components needed for $BAR to be accomplished, to generate $FOO?
The former will most certainly get you $FOO, but is HORRIBLY INEFFICIENT, and does not really shed light on what is actually needed to get $FOO.
The latter is MUCH HARDER to do, as it requires actually understanding the process, $BAR, through which $FOO is attained. It will, however, yield the higher-efficiency synthetic system, AND the means to prove that it is the best possible implementation.
Basically, it's the difference between building a Rube Goldberg contraption vs. an efficient machine.
We've been trying, in various ways, to do #2, but can't do it yet. So, we're trying to do #1, analyse it, and then do #2. You say that we should 'produce only the components needed', but really, that's the crux of the matter. We don't know what the components needed are. We can't even simulate a worm yet at either the individual cell OR functional level; see the OpenWorm project (http://www.openworm.org/) for an attempt at the former. We can use that sort of model organism to figure out what the important features are, model those, and move forward, but it seems unreasonable to complain that full nervous system modeling is the wrong approach, when the alternatives haven't worked yet.
Yeah nice false dilemma there. Just because some good comes of it at times does not mean we should just accept the status quo of rising taxes, rising inflation, and diminishing returns.
Only, we don't have rising taxes. Right now inflation is at or below what the Fed generally aims for. I don't even know what you mean by diminishing returns. And none of these is strongly related to military or intelligence R&D.
On the flip side we have:
1. bio warfare
2. nuclear weapons
3. autonomous robot weapons
4. electronic surveillance
5. speeding fines that have nothing to do with safety
6. e-waste
Now shut up and go reread the bill of rights.
Humans have misused almost every scientific and technological advance. They are short-sighted, greedy, and oppress their fellow humans. None of this is a surprise. However, things like the 'toy' the OP complained about, and the list of negatives you give, are not a reason to stop progress. The human race is better off, living healthier, more connected, safer lives, due to the creation of 'toys' paid for by taxes, even taking the negative effects into account.
My problem is that it only has 300 books! Seriously, how frigging hard is it to put 3,000+ books in there? Put the whole damn Project Gutenberg on it! There is no reason not to have a huge library of books. Shakespeare has 36 plays all by himself, Twain has over 20, Doyle has over 20, Dickens about 20, and those are just off the top of my head.
Watch the space shuttle program make a dramatic re-appearance. This is a massive national security issue that I bet no one brought up when they decided, "Gee, lets go and outsource our rockets and launches to a foreign power we've had cold relations with since the early 20th century."
The US has a working, currently available space shuttle: it's called the X-37B. Works great. You just don't hear much about it; it's not manned. We also have a pretty good and improving disposable launch capability, though we do use Russian engines on the Atlas V. What we don't have is a manned program.
It would make sense to rapidly (well, as rapidly as possible) develop a manned launch capability.