The Environmental Protection Agency will issue the proposed regulations this summer, and final regulations by 2016, according to the person, who spoke on the condition of anonymity because the administration had asked the person not to speak about the plan. The White House declined to comment on the effort. Methane, which leaks from oil and gas wells, accounts for just 9 percent of the nation's greenhouse gas pollution — but it is over 20 times more potent than carbon dioxide, so even small amounts of it can have a big impact on global warming.
Medical researchers like Dr. Michael Patton believe this sort of prototyping will become "the new normal" in a very short time. He says, "What you can now do through 3D printing is like what you're able to do in the software world: rapid iteration, fail fast, get something to market quickly. You can print the prototypes, and then you can print out model organs on which to test the products. You can potentially obviate the need for some animal studies, and you can do this proof of concept before extensive patient trials are conducted."
A true artificial intelligence will show evidence of maintaining a mental model of reality, of testing that model against incoming data, and of adjusting the model when necessary. This strongly implies that the AI models itself in some manner, such that it can "imagine" a different way of "looking" at the world and then judge whether the new model is a better way of thinking about things than the old model. The process is clearly fractal, since at the next level the software would be "imagining" a different way of judging which of two models was better, eventually reaching the point where it makes decisions about whether, in the current context, it should act pragmatically or ethically.
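The maintain-test-adjust loop described here can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not anyone's actual AI architecture: models are bare prediction functions, "testing against incoming data" is mean squared error, and "adjusting the model" is adopting whichever candidate predicts better.

```python
# Toy sketch: an agent keeps a predictive "world model," scores it against
# incoming observations, and swaps in an "imagined" candidate model when the
# candidate predicts better. All names and data are illustrative assumptions.

def prediction_error(model, observations):
    """Mean squared error of the model's predictions on observed (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in observations) / len(observations)

def revise(current_model, candidate_model, observations):
    """Keep whichever model better explains the incoming data."""
    if prediction_error(candidate_model, observations) < prediction_error(current_model, observations):
        return candidate_model  # the imagined model wins; adopt the new view
    return current_model

# Example: the world behaves roughly like y = 2x; compare two candidate beliefs.
data = [(1, 2.1), (2, 3.9), (3, 6.2)]
old = lambda x: x          # believes y = x
new = lambda x: 2 * x      # believes y = 2x
best = revise(old, new, data)
```

Judging *which way of judging models* is better would itself be another call one level up, which is where the "fractal" regress the author describes begins.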
Indeed. "Mental" modeling — maintaining and manipulating an abstract computational representation of beliefs — is at the heart of strong AI. Such models include, for example, beliefs about the world, beliefs about other agents (including what they believe about you), and beliefs about self. This is where computer scientists, linguists, cognitive psychologists and others all have some common ground, and interdisciplinary research can be very productive.

Learning is the ability to make systematic normative changes to mental models as a consequence of reasoning about experience; normative in the sense that such changes improve the ability to reason with and about the model in ways that maximize some value (e.g., the ability to make accurate predictions). Experience involves reasoning about both the outside "real" world and the internal reasoning process itself.

This is where your comment about "the next level" is germane. Those of us working on this topic call reasoning at multiple levels "meta-cognition" — that is, thinking about thinking. There is no theoretical reason to limit meta-cognition to any specific number of levels. Current research on meta-cognition typically considers the level (or two) "above" (abstracted from) experiential belief modeling and action planning. This is also about the right level of abstraction for ethical reasoning ("would", "could", "should", "may" and their opposites).

I've observed that most researchers assume a utilitarian ethics, which makes some sense if maximizing performance is the overall imperative. However, I count myself among those who believe that future AIs must be able to reason about moral imperatives if we expect them to behave appropriately as we live and work alongside each other. Ronald Arkin at Georgia Tech is a leader in this area and a pioneer in computational methods to help ensure ethical behavior by potentially lethal robots.
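One level of the meta-cognition described above can be made concrete with a small sketch: an object level that chooses among world models, and a meta level that chooses among the *criteria* used to make that choice. Everything here (the function names, the accuracy and parsimony criteria, the toy data) is an illustrative assumption, not a description of any actual research system.

```python
# Object level: pick a world model according to some criterion.
# Meta level: pick which criterion to trust, by seeing which one's chosen
# model predicts best on data the models were not fitted to.

def accuracy(model, data):
    """Higher is better: negative squared prediction error."""
    return -sum((model["predict"](x) - y) ** 2 for x, y in data)

def parsimony(model, data):
    """Accuracy penalized by complexity (a crude Occam's razor)."""
    return accuracy(model, data) - model["complexity"]

def choose_model(models, data, criterion):
    """Object level: adopt the model the criterion scores highest."""
    return max(models, key=lambda m: criterion(m, data))

def choose_criterion(criteria, models, train, held_out):
    """Meta level: prefer the criterion whose chosen model predicts best on unseen data."""
    return max(criteria, key=lambda c: accuracy(choose_model(models, train, c), held_out))

# Example: an "overfit" model memorizes the training data but generalizes worse.
train = [(1, 2.0), (2, 4.5)]
held_out = [(3, 6.0)]
overfit = {"predict": lambda x: 2.5 * x - 0.5, "complexity": 3}  # exact on train
linear = {"predict": lambda x: 2 * x, "complexity": 1}           # close, simpler
models = [overfit, linear]

best_criterion = choose_criterion([accuracy, parsimony], models, train, held_out)
```

Nothing stops one from adding a meta-meta level that chooses how `choose_criterion` itself should evaluate criteria — which is exactly the point that there is no theoretical limit on the number of levels. Ethical reasoning, on this picture, would live at one of these upper levels, constraining which choices are permissible rather than merely which are optimal.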