Transportation

NY Police Get Tall SUVs To Combat Texting While Driving 319

coondoggie writes "The New York State Police have a new weapon to fight the plague of drivers who insist on texting while operating their vehicles: tall SUVs. Most recently reported by the AP, NY has begun operating a fleet of 32 unmarked SUVs that let troopers more easily peer down into a car to see whether the driver is texting. 'Major Michael Kopy, commander of the state police troop patrolling the corridor between New York City and Albany, quoted a Virginia Tech study that found texting while driving increased the chance of a collision by 23 times and took eyes off the road for five seconds — long enough to travel more than the length of a football field at highway speed. Kopy worries that as teens get their driver's licenses, texting on the road will become more prevalent. "More people are coming of driving age who have had these hand-held devices for many years, and now as they start to drive, they're putting the two together, texting and driving, when they shouldn't."'"

Comment Re:Of course not. (Score 1) 227

The example you gave, if true, is a classic demonstration that IT management does not understand its business, not the other way around.

First, while you may want to approach a person directly to give them a friendly heads-up as a first step, the basic thing IT management is supposed to understand is that a user having weak passwords is not so much a risk to that user as a risk to the business. If a user ignores your friendly heads-up, or the problem is more widespread than one person, the next step is to go to the person responsible for that part of the business. You don't have to be a douche and call out the specific individual(s) in question, but you then tell that person that there is a systemic risk to their operation because X% of users (or alternatively, a few users with extensive access rights to critical systems) have weak passwords that all appear near the top of /-/@xX0r brute force password dictionaries.
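To make the systemic framing concrete, here is a minimal sketch of the kind of audit that produces that X% figure. This is purely illustrative: the wordlist file name and account data are hypothetical, and it assumes unsalted SHA-256 hashes for simplicity (real systems should use salted, slow hashes such as bcrypt, which would require per-account verification instead of a reverse lookup):

    import hashlib

    def load_common_passwords(path, limit=10000):
        # Read the first `limit` entries of a wordlist, most common first.
        with open(path, encoding="utf-8") as f:
            return [line.strip() for _, line in zip(range(limit), f)]

    def audit(accounts, wordlist_path):
        # accounts: dict of username -> hex digest of the password hash.
        common = load_common_passwords(wordlist_path)
        digests = {hashlib.sha256(p.encode()).hexdigest(): p for p in common}
        return {user: digests[h] for user, h in accounts.items() if h in digests}

    accounts = {"comptroller": hashlib.sha256(b"password").hexdigest()}
    flagged = audit(accounts, "10k-most-common.txt")
    # Report the aggregate number to the business owner, not the individuals.
    print("%d/%d accounts use top-10k passwords" % (len(flagged), len(accounts)))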

The key thing that even moderately competent managers (IT or otherwise) understand in these kinds of situations is that you have to put the decision (and the relevant information) squarely in the hands of the person accountable and responsible for the issue. In this case the issue is not that someone has a weak password that might result in someone messing up their My Documents folder; it is that weak passwords are a risk to the business. If a bank comptroller's password is 'password', that is not a problem-waiting-to-happen for the comptroller, it's a ticking time bomb for the bank.

In your example, you do not put the decision to act (or not act) in the hands of the account owner, but in the hands of the account owner's business unit head.

Security and IT issues in general tend to get short shrift in many businesses (at least in my personal experience) not so much because non-IT/non-technical managers are stupid, but because the IT managers lack even basic competence relative to the second half of their title.

Comment Re:TDD (Score 1) 156

Your parent's statement is not an oxymoron.

If every single print driver has components running in both ring 0 and userspace, but the preponderance of components (by number or 'size') of every single print driver is in userspace, then it is more precise to say "all print drivers are mostly in userspace" rather than "print drivers are mostly in userspace". The latter is semantically a superset of the former: it could mean the same thing, or it could describe a situation where some print drivers are completely implemented in ring 0 but the majority are completely or mostly implemented in userspace, such that the preponderance of the set of all print driver implementations resides in userspace.

In other words, your parent's statement is more precise about not only the aggregate population of printer drivers, but the distribution within the population. Whether that statement is actually correct or even the real intent of your parent poster's statement is another question :).

Comment Re:Saving everyone a few seconds on wiki (Score 1) 209

First, disagreements aside, thank you for taking the time to respond. I am genuinely interested in trying to understand more about the topic (for my own personal benefit if nothing else), and that is hard to do in a vacuum with no discussion. Also, I apologize if I mix up terms in a way that hampers discussion.

Mind-brain devolves to brain = magic if we must accept that the brain is special in some way that makes it immune to analysis. If it is not, then it is functionally identical to and scientifically indistinguishable from a biological implementation of the Chinese Room with a critical modification (the removal of the arbitrary requirement that all input to the CR be reduced to symbols). The Other Minds response highlights this problem well, and Searle's response, that the assumption that other people are conscious is necessarily axiomatic, is either a strong indication that his definition of consciousness is irrelevant to science (faith in the consciousness of others (or even the self) is not a falsifiable position) or begging the question. Note that I did not say that his meaning is unimportant (it could still be the most important question ultimately facing any intelligent existence), but it is simply outside the scope of science. The third option is to fall back on the broader symbol grounding issue, as you seem to be doing.

The core of the argument is that you can't get semantic content from purely syntactic content. Ultimately, it's an attack on computationalism, and a damn good one.

That statement just raises the Connectionist argument that the validity of the CR thought experiment depends upon the false premise that all computation is necessarily syntactic. Searle specifies as axiomatic to the CR argument that any program must be symbolic, and further implies that any programmable computer must therefore only be capable of symbolic manipulation; as such, the CR argument a priori limits the scope of the problem to the syntactic. The CR therefore does not simulate the overall physical reality of, e.g., the propagation of pressure waves and the subsequent audio-neural transduction (hearing Cantonese) or the EM-neural transduction (seeing Chinese characters on a page). This limits the system's interaction with the outer world to occurring through a filter that strips that interaction of all a-symbolic and sub-symbolic components. We can contrast this with the experience of a native Cantonese speaker, for whom hearing spoken Cantonese or seeing written characters on a page is a fundamentally non-symbolic interaction that only becomes symbolic after being processed by the brain. The original CR is therefore fundamentally flawed in its conception, or inconsistent in its premise that the CR can perform in a manner identical to a real intelligence while stipulating conditions that are impossible to impose on, e.g., a human intelligence. Even the CR-inside-a-mind modification later articulated by Searle exhibits this flaw, as Searle's consciousness still filters and strips all non-syntactic input before passing it to the internalized CR.

To paraphrase parts of the Connectionist argument, the view of all computation as identical to manipulation of meaningless syntactic constructs is an observer-dependent interpretation that unjustifiably excludes the physical reality of the computer system, which has structure and properties independent of our choice to view the cascade of physical interactions as manipulation of symbols. In short, a la his WordStar-on-the-wall argument, Searle is correct in asserting that a program in the abstract is a collection of symbols with no inherent meaning, but he is incorrect in asserting that a physical system responding to inputs we perceive to be a program is identical to that abstract concept, and that because one of the many real inputs into the system happens to be symbolic to us, the system's response is necessarily and only symbolic.

In any case, neural net modeling, which was the original topic of this discussion, is essentially focused on studying non-syntactic computation. If the response to this is that it is still flawed because ANNs are mostly implemented on digital (and therefore nominally symbolic) computational systems, then, even ignoring the point about the larger system above, this objection reduces to a simple implementation technology issue rather than a fundamental constraint of ANNs, as an ANN is not necessarily limited to implementation on nominally digital platforms. If implementation on digital logic is found to be fundamentally limiting for some as-yet unknown reason, then that can be addressed by moving to analog, quantum, or biochemical systems (which, taken to the extreme, could simply be an artificially constructed human brain).

In essence, while I agree that the cognitive model of the mind processing symbols to which we assign meaning is flawed, it is incorrect to assert that computation is necessarily limited to this model. In that view, the CR only addresses a very narrow class of AI based on the reality/interaction -> symbol -> meaning cognitive model, where the mind's interaction with reality is mediated by a purely symbolic interface, as opposed to the connectionist model of reality/interaction -> a-symbolic sensory input -> meaning -> symbol, where symbols are only created as a result of the assignment of meaning, and are then later inserted into the cognitive process as useful filters or abstractions that allow the mind to deal with inputs more quickly and efficiently.

This model of the cognitive process rejects the validity of Searle's CR because, in this context, the CR argument's premise begs the question by imposing the restriction that symbols must be specified before meaning is assigned (i.e., they are not symbols to the CR, but are in fact someone else's symbols), along with the more subtle inference that the system must ultimately respond to each symbol as a hypothetical native Chinese-speaking conversation partner would (or otherwise immediately pass the Turing test as applied by a Chinese speaker) in order to qualify as being conscious. To do so with a foreign set of symbols that have not been internalized by the consciousness means it must either fail the Turing test or simply get inside (or alternatively internalize) the Chinese Room and process symbols via complex lookup. As passing the Turing test is a premise of the CR, we are left with the latter scenario, whereby even an assumed consciousness, being forced to work with symbols created by someone else to which it has not developed or assigned any meaning, must simply respond to inputs mechanically based on the rules it has been supplied, a premise of the thought experiment.

In other words, even if a consciousness does exist, it cannot simultaneously understand Chinese and satisfy the other premises due to the conditions imposed by the thought experiment, so the possibilities left open by the CR are a consciousness that does not understand Chinese, or a lack of consciousness and simple mechanical symbol processing. The conclusion of the CR argument is therefore a recursive and inevitable outcome of the combination of the reality -> symbol -> meaning cognitive model and the conditions of the experiment.
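For concreteness, the "complex lookup" scenario reduces to something like the following toy sketch (the rule pairs are hypothetical stand-ins for Searle's arbitrarily large rulebook). The point is that nothing in the mechanism ever touches meaning; it only matches and copies shapes:

    # Toy Chinese Room: match input symbols against rules, emit output symbols.
    RULEBOOK = {
        "你好嗎": "我很好，謝謝",
        "你叫什麼名字": "我叫小明",
    }

    def chinese_room(symbols):
        # Pure symbol manipulation: the operator (or CPU) executing this
        # needs no understanding of Chinese, only shape comparison.
        return RULEBOOK.get(symbols, "請再說一次")  # default: "please say it again"

    print(chinese_room("你好嗎"))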

Comment Re:Saving everyone a few seconds on wiki (Score 1) 209

In the case of the Chinese room, it does indeed show that purely computational approaches can not be sufficient. To proceed anyway, operating under the irrational belief that the problem will vanish by magic is delusional.

No, you have it exactly backwards. The Chinese Room argument does not show what you think at all, though that is a common misinterpretation.

The Chinese Room argument is, at its core, an appeal to consciousness/intelligence as a construct that is special and cannot be considered a simple emergent effect of the physical system of the brain. To use your terminology, it is the argument that intelligence/consciousness is magical, or that at least some intelligences/consciousnesses are somehow special.

To explain the above I will try to distill some of the more directly relevant pieces of the enormous philosophical discussion that has taken place in the 30+ years since Searle first articulated his argument (apologies in advance; it took more words than I first thought).

If intelligence is an emergent effect of the physical processes of the brain, then those processes can (for the same value of "can" used by Searle's argument with respect to an arbitrarily complex computer system) be mapped out on paper in terms of the physical interactions at an arbitrarily low level (theoretically down to probability functions describing the interactions of subatomic particles if necessary). If that mapped brain understood Cantonese, and the map describing the sequence and rules governing the interactions is written in English, then you can stuff the whole thing in a box with Searle and you have a Chinese Room derived from a 'real' intelligence as opposed to a program.

Ok, but that is just a facsimile or copy of the mind, and not the real thing! That's true, but if the intelligence is a property of the physical system of the brain, which we can map, then the mind is merely a Chinese Room actuated by the physical processes (as opposed to Searle) governing the interactions of its constituent particles.

Well, the physical brain does not require a conscious actuator, which distinguishes it from the Chinese Room. True, but neither did the original Chinese Room. If its instructions were formulated such that Searle did not need to understand them, and he had only to manipulate symbols (a fundamental premise of his original argument), then it could just as easily have been executed by a computer and thus been self-actuated as well.

Ah, then either the physical brain cannot be mapped, or 'true' intelligence is not simply an emergent physical effect and is instead a metaphysical epiphenomenon of the physical brain. The first we can dismiss as prima facie false unless there is some magic that makes the physical substance of the brain unique in this regard relative to all other matter in the universe. If we take a different tack and instead try to argue that perhaps the brain is not unique, but that all matter cannot truly be mapped in that manner (after all, as science has not yet unraveled the mysteries of the ever-elusive grand unified theory, such an undertaking may not simply be infeasible, but fundamentally impossible for reasons that we do not yet understand), then that reasoning also destroys the Chinese Room argument, because an arbitrarily complex computer system cannot necessarily be mapped either, making the original Chinese Room impossible to construct.

That leaves us with 'true' intelligence, if it exists, being some kind of metaphysical property, or brains being magical. No one believes in magical brains, so we have now taken this particular line of discussion squarely out of the intersection of philosophy and science and shifted it into the region of philosophy and religion, where it stands today as a (still raging) debate regarding the metaphysical mind (and whether such a thing exists).

In other words, Searle is ultimately (and explicitly) arguing that efforts to create synthetic intelligence are futile not because any specific scientific approach (such as computational modeling) is flawed and will not work, but because scientific methods are fundamentally incapable of creating intelligence: 'real' intelligence possesses a secret sauce that has yet to be precisely defined (e.g., "original intentionality") that an artificial creation cannot possess (because, e.g., any intentionality or understanding will simply be a derivation of its creator's, and therefore not original).

If we take a step back, we can also note that Searle is specifically not arguing that functionalist approaches such as computational simulation cannot produce something that is functionally identical to true intelligence, such that it is impossible in the absolute sense for an external observer to distinguish the two; he specifically concedes that they could, and it is a premise of the Chinese Room. His argument is that even if they do produce such a thing, it will still not be "true" intelligence in the metaphysical sense of intentionality or consciousness. This means that even if Searle is right, and there is a real metaphysical construct or property that is both undetectable and unreproducible by science, then from a scientific and utility perspective the distinction, even if real, does not matter. Turning that around to more directly address your assertion: Searle himself does not argue that the Chinese Room implies that computational models of the mind cannot work. He argues that it doesn't matter whether they work, because that is beside the point. This is one of the reasons why, even though Searle originally coined the term, the current definition of "strong AI" used in neuroscience and AI research explicitly means functional equivalence or superiority when compared with human intelligence rather than philosophical equivalence.

Comment Re:Saving everyone a few seconds on wiki (Score 1) 209

Observing something and then trying to recreate it with rigor and controlled conditions is called science. It's pretty nifty, you should look into it.

That was implied in the point I was making. My parent poster's assumption, if true, would make experimental science either impossible (you would not be able to 'experiment' without already completely understanding the subject of the experimentation--hence it is not experimentation) or unnecessary (because you must already understand the topic completely in order to perform an experiment), depending on your semantic inclination.

Comment Re:Saving everyone a few seconds on wiki (Score 1) 209

To add: I mention "long-standing problems", which suggests that the effort in question is ultimately futile. These problems are well-established and fundamental to the AGI problem the summary implies that we're on the brink of solving. To ignore them and expect that those problems will just vanish if we just build a better bamboo airplane is nothing short of magical thinking.

The point that it is possible to solve a problem without necessarily fully understanding the problem or the mechanism of the solution is equally applicable to intermediate problems that we believe must be overcome in order to develop AGI. To use your analogy, building physical replicas of airplanes will not, in and of themselves, result in shipments of cargo directly, but they could (with persistent iterative effort) lead to better understanding of aeronautical engineering (e.g., repeatedly pushing modified bamboo replicas off a cliff will eventually demonstrate that shape, surface area/weight ratio, distribution of weight, etc. affect performance), ultimately resulting in working aircraft, and probably resulting in economic development as a side-effect.

Children go through that process building their first paper airplane. First they copy someone or follow instructions by rote, then they fiddle with folding patterns, shapes, materials, etc. to try to make improvements or changes to flight characteristics. The more inquisitive ones continue the process to find that certain shapes and patterns tend to have certain results, and the more persistent and sufficiently interested ones continue study and experimentation to try to understand the underlying causes of those results. Very few (if any) first study aeronautical engineering and physics to understand the underlying physical forces and interactions (to say nothing of the math required to understand and solve systems of PDEs, etc.) before constructing their first paper airplane.

With respect to the one specific issue you've identified, a solution for or complete understanding of the issues touched on by the symbol grounding problem is a requirement only for the development of a more rigorous definition of and ability to then test for and recognize intelligence, not the creation of intelligence.

To believe that the approach in question will ultimately result in "creating synthetic intelligence" is not merely belief without evidence, it's belief in face of evidence to the contrary!

We don't in fact have evidence to the contrary. What we do have are open philosophical questions/problems regarding the nature and definition of intelligence that may prevent us from being able to formally prove that a program is in fact intelligent according to an (as-yet non-existent) rigorous definition. Searle's Chinese Room argument, for example, simply articulated a counterargument to the computational/functional definition of intelligence, but did not provide any alternative definition. If you also accept his refutation of the systems reply to the Chinese Room argument, then we cannot, given the current state of human knowledge and understanding, even prove that other humans are in fact intelligent. If we accept that other humans are intelligent, then it follows that the existence of intelligence is independent of our ability to understand and define intelligence.

To return to your earlier reference, even if we assume that the symbol grounding problem must be overcome in order for intelligence to exist, it is nevertheless true that it could be overcome while both the creators of the intelligence and the resulting intelligence do not comprehend (or even have the ability to precisely identify) the mechanism of the solution. Inability to solve the symbol grounding problem or, e.g., address Searle's Chinese Room argument to Searle's satisfaction, therefore, is not prima facie evidence that neural net modeling as an approach cannot eventually create intelligence.

Comment Re:Saving everyone a few seconds on wiki (Score 2) 209

That is the meaning that I usually assign to the term cargo cult as well, but I was using it in my post in the same manner as my original parent poster.

If we assume that I misinterpreted my parent poster's meaning, and they were in fact using the definition you provided, then the implication is that if neural net modeling is a cargo cult activity we must be imitating the actions of someone else who does understand the fundamental nature and mechanism of intelligence. Unless my parent poster is insinuating the presence of an intelligent power whose deliberate actions we are trying to imitate in a misguided effort to produce the same results, the only reasonable interpretation of the term's use in this thread is the broader concept of simply trying to reproduce phenomena that we do not understand by replicating the circumstances we associate with those phenomena.

Comment Re:Saving everyone a few seconds on wiki (Score 3, Insightful) 209

Your assertion that a 'cargo cult' approach cannot achieve a given effect contains the assumption that it is necessary to first develop an accurate understanding of why and how a potential mechanism works before it can be implemented.

All crop development prior to Mendel or Darwin, for example, was essentially cargo cult directed evolution--and yet it resulted in incredible development (e.g., corn from teosinte).

More generally, achievement of an effect isn't just possible without understanding, it's possible without intent. Predators culling prey populations such that frequency of undesirable alleles within the prey population is minimized is an entirely unintentional effect. "Cargo Cult" solutions are simply scenarios where you have intent but lack understanding (which again does not mean that the solution will necessarily be ineffective).
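As a minimal illustration (with entirely made-up parameters), a few lines of simulation show the frequency of a hypothetical "slow" allele collapsing under culling pressure even though no agent in the system understands, or even intends, the outcome:

    import random

    random.seed(1)
    # True = carries the hypothetical "slow" allele; start at 50% frequency.
    population = [random.random() < 0.5 for _ in range(1000)]

    for generation in range(20):
        # Predators catch slow individuals more often; no genetics involved.
        survivors = [slow for slow in population
                     if random.random() > (0.4 if slow else 0.2)]
        # Survivors repopulate back to a constant population size.
        population = [random.choice(survivors) for _ in range(1000)]

    print("slow-allele frequency after 20 generations: %.2f"
          % (sum(population) / 1000.0))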

With respect to the neuron modeling approach, it actually builds on lots of earlier successful work in computer science with respect to emergent properties of systems of finite automata. Essentially the approach follows the sequence:

  1) Observe a complex phenomenon that you do not understand and further do not understand how to analyze in its entirety.
  2) Identify discrete components of the phenomenon that you can analyze (e.g., neurons).
  3) Model those components as finite automata and tweak the number of components in the model, as well as the configuration of the interaction topology and the properties of individual automata, until you recreate the original phenomenon (or alternatively other unexpected but interesting phenomena) (e.g., play with simulated neural nets).
  4) Use the resulting working model to help you identify and analyze attributes of the system and their effect on the emergent property of interest, which leads to further understanding of the phenomenon (this has already happened in fields like image recognition).

Note that in the above approach you not only recreate something before you understand it or how it works--you do so specifically to gain a better understanding of how it works. This is certainly a realistic scenario of how strong AI could be developed via "cargo cult" methodology. It is entirely possible that creating synthetic intelligence will be a step toward understanding intelligence, as opposed to an outcome of that understanding.
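The sequence above is easy to demonstrate in miniature. The following sketch (illustrative only; numpy assumed) models neurons as simple sigmoid automata, wires a handful together, and blindly tweaks parameters via random hill-climbing until the net recreates a target phenomenon--XOR, which no single neuron can produce--at which point the working model itself becomes an object of study:

    import numpy as np

    # Step 2: the discrete component -- a simple sigmoid "neuron".
    def neuron(w, b, x):
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)  # XOR: the emergent phenomenon

    def forward(params, x):
        W1, b1, W2, b2 = params
        hidden = neuron(W1, b1, x)          # a layer of interacting automata
        return neuron(W2, b2, hidden)[:, 0]

    def loss(params):
        return np.mean((forward(params, X) - y) ** 2)

    rng = np.random.default_rng(0)
    params = [rng.normal(size=(2, 2)), rng.normal(size=2),
              rng.normal(size=(2, 1)), rng.normal(size=1)]

    # Step 3: blind "tweaking" -- random hill-climbing, no gradient theory needed.
    best = loss(params)
    for _ in range(20000):
        trial = [p + rng.normal(scale=0.1, size=p.shape) for p in params]
        if loss(trial) < best:
            params, best = trial, loss(trial)

    # Step 4: inspect the working model to study how the behavior emerged.
    print(np.round(forward(params, X)), "target:", y)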

Comment Re:What year is this? (Score 3, Insightful) 559

I don't mean this as a personal attack, but what you've basically done is run down a list of fallacies that are covered in econ 100-level classes.

RE: 1) The 'want' of things beyond what you can afford is a central driver of economic development. Technology--including automation--constantly lowers the bar with respect to affordability of goods and services. To paraphrase the claim of many popular software frameworks, economic development and technological progress make expensive things affordable and impossible things expensive. Also, just because you are not willing to work for luxury goods and services does not mean that no one is.

RE: 2) Ignoring the incorrect usage of 'deflation', this is a classic expression of the fallacy of overproduction. Cheaper housing is not a bad thing at all, and cheaper housing did not cause any financial crisis. If housing became cheaper in general, people would either spend more money on other things or upgrade to larger/more elaborate houses (or some combination of the two). The root of the recent housing market collapse was simply that people overextended themselves based on terrible judgment. If your ability to keep making your mortgage payments depends on the value of your house increasing indefinitely so that you can leverage that increased equity via re-mortgaging, you will go bankrupt sooner or later. Not only that, but you are likely to do so at the same time as everyone else, because your (or your neighbor's) default puts downward pressure on the price of housing in your area, which increases the risk of others defaulting, which puts further downward pressure on the price of housing, and so on ad infinitum. Do this on a scale amounting to meaningful percentages of US GDP, due to federal legislation ostensibly intended to "make housing affordable", and you have a massive blowup on your hands that is completely unrelated to the real cost (as opposed to price) of housing.

RE: 3) There is no causal relationship whereby technological progress in general (or increased use of automation in particular) drives growth in 'public services' as a percentage of GDP. There is likewise no rationale for technology forcing nationalization of greater percentages of the economy as a whole. If a car costs less due to more efficient production technology, people will buy more cars, or other goods and services with the money they would have otherwise spent on the car. People who choose to save the money instead provide capital (via the banking system) that funds entrepreneurs who develop new businesses, or new ventures within existing businesses, to adapt to and take advantage of the new economic environment.

RE: 4) Setting aside the unfounded claims that 'new industries require less and less work' and that 'previous changes still required loads of people to operate', the fundamental flaw in this line of reasoning is that it ignores the downstream effects of the efficiencies gained, which inevitably result in new economic developments. Cheaper and more reliable automobiles, for example, have enabled incredible growth in the US economy by increasing the mobility of labor and expertise. Whereas you previously had to find employment within walking/horse-riding distance of your house (or live in on-site housing) and employers had essentially monopoly access to your labor, automobiles allowed people to force every employer within driving distance of their house to compete for their labor. This also fueled growth in the housing market, as suburbs became feasible when the maximum distance between work, house, and the other facilities relied upon by the average household increased. The transportation industry--first for bulk goods and services and later small-scale bespoke delivery services like UPS and FedEx--was also made possible by further developments in the auto industry. Making autos even cheaper will open access to broader markets (e.g., in developing nations) and allow access to higher-end features in mid-level models.

Taking a step back: unless every 'want' of every individual is completely satisfied, there will always be more that can be done. As long as there is more that can be done than people to do it, there will be jobs available. Whether the set of jobs available is equal to the set of jobs desired by currently unemployed people is an altogether different question. Whether everyone else should essentially pay to make them equal (via taxes, protective tariffs, subsidies, price controls, etc.) is also a different question.

Comment Re:Scientific progress (Score 1) 586

So your assertion is that we should require "100+ years of study" before deploying any new technology? If not any new technology, how do you determine a priori which technologies should be subjected to that level of scrutiny?

By that logic we should not be using televisions, digital electronics, new cultivars developed after the 1920s (and most developed prior as well, since even though those are over 100 years old, very few specific cultivars had been in production for 100+ years), any antibiotics, the polio vaccine, any of the procedures and drugs associated with modern medicine, etc.

Comment Re:Scientific progress (Score 5, Insightful) 586

Scientists don't make conclusions based on lack of evidence. We need proof they're not harmful to us and the environment. We don't have that proof.

You cannot prove a negative. What you can prove (and what already has been proven) is that all of the GMO crops are safer than peanuts, penicillin, organic bean sprouts and spinach, and cell phones (GMOs: zero deaths over hundreds of millions of exposures across nearly 20 years; the others: many thousands of deaths between them due to anaphylactic shock, E. coli, driving-while-texting, etc.).

I am not arguing that there is no risk with GMO technology. What I am saying, however, is that the rational approach is to formulate scientific hypotheses regarding risks such that those hypotheses can be tested, via either examining existing data or conducting experiments, to quantify the risk so that it is possible to determine whether those risks are acceptable based on our best current knowledge.

The "poison" genes you're referencing (e.g., Bt11) are currently based on proteins that are produced by soil bacteria, bacteria naturally found in milk, and other things that humans have been consuming for centuries and that are in many cases applied to organic crops as well. They have also been tested according to both USDA and EPA regulatory standards (in the US, pest control GM traits are regulated by both agencies whereas herbicide tolerance traits are only regulated by the USDA). At this point it is not a question of whether or not they have been tested, but whether they have been tested to a sufficient degree. My personal opinion is that given that they have been tested (both scientifically in controlled experiments and by default through market exposure) far more thoroughly than anything else I consume short of pharmaceuticals, I am OK with them. Beyond my personal comfort level I would challenge anyone to come up with a scientifically defendable justification to require greater testing that would not logically require much greater testing of non-GM foods as well.

The problem is not that there is a lack of "clear thinking" regarding GMOs. The problem is that the ratio of rational thought to irrational thought is unfortunately very small, and the ratio of rational communication vs. irrational communication regarding the issue is even worse. These difficulties are also compounded by the unfortunate fact that many people conflate GMOs with IP laws, religious beliefs, personal philosophy, etc.

Comment Re:Scientific progress (Score 5, Insightful) 586

It is the wholesale rejection of an entire body of science and technology on non-scientific bases that will affect both Europe's ability to contribute to scientific progress in those areas and its ability to produce its own food.

In other words you have confused the direction of the cause and effect relationship between scientific progress and food production in this case.

EU

Europe Needs Genetically Engineered Crops, Scientists Say 586

First time accepted submitter Dorianny writes in with a story about the ongoing battle over genetically engineered crops in Europe. "The European Union cannot meet its goals in agricultural policy without embracing genetically engineered crops (GMOs). That's the conclusion of scientists who write in Trends in Plant Science, a Cell Press publication, based on case studies showing that the EU is undermining its own competitiveness in the agricultural sector to its own detriment and that of its humanitarian activities in the developing world. 'Failing such a change, ultimately the EU will become almost entirely dependent on the outside world for food and feed and scientific progress, ironically because the outside world has embraced the technology which is so unpopular in Europe, realizing this is the only way to achieve sustainable agriculture,' said Paul Christou of the University of Lleida-Agrotecnio Center and Institució Catalana de Recerca i Estudis Avançats in Spain."
