
Submission + - A blast from the past! Disney/Lucasarts release X-Wing and Tie Fighter!

An anonymous reader writes: Time for a trip down memory lane for you old-school gamers: GOG has posted digital downloads at $10 a pop for the updated versions of X-Wing and TIE Fighter for Windows, with hints of more to come from the vaults at LucasArts.

Submission + - Indie Developer Crashes PAX East With an Oculus Rift and Draws Huge Crowds (roadtovr.com)

An anonymous reader writes: [James thought] he'd try his luck at some guerilla PR tactics and crash PAX East with his trusty Oculus Rift and demo rig in tow. After many laps of the PAX East Show floor, James was about ready to give up and go home, but on his final round he spotted a demo station with one lone occupant and his Oculus Rift. This was his chance.

Comment Re:It's a Great Learning Experience (Score 1) 226

I suspect there's a bit of a definition issue at play here (with all fault apparently being on my end, given some of the other comments in this discussion). In my mind, DevOps roles are such only if the "Dev" and "Ops" parts are connected--i.e., you manage operations for the software that you've developed. I agree that there are rapidly diminishing or negative returns otherwise. E.g., if you write some Node.js web services on Monday and troubleshoot MS Exchange/Active Directory integration issues on Tuesday, there isn't much benefit. In that case, however, I'd argue that you don't have a DevOps role; you just have two different, unrelated roles (which, as I stated, is apparently a definition issue on my part).

The only part that I would argue with is...

The difference is between developers knowing the operations side and being the operations side.

You cannot, in my opinion, "know" the operations side if you have never actually been the operations side. The real question is whether knowing the operations side is worth the effort of being the operations side (at least for a while). In my experience, the answer is unequivocally "yes" (but again, with the caveat that you are the operations side only for the software that you develop, and not for, e.g., rolling out the latest Windows service pack to all users at your location).

I should also clarify that my experience has only been with internal development. The demographic differences with respect to external-facing applications (i.e., user/developer ratios on the order of possibly millions to 1 vs. 10s or 100s to 1), among other things, would necessarily limit the ability of developers to participate in operations.

As you've noted, having to run operations to the exclusion of all development activity would bore you to tears. What that constraint has done is force me to consider--to a degree and with a precision that would never have occurred to me previously--how the design and architecture of a proposed solution impact deployment and operations. Because I did not want to spend all my time supporting the system I mentioned in my previous post, I designed it such that it required all of about 30 minutes every other month to administer, and was easy as hell to troubleshoot in production. This meant a much more complex design, and more difficulty in implementation, but it saved me so much time on net balance that I could still spend the vast majority of my time doing more interesting stuff.

If deploying and administering the software that you've developed becomes your full-time occupation to the exclusion of all other activity, then either:

  1. You do not actually understand deployment and administration in the relevant environment(s), and are therefore horribly inefficient at it (and would benefit greatly from learning).
  2. Your design made it very difficult/time-consuming to deploy and/or administer. This is almost an inevitable outcome if the above is true, but can also occur if the developer has a "not my job/problem" attitude when it comes to deployment and administration, or can be a straight-up deliberate trade-off based on available resources.
  3. Both of the above. Or...
  4. You are working at a scale or in a domain for which deployment and administration is an inherently difficult problem independent of solution design (though paradoxically in this case it is usually even more important for the developer to understand Ops, because while there may be little they can do to make the hard problem easier, there are lots of ways they can inadvertently make the hard problem impossible).

Comment It's a Great Learning Experience (Score 4, Interesting) 226

I essentially have this kind of role within my organization. I design, develop, deploy, and support small to mid-tier systems (e.g., the planning system for a $XXXmio/yr global department, with 300+ direct users) while being one of my own customers, as I am actually a business planner (by role) as opposed to a developer. I develop systems as a way to do my "day job" much more effectively. A typical tech stack would be Excel UIs, a PostgreSQL data store, and whatever else I need in the middle (e.g., Node.js, Tomcat, Redis, whatever).
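To give a sense of how thin that "middle" usually is, here is a minimal sketch of the kind of Node.js service I mean (the plan_lines table and /plan.csv endpoint are hypothetical stand-ins for the real schema): Excel just refreshes a web query against it, and it reads straight from PostgreSQL via the node-postgres ('pg') client.

    // Minimal sketch only: expose a hypothetical plan_lines table as CSV so
    // that Excel can pull it with a simple web query / data refresh.
    const http = require('http');
    const { Pool } = require('pg');

    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    http.createServer(async (req, res) => {
      if (req.url === '/plan.csv') {
        const { rows } = await pool.query(
          'SELECT cost_center, period, amount FROM plan_lines ORDER BY cost_center, period');
        const csv = ['cost_center,period,amount']
          .concat(rows.map(r => [r.cost_center, r.period, r.amount].join(',')))
          .join('\n');
        res.writeHead(200, { 'Content-Type': 'text/csv' });
        res.end(csv);
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);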

What I've found is that, in general, doing the right thing the "right way" is not worth the cost compared to doing the right thing the "wrong way". By definition, in either scenario, the right thing is getting done. What most pure developers utterly fail to understand is that in trying to do the former, there is an overwhelming tendency to do the wrong thing the right way instead.

This is because, as Fred Brooks pointed out long ago--and as the "lean startup" movement is re-discovering today--for any non-trivial novel problem you cannot know in advance what the "right thing" is until you've actually tried to implement a solution. Brooks stated this understanding as the need to throw away the first try. The lean startup movement is essentially defined by a corollary: you have to figure out how to try cheaply enough that you can afford to throw the attempt away and try again (and again, and again if necessary), progressively elaborating a robust definition of what the "right thing" looks like by using those iterations as experiments to test hypotheses about what the "right thing" is. Doing things the "right way" usually costs so much in time, if not capital, that you simply can't afford to throw away the first try and start over, or you cannot complete enough iterations to learn enough about the problem.

Now, I'm not saying that you should be totally ignorant of software engineering best practices, design patterns, etc. What I am saying is that there is a limit to how effective you can be in reality if you live purely within the development silo. Having a "DevOps" role (granted, self-imposed in my case) has been one of the best things that's ever happened to me as far as making me a better developer, right up there with the standard oldies like writing your own recursive descent parser and compiler.

In short, it is commonly accepted wisdom among programmers (for good reason!) that you are more effective if you actually understand the technology stack down to the bare metal, or as close to it as you can manage (even if only in abstract-but-helpfully-illustrative examples like Knuth's MMIX VM), and that this understanding can only be gained via practice. It should be obvious that the same is true in the other conceptual direction, through deployment and end use.

Google

MIT Researchers Bring JavaScript To Google Glass 70

colinneagle (2544914) writes "Earlier this week, Brandyn White, a PhD candidate at the University of Maryland, and Scott Greenberg, a PhD candidate at MIT, led a workshop at the MIT Media Lab to showcase an open source project called WearScript, a JavaScript environment that runs on Google Glass. White demonstrated how Glass's UI extends beyond its touchpad, winks, and head movements by adding a homemade eye tracker to Glass as an input device. The camera and controller were dissected from a $25 PC video camera and attached to the Glass frame with a 3D-printed mount. A few modifications were made, such as replacing the obtrusively bright LEDs with infrared LEDs, and a cable was added with a little soldering. The whole process takes about 15 minutes for someone with component soldering skills. With this eye tracker and a few lines of WearScript, the researchers demonstrated a new interface by playing Super Mario on Google Glass with just eye movements."

Comment Re:How long would that last... (Score 2) 353

You can basically interpret it as "I acknowledge your communication". It is meant to convey that they are, in fact, paying attention and wish to communicate that fact, but the acknowledgement is specifically devoid of any communication regarding the interpretation or judgment of the content communicated.

Comment Re:Yeah, like the present school system is working (Score 2) 715

When it comes time for admission and staying in, a student in the top 10% of a US high school just does not have the ability to compete with his/her counterparts who come from China and India [1]. It is like someone wheelchair-bound competing in a 100-yard dash against 10 Usain Bolt clones for a single spot.

What you are seeing is the effect of the top 10% of a country with 300M citizens competing against the top 0.01% of countries with 1B+ citizens each.

Given that educational opportunity in other countries is also subject to extreme selectivity, those 0.01% have also had the benefit of superior education, not just through the system provided, but also due to the peer environment. A genius in a school full of geniuses must learn to work hard to succeed, as opposed to being able to coast on the momentum of inherent advantage. The benefit of developing a good work ethic manifests itself in college, where even a genius has to apply themselves consistently (if not strenuously) in order to master the material being presented.

Comment Re:Why couldn't he say this 10 years ago? (Score 1) 341

Maybe if someone actually spoke the truth while in office the problems plaguing our government would have a better chance of being addressed.

No, but this is probably difficult to understand until you've held or are qualified to hold a position of significant accountability and independent authority. At the level of executive leadership, you have to be cognizant of the consequences--especially with respect to your responsibilities.

Essentially, the problem in this case boils down to the fact that speaking candidly as he is doing now would have destroyed his ability to be an effective Secretary of Defense, as the little cooperation he was getting from Congress and the White House would have evaporated in an instant. It therefore would have been an irresponsible thing to do while he held that position.

The logic goes something like this:

  • If I am going to prosecute a PR battle with powerful but corrupt/petty/incompetent politicians and bureaucrats, it will do no good unless I win.
  • In order to win a PR battle with professional popularity contest winners and influence brokers, I will need to devote all of my energy to the effort. Failure is still probable.
  • Even if I win, is the resulting good still worth the opportunity cost related to the list of tasks T that I have allowed to lapse in the meantime? (Hint: the answer is 'no' when T == 'stuff the US Secretary of Defense is supposed to be doing'.)

The responsible thing to do is to pour your energy into fulfilling your responsibilities. If you do not feel that you can fulfill them adequately, resign (after some due diligence to ensure sufficient continuity in the organization). Wait a while before commenting on your past position and its challenges, as doing so immediately upon resignation is likely to poison the well for your successor. These are the actions of someone focused on doing the best thing to fulfill the responsibilities of the role--up to and including self-removal therefrom if the logical conclusion is that they are not able to do the job effectively.

If you feel that the role of critic is more important than the role you were given, you should not have accepted it in the first place and should instead have applied to become a journalist/commentator.

Transportation

NY Police Get Tall SUVs To Combat Texting While Driving 319

coondoggie writes "The New York State Police have a new weapon to fight the plague of drivers that insist on texting while operating their vehicle: tall SUVs. Most recently reported by the AP, NY has begun operating a fleet of 32 unmarked SUVs that let troopers more easily peer down into a car to see if the driver is texting or not. 'Major Michael Kopy, commander of the state police troop patrolling the corridor between New York City and Albany, quoted a Virginia Tech study that found texting while driving increased the chance of a collision by 23 times and took eyes off the road for five seconds — more than the length of a football field at highway speed. Kopy worries that as teens get their driver's licenses, texting on the road will become more prevalent. "More people are coming of driving age who have had these hand-held devices for many years, and now as they start to drive, they're putting the two together, texting and driving, when they shouldn't."'"
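For reference, the quoted football-field claim holds up with simple arithmetic (treating "highway speed" as 65 mph, which is an assumption rather than a figure from the study):

    // Distance covered during ~5 seconds of looking at a phone instead of the road.
    const mph = 65;                           // assumed highway speed
    const feetPerSecond = mph * 5280 / 3600;  // ~95.3 ft/s
    const distance = feetPerSecond * 5;       // ~477 ft
    console.log(distance.toFixed(0) + ' ft'); // vs. 300 ft for a football field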

Comment Re:Of course not. (Score 1) 227

The example you gave, if true, is a classic demonstration that IT management does not understand their business, not the other way around.

First, while you may want to approach a person directly to give them a friendly heads-up as a first step, the basic thing IT management is supposed to understand is that a user having weak passwords is not so much a risk to that user as a risk to the business. If a user ignores your friendly heads-up, or the problem is more widespread than one person, the next step is to go to the person responsible for that part of the business. Now, you don't have to be a douche and call out the specific individual(s) in question, but you then tell that person that there is a systemic risk to their operation because X% of users (or alternatively, a few users with extensive access rights to critical systems) have weak passwords that all appear near the top of /-/@xX0r brute force password dictionaries.
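As an aside, coming up with that X% figure is itself trivial. A minimal sketch (the file names are hypothetical, and unsalted SHA-256 is assumed purely for illustration; a real audit would target whatever hash scheme is actually in use):

    // Count accounts whose password hash matches an entry near the top of a
    // common-password list. File names and hash scheme are illustrative only.
    const crypto = require('crypto');
    const fs = require('fs');

    const sha256 = s => crypto.createHash('sha256').update(s).digest('hex');

    const topPasswords = fs.readFileSync('10k-most-common.txt', 'utf8')
      .trim().split('\n').slice(0, 1000);               // "near the top"
    const weakHashes = new Set(topPasswords.map(sha256));

    const accountHashes = fs.readFileSync('account-hashes.txt', 'utf8')
      .trim().split('\n');
    const weakCount = accountHashes.filter(h => weakHashes.has(h)).length;

    console.log(weakCount + ' of ' + accountHashes.length + ' accounts use a top-1000 password');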

The key thing that even moderately competent managers (IT or otherwise) understand in these kinds of situations is that you have to put the decision (and relevant information) squarely in the hands of the person accountable and responsible for the issue. In this case the issue is not that someone has a weak password that might result in someone messing up their My Documents folder; it is that weak passwords are a risk to the business. If a bank comptroller's password is 'password', that is not a problem-waiting-to-happen for the comptroller, it's a ticking time bomb for the bank.

In your example, you do not put the decision to act (or not act) in the hands of the account owner, but in the hands of the account owner's business unit head.

Security and IT issues in general tend to get short shrift in many businesses (at least in my personal experience) not so much because non-IT/non-technical managers are stupid, but because the IT managers lack even basic competence relative to the second half of their title.

Comment Re:TDD (Score 1) 156

Your parent's statement is not an oxymoron.

If every single print driver has components running in both ring 0 and userspace, but the preponderance of components (by number or 'size') of every single print driver is in userspace, then it is more precise to say "all print drivers are mostly in userspace" rather than "printer drivers are mostly in userspace". The latter is semantically a superset of the former: it could mean the same thing, but it could also describe a situation where some printer drivers are implemented completely in ring 0 while the majority are completely or mostly implemented in userspace, such that the preponderance of the set of all printer driver implementations resides in userspace.

In other words, your parent's statement is more precise not only about the aggregate population of printer drivers, but also about the distribution within that population. Whether that statement is actually correct, or even the real intent of your parent poster's post, is another question :).
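To make the distinction concrete, here is a toy illustration (the driver names and component sizes are made up):

    // Reading 1: every driver is individually mostly in userspace.
    // Reading 2: the aggregate of all driver components is mostly in userspace.
    const drivers = [
      { name: 'printerA', ring0: 10, userspace: 90 },
      { name: 'printerB', ring0: 20, userspace: 80 },
    ];

    const everyDriverMostlyUserspace = drivers.every(d => d.userspace > d.ring0);

    const totals = drivers.reduce(
      (acc, d) => ({ ring0: acc.ring0 + d.ring0, userspace: acc.userspace + d.userspace }),
      { ring0: 0, userspace: 0 });
    const aggregateMostlyUserspace = totals.userspace > totals.ring0;

    // Reading 1 implies Reading 2, but not vice versa: one all-ring-0 driver
    // among many pure-userspace drivers satisfies Reading 2 but not Reading 1.
    console.log(everyDriverMostlyUserspace, aggregateMostlyUserspace);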

Comment Re:Saving everyone a few seconds on wiki (Score 1) 209

First, disagreements aside, thank you for taking the time to respond. I am genuinely interested in trying to understand more about the topic (for my own personal benefit if nothing else) and that is harder to do in a vacuum if there is no discussion. Also, I do apologize if I mix up terms in a way that hampers discussion.

Mind-brain devolves to brain = magic if we must accept that the brain is special in some way that makes it immune to analysis. If it is not, then it is functionally identical to and scientifically indistinguishable from a biological implementation of the Chinese Room with a critical modification (the removal of the arbitrary requirement that all input to the CR be reduced to symbols). The Other Minds response highlights this problem well, and Searle's response that the assumption that other people are conscious is necessarily axiomatic is either a strong indication that his definition of consciousness is irrelevant to science (faith in the consciousness of others (or even the self) is not a falsifiable position) or begging the question. Note that I did not say that his meaning is unimportant (it could still be the most important question ultimately facing any intelligent existence), but it is simply outside of the scope of science. The third option is to fall back on the broader symbol grounding issue, as you seem to be doing.

The core of the argument is that you can't get semantic content from purely syntactic content. Ultimately, it's an attack on computationalism, and a damn good one.

That statement just raises the Connectionist argument that the validity of the CR thought experiment depends upon the false premise that all computation is necessarily syntactic. Searle specifies as axiomatic to the CR argument that any program must be symbolic, and further implies that any programmable computer must therefore only be capable of symbolic manipulation, and as such the CR argument a priori limits the scope of the problem to the syntactic. As such, the CR does not simulate the overall physical reality of, e.g., the propagation of pressure waves and the subsequent audio-neural transduction (hearing Cantonese) or the EM-neural transduction (seeing Chinese characters on a page). This then limits the interaction of the system with the outer world as occurring through a filter stripping that interaction of all a-symbolic and sub-symbolic components. We can contrast this with the experience of a native Cantonese speaker, for whom hearing spoken Cantonese or seeing pinyin characters on a page is a fundamentally non-symbolic interaction that only becomes symbolic after being processed by the brain. The original CR is therefore fundamentally flawed in its conception or inconsistent in its premise that the CR can perform in a manner identical to a real intelligence while stipulating conditions that are impossible to impose on, e.g., a human intelligence. Even the CR-inside-a-mind modification later articulated by Searle exhibits this flaw as Searle’s consciousness still filters and strips all non-syntactic input before passing it to the internalized CR.

To paraphrase parts of the Connectionist argument, the view of all computation as identical to manipulation of meaningless syntactic constructs is an observer-dependent interpretation that unjustifiably excludes the physical reality of the computer system, which has structure and properties independent of our choice to view the cascade of physical interactions as manipulation of symbols. In short, a la his wordstar-on-the-wall argument, Searle is correct in asserting that a program in the abstract is a collection of symbols that has no inherent meaning, but incorrect in asserting that a physical system responding to inputs that we perceive to be a program is identical to that abstract concept, and that, because the one of the system's many real inputs to which we assign meaning is symbolic to us, the system's response is necessarily and only symbolic.

In any case, neural net modeling, which was the original topic of this discussion, is essentially focused on studying non-syntactic computation. If the response to this is that it is still flawed because ANNs are mostly implemented on digital (and therefore nominally symbolic) computational systems, then, even ignoring the point about the larger system above, this objection reduces to a simple implementation technology issue as opposed to a fundamental constraint of ANNs, as an ANN is not necessarily limited to implementation on nominally digital platforms. If implementation on digital logic is found to be fundamentally limiting for some as-yet unknown reason, then that can be addressed by moving to analog, quantum, or biochemical systems (which, taken to the extreme, could simply be an artificially constructed human brain).

In essence, while I agree that the cognitive model of the mind processing symbols to which we assign meaning is flawed, it is incorrect to assert that computation is necessarily limited to this model. In that view, the CR only addresses a very narrow class of AI based on the reality/interaction -> symbol -> meaning cognitive model, where the mind's interaction with reality is mediated by a purely symbolic interface, as opposed to the connectionist model of reality/interaction -> a-symbolic sensory input -> meaning -> symbol, where symbols are only created as a result of the assignment of meaning and are later inserted into the cognitive process as useful filters or abstractions that allow the mind to deal with inputs more quickly and efficiently.

This model of the cognitive process rejects Searle's CR as valid, because in this context the CR argument's premise begs the question by imposing the restriction that symbols must be specified before meaning is assigned (i.e., they are not symbols to the CR, but are in fact someone else's symbols), along with the more subtle inference that the system must ultimately respond to each symbol as a hypothetical native Chinese-speaking conversation partner would (or otherwise immediately pass the Turing test as applied by a Chinese speaker) in order to qualify as being conscious. Doing so with a foreign set of symbols that have not been internalized by the consciousness means it must either fail the Turing test or simply get inside (or alternatively internalize) the Chinese Room and process symbols via complex lookup. As passing the Turing test is a premise of the CR, we are left with the latter scenario, whereby even an assumed consciousness, being forced to work with symbols created by someone else to which it has not developed or assigned any meaning, must simply respond to inputs mechanically based on the rules it has been supplied as a premise of the thought experiment. In other words, even if a consciousness does exist, it cannot simultaneously understand Chinese and satisfy the other premises due to the conditions imposed by the thought experiment, so the possibilities of the CR are a consciousness that does not understand Chinese, or a lack of consciousness and simple mechanical symbol processing. The conclusion of the CR argument is therefore a recursive and inevitable outcome of the combination of the reality -> symbol -> meaning cognitive model and the conditions of the experiment.

Comment Re:Saving everyone a few seconds on wiki (Score 1) 209

In the case of the Chinese room, it does indeed show that purely computational approaches can not be sufficient. To proceed anyway, operating under the irrational belief that the problem will vanish by magic is delusional.

No, you have it exactly backwards. The Chinese Room argument does not show what you think at all, though that is a common misinterpretation.

The Chinese Room argument is, at its core, an appeal to consciousness/intelligence as a construct that is special and cannot be considered a simple emergent effect of the physical system of the brain. To use your terminology, it is the argument that intelligence/consciousness is magical, or that at least some intelligences/consciousnesses are somehow special.

To explain the above, I will try to distill some of the more directly relevant pieces of the ton of philosophical discussion that has taken place in the 30+ years since Searle first articulated his argument (apologies in advance, it took more words than I first thought).

If intelligence is an emergent effect of the physical processes of the brain, then those processes can (for the same value of "can" used by Searle's argument with respect to an arbitrarily complex computer system) be mapped out on paper in terms of the physical interactions at an arbitrarily low level (theoretically down to probability functions describing the interactions of subatomic particles if necessary). If that mapped brain understood Cantonese, and the map describing the sequence and rules governing the interactions is written in English, then you can stuff the whole thing in a box with Searle and you have a Chinese Room derived from a 'real' intelligence as opposed to a program.

Ok, but that is just a facsimile or copy of the mind, and not the real thing! That's true, but if the intelligence is a property of the physical system of the brain, which we can map, then the mind is merely a Chinese Room actuated by the physical processes (as opposed to Searle) governing the interactions of its constituent particles. Well, the physical brain does not require a conscious actuator, which distinguishes it from the Chinese Room. True, but neither did the original Chinese Room: if its instructions were formulated such that Searle did not need to understand them, and he had only to manipulate symbols (a fundamental premise of his original argument), then it could just as easily have been executed by a computer and thus been self-actuated as well.

Ah, then either the physical brain cannot be mapped, or 'true' intelligence is not simply an emergent physical effect and is instead a metaphysical epiphenomenon of the physical brain. The first we can dismiss as prima facie false unless there is some magic that makes the physical substance of the brain unique in this regard relative to all other matter in the universe. If we take a different tack and instead try to argue that perhaps the brain is not unique, but that all matter cannot truly be mapped in that manner (after all, as science has not yet unraveled the mysteries of the ever-elusive grand unified theory, such an undertaking may not simply be infeasible, but fundamentally impossible for reasons that we do not yet understand), then that reasoning also destroys the Chinese Room argument, because an arbitrarily complex computer system cannot necessarily be mapped either, making the original Chinese Room impossible to construct.

That leaves us with 'true' intelligence, if it exists, as being some kind of metaphysical property, or brains being magical. No one believes in magical brains, so we have now taken this particular line of discussion squarely out of the intersection of philosophy and science and shifted it into the region of philosophy and religion, where it stands today as a (still raging) debate regarding the metaphysical mind (and whether such a thing exists).

In other words, Searle is ultimately (and explicitly) arguing that efforts to create synthetic intelligence are futile not because any specific scientific approach (such as computational modeling) is flawed and will not work, but that scientific methods are fundamentally incapable of creating intelligence because 'real' intelligence possesses a secret sauce that has yet to be precisely defined (e.g., "original intentionality") that an artificial creation cannot possess (because, e.g., any intentionality or understanding will simply be a derivation of its creator's, and therefore not original).

If we take a step back, we can also note that Searle is specifically not arguing that functionalist approaches such as computational simulation cannot produce something that is functionally identical to true intelligence such that it is impossible in the absolute sense for an external observer to distinguish the two--he specifically concedes that they could, and it is a premise of the Chinese Room. His argument is that even if they do produce such a thing, it will still not be "true" intelligence in the metaphysical sense of intentionality or consciousness. This means that even if Searle is right, and there is a real metaphysical construct or property that is both undetectable and unreproducible by science, then from a scientific and utility perspective the distinction, even if real, does not matter. Turning that around to more directly address your assertion, Searle himself does not argue that the Chinese Room implies that computational models of the mind cannot work. He argues that it doesn't matter whether they work, because that is beside the point. This is one of the reasons why, even though Searle originally coined the term, the current definition of "strong AI" used in neuroscience and AI research explicitly means functional equivalence or superiority when compared with human intelligence, rather than philosophical equivalence.

Comment Re:Saving everyone a few seconds on wiki (Score 1) 209

Observing something and then trying to recreate it with rigor and controlled conditions is called science. It's pretty nifty, you should look into it.

That was implied in the point I was making. My parent poster's assumption, if true, would make experimental science either impossible (you would not be able to 'experiment' without already completely understanding the subject of the experimentation--hence it is not experimentation) or unnecessary (because you must already understand the topic completely in order to perform an experiment) depending on your semantic inclination.

Comment Re:Saving everyone a few seconds on wiki (Score 1) 209

To add: I mention "long-standing problems" which suggest that the effort in question is ultimately futile. These problems are well-established and fundamental to the AGI problem the summary implies that we're on the brink of solving. To ignore them and expect that those problems will just vanish if we just build a better bamboo airplane is nothing short of magical thinking.

The point that it is possible to solve a problem without necessarily fully understanding the problem or the mechanism of the solution is equally applicable to intermediate problems that we believe must be overcome in order to develop AGI. To use your analogy, building physical replicas of airplanes will not, in and of themselves, result in shipments of cargo directly, but they could (with persistent iterative effort) lead to better understanding of aeronautical engineering (e.g., repeatedly pushing modified bamboo replicas off a cliff will eventually demonstrate that shape, surface area/weight ratio, distribution of weight, etc. affect performance), ultimately resulting in working aircraft, and probably resulting in economic development as a side-effect.

Children go through that process building their first paper airplane. First they copy someone or follow instructions by rote, then they fiddle with folding patterns, shapes, materials, etc. to try to make improvements or changes to flight characteristics. The more inquisitive ones continue the process to find that certain shapes and patterns tend to have certain results, and the more persistent and sufficiently interested ones continue study and experimentation to try to understand the underlying causes of those results. Very few (if any) first study aeronautical engineering and physics to understand the underlying physical forces and interactions (to say nothing of the math required to understand and solve systems of PDEs etc.) before constructing their first paper airplane.

With respect to the one specific issue you've identified, a solution for or complete understanding of the issues touched on by the symbol grounding problem is a requirement only for the development of a more rigorous definition of and ability to then test for and recognize intelligence, not the creation of intelligence.

To believe that the approach in question will ultimately result in "creating synthetic intelligence" is not merely belief without evidence, it's belief in face of evidence to the contrary!

We don't in fact have evidence to the contrary. What we do have are open philosophical questions/problems regarding the nature and definition of intelligence that may prevent us from being able to formally prove that a program is in fact intelligent according to an (as-yet non-existent) rigorous definition. Searle's Chinese Room argument, for example, simply articulated a counterargument to the computational/functional definition of intelligence, but did not provide any alternative definition. If you also accept his refutation of the systems reply to the Chinese Room argument, then we cannot, given the current state of human knowledge and understanding, even prove that other humans are in fact intelligent. If we accept that other humans are intelligent, then it follows that the existence of intelligence is independent of our ability to understand and define intelligence.

To return to your earlier reference, even if we assume that the symbol grounding problem must be overcome in order for intelligence to exist, it is nevertheless true that it could be overcome while both the creators of the intelligence and the resulting intelligence do not comprehend (or even have the ability to precisely identify) the mechanism of the solution. Inability to solve the symbol grounding problem or, e.g., address Searle's Chinese Room argument to Searle's satisfaction, therefore, is not prima facie evidence that neural net modeling as an approach cannot eventually create intelligence.
