
Comment Re:Stick to what you know (Score 1) 387

Ah, the old "new languages do the same thing as old languages" argument.

No, it's that new languages generally aren't solving any new problems or offering a genuinely different way of expressing things. You basically have the functional style and the procedural style, and both were invented before electronic computing (the lambda calculus and Turing machines, respectively). Maybe include Prolog as a slightly different take on logical expression for computers.
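To make the distinction concrete, here's a toy sketch in Python: the same computation written once in the procedural, state-mutating style (Turing-machine lineage) and once in the functional, expression-composing style (lambda-calculus lineage).

```python
from functools import reduce

# Procedural style: explicit mutable state and iteration.
def factorial_procedural(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Functional style: a single composed expression, no mutation.
def factorial_functional(n):
    return reduce(lambda acc, i: acc * i, range(2, n + 1), 1)

print(factorial_procedural(6))  # 720
print(factorial_functional(6))  # 720
```

Different surface syntax, same underlying models of computation.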

Scala is one example of a recent language that does a great job putting together functional and object-oriented paradigms in a way that's actually usable and productive.

But it's not really new. It's just another variation on things that have gone before. Just looking at the front page of the website, "Haskell" instantly comes to mind.

Comment Re:Awesome thread (Score 0) 420

At the time the comment was written, almost every post was about Raspberry Pi and Plex. A couple of actual functional solutions had been recommended, but the majority of the thread was your typical neckbeard fare.

And so? Your solution to this is what? To complain about it.

Congratulations: you have become what you hate.

Anyway, thanks for your contribution. Like OP, you really offered a lot toward helping to reach a solution.

I did, in another thread, offer a minor suggestion on getting iPlayer content. I suspect you are the AC who got all upset that I didn't explicitly know about a feature I hadn't tried, as if that made the information completely useless. Whether the questioner would have found it useful anyway, since the program can "record" content directly to disk for another program to use even without streaming it *directly*, is up to him to decide, not you, O self-appointed arbiter of usefulness, and apparently definitely not a neckbeard despite engaging in this sort of nonsense.

By the way, you can use the backspace key when the medium is text. This will help you avoid those embarrassing "Oh wait" moments. When you realize you have made a mistake, simply backspace over it. No need to ask everyone to wait while you correct it.

What the hell are you on about?

BTW:

When are you going to provide references to the cheap, elegant, well-documented, and functional solutions that already exist, troll?

You aren't?

That's what I thought. Feel free to continue complaining about how much of a non-contribution everyone else is providing in the meantime. It's not at all ironic.

Comment Re:Anyone (Score 1) 732

We also haven't (to my knowledge) developed anything that logically works like our brains. A computer follows rules, procedures and steps to accomplish tasks. The human brain is completely different.

It's not that different if you think about the physical substrate rather than the mathematical abstraction.

A computer is a bunch of electrons whizzing around, causing gates to activate or deactivate when signals cross a certain "high" or "low" threshold.

A brain is a bunch of chemicals whizzing around, causing neurons to activate or deactivate when those chemicals cross a certain "high" or "low" threshold.

The basic idea of switching things on and off and sending signals around a system isn't that different.
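A minimal sketch of that shared idea, using nothing but a weighted-sum threshold unit (the classic McCulloch-Pitts abstraction): the very same unit can be read as a logic gate or as a crude neuron, depending only on how you set the weights and threshold.

```python
# A single threshold unit: sum weighted inputs, "fire" only above a threshold,
# like a gate driven past its high/low level or a neuron past its firing point.
def threshold_unit(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With one choice of parameters the unit behaves as an AND gate...
AND = lambda a, b: threshold_unit([a, b], [1, 1], 2)
# ...and with another, as an OR gate.
OR = lambda a, b: threshold_unit([a, b], [1, 1], 1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```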

Our brains work by association and reference. When a computer comes up with an answer it either calculates it or looks it up in a file or database according to some criteria.

You make a category error. Computer software might operate in the manner you describe, but the computer is just the thing that allows that software to run. There is no reason software cannot work by association and reference, and no reason to think association and reference are not themselves computational activities carried out by the brain.
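For illustration (a toy sketch, with made-up concepts and helper names): software that answers by following chains of association, rather than by calculation or by querying a database against fixed criteria, is trivial to write.

```python
# A toy associative store: concepts are linked, and "recall" follows the links
# outward from a cue rather than computing or looking up by fixed criteria.
associations = {
    "fire":  ["heat", "red", "danger"],
    "red":   ["apple", "fire", "stop"],
    "apple": ["fruit", "red", "pie"],
}

def recall(cue, hops=2):
    """Return everything reachable from a cue within a few associative hops."""
    seen, frontier = set(), {cue}
    for _ in range(hops):
        frontier = {n for c in frontier for n in associations.get(c, [])} - seen
        seen |= frontier
    return seen

print(sorted(recall("fire")))  # includes second-hop associations like "apple"
```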

When a human comes up with an answer, it is a complex procedure

Which makes it hard to reproduce with our computer technology, but so far I am not seeing anything that contradicts my assertion that the procedure is inherently computational in nature, and hence it is not inconceivable that it could be reproduced in a system other than the brain.

For instance, if you make a computer repeat a number (or calculation, etc.) and then ask it something completely unrelated (like, pick a random vegetable), its answer will seldom have anything to do with its other work.

Well why would it? I mean really, why would it? If I had multiple personalities (which is probably not even a real thing, but let's go with the Fight Club version), why should you expect one to know anything about the other if it is perfectly segmented?

This doesn't even address the fact that you cannot "ask" a computer to do something like this in the way you can ask a person; the software to allow it doesn't exist. My assertion is simply that there's no reason to think no such software could exist on a theoretical computer.

By contrast, if you tell a human to say "six" a bunch of times, then pick a vegetable, 98% will pick carrot. Skip the "six" part, and your results will be completely different.

I'm not sure what to make of this. It seems to be saying there is some inherent cultural association with "six", such that repeating it primes the response "carrot" when asked to "pick a vegetable". This is not a priming phenomenon I'm aware of, and it doesn't make sense, not least because there are cultures, past and present, in which neither "six" nor "carrot" is meaningful.

If all you're saying is that priming a human can lead to more predictable responses, I'm not going to argue, because that technique is used for a variety of things. However, I would argue that this points human cognition more toward the computational than the non-computational domain: you provided a particular input that skewed the output in a particular direction. You could easily write a silly kind of stochastic Eliza that reproduces this effect, where priming it with particular inputs biases it toward a particular output, and without the priming the output would be unbiased.
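A minimal sketch of that silly stochastic Eliza (the vegetable list, bias value, and function names are all made up for illustration): without priming it picks uniformly; a priming input skews the output distribution, much as repeating "six" is claimed to skew humans toward "carrot".

```python
import random

# Made-up vocabulary for the toy responder.
VEGETABLES = ["carrot", "potato", "onion", "pea", "leek"]

def pick_vegetable(primed_with=None, bias=10, rng=random):
    weights = [1] * len(VEGETABLES)          # unprimed: uniform choice
    if primed_with in VEGETABLES:
        weights[VEGETABLES.index(primed_with)] = bias  # priming skews the odds
    return rng.choices(VEGETABLES, weights=weights)[0]

rng = random.Random(0)  # seeded so the demonstration is repeatable
unprimed = [pick_vegetable(rng=rng) for _ in range(1000)]
primed = [pick_vegetable(primed_with="carrot", rng=rng) for _ in range(1000)]
print(unprimed.count("carrot"), primed.count("carrot"))  # primed count far higher
```

Same input, different prior state, predictably different output — a thoroughly computational kind of priming.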

A human brain generates these relationships subconsciously, almost by accident.

Well, I don't think the millions of years of cognitive evolution of brains is a complete accident. That the brain comes with its learning software already available is not in contention — how could it be any use if it didn't? Brains aren't exactly designed like hobbyist electronics, where you flip switches to insert bytes into a 4004. But the fact that no such software currently exists on the computers we have created, past or present, doesn't mean it cannot be created.

Got that? My contention is simply that there is no reason to think the software to make this work cannot be written. Everything I've seen indicates it's possible; nothing suggests it would be easy to write, and everything indicates that the complexity of the problem scales quickly. But I am convinced by demonstrations of the underlying principles believed to be at work, and by studies of how the brain performs certain functions in ways that certainly look computational to me.

Comment Re:Anyone (Score 1) 732

That's why computers cannot replace human beings. They are tools. Nothing more.

Philosophically, I am inclined to believe that human brains are computational in nature. Even if what they compute is very different from the logic-gate basis of current electronic computing technology, I have no reason to believe there isn't a theoretical algorithm that could replicate brain function, and the more we understand about how brains work, the more I believe that is the case. It is correct to say that current technology isn't anywhere close to what our brains are capable of, although it does other things much, much better — and it is those things that are being, and will continue to be, replaced. Things that play to the strengths of brain computation aren't under any immediate threat, but putting them over the threshold of "can't ever do" seems unreasonable to me.

Comment Re:Hard AI (Score 1) 172

If a computer program were intelligent it should be able to determine if a computer program is looping endlessly.

And for certain programs you could certainly do that. What is often forgotten when the Halting Problem is brought up in AI discussions is its application to things like the Goldbach Conjecture: if a halting-decision algorithm existed, it could be used to determine the truth of the Goldbach Conjecture. That such an algorithm is proven not to exist doesn't mean the Goldbach Conjecture is like an infinite loop. It has a definite answer; it is just not currently known whether it has a finite proof. No human intelligence has yet demonstrated this either, and it is not a simple matter to say we can determine whether or not it will "loop endlessly", because we can't.
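A sketch of that argument in Python, assuming a hypothetical oracle `halts(f)` that is proven not to exist: the searcher below halts exactly when a Goldbach counterexample exists, so a single oracle call would settle the conjecture without running the search.

```python
from itertools import count

def is_prime(n):
    """Trial-division primality test (slow but correct)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds_for(n):
    """True iff the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_searcher():
    """Halts (returning a counterexample) iff the Goldbach Conjecture is false."""
    for n in count(4, 2):          # iterate over even numbers 4, 6, 8, ...
        if not goldbach_holds_for(n):
            return n               # counterexample found: halt

# The hypothetical oracle halts(f) would settle the conjecture in one call:
#   halts(goldbach_searcher) is False  =>  the conjecture is true
#   halts(goldbach_searcher) is True   =>  a counterexample exists
```

No such oracle can exist in general, which is exactly the point: proving the searcher never halts would itself amount to proving the conjecture.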
