Comment He May be Partially Right... (Score 1) 830

Okay, so his argument makes very little sense. The way he comes up with a line count for the code is pure handwaving. He also doesn't seem to understand that the genome isn't a direct encoding of the brain, but rather a very complex and convoluted encoding of how to build a whole human being, keep it alive, and grow it to adulthood. It probably isn't the case that we will reverse engineer the human brain from its genome, and if it is, it certainly won't happen in 10 years.

However... Neuroscientists have already begun reverse engineering the brain with the tools they have, which largely involves probing around animal brains with electrodes and seeing the response of various neurons to precisely calibrated stimuli. We seem to have a pretty good understanding of what happens in the first layers of the visual cortex, and the transformations applied to the visual input seem fairly straightforward to understand. It seems that we could encode what these first layers perform, in terms of convolutional transformations, in less than a page of programming code.... We might actually be able to simulate a significant part of the human visual cortex (hundreds of millions of neurons) in real-time using simple DSP chips.
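To give a feel for how little code such a transformation takes, here is a minimal sketch of a convolutional filter pass. The kernel is a hand-picked vertical-edge detector, loosely in the spirit of the oriented receptive fields found in early visual cortex; it is purely illustrative, not a model anyone published.

```python
# Minimal sketch of a convolutional transformation (correlation-style,
# as in most DSP code). The 3x3 kernel is a simple vertical-edge
# detector, chosen only for illustration.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution over a list-of-lists image."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# Responds strongly where brightness changes left-to-right.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# A tiny image with a vertical edge down the middle.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

response = convolve2d(image, edge_kernel)
```

Applying a bank of such kernels at different orientations is essentially the "page of code" claim above: the per-layer math is simple; only the scale is large.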

In my opinion, it perhaps isn't so unlikely that other parts of our brains have a very regular structure, as these first layers of the visual cortex do, and so simulating what the human brain does in terms of computation might not take all that much code or as much CPU power as people imagine it would. It's possibly already achievable using the computational power of a medium-sized computer cluster any university can afford, or by designing specialized hardware.

Unfortunately, it seems that neuroscientists are very limited by the tools they have. Studying the early visual cortex using electrodes seems viable, because we can easily conceive of the simple convolutional transformations that occur, and map receptive fields using these simple tools. However, when it comes to analyzing the behavior of the neocortex, it seems quite difficult to quantify thoughts and reasoning using electrodes. It seems that, in a way, neuroscientists could do a much better job if they were able to map the connectivity of the brain first, and study its behavior using simulations. But so far, they have been deducing the connectivity by studying the behavior of different cells... So it becomes somewhat of a chicken-and-egg problem.

Comment Re:Now just hopefully... (Score 1) 57

>> Hell? The thought of being able to watch my favorite movies over and over "for the first time" sounds pretty good to me.

Yeah... If you are still capable of even understanding what's going on in said movies. Alzheimer's heavily damages short-term memory, as far as I know.

Here in Canada, it seems like euthanasia will be legalized soon. This could enable people to choose ahead of time to be terminated if things get very bad. Right now, people who suffer the torture of debilitating illnesses don't even have that legal option. They have to suffer it through to the end and bring their family along, whether they want that or not.

I haven't had to go through this myself, but other people have told me they have seen their great-grandparents go through it... To the point where these people became senile, then unable to walk, unable to talk, blind, and eventually starved to death. I think we should allow people to decide whether or not they want to finish their life that way.

Comment Re:Wow (Score 1) 207

Another big improvement may be optical chip interconnects. This could make the connection from the processor to the RAM and other devices much faster, while also saving space on motherboards and RAM chips to put... More RAM. Not to mention possible power savings, and the fact that it should be rather easy to have more RAM channels with this technology... Imagine your processor having an individual, parallel connection to each RAM chip.

It's true that eventually, we will reach a plateau, and in a sense, I think it's fair to say that computers are already evolving less rapidly than they used to, at least in terms of performance... but we're not quite at the limit yet. We might still get to see 50TB SSDs that can do gigabits per second, desktops with 512GB of RAM, processors with 128/256/512 cores, etc. We will get to see personal computers with levels of performance that seem "ridiculously unnecessary" by today's standards.

And once this technology really does reach a plateau, if we really do find ourselves "stuck" at some performance level, it will force people to... You know, optimize things. Come up with clever ways of doing more with what you have. It could also lead to *gasp* more standardization, as the evolution of computing technology slows down.

Comment Re:They always say (Score 1) 355

>> Depends on what is meant by "bad". If by bad you mean get castrated, then I'd say that it would be better to not have gotten head.

But if you do somehow get castrated in some freak accident, this new discovery will allow you to grow a little head down there as a replacement!

Comment Re:Okay, that's it... (Score 1) 355

>> because her mind is my own we'll only be thinking of sex!

But what if your female clone is into women only? I mean... Would you have sex with a male version of yourself? Think of the implications!

Comment Re:Is this spectacular? (Score 1) 70

>> I know this might be flamebait, but. Java SUCKS for gaming. [...]

Would you care to explain why you feel that way? As someone who's programmed in both C++ and Java, I think the main complaint would be the lack of native OpenGL support in Java, but that's not necessarily a fault of the language itself. Java is actually a pretty convenient language to work with. If vendors provided proper 3D/sound APIs, it would be perfectly fine for programming games.

Comment Re:well no (Score 1) 541

>> It might automate code generation but it doesn't automate debugging or QA testing which in my experience take significantly more effort than running the build system....

They most likely use some kind of "compatibility layer" on which they develop the games. Something to handle the rendering, audio, input, networking, etc. (all interactions with the outside) in a cross-platform manner. It's also likely that most of the bugs in the compatibility layer are already fixed, because most of them will be pretty obvious (it's not very complex code, after all). The rest of the bugs, such as bugs in the game logic, will most likely have the same result on any platform.

Supporting Macs requires a big initial effort in building this compatibility layer and properly testing it, but once that's done, you can just have your coders use it transparently. As for your beta testers, just have some of them use Macs and some of them use PCs, to be on the safe side, but they most likely would all experience the same bugs, because most of the code is the same on either platform. The more games you crank out using your cross-platform API, the better tested it is, and the less likely it becomes for people to find flaws in said API.

A few years ago, a friend and I coded a rendering API that could use either Direct3D or OpenGL as its target. It took us some effort to find clever tricks to keep the performance good. We had to find ways to have the GPU transform between coordinate systems as needed. For our modest 3D engine, it wasn't an impossible effort though. We did discover some cases where both targets didn't perform exactly the same down the road, but those bugs were easily fixed.
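One concrete example of the coordinate-system mismatch: OpenGL clips depth to [-1, 1] in normalized device coordinates while Direct3D expects [0, 1]. A GL-style projection matrix can be adapted by pre-multiplying a fixed depth-remapping matrix (z' = 0.5z + 0.5w). This sketch shows the idea with plain row-major 4x4 matrices; it isn't the exact code from our engine, just an illustration of the trick.

```python
# Sketch: adapting an OpenGL-convention projection matrix for
# Direct3D's [0, 1] clip-space depth range by pre-multiplying a
# remapping matrix. Row-major 4x4 matrices, column vectors on the right.

def mat_mul(a, b):
    """4x4 row-major matrix product a*b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Maps clip-space z from [-1, 1] (GL) to [0, 1] (D3D): z' = 0.5*z + 0.5*w
GL_TO_D3D_DEPTH = [[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 0.5, 0.5],
                   [0.0, 0.0, 0.0, 1.0]]

def gl_projection_to_d3d(proj):
    """Apply the depth remap after the GL projection."""
    return mat_mul(GL_TO_D3D_DEPTH, proj)
```

Texture-coordinate V flipping between the two APIs can be handled with a similarly trivial matrix, which is why this kind of abstraction layer stays cheap at runtime.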

Comment Re:ARM (Score 1) 213

I agree, but I think an even more important factor is the interface. Sure, your cellphone could have enough computing power to run most of the applications you use on a regular basis, but... How fast can you type on it? How comfortable is it to do that for extended periods of time? What about all the students doing assignments, all the people writing reports and making spreadsheets? How comfortable do you feel working with such a tiny screen? The fact that cellphones have to be small to be portable limits their usability in important ways.

A lot of these limits could be fixed, say, by having a small docking port (perhaps even wireless docking) with external I/O devices, monitors, or TVs. But then, if you're going to have a device to hook your cellphone to at home so you can use it as a desktop, how much more expensive would it be to add a CPU, some RAM and a hard drive to that thing, in order to make it... A full-fledged desktop computer?

Comment Re:Linux support? (Score 1) 200

I agree and I honestly don't understand. People who develop 3D for the web probably won't want to use all the latest and fanciest features DirectX exposes. Furthermore, they could have developed their own 3D API layer that uses DirectX internally, but can still map to OpenGL/etc. on another platform. Why limit yourself to the latest Windows when you simply don't have to?

Not to be mean, but these people most likely haven't thought out their strategy very far, and their plugin probably won't succeed.

Comment Re:Bandwidth is a killer (Score 1) 200

>> 3D graphics is bandwidth intensive, especially for textures.

Well, fortunately, bandwidth is increasing, slowly, over time. It's apparently pretty easy to get a 100Mbps connection in Japan now. Even downloading 100MB of textures at that speed wouldn't be so bad. In the meantime, textures can be compressed for download. Quake 3 used JPEG files for its textures. That can easily give you a compression ratio of 10:1.

>> 3D accelerated postage stamps just won't be that compelling.

Look at the browser games people are playing. My girlfriend keeps getting addicted to them. None of them are really that sophisticated, looks-wise. If someone can just manage to get some 3D RPG online, even if it looks like a 10-year-old game, people WILL play it, *a lot*.

>> Procedural textures are vastly smaller but are rather labour intensive to create. While this is a nice concept it won't be replacing downloaded 3D content anytime soon.

It's my opinion that procedural content is "The Future (TM)". If you give people enough motivation to use it, they just might. Web-based games might be a good reason to develop the technology further, because it makes even more sense in that context.

>> I have enough trouble convincing people to wait for a 2MB Java applet that's downloaded once and cached with WebStart.

In an earlier post, someone was talking about a web-based (WebGL) port of Quake. They said the game fetched the textures after the level was loaded, while the user was playing. You can imagine something like that, if properly implemented, mitigating the problem. Textures only need to be loaded when you are about to see them, and they only need to be loaded in full quality when you can see them up close.

Comment Re:Another pointless plugin? (Score 1) 200

>> I agree that WebGL will eventually make 3D more accessible in browsers (once it's supported in mainstream browsers). I doubt, however, if any commercial developers will use it, because it's based on scripting, so offers a way for everyone to view the source code, something that commercial publishers tend to dislike.

There are obfuscators available. It's not perfect, but it's a start. If the technology is available, people will want to use it. If some big companies don't, smaller companies will. It will certainly be interesting to see what kind of browser games people implement based on WebGL. I might even be interested in playing around with it myself.

>> I also imagine that its scripting nature will mean that WebGL games won't have access to advanced gaming technology such as physics, and so relegate it to more casual games.

They will cater to the lowest common denominator in terms of hardware. Right now, most people don't have systems that can do GPGPU. Perhaps later on it will become available... Still, you can do quite a bit in terms of physics on a simple CPU. None of the latest games *require* special physics acceleration, if I'm not mistaken. Now, of course, we're talking about web games coded in JavaScript, but still, JavaScript VMs are getting better.
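To illustrate how little it takes to get basic game physics running on a plain CPU, here is a toy sketch: explicit Euler integration of point masses under gravity with a floor bounce. A WebGL game would write this in JavaScript, and the restitution value is just an assumed constant for the example, but the structure is the same either way.

```python
# Toy sketch of CPU-side game physics: explicit Euler integration of
# (height, velocity) pairs under gravity, with a damped floor bounce.

GRAVITY = -9.8       # m/s^2, downward
RESTITUTION = 0.5    # fraction of speed kept on bounce (assumed value)

def step(particles, dt):
    """Advance a list of (y, vy) pairs by one time step."""
    out = []
    for y, vy in particles:
        vy += GRAVITY * dt
        y += vy * dt
        if y < 0.0:                # hit the floor: reflect and damp
            y = 0.0
            vy = -vy * RESTITUTION
        out.append((y, vy))
    return out

# Drop a particle from 10 m and simulate one second at 60 Hz.
state = [(10.0, 0.0)]
for _ in range(60):
    state = step(state, 1.0 / 60.0)
```

A few hundred of these updates per frame is trivial even for a JavaScript VM, which is the point: casual web games don't need hardware physics acceleration.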

Comment Re:PEBCEK is the issue... (Score 1) 596

What's your point, exactly? It's obviously impossible to formally prove that your software will never fail under any given condition, unless your software is trivial, but... There is a huge difference between well-designed/well-written code and code written by people who simply don't care and will only go so far as to make sure their software works under a typical use case.

Case in point: how many bugs in web applications were caused by code that didn't escape strings going into an SQL request? How many buffer overflows were caused by people not ensuring that the input would properly fit in a buffer? Now, how easy is it to simply write a function that both reads the user input and ensures that those conditions are met, before doing anything else with said input? Not hard at all.
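A sketch of that "one function at the boundary" idea: the input is validated (including a length limit) before anything else touches it, and the SQL that follows uses parameterized queries so no hand-escaping is ever needed. The `users` table and the length limit are made up for the example.

```python
# Sketch: validate user input once at the boundary, then use
# parameterized queries so escaping is never done by hand.
import sqlite3

MAX_NAME_LEN = 64  # assumed limit for this example

def read_username(raw):
    """Reject bad input before anything else sees it."""
    name = raw.strip()
    if not name or len(name) > MAX_NAME_LEN:
        raise ValueError("invalid username length")
    return name

def find_user(conn, raw_input):
    name = read_username(raw_input)
    # The ? placeholder lets the driver handle quoting, so the classic
    # ' OR '1'='1 injection is treated as a literal (non-matching) name.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (name,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
row = find_user(conn, "alice")
```

Two short functions, and both of the failure classes mentioned above (injection and oversized input) are closed off for every caller.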

A very small effort can go a long way. As a developer, I try to ask myself "how could I make the software fail, as a user?" and "how could I prevent such failures, as a programmer?"

Comment Re:Evolved Neural Network Brains (Score 1) 115

Having done such simulations as well, I can tell you it's possible to integrate Hebbian learning in there. There has also been research combining evolution of the neural network's structure with backpropagation to adjust the weights. Finally, in my own experiments, I showed that agents with recurrent neural networks can learn without either of those things. It's essentially possible to build the neural network equivalent of a flip-flop. The agent can then turn these neural switches on and off during its lifetime, exhibiting some degree of adaptation to the environment.
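The flip-flop idea can be sketched in a few lines: a unit with strong self-recurrence keeps itself active after a transient "set" pulse, and an inhibitory "reset" input clears it. The weights here are hand-picked for illustration, not evolved.

```python
# Sketch of a neural "flip-flop": a self-recurrent threshold unit
# that latches on after a set pulse and is cleared by a reset pulse.
# Weights are hand-picked for illustration, not evolved.

def step_fn(x):
    return 1.0 if x >= 0.5 else 0.0  # hard threshold activation

W_SELF, W_SET, W_RESET = 1.0, 1.0, -2.0

def latch_step(state, set_in, reset_in):
    drive = W_SELF * state + W_SET * set_in + W_RESET * reset_in
    return step_fn(drive)

s = 0.0
s = latch_step(s, set_in=1.0, reset_in=0.0)  # set pulse: switch on
s = latch_step(s, 0.0, 0.0)                  # no input: self-recurrence holds it
held = s
s = latch_step(s, 0.0, 1.0)                  # reset pulse: switch off
```

An evolved recurrent network can discover this kind of bistable circuit on its own, which is what gives the agent a form of within-lifetime memory without any weight changes.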

Comment Nuclear Waste and Geothermal Energy (Score 1) 373

I see a lot of people talking about nuclear waste and how to handle it. Wouldn't it be possible to use some of that to build RTGs or something similar? (http://en.wikipedia.org/wiki/Radioisotope_thermoelectric_generator) Those materials are releasing energy; if we could put it to use, then this "waste" would become a useful asset.

And what about geothermal energy becoming our main source of energy someday? It all sounds nice, but wouldn't it be a bit risky if we used geothermal energy for almost 100% of our energy needs? I'm not a geologist, but it seems to me like this could accelerate the cooling of the Earth's core... And if the core ever became solid, our planet could be left without a magnetic field. Of course, we're talking about very long-term consequences, but it would suck to have the Earth lose its atmosphere to space as Mars did... Especially if we never even manage to leave the solar system. Of course, if this possibility is millions of years away, then I suppose it could be acceptable to use geothermal energy until we find something better (I'm hoping we'll have managed fusion 1,000 years from now).
