It's like someone decided to knock off the convertible tablets which have been around for ages now, but had Bloody Stupid Johnson do the hinge design!
History, know it. Other than another data point on the size-weight-features continuum, this device brings nothing new to the table.
http://en.wikipedia.org/wiki/Tablet_personal_computer#Timeline_of_tablet_PC_history
I think most of the recent papers are associated with Intel. The models used there may actually be based on the same PrimeSense camera behind the Kinect. Ideally, this info should be in the paper using the camera. This isn't always the case, but someone who wanted to know badly could start flipping through papers until they find one which documents their equipment adequately.
I'll try to find out more...availability to hobbyists is an important question.
More so since the "fancy camera" is all you'd get by hacking the Kinect's I/O.
But you can buy RGB-D cameras separately. There are several about to be released in the $100-150 range (or maybe released by now, I'm lazy, YOU google it). Not vaporware either - these models have been used in the research community for a while now. It would probably be more beneficial to go to the effort of obtaining one of those and then doing some useful software work with it.
The problem here is that you can't replace precise, experienced control with anything except more of the same. You can do art pixel by pixel using the off-hand and get precision by throwing massive quantities of time at it - and you can do this using the exact same tool set as before. Experience will increase the off-hand precision.
It may be worth taking the time now to experiment with new media - you'd be starting from more or less the same point regardless of the injury, so the awkwardness of off-hand manipulation will be less of a factor. It may also be less depressing than facing something you could previously do well, and finding that you no longer can.
Firstly, it's probably going to be 50 years before this turns into an actual medical procedure rather than a proof-of-concept experiment. Let's just get that out of the way.
So what they're doing is taking people with a defective retina, and adding a synthetic one. The retina normally receives photons and sends a signal along the optic nerve. What they're doing is implanting a silicon photoreceptor behind the retina of people whose retinas aren't doing the job. The chip receives the photons and sends an electrical signal, performing the same function as a "healthy" retina, with limited fidelity. The results are sort of low-fi since (a) it's just a proof-of-concept trial, and (b) the retina is a horrendously complex photodetector, so it will take a lot of work to approach that in an implantable device. But dude, blind people. Seeing. Go, science!
If someone else can come in remotely and change what you've got installed, it's not your system and it's not your software.
But we encourage you to think of it as your own - it makes the fees hurt less, and we can always straighten you out on the details of ownership later.
You pay to use GMail. You just don't pay cash. You pay by letting them whore you to advertisers.
Google is an advertising company. The tech ventures are just the bait.
The insulator is generally a silicon compound, e.g. silicon nitride or silicon dioxide.
Also, metals are something you find pockets of in the Earth's crust. The majority ended up in the core by virtue of their greater density. Silicon, on the other hand, is a key ingredient in the crust itself, and tends to be present in the very minerals which you would have to find, extract, and process to get the metals involved in circuit-on-silicon fabrication.
Also, the amount of material in the silicon wafer itself is far, far more than the entirety of all surface features comprising the integrated circuit.
If anything, you would want to be comparing the relative scarcity or value of the metals involved versus the dopants involved, the relative ease of fabrication, and the particulars of what you can fabricate like minimum feature size, chip area per circuit element, and compatibility with other things you want to do on your wafer.
Silicon is not something we're going to run out of in the foreseeable future. If we do, it would probably be right after we ran out of nitrogen.
Mainly, most immediately, it gives you an additional way to make a diode or diode-based structure when you're designing your fabrication sequence. Fabrication on the foundry / mass-production level occurs through processes which give you pretty much a set sequence of layers (deposited materials, treatments, patterning, etching, etc.). You can make anything you can design within that process...and most anything else usually stays in a research lab.
The extraordinarily common CMOS process involves several metal layers "high" above the wafer surface, separated by intervening insulator layers. Normally, you make diodes down at the wafer level, where you're doing your doping.
MiM means you can put diodes in regions of your chip where they couldn't practically be fabricated before without a lot of time spent doing a one-off chip in a lab. With "a lot" often being several months to a year - assuming everything turns out perfectly, assuming your lab even HAS all the necessary equipment, and assuming you don't have something better to do, which is rare if you're not still a grad student.
"The medium is the massage." -- Crazy Nigel