Well that explains why (Score 1)
I started getting the "you appeared in X searches this week" emails recently.
Pretty sure it's just a call to malloc().
To me it looks like what they are demonstrating is legit neurocontrol and the muscle contractions you see are just a byproduct of using a shared data bus.
The person doesn't have to be consciously attempting to move those specific muscles in their arm/hand you see contracting... Their "move ship" or "jump dinosaur" thought triggers muscles in their hand to contract because they are using the same wiring. If we directly wired this device to your brain you could avoid such problems, but because we can't do that yet, we have to repurpose existing wiring in your arm. So the signal just ends up going to two places: the device and some random muscles that share a path with it... Over time I suspect the contractions will get smaller/weaker, maybe not even visible to the naked eye, and by the time you develop a strong "muscle memory" you won't even be consciously aware they are taking place.
Think about what happens when you try to teach yourself to touch type. Your conscious thoughts are of the individual steps. You look at the keyboard and find the specific letter, think about which finger you want to use, consciously move it into position, press the key, and then return it to the home row. It's very slow and robotic... That may seem like an ultra-low-level series of thoughts, but even those are very high-level abstractions. Consider what is involved just in the thought "find the X key on the keyboard". It has an insane number of levels of abstraction itself...
By the time you learn to touch type you are not consciously thinking any of those steps. The end result of any thought, at its lowest level, is really a series of specific signals sent across specific motor neurons which trigger individual muscles to contract. But we don't consciously think at that level. We think "flex bicep", we don't individually signal all 25,000 motor neurons that trigger the 1,250,000 muscle fibers our bicep is made up of to contract. Our Motor Cortex is responsible for that sort of translation.
Consider when you stand in place and how complex a balancing act that is. Despite what Newton's first law says, no object is ever at rest... It is impossible to stand perfectly still, and your body is performing a never-ending series of muscle contractions to offset all those equal and opposite forces that would prevent you from staying upright. That includes all the forces your last set of muscle contractions just added to the equation. Now, think about just how many muscles are involved in that process... How often are you consciously aware of any of those individual contractions while having an "I want to stand here" thought? Hell, how often are you even aware you are having an "I want to stand here" thought? Even that is pushed into your subconscious a heck of a lot of the time while you are focusing on any number of other tasks you perform while you happen to be standing up.
So despite the user's thought of "make dinosaur jump" or "move ship" resulting in muscles contracting in their hand/arm, they are not consciously triggering them by the time the thought has "muscle memory", and it's unlikely they are aware it is happening. It's simply a byproduct of the fact that this device uses existing wiring to the brain, and those signals always have to trigger some existing muscle fibers.
Why do these demonstrations (specifically the Dino Rider and Asteroids part) not satisfy being classified as Neurocontrol?
Do you mean there hasn't been a live/public demonstration to verify these demos are real? Or is it that because the signal is obtained via motor neurons rather than directly from the brain, it's not actually defined as Neurocontrol?
Also, didn't you hear Cold Fusion is only a couple years away!
I'm not saying muscles in your arm/wrist/hand don't move...
Something has to move/react because you're sending "move" signals from your brain (motor cortex) to the arm where the sensor is, so there is always some amount of muscle movement. However, you don't have to map them the same way as your physical body. Say in order to make a fist with your physical hand it takes 100 specific motor neurons to signal all relevant muscles to contract. You don't have to use that same mapping, or even 100 neurons, to make your virtual hand make a fist. This way your physical hand could have no perceptible movement while your virtual hand makes a fist.
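To make the remapping idea concrete, here's a toy sketch of decoupling detected firing patterns from the physical-body mapping. All names, neuron IDs, and pattern sizes are made up purely for illustration; this is not CTRL Labs' actual API or decoding approach.

```python
# Toy sketch: the pattern of motor neurons that makes your *physical*
# hand do something doesn't have to be the pattern the device maps to
# a *virtual* action. Everything here is hypothetical.

# Say a physical fist needs 100 specific motor neurons to fire...
PHYSICAL_FIST_PATTERN = frozenset(range(100))

# ...but the device can map a tiny, imperceptible pattern to the same
# virtual action, or to an action your real hand can't even perform.
VIRTUAL_ACTIONS = {
    frozenset({3, 17, 42}): "virtual_fist",     # 3 neurons, no visible movement
    frozenset({5, 9}): "virtual_sixth_finger",  # no physical analogue at all
}

def decode(active_neurons):
    """Return the virtual action for a detected firing pattern, if any."""
    return VIRTUAL_ACTIONS.get(frozenset(active_neurons))

print(decode({3, 17, 42}))  # maps to the virtual fist
print(decode({5, 9}))       # controls a finger your real hand doesn't have
```

The point of the sketch is just that the lookup table is arbitrary: nothing forces the virtual mapping to resemble the physical one.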
Want to add a 6th finger you can control to your virtual hand? That's doable. Want to control a virtual arm with a completely different set of joints and range of motion? Also possible. However, that's not the really interesting stuff. This device effectively allows bidirectional adaptation and a level of personalization not possible with physical input devices.
I highly recommend you listen to this interview with Reardon, as it makes the potential "black magic" uses for this device a bit more explicit. My understanding is it's way more powerful than just removing the travel distance/time from a keyboard: crazy stuff like making it impossible to make a typo. If you intend to hit the "E" key you won't be capable of missing it, because it captures your intention, not where your muscles would actually move your finger.
Of course in reality it's not quite that simple. You don't intend to trip over your own feet, but you still do it from time to time... But if it's able to improve accuracy by a significant amount while reducing/eliminating physical movement, that could be a pretty big win even if the only application was typing.
You don't actually have to move, at least not in the traditional sense... You effectively reduce your muscle movements to levels imperceptible to the naked eye. I haven't personally tried the device, so I don't know if it feels like you're just concentrating deeply the entire time, or whether you can get to a place where it requires as little conscious thought as walking. You think about where you want to go but not about the complex movement you are actually performing. So you aren't consciously breaking down all the individual actions that make up the act of walking, you just walk... I suspect at the start it's a bit of the former and it transitions to the latter, just like learning any physical movement. At some point it becomes almost thoughtless.
Check out this interview where they talk about the difference between Myocontrol and Neurocontrol at 41m32s. This device is capable of doing both. Myocontrol is the "easy" part, in that they seem to have it fully implemented/functional. Neurocontrol is the black magic stuff. Based on the podcast and other videos/interviews it seems like the hardware is there; right now they have an AI signal-processing problem and are just refining their training models.
The ultimate goal of this technology is to let you get the result of performing complex physical inputs without actually training yourself to physically perform them on a specific device. But more than that, there is a feedback loop: while you are training on a new input device, it is also learning how you use it and customizing itself to fit you better. Basically like a keyboard that evolves its shape to fit your common misstrokes so you can't make typos. Maybe that is a stretch, but if it can reduce training times for complex input systems, that could be a huge win.
It's your intent but they just capture it a bit late is all
No, look closer... You don't have to actually make gestures in the traditional sense. You can control multi-axis devices without actually moving. Well, technically some of your muscles are always contracting, but not enough of them to produce physical movement visible to the naked eye.
No, this is not accurate. It's way more powerful than a Leap Motion.
It is capable of capturing your intentions, which don't have to match your actual movements. So you can train yourself to control multi-axis devices without actually moving. Well, technically muscles are always contracting, but not enough of them to produce physical movement visible to the naked eye.
This video explains the technology and some use cases. It's not reading how your muscles move but the signals being sent to your muscles. So if the muscle is restrained, it is still capable of capturing your intention. It gets wilder in that it takes a lot of signals from many different motor neurons to make your muscles contract in a way we classify as movement. So you can train yourself to control multi-axis devices without actually moving. Well, technically muscles are always contracting, but not enough of them to produce physical movement visible to the naked eye.
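A rough way to picture that: visible movement only happens once enough motor units fire together, but a learned signature can be recognized from a much smaller, sub-threshold set of signals. A toy model (the threshold, unit IDs, and "signature" idea are all made-up illustration, not how the actual device works):

```python
# Toy model: a muscle only visibly contracts when enough motor units
# fire at once, but an intent detector can trigger on a learned
# sub-threshold pattern. All numbers/names are hypothetical.

MOVEMENT_THRESHOLD = 50  # motor units needed before movement is visible

def visibly_moves(firing_units):
    """True only if enough units fire to produce visible contraction."""
    return len(firing_units) >= MOVEMENT_THRESHOLD

def detect_intent(firing_units, signature):
    """True if the learned signature pattern is present in the signal,
    regardless of whether the muscle visibly moves."""
    return signature <= firing_units  # subset test

jump_signature = {2, 11, 23}         # learned "jump dinosaur" pattern
faint_signal = {2, 11, 23, 30, 41}   # only 5 units firing

print(visibly_moves(faint_signal))                  # False: no visible movement
print(detect_intent(faint_signal, jump_signature))  # True: intent still detected
```

The asymmetry is the whole trick: the detector needs far less signal than the muscle does.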
Thomas Reardon, the CEO/founder of CTRL Labs, gave an excellent, way more in-depth interview about the technology on the Hidden Forces Podcast that's worth checking out. The discussion of the technology starts at 33m20s. While the first half hour doesn't cover the tech, Thomas is a pretty interesting guy, so I think it's also worth listening to.
Hmm, I think I figured out how PSVR is a success while Kinect isn't, despite the large discrepancy in sales figures. It's not about hardware, it's about software.
Look at the number of Kinect games vs PSVR games. PSVR has somewhere around six times as many titles listed as Kinect.
Then look at the actual games... Nearly 15% of all Kinect games are literally "Just Dance" sequels. Another 10-20% are various other dancing or fitness franchises. Many of those Kinect games are not really Kinect games, but just some regular game that happened to add some Kinect compatibility: Mass Effect 3, Skyrim, The Sims... Look at the big sports franchises like FIFA, Tiger Woods, and NBA2K, and how many years they continued to include Kinect support.
This is why PSVR is a success and Kinect was a failure. PSVR owners are buying lots of software and developers are actually making games for it.
Cigarettes still sell pretty darn well. Apple hardware isn't all bad though.
Wow, Wikipedia says 35 million by Oct 2017. Xbox 360 and Xbox One sold about 150-200 million units combined, so that would put it near 20% adoption. That seems pretty high for a console peripheral, let alone one considered a failure.
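The back-of-the-envelope math, taking the midpoint of that 150-200 million combined-console estimate:

```python
# Kinect adoption rate, using the figures from the comment above.
kinect_units = 35_000_000                         # Kinect sold by Oct 2017 (per Wikipedia)
console_units = (150_000_000 + 200_000_000) / 2   # midpoint of the 360 + One range

adoption = kinect_units / console_units
print(f"{adoption:.0%}")  # 20%
```

At the low end of the range (150M consoles) it's about 23%, at the high end (200M) about 18%, so ~20% is a fair summary.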
I agree that 5 million vs the entire PS4 market is kinda small...
However, calling Kinect a failure despite it selling 5x as many units isn't quite apples to apples... Kinect had a life spanning two console generations and a market on PC among hackers using it for computer vision. It failed to gain critical mass for consumer use, but I think the core technology still lives on in HoloLens and is used by hackers for cheap 3D video.
Remember to say hello to your bank teller.