
Comment Eye-tracking (Score 1) 1

Is it real pupil tracking, though? They mention navigation by tilting the smartphone.

If it were eye-tracking, you could move a cursor around with your eyes, and if your gaze reached the bottom of the window, it would automatically scroll down (eye-scrolling video demo at 0:22: http://youtu.be/2q9DarPET0o?t=...).

But the article does say that Amazon uses infrared, and eye-tracking companies like Eye Tribe and Tobii do require infrared.

I do hope Amazon helps to make touchless gestures more mainstream.

Augmenting with eye-tracking is almost always faster. It's not a choice between "mouse + keyboard" vs. "eye tracker + keyboard"; it's "mouse + eye-tracking teleport + keyboard" or "eye tracker + keyboard". That is, eye-tracking is always there.

Eye-tracking + keyboard: eye-tracking doesn't have the precision of a mouse, but if an interface element and its hit state are large enough, a "click where I'm looking" keyboard button will work.

Eye-tracking + keyboard: there are eye-tracking features that allow an eye-controlled cursor to snap, zoom, etc. onto a smaller target element.
Sometimes it's a two-step process, so even though you can instantly teleport the cursor, the process may not be suitable in every situation.

Eye-tracking + mouse and keyboard: whenever you do need the mouse, eye-tracking is still there for the initial cursor teleport.
Eye-tracking first teleports your cursor near the target, and then you use the mouse to place the cursor precisely.

If you have both hands on the keyboard, you lose time switching one hand to the mouse, and bringing the hand back to the keyboard.
You're usually choosing between both hands on the keyboard and one hand on the mouse.

Eye tracking, by contrast, can be used either with both hands on the keyboard or with one on the mouse.
You never have to forgo anything to use eye-tracking; it's always ready to make normal computer interaction faster.

A research paper pits mouse control by itself against mouse control + eye-tracking in a series of computer tasks.
Mouse control + eye-tracking ends up being the clear winner.
(If you want to skip to a demo, the authors of the paper put up a YouTube video; at 2:41 there's a competition where you have to click targets as fast as possible: http://youtu.be/7BhqRsIlROA?t=...).

Comment Re:Not sure we need it (Score 1) 174

"I think the ideal would be for most road conditions, detours, and traffic issues to be kept up-to-date on a database that could allow for dynamic routing instead of the car relying completely on markers."

*Crowdsourcing long-term knowledge*
Unlike humans, the cars will already know where the road markings are, even when they're covered: in "Collaborative 3D Scanning with Paracosm and Project Tango", "multiple entities scan different parts of the same space, and join the data to create a 3-D model" (http://i.imgur.com/Y4OOdRe.gif).

*Crowdsourcing real-time knowledge*
Now in terms of real-time conditions, and winter driving, autonomous cars could constantly refresh each other with new information. There's a four-way stop that's about two blocks from my house. When there is a good sheet of black ice, you'll see car after car slip and slide; it's extremely dangerous. As soon as a driverless car detects black ice, it's going to alert every single other autonomous car, and update them with the new info about that location.

Comment Active control versus passive control (Score 1) 22


In a video of Eye Tribe's presentation at TechCrunch's Hardware Battlefield, they distinguish between active control, where you're using your eyes to manipulate interface elements, and passive control, such as the page automatically scrolling down when your gaze approaches the bottom of a webpage of text.

They emphasize passive control in the presentation, but I think that's because, here, you're already using your eyes anyway. If you look at an interface widget to highlight it, and then touch something to activate and select it, I'd call that passive, because your eyes usually go to the target before your hands mechanically react.

I'm guessing that active control, like eyes-only Fruit Ninja, will be used much less of the time, since you still have your hands. It would be useful for future computer glasses when you're doing something else with your hands.

Comment Changing the dwell time (Score 1) 22

Changing the dwell time could be a very common action for web browsing (eye-tracking interfaces let you activate a graphical widget by dwelling on it, and you can set how long you need to fixate on the target). For example, on a website you've never visited before, you might want to examine the hyperlinks more carefully and slowly, so you might set a longer dwell time for activating links and other web elements. On another tab, you might be on one of your favorite websites, where you're familiar with the location of all the links and menus. Having a "change dwell time" switch already up on the screen would let you quickly drop to a shorter activation time so you can navigate through your favorite site faster.
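Eye-tracking software differs in how it exposes dwell settings, but as a rough sketch, a dwell-time toggle could be a one-key AutoHotkey macro (the `DwellTime` variable here is purely illustrative; a real setup would have to feed the value into whatever setting your tracker's software exposes):

```autohotkey
; Hypothetical sketch: flip between a cautious and a fast dwell time.
; DwellTime is an illustrative variable, not a real tracker setting.
DwellTime := 800                      ; ms; careful, for unfamiliar sites

F2::                                  ; one key toggles between the two speeds
    DwellTime := (DwellTime = 800) ? 300 : 800
    ToolTip, Dwell time: %DwellTime% ms
    SetTimer, RemoveToolTip, -1500    ; hide the tooltip after 1.5 s
return

RemoveToolTip:
    ToolTip
return
```

The same idea would work as an on-screen switch instead of a hotkey, which is closer to the "already up on the screen" element described above.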

Comment *Eye tracking advantages* (Score 1) 22

*Advantages:*

*Comfort*
I have not seen any examples of a developer doing serious programming on a touchscreen. I've seen programmers who work in a three-monitor environment, and I don't think repeatedly reaching across to touch the screens would be comfortable over time.

Gorilla arm syndrome: "failure to understand the ergonomics of vertically mounted touchscreens for prolonged use. By this proposition the human arm held in an unsupported horizontal position rapidly becomes fatigued and painful".

Eye control can be much lower in physical exertion.

*Augmentation, not replacement*

Eye control can be an additional input that works together with your hands.

For example, you can use a macro program like AutoHotkey to remap a keyboard key to click:

AppsKey::Click

Look at any interface widget to highlight it, and then press the application (menu) key on the keyboard to left-click and select it.

*Bringing the speed and concept of virtual buttons, like Android launcher icons and Windows 8 tiles, to desktop users*

Lastly, after using AutoHotkey for remapping, I soon ran out of keyboard keys to attach macros and lines of code to, so I had to make new scripts that reuse the same keys. After a few more scripts, it's easy to forget which key does what.

You can now optionally take your hands away from moving the mouse cursor. Instead, stare at an on-screen target button and, using a keyboard key to click, instantly invoke custom virtual buttons with your macros and commands attached to them. Quick activation of on-screen interface elements without a touchscreen becomes feasible; it virtually turns a non-touch screen into a touchscreen. You can design the buttons and controls to look however you want. Customizable virtual buttons are far more productive than static physical keys.

*Example:*

E.g., I remapped F1 to launch a Google search on whatever is on the clipboard:
F1::Run google.com/search?hl=en&safe=off&q=%Clipboard%

With another script, F1 could execute something completely different. And within that script, depending on the context, such as which program is currently running or which window has focus, the use of F1 could change again; it can get confusing.
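This kind of context sensitivity is what AutoHotkey's `#IfWinActive` directive does; a minimal sketch of one key meaning different things per program (the target programs and actions are just examples):

```autohotkey
; F1 does different things depending on the active window.
#IfWinActive ahk_exe chrome.exe
F1::Run, https://www.google.com/search?q=%Clipboard%  ; in the browser, search the clipboard

#IfWinActive ahk_exe notepad.exe
F1::Send, ^s                                          ; in Notepad, F1 saves instead

#IfWinActive                                          ; end the context-sensitive section
```

Convenient, but it's exactly this invisible per-context remapping that makes an unlabeled physical key hard to keep track of.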

It would be more intuitive to look at a virtual button that is actually labeled, "Google Search the Clipboard", and then tap my activation key.
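AutoHotkey can even draw such a labeled virtual button itself with its `Gui` commands; a minimal sketch (window layout and button label chosen for this example):

```autohotkey
; A small always-on-top window with one labeled virtual button.
Gui, Add, Button, gSearchClipboard w220 h40, Google Search the Clipboard
Gui, +AlwaysOnTop
Gui, Show,, Virtual buttons
return

SearchClipboard:                      ; runs when the button is activated
    Run, https://www.google.com/search?q=%Clipboard%
return
```

With eye tracking plus a click key, staring at that button and tapping the activation key would fire the macro, no mouse travel needed.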

*Already using your eyes*

Before you move your mouse to select something, it is very likely that your eye gaze goes to the target first. The same thing goes for touch user interfaces. Your eyes are most likely already "touching" the interface widgets before you decide to actually reach out and physically touch them.

*Achieving different actions on a target: eye highlighting + touching virtual function buttons vs. touch gestures alone vs. mouse clicking on a desktop*

Eye highlighting + function buttons

If you had eye control on a touch device, you could have a few go-to base function buttons (two or three) that you press after you highlight something with your eyes.

Example: a video

E.g., you look at a video that you're about to watch; then you could press function button one to open and play it, function button two to preview a thumbnail-sized highlight reel of it, and function button three could be set to whatever other command you want, like jumping to the comments section.
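Assuming the tracker moves the system cursor to the gaze point (how "gaze + key" desktop setups typically behave), the function buttons reduce to plain AutoHotkey hotkeys; the specific actions below are illustrative, not from any real product:

```autohotkey
; Hypothetical function buttons acting on whatever the gaze cursor is over.
F1::Click                 ; function one: open/play the highlighted item
F2::Click, right          ; function two: bring up its context/preview menu
F3::                      ; function three: user-defined, e.g. jump to comments
    Click
    Send, {End}           ; illustrative: scroll to the comments at page bottom
return
```

The point is that each action is one glance plus one key tap, with no gesture vocabulary to memorize.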

Touch alone: multiple touch gestures for different actions

Currently, if I take something like the Chrome icon on the Android home screen, I can tap it to open it, or long-press it to move it. (There are also double tap, triple tap, and swiping available, but I think it ends there.)

Desktop: different types of mouse clicking for different actions

For desktop users, left and right single click, left and right double-click, left and right mouse drag, and the middle mouse click are some examples of mouse actions that achieve different things on a target once the cursor is on it. More advanced mice have even more reprogrammable keys and buttons, as some people need more.

Advantages of eye tracking + function buttons

Single tapping function keys would probably be faster and more comfortable than repeatedly doing double clicks, double taps, long presses/holds, or multi-finger gestures, such as pinching and zooming.

Since you may only need a couple of activation buttons, your thumbs or fingers reach out for fewer things. If it's a larger, tablet-sized screen, which requires more hand movement to reach all the on-screen targets, then relying on just a couple of keys and hand positions gives you even more of a speed and comfort advantage.
