


Comment Lessons from head-up displays (Score 1) 52

There are several big problems with AR in the real world. These are well known in the head-up display (HUD) community and are going to surface in consumer AR scenarios too. The biggest problem is cognitive capture, where you ignore important details in the real world in favor of AR imagery. I've seen this in research studies and it is a nasty piece of work. Thankfully, these were simulator lab studies.

The next problem is more subtle but still serious. AR imagery can mask things in the real world, effectively blinding you even if you are looking. If the pop-up window covers the oncoming car, you're out of luck even if the image is see-through. A related problem is focal length. In wearable tech, the AR image is likely to hover some fraction of a meter away from you in terms of focal distance. Very few things in the world sit at that distance, so you'll have to refocus constantly when looking between the imagery and the real world. This takes time and reduces awareness of things out in the world. Refocusing is not an issue for pilots since the HUD image is set close to optical infinity and nothing should ever get that close when you're flying. In cars, the focal length is often near the front bumper. That's not ideal, but you can't spend commercial-jet or fighter-plane money on consumer automotive HUDs.
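To put the refocusing cost in rough numbers: the eye's accommodation demand scales as 1/distance (in diopters), so a back-of-the-envelope sketch shows why a near image plane hurts. The distances below are illustrative assumptions, not measured values for any particular headset or HUD.

```python
# Back-of-the-envelope accommodation demand: the eye focuses in
# diopters (1 / distance in meters), so refocus effort is the
# diopter difference between the AR image plane and the object.

def diopters(distance_m: float) -> float:
    """Accommodation demand for an object at distance_m meters."""
    return 1.0 / distance_m

def refocus_shift(image_plane_m: float, object_m: float) -> float:
    """Diopter change when looking from AR imagery to a real object."""
    return abs(diopters(image_plane_m) - diopters(object_m))

# Wearable AR image at ~0.5 m vs. a car 20 m away: a large shift.
wearable = refocus_shift(0.5, 20.0)
# Automotive HUD focused near the bumper (~2.5 m): a modest shift.
car_hud = refocus_shift(2.5, 20.0)
# Aviation HUD near optical infinity (~30 m+): the shift is tiny.
aviation = refocus_shift(30.0, 20.0)

print(wearable, car_hud, aviation)
```

The point of the sketch: the closer the image plane, the bigger the diopter jump to anything in the world, which is exactly why aviation HUDs collimate near infinity.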

In short, you're going to miss threats and react slowly to them when you actually see them.

Comment Re:Everyone is waiting for California (Score 5, Informative) 320

They described the five categories of vehicle automation and explained that the first autonomous vehicles (not Musk's so-called "autopilot," which isn't) will hit the road in the summer of 2015.

Here are the levels. Most high-functioning systems on the market, like Tesla's, are in the Level 1-2 range.

No-Automation (Level 0): The driver is in complete and sole control of the primary vehicle controls – brake, steering, throttle, and motive power – at all times.

Function-specific Automation (Level 1): Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.

Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.

Limited Self-Driving Automation (Level 3): Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.

Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.
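For quick reference, the taxonomy above can be sketched as a lookup table with a rough classifier. The heuristic and its parameters are my own illustrative simplification, not part of the NHTSA policy:

```python
# NHTSA automation levels (2013 preliminary policy) as a lookup table.
NHTSA_LEVELS = {
    0: "No-Automation: driver in complete control at all times",
    1: "Function-specific Automation: one or more specific control functions",
    2: "Combined Function Automation: at least two functions working in unison",
    3: "Limited Self-Driving: driver cedes control under certain conditions",
    4: "Full Self-Driving: vehicle performs all functions for an entire trip",
}

def classify(controls_automated: int, driver_needed: bool,
             conditions_limited: bool) -> int:
    """Rough placement of a system on the NHTSA scale.

    Heuristic only -- real classification involves far more nuance.
    """
    if controls_automated == 0:
        return 0
    if controls_automated == 1:
        return 1
    if driver_needed:
        return 2
    return 3 if conditions_limited else 4

# Adaptive cruise + lane centering with an attentive driver -> Level 2.
print(NHTSA_LEVELS[classify(2, True, True)])
```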

U.S. Department of Transportation Releases Policy on Automated Vehicle Development

Comment Re:iOS? Android? (Score 1) 61

It is also quite straightforward to make your app VoiceOver friendly. The tricky part is managing some of the VoiceOver gestures. Aside from the equivalent of alt tags on images, the most important part is getting the step forward/backward and continuous scroll gestures working right in your app. Turn on VoiceOver, swipe down with two fingers, and see what happens. Then swipe left or right with one finger. If you've done your job right, the cursor will move in the order you expect. Scrolling lists that extend below the fold is also subtle, yet important.

Comment Similar to No Hands Across America in 1995 (Score 2) 163

This actually isn't that big a leap in terms of technical difficulty. A pair of Carnegie Mellon researchers drove across the country in 1995 using a forward-camera-based system; 98.2% of the trip was autonomous. The non-autonomous parts of the NHAA drive are the same ones that would be needed under this approach.

Comment Already solved and operating in the senior market (Score 1) 353

ITNAmerica has already sorted this out. You can load your parents' ride account with credits from cash or your own driving contributions. For example, you can drive people around your area and the credits are used by your parents who live halfway across the country. If you fall short one month, just top up the credits with cash.

Comment Re:Look Who's Talking Now (Score 1) 390

In many US states, the local Departments of Transportation want nothing to do with enforcement actions. They will let the police/town/etc. install red light cameras, but they don't want to be involved beyond that. In fact, many red light cameras are operated by private companies under contract with local municipalities.

Here's an example of why DOTs don't want to be involved in enforcement. A while back, some politician in New Jersey, not part of the local DOT, floated the idea of using EZPass toll data to automatically issue speeding tickets. This was almost certainly a money grab. Massive numbers of drivers started asking how to get rid of their EZPass accounts and turn in their transponders. The DOT knew lower market penetration would worsen congestion at toll booths. They, thankfully, squashed the idea quickly.

Comment Point of clarification (Score 1) 168

The main "duty" of most non-tenured professors is to produce research. If you do that best by working regular 9am-5pm hours or by only coming in in the middle of the night, nobody's going to care much. Aside from that, you need to attend occasional meetings and turn your grades in at the end of the semester. Once you have tenure, the obligation to produce continuous research is lessened a bit, and most of the schedule on which you "fulfill your duties" is really up to you.

From my perspective in the trenches, the reduction is not as big as most people might think for CS and the sciences. If you worked like crazy while building your credentials, whether toward tenure or a senior position on a non-tenure research track, you can't really slack off much afterward. You still need to bring in the cash to cover your team, your grad students' tuition, and your own salary, all of which are now more expensive too. This means just as much research effort and proposal writing. It is exacerbated when research funding is cut at a large scale (sequestration). The reduction really comes from i) having established robust lab practices, methods, and management skills, and ii) improved proposal-writing skills combined with a track record. Junior faculty spend a lot of time finding and developing the right models, processes, and skills.

Another problem is that you spend your early career developing and reinforcing workaholic habits. It is very hard to step away from work, even for a regular weekend. Unlike in most high-intensity jobs, though, the flexible hours are great for scheduling around family so they actually see you. You can insulate them from the worst of it.

Comment Reaction times (Score 1) 149

V2V is peer-to-peer and really focused on reducing reaction times. It allows the car ahead to instantly tell the car behind that it is braking. This means less latency before corrective action. It also helps non-autonomous cars, since V2V-equipped vehicles could, theoretically, absorb some of the shockwaves present in current highway driving.
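A rough back-of-the-envelope sketch of the latency savings. The speeds and reaction times below are illustrative assumptions, not figures from any V2V specification:

```python
# Distance covered during the reaction delay before braking begins:
# at highway speed, shaving latency directly shortens the gap needed.

def reaction_distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Meters traveled at constant speed during the reaction delay."""
    return speed_kmh / 3.6 * reaction_s

speed = 110.0                               # km/h, typical highway speed
human = reaction_distance_m(speed, 1.5)     # driver sees brake lights
v2v = reaction_distance_m(speed, 0.1)       # car receives braking message

print(f"human: {human:.1f} m, V2V: {v2v:.1f} m, saved: {human - v2v:.1f} m")
```

Roughly 40 meters of following distance recovered at highway speed under these assumptions, which is why shaving latency matters more than it first appears.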

Comment Re: How is this news? Answer: Closer to market (Score 2) 149

The main advance is the progression toward real-world sensor selection and packaging. If you look at all the cars that completed the Urban Challenge, and the Google cars, you'll notice the spinning Velodyne laser sensor on the roof. It is a great sensor and makes autonomous driving much easier. Unfortunately, that sucker costs more than most luxury cars and would never be deployed in the real world, since nobody wants a spinning can on their roof.

Carnegie Mellon would not have won the Urban Challenge without that sensor or the others littered all over the exterior of the car. The major advance for this new Carnegie Mellon car is comparable performance with cheaper sensors fully packaged within the car. This is a big deal since (a) economics limits which sensors you can buy and (b) the car body and shape limit the size and location of sensors. These obviously limit your overall sensing capability.

The new car also has better computer packaging. Most autonomous vehicles have no trunk space and frequently have no back-seat room. For a historical perspective, Carnegie Mellon's Navlab 1, which found a spot and parallel parked autonomously in 1992, had racks of computers and an extra air conditioning system to handle the heat load. Urban Challenge vehicles also had racks in their trunk areas. The Cadillac SRX team was able to cram all the computational gear out of sight. This is really Moore's Law at work, but it is still a respectable achievement.

Submission + - Turning Down a Glass Explorer Invite

awtbfb writes: An invited Explorer discovers Glass has accessibility problems and may interfere with her hearing aid and cochlear implant. There's a 30-day return period in case the accessibility barriers are insurmountable, and she was willing to try. However, Google is insisting she gamble on travel expenses to one of their three showrooms. It seems they still have customer service problems. Disclaimer: I'm related to the author.

Comment Without NSF, there'd be no Google (Score 1) 307

It's not that simple. A lot of groundbreaking work is the result of a side project within a larger research effort. Google is a good example of this. The ideas and approach had their origins in the NSF project Larry and Sergey were working on. While the SDLP project probably had an impact on digital libraries, the stated goal of the work, the larger impact was the creation of a technology behemoth with thousands of US jobs and a major influence on the digital economy. Under your model, would Google have happened? Probably not.

Also, the way you posed the question is interesting for other reasons. Whether a person changes their behavior is often based on far more than just basic science and technology advancements. Issues like federal policy (political science) can have a huge impact. For example, I'm working on technology research related to the aging of the population. This is a very real societal need and it is easy to justify the work from a financial perspective (take a look at nursing home and caregiver costs). However, many health and independence technologies are intertwined with privacy, whether Medicare will pay, and other non-technical issues. We rely on the insights of our colleagues doing research in the social sciences to help us understand the interplay between functionality and barriers to acceptance and commercialization. Without their research, we'd probably make very expensive paperweights.
