I like clean interfaces, but seriously, how does this help? It doesn't save space; the title bar is still there. Ignoring those buttons costs nothing, and replacing a button with a non-graphical multi-step action like double-clicking isn't making an interface simpler, it's making it more complex. I understand the confusion about a minimize button with no taskbar, but this doesn't seem like a particularly well-thought-out design change. We got rid of feature X, so action Y isn't the same anymore? Okay, then just get rid of it.
I know it's Slashdot, and everyone here gets a rush from insulting people who they think know less, but really?
IT for an NGO with 20 people is a pretty entry-level position for setting up infrastructure. Even with experience, it's useful to know what current thinking is. Slashdot has a huge concentration of experienced people who can give good advice (and plenty more who can't). You'd be a pretty poor infrastructure engineer if you didn't do some research before building things up for a new company. I think they made a great decision: the poster is being proactive in asking a big group of knowledgeable people for their current advice, since internet searches alone can yield outdated results. Sounds like someone I'd like to hire.
Plus it gives us all a great chance to update long-standing arguments about custom vs. generic, cloud vs. internal, mac vs. linux vs. windows, etc... And don't even try to say you don't like arguing about these things.
Actually, the issues of indoor 3D mapping are significantly more challenging than doing so from a plane or ground vehicle outdoors.
Advances in MEMS sensors for acceleration and position make knowing the position of the lidar base much easier and more accurate.
Inertial sensors aren't a panacea, especially the MEMS-based ones. MEMS-based inertial sensors are MUCH less accurate than the systems used in survey equipment. Even the best high-end MEMS inertial systems are quite noisy, and while top-of-the-line optical (not MEMS) gyros can be extremely accurate and give you orientation with very low drift over time, the basic premise of an accelerometer makes knowing position impossible over any length of time. Remember - you have to integrate the signal TWICE to get position, and that adds up to a lot of noise. Also, I'm pretty sure from looking at their backpack that they aren't using a MEMS-based IMU.
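To make the double-integration point concrete, here's a quick sketch (my own toy numbers, not any real sensor's spec): feed pure zero-mean noise from a stationary "accelerometer" through two integrations and watch the apparent position wander off, even though the true motion is exactly zero.

```python
import random

def drift_after(seconds, noise_std=0.05, dt=0.01, seed=0):
    """Double-integrate pure sensor noise; the true motion is zero.

    noise_std is a made-up 0.05 m/s^2 white-noise level at 100 Hz,
    roughly in the ballpark of a consumer MEMS part (an assumption).
    """
    rng = random.Random(seed)
    vel = pos = 0.0
    for _ in range(int(seconds / dt)):
        accel = rng.gauss(0.0, noise_std)  # measurement is noise only
        vel += accel * dt                  # integrate once -> velocity
        pos += vel * dt                    # integrate twice -> position
    return pos

# Standing perfectly still for a minute still "moves" you:
print(f"apparent displacement after 60 s: {drift_after(60):+.2f} m")
```

The velocity error is a random walk, and integrating a random walk again makes the position error grow even faster - which is why accelerometer-only position is hopeless without an external correction like GPS.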
And it is nothing new. It's been flown in planes (by the USGS to map the coastline of the US) and attached to vehicles of all kinds.
When you are outdoors you have access to GPS, and that makes all the difference. It gives you the corrections needed to maintain an accurate knowledge of position over long distances and after sharp or erratic maneuvers. Additionally, when your sensors are mounted to a plane or vehicle the scan to scan motion is roughly linear, i.e. planes don't jump up and down and side to side a lot. People walking bounce all over the place, and that makes your position estimates from accelerometers alone next to useless for more than a few steps.
This "advance" is really nothing that anyone knowlegable in the art couldn't predict or produce.
There's a reason why there are lots of companies that provide high-accuracy outdoor mapping, both ground and air-based, and none that provide high-accuracy indoor mapping without requiring fixed, surveyed markers and slow, step-by-step scanning from rigidly mounted scanners. Nobody knowledgeable in the art, as you say, can do it yet.
To do indoor mapping successfully you have to align each data scan with other data scans - the most common way to do this is to use a SLAM (simultaneous localization and mapping) algorithm. While this has been well explored using planar lidar data from a rolling base in 2D, and reasonably well implemented on a rolling platform in 3D (often assuming level floors, etc...), putting it on a human means you have to solve the problem fully in 3D with noisy data and very poor odometry over long distances. This has been demonstrated (somewhat poorly) in the past on a single-floor basis, but aligning data from multiple floors or wings connected by a single long corridor is not at all a solved problem. The end result of most of these indoor approaches is a map that is topologically correct but spatially very flawed. Without a global reference to correct your position, a long, straight hallway may curve a little bit, a turn that should be 45 degrees might end up as 40 degrees, and those errors very quickly add up to a spatially incorrect map.
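A toy dead-reckoning example (corridor lengths and the bias value are made up for illustration) shows how fast small angular errors turn into a spatially wrong map. Walk a square loop of four 50 m corridors; with perfect 90-degree turns you return to the start, but with a 2-degree systematic bias per turn the loop no longer closes.

```python
import math

def loop_closure_error(turn_error_deg):
    """Dead-reckon a square loop of four 50 m corridors with left turns.

    Each turn is estimated as (90 - turn_error_deg) degrees; the return
    value is the gap between where you end up and where you started.
    """
    x = y = 0.0
    heading = 0.0
    for _ in range(4):                    # four corridors, four turns
        x += 50.0 * math.cos(heading)     # walk one corridor
        y += 50.0 * math.sin(heading)
        heading += math.radians(90.0 - turn_error_deg)  # biased turn estimate
    return math.hypot(x, y)               # loop-closure gap in metres

print(f"gap with perfect turns:     {loop_closure_error(0.0):.2f} m")
print(f"gap with 2-degree turn bias: {loop_closure_error(2.0):.2f} m")
```

With no bias the gap is essentially zero; with just 2 degrees of bias per turn the endpoint misses the start by roughly 5 m after only 200 m of walking - exactly the kind of accumulated error a global reference (or good loop closure) has to correct.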
Your linked videos are all examples of extremely impressive hardware and motion control. You don't want a PR2 if you're interested in motion control and dynamics problems; you want one to interact and manipulate in a similar manner and with similar capabilities as a human torso and arms. The PR2 platform isn't for industrial use, and it's not supposed to walk. It's there to give researchers a common platform with extensive software for working on AI problems, not traditional control problems.
The industrial arms are very accurately playing back a path that was programmed in by hand - I certainly don't want to downplay how impressive that demo is, but there's not a lot of 'intelligence' about 3D motion control. The pick and place machines do need some very basic vision tasks to identify and track the targets on the assembly line, but in general the problem is one of motion control (plus, with the Flexpicker, I think a terrific mechanical design helps.) Likewise, bipedal locomotion is a difficult problem, and the dynamic stability of the humanoid robot is a great feat, but again a different field from the towel folding task. To balance/run as in the video requires no external sensing, only internal inertial sensors and a pre-programmed gait (technically it's not running; to run, both feet need to be off the ground at the same time - watch closely, it's actually a fast walk.) I think they are all very impressive, and I have no wish to imply that they are any more/less impressive than towel folding, just that they are different problems.
Yes, I agree, a 50x speedup is painful and completely impractical. The cool thing here is the object recognition and manipulation, especially with soft, flexible objects. Identifying a random towel, picking it up and finding all four corners, then grasping two adjacent corners and folding twice, all dynamically, is not an easy task. There are tons of easier and faster ways to fold towels automatically, but none of them could work from an untidy pile of different types of towels on a table - they'd all need some special loading mechanism and take up half a room. In theory this robot could wander around a room actively looking for towels people had left around and do the same thing - that's what makes it interesting. And no, I'm not saying we should ever plan on using expensive humanoid robots solely to pick up and fold towels, it's just cool to know we're one step closer to the day when they -can- do it.
I couldn't agree more. That Internal Server Error looks way better at 120 Hz on my 45" HD display.
What are you talking about? They specifically say iPhone 3G, but then lump all Blackberry 8300 devices together as one. I think you've got it backwards.
What's so offensive about chaps and forts?
Also, I doubt the average tuition is $40,000.
In Canada where I live,
I don't think you quite realize how well off Canadians have it with cost of education. The two big schools in Pittsburgh are University of Pittsburgh and Carnegie Mellon. U Pitt is a state school, in-state tuition is $13-$16K a year, out of state tuition is $23-$29K a year. Carnegie Mellon is probably where the quoted figure comes from, as their tuition for entering freshman this year was $40,300. Doubt no longer.
Maybe Computer Science should be in the College of Theology. -- R. S. Barton