Well, the part where you take what materials science researchers have discovered in concrete technology and use it to design the structural members of a bridge certainly seems to fit that description of what an engineer does. Depending on how "cutting edge" the bridge is, I imagine there is more or less engineering involved vs. looking up the right sizes in a table, although I'm not a civil engineer so......
Astronauts are allowed a small (in both weight and size) allotment of personal items, which have to be approved for travel (http://www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts121/launch/qa-hahn.html). They usually leave them there when they come back down (I've heard a few astronauts talk about it). They have also shipped up larger items (presumably including the guitar) using spare space on various spacecraft (like the Shuttle or the Dragon test mission).

If you go read the Wikipedia article on Skylab, you'll see that one of the crews basically mutinied over lack of rest/personal time. Since then, NASA has built rest and down time into the schedules for astronauts on space stations. Presumably the Russians do the same. On ISS, they've sent up leisure items so people don't go nuts. I have seen reference to an ever-growing DVD library on the ISS as well.
As for the camera/memory cards... That was probably on the ISS as part of the standard gear. Part of the mission is to take pictures of stuff on Earth. Since they now have an Internet connection, presumably they'll transfer the pictures and leave the memory cards up there until they stop working, when they'll be sent to an inglorious (and fiery) end on a Progress ship.
MIT is almost certainly using Kerberos for their authentication since a) they invented it and b) that's what they were using at least as recently as 2005. In any event, how Kerberos stores passwords depends on the exact implementation, but in at least some (admittedly old) implementations you could decrypt the password database on the Kerberos key server with a master key stored in a file on that same server.
This has to be 2.3 *peta* FLOPS not giga FLOPS. For instance, in 2010, an Intel desktop processor could do 109 gigaFLOPS (reference: http://en.wikipedia.org/wiki/FLOPS).
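Back-of-the-envelope, in Python, using the ~109 GFLOPS desktop figure cited above (the 2.3 figure is from the comment being corrected):

```python
# Sanity check: 2.3 petaFLOPS vs. 2.3 gigaFLOPS, compared against a
# ~109 GFLOPS 2010-era Intel desktop processor.
peta = 2.3e15     # 2.3 petaFLOPS
giga = 2.3e9      # 2.3 gigaFLOPS
desktop = 109e9   # ~109 gigaFLOPS for a 2010 desktop CPU

# 2.3 GFLOPS would be a small fraction of a single desktop chip...
print(giga / desktop)           # ~0.02 of one desktop CPU
# ...while 2.3 PFLOPS is roughly 21,000 of them.
print(round(peta / desktop))    # → 21101
```

Which is why gigaFLOPS can't be right: nobody builds a supercomputer slower than one desktop.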
Not really... Apollo 10 did everything a Moon landing mission does except the actual landing. The command module went into lunar orbit, which meant it had to carry enough fuel to do a burn to get back to Earth. The Lunar Module was mostly fueled (supposedly not completely, because they were afraid the astronauts would actually land if they had enough fuel to do so); it did a descent burn, dropped toward the surface, and then did another burn to get back to the command module. This proposed mission isn't even Apollo 8, which went into lunar orbit.
Maybe Dragon can be turned into a Mars lander capsule and maybe a Falcon Heavy can launch a manned landing mission, but *this* capsule and *this* mission aren't really a dress rehearsal for landing or even for putting humans into Mars orbit (where they could, for instance, directly control a rover). It seems mostly like a publicity opportunity. That doesn't make it a bad idea to *do* (I'd love to see it happen, especially with private money because it may encourage the much more expensive landing mission to happen), but nobody should be fooled into thinking that it's one step away from actually landing.
Liquid nitrogen ice cream is awesome....
However, the XK6 chillers are a lot more boring. We take room air from under the floor, run it through a cold plate, blow it vertically through the cabinet across 12 Opterons or 6 GPUs, then run it through another cold plate and exhaust it at (approximately) the same temperature it came in at.
> Unless you know how the digital is encoded/modulated/carried, all you're going to hear is random noise.
Only if you look at it from the perspective of digital 1s and 0s. If you look at it from the perspective of analog signals, you'll see square waves or sine waves on a frequency. That doesn't really occur in nature (except from pulsars). So maybe we'd never figure out what the aliens are *saying*, but it would be clear that a signal existed on a given frequency. That said, I don't really believe that we'll find anything "out there", at least not in my lifetime.
It only takes about 10 minutes to get to orbit. I believe the Shuttle and the Progress & Soyuz spacecraft all took about 2 days to dock with ISS. Most of that time is spent matching the orbits perfectly and "catching up" with it in orbit (you don't want to approach too fast, and slowing down requires fuel; fuel is weight, so you want to use as little as feasible).
Dragon is taking a while longer because this is only the second time that the Dragon has flown and the first time it has docked. So, they're going to run a whole bunch of tests to ensure that they can control the spacecraft from the ground and then a bunch more to make sure the astronauts on the ISS can control it. Then, finally, they'll let it get close enough to dock. I suspect (though I have no actual information on this) that once they get past the "test flight" phase, it will take a similar amount of time to Soyuz/Progress/Shuttle to get there.
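To see why "catching up" takes on the order of a day, here's a toy phasing calculation (the chaser's period and the 180-degree gap are assumed illustration numbers, not real mission data; the station period is roughly ISS's):

```python
# A chaser in a slightly lower, faster orbit gains phase angle on the
# station slowly. Assumed: chaser period 90 min, station ~92.7 min,
# initial phase gap 180 degrees.
t_chaser = 90.0    # minutes per orbit (assumed)
t_station = 92.7   # minutes per orbit (approx. ISS)
gap_deg = 180.0    # assumed initial phase angle

# Relative angular rate, in degrees per minute.
rel_rate = 360.0 * (1.0 / t_chaser - 1.0 / t_station)

hours = gap_deg / rel_rate / 60.0
print(hours)  # roughly a day, before any fine maneuvering
```

With these numbers you get about 26 hours of pure phasing; add station-keeping, approach holds, and checkout, and a 2-day rendezvous is unsurprising.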
I was going to recommend QLab myself. I use it for live theater and it is excellent. The free download only outputs 2 channels (but is otherwise fully functional for audio). It isn't that expensive to get the paid version that does essentially unlimited channels. It has MIDI integration for triggers and a variety of other features.
I haven't found anything free that does what it does.
There is actually a use for "copyrighted birdsong". I've purchased a bunch of it as a part of sound effects collections. The "value add" is that it has been well recorded, mixed, compressed properly, and is free of annoying background noise (like planes flying over, cars in the distance, etc). Of course, the companies that do that also sell the collections with a license to redistribute the sound as part of another work (Ie, I can use it in a play or a movie I just can't resell the whole thing as a sound effects collection).
That's only true if conditions are the same in every 20 to 30 minute chunk, which in this case, they aren't. While the spacecraft is on the ground, it has the Earth's magnetosphere and then the atmosphere protecting it from radiation. The further from the ground it gets, the less protection it has. So, in theory, the likelihood of a failure due to radiation is *lower* in the initial phases of flight than it is during cruise, when it is out of range of any type of natural radiation protection.
Meanwhile, you've also got the high-vibration, high-g environment of a launch going on, so if the parts were not soldered well (hopefully they used x-ray inspection for that, but....) the probability of failure is higher there than during cruise, when it isn't shaking at all.
So no, in space flight, any random time is not created equal.
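The point can be made concrete with a toy model (all rates below are made-up illustration numbers, not real reliability data):

```python
# Failure risk per 30-minute chunk differs by mission phase, so "any
# random 30 minutes" is not equally risky. Per-minute rates are assumed.
phases = {
    "launch (vibration, g-loads)":      1e-4,
    "ascent (leaving magnetosphere)":   5e-5,
    "cruise (radiation, no shielding)": 2e-5,
}

for name, rate in phases.items():
    # Probability of at least one failure in a 30-minute chunk,
    # treating each minute as an independent trial at the given rate.
    p_chunk = 1.0 - (1.0 - rate) ** 30
    print(f"{name}: {p_chunk:.5f}")
```

Pick a random 30-minute chunk and the failure probability depends entirely on which phase it landed in, which is the whole argument.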
It has traveled about 21 miles (34 km), according to one of the JPL people who drive it:
Typically, you write a testbench that can, in fact, printf() (sort of). Basically, you end up running a timing-level simulation of the FPGA, or sub-blocks thereof. You're really not developing a piece of software, you're developing a small ASIC.

In any event, after you run timing simulations through your testbench where you put in known inputs and verify that you get the expected outputs, you're ready for anywhere from a few minutes to a few days (depending on the size of the FPGA) of compilation to get your code turned into a bitstream to program the device. Then you run the same inputs from the testbench and see if you get the same output.

At this point, you hope that you remembered to build various debugging registers into your design so you can have a prayer of finding problems. You can also stop the clock and scan out the value of every register on the chip through something called JTAG. You can then import that back into your simulator to try to figure out what has gone wrong. Then restart at the "few days of compilation" stage.
This is why so few people do reconfigurable computing....
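The known-inputs/expected-outputs loop above can be sketched in plain Python (this is the shape of the idea, not a real HDL flow; `dut` stands in for the simulated or programmed hardware, and the saturating adder is an invented example block):

```python
# Testbench sketch: a "golden model" of the block under test, a set of
# known input vectors, and a check that the device-under-test matches.

def golden_model(a: int, b: int) -> int:
    """Reference behavior: an 8-bit saturating adder (example block)."""
    return min(a + b, 255)

def dut(a: int, b: int) -> int:
    # Stand-in for the output captured from simulation or real hardware.
    return min(a + b, 255)

vectors = [(0, 0), (1, 2), (200, 100), (255, 255)]

for a, b in vectors:
    expected = golden_model(a, b)
    got = dut(a, b)
    assert got == expected, f"mismatch at ({a},{b}): {got} != {expected}"
print("all vectors passed")
```

The expensive part is that in the real flow, every trip through that loop on actual hardware costs a full recompile to a bitstream.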
Remember that in the US, until it changes next year, it's "first to invent" not "first to file". This page: http://www.iusmentis.com/patents/uspto-epodiff/ as well as a few others leads me to believe that is not the case in Europe.
Presumably, Apple didn't invent this in 1992, so the "touchscreen toggle design" may be prior art. However, it's pretty plausible that they invented it before the N1M came to market (it was announced in 1st quarter of 2005 and the Apple patent application was from later that year). If that's true, the N1M "doesn't count" and it's a more subjective judgement on whether the touchscreen toggle is similar enough.
Another key point is that the Dutch patent has apparently been through a court, whereas the American one hasn't. That means that the only people who have considered prior art for this are a small number of patent examiners.
"How hard would it be to let the pad track where you tap on the button area and have an option let it send left/center/right events?"
You need one of the multitouch trackpads, but....
Right Click: System Preferences -> Trackpad -> "Tap Trackpad Using Two Fingers Sends Secondary Click".
Alas, no built-in solution for middle click, but try this: http://clement.beffa.org/labs/projects/middleclick/