Comment wrong rocket in the animation (Score 3, Informative) 100

The rocket launch animation in the video shows not the Falcon 9 + Dragon but the not yet fully developed Falcon 9 Heavy (SpaceX), which according to the SpaceX launch manifest will have its first test launch later this year (though judging from SpaceX's previous track record of delays, I'd guess late 2013 is more likely).

       

Comment Re:RIP: SpaceDev, hello newcomers! (Score 1) 500

But it should probably be mentioned that his dream died long before he did (in 2008, by the way). Their first planned satellite, NEAP (Near Earth Asteroid Prospector), would have landed on an asteroid and claimed it as SpaceDev property, but they ran into funding problems and cancelled the project. As a subsidiary of the Sierra Nevada Corporation, they are currently building the Dream Chaser spacecraft.

As for the plans of Planetary Resources Inc., the only way I see this failing is if the investors pull out after 4-5 years, once they realize that it will be far more expensive than originally thought and get annoyed at the amount of red tape they have to wade through to get clearance for Earth orbit insertion of an asteroid. So, probably very likely!

Comment Launch window (Score 1) 97

Can he just delay by one week? There are only small launch windows for Cape Canaveral launches to the ISS. Does anybody know the approximate window size for a Falcon 9 + Dragon launch to the ISS? Also, according to this ISS launch schedule there is a Soyuz launch on May 15th, so if he delays too much he will probably have to push the launch date back by at least a month.

I wonder how the requirements at NASA compare with those at SpaceX concerning mission failure probabilities. Reaching a 90% chance of success is probably easy, but a 99.99% chance of success is much harder.
And then you could ask: if NASA or SpaceX has such high requirements, why didn't SpaceX plan accordingly? Are they forced to promise early launch dates to keep investors happy?
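To get a feel for why 99.99% is so much harder than 90%, here is a back-of-envelope sketch (my own toy model, not an actual NASA or SpaceX requirement): if a launch depends on n independent critical components, each with reliability r, overall mission success is roughly r**n.

```python
def required_component_reliability(n_components, mission_target):
    """Per-component reliability needed to hit an overall success target,
    assuming n independent critical components (a deliberate oversimplification)."""
    return mission_target ** (1.0 / n_components)

for target in (0.90, 0.9999):
    r = required_component_reliability(1000, target)
    print(f"mission target {target}: each of 1000 parts needs r = {r:.9f}")
```

With 1000 critical parts, a 90% mission target tolerates roughly one failure in ten thousand per part, while 99.99% demands about one in ten million, which is why the last few nines are so expensive.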

Comment Re:Open Source (Score 1) 350

I don't know the details of the recent embargo enacted against Iran, but I would guess that nobody is allowed to export to Iran any component that could be used to build high-tech weapons or reconnaissance systems. On the other hand, China will probably supply all these things with or without an embargo.
Sorry, I'm too lazy to look it up on Wikipedia; maybe someone here knows more about this?

Comment Datasheet (Score 1) 227

Someone asked how the water production rate depends on air humidity. Here is a link to the datasheet of the WMS1000 from their website.

Depending on the available power, the production rate drops to 350-550 liters/day in desert areas with an average relative air humidity of 30-35%.

There is also a neat picture of its internal components.

Comment Re:We've probably gone farther (Score 1) 238

Interesting, I obviously didn't know this one. It's always revealing to realize that most of the things we argue about today have been discussed over and over again by previous generations.

Even if the destruction is not in your own country or economy (so that one might think one does not need to bear the cost of repair), it will eventually come back to bite you in other forms (take the poppy trade that emerged from the Afghanistan war and the resulting increase in drug trafficking as an example).

Comment Re:We've probably gone farther (Score 2) 238

But then, strictly speaking, the money spent on wars is not really wasted. It is used to pay wages, buy weapons, and invest in military research. Military personnel then put the money back into the economy when they spend it on houses, cars, laptops, smartphones, etc. Engineers, mechanics and so on are paid to design and build the weapons.
Even the fuel wasted by the military is bought from the oil companies, who in turn buy new oil drilling platforms, which have to be designed and manufactured by engineers and technicians.

The idea that a lot of the technology comes from the space program is mostly a myth. Take Teflon for example: discovered in 1938, used as corrosion protection in the Manhattan Project in 1943, then first applied to kitchen products in 1954, long before Sputnik.

The problem is that you can't expect people to be excited about or involved in something that ultimately influences them only very little. People are concerned with their own survival (which in today's world becomes more and more expensive), and not everybody can be a spacecraft engineer or scientist.
The main interest of humans has always been to have work and be able to support themselves and their families, and if the military-industrial complex provides that, you can't blame them for trying to defend it.

We space enthusiasts were lucky for a while that space exploration was fueled by the Cold War, when defense interests overlapped with space exploration interests. Without the "need" for ICBMs we would never have built any orbit-capable rockets at all.

I mean, how do you justify sending a space probe to the heliopause to the average taxpayer? "Please give us your money so that a couple of scientists and graduate students will be able to publish some papers and advance their careers in about 25 years"?
Even within the scientific community, the (real) interest in heliopause research is probably very small. The timescales of such projects are just too long. A heliopause probe would take maybe 5 years to develop and another 20 years to reach its destination. That's almost the duration of a typical research career; nobody working in science can afford to wait that long for any results.

The way I see it, it is simply not yet the time for such research. Just as 16th-century physicists would not have been able to learn anything about subnuclear particles because the technology was not available at the time, we today have to wait for a significant amount of space exploration before we can properly investigate the outer regions of our solar system.
Once we have research outposts on Titan or Pluto that can send their own probes to the Oort cloud, this sort of research will be much more affordable and simpler.
Alternatively, we have to wait for better space propulsion technology so that we can send those probes to their destinations faster.

In the meantime I'm not particularly worried that we will descend into savagery again (at least technology-wise). The average citizen has grown too fond of their little tech gadgets and other everyday helpers to just throw it all away. We are at a stage where we will defend with all our strength the right to access the internet, and so on and so forth.
The time for proper space exploration will come, just maybe not in our lifetimes...

     

Comment Re:99.8% data loss (Score 1) 156

The main point of a quantum network is to produce entangled qubit pairs and to store them long enough to use them. With a 99.8% failure rate you just have to try about 500 times and do it faster than your decay rate (which is easy). Once you succeed, the important figure of merit is your state fidelity (how close the real state is to the desired ideal state). Here they report around 85% fidelity, which means that if you create 100 entangled pairs, roughly 85 of them will be pure (in the sense that any subsequent operation with them, like a teleportation, will work).
That is not too bad, but not the best in the field. There are so-called entanglement purification protocols with which you can filter out the bad apples and get almost 100% fidelity.
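A quick sanity check of the "just try 500 times" claim (my own back-of-envelope arithmetic, assuming independent attempts with a 0.2% success rate each):

```python
def p_at_least_one(p_success, n_tries):
    """Probability of at least one success in n independent attempts."""
    return 1.0 - (1.0 - p_success) ** n_tries

print(p_at_least_one(0.002, 500))   # ~0.63: 500 tries gives about 2-in-3 odds
print(p_at_least_one(0.002, 2500))  # ~0.99: a few thousand tries is near certain
```

So 500 attempts is really just the expected number of tries (1/0.002); you need a few multiples of that to be nearly certain, which is fine as long as the attempt rate is much faster than the qubit decay rate.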

Comment Re:Perhaps I missed this qbit ... (Score 1) 156

The entanglement procedure relies on the polarization of the photons. What they do is apply a light pulse to atom A which prepares its state into one of two possible states (qubit A: 0 or 1). As long as you don't measure it, the atom is in a superposition of those two states (so you cannot, even in principle, say whether it is 0 or 1). However, depending on this state, the photon emitted from atom A during the state preparation will have a specific polarization (so before you measure qubit A, the photon is in a superposition of two polarizations). After going through the fiber, this photon is absorbed (with 0.5% probability) by atom B, and depending on the polarization, this prepares that atom in one of two states as well (this is then qubit B: 0 or 1). But since the polarization is not yet determined, qubit B is also in a superposition of 0 and 1 (again, you can't even in principle know the state before measuring).
Now qubits A and B are entangled, because measuring either atom A or atom B automatically determines the state of the other atom as well.
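A toy numerical illustration of that last point (my own sketch, not the actual experiment): sample measurement outcomes from the Bell state (|00> + |11>)/sqrt(2) and check that the outcome of qubit A always fixes the outcome of qubit B.

```python
import numpy as np

rng = np.random.default_rng(0)

# Amplitudes of the two-qubit Bell state (|00> + |11>)/sqrt(2),
# indexed by the basis states 00, 01, 10, 11.
state = np.zeros(4)
state[0b00] = state[0b11] = 1 / np.sqrt(2)
probs = np.abs(state) ** 2          # Born rule: outcome probabilities

samples = rng.choice(4, size=1000, p=probs)
a_bits = samples >> 1               # measurement outcome of qubit A
b_bits = samples & 1                # measurement outcome of qubit B
print(np.all(a_bits == b_bits))     # True: outcomes are perfectly correlated
```

Each qubit individually comes out 0 or 1 with 50/50 odds, but the pair is never 01 or 10; that correlation is what the entanglement buys you.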

By the way here is a link to arxiv preprint:
An Elementary Quantum Network of Single Atoms in Optical Cavities

Comment NOT the first quantum network (Score 1) 156

(Disclaimer: IAAP) This is actually not the first realization of a quantum network; that's not the point. Chinese and other researchers have already created a quantum network link over distances of up to 16 km (see for example here: Experimental free-space quantum teleportation (abstract only)), although strictly speaking that was only a two-point link.
A quantum repeater, which is an elementary part of a quantum network, has also been demonstrated with atomic ensembles.

The new thing here is that their quantum repeater uses a single atom in an optical cavity as a photon storage device. The advantage of using only a single atom as qubit storage is the potentially much longer storage time compared to a group of atoms, but it is much more difficult to get enough coupling strength to the photon.
This is why they use a cavity which is resonant with the atomic transition used in their setup. But even then, you only successfully write a photon in 0.5% of all tries.
That doesn't actually matter, though: all you need is to establish an entangled pair before your storage time runs out, so you just need to repeat the write attempts fast enough.

To clarify: the applications are in quantum key distribution and in distributing entangled qubits for quantum computing purposes. It cannot be used for FTL communication, and it will be a very long time, if ever, before this can be used for superior data transfer (look up quantum dense coding).

This is all basic research into how to handle single atoms, how to couple them to photons (so that you can use optical fiber networks), and how to increase the fidelity of state preparation and the storage time (the stronger you make the coupling to the photons, the faster any quantum state will decay due to coupling to the environment). But the main purpose is (to be as cynical as possible) to advance the careers of the principal scientists involved, ensure the flow of grant money and produce PhDs :)
Seriously, the setups they use (ultra-high vacuum, laser cooling, etc.) will NEVER be used in any commercial application. You'd need some sort of solid state device, where the physics is quite different, and you'd have to do the research all over again.
   

Comment Re:little disappointed (Score 1) 289

Well, I probably shouldn't have used the word monkey... But let's be honest, wouldn't it be better? 3D doesn't add any new creative content to a movie; it's a gimmick. And as such, the engineer in me was hoping for something like this:
Me: "That is some pretty awesome stuff you have there, guys! So how did you do it?"
Guys from Stereo D: "Well, you see, we developed this very simple but clever theory and derived this nice little formula here. We then put this formula into our software and voila, 3D conversion! And by the way, you can now buy this software and do this with all your movies!"
You know, so that the work those people did was not to go through every frame themselves but to design the software that does it for them.
Don't you think it is more impressive to design and build a robot that builds your house automatically, instead of building the house yourself?
If the house is supposed to be a piece of art, the answer would probably be no; but in this case the original movie was the piece of art, while the 3D conversion was not (it doesn't add any new creative content).

I'm still on /. here, right?

Comment little disappointed (Score 1) 289

I am a little disillusioned by the article, to be honest. It was always clear to me that once you know the depth of every pixel in a movie frame, turning that depth information into a parallax projection is the trivial part (just as, once you know the color of a texture in a B&W film, actually putting that color into each frame is easy).
The hard part is getting the depth or the color in the first place. So I always thought (or the nerd in me hoped) those studios had some kind of awesome general algorithm or technique for extracting depth information from the 2D image (in the case of 3D conversion) or color information from the pixel grey value (in the case of B&W colorization).

The reality is that there is no such technology. When filming a scene in 2D, the depth information is lost in the projection process. There are some tricks you can use:
(a) depth of focus (from Gaussian beam theory you can relate the distance from the focus to the spot size),
(b) relative movement of objects (closer objects move faster across the frame than more distant objects),
(c) and brightness (further away = darker).
But these don't work in general, only in special situations: (a) requires a small scene with a small depth of focus so that you can see the varying sharpness of objects, (b) only works in situations where the movement is linear, for example (in a rotating setting like in The Matrix, more distant objects move faster), and (c) I guess depends on the illumination of the scene.
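To make trick (b) concrete, here is a minimal sketch under my own simplifying assumptions (a pinhole camera translating purely sideways; the function name and all numbers are made up for illustration): a point at depth z drifts across the image at roughly focal_px * cam_speed / z pixels per frame, so apparent speed is inversely proportional to depth.

```python
def depth_from_parallax(pixel_speed, focal_px, cam_speed):
    """Depth estimate from apparent image motion (pure sideways camera translation)."""
    return focal_px * cam_speed / pixel_speed

# An object sliding at 20 px/frame is twice as close as one at 10 px/frame.
near = depth_from_parallax(20.0, focal_px=1000.0, cam_speed=0.5)
far = depth_from_parallax(10.0, focal_px=1000.0, cam_speed=0.5)
print(near, far)  # 25.0 50.0
```

The moment the camera rotates or zooms, or the object itself moves, this simple relation breaks down, which is exactly why the trick only works in special situations.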

To summarize: the studios basically just hire an army of frame monkeys to painstakingly go through every frame using one of the techniques (a), (b), (c) (or combinations thereof), or just use the intended distances from the original production (they know how far the camera was from the object), and paint the depth map pixel by pixel over the frame until it looks realistic.
I should have known, really: if there were an algorithm that works in general, all you'd have to do is load the movie into a supercomputer, wait a few hours, and get the 3D version back. It would be dirt cheap and everyone could do it at home :). Come to think of it, even with such a general algorithm you'd still need some QA guy going through every frame to make sure it looks good (so instead you need an army of QA frame-checker monkeys, and you're back at square one).
 

Comment Maximum necessary Triangle rate/Shannon theorem (Score 1) 331

The article mentions that at a resolution of 8000x4000 you need a triangle rate of at most 40 billion triangles per second to render your scene perfectly, citing the Shannon sampling theorem. Probably I'm just missing something here, but how does he arrive at that number?
I'm guessing he argues that you have to process 8000*4000*72 FPS = 2.3 billion pixels per second, and the smallest sensible triangle is 1 pixel in size, so you need a triangle rate of at most 2.3 billion/sec.
Now there are two levels of sampling going on here. First you use triangles to sample reality (not actually sampling, more an approximation), and then you sample those triangles with the pixels on your screen. The Shannon theorem says that with a maximum bandwidth/maximum frequency fmax in your scene, a sample rate of 2*fmax is sufficient. The resolution then gives you maximum frequencies of 1/4000 in one direction and 1/2000 in the other; these are the highest frequencies you can sample in your scene. With the triangles you can now approximate these frequencies, but triangle edges always have infinite bandwidth (think of the spectrum of a sawtooth function), so to reproduce them perfectly you would need an infinite number of triangles; it just doesn't make sense to make them smaller than 1 pixel.
I mean, basically you don't even need Shannon here; it's 1 pixel = 1 triangle, right?
So how does he get from 2.3 to 40 billion?
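Just reproducing the arithmetic behind my guess above (the 8000x4000 resolution and 72 FPS are from the article; everything else is straightforward multiplication):

```python
width, height, fps = 8000, 4000, 72

pixels_per_second = width * height * fps   # one 1-pixel triangle per pixel
print(pixels_per_second)                   # 2,304,000,000, i.e. ~2.3 billion

quoted_rate = 40_000_000_000               # the article's 40 billion tri/s figure
print(quoted_rate / pixels_per_second)     # ~17.4x unexplained gap
```

Even at 120 FPS the pixel count would only reach about 3.8 billion per second, so the factor of roughly 17 has to come from somewhere else entirely (overdraw, supersampling, or something I'm missing).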

Comment Icing problems (Score 1) 65

The Skytree seems to have problems with ice building up on its steel beams during cold winter days (mentioned in this article here: CNet). After a quick Google search, this seems to be a general problem of tall structures with open truss frames (for example, here is a video of ice falling from a TV tower: YouTube).
I guess that is usually not such a big problem, as TV towers are built in parks or large open spaces, but the Tokyo Skytree is built in the center of the city, surrounded by a lot of buildings. Apparently they had to install electric heaters on the Skytree to prevent ice from forming.

Comment Re:So casual... (Score 1) 82

Well, in my experience the standard of high school education is only half the story. Sure, the better it is, the more easily you will master challenges later in life, but I think you can close most of the gaps during college or university, as most of the things you learn in high school are not that important.
I'm from Germany, and in my high school (I graduated in 2003) there were no multiple choice tests, but mostly essay-type questions. In science tests you were given a question and had to develop the answer, just like at university. I had no problems with that; I thought it was normal, and the math, physics and chemistry tests were actually quite fun to do (not so much the language tests, those were a fucking nightmare :)). In retrospect, I guess it was quite a good high school education, and I basically aced my first university semesters of my physics degree.
But after that I tanked; my performance got worse and worse, and I just barely graduated from university with average grades. Basically because there was nothing to motivate me.

Anyway, my point is that a good high school education doesn't help you very much later in life. It gives you a good head start, but you lose that advantage pretty fast. The problem with today's youth is not a lack of good education but the lack of visible(!) pioneering research programs. Seriously, if the government started a massive Mars program today (probes, manned missions, colonization, etc.), they would have no problem finding young people in their 20s willing to rise to the challenge (if the challenges are completely novel, a 20-year-old is just as good as a 40-year-old).

I think the reason we wouldn't trust people in their 20s with today's kind of technology is that we have become far less tolerant of failure. Before Apollo, rocket scientists and engineers blew up engines and rockets regularly. But with today's limited funding, you have one failure and you are out of the game.
Look at SpaceX: they had a lot of failures in the beginning, but now they have one delay after another because they want to make 100% sure that their Falcon 9 rocket does not fail. One failure and SpaceX would lose a lot of business. For that you want experienced people.
