How is this news? Oh right, it's not - it says so right in the title.
Can I start submitting stories about how h.264 conversion consumes CPU cycles? I mean, it theoretically doesn't need to - I can fathom a zero-work scenario where it just happens. I can even give a play-by-play about how I open my system monitor to verify performance. Amazing stuff!
Honestly. How did this BS make it to the front page?
Sure, I'd love to hear how h.264 conversion doesn't theoretically need CPU cycles. How does the zero-work scenario function? The problem with iTunes is that it uses a significant and noticeable amount of CPU when it shouldn't, and apparently has been doing so for years, with users complaining about this left and right.
I am going to call BS on this one. These are indication systems. Think of smashing your speedometer and turning the needle with pliers and expecting the car to go faster.
Remote control is not a direct connection. It follows communication paths, and the information and control path apparently connects through the internet, through both the display and control paths.
No one needs a direct connection within the airplane -- all ya need to do is control it through the internet, at any receiver path and any transmitting path, with additional directional antenna paths.
You can't do it from onboard; it has to be done from a remote site, and it will involve additional receiver and transmitter packages not included on the Android phone. (You don't even have to be near the Android used for control.)
Are you in sales or marketing by any chance? Because this is the sort of keyboard-heavy, information-free verbiage that typically comes from them.
That's a lovely boat. It's a work boat, not a pagoda on a glass brick like some other luxury yachts we've dissected. It might even weather a real storm in a real ocean, as opposed to sinking at the dock as soon as the tide turned.
He should donate to a real oceanography group, Scripps, Texas A&M, hell, NOAA could probably use it.
Or, if he will simply transfer the title to me, I'll pay for the moorage and start buying lottery tickets to put fuel in the thing. Maybe a kickstarter project....
Given that it was used in oceanographic research cruises in the Antarctic, I think it's probably already weathered some real storms and been through some pretty crazy seas before. The specs on the thing make it look like it'd be a nice working vessel.
Is internet traffic really only 26 petabytes a month? While that is a big number, it sounds awfully low to me, as the place I work does 15 terabytes a month and we are little more than a minuscule pimple on the face of the internet.
That's just wrong. Open Science Grid transfers about 1.4PB a day and I seriously doubt OSG uses a significant fraction of the bandwidth on the net.
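For what it's worth, this is easy to check with the numbers already quoted in this thread (the ~1.4PB/day OSG figure; the 30-day month is my rounding):

```python
# Sanity check: does the quoted OSG transfer rate alone exceed the
# claimed total internet traffic? Figures are from this thread, not
# independently verified.
osg_pb_per_day = 1.4
days_per_month = 30                      # rough month length
osg_pb_per_month = osg_pb_per_day * days_per_month

claimed_internet_pb_per_month = 26

print(osg_pb_per_month)                  # ~42 PB/month
print(osg_pb_per_month > claimed_internet_pb_per_month)  # True
```

So one grid project alone would already blow past the "total" for the whole internet.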
The containment levels are explained in this article. I believe AIDS and influenza are both BSL-2. I think the levels are based on ease of infection, potential severity, and available treatments. AIDS is pretty hard to get outside of fluid transmission, but it's pretty severe once you get it. OTOH, influenza A is fairly easily transmitted but most people recover (~30,000 die each year from it in the US).
I'm pretty sure we're all meant to run a LOT more than we do - and we've forced ourselves to stop due to social pressure.
Hate to break it to you... but we're not. Humans run worse than just about every similarly sized animal on the planet. The reason that we are the way we are is most definitely not because we can run fast.
It's up to you whether you run - I hate running personally, but love swimming, football (yes I know that involves running), rowing, tennis (see before). My knees are not cut out for long distances.
Actually, if you look at the stats, people tend to be among the most efficient runners on the planet (with kangaroos coming in second). Although quadrupeds can run faster, they tire out much more quickly and overheat. The end result is that over longer distances (45+ km), humans are pretty competitive with animals such as horses. There's actually a hunting technique called persistence (or exhaustion) hunting, where people chased a deer or whatever until it collapsed from exhaustion and then ran up to it and killed it. It works because running on two legs is more efficient than running on four, and because people have a few adaptations (e.g. hairless skin) that allow them to get rid of heat more easily.
Newton's law of gravity is broken as well. The thing is that although it's inaccurate and broken, it's a really easy approximation to how gravity works that gets you results good enough that people still use it for most situations. SR is similar: it doesn't work in non-inertial frames, but within inertial frames it's good enough in most situations and a lot easier to use than GR.
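To put a number on "good enough": the leading general-relativistic correction to Newtonian gravity is of order GM/(rc^2). A quick sketch (my own illustration, using standard constant values) for Earth's orbit around the Sun:

```python
# Size of the dimensionless relativistic correction GM/(r c^2) for
# Earth's orbit -- a rough measure of how far off Newton's law is here.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # mass of the Sun, kg
r = 1.496e11       # Earth-Sun distance (1 AU), m
c = 2.998e8        # speed of light, m/s

correction = G * M_sun / (r * c ** 2)
print(correction)  # ~1e-8: parts in a hundred million
```

An error that small is why nobody reaches for GR to plan a satellite launch.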
No, it cuts reliability roughly in half. If either one of the drives fails, you cannot recover the data off the other.
It's worse than that. If p is the probability that one drive fails in a given timespan, the chance of your array staying up is (1-p)^2, so the probability of the array going down is 1-(1-p)^2 = 2p - p^2: add the two single-drive failure probabilities, then subtract the double-counted case where both fail. That's nearly double the failure probability of a single drive, so things are worse than just having two independent drives.
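A quick numerical check of that, assuming independent drive failures (a simplification; drives in the same box often fail in correlated ways). The 5% per-drive figure is just an illustrative number:

```python
p = 0.05  # hypothetical per-drive failure probability in some window

p_array_up = (1 - p) ** 2        # RAID 0 survives only if BOTH drives do
p_array_down = 1 - p_array_up    # equals 2p - p^2

print(p_array_down)              # ~0.0975, nearly double one drive's 0.05
assert abs(p_array_down - (2 * p - p ** 2)) < 1e-12
```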
There are not many problems these days that cannot be parallelized and split up to run on a large number of off-the-shelf machines. It is much easier to grow a Beowulf cluster to add performance than to redesign to eke out every bit of capability from top-of-the-line hardware. It's also much easier to redesign your problem so that it can take advantage of parallelism. I agree that this was probably a boondoggle by a politician wanting to get some publicity for himself.
You're mistaken. There's a large class of problems that are pleasantly parallel and can be split up like you say (e.g. einstein@home or seti@home type problems). However, any problem that requires a lot of internode communication -- computational fluid dynamics, gravity simulations, weather or climate simulations/forecasting, combustion/flame problems (e.g. modeling engines), molecular dynamics -- will require a system like this. A Beowulf cluster using ethernet to connect the nodes will leave most of the CPUs waiting for information from neighboring nodes before they can go through an iteration. A lot of the cost in a system like this comes from having very low-latency, high-speed network connections. Ideally, you'd want every CPU connected to every other CPU, but that is impossible, so you end up trying to maximize the number of connections and the bandwidth while minimizing collisions with other CPU-to-CPU communications for a given amount of money. It's not cheap by any means.
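A toy model of why interconnect latency matters so much for these tightly coupled problems (all numbers here are my own illustrative assumptions, not measurements of any real cluster):

```python
# Each iteration splits a fixed chunk of compute across N nodes, then
# pays one latency-bound exchange with neighbors before continuing.
def speedup(n_nodes, compute_s, latency_s):
    serial_time = compute_s
    parallel_time = compute_s / n_nodes + latency_s
    return serial_time / parallel_time

n = 64
compute = 1e-3  # 1 ms of work per iteration, divided among the nodes

print(round(speedup(n, compute, 50e-6)))  # commodity ethernet (~50 us): ~15x
print(round(speedup(n, compute, 1e-6)))   # HPC interconnect (~1 us): ~60x
```

On 64 nodes, the ethernet cluster spends most of each iteration waiting on the network, exactly the effect described above.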
Well, if you want that kind of resource, Amazon is very happy to sell it to you these days. In 2008, it was still a novel concept. The real stupidity was assuming that a government project should spearhead such a development, especially with a huge one-time investment in hardware.
What???? Amazon EC2 instances aren't comparable because they have much higher latency for internode communication. In any case, if you have a decent workload, EC2 is really expensive. Using 2 large instances for compute nodes and 50TB of storage will cost you about $7500 a month. Amazon's calculator gives an estimate of $30k a month for an HPC cluster. At that pricing, you can easily buy comparable equipment and come out ahead even with power, maintenance, and people factored in, if you're using it regularly. EC2 only makes sense if you need this sort of computational power for a week or so every few months.
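Here's the back-of-the-envelope version of that break-even argument. The $30k/month figure is Amazon's calculator estimate as quoted above; the cluster purchase and running costs are assumptions I made up for illustration:

```python
ec2_monthly = 30_000        # Amazon calculator estimate for an HPC cluster
cluster_purchase = 400_000  # assumed up-front cost of comparable hardware
cluster_monthly = 5_000     # assumed power, cooling, and admin per month

# Months of continuous use before owning beats renting
break_even_months = cluster_purchase / (ec2_monthly - cluster_monthly)
print(break_even_months)    # 16.0 -- well under the lifetime of the gear
```

With different assumed numbers the crossover moves, but for any workload that runs year-round it lands well before the hardware wears out.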
Whether you're liberal or conservative, does anyone really believe that the government spending tax dollars on expensive speculative investments makes sense?
You mean like basic research on things that may not be realizable for a decade or two? What's your feeling on the internet, which grew out of research on networking in the 70s and 80s? What about the funding for ultrafast networking that's happening now? What about things like the Tevatron and LHC, which resulted in things like MRIs being made feasible?
Personally, I'm all for it.
In my experience, it would be better to provision a cluster of EC2 boxes to run the task than to build a purpose-built supercomputer (with some exceptions). One disadvantage of clustered machines is longer communication latency, so tasks that require lots of process-to-process communication will run slower. Many problems can be tweaked, with their search spaces sliced up, so that this latency is not a big deal.
There are huge classes of problems where you can't tweak things like this. Basically, any simulation where things interact over large distances, or where there is a lot of communication, can't really be shoved onto a cluster like that. For example: computational fluid dynamics (e.g. anything looking at air or water moving over surfaces), weather simulations, molecular dynamics, simulating gravity, etc. All of these types of problems will run like crap if you try to use EC2 instances for them.
Also, have you really priced out what computation and data storage on EC2 cost? There are a few studies showing that EC2 on-demand instances will cost you 2-3 times more than purchasing a comparable server, even with power, cooling, and maintenance/administration factored in. See this or this, for example. EC2 is great if you want to explore certain problems and need to temporarily scale up, or want the ability to scale up on demand, but if you have a base level of work that you'll be doing all the time, it's much more efficient to buy your own hardware. That is doubly true if your problems need any significant amount of storage space.