
Comment Re:That came in at a pretty steep angle (Score 5, Interesting) 206

There are two reasons that I've seen.

Because the rocket is almost out of fuel, even burning only one engine at minimum throttle the thrust-to-weight ratio is more than one (i.e., the rocket would fly, not land). So they can't hover; they have to hit the ship and shut the engine off at the exact moment that the velocity is zero (or very close to it). To help with that problem, they come in at an angle, which spends at least some of the thrust in a direction that isn't upward.
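The "can't hover" timing can be sketched in a few lines. All numbers below are rough, made-up assumptions (stage mass, throttle, descent speed), not SpaceX data; the point is just that if thrust/weight > 1 while burning, there is exactly one ignition altitude that zeroes velocity at touchdown:

```python
# Hoverslam timing sketch. Constant mass and thrust assumed; all
# figures are illustrative guesses, not real Falcon 9 numbers.
G = 9.81  # m/s^2

def ignition_altitude(v_descent, thrust, mass):
    """Altitude at which the burn must start so velocity reaches
    zero exactly at touchdown."""
    a_net = thrust / mass - G          # net upward deceleration while burning
    if a_net <= 0:
        raise ValueError("engine cannot decelerate the stage")
    # kinematics: v^2 = 2 * a_net * h  ->  h = v^2 / (2 * a_net)
    return v_descent ** 2 / (2 * a_net)

# Assumed: ~20 t nearly-empty stage, one engine at ~360 kN,
# falling at 200 m/s. Thrust/weight = 18 / 9.81 ≈ 1.8 > 1.
h = ignition_altitude(200.0, 360e3, 20e3)
print(round(h))  # burn must begin roughly 2442 m up
```

Light the engine higher than that and you stop (and then climb) above the deck; lower, and you hit it still moving.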

The second reason is, as you say, to protect the landing platform. If they run out of fuel (or the engine fails or....), the stage just drops into the ocean rather than crashing into the barge at a very high speed. That said, based on their last several failed landing attempts, that barge can take quite a hit and stay in one piece....

Comment F*ucking idiot (Score 0) 518

See the comment subject. Lindsay, girl, this is really ill-advised. Not only do you make yourself look like a f*cking idiot, but you are insulting religion. Which is a dumb thing to do considering how many -millions- of people take this stuff seriously. Like maybe the cop who pulls you over for a busted tail-light and decides to throw in an extra $300 speeding ticket (37 in a 35 zone) just because she goes to Mass every week and doesn't get the joke that you're a Pastafarian.

Plus this driver's license is a legal document. It's not the proper place for this shit.

Grow up, girl. Get a cute boyfriend to hump your brains out on a regular basis and you won't feel the need to go around with a fucking pot on your head.

Comment Roll it yourself but take responsibility (Score 1) 219

Super-Micro has 36- and 72-drive chassis that aren't horrible in terms of human effort (you can get 90-drive chassis, but I wouldn't recommend them). You COULD get 8TB drives for about 9.5 cents / GB (including the $10k 4U chassis overhead). 4TB drives will be more practical for rebuilds (and performance), but will push you to nearly 11 cents / GB. You can go with 1TB or even 1/2TB drives for performance (and faster rebuilds), but then you're up to 35 cents / GB.

That's roughly 288TB raw for, say, $30k per 4U. If you need 1/2 PB, I'd say spec out 1.5PB - thus you're at $175k .. $200k. But you can grow into it.
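For what it's worth, the $/GB figures above are easy to re-derive. A quick sketch, where the chassis and per-drive street prices are my own assumptions, not quotes:

```python
# Rough $/GB for a DIY 36-bay 4U build; prices below are assumptions.
def cost_per_gb(chassis_usd, drive_usd, drive_tb, bays=36):
    """All-in dollars per GB including the chassis overhead."""
    total_usd = chassis_usd + bays * drive_usd
    total_gb = bays * drive_tb * 1000
    return total_usd / total_gb

# Assumed: $10k chassis, ~$550 per 8TB drive vs ~$250 per 4TB drive.
print(round(cost_per_gb(10_000, 550, 8), 3))  # 8TB build: ~0.103 $/GB
print(round(cost_per_gb(10_000, 250, 4), 3))  # 4TB build: ~0.132 $/GB
```

With different drive pricing you land nearer the 9.5c / 11c figures, but the shape holds: smaller drives pay the chassis overhead more often.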

Note this is for ARCHIVE, as you're not going to get any real performance out of it.. Not enough CPU-to-disk ratio.. Not even sure the motherboard can saturate a 40Gbps QSFP link and a $30k switch. That's kind of why Hadoop nodes with one cheap CPU + 4 direct-attached HDs are so popular.

At that size, I wouldn't recommend just RAID-1ing, LVMing, ext4ing (or btrfsing), n-way foldering, then NFS mounting... since you'll have problems when hosts go down and with keeping any part of the network from stalling / timing out.

Note, you don't want to 'back up' this kind of system.. You need point-in-time snapshots.. And MAYBE periodic write-to-tape.. Copying is out of the question, so you just need a file-system that doesn't let you corrupt your data. Data DEFINITELY has to replicate across multiple machines - you MUST assume hardware failure.

The problem is going to be partial network down-time, crashes, or stalls, and regularly replacing failed drives.. This kind of network is defined by how well it performs when 1/3 of your disks are in 1-week-long rebuild periods. Some systems (like HDFS) don't care about hardware failure.. There's no rebuild, just a constant sea of scheduled migration-of-data.

If you only ever schedule temporary bursts of 80% capacity (probably even too high), and have a system that only consumes 50% of disk I/O to rebuild, then a 4TB disk would take about 12 hours to re-replicate. If you have an intelligent system (EMC, NetApp, DDN, HDFS, etc.), you could get that down to 2 hours per disk (due to cross-rebuilding across many drives).
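The 12-hour figure falls out of a back-of-envelope like this (the drive's sequential throughput is my assumption; single-target rebuild is limited by one disk's write speed):

```python
# Rebuild-time back-of-envelope; the ~185 MB/s sequential figure for a
# 4TB drive is an assumption, not a measured number.
def rebuild_hours(drive_tb, seq_mb_s, io_fraction):
    """Hours to re-replicate one drive if only io_fraction of its
    sequential bandwidth is spent on the rebuild."""
    bytes_total = drive_tb * 1e12
    rate_bytes_s = seq_mb_s * 1e6 * io_fraction
    return bytes_total / rate_bytes_s / 3600

print(round(rebuild_hours(4, 185, 0.5)))  # ≈ 12 hours, single target
```

Declustered / cross-rebuilding schemes win because the rebuild fans out over dozens of drives instead of funneling into one replacement disk.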

I'm a big fan of object file-systems (generally HTTP-based).. They work well with 3-way redundancy. You can typically fake out a POSIX-like file-system with FUSE.. You could even emulate CIFS or NFS. It's not going to be as responsive (high latency). Think S3.

There are also "experimental" POSIX systems like Ceph, GPFS, and Lustre. Very easy to screw up if you don't know what you're doing. And really painful to re-format after you've learned it's not tuned for your use-case.

HDFS will work - but it's mostly for running jobs on the data.

There's also AFS.

If you can afford it, there are commercial systems that do exactly what you want, but you'll need to triple the cost again. Just don't expect a fault-tolerant multi-host storage solution to be as fast as even a dedicated laptop drive. Remember when testing.. you're not going to be the only one using the system... Benchmarks perform very differently under disk-recovery load or random scatter-shot load from random elements of the system - including copying in all that data.

Comment Git for large files (Score 1) 383

Git is an excellent system, but it is less efficient for large files. This makes certain work-flows difficult to put into a git repository, e.g. storing compiled binaries, or having non-trivial test data sets. Given the 'cloud', do you foresee a version of git that uses 'web resources' as SHA-able entities, to mitigate the proliferation of pack-file copies of said large files? Otherwise, do you have any thoughts / strategy for how to deal with large files?
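One way the "web resources as SHA-able entities" idea could be sketched: commit a tiny pointer (content hash + URL) instead of the big blob, roughly what tools like git-annex later do. The pointer format and names below are entirely made up for illustration:

```python
# Sketch of a content-addressed pointer file: the repo stores only the
# hash and a URL, not the large binary. Format here is hypothetical.
import hashlib

def make_pointer(data: bytes, url: str) -> str:
    """Return a small pointer blob to commit instead of the big file."""
    sha = hashlib.sha256(data).hexdigest()
    return f"external-blob-v1\nsha256 {sha}\nurl {url}\n"

ptr = make_pointer(b"pretend this is a 2GB binary", "https://example.com/blob")
print(ptr.splitlines()[0])  # external-blob-v1
```

The repository history then versions the pointer (a few dozen bytes) while the large object lives once on the web resource, verifiable against its hash on fetch.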

Comment network-operating systems (Score 1) 383

Have you ever considered a network-transparent OS layer? If not, why not? I once saw QNX and how the command line made little distinction about which server you were physically on (run X on node 3; ps across all nodes). You could run commands on pretty much any node of consequence.. I've ALWAYS wanted this capability in Linux... cluster-ssh is about as close as I've ever gotten. These days hadoop/storm/etc give a half-assed approximation.

Comment GPU kernels (Score 4, Interesting) 383

Does a GPU-based kernel / scheduler hold any inspiration for you? How might Linux be improved to better take advantage of GPU-type batch execution models, given that you worked at Transmeta on JIT-compiled, host-targeted runtimes? GPUs' thousand-thread schedulers seem like the next great paradigm for exactly the type of machines that Linux does best on.

Comment Missing the point here (Score 1) 189

The endless discussions on the advantages of analog over digital recording always gloss over the fact that a customer has to pay $10-$20 for EACH album purchased in the vinyl analog format, while a $10-$20 64GB SD card stores roughly 1,000 albums (at 12 songs per album and ~5MB per song in 256kbps MP3 format) for the same price. Plus 1,000 vinyl albums fill the wall of a house and weigh 100+ kilos, while a 64GB SD card is the size of a thumbnail.
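The card arithmetic is worth a sanity check (the per-song size and songs-per-album figures are the comment's own assumptions):

```python
# How many albums fit on a card, under assumed encoding figures:
# 12 songs per album, ~5MB per song at 256kbps MP3.
def albums_on_card(card_gb, songs_per_album=12, mb_per_song=5):
    """Whole albums that fit, using 1 GB = 1000 MB."""
    return card_gb * 1000 // (songs_per_album * mb_per_song)

print(albums_on_card(64))  # ≈ 1066 albums on a 64GB card
```

So a single $10-$20 card holds what would cost five figures on vinyl, which is the real point being glossed over.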

The question of preserving sound quality on different media is like being concerned that a Mozart symphony would disappear if the paper that Mozart himself wrote the symphony on crumbled. Instead we just get a new symphony orchestra to play the same notes that Mozart wrote down with the same instruments. And the symphony sounds the same 250 years later without using any recording technology.

It's the musical experience that is important, not the recording of the musical experience.

Comment Re:Wrong Math ? (Score 1) 100

Yeah, I think their math is off as well. My wife and I have the camera that they seem to have used (a Canon 70D - you can see it in some of their "Making Of" shots) and it shoots full-res RAW files in the 25MB to 35MB range. Even if you turn on RAW+JPEG mode, that's at most ~40MB/image. So I'm not clear on how they ended up with that much data unless it's, like, 20 shots per location and 70,000 locations? But then why say 70,000 images?
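Using the comment's own per-image estimate (~40MB worst case for RAW+JPEG on a 70D), the totals are easy to check; the 70,000-image count is taken from the discussion, everything else is that assumption:

```python
# Sanity check on the shoot's data volume, assuming ~40MB per image
# (RAW+JPEG worst case from the comment's Canon 70D estimate).
def shoot_terabytes(images, mb_per_image=40):
    """Total data in TB, using 1 TB = 1e6 MB."""
    return images * mb_per_image / 1e6

print(round(shoot_terabytes(70_000), 2))  # 70k images ≈ 2.8 TB
```

So unless each "image" is really a bracketed stack of many shots, 70,000 images lands around 2-3TB, not much more.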

Comment the primal fundamental reason boys do computers (Score 1) 779

Boys have an inherent fascination with the concept of using symbol manipulation to change the functionality of physical machinery - changing how machines work by typing words and code. Boys are absolutely obsessed with the idea that you can create machines that do what you tell them to do by changing mere symbols (which is what source code is). It is a way of creating life from dead objects by using 'magic' symbols. Religions are based on this. Programming is a type of religion. Boys are very much into this.

Girls, on the other hand, are absolutely fascinated by their ability to create actual living, thinking, unique human beings with their own bodies. They don't need magic symbol manipulation to create artificial life from physical objects. Their bodies create life from their interactions with other life. The lives that girls create can't be controlled like the robots or machinery that boys create, but their human-life creations are infinitely more complicated than what the boys can do.

This is the basic primal fundamental reason why boys are much more attracted to computers and science. Boys spend their lives and careers trying to gain and master the life-creating abilities that girls are endowed with at birth.

Comment Re:Action vs. inaction (Score 1) 307

I have a choice of which computer and handset I buy and (on the computer) what OS to run. I have two choices in ISPs, or I can move to where I *might* have two different choices in ISPs (or not). Moving is a pretty high bar to clear. Buying another handset or changing OSes (or even buying a second handset or computer) because whatever application I want isn't available on the platform I have is a much lower bar.

The problem is that we have chosen not to allow everyone and their brother to dig up the street to lay new cable or string new cable on overhead wires. There are good reasons for that. It does mean, however, that so-called "last mile" delivery, at least to residential areas, is always going to be a place where competition is artificially limited. So, at that point, you either take the cable-TV route and just let the monopoly abuse its customer base with no innovation for *years*, or you need some government regulation to get more competition.

Neutrality is one form of that regulation. Personally, I think that without it, the Internet as we know it will cease to exist and turn, instead, into content channels that are available, like cable-TV channels, on whatever ISP you happen to be attached to. That will severely restrict new websites from being created. Maybe that view is too pessimistic.

Another option (or an additional option on top of neutrality) is to have the public "own" (or at least have a strong interest in the operation of) the last-mile network, kind of like the public owns the roads, and force the actual owner of the cable to allow multiple ISPs to exist on the cable. That could also take the form of prohibiting a single company from both operating last-mile infrastructure and offering public Internet access, or several other similar forms, or it could just mandate that the lines be leased to anyone who can pay to play.

I'd love to hear another set of options that could plausibly be implemented and would encourage competition in content creation and content delivery, but that a) doesn't require government regulation (remember, prohibiting exclusive contracts for last-mile service is, itself, a government regulation against an otherwise legal contract) and b) doesn't involve an unwieldy tangle of wires above and/or below every street.
