Comment What about Mac apps? (Score 1) 21

The support document talks about what happens to your iPad/iPhone apps but doesn't specifically say whether Mac apps come along as well. Some of the choices Apple made here seem odd, like replacing your primary account's payment method, Up Next queue, and podcast data. I have an ancient Apple account with some older Mac App Store purchases in it, but for the last 10 years I've been using my primary account. I hope I don't suddenly nuke, say, my Apple TV data, since the secondary account has none...

Comment Re: This opinion isn't new and is still wrong. (Score 3, Informative) 411

So far for 2017, Linux has 128 code execution vulnerabilities whereas Microsoft has 71.

Because each Linux vulnerability is reported separately for multiple distributions, and because Linux vulnerabilities are found faster and therefore fixed faster. However you want to spin it, Windows is the one getting successfully exploited in multiple ways, to the point that new Windows vulnerabilities are hardly news any more, whereas it's big news any time a hole shows up in Linux, and then very few fall victim to it... partly because of the early and widespread disclosure, but more because Linux vulnerabilities typically require local access, a login shell, etc., whereas a dodgy Flash file is often enough to take out a Windows box.

Comment Re:Exception to butterage (Score 1) 303

I just grabbed the Opera .deb package and installed it; total elapsed time was roughly 30 seconds, and it is suh-weet. Thanks to whoever posted the suggestion. Chrome is really pissing me off these days. Firefox will still likely be my main browser because of its vastly superior tab handling and because it's an actual open-source project.

Comment La Jetée (Score 2) 1222

The best science fiction movie of all time is a short (half-hour) black-and-white French film called "La Jetée" [the title refers to the airport's observation deck]. It is told in still photographs with narration, like Ken Burns' PBS documentaries.

    A child sees a man crumble and die while visiting the observation deck at Paris Orly airport in 1962. Shortly afterward there is total nuclear war. Because he is obsessed with this image of a man's death, he is selected as a guinea pig in an experiment to send him time-traveling into the future in order to get an energy source to restart civilization. He succeeds in moving through time, but keeps ending up in the pre-war era. There, he meets a beautiful woman and falls in love.

    It doesn't sound like much, but it is a true masterpiece. MIT even published a coffee-table book detailing every scene.

    It is super low-budget. One scene that shows the Arc de Triomphe in Paris with a huge chunk blown out of it actually shows the pinhole from a thumbtack.

    David Bowie did an homage to it in the video for a song from his Black Tie White Noise album in the early 1990s.

    It is available on DVD from most big-city library systems.

Comment Re:That came in at a pretty steep angle (Score 5, Interesting) 206

There are two reasons that I've seen.

Because the rocket is almost out of fuel, even burning only one engine at minimum throttle, the thrust-to-weight ratio is more than one (i.e., the rocket would fly, not land). So they can't hover: they have to hit the ship and shut the engine off at the exact moment the velocity is zero (or very close to it). To help with that problem, they come in at an angle, which spends at least some of the thrust in a direction that isn't upward.
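
To put rough numbers on the can't-hover problem (the so-called hoverslam), here is a back-of-the-envelope sketch of the timing; the thrust-to-weight ratio and descent speed below are made-up placeholders, not SpaceX figures.

    # Because thrust-to-weight > 1 even at minimum throttle, the stage decelerates
    # the whole time the engine burns, so ignition has to be timed so velocity
    # reaches zero exactly at the deck. All numbers are illustrative placeholders.
    G = 9.81                # m/s^2
    twr_min = 1.5           # assumed thrust-to-weight ratio at minimum throttle
    descent_speed = 200.0   # m/s, assumed vertical speed just before ignition

    net_decel = (twr_min - 1.0) * G                       # net upward acceleration while burning
    burn_altitude = descent_speed**2 / (2 * net_decel)    # from v^2 = 2*a*h
    burn_time = descent_speed / net_decel

    print(f"Ignite at ~{burn_altitude:.0f} m and burn for ~{burn_time:.1f} s")
    # Ignite later and the stage hits the deck still moving; ignite earlier and it
    # stops above the deck and starts climbing again -- there is no hover option.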

The second reason is, as you say, to protect the landing platform. If they run out of fuel (or the engine fails, or...), the stage just drops into the ocean rather than crashing into the barge at very high speed. That said, based on their last several failed landing attempts, that barge can take quite a hit and stay in one piece.

Comment F*ucking idiot (Score 0) 518

See the comment subject. Lindsay, girl, this is really ill-advised. Not only do you make yourself look like a f*ucking idiot, but you are insulting religion, which is a dumb thing to do considering how many -millions- of people take this stuff seriously. Like maybe the cop who pulls you over for a busted tail-light and decides to throw in an extra $300 speeding ticket (37 in a 35 zone) just because she goes to Mass every week and doesn't get the joke that you're a Pastafarian.

Plus this driver's license is a legal document. It's not the proper place for this shit.

Grow up, girl. Get a cute boyfriend to hump your brains out on a regular basis and you won't feel the need to go around with a fucking pot on your head.

Comment Roll it yourself but take responsibility (Score 1) 219

Supermicro has 36- and 72-drive chassis that aren't horrible in terms of human effort (you can get 90-drive chassis, but I wouldn't recommend it). You COULD get 8TB drives for roughly 9.5 cents/GB (including the $10k 4U chassis overhead). 4TB drives will be more practical for rebuilds (and performance), but will push you to nearly 11 cents/GB. You can go with 1TB or even 1/2TB drives for performance (and faster rebuilds), but now you're up to 35 cents/GB.

That's roughly 288TB raw for, say, $30k per 4U. If you need 1/2 PB, I'd spec out 1.5PB, which puts you at $175k-$200k, but you can grow into it.
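
For what it's worth, the cost-per-GB arithmetic works out roughly like this; the per-drive price is an assumed placeholder chosen to land near the 9.5-cent figure, and the $10k chassis number is from above.

    # Back-of-the-envelope $/GB for a 36-bay 4U box. Drive price is a placeholder.
    chassis_cost = 10_000        # 4U 36-bay chassis (figure from above)
    drives = 36
    drive_tb = 8
    drive_cost = 480             # assumed price per 8TB drive

    raw_gb = drives * drive_tb * 1_000
    total = chassis_cost + drives * drive_cost
    print(f"{drives} x {drive_tb}TB: {raw_gb / 1_000:.0f} TB raw, "
          f"${total:,} total, {100 * total / raw_gb:.1f} cents/GB")
    # -> 288 TB raw, about $27k, roughly 9.5 cents/GB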

Note this is for ARCHIVE, as you're not going to get any real performance out of it: there isn't enough CPU relative to disk. I'm not even sure the motherboard can saturate a 40Gbps QSFP link and a $30k switch. That's kind of why Hadoop with cheap one-CPU boxes and 4 direct-attached HDs is so popular.
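
A quick sanity check on that bandwidth mismatch; the per-drive sequential throughput below is an assumption.

    # Aggregate raw disk bandwidth of a 36-drive box vs. one 40 Gbps uplink.
    drives = 36
    per_drive_mb_s = 150         # assumed sequential throughput per drive
    aggregate_gbps = drives * per_drive_mb_s * 8 / 1000
    print(f"~{aggregate_gbps:.0f} Gbps of raw sequential disk bandwidth vs. a 40 Gbps link")
    # ~43 Gbps before the CPU does any checksumming, replication, or random I/O,
    # which is why a box like this ends up being archive-class.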

At that size, I wouldn't recommend just RAID-1ing, LVMing, ext4ing (or btrfsing), n-way foldering, and NFS-mounting everything, since you'll have problems when hosts go down and trouble keeping the rest of the network from stalling or timing out.

Note: you don't want to 'back up' this kind of system. You need point-in-time snapshots, and MAYBE periodic write-to-tape. Copying is out of the question, so you need a file system that doesn't let you corrupt your data. Data DEFINITELY has to replicate across multiple machines; you MUST assume hardware failure.

The problem is going to be partial network downtime, crashes, or stalls, plus regularly replacing failed drives. This kind of network is defined by how well it performs when 1/3 of your disks are in week-long rebuild periods. Some systems (like HDFS) don't care about hardware failure: there's no rebuild, just a constant sea of scheduled data migration.

If you only ever schedule temporary bursts to 80% capacity (probably even that is too high), and have a system that consumes only 50% of disk I/O to rebuild, then a 4TB disk would take 12 hours to re-replicate. If you have an intelligent system (EMC, NetApp, DDN, hdf, etc.), you could get that down to 2 hours per disk (due to cross-rebuilding).
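
The 12-hour figure falls out of simple arithmetic; the per-drive streaming throughput below is an assumption picked to show where the number comes from.

    # Time to re-replicate one failed 4TB drive at 50% of its streaming throughput.
    drive_tb = 4
    per_drive_mb_s = 185          # assumed streaming throughput of one drive
    rebuild_fraction = 0.5        # system spends at most 50% of disk I/O on rebuild

    effective_mb_s = per_drive_mb_s * rebuild_fraction
    hours = drive_tb * 1_000_000 / effective_mb_s / 3600
    print(f"~{hours:.0f} hours to re-replicate one {drive_tb}TB drive")
    # Cross-rebuilding (pulling replicas from many disks in parallel) is how the
    # smarter systems cut this to a couple of hours per disk.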

I'm a big fan of object file systems (generally HTTP-based). They work well with 3-way redundancy. You can typically fake out a POSIX-like file system with FUSE; you could even emulate CIFS or NFS. It's not going to be as responsive (higher latency). Think S3.
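
The access pattern looks something like this: a minimal sketch assuming an S3-compatible object store, with the endpoint, credentials, and bucket/key names as placeholders.

    import boto3

    # Placeholder endpoint and credentials for an S3-compatible object store.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://storage.example.internal:9000",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Write an object; the store handles replication (e.g. 3-way) behind the scenes.
    with open("dataset-0001.tar", "rb") as f:
        s3.put_object(Bucket="archive", Key="datasets/dataset-0001.tar", Body=f)

    # Read it back over HTTP -- higher latency than a local POSIX file system,
    # which is the trade-off mentioned above.
    obj = s3.get_object(Bucket="archive", Key="datasets/dataset-0001.tar")
    data = obj["Body"].read()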

There's also "experimental" posix systems like ceph, gpfs, luster. Very easy to screw up if you don't know what you're doing. And really painful to re-format after you've learn it's not tuned for your use-case.

HDFS will work - but it's mostly for running jobs on the data.

There's also AFS.

If you can afford it, there are commercial systems that do exactly what you want, but you'll need to triple the cost again. Just don't expect a fault-tolerant multi-host storage solution to be as fast as even a dedicated laptop drive. Remember when testing: you're not going to be the only one using the system. Benchmarks perform very differently under disk-recovery load or random scatter-shot load from other parts of the system, including copying in all that data.

Comment Git for large files (Score 1) 383

Git is an excellent system, but it is less efficient for large files. This makes certain workflows difficult to put into a Git repository, e.g. storing compiled binaries or non-trivial test data sets. Given the 'cloud', do you foresee a version of Git that uses 'web resources' as SHA-able entities, to mitigate the proliferation of pack-file copies of said large files? Otherwise, do you have any thoughts or strategy for how to deal with large files?
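
For context, the "SHA-able web resource" idea is roughly what pointer-file tools like Git LFS and git-annex do: commit a small stub containing the content hash and keep the blob itself on external storage. Below is a minimal sketch of the idea, not any particular tool's real format; the stub layout and file names are made up.

    import hashlib
    from pathlib import Path

    def write_pointer(large_file: Path, pointer_file: Path) -> str:
        """Hash a large file and write a small stub that Git can track cheaply."""
        sha = hashlib.sha256()
        with large_file.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                sha.update(chunk)
        digest = sha.hexdigest()
        # Only this stub gets committed; the blob itself would live on a web
        # resource keyed by the digest. The stub layout here is invented.
        pointer_file.write_text(
            f"external-blob-v1\nsha256:{digest}\nsize:{large_file.stat().st_size}\n"
        )
        return digest

    # Example: track a compiled binary via a pointer instead of packing the blob.
    # write_pointer(Path("firmware.bin"), Path("firmware.bin.ptr"))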
