Re: What an arrogant ass...
Then the GPS industry should buy the spectrum if it's necessary for proper operation of their devices.
You want semantics.
LightSquared does not interfere with GPS transmitters, and GPS receivers with proper filters are not affected either. The FCC ruling admits that the utility of the GPS receivers outweighs the right to use the adjacent bands (even with buffers) for alternate uses, because manufacturers didn't install proper filtering on the receivers.
FYI, it's the GPS industry's fault for presuming that the adjacent spectrum would always be quiet. With this ruling the FCC admits that the GPS receivers are in violation of their license.
The basic fact is that GPS was in violation of its part B license first. LightSquared would be able to operate if it were not for the GPS industry cheaping out on their filters. The FCC ruling is a tough one, since they are forced either to take the side of a ubiquitous service that is in violation of its license or to rule for the startup that could potentially bring competition to the broadband market nationwide.
Studios live on a strong distribution model where they control the vast majority of the content and the distribution channels. Any tool that is viable for "piracy" is also viable for independent distributors. While I don't condone copyright infringement, I think studios are more interested in their long term viability than in protecting their content from "piracy". I expect similar behavior from the major publishing houses in the next couple of years as ebooks break their hold on the distribution channels.
Backups simply are not a realistic option past 20+ terabytes of storage, and are not feasible at all if the storage is volatile in nature. AFAIK everyone has gone to redundancy over backups at scale.
We run a 200TB (130TB usable) clustered/distributed system with 4x LTO5 drives, and we do a full snapshot to tape every week. With data that size you either pay up front for proper engineering, or you pay for the life of the system in poor performance and the eventual cleanup of the mess.
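As a rough sanity check on that backup window (a back-of-envelope sketch; the ~140 MB/s native speed and 1.5TB native capacity per LTO5 drive are the published spec figures, not numbers from this particular setup):

    import math

    # Rough backup-window math for a weekly full of ~130TB to 4x LTO5.
    # Assumed LTO5 native specs (not from the post): ~140 MB/s and 1.5 TB per tape.
    usable_tb = 130
    drives = 4
    mb_per_sec_per_drive = 140
    tape_capacity_tb = 1.5

    aggregate_mb_per_sec = drives * mb_per_sec_per_drive      # ~560 MB/s
    hours = usable_tb * 1e6 / aggregate_mb_per_sec / 3600     # TB -> MB, seconds -> hours
    tapes = math.ceil(usable_tb / tape_capacity_tb)

    print(f"~{hours:.0f} hours of streaming, ~{tapes} tapes per full")
    # => roughly 64 hours and ~87 tapes, so a weekly full fits -- provided the
    #    cluster can actually keep all four drives streaming at native speed.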
Well, since anything over 100TB is not supported by the vendor, I would say it's not really a great idea. The reason it's not supported is that there is no reasonable way to maintain it (an error, for example, would result in days' worth of outages to fsck and/or restore from backup).
That is only true if you go by the minimum guarantees in the datasheets. In practice, with healthy disks, read errors are a lot less common.
Are you willing to bet 70TB+ on it? Because that's what you are doing.
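To put a number on that bet (a sketch assuming the common consumer-class datasheet figure of one URE per 10^14 bits; the actual drive class isn't stated in this thread):

    import math

    # Chance of at least one unrecoverable read error (URE) while reading 70TB,
    # at the datasheet-minimum rate of 1 URE per 1e14 bits (assumed consumer-class
    # figure; enterprise drives are usually rated at 1e15).
    bits_read = 70e12 * 8                  # 70 TB in bits
    ure_per_bit = 1e-14

    expected_ures = bits_read * ure_per_bit           # ~5.6
    p_at_least_one = 1 - math.exp(-expected_ures)     # Poisson approximation

    print(f"expected UREs per full read: {expected_ures:.1f}")
    print(f"P(at least one URE): {p_at_least_one:.3f}")   # ~0.996
    # At the datasheet minimum you are all but guaranteed to hit one; healthy
    # disks in practice do far better, which is exactly the disagreement here.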
Journal checksumming only prevents errors in the journal, not once the data has been written to the main storage area. It was added primarily to ensure that the atomic nature of the journal is not violated by a partial write.
Our BTRFS evaluation resulted in rejecting it for some very serious problems (what they claim are snapshots are actually clones, panics in low memory situations, no fsck, horrible support tools, developers who are hostile to criticism, pre-release software, ...).
Unless every read does a checksum (they don't, or it would kill performance), there is still the possibility of silent read corruption. At 70TB it would be rare, but not as rare as many would think, and it would depend on the sector size and the checksum on the individual drives.
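For contrast, here is what checksum-on-every-read buys you, in miniature (purely illustrative; the block size, hash, and in-memory "device" are arbitrary choices for the sketch, not any real filesystem's layout):

    import hashlib
    import os

    # Toy read path that verifies a per-block checksum on every read -- the kind
    # of check that catches silent corruption a journal checksum never sees.
    BLOCK = 4096

    def write_block(store, checksums, blkno, data):
        store[blkno] = data
        checksums[blkno] = hashlib.sha256(data).digest()   # kept apart from the data

    def read_block(store, checksums, blkno):
        data = store[blkno]
        if hashlib.sha256(data).digest() != checksums[blkno]:
            raise IOError(f"silent corruption detected in block {blkno}")
        return data

    store, checksums = {}, {}
    write_block(store, checksums, 0, os.urandom(BLOCK))

    # Simulate bit rot long after the journal entry was retired:
    store[0] = store[0][:100] + bytes([store[0][100] ^ 0xFF]) + store[0][101:]

    try:
        read_block(store, checksums, 0)
    except IOError as err:
        print(err)   # without the per-read check, the garbage comes back as good data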
A much better test of Linux "big data":
1) write garbage to X blocks
2) run fsck; if no errors are found, repeat step 1
How long would it take before either of these filesystems noticed a problem, and how many corrupt files would you have? With a real filesystem you should be able to identify and/or correct the corruption before it takes out any real data. A rough sketch of the test follows below.
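Here is one way to script those steps against a disposable loopback image (the image path, block size, and fsck.ext4 invocation are placeholders; never point it at a filesystem you care about):

    import os
    import random
    import subprocess

    # Corruption-injection loop from the steps above, aimed at a scratch image
    # (e.g. a loop file formatted with mkfs.ext4). All paths/sizes are placeholders.
    IMG = "/tmp/scratch.img"
    BLOCK = 4096
    BLOCKS_PER_ROUND = 10            # "X blocks" per round

    size = os.path.getsize(IMG)
    rounds = 0

    while True:
        rounds += 1
        with open(IMG, "r+b") as f:                          # step 1: write garbage
            for _ in range(BLOCKS_PER_ROUND):
                f.seek(random.randrange(size // BLOCK) * BLOCK)
                f.write(os.urandom(BLOCK))
        check = subprocess.run(["fsck.ext4", "-fn", IMG],    # step 2: does fsck notice?
                               capture_output=True, text=True)
        if check.returncode != 0:
            print(f"fsck flagged damage after {rounds} round(s)")
            print(check.stdout)
            break

Note that fsck only ever sees metadata damage; garbage landing in data blocks sails straight through, which is the point about corrupt files going unnoticed.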
After evaluating our options in the 50-200TB range, with room for further growth, we ended up moving away from Linux and to an object-based storage platform with a pooled, snapshotted, and checksummed design. One of the major reasons for this was the URE problem: at that size, with a filesystem that does not have internal checksums, we would virtually be guaranteeing silent data corruption. The closest thing in the open source world would be ZFS, whose openness is in serious doubt. It is scary how much trust the community places in spinning rust.
The tests are also useless, since the "speed" will be linearly controlled by the IOPS of the array. Sure, it would be nice to be able to throw 10x 15k spindles at 3.5TB (230 disks for the 72TB test); that's one way to improve random IO performance, but how many can afford such luxury on a big data store that could reach into the hundreds of TB?
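The spindle math behind that (a back-of-envelope sketch; the per-drive random IOPS figures are rules of thumb, and the nearline comparison array is an assumption, not from the article):

    # Rough spindle math for the "just add IOPS" approach. Per-drive figures are
    # rules of thumb: ~175 random IOPS for a 15k spindle, ~75 for a 7.2k nearline disk.
    iops_15k, iops_7k2 = 175, 75

    spindles_15k = 230        # the 72TB test above
    spindles_7k2 = 24         # an assumed, more typical nearline shelf

    print(f"15k array : ~{spindles_15k * iops_15k:,} random IOPS")   # ~40,000
    print(f"7.2k array: ~{spindles_7k2 * iops_7k2:,} random IOPS")   # ~1,800
    # Random throughput scales with spindle count, so the benchmark numbers say
    # more about the disk budget than about the filesystem being tested.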
Personally I think people are just not ready to accept the fact that high-overhead, low-margin stores like bookstores are a dying breed and not likely to survive long term. They are on the short end of the "innovation" stick, and it's only a matter of time before they become extensions of their profitable online counterparts or go out of business. They require too much of a footprint for the small profit margins. Between Amazon and ebooks, the traditional bookstore is the buggy whip of the 21st century. The death of the traditional bookstore will also usher in the death of the traditional publishing conglomerate, and I will not shed any tears for those rentiers.
Personally I think there is still room for retail innovation, but people have to be willing to think outside the box to make something great. Think the best of the coffee shop, tiny lending library, Amazon kiosk, and/or on-demand paperback printing: something that adds a tiny overhead to the profitable coffee shop model while keeping the best parts of the bookstore. Heck, work with local government to partner with the library system so that you don't even need to own the lending books.
You should ask Macy's how that's working out with the Martha Stewart collection. Arbitrage is a bitch.