Comment Re:Unpopular here, but I'm with Berners-Lee. DRM e (Score 3, Interesting) 207

I would happily support DRM that actually cared about customers' rights. I want the guarantee that, like physical media, DRM-protected content will be available in the far future. Blu-ray already fails this test, and I only purchase Blu-rays to strip the DRM and save them in a long-term format. I want the ability to gift, loan, or sell any media that I possess the rights to. I don't want to possess merely a ticket which grants me admittance to content for a limited time, under limited conditions, subject to the dissolution of whatever producer, licensor, or operator manages the DRM scheme.

Because piracy has absolutely no effect on 99% of customers, I am fairly certain that what content producers/licensors truly fear is "casual piracy" and fair uses like loans and libraries, where market forces drive the resale cost of digital media down to its natural free-market price.

It's perfectly natural to resist inferior DRM schemes by refusing to make them standard. If you want me to support an open DRM standard, then it needs to be capability-based, with normal customers like you or me represented as first-class owners of those capabilities, and it needs to implement a durable scheme for transferring those capabilities into the indefinite future.

For example, consider an ownership-based scheme where producers issue N digitally signed capabilities to a particular copyrighted work and sell them to customers on an electronic marketplace. Bitcoin has proven that it's possible to maintain a globally consistent transaction ledger of ownership of individual tokens. A much cheaper implementation could track ownership and facilitate programmatic transfer of capabilities to digital works (to support sales, gifts, and even temporary loans), because the marginal value of acquiring more than one capability to the same work is zero, so there would be little need to spend gigawatts of electricity defending the ledger against adversaries. The copyrighted work doesn't even have to be encrypted; just make standards-compliant devices/software require current ownership of a capability to use the work. Yes, this is an easily defeated scheme for pirates, but so is every other DRM scheme. At least this one respects individual property rights, the first sale doctrine, fair use, and libraries for the vast majority of users.
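To make that concrete, here's a minimal sketch (Python, using the 'cryptography' package) of what a producer-signed, transferable capability might look like. The names issue_capability and transfer are hypothetical, and the ledger/consensus layer that would make ownership globally consistent is omitted entirely:

    # Hypothetical sketch of a producer-signed, owner-transferable capability.
    # Assumes the 'cryptography' package; no ledger/consensus layer is shown.
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def issue_capability(producer_key: Ed25519PrivateKey, work_id: str,
                         serial: int, owner_pub_hex: str) -> dict:
        """Producer mints capability #serial for work_id, bound to a customer's key."""
        body = {"work": work_id, "serial": serial, "owner": owner_pub_hex}
        payload = json.dumps(body, sort_keys=True).encode()
        return {"body": body, "producer_sig": producer_key.sign(payload).hex()}

    def transfer(cap: dict, owner_key: Ed25519PrivateKey, new_owner_pub_hex: str) -> dict:
        """Current owner signs the hand-off; a public ledger would record this entry."""
        record = {"cap": cap["body"], "new_owner": new_owner_pub_hex}
        payload = json.dumps(record, sort_keys=True).encode()
        record["owner_sig"] = owner_key.sign(payload).hex()
        return record

A compliant player would simply walk the chain of signatures from the producer to the current owner before playing the (unencrypted) work; since the work itself stays in the clear, the dissolution of whoever operates the scheme never locks anyone out of content they already hold.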

Comment Re:Detecting weapons is NOT the purpose of TSA... (Score 1) 349

Frankly, I doubt concealed carry on a plane could possibly be very effective. If five terrorists can get box cutters on a plane, then five terrorists can get concealed carry permits and outnumber any other concealed-carry citizens. Not to mention that explosive decompression is no fun, and I doubt plane windows were intended to resist more than a couple of shots from the inside before catastrophically failing. Explosives and planes don't mix well, and given that a fair number of gun injuries are accidents (not to mention uncounted accidental discharges), I would imagine that the first time a plane has issues due to gunfire, it will be an accident involving an otherwise legal concealed handgun.

In the best case scenario some idiot terrorist without a gun gets shot a couple of times in the center of mass, with no over-penetration, before setting off a bomb. Virtually any other scenario doesn't require firearms to handle; terrorists on a plane are literally at arm's length from a horde of people who hate terrorists. Terrorists taking overt action need room to maneuver or the ability to barricade themselves, or else the ability to instill overwhelming fear and inaction in everyone surrounding them. Taking over a plane with the threat of personal violence is pretty much the hardest thing a terrorist could accomplish at this point.

Comment Re:Don't overthink it (Score 1) 174

1.) all the photos I took of her seemed incredibly important at the time but are never looked at any more

Yeah, photos have a weird W-shaped utility: they get shared and looked at a lot when brand new, then after 6 months to a year they sit in boxes/drives for years. After about 20 years the utility climbs again, until ~150 years later no living relatives remember the people in the photos. Then, after a few more decades, they acquire historical value. Hence the need to plan for long-term storage.

Comment Re:Geographic redundancy (Score 1) 174

Take UDF. Expand it to the PB realm, not the existing 2TB. Add some ZFS features like ditto blocks, 64-128 bit CRCs, cryptographically signed writes with public keys, standard encryption, standard compression, ability to duplicate the filesystem as an image (so rsync utilities are usable to preserve hierarchy), snapshot directories a la OneFS/WAFL,

ZFS is probably your best bet for now. Oracle built filesystem-level encryption into the Solaris offering; no luck for the free versions. There's no cryptographic signing of writes, but that is IMHO overkill when you have to trust the whole kernel and filesystem layer anyway, so whole-disk encryption plus SHA-256 checksums gives basically the same assurance that no data has been modified. You can place holds on snapshots in ZFS to prevent them from being accidentally deleted and treat them as basically WORM.
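As a rough illustration of the snapshot-hold trick (a sketch assuming a made-up dataset name; it just wraps the stock zfs commands):

    # Sketch: snapshot a dataset and place a hold on it so 'zfs destroy'
    # refuses to remove the snapshot until the hold is released.
    import subprocess
    from datetime import date

    def snapshot_and_hold(dataset: str, tag: str = "archive") -> str:
        snap = f"{dataset}@{date.today().isoformat()}"
        subprocess.run(["zfs", "snapshot", snap], check=True)
        subprocess.run(["zfs", "hold", tag, snap], check=True)  # now effectively WORM
        return snap

    # snapshot_and_hold("tank/photos"); later, 'zfs release archive <snap>' lifts the hold.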

Comment Re:Geographic redundancy (Score 1) 174

So from your smallest box 3x 3TB = 9 TB of data, and Glacier and Google Nearline (maybe others too?) are charging $0.01/GB-month, so about $90/month if you back up the whole thing. I don't know how much you pay for electricity in both locations, but if a box can run/idle at 100W and you leave it on all the time, you use ~900 kWh a year. At $0.20/kWh that's about $180/year per server. Replacing disks every 3 years (matching HGST's warranty) runs about $140/year (using $0.035/GB as a rough cost today), for about $27/month per server in ongoing costs, not including replacing the other hardware periodically. $54/month vs. $90/month? Sure, it's a little cheaper. If you compare one box plus one online service against two boxes, running your own looks even better: ~$120/month vs. ~$54/month. What about connectivity at both sites? If you are already paying an ISP for other reasons at both ends that's one thing; otherwise throw another ~$50/month on top of at least the backup server cost. AWS and Google appear to currently charge $0/GB for incoming transfers. Of course, if you can get deals on cheap drives and run them past the warranty in a state with cheap electricity (or in a dorm room with free Internet/electricity), it's a lot cheaper.
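Back-of-the-envelope version of that math (all figures are the assumptions above, including a guess of ~12 TB of raw disk per box to hold the 9 TB; a sketch, not a quote):

    # Rough monthly-cost comparison using the assumed figures above.
    DATA_GB = 9000                     # data to protect
    CLOUD_PER_GB_MONTH = 0.01          # Glacier / Nearline class pricing
    WATTS, KWH_PRICE = 100, 0.20       # idle draw per box, electricity price
    RAW_DISK_GB, DISK_PER_GB = 12000, 0.035   # assumed raw disk per box, replaced every 3 years

    cloud = DATA_GB * CLOUD_PER_GB_MONTH                    # ~$90/month
    power = WATTS / 1000 * 24 * 365 * KWH_PRICE / 12        # ~$15/month per box
    disks = RAW_DISK_GB * DISK_PER_GB / 3 / 12              # ~$12/month per box
    per_box = power + disks                                 # ~$27/month per box
    print(f"cloud only: {cloud:.0f}  two boxes: {2 * per_box:.0f}  "
          f"one box + cloud: {per_box + cloud:.0f}")        # ~90 / ~53 / ~116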

As for security, encrypt before copying anywhere. You might as well be running local disk encryption too, so you never have to worry about returning a disk containing plaintext for warranty repair. I don't trust any company to keep the data I upload secret (FISA courts, NSA, blah blah blah), so encrypting incremental ZFS snapshots and uploading them is an efficient way of maintaining an offsite backup. I only have 1 TB I care to back up this way, so there's less sticker shock each month, but I still find it amusing that the first box I built was 4x 320 GB in RAID 5, and storing that much now costs about $9/month.
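The send/encrypt step is roughly this (pool and snapshot names invented; gpg symmetric encryption stands in for whatever cipher/tooling you actually trust):

    # Sketch: encrypt an incremental ZFS snapshot stream before it leaves the
    # box, so the provider only ever stores ciphertext.
    import subprocess

    def send_encrypted_incremental(prev_snap: str, curr_snap: str, outfile: str) -> None:
        send = subprocess.Popen(["zfs", "send", "-i", prev_snap, curr_snap],
                                stdout=subprocess.PIPE)
        subprocess.run(["gpg", "--symmetric", "--cipher-algo", "AES256",
                        "--output", outfile], stdin=send.stdout, check=True)
        send.stdout.close()
        if send.wait() != 0:
            raise RuntimeError("zfs send failed")

    # send_encrypted_incremental("tank/data@prev", "tank/data@curr",
    #                            "/stage/tank-curr.zfs.gpg")
    # The resulting blob can be uploaded to Glacier/Nearline as opaque data.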

Comment Re:Traditional internal facing IT shop .. (Score 1) 198

1500 VMs isn't that crazy for 3000 people when you have to use Windows. Every individual piece of software is going to want its own VM, often two or more for redundancy/load balancing, plus an equal number for the test environment, and often a few more for dev/upgrade environments. Many software packages with a server component are big, cumbersome globs of many .exes that the vendor "recommends" be run on separate VMs, because the developers have no clue how to write software and rebooting Windows is the first solution to half the issues.

Think a 3000-person company doesn't have the necessary ~200 apps to reach 1500 VMs by this measure? There are usually several software applications specific to each department, and there are lots of departments: purchasing, accounting, distribution/receiving, each core business unit, HR, PR, engineering/plant ops, the business office, sales, and last but not least IT, which is guaranteed to run dozens if not hundreds of separate apps of its own. Sure, not all of them require a server, but many do, even if it's just a ridiculous license server. Data? Anyone processing video or images is just going to have a crapload of data, period. Same for some raw scientific data from instrumentation. That said, it really does depend on the industry; I can imagine a 3000-person company where most employees are sales/warehouse/factory drones not needing that much software. Basically, if most employees are "knowledge workers" (or shoehorned into it, like healthcare, where doctors and nurses are required to use atrocious piles of software to record minutiae about patient care), then the IT footprint is going to be bigger than elsewhere.
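The ~200 apps to ~1500 VMs arithmetic, with the per-app multipliers as pure guesses:

    # Guessed multipliers: each server-side app tends to grow a couple of prod
    # VMs, a matching test tier, and a few dev/upgrade boxes.
    apps = 200
    vms_per_app = 2 + 2 + 3            # prod/HA + test + dev/upgrade
    print(apps * vms_per_app)          # 1400; a few multi-.exe vendor stacks push it past 1500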

Comment Re:One time pad (Score 4, Interesting) 128

That said, you could probably use a synchronized random number generator as the shared pad data. The other side would only be able to decrypt messages for as long as they buffer the random number data; after which the message is lost to everyone for eternity. This could work for a TLS session where messages are exchanged with only a couple minutes (or preferably seconds) delay so that the buffer does not need to be very big.

That's roughly the definition of a stream cipher (e.g. RC4, or a block cipher in counter mode). Only a cryptographically secure random number generator works, which is why such a thing is called a stream cipher and not just a "pseudo-random one-time pad". In any case it's not a true one-time pad, because the entropy of the stream of pseudorandom data is limited to the entropy of the internal state of the cipher, and further limited by the entropy of the key. That means stream ciphers can in principle be broken given only the ciphertext, unlike a true one-time pad. Stream ciphers also share the same weakness as one-time pads: reusing the same stream cipher key (and nonce) is just as bad as reusing a one-time pad, allowing virtually automatic recovery of all plaintexts encrypted with the same pad/keystream.
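A toy demonstration of that last point (ChaCha20 standing in for any stream cipher, via the Python 'cryptography' package): reusing the keystream leaks the XOR of the plaintexts, exactly as with a reused pad.

    # Toy demo: using the same key AND nonce for two messages is the fatal mistake.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    def chacha_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
        return Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor().update(plaintext)

    key, nonce = os.urandom(32), os.urandom(16)
    p1, p2 = b"attack at dawn!!", b"retreat at dusk!"
    c1 = chacha_encrypt(key, nonce, p1)
    c2 = chacha_encrypt(key, nonce, p2)                       # keystream reuse

    xor = bytes(a ^ b for a, b in zip(c1, c2))
    assert xor == bytes(a ^ b for a, b in zip(p1, p2))        # keystream cancels out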

Comment What are your IOPS and throughput requirements? (Score 2) 219

For high throughput/IOPS requirements build a Lustre/Ceph/etc. cluster and mount the cluster filesystems directly on as many clients as possible. You'll have to set up gateway machines for CIFS/NFS clients that can't directly talk to the cluster, so figure out how much throughput those clients will need and build appropriate gateway boxes and hook them to the cluster. Sizing for performance depends on the type of workload, so start getting disk activity profiles and stats from any existing storage NOW to figure out what typical workloads look like. Data analysis before purchasing is your best friend.

If the IOPS and throughput requirements are especially low (guaranteed under 50 random IOPS per spindle, leaving headroom for RAID, background processes, and degraded or rebuilding arrays, and no more throughput than a couple of 10 Gbps Ethernet ports can handle, over the entire lifetime of the system), then you can probably get away with some SAS cards attached to SAS hotplug drive shelves and one big FreeBSD ZFS box. Use two-way mirrored vdevs (RAID 10-alike) for the higher-IOPS processing group and RAIDZ2 or RAIDZ3 with ~15-disk vdevs for the archiving group to save on disk costs.
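Rough capacity/IOPS arithmetic for those two layouts (drive size and per-spindle IOPS below are assumptions, not vendor numbers):

    # Assumed: 8 TB drives, ~75 random IOPS per 7200 rpm spindle.
    DRIVE_TB, SPINDLE_IOPS = 8, 75

    def mirror_pool(pairs: int):
        usable = pairs * DRIVE_TB                # half the raw space
        read_iops = pairs * 2 * SPINDLE_IOPS     # reads spread across both sides
        write_iops = pairs * SPINDLE_IOPS        # each write lands on both disks
        return usable, read_iops, write_iops

    def raidz2_pool(vdevs: int, disks_per_vdev: int = 15):
        usable = vdevs * (disks_per_vdev - 2) * DRIVE_TB
        iops = vdevs * SPINDLE_IOPS              # roughly one spindle's random IOPS per vdev
        return usable, iops

    print(mirror_pool(pairs=12))     # 24 disks as mirrors:      (96 TB, 1800, 900)
    print(raidz2_pool(vdevs=2))      # 30 disks as 2x 15-disk Z2: (208 TB, 150)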

Plan for 100% more growth in the first year than anyone says they need (shiny new storage always attracts new usage). Buy server hardware capable of 3 to 5 years of growth; be sure your SAS cards and arrays will scale that high if you go with one big storage box.

Comment Re:Singularity (Score 1) 484

The only thing you're missing is support for arbitrary SIP-level proofs beyond type safety (proofs about SIP behavior such as time/space complexity, halting, semantic properties, etc.), and a formally verified, self-verifying proof-checker to make sure the compiler is generating correct code and proofs. It looks like you're looking into PCC and TAL, so once you can ship the verifier with its own proof and self-verify during the boot process, you can be fairly certain that hardware errors are the only problem left. I assume you're already executing with a subset of the x86(_64) instruction set for easier verification. I figure that limiting code generation to the smallest set of opcodes can take advantage of the formal verification Intel/AMD/others already do in processor design, while excluding all the complex protected-mode and virtualization instructions. Turning off SMM and blocking injection of other arbitrary BIOS/EFI code would also be handy. The hardest part to model and prove correct will probably be the multi-processor cache coherency behavior, but hopefully Intel at least has done some of that work already and can guarantee adherence to the specs.

Comment They already have quick access to social media. (Score 1) 562

With a warrant, that is. Same with webmail and any other hosted service. Warrants describing a particular place and person have a way of producing encryption keys from service providers. When warrants aren't fast enough for them, you know they're doing something very, very wrong. Unlike movies where Jack Spy decrypts the terrorists' plans in real time to thwart them, our jokers can barely even share high-priority bulletins about suspected terrorists planning to board a plane in a day or two. It's ludicrous to suggest that they need faster access to information when they can't even manage what they have already.
