Comment Re:Why? (Score 1) 180

The patents/licensing issue is the reason the BBC developed the Dirac codec. They've developed a few hardware boxes that take uncompressed or analog audio/video in one end and spit it onto Ethernet on the other. If you use those everywhere, there are no compatibility issues. The BBC uses them right now, so it is certainly a possibility on current equipment.

I don't buy "produced live" as an issue, since you're not likely to see a full second of delay at any point. Outside the studio, no one cares about a fraction of a second because everything else adds seconds of delay. Turn on a "live" sports game on TV and the radio and you'll notice how the radio announcer is always a few seconds ahead of what you see on the TV. You would have to be watching a delayed stream side by side with the live stream to know, and even then the delay is unlikely to be noticeable.

I could see a case where you have multiple timing planes, with edits forming circular dependencies between them, which would wreak havoc. That sounds more like a process issue, though.

It still sounds to me like you're making the process difficult by holding onto ideas of how certain pieces have to be done, and that maybe the whole process needs to be rethought. That said, I know there are things going on that I just don't have the background to foresee or understand, so it may very well be impossible without moving around insane amounts of uncompressed data. It'll certainly be interesting to see what the next decade brings for this field.

Comment Re:Why? (Score 1) 180

You can timestamp every frame coming in, buffer appropriately wherever you're trying to switch between viewed streams, and things will NEVER be out of sync. Never.

Right, your solution to latency is to add more latency. Their solution is to be on time.

The question was why it matters, which was never answered. The GP did some hand-waving about keeping things in sync, which didn't answer it either. I pointed that out, and noted that sync isn't an issue.

Now you have managed to not answer the question either, or rather to say the equivalent of "latency is important because latency". Good job.

Comment Re:Why? (Score 1) 180

Well, that's a failure of imagination. I'll admit that, technically speaking, it often is *somewhat* compressed - e.g. 4:2:2 subsampled chroma at least. But there is a massive difference between a delivery codec and a signal you're still working with. To start with, H.264 and its ilk are computationally expensive to do anything with. A single frame of 1080p is a pretty big dataset, and it's painful enough doing basic matrix transforms, but adding a bunch of higher-level computations on top of that?... For example, just cutting between two feeds of an inter-frame compressed codec requires that the processor decompress the GOP and recreate the missing frames. That's several orders of magnitude more complicated than stopping one feed and starting another.

And generally speaking, with the uncompressed feed you have in a broadcast situation, you're doing *something* to it: switching, mixing, adding graphics, etc. But the biggest question is one of generation loss. Even one round trip through one of those codecs results in a massive drop in quality (as you rightly point out). You don't want to be compressing footage out of the cameras any more than you have to, because you KNOW that you're going to be rescaling, retiming, wiping, fading, keying, etc...

H.264 has vastly varying levels of compression and computational complexity. Heck, it even has lossless modes, so there is zero generational loss. And there was dedicated hardware out there years ago that could compress a frame before the next frame had finished arriving. Really though, this scenario is probably better suited to one of the less complex, lower-efficiency codecs, which is what the BBC is doing with the Dirac codec. And I'd imagine that a lossy codec that retained 99.9% of the detail would be workable, as a dozen recompressions would still leave about 99% of the detail. Given the infrastructure gains, that seems like an easy win.
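As a quick back-of-the-envelope check on that generational-loss figure (the 99.9% retention number is hypothetical, as above):

```python
# Hypothetical per-generation detail retention, as assumed in the comment above.
retention_per_pass = 0.999
generations = 12

remaining = retention_per_pass ** generations
print(f"Detail left after {generations} recompressions: {remaining:.3f}")
# Prints roughly 0.988, i.e. about 99% of the original detail survives.
```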

Comment Re:Why? (Score 1) 180

Out of curiosity, what would the big deal be with such a small latency?

Because it doesn't take many frames before the human eye can perceive the difference, and if you're trying to be slick you don't want any perceptible glitches. And if you have a little latency here and a little latency there, you eventually wind up with a bunch of latency.

This didn't really answer the question. You can timestamp every frame coming in, buffer appropriately wherever you're trying to switch between viewed streams, and things will NEVER be out of sync. Never ever. Traditional A/V types get weird ideas about how things have to work. I've helped design and work with digital video storage/transmission standards, and there is absolutely no reason multiple video/audio streams should ever go out of sync.

You're unlikely to ever be more than a frame off, so that's all you'd need to buffer. It's not like you have to decode/encode at every station either; you decode for the viewer, but pass on the already-encoded data. And even if you have to buffer at 10 different stations (for reasons I can't imagine), you'd be delaying transmission by less than a second, which is inconsequential. "Live TV" is delayed far more than that by the time it reaches a viewer anyway, and it's not like anyone would notice.
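A minimal sketch of that timestamp-and-buffer approach (the class name, frame rate, and two-frame buffer depth are illustrative assumptions, not from any broadcast standard):

```python
from collections import deque

class SyncBuffer:
    """Holds incoming frames keyed by capture timestamp so multiple feeds
    can be switched or mixed against a common clock."""

    def __init__(self, delay_frames=2, fps=30):
        self.delay = delay_frames / fps  # small, fixed latency added by buffering
        self.frames = deque()            # (timestamp, frame) pairs in arrival order

    def push(self, timestamp, frame):
        self.frames.append((timestamp, frame))

    def frame_at(self, playout_time):
        """Return the newest frame stamped at or before playout_time - delay."""
        target = playout_time - self.delay
        chosen = None
        while self.frames and self.frames[0][0] <= target:
            _, chosen = self.frames.popleft()
        return chosen

# Switching feeds just means asking each feed's buffer for its frame at the
# same playout_time; because both are aligned to capture timestamps, they
# cannot drift out of sync, only lag by the fixed buffer delay.
```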

Comment Re:Soul Crushing? (Score 1) 276

In a luxury apartment building (i.e. gym, door and laundry service, concierge, rooftop garden), yes, that's about right. However, in places like the East Village - which is considered quite expensive by Manhattan standards - most decent one-bedroom places go for around $2200, and you can pick up a two-bedroom for $2500.

Or I could continue to live in my 5-bedroom house with a significantly cheaper mortgage payment, build equity, and decide which fruit trees to plant in the yard. I really don't understand all the suburb hate. Sure, I have to drive to get to a number of places, but because I have a car I can go downtown, or out into the country, or wherever else, and it takes about as long. Plus I get more and nicer living space, a nice yard, and nice neighbors, with essentially zero crime.

Comment Re:What Greyhole isn't (Score 1) 234

  • Enterprise-ready: Greyhole targets home users.

I realize this quote is taken from the Greyhole home page, but it should be taken with a grain of salt. IIRC, the project has been around for years and has never lost any data. "The biggest one has 43TB, and uses 26 drives" isn't exactly big in an actual enterprise, but this isn't a 10k+ employee company, it's a small research group. The data they're talking about is pretty small in comparison.

That said, I'm not even sure how they'd use Greyhole to help solve their problem. The summary says they want to unify the directories of the drives that data is written to, but I would imagine it would be far simpler to just move the data off of the drives onto a larger array (which could use Greyhole).

In the end, I suspect we just don't have enough information to understand what they are trying to do.

Comment Re:RAID (Score 1) 405

Granted, this is talking about RAID 5, so let's naively assume that doubling the parity disks for RAID 6 will halve the risk... but then since we're trying to duplicate 24 terabytes instead of twelve, we can also assume the risk doubles again, and we're back to being practically guaranteed a failure.

Bottom line is that 24 terabytes is still a huge amount of data. There is no cheap, reliable solution I can think of for backing it all up. At that point, you're looking at file-level redundancy managed by a backup manager like Backup Exec (or whatever you prefer), with the data split across a dozen drives. As already mentioned, the problem becomes much easier if you're able to reduce that volume of data somewhat.

Use Greyhole.
http://www.greyhole.net/

It distributes files across multiple storage locations, with a user-defined level of duplication (lots of options for per-file-type and per-location duplication levels). Full drive failures aren't terribly common, but read errors are. To survive a drive failure AND a read error while reduplicating/redistributing files to recover from the failed drive, you would need 3x redundancy. With 3x redundancy on 4TB drives, that would be 18 drives, which is a lot but not unreasonable.
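For a rough sense of scale, here is the drive-count arithmetic, assuming the 24TB of data from the parent thread and 4TB drives:

```python
# Rough drive-count math for whole-file duplication; the 24 TB data size is
# taken from the parent thread, and the 4 TB drive size is an assumption.
import math

def drives_needed(data_tb, copies, drive_tb):
    return math.ceil(data_tb * copies / drive_tb)

print(drives_needed(24, 3, 4))  # onsite pool with 3 copies of everything -> 18
print(drives_needed(24, 1, 4))  # single-copy offsite set -> 6
```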

Really though, that's more of an onsite backup. To duplicate the data for taking offsite, you'd only need six 4TB drives, because you only need a single copy in the backup. Set up a system with the six drives, and tell Greyhole to store one copy of every file on that system. When you're ready, shut down that system and take it offsite. For best results, have two of these systems that you rotate. When you hook one up to the network, the main onsite Greyhole system will see it show up and start syncing files to it. The best part is that a full drive failure on the backup doesn't impact files on the other drives. Just replace the failed drive and wait for everything to sync back up.

In a crisis where the onsite location is destroyed, and all but one backup drive fails, you can still hook up the one working drive to any system and copy the files off.

Comment Re:USB and disk Speed (Score 1) 405

If he's looking for reliability in a backup, then his choice of disks is going to be a factor. A drive with consumer-grade URE rates is going to die in a handful of writes and reads. USB-grade drives (Caviar Green, anyone?) aren't known for their reliability. Something like a Hitachi Ultrastar RE has a very, very low chance of encountering a URE, so it will be much more reliable.

Citation required. Information I've read from the manufacturers themselves indicates similar error rates and MTBF figures across the various drive lines. The difference between consumer- and enterprise-grade drives tends to be in the firmware, where the enterprise drive will give up faster when it encounters a read error so as not to risk the drive being dropped out of the RAID array.
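For anyone comparing spec sheets, a quoted URE rating can be turned into a rough probability of hitting at least one unreadable sector during a full read of the volume; the 10^-14 and 10^-15 figures below are the ratings typically printed on consumer and enterprise datasheets, used purely as an illustration:

```python
# Rough chance of at least one unrecoverable read error (URE) while reading
# a full volume, given a datasheet URE rating. The ratings below are the
# figures commonly printed on spec sheets, used here only for illustration.
def p_at_least_one_ure(volume_tb, ure_per_bit):
    bits_read = volume_tb * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** bits_read

print(f"{p_at_least_one_ure(12, 1e-14):.0%}")  # ~62% at a 10^-14 rating
print(f"{p_at_least_one_ure(12, 1e-15):.0%}")  # ~9% at a 10^-15 rating
```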

Comment Re:Prioritize efficiently. (Score 1) 331

Personally I wouldn't take anything unless it is 100% irreplaceable (discontinued systems and since-last-offsite-transfer backups). Remember, your insurance will (if the person who negotiated it wasn't a complete moron) cover ALL hardware that is caught in the fire; they might NOT cover hardware that you broke in the U-Haul truck while trying to save it. You should already have offsite backups, so at most you should save the "didn't make it offsite yet" recent backups (1 day to 1 week's worth, depending on your setup). For everything else: let it burn, that's what you pay those high insurance premiums for! If your insurance company doesn't like that plan, THEY can move it out of the f*$ing building.

While this is mostly true, there are some caveats. I used to work for a city and communicated regularly with other cities in the state on IT matters. I know of one city on the coast specifically that had been evacuated multiple times in the past decade for hurricanes. Their standard operating procedure was to take all of their servers/workstations to another city a few hundred miles inland and set up shop there until the crisis was over. (They had an agreement with the other city for this.) While their flood insurance would have replaced all of the equipment and they had backups to restore the data, just leaving it there would have meant AT LEAST several days of lost productivity, if not weeks of acquiring all new equipment and getting it set up.

A working network is even more important for a city in a crisis, so if you have the time then moving it is probably a better option.

Comment Re:Do not use standard passwords (Score 1) 198

The salt I'm talking about is unique per user. The difference is that the salt-generating algorithm is in the code and not known.

Start with the username, have a long fixed salt value in the code, and then add the password. That ensures that the hacker must also gain access to the code, decipher the hashing method, and be able to isolate the fixed salt value. If you had a single hardened system that only took the password as input, read the database, and then responded with a true/false, then I imagine that retrieving the tiny bit of authentication code would be rather difficult.

That said, there are a number of hardware cards designed specifically so that they cannot reveal the private key. I imagine it would be trivial to have one that took two values (username + password), inserted the salt between them, performed a common SHA-1 hash, and spat out the result. You'd need one physically attached to any system actually performing authentication, but it'd be pretty much unbreakable right now with a salt at least 128 bits in size. Use SHA-512 plus a 512-bit salt and gain a few decades.
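A minimal sketch of the scheme described above (a fixed in-code salt placed between the username and password, then hashed); the salt value and function names are placeholders for illustration, and SHA-512 is used per the last suggestion:

```python
import hashlib

# Fixed salt that lives only in the application code (or a hardware token).
# This value is a placeholder for illustration, not a recommendation.
FIXED_SALT = "0123456789abcdef0123456789abcdef"

def hash_credentials(username: str, password: str) -> str:
    """Hash username + fixed salt + password, as described in the comment."""
    material = (username + FIXED_SALT + password).encode("utf-8")
    return hashlib.sha512(material).hexdigest()

def verify(username: str, password: str, stored_hash: str) -> bool:
    """Recompute the hash and compare it against the stored value."""
    return hash_credentials(username, password) == stored_hash
```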

Comment Re:So... (Score 3, Informative) 293

Say the vaccine is 96% effective and we're studying a population of 1000 kids. If they were all vaccinated and they all came in contact with the virus, you'd expect roughly 40 of them to still get sick. If 30 of those 1000 do not get vaccinated, and all 1000 were exposed to the virus, you'd have at most 30 unvaccinated kids getting sick, but still roughly 39 of the vaccinated kids will get sick, simply because there are more of them.

The total number of people with the disease goes up significantly, but most of the people coming down with the disease are still people who were vaccinated. If you stop assuming everyone came in contact with the virus, the fact that there are now 30 unvaccinated kids increases the chance that the 39 kids for whom the vaccine didn't work come in contact with the disease, so a larger proportion of the vaccinated kids end up getting sick.

I wish I had some mod points for you, because this is the critical piece that most people miss. Vaccines aren't 100% effective, and a small number of unvaccinated kids can be the tipping point for infecting the kids with responsible parents.
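A quick check of the numbers in the quoted example (96% effectiveness, 1000 kids, 30 of them unvaccinated, everyone exposed):

```python
# Worked version of the quoted example: everyone is exposed to the virus.
effectiveness = 0.96
population = 1000
unvaccinated = 30

vaccinated = population - unvaccinated               # 970 kids
vaccinated_sick = vaccinated * (1 - effectiveness)   # 970 * 0.04 = 38.8 -> ~39
unvaccinated_sick = unvaccinated                     # worst case: all 30 get sick

print(round(vaccinated_sick), unvaccinated_sick)     # 39 30
# All-vaccinated baseline for comparison: 1000 * 0.04 = 40 sick kids.
```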

Comment Re:Just installed (Score 2) 195

Your system is probably overpowered to run it.

The problem is, XBMC 11 is a step backwards; it hogs CPU for unnecessary things, causing previously usable systems that were near the hardware minimums to no longer meet them. XBMC 10 was good because it actually made the system leaner and eliminated a lot of the "gameloop"-style coding that was necessary to run on an original Xbox but just caused Windows or Linux systems to waste power when it was "running but idle."

Do you have a citation for this? Things like "dirty region rendering" should significantly reduce CPU load, so I'm curious what was added that would require additional CPU, and where.
