
Comment: Re:Entitlement (Score 1) 362

by danversj (#41511959) Attached to: Think Tank's Website Rejects Browser Do-Not-Track Requests
My position on this is not very nuanced. I block ads simply because I can. I read free content simply because I can. I don't expect it nor do I feel entitled to it, but it's there so I read it. If it wasn't there I wouldn't read it and probably would go back to having a life outside of the internet. I definitely don't like being tracked (it's creepy ok?) and so will do anything to thwart attempts to do so, regardless of the detriment this may cause to some business models.

No business model has a god-given right to be successful. States may grant certain business models monopoly rights, but that doesn't always stop them from being gazumped by new disruptive technologies. We live in a time of change. The best business models are ones that "just work" because they provide a clear and well-understood benefit to all parties involved, making them attractive to both business and customers. Tracking is sneaky and erodes trust. Ads are annoying, and the precedent set by radio, TV and print says that media consumers cannot be forced to pay attention to them. Now that media consumers receive media on computing devices, the avoidance of ads can be automated. Telling people not to block ads because it's the "wrong thing to do" sounds like the honor system to me. A nice idea, but not a strong basis for a business model.

Comment: Re:sour grapes (Score 1) 180

by danversj (#41385113) Attached to: 100GbE To Slash the Cost of Producing Live Television
Hi BB,

It's probably not appropriate for me to comment on SMPTE's paper approval process in this instance. I was never really told what the minimum standard for a paper was, but I did get it proofread by a number of my peers well before submitting it. Clearly I needed someone with expertise from outside the outside-broadcast engineering field to look over it. All I will say is that your logic is sound.

Shortly after I submitted my last response I realised what you were getting at. Yes, there's more to a 100GbE switch than just the laser modules. The logic there was that I was starting my analysis from a "cost per port" point of view. When I saw the price of the 100GbE laser modules I assumed (probably wrongly) that they were the bulk of the cost of the switch, and that the other parts were insignificant from a cost perspective. I don't think the overall conclusion was wrong (the timescale is perhaps out by a few years), but yes, there are errors in my working. Looking back, and with the benefit of your criticism, it would have been better to write it as an opinion article rather than as a research paper.

So - my opinion on another of your earlier comments: the nature of the outside broadcast business is fast turnaround, and that requires that many technical aspects be "plug and play". Another of the reasons I tend to favour Audio Video Bridging is that it presents itself as a "plug and play" approach to QoS, VLANs and other networking aspects. The promoters of the standard(s) specifically mention that very little networking knowledge is required to deploy an AVB-enabled network. I'm not aware of the particulars, but the 802.1ak standard should hopefully allow things like multicast trees, spanning tree, VLANs, etc. to be automated to a large degree by being established on an "as needed" basis. Indeed, a packet/frame-switched system is pretty much useless to us outside broadcasters unless these things can be automated. Also, topologically speaking, a single outside broadcast facility resembles a single LAN. I would not expect anyone working at an OB to need, or have, knowledge of routing protocols at all. While we require expensive, high-bandwidth equipment, I don't see the need to make the network more complicated than it needs to be.

I do find the idea of a uni research program interesting. I will be attending the SMPTE meeting at CSIRO where I'm sure we'll discuss this further.



Comment: Re:sour grapes (Score 1) 180

by danversj (#41349737) Attached to: 100GbE To Slash the Cost of Producing Live Television
Hi AC.... author here... and ouch! :)

My main motivation for writing this paper (or article or whatever it is) is that I was fully aware that most people were thinking along the lines of circuit-switched TV production moving in a packet (and frame)-switched direction - the problem was that there was very little written about it, and about what it would mean for the industry. Perhaps this sort of commentary isn't appropriate for SMPTE - it's certainly up for debate. But I wanted to write about it, and having written it, I submitted it to SMPTE on the off-chance they'd be interested. I suspected they wouldn't be, for exactly the reasons you mention. Turns out, they were.

Yes, there's no new scientific or engineering knowledge in the paper. But I wanted to start the conversation. Multicamera production is a specific niche of the broadcast industry that packet-switching technology hasn't yet penetrated. I did a lot of searching of trade publications and saw very little discussion of what this transition would mean and the challenges it would pose. I don't personally have a lot of research resources at my disposal other than Google - and that was good enough to obtain the back-of-the-envelope figures you see in the paper. At least I cited my sources to allow them to be criticised. And I want criticism such as yours. In the absence of publicly available information about where the industry is going, I felt my paper would fill a gap. Judging by many of the comments here on Slashdot, despite its failings it has succeeded in informing a wider audience about the issues this transition will face.

I do take issue with your argument that laser transceivers are not interchangeable with video routers. The entire point of the paper is that Ethernet switches will replace video routers (and other ancillary TV equipment such as CCUs). We will plug our cameras and vision mixers into Ethernet switches and will not need video routers. So a comparison of their relative costs is entirely appropriate - it's a direct replacement of one technology with another. Ethernet equipment follows a steep price curve because it is a commodity product. Broadcast TV equipment is not a commodity product and is not subject to nearly as steep a price curve - meaning that in 2015 Ethernet equipment will be much cheaper than today but TV equipment will not be. Yes, the predicted figures are rubbery - I don't have access to proper price modelling and market research; that stuff isn't available for free. But I wanted to crunch the numbers in a way that would show the relative orders of magnitude in play and the general trends. I think the comparisons are good enough to get people thinking seriously now about developing live-production systems that connect directly to Ethernet, eliminating baseband transmission entirely. Can you go out and buy such a system today? No. So it's not a solved problem.
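To illustrate the kind of back-of-the-envelope price-curve reasoning I mean (the starting prices and yearly decline rates below are invented for illustration, not the figures from the paper):

```python
# Hypothetical per-port-equivalent prices and yearly decline rates --
# illustrative assumptions only, not the paper's figures.
eth_price, tv_price = 100_000.0, 50_000.0   # assumed 2012 prices
eth_decline, tv_decline = 0.40, 0.05        # commodity vs. broadcast gear

for year in range(2012, 2016):
    print(year, round(eth_price), round(tv_price))
    eth_price *= 1 - eth_decline
    tv_price *= 1 - tv_decline
```

With these assumed rates the commodity product undercuts the broadcast product before 2015 even from a higher starting price, which is the shape of the argument, whatever the real numbers turn out to be.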

I would be absolutely delighted if someone read my paper, recognised its failings and decided they could do much better. I'd like the quality of discussion to be much higher. I want to see more written about this and there to be fierce debate. Debate is already beginning about acceptable levels of latency in a live-switched production facility. Despite your assertion that these discussions are a waste of time, I assure you there are those in the industry with differing opinions as to what constitutes an acceptable level of latency. Perhaps this is where more scientific research is needed - and I would like to do some trials on this at my workplace. Also - is resource reservation really going to work? Elsewhere in the comments on this page there are those with doubts about this. Is Audio Video Bridging the right technology, is it really needed, or are more standard QoS measures sufficient? Cisco and Xilinx are addressing these issues, but they are being trialled in distribution and contribution environments - point-to-multipoint or point-to-point situations. What about the massively-multipoint-to-multipoint environment that is a live TV production facility (where latency and synchronisation are extremely important)? What are the engineering challenges in building an entirely Ethernet-based vision mixer? I covered a lot of ground in the paper but didn't delve into a lot of detail - I mainly wanted to get people thinking. These things may be obvious to some segments of the industry, but there's precious little discussion in the live-production area - we're a very circuit-centric bunch, so the packet-switched message sometimes needs to be delivered with a simple blunt instrument.

Most of the papers presented at SMPTE and other conferences (and articles in trade publications) are product-focussed. They aren't open sales pitches, but they almost always mention products or services that are, or soon will be, on sale. This is quite OK, but I would still like to see more papers of a general nature, discussing standards and technology rather than specific products. Writing papers is not my day job (obviously) but I do feel I have something to contribute occasionally.

Yes, Ethernet switches frames. Conflating Ethernet, IP and TCP does make me look a bit stupid in front of an IT engineering audience. Yes, "packet-switching" should really only be used to refer to what's going on at Layer 3. I absolutely do not pretend to be an IT engineer - my background is circuit-switched TV, which doesn't have the tradition of deep abstraction that Computer Science has. My networking knowledge is self-taught, and I know more about it than most of my peers - which is more than enough to do my job. But I appreciate being called out on my terminology flubs and resolve to do better next time. :)

Comment: Re:Numbers seem VERY wrong (Score 1) 180

by danversj (#41344385) Attached to: 100GbE To Slash the Cost of Producing Live Television
100Gbit Ethernet would be very handy in plugging 30 cameras into a vision mixer. Or into a multiviewer. Or into a processing box (aspect ratio conversion, standards conversion, color correction, etc.), or for recording into a video server. Inside a production facility there are hundreds of uncompressed video signals flying about. Building these facilities where every signal needs its own cable is expensive and time-consuming. It'd be much nicer to have 2 connectors instead of, like, 80 to plug into a device.
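As a rough sanity check on the bandwidth (assuming uncompressed 1.5G HD-SDI signals at the nominal SMPTE 292M rate - my assumption here, not a figure from the article):

```python
import math

HD_SDI_GBPS = 1.485   # nominal uncompressed HD-SDI bit rate (SMPTE 292M)
cameras = 30

total_gbps = cameras * HD_SDI_GBPS          # ~44.6 Gb/s
links = math.ceil(total_gbps / 100)         # 100GbE links needed

print(f"{cameras} cameras = {total_gbps:.2f} Gb/s -> {links} x 100GbE link")
```

So 30 uncompressed HD cameras fit comfortably inside a single 100GbE link, with headroom to spare.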

Comment: Re:I predict a drop in reliability. (Score 1) 180

by danversj (#41344179) Attached to: 100GbE To Slash the Cost of Producing Live Television
Hi Above, article author here.

I thank you for your criticism. I had hoped I'd get more comments like yours - so far yours is the only one, but that in no way diminishes its relevance. Yes, the jury is out on the resource reservation issue - that's the most concerning part to me. I don't think we should jump into packet-switched live production without doing extensive trials and tests. The reliability afforded by circuit-switching should not be given up unless the successor technology can give the same reliability (or 99.9% of it, or whatever the market dictates).

However, I do think that Audio Video Bridging looks promising - standardised QoS management at layer 2 gives the concept the best possible chance of working. AVB's first application is for automotive and public-address sound systems. The audio production industry has been using proprietary Ethernet-based audio transport systems for years. AVB is an effort to replace these proprietary systems with open standards. But the point is, they obviously do work - I haven't read any accounts of how packet-switching is generally failing as an audio distribution medium.

That said, AVB is nowhere near ready for broadcast television - current AVB switches are only 1 Gb/s per port. My article was written in part to point out the economics of Ethernet-based systems vs. circuit-switched systems. The economics of packet-switched facilities will push development of more reliable systems. TV broadcasters face tightening budgets, so there is a big impetus to make this work. There was similar criticism of station automation: fully-manned control rooms are always going to be more reliable than a mostly-automated station. Still, TV stations persevered through the teething problems (and there were many), and now most playout control rooms are run by one person (or none).

Comment: Re:Price is a little inaccurate... (Score 1) 180

by danversj (#41343955) Attached to: 100GbE To Slash the Cost of Producing Live Television
Author here. The number of CFP modules in the article accounts for both ends of each link. :) In my first draft I forgot that, but corrected it in the version online. The numbers still favour Ethernet as being cheaper by 2015. I did point out that, to account for bias, I over-estimated the Ethernet costs and under-estimated the traditional TV equipment costs. Even with these biases in place, Ethernet still works out cheaper by 2015.

Comment: Re:Is the Network really the bottleneck? (Score 1) 180

by danversj (#41343901) Attached to: 100GbE To Slash the Cost of Producing Live Television
Coordinating signals implies delaying them inside the facility where the show is being produced. You're right - latency doesn't matter for the audience, as they only see one point of view and are largely passive in their consumption. But at the place where the show is produced, the director is interacting with the event in real time. Gamers hate lag, and so do TV directors. In a TV control room, directors get very pissed off when the video doesn't cut at the *very instant* they press the button.

Comment: Re:Why? (Score 1) 180

by danversj (#41343839) Attached to: 100GbE To Slash the Cost of Producing Live Television
Hi, author here, I'll answer the question. :) Yes, you are technically correct that buffering all the incoming signals will produce an output where all the sources are in sync. A couple of problems with this: as you're probably aware, codecs used in the broadcast industry are patent-encumbered. And, as has been mentioned, in a typical TV studio there are hundreds of video sources and hundreds of video destinations. A codec on each and every one of these would be expensive, to say the least, and a nightmare in trying to ensure full compatibility and consistency (not all implementations of a codec are equal). With uncompressed video this is not an issue at all. The other issue is that adding latency to each signal every time it enters and leaves a piece of equipment creates a plethora of timing planes within the one facility - another headache we don't have with uncompressed. Also, live-produced television is produced in real time (believe it or not). The director is interacting live with the content he/she is switching - talking to commentators, reacting to events as they happen, etc. You're aware that gamers hate lag? It's exactly the same with directing live television - it's interactive, and lag is the antithesis of interactivity. Lag is not a problem with the present circuit-switched, uncompressed methods.
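To put rough numbers on how that per-hop latency stacks up (the chain length and per-hop delay below are my illustrative assumptions, not measurements):

```python
FRAME_MS = 1000 / 25        # one frame at 25 fps = 40 ms
hops = 4                    # e.g. camera -> corrector -> mixer -> keyer (hypothetical chain)
delay_frames_per_hop = 1    # assume each codec buffers one full frame

total_ms = hops * delay_frames_per_hop * FRAME_MS
print(f"{total_ms:.0f} ms of accumulated latency")
```

Even one buffered frame per device puts the chain well past the point where a director would notice the cut lagging the button press.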
