The patents/licensing issue is the reason the BBC developed the Dirac codec. They've developed a few hardware boxes that take uncompressed or analog audio/video in one end and spit it out onto Ethernet on the other. If you use those everywhere, there are no compatibility issues. The BBC uses them right now, so it is certainly a possibility on current equipment.
I don't buy "produced live" as an issue, since you're not likely to see a full second of delay at any point. Outside the studio, no one cares about a fraction of a second because everything else adds seconds of delay. Just turn on a "live" sports game on TV and the radio and you'll notice the radio announcer is always a few seconds ahead of what you see on the TV. You'd have to be watching the delayed stream side by side with the live one to know, and even then the delay is unlikely to be noticeable.
I could see a case where edits made across multiple timing planes create circular dependencies and wreak havoc. That sounds more like a process issue, though.
It still sounds to me like you're making the process difficult by holding onto ideas of how certain pieces have to be done, and that maybe the whole process needs to be rethought. That said, I know there are things going on that I just don't have the background to foresee or understand, so it may very well be impossible without moving around insane amounts of uncompressed data. It'll certainly be interesting to see what the next decade brings for this field.
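To put a rough number on "insane amounts of uncompressed data," here's a back-of-envelope sketch. The resolution, bit depth, and frame rate below are illustrative assumptions, not figures from the discussion:

```python
def uncompressed_bitrate(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) video bitrate in bits per second."""
    return width * height * bits_per_pixel * fps

# Assumed example: 1080p, 8-bit RGB (24 bits/pixel), 30 frames per second.
rate = uncompressed_bitrate(1920, 1080, 24, 30)
print(f"{rate / 1e9:.2f} Gbit/s")  # prints "1.49 Gbit/s"
```

Even this modest single-stream case already exceeds gigabit Ethernet before you account for audio, ancillary data, or multiple simultaneous feeds, which is presumably why the BBC's boxes matter in the first place.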