I'd go for the easy solution: introduce the patches to the board in batches ("Monday updates for week 32").
I won't comment on MS Windows, although I don't think what you have described will work very well there, since I have never seen production MS-centric machines updated on a weekly basis.
Providing information for Linux (Red Hat, for example) is very easy if you have the RPMs. All you need to do is run "rpm -qp --changelog <package.rpm>" on every package associated with a particular kernel release/update and provide that information to the Change Advisory Board (CAB), which may amount to hundreds of packages' worth of changelog entries. This is extremely easy to automate and should only take you a few minutes.
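As a sketch of that automation, assuming the update RPMs have been downloaded into a single directory (the directory path and function name below are my own placeholders, not anything standard):

```shell
#!/bin/sh
# Dump the changelog of every RPM in a directory so the combined
# output can be handed to the CAB. The directory layout is an
# assumption; adjust it to wherever your update RPMs actually land.
dump_changelogs() {
    dir="${1:-.}"
    for pkg in "$dir"/*.rpm; do
        [ -e "$pkg" ] || continue        # no RPMs found: emit nothing
        printf '==== %s ====\n' "$(basename "$pkg")"
        rpm -qp --changelog "$pkg"       # query the package file itself
    done
}

# e.g.: dump_changelogs /var/spool/updates/week32 > cab-week32.txt
```

Redirect the output to one file per weekly batch and you have a ready-made CAB attachment.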
If you provide that sort of info to the CAB, I am quite sure they will do one of two things: 1) throw a "hissy fit" (grin) and never want to speak to you again, or 2) thank you for the information and get back to you in a few months. Of course, to keep on their good side you could just give them the changelog of the kernel you are going to install and explain that this is your reference. In the case of a Red Hat distribution, which the company has to pay for, this should be enough, although you may want to list all the packages that will be updated and let the CAB decide whether they need those changelogs as well.
I do have to state that in production, QA, and development environments, and to a lesser extent in a test and/or "crash and burn" environment, you should have appropriate software contracts in place, whether for a Linux, Microsoft, or Unix solution, or even some other OS. Having an appropriate software contract in place should save you a lot of problems, and you will actually look good with management, especially if you can give the CAB the info they require (not necessarily want) so the job gets done quickly and efficiently.
In the case of Linux it is fairly easy (approximately a day's work) to set up an in-house "repo" jump server. Keep in mind your network people need to get involved here, since all target machines will need network access to this machine (or to multiple machines if you have separately networked environments). On your repo server (approximately 100 GB+ needed), make sure the appropriate distributions are kept current (within a week), then create links to a staging area, visible to the software updater programs (e.g. yum or apt-get) on the target machines, containing the packages to be updated against the kernel (changelog provided to the CAB) they will be referenced against. It must be noted that emergency (i.e. security) patches should always (you need to check) be applied with the kernel they were released against, which means you should update all packages associated with that kernel. Google and your software provider are your best friends here.
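On the Red Hat side, the wiring for such a jump server might look something like the sketch below; the hostname, paths, and repo id are placeholders of mine, not details from any real setup, and both steps need root:

```shell
#!/bin/sh
# --- on the repo server: generate yum metadata for the staged packages ---
# (requires the createrepo package; the staging path is an assumption)
createrepo /srv/repo/rhel/staging/week32

# --- on each target machine: point yum at the in-house jump server ---
cat > /etc/yum.repos.d/inhouse.repo <<'EOF'
[inhouse-staging]
name=In-house staged updates (week 32)
baseurl=http://repo.example.internal/rhel/staging/week32
enabled=1
gpgcheck=1
EOF

yum clean all
yum update        # pulls only what is staged on the jump server
```

Because the clients only ever see the staging area, what they install is exactly the batch the CAB approved, nothing more.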
Obviously, in a company you must follow Change Management procedures, and if they don't have any (yes, some companies don't), make sure they are put in place, since this covers you if things don't go as planned; you would then need to fall back to the appropriate part of the company's Disaster Recovery Plan (your company does have one that is tested, I hope).
Sounds complicated? Well, it is and it isn't. Basically, no company that is serious should be without Change Management procedures, and an appropriate, tested Disaster Recovery Plan should contain a section on backup and recovery processes and the policies covering them. I am aware that some people will disagree with me, but put yourself in the shoes of the System Admin who has to explain to management why the production machine crashed and/or data was corrupted or lost because procedures were not followed.