Does your other OS hold on to outdated versions of system files for compatibility reasons, like windows 7+ does?
Of course it does.
There's a thing called shared-library version numbering that predates MS, but they didn't decide to go that way until relatively recently. It's a way of retaining old versions of system libraries for compatibility without slowing down the system searching for stuff. That is why Linux, Solaris, Oracle, Mac, *BSD and everything else apart from MS can run old stuff without a great deal of mucking about.
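To make the versioning scheme concrete, here's a minimal sketch of the usual ELF symlink convention - the library name, versions and directory are all made up for the example, and in Python only so it can run anywhere:

```python
# Sketch of shared-library version numbering: two incompatible ABI versions
# coexist, and each binary resolves the major version it was linked against.
# "libwidget" and the version numbers are illustrative, not a real library.
import os
import tempfile

libdir = tempfile.mkdtemp()
for real in ("libwidget.so.1.0.9", "libwidget.so.2.3.1"):
    open(os.path.join(libdir, real), "w").close()   # two real files: old and new ABI

os.symlink("libwidget.so.1.0.9", os.path.join(libdir, "libwidget.so.1"))  # old programs ask for .so.1
os.symlink("libwidget.so.2.3.1", os.path.join(libdir, "libwidget.so.2"))  # new programs ask for .so.2
os.symlink("libwidget.so.2", os.path.join(libdir, "libwidget.so"))        # default for fresh builds

# A binary built against major version 1 keeps loading 1.0.9 even after
# 2.3.1 is installed, with no directory scan needed - the symlink does it.
assert os.readlink(os.path.join(libdir, "libwidget.so.1")) == "libwidget.so.1.0.9"
```

The point is that nothing searches for a compatible file at load time; the version is baked into the name the binary requests, so keeping old versions around costs nothing.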
A window trying to draw should be the highest possible priority on desktop workstation and should not be treated like other processes. There is good reason to keep other processes waiting while I/O completes but window drawing operations should never be kept waiting
One of the odder design choices in MS Windows is a good demonstration of where you do not want rendering to have priority. Click on "Control Panel" and it starts drawing a lot of little icons, then it finds a lot more things to add to the list and redraws with more icons. If a user attempts to click on one of the icons in the several seconds (yes, that long - how fucked is that?) while it's rendering and reordering stuff, they are very likely to click on an icon that was not there when they decided to click and end up opening something different. A sane way to do it, as done in many other parts of the MS GUI, is to make the list AND THEN present it to the user, instead of a slooooow iterative process.

The desktop starting up is another example like that, where you can see controls but can't use them for up to tens of seconds depending on how much stuff is loading. A sane way, which as far as I know is used in every non-MS computing environment, is to have some sort of splash screen or other indication that the environment is not ready, then present the controls at the point where you can actually use them. A marketing choice to have X seconds to the desktop, and to cheat by drawing controls before the user can actually use them, means we have an interface that frustrates and confuses users, and the public perception of the reliability of computers has been going downhill over time.
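The "make the list AND THEN present it" pattern can be sketched in a few lines - the item names and the discovery/render functions here are stand-ins, not any real Control Panel API:

```python
# Two ways to populate a panel. discover_items() stands in for whatever slow
# enumeration finds the entries; render() stands in for drawing the icons.

def discover_items():
    """Pretend each yield involves slow I/O (registry scan, plugin probe, ...)."""
    yield from ["Display", "Network", "Sound", "Power", "Printers"]

def render(items):
    """Stand-in for drawing; here it just returns the presented layout."""
    return list(items)

# The complained-about approach: present partial results while still
# discovering, so icons appear and reorder under the user's cursor.
def incremental_panel():
    shown = []
    for item in discover_items():
        shown.append(item)
        render(shown)              # user can click here, mid-reflow
    return render(shown)

# The sane approach: finish building the list, then present it once.
def build_then_present_panel():
    items = list(discover_items())   # collect everything first
    return render(items)             # one stable layout, clickable immediately

# Both end up showing the same things; only one moves targets while you aim.
assert incremental_panel() == build_then_present_panel()
```

The final contents are identical either way; the difference is purely that the second version never shows the user a clickable layout that is about to change.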
So IMHO rendering should wait until the user can actually interact with the rendered thing. Putting it there early is frustrating for the user. Nobody wants to click six times on a thing before it's ready and eventually get six instances of it, when all they wanted was one as soon as possible.
Linux distributions assume you really want to give priority to server applications
Since the year 2000 Microsoft has been doing that too because it turns out to be a really good idea for anything more multi-purpose than a game console.
Even as an experienced systems integration engineer, I would need a few hours at least to develop a plan on how to do that
Joking, newbie, selling yourself way short, or completely and utterly fucking useless - which is it to be? The amusing bit is the condescending crap at the end about home backups, when the situation is that if you are responsible for the gear then you are failing in your duty if you cannot do a bare metal restore of critical systems AND talk somebody with minimal experience through it. I've been there, with a complex pile of stuff only I knew how to restore properly, but I did my job and got it all down to a procedure just a few lines long with simple steps, and it gets packed in with the tapes.
you have to know the overall engineering plan to set them up.
So you refine the plan so that a monkey can step through it and you document it well enough that you can read it to somebody two minutes after waking or someone with limited experience can read it themselves.
FFS - it's far easier now than it ever was before, since we can boot stuff off USB drives and then remotely populate the disks with what was on there before, instead of reinstalling before the restore.
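Scaled down to something runnable, the boot-and-repopulate restore looks like this - here the "backup" is a local tar archive and the "disk" a scratch directory, and the hostname, paths and archive name are all made up for the sketch:

```python
# Miniature of a bare-metal restore: archive the system beforehand, then from
# rescue media unpack it straight onto the replacement disk. All names are
# illustrative; a real restore would pull the archive over the network.
import os
import tarfile
import tempfile

src = tempfile.mkdtemp()                     # stands in for the original system
os.makedirs(os.path.join(src, "etc"))
with open(os.path.join(src, "etc", "hostname"), "w") as f:
    f.write("host42\n")

backup = os.path.join(tempfile.mkdtemp(), "host42.tar.gz")
with tarfile.open(backup, "w:gz") as tar:    # step 1: backup taken before the failure
    tar.add(src, arcname=".")

disk = tempfile.mkdtemp()                    # stands in for the blank replacement disk
with tarfile.open(backup, "r:gz") as tar:    # step 2: booted from rescue media,
    tar.extractall(disk)                     # unpack straight onto the new disk

with open(os.path.join(disk, "etc", "hostname")) as f:
    assert f.read() == "host42\n"            # the restored system is intact
```

The written procedure that goes in with the tapes is essentially those two steps plus the bootloader, which is exactly why it can be short enough to read to someone over the phone.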
Since you laid on the condescending crap, it's time for me to ask you a question. As a "systems integration engineer", shouldn't you be taking the engineering approach of improving the broken system, instead of the ad-hoc basketweaving approach of a technician doing whatever seems sort of OK, a different way each time, while waiting for someone to write procedures to follow? I don't consider myself an engineer any more, since I've been exclusively on the IT side since 2000, but I still apply the approach I used on engineering problems - something you self-declared engineers whose title is not accepted by any professional body should consider if you want to be taken seriously.
Sometimes the recruiters just aren't good at finding the right people.
Since they often don't know what the keywords actually mean, it's very difficult for them to do so. That's the problem with using general recruiters instead of getting someone with a skillset related to that of the person you want to employ involved early in the piece. If the perfect candidate lists a skill of which the desired skill is a subset, the general recruiter will reject them, because they only know the keywords and not what they actually mean. If your degree says engineer and the recruiter is looking for a programmer, many recruiters will reject the application without even getting as far as considering experience in previous employment.
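The subset problem is easy to show in miniature - the skill names and the subsumption table below are invented purely to illustrate the failure mode:

```python
# Toy model of the keyword-matching failure: a literal filter rejects a
# candidate whose listed skill contains the required one. All skill names
# and the SUBSUMES table are made up for the example.

def keyword_match(required, candidate_skills):
    """What a general recruiter effectively does: exact keyword lookup."""
    return required in candidate_skills

# Someone from the field also knows which skills contain which.
SUBSUMES = {
    "systems programming": {"C", "memory management", "debugging"},
}

def informed_match(required, candidate_skills):
    """Accepts the exact skill OR any listed skill that subsumes it."""
    if required in candidate_skills:
        return True
    return any(required in SUBSUMES.get(skill, set())
               for skill in candidate_skills)

cv = {"systems programming"}          # the broader skill is on the CV...
assert not keyword_match("C", cv)     # ...so the keyword filter rejects it,
assert informed_match("C", cv)        # ...while an informed screener accepts.
```

Getting someone with the related skillset involved early is, in effect, supplying that SUBSUMES knowledge before the filtering happens rather than after.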
"Given the choice between accomplishing something and just lying around, I'd rather lie around. No contest." -- Eric Clapton