We're conflating use cases and identities when we say "newbie." As technology designers, our concern is use cases. There may be statistical overlap between the two, but mistaking one for the other lands us in deep water at design time.
Rather than newbies, let's talk use cases.
Case #1: The user is not "at desk, at work" but rather "in flow, in everyday personal life." For party-planning or kid-care purposes, say, they need to "send an SMS," "send an email," or "buy more diapers on Amazon.com." These use cases are all much better handled by tablets or mobiles, particularly if the user does not spend most of their work or personal life sitting at a computing system. The larger computing system imposes an extraordinary amount of overhead on (say) the stay-at-home parent who just wants more diapers: leave the playroom, go to the den, power up the desktop, sit down, confront a desktop full of resources, figure out which one is the right one, start the application, and so on. All of that is overhead when we have mobiles: pull iPhone from pocket, press button, tap Amazon, type "diapers," tap Buy, put phone back in pocket.
As technology folks, we have a terrible habit of taking someone's bewilderment to mean that they need more training or that they're a "newbie," rather than looking at it practically: they're being told that they have to do an *awful lot of work* (moving through the house, navigating a full suite of powerful computing resources, learning to manage them) just to get some more diapers in the midst of their *real life*, the one they actually care about, which involves diapers, not computing.
Case #2: A person new to computing is also new to the job, but it is now their *full-time job* to make charts and graphs with Excel. They will happily sit down with the 600-page book and the online training tutorials and get to work learning. Why? Because that set of resources is not overhead to them; it is the productive work they will be expected to do, so the investment in time and computing makes perfect sense. It is work, not waste.
I'd argue that in most cases, trying to marry a full-on computing environment (hierarchical file system or DB storage in quantity, multiple applications, multiple peripherals and forms of connectivity) to a rapid, task-based interface is not going to work out, because they're two different use cases. Rapid, task-based use demands lightness and speed. General-purpose "big computing" in service of office work demands feature-richness, open-endedness, and deliberateness (i.e., the opposite of lightness and speed). One is highly endpoint-oriented; the other has no endpoint and is highly process-oriented.
The right answer is not to redesign the desktop environment. The right answer is to get the stay-at-home parent an iPad, or a laptop with everything but the web browser uninstalled, preferably one that boots straight into the browser. In that case, the UI doesn't matter at all, because the user has no intention of ever touching it.
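As a rough sketch of what "boots straight into the browser" can look like: on a stock Linux laptop with auto-login enabled, a few lines of X session config are enough to turn the machine into a browser appliance. (This assumes X11 and Chromium are installed; the storefront URL is just a placeholder for whatever the user actually needs.)

```
# ~/.xinitrc -- a browser-only "appliance" session (assumes X11 + Chromium)
# No window manager, no desktop, no file manager: the browser IS the session.
# The loop restarts the browser if it crashes or is closed, so the machine
# never drops the user onto a bare X screen.
while true; do
    chromium --kiosk --no-first-run "https://www.amazon.com"
done
```

Pair that with auto-login and `startx` on boot and the "UI" collapses to a power button and a web page, which is exactly the point.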
The "newbies" that we commonly reference are actually a use case—people that feel that their current goals are not well-served by a high-overhead investment in full-scale computing. To serve their needs with a full-on whitebox computer, we just have to strip out the general purpose computing entirely, or at the very least, hide it entirely—which makes the system all but useless for those embroiled in "general purpose computing" use cases, particularly in comparison to those that have a full desktop UI available to them.
Make a better desktop environment *and* make a better information appliance, and both sets of users will thank you.
Try to make a desktop environment *that is* an efficient information appliance, and the computing-for-work people will find it inefficient and unhelpful, while the casual net users will find it slow and needlessly complex compared to their sister's iPad.