Dual-core Systems Necessary for Business Users?
Lam1969 writes "Hygeia CIO Rod Hamilton doubts that most business users really need dual-core processors: 'Though we are getting a couple to try out, the need to acquire this new technology for legitimate business purposes is grey at best. The lower power consumption which improves battery life is persuasive for regular travelers, but for the average user there seems no need to make the change. In fact, with the steady increase in browser based applications it might even be possible to argue that prevailing technology is excessive.' Alex Scoble disagrees: 'Multiple core systems are a boon for anyone who runs multiple processes simultaneously and/or has a lot of services, background processes and other apps running at once. Are they worth it at $1000? No, but when you have a choice to get a single core CPU at $250 or a slightly slower multi-core CPU for the same price, you are better off getting the multi-core system and that's where we are in the marketplace right now.' An old timer chimes in: 'I can still remember arguing with a sales person that the standard 20 MB hard drive offered plenty of capacity and the 40 MB option was only for people too lazy to clean up their systems now and then. The feeling of smug satisfaction lasted perhaps a week.'"
You've got more threads than you might think... (Score:5, Insightful)
All the anti-virus, anti-spyware, anti-exploit, DRM, IM clients, mail clients, multimedia "helper" apps, browser "helper" apps, little system tray goodies, etc., etc., and so on, it can start to add up. A lot of home and small business users are running a lot more background and simultaneous stuff than they may realize.
That's not to say these noticeably slow down a 3.2GHz single-core machine with a gig of RAM, but the amount of stuff running in the background is growing exponentially. Dual core may not be of much benefit to business users now, but how long will that last?
- Greg
I don't agree either. (Score:3, Insightful)
I definitely don't agree. I remember hearing the same rubbish comments in various forms from shortsighted journos and analysts when CPUs were approaching 50MHz. Then I heard the same as we crept up to 100MHz, then 500MHz, then 1GHz.
It is always the same. "The average user doesn't need to go up to the next $CURRENT_GREAT_CPU because they're able to do their average things OK now". Of course they're able to do their average things now, that's why they're stuck doing average things.
The old timer's right - it's a stupid argument (Score:3, Insightful)
Spend the extra money on flash-cache (Score:3, Insightful)
1996 Called (Score:5, Insightful)
wbs.
nope (Score:3, Insightful)
I think it's been said for years that the vast majority of users need technology at around the 1995 level or so, and that's it. Unless of course you're into eye-candy [slashdot.org] or need to keep all your spyware up and running in tip-top condition. Seriously though, you know it's true that the bulk of business use is typing letters, contracts, whatever; a little email; a little browsing; and a handful of spreadsheets. That was mature tech 10 years ago.
I run Debian on an Athlon 1700 with 256 megs and it's super snappy. Of course I use wmii and live by K.I.S.S. Do I need dual-core multi-thread hyper-quad perplexinators? Nope.
I know. I'm a luddite.
Most folks DON'T need much HDD space... (Score:4, Insightful)
Sweeping generalizations are rarely more than "Yeah, me too!" posts.
resource usage (Score:1, Insightful)
Ones "necessary expenses" always grow to meet ones income.
Re:I don't agree either. (Score:2, Insightful)
Same applies elsewhere... I bought my car (Yaris) for gas savings (because the price in Quebec is waaaay too high), not for speed. I don't need speed; I just drive to work in the morning and come back at night, with a bit of camping on weekends and the usual downtown parties. Tell me, why would I buy the latest Ferrari when I can put my saved money into something else, such as buying a new computer (I'm a geek and I play games and reverse hashes... oh wait)? Would you?
Re:I don't agree either. (Score:2, Insightful)
In, say, three years, when dual core systems are slowly entering the low end, it makes sense for business users (and, frankly, the vast majority of users in general) to get it. Right now, dual core is high end stuff, with the price premium to prove it. Let the enthusiasts burn their cash on it, but for businesses, just wait another generation.
You're not leasing sports cars for your salesforce, you're not getting Mont Blanc pens for your office workers, why should you pay a premium on electronics that doesn't do anything for productivity either?
Really simple math (Score:3, Insightful)
It might seem trivial, but even with web based services that are hosted in-house, that 12 seconds of waiting is a LOT of time. Right now, if I could get work to simply upgrade me to more than 256MB of ram, I could reduce my waiting. If I was to get a full upgraded machine, all the better... waiting not only sucks, it sucks efficiencies right out of the company.
As someone mentioned, doing average things on average hardware is not exactly good for the business. People should be free to do extraordinary things on not-so-average systems.
Each system and application has a sweet spot, so no single hardware answer is correct, but anything that stops or shortens the waiting is a GOOD thing...
We all remember that misquote, "640K ought to be enough for anybody," and yeah, that didn't work out so well. Upgrades are not a question of if, but of when... upgrade when the money is right, and upgrade so that you won't have to upgrade again so quickly. Anyone in business should be thinking about what it will take to run the next version of Windows when it gets here... That is not an 'average' load on a PC.
Re:Not really (Score:3, Insightful)
But it didn't have to be that way; most multiprocessor operating systems will allow you to bind processes to a specific set of processors. In fact, some mixed workloads (although, admittedly, rare) show significant improvement when you optimize in this way. I've even seen optimized systems where one CPU is left unused by applications - generally in older multiprocessor architectures where one CPU was responsible for servicing all the hardware interrupts in the system.
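For what it's worth, the binding the parent describes is easy to demonstrate yourself. Here's a minimal Python sketch; it assumes Linux's `os.sched_setaffinity`, which doesn't exist on Windows or Mac OS X, hence the fallback:

```python
import os

def pin_to_cpus(cpus):
    """Bind the current process to the given CPU set (Linux only).

    Returns the effective affinity set, or None on platforms that
    don't expose sched_setaffinity (e.g. Windows, Mac OS X).
    """
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, cpus)      # pid 0 means "this process"
        return os.sched_getaffinity(0)
    return None

if __name__ == "__main__":
    # Pin ourselves to CPU 0, leaving the other core(s) for everything else,
    # much like the interrupt-servicing setups described above.
    print(pin_to_cpus({0}))
```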
dual core, like most parallelized technologies, doesn't do nearly as much as you think it does, and won't until our compilers and schedulers get much better than they are now.
Compilers are being held back by the programming languages chosen by developers. As hardware concurrency increases, the technology behind compilers for imperative and procedural languages (C, Pascal, Fortran, Java) shows just how ill-suited it is to take advantage of that power. Instead, we will need to move to new languages that will enable compilers to optimize for concurrency, much as circuit designers moved from algebraic logic languages (ABEL, PALASM) to concurrent logic languages (VHDL, Verilog) with the transition from programmable logic devices to field programmable gate arrays.
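To illustrate the point about what makes code parallelizable: a side-effect-free function can be farmed out across cores without anyone worrying about evaluation order. A minimal Python sketch using the standard multiprocessing module (hardly a new concurrent language, but it shows the purity requirement):

```python
from multiprocessing import Pool

def square(x):
    # Pure function: no shared state, so calls can run on any core,
    # in any order, and the result is still deterministic.
    return x * x

if __name__ == "__main__":
    with Pool() as pool:                   # one worker per CPU by default
        print(pool.map(square, range(8)))  # result order is preserved
```

The moment `square` mutates shared state, this transformation stops being safe, which is exactly the optimization barrier imperative languages put in front of compilers.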
Not Now, but a swell idea if you plan to run VISTA (Score:2, Insightful)
Where I work, we're starting to use VMWare or VirtualPC to isolate troublesome apps so one crappy application doesn't kill a client's PC. Virtualization on the desktop will expand to get around the universal truth that while you can install any Windows application on a clean Windows OS and make it run, apps two and beyond aren't guaranteed to work together. Between virtualization and Vista, it's wise for business customers to OVERBUY today so the machine is still usable in 3-4 years.
56K? (Score:2, Insightful)
But I don't think it applies to the single/dual core issue.
I don't think any of the bottlenecks right now are processor related. Most of the issues I see are bandwidth to the box and graphics.
Which would you prefer:
#1. A second proc at the same speed as your current proc?
#2. A second pipe (LAN or Internet) at the same speed as your current pipe?
Assuming that the machine/OS/apps can fully utilize either option.
There are very few systems I've ever seen that ever hit a processor bottleneck.
I'm all in favour of the development of inexpensive, multi-core procs. Even for the desktop. Even for them becoming the standard on the desktop. Because I don't know what cool new functionality will be available tomorrow.
But from what I see right now, the limitation is how fast I can get data to the single proc I'm running today.
2x the processor power
or
2x the pipe?
Since when.... (Score:3, Insightful)
Re:Most folks DON'T need much HDD space... (Score:4, Insightful)
Whenever work has to be done on one of the office PCs, we do not give you the opportunity to transfer stuff off before we move it out. Lost a file? Go ahead, complain... you'll get written up for violating corporate policy.
Personal files? While discouraged, each user gets so much private space on the network.
Unbelievable (Score:3, Insightful)
It's a flocking behaviour... (Score:4, Insightful)
Q: "What function of Word that wasnt available in Word 6.0 and is now requires this insane increase of performance need?"
A: The ability to open and read documents sent to you by third parties using the newer tools.
For example, when your lawyer buys a new computer, installs a new version of Office, and writes up a contract for you, you are not going to be able to read it on your machine running an older version of the application. And the newer version doesn't run on the older platform.
Don't worry - the first copy of a program that has this continuous upgrade path lock-in is free with the machine.
-- Terry
Re:Most folks DON'T need much HDD space... (Score:4, Insightful)
That's nice. I've got about 2GB of automated tests I need to run before I make each release of new code/tests I write to source control. Running these from a local hard drive takes about 2 hours. Running them across the network takes about 10 hours, if only one person is doing it at a time. There are about 20 developers sharing the main development server that hosts source control etc. in my office. Tell me again how having files locally is wrong, and we should run everything over the network?
(Before you cite the reliability argument, you should know that our super-duper mega-redundant top-notch Dell server fell over last week, losing not one but two drives in the RAID array at once, thus removing the hot-swapping recovery option and requiring the server to be taken down while the disk images were rebuilt. A third drive then failed during that, resulting in the total loss of the entire RAID array, and the need to replace the lot and restore everything from back-ups. Total down-time was about two days for the entire development group. In case you're curious, they also upgraded some firmware in the RAID controller to fix some known issues that may have been responsible for part of this chaos. No, we don't believe three HDs all randomly failed within two days of each other, either.)
Fortunately, we were all working from local data, so most of us effectively had our own back-ups. However, this didn't much help since everything is tied to the Windows domain, so all the services we normally use for things like tracking bugs and source control were out anyway. We did actually lose data, since there hadn't been a successful back-up of the server the previous night due to the failures, so in effect we really lost three days of work time.
All in all, I think your "store everything on the network, or else" policy stinks of BOFHness, and your generalisation is wholly unfounded. But you carry on enforcing your corporate policy like the sysadmin overlord you apparently are, as long as you're happy for all your users to hold you accountable for it if it falls apart when another policy would have been more appropriate.
You've ALREADY got more threads than you need (Score:3, Insightful)
Yes, the typical user nowadays runs lots of processes. And adding a second core almost doubles the number of processes your system can handle. But so does doubling the clock speed. And most business machines already have processors that are at least twice as fast as they need to be.
As always, people looking for more performance fixate on CPU throughput. One more time folks: PCs are complicated beasts, with many potential bottlenecks.
Except that few of these bottlenecks have any effect on your typical office productivity apps. Word processors, browsers, spreadsheets: none of these require a lot of CPU time, or do heavy disk access, or overload your video card. Running lots of apps used to overload main memory, but nowadays systems all ship with at least 256 meg. So if Word isn't performing fast enough for you, get IT to do a spyware scan and to defragment your disk, and forget about that new expensive toy. It will run faster at first, but if you neglect it like you're neglecting your current box, it'll soon be as slow as your current box.
Developers Will Make it Necessary (Score:3, Insightful)
Continued from the wikipedia page... "Cooperative multitasking has the advantage of making the operating system design much simpler, but it also makes it less stable because a poorly designed application may not cooperate well, and this often causes system freezes."
Cooperative multitasking was the programming equivalent of nice guys finishing last. I spent big chunks of my life watching that little hourglass turn and turn and turn as each and every program grabbed as many resources as possible while trying to freeze out every other program.
Concerned that dual cores are too much resource for today's programs? Not to worry: large numbers of software developers are currently gearing up to play fast and loose with every cycle dual cores have to offer.
When I had my first 286, an engineer friend of the family came over, and I jumped at the opportunity to show off what was then a $3200 kit. He liked it, but said he was sticking with his XT because he found he could always find other work to do while his numbers were being crunched. Sound, mature reasoning.
Stuck doing Average Things? (Score:3, Insightful)
Of course they're able to do their average things now, that's why they're stuck doing average things.
So, if I were to take the newest, hottest dual core processor, load up with RAM, a massive hard-drive, top-of-the-line video card, etc., etc. and hand it over to the average user, they'd do "exceptional things?"
Please! They'd browse the web, type a letter, send e-mail, fool around with the photos or graphics from their digital camera, and play games. Just about any computer since the mid-'90s can do those fairly well. Even an old 486/33 computer can do it. They aren't going to suddenly start programming or using their computers for power computing.
What drives their purchases is price, and whether the machine can perform those basic tasks in a reasonable manner. The OS, applications, or whatever else they have on the box are what drive the processor/memory/video/storage needs.
Re:You've got more threads than you might think... (Score:4, Insightful)
The number of resident processes really doesn't matter. What does matter is to look at your CPU utilization when you're not actively doing anything. Even with all those "running" processes, it probably isn't over 5%. That's how much you'll benefit from a dual processor.
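A quick way to sanity-check the parent's claim on your own box: compare the load average to the core count. A rough Python sketch (`os.getloadavg` is Unix-only, hence the guard; this is a crude approximation, not a real profiler):

```python
import os

def rough_busy_percent():
    """Approximate total-CPU busyness: 1-minute load average / core count.

    Returns a percentage, or None on platforms without os.getloadavg
    (e.g. Windows).
    """
    cores = os.cpu_count() or 1
    if hasattr(os, "getloadavg"):
        load1, _, _ = os.getloadavg()      # runnable processes, 1-min average
        return 100.0 * load1 / cores
    return None

if __name__ == "__main__":
    busy = rough_busy_percent()
    print("unknown" if busy is None else f"~{busy:.0f}% of total CPU busy")
```

On an "idle" desktop loaded with tray apps, this number is usually in the low single digits, which is the parent's point: a second core mostly multiplies headroom you weren't using.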
Apple is pretty good at this (Score:4, Insightful)
The other nice thing they have is the Accelerate.framework - if you link against that, you automatically get the fastest possible approach to a lot of compute-intensive problems (irrespective of architecture), and they put effort into making them multi-CPU friendly.
Then there's xcode which automatically parallelises builds to the order of the number of CPUs you have. If you have more than one mac on your network, it'll use distcc to (seamlessly) distribute the compilation. I notice my new Mac Mini is significantly faster than my G5 at producing PPC code. Gcc is a cross-compiler, after all...
And, all the "base" libraries (Core Image, Core Video, Core Graphics etc.) are designed to be either (a) currently multi-cpu aware, or (b) upgradeable to being multi-cpu aware when development cycles become available.
You get a hell of a lot "for free" just by using the stuff they give away. This all came about because they had slower CPUs (G4's and G5's) but they had dual-proc systems. It made sense for them to write code that handled multi-cpu stuff well. I fully expect the competition to do the same now that dual-CPU is becoming mainstream in the intel world, as well as in the Apple one...
Simon
Re:Memory bound, not CPU bound ... (Score:5, Insightful)
I'm a software developer [...] I'm almost never CPU bound if I have enough memory.
Don't compile much, huh? I'd love to have dual cores -- "make -j3", baby!
I want my CPU cycles back. (Score:4, Insightful)
For example, they can write code that unnecessarily makes lots of copies of arrays (no lazy evaluation, using pass-by-value), [unnecessarily] evaluates the same function/expression a huge number of times, badly misuses things like linked lists, or even just uses stupid implementations [bubblesort, etc.]...
And they will never realize how slow these things are, because they only try small datasets for testing/debugging. Routine "X" may seem fast because it executes in 20ms (practically instant), but perhaps a more skilled person could write it using lower-order-complexity algorithms so it only needs 10ms... The skeptical reader may ask what's the point... Well, if you are on a computer that is 3X slower and using real-world input data that is 5X bigger, you WILL notice a huge difference between the two implementations!
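The "fast on toy input, painful on real input" effect is easy to reproduce. A small Python sketch comparing a naive O(n^2) sort against the built-in O(n log n) one (the timings printed are illustrative, not benchmarks):

```python
import random
import timeit

def bubble_sort(a):
    """Classic O(n^2) sort: harmless at n=100, painful at real-world sizes."""
    a = a[:]                                  # pass-by-value copy, as lamented above
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

if __name__ == "__main__":
    for n in (100, 5000):                     # "testing" size vs. "real" size
        data = [random.random() for _ in range(n)]
        slow = timeit.timeit(lambda: bubble_sort(data), number=1)
        fast = timeit.timeit(lambda: sorted(data), number=1)
        print(f"n={n}: bubble {slow:.4f}s, built-in {fast:.4f}s")
```

At n=100 both look "instant"; at n=5000 the gap is already hard to miss, and it only widens from there.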
And if you are like most of the public, you will blame the slowness on your own computer being out-of-date ---- and you will go and buy a new one.
Plus, "time-to-market" pressures mean that companies probably tend toward releasing poorly designed & inefficient code, all in the name of the almighty buck. Fscking "Moore" created a self-fufilling prophesy that made things more cost efficient [for software development] to buy a better computer than to write a more efficient program.
When computers stop getting faster, software will start getting a whole lot better...
Re:How much time do you wait for your machine? (Score:3, Insightful)
Daily: I wait for 15 minutes for some corporate machination called "codex" which:
Ensures I have the mandatory corporate agitprop screen saver
Changes the admin password just in case I cracked it yesterday
Makes sure I haven't changed any windows files
Scans my system for illicit files or applications
Twice Weekly: I wait for over an hour for Symantec Anti-virus to scan a 40 gig drive that's half empty
And my Microsoft-sponsored reboot, which rounds out the daily time wasters
Monthly: I wait for at least 45 minutes for the latest MS hotfix to be forced on to my system.
Occasionally I wait a random amount of time while the network is unavailable; due to the configuration of the desktops, they are essentially unusable without it.
Result: I have one official desktop that I use for e-mail, calendaring, and surfing, and
I have a stealth ultra-portable that I have my compiler and other tools on and occasionally I sync my CVS tree to the network.
It's almost as if one computer is consumed appeasing the corporate types while the other does the work.
Re:Overkill Dragging Customers Along (Score:3, Insightful)
Yep. In the shop I'm in now we support about 17,000 retail lanes with POS gear and servers. A very big issue is when a donk is at (a) end of life (vendors don't make 'em), (b) end of availability (nothing on the second-hand market either) and (c) end of support (can't even beg used replacements to fix).
Stuff stays roughly in sync with Moore's Law, 18 months. We have to upgrade at that point, and we spend more on cables for new peripherals than we do on software upgrades. All this in a business environment that really, really wishes there was no such thing as progress.
Oh please, man, please. (Score:2, Insightful)
Yeah, that's true -- if we're all 100% productive every second of every day, from punch-in to clock-out. Right.
Here's a startling revelation for "productivity" freaks who obsess over how this or that will shave precious microseconds off their busy schedule -- we all waste more time reading slashdot, IMing people, and otherwise screwing around, than we ever have lost to slow desktop machines.
And that's us, part of the so-called technical aristocracy. The article itself was about "average business users", most of whom are not coming anywhere close to using their computer to the maximum. The computer is usually sitting around idle while the user stares in utter confusion at the "File" menu, trying to figure out how to open a new spreadsheet, or wondering which one of their fifty-seven currently open IE windows they were supposed to be looking at. Do they really need dual-core processors to handle the daunting task of experimenting with fonts for their Powerpoint presentation?
Most "business users" would be better advised to stop running stupid crap in the background, stop downloading every idiotic Free Screensaver they come across, and other basic fundamentals of computer use, than worrying about how many megahertz their shiny new computer has. For the average schmuck that runs Outlook, Excel, Word, and IE, the only excuse for having a slow machine is the sheer amount of nonsense they're running in the background because they refuse to excercise any common sense whatsoever.
As for me, I am sitting near a guy who rolled in around 10am, had a brief meeting with our boss, and hasn't done shit since then other than read some websites (not that I'm the paragon of productivity right now either, but...). And you're actually suggesting that he would "save time" measured in seconds per week with bigger, better, faster machines. Save time doing what, exactly?