Dual-core Systems Necessary for Business Users? 398

Lam1969 writes "Hygeia CIO Rod Hamilton doubts that most business users really need dual-core processors: 'Though we are getting a couple to try out, the need to acquire this new technology for legitimate business purposes is grey at best. The lower power consumption which improves battery life is persuasive for regular travelers, but for the average user there seems no need to make the change. In fact, with the steady increase in browser based applications it might even be possible to argue that prevailing technology is excessive.' Alex Scoble disagrees: 'Multiple core systems are a boon for anyone who runs multiple processes simultaneously and/or have a lot of services, background processes and other apps running at once. Are they worth it at $1000? No, but when you have a choice to get a single core CPU at $250 or a slightly slower multi-core CPU for the same price, you are better off getting the multi-core system and that's where we are in the marketplace right now.' An old timer chimes in: 'I can still remember arguing with a sales person that the standard 20 MB hard drive offered plenty of capacity and the 40 MB option was only for people too lazy to clean up their systems now and then. The feeling of smug satisfaction lasted perhaps a week.'"
This discussion has been archived. No new comments can be posted.
  • The key quote here, IMO, is: "Multiple core systems are a boon for anyone who runs multiple processes simultaneously and/or have a lot of services, background processes and other apps running at once."

    All the anti-virus, anti-spyware, anti-exploit, DRM, IM clients, mail clients, multimedia "helper" apps, browser "helper" apps, little system tray goodies, and so on can start to add up. A lot of home and small business users are running a lot more background and simultaneous stuff than they may realize.

    That's not to say these noticeably slow down a 3.2GHz single-core machine with a gig of RAM, but the amount of stuff running in the background is growing exponentially. Dual core may not be of much benefit to business users now, but how long will that last?

    - Greg

  • by Saven Marek ( 739395 ) on Thursday March 23, 2006 @09:31PM (#14984980)
    for the average user there seems no need to make the change. In fact, with the steady increase in browser based applications it might even be possible to argue that prevailing technology is excessive.'

    I definitely don't agree. I remember hearing the same rubbish comments in various forms from shortsighted journos and analysts when we were approaching CPUs at 50MHz. Then I heard the same thing creeping up to 100MHz, then 500MHz, then 1GHz.

    It is always the same. "The average user doesn't need to go up to the next $CURRENT_GREAT_CPU because they're able to do their average things OK now". Of course they're able to do their average things now, that's why they're stuck doing average things.
  • by strider44 ( 650833 ) on Thursday March 23, 2006 @09:32PM (#14984990)
    It's inevitable. The more resources we have, the more we're going to want to use. That goes for basically everything - it's just human nature.
  • by dsginter ( 104154 ) on Thursday March 23, 2006 @09:37PM (#14985016)
    I'd rather spend the extra $750 on flash cache memory for the hard drive. Or, just replace the hard drive altogether [eetimes.com]. I guarantee either of these would win the average Business Joe's pick in a triple-blind taste test.
  • 1996 Called (Score:5, Insightful)

    by wideBlueSkies ( 618979 ) * on Thursday March 23, 2006 @09:44PM (#14985047) Journal
    It wants to know why we need Pentiums on the desktop. Why isn't a 486 DX fast enough?

    wbs.
  • nope (Score:3, Insightful)

    by pintpusher ( 854001 ) on Thursday March 23, 2006 @09:46PM (#14985060) Journal
    [out-of-context quote] prevailing technology is excessive.[/out-of-context quote]

    I think it's been said for years that the vast majority of users need technology at around the 1995 level or so and that's it. Unless of course you're into eye-candy [slashdot.org] or need to keep all your spyware up and running in tip-top condition. Seriously though, you know it's true that the bulk of business use is typing letters, contracts, whatever; a little email; a little browsing and a handful of spreadsheets. That was mature tech 10 years ago.

    I run Debian on an Athlon 1700 with 256 megs and it's super snappy. Of course I use wmii and live by K.I.S.S. Do I need dual-core multi-thread hyper-quad perplexinators? Nope.

    I know. I'm a luddite.
  • by Loopy ( 41728 ) on Thursday March 23, 2006 @09:46PM (#14985062) Journal
    Really, consider the average business PC user. Outside of folks who have large development environments, do video/graphics/audio work, or work on large software projects (such as games), most users really do not need 80GB hard disks. If you DO need more than that, you are probably quickly getting to the point of being able to justify storing your data on a file server. My unit at work only has 30GB on it, and that includes several ghost images of the systems I'm running QA on. Sure, grouse about Microsoft code bloat all you want, but it doesn't take up THAT much HDD space.

    Sweeping generalizations are rarely more than "Yeah, me too!" posts. /rolleyes
  • resource usage (Score:1, Insightful)

    by Anonymous Coward on Thursday March 23, 2006 @09:49PM (#14985080)
    Another quote relates to spending.

    One's "necessary expenses" always grow to meet one's income.
  • by Poltras ( 680608 ) on Thursday March 23, 2006 @09:51PM (#14985093) Homepage
    What if it is true? My mom does not need to play Doom III. My cousin does not need to load 500 things in the background (such as QoS and scheduler, great services of course...). My grandfather just wants to play cards with friends over the internet, while his wife wants to print recipes. Those are average things, and they ask a computer to do them. I don't want them to be blasted off by a great Aero Glass window border, because they can put that saved money elsewhere (notably in banks, so that I can have it when they die... muhahaha :P). Why would they do so?

    Same applies elsewhere... I bought my car (Yaris) for gas savings (because the price in Quebec is waaaay too high), not for speed. I don't need speed, I just go to work in the day and come back at night, with a bit of camping on weekends and the usual downtown parties. Tell me, why would I buy the latest Ferrari when I can put my saved money into something else, such as buying a new computer (I'm a geek and I play games and reverse hashes... oh wait)? Would you?

  • by JanneM ( 7445 ) on Thursday March 23, 2006 @09:52PM (#14985097) Homepage
    I'd like to rephrase it as "The average user does not need the $CUTTING_EDGE_STUFF because the $CURRENT_CHEAP_LOWER_END will run all they want to do just fine for the next few years."

    In, say, three years, when dual core systems are slowly entering the low end, it makes sense for business users (and, frankly, the vast majority of users in general) to get them. Right now, dual core is high-end stuff, with the price premium to prove it. Let the enthusiasts burn their cash on it, but for businesses, just wait another generation.

    You're not leasing sports cars for your salesforce, you're not getting Mont Blanc pens for your office workers, so why should you pay a premium for electronics that doesn't do anything for productivity either?
  • Really simple math (Score:3, Insightful)

    by zappepcs ( 820751 ) on Thursday March 23, 2006 @09:57PM (#14985120) Journal
    The value of having faster hardware is simpler than all this cogitation would lead us to believe. If you spend 12 seconds of every minute waiting on something, that is 20% of your day. Decreasing this wait to 2 seconds greatly reduces waste: wasted man-hours, wasted resources, wasted power... (a back-of-the-envelope version of this arithmetic is sketched at the end of this comment).

    It might seem trivial, but even with web based services that are hosted in-house, that 12 seconds of waiting is a LOT of time. Right now, if I could get work to simply upgrade me to more than 256MB of RAM, I could reduce my waiting. If I were to get a fully upgraded machine, all the better... waiting not only sucks, it sucks efficiency right out of the company.

    As someone mentioned, doing average things on average hardware is not exactly good for the business. People should be free to do extraordinary things on not-so-average systems.

    Each system and application has a sweet spot, so no single hardware answer is correct, but anything that stops or shortens the waiting is a GOOD thing...

    We all remember that misquote "640K ought to be enough for anybody" and yeah, that didn't work out so well. Upgrades are not a question of if, but of when... upgrade when the money is right, and upgrade so that you won't have to upgrade again so quickly. Anyone in business should be thinking about what it will take to run the next version of Windows when it gets here... That is not an 'average' load on a PC.
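
    A rough back-of-the-envelope version of that arithmetic, sketched in Python; the 8-hour workday is an assumed figure for illustration, and the 12-second and 2-second waits are the numbers quoted above:

        # Back-of-the-envelope: how much of a workday goes to waiting?
        WORKDAY_HOURS = 8  # assumed for illustration

        def wasted_minutes(wait_seconds_per_minute, hours=WORKDAY_HOURS):
            """Minutes per day spent waiting, given seconds wasted each minute."""
            return (wait_seconds_per_minute / 60.0) * hours * 60

        before = wasted_minutes(12)   # 96 min/day, i.e. 20% of the day
        after = wasted_minutes(2)     # 16 min/day, i.e. ~3.3% of the day
        print(f"saved: {before - after:.0f} minutes per day")
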
  • Re:Not really (Score:3, Insightful)

    by jevvim ( 826181 ) on Thursday March 23, 2006 @09:59PM (#14985131) Journal
    In point of fact, both were context switching in and out of both CPUs pretty regularly.

    But it didn't have to be that way; most multiprocessor operating systems will allow you to bind processes to a specific set of processors (a minimal sketch appears at the end of this comment). In fact, some mixed workloads (although, admittedly, rare) show significant improvement when you optimize in this way. I've even seen optimized systems where one CPU is left unused by applications - generally in older multiprocessor architectures where one CPU was responsible for servicing all the hardware interrupts in the system.

    dual core, like most parallelized technologies, doesn't do nearly as much as you think it does, and won't until our compilers and schedulers get much better than they are now.

    Compilers are being held back by the programming languages chosen by developers. As hardware concurrency increases, the technology behind compilers for imperative and procedural languages (C, Pascal, Fortran, Java) shows just how ill-suited it is to take advantage of that power. Instead, we will need to move to new languages that will enable compilers to optimize for concurrency, much as circuit designers moved from algebraic logic languages (ABEL, PALASM) to concurrent logic languages (VHDL, Verilog) with the transition from programmable logic devices to field programmable gate arrays.
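
    A minimal sketch of that processor binding, assuming a Linux system where Python's os.sched_setaffinity() exposes the kernel's affinity call (illustrative only; other OSes use different APIs):

        import os

        # Pin this process to CPU 0 only, leaving the other core free for a
        # different workload or for interrupt servicing, as described above.
        os.sched_setaffinity(0, {0})      # pid 0 means "the calling process"
        print("now allowed on CPUs:", os.sched_getaffinity(0))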

  • by pcguru19 ( 33878 ) on Thursday March 23, 2006 @10:00PM (#14985132)
    The big ole bag of ass that will become Vista someday is going to make good use of that 2nd core. The current preview version loves all the CPU, RAM, and Video processing you can throw at it.

    Where I work, we're starting to use VMware or Virtual PC to isolate troublesome apps so one crappy application doesn't kill a client's PC. Virtualization on the desktop will expand to get around the universal truth that while you can install any Windows application on a clean Windows OS and make it run, apps two and beyond aren't guaranteed to work together. Between virtualization and Vista, it's wise for business customers to OVERBUY today so the machine is still usable in 3-4 years.
  • 56K? (Score:2, Insightful)

    by khasim ( 1285 ) <brandioch.conner@gmail.com> on Thursday March 23, 2006 @10:01PM (#14985141)
    What about when 56k modems were fast enough for everyone.
    56K? Son, most people won't read fast enough to keep up with 1,200 baud.
    The capacity of applications will always grow to meet and exceed the available capacity to it.
    I think you're onto something there.

    But I don't think it applies to the single/dual core issue.

    I don't think any of the bottlenecks right now are processor related. Most of the issues I see are bandwidth to the box and graphics.

    Which would you prefer:
    #1. A second proc at the same speed as your current proc?

    #2. A second pipe (LAN or Internet) at the same speed as your current pipe?

    Assuming that the machine/OS/apps can fully utilize either option.

    There are very few systems I've ever seen that ever hit a processor bottleneck ... that are not BROKEN at the time. Endless loops don't count.

    I'm all in favour of the development of inexpensive, multi-core procs. Even for the desktop. Even for them becoming the standard on the desktop. Because I don't know what cool new functionality will be available tomorrow.

    But from what I see right now, the limitation is how fast I can get data to the single proc I'm running today.

    2x the processor power
    or
    2x the pipe?
  • Since when.... (Score:3, Insightful)

    by countach ( 534280 ) on Thursday March 23, 2006 @10:07PM (#14985170)
    Since when was "legitimate business purposes" part of the equation? Many business users just using Office and email could get by with a 5-year-old PC. But the industry moves on. Lease agreements terminate. The upgrade cycle continues its relentless march. Smart businesses could slow their upgrades down. Typical businesses will keep paying Dell and keeping HP's business model afloat.
  • by MyLongNickName ( 822545 ) on Thursday March 23, 2006 @10:08PM (#14985180) Journal
    Agreed. In fact, in any kind of multi-person office (or single for that matter), only PC software should be on the hard drive. No files. Anything of any importance should be saved to a network.

    Whenever work has to be done on one of the office PCs, we do not give you the opportunity to transfer stuff off before we move it out. Lost a file? Go ahead, complain... you'll get written up for violating corporate policy.

    Personal files? While discouraged, each user gets so much private space on the network.
  • Unbelievable (Score:3, Insightful)

    by svunt ( 916464 ) on Thursday March 23, 2006 @10:11PM (#14985201) Homepage Journal
    I really can't believe this debate is ongoing. It's really the same thing, as has been pointed out above, as any "I don't need it this week, so it's just not important, period" argument, which can be traced back some decades now. For some of us, it's worth the early adopter price; for the rest, it's worth waiting until it's a much cheaper option, but as we all should know by now, what Gateway giveth, Gates taketh away. As the new hardware becomes available, software developers will take advantage of it. The only question is: how long can you hold out while the price comes down? It'll be a different answer for all of us. There is no definable "business user" to make such generalisations about accurately.
  • by tlambert ( 566799 ) on Thursday March 23, 2006 @10:35PM (#14985301)
    It's a flocking behaviour... and you *must* take it into account when choosing software.

    Q: "What function of Word that wasn't available in Word 6.0 and is available now requires this insane increase in performance?"

    A: The ability to open and read documents sent to you by third parties using the newer tools.

    For example, when your lawyer buys a new computer, and installs a new version of Office, and writes up a contract for you, you are not going to be able to read it using your machine running an older version of the application. And the newer version doesn't run on the older platform.

    Don't worry - the first copy of a program that has this continuous upgrade path lock-in is free with the machine.

    -- Terry
  • by Anonymous Brave Guy ( 457657 ) on Thursday March 23, 2006 @10:47PM (#14985343)
    Agreed. In fact, in any kind of multi-person office (or single for that matter), only PC software should be on the hard drive. No files. Anything of any importance should be saved to a network.

    That's nice. I've got about 2GB of automated tests I need to run before I make each release of new code/tests I write to source control. Running these from a local hard drive takes about 2 hours. Running them across the network takes about 10 hours, if one person is doing it at once. There are about 20 developers sharing the main development server that hosts source control etc. in my office. Tell me again how having files locally is wrong, and we should run everything over the network?

    (Before you cite the reliability argument, you should know that our super-duper mega-redundant top-notch Dell server fell over last week, losing not one but two drives in the RAID array at once, and thus removing the hot-swapping recovery option and requiring the server to be taken down while the disk images were rebuilt. A third drive then failed during that, resulting in the total loss of the entire RAID array, and the need to replace the lot and restore everything from back-ups. Total down-time was about two days for the entire development group. In case you're curious, they also upgraded some firmware in the RAID controller to fix some known issues that may have been responsible for part of this chaos. No, we don't believe three HDs all randomly failed within two days of each other, either.)

    Fortunately, we were all working from local data, so most of us effectively had our own back-ups. However, this didn't much help since everything is tied to the Windows domain, so all the services we normally use for things like tracking bugs and source control were out anyway. We did actually lose data, since there hadn't been a successful back-up of the server the previous night due to the failures, so in effect we really lost three days of work time.

    All in all, I think your "store everything on the network, or else" policy stinks of BOFHness, and your generalisation is wholly unfounded. But you carry on enforcing your corporate policy like the sysadmin overlord you apparently are, as long as you're happy for all your users to hold you accountable for it if it falls apart when another policy would have been more appropriate.

  • by fm6 ( 162816 ) on Thursday March 23, 2006 @10:48PM (#14985345) Homepage Journal
    That quote caught my eye too. Only my reaction is stronger: bullshit.

    Yes, the typical user nowadays runs lots of processes. And having two cores does almost double the number of processes your system can handle. But so does doubling the clock speed. And most business machines already have processors that are at least twice as fast as they need to be.

    As always, people looking for more performance fixate on CPU throughput. One more time folks: PCs are complicated beasts, with many potential bottlenecks.

    Except that few of these bottlenecks have any effect on your typical office productivity apps. Word processors, browsers, spreadsheets: none of these require a lot of CPU time, or do heavy disk access, or overload your video card. Running lots of apps used to overload main memory, but nowadays systems all ship with at least 256 meg. So if Word isn't performing fast enough for you, get IT to do a spyware scan and to defragment your disk, and forget about that new expensive toy. It will run faster at first, but if you neglect it like you're neglecting your current box, it'll soon be as slow as your current box.

  • by Quirk ( 36086 ) on Thursday March 23, 2006 @10:48PM (#14985348) Homepage Journal
    I remember when Cooperative Multitasking [wikipedia.org] was the buzzword of the day. In the 80's Cooperative Multitasking was going to give Microsoft users the ability to run multiple programs... from the Wikipedia page: "by (programs) voluntarily ceding control to other tasks at programmer-defined points within each task."

    Continued from the Wikipedia page... "Cooperative multitasking has the advantage of making the operating system design much simpler, but it also makes it less stable because a poorly designed application may not cooperate well, and this often causes system freezes."

    Cooperative multitasking was the programming equivalent of nice guys finishing last. I spent big chunks of my life watching that little hourglass turn and turn and turn as each and every program power grabbed as much resources as possible while trying to freeze out every other program. (A toy sketch of the cooperative model appears at the end of this comment.)

    Concerned that dual cores are too much resource for today's programs? Not to worry, big numbers of software developers are currently gearing up to play fast and loose with every cycle dual cores have to offer.

    When I had my first 286, an engineer friend of the family came over and I jumped at the opportunity to show off what was then a $3200 kit. He liked it, but said he stayed with his XT because he found he could always find other work to do while his numbers were being crunched. Sound, mature reasoning.
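
    A toy sketch of the cooperative model in Python (an illustration, not anything those systems actually ran): each task is a generator that yields at programmer-defined points, and a task that never yields starves everyone else, which is exactly the hourglass problem described above.

        # Toy cooperative scheduler: tasks voluntarily cede control by yielding.
        def task(name, steps):
            for i in range(steps):
                print(f"{name}: step {i}")
                yield                      # the "voluntary cede" point

        def run_cooperatively(tasks):
            queue = list(tasks)
            while queue:
                current = queue.pop(0)
                try:
                    next(current)          # give the task one turn
                    queue.append(current)  # it yielded politely; requeue it
                except StopIteration:
                    pass                   # task finished

        run_cooperatively([task("spreadsheet", 3), task("printer", 2)])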

  • by NorbrookC ( 674063 ) on Thursday March 23, 2006 @10:49PM (#14985350) Journal

    Of course they're able to do their average things now, that's why they're stuck doing average things.

    So, if I were to take the newest, hottest dual core processor, load up with RAM, a massive hard-drive, top-of-the-line video card, etc., etc. and hand it over to the average user, they'd do "exceptional things?"

    Please! They'd browse the web, type a letter, send e-mail, fool around with the photos or graphics from their digital camera, and play games. Just about any computer since the mid-'90s can do those fairly well. Even an old 486/33 computer can do it. They aren't going to suddenly start programming or using their computers for power computing.

    What drives their purchases is price, and whether the machine can perform those basic tasks in a reasonable manner. The OS, applications, or whatever else they have on it are what drive the processor/memory/video/storage needs.

  • by timeOday ( 582209 ) on Thursday March 23, 2006 @10:56PM (#14985393)
    i ended up showing that i have almost 50 backround apps running,
    Resident in memory, sure, but I doubt they were actually in the "run" state at that moment. Most of them were waiting for a timer to expire, or for a Windows message, a network packet, a keypress, etc.

    The number of resident processes really doesn't matter. What does matter is to look at your CPU utilization when you're not actively doing anything. Even with all those "running" processes, it probably isn't over 5%. That's how much you'll benefit from a dual processor.
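
    A quick way to check that claim yourself, assuming the third-party psutil package is installed (a sketch, not a benchmark):

        import psutil

        # Sample overall CPU use for a few seconds while the machine is "idle".
        # Many resident processes usually still means very low actual CPU use.
        print("idle CPU usage: {:.1f}%".format(psutil.cpu_percent(interval=5)))
        print("resident processes:", len(psutil.pids()))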

  • by Space cowboy ( 13680 ) * on Friday March 24, 2006 @12:13AM (#14985715) Journal
    If you look at the way most OS X apps are designed, it's easy to multi-thread them. Cocoa pretty much imposes a model/view/controller pattern, and when your model manipulation is separate from your UI, it's pretty simple to spawn a background thread to calculate long tasks, or adopt a divide & conquer approach (a rough sketch of that pattern appears at the end of this comment).

    The other nice thing they have is the Accelerate.framework - if you link against that, you automatically get the fastest possible approach to a lot of compute-intensive problems (irrespective of architecture), and they put effort into making them multi-CPU friendly.

    Then there's Xcode, which automatically parallelises builds to the order of the number of CPUs you have. If you have more than one Mac on your network, it'll use distcc to (seamlessly) distribute the compilation. I notice my new Mac Mini is significantly faster than my G5 at producing PPC code. GCC is a cross-compiler, after all...

    And, all the "base" libraries (Core Image, Core Video, Core Graphics etc.) are designed to be either (a) currently multi-cpu aware, or (b) upgradeable to being multi-cpu aware when development cycles become available.

    You get a hell of a lot "for free" just by using the stuff they give away. This all came about because they had slower CPUs (G4's and G5's) but they had dual-proc systems. It made sense for them to write code that handled multi-cpu stuff well. I fully expect the competition to do the same now that dual-CPU is becoming mainstream in the intel world, as well as in the Apple one...

    Simon
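
    Not Cocoa, but the same idea sketched in Python (assuming the long-running model work is separable from the UI): hand the calculation to a worker process so the controller stays responsive and a second core can pick up the load.

        from concurrent.futures import ProcessPoolExecutor

        def crunch_model(n):
            """Stand-in for a long model calculation, kept out of the UI layer."""
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:            # uses the available cores
                future = pool.submit(crunch_model, 10_000_000)
                # ...the controller keeps servicing events here...
                print("model result:", future.result())   # collect when done
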
  • by swillden ( 191260 ) <shawn-ds@willden.org> on Friday March 24, 2006 @12:24AM (#14985762) Journal

    I'm a software developer [...] I'm almost never CPU bound if I have enough memory.

    Don't compile much, huh? I'd love to have dual cores -- "make -j3", baby!

  • by woolio ( 927141 ) on Friday March 24, 2006 @02:30AM (#14986142) Journal
    I think the main problem today is that many programmers are still wet behind the ears and developing on the latest and greatest machines, combined with ineptitude/inexperience...

    For example, they can write code that unnecessarily makes lots of copies of arrays (no lazy evaluation, using pass-by-value), [unnecessarily] evaluates the same function/expression a huge number of times, badly misuses things like linked lists, or even just uses stupid implementations [bubblesort, etc.]...

    And they will never realize how slow these things are because they test/debug with small datasets. Routine "X" may seem fast because it executes in 20ms (practically instant), but perhaps a more skilled person could write it using lower-order complexity algorithms and it would only need 10ms... The skeptical reader may ask what the point is... Well, if you are on a computer that is 3X slower and using real-world input data that is 5X bigger, you WILL notice a huge difference between the two implementations!!!! (A small demonstration is sketched at the end of this comment.)

    And if you are like most of the public, you will blame the slowness on your own computer being out-of-date ---- and you will go and buy a new one.

    Plus, "time-to-market" pressures mean that companies probably tend toward releasing poorly designed & inefficient code, all in the name of the almighty buck. Fscking "Moore" created a self-fulfilling prophecy that made it more cost-efficient [for software development] to buy a better computer than to write a more efficient program.

    When computers stop getting faster, software will start getting a whole lot better...
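
    To make that concrete, a small sketch comparing a deliberately naive quadratic sort with Python's built-in O(n log n) sort; on a tiny test set both look "instant", on larger data they do not:

        import random
        import time

        def bubble_sort(items):
            """Deliberately naive O(n^2) sort, as in the comment above."""
            items = list(items)
            n = len(items)
            for i in range(n):
                for j in range(n - 1 - i):
                    if items[j] > items[j + 1]:
                        items[j], items[j + 1] = items[j + 1], items[j]
            return items

        data = [random.random() for _ in range(5_000)]

        start = time.perf_counter()
        bubble_sort(data)
        print(f"bubble sort:   {time.perf_counter() - start:.2f} s")

        start = time.perf_counter()
        sorted(data)
        print(f"built-in sort: {time.perf_counter() - start:.4f} s")
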
  • How much time do I wait for my computer at work?

    Daily: I wait for 15 minutes for some corporate machination called "codex" which:
          Ensures I have the mandatory corporate agitprop screen saver
          Changes the admin password just in case I cracked it yesterday
          Makes sure I haven't changed any windows files
          Scans my system for illicit files or applications

    Twice Weekly: I wait for over an hour for Symantec Anti-virus to scan a 40 gig drive that's half empty
              And my Microsoft-sponsored reboot, which runs my daily time waster

    Monthly: I wait for at least 45 minutes for the latest MS hotfix to be forced on to my system.

    Occasionally I wait a random amount of time while the network is unavailable and, due to the configuration of the desktops, they are essentially unusable.

    Result: I have one official desktop that I use for e-mail, calendaring, and surfing, and
    a stealth ultra-portable that has my compiler and other tools on it; occasionally I sync my CVS tree to the network.

    It's almost as if one computer is consumed with appeasing the corporate types and one is doing the work.
  • by Nefarious Wheel ( 628136 ) on Friday March 24, 2006 @03:47AM (#14986353) Journal
    Unfortunately, ultimately, most business users will be forced to upgrade to new systems simply because there will no longer be replacement parts for the old systems.

    Yep. In the shop I'm in now we support about 17,000 retail lanes with POS gear and servers. A very big issue is when a donk is at (a) end of life (vendors don't make 'em), (b) end of availability (nothing on the second-hand market either) and (c) end of support (can't even beg used replacements to fix it).

    Stuff turns over roughly in sync with Moore's Law: 18 months. We have to upgrade at that point, and we spend more on cables for new peripherals than we do on software upgrades. All this in a business environment that really, really wishes there were no such thing as progress.

  • by rantingkitten ( 938138 ) <kittenNO@SPAMmirrorshades.org> on Friday March 24, 2006 @12:14PM (#14988306) Homepage
    A couple of seconds here and there, let's say 2 seconds in sixty. Now cut that to one second in sixty with a faster machine, ignoring multiple cores for now. Gain a day of work for every sixty. Six days of work a year.

    Yeah, that's true -- if we're all 100% productive every second of every day, from punch-in to clock-out. Right.

    Here's a startling revelation for "productivity" freaks who obsess over how this or that will shave precious microseconds off their busy schedule -- we all waste more time reading slashdot, IMing people, and otherwise screwing around, than we ever have lost to slow desktop machines.

    And that's us, part of the so-called technical aristocracy. The article itself was about "average business users", most of whom are not coming anywhere close to using their computer to the maximum. The computer is usually sitting around idle while the user stares in utter confusion at the "File" menu, trying to figure out how to open a new spreadsheet, or wondering which one of their fifty-seven currently open IE windows they were supposed to be looking at. Do they really need dual-core processors to handle the daunting task of experimenting with fonts for their Powerpoint presentation?

    Most "business users" would be better advised to stop running stupid crap in the background, stop downloading every idiotic Free Screensaver they come across, and learn the other basic fundamentals of computer use, rather than worrying about how many megahertz their shiny new computer has. For the average schmuck who runs Outlook, Excel, Word, and IE, the only excuse for having a slow machine is the sheer amount of nonsense they're running in the background because they refuse to exercise any common sense whatsoever.

    As for me, I am sitting near a guy who rolled in around 10am, had a brief meeting with our boss, and hasn't done shit since then other than read some websites (not that I'm the paragon of productivity right now either, but...). And you're actually suggesting that he would "save time" measured in seconds per week with bigger, better, faster machines. Save time doing what, exactly?
