Dual-core Systems Necessary for Business Users?

Lam1969 writes "Hygeia CIO Rod Hamilton doubts that most business users really need dual-core processors: 'Though we are getting a couple to try out, the need to acquire this new technology for legitimate business purposes is grey at best. The lower power consumption which improves battery life is persuasive for regular travelers, but for the average user there seems no need to make the change. In fact, with the steady increase in browser based applications it might even be possible to argue that prevailing technology is excessive.' Alex Scoble disagrees: 'Multiple core systems are a boon for anyone who runs multiple processes simultaneously and/or have a lot of services, background processes and other apps running at once. Are they worth it at $1000? No, but when you have a choice to get a single core CPU at $250 or a slightly slower multi-core CPU for the same price, you are better off getting the multi-core system and that's where we are in the marketplace right now.' An old timer chimes in: 'I can still remember arguing with a sales person that the standard 20 Mg hardrive offered plenty of capacity and the 40 Mg option was only for people too lazy to clean up their systems now and then. The feeling of smug satisfaction lasted perhaps a week.'"
This discussion has been archived. No new comments can be posted.

  • The key quote here, IMO, is: "Multiple core systems are a boon for anyone who runs multiple processes simultaneously and/or have a lot of services, background processes and other apps running at once."

    All the anti-virus, anti-spyware, anti-exploit, DRM, IM clients, mail clients, multimedia "helper" apps, browser "helper" apps, little system tray goodies, and so on... it can start to add up. A lot of home and small business users are running a lot more background and simultaneous stuff than they may realize.

    That's not to say these noticeably slow down a 3.2GHz single-core machine with a gig of RAM, but the amount of stuff running in the background is growing exponentially. Dual core may not be of much benefit to business users now, but how long will that last?

    - Greg

    • I'd rather spend the extra $750 on flash cache memory for the hard drive. Or just replace the hard drive altogether [eetimes.com]. I guarantee either of these would win the average Business Joe's pick in a triple-blind taste test.
    • by The Crazed Dingus ( 961915 ) on Thursday March 23, 2006 @09:38PM (#14985020)
      It's true. I recently took a look at my own system's running processes, and while it only shows four or five icons in the system tray, I found that I have almost 50 background apps running, and almost 1,500 process modules on top of that. This is way up from a year ago; it's a sign that multi-core processors are going to become the norm.
      • by timeOday ( 582209 ) on Thursday March 23, 2006 @10:56PM (#14985393)
        I found that I have almost 50 background apps running,
        Resident in memory, sure, but I doubt they were actually in the "run" state at that moment. Most of them were waiting for a timer to expire, or for a Windows message, a network packet, a keypress, etc.

        The number of resident processes really doesn't matter. What does matter is to look at your CPU utilization when you're not actively doing anything. Even with all those "running" processes, it probably isn't over 5%. That's how much you'll benefit from a dual processor.
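
        As a rough, out-of-thread illustration of the parent's point, here is a minimal sketch (assuming the third-party psutil package, which is not part of the original discussion) that counts how many resident processes are actually in the run state and samples CPU utilization while you sit idle:

        ```python
        # Minimal sketch: resident processes vs. processes actually running,
        # plus idle CPU utilization. Assumes the third-party psutil package.
        import psutil

        resident = list(psutil.process_iter(attrs=["name", "status"]))
        running_now = [p for p in resident
                       if p.info["status"] == psutil.STATUS_RUNNING]

        print(f"resident processes:   {len(resident)}")
        print(f"in the run state now: {len(running_now)}")

        # Sample overall CPU utilization over one second while doing nothing;
        # on a typical idle desktop this stays in the low single digits.
        print(f"idle CPU utilization: {psutil.cpu_percent(interval=1.0)}%")
        ```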

    • by Anonymous Coward on Thursday March 23, 2006 @09:44PM (#14985050)
      Obviously, few (if any) business users need anything more than a Pentium III running at 500 MHz. That processor is perfectly acceptable for business applications like OpenOffice.

      Unfortunately, ultimately, most business users will be forced to upgrade to new systems simply because there will no longer be replacement parts for the old systems.

      Consider the case of memory modules. Five years ago, 64MB PC100 SODIMMs were plentiful. Now, they are virtually extinct. By 2010, you will not be able to find any replacement memory modules for your 1999 desktop PC because it requires PC100 non-DDR SDRAM, and no one will sell the stuff. In 2010, the only things you will be able to buy are DDR2 SDRAM, Rambus DRAM, or newer-technology DRAM.

      In short, by 2010, you will be forced to upgrade for lack of spare parts.

      • OpenOffice on a P3 500? I feel sorry for you.

        I can't even tolerate its glacier-like performance on my dual-Xeon system with 8 gigabytes of RAM.
      • by Anonymous Coward
        > few (if any) business users need anything more than a Pentium III running at 500 MHz.

        The processor is acceptable, but the hard drives and RAM subsystems typically found in machines of that era are not. The Intel 915 board topped out at 512MB of slow SDRAM, and the 20GB disks found in those machines have horrendous seek times.

        Since most companies did not buy multi-thousand-dollar workstations for their desktops back in 1998 or whenever, the fact is that older machines simply can not handle the typical 20
      • Obviously, few (if any) business users need anything more than a Pentium III running at 500 MHz. That processor is perfectly acceptable for business applications like OpenOffice.
        Are you sure you aren't thinking of Word 97? Loading a long .doc with OpenOffice takes a loooong time.
      • Obviously, few (if any) business users need anything more than a Pentium III running at 500 MHz. That processor is perfectly acceptable for business applications like OpenOffice.

        Obviously you haven't gone and looked at what gets installed on many business PCs. My employer's standard systems are 2.4GHz P4s on the desktop and 1.7GHz P4-Ms on the laptops, with 512MB to 1GB of RAM depending on system usage.

        Everyone complains that they're slow. Why? Let's see:

        Software distribution systems that check through 5000 packages every
      • by TheCarp ( 96830 ) * <sjc.carpanet@net> on Thursday March 23, 2006 @11:41PM (#14985582) Homepage
        OK... I just had the dog and pony show from Intel themselves a few days ago at work. Nothing special or anything you can't find online... just the standard demo.

        You are right... however, that's always the way of it.

        Build it and they will come. Once the technology exists, somebody is gonna do something way fuckin' cool with it, or find some great new use for it, and it's gonna get used.

        Research computing, number crunching, they will eat this stuff up first, then as it becomes cheaper, it will make the desktop.

        Think of it for servers. Sure you don't NEED it... but what about power? Rack space? You have to take these into account.

        Sure, you don't need more than a Pentium 500 to serve your website. However, if you have a few of these puppies, you can serve the website off a virtual Linux box under VMware.
        Then you can have a database server, and a whole bunch of other virtual machines. All logically separate... but on one piece of hardware.

        Far less power consumption and rack space used to run 10 virtual machines on one multi-core, multi-"socket" (as the Intel rep likes to call it) box. Believe it or not, these issues are killers for some companies. Do you know what it costs to set up a new data center?

        Any idea what it costs to be upgrading massive UPS units and power distribution units? These are big projects that end up requiring shutdowns, and all manner of work before the big expensive equipment can even be used.

        Never mind air conditioning. If you believe the numbers Intel is putting out, this technology could be huge for datacenters that are worried that their cooling units won't be adequate in a few years.

        Seriously, when you look at the new tech, you need to realise where it will get used first, and who really does need it. I already know some of this technology (not the hardware level, but VMware virtual servers) is already making my job easier.

        Intel has been working with companies like VMware to make sure this stuff really works as it should. Why wouldn't they? It's in their interest that this stuff works, or else it will never get past system developers and be implemented. (OK, that's a bit of a rosy outlook; in truth it would get deployed, fail, and be a real pain, and Intel would make money anyway... but it wouldn't really be good for them.)

        The numbers looked impressive to me when I saw them. I am sure we will be using this stuff.

        -Steve
      • Unfortunately, ultimately, most business users will be forced to upgrade to new systems simply because there will no longer be replacement parts for the old systems.

        Yep. In the shop I'm in now we support about 17,000 retail lanes with POS gear and servers. A very big issue is when a donk is at (a) end of life (vendors don't make 'em), (b) end of availability (nothing on the second-hand market either) and (c) end of support (can't even beg used replacements to fix).

        Stuff stays on roughly in sync with Moore

    • I set up an OpenBSD system in VMware and tried to trim it down as much as possible. Init would just exec bash and nothing more. There were still some kernel processes. I realized you can't have a monoprocessor OS today besides DOS.

      So besides DOS, anything else can utilize the second core. As far as feasibility goes, if it costs twice as much as a single-core chip, it's not worth it. If it costs 20% more, it's well worth it.
    • But how many of those threads are CPU-bound? The moment you start doing any number crunching then (assuming code written to take advantage of it, and only up to a limit) the more CPUs the merrier, but no amount of extra CPU is going to get that data off the disk faster.

      That being said though, if you have enough memory to hold your entire SQL/Mail/whatever database in memory, you might start to see the benefits of multiple cpu cores for read oriented queries.

      A cool tool would be one that watches system activ
    • That quote caught my eye too. Only my reaction is stronger: bullshit.

      Yes, the typical user nowadays runs lots of processes. And having a second core does almost double the number of processes your system can handle. But so does doubling the clock speed. And most business machines already have processors that are at least twice as fast as they need to be.

      As always, people looking for more performance fixate on CPU throughput. One more time folks: PCs are complicated beasts, with many potential bottlenecks.

      Except t

  • Yes, very (Score:4, Funny)

    by Anonymous Coward on Thursday March 23, 2006 @09:29PM (#14984965)
    Also, that 30-inch monitor is very important.
  • by EdMcMan ( 70171 )
    'I can still remember arguing with a sales person that the standard 20 Mg hardrive offered plenty of capacity and the 40 Mg option was only for people too lazy to clean up their systems now and then. The feeling of smug satisfaction lasted perhaps a week.'

    If you build it, they will fill it.
  • by Saven Marek ( 739395 ) on Thursday March 23, 2006 @09:31PM (#14984980)
    for the average user there seems no need to make the change. In fact, with the steady increase in browser based applications it might even be possible to argue that prevailing technology is excessive.'

    I definitely don't agree. I remember hearing the same rubbish comments in various forms from shortsighted journos and analysts when we were approaching CPUs at 50MHz. Then I heard the same creeping up to 100MHz, then 500MHz, then 1GHz.

    It is always the same. "The average user doesn't need to go up to the next $CURRENT_GREAT_CPU because they're able to do their average things OK now". Of course they're able to do their average things now, that's why they're stuck doing average things.
    • I definitely don't agree. I remember hearing the same rubbish comments in various forms from shortsighted journos and analysts when we were approaching CPUs at 50MHz. Then I heard the same creeping up to 100MHz, then 500MHz, then 1GHz. It is always the same. "The average user doesn't need to go up to the next $CURRENT_GREAT_CPU because they're able to do their average things OK now". Of course they're able to do their average things now, that's why they're stuck doing average things.

      The only reason busi
      • No, they need fast CPUs because they want cheap software, and programmers can't afford to take the time to rewrite all the software to run on the bare metal! Instead, they have to reuse the layers upon layers of abstraction in order to keep the cost of maintaining the programs low, at the cost of speed.
    • What if it is true? My mom does not need to play Doom III. My cousin does not need to load 500 things in the background (such as QoS and scheduler, great services of course...). My grandfather just wants to play cards with friends over the internet, while his wife wants to print recipes. Those are average things, and they ask a computer to do them. I don't want them to be blasted off by a great Aero Glass window border, because they can put that saved money elsewhere (notably in banks, so that I can have it w
    • I'd like to rephrase it as "The average user does not need the $CUTTING_EDGE_STUFF because the $CURRENT_CHEAP_LOWER_END will run all they want to do just fine for the next few years."

      In, say, three years, when dual-core systems are slowly entering the low end, it makes sense for business users (and, frankly, the vast majority of users in general) to get them. Right now, dual core is high-end stuff, with the price premium to prove it. Let the enthusiasts burn their cash on it, but for businesses, just wa
    • It is always the same. "The average user doesn't need to go up to the next $CURRENT_GREAT_CPU because they're able to do their average things OK now". Of course they're able to do their average things now, that's why they're stuck doing average things.

      As opposed to what non-average things?

      I upgraded from an 850 MHz Centris to a 2.4 GHz Athlon a few months ago when the old mobo died; I don't see any noticeable difference in performance except video, which is a different matter. And I do DTP, more demandi

    • Of course they're able to do their average things now, that's why they're stuck doing average things.

      So, if I were to take the newest, hottest dual-core processor, load it up with RAM, a massive hard drive, a top-of-the-line video card, etc., and hand it over to the average user, they'd do "exceptional things"?

      Please! They'd browse the web, type a letter, send e-mail, fool around with the photos or graphics from their digital camera, and play games. Just about any computer since the mid-'90's can do

      • Even an old 486/33 computer can do it.
        No, it can't, because nowadays "browsing the web" includes having a browser able to parse things like XML and CSS (in other words, no "lite" browsers like Dillo), being able to handle Flash, and running a firewall and virus scanner in the background.
    • I'm not so sure about that (although it sounds like you have got a number of years on me with computers, so it's likely I'm just inexperienced). Software has gradually been improving for years, including many more features, better graphics, and higher resolution. But how many of these really warrant 2 GHz more power? Have typical user applications really hit a point where a 3.0 GHz computer is necessary? Other than server applications, processor-intensive programs, media programs, and games, there aren't many
  • by strider44 ( 650833 ) on Thursday March 23, 2006 @09:32PM (#14984990)
    It's inevitable. The more resources we have, the more we're going to want to use. That goes for basically everything - it's just human nature.
  • Not really (Score:3, Informative)

    by Theatetus ( 521747 ) on Thursday March 23, 2006 @09:33PM (#14984995) Journal
    Multiple core systems are a boon for anyone who runs multiple processes simultaneously and/or have a lot of services

    Not really. It all depends on your scheduler. There's just no telling without testing if a given application / OS combination will do better or worse on dual-core.

    Remember, two active applications, or two threads in an active application, does not mean those two processes or threads get to be piped to separate cores or processors. That might possibly happen but it probably won't.

    I had a boss who loved to get dual-CPU systems. Why? "Because that way one CPU can run the web server and one CPU can run the database." No matter how often I tried to shake that view from his head it never left. (In point of fact, both were context switching in and out of both CPUs pretty regularly).

    In short: dual core, like most parallelized technologies, doesn't do nearly as much as you think it does, and won't until our compilers and schedulers get much better than they are now.
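
    As a concrete way to settle the "no telling without testing" point, here is a minimal, hypothetical Python sketch (not from the discussion) that times the same CPU-bound work run serially and then split across two worker processes; the measurement, not the spec sheet, tells you whether the second core actually helps a given workload:

    ```python
    # Minimal benchmark sketch: does splitting two CPU-bound jobs across two
    # worker processes beat running them back to back on this machine?
    import time
    from concurrent.futures import ProcessPoolExecutor

    def burn(n):
        # Pointless CPU-bound loop standing in for a real workload.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(label, fn):
        start = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - start:.2f}s")

    N = 10_000_000

    if __name__ == "__main__":
        timed("serial   (2 jobs)", lambda: [burn(N), burn(N)])
        with ProcessPoolExecutor(max_workers=2) as pool:
            timed("parallel (2 jobs)", lambda: list(pool.map(burn, [N, N])))
    ```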

    • Re:Not really (Score:3, Informative)

      I had a boss who loved to get dual-CPU systems. Why? "Because that way one CPU can run the web server and one CPU can run the database." No matter how often I tried to shake that view from his head it never left. (In point of fact, both were context switching in and out of both CPUs pretty regularly).

      Those are not exactly CPU-hungry applications that could take advantage of multiple CPUs. No scheduler in the world will help run a webserver and database better on that machine if the I/O subsystem is the bot
    • Re:Not really (Score:3, Informative)

      by iamacat ( 583406 )
      Last I remember, you could assign processor affinity in the Windows Task Manager to really run the database on one CPU and the web server on another, if that's what gives you the best performance. Of course, the main point is that CPU-intensive code from both processes (say, sorting in the database and image rendering in the web server) can run simultaneously. What exactly is wrong with your boss's point of view?
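
      For what it's worth, the Task Manager affinity setting mentioned above can also be applied programmatically. The sketch below is a generic illustration (assuming the third-party psutil package; it pins the current process only because any database or web server PIDs here would be placeholders):

      ```python
      # Minimal affinity sketch using psutil: pin a process to a single CPU,
      # the programmatic equivalent of the Task Manager affinity checkbox.
      import psutil

      me = psutil.Process()                # current process, for demonstration
      print("before:", me.cpu_affinity())  # e.g. [0, 1]
      me.cpu_affinity([0])                 # pin this process to CPU 0 only
      print("after: ", me.cpu_affinity())  # [0]

      # With sufficient privileges, the same call against the database's and
      # web server's PIDs (hypothetical values) reproduces the boss's setup:
      #   psutil.Process(db_pid).cpu_affinity([0])
      #   psutil.Process(web_pid).cpu_affinity([1])
      ```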
      • What's wrong is that the database and web server are probably not contending for CPU time anyway. They are both contending for disk and memory access.

        • You actually store your database and your webpages on the same hard drive?
          • That doesn't matter.

            They are contending for PCI bus bandwidth, disk controller bandwidth, and (like I said before) memory bandwidth. Either your needs are lightweight enough that storing your database and your web pages on the same disk is basically fine, or your needs are heavyweight enough that you'll get better performance for less by separating out the systems further.

    • Remember, two active applications, or two threads in an active application, does not mean those two processes or threads get to be piped to separate cores or processors.
      ...but it does mean that those two "active" applications (with X number of threads) can have two threads executing concurrently, one on each core (assuming little resource contention taking place elsewhere).

      (In point of fact, both were context switching in and out of both CPUs pretty regularly).
      So? If they both had the need of CPU resour
    • Re:Not really (Score:3, Insightful)

      by jevvim ( 826181 )
      In point of fact, both were context switching in and out of both CPUs pretty regularly.

      But it didn't have to be that way; most multiprocessor operating systems will allow you to bind processes to a specific set of processors. In fact, some mixed workloads (although, admittedly, rare) show significant improvement when you optimize in this way. I've even seen optimized systems where one CPU is left unused by applications - generally in older multiprocessor architectures where one CPU was responsible for ser

    • Re:Not really (Score:4, Interesting)

      by shawnce ( 146129 ) on Thursday March 23, 2006 @10:04PM (#14985157) Homepage
      In short: dual core, like most parallelized technologies, doesn't do nearly as much as you think it does, and won't until our compilers and schedulers get much better than they are now.

      Yeah, just like color correction of images etc. done by ColorSync (done by default in Quartz) on Mac OS X doesn't split the task into N-1 threads (when N > 1, N being the number of cores). On my quad-core system I see the time to color correct the images I display take less than 1/3 the time it does when I disable all but one of the cores. Similar things happen in Core Image, Core Audio, Core Video, etc. ...and much of this is vectorized code to begin with (aka already darn fast for what it does).

      If you use Apple's Shark tool to do a system trace you can see this stuff taking place and the advantages it has... especially so given that I as a developer didn't have to do a thing other than use the provided frameworks to reap the benefits.

      Don't discount how helpful multiple cores can be now with current operating systems, compilers, schedulers and applications. A lot of tasks that folks do today (encode/decode audio, video, images, encryption, compression, etc.) deal with stream processing and that often can benefit from splitting the load into multiple threads if multiple cores (physical or otherwise) are available.
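
      As a loose illustration of the N-1 split described above (a generic sketch, not Apple's ColorSync code; the per-byte "correction" is a made-up stand-in), the snippet chops a buffer into one chunk per spare core and processes the chunks in parallel:

      ```python
      # Minimal N-1 worker sketch: split a buffer across (cores - 1) processes.
      import os
      from concurrent.futures import ProcessPoolExecutor

      def fake_color_correct(chunk):
          # Stand-in for per-pixel work such as color correction.
          return bytes((b * 7) % 256 for b in chunk)

      def process_image(data):
          workers = max(1, (os.cpu_count() or 1) - 1)   # the "N-1 threads"
          size = -(-len(data) // workers)               # ceiling division
          chunks = [data[i:i + size] for i in range(0, len(data), size)]
          with ProcessPoolExecutor(max_workers=workers) as pool:
              return b"".join(pool.map(fake_color_correct, chunks))

      if __name__ == "__main__":
          out = process_image(os.urandom(1_000_000))
          print(len(out))
      ```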
      • by Space cowboy ( 13680 ) * on Friday March 24, 2006 @12:13AM (#14985715) Journal
        If you look at the way most OSX apps are designed, it's easy to multi-thread them. Cocoa pretty much imposes a model/view/controller pattern, and when your model manipulation is separate from your UI, it's pretty simple to spawn a background thread to calculate long tasks, or adopt a divide & conquer approach.

        The other nice thing they have is the Accelerate.framework - if you link against that, you automatically get the fastest possible approach to a lot of compute-intensive problems (irrespective of architecture), and they put effort into making them multi-CPU friendly.

        Then there's Xcode, which automatically parallelises builds to the order of the number of CPUs you have. If you have more than one Mac on your network, it'll use distcc to (seamlessly) distribute the compilation. I notice my new Mac Mini is significantly faster than my G5 at producing PPC code. GCC is a cross-compiler, after all...

        And, all the "base" libraries (Core Image, Core Video, Core Graphics etc.) are designed to be either (a) currently multi-cpu aware, or (b) upgradeable to being multi-cpu aware when development cycles become available.

        You get a hell of a lot "for free" just by using the stuff they give away. This all came about because they had slower CPUs (G4s and G5s) but they had dual-proc systems. It made sense for them to write code that handled multi-CPU stuff well. I fully expect the competition to do the same now that dual-CPU is becoming mainstream in the Intel world, as well as in the Apple one...

        Simon
    • What are you talking about? That task bouncing problem you mentioned was fixed in the 2.6 kernel and wasn't really a major problem in 2.4 kernels.

      If, though it's not likely, your boss's web server and DBMS were CPU-bound, then without a doubt he'd see better performance on two cores with any modern scheduler worth its bits.

      And yes, they would be running on one core each.

  • 40 Mg? (Score:4, Funny)

    by EXMSFT ( 935404 ) on Thursday March 23, 2006 @09:35PM (#14985003)
    Even my oldest hard drives weighed more than that.

    He may be an old timer - but I would think even the oldest old timer knows that MB = Megabyte...
  • by Sycraft-fu ( 314770 ) on Thursday March 23, 2006 @09:38PM (#14985023)
    In general, for office productivity type stuff, processor speed isn't much of a problem. We find that older CPUs like 1.5GHz P4s are still nice and responsive when loaded with plenty of RAM, and we still use them. Office use (like Word, Excel, e-mail, etc.) is a poor benchmark by which to decide how useful a given level of power is, since it usually lags way behind other things in what it needs. I mean, an office system also works fine with an integrated Intel video card, but I can think of plenty of things, and not just games, that benefit from or mandate a better one.

    Dual cores are useful in business now for some things; a big one I want one for is virtual computers. I maintain the images for all our different kinds of systems as VMs on my computer. Right now, it's really only practical to work on one at a time. If I have one ghosting, that takes up 100% CPU. Loading another is sluggish and just makes the ghost take longer. If I had a second core, I could work on a second one while the first one sat ghosting. It also precludes me from doing much that's intensive on my host system; again, that just slows the VM down and makes the job take longer.
    • I mean an office system also works fine with an integrated Intel video card, but I can think of plenty of things, and not just games, that benefit or mandidate a better one.

      I'm curious what. Intel video works fine for most sorts of 2D graphics or video applications (Photoshop, etc.), and for professional 3D, you want a professional card. I guess what I'm getting at is that there's very little need for a consumer Nvidia/ATI card in a business system other than for games.
    • Another concern is that a company does not buy a computer for a year. They buy something for 3 years. While I'd like to say we could save money by picking up some $500 whitebox computers, I can't. We'd be buying them every year. As it stands now, we buy the top-of-the-line Dell every 3 years. We may pay $5000 per box, but at least we get something that will still be usable in 3 to 5 years. Not to mention 24/7 support.

      On top of all that, company software changes regularly. We may go through a few iter
    • Let's just not forget that people are running a three-to-five-year-old office suite atop a five-year-old operating system.

      This stuff was made for sub-gigahertz CPUs with less than half a gigabyte of RAM.
  • by NitsujTPU ( 19263 ) on Thursday March 23, 2006 @09:38PM (#14985027)
    My goodness. I wonder often why people want nice new computer hardware at all. I, personally, am happy with my 8080. People who want new, fast computers are such idiots. Look who's laughing now. My computer only cost me $10, and I can do everything that I want on it.
  • by Hyram Graff ( 962405 ) on Thursday March 23, 2006 @09:39PM (#14985031)
    Multiple core systems are a boon for anyone who runs multiple processes simultaneously and/or have a lot of services, background processes and other apps running at once.

    In other words, it sounds like it's perfect for all those people who wanted to get another processor to run their spyware on but couldn't afford the extra CPU before now.

  • 1996 Called (Score:5, Insightful)

    by wideBlueSkies ( 618979 ) * on Thursday March 23, 2006 @09:44PM (#14985047) Journal
    It wants to know why we need Pentiums on the desktop. Why isn't a 486 DX fast enough?

    wbs.
  • by MECC ( 8478 ) * on Thursday March 23, 2006 @09:45PM (#14985054)
    They'll want them. Perhaps 'necessary' is not as relevant as 'desired'. Or 'Halo'.
  • nope (Score:3, Insightful)

    by pintpusher ( 854001 ) on Thursday March 23, 2006 @09:46PM (#14985060) Journal
    [out-of-context quote] prevailing technology is excessive.[/out-of-context quote]

    I think it's been said for years that the vast majority of users need technology at around the 1995 level or so and that's it. Unless of course you're into eye-candy [slashdot.org] or need to keep all your spyware up and running in tip-top condition. Seriously though, you know it's true that the bulk of business use is typing letters, contracts, whatever; a little email; a little browsing; and a handful of spreadsheets. That was mature tech 10 years ago.

    I run Debian on an Athlon 1700 with 256 megs and it's super snappy. Of course, I use wmii and live by K.I.S.S. Do I need dual-core multi-thread hyper-quad perplexinators? Nope.

    I know. I'm a luddite.
  • by Loopy ( 41728 ) on Thursday March 23, 2006 @09:46PM (#14985062) Journal
    Really, consider the average business PC user. Outside of folks who have large development environments, do video/graphics/audio work, or work on large software projects (such as games), most people really do not need 80GB hard disks. If you DO need more than that, you probably are quickly getting to the point of being able to justify storing your data on a file server. My unit at work only has 30GB on it, and that includes several ghost images of the systems I'm running QA on. Sure, grouse about Microsoft code bloat all you want, but it doesn't take up THAT much HDD space.

    Sweeping generalizations are rarely more than "Yeah, me too!" posts. /rolleyes
    • by MyLongNickName ( 822545 ) on Thursday March 23, 2006 @10:08PM (#14985180) Journal
      Agreed. In fact, in any kind of multi-person office (or single for that matter), only PC software should be on the hard drive. No files. Anything of any importance should be saved to a network.

      Whenever work has to be done on one of the office PCs, we do not give you the opportunity to transfer stuff off before we move it out. Lost a file? Go ahead, complain... you'll get written up for violating corporate policy.

      Personal files? While discouraged, each user gets so much private space on the network.
      • by Anonymous Brave Guy ( 457657 ) on Thursday March 23, 2006 @10:47PM (#14985343)
        Agreed. In fact, in any kind of multi-person office (or single for that matter), only PC software should be on the hard drive. No files. Anything of any importance should be saved to a network.

        That's nice. I've got about 2GB of automated tests I need to run before I make each release of new code/tests I write to source control. Running these from a local hard drive takes about 2 hours. Running them across the network takes about 10 hours, if one person is doing it at once. There are about 20 developers sharing the main development server that hosts source control etc. in my office. Tell me again how having files locally is wrong, and we should run everything over the network?

        (Before you cite the reliability argument, you should know that our super-duper mega-redundant top-notch Dell server fell over last week, losing not one but two drives in the RAID array at once, thus removing the hot-swapping recovery option and requiring the server to be taken down while the disk images were rebuilt. A third drive then failed during that, resulting in the total loss of the entire RAID array and the need to replace the lot and restore everything from back-ups. Total down-time was about two days for the entire development group. In case you're curious, they also upgraded some firmware in the RAID controller to fix some known issues that may have been responsible for part of this chaos. No, we don't believe three HDs all randomly failed within two days of each other, either.)

        Fortunately, we were all working from local data, so most of us effectively had our own back-ups. However, this didn't much help since everything is tied to the Windows domain, so all the services we normally use for things like tracking bugs and source control were out anyway. We did actually lose data, since there hadn't been a successful back-up of the server the previous night due to the failures, so in effect we really lost three days of work time.

        All in all, I think your "store everything on the network, or else" policy stinks of BOFHness, and your generalisation is wholly unfounded. But you carry on enforcing your corporate policy like the sysadmin overlord you apparently are, as long as you're happy for all your users to hold you accountable for it if it falls apart when another policy would have been more appropriate.

    • Business users? Yes, if they use more than 30GB on their computers, they are (probably) doing something seriously wrong.

      On my home computer, however, I have over 500GB storage, and all but 4GB of it is full.
  • by tlambert ( 566799 ) on Thursday March 23, 2006 @09:48PM (#14985073)
    "...getting a couple [for the executives]..."

    I can't tell you how many times I've seen engineers puttering along on inadequate hardware because the executives had the shiny, fast new boxes that did nothing more on a daily basis than run "Outlook".

    Just as McKusick's Law applies to storage - "The steady state of disks is full" - there's another law that applies to CPU cycles, which is "There are always fewer CPU cycles than you need for what you are trying to do".

    Consider that almost all of the office/utility software you are going to be running in a couple of years is being written by engineers in Redmond with monster machines with massive amounts of RAM and 10,000 RPM disks so that they can iteratively compile their code quickly, and you can bet your last penny that the resulting code will run sluggishly at best on the middle-tier hardware of today.

    I've often argued that engineers should have to use central, fast compilation software, but run on hardware from a generation behind, to force them to write code that will work adequately on the machines the customers will have.

    Yeah, I'm an engineer, and that applies to me, too... I've even put my money where my mouth was on projects I've worked on, and they've been the better for it.

    -- Terry
    • by woolio ( 927141 ) on Friday March 24, 2006 @02:30AM (#14986142) Journal
      I think the main problem today is that many programmers are still wet behind the ears and developing on the latest and greatest machines, combined with ineptitude/inexperience...

      For example, they can write code that unnecessarily makes lots of copies of arrays (no lazy evaluation, using pass-by-value), [unnecessarily] evaluates the same function/expression a huge number of times, badly misuses things like linked lists, or even just uses stupid implementations [bubble sort, etc.]...

      And they will never realize how slow these things are because they are testing/debugging with small datasets. Routine "X" may seem fast because it executes in 20ms (practically instant), but perhaps a more skilled person could write it with lower-order-complexity algorithms so it would only need 10ms... The disturbed reader may ask what's the point... Well, if you are on a computer that is 3X slower and using real-world input data that is 5X bigger, you WILL notice a huge difference between the two implementations!!!!

      And if you are like most of the public, you will blame the slowness on your own computer being out-of-date ---- and you will go and buy a new one.

      Plus, "time-to-market" pressures mean that companies probably tend toward releasing poorly designed & inefficient code, all in the name of the almighty buck. Fscking "Moore" created a self-fufilling prophesy that made things more cost efficient [for software development] to buy a better computer than to write a more efficient program.

      When computers stop getting faster, software will start getting a whole lot better...
  • by mad.frog ( 525085 ) <steven&crinklink,com> on Thursday March 23, 2006 @09:49PM (#14985081)
    Wally: When I started programming, we didn't have any of these sissy "icons" and "windows". All we had were zeros and ones -- and sometimes we didn't even have ones. I wrote an entire database program using only zeros.

    Dilbert: You had zeros? We had to use the letter "O".
  • Really simple math (Score:3, Insightful)

    by zappepcs ( 820751 ) on Thursday March 23, 2006 @09:57PM (#14985120) Journal
    The value of having faster hardware is simpler than all this cogitation would lead us to believe. If you spend 12 seconds of every minute waiting on something, that is 20% of your day. Decreasing this wait to 2 seconds greatly reduces waste: wasted man-hours, wasted resources, wasted power....

    It might seem trivial, but even with web-based services that are hosted in-house, that 12 seconds of waiting is a LOT of time. Right now, if I could get work to simply upgrade me to more than 256MB of RAM, I could reduce my waiting. If I were to get a fully upgraded machine, all the better... waiting not only sucks, it sucks efficiency right out of the company.

    As someone mentioned, doing average things on average hardware is not exactly good for the business. People should be free to do extraordinary things on not-so-average systems.

    Each system and application has a sweet spot, so no single hardware answer is correct, but anything that stops or shortens the waiting is a GOOD thing...

    We all remember that misquote "512k is enough for anybody" and yeah, that didn't work out so well. Upgrades are not a question of if, but of when... upgrade when the money is right, and upgrade so that you won't have to upgrade so quickly. Anyone in business should be thinking about what it will take to run the next version of Windows when it gets here... That is not an 'average' load on a PC.
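
    As a back-of-the-envelope check of the waiting-time argument above, here is a tiny sketch; the work-year length and hourly cost are made-up assumptions for illustration, not figures from the comment:

    ```python
    # Rough waiting-time arithmetic. The constants below are assumptions.
    WORK_HOURS_PER_YEAR = 2000
    HOURLY_COST = 50           # hypothetical fully loaded cost of an employee

    def lost_hours(wait_seconds_per_minute):
        return WORK_HOURS_PER_YEAR * (wait_seconds_per_minute / 60)

    before, after = 12, 2      # seconds of waiting per working minute
    saved = lost_hours(before) - lost_hours(after)
    print(f"hours reclaimed per year: {saved:.0f}")             # ~333 hours
    print(f"value at ${HOURLY_COST}/hour: ${saved * HOURLY_COST:,.0f}")
    ```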
    • The point of TFA was not against faster hardware in general; it was about dual-core CPUs in particular. Chances are that those 12 sec/min you spend waiting are due to a lack of RAM, a slow hard drive, or the memory or system bus; usually the processor is the most wasted resource in a typical desktop PC, because most of the time it stays idle, waiting for I/O-bound programs. A faster CPU, all else being equal, won't improve performance that much, and neither would a dual-core CPU.
  • Oh please. TFA is just another knucklehead pontificating and rehashing the same old tired argument that has gone on for, oh, I don't know, twenty years or better. I can no longer count how many times this same old argument has been made every time a new technology comes out. It's the same bull crap nonsense that started way back when 8K of core memory (you remember that, right?) was deemed to be plenty. And I'm sure it goes even further back in time.

  • by gstoddart ( 321705 ) on Thursday March 23, 2006 @09:59PM (#14985130) Homepage
    In my experience, and I'm a software developer so take that with a grain of salt, the vast majority of people will get more performance from more memory than more CPU speed.

    I'm almost never CPU-bound if I have enough memory. If I don't have enough memory, I get to watch the machine thrash, and it crawls to a halt. But then I'm I/O-bound on my hard drive.

    Dual-CPU/dual-core machines might be useful for scientific applications, graphics, and other things which legitimately require processor speed. But for Word, IM, e-mail, a browser, and whatever else most business users are doing? Not a chance.

    Like I said, in my experience, if most people would buy machines with obscene amounts of RAM and not really worry about their raw CPU speed, they would get far more longevity out of their machines.

    There just aren't that many tasks for which you meaningfully need faster than even the slowest modern CPUs. If you're doing them, you probably know it; go ahead, buy the big-dog.

    Repeat after me... huge whacking gobs of RAM solve more problems than raw compute power. Always have.
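
    In the same spirit, a quick and admittedly crude way to check whether a box is starved for RAM rather than CPU (again a sketch assuming the third-party psutil package, with arbitrary thresholds):

    ```python
    # Crude bottleneck check: is this machine thrashing (needs RAM) or pegged
    # (needs CPU)? The threshold values are arbitrary illustration choices.
    import psutil

    ram = psutil.virtual_memory().percent
    swap = psutil.swap_memory().percent
    cpu = psutil.cpu_percent(interval=1.0)

    print(f"RAM used: {ram}%   swap used: {swap}%   CPU: {cpu}%")

    if swap > 25 and cpu < 50:
        print("Likely memory-bound: the box is thrashing, buy RAM first.")
    elif cpu > 90:
        print("Likely CPU-bound: more or faster cores would actually help.")
    else:
        print("Neither RAM nor CPU looks like the bottleneck right now.")
    ```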
  • The big ole bag of ass that will become Vista someday is going to make good use of that 2nd core. The current preview version loves all the CPU, RAM, and Video processing you can throw at it.

    Where I work, we're starting to use VMware or Virtual PC to isolate troublesome apps so one crappy application doesn't kill a client's PC. Virtualization on the desktop will expand to get around the universal truth that while you can install any Windows application on a clean Windows OS and make it run, installing apps
  • ... from the "well-duh" department.

  • Since when.... (Score:3, Insightful)

    by countach ( 534280 ) on Thursday March 23, 2006 @10:07PM (#14985170)
    Since when was "legitimate business purposes" part of the equation? Many business users just using office and email could use a five-year-old PC. But the industry moves on. Lease agreements terminate. The upgrade cycle continues its relentless march. Smart businesses could slow their upgrades down. Typical businesses will keep paying Dell and keeping HP's business model afloat.
  • Obligatory Quotes: (Score:3, Interesting)

    by absurdist ( 758409 ) on Thursday March 23, 2006 @10:07PM (#14985173)
    "640 KB should be enough for anybody."
    -Bill Gates, Microsoft

    "There is no reason why anyone would want a computer in their home."
    -Ken Olsen, DEC
  • by mfifer ( 660491 ) on Thursday March 23, 2006 @10:07PM (#14985175)
    in our financial world, users often have several spreadsheets open (deeply linked to other spreadsheets), Bloomberg, Outlook, several instances of IE, antivirus software and antispyware software running in the background... you get the idea.

    the more memory and horsepower I can provide them, the better experience they have with their machines. and empirically it seems that underpowered machines crash more; they sure generate more support calls (app X is slooowwww!!!)

    same goes for gigabit to the desktop; loading and saving files is quicker and those aforementioned linked spreadsheets also benefit from the big pipes...

    IF one can afford it, and the load is heavy as is our case, every bit of power one can get helps...

    -=- mf
  • by ikarys ( 865465 ) on Thursday March 23, 2006 @10:07PM (#14985177)
    I will benefit from multi-core.

    I'm perhaps not a typical business user, but what business wants is more concurrent apps and more stability. Less hindrance from the computer, and more businessing :)

    Currently, I have a hyperthreaded processor at both home and work. This has made my machine immune to some browser memory-leak vulnerabilities, whereby only one of the threads hits 50% CPU. (Remember just recently there was an exploit to open Windows Calc through IE? I could only replicate this on the single-core chips.)

    Of course, hyperthreading is apparently all "marketing guff", but the basic principles are the same.

    I've found that system lockups are less frequent, and a single application hogging a "thread" does not impact my multitasking as much. I quite often have 30-odd windows open... perhaps four Word docs, Outlook, several IEs, several Firefoxes, perhaps an Opera or a few VNC sessions, and several Visual Studios.

    On my old single-thread CPU this would cause all sorts of havoc, and I would have to terminate processes through Task Manager and pray that my system would be usable without a reboot. This is much less frequent with HT.

    With multi-core, I can foresee the benefits of HT with the added benefit of actually having two cores as opposed to two pseudo-cores.

    For games, optimised code should be able to actively run over both cores. This may not be so good for multi tasking, but should mean that system slowdown in games is reduced as different CPU intensive tasks can be split over the cores, and not interfere with each other.

    (I reserve the right to be talking out of my ass... I'm really tired)
  • Unbelievable (Score:3, Insightful)

    by svunt ( 916464 ) on Thursday March 23, 2006 @10:11PM (#14985201) Homepage Journal
    I really can't believe this debate is ongoing. It's really the same thing, as has been pointed out above, as any "I don't need it this week, so it's just not important, period" argument, which can be traced back some decades now. For some of us, it's worth the early-adopter price; for the rest, it's worth waiting until it's a much cheaper option. But as we all should know by now, what Gateway giveth, Gates taketh away. As the new hardware becomes available, software developers will take advantage of it. The only question is: how long can you hold out while the price comes down? It'll be a different answer for all of us. There is no definable "business user" to make such generalisations about accurately.
  • The CEO will insist on having an 8000GHz, 256-core machine with 12TB of RAM and an infinity-plus-one hard drive, so he can feel more important.

    Even though all he uses for work are Outlook and Word, neither of them well, and installs every ActiveX control that promises free porn.
  • by Quirk ( 36086 ) on Thursday March 23, 2006 @10:48PM (#14985348) Homepage Journal
    I remember when Cooperative Multitasking [wikipedia.org] was the buzzword of the day. In the '80s, cooperative multitasking was going to give Microsoft users the ability to run multiple programs... from the Wikipedia page: "by (programs) voluntarily ceding control to other tasks at programmer-defined points within each task."

    Continued from the Wikipedia page... "Cooperative multitasking has the advantage of making the operating system design much simpler, but it also makes it less stable because a poorly designed application may not cooperate well, and this often causes system freezes."

    Cooperative multitasking was the programming equivalent of nice guys finishing last. I spent big chunks of my life watching that little hourglass turn and turn and turn as each and every program grabbed as many resources as possible while trying to freeze out every other program.

    Concerned that dual cores are too much resource for today's programs? Not to worry: big numbers of software developers are currently gearing up to play fast and loose with every cycle dual cores have to offer.

    When I had my first 286, an engineer friend of the family came over and I jumped at the opportunity to show off what was then a $3200 kit. He liked it, but said he stayed with his XT because he found he could always find other work to do while his numbers were being crunched. Sound, mature reasoning.

  • by eh2o ( 471262 ) on Thursday March 23, 2006 @11:25PM (#14985528)
    Lam1969 writes "Hygeia CIO Rod Hamilton doubts that most business users really need 400 hp BMWs, yet the parking lot is full of them: 'Though we are getting a couple to try out the new Toyota Corolla, the need to acquire this new technology for legitimate business purposes is grey at best. The higher fuel consumption which improves driving performance is persuasive for regular speeders, but for the average business person there seems no need to drive that fast. In fact, with the steady increase in speeding tickets given to rich white people in spite of their obvious superior social status it might even be possible to argue that BMWs are just plain excessive.' Alex Scoble disagrees: 'A BMW is a boon for anyone who runs a business and/or has a lot of responsibility, important meetings and pointy hair. Are they worth it at $75000? No, but when you have a choice to drive a junky commuter or a slightly slower 1995 Tercel for 1/20th the price, you are better off getting the top of the line Beemer and that's where we are in the marketplace right now.' An old timer chimes in: 'I can still remember arguing with a sales person that the 20 Mpg BMW was really for inferior people and only the 40 Mpg vehicle was superior enough for those with the gumption to succeed in management. The feeling of smug satisfaction lasted perhaps a week, when my boss got a new 545i and trounced me on the highway'"

  • by Perdo ( 151843 ) on Friday March 24, 2006 @01:52AM (#14986048) Homepage Journal
    A couple seconds here and there, let's say 2 seconds in sixty.

    Now cut that to one second in sixty with a faster machine, ignoring multiple cores for now.

    Gain a day of work for every sixty.

    Six days of work a year.

    A week of extra work accomplished each year with a machine twice as fast.

    You are paying the guy two grand a week to do auto cad right?

    That two year old machine, because machine performance doubles every two years, just cost you 2 grand to keep, when a new one would have cost a grand.

    The real problem is, we are not to the point where you only wait for your computer 1 second in 60. It's 10 seconds in 60. It costs you $10,000 a year in lost productivity. $20,000 in lost productivity if the machine is 4 years old.

    That's why the IRS allows you to depreciate computer capital at 30% a year... because not only is your aging computer capital worth nothing, it's actually costing you money in lost productivity.

    Capital. Capitalist. Making money not because of what you do, but because of what you own. Owning capital that has depreciated to zero value, costing you expensive labor to keep, means that you are not a capitalist.

    You are a junk collector.

    Sanford and Son.

    Where is my Ripple? I think this is the big one.

    Dual core? That is just the way performance is scaling now.

    The best and brightest at AMD and Intel cannot make the individual cores any more complex and still debug them. No one is smart enough to figure out the tough issues involved in 200 million core-logic transistors. So we are stuck in the 120 to 150 million range for individual cores.

    Transistor count doubles every two years.

    Cores will double every 2 years.

    The perfect curve will be to use as many of the most complex cores possible in the CPU architecture.

    Cell has lots of cores, but they are not complex enough. Too much complex work is offloaded to the programmer.

    Dual, quad, etc., at 150 million transistors each, will rule the performance curve, keeping software development as easy as possible by still having exceptionally high single-thread performance while still taking advantage of transistor-count scaling.

    Oh, and the clock speed/heat explanation for dual cores is a myth. It's all about complexity now.
    • How much time do I wait for my computer at work?

      Daily: I wait for 15 minutes for some corporate machination called "codex" which:
      Ensures I have the mandatory corporate agitprop screen saver
      Changes the admin password just in case I cracked it yesterday
      Makes sure I haven't changed any windows files
      Scans my system for illicit files or applications

      Twice Weekly: I wait for over an hour for Symantec Anti-virus to scan a 40 gi

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...