AMD Rumored To Announce Layoffs, New Hardware, ARM Servers On Monday 81

MojoKid writes "After its conference call last week, AMD is jonesing for some positive news to toss investors and is planning a major announcement on Monday to that effect. Rumor suggests that a number of statements may be coming down the pipe, including the scope of the company's layoffs, new CPUs based on Piledriver Opterons, and possibly an ARM server announcement. The latter would be courtesy of AMD's investment in SeaMicro. SeaMicro built its business on ultra-low power servers and their first 64-bit ARMv8 silicon is expected in the very near future. However, there's always a significant lag between chip announcements and actual shipping products. Even if AMD announces Monday, it'd be surprising to see a core debut before the middle of next year."
  • by faragon ( 789704 ) on Saturday October 27, 2012 @06:44AM (#41788079) Homepage
    There is no justification in this world for such a massive, widespread deployment of power-hog Intel CPUs in servers: Intel's CPUs are only justified where maximum per-thread performance matters. And that is unnecessary for 99.9% of server applications.
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      WTF! This should be breaking news. When did Intel's 22nm processors exceed the performance of IBM's 45nm POWER7 or 32nm POWER7+?

      • by CajunArson ( 465943 ) on Saturday October 27, 2012 @10:05AM (#41788845) Journal

        Intel's 22nm transistors certainly do. The overall chips don't, because the price differential between even a top-line 8-core Sandy Bridge Xeon chip/system and the POWER7 chips/systems that actually have the high-end performance you're talking about is similar to the price differential between the chip in my cellphone and that high-end Xeon.

        I know guys that do CPU design for IBM and they will flat out tell you that Intel has a better process. The difference is that IBM is making chips for million dollar+ servers with huge legacy needs in markets where even Itanium isn't trying to compete. At that point, you can afford to design CPUs with 200+ watt TDPs and exotic liquid cooling systems that are made in tiny quantities compared to what Intel & AMD churn out.

        • by fatphil ( 181876 )
          Indeed. All you need to do is look at the SPEC and TPC benchmarks, and the system descriptions/costs. As much is spent on cooling as on the rest of the system put together.
    • by Anonymous Coward on Saturday October 27, 2012 @07:01AM (#41788139)

      Of course there is. The justification is that other options are not as good, in many cases.

      Companies like Google and Facebook, for example, have no real compatibility issues or any particular ties to x86, and they are certainly interested in the most price-efficient option, taking into account both acquisition and running costs. Yet they have not significantly moved away from the x86 architecture.

      I'm not saying it won't happen, but as yet x86 devices still hold their own in low end and mid range servers.

      • CALXEDA, MARVELL (Score:4, Interesting)

        by Anonymous Coward on Saturday October 27, 2012 @08:34AM (#41788417)

        Wait for the newer products coming out from Calxeda, Marvell, etc. Their newest chips are strong contenders for the server market, featuring multiple cores plus an extra core for management, and fail-in-place capability. If they're any indicator of the performance and capabilities to come, they'll ultimately make their way into data centers and the emerging cloud. This is a good thing, since ARM is less power-hungry, and thermal output is a prime concern for data centers.

        CHANGE is good - finally, we'll see the Intel x86 goliath defeated. Remember, if it hadn't been for AMD's Opteron holding Intel's feet to the fire at one point, Intel wouldn't have taken the trouble to improve its chips soon after. Likewise, ARM is injecting new and intense competition into the marketplace, which the rest of us will all benefit from.

        • by Anonymous Coward

          Wait for the newer products to come out? What? Who are you talking to? The people who need servers right now cannot wait. And right now, Intel's x86 is the best option for much of the market, certainly far beyond the minuscule niche ARM currently holds in the server market.

          I repeat: certainly things will change. One day even ARM might dominate the low end server market (it won't happen with the first generation of aarch64 CPUs and servers, but it may happen one day, and by the way if it does happen, th

        • While we are all waiting for newer ARM products, Intel isn't going to wait. They'll be working just as hard to get better power/performance ratios, and there's no reason to assume that they'll fail to keep up.
        • Comment removed based on user account deletion
          • Shh, you're bursting the bubbles of the angry ARM fans. ARM everywhere!!!eleven!
            • Comment removed based on user account deletion
              • it's the "Anybody but M$!" bullshit

                Nobody in this thread has mentioned Microsoft with respect to this. The real reasons for ARM fanboyism are:

                1. Anti-Intel: You are forgetting that, like Microsoft, Intel has also collected "anti"-fanboys along the path to its success. You yourself seem to be a card-carrying member of that society. Cheering for ARM is currently quite a good proxy for anti-Intelism, since Intel's success is negatively correlated with ARM's for the moment.

                2. Good old computer science: the ARM (not ARM64) architecture is simply more

    • by Anonymous Coward

      Many server tasks require significant compute power - even web serving, when dynamic content is involved (which it usually is these days), is often implemented in environments like PHP and Node.js that aren't terribly efficient. Also, the ubiquitous use of virtualisation means that "excess" power doesn't go to waste; rather, it means you can host 100 VMs on the same box.

      Performance per watt does matter, and it is the reason Opterons have historically done well in this segment, but ARM isn't fast enough yet.

    • by Anonymous Coward

      It will probably go the other way. Intel ended the race for maximum GHz years ago, and it has finished the race on core count too - it probably won't go beyond 8 cores. Now all the muscle is in the race for performance per watt, and in a couple of years Intel will have a processor that does to ARM what Core did to AMD in the past decade. The other factor will be Surface Pro: in two months, thirty years' worth of x86 code will be available to the tablet market.

    • by AaronW ( 33736 ) on Saturday October 27, 2012 @12:01PM (#41789545) Homepage

      No it won't. Having done some serious looking into ARM64, I can say it is almost as much of a mess as x86, and in many ways it is worse.

      ARM64 has almost nothing in common with ARM32. All of the things that make ARM "ARM" - such as conditional execution and having the program counter be a general-purpose register - are gone in ARM64. The instruction encoding is a complete mess and is totally incompatible with ARM32.

      Most RISC processors are fairly clean between 32 and 64-bit instructions. For example, MIPS and PPC just add new 64-bit instructions to the instruction set. ARM is not like this. With ARM, everything down to the most fundamental level changes in 64-bit mode. There is zero compatibility between the two.

      As a developer I certainly am not looking forward to ARM64. In the work I do, I periodically need to look at hex output and figure out which instructions are being executed. On MIPS and PowerPC this is trivial. It is not on ARM, where the instruction encoding is a complete mess, far worse than x86 - see the sketch below. It is as if the ARM64 instruction encoding was designed to be obfuscated.

      I think the big ARM64 push comes down to the fact that it's not Intel, and that Microsoft wants to use it to pressure Intel. There are far cleaner 64-bit processors out there, including MIPS and PowerPC.

      For the record, I work on bootloaders for MIPS64 processors.
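
For the curious, here is roughly what that break looks like at the bit level. A minimal C sketch - the two encodings were checked against the ARM architecture reference manuals, but the field extraction and naming here are purely illustrative:

```c
/* A minimal sketch of the A32/A64 encoding break: the same logical
 * operation ("add two registers") shares no encoding structure. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a32_add = 0xE0800001; /* ARM32:   ADD r0, r0, r1 */
    uint32_t a64_add = 0x8B010000; /* AArch64: ADD x0, x0, x1 */

    /* Every ARM32 instruction spends its top 4 bits on a condition
     * code - the conditional execution the parent mentions.
     * 0xE means "always execute". */
    printf("A32 condition field: 0x%X\n", (unsigned)(a32_add >> 28));

    /* AArch64 has no per-instruction condition field at all; in this
     * format, bit 31 is the 32/64-bit width flag ("sf") instead. */
    printf("A64 sf (width) bit:  %u\n", (unsigned)((a64_add >> 31) & 1u));
    return 0;
}
```

The point isn't which layout is prettier; it's that nothing carries over, so a disassembler, a debugger, or a pair of trained eyeballs has to start from scratch.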

      • by yupa ( 751893 )

        They already started the mess with Thumb-2.

        The original ARM32 encoding was clean (the stack pointer was just another general-purpose register); now, with Thumb-2 and ARM64, we have dedicated POP, PUSH, and RET instructions...

    • Re: (Score:2, Informative)

      Are you high? Ivy Bridge is more power-efficient than Piledriver by a significant margin, and that was already the case in the generations before Ivy Bridge and Bulldozer. Xeon is used more often than Opteron - face it. Xeon has better performance and power efficiency, which is key when installing hardware in a data center.
      • Piledriver gets a lot better when you're not trying to run it at 4 GHz.
        • Yeah, it gets more efficient at the cost of performance. But why settle for a chip that you need to downclock for efficiency (AMD Opteron) when you can just get a more powerful AND more efficient CPU (Intel Xeon) from the beginning?
    • Comment removed based on user account deletion
    • Well, right now Intel's server CPUs are way more efficient in terms of W/MIPS and similar metrics. Ivy Bridge has maintained that, and Haswell will too.
      Yes, that might change if somebody started making ARM CPUs on the latest process tech, but as long as Intel has a full node's worth of advantage, it won't.

    • You are full of rubbish. Good luck virtualizing servers on your shitty little ARM processors, rocket surgeon.

      If you want to serve up 10,000,000,000 "Hello World" applications, ARM servers are great, I'll give you that...

  • I remember (Score:5, Insightful)

    by MindPrison ( 864299 ) on Saturday October 27, 2012 @06:45AM (#41788081) Journal

    when AMD was the new kid on the block: super-cheap processing power for those of us who wanted power without the money. I was a student back then. AMD could be overclocked out of this world, while Intel cost three times as much and wasn't nearly as overclockable.

    It always saddens me to see layoffs among the competitors, because it only leads to more expensive mainstream products and less innovation - everyone going the safe way, saving, reducing costs, spending less on innovation and experimentation.

    We need the confidence back.

    • by Anonymous Coward

      > super-cheap processing power for those of us who wanted power without the money

      Ok, now you are exaggerating juuust a little bit. Sure, AMD was cheaper, but "super cheap"? No.

      BTW, the biggest overclocker of the day was Intel's 300 MHz Celeron. It went to 450 MHz on air cooling.

      • by sjwt ( 161428 )

        and it was still a POS. Celerons, even overclocked, were junk.

        • There were two differences between the Celeron and the Pentium II: it had an external bus speed of 66MHz as opposed to 100MHz, and it had a level 2 cache half the size, but one which ran twice as fast. Overclocking raised the external bus speed to 100MHz and reduced the cost of cache misses. L2 cache misses were more common (the cache was smaller), but L1 cache misses were cheaper (the L2 cache was faster). It was also possible with a slight tweak to run two in an SMP configuration with some quite cheap motherboards: a dual processor 300MHz Celeron overclocked to 450MHz beat pretty much anything else in terms of price/performance.
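
That trade-off can be put into rough numbers with the textbook average-memory-access-time formula. A back-of-envelope C sketch - every latency and miss rate below is invented purely for illustration, not measured from real hardware:

```c
/* Back-of-envelope AMAT comparison: small-but-full-speed L2
 * (Celeron A style) vs. large-but-half-speed L2 (Pentium II style).
 * All numbers are invented for illustration only. */
#include <stdio.h>

/* AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 miss rate * DRAM time) */
static double amat(double l1_hit, double l1_miss,
                   double l2_hit, double l2_miss, double dram) {
    return l1_hit + l1_miss * (l2_hit + l2_miss * dram);
}

int main(void) {
    /* Small full-speed L2: cheap L2 hits, more L2 misses. */
    double small_fast = amat(1.0, 0.05, 4.0, 0.20, 70.0);
    /* Large half-speed L2: expensive L2 hits, fewer L2 misses. */
    double big_slow   = amat(1.0, 0.05, 8.0, 0.10, 70.0);
    printf("small fast L2: %.2f cycles/access\n", small_fast);
    printf("big slow L2:   %.2f cycles/access\n", big_slow);
    return 0;
}
```

With these made-up numbers the two designs land within a fraction of a cycle of each other, which is exactly why raising the bus speed (cheaper misses all the way down) mattered so much.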
          • by dbIII ( 701233 )
            I had a couple of those on a two-socket board, and they were effectively rebadged Pentium IIs to cover a shortage of the Celeron A (the one marked with a dot).
          • It was also possible with a slight tweak to run two in an SMP configuration with some quite cheap motherboards: a dual processor 300MHz Celeron overclocked to 450MHz beat pretty much anything else in terms of price/performance.

            I had an overclocked Abit BP6 running Linux back in the day. At the time I had a SPARCstation 20 and then a Sun Ultra 2 at work. The Abit board was great as a home workstation for those of us without the deep pockets for a RISC workstation in the den.

      • Comment removed based on user account deletion
      • No, the biggest overclocker of the day was a 333 MHz Celeron. It OC'd to 966 stable on air with a good heat sink, and maybe a table fan if you were running it hard :) Clock speeds like that chip produced weren't seen again till the Pentium 3/Thunderbird lines came out.

    • Re: (Score:3, Insightful)

      by Taco Cowboy ( 5327 )

      It was one misstep after another.

      AMD had had missteps before - and so had Intel.

      But the one difference that separates Intel and AMD is that Intel had a vision, and AMD did not.

      AMD, since the beginning, tried to copy Intel.

      When Intel was in the NOR flash business, AMD followed. Of course, Intel had enough cash reserves to pull out of that business and still fund its R&D.

      For AMD, the losses incurred on its NOR operation meant less money for R&D.

      Still, AMD did come out with

      • by Anonymous Coward

        On copying: it used to be a requirement of military contracts that every item had a second source (a copy). All semiconductor companies used to live in this environment, and it made for good competition and sharing. But we digress... ;)

      • Re:I remember (Score:5, Informative)

        by Rockoon ( 1252108 ) on Saturday October 27, 2012 @10:54AM (#41789091)

        But the one difference that separates Intel and AMD is that Intel had a vision, and AMD did not.

        Yes, the vision to screw AMD out of the market by paying off OEMs not to sell AMD chips, right when AMD was building several new fabs to add the capacity a market leader would have needed.

        ...and before you say it, Intel was *CONVICTED* of this. It's not just some anti-Intel hype.

      • While I prefer Intel CPUs and use them to build most of my systems, when it comes to copying, would you kindly remind me who copied whom on the x64 instruction set?

    • And Cyrix
    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
      • The problem isn't so much the module design in Bulldozer (I consider it very similar to Intel's HT, and the rest is hype by AMD's moronic marketers). It's mostly the speed and effectiveness of the memory interface, the L3 cache, and the instruction decoder. All of those are way behind what Intel has, leading to stalls, and if AMD could fix them the Bulldozer arch would be competitive. Clearly Piledriver only improved these slightly, given that IPC didn't improve much. Oh well. I'm still going to buy an 8350 as soon

        • Comment removed based on user account deletion
          • I agree.

            I do have a question though...why are you choosing the 8350 over the X6? Do you have a specific workload that can use 8 integer cores that doesn't have much if any use for floating point?

            Yes: A SW workstation mostly for compiling in VMs, running Linux.

      • by zixxt ( 1547061 )

        The problem is AMD's new "half core" design is a complete flop and is often just BARELY better than X6, and that is when you put X6 against the new X8, if you put them equal, X6 VS X6, then Phenom II wins.

        Half core? It's full cores with a shared FPU. Most CPUs/cores sold do not have an FPU at all; one does not need an FPU to be a CPU/core. It's like saying Intel's Sandy Bridge is a fake quad-core because there's only one GPU on board.

    • Re:I remember (Score:4, Informative)

      by Osgeld ( 1900440 ) on Saturday October 27, 2012 @08:13PM (#41793047)

      New kid on the block?

      They have been making microprocessors since 1975 starting with the 8080.

      They didn't just show up one day in the late '90s; they have had processors (among many other products) for every stage of the x86 game, besides the fact that they are only 10 months "younger" than Intel.

  • Why are layoffs announced in the title as if they were an awesome thing to have? I'm not sure where these jobs are going to be killed, but the bottom line is that people are going to be out on their asses; some won't find work again, and some will lose their homes. I just don't understand why celebrating layoffs (listing them next to two potentially nice things, new products) is something we should be compelled to do.

    • by Anonymous Coward

      It was not labelled as "nice". That is your misinterpretation of the title.

      The title was simply a statement: company A rumored to be doing X, Y & Z.

      Filling in the blanks: company SuperTech rumored to be making rockets, eating babies and making new cars.

      Besides, AMD is laying people off to save the company; it cannot afford these massive losses, it simply cannot. It's not a happy thing, it's a survival thing.

      • That is the same justification used for reducing the workforce in any situation. The company (not necessarily AMD) may be doing fine, but the uppity-ups found that they could be more profitable by eliminating X jobs through whatever means - automation, robotics, or brute productivity increases on the poor sods left behind to do the work of two people.

        Basic economics says that if there is no demand, there is no supply. If there is no supply, there is no company. Truncating workforces means there are fewer

        • by Kjella ( 173770 )

          The company (not necessarily AMD) may be doing fine, but the uppity-ups found that they could be more profitable by eliminating X jobs

          AMD is not by any stretch of the imagination doing fine. Last year after Q3 they had an operating income (note: not total income) of 297 million. This year they have a 634 million operating loss. That's a lot for a company with 4612 million in assets. What's worse is where they're going:

          a) Revenue is down - they sell less
          b) Gross margin is down - they make less per sale
          c) R&D is down - a little but they're behind already
          d) Accounts receivable is down - orders are down
          e) Inventory is piling up - can it be

          • by Kjella ( 173770 )

            AMD is not by any stretch of the imagination doing fine. Last year after Q3 they had an operating income (note: not total income) of 297 million. This year they have a 634 million operating loss.

            I realized that under GAAP rules their one-time payoff to GlobalFoundries earlier this year was counted as "Operating Cost, Other", and that was 703 million, so their daily operations are not that screwed. But in the last quarter they had a real operating loss; even AMD's "Adjusted EBITDA" was negative. Guess it's been too long since I last read their financial reports.

        • Comment removed based on user account deletion
  • by turgid ( 580780 ) on Saturday October 27, 2012 @07:59AM (#41788293) Journal

    I've worked for a few very large companies that have made huge redundancies among engineering staff as soon as projects were completed and ready to ship.

    The logic is pretty simple: there are great new products ready to go, and the cost base can be instantly reduced by letting go of thousands of staff, making profits much higher as a proportion of the cost base in the very short term (the next one to four quarters).

    The trouble is, you have to skate to where the puck is going, i.e. you have to be constantly developing new and better stuff to come out in a year to 18 months' time. If you don't have the R&D staff, you are in a tricky situation.

    I suppose the logic is that you can hire people back when you're out of the economic hole, but I've never seen that happen. What does happen is a continuation of the company's decline until it eventually gets bought out.

    Many of the people can't be hired back anyway, because they've moved on with their lives (retired, retrained, got new jobs). Do CEOs think that we little people sit around on our backsides all day worshipping their corporations and doing nothing except waiting for them to offer us jobs?

    When you let your institutional knowledge leave the building, it goes for good. MBAs don't understand this.

    • by Guppy ( 12314 ) on Saturday October 27, 2012 @09:57AM (#41788783)

      When you let your institutional knowledge leave the building, it goes for good. MBAs don't understand this.

      Maybe they do. There is some speculation that AMD management is prepping the company for a sale, and is thus mostly concerned with making the short-term numbers look good. From what I understand, AMD's x86 cross-licensing agreements with Intel do not transfer over to a new owner, so their ARM posturing may make sense in that light: the only buyers with both the cash and the need for anti-Intel IP would be interested in that field.

      An intriguing possibility is Apple. Now, Apple would never buy AMD for its x86 CPUs, as those have historically been more useful to Apple as a price-negotiation cudgel to get better deals from Intel. However, if Apple decides to finally make the jump to in-house CPU designs, then it starts to make sense - especially considering Apple's current patent paranoia.

      • AMD's x86 cross-licensing agreements with Intel do not transfer over to a new owner

        Uhh.. what?!

        AMD is a publicly listed company. Corporations are "sold" by purchasing a controlling # of shares.

        If the Intel agreement prohibits transferring it, then AMD will just remain a subsidiary of the new owner.

        So even after it's "sold", AMD will still be AMD, just with different shareholders.

    • Comment removed based on user account deletion
  • "SeaMicro built its business on ultra-low power servers and their first 64-bit ARMv8 silicon is expected in the very near future."

    If by "very near future" you mean late 2014 (optimistically, assuming TSMC can execute) then sure.

    People have been talking about how the A15 is going to be the second coming since 2009, and we are finally starting to see the very first real A15 parts show up on the market literally this month; it will be a long time before they are the majority of chips shipped in high-end sm

  • The 3D of servers. Hype with no substance. Gotta love those Wall Street Analysts.

    • I've used ARM servers for years, but I'm probably just delusional; they clearly don't exist and don't make any business sense [nas-central.org].
    • by Anonymous Coward

      Anyone who thinks ARM servers are a good idea has never used an ARM device as a server. I installed a photo server on my SheevaPlug. Processing 100 digital photos took _6 hours_. By contrast, even an old Celeron from 2007 clocked at 600 MHz with 512 MB of RAM was able to do the same task in about 3 minutes. I decided to repeat my experiment on a Palm Pixi I had hacked. I gave up after 8 hours.

      • Re:ARM Servers (Score:5, Informative)

        by TheRaven64 ( 641858 ) on Saturday October 27, 2012 @09:54AM (#41788773) Journal
        The SheevaPlug uses a CPU with no FPU, a feature that has been standard for quite a few years now on most ARM chips aimed at anything except the ultra-low end of the embedded market. If you're doing image processing using software floating point and expecting even vaguely reasonable performance, then you are an idiot.
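
To see the gap being described here, you don't even need different source code - just a different float ABI. A sketch, assuming an ARM GCC cross toolchain (the -mfloat-abi and -mfpu flags are standard GCC ARM options; the exact toolchain triple varies by distro):

```c
/* fp_bench.c - trivial FP kernel to expose soft-float overhead.
 * Build twice on an ARM toolchain and compare runtimes, e.g.:
 *   arm-linux-gnueabi-gcc -O2 -mfloat-abi=soft fp_bench.c -o soft
 *   arm-linux-gnueabi-gcc -O2 -mfloat-abi=hard -mfpu=vfpv3 fp_bench.c -o hard
 * With -mfloat-abi=soft, every FP operation becomes a library call;
 * with a hardware FPU and -mfloat-abi=hard, each is one VFP instruction. */
#include <stdio.h>

int main(void) {
    double acc = 0.0;
    /* One divide and one add per iteration: under soft-float each of
     * these becomes a call into libgcc's software FP routines. */
    for (long i = 1; i <= 10000000L; i++)
        acc += 1.0 / (double)i;
    printf("harmonic(1e7) = %.6f\n", acc);
    return 0;
}
```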
      • by lenski ( 96498 ) on Saturday October 27, 2012 @10:28AM (#41788945)

        Your comment is on target, given that ARM systems have a history of being both lightweight and, worse yet, inconsistently equipped with floating-point hardware. The consequence has been that application and package developers face a choice: either run on lots of hardware by avoiding any dependency on FP, or provide good performance by limiting their applicability to systems with that hardware. I do not know whether ARM can overcome that history in a bid for a place in the server marketplace.

        I expect that ARM's architects recognize the need for consistency, with the result that the ARMv8 64-bit spec is much more specific about what developers can count on, so they can use high-performance compiler settings consistently while still being sure their applications will run on all servers.

        This is a very important place where Intel's IA-32 and AMD's x86-64 won. Beginning with the i486 (not the SX), developers had a consistent set of compiler optimization choices providing "really good" performance, and anything above that baseline can be probed at run time (see the sketch below). Anyone wanting really kick-ass, custom-optimized performance is welcome to go with tightly customized, processor-specific compilation, as one might be able to justify in HPC.

        So the question is whether ARM's history of giving silicon implementers major freedom in selecting from among many options will leave a legacy of inconsistency, or whether they can get past that and enter a marketplace where consistency is required for success.

        BTW, as an embedded developer, I've found the flexibility of choosing silicon that's well-tuned to my device-specific needs to be very important.
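
For comparison, the x86 "consistent baseline" model pairs naturally with run-time feature probing: compile for the common denominator, then branch to tuned paths where extensions exist. A small sketch using GCC's x86-only built-ins (illustrative, not a full dispatcher):

```c
/* cpu_probe.c - the baseline-plus-detection pattern on x86.
 * Compile for the lowest common denominator, then select tuned code
 * only where the running CPU supports it. Uses GCC/Clang built-ins,
 * available on x86 targets only. */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init(); /* must run before __builtin_cpu_supports() */
    printf("SSE2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
    printf("AVX:  %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
    /* A real program would dispatch to, e.g., an AVX code path here
     * and fall back to the baseline implementation otherwise. */
    return 0;
}
```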

  • And the only welcome news from the business/consumer standpoint would be the immediate release of a Phenom III with 12+ cores - probably the only thing, even with a socket change, that would keep AMD relevant on the desktop after the Bulldozer fiasco (current reviews of Piledriver are very disappointing, with nothing but a minor speed bump).

    • (current reviews of Piledriver are very disappointing, with nothing but a minor speed bump).

      By what measure?

      I've looked at a number of reviews. In all but single-threaded performance it usually beats the i5, and it often matches, sometimes even beats, the i7, a much more expensive processor.

      In single-threaded performance, it's about 60-75% of the speed of an i5. Even with how much of my stuff is single-thread limited these days, I'd happily take that hit to have i7 performance at a fraction of the cost.
