well, hopefully we'll get to a point where the idiot who didn't stop will pay for the treatment and never be allowed to drive a car again
usually not until after they've killed someone
if everyone on the road stopped when they didn't know wtf they were doing, we'd have a lot fewer dead people. the real problem is that people are asshats, so they speed up and drive worse when they're confused or upset. it may not be as emotionally satisfying to stop and think, but it's actually the most sensible thing to do.
They're selling more chips for less profit--Intel still has them trounced in terms of the R&D budget regardless of how many units they ship. All you have as an argument is "ARM is better so eventually it will actually be better", but the instruction set frankly just doesn't matter very much.
Note that Intel is a fairly large ARM vendor, and had other RISC products in the past. They still design & build such chips for embedded controllers, so it's not like they don't know how to do it, but if they thought that was the best path forward for general purpose CPUs they probably wouldn't have sold that tech off to Marvell.
Sure, if we imagine that vendor X comes up with something implausibly advanced (scaling software to 1024 cores is hard, which is why single thread performance still matters), and intel actually goes backwards (you can buy a 32 core intel blade today) instead of developing new tech, then sure, vendor X can win.
Though nobody would buy it if it were tied to a single-vendor version of linux. BTDT, it sucks.
" Intel's least demand, lowest margin customers are ARM's high margin most demanding customers"
This is where I think you're wrong. The phones & the tablets are where the money is, the chromebooks are an uninteresting sideshow for the ARM vendors just as much as for Intel. There's no way they're making the same money on $200 netbooks as they are on $700 phones. They're also not putting any R&D into that segment, it just happens to move along with cobbled-together parts. It's not a path to anything.
"I can easily imagine a future generation of SOC for systems with keyboards as much as they are useful in today's tablets."
You seem to misunderstand. Of course systems are getting more integrated--the question is whether consumers are interested in buying a server whose hardware is completely different from the server they bought six months ago, which needs completely different core drivers, can't boot the same kernel, etc. It's not in the consumer's interest to have that degree of vendor customization in the desktop and server markets.

I already pointed out that Intel actually derives a competitive advantage from standardized SOCs: their competitors have to be better engineered just to overcome intel's process advantage. E.g., you need to have a significantly better 28nm 10GBE implementation to be more power efficient than intel's 14nm implementation. Is that likely? Can the ARM server vendor outperform intel's CPU, and outperform intel's best in class networking, and outperform intel's fairly solid storage controllers, and outperform intel's pcie controllers, and outperform intel's memory controllers, etc.? That's a lot of R&D, and none of the competitors have that kind of head count.
Don't get me wrong--I'd love to see ARM as a strong competition to intel in the server space. But watching how fast intel has pivoted, how quickly and reliably they deliver on new tech, and how slow and underwhelming the ARM vendors have been, I just don't see it as likely.
But the main reason they can sell anything in step iii is that intel doesn't care about those customers. It's not clear that ARM vendors are actually making much money on those products, and if intel cut its profit margin (i.e., if they cared enough about that particular market to actually go after it) then the ARM products would be economically untenable. There simply isn't a fundamental advantage there for the ARM vendors to take advantage of: their advantage is cost, and that's because intel has *decided* not to lower prices that much.

Again, ARM's marginal power advantage simply doesn't matter on a typical laptop because the CPU isn't the most power-hungry part. (Unless you're crunching numbers, but then you probably want to have a faster chip even if it uses more power.) Even on phones the advantage of ARM is less about power consumption than the fact that you can configure an ARM SOC any way you want it--while intel has basically no interest in licensing its most advanced IP so that OEMs can build custom SOCs.

The limitations of that strategy are clear--ARM hardware is basically disposable once the initial OS becomes obsolete, because nobody cares about engineering updates for old products--and I just don't see custom SOC being a driver for laptops/desktops/servers. Those markets demand more standardized hardware, and that brings us back to ARM competing toe-to-toe with intel. For the niches where hardware coprocessors really matter, intel has phi for HPC and quickassist for crypto/compression/DSP/etc.
Yes, ARM is used in a lot of phones. A phone chip is very different from a server chip. The question is whether any ARM vendor has the money to do *general purpose server* R&D in competition with intel. So far, everyone who has tried has either crashed & burned or provided fairly disappointing results.

What they have going for them is power efficiency, which matters in embedded solutions (think raspberry pi & smaller) but isn't that compelling on full size laptops, desktops, or servers--saving a few watts over an intel solution doesn't matter when the screen, memory, and communications consume more power than the CPU. (Side note--intel has a material advantage here by integrating some of the power-hungry components like 10GBE on silicon that's one or two generations ahead in terms of process compared to the ARM competition.) ARM seems firmly in the region of diminishing returns--they can't consume less than 0, so there just isn't that much more to cut.

Intel has room to improve, and with the money they can throw at things, they will--to the extent that makes sense. In most applications single thread performance is still more relevant than a very high number of cores. So intel's current strategy is to be reasonably power efficient, integrate components in a compelling fashion, but not sacrifice too much single thread performance. So with D-1540 you get integrated 10GBE, integrated SATA, integrated DDR4, & 8 fairly powerful cores.

The ARM vision is to deliver 48 slower cores, for a total package that's a little more power efficient and roughly on-par performance-wise for embarrassingly parallel applications (of which there are few). Given how many distinct architectures intel has delivered over the past few years, I'm pretty confident that, if high-scaling applications actually materialize, intel will be able to crank out a new SKU faster than any ARM vendor will be able to exploit the niche, basically by scaling up avoton.
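The fast-fewer-cores vs. slow-many-cores tradeoff can be roughed out with Amdahl's law. This is an illustrative sketch, not benchmark data: the "8 fast cores at 1.0x" vs. "48 cores at 0.5x per-core speed" numbers are assumptions standing in for the D-1540-style and 48-core-ARM-style designs discussed above.

```python
def speedup(p, n):
    """Amdahl's law: speedup on n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

def throughput(p, cores, per_core):
    """Relative throughput: per-core speed scaled by Amdahl speedup."""
    return per_core * speedup(p, cores)

# Hypothetical chips: 8 cores at relative speed 1.0 vs. 48 cores at 0.5.
for p in (0.5, 0.9, 0.99):
    fast8 = throughput(p, 8, 1.0)
    slow48 = throughput(p, 48, 0.5)
    print(f"parallel fraction {p}: 8 fast -> {fast8:.2f}, 48 slow -> {slow48:.2f}")
```

Under these assumptions the 48 slow cores only pull ahead when the workload is ~99% parallel, which is the "embarrassingly parallel applications (of which there are few)" point in numbers.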
(The successor to that architecture, denverton, is due out at the end of this year, probably with 16 cores & integrated 10GBE on a 14nm process.)
Have you been watching Intel's product releases? Intel decided a couple of years ago that they weren't going to let ARM have the low-power server market and completely retooled their product line, starting with the avoton server line (C2xxx) and following up with the D-15xx family. (Remember how AMD keeps talking about interest from data centers? D-1540 retail availability has been tight for months because some major datacenter providers have bought essentially *all* of them...) Watching how fast Intel was able to change course and deliver products that beat the ARM *roadmap* in that timeframe (let alone delivered products) made me abandon hopes that ARM might have a serious presence in the server market. Intel just has too much R&D money & process tech for any existing competitor to go toe-to-toe with them in a segment they decide to invest in.
The memory thing was basically "dial-a-pricepoint". I remember machines with a base price on the order of $5k, with $10k+ of memory (which was less than you probably have in your phone).
I'm also amused whenever one of these sparc nostalgia threads comes up, because the way I remember things the cool kids had the SGIs and DECs and the Suns were kinda the lame/cheap crap, basically the PCs of the UNIX world. They exploded during the
Yes, you can load a jpg in 4M, but you can no longer load the kernel.
Ordinary phones will probably pass 4G by the end of the decade.
If it's so easy, why don't you take over the port and show us how it's done? Debian has been very up front for years now that the sparc port was on its way out due to lack of interest; if anyone really cared, they would have stepped up to maintain it. The problem here isn't that it's impossible, or even a theoretical challenge, the problem is that the sparc hardware in general isn't really all that great and there isn't really a compelling reason to use it when people are literally throwing out higher-spec'd x86 gear. Only on the highest end is the sparc line potentially interesting, and nobody spends that much money to run a research project as an OS; by the time the hardware is available to hobbyist developers it's obsolete--and again, why bother plugging in a really power-hungry system and spend years developing for a platform that, by the time it's usable, will be outperformed by tomorrow's junk?
Lucky (?) for you, I just went through purchasing a storage refresh for a cluster, as we're planning to move to a new building and no one trusts the current 5 year old solution to survive the move (besides which, we can only get 2nd hand replacements now). The current system is 8 shelves of Panasas ActiveStor 12, mostly 4 TB blades, but the original 2-3 shelves are 2 TB blades, giving about 270 TB raw storage, or about 235ish TB in real use. The current largest volume is about 100 TB in size, the next-largest is about 65 TB, with the remainder spread among 5-6 additional volumes including a cluster-wide scratch space. Most of the data is genomic sequences and references, either downloaded from public sources or generated in labs and sent to us for analysis.
As for the replacement...
I tried to get a quote from EMC. Aside from being contacted by someone *not* in the sector we're in, they also managed to misread their own online form and assumed that we wanted something at the opposite end of the spectrum from what I requested info on. After a bit of back and forth, and a promise to receive a call that never materialized, I never did get a quote. My assumption is they knew from our budget that we'd never be able to afford the capacities we were looking for. At a prior job, a multi-million dollar new data center and quasi-DR site went with EMC Isilon and some VPX stuff for VM storage/migration/replication between old/new DCs, and while I wasn't directly involved with it there, I had no complaints. If you can afford it, it's probably worth it.
The same prior job had briefly, before my time there, used some NetApp appliances. The storage admins' reactions weren't all that great, and throughout the 6 years I was there, we never could get NetApp to come in to talk to us whenever we were looking for expansion of our storage. I've had colleagues swear by NetApp though, so YMMV.
I briefly looked at the offerings from Overland Storage (where we got our current tape libraries), on the recommendation of the VAR we use for tapes & library upgrades. It looked promising, but in the end, we'd made a decision before we got most of those materials...
What we ended up going with was Panasas, again. Part of it was familiarity. Part of it was their incredible tech support even when the AS12 didn't have a support contract (we have a 1 shelf AS14 at our other location for a highly specialized cluster, so we had *some* support, and my boss has a golden tongue, talking them into a 1-time support case for the 8 shelf AS12). We also have a good relationship with the sales rep for our sector, the prior one actually hooked us up with another customer to acquire shelves 6-8 (and 3 spares), as this customer was upgrading to a newer model. Based on that, we felt comfortable going with the same vendor. We knew our budget, and got quotes for three configurations of their current models, ActiveStor 14 & 16. We ended up with the AS16, with 8 shelves of 6 TB disk (x2) and 240 GB SSD per blade (10 per, plus a "Director Blade" per). Approximate raw storage is just a bit under 1 PB (roughly 970-980 TB raw for the system).
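As a sanity check, the raw-capacity arithmetic for the storage blades alone works out like this (the gap between this figure and the quoted 970-980 TB would presumably come from whatever storage the director blades carry, which is an assumption on my part):

```python
shelves = 8
blades_per_shelf = 10   # storage blades per shelf, excluding the director blade
drives_per_blade = 2    # "6 TB disk (x2)" per blade
tb_per_drive = 6

raw_tb = shelves * blades_per_shelf * drives_per_blade * tb_per_drive
print(raw_tb)  # -> 960, i.e. just under 1 PB before director-blade storage
```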
In terms of physical specs, each shelf is 4U, has dual 10 GbE connections, and adding additional shelves is as easy as racking them and joining them to the existing array (I literally had no idea what I was doing when we added shelves on the current AS12, it just worked as they powered on). Depending on your environment, they'll support NFS, CIFS, and their own PanFS (basically pNFS) through a driver (or Linux kernel module, in our case). We're snowflakes, so we can't take advantage of their "phone home" system to report issues proactively and download updates (pretty much all vendors have this feature now). Updating manually is a little more time-consuming, but still possible.
As for backups, I honestly have no idea what I'm going to do. Most data, once written, is static in our environment, so I can probably get away with infrequent longer retention period backups for everything more than 6 months old, while doing much more frequent backups of newer data (and
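A first cut at that age split could be scripted. This is a rough sketch under two assumptions I'm making up for illustration: a 180-day cutoff between "new" and "static" data, and that mtime is a good enough proxy for when data stopped changing:

```python
import os
import time

CUTOFF_DAYS = 180  # assumed boundary between frequently- and infrequently-backed-up data

def partition_by_age(root, cutoff_days=CUTOFF_DAYS):
    """Split files under root into (recent, old) path lists by mtime."""
    cutoff = time.time() - cutoff_days * 86400
    recent, old = [], []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.stat(path).st_mtime
            except OSError:
                continue  # file vanished or unreadable; skip it
            (recent if mtime >= cutoff else old).append(path)
    return recent, old
```

The `recent` list would feed the frequent short-retention job and `old` the infrequent long-retention one; a real version would hand file lists to whatever backup tool is in play rather than walking the tree itself.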
Hope that helped a bit.
The trouble with being poor is that it takes up all your time.