Comment Huge difference (Score 2) 331

There's a practical difference (at least that's how it's defined in Switzerland, which is one of the possible tax-avoidance places, although far less attractive than the ones in the summary).

- One is *lying*: giving false information and not paying the taxes you're required by law to pay. You pretend you don't have the money and try to hide it, in order not to pay taxes that the law says you owe. This is illegal. A person or a company doing so should be prosecuted.

- The other is just shifting money around. You're absolutely honest and give out any information that's needed. You simply move the money to another place, where the taxes happen to be lower than in the first place. Once there, you openly cooperate with the local tax authorities, declare all the money you have and pay all the taxes you're required to pay. It just happens that said taxes are lower than in the country of origin. But nothing is hidden, all the money is openly accounted for, and no one pretends anything false. This *IS LEGAL*. A person or a company doing so is just cleverly playing the system. WHAT NEEDS TO BE DONE is getting the local government to cooperate so that some tax money is funneled back to the original country.

Ireland, for example, has very low taxes. There's nothing in the law against keeping your money there, and nothing wrong with paying almost no taxes (as long as you declare everything and hide nothing). If you're unhappy with this, you shouldn't drag the company putting its money there to court; you should instead write to your politicians asking that the European Union finally come up with a solution for EU-level taxes (so the money is shared between Ireland and the other countries where the money was earned prior to the transfer but where the company isn't paying taxes).

Comment New Wifi (Score 1) 553

So if some new technology comes out... won't work for Windows 7. Imagine if this spring, some new version of WiFi is released that works over distances of 20 miles, at gigabit speeds, and allows infinite porn downloading.

You mean, like IEEE 802.11ac? :-D

More seriously:
A new feature will probably be supported the same way Bluetooth was supported before Microsoft included it in a service pack: an ugly vendor-specific hack bundled with the driver.
So the end user will be stuck between two hard choices:
- keep the older OS version and put up with the crappy stack
- pay and upgrade to the newer OS version with built-in official support

Comment The reverse (Score 1) 161

The reverse would be comparing the number of stars in the sky with the number of hairs.
It's probably a gross underestimate (I, in turn, am no astrophysicist), but the listener clearly gets the point:
there's an insane number of stars out there besides our sun.

Comment Speciality (Score 1) 161

but i am an astrophysicist, not a neuroscientist ;)

Don't worry. Some of us here realise it and take it with the necessary amused grain of salt.
Actually, we understood what you meant:
a gigantic number-crunching machine with bat-shit crazy computing capability.

Comment Biology is against you (Score 1) 161

A neuron is more complex than a transistor. Let's say it does a job, for the sake of argument, that would take about 16 transistors

Let's just say that you completely underestimated the computing power of a neuron.

First of all, a transistor just takes a few inputs, integrates them and gives one signal out. To oversimplify (a medical doctor/researcher speaking here, so this will be an abusive oversimplification), it works like a basic Boolean operator on two input bits.

A neuron is several orders of magnitude more complex than that. It takes many more different inputs. The connection between two neurons is a synapse: a neural impulse arrives at one extremity of the source neuron; at the synapse this causes the release of a chemical (a neurotransmitter); across a small gap there's a corresponding receptor on the target neuron. When the chemical docks into the receptor, it opens a small gate which lets a few ions flow through, changing the electrical properties of the target neuron.

There can be several *thousand* synapses on a single neuron, meaning that a single neuron can receive input from thousands of its peers.
Also, the integration of all these inputs is complex (a docked neurotransmitter will only *change* the electrical properties of the target neuron, not necessarily fire it; whether the neuron fires or not depends on the net result of the activity at all its synapses, some of which raise the probability of firing while others lower it). At that point we're already far from the "two bits in; Boolean operator; one bit out" of a computer transistor. We're talking about "thousands of signals in, followed by complex and subtle integration".

Conversely, a single neuron projects its output onto a lot of target neurons too, so the overall network (synapses) can get very complicated for a given number of nodes (neurons).

The signal itself isn't binary. It's not a "fire / no fire" duality like the bits in a byte. In fact most neurons are constantly firing; what varies is the rate at which they fire, and this can vary across a wide range. So neurons aren't even digital: they are analogue, with a lot of subtlety (and little signal loss, because they encode information in the time domain instead of in signal amplitude).

And that's only the short-range inter-neuron communication (at the synapse level). There are also a lot of molecules circulating in the bloodstream which can have an impact on the activity of neurons (hormones, etc.).

And all that only covers simulating the activity at a single point in time. But neurons are living objects and constantly changing.
Their metabolism changes, they might change their inner structure, etc. For example, a whole point of treating depression is to encourage neurons to produce more receptors for serotonin.

Even if neurons don't divide, the network changes over time: new synapses are created (e.g. more synapses along often-used paths) while others are removed.
The total population of neurons changes too: on one side, old neurons may die (or get poisoned by drugs and toxins); on the other hand, stem cells (e.g. in the hippocampus region) produce new neurons which can then migrate and insert themselves into the network, compensating for the loss (well, as long as the individual isn't suffering from dementia).

You will actually need much more than 16 transistors to simulate such complexity. You might well need a small computer (or at least a whole separate thread) just to simulate one neuron. You would definitely need a massive supercomputing cluster if you wanted to fully replicate the work of even the simplest animal brain.

Even though we *do* use neural nets in computer research and data processing, such nets use virtual neurons which are much simpler than their real biological counterparts. They're good enough for research and data processing; they're not good enough to simulate a brain.

So definitely no, you can't count on a 1:16 neuron-to-transistor ratio in your cyber-brain.
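To make the "thousands of inputs, subtle integration" point a bit more concrete, here is a minimal toy sketch of a leaky integrate-and-fire neuron in Python. All the constants and rates are made-up illustration values, not biological data, and even this toy ignores neurotransmitter kinetics, dendritic geometry, plasticity, etc.:

```python
import random

# Toy leaky integrate-and-fire neuron: many weighted synaptic inputs,
# a continuous membrane potential, and firing decided by a threshold.
# All constants are arbitrary illustration values, not biological data.

N_SYNAPSES = 1000                     # a real neuron can have thousands
weights = [random.uniform(-0.5, 1.0)  # negative weight = inhibitory synapse
           for _ in range(N_SYNAPSES)]

v = 0.0          # membrane potential (arbitrary units)
LEAK = 0.9       # passive decay of the potential at each time step
THRESHOLD = 5.0  # firing threshold

def step(presynaptic_spikes):
    """One time step: integrate all synaptic inputs, maybe emit a spike."""
    global v
    v = v * LEAK + sum(w for w, spiked in zip(weights, presynaptic_spikes) if spiked)
    if v >= THRESHOLD:
        v = 0.0      # reset after firing
        return True  # output spike
    return False

# Drive the neuron with random input activity and look at its firing *rate*,
# which is where the information lives (not in a single 0/1 output).
spikes = sum(step([random.random() < 0.01 for _ in range(N_SYNAPSES)])
             for _ in range(1000))
print("fired", spikes, "times out of 1000 time steps")
```

Even this crude model already needs a weighted sum over a thousand inputs, internal state and a rate-coded output, which is a long way from a two-input Boolean gate.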

Comment Not exactly so (Score 1) 119

The main problem between Mr. Torvalds and Nvidia is that, up to that point, Nvidia had never wanted to collaborate with the kernel developers, nor even to try to use what already exists in Linux.
Nvidia wanted the lazy way, sharing as much code as possible across their platforms. Their way of doing things is to throw everything else in the garbage and do things their own way. The problem is that this doesn't play nice with everyone else (including Intel hardware), and thus some functionality is completely broken (Optimus only works through third-party hacks) although the necessary functionality is already in the kernel (just ignored by Nvidia).

Recently they've started to wise up. They are open to collaborating better and trying to use what exists. But now the situation has reversed, and currently the Linux developers aren't playing nice (the necessary technology has been licensed as GPL-only from the beginning).

Maybe Valve will solve the problems by simply forking Linux for their Steambox project

and that would be stupid.
Instead, Valve went the hard route and started a collaboration between their own developers and the GPU driver developers (both the official binary teams at AMD and Nvidia, and the official open-source teams at Intel and AMD). All of this is leading to several improvements.
Thanks to them, the binary drivers are being improved, and even the open-source stack (the Intel driver is officially open source, and AMD officially supports open-source efforts in addition to their own in-house binary drivers, all based on the same standard Linux infrastructure) has seen an increased pace of development (among other things, Valve is responsible for adding several debugging-related extensions to Gallium3D, paving the way for future OpenGL 4.x support).

And these efforts do pay off: the Source engine with an OpenGL back-end is much faster than with Direct3D... even on Windows.

Comment Android convergence vs. Linaro (Score 1) 60

The Android convergence was about bringing the special bits needed to run the Android userland back into the mainline kernel tree, instead of keeping them in a separate fork.
(e.g., among other things, Android uses its own special type of IPC.)
Since a recent kernel (was it 3.5? 3.6? memory fails me) these modifications are back in the mainline tree.
So, today, *as long as the hardware is supported*, you can run Android using any modern kernel. (For example, there are some interesting experiments running both Ubuntu and Android on the same ARM device with the same kernel.)

Linaro is about the rest, about the "as long as the hardware is supported" part. As said above by godrik:

No two ARM machine boots the same. No two ARM processors expose component the same way.

As I've explained elsewhere in this discussion, this has led in the past to a situation where every single ARM machine has a separate monolithic patch which was custom-written in one shot by the machine maker.

The point of Linaro is to bring some discipline to this, foster cooperation between all the makers & developers, and help modularize all of it into small independent components.
So when the next machine comes to market, instead of writing yet another monolithic patch completely from scratch, as much work as possible can be leveraged from what already exists.
And the other way around: when a new kernel version comes out, instead of having to port several thousand different monolithic patches (which never happens; end users are always stuck with hardware-specific versions), it will be easier to just maintain the few modules affected by the changes.

When both are combined, it'll be incredibly easy to run Android on as many devices as possible. For a manufacturer, that's a no-brainer: the mainline kernel contains everything needed to run Android, and contains a lot that can be leveraged to support the hardware. Either write just the few new drivers that are needed, or even better (if you're lucky) only write new settings for some generic hardware.

BTW: Microsoft has chosen a different path - screw everything else, we're only supporting one specific way, one single type of machine; either go our way or pick another OS. My bet is that this isn't going to work very well, especially given the flexibility that Android offers on the other hand.

Comment Linaro is not *a distribution* (Score 1) 60

Linaro is not a distribution. It's a joint effort to *help Linux run on ARM hardware*.

For example there are about three linux based distros for the openmoko, all with custom UI front ends.

And for these "linux based distros" to work you need a running linux (kernel) on which to base them. That's the work of the software company named Linaro.

To go into more detail:
As a point of comparison, in the x86 processor world things are rather standardized and well modularized. There's more or less only one main platform (the PC), with some weirder variants (BIOS vs. (U)EFI, or even weirder, PC vs. custom Apple Intel Macs) and a few exceptions (which are really rare and don't matter much for mass consumption). The components are clearly organized (no matter whether you go for Intel, AMD, VIA or rarer hardware, you've got the same basic CPU, northbridge, southbridge, PCIe bus, etc.), with well-defined procedures to initialise and control everything. On the software side, all of this is handled by clearly modularized and segregated components.
When something new arrives on the market (like the jump from BIOS to EFI, or the move from PCI to PCIe), you only need to write a module and leverage what already exists for everything else.
To boot into Linux, you use the same kernel everywhere, and only load different drivers depending on the local variations.

In the ARM hardware world, things are much messier: lots of weird SoCs are put together, with far less standardisation. Also, whereas x86 hardware is general-purpose and widely available to everyone (just think of all the beige boxes everywhere; then consider that whether you need a server, a compute node or a home-theater PC, you use the same components, and branded machines (brand-name servers) use the same components too), lots of ARM hardware tends to be put together for a very specific custom use (one company using its own SoC + PCB for a router, another for a smartphone, another for a NAS, another for a TV set-top box, etc.). Lots of this hardware is one-shot (there's little design re-use between a router and a smartphone).

As a consequence, before Linaro, Linux on ARM was a big mess: each time a company put together some hardware, they also did their own port in a one-shot fashion: they downloaded the Linux source, wrote a big monolithic patch to support their own weird variant, compiled it, maybe even published the code (to be compliant with the GPL) and called it a day.
When another company wanted another ARM-based machine, they just did the same.
No modularisation. No code re-use. No easy rebasing of the kernel.
That's why, for several pieces of hardware, you're basically stuck with one very specific kernel version (OpenMoko is still at 2.6.x-something) even when the source is available: the hardware depends on a huge monolithic platform driver which is tightly dependent on the very specific kernel version against which the patch was written.
If there's any known kernel bug, your only hope is to wait for the back-port. You can't just move to a recent kernel (to 3.6.x).
If you want to provide a distribution for several pieces of hardware, you need one separate kernel per device, sometimes even different kernel versions (depending on which kernel version the patch was written against).
A big mess.

It's no surprise that Microsoft is having big difficulties with Windows 8 on tablets and smartphones: the hardware landscape is really weird, and their own approach is to impose one single specific type of platform, so that writing Windows 8 stays easy, and to have everyone else standardize on it (that's also why they won't end up being as popular on smartphones as they wished).

The role of Linaro was to put some order into this mess, by gathering together several of the people involved (there are hardware companies among them) and giving them opportunities to work together and coordinate their efforts - among themselves as well as with the kernel developers and the rest of the community.
Split everything into small components which are easy to re-use (for newer, slightly different hardware) or re-base (new kernel = keep as much of the drivers as possible). Organise everything cleanly.
The target: achieve the same situation as with x86 hardware - same kernel everywhere, with all the magic in modules and components. Re-use should be easy and cheap (just write new drivers for the new hardware elements and re-use as much as possible of the rest, or even just write a new batch of settings for a generic driver). Everything should be as modular as possible, and as much as possible should land back in the official kernel (no monolithic patches).
The end users see benefits (they get better code quality and longer maintainability; this requires open-source code, but it also requires code which is easy to work with - the past mess wasn't, the future Linaro strives for is).
The hardware makers see benefits (the initial cost of developing new embedded hardware is lower: no need to write full monolithic patches, leverage as much as possible from what already exists; the overall development/maintenance is also simpler: no need to keep one separate kernel for every piece of hardware produced, and it's easier to get updates from the mainline kernel).
And the kernel development community sees benefits (it's easy to keep and maintain modules in the mainline kernel, and there's less headache when big changes arrive with newer kernels, as everything is nicely modularized).

If today you're starting to see interesting distros on ARM (lots of smartphone-specific distros, big-name distros like Ubuntu and openSUSE, etc.), that's exactly because of the behind-the-scenes work of developers like those from Linaro, making the Linux kernel easy to get working everywhere.

Comment Use cases: AMD leverage ARM+Radeon (Score 1) 213

Also, AMD can spit out some interesting use cases where it can find a nice empty niche market:
by leveraging their built-in GPUs.

Not simply by putting in a low-powered mobile GPU (Qualcomm's Adreno - a cousin of AMD's Radeon - PowerVR, etc.) but something more high-powered (some of their own low-power Radeon designs):

- Coupled with a multi-core ARMv8 CPU, it can be useful for netbooks with good graphics performance (the same kind of market Nvidia is chasing with their own Tegra series).

- Some numerical loads can benefit from a low-power CPU core and a decent GPU.
When building server farms, performance-per-watt matters, and that's why ARM is starting to eat into x86 territory.
With a low-power ARM and a decent GPU, AMD's creations could also end up as low-power GPGPU nodes with crazy-low energy requirements. (I've seen ARM + custom parallel-chip combos appearing; there might be a market for something like this.)

So there is definitely a place for a multicore 64-bit ARM by AMD.

Best part for us geeks? AMD is likely to keep its open-source policies.
So you can expect documentation pushes to coreboot & co. (for the ARM part) and to the open-source radeon stack & co. (for the GPU part).
And this is a radically different situation from the present ARM one, where most of the graphics cores are closed:
either completely closed (PowerVR),
or undergoing some reverse engineering (Mali/Lima),
or getting some docs (Adreno's common ancestry with Radeon helps),
or being an open-source farce (Broadcom's VideoCore is basically a software 3D engine running on a dedicated core; their "fully open-source driver" is a thin layer which uploads the firmware to the dedicated core and then forwards the OpenGL ES calls as-is).

Currently there are no nice open-source graphics in the embedded world, unlike with Intel (and AMD) on the desktop.
But if AMD keeps its open-source trend, and Nvidia keeps its promises (after Linus' "fuck you" scandal they ended up promising to open up a little bit more), we're going to see an interesting battle of "ARM with big GPUs" in the embedded Linux world.

Comment 3rd Party (Score 5, Informative) 467

If you tell Facebook your secret, it's not a secret anymore and you're a moron for thinking it would be.

The problem isn't what they told Facebook. The problem is that the girls got added to a queer-themed group; adding someone to a group on Facebook doesn't require any confirmation from the user.
A third party just clicked a group button while the girls were online, and their homophobic parents saw "Girl1 and Girl2 joined group 'lesbian chorus singers'" and freaked out. The girls never needed to do anything; they didn't even need to write their preferences into their profiles, and in fact their accounts could even have been dormant.

The biggest problem is not just that clueless users can mess up their own privacy online, but that morons can mess up other people's privacy as well (and in a few cases even the privacy of people who aren't on Facebook themselves).

Comment small is better than zero (Score 1) 262

Good luck breaking RSA.

Yup.
For a 4096-bit key, it would give the correct numbers only a very tiny fraction of the time. But that's still better than zero.

Consider that multiplying the factors to verify an answer is trivial.
If the quantum unit is fast enough and spits out answers at a sufficient rate, then even if it only produces the correct answer at a very low rate, by constantly verifying its results you may, after a while, end up with the correct result at a sufficiently high confidence.
If the answer-spitting rate is high enough compared to the correct-answer rate, this "after a while" might be shorter than the time necessary to brute-force-factorise the number with current technology.
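A minimal sketch of that loop, with the quantum unit replaced by a simulated noisy oracle and all the numbers (the toy semiprime, the 2% success rate) made up purely for illustration:

```python
import random

# Hypothetical noisy factoring oracle: a stand-in for the quantum unit above.
# It returns the true factors only a tiny fraction of the time; the rest is garbage.
N = 3233          # toy semiprime (61 * 53); a real RSA modulus would be 4096-bit
P_CORRECT = 0.02  # made-up probability that the oracle answers correctly

def noisy_oracle(n):
    if random.random() < P_CORRECT:
        return 61, 53                  # the true factors
    return random.randrange(2, n), random.randrange(2, n)  # garbage answer

# Keep asking and keep checking: each check is a single multiplication,
# so the expected number of attempts is roughly 1 / P_CORRECT.
attempts = 0
while True:
    attempts += 1
    p, q = noisy_oracle(N)
    if p * q == N:                     # trivial classical verification
        break
print(f"verified {p} * {q} == {N} after {attempts} attempts")
```

As long as those cheap checks can keep up with the rate at which the oracle spits out candidates, the wall-clock time is dominated by 1 / (correct-answer rate), not by the difficulty of factoring itself.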

Yes, you can't have nice things (for free).
But quantum is the kind of weird technology which can give you lots of *shitty* things (for free).
Okay, these things are *shitty* and not *nice*, but you still get them for free.
And you can still try putting them in a big pile, lighting them on fire and using the heat to cook a nice dinner. And dinner is *nice*.

Maybe the production rate is forced to scale badly compared to the true-positive rate, and they cancel each other out, exactly the way a perpetual-motion machine always ends up with an output energy of zero or worse.
But that's not a reason not to even try proving *that*. We need to understand quantum computing at least well enough to see whether this 50% vs. 48% difference will become an obstacle in the long term or not. We have to test and prove your predictions.

Maybe cooking dinner over quantum virtual shit is not worth it, but at least we have to prove that.

Comment Checking (Score 2) 262

Consider the "use quantum computing to get an answer, use regular computing to verify answer" post higher in this thread.

While it's computationally intensive to factor a number, it's absolutely trivial to check the factors' product against the target number.
Yes, it's possible that the quantum computer will give a false positive half of the time, like "13 = 3 x 5".
On the other hand, it's quite trivial to see that 3 x 5 makes 15 and not 13, and thus that this was a false positive.

It can be a viable pre-filtering technique. Currently, factoring large numbers is awfully resource-intensive: naively, you basically have to build a table of prime numbers up to sqrt(n) and trial-divide.
Now, with a quantum computer, you might only need to do multiplications to check the candidates the quantum unit is spitting out, and keep doing this until you have enough confidence in the result.
Depending on the implementation (especially the hardware implementation of the quantum unit), you might get a good-enough answer in a much shorter time frame than with a classical computer (where the time frame might be more like "heat death of the universe").
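To put rough numbers on "trivial vs. intensive", here is a small illustrative sketch (toy-sized primes, and naive trial division standing in for real factoring algorithms):

```python
# Checking a candidate factor pair costs one multiplication and one comparison;
# even the most naive classical factoring (trial division) costs on the order
# of sqrt(n) divisions. The numbers below are toy-sized, for illustration only.

def verify(n, p, q):
    """One multiplication: instantly catches false positives like '13 = 3 x 5'."""
    return 1 < p < n and p * q == n

def trial_division(n):
    """Naive classical factoring, counting how many candidate divisors it tries."""
    d, steps = 2, 0
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return d, n // d, steps
        d += 1
    return n, 1, steps  # n is prime

print(verify(13, 3, 5))   # False: rejected with a single multiplication
print(verify(15, 3, 5))   # True
p, q, steps = trial_division(7907 * 7919)   # small semiprime
print(p, q, "recovered after", steps, "trial divisions")
```

The gap only widens with size: verification stays a single multiplication while the classical factoring cost explodes.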

As you said, the problem currently is successfully building quantum computing units that can work on anything bigger than a trivially small number of qubits.

Comment Non exclusive garden example (Score 3, Informative) 360

A walled garden does not preclude allowing the ability to turn it off. {...} They could quite easily add an 'opt out' and let people install outside software at their own risk.

As an example of a non-obligatory garden: the webOS system from Palm/HP/Gram, on Pre smartphones and the like.

Out of the box, your Pre/TouchPad is the classic walled-garden example:
it has an application manager which lets you download or buy applications from an official repository of vetted applications.

But if you want to use your device in creative new ways, you just need to type one command to switch the phone into developer mode. This command is well documented in the developer documentation. (The only drawback is that the first version was a little long to type, because it was a joke on the Konami code; later versions introduced shorter alternatives.) And then you can do pretty much anything you want with your smartphone/tablet, including installing any software of your liking - or even installing an application manager which can also use homebrew and open-source repositories (i.e. Preware). And once you've finished sideloading external software, just switch back into regular mode and continue using your new homebrew apps or the new app manager.
There is no need for hacking, exploits, stolen keys, etc.

Using this is at the owner's own risk. But if you corrupted your smartphone/tablet by installing too much weird shit, there's webOS Doctor, designed by Palm/HP/Gram to revert your hardware to factory defaults. (Though you lose anything which was not backed up to the cloud: you lose your homebrew applications, but not the personal-assistant data.)

And a non-locked Android smartphone works the same way, letting the user side-load apps or replace the firmware altogether.

BUT

Apple and Microsoft decided they didn't want to do it that way. They are doing as much as possible to prevent users from leaving the walled garden.
Apple refuses to let users do anything other than get applications from the Apple App Store, and at some point even tried to sue over circumvention.
Microsoft is at the center of a controversy over its abusive requirements regarding the ARM version of Windows 8 and UEFI Secure Boot.

Comment Yep. It's your point of view. (Score 1) 220

What I meant is that in both cases the company threw out good design practices chasing a single metric, higher clocks in the case of Intel, higher core count per TDP in the case of AMD, and both paid/are paying for it. In both cases you get chips that run hotter, suck more power, and give less IPC.

It's just that, in my opinion, AMD didn't fuck up as badly as Intel. AMD has design and support problems, but it has a viable design which can be saved and could even work nicely in the long term. Intel dug themselves into a hole, and there was nothing that could be done to save Netburst back then.

Intel especially did throw out good design practice: they didn't let anything stand in the way of the GHz race.
- They used an ultra-deep pipeline, just because that allows smaller, shorter pipeline stages and thus higher clock rates - but ultra-deep pipelines are problematic and prone to catastrophic stalls (hence they were forced to develop Hyper-Threading just to compensate for this, which brought its own set of problems, as mentioned before). Just to get a bigger GHz figure, they moved to something which could only work worse.
- They completely ignored everything else: power consumption, thermal output. (In their defense, nobody gave a fuck about those back then either. That era treated those metrics the way gas-guzzling SUVs treated mileage. Transmeta had a hard time persuading anybody that an ultra-low-power, low-thermal-output CPU had actual use cases, even if it meant being slightly less powerful than Intel's chips.) The problem is that for the Netburst architecture to shine, you have to push past 10 GHz (according to Intel's initial plan). But if your chip needs to be powered by a mini nuclear reactor and puts out enough heat to not only cook your dinner but also bake the china on which to serve said dinner, that won't be easy to achieve without the whole thing melting, or even achieving fusion - if you'll pardon my understatement. For Netburst to succeed, Intel needed to jump from 130nm to 14nm within a few months and make vapo-chill cooling mainstream. That didn't happen. So Netburst was doomed from the beginning.
- They completely ignored even the trends in software evolution. The concept behind Netburst would be good for running a single (repetitive, non-branching) task, or just a few threads, as fast as possible. It might have made some sense given the design of some games back then. But it didn't make any sense given the evolution of the whole ecosystem: multitasking got progressively more popular (and with the jump from the DOS-based monstrosity WinMe to the NT-based WinXP, even the home-user version of Microsoft's OS got decent multitasking support). Even if each task is purely sequential and single-threaded, there are progressively more such tasks running concurrently. (Cue the jokes when the first consumer-level quad-cores appeared: you need one core for the slow Microsoft OS, one core for all the mandatory antivirus and similar protection, one core for all the crapware/spyware/malware that has infected the machine anyway, and one last fourth core to finally get things done. It's a joke, but it captures the general trend.)

Meanwhile, there's nothing fundamentally wrong with Bulldozer. There is no fundamental argument against half-cores. They can be seen as an evolution beyond Hyper-Threading. They make sense given the long-term trend in software development (more and more multiprocessing and multithreading). They still have a clever fallback mode (shut down one pipeline, fuse the half-cores, boost the clock speed, and have the module work as a glorified, super-overclocked, really-superscalar (2x the resources) single core instead of two half-cores). In two-half-core mode they are much more efficient than HT (HT is a share-everything solution: the two threads have to share a single pipeline and use the same set of integer/float/etc. units; the theoretical maximum instruction throughput is the same with or without HT, you just keep the CPU busier during stalls - it simply makes better use of the resources. Half-cores, meanwhile, each have their own pipeline, and the integer units are doubled, so they can really process more threads faster). And although this comes at some increased complexity (HT only adds extra logic at the front end, to feed the pipeline and processing units from two threads instead of one while everything else is shared, whereas Bulldozer's half-cores also require a second pipeline per module and doubled integer units), it's still a lot less silicon than putting in two separate cores, and the extra integer units do come in handy in single-overclocked-core mode.
The whole architecture DOES MAKE SENSE (on paper, at least).

Okay, it serves as an excuse to write "8 half-cores" instead of "4 cores" or "4 modules", so it helps the marketing department write a bigger number for some metric. But under the hood, a 4-module/8-half-core Bulldozer chip makes much more sense than cramming 8 separate full cores onto a huge die. Eight real cores would have been the equivalent of sacrificing everything for a single metric; what AMD did is much more clever, though in my opinion they should NOT have bragged so much about that number "8".

Most of the problems Bulldozer is encountering are completely orthogonal to the choices behind the Bulldozer architecture.
- The only genuinely problematic consequence is that modules use more transistors and silicon real estate because of the doubled structures for the half-cores.
- Software support is something which has nothing to do with the Bulldozer design itself.
- Also, AMD is behind on process: they are still producing chips on an older node while Intel is producing 32nm and introducing 22nm with Ivy Bridge. Given *THAT*, it's actually amazing how much performance they have managed to deliver so far.

And having good Linux support really doesn't help when more than 90% of your market is NOT running Linux, you might as well say "Well if the world switched to netBSD all the problems would disappear!" because the world isn't gonna switch to Linux or NetBSD thanks to all the mission critical programs that are Windows only and will never be ported.

You know, there's this tiny little unnoticed niche market called "servers" and "clusters" (a.k.a.: most of the stuff you connect to when you click on the blue or orange icon to visit teh interwebs). Linux and other unices are *kings* in these sectors.

AMD has always offered interesting hardware, and even Bulldozer has an interesting cost/performance balance for some tasks.
I mean, there are even people considering ARM, underpowered as it is, just because its ultra-low thermal/power envelope is interesting under some circumstances.

By your logic, 64-bit (AMD64) wouldn't have caught on, because it needed special support and only Linux supported it when it was introduced; it wasn't supported by mainstream Windows XP. In practice, Opterons were quite a success in several server use cases.

And while I agree that theoretically the problems could be partially fixed by the scheduler, in reality MSFT has made it clear they WILL NOT FIX in ANY version of their OS except...Windows 8. Considering Windows 8 {...} you again have the NetBSD problem

Apart from an experimental 64-bit Windows XP (whose driver support was even more catastrophic than Linux's), Microsoft didn't introduce mainstream 64-bit support until Windows Vista, which was a comparably huge catastrophe in itself. Still, AMD64 caught on and even forced Intel to adopt it as the 64-bit evolution instead of their Itanic attempts. All this thanks to the above-mentioned tiny, easy-to-neglect "servers and clusters" niche (a.k.a. a huge part of the internet, the universities and the research labs).

Yup, in 2012 we still haven't seen the mythical "year of Linux on the desktop". But on the other hand, the "year of Linux on everything else" came and went a long time ago.

The only significant difference is that AMD64 chips weren't much disadvantaged when running 32-bit code. All the extra features (more registers, bigger address space, etc.) just sat idle; it was simply unused silicon. And they were competing with Netburst which, as said before, had basically dug its own grave.
Running Bulldozer on an OS that doesn't support it, on the other hand, takes a performance hit.

they were stupid enough not to give MSFT the heads up as to what was going on

AMD announced and published everything well in advance. They even had Linux support in kernel 2.6.30 (which is open source and could serve as an example implementation for anyone interested), before the first Bulldozers even hit the market.
It was just Microsoft being a little lazy and using yet another obsolescence strategy to force people to move toward their latest shit.

they've locked themselves into a path where the most hated Windows OS since MS Bob is the ONLY hope they have...

Well, that and the handful of machines you interact with when you type https://www.google.com/search?q=porn in your browser and click on the various links.

My personal opinion is still "don't underestimate the impact of Linux in the server/cluster world". Opteron is already popular there, and Bulldozer fits a lot of use cases nicely.
(For example, web servers serving lots of dynamic content: web-server daemons tend to use multithreading to serve responses, treating each request in parallel to lower latency (see the sketch below), so processors able to run lots of threads are welcome (Sun's SPARC Niagara is another example). Serving dynamic content needs some processing oomph (to run PHP...), so ultra-low-power multicore ARM isn't at that much of an advantage there. Bulldozer starts to look like an interesting possibility, and it doesn't cost that much. Thanks to the modularity of AMD hardware, you can even try building machines using cheap desktop-grade processors on an AM3+ server-grade motherboard.)
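A minimal sketch of that thread-per-request pattern, using only the Python standard library (the sleep stands in for PHP-style dynamic-content work; this is an illustration, not any particular production setup):

```python
# Thread-per-request HTTP server: each request gets its own thread, so a CPU
# that can run many hardware threads keeps per-request latency low even when
# every response needs some processing work (simulated by the sleep below).
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn

class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True  # don't block shutdown on in-flight requests

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.05)  # stand-in for generating dynamic content (PHP-style work)
        body = f"served by {threading.current_thread().name}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Many requests in flight => many concurrent threads => the more hardware
    # threads the CPU offers (HT, half-cores, Niagara...), the better it scales.
    ThreadingHTTPServer(("127.0.0.1", 8080), DynamicHandler).serve_forever()
```

With many slow requests in flight at once, the number of hardware threads the CPU can run concurrently directly limits how well latency holds up - which is exactly where many-threaded designs like Bulldozer or Niagara are aimed.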
