Comment Re:and this is news... why? (Score 2) 80

Closer to two decades... 128 MB RAM machines would have been around at the launch of Windows 95.

The first consumer-level Pentium chipset to properly support more than 64 MB of RAM, the 430HX, came out in February 1996. Even then, the HX was the high-end model; most Intel chipsets over the Pentium's life only properly supported 64 MB of RAM. You could put 128 MB in them, but that would actually reduce performance, as only the first 64 MB would get cached. 128 MB was definitely not common when Windows 95 came out.

Comment Re:This actually makes sense (Score 1) 207

If you have a proper server then you can do all that troubleshooting over iLO or whatever remote access card your vendor supplies; no physical display required. Some x86 servers have serial console ports, but those are of limited use for a Windows server.

My point was that neither x86 nor Windows requires a video card to function. I used the example of a desktop board specifically because it lacks an iLO emulating a video card (and has no onboard video either).

Comment Re:Windows is not optimized for Bulldozer (Score 4, Informative) 235

Windows is "not optimized" for Bulldozer because BD lies to the OS. A BD claims to have twice as many cores as it really has and Windows schedules as if this were true. In reality the BD "cores" are just a better form of hyper-threading. If BD said it had hyper-threading instead of real cores then Windows would schedule properly. All Linux and Windows 8 do is ignore the lies from the chip and use the hyper-threading scheduler.

Comment Re:In summary (Score 2) 197

The AMD chips had a significantly better GPU, at the cost of a slightly slower CPU (which is a good tradeoff). Apple didn't go with it because AMD couldn't guarantee the volumes that Apple needed.

Umm, no. AMD doesn't have a chip that competes with Intel's ultra-low-power Sandy Bridge chips like the ones in the Air.

The AMD Brazos chips compete on power consumption, but they are way slower. They are an Atom competitor, and a very good one, but SB chips are in a completely different performance bracket.

The AMD Llano chips would qualify as "significantly better GPU, at the cost of a slightly slower CPU", but at much higher power consumption. Not suitable for the Air either.

Comment Re:So basically... (Score 2) 196

No, that is not correct. Hyper-threading gives each thread equal access to the core's resources, assuming both can use them equally. The only difference between hyper-threading and a BD module is that the BD module has a dedicated integer execution unit and L1 data cache for each thread. Everything else is shared, just like in Intel cores. It is simply a better form of hyper-threading, not real cores.

Comment Re:Weird (Score 1) 196

It's not like hyper-threading. For integer operations, the AMD chips are much better. What AMD doesn't have is two floating point units, so that's what gets bogged down. There are two instruction decoders and two units to handle integer math, but one floating point unit per module.

It's a lot closer to hyper-threading than you think. The BD chips do *NOT* have two instruction decoders per module, just one. The only duplicated parts are the integer execution units and the L1 data caches. The instruction fetch/decode, L1 instruction cache, branch prediction, FPU, L2 cache, and bus interface are all shared.

Given these benchmarks, it's pretty obvious how limited each BD "core" really is. AMD should have presented the CPU as having hyper-threading to the OS.
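
If you want to see the shared FPU for yourself, here's a hedged sketch (Linux-only): pin two floating-point-heavy processes either to the two logical CPUs of one module or to CPUs in different modules, and compare wall time. The CPU numbers below are assumptions; check your machine's actual sibling groups first (the sysfs sketch above shows how). Python's interpreter overhead will blunt the contrast, so treat this as illustrative rather than a proper benchmark.

    import os, time, multiprocessing as mp

    def fp_work(cpu, n=5_000_000):
        os.sched_setaffinity(0, {cpu})    # pin this process to one logical CPU
        x = 1.0001
        for _ in range(n):
            x = x * 1.0000001 + 0.0001    # dependent floating-point chain
        return x

    def run_pair(cpus):
        # Run two FP-heavy workers at once and time them together.
        t0 = time.perf_counter()
        with mp.Pool(2) as pool:
            pool.map(fp_work, cpus)
        return time.perf_counter() - t0

    if __name__ == "__main__":
        # Assumed numbering: CPUs 0 and 1 share a module, 0 and 2 do not.
        print("same module:       %.2fs" % run_pair([0, 1]))
        print("different modules: %.2fs" % run_pair([0, 2]))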

Comment Re:So basically... (Score 3, Informative) 196

No, it's because AMD is lying to the OS. The "8 core" BD is not really 8-core. It only has 4 cores with some duplicated integer resources: basically a better version of hyper-threading, but not a proper 8-core design.

The problem is that the BD says to Windows "I have 8 cores" and thus Windows schedules assuming that is true. If BD said "I have 4 cores with 8 threads" then Windows would schedule it just like it does with Intel CPUs and performance would improve just like in the FA.

There shouldn't need to be any OS-level tweaks, because Windows already knows how to schedule for hyper-threading optimally. If BD reported its true core count properly, no OS-level changes would be needed.
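
To make "schedule it just like Intel CPUs" concrete, here's a minimal, Linux-only sketch that does by hand what an SMT-aware scheduler does automatically: place one worker on each physical core/module before doubling up on siblings. It reuses the sysfs sibling-group idea from the earlier sketch; the workload is just an assumed stand-in.

    import os, re, glob, multiprocessing as mp

    def one_cpu_per_module():
        # Pick the first logical CPU of each sibling group, i.e. one per module.
        seen, picks = set(), []
        for path in sorted(glob.glob(
                "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list")):
            with open(path) as f:
                group = f.read().strip()
            if group not in seen:
                seen.add(group)
                picks.append(int(re.search(r"cpu(\d+)", path).group(1)))
        return picks

    def worker(cpu):
        os.sched_setaffinity(0, {cpu})    # keep this worker on its own module
        return sum(i * i for i in range(2_000_000))

    if __name__ == "__main__":
        cpus = one_cpu_per_module()
        with mp.Pool(len(cpus)) as pool:
            pool.map(worker, cpus)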

Comment Re:How times change (Score 1) 471

While security updates for 2k stopped in July 2010, other updates such as DirectX, IE, and Silverlight stopped long before then. Also long before 2010 was the end of support for Adobe Flash and many other non-MS products on 2k.

All of those things stopped being supported for Win2k long after XP took off. Your examples are rather poor given that the final supported DirectX and Silverlight versions for Win2k are the same as for XP (9.0c and 4 respectively).

Comment Re:How times change (Score 1) 471

XP took off when MS stopped offering any updates for (the only one-year-older) Win2k. Soon other companies followed suit and stopped supporting 2k in their software, and it died abruptly while XP, with very few actual advantages beyond software support, took off.

I'm pretty sure XP took off long before July 2010, because that is when updates for Win2k stopped.

Comment Re:Three guys beat IE!!! (Score 1) 373

The JRE issue is simple: the JRE is being exploited to deliver Windows malware. Linux and other OSes can get "infected" by the same exploit, but since the payload code is for Windows, it won't run on other OSes. The JRE is just the delivery method; it's not what actually runs the malware.

The big issue with Java is that while it is platform independent, it is not version independent. There are many, many Java apps that require a specific version of the JRE and will not run on a newer one. So if you need to run an app that needs an old JRE, you can't patch and secure your system. At a previous employer, about 80% of our compromised systems were because of Java, with almost all the rest because of Adobe products. That was despite our default browser being IE6.

Comment Re:Norton Disk Doctor (Score 2) 375

Which effectively means that SpinRite is incapable of any recovery in that situation. If it can't relocate data, it has no ability to put it somewhere else. You trust it to put the recovered data back in the SAME bad sector? SpinRite can't fix bad sectors; all that magnetic crap it claims to be able to do is just that, crap. Seriously, run it against a USB flash drive sometime and watch all the BS magic magnetic info it gives you.

Comment Re:Norton Disk Doctor (Score 1) 375

No, that is not fine. First off, if the disk is damaged, why would you trust a different spot? If the sector SpinRite writes to is also bad, you have recovered nothing. If the disk is in really bad shape, there may not be any sectors that are safe to write to.

Second, if the filesystem is damaged or SpinRite doesn't understand it (i.e., anything other than FAT), then it has no idea which sectors contain useful data and which are free. SpinRite can very easily overwrite other data that you want to recover.
