
Comment: Re:So what's Metro? (Score 1) 545

by bored (#47934077) Attached to: What To Expect With Windows 9

Are you trying to tell me that GDI wasn't directly accelerated in pre-Vista, non-Win9x OSes?

Because I believe you to be strongly mistaken. From the Windows 2000 XDDM reference, "GDI Functions Implemented by Printer and Display Drivers":

You will notice that everything from font rendering to curve drawing to path filling (yes, with XOR), you name it, _CAN_ be implemented, although only a small subset is required. _BUT_ I would say that most were implemented by the better hardware manufacturers for the common video resolutions.

You will also notice that the documentation has since been updated and now says "The functions documented in this section are implemented by printer drivers and by drivers in the Windows 2000 display driver model, but they are not implemented by drivers in the Windows Vista display driver model." This is also directly noticeable in GDI benchmarks between the two OSes (especially when run on machines with slower CPUs, or while monitoring CPU usage). There are also a fair number of YouTube videos of people showing things like scrolling speeds in Explorer on XP vs. Vista. 7 improved the situation slightly, but as of a few years ago the benchmarks I remember seeing were still strongly tilted in XP's favor if one monitors CPU usage during the benchmark.

Comment: Re:So what's Metro? (Score 1) 545

by bored (#47931291) Attached to: What To Expect With Windows 9

Standard desktop apps were accelerated on OSes that predate Windows Vista. In fact, GDI was hardware accelerated all the way back to the Windows 3.0 days.

This is actually one of the reasons that older Windows releases often feel snappier, since probably >50% of Windows applications use GDI or a toolkit that uses GDI.

Comment: For the same reason you don't have a solar panel. (Score 1) 444

by bored (#47894721) Attached to: If Tesla Can Run Its Gigafactory On 100% Renewables, Why Can't Others?

There are dozens of reasons. Let's start with: the costs go up. It's the free market, after all; if a company could _ACTUALLY_ reduce its power bill this way, it would.

Second, lots (most?) of companies are strongly OPEX-leaning, meaning that they are already shifting their CAPEX to OPEX wherever they can, and investing in solar/wind is overwhelmingly CAPEX (or it's going to drive up their debt).

Third, most companies are busy worrying about their next product, and a long list of other issues.

I could probably list another dozen things, but I'm betting that combination pretty much covers 99% of US companies.

Comment: Re: So.... (Score 5, Insightful) 170

by bored (#47850073) Attached to: Fedora To Get a New Partition Manager

I considered moderating you, but I think this is really a case of <whine> "C++ is haaardddd, learning it enough to understand how to plug in a new module is going to take me months. Instead I'm going to rewrite it" </whine>

Or similar bullshit by people who think "scripting" languages are appropriate for base system tools. Now you will have Python dependency hell every time you want to do something simple like repartition your disks. Oh, and is that project Python 2 or Python 3? On and on...

Frankly, it's fsking stupid, and it's another sign that Red Hat is jumping the shark.

Plus, do you really want to depend on the skills of some "leet" hacker who thinks Python is an appropriate tool for this?

Comment: Re:Russian revolution? (Score 1) 85

by bored (#47817349) Attached to: Amazon's Plan To Storm the Cable Industry's Castle

It will never be a real battle until Amazon starts providing last-mile services. The cable cos and the content providers (Amazon in this case) need each other too much to actually have a battle to the death.

So, much like the "blackouts" and other BS that happen once in a while, the end result is not positive for the consumer. The cable bills never go down, or even stay the same; instead they go up, and both sides get to blame the other, all while making record profits for Wall Street.

Nothing will change until we start actually regulating the last-mile providers in meaningful ways. That includes more à la carte channel selection, where the _CONSUMER_ chooses which media/content providers they wish to subscribe to. I don't mind the content providers bundling things (i.e., get National Geo, Fox News, FX, etc. as a block); it's just that I want to be the one making the choices, rather than having to give Fox money when all I want is to watch a couple of HBO channels.

Comment: Re:DDR2/3/4 (Score 1) 181

by bored (#47794089) Attached to: Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

CAS latency hasn't been measured directly in nanoseconds for some time now. It is now measured in clock cycles.

Yah, so to compare two different sticks of RAM you have to multiply the time per cycle by the number of cycles. Which gives you (wait for it...) time!

Which the parent did, to point out that all these "new" memory technologies haven't been decreasing RAM latency much at all. RAM latency is still a _VERY_ important part of overall execution performance, particularly for single-threaded code reading RAM in unpredictable patterns. Cache misses are overwhelmingly the single largest optimization variable for modern applications.

Comment: Re:*drool* (Score 1) 181

by bored (#47793985) Attached to: Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

Yes, and for a desktop machine probably 90% of what I do is limited by single-threaded performance. Hence why I haven't upgraded in a while myself.

So, I do welcome faster machines. What I don't welcome is the fact that the vast majority of machines being sold today are actually _SLOWER_ than what was available a few years ago.

This happened at work: we replaced a couple of older machines that cost a fortune with a couple of newer, far less expensive ones, and the performance was actually worse.

Comment: Re:*drool* (Score 1) 181

by bored (#47793945) Attached to: Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

There are LOTS of applications outside of gaming where more speed is appreciated.

But a lot of those applications are also runnable on networked clusters. I stopped compiling code on my desktop probably 15 years ago and haven't looked back. Buying a single machine with 32 cores and a super-fast RAID, shared between a dozen or so developers, both improves individual compile times and saves a bunch of money over buying faster desktop machines for everyone. Edit the code locally, save to a network share, compile remotely.

Same thing for VMs, ray tracing, transcoding, scientific computing, etc., etc.

There are still a few "workstation"-level applications, but it's questionable whether the i7 line is more appropriate in those circumstances than just buying multi-socket Xeon configurations (which provide even more cores and memory bandwidth).

All that said, don't get me wrong, I really like my single-threaded performance, which is where I think people have been sort of missing the boat for the desktop. I.e., I would pick a dual-core machine over a 16-core one if the cores were even 2x as fast at single-threaded operations.

Comment: Re:*drool* (Score 1) 181

by bored (#47793905) Attached to: Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

It's not even about optimizing the code; it's about making choices that from the beginning cannot result in fast code. People like to focus on the overhead of JITs, GCs, hidden object copies, etc. in many "modern" languages, but frankly, while those have an effect, the mindset they bring is a worse problem.

Modern machines can lose a lot of performance to poor memory placement/allocation in a NUMA configuration, cache-line ping-ponging, and so on. These are things that are simply not controllable if your language cannot even guarantee a consistent location for the data in question.

Let's not even talk about the horrors of HTML/JavaScript/CSS/AJAX/etc.

Now, all that said, a huge percentage of applications would be "fast enough" if they were written in bash, running on an emulated x86, in JavaScript, in Firefox, on a $50 tablet, simply because even the slowest thing available today has 100x the performance of the machines of 15 years ago, which somehow managed to be useful without storing all their data in the "cloud" for the NSA to peruse.

Comment: Re:Are they available in the cloud? (Score 1) 113

by bored (#47793821) Attached to: IBM Gearing Up Mega Power 8 Servers For October Launch

The problem is that trying to build a processor without a foundry seems to be a big disadvantage. For IBM's mainframe business it's probably not a critical problem, as those parts aren't as performance intensive.

But for something like POWER, which directly competes with x86, I suspect they will have an even harder time selling their processors if they follow the AMD (or SPARC, MIPS, etc.) route. The ARM vendors seem to do fine without foundries, but the best-performing ones seem to regularly come from companies that actually have their own in-house fabs.

Comment: Re:"2-socket system" (Score 1) 113

by bored (#47760901) Attached to: IBM Gearing Up Mega Power 8 Servers For October Launch

If the workload naturally fits into more nodes of smaller size, it frequently makes sense to opt for the higher node count. There are of course different break points depending on judgment calls, but most places seem to think of two sockets as about the sweet spot.

That describes the problem I work on: the throughput scales pretty nicely as the machine size grows, but the costs of the larger machines grow much faster than their performance. So it is far more cost effective to ship a few two-socket machines with higher-clocked processors than to try to cram it all into one or two large machines.

But! While the throughput of the larger machines scales, their latency does not. In fact, for the latency-sensitive portions of our application we are far better off with smaller machines with faster RAM, faster-clocked CPUs, and closer I/O buses. At some point it's actually impossible to buy better latency than we get for just a couple grand in our mid-range machine.

Comment: Re:That ship has already sailed. (Score 1) 113

by bored (#47760651) Attached to: IBM Gearing Up Mega Power 8 Servers For October Launch

The pricing I saw a couple of months ago didn't even approach what we are paying for our machines. Sure, the machines in question _may_ have been ~30% faster, but they cost literally 4x as much.

For customers buying larger Intel-platform machines (4 sockets or more), the POWER8s are possibly competitive, but compared with the mid-range dual-socket machines it wasn't even close.

Maybe IBM has adjusted the pricing since then; they keep telling me it's going to be better than x86, but I have yet to see that for our use cases. Plus, I suspect Intel will adjust their pricing within a few months if POWER is actually competitive; they have a habit of doing that. Just taking back the 4-socket "tax" they added a few years ago when AMD stopped being competitive would probably blow a hole in IBM's model.

Comment: Re:Are they available in the cloud? (Score 4, Informative) 113

by bored (#47760487) Attached to: IBM Gearing Up Mega Power 8 Servers For October Launch

If you go to IBM conferences you will find a fair amount of talk on this very topic by third-party vendors. There are probably a dozen vendors that want to provide AS/400/iSeries cloud instances, but IBM won't let them, because it violates the terms of the IBM i license, which is tied to a hardware instance.

Plus, the whole software ecosystem piggybacks on the same idea (licensing is often based on machine capabilities). This means that even if you could rent an iSeries for an hour, it's likely your software vendor won't license you their application.

So, while it is entirely possible, IBM seems to be dragging their feet on the license issues, and the vendors seem to be stuck in a chicken-and-egg situation.

Comment: Re:Nobody else seems to want it (Score 1) 727

by bored (#47718289) Attached to: Linus Torvalds: 'I Still Want the Desktop'

Not sure what the GP actually intended, but I'm convinced that the need for the kernel and a few thousand drivers to all be simultaneously bug-free for any given "release" is a serious problem. Should your hardware experience a driver problem, you get to roll the dice again and hope the next version fixes it without breaking something else. Good luck, especially if you have a couple dozen different hardware configurations to contend with, and especially if any of them are not x86.

It's futile. The drivers and the kernel should be separate, and there should be a stable API, if not a full-blown ABI, between them. Linux has been evolving for ~20 years now; it's probably time to start trying to maintain some kind of actual kernel-mode API. That way the _USER_ can pick and choose the kernel and any given set of drivers independently of one another. If kernel X happens to be "good" but you need a driver newer than that kernel, you shouldn't have to upgrade to the latest buggy kernel just to get a driver for a more recent piece of hardware.

Android avoids this problem because the OEM spends time assuring that the driver set for their device is working and stable before shipping it. After that, the drivers are rarely upgraded for anything other than bug fixes.
