No, it means that Windows has a 100 ns granularity on its timestamps.
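For the curious, that granularity comes from the FILETIME format, which counts 100 ns ticks since 1601-01-01 UTC. A minimal sketch of converting one to a Unix timestamp (the function name is mine, not a Windows API):

```c
#include <stdint.h>

/* Windows FILETIME stores timestamps as 100 ns ticks since 1601-01-01 UTC.
 * The offset between the 1601 epoch and the 1970 (Unix) epoch is
 * 11644473600 seconds. */
int64_t filetime_to_unix_seconds(uint64_t ticks_100ns)
{
    return (int64_t)(ticks_100ns / 10000000ULL) - 11644473600LL;
}
```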
And this, boys and girls, is how we end up with Windows 7/64 guzzling two gigs of memory after start-up.
Not by this one isolated idea, but the very concept of "meh it doesn't cause a problem" snowballing until it IS a problem.
I drafted up a mini-essay assuming it was C-style code, but the article is talking about methods. Clearing out half of the methods means that those virtual method tables are now half the size, which will result in much snappier execution. Fewer cache misses, less trash in the cache lines, shorter hash collision tables, it's all good stuff!
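To make the vtable point concrete, here's a hypothetical sketch in C (the struct and method names are made up): a vtable is just a table of function pointers, one slot per virtual method, so halving the method count halves the table.

```c
#include <stddef.h>

/* Hypothetical "before" vtable: four virtual methods, four slots. */
struct widget_vtbl_full {
    void (*draw)(void *self);
    void (*resize)(void *self);
    void (*focus)(void *self);
    void (*blur)(void *self);
};

/* "After" clearing out half the methods: half the slots, and half the
 * cache footprint every time dispatch touches the table. */
struct widget_vtbl_trimmed {
    void (*draw)(void *self);
    void (*resize)(void *self);
};
```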
Never mind all of the benefits of faster loading times, less address exhaustion, etc. that apply to ANY language.
No, they don't count as mobile devices. How many TVs do you need anyways? Time to get a bit of exercise, you can walk to the living room!
Also your car doesn't need an IP address. Period. End of story.
I bundle Phone/Tablet/Kindle into "3-generation-old-iphone". It's kinda small, but I have mild myopia (about -1.75 diopters) so it's easy for me to read e-books on it.
I have a PSP, but there really isn't much out for it that I haven't finished already, and the 3-generation-old-iphone is sorta absorbing that functionality. I'm also rather disillusioned with console-y things anyways.
I don't drive the car at all (it's literally without a valid license plate due to me not caring), and my phone does GPS (which I don't need generally anyhow), so the phone's doing four to five things by itself now.
So I basically have:
- PSP (retired)
- laptop (which barely qualifies as mobile, it's basically a desktop replacement unit)
Only the smartphone has any vague need of anything that isn't RFC1918 space, and 3/4 of the time, it's in that space too. The only thing I ever did with the PSP with the 'net was downloading updates..and games come with those anyhow, since it's part of their DRM system...
It's rare for me to need more than 1 IP for any of those at any given moment in time, and they're usually fine with NATted IPv4 space.
50 billion mobile devices? How much of this will end up as landfill? Does everybody REALLY need seven mobile devices?
Also, I'd feel a lot better about IPv6 if there weren't quite so many RFCs associated with it. The more complex a standard is, the more room there is for security holes, bugs, and non-conforming implementations... Is the second system effect going to bite us in the ass really hard?
Well, maybe we WILL need seven devices, just to load the new stack once..
Frak that. Forget about PowerPC. Let's take it way back.
IBM should have chosen the 68000 for the PC.
How would starting the 32-bit age in 1981 sound to you?
While the 68K had a 24-bit address bus and a 16-bit data bus, all of the internal registers were 32-bit, aside from the CCR. That meant any code that wasn't brain-dead would run just fine when the 68020 was released with complete and total 32-bit capabilities.
If the cost was too high (and that's utter BS right there; we're talking about an $80 part in a $5,000 heap of shit), Motorola did release the 68008 later on, which was more in line with the 8088 that IBM did select: 20 bits of address bus and 8 bits of data bus. Still 32-bit inside*.
NB: The Macintosh was a case in point; the original ROM was NOT 32-bit clean. Motorola explicitly warned developers in the 68K literature that the upper 8 bits of the address registers would be connected to address lines in the future. Commodore and Atari (er, I think) listened. Apple did not. Some game developers did not.
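The "not 32-bit clean" trick looked something like this hypothetical sketch (the macro and function names are mine): with only 24 address lines wired on the 68000, the top byte of a pointer was ignored by the hardware, so code could stash flags there "for free".

```c
#include <stdint.h>

/* On a 68000, only the low 24 bits of an address reach the bus, so the
 * top byte of a 32-bit address register is "free" storage... for now. */
#define ADDR_MASK   0x00FFFFFFu
#define FLAG_LOCKED 0x80000000u   /* hypothetical flag hidden in the top byte */

uint32_t tag_locked(uint32_t addr)  { return (addr & ADDR_MASK) | FLAG_LOCKED; }
uint32_t strip_tag(uint32_t tagged) { return tagged & ADDR_MASK; }

/* On a 68020, all 32 address lines are decoded: dereferencing a tagged
 * pointer without strip_tag() hits a completely different address. */
```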
640k of segmented memory can bite my shiny metal ass.
(* = Raving Intel fanbois often point out that the original 68000 lacked 32x32 multiply and such, and that the ALU was really only processing 16 bits at once in most cases. That's generally irrelevant, as the 68000 was not matched against the 80386 but against the 8086/8088. Also, the 16x16 multiply produced a 32-bit result, which could be used to build a 32x32=32-bit answer, which is all that 99% of high-level languages can handle anyway. This was addressed with the '020, which offered a 32x32=64-bit multiply, still before the 386 hit the market. The ALU issue was even less important, as it was utterly invisible even to low-level programmers. Again, the release of the '020 in 1984 fixed that too.)
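The claim in the footnote — that 16x16→32 multiplies are enough to build a 32x32=32-bit product — can be sketched in C. The ah*bh term would shift entirely past bit 31, so it never affects a 32-bit result:

```c
#include <stdint.h>

/* Build the low 32 bits of a 32x32 product out of 16x16->32 multiplies,
 * the way a compiler targeting a 68000 would. */
uint32_t mul32_via_16(uint32_t a, uint32_t b)
{
    uint32_t al = a & 0xFFFFu, ah = a >> 16;
    uint32_t bl = b & 0xFFFFu, bh = b >> 16;
    uint32_t cross = al * bh + ah * bl;   /* contributes from bit 16 upward */
    /* ah * bh would land at bit 32 and above: dropped for a 32-bit answer. */
    return al * bl + (cross << 16);
}
```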
Cairo's backend for Windows is GDI on Win32. Non-GDI versions are being considered but they're listed as 'in development'.
Otherwise, why is it consuming 800 GDI handles right now? I'll be totally laughing my ass off if it turns out that DirectWrite/Direct2D et al. are really just overlays on classic GDI.
My system works fine, and scores in the 7.x range on the Windows Experience Index. Under XP, GDI apps are snappy and fast (8 times faster, I'd estimate). Under Windows 7, it's roughly the same speed as my A3000's 030 @ 25 MHz. It's a heck of a lot prettier, but if I wanted pretty, I'd get a damn Macintosh.
BTW, the slowness isn't quite as apparent if you have animations on; the 250ms+ animations hide the fact that it's taking 80-125ms to draw that text screen. Compare a freshly installed Windows 7 (or Vista..) vs. a freshly installed Windows XP, running current drivers on the same hardware, with animation crap turned off. You'll note that XP is faster in all GUI cases, save for possibly a few relatively unimportant things like solid window dragging. If I wanted pretty-- oh wait, I already said that.
I'd like to think there's been progress since 2001..or 1989. I'm constantly disappointed.
If this were Linux, I'd not mind so much, as it's free in every meaning of the word. But I paid 300 dollars for this crap; it needs to be better than XP in every way.
I also don't like to hear this crap about APIs being deprecated. The PC's strength over the other, superior platforms was that it had the largest software base AND backwards compatibility. Those two factors were related, and proved to be very decisive. We really WILL see a "post PC" world if Microsoft forgets this.
And if the address lines aren't all connected?
Yeah, they do in image editing.
However, there will always be things that must be done in series, and always a maximum speed-up you can get from multiprocessing. (Amdahl's Law comes to mind) Plus, you'll often hit other bottlenecks, especially if you have an obscene number of cores. Memory, disk, video, network..
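Amdahl's Law, mentioned above, is easy to state in code: if p is the fraction of the work that parallelizes, n cores give you a speedup of 1 / ((1 - p) + p/n), capped at 1/(1 - p) no matter how obscene the core count gets.

```c
/* Amdahl's Law: speedup from running the parallel fraction p of a
 * workload on n cores. The serial fraction (1 - p) never shrinks,
 * so the speedup is capped at 1 / (1 - p) as n grows. */
double amdahl_speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}
```

For example, a workload that's 50% parallel tops out below 2x even with infinite cores, which is exactly the "maximum speed-up" ceiling the comment is talking about.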
Memory has been a problem ever since the 6502 era. Even single-core systems splat into the performance barrier that is main memory.
I'd rather have a single-core system that's 8x faster than an 8-core system. However, it's my belief that we're seeing crazy core-count increases not because it's the best way to better performance, but because the CPU makers are hitting walls (or at least massive difficulties) with traditional speed increases (MHz/IPC/branch prediction accuracy/etc.).
Intel engineer: Our new architecture, with the die shrink, is about five percent faster...
Intel manager: How are we going to sell that to people for $300-1200??
Intel engineer: Well, we COULD put two/four/six/eight of them into a single die, as they're much smaller than before..
Intel manager: Do it!!
Intel engineer: Sir, it would cost us thirty or forty percent more due to--
Intel manager: Nobody's going to buy a 5% increase without this! DO IT NOW!!
It's definitely clean. It's been re-installed within the last six months (new drive, trying to get the hard drive score above 5.9...), it's a fully legit copy from Microsoft, from a big-box retailer, holograms and certificates of authenticity and everything, protected by Avast and Windows Defender. My browsing habits are conservative and safe, and my installation habits are reactionary as all heck. (Yeah, I'm one of those guys who actually checks MD5s/SHA/other hashes when possible for downloads and I have a very minimalistic set of software I install).
Motherboard is a high end model from Asus, RAM has been memtested to heck and back. It's quality Corsair memory too. This thing performs like a dream under Windows XP (minus most of the memory being totally unavailable--Microsoft does support PAE for Windows, but they only enable it for a select few versions because they're jerkwads), it isn't until Windows 7 becomes involved that issues happen performance-wise.
The issues are mostly with desktop-type, GDI-based apps. Most Direct3D or OpenGL software (including my own) seems to run just as well, or at least close enough not to notice the difference. An old DirectDraw app that I wrote (which still works after something like twelve years with no changes aside from setting a new resolution; commercial devs can suck it with their crapware that works on DirectX 8 and fails totally on DirectX 9) seems to take a bit of a performance hit, but it can't be more than 30%. And it's using DirectX 3 mostly (with a touch of 5 to set refresh rates), so I can understand if Microsoft/nVidia/et al. don't want to spend their time optimizing for a decades-old API that you can't even compile against anymore with current DirectX SDKs.
Low-display-usage, compute-only things are running just as well under 7 as XP. Well, single-threaded ones anyways. I've heard that the Windows 6.x family has some scheduling issues relative to 5.x; see MadBoris's benchmarks in the Supreme Commander forums. I don't have much compute-heavy stuff that's heavily threaded though, aside from SupCom.
I did turn off Aero glass; it didn't seem to help. In fact, it seemed even slightly slower.
I'm using current drivers from nVidia specifically for Win 7/x64. Those aren't XP-compatible ones... are they?
To Linux-based ones, perhaps. X is starting to look fast these days...
None of them have upgrade paths to the new APIs, and I highly doubt they're significantly faster. "Useless mac-style pretty" seems to be the order of the day. I bet they can do much more in terms of flashy features, but that's not very comforting when all I need is colored fraking text for syntax highlighting.
- How much faster are these APIs in hard figures?
- Or, if they aren't, how much SLOWER are they?
- What apps make use of them so that I can test them out?
- Is there anything as good as SecureCRT? or at least Putty?
- How about Editplus? Is there a DirectWrite/Direct2D replacement for that?
- What about browsers? (Chrome and IE are out of the picture for unrelated reasons)
I hardly think my GTX 470 is old and crappy hardware. Nor is my i7 920. In many ways, it's faster than the latest Sandy Bridge junk (hellooooo triple-channel memory access!).
Tabbing, scrolling, editing, inserting, etc in Firefox, Editplus, and SecureCRT/putty is a real pain nowadays.
I'd move back to XP if I had a 64-bit version of it. Having 3 of 12 gigs available would be a bit fail otherwise. I'd miss some of the convenience features, but to be honest, those could have been ported to XP. Heck, there's a port of DirectX 11 for it... driver requirements my ass.
We do 25 and 50 megabits/sec in Canada for residential. Bell Fibe 25 (25 down, 7 up) and Rogers' defective 50 meg cable service (50 down, 2 up).
And as you know, we're mostly empty space up here...
You're all slackin' on laying lines south of the border!
Oh yeah, that GDI+ garbage was supposed to take over from GDI. That was dog slow too. Does anybody use Direct2D? I looked over it once, and it looked way too inefficient to be of any use. How does it perform? And how about if you don't have a $700 video board?
I don't recall Microsoft ever officially deprecating classical GDI though. I do recall a rather damning article on one of Microsoft's own tech blogs about how Vista glyph rendering was twelve times slower than XP's, and how they were putting some hardware acceleration back in for 7, but it's obviously inadequate. My laptop (Core 2 Duo T7500 w/2 gigs of RAM, XP SP3 32-bit, with a craptastic underclocked GeForce 8600M chip) is actually faster for text editing and general usage than my desktop (i7 920 w/12 gigs of RAM, Win 7 64-bit, now with a GeForce GTX 470).
Y'know, if I wanted to have 1995-era 486-with-a-dumb-isa-video-board performance, I would have built a 486 with a dumb ISA video board. I could probably scrounge around my parts bin and do that entirely for free (minus an AT case, at least).
If Microsoft can't be bothered to keep up existing APIs in their new operating systems, they can price them down to a level more appropriate to the effort they put in. Like say, dropping a digit.