Comment Re:FIXED THAT FOR YOU (Score 1) 126

Frak that. Forget about PowerPC. Let's take it way back.

IBM should have chosen the 68000 for the PC.

How would starting the 32-bit age in 1981 sound to you?

While the 68K had a 24-bit address bus and a 16-bit data bus, all of the internal registers were 32-bit, aside from the CCR. That meant any non-droolingly-retarded code would run just fine when the 68020 was released with complete and total 32-bit capability.

If the cost was too high (and that's utter BS right there; we're talking about an $80 part in a $5,000 heap of shit), Motorola did release the 68008 later on, which was more in line with the 8088 that IBM did select: 20 bits of address bus and 8 bits of data bus. Still 32-bit inside*.

NB: The Macintosh was an example of droolingly-retarded code; the original ROM was NOT 32-bit clean. Motorola explicitly warned developers in the 68K literature that the upper 8 bits of the address registers would be connected to address lines in the future. Commodore and Atari (er, I think) listened. Apple did not. Some game developers did not.
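
For the youngsters, the trick in question looked something like the sketch below. This is a hypothetical illustration (the flag names are invented and 32-bit pointers are assumed), not Apple's actual Memory Manager code:

    #include <stdint.h>

    /* "Not 32-bit clean" in a nutshell: on a 68000 only A0-A23 reach the
       pins, so the top byte of an address looks like free storage for flags.
       FLAG_LOCKED/FLAG_PURGEABLE are made-up names for illustration. */
    #define FLAG_LOCKED    0x80000000u
    #define FLAG_PURGEABLE 0x40000000u

    static void *tag_ptr(void *p, uint32_t flags) {
        return (void *)((uintptr_t)p | flags);        /* "works" on a 68000 */
    }

    static void *untag_ptr(void *p) {
        return (void *)((uintptr_t)p & 0x00FFFFFFu);  /* must mask before dereferencing */
    }

    /* On a 68020's full 32-bit bus, dereferencing a tagged pointer without
       masking hits the wrong address. Exactly what Motorola warned about. */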

640k of segmented memory can bite my shiny metal ass.

(* = Raving Intel fanbois often point out that the original 68000 lacked a 32x32 multiply and such, and that the ALU really only processed 16 bits at once in most cases. That's generally irrelevant, as the 68000 was matched not against the 80386 but against the 8086/8088. Also, the 16x16 multiply produced a 32-bit result, which could be used to build a 32x32=32-bit answer, which is all that 99% of high-level languages can handle anyways. This was addressed with the '020, which offered a 32x32=64-bit multiply, still before the 386 hit the market. The ALU issue was even less important, as it was utterly invisible even to low-level programmers. Again, the release of the '020 in 1984 fixed that too.)
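
To spell out the multiply point, here's a minimal C sketch of how a compiler could synthesize a 32x32=32 multiply out of the 68000's 16x16=32 MULU. The a_hi*b_hi term shifts entirely past bit 31, so it never needs to be computed:

    #include <stdint.h>

    /* 32x32 -> low 32 bits, built from three 16x16 -> 32 multiplies (MULUs). */
    static uint32_t mul32_via_mulu(uint32_t a, uint32_t b) {
        uint32_t a_lo = a & 0xFFFFu, a_hi = a >> 16;
        uint32_t b_lo = b & 0xFFFFu, b_hi = b >> 16;

        uint32_t lo  = a_lo * b_lo;                /* full 32-bit partial product */
        uint32_t mid = a_lo * b_hi + a_hi * b_lo;  /* cross terms */
        return lo + (mid << 16);                   /* a_hi*b_hi would land past bit 31 */
    }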

Comment Re:As a switcher and a switcher. (Score 1) 1880

Cairo's backend on Windows is GDI. Non-GDI backends are being considered, but they're listed as 'in development'.

Otherwise, why is it consuming 800 GDI handles right now? I'll be totally laughing my ass off if it turns out that DirectWrite/Direct2D et al. are really just overlays on classic GDI.
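
If you want to check your own processes, here's a minimal sketch using Win32's GetGuiResources, which reads the same counter as Task Manager's "GDI Objects" column:

    #include <windows.h>
    #include <stdio.h>

    /* Print the current process's GDI and USER handle counts. For another
       process, pass a handle from OpenProcess(PROCESS_QUERY_INFORMATION, ...). */
    int main(void) {
        DWORD gdi  = GetGuiResources(GetCurrentProcess(), GR_GDIOBJECTS);
        DWORD user = GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS);
        printf("GDI handles: %lu, USER handles: %lu\n", gdi, user);
        return 0;
    }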

My system works fine and scores in the 7.x range on the Windows Experience Index. Under XP, GDI apps are snappy and fast--eight times faster than under 7, I'd estimate. Under Windows 7, it's roughly the same speed as my A3000 (030 @ 25 MHz). It's a heck of a lot prettier, but if I wanted pretty, I'd get a damn Macintosh.

BTW, the slowness isn't quite as apparent if you have animations on; the 250 ms+ animations hide the fact that it's taking 80-125 ms to draw that screen of text. Compare a freshly installed Windows 7 (or Vista...) vs. a freshly installed Windows XP, running current drivers on the same hardware, with the animation crap turned off. You'll note that XP is faster in all GUI cases, save possibly for a few relatively unimportant things like solid window dragging. If I wanted pretty-- oh wait, I already said that.
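
Don't take my word for it; time it yourself. A rough sketch of the measurement I mean (the line count, string, and 16-pixel line height are arbitrary placeholders):

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    /* Time a "screenful" of GDI text into the given device context.
       GdiFlush() only flushes the GDI batch, so treat the result as a
       first-order number, not a rigorous benchmark. */
    void time_text_screen(HDC hdc) {
        const char *msg = "The quick brown fox jumps over the lazy dog";
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        for (int line = 0; line < 50; line++)
            TextOutA(hdc, 0, line * 16, msg, (int)strlen(msg));
        GdiFlush();
        QueryPerformanceCounter(&t1);
        printf("%.1f ms\n",
               1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);
    }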

I'd like to think there's been progress since 2001... or 1989. I'm constantly disappointed.

If this were Linux, I wouldn't mind so much, as it's free in every meaning of the word. But I paid 300 dollars for this crap -- it needs to be better than XP in every way.

I also don't like hearing this crap about APIs being deprecated. The PC's strength over the other, superior platforms was that it had the largest software base AND backwards compatibility. Those two factors were related, and they proved decisive. We really WILL see a "post-PC" world if Microsoft forgets this.

Comment Re:When are multiple cores going to help me? (Score 1) 189

Yeah, they do in image editing.

However, there will always be things that must be done in series, and always a maximum speed-up you can get from multiprocessing (Amdahl's Law comes to mind; see the sketch below). Plus, you'll often hit other bottlenecks, especially if you have an obscene number of cores: memory, disk, video, network...
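
For anyone who hasn't run the numbers, Amdahl's Law is brutal. A quick C sketch, with p as the parallelizable fraction:

    #include <stdio.h>

    /* Amdahl's Law: speedup on n cores = 1 / ((1 - p) + p/n). */
    static double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        for (int n = 1; n <= 64; n *= 2)
            printf("%2d cores: %.2fx\n", n, amdahl(0.9, n));
        return 0;   /* even 90%-parallel code tops out below 10x */
    }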

Memory has been a problem ever since the 6502 era. Even single-core systems splat into the performance barrier that is main memory.

I'd rather have a single-core system that's 8x faster than an 8-core system. However, it's my belief that we're seeing these crazy core-count increases not because they're the best route to better performance, but because the CPU makers are hitting walls (or at least massive difficulties) with the traditional speed increases (MHz/IPC/branch prediction accuracy/etc.).

Intel engineer: Our new architecture, with the die shrink, is about five percent faster...
Intel manager: How are we going to sell that to people for $300-1200??
Intel engineer: Well, we COULD put two/four/six/eight of them into a single die, as they're much smaller than before..
Intel manager: Do it!!
Intel engineer: Sir, it would cost us thirty or forty percent more due to--
Intel manager: Nobody's going to buy a 5% increase without this! DO IT NOW!!

Comment Re:As a switcher and a switcher. (Score 1) 1880

It's definitely clean. It's been re-installed within the last six months (new drive, trying to get the hard drive score above 5.9...); it's a fully legit copy from Microsoft, from a big-box retailer, holograms and certificates of authenticity and everything, protected by Avast and Windows Defender. My browsing habits are conservative and safe, and my installation habits are reactionary as all heck. (Yeah, I'm one of those guys who actually checks MD5/SHA/other hashes for downloads when possible, and I install a very minimalistic set of software.)
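
(For the non-hash-checkers: recent Windows versions can do it without extra software, e.g. "certutil -hashfile setup.exe SHA256", then compare against the vendor's published digest; sha256sum does the same job on Linux. The filename is just an example.)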

The motherboard is a high-end model from Asus, and the RAM has been memtested to heck and back. It's quality Corsair memory too. This thing performs like a dream under Windows XP (minus most of the memory being totally unavailable--Microsoft does support PAE in Windows, but they only enable it for a select few versions because they're jerkwads); it isn't until Windows 7 gets involved that the performance issues appear.

The issues are mostly with desktop-type, GDI-based apps. Most Direct3D or OpenGL software (including my own) seems to run just as well, or at least close enough not to notice the difference. An old DirectDraw app that I wrote (which still works after something like twelve years with no changes aside from setting a new resolution--commercial devs can suck it with their crapware that works on DirectX 8 and fails totally on DirectX 9) seems to take a bit of a performance hit, but it can't be more than 30%. And it's using DirectX 3 mostly (with a touch of 5 to set refresh rates), so I can understand Microsoft/nVidia/et al. not wanting to spend all of their time optimizing for a decades-old API that you can't even compile against anymore with current DirectX SDKs.

Low-display-usage, compute-only things run just as well under 7 as under XP. Well, single-threaded ones anyways. I've heard that the Windows 6.x family has some scheduling issues compared to 5.x; see MadBoris' benchmarks in the Supreme Commander forums. I don't have much compute-heavy stuff that's heavily threaded, though, aside from SupCom.

Comment Re:As a switcher and a switcher. (Score 1) 1880

To Linux-based ones, perhaps. X is starting to look fast these days...

None of them have upgrade paths to the new APIs, and I highly doubt they're significantly faster. "Useless Mac-style pretty" seems to be the order of the day. I bet they can do much more in terms of flashy features, but that's not very comforting when all I need is colored fraking text for syntax highlighting.

Questions:
- How much faster are these APIs in hard figures?
- Or, if they aren't, how much SLOWER are they?
- What apps make use of them so that I can test them out?
- Is there anything as good as SecureCRT? Or at least PuTTY?
- How about EditPlus? Is there a DirectWrite/Direct2D replacement for that?
- What about browsers? (Chrome and IE are out of the picture for unrelated reasons.)

Comment Re:As a switcher and a switcher. (Score 1) 1880

I hardly think my GTX 470 is old and crappy hardware. Nor is my i7 920. In many ways, it's faster than the latest Sandy Bridge junk (hellooooo, triple-channel memory access!).

Tabbing, scrolling, editing, inserting, etc. in Firefox, EditPlus, and SecureCRT/PuTTY is a real pain nowadays.

I'd move back to XP if I had a 64-bit version of it; having only 3 of 12 gigs available would be a bit of a fail otherwise. I'd miss some of the convenience features, but to be honest, those could have been ported to XP. Heck, there's a port of DirectX 11 for it... driver requirements my ass.

Comment Re:Wait what? (Score 1) 247

We do 25 and 50 megabits/sec in Canada for residential. Bell Fibe 25 (25 down, 7 up) and Rogers' defective 50 meg cable service (50 down, 2 up).

And as you know, we're mostly empty space up here...

You're all slackin' on laying lines south of the border!

Comment Re:As a switcher and a switcher. (Score 1) 1880

Oh yeah, that GDI+ garbage was supposed to take over from GDI. That was dog slow too. Does anybody use Direct2D? I looked over it once, and it looked way too inefficient to be of any use. How does it perform? And how about if you don't have a $700 video board?

I don't recall Microsoft ever officially deprecating classic GDI, though. I do recall a rather damning article on one of Microsoft's own tech blogs about how Vista glyph rendering was twelve times slower than XP's, and how they were putting some hardware acceleration back in for 7, but it's obviously inadequate. My laptop (Core 2 Duo T7500 w/2 gigs of RAM, XP SP3 32-bit, with a craptastic underclocked GeForce 8600M) is actually faster for text editing and general usage than my desktop (i7 920 w/12 gigs of RAM, Win 7 64-bit, now with a GeForce GTX 470).

Y'know, if I wanted 1995-era 486-with-a-dumb-ISA-video-board performance, I would have built a 486 with a dumb ISA video board. I could probably scrounge around my parts bin and do that entirely for free (minus an AT case, at least).

If Microsoft can't be bothered to keep existing APIs performing in their new operating systems, they can price those systems down to a level more appropriate to the effort they put in. Like, say, dropping a digit.

Comment Re:Honestly? (Score 1) 1880

Hey, they have a Linux version too!

I never noticed that.

SecureCRT for Linux, here I come!!

(And to Clived: PuTTY has an excellent features-and-performance to cost-and-size ratio, but that's because it's small and free. Otherwise, it's kinda like those little donut-type spare tires...)

Comment Re:20 Years Too Late (Score 1) 148

Eh, back in the day, those guys WERE the end destination anyways. You didn't connect to CompuServe to get to the Internet; you connected to CompuServe to get to its own resources.

Their logging wouldn't be any different than, say, Slashdot's. When you're accessing CompuServe's content, they probably have a right to know.

They added Internet gateways as an afterthought anyways. Real net connections didn't involve those over-sized BBSes. They involved borrowing a professor's account at York Uni... er, oops, ahem, nevermind.

Comment Re:Tower PC is here to stay. (Score 1) 559

A 22-minutes-per-episode, 24-episode season re-compressed from Blu-ray©® (no copyright infringement implied, attempted, or done) is bigger than most mobile devices can store.

The ones that can store such a season can store one and ONLY one season.

Thin clients will never, ever win. The network will become less reliable with time, not more so. Unless you're a big company that can afford a private MPLS network. We're actually pulling hardware out of overloaded public IP networks and plunking it into MPLS because, as with browsers, there's no money in the 'net, in clouds, or in Scott McNealy's and IBM's retarded five-computer vision. Especially not when you can charge $30K/mo for a PIP/MPLS E1/T1 network with five nodes.

I can't believe that I'm seeing a comment about clouds in the same place as the "How can I best destroy stacks of hard drives to sate my rampant paranoia?" discussion.

Supercomputers don't have hundreds of processors; they have tens of thousands of processors. The Japanese K computer, currently leading the TOP500, has 60,000+ processors. And you'll need one to run Windows 10. Actually, you'll need five of them. Start saving now... you'll need a bigger house.

And while many problems can't be solved by throwing more processors at them, much of the CS field thinks that most problems can be, and that for those that can't, it's still The Right Thing To Do, so you'll need SMP capability out the wazoo until they stop doing lines. And if you happen to come across things that really CAN be solved by threading, you'll need a beefy machine with lots of cores to gnaw through the latest problems (like video encoding, image editing, nuclear explosion modelling, and of course Notepad); see the toy sketch below.
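
And for the record, when a problem really is embarrassingly parallel, the code is almost insultingly simple. A toy sketch with POSIX threads, with an array sum standing in for "one slice of frames per core" (the Win32 version is the same shape with CreateThread):

    #include <pthread.h>
    #include <stdio.h>

    #define N       1000000
    #define THREADS 8

    static double data[N];
    static double partial[THREADS];

    /* Each thread sums its own slice; no sharing, so it scales near-linearly. */
    static void *worker(void *arg) {
        long id = (long)arg, chunk = N / THREADS;
        double sum = 0.0;
        for (long i = id * chunk; i < (id + 1) * chunk; i++)
            sum += data[i];
        partial[id] = sum;
        return NULL;
    }

    int main(void) {
        pthread_t t[THREADS];
        double total = 0.0;
        for (long i = 0; i < N; i++) data[i] = 1.0;
        for (long i = 0; i < THREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (long i = 0; i < THREADS; i++) {
            pthread_join(t[i], NULL);
            total += partial[i];
        }
        printf("sum = %.0f\n", total);   /* expect 1000000 */
        return 0;
    }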

Also, I doubt there will be any significant advancement past 22 nm. Maybe a cycle or two more, and then Moore's Law dies and manufacturers start making up more BS to move what's basically the same model as last year.

Information state changes require energy and will always produce waste heat. Even if you could make a 600 W CPU fit inside a wristwatch, would you really want to wear it? I guess it wouldn't matter in the end, though, as it would eat through a 3.7 V, 1500 mAh battery (roughly the size used in high-end phones) in what... 33.3 seconds?
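
(The back-of-the-envelope version: 3.7 V × 1.5 Ah = 5.55 Wh, or about 19,980 joules; 19,980 J / 600 W ≈ 33.3 seconds.)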
