
Comment Re: Why not? (Score 1) 383

Multi-processing is only exploited when you design your system to exploit it. There was no widespread auto-parallelization of algorithms on those systems, at least not in the 80s. Certain database operations could be parallelized if you wanted to pay for it, but support on the application end was scant, IIRC.

Decimal operations on modern ia32, ia64 and ARM are implemented using 64-bit integers and are not slow at all.
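To be concrete about what "decimal via 64-bit integers" means in practice, here's a minimal scaled-integer sketch in Python. The SCALE constant and helper names are mine, and real hardware/runtime decimal support differs in detail; this just shows why the arithmetic itself is cheap.

```python
# Minimal sketch of decimal-as-scaled-integer arithmetic (two decimal places).
# Python ints are arbitrary precision, so this only illustrates the technique;
# the point is that every operation below is plain integer math.

SCALE = 100  # two decimal digits, e.g. currency in cents

def to_fixed(s: str) -> int:
    """Parse a decimal string like '19.99' into a scaled integer (1999)."""
    whole, _, frac = s.partition('.')
    sign = -1 if whole.startswith('-') else 1
    frac = (frac + '00')[:2]                 # pad/truncate to two digits
    return sign * (abs(int(whole)) * SCALE + int(frac))

def mul(a: int, b: int) -> int:
    return (a * b) // SCALE                  # rescale after multiplying

def to_str(x: int) -> str:
    sign = '-' if x < 0 else ''
    whole, frac = divmod(abs(x), SCALE)
    return f'{sign}{whole}.{frac:02d}'

total = mul(to_fixed('3.00'), to_fixed('19.99')) + to_fixed('0.05')
print(to_str(total))                         # 60.02
```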

Comment Re:Why not? (Score 1) 383

Yep. There was a time in Sweden when quite a few businesses ran on ABC machines from DIAB, networked using DNET cards running X.25 over an RS-422 token ring. Record processing was done with ISAM instructions in the excellent structured Basic that ran on those machines. It performed very well and was much easier to use than COBOL. And that was the mid-80s! For business continuity, the DNET cards were made available with an ISA interface, and the Basic interpreter was ported to PCs. That dialect of BASIC was a solid general-purpose language, fast enough to implement responsive interactive business applications in text mode. There were serious accounting and ERP systems built on it.

Comment Re: Why not? (Score 1) 383

Never mind the clunkiness of the old programming languages themselves: the entire build system and tooling support on these platforms is abysmal, as you've hinted. The modern containerized develop-test-deploy cycle is miles ahead of these legacy systems in productivity and robustness. I'm pretty damn sure that the actual performance of the old monolithic single-threaded code is not exactly stellar either, and a lot of compiled COBOL could easily be replaced with much lighter SQL and Python running on its default bytecode VM, to give one example. Those can then be scaled according to demand using modern management tools. Scaling on big iron is nice on paper, but in practice - at least to me - it feels like going back to IT trade shows in the 80s. You'll pry Docker out of my dead hands, but only after rigor mortis subsides :)
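To make the "lighter SQL and Python" point concrete, here's a toy sketch of the kind of record-processing batch job that used to be a COBOL program, done with the sqlite3 module in stock CPython. The schema and numbers are made up for the example; a real migration would obviously look different.

```python
# Toy illustration of the "SQL plus plain Python on the default bytecode VM"
# point: a batch job that aggregates invoice records.  Schema and data are
# invented for the example.
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE invoices (customer TEXT, amount_cents INTEGER, paid INTEGER);
    INSERT INTO invoices VALUES
        ('ACME',    19900, 1),
        ('ACME',     5000, 0),
        ('Initech', 75000, 1);
""")

# Let SQL do the set-oriented work; Python only formats the result.
rows = con.execute("""
    SELECT customer,
           SUM(CASE WHEN paid = 0 THEN amount_cents ELSE 0 END) AS outstanding
    FROM invoices
    GROUP BY customer
    ORDER BY customer
""")

for customer, outstanding in rows:
    print(f'{customer}: {outstanding / 100:.2f} outstanding')
```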

And this point of view doesn't come from someone with no background. I used to play on CP/CMS and MVS systems as a kid, and had a DG Eclipse in my bedroom growing up. VAX was already a breath of fresh air compared to those. I had machines running CP/M, PC-DOS, and some Swedish systems from DIAB running DNET (I still use embedded X.25 at work - it's cheap). About the only thing I remember fondly about IBM infrastructure was the Rexx scriptability; I missed Rexx on DOS and CP/M. Then early Linux came to be and I promptly forgot about the DOS nonsense :)

Comment Re:X11 vs the world (Score 1) 160

I have no idea what you mean by "poorly optimized drivers". The only thing an X server is expected to do, and do well, today as far as display drivers go is composite pixmaps generated by the painting backends in Chromium, Qt, GTK, etc. An X driver is not meant to do any drawing anymore - yes, X servers still keep the old code paths around so that some obsolete app can use the server to draw other primitives on screen, but nothing of note uses an X server that way anymore.

Given this, there's really no sense to X. Wayland with a VNC backend is all you need for remote work.

Comment Re:X11 vs the world (Score 1) 160

X is largely irrelevant. Today, X is used to do three things:

1. Push pixmaps from the application to the screen. Notice that nowhere does X get involved in doing any rendering of those pixmaps.

2. Push UI events from the user to the application, and participate, poorly, in window management.

3. Allow applications to open a windowed OpenGL context.

It's a lot of cruft that does very little, kept around only so that some obsolete pure-X11 application keeps working. Architecturally speaking, it's nonsense. For modern apps you'll get much better performance if you attach a VNC backend to your application and access it that way (Qt allows it, and I'd presume GTK can do something similar).
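As a concrete example of the VNC-backend approach (with Qt, since that's the one I know allows it): Linux builds of Qt 5.8 and later ship a "vnc" platform plugin, so you can point an app at it instead of X11 or Wayland. A hedged sketch using PySide6, assuming that plugin is present in your Qt build:

```python
# Sketch: serve a Qt application over VNC instead of X11/Wayland, using Qt's
# "vnc" platform plugin.  Assumes PySide6 and a Qt build that ships the plugin
# (Linux builds of Qt >= 5.8 generally do).
import os
import sys

# Select the VNC backend before Qt initializes; equivalent to passing
# "-platform vnc" on the command line.  By default the plugin listens on the
# standard VNC port, so any VNC viewer can connect to the app directly.
os.environ.setdefault('QT_QPA_PLATFORM', 'vnc')

from PySide6.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)
label = QLabel('Rendered by Qt, delivered over VNC - no X server involved.')
label.resize(480, 120)
label.show()
sys.exit(app.exec())
```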

Comment Re:You do need a *lot*. 1/3rd of all the land (Score 1) 238

we'd need to flood 1/3rd of the continental United States

Of course, but that's just stupid. You'd need to flood that much land because there's so little head available in most places. 700 m of head when you go underwater is nothing to scoff at, and 10 MWh per sphere is quite decent.
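Back-of-envelope, since the numbers are easy to check: the stored energy is just E = rho * g * h * V. Assuming seawater, 700 m of depth and a hollow sphere roughly 30 m across (the diameter is my assumption), you land in the right ballpark:

```python
# Rough check of the "700 m head, ~10 MWh per sphere" figures, ignoring
# pump/turbine losses.  The 30 m diameter is an assumption for illustration.
from math import pi

rho = 1025                                   # seawater density, kg/m^3
g = 9.81                                     # m/s^2
h = 700                                      # available head, m
d = 30                                       # assumed sphere diameter, m

volume = (4 / 3) * pi * (d / 2) ** 3         # ~14,000 m^3
energy_j = rho * g * h * volume              # E = rho * g * h * V
print(f'{energy_j / 3.6e9:.0f} MWh ideal')   # ~28 MWh before losses; with
# round-trip efficiency and usable-volume limits, ~10-20 MWh per sphere is
# the right order of magnitude.
```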

Comment Re:Interviews need training, too (Score 1) 1001

In the absence of specialized instructions, a lookup table might work best on an 8/16-bit microcontroller with no cache. On anything more advanced than that, avoiding memory accesses might save you enough time to do more computation and fewer look-ups. Another thing people routinely forget is that basic big-O notation by design only counts computations, not memory accesses. The multi-layered memory system in any modern CPU can give you lots of computation at low cost, but only if you can keep the compute units fed with data and stream their output back out. When those I/O paths block, the computational efficiency of the platform can drop by multiple orders of magnitude - if you've only got a thousand elements, an O(n^2) algorithm with a slowly evolving working set may perform better than an O(n log n) one that reliably forces log n page fetches from disk each time.
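To illustrate the table-vs-compute trade-off (using bit counting as a stand-in problem, my choice of example): on a machine with fast arithmetic and a deep memory hierarchy the pure computation can beat the table, while on a tiny cache-less MCU the 256-entry table usually wins.

```python
# Lookup-table vs. recompute, with bit counting as the stand-in problem.
# Python timings only show the structure of the comparison, not real
# hardware behavior; the trade-off itself is a property of the memory system.
import timeit

# 256-entry table: one memory access per byte processed.
TABLE = bytes(bin(i).count('1') for i in range(256))

def popcount_table(x: int) -> int:
    count = 0
    while x:
        count += TABLE[x & 0xFF]
        x >>= 8
    return count

def popcount_compute(x: int) -> int:
    # Pure computation, no table lookups (Kernighan's bit-clearing trick).
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count

x = 0xDEADBEEFCAFEBABE
assert popcount_table(x) == popcount_compute(x)
for fn in (popcount_table, popcount_compute):
    print(fn.__name__, timeit.timeit(lambda: fn(x), number=100_000))
```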

Comment Re:I could not agree more (Score 1) 1001

It all depends. I know that if you call any of GDI's filled-polygon/filled-shape APIs, everything is at the very least tessellated in software; then, if the hardware can render a batch of triangles more or less instantaneously, GDI may pass the triangles on, but I doubt that path has been maintained for any recent hardware. It requires very low overhead for starting a "job" on the hardware and is mostly suited to the old style of graphics hardware that could fill a list of triangles without invoking an entire 3D pipeline (if it had one). That used to be a thing in the late 90s and early 00s. For 99%+ of common display architectures out there now, GDI is not really accelerated beyond blitting.

If you use any graphics library that doesn't depend on GDI, you can probably do better even with software rasterization, since you have enough knowledge about the job to split it up and parallelize it. Or you simply pass it as a draw list to DirectX and let that do it orders of magnitude faster - but again, GDI can't, since its stateful architecture is a poor fit for modern asynchronous rendering pipelines.
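A toy sketch of the "split the job and parallelize it" point, nothing to do with GDI or DirectX specifically: render a frame in horizontal bands across worker processes. The dimensions and the gradient "shader" are made up; a real software rasterizer would partition its scanline fill the same way.

```python
# Sketch of band-parallel software rasterization: each worker process fills
# a horizontal slice of the frame, and the slices are stitched back together.
# The shade() step is a stand-in for a real scanline fill.
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT, BANDS = 640, 480, 4

def render_band(band: int) -> tuple[int, bytes]:
    """Rasterize rows [y0, y1) of a simple vertical gradient."""
    y0 = band * HEIGHT // BANDS
    y1 = (band + 1) * HEIGHT // BANDS
    rows = bytearray()
    for y in range(y0, y1):
        shade = (255 * y) // (HEIGHT - 1)
        rows += bytes([shade]) * WIDTH          # one grey byte per pixel
    return band, bytes(rows)

if __name__ == '__main__':
    framebuffer = bytearray(WIDTH * HEIGHT)
    with ProcessPoolExecutor(max_workers=BANDS) as pool:
        for band, pixels in pool.map(render_band, range(BANDS)):
            offset = band * HEIGHT // BANDS * WIDTH
            framebuffer[offset:offset + len(pixels)] = pixels
    print(f'rendered {len(framebuffer)} pixels in {BANDS} bands')
```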

Comment Re:Perhaps a better method... (Score 1) 1001

Yeah: is it really so much to ask of an experienced developer to prove that they can do code reviews? And if someone can't review code without an IDE, they're handicapped; the question is whether their other qualities make that handicap worth accepting. Perhaps, as a means of self-development, everyone should spend some time on Stack Overflow and Code Review learning to spot mistakes in what others write - if they don't already do code reviews in their current job.
