The DVI specification mandates a maximum pixel clock frequency of 165 MHz when running in single-link mode. With a single DVI link, the highest supported standard resolution is 2.75 megapixels (including blanking interval) at 60 Hz refresh.
Dual-link DVI doubles the bandwidth, but that is still nowhere near enough for a 4k*4k display at 60 Hz.
I have first-hand experience of this from driving T221 monitors (which are just under ten megapixels). Over a dual-link DVI connection only about 30 Hz refresh is possible, even if you overclock the link beyond the spec.
As for analogue VGA connectors, there is no defined limit, but basic signal-processing physics limits how many pixels you can push down the wire. In practice, even with the highest-quality 0.5 metre cable I could find, the picture at a mere 1920x1080 is noticeably worse over analogue cabling than over DVI. That might be down to the A-to-D converter in the monitor rather than a limitation of the cable or graphics card, but making A-to-D converters capable of handling such a large bandwidth, together with the higher-spec cabling required, would be very expensive, much more so than using a digital interface such as DisplayPort.
Can your average onboard video card drive monitors at that resolution?
Yes, without any difficulty. It's 2012. Unless you want to play 3D games - in that case, just drop down to a lower resolution to play your game fullscreen, and go back to normal res when you exit.
Obsessive 'gamers' who want to play the latest titles at maximum resolution and maximum refresh are very much in the minority, and they have always tended to buy separate video cards anyway.
Conditional execution is nice, but it really interferes with modern microarchitectures. A typical ARMv8 core is a fully speculative, out-of-order design with register renaming.
Conditional execution lets you avoid a test and jump. If you rewrite code to have conditional jumps instead of conditional execution, there are still just as many code paths for the speculative execution to worry about. But I am not a chip designer so there may be some reason why it's easier.
I do wonder whether speculative out-of-order execution is truly the 'modern' way, though. For single-threaded code, certainly. But if your system is going to be multicore anyway, it might be better to spend the silicon on two simpler, non-speculative cores rather than one more complex one.
My favorite thread was when the US/UK attacked Iraq in '98 and an author posted how bad and illegal it was and closed the thread to comments.
I think this is the best wax figure ever.
Until a new slightly taller and lighter wax figure is released next year.
The other scenario, which I believe the article is talking about, is where bricks are trading at $1.00 and, without high-frequency trading, you might be able to buy ten of them for a dollar each. Not by putting in an order for ten at once - that would push the price up - but by being a bit stealthy and buying only one or two at a time. Now, with high-frequency traders, somebody will notice that you are buying one or two bricks and guess that you're likely to buy more. They buy some, pushing the price up, hoping to offload them to you later. But not all of the difference in price is creamed off by the high-frequency trader; most of it goes to the original seller. So instead of buying ten bricks at $1.00 each, you pay $1.10 each, the high-frequency trader skims off $0.01, and the seller receives $1.09. So the seller, who might also be an 'ordinary investor', gets a better price for the bricks he is selling.
We always imagine that there is some magic way to interpose yourself in transactions and take a cut, but markets don't work like that. The seller would not bother to trade with the high frequency people unless they were offering at least a slightly better price than he would have got otherwise.
LOL - Oh hell yes.
"If I do not want others to quote me, I do not speak." -- Phil Wayne