I find here in the UK the DAB stations often sound worse than their FM equivalents, thanks to an antiquated codec (MP2!). DAB+ was supposed to fix this by using AAC+, but that doesn't seem to have been deployed here. Backwards compatibility issues I guess.
Maybe according to the strict F95 standard it's not allowed, but I've done it. I think it was officially introduced in F2003 (or some TR that I forget) but compilers supported it even before then. Similarly for allocatable dummy arguments to subroutines.
Not to say there aren't some weird quirks with Fortran arrays, like how if you pass an allocatable array to a subroutine where the dummy argument is not allocatable then it *must* be allocated, even if the subroutine isn't going to touch it.
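A minimal sketch of that quirk (names invented), assuming the usual assumed-shape dummy:

```fortran
program quirk
  implicit none
  real, allocatable :: a(:)
  ! a is NOT allocated here. Passing it to sub is invalid,
  ! even though sub never reads or writes the array.
  ! allocate (a(10))   ! uncommenting this line makes the call legal
  call sub(a)
contains
  subroutine sub(x)
    real, intent(in) :: x(:)   ! ordinary (non-allocatable) dummy
    print *, 'got an array of size', size(x)
  end subroutine sub
end program quirk
```

If the dummy were declared allocatable as well, passing an unallocated actual would be fine (and the subroutine could even allocate it for you).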
It's a hardware thing -- the memory bus and memory read/write speeds are still limiting factors, particularly as CPU cores get faster and more efficient.
Oh yes, I've seen plenty of code that's limited by memory bandwidth. But I don't think that's what's going on here - simply deallocating and reallocating shouldn't actually touch all of the memory in question, should it?
Fortunately for that kind of code, avoiding such reallocations isn't difficult.
Does it still win with dynamic memory allocation? How granular is the dynamic memory allocation? As complete as C's?
Fortran's dynamic memory allocation is much easier to work with than C's. You simply declare a variable allocatable, then allocate as needed with the appropriate size. It automatically gets deallocated when it falls out of scope, so no memory leaks (at least since F95).
allocate (myarray(1000), stat=ierr)
if (ierr /= 0) stop 'allocation of myarray failed'
I've written a bit of finite difference code in Fortran. Repeatedly allocating and deallocating can give a huge performance hit, so I tend to do all my allocations before the main loop. Not entirely sure why the penalty is so big, but it seems to be - these are allocations of hundreds of MB or even a few GB, so the cost of operations done on the arrays should dwarf the cost of the allocation. Unless there's some underlying reason why touching newly allocated memory is so slow, but I don't know enough about how virtual memory behaves to say.
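To make the pattern concrete, here's roughly what I mean (array names and sizes invented, and the update is a trivial stand-in for a real finite difference stencil):

```fortran
program preallocate
  implicit none
  integer, parameter :: n = 100000000   ! ~400 MB of default reals
  real, allocatable :: u(:), unew(:)
  integer :: step, ierr

  ! allocate once, up front, NOT inside the time loop
  allocate (u(n), unew(n), stat=ierr)
  if (ierr /= 0) stop 'allocation failed'
  u = 0.0

  do step = 1, 1000
     unew = 0.5*u   ! stand-in for the actual FD update
     u = unew       ! reuse the same memory every iteration,
                    ! instead of deallocating and reallocating
  end do
end program preallocate
```

Same total work on the arrays, but the allocator (and the OS, for allocations this big) only gets involved once.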
If end user hardware doesn't support it or isn't configured properly, then they will be completely unaware of and unaffected by its existence.
End user hardware generally does support it though - any vaguely modern computer, smartphone or tablet should automatically pick up and use an IPv6 address if available. So if the ISPs start supplying v6 it's essential that it works reliably, because the users' devices will try and use it. Broken v6 does affect connectivity, even if v4 still works fine. And even if the fault is with the user's own equipment, you can bet they'll be complaining to the ISP.
Second post because I realised my first one doesn't directly address your point above.
That should be true in theory, but the IPv6 hardware & software is nowhere near as well tested as the IPv4 equivalent, both in terms of home equipment and in the ISPs' own networks. How often does this kind of thing work perfectly first time? And the staff don't have the same experience with it to fix problems when they do occur. Anything new is a risk, and since hardly any home customers are demanding IPv6 it might seem like it's a risk not worth taking until made absolutely necessary by v4 exhaustion.
That's not what *I* want, but from an ISP's perspective I can see how it would make sense to prepare & test their network for v6 steadily, slowly and thoroughly but not actually deploy it while they still have enough v4 addresses.
I assume that's for the US, which seems ahead of the game despite having plenty of v4 addresses.
Here in the UK, none of the major ISPs have deployed v6 at all, and I don't think any of the mobile companies have either. I suppose they're just risk averse, as dealing with support calls for unexpected problems isn't cheap and their margins aren't huge.
IPv6 will never take off.
According to Google, it is. Slowly, admittedly, but about 5% of Google users now have IPv6.
So you could argue: they are neither English nor French, as both languages adopted them from the same source. But that is incorrect insofar as English indeed adopted the words via the French invaders and not via the Latin/Roman invaders.
Those words in the original post were adopted into English long after Anglo-Norman was dead, so invasion can't be the answer here. They aren't native French words either - both English and French for some reason seem to like to coin new words from the classical languages. I suppose they thought telephone and television sounded grander than farspeaker and farseer (though we do have loudspeaker, oddly). This tendency seems to have greatly reduced recently though - computing terms are generally made from words already in English rather than new borrowings.
I don't think the original poster claimed that French had borrowed them from English, just that they are not native French.
Televisions were a common candidate for percussive maintenance, but it could help computers too. My old BBC Micro often wouldn't power up without a good whack on the top left (PSU).
I miss thumpable electronics. "Try turning it off and on again" just isn't the same.
Yes that's right, but the Norman words are so well integrated now that most English speakers wouldn't recognise them as foreign. The German ones still tend to look German. I doubt too many people would see this post and know that recognise, foreign, tend, recent, doubt, people and post are loanwords. "Integrate" still has a foreign feel about it though.
For those who don't trust Slashdot and are too impatient to wait for the official result, there's a poll from a source of unimpeachable virtue - Grindr.
Results suggest 54% no, 46% yes, with a small minority for "fancy a shag".
Did not know that the English speak a Celtic dialect.
Most Scots don't either - less than 2% speak any Scottish Gaelic.
Prescription drug prices in the US market are much higher than the NHS negotiated prices; without the US market and the high amount of US consumer spending on drugs, drug companies would have little incentive to invest in new drugs.
Your own doctors' and hospitals' inability to negotiate a good deal isn't the NHS's fault. You don't really think that if the NHS paid more the drug companies would say "oh well, we'll charge everyone else less", do you? They'll charge as much as they can get away with, just like now.
A more likely reason for drugs being so expensive in the USA is that they spend more on sales & marketing than R&D. How much cheaper would it be if they didn't do that? Don't blame your own dysfunctional system on the NHS "not paying up" because they do, and the companies make a fat profit out of it.
For example, American consumers and taxpayers are paying for most of the medical research that the UK's single payer system would never be able to finance on its own;
What? What the NHS pays for is medicines and technology at prices negotiated with the pharmaceutical companies. It doesn't get them for free.