I'm going to argue there are no special cases that don't fit.
In a strictly mathematical sense, yes, various things are equivalent and various patterns are universal. However, that's a bit like saying you can do anything with sequencing, selection and repetition. While true in a sense, realistically it doesn't necessarily represent the clearest way to express everything. In practice, I have sometimes found that while I might build individual parts of a complicated algorithm from tools like folds, it may be clearer and easier to write the "big picture" using explicit recursion rather than trying to adapt everything to fit some standard algorithm.
As a practical example, not so long ago I was working on some code that would take some information in a certain format as input, and update a rather complicated graph-like data structure to incorporate that extra information. This algorithm involved walking the graph, and depending on the properties of each node reached and of the information to be merged in, either updating that single node "in place" or changing the structure of the graph around it. Each such step would typically transfer some of the remaining information into the graph, and then continue walking the rest of the graph to merge in the rest of the information until one or the other ran out. No doubt with enough mathematical machinations this could have been shoe-horned into some standard pattern, but in practice it was far simpler and more transparent to write a small set of mutually recursive functions that implemented the required behaviour at each step. And of course each of those functions then received information about the state of the graph walk and the state of the information being merged in through parameters.
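To give the flavour (this is a made-up toy in Haskell, not the real code, and the names are invented): a pair of mutually recursive functions walk a tree-shaped structure, threading the remaining information through as an argument, until either the information or the structure runs out:

```haskell
-- Hypothetical sketch only. `mergeInfo` walks the structure; when the
-- structure runs out first, `graft` grows new structure from the leftover
-- info and hands control back to `mergeInfo` -- mutual recursion.
data Node = Leaf | Branch Int Node Node deriving (Eq, Show)

mergeInfo :: [Int] -> Node -> Node
mergeInfo []     node = node                      -- info ran out: done
mergeInfo info   Leaf = graft info                -- structure ran out: grow it
mergeInfo (x:xs) (Branch v l r)
  | x == v    = Branch v (mergeInfo xs l) r       -- "in place": x absorbed
  | x <  v    = Branch v (mergeInfo (x:xs) l) r   -- keep walking left
  | otherwise = Branch v l (mergeInfo (x:xs) r)   -- keep walking right

graft :: [Int] -> Node
graft []     = Leaf
graft (x:xs) = Branch x (mergeInfo xs Leaf) Leaf  -- back to the walker
```

The point isn't the particular merge policy; it's that the "state of the walk" lives entirely in the parameters, with no fold or standard traversal forced onto the shape of the problem.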
At this point I think purity allows for laziness and laziness demonstrates a lot of the advantages of purity.
If you only care about the result of evaluating a function, sure, but if you also care about the performance characteristics of your program, I don't think it's so simple. Laziness can be both a blessing and a curse.
As for lazy with large amounts of data, Hadoop is lazy. So I'm not sure what you are saying.
In short, unrestricted laziness can cause huge increases in the amount of working memory required to run a program, until finally something triggers the postponed evaluations and restores order. As I recall, there was even a simple tutorial example in Real World Haskell that could wind up exhausting the available memory just by scanning a moderately large directory tree because of the accumulated lazy thunks.
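The textbook illustration of that failure mode (not the RWH directory example itself, but the same mechanism) is the lazy left fold:

```haskell
import Data.List (foldl')

-- Lazy foldl doesn't add as it goes: it builds a chain of unevaluated
-- (+) thunks as long as the input list, and only collapses the whole
-- chain when the result is finally demanded. On a large input that
-- chain alone can exhaust memory.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces the accumulator at each step, so it runs in constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0
```

Both produce the same answer; the difference is purely in when the postponed work gets done, which is exactly the kind of performance characteristic that laziness makes hard to reason about.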
... Blue Strat is lying and blustering again
Blue Strat is dead-on correct on this one.
Until functional programmers start speaking the same language as people in industry, we'll keep rolling our eyes and ignoring you.
I'm pretty sure maths has been around longer than programming, so who is really redefining the language here?
Also, false dichotomy is false. Functional programming concepts are widely and effectively used in industrial programming. The idea that what we're talking about is some academic, ivory tower idea is decades out of date.
That's just bad functional code.
It was a simplified example, but I think the point would still be valid in some more complicated case that doesn't fit one of the everyday functional programming patterns. The state is still there, it's just conveyed by accumulating function argument(s) in recursive, functional code instead of storing it in loop control variable(s) in imperative code.
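A small Haskell sketch of what I mean (invented example): the running total and counter that would be loop control variables in imperative code become explicit accumulator arguments, and each recursive call is the next iteration:

```haskell
-- The "loop variables" (running total and element count) are threaded
-- through as arguments to the local helper `go`.
meanOf :: [Double] -> Double
meanOf xs = go 0 0 xs
  where
    go :: Double -> Int -> [Double] -> Double
    go total count []     = total / fromIntegral count  -- loop exit
    go total count (y:ys) = go (total + y) (count + 1) ys
```

The state hasn't gone away; it's just carried in parameters instead of mutated in place.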
The other thing is you don't want to be "doing stuff" and iterating. You want to be computing stuff and then "doing stuff" on the entire set of output. The system as it pulls output will drive the iteration on the computation.
I think you're conflating lazy evaluation with functional programming here. In any case, I think that sort of claim needs some qualification. Haskell-style laziness is nice for composition in theory and sometimes it lets us write very elegant code in practice, but it can also become a liability, particularly if you're working with very large amounts of data or anything time-sensitive.
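To be fair, the pull-driven style the parent comment describes does work beautifully in the small. A minimal Haskell illustration, where the consumer's demand drives how much the producer computes:

```haskell
-- The source list is conceptually infinite, but because evaluation is
-- driven by the consumer (take 5), only five squares are ever computed.
firstSquares :: [Integer]
firstSquares = take 5 (map (^ 2) [1 ..])
-- [1,4,9,16,25]
```

The trouble starts when the gap between "described" and "demanded" computation grows large enough that the deferred work itself becomes the resource problem.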
On the other hand, if you've used a language that is designed to support functional programming, you probably wouldn't be in much doubt.
For example, here's the all-positive check written in Haskell:
all_positive = all (>0) [1, 2, 3, 5, 8, 13]
which is just a convenient notation for:
all_positive = all (\x -> x > 0) [1, 2, 3, 5, 8, 13]
where the backslash is Haskell's general syntax for introducing a lambda.
Criticising the ideas of functional programming because, for example, C++'s syntax for lambdas is horrific is like criticising OOP because setting up dispatch via vtables is a bit messy in assembly language. It's just not the right tool for the job, and it's unlikely to give great results no matter what you do with it. You have to look at the underlying principles to see whether they're useful or not.
I see lambdas as the opposite end of the pendulum swing from the goto statement.
They have their place but they both lead to lots of confusion with poor coders who are trying to maintain very old code.
...nothing appears on the linked page at all!
The small home wood-heating systems are toxic to everyone nearby, so that won't be lasting long. Studies are showing that even moderate levels of PM2.5 smog are a major health problem and excessively deadly.
Wood isn't 100% renewable in most cases. If you remove a bunch of trees, there is a very good chance that the total mass of trees that grows back will be smaller. In places with heavy deforestation, the number of trees that can grow back may be only 50% of what was logged in the first round.
Trees are delicately balanced bags of water. Their height and mass are related to how much wind other trees can protect them from, along with hundreds of millions of years of evolution optimizing their density. It is an example of the applied use of fractals.
Solar panels have a very large capital expense; they are cheap in the long run, but they are not feasible for running industry in poor countries.
Raw, ready-to-mount, single-crystal panels are down to $0.50/watt now, in pallets of ten at about 350 watts each, and have good lifetimes. Even adding the control electronics and batteries for nighttime and bad weather power, and replacing the batteries periodically, that's cheaper than building and running coal plants and their distribution infrastructure (even at third-world labor prices).
The control electronics are mostly semiconductor devices and are still benefiting from Moore's Law. Solar panels are still improving, as are batteries (following their own Moore's-Law-like curves). Solar has a factor of several in efficiency yet to go, and a lot of room for cheaper manufacture. Batteries are pretty efficient, but still have lots of room for improvement in charge/discharge rates, lifetime, and manufacturing cost. Coal plants, meanwhile, are already close to as efficient and cheap to run as they can get. So solar will continue to improve its lead.
The main remaining advantage of coal plants is that grid power gives suppliers an ongoing revenue stream and a captive market, while solar provides only an occasional capital purchase.
(But why do you never hear about the greenhouse effect of solar panels?)
Too bad the colonies across the pond are now run by a muppet.
Yeah, and Carthage must be destroyed, too.
Your side lost. Five and a half months ago. Isn't it time you got over it?
“Kahan found that increased scientific literacy actually had a small negative effect: The conservative-leaning respondents who knew the most about science thought climate change posed the least risk. Scientific literacy, it seemed, increased polarization. In a later study, Kahan added a twist: He asked respondents what climate scientists believed. Respondents who knew more about science generally, regardless of political leaning, were better able to identify the scientific consensus—in other words, the polarization disappeared. Yet, when the same people were asked for their own opinions about climate change, the polarization returned. It showed that even when people understand the scientific consensus, they may not accept it.”
Notice how the author slips in his unsupported interpretation of the data: Greater knowledge about science causes more polarization.
Well, maybe. That’s a reasonable hypothesis, but it seems incomplete. Here’s another hypothesis that fits the same observed data: The people who know the most about science don’t think complex climate prediction models are credible science, and they are right.
In fact, there's more incentive to lie about climate science than cancer research: More immediate funding is at stake, more groupthink applies, it will be decades before others can prove you wrong, and unlike falsified cancer research, people won't die because you misdirected research.
And as for saying "the fraud was in the review process, not the work itself," that's like saying "Well, Anthony Weiner was only caught sexting. He never actually cheated." The odds that the fraud we've caught is the only fraud committed by those willing to commit fraud would seem pretty low...
Rich corporations and people are allowed to do what they want.
There are exceptions: Volkswagen to pay $2.8 billion in US diesel emission scandal
That's because they cheated the GOVERNMENT.
But it's nice to see the individuals who got hurt (lower mileage once the patches are applied, lower resale value) getting some of the bux for a change.
(Why do you still get robo-calls? Because the Fed preempted state laws that had let people sue the robo-callers for damages.)
I thought one of the previous releases mentioned Weeping Angel (or at least weeping something) and that it turned Samsung TVs into room bugs. So I assumed this one was more details on it.
But the media seems to be talking about it as if it's new with this release and a big surprise.
Did they just notice it now, or am I misremembering the earlier stuff? (Either way, it's good that it's finally getting public attention.)
(Sorry to bother others with the question. But I've been too busy to plow through it all personally and would appreciate info from people who have done some deep-diving.)
... the sheer number of "why would you want that at all" or "nobody needs that" or "the software is fine as it is" type responses from software users. What is particularly puzzling is that it's not the developers of the software rejecting the suggestions -- it's users of the software
You've answered your own question. To mix a few metaphors:
One of the things about software is that a LOT of people stand on the shoulders of each giant - by being users of his code. A change that isn't a straight augmentation (and even some that are intended to be) can shift the sand under their castles and bring them crashing down.
Live within your income, even if you have to borrow to do so. -- Josh Billings