Sure, but how would any of that give rise to the original statistic?
Why would Google have any control over or visibility into anyone's connections, unless either that person independently uses Google services in some ISP-like capacity, or the sites they visit independently use Google services in some hosting capacity?
Since then, Google has seen a 23 percent reduction in the fraction of navigations to HTTP pages with password or credit card forms on Chrome for desktop.
Just ask yourself how Google can possibly know that and you can get a pretty good idea of where it really stands on the spyware/privacy issue.
I've developed reliable dev estimates in the past. This relied on us having a single codebase that we worked on consistently for an extended period. We knew our own infrastructure. We knew what we were doing. If there were major areas of doubt, we used a timebox for investigations.
The kicker: however long it took us to write the code in the end, it took us one tenth as long to create the estimates.
I'm not sure we should be dictating commercial restrictions on the supply of all creative content to an entire continent based on the three people in that continent who are studying film or music analysis.
In any case, lots of people are commenting here as if forcing sales to the entire EU to be at the same price will bring the cheaper prices to the richer nations. It seems far more likely that it will bring the more expensive prices to the poorer nations. Your "background music" licence is exactly the kind of expendable luxury that could suffer under the more uniform regime.
Sorry, but that just isn't how economics works.
Firstly, market segmentation is absolutely routine, including by purchaser power. There are countless ways to appeal to people who can afford to spend more, and businesses do this all the time. Have you ever seen a box for a "coupon code" when you ordered something online? That's market segmentation in action. Post coupons to everyone on the poorer street in your example, and now everyone isn't paying the same price.
Secondly, as someone who actually runs some online facilities at cost, I can tell you that it is a real problem for people in less well-off nations if your price online is the same everywhere. You can't lower the headline price, because if everyone paid the lower price you couldn't afford to keep the services running at all. But then the people in places where salaries and costs of living are generally lower can't keep up, so they lose out. The kind of adjustment we're talking about here is the online equivalent of posting coupons to all the homes in the poor part of town.
The genuine, uniform market price you're talking about doesn't exist in most real markets, because most real markets are not uniform.
You're talking about a monopoly situation. For works covered by copyright, that already exists in the sense that for any given work the rightsholders can decide to offer it only via certain channels.
However, unless those works are also essential, the customer still has the option not to buy them at all, and if the price is too high they will choose to spend their money elsewhere.
Moreover, while individual works may have a monopoly supplier, most creative works will be in competition with other works for providing information, entertainment, etc. Those competitive effects also moderate pricing, preventing the kind of "extraction" model you're talking about.
Around here, Amazon won the pricing war for most CD/DVD/Blu-ray content long ago, yet today it would still be cheaper for me to binge-watch a lot of TV shows through Netflix than through buying all the box sets. Amazon's prices for buying permanent copies of films or shows I really like on disc aren't much different to what they were a few years ago when you could still easily buy the same things in bricks and mortar stores.
It seems unlikely that EU law will prevent a vendor from selling something at all in selective member states if there is a good reason not to. We looked into this issue when the EU VAT mess was the big news a couple of years ago, fearful that some sort of anti-discrimination provisions would say otherwise. The experts made some straightforward arguments that, for example, declining to sell to customers elsewhere in the EU would be OK if the costs of operating the new tax scheme were prohibitive, because that would be a strictly commercial decision. Presumably complying with the law of the land would also be considered an acceptable basis for making such a decision.
The EU is working on being a common market, which is where it started.
That's lovely, and when the economic situation in all EU member states is similar, maybe they'll achieve it. In the meantime, it is far from clear that this is a good thing.
At least in the sort of context we're talking about here, the "real market price" you mentioned is whatever someone is prepared to pay for something, no more and no less. Forcing people in areas with very different economic situations to pay the same price just means a lot of things won't be accessible to people in places that can't afford the same rates as their wealthier neighbours.
I'm going to argue there are no special cases that don't fit.
In a strictly mathematical sense, yes, various things are equivalent and various patterns are universal. However, that's a bit like saying you can do anything with sequencing, selection and repetition. While true in a sense, realistically it doesn't necessarily represent the clearest way to express everything. In practice, I have sometimes found that while I might build individual parts of a complicated algorithm from tools like folds, it may be clearer and easier to write the "big picture" using explicit recursion rather than trying to adapt everything to fit some standard algorithm.
As a practical example, not so long ago I was working on some code that would take some information in a certain format as input, and update a rather complicated graph-like data structure to incorporate that extra information. This algorithm involved walking the graph, and depending on the properties of each node reached and of the information to be merged in, either updating that single node "in place" or changing the structure of the graph around it. Each such step would typically transfer some of the remaining information into the graph, and then continue walking the rest of the graph to merge in the rest of the information until one or the other ran out. No doubt with enough mathematical machinations this could have been shoe-horned into some standard pattern, but in practice it was far simpler and more transparent to write a small set of mutually recursive functions that implemented the required behaviour at each step. And of course each of those functions then received information about the state of the graph walk and the state of the information being merged in through parameters.
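The shape of that code can be sketched with a much-simplified, hypothetical stand-in (the real structure was a complicated graph, not a tree, and the names walk/step here are purely illustrative): one function consumes the remaining input, another merges a single piece into the structure, and the two call each other, with all the state travelling through the parameters.

```haskell
-- Toy stand-in for the merge described above: 'walk' consumes the
-- remaining keys, 'step' merges one key into the structure, and the
-- two are mutually recursive. All state -- the unmerged input and the
-- structure built so far -- is threaded through as parameters.
data Tree = Leaf | Node Tree Int Tree deriving Show

walk :: [Int] -> Tree -> Tree
walk []       t = t
walk (k : ks) t = step k ks t

step :: Int -> [Int] -> Tree -> Tree
step k ks t = walk ks (insert k t)
  where
    insert x Leaf = Node Leaf x Leaf
    insert x (Node l v r)
      | x < v     = Node (insert x l) v r
      | x > v     = Node l v (insert x r)
      | otherwise = Node l x r  -- node already present: "update in place"

-- In-order traversal, just to observe the result.
toList :: Tree -> [Int]
toList Leaf         = []
toList (Node l v r) = toList l ++ [v] ++ toList r

main :: IO ()
main = print (toList (walk [5, 3, 8, 3, 1] Leaf))  -- [1,3,5,8]
```

Shoe-horning this into a standard fold over either the input or the structure is possible, but the mutually recursive form states the per-step behaviour directly.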
At this point I think purity allows for laziness and laziness demonstrates a lot of the advantages of purity.
If you only care about the result of evaluating a function, sure, but if you also care about the performance characteristics of your program, I don't think it's so simple. Laziness can be both a blessing and a curse.
As for lazy with large amounts of data, Hadoop is lazy. So I'm not sure what you are saying.
In short, unrestricted laziness can cause huge increases in the amount of working memory required to run a program, until finally something triggers the postponed evaluations and restores order. As I recall, there was even a simple tutorial example in Real World Haskell that could wind up exhausting the available memory just by scanning a moderately large directory tree because of the accumulated lazy thunks.
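A minimal illustration of the same failure mode (not the Real World Haskell example itself) is the classic lazy left fold: the accumulator is never forced, so it grows into a long chain of unevaluated thunks before anything demands the final value.

```haskell
import Data.List (foldl')

-- With the lazy foldl, the accumulator builds up millions of pending
-- (+) thunks -- O(n) memory -- that only collapse when the result is
-- finally demanded. The strict foldl' forces the accumulator at each
-- step and runs in constant space.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0     -- builds the thunk chain

strictSum :: [Int] -> Int
strictSum = foldl' (+) 0  -- evaluates as it goes

main :: IO ()
main = print (strictSum [1 .. 10000000])  -- 50000005000000
```

Both produce the same answer; only the working-memory profile differs, which is exactly the kind of performance characteristic that pure equational reasoning doesn't capture.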
Until functional programmers start speaking the same language as people in industry, we'll keep rolling our eyes and ignoring you.
I'm pretty sure maths has been around longer than programming, so who is really redefining the language here?
Also, false dichotomy is false. Functional programming concepts are widely and effectively used in industrial programming. The idea that what we're talking about is some academic, ivory tower idea is decades out of date.
That's just bad functional code.
It was a simplified example, but I think the point would still be valid in some more complicated case that doesn't fit one of the everyday functional programming patterns. The state is still there, it's just conveyed by accumulating function argument(s) in recursive, functional code instead of storing it in loop control variable(s) in imperative code.
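As a small sketch of what that looks like (countAndSum and its arguments are my own illustrative names), the "loop variables" become accumulator parameters and each recursive call is the next iteration with the updated state:

```haskell
-- The state -- a running count and a running total -- lives in the
-- accumulator arguments n and total rather than in mutable loop
-- control variables; each recursive call is one loop iteration.
countAndSum :: [Int] -> (Int, Int)
countAndSum = go 0 0
  where
    go n total []       = (n, total)
    go n total (x : xs) = go (n + 1) (total + x) xs

main :: IO ()
main = print (countAndSum [3, 1, 4, 1, 5])  -- (5,14)
```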
The other thing is you don't want to be "doing stuff" and iterating. You want to be computing stuff and then "doing stuff" on the entire set of output. The system as it pulls output will drive the iteration on the computation.
I think you're conflating lazy evaluation with functional programming here. In any case, I think that sort of claim needs some qualification. Haskell-style laziness is nice for composition in theory and sometimes it lets us write very elegant code in practice, but it can also become a liability, particularly if you're working with very large amounts of data or anything time-sensitive.
On the other hand, if you've used a language that is designed to support functional programming, you probably wouldn't be in much doubt.
For example, here's the all-positive check written in Haskell:
all_positive = all (>0) [1, 2, 3, 5, 8, 13]
which is just a convenient notation for:
all_positive = all (\x -> x > 0) [1, 2, 3, 5, 8, 13]
where the backslash is Haskell's general syntax for introducing a lambda.
Criticising the ideas of functional programming because, for example, C++'s syntax for lambdas is horrific is like criticising OOP because setting up dispatch via vtables is a bit messy in assembly language. It's just not the right tool for the job, and it's unlikely to give great results no matter what you do with it. You have to look at the underlying principles to see whether they're useful or not.
Scala is no longer hip.
Again, it seems we basically agree on this one in principle, but again, I'm perhaps a little wary in practice. When we start talking about regulating software development, and so recognising accepted good practice in some way, that implies that there is someone qualified to judge what good practices actually are and some reasonable basis for determining what the regulations should be. My personal view is that I'm optimistic about the future but we're not there yet.
In particular, suppose we tried to move in that direction tomorrow, or maybe we even went as far as making software development a proper engineering discipline and a licensed profession. I think the kind of people who would find their way into the influential regulatory positions probably would not be the people who were actually best qualified to advise on such issues, not least because they're busy building useful software. Instead, I think you'd get the dreaded consultants -- not the legitimate ones who really do have wide experience and now make a living sharing it to help others, but the ones who are more politician than engineer, engaging speakers and writers, always quick to tell others how they should write software, yet typically having built relatively little of their own and having little actual data to support their recommendations. (I have this vision in my head now of some Extreme Agile Craftsmanship Consultant telling guys who have been writing security-sensitive networking stacks for 30 years how in future they should TDD their way to the basic functionality and then add "security" on later, and as long as the tests are still passing they can just ship right away.)
This isn't to say that the underlying problem is not serious. The idea that everything should be connected, combined with the idea that security and privacy concerns are being adequately addressed by today's market, is a terrifying and potentially extremely dangerous combination. As a geek, I'm able to protect myself and my family to some extent by avoiding a lot of the junk, but obviously most people don't have that advantage, and general public awareness of the real implications of these modern trends is still disturbingly low.
I wonder whether a useful way forward in the near future would be some sort of voluntary endorsement system to help raise that public awareness. You don't have to absolutely require following lots of specific regulations, but maybe those who can demonstrate that they at least meet some basic, uncontroversial standards get to label their products with some sort of reserved mark, and then maybe customers start asking why some other product doesn't come with, say, a money-back guarantee and extra compensation in the event of certain bad things happening.
Nothing makes a person more productive than the last minute.