Do you do it?
evolution = variation + selection
What's happening here is likely more about selection than variation, although maybe a bit of both. I suspect this is largely the mechanics of punctuated equilibrium at work.
The way evolution is taught at high school level is typically oversimplified to the point of being wrong, as indeed are many subjects. Evolution is NOT a continuous process of each generation becoming better fitted to the environment via natural selection acting on genetic changes introduced in individuals of that generation...
The normal way that evolution is understood to play out in practice is via "punctuated equilibrium", whereby genetic changes - which are typically too small and/or irrelevant to have any immediate impact on fitness - accumulate in animal populations over many generations. It's not the genetics of individuals that are changing so much as the genetics of the interbreeding population as a whole, as accumulated changes spread throughout the population over a number of generations. This is the "equilibrium" phase: genetic changes are accumulating, but there is no external evidence of this because the changes are irrelevant to fitness.
What happens next is the "punctuated" part of "punctuated equilibrium" - something changes in the external environment that the animals are part of - in this case the arrival of an invasive species. These changes in the environment (drought, disease, invasive species, etc.) can happen very quickly compared to the speed at which genetic change accumulates. It may then happen that in the changed environment some of the accumulated genetic changes that were previously benign become a factor in fitness (either positively or negatively), and a "sudden" change in the population may be seen as those individuals possessing what has now become a helpful trait, or not possessing a harmful one, prosper relative to their peers and rapidly come to dominate the population.
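If it helps, here's a toy Python sketch of that two-phase dynamic (all the names and numbers are made up for illustration - this is a cartoon, not a real population-genetics model):

    import random

    random.seed(1)
    POP = 1000

    # Each individual carries one trait value; initially the trait is neutral.
    pop = [random.gauss(0.0, 1.0) for _ in range(POP)]

    def next_gen(pop, fitness):
        # Selection: sample parents in proportion to fitness.
        # Variation: each offspring gets a small mutation.
        parents = random.choices(pop, weights=[fitness(t) for t in pop], k=len(pop))
        return [p + random.gauss(0.0, 0.05) for p in parents]

    # "Equilibrium" phase: flat fitness, so variation accumulates invisibly.
    for _ in range(200):
        pop = next_gen(pop, lambda t: 1.0)
    print("after 200 neutral generations, mean trait = %.2f" % (sum(pop) / POP))

    # "Punctuation": the environment changes (say, an invader arrives) and
    # higher trait values suddenly confer a fitness advantage.
    for _ in range(20):
        pop = next_gen(pop, lambda t: max(0.01, 1.0 + t))
    print("20 generations after the change, mean trait = %.2f" % (sum(pop) / POP))

The trait drifts aimlessly for 200 generations, then shifts dramatically within 20 once it starts to matter - the "sudden" change was latent in the accumulated variation all along.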
When a change in the environment brings about a quick change in an animal species, it is tempting - but sloppy - to say they are rapidly evolving. What happened rapidly was the change in the environment, not the slow process of genetic change that suddenly became significant.
In this case the Florida lizard population presumably already had all the traits - to some degree - that would prove positive or negative when the invading Cuban species arrived, and a quick change was seen as natural selection did its thing and, over a few generations, the population became dominated by individuals having the (slowly come by) traits that now proved to be critical.
Of course there's more to how the dynamics of evolution play out than just punctuated equilibrium... While it's always going to take a long time for any complex feature such as sticky toes (or toes themselves, for that matter) to evolve, the way genetic coding works means it may be very easy for a feature - once it exists - to be modified by a small change (e.g. a birth defect giving you unwanted extra limbs, or extra-sticky toes - advantageous if you mostly climb slippery trees, disadvantageous if you don't).
So... the big picture here is that the Florida lizard species will have already accumulated the feature set that proved advantageous (or disadvantageous for those that died giving way to the "new" variety), and this just played out once the environment changed. Subsequent to the changed environment, additional variation/selection (which you could think of as optimization "tweaking") of the most critical features (toe pad size, scale stickiness) may have occurred.
Apple's ion-strengthened glass (which, confusingly, they call Ion-X Glass on the Apple Watch) might be Gorilla Glass, but could also be Asahi Glass's Dragontrail-X Glass, which is similarly ion-strengthened, or maybe a new product from a different manufacturer.
If the misclassification only occurs on rare inputs, then any random perturbation of that input is highly likely to be classified correctly.
The fix therefore (likely what occurs in the brain) is to add noise and average the results. Any misclassified nearby input will be swamped by the greater number of correctly classified ones.
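A minimal sketch of the idea in Python (the brittle classifier and all the numbers here are stand-ins I made up to show the voting effect):

    import random

    def smoothed_classify(classify, x, n_samples=100, sigma=0.1):
        # Classify many randomly perturbed copies of the input and take a
        # majority vote; rare misclassified pockets get outvoted.
        votes = {}
        for _ in range(n_samples):
            noisy = [xi + random.gauss(0.0, sigma) for xi in x]
            label = classify(noisy)
            votes[label] = votes.get(label, 0) + 1
        return max(votes, key=votes.get)

    # A stand-in classifier with a tiny pathological region: almost all
    # inputs near (0.995, 0.995) are really class "A", but this sliver
    # gets misclassified as "B".
    def brittle_classify(x):
        if 0.99 < x[0] < 1.0 and 0.99 < x[1] < 1.0:
            return "B"
        return "A"

    print(brittle_classify([0.995, 0.995]))                     # "B" - fooled
    print(smoothed_classify(brittle_classify, [0.995, 0.995]))  # "A" - the vote wins

Because the misclassified region is tiny relative to the noise, almost every perturbed sample lands on the correct side, and the majority vote recovers the right answer.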
This will actually help!
First, voice doesn't use that much data. For example, Viber (a popular VoIP app) uses 0.5MB/min, which works out to about 0.5GB for 1000 minutes.
More importantly, once everyone has transitioned off 3G onto 4G/LTE (i.e. VoIP over LTE), the carriers can repurpose the 3G spectrum for 4G and thereby gain more 4G/LTE capacity.
The deal doesn't make sense to me, but presumably it would involve Dr Dre and Jimmy Iovine being contracted to stay for some minimum amount of time, which brings a lot of clout (esp. Iovine) in the music biz.
The $3.2B price, if true, seems insane though. Between 2012 and 2013 Beats bought out HTC's 50% ownership for a total of $415M (25% in 2012 for $150M, 25% in 2013 for $265M). So, if half the company is worth $415M, the whole thing should be worth closer to $830M, not $3.2B!
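The back-of-envelope math, using the figures above:

    # Implied valuation from the HTC stake buybacks:
    stake_2012 = 150  # $M paid for 25% in 2012
    stake_2013 = 265  # $M paid for 25% in 2013
    half = stake_2012 + stake_2013  # $415M for 50%
    whole = 2 * half                # $830M implied for 100%
    print(whole)                    # 830, vs the rumored 3200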
And the civilian world isn't prepared for a zombie apocalypse either, or to be suddenly attacked by hordes of man-eating tigers.
Is this a slow news day?
Your problem isn't that you're too old to learn, but rather that (per your own description) you were never that good to begin with. Any fool can do simple programming.
In practice, I believe that the present text-based programming paradigm artificially restricts programming to a much simpler logical structure compared to those commonly accepted and used by EEs. For example, I used to say "structured programming" is essentially restricting your flow chart to what can be drawn in two dimensions with no crossing lines. That's not strictly true, but it is close. Since the late 1970s, I've remarked that software is the only engineering discipline that still depends on prose designs.
You appear to be thinking about a very limited subset of software where the essence is captured by the "two dimensional" control flow.
As Fred Brooks famously wrote: "Show me your [code] and conceal your [data structures], and I shall continue to be mystified. Show me your [data structures], and I won't usually need your [code]; it'll be obvious."
Nowadays he probably would have updated that pithy formulation to include mention of your threading model as well as data structures.
If you start trying to visualize the dynamic behavior of complex synchronization-heavy multi-threaded programs or ones with significant non-trivial shared data structures, then I can assure you there'll be plenty of crossed lines!
The time when most programs could be described by flowcharts was probably 40 years ago. We've moved on a bit since then!
Haha - very Zen! Love it!
I think part of what you're missing is based on your own self-described lack of experience: that you can write simple programs but get bogged down writing more complex stuff. Professional programmers don't really have this problem (or at least the experienced ones don't - there is a learning curve, as in any field).
The main "trick" to designing/writing complex programs is to be able to think at many different levels of abstraction and therefore to "divide and conquor" the complexity. At each level of your program (think of it as a layered onion from the highest level on the outside down to the low level stuff of simple programs on the inside) you're going to be implementing one level of complexity/capability by using software components that are essentially only ONE level lower in capability than the level you're at... ditto for the next lower level, etc, all the way down. Designed this way, it's no harder to write the highest levels of the program than it is to write the lowest levels that you are familiar with.
Note though that the software components you're using at any level of your design are going to be domain-specific components that you've designed yourself to make the job easy - they are not going to be 100% off-the-shelf components (other than company-internal re-use), except at the bottommost layer of the design. It's having the right components that makes the job of implementing the next layer up easy (like the idea of the adjacent possible - without having "adjacent" components, the corresponding "adjacent possible" is NOT possible, or at least is way more difficult). So, the real issue is not whether one is using a visual vs text-based method of composition, but rather having (creating) the right components at each level.
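A trivial Python sketch of what "one level lower" means in practice (the domain, file format, and names here are all invented for illustration):

    # Bottom layer: primitives at the level a beginner already writes.
    def read_lines(path):
        with open(path) as f:
            return [line.rstrip("\n") for line in f]

    # Middle layer: a domain-specific component built ONLY from the layer below.
    def parse_orders(path):
        # Each line is "name,quantity".
        return [(name, int(qty))
                for name, qty in (line.split(",") for line in read_lines(path))]

    # Top layer: reads like the problem statement, because the layer below
    # already speaks the domain's language.
    def total_quantity(path):
        return sum(qty for _, qty in parse_orders(path))

No layer reaches more than one conceptual level down, so the top layer is no harder to write than the bottom one - that's the whole point.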
It's also worth noting that since programmers are the ones with the skills to create programming tools, we therefore necessarily have pretty much the tools we need. Good programmers are lazy (strive for minimalism), and aren't going to fight the same battles every day if they can build a tool to make their lives easier. Of course there's always a bleeding edge of new technology where the tools haven't yet matured (e.g. now that clock speeds are topping out and parallelism is taking over, there's more need for better tools to deal with parallelism), but basically we DO have the tools we need.
Yep - 2014 will be the year of Linux (with hot grits).
There's PrimeCoin, where the miners generate prime numbers.
OCZ is/was a horribly managed company, but IMO one of their other core problems is/was that they aren't a flash memory (NAND) manufacturer... Difficult to compete on price when their major SSD competitors (Intel, Samsung, Crucial/Micron, SanDisk) all have their own fabs...
Presumably you're assuming a six-chamber gun with one bullet and "therefore" a 1-in-6 chance of getting shot.
However... due to the weight of the bullet, when you spin the cylinder prior to firing, the loaded chamber will tend (due to gravity) to end up in one of the lower positions rather than the topmost (firing) position, so the average chance of getting shot should actually be somewhat less than 1-in-6.
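A quick Monte Carlo of that claim (the bias weights below are pure invention - the point is just that any bottom-heavy settling distribution drops the firing-position probability below 1/6):

    import random

    # Six cylinder positions; index 0 is the top (firing) position.
    uniform = [1/6] * 6
    # Invented bottom-heavy distribution for where the loaded chamber settles.
    biased = [0.05, 0.13, 0.22, 0.22, 0.22, 0.16]

    def p_shot(dist, trials=100_000):
        # Probability the loaded chamber lands in the firing position.
        hits = sum(random.choices(range(6), weights=dist)[0] == 0
                   for _ in range(trials))
        return hits / trials

    print("uniform spin: %.3f" % p_shot(uniform))  # ~0.167
    print("biased spin:  %.3f" % p_shot(biased))   # ~0.050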