Empowerment by modern technology means that a very few smart people can meet and exceed the needs of the entire human population.
This will, in turn, drive down the cost of living. With less money necessary to meet the population's basic needs, those smart people will spend less effort on those needs and can work on moving humanity forward. That will create new demand (lots of labor will be necessary if we get to the point of colonizing another planet, for instance).
The problematic sector of the population, to me, is the ineducable one. If there are no jobs left for them, if they can't be educated to perform new ones, and if they aren't capable of making those kinds of advances themselves, then work for them is going to be pretty sparse under all conditions. In that way it seems we'll always be assured of the existence of an underclass.
Correct. The majority of such systems use weighted state transition models, such as Markov chains, to capture rules such as what note to follow a particular note with. More globally aware versions of the same can be used to generate dynamics. (Rhythm is harder, but you'll notice that both of the samples had a more or less constant rhythmic pattern for the duration of each). But here's the rub: those states and rules don't arise in a vacuum. The model is trained to recognize them, either automatically on a piece such as a Mozart sonata, or manually (as the article seems to suggest) through feedback from the user. It's all machine learning and its ability to compose is limited by the patterns it can extract from the pieces in the training set.
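The weighted-state-transition idea is simple enough to sketch in a few lines. This is a minimal, illustrative version only: the "training piece" is a toy note sequence I made up, not anything from the article's system, and real systems condition on more context than a single preceding note.

```python
import random
from collections import defaultdict

def train(notes):
    """Count weighted transitions: note -> {successor: count}."""
    table = defaultdict(lambda: defaultdict(int))
    for a, b in zip(notes, notes[1:]):
        table[a][b] += 1
    return table

def generate(table, start, length, rng):
    """Walk the chain, sampling each successor in proportion to its count."""
    out = [start]
    for _ in range(length - 1):
        successors = table[out[-1]]
        if not successors:
            break  # dead end: this note was never followed by anything in training
        choices, weights = zip(*successors.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return out

# Toy "training piece": a short arpeggiated figure (purely hypothetical data).
melody = ["C", "E", "G", "E", "C", "E", "G", "C"]
model = train(melody)
print(generate(model, "C", 8, random.Random(0)))
```

Note the limitation the comment describes: every transition the generator can make is one it saw in the training data, so its "compositions" are strictly recombinations of extracted patterns.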
While the samples are by no means perfect, I am still impressed that it was able to pick up at least some fragments of particular cadences, such as the buildup near the end of the second sample. I could hear that resolving very nicely in the baroque style (years of listening have essentially conditioned me to expect what Bach would do there), but instead the program meandered past the opportunity and came to a close on a much less satisfying ending.
This is still a good deal more developed than the majority of algorithmic composition systems.
The more I've learned about AI, the less convinced I've become that we are close to realizing it in its strong form (and I'm now a machine learning researcher...). For instance, we do not have a single working definition of intelligence. Creating something is kind of hard if you can't even define it! As a result, everyone is scattered around the field, trying to solve the same problem from different approaches, at least a sizable minority of them convinced that their own way is the One True Way To AI and that it's Just Around The Corner.
There's nothing that theoretically prevents it - I don't buy Searle's argument that a system which operates by symbol manipulation is necessarily unintelligent - but neither is there any indication that it's coming any time soon.
show a little independence
But not too much!
Machine learning is the logical place to take a combined knowledge of programming and statistics. It's a much rarer skill and commands a much higher salary, plus you're doing the closest thing we currently have to predicting the future for a living - and you generally still get to code plenty.
In other words, statistical knowledge can be a significant career advantage, in addition to making you better at development and debugging.
It is lazy and disrespectful of you and other armchair commentators to simply dismiss all that work with a three-line opinion.
Doing precisely this is one of the more distasteful parts of most scientists' jobs.
Variables don't; constants aren't.