Comment Re:Evolution (Score 1) 453
...and instead of "clearly optimal", I should have said "a likely improvement" because no heuristic is guaranteed to provide an improvement in performance 100% of the time.
It requires nothing of the sort. It just requires that you sometimes try going up instead of down.
Depending on the terrain, however, "sometimes going against the gradient" could almost always result in worse performance.
One solution is more randomness in the culling heuristics, as well as bigger populations and occasional bigger mutations.
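As a sketch of that idea, here is a toy genetic algorithm with randomized culling, a decent-sized population, and occasional big mutations. The landscape, population size, and mutation rates are all invented for illustration, not taken from any real system:

```python
import math
import random

random.seed(0)  # seeded only so this sketch is reproducible

def fitness(x):
    # Toy multimodal landscape with several local peaks (pure illustration).
    return math.sin(5 * x) + 0.5 * x

def evolve(pop_size=50, generations=200):
    pop = [random.uniform(-2.0, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        # Randomized culling: keep the fit half, plus a few random stragglers
        # that a strictly greedy cull would have discarded.
        survivors = pop[:pop_size // 2] + random.sample(pop[pop_size // 2:], 5)
        children = []
        while len(survivors) + len(children) < pop_size:
            parent = random.choice(survivors)
            # An occasional big mutation can jump out of a local peak entirely.
            sigma = 1.0 if random.random() < 0.1 else 0.05
            child = min(2.0, max(-2.0, parent + random.gauss(0, sigma)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The point of sparing a few stragglers and allowing rare large mutations is exactly the one above: the population as a whole keeps probing downhill regions even while most of it climbs.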
Of course, this applies to whole populations, not individuals. Simply saying "follow the gradient and occasionally don't" is more likely to result in degraded average performance. Actually knowing when going against the gradient is helpful rather than harmful, and in which direction to go, or at least having a good heuristic for guessing when it's a good idea, is, on the other hand, clearly optimal.
Randomly doing something counterintuitive may reveal clues toward a more optimal solution, but in itself it is no guarantee of improvement.
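One classic "good heuristic for guessing" when a counterintuitive move is worth trying is the simulated-annealing acceptance rule: take downhill moves freely early on, and make them progressively rarer as the search settles. A minimal sketch, where the landscape, cooling schedule, and step size are all assumptions for illustration:

```python
import math
import random

random.seed(1)  # seeded only so this sketch is reproducible

def f(x):
    # Toy landscape: one global peak near x = 0.31 among lesser local peaks.
    return math.sin(5 * x) - 0.1 * x * x

def anneal(start, steps=5000, t0=1.0):
    x = start
    cur = best = f(x)
    best_x = x
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling toward zero
        cand = x + random.gauss(0, 0.1)
        cand_f = f(cand)
        delta = cand_f - cur
        # Metropolis rule: always accept uphill moves; accept downhill moves
        # with probability exp(delta / t), which shrinks as t cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            x, cur = cand, cand_f
            if cur > best:
                best, best_x = cur, x
    return best_x, best

best_x, best = anneal(start=-2.0)
```

Early on, almost any move is accepted and the walker wanders across basins; late in the run it behaves like a pure gradient follower, so the counterintuitive moves happen mostly when they are cheap to recover from.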
Vista is a very good example of what happens when you take tough theoretical problems and throw entry-level programmers at them, programmers who haven't spent enough time converting C code to assembly on 4-bit microcontrollers with 64 bits of onboard RAM to appreciate the inherent value of code optimization and algorithm design. Meanwhile, there's enough processor speed and memory available that nobody notices or cares about the inefficiencies until the product hits shelves and millions of end users are forced to hit "Allow" 300 times a day.
Except following the gradient is just an example of using a suboptimal solution that works in the majority of cases, and is significantly less difficult to implement than the "next step up," which requires, at the very least, an internal model of the surrounding terrain. If it is actually known that going down will help you get higher, it's not actually a dumb decision despite how it may appear to agents without the "internal model" algorithm.
In fact, the gradient follower in that case is actually the dumber process, because it takes only one factor into account. But if the gradient follower is able to observe the internal modeler performing counterintuitive steps and achieving greater results, it may attempt to modify its own behavior without understanding the justification behind it, or the full ramifications thereof. This is where IT Managers come from.
Dammit, infinite regress only works when you don't acknowledge it!
You've doomed us all. I hope you're happy.
"Given the rapid advance of Moore's Law, when does it make sense to throw hardware at a programming problem? As a general rule, I'd say almost always."
"even the most rudimentary math will tell you that it'd take a massive hardware outlay to equal the yearly costs of even a modest five-person programming team"
All the hardware in the world isn't going to fix an insidious segmentation fault, or ensure that your database queries properly handle all inputs, or rework a poorly designed algorithm that runs in O(n^n) time. "Throwing hardware at a programming problem" is like trying to fix a flat tire by putting more gas in your car.
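A quick back-of-the-envelope calculation makes that concrete: for an O(n^n) algorithm, doubling the hardware barely moves the largest input you can handle, while a better algorithm moves it by orders of magnitude. The 1 GHz operation budget and one-hour window below are arbitrary assumptions:

```python
import math

def max_n(budget_ops, cost):
    # Largest n whose cost(n) fits in the budget (exponential + binary search).
    lo, hi = 1, 2
    while cost(hi) <= budget_ops:
        lo, hi = hi, hi * 2
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cost(mid) <= budget_ops:
            lo = mid
        else:
            hi = mid
    return lo

budget = 1e9 * 3600  # one hour at an assumed 10^9 useful operations/second

n_slow = max_n(budget, lambda n: n ** n)         # the O(n^n) algorithm
n_slow_2x = max_n(budget * 2, lambda n: n ** n)  # same algorithm, double the hardware
n_fast = max_n(budget, lambda n: n * math.log2(n) if n > 1 else 1)

# Doubling the hardware doesn't budge n for the n^n algorithm (n = 11 either
# way), while the n log n version handles inputs billions of times larger.
```

That is the flat tire in numbers: more gas (hardware) changes nothing for the pathological algorithm, whereas reworking the algorithm changes everything.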