I think that over the next 10 years two things will become commonplace in programming. One will be a renewed emphasis on KISS (Keep It Simple, Stupid) in software design, and the other will be a greater emphasis on programming for security.
That ultimately brings programming almost full circle to less abstraction and more agility. In that environment I expect to continue programming the way I have since 1981, with an added emphasis on secure programming and on understanding what your tools are producing in the runtime environment.
Ensuring all developers in the industry are competent is a pipe dream. Take a look at the most exacting careers you can think of - and you'll find varying levels of competence.
People are imperfect (in the sense that they can have a bad day and let typos slip by from time to time - even the very best of us). Additionally, the real software lifecycle is not like frozen water. It is more like all the different states of water - solid, liquid, and gas - changing as its environment changes, on a continuum from birth to death.
I agree we should do something. I think that 'something' should be more than just training and hoping they use what they've learned.
The real problem here is willingness to fund what is necessary - refactoring all code used in critical systems to ensure it is secure - and to maintain that approach over time on an iterative basis.
We should touch code (at least to review it) every year, which research suggests is the sweet spot for zero-day exploits. We get more benefit if we refactor the code - effectively resetting the clock for exploit writers, who must then find a new zero day and develop applications to exploit it.
Working in IT today, I can tell you from experience that no one is willing to spend money to constantly refactor code without delivering new functionality (read: 'revenue-generating functionality'). This approach also runs counter to the instincts of software engineers trained to value code reuse over rewriting or building new solutions.
Instead, they focus on cosmetic band-aids such as firewalls, antivirus, patch updates, and policy management. All of these things are important, but in the scheme of things they will not stop a zero-day exploit - particularly since most patches for zero days are not available until the zero day is discovered, plus however long it takes the developer or company in question to put out a fix - on average six months to a year after the zero day is discovered and reported. Meanwhile the network is wide open to anyone who has figured it out (which happens roughly six months to a year after a new piece of software is deployed on the network). The problem is related more to how humans learn systems than to any particular coding practice. Your code-refactoring efforts just need to fall inside that curve - leading rather than following.
Finally - the proposed fixes, such as more regulation, will not fix the problem. They will only serve to drive people out of the business at the precise time when we need more developers than ever to address the problem effectively. Instead:
1. Pay for what is needed in IT instead of being cheap. If more specific regulation of this arrives (e.g. Sarbanes-Oxley), you might not have a choice.
2. Let your developers as a whole spend some time evaluating code - the more eyeballs you have, the better.
3. Move away from expensive waterfall projects to more flexible agile methods, and adjust your funding protocols to match.
Reading the post and associated comments reminds me that the uptake, continued use, and internalization of technology is influenced by generational 'norms'.
Of course the Millennial is flabbergasted that you don't have Facebook, LinkedIn, and ten other flavor-of-the-month social media whats-its... his/her life being focused both on their network of friends and on the assurance that they can find anything online to answer any question about newcomers.
The Gen X-er shrugs and goes back to trying to make money (making on average 12% less than their fathers/mothers and trending down - yet optimistic to the end), thinking about his next job hop and who from his fraternity can get him in touch with the hiring manager.
The Boomer rails at the dangers of social media, and the lack of independence of the Millennial, counting email as sufficient for all contacts less formal than a letter.
Realizing that all generalizations are less than useless, I sit here in the corner loathing you all.
Every organisation needs a "not boring" slot of time for their developers. Not for product that needs to ship NOW... but for stuff that may need to ship next year.
Except I would add: "may never ship at all."
The key point here is that you aren't betting the company on it, but you should still be doing it. Every company should encourage innovation, even if the company isn't willing to bet any cash on it. Another way is to encourage your developers to spend some time on their own personal FOSS projects. What this gives you is experience - and from a risk-versus-reward perspective, success is attained not by how much working (boring) code you produce, but by how many times you try something that fails, get up again, and keep pushing on with new or modified ideas based on that experience, giving your customers real value. Companies without this perseverance will fail, or at best will be mediocre.
On the flip side - if your core business (the part in which you are trying to show your customers you are innovative and a leader) becomes too boring - and by too boring I mean that while it may 'work', it may not do what a customer really wants or needs - then you run the risk of losing those customers to someone who will try, and who is willing to fail.
Just like all oversimplified prescriptions, the article's concept does not take into account the nuances of business goals, risk aversion level, available human factors and skills, and so on.
They should have viewed this presentation on making a Python data-crunching application 114,000 times faster before they set off on their research project.
To summarize - there are a multitude of ways to optimize your application, including using the chip's onboard cache to avoid the overhead and delay of accessing memory on the motherboard across the bus.
Yes - as we try to eke out more performance from our applications, we'll need to consider the relationship between our applications and the underlying implementation and capabilities of the hardware they live on. Further, I would say we should also be considering how to make our tools do this sort of thing for us. Given the complexities we are seeing in the development arena today - virtualization, the need to do more with less both on the back end and on small handheld devices, and the need to build more, faster, while increasing the security of what we build - I consider it imperative.
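The cache point above can be illustrated with a toy sketch in plain Python (my own example, not from the presentation). It sums the same 2D grid twice: once walking along each row, which reads memory sequentially, and once jumping between rows on every step, which defeats the cache. In CPython the effect is diluted by pointer indirection, so the gap is modest here; in C or in vectorized libraries like NumPy the same access-pattern difference is dramatic.

```python
import time

# Build a large 2D grid as a list of rows (row-major layout:
# elements of the same row sit next to each other).
N = 1000
grid = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(g):
    # Walk each row in order - sequential, cache-friendly access.
    total = 0
    for row in g:
        for value in row:
            total += value
    return total

def sum_column_major(g):
    # Jump to a different row on every step - strided, cache-unfriendly access.
    total = 0
    for j in range(N):
        for i in range(N):
            total += g[i][j]
    return total

start = time.perf_counter()
a = sum_row_major(grid)
row_time = time.perf_counter() - start

start = time.perf_counter()
b = sum_column_major(grid)
col_time = time.perf_counter() - start

# Both traversals compute the same answer; only the access pattern differs.
print(f"row-major: {row_time:.4f}s, column-major: {col_time:.4f}s")
```

The design point is that the data and the result are identical either way - the only variable is the order in which memory is touched, which is exactly the kind of hardware-aware detail I'd like our tools to handle for us.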
When I started to have wrist tendon pains about 10 years ago, I got a Microsoft ergonomic keyboard, which helped a lot, and I used it for years. However, through use of that keyboard, I eventually started to have pain again. Part of it was the non-adjustable nature of that keyboard, and the other part was its short-travel rubber dome switches, which were hard on my fingers and forced me to slow down and use a lighter touch (I tend to pound the keyboard when typing fast, having learned to touch type on a manual typewriter back in the day, followed by a number of years on buckling-spring-style keyboards between then and the commoditization of computers and keyboards that emerged in the 90s).
Today I have two Kinesis Maxims (one for work, one for home), and I love the way they adjust: three different 'tent' levels and a fully adjustable splay angle. Even as good as that is, I still have an old IBM buckling-spring keyboard (from an old RS6000 - not a Model M, but identical for all intents and purposes), and I recently purchased a Corsair gaming keyboard with no number pad (I never use the keypad on 101-key keyboards anyway - I always type my numbers using the number row at the top), which itself uses Cherry Red key switches. Not quite as clicky as the buckling spring, but very nice on my fingers compared to the pseudo-mechanical dome switches on the Maxim. So when working on different machines, I get different experiences.
For me, the key to healthy wrists and hands as a typist is to mix things up - use more than one type of keyboard. Don't get stuck in a rut.
Power corrupts. And atomic power corrupts atomically.