Every organisation needs a "not boring" slot of time for their developers. Not for product that needs to ship NOW, but for stuff that may need to ship next year.
Except I would add: "may never ship at all."
The key point here is that you aren't betting the company on it, but you still should be doing it. Every company should encourage innovation, even if it isn't willing to bet any cash on it. Another way is to encourage your developers to spend some time on their own personal FOSS projects. What this gives you is experience - and from a risk vs. reward perspective, success is attained not by how much working (boring) code you produce, but by how many times you try something that fails, get up again, and keep pushing on with new or modified ideas based upon that experience, ultimately giving your customers real value. Companies without this perseverance will fail, or at best be mediocre.
On the flip side - if your core business (the part where you are trying to show your customers you are innovative and a leader) becomes too boring - and by 'too boring' I mean it may 'work' but not do what a customer really wants or needs - then you run the risk of losing those customers to someone who is willing to try and willing to fail.
Like all oversimplified prescriptions, the article's concept does not take into account the nuances of business goals, levels of risk aversion, available human talent and skills, and so on.
They should have viewed this presentation on making a Python data-crunching application 114,000 times faster before they set off on their research project.
To summarize - there are a multitude of ways to optimize your application, including using the chip's onboard cache to avoid the overhead and delay of accessing memory on the motherboard across the bus.
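The cache point can be sketched in a few lines. This is a minimal, illustrative Python example (names invented): iterating a 2-D structure row by row walks memory roughly sequentially, while walking it column by column jumps between rows on every access and defeats the cache. The exact speedup varies by machine and is far more dramatic in lower-level languages, so no timing claim is made here.

```python
# Minimal sketch of cache-friendly vs. cache-hostile traversal order.
N = 500
grid = [[1] * N for _ in range(N)]

def row_major():
    total = 0
    for row in grid:            # contiguous walk through each row
        for value in row:
            total += value
    return total

def col_major():
    total = 0
    for j in range(N):          # strided walk: one element per row
        for i in range(N):
            total += grid[i][j]
    return total

# Both compute the same sum; only the memory-access pattern differs.
assert row_major() == col_major() == N * N
```

Timing the two loops with `timeit` on your own hardware is a good way to see the effect for yourself.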
Yes - as we try to eke out more performance from our applications, we'll need to consider the relationship between our applications and the underlying implementation and capabilities of the hardware they live on. Further, I would say we should also be considering how to make our tools do this sort of thing for us. Given the complexities we see in development today - virtualization, the need to do more with less on the back end as well as on small handheld devices, and the need to build more, faster, while increasing the security of what we build - I consider it imperative.
When I started to have wrist tendon pains about 10 years ago, I got a Microsoft ergonomic keyboard, which helped a lot, and I used it for years. Eventually, though, the pain returned. Part of it was the non-adjustable nature of that keyboard, and the other part was its short-travel rubber dome switches, which were hard on my fingers and forced me to slow down and use a lighter touch. (I tend to pound the keyboard when typing fast, having learned to touch type on a manual typewriter back in the day, followed by a number of years on buckling-spring keyboards before the commoditization of computers and keyboards in the 90s.)
Today I have two Kinesis Maxims (one for work, one for home), and I love the way they adjust - three different 'tent' levels, and a fully adjustable split angle. Even as good as that is, I still have an old IBM buckling-spring keyboard (from an old RS6000 - not a Model M, but identical for all intents and purposes), and I recently purchased a Corsair gaming keyboard with no number pad (I never use the keypad on 101-key keyboards anyway - I always type my numbers using the number row at the top) - which uses Cherry MX Red key switches. Not quite as clicky as the buckling spring, but very nice on my fingers compared to the pseudo-mechanical dome switches on the Maxim. So when working on different machines, I get different experiences.
For me, the key to healthy wrists and hands as a typist is to mix things up - use more than one type of keyboard. Don't get stuck in a rut.
This sounds a lot like my Corsair gaming keyboard with Cherry MX Red key switches, which I am using to type this.
I also have two Kinesis Maxim keyboards (an ergonomic split keyboard with three height-adjustment levels), and one IBM keyboard with buckling-spring switches I nabbed off an abandoned RS6000.
I keep one of the Maxims at work, and the rest I use at home. I like to move from one to the next periodically, because I have found that mixing up the different types of keyboards (and avoiding the rubber dome ones where possible) is better for my carpal tunnel than sticking with any one type.
My favorites, though, are the IBM buckling spring and the Corsair's Cherry keyswitches - very nice action, and I can type very fast and accurately on them compared to the Maxim or other rubber dome keyboards.
Bill Gates, Steve Jobs, and Steve Wozniak were part of the Digital Revolution where they wanted to decentralize data and put computers in the hands of the people.
Now it looks like we need a backlash.
No, the solution isn't centralization of our data systems. You can already see where that is leading with the high profile exposures today (Sony, Target, et al). It is a fallacy to assume corporations have all the answers, or will act in the general public's best interests. Short term profit is the only thing that has any meaning in that system.
By the same token, we can't continue going along as we are - that has already proven to fail.
The very thing that makes the internet useful for communications and commerce for large populations spread all over the globe is also at the core of its weakness: public key encryption. To be more specific, computers are designed to be deterministic rather than random, and the systems we've devised to work around this have limits that may be exploited. When paired with encryption, these limits open up potential exposure, and advances in computing technology make those exploits more readily usable. For certain short-term transactions this level of exposure may be an acceptable risk - for data that is transient in nature and not useful to someone at a future point in time. However, much of the data we entrust to encryption could be useful to a third party in the future.
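The determinism point is easy to demonstrate. A hedged sketch in Python: the standard `random` module is a deterministic Mersenne Twister, so anyone who learns (or guesses) the seed can reproduce its entire output stream - exactly the property an attacker needs. The `secrets` module instead draws from the operating system's entropy source and is the appropriate choice for keys and tokens.

```python
import random
import secrets

# Two generators with the same (guessable) seed produce identical output:
rng_a = random.Random(42)
rng_b = random.Random(42)
assert rng_a.getrandbits(128) == rng_b.getrandbits(128)  # fully predictable

# secrets pulls from OS entropy and is not reproducible this way:
token = secrets.token_bytes(16)
assert len(token) == 16
```

This is why cryptographic libraries insist on OS-provided entropy rather than a seeded pseudo-random generator.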
We could ensure our systems (personal or corporate - it doesn't matter) are completely secure from a remote attacker by placing them inside a Faraday cage and disconnecting them from the internet. While the data would be secure, it wouldn't be very useful in the broader context of communication and commerce. For some types of information it might be an appropriate approach, and I imagine it is what some government agencies opt for with their classified networks. For all other systems it would be as useful as throwing them into the deepest part of the Pacific Ocean - secure, but useless.
In order to communicate on the wider stage then, we must accept a certain amount of risk. I think we are all in agreement that the current risks are unacceptable the way they are today. I also think there is no single magic bullet. I think you will see the teams focus on the following areas, assuming corporate interests are not overly impacted by the potential solutions:
Tools - tools need to be devised that don't allow neophyte application programmers to shoot themselves in the foot.
Training - training has to be developed based upon new approaches, and made available widely.
Willpower - everyone, from corporations down to individual developers, must have the willpower to do some things that might be hard at first (e.g. code reviews of all code, including libraries, and refactoring or rewriting same in light of security issues) - and these things need to become habit.
Whatever the outcome, there will be no silver bullet.
The big bang theory is just that: a theory. It is not yet proven indisputably as a law of nature.
New ideas and observations, such as this article on new equations and this article on the lack of expected gravitational waves, put the theory to the test. Furthermore, the Pope declaring the big bang theory 'right' only increases the need to check our models and assumptions on this subject (and now that I think about it, wouldn't the church have a vested interest in a non-permanent universe to mesh with end-times dogma?).
At least until we get some indisputable evidence, we need to continue to question our theories, record our observations - and try to see where the puzzle pieces fit. Being a dogmatic scientist is worse than being ignorant - the scientist should know better.
We need to address the real underlying problem you are describing right there - code written by different people that does not conform to any standards is hard to manage over its lifecycle - and this goes double for limited frameworks that may get some things right, at the expense of not allowing you to get all things right.
This is one thing that open source has gotten right on occasion - think of the Linux kernel for example, and how many people contribute to that and keep it going.
So really the answer I think is twofold - on the one hand, people need better tools that make it easier to integrate their efforts; on the other hand, entities engaged in this activity need to develop standards that ensure that when people develop things, they document and build interfaces that are consistent - if not globally, then at least between the members of the groups expected to work on the code. If you do both of these things - and by extension some of the other things those recommendations imply (e.g. code reviews, agile development methods, etc.) - then managing the code over its lifecycle becomes far easier.
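The "consistent, documented interfaces" idea can be sketched concretely. This is a minimal, hypothetical Python example (all names invented): an abstract base class states the contract every contributor must satisfy, and any implementation that skips a required method fails loudly at instantiation rather than quietly at runtime.

```python
from abc import ABC, abstractmethod

class Exporter(ABC):
    """Documented contract every export backend must satisfy."""

    @abstractmethod
    def export(self, records: list) -> str:
        """Serialize records and return the output location."""

class CsvExporter(Exporter):
    """One conforming implementation (destination path is hypothetical)."""

    def export(self, records: list) -> str:
        path = "out.csv"
        # ... write records as CSV to path ...
        return path

# A conforming subclass works; a subclass missing export() would raise
# TypeError when instantiated, enforcing the group's shared standard.
assert CsvExporter().export([]) == "out.csv"
```

The same effect can be had with interfaces in Java or traits in Rust - the point is that the contract lives in one documented place, not in each contributor's head.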
Now, if you are only building software for yourself, then this isn't so important. However, if you expect other people to extend and manage your code over the long term, then I would still opt for leaning towards either creating and documenting standards, or selecting and learning existing well known standards - and sticking to that in your own code. Keep it consistent between all the things you build that you want to share, and you just might get people to help - if that's what you are looking for.
I would not consider being overly risk averse to be rational behavior.
There are many rational reasons to take risks:
1. Gives you, and by extension your company the opportunity to learn and grow. If you never take risks you stagnate and learn nothing.
2. Real invention occurs through taking risks. If you never take risks you don't innovate.
3. Taking responsibility, and therefore risk, is what grown men and women do. Being overly risk averse is immature, weasel-word behavior.
If your company does not reward risk-taking - then you are in the wrong company.
I've told this story elsewhere, but it applies directly to this issue, so I'll recap in short:
A vendor is contracted to create an integrated support application for large sums of money (millions of dollars) over a 6-month period; the vendor chooses an obscure commercial Java framework to build the system on. The application is delivered and appears to work fine for several months, then starts getting sluggish; a month later it locks up and has to be restarted. This progressively gets worse, tracking the growth of the underlying customer base, and the application soon becomes completely useless - shutting down within minutes of being started with a memory exhaustion error.
The main problem we found was the Java equivalent of a memory leak: the code would instantiate framework objects in the main loop, and they would never go out of scope. Furthermore, the code imported hundreds of libraries that were never used, further impairing clarity and understanding of what the thing was doing.
To make a long story short - since this was already in production, there was even more pressure to get a solution in place fast (and all the lawyers' threats in the world can't replace a knowledgeable developer) - we rebuilt the whole system in Perl in a little over a week. That solution is still running today, even as we've scaled orders of magnitude since then.
So - to your point - this stuff really does happen, and it wastes godawful amounts of time and money, when a simpler home-grown solution would do just as well, if not better.
Programmers have to take more responsibility and think holistically about what they are building - and integrate testing to validate their assumptions against the hard light of the real world. To be a great programmer, you should know how to test and build tests and test rigs as needed. To be a great tester, you should know how to code - so you can automate what you're testing. I think the lines have to blur - a firewall between the two only leads to silos, and limits what can be done if they were to work seamlessly (the quote attributed to Aristotle applies here, "the whole is greater than the sum of its parts").
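The "blurred lines" idea is cheap to practice. A minimal sketch in Python (the function and its contents are invented for illustration): the code and the assertions that validate it live side by side, written by the same person, and run together - no wall between programmer and tester.

```python
def parse_price(text):
    """Parse a '$1,234.56'-style string into a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price():
    # The developer validates their own assumptions against concrete cases.
    assert parse_price("$1,234.56") == 1234.56
    assert parse_price("99") == 99.0
    assert parse_price("$2,000") == 2000.0

test_parse_price()
```

Frameworks like `pytest` or JUnit formalize this, but the habit - every assumption paired with a check - matters more than the tooling.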
Of course, in many development shops the 'just a programmer' mentality is baked into the whole process - so as a developer you might feel that you are stuck. That being said, if you know better, then it is in the interests of your business if not yourself to champion the issue and effect change.
It would all depend on your definition of 'significant rewrite/technology/architectural changes'. There is a lot of room in there for interpretation - particularly if a project was changing constantly.
By the same token, if a project has stabilized to the point of little or no change, then having a long-lived 'W' wouldn't necessarily be a bad thing either.
Human beings create these numbering schemes for human consumption - and therefore can reasonably adjust them to avoid confusion as necessary.