Comment Re:Thought it was going to follow Apple (Score 1) 272
Apple rewrote its OS as a layer on top of Linux maybe 25 years ago.
To clarify, OS X was descended from NeXTSTEP, which was built on Mach and BSD.
Honestly, I think the philosophy of software engineering has gone wrong.
I agree. Sadly, software engineering is not engineering. Nobody, outside of safety-critical systems, analyses the program structure and makes valid correctness claims for it as part of their quality process.
Software is at the stage architecture went through before structural engineering became widely adopted toward the end of the 19th century.
While we have pretty good tools these days that could do formal verification of our software, the process is incredibly time consuming. Moreover, all formal verification can ever do is show conformity to the specification. The specification can, of course, still be wrong. The move from the informal world of business to the formal specification of a system leaves a lot of room for mistakes.
How does a buyer of software know whether one piece of software is higher quality than another? Is there any real way for them to independently judge the quality of the code in most purchases?
My final thought to reflect on is that acceptable quality is enough quality, and for most users that is reached fairly quickly. People will tolerate software that is really quite buggy. Games developers are actually giving us relatively deep insight into that part of the economics. They still make money shipping games that are basically broken.
This point about game development is quite illuminating I think. The reason that most software is quite buggy is fundamentally an economic question - not an engineering question. Generally speaking, people are not prepared to pay for quality. They want enough quality that the software isn't a false economy - and we as an industry largely supply software of that quality.
One of the things I find online, especially when talking to Americans, is this desire to believe in any wild conspiracy theory that crosses their mind.
A vast conspiracy within the Democrats to deliberately turn off their own power to hurt Trump's re-election chances is just laughable. Extraordinary claims require extraordinary evidence. Who did it? How? And why? I'm not convinced your why is good enough.
Between 160,000 dead Americans and him saying "it is what it is" - he doesn't need a giant conspiracy to take him down. He can do that all by himself.
This, I believe, is the story of EVERY migration. It's not necessarily that older is better, or "they don't make them like they used to", but that software development is a bug-prone and arduous process that you will not get right the first time.
This is absolutely the case. Software projects are still incredibly risky. You only have to read the Standish Group's CHAOS report to see how risky these sorts of projects are from a management perspective.
The fact that the system is still there doing its job means that the original project was one of the lucky ones that made it through to a somewhat successful conclusion. You need a very good reason to run that risk again.
In general, just upgrading your dependencies and tool-chain is probably not a sufficient excuse. You need some other compelling reason.
When I look at the list of 100 bugs found by a single tester in my team, who is not busy having review meetings and counting metrics, in a week, I laugh at these numbers.
If your tester is finding 100 bugs a week, you're doing it wrong. Your underlying quality is much too low. It's much more expensive to find a bug by functional testing than by code inspection. This is because all those bugs need to be fixed and retested. This usually requires a rebuild and other ancillary tasks that drive up cost.
Worse, bug fixing usually follows a geometric progression: for every hour spent fixing bugs, some ratio of new bugs is introduced that then has to be removed by the same process. This repeats until the defect count is acceptable. Even with a relatively low coefficient of bug introduction, the geometric series usually adds 20-30% of additional cost to the development.
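To make the arithmetic concrete, here's a minimal sketch of that cost model. The function name and the reintroduction ratio `r` are my own illustrative choices, not anything from a study; the point is just that the rework series 1 + r + r^2 + ... sums to 1/(1 - r):

```python
def total_fix_cost(initial_effort: float, r: float) -> float:
    """Total bug-fixing effort once rework converges.

    r is the (hypothetical) ratio of new bug-fixing work introduced
    per unit of bug-fixing work done. The series 1 + r + r^2 + ...
    converges to 1 / (1 - r) when 0 <= r < 1.
    """
    if not 0 <= r < 1:
        raise ValueError("r must be in [0, 1) for the series to converge")
    return initial_effort / (1 - r)

# With a 20% reintroduction ratio, 100 hours of fixing balloons to 125,
# i.e. 25% additional cost - right in the 20-30% range mentioned above.
overhead = total_fix_cost(100, 0.2) - 100
```

So even a seemingly modest ratio like 0.2 quietly adds a quarter to the bill, which is why driving the defect count down before functional testing pays off.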
Sometimes I think a lot of software processes are held up as improving quality not because they actually work, but because the reduced productivity makes the quality metrics look better.
This comes back to my earlier point about people ignoring published research because they feel they know better. Do you know there are properly controlled scientific trials that establish the truth of what I'm saying? Why is your thought superior to this research? Why is this research defective?
No offense meant, honestly, but your place sounds miserable to work at. It's not the process, but the ridiculous level of formalization and standardization.
Code inspections work best when they're formal, with clearly defined roles and clear reporting steps. There have been large-scale studies done that confirm this. The research fed into the development of the Cleanroom methodology pioneered at IBM.
The less formal the structure, the less well it works.
One of my big bugbears with software development as a craft is our failure to really learn from experience. There were lots of studies done on the craft decades ago that cleanly establish these basic principles. We choose to ignore them because developers feel they know better than the published research.
The truth is that people suck at writing software. Even the very best developers in an organisation are not as good as a team of lower-quality people that inspects its own output. Teams > individuals.
Honestly, it isn't as corporate as it first appears. Once the roles are defined, the work turns to inspecting the source. It takes a few seconds to cover off that part of the meeting and from there the real work begins.
There are other benefits.
One is that everyone has read everybody's source. There's none of this "Only Bill knows that piece of code." The whole team knows the code very thoroughly.
Another is that relatively junior people end up producing code just as solid as a person with 25 years of experience, and they learn a lot along the way. Do not underestimate the tremendous power of that.
My teams enjoy the process and they certainly enjoy not getting as many bugs coming back to bite them in the future when the feature is out in production. Once they're done, they tend to be done and are free to move on to the next feature.
The benefits of a cleaner code base, fewer issues and more accurate delivery times have a huge effect on morale.
Please mention the place so I never get within a mile of it. How would Linus have created Linux without people like you? Didn't he understand the technical debt he was creating? He could have been finding bugs at a rate of 1.25 per applied man-hour instead of actually creating something useful! Silly man. You process guys are useless.
I find this example really odd because Linux is built around a process of a huge amount of code review. They do it differently because they're a distributed team but they absolutely have a rigorous code review process.