
Comment Re:Migrations are costly and newer is not better (Score 1) 217

This, I believe, is the story of EVERY migration. It's not necessarily that older is better, or "they don't make them like they used to", but that software development is a bug-prone and arduous process that you will not get right the first time.

This is absolutely the case. Software projects are still incredibly risky. You only have to read the Standish Group's CHAOS report to see how risky these sorts of projects are from a management perspective.

The fact that the system is still there doing its job means that the original project was one of the lucky ones that made it through to a somewhat successful conclusion. You need a very good reason to run that risk again.

In general, just upgrading your dependencies and tool-chain is probably not a sufficient excuse. You need some other compelling reason.

Comment Re:Team Reviews are far superior (Score 1) 186

When I look at the list of 100 bugs found by a single tester in my team, who is not busy having review meetings and counting metrics, in a week, I laugh at these numbers.

If your tester is finding 100 bugs a week, you're doing it wrong. Your underlying quality is much too low. It's much more expensive to find a bug by functional testing than by code inspection. This is because all those bugs need to be fixed and retested. This usually requires a rebuild and other ancillary tasks that drive up cost.

Worse, this kind of pattern usually follows a geometric progression: for every hour spent bug fixing, some ratio of new bugs is introduced that then has to be removed by the same process. The process repeats until the defect count is acceptable. Even with a relatively low coefficient of bug introduction, the geometric series usually adds 20-30% additional cost to the development.
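As a toy illustration of that geometric series (the function name and the example ratio are mine, not figures from any real project):

```python
def total_rework_cost(fixing_hours: float, reintroduction_ratio: float) -> float:
    """Total cost of repeated fix-and-retest cycles as a geometric series.

    Each fixing pass introduces `reintroduction_ratio` new bugs per bug
    fixed, so each pass costs that fraction of the previous one. For a
    ratio r < 1, the series fixing_hours * (1 + r + r**2 + ...) converges
    to fixing_hours / (1 - r).
    """
    if not 0 <= reintroduction_ratio < 1:
        raise ValueError("ratio must be in [0, 1) for the series to converge")
    return fixing_hours / (1 - reintroduction_ratio)

# With a modest 20% reintroduction ratio, 100 hours of initial bug
# fixing grows to about 125 hours in total -- a 25% overhead.
print(total_rework_cost(100, 0.20))
```

Even a small reintroduction ratio compounds, which is why the overhead lands in the 20-30% range quoted above.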

Sometimes I think a lot of software processes are held up as improving quality not because they actually work, but because the reduced productivity makes the quality metrics look better.

This comes back to my earlier point about people ignoring published research because they feel they know better. Did you know there are properly controlled scientific trials that establish the truth of what I'm saying? Why is your intuition superior to this research? Why is this research defective?

Comment Re:Team Reviews are far superior (Score 2) 186

No offense meant, honestly, but your place sounds miserable to work at. It's not the process, but the ridiculous level of formalization and standardization.

Code inspections work best when they're formal, with clearly defined roles and clear reporting steps. There have been large-scale studies done that confirm this. The research fed into the development of the Cleanroom methodology pioneered at IBM.

The less formal the structure, the less well it works.

One of my big bugbears with software development as a craft is our failure to really learn from experience. There were lots of studies done on the craft decades ago that cleanly establish these basic principles. We choose to ignore them because developers feel they know better than the published research.

The truth is that people suck at writing software. Even the very best developers in an organisation are not as good as a team of less skilled people that inspects its own output. Teams > individuals.

Honestly, it isn't as corporate as it first appears. Once the roles are defined, the work turns to inspecting the source. It takes a few seconds to cover off that part of the meeting and from there the real work begins.

There are other benefits.

One is that everyone has read everybody's source. There's none of this "Only Bill knows that piece of code." The whole team knows the code very thoroughly.

Another is that relatively junior people end up producing code just as solid as a person with 25 years of experience, and they learn a lot along the way. Do not underestimate the tremendous power of that.

My teams enjoy the process and they certainly enjoy not getting as many bugs coming back to bite them in the future when the feature is out in production. Once they're done, they tend to be done and are free to move on to the next feature.

The benefits of a cleaner code base, fewer issues and more accurate delivery times have a huge effect on morale.

Comment Re:Team Reviews are far superior (Score 1) 186

Please mention the place so I never get within a mile of it. How would Linus have created Linux without people like you? Didn't he understand the technical debt he was creating? He could have been finding bugs at a rate of 1.25 per applied man hour instead of actually creating something useful! Silly man. You process guys are useless.

I find this example really odd, because Linux is built around a huge amount of code review. They do it differently because they're a distributed team, but they absolutely have a rigorous code review process.

Comment Re:Team Reviews are far superior (Score 3, Interesting) 186

You sound like a bean counter, and your organisation sounds like it is hell to work in. 1.25 bugs per man hour? Christ.

Well I'm the head of development at our place so I inhabit both worlds. Businesses like to measure return on investment. By being able to speak that language, I can generally frame activities developers naturally want to do in those terms. This leads to developers getting more of what they want.

You know what developers really, really, really hate? Having to work with technical debt and having no process to remove that technical debt because the program is now "working".

The best way around technical debt is not to put it in to the program in the first place. This process does a sterling job at that. So our developers are generally a pretty happy bunch.

Comment Team Reviews are far superior (Score 4, Interesting) 186

In our organisation, we have teams of six people that work together on their sprint. QA staff are included in this team.

On major features, the team code reviews the feature together in a special session. Roles are assigned. The author is present, a reader (who is not the author) reads the code. There is an arbitrator who decides whether a raised issue gets fixed. This arbitrator role is rotated through the team on an inspection by inspection basis. Finally, there is a time keeper role who moves the conversation to a decision if one topic is debated for more than three minutes.
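As a sketch, the role bookkeeping described above could look like this (the function and team names are purely illustrative, not a tool we actually use):

```python
from itertools import cycle

def assign_roles(team, arbitrator_rotation, author):
    """Assign inspection roles for one session.

    The reader must not be the author, and the arbitrator rotates
    through the team on an inspection-by-inspection basis.
    """
    arbitrator = next(arbitrator_rotation)
    reader = next(m for m in team if m not in (author, arbitrator))
    timekeeper = next(m for m in team if m not in (author, arbitrator, reader))
    return {"author": author, "reader": reader,
            "arbitrator": arbitrator, "timekeeper": timekeeper}

team = ["Alice", "Bob", "Carol", "Dave", "Erin", "Frank"]
rotation = cycle(team)  # persists across inspections, rotating the arbitrator
print(assign_roles(team, rotation, author="Bob"))
```

The point of keeping the rotation state outside the function is that each successive inspection hands the arbitrator role to the next team member.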

This process typically finds a humongous number of issues. It takes us about 4 hours of applied effort to discover a bug in pure functional testing. This process discovers bugs at a rate of 1.25 bugs per man hour of applied effort. So if you have five people in a room for one hour, you have applied 5 man hours. You'd expect to find 6-7 bugs. If you include all the stylistic coding standards bugs, this is typically 10-15 bugs per hour.
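Plugging in the rates above (the constants simply restate the figures from the text):

```python
# Cost-per-defect comparison using the rates quoted above.
FUNCTIONAL_TEST_HOURS_PER_BUG = 4.0    # ~4 applied hours to find one bug
INSPECTION_BUGS_PER_MAN_HOUR = 1.25    # team inspection discovery rate

people, session_hours = 5, 1
man_hours = people * session_hours                           # 5 man-hours applied
expected_bugs = INSPECTION_BUGS_PER_MAN_HOUR * man_hours     # 6.25, i.e. 6-7 bugs
inspection_hours_per_bug = 1 / INSPECTION_BUGS_PER_MAN_HOUR  # 0.8 man-hours

print(f"{expected_bugs:.2f} bugs expected; "
      f"{inspection_hours_per_bug:.2f} vs "
      f"{FUNCTIONAL_TEST_HOURS_PER_BUG:.1f} man-hours per bug")
```

At those rates, an inspection finds a bug for roughly a fifth of the effort that functional testing does.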

So while on the surface it looks expensive to have all those people in a room talking, the net result is that it tends to accelerate delivery because so many issues are removed from the software. Better still, the review occurs before functional testing begins. This means the QA staff on the team can direct their testing at the areas highlighted by the inspection process. This further improves quality.

It's true that about 50% of the issues are stylistic. But usually we get 1 or 2 bugs per session that represent a serious malfunction in the program. The rest could be problems under some circumstances or are minor faults.

Team reviews are vastly, vastly superior to pair-programming. There really is no contest.

Comment LISP (Score 4, Interesting) 429

LISP is probably the most powerful language ever discovered. I say "discovered" here and not "created" deliberately. There is a quality about it that makes it feel more like an extension of mathematics than a language.

It might have conquered the world if only Eich had been allowed to build Scheme in the browser, as he was hired to do.

Instead, it languishes for some reason I can't really understand. I still wish for a day it becomes a mainstream language but I think it'll just remain a wish.

Comment I doubt it (Score 2) 265

I'd be surprised if a random member of the public could even define what free software is. They'd probably think it's connected to the cost of the software rather than its freedom-giving properties.

That said, I think that the view that with enough eyes all bugs are shallow is false. Given that bash is used in millions and millions of servers and the bug took decades to root out, we must think of a better way to get eyes on the code.

The whole stack needs a line by line review by security experts. That will cost tens if not hundreds of millions of dollars but my view is that it's probably worth it. Then we have to make sure all changes get reviewed in the same way.

The result of this process would be a super-hardened version of OpenBSD. It would come with a nice fat government certification and if you want to do business with the government, you have to use that distro.

That might rub people up the wrong way, but I think that's ultimately what's going to happen. A lot of this infrastructure is so critical to the modern economy that we can't just run any old code anymore.

Comment Microsoft is a spent force (Score 4, Interesting) 142

Microsoft doesn't have many fans on Slashdot but even the most die-hard of fans must now see that they're in a real bad position.

They used to be invincible in the consumer space, but now the computing device of choice is either the tablet or the smartphone. Precious few of these are Windows based.

They used to be invincible in the business user space, but the move to mobile computing means business people are using iPhones and iPads, not Windows Phones and Surfaces.

Then there's Bing, whose only claim to fame is being the world's greatest search engine. For. Porn.

Then there's Azure. We actually looked at Azure and discovered that the same hardware on EC2 was half the price. If you're going to pay twice as much, you might as well give up and go home.

Then there was the own goal of the latest generation XBox. They managed to piss everyone off for no discernible gain.

The only area their grip is still strong is PC gaming. For how long, who knows?

Microsoft is a spent force. They're out of ideas. In a few short years they've gone from being the 800lb gorilla to struggling just to remain relevant.

It reminds me of Brazil versus Germany at this year's world cup. I'm not celebrating any more; it's just sad at this point.

Comment Re:No steering wheel? No deal. (Score 4, Insightful) 583

Sorry. While I love technology, my not-so-humble opinion is that we're nowhere near the level of reliability needed for a car that's completely free of manual control.

The Google car has done something like 700,000 miles and crashed twice. Both times this occurred, it was under control of the human occupant.

I drive to work every morning and the number of times I see people not paying attention is extraordinary. People doing their makeup, texting, arguing with their children, etc.

Honestly, in my view, removing the steering wheel is a safety feature.

Comment It's pretty standard... (Score 1) 232

You think Software Development is bad for this? At least the equipment is inexpensive and the material accessible.

In aviation, you'll pay > $60,000 of your own money to get your ATPL, only to start on a wage of $25,000.

What about medical school or law school? That's pretty expensive and comes out of your pocket.

Many serious professions require you to spend money on your training. It just comes with the territory.

Comment Re:need to get over the "cult of macho programming (Score 2) 231

I actually agree with both of you. The OpenSSL guys gave out their work for free for anybody to use. Anybody should be free to do that without repercussions. Code is a kind of literature and thus should be protected by free speech laws.

However, if you pay peanuts (or nothing at all) then likewise you shouldn't expect anything other than monkeys. The real fault here is big business using unverified (in the sense of correctness!) source for security critical components of their system.

If regulation is needed anywhere, it is there. People who develop safety- and security-critical stuff should be certified, and businesses with a turnover of over $x million should be required to use software developed only by the approved organisations.

There is nothing in this definition that prevents an open source implementation. In fact, there's an argument to say that any such verified implementation must be open source precisely so it can be inspected. But it is quite a lot of work and people need to be paid to do that work. You can't expect to get this level of quality assurance for free.

Comment Still fewer cancers than fossil fuels (Score 2, Informative) 157

Fukushima is a serious nuclear disaster. It's a situation that we should all be concerned about. But it should not lead to any pause in our appetite for nuclear energy.

What people often fail to appreciate is that even coal-fired power stations release quite large amounts of radioactive material into the atmosphere. Coal-fired power stations burn about a million times as much material as a nuclear power station per joule of energy produced. Some of that material is radioactive. That stuff isn't being sealed in a container and buried in a mountain; it's being blown up chimney stacks along with the rest of the rather unpleasant stuff.

Don't believe me? Reflect on this passage taken from this (PDF) document:

The EPA found slightly higher average coal concentrations than used by McBride et al. of 1.3 ppm and 3.2 ppm, respectively. Gabbard (A. Gabbard, “Coal combustion: nuclear resource or danger?,” ORNL Review 26, http://www.ornl.gov/ORNLReview... 34/text/colmain.html.) finds that American releases from each typical 1 GWe coal plant in 1982 were 4.7 tonnes of uranium and 11.6 tonnes of thorium, for a total national release of 727 tonnes of uranium and 1788 tonnes of thorium. The total release of radioactivity from coal-fired fossil fuel was 97.3 TBq (9.73 x 1013 Bq) that year. This compares to the total release of 0.63 TBq (6.3 x 1011 Bq) from the notorious TMI accident, 155 times smaller.

So far, there has not been a single confirmed death due to the Fukushima accident. In comparison, there were 20 deaths in the US in 2013 just from mining coal. That's not to mention all the deaths caused by cancers and other health problems from breathing polluted air.

If we're ever going to get on top of this climate change challenge, nuclear must be leading the charge. Nuclear is a safe, non-polluting technology. Modern designs are fail-safe in every sense of the word. The newer designs can even cope with a loss of external power (like Fukushima experienced) yet still stay safe.

This is the 21st century. The technology is mature, sensible and safe. Really, we should be looking to retire every coal-fired plant as a matter of urgency, if only to reduce the amount of radioactive contamination of the atmosphere!

Comment A few problems... (Score 5, Insightful) 149

A few problems:

- What about circular reactions?
- Is SQL really the right language for encoding business logic?
- Triggers are kind of an anti-pattern.
- What about atomicity? What if I need the whole reaction chain to work or none of it?

I'm afraid there are more questions than answers with this proposed pattern.

Comment Re:And they called me crazy (Score 3, Interesting) 221

256GB USB drives full of true randomly generated one-time pads

I know this is a piece of humour but since this is Slashdot why not?

What a lot of people don't understand is that this is much harder than it first appears. For example, doing cat /dev/random to a file on disk will not give you bytes suitable for use in an OTP.

The issue is that many TRNGs hash their entropy pool with a cryptographically secure hash. When you use such a hash, there is no guarantee that the input space is uniformly mapped to the output space.

To illustrate this, suppose we had an entropy pool 1024 bits deep, and suppose that before producing output the pool is hashed with SHA-1, whose output is 160 bits wide. There is no proof whatsoever that if we cycled a counter from 0 to 2**1024, the hashes would distribute evenly over the 2**160 possible hash outputs. If they did, each output hash value would appear exactly 2**864 times. It is highly unlikely that this is the case.
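The same argument can be demonstrated at toy scale. This sketch hashes a full 16-bit counter space with SHA-256 truncated to its first byte (SHA-256 standing in for whatever hash a real TRNG uses); a perfectly uniform mapping would put exactly 256 inputs into each of the 256 buckets, but a cryptographic hash behaves like a random function, so the counts vary:

```python
import hashlib
from collections import Counter

# Hash every value of a 16-bit counter and bucket by the first
# output byte. Uniform mapping => every bucket holds exactly 256.
buckets = Counter(
    hashlib.sha256(i.to_bytes(2, "big")).digest()[0]
    for i in range(2**16)
)
# The bucket counts spread around 256 rather than hitting it exactly,
# so the output is (in principle) distinguishable from true randomness.
print(min(buckets.values()), max(buckets.values()))
```

Scaled up to a 1024-bit pool and a 160-bit output, the same non-uniformity is what breaks the OTP's information-theoretic guarantee.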

What this means is that the output is distinguishable from a true random source, which completely breaks the security proof for the OTP. Granted, the attacker would likely have to do an infeasible amount of work to use this distinguisher. However, the OTP's proof gives you security against computationally unbounded adversaries; that's the whole point of using the OTP!

So in short, you can't use /dev/random, you can't use pretty much any commercial random number generator. You'd have to roll your own and show that your bias is small enough for no attack to be practical. Like I said, it's harder than it looks.
