MAD Magazine and Tom Koch nailed it with "Three-Cornered Pitney". Any good board game should involve uncooked popcorn kernels, conch hackers, rolling whirtlings, and pancakes.
The Rhode Island School of Design (RISD), one of the world's preeminent schools of art and design, is also the leader of the STEAM educational movement. STEAM is an acronym created by adding an A for Art into STEM, the term representing the US government's current emphasis on education in Science, Technology, Engineering, and Mathematics.
From the summary:
“they report that language design does have a significant, but modest effect on software quality.”
“strong typing is modestly better than weak typing”
“static typing is also somewhat better than dynamic typing”
“functional languages are somewhat better than procedural languages”
“It is worth noting that these modest effects arising from language design are overwhelmingly dominated by the process factors”
“we hasten to caution the reader that even these modest effects might quite possibly be due to other, intangible process factors”
“Hence, we are unable to quantify the specific effects of language type on usage”
Paraphrased: “Findings were inconclusive.”
You know that feeling you get when you read a Dilbert cartoon and think it was specifically written about you (or your workplace)? That's the feeling I got reading your post.
Each time I see someone mention the Dunning-Kruger (D-K) effect, they focus only on the first manifestation: unskilled individuals tend to suffer from illusory superiority, mistakenly rating their ability much higher than is accurate.
But, there is an equal-but-opposite manifestation, as well: highly skilled individuals tend to rate their ability lower than is accurate.
Why is this one typically ignored?
"Research from firms like Gartner are accepted without question; even though they can get their results from untrusted and unvetted sources."
"With myriad statistics, surveys, data breach reports, and global analyses of the costs of data breaches, there is an overabundance of data, and an under abundance of meaningful data."
Unchecked sources. Abundance of meaningless data. These are problems.
"The authors note that information security and operational risk has operated for far too long as an art, with not enough science. This is the gap that FAIR attempts to fill."
Yet, the book doesn’t seem to address those problems.
The CRASH 2014 "Summary of Key Findings" can be found at http://www.castsoftware.com/advertising-campaigns/
Thus far, I've been unable to find the actual report. I found and downloaded the "Summary of Key Findings", which says, "This report provides a brief summary of the important results from the full 2014 CRASH Report." But, I can't actually find the "full 2014 CRASH Report".
This is making it difficult to evaluate. Perhaps on purpose...?
On that note: http://www.ipetitions.com/peti...
Help stop "one size fits all" standards!
Sogeti has been presenting marketing as research for years with their World Quality Report: http://www.sogeti.com/solution.... This smells similar.
I’d guess that FDX (Fear Driven X) exists in nearly every industry. Google “motivated by fear” or “driven by fear”, and you won't just find a bunch of software development articles. This is a human problem, not an engineering problem.
Figure out how to stop this type of behavior at a larger scale, and the answer will probably apply to the smaller one.
That is a damn fine blog post. Very well said...
Testing is essentially "evaluating a product via experimentation". While experimentation certainly requires plenty of scientific rigor, it requires plenty of creativity as well. And trying to standardize creativity is unwise. There simply is no "one size fits all" way to test. Extended, or not.
This. Mod parent up.
In (very) short, "testing is evaluating a product via experimentation" (see http://www.satisfice.com/blog/...). According to this definition, truly anyone can test. Anyone can "evaluate a product via experimentation".
However, formal, professional testing also has a purpose: to inform. That is, "testing provides information about the quality of a product so that others can make informed decisions."
So, formal, professional testing is "evaluating a product via experimentation - in order to inform".
Sadly, not everyone thinks like you.
By using words like "internationally agreed" (instead of "locally agreed" or "internationally begrudgingly accepted") and "standard" (implying "the way", rather than "a way"), ISO/IEC/IEEE strikes fear into unthinking, follow-the-crowd company leaders, who then force their workers to...begrudgingly implement and comply with the "internationally agreed standards".
Anyway, I don't believe that something like testing can be standardized. There simply is no "one size fits all" way to test. "Internationally agreed", or not.