From what I have seen, MITRE and NIST often show inaccurate CVSS scores on CVE pages.
Have to stop you there -- sorry for perhaps being a bit pedantic -- but the NIST score is more or less the "official" score of a vulnerability, given how closely they work with organizations like MITRE. The CVSS scoring rules have some nuance to them, and in some scenarios the official way to score a vector is not what you'd expect. NIST tries to follow the official scoring rules as strictly as possible. You may not agree with the rules (and many people don't; I'm not trying to knock you), but technically their scores are the most accurate.
CVSSv3.0 was recently released to address some of the criticisms of v2 scoring. It reworked the base vector to be a bit easier to comprehend, adding explicit metrics like "user interaction required", which in v2 was buried inside "access complexity". In general I like the concepts, and it makes scoring easier for the most part, but time will tell whether the general public agrees. The sticking point, I think, is the idea of scope, which is not a bad idea in general, but the definition seems a little fuzzy to me. We may have only shifted where the nuance lives, so disagreement in scoring may well continue.
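To make that concrete, here's roughly how the same hypothetical flaw might be written in each version (illustrative vectors, not taken from any real CVE):

    v2:   (AV:N/AC:M/Au:N/C:P/I:P/A:N)                   <- user interaction is buried inside AC:M
    v3.0: CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:L/A:N   <- UI:R is now explicit, and S is the new scope metric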
In order for the metric to be truly useful, every organization has to localize the measurement to its own environment, and each vendor needs to measure impact against its use or non-use of the underlying code. At the end of the day it's all about risk measurement, but with those steps you end up with a reasonably accurate assessment.
Exactly. CVSS allows for this through its temporal and environmental scores, but unfortunately, most organizations don't use them. That means most people run around quoting the base score without a clear sense of how it applies to them. I've seen vulnerabilities with a base score of, say, 7.0 knocked down to around 1.5 after you factor in temporal factors (such as a patch being available) and environmental factors (such as the software not being very widely deployed). I wish more people would talk about the environmental factors. CERT is one of the few places that publishes temporal and environmental metrics, though their database is not comprehensive.
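To put numbers on that, here's a rough sketch of the v2 temporal/environmental math, using the equations and metric weights from the public CVSS v2 guide. The vector and the organization-specific values are made up for illustration, and the function names are mine:

    # Sketch of the CVSS v2 scoring equations (per the public v2 guide).
    # The example vector below is hypothetical, not from a real CVE.

    def round1(x):
        # The v2 spec rounds each score to one decimal place.
        return round(x, 1)

    def base_score(av, ac, au, c, i, a):
        # Base = ((0.6*Impact) + (0.4*Exploitability) - 1.5) * f(Impact)
        impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
        exploitability = 20 * av * ac * au
        f = 0 if impact == 0 else 1.176
        return round1((0.6 * impact + 0.4 * exploitability - 1.5) * f)

    def temporal_score(base, e, rl, rc):
        # Temporal = Base * Exploitability * RemediationLevel * ReportConfidence
        return round1(base * e * rl * rc)

    def environmental_score(av, ac, au, c, i, a, e, rl, rc, cdp, td, cr, ir, ar):
        # Recompute the base with impact adjusted by the security requirements,
        # then apply collateral damage potential and target distribution.
        adj_impact = min(10, 10.41 * (1 - (1 - c * cr) * (1 - i * ir) * (1 - a * ar)))
        exploitability = 20 * av * ac * au
        f = 0 if adj_impact == 0 else 1.176
        adj_base = round1((0.6 * adj_impact + 0.4 * exploitability - 1.5) * f)
        adj_temporal = temporal_score(adj_base, e, rl, rc)
        return round1((adj_temporal + (10 - adj_temporal) * cdp) * td)

    # AV:N / AC:M / Au:N / C:P / I:P / A:P -- weights straight from the v2 spec
    av, ac, au = 1.0, 0.61, 0.704
    c = i = a = 0.275

    b = base_score(av, ac, au, c, i, a)              # 6.8
    t = temporal_score(b, e=0.85, rl=0.87, rc=1.00)  # 5.0 (E:U, RL:OF -- patch is out, no known exploit)
    env = environmental_score(av, ac, au, c, i, a,
                              e=0.85, rl=0.87, rc=1.00,
                              cdp=0.1, td=0.25,      # CDP:L, TD:L -- not widely deployed
                              cr=1.0, ir=1.0, ar=1.0)
    print(b, t, env)                                 # 6.8 5.0 1.4

With a patch available, no known exploit, low collateral damage, and low target distribution, a 6.8 base lands at 1.4 -- roughly the kind of drop I'm describing.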
CVSSv3.0 is weakest in that it essentially threw out the environmental metrics; yes, they're technically still there, but they're a shadow of their former self -- the group no longer includes important metrics like population (v2's target distribution). I hope they put that back in for CVSSv3.1 and encourage more widespread adoption.
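This is the gap I mean, summarizing the two specs:

    v2 environmental:   CDP (collateral damage), TD (target distribution, i.e. population), CR/IR/AR
    v3.0 environmental: CR/IR/AR plus "modified base" metrics -- CDP and TD are gone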
There is nothing wrong with the current system that wider adoption and education cannot fix. Part of the problem is the media hype surrounding the bugs. If every little issue didn't get a cute name -- Shellshock, Logjam, POODLE -- the reactions might be a little less knee-jerk.
I agree, but education can take a while and be harder than you think. There's momentum -- and money -- behind the current system. You get everyone wound up, and then offer to sell a widget that "protects against it". There's a lot of snake oil for sale in the industry right now, and so far, companies and governments are eating it up. It will continue as long as money is being made. The bigger question is: how do you make it more profitable to tell the truth about threats?
Organizations like CERT tend to talk straight and provide honest feedback with their temporal and environmental scores, but they don't get picked up in the media as much as the security start-ups that are out to cause a ruckus and make money. The start-ups seem to me to be more marketing companies than security companies these days; they tend to overinflate the CVSS base score and talk it up by reaching out to media directly, when in reality the base score itself may not be that high, never mind that temporal and environmental factors might lower it further. Fear makes money right now.