Questionable, in the sense that the theory is a very speculative extrapolation, from the data we have been able to observe, about the origins of the universe before the earliest "time" we can actually observe. Just because something fits a mathematical model doesn't mean we have solid evidence for it; it simply means it's a model which matches what we've been able to [indirectly] observe. You could say the same thing about n-dimensional string theory as a unified model, for example.
It's gratifying to see that the public's general acceptance of scientific theories is roughly proportional to the actual evidence supporting those theories. For things for which there is good evidence, there is broad understanding; for things which are highly questionable and politicized, there is much skepticism.
Good for the US population.
It is an interesting conceptual argument, although it ignores a couple of real-world points.
First, not all bugs are equal in terms of exploitation opportunity, which he glosses over; a vulnerability is only as valuable as whatever it can be exploited to gain access to, in monetary terms. A bug in something which cannot be exploited for any particular gain is next to worthless, in market terms.
Second, not all companies will pay for vulnerability information, because it's not just a value proposition, but also a risk and resources assessment. If nobody expects your software to be "secure", there's no point in spending much money on software security; for example, nobody pays much attention to the software in cars (yet), so manufacturers have little financial incentive to make it secure. Moreover, if you don't have deep pockets, you're not going to pay for exploits, especially if you're struggling to simply produce features that potential customers want. In either of those scenarios, the value proposition for paying for exploits is inconsequential.
Most (by volume) software has an effectively unlimited number of bugs, which nobody will pay for. That's the real world of software.
Well, speaking as a [software] engineer...
In my profession, there are certainly certifications one can get, and ethical considerations (as a general statement), although there is no particular licensing. Regardless of these, though, I am employed to write software, but I would not certify that the software I write is flaw-free (nor would anyone else that I know). It's entirely possible that, due to flaws in my work product, someone will lose money, or have other negative outcomes befall them.
If that happened, and my employer blamed me publicly (explicitly or implicitly), I would be seeking large monetary damages, even if the flaw was my fault. My argument would be that I'm employed to write software, not write flaw-free software, and if the company causes me damages (in current or future income) by stating or implying that I did not perform my work duties appropriately, then that is slander, and they are liable. In this case, the "lie" would be to imply that my work product was supposed to be flaw-free, which I never asserted or consented to, regardless of what they desired. Implying that someone is unable to perform their occupation is textbook slander, and the company would find themselves writing a large check. And yes, even naming the engineer in this context, without strong evidence of gross or malicious negligence, would be cause for civil penalty (imho).
I guess it just comes down to this: there are laws which protect people from having their lives and/or livelihood ruined by false accusation (direct or implied), and implying that an engineer must create a flaw-free work product to be proficient is a false accusation (unless there's a specific contractual obligation to do so, and that would seem suspicious). If I were a company considering this, I'd think twice, and then not expose myself to the obvious liability.
I could see two potential outcomes, if blaming engineers for product flaws becomes commonplace...
First, engineers will (or should) demand an indemnity clause as part of their employment contract, where the company agrees not to blame them publicly for any product flaws, and/or take any action which would identify them. Depending on the repercussions for the test cases, this might become a necessity for employees.
Second, I could see some significant lawsuits for slander, since the company is causing real (and substantial, and more importantly provable) financial loss for the engineers they blame for product deficiencies. Unless they have a pretty solid defense of gross or malicious negligence on the engineer's part, they could (and absolutely should) find themselves paying out a few million more to each engineer they throw under the metaphorical bus.
Companies are responsible for their products, not the people they employ to make/provide them. Companies reap the rewards when they work, and bear the responsibility when they don't. Absent malicious negligence, naming/blaming individual employees is irresponsible at best, and should absolutely expose the company to civil liability.
This may seem infeasible and/or culture-prohibitive, but there's another way Microsoft could go, which could see them gaining market share in mobile, and perhaps even surpassing Google/Apple eventually.
Instead of trying to figure out how to optimally leverage the technology and assets they currently have (Nokia, Office, Windows Phone, patents, etc.) to maximize their own profit, as they have been doing for the last decade or so, they could try something new: building something which actual customers want. I know it's somewhat unheard of in the age of big companies, patent portfolios, and quarterly reports, but anyone who thinks there's not substantial room for innovation in virtually all aspects of the mobile space is simply not trying to think.
Microsoft had (and still has remnants of) the technical might to pursue multiple avenues of innovation at the same time. If they could simply change focus away from brow-beating their reluctant customers with their latest profit-optimized business plan, and toward giving customers what they actually want (here's a free hint: a phone where you are in control of where your data is, and what apps can access it), they could still do quite well. If they continue their current business mindset, though, it won't matter what they pick to focus on: eventually, they will be toast.
That's what I "love" about my "representative": never afraid to state the blindingly obvious, while completely and derisively ignoring the will of the people she nominally speaks for. Of course the government is not going to willingly give up their police-state surveillance powers; governments never give up power they have taken, legally or otherwise. Blah blah, security, protection, something about terrorism, etc.
Just to point at one example: Elementary. The commercial volume is consistently MUCH higher than the show volume, which itself fluctuates enough during the show to make it annoying to watch. If the FCC really wanted results, they could just have some automated application "listening" to programs and fining broadcasters automatically, rather than judging effectiveness by the number of people with enough spare time to go through their complaint process. Given how easy that would be, I'd say they have no desire to actually help anyone.
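As a rough illustration of how simple such automated monitoring could be, here's a minimal sketch. The function names and the 2 dB threshold are my own placeholders, and plain RMS is a crude stand-in: real broadcast loudness enforcement (e.g. under the CALM Act) uses the more elaborate ITU-R BS.1770 weighted measurement.

```python
import math

def loudness_db(samples):
    """Root-mean-square level of a block of samples, in dB.
    (Crude stand-in for a real BS.1770-style loudness measure.)"""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def flag_loud_segment(program, commercial, max_excess_db=2.0):
    """Flag a commercial whose loudness exceeds the surrounding program
    by more than an allowed margin (the margin here is made up)."""
    return loudness_db(commercial) - loudness_db(program) > max_excess_db

# Quiet program audio vs. a commercial at twice the amplitude (~6 dB louder).
program = [0.1 * math.sin(i / 10) for i in range(1000)]
commercial = [0.2 * math.sin(i / 10) for i in range(1000)]
print(flag_loud_segment(program, commercial))  # -> True
```

Even this toy version reliably flags a commercial mixed at double the program's amplitude, which suggests the hard part is regulatory will, not technology.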
I agree with the list, generally-speaking.
I would expand on the designing-things point: it's not just matching requirements with implementation, but also designing something which will still work 10 years and 20+ design and technology iterations from now. It's designing something to anticipate future changes, and to code in flexibility where appropriate. It's making something which doesn't just perform the function, but doesn't have subtle flaws which will cause very hard-to-diagnose issues down the road. It's building code which will still be maintainable, even after various future iterations. The ability to design code properly is what differentiates a "normal" developer from an extraordinary one.
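To sketch what "code in flexibility where appropriate" can mean in practice, here's a toy example (the `Storage` interface and all the names are hypothetical, my own illustration): callers depend on a narrow interface, so future backends can be swapped in over the years without touching the rest of the code.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Narrow interface the rest of the code depends on; concrete
    backends can change across future iterations without touching callers."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """Today's simple backend; a database or cloud store could replace it later."""
    def __init__(self):
        self._data = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value
    def load(self, key: str) -> str:
        return self._data[key]

def record_event(store: Storage, name: str) -> None:
    # The caller only knows the interface, not the backend.
    store.save("last_event", name)

store = InMemoryStorage()
record_event(store, "login")
print(store.load("last_event"))  # -> login
```

The design choice is the anticipation itself: the seam costs a few lines today, and pays off the first time the backend has to change.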
No worries, the Kinect only needs to be connected and powered on for the system to function, you can "turn it off" (in software), and it won't do "anything" [that you can see]. Moreover, the XBone doesn't need an always-on internet connection, so even if it were watching your every move and listening to you 24/7, it wouldn't be uploading that information until the next time you connected. And even if it were secretly doing that, Microsoft wouldn't be sharing that data with the government unless legally required to. And even if they were sharing the data voluntarily through a well-documented Prism access tunnel, you have nothing to worry about unless you are a terrorist. And you're not a terrorist, are you?
The system could be fixed trivially, if people cared to do so. For example, add a provision to the law which allows an affirmative defense of independent discovery; after all, an innovation worthy of a patent is supposed to be innovative enough that other people cannot just stumble upon it through normal evolutionary work. That would put the onus on the patent holder to prove that the patent information was accessed by the accused-infringer during development, which would make it substantially harder for patent trolls to exist (that is, it's much easier to assert that you didn't see the patented item in the Patent Office publications, than in a competitor's widely distributed implementation).
The fact that it's not fixed points to malfeasance on the part of Congress, as much as private individuals taking advantage of the system.
(I work in a small but successful company as the lead developer.)
To be honest, most companies produce fairly bad software, fairly inefficiently, and you don't need particularly good developers to do that. You can make a good amount of money just supporting existing products someone wrote a while ago, adding small improvements for marketing purposes, and/or writing something which is not particularly complicated, but profitable (see: 95% of apps for mobile, internal enterprise apps/scripts, template-based web sites, etc.).
You need "rock star" developers for only a few things:
- Doing cutting-edge research type programming and/or optimization (eg: DB design work, compiler design, optimizing embedded firmware, etc.)
- Doing necessarily complex functionality, and only if you _need_ it (eg: highly multi-threaded apps, lock-free programming, etc.)
- High-level design, organization, and/or refactoring for large/critical projects (eg: re-organizing the Windows/Linux kernels)
- Writing "optimal-to-maintain" code to do more with less people/time/resources (eg: having fewer than five people writing and supporting high-value, expansive enterprise apps)
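To illustrate why the multi-threaded bullet belongs on that list, here's a deliberately small producer/consumer sketch (the names and the sentinel-based shutdown scheme are my own choices, not from any of the cases above). Even at this size, clean shutdown and shared-state handling are exactly the details that naive code gets subtly wrong:

```python
import queue
import threading

def worker(tasks: queue.Queue, results: list, lock: threading.Lock) -> None:
    """Drain tasks until a None sentinel arrives; the sentinel handling
    and the locked append are the easy-to-botch parts."""
    while True:
        item = tasks.get()
        if item is None:        # sentinel: clean shutdown for this worker
            tasks.task_done()
            break
        with lock:              # protect the shared results list
            results.append(item * item)
        tasks.task_done()

tasks: queue.Queue = queue.Queue()
results: list = []
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in threads:
    tasks.put(None)             # one sentinel per worker, or some never exit
for t in threads:
    t.join()
print(sorted(results))  # -> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Forget one sentinel and a worker blocks forever; skip the lock (or the join) and you get intermittent, hard-to-reproduce corruption or missing results, which is why genuinely concurrent code is worth paying for only when you actually need it.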
If you don't have one of those cases, "normal" developers will probably be fine (assuming reasonable management and organization structure). My 2c.
I am not a lawyer, but...
Based on (at least) the federal prosecution of Barry Bonds over his grand jury testimony, any potential witness should be able to assert their 5th Amendment right. Why? Because in that case, it was established that the government could prosecute you solely for your testimony, if they felt your testimony was not revealing enough, regardless of how accurate it was.
Under that precedent, it would be impossible to give any testimony without potentially incriminating yourself. Thus, you have a 5th Amendment right to refuse to offer testimony (unless the state offered you transactional immunity at all government levels for anything arising from your testimony, which would be highly unlikely). I'm somewhat surprised more people haven't realized the implications of that prosecution, but it seems pretty clear-cut to me.
Thoughts on a way to fix this sort of thing generally:
The government should define a minimum support window for software, say 5 years or so. From the point where you purchase a software product at retail (not resold), you are entitled to support for critical security flaws (ie: exploitable risks which you cannot mitigate with normal usage) during that period. At the vendor's option, that support can be either free software patches (with no degradation of functionality or additional licensing requirements/terms), full version upgrades (under the same conditions), or the release of the complete source for the product into the public domain (BSD-style). The last option would be the legally-mandated requirement if the vendor was unwilling or unable to supply one of the first alternatives. Companies could, of course, adjust pricing of their software as appropriate to comply with the mandate.
It's not a very clean solution, but it would do wonders to curtail the "forced paid upgrade" trend in software. Plus, companies with "good" support policies in place (both large and small) would benefit.