Although I hate Miro as software, I have to give them credit for getting the concept right (TiVo for internet TV) and having a great library of content feeds (including MAKE and most of the TED series), which makes me happy enough to use it despite its resource hogging and glitches.
I couldn't agree with you more; however, I would like to point out the reasons why the bureaucracies exist and why we spend so much more on government programs than private ones. Having worked in both sectors, I understand what's going on and the advantages and disadvantages of both sides.
The reason government programs are so costly is accountability. Unlike private organizations, government institutions are nothing more than organizational structures. Therefore their setup must assume minimal competency at each point in the decision tree. Because of this, decisions are split into sub-responsibilities, with each sub-responsibility sent off to a separate node in the tree. Those nodes each send their assessments (most of the time a simple pass/fail judgment) up the chain, where someone else signs off. Funding allocations are determined by how effective the whole unit is, and the point of the process is not only to produce an item, but also to produce an accountability trail thorough enough that any failure can be re-examined, the root causes brought out, and blame properly assigned. When government partners with industry, the bureaucracy doubles, because the government side of the process has to justify all the changes, expenses, and performance of the industrial partner. On the gov side the motto is "Trust But Verify". The partner has to keep nearly identical records in case they fail and the funding they've been provided gets clawed back.
The fact that this process gets in its own way is an issue, but the thought behind it isn't a bad one. There's something to be said for having a trace of the thought process behind development. That said, even though every decision and discussion is currently documented and recorded, the volume the government produces acts most effectively as camouflage, and the sheer volume of reporting makes it difficult to trace back to root causes when things go bad.
To increase efficiency, you must accept more liability at each node in the tree. You improve competency at each node, and things move faster. By making nodes more competent, you can eliminate redundant nodes and streamline the process. On top of that, those streamlined nodes are only performing half the function, in that they aren't producing the same accountability trail that is required in government work.
I'm not saying that the government way of doing things is correct, but I would say that there is a philosophy behind it that isn't totally worthless, even if the implementation is screwed up. I'm willing to bet that if SpaceX exhibits a failure, you'll have to turn to one of their senior engineers to understand what went wrong and why. And if you're going to invest lots of public money into the system, there should be some assurance that if that happens, you don't just get a shrug and an "I don't know" in return. There should be a way of confirming that what that engineer says the day after the failure is consistent with what he said before the failure.
Monty went beyond that to suggest that all the company's talent was going toward other projects instead of MySQL, and that this was hurting the quality of the project. So it doesn't seem to be so much about the existence of bugs as about where the quality bar is set and how the company is managed. Some of it might be because there isn't a strong enough grasp of how the product is being used to let people make those kinds of value decisions.
More importantly, it's impressive to see a company realize that letting its development people say what they want and contribute to the conversation, rather than telling them to shut up and get in line, beats trying to squelch them. The idea that open source means more than just disclosing code is a key part of becoming a member of the community, and it seems like a culture shift in Sun's thinking. Definitely progress from 5-10 years ago, when this would have been unthinkable.
In other words, a tribe is established by who we choose to share information with, and although we can now share information globally without respect to boundaries, that doesn't mean we're part of a "global tribe", because the tribe is still a subset of the global system. Where who we choose to share info with might once have been an issue of geographic happenstance, it is no longer a sufficient criterion for the designation of a tribe. There is no longer a one-to-one mapping between the people in close proximity to me and the people who have open access to my information and actions.
This is why
The CAN-SPAM Act is directed at the commercial entities that actually create the message, not the service providers who happen to be the medium.
since the actual medium, as it's put, is already constitutionally protected from liability. So although ISPs are not common carriers in the US, the law is virtually identical for the considerations discussed in the article.
I don't work in IT, but I do work with very sensitive, high-risk data, so IT security is an important topic for me and I try to understand it as much as possible.
The reason I'm shocked is that I'd expect people to at least recognize that as a blind spot in their training before they ever graduate. Our system is exclusively proprietary, so I understand not everyone would need it, but it seems like it would make sense to know it, since a good seventy percent of the papers I see involve Unix solutions. Granted, I might just be operating in a biased environment, but I figured everyone would know at least the basics about Linux.
And to ducomputergeek: I was implying that I don't see Perl as being a Unix-based language... as far as I know it's platform independent and isn't strictly Unix-based.
The methodology is complemented with an introduction to the standard Unix-based text processing tools (grep, awk, Perl, etc). This methodology is later on applied, with a strong hands-on and how-to spirit, to an extensive set of common security use-cases, such as the perimeter threat, compliance, and the insider threat.
Do you really want someone who doesn't know grep to be a security admin? And is Perl correctly included in that list?
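For what it's worth, the kind of task being described is pretty modest. Here's a minimal sketch in Perl of the sort of perimeter-threat use-case the book mentions: counting failed SSH logins per source IP. The log path and message format are my own assumptions for illustration, not anything taken from the book.

    #!/usr/bin/perl
    # Minimal sketch: count failed SSH logins per source IP in an auth log.
    # The log path and message format are assumptions for illustration only.
    use strict;
    use warnings;

    my %failures;
    open(my $log, '<', '/var/log/auth.log') or die "cannot open log: $!";
    while (my $line = <$log>) {
        # Match lines like: "Failed password for root from 10.0.0.5 port 4242 ssh2"
        if ($line =~ /Failed password for .+ from (\d{1,3}(?:\.\d{1,3}){3})/) {
            $failures{$1}++;
        }
    }
    close($log);

    # Report source IPs, most failures first.
    for my $ip (sort { $failures{$b} <=> $failures{$a} } keys %failures) {
        print "$ip\t$failures{$ip}\n";
    }

If someone can't read or write something at this level, I'd question whether they can do the job.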
Now although they do go on to continue crunching MPG numbers further in the article, at least they cop to the fact that the numbers aren't real. This goes to my point that we need a weighted kilojoule-per-mile calculation for reporting the efficiency of hybrid vehicles. People should have an understanding of how efficiently they use the electric energy, and that the car doesn't just run on gas with fantastic efficiency; the energy used in making that MPG number first came from your wall socket.

"With the Plug-in Electric Hybrid version of the Aptera (typ-1h) the mileage of the vehicle is difficult to describe with one number. For example, the Typ-1h can drive 40 to 60 miles on electric power alone. Perhaps for such a trip, the engine may only be duty-cycled for a few seconds or minutes. This would produce a fantastic number, an incredible number that, though factually true, would have no useful context, i.e. it's just a point on a graph."
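To make that concrete, here's a minimal sketch of what a weighted kilojoule-per-mile figure could look like, folding the gasoline burned and the electricity drawn from the wall into one number. The energy constants and the example trip below are illustrative assumptions, not Aptera data.

    #!/usr/bin/perl
    # Minimal sketch of a weighted kilojoule-per-mile figure for a plug-in hybrid.
    # The energy constants and the example trip are assumptions for illustration,
    # not Aptera specifications.
    use strict;
    use warnings;

    sub kj_per_mile {
        my ($gallons_gas, $kwh_from_wall, $miles) = @_;
        my $gas_kj  = $gallons_gas   * 120_000;  # ~120 MJ per US gallon of gasoline
        my $wall_kj = $kwh_from_wall * 3_600;    # 1 kWh = 3600 kJ
        return ($gas_kj + $wall_kj) / $miles;
    }

    # Example: a 60-mile trip on 0.2 gallons of gas plus 10 kWh from the wall socket.
    printf "%.0f kJ/mile\n", kj_per_mile(0.2, 10, 60);

The point is simply that the wall-socket energy shows up in the total instead of silently disappearing the way it does in a gas-only MPG figure.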
Don't hit the keys so hard; it hurts.