We have to look critically at any promising new technology, if only because the odds are strongly in favor of it failing to take root. Sometimes the underlying ideas are not entirely sustainable, sometimes a competing technology takes the lead (whether or not due to merit, the result is the same), sometimes there isn't enough market uptake to sustain it.
I'm happy to learn that most of my technology bets have paid off, but that's at least in part due to betting conservatively. This isn't cynicism, it's a recognition that when faced with an abundance of choice, one way to reduce the flood to manageable levels is to have good rejection heuristics. What's left can then be examined in depth.
The point you make about the inherent cost of change is valid as well, though if anything you understate it. There are secondary effects which make technology change particularly expensive. Technology rarely stands alone; more often it has to be integrated with current systems and practices as well as emerging ones. During the transition you end up making many small but pervasive changes to accommodate both the old and the new technologies. If your existing app has required a lot of Apache customization, it will take time to transition it to nginx. Meanwhile the requests for customization don't suddenly come to an end, so now you find yourself tracking twice the number of changes while also managing the platform transition. The net effect is combinatoric.
I managed an AI research lab for a dozen years. Our goals were always pragmatic and our methodology as rigorous as we could make it. We didn't need to engage with the "AI of the gaps" problem because, in the first place, nobody was trying to formulate grand claims, and in the second place, there was plenty of useful work to keep us busy. The main philosophical shift I saw in my time there was toward situated intelligence, but in practical terms there was no big shift for us, because most of what we were doing was situated in sensing and action anyway.
From that experience I've taken away a sense that it makes no more sense to talk about AI in the abstract than it did for the classical philosophers to debate the origins of the universe. Presumably there are answers to these questions, since we have a universe and we have something which we're prepared to agree is intelligence. But it can take a very long time to build up enough empirical knowledge to demonstrate an answer convincingly. It's funny that when that finally happens, all the attendant philosophical paradoxes and conundrums, charming and elegant though they may be, simply fall away. There's a bit of a philosophy of the gaps as well.
Randomness is not a sufficient mechanism for free will in any case. Lots of heuristics in AI are pointedly nondeterministic, but designing them that way doesn't suddenly confer free will on them. For example, I think that the process of "simulated annealing" for heuristic search is quite elegant - it always returns some solution within whatever annealing schedule you give it, and the shorter the schedule, the less likely that solution is to be a good one - but the process is completely explicit and nowhere draws upon anything we could reasonably call free will.
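For the sake of concreteness, here's a toy sketch of the mechanism (the cost function, neighbor step and cooling parameters are mine, purely for illustration). Every random draw comes from an ordinary PRNG, and the accept/reject rule is spelled out in full, so nothing about it resembles volition:

```python
import math
import random

def simulated_annealing(cost, neighbor, start, steps=10_000, t0=1.0, alpha=0.999):
    """Minimize cost() by sometimes accepting worse neighbors, with a
    probability that shrinks as the 'temperature' cools."""
    current, current_cost = start, cost(start)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - current_cost
        # Always accept improvements; accept regressions with probability exp(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        t *= alpha  # cooling schedule: fewer uphill moves as t shrinks
    return best, best_cost

# Toy usage: find the minimum of a bumpy one-dimensional function.
f = lambda x: x * x + 3 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, step, start=random.uniform(-10, 10)))
```

Run it twice and you'll likely get slightly different answers, which is exactly the point: nondeterministic, yet nothing but arithmetic.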
But that's no different from business as usual. If your development process can't handle development, you've got a problem.
Oh wait, not in Debian/Ubuntu, because deb packages don't properly support relocation.
In about 1996, when we were supporting a heterogeneous network of several hundred Unix systems which had to support a combination of both BSD conventions (everything flattened under a few shared directories) and System V conventions (each package in its own directory), we adopted the System V approach.
Organizing around System V style was critical to making this successful, because we had to expose multiple versions of each app on multiple platforms. Because each app version was distinctively located under its own directory, we could expose exactly the combination of versions each platform needed without them interfering with one another.
The same architecture allowed us to automatically cache local repositories of apps in addition to the canonical ones maintained on the fileservers. Our institution supported multiple research groups which in some cases maintained their own fileservers and software licenses and in other cases shared in the funding and operation of common resources. The same architecture also allowed us to cleanly arrange for both users and client systems (workstations and compute farms) to have not only the correct range of access to the entire superset of apps across the fileservers but the correct precedence in which they would be found.
Putting each program in its own directory turns out, in practice, to be a very sensible idea as soon as requirements extend beyond a single isolated workstation and a handful of apps.
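To give a flavor of how mechanical this becomes, here's a rough sketch of the kind of resolution logic involved (the /opt-style layout, repository paths, app and platform names below are hypothetical, not what we actually ran):

```python
from pathlib import Path

# Hypothetical repositories, listed in precedence order: a local cache first,
# then the group fileserver, then the shared institutional fileserver.
REPOS = [Path("/cache/apps"), Path("/net/groupfs/apps"), Path("/net/sharedfs/apps")]

def locate(app: str, version: str, platform: str) -> Path | None:
    """Return the first repository that carries <app>/<version>/<platform>,
    honoring the precedence order above."""
    for repo in REPOS:
        candidate = repo / app / version / platform
        if candidate.is_dir():
            return candidate
    return None

# Each app/version/platform lives in its own directory, so exposing it is
# just a matter of prepending its bin directory to PATH (or symlinking it).
if (found := locate("emacs", "19.34", "sunos5")) is not None:
    print(f"PATH entry: {found / 'bin'}")
```

Because every version is self-contained, precedence and caching fall out of a simple ordered search rather than a pile of special cases.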
When I head off to the island or to go sailing, I'm deliberately unplugging. Lying in the hammock, listening to the ambient sounds and feeling the dappled warmth of sunlight, if I'm going to immerse myself in writing, it's going to be a paper novel or a magazine I picked up on the ferry as a treat to myself, something like "Wooden Boat". It's better for the soul.
Speaking of weighing evidence, can you be a little more specific than a vague reference to "half a dozen smaller countries"? It's not possible to take such claims seriously. They certainly don't constitute grounds to think less of Mozilla, but they do raise doubts about you if this is your best way of establishing credibility. (And no, you can't date my daughter either, in case you were wondering. You're definitely not in her league.)
Purely in terms of policy, I'm more inclined to favor removing a questionable root cert than installing it on the off chance that it will be missed. You're claiming that its removal will "force" citizens to "use insecure communications" when such is not remotely the case:
- If you're serious about security, you can generate your own cert for free, or set up your own CA for that matter. It's done all the time; there's a minimal sketch after this list. I've personally led four large internal PKI initiatives: two for industry, one for academic research and one for government. This approach is more robust than going to a third-party CA.
- You're not forced to do anything when one particular CA has come under a shadow of doubt; there are hundreds of CAs who will be delighted to sign your cert request in exchange for a modest fee and a pathetic level of background verification. The "weak link" CA problem is not due to scarcity but to excess. And finally,
- There's nothing stopping you from installing any root cert you like, including reinstalling the very certs that the browser maintainers have determined are suspicious. Go for it. Have a blast.
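To put a little substance behind the first point above, here is a minimal sketch of generating a self-signed cert with the Python "cryptography" package (the names and lifetime are illustrative; a couple of openssl commands will do the same job):

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a key pair and a certificate that signs itself.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "internal.example.org")])  # hypothetical host

cert = (
    x509.CertificateBuilder()
    .subject_name(name)        # self-signed: subject == issuer
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# Write out the key and cert in PEM form for whatever service needs them.
with open("server.key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    ))
with open("server.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```

Distribute the resulting cert (or your own CA's root) to the parties you actually need to trust you, and you've sidestepped the third-party CA entirely.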
The preinstalled root certs have enormous leverage. If the validation of certificate requests performed by CAs is a known weak link in X.509, how much more so the point where those CAs are designated as trusted?
Thanks to the efforts of Mozilla, among others, we have a much more diverse browser ecosystem than even a few years ago. To some extent at least, the free market can decide which browser to use. I know that I'm more inclined to use a product that is squarely on the side of human rights than one which can be used as an instrument of oppression. And these difficult questions of policy and enforcement provide a chance for Mozilla to distinguish itself, which I think it's doing very ably.
As a preeminent place for the exploration of ideas, MIT held a refreshingly open attitude towards all forms of intellectual curiosity, collaboration and information exchange - both ancient and emerging. That spirit is what I associate with people like Richard Feynman, Noam Chomsky and Richard Stallman, who not only have fundamentally interesting ideas to share but are particularly outspoken about the freedom to be outspoken.
It's significant that the MIT Lisp Machine and its various exotic descendants provided no authentication. This was a fairly extreme design decision that, in my view, only makes sense in this particular social context. Many of us objected to that decision on technical grounds, but in fact no one knew whether it would turn out to be a brilliant move or a naive one.
Well, now we know. The letter from Israel Ruiz gives a nod to the original spirit of the Internet:
MIT has a long history of operating an open network environment, allowing devices on MIT's network unrestricted incoming and outgoing access to the Internet.
I saw the real breakdown beginning, oh, just about exactly 20 years ago. Windows 3.X. It was crap, and we laughed at it. But businesses bought into it at a faster rate, and were more thoroughly locked into their decision, than we had ever experienced in the scientific/engineering community. Expectations of it were completely unrealistic and driven by desperation, which management downloaded onto the IT staff.
Public perception of IT shifted from respect for expertise to open disdain. Why? As long as graphics workstations were being used within an expert community, the respect for expertise was natural. It's easy for one engineer to recognize the worth of another. But once any consumer could go out and purchase what looked at least superficially like the same thing, and twiddle on it, it would be easy to assume out of simple ignorance that all the so-called IT experts were just twiddling too.
Well said. And bad decisions made when integrating complex systems accrue compound interest. One staffer goes in and hacks some config file as a workaround for something that really should be fixed properly. Before he can fix it properly, he's off putting out another fire somewhere. By the time the next staffer gets asked to straighten out the accumulated mess in the config file, the reason for half of the workarounds is no longer clear. You don't dare touch them, so you end up working around the workarounds.
Because not everything happens at once, especially in government. Nor is the public sector famous for agile development.
Government, by its nature, is bureaucratic. When we're on the receiving end of government services, we often perceive the bureaucracy as ponderous and inefficient. That's because accountability is a big part of the system. I've worked in government IT alongside some really progressive, dedicated and talented people where we had a clear mandate, ample funding and few political enemies. I figure in this best-case scenario we spent 80% of our time on project tracking and accountability.
See, you never have to turn a profit in government. But you will be audited. You'll be asked to show the work you've done and justify it. Knowing that the audit is coming, you make friends with the auditors and ask them what controls need to be put in place. Since you have to be in compliance eventually, that's the most efficient course of action, but it adds a chunk of overhead up front. There may be several auditing bodies: one for finance, one for security, one for privacy, one for ethics, one for affirmative action. And of course there are other controls to make you accountable to your boss, and he to his boss and so on.
But you're also accountable to your colleagues. Unlike in the private sector, there's no such thing as "good enough". As a business owner, I can decide to launch a product at any moment I decide the time is right. No matter how broken it may be and no matter whether the market is ready for it or not, that's my exclusive decision. Not so in government. All it takes is for someone at a meeting to raise the idea of a unit test that could possibly be done and suddenly you're on the hook to do it. This is not because your colleague can tell you what to do, but because his comments were minuted and those minutes were circulated to all the stakeholders - which includes your department head, who is now responsible for ensuring that if anyone in future ever asks about that unit test, you will be able to give him the test logs. And let me tell you, half of all government workers are worrywarts. They sincerely think that coming up with new things to worry about is a positive contribution to a project. To be fair, sometimes it is.
See how it works? Now, along comes a new mandate which says not merely to "evaluate" open source but to "prefer" it. People who are running projects that have already done their initial requirements gathering and compliance controls and are now onto architecture and design, what are those people going to do? Their first reflex is not going to be to go back to the drawing board. Not in a million years. Their first reflex is going to be to hold a series of meetings to show due diligence in evaluating whether or not the project now underway is subject to the new open source mandate.
On the other hand, any new project to come along is going to be subject to the open source mandate. People who don't like it will privately grumble but they'll go along. People like me who've been waiting for the mandate will happily embrace it. And then it's payback time. We'll be the people in the meetings raising the questions about due diligence in respect of open source compliance. We'll be the ones suggesting cost and performance controls for existing projects so that in future we can measure their value against comparable open source projects.
Open source will win out. Open Document Format will eventually win out. On merit, mind you. You just have to understand that government moves very slowly and cautiously. But once it becomes a matter of policy you can take it to the bank that these things will happen.