Comment Re:Poor Management (Score 1) 347

Efficient fix tracking, evaluation, merging, and testing are all part of the ordinary workflow of any software development organization. Whether a proposal comes from one of the devs, a customer, or a completely random person doesn't fundamentally matter; it's still simply code to be managed. In every case the ordinary workflow routinely has to make decisions, based on merit, opportunity, risk, backward compatibility, and all sorts of other factors, about how and when to incorporate the proposed fix.

But that's nothing different from business as usual. If your development process can't handle development, you've got a problem.

Comment Re:Bad idea. (Score 1) 466

Putting each program in its own directory is an absurdly bad idea. You would end up with a massive PATH variable, and it also doesn't follow the time-honored Unix tradition, which needs to be respected.

Around 1996, when we were supporting a heterogeneous network of several hundred Unix systems that had to accommodate both the BSD convention (everything flattened under /usr/local) and the System V convention (apps contained within their own subdirectories of /opt), we found that an effective solution was to install the actual apps System V style and then symlink into them to produce a BSD-style directory tree that could sit on the users' default PATH.

Organizing around System V style was critical to making this successful, because we had to expose multiple versions of each app on multiple platforms. Because each app version was distinctively located under /opt (or its equivalent network mount point) a user could always ensure that a specific app of a specific version took precedence over the default if necessary. It also meant that we could easily roll out new versions for people to test, that we could advance or revert the default version of an app simply by changing a symlink, and that we could maintain older versions indefinitely without holding back progress.
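
To make that concrete, here's a rough sketch of the symlink farm in Python (our actual tooling was a pile of site-specific scripts; the paths and the app name below are made up purely for illustration):

    # Sketch of a System V style install (/opt/<app>/<version>) exposed through a
    # flattened, BSD-style bin directory that sits on users' default PATH.
    # Paths and app names are illustrative only.
    from pathlib import Path

    OPT = Path("/opt")                   # each app version lives in its own subtree
    FLAT_BIN = Path("/usr/local/bin")    # the tree users actually have on PATH

    def set_default(app: str, version: str) -> None:
        """Make one specific version of an app the default by relinking."""
        FLAT_BIN.mkdir(parents=True, exist_ok=True)
        for tool in (OPT / app / version / "bin").iterdir():
            link = FLAT_BIN / tool.name
            if link.is_symlink() or link.exists():
                link.unlink()            # advancing or reverting the default is just relinking
            link.symlink_to(tool)

    # Roll the default forward to 2.1; rolling back to 1.9 is the same one-liner.
    set_default("frobnicator", "2.1")

A user who needed a different version could still put /opt/frobnicator/1.9/bin ahead of /usr/local/bin on their own PATH, which is the precedence trick described above.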

The same architecture allowed us to automatically cache local repositories of apps in addition to the canonical ones maintained on the fileservers. Our institution supported multiple research groups which in some cases maintained their own fileservers and software licenses and in other cases shared in the funding and operation of common resources. The same architecture also allowed us to cleanly arrange for both users and client systems (workstations and compute farms) to have not only the correct range of access to the entire superset of apps across the fileservers but the correct precedence in which they would be found.

Putting each program in its own directory turns out, in practice, to be a very sensible idea as soon as requirements extend beyond a single isolated workstation and a handful of apps.

Comment Re:More Flexibility? (Score 1) 466

Moreover, the "centralized" Registry isn't centralized at all. It's specific to each individual PC. It doesn't scale at all. You can't use it to manage a network of systems even in the simple ways we were doing in Unix at the time Microsoft introduced its Registry concept, much less in the contemporary sense necessary for Cloud configuration.

Comment Re:My Magazines (Score 1) 363

Likewise!

When I head off to the island or to go sailing, I'm deliberately unplugging. Lying in the hammock, listening to the ambient sounds and feeling the dappled warmth of sunlight, if I'm going to immerse myself in writing, it's going to be a paper novel or a magazine I picked up on the ferry as a treat to myself, something like "Wooden Boat". It's better for the soul.

Comment Re:Good to see (Score 1) 123

The willingness to hear about suspicions is a necessary part of gathering evidence; it's not a final assessment of evidence. "Talking about" doing something is a necessary part of due process; it's not the final outcome. If you don't understand these basic distinctions already, please give them some thought.

Speaking of weighing evidence, can you be a little more specific than a vague reference to "half a dozen smaller countries"? It's not possible to take such claims seriously. They certainly don't constitute grounds to think less of Mozilla, but they do raise doubts about you if this is your best way of establishing credibility. (And no, you can't date my daughter either, in case you were wondering. You're definitely not in her league.)

Purely in terms of policy, I'm more inclined to favor removing a questionable root cert than keeping it installed on the off chance that it will be missed. You're claiming that its removal will "force" citizens to "use insecure communications" when that is not remotely the case:
  • If you're serious about security, you can generate your own cert for free, or set up your own CA for that matter. It's done all the time (a minimal sketch follows this list). I've personally led four large internal PKI initiatives: two for industry, one for academic research and one for government. This approach is more robust than going to a third-party CA.
  • You're not forced to do anything when one particular CA has come under a shadow of doubt; there are hundreds of CAs who will be delighted to sign your cert request in exchange for a modest fee and a pathetic level of background verification. The "weak link" CA problem is not due to scarcity but to excess. And finally,
  • There's nothing stopping you from installing any root cert you like, including reinstalling the very certs that the browser maintainers have determined are suspicious. Go for it. Have a blast.
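
As promised above, here's a minimal sketch of the first point, purely to illustrate how low the bar is. It uses the Python "cryptography" package; the hostname and the one-year lifetime are placeholders, not recommendations:

    # Minimal self-signed certificate: subject == issuer, signed with its own key.
    # Hostname and validity period below are placeholder values.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "internal.example.org")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                 # self-signed, so subject and issuer match
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )

    with open("key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

Setting up your own CA is the same idea with a couple more steps: mark the CA cert with a BasicConstraints extension and use its key to sign the end-entity certs.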

Comment Good to see (Score 4, Interesting) 123

It's good to see browser maintainers recognizing that the browser is an essential - albeit uncertified - part of HTTPS authentication.

The preinstalled root certs have enormous leverage. If the validation of certificate requests performed by CAs is a known weak link in X.509, how much more so the point where those CAs are designated as trusted?

Thanks to the efforts of Mozilla, among others, we have a much more diverse browser ecosystem than even a few years ago. To some extent at least, the free market can decide which browser to use. I know that I'm more inclined to use a product that is squarely on the side of human rights than one which can be used as an instrument of oppression. And these difficult questions of policy and enforcement provide a chance for Mozilla to distinguish itself, which I think it's doing very ably.

Comment Re:Optional (Score 2) 144

It's noteworthy. It represents the end of an era which, I appreciate, many Slashdot readers are too young to have experienced. That doesn't mean that it was unimportant.

As a preeminent place for the exploration of ideas, MIT held a refreshingly open attitude towards all forms of intellectual curiosity, collaboration and information exchange - both ancient and emerging. That spirit is what I associate with people like Richard Feynman, Noam Chomsky and Richard Stallman, who not only have fundamentally interesting ideas to share but are particularly outspoken about the freedom to be outspoken.

It's significant that the MIT Lisp Machine and its various exotic descendants provided no authentication. This was a fairly extreme design decision that, in my view, only makes sense in this particular social context. Many of us objected to that decision on technical grounds, but in fact no one knew whether it would turn out to be a brilliant move or a naive one.

Well, now we know. The letter from Israel Ruiz gives a nod to the original spirit of the Internet:

MIT has a long history of operating an open network environment, allowing devices on MIT's network unrestricted incoming and outgoing access to the Internet.

Comment Re:Rapid change in IT is the problem (Score 3, Insightful) 397

Interesting. I've been in IT and software development for the past 35 years. At the expert end, it was a fantastic field to be in, especially with the advent of graphical Unix workstations and the widespread use of NSFNET. Equipment and software were interesting, and sufficiently expensive to justify reasonable IT salary levels to integrate and take care of it all.

I saw the real breakdown beginning, oh, just about exactly 20 years ago. Windows 3.X. It was crap, and we laughed at it. But businesses bought into it at a faster rate, and were more thoroughly locked into their decision, than we had ever experienced in the scientific/engineering community. Expectations of it were completely unrealistic and driven by desperation, which management downloaded onto the IT staff.

Public perception of IT shifted from respect for expertise to open disdain. Why? As long as graphics workstations were being used within an expert community, the respect for expertise was natural. It's easy for one engineer to recognize the worth of another. But once any consumer could go out and purchase what looked at least superficially like the same thing, and twiddle on it, it would be easy to assume out of simple ignorance that all the so-called IT experts were just twiddling too.

Comment Re:Rapid change in IT is the problem (Score 2) 397

An hour of work today is less effective than in the past because you are paying "interest" on your previous bad decisions.

Well said. And bad decisions made while integrating complex systems accrue compound interest. One staffer goes in and hacks some config file as a workaround for something that really should be fixed properly. Before he can fix it properly, he's off putting out another fire somewhere. By the time the next staffer is asked to straighten out the accumulated mess in the config file, the reason for half of the workarounds is no longer clear. You don't dare touch them, so you end up working around the workarounds.

Comment Re:Is this real? (Score 2) 123

So if they are serious about this, why does their shitty new jobsearch website require CVs to be uploaded in .doc or docx formats?

Because not everything happens at once, especially in government. Nor is the public sector famous for agile development.

Government, by its nature, is bureaucratic. When we're on the receiving end of government services, we often perceive the bureaucracy as ponderous and inefficient. That's because accountability is a big part of the system. I've worked in government IT alongside some really progressive, dedicated and talented people, where we had a clear mandate, ample funding and few political enemies. I figure that even in this best-case scenario we spent 80% of our time on project tracking and accountability.

See, you never have to turn a profit in government. But you will be audited. You'll be asked to show the work you've done and justify it. Knowing that the audit is coming, you make friends with the auditors and ask them what controls need to be put in place. Since you have to be in compliance eventually, that's the most efficient course of action, but it adds a chunk of overhead up front. There may be several auditing bodies: one for finance, one for security, one for privacy, one for ethics, one for affirmative action. And of course there are other controls to make you accountable to your boss, and he to his boss and so on.

But you're also accountable to your colleagues. Unlike in the private sector, there's no such thing as "good enough". As a business owner, I can decide to launch a product at any moment I decide the time is right. No matter how broken it may be and no matter whether the market is ready for it or not, that's my exclusive decision. Not so in government. All it takes is someone at a meeting to raise the idea of a unit test that could possibly be done and suddenly you're on the hook to do it. This is not because your colleague can tell you what to do, but because his comments were minuted and those minutes were circulated to all the stakeholders - which includes your department head, who is now responsible to ensure that if anyone in future ever asks about that unit test, you will be able to give him the test logs. And let me tell you, half of all government workers are worry warts. They sincerely think that coming up with new things to worry about is a positive contribution to a project. To be fair, sometimes it is.

See how it works? Now, along comes a new mandate which says not merely to "evaluate" open source but to "prefer" it. People who are running projects that have already done their initial requirements gathering and compliance controls and are now onto architecture and design, what are those people going to do? Their first reflex is not going to be to go back to the drawing board. Not in a million years. Their first reflex is going to be to hold a series of meetings to show due diligence in evaluating whether or not the project now underway is subject to the new open source mandate.

On the other hand, any new project to come along is going to be subject to the open source mandate. People who don't like it will privately grumble but they'll go along. People like me who've been waiting for the mandate will happily embrace it. And then it's payback time. We'll be the people in the meetings raising the questions about due diligence in respect of open source compliance. We'll be the ones suggesting cost and performance controls for existing projects so that in future we can measure their value against comparable open source projects.

Open source will win out. Open Document Format will eventually win out. On merit, mind you. You just have to understand that government moves very slowly and cautiously. But once it becomes a matter of policy you can take it to the bank that these things will happen.

Comment Even before (Score 0) 305

More than a decade ago, the special effects artists working on the Steven Spielberg film Minority Report synthesized experimental thinking about GUIs to produce a floating interface that Tom Cruise manipulated with his hands.

And about a decade before that, people in my lab were already testing the effectiveness of haptic interfaces for simulating force feedback and texture. It was fairly crude and preliminary, but the concept was already established to the point where undergrads had access to this stuff.

But it doesn't look all that interesting on the silver screen, so I guess Hollywood didn't bother with it.

Comment Re:Features lacking in paper course materials... (Score 3, Insightful) 372

That's fine. Your points are all valid, but they don't address the underlying issue of resource scarcity. (Nor do mine, directly.) We face the perennial challenge of how to divide our scarce resources between explorations in breadth and explorations in depth.

Classically, we've found depth to be the critical dimension. This goes back to Plato and beyond, though reexamined by Kant, Fichte and Hegel. If you neglect to understand a subject in depth, you may well fail to capture some of its essential properties. Any interdisciplinary synthesis made on that basis will then be flawed. Therefore synthesis is a final step in applying knowledge, not a preliminary one.

For example, Ph.D studies - not just in the sciences but in all fields - are explicitly framed as exercises in depth. Thesis supervisors routinely have to point this out to grad students, in order to redirect their very natural tendency to go off exploring in all directions. I went through this stage myself. Probably everybody does.

Sure, specialization creates arbitrary barriers between disciplines. So does modularity, between components. So does all individuation of subject from object, agency from action, et cetera. All dualistic thinking has this particular shortcoming. We accept that because we gain a powerful analytical tool in return. And sometimes we forget that we have made such a choice.

The converse, I have to point out, is not "hard-earned wisdom" but the default way that people function when they impose no particular discipline on their studies. We don't need universities to teach that. It comes for free, as part of the human condition.

And so we come around again to the question of classroom technology. It's easy to indulge in endless fiddling with bits and pieces of technology in the name of education. Occasionally, such fiddling may produce a valuable new synthesis of ideas. Stephen Wolfram sincerely believes this about Mathworld - that there's no telling what might happen if you facilitate mathematical exploration and let human nature take its course. I have nothing against it, only against claims that undisciplined exploration is the best way or the only way to conduct a search for knowledge.

Look, everyone wants to be in on the synthesis part. But anyone can dabble in multiple subjects. What makes you think you're qualified to make any real contribution if you have no depth of experience?
