User Journal

Journal Journal: Access Forbidden

Who said you could read this anyway? What's your problem, Jack?

User Journal

Journal Journal: Word from an Oregon Senator on software radio 3

I received a letter from Senator Ron Wyden (Oregon) in response to a request I sent him on the topic of software radios. I pointed out that Open Source is often more secure than closed source, that a ban on open source would be an a priori restraint of trade that would probably be detrimental to the deployment and usefulness of such devices, and that the FCC's position on the matter did not appear to be justified by the facts. I tried to avoid the whole freedom argument, on the grounds that politicians are generally not elected by intellectuals. Over-priced, crippled technology that would probably be made elsewhere... that's an argument politicians can hear better.

(No insult intended to Senator Wyden; he may very well be extremely smart, but since I don't know him, the most logical thing for me to do is to play on all the areas that could dent his popularity and fund-raising potential.)

His response is interesting. Firstly, he agreed that Open Source can be more secure. That's a fair enough position to take, given the size of the closed-source IT industry in Oregon, and far more generous than I'd have expected for that same reason.

His second comment - that many in the software industry have made identical, or near-identical, objections - was fascinating. Politicians are extremely adept at saying what you want to hear - they have to be, it's their only way to survive in their line of work - but to the extent that IT industry leaders have complained, the Senate is apparently taking notice. They would appear to be aware of Open Source now - for good or bad - and are adjusting their thinking accordingly.

He goes on to say that he is not satisfied that the FCC's claims that closed source will make the software more secure are correct, and that banning open source may be counter-productive to the FCC's objectives. Again, that's good. Whether he believes it or not, I don't know, but there's clearly enough doubt in his mind as to the wisdom of the FCC's course that he's willing to say in writing that he believes Open Source could make for a more secure product and that the FCC's actions could backfire.

The last part is the part that unnerves me slightly. He says that if legislation comes before the Senate, he will keep my views in mind. He did NOT say he would oppose legislation that would ban Open Source software radios, only that he would keep in mind that I - and others - oppose such a ban. Nor did he say that he would make any effort to bring forward any legislation requiring the FCC to re-examine the issue or explain themselves.

Why is that unnerving? Because although he expresses disquiet, he won't commit himself to any actual action over it. Maybe I'm being too hard on him, but it bothers me intensely that he acknowledges my concerns are widespread in the industry but promises nothing. Not even so much as asking the FCC why they're being so shirty on the issue. The letter is good, and I appreciate his taking the time to, well, ask his secretary to print out what is probably a standard form letter, but that's not going to achieve results. Why should the FCC care how many form letters have been printed? Well, unless they have shares in the company making the envelopes.

A response that shows some sympathy is better than no response at all, but only if it is accompanied by action. I hope it will be. I hope my mail to him made some useful contribution to the debate. I also hope that someday I'll win the lottery. I am curious as to which has the greater odds of success.

User Journal

Journal Journal: Oh goody. Exactly what I didn't need. 6

A person I designed an online store for didn't want to pay for it. That happens. They also turned out to be a gun and knife fanatic. No big deal, right? That depends on how you interpret the phrase "you'd better watch your back, if I were you". May this be a lesson to you all - never do software consulting work for a latent psychotic.
User Journal

Journal Journal: Who uses Freshmeat?

One thing that has often puzzled me, when working in places that use Open Source software, is how many people knew of Slashdot (I'd say 75% or more read it daily) but how few were even aware Freshmeat existed. The same was true of an announcement service that tracked Open Source and shareware products. Yet those projects I track on Freshmeat (I own something like 150 records and am subscribed to something like twice that) show hundreds - sometimes thousands - of accesses after a new release. If the corporate sector is totally blind to Freshmeat, who is doing the accessing?

Looking at the numbers, I think I can hazard some guesses. Educational and Government places probably rank high in the user charts, as clustering and scientific software are often moderately or highly subscribed and show moderate to high activity after an update. The stats are also skewed towards servers and other administrative or maintenance software, so I'm guessing it's used more by admins than by users - which is somewhat foolish, since users should be the ones driving updates: they're the ones who know what functions they need and what bugs they experience.

The popularity of MPlayer is an odd one, as most users will get it from their distro and it's unlikely to be used for system maintenance. Nonetheless, it is more popular than any other package, including the Linux kernel. Even the Linux kernel is oddly placed, at second, as it is announced in so many different places, from LWN and Slashdot to the Linux Kernel Mailing List and LinuxHQ. Most software is only ever announced on its homepage, and appears on Freshmeat only if someone has made a record for it and keeps it up-to-date. Kernel announcements are so diluted across venues that it is amazing a single service gets as much of that attention as it does.

I guess if we assume a heavy Government/Educational userbase, it's more understandable. Those are going to be places where heavy-duty mailing lists are not going to be an option, and where surfing websites on the off-chance of an announcement would be frowned upon.

If I'm correct, how do we interpret the numbers? The usage won't be a random sample of a complete cross-section of the population, it'll be a self-selecting group with relatively narrow interests that is largely built up from a relatively small segment of the Open Source userbase.

Well, why should we interpret the numbers? That's an easy one. Corporations resist software they consider "unpopular" or "unused", no matter how useful or productive it would be. They are staggeringly blind to reality. If you can produce meaningful usage estimates, and can defend them, it sometimes (not always) weakens resistance to vitally-needed updates and changes. If you can show that some project has been downloaded by tens of thousands of probable competitors, you can be damn sure that project will be on the server by the next morning, come hell or high water.

Some would argue that it doesn't matter - we get paid to do what we're told to do and to make the managers look good. That entire discussion could - and does - fill vast volumes, with no real answer. I've got my own thoughts, but that's not really this discussion and I'd probably run Slashdot's servers out of disk space if I were to put them all down here.

Here, I am far more interested in knowing why the userbase for any announcement service should be self-limiting. I've seen places be utterly ignorant of what software exists or where it can be found. I've had people ask me how to search for programs or how I know about updates before the distros push the packages out. On the flip-side, as I've already pointed out, there are packages whose records show far greater levels of access than you would expect, given the availability of the same (or better) information elsewhere, sometimes much sooner.

Based on what I've seen, I am going to say that the records for "mission-critical" software and software of specific interest to one of the niches inhabiting Freshmeat will be relatively close to the actual levels of active interest. Passive interest (eg: users of a desktop Linux system are probably not actively interested in new kernel or glibc releases, but still use those updates) is probably a lot higher, but I don't think it's easily calculable. I'm going to guess that the number of people who actually download the source code is somewhere between two and five times the number who visit the site via Freshmeat.

For commercial and industrial software, I'm going to guess that Freshmeat numbers are way too low, that people discover packages by accident or media rumor, or outsource the updates to some group that use a commercial tracking/monitoring service. For this type of software, I'm guessing that the actual number of people impacted by announcements might be anywhere from five to fifty times the number given in the stats. There is no simple way of finding out who knows what, though, because there is nowhere to look.

However, when giving a presentation to managers on why product A is the one to go for, you can't be vague, you can't be hesitant and you absolutely can't be technical. That's why having a bit more certainty would be a good thing. Lacking any means of being certain, though, anyone in that position has to give some number that managers can use. I would take the URL access value from Freshmeat (the number who actually visited the site, not just the record) and scale it by the midpoint of the numbers I've suggested. It's not perfect, but it's almost certainly the best number you are going to be able to get as things stand.
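
To make that concrete, here is a minimal sketch (in Python) of the back-of-the-envelope scaling I have in mind. The category labels, multiplier ranges and the example hit count are just the guesses from above, not measured values.

    # Rough reach estimate from a Freshmeat URL-access count, scaled by the
    # midpoint of the guessed multiplier ranges. All figures are assumptions.
    MULTIPLIER_RANGES = {
        "mission_critical": (2, 5),    # niche/admin software: 2-5x site visits
        "commercial":       (5, 50),   # packages discovered elsewhere: 5-50x
    }

    def estimate_reach(url_hits, category):
        low, high = MULTIPLIER_RANGES[category]
        return round(url_hits * (low + high) / 2)

    # Hypothetical example: 1,200 URL accesses on a clustering package
    print(estimate_reach(1200, "mission_critical"))   # -> 4200
    print(estimate_reach(1200, "commercial"))         # -> 33000

Crude, yes, but it gives a manager a single defensible number with the reasoning written down next to it.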

Yeah, yeah, GIGO. But managers don't generally care about GIGO. They care that they have plausible and defendable numbers to work with. That is what they are getting. If you wait to give them something precise and accurate, you'll certainly be waiting until long after any decision has been made, and probably be waiting forever in many cases.

What if you're a home user? Plenty of those exist. Well, to home users, I would argue that updates from distros are typically slow in coming, that library version clashes are far too frequent, that permutations of configuration that may be interesting or useful usually won't be provided, and that even distros that build locally (Gentoo, for example) have massive problems with keeping current and avoiding unnecessary collisions.

If you're not specifically the sort of user served by the distro of your choice, you WILL find yourself building your own binaries, and you would be very strongly advised to be aware of all updates to those packages when they happen.

User Journal

Journal Journal: Is Social Networking worthwhile? 3

There are plenty of online social networking sites - LinkedIn is the one I'm most familiar with. They seem to be designed around the notion of the Good Old Boys Club, the gentrified country clubs and the stratified societies of the Victorian era, where who you knew mattered more than what you knew.

But are they really so bad? So far, my experience has given me a resounding "maybe". People collect associations the way others collect baseball cards or antiques - to be looked at and prized, but not necessarily valued (prized and valued are not the same thing), and certainly not to be used. But this defeats the idea of social networking, which attempts to break down the walls and raise awareness. Well, that and make a handsome profit in the deal. Nothing wrong with making money, except when it's at the expense of what you are trying to achieve.

So why the "maybe", if my experience thus far has been largely negative? Because it has also been partially positive, and because I know perfectly well that "country club" attitudes can work for those who work them. The catch is that it has to be the right club and the right attitude. That matters, in such mindsets. It matters a lot.

So, I ask the question: Is there an online social networking site that has the "right" stuff?

User Journal

Journal Journal: Why is wordprocessing so primitive? 12

This is a serious question. I'm not talking about the complexity of the software, per se - if you stuffed any more macros or features into existing products, they'd undergo gravitational collapse. Rather, I'm talking about the whole notion on which word-processors, desktop publishing packages and even typesetting programs such as TeX are based.

What notion is that? That each and every type of writing is somehow magical, special and so utterly distinct from any other type of writing that special templates, special rules and special fonts are absolutely required.

Of course, anyone who has actually written anything in their entire life - from a grocery list onwards - knows that this is nonsense. We freely mix graphics, characters, designators, determinatives and other markings, from scribblings through to published texts. If word-processing is to have maximum usefulness, it must reflect how we actually process words, not some artificial constraint that reflects hardware limitations that ceased to exist about twenty years ago.

The simplest case is adorning the character with notation above it, below it, or as subscript or superscript to either the left or right. With almost no exceptions, this adornment will consist of one or more symbols that already exist in the font you are using. Having one special symbol for every permutation you feel like using is a waste of resources and limits you to the pitiful handful of permutations programmed in.
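
As a small illustration of the overlay idea (and only the idea - I'm not holding this up as the right implementation), Unicode's combining marks already let you build an adorned character out of symbols the font has, rather than demanding a precomposed glyph for every permutation:

    import unicodedata

    # Compose "e with an acute accent" from two existing symbols: the base
    # letter and a combining mark, rather than a dedicated precomposed glyph.
    adorned = "e" + "\u0301"
    print(adorned)                                             # displays as one character
    print(unicodedata.normalize("NFC", adorned) == "\u00e9")   # True: equivalent to 'é'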

The next simplest case is any alphabet derived from the Phoenician alphabet (which includes all the Roman, Cyrillic and even Greek alphabets). So long as the program knows the language you want to work in, the translation rules are trivial. The German eszett (ß) is merely a character that replaces a double s when typing in that language. A simple lookup table - hardly a difficult problem.
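
A hedged sketch of that lookup-table mechanism, with a deliberately tiny and oversimplified rule table (real German orthography is more involved than a blind "ss" replacement):

    # Language-aware substitution via a lookup table. The table is a toy
    # example for illustration only.
    SUBSTITUTIONS = {
        "de": {"ss": "\u00df"},   # German: double s -> eszett (oversimplified)
    }

    def apply_language_rules(text, lang):
        for pattern, replacement in SUBSTITUTIONS.get(lang, {}).items():
            text = text.replace(pattern, replacement)
        return text

    print(apply_language_rules("Strasse", "de"))   # -> Straße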

Iconographic and Ideographic languages are just an extension to this. You specify a source language and a destination language, and provided you have such a mapping, one word gets substituted with one symbol. You could leave the text underneath and use it as a collection of filenames for grabbing the images, if you wanted to make it easy to edit and easy to program. As before, you already have all the symbols you're ever likely to want to overlay, so you're not talking about having every possible image in a distinct file. Anything not provided can be synthesized.

Other languages can be more of a bugbear, but only marginally. A historical writing style like Cuneiform requires two sizes of line, two sizes of circle, a wedge shape and a half-moon shape. Everything else is a placement problem and can be handled with a combination of lookup tables, rotations and offsets. Computationally, this is trivial stuff.

If the underlying engine, then, has a concept of overlaying characters with different offsets and scales, rotating characters, using lookup tables on regular expressions, and doing simple substitutions as needed, you have an engine that can do all of the atomic operations needed for word-processing or desktop publishing.
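
To show what I mean by atomic operations, here is a rough sketch of the kind of data such an engine might work with. The names and units are invented for illustration; they aren't taken from any existing package.

    from dataclasses import dataclass

    @dataclass
    class Overlay:
        symbol: str            # any symbol already present in the font
        dx: float = 0.0        # horizontal offset, in em units
        dy: float = 0.0        # vertical offset, in em units
        scale: float = 1.0
        rotation: float = 0.0  # degrees, for scripts needing rotated strokes

    # "x-bar squared": three existing symbols composed into one visual character.
    x_bar_squared = [
        Overlay("x"),
        Overlay("\u00af", dy=0.8),                  # macron drawn above the x
        Overlay("2", dx=0.6, dy=0.5, scale=0.6),    # small raised 2 to the right
    ]
    print(x_bar_squared)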

This method has been used countless times in the past, but past computers didn't have the horsepower to do a very good job of it. Word-processing has also been stifled in general by the idea that it's a glorified typewriter and that it operates on the character as the atomic unit. What I'm talking about is a fully compositional system. Each end-result character may be produced by a single source symbol, but that would be entirely by chance, as would any connection between any given source symbol and what would be considered a character by the user.

If it's so good, why isn't it used now? Because it's slow. Composing every single character from fundamental components is not a simple process. Because it's not totally repeatable. Two nominally identical characters could potentially differ, because the floating-point arithmetic used is like that. That's why you don't use equalities much in floating-point arithmetic. Because it puts a crimp on the font market. Most fonts are simple derivatives of a basic font, and the whole idea of composition is that simple derivatives are nothing more than a lookup table or macro.
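
The floating-point point is easy to demonstrate in two lines; the usual answer is to compare within a tolerance rather than test for equality:

    print(0.1 + 0.2 == 0.3)                  # False: the sum is 0.30000000000000004
    print(abs((0.1 + 0.2) - 0.3) < 1e-9)     # True: tolerance-based comparison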

If it's all that, then why want it? Because it makes writing any ancient or modern alphabet trivial, because you can do more in 20 fonts than you can do on existing systems with 2,000, and because it would bugger up the whole Unicode system which can't correctly handle the systems it is currently trying to represent. (The concept behind Unicode is good, but the implementation is a disaster. It needs replacing, but it won't be until someone has a provably superior method - which is the correct approach. It just means that a superior method needs to be found.)

User Journal

Journal Journal: In Other News For Nerds

There is a new science/geek website out there, called Null Hypothesis, that covers highly unlikely but totally real science. The headline story at the moment is about the sounds of protein molecules. The BBC's coverage of this attempt to out-geek the geeks reports that there are only 60,000 readers - something like a hundredth of what I believe Slashdot's readership is. Even if nobody actually joins the site, it is our clear moral duty to our fellow nerds (and an interesting science experiment they can report on) to attempt to melt the server under a severe Slashdotting.
User Journal

Journal Journal: Software announcements (or: how to irritate JD) 3

Yes, back to the grumbling again. I do not enjoy this. If I could write about stuff I liked, I would vastly prefer it. However, that will have to wait until there's stuff I like happening.

This rant has to do with software announcements. I covered this to a degree in a previous journal entry on the secrecy of some open source projects. This time, I will be more concerned with neglect (the known version is truly ancient, compared to the published version), quality (compare the Slashdot description with the Freshmeat one for the same piece of software) and reaction.

Neglect is a big one. I own 113 project records on Freshmeat and have bookmarked an additional 303. Why so many? 303 bookmarks is a lot - can't I just look to see when the project updates are announced? I would, if they ever were. The bookmarks are reminders of correctly-assigned records that the author can't be bothered to maintain. If they get updated at all, it's because I updated them. Given the sheer volume of projects involved, you're damn right I sometimes think some of the bigger Open Source consortia that develop these packages should be paying me for my time. Globus is no small concern; it's a friggin' international collaboration of multinational corporations. Why are they depending on volunteers to take on the unpaid, thankless, tedious task of fixing their neglect?

Ok, what about those 113? How many do you think I actually created? I'd say maybe half; the others I picked up, usually because the owner no longer existed. In a few cases, the records were so stale and decayed that the last update predated the owner field, yet the software has been continuing just fine. Again, that's not good. At best, it means that reports of inaccuracies or other problems will go nowhere - there's nobody to send them to.

Freshmeat is not the only software inventory out there, although it's the only one I make any effort to assist. I've assisted a few paid sites as a consultant, and frankly the stagnation there was even worse. It would be so easy to spend every waking moment just bringing these databases up to speed. We're talking extreme neglect not in the hundreds of records but in the tens of thousands. These are professional sites, paid by customers who want accurate information. They aren't getting it. What they get is something that could be anywhere from a few days to a few years behind reality. Frankly, those customers would be infinitely better off buying a giant disk array and using Harvest to index every site that Google reports has at least one page with the word "download" on it. It would work out cheaper very quickly, and you can be sure of how fresh the information is.

Ok, what about quality? If there's no freshness, then quality is automatically suspect, as projects are evolving entities. They're not fixed for all time, except in rare cases. Ignoring that, though, how accurate are announcements as a rule? Not very. The quality of information is generally fairly poor - sometimes because the person providing it doesn't really understand what is being communicated ("Chinese Whispers") and sometimes because the information simply doesn't exist and has to be inferred from the meager clues that have been left. Sherlock Holmes may be a great detective, but he is also a work of fiction. And if anyone did have those skills, do you think they'd be spending them on correcting project records? Where it's good, it can be truly excellent, but since it would also take someone of the power of Holmes to tell you when the information is good, it's not that useful. If the only way to tell is if you already know the answer, you have no need to be able to tell.

What about reaction? Well, let me put it this way. Atlas' official version is 3.7.29. Fedora Core 7 beta 1 uses version 3.6.0. The official version of Geant is 8.2 patch-level 1, but Fedora Core 7 beta 1 uses version 3.21. I've done some experiments with my own Open Source projects and have found that updates and patches follow the laws of Brownian motion. It is simply not possible to predict if/when/how updates will ever occur within a single distribution, but across all variations of all distributions, the net rate of pickup and refinement is more-or-less constant. This is, of course, completely useless to most users - even those with subatomic vector plotters.

Overall, it's a nightmare to find what you want, a bigger nightmare to determine if it is actually what you want, and a total and utter diabolical nightmare from the 666th plane of hell to determine if what is actually available in any way reflects what it was that you thought you were getting.

User Journal

Journal Journal: Are distros worth the headaches? 6

One of my (oft repeated) complaints about standard distributions such as Gentoo, Debian or Fedora Core, is that I slaughter their package managers very quickly. I don't know if it's the combination of packages, the number of packages, the phase of the moon, or what, but I have yet to get even three months without having to do some serious manual remodelling of the package database to keep things going. By "keep things going", I literally mean just that. I have routinely pushed Gentoo (by doing nothing more than enabling some standard options and adding a few USE flags) to the point where it is completely incapable of building so much as a "Hello World" program, and have reduced Fedora Core to tears. That this is even possible on a modern distribution is shocking. Half the reason for moving away from the SLS and Slackware models is to eliminate conflicts and interdependency issues. Otherwise, there is zero advantage in an RPM over a binary tarfile. If anything, the tarfile has fewer overheads.

Next on my list of things to savagely maul is the content of distributions. People complain about distributions being too big, but that's because they're not organized. In the SLS days, if you didn't want a certain set of packages, you didn't download that set. It was really that simple. Slackware is still that way today and it's a good system. If Fedora Core was the baseline system and nothing more, it would take one CD, not one DVD. If every trove category took one or two more CDs each, you could very easily pick and choose the sets that applied to your personal needs, rather than some totally generic set.

My mood is not helped by the fact that my Freshmeat account shows me to have bookmarked close to three hundred fairly common programs that (glancing at their records) appear to be extremely popular, yet do not exist on any of the three distributions I typically use. This is not good. Three hundred obscure programs I could understand. Three hundred extremely recent programs I could also understand - nobody would have had time to add them to the package collection. But some of these are almost as old as Freshmeat itself. In my book, that is more than enough time.

And what of the packages I have bookmarked that are in the distros? The distros can sometimes be many years out-of-date. When dependencies are often as tightly-coupled to particular versions as they generally are, a few weeks can be a long time. Four to five years is just not acceptable. In this line of work, four to five years is two entire generations of machine, an almost total re-write of the OS and possibly an entire iteration of the programming language. Nobody can seriously believe that letting a package stagnate that long is remotely sensible, can they?

I'll finish up with my favorite gripe - tuning - but this time I'm going to attack kernel tuning. There almost isn't any. Linux supports all kinds of mechanisms for auto-tuning - either built-in or as third-party patches. And if you look at Fedora Core's SRPM for the kernel, it becomes very, very obvious almost immediately that those guys are not afraid of patches or of playing with the configuration file. So why do I invariably end up adding patches to the set for network and process tuning, and re-crafting the config file to eliminate impossible options, strip out debug/trace code that should absolutely never be enabled on a production system (and should be reserved solely for the debug kernels they also provide), and clean up stuff that they could just as easily have probed for? (lspci isn't there as art deco. If a roll-your-own-kernel script isn't going to make use of the system information the kernel provides, what the hell is?)
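
As a sketch of the sort of probing I mean - purely illustrative, assuming a pciutils new enough to have lspci's -k flag, and not anything Fedora actually ships - a roll-your-own-kernel script could at least ask the running system which drivers the hardware is using before deciding what to build:

    import subprocess

    def drivers_in_use():
        """Collect the 'Kernel driver in use' entries reported by lspci -k."""
        out = subprocess.run(["lspci", "-k"], capture_output=True, text=True).stdout
        drivers = set()
        for line in out.splitlines():
            line = line.strip()
            if line.startswith("Kernel driver in use:"):
                drivers.add(line.split(":", 1)[1].strip())
        return sorted(drivers)

    if __name__ == "__main__":
        # Anything not in this list is a candidate for leaving out of the build.
        for driver in drivers_in_use():
            print(driver)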

User Journal

Journal Journal: Highly Secret Open Source Projects 7

Nothing in this world will ever be more confusing than projects that are:
  1. Released as Open Source on public web sites
  2. Bragged about extensively on those websites - especially their Open Sourceness
  3. Never to be mentioned or referenced in any way, shape or form by anyone else

Pardon me for my obvious ignorance of the ways of the world, but it would seem obvious, even to the most demented, that once something has been posted on a public site, other people WILL find out about it - from search engines if by no other means.

It would also appear that secrecy and Open Source are mutually exclusive - if you publish the source under a GPL or BSD license, it's rather too late to start whining if others then start poking around the code. I'm not talking about people distributing closed-source and having people try to reverse-engineer or reverse-compile it. That's different. I'm strictly talking about code where the source is open to everyone, where the license is explicitly stated, and the license is - beyond all doubt, reasonable or otherwise - one of the standard Open Source licenses that we all know and love/hate/have-a-strong-opinion-on.

So what gives? Why do we have cases of individuals or organizations who obviously want to take advantage of the Open Source model but who do everything in their power to violate that same model (and possibly even their own licensing scheme)?

I'll offer a suggestion, and those guilty of the above offense will likely take even greater offense at this. I believe it is because Open Source has become the "in thing". It's "hip" to release something Open Source. It's fashionable. It's highly desirable. In some circles, it might even be considered sexy. So what's wrong with any of that? When these are the only reasons, there is a LOT wrong with it. When Open Source ceases to be open and has even less to do with the source but is solely used as a substitute for some perceived genital defect, it ceases to be Open Source. I'm not sure what you'd call it, but it has nothing to do with any community that has even the vaguest understanding of either openness or freedom.

So what should these people do? I'm not going to say that they need to do anything at all, other than be honest. If these programs are "invite only" or to be circulated only amongst friends, then get them the hell away from the public part of the web and use a .htaccess file to restrict who can get them. Or put them on a private FTP site where you can control who has the password. Or only e-mail them to people you like.

Why? There's one very good reason why. If you advertise something as Open Source, offer it as Open Source, post it as Open Source, license it as Open Source, but deny the entirety of Open Source civilization any rights that are explicitly or implicitly granted by doing so, purely because they're not your type, they aren't the ones in the wrong. If you offer someone a hamburger but then give them a slice of pizza, they aren't being ungrateful swine if they tell you that's not what you offered.

This particular resentment has been brewing in me for some time, but some projects on Freshmeat recently got closed to editing and then willfully broken by the software developers concerned. Why? So that nobody would bother them. Get a few thousand extra eager eyes looking at the code and you needn't worry about being bothered, although you might have to start screening out all the screaming F/OSS fans who want a glimpse of the next megastar.

I guess I'm posting this today, right now, in a time that has traditionally (well, since the time of the Saturn cults in ancient Rome, at least) been associated with sharing far more than any other time, because the Grinch is not merely alive, well and extremely evil, he's now burning the houses down as he leaves.

User Journal

Journal Journal: Treasure-Seekers Plunder Ancient Treasure 2

The lost treasure of Dacia is the target of treasure seekers in search of an estimated 165,000kg of gold and 350,000kg of silver that was hidden shortly before the Romans destroyed the region. Tens of thousands of solid gold artifacts have already been located and smuggled out, apparently after bribing Government officials and police.

There are many schools of thought on this sort of thing. There are those who would argue that the treasure was hidden quickly, so there is no archaeological information lost, provided some examples remain to be studied. Museums and galleries often end up with stolen objects anyway (the Getty museum and the Louvre being recent examples), so in all probability they aren't going to be lost to society. Besides which, many countries treat their national heritage disgracefully, so many of these stolen items might actually be treated vastly better than they would have been.

I don't personally agree with the methods, there, but Governments have shown themselves totally incompetent at protecting either national heritage or world heritage, so if there is to be any heritage at all, it is going to have to receive protection from somewhere else.

Another school of thought is that such excavations should be performed by trained archaeologists, who can document everything in detail, who are trained in the correct way to preserve every last iota of information, and who can ensure that nothing is lost.

Again, there is a lot to be said for this approach. Except that archaeologists are poorly funded, and have a tendency towards naivety when it comes to dealing with people (Seahenge, Dead Sea Scrolls, etc) or indeed the information they collect (Seahenge, Dead Sea Scrolls, etc). Their interpretations often fail to properly document sources and are prone to speculation where little evidence exists. Archaeologists simply don't have the means to carry out the kind of excavation required, and wouldn't necessarily have the skills required even if they did.

The third option is to leave the stuff where it is. The world has moved on, let it rest in peace. We already have a lot from that period of time, why would we need a few hundred thousand items more?

This line of thinking ignores that everything has a story to tell. It assumes some sort of equivalence between ancient art (where everything was unique and took time and skill to make) and modern art (where everything is mass-produced on a production line). It also assumes that such ancient artifacts take up space that we need. The world is a BIG place. Things can be moved. Or built around, as happened with the Rose Theatre in London. We're not even restricted to the two dimensions of the surface.

The last option is a mix of the above. Archaeologists rarely need the original object, museums never do. We can etch the surface of an object to within a few tens of nanometers, can identify the composition of a dye or paint an atom at a time, and can read long-erased writings from trace amounts of residual molecules.

This approach would argue that whilst archaeological context is the ideal, vast amounts of information could very easily be extracted from any collected item, if anyone could be bothered to do so - certainly far more than necessary for a museum to exhibit ancient history that would otherwise be lost, and probably far more than would be needed by archaeologists to produce extremely detailed conclusions and infer the vast majority of information they'd have collected if they'd dug the items up themselves.

In the end, I want the maximum information possible to be preserved, and for the artifacts to be protected and preserved as best as possible. Plundering is probably not the best way to achieve this. Anything not gold or silver is likely being destroyed at the Dacia site, for example. But if it ends up with anything being salvaged at all, it'll be an improvement, as bad as it is.

Better solutions are needed, but it is doubtful any will be developed in time to save those things that need such a solution to survive.

User Journal

Journal Journal: ...And here's an overkill I produced earlier... 3

Probably not many non-UKians are familiar with the cult children's TV show "Blue Peter". To cut a long story short, it is one of several arts, crafts and high adventure TV shows in the UK intent on destroying the world economy by building fully-working space shuttles out of cardboard, glue and sticky-back plastic. And here is one I prepared earlier.

They also hand out badges to children, between the ages of 5 and 16, for significant achievements. You might get a blue badge for running a bring-and-buy garage sale that completely rejuvenates the local economy. A gold badge might go to a kid who swims unaided through the flood waters during a hurricane to rescue little old ladies.

As a mark of respect for the wielders of The Badge, many places offer discounts or free entry to the heroes, inventors and artists who have achieved these heights. Well, that changed nine months ago.

Nine months ago, the worst crime imaginable occurred. People were caught selling their Blue Peter badges on eBay! Arguably, the badges belong to these people, so they have a right to sell them. Right? Well, that's where it gets tricky. The badges DO belong to those individuals, but a badge carries a lot more than just some painted steel; it carries a measure of respect.

The solution to the problem was unveiled on Monday, June 19th. There is to be an identity card (with holographic image of the person awarded the badge) that goes along with the badge. With the hologram, it no longer matters if something gets sold, as nobody else will look like the person in the photo.

Of course, you know where this is going, don't you? Holograms will eventually be replaced or extended by biometric data. Not horrible, surely? Well, the UK Government has been fighting hard - and losing - to introduce a national ID card for some time. This would allow them to pervert a glorious medal of honor into a scheme allowing national ID cards to be seen not as a mark of being watched, but as a mark of achievement.

By getting kids to WANT national ID cards for themselves, the problem would be easily solved. By getting kids to beg for more of their personal data to be carried, the Government won't have to fight to get National ID installed, they'll have a fight to prevent exposure of FAR too much personal data.

Five-year-olds have barely the wherewithal to learn to read. We do NOT need them to have to combat identity theft and spin doctoring at the same time.

User Journal

Journal Journal: When is a web design good? 5

One big problem with a lot of web sites is that they are poorly designed, if they're designed at all. They operate on a very limited number of platforms - some will only work at all with specific versions of IE on specific versions of Windows. Some are hopelessly cluttered with every possible feature web browsers have to offer, creating a mess that is quite unusable. Some concentrate on the "feature of the day". This used to be backgrounds (making the text unreadable). Later generations made the page hopeless by relying on plugins and embedded scripts of one kind or another. Web 2.0 - the fad of the moment - is the latest way people can seriously mess up what would otherwise be a superb site.

Is it all bad, though? No, all these things are mixed blessings. People have used them all in very effective and productive ways, making sites far more navigable, far more readable and far more elegant. In general, this is because the person behind the idea has actually thought it through and designed the site correctly.

Funny how this journal entry comes up as Slashdot moves to the new CSS, and a Web 2.0 story is on the front page... Well, no, it's not a coincidence. Slashdot's new look & feel is a perfect example to draw on, as it's something we're all using to access this journal entry. The new l&f is undoubtedly crisper and cleaner than the previous format. I liked the old look, it was extremely functional, but the new layout has some definite improvements in my humbly arrogant opinion.

HOWEVER, every feature present offers two chances for problems - it might be implemented incorrectly in the web page, or implemented incorrectly in the browser. As "novel" features aren't necessarily going to be retained if they don't prove useful in practice, every feature present also risks being unusable in future browsers. This is not to say you should never use new features, only that you should be aware of the risks involved and work to mitigate them sensibly. You should also not assume browsers function correctly, and should have suitable fallbacks in place.

(It is not possible to test a page on every web browser in existence, but if the features are imported into the page dynamically, it should be easy enough to select alternatives for specific browsers when a problem is identified.)

Most of this is common sense, but it drives me nuts that so many pages on the Internet today DO violate every ounce of common sense for the sake of looking better to the select, when they could look better in just the same way to everyone at very little expense.

User Journal

Journal Journal: Tenth Planet - For Real!

Hot on the heels of the discovery of 2003 EL61 comes another planetary body in our solar system. Designated 2003 UB313 and nicknamed Lila, this planet is at a distance of 97 AU and is at least the size of Pluto, possibly up to twice as big. Based on the current definitions, this would make it an Official Planet. NASA's Jet Propulsion Laboratory has more information and pretty pictures.

User Journal

Journal Journal: Ice on Mars

A significant disk of ice has been found on the inside of a large impact crater. The ice is very visible and very much on the surface. According to the BBC, "The highly visible ice is sitting in a crater which is 35 km (23 miles) wide, with a maximum depth of about two km (1.2 miles)." The European Space Agency has more details and bigger images.
