User Journal

Journal: Welcome to the world of Puba!

Puba Health Concepts Inc
"Your life in our hands"

Congratulations on becoming one of the many healthy consumers that enjoy the Puba PaceMaker 3! We at Puba Health Concepts Inc would like to thank you or your employer for choosing a Puba affiliated physician to address your Cardiac care needs. Your new Puba PaceMaker 3 is equipped with state-of-the-art remote monitoring and control technology allowing us to modify your treatment as needed. We would like you to take a moment to read through the terms and conditions of use of your new Puba PaceMaker 3.

Terms and Conditions

1) By using the Puba PaceMaker 3 service you indicate your acceptance of the following terms. If this agreement is unacceptable to you, immediately discontinue use of the Puba PaceMaker 3. Violation of any of these terms will result in immediate remote termination of the Puba PaceMaker 3 service.

2) Your initial subscription payment is due immediately and must be paid within 7 days of installation of the Puba PaceMaker 3. Subsequent subscription payments are due monthly within 5 working days. Failure to pay within these time periods will result in immediate termination of service.

3) The Puba PaceMaker 3 contains proprietary intellectual property of Puba Health Concepts Inc. Any attempt to reverse engineer the Puba PaceMaker 3 or to circumvent our Technology Rights Management system will result in immediate termination of service. Additionally, under the terms of the New Century Treason and Theft Prevention Act, your body will become the automatic property of Puba Health Concepts Inc.

5) This document is the intellectual property of Puba Health Systems and any disclosure of the contents of this document to a third party will result in immediate termination of service. Additionally, under the terms of the New Century Treason and Theft Prevention Act, your body will become the automatic property of Puba Health Technology.

6) You may not make any statement, written or verbal, referring to Puba Health Technology in a derogatory, or defamatory manner, or otherwise damage the reputation of Puba Health Technology. Any attempt to pursue alternatives to the Puba PaceMaker 3 will be viewed as defamation of Puba Health Technology. Failure to comply with this term will result in immediate termination of service.

User Journal

Journal: Package Management in the ultimate operating system

I have frequently thought about how to put together the next iteration of the operating system, and a recent discovery on SweetCode reminded me of this.

Zero Install is designed to make the process of installing new software completely transparent. It achieves this by mapping a filesystem to HTTP. For example, if you wanted to install The Gimp, you would simply run it from /uri/0install/www.gimp.org/bin/gimp. This would transparently download The Gimp and any libraries on which it depends. Once downloaded, such applications are cached so that the next time you run The Gimp it is just as fast as if you ran it from your local filesystem (because you would be running it from your local filesystem!).

This idea of breaking down the barriers between the local filesystem and the Internet, effectively eliminating the notion of installation, definitely seems like the way forward. This is the kind of convenience that tools such as Debian's apt-get begin to provide - however, 0Install takes it to its ultimate conclusion.

The next step, then, would be to build an entire Linux distribution around the 0Install principle. This would also benefit from upcoming filesystem innovations which allow set operations on directories (for example, taking two directories and creating a new virtual directory that contains everything the two real directories do). At the simplest level this would eliminate things like the $PATH variable.

Creating a new Linux distribution that isn't afraid to make bold advances such as those outlined above will be the true next step in the evolution of operating systems - it won't require brain surgery, just the elegant combination of a number of technologies (such as Linux, 0Install, and the new ReiserFS) that are already available or in development.

User Journal

Journal: Collaborative document editing

I have been interested in the area of collaborative document editing for a while, and for the last few days I have been working on something to allow it. I just put up an alpha-quality version at http://3D17.org - please let me know what you think.
User Journal

Journal: Locutus 0.5

Just released a new version of Locutus. Locutus offers collaborative spam filtering and employs some novel algorithms to avoid getting fooled by spammers that try to evade collaborative filters. It also does fast local and remote keyword searching for documents using a scalable search algorithm inspired by Freenet but generalized for fuzzy searching. Locutus isn't free as in speech, but it is free as in beer. If you run Outlook or Outlook Express on Windows please check it out - the more people that use it the better it gets.
User Journal

Journal: News about me. Stuff that matters.

It is getting a little bit silly how frequently I and/or one of my projects is appearing on /. these days. I am not complaining, it is great when stuff you are doing attracts attention. Nor do I take it for granted, I am amazed that people are still willing to hear about my crazy ideas. Having said that, the latest story is something I never wanted or intended to enter the public arena.

There are those that have accused me of being a self-publicist (you know who you are), but I think this latest episode demonstrates that not everyone that gets some publicity for something did it through shameless self-promotion.

I was following a debate on Slashdot about an Intel engineer who had admitted (under considerable duress) that he was a terrorist. Many people took this to imply that he was actually guilty. In an attempt to wake people up to the seriousness of what was going on I posted a comment under my slashdot nickname (but making no further attempt to identify myself) explaining that I was leaving the US, and how concerned I was about the direction this country was going. Admittedly, I used somewhat hyperbolic language, and gave fans of Godwin's Law the opportunity to indulge in the meta-arguments that they enjoy so much, but it was just a throwaway comment on /. after all...

...or so I thought. It didn't take long for someone to post this to InfoAnarchy, and with that my throwaway comment had graduated to an "announcement". It didn't take long to go from there to a story on BoingBoing.net by Cory Doctorow where by now my throwaway comment was a "goodbye letter".

As a result of this I got a phone call from the Canadian Broadcasting Corporation, and an article in the Irish Times, followed by a request from GrepLaw to do an interview. As always, I accepted these requests to discuss what I had said - I learned long ago that whether you talk to them or not, journalists will write a story - and it is far better to have the opportunity to inject your perspective than to leave it up to them and their editors.

Bottom line - it might make people feel better to imagine that the amount of coverage someone gets is directly proportional to the size of their inflated egos, but it isn't always true. Of course, sometimes it is ;-)

User Journal

Journal: Godwin's Law - "law" or cop-out?

I have always been uncomfortable with Godwin's Law. For those unfamiliar with it, it states that "As a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one." Many people infer from this that whoever, during a debate, makes a comparison with Hitler or Nazis, loses the argument automatically.

It wasn't until a recent email conversation with Cory Doctorow, started by a /. comment of mine, that I was forced to introspect and find the reason for my discomfort.

Here, I outline a fictional debate between Cory and me, using various extracts from our conversations and comments, which I hope gives a fair indication of Cory's viewpoint:

Me: As an Irish citizen living in the US - I have decided that it is time to leave this country - it is starting to look, smell, and act as Germany did during the 1930s.

Cory: It's a shame that [you] violated Godwin's Law, as it gives those who would distract us from the real issue here a handy red herring to toss into the fray, i.e., pointless arguments about the appropriateness of a comparison to Nazi Germany.

Me: I think the comparison with *1930s* Germany is apt, although a comparison with 1940s Germany would not be, you can't invoke Godwin's law when the conversation *really is* about Nazi Germany ;-)

Cory: The point for me of G's law is not its aptness -- I happen to agree that it is an apt analogy, and I speak as someone who lost a significant fraction of his family in the death camps.

The point of G's law is that comparisons to Nazi Germany immediately end all discussion about the subject at hand and instead divert the whole debate to an argument about the aptness of the comparison.

Me: In some cases, however, a discussion about the aptness of the comparison is actually useful, and gets to the core of the issue.

Cory: My point is that Doctorow's Corollary To Godwin's Law is that anyone who wishes to be an effective rhetorician should completely expunge the notion of Nazi comparisons from his bag of tricks, because it creates a vulnerability to an attack that is otherwise neutralized ("My opponent is of such poor judgement and callous insensitivity that he believes it's appropriate to make comparisons to Nazi Germany!").

Me: Well, I am not so sure I agree with you there. *If* a comparison to Nazi Germany is pertinent then an effective rhetorician will be sufficiently skilled to counter this kind of ad hominem attack. They say that those who forget history are doomed to repeat it, and what more important lesson for society than the events in Germany during the Nazi period.

Refusing to use such an important lesson of history in debate for fear of exposure to fallacious arguments seems like an unfortunate surrender of a powerful tool for those who wish to fight against fascism. For this reason - I have never been entirely comfortable with Godwin's Law.

Unfortunately this is where the debate must end as I still await Cory's response to my last comment.

I would be curious to hear some third-party opinions on this, since Godwin's Law is one of the Internet debate doctrines that never rang true for me.

Anyway, bottom line is that I now propose:

Clarke's Law: Anyone that invokes Godwin's Law in an argument automatically loses the meta-argument

User Journal

Journal: Unified configuration mechanism

One of the biggest problems with Linux (and I am certainly not the first to observe this) is the vast number of configuration files, many with completely different - and often nonsensical - layouts, most of which must be edited manually. It is a mess.

I would propose that someone come up with a unified Linux daemon to handle all configuration information - keeping it all in a well-ordered data structure, perhaps based on XML. The idea would not be dissimilar to the Windows registry, although it could incorporate a number of features to make it even better:

  • Security
    Different parts of the configuration tree could be given read and write security permissions on a per-user basis
  • Backwards compatibility
    Through the use of pipes, older software that doesn't directly support the configuration mechanism can read its config from a file that is actually a pipe to the config daemon
  • Network support
    Access to the config daemon could be handled over a network, or the local config daemon could be configured to "fall back" to a remote daemon - allowing centralized configuration for software, but still letting the user modify user-specific stuff
  • Cross-configuration
    Often, software needs to base its settings on the settings of another piece of software in the system. This approach would make it easy for one piece of software to check the configuration of another.

I think such a mechanism would be one of a number of necessary steps in creating a new environment, built around a Linux kernel, that comes closer to the kind of unified, integrated approach we see in OS X.

User Journal

Journal: A WhittleBit of extra intelligence

After a long delay I finally have a reasonably reliable implementation of my "learning" web search engine up at WhittleBit.com.

Yeah - weird name, I know. All other comments welcome.

Addendum 8/1/03: Sorry to those who tried this and found it down - the problem was that I use a Java daemon to do the donkey work, but I couldn't find a VM that would run on the server I was using (which has a weird setup). Finally I got it working using Kaffe - hopefully it will prove stable.

User Journal

Journal: Removing bias in collaborative editing systems

A few weeks ago a friend of mine that had been thinking about reader-edited forums (like K5) posed an interesting question. He was concerned about how people's bias would influence their voting decisions and wondered whether there could be any way to identify and filter out the effects of such bias. Of course, in some situations bias is expected, such as political elections, however in other situations, such as when a jury must vote on someone's guilt or innocence, or when a Slashdot moderator must vote on a comment, bias is undesirable. After some thought, I came up with a proposal for such a system.

First, what do we mean by "bias"? It is a difficult question to answer exactly; examples would include political left or right-wing bias, nationalist bias, anti-Microsoft bias, and bias based on race. The dictionary definition is "A preference or an inclination, especially one that inhibits impartial judgment." Implicit in the mechanism I am about to describe is a more precise definition of bias; it is the aptness of this definition that will determine the effectiveness of this approach.

Visitors to websites such as Amazon and users of tools like StumbleUpon will be familiar with a mechanism known as "Automatic Collaborative Filtering" or ACF. Amazon's recommendations are based on what other people with similar tastes also liked; this is an example of collaborative filtering in action. There are a wide variety of collaborative filtering algorithms, ranging widely in sophistication and processor requirements, but all are designed to do more or less the same thing: anticipate how much you will like something based on how much similar people liked it. One way to look at it is that collaborative filtering tries to learn your biases and anticipate how they will influence how much you like something.

My idea was to use ACF to determine someone's bias towards or against a particular article, and then attempt to remove the effect of that bias from their vote. The effect of their bias is assumed to be the difference between their anticipated vote based on ACF, and the global average vote for that article. Having determined this, we can then take their vote, and remove the effect of their bias from it.

Let's look at how this might work in practice. Joe is a right-wing Bill O'Reilly fan who isn't very good at setting aside his personal views when rating stories. Joe has just found an article discussing human rights abuses against illegal Mexican immigrants. Joe, not particularly sympathetic to illegal Mexican immigrants, gives the article a score of 2 out of 5. On receiving Joe's rating, our mechanism uses ACF to determine what it would have expected Joe's score to be. It notices that many of the people who tend to vote similarly to Joe (presumably also O'Reilly fans) also gave this article a low score, so the ACF algorithm puts Joe's expected vote at 1.5. Now we look at the average (pre-adjustment) vote for the story and see that it is 3 - so we take Joe's anticipated bias for this story to be 1.5 minus 3, or -1.5. We use this to adjust Joe's vote of 2 to an effective vote of 3.5 - which means that Joe's adjusted vote for this story is actually above average once his personal bias has been disregarded!
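The adjustment in the example above is simple arithmetic and can be written down directly. The ACF-predicted vote is assumed to come from whatever collaborative filtering algorithm is in use; only the debiasing step is shown here.

```python
def debiased_vote(actual, predicted, average):
    """Remove a rater's anticipated bias from their vote.

    bias     = ACF-predicted vote - current average vote
    adjusted = actual vote - bias
    """
    bias = predicted - average
    return actual - bias


# Joe's case from the text: voted 2, ACF predicted 1.5, average is 3
# -> bias = 1.5 - 3 = -1.5, adjusted vote = 2 - (-1.5) = 3.5
```

Note that a rater whose predicted vote already matches the average has zero bias, so their vote passes through unchanged.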

So, how well will this system work in practice - and what is it really doing? What are the implications of this mechanism for determining someone's bias? Is it fair?

I don't pretend to have the answers to these questions, but it might be useful to think of it in terms of punishment: when your vote is adjusted by a large amount, then you are being punished by the system as your vote will have an effect different from that which you intended.

The way to minimize this punishment is for the votes that the ACF algorithm predicts for you to be as close as possible to the likely average vote. The worst thing you can do is to align yourself with a group of people who consistently vote in opposition to the majority.

I have been trying to think of scenarios where encouraging the former, or punishing the latter, might be harmful - but so far I haven't come up with anything. What kind of collective editor would such a system be? What kind of negative side effects might it have? I am curious to hear your opinions.

Privacy

Journal: A draft of a new Freenet article

I have been working on an article describing Freenet's "Next Generation Routing" algorithm. You can find my working draft here - comments appreciated, but remember that it is still a draft so please don't link to it except through my blog.

When complete I will probably submit it to /. among other places to get some wider peer review.

Programming

Journal: Progluminators

For millennia before the invention of the printing press, people known as "Illuminators" were responsible for the manual copying of religious and other manuscripts. Of course they didn't just copy the books; they would adorn the documents with elaborate paintings and illustrations, often spending weeks just working on the first letter of a chapter. It may have taken months or even years to copy a book - but by golly! - the result was perfect, not just because of its beauty, but because you knew just how much time had gone into its creation.

Back in the early days of the computer, people devoted similar (by today's standards) exacting care and attention to the preparation of punched cards, which contained hundreds or even thousands of painstakingly planned and constructed instructions, which they would feed to the machine which would eventually spit out a response.

Imagine the horror of the medieval illuminators when they first saw the printing press. Now, rather than taking months or years, a book could be created in just hours, not by an artist, but by anyone capable of operating the press! The new machine had automated the sacred task of carefully painting each beautifully crafted letter on the page! Rather than the infinite possibilities of characters drawn with the flexible tip of the illuminator's brush, each was now stamped out, each identical, forced to conform. Suddenly books were no longer works of art, each representing vast amounts of personal labor - they were mass-produced to a quality no greater than that required to convey their meaning to the reader!

The punched card gave way to terminals where a faulty program could be modified in minutes, rather than the hours necessary to repunch the cards and await one's turn at the altar of the great computer. The binary programming languages were replaced by assembly, and later by compilers - which automated the sacred task of deciding which register should be used to store which byte of data.

It is the same Illuminator's revulsion that we occasionally see in response to modern languages such as Java which (shock horror!) dare to automate tasks like deciding when particular areas of a computer's memory are no longer required. No more can the progluminator carefully craft the exact combination of operating system function calls required to send a byte to the network, now they are forced to conform to the artistically uninspired methods of the cross-platform API!

Of course, just as the printing press - despite upsetting the old and honored profession of illumination - gave birth to a new era of learning and scientific progress, so will programming languages like Java and C# lead to a richer diversity of ideas in computer science, despite upsetting the progluminators of our industry, who will forever regret the passing of the days when it took two weeks to create a linked list - but by golly, it was perfect!

Now, I have - in my time - heard criticism from C++ advocates for my decision to implement Freenet in Java. This always struck me as ironic: their opinions were typically reminiscent of the progluminator argument, yet the incredible inelegance of C++, in my view, makes it much more deserving of progluminator scorn than Java could ever be. C++ is essentially an elaborate macro pre-processor for C*, which attempts to crudely duplicate concepts such as Object Orientation and templates while hammering them into something that can conveniently be translated into C. The result is predictably ugly - sure, you can have templates, but be sure you are familiar with the 101 caveats they bring with them due to their underlying implementation. Sure, have your classes and objects, but woe to the programmer who forgets that ultimately they are using an elaborate macro preprocessor.

The bottom line? Don't bother telling me that Freenet should be implemented in C++ unless you are willing to spend months illustrating your code on stretched leather with a carefully prepared pheasant feather while paying particular attention to the initial "#".

* In fact Bjarne Stroustrup's original implementation was just that: it took C++ code and converted it to C prior to compilation by a C compiler.

Patents

Journal: Open Source obfuscating the EU software patents debate

As can be determined from previous journal entries, I am extremely concerned about the proposed introduction of software patents in the EU.

There seems to be a misconception in the press that only Open Source software is at risk. I think this is partly because most of the people who are vocal on this issue are Open Source advocates too - and perhaps they are too quick to use anecdotes relating to Open Source to make their point. The reason this is a problem is that it allows pro-patent people to make the fallacious argument that since Open Source software is free, there is no economic impact involved and people's objections are purely ideological.

Among the other things that bug me is the fact that these proposed changes are characterized as a "liberalization" of EU software patent law. This is completely backwards, allowing more software patents merely serves to restrict people's freedom, it is the opposite of a "liberalization".

Is it just me, or is starting a new society from scratch on the moon or at the bottom of the ocean looking more attractive every day?

Censorship

Journal: Response to Peacefire's "Distributed Cloud" paper

Last week I received some emails from Bennett Haselton quizzing me about certain aspects of Freenet. Bennett is the man behind Peacefire, and someone who has done great work in the fight against Internet censorship.

After our conversation I discovered his 2001 paper entitled Why a "distributed cloud" of peer-to-peer systems will not be able to circumvent Internet blocking in the long run. Of course, Freenet is the leading example of a "distributed cloud" architecture.

Needless to say, I didn't entirely agree with his conclusions - and so here is an email I sent to his "Circumventor Design" mailing list, I am still awaiting a response (either from Bennett, or someone else on the list).

Thanks for subscribing me Bennett.

After our interesting off-list conversation, I have read your paper "Why a 'distributed cloud' can't work", and have given the matter some thought. Here are some preliminary observations, along with self-serving explanations of how this relates to Freenet ;-)

While I agree that this "spidering" attack is theoretically possible, I don't believe it would be anywhere near practical against a well designed architecture, even for a very well funded and motivated government. I further suspect that this attack will always be a theoretical possibility for any censorship circumvention technology that relies on IP and is sufficiently usable to gain wide acceptance in countries like China (of course, I would love somebody to contradict me by describing an easy-to-use architecture that is not vulnerable ;)

This is not to say that there aren't strategies which maximize the cost of such an attack - and I think that Freenet is a good example of this. If you have a situation where an attacker can identify nodes and shut them down, it is important to do the following:

  1. Make any kind of "directed harvesting" difficult or impossible
    By this I mean that the Chinese government cannot easily direct their node address harvesting efforts to those nodes they can block - rather they are forced to wade through a potentially large number of nodes in order to find the ones susceptible to blocking.

    This is pretty much the case with Freenet; nodes have little control over which nodes wind up in their datastore. A censor would have to passively collect node addresses, which would be a slow process. Further, if the censor started to kill every node its own node was seeing, then that node would rapidly become isolated (much like a cop who killed all of his informants). It is an oft-abused and rather questionable saying that the "Internet routes around censorship", but in Freenet's case there is much truth to it.

    A corollary of this is that the mechanism through which new nodes are added to the network should not provide a shortcut for the censors to identify fresh nodes. This means that the mechanism through which new nodes are added to the network must be as decentralized as possible.

    While Freenet is typically distributed from our web site, we also have a mechanism which we call our distribution servlet, which facilitates "viral" distribution of Freenet. Basically, a user can configure his Freenet node to serve a web page from his computer from which other people can download a copy of Freenet that is "seeded" with the nodes in the "parent" node's routing table. These are made available for a limited time at a randomly generated URL such as:

    http://80-192-4-36.cable.ubr09.na.blueyonder.co.uk:8889/MM9L2lTOmNI/

    which that user can then send to their friends. Note that there isn't anything in this URL that would make it easy for an automated email monitoring tool to spot. Through this mechanism, Freenet can self-procreate without any reliance on a centralized download source or seeding mechanism.

  2. Minimize the effect of shutting down any given node by making the network fault tolerant and spread reliance evenly across the nodes in the network

    Freenet achieves this: in simulations we could shut down up to 30% of the nodes in the Freenet network - all at the same moment - without any significant degradation in performance. Further, we could shut down the busiest 20% of the nodes in the network without seeing significant problems (see page 9 of [1]).

    It is worth saying that the goal of evenly distributing load across the network is in conflict with the desire to take advantage of resources where available; I think we've reached a good compromise between these two goals in Freenet, but it is an area of ongoing development.

I'm not saying this is a comprehensive list of guidelines when defending against this type of threat, but it's all I can think of right now.
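As an aside, the unguessable seed-page URL mentioned under point 1 takes only a couple of lines to generate. This is a sketch: the hostname, port, and function name below are placeholders, not Freenet's actual implementation.

```python
import secrets


def seed_page_url(host, port, nbits=64):
    """Build a time-limited download URL with an unguessable random
    path component, so an automated email-monitoring tool has no fixed
    string to match against (host and port are placeholders)."""
    token = secrets.token_urlsafe(nbits // 8)  # URL-safe random string
    return f"http://{host}:{port}/{token}/"
```

Because the token carries no recognizable keyword, a censor scanning mail for known patterns gains nothing; only expiring the page bounds its exposure.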

Another issue which Bennett and I discussed was the fact that it is likely to be easier for a censor to restrict access to servers outside their country than between computers inside the country. Personally, I think that even if an architecture could not support direct communication with servers outside the repressive country, this certainly does not mean that it isn't useful. In fact, I think that giving a voice to people inside the repressive country is more valuable than just letting them hear what we have to say. Further, it would only take one unrestricted line of communication between the outside world and the internal censorship-resistant network to give people inside the country access to external information.

On a different note, it is well known that Freenet does have latency issues, although these have been steadily improving as development continues. We are currently working on a concept we call "Next Generation Routing" which we hope will lead to a dramatic improvement in Freenet's latency. I'm currently working on an article that describes this, but if anyone would like to learn more sign up to the Freenet development mailing list where it is currently a topic of discussion.

All the best,

Ian.

[1] http://freenetproject.org/papers/freenet-ieee.pdf

Patents

Journal: Working the system

I have been corresponding via email with the MEP (Member of the European Parliament) for Leinster, Ireland (where I grew up) about my fears over the introduction of software patents in the EU.

Here is my most recent email (slightly edited). The article I refer to is one written by Arlene McCarthy, a UK MEP who is pushing for software patents in the EU - you can learn about her misguided perspective here. While this isn't the article I refer to in my email, it says pretty much the same thing.

If you live in the EU - particularly if you are involved in a business that you think could be hurt by software patents - please, please, please contact your MEP and educate them about these issues; find your MEP here. Mrs Doyle was responsive to email, but some might respond better to a fax or even a phone call. It is particularly important to stress the negative economic impact that software patents would have, and specific examples relating to your business will also be useful.

Dear Mrs Doyle,

Many thanks for your email. I am most grateful to you for your interest and help in this area, particularly since this may not have been an issue familiar to you previously. Please feel free to share my concerns with anybody you think can help, including Mr Wuermeling.

In her article Mrs McCarthy says "At a time when many of our traditional industries are migrating to China and Eastern Europe and when we Europeans are having to rely on our inventiveness to earn our living, it is important for us to have the revenue secured by patents and the licensing out of ideas".

Unfortunately, the only Europeans likely to benefit from the proposed changes are those who own stock in large American multinationals like Microsoft and IBM, and those who make a living as Intellectual Property lawyers. Europeans who consume software, and Europeans who work for smaller software firms that have neither the time nor the resources to apply for patents on every trivial idea they come up with, will be the victims.

I can categorically state, as someone who has worked in the software industry for my entire professional life and founded three software companies - one in the UK, one in the US, and one with offices in the US and Ireland - that I have never once seen a software patent used in a manner that would help anybody's economy. Rather, I have seen them used as a way for large software companies to stifle their competition, not by delivering a better, cheaper product to their customers, but by aggressively patenting everything in sight and then throwing lawsuits at their competitors. This article describes exactly how this happens - in that case, it was IBM shaking down Sun in the 1980s:

http://www.forbes.com/asap/2002/0624/044_print.html

Some claim that software patents are ok provided that they aren't on trivial or obvious techniques or innovations within the software industry. The problem is that it can be difficult, or even impossible, to confirm that a software patent application meets these criteria - therefore (as has been seen in the United States) the Patent Office will be under pressure to "pass the buck" by granting the patent and letting the courts sort it out down the line. This encourages exactly the kind of litigation that Intellectual Property lawyers love, but which can drive a small software firm out of business.

Intellectual Property lawyers quickly become experts at taking a simple obvious idea, and turning it into a patent application that totally obfuscates what is being patented, and how simple and obvious it really is. I have seen patents on techniques where even the inventor of the technique in question (not the person who filed for the patent) did not recognize that the patent actually covered their idea!

The software industry has thrived without software patents, and where such patents have been permitted their application has done nothing but inhibit progress and competition within the software industry. If the European patent system must be harmonized, let it be harmonized to something sensible; let's not blindly emulate the mistakes of the United States.

Kind regards,

Ian Clarke
