User Journal

Journal: Thoughts on AF447

First, a disclaimer: I don't work in aerospace at all, though I do design aerodynamically correct lifting-body airplanes. The conclusions here are my own research as a layman. I am writing to put the theories down on paper and to weigh them against one another. Also, my heart goes out to those who have lost loved ones in this tragedy; if any of them are reading this, I hope it helps put the media speculation in context.

Now, what is known: at 0200Z (GMT), the pilot of AF447 sent a manual transmission that they were flying through a storm system. This correlates well with Tim Vasquez's projections and analysis but is way off from the BBC's maps. The plane would have entered the backward edge of the mesoscale convective system (MCS) and exited the forward edge, where the storm cells would have been strongest.

At 0210Z, the plane began sending a series of ACARS messages denoting a large number of failures from 0210Z through 0214Z. These messages are designed to speed aircraft maintenance rather than to determine the cause of an accident, so they lack certain details which are important in this case. At the moment, however, they are one of the more important sets of publicly known information.

After 0214Z, no further details are known. The vertical stabilizer was eventually recovered, but it isn't yet clear where or when it broke off. Most likely it was broken off by sideways forces, though whether this happened in flight or on impact is unknown.

Finally, we have the Air Comet pilot report, in which a pilot at 7N49W reported seeing a bright light in the distance follow a vertical downward trajectory for six seconds. Due to the curvature of the earth, the Air Comet pilot was not in line of sight to the AF flight; he could have seen a meteor.
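The line-of-sight point can be sanity-checked with basic geometry: the distance to the visual horizon from altitude h is roughly sqrt(2*R*h). A minimal sketch follows; the altitudes are my own assumptions for typical cruise levels, not figures from the investigation:

```python
import math

EARTH_RADIUS_M = 6371e3  # mean Earth radius; an approximation

def horizon_distance(alt_m):
    """Approximate distance (m) to the visual horizon from altitude alt_m,
    ignoring atmospheric refraction."""
    return math.sqrt(2 * EARTH_RADIUS_M * alt_m)

# Two aircraft have direct line of sight only if their separation is less
# than the sum of their horizon distances. Altitudes here are assumed
# typical cruise levels, not data from either flight.
d_total_km = (horizon_distance(11000) + horizon_distance(10500)) / 1000
print(round(d_total_km))  # roughly 740 km
```

So two aircraft at cruise can in principle see each other across several hundred kilometres; beyond that, the curvature argument above rules out direct sight of the AF flight.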

Theories and weighting (note the weightings could change rapidly with new information):

1) Initial messages caused by a lightning strike. Probability: low to moderate. A lightning strike to the radome could damage the pitot tube systems, weather radar, inertial reference systems, etc. The strike would have to enter or exit at the radome to cause this sort of damage, which seems generally unlikely given the tolerances involved. The main reason to suspect radome destruction is that pitot tube icing by itself can't explain the TCAS fault reported. However, the inertial reference units are near the pitot tubes, so severe turbulence-related damage seems more likely to me. Recovery of the nose section, radome cover, etc. should be sufficient to confirm or eliminate this possibility.

2) Meteor strike destroying the radome. Probability: extremely low. This would have a damage profile similar to the lightning-strike scenario if the meteor were small enough to avoid further damage but large enough to destroy the radome. Unlike lightning strikes, though, these are rare events. Recovery of the radome cover should be able to confirm or rule this out.

3) Pitot tube icing resulting in an unsafe airspeed. Probability: moderate to high. Pitot tubes are known to ice up in conditions where no liquid water exists. For example, a 1999 meteorological flight reported ice and graupel from 18,000 ft up through 41,000 ft, and the DC-8 involved in the 39,000-41,000 ft range reported pitot tube icing. This suggests that pitot icing can occur through processes different from structural icing. That case is worth reading alongside the present tragedy because the storms in both cases appear comparable (both were equatorial mesoscale convective systems). The problem, though, is that the TCAS (Traffic Collision Avoidance System) fault might not be explained by simple pitot icing, because that system relies on groundspeed and GPS measurements rather than airspeed indicators. However, if severe turbulence was encountered (perhaps exacerbated by the autopilot increasing thrust to compensate for low airspeed readings), it might have been sufficient to damage aircraft systems including the TCAS and the IR systems (more on that below; note, though, that the IR Disagree errors occur the next minute, suggesting they probably occurred after the TCAS fault). While this seems like the most likely explanation barring evidence to the contrary, it isn't yet possible to call it certain. All of the 0210Z messages except the TCAS error, however, could be explained by the computer recognizing bad input from the pitot tubes.

After the initial incident, the ACARS messages paint a picture of a rapidly deteriorating situation. At least one inertial reference unit failed, and shortly thereafter both the primary and secondary flight control systems would have failed. It is unclear whether the aircraft was at that point in direct law or on manual backup (which gives LIMITED use of the rudder and elevator trim). The manual backup systems of an Airbus are not designed for turbulence or even landing; they are only designed to provide some troubleshooting time while the plane is in flight.

The next question is whether the aircraft broke up on impact or in the air. At the moment there does not seem to be sufficient information to say. The last message, indicating a fault with the pressurization system due to an external pressure increase, COULD indicate decompression at that point, but it could also be due to a cascade of bad information from the air data unit, or to an actual increase in outside pressure during a rapid descent (for example, after a mach tuck allowed to progress too far for lack of inertial reference). While more detailed analysis of the vertical stabilizer will likely help answer this question, it is too soon to say whether the plane disintegrated in the air or on hitting the water (or a mixture of both). The vertical stabilizer appears to have been broken off by sideways force, but whether this resulted from a sideways impact or from problems in the air is currently uncertain.
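To see why a rapid descent alone could register as an outside pressure increase, the International Standard Atmosphere gives static pressure as a function of altitude in the troposphere. A minimal sketch using the standard ISA constants; the altitudes are illustrative, not taken from the flight data:

```python
def isa_pressure(alt_m):
    """International Standard Atmosphere static pressure (Pa) below 11 km."""
    p0, t0, lapse, g, r = 101325.0, 288.15, 0.0065, 9.80665, 287.053
    return p0 * (1 - lapse * alt_m / t0) ** (g / (r * lapse))

# Illustrative altitudes only (not flight data): outside pressure roughly
# doubles between a typical cruise level and 20,000 ft.
cruise = isa_pressure(10668)  # ~35,000 ft
lower = isa_pressure(6096)    # ~20,000 ft
print(round(lower / cruise, 2))  # roughly 1.95
```

In other words, a descent of 15,000 ft nearly doubles the ambient pressure, which a pressurization controller could plausibly flag as an external pressure fault.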

All in all, that is my rating of the hypotheses surrounding the crash.

At this point, the evidence is not sufficient to conclude much beyond this IMO. Unfortunately a lot of this has been the subject of wild media speculation, which probably does not help anyone in search of the truth, whether out of curiosity or loss. I hope this post helps clarify at least one layman's view of the evidence for any such folk.

Journal: Lori Drew, The SCO Group, and the GPL

I have decided that it is necessary at this point to put my thoughts together regarding the GPL and when license violations can gain the force of copyright violations. I am not a lawyer, but this comes out of watching a number of cases, discussing the issue with a number of lawyers, and trying to understand all sides.

When a GPL violation case comes up, folks are generally quick to argue that it is definitely copyright infringement. Stallman has even argued that nVidia's drivers infringe on Linus's copyrights. While I think a subset of GPL violations do rise to the level of copyright infringement, I think these cases are somewhat overstated.

The GPL, despite what Stallman says, is a contract in which both parties agree to certain behaviors in their joint interest. It is an adhesion contract similar in force to a website's terms of service (where use of the good or service requires adhering to the contract), and the consideration is found in the requirement of equal access to publicly distributed code. The GPL is much more like a contract than more permissive licenses such as the BSD license, because the consideration is quite a bit greater. The BSD license might be argued not to include consideration, since its only requirements are those minimally imposed by copyright law (not stripping copyright headers) and, when distributing in source form, not making false claims about warranties. The GPL, by contrast, requires the licensee to share something further, with the idea that it will be available to the original developer. "I will share if you will share" is consideration, while "I will share, but don't say I am giving a warranty when I am not" might be argued not to be. Similarly, the 4-clause BSD license (with the advertising clause) is clearly a contract, while the 2-clause BSD license might not be.

At the same time, it seems reasonable to argue that a contract violation regarding copyright terms could become a copyright violation if the behavior is sufficiently outside the scope of the license. For example, if I grant someone a license to publish five copies of my book for a flat fee of $20 and they publish 5,000 copies, that would seem to be copyright violation, not a mere contractual issue. On the other hand, I don't think it is copyright infringement if there is a reasonable argument that the contract allows the use, or if the difference is small enough to be resolvable as a contract dispute (if you print six copies instead of five by accident, that should be a contract matter). And certainly a mere reasonable disagreement about the terms of a contract should not subject the loser in the case to copyright infringement sanctions.

Lawyers in contentious cases tend to find as many areas to allege misbehavior and as many grounds for relief as possible. Consequently, one can expect any case of stepping outside the perceived boundaries of a license to be labeled copyright infringement, on the chance that the court will find for the plaintiff on this point. It is thus understandable that lawyers raise this issue in minor contractual disputes for leverage.

One of the most interesting parallels currently is United States v. Lori Drew. In this case, the US Attorney is seeking criminal sanctions over terms-of-service violations on MySpace's website. Drew has been convicted of three misdemeanor counts of computer hacking for violating MySpace's terms of service (by creating a fake profile). The court is currently considering throwing out those convictions on a directed-verdict motion; if it does not, the next step is the 9th Circuit Court of Appeals. The judge is obviously having a hard time with the ruling, since sentencing has been delayed for a total of seven months while he considers the motion to acquit. The key argument from many who support dismissal is that website terms-of-service violations should simply not be prosecutable as crimes. Many of us feel that turning any terms-of-service violation into a crime is dangerous to our system of law, and the same goes for any other adhesion contract. To hold the GPL to a different standard than MySpace's terms of service just because we like the license is hypocritical and similarly dangerous.

What I would propose in these cases is the concept of a penumbra around contracts, within which violations are merely contractual issues. The penumbra would be defined both by the severity of the violation and by vagueness in the contract. Any reasonable argument that the behavior was allowed by the contract would be sufficient to place it under the penumbra, where contractual violations could not lead to further legal or statutory challenges, as would an argument that the violation was not particularly egregious.

Back to my book analogy: suppose that, in addition to limiting the number of copies, I also require the book to be distributed on media suitable for being input directly into a computer, or with an offer valid for three years to provide such. Suppose the publisher does this by typesetting the book in the OCR-B font and arguing that this is suitable for optical scanning, so they have met their terms under the contract. I take them to court. I don't think the court should entertain the notion that there are copyright violations here, because there is a reasonable argument that printing the book in a medium designed for both humans and computers is allowed by the contract. If I ultimately prevail, it should be on the intent of the contract, and it should be a contractual matter.

So the next issue is whether the GPL can regulate bridges (via linking) between a GPL application and a closed-source application. Stallman says such bridges (such as the LGPL components of the nVidia drivers) are not in line with the license. He raises arguments which seem similar in nature to those raised by The SCO Group in their suit against IBM. The major questions are:

1) Does linking NECESSARILY imply derivation?
2) Is derivation contagious? I.e., if A is derivative of B, and B is derivative of C, can we say that A is derivative of C without further evidence?

Regarding the nVidia driver issue, the typical understanding is that nVidia has ported the core logic of their Windows drivers into a module which is independent of the Linux API itself. nVidia then provides a Linux driver, under the LGPL, which links the Linux kernel and the closed-source module together and handles how the kernel interacts with that module. Assuming this is the case, it would seem that nVidia has actually fulfilled their obligations under the GPL v2. The reasons are elucidated by a reading of various rulings in SCO v. IBM.

In SCO, the court ruled that derivation is not contagious and that one must show a continuity of expressive elements in order to find derivation. In short, if A is derivative of B, and B is derivative of C, then to say that A is derivative of C one must show actual structures in A that are derivative of structures in C. It seems unlikely, given the standard understanding of this case, that the nVidia drivers are derivative of the Linux kernel in this way, so they are not bound by the GPL. Similarly, under the GPL v3, it seems to me that one could easily create such a bridge without running afoul of the license, because one can add additional permissions to specific modules (or even license modules under more permissive licenses like the BSD license).

The second issue, whether linking is decisive in the derivation question, is an interesting one and has been dealt with substantially in other papers (see previous journal entries for citations). The general attitude seems to be that linking does not by itself imply derivation, though it can lend some weight to the idea, particularly where object-oriented techniques like inheritance are used. However, a lack of linking does not mean that a work is not derivative either, particularly for more expressive content such as game displays (altering a game display, however done, might well be seen as creating a derivative work).

However, even if such a view were frowned on by the courts, I would hope they would see reasonable arguments to the contrary as limiting damages to contractual violations rather than copyright infringement. Either way, I think Stallman is wrong and is advancing some dangerous arguments of the sort we are rightly wary of in other contexts. The key questions for me are: 1) do most lawyers I know accept these arguments, and 2) would we feel differently if they were being advanced against Free/Open Source Software?

Journal: Why the patent threat hasn't materialized

Everyone worries about software patents. Yet despite saber rattling from Microsoft, we haven't yet seen any major patent litigation against Free and Open Source Software. This entry explores the disincentives to enforcing patents against Free and Open Source Software, as well as which projects may need to worry more than others.

The scope of patent problems in the software industry is fundamentally new. Even the auto industry, which at one point had several hundred manufacturers and major patent litigation for many years, did not compare to the problems of today. Not only are there overlapping patent claims, but the claims themselves are somewhat vague and not tied closely enough to a physical machine to make it readily understandable what, exactly, is covered. Yet for all these problems, we see few patent infringement lawsuits in the industry, and these fall almost exclusively into two categories: countersuits, and small players suing big players.

While patent protections and litigation may have been a big part of the reason for the consolidation of the auto industry in the early 20th century, patents apply to the manufacture of physical goods, with its high barrier to entry, in a fundamentally different way than to the production of intangible goods with a low barrier to entry. This is why business process and software patents have been threats which have failed to materialize.

To make an automobile, you have to:

  1. Build a factory
  2. Buy lots of expensive equipment
  3. Hire workers

Each of these costs money up front that one hopes to make back through sales of the product. A successful injunction means you are not only shut out of marketing your product but also out your startup costs, which would be substantial. Thus the mere possibility of patent litigation is an effective way to prevent competition.

In contrast, Free/Open Source Software requires none of these. It can be made at home on common household equipment with essentially no startup costs. Some folks don't even make money writing the software, and it isn't terribly common for the developers themselves or the distribution companies to have enough assets to make suits against them worthwhile.

So when we look at a company that might want to use patents to forestall competition, we see three real options:

  1. Do nothing, but maybe issue press releases stating that these infringe on unspecified patents. The problem with this is that until one notes which patents are at issue, this doesn't have a lot of credibility or effect.
  2. Issue a press release mentioning which patents one believes are infringed by which products. This has more credibility but opens the business up to a number of problems. First, the open source projects are likely to look at the patents and engineer around problem spots. Then one might see re-examination proceedings started and some patents successfully challenged. So while this offers some short-term gains, it offers no long-term benefits and has some serious problems.
  3. One could actually sue over patent infringement. This would have still more effect than #2 above, but it has the same drawbacks. Furthermore, it is far more expensive than merely announcing the infringement, because actual, costly patent litigation ensues, along with, very likely, a great deal of pro bono work for the defense. Finally, it is even riskier because there is the chance that either the court or the patent office will invalidate the patent.

In these cases, litigating patents against Free or Open Source software doesn't make any strategic sense; no rational player will use patents this way. Patents are, however, a liability for big players (which is why I support Red Hat's patent pool). For larger players, patents are mostly useful in defense and risky in offense, yet large players are vulnerable to patent lawsuits precisely because they CAN pay royalties. Consequently nearly every suit we see is by a smaller business against a big business.

I have concluded that software patents are useless against Free and Open Source software simply because they are usually easy to work around and the damage done by actual litigation is fairly limited.

Of course, IANAL, TINLA, and if you don't get that you shouldn't be reading Slashdot!

Journal: Proposal for a new Free Software License

The purpose of this license would be to provide better compatibility between "Free Documentation" and "Free Software." Currently, one of the big things hampering collaboration is the inability to include material from, say, the GFDL in the help files of a GPL program. This proposal would rectify that by ensuring:

1) No auxiliary content could be distributed in DRM-encumbered versions.
2) Auxiliary content with invariant sections could still be used.

As in the GFDL, invariant sections could include neither code nor functional documentation, but could include things like political arguments, and could force these to be displayed in startup messages, code comments, help files, and the like. Some flexibility would be allowed, so that if help files with invariant sections were removed from a distribution, the same invariant sections could be distributed with the software by other means.

I have asked RMS for permission to create a derivative license of the GPL v3 which would allow invariant sections provided they are scoped similarly to the GFDL. We will see what the response is.

Journal: Thoughts on Google Books Settlement

As a self-published author (http://www.amazon.com/gp/product/1439223084/), I am very pleased with the structure of the book settlement. The structure of financial compensation needs to be looked into, but the real significance is how the settlement treats orphaned works (in-copyright but out-of-print). Many authors, myself included, have hoped that orphaned works would be treated differently: when a publisher retires a book, the author generally has little recourse and no way of bringing the book back to market unless such terms were negotiated into the authoring contract. In some cases authors may be able to buy back the rights to an out-of-print book (often for a fairly hefty sum), but this is not always possible.

The most expensive books in my library have copies on the market for between $1,000 and $2,000. These are inevitably out-of-print scholarly works published in a limited run because the audience was small. After they go out of print, the prices skyrocket and the books become very difficult to track down. Books on folklore studies are some of the worst in this area.

In the past I have argued that copyright law ought to be amended to revert the rights in out-of-print books to the original authors, who might be able to bring the books to market through other means. This agreement addresses these concerns in another way, by establishing a precedent that out-of-print books pose fundamentally different copyright concerns than books currently in print. This lays possible groundwork (though not present in this case) for compulsory licensing for republishing out-of-print books, which would be a good way to address the problem of orphaned works.

Copyright is fundamentally a contract between society and artists, where society grants the artist a temporary monopoly on a work in exchange for being able to use the work later in an unrestricted way. This helps keep artists (including authors) fed, encourages them to create more works, and enriches society by eventually bringing these works into the public domain. When a book is taken out of print, society is cheated in this deal, and this becomes a bigger issue as copyright terms have become substantially longer.

Once copyright is granted (and in the US, because one cannot sue for more than financial losses prior to registration, I would say this is after copyright is registered), I think we need to consider this contract complete. An author or publisher who removes a book from the market before the copyright term expires is cheating the public in this deal, and there are good reasons to argue that the monopoly should be weakened when this occurs. Compulsory licensing makes sense here because the author still gets paid, as per the contract with society, but no longer has the right to remove the work from the market. Both sides get their due.

The Google settlement does not get us all the way there, but the section relating to out of print books is a very significant step in the right direction.

Journal: My book is published!!!

Ok, this is somewhat far afield from normal Slashdot stuff, but.....

I have been working on a book introducing the Runes to serious students: both those studying the rise of neopaganism in this country and those practicing it should get a fair bit out of it. The book discusses patterns in Norse, Germanic, and Indo-European mythology and legend surrounding certain mystical and magical constructs related to the Futhark systems and the surrounding lore. The main focus is on the Rune Poems, the rune names, and how their structures relate to other mythic traditions more generally.

The book is titled "The Serpent and the Eagle" and is available on Amazon at:
http://www.amazon.com/Serpent-Eagle-Introduction-Elder-Tradition/dp/1439223084/

It is also listed on ABEBooks, Alibris, etc.

Of more interest to Slashdotters: I did all the typesetting of the body and the cover in LaTeX, with diagrams in xfig. I did my own book design and came to really appreciate how good LaTeX is in this area. The cover image is a public-domain scan of a 17th-century manuscript found on Wikipedia. This helps show why the public domain and open source can make things easier than we might otherwise think.

Journal: Proposed "Compromise" for the Georgian conflict

With great alarm I have been watching the conflict in Georgia, where Russian troops appear to be continuing to advance and to target military installations despite the "order" to hold to a ceasefire. In my humble opinion, this is a premeditated attempt to bully and subjugate all of Georgia and to use it as an example so that other former Soviet republics (like Ukraine) don't get too uppity.

The fact is, though, that my horror comes out of a sincere belief in collective self-determination, as well as a growing concern that Europe has become overly dependent on energy from Russia. So if I could wave my hand and propose a solution, it would be one requiring painful compromises from both sides:

1) Give Russia the regions of Abkhazia and South Ossetia.
2) Give Georgia full NATO membership, so that the European countries can pledge to protect the Caspian Sea pipeline.

Needless to say this would please nobody. It would, however, help with the stability of the region.

Journal: Why Firefox 3 is Bad for Developers

One of the major problems I have run into with Firefox 3 is that XHTML which passes the W3C's validators is not rendered at all. The problem is that the W3C and the Firefox team apparently have different views of what the XHTML spec actually means.

See the discussion in the following bugzilla entries:
https://bugzilla.mozilla.org/show_bug.cgi?id=408702
https://bugzilla.mozilla.org/show_bug.cgi?id=412114

The following XHTML document renders in SeaMonkey but not in FF3, and it passes the W3C's validators with no problems:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
    <title>LedgerSMB 1.2.99</title>
    <meta http-equiv="Pragma" content="no-cache" />
    <meta http-equiv="Expires" content="-1" />
        <link rel="shortcut icon" href="favicon.ico" type="image/x-icon" />
 
    <link rel="stylesheet" href="css/ledgersmb.css" type="text/css" title="LedgerSMB stylesheet" />
 
    <link rel="stylesheet" href="UI/login.css" type="text/css" title="LedgerSMB stylesheet" />
 
    <link rel="stylesheet" href="css/ledgersmb.css" type="text/css" title="LedgerSMB stylesheet" />
 
    <script type="text/javascript" language="JavaScript" src="UI/login.js" />
 
    <meta http-equiv="content-type" content="text/html; charset=UTF-8" />
    <meta name="robots" content="noindex,nofollow" />
 
</head>
 
<body class="login" onload="setup_page('Name:',
    'Password:');">
    <br /><br />
    <center>
        <form method="post" action="login.pl" name="login"
            onsubmit="return submit_form()">
        <input id="menubar" type="hidden" name="menubar" value="" />
        <input id="blacklisted" type="hidden" name="blacklisted" value="" />
        <div class="login">
            <div class="login" align="center">
                <a href="http://www.ledgersmb.org/" target="_top"><img src="images/ledgersmb.png" class="logo" alt="LedgerSMB Logo" /></a>
                <h1 class="login" align="center">Version SVN Trunk</h1>
                <div align="center">
                    <div id="credentials"></div>
                    <div id="company_div">
                      <div class="labelledinput">
                        <div class="label">
                            <label for="company">
                            Company
                            </label>
                        </div>
                        <div class="input">
                            <input class="login"
                            type="text"
                            name="company"
                            size="30"
                            id="company"
                            accesskey="c" />
                        </div>
                    </div>
                </div>
                <button type="submit" name="action" value="login" accesskey="l">Login</button>
            </div>
        </div></div>
        </form>
        <p><a href="admin.pl">Administrative login</a></p>
    </center>
</body>
</html>

The problem lies with the self-closing <script> tag in the document head. Firefox 3 sees it as unterminated, while the W3C validator sees it as properly terminated. The Firefox team argues that the W3C validator is wrong and that they won't support content that the W3C's tools say is valid.
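This is not Firefox's parser, of course, but a quick way to see why an XML-based validator accepts the document: under XML rules a self-closing tag is well-formed and identical to an explicit open/close pair. A minimal sketch with Python's standard library (the src/type values mirror the sample document above; the point is the tag syntax, not the specific file):

```python
import xml.etree.ElementTree as ET

self_closing = '<head><script type="text/javascript" src="UI/login.js" /></head>'
explicit_end = '<head><script type="text/javascript" src="UI/login.js"></script></head>'

# Under XML rules both forms are well-formed and parse to identical trees.
a = ET.fromstring(self_closing)
b = ET.fromstring(explicit_end)
assert a.find('script').attrib == b.find('script').attrib
print("well-formed either way")
```

Any conforming XML processor treats the two forms as equivalent, which is exactly the behavior the W3C validator exhibits and Firefox 3 does not.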

The Firefox Team's Rationale:

The new behavior in Firefox, as it turns out, is by design. It stems from a security fix: in earlier versions an unterminated tag would be ignored and the rest of the document processed as HTML, which is a problem because it leads to ambiguity about the nature of the content, whether it is executable, and what it does. I support their effort to fix that, but I also think the fix is as flawed from a developer's perspective as User Account Control in Vista is from a power user's perspective. A self-closing tag is distinguishable from an unterminated one, and since the W3C's validators show this as valid, it should be handled.

See:
https://bugzilla.mozilla.org/show_bug.cgi?id=305873

Why this is a bad thing:

Now, arguably the W3C's validators are wrong: according to the spec and the DTD, the script tag really *should* have an end tag. But expecting all developers to be spec lawyers is a serious mistake. Many developers *do* rely on the W3C validators as the ultimate arbiter of what is valid HTML and XHTML. Failing to handle this case the same way will:

1) Needlessly flood Bugzilla with repetitive bugs.
2) Drive developers away from Firefox toward browsers more in line with existing validation tools.
3) Create support headaches for everyone.

This is a trivial fix, without the dire security issues the Firefox team has invoked.

A second option would be to engage with the W3C to ensure that such discrepancies are resolved bidirectionally. A little effort now will save a lot of headache for everyone down the road. I would hate to see Firefox 3 become more or less the Windows Vista of open source, but that may happen if the FF team decides to go it alone in its interpretation of the specs. It would be a real shame.

Journal Journal: A Way out of Colombia's Mess

I have been thinking a great deal about how Colombia can get out of the civil war which has torn the country apart for over four decades. The problems are deep, but they may soon be solvable. In short, government leadership devoted to social justice, economic growth, and security for all Colombians is absolutely necessary. There is hope that this can happen, but not under Álvaro Uribe.

Colombia's civil war began as a Communist insurrection in the 1960s. The goal of the Communist powers was probably to weaken US influence in the region, especially in the wake of Colombia's decision to send thousands of troops to fight in the Korean War. Over time, however, FARC has cut all ties to political parties and has become primarily occupied with its own financial interests in drug trafficking. There are other leftist rebels in Colombia, but none of them match the force of FARC.

In part to contain FARC, the Colombian government has financed and sponsored a number of right-wing militias which also engage in terrorism, narcotrafficking, and the like. While not as large or as strong as FARC, they are still a major force to be reckoned with. Unfortunately, the current president, Uribe, has continued the policy of backing these militias. Consequently, Colombians are left with no guarantee of security in a civil war where both sides readily resort to terrorism and both sides finance their aims through the manufacture and sale of cocaine.

As hellish as the situation sounds (and Colombia is not somewhere I would visit at the moment), the beginnings of hope are starting to emerge. The peace process and FARC's handling of it have caused most of the left wing of Colombian politics to stop supporting the organization, since its main business has become narcotrafficking. While the right wing and the government have not yet abandoned their terrorist organizations, political pressure to do so is building.

What Colombia needs is for a center-left candidate to win the presidential election with a message the whole country can unite behind. The center-left part is important because it is necessary in order to credibly reject violence from FARC. The message needs to be one of social justice, economic growth, and an attempt to provide security for all Colombians from the terrorist organizations which have dominated both sides of this conflict. Once Colombians turn away from violence, the militias (including FARC) can be taken down.

It will not be easy-- Uribe is seeking modifications to term limits to let him run again for the same office. In this he joins the ranks of Hugo Chavez, Alberto Fujimori, and other Latin American authoritarian leaders who would rather rewrite the law than step down. I hope the measure doesn't pass, but we will have to see. Secondly, any president able to marginalize the militias would almost certainly have a platform the US would not like, so there would be additional resistance to breaking the historic ties between the countries.

Nonetheless, I am hopeful it can be done. It seems possible that within another decade, this horrible civil war will be only a memory.

Journal Journal: Winners and Losers in the Latin American Crisis

I have generally called Colombia's raid unbelievably stupid. However, I figured I would discuss who the real winners and losers are in the developing crisis:

Winner: FARC. With this crisis, Colombia has recalled its troops from border areas, giving FARC a large and safe corridor in which to operate. Venezuela's mobilization is also believed to give FARC's leadership military protection from the Venezuelan government. If FARC continues with negotiations and releases the hostages as previously expected, France has also expressed some willingness to stop regarding FARC as a terrorist organization. Legitimacy in the eyes of even some EU members could be a strategic victory in the ongoing conflict. (Personally, I think FARC needs to be seen for what it is-- a large-scale, mafia-like organized crime syndicate which does not recognize international borders. I am not sure I would call them "terrorists" so much as the "Colombian Mob." They have no real political platform other than their own economic interests in cocaine production.)

Winner: Ecuador. Ecuador has in the past taken a tough line against FARC activity on its territory, with the exception of offering to mediate a peace process and hostage release. The attack has been seen across Latin America as unacceptable, and Correa has gained much-needed support.

Winner: The OAS. The OAS has shown that it is capable of dealing with crises in the area and helping to pull people back from the brink of war (Ecuador has threatened military retaliation against Colombia over the cross-border raid).

Loser: Hugo Chavez. Closing the border to Colombia will exacerbate inflation and food shortages in Venezuela and likely cause dissatisfaction in the longer run.

Loser: Álvaro Uribe. Uribe is standing alone in this crisis-- it is unclear how much support his handling of it has even within Colombia. The economic toll on Colombia is likely to be a real problem in the ongoing civil war.

Winner: da Silva. Brazil has shown that it can be a real diplomatic powerhouse in the area. It is likely that Brazil's influence in South America will be strengthened by its role in this crisis.

Loser: The USA. Bush's handling of the crisis ensures beyond any doubt that the lease for the only USAF base in South America (in Manta, Ecuador) will not be renewed. The posturing of all the presidential candidates on the issue has further weakened US credibility and diplomacy in the area. The leftist governments see the US as imperial, while the right-wing governments see the US as undermining their sovereignty. This is not good for us.

To be fair, Colombia's actions would be like the US carrying out raids on drug trafficking organizations inside Mexico, using American air and ground troops, without proper authorization from Mexico. Colombia clearly overstepped any reasonable line, and this explains Mexico's support for Ecuador on the matter.

Journal Journal: And FARC won the battle :-(

As an American, I often feel like my President is unparalleled in stupidity* when it comes to managing our involvement in conflicts and crises around the world. Then, on occasion, someone does something somewhere which makes me realize we are not alone in our experience of being ruled by idiots.

* Ok, that is an overstatement. Bush may be inept at handling many crises but he has made important contributions to the resolution of others.

This week was one such week. Colombia attacked an encampment of FARC personnel in Ecuador during hostage negotiations, prompting threats of war from two of Colombia's neighbors, Ecuador and Venezuela. When asked for an apology, Colombia gave one entirely devoid of substance (something like "sorry for the inconvenience, but we would do it again").

So the crisis began to unfold. By now Ecuador and Venezuela have deployed troops to the border with Colombia, and Peru may be doing the same. The three countries (often at odds with each other) have united in support of Ecuador's sovereign control over its own territory. Ecuador has even threatened military action against Colombia if substantive actions are taken on Colombia's side.

This was an extremely stupid move on Colombia's part. It has alienated the country which leases the space for the USAF base the US uses for most anti-FARC missions, and it has given Chavez an excuse to provide actual military support to FARC. Furthermore, it isolates Colombia and thus risks causing problems for the region's economy. Finally, as long as Ecuador is threatening military action against Colombia over the incursion, FARC has a safe corridor of operations near the borders of Ecuador, Peru, and Venezuela. I personally believe FARC needs to be defeated, but Colombia cannot do this alone. I fear this action may cause many problems for Colombia for a long time.

As if this weren't enough, Ecuador has managed to get all of South America largely on its side. Colombia, far from being apologetic, has decided to take the matter to the International Criminal Court, where it will charge Chavez with genocide for his alleged (though probable) role in supporting FARC. The problem is that the timing at least (and in all likelihood the charges themselves) is so clearly politically motivated that I don't think the ICC will act. Instead this just drives those who are infuriated over the incident into positions where more is required.

I wish I could say I was optimistic, but I now fear this will degrade into some sort of war. Colombia has chosen to be the worst kind of neighbor, and in all likelihood this will cause serious problems for a long time. If Colombia is to defeat FARC, it will need all the help it can get. Making enemies of three of its four neighbors does not seem wise.

FARC has scored an important political victory here, in the same way that Hezbollah scored an important victory in its war with Israel last year. Through ineptness on Uribe's part, FARC has been given a safe haven and safe corridors of operation. Uribe has also restricted the amount of aid which will be available to Colombia long-term (largely sealing the fate of the lease of the USAF base at Manta, Ecuador), and left his country standing all alone in the night.

Journal Journal: Religion vs Science and Non-Overlapping Magisteria

One of the more interesting arguments I have been in recently is whether scientific epistemology is limited, and whether this implies that spiritual truth is somehow separate from scientific truth. The principle of separation was labeled "Non-Overlapping Magisteria," or NOMA, by Stephen Jay Gould, and the name has stuck. NOMA has been criticized as suggesting a false coexistence, but I would argue that the issues with NOMA are due to both science and religion treading on each other's territory.

Of course, I am not a Catholic, or even a Christian. I expect my views here to ruffle quite a few feathers but I have done my best to make my argument solid.

The False Problem with NOMA:
A naive look at NOMA suggests that it is in fact a problematic principle, because religions such as Catholicism have progressively ceded more and more of what was historically their domain to science. It is thus tempting to see NOMA as a drawn-out surrender where science gets to answer anything it can prove and religion gets to answer the rest. In this view, religion is basically there to offer certainty about things we can't be certain about, and hence has no place in a scientific world. I would argue that this view stems from problems defining spiritual truth among both scientists and religious people.

Science does not Exist in a Vacuum:
Similarly, a lot of people water down science by suggesting that it is somehow fully self-contained, and that data inevitably leads to theory. The basic problem with this view is that the development of theory requires more than a mathematical review of the data. Scientific theories arise from review of data, and by definition they are falsifiable (some data discovered later could disprove the theory), but they also contain elements of the theoretician's world-view beyond simply trying to put the pieces together. I suspect this is why theoretical physicists who are deeply into philosophy and spirituality are so well represented in the top tier of their field.

In short, as Werner Heisenberg pointed out, theory is developed by an individual reviewing data and applying ideas which pre-exist that review (see "Physics and Philosophy"). I suspect this was also behind Einstein's proclamation that imagination is more important than knowledge. Science is thus largely an area of applied philosophy, where philosophical principles are applied to the interpretation of data in the formation of theories, much as engineering applies physics, chemistry, and the like.

As such, science deprived of non-scientific ideas would also be denied the major breakthroughs that we have seen in every area.

The Limitations of Science:
Science is a methodology for a limited form of natural discovery which can provide some grounding for certain forms of philosophy but lacks any direction of its own-- even the direction of science is dictated by outside ideas, as discussed above.

Science is at its strongest where reproducible experimentation is possible, and most overextended where it is not. For example, science probably cannot say much for certain about otherwise normal people who claim to have witnessed miracles or to have been abducted by aliens, except "we don't know." Hence the hard experimental sciences, in areas like particle physics and quantum mechanics, are where science is strongest, and the soft sciences, in subjects like psychology, are where it is weakest. In the middle are areas where limited experimentation may be possible but where the bulk of the material must be unearthed-- areas such as paleontology, archaeology, and historical linguistics. (Mathematics does not fit into this classification and is probably better described as a branch of deductive logic rather than a science.)

Interestingly, what hamstrings scientific psychology is a scientifically valid observation first argued (in the field of modern psychology) by Carl Jung: humans do not come into the world as blank slates, but carry with us individual personalities (what he called the "a priori self") from before birth. This is observable in that some fetuses are more active than others and display other behavioral differences as well. Since there is a portion of the psyche which is not reproducible, you end up with a fundamental problem when looking for reproducible results-- at best, effects may show up in statistical analysis, but actual experimentation and repeatability are fundamentally limited, even excluding ethical concerns.

The Valid Role of Science:
Science represents the best epistemology for seeking an understanding of the mechanisms of our current natural and (broadly speaking) historical external world. It also provides more limited insight into older historical elements of the artificial world through archaeology and historical linguistics. Finally, it has even more limited value in areas such as clinical psychology (psychiatry is something different, being generally grounded in neuroscience and experimentation, and hence more scientific than clinical psychology).

However, science can never go from mechanism to goal-- it can never tell us what we as a society should value, and can never provide by itself any viable ethical principles (it can provide a method for validating actions we take in support of ethical principles, however).

Science can never be the guide to what is great artwork or music, and I believe it will never be able to provide guidance for relating to the experience of the divine, which, if my understanding of comparative religion is accurate, seems to be a near-universal aspect of the human condition. Science might provide some insight into what makes certain pieces of art great, or into the mechanism in the brain behind the sense of the divine, but it cannot provide guidance beyond mechanism.

So science must remain concerned only with mechanisms, and with things which necessarily spring from mechanism (such as chronology, age, and timing), of the natural world-- and, to a lesser extent, of the psychological and artificial worlds as well.

The Place of Religion:
If science is limited by its own epistemology, the question becomes: what of religion? Is there a place for religion alongside science and philosophy? I think this problem, when approached rationally, provides ample room to criticize and limit the scope of religious discourse. But I also think that, looking back, it is clear there is a place for religious belief.

The first issue is that religion must give up any claim of authority over the mechanism (and hence the chronology, age, timing, etc.) of the natural world. This means that evolution, for example, lies in an area where religion has no claim of authority.

Most of the world's historically important religions developed at least in large part before the adoption of writing. Walter Ong ("Orality and Literacy") has documented a fundamental shift in thinking which occurs when a culture moves from being oral-tradition-centered to literacy-centered. This literacy shift is, I believe, the core of the cognitive shift which led from the Renaissance to the "Enlightenment." Because the Abrahamic religions (including but not limited to Christianity, Islam, and Judaism) developed in literate societies, they have tended to place more emphasis on the idea of literal truth. It is thus my belief that religion must step back and address issues more the way Hinduism or Platonism does-- as metaphors, where the Ideas behind them must be sought, rather than as simple pronouncements of literal truth.

Religious traditions thus end up being language-like structures which help us relate to various aspects of our inner and outer worlds, but which exist in a way that fundamentally does not overlap with science and yet provides immeasurable value (perhaps greater value than religions provide today). In this view, science and religion could enhance each other and coexist without conflict, each giving its best to society and to the other.

Ultimately, this means that most religions today must give up more than science must, but in my view this is not a surrender but an act by which religion refocuses itself on what it is really all about.

Journal Journal: Towards a New FOSS-supporting Organization?

For some time I have been looking at the possibility of starting a new organization to help fill what I believe is a gap in the coverage available from other organizations. The Institute for the Advancement of Open Source would:

1) Help with outreach and advocacy of free/open source software, documentation, and content.

2) Provide minimalist, marketing-friendly guidelines and definitions for what qualifies as free/open source. This is different from the OSI's OSD in that the OSD is largely designed for the evaluation of specific licenses by a specific organization. I believe it is well beyond the time and ability of marketing managers to understand that organization's interpretation of its definition (and how that interpretation differs from the nearly identical DFSG).

3) Provide a place where people involved in Free/Open Source software, documentation, and content can come together, work together, and mentor others involved in the same area.

This will be different from other existing organizations in the following ways:

Unlike the FSF, we will have objective criteria for what constitutes Free/Open Source. Nobody will be left wondering how forced advocacy (such as the GNU Manifesto as an invariant section of the Emacs documentation) fits into the free speech/free software world.

Unlike the OSI and SPI, we will not limit ourselves to software. The OSI has not really pursued many of the positive outreach possibilities in recent years, and although SPI has outreach in its charter, it has not done so either. We will not provide the organizational support for specific projects that SPI does, nor the license certification that the OSI does. Instead, we will be primarily an outreach organization aimed at advancing free/open source software.

What do people think? Does this make sense?

Journal Journal: Metatron Technology Consulting's Free/Open Source Guidelines

Following disagreements which got me banned from the OSI license-discuss list-- over the right or power of the OSI to unilaterally claim total authority over the term "open source," a view which is disclaimed on the OSI site but is generally held by various license-discuss participants, including the list moderator (those interested in what was actually said can check the December and January archives of that list on the OSI site)-- I have decided that the best way forward is to help fill the void in the industry by offering the simple, concrete guidelines my business will use to determine whether we can work on a project under our open source policy. This policy is generally in the same spirit as the OSI's OSD and the FSF's Four Freedoms, but is designed to be less subject to arguments over interpretation and easier for businesses to use when deciding whether software is uncontroversially free and open source. Note, however, that these guidelines are somewhat stricter than either the FSF's definition of Free Software or the OSI's OSD.

One of the key points one must be aware of is that software freedom carries with it an economic advantage. The goal of this set of guidelines is to help provide an objective framework for understanding when we feel that this freedom is crippled through onerous requirements either on the developers or the end users. Our commitment is to preserving this freedom for our customers and we hope other businesses will adopt similar guidelines.

The first two requirements are hard requirements. If either is violated, we will not work on the project, though we may help arrange for others to do so.

1: Open source works must not place restrictions on use, nor may they force one to distribute source code except when one has opted to provide a copy of the object code. Furthermore, no bundling restrictions may be in place. This provision disqualifies licenses which restrict commercial use either directly or indirectly (as the Aladdin License seeks to do), as well as those which force distribution of modifications (such as the Affero GPL and Larry Rosen's Open Software License). It does not disqualify the GPL v2, nor does it fully disqualify the GPL v3.

2: Modifications must be possible for all sections of the work except for the license text itself. One must be allowed to distribute such derivative works and to provide the same rights downstream as were granted to oneself. This disqualifies GFDL works which include invariant sections. If someone stated that one must *choose* a specific license and not provide the rest of the rights downstream, we would not work on that project either. This would also disqualify GPL v3 programs where additional permissions require their own removal on modification.

The following guidelines involve license selection. These do not disqualify us from working on various projects, but help us determine what licenses are best for a project:

3: Licenses should be no more restrictive than absolutely necessary for either party. All else being equal, more permissive licenses are preferred.
4: Licenses should be no more complicated than absolutely necessary. All else being equal, simpler (and usually shorter) licenses are preferred.

The final guideline defines our community involvement:

5: Multivendor solutions are preferred over single-vendor solutions.

What do people think?

UPDATE:
There has been a change in moderators of the license-discuss list. I was banned by Russ Nelson, not the current moderator (Ernie P). Ernie has been an important positive influence on the OSI in general and I wish him luck. However, I have serious questions about the OSI's ability to actually contribute substantial resources to the community at the present time, so I suppose I will work on these challenges and reconsider my involvement on the lists later.

Journal Journal: Constitutional Citizenship, Arizona, and all that

A ballot initiative has been put forward in Arizona attempting to deny citizenship to individuals born in this country to parents who are not here legally. I believe this measure is unconstitutional and something every American ought to oppose.

To be fair, children born to illegal immigrants who then confer additional benefits on their parents are a real issue, but solutions to this problem exist which do not impose burdens on American citizens and do not run afoul of the constitutional definition of citizenship found in the 14th Amendment (applicable to all who are born or naturalized in this country and subject to its jurisdiction).

Proponents of the measure suggest that individuals born in this country to illegal immigrants do not fall under this amendment. They point to the congressional record to support their arguments, but neither the plain wording of the 14th Amendment nor the congressional record supports this limitation. In general, the 14th Amendment was understood to exclude certain classes of individuals, most notably aliens (neither born nor naturalized in the US) and the children of diplomats or ministers of foreign governments (whose diplomatic immunity excludes them from US jurisdiction).

An attempt to ensure that only the children of US citizens are US citizens would impose serious burdens on US citizens when their children are born. As my son was born overseas, I had to go through a similar process after his birth. I don't think requiring something similar inside the US would be wise even if it were constitutional (which I don't believe it is).

A better approach would be a simple rule: US citizens must reach a minimum age of, say, 21 before petitioning for the entry of blood relatives. Furthermore, when a family of illegal immigrants is detained and deported, any US citizen children of the family who have no immediate relatives legally in the US would be issued a passport and then deported with the family. The passport would allow the child to apply for a new passport on becoming an adult and re-enter the US as a US citizen. No interpretation of the Constitution I can find would prevent the US government from barring entry of minor US citizens when not accompanied by a legal guardian who is also legally able to enter the country. By placing a minimum age of 21 on top of a petition process which can last several years, we would remove any incentive to have children in the US for the sole purpose of improving one's own immigration status.

A second piece of our policy needs to be a sane immigration policy which does not create a massive black market for illegal immigration. This black market fuels drug and human trafficking, and the same channels could be used to smuggle other dangerous materials into the US, such as those useful in large-scale terrorist activities; illegal immigration is therefore a systemic threat to our national security. This means we need strong reform of the processes for bringing in foreign workers, and that we work slowly to ensure that people currently on welfare have an opportunity to make a better living by taking the jobs which currently go to illegal immigrants (though here the devil is in the details).
