


Submission + - Ireland to Fix Downloading Law for EMI

2phar writes: The Irish Government is to publish an order early in the new year to allow music publishers, film producers and other parties to go to court to prevent ISPs from allowing their customers access to 'pirate' websites.

The government has written to music publisher EMI Ireland confirming the order will be published and incorporated into existing legislation in January. EMI Ireland recently warned the Government that it would take legal action against the State if the Government did not address the problem, following its unsuccessful attempt to seek an injunction against UPC last October.

Comment Re:Wait a minute. (Score 1) 112

That sounds likely if we haven't had supporting evolution of the bone/tendon/etc. genes.

My intuition about why we'd have this is slightly different from yours: my first feeling is that it is likely to be part of the system that controls cancer. (I am not a professional in this area, so take it for what it is worth.)


Comment Re:Really? (Score 1) 463

"So due to the transaction costs, it is not better for society"

It seems you are implying that when the risk is on the side of the producer there are no costs for society.

That's not my view at all; I'm sorry that it came off that way. The part that was implied, and probably shouldn't have been, is that a closely estimated risk has a significantly lower cost than a less closely estimated one. This is standard economic theory: higher risk (variance) is generally considered to go with a need for higher returns (see e.g. modern portfolio theory).
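As a toy illustration of that risk-return point (all the numbers here are made up for the sketch):

```python
from statistics import mean, pstdev

# Two hypothetical investments with identical expected (mean) return
# but very different risk (spread of outcomes).
safe_outcomes = [1.04, 1.05, 1.06]
risky_outcomes = [0.50, 1.05, 1.60]

assert abs(mean(safe_outcomes) - mean(risky_outcomes)) < 1e-9
assert pstdev(risky_outcomes) > pstdev(safe_outcomes)

# With equal expected return, risk-averse buyers prefer the safe option,
# so the risky one must offer a higher expected return to attract the
# same money - that premium is the extra cost of poorly estimated risk.
```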

I also think there are several reasons why the risk will be lower if it is taken by the software developer rather than through contracts for future payment. As a software developer, I can change the scope of the project. I can start out with something that I think will be a Windows program, and then, if Mac becomes popular among my target group, switch to having a user interface for Mac (only) instead. Or I can switch to a web interface. Or I can reposition the product: instead of targeting general music enthusiasts, I may find that my beta DJ users love it and would pay a significant price for it, while the general music lover says "Meh." In the worst case, the developer can cancel the project if it takes too long to develop. However, if I'm going to pay somebody to develop something, I am not willing to pay for "we'll deliver it on some platform, and it will do something that some people are interested in." I want a specific set of features, for a specific platform. That adds risk for the developer, who loses flexibility.

There's also the important issue of time-delayed delivery and changing customer circumstances. If something takes a while to develop, it's likely that some of the potential customers will have moved on to other things while some new customers have come along. As an extreme example, if I'm developing a game that targets 12-13-year-old girls and it takes four years to develop, *all* the potential customers will have moved on and been replaced by new ones. As a less extreme example, I might have switched to wanting my software on the Mac, or on the web, or on Android - while having paid for what seemed reasonable at the time. This adds risk for the customer.

Bad news: the costs are exactly the same since a failed project is a failed project anyway.

But the costs of a failed project are different, and the value of the money is different. As an example, if I'm in a startup, software that I can use now has a high value, and cash in hand has high value - while software that I get delivered in a while has much less value.

But the basis of marketplace-like capitalism is that both producers and consumers are perfectly informed of their options

I think you're thinking of what defines a perfect market / perfect competition.

which is less the case when there are production efforts that are invisible to the consumer. Add to this that the "trick" of license-based business is that while the production costs are bounded, the benefits are not. That's what allowed the owners of the software giants of the eighties and nineties to become some of the richest people in the world in record time. Whenever you see net benefits going well over 100%, you can bet capitalism is not working as expected and society as a whole is getting a worse deal.

You're thinking of "increasing return to scale". That's certainly one of the violations, but there's also violation of "Homogeneous products", "Perfect information", and probably some others.

In the specific case of Microsoft, they manipulated the market by tying their product to another product (CPUs), making sure that if anybody else wanted to sell a competing product, that product would have to be bought *in addition to* Microsoft's product.

As for net benefits going well over 100%: it really depends on what risk is being taken. If 100 companies start up in an area, you expect 99 of them to fail, and you expect the group as a whole to have a neutral return, then you expect over a 100x return on the last one (100x plus inflation over the time period). If you afterward look only at that single survivor, it'll look like a failure of capitalism, but it's really just flat compensation for risk.
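The arithmetic behind that, as a sketch, with everything normalized to a one-unit stake per company (the portfolio is hypothetical):

```python
n_startups = 100
stake = 1.0      # investment per company, in arbitrary units
survivors = 1    # 99 of the 100 are expected to fail outright

total_invested = n_startups * stake
# For the whole portfolio to merely break even, the lone survivor must
# return everything that was invested across all 100 companies
# (plus inflation, which is ignored here).
required_multiple = total_invested / (survivors * stake)
assert required_multiple == 100.0
```

Looked at in isolation, the survivor's 100x return looks like outsized profit; looked at as part of the portfolio, it is exactly break-even.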

Comment Re:Reasons for negative response (Score 1) 373

I don't think his code is GNU licensed though? If it was his intent to keep it free, that would be the project's home.

That's a false assumption.

I've never had the intent of making any software I've written proprietary; yet I always use licenses that give more freedom than the GPL. (I've used some software I've written in a proprietary setting, but I have always released all changes to my own software and more than 90% of my changes to other people's software.)

Some of us, for various reasons, have problems with the freedoms the GPL takes away under the guise of "protecting freedom" ("we had to destroy the village in order to save it"). Personally, I believe that proprietary software can be good, as it allows a pool of users to effectively get together and share the cost of having something made that they could not afford to have made individually. If we release free software, we raise the bar for this - it is only worthwhile to release proprietary software that is better than the free software in some way (even if that way is just knowledge and availability). If we release BSD-licensed software, proprietary software can build on top of it - say, a word processor that's based on a BSD-licensed codebase but reworked to work specifically well for independent accountants. This will (in my experience) also lead to changes being contributed back to the core BSD codebase - not all the changes that are made, certainly, but many of them.

If you have a GPLed codebase, on the other hand, nobody can build a product for accountants out of it and sell it. If the GPLed codebase is good enough, rebuilding everything it would give you adds so much expense that the product becomes infeasible (and "good enough" doesn't have to be very good to do that). And if the GPLed codebase or codebases are good enough to cover the entire market, you still don't get the custom application that would have been valuable. The open source world has lost the changes that might have been contributed back (in practice, almost all derivatives of BSD-licensed code contribute some of their changes back, and most contribute most of their changes back) - and the accountants and their clients have lost the value of the custom app.

You can claim that the accountants could get together and pay to have the changes developed and open sourced; but this requires somebody to organize it, and requires the accountants to get into the business of evaluating the likely quality of software output from a developer's claims, which is clearly substantially harder for them than evaluating whether a particular existing package works for them. (It also has the free-rider problem, which makes it harder to get them to commit to giving money.)


Comment Re:Talk about missing the point. (Score 1) 421

Performance doesn't matter any more. Correctness and quick development does. FP provides that in abundance. (Of course, correctness is just another way to say "quick development" nowadays, but whatever...)

To address some of your points:

1) Two words: undefined behavior. You'll find it around every corner in C or C++ (two very different languages, of course) -- this leads to unreasonably hard-to-find bugs. In C++ it's also extremely hard to avoid such behavior consistently -- compilers are happy to exploit it for optimizations, but somehow can't provide warnings for all the cases where you are (unwittingly) relying on UB.

2) Really? Haskell and OCaml do not rely on any of the things you mentioned. Difficult? Perhaps, but see my point #1. Besides, who would you like making your software... someone who's just "learned Java" or someone who knows what the fuck they're doing?

3) So all FP languages which don't perform as well as C (or within an order of magnitude of it, at least) don't perform as well as C. What an insight. Btw, Haskell is also within an OoM of C. Also, see the top of this post.

4) How hardware works is fucking irrelevant. If the compiler for language X can optimize "fib N" down to a constant expression, it doesn't matter that your C compiler can generate code which executes a million iterations of a fib-computing loop per second. Certainly, we're not quite there yet, and in the C world there's no hope of doing this beyond *really* simple examples (i.e., not fib), but FP could conceivably get further.

There's one significant problem with this compared to the C model: the performance becomes very hard to understand. The corresponding advantage of the C model is that its performance characteristics are fairly easy to predict.
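That cost opacity is easy to demonstrate even in an interpreted language; here is a minimal sketch (Python rather than Haskell, with run-time memoization standing in for the compile-time evaluation discussed above):

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential time: the call tree recomputes the same values over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # Identical definition; the cache makes it linear time.
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

# Same answer, wildly different cost - and nothing in the source text of
# the two definitions tells you which one you can afford to call.
assert fib_naive(25) == fib_cached(25) == 75025
```

The same opacity cuts the other way: a compiler that exploits the purity of `fib` may rewrite it entirely, which is exactly why the resulting performance is hard to predict from the source.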

While I generally like higher level languages, pretending that their drawbacks don't exist is just getting in the way of fixing them (and of their use, as people will think all other claims are the same level of hype).

This debate is very old - 20 years ago the slogans were "Lisp programmers know the value of everything and the cost of nothing" and "A Sufficiently Smart Compiler..."


Comment Re:Really? (Score 1) 463

That moves the risk to people who have a much harder time evaluating it, and incurs insane transaction costs. It is significantly easier to evaluate "Do I want to pay for specific software X that lets me do Y today?" than "Will the software that this person writes over five years be what I need in five years? Will this person even be able to write the software?"

So due to the transaction costs, it is not better for society, though I think you're right that it would be if there were no transaction costs.

Comment Re:This is getting old (Score 1) 480

How about you read up on how things actually happen before you publicly have an opinion? Almost all relevant Unix-level changes in OS X have been offered back, including some manpower to help merge them; unfortunately not enough manpower to make the merge technically feasible.

The problem is the sheer number of changes you get when you use a full operating-system development model and branch; merging gets hard, and it is generally more rewarding for somebody to work on their own changes than to do merges. We're very far from fully cross-merged between NetBSD/OpenBSD and FreeBSD too, and there you at least can't claim that it's a licensing or commercial-body issue.

Cross-merges between the BSDs and between Linux distributions tend to happen at the same granularity: import of whole tools.

GPL vs BSD licensing is irrelevant to this; and it's irrelevant to most contributions of changes. (The extra changes that get contributed when you have a GPL license are a small amount, and you lose a lot of changes that never get made at all. Trying to work out the exact economic corner cases where you get more changes with the GPL is a complicated mind game - but you have to start with "Why is X making these changes in the first place, and why would they bother to make them if they can't keep the value (can't keep them proprietary), yet not be willing to give them away?")

Comment Re:Why has it taken 50 years? (Score 1) 585

That's somewhat true. The problem is understanding what is a religious question and what is a scientific question; if it is a scientific question, with a hard answer available by looking at the world, religion has to give. And this means that scientists are much more likely than religious people to understand which questions are scientific and which are religious.

Comment Re:argument by definition (Score 1) 585

I think your scale is skewed; I think you'll find that the label used at P(0.5) is usually "Christian", that P(small) is "agnostic", and that atheists are assumed to be at P(zero). There's also been a tendency to label anybody in the "we can't know, but I presume there is no God" camp as agnostic.

I also think there are not vastly more people believing there's about a 50% chance. I think there may be more people who politely don't tell Christians that they're wrong, and who want to avoid the controversy that comes with calling themselves atheists. (I used to call myself an agnostic, partially on the grounds that there is no way to know for sure - the beliefs of rational theists are about things that are forever outside science and make no observable difference whatsoever - and partially on the grounds of not wanting to get into conflicts with Christians. These days, without any significant change in my beliefs, I call myself an atheist. After a fair bit of thinking, I've come to the conclusion that we can clearly show the origins of Christian faith and other superstitions as superstitions; and the only reason to treat them more kindly than belief in Santa Claus is that so many people base their lives on them.)

Comment Re:Whose name is the account under? (Score 1) 346

    You do realize that it wouldn't happen like on TV, right?

    (as a side note, I am ignoring the differences between criminal and civil here. Most of it applies to both sides.)

    If they've seized your computer, there will be a subpoena compelling you to provide the correct password. It's not like you'll be held in an interrogation with a cop saying "give me the password or else [blah, blah, blah]". It'll be a long, drawn out process. You'll learn the wonderful world of the legal process. Subpoenas, depositions, countless hearings, motions, and eventually you'll actually end up in the court room to testify about stuff.

    "I'm stressed, I can't remember it" might (but probably) won't work on day one. By the time you end up in front of a judge, claiming that you can't remember the same password that you had to type every day to unlock your computer, he'll laugh at you, and then you can learn about "contempt of court".

I have been in a situation where I would then have had to be found in contempt of court. I had used the same password for a year, typing it many times per day. After I stopped using it for a couple of months, I was unable to remember it when I went back to get data off that computer (the important data being miscellaneous changes to FreeBSD that I hadn't submitted yet).

If I'd been accused of having pirated something, I'd have been just as incapable of remembering the password as I was when I just tried to get open source software I'd written out of the computer.

And if anybody should need me to testify to that, I'm perfectly willing to.


Comment Re:Two reasons software patents should not be (Score 1) 223

Much of the relevant research is basic research that's funded through government grants (i.e., taxes) anyway, so counting just the "top of the pie" done by the pharmaceutical companies isn't 100% reasonable. But let's do that for a moment anyway.

Using the figures from the 2006 report as a basis (this is the first result from a Google search for "how much of pharmaceutical research is done by pharmaceutical companies"), we find that the net cost of an "NME" (New Molecular Entity) to a pharmaceutical company is about 802 million USD (as of 2006), including the opportunity cost of the research money, and that there are 30-40 new NMEs approved each year. Assuming 40, that's 32,080 million USD.

If we can find another way to finance that, we'll have most drugs available at the same kind of cost as ibuprofen today (pennies per dose), we'll have a more rational and less marketing-driven choice of which drugs to use, and we'll have the benefit of research being directed toward where it can make the most difference rather than toward grabbing a piece of the pie with me-too drugs - and we'll get rid of the significant corruption of science that goes on around pharmaceuticals. We'll also avoid having to pay the marketing costs of the pharmaceutical companies, which are about twice the research costs.

If we say that we can't get any international help whatsoever, and it's all going to be funded in the US, this is $104.50 per person per year in extra taxes.

If we can fund it internationally, we could just attach it to the overall OECD research funding. According to Wikipedia, the total OECD funding for research in 2006 was 729,430.80 million USD - about 23x more. Of this, 30.2% (220,287.9 million USD) was government funded. So we end up with an extra 14.6% in government research spending for the OECD group.
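The figures above can be checked with a few lines (the US population of roughly 307 million is my assumption, back-derived from the $104.50-per-person figure):

```python
cost_per_nme = 802.0   # million USD per New Molecular Entity (2006 estimate)
nmes_per_year = 40

total = cost_per_nme * nmes_per_year
assert total == 32_080.0                 # million USD, as quoted above

us_population = 307e6                    # assumed; roughly the US population then
per_person = total * 1e6 / us_population
assert abs(per_person - 104.50) < 0.1    # USD per person per year

oecd_gov_funded = 220_287.9              # million USD, government-funded share
extra = 32_080.8 / oecd_gov_funded
assert abs(extra - 0.146) < 0.001        # the ~14.6% quoted above
```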

It's substantial, but it's not infinite. And it comes with a number of other benefits.

It is also clear that a significant part of the cost of pharmaceutical research itself comes from patents, so the above is a worst-case scenario - it would be cheaper than that.

Comment Re:Land of the free... (Score 1) 964

It would be a wise move for your police to leave their weapons at home *because unarmed police leads to less use of violence in general*.

But it's like the death penalty leading to more murders (also fairly well documented): People trust their prejudices ("gut feeling") instead of trying to look up what the data says.
