Comment Re:Why bother? (Score 1) 421

why is Apache still spawning processes for every request that comes in... don't they realize the overhead of that?

My guess is they're UNIX devs - under Linux (and probably some other Unices), forking is ridiculously cheap thanks to copy-on-write. In fact, Linux threads are just processes which share resources: both are created by the same clone() system call, so the overhead of forking is comparable to that of spawning a thread.
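For reference, the prefork pattern is a one-liner on any Unix. A minimal Python sketch (Apache itself is C, but fork() is the same system call underneath):

```python
import os

def run_in_child() -> int:
    """Fork a child, wait for it, and return its exit status."""
    pid = os.fork()              # cheap under Linux: pages are copy-on-write
    if pid == 0:
        # Child process: do some work, then exit immediately without
        # interpreter cleanup, as is conventional after fork().
        os._exit(42)
    # Parent process: reap the child and decode its exit status.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

Unix-only, of course - which is part of the point about Windows support.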

I'm not sure how many people are using Apache under Windows, but I wouldn't be surprised if they were a minority.

Comment Re:Hope it works better then my wallet (Score 1) 110

Ah, I think you misunderstood me. When I said that it uses challenge-response, I was referring to the cryptographic challenge-response (e.g. the card receives a message, signs it with a private key, then transmits the signature), in contrast to magstripe, where data is simply read from the stripe.
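To make the contrast concrete, here is the protocol shape as a minimal sketch. It uses a shared-secret HMAC purely as a stand-in - a real EMV card signs with an asymmetric private key, so the terminal never holds the card's secret - but the challenge/response flow is the same:

```python
import hashlib
import hmac
import os

# Hypothetical shared secret; a real card holds an asymmetric private key.
CARD_SECRET = b"card-secret-key"

def card_respond(challenge: bytes) -> bytes:
    """Card side: prove possession of the secret without revealing it."""
    return hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()

def terminal_verify(challenge: bytes, response: bytes) -> bool:
    """Terminal side: recompute the expected response and compare."""
    expected = hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)   # fresh nonce per transaction defeats replay
response = card_respond(challenge)
```

A skimmer that records the exchange learns one (challenge, response) pair, which is useless for the next transaction's fresh challenge - unlike magstripe data, which is the credential itself.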

Comment Re:No problem. (Score 2) 137

I suspect the test could be generalized to work for N variables, since the noise should increase as we move along a causal chain. The only issue is the exponential drop-off in confidence. If the accuracy could be improved, it could be quite useful for deriving or verifying Bayesian networks.
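The noise-accumulation intuition is easy to check numerically. A toy sketch (not the actual test from the paper): simulate a chain X → Y → Z with additive Gaussian noise and watch the variance grow monotonically along it.

```python
import random
import statistics

random.seed(0)

# Simulate a causal chain X -> Y -> Z with independent additive noise.
n = 10_000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [x + random.gauss(0, 0.5) for x in X]        # Y = X + noise
Z = [y + random.gauss(0, 0.5) for y in Y]        # Z = Y + noise

# Variances add for independent terms, so Var(X) < Var(Y) < Var(Z).
var_x = statistics.variance(X)
var_y = statistics.variance(Y)
var_z = statistics.variance(Z)
```

The catch the parent identifies is exactly this: each hop adds a fixed amount of noise, so the signal-to-noise ratio (and hence the confidence of any direction test) decays as the chain gets longer.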

Comment Re:Some people better be out of a job... (Score 1) 110

And replace it with what, exactly?

Seriously, how do you intend to manage all of the addressing, both the IP level and the human-readable level, without some form of central authority?

I've been playing around with some ideas lately on how to implement a decentralised DNS, and what it basically comes down to is how you resolve conflicts - e.g. Microsoft reserves www.microsoft.com, then I try to do so. Ideally, the order shouldn't affect the final result, because a first-come-first-served system encourages squatting. Crypto-based systems also have to consider whether the domain name can be reacquired if the private key is lost/stolen.
Here's a quick summary of the different approaches:

Traditional DNS: uses first-come-first-served (FCFS); the registrar denies the second application because the name is already in use, and disputes are settled through legal means (trademark law). Centralized.

mDNS: uses multicast, impractical for global usage. No conflict resolution. This is the only decentralized approach that doesn't involve a DHT.

Microsoft PNRP: requires registrars which sign names to handle conflict resolution. (The unsecured variant has no conflict resolution.) Also requires IPv6, which is currently impractical.

Namecoin (decentralized with FCFS): Conflict resolution is implemented algorithmically. There is a small (1 cent) cost associated with updates.

Decentralized with voting: whichever resolvent the majority decides is official gets the domain name. Impractical, due to the ease with which fake votes could be created. (Can be mitigated by making voting expensive - the Bitcoin approach.)

Decentralized with trust-on-first-use (TOFU): conflict resolution is implemented by the resolver. Where there is a unique resolvent, it is used and added to a list of trusted resolvents. Where there are multiple resolvents, and the name has not been resolved by the user previously, the client may check white/blacklists published by other clients whom they have previously marked as trusted. If unique resolution is still not possible, manual intervention is required.

Currently I'm leaning towards the TOFU approach, since it's an extension of what's currently used for SSH clients. The only issue is that allowing multiple clients to resolve the same name differently borders on breaking the internet (see RFC 2826). However, it does have the nice property that it's the only decentralized system where a name-holder can have their private key seized by an attacker and still recover the domain name (by creating new keys and having people blacklist the old ones in favour of them).
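The TOFU conflict-resolution rules above fit in a few lines. A toy sketch (names and records are hypothetical, and real resolvents would be signed records, not strings):

```python
class TofuResolver:
    """Trust-on-first-use resolver, in the style of SSH known_hosts."""

    def __init__(self):
        self.trusted = {}  # name -> resolvent pinned on first use

    def resolve(self, name, resolvents):
        """resolvents: candidate records seen on the network for `name`."""
        if name in self.trusted:
            return self.trusted[name]           # already pinned: keep it
        if len(resolvents) == 1:
            self.trusted[name] = resolvents[0]  # unique: trust on first use
            return resolvents[0]
        # Conflict on a never-before-seen name: defer to the user (or to
        # white/blacklists published by peers the user already trusts).
        raise LookupError(f"conflicting records for {name!r}")
```

Note that once a name is pinned, later conflicting records are simply ignored - which is the recovery property: victims of a key compromise ask peers to blacklist the stolen key's records and pin the replacement.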

If anyone has some ideas/suggestions on this, I'd love to hear them.

Comment Re:Hope it works better then my wallet (Score 1) 110

The VISA Pay Wave doesn't have user challenge/response, it's simply a wireless magstripe.

Do you have a citation for that? It seems odd to me that they would use such a weak mechanism, when the existing chip already uses challenge/response.
The standard used is ISO/IEC 14443, which enables half-duplex communications, suggesting that challenge/response is at least plausible.

Additionally, in my country (Australia), I found that when PIN-less transactions were introduced for contactless payments below a certain threshold ($100), PINs were no longer required for chip-inserted transactions below that threshold either, which is consistent with my belief that the RFID mechanism is just another means of connecting to the chip.

Comment Re:Hope it works better then my wallet (Score 1) 110

Got my passport in 2006, don't think it has RFID. My VISA card does - or did until I centered a hole punch over the chip and whacked it with a hammer. That was strangely satisfying :-)

I really don't understand this logic. Yes, wireless connections to the card are a risk (and I say that as someone who took measures to shield my wallet), but that risk is minuscule in comparison to the risks associated with using the magstripe (vulnerable to skimming) instead of the chip (which uses challenge-response).
These days, if someone requires me to use magstripe, I look at the terminal extremely carefully before swiping.

Comment Re:huh what? (Score 1) 388

The practical effect is the same - the user is denied access to the site via an attack on the name resolution protocol. If the registrar is subpoenaed, it doesn't matter whether they set the domain to resolve to a takedown notice or an NXDOMAIN result - the practical result is that anyone who doesn't have the site's IP address written down will be unable to access it.

Both hosting and registering the domain outside of the US will provide some resilience if you are doing something they don't like, though they can still block resolution for everyone who isn't using DNSSEC.

Comment Re:Public road is not for joy riding... (Score 1) 688

There's a level of risk in life that most people are willing to accept in order to live the way they want. Just because some people are happy wrapped up in cotton wool and kept away from any possible harm doesn't mean that sort of life should be inflicted on the entire population.

Society as a whole decides where it lies on the freedom-safety spectrum. Given that we already have speed limits, it's not unlikely that limits on manual driving will eventually be put in place.

Comment Re:Piss poor open source (Score 1) 36

For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.

The wording of the GPL is quite clear - it only requires the build scripts (Makefiles) to be included, and even adds an exception for components normally distributed with the OS, such as the compiler. It says nothing about a requirement to include the compiler itself.

Keep in mind that when the GPL was first written, GCC was only 2 years old, and proprietary compilers were unavoidable in many areas. Even today, proprietary compilers are still unavoidable for certain applications, e.g. FPGAs. Requiring the publishers of open source programs to cover the cost of licensing the compiler for all their users would have been insane, and would have significantly limited the spread of open source software.

The obvious intent of the GPL is for you to get a code in a way that allows you to work with it and get results.

The intention of the GPLv2, to paraphrase Linus Torvalds, is that in exchange for the ability to modify the software to suit yourself, the changes you make can be merged back upstream. The GPLv3 places a greater focus on the user's ability to generate a useful executable, but v2 was chosen (possibly deliberately) for this project instead. Whatever your opinion of v3, their choice of v2 speaks for itself.

Comment Re:Keurig's only reason is profit. (Score 1) 270

The solution is the same as for the razor-blade model - stick with products which accept generic consumables, e.g. coffee grinders that take beans, or double-edged razors (all the blades have the same shape and are interchangeable). The difference in cost is usually about an order of magnitude - e.g. DE razor blades are ~30c each.

Comment Re:Have Both (Score 1) 567

I've rotated my screen 360 degrees :-)

Does it improve the picture now that you have twisted cables?

Make sure you rotate by -360 degrees in the Southern Hemisphere or the electrons will get tangled.

Do that and they'll disappear into a singularity (mathematical, not physical). What you really need is to use quaternions, like -ijk.

Comment Re:This is by design (Score 1) 415

This feels like a troll, but I'll respond anyway.

There is no standard and free audio API on Linux.

Wrong on both counts. Both ALSA and PulseAudio are available on pretty much every Linux distro. PulseAudio is generally regarded as the standard these days, but you can target ALSA if you really care about supporting the minority of Arch and Gentoo users who run without PulseAudio.

Both of these are free (both libre and gratis), licensed under the GPL family of licenses - the FSF's gold standard.

As for your claim about the LGPL, I am not aware of any evidence that supports your interpretation. In fact, the existence of Linux ports of numerous AAA games indicates that many large companies do not consider the risk significant. Furthermore, courts are generally quite conservative and prefer to avoid disrupting existing arrangements where possible. Given that the LGPL was explicitly designed to enable the use of libraries by non-GPL'd programs, and the number of companies relying on that, the chances of the LGPL being reinterpreted as a regular GPL are slim to none, regardless of any ambiguities in the actual wording.
