
Comment Re:Reactions of other parties (Score 1) 241

The funny thing is of course how the other parties reacted. When it became clear that the Pirate Party would likely get into the parliament (they had been predicted to get 6.5% at most), they were already scandalized that anybody could vote for such loonies.

The interesting thing about "those loonies" is the typical set of objections: e.g., the Pirate Party is said to be a "one-topic party".

My favourite answer to such objections is simply to take a step back. The Pirate Party has often said they're very knowledgeable in certain matters, but certainly not in others - and that they'll leave those matters to others until they have enough of a clue themselves. To me, this is a lot more trustworthy and works much better than a party claiming to be a master of all arts.

On the other hand, parties like the liberals (FDP) do have a very long party programme covering lots of interesting topics, but over the last 10 years or so they basically turned themselves into a "one-topic party" by continuously repeating "reduce taxes, this will solve all problems". So in the end, a few of those objections "against" the Pirate Party are indeed things to watch out for with the well-known parties.

Comment Re:Let me see... (Score 1) 822

Wind cannot contribute a majority of electricity generation out of load levelling concerns.

Solar is prohibitively expensive and only does well in Germany due to strong economic incentives that
would be very costly to scale. It also doesn't work during the night, and large scale energy storage is
prohibitively expensive.

Been there, done that :-)

Been there, done that: back in 1995, Germany's nuclear power plant operators ran ads in large newspapers stating that it was technically impossible and implausible to generate more than 5% of electric energy from renewables. In 2000, the German government decided to promote renewables; this resulted in dramatic technical improvements for e.g. solar and wind energy, prices for such plants dropped and are now pretty close to those of other energy sources, and a new renewables industry with thousands of jobs arose.
In 2010, roughly 15% of Germany's power came from renewable energy. In January 2011, Spain even got close to 50% from renewables (they rely heavily on wind energy). However, nuclear power plants can't be powered up and down as fast and as often as would be needed to complement renewables, so in Spain a lot of wind turbines are actually powered down just because the nuclear power plants can't handle dynamic load that well.

There have been quite a few studies by independent parties, and basically all of them now state that it's possible for Germany to reduce nuclear power to zero by further promoting renewables and suitable storage technologies. Doing so within the next 10 years merely requires around 5% higher electricity prices - and it would bring in revenues from inventing those storage technologies in the first place.

The issue of "non-storable" power is often quoted, but there are actually some ideas and projects in place. At one site in Germany, wind energy is used to literally push air into underground caverns or pump water into an artificial lake. Once power is needed, the air or water is used to drive generators. Of course, the overall efficiency drops, and the storage capacity is usually limited to a few hours or up to a day or so.

So people are now starting to think about using renewable power for electrolysis to split water into hydrogen and oxygen, and then combining the resulting hydrogen with carbon dioxide into methane, which may be fed into the existing gas grid and used as "renewable" gas. Gas can be stored quite well, and the full storage capacity of the German gas grid equals roughly 3-4 months of electric power supply.
Of course, the round-trip efficiency of re-creating electricity this way drops to roughly 30%, but any energy used to drive this process is itself renewable energy, so this won't hurt the environment - and this energy is often "too much" for the power grid anyway. So instead of powering down wind turbines, you may also spend the "extra" wind energy on creating "renewable gas", which may be stored and later used to supply a gas power plant that re-creates electricity on demand.
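To put rough numbers on that round trip: the ~30% figure quoted above is the product of the per-stage efficiencies. A minimal sketch - the three stage values below are illustrative assumptions picked to reproduce the ~30% total, not measured data:

```python
# Rough power-to-gas-to-power arithmetic. The per-stage efficiencies
# are illustrative assumptions chosen to match the ~30% round-trip
# figure quoted in the post, not measured values.

ELECTROLYSIS_EFF = 0.70   # electricity -> hydrogen (assumed)
METHANATION_EFF = 0.80    # hydrogen + CO2 -> methane (assumed)
GAS_PLANT_EFF = 0.55      # methane -> electricity (assumed)

def recoverable_energy(surplus_kwh: float) -> float:
    """Electricity recoverable from surplus wind power routed through
    electrolysis, methanation and a gas-fired power plant."""
    return surplus_kwh * ELECTROLYSIS_EFF * METHANATION_EFF * GAS_PLANT_EFF

if __name__ == "__main__":
    surplus = 1000.0  # kWh of wind power that would otherwise be curtailed
    out = recoverable_energy(surplus)
    print(f"{surplus:.0f} kWh surplus -> {out:.0f} kWh back")  # ~308 kWh, ~31%
```

The point of the sketch: even at ~30%, the input is energy that would otherwise have been thrown away by curtailing the wind turbines.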

Comment Re:so just how many (Score 1) 822

In south-western Germany, home to roughly half a dozen nuclear power plants, there were 35 earthquakes during the last 200 years with magnitudes of 7 or higher on the MSK scale (which roughly equals a magnitude of 6 or higher on the Richter scale). However, earthquakes are "rare" enough, and usually limited to a smaller area, that people tend to forget or underestimate them - so the earthquake resistance standards for nuclear power plants in Germany are actually much lower than in Japan, and probably too weak.

For example, the nuclear power plant at Mülheim-Kärlich close to Luxembourg had been planned for a site in an earthquake-prone area. When this became publicly known, they decided to install the power plant only 70 meters away from the original site in order to reduce the risk. After a few weeks of operation, courts ruled that the more-or-less ignored issue of the earthquake-prone area invalidated the existing operating permits and that the power plant had to be taken offline and removed. After three more years of legal battles up to the highest courts, the power company finally started deconstructing the plant.

The main issue at Fukushima wasn't exactly the earthquake or the tsunami, but the power outage within the nuclear power plant, which completely disabled the cooling system. The earthquake also made any cooling attempts much harder, as the site had been quite devastated. To explain: a non-powered nuclear power plant still needs to be cooled down, and when any kind of major natural disaster (earthquake, flood, storm, ...) interrupts the external power supply, the site in question is in trouble. Usually, nuclear power plants rely either on backup diesel generators on site that can take over for 2-4 days, or on getting power from another block on the same site. But in reality, those concepts are flawed. If the "uplink" to the power grid is broken, the power plant produces "too much" power, so just about every block on site needs to be powered down - but still needs cooling. And if there is a major power outage in the grid without some way to refill the backup generators in time, 2 days' worth of backup generator fuel is simply not enough.

For example, during the last few months a German nuclear power plant trouble report became publicly known: one time last year the backup generators failed, the redundant supply from the next block was unavailable (maintenance), and so at least one of the plant's blocks had to rely on commercial power from the grid. As there was no outage involved, the incident was reported back then as "minor" and went publicly unnoticed. So such "issues" do arise, but don't become known until someone investigates.

And keep in mind: even though power outages are rare and short (around 15 minutes per year in Germany), major electricity blackouts can still happen for a lot of reasons. For example, back in November 2005, heavy snow on overhead power lines brought down 82 power poles in north-western Germany, leading to a full blackout for villages and cities in the "Münsterland" area. Power companies, fire brigades and other emergency technical assistance units installed mobile power generators and temporarily replaced the power lines with on-ground cabling, but it took up to five days to supply every city with electric power again.
schneechaos-muensterland.de has some nice pictures and explanations (in German) of the situation back then.

According to some statistics by Germany's federal power agency (which may also be found on the site above), there have been around a dozen major power outages with up to 172 broken power poles in a single area during the last 30 years, so such issues aren't exactly rare. It doesn't happen to everyone,
but it still happens :-)

Yet another example: the river Oder between Poland and Germany saw at least two major floods during the last 15 years. During such a flood, the actual power usage of the area drops close to zero and the power grid is no longer that reliable, so if you're operating a nuclear power plant in that area, you may actually be forced to shut down all of your nuclear power blocks immediately. However, they still need to be cooled: you can't rely on the power grid, the other blocks on site generate too much electricity even in low-power mode, so your only hope of avoiding a nuclear accident is your backup generators, which only carry fuel for 2-4 days. So if your fuel trucks can't reach the site to refill the generators' tanks within that time frame, you're in severe trouble.

Comment just learn from britain's law ... (Score 1) 304

See here for some recent case where a 19 year old was sent to jail for 16 weeks for not disclosing his password to the police.

So, the US just has to copy some lines from the UK's "Regulation of Investigatory Powers Act 2000" and the police will be fine.
First, you'll be temporarily detained for whatever reason, then you'll be arrested for not disclosing your password.

Comment Re:Yahoo! is relying on old, incomplete data. (Score 1) 290

Geoff Huston wrote an article comparing 6to4 to IPv4 in terms of failure rate - and found out that about 0.2% of IPv4 connections to his web server were also broken. Geoff's article also provides insight into why exactly a large percentage of 6to4 connections to his web server failed: packets being routed around the planet due to a lack of 6to4 gateways, and in three out of four cases, some broken firewall dropping 6to4 packets.

Issues A and B also don't necessarily mean that IPv4 is permanently broken, just "occasionally".
For example, every mobile carrier in my country deploys Large Scale NAT/Carrier Grade NAT for IPv4, but in order to max out those boxes, they run with very low session timeout settings. Their NAT routers silently drop my session when a TCP connection is idle for longer than a few seconds. While web browsing "usually" works, things like IMAP sessions very often break and reconnect, and for interactive use I'm forced to run ssh sessions with "ServerAliveInterval 7". One of those carriers even temporarily blocks access to Apple's iTunes store - maybe because the iTunes store is known to eat up to 300 parallel NAT sessions for a single user (compared to roughly 20-30 for "usual" web surfing). When accessing a very slow web server, the NAT session timeout also kicks in, resulting in my browser "endlessly" loading the same page.

Right now, the same carriers don't yet offer IPv6, so technically they're forcing me into issue "A". Once they do offer IPv6, my mobile internet access is likely to become issue "B".

Comment Re:A German website tried this (Score 1) 290

The same experiment can actually work out very differently. At Google's IPv6 implementors conference in summer 2010, a Japanese ISP reported on the very same experiment that heise.de and the World IPv6 Day ran: adding AAAA records for a day.
They've been doing IPv6 for years now, including hosting via IPv6. When they added AAAA records for their very large Japanese portal site biglobe.ne.jp, they lost about 5% of page views immediately, and 5 minutes later their phones started ringing endlessly. A few hours later, they chose to cancel the experiment by removing the AAAA records from their DNS.
In my view, many Japanese ISPs have been using and offering IPv6 access for years now, but there haven't been any major services available via IPv6 in Japan, so the actual IPv6 traffic has been very low and most people weren't aware that their IPv6 setup was simply broken. Maybe even Yahoo's and Google's often-quoted "0.025% of users have IPv6 issues" is based on Japan being largely broken in terms of IPv6 service while the rest of the world may run IPv6 without any issues :-)

Well, Germany is quite a different case. Most large German access (DSL/broadband/dialup) ISPs don't yet support IPv6, and the de-facto standard DSL router of most ISPs (AVM's Fr!tz-box) didn't support any kind of IPv6 at all until quite recently. Even now, IPv6 is something hidden deep in the menus that actually needs to be manually turned on. German web hosting is dominated by a few large companies, where IPv6 support is currently left as a DIY option for dedicated servers and isn't available for any shared hosting plans. On the other hand, close to every ISP peers via IPv6, runs 6to4 gateways and happily runs IPv6 on their own networks - IPv6 just isn't yet used for any actual major public service. So in theory, IPv6 shouldn't be that hard to get working in Germany today ... but as of today, IPv6 in Germany is actually VERY poor.

To illustrate how bad IPv6 in Germany is, check the TLD stats at Hurricane Electric and compare the number of AAAA records vs. the number of A records.
For about every TLD (.com, .net, .org, ...), there's one AAAA record for roughly every 90 A records. For .de, only about one in a thousand A records has an AAAA record. That's roughly ten times worse!
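The "ten times worse" claim is just the ratio of the two figures quoted above (both are approximate numbers from the Hurricane Electric stats, as of writing):

```python
# Rough check of the AAAA-to-A ratio comparison above. Both inputs are
# the approximate figures quoted in the post, not live TLD stats.
a_records_per_aaaa_generic = 90    # .com/.net/.org: ~1 AAAA per 90 A records
a_records_per_aaaa_de = 1000       # .de: ~1 AAAA per 1000 A records

factor = a_records_per_aaaa_de / a_records_per_aaaa_generic
print(f".de is roughly {factor:.0f}x worse")  # ~11x, i.e. about ten-fold
```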

So heise.de didn't really risk much when they turned on IPv6, as even fewer users in Germany actually use IPv6 than in other countries. However, they've still done something very smart: once German access ISPs turn on IPv6 connectivity for their customers and customers notice heise.de being unreachable, heise.de users will already be aware that heise.de has been served via IPv6 for months without any problems - so any brokenness must be related to their own ISP (or their personal setup). They'll complain directly to their ISP and won't blame heise.de.

Comment Re:You would think. (Score 1) 348

In fact, there are quite a few people out there using anycast for TCP sessions. It's really a matter of what timescale you're looking at. Networking people see TCP as something used for long-lived connections - e.g. a BGP session running for days, weeks or even months. A flapping route in this setup will result in a broken session. But what does this really mean to you? If your CDN distributes downloads which are "done" within a few minutes, such a rarely flapping route will result in a few broken sessions once a day out of millions of downloads successfully served. Compared to issues like non-working DNS, overloaded servers and saturated lines, that's nothing, and anycast can actually enhance the overall CDN service.

A nice paper to read is this one from Matt Levine. He's been working for a CDN provider that has used TCP anycast for years now, and he sums up the most important issues with TCP anycast.

Basically, the most important one is that your anycasted servers really have to be spread far enough apart so that flapping routes at some peering point won't matter. As a rule of thumb, put one CDN loadbalancer on the US east coast, one on the US west coast, another in western Europe, one in Australia and one in Hong Kong. If you'd like to put multiple CDN loadbalancers on one continent, leave space between them, e.g. one box per country/state.

Comment ... and don't forget about the children! (Score 1) 291

VG Musikedition (the other "club", represented by GEMA) sums up this issue, and
VG Musikedition on photocopies in Kindergarten outlines their "new" offer. You may want to give e.g. Google Translate a try if you don't understand German.

Until recently, kindergartens weren't permitted to copy single song sheets at all; their only option was to buy the books containing those song sheets, but those books again didn't permit copies at all. However, copyright expires 75 years after the death of the writer, and such songs may be freely copied.

Now there's also the option for kindergartens to pay a fee of 56 Euros (plus 7% VAT) per year (€44.80 for kindergartens operated by churches or municipalities), which permits up to 500 copies of song sheets. For 112 Euros (plus 7% VAT), up to 1000 copies are permitted.
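For what it's worth, the per-copy cost at the quoted rates works out to roughly 12 cents either way - a quick sketch using the figures above (assuming the 7% VAT is simply added on top of the net fee):

```python
# Per-copy cost of the licence tiers quoted above.
# Assumes 7% VAT is added on top of the net fee.
VAT = 0.07

def gross(net_eur: float) -> float:
    """Gross yearly fee in euros, rounded to cents."""
    return round(net_eur * (1 + VAT), 2)

for net, copies in [(56.0, 500), (112.0, 1000)]:
    g = gross(net)
    print(f"{net:.2f} EUR net -> {g:.2f} EUR gross, "
          f"{100 * g / copies:.1f} cents per copy")
```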

The odd thing with this option is that kindergarten teachers then have to keep a tally of the copies made of songs whose writer has been dead for at least 75 years: bureaucracy at its best.

Comment Re:The Internet is Full (Score 1) 520

Okay, without sarcasm.

Back in the "good ole days of the internet", IP addresses were given out as permanent property, and there's basically no legal way for IANA or the current RIRs to recall those IP addresses.

Nowadays, IP addresses are given out as some kind of semi-permanent lease. For example, your RIR may offer you a larger allocation than the one you requested, but they may also require you to hand back your old allocation after a few months, giving you time to renumber out of your old IP space.

Even forcing those organizations to hand back their /8s would only buy us about two more years - that's simply ridiculous. IPv6 has been in the works since the nineties; experimental networks like the 6bone were shut down in 2006 after IPv6 was declared stable for "production use".

Even older operating systems like Linux 2.4 and Windows 2003 have IPv6 stacks officially regarded as "stable for production use" (Windows 2003 is lacking IKE for IPsec, but that's about it). Whoever isn't capable of deploying IPv6 within the next two years deserves to be doomed.

Comment Re:The Internet is Full (Score 1) 520

You forgot to mention a few other issues.

IANA reserved 224.0.0.0/4 for "multicast"-usage, that's the equivalent of 16 /8-sized networks.
Renumber multicast!

We also need to replace RFC 1918's wasteful use of 10.0.0.0/8. No organization ever needs 16 million IP addresses - even Google has only a fraction of that many physical servers.

While I'm thinking about it: RFC 3330 spends more than 16 million IP addresses on a single box.
Whoever is using this 127.0.0.0/8: please renumber to e.g. ::1/128 and return 127.0.0.0/8!
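The address-space sizes being mocked above are easy to verify with Python's standard library:

```python
# Sizing the (tongue-in-cheek) reclamation targets above with the
# stdlib ipaddress module.
import ipaddress

multicast = ipaddress.ip_network("224.0.0.0/4")
print(multicast.num_addresses // 2**24)   # 16 -> sixteen /8-sized networks

rfc1918 = ipaddress.ip_network("10.0.0.0/8")
print(rfc1918.num_addresses)              # 16777216 -> ~16 million addresses

loopback = ipaddress.ip_network("127.0.0.0/8")
print(loopback.num_addresses)             # another ~16 million, for one box
```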

Comment Re:1 word. (Score 1) 596


I want to see what I'm working on and not have to deal with... my hand and wrist covering up my work.

A problem that utterly destroyed the work of amateurs like da Vinci, Michelangelo, and Raphael, right?

During his last two decades, Ludwig van Beethoven lost his hearing. He was completely deaf when he composed his Ninth Symphony (famous for e.g. the "Ode to Joy"). That doesn't mean that a hearing impairment enhances composing skills.

Imagine the works of those artists if they weren't bound to cover the area they're working on.
They might've raised the bar of perceived perfection to even higher levels.

Comment Re:Lies, damn lies, and repeated lies! (Score 1) 135

Okay, let's recap the situation.

You said you were using your 1&1 webspace for hosting a Counterstrike mirror in Germany during CeBIT 2000.

CeBIT 2000 took place at the end of February, so let's think about the spring of 2000 in Germany.

  • Back then, it was quite hard to find anyone with decent admin knowledge on the job market, as the internet hype was at its peak. So I'd expect any company to let their techs do some real work instead of hanging around at a trade fair booth, sipping coffee and chatting with people passing by. I guess you were talking either to sales or to someone from the user helpdesk.
  • Back then, broadband was not really widespread in Germany - the very first DSL lines were installed in the summer of 1999 in a few selected cities. Cable internet was not available at all in Germany, and about everyone who wasn't working at a university and didn't have a 2 Mbit leased line at their company was connecting to the net via ISDN or modem. According to Wikipedia, only 2900 DSL lines were installed in Germany back in 1999.
  • ISDN was somewhat widespread, but access was billed per B-channel, so most users would only connect to their ISP at 64 kbit/s.
    Most users were using simple modems, connecting to their ISP at around 40-52 kbit/s downstream (depending on line quality). So for most users, it took around 2-3 minutes per MB of download.
  • HTTP 1.1 (which enables pipelining and partial downloads) was published in June of 1999. "Download sites" and "mirrors" were expected to offer (anonymous) FTP, as HTTP back then was too unreliable, and when your download aborted, you had to re-download the complete file.
  • Average workstations were running 128-256 MB of RAM; the average server back then had something between 256 and 512 MB of RAM. Larger boxes were lucky to run 768-1024 MB of RAM.
  • Back then, you could easily DoS most web servers just by opening a few hundred idle connections.
    Software back then wasn't really built to withstand higher-usage scenarios or serious DoS attacks.
  • Back then, people optimized their images by hand, cutting them down to the magic 216-colour "Netscape" palette for GIFs, and made images small enough that the web site would load within a few seconds.
  • 1&1 is known for hosting sites on non-clustered, "smaller" servers. While upgrading their UPS in 2001, they experienced a major power outage, but were able to get back online within a few hours, as those hundreds of boxes ran their fsck in parallel. To compare: a few weeks later in 2001, their competitor Strato's highly-available storage clusters went offline for about a week while having to run a full fsck.
  • "1&1 Puretec" back than has been offering shared web hosting plans for hosting personal and small office websites, which translates to "small files, short-living http-requests". Like today, 1&1 does rely on the Apache web server. You had ftp-level access to your website, but your hosting IP address has been shared among dozens of other customers, so you didn't have some way to run anonymous ftp on well-known ports.
  • Apache back then also had the quite nasty behaviour of loading more or less the whole file being served to a client into RAM.
  • Counterstrike has always been one large file of around 70-120 MB in size.

Now imagine what might've happened. Got it? No?

You were hosting a file in a country where the average user would need a few hours to retrieve it, on a service built for about the exact opposite of what you were doing. Once just a few people try to get your "mirror" file, they'll bring the web server down, swapping and crawling on its knees.

Your website is hosted on the same IP address as hundreds of other customers', so it can't simply be moved to a different server. About the only thing the admin can do in that situation to get those hundreds of users other than you back online is either to swap out the core web hosting component (Apache) for "something yet to be invented", change from a non-clustered hosting service to a large-scale cluster - or simply prevent users from downloading your file.

Comment Re:Problems (Score 1) 217

How much crime does a better passport stop, anyway?

It doesn't prevent any crimes and was never made to do so. A passport is just a commonly accepted item for identification.

In fact, many countries don't issue a passport unless you apply for one, and the main reason for having a passport is that some country other than your own would like to verify whether the person trying to cross their border is who they claim to be, while making sure that people on a blacklist of "known bad people" won't enter their country.

If you read the technical specs of the passport documents, you'll note that the biometric information (a JPEG of the photo, hashes of the fingerprints) is stored on an RFID chip in the actual passport, and not somewhere else.

Even just the thought of inter-connecting the millions of passport-checking locations and granting those passport-checking devices (which are under the control of hundreds of different countries with tens of thousands of IT operations teams) international access to distributed giant databases is ridiculous in the eyes of anyone who has ever tried to set up a meshed VPN network.

To be clear: there is no online comparison or verification of that biometric data against the data held by the passport-issuing country.
What actually may be verified is that a scanned image of the real photo matches the JPEG stored on the RFID chip, and that the JPEG is cryptographically signed with a known-good signature.

And that's the point: the biometric data on the RFID chip is used to guard against illegal passport duplication.

So in essence, they've made copying a passport much harder.
It no longer takes only a better colour printer, sealing the printouts in plastic and wrapping them between cardboard sheets - now it also takes someone who can break public-key cryptography. That's all.

Comment Re:Gee, thanks for the notice (Score 2, Insightful) 255

The 64-bit NTP timestamp spans 136 years with a resolution of 232 picoseconds; the 128-bit NTP timestamp spans 584 billion years with a resolution of 0.05 attoseconds - so from that point of view alone, NTP is good enough for your applications.
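Those span and resolution figures follow directly from the fixed-point timestamp formats (32 bits of seconds plus 32 bits of fraction, and 64+64 respectively):

```python
# Span and resolution of NTP's 64-bit (32.32) and 128-bit (64.64)
# fixed-point timestamp formats, reproducing the figures quoted above.
YEAR = 365.25 * 24 * 3600  # seconds in a Julian year

span_64 = 2**32 / YEAR             # ~136 years
res_64_ps = 2**-32 * 1e12          # ~232.8 picoseconds

span_128_gyr = 2**64 / YEAR / 1e9  # ~585 billion years
res_128_as = 2**-64 * 1e18         # ~0.054 attoseconds

print(f"64-bit: {span_64:.0f} years span, {res_64_ps:.1f} ps resolution")
print(f"128-bit: {span_128_gyr:.1f} billion years span, "
      f"{res_128_as:.3f} as resolution")
```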

What's still problematic is something NTP also tries to compensate for: network latency.
When you've received just two packets with exactly the same latency, you can't be sure that the third packet will arrive with the same latency, so you have a possible error rate of 33%. However, if you've seen a million packets with the same latency, your possible error rate is very close to zero - and that's why NTP can only sort out the network latency problem over time.
