Comment Re:Velocity (Score 1) 133

I wonder: if the Earth were in orbit around a star that was part of such a cluster, would we notice the effects of such an ejection?

On a related note, I have been wondering whether a civilization that manages to populate an entire galaxy could use such ejections to spread to other galaxies.

Comment Re:About time! (Score 1) 306

It's my understanding that fc80::/64 is assigned automatically by the stack

An address from fe80::/64 is assigned automatically. But by default you'll only get one per network interface. Moreover, any application using them must indicate which interface it wants to communicate on, as they are intended for communication between two hosts not separated by a router, and addresses could be duplicated on multiple interfaces. That makes it overly complicated to use if you really just want local communication, i.e. between two processes on the same computer.
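A short sketch of the ambiguity, using Python's standard `ipaddress` module: a link-local address is valid on every link at once, which is why APIs force you to name the interface as well.

```python
import ipaddress

# fe80::/64 is link-local: the same address may legitimately exist on
# several interfaces at once, so the address alone does not identify a peer.
lla = ipaddress.IPv6Address("fe80::1")
print(lla.is_link_local)                         # True
print(lla in ipaddress.ip_network("fe80::/64"))  # True

# That ambiguity is why socket APIs require a zone index alongside the
# address (written textually as e.g. fe80::1%eth0) before connecting.
```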

Comment Re:About time! (Score 1) 306

You got the names and prefixes wrong. fc00::/6 and fd00::/6 are actually the same prefix, since they differ only in the 8th bit and a /6 indicates that only the first 6 bits are significant. Also, that /6 is not allocated to one specific purpose: half of it is allocated and the other half is reserved.
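The point about the mask discarding the differing bit can be checked directly with Python's `ipaddress` module:

```python
import ipaddress

# A /6 mask keeps only the first 6 bits; fc00 and fd00 differ in bit 8,
# which the mask discards, so both spellings denote the same network.
a = ipaddress.ip_network("fc00::/6")
b = ipaddress.ip_network("fd00::/6", strict=False)  # strict=True would reject the set host bit
print(a == b)  # True
```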

Site local used to be FEC0::/10, but it was deprecated because it was never well defined what the boundaries of a site are. It was replaced with unique local addresses, which are only routed locally but should be globally unique; FC00::/7 has been allocated for this. FC00::/7 was split into two halves with different allocation policies. In FD00::/8 you can create your own /48 by simply generating 40 random bits and appending them to the 8-bit prefix. The result could for example be fd1d:19b8:d39f::/48. This allows the scopes of such prefixes to overlap, since there will only be a conflict if the two randomly chosen 40-bit strings happen to be identical. Due to the "birthday paradox" you can expect to be part of about a million different overlapping scopes before you run into a conflict. Should such conflicts happen, some central management may be needed, which is what FC00::/8 is reserved for.
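The "generate 40 random bits and append them to the 8-bit prefix" recipe is simple enough to sketch (this follows the RFC 4193 idea, though the RFC also suggests deriving the bits from a timestamp and MAC address rather than pure randomness):

```python
import secrets
import ipaddress

def random_ula_prefix():
    # 8-bit prefix 0xfd followed by 40 random bits gives a /48
    # you can subnet locally, e.g. fd1d:19b8:d39f::/48.
    global_id = secrets.randbits(40)
    prefix_int = (0xfd << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

p = random_ula_prefix()
print(p)
# With 40 random bits, the birthday bound puts the expected number of
# prefixes you can overlap with before a collision at roughly a million.
```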

Comment Re:I was wondering this myself... (Score 1) 109

In the end, hydrogen needs to be bound back to other atoms to be a usable fuel for transportation, some promising uses for H2 could be to manufacture CH4 using CO2 from a cement factory

And of course, the cheaper way that has actually been done on a massive scale for decades, extract hydrogen from natural gas

Natural gas primarily consists of CH4. So combining those two ideas sounds like turning natural gas into hydrogen and then turning that hydrogen back into natural gas.
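The round trip is easy to see from the overall reaction equations: steam reforming (with the water-gas shift) is CH4 + 2 H2O → CO2 + 4 H2, and the Sabatier reaction CO2 + 4 H2 → CH4 + 2 H2O is its exact reverse. A small atom-balance check confirms both are balanced:

```python
from collections import Counter

# Atom counts per molecule
MOLECULES = {
    "CH4": Counter(C=1, H=4),
    "H2O": Counter(H=2, O=1),
    "CO2": Counter(C=1, O=2),
    "H2":  Counter(H=2),
}

def atoms(side):
    """Total atom counts for one side of a reaction: [(coefficient, molecule), ...]."""
    total = Counter()
    for coeff, mol in side:
        for elem, n in MOLECULES[mol].items():
            total[elem] += coeff * n
    return total

# Steam reforming + water-gas shift: CH4 + 2 H2O -> CO2 + 4 H2
assert atoms([(1, "CH4"), (2, "H2O")]) == atoms([(1, "CO2"), (4, "H2")])
# Sabatier: CO2 + 4 H2 -> CH4 + 2 H2O  (the exact reverse)
assert atoms([(1, "CO2"), (4, "H2")]) == atoms([(1, "CH4"), (2, "H2O")])
```

Each leg of the trip also loses energy as heat, so the cycle as a whole can only be a net energy sink.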

Comment Re:Lesson here folks (Score 1) 306

Getting a new protocol deployed means deploying hardware and software,

Depends on the protocol. For the most part NATs did not require software changes though they break some applications if not done properly.

NAT breaks some applications. You cannot implement a NAT in a way that is guaranteed to not break any applications. Does that mean NAT is never done properly?

Some applications may work through a NAT automatically, while others require lots of work. In certain situations it is just plain impossible to get an application working through a NAT at all. Application developers are not supposed to spend their time working around NAT; that time should be spent on building new features instead.

If an application works flawlessly without NAT, but fails in the presence of a NAT, that just demonstrates that NAT is a problem.

You can deploy a NAT and still have some applications work without changes. But lots of development time has been wasted over the last two decades on working around NAT. NAT could be deployed without needing involvement from ISPs, which is why it is so widespread today. But that is also a significant reason why IPv6 deployment is going so slowly. Had NAT never been invented, we could all have been running IPv6 today, and things would work much better than they do.
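Why NAT breaks some applications but not others can be sketched with a toy translation table (a simplified model, not any real NAT implementation): outbound traffic creates a mapping, but unsolicited inbound traffic has no mapping to match and gets dropped, which is exactly what kills servers and P2P peers behind NAT.

```python
class ToyNat:
    """A toy port-restricted NAT. Outbound packets create a mapping;
    inbound packets are forwarded only if a matching mapping exists."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}  # (internal addr, remote addr) -> public port

    def outbound(self, src, remote):
        key = (src, remote)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.table[key])  # rewritten source

    def inbound(self, remote, public_port):
        for (src, mapped_remote), port in self.table.items():
            if port == public_port and mapped_remote == remote:
                return src  # forward to the internal host
        return None         # no mapping: packet silently dropped

nat = ToyNat("198.51.100.7")
# Outbound connection works: a mapping is created on the way out.
pub = nat.outbound(("10.0.0.5", 5000), ("93.184.216.34", 80))
print(nat.inbound(("93.184.216.34", 80), pub[1]))  # ('10.0.0.5', 5000)
# An unsolicited peer is dropped: incoming P2P connections fail.
print(nat.inbound(("203.0.113.9", 4321), 40123))   # None
```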

IPv6 is happening.

I ran IPv6 for a few months last year and kept an eye on how often I was able to establish a connection with it. Answer? Almost never. If you call that "happening" we have a different definition of the term.

Are you asking for servers with IPv6 support? Google, YouTube, Facebook, and Akamai are a few examples with IPv6 support. Were you never able to establish a connection with any of them?

I use IPv6 on a daily basis. Whenever I am on a network without IPv6 support, I realize how much more difficult my work becomes without IPv6 access. Luckily Teredo works from most networks (but only for connecting with sites that care enough about reliability to deploy their own Teredo relays).

Comment Re:Lesson here folks (Score 1) 306

The IPv6 head start is so minimal that Linksys shipping a new shimming protocol with its NAT routers would exceed IPv6 usage within six months.

Wrong. Shipping routers with support for a new protocol doesn't make it happen; if that was all it took, we'd all have been running IPv6 years ago. Getting a new protocol deployed means deploying hardware and software that can support it on the entire route from one end to the other. And it means network operators have to get addresses, set up peerings, and turn it on for their customers. There is no way Linksys could achieve all of that within six months.

IMHO that is still the way to go, because IPv6 just isn't happening.

IPv6 is happening. It is happening 13 years later than it should have, but at least it appears not to be falling any further behind. At the current rate we'll reach 50% dual stack by 2018 (and by 2030 we'll probably be at 50% dual stack again, as IPv4 is phased out). The question is how bad the network will get in the meantime. Users of any P2P service have already been experiencing problems due to NAT, and that will keep getting worse until those services move to IPv6.

Will it get so bad, that end users realize something has gone horribly wrong, and start demanding somebody take action? Or will ISPs manage to deploy IPv6 with the majority of users being blissfully unaware what is happening?

Suggesting another solution because "IPv6 isn't happening" makes no sense. By the time you had a working standard for any alternative to IPv6, you'd be up against IPv6 deployed to half the Internet. And deploying it couldn't be significantly simpler than deploying IPv6, which means ISPs would wait a decade to see if anybody else deployed it first. And why would any ISP want to deploy a competitor to IPv6 that was less tested and had nowhere near the same market share? It took more than a decade to get them moving on deploying one replacement for IPv4; they are not suddenly going to support two replacements.

Comment Re:Lesson here folks (Score 1) 306

However even then we could easily have reserved say 255:255:255:8 as the extensible value of the IP address.

It already is! Along with all the other IP addresses in the range from 240.0.0.0 through 255.255.255.254: that's 268,435,455 IPv4 addresses reserved for extensions. But nobody has been able to come up with a way to utilize those reserved addresses to solve the IPv4 shortage. And that's not the only range people have tried using to solve the problem. The 192.88.99.0/24 range is reserved as well, for a well-defined purpose that was intended to help get IPv6 deployed. It did not help; it may even have slowed IPv6 deployment down by 1-2 years, because it led to broken IPv6 connectivity for some users.
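For context, 192.88.99.0/24 was the 6to4 relay anycast range (RFC 3068), supporting the 6to4 scheme in which a site derives an IPv6 /48 by embedding its public IPv4 address after the 2002::/16 prefix. The derivation is mechanical:

```python
import ipaddress

def sixtofour_prefix(ipv4):
    # 6to4 (RFC 3056): embed the 32-bit IPv4 address right after the
    # 2002::/16 prefix, yielding a /48 without any IPv6 allocation.
    v4 = int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

# 192.88.99.1 itself, the well-known relay anycast address:
print(sixtofour_prefix("192.88.99.1"))  # 2002:c058:6301::/48
```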

The list of header fields where values have been reserved in order to help with this upgrade is long.

  • The version field: 6 through 9 are all reserved for different candidates for the next protocol, but everybody has now settled on one of them.
  • The protocol field: The value 41 can be used to embed an IPv6 packet within an IPv4 packet. And several other values are reserved for IPv6 related protocols.
  • IP addresses: As mentioned, lots of addresses were reserved from the start with very little success. A much smaller range was reserved later with a bit more success that unfortunately backfired.
  • Options: Option type 145 is reserved for extending the addresses in a way that maintained full IPv4 compatibility (until IPv4 addresses are exhausted).

The only one gaining any traction was IPv6 and tunneling of IPv6 over IPv4. The lack of IPv6 adoption is not due to any technical issue with IPv6, and none of the other ideas had technical advantages over IPv6 that would have given them better traction. The lack of deployment is entirely caused by lack of incentive, which would be the same regardless of which technical solution was chosen.

By upgrading you are faced with some technical challenges, and there is little benefit to upgrading until a significant fraction of the Internet has upgraded. By postponing the upgrade you are hurting the entire Internet, but as long as you are hurting your competitors at least as much as you are hurting yourself, it still makes sense from a business perspective.

Rationing of IPv4 addresses should not have waited until 2011; it should have started much earlier. By 2004 it was already clear that lack of incentive to upgrade was the main blocker for IPv6 deployment. At that point rationing could have been introduced in such a way as to keep the installed base of IPv4-only hosts constant. The rule should have been that new networks could get the IPv4 addresses they needed for dual-stack deployment, and existing networks could get new IPv4 addresses only if they could document that they had upgraded an equivalent number of IPv4 hosts to dual stack. Had that been done, there would have been 40% dual-stack hosts by the time IPv4 addresses ran out.

But pointing out what could have been done smarter in the past is not very productive. I am very interested in hearing any suggestions on what can be done today in order to accelerate IPv6 deployments. What is clear today is that IPv6 is the future. There is no other viable option. The IPv4 network is going to fall apart slowly as more and more NAT is being deployed. And any other protocol, which is not IPv4 or IPv6, is not going to be a real option. Even if a technically superior protocol showed up, IPv6 would still have a 20 year head-start.

Comment Re:If you're just beaming it down to earth anyways (Score 1) 230

If there exists an orbital path which can see the sun 24 hours/day, and that same orbital path lets you see the receiving stations, say, 18 hours/day

You can swap those and look for an orbit that can see the receiving station 24 hours/day and the sun more than 18 hours/day. Geostationary orbit would be an option covering both. To avoid a period at night during which you receive no power, you could have two satellites at different points in geostationary orbit pointed at the same receiving station. Then there would be two periods during the night where only one receives power, but you could position the satellites so that those periods happen when power usage is lowest. The tilt of the Earth's axis might even ensure that losing power at night is only an issue at certain times of the year.

I am just wondering when (if ever) geostationary orbit will get too crowded.

Comment Re:Strange conclusion (Score 1) 333

I'm going to get ahead of the game and work on that "heat death of the universe" limit.

Yeah, we need to find a workaround for that limitation sooner or later; if we don't, it will be the end of mankind. We do have a few other challenges to work on before that, though. It would suck to find a solution to the death of the universe only to have life on Earth wiped out by a meteor before that solution could be implemented.

Comment Re:Have I ever thought what? (Score 1) 128

I thought releasing it as a kernel module would make it cross user-space and kernel-space less often?

You are right. A user-mode player will typically cross that boundary several times per second during playback. Doing it as a module would avoid crossing that boundary once playback had started. (Though other code running on the computer still might.)
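A rough back-of-the-envelope estimate of "several times per second", assuming CD-quality audio and a typical buffer size (both numbers are assumptions for illustration): one write() syscall per buffer handed to the sound device.

```python
# Rough estimate of user/kernel boundary crossings for a user-space player:
# one write() per audio buffer handed to the sound device.
sample_rate = 44100   # Hz, CD quality (assumed)
bytes_per_frame = 4   # 16-bit stereo (assumed)
buffer_bytes = 4096   # a typical period size (assumed)

bytes_per_second = sample_rate * bytes_per_frame   # 176400 B/s
writes_per_second = bytes_per_second / buffer_bytes
print(round(writes_per_second, 1))  # ~43 crossings per second
```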

Comment Re:Lol wut (Score 1) 128

One wrong jump is all it takes.

That is true even if you keep the array in user space. Kernel code has privileges to do anything, even jumping directly to user space and executing code from there. What you need to review is not the byte array but the code processing it: that code runs with kernel privileges, so it needs to be bug-free.

Comment Re:Cut off your nose to spite your face (Score 1) 86

You may recall that elliptic curve encryption was thought to be a highly promising encryption technology at the time.

Yes, compared to other asymmetrical primitives. I have seen no research suggesting that it would be a good idea to replace symmetrical cryptography with elliptic curves. Quite the contrary, since symmetrical cryptography is more resistant to cryptanalysis using quantum computers than asymmetrical cryptography is.

Comment Re:Cut off your nose to spite your face (Score 1) 86

There is no evidence that a backdoor actually exists, only that one is possible with the technology.

  • Using asymmetrical primitives to build a PRNG is suspicious, since a PRNG can be built from symmetrical primitives, whereas placing a backdoor which can be used by yourself but not by others requires asymmetrical primitives.
  • Long before Dual_EC_DRBG was published it was well established among cryptographers that you document where your constants come from, and that any constant which is not justified is by default assumed to be a backdoor.
  • It is fully documented how the constants in Dual_EC_DRBG could have been generated with a backdoor, and to this date no other explanation for the exact value of the constants has been given.
  • Leaked documents suggest that the NSA has been actively working on planting backdoors in cryptographic standards.

To me that is more than sufficient evidence to assume Dual_EC_DRBG has a backdoor. A deliberately placed backdoor is by far the most likely explanation for the structure of Dual_EC_DRBG. By now there is really only one additional piece of information which could change that picture: the actual calculations that were used to produce the constants.
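To make the first bullet concrete, here is a minimal sketch of a PRNG built purely from a symmetric primitive (a hash chain; this is NOT the NIST Hash_DRBG construction, just an illustration). Because the whole state evolves through a one-way hash with no hidden structure, there is no place for a designer-only trapdoor of the kind the Dual_EC_DRBG curve points allow:

```python
import hashlib

class HashDrbg:
    """Toy deterministic random bit generator from a hash function.
    Illustrative only; use os.urandom or the secrets module in practice."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(b"seed" + seed).digest()

    def generate(self, n: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < n:
            block = hashlib.sha256(b"out" + self.state + counter.to_bytes(8, "big"))
            out += block.digest()
            counter += 1
        # Ratchet the state forward so earlier outputs cannot be recovered.
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out[:n]

a = HashDrbg(b"example seed").generate(32)
b = HashDrbg(b"example seed").generate(32)
print(a == b)  # True: deterministic for a given seed
```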
