
Comment Re:Just like Bulldozer? (Score 1) 345

Think of it this way: when you've worked on code that's 10 years old and you think "this would be so much better if we could throw it away and start from scratch", imagine that Intel thinks the same way about x86, only it's dealing with a 40-year-long chain of incremental improvements.

At least AMD64 did make some preparations for ditching some of the cruft: the 64-bit mode of the AMD64 architecture left out some of the features of the original x86 design. If we can get the 16-bit BIOS interfaces replaced with 64-bit interfaces, then it would make sense for the next generation of CPUs to start directly in 64-bit mode. After that, it won't be long before you can completely drop 16- and 32-bit support from the CPUs. Support for 32-bit user mode on a 64-bit kernel may need to live a little longer, though.

Comment Re:Just like Bulldozer? (Score 3, Interesting) 345

Yup, and the BS about them being first to 64-bit...maybe in the consumer sector, but Intel, IBM and DEC all had 64-bit chips before the Athlon was even designed let alone shipped.

That is true. However, AMD was the first to make a 64-bit architecture that was x86 compatible, and it was also the first 64-bit CPU in a price range acceptable to the average consumer. But most importantly, AMD designed an architecture so successful that Intel decided to make their own AMD-compatible CPUs. Today Intel probably earns most of its money on CPUs using AMD's 64-bit design.

But if AMD now wants to go and build an entirely new design that is nothing like x86, they may very well be repeating the exact same mistake Intel made when it let AMD64 take the lead.

By now it might be safe to ditch all 8-, 16-, and 32-bit backwards compatibility with the x86 family. But AMD64 compatibility is too important to ignore.

Comment Re:How about "no thanks" .... (Score 1) 218

gmail still has its original old plain HTML interface, although they hide the option to switch to it fairly well.

I know about it, and I did use it for a period when the default interface was unusable for me. But that interface is less functional than the original Gmail interface; it is comparable with webmail interfaces from before Gmail surfaced.

I have been wondering whether there is an open source webmail system I could host myself that is as functional as Gmail was at its best.

Comment Re:How about "no thanks" .... (Score 4, Interesting) 218

It's not good enough that something has reached a state of maturity that works well with users, and that they like.

That has happened to Gmail multiple times over the years. And each time Google decided that it was time to redesign the Gmail UI. After their last major UI change, I completely gave up on using Gmail to write emails. Now I only use it to read and search emails.

Comment Re:Velocity (Score 1) 133

Right now we can't even get to our nearest neighbor (Alpha Centauri) in a lifetime.

Being able to do that would require some major technological advances. But if mankind can successfully colonize a planet orbiting one of the closest stars, then I don't think advances nearly as large would be needed to continue throughout the rest of the galaxy. It would basically just be doing the same thing over again using known technology. It may take a long time, though.

If the trip takes one generation, colonization of a planet takes a couple of generations, and preparing for the next trip takes a couple of generations, then the time to move from star to star would seem like an eternity from the individual's point of view. But on a cosmic time scale it would still be fairly quick, and colonizing the entire galaxy would seem like an inevitable outcome once the first handful of star systems had been colonized.

But even if all of that happened, it would not imply that means of intergalactic transportation would be within easy reach. The trip between two stars within our galaxy is much shorter than the trip between two galaxies. Such a trip could take many, many generations and would require an energy source sufficient for the entire trip.

Comment Re:They didn't pay the rent? (Score 1) 133

At that point, the velocity needs to be computed through general relativity, which is fiendishly more complicated than just v/a.

Doing that division is no problem, and the result you will get is 354 days. Next, one has to interpret the result. Now look carefully at what I said about the interpretation of that division: I said you'd be moving at relativistic speed after less than one year.

The definition of relativistic speed is that you are moving so fast that Newtonian formulas are no longer a good approximation. The Newtonian formulas say you'd be moving at 103% of the speed of light after a year; that is not a good approximation. Hence the speed you will be moving at is, by definition, relativistic.

Let's put it differently and assume I was wrong; in other words, assume you would not reach relativistic speed. That implies Newton's laws would give a good approximation of your actual speed, and they say you'd have accelerated beyond the speed of light. Either way, you'd reach relativistic speeds within a year.
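The arithmetic is easy to sketch in a few lines of Python. The relativistic formula used here is the standard constant-acceleration result from special relativity; treating the acceleration as exactly 1 g and a year as a Julian year are my assumptions for the example:

```python
import math

C = 299_792_458.0         # speed of light, m/s
G = 9.80665               # 1 g acceleration, m/s^2
YEAR = 365.25 * 86_400    # seconds in a Julian year

# Newtonian: v = a*t, so the time to "reach" c at 1 g is simply c/a.
days_to_c = (C / G) / 86_400          # ~354 days

# Special relativity (constant acceleration a, coordinate time t):
#   v(t) = a*t / sqrt(1 + (a*t/c)^2)  -- always stays below c
def relativistic_v(a, t):
    return a * t / math.sqrt(1.0 + (a * t / C) ** 2)

v_newton = (G * YEAR) / C             # ~1.03 c -- unphysical
v_rel = relativistic_v(G, YEAR) / C   # ~0.72 c -- the actual speed
```

So after one year the Newtonian estimate has already broken down (above c), while the relativistic formula gives roughly 0.72 c, which is exactly what "relativistic speed" means here.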

Comment Re:Velocity (Score 1) 133

I wonder: if the Earth were in orbit around a star that was part of such a cluster, would we notice the effects of such an ejection?

On a related note, I have been wondering whether a civilization that manages to populate an entire galaxy could use such ejections to spread to other galaxies.

Comment Re:About time! (Score 1) 306

It's my understanding that fc80::/64 is assigned automatically by the stack

An address from fe80::/64 is assigned automatically, but by default you'll only get one per network interface. Moreover, any application using them must indicate which interface it wants to communicate on, as they are intended for communication between two hosts not separated by a router, and the same address could be duplicated across multiple interfaces. That makes them overly complicated to use if you really just want local communication, i.e. between two processes on the same computer.
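A small illustration using Python's standard ipaddress module (the address values here are made up for the example):

```python
import ipaddress

# Link-local addresses live in fe80::/64 and are only meaningful together
# with an interface (zone) id such as "%eth0"; the address alone does not
# say which interface the packet should go out on.
addr = ipaddress.ip_address("fe80::1")
print(addr.is_link_local)                             # True

# A unique-local address, by contrast, needs no interface qualifier:
print(ipaddress.ip_address("fd00::1").is_link_local)  # False
```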

Comment Re:About time! (Score 1) 306

You got the names and prefixes wrong. fc00::/6 and fd00::/6 are actually the same prefix, since they only differ in the 8th bit and /6 indicates only 6 bits are significant. Also, that /6 is not allocated to one specific purpose: half of it is allocated and the other half is reserved.

Site local used to be FEC0::/10, but it has been deprecated because it was never well defined what the boundaries of a site were. It was replaced with unique local addresses, which are only routed locally but should be globally unique. FC00::/7 has been allocated for this and was split into two halves with different allocation policies. In FD00::/8 you can create your own /48 simply by generating 40 random bits and appending them to the 8-bit prefix; the result could for example be fd1d:19b8:d39f::/48. This allows the scopes of such prefixes to overlap, since there will only be a conflict if the two randomly chosen 40-bit strings happen to be identical. Due to the "birthday paradox" you can expect to be part of about one million different overlapping scopes before you run into a conflict. Should such conflicts happen, some central management may be needed, which is what FC00::/8 is reserved for.
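The generation procedure (from RFC 4193) is simple enough to sketch in Python; the function name is mine, and the birthday estimate below is the usual sqrt(pi/2 * N) approximation for N = 2^40 possible prefixes:

```python
import math
import secrets

def random_ula_prefix() -> str:
    """Build a unique-local /48: the fixed 8-bit prefix fd00::/8
    followed by 40 random bits."""
    bits = secrets.randbits(40)
    prefix = (0xFD << 40) | bits          # full 48-bit prefix value
    groups = [(prefix >> s) & 0xFFFF for s in (32, 16, 0)]
    return ":".join(f"{g:x}" for g in groups) + "::/48"

print(random_ula_prefix())   # e.g. fd1d:19b8:d39f::/48

# Expected number of independently generated /48s observed before the
# first collision, per the birthday paradox:
expected = math.sqrt(math.pi / 2 * 2**40)
print(round(expected))       # ~1.3 million
```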

Comment Re:I was wondering this myself... (Score 1) 109

In the end, hydrogen needs to be bound back to other atoms to be a usable fuel for transportation, some promising uses for H2 could be to manufacture CH4 using CO2 from a cement factory

And of course, the cheaper way that has actually been done on a massive scale for decades, extract hydrogen from natural gas

Natural gas primarily consists of CH4. So combining those two ideas sounds like turning natural gas into hydrogen and then turning that hydrogen back into natural gas.

Comment Re:Lesson here folks (Score 1) 306

Getting a new protocol deployed means deploying hardware and software,

Depends on the protocol. For the most part NATs did not require software changes though they break some applications if not done properly.

NAT breaks some applications. You cannot implement a NAT in a way that is guaranteed not to break any applications. Does that mean NAT is never done properly?

Some applications may work through a NAT automatically, while others may require lots of work. In certain situations it is just plain impossible to get an application working through a NAT at all. Application developers are not supposed to spend their time working around NAT; that time should be spent building new features instead.

If an application works flawlessly without NAT, but fails in the presence of a NAT, that just demonstrates that NAT is a problem.

You can deploy a NAT and still have some applications work without changes. But lots of development time has been wasted over the last two decades on working around NAT. NAT could be deployed without involvement from ISPs, which is why it is so widespread today. But that is also a significant reason why IPv6 deployment is going so slowly. Had NAT never been invented, we could all have been running IPv6 today, and things would work much better than they do.

IPv6 is happening.

I ran IPv6 for a few months last year and kept an eye on how often I was able to establish a connection with it. Answer? Almost never. If you call that "happening" we have a different definition of the term.

Are you asking for servers with IPv6 support? Google, YouTube, Facebook, and Akamai are a few examples with IPv6 support. Were you never able to establish a connection with any of them?

I use IPv6 on a daily basis. Whenever I am on a network without IPv6 support, I realize how much more difficult my work becomes without IPv6 access. Luckily Teredo works from most networks (but only for connecting to sites that care enough about reliability to deploy their own Teredo relays).

Comment Re:Lesson here folks (Score 1) 306

The IPv6 head start is so minimal that Linksys shipping a new shimming protocol with its NAT routers would exceed IPv6 usage within six months.

Wrong. Shipping routers with support for a new protocol doesn't make it happen; if that were all it took, we'd all have been running IPv6 years ago. Getting a new protocol deployed means deploying hardware and software that can support it along the entire route from one end to the other. And it means network operators have to get addresses, set up peerings, and turn it on for their customers. There is no way Linksys could achieve all of that within six months.

IMHO that is still the way to go, because IPv6 just isn't happening.

IPv6 is happening. It is happening 13 years later than it should have, but at least it appears not to be falling any further behind. At the current rate we'll reach 50% dual stack by 2018 (and by 2030 we'll probably be at 50% dual stack again, as IPv4 is phased out). The question is how bad the network will get in the meantime. Users of any P2P service have already been experiencing problems due to NAT, and that will keep getting worse until those services move to IPv6.

Will it get so bad that end users realize something has gone horribly wrong, and start demanding somebody take action? Or will ISPs manage to deploy IPv6 with the majority of users blissfully unaware of what is happening?

Suggesting another solution because "IPv6 isn't happening" makes no sense. By the time you'd have a working standard for any alternative to IPv6, you'd be up against IPv6 deployed to half the Internet. And deploying it couldn't be significantly simpler than deploying IPv6, which means ISPs would wait a decade to see if anybody else deployed it first. And why would any ISP want to deploy a competitor to IPv6 that was less tested than IPv6 and did not have nearly the same market share? It took more than a decade to get them moving on deploying one replacement for IPv4; they are not suddenly going to support two.

Comment Re:Lesson here folks (Score 1) 306

However even then we could easily have reserved say 255:255:255:8 as the extensible value of the IP address.

It already is! Along with all the other IP addresses in the range from 240.0.0.0 through 255.255.255.254. That's 268,435,455 IPv4 addresses reserved for extensions. But nobody has been able to come up with a way to utilize those reserved addresses to solve the IPv4 shortage. And that's not the only range people have tried using to solve the problem: the 192.88.99.0/24 range is reserved as well, for a well-defined purpose that was intended to help get IPv6 deployed. It did not help; it may even have slowed IPv6 deployment down by 1-2 years, because it led to broken IPv6 connectivity for some users.
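The address counts are easy to double-check with Python's standard ipaddress module:

```python
import ipaddress

# The "Class E" block 240.0.0.0/4 has been reserved since the beginning.
reserved = ipaddress.ip_network("240.0.0.0/4")
print(reserved.num_addresses)      # 268435456 addresses in the block;
                                   # excluding 255.255.255.255 leaves
                                   # 268435455 reserved for extensions

# The smaller range reserved later to help the IPv6 transition:
print(ipaddress.ip_network("192.88.99.0/24").num_addresses)  # 256
```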

The list of header fields where values have been reserved to help with this upgrade is long.

  • The version field: 6 through 9 were all reserved for different candidates for the next protocol, but everybody has now settled on one of them.
  • The protocol field: the value 41 can be used to embed an IPv6 packet within an IPv4 packet, and several other values are reserved for IPv6-related protocols.
  • IP addresses: as mentioned, lots of addresses were reserved from the start with very little success. A much smaller range was reserved later with a bit more success, which unfortunately backfired.
  • Options: option type 145 is reserved for extending the addresses in a way that maintains full IPv4 compatibility (until IPv4 addresses are exhausted).

The only ideas gaining any traction were IPv6 and tunneling of IPv6 over IPv4. The lack of IPv6 adoption is not due to any technical issue with IPv6. And none of the other ideas had technical advantages over IPv6 that would have given them better traction. The lack of deployment is entirely caused by lack of incentive, which would be the same regardless of which technical solution was chosen.

By upgrading you are faced with some technical challenges, and there is little benefit to upgrading until a significant fraction of the Internet has upgraded. By postponing the upgrade you are hurting the entire Internet, but as long as you are hurting your competitors at least as much as you are hurting yourself, it still makes sense from a business perspective.

Rationing of IPv4 addresses should not have waited until 2011; it should have started much earlier. By 2004 it was already clear that lack of incentive to upgrade was the main blocker for IPv6 deployment. At that point rationing could have been introduced in such a way as to keep the installed base of IPv4-only hosts constant. The rule should have been that new networks could get the IPv4 addresses they needed for dual-stack deployment, and existing networks could get new IPv4 addresses only if they could document that they had upgraded an equivalent number of IPv4 hosts to dual stack. Had that been done, there would have been 40% dual-stack hosts by the time IPv4 addresses ran out.

But pointing out what could have been done smarter in the past is not very productive. I am very interested in hearing suggestions on what can be done today to accelerate IPv6 deployment. What is clear today is that IPv6 is the future; there is no other viable option. The IPv4 network is going to fall apart slowly as more and more NAT is deployed, and any protocol that is neither IPv4 nor IPv6 is not going to be a real option. Even if a technically superior protocol showed up, IPv6 would still have a 20-year head start.
