Comment Re:IPv6 would make the problem worse (Score 1) 248

While in practice most admins configure /64s as subnets, there's nothing preventing netblocks that are smaller than /64.

But those are never advertised through BGP between ASes. For backbone connections between ASes, 48 bits is sufficient. Within your own AS, you can use a hierarchical structure, which can be routed more efficiently.

To summarize: for the foreseeable future, I'd guess 200k entries matching on the first 64 bits will be plenty for backbone routers, and 10k entries matching on all 128 bits will be plenty for edge routers.
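To illustrate what matching on the first 64 bits looks like, here is a toy longest-prefix match in Python (the route table and peer names are made up, and real backbone routers do this in TCAM, not in software):

    import ipaddress

    # Made-up route table: backbone entries are never longer than /48.
    routes = {
        ipaddress.ip_network("2001:db8::/32"): "peer-1",
        ipaddress.ip_network("2001:db8:aa00::/40"): "peer-2",
    }

    def next_hop(address):
        """Longest-prefix match; with every prefix <= /48, only leading bits matter."""
        address = ipaddress.ip_address(address)
        best = None
        for net, hop in routes.items():
            if address in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, hop)
        return best[1] if best else "default"

    print(next_hop("2001:db8:aaff::1"))  # peer-2 wins, the longest match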

Comment Re:IPv6 would make the problem worse (Score 1) 248

There's no good reason to think there'll be a significant improvement in HD with IPv6, or significantly fewer prefixes advertised.

You'd need more than 10^12 internet users to push the IPv6 HD ratio up to the same ridiculous level that we have on IPv4 (for those bits that matter to backbone routing). Dagger2 is right: the HD ratio does have a measurable impact on the number of advertised prefixes. The average number of advertised prefixes per AS is five times higher on IPv4 than on IPv6.
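The HD ratio (RFC 3194) is just log(allocated) / log(total allocatable). A quick sanity check of the claim above in Python, assuming roughly 3.7 billion IPv4 addresses in use and that only the first 48 bits matter for backbone routing:

    import math

    def hd_ratio(allocated, bits):
        """HD ratio per RFC 3194: log of allocated over log of allocatable."""
        return math.log(allocated) / (bits * math.log(2))

    print(round(hd_ratio(3.7e9, 32), 2))  # IPv4 today: ~0.99
    print(round(hd_ratio(1e12, 48), 2))   # 10^12 users on /48s: ~0.83
    print(round(hd_ratio(5e13, 48), 2))   # it takes ~5*10^13 to reach 0.95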

Comment Re:Not really to do with "BGP" or "IPv4" as such.. (Score 1) 248

This isn't really to do with BGP or IPv4 as such, it's an inherent problem in the way "The Internet" regards addresses.

It is a problem made five times worse by the extremely high HD ratios needed to keep IPv4 alive. If we switch to IPv6, we can go on much longer before this becomes a problem again.

It may become a problem again after IPv4 has been abandoned, as the network keeps growing. Something that scales better than BGP would be nice. I predict a more scalable solution is going to need more addresses - no problem for IPv6, but that would make such a solution unusable with IPv4.

Comment Re:Lack of incentives...? (Score 1) 248

Imagine if your business suddenly lost internet connectivity because your IP blocks have been reclaimed.

Who is going to configure their backbone routers to reject announcements from parties whose addresses were reclaimed for such a reason? I don't see an incentive to reject those announcements, hence the reclaiming won't have any immediate effect.

Comment Re:Yes, Please (Score 1) 248

This has little to do with IPv6. In fact there are only 256k entries available by default if you switch.

So what if you can only have half as many entries on IPv6? Because IPv6 was designed for an HD ratio in the 80-90% range rather than the 95%+ needed with IPv4, there is much less address space fragmentation. The result is that on average each AS announces only one fifth as many IPv6 routes as IPv4 routes. So those 256k IPv6 routes are going to last longer, even if the entire world switched to IPv6 next month.
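Back-of-the-envelope in Python, using the 512k IPv4 table as the baseline and treating the 5:1 prefixes-per-AS ratio as exact (both numbers are rough assumptions):

    # Assumed averages: 5 advertised prefixes per AS on IPv4 vs 1 on IPv6.
    ipv4_slots, ipv4_prefixes_per_as = 512_000, 5
    ipv6_slots, ipv6_prefixes_per_as = 256_000, 1

    print(ipv4_slots // ipv4_prefixes_per_as)  # ~102k ASes fit in the IPv4 table
    print(ipv6_slots // ipv6_prefixes_per_as)  # ~256k ASes fit in the IPv6 table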

Comment Re:Betteridge (Score 1) 248

Every bit counts.

Not on backbone routes. Backbone routes only need 48 bits. And if you use the recommended link prefix length, you don't need to match more than 64 bits anywhere. 64-bit networks ought to be enough for anybody.

Even if you decide to make your link prefixes longer than 64 bits, you don't need a CAM with thousands of entries for that. Most routers don't have thousands of ports.

Comment Re:Time Shifting? (Score 1) 317

Which is odd, considering iTunes, Windows Media Player and even Xbox 360 and PS3 will rip CDs.

It does make a difference whether the primary purpose of the device is to rip CDs. But I believe the real reason they didn't go after those devices is that there may not be enough money to go after.

The devices you mention probably cost less than $2,500 per unit. A car can cost significantly more than that, so it would be a lot easier to squeeze $2,500 per unit out of a car manufacturer.

That strategy could backfire if the question about primary purpose ends up being applied to the entire car and not just the CD player. I don't think they'll manage to convince the court that the primary purpose of the car is to rip CDs.

Comment Re:Time For Decentralized DNS (Score 1) 495

Using blockchain technology for decentralized consensus.

If you are thinking about using Bitcoin-style proof of work, then I'd say that is a poor choice. It is an extreme waste of processing power, and it is not even needed for DNS. The purpose of the proof of work is to prevent double spending. But if you tried to perform a double-spending-like action on a DNS system built on similar principles, the only damage you'd cause would be to your own domain.

But by all means, let's get data and hosting decoupled. DNSSEC provides the ability to validate records, wherever you got them from. But it still has the centralized authority. I'd rather see that once a zone hands over authority over a subdomain to a different public key, a signature with that key has to be used to hand authority back or transfer it to a new key.
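A minimal sketch of that handover rule in Python, using the 'cryptography' package; the Delegation class and its methods are my own invention for illustration, not an existing DNS API:

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    class Delegation:
        """Authority over a subdomain is whoever holds the current key."""
        def __init__(self, owner):
            self.owner = owner  # an Ed25519PublicKey

        def transfer(self, new_key_bytes, signature):
            # Only a signature by the CURRENT key can hand authority on;
            # verify() raises InvalidSignature otherwise.
            self.owner.verify(signature, new_key_bytes)
            self.owner = ed25519.Ed25519PublicKey.from_public_bytes(new_key_bytes)

    # Usage: the old key signs the new key to transfer authority.
    old = ed25519.Ed25519PrivateKey.generate()
    new = ed25519.Ed25519PrivateKey.generate()
    zone = Delegation(old.public_key())
    new_bytes = new.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    zone.transfer(new_bytes, old.sign(new_bytes))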

Comment Re:Can bitcoins be blacklisted? (Score 1) 88

Is it possible or even practical to identify a bitcoin as having been a "direct descendant" of a coin involved in a given transaction and/or as a coin that has been "co-mingled" with such a coin?

Definitely. That is easy to do. However, since each transaction can have multiple inputs and outputs, the set of descendants is likely to grow over time, until eventually most bitcoins are descendants of that transaction.
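A toy Python illustration of how the taint spreads (the transaction graph is made up, and real tracking would work per output, not per transaction):

    # spends: txid -> the outputs (prev_txid, index) that transaction consumes.
    spends = {
        "B": [("A", 0)],
        "C": [("B", 0), ("X", 0)],  # C co-mingles a descendant of A with clean X
    }

    def descendants(seed, spends):
        """Every transaction whose coins descend from the seed's outputs."""
        tainted = {seed}
        changed = True
        while changed:  # propagate taint until nothing new gets marked
            changed = False
            for tx, inputs in spends.items():
                if tx not in tainted and any(prev in tainted for prev, _ in inputs):
                    tainted.add(tx)
                    changed = True
        return tainted

    print(descendants("A", spends))  # A, B and C are all tainted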

It may make it practical for major players, and for that matter anyone who uses BC, to "locally blacklist" seized bitcoins.

If there isn't any consensus in the "community", then such a blacklist is unlikely to have any effect.

If some miners decide to blacklist transactions involving certain coins, then other miners are just going to pick them up. If only a minority of miners are in on the blacklisting, this is going to cause a fork in the blockchain. Other miners have to decide which fork they are going to bet their resources on. If there isn't consensus on what to blacklist, there could be so many forks blacklisting different subsets that each fork becomes irrelevant, leaving only the chain with no blacklisting as viable.

Even if you could manage to get a majority of miners to agree on exactly what should be blacklisted, it is of questionable value to the miners to attempt blacklisting. It could be seen as setting a dangerous precedent for introducing blacklists, which would expose anybody owning bitcoins to a new and even more unpredictable danger.

Traders could decide to blacklist certain bitcoins, meaning you would refuse to accept blacklisted coins. But if you are selling goods for bitcoins, you'd have to announce in advance which coins you consider blacklisted; otherwise you'd have disputes where the buyer says they have paid, but the seller says the received bitcoins are no good. As a receiver of bitcoins you'd also have to decide how diluted the blacklisted bitcoins would have to be before you'd accept them. All in all, there'd have to be consensus about both the set of blacklisted bitcoins and the dilution threshold. Otherwise nobody would know whether the bitcoins they are accepting are good or not, and without such knowledge blacklisting wouldn't have the intended effect; you'd just be rejecting arbitrary payments, and you might as well flip a coin and say no thanks to a given payment.
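An acceptance rule could look like this Python sketch (the threshold and the values are invented for illustration):

    def accept_payment(inputs, blacklist, threshold=0.05):
        """inputs is a list of (value, source_txid) pairs. Accept only if the
        blacklisted share of the total value stays below the threshold."""
        total = sum(value for value, _ in inputs)
        bad = sum(value for value, source in inputs if source in blacklist)
        return bad / total < threshold

    print(accept_payment([(1.0, "clean"), (0.04, "seized")], {"seized"}))  # True
    print(accept_payment([(1.0, "clean"), (0.10, "seized")], {"seized"}))  # False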

I think the only consensus that has a real chance of being reached is that bitcoins are not blacklisted.

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

Keep in mind: if the miners did have to communicate with the pools constantly and synchronously with their mining, it could slow down their mining and therefore give them a competitive disadvantage.

True. I was assuming it was obvious that the communication had to be asynchronous. And I can't see any reason to communicate with other pools more often than once per block.

Once a node has started computing, it should be able to go on for quite a while without any communication. If the node doesn't hear anything, it should just keep doing whatever it was doing. The only thing that can render the continued computation completely pointless is that some node somewhere (in the same pool or any other pool) successfully mines a block. If communication has been totally dead for an hour, it is probably a waste of energy to keep trying to mine a block, since somebody else likely mined it already. But if you haven't heard anything for five minutes, just keep trying to mine the same block you were already working on.

This means the most important information to get synchronized between nodes is the fact that somebody mined a block. This should be totally independent of the pool, so this can be communicated between nodes even if they are in separate pools.

The other thing a node needs to receive is the list of transactions to include in the block. It's no big deal if that information is lagging a bit behind. You could update the list of transactions multiple times while trying to complete a block, but if it lags a couple of blocks behind, nothing is going to break.
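Putting it together, the loop I have in mind looks roughly like this Python sketch (the event queue, the message format and the timeouts are all illustrative):

    import queue
    import time

    def mining_loop(events, try_nonce, fresh_work):
        """events carries messages from our own pool and any other pool."""
        work = fresh_work()
        last_heard = time.time()
        while True:
            try:
                message = events.get_nowait()  # asynchronous: poll, never block
                last_heard = time.time()
                if message == "block_found":   # the one event that makes work stale,
                    work = fresh_work()        # no matter which pool reported it
            except queue.Empty:
                pass
            if time.time() - last_heard > 3600:
                break                          # an hour of silence: stop wasting energy
            try_nonce(work)                    # otherwise just keep hashing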

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

I believe 98% of miners are using standard mining tools which communicate with the selected pool only

So, we are dealing with a (minor) weakness in the standard mining tools.

What I'd like to see happen is a pool cross-submission scheme where, instead of miners having just one pool configured, they have at least three configured, and while they may only be requesting work units from one pool, they send a 'heads up' to all the secondary pools when a new block is detected...

Sounds like a reasonable solution.
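The heads-up part could be as simple as this Python sketch (the pool URLs and the notify endpoint are hypothetical):

    import json
    import urllib.request

    SECONDARY_POOLS = [  # hypothetical notify endpoints
        "https://pool-b.example/notify",
        "https://pool-c.example/notify",
    ]

    def heads_up(block_hash):
        """Best-effort new-block notification to every secondary pool."""
        body = json.dumps({"new_block": block_hash}).encode()
        for url in SECONDARY_POOLS:
            request = urllib.request.Request(
                url, data=body, headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(request, timeout=2)
            except OSError:
                pass  # a dead pool must not stall our mining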
