Comment Re:Time For Decentralized DNS (Score 1) 495

Using blockchain technology for decentralized consensus.

If you are thinking about using bitcoin-style proof of work, then I'd say that is a poor choice. It is an extreme waste of processing power, and it is not even needed for DNS. The purpose of the proof of work is to prevent double spending. But if you tried to perform a double-spending-like action on a DNS system built on similar principles, the only damage you'd cause would be to your own domain.
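
For reference, this is roughly what bitcoin-style proof of work amounts to: burn CPU until a hash of the block falls below a difficulty target. A minimal Python sketch (the function and its parameters are mine, purely for illustration); nothing in it is specific to a currency, and a name system gains nothing from the race:

import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16) -> int:
    # Search for a nonce such that SHA-256(block_data + nonce) has
    # `difficulty_bits` leading zero bits. Purely illustrative.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(mine(b"example block"))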

But by all means, let's get data and hosting decoupled. DNSSEC provides the ability to validate records wherever you got them from, but it still has the centralized authority. I'd rather see that once a zone hands over authority for a subdomain to a different public key, a signature with that key has to be used to hand authority back or transfer it to a new key.

Comment Re:Can bitcoins be blacklisted? (Score 1) 88

is it possible or even practical to identify a bitcoin as having been a "direct descendant" of a coin involved in a given transaction and/or as a coin that has been "co-mingled" with such a coin?

Definitely. That is easy to do. However since each transaction can have multiple inputs and outputs, the set of descendants is likely to grow over time, until eventually most bitcoins are descendants of that transaction.
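
A sketch of the idea on a made-up transaction graph rather than real blockchain data (the spends mapping and the names are hypothetical): tracking descendants is just a forward walk of the graph.

from collections import deque

def tainted_descendants(spends, seized_tx):
    # Breadth-first walk forward from the seized transaction.
    tainted = set()
    queue = deque([seized_tx])
    while queue:
        tx = queue.popleft()
        for child in spends.get(tx, []):
            if child not in tainted:
                tainted.add(child)
                queue.append(child)
    return tainted

# Because transactions mix many inputs and outputs, the tainted set tends to keep growing.
graph = {"seized": ["tx1"], "tx1": ["tx2", "tx3"], "tx3": ["tx4"]}
print(tainted_descendants(graph, "seized"))  # {'tx1', 'tx2', 'tx3', 'tx4'}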

it may make it practical for major players and for that matter anyone who uses BC to "locally blacklist" seized bitcoins.

If there isn't any consensus in the "community", then such a blacklist is unlikely to have any effect.

If some miners decide to blacklist transactions involving certain coins, then other miners are just going to pick them up. If only a minority of miners are in on the blacklisting, this is going to cause a fork in the blockchain. Other miners have to decide which fork they are going to bet their resources on. If there isn't consensus on what to blacklist, there could be so many forks blacklisting different subsets that each fork becomes irrelevant, leaving only the chain with no blacklisting as viable.

Even if you could manage to get a majority of miners to agree on exactly what should be blacklisted, it is of questionable value to the miners to attempt blacklisting. It could be seen as setting a dangerous precedent for introducing blacklists, which would introduce a new and even more unpredictable risk to anybody owning bitcoins.

Traders could decide to blacklist certain bitcoins, meaning they would refuse to accept blacklisted coins. But if you are selling goods for bitcoins, you'd have to announce in advance which coins you consider blacklisted; otherwise you'd have disputes where the buyer says they have paid, but the seller says the received bitcoins are no good. And as a receiver of bitcoins you'd also have to decide how diluted the blacklisted bitcoins would have to be before you'd accept them. All in all, there'd have to be consensus about both the set of blacklisted bitcoins and the dilution threshold. Otherwise nobody would know whether the bitcoins they are accepting are good or not, and without such knowledge blacklisting wouldn't have the intended effect; you'd just be rejecting arbitrary payments, and you might as well flip a coin and say no thanks to a certain payment.

I think the only consensus that has a real chance of being reached is that bitcoins are not blacklisted.

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

Keep in mind: if the miners did have to communicate with the pools constantly and synchronously with their mining, it could slow down their mining and therefore put them at a competitive disadvantage.

True. I was assuming it was obvious that the communication had to be asynchronous. And I can't see any reason to communicate with other pools more often than once per block.

Once a node has started computing, it should be able to go on for quite a while without any communication. If the node doesn't hear anything, it should just keep doing whatever it was doing. The only thing that can render the continued computation completely pointless is if a node somewhere (in the same pool or any other pool) successfully mines a block. If communication has been totally dead for an hour, it is probably a waste of energy to keep trying to mine a block, since somebody else likely mined it already. But if you haven't heard anything for five minutes, just keep trying to mine the same block you were already working on.

This means the most important information to get synchronized between nodes is the fact that somebody mined a block. This should be totally independent of the pool, so it can be communicated between nodes even if they are in separate pools.

The other information a node needs to receive is which transactions to include in the block. It's no big deal if that information lags a bit behind. You could update the list of transactions multiple times while trying to complete a block, but if it lags a couple of blocks behind, nothing is going to break.
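
A structural sketch of that behaviour (not code from any real miner; get_template, try_nonces, and the timings are stand-ins I made up):

import time, threading

new_block_seen = threading.Event()   # set by any peer, in-pool or not
last_contact = time.time()           # updated whenever any message arrives

def mining_loop(get_template, try_nonces):
    template = get_template()        # includes a possibly stale transaction list
    while True:
        if new_block_seen.is_set():  # the only event that makes current work pointless
            new_block_seen.clear()
            template = get_template()
        elif time.time() - last_contact > 3600:
            time.sleep(60)           # an hour of silence: probably wasted effort, back off
            continue
        try_nonces(template)         # five minutes of silence: just keep hashing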

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

I believe 98% of miners are using standard mining tools which communicate with the selected pool only

So, we are dealing with a (minor) weakness in the standard mining tools.

What I'd like to see happen is a pool cross-submission scheme, where instead of miners having just one pool configured, they have at least 3 configured, and while they may only be requesting work units from 1 pool, they could send a 'heads up' to all the secondary pools when a new block is detected...

Sounds like a reasonable solution.

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

A miner connected to the bitcoin network AND the pool, could in theory foil the attack.

If you are mining without communicating with the rest of the bitcoin network, you are putting somebody else in charge of that communication, which means you are giving somebody the power to cheat. Any miner not intending to cheat should be considering that to be a vulnerability in the mining software.

In other words, any miner not intending to cheat has an interest in running mining software that does communicate with the rest of the bitcoin network, even if the rest of the mining pool doesn't.

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

Take steps to prevent accumulating 51% hashing power, including: not accepting new miners

Why is this even necessary? I was under the impression that a mining pool would not be able to pull off an attack without it being immediately visible to the miners in the pool. Doesn't that mean that having a pool with a majority of the processing power isn't enough to pull off an attack, and that you also need all miners in the pool to conspire to perform it?

Comment Re:Fuck IPv6 (Score 1) 305

To not make the IP addresses overly lengthy.

The size of the IPv6 address was chosen carefully. But you can never predict everything, and a few use cases have shown up for more than 128 bits. We'll just have to make do with the 128 bits we got, because nobody wants to go through this entire upgrade process one more time.

So, why was it set at 128 bits in the first place? First of all, the IPv6 address, just like the IPv4 address, consists of a network part and a host part. Because the IPv4 address is too short, the boundary between the two parts was first made variable at byte boundaries, and when that turned out not to be enough to avoid running out, the boundary was permitted to be at any bit. Even that was not enough to avoid running out. With IPv6 this mistake was not to be repeated, hence each of the two parts had to be made large enough.

From IPv4 deployments we learned that 32 bits is not enough. In fact we have more or less removed the host part of the address (with lots of complications), we have forced utilization way beyond the reasonable, and 32 bits is still not enough. 36 bits for the network part might be enough if utilization were at 100%. However, research has led to the concept of an HD-ratio, which indicates what percentage of the bits in an address can be effectively used when it needs to have a hierarchical structure that can be utilized in routing. Research shows that a reasonable HD-ratio is in the range of 80% to 90%. If we have 45 bits and an 80% HD-ratio, we have 36 bits effectively.

Instead of making the network part 45 bits, which is an awful size for a computer to work with, it was rounded up to 64 bits. Those additional bits were put to reasonable use. In front of the 45 bits were put 3 bits, which split the address space into 8 blocks, of which the first and last are used for addresses that need special handling in the protocols. The other 6 blocks are there so that we have 6 chances to get the address allocation right in order to avoid running out again. After the 45 bits was put a group of 16 bits that can be used for subnetting within a site.
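
The 3 + 45 + 16 + 64 split can be illustrated with Python's ipaddress module (the field names below are mine, not taken from any RFC):

import ipaddress

def split_global_unicast(addr):
    bits = int(ipaddress.IPv6Address(addr))
    return {
        "format_prefix": bits >> 125,                    # top 3 bits (001 for 2000::/3)
        "routing_part": (bits >> 80) & ((1 << 45) - 1),  # 45-bit global routing part
        "subnet_id": (bits >> 64) & 0xFFFF,              # 16 bits for subnetting within a site
        "interface_id": bits & ((1 << 64) - 1),          # 64-bit host part
    }

print(split_global_unicast("2001:db8:1234:5678::1"))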

Some ISPs are so scared of those 45 bits running out that they have already commandeered some or all of the 16 bits intended for subnetting within a site. This is most likely a reflex reaction caused by too many years of being forced to be extremely careful with the allocation of IPv4 addresses. It is not as if those 45 bits are going to run out.

For the host part there was a desire to introduce autoconfiguration, which could generate the host part from a MAC address. If you also wanted room for addresses not generated from a MAC address, that meant the host part had to be at least 49 bits. This too was rounded up to 64 bits. Is it wasteful to round up from the 94 bits of documented need to 128 bits of actual address size? I'd say it would have been wasteful to require CPU time to be spent on the bit operations needed to save a mere 34 bits on the size of the address.
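
That autoconfiguration is the modified EUI-64 scheme: a 48-bit MAC address is padded out to a 64-bit interface identifier. A minimal sketch:

def mac_to_interface_id(mac):
    # Modified EUI-64: flip the universal/local bit and insert ff:fe in the middle.
    octets = bytes(int(b, 16) for b in mac.split(":"))
    eui64 = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:6]
    return int.from_bytes(eui64, "big")

print(hex(mac_to_interface_id("00:11:22:33:44:55")))  # 0x21122fffe334455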

Saving CPU time by rounding up the size of the addresses makes sense to me. Saving CPU time by eliminating needless fields from the header also makes sense to me. In fact, three different 16-bit fields that routers would need to process when forwarding an IPv4 packet got removed, so routers no longer need to waste processing time on those in IPv6.

Why did it suddenly turn out that 128 bits was not quite enough? Once we got the chance to work with the much larger addresses, people realized that it is possible to apply cryptographic operations to part of the IP address. With IPv4 that was unthinkable due to only having a total of 32 bits, but with 128 bits cryptography suddenly came within reach. However, cryptographic primitives with only 128 bits are considered to be on the weak side by now, and we can't even use the entire IPv6 address for cryptographic operations. So where cryptographic data in the IP address makes sense, we have to compromise on the security, but it still provides some benefit compared to not being able to do that cryptography in the first place.

This is not the only reason 128 bits is not quite enough. RFC 4193 defines a way to generate local prefixes with low risk of collisions; this is to replace RFC 1918, where collisions are a real problem. RFC 4193 leaves 16 bits for subnetting. But with RFC 1918 you could use 10.0.0.0/8, in which you had 24 bits and could realistically use up to 21 of those bits for subnetting. This is not to say RFC 4193 puts you in a worse position than RFC 1918 did, but we are just 5 bits short of saying that it is unconditionally better.
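
The RFC 4193 generation scheme itself is simple. Roughly (simplified from the RFC, using an arbitrary interface identifier as input): hash a timestamp and an interface ID, keep the low 40 bits as the global ID, and prepend fd.

import hashlib, time, ipaddress

def ula_prefix(interface_id):
    # 40-bit pseudo-random global ID derived from a timestamp and an interface identifier.
    seed = time.time_ns().to_bytes(8, "big") + interface_id.to_bytes(8, "big")
    global_id = int.from_bytes(hashlib.sha1(seed).digest()[-5:], "big")
    return ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))

print(ula_prefix(0x021122fffe334455))  # something like fdxx:xxxx:xxxx::/48, leaving 16 bits for subnets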

Then you can look at protocols such as 6to4 and Teredo. 6to4 needed to embed an IPv4 address inside the network portion of the IPv6 address. That fits just fine. But due to deployments of NAT on IPv4, 6to4 is not usable on all IPv4 networks. Along came Teredo to solve that problem. Teredo however uses UDP and needs to embed both IPv4 address and port number, and it needs both client and server addresses to be embedded along with a few flag bits. In total that's 112 bits that you would want to embed inside the network part, preferably with bits to spare for subnetting. So on top of the 112 bits you need a prefix on the order of 16 to 32 bits and 16 bits for subnetting; that's about 144 bits that would need to fit inside the network portion of the IPv6 address.

That was obviously not possible. So first of all, Teredo used not just the network part of the address but also the host part. That means Teredo is not suitable for connecting an entire network, only single hosts, and the bits for subnetting also go unused. Even this was not quite enough to make all the embedded data fit inside the IPv6 address, so the server port number was hardcoded in the protocol so that it would not have to be embedded in the IPv6 address.
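
To make the packing concrete, here is a sketch that pulls the embedded fields back out of an example Teredo address. The client port and client address are stored bit-inverted, and the server's UDP port is the one fixed by the protocol rather than embedded.

import ipaddress

def parse_teredo(addr):
    bits = int(ipaddress.IPv6Address(addr))
    return {
        "server": str(ipaddress.IPv4Address((bits >> 64) & 0xFFFFFFFF)),  # after the 2001:0::/32 prefix
        "flags": (bits >> 48) & 0xFFFF,
        "client_port": ((bits >> 32) & 0xFFFF) ^ 0xFFFF,                  # stored inverted
        "client": str(ipaddress.IPv4Address((bits & 0xFFFFFFFF) ^ 0xFFFFFFFF)),
    }

print(parse_teredo("2001:0:4136:e378:8000:63bf:3fff:fdd2"))
# {'server': '65.54.227.120', 'flags': 32768, 'client_port': 40000, 'client': '192.0.2.45'}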

If only ISPs had deployed IPv6 in time, there wouldn't have been any need for contraptions like Teredo.

Comment Re:Why IPv6? (Score 1) 305

Why does my ISP issue me with only a 32 bit address?

Not enough competition. You only get to choose among those companies that are actually in the area and can get a physical wire to your address. Plus most consumers don't see the connection between the problems they experience and the lack of IPv6 connectivity on their internet connection. But things are moving forward; I might actually get native IPv6 at home next week, and I live in a country which is lagging far behind the rest of the world.

Why does my server host only give me 32bit addresses?

For the same reason you haven't moved to a competitor that does have IPv6 support. For hosting there is more competition, because it is easier to move, and I believe that is part of the reason why the percentage of hosting companies with IPv6 support is larger than the percentage of ISPs with IPv6 support.

You can get dual-stack hosting if you make it a large enough priority that you are willing to switch hosting provider to get it. That's the positive side. The number of customers actually switching hosting provider to get dual stack is small, but I am one of those who have done it. We don't need 100% of customers ready to switch hosting provider to get IPv6; I think that if just 30% of customers were ready to switch, then 90% of hosting providers would deploy IPv6.

the default settings in IPTables are 32bit?

iptables is for IPv4, ip6tables is for IPv6.

but there seems to be no more forward motion.

There is forward motion. It is happening 13 years too late. If we keep being 13 years behind schedule compared to my calculations, then by 2020 we'll have 86% of users on IPv6.

it strikes me that some group has dropped the ball; but which group?

I would say the ball was dropped in 1999, when the technical spec wasn't followed up with policy adjustments. The introduction of CIDR as a stop-gap measure in the early 90's meant changes in how IPv4 addresses were handed out. Once the IPv6 spec was finalized, there should have been another change. A new policy ensuring that those deploying IPv6 would get easier access to IPv4 addresses than those not deploying IPv6 could have made a difference. Did IANA drop the ball? Or were they simply following a policy set by policymakers, who had dropped the ball?

The last /8 in APNIC is being rationed as is the last /8 in RIPE. But those account for only about 2% of the total pool, not something that can give a strong incentive. Imagine if 30% of the IPv4 pool could have been handed out according to a policy set to give incentive to deploy IPv6. That didn't happen, and by the time IANA ran out of addresses, IPv6 deployment had hardly gotten started.

I think the problem now is that nobody knows how to set the right incentives to deploy IPv6. The benefit you get from deploying IPv6 at this time is not great, because only a minority of those you need to communicate with have IPv6 at all, and they still have IPv4 as well. Those who are hurt the most by the lack of IPv6 deployment are those who don't have IPv4 addresses; those who can do something about the deployment are those who do have IPv4 addresses. It will have to get a lot worse before it starts to get better.

I find it interesting that 25% of people in the poll have chosen "When we build a new internet" as the answer as to when IPv6 will arrive.

One could argue that by deploying IPv6 we are building a new internet. Just like the previous internet was built on top of infrastructure originally intended to support telephone calls, the new internet will be built on top of infrastructure originally intended to support the old internet. But really this is just a play on words. What's more interesting is the games being played with peering. I get the feeling providers are in two camps: those who think getting in early on IPv6 deployment means you get a better place in the hierarchy, versus those who think that whatever place you had in the IPv4 hierarchy is the place you are entitled to in the IPv6 hierarchy when you finally decide to get started with it. It will be interesting to see which of those camps "wins". And it could change the structure of the internet, because it is peerings that make up the internet.

I suspect some are joking but that others, like myself, have a gut feeling that the entire internet needs an overhaul.

I can think of plenty of other areas where an overhaul could be needed.

  • We need to get rid of protocols that can be abused for amplification attacks, or we need to squeeze a spoofing protection layer in between IP and UDP
  • We need to be able to track down the source of a flood of packets from the receiving end without involving administrators of intermediate routers. And we need to be able to push filters all the way across the internet to the source of those packets. And we need to achieve that while maintaining the principle of keeping all intelligence at the edge of the network. And all the while each intermediate router must only need a constant amount of memory to support this operation.
  • We need opportunistic end-to-end encryption with optional validation of the identity of the peer after the encrypted channel has been established. Making the validation optional is a key point to security.
  • We need to get rid of the overloading of meaning of IP addresses. Today IP addresses are related to your physical location, but they are simultaneously used to track reputation, and ISPs are enforcing limitations on what their customers can do with IP addresses belonging to the ISP.

Comment Re:Fuck IPv6 (Score 2) 305

I agree with this, 0.0.0.0.0 - 255.255.255.255.255 is much easier

That's it. I have now officially heard that suggestion too many times.

I have seen it come in two variations. Extending the IP address from four octets to five octets has been suggested frequently. It was funny when it was mentioned in the IPv4.1 spec, published as an April Fools' joke a few years back. It was funny then because it was written as a suggestion by somebody with enough of a clue to include the diagrams making it blindingly obvious why it is a non-solution (one which would be only slightly more work to deploy than IPv6).

Another variation is the suggestion to increase the maximum for each octet from 255 to 999 to fully utilize all three digits. Increasing the range to 0-999 would give almost 40 bits of address space, slightly less than the extra octet, which would give exactly 40 bits. But how much address space do we really need? Calculations based on population growth and HD-ratios have shown 45 bits to be on the safe side, and based on that, the recommendation to assign a /48 to each site out of the /3 assigned to IANA was approved.

But each of the two suggestions above gives us only about 40 bits, which is less than 45. But if we combine the two, we get almost 50 bits. That should be enough, right? Well, what we have discussed is only notation. The suggestion tends to be made by people who haven't bothered looking at what wire formats actually look like. The only exception was the IPv4.1 spec, which did specify a wire format (and that was one of the primary hints telling the reader that it was a joke; another hint was the name 4.1 for something published April 1st. That the IP address was extended from 4 bytes to 4+1 bytes just made it extra fun).
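
The arithmetic behind those bit counts is quick to check (a throwaway calculation, nothing more):

from math import log2

print(log2(256**5))   # 40.0  -> five octets of 0-255
print(log2(1000**4))  # ~39.9 -> four groups of 0-999
print(log2(1000**5))  # ~49.8 -> five groups of 0-999, "almost 50 bits"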

So if we were to accept the notation with five groups of numbers ranging from 0 to 999, what wire format could we use? The IPv4 wire format is a no-go, because there are not enough address bits. We could invent a new format, but even if we managed to come up with one that is obviously better than both IPv4 and IPv6, we would still have a 20-year deployment task to complete and a deadline three years ago, which makes a new wire format a no-go as well. This leaves only one possible wire format to apply that notation to, which is IPv6.

Is such a contraption possible? Sure, have a patch. And as you can see, it works:

$ ssh 256.93.800.0.1 uname
Linux

And it can use familiar looking addresses such as 127.0.0.0.1 for localhost, 192.168.273.35.102 to address a host on your LAN, or 203.0.113.42.789 to address a host using 6to4 behind a router with IPv4 address 203.0.113.42.

This may not be exactly what you had in mind, but it is as close as you can get when you missed the 1998 deadline for improving the official IPv6 spec.

Comment Re:IPv6 Addresses (Score 1) 305

I didn't write that list, but I can explain to what extent the points are true or not.
  1. With IPv6 you don't have to deal with NAT and other workarounds for the shortage of IP addresses. This can lead to a simpler and cleaner network topology, which makes the topology easier to learn for administrator and attacker alike, and also easier for the administrator to secure. If the administrator forgets to put a firewall where one is needed, then on IPv4 they may have been saved by a NAT being in place; but in that case leaking information about network topology would be the least of your concerns. Also, a NAT doesn't prevent an attacker from performing a traceroute into your network; they just have to wait for outgoing connections, which they can use to trace back into the network.
  2. It is true that IPv6 stacks are not as well tested as IPv4 stacks. But you are not going to solve that problem by simply waiting; you need to give others an incentive to move ahead with native IPv6 and get those stacks hardened. One way you could move ahead is to keep your LAN IPv6-only and deploy translation on the edge of your network to connect to an IPv4 backbone. That still gives you many of the benefits of native IPv6 while giving others an incentive to deploy IPv6 and getting the stacks tested. Bear in mind that most of the weaknesses are link-local only. The implication is, first of all, that enabling IPv6 on your backbone connection doesn't put you at risk. And disabling IPv6 on your backbone connection doesn't protect you against an insider attack, since IPv6 is enabled by default on every modern OS, and on some of them an IPv4-only setup is no longer even an officially supported configuration.
  3. IPSec was originally developed as part of the IPv6 spec, and at one point IPSec support was mandatory in IPv6. IPSec did get backported to IPv4, where it is not mandatory, and it was later changed to optional but strongly recommended in IPv6 as well. This means that through the years the security advantage of IPv6 in this field has been reduced to the point of being almost exactly the same as IPv4. But the advantage was never that significant in the first place, because in spite of IPSec support being mandatory, the IPSec design is overly complicated and difficult to get right. Plus it is not mandatory to perform any key distribution: two fully compliant IPv6 stacks with full IPSec support still communicate in the clear by default.

Comment Re:IPv6 Addresses (Score 1) 305

If the system complains that the passwords are similar, not merely identical, then it must be storing unhashed passwords.

Comparing the password you are changing from with the new password is trivial, since you have to type in both in order to change your password. If they accept your new password, they could store the old password encrypted using the new password as the key. That way, every time you change your password, the current password can be used to decrypt the previous one, which can be used to decrypt the one before that, and so on. With that approach it is trivial to compare the similarity to every one of your older passwords.
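
A toy sketch of that chain (the names and the throwaway XOR cipher are mine, purely to show the shape of the idea; a real system would use an authenticated cipher and a proper key-derivation function): at each change the site stores the old password encrypted under a key derived from the new one, so whoever knows the current password can walk backwards through the whole history.

import hashlib
from itertools import count

def keystream(password, salt, length):
    # Throwaway keystream for illustration only.
    out = b""
    for i in count():
        out += hashlib.sha256(salt + password.encode() + i.to_bytes(4, "big")).digest()
        if len(out) >= length:
            return out[:length]

def encrypt_old(old, new, salt):
    data = old.encode()
    return bytes(a ^ b for a, b in zip(data, keystream(new, salt, len(data))))

def decrypt_old(blob, current, salt):
    return bytes(a ^ b for a, b in zip(blob, keystream(current, salt, len(blob)))).decode()

salt = b"per-user-salt"
blob = encrypt_old("hunter2", "correct horse", salt)  # stored when the password changes
print(decrypt_old(blob, "correct horse", salt))       # -> hunter2; repeat to walk further back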

Or they could do as Cacadril suggested and compare the most obvious variations of your new password against stored hashes. This is however going to require a lot more CPU time. One could use a combination of the two approaches: store a salted hash of the current password, plus encrypted versions of hashes of all the older passwords, using a weaker salting that remains constant per user and no iteration. You still can't extract the older passwords even if you know the decryption key, but you can generate variations of the new password and efficiently check them against the decrypted hashes.

Or you can drop all of that complexity and simply check for similarity with the most recent password only, and for exact matches further back.
