Comment Re:Not really to do with "BGP" or "IPv4" as such.. (Score 1) 248

This isn't really to do with BGP or IPv4 as such, it's an inherent problem in the way "The Internet" regards addresses.

It is a problem made several times worse by the extremely high HD-ratios needed to keep IPv4 alive. If we switch to IPv6, we can go on much longer before this becomes a problem again.

It may become a problem again after IPv4 has been abandoned, as the network keeps growing. Something that scales better than BGP would be nice. I predict a more scalable solution is going to need more addresses - no problem for IPv6, but it would make such a solution unusable with IPv4.

Comment Re:Lack of incentives...? (Score 1) 248

Imagine if your business suddenly lost internet connectivity because your IP blocks have been reclaimed.

Who is going to configure their backbone routers to reject announcements from parties who got their addresses reclaimed for such a reason? I don't see an incentive to reject those announcements, so the reclaiming won't have any immediate effect.

Comment Re:Yes, Please (Score 1) 248

This has little to do with IPv6. In fact there is only 256k available by default if you switch.

So what if you can only have half as many entries on IPv6? Due to IPv6 being designed for an HD-ratio in the 80-90% range rather than the 95%+ needed with IPv4, there is much less address space fragmentation. The result is that on average each AS only has one fifth the number of IPv6 routes compared to IPv4. So those 256k IPv6 routes are going to last longer, even if the entire world switched to IPv6 next month.
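For the curious, the HD-ratio arithmetic behind that claim is easy to play with. A minimal Python sketch (RFC 3194 defines the HD-ratio as log(allocated)/log(total); the 2^45 figure assumes /48 allocations out of the 2000::/3 global unicast space, and all numbers here are purely illustrative):

```python
import math

def hd_ratio(allocated, total):
    """Host-Density ratio as defined in RFC 3194: log(allocated) / log(total)."""
    return math.log(allocated) / math.log(total)

def allocatable_at(hd, total):
    """Invert the HD-ratio: how many objects can be handed out at a target density."""
    return total ** hd

# Roughly where IPv4 sits if ~2 billion addresses are in active use (illustrative only)
print(hd_ratio(2 * 10**9, 2**32))

print(allocatable_at(0.85, 2**45))   # comfortable IPv6 utilisation of /48 site prefixes
print(allocatable_at(0.95, 2**32))   # the kind of density IPv4 now has to run at
```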

Comment Re:Betteridge (Score 1) 248

Every bit counts.

Not on backbone routes. Backbone routes only need 48 bits. And if you use the recommended link prefix length, you don't need longer than 64 bits anywhere. 64 bit networks ought to be enough for anybody.

Even if you decide to make your link prefixes longer than 64 bits, you don't need a CAM with thousands of entries for that. Most routers don't have thousands of ports.
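To make the point concrete, here is a toy longest-prefix-match in Python that never looks past the top 64 bits. It only illustrates the lookup, not how real CAM/TCAM hardware is organized, and the prefixes and next hops are made up:

```python
import ipaddress

# Toy FIB: no announced prefix is longer than /64, and backbone routes are /48 or shorter,
# so only the top bits of the destination address ever matter.
fib = {
    ipaddress.ip_network("2001:db8::/32"): "transit-1",
    ipaddress.ip_network("2001:db8:1234::/48"): "customer-7",
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    best = max((net for net in fib if addr in net),
               key=lambda net: net.prefixlen, default=None)
    return fib.get(best, "default-route")

print(lookup("2001:db8:1234::1"))  # -> customer-7, matched on 48 bits
print(lookup("2001:db8:ffff::1"))  # -> transit-1, matched on 32 bits
```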

Comment Re:Time Shifting? (Score 1) 317

Which is odd, considering iTunes, Windows Media Player and even Xbox 360 and PS3 will rip CDs.

It does make a difference whether the primary purpose of the device is to rip CDs. But I believe the real reason they didn't go after those devices is that there may not be enough money to go after.

The devices you mention probably cost less than $2,500 per unit. A car can cost significantly more than $2,500, so it would be a lot easier to squeeze $2,500 per unit out of a car manufacturer.

That strategy could backfire if in the end the question about the primary purpose gets applied to the entire car and not just the CD player. I don't think they'll manage to convince the court that the primary purpose of the car is to rip CDs.

Comment Re:more leisure time for humans! (Score 4, Insightful) 530

Both Capitalism and Communism are supposed to be about maintaining the work force, so guess where we all are today?

A nominally capitalist country pays a communist country for much of its manufacturing because it's cheaper, instead of employing its own citizens. So the logical next step is to just buy the robot factory workers from China to replace workers in the U.S. to save on shipping costs.

Comment Re: AI is always "right around the corner". (Score 1) 564

The machine has no fucking clue about what it is translating. Not the media, not the content, not even what to and from which languages it is translating (other than a variable somewhere, which is not "knowing"). None whatsoever. Until it does, it has nothing to do with AI in the sense of TAFA (the alarmist fucking article).

How would you determine this, quantitatively? Is there a series of questions you could ask a machine translator about the text that would distinguish it from a human translator? Asking questions like "How did this make you feel?" is getting into the Turing Test's territory. Asking questions like "Why did Alice feel X" or "Why did you choose this word over another word in this sentence?" is something that machines are getting better at answering all the time.

To head off the argument that machine translation is just using a large existing corpus of human-generated text, my response is that this is pretty much what humans do: interact with a lot of other humans and their texts to understand the meaning. Clearly humans have the tremendous advantage of actually experiencing some of what is written about to ground their understanding of the language, but as machine translation shows, that is not a necessity for demonstrating an understanding of language.

For the argument that meaning must be grounded in conscious experience for it to be considered "intelligence" I would argue that machine learning *has* experience spread across many different research institutions and over time. Artificial selection has produced those agents and models which work well for human language translation, and this experience is real, physical experience of algorithms in the world. Not all algorithms and models survived, the survivors were shaped by this experience even though it was not tied to one body, machine, location, or time. Whether machine translation agents are consciously aware of this experience, I couldn't say. They almost certainly have no direct memory of it, but evidence of the experience exists. Once a system gets to the point that it can provide a definite answer to the question "What have machine translation agents experienced?" and integrate everything it knows about itself and the research done to create it, then we'll have an answer.

Comment Re:AI is always (Score 1) 564

Everything humans do is simply a matter of following a natural-selection-generated set of instructions, bootstrapping from the physical machinery of a single cell. Neurological processes work together in the brain to produce intelligence in humans, at least as far as we can tell. Removing parts of the human brain (via disease, injury, surgery, etc.) can reduce different aspects of intelligence, so it's not unreasonable to think that humans are also a pile of algorithms united in a special way that leads to general intelligence, and that AI efforts are only lacking some of the pieces and a way of uniting them. As researchers put together more and more of the individual pieces (speech and object recognition, navigation, information gathering and association, etc.), the results probably won't look like artificial general intelligence until all the necessary pieces exist and it's only the integration that remains to be done. For example, there's another article today about the claustrum in a woman's brain that appears to be an effective on-off switch for her consciousness, strengthening the evidence that consciousness is an integration of various neural subsystems mediated by other regions.

It's important to consider that AGI may act nothing like human or animal intelligence, either. It may not be interested in communication, exploration, or anything else that humans are interested in. Its drives or goals will be the result of its algorithms, and we shouldn't discount the possibility of very inhuman intelligence that nonetheless has a lot of power to change the world. Expecting androids or anthropomorphic robots to emerge from the first AGI is wishful thinking. The simplest AGI would probably be most similar to bacteria or other organisms we find annoying; it would understand the world well enough to improve itself with advanced technology but wouldn't consider the physical world to consist of anything but resources for its own growth. It may even lack sentient consciousness.

Producing human-equivalent AGI is a step or two beyond functional AGI. Implementing all of nature's tricks for getting humans to do the things we do in silicon will not be a trivial task. Look at The Moral Landscape or similar for ideas about how one might go about reverse engineering what makes humans "human" so that the rules could be encoded in AGI.

Comment Re:Time For Decentralized DNS (Score 1) 495

Using blockchain technology for decentralized consensus.

If you are thinking about using bitcoin-style proof of work, then I'd say that is a poor choice. It is an extreme waste of processing power, and it is not even needed for DNS. The purpose of the proof of work is to prevent double spending. But if you tried to perform a double-spending-like action on a DNS system built on similar principles, the only damage you'd cause would be to your own domain.
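For anyone who hasn't seen it spelled out, this is roughly the shape of the work being wasted; a minimal Python sketch (real Bitcoin uses double SHA-256 over a specific header layout and a vastly higher difficulty, so treat this as illustration only):

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(header || nonce) has the required
    number of leading zero bits -- the 'waste of processing power' above."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(mine(b"example block header", 16))  # tiny difficulty so it finishes quickly
```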

But by all means, let's get data and hosting decoupled. DNSSEC provides the ability to validate records wherever you got them from, but it still has the centralized authority. I'd rather see that once a zone hands over authority over a subdomain to a different public key, a signature with that key has to be used to hand authority back or transfer it to a new key.
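A rough sketch of the handover rule I have in mind, in Python. The keyed hash below is only a stand-in for a real public-key signature scheme (e.g. the algorithms DNSSEC already supports), and every name here is made up:

```python
import hashlib

# Illustrative "signature": a keyed hash stands in for a real public-key scheme
# so the rule itself can be demonstrated end to end.
def sign(key: str, message: bytes) -> str:
    return hashlib.sha256(key.encode() + message).hexdigest()

def verify(key: str, message: bytes, signature: str) -> bool:
    return sign(key, message) == signature

authority = {"example.org.": "key-A"}   # zone -> key currently holding authority

def transfer_authority(zone: str, new_key: str, signature: str) -> None:
    """Authority only moves if the holder of the *current* key signs off;
    the delegating parent cannot unilaterally take it back."""
    message = f"{zone} -> {new_key}".encode()
    if not verify(authority[zone], message, signature):
        raise PermissionError("transfer not signed by current key holder")
    authority[zone] = new_key

transfer_authority("example.org.", "key-B",
                   sign("key-A", b"example.org. -> key-B"))   # accepted
print(authority["example.org."])                              # key-B
```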

Comment Re:Can bitcoins be blacklisted? (Score 1) 88

is it possible or even practical to identify a bitcoin as having been a "direct descendant" of a coin involved in a given transaction and/or as a coin that has been "co-mingled" with such a coin?

Definitely. That is easy to do. However, since each transaction can have multiple inputs and outputs, the set of descendants is likely to grow over time, until eventually most bitcoins are descendants of that transaction.
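Tracking the descendants is just a forward walk over the transaction graph. A toy Python sketch (real tracking would work on individual outputs rather than whole transactions, and the transactions here are invented):

```python
from collections import deque

# Hypothetical transaction graph: txid -> list of txids whose outputs it spends.
spends = {
    "tx2": ["tx1"],          # tx2 spends an output of tx1 (the seized coins)
    "tx3": ["tx2", "txX"],   # tx3 mixes tainted and clean inputs
    "tx4": ["txX"],
}

def descendants(seized_tx: str) -> set:
    """All transactions directly or indirectly funded by the seized one --
    the set that keeps growing over time."""
    children = {}
    for tx, parents in spends.items():
        for p in parents:
            children.setdefault(p, []).append(tx)
    seen, queue = set(), deque([seized_tx])
    while queue:
        tx = queue.popleft()
        for child in children.get(tx, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(descendants("tx1"))   # {'tx2', 'tx3'} -- tx4 stays clean
```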

it may make it practical for major players, and for that matter anyone who uses BC, to "locally blacklist" seized bitcoins.

If there isn't any consensus in the "community", then such a blacklist is unlikely to have any effect.

If some miners decide to blacklist transactions involving certain coins, then other miners are just going to pick them up. If only a minority of miners are in on the blacklisting, then this is going to cause a fork in the blockchain. Other miners have to decide which fork they are going to bet their resources on. If there isn't consensus on what to blacklist, there could be so many forks blacklisting different subsets that each fork becomes irrelevant, leaving only the chain with no blacklisting as viable.

Even if you could manage to get a majority of miners to agree on exactly what should be blacklisted, it is of questionable value to the miners to attempt blacklisting. It could be seen as setting a dangerous precedent for introducing blacklists, which would add a new and even more unpredictable risk for anybody owning bitcoins.

Traders could decide to blacklist certain bitcoins, meaning they would refuse to accept blacklisted coins. But if you are selling goods for bitcoins, you'd have to announce in advance which coins you consider blacklisted; otherwise you'd have disputes where the buyer of goods says they have paid, but the seller says the received bitcoins are no good. As a receiver of bitcoins you'd also have to decide how diluted the blacklisted bitcoins would have to be before you'd accept them. All in all, there would have to be consensus about both the set of blacklisted bitcoins and the dilution threshold. Otherwise nobody would know whether the bitcoins they are accepting are good or not, and without that knowledge blacklisting wouldn't have the intended effect; you'd just be rejecting arbitrary payments, and might as well flip a coin and say no thanks to a given payment.
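To illustrate the dilution problem, here is one of many possible ways a receiver could score a payment, as a Python sketch; the 5% threshold and the values are invented for the example:

```python
# Value-weighted share of a payment's inputs that trace back to blacklisted coins.
# The threshold itself would still need the community-wide consensus argued about above.
def tainted_fraction(inputs):
    """inputs: list of (value, is_blacklisted) pairs for the paying transaction."""
    total = sum(value for value, _ in inputs)
    tainted = sum(value for value, bad in inputs if bad)
    return tainted / total if total else 0.0

payment = [(0.8, False), (0.2, True)]        # 20% of the value is blacklisted
accept = tainted_fraction(payment) <= 0.05   # hypothetical 5% threshold
print(tainted_fraction(payment), accept)     # 0.2 False
```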

I think the only consensus that has a real chance of being reached is that bitcoins are not blacklisted.

Comment Re:Functionally correct, but insecure (Score 1) 199

Unless all the code running on the machine is absolutely type-safe and only allows "safe" reflection, trying to hide sensitive data from other bits of code in your address space is a lost cause. Code modification, emulation, tracing, breakpoint instructions, hardware debugger support, etc. are all viable ways for untrusted code with access to your address space to steal your data.

Wiping memory is only effective for avoiding hot or cold boot attacks against RAM, despite its frequent use as a hack on terrible operating systems to hope/pretend that userspace software isn't leaking data into other processes, either directly via attacks or accidentally through kernel mishandling of memory.
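For what wiping *is* good for, a small Python sketch of overwriting a secret in place; this helps against a later memory dump or cold boot, not against code already sharing your address space, and Python may well have scattered other copies of the secret around anyway:

```python
import ctypes

# Zero a secret held in a mutable buffer so that this particular copy
# no longer survives in RAM.
secret = bytearray(b"hunter2-session-key")
buf = (ctypes.c_char * len(secret)).from_buffer(secret)
ctypes.memset(ctypes.addressof(buf), 0, len(secret))
del buf
print(secret)   # bytearray(b'\x00\x00...') -- this copy, at least, is gone
```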

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

Keep in mind: if the miners did have to communicate with the pools constantly and synchronously with their mining, it could slow down their mining, and therefore give them a competitive disadvantage.

True. I was assuming it was obvious that the communication had to be asynchronous. And I can't see any reason to communicate with other pools more often than once per block.

Once a node has started computing, it should be able to go on for quite a while without any communication. If the node doesn't hear anything else, it should just keep doing whatever it was doing. The only thing that can render the continued computation completely pointless for the node is if a node somewhere (in the same pool or any other pool) successfully mines a block. If communication has been totally dead for an hour, it is probably a waste of energy to keep trying to mine a block, since somebody else likely mined it already. But if you haven't heard anything for five minutes, just keep trying to mine the same block you were already working on.

This means the most important information to get synchronized between nodes is the fact that somebody mined a block. This should be totally independent of the pool, so this can be communicated between nodes even if they are in separate pools.

The other information a node needs to receive is information about which transactions to include in the block. It's no big deal if that information is lagging a bit behind. You could update the list of transactions multiple times while trying to complete a block, but if it lags a couple of blocks behind, nothing is going to break.
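Putting those pieces together, the mining loop I'm describing looks roughly like this Python sketch; every name in it is made up for illustration and it is not any real pool protocol:

```python
import queue
import time

class Work:
    """Stand-in for 'the block template we are currently hashing on'."""
    def __init__(self, prev_block, transactions):
        self.prev_block, self.transactions = prev_block, transactions
    def with_transactions(self, txs):
        return Work(self.prev_block, txs)      # refreshing the tx list is optional and can lag
    def try_one_nonce(self):
        pass                                   # real hashing would go here

events = queue.Queue()   # a separate network thread drops news in here, asynchronously

def mine(work, give_up_after=3600):
    last_heard = time.monotonic()
    while True:
        try:
            kind, payload = events.get_nowait()          # never block waiting for the pool
            last_heard = time.monotonic()
            if kind == "block_found":                    # by anyone, in any pool
                work = Work(payload, work.transactions)  # move on to the next block
            elif kind == "new_transactions":             # nice to have, fine if it lags
                work = work.with_transactions(payload)
        except queue.Empty:
            if time.monotonic() - last_heard > give_up_after:
                return                                   # an hour of silence: stop wasting power
        work.try_one_nonce()                             # keep hashing regardless
```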
