Comment Re:Time Shifting? (Score 1) 317

Which is odd, considering iTunes, Windows Media Player and even Xbox 360 and PS3 will rip CDs.

It does make a difference whether the primary purpose of the device is to rip CDs. But I believe the real reason they didn't go after those devices is that there wasn't enough money in it.

The devices you mention probably cost less than $2,500 per unit. A car can cost significantly more than $2,500, so it would be a lot easier to squeeze $2,500 per unit out of a car manufacturer.

That strategy could backfire if, in the end, the question of primary purpose gets applied to the entire car rather than just the CD player. I don't think they'll manage to convince a court that the primary purpose of a car is to rip CDs.

Comment Re:more leisure time for humans! (Score 4, Insightful) 530

Both Capitalism and Communism are supposed to be about maintaining the work force, so guess where we all are today?

A nominally capitalist country pays a nominally communist country for much of its manufacturing because it's cheaper than employing its own citizens. So the logical next step is to buy robot factory workers from China to replace workers in the U.S. and save on shipping costs.

Comment Re: AI is always "right around the corner". (Score 1) 564

The machine has no fucking clue what it is translating. Not the media, not the content, not even which languages it is translating to and from (other than a variable somewhere, which is not "knowing"). None whatsoever. Until it does, it has nothing to do with AI in the sense of TAFA (the alarmist fucking article).

How would you determine this quantitatively? Is there a series of questions you could ask a machine translator about the text that would distinguish it from a human translator? Questions like "How did this make you feel?" get into Turing Test territory. Questions like "Why did Alice feel X?" or "Why did you choose this word over another in this sentence?" are things machines are getting better at answering all the time.

To head off the argument that machine translation just uses a large existing corpus of human-generated text: that is pretty much what humans do. We interact with a lot of other humans and their texts to learn the meaning. Humans clearly have the tremendous advantage of actually experiencing some of what is written about, which grounds their understanding of the language, but as machine translation shows, that grounding is not a necessity for demonstrating an understanding of language.

For the argument that meaning must be grounded in conscious experience for it to be considered "intelligence", I would argue that machine learning *has* experience, spread across many different research institutions and over time. Artificial selection has produced the agents and models that work well for human language translation, and this is real, physical experience of algorithms in the world. Not all algorithms and models survived; the survivors were shaped by this experience even though it was not tied to one body, machine, location, or time. Whether machine translation agents are consciously aware of this experience, I couldn't say. They almost certainly have no direct memory of it, but evidence of the experience exists. Once a system can give a definite answer to the question "What have machine translation agents experienced?" and integrate everything it knows about itself and the research done to create it, then we'll have an answer.

Comment Re:AI is always (Score 1) 564

Everything humans do is simply a matter of following a natural-selection-generated set of instructions, bootstrapping from the physical machinery of a single cell. Neurological processes work together in the brain to produce intelligence in humans, at least as far as we can tell. Removing parts of the human brain (via disease, injury, surgery, etc.) can reduce different aspects of intelligence, so it's not unreasonable to think that humans are also a pile of algorithms united in a special way that leads to general intelligence, and that AI efforts are only lacking some of the pieces and a way of uniting them. As researchers put together more and more of the individual pieces (speech and object recognition, navigation, information gathering and association, etc.), the results probably won't look like artificial general intelligence until all the necessary pieces exist and only the integration remains to be done. For example, there's another article today about the claustrum in a woman's brain that appears to act as an on-off switch for her consciousness, strengthening the evidence that consciousness is an integration of various neural subsystems mediated by other regions.

It's important to consider that AGI may act nothing like human or animal intelligence, either. It may not be interested in communication, exploration, or anything else that humans are interested in. Its drives or goals will be the result of its algorithms, and we shouldn't discount the possibility of very inhuman intelligence that nonetheless has a lot of power to change the world. Expecting androids or anthropomorphic robots to emerge from the first AGI is wishful thinking. The simplest AGI would probably be most similar to bacteria or other organisms we find annoying; it would understand the world well enough to improve itself with advanced technology but wouldn't consider the physical world to consist of anything but resources for its own growth. It may even lack sentient consciousness.

Producing human-equivalent AGI is a step or two beyond functional AGI. Implementing all of nature's tricks for getting humans to do the things we do in silicon will not be a trivial task. Look at The Moral Landscape or similar for ideas about how one might go about reverse engineering what makes humans "human" so that the rules could be encoded in AGI.

Comment Re:Time For Decentralized DNS (Score 1) 495

Using blockchain technology for decentralized consensus.

If you are thinking of bitcoin-style proof of work, I'd say that is a poor choice. It is an extreme waste of processing power, and it isn't even needed for DNS. The purpose of the proof of work is to prevent double spending, but if you performed a double-spend-like action on a DNS system built on similar principles, the only damage you'd cause would be to your own domain.
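
To make the waste concrete, here is a minimal Python sketch of hash-based proof of work (the block format is invented for illustration). Every discarded nonce below is processing power spent purely to make double spending expensive:

    import hashlib

    def mine(block_data: bytes, difficulty_bits: int) -> int:
        # Brute-force a nonce so SHA-256(block_data + nonce) has
        # difficulty_bits leading zero bits. Every failed attempt is the
        # "wasted" work the proof-of-work scheme relies on.
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(
                block_data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    # Even a modest difficulty takes tens of thousands of attempts.
    print(mine(b"example block header", difficulty_bits=16))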

But by all means, let's get data and hosting decoupled. DNSSEC already provides the ability to validate records wherever you got them from, but it still has a centralized authority. I'd rather see that once a zone hands authority over a subdomain to a different public key, a signature with that key is required to hand authority back or transfer it to a new key.
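
A sketch of that rule, assuming Ed25519 signatures via the Python `cryptography` package; the record layout is invented for illustration:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def transfer_record(subdomain: str, new_pubkey: bytes) -> bytes:
        # Invented wire format: "subdomain -> new owner key".
        return subdomain.encode() + b" -> " + new_pubkey

    # The parent zone initially delegates "sub.example" to Alice's key.
    alice = Ed25519PrivateKey.generate()
    current_owner = alice.public_key()

    # Later, only a signature by the current key can move the delegation.
    bob = Ed25519PrivateKey.generate()
    record = transfer_record(
        "sub.example",
        bob.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw))
    signature = alice.sign(record)

    try:
        current_owner.verify(signature, record)  # raises if not signed by owner
        current_owner = bob.public_key()         # authority moves to Bob
    except InvalidSignature:
        print("transfer rejected: not signed by the current owner")

Any resolver can run the same check against the current owner key, wherever it got the records from, with no central authority involved.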

Comment Re:Can bitcoins be blacklisted? (Score 1) 88

is it possible or even practical to identify a bitcoin as having been a "direct descendant" of a coin involved in a given transaction and/or as a coin that has been "co-mingled" with such a coin?

Definitely; that is easy to do. However, since each transaction can have multiple inputs and outputs, the set of descendants is likely to grow over time, until eventually most bitcoins are descendants of that transaction.
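
A rough sketch of how that tracing could work, assuming a simplified transaction model (each transaction listed with its input outpoints and its output count):

    from collections import deque

    # Hypothetical minimal model: transactions[txid] = {
    #     "inputs": [(source_txid, output_index), ...], "n_outputs": int }

    def tainted_outputs(transactions, seed_txid):
        # Index which transactions spend each outpoint.
        spenders = {}
        for txid, tx in transactions.items():
            for outpoint in tx["inputs"]:
                spenders.setdefault(outpoint, []).append(txid)
        # Breadth-first search from every output of the seed transaction.
        tainted = {(seed_txid, i)
                   for i in range(transactions[seed_txid]["n_outputs"])}
        queue = deque(tainted)
        while queue:
            outpoint = queue.popleft()
            for txid in spenders.get(outpoint, []):
                # All outputs of a spending transaction become descendants,
                # which is why the tainted set keeps growing over time.
                for i in range(transactions[txid]["n_outputs"]):
                    child = (txid, i)
                    if child not in tainted:
                        tainted.add(child)
                        queue.append(child)
        return tainted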

it may make it practical for major players, and for that matter anyone who uses BC, to "locally blacklist" seized bitcoins.

If there isn't any consensus in the "community", then such a blacklist is unlikely to have any effect.

If some miners decide to blacklist transactions involving certain coins, other miners will just pick those transactions up. If only a minority of miners are in on the blacklisting, this will cause a fork in the blockchain, and other miners will have to decide which fork to bet their resources on. If there is no consensus on what to blacklist, there could be so many forks blacklisting different subsets that each becomes irrelevant, leaving only the chain with no blacklisting as viable.
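
A toy Python simulation of that fork race, under the simplifying assumption that each fork grows in proportion to the hashpower mining it:

    import random

    def fork_race(hashpower_shares, blocks=1000, seed=1):
        # Each new block extends fork i with probability equal to the
        # share of hashpower mining that fork; minority forks fall
        # behind quickly and stay behind.
        rng = random.Random(seed)
        lengths = [0] * len(hashpower_shares)
        for _ in range(blocks):
            i = rng.choices(range(len(hashpower_shares)),
                            weights=hashpower_shares)[0]
            lengths[i] += 1
        return lengths

    # Three incompatible 20% blacklist forks vs. a 40% no-blacklist chain:
    print(fork_race([0.2, 0.2, 0.2, 0.4]))  # the last chain wins easily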

Even if you could manage to get a majority of miners to agree on exactly what should be blacklisted, attempting blacklisting is of questionable value to the miners. It would set a dangerous precedent for introducing blacklists, creating a new and even more unpredictable risk for anybody owning bitcoins.

Traders could decide to blacklist certain bitcoins, meaning they would refuse to accept them. But if you are selling goods for bitcoins, you'd have to announce in advance which coins you consider blacklisted; otherwise you'd have disputes where the buyer says they have paid, but the seller says the received bitcoins are no good. As a receiver of bitcoins, you'd also have to decide how diluted the blacklisted bitcoins would have to be before you'd accept them. All in all, there'd have to be consensus about both the set of blacklisted bitcoins and the dilution threshold. Otherwise nobody would know whether the bitcoins they are accepting are good, and without that knowledge blacklisting wouldn't have the intended effect; you'd just be rejecting arbitrary payments, and might as well flip a coin to decide whether to say no thanks to a given payment.
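
For illustration, one plausible way a receiver might compute that dilution, assuming taint is blended value-weighted across a transaction's outputs:

    def output_taint(input_values, input_taints):
        # Value-weighted blend: mixing tainted and clean inputs gives
        # every output the same blended taint fraction.
        total = sum(input_values)
        return sum(v * t for v, t in zip(input_values, input_taints)) / total

    # 1 BTC fully blacklisted mixed with 9 BTC clean -> 10% tainted outputs.
    fraction = output_taint([1.0, 9.0], [1.0, 0.0])
    THRESHOLD = 0.05  # the threshold everyone would somehow have to agree on
    print(f"taint {fraction:.0%}, accepted: {fraction <= THRESHOLD}")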

I think the only consensus that has a real chance of being reached is that bitcoins are not blacklisted.

Comment Re:Functionally correct, but insecure (Score 1) 199

Unless all the code running on the machine is absolutely type-safe and only allows "safe" reflection, trying to hide sensitive data from other bits of code in your address space is a lost cause. Code modification, emulation, tracing, breakpoint instructions, hardware debugger support, etc. are all viable ways for untrusted code with access to your address space to steal your data.

Wiping memory is only effective against hot or cold boot attacks on RAM, despite its frequent use as a hack on terrible operating systems to hope/pretend that userspace software isn't leaking data into other processes, either directly via attacks or accidentally through kernel mishandling of memory.

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

Keep in mind: if the miners had to communicate with the pools constantly and synchronously with their mining, it could slow down their mining and therefore put them at a competitive disadvantage.

True. I was assuming it was obvious that the communication had to be asynchronous. And I can't see any reason to communicate with other pools more often than once per block.

Once a node has started computing, it should be able to go on for quite a while without any communication. If the node doesn't hear anything, it should just keep doing whatever it was doing. The only thing that can render the continued computation completely pointless is a node somewhere (in the same pool or any other pool) successfully mining a block. If communication has been totally dead for an hour, it is probably a waste of energy to keep trying, since somebody else has likely mined the block already. But if you haven't heard anything for five minutes, just keep working on the block you were already mining.

This means the most important information to synchronize between nodes is the fact that somebody mined a block. This is independent of the pool, so it can be communicated between nodes even if they are in separate pools.

The other information a node needs is which transactions to include in the block. It's no big deal if that information lags a bit behind. You could update the list of transactions multiple times while trying to complete a block, and if it lags a couple of blocks behind, nothing breaks.
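
Putting those rules together, here is a sketch of the node's main loop; the three callables are hypothetical hooks for illustration, not any real mining API:

    import time

    def mining_loop(poll_network, make_template, try_nonces,
                    stale_after=3600.0):
        # Hypothetical hooks: poll_network(timeout) returns
        # ("block", _), ("txs", txs), or None if nothing was heard;
        # make_template() builds fresh work; try_nonces(t) grinds a batch.
        last_heard = time.monotonic()
        template = make_template()
        while True:
            news = poll_network(timeout=1.0)
            if news is not None:
                last_heard = time.monotonic()
                kind, payload = news
                if kind == "block":
                    # The one event that voids current work, whichever
                    # pool it came from: restart on top of the new tip.
                    template = make_template()
                elif kind == "txs":
                    # Transaction updates may lag a couple of blocks;
                    # that's harmless.
                    template["txs"] = payload
            elif time.monotonic() - last_heard > stale_after:
                return  # an hour of silence: someone else almost surely won
            try_nonces(template)  # otherwise, just keep grinding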

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

I believe 98% of miners are using standard mining tools which communicate with the selected pool only

So we are dealing with a (minor) weakness in the standard mining tools.

What I'd like to see happen is a pool cross-submission scheme where, instead of miners having just one pool configured, they have at least three. While they may only request work units from one pool, they would send a 'heads up' to all the secondary pools when a new block is detected...

Sounds like a reasonable solution.
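
A minimal sketch of that 'heads up' broadcast, with invented pool endpoints:

    import json
    from urllib import request

    # Hypothetical endpoints: work units come from one primary pool, but
    # new-block notifications go to every configured pool.
    SECONDARY_POOLS = ["https://pool-b.example/notify",
                       "https://pool-c.example/notify"]

    def heads_up(block_hash: str) -> None:
        # Tell the secondary pools a new block exists, so no single pool
        # can withhold that information from its miners.
        body = json.dumps({"new_block": block_hash}).encode()
        for url in SECONDARY_POOLS:
            req = request.Request(
                url, data=body,
                headers={"Content-Type": "application/json"})
            try:
                request.urlopen(req, timeout=2)
            except OSError:
                pass  # a dead backup pool must not stall mining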

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

A miner connected to the bitcoin network AND the pool, could in theory foil the attack.

If you are mining without communicating with the rest of the bitcoin network, you are putting somebody else in charge of that communication, which means you are giving somebody the power to cheat. Any miner not intending to cheat should consider that a vulnerability in the mining software.

In other words, any miner not intending to cheat has an interest in running mining software that communicates with the rest of the bitcoin network, even if the rest of the mining pool doesn't.

Comment Re:Ghash.IO is not consistently over 51%, yet anyw (Score 1) 281

Take steps to prevent accumulating 51% hashing power, including: not accepting new miners

Why is this even necessary? I was under the impression that a mining pool could not pull off an attack without it being immediately visible to the miners in the pool. Doesn't that mean that having a pool with a majority of the processing power isn't enough to pull off an attack; you'd also need all the miners in the pool to conspire?
