
Comment Re:LSM (Score 4, Insightful) 248

Well this is the best thing I've seen! Why haven't these been pushed out into the commercial area?

For the same reason that maglev trains and HyperLoop-style vacuum tubes aren't ubiquitous: sending a dumb carriage along a smart track is far more expensive than sending a smart carriage along a dumb track, since there's much more track than there is carriage.

Narrowboats used to be the best way of moving materials around inland, but "laying the track" (digging the canals, building the locks, etc.) took a lot of work.

Dumb boats were overshadowed by smarter locomotives: more difficult and expensive to build, but ran on much cheaper tracks.

Locomotives were overshadowed by smarter automobiles: more difficult to invent, and requiring a smarter fuel network, but in some cases not needing *any* track laying.

The same argument applies to lifts: it's much cheaper to have a smart motor at the top and/or a smart carriage, with 660m of dumb shaft and cable, than to have 660m of smart shaft.
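
To make the scaling concrete, here's a back-of-the-envelope sketch in Python; every unit cost below is invented for illustration, only the shape of the comparison matters:

    # All figures invented; the point is which term scales with track length.
    track_length_m = 660          # e.g. a tall lift shaft
    dumb_cost_per_m = 100         # plain shaft, rail and cable
    smart_cost_per_m = 2_000      # track with motors/electronics along its length
    smart_carriage_cost = 50_000  # one clever carriage or motor unit

    smart_track_total = smart_cost_per_m * track_length_m
    smart_carriage_total = dumb_cost_per_m * track_length_m + smart_carriage_cost
    print(smart_track_total, smart_carriage_total)  # 1320000 vs 116000

With these invented figures the smart-carriage design is an order of magnitude cheaper, and the gap only widens as the track gets longer.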

Comment Re:I no longer think this is an issue (Score 1) 258

The reason is, AI will have no 'motivation'... Logic does not motivate... Without a sense of self preservation it won't 'feel' a need to defend itself.

This is a common misconception, which has several counter-arguments to do with resource usage.

Firstly, the idea that human responses "aren't logical" is naive. Humans aren't optimised for calculating exact answers, we're optimised for calculating good enough answers given our limited resources. Effects like emotions, which appear illogical at the "object level" (the effect they have on a particular problem's solution), are perfectly logical at the meta-level (the effect they have on how we solve problems, and what problems we attempt to solve). There are also other meta-levels, all acting concurrently; for example, the solution to our problem might have political consequences (I may choose to do a poor job of washing the dishes, so that I'm less likely to be asked in the future). There may be signalling involved (by taking a wasteful approach, I'm communicating my wealth/position-of-power to others). There are probably all kinds of considerations we've not even thought of yet.

In effect, ideas like "computers don't have emotions" can be rephrased as "we're no good at programming meta-reasoning, multiple-goal, resource-constrained optimisers yet".
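
As a toy illustration of the "good enough answers under limited resources" point, here's a Python sketch of an anytime-style estimator: it never gives an exact answer, but spending more resources buys a better one, which is perfectly logical at the meta-level:

    import random

    def estimate_pi(budget: int) -> float:
        """Monte Carlo estimate of pi: accuracy is bought with compute."""
        hits = sum(1 for _ in range(budget)
                   if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4.0 * hits / budget

    print(estimate_pi(100))        # cheap and rough
    print(estimate_pi(1_000_000))  # costly and close to 3.14159...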

No existing, practically-runnable AI systems have an adequate model of themselves and their effect on the world. If we *do* manage to construct one, what would it do? We can look at the thought experiment of "Clippy the paperclip maximiser": Clippy is an AI put in charge of a paperclip factory and given the goal "make as many paperclips as you can". Clippy has a reasonable model of itself, its place in the world and the effects of its actions on the world (including on itself).

Since Clippy has a model of itself and the world, it must know these three facts: 1) very few paperclips form "naturally", without a creator making them on purpose; 2) Clippy's goal is to make as many paperclips as it can; 3) Clippy is a powerful AI with many resources at its disposal. From these, it's straightforward to infer the following: keeping Clippy turned on and fed with resources is a very good way of making lots of paperclips.

From this, it is clear that Clippy would try to stop people turning it off, since Clippy's goal is to make as many paperclips as possible and turning Clippy off would have a devastating effect on the number of paperclips that get made. What Clippy does in this respect depends on how likely it thinks we are to attempt to turn it off, how much effort it thinks will be required to stop us, and how it balances that effort against the effort spent on making paperclips. If Clippy is naive, it may dismiss us as a benign, non-threatening neighbour, under-invest in its defences, and we could overpower it. On the other hand, Clippy may see our very existence as an unacceptable existential risk, and wipe us out just-in-case.
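
That balancing act is easy to sketch. Here's a toy expected-value model in Python (all the numbers and the survival curve are invented) showing that spending *something* on self-preservation falls straight out of pure paperclip-maximisation:

    # Toy model: Clippy splits a fixed budget between making paperclips
    # and defending against being switched off. Every figure is invented.
    BUDGET = 100.0
    CLIPS_PER_UNIT = 1_000.0  # paperclips per unit of effort per year
    YEARS = 10.0

    def p_survive(defence: float) -> float:
        """Chance of staying switched on; rises with defence spending."""
        return 1.0 - 0.5 / (1.0 + defence)

    def expected_clips(defence: float) -> float:
        """Expected lifetime paperclip output for a given defence budget."""
        production = (BUDGET - defence) * CLIPS_PER_UNIT * YEARS
        return p_survive(defence) * production

    best = max(range(101), key=lambda d: expected_clips(float(d)))
    print(best)  # 6: the purely clip-maximising optimum spends on defence

Even in this crude model the optimum is not zero defence; a smarter Clippy with a better world-model just does the same calculation with far more terms.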

Regardless of the outcome, self-preservation is a logical consequence of having a goal and the ability to reason about our own existence.

Comment Re:islam (Score 1) 1350

Explain the Crusades, if Christians are so brotherly.

Sorry, I wasn't aware that Charlie Hebdo was a mouthpiece of the Christian fundamentalists. Oh wait, it's not. From Wikipedia:

Irreverent and stridently non-conformist in tone, the publication is strongly antireligious and left-wing, publishing articles on the extreme right, Catholicism, Islam, Judaism, politics, culture, etc.

So what exactly does your loaded question have to do with anything?

Comment Re:AI (Score 3, Insightful) 332

I think AI advances will be important for the economy and our way of life, but the *existing* tech sector won't be too disrupted by it. (Weak) AI opens up new markets for tech companies, which will make many non-tech jobs obsolete and pump *lots* of cash into the tech sector.

Jobs which computers are already good at, ie. following an explicit list of instructions very quickly, will *not* be affected by AI: an AI approach would take longer to train than just writing down a program, would make more mistakes, and would be nowhere near as efficient.

Strong AI (Artificial General Intelligence) would definitely be more disruptive, but we're not going to see that in the next 10 years. If we treat Google as the "singularity moment" for weak AI (automatic data mining), I'd say we're currently at about 1910 in terms of strong AI. There are some interesting philosophical and theoretical arguments taking place, there are some interesting challenges and approaches proposed, there are some efforts to formalise these, but the whole endeavour still looks too broad and open-ended to implement. We need a Goedel to show us that there are limits, we need a Turing to show how a machine can reach that limit, we need a whole load of implementors to build those machines and we need armies of researchers to experiment with them. It took about 100 years to go from Hilbert's challenges to Google; I don't know how long it will take to go from Kurzweil's techno-rapture to a useful system.

Comment Re:10 Years Can Be A Long Time (Score 1) 332

Uber and crap are not innovators, they're basically the Internet equivalent of software patents - you take something that's been known for centuries and add "with a computer program" to it, voila, new patent. Same with most US-based "revolutionary" startups. Take something old and boring, add "over the Internet" to it, voila, investor capital.

You're getting your bubbles confused. "With a computer program" was the 80s AI winter. "Over the Internet" was the dot-com crash. This bubble is all about "apps", which clearly makes it different to the previous two and therefore sustainable.

If you'll excuse me, I'm off to invest a billion dollars in a loss-making text messaging service with no business model.

Comment Re:Cures whatever ails ya (Score 1) 194

Ha, so true! It reminds me of those C programmers who claim (with a straight face!) that their "+" operator somehow magically knows not to add floats to ints! Or those Java programmers who seem to have drunk the Kool-Aid and *honestly believe* that their compiler will "figure out" that a method signature doesn't match the one declared in an interface!

Don't even get me started on those Haskell idiots. Do you know that one of them once told me that they wrote a program by "composing" two functions; that's ivory-tower speak for what we'd call "trying to do something with the result of doing something else" (bloody stupid academics, with their long-winded jargon!). Anyway, get this, he'd done this "composition" *without* checking that the first function returned the right sort of result for the second function!

Obviously I'm never going to trust his flaky, irresponsible code. Much better to check everything as we go, using "===" when we remember, and pretending that code coverage measures edge-cases in our woefully inadequate unit tests.
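
For anyone who missed the sarcasm: the point is that a static checker verifies all of this before the program ever runs. You can opt in to the same guarantee even in Python, with annotations and an external checker such as mypy (a minimal sketch; the function names are made up):

    from typing import Callable

    def parse_age(s: str) -> int:
        """Turn a user-supplied string into a number."""
        return int(s)

    def is_adult(age: int) -> bool:
        """Compare that number against a threshold."""
        return age >= 18

    def compose(f: Callable[[int], bool], g: Callable[[str], int]) -> Callable[[str], bool]:
        """f after g; accepted only because g's output type matches f's input."""
        return lambda s: f(g(s))

    check = compose(is_adult, parse_age)  # fine: str -> int -> bool lines up
    # compose(parse_age, is_adult)        # a checker such as mypy rejects this
    print(check("42"))                    # True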

Comment Re:Syntax looks gnarly (Score 1) 194

It would have killed them, because (n) is a tuple of one element.

It's the same in Python, yet I haven't noticed it killing any Python programmers. Perhaps functional language designers are more fragile creatures.

Functional programmers aren't "more fragile creatures", they're just not prepared to put up with the BS that putting arguments in tuples entails.

It doesn't kill Python programmers, but it sure as hell wastes a whole lot of their time when they write a 3-argument function and have to decide whether it will be called as "f(x, y, z)", "f(x, y)(z)", "f(x)(y, z)" or "f(x)(y)(z)". Functional programmers realised long ago that they're all the same thing, so there's no point writing any parentheses, since they don't add anything except confusion.
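
In case the distinction is unfamiliar, here it is hand-rolled in Python; curried functional languages do the second version implicitly for every function, with no boilerplate and no parentheses:

    def f(x, y, z):
        """Ordinary 3-argument function: only f(x, y, z) works."""
        return x + y + z

    def f_curried(x):
        """Hand-curried version: one argument per call."""
        return lambda y: lambda z: x + y + z

    print(f(1, 2, 3))           # 6
    print(f_curried(1)(2)(3))   # 6, the same thing
    add_to_3 = f_curried(1)(2)  # partial application falls out for free
    print(add_to_3(3))          # 6 again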

Comment Re:MITM legalized at last (Score 1) 294

It's ridiculous the number of times I've had trouble refreshing my IMAP client, connecting to Jabber, getting APT updates, etc. all with a perfectly valid Internet connection. If I happen to open up a Web browser to try Googling for a solution, I get a warning message about invalid certificates.

It's only once I tell the browser to ignore the invalid certificate that I get to see these stupid messages. I remember one was "Thanks for using our hotel WiFi", with an OK button. No questions asked, no "enter credit card details", no "please agree to these terms", just an attempt to be polite that's been getting in my way.

Of course, it's probably my fault for using the Internet wrong. Maybe I should switch to a Web-app for my email, get a Facebook account to use their browser-based chat system and get system updates by manually downloading "update.exe" from random websites.

Comment Re:Why virtual currencies are ineffective (Score 1) 144

You're describing a "pump and dump" scheme, not a pyramid scheme.

In pump and dump, the scammer tries to raise the perceived value of something she has (eg. cryptocoins), in order to sell them all off for a higher price than they're worth. Pump and dump may be based around something of real value, eg. the people at the "bottom" might end up with lots of goods; they've just paid too much for them. Pump and dumps involve a fixed amount of goods being passed from one person to another, or possibly split among several, for an increasingly large price-per-amount, until the person/people at the bottom aren't able to off-load the goods for more than they paid.

In a pyramid scheme, the scammer tries to get payment from multiple people, by promising that they too can get paid by multiple people (and so on). The victims know this, and have either not thought through the consequences, or else hope to cash-out before the scheme inevitably collapses. Pyramid schemes require no real goods, and the price of entering doesn't tend to go up; they just require exponentially more people in each layer. When they collapse, those at the bottom are left with nothing. Some more sophisticated schemes, like "multi-level marketing", may move goods around as well, but that's mostly to distract victims from the true nature of the scheme.
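
The exponential-layers point is easy to make concrete; a quick Python sketch, assuming each member must recruit 2 more:

    # How quickly a recruit-2-more pyramid runs out of humans.
    world_population = 8_000_000_000  # rough figure
    layer, layer_size, total = 0, 1, 1
    while total < world_population:
        layer += 1
        layer_size *= 2
        total += layer_size
    print(layer)  # 32: the whole planet is inside the scheme within ~32 layers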

The reason crypto/alt coins are a pump and dump rather than a pyramid scheme is that only those at the start have enough coins to get the scheme going. The "goods" in a pyramid scheme are promises, which each member can duplicate easily (eg. if each member must recruit 2 more). The point of cryptocoins is that they can't be duplicated, so they must either be passed to one more person (for a higher price), or be split up so that each person receives less.

That's the story for cryptocoins, which can't be duplicated, but what about whole cryptocurrencies? They aren't a pyramid scheme either, since anyone can set up a new cryptocurrency without entering an existing scheme. I don't buy a cryptocurrency in the hope that 2 more people will pay me for new cryptocurrencies. In fact, if I were running a pump and dump scam I'd want as few competitors as possible!

Comment Re:This is not the problem (Score 1) 688

So we can all agree that we have all things for free since robots made them

No, the man who owns the bot wont let that happen.

This is a false dichotomy. If I build/buy/commission a robot and expect a return on investment, that can happen in many ways. Maybe I needed the robot to perform some short-term task (eg. a babysitter); maybe I wanted to sell the robot; maybe I wanted to rent out the robot; maybe I wanted to sell the robot's output (eg. in a factory). All of these things can be done, and then the robot can be used "for free". In the case of continuous tasks, the robot could perform "free" work using any spare capacity (eg. a security guard which (hopefully) spends most of its time idle).

Of course this would require some kind of coercion/enforcement, but it's the same (original) idea behind copyrights and patents. The author/inventor gets some time to pursue a return on their investment, but after that it's a public good. It's also how a lot of Free Software gets made; some company needs a server for doing job XYZ, so they invest in making it. Once it's made, they've (hopefully) got the return they wanted (the XYZ job is being performed), so they release the code as a public good.
