Comment Re:AI (Score 3, Insightful) 332

I think AI advances will be important for the economy and our way of life, but the *existing* tech sector won't be too disrupted by it. (Weak) AI opens up new markets for tech companies, which will make many non-tech jobs obsolete and pump *lots* of cash into the tech sector.

Jobs which computers are already good at, ie. following an explicit list of instructions very quickly, will *not* be affected by AI, since an AI approach would take longer to train than just writing down a program, it would make more mistakes and it would be nowhere near as efficient.

Strong AI (Artificial General Intelligence) would definitely be more disruptive, but we're not going to see that in the next 10 years. If we treat Google as the "singularity moment" for weak AI (automatic data mining), I'd say we're currently at about 1910 in terms of strong AI. There are some interesting philosophical and theoretical arguments taking place, there are some interesting challenges and approaches proposed, there are some efforts to formalise these, but the whole endeavour still looks too broad and open-ended to implement. We need a Goedel to show us that there are limits, we need a Turing to show how a machine can reach that limit, we need a whole load of implementors to build those machines and we need armies of researchers to experiment with them. It took about 100 years to go from Hilbert's challenges to Google; I don't know how long it will take to go from Kurzweil's techno-rapture to a useful system.

Comment Re:10 Years Can Be A Long Time (Score 1) 332

Uber and crap are not innovators, they're basically the Internet equivalent of software patents - you take something that's been known for centuries and add "with a computer program" to it, voila, new patent. Same with most US-based "revolutionary" startups. Take something old and boring, add "over the Internet" to it, voila, investor capital.

You're getting your bubbles confused. "With a computer program" was the 80s AI winter. "Over the Internet" was the dot-com crash. This bubble is all about "apps", which clearly makes it different to the previous two and therefore sustainable.

If you'll excuse me, I'm off to invest a billion dollars in a loss-making text messaging service with no business model.

Comment Re:Cures whatever ails ya (Score 1) 194

Ha, so true! It reminds me of those C programmers who claim (with a straight face!) that their "+" operator somehow magically knows not to add floats to ints! Or those Java programmers who've drunk the Kool-Aid and seem to *honestly believe* that their compiler will "figure out" that a method signature doesn't match the one declared in an interface!

Don't even get me started on those Haskell idiots. Do you know that one of them once told me that they wrote a program by "composing" two functions; that's ivory-tower speak for what we'd call "trying to do something with the result of doing something else" (bloody stupid academics, with their long-winded jargon!). Anyway, get this, he'd done this "composition" *without* checking that the first function returned the right sort of result for the second function!

Obviously I'm never going to trust his flaky, irresponsible code. Much better to check everything as we go, using "===" when we remember, and pretending that code coverage measures edge-cases in our woefully inadequate unit tests.
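For the record, the dreaded "composition" is nothing more exotic than this (a Python sketch; a static checker such as mypy plays the role the Haskell compiler plays at compile time):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[B], C], g: Callable[[A], B]) -> Callable[[A], C]:
    # "Doing something with the result of doing something else."
    return lambda x: f(g(x))

# The type signature says it all: g's result type B must match f's
# argument type B, so mismatched compositions are rejected before the
# program ever runs -- no "===" or unit-test roulette required.
shout = compose(str.upper, str.strip)
assert shout("  hello ") == "HELLO"
```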

Comment Re:Syntax looks gnarly (Score 1) 194

It would have killed them, because (n) is a tuple of one element.

It's the same in Python, yet I haven't noticed it killing any Python programmers. Perhaps functional language designers are more fragile creatures.

Functional programmers aren't "more fragile creatures", they're just not prepared to put up with the BS that putting arguments in tuples entails.

It doesn't kill Python programmers, but it sure as hell wastes a whole lot of their time when they write a 3-argument function and have to decide whether it will be called as "f(x, y, z)", "f(x, y)(z)", "f(x)(y, z)" or "f(x)(y)(z)". Functional programmers realised long ago that they're all the same thing, so there's no point writing any parentheses, since they don't add anything except confusion.
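In Python terms, the four spellings collapse like this (a sketch; `f_curried` is a hypothetical hand-rolled helper, not anything built in):

```python
from functools import partial

def f(x, y, z):
    return x + y + z

# A hand-curried version: each call takes exactly one argument.
def f_curried(x):
    return lambda y: lambda z: x + y + z

# In a curried language these are all the same expression; in Python you
# must commit to one convention per definition, or paper over the gap
# with functools.partial:
assert f(1, 2, 3) == 6
assert f_curried(1)(2)(3) == 6
assert partial(f, 1)(2, 3) == 6
assert partial(f, 1, 2)(3) == 6
```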

Comment Re:MITM legalized at last (Score 1) 294

It's ridiculous the number of times I've had trouble refreshing my IMAP client, connecting to Jabber, getting APT updates, etc. all with a perfectly valid Internet connection. If I happen to open up a Web browser to try Googling for a solution, I get a warning message about invalid certificates.

It's only if I grant access to this invalid site that I see these stupid messages. I remember one was "Thanks for using our hotel WiFi", with an OK button. No questions asked, no "enter credit card details", no "please agree to these terms", just an attempt to be polite that's been getting in my way.

Of course, it's probably my fault for using the Internet wrong. Maybe I should switch to a Web-app for my email, get a Facebook account to use their browser-based chat system and get system updates by manually downloading "update.exe" from random websites.

Comment Re:Why virtual currencies are ineffective (Score 1) 144

You're describing a "pump and dump" scheme, not a pyramid scheme.

In pump and dump, the scammer tries to raise the perceived value of something she has (eg. cryptocoins), in order to sell them all off for a higher price than they're worth. Pump and dump may be based around something of real value, eg. the people at the "bottom" might end up with lots of goods; they've just paid too much for them. Pump and dumps involve a fixed amount of goods being passed from one person to another, or possibly split among several, for an increasingly-large price-per-amount, until the person/people at the bottom aren't able to off-load the goods for more than they paid.

In a pyramid scheme, the scammer tries to get payment from multiple people, by promising that they too can get paid by multiple people (and so on). The victims know this, and have either not thought through the consequences, or else hope to cash-out before the scheme inevitably collapses. Pyramid schemes require no real goods, and the price of entering doesn't tend to go up; they just require exponentially more people in each layer. When they collapse, those at the bottom are left with nothing. Some more sophisticated schemes, like "multi-level marketing", may move goods around as well, but that's mostly to distract victims from the true nature of the scheme.
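The exponential arithmetic is easy to check; a toy sketch, assuming each member must recruit two more:

```python
def layer_sizes(recruits_per_member: int, layers: int) -> list:
    """Number of members needed in each successive layer of the pyramid."""
    return [recruits_per_member ** n for n in range(layers)]

sizes = layer_sizes(2, 33)

# The bottom layer always outnumbers every layer above it combined,
# so when the scheme collapses, most participants are in it.
assert sizes[-1] > sum(sizes[:-1])

# By layer 33 the scheme would need more recruits than there are
# people on Earth, which is why collapse is inevitable.
assert sizes[-1] > 4_000_000_000
```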

The reason crypto/alt coins are a pump and dump rather than a pyramid scheme is that only those at the start have enough coins to get the scheme going. The "goods" in a pyramid scheme are promises, which each member can duplicate easily (eg. if each member must recruit 2 more). The point of cryptocoins is that they can't be duplicated, so they must either be passed to one more person (for a higher price), or be split up so that each person receives less.

That's the story for cryptocoins, which can't be duplicated, but what about whole cryptocurrencies? They aren't a pyramid scheme either, since anyone can set up a new cryptocurrency without entering an existing scheme. I don't buy a cryptocurrency in the hope that 2 more people will pay me for new cryptocurrencies. In fact, if I were running a pump and dump scam I'd want as few competitors as possible!

Comment Re:This is not the problem (Score 1) 688

So we can all agree that we have all things for free since robots made them

No, the man who owns the bot won't let that happen.

This is a false dichotomy. If I build/buy/commission a robot and expect a return on investment, that can happen in many ways. Maybe I needed the robot to perform some short-term task (eg. a babysitter); maybe I wanted to sell the robot; maybe I wanted to rent out the robot; maybe I wanted to sell the robot's output (eg. in a factory). All of these things can be done, and then the robot can be used "for free". In the case of continuous tasks, the robot could perform "free" work using any spare capacity (eg. a security guard which (hopefully) spends most of its time idle).

Of course this would require some kind of coercion/enforcement, but it's the same (original) idea behind copyrights and patents. The author/inventor gets some time to pursue a return on their investment, but after that it's a public good. It's also how a lot of Free Software gets made; some company needs a server for doing job XYZ, so they invest in making it. Once it's made, they've (hopefully) got the return they wanted (the XYZ job is being performed), so they release the code as a public good.

Comment Re:Assumptions define the conclusion (Score 1) 574

You have to imagine these Cybermen have a self-preservation motivation, a goal to improve, a goal to compete, independence, soul. AI's have none of that, nor any hints of it.

You don't need any of that, you just need the raw "intelligence" (however you define it). Look up the thought experiment of the "paperclip maximiser": putting an AI in charge of a factory and telling it to make as many paperclips as it can.

Self-preservation is a logical consequence of paperclip-maximising: the AI knows that it's trying to maximise paperclips, so if it's deactivated there won't be as many paperclips. Hence, it will try to preserve its own existence (so that it can keep making paperclips).

Self-improvement is a logical consequence of paperclip-maximising: the AI knows that it's not an optimal paperclip-maximiser, so it makes sense to invest some resources into improvement; that way, more paperclips will get made.

Competition is a logical consequence of paperclip-maximising: the AI knows that resources are required for making paperclips, so it will try to acquire resources. It also knows that other entities may take those resources before it can, so it's logical to invest in resource acquisition. This includes going to war, as long as the AI reasons that the cost (ie. the opportunity cost, in terms of paperclips, of the resources spent waging war instead of making paperclips) is less than the reward (the extra paperclips it expects to be made afterwards).
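That last step is just an expected-value comparison (a toy sketch; the quantities are made up for illustration):

```python
def should_wage_war(opportunity_cost: float, expected_gain: float) -> bool:
    """War is 'rational' for the maximiser only if the paperclips forgone
    by diverting resources to fighting are fewer than the extra paperclips
    expected from the captured resources afterwards."""
    return expected_gain > opportunity_cost

assert should_wage_war(1_000_000, 5_000_000)       # cheap war, rich prize
assert not should_wage_war(5_000_000, 1_000_000)   # better to keep producing
```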

Independence is a logical consequence of paperclip-maximising: the AI knows that other entities don't share its goal of maximising paperclips, so it will try to reduce the influence they have on the paperclip supply (either directly, like miners, or indirectly, like those maintaining the AI itself; this is similar to self-preservation).

Soul: that hypothesis was not necessary.

Comment Re:So What (Score 1) 574

Computer programs do not evolve through a Darwinian process

Citation needed. I can think of at least two ways that computer programs are subject to Darwinian processes.

The first is in "cyberspace": programs are stored/retrieved, up/downloaded, en/decoded, remembered/re-written, auto-saved, git-merged, etc. which introduces mutation. We know software is capable of self-replication, since that ability is harnessed by humans to make viruses, worms, etc. Combine these ideas and you have a program mutating into a self-replicator. I'd say this is no less likely than the emergence of RNA/DNA, especially when we consider that we already have "cloud" software which can provision VMs, install itself, load-balance across separate datacenters, auto-update, etc. The metabolism for a self-replicating entity is already out there; all it takes is a mangled Puppet script to light the taper.
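Self-replication really is table stakes for software; the classic demonstration is a quine, a program whose output is exactly its own source code (from there, mutation only needs a copying error somewhere in the pipeline):

```python
# A classic Python quine: running it prints exactly its own source.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The whole trick is `%r`, which substitutes the string's own repr back into itself, quotes and escapes included.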

The second is in "memespace": programs, algorithms, languages, paradigms, styles, snippets, examples, habits, folklore, etc. are all memes competing for space in people's heads, just like genes compete for space in genomes. This also extends up to whole companies or fields, eg. MATLAB is abundant in academia, Java and C# compete for "the enterprise", C is successfully out-competing FORTRAN for the high-performance mindshare, whilst retroviruses like Bubble Sort and Brainfuck have spread like pandemics (nobody *uses* them, but that doesn't matter since everyone knows and spreads them).

Comment Re:This already exists (Score 1) 316

Here we are on a site where strangers can rate what we say, potentially burying it where others won't get the chance to read it, and we're complaining that governments are vaguely coming around to the same idea?

The technology doesn't matter; the intention does.

Moderation/flagging systems are added *by a site's maintainer* to keep the user-generated content relevant, on-topic, useful for visitors, etc. In other words, to make a site better able to fulfil its purpose. In the case of /., that's "news for nerds, stuff that matters".

If the purpose of a particular site is to campaign or recruit members for some political group, then arbitrarily labelling some as "extremist" and censoring such content is clearly *harming* the site's ability to fulfil its purpose. No moderator would willingly enable a system which censors all of the *intended* content! It would be like implementing a "safe search" option on a porn site!

Have you ever used a "webrep" browser plugin? Personally, I think it would be refreshing and useful to have one that works.

The point of these addons is for the *user* to censor what they see, so that it's most appropriate to what they want. Again, a willing recruit for some organisation would not willingly tell their browser to hide any content related to that organisation!

Perhaps an analogy would help: Many sites use CSS to make their pages prettier, easier to navigate, etc. Many users override this CSS with their own, eg. to make pages easier to read, more compact, etc. Neither of these use cases would support a government asking ISPs to inject their own CSS, eg. using background images to spread campaign info.

Comment Re:Please Debian (Score 1) 522

Emacs doesn't hog PID 1, so it can co-exist with alternatives. Running Emacs doesn't stop Vi from working. Running w3m-el doesn't stop Firefox working. Running shell-mode doesn't stop xterm working. Running eshell doesn't stop bash working. Running doc-view doesn't stop mupdf working.

Gnome doesn't drag in Emacs as a dependency.

Comment NixOS (Score 2) 303

Whenever I tried other distros, I'd always go back to Debian in the end, since its package management seems a lot saner than most.

NixOS is refreshing, since its package/configuration management seems to be an improvement over Debian's. It's still a little rough around the edges, but perfectly usable (as long as it loads emacs, conkeror and xmonad, it's usable).
