Catch up on stories from the past week (and beyond) at the Slashdot story archive

 




Comment Re:Consciousness (Score 1) 284

They will probably always remain inherently beyond the reach of scientific evidence.

Yes, that's because they are 100% made up, just as Santa Claus is inherently beyond the reach of scientific evidence.

You will never find evidence (that is, anything manifesting as objective reality) for a wholly illusory concept. You can, of course, drown yourself in delusion. We appear to be well designed for exactly that exercise; we even practice it most nights during REM sleep. And it's perfectly acceptable, socially speaking. Imagine away.

Comment Re:19,000 (Score 2) 401

Sounds nice, but without explicit control over what American companies are allowed to do at/across the border, (and foreign companies the other way) it's not going to happen. Right now, the door is wide open in every way: Hire offshore and have the hires work here *or* there, keep your money offshore and avoid taxes with blissful ease, manufacture elsewhere, all the while you're paying off congress and whatever agencies are involved.

Sure, corporations are people. Sociopaths. Psychopaths. Those kinds of people. Evil slimeballs, primarily. Exceptions are very rare, and will remain so, as long as being competitive means one company has to take advantage of the same things the next one over does.

That's the way it works now. And unless you can forward larger envelopes to congress (directly, indirectly, or metaphorically) than big business can, or somehow make congress actually ethical and focused on the betterment of the country, this is only going to become more so. Money for election chests. Sweet land deals for cousin George. Fully paid fact finding junkets. Post-congress speaking deals. Guaranteed book advances, complete with ghostwriter, sales irrelevant. Well paid lobbyist positions, commentator positions, corporate vice presidential or other high paid positions... or money... or a sweet deal on a boat, or a house, or whatever, all for 2nd cousins of course. It's so corrupt and ingrained you can't possibly picture it until you've had an inside view (yes, I have.)

Today, you want a great job? Start your own business. Are you really great at programming? Write a great program. Are you really great at electronics? Create a great device. The internet of things is rising; your opportunity is knocking. Real AI still needs doing (oooo, hard). Are you really great at mechanicals? Create a wonderful mechanical thing. These are the *only* doors that remain open to technical people in general. Easy? Hell no. But there it is. Otherwise, change career tracks while you still can. Finance. Law (it's the dark side, all right, but it's also a license to print money, especially with the extremely deep collection of bad law we have now). Nursing -- medicine looks good right now, but doctors... not so sure.

Otherwise, prepare to be lowballed, then fall (or be thrown) from the workforce segment you're qualified for as you age. Also, just as a PS, you'll note that above, the apologists consistently talk about who is learning what in school. The underlying message is clear: Once you're out of school and if you manage to improve yourself, you're not particularly hirable. They're not even looking at/for you.

That's just the way it is. Be proactive and possibly suffer, or just definitely suffer. Choose.

Comment Re: AI is always "right around the corner". (Score 1) 564

The machine has no fucking clue about what it is translating. Not the media, not the content, not even which languages it is translating to and from (other than a variable somewhere, which is not "knowing"). None whatsoever. Until it does, it has nothing to do with AI in the sense of TAFA (the alarmist fucking article).

How would you determine this, quantitatively? Is there a series of questions you could ask a machine translator about the text that would distinguish it from a human translator? Asking questions like "How did this make you feel?" is getting into the Turing Test's territory. Asking questions like "Why did Alice feel X" or "Why did you choose this word over another word in this sentence?" is something that machines are getting better at answering all the time.

To head off the argument that machine translation is just using a large existing corpus of human-generated text, my response is that this is pretty much what humans do: interact with a lot of other humans and their texts to understand the meaning. Clearly humans have the tremendous advantage of actually experiencing some of what is written about to ground their understanding of the language, but as machine translation shows, that is not a necessity for demonstrating an understanding of language.
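A five-minute toy version of that corpus argument (the sentence pairs, the Dice-coefficient scoring, and the `translate` helper are all invented for this sketch; no real MT system works this crudely): word "meanings" fall out of nothing but co-occurrence statistics in parallel text.

```python
from collections import Counter

# Toy parallel corpus of (English, French) sentence pairs.
corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats", "le chat mange"),
]

cooc, en_count, fr_count = Counter(), Counter(), Counter()
for en, fr in corpus:
    en_words, fr_words = en.split(), fr.split()
    en_count.update(en_words)
    fr_count.update(fr_words)
    for e in en_words:
        for f in fr_words:
            cooc[(e, f)] += 1

def translate(word):
    """Return the French word with the highest Dice score against `word`:
    words that appear together and rarely apart score highest."""
    scores = {f: 2 * c / (en_count[word] + fr_count[f])
              for (e, f), c in cooc.items() if e == word}
    return max(scores, key=scores.get)

print(translate("cat"))  # chat
print(translate("dog"))  # chien
```

Nothing in there "experiences" a cat, yet the pairing it extracts is the right one, which is the point being argued.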

For the argument that meaning must be grounded in conscious experience for it to be considered "intelligence" I would argue that machine learning *has* experience spread across many different research institutions and over time. Artificial selection has produced those agents and models which work well for human language translation, and this experience is real, physical experience of algorithms in the world. Not all algorithms and models survived, the survivors were shaped by this experience even though it was not tied to one body, machine, location, or time. Whether machine translation agents are consciously aware of this experience, I couldn't say. They almost certainly have no direct memory of it, but evidence of the experience exists. Once a system gets to the point that it can provide a definite answer to the question "What have machine translation agents experienced?" and integrate everything it knows about itself and the research done to create it, then we'll have an answer.
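The artificial-selection point can be sketched in a few lines. Everything here is a toy assumption (a "model" is one number, the fitness function and mutation scale are made up), but the shape is the same: keep the variants that work, mutate them, discard the rest.

```python
import random

random.seed(0)  # repeatability for this sketch
TARGET = 0.5    # the parameter value the (toy) task rewards

def fitness(model):
    # A model here is just one number; closer to TARGET is "fitter".
    return -abs(model - TARGET)

def evolve(population, generations=30):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        # Survivors are kept as-is and also spawn mutated copies, so the
        # best model found so far is never lost.
        population = survivors + [m + random.gauss(0, 0.1) for m in survivors]
    return max(population, key=fitness)

best = evolve([random.random() for _ in range(20)])
```

The survivor carries no memory of the discarded variants, yet its parameters are entirely shaped by that history of selection, which is the sense of "experience" being argued for above.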

Comment Re:AI is always (Score 1) 564

Everything humans do is simply a matter of following a natural-selection-generated set of instructions, bootstrapping from the physical machinery of a single cell. Neurological processes work together in the brain to produce intelligence in humans, at least as far as we can tell. Removing parts of the human brain (via disease, injury, surgery, etc.) can reduce different aspects of intelligence, so it's not unreasonable to think that humans are also a pile of algorithms united in a special way that leads to general intelligence, and that AI efforts are only lacking some of the pieces and a way of uniting them. As researchers put together more and more of the individual pieces (speech and object recognition, navigation, information gathering and association, etc.), the results probably won't look like artificial general intelligence until all the necessary pieces exist and it's only the integration that remains to be done. For example, there's another article today about the claustrum in a woman appearing to act as an effective on-off switch for her consciousness, strengthening the evidence that consciousness is an integration of various neural subsystems mediated by other regions.

It's important to consider that AGI may act nothing like human or animal intelligence, either. It may not be interested in communication, exploration, or anything else that humans are interested in. Its drives or goals will be the result of its algorithms, and we shouldn't discount the possibility of very inhuman intelligence that nonetheless has a lot of power to change the world. Expecting androids or anthropomorphic robots to emerge from the first AGI is wishful thinking. The simplest AGI would probably be most similar to bacteria or other organisms we find annoying; it would understand the world well enough to improve itself with advanced technology but wouldn't consider the physical world to consist of anything but resources for its own growth. It may even lack sentient consciousness.

Producing human-equivalent AGI is a step or two beyond functional AGI. Implementing all of nature's tricks for getting humans to do the things we do in silicon will not be a trivial task. Look at The Moral Landscape or similar for ideas about how one might go about reverse engineering what makes humans "human" so that the rules could be encoded in AGI.

Comment Re:What we don't know... (Score 1) 564

IIRC, there was an article today about the origin of consciousness having been located. It's probably an overstatement, as I expect there are many essential pieces, but it's not an utter mystery any more. It's a problem that's being worked on, and there are at least some partial answers.

Comment Re:AI is always (Score 1) 564

The thing is, the Google Car driver isn't a general intelligence. It's quite specialized. Watson, OTOH, is a much more general intelligence. But it still doesn't have a hierarchy of goals that allows it to override what it is told to do. I'm not sure, however, that that counts as intelligence rather than something else.

FWIW, AI programs come up with ideas all the time. But they are designed to prune them to match their goal structure. (So are you, but your goal structure is much more self-centered.) Coming up with ideas is not a problem; coming up with appropriate ideas, and knowing that they are appropriate, is still a problem. Watson appears to be addressing that problem. Currently an incarnation of it has learned to diagnose cancer better than most doctors. An earlier incarnation learned to play Jeopardy! better than most humans. (Lots better.) And the hardware requirements have been shrinking. (I'm not sure how much is hardware improvement and how much is program improvement.)
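The generate-then-prune shape is easy to sketch. To be clear, this is not IBM's actual pipeline; the clue, the candidate answers, the evidence counts, and the threshold below are all invented for illustration. Generation is cheap and indiscriminate; the goal structure (here, a confidence threshold) decides what survives.

```python
def generate_candidates(clue):
    """Stand-in for candidate search: map each hypothetical answer
    to a raw count of supporting evidence hits (numbers invented)."""
    return {"Toronto": 1, "Chicago": 7, "Springfield": 2}

def prune(candidates, threshold=0.5):
    """Normalize evidence into confidence scores and keep only the
    candidates that clear the threshold."""
    total = sum(candidates.values())
    scored = {answer: hits / total for answer, hits in candidates.items()}
    return {answer: s for answer, s in scored.items() if s >= threshold}

print(prune(generate_candidates("Its largest airport is named for a WWII hero")))
# {'Chicago': 0.7}
```

The interesting engineering is almost entirely in the scoring, not the generating, which matches the point above: ideas are easy, knowing which ones are appropriate is hard.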

I expect that a near-term target of Watson will be middle-management...though I also expect that it will be presented as offering advice rather than as replacing them. Basically what it will do is allow one manager to directly manage an increasing number of workers efficiently. This will prepare it for a career as an advisor to politicians.

Do note, however, that this isn't what he was talking about. He was talking about Cyborgs. These are held back by two things: The lack of a long term neural connector that won't destroy the neurons that they connect to, and the fact that installing significant Cyborg modifications requires surgery. I expect the first problem to be solved within the decade, but as for the second...

Comment Re:Now thats incentive (Score 1) 564

Well, my estimate for the first "human equivalent" AI is still 2030. But I'm using a very rough estimate for "human equivalent", and I'm only talking about the first iteration.

He's talking about cyborgs. That depends less on AI than on a few crucial inventions that haven't quite happened yet and are difficult to predict, though lots of effort is being put into them. One is a long-term neural connector that keeps working and doesn't kill the neurons it connects to. Until that's done, we can't make cyborgs that use neural interfaces.

There's also a bit more work needed on decoding the "machine language" of the brain. Parts of it are fairly well understood, for loose meanings of "fairly well understood," but without long-term neural connections we can't try it out in people, and other parts are still pretty much terra incognita. A built-in calculator wouldn't require any really advanced understanding over what we already have. Recording memories, however, is a lot more difficult, much less replaying them.

Still, not everything needs to be solved at the same time, and something that would automatically prompt you in response to key phrases isn't much of an advance over what we already have on cell phones. What's really missing is the long-term neural connection. And you want it to be good enough to play an immersive game on, so it will sell, but it's got to be useful enough to justify the operation... unless someone comes up with a way to avoid the operation entirely, but that's pretty much guaranteed to be low fidelity and slow bit rate.

OTOH, to some extent we're already on the path. Consider cell phones, and the way people can no longer find their way around without using GPS. That's a totally external kind of proto-cyborg behavior. We think we understand vision and sound well enough that if we had a good neural connector, a built-in cell phone wouldn't be unreasonable within a decade...outside of FDA approval.
