Comment Re:What about the CEOs? (Score 1) 87
Anthropic's recent test of having an AI run a small business suggests that CEOs are safe this year.
Whether it's a "work in progress" or a "useful tool" depends on which AI you're talking about, and what task you're considering. Many of them are performing tasks that used to require highly trained experts. Others are doing things where a high error rate is a reasonable tradeoff for a "cheap and fast turn-around". But it's definitely true that for lots of tasks even the best are, at best, a "work in progress." So don't use it for those jobs.
OTOH, figuring out which jobs it can or can't do is an "at this point in time, for this system" kind of thing. It's probably best to be relatively conservative, but don't depend on "today's results" being good next month.
Most of those things are either experimental, or only useful in a highly structured environment.
AI is coming, but the current publicly available crop (outside specialty tasks) makes lots of mistakes. So it's only useful in places where those mistakes can be tolerated. Maybe that will change 6 months from now. I'd rather trust Derek Lowe's analysis of where biochemical AI is currently...and his analysis is "it needs better data!".
One shouldn't blindly trust news stories. They are always slanted. Sometimes you can figure out the slant, but even so that markedly increases the size of the error bars.
OTOH, AI *is* changing rapidly. I don't think a linear model is valid, except as a "lower bound". Some folks have pointed to work that China has claimed as "building the technology leading to a fast takeoff". Naturally details aren't available, only general statements. "Distributed training over a large dataset" and "running on an assembly of heterogeneous computers" can mean all sorts of things, but it MIGHT be something impressive (i.e. super-exponential). Or it might not. Most US companies are being relatively close-mouthed about their technologies, and usually only talking (at least publicly) about their capitalization.
Companies change. OTOH, perhaps those that continue to have jobs at Ford will continue to be able to buy a Ford.
I think that either you don't understand AI, or you don't understand how creativity works in people. Probably both.
Current AIs don't have a good selection filter for their creativity. This is a real weakness, one that I expect can only be remedied by real-world experience. But they *are* creative in the same sense that people are. It's just that a lot of what they create is garbage (although *different* garbage than what most people create).
No, we aren't tracking EVERY object of that kind. (You didn't qualify it, so that includes the meteor that hits a gopher in his hole.)
Possible? Yeah, I think it's possible. It would be a bit expensive. We're tracking most large objects that cross Earth's orbit. New ones don't appear very often, and we rarely lose track of any. But it would take multiple observatories positioned outside the plane of the solar system to track all of them, which is why we've occasionally been surprised by "city killer" meteors, though none of them have actually hit a city. ("City killer" is a bit of an overestimate, but "block buster" would be an understatement.) There have been repeated official statements that "now we know all the really dangerous ones", but even if you believe that, asteroid orbits are subject to change, so you need to keep looking.
Ok, but evolution requires selection as well as variation. Generally one should select several from each generation to modify, and filter out a bunch that don't measure up. (Note that the evaluation function is a very strong determinant of what you'll eventually get.) Selecting "one from each generation" just looks like an extremely bad approach. Perhaps it should read "one batch from each generation".
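To make that concrete, here's a minimal sketch of the "one batch from each generation" approach in Python. It's purely illustrative: the bit-string genome, the match-the-target fitness function, and all the parameters are made-up stand-ins, not anyone's actual system.

    import random

    # Toy evolutionary loop: keep a *batch* of survivors each generation,
    # not a single winner. Everything here is illustrative.
    TARGET = [1] * 20

    def fitness(genome):
        # Evaluation function: count of bits matching the target.
        # Note how strongly this choice determines what evolves.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        # Variation: flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(100)]

    for generation in range(200):
        # Selection: rank by fitness and keep the top batch. Variation
        # alone isn't evolution; this filtering gives it direction.
        population.sort(key=fitness, reverse=True)
        survivors = population[:20]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(80)]
        if fitness(population[0]) == len(TARGET):
            break

Keeping twenty survivors instead of one preserves the variation the next round of selection needs; collapsing to a single individual per generation throws that away.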
Not always, but they tend to be more.
That's a real problem. Python has survived Guido's retirement quite nicely, though. But there was lots of pre-planning. I haven't paid attention to whether such pre-planning is happening in the kernel, but there were a few obvious candidates the last time I looked.
If that's really what they're doing, they're doing it wrong. Probably to save on compute time. Evolution needs a large and varied population to work well.
You underestimate the cost. Even among those that survive for a few generations, most will eventually succumb to changing environmental conditions. Consider trilobites.
OTOH, that judgment assumes the present is the correct time-frame to evaluate from. Why should that be true? Trilobites lasted a lot longer than we're likely to. (But we've got the *potential* to last until the heat death...*IF*... But what are the odds?)
It's actually a very good approach. Unfortunately, it depends on having a good and ungameable evaluation function.
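Here's a hypothetical illustration of what "gameable" means in practice (the metric and strings are invented for the example): suppose we want informative summaries but score them by length alone.

    # Hypothetical gameable evaluation function: we *want* informative
    # text, but the score only measures length, so padding "wins".
    def naive_score(summary: str) -> float:
        return len(summary)

    honest = "The study found no effect."
    gamed = "word " * 1000  # maximizes the metric, conveys nothing
    assert naive_score(gamed) > naive_score(honest)

Any optimizer, evolutionary or otherwise, will find the padding strategy, because the evaluation function rewards it.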
Thinking about this more, my first response was so incomplete as to almost be a lie.
You *cannot* know reality. All you can know is a model of reality. So when you say "reality" you're actually using an abbreviation for "in my model of reality".
And when I said "physics is physics" I was so oversimplifying as to almost be lying. Consider "flat earth" vs. "spherical earth". How do you know which belief to accept? The direct sensory data seems to imply that "flat earth" is the more appropriate belief. There are lots of arguments that "spherical earth" is a better model, but those are nearly ALL based on accepting what someone else says. We are told of experiments we *could* do that would validate it, but very few people have, themselves, done the experiment. So for just about everyone the "spherical earth" model is a "social reality".
Similarly, I accept that I have a spleen, but I do this because others have told me it's true. I'm also told my tonsils were cut out, but I was unconscious when this was supposed to have happened, so I'm taking other people's word for it.
Reality, as we know it, is largely a social construct. We don't know just how completely it's a social construct, but to a large extent, that's what it is.
That's what this article is about.
Reality is largely a social construct; how much, nobody knows. (Yeah, physics is physics and biology is biology, but that's not social reality.) What you believe is largely a feedback process, and when one of the sources of feedback is disconnected from reality...beliefs will drift. This is classically known from sailors who ended up marooned on an empty island. They had physical feedback, but no social feedback, and after a while their beliefs shifted in weird ways. This seems to be a much faster process, but it's being driven by a feedback system that's disconnected from reality, so that seems plausible. And it seems to avoid negative feedback effects. Systems dominated by positive feedback are known to run out of control.
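As a toy numerical illustration of that last point (the parameters are arbitrary; this is not a model of any real social system): a deviation that is only reinforced grows without bound, while one with a corrective term decays back toward zero.

    def run(gain, correction, steps=25):
        x = 1.0  # initial deviation of belief from reality
        for _ in range(steps):
            x += gain * x - correction * x  # reinforcement minus pushback
        return x

    print(run(gain=0.3, correction=0.0))   # no reality check: grows ~700x
    print(run(gain=0.3, correction=0.35))  # enough pushback: decays toward 0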
(1) Never draw what you can copy. (2) Never copy what you can trace. (3) Never trace what you can cut out and paste down.