
Comment Re:Not impressive, a Pre-ML 1990s PC doable proble (Score 1) 39

Didn't they try to do that kind of image recognition in the 90s and find it unreliable? IIRC they tested it with tanks and found that, rather than detecting tanks, it was detecting sunny days, and once they eliminated the weather variations it couldn't do anything useful.

Today, Tesla's vision system is notoriously unreliable, and you would assume that in military applications the aircraft are going to be camouflaged.

Comment Re:bent pipe (Score 1) 39

But then you have to transmit potentially massive amounts of data back to Earth.

Say you want to detect aircraft entering airspace. They are difficult to detect with radar, so you want to do it optically. You need decent resolution to capture small, drone-sized ones, and you need multiple images to help with camouflage, false positives, and determining flight paths.

That's a lot of data. The data rate is likely to be the limiting factor on what resolution and how frequently you can image an area. Being able to do the detection on the satellite, and only send reports or images that suggest further investigation is worthwhile, is going to be very useful.
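To see why the data rate dominates, here's a quick back-of-envelope sketch. All of the numbers below are made-up illustrative assumptions (sensor size, bit depth, imaging rate, downlink bandwidth), not specs for any real satellite; the point is only the shape of the tradeoff:

```python
# Back-of-envelope: raw optical sensor output vs. an assumed downlink budget.
# Every figure here is a placeholder assumption for illustration.

swath_px = 20_000          # assumed pixels per image side
bits_per_px = 12           # assumed raw sensor bit depth
images_per_sec = 0.5       # assumed imaging rate (multiple looks per area)

# Raw bits per second the sensor produces
raw_bps = swath_px ** 2 * bits_per_px * images_per_sec

downlink_bps = 1.2e9       # assumed 1.2 Gbit/s ground-station downlink

print(f"raw sensor output: {raw_bps / 1e9:.1f} Gbit/s")
print(f"downlink budget:   {downlink_bps / 1e9:.1f} Gbit/s")
print(f"oversubscription:  {raw_bps / downlink_bps:.1f}x")
```

With these (invented) numbers the sensor already produces twice what the link can carry, and that's before considering contact windows with ground stations. Doing detection on board and sending only flagged tiles or reports sidesteps the whole bottleneck.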

Comment Google's AI does not impress. (Score 1) 102

When I test the different AI systems, Google's AI system loses track of complex problems incredibly quickly. It's great on simple stuff, but for complex stuff, it's useless.

Unfortunately, advice, overviews, etc., are very, very complex problems indeed, which means that you're hitting the weak spot of their system.

Comment Re:Billionares Using Our Resources to Replace Peop (Score 1) 47

I've designed a few machines - some rather more insane than others - in meticulous detail using AI. What I have not done, so far, is get an engineer to review the designs to see if any of them can be turned into something that would be usable. My suspicion is that a few might be made workable, but that has to be verified.

Having said that, producing the design probably took a significant amount of compute power and a significant amount of water. If I'd fermented that same quantity of water and provided wine to an engineering team that cost the same as the computing resources consumed, I'd probably have better designs. But that, too, is unverified. As before, it's perfectly verifiable, it just hasn't been so far.

If an engineer looks at the design and dies laughing, then I'm probably liable for funeral costs but at least there would be absolutely no question as to how good AI is at challenging engineering concepts. On the other hand, if they pause and say that there's actually a neat idea in a few of the concepts, then it becomes a question of how much of that was ideas I put in and how much is stuff the AI actually put together. Again, though, we'd have a metric.

That, to me, is the crux. It's all well and good arguing over whether AI is any good or not (and, tbh, I would say that my feeling is that you're absolutely right), but this should be definitively measured and quantified, not assumed. There may be far better benchmarks than the designs I have - I'm good but I'm not one of the greats, so the odds of someone coming up with better measures seem high. But we're not seeing those; we're just seeing toy tests by journalists, and that's not a good measure of real-world usability.

If no such benchmark values actually appear, then I think it's fair to argue that it's because nobody believes any AI out there is going to do well at them.

(I can tell you now, Gemini won't. Gemini is next to useless -- but on the Other Side.)

Slashdot Top Deals

You can tune a piano, but you can't tuna fish. You can tune a filesystem, but you can't tuna fish. -- from the tunefs(8) man page
