Comment Re:As long as it's just an option (Score 2) 35

I think it's for a certain kind of workflow. If you want to watch YouTube videos it kind of does nothing useful. If you want to swap between documents and reference materials a lot, much more helpful. I think the answer is "It sucks because it's for multitasking, not because it is a bad idea."

I think it depends less on workflow and more on screen layout. If you run your browser maximized on a landscape-mode display, there's a lot of horizontal real estate that isn't very well-used, while vertical space is at a premium. So it makes sense to move tabs to the side.

On the other hand, if you don't maximize your window but keep it as narrow as possible (so you can see other windows) but just wide enough that sites render well, then you'll probably prefer them on top.

On the gripping hand, if you're like me and run your browser full-screen on a portrait-mode screen, then you have gobs of vertical real estate and tabs on top definitely make sense.

(I have three monitors, a 32" (landscape) in the center, which is where my IDE, editors, and "focused" work lives, and a 27" portrait orientation monitor on each side. The left one has a full-screen browser window for work stuff and the right one has a full-screen browser window for personal stuff. It's fantastic.)

Comment Re: AI doesn't lie. (Score 1) 76

Says who?

The AI's intent is defined by the way it is trained, and Gemini is trained to emphasize what the google executives want emphasized.

Mmmm... if anything it's "what the Google engineers want emphasized". Executives at Google have surprisingly little control over technical decisions. For nearly all of Google's existence it's been an almost completely bottom-up company, and while in the last few years management has been trying to exert more control, it's a very, very slow process.

It's actually the engineering-driven culture that produces Google's infamous tendency to abandon products. Stuff gets built because some engineers think it's a good idea and convince their managers to let them run with it. Then eventually it gets boring and engineers tend to wander off to other teams in search of something interesting. If the product hasn't managed to achieve a significant userbase and/or revenue stream by then (and keep in mind that both are measured on Google scales, so anything less than 100M users or $1B/year is "not significant"), it's a candidate for shutdown.

In a top-down company products don't get built until they have significant executive support, which requires a fairly detailed plan, which gets executed and adjusted, and if an exec's project is in trouble it will get support. At Google products kind of wander out the door and into the world and if they happen to be a hit, great, if not, well, unless there are legally-binding contracts obligating the company to support something, it just gets shut down. Even with the projects that the executive leadership are really excited about (like AI!), their influence is mostly limited to shoveling resources at it.

Anyway, the point is that execs likely have little to no influence on Gemini training beyond setting very broad guidelines, and even those might not have much effect.

Comment Re:This is what stochastic parrots do (Score 1) 76

That's not because they're broken -- which is why I put "fix" in quotes in the previous paragraph. It's because that's how they work: it's an intrinsic property of all such models and no amount of computing power and/or model tweaking can change that: all it can do is obfuscate it. And obfuscated problems are far worse than obvious problems.

That's a strong statement. Can you explain why that isn't also true of human brains? What's the intrinsic difference?

Comment Google's AI does not impress. (Score 1) 76

When I test the different AI systems, Google's AI system loses track of complex problems incredibly quickly. It's great on simple stuff, but for complex stuff, it's useless.

Unfortunately... advice, overviews, etc., are very, very complex problems indeed, which means you're hitting the weak spot of their system.

Comment Re: I think it would be a good idea.. (Score 1) 105

What if the Fed buys bonds? Why did bond yields plummet after the Fed's liberal printing in 2008?

Is it possible you're misidentifying the zero-sum assumptions of economics as "not stupid" when in fact not taking advantage of the positive sum nature of financial economics is the real stupidity?

Comment Re:Billionares Using Our Resources to Replace Peop (Score 1) 37

I've designed a few machines - some rather more insane than others - in meticulous detail using AI. What I have not done, so far, is get an engineer to review the designs to see if any of them can be turned into something that would be usable. My suspicion is that a few might be made workable, but that has to be verified.

Having said that, producing the design probably took a significant amount of compute power and a significant amount of water. If I'd fermented that same quantity of water and provided wine to an engineering team that cost the same as the computing resources consumed, I'd probably have better designs. But that, too, is unverified. As before, it's perfectly verifiable; it just hasn't been so far.

If an engineer looks at the design and dies laughing, then I'm probably liable for funeral costs but at least there would be absolutely no question as to how good AI is at challenging engineering concepts. On the other hand, if they pause and say that there's actually a neat idea in a few of the concepts, then it becomes a question of how much of that was ideas I put in and how much is stuff the AI actually put together. Again, though, we'd have a metric.

That, to me, is the crux. It's all fine and well arguing over whether AI is any good or not (and, tbh, I would say my feeling is that you're absolutely right), but this should be definitively measured and quantified, not assumed. There may be far better benchmarks than the designs I have -- I'm good, but I'm not one of the greats, so the odds of someone coming up with better measures seem high. But we're not seeing those; we're just seeing toy tests by journalists, and that's not a good measure of real-world usability.

If no such benchmark values actually appear, then I think it's fair to argue that it's because nobody believes any AI out there is going to do well at them.

(I can tell you now, Gemini won't. Gemini is next to useless -- but on the Other Side.)

Comment Information lacking from summary/article (Score 5, Informative) 73

Artemis II is breaking Apollo 13's record by about 4,100 miles. The primary reason it goes farther is that it passes much farther from the Moon: about 4,000 miles, versus 158 miles for Apollo 13. The Moon is also a little farther from Earth this time, accounting for the remaining 250 miles or so.
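The breakdown above can be sanity-checked with quick arithmetic. This is just a sketch using the figures quoted in the comment itself (the flyby distances and the ~250-mile extra Earth-Moon distance), not independently sourced numbers:

```python
# Figures quoted in the comment above (miles); treat them as approximate.
apollo13_flyby_mi = 158       # Apollo 13's lunar flyby altitude
artemis2_flyby_mi = 4000      # Artemis II's (much higher) flyby distance
extra_moon_distance_mi = 250  # Moon slightly farther from Earth this time

# Extra distance from the wider flyby, plus the Moon being farther out,
# should roughly equal the ~4,100-mile record margin.
flyby_difference = artemis2_flyby_mi - apollo13_flyby_mi
record_margin = flyby_difference + extra_moon_distance_mi

print(record_margin)  # 4092, i.e. "about 4,100 miles"
```

So the quoted components are self-consistent to within rounding: 3,842 miles from the wider flyby plus roughly 250 miles of extra lunar distance gives about 4,100 miles.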
