
Comment Re:Reliability? (Score 1) 47

I'd want:
- Trivially replaceable battery. This means no glue, and ideally a standardized battery format to maximize the chances of finding a replacement down the line.
- Ports on a separate board from the CPU, RAM, and such. Ports, especially charging ports, take physical damage; keeping them off the mainboard minimizes the risk of having to replace something expensive.
- Replaceable keyboard and screen. Both are at high risk of damage and should be swappable.
- Removable storage. If your mainboard does fail, recovery is smoothest if you can move your SSD over to the replacement mainboard.
- Commitment to a consistent form factor. If it breaks 5 years down the line, I can accept not getting *exactly* the same board anymore, but it would be nice if I could just get a new-generation board and swap it in without letting a perfectly adequate screen, keyboard, and case go to waste.

So mostly Framework. Lenovo recently did a thing with a ThinkPad that also exhibits most of these, except with no indication of generation-to-generation consistency in parts.

Comment Re:ThinkPad? (Score 1) 47

Note that this report might be based on perusing websites more than hands on evaluation.

That said, "Lenovo" laptops include the non-ThinkPads, which tend to be *terrible* for repairability. For example, in many cases they don't consider the keyboard a part worth keeping replaceable without replacing half of the laptop, despite it being one of the most likely things for a user to break. You can get third-party parts that are just the keyboard, but you have to destroy a lot of plastic welds to even try, and the design never intended for it to really go back together after you did that.

The ThinkPads tend to do pretty well, though increasingly the CPU and memory are "just part of the board now". Honestly, that's the direction of the industry in general: we are pushing physics, and it's harder to do modular RAM at the speeds we want to interact with it. LPCAMM is a thing, but even then you get a single LPCAMM module, and it's less about "repair" and more about being able to offer different memory amounts by swapping the module out.

Comment Re:Most Thinkpads Quite Repairable (Score 3, Interesting) 47

Couldn't find actual details on *which* models they looked at.

If you look at the non-ThinkPad Lenovo laptops... They are complete shit for repairability.

The ThinkPads on the other hand tend to be very very good.

But other issues make me wonder about their competence in writing the report. Notably, they give Lenovo a "lobbying penalty" for being a member of a group that fights right to repair, but give Motorola a pass for not being in those groups... Lenovo and Motorola are the same company, and they don't seem to realize that.

Comment Google's AI does not impress. (Score 1) 100

When I test the different AI systems, Google's AI system loses track of complex problems incredibly quickly. It's great on simple stuff, but for complex stuff, it's useless.

Unfortunately, advice, overviews, and the like are very complex problems indeed, which means you're hitting the weak spot of their system.

Comment Re:Billionares Using Our Resources to Replace Peop (Score 1) 47

I've designed a few machines - some rather more insane than others - in meticulous detail using AI. What I have not done, so far, is get an engineer to review the designs to see if any of them can be turned into something that would be usable. My suspicion is that a few might be made workable, but that has to be verified.

Having said that, producing the design probably took a significant amount of compute power and a significant amount of water. If I'd fermented that same quantity of water and provided the wine to an engineering team that cost the same as the computing resources consumed, I'd probably have better designs. But that, too, is unverified. As before, it's perfectly verifiable; it just hasn't been verified so far.

If an engineer looks at the design and dies laughing, then I'm probably liable for funeral costs but at least there would be absolutely no question as to how good AI is at challenging engineering concepts. On the other hand, if they pause and say that there's actually a neat idea in a few of the concepts, then it becomes a question of how much of that was ideas I put in and how much is stuff the AI actually put together. Again, though, we'd have a metric.

That, to me, is the crux. It's all well and good arguing over whether AI is any good or not (and, tbh, I would say my feeling is that you're absolutely right), but this should be definitively measured and quantified, not assumed. There may be far better benchmarks than the designs I have -- I'm good, but I'm not one of the greats, so the odds of someone coming up with better measures seem high. But we're not seeing those; we're just seeing toy tests by journalists, and that's not a good measure of real-world usability.

If no such benchmark values actually appear, then I think it's fair to argue that it's because nobody believes any AI out there is going to do well at them.

(I can tell you now, Gemini won't. Gemini is next to useless -- but on the Other Side.)

Comment Alaska & many oil-rich countries already have (Score 1) 118

Even Iran has it. Well, had it; pretty sure it's been zeroed out over the past few weeks. It was not a large amount (you'd have to look up the figure, but I think it's about $10 a month). Anyway, the UAE, Qatar, Saudi Arabia, Kuwait, etc. have it. It's just a matter of how much they provide. The UAE provides enough to live on without a job (about $2,900 a month for an individual citizen). I think Saudi Arabia does too.

Comment Re:What I find amusing is... (Score 2) 38

It's not out of date, it's a simplification.

They don't innately understand their own capabilities, but information about those capabilities may be fed into them explicitly by other means, just like any other data you want to put into the context.

The idea that you can ask whether it implements a certain behavior, and that either it's deliberately lying or the behavior isn't actually there, rests on the false assumption that it of course has innate knowledge of its own implementation without any "help".

The core issue is that LLMs will generate an answer based on no data. Instead of "information on that, one way or the other, is not available to the model", the answer most consistent with the narrative becomes "those behaviors do not exist". LLMs tend to generate output that implies confidence regardless of whether confidence is warranted. The workaround has been to do everything possible to make sure there is actual data in the context window, and hope the gaps just don't come up that much, but that only goes so far. Some coding workflows have the opportunity to use test cases to automatically add "the output given failed to work" to the narrative, driving iteration and maybe getting further.
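The test-driven iteration idea above can be sketched roughly like this. All the names here (iterate_with_tests, the stub model, the stub test runner) are hypothetical illustrations, not any real API; the point is just that concrete failure output gets appended to the context instead of trusting the model's confident first answer.

```python
# Sketch of a test-driven iteration loop for LLM-generated code.
# Hypothetical names throughout; the "model" below is a stub, not a real LLM call.

def iterate_with_tests(generate, run_tests, max_rounds=3):
    """generate(context) -> code; run_tests(code) -> (ok, output).

    On each failure, append the concrete failure output to the context
    so the next attempt is grounded in actual data, not confident guessing.
    """
    context = []
    for _ in range(max_rounds):
        code = generate(context)
        ok, output = run_tests(code)
        if ok:
            return code
        context.append(f"The output given failed to work: {output}")
    return None  # never converged; don't pretend it did


# Toy demonstration: a stub "model" that only corrects its answer
# after a failure message appears in its context.
def stub_generate(context):
    return "return 4" if context else "return 5"

def stub_tests(code):
    ok = code == "return 4"
    return ok, "" if ok else "expected 4, got 5"

print(iterate_with_tests(stub_generate, stub_tests))  # -> return 4
```

With a real model the `generate` callable would wrap an API request that includes the accumulated context; the loop structure stays the same.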
