Forgot your password?
typodupeerror

Comment Re:Reliability? (Score 1) 53

I'd want:
- Trivially replaceable battery. This means no glue, and ideally means a standardized battery approach to maximize chances of buying a replacement one down the line.
- Putting ports on a separate board than the CPU and ram and such. Physical damage comes to ports, especially charging ports. Having this delegated off board minimizes risk of having to replace something expensive.
- Replacable keyboard and screen. Again, at high risk of damage and should be replaceable
- Removable storage. If your mainboard does fail, smoothest if you can move your SSD over to the replacement main board.
- Commitment to consistent form factor. If 5 years down the line it breaks, I can accept if I can't get *exactly* the same board anymore, but it would be nice if I could just get a new generation board and replace it without letting perfectly adequate screen, keyboard, case go to waste.

So mostly Framework, Lenovo recently did a think with a Thinkpad also exhibiting most of these, except no indication of generation to generation consistency in parts.

Comment Re:ThinkPad? (Score 1) 53

Note that this report might be based on perusing websites more than hands on evaluation.

That said, "Lenovo" laptops include the non-thinkpads, which tend to be *terrible* for repair-ability. For example, in many cases they don't consider the keyboard to be a part worthy of keeping replaceable without replacing half of the laptop, despite it being one of the most likely things for a user to break. You can get third-party parts that is just the keyboard, but you have to destroy a lot of plastic welds to even try, and there was never a design to put it really back together after you did that.

The Thinkpads tend to do pretty well, though increasingly the cpu and memory are "just part of the board now", but honestly that's just the direction of that industry in general. We are pushing physics, it's harder for us to do modular RAM at the speeds we want to interact with the RAM, LPCAMM is a thing, but even then you just have a single LPCAMM and it's less about 'repair' and more about being able to have different memory amounts by swapping the module out.

Comment Re: Sounds like a good problem to have (Score 1) 133

Most of these sales are people who would have bought a more expensive Mac if this one wasn't available.

Absolutely not the case. The Neo is essentially Apple's first attempt at a budget laptop, and the market segment they're targeting is entirely different.

Case in point: My dad has been a Windows user for 20+ years and has always decried Apple as "decent hardware that's overpriced and running a lobotomized operating system." However he hates Windows 11 more and decided to replace his aging Windows laptop with a Neo last week. So far he's been impressed, though is still dealing with a learning curve. I guarantee you he isn't alone.

I think the Neo is Apple's attempt at getting into a new segment of lower-priced computers AND taking advantage of Microslop dropping the ball hard when it comes to Windows 11 shit quality and bullshit hardware requirements.

Perhaps somewhat ironically, I suspect the market really being cannibalized is potential new Linux users who go with a New instead of installing Linux on a cheap Windows laptop.

Comment Re:Most Thinkpads Quite Repairable (Score 4, Interesting) 53

Couldn't find actual details on *which* models they looked at.

If you look at the non-ThinkPad Lenovo laptops... They are complete shit for repairability.

The ThinkPads on the other hand tend to be very very good.

But other issues make me wonder about their competency in writing the report. Notably they give Lenovo a "lobbying penalty" for being a member of a group that fights right to repair but gives Motorola a pass for not being in those groups.... Lenovo and Motorola are the same company, and they don't seem to realize that.

Comment Re:Apple is Doomed! (Score 1) 133

There was a time when the people who complained about soldered RAM (and I was one of those people) were a significant enough proportion of the community that manufacturers would pay attention. This was the age when gaming PCs were constructed from high end pieces from the wild-assed cases to the heavy duty PSUs to overclocked CPUs and next gen GPUs.

But overall, that segment of the consumer market has dwindled. Most folks just want to charge their new machine up, connect it to their WiFi network and get going. On the corporate end of things, save for pretty niche areas like engineering and R&D, a cube you can plug a keyboard, mouse and camera into and will last through a few upgrade cycles before it's sold back to a refurb outfit is all that is needed. Nobody in IT departments is pulling RAM chips anymore, particularly at RAM prices right now! Even the folks writing operating systems are starting to get it, and have rediscovered the glory of native apps that don't required bloated Javascript engines just to select a few radio buttons.

Comment Re:It's about the hardware (Score 1) 133

Yes, Windows 11 is really that bad. It's cluttered, slow, inconsistent. I've seen it on pretty high end hardware, and it's a dog. And that's before we even talk about how they tried to insert Copilot into everything. It's a shitty version of Windows and even Redmond acknowledges it. It was the impending EOL of Windows 10 that lead me to buy an M1 MacBook Pro, and I've never looked back. If I want to run Linux, I've got servers set up to do that kind of heavy lifting, but I have absolutely no need for whatever it is MS is trying to sell me these days.

Comment Re:What I find amusing is... (Score 2) 38

It's not out of date, it's a simplification.

They don't innately understand their capabilities, but information about it's own capabilities may be fed explicitly into it by other means, just like any other data you want to endeavor to put into the context.

The concept of asking if it implements a certain behavior and either it's deliberately lying or it's not actually there relies upon a false assumption that of course it has innate knowledge of it's own implementation without any "help".

The core relevant issue is that the LLMs will generate an answer based on no data. Instead of "Information on that one way or the other is not available to the model" it sees the answer most consistent with the narrative to be "Those behaviors do not exist". LLMs tend to generate output that implies confidence regardless of whether there should be confidence or not. The workaround has been to try to do everything possible to make sure there is actual data in the context window and hope it just doesn't come up that much, but this is only so possible. Some coding has the opportunity to use test cases to add "the output given failed to work" automatically to the narrative to drive iteration and maybe get further.

Comment Re:Self discipline (Score 1) 128

Here's the thing, some folks do the discipline and keep a healthy weight, but they are basically always feeling hunger. Some people don't feel it but some people are having to constantly fight sensation of hunger, with a respite of a little bit after a meal, and almost never feeling 'full'.

If we had something to tame the rather depressive experience of constantly denying one's hunger because you know in your mind that you got the nutrition and caloric intake you need, but your body wants to eat your way to obesity.

Comment Re:Sounds like the lights might be going out on PO (Score 2) 26

Problem is that the only viable market for mainframe are current mainframe customers, who are so change averse that if you even hint at breaking compatibility they will be triggered to start evaluating *all* their options if they are faced with a potential migration anyway.

IBM may love the idea of shuttering their in-house stuff in favor of massively cheap commodity stuff, but they would absolutely no longer command mainframe margins.

Comment Re:developer market share (Score 2) 118

In short, Java was invented for a reason, and while it has become a victim of legacy cruft as well, the underlying concept of truly portable apps, with a minimum of fuss to jump from platform to platform, still ought to be the preferable path. The problem is that that true platform neutrality/ambiguity pretty much kills Microsoft in all but a few niches, like gaming, but only because hardware vendors put less effort into drivers for other operating systems.

Yes, Office is still king, although I think that crown is beginning to slip, and it may end up being Excel, with its large list of features, that may last the longest. But it isn't 1990, or even 2000 anymore. Developers have multiple ways of developing portable applications, and while MS may (for the nth time) update or swap out its toolchains, the real question is will developers really care?

Comment Re: 25,000 lines of code (Score 2) 78

You assume that a standards document exists and is also sufficiently specific for all scenarios. Other than some very fundamental IETF stuff have I seen a standards document that pretty much covers the scope specifically. Even more severely, "specifications" for an internal project have been so traditionally bad, a whole methodology cropped up basically saying that getting specifications that specifically correct is a waste of time because during the coding it will turn out to not be workable.

Yes, it can write hundreds of tests, but if the same mediocre engine that can't code it right is also generating tests, the tests will be mediocre. Leading to bizarre things like a test case to make sure '1234' comes back as 'abcd' and the function just always returns the fixed string 'abcd' and passes the test because it decided to make a test and pass it instead of trying to implement the logic. I have seen people almost superstitiously add to a prompt "and test everything to make sure it's correct" and declare "that'll fix the problems". The superstitious prompting is a big problem in my mind, that people think they add a magic phrase and suddenly the LLM won't make the mistakes LLMs tend to make. I have seen people take an LLM at their word when the LLM "promises" to not make a specific mistake, and then confounded the first time they hit the LLM making the mistake anyway. "It specifically said it wouldn't do that!", it doesn't understand promises, the thing just will generate the 'consistent' followup to a demand for a promise which is text indicating making the promise.

Take the experiment where they took Opus 4.6 and made it produce a C compiler. To do so, the guy at Anthropic said point blank he had to invest a great deal of effort in a test harness, that the process needed an already working gcc to use as a reference on top of that, and specified the end game as a bootable, compiled kernel. Even then he had to intervene to fix it and it couldn't do the whole thing and when people reviewed the published result, it failed to compile other valid code and managed to compile things that shouldn't have been compilable. This is Anthropic with their best model doing a silly stunt to create a knock off of an existing open source project with full access to said project and source code and *still* it being a lot of human work for mediocre output.

Yes, it has utility, but there's a lot of people overestimating capabilities and underestimating risks and it's hard for the non-technical decision makers to tell the difference until much further down the line. Mileage varies greatly depending on the nature of the task at hand as to whether LLM is barely useful at all or it can credibly almost generate the whole thing.

Slashdot Top Deals

Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true. -- Bertrand Russell

Working...