Comment Re:8Gb RAM? (Score 5, Informative) 52

It's such a damning testament to the dreadful, bloated mess that my own industry of software development has caused, that so many people say this kind of thing. IMHO the hubris and arrogance of the software industry as a whole is truly unpleasant; basically, it manifests as contempt for the end-user justified in a thousand ways which amount to syntax sugar over "lazy and cheap". Every developer seems to think that their software is the one and only thing the customer will ever want or need to run on their computer.

8GB is a huge amount of RAM...

...though you might want to run older software, and avoid Electron like the plague. But isn't that true for everyone, no matter how powerful your computer? Did you buy all that CPU power and RAM capacity just to be able to run someone's React dependencies, or to be available for your workflows across multiple applications?

What's more, the Neo's SoC - despite being aimed at a phone - is absurdly powerful. Single-core performance is much better than the "desktop class" M1 from just a few scant years ago. Again, that really just shows how truly bloated and slow modern software is; the resource requirements apparently needed to show a reminders app, or a weather app, or a calendar or whatever on a phone these days are just insane. Those applications do somewhere between nothing and very little more than their predecessors did, but they're just more bloated and, often, more buggy because Reasons.

That's reinforced by how older software, simply because its core was originally written better (although likely bloated by the standards of its day), works just fine. Check YouTube and you'll find e.g. someone showing DaVinci Resolve with 2 4K streams, Final Cut Pro with 3 4K streams, editing a large image in Photoshop and yes, even running Chrome with several heavy tabs - which isn't "efficient" or older software, but new and huge - including YouTube and Prime, all loaded at once and running smoothly. There are no obvious lags between application switches and no dropped frames evident during the 4K multi-layer playbacks or, say, YouTube. Doubtless it's swapping with all that loaded, but it's not particularly visible to the end user.

Making better software doesn't cost more long-term - your overall velocity stays higher - but it costs more in the short term, and corporate types obsess about that. If you've got captive customers today, you probably don't care much about plunging velocity due to tech debt, bugs and bloat anyway. The customers will wait. The feature will ship one day, and hey, you can keep hiking subscription costs to pay for the devs until it's done. Meanwhile, the bloat means that customers are gaslit into thinking they need very powerful computers, because, well, there's a chance that they do! The ever-faster hardware is countered by ever-slower software.

Thanks to AI, RAM & storage just got very expensive. Even more last laughs for the industry and even more money out of the pocket of the customer. Except enter the Neo - a very unexpected twist. When Apple "launched Apple Intelligence" (ha!), the extra RAM it needed was ostensibly the reason 16GB became the entry-level baseline in their computing line. I figured it was all over for people with 24-32GB; macOS would just bloat out and swallow up the baseline RAM, so those who'd purchased more had far less headroom and would hit swap much more quickly. Early Tahoe releases showed that happening, but then it tightened up again and I was surprised. Now I know why - it needs to run smoothly under 8GB, with space for applications. This is excellent news for owners of more powerful machines because the baseline has stayed low. Software has to meet a minimum bar of efficiency. Everyone benefits.

I don't want a developer's hubris to mean my (say) 16GB laptop has half its RAM used by a Figma browser tab, or by MS Word with a small document loaded. But that's the trajectory. Thanks to machines like the Neo, hopefully that stays slower.

Comment Oh, really? Trust me bro?! (Score 1) 27

A sprawling Chinese influence operation [...] focused on intimidating Chinese dissidents abroad, including by impersonating US immigration officials, according to a new report from ChatGPT-maker OpenAI

American company which has execs who have made vast donations to Trump accuses China of something and provides no proof. News At Eleven!

The Chinese law enforcement official used ChatGPT like a diary to document the alleged covert campaign of suppression, OpenAI said

Yes, that totally sounds realistic and just the sort of thing that people would do, especially those dastardly Chinese. I mean, who doesn't look at a Chatbot and think, "yes, that's a great diary tool, and this top secret government operation is just the sort of thing to write down in a diary!"

We should all definitely comment at length about how evil China are in relation to this story, yes siree. Nothing fishy about OpenAI here at all. By the way, anyone remember the Epstein files? Oh, we're not supposed to look over there, we're supposed to look over here, at Big Bad China? Right. Got it.

In related news: If this is true in whole or in part then, as another poster points out, OpenAI just admitted they harvest and monitor your interactions with ChatGPT not just for training, but for policing.

Comment Re:Part of this decline is all MBA-driven (Score 1) 187

Evidently, there is little actual knowledge in comments here about the referenced calculator issue. Calculator - normally reporting some 50-60MB of usage with all the caveats others have noted about that figure's interpretive complexity - became a kind of poster child for extremely serious and aggressive memory leaks in Tahoe, with another frequent culprit being Finder.

The 32GB example was infamous on Reddit and comes from the "Your system has run out of application memory" dialogue box: swap has been exhausted and the system has determined that it can no longer allocate more RAM. It's a very serious condition and is exactly what the blog says it is; it's not some hand-wavey virtual usage or other nonsense, it's actual, used, claimed allocation.

https://www.macobserver.com/news/early-adopters-hit-by-calculator-bug-in-macos-tahoe-26/

You will notice that other values reported in that dialogue are reasonable. It's likely to be a framework-level problem that Apple knowingly shipped with (if they didn't know about it, then that's even worse for obvious reasons). Ever since around Big Sur, serious operating system memory leaks have been something of an Apple signature move!
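If you'd rather catch this kind of leak yourself before the out-of-memory dialogue appears, a crude check is just to sample a process's resident set size over time. A minimal sketch - the PID and sampling interval are placeholders, and it leans on `ps`, which reports RSS in kilobytes on both macOS and Linux:

```python
"""Crude leak check: sample a process's resident set size (RSS) over time.

Sketch only -- the PID and interval are placeholders; `ps` reports RSS
in kilobytes on both macOS and Linux.
"""
import subprocess
import time


def rss_kb(pid):
    """Return the resident set size of `pid` in kilobytes, via `ps`."""
    out = subprocess.check_output(["ps", "-o", "rss=", "-p", str(pid)])
    return int(out.strip())


def watch(pid, samples=5, interval_s=1.0):
    """Sample RSS a few times; steady growth while idle suggests a leak."""
    readings = []
    for _ in range(samples):
        readings.append(rss_kb(pid))
        time.sleep(interval_s)
    return readings
```

Point `watch()` at Calculator's PID (from Activity Monitor or `pgrep Calculator`) and a healthy app should plateau; a leaker just keeps climbing while sitting idle.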

Comment Re:Who watches linear TV these days (Score 1) 38

It's nothing to do with linear TV. It is of course impossible to target multiple different broadcasts to the level of individual streets via the over-air transmission tower network. The coverage is vastly too broad and the technology to send different signals out even to individual towers doesn't really exist, mostly because it'd just be wildly expensive to establish truly independent links to each tower.

As the summary says, this article is talking about the streaming service which used to be called "4OD" (4 On Demand) but has inevitably been rebranded to be more confusing (just "Channel 4", losing the distinction between that and the OTA service) along with inevitably a pay-for "plus" bolt-on.

The whole thing seems pointless, anyway. Anyone from the UK will be aware of how varied most neighbourhoods are - Space even wrote a song about it in the 90s. I suppose there's a small chance of greater relevance given they argue they'll advertise local businesses, but everyone in a neighbourhood / street already knows all those local businesses, because they're local! Besides, small stores in local areas aren't likely to pay C4 much for ads.

Can't blame 'em for trying, though, I suppose.

Comment Re:Those poor Brits! (Score 2) 38

The TV licence is for the BBC-produced channels and content only. It has absolutely nothing to do with the commercial, independent, ad-supported and free-to-air ITV, C4, C5 or the ever-growing plethora of digital TV and streaming-based services available in the UK.

The clues really are in the names for much of it - BBC first (British Broadcasting Corporation), then ITV - Independent TeleVision ("channel 3" in the old analogue days, back when there really were just three channels to choose from; the launch of C4 in 1982 was a huge deal!).

Comment Re:Modernize? (Score 5, Funny) 90

Nonsense! I want the UI to become at least four times as large, with huge, widely spaced, bland and sterile corporate UI styling consisting of flat areas with optionally rounded corners, but none of that 3D-effect-button nonsense and ideally, none of those ugly, cluttering shadows. If it doesn't look like some Apple rip-off, despite Apple's current style having become tired and dated several years ago, you're not trying hard enough and should remove even more detail and character. It must be bland, I tell you! That is the future.

It must use an equally bland font too; something visually identical (for those poor, pathetic "normal" humans) to Helvetica, but actually made by a really expensive consulting company that calls it Meta Whalesong Regular and tells us that there's a slight widening of the letter "l", which represents the body of whales. The normal people won't think they see it, but the clever graphic designers know that it's really changing how they feel. It's not a user interface, it's a user experience. And don't forget, make the titles big. No, bigger! Even bigger! And bolder! People are idiots - make sure the titles of things shout at them in a huge, bold font that's totally different from the rest of the UI. Nobody's going to ignore typography that practically punches them in the face! Take that, human interface guideline authors worldwide.

Meanwhile, there are lots of advanced features that only a few percent of the users like. They just waste dev time. We'll remove all those - if it's good enough for 95% of people, it's good enough for anyone, right?

Removing all the visual complexity, decoration and features will of course increase the application's RAM and CPU footprint by two or three orders of magnitude, but that's a small price to pay for progress.

Comment Re:That's quite a reversal (Score 5, Informative) 80

You do understand that a non-profit does make money, don't you? It just only needs to make enough money to cover its costs. It does not then add some arbitrary (and usually ever-increasing) amount of net profit on top.

This is a good way to try and stop rampant profiteering, AKA in recent years, enshittification. It's also a very good statement about what your company stands for - it stands for its products, and by extension the users of those products - not its shareholders.

While fairly easily circumvented by creative accounting that makes costs look higher than they are, thus allowing revenue to creep up and actual profit to be realised off-shore, this kind of thing is usually as easy to trace as it is to set up, so it gets called out pretty quickly for any sufficiently large, and therefore interesting, non-profit.

By dropping its non-profit status, OpenAI would no longer stand for its products or its users. It would stand for its shareholders. From there, things follow an extremely familiar and extremely depressing path.

Comment Re:Of course it did - it's even written in the adv (Score 2) 52

Granted I have also seen it go haywire and write 100%/50% wrong code [...] some more junior engineers in a huge rush just accepting what Copilot suggests and then making a PR. But, that's what PR review is for right? Catch the stupid before it gets into the code base.

No, that is ABSOLUTELY NOT WHAT PR REVIEW IS FOR and my goodness it's terrifying that you'd suggest it.

Code review isn't there so someone can just shit out stuff that in your own words might be 100%/50% wrong, not give a fuck, write no tests, never run it locally, don't care, don't wanna know, just slap it into a PR and let the reviewer do the work. If I saw work of such pathetic quality in a PR, ever, it'd be an instant referral to HR to remind the dev that they're paid well to be competent and diligent in their work. Regardless of your experience there is no excuse for laziness.

Code review is there for the opportunity of mentoring on possible code style improvements, efficiency issues, reinvention of wheel where something already exists in a wider framework but has been coded anew because the implementor didn't know better, and of course, to pick up on mistakes. There are a plethora of things we'd like PR reviews to be, but of course it does depend on the reviewer, time pressure and a bunch of other things.

Fundamentally, though, there's an assumption that the dev wasn't just a moron (!) and wasn't just eyes-glazed-over lazy. We assume they at least had some decent notion of what they were meant to be doing. If they don't even have that, they most likely need to be in a different job, or at least sent on the mother of all training courses before being let anywhere near the code base again.

Once more, with feeling: reviewers are not there so you can shit out untested garbage and they can rewrite it for you.
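And none of this is even hard to avoid: running the checks locally before a PR is ever opened is trivially automatable. A minimal sketch of such a pre-PR gate - the commands listed are assumptions for illustration; substitute whatever test runner and linter your project actually uses:

```python
"""Minimal pre-PR gate: run the project's checks locally, fail fast.

Sketch only -- the commands in CHECKS are assumptions; substitute
whatever your project actually uses (pytest, ruff, cargo test, ...).
"""
import subprocess
import sys

# Hypothetical check commands for illustration.
CHECKS = [
    [sys.executable, "-m", "pytest", "-q"],   # unit tests (assumed runner)
    [sys.executable, "-m", "ruff", "check"],  # lint (assumed tool)
]


def run_checks(checks):
    """Run each command in order; return True only if every one exits 0."""
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True


if __name__ == "__main__":
    sys.exit(0 if run_checks(CHECKS) else 1)
```

Wire something like this into a git pre-push hook and the "untested garbage in a PR" scenario becomes structurally impossible, AI-generated or otherwise.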

Comment "Science" magazine (Score 1) 57

It really has fallen a long way, hasn't it, if this obvious nonsense VC bait is considered "science"?

Let's see now. Just one of these inane enterprises has been estimated to cost $88 billion - and it will in practice, of course, be at least ten times that expensive because Capitalism - for just 80km of "curtains" to merely slow things down, while in so doing releasing vast amounts of CO2 into the atmosphere from all the resource mining, manufacturing, ships and other construction equipment.

Or - and, hey, hear me out here, this is a wild idea I know - we could spend that $88 billion on systemic CO2 output reduction measures.

Sigh. We're so fucking doomed.

Comment If it lost money for ten consecutive quarters... (Score 1) 151

...then its problems date back to the comparative EV boom, when it was still unable to make money. It's certainly easy to argue that a slowdown - which is just a reduction in the rate of growth, but still growth, however much the fossil fuel lobbyist scaremongers try to paint it otherwise - is the cause, but meanwhile other battery vendors were and are doing just fine.

Bad business plan for years, CEO doesn't want to get blamed, so CEO blames everyone else.

Comment Re:Hold individuals accountable (Score 1) 123

Yes, but because of the utterly spectacular assholery that is the USA's legal treatment of corporations as individuals, we get the predictable and - at least as far as the so-called "C-suite" is concerned - desired outcomes. AFAICT from the Wikipedia article linked below, this is ultimately all colonial Britain's fault. Two centuries or so of this disgusting attitude, compounded especially over the last few decades, is the cancerous rot at the very heart of American-style capitalism. There is near-absolute personal legal deniability. "It was the company what dunnit, your honor! I am but a blameless billionaire CEO!"

This is why corporations have repeatedly and without remorse behaved in such abhorrent and immoral ways, and will continue to do so without bounds until such a time as that legal viewpoint is reversed (which will be never, thanks to multi-generational deep seated lobbying-based corporate corruption of the political process). And it's not just the big stuff like Ford knowingly burning their customers to death in the Pinto (one of the most agonising and horrifying deaths imaginable but hey, nothing personal, business is business, amirite?). This includes things like good old fashioned nepotism; discrimination; harassment; generally misleading customers; illegal sale of data; insider trading; and so-on.

https://en.wikipedia.org/wiki/Corporate_personhood

Comment Re:Apple is going to be using openAI (Score 1) 71

Arrant nonsense. I now suspect you are a troll. Your comment in its entirety is:

That isn't entirely correct. The chatGPT integration is in fact chatGPT integration. It even includes the ability to link your paid account with OpenAI, to gain access to your paid features.

That's all. Let's take the second part first where you claim to be "doubly correct" - LOL - about "The chatGPT integration" (sic. - NB - the 'C' in 'ChatGPT' is capitalised; see chatgpt.com). This part of your reply cannot possibly be rationally replying to anything other than the previous comment's description of the ChatGPT integration that Apple announced, to which end, @EvilSS said:

OpenAI (and other AI services in the future, they will let you add/choose models at some point) are an add-on that it offers to run in certain situations, and it prompts the user to allow it each time. It is not the backbone of either Recall or Apple's AI implementation.

This is once again entirely correct and is not contradicted by your response. The OpenAI stuff is just an add-on, and Apple have signalled that they intend to do other add-ons in future. Your bizarre statement "The chatGPT integration is in fact chatGPT integration" is mostly meaningless, but seems to at least imply that you believe "add-on" and "integration" are different. They are in fact synonyms in this context. The add-in/plugin/module/extension/integration is just that; a lump of code integrated to a greater or lesser extent with an external service, in this case, ChatGPT.

This leaves the first part of your incontrovertibly wrong response, where you simply say, "That isn't entirely correct" about the prior reply. Since we've already shown that the second part of that reply is entirely correct, then you must be talking about what @EvilSS said before that, which was:

No they didn't. First of all Recall does not use OpenAI models. It runs using a local model on the device, there is no cloud involved. With Apple most of the models also run on device. Apple has their own, not OpenAI, cloud models that can do heavy lifting when needed.

All of that is true. Recall does not use OpenAI models, even though yes, that is something of a tangent to even higher up in this thread, where @EvilSS was responding to a poster saying that Apple and Microsoft were using the same back-end but didn't specifically mention OpenAI. Tangent or not, @EvilSS is factually accurate. The model also does run on the local device without cloud. The description of how Apple's stuff works is also correct, as I've already explained in detail, and even a cursory glance at the WWDC 2024 day 1 Platforms State Of The Union video proves all of this (unless Apple are lying, which I don't believe anyone here is saying, yourself included).

Comment Re:Apple is going to be using openAI (Score 1) 71

That isn't entirely correct

@EvilSS was entirely correct, and nothing you said in apparent rebuttal contradicted that. There is ChatGPT integration, which is basically a plug-in that Siri can use if it doesn't know the answer to something.

Apple Intelligence is based around models they developed, built and trained themselves, running either on-device or with cloud assist on servers made of their own hardware running an operating system they built themselves, also running models that they developed, built and trained themselves. OpenAI are nowhere to be seen in any of it (although we will never know if they offered any engineering assistance during Apple's development work and thus did, in a way, have at least a hand in the creation of Apple's bespoke models).

The ChatGPT stuff is a bolt-on - and thank heavens for that, given that it's useless, misinformation-peddling, hallucinating junkware fit at the very best only for bland and over-verbose works of fiction. We've already seen more than enough examples of how critically unsuitable it is for any factual work.
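To make "bolt-on" concrete: the pattern being described is just local-first with an opt-in external fallback. A minimal sketch - every name here is a hypothetical stand-in for illustration, not Apple's actual API:

```python
"""Local-model-first with an opt-in external add-on: a sketch of the
pattern described above. All names are hypothetical stand-ins."""


def local_model(query):
    # Stand-in for an on-device model that only knows a few things.
    known = {"weather": "Sunny, 18C"}
    return known.get(query)  # None means "I don't know"


def user_consents(query):
    # The real flow prompts the user each time before calling out;
    # this sketch just assumes a yes.
    return True


def external_addon(query):
    # Stand-in for the optional ChatGPT-style bolt-on service.
    return f"[external answer for {query!r}]"


def assistant(query):
    """Answer locally if possible; use the add-on only with consent."""
    answer = local_model(query)
    if answer is not None:
        return answer
    if user_consents(query):
        return external_addon(query)
    return "Sorry, I can't help with that."
```

The point of the architecture is that `external_addon` is swappable or removable without touching anything else - which is exactly why "add-on" and "integration" are synonyms here.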
