Comment Layered design (Score 1) 84

Except flatpacks/snaps/docker images, those have to be updated by their respective maintainers.

Not quite.
Docker and Flatpak work in layers.
Flatpak in particular has "runtimes" (base systems) and is often well integrated with the distro's package manager (e.g. on the Arch-based SteamOS running on the Steam Deck).

The SSL libraries are part of such base layers.
So if there is a bug fix, you're most likely going to see (either directly from your package manager, or when typing "flatpak update") an update to the "org.freedesktop.Platform.GL.default", "org.gnome.Platform", etc. runtimes, not to the individual flatpaks.
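
For instance (a hedged sketch: exact runtime names and versions will differ per system), the shared base layers and their updates are visible from the flatpak command line:

    # list the installed runtimes (the shared base layers)
    $ flatpak list --runtime
    # pull updates for both apps and runtimes; an SSL fix typically arrives
    # as an update to one of those runtimes rather than to each app
    $ flatpak update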

Docker won't be as convenient, as it's a Git-like DAG of "commit-like" layers: if the base "ubuntu:latest" layer changes, the hash of any image built on top of it changes too, and that becomes a new release as well.
Luckily, Docker is very easy to automate with CI/CD, and rebuilding a new image is trivial.
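
A minimal sketch of such a rebuild (image name and registry are made up for illustration):

    # --pull forces re-fetching the possibly patched base layer before rebuilding
    $ docker build --pull -t registry.example.com/myapp:latest .
    $ docker push registry.example.com/myapp:latest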

Note that this is also the case with package managers like Nix, which likewise model the process of building a package as successive layers in a DAG.

There is also no way to ask into those containers whether they're running vulnerable versions of libraries.

There's no ultra-straightforward automatic way, BUT

In the case of Flatpak, it's often well integrated into the package manager, so you're going to get a pop-up from your package manager telling you to update a Flatpak runtime, in addition to updating a system library.

In the case of Docker, the command-line approach still works: the way to ask a container is to run commands in its shell. The drawback is that you need to be fluent in several package management systems (e.g. you're running on Arch Linux and usually rely on "pacman" to report such library versions, but you also need to be fluent in "apt"/"aptitude" because most of the Docker images you use are built on an "ubuntu:latest" base).
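
A rough sketch of what that looks like (container and package names are only examples):

    # on the Arch host
    $ pacman -Qi openssl
    # inside a running Ubuntu-based container
    $ docker exec -it mycontainer dpkg -l | grep -i ssl
    # or query the base image directly (libssl3 is the package name on recent Ubuntu)
    $ docker run --rm ubuntu:latest dpkg -s libssl3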

Comment Two use cases (Score 1) 84

why install apps *especially* like Thunderbird, Firefox, etc. from a flatpack {...} Change my mind?

Use case 1: Steam Deck.

The console is designed with users who aren't seasoned Linux power-users in mind. To avoid the headaches of supporting users who managed to utterly break their SteamOS installation, the root partition is mounted read-only by default.

Sure, power users like most of the people on /. will simply switch the root to read-write and install all they need with pacman from the regular Arch repos. And they are probably able to debug why a subsequent SteamOS update has trouble installing, and to pinpoint the package conflict that triggered it.

But what is Joe Random Sixpack, who doesn't even have command-line experience on Windows and wants something that works as a console - as easily as an Xbox - supposed to do? Valve's answer: just deploy with Flatpaks. The console's root remains read-only and undamaged, version updates won't break due to weird stuff being installed, and all the shit happens in containers, which are very easy to remove individually.

Use case 2: Too lazy to compile several dozen dependencies.

Exactly as you hinted: you want to have the latest bells and whistles for some reason. But your distro lags behind and ships with an old version.

If you happen to use a major distro (e.g. Debian Stable is a good example of a very popular one that lags behind on versions), chances are that there is a 3rd-party repo that provides up-to-date pre-compiled packages and dependencies.
(e.g.: on my openSUSE Tumbleweed laptop, I use stuff from the OBS and Packman repos; on my SailfishOS phone, I use stuff from Sailfish::Chum, etc.)

Failing that, perhaps you're on a distro that has good facilities for compiling custom packages with their dependencies (AUR, Gentoo, etc.).

But what to do otherwise?

Either you go through the madness of compiling several dozen libraries, hoping not to break your system.
Or you fetch a container that comes precompiled with everything you need (well, not everything: Flatpak and Docker work in layers, so the remaining needed stuff is most likely in some common base layer).
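
For example, a hedged sketch (substitute whatever app ID you're actually after):

    # one command pulls the app plus whatever runtime layers it still needs
    $ flatpak install flathub org.mozilla.firefox
    # and getting rid of it later is just as clean
    $ flatpak uninstall --delete-data org.mozilla.firefox
    $ flatpak uninstall --unused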

Comment Layers, large apps. (Score 3, Informative) 84

DISCLAIMER: I usually install most of the stuff from the package repository of my distro (Opensuse Tumbleweed, Manjaro ARM, Debian, Raspbian).

But...

We could perhaps make all the flatpak that use the same libraries, like share them. You know, to reduce package bloat, disk footprint and RAM requirement. We could call it "shared libraries" for example.

Jokes aside... that's very close to what Docker and Flatpak are doing. Docker works with a system of layers (most of the Docker images people use are most likely extending an ubuntu:latest base).
And Flatpaks are built atop "runtimes" (base systems).

These app containers only differ in the main application they run and its specific collection of dependencies that are not part of the base system.
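
You can see that sharing directly (a hedged example; the image name is made up):

    # every layer below the app-specific ones comes from the shared base image
    $ docker history myapp:latest
    # two images built on the same base report identical digests for those base layers
    $ docker inspect --format '{{.RootFS.Layers}}' myapp:latest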

are all the same "performances be damned" approach to solving dependency hell

They are not VMs. They are not entirely separate whole-system installs.
They are closer to providing a single, specific set of common dependencies that an application can target.
You need to make sure that your application works on top of the latest Flatpak runtime, instead of making sure that it works against a zoo of dozens of distros, each with a slightly different set of library versions, some introducing subtle incompatibilities. It is thus closer to, e.g., what Valve's Steam runtime provides for native Linux executables.

Yes, in an ideal world you would want the devs of your distro to take the time to optimize and adapt the application and integrate it nicely with the specific library versions you have (and hope that other devs replicate this effort on the other distros).
(I am lucky: nearly everything I need is available this way from repos - so that's indeed how I install it.)

But these container apps "built against a fixed base layer" are the next best thing before needing to go to "single-app VMs".

It's convenient, but if you have more than 5 to 10 of those packages running on your system at the same time,

The prime targets are applications which have very large collections of dependencies (think a large office suite like LibreOffice, rather than some lightweight text editor that doesn't depend on much more than the base Qt libraries).
You aren't very likely to run more than a couple at the same time.

And if you check the applications listed as examples:

(e.g. Firefox, Thunderbird, VLC, Spotify, OBS Studio, Google Chrome, Telegram),

These examples all support playing and/or recording media, and thus they all need ffmpeg/libav or GStreamer and a bunch of codecs.
Those things are sensitive to versions.
A distro dev would need to make sure that all of them are compatible with the exact versions of the libraries shipped in my distro (and patch around any bugs).
Failing that, in practice lots of distros will ship several different versions of the shared libraries, with differing sonames, and you end up with 2-3 versions of all the multimedia libraries installed, differing only by the number tacked on after .so (that's currently the case on my openSUSE laptop).
(This starts to look very close to how Docker and Flatpak handle it.)
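
A hedged illustration (exact path and soname numbers vary by distro and release):

    # several parallel sonames of the same multimedia library, side by side
    $ ls /usr/lib64/libavcodec.so.*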

Next use case for Docker and Flathub (and an extra use for conda environments, for that matter): when you just want to quickly test one specific application that you're probably going to delete afterward, but don't want to install a zillion specific dependencies for it, and are too lazy to remove all the no-longer-needed deps once you remove the app.
(I personally use a couple of containers this way, and I use conda a lot for testing data analysis.)
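
A quick sketch of that throwaway workflow (environment and package names are just placeholders):

    # spin up an isolated environment with its own dependency stack
    $ conda create -n scratch python=3.11 pandas
    $ conda activate scratch
    # ...test whatever needed testing...
    # then nuke it, dependencies and all, without touching the base system
    $ conda deactivate
    $ conda remove -n scratch --all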

The Steam Deck is another, different use for these containers: it makes it possible to install apps while still keeping the root partition read-only, thus bringing a whole catalog of apps to newbies not fluent in Linux, who would otherwise have needed to switch their root to read-write and thus risk b0rking their SteamOS installation and bricking their consoles. Flathub removes potential user-support headaches from Valve, while still giving options to NON-power users. (Would be a shame if you needed to be a seasoned Linux user to add applications to your gaming console.)

Comment Apple's history? (Score 1) 135

if they won’t bother to look at apple’s history?

Apple's history? You mean like the Newton pad?

A device that did so poorly that it took Palm (and Handspring) to show Apple how unobtrusive pocket computers should be done, before Apple eventually gave it another go (an initial go with catastrophically bad battery life)?

The Vision Pro can't be a success in its current form (size+weight, price, a weird product that assumes there's a market of people wanting to spend the whole day with a VR headset strapped to their head, resolution that isn't all that great for the specific use case (screen replacement) Apple mainly shows in its advertising, etc.).

The question is whether this is going to be a Newton moment (the product failed, and it took another company to show how to do it successfully) or an Apple Watch moment (the product completely missed the point Pebble was trying to make (e.g. long battery life, minimalistic functionality, etc.), and instead was an initially poor reinterpretation of the concept (a smartphone with a miniature screen and poor battery life strapped to your wrist), only eventually stumbling into something more workable, and mostly gaining market share through rabid fanboys who'll buy anything with an Apple logo slapped on it).

This also answers the "wonders if anybody would actually use it" part.
It has an Apple logo slapped on it, so the insane fans will buy it; some will even try to use it 24/7 as God^H^H^H Apple intended (and will need to buy an absurd number of power banks).

Comment Where is the BlueSky federation _TODAY_ ? (Score 1) 36

As the French saying goes:
"Un tiens vaut mieux que deux tu l'auras"

(or, as the English equivalent goes: "a bird in the hand is worth two in the bush")

Secondly, it's false. They now run two instances internally and {Paraphrased as: Blah, blah, blah, ...}

Okay, can you point me to a dozen 3rd-party, independently-run instances of BlueSky?
No, you can't.
Until that actually happens in practice, BlueSky is only theoretically promised to be distributable.

Meanwhile Mastodon, despite all the warts you find in its protocol, actually is.

ActivityPub is a garbage protocol. It just wasn't thought through well.

Nobody says it's perfect.
It's just that it's already working out in the wild.

Pick an area of focus for your demands, for God's sake.

My demands are simple:
I want people on other independent 3rd party servers to be able to interact with people on BlueSky's server(s).

You're the one who brought up the fact that bridges would be trivial. I am merely pointing out that BlueSky's team couldn't even be arsed to do that as a stop-gap measure to be actually interoperable in practice.

Threads *is* ActivityPub based.

[citation needed]

thus cutting off the vast majority of the user base.

The vast majority of Threads' user base isn't even there yet.
Currently only a few select accounts are available over ActivityPub, as a test phase.

It literally happened to me, on Fosstodon.

Nobody said it never happened.
It's unfortunate that it happened to you.
But in my experience it doesn't happen as frequently as some think.

Not when migration is so poorly implemented due to the protocol it isn't.

Yet migration is already implemented, and is already happening right now, out in the wild, between independent servers.

That's more than can be said about the vast majority of social networks (basically, nearly anything outside of the fediverse), including BlueSky. At best, as of today, they can move data between their own shards.

Still a long way to go until one can do in practice what is already achievable on the fediverse, no matter how crappy and badly implemented.

TL;DR:
I draw a big distinction between what can currently be done in practice NOW (no matter the warts)
vs.
the lofty promises of what could someday happen when (if?) some company opens up to 3rd parties (no matter how much they promise their protocol is going to be better).

Comment Mastodon (and the Fediverse) vs. BlueSky (Score 1) 36

I hear Mastodon (ActivityPub) people a lot (I'm on both networks) writing things like:

You might hear the sentence you wrote somewhere, but the most common gripe with BlueSky is:

- The whole ActivityPub-powered Fediverse, including tons of Mastodon instances, is already federated as of today. Yes, there are some inconveniences (e.g. the manual steps necessary whenever you change a nickname and/or server, as you mention (*), or merely the fact that you have to pick one server among many, each with its own moderation policies), but they aren't any different from the ones you get, e.g., with e-mail.

- Meanwhile, BlueSky promises that one day it will be federated/distributed, but as of today these are still promises, nothing concrete. Plus, several people have looked into it and are doubtful about how distributable this is going to be in practice. (Basically, it's easy to run your own caching server, but making your own index is another matter.)

TL;DR: One is federated today, the other is just a promise of being federated "one day(tm)".

(*): Though in practice there haven't been major problems with that: I've had contacts who switched nicknames or moved servers (e.g. when mastodon.lol shut down), and I was still automatically following them after the switch.

One can create a bridge to ActivityPub, and it shouldn't even be that hard

Yet BlueSky isn't even trying that.
(I mean, even Facebook's (or Meta's, or whatever's) Threads has managed to start bridging (**).)

(**): Yes, I am totally aware that Zuckerberg is only doing this in the same way that Microsoft supported Apple: just to have something to point at if (when) antitrust starts to look too closely.
Well, that, and also to pre-populate timeline content so Threads doesn't look like a ghost town (also the reason why Facebook auto-pre-created Threads accounts for all Instagram users).

or the server admin can have a grudge against you and delete (or even modify!!) all your data, or do basically whatever they want because each one is basically a lord of his own personal fiefdom (few people have any clue what they're getting into when they pick a server).

Despite being brought up regularly by people pushing for the techbro/cryptobro centralised networks, this hasn't happened at any significant rate on the fediverse.
That's mostly due to the fact that the Fediverse is already distributed in practice, and competition (even at some very limited level, e.g. between servers) is a good incentive to behave semi-decently (if you try being an arsehole, people will flee very fast).

What happens a lot is server blocking and defederation:
a lot of servers with more at-risk users have defederated from Threads; nobody federates at all with either Gab or Truth Social, despite both using derivatives of Mastodon as their base; etc.

Comment 3rd parties (Score 1) 83

We're /., aka massive geeks: you can't use our experience as a reference point for what the average Joe does.

Yes, it's technically possible to install a 3rd-party store beside the Google Play Store. But Google has designed the whole thing to make it slightly discouraging (according to the Epic lawsuit), so the end result is that only a few geeks are on F-Droid, Chinese users are on alternate stores because Google Play is banned there, and virtually every single other Android user on this planet is using the Google Play Store exclusively.

Contrast this with the Steam Deck: its onboarding tutorial literally shows you how to install a browser from a Flatpak as an example of getting non-SteamOS applications.

Comment Pulse oximeter (Score 1, Troll) 122

This isn't someone inventing "email, but on a phone" and suing.

Indeed, no. This is instead someone inventing "pulse oximeter, but on a smartwatch" and suing.

(Pulse oximetry is around half a century old. Ironically, the Japanese inventors didn't apply for a patent outside of Japan, so in the USA it was an open technology to expand on. Wikipedia has the details.
Further back in history, light-based oximetry itself is older than WW2, and adapting it to a pulsating target made entire sense as soon as progress in electronics allowed it.
And packaging everything into a smartwatch for convenience is a no-brainer.)

(Also, medical doctor [and, as a hobby, officially licensed sports trainer - alpine ski] speaking here: the claims made on the Masimo personal health website you link are dubious, bordering on snake oil.)

Comment Medical AI ; licensing (Score 1) 100

...and further on (sorry for the split posting)

Except that for a huge and growing number of medical tasks, AI performs better than humans

Medical doctor speaking. In short: Nope.

More precisely: there's a growing number of big public announcements which get picked up by magazines, blogs, etc. (and here on /.).
It's basically start-ups which need to drum up whatever slightest sign of promised success they've been lucky to hit
(and remember that almost no one is interested in reporting failure),
and academic groups pushing the currently popular buzzwords to attract a bit more funding (I work in research; I have colleagues who have crammed some ChatGPT gimmick into their project to keep up with the popular trends).

But in reality, it's a lot less successful than it seems.
Most of these "human beaten by AI" stories are just lucky anecdotes.
Some fall into the "leaky training material" pitfall. See the recent claims about AI being able to pass the bar exam (followed by the utterly catastrophic results of attempts at using AI in real court). What most likely happened is that the exam questions were part of the training material. Of course the language model will be able to answer most of them.
Other situations involve extra information leaking into the dataset:
a lot of X-ray machines, in addition to providing metadata in the corresponding fields of the DICOM file, will also hard-code it into the pixels (and then there are the physical "metadata" objects, like the letters used to indicate "left" or "right").
Turns out that in a few of those "AI better at predicting outcomes" cases, the model had just learned to distinguish between X-rays from the radiology department and X-rays from the small portable stations on wheels used by the emergency department.
Turns out, yes: emergency patients (especially those who cannot be moved to radiology and require an on-premises X-ray) tend to have statistically worse outcomes than the others, enough so to be significant when picked up by the model.
But move the model to a different hospital that uses different devices and it completely breaks down, unlike its human doctor competition (my brother is head of radiology; he could probably give a bunch of examples of debunked overpromising claims).

And regarding the licensing of current AI frameworks:
yes, it's easy to fork open-source code. But keep in mind that not all forks are successful: for each successful OpenOffice-to-LibreOffice, you've got a lot of projects that have devolved into a bunch of competing forks, none of which has managed to attract a large enough community to remain relevant.
It remains to be seen whether, after Google and Meta one day drop their costly AI departments in pursuit of the "next big thing (tm)", those frameworks will survive with a thriving community.

Comment *Commercial* Ai models (Score 1) 100

(go read the full text on Cory's blog)

Even more important, these models are expensive to run....

Like, this is demonstrably not true?

The subject that Cory discusses is the commercial AI-as-a-service companies, such as OpenAI: there seems to be an arms race in that field to make the biggest model ever (see ChatGPT-4, etc.).

These companies' business model isn't workable in the long term. ChatGPT *is* very costly to run.
And tons of start-ups rely directly on the APIs of ChatGPT and similar services.
The price of such APIs is kept artificially low by burning investors' money.

The day the commercial AI companies' investors start demanding a return on investment, the API prices go up and the start-ups' business model falls apart.

And these models are usually completely closed (I mean the actual weights of the model, not the PyTorch, TensorFlow, etc. libraries used to run them), so once the start-up's access is cut, there is nothing they can do.

Of course, small models, with open weights, that you can run on commodity hardware ARE a thing.
I know that /. usually doesn't RTFA, but Cory actually points out later in both of his pieces (the Locus one and the blog one) that these "small models" are a possible useful output of the AI bubble that will remain after the dust has settled, once the big companies with their unsustainable datacenter AI have gone belly up.
Even the question of training those small models could be solved with federated training.

BUT, you have to realize that small models are, for now, more of an enthusiast/community thing: people on Reddit swapping tricks, code developed by small hobbyist teams on GitHub, etc.
They still have "some" costs: they still require fairly high-end hardware to run for now (a big gaming rig), and they require decent domain experts to wrangle them into specific uses.

So not that many start-ups currently rely on those for their business plan, because they are under the (false) impression that they'll save money on big GPUs and on hiring AI gurus by simply off-loading everything onto some commercial API.
For each "Mozilla provides an AI translator that runs on your laptop", you get a dozen start-ups that leverage ChatGPT-4 for some bullshit stuff.

Comment Cheaper? Nope. (Score 1) 100

The "cheaper" part is the part Cory is pointing out.
In practice, most of the commercial AI models from companies such as, e.g., Open.ai cost insane amounts of resources.
Not only does training them cost mind bogglingly large amount of both energy and labor, but even running them requires vast data center (energy hungry, for servers and cooling those) and an army of low paid workers (whose job is to make sure, e.g, the chat bot doesn't start spewing racist nazi propaganda, or that the image generator doesn't start painting non consensual violent gore porn).
And that's just on the AI company side (as the summary above mentions, the client company then in turn needs to keep in-house enough highly paid expert reviewers to avoid getting into too much liability troubles).

For now, online AI services seem cheap, because they are setting large amounts of investors' cash on fire in order to keep up the illusion of attractive prices.

And yes, currently some clueless C-level "decision makers" could get lured into thinking that using AI workers will be cheaper and allow them to fire some human workers (and then pay themselves big fat bonuses to congratulate themselves for having saved the company some pennies).

BUT, at some point the AI companies' investors will get fed up with the money burning and will demand a return on investment. And so the enshittification will begin: the AI companies will start charging the real cost of running the giant data centers.
And suddenly the client company's new business model centered around "cheap" AI workers will completely break. Either they'll need to reflect their AI costs in the price of their own goods and services (and suddenly the price of everything will spiral up. Prepare for AI-flation!) or they'll go bankrupt.

Of course, the C-level "decision makers" will either be long gone with their bonuses, or will get sacked with a very large golden parachute.

But this is going to be a giant wreck among the companies that were stupid enough to shift from human to AI workers.

Comment Tweaking vs. retraining (Score 1) 20

The difference is that, due to the vastly different resources needed:
- most software enthusiasts study, change and improve software by editing the code and recompiling (few people do binary patching);
- most AI enthusiasts study, change and improve models by tweaking, further training or mix-and-matching pre-trained models (very few people burn the gazillion cycles needed to retrain from scratch).

It's as if AI was in a strange "opposite world" where everyone and their dog can simply patch binaries, but it would take vast amounts of resources to generate a new binary from source.

Comment Opensource's purpose (Score 2) 20

The core purpose of opensource initiatives is that the users have the freedom to run, copy, distribute, study, change and improve the software.

I see no differences from source and binary: {...} you can distribute them both, and you should do that, if you want to call your product "open source".

From my outsider's perspective, it seems that the norm with deep models (like the various Stable Diffusion variants or the current LLMs) is exactly what the parent poster mentions: to "merg[e], tweak[], adapt[], tun[e].. ad nausea pre-trained [models]".
What you need is the weights, and a license that allows you to further tweak them.

e.g.: If you want a language model that specialises in assisting with writing scientific papers:
- you don't take the original training material, add more scientific papers while removing other training material, and spend gazillions of resources again re-running the whole model training on some giant cluster;
- instead, you take a pre-trained model (or a few of them) and further train it (optionally mix-and-matching it with other trained models) on a bit of specific training material for the extra specialization you want, and run this for a few minutes, or an hour max, on your laptop.

The original training dataset and the software that generated the weights are useful for reproducible science, but not for the current workflow of the vast majority of AI enthusiasts.

And, in any case, people don't have the money or resources to train the model *today*. That's not an excuse not to distribute the source.

There's a big difference between the software world with source/binary, and the AI world with training set/trained model.

Re-compiling a binary from source can be done in a reasonable amount of time on a mere laptop (or even an SBC), whereas patching binaries is a rather complex task that requires mastering advanced tools (at minimum, very good debugging tools; at best, Ghidra or whatever else you need to reverse engineer the code and understand what to modify).
Most of the interest is indeed in having the source code, so you can edit and recompile. Reverse engineering binaries in order to modify their behavior is a fallback option when no source can be located.

Whereas retraining a modern AI model with billions of parameters from scratch requires many orders of magnitude more resources than modifying an already trained one to better suit your needs. Very few people are interested in burning tons of resources on training from scratch when the same results can be obtained at a fraction of the cost by tweaking a pre-trained model.

Think of it as artists making photos, paintings or sketches of giant architectural structures (cathedrals, pyramids, China's Great Wall, etc.): few of them are interested in spending the vast resources needed to build their own instance of the structure; most of them are interested in putting their own artistic twist on the subject in their art.
