Comment Not breaking the DRM, need an actual Switch (Score 2) 107

it's an actual product that they're still selling in stores and trying to earn a living from; it really is enabling piracy that's cannibalizing their sales.

(Keep in mind that Nintendo typically makes its money selling games. In the whole gaming sector, consoles are usually sold either at a loss or close to cost. Selling fewer Switches isn't costing them money, only public visibility and lock-in. So the whole argument about emulators versus sales of machines isn't very convincing.)

Yuzu technically cannot cannibalize sales of Switch consoles: it doesn't break the DRM. Instead it relies on duplicating the keys of a Switch you already own
(and Nintendo's argument is that you need to hack the Switch in order to do so, which would be a DMCA violation, and that Yuzu's website and Discord point to instructions on how to do so).
The way the DRM scheme is designed, you couldn't use Yuzu without an actual paid-for Switch (though some pirate could probably try using keys downloaded from some warez website; then again, Nintendo could very trivially blacklist those DRM keys).

The main use of Yuzu (given the way it is designed) is to let you leave your Switch at home, take only your, e.g., Steam Deck with you, and still be able to run your usual games while pretending to have your exact Switch (and its DRM keys) with you.
The advantages are not needing to lug around multiple devices, the higher performance of some platforms (including the Steam Deck), and alternative inputs (e.g. accessibility).
Nintendo's argument is that running games on non-Switch hardware violates Nintendo's licensing terms, and that it alone should get to decide which devices you're allowed (or not) to run your games on.

AFAIK Nintendo games nowadays employ per-copy keys, so it would be more difficult to use a pirated copy downloaded from the internet than a dump of your own game made with your own Switch (which needs to be hacked, so again Nintendo argues the same DMCA violations and links-to-instructions as above).

Tears of the Kingdom is a bad example, as the version pirated before release wasn't run on Yuzu but on Ryujinx back then.

Comment Sound mixing in the early 90s (Score 1) 12

I remember, when I was a young lad thinking I could change the world, and I went on the journey to develop my own game engine, surprisingly one of the hardest parts was finding a good sound component/engine. - I'm thinking of the mindset I had 10-15 years ago

Funnily enough, that part was a bit easier back in the early 90s, especially when making simple 2D games, as there weren't that many fancy effects.

A "sound engine(*)" was mostly mixing multiple samples (in software, unless you were lucky to have a Gravis, or later an AWE) and playing them over the soundcard.
Which by then mostly meant either a Sound Blaster-family card or something at least compatible with the SBPro, so with a relatively narrow number of hardware interfaces to support.
(And throw in support for digital samples on PC Speaker used in pulse-width mode to support lower end machines. Add some parallel-port home-made resistor ladder (COVOX Speech Thing compatibles). Or if you feel very fancy, playing digital samples over FM chips like AdLib to cover for older machines).

Once you had the mixing, you could use it both for music, to play MODs and other similar tracker music (by mixing instruments' digital samples at varying volumes and frequencies), and for sound effects (mixing digital sound effects at various volumes, mostly based on distance).

As games were much simpler back then, sound effects were really just that: play a sample more or less loudly based on how far away its source is, with different left-right volumes for a stereo effect.
So most of the programming tricks boiled down to mixing fast enough on slower CPUs, smoothing the sound somewhat when changing frequencies, and mostly writing the driver layer to be compatible across the widest range of cards.
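
To make the "mixing" concrete, here's a minimal Python sketch of the idea (a real 90s engine would have done this in fixed-point C or assembly inside the sound card's interrupt handler; all names below are illustrative):

    # Toy software mixer: sum each active voice into a stereo buffer,
    # scaled by a distance-based volume and a left/right pan, then clamp.
    def mix(voices, out_len):
        """voices: list of (samples, distance, pan), pan in [-1.0, 1.0]."""
        left = [0.0] * out_len
        right = [0.0] * out_len
        for samples, distance, pan in voices:
            vol = 1.0 / max(1.0, distance)   # "closer is louder"
            lvol = vol * (1.0 - pan) / 2.0   # simple linear panning
            rvol = vol * (1.0 + pan) / 2.0
            for i in range(min(out_len, len(samples))):
                left[i] += samples[i] * lvol
                right[i] += samples[i] * rvol
        clamp = lambda x: max(-32768, min(32767, int(x)))   # 16-bit range
        return [clamp(x) for x in left], [clamp(x) for x in right]

(Playing a tracker note at a different pitch is just stepping through a voice's samples at a non-integer increment; the "smoothing" mentioned above amounts to interpolating between adjacent samples instead of truncating that step.)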

Contrast with graphics programming, which tried to squeeze as many effects as possible out of the limited hardware capabilities of EGA, VGA and SVGA, requiring tons of low-level register tweaking (planar 256-color modes, latches, etc.) or complex programming (VESA bank switching). Getting even more complex once you wanted to do software 3D.

That's probably why there weren't that many good libraries when you started: the preceding decade had been spent putting effort into graphics while thinking "yeah, simple sample mixing will do" about the sound. (Spoiler: no, not anymore. If you want more realistic 3D games, you suddenly need a lot more effort in 3D audio, well beyond "closer is louder": you need to map environments, do Doppler, do echo and reverb, etc. Cue A3D and the like.)

(*): for amateurs, indies, and the demoscene.
For big companies (think Sierra, LucasFilm, etc.) it was mostly about supporting MIDI and tweaking a few specific music synthesizers (tweaking the hardware registers of AdLib's FM synth, sending SysEx to a Roland MT-32, etc.), or later supporting General MIDI and having the music sound like crap: if you weren't using the same synth as the artist (usually a Roland SC-55), you were stuck with whatever General MIDI sound bank your sound card had to offer (often some very twangy instruments on a botched OPL clone chip, or an awful rompler).

Comment Also biomechanics (Score 1) 30

I imagine this will need to be a fairly snug fit to get a decent read... which I unfortunately know is something that can exacerbate carpal tunnel.

The part that makes it worse (both for you and for making this product work in general) is that by the time the nerve reaches the carpal tunnel, it carries mostly sensory input (that's why you feel pain when it's compressed) and a lot less motor output to muscles (that's why this won't work well).

Most of the stronger muscles that move your fingers (especially for strong motions like a firm grasp) are in your forearm.
At the level of the wrist there is no nerve output left for those muscles; even the muscles themselves have ended, and all you have is a large bundle of tendons.

It would probably be possible to detect movement with ultrasound, but if you try picking up either nerve impulses or muscles' electrical activity, you'll mostly only pick up the few muscles in your palm that are in charge of fine motion.
I.e.: on a "pinching" motion, you'll detect which fingers are aligning (based on the impulses going to the thenar and lumbrical muscles), but you won't pick up the fingers flexing (pulled by tendons coming from muscles in the forearm, whose motor nerve branches split off well before the wrist).

(Here's an anatomy poster with only paintings, no photos: https://anatomywarehouse.com/m...
Sources with more detailed explanations (warning: contains cadaver photos at the end):
https://teachmeanatomy.info/up...
https://teachmeanatomy.info/up...
https://teachmeanatomy.info/up... )

Comment DRMed (Score 2) 82

They are cheap, endlessly shareable, always yours

...and encumbered with a couple of DRM systems, some of which are rather convoluted.

Which means that at some point in the future:
- you will need to connect your player to the internet (or use a USB stick) and update its firmware to view newer discs.
- ...but that upgrade could potentially block you from watching older media that you already own but that Disney and Sony have decided to deprecate and not cover in the newer firmware.

or:
- never connect the player to the internet and keep the current firmware (and its precious keys) as-is forever (until the player breaks), and be guaranteed that your current discs will continue to work (until the discs bit-rot) even if Sony and Disney decide to deprecate some encryption keys.
- ...but then newer discs might not be playable anymore.

or:
- you need to keep a collection of players of various vintages (might be easier with software players).

or:
- you need to rip those discs onto your server, and/or keep your own cracked keys, to keep access to the discs.
- this is very likely covered by the local equivalent of fair use in most European countries around here.
- but this could land you in prison and/or severe fines in other jurisdictions, including in the US.

Comment Obviously (Score 1) 12

Phones can probably do it. Well, android ones.

Given that internally the Sony Portal runs Android, it's not a surprise they eventually managed it.

I suspect an element of spite,

Yup, Sony's official statement is that the Portal is streaming-only and that there wouldn't be any way to run games directly on it.

So these guys' effort is basically a big "Well, actually..." regarding that last point.

Comment Relevant (Score 1) 12

Andy's bio states:

Cloud Vulnerability Research @ Google

i.e.: he specialises in investigating vulnerabilities.
In other words: hacking shit is actually his job description.

(But yeah, the fact that he usually does it at Google isn't really relevant, and TFS on /. could have emphasized that day job better.)

Comment Field of View (Score 2) 203

And you forgot to add the most important one: field of view.

Apple's design for the Vision Pro blocks peripheral vision.

So even if its latency is indeed as imperceptible as Apple claims (we wonder) and its resolution high enough (it's definitely NOT: the "4K" pixels are spread over a much larger part of the view, and even with the "pin-cushion" distortion, resolution in the center isn't that high. See, e.g., the analysis by KGuttag), the field of view in the Vision covers only what is in front of the user; no peripheral view is covered by the screen.

Contrast with the latest generation of military night-vision goggles, which double the light-amplifier tubes per eye just to widen the field of view horizontally and extend it into the periphery.
Contrast also with AR glasses, and "open" VR glasses, which let you see the actual world from the sides.
Those do allow the user to notice things coming in from the sides.

Also, speaking of looking at stuff: a core design flaw of the Vision is that your gaze is your mouse, so you must constantly look directly at the interface element you're interacting with(*). You can't just keep looking straight ahead at the road and fumble with stuff in your peripheral vision as with a car's infotainment system (and again, you don't even HAVE peripheral vision to begin with).

Oh, "one more thing(tm)", there's no dedicated Vision version of Google Maps on the Vision, it's just the tablet version, so you don't get the cool uses like AR super-imposed "follow the floating light ribbon" navigation that would have been an actual possible use case for AR while driving.

---

(*) That's how you know the video was kind of staged: actual AVP use doesn't require holding your hands in the air in front of you, Minority Report-style. One can just pinch fingers while resting a hand on the lap (or on the wheel, as long as the cameras can see it). The driver does the "Minority Report" hand-waving to make the joke more obvious.

Comment Containers (Score 1) 84

So flatpack is basically a separate userspace, not managed by the package manager? {...} I understand it as "a distro in your distro"

That more or less sums up how containers work.
The whole idea can be summed up as "chroot, but on steroids" (i.e.: better isolation), and just like with chroot, each container is a separate userspace.

(With different strategies to manage it:
with LXC, you would run that distro's tools (e.g.: pacman if it's Arch) instead of your host's tools (e.g.: aptitude if it's Ubuntu);
with Flatpak and Docker, you play with layers.)
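
To make the chroot analogy concrete, here's a minimal Python sketch of the core idea (the rootfs path is purely illustrative and it must be run as root; real container runtimes add namespaces, cgroups, seccomp filters, etc. on top of this):

    import os

    # Enter a separate userspace: "/" now points at a prepared root
    # filesystem (hypothetical path; it must ship its own /bin/sh).
    os.chroot("/srv/minirootfs")
    os.chdir("/")

    # Any process started from here only sees that other userspace.
    os.execv("/bin/sh", ["/bin/sh"])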

Will a flatpack from today run on flatpack in 10 year's time?

In theory, yes.
In practice, that 10-year-old flatpak will need a specific userland which by then will most probably come with a big blinking deprecation warning:
"Warning: your version of us.zoom.Zoom is compatible with org.freedesktop.Platform.GL.default up to version 25, which comes with a deprecation warning: 'Absolutely never connect this flatpak to the internet'."

To come back to the SSL example: next year somebody will probably come up with an updated runtime which fixes openssl 1.1.1zy. A couple of years later, the answer will be "move to openssl 3.0 already!", and your old Zoom flatpak won't be able to move to a newer, secure runtime (without being ported to the new library's API and rebuilt).

It's roughly the same situation as having an old legacy RPM whose dependencies eventually stop being updated. You'll need to port the source to the new API used by the new libraries (if you have access to the source), or deploy a whole stack of outdated dependencies.
The difference is that:
- those RPMs would be distro-specific. They might not work on a different RPM-based distro (e.g.: openSUSE vs. Red Hat vs. SailfishOS), and other distros might not even use RPM (DEB on Debian, Ubuntu, etc.). I.e., you need one package per distro (or at least per family of distro derivatives), optimised for the dependencies there;
- whereas flatpaks target the same flatpak runtimes no matter which distro you're on;
- packages like RPM (and DEB) are extremely granular (e.g. one per shared library);
- whereas you only use a couple of flatpak runtimes at most.

Comment Enshittification marches on. (Score 1) 108

Up to 3.5 min per hour, they said:

Amazon's presentation said the average ad load per hour is expected to be between two and three-and-a-half minutes,

...for now. Fast-forward a year or so and you'll be lucky to get 3.5 min of content between each hour of ads.

I'll just keep watching for free on the various pirate sites.

Yup, looking it up on ThePirateBay and torrenting it seems to be a lot more convenient.

Comment Layered design (Score 1) 84

Except flatpacks/snaps/docker images, those have to be updated by their respective maintainers.

Not quite.
Docker and Flatpak work in layers.
Flatpak in particular has "runtimes" - base systems - and is often well integrated into the distro's package manager (e.g. on the Arch-based SteamOS running on the Steam Deck).

The SSL libraries are part of such base layers.
So if there is a bug fix, you're most likely going to see (either directly from your package manager, or when typing "flatpak update") an update to the "org.freedesktop.Platform.GL.default", "org.gnome.Platform", etc. runtimes, not to the individual flatpaks.

Docker isn't as convenient, since an image is a Git-like DAG of "commit-like" layers: if the base "ubuntu:latest" layer changes, the hash of any software based on it changes too, and it has to be a new release.
Luckily, Docker is very easy to automate with CI/CD, and rebuilding a new image is trivial.

Note that this is also the case with package managers like Nix, which likewise treat the process of building a package as successive layers in a DAG.
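
As an illustration of that ripple effect, here's a toy sketch (not Docker's actual content-addressing scheme; the layer contents are made up) of why patching a base layer changes every derived image ID:

    import hashlib

    # Each layer's ID chains over its parent's ID plus its own content,
    # Git-style, so a change in a base layer changes all derived IDs.
    def layer_id(parent_id: str, content: bytes) -> str:
        return hashlib.sha256(parent_id.encode() + content).hexdigest()

    base = layer_id("", b"ubuntu:latest rootfs")
    app = layer_id(base, b"COPY ./myapp /usr/bin/myapp")

    # Apply a (hypothetical) openssl fix to the base: the app layer's ID
    # changes too, even though its own content is identical.
    patched = layer_id("", b"ubuntu:latest rootfs + openssl fix")
    assert layer_id(patched, b"COPY ./myapp /usr/bin/myapp") != app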

There is also no way to ask into those containers whether they're running vulnerable versions of libraries.

There's no ultra-straightforward automatic way, BUT:

In the case of Flatpak, it's often well integrated into the package manager, so you're going to get a pop-up from your package manager telling you to update a flatpak runtime, in addition to updating system libraries.

In the case of Docker, the command-line approach still works, so the way to ask containers is to run commands in their shells. The drawback is that you need to be fluent in several package management systems (e.g.: you're running Arch Linux and usually rely on "pacman" to report such library versions, but you also need to be fluent in "aptitude" because most of the Docker images you use are built on an "ubuntu:latest" base).
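
A quick-and-dirty sweep could look like the following sketch (the container names are hypothetical, and it assumes Debian/Ubuntu-based images where dpkg-query is available; adjust the queried package name to your stack):

    import subprocess

    # Ask each running container's own package manager which openssl
    # build it ships.
    for name in ["webapp", "worker"]:   # hypothetical container names
        result = subprocess.run(
            ["docker", "exec", name, "dpkg-query", "-W", "openssl"],
            capture_output=True, text=True,
        )
        print(name, result.stdout.strip() or result.stderr.strip())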

Comment Two use cases (Score 1) 84

why install apps *especially* like Thunderbird, Firefox, etc. from a flatpack {...} Change my mind?

Use case 1: Steam Deck.

The console is designed with users who aren't seasoned Linux power users in mind. To avoid the headache of supporting users who managed to utterly break their SteamOS installation, the root partition is mounted read-only by default.

Sure, power users like most people on /. will simply remount root read-write and install all they need with pacman from the regular Arch repos. And they are probably able to debug why a subsequent SteamOS update has trouble installing, and to pinpoint the package conflict that triggered it.

But what is Joe Random Sixpack, who doesn't even have command-line experience on Windows and wants something that works as a console - as easily as an Xbox - supposed to do? Valve's answer: just deploy with flatpaks. The console's root remains read-only and undamaged, version updates won't break due to weird extra stuff installed, and all the shit happens in containers, which are very easy to remove individually.

Use case 2: too lazy to compile several dozen dependencies.

Exactly as you hinted: you want the latest bells and whistles for some reason, but your distro lags behind and ships an old version.

If you happen to use a major distro (e.g.: Debian Stable is a good example of a very popular one that lags behind on versions), chances are there is a 3rd-party repo that provides up-to-date pre-compiled packages and dependencies.
(E.g.: on my openSUSE Tumbleweed laptop I use stuff from the OBS and Packman repos; on my SailfishOS phone I use stuff from Sailfish::Chum; etc.)

Failing that, perhaps you're on a distro that has good facilities for compiling custom packages with their dependencies (AUR, Gentoo, etc.).

But what to do otherwise?

Either you go through the madness of compiling several dozen libraries, hoping not to break your system.
Or you fetch a container that comes precompiled with everything needed (well, not everything: Flatpak and Docker work in layers, so the remaining needed stuff is most likely in some common base layer).

Comment Layers, large apps. (Score 3, Informative) 84

DISCLAIMER: I usually install most of my stuff from the package repository of my distro (openSUSE Tumbleweed, Manjaro ARM, Debian, Raspbian).

But...

We could perhaps make all the flatpak that use the same libraries, like share them. You know, to reduce package bloat, disk footprint and RAM requirement. We could call it "shared libraries" for example.

Jokes aside... that's very close to what Docker and Flatpak are doing. Docker works with a system of layers (most of the Docker images people use would most likely be extending an ubuntu:latest base),
and Flatpaks are built atop "runtimes" (base systems).

These app containers only differ in the main application they run and in its specific collection of dependencies which are not part of the base system.

are all the same "performances be damned" approach to solving dependency hell

They are not VMs. They are not entirely separate whole-system installs.
They are closer to providing a single, specific set of common dependencies that an application can target.
You need to make sure that your application works on top of the latest Flatpak runtime, instead of making sure that it works against a zoo of dozens of distros, each with a slightly different set of library versions, some introducing subtle incompatibilities. It is thus closer to, e.g., what Valve's Steam provides for native Linux executables.

Yes, in an ideal world, the devs of your distro would take the time to custom-optimize and adapt each application and integrate it nicely with the specific library versions you have (and you'd hope that other devs replicate this effort on the other distros).
(I am lucky: nearly all of what I need is available this way from repos - so that's indeed how I install it.)

But these container apps, "built against a fixed base layer", are the next best thing before having to go to "single-app VMs".

It's convenient, but if you have more than 5 to 10 of those packages running on your system at the same time,

The prime targets are applications with a very large collection of dependencies (think a large office suite like LibreOffice, rather than some lightweight text editor that doesn't depend on much more than the base Qt libraries).
You aren't very likely to run more than a couple at the same time.

And if you check the applications listed as examples:

(e.g. Firefox, Thunderbird, VLC, Spotify, OBS Studio, Google Chrome, Telegram),

These examples all support playing and/or recording media, so they would all need ffmpeg/libav or GStreamer and a bunch of codecs.
Those things are sensitive to versions.
A distro dev would need to make sure that all of them are compatible with the exact library versions shipped in my distro (and patch around any bugs).
Failing that, in practice lots of distros ship several different versions of the shared libraries, with differing sonames, and you end up with 2-3 versions of every multimedia library installed, differing only by the number tacked after .so (that's currently the case on my openSUSE laptop).
(This starts to look very close to how Docker and Flatpak handle things.)

Next use case for Docker and Flathub (and an extra use for conda environments, for that matter): when you just quickly want to test one specific application that you're probably going to delete afterwards, don't want to install a zillion specific dependencies for it, and are too lazy to remove all the no-longer-needed deps once you remove the app.
(I personally use a couple of containers this way. And use conda a lot for testing data analysis).

The Steam Deck is yet another use for these containers: they make it possible to install apps while keeping the root partition read-only, thus bringing a whole catalog of apps to newbies not fluent in Linux, who would otherwise have needed to remount their root read-write and risk b0rking their SteamOS installation and bricking their consoles. Flathub removes potential user-support headaches for Valve, while still giving options to non-power users. (Would be a shame if you needed to be a seasoned Linux user to add applications to your gaming console.)

Comment Apple's history? (Score 1) 135

if they won’t bother to look at apple’s history?

Apple's history? You mean like the Newton MessagePad?

A device that did so poorly that it took Palm (and Handspring) to show Apple how unobtrusive pocket computers should be done, before Apple eventually gave it another go (and that initial go had catastrophically bad battery life)?

The Vision Pro can't be a success in its current form (size + weight; price; a weird product premise that assumes there's a market for people spending the whole day with a VR headset strapped to their head; a resolution that isn't that great for the specific use case (screen replacement) that Apple mainly shows in its advertising; etc.).

The question is whether this is going to be a Newton moment (product failed; it took another company to show how to do it successfully) or an Apple Watch moment (product completely missed the point Pebble was trying to make (e.g. long battery life, minimalistic functionality, etc.), and instead delivered a poor initial reinterpretation of the concept (a smartphone with a miniature screen and poor battery life strapped to your wrist), only eventually stumbling into something more workable, and mostly gaining market share through rabid fanboys who'll buy anything with an Apple logo slapped on it).

This also answers the "wonders if anybody would actually use it" part.
It has an Apple logo slapped on it; the insane fans will buy it, and some will even try to use it 24/7 as God^H^H^H Apple intended (and will need to buy an absurd number of power banks).

Comment Where is the BlueSky federation _TODAY_ ? (Score 1) 36

As the French saying goes:
"Un tiens vaut mieux que deux tu l'auras"

(or, as the English equivalent: "a bird in the hand is worth two in the bush")

Secondly, it's false. They now run two instances internally and {Paraphrased as: Blah, blah, blah, ...}

Okay, can you point me to a dozen 3rd-party, independently-run instances of BlueSky?
No, you can't.
Until that actually happens in practice, BlueSky is only theoretically promised to be distributable.

Meanwhile Mastodon, despite all the warts you find in its protocol, is.

ActivityPub is a garbage protocol. It just wasn't thought through well.

Nobody says it's perfect.
It's just that it's already working out in the wild.

Pick an area of focus for your demands, for God's sake.

My demands are simple:
I want people on other independent 3rd party servers to be able to interact with people on BlueSky's server(s).

You're the one who brought up the fact that bridges would be trivial. I am merely pointing out that BlueSky's team couldn't even be arsed to do that as a stopgap measure to be actually interoperable in practice.

Threads *is* ActivityPub based.

[citation needed]

thus cutting off the vast majority of the user base.

The vast majority of Threads' user base isn't even there yet.
Currently only a few select accounts are available over ActivityPub, as a test phase.

It literally happened to me, on Fosstodon.

Nobody said it never happened.
It's unfortunate it happened to you.
But in my experience it doesn't happen as frequently as some think.

Not when migration is so poorly implemented due to the protocol it isn't.

Yet migration is implemented already, and is already happening right now, out in the wild, between independent servers.

That's more than can be said about the vast majority of social networks (basically, nearly anything outside of the fediverse), including BlueSky. At best, as of today, they can move data between their own shards.

There's still a long way to go until one can do in practice what is already achievable on the fediverse, no matter how crappy and badly implemented it is.

TL;DR:
I make a big distinction between what can currently be done in practice NOW (no matter the warts)
vs.
the lofty promises of what could someday happen when (if?) some company opens up to 3rd parties (no matter how much they promise their protocol is going to be better).
