Comment Switch 2 backward compatibility? (Score 1) 107

preserving games that would otherwise be lost to time once the console is obsolete.

Though it remains to be seen how soon that will happen for Switch games; it depends on how the upcoming Switch 2's backward compatibility pans out.

But yeah, at some point in the future only emulators and dumped games will remain.

Comment Moral != Legal (Score 1) 107

In addition, dumping your own owned games and keys from your own owned console and cartridges, in order to run them with modifications, is perfectly legal.

Yes, that's perfectly moral. Should be legal. And is actually legal in some jurisdictions. But not all.

That's what Nintendo is arguing:
- dumping your own console's DRM keys and dumping your own games requires that you hack your own Switch. Nintendo argues this is a violation of the DMCA, and that outbound links to instructions on how to do so are illegal.

- running games dumped this way on hardware which is not a Switch (e.g. on a SteamDeck) is a violation of Nintendo's licensing terms, as they decide what they will allow you to run your games on.

Comment duplicate DRM (Score 1) 107

All games ever playes on Yuzu are illegally obtained or cracked.

Switches employ per-device DRM and, AFAIK, Switch games use per-copy keys.

The canonical (and simplest) way to use Yuzu is to duplicate the DRM key of the Switch you already own, and dump your own game.

(But that requires rooting your own Switch, and Nintendo actually argues this is a DMCA violation and that outbound links to instructions on how to do so are illegal).

Comment Not breaking the DRM, need an actual Switch (Score 2) 107

it's an actual product that they're still selling in stores and trying to earn a living from it really is enabling piracy that's cannibalizing their sales.

(Keep in mind that Nintendo typically makes its money by selling games. Across the whole gaming sector, consoles are usually sold either at a loss or close to cost. Selling fewer Switches doesn't cost them money, only public visibility and lock-in. So the whole argument of emulators versus sales of machines isn't very convincing).

Yuzu technically cannot cannibalize sales of Switch consoles: it doesn't break the DRM. Instead it relies on duplicating the key of a Switch you already own
(and Nintendo's argument is that you need to hack the Switch in order to do so, which would be a DMCA violation, and that Yuzu's website or Discord points to instructions on how to do so).
The way the DRM scheme is designed, you couldn't use Yuzu without an actually paid-for Switch (though some pirate could probably try using keys downloaded from some warez website; then again, Nintendo could very trivially blacklist those DRM keys).

The main use of Yuzu (given the way it is designed) is to allow you to leave your Switch at home, take only your, e.g., SteamDeck with you, and still be able to run your usual games while pretending to have your exact Switch (and its DRM keys) with you.
The advantages are not needing to lug around multiple devices, higher performance on some platforms (including the SteamDeck), and alternative inputs (e.g. for accessibility).
Nintendo's argument is that running games on non-Switch hardware is a violation of its licensing terms, and that it alone should decide which devices you are (or are not) allowed to run your games on.

AFAIK Nintendo games nowadays employ per-copy keys, so it would be more difficult to use a pirated copy downloaded from the internet than a dump of your own game made with your own Switch (which needs to be hacked, so again Nintendo argues the same DMCA violations and links-to-instructions as above).

Tears of the Kingdom is a bad example, as the version pirated before release wasn't used on Yuzu but on Ryujinx back then.

Comment Sound mixing in the early 90s (Score 1) 12

I remember, when I was a young lad thinking I could change the world, and I went on the journey to develop my own game engine, surprisingly one of the hardest parts was finding a good sound component/engine. - I'm thinking of the mind set I had 10\15 years ago

Funnily enough, that part was a bit easier back in the 90s, especially when making simple 2D games, as there weren't that many fancy effects.

A "sound engine(*)" was mostly mixing multiple samples (in software, unless you were lucky to have a Gravis, or later an AWE) and playing them over the soundcard.
Which by then mostly meant either a Sound Blaster-family card or something at least compatible with the SBPro, so with a relatively narrow number of hardware interfaces to support.
(And throw in support for digital samples on PC Speaker used in pulse-width mode to support lower end machines. Add some parallel-port home-made resistor ladder (COVOX Speech Thing compatibles). Or if you feel very fancy, playing digital samples over FM chips like AdLib to cover for older machines).

Once you got the mixing working, you could use it both for music, to play MODs and other similar tracker music (by mixing the instruments' digital samples at varying volumes and frequencies), and for sound effects (mixing digital sound effects at various volumes, mostly based on distance).

As the games were much simpler back then, sound effects really were just that: play samples more or less loudly based on how far away their sources are, with different left-right volumes for a stereo effect.
So most of the programming tricks boiled down to mixing fast enough on slower CPUs, somewhat smoothing the sound when changing frequencies, and mostly making the driver layer compatible across the widest range of cards.
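
To make the "mostly mixing samples" part concrete, here's a minimal sketch (in Python for readability; nothing here comes from a real engine, and back then this loop would have been hand-tuned assembly over 8-bit samples): every active voice is summed into a stereo accumulator at its own volume, pan and playback rate, then the result is clipped.

def mix(voices, out_len, out_rate=22050):
    # Sum every active voice into left/right accumulators, then clip.
    left = [0.0] * out_len
    right = [0.0] * out_len
    for v in voices:
        step = v["rate"] / out_rate              # ratio != 1.0 shifts the pitch
        vol = max(0.0, 1.0 - v["dist"] / 100.0)  # "closer is louder"
        l_gain = vol * (1.0 - v["pan"]) / 2.0    # pan in [-1, 1] splits the
        r_gain = vol * (1.0 + v["pan"]) / 2.0    # volume between both channels
        pos = 0.0
        for i in range(out_len):
            j = int(pos)
            if j >= len(v["data"]):
                break                            # this voice has finished
            left[i] += v["data"][j] * l_gain     # nearest-neighbour resampling:
            right[i] += v["data"][j] * r_gain    # cheap, but needs smoothing
            pos += step
    clip = lambda ch: [max(-1.0, min(1.0, s)) for s in ch]
    return clip(left), clip(right)

A tracker replayer is the same loop, except the volumes and rates are driven by the pattern data instead of by distance; the hard part in the 90s was making that inner loop fast enough and shipping drivers for every card.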

Contrast that with graphics programming, which tried to get as many effects as possible out of the limited hardware capabilities of EGA, VGA and SVGA, requiring tons of low-level register tweaking (planar 256-color modes, latches, etc.) or complex programming (VESA bank switching). It got even more complex once you wanted to do software 3D.

That's probably why there weren't that many good libraries when you started: the preceding decade had been spent putting effort into graphics while thinking "yeah, simple sample mixing will do" about the sound. (Spoiler: no, not anymore. If you want more realistic 3D games, you suddenly need a lot more effort in 3D audio, well beyond "closer is louder": you need to map environments, do Doppler, do echo and reverb, etc.; cue A3D and the like.)

(*): for the amateurs, indies, and demoscene.
For big companies (think Sierra, LucasFilm, etc.) it was mostly about supporting MIDI and tweaking a few specific music synthesizers (poking the hardware registers of AdLib's FM synth, SysEx on the Roland MT-32, etc.), or later supporting General MIDI and having the music sound like crap: if you weren't using the same synth as the artist (usually a Roland SC-55), you were stuck with whatever General MIDI sound bank your sound card had to offer (often some very twangy instruments on a botched OPL clone chip, or an awful rompler).

Comment Also biomechanics (Score 1) 30

I imagine this will need to be a fairly snug fit to get a decent read... which I unfortunately know is something that can exacerbate carpel tunnel.

The part that makes it worse (both for you and for making this product work in general) is that by the time the nerve has reached the carpal tunnel, it's mostly carrying sensory input (that's why you feel pain when it's compressed) and a lot less motor output to muscles (that's why the approach won't work well).

Most of the stronger muscles that move your fingers (especially for forceful motions like a strong grasp) are in your forearm.
At the level of the wrist there isn't any nerve output for those muscles; even the muscles themselves have ended, and all you have is a large bundle of tendons.

It would probably be possible to detect movement with ultrasound, but if you try picking up either nerve impulses or the muscles' electrical activity, you'll mostly only pick up the few muscles in your palm that are in charge of fine motion.
i.e.: on a "pinching" motion, you'll detect which fingers are aligning (based on the impulses going to the thenar and lumbrical muscles), but you won't pick up the fingers flexing (pulled by tendons coming from muscles in the forearm whose nerves don't pass through the wrist).

(Here's an anatomy poster with only paintings, no photos: https://anatomywarehouse.com/m...
Sources with more detailed explanations, warning: cadaver photos at the end:
https://teachmeanatomy.info/up...
https://teachmeanatomy.info/up...
https://teachmeanatomy.info/up... )

Comment DRMed (Score 2) 82

They are cheap, endlessly shareable, always yours

...and are encumbered with a couple of DRM systems, some of which are rather convoluted.

Which means that at some point in the future:
- you will need to connect your player to the internet (or use a USB stick) and update its firmware to view newer discs.
- ...but that upgrade could potentially block you from watching older media that you already own but which Disney and Sony have decided to deprecate and not cover in the newer firmware.

or:
- never connect the player to the internet and keep the current firmware (and its precious keys) as-is forever (until the player breaks), and be guaranteed that your current discs will keep working (until the discs bit-rot) even if Sony and Disney decide to deprecate some encryption keys.
- ...but then newer discs might not be playable.

or:
- you need to keep a collection of players of various vintages (might be easier with software players).

or:
- you need to rip those discs onto your server, and/or keep your own cracked keys, to keep access to the discs.
- this would very likely fall under the local equivalent of fair use in most European countries around here.
- but it could land you in prison and/or get you severe fines in other jurisdictions, including the US.

Comment Obviously (Score 1) 12

Phones can probably do it. Well, android ones.

Given that internally the Sony Portal runs Android, it's not a surprise they eventually managed it.

I suspect an element of spite,

Yup, Sony's official statement is that the Portal is streaming-only and that there wouldn't be any way to run games directly on it.

So these guys' efforts are basically a big "Well, actually..." regarding that last point.

Comment Relevant (Score 1) 12

Andy's bio states:

Cloud Vulnerability Research @ Google

i.e.: he specialises in investigating vulnerabilities.
In other words: hacking shit is actually his job description.

(But yeah, the fact that he usually does it at Google isn't that relevant, and TFS on /. could have emphasized that day job better.)

Comment Field of View (Score 2) 203

And you forgot to add the most important one: field of view.

Apple's design for the Vision Pro blocks peripheral vision.

So even if its latency is indeed as imperceptible as Apple pretends (we wonder) and its resolution high enough (it's definitely NOT: the "4k" pixels are spread over a much larger part of the view, and even with the pincushion distortion, resolution in the center isn't that high; see, e.g., the analysis by KGuttag), the field of view in the Vision only covers what's in front of the user; there's no peripheral view covered by the screens.
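
(Back-of-the-envelope, with assumed ballpark figures rather than official specs: spreading roughly 3.7k horizontal pixels per eye over a field of view on the order of 100 degrees works out to around 35-40 pixels per degree, while matching 20/20 visual acuity takes about 60 pixels per degree.)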

Contrast with latest-gen military night-vision goggles, which double the light-amplifier tubes per eye just to widen the field of view horizontally and extend it into the periphery.
Contrast also with AR glasses, and "open" VR glasses, which let you see the actual world from the sides.
Those do let the user notice things coming in from the sides.

Also, speaking of looking at stuff: a core design flaw of the Vision is that your gaze is your mouse, so you must constantly look directly at the interface elements you're interacting with(*). You can't just keep looking straight at the road and fumble with stuff in your peripheral vision like with a car's infotainment system (and again, you don't even HAVE peripheral vision to begin with).

Oh, "one more thing(tm)", there's no dedicated Vision version of Google Maps on the Vision, it's just the tablet version, so you don't get the cool uses like AR super-imposed "follow the floating light ribbon" navigation that would have been an actual possible use case for AR while driving.

---

(*) That's how you know the video was kind of staged: actual AVP use doesn't require holding your hands in the air in front of you like in Minority Report. One can just pinch fingers while resting a hand on the lap (or on the wheel, as long as the cams can see it). The driver does the "Minority Report"-style hands to make the joke more obvious.

Comment Containers (Score 1) 84

So flatpack is basically a separate userspace, not managed by the package manager? {...} I understand it as "a distro in your distro"

That more or less describes how containers work.
The whole idea can be summed up as "chroots, but on steroids" (i.e. better isolation), and just like with chroot, each container is a different userspace.

(With different strategies to manage it:
with LXC, you would run that distro's tools (e.g. pacman if it's Arch) instead of your host's tools (e.g. aptitude if it's Ubuntu);
with Flatpak and Docker, you play with layers.)
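
To illustrate the "chroots, but on steroids" comparison, here's a minimal sketch of the plain-chroot half of the idea (Python, run as root, with a hypothetical guest root filesystem prepared at /srv/guest; real containers add namespaces and cgroups on top for the actual isolation):

import os

os.chroot("/srv/guest")  # the guest's userland now appears as "/"
os.chdir("/")            # don't leave the cwd pointing outside the jail
# From here on, /bin/sh and every shared library come from the guest
# userspace, not the host's: "a distro in your distro".
os.execv("/bin/sh", ["/bin/sh"])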

Will a flatpack from today run on flatpack in 10 year's time?

In theory, yes.
In practice, that 10-year-old flatpak will need a specific userland that most probably will, by then, come with a big blinking deprecation warning:
"Warning: your version of us.zoom.Zoom is compatible with org.freedesktop.Platform.GL.default only up to version 25, which comes with its own deprecation warning: 'Absolutely never connect this flatpak to the internet.'"

To come back to the SSL example: next year somebody will probably come out with an updated runtime which ships a fixed openssl 1.1.1zy. A couple of years later, the answer will be "move to openssl 3.0 already!", and your old Zoom flatpak will not be able to move to a newer, secure runtime (without being ported to the new library's API and rebuilt).

It's roughly the same situation as having an old legacy RPM whose dependencies eventually stop being updated. You'll need to port the source to the new API used by the new libraries (if you have access to the source), or deploy a whole stack of outdated dependencies.
The difference is that:
- those RPMs would be distro-specific. They might not work on a different distro using RPM (e.g. openSUSE vs. Red Hat vs. SailfishOS), and other distros might not even use RPM (DEB on Debian, Ubuntu, etc.). i.e. you need one package per distro (or at least per family of distro derivatives), optimised for the dependencies there.
- whereas flatpaks target the same flatpak runtimes no matter which distro you're on.
- packages like RPM (and DEB) are extremely granular (e.g. one per shared library),
- whereas you only use a couple of flatpak runtimes at most.

Comment Enshittification marches on. (Score 1) 108

Up to 3.5 min per hour they said:

Amazon's presentation said the average ad load per hour is expected to be between two and three-and-half minutes,

...for now. Fast-forward a year or so and you'd be lucky to get 3.5 min of content between each hour of ads.

I'll just keep watching for free on the various pirate sites.

Yup, looking it up on ThePirateBay and torrenting it seems a lot more convenient.

Comment Layered design (Score 1) 84

Except flatpacks/snaps/docker images, those have to be updated by their respective maintainers.

Not quite.
Docker and Flatpak work in layers.
Flatpak in particular has "runtimes" (base systems) and is often well integrated into the distro's package manager (e.g. on the Arch-based SteamOS running on the SteamDeck).

The SSL libraries are part of such base layers.
So if there is a bug fix, most likely you're going to see (either directly from your package manager, or when typing "flatpak update") an update to the "org.freedesktop.Platform.GL.default", "org.gnome.Platform", etc. runtimes, not to the individual flatpaks.

Docker isn't as convenient, as it's a Git-like DAG of commit-like layers: if the base "ubuntu:latest" layer changes, the hashes of the images built on it change, so the software needs a new release too.
Luckily, Docker is very convenient to automate with CI/CD, and rebuilding a new image is trivial.
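
To illustrate that "Git-like DAG" point, a toy sketch (hypothetical, not Docker's actual on-disk format): each layer's ID hashes its own content together with its parent's ID, so patching a base layer gives every image built on top of it a new identity.

import hashlib

def layer_id(content, parent_id=""):
    # A layer's identity covers its content AND its whole ancestry.
    return hashlib.sha256((parent_id + content).encode()).hexdigest()[:12]

base_v1 = layer_id("ubuntu:latest, openssl 1.1.1")
app_v1 = layer_id("COPY myapp /usr/bin", parent_id=base_v1)

base_v2 = layer_id("ubuntu:latest, patched openssl")       # base changes...
app_v2 = layer_id("COPY myapp /usr/bin", parent_id=base_v2)

print(app_v1 != app_v2)  # True: same app layer, but a new image identity

Hence the CI/CD rebuild: the app layer's content didn't change, yet the image still needs re-publishing.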

Note that this is also the case with package managers like Nix, which likewise treat the process of building packages as successive layers in a DAG.

There is also no way to ask into those containers whether they're running vulnerable versions of libraries.

There's no ultra-straightforward automatic way, BUT:

In the case of Flatpak, it's often well integrated into the package manager, so you're going to get a pop-up from your package manager telling you to update a flatpak runtime, in addition to updating a system library.

In the case of Docker, the command-line approach still works, so the way to ask containers is to run commands in their shell (e.g. "docker exec mycontainer dpkg -l | grep libssl" to list the SSL libraries inside a hypothetical Debian/Ubuntu-based container). The drawback is that you need to be fluent in several package management systems (e.g. you're running Arch Linux and usually rely on "pacman" to report such library versions, but you also need to be fluent in "aptitude" because most of the Dockers you use are built on an "ubuntu:latest" base).
