Comment Re:It's not a next-gen xbox console (Score 1) 40

That's Microsoft's strategy for first-party titles. They don't have control over third-party developers. And even then there are some exceptions: Halo 5 never got a PC release, and, more recently, neither did the 2023 remaster of GoldenEye.

If there aren't that many third-party Xbox exclusives, that says more about the viability of the Xbox platform than about any specific strategy on Microsoft's part.

Comment Re:It's not a next-gen xbox console (Score 1) 40

If Microsoft had managed to produce a usable game streaming service, I might agree, but nobody but nVidia has pulled that off (with GeForce Now).

By the same logic, you could call this a PlayStation handheld, since you can run Sony's streaming service on it.

Besides, you can't really use a game streaming service unless you're tethered to a good home Internet connection, which renders a mobile gaming device somewhat useless.

Comment Re:It's not a next-gen xbox console (Score 3, Informative) 40

The term "xbox game" does not appear at all on the page that you linked to (outside of the fine print and navigation), let alone the phrase "Play all your PC and XBOX games."

The closest this thing gets to letting you play Xbox games is "Stream with Xbox Cloud Gaming (Beta)", which is just super-laggy cloud streaming, and "Xbox remote play", which is just streaming from a local Xbox console.

Comment Re:Availability (Score 1) 46

Doing a direct comparison between the mobile 2050 (which is Ampere) and the Switch 2's T239 (which is modified Ampere) while accounting only for the differences in core count and theoretical clocks ignores, among other things: the difference in GPU architecture (some features were backported from newer GPU generations); the difference in memory bus width and clock speed; the differences in power/voltage/temperature curves; the difference in API and OS overhead; the difference in DLSS model (the Switch 2 uses a lighter-weight model); and even the difference in core config. It's not just "1536 versus 2048": the scaling factor of the CUDA cores is not necessarily the same as the scaling in shader processors, texture mapping units, render output units, RT cores, and tensor cores. There's so much they didn't account for that their comparison is meaningless.

Take the RTX 2050 mobile and RTX 3050 mobile, for example. They are actually both built on the GA107 die, and both have 2048 CUDA cores. So with the same CUDA count, you should be able to compare them directly, accounting only for the clock speed difference the way Geekerwan did, right? But the RTX 3050 mobile has half as many RT cores and a quarter as many tensor cores, yet twice the memory bus width, 86% of the memory clock, and 1.8x the max TDP. And who knows how the voltage/clock speed curve or thermal throttling differs. So you can't compare them like this at all!

That's my point. Geekerwan's benchmark relies on the assumption that the *only* things that affect performance are CUDA core count and GPU core clock speed, and that's an invalid assumption.
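
To make that concrete, here's a toy sketch in TypeScript. The CUDA counts and the memory bus/clock figures are the spec numbers discussed above; the core clocks are illustrative placeholders, not measurements:

```typescript
// Toy model of the "performance = CUDA cores x core clock" assumption.
// Core counts and memory specs are the published figures cited above;
// the clock values are illustrative placeholders, not measurements.

interface GpuSpec {
  name: string;
  cudaCores: number;
  coreClockMHz: number;  // illustrative boost clock
  memBusBits: number;
  memClockMTps: number;  // effective memory data rate (MT/s)
}

const rtx2050: GpuSpec = {
  name: "RTX 2050 mobile", cudaCores: 2048, coreClockMHz: 1477,
  memBusBits: 64, memClockMTps: 14000,
};
const rtx3050: GpuSpec = {
  name: "RTX 3050 mobile", cudaCores: 2048, coreClockMHz: 1500,
  memBusBits: 128, memClockMTps: 12000,
};

// The naive model the benchmark implicitly uses:
const naiveScore = (g: GpuSpec) => g.cudaCores * g.coreClockMHz;

// Just one of the many factors that model ignores:
const bandwidthGBps = (g: GpuSpec) => (g.memBusBits / 8) * g.memClockMTps / 1000;

console.log(`Naive perf ratio: ${(naiveScore(rtx3050) / naiveScore(rtx2050)).toFixed(2)}x`);
console.log(`Bandwidth ratio:  ${(bandwidthGBps(rtx3050) / bandwidthGBps(rtx2050)).toFixed(2)}x`);
// Prints ~1.02x vs ~1.71x: the cores-times-clock model predicts near
// parity while memory bandwidth alone differs by more than 70%, before
// even considering RT/tensor core counts, TDP, or throttling behavior.
```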

Comment Re:Availability (Score 1) 46

Their GPU benchmarks are extremely questionable: they make a lot of assumptions and fail to account for a lot of factors. There are so many such assumptions that they might as well have just made up numbers for where they guessed the performance might be.

Their CPU benchmarks aren't as bad, but they do assume that the A78AE and A78C will have identical clock-for-clock performance, despite the architectural difference (two 4-core clusters versus one 8-core cluster, which matters for things like cache access) and likely radically different power/thermal limits.
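
Here's a toy illustration of how the cluster topology alone can move the numbers. Both latency figures are hypothetical placeholders, purely to show the shape of the effect:

```typescript
// Cores in the same cluster share an L3 slice; traffic between clusters
// crosses the interconnect. Both latencies below are made-up placeholders.
const intraClusterNs = 30;  // hypothetical same-cluster core-to-core latency
const interClusterNs = 90;  // hypothetical cross-cluster latency

// In a 2x4 layout (A78AE), a random pair of cores spans two clusters
// 4 times out of 7; in a 1x8 layout (A78C) it never does.
const pCross = 4 / 7;
const avg2x4 = (1 - pCross) * intraClusterNs + pCross * interClusterNs;

console.log(`avg core-to-core, 2x4 clusters: ${avg2x4.toFixed(0)} ns`);
console.log(`avg core-to-core, 1x8 cluster:  ${intraClusterNs} ns`);
// With these placeholder numbers, threads that share data pay roughly
// double the average communication cost on the 2x4 layout -- "identical
// clock-for-clock performance" only holds for workloads that never
// touch another core's cache.
```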

Comment Re:I am SHOCKED! (Score 2) 120

There were other issues beyond the weight. The cost, for one thing. The sub-par head attachment setup. The lack of cross-platform software.

They could have improved both the cost and the weight by using lighter materials (injection-molded plastic instead of milled aluminum) and by dropping useless features like the creepy outward-facing eye display. They could have improved the perceived weight with a better mounting mechanism, such as a halo-style strap.

It's like Apple ignored the past decade of lessons from the VR industry, at least regarding the importance of weight and comfort. And they should have realized that they needed to sell the thing at cost, or even at a loss, to build a customer base and start leveraging economies of scale and app store revenue to drive a profit. It had a $1,500 BOM and they sold it for $3,500. They wanted to make a profit on the hardware and recoup their R&D costs, but instead they sold so few units that the whole project lost tons of money.

It cost 7x as much as the market leader, and it wasn't a 7x better product. They could have gotten away with charging a significant premium over the competition, but they were never going to succeed at seven times the price.
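
Putting rough numbers on that (the BOM and launch price are the figures above; the ~$500 market-leader price is my assumption, implied by the 7x comparison):

```typescript
// Back-of-envelope on the pricing, using the figures from the comment.
const bomUsd = 1_500;    // estimated bill of materials
const priceUsd = 3_500;  // launch price
const leaderUsd = 500;   // assumed market-leader price (the "7x" baseline)

console.log(`Markup over BOM: ${(priceUsd / bomUsd).toFixed(2)}x`);   // 2.33x
console.log(`Price vs leader: ${(priceUsd / leaderUsd).toFixed(1)}x`); // 7.0x
// Selling at the ~$1,500 BOM would still have been 3x the market
// leader -- a steep but arguably defensible premium for the hardware.
```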

Comment A desperate attempt (Score 1) 137

My understanding is that lithium manganese-rich (LMR) batteries offer potentially higher energy density but much worse cycle life, which could make the trade-off not worth it compared to LFP.

Seems to me like, having allowed the Chinese to get many years ahead in LFP technology, GM/LG are trying to leapfrog them by going with something completely different. Time will tell if that makes any sense versus just buying the latest and greatest LFP cells from the Chinese.
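
One quick way to see why cycle life can outweigh density is to compare total lifetime energy throughput. All cell figures below are hypothetical, purely to illustrate the trade-off:

```typescript
// Toy lifetime-energy comparison: a denser chemistry with a shorter
// cycle life can deliver less total energy than LFP over the pack's
// life. All figures are hypothetical illustrations, not real cell specs.

interface Chemistry {
  name: string;
  whPerKg: number;  // gravimetric energy density
  cycles: number;   // full cycles to end-of-life
}

const lfp: Chemistry = { name: "LFP", whPerKg: 180, cycles: 3000 };
const lmr: Chemistry = { name: "LMR", whPerKg: 240, cycles: 1000 };

// Total energy one kilogram of cells delivers over its life:
const lifetimeKWhPerKg = (c: Chemistry) => (c.whPerKg * c.cycles) / 1000;

for (const c of [lfp, lmr]) {
  console.log(`${c.name}: ${c.whPerKg} Wh/kg x ${c.cycles} cycles = ${lifetimeKWhPerKg(c)} kWh/kg lifetime`);
}
// With these made-up numbers, LFP delivers 540 kWh/kg of lifetime
// throughput versus 240 for LMR: a 33% density win doesn't pay for a
// 3x cycle-life loss unless weight/range matters far more than longevity.
```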

Comment Re:They Don't Care (Score 1) 46

Hardware decode support isn't critical for all devices. For example, a non-Pro iPhone 14 or 15 can play AV1 just fine with software decoding, because it has more than enough compute available; it's just much less efficient. That doesn't really change the big picture, since lots of devices don't have the processing power to brute-force it, so decode support still isn't widespread enough; it just means the situation isn't *quite* as bad.
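
For what it's worth, you can check which bucket a given device falls into from a browser: the MediaCapabilities API reports `supported` (can decode at all, possibly in software) separately from `powerEfficient` (roughly, hardware decode). A minimal sketch, using an arbitrary 1080p30 AV1 configuration:

```typescript
// Probe AV1 decode capability via the browser MediaCapabilities API.
// The 1080p30 configuration below is an arbitrary example.
async function checkAv1(): Promise<void> {
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: "file",
    video: {
      contentType: 'video/mp4; codecs="av01.0.08M.08"', // AV1 Main profile, 8-bit
      width: 1920,
      height: 1080,
      bitrate: 4_000_000,
      framerate: 30,
    },
  });
  // A device limited to software decode would typically report
  // supported (and perhaps smooth) but not powerEfficient.
  console.log(`supported: ${info.supported}, smooth: ${info.smooth}, ` +
              `powerEfficient: ${info.powerEfficient}`);
}

void checkAv1();
```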

Comment Re:Serious question (Score 1) 11

Intel got low-NA EUV tools before TSMC too, and that didn't stop TSMC from leapfrogging Intel on EUV adoption. TSMC has decided to do the same with high-NA: delay adoption a bit (waiting for the tooling to mature) and rely on multi-patterned low-NA in the interim. They expect that, even with multi-patterning's lower throughput, it will still be cheaper than high-NA, and high-NA also can't do large dies like the kind that Apple and nVidia make, since its exposure field is half the size.

Node names are meaningless (18A, 14A, 10A are just marketing labels), but from what I can gather, Intel plans to move to high-NA at 14A, and TSMC at 1.0nm (i.e., 10A). That was from reports a year ago, though. Either way, it seems likely that TSMC will wait a process-node generation or two longer than Intel.
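
On the big-die point, the reticle arithmetic is simple: low-NA EUV exposes up to a 26 x 33 mm field, while high-NA halves that to 26 x 16.5 mm, so anything larger must be stitched from multiple exposures. The ~800 mm² figure below is an illustrative stand-in for a reticle-limit datacenter GPU die:

```typescript
// Exposure-field (reticle) limits for EUV scanners.
const lowNaFieldMm2 = 26 * 33;     // 858 mm^2, the standard full field
const highNaFieldMm2 = 26 * 16.5;  // 429 mm^2, high-NA's half field

// Illustrative size for a reticle-limit datacenter GPU die:
const bigDieMm2 = 800;

console.log(`Fits low-NA field:  ${bigDieMm2 <= lowNaFieldMm2}`);   // true
console.log(`Fits high-NA field: ${bigDieMm2 <= highNaFieldMm2}`);  // false
// A die like this prints in one low-NA exposure but would need two
// stitched high-NA exposures -- one reason TSMC can afford to wait.
```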
