
Comment Re:Everyone has their own message app (Score 1) 50

I'm just curious as to why you'd use that instead of any manufacturer's or Google's offering, since those are feature-complete, have no ads, and no in-app purchases.

"I used it when options were worse, and I just like it the way it is" is a completely justifiable reason. For a lot of things in our lives there are better options, but having to switch carries cognitive and time costs that are just not worth it.

Comment Story that didn't print the important thing last (Score 1) 49

Story notes that the number changes massively every month: it went 3.5% → 2.23% → 5.33% across January, February and March this year.

This obviously isn't a change in the user base, but a change in tracking combined with a very low sample size, which makes it wildly inaccurate.

Props to the writer for not engaging in the mainstream-media clickbait method of "run a clickbait headline, spin a narrative for the entire story, then destroy that narrative by explaining what actually happened in the last two paragraphs." This is all laid out in the three opening paragraphs, and literally the first sentence is "if figures are accurate", proceeding to explain why they almost certainly aren't:

Quoting the story's opening below:

"If Valve's latest Steam Survey monthly figures are accurate, Steam on Linux enjoyed a very wild month of March. Steam on Linux is now above the 5% threshold and more than twice the size of the Steam on macOS marketshare.

Steam on Linux ended 2025 at around a 3.5% marketshare, dipped a bit in January, and fell to 2.23% in February. That's still much better than several years ago in the pre-Steam-Deck days when Steam on Linux was at around 1%. In absolute terms with the continued growth of the Steam user base, 2~3% was rather healthy considering all of its bumps over the past decade.

But Valve just published the Steam Survey results for March 2026 and they have never been so incredible for Linux... 5.33%! Steam on Linux was never above 5% and easily an all-time high for the Linux gaming marketshare, especially in absolute numbers. It was a massive 3.1% spike in March"

I.e. the month-over-month growth (3.1 points) is larger than the entire supposed Linux user base the month before. This is obviously a massive statistical inaccuracy hitting the limits of the error margin; these numbers have been tracked for many years, so we know what the approximate figure should be. The error rate for these sorts of surveys is usually 2-5% depending on sample size and methodology, so we're seeing that error manifest itself as a massive reported swing.
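As a rough sanity check, the textbook margin of error for a survey proportion can be sketched like this. The sample sizes below are hypothetical (Valve does not publish the survey's n), and an opt-in survey also suffers from sampling bias that this formula does not capture:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a survey proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample sizes -- Valve does not disclose the real one.
for n in (5_000, 50_000, 500_000):
    moe = margin_of_error(0.05, n)
    print(f"n={n:>7}: 5.0% +/- {moe * 100:.2f} points")
```

Note that at any plausible sample size the pure random-sampling error is well under one point, so a three-point swing points to a change in who gets sampled (the methodology/tracking issue above) rather than random noise alone.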

It would really help if people took any decent class on statistics.

AI

Google Announces Gemma 4 Open AI Models, Switches To Apache 2.0 License 3

An anonymous reader quotes a report from Ars Technica: Google's Gemini AI models have improved by leaps and bounds over the past year, but you can only use Gemini on Google's terms. The company's Gemma open-weight models have provided more freedom, but Gemma 3, which launched over a year ago, is getting a bit long in the tooth. Starting today, developers can start working with Gemma 4, which comes in four sizes optimized for local usage. Google has also acknowledged developer frustrations with AI licensing, so it's dumping the custom Gemma license.

Like past versions of its open-weight models, Google has designed Gemma 4 to be usable on local machines. That can mean plenty of things, of course. The two large Gemma variants, 26B Mixture of Experts and 31B Dense, are designed to run unquantized in bfloat16 format on a single 80GB Nvidia H100 GPU. Granted, that's a $20,000 AI accelerator, but it's still local hardware. If quantized to run at lower precision, these big models will fit on consumer GPUs. Google also claims it has focused on reducing latency to really take advantage of Gemma's local processing. The 26B Mixture of Experts model activates only 3.8 billion of its 26 billion parameters in inference mode, giving it much higher tokens-per-second than similarly sized models. Meanwhile, 31B Dense is more about quality than speed, but Google expects developers to fine-tune it for specific uses.
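The hardware claims above can be checked with back-of-envelope arithmetic: weight memory scales linearly with parameter count and bytes per parameter. This sketch counts weights only and ignores KV cache and activation overhead:

```python
# Rough VRAM needed just for the model weights at various precisions.
# Parameter counts are from the article; everything else is a
# back-of-envelope estimate ignoring KV cache / activation overhead.
def weights_gib(params_billion: float, bytes_per_param: float) -> float:
    """Weight storage in GiB for a model of the given size."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for name, params in (("26B MoE", 26), ("31B Dense", 31)):
    for precision, bpp in (("bf16", 2), ("int8", 1), ("int4", 0.5)):
        print(f"Gemma 4 {name} @ {precision}: {weights_gib(params, bpp):5.1f} GiB")
```

At bf16 the 31B model needs roughly 58 GiB for weights alone, which is why it fits on a single 80GB H100; quantized to 4-bit it drops to about 14 GiB, within reach of a 16-24GB consumer GPU.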

The other two Gemma 4 models, Effective 2B (E2B) and Effective 4B (E4B), are aimed at mobile devices. These options were designed to maintain low memory usage during inference, running at an effective 2 billion or 4 billion parameters. Google says the Pixel team worked closely with Qualcomm and MediaTek to optimize these models for devices like smartphones, Raspberry Pi, and Jetson Nano. Not only do they use less memory and battery than Gemma 3, but Google also touts "near-zero latency" this time around.
The Apache 2.0 license is much more flexible in its terms regarding commercial use, "granting you complete control over your data, infrastructure, and models," says Google.

Clement Delangue, co-founder and CEO of Hugging Face, called it "a huge milestone" that will help developers use Gemma for more projects and expand what Google calls the "Gemmaverse."

Comment Re:human vs slop (Score 0) 53

Facebook is tightly tied to the government: it launched the day after DARPA shut down LifeLog and was originally funded by Peter Thiel. It's always been intended as a global surveillance system. OpenAI also has ties to the US government and to many of the same Peter Thiel-backed entities.

OpenAI benefits from a global control grid. You know the surveillance apparatus China has, with companies assigning credit scores to individual people? The US and EU governments want that, but automated and on steroids. These people are evil, and OpenAI/Anthropic are devouring the world's data, eating through RAM and storage, ultimately pushing for technocratic subjugation of people to fit into their perfect regime.

Comment Re:AI can help here (Score 3, Insightful) 68

Chromebooks had zero to do with education. They were 100% about Google forcing every high school student to have a Google account as early as possible. I bet less than 1% of parents said, "No, we're not doing that. Here's an Ubuntu laptop instead. Never sign in to Google, 'cause I said so."

Everyone wants digital tracking of every human: governments (via ID programs), Google, Meta (both are pretty much governments at this point), Anthropic, OpenAI ... they all want to know exactly who everyone is. They all want a Technocracy.

Comment Re:hohoho (Score 4, Interesting) 69

The article is paywalled, and every other article I found was obviously LLM-generated shit that didn't link to this new implementation. It took me a bit, but I found at least one of the Rust implementations of Claude's CLI:

https://github.com/Outcomefocu...

I want to see Anthropic choke on this so bad.

Courts still haven't really ruled on AI-generated code in any big countries yet, as far as I can tell. Courts could view AI code the same as AI-generated images: non-copyrightable. Generated images can still be subject to trademark if you try to commercialize them, but code not so much. If code ever gets ruled non-copyrightable, any generated code is fair game if it gets leaked. Courts could also rule that it is subject to the copyright of the original training-data holders.

Both of these outcomes would be equally devastating to the entire industry, in entirely different ways. I'm kinda ready to see it all burn.
