
Comment Re:What about QuickShare? (Score 1) 32

There are three major problems with QuickShare:

1. You need to have QuickShare. When you need to share something, the chances of a given person having your specific app installed (out of the thousands available) are basically zero.

2. You need to use the app. There is no app for AirDrop; it's a system-wide service that works with virtually any data type you encounter on an Apple device. You just tap the "Share" button and choose who to AirDrop it to.

3. It can only share files. In many cases, AirDrop creates the data package on the fly - it can share URLs, contact info, locations, passkeys, notes, etc. File-sharing apps generally can't do this, because there are no files associated with those things.

Comment These are incompatible requirements (Score 1) 43

The idea of deleting a consumer's data is at odds with the requirements of CCPA opt-out. Anyone who submits an opt-out request needs to be on record as not wanting their data used for certain purposes. If a data broker is required to delete the consumer's data, how is it supposed to honor the opt-out going forward, when newly acquired data might include those same identities again?
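To make the tension concrete, here is a minimal Python sketch of the usual mechanism, a suppression list: to keep honoring an opt-out after deletion, the broker has to retain at least a derived identifier (a salted hash here), which is itself data about the consumer. The salt, field names, and helper functions are hypothetical.

```python
import hashlib

SALT = b"broker-wide-salt"  # assumption: a fixed salt used for matching

def suppression_key(email: str) -> str:
    """Derive a stable, non-reversible key from an identifier."""
    return hashlib.sha256(SALT + email.strip().lower().encode()).hexdigest()

# Kept even after the consumer's record is "deleted" - this is the rub.
suppression_list = {suppression_key("jane@example.com")}

def ingest(record: dict) -> dict | None:
    """Drop newly acquired records that match a suppressed identity."""
    if suppression_key(record["email"]) in suppression_list:
        return None  # honor the opt-out
    return record

print(ingest({"email": "Jane@Example.com "}))  # None: suppressed
print(ingest({"email": "bob@example.com"}))    # passes through
```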

Comment Re:Is people really using notebooks for AI? (Score 1) 75

There are open models that can trade blows with the big commercial ones (like DeepSeek and Qwen), but those are too large to run on notebooks today. The models you can run on a laptop tend to be in the 20B to 120B range. They are not as strong on general knowledge, but they can excel at purpose-specific tasks. Take a look at MedGemma, or Qwen3 Coder Flash. Qwen Coder is super fast on a good laptop and can beat the original GPT-4 at most coding tasks. Amazingly, these smaller models are often as good as the state-of-the-art models from 12-18 months ago.
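As an illustration of how little ceremony this takes, here is a minimal sketch using the llama-cpp-python bindings to run a 4-bit local model; the GGUF file name is a placeholder assumption - point it at whatever quantization you downloaded.

```python
# Sketch: run a small coding model locally with llama-cpp-python.
# Assumption: you have already downloaded a 4-bit GGUF file, e.g. a
# Qwen3 Coder Flash quantization; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-coder-30b-a3b-q4_k_m.gguf",  # placeholder file name
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU/Metal if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```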

Submission + - A free and secure coding AI, courtesy of China (x.com)

DogFoodBuss writes: This week, Alibaba's Qwen team released Qwen3-Coder-Flash. It's believed to be the most capable AI coding model that can run locally on a developer laptop. The mixture-of-experts architecture delivers excellent throughput by activating only about 3 billion of its 30 billion parameters per generated token, and it supports a 256K-token context, extendable to 1M. Qwen also claims the model can beat GPT-4 and DeepSeek V3 on many benchmarks.
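Some back-of-the-envelope Python shows why this mixture-of-experts layout matters on a laptop: memory has to hold all 30B weights, but each token only reads the ~3B active ones. The 4-bit quantization and ~100 GB/s bandwidth figures are assumptions for a typical laptop.

```python
# Rough numbers for a 30B-total / ~3B-active MoE model at 4-bit weights.
# Assumption: decode speed is roughly bound by streaming the *active*
# weights from memory once per generated token.
total_params  = 30e9
active_params = 3e9
bits_per_weight = 4

weights_gb = total_params * bits_per_weight / 8 / 1e9
active_gb  = active_params * bits_per_weight / 8 / 1e9
print(f"weights in RAM:  ~{weights_gb:.0f} GB")  # ~15 GB: must fit in memory
print(f"read per token:  ~{active_gb:.1f} GB")   # ~1.5 GB: streamed each token

mem_bandwidth_gbs = 100  # assumption: ~100 GB/s laptop memory bandwidth
print(f"rough decode ceiling: ~{mem_bandwidth_gbs / active_gb:.0f} tokens/s")
```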

Ready-to-use quantizations are now available from Unsloth AI (https://huggingface.co/unsloth...) and in Apple's MLX DWQ format on Hugging Face (https://huggingface.co/mlx-com...).

Comment False headline (Score 4, Informative) 121

The headline is not representative of what the survey says AT ALL, which is that current AI cannot be scaled up to AGI. Anybody with half a brain could have told you that more research is still needed. Even Sam Altman has said as much. Calling current AI a "dead end" completely ignores all of the demonstrably useful stuff it can do now.

Comment China is ready to pick up the slack (Score 2) 55

I understand how it sucks for artists to see their work used in ways they didn't intend, but I don't think there is much we should be doing about it. If the West starts imposing fines or fees for AI training, all that means is that the best AI products will start coming out of China. China doesn't give a rip about IP laws, and would love to see its Western competitors hamstrung.

Comment Only censored for the hosted version (Score 4, Insightful) 65

If you can be bothered to host DeepSeek yourself, there is no such censorship. R1 will happily talk about the issues regarding Taiwan and Tank Man. Of all people, Slashdotters should know not to rely on cloud services. https://x.com/fabianwilliams/s...
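For the self-hosting route, a minimal sketch, assuming you serve the weights with an OpenAI-compatible server such as vLLM or llama.cpp's llama-server; the port and registered model name are assumptions about your local setup.

```python
# Sketch: query a self-hosted model through the OpenAI-compatible API
# that servers like vLLM and llama-server expose. The base_url and the
# model name below are assumptions for a local deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="deepseek-r1",  # whatever name your local server registered
    messages=[{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}],
)
print(resp.choices[0].message.content)
```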

Comment Re: Amazing The Speed Of Rollout (Score 0) 30

There is only one version of DeepSeek R1. It has 671 billion parameters, and a common 4-bit quantization requires about half a terabyte of fast memory to load. If you squeeze it down to roughly 1.5 bits per weight, you can run it in just 200 GB of VRAM (with some loss of quality). The different "versions" you might be referring to are just good old Llama and Qwen, which have been fine-tuned on R1 data. Those are completely different architectures from DeepSeek.
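The memory figures follow from simple arithmetic; a quick sketch (the 1.58 bits-per-weight figure is an assumption matching common dynamic quantizations, and real usage adds KV cache and runtime overhead):

```python
# Back-of-the-envelope memory math for DeepSeek R1 (671B parameters).
def weight_gb(params: float, bits_per_weight: float) -> float:
    """Raw weight storage in GB; actual usage adds KV cache and overhead."""
    return params * bits_per_weight / 8 / 1e9

params = 671e9
print(f"4-bit:    ~{weight_gb(params, 4):.0f} GB")     # ~336 GB -> ~half a TB loaded
print(f"1.58-bit: ~{weight_gb(params, 1.58):.0f} GB")  # ~133 GB -> ~200 GB in practice
```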
