
Comment There's no "if" (Score 1) 164

Even the current LLMs can take over a significant number of jobs. And the losses won't be evenly distributed across the population. Some specialties will be hit harder and sooner than others; others will be hardly affected at all...by this round.

If we had lots of strong unions that were actively interested in furthering the benefits of their members, I'd worry less. As it is ... well, there's a strong imbalance of power, and it's going to get more unbalanced. Universal Basic Income would be a way to handle that, but we can't even get universal health care.

Comment Re:Maybe (Score 2) 96

I understand that.

I'm going based on what you said here, that a "frozen" kernel is an insecure kernel.

It's not that the current kernel is secure; it's just that the security bugs haven't been found yet. And the implication is that the current kernel contains a lot of security bugs (otherwise freezing the kernel would be OK, and backporting the patches feasible). So updating to a current kernel won't fix your security problems, it'll just hide them a bit longer.

Comment Re:Maybe (Score 1) 96

If there are so many bugs that it's hard to keep up with security patches, then your code is overall insecure. Updating to the latest version won't help that.

It's a nest of security bugs because the developers don't prioritize security. They don't prioritize security because users don't prioritize security. It's not even the third or fourth priority.

Comment Re:It helps (Score 1) 32

Almost nobody in the indie AI community cares whether the training data for a model is open source. We care about the license restrictions on the model. We can fine-tune or further train a foundation model however we want; the question is what we're allowed to do with it.

A lot of people just ignore the licenses, but that can come back to bite you, and I don't recommend it.

AI

'Openwashing' 32

An anonymous reader quotes a report from The New York Times: There's a big debate in the tech world over whether artificial intelligence models should be "open source." Elon Musk, who helped found OpenAI in 2015, sued the startup and its chief executive, Sam Altman, on claims that the company had diverged from its mission of openness. The Biden administration is investigating the risks and benefits of open source models. Proponents of open source A.I. models say they're more equitable and safer for society, while detractors say they are more likely to be abused for malicious intent. One big hiccup in the debate? There's no agreed-upon definition of what open source A.I. actually means. And some are accusing A.I. companies of "openwashing" -- using the "open source" term disingenuously to make themselves look good. (Accusations of openwashing have previously been aimed at coding projects that used the open source label too loosely.)

In a blog post on Open Future, a European think tank supporting open sourcing, Alek Tarkowski wrote, "As the rules get written, one challenge is building sufficient guardrails against corporations' attempts at 'openwashing.'" Last month the Linux Foundation, a nonprofit that supports open-source software projects, cautioned that "this 'openwashing' trend threatens to undermine the very premise of openness -- the free sharing of knowledge to enable inspection, replication and collective advancement." Organizations that apply the label to their models may be taking very different approaches to openness. [...]

The main reason is that while open source software allows anyone to replicate or modify it, building an A.I. model requires much more than code. Only a handful of companies can fund the computing power and data curation required. That's why some experts say labeling any A.I. as "open source" is at best misleading and at worst a marketing tool. "Even maximally open A.I. systems do not allow open access to the resources necessary to 'democratize' access to A.I., or enable full scrutiny," said David Gray Widder, a postdoctoral fellow at Cornell Tech who has studied use of the "open source" label by A.I. companies.
The Courts

The Delta Emulator Is Changing Its Logo After Adobe Threatened It (theverge.com) 56

After Adobe threatened legal action, the Delta Emulator said it will abandon its current logo for a different, yet-to-be-revealed mark. The issue centers on Delta's stylized letter "D", which the digital media giant says is too similar to its own stylized letter "A". The Verge reports: On May 7th, Adobe's lawyers reached out to Delta with a firm but kindly written request to find a different icon, an email that didn't contain an explicit threat or even use the word infringement -- it merely suggested that Delta might "not wish to confuse consumers or otherwise violate Adobe's rights or the law." But Adobe didn't wait for a reply. On May 8th, one day later, Delta developer Riley Testut got another email, this one from Apple, warning that his app might be at risk because Adobe had reached out to allege Delta was infringing its intellectual property rights.

"We responded to both Apple and Adobe explaining our icon was a stylized Greek letter delta -- not an A -- but that we would update the Delta logo anyway to avoid confusion," Testut tells us. The icon you're seeing on the App Store now is just a temporary one, he says, as the team is still working on a new logo. "Both the App Store and AltStore versions have been updated with this temporary icon, but the plan is to update them to the final updated logo with Delta 1.6 once it's finished."
