
Submission + - Where are the Open-Source Local-Only AI Solutions?

BrendaEM writes: Do you remember how it was portrayed in movies, when people would just talk to their computer and it would do things? As implemented, AI has been perverted: why does it have to take the work of others? Why can't we each have our own AI software that runs locally and takes nothing from anyone else? It doesn't spy on us, and no one else owns it. We download it, from source code if you like, and install it if we want. It assists us. For now, it's yours. No one gate-keeps it. It's not out to get us. And this is important: because no one owns it but our indebted gratitude, the AI software is ours and leaks no data anywhere, to anyone, to any company, for any political or financial purpose. No one profits but you! Though that's not what is happening, is it?

Why can't we have software without AI? It upsets me that a company such as Microsoft, which seems to have had legal problems over taking another company's code, banner intact and all, is implementing machine learning on the computers of the legally defenseless masses. But it was just heartbreaking to read that Firefox has updated its legalese to drift even further from its often self-proclaimed privacy motto, likely for the sake of adding AI. I have used Firefox since it split from the Netscape suite, and now I am likely going to leave it, because I am losing my remaining trust in Mozilla. Why is AMD stamping "AI" on local processors when most of the AI is done on external company servers? And if there is local AI processing, what is it processing, and for whom? Having grown bored with the elusive fusion reactor, the memristor, and the battery tech that will spare our remorseless wastefulness, and having watched AI nose its way past blockchain, should it be crowned the ultimate hype?

We read about falsified naked pictures and videos of society's beloved actors and performers. Have they not given enough of themselves? We see photos undiscerningly mangled to the point where most people can no longer trust what was once de facto proof. We are at a point where anyone can be placed in any crime scene. Perhaps we have been for some time, but now anyone can do it to anyone.

Beyond the deliberate, targeted assassination of our sense of morality lies the withering of society's intellect, as AI, as used, feeds on everything, including ingesting its own corrupted data, until the AI purveyors have no choice but to use AI-free content, which is what it was in the first place. As time goes on, AI-untouched data will be as difficult to find as vintage wine without isotopes from nuclear tests.

Why ever would computer bugs be called "hallucinations"? When AI is compared to the intelligence of a six-year-old human, why are we being told that we just have to redefine intelligence in favor of AI's marketers? If AI is not really intelligent, not mortal, not feeling, not capable of empathy, of living or dying, then why should it ever be allowed to say "I"? Why should we allow it?

What future will anyone have if anything they really wanted to do could be mimicked and sold, built on the ill-gotten work of others?

Could local, open-source AI software be the only answer that discourages billionaire companies from taking everything we have done and selling it back to us? Could we not, instead, steal their dream!
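
For readers wondering how far "local-only" already goes, here is a minimal sketch using the open-source Hugging Face transformers library with an open-weights model. The model name is only an example, and nothing leaves your machine once the weights are cached locally.

    # Minimal local-only text generation: open-source library, open weights,
    # no account and no vendor server in the loop once the model is cached.
    # Assumes `pip install transformers torch`; "gpt2" is just an example model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("Local, open-source AI should", max_new_tokens=40)
    print(result[0]["generated_text"])

Larger open-weights models can be swapped in the same way, hardware permitting.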

Submission + - Anthropic CEO Says Spies Are After $100M AI Secrets In a 'Few Lines of Code' (techcrunch.com)

An anonymous reader writes: Anthropic’s CEO Dario Amodei is worried that spies, likely from China, are getting their hands on costly “algorithmic secrets” from the U.S.’s top AI companies — and he wants the U.S. government to step in. Speaking at a Council on Foreign Relations event on Monday, Amodei said that China is known for its “large-scale industrial espionage” and that AI companies like Anthropic are almost certainly being targeted. “Many of these algorithmic secrets, there are $100 million secrets that are a few lines of code,” he said. “And, you know, I’m sure that there are folks trying to steal them, and they may be succeeding.”

More help from the U.S. government to defend against this risk is "very important," Amodei added, without specifying exactly what kind of help would be required. Anthropic declined to comment to TechCrunch on the remarks specifically but referred to Anthropic’s recommendations to the White House’s Office of Science and Technology Policy (OSTP) earlier this month. In the submission, Anthropic argues that the federal government should partner with AI industry leaders to beef up security at frontier AI labs, including by working with U.S. intelligence agencies and their allies.

Submission + - Large Study Shows Drinking Alcohol Is Good For Your Cholesterol Levels (arstechnica.com)

An anonymous reader writes: Researchers at Harvard University led the study, which included nearly 58,000 adults in Japan who were followed for up to a year using a database of medical records from routine checkups. Researchers found that when people switched from being nondrinkers to drinkers during the study, they saw a drop in their "bad" cholesterol—aka low-density lipoprotein cholesterol, or LDL. Meanwhile, their "good" cholesterol—aka high-density lipoprotein cholesterol, or HDL—went up when they began imbibing. HDL levels went up so much that the change actually beat out improvements typically seen with medications, the researchers noted.

On the other hand, drinkers who stopped drinking during the study saw the opposite effect: Upon giving up booze, their bad cholesterol went up and their good cholesterol went down. The cholesterol changes scaled with the changes in drinking. That is, for people who started drinking, the more they drank, the lower their LDL fell and the higher their HDL rose. In the newly abstaining group, those who drank the most before quitting saw the biggest changes in their lipid levels.

Specifically, people who went from drinking zero drinks to 1.5 drinks per day or less saw their bad LDL cholesterol fall 0.85 mg/dL and their good HDL cholesterol rise 0.58 mg/dL compared to nondrinkers who never started drinking. For those who went from zero to between 1.5 and three drinks per day, their bad LDL dropped 4.4 mg/dL and their good HDL rose 2.49 mg/dL. For people who started drinking three or more drinks per day, their LDL fell 7.44 mg/dL and their HDL rose 6.12 mg/dL. For people who quit after drinking 1.5 drinks per day or less, their LDL rose 1.10 mg/dL and their HDL fell 1.25 mg/dL. Quitting after drinking 1.5 to three drinks per day led to a rise in LDL of 3.71 mg/dL and a drop in HDL of 3.35 mg/dL. Giving up three or more drinks per day led to an LDL increase of 6.53 mg/dL and an HDL drop of 5.65 mg/dL.

Submission + - New Instagram/Facebook hack is causing the suspension of innocent Facebook accounts

jwbales writes:

We suspended your account. Your Facebook account was suspended because your Instagram account what_ever7468 doesn't follow our rules. You have 180 days left to appeal. Log into your linked Instagram account to appeal our decision.

But that is NOT your Instagram account, so you cannot log in to it to appeal your suspension. Catch-22.

No avenue of contact with FB, IG or Meta elicits a response. They simply ghost you. Some victims have had success with suing FB in small claims court to get their accounts back.

The best explanation of this FB scandal is on LinkedIn.

Submission + - Hardware Security Key Shootout! (k9.io)

Beave writes: The standard hardware security key in the tech space is typically a YubiKey. While I’m sure we all appreciate YubiKeys, there are many other key manufacturers out there. Each manufacturer and key has different capabilities, and they are not all equal. This article will explore the various hardware security keys that can be used to store Passkeys and SSH keys. We will focus on usability, operating system compatibility, and costs. This article will likely help, whether you're looking for a personal key for projects or seeking to implement a passwordless solution at work.
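
As a concrete example of the SSH use case the article covers, here is a small sketch, not taken from the article, of generating a hardware-backed SSH key with a FIDO2 security key. It assumes OpenSSH 8.2 or newer and a plugged-in key that supports the ed25519-sk type; the output path and comment are placeholders.

    # Sketch: create an SSH key whose signing operations require the FIDO2 key.
    # Requires OpenSSH 8.2+; keys without Ed25519 support may need "ecdsa-sk".
    import subprocess

    subprocess.run(
        [
            "ssh-keygen",
            "-t", "ed25519-sk",              # "-sk" types are backed by the security key
            "-f", "id_ed25519_sk_example",   # placeholder output file
            "-C", "example hardware-backed key",
        ],
        check=True,
    )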

Submission + - GamersNexus: Effect of 32-bit PhysX removal on older games (youtube.com)

UnknowingFool writes: Gamers Nexus tested the effect of removing legacy PhysX on the newest generation of Nvidia cards with older games, and the results are not good. With PhysX on, the latest-generation Nvidia card was slightly beaten by a GTX 580 (released in 2010) on some games and handily beaten by a GTX 980 (2014) on others.

With the launch of the 5000 series, Nvidia dropped 32-bit CUDA support going forward. Part of that change was dropping support for 32-bit PhysX. As a result, older titles that use it perform poorly on 5000-series cards, because the calculations fall back to the CPU. Even the latest CPUs do not perform as well as 15-year-old GPUs when it comes to PhysX.

The best performance on the 5080 came from turning PhysX off; however, that removes many effects, such as smoke, breaking glass, and rubble, from scenes. The second-best option was to pair a 5000-series card with an older card like a 980 dedicated to handling the PhysX computations.

Submission + - Mark Klein, AT&T Whistleblower Who Revealed NSA Mass Spying, Has Died (eff.org)

An anonymous reader writes: EFF is deeply saddened to learn of the passing of Mark Klein, a bona fide hero who risked civil liability and criminal prosecution to help expose a massive spying program that violated the rights of millions of Americans. Mark didn’t set out to change the world. For 22 years, he was a telecommunications technician for AT&T, most of that in San Francisco. But he always had a strong sense of right and wrong and a commitment to privacy. When the New York Times reported in late 2005 that the NSA was engaging in spying inside the U.S., Mark realized that he had witnessed how it was happening. He also realized that the President was not telling Americans the truth about the program. And, though newly retired, he knew that he had to do something. He showed up at EFF’s front door in early 2006 with a simple question: “Do you folks care about privacy?”

We did. And what Mark told us changed everything. Through his work, Mark had learned that the National Security Agency (NSA) had installed a secret, secure room at AT&T’s central office in San Francisco, called Room 641A. Mark was assigned to connect circuits carrying Internet data to optical “splitters” that sat just outside of the secret NSA room but were hardwired into it. Those splitters—as well as similar ones in cities around the U.S.—made a copy of all data going through those circuits and delivered it into the secret room. Mark not only saw how it works, he had the documents to prove it. He brought us over a hundred pages of authenticated AT&T schematic diagrams and tables. Mark also shared this information with major media outlets, numerous Congressional staffers, and at least two senators personally. One, Senator Chris Dodd, took the floor of the Senate to acknowledge Mark as the great American hero he was.

Submission + - Google's New Robot AI Can Fold Delicate Origami, Close Zipper Bags (arstechnica.com)

An anonymous reader writes: On Wednesday, Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robot assistants. [...] Google's new models build upon its Gemini 2.0 large language model foundation, adding capabilities specifically for robotic applications. Gemini Robotics includes what Google calls "vision-language-action" (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on "embodied reasoning" with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems. For example, with Gemini Robotics, you can ask a robot to "pick up the banana and put it in the basket," and it will use a camera view of the scene to recognize the banana, guiding a robotic arm to perform the action successfully. Or you might say, "fold an origami fox," and it will use its knowledge of origami and how to fold paper carefully to perform the task.

In 2023, we covered Google's RT-2, which represented a notable step toward more generalized robotic capabilities by using Internet data to help robots understand language commands and adapt to new scenarios, then doubling performance on unseen tasks compared to its predecessor. Two years later, Gemini Robotics appears to have made another substantial leap forward, not just in understanding what to do but in executing complex physical manipulations that RT-2 explicitly couldn't handle. While RT-2 was limited to repurposing physical movements it had already practiced, Gemini Robotics reportedly demonstrates significantly enhanced dexterity that enables previously impossible tasks like origami folding and packing snacks into Zip-loc bags. This shift from robots that just understand commands to robots that can perform delicate physical tasks suggests DeepMind may have started solving one of robotics' biggest challenges: getting robots to turn their "knowledge" into careful, precise movements in the real world.

According to DeepMind, the new Gemini Robotics system demonstrates much stronger generalization, or the ability to perform novel tasks that it was not specifically trained to do, compared to its previous AI models. In its announcement, the company claims Gemini Robotics "more than doubles performance on a comprehensive generalization benchmark compared to other state-of-the-art vision-language-action models." Generalization matters because robots that can adapt to new scenarios without specific training for each situation could one day work in unpredictable real-world environments. [...] Google is attempting to make the real thing: a generalist robot brain. With that goal in mind, the company announced a partnership with Austin, Texas-based Apptronik to "build the next generation of humanoid robots with Gemini 2.0." While trained primarily on a bimanual robot platform called ALOHA 2, Google states that Gemini Robotics can control different robot types, from research-oriented Franka robotic arms to more complex humanoid systems like Apptronik's Apollo robot.
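
To make the "vision-language-action" idea concrete, here is a purely illustrative control-loop sketch. Every class and method in it is a hypothetical stand-in; it is not Google's Gemini Robotics API, which the article does not describe at the code level.

    # Illustrative VLA loop: image + instruction in, motor command out.
    # All names below are hypothetical stand-ins, not a real robotics API.
    from dataclasses import dataclass

    @dataclass
    class Action:
        """Toy low-level command a real controller would send to the joints."""
        description: str

    class DummyVLAModel:
        """Stand-in for a VLA policy mapping (image, instruction) -> Action."""
        def predict(self, image: bytes, instruction: str) -> Action:
            return Action(description=f"plan motion for: {instruction!r}")

    class DummyArm:
        """Stand-in for a robot-arm driver."""
        def execute(self, action: Action) -> None:
            print("executing:", action.description)

    def run_vla_step(model: DummyVLAModel, arm: DummyArm,
                     image: bytes, instruction: str) -> None:
        """One perception -> reasoning -> action cycle."""
        arm.execute(model.predict(image, instruction))

    # Example command from the article; the image bytes are a placeholder.
    run_vla_step(DummyVLAModel(), DummyArm(), b"<camera frame>",
                 "pick up the banana and put it in the basket")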

Submission + - Amazon forest felled to build road for climate summit (bbc.com)

An anonymous reader writes: “A new four-lane highway cutting through tens of thousands of acres of protected Amazon rainforest is being built for the COP30 climate summit in the Brazilian city of Belém.”

Beyond Satire or Parody.

Submission + - UK starting climate change geoengineering trials (theguardian.com)

Bruce66423 writes: An opponent writes:

'The UK government itself would be leading the charge into what is almost universally recognized as the most dangerous and destabilizing sort of research: field trials that risk developing dangerous technology and paving the way for deployment. That is precisely the emphasis as the UK’s Advanced Research and Invention Agency (Aria) prepares to hand over $58m for solar geoengineering research and development.'

Submission + - Trump's efforts to help Tesla could hurt it instead (apnews.com)

Jasie writes: Tesla investors cheered Tuesday as Trump came to the defense of Musk’s beleaguered and boycotted carmaker by lavishing the Tesla CEO with praise in a press conference as five Teslas lined up in the White House driveway. Trump called Musk a “patriot” and announced that he had even bought one of his cars — a show of support that may have helped Tesla stock close higher for the day after a plunge a day earlier.

But experts warn the unusual presidential backing of a private company could backfire.

Submission + - Apple set to unveil boldest software redesign in years across entire ecosystem

CInder123 writes: Apple is undertaking one of the most significant software overhauls in its history, aiming to revamp the user interface across iPhone, iPad, and Mac devices. This ambitious update, set for release later this year, will fundamentally transform the look and feel of Apple's operating systems, enhancing consistency and the user experience.

The updates are part of iOS 19 and iPadOS 19, codenamed "Luck," and macOS 16, dubbed "Cheer," according to Bloomberg's Mark Gurman. He cited sources who requested anonymity since the project has yet to be officially announced. These major upgrades will introduce a new design language while simplifying navigation and controls.

Apple's push for consistency across platforms aims to create a seamless user experience when switching between devices. Currently, applications, icons, and window styles vary significantly across macOS, iOS, and visionOS, leading to a disjointed experience.

Submission + - Spain to impose massive fines for not labelling AI-generated content (reuters.com)

CInder123 writes: Spain's government approved a bill imposing fines on companies that use AI-generated content without proper labeling, in a bid to curb "deepfakes." The bill follows EU AI Act guidelines, imposing transparency obligations on high-risk AI systems. Non-compliance can lead to fines of up to 35 million euros or 7% of global turnover. It also bans practices like subliminal manipulation and biometric profiling. Enforcement will be handled by the new AI supervisory agency AESIA, with exceptions for specific sectors.
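
For scale, a back-of-the-envelope sketch of that fine ceiling: the summary gives "up to 35 million euros or 7% of global turnover," and the assumption here that the larger of the two applies follows the EU AI Act's wording for the most serious violations rather than anything stated above.

    # Illustrative only: maximum fine under a "35M euros or 7% of global turnover"
    # ceiling, assuming the higher of the two applies (an assumption, see above).
    def max_fine_eur(global_turnover_eur: float) -> float:
        return max(35_000_000.0, 0.07 * global_turnover_eur)

    print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 2 bn euro turnover -> 140,000,000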

Submission + - AskSlashdot: Which computer security threat scares you the most? (microsoft.com) 2

shanen writes: And even better, do you have any useful approaches to solving that threat?

Actually, I have two threats in mind today, so I can't even answer my own question about "most." One might be an imminent threat due to Microsoft's combination of malevolent incompetence and lack of liability, while the other threat seems more distant but more clearly on the evil side, and probably less bounded. I'm not even going to claim these are serious threats that you should consider; they are more in the way of examples, though it would be great if someone could convince me "There's nothing to see or worry about here."

The Microsoft threat might be affecting you. My path to it runs through the Settings page for my Microsoft Account, for example via the gear icon on outlook.live.com. Under General you can find "Privacy and data" and *boom*, you're stuck. You can't get anywhere from there. But if you go to the security basics page... oh wait, you can't get there from that place, and I can't send you the URL because Microsoft has bastardized it... Well, what about... "Good luck, Mr. Phelps."

Anyway, if you somehow find your way to the "recent activity" page, you may be surprised. Mine shows an endless string of "Unsuccessful sign-in" attempts, about one an hour, from all over the world. Are these related to the roughly daily fake requests for one-time authentication codes? Dictionary searches of common passwords? Harmless, or maybe something more sinister? Is there any way I can see the attempted passwords? It would certainly annoy me if the attempts were gradually converging on the actual password... (Even though I don't use the Outlook account for anything, and even though I never voluntarily use any Microsoft software or websites. Only under duress, but I bet most of you do, too.)
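
If that page could be exported, the hourly pattern would be easy to eyeball with a few lines of scripting. The sketch below assumes a hypothetical recent_activity.csv with "timestamp" (ISO 8601) and "country" columns; Microsoft offers no such export as far as I know.

    # Hypothetical: tally failed sign-ins per hour and per country from an assumed
    # CSV export with "timestamp" and "country" columns (no such export is promised).
    import csv
    from collections import Counter

    per_hour = Counter()
    countries = Counter()
    with open("recent_activity.csv", newline="") as f:
        for row in csv.DictReader(f):
            per_hour[row["timestamp"][:13]] += 1   # bucket by YYYY-MM-DDTHH
            countries[row["country"]] += 1

    print("busiest hours:", per_hour.most_common(5))
    print("top source countries:", countries.most_common(5))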

The other threat that's bothering me is the GAIvatar thing. I hope we are "enlightened beings" who could not be easily copied for Generative AI avatars, but I don't feel sanguine about it. And I definitely think there are some folks I know whose responses are so predictable that I could not tell them apart from a GAIvatar... Basically I see two threats here, but you may see others. One is simple prediction, using the GAIvatar to figure out what a person is most likely to do, but the more serious threat is control, by using the GAIvatar to test various prompts until the proper buttons are discovered to manipulate the human model. Maybe you see worse possibilities?

And I don't really see any solutions anywhere. At the "social" levels where solutions are supposed to appear, the people in charge appear to be a bunch of benign incompetents, malevolent incompetents, or feckless incompetents. And some of them check two or three boxes at random...

Submission + - DJI and Other Chinese Companies Move to Eliminate Overtime (chosun.com) 1

hackingbear writes: Chinese corporations have begun to rein in the long-working-hours culture represented by the so-called "996" schedule (working from 9 a.m. to 9 p.m., six days a week). As the Chinese government asks them to address inefficient "internal competition," corporations that already needed management efficiency have started to eliminate overtime. DJI, the world's largest drone maker, has been implementing a "no overtime" policy since the 27th of last month. Accordingly, employees must leave the office after 9 p.m. [without being required to start at 9 a.m.]. The company also eliminated transportation expenses paid for overtime and closed facilities such as the gym, swimming pool, and badminton court, while also reducing team expenses, in order to foster a culture of leaving early.

Chinese appliance manufacturer Midea began enforcing a mandatory departure time of 6:20 p.m. for office workers. Midea also began simplifying work methods this year, implementing a "strict prohibition on meetings and formal overtime after hours," and has now taken that policy a step further. Another appliance manufacturer, Haier, mandated two days of rest on weekends starting last month and decided to allow a maximum of three hours of overtime during the week.

The 996 practice is particularly prominent in large corporations and the internet industry. In 2021, Jack Ma, the founder of Alibaba, one of China's largest e-commerce corporations, stated, "Being able to work 996 is a great blessing" and asked, "If you don't do 996 when you're young, when will you?" China's legislature, the National People's Congress, issued for the first time a call to comprehensively [reduce] "internal competition," broadly including the chaotic expansion of production capacity, price wars, and zero-sum games. However, workers' reactions to these corporate measures are mixed, with some complaining that they amount to a wage cut, since overtime pay disappears as well.

Submission + - Elon Musk Says X Outages Were Caused by a Cyberattack From Ukraine 1

hcs_$reboot writes: Elon Musk's X was hit by waves of outages in what he claims was a massive cyberattack from Ukraine [paywall]. Mr. Musk on Monday quickly blamed Ukraine [no paywall] without providing evidence. X, which Mr. Musk purchased in 2022, experienced intermittent outages on Monday, mostly on its app, according to Downdetector, which tracks reports of problems from users on websites. The first outages were reported before 6 a.m. Eastern time, after which the site and app seemed to resume functioning. But about 10 a.m., more problems arose, and there were 41,000 reports of outages on X, according to Downdetector. Shortly after 11 a.m., a third spike of reported outages emerged, and the site remained down for many users.

“There was a massive cyberattack to try to bring down the X system with IP addresses originating in the Ukraine area,” Mr. Musk said during a Monday interview with Fox’s Larry Kudlow.

Submission + - Allstate Insurance Sued For Delivering Personal Info In Plaintext (theregister.com)

An anonymous reader writes: New York State has sued Allstate Insurance for operating websites so badly designed that they delivered personal information in plain text to anyone who went looking for it. The data was lifted from Allstate's National General business unit, which ran a website for consumers who wanted to get a quote for a policy. That task required users to input a name and address, and once that info was entered, the site searched a LexisNexis Risk Solutions database for data on anyone who lived at the address provided. The results of that search would then appear on a screen that included the driver's license number (DLN) for the given name and address, plus "names of any other drivers identified as potentially living at that consumer's address, and the entire DLNs of those other drivers."

Naturally, miscreants used the system to mine for people's personal information for fraud. "National General intentionally built these tools to automatically populate consumers' entire DLNs in plain text — in other words, fully exposed on the face of the quoting websites — during the quoting process," the court documents [PDF] state. "Not surprisingly, attackers identified this vulnerability and targeted these quoting tools as an easy way to access the DLNs of many New Yorkers," according to the lawsuit. The digital thieves then used this information to "submit fraudulent claims for pandemic and unemployment benefits," we're told. ... [B]y the time the insurer resolved the mess, crooks had built bots that harvested at least 12,000 individuals' driver's license numbers from the quote-generating site.
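
For contrast, the kind of masking the quoting tool evidently lacked takes only a few lines. This is a generic sketch, not New York's actual DLN format and not National General's fix:

    # Generic sketch: never render a full driver's license number; show only the
    # last couple of characters. Format rules here are illustrative only.
    def mask_dln(dln: str, visible: int = 2) -> str:
        """Return the DLN with all but the last `visible` characters starred out."""
        if len(dln) <= visible:
            return "*" * len(dln)
        return "*" * (len(dln) - visible) + dln[-visible:]

    print(mask_dln("123456789"))  # -> *******89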
