
Submission + - Man lives 100 days with artificial titanium heart in world-first medical success (techspot.com)

jjslash writes: An Australian man has made history as the first person to be discharged from a hospital with a total artificial heart implant, representing a major breakthrough in heart failure treatment. He relied on the device for more than 100 days before undergoing a donor heart transplant in early March, setting a new record for the longest survival with this technology. As reported by TechSpot:

The patient, a man in his 40s from New South Wales, received the BiVACOR Total Artificial Heart (TAH) during a six-hour procedure at St. Vincent's Hospital in Sydney on November 22, 2024. The operation, led by cardiothoracic and transplant surgeon Paul Jansz, was part of the Monash University-led Artificial Heart Frontiers Program, which aims to develop three key devices to treat common forms of heart failure.

Globally, more than 23 million people suffer from heart failure each year, yet only about 6,000 receive a donor heart. To support the development and commercialization of the BiVACOR device, the Australian government has invested $50 million in the program. While still in clinical trials and awaiting regulatory approval, the device's ability to sustain patients for extended periods suggests it could become a long-term solution for those facing heart failure.


Submission + - Meta stops ex-director from promoting critical memoir (bbc.co.uk)

Alain Williams writes: Meta has won an emergency ruling in the US to temporarily stop a former director of Facebook from promoting or further distributing copies of her memoir.

The book, Careless People by Sarah Wynn-Williams, who used to be the company's global public policy director, includes a series of critical claims about what she witnessed during her seven years working at Facebook.

Facebook's parent company, Meta, says the ruling — which orders her to stop promotions "to the extent within her control" — affirms that "the false and defamatory book should never have been published".

The UK publisher Macmillan says it is "committed to upholding freedom of speech" and Ms Wynn-Williams' "right to tell her story".

You can also hear Ms Wynn-Williams interviewed in the BBC Radio 4 Media Show on 12 March.

Submission + - OpenAI demands fair use scanning or else (arstechnica.com)

awwshit writes: OpenAI is hoping that Donald Trump's AI Action Plan, due out this July, will settle copyright debates by declaring AI training fair use—paving the way for AI companies' unfettered access to training data that OpenAI claims is critical to defeat China in the AI race.

So far, one landmark ruling has favored rights holders, with a judge declaring that AI training is not fair use because the AI outputs clearly threatened to replace Thomson Reuters' legal research platform Westlaw in the market, Wired reported. But OpenAI now appears to be looking to Trump to avoid a similar outcome in its own lawsuits, including a major suit brought by The New York Times.

"OpenAI’s models are trained to not replicate works for consumption by the public. Instead, they learn from the works and extract patterns, linguistic structures, and contextual insights," OpenAI claimed. "This means our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without eroding the commercial value of those existing works."

Submission + - Fake Reddit and WeTransfer pages are spreading stealer malware

An anonymous reader writes: A new large-scale cybercriminal operation has been identified operating in the wild.

“This new threat campaign ran over 1,000 fake sites, impersonating WeTransfer and Reddit. The goal of the campaign is to trick users into downloading the Lumma stealer. But it is also part of a growing trend that is becoming the norm. Let’s dive into it.”

Submission + - Where are the Open-Source Local-Only AI Solutions?

BrendaEM writes: Do you remember how it was portrayed in movies, when people would just talk to their computer and it would do things? As implemented, as perverted, why does AI have to take the work of others? Why can't we each have our own AI software that runs locally and takes nothing from anyone else? It wouldn't spy on us, and no one else would own it. We download it (from source code, if you like) and install it, if we want. It assists us, and only us. For now, it's yours. No one gatekeeps it. It's not out to get us, and this is important: because no one owns it but us, the AI software leaks no data anywhere, to no one, to no company, for no political or financial purpose. No one profits but you! Though that's not what is happening, is it?

And why can't we have software without AI at all? It upsets me that a company such as Microsoft, which has had legal trouble in the past over taking another company's code, banner intact and all, is now pushing machine learning onto the legally defenseless masses. But it was heartbreaking to read that Firefox has updated its legalese, swimming further upstream from its often self-proclaimed privacy motto, likely for the sake of adding AI. I have used Firefox since it split from the Netscape suite; now I am likely to leave it, because I am losing my remaining trust in Mozilla. Why is AMD stamping "AI" on local processors when most of the AI is done on external company servers? And if there is local AI processing, what is it processing, and for whom? Having grown bored with the elusive fusion reactor, the memristor, and the battery tech that will spare our remorseless wastefulness, and having nosed its way past blockchain, should AI be crowned the ultimate hype?

We read about falsified nude pictures and videos of society's beloved actors and performers. Have they not given enough of themselves? We see photos mangled beyond discernment, to the point where most people can no longer trust what was once de facto proof. We are at a point where anyone can be placed in any crime scene. Perhaps we have been for some time, but now anyone can do it to anyone.

Beyond the deliberate, targeted assassination of our sense of morality lies the withering of society's intellect, as AI, as used, feeds on everything, including ingesting its own corrupted output, until the AI purveyors will have no choice but to use AI-free content, which is what it was at first. As time goes on, AI-untouched data will become as hard to find as vintage wine without isotopes from nuclear tests.

Why would computer bugs ever be called "hallucinations"? When AI is compared to the intelligence of a six-year-old human, why are we told that we simply have to redefine intelligence in favor of AI's marketers? If AI is not really intelligent, not mortal, not feeling, not capable of empathy, of living or dying, then why should it ever be allowed to say "I"? Why should we allow it?

What future will anyone have if anything they really wanted to do could be mimicked and sold, built on the ill-gotten work of others?

Could local, open-source AI software be the only way to dishearten billionaire companies from taking everything we have done and selling it back to us? Could we not, instead, steal their dream?

Submission + - Anthropic CEO Says Spies Are After $100M AI Secrets In a 'Few Lines of Code' (techcrunch.com)

An anonymous reader writes: Anthropic’s CEO Dario Amodei is worried that spies, likely from China, are getting their hands on costly “algorithmic secrets” from the U.S.’s top AI companies — and he wants the U.S. government to step in. Speaking at a Council on Foreign Relations event on Monday, Amodei said that China is known for its “large-scale industrial espionage” and that AI companies like Anthropic are almost certainly being targeted. “Many of these algorithmic secrets, there are $100 million secrets that are a few lines of code,” he said. “And, you know, I’m sure that there are folks trying to steal them, and they may be succeeding.”

More help from the U.S. government to defend against this risk is "very important," Amodei added, without specifying exactly what kind of help would be required. Anthropic declined to comment to TechCrunch on the remarks specifically but referred to Anthropic’s recommendations to the White House’s Office of Science and Technology Policy (OSTP) earlier this month. In the submission, Anthropic argues that the federal government should partner with AI industry leaders to beef up security at frontier AI labs, including by working with U.S. intelligence agencies and their allies.

Submission + - Large Study Shows Drinking Alcohol Is Good For Your Cholesterol Levels (arstechnica.com)

An anonymous reader writes: Researchers at Harvard University led the study, which included nearly 58,000 adults in Japan who were followed for up to a year using a database of medical records from routine checkups. The researchers found that when people switched from being nondrinkers to drinkers during the study, they saw a drop in their "bad" cholesterol—aka low-density lipoprotein cholesterol, or LDL. Meanwhile, their "good" cholesterol—aka high-density lipoprotein cholesterol, or HDL—went up when they began imbibing. HDL levels went up so much that the improvement actually beat what is typically seen with medications, the researchers noted.

On the other hand, drinkers who stopped drinking during the study saw the opposite effect: upon giving up booze, their bad cholesterol went up and their good cholesterol went down. The cholesterol changes scaled with the changes in drinking. That is, among people who started drinking, the more they drank, the lower their LDL fell and the higher their HDL rose. In the newly abstaining group, those who drank the most before quitting saw the biggest changes in their lipid levels.

Specifically, people who went from zero drinks to 1.5 drinks per day or fewer saw their bad LDL cholesterol fall 0.85 mg/dL and their good HDL cholesterol rise 0.58 mg/dL compared to nondrinkers who never started drinking. For those who went from zero to between 1.5 and three drinks per day, LDL dropped 4.4 mg/dL and HDL rose 2.49 mg/dL. For people who started drinking three or more drinks per day, LDL fell 7.44 mg/dL and HDL rose 6.12 mg/dL. For people who quit after drinking 1.5 drinks per day or fewer, LDL rose 1.10 mg/dL and HDL fell 1.25 mg/dL. Quitting after drinking 1.5 to three drinks per day led to a rise in LDL of 3.71 mg/dL and a drop in HDL of 3.35 mg/dL. Giving up three or more drinks per day led to an LDL increase of 6.53 mg/dL and an HDL drop of 5.65 mg/dL.

Submission + - New Instagram/Facebook hack is causing the suspension of innocent Facebook accts

jwbales writes:

We suspended your account. Your Facebook account was suspended because your Instagram account what_ever7468 doesn't follow our rules. You have 180 days left to appeal. Log into your linked Instagram account to appeal our decision.

But that is NOT your Instagram account, so you cannot log in to it to appeal your suspension. Catch-22.

No avenue of contact with FB, IG or Meta elicits a response. They simply ghost you. Some victims have had success with suing FB in small claims court to get their accounts back.

The best explanation of this FB scandal is on LinkedIn.

Submission + - Hardware Security Key Shootout! (k9.io)

Beave writes: The standard hardware security key in the tech space is typically a YubiKey. While I'm sure we all appreciate YubiKeys, there are many other key manufacturers out there. Each manufacturer's keys have different capabilities; they are not all equal. This article explores the various hardware security keys that can be used to store passkeys and SSH keys, focusing on usability, operating system compatibility, and cost. It will likely help whether you're looking for a personal key for projects or seeking to implement a passwordless solution at work.
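As a concrete illustration of the SSH use case the article covers, here is a minimal sketch of creating a hardware-backed SSH key with OpenSSH's built-in FIDO2 support. It assumes OpenSSH 8.2 or later and a plugged-in FIDO2-capable security key; the file name, comment, and host are arbitrary examples, not anything from the article:

```shell
# Generate an ed25519 key whose private part can only be used with the
# hardware token present; "-O resident" also stores the credential on the
# key itself so it can be recovered on another machine later.
ssh-keygen -t ed25519-sk -O resident -C "laptop-2025" -f ~/.ssh/id_ed25519_sk

# Use it like any other SSH key; each connection requires a touch of the token.
ssh -i ~/.ssh/id_ed25519_sk user@example.com

# On a new machine, re-download resident credentials from the plugged-in key.
ssh-keygen -K
```

Keys that only implement FIDO U2F (not FIDO2) can still use the non-resident `ecdsa-sk`/`ed25519-sk` types, but cannot store resident credentials; that capability difference is exactly the kind of thing the shootout compares.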

Submission + - GamersNexus: Effect of 32-bit PhysX removal on older games (youtube.com)

UnknowingFool writes: GamersNexus tested the effect of removing legacy PhysX support on the newest generation of Nvidia cards with older games, and the results are not good. With PhysX on, the latest-generation Nvidia card was slightly beaten by a GTX 580 (released in 2010) in some games and handily beaten by a GTX 980 (2014) in others.

With the launch of the 5000 series, Nvidia dropped 32-bit CUDA support going forward. Part of that change was dropping support for 32-bit PhysX. As a result, older titles that use it perform poorly on 5000-series cards, since the PhysX calculations fall back to the CPU. Even the latest CPUs do not perform as well as 15-year-old GPUs when it comes to PhysX.

The best performance on the 5080 came from turning PhysX off entirely; however, that removes many effects, like smoke, breaking glass, and rubble, from scenes. The second-best option was to pair a 5000-series card with an older card like a 980 dedicated to handling the PhysX computations.

Submission + - Mark Klein, AT&T Whistleblower Who Revealed NSA Mass Spying, Has Died (eff.org)

An anonymous reader writes: EFF is deeply saddened to learn of the passing of Mark Klein, a bona fide hero who risked civil liability and criminal prosecution to help expose a massive spying program that violated the rights of millions of Americans. Mark didn’t set out to change the world. For 22 years, he was a telecommunications technician for AT&T, most of that in San Francisco. But he always had a strong sense of right and wrong and a commitment to privacy. When the New York Times reported in late 2005 that the NSA was engaging in spying inside the U.S., Mark realized that he had witnessed how it was happening. He also realized that the President was not telling Americans the truth about the program. And, though newly retired, he knew that he had to do something. He showed up at EFF’s front door in early 2006 with a simple question: “Do you folks care about privacy?”

We did. And what Mark told us changed everything. Through his work, Mark had learned that the National Security Agency (NSA) had installed a secret, secure room at AT&T’s central office in San Francisco, called Room 641A. Mark was assigned to connect circuits carrying Internet data to optical “splitters” that sat just outside of the secret NSA room but were hardwired into it. Those splitters—as well as similar ones in cities around the U.S.—made a copy of all data going through those circuits and delivered it into the secret room. Mark not only saw how it works, he had the documents to prove it. He brought us over a hundred pages of authenticated AT&T schematic diagrams and tables. Mark also shared this information with major media outlets, numerous Congressional staffers, and at least two senators personally. One, Senator Chris Dodd, took the floor of the Senate to acknowledge Mark as the great American hero he was.

Submission + - Google's New Robot AI Can Fold Delicate Origami, Close Zipper Bags (arstechnica.com)

An anonymous reader writes: On Wednesday, Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robot assistants. [...] Google's new models build upon its Gemini 2.0 large language model foundation, adding capabilities specifically for robotic applications. Gemini Robotics includes what Google calls "vision-language-action" (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on "embodied reasoning" with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems. For example, with Gemini Robotics, you can ask a robot to "pick up the banana and put it in the basket," and it will use a camera view of the scene to recognize the banana, guiding a robotic arm to perform the action successfully. Or you might say, "fold an origami fox," and it will use its knowledge of origami and how to fold paper carefully to perform the task.

In 2023, we covered Google's RT-2, which represented a notable step toward more generalized robotic capabilities by using Internet data to help robots understand language commands and adapt to new scenarios, doubling performance on unseen tasks compared to its predecessor. Two years later, Gemini Robotics appears to have made another substantial leap forward, not just in understanding what to do but in executing complex physical manipulations that RT-2 explicitly couldn't handle. While RT-2 was limited to repurposing physical movements it had already practiced, Gemini Robotics reportedly demonstrates significantly enhanced dexterity that enables previously impossible tasks like origami folding and packing snacks into Ziploc bags. This shift from robots that just understand commands to robots that can perform delicate physical tasks suggests DeepMind may have started solving one of robotics' biggest challenges: getting robots to turn their "knowledge" into careful, precise movements in the real world.

According to DeepMind, the new Gemini Robotics system demonstrates much stronger generalization, or the ability to perform novel tasks that it was not specifically trained to do, compared to its previous AI models. In its announcement, the company claims Gemini Robotics "more than doubles performance on a comprehensive generalization benchmark compared to other state-of-the-art vision-language-action models." Generalization matters because robots that can adapt to new scenarios without specific training for each situation could one day work in unpredictable real-world environments. [...] Google is attempting to make the real thing: a generalist robot brain. With that goal in mind, the company announced a partnership with Austin, Texas-based Apptronik to "build the next generation of humanoid robots with Gemini 2.0." While trained primarily on a bimanual robot platform called ALOHA 2, Google states that Gemini Robotics can control different robot types, from research-oriented Franka robotic arms to more complex humanoid systems like Apptronik's Apollo robot.

Submission + - Amazon forest felled to build road for climate summit (bbc.com)

An anonymous reader writes: “A new four-lane highway cutting through tens of thousands of acres of protected Amazon rainforest is being built for the COP30 climate summit in the Brazilian city of Belém.”

Beyond Satire or Parody.

Submission + - UK starting climate change geoengineering trials (theguardian.com)

Bruce66423 writes: An opponent writes:

'The UK government itself would be leading the charge into what is almost universally recognized as the most dangerous and destabilizing sort of research: field trials that risk developing dangerous technology and paving the way for deployment. That is precisely the emphasis as the UK’s Advanced Research and Invention Agency (Aria) prepares to hand over $58m for solar geoengineering research and development.'

Submission + - Apple set to unveil boldest software redesign in years across entire ecosystem

CInder123 writes: Apple is undertaking one of the most significant software overhauls in its history, aiming to revamp the user interface across iPhone, iPad, and Mac devices. This ambitious update, set for release later this year, will fundamentally transform the look and feel of Apple's operating systems, enhancing consistency and the user experience.

The updates are part of iOS 19 and iPadOS 19, codenamed "Luck," and macOS 16, dubbed "Cheer," according to Bloomberg's Mark Gurman. He cited sources who requested anonymity since the project has yet to be officially announced. These major upgrades will introduce a new design language while simplifying navigation and controls.

Apple's push for consistency across platforms aims to create a seamless user experience when switching between devices. Currently, applications, icons, and window styles vary significantly across macOS, iOS, and visionOS, leading to a disjointed experience.

Submission + - Spain to impose massive fines for not labelling AI-generated content (reuters.com)

CInder123 writes: Spain's government approved a bill imposing fines on companies that use AI-generated content without proper labeling, in a bid to curb "deepfakes." The bill follows EU AI Act guidelines, imposing transparency obligations on high-risk AI systems. Non-compliance can lead to fines of up to 35 million euros or 7% of global turnover. It also bans practices like subliminal manipulation and biometric profiling. Enforcement will be handled by the new AI supervisory agency AESIA, with exceptions for specific sectors.

Submission + - AskSlashdot: Which computer security threat scares you the most? (microsoft.com)

shanen writes: And even better, do you have any useful solution approaches to solve that threat?

Actually, I have two threats in mind today, so I can't even answer my own question about "most". One may be an imminent threat, due to Microsoft's combination of malevolent incompetence and lack of liability, while the other seems more distant but more clearly on the evil side and probably less bounded. I'm not even going to claim these are serious threats that you should consider; they're more in the way of examples, though it would be great if someone could convince me "there's nothing to see or worry about here."

The Microsoft threat might be affecting you. My path to it runs from the Settings page for my Microsoft account; for example, from the gear icon on outlook.live.com. Under General you can find "Privacy and data" and *boom*, you're stuck. You can't get anywhere from there. But if you go to the security basics page... oh wait, you can't get there from that place, and I can't send you the URL because Microsoft has bastardized it. Well, what about... "Good luck, Mr. Phelps."

Anyway, if you somehow find your way to the "recent activity" page, you may be surprised. Mine shows an endless string of "unsuccessful sign-in" attempts, about one an hour, from all over the world. Are these related to the roughly daily fake requests for one-time authentication codes? Dictionary searches of common passwords? Harmless, or maybe something more sinister? Is there any way I can see the attempted passwords? It would certainly annoy me if the attempts were gradually converging on the actual password... (Even though I don't use the Outlook account for anything, and even though I never voluntarily use any Microsoft software or websites. Only under duress, but I bet that's true of most of you, too.)

The other threat that's bothering me is the GAIvatar thing. I hope we are "enlightened beings" who could not be easily copied for Generative AI avatars, but I don't feel sanguine about it. And I definitely think there are some folks I know whose responses are so predictable that I could not tell them apart from a GAIvatar... Basically I see two threats here, but you may see others. One is simple prediction, using the GAIvatar to figure out what a person is most likely to do, but the more serious threat is control, by using the GAIvatar to test various prompts until the proper buttons are discovered to manipulate the human model. Maybe you see worse possibilities?

And I don't really see any solutions anywhere. At the "social" levels where solutions are supposed to emerge, there appears to be a bunch of benign incompetents, malevolent incompetents, or feckless incompetents. And some of them check two or three boxes at random...
