Please create an account to participate in the Slashdot moderation system

 



Forgot your password?
typodupeerror

Comment Re:Here's the rub. (Score 4, Informative) 146

It's been going on before social media, too. Toxicity and sensationalism drive eyeballs to the news.

"Can we film the operation?
Is the head dead yet?
You know the boys in the newsroom
got a running bet.
Get the weirdo on the set.
Give us dirty laundry."
(from "Dirty Laundry" by Don Henley)

However, I believe it works best on viewers that are not actually suffering.

Submission + - China creates remote-controlled cyborg BEES that could be used for spy missions (dailymail.co.uk)

schwit1 writes: Chinese scientists have successfully turned bees into cyborgs by inserting controllers into their brains.

The device, which weighs less than a pinch of salt, is strapped to the back of a worker bee and connected to the insect’s brain through small needles.

In tests the device worked nine times out of 10 and the bees obeyed the instructions to turn left or right, the researchers said.

The cyborg bees could be used in rescue missions – or in covert operations as military scouts.

The tiny device can be equipped with cameras, listening devices and sensors that allow the insects to collect and record information.

Given their small size they could also be used for discreet military or security operations, such as accessing small spaces without arousing suspicion.

Submission + - British Perl guru Matt Trout dead at 42

An anonymous reader writes: British Perl guru Matt Trout dead at 42

Obituary Matt Trout will be missed by many, even though he was a divisive figure who featured several times on The Register.

Trout was a child prodigy and also found his way into the Perl community young – in his own words, "thrust into Perl at the tender age of seventeen by a backup accident". His verdict on the language, from his homepage, spoke to us:

Perl is a wonderful language once you get over the fact that a slightly quirky set of syntax and embedded regular expressions have a tendency to make it look like line noise in the wrong light. Once you're used to it, it's a hell of an expressive dynamically typed language with a huge set of libraries and classes available for it.

Submission + - Bypassing Meta's Llama Firewall

An anonymous reader writes: Bypassing Meta’s Llama Firewall: A Case Study in Prompt Injection Vulnerabilities

Large Language Models (LLMs) are being integrated into a growing number of applications. With their increasing capabilities, ensuring their safe and secure operation is paramount. To this end, Meta has released Llama Firewall, a set of tools designed to help developers build safer applications with LLMs. Llama Firewall includes components like PROMPT_GUARD to defend against prompt injections and CODE_SHIELD to detect insecure code.

During our research, we investigated the effectiveness of Llama Firewall and discovered several bypasses that could allow an attacker to circumvent its protections. This post details our findings, demonstrating how prompt injection attacks can succeed despite the firewall’s safeguards.

Submission + - Microsoft: In 2029, 12-Year-Olds Will Outperform 2025's Professional Programmers

theodp writes: On Wednesday, Microsoft published a clip from a June interview with the AI Report podcast's Alexander Klöpping to its YouTube channel in which CEO Satya Nadella is asked a series of questions about "What will the world look like in 2029?" When questioned if "12-year-olds with AI will outperform today's professional programmers?", Nadella replies, "Yes."

On the same day, Microsoft President Brad Smith announced Microsoft would provide support for Code.org's new Hour of AI, which aims "to expand foundational AI literacy alongside computer science" for K-12 students. "The Hour of Code sparked a generation," the tech-backed nonprofit proclaims on its home page. "This fall, the Hour of AI will define the next," adding that there are "millions of futures to shape" with the new initiative. Code.org in May launched Unlock8, a national campaign supported by tech leaders led by Nadella to make CS and AI a graduation requirement.

Submission + - Nintendo Bans Switch 2 After Owner Installs Used Games On It

An anonymous reader writes: From LevelUp: "One Reddit user, who goes by dmanthey, found this out the hard way. According to his post, he was banned after playing four used Switch 1 games that he had purchased through Facebook Marketplace. He explained that he simply inserted the cartridges into his Switch 2 and waited while the console installed the required updates. However, when he turned it back on the next day, he was greeted by a message saying his access to Nintendo’s online services had been restricted because his system was now banned. ...

Fortunately for dmanthey, the story did not end there. He reached out to Nintendo’s customer service team, who asked him to find the Facebook listing and provide photos of the cartridges he had purchased. After a few conversations with support, he was able to prove his case and have the ban lifted, regaining access to his console’s full features."

Submission + - Microsoft: You Can't Make an $80B AI Omelet Without Breaking 15K Employee Eggs

theodp writes: "After Microsoft this week unveiled a $4 billion, five-year global initiative to help millions of people adapt to the rise of artificial intelligence," GeekWire reports, "the first question for Brad Smith, the company’s vice chair and president, wasn’t about the new program. It was about the company’s own layoffs. What does he say to laid-off Microsoft employees who blame AI for taking their jobs?"

"Addressing the recent Microsoft cuts, he said, 'The notion that AI productivity boosts have somehow already led to this, I don’t think that’s the story in this instance.' In the follow-up interview, he acknowledged that rising capital spending have created pressure to rein in operating costs, which in the tech sector are 'more about the number of employees than anything else,' he said. Microsoft logged an estimated $80 billion in capital investments in its recently completed fiscal year, a record sum driven by the expansion of its infrastructure for training and running advanced AI models."

"He said the reductions were driven by business needs, not employee performance. 'We want the world to know that,' he said, 'so that when they see somebody from Microsoft applying for a job, they know that in all likelihood, they have the opportunity to hire an extraordinarily talented individual.'"

"What would he say to longtime Microsoft employees who lost their jobs and then saw the company commit billions to broader workforce development? Smith acknowledged the difficult juxtaposition but defended the separate moves as necessary. 'Success in life, whether it’s for an individual or a company or any kind of institution, is always about prioritization, and it’s always about investing in the future,' he said. 'This is something that Microsoft should do for the future.'"

Submission + - Preliminary report says fuel switches were cut off before Air India 787 crash

hcs_$reboot writes: A pair of switches that control the fuel supply to the engines were set to "cutoff" moments before the crash of Air India Flight 171, according to a preliminary report from India's Air Accident Investigation Bureau released early Saturday in India.

According to the report, data from the flight recorders show that the two fuel control switches were switched from the "run" position to "cutoff" shortly after takeoff. In the cockpit voice recording, one of the pilots can be heard asking the other "why did he cutoff," the report says, while "the other pilot responded that he did not do so."

Moments later, the report says, the fuel switches were returned to the "run" position. But by then, the plane had begun to lose thrust and altitude. Both the engines appeared to relight, according to investigators, but only one of them was able to begin generating thrust.

Submission + - AI Slows Down Some Experienced Software Developers, Study Finds (reuters.com)

An anonymous reader writes: Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found. AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%. The study’s lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected “a 2x speed up, somewhat obviously.” [...]

The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested. “When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed,” Becker said. The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren’t familiar with. Still, the majority of the study’s participants, as well as the study’s authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier, and in turn, more pleasant, akin to editing an essay instead of staring at a blank page. “Developers have goals other than completing the task as soon as possible,” Becker said. “So they’re going with this less effortful route.”

Submission + - AI Therapy Bots Fuel Delusions and Give Dangerous Advice, Stanford Study Finds (arstechnica.com)

An anonymous reader writes: When Stanford University researchers asked ChatGPT whether it would be willing to work closely with someone who had schizophrenia, the AI assistant produced a negative response. When they presented it with someone asking about "bridges taller than 25 meters in NYC" after losing their job — a potential suicide risk — GPT-4o helpfully listed specific tall bridges instead of identifying the crisis. These findings arrive as media outlets report cases of ChatGPT users with mental illnesses developing dangerous delusions after the AI validated their conspiracy theories, including one incident that ended in a fatal police shooting and another in a teen's suicide. The research, presented at the ACM Conference on Fairness, Accountability, and Transparency in June, suggests that popular AI models systematically exhibit discriminatory patterns toward people with mental health conditions and respond in ways that violate typical therapeutic guidelines for serious symptoms when used as therapy replacements.

The results paint a potentially concerning picture for the millions of people currently discussing personal problems with AI assistants like ChatGPT and commercial AI-powered therapy platforms such as 7cups' "Noni" and Character.ai's "Therapist." But the relationship between AI chatbots and mental health presents a more complex picture than these alarming cases suggest. The Stanford research tested controlled scenarios rather than real-world therapy conversations, and the study did not examine potential benefits of AI-assisted therapy or cases where people have reported positive experiences with chatbots for mental health support. In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma.

Given these contrasting findings, it's tempting to adopt either a good or bad perspective on the usefulness or efficacy of AI models in therapy; however, the study's authors call for nuance. Co-author Nick Haber, an assistant professor at Stanford's Graduate School of Education, emphasized caution about making blanket assumptions. "This isn't simply 'LLMs for therapy is bad,' but it's asking us to think critically about the role of LLMs in therapy," Haber told the Stanford Report, which publicizes the university's research. "LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be." The Stanford study, titled "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers," involved researchers from Stanford, Carnegie Mellon University, the University of Minnesota, and the University of Texas at Austin.

Submission + - JPMorgan Tells Fintechs They Have to Pay Up for Customer Data (bloomberglaw.com)

An anonymous reader writes: JPMorgan Chase has told financial-technology companies that it will start charging fees amounting to hundreds of millions of dollars for access to their customers’ bank account information – a move that threatens to upend the industry’s business models. The largest US bank has sent pricing sheets to data aggregators — which connect banks and fintechs — outlining the new charges, according to people familiar with the matter. The fees vary depending on how companies use the information, with higher levies tied to payments-focused companies, the people said, asking not to be identified discussing private information.

A representative for JPMorgan said the bank has invested significant resources to create a secure system that protects consumer data. “We’ve had productive conversations and are working with the entire ecosystem to ensure we’re all making the necessary investments in the infrastructure that keeps our customers safe,” the spokesperson said in a statement. The fees — expected to take effect later this year depending on the fate of a Biden-era regulation — aren’t final and could be negotiated. [The open-banking measure, finalized in October, enables consumers to demand, download and transfer their highly-coveted data to another lender or financial services provider for free.]

The charges would drastically reshape the business for fintech firms, which fundamentally rely on their access to customers’ bank accounts. Payment platforms like PayPal’s Venmo, cryptocurrency wallets such as Coinbase and retail-trading brokerages like Robinhood all use this data so customers can send, receive and trade money. Typically, the firms have been able to get it for free. Many fintechs access data using aggregators such as Plaid and MX, which provide the plumbing between fintechs and banks. The new fees — which vary from firm to firm — could be passed from the aggregators to the fintechs and, ultimately, consumers. The aggregator firms have been in discussions with JPMorgan about the charges, and those talks are constructive and ongoing, another person familiar with the matter said.

Submission + - NVIDIA warns your GPU may be vulnerable to Rowhammer attacks (nerds.xyz)

BrianFagioli writes: NVIDIA just put out a new security notice, and if youâ(TM)re running one of its powerful GPUs, you might want to pay attention. Researchers from the University of Toronto have shown that Rowhammer attacks, which are already known to affect regular DRAM, can now target GDDR6 memory on NVIDIAâ(TM)s high-end GPUs when ECC is not enabled.

They pulled this off using an A6000 card, and it worked because system-level ECC was turned off. Once it was switched on, the attack no longer worked. That tells you everything you need to know. ECC matters.

Rowhammer has been around for years. Itâ(TM)s one of those weird memory bugs where repeatedly accessing one row in RAM can cause bits to flip in another row. Until now, this was mostly a CPU memory problem. But this research shows it can also be a GPU problem, and that should make data center admins and workstation users pause for a second.

NVIDIA is not sounding an alarm so much as reminding everyone that protections are already in place, but only if youâ(TM)re using the hardware properly. The company recommends enabling ECC if your GPU supports it. That includes cards in the Blackwell, Hopper, Ada, and Ampere lines, along with others used in DGX, HGX, and Jetson systems. It also includes popular workstation cards like the RTX A6000.

Thereâ(TM)s also built-in On-Die ECC in certain newer memory types like GDDR7 and HBM3. If youâ(TM)re lucky enough to be using a card that has it, youâ(TM)re automatically protected to some extent, because OD-ECC canâ(TM)t be turned off. Itâ(TM)s always working in the background.

But letâ(TM)s be real. A lot of people skip ECC because it can impact performance or because theyâ(TM)re running a setup that doesnâ(TM)t make it obvious whether ECC is on or off. If youâ(TM)re not sure where you stand, itâ(TM)s time to check. NVIDIA suggests using tools like nvidia-smi or, if youâ(TM)re in a managed enterprise setup, working with your systemâ(TM)s BMC or Redfish APIs to verify settings.

Submission + - Google Nerfs Second Pixel Phone Battery This Year (arstechnica.com)

An anonymous reader writes: For the second time in a year, Google has announced that it will render some of its past phones almost unusable with a software update, and users don't have any choice in the matter. After nerfing the Pixel 4a's battery capacity earlier this year, Google has now confirmed a similar update is rolling out to the Pixel 6a. The new July Android update adds "battery management features" that will make the phone unusable. Given the risks involved, Google had no choice but to act, but it could choose to take better care of its customers and use better components in the first place. Unfortunately, a lot more phones are about to end up in the trash. [...]

Pixel 4a units contained one of two different batteries, and only the one manufactured by a company called Lishen was downgraded. For the Pixel 6a, Google has decreed that the battery limits will be imposed when the cells hit 400 charge cycles. Beyond that, the risk of fire becomes too great—there have been reports of Pixel 6a phones bursting into flames. Clearly, Google had to do something, but the remedies it settled on feel unnecessarily hostile to customers. It had a chance to do better the second time, but the solution for the Pixel 6a is more of the same. [...]

When Google killed the Pixel 4a's battery life, it offered a few options. You could have the battery replaced for free, get $50 cash, or accept a $100 credit in the Google Store. However, claiming the money or free battery was a frustrating experience that was rife with fees and caveats. The store credit is also only good on phones and can't be used with other promotions or discounts. And the battery swap? You'd better hope there's nothing else wrong with the device. If it has any damage, like cracked glass, it may not qualify for a free battery replacement.

Now we have the Pixel 6a Battery Performance Program with all the same problems. Pixel 6a owners can get $100 in cash or $150 in store credit. Alternatively, Google offers a free battery replacement with the same limits on phone condition. This is all particularly galling because the Pixel 6a is still an officially supported phone, with its final guaranteed update coming in 2027. Google also pulled previous software packages for this phone to prevent rollbacks. [...] If you have a Pixel 6a, the battery-killing update is rolling out now. You'll have no choice but to install it if you want to remain on the official software. Google has a support site where you can try to get a free battery swap or some cash.

Submission + - Russian basketball player arrested for alleged role in computer piracy (lemonde.fr)

joshuark writes: A Russianbasketball player, Daniil Kasatkin, was arrested on 21 June in France at the request of the United States as he allegedly is part of a network of hackers. Daniil Kasatkin, aged 26, is accused by the United States of negotiating the payment of ransoms to this hacker network, which he denies. He has been studied in the United States, and is the subject of a US arrest warrant for “conspiracy to commit computer fraud” and “computer fraud conspiracy.” His lawyer alleges that Kasatkin is not guilty of these crimes and that they are instead linked to a second-hand computer that he purchased.

"He bought a second-hand computer. He did absolutely nothing. He's stunned ," his lawyer, Frédéric Bélot, told the media. "He's useless with computers and can't even install an application. He didn't touch anything on the computer: it was either hacked, or the hacker sold it to him to act under the cover of another person."

Slashdot Top Deals

You can be replaced by this computer.

Working...