Submission + - Mozilla Thunderbolt is an open-source AI client focused on control and self-hosting (nerds.xyz)

BrianFagioli writes: Mozilla's email subsidiary MZLA Technologies just introduced Thunderbolt, an open-source AI client aimed at organizations that want to run AI on their own infrastructure instead of relying entirely on cloud services. The idea is to give companies full control over their data, models, and workflows while still offering things like chat, research tools, automation, and integration with enterprise systems through the Haystack AI framework. Native apps are planned for Windows, macOS, Linux, iOS, and Android. Personally, I like the self-hosted concept, but the name "Thunderbolt" feels like a miss since there are already a ton of unrelated tech products using that name.

Submission + - Vishing attacks on Okta identity systems on the rise (scworld.com)

spatwei writes: Vishing attacks on Okta identity systems, in which attackers simply call the victim or an IT help desk and convince them to weaken or reset multi-factor authentication (MFA), are on the rise.

In an April 13 blog post, LevelBlue researchers said once Okta is compromised via vishing, the attackers gain access to an enterprise’s SaaS systems via single sign-on (SSO), which leads to the exfiltration of SharePoint, OneDrive, Salesforce, and Google Workspace data.

The LevelBlue researchers explained that as part of the attack, the threat actors aim to get the victim or help desk to reset MFA, enroll a new authenticator device, provide one-time passcodes, disclose passwords, or reset Okta credentials.

“The initial attack vector here is still classic social engineering, however, the strategy has matured,” said Mika Aalto, co-founder and CEO at Hoxhunt. “Instead of targeting individual users, attackers are moving upstream to bypass MFA at the identity provider level, manipulating in this case Okta's IT help desk to unlock access across the targeted organization.”
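One practical response to the attack chain described above is to watch the identity provider's own audit trail for MFA resets and fresh authenticator enrollments. Below is a minimal sketch that filters Okta System Log-style event records; the eventType strings are assumptions based on Okta's public naming scheme, and should be verified against a real tenant's logs before use:

```python
# Hedged sketch: flag identity-provider log events that often accompany
# help-desk vishing (MFA factor resets and new authenticator enrollments).
# The event type names below are assumed, not confirmed against any tenant.

WATCHED_EVENT_TYPES = {
    "user.mfa.factor.deactivate",   # an MFA factor was removed or reset
    "user.mfa.factor.activate",     # a new authenticator was enrolled
    "user.account.reset_password",  # credentials were reset
}

def flag_suspicious_mfa_events(events):
    """Return events whose eventType is on the watchlist, oldest first."""
    hits = [e for e in events if e.get("eventType") in WATCHED_EVENT_TYPES]
    return sorted(hits, key=lambda e: e.get("published", ""))

if __name__ == "__main__":
    sample = [
        {"eventType": "user.session.start", "published": "2025-04-13T10:00:00Z"},
        {"eventType": "user.mfa.factor.deactivate", "published": "2025-04-13T10:05:00Z"},
        {"eventType": "user.mfa.factor.activate", "published": "2025-04-13T10:06:00Z"},
    ]
    for event in flag_suspicious_mfa_events(sample):
        print(event["eventType"])
```

A spike of these events shortly after a help-desk call is exactly the pattern the LevelBlue researchers describe, so correlating them with ticketing data is a natural next step.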

Submission + - Keep doing social science experiments? (theguardian.com) 1

Bruce66423 writes: The London Guardian reflects on the poor reproducibility of experiments in social science revealed by the latest findings of the Systematizing Confidence in Open Research and Evidence (SCORE) program, which has now published three studies looking at 3,900 social science papers.

Despite the low quality revealed by these studies, the Guardian suggests that things are getting better. It admits that 'some findings don’t matter much' and that 'replication studies can themselves be flawed,' before concluding: 'These studies should strengthen the case for change and serve as a warning. Social science is a powerful tool for understanding the world – and that trust will be built by acknowledging uncertainty, not repudiating it.'

Given the degree to which 'following the science' led to some very bad decisions during the pandemic, not least because the social impact of lockdown choices was not well evaluated, it's hard to come to a clear view. And it's worth remembering the size of the industry employed in these social science studies... It's encouraging that one of its spokesmen is admitting that mistakes were made in the past. Whether this is enough reason to carry on is less clear!

Submission + - Tennessee bill turns building a chatbot into a Class A felony (reddit.com)

schwit1 writes: Tennessee HB1455/SB1493 creates Class A felony criminal liability — the same category as first-degree murder — for anyone who “knowingly trains artificial intelligence” to provide emotional support, act as a companion, simulate a human being, or engage in open-ended conversations that could lead a user to feel they have a relationship with the AI. The Senate Judiciary Committee already approved it 7-0. It takes effect July 1, 2026. This affects every conversational AI product in existence. If you deploy any AI SaaS product, you need to read this right now.

Submission + - 30 WordPress plugins turned into malware after ownership change (anchor.host)

axettone writes: Many website creators and their clients appreciate the ability to extend their site's functionality through plugins. However, governance failures in open source projects are increasingly revealing flaws far more serious than those found in the code itself. In this case, a company legally took ownership of an open source project, only to transform it, months later, into a trojan horse.
According to the source, this issue has been resolved by the WordPress team, but it is necessary to check whether your site has been compromised and, if so, clean up the malicious code.
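Since the article advises checking whether a site has been compromised, here is a minimal triage sketch that scans a plugins directory for code patterns commonly seen in injected PHP malware. The indicator list is illustrative and incomplete, and matches are leads for manual review, not proof of compromise:

```python
# Hedged sketch: quick triage scan of a WordPress plugins directory for
# suspicious PHP constructs (eval of decoded blobs, request-driven
# variable functions). Indicators are illustrative, not exhaustive.
import re
from pathlib import Path

INDICATORS = [
    re.compile(rb"eval\s*\(\s*base64_decode"),
    re.compile(rb"eval\s*\(\s*gzinflate"),
    re.compile(rb"create_function\s*\("),
    re.compile(rb"\$\w+\s*\(\s*\$_(GET|POST|REQUEST)"),  # variable function fed by request data
]

def scan_plugins(plugins_dir):
    """Yield (path, pattern) pairs for PHP files matching any indicator."""
    for php_file in Path(plugins_dir).rglob("*.php"):
        data = php_file.read_bytes()
        for pattern in INDICATORS:
            if pattern.search(data):
                yield php_file, pattern.pattern.decode()

if __name__ == "__main__":
    demo = Path("wp-content/plugins")  # assumed path; adjust to your install
    if demo.is_dir():
        for path, pattern in scan_plugins(demo):
            print(f"{path}: matched {pattern}")
```

Legitimate plugins occasionally use these constructs too, so each hit still needs a human (or a diff against a known-good copy from the official plugin directory).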

Submission + - Opera Browser Connector Lets ChatGPT and Claude Read Your Open Tabs (nerds.xyz)

BrianFagioli writes: Opera has introduced a new feature called Browser Connector that allows AI assistants like ChatGPT and Claude to access the contents of your browser tabs directly. The feature, available in Opera One and Opera GX, lets external AI tools read page content, understand context across multiple open tabs, and even analyze screenshots or charts from the pages you are viewing. Instead of copying text into a chatbot to explain what you are looking at, the browser can pass that context along automatically.

Opera says the feature is designed to reduce friction when using AI during research or comparison shopping, while also supporting an open approach that allows users to connect different AI tools instead of being locked into a single ecosystem. Browser Connector is currently available in Early Bird builds of the browsers. While the capability could make AI assistants far more useful for browsing tasks, it also raises privacy questions since enabling it effectively allows an AI service to see what you are doing inside your browser tabs.

Submission + - Researchers Induce Smells With Ultrasound, No Chemical Cartridges Required (uploadvr.com)

An anonymous reader writes: A group of independent researchers built a device that can artificially induce smell using ultrasound, with no consumable cartridges required. [...] The team of four comprises Lev Chizhov, Albert Yan-Huang, Thomas Ribeiro, and Aayush Gupta. Chizhov is a neurotech entrepreneur with a background in math and physics, Yan-Huang is a researcher at Caltech with a background in computation and neural systems, and Ribeiro and Gupta are co-researchers on the project with software engineering and AI expertise.

Instead of targeting your nose at all, the device directly targets the olfactory bulb in your brain with "focused ultrasound through the skull." The researchers say that as far as they're aware, no one has ever done this before, even in animals. A challenge in targeting the olfactory bulb is that it's buried behind the top of your nose, and your nose doesn't provide a flat surface for an emitter. Ultrasound also doesn't travel well through air. The solution the researchers came up with was to place the emitter on your forehead instead, with a "solid, jello-like pad for stability and general comfort," and the ultrasound directed downward towards the olfactory bulb.

To determine the best placement, they say they used an MRI of one of their skulls to "roughly determine where the transducer would point and how the focal region (where ultrasound waves actually concentrate) aligned with the olfactory bulb (the target for stimulation)". [...] According to the researchers, they were able to induce the sensation of fresh air "with a lot of oxygen", the smell of garbage "like few-day-old fruit peels," an ozone-like sensation "like you're next to an air ionizer," and a campfire smell of burning wood. While technically head-mounted, the current device does require being held up with two hands. But as with all such prototypes, it likely could be significantly miniaturized.

Submission + - Anna's Archive Loses $322 Million Spotify Piracy Case Without a Fight (torrentfreak.com)

An anonymous reader writes: Spotify and several major record labels, including UMG, Sony, and Warner, secured a $322 million default judgment against the unknown operators of Anna's Archive. The shadow library, which had briefly released millions of tracks scraped from Spotify via BitTorrent, failed to appear in court. In addition to the monetary penalty, a permanent injunction required domain registrars and other parties to suspend the site's domain names. [...]

The music labels get the statutory maximum of $150,000 in damages for around 50 works. Spotify adds a DMCA circumvention claim of $2,500 for 120,000 music files, bringing the total to more than $322 million. The plaintiffs previously described their damages request as “extremely conservative.” The DMCA claim is based only on the 120,000 files, not the full 2.8 million that were released. Had they applied the $2,500 rate to all released files, the damages figure would exceed $7 billion. Anna’s Archive did not show up in court, and the operators of the site remain unidentified. The judgment attempts to address this directly, by ordering Anna’s Archive to file a compliance report within ten business days, under penalty of perjury, that includes valid contact information for the site and its managing agents.
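The figures quoted above can be sanity-checked with simple arithmetic. The per-work and per-file rates are taken from the article, and the counterfactual total matches its "exceed $7 billion" claim:

```python
# Sanity-checking the damages arithmetic reported above.
STATUTORY_MAX_PER_WORK = 150_000   # copyright statutory maximum per work
DMCA_PER_FILE = 2_500              # circumvention claim per music file

dmca_awarded = DMCA_PER_FILE * 120_000     # files actually claimed
dmca_if_all = DMCA_PER_FILE * 2_800_000    # counterfactual: every released file

print(f"DMCA portion awarded:    ${dmca_awarded:,}")   # $300,000,000
print(f"If applied to all files: ${dmca_if_all:,}")    # $7,000,000,000
```

The $300 million DMCA component is thus the overwhelming bulk of the judgment; the labels' statutory awards make up the remainder of the $322 million total.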

Whether the site will comply with this order is highly uncertain. For now, the monetary judgment is mostly a victory on paper, as recouping money from an unknown entity is impossible. For this reason, the music companies also requested a permanent injunction. In addition to the damages award, [Judge Jed Rakoff] entered a permanent worldwide injunction covering ten Anna’s Archive domains: annas-archive.org, .li, .se, .in, .pm, .gl, .ch, .pk, .gd, and .vg. Domain registries and registrars of record, along with hosting and internet service providers, are ordered to permanently disable access to those domains, disable authoritative nameservers, cease hosting services, and preserve evidence that could identify the site’s operators.

The judgment names specific third parties bound by those obligations, including Public Interest Registry, Cloudflare, Switch Foundation, The Swedish Internet Foundation, Njalla SRL, IQWeb FZ-LLC, Immaterialism Ltd., Hosting Concepts B.V., Tucows Domains Inc., and OwnRegistrar, Inc. Anna’s Archive is also ordered to destroy all copies of works scraped from Spotify and to file a compliance report within ten business days, under penalty of perjury, including valid contact information for the site and its managing agents. That last requirement could prove significant, given that the identity of the site’s operators remains unknown.

Submission + - Slowbooks, an AI-coded clean-room reimplementation of QuickBooks (github.com)

Archangel Michael writes: The Story
VonHoltenCodes ran QuickBooks 2003 Pro for 14 years for side-business invoicing and bookkeeping. Then the hard drive died. Intuit's activation servers have been dead since ~2017, so the software can't be reinstalled. The license he paid for is worthless.

So he built his own replacement and transferred all his data from the old .QBW file using IIF export/import.
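The IIF route works because IIF is a plain tab-separated text format: lines beginning with "!" declare the columns for a record type, and subsequent lines of that type carry the values. Here is a minimal parsing sketch; the column names in the sample are illustrative, and real exports vary by QuickBooks version:

```python
# Hedged sketch of reading QuickBooks IIF export data, the escape hatch
# the story describes for migrating out of a dead .QBW file. Header lines
# start with "!" and define columns; data lines reuse the leading tag.
import csv
import io

def parse_iif(text):
    """Return {record_type: [row_dict, ...]} from IIF-formatted text."""
    headers = {}
    records = {}
    for row in csv.reader(io.StringIO(text), delimiter="\t"):
        if not row:
            continue
        tag = row[0]
        if tag.startswith("!"):        # header line: declares the columns
            headers[tag[1:]] = row[1:]
        elif tag in headers:           # data line for a declared type
            records.setdefault(tag, []).append(
                dict(zip(headers[tag], row[1:]))
            )
    return records

sample = "!ACCNT\tNAME\tACCNTTYPE\nACCNT\tChecking\tBANK\n"
print(parse_iif(sample)["ACCNT"][0]["NAME"])    # Checking
```

Because the format is just tagged tab-separated rows, the same approach extends to transaction blocks (TRNS/SPL/ENDTRNS) for invoices and payments.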

The codebase is annotated with "decompilation" comments referencing QBW32.EXE offsets, Btrieve table layouts, and MFC class names — a tribute to the software that served him well for 14 years before its maker decided it should stop working.

This is a clean-room reimplementation. No Intuit source code was available or used.

(Side note from the story submitter: this is the beginning of the end of Windows-only applications.)

Submission + - IBM warns AI-powered hackers are coming, so it built AI to fight them (nerds.xyz)

BrianFagioli writes: IBM says hackers are starting to use powerful AI models to find vulnerabilities and automate cyberattacks, and it thinks traditional security teams may not be able to keep up. The company just announced new cybersecurity tools, including an AI-driven assessment to identify weaknesses in enterprise systems and something called IBM Autonomous Security, which uses coordinated AI agents to detect threats and automatically respond at machine speed. In other words, IBM's answer to AI-powered hackers is more AI, which raises the interesting possibility that future cyber battles could end up being machines defending networks against other machines.

Submission + - Stanford Report Highlights Growing Disconnect Between AI Insiders, Everyone Else (techcrunch.com)

An anonymous reader writes: AI experts' and the public's opinions on the technology are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. [...] Stanford's report provides more insight into where all this negativity is coming from, as it summarizes data on public sentiment toward AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.

Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI’s impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it’s not surprising that only 21% of the public felt similarly. Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI’s impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years.

The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. Singapore ranked highest at 81%, per Ipsos data cited in Stanford's report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go “too far.” Despite the fears and concerns, AI did get one accolade: Globally, the share of respondents who feel AI products and services offer more benefits than drawbacks rose slightly from 55% in 2024 to 59% in 2025. But at the same time, those who said that AI makes them “nervous” grew from 50% to 52% during the same period, per data cited by the report's authors.

Submission + - USAF Asked This Man to Investigate UFOs, Then Pushed Him Away After What He Found (yahoo.com)

schwit1 writes: Do you think the U.S. government is hiding, and possibly reverse-engineering, extraterrestrial technology? Think again. Or better yet, don't think about it at all. Nothing to see here.

That's the underlying message of a report released in 2024 by the Department of Defense. The 63-page "Report on the Historical Record of U.S. Government Involvement with Unidentified Anomalous Phenomena (UAP)" concludes that the DoD's All-Domain Anomaly Resolution Office (AARO) "found no evidence that any [U.S. Government] investigation, academic-sponsored research, or official review panel has confirmed that any sighting of a UAP represented extraterrestrial technology."

The AARO, as The Guardian summarizes, is "a government office established in 2022 to detect and, as necessary, mitigate threats including ‘anomalous, unidentified space, airborne, submerged and transmedium objects'."
