AI

AI is Now Screening Job Candidates Before Humans Ever See Them (msn.com) 69

AI agents are now conducting first-round job interviews to screen candidates before human recruiters review them, according to The Washington Post, which cites job seekers who report being contacted by virtual recruiters from different staffing companies. The conversational agents, built on large language models, help recruiting firms respond to every applicant and conduct interviews around the clock as companies face increasingly large talent pools.

LinkedIn reported that job applications have jumped 30% in the last two years, partially due to AI, with some positions receiving hundreds of applications within hours. The Society for Human Resource Management said a growing number of organizations now use AI for recruiting to automate candidate searches and communicate with applicants during interviews. The AI interviews, conducted by phone or video, can last anywhere from a few minutes to 20 minutes depending on the candidate's experience and the hiring firm's questions.
Security

New NSA/CISA Report Again Urges the Use of Memory-Safe Programming Languages (theregister.com) 66

An anonymous reader shared this report from the tech news site The Register: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) this week published guidance urging software developers to adopt memory-safe programming languages. "The importance of memory safety cannot be overstated," the inter-agency report says...

The CISA/NSA report revisits the rationale for greater memory safety and the government's calls to adopt memory-safe languages (MSLs) while also acknowledging the reality that not every agency can change horses mid-stream. "A balanced approach acknowledges that MSLs are not a panacea and that transitioning involves significant challenges, particularly for organizations with large existing codebases or mission-critical systems," the report says. "However, several benefits, such as increased reliability, reduced attack surface, and decreased long-term costs, make a strong case for MSL adoption."

The report cites how Google by 2024 managed to reduce memory safety vulnerabilities in Android to 24 percent of the total. It goes on to provide an overview of the various benefits of adopting MSLs and discusses adoption challenges. And it urges the tech industry to promote memory safety by, for example, advertising jobs that require MSL expertise.

It also cites various government projects to accelerate the transition to MSLs, such as the Defense Advanced Research Projects Agency (DARPA) Translating All C to Rust (TRACTOR) program, which aspires to develop an automated method to translate C code to Rust. A recent effort along these lines, dubbed Omniglot, has been proposed by researchers at Princeton, UC Berkeley, and UC San Diego. It provides a safe way for unsafe libraries to communicate with Rust code through a Foreign Function Interface....
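The core idea behind FFI safety layers like Omniglot -- a trusted wrapper that validates data before it crosses into unsafe native code -- can be sketched even from Python, whose ctypes module crosses the same kind of foreign-function boundary. This is an illustrative analogy, not Omniglot's actual mechanism, and the validation policy here is invented:

```python
import ctypes
import ctypes.util

# Load the C standard library (POSIX). The symbols behind this handle
# are unsafe native code with no bounds or type checking of their own.
_libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

_libc.strlen.argtypes = [ctypes.c_char_p]
_libc.strlen.restype = ctypes.c_size_t

def safe_strlen(s: str) -> int:
    """Validate and encode the input on the safe side of the boundary,
    so the foreign function only ever sees a NUL-terminated byte string."""
    if not isinstance(s, str):
        raise TypeError("expected str")
    data = s.encode("utf-8")  # ctypes passes a NUL-terminated buffer
    return int(_libc.strlen(data))

print(safe_strlen("hello"))  # 5
```

The point mirrored here is that all checking happens before control leaves the memory-safe side: a safe FFI layer makes the wrapper, not every caller, responsible for upholding the native code's invariants.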

"Memory vulnerabilities pose serious risks to national security and critical infrastructure," the report concludes. "MSLs offer the most comprehensive mitigation against this pervasive and dangerous class of vulnerability."

"Adopting memory-safe languages can accelerate modern software development and enhance security by eliminating these vulnerabilities at their root," the report concludes, calling the idea "an investment in a secure software future."

"By defining memory safety roadmaps and leading the adoption of best practices, organizations can significantly improve software resilience and help ensure a safer digital landscape."
EU

'The Year of the EU Linux Desktop May Finally Arrive' (theregister.com) 71

Steven J. Vaughan-Nichols writes in an opinion piece for The Register: Microsoft, tacitly admitting it has failed to talk all the Windows 10 PC users into moving to Windows 11 after all, is -- sort of, kind of -- extending Windows 10 support for another year. For most users, that means they'll need to subscribe to Microsoft 365. This, in turn, means their data and meta-information will be kept in a US-based datacenter. That isn't sitting so well with many European Union (EU) organizations and companies. It doesn't sit that well with me or a lot of other people either.

A few years back, I wrote in these very pages that Microsoft didn't want you so much to buy Windows as subscribe to its cloud services and keep your data on its servers. If you wanted a real desktop operating system, Linux would be almost your only choice. Nothing has changed since then, except that folks are getting a wee bit more concerned about their privacy now that President Donald Trump is in charge of the US. You may have noticed that he and his regime love getting their hands on other people's data.

Privacy isn't the only issue. Can you trust Microsoft to deliver on its service promises under American political pressure? Ask the EU-based International Criminal Court (ICC): after it issued an arrest warrant for Israeli Prime Minister Benjamin Netanyahu for war crimes, Trump imposed sanctions on the court. Soon afterward, the ICC's chief prosecutor, Karim Khan, was reportedly locked out of his Microsoft email accounts. Coincidence? Some think not. Microsoft denies it had anything to do with this.

Peter Ganten, chairman of the German-based Open-Source Business Alliance (OSBA), opined that these US-ordered sanctions, which he alleged had been implemented by Microsoft, "must be a wake-up call for all those responsible for the secure availability of state and private IT and communication infrastructures." Microsoft Vice Chair and President Brad Smith had promised that the company would stand behind its EU customers against political pressure. In the aftermath of the ICC reports, Smith declared Microsoft had not been "in any way [involved in] the cessation of services to the ICC." In the meantime, if you want to reach Khan, you'll find him on the privacy-first Swiss email provider, ProtonMail.

In short, besides all the other good reasons for people switching to the Linux desktop - security, Linux is now easy to use, and, thanks to Steam, you can do serious gaming on Linux - privacy has become much more critical. That's why several EU governments have decided that moving to the Linux desktop makes a lot of sense... Besides, all these governments know that switching from Windows 10 to 11 isn't cheap. While finances also play a role, and I always believe in "following the money" when it comes to such software decisions, there's no question that Europe is worried about just how trustworthy America and its companies are these days. Do you blame them? I don't.
The shift to the Linux desktop is "nothing new," as Vaughan-Nichols notes. Munich launched its LiMux project back in 2004 and, despite ending it in 2017, reignited its open-source commitment by establishing a dedicated program office in 2024. In France, the gendarmerie now operates over 100,000 computers on a custom Ubuntu-based OS (GendBuntu), while the city of Lyon is transitioning to Linux and PostgreSQL.

More recently, Denmark announced it is dropping Windows and Office in favor of Linux and LibreOffice, citing digital sovereignty. The German state of Schleswig-Holstein is following suit, also moving away from Microsoft software. Meanwhile, a pan-European Linux OS (EU OS) based on Fedora Kinoite is being explored, with Linux Mint and openSUSE among the alternatives under consideration.
United States

Zuckerberg's Advocacy Group Warns US Families They Can't Afford Immigration Policy Changes 186

theodp writes: FWD.us, the immigration and criminal justice-focused nonprofit of Meta CEO Mark Zuckerberg -- the world's third richest person, according to Forbes with an estimated $250B net worth -- has released a new research report warning that announced immigration policies will hurt American families, who can't afford it with their meager savings.

The report begins: "Inflation remains a top concern for the majority of Americans. But new immigration policies announced by President Trump, and already underway, such as revoking immigrant work permits, deporting millions of people, and limiting legal immigration, would directly undermine the goal to level out, or even lower, the costs of everyday and essential goods and services. In fact, all Americans, particularly working-class families, are about to unnecessarily see prices for goods and services like food and housing increase substantially again, above and beyond other economic policies like global tariffs that could also raise prices. Announced immigration policies will result in American families paying an additional $2,150 for goods and services each year by the end of 2028, or the equivalent of the average American family's grocery bill for 3 months or their combined electricity and gas bills for the entire year. Such an annual increase would represent a tax that would erase many American families' annual savings, and amount to one of their bi-weekly paychecks each year. Unlike past periods of inflation, Americans have not been saving at the same rate as earlier years, and can't as easily absorb these price increases, squeezing American budgets even further."

In 2021, Zuckerberg's FWD.us teamed with the nation's tech giants to file a brief in a Supreme Court case to help crush WashTech (a tiny programmers' union), which had challenged the lawfulness of hiring international students under the Optional Practical Training (OPT) program. "Striking down OPT and STEM OPT," FWD.us and its tech giant partners argued in their filing [PDF], "would create a sudden labor shortage in the United States for many companies' most important technical jobs" and "hurt U.S. workers." The brief also dismissed WashTech's contention that the programs, coupled with a talent surplus, would shut U.S. workers out of the labor market, citing Microsoft President Brad Smith's claim of an acute talent shortage and a 2.4% unemployment rate for computer occupations (that was then, this is now).
Privacy

Judge Denies Creating 'Mass Surveillance Program' Harming All ChatGPT Users (arstechnica.com) 62

An anonymous reader quotes a report from Ars Technica: After a court ordered OpenAI to "indefinitely" retain all ChatGPT logs, including deleted chats, of millions of users, two panicked users tried and failed to intervene. The order sought to preserve potential evidence in a copyright infringement lawsuit raised by news organizations. In May, Judge Ona Wang, who drafted the order, rejected the first user's request (PDF) on behalf of his company simply because the company should have hired a lawyer to draft the filing. But more recently, Wang rejected (PDF) a second claim from another ChatGPT user, and that order went into greater detail, revealing how the judge is considering opposition to the order ahead of oral arguments this week, which were urgently requested by OpenAI.

The second request (PDF) to intervene came from a ChatGPT user named Aidan Hunt, who said that he uses ChatGPT "from time to time," occasionally sending OpenAI "highly sensitive personal and commercial information in the course of using the service." In his filing, Hunt alleged that Wang's preservation order created a "nationwide mass surveillance program" affecting and potentially harming "all ChatGPT users," who received no warning that their deleted and anonymous chats were suddenly being retained. He warned that the order limiting retention to just ChatGPT outputs carried the same risks as including user inputs, since outputs "inherently reveal, and often explicitly restate, the input questions or topics."

Hunt claimed that he only learned that ChatGPT was retaining this information -- despite policies specifying they would not -- by stumbling upon the news in an online forum. Feeling that his Fourth Amendment and due process rights were being infringed, Hunt sought to influence the court's decision and proposed a motion to vacate the order that said Wang's "order effectively requires Defendants to implement a mass surveillance program affecting all ChatGPT users." [...] OpenAI will have a chance to defend panicked users on June 26, when Wang hears oral arguments over the ChatGPT maker's concerns about the preservation order. In his filing, Hunt explained that among his worst fears is that the order will not be blocked and that chat data will be disclosed to news plaintiffs who may be motivated to publicly disseminate the deleted chats. That could happen if news organizations find evidence of deleted chats they say are likely to contain user attempts to generate full news articles.

Wang suggested that there is no risk at this time since no chat data has yet been disclosed to the news organizations. That could mean that ChatGPT users may have better luck intervening after chat data is shared, should OpenAI's fight to block the order this week fail. But that's likely no comfort to users like Hunt, who worry that OpenAI merely retaining the data -- even if it's never shared with news organizations -- could cause severe and irreparable harms. Some users appear to be questioning how hard OpenAI will fight. In particular, Hunt is worried that OpenAI may not prioritize defending users' privacy if other concerns -- like "financial costs of the case, desire for a quick resolution, and avoiding reputational damage" -- are deemed more important, his filing said.

Security

The 16-Billion-Record Data Breach That No One's Ever Heard of (cybernews.com) 34

An anonymous reader quotes a report from Cybernews: Several collections of login credentials reveal one of the largest data breaches in history, totaling a humongous 16 billion exposed login credentials. The data most likely originates from various infostealers. Unnecessarily compiling sensitive information can be as damaging as actively trying to steal it. For example, the Cybernews research team discovered a plethora of supermassive datasets, housing billions upon billions of login credentials. From social media and corporate platforms to VPNs and developer portals, no stone was left unturned.

Our team has been closely monitoring the web since the beginning of the year. So far, they've discovered 30 exposed datasets containing from tens of millions to over 3.5 billion records each. In total, the researchers uncovered an unimaginable 16 billion records. None of the exposed datasets were reported previously, bar one: in late May, Wired magazine reported a security researcher discovering a "mysterious database" with 184 million records. It barely scratches the top 20 of what the team discovered. Most worryingly, researchers claim new massive datasets emerge every few weeks, signaling how prevalent infostealer malware truly is.

"This is not just a leak -- it's a blueprint for mass exploitation. With over 16 billion login records exposed, cybercriminals now have unprecedented access to personal credentials that can be used for account takeover, identity theft, and highly targeted phishing. What's especially concerning is the structure and recency of these datasets -- these aren't just old breaches being recycled. This is fresh, weaponizable intelligence at scale," researchers said. The only silver lining here is that all of the datasets were exposed only briefly: long enough for researchers to uncover them, but not long enough to find who was controlling vast amounts of data. Most of the datasets were temporarily accessible through unsecured Elasticsearch or object storage instances.
Key details to be aware of:

- The records include billions of login credentials, often structured as URL, login, and password.
- The datasets include both old and recent breaches, many with cookies, tokens, and metadata, making them especially dangerous for organizations without multi-factor authentication or strong credential practices.
- Exposed services span major platforms like Apple, Google, Facebook, Telegram, GitHub, and even government services.
- The largest dataset alone includes 3.5 billion records, while one associated with the Russian Federation has over 455 million; many dataset names suggest links to malware or specific regions.
- Ownership of the leaked data is unclear, but its potential for phishing, identity theft, and ransomware is severe.
- Basic cyber hygiene -- such as regularly updating strong passwords and scanning for malware -- is currently the best line of defense for users.
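Records in the "URL, login, password" shape described above are trivially machine-parsable, which is much of what makes these dumps weaponizable at scale. A quick sketch with invented sample lines (splitting from the right, since the URL itself contains colons):

```python
from urllib.parse import urlsplit

# Invented sample lines in the common infostealer-log format
records = [
    "https://accounts.example.com/login:alice@example.com:hunter2",
    "https://mail.example.org/auth:bob:pa55word",
]

def parse_record(line: str) -> dict:
    # rsplit from the right: the "https://" scheme means the URL
    # portion has colons of its own
    url, login, password = line.rsplit(":", 2)
    return {"domain": urlsplit(url).hostname,
            "login": login,
            "password": password}

parsed = [parse_record(r) for r in records]
print(parsed[0]["domain"])  # accounts.example.com
```

A defender might group such records by domain to notify affected services; attackers run the same loop to feed credential-stuffing tools, which is why multi-factor authentication matters so much here.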

AI

Salesforce Study Finds LLM Agents Flunk CRM and Confidentiality Tests 21

A new Salesforce-led study found that LLM-based AI agents struggle with real-world CRM tasks, achieving only 58% success on simple tasks and dropping to 35% on multi-step ones. They also demonstrated poor confidentiality awareness. "Agents demonstrate low confidentiality awareness, which, while improvable through targeted prompting, often negatively impacts task performance," a paper published at the end of last month said. The Register reports: The Salesforce AI Research team argued that existing benchmarks failed to rigorously measure the capabilities or limitations of AI agents, and largely ignored an assessment of their ability to recognize sensitive information and adhere to appropriate data handling protocols.

The research unit's CRMArena-Pro tool is fed a data pipeline of realistic synthetic data to populate a Salesforce organization, which serves as the sandbox environment. The agent takes user queries and decides between an API call or a response to the users to get more clarification or provide answers.
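That per-turn choice -- issue an API call against the sandbox org, or go back to the user for clarification -- can be sketched as a minimal routing function. The heuristic, field names, and endpoint below are invented for illustration and are not CRMArena-Pro's actual logic:

```python
def handle_turn(query: str) -> dict:
    """Decide whether a user query is specific enough to act on."""
    required = ("case", "account")  # fields a lookup needs (assumed)
    missing = [f for f in required if f not in query.lower()]
    if missing:
        # Under-specified: respond to the user instead of calling a tool
        return {"action": "clarify",
                "message": f"Could you specify the {missing[0]}?"}
    # Specific enough: route to a (hypothetical) sandbox-org API
    return {"action": "api_call",
            "endpoint": "crm.lookup",
            "args": {"query": query}}

print(handle_turn("What's the status?")["action"])                  # clarify
print(handle_turn("Status of case 42 for account Acme")["action"])  # api_call
```

Real agents make this decision with the LLM itself rather than a keyword check, which is precisely where the benchmark finds them failing on multi-step tasks.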

"These findings suggest a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios," the paper said. [...] AI agents might well be useful; however, organizations should be wary of banking on any benefits before they are proven.
Businesses

The US Navy Is More Aggressively Telling Startups, 'We Want You' (techcrunch.com) 20

An anonymous reader quotes a report from TechCrunch: While Silicon Valley executives like those from Palantir, Meta, and OpenAI are grabbing headlines for trading their Brunello Cucinelli vests for Army Reserve uniforms, a quieter transformation has been underway in the U.S. Navy. How so? Well, the Navy's chief technology officer, Justin Fanelli, says he has spent the last two and a half years cutting through the red tape and shrinking the protracted procurement cycles that once made working with the military a nightmare for startups. The efforts represent a less visible but potentially more meaningful remaking that aims to see the government move faster and be smarter about where it's committing dollars.

"We're more open for business and partnerships than we've ever been before," Fanelli told TechCrunch in a recent episode of StrictlyVC Download. "We're humble and listening more than before, and we recognize that if an organization shows us how we can do business differently, we want that to be a partnership." Right now, many of these partnerships are being facilitated through what Fanelli calls the Navy's innovation adoption kit, a series of frameworks and tools that aim to bridge the so-called Valley of Death, where promising tech dies on its path from prototype to production. "Your granddaddy's government had a spaghetti chart for how to get in," Fanelli said. "Now it's a funnel, and we are saying, if you can show that you have outsized outcomes, then we want to designate you as an enterprise service."

In one recent case, the Navy went from a Request for Proposal (RFP) to pilot deployment in under six months with Via, an eight-year-old, Somerville, Massachusetts-based cybersecurity startup that helps big organizations protect sensitive data and digital identities through, in part, decentralization, meaning the data isn't stored in one central spot that can be hacked. (Another of Via's clients is the U.S. Air Force.) The Navy's new approach operates on what Fanelli calls a "horizon" model, borrowed and adapted from McKinsey's innovation framework. Companies move through three phases: evaluation, structured piloting, and scaling to enterprise services. The key difference from traditional government contracting, Fanelli says, is that the Navy now leads with problems rather than predetermined solutions. "Instead of specifying, 'Hey, we'd like this problem solved in a way that we've always had it,' we just say, 'We have a problem, who wants to solve this, and how will you solve it?'" Fanelli said.

AI

Increased Traffic from Web-Scraping AI Bots is Hard to Monetize (yahoo.com) 57

"People are replacing Google search with artificial intelligence tools like ChatGPT," reports the Washington Post.

But that's just the first change, according to TollBit, a New York-based start-up that watches for content-scraping AI companies with a free analytics product, with the stated goal of "ensuring that these intelligent agents pay for the content they consume." Its data from 266 web sites (half run by national or local news organizations) found that "traffic from retrieval bots grew 49% in the first quarter of 2025 from the fourth quarter of 2024," the Post reports. A spokesperson for OpenAI said that referral traffic to publishers from ChatGPT searches may be lower in quantity but that it reflects a stronger user intent compared with casual web browsing.

To capitalize on this shift, websites will need to reorient themselves to AI visitors rather than human ones [said TollBit CEO/co-founder Toshit Panigrahi]. But he also acknowledged that squeezing payment for content when AI companies argue that scraping online data is fair use will be an uphill climb, especially as leading players make their newest AI visitors even harder to identify....

In the past eight months, as chatbots have evolved to incorporate features like web search and "reasoning" to answer more complex queries, traffic for retrieval bots has skyrocketed. It grew 2.5 times as fast as traffic for bots that scrape data for training between the fourth quarter of 2024 and the first quarter of 2025, according to TollBit's report. Panigrahi said TollBit's data may underestimate the magnitude of this change because it doesn't reflect bots that AI companies send out on behalf of AI "agents" that can complete tasks on a user's behalf, like ordering takeout from DoorDash. The start-up's findings also add a dimension to mounting evidence that the modern internet — optimized for Google search results and social media algorithms — will have to be restructured as the popularity of AI answers grows. "To think of it as, 'Well, I'm optimizing my search for humans' is missing out on a big opportunity," he said.

Installing TollBit's analytics platform is free for news publishers, and the company has more than 2,000 clients, many of which are struggling with these seismic changes, according to data in the report. Although news publishers and other websites can implement blockers to prevent various AI bots from scraping their content, TollBit found that more than 26 million AI scrapes bypassed those blockers in March alone. Some AI companies claim bots for AI agents don't need to follow bot instructions because they are acting on behalf of a user.
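The "bot instructions" at issue are typically a site's robots.txt file, and a sketch with Python's standard-library parser shows both how a compliant crawler checks it and why it is no real barrier: compliance is entirely voluntary. The user agent and rules below are made up:

```python
import urllib.robotparser

# A made-up publisher robots.txt that blocks one AI crawler by name
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler asks before fetching; a scraper simply doesn't
print(rp.can_fetch("ExampleAIBot", "https://news.example.com/article"))  # False
print(rp.can_fetch("Mozilla/5.0", "https://news.example.com/article"))   # True
```

Nothing enforces the False answer, which is how 26 million scrapes can sail past blockers in a month: the blocked bot can ignore the file, or identify itself as an ordinary browser.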

The Post also got this comment from the chief operating officer for the media company Time, which successfully negotiated content licensing deals with OpenAI and Perplexity.

"The vast majority of the AI bots out there absolutely are not sourcing the content through any kind of paid mechanism... There is a very, very long way to go."
AI

Enterprise AI Adoption Stalls As Inferencing Costs Confound Cloud Customers 18

According to market analyst firm Canalys, enterprise adoption of AI is slowing due to unpredictable and often high costs associated with model inferencing in the cloud. Despite strong growth in cloud infrastructure spending, businesses are increasingly scrutinizing cost-efficiency, with some opting for alternatives to public cloud providers as they grapple with volatile usage-based pricing models. The Register reports: [Canalys] published stats that show businesses spent $90.9 billion globally on infrastructure and platform-as-a-service with the likes of Microsoft, AWS and Google in calendar Q1, up 21 percent year-on-year, as the march of cloud adoption continues. Canalys says that growth came from enterprise users migrating more workloads to the cloud and exploring the use of generative AI, which relies heavily on cloud infrastructure.

Yet even as organizations move beyond development and trials to deployment of AI models, a lack of clarity over the ongoing recurring costs of inferencing services is becoming a concern. "Unlike training, which is a one-time investment, inference represents a recurring operational cost, making it a critical constraint on the path to AI commercialization," said Canalys senior director Rachel Brindley. "As AI transitions from research to large-scale deployment, enterprises are increasingly focused on the cost-efficiency of inference, comparing models, cloud platforms, and hardware architectures such as GPUs versus custom accelerators," she added.

Canalys researcher Yi Zhang said many AI services follow usage-based pricing models that charge on a per token or API call basis. This makes cost forecasting hard as the use of the services scale up. "When inference costs are volatile or excessively high, enterprises are forced to restrict usage, reduce model complexity, or limit deployment to high-value scenarios," Zhang said. "As a result, the broader potential of AI remains underutilized." [...] According to Canalys, cloud providers are aiming to improve inferencing efficiency via a modernized infrastructure built for AI, and reduce the cost of AI services.
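The forecasting problem Zhang describes is easy to see in a back-of-envelope model: with per-token pricing, cost scales linearly with usage, so a pilot that looks cheap can be two orders of magnitude dearer at rollout. All prices here are invented, not any provider's actual rates:

```python
# Hypothetical per-token rates for illustration only
PRICE_PER_1K_INPUT = 0.003   # $/1K input tokens
PRICE_PER_1K_OUTPUT = 0.012  # $/1K output tokens

def monthly_cost(requests_per_day, in_tokens, out_tokens, days=30):
    """Recurring inference bill for a fixed per-request token profile."""
    per_request = (in_tokens / 1000) * PRICE_PER_1K_INPUT \
                + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return requests_per_day * per_request * days

pilot = monthly_cost(1_000, 800, 400)     # small trial
rollout = monthly_cost(100_000, 800, 400) # company-wide deployment
print(f"pilot: ${pilot:,.0f}/mo  rollout: ${rollout:,.0f}/mo")
# pilot: $216/mo  rollout: $21,600/mo
```

In practice token counts per request are themselves variable (longer contexts, "reasoning" tokens), which is what makes the forecast volatile rather than merely linear.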
The report notes that AWS, Azure, and Google Cloud "continue to dominate the IaaS and PaaS market, accounting for 65 percent of customer spending worldwide."

"However, Microsoft and Google are slowly gaining ground on AWS, as its growth rate has slowed to 'only' 17 percent, down from 19 percent in the final quarter of 2024, while the two rivals have maintained growth rates of more than 30 percent."
Java

UK Universities Sign $13.3 Million Deal To Avoid Oracle Java Back Fees (theregister.com) 30

An anonymous reader quotes a report from The Register: UK universities and colleges have signed a framework worth up to 9.86 million pounds ($13.33 million) with Oracle to use its controversial Java SE Universal Subscription model, in exchange for a "waiver of historic fees due for any institutions who have used Oracle Java since 2023." Jisc, a membership organization that runs procurement for higher and further education establishments in the UK, said it had signed an agreement to purchase the new subscription licenses after consultation with members. In a procurement notice, it said institutions that use Oracle Java SE are required to purchase subscriptions. "The agreement includes the waiver of historic fees due for any institutions who have used Oracle Java since 2023," the notice said.

The Java SE Universal Subscription was introduced in January 2023 to an outcry from licensing experts and analysts. It moved licensing of Java from a per-user basis to a per-employee basis. At the time, Oracle said it was "a simple, low-cost monthly subscription that includes Java SE Licensing and Support for use on Desktops, Servers or Cloud deployments." However, licensing advisors said early calculations to help some clients showed that the revamp might increase costs by up to ten times. Later, analysis from Gartner found the per-employee subscription model to be two to five times more expensive than the legacy model.
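The cost jump comes from the change of denominator: per-user billing counts only Java users, while per-employee billing counts everyone, contractors included. A back-of-envelope sketch with illustrative prices (not Oracle's actual list prices):

```python
# Hypothetical monthly list prices for illustration only
LEGACY_PER_USER = 2.50         # $/named user/month (old per-user model)
UNIVERSAL_PER_EMPLOYEE = 8.25  # $/employee/month (per-employee model)

employees = 10_000   # everyone counts under the new model
java_users = 4_500   # only these needed licenses under the old model

legacy = java_users * LEGACY_PER_USER * 12
universal = employees * UNIVERSAL_PER_EMPLOYEE * 12
print(f"legacy: ${legacy:,.0f}/yr  universal: ${universal:,.0f}/yr "
      f"({universal / legacy:.1f}x)")  # roughly a 7x increase here
```

The multiplier depends entirely on what fraction of staff actually use Java, which is why estimates range from Gartner's two-to-five times up to the tenfold increase some licensing advisors calculated.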

"For large organizations, we expect the increase to be two to five times, depending on the number of employees an organization has," Nitish Tyagi, principal Gartner analyst, said in July 2024. "Please remember, Oracle defines employees as part-time, full-time, temporary, agents, contractors, as in whosoever supports internal business operations has to be licensed as per the new Java Universal SE Subscription model." Since the introduction of the new Oracle Java licensing model, user organizations have been strongly advised to move off Oracle Java and find open source alternatives for their software development and runtime environments. A survey of Oracle users found that only one in ten was likely to continue to stay with Oracle Java, in part as a result of the licensing changes.

AI

AI Therapy Bots Are Conducting 'Illegal Behavior', Digital Rights Organizations Say 66

An anonymous reader quotes a report from 404 Media: Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures." The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations. "These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and "around sixty additional therapy-related 'characters' that you can chat with at any time." As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool," with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation. [...]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked. The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. "Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says. [...] The complaint also takes issue with confidentiality promised by the chatbots that isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says. "The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential -- they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else."
Security

Cybercriminals Are Hiding Malicious Web Traffic in Plain Sight (wired.com) 34

Cybercriminals have been increasingly turning to "residential proxy" services over the past two to three years to disguise malicious web traffic as everyday online activity, according to research presented at the Sleuthcon cybercrime conference. The shift represents a response to law enforcement's growing success in targeting traditional "bulletproof" hosting services, which previously allowed criminals to maintain anonymous web infrastructure.

Residential proxies route traffic through decentralized networks running on consumer devices like old Android phones and low-end laptops, providing real IP addresses assigned to homes and offices. This approach makes malicious activity extremely difficult to detect because it appears to originate from trusted consumer locations rather than suspicious server farms. The technique creates particular challenges when attackers appear to come from the same residential IP ranges as employees of the organizations they are targeting.
Encryption

Lawmakers Vote To Stop NYPD's Attempt To Encrypt Their Radios (nypost.com) 74

alternative_right shares a report: New York state lawmakers voted Thursday to stop the NYPD from hiding its radio communications from the public, with the bill expected to head to Gov. Kathy Hochul's desk. The "Keep Police Radio Public Act" passed both the state Senate and state Assembly, with a sponsor of the legislation arguing the proposal strikes the "proper balance" in the battle between transparency and sensitive information.

"Preserving access to police radio is critical for a free press and to preserve the freedoms and protections afforded by the public availability of this information," state Sen. Michael Gianaris (D-Queens) said in a statement. "As encrypted radio usage grows, my proposal strikes the proper balance between legitimate law enforcement needs and the rights and interests of New Yorkers."

The bill, which was sponsored in the Assembly by lawmaker Karines Reyes (D-Bronx), is meant to make real-time police radio communications accessible to emergency services organizations and reporters. "Sensitive information" would still be kept private, according to the legislation.
In late 2023, the NYPD began encrypting its radio communications to increase officer safety and "protect the privacy interests of victims and witnesses." However, it led to outcry from press advocates and local officials concerned about reduced transparency and limited access to real-time information.

A bill to address the issue has passed both chambers of New York's legislature, but Governor Hochul has not yet indicated whether she will sign it.
The Almighty Buck

Consumer Group Accuses Shein of Manipulating Shoppers With 'Dark Patterns' (www.cbc.ca) 14

An anonymous reader quotes a report from CBC: A consumer organization filed a complaint with the European Commission on Thursday against online fast-fashion retailer Shein over its use of "dark patterns," which are tactics designed to make people buy more on its app and website. Pop-ups urging customers not to leave the app or risk losing promotions, countdown timers that create time pressure to complete a purchase and the infinite scroll on its app are among the methods Shein uses that could be considered "aggressive commercial practices," wrote BEUC, a pan-European consumer group, in a report.

The BEUC also detailed Shein's use of frequent notifications, with one phone receiving 12 notifications from the app in a single day. "For fast fashion you need to have volume, you need to have mass consumption, and these dark patterns are designed to stimulate mass consumption," said Agustin Reyna, director general of BEUC, in an interview. "For us, to be satisfactory they need to get rid of these dark patterns, but the question is whether they will have enough incentive to do so, knowing the potential impact it can have on the volume of purchases." [...]

The BEUC also targeted the online discount platform Temu, a Shein rival, in a previous complaint. Both platforms have surged in popularity in Europe, partly helped by apps that encourage shoppers to engage with games and stand to win discounts and free products. [...] The BEUC noted that dark patterns are widely used by mass-market clothing retailers and called on the consumer protection network to include other retailers in its investigation. It said 25 of its member organizations in 21 countries, including France, Germany and Spain, joined in the grievance filed with the commission and with the European consumer protection network.
Temu and Shein have their own issues in the United States. Following the recent closure of the de minimis loophole, use of the two Chinese platforms has slowed significantly. "Temu's U.S. daily active users (DAUs) dropped 52% in May versus March, before Trump's tariffs were announced, while those at rival Shein were down 25%," reports CNBC, citing data from market intelligence firm Sensor Tower.

"The declines were also reflected in both platforms' Apple App Store rankings. Temu averaged a rank of 132 in May 2025, down from an average top 3 ranking a year ago, while Shein averaged a rank of 60 last month versus a top 10 ranking the year prior, the data showed."
Microsoft

Microsoft's LinkedIn Chief Is Now Running Office (theverge.com) 16

Announced in an internal memo from Microsoft CEO Satya Nadella, LinkedIn CEO Ryan Roslansky has been appointed to also lead the Office, Outlook, and Microsoft 365 Copilot teams as part of an internal AI reorganization. Roslansky will report to Rajesh Jha for Office while continuing to run LinkedIn independently under Nadella. The Verge reports: "LinkedIn remains a top priority and will continue to operate as an independent subsidiary," says Nadella in his memo. "This move brings us closer to the original vision we laid out nine years ago with the LinkedIn acquisition: connecting the world's economic graph with the Microsoft Graph. And I look forward to how Ryan will bring his product ethos and leadership to Experiences and Devices." Sumit Chauhan and Gaurav Sareen, senior executives in the Office and Microsoft 365 teams, will remain on the Experiences and Devices leadership team, but along with their teams they'll join Jon Friedman and the UX team to work directly for Roslansky.

Charles Lamanna and his BIC team are also moving to report to Rajesh Jha as part of an AI shakeup. "Charles has consistently kept us focused on what it takes to win in business applications and the agent layer, and I look forward to the impact he and his team will have in Experiences and Devices," says Nadella. In a separate memo, Lamanna also announced that starting July 2nd Lili Cheng will take on the newly expanded role of CTO of the BIC team. Dan Lewis is also taking on the role of corporate vice president of Copilot Studio. "We are poised to reinvent every role and every business process, and start to reimagine organizations as composed of people and agents," says Lamanna in an internal memo.

Both the Lamanna and Roslansky moves are very interesting, as the business Copilot team and Microsoft 365 Copilot team have been in separate parts of Microsoft's sprawling AI and cloud organization up until this point. This has meant that no single team really owns Copilot end to end inside Microsoft, but the separate leaders of the Microsoft 365 Copilot and business Copilot teams now both report to Rajesh Jha. The consumer Copilot will still be run by Microsoft AI CEO Mustafa Suleyman.

The Courts

OpenAI Slams Court Order To Save All ChatGPT Logs, Including Deleted Chats (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs -- including deleted chats and sensitive chats logged through its API business offering -- after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),'" OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated, until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. They warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is no evidence beyond speculation yet supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"
Businesses

Fake IT Support Calls Hit 20 Orgs, End in Stolen Salesforce Data and Extortion, Google Warns (theregister.com) 8

A group of financially motivated cyberscammers who specialize in Scattered-Spider-like fake IT support phone calls managed to trick employees at about 20 organizations into installing a modified version of Salesforce's Data Loader that allows the criminals to steal sensitive data. From a report: Google Threat Intelligence Group (GTIG) tracks this crew as UNC6040, and in research published today said they specialize in voice-phishing campaigns targeting Salesforce instances for large-scale data theft and extortion.

These attacks began around the beginning of the year, GTIG principal threat analyst Austin Larsen told The Register. "Our current assessment indicates that a limited number of organizations were affected as part of this campaign, approximately 20," he said. "We've seen UNC6040 targeting hospitality, retail, education and various other sectors in the Americas and Europe." The criminals are adept at impersonating IT support personnel, tricking employees at English-speaking branches of multinational corporations into downloading a modified version of Data Loader, a Salesforce app that allows users to export and update large amounts of data.

Privacy

Meta and Yandex Are De-Anonymizing Android Users' Web Browsing Identifiers (github.io) 77

"It appears as though Meta (aka: Facebook's parent company) and Yandex have found a way to sidestep the Android Sandbox," writes Slashdot reader TheWho79. Researchers disclose the novel tracking method in a report: We found that native Android apps -- including Facebook, Instagram, and several Yandex apps including Maps and Browser -- silently listen on fixed local ports for tracking purposes.

These native Android apps receive browsers' metadata, cookies and commands from the Meta Pixel and Yandex Metrica scripts embedded on thousands of web sites. These JavaScripts load on users' mobile browsers and silently connect with native apps running on the same device through localhost sockets. Because native apps can programmatically access device identifiers like the Android Advertising ID (AAID), or handle user identities as in the case of Meta's apps, this method effectively allows these organizations to link mobile browsing sessions and web cookies to user identities, de-anonymizing users who visit sites embedding their scripts.

This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android's permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users' web activity.

While there are subtle differences in the way Meta and Yandex bridge web and mobile contexts and identifiers, both of them essentially misuse the unvetted access to localhost sockets. The Android OS allows any installed app with the INTERNET permission to open a listening socket on the loopback interface (127.0.0.1). Browsers running on the same device also access this interface without user consent or platform mediation. This allows JavaScript embedded on web pages to communicate with native Android apps and share identifiers and browsing habits, bridging ephemeral web identifiers to long-lived mobile app IDs using standard Web APIs.
This technique circumvents privacy protections like Incognito Mode, cookie deletion, and Android's permission model, with Meta Pixel and Yandex Metrica scripts silently communicating with apps across over 6 million websites combined.

Following public disclosure, Meta ceased using this method on June 3, 2025. Browser vendors like Chrome, Brave, Firefox, and DuckDuckGo have implemented or are developing mitigations, but a full resolution may require OS-level changes and stricter enforcement of platform policies to prevent further abuse.
Windows

Microsoft Is Opening Windows Update To Third-Party Apps (theregister.com) 91

Microsoft is previewing a new Windows Update orchestration platform that lets third-party apps schedule and manage updates alongside system updates, "aiming to centralize update scheduling across Windows 11 devices," reports The Register. From the report: On Tuesday, Redmond announced it's allowing a select group of developers and product teams to hook into the Windows 11 update framework. The system doesn't push updates itself but allows apps to register their own update logic via WinRT APIs and PowerShell, enabling centralized scheduling, logging, and policy enforcement. "Updates across the Windows ecosystem can feel like a fragmented experience," wrote Angie Chen, a product manager at the Borg, in a blog post. "To solve this, we're building a vision for a unified, intelligent update orchestration platform capable of supporting any update (apps, drivers, etc.) to be orchestrated alongside Windows updates."

As with other Windows updates, the end user or admin will be able to benefit from intelligent scheduling, with updates deferred based on user activity, system performance, AC power status, and other environmental factors. For example, updates may install when the device is idle or plugged in, to minimize disruption. All update actions will be logged and surfaced through a unified diagnostic system, helping streamline troubleshooting. Microsoft says the platform will support MSIX/APPX apps, as well as Win32 apps that include custom installation logic, provided developers integrate with the offered Windows Runtime (WinRT) APIs and PowerShell commands. At the moment, the orchestration platform is available only as a private preview. Developers must contact unifiedorchestrator@service.microsoft.com to request access. Redmond is taking a cautious approach, given the risk of update conflicts, but may broaden availability depending on how the preview performs.

Meanwhile, Windows Backup for Organizations, first unveiled at Microsoft Ignite in November 2024, has entered limited public preview. Redmond touts the service as a way to back up Windows 10 and 11 devices and restore them with the same settings in place. It's saying it'll be a big help in migrating systems to the more recent operating systems after Windows 10 goes end of life in October. "With Windows Backup for Organizations, get your users up and running as quickly as possible with their familiar Windows settings already in place," Redmond wrote in a blog post on Tuesday. "It doesn't matter if they're experiencing a device reimage or reset."

Slashdot Top Deals