AI

AI's Ability To Displace Jobs is Advancing Quickly, Anthropic CEO Says (axios.com) 50

AI's ability to displace humans at various tasks is advancing quickly, Anthropic CEO Dario Amodei said at an Axios event on Wednesday. From the report: Amodei and others have previously warned of the possibility that up to half of white-collar jobs could be wiped out by AI over the next five years. The speed of that displacement could require government intervention to help support the workforce, executives said.

"As with most things, when an exponential is moving very quickly, you can't be sure," Amodei said. "I think it is likely enough to happen that we felt there was a need to warn the world about it and to speak honestly." Amodei said the government may need to step in and support people as AI quickly displaces human work.

AI

ChatGPT Will Guess Your Age and Might Require ID For Age Verification 111

OpenAI is rolling out stricter safety measures for ChatGPT after lawsuits linked the chatbot to multiple suicides. "ChatGPT will now attempt to guess a user's age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old," reports 404 Media. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff," the company said in its announcement. "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking," OpenAI CEO Sam Altman said on X. From the report: OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, stricter and more invasive security measures. In addition to attempting to guess or verify a user's age, ChatGPT will now also apply different rules to teens who are using the chatbot. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting," the announcement said. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the users' parents and if unable, will contact the authorities in case of imminent harm."

OpenAI's post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called "uncensored" models, and a political shift to the right which sees many forms of content moderation as censorship, have caused OpenAI to loosen those restrictions.

"We want users to be able to use our tools in the way that they want, within very broad bounds of safety," OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide, is summed up by its statement: "'Treat our adult users like adults' is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom."

AI

Tens of Thousands of US Emergency Workers Trained on How to Handle a Robotaxi (msn.com) 21

Last year Amazon's robotaxi service Zoox held a training session for 20 Las Vegas firefighters, police officers, and other first responders, reports the Washington Post, calling it "a new ritual for emergency workers across the country, as autonomous vehicles begin to spread beyond the handful of cities that served as initial testing grounds..." Questions that came up included: What can first responders do if the nearly 6,000-pound vehicle is blocking a roadway? (Better to pull, not push.) What happens if the vehicle loses its connectivity? (It's designed to pull over.) And can first responders manually shut off the vehicle? (Not yet, but Zoox is working on it....) The vehicles' operators claim they drive more safely than humans, but anything can happen on public roads, and first responders need to know how to intervene if a robotaxi is caught in a collision that traps passengers, catches fire or gets caught doing something that demands a traffic stop...

Alphabet's Waymo, which has more than 2,000 vehicles completing hundreds of thousands of paid trips each week across San Francisco and Silicon Valley, Los Angeles, Phoenix, Austin and Atlanta, has trained more than 20,000 first responders in how to interact with its vehicles, the company said. Tesla didn't respond to a request for comment on how many first responders the company has trained, but a representative from the Austin Police Department confirmed that fire, police and transit workers were trained on the company's Robotaxi before the company launched commercial service in June. Tesla, Waymo and Zoox say their vehicles can detect the lights and sirens of emergency vehicles and automatically attempt to pull over. Waymo says its vehicles can interpret first responders' hand signals....

The first responders appeared excited about the potential of the company's artificial intelligence technology to ferry visitors up and down the Vegas Strip without concern that a driver might be inebriated. They were also wary of problems that might unfold: Autonomous vehicles are electric, and when electric vehicles catch fire, they're difficult to extinguish, the firefighters said. The first responders also worried that a secondary air bag deployment could injure an emergency responder, a common concern with conventional vehicles. And if a police officer wanted to view the footage a Zoox vehicle captured on the road, would the company be willing to share it?

Turning over footage would require a subpoena, a Zoox official responded.

But "those who've been through the trainings and have seen large-scale commercial rollouts say it's difficult to anticipate all the potential issues in a specific market," the article points out.
  • Darius Luttropp, former deputy chief of operations for the San Francisco Fire Department, told the Post last year that Waymo vehicles had blocked city firefighters from leaving and entering firehouses, and also crashed into their equipment.
  • Lt. William White of the Austin Police Department told the Post that more than once Waymo vehicles failed to recognize an officer on a motorcycle with their police lights activated.

Encryption

Swiss Government Looks To Undercut Privacy Tech, Stoking Fears of Mass Surveillance (therecord.media) 31

The Swiss government could soon require service providers with more than 5,000 users to collect government-issued identification, retain subscriber data for six months and, in many cases, disable encryption. From a report: The proposal, which is not subject to parliamentary approval, has alarmed privacy and digital-freedoms advocates worldwide because of how it would destroy anonymity online, including for people located outside of Switzerland. A large number of virtual private network (VPN) companies and other privacy-preserving firms are headquartered in the country because it has historically had liberal digital privacy laws alongside its famously discreet banking ecosystem.

Proton, which offers secure and end-to-end encrypted email along with an ultra-private VPN and cloud storage, announced on July 23 that it is moving most of its physical infrastructure out of Switzerland due to the proposed law. The company is investing more than $117 million in the European Union, the announcement said, and plans to help develop a "sovereign EuroStack for the future of our home continent." Switzerland is not a member of the EU. Proton said the decision was prompted by the Swiss government's attempt to "introduce mass surveillance."

Earth

Protect Arctic From 'Dangerous' Climate Engineering, Scientists Warn 49

Dozens of polar scientists have warned that geoengineering schemes to manipulate the Arctic and Antarctic are dangerous, impractical, and risk distracting from the urgent need to cut fossil fuel emissions. The BBC reports: These polar "geoengineering" techniques aim to cool the planet in unconventional ways, such as artificially thickening sea-ice or releasing tiny, reflective particles into the atmosphere. They have gained attention as potential future tools to combat global warming, alongside cutting carbon emissions. But more than 40 researchers say they could bring "severe environmental damage" and urge countries to simply focus on reaching net zero, the only established way to limit global warming.

The scientists behind the new assessment, published in the journal Frontiers in Science, reviewed the evidence for five of the most widely discussed polar geoengineering ideas. All fail to meet basic criteria for their feasibility and potential environmental risks, they say. One such suggestion is releasing tiny, reflective particles called aerosols high into the atmosphere to cool the planet. This often attracts attention among online conspiracy theorists, who falsely claim that condensation trails in the sky -- water vapour created from aircraft jet engines -- are evidence of sinister large-scale geoengineering today. But many scientists have more legitimate concerns, including disruption to weather patterns around the world.

Those potential knock-on effects also raise the question of who decides to use it -- especially in the Arctic and Antarctic, where governance is not straightforward. If a country were to deploy geoengineering against the wishes of others, it could "increase geopolitical tensions in polar regions," according to Dr Valerie Masson-Delmotte, senior scientist at the Universite Paris Saclay in France. Another fear is that while some of the ideas may be theoretically possible, the enormous costs and time to scale-up mean they are extremely unlikely to make a difference, according to the review. [...]

A more fundamental concern is that these types of projects could create the illusion of an alternative to cutting humanity's emissions of planet-warming gases. "If they are promoted... then they are a distraction because to some people they will be a solution to the climate crisis that doesn't require decarbonising," said Prof Siegert. "Of course that would not be true and that's why we think they can be potentially damaging." Even supporters of geoengineering research agree that it is, at best, a supplement to net zero, not a substitute.

Microsoft

Microsoft Forces Workers Back To the Office (nerds.xyz) 99

BrianFagioli writes: Microsoft has decided it is time to rein in remote work. The company will soon require employees to spend at least three days per week in the office, starting with those in the Puget Sound region by February 2026. From there, the policy will spread across the United States and eventually overseas.

The Courts

Whistle-Blower Sues Meta Over Claims of WhatsApp Security Flaws (nytimes.com) 8

The former head of security for WhatsApp filed a lawsuit on Monday accusing Meta of ignoring major security and privacy flaws that put billions of the messaging app's users at risk, the latest in a string of whistle-blower allegations against the social media giant. The New York Times: In the lawsuit filed in the U.S. District Court for the Northern District of California, Attaullah Baig claimed that thousands of WhatsApp and Meta employees could gain access to sensitive user data including profile pictures, location, group memberships and contact lists. Meta, which owns WhatsApp, also failed to adequately address the hacking of more than 100,000 accounts each day and rejected his proposals for security fixes, according to the lawsuit.

Mr. Baig tried to warn Meta's top leaders, including its chief executive, Mark Zuckerberg, that users were being harmed by the security weaknesses, according to the lawsuit. In response, his managers retaliated and fired him in February, he claims. Mr. Baig, who is represented by the whistle-blower organization Psst.org and the law firm Schonbrun, Seplow, Harris, Hoffman & Zeldes, argued in the suit that the actions violated a privacy settlement Meta reached with the Federal Trade Commission in 2019, as well as securities laws that require companies to disclose risks to shareholders.

Security

First AI-Powered 'Self-Composing' Ransomware Was Actually Just a University Research Project (tomshardware.com) 6

Cybersecurity company ESET thought they'd discovered the first AI-powered ransomware in the wild, which they'd dubbed "PromptLock". But it turned out to be the work of university security researchers...

"Unlike conventional malware, the prototype only requires natural language prompts embedded in the binary," the researchers write in a research paper, calling it "Ransomware 3.0: Self-Composing and LLM-Orchestrated." Their prototype "uses the gpt-oss:20b model from OpenAI locally" (using the Ollama API) to "generate malicious Lua scripts on the fly." Tom's Hardware said that would help PromptLock evade detection: If they had to call an API on [OpenAI's] servers every time they generate one of these scripts, the jig would be up. The pitfalls of vibe coding don't really apply, either, since the scripts are running on someone else's system.
The whole thing was actually an experiment by researchers at NYU's Tandon School of Engineering. So "While it is the first to be AI-powered," the school said in an announcement, "the ransomware prototype is a proof-of-concept that is non-functional outside of the contained lab environment."

An NYU spokesperson told Tom's Hardware a Ransomware 3.0 sample was uploaded to the malware-analysis platform VirusTotal, and then picked up by the ESET researchers by mistake. But the malware does work: NYU said "a simulated malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks — mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes — across personal computers, enterprise servers, and industrial control systems." Is that worrisome? Absolutely. But there's a significant difference between academic researchers demonstrating a proof-of-concept and legitimate hackers using that same technique in real-world attacks. Now the study will likely inspire the ne'er-do-wells to adopt similar approaches, especially since it seems to be remarkably affordable.

"The economic implications reveal how AI could reshape ransomware operations," the NYU researchers said. "Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models."
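As a back-of-envelope check on that figure: at roughly $30 per million tokens (an assumed flagship-model API rate chosen to match the researchers' numbers, not a figure from the paper), 23,000 tokens does work out to about 70 cents per attack:

```python
# Back-of-envelope check of the NYU researchers' cost estimate.
# The per-million-token rate below is an assumption (a plausible
# flagship-model API price), not a figure quoted in the paper.
tokens_per_attack = 23_000
usd_per_million_tokens = 30.0  # assumed commercial API rate

cost = tokens_per_attack / 1_000_000 * usd_per_million_tokens
print(f"${cost:.2f} per complete attack execution")  # → $0.69
```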

As if that weren't enough, the researchers said that "open-source AI models eliminate these costs entirely," so ransomware operators won't even have to shell out the 70 cents needed to work with commercial LLM service providers...

"The study serves as an early warning to help defenders prepare countermeasures," NYU said in an announcement, "before bad actors adopt these AI-powered techniques."

ESET posted on Mastodon that "Nonetheless, our findings remain valid — the discovered samples represent the first known case of AI-powered ransomware."

And the ESET researcher who'd mistakenly thought the ransomware was "in the wild" had warned that looking ahead, ransomware "will likely become more sophisticated, faster spreading, and harder to detect.... This makes cybersecurity awareness, regular backups, and stronger digital hygiene more important than ever."

Supercomputing

Europe Hopes To Join Competitive AI Race With Supercomputer Jupiter (france24.com) 41

Europe on Friday inaugurated Jupiter, its first exascale supercomputer and the most powerful AI machine on the continent. Built in Germany with 24,000 Nvidia chips, the 500-million-euro system aims to close the AI gap with the US and China while also advancing climate modeling, neuroscience, and renewable energy research. France 24 reports: Based at Juelich Supercomputing Centre in western Germany, it is Europe's first "exascale" supercomputer -- meaning it will be able to perform at least one quintillion (or one billion billion) calculations per second. The United States already has three such computers, all operated by the Department of Energy. Jupiter is housed in a centre covering some 3,600 square meters (38,000 square feet) -- about half the size of a football pitch -- containing racks of processors, and packed with about 24,000 Nvidia chips, which are favored by the AI industry.
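To put "exascale" in rough perspective, here is a minimal sketch; the laptop throughput used below is an assumed ~100-gigaflop ballpark, not a benchmark of any particular machine:

```python
# "Exascale" means at least 1e18 (one quintillion) calculations per second.
# The laptop rate is an assumed ~100 gigaflops ballpark for comparison.
exa_ops = 1e18             # operations an exascale machine does in one second
laptop_ops_per_sec = 1e11  # assumed laptop throughput

days = exa_ops / laptop_ops_per_sec / 86_400  # 86,400 seconds per day
print(f"~{days:.0f} days")  # → ~116 days to match one Jupiter-second
```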

Half the 500 million euros ($580 million) to develop and run the system over the next few years comes from the European Union and the rest from Germany. Its vast computing power can be accessed by researchers across numerous fields as well as companies for purposes such as training AI models. "Jupiter is a leap forward in the performance of computing in Europe," Thomas Lippert, head of the Juelich centre, told AFP, adding that it was 20 times more powerful than any other computer in Germany. [...]

Yes, Jupiter will require on average around 11 megawatts of power, according to estimates -- equivalent to the energy used to power thousands of homes or a small industrial plant. But its operators insist that Jupiter is the most energy-efficient among the fastest computer systems in the world. It uses the latest, most energy-efficient hardware, has water-cooling systems and the waste heat that it generates will be used to heat nearby buildings, according to the Juelich centre.
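The "thousands of homes" comparison holds up under a rough assumption about average household draw; the ~1.5 kW figure below is an assumed ballpark that varies widely by country:

```python
# Sanity-check the power comparison. Average continuous household draw
# is an assumed ~1.5 kW; actual figures vary widely by country.
jupiter_watts = 11e6  # ~11 MW average, per the cited estimates
home_watts = 1.5e3    # assumed average household draw

homes = jupiter_watts / home_watts
print(f"~{homes:,.0f} homes")  # → ~7,333 homes
```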

Communications

Garmin Beats Apple to Market with Satellite-Connected Smartwatch (macrumors.com) 32

Just days before Apple's expected launch of the satellite-enabled Apple Watch Ultra 3, Garmin unveiled its Fenix 8 Pro -- the company's first smartwatch with built-in inReach satellite and cellular connectivity, SOS features, and a blindingly bright 4,500-nit microLED display. MacRumors reports: With inReach, the Fenix 8 Pro can send location check-ins and text messages over satellite using the Garmin Messenger app. Cellular connectivity is also included, so the smartwatch can make phone calls, send 30-second voice messages, and provide LiveTrack links and weather forecasts when an LTE connection is available.

LiveTrack is a feature that allows the wearer's family and friends to keep track of their location during an activity or adventure. For emergencies, there is an SOS feature that will send a message to the Garmin Response center over a satellite or cellular connection. Garmin Response will then communicate with the user, their emergency contacts, and search and rescue organizations to provide help. Garmin says that its Response team has supported over 17,000 inReach incident responses across over 150 countries.
The Fenix 8 Pro smartwatch launches September 8, with the AMOLED model starting at $1,200 and the 51mm microLED version priced at $2,000. Both require a paid inReach satellite plan beginning at $7.99 per month for full functionality.

Android

What Every Argument About Sideloading Gets Wrong (hugotunius.se) 89

Developer Hugo Tunius, writing in a blog post: Sideloading has been a hot topic for the last decade. Most recently, Google has announced further restrictions on the practice in Android. Many hundreds of comment threads have discussed these changes over the years. One point in particular is always made: "I should be able to run whatever code I want on hardware I own." I agree entirely with this point, but within the context of this discussion it's moot.

When Google restricts your ability to install certain applications they aren't constraining what you can do with the hardware you own, they are constraining what you can do using the software they provide with said hardware. It's through this control of the operating system that Google is exerting control, not at the hardware layer. You often don't have full access to the hardware either and building new operating systems to run on mobile hardware is impossible, or at least much harder than it should be. This is a separate, and I think more fruitful, point to make. Apple is a better case study than Google here. Apple's success with iOS partially derives from the tight integration of hardware and software. An iPhone without iOS is a very different product to what we understand an iPhone to be. Forcing Apple to change core tenets of iOS by legislative means would undermine what made the iPhone successful.

Medicine

Study: Young Children Diagnosed with ADHD Often Prescribed Medication Too Quickly (cbsnews.com) 198

"A new study released Friday found that young children diagnosed with attention-deficit/hyperactivity disorder, or ADHD, are often prescribed medication too quickly," reports CBS News: The study, led by Stanford Medicine and published in JAMA Network Open, examined the health records of nearly 10,000 preschool-aged children, ages 3 to 5, who were diagnosed with ADHD between 2016 and 2023... The Stanford study found that about 68% of those children who were diagnosed with ADHD were prescribed medications before age 7, most often stimulants such as Ritalin, which can help children focus their attention and regulate their emotions. The turn to medication often came quickly, according to the study. About 42% of the children who were diagnosed with ADHD were prescribed drugs within 30 days of diagnosis, the study found.

"We don't have concerns about the toxicity of the medications for 4- and 5-year-olds, but we do know that there is a high likelihood of treatment failure, because many families decide the side effects outweigh the benefits," Dr. Yair Bannett, assistant professor of pediatrics at Stanford Medicine and the lead author of the study, said in a statement. Those side effects can include irritability, aggressiveness and emotional problems, according to Bannett. "The high rate of medication prescriptions among preschool-age children with ADHD and the lack of delay between initial diagnosis and prescription require further investigation to assess the appropriateness of early medication treatment," the researchers concluded.

The study also found that the vast majority of the young children diagnosed with ADHD, about 76%, were boys.

CBS News interviewed Jamie Howard, senior clinical psychologist from the Child Mind Institute (who was not involved in the study). Howard said when treating ADHD in young children, clinical guidelines call for starting with "behavioral intervention...."

"I think that people have an association with ADHD and stimulant medication... But there is actually a lot more than that. And we want to give kids the opportunity to use these other strategies first, and then if they need medication, it can be incredibly helpful for a lot of kids."

Social Networks

Mastodon Says It Doesn't 'Have the Means' To Comply With Age Verification Laws (techcrunch.com) 67

Mastodon says it cannot comply with Mississippi's new age verification law because its decentralized software does not support age checks and the nonprofit lacks resources to enforce them. "The social nonprofit explains that Mastodon doesn't track its users, which makes it difficult to enforce such legislation," reports TechCrunch. "Nor does it want to use IP address-based blocks, as those would unfairly impact people who were traveling, it says." From the report: The statement follows a lively back-and-forth conversation earlier this week between Mastodon founder and CEO Eugen Rochko and Bluesky board member and journalist Mike Masnick. In the conversation, published on their respective social networks, Rochko claimed, "there is nobody that can decide for the fediverse to block Mississippi." (The Fediverse is the decentralized social network that includes Mastodon and other services, and is powered by the ActivityPub protocol.) "And this is why real decentralization matters," said Rochko.

Masnick pushed back, questioning why Mastodon's individual servers, like the one Rochko runs at mastodon.social, would not also be subject to the same $10,000 per user fines for noncompliance with the law. On Friday, however, the nonprofit shared a statement with TechCrunch to clarify its position, saying that while Mastodon's own servers specify a minimum age of 16 to sign up for its services, it does not "have the means to apply age verification" to its services. That is, the Mastodon software doesn't support it. The Mastodon 4.4 release in July 2025 added the ability to specify a minimum age for sign-up and other legal features for handling terms of service, partly in response to increased regulation around these areas. The new feature allows server administrators to check users' ages during sign-up, but the age-check data is not stored. That means individual server owners have to decide for themselves if they believe an age verification component is a necessary addition.

The nonprofit says Mastodon is currently unable to provide "direct or operational assistance" to the broader set of Mastodon server operators. Instead, it encourages owners of Mastodon and other Fediverse servers to make use of resources available online, such as the IFTAS library, which provides trust and safety support for volunteer social network moderators. The nonprofit also advises server admins to observe the laws of the jurisdictions where they are located and operate. Mastodon notes that it's "not tracking, or able to comment on, the policies and operations of individual servers that run Mastodon."
Bluesky echoed those comments in a blog post last Friday, saying the company doesn't have the resources to make the substantial technical changes this type of law would require.

Security

WhatsApp Fixes 'Zero-Click' Bug Used To Hack Apple Users With Spyware (techcrunch.com) 13

An anonymous reader quotes a report from TechCrunch: WhatsApp said on Friday that it fixed a security bug in its iOS and Mac apps that was being used to stealthily hack into the Apple devices of "specific targeted users." The Meta-owned messaging app giant said in its security advisory that it fixed the vulnerability, known officially as CVE-2025-55177, which was used alongside a separate flaw found in iOS and Macs, which Apple fixed last week and tracks as CVE-2025-43300.

Apple said at the time that the flaw was used in an "extremely sophisticated attack against specific targeted individuals." Now we know that dozens of WhatsApp users were targeted with this pair of flaws. Donncha O Cearbhaill, who heads Amnesty International's Security Lab, described the attack in a post on X as an "advanced spyware campaign" that targeted users over the past 90 days, or since the end of May. O Cearbhaill described the pair of bugs as a "zero-click" attack, meaning it does not require any interaction from the victim, such as clicking a link, to compromise their device.

The two bugs chained together allow an attacker to deliver a malicious exploit through WhatsApp that's capable of stealing data from the user's Apple device. Per O Cearbhaill, who posted a copy of the threat notification that WhatsApp sent to affected users, the attack was able to "compromise your device and the data it contains, including messages." It's not immediately clear who, or which spyware vendor, is behind the attacks. When reached by TechCrunch, Meta spokesperson Margarita Franklin confirmed the company detected and patched the flaw "a few weeks ago" and that the company sent "less than 200" notifications to affected WhatsApp users. The spokesperson did not say, when asked, if WhatsApp has evidence to attribute the hacks to a specific attacker or surveillance vendor.

The Courts

4chan and Kiwi Farms Sue the UK Over Its Age Verification Law (404media.co) 103

An anonymous reader quotes a report from 404 Media: 4chan and Kiwi Farms sued the United Kingdom's Office of Communications (Ofcom) over its age verification law in U.S. federal court Wednesday, fulfilling a promise they announced on August 23. In the lawsuit, 4chan and Kiwi Farms claim that threats and fines they have received from Ofcom "constitute foreign judgments that would restrict speech under U.S. law." Both entities say in the lawsuit that they are wholly based in the U.S. and that they do not have any operations in the United Kingdom and are therefore not subject to local laws. Ofcom's attempts to fine and block 4chan and Kiwi Farms, and the lawsuit against Ofcom, highlight the messiness involved with trying to restrict access to specific websites or to force companies to comply with age verification laws.

The lawsuit calls Ofcom an "industry-funded global censorship bureau." "Ofcom's ambitions are to regulate Internet communications for the entire world, regardless of where these websites are based or whether they have any connection to the UK," the lawsuit states. "On its website, Ofcom states that 'over 100,000 online services are likely to be in scope of the Online Safety Act -- from the largest social media platforms to the smallest community forum.'" [...] Ofcom began investigating 4chan over alleged violations of the Online Safety Act in June. On August 13, it announced a provisional decision and stated that 4chan had "contravened its duties" and then began to charge the site a penalty of [roughly $26,000] a day. Kiwi Farms has also been threatened with fines, the lawsuit states.
"American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail. In the face of these foreign demands, our clients have bravely chosen to assert their constitutional rights," said Preston Byrne, one of the lawyers representing 4chan and Kiwi Farms.

"We are aware of the lawsuit," an Ofcom spokesperson told 404 Media. "Under the Online Safety Act, any service that has links with the UK now has duties to protect UK users, no matter where in the world it is based. The Act does not, however, require them to protect users based anywhere else in the world."

Businesses

A Proposal to Ban Ghost Jobs (cnbc.com) 67

After losing his job in 2024, Eric Thompson spearheaded a working group to push for federal legislation banning "ghost jobs" -- openings posted with no intent to hire. The proposed Truth in Job Advertising and Accountability Act would require transparency around job postings, set limits on how long ads can remain up, and fine companies that violate the rules. CNBC reports: "There's nothing illegal about posting a job, currently, and never filling it," says Thompson, a network engineering leader in Warrenton, Virginia. Not to mention, it's "really hard to prove, and so that's one of the reasons that legally, it's been kind of this gray area." As Thompson researched more into the phenomenon, he connected with former colleagues and professional connections across the country experiencing the same thing. Together, the eight of them decided to form the TJAAA working group to spearhead efforts for federal legislation to officially ban businesses from posting ghost jobs.

In May, the group drafted its first proposal: The TJAAA aims to require that all public job listings include information such as:
- The intended hire and start dates
- Whether it's a new role or backfill
- If it's being offered internally with preference to current employees
- The number of times the position has been posted in the last two years, and other factors, according to the draft language.

It also sets guidelines for how long a posting may remain up (no more than 90 calendar days) and how long the submission period must stay open (at least four calendar days) before applications can be reviewed. The proposed legislation applies to businesses with more than 50 employees, and violators can be fined a minimum of $2,500 for each infraction. The proposal provides a framework at the federal level, Thompson says, because state-level policies won't apply to employers who post listings across multiple states, or who use third-party platforms that operate beyond state borders.

Android

Google To Require Identity Verification for All Android App Developers by 2027 (androidauthority.com) 97

Google will require identity verification for all Android app developers, including those distributing apps outside the Play Store, starting September 2026 in Brazil, Indonesia, Singapore, and Thailand before expanding globally through 2027. Developers must register through a new Android Developer Console beginning March 2026. The requirement applies to certified Android devices running Google Mobile Services. Google cited malware prevention as the primary motivation, noting sideloaded apps contain 50 times more malware than Play Store apps.

Hobbyist and student developers will receive separate account types. Developer information submitted to Google will not be displayed to users.

Social Networks

Bluesky Blocks Mississippi Over Age Verification Law (techcrunch.com) 71

People in Mississippi no longer have access to Bluesky. "If you access Bluesky from a Mississippi IP address, you'll see a message explaining why the app isn't available," announced a Bluesky blog post Friday.

The reason is a new Mississippi law that "requires all users to verify their ages before using common social media sites ranging from Facebook to Nextdoor," noted NPR. Bluesky wrote that their block "will remain in place while the courts decide whether the law will stand":

[U]nder the law, we would need to verify every user's age and obtain parental consent for anyone under 18. The potential penalties for non-compliance are substantial — up to $10,000 per user. Building the required verification systems, parental consent workflows, and compliance infrastructure would require significant resources that our small team is currently unable to spare.
Bluesky also notes that the law "requires collecting and storing sensitive personal information from all users...not just those accessing age-restricted content" — and that this information would include "detailed tracking of minors."

TechCrunch notes that even blocking Mississippi has created some problems: Some Bluesky users outside Mississippi subsequently reported issues accessing the service due to their cell providers routing traffic through servers in the state, with CTO Paul Frazee responding Saturday that the company was "working [to] deploy an update to our location detection that we hope will solve some inaccuracies." The company's blog post notes that its decision only applies to the Bluesky app built on the AT Protocol. Other apps may approach the decision differently.
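The misrouting issue above comes down to IP geolocation, which is inherently approximate: a carrier gateway located in Mississippi can make an out-of-state user look local. A minimal sketch of how a region block of this kind might work (the lookup function, region codes, and addresses below are hypothetical illustrations, not Bluesky's actual implementation):

```python
# Hypothetical sketch of an IP-based region block, similar in spirit to
# Bluesky's Mississippi block. The geolocation lookup is stubbed out with a
# dict: real services resolve an IP to a region via a GeoIP database, and
# that mapping can be wrong when carriers route traffic through other states.

BLOCKED_REGIONS = {"US-MS"}  # ISO 3166-2 code for Mississippi

def lookup_region(ip: str, geo_db: dict) -> str:
    """Stand-in for a GeoIP lookup; returns a region code or 'UNKNOWN'."""
    return geo_db.get(ip, "UNKNOWN")

def handle_request(ip: str, geo_db: dict) -> tuple[int, str]:
    """Return an HTTP-style (status, body) pair for a client IP."""
    region = lookup_region(ip, geo_db)
    if region in BLOCKED_REGIONS:
        # 451 "Unavailable For Legal Reasons" (RFC 7725) is the status code
        # defined for exactly this situation.
        return 451, "Bluesky is not available in your region."
    return 200, "OK"

# A carrier gateway in Mississippi makes an out-of-state user appear local:
geo_db = {"203.0.113.5": "US-MS", "198.51.100.7": "US-TN"}
```

Because the block keys on the resolved region rather than the user's real location, any inaccuracy in the GeoIP data directly produces the false positives Frazee described.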
Interestingly, the law had been immediately challenged by NetChoice (a trade association of major tech companies). A District Court agreed and blocked the law from going into effect while the challenge proceeded, but an Appeals Court then lifted that block. A final appeal to the U.S. Supreme Court was unsuccessful, although Justice Kavanaugh's concurrence suggests the law could still be overturned later: "To be clear, NetChoice has, in my view, demonstrated that it is likely to succeed on the merits — namely, that enforcement of the Mississippi law would likely violate its members' First Amendment rights under this Court's precedents... [U]nder this Court's case law as it currently stands, the Mississippi law is likely unconstitutional. Nonetheless, because NetChoice has not sufficiently demonstrated that the balance of harms and equities favors it at this time, I concur in the Court's denial of the application for interim relief."
Social Networks

Bluesky Blocks Service In Mississippi Over Age Assurance Law (techcrunch.com) 72

Bluesky has blocked access to its service in Mississippi rather than comply with a new state law requiring age verification for all social media users. TechCrunch reports: In a blog post published on Friday, the company explains that, as a small team, it doesn't have the resources to make the substantial technical changes this type of law would require, and it raised concerns about the law's broad scope and privacy implications. Mississippi's HB 1126 requires platforms to introduce age verification for all users before they can access social networks like Bluesky. On Thursday, the U.S. Supreme Court declined an emergency application that would have prevented the law from going into effect while the legal challenges against it played out in the courts. As a result, Bluesky had to decide what it would do about compliance.

Instead of requiring age verification only before users can access age-restricted content, this law requires age verification of all users. That means Bluesky would have to verify every user's age and obtain parental consent for anyone under 18. The company notes that the potential penalties for noncompliance are hefty, too -- up to $10,000 per user. Bluesky also stresses that the law goes beyond its stated child-safety aims and would create "significant barriers that limit free speech and disproportionately harm smaller platforms and emerging technologies." To comply, Bluesky would have to collect and store sensitive information from all its users, in addition to performing detailed tracking of minors. This is different from how it's expected to comply with other age verification laws, like the U.K.'s Online Safety Act (OSA), which only requires age checks for certain content and features.

Mississippi's law blocks anyone from using the site unless they provide their personal and sensitive information. The company notes that its decision only applies to the Bluesky app built on the AT Protocol. Other apps may approach the decision differently.

Google

Google TV and Android TV Apps Must Support 64-bit Starting August 2026 (nerds.xyz) 22

BrianFagioli writes: Google is preparing to bring its television platforms in line with the rest of Android. Starting August 1, 2026, both Google TV and Android TV will require that app updates containing native code include 64-bit support. The move follows similar requirements for phones and tablets, and it paves the way for upcoming 64-bit TV devices.
