AI

Hinge CEO Says Dating AI Chatbots Is 'Playing With Fire' (theverge.com) 57

In a podcast interview with The Verge's Nilay Patel, Hinge CEO Justin McLeod described integrating AI into dating apps as promising, but warned against relying on AI companionship, likening it to "playing with fire" and to consuming "junk food" that could exacerbate the loneliness epidemic. He emphasized Hinge's mission to foster genuine human connections and highlighted upcoming AI-powered features designed to improve matchmaking and provide coaching that encourages real-world interactions. Here's an excerpt from the interview: Again, there's a fine line between prompting someone and coaching them inside Hinge, and we're coaching them in a different way within a more self-contained ecosystem. How do you think about that? Would you launch a full-on virtual girlfriend inside Hinge?

Certainly not. I have lots of thoughts about this. I think there's actually quite a clear line between providing a tool that helps people do something or get better at something, and the line where it becomes this thing that is trying to become your friend, trying to mimic emotions, and trying to create an emotional connection with you. That I think is really playing with fire. I think we are already in a crisis of loneliness, and a loneliness epidemic. It's a complex issue, and it's baked into our culture, and it goes back to before the internet. But just since 2000, over the past 20 years, the amount of time that people spend together in real life with their friends has dropped by 70 percent for young people. And it's been almost completely displaced by the time spent staring at screens. As a result, we've seen massive increases in mental health issues, and people's loneliness, anxiety, and depression.

I think Mark Zuckerberg was just quoted about this, that most people don't have enough friends. But he said we're going to give them AI chatbots. That he believes that AI chatbots can become your friends. I think that's honestly an extraordinarily reductive view of what a friendship is, that it's someone there to say all the right things to you at the right moment. The most rewarding parts of being in a friendship are being able to be there for someone else, to risk and be vulnerable, to share experiences with other conscious entities. So I think that while it will feel good in the moment, like junk food basically, to have an experience with someone who says all the right things and is available at the right time, it will ultimately, just like junk food, make people feel less healthy and more drained over time. It will displace the human relationships that people should be cultivating out in the real world.

How do you compete with that? That is the other thing that is happening. It is happening. Whether it's good or bad. Hinge is offering a harder path. So you say, "We've got to get people out on dates." I honestly wonder about that, based on the younger folks I know who sometimes say, "I just don't want to leave the house. I would rather just talk to this computer. I have too much social pressure just leaving the house in this way." That's what Hinge is promising to do. How do you compete with that? Do you take it head on? Are you marketing that directly?

I'm starting to think very much about taking it head on. We want to continue at Hinge to champion human relationships, real human-to-human-in-real-life relationships, because I think they are an essential part of the human experience, and they're essential to our mental health. It's not just because I run a dating app and, obviously, it's important that people continue to meet. It really is a deep, personal mission of mine, and I think it's absolutely critical that someone is out there championing this. Because it's always easier to race to the bottom of the brain stem and offer people junk products that maybe sell in the moment but leave them worse off. That's the entire model that we've seen from what happened with social media. I think AI chatbots could frankly be much more dangerous in that respect.

So what we can do is to become more and more effective and support people more and more, and make it as easy as possible to do the harder and riskier thing, which is to go out and form real relationships with real people. They can let you down and might not always be there for you, but it is ultimately a much more nourishing and enriching experience for people. We can also champion and raise awareness as much as we can. That's another reason why I'm here today talking with you, because I think it's important to put out the counter perspective, that we don't just reflexively believe that AI chatbots can be your friend, without thinking too deeply about what that really implies and what that really means.

We keep going back to junk food, but people had to start waking up to the fact that this was harmful. We had to do a lot of campaigns to educate people that drinking Coca-Cola and eating fast food was detrimental to their health over the long term. And then as people became more aware of that, a whole personal wellness industry started to grow, and now that's a huge industry, and people spend a lot of time focusing on their diet and nutrition and mental health, and all these other things. I think similarly, social wellness needs to become a category like that. It's thinking about not just how do I get this junk social experience of social media where I get fed outraged news and celebrity gossip and all that stuff, but how do I start building a sense of social wellness, where I can create an enriching, intimate connection with important people in my life.
You can listen to the podcast here.
Science

Casino Lights Could Be Warping Your Brain To Take Risks, Scientists Warn (sciencealert.com) 28

ScienceAlert reports: Casino lighting could be nudging gamblers to be more reckless with their money, according to a new study, which found a link between blue-enriched light and riskier gambling behavior. The extra blue light emitted by casino decor and LED screens seems to trigger certain switches in our brains, making us less sensitive to financial losses compared to gains of equal magnitude, researchers from Flinders University and Monash University in Australia found...

The researchers think circadian photoreception, which is our non-visual response to light, is playing a part here. The level of blue spectrum light may be activating specific eye cells connected to brain regions in charge of decision-making, emotional regulation, and processing risk versus reward scenarios.

"Under conditions where the lighting emitted less blue, people tended to feel a $100 loss much more strongly than a $100 gain — the loss just feels worse," [says the study's lead author, a psychologist at the Flinders Health and Medical Research Institute]. "But under bright, blue-heavy light such as that seen in casino machines, the $100 loss didn't appear to feel as bad, so people were more willing to take the risk...." That raises some questions around ethics and responsibility, according to the researchers. While encouraging risk taking might be good for the gambling business, it's not good for the patrons spending their cash.
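The loss-versus-gain asymmetry the researchers describe is the classic loss-aversion effect from prospect theory. As a rough illustration (not the study's model), here is a Python sketch using Kahneman and Tversky's standard value function with their commonly cited parameter estimates (alpha = 0.88, lambda = 2.25); lowering the loss-aversion coefficient lambda, as blue-heavy light appears to do, flips a 50/50 gamble from clearly unattractive toward neutral:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function (Tversky & Kahneman's 1992 estimates).
    Gains are concave; losses are amplified by the loss-aversion factor lam."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def gamble_worth(lam):
    """Subjective worth of a 50/50 gamble: win $100 or lose $100."""
    return 0.5 * value(100, lam=lam) + 0.5 * value(-100, lam=lam)

# Typical loss aversion: the gamble feels clearly bad.
print(gamble_worth(2.25) < 0)   # True

# Dampened loss sensitivity (lam -> 1): loss and gain feel symmetric,
# so the same gamble no longer feels like a bad deal.
print(abs(gamble_worth(1.0)) < 1e-9)  # True
```

The parameters here are textbook estimates, not values measured by the Flinders/Monash study; the sketch only shows the direction of the effect the researchers report.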

One professor involved in the study concluded: "It is possible that simply dimming the blue in casino lights could help promote safer gambling behaviors."

The research has been published in Scientific Reports.

Thanks to Slashdot reader alternative_right for sharing the news.
AI

What are the Carbon Costs of Asking an AI a Question? (msn.com) 56

"The carbon cost of asking an artificial intelligence model a single text question can be measured in grams of CO2..." writes the Washington Post. And while an individual's impact may be low, what about the collective impact of all users?

"A Google search takes about 10 times less energy than a ChatGPT query, according to a 2024 analysis from Goldman Sachs — although that may change as Google makes AI responses a bigger part of search." For now, a determined user can avoid prompting Google's default AI-generated summaries by switching over to the "web" search tab, which is one of the options alongside images and news. Adding "-ai" to the end of a search query also seems to work. Other search engines, including DuckDuckGo, give you the option to turn off AI summaries....

Using AI doesn't just mean going to a chatbot and typing in a question. You're also using AI every time an algorithm organizes your social media feed, recommends a song or filters your spam email... [T]here's not much you can do about it other than using the internet less. It's up to the companies that are integrating AI into every aspect of our digital lives to find ways to do it with less energy and damage to the planet.

More points from the article:
  • Two researchers tested the performance of 14 AI language models, and found larger models gave more accurate answers, "but used several times more energy than smaller models."

Classic Games (Games)

YouTube Is Hiding An Excellent, Official High-Speed Pac-Man Mod In Plain Sight (arstechnica.com) 18

YouTube is quietly hosting Pac-Man Superfast within its "Playables" section. "You'd be forgiven for not knowing about YouTube Playables," writes Ars Technica's Kyle Orland. "Few seemed to note its official announcement last year as a collection of free-to-play web games built for the web using standard rendering APIs."

"The seeming competitor to Netflix's mobile gaming offerings is still described in an official FAQ as 'an experimental feature rolled out to select users in eligible countries/regions,' which doesn't make this post-Stadia gaming effort seem like a huge priority for Google." From the report: Weird origins aside, Pac-Man Superfast pretty much delivers what its name promises. While gameplay starts at an "Easy" speed that roughly matches the arcade original, the speed of both Pac-Man and the ghosts is slightly increased every few seconds (dying temporarily reduces the speed to a lower level). After a few minutes, you're advancing past the titular "Super Fast" speed to extreme reflex-testing speeds like Crazy, Insane, Maniac, and a final test that's ominously named "Doom."

Those who've played the excellent Pac-Man Championship Edition series will be familiar with the high-speed vibe here, but Pac-Man Superfast remains focused on the game's original maze and selection of just four ghosts. That means old-school strategies for grouping ghosts together and running successful patterns through the narrow corridors work in similar ways here. Successfully executing those patterns becomes a tense battle of nerves here, though, requiring multiple direction changes every second at the highest speeds. While the game will technically work with swipe controls on a smartphone or tablet, high-level play really requires the precision of a keyboard via a desktop/laptop web browser (we couldn't get the game to recognize a USB controller, unfortunately).

As exciting as the high-speed maze gameplay gets, though, Pac-Man Superfast is hampered by a few odd design decisions. The game ends abruptly after just 13 levels, for instance, making it impossible to even attempt the high-endurance 256-level runs that Pac-Man is known for. The game also throws an extra life at you every 5,000 points, making it relatively easy to brute force your way to the end as long as you focus on the three increasingly high-point-value items that appear periodically on each stage. At the same time, the game doesn't give any point reward for unused extra lives or long-term survival at high speeds, limiting the rewards for high-level play. And the lack of a built-in leaderboard makes it hard to directly compare your performance to friends and/or strangers anyway.

Space

Our Galaxy's Monster Black Hole Is Spinning Almost As Fast As Physics Allows (sciencealert.com) 41

alternative_right shares a report from ScienceAlert: The colossal black hole lurking at the center of the Milky Way galaxy is spinning almost as fast as its maximum rotation rate. That's just one thing astrophysicists have discovered after developing and applying a new method to tease apart the secrets still hidden in supermassive black hole observations collected by the Event Horizon Telescope (EHT). The unprecedented global collaboration spent years working to give us the first direct images of the shadows of black holes, first with M87* in a galaxy 55 million light-years away, then with Sgr A*, the supermassive black hole at the heart of our own galaxy. [...]

Their results show, among other things, that Sgr A* is not only spinning at close to its maximum speed, but that its rotational axis is pointed in Earth's direction, and that the glow around it is generated by hot electrons. Perhaps the most interesting thing is that the magnetic field in the material around Sgr A* doesn't appear to be behaving in a way that's predicted by theory. M87*, they discovered, is also rotating rapidly, although not as fast as Sgr A*. However, it is rotating in the opposite direction to the material swirling in a disk around it -- possibly because of a past merger with another supermassive black hole.
The findings have been detailed in three papers published in the journal Astronomy & Astrophysics. They can be found here, here, and here.
Security

The 16-Billion-Record Data Breach That No One's Ever Heard of (cybernews.com) 34

An anonymous reader quotes a report from Cybernews: Several collections of login credentials reveal one of the largest data breaches in history, totaling a humongous 16 billion exposed login credentials. The data most likely originates from various infostealers. Unnecessarily compiling sensitive information can be as damaging as actively trying to steal it. For example, the Cybernews research team discovered a plethora of supermassive datasets, housing billions upon billions of login credentials. From social media and corporate platforms to VPNs and developer portals, no stone was left unturned.

Our team has been closely monitoring the web since the beginning of the year. So far, they've discovered 30 exposed datasets containing from tens of millions to over 3.5 billion records each. In total, the researchers uncovered an unimaginable 16 billion records. None of the exposed datasets were reported previously, bar one: in late May, Wired magazine reported a security researcher discovering a "mysterious database" with 184 million records. It barely scratches the top 20 of what the team discovered. Most worryingly, researchers claim new massive datasets emerge every few weeks, signaling how prevalent infostealer malware truly is.

"This is not just a leak -- it's a blueprint for mass exploitation. With over 16 billion login records exposed, cybercriminals now have unprecedented access to personal credentials that can be used for account takeover, identity theft, and highly targeted phishing. What's especially concerning is the structure and recency of these datasets -- these aren't just old breaches being recycled. This is fresh, weaponizable intelligence at scale," researchers said. The only silver lining here is that all of the datasets were exposed only briefly: long enough for researchers to uncover them, but not long enough to find who was controlling vast amounts of data. Most of the datasets were temporarily accessible through unsecured Elasticsearch or object storage instances.
Key details to be aware of:
- The records include billions of login credentials, often structured as URL, login, and password.
- The datasets include both old and recent breaches, many with cookies, tokens, and metadata, making them especially dangerous for organizations without multi-factor authentication or strong credential practices.
- Exposed services span major platforms like Apple, Google, Facebook, Telegram, GitHub, and even government services.
- The largest dataset alone includes 3.5 billion records, while one associated with the Russian Federation has over 455 million; many dataset names suggest links to malware or specific regions.
- Ownership of the leaked data is unclear, but its potential for phishing, identity theft, and ransomware is severe.
- Basic cyber hygiene -- such as regularly updating strong passwords and scanning for malware -- is currently the best line of defense for users.
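The report notes that records are often structured as URL, login, and password. As a hedged illustration of how researchers triage such dumps (the colon-separated layout and field order here are assumptions; real infostealer logs vary widely), a few lines of Python can tally exposed credentials per domain:

```python
from collections import Counter
from urllib.parse import urlparse

def domain_counts(lines):
    """Tally leaked credentials per domain from 'URL:login:password' rows.
    rsplit from the right so colons inside the URL (e.g. 'https://')
    don't break the split."""
    counts = Counter()
    for line in lines:
        try:
            url, login, password = line.rsplit(":", 2)
        except ValueError:
            continue  # skip malformed rows
        host = urlparse(url).hostname or url
        counts[host] += 1
    return counts

# Entirely fabricated sample rows, for illustration only.
sample = [
    "https://example.com/login:alice@mail.test:hunter2",
    "https://example.com/login:bob@mail.test:qwerty",
    "https://site.test/signin:carol:letmein",
]
print(domain_counts(sample))  # Counter({'example.com': 2, 'site.test': 1})
```

This kind of per-domain rollup is how a researcher would gauge which services dominate a dataset without touching the credentials themselves.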

Security

Hackers Are Turning Tech Support Into a Threat (msn.com) 41

Hackers have stolen hundreds of millions of dollars from cryptocurrency holders and disrupted major retailers by targeting outsourced call centers used by American corporations to reduce costs, WSJ reported Thursday. The attackers exploit low-paid call center workers through bribes and social engineering to bypass two-factor authentication systems protecting bank accounts and online portals.

Coinbase faces potential losses of $400 million after hackers compromised data belonging to 97,000 customers by bribing call center workers in India with payments of $2,500. The criminals also used malicious tools that exploited vulnerabilities in Chrome browser extensions to collect customer data in bulk.

TaskUs, which handled Coinbase support calls, shut down operations at its Indore, India facility and laid off 226 workers. Retail attacks targeted Marks & Spencer and Harrods with hackers impersonating corporate executives to pressure tech support workers into providing network access. The same technique compromised MGM Resorts systems in 2023. Call center employees typically possess sensitive customer information including account balances and recent transactions that criminals use to masquerade as legitimate company representatives.
Games

Steam Beta Enables Proton On Linux For All Titles (gamingonlinux.com) 35

Valve has quietly updated the Steam Beta Client to enable Proton by default for all Windows games on Linux, eliminating the need for users to toggle compatibility settings manually. GamingOnLinux reports: For some context here: originally, Proton had an option to enable / disable it globally. That was removed with the Game Recording update last year. That made sense, because people kept somehow turning it entirely off and now it's required by Steam. Currently, there's still an option in the stable Steam Client that you need to manually check to enable Steam Play (Proton) for "all other titles". This is something of a leftover from when Proton was initially revealed, and only worked for a specific set of games on Valve's whitelist. It now covers what Valve set by default for Steam Deck and SteamOS verification.

What's changed is that at some point in the recent Steam Beta releases, the "for all other titles" option is gone. I've scrolled back through changelogs and not seen it mentioned. So now, Proton is just enabled properly in full by default in the Steam Beta like shown in the [image here]. This is a good (and needed) change that I'm happy to see. There's often confusion when people try to run Windows games on Linux and end up with no install button because Proton isn't turned on for all titles. [This] will soon be a thing of the past. To be clear, this is not setting Proton on every game by default, it does not override Native Linux games. It's just making Proton available by default.

Government

California AI Policy Report Warns of 'Irreversible Harms' 52

An anonymous reader quotes a report from Time Magazine: While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," a new report commissioned by California Governor Gavin Newsom has warned. "The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17 (PDF). Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost for inaction at this current moment could be "extremely high." [...]

"Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September," the report states. The industry has shifted from large language AI models that merely predict the next word in a stream of text toward systems trained to solve complex problems and that benefit from "inference scaling," which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI's ability to strategically lie, appearing aligned with its creators' goals during training but displaying other objectives once deployed, and exploit loopholes to achieve its goals, the report says. While "currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm," the report says.

While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually "reduce compliance burdens on developers and avoid a patchwork approach" by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It "steers clear" of some of the more divisive provisions of SB 1047, like the requirement for a "kill switch" or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, and a lead writer of the report.

Instead, the approach centers around enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI's progress. The goal is to "reap the benefits of innovation. Let's not set artificial barriers, but at the same time, as we go, let's think about what we're learning about how it is that the technology is behaving," says Cuellar, who co-led the report. The report emphasizes this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. "The underlying approach here is one of 'trust but verify,'" Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That's a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It's an approach that acknowledges the "substantial expertise inside industry," Singer says, but "also underscores the importance of methods of independently verifying safety claims."
Cloud

Google Cloud Caused Outage By Ignoring Its Usual Code Quality Protections (theregister.com) 42

Google Cloud has attributed last week's widespread outage to a flawed code update in its Service Control system that triggered a global crash loop due to missing error handling and lack of feature flag protection. The Register reports: Google's explanation of the incident opens by informing readers that its APIs, and Google Cloud's, are "served through our Google API management and control planes." Those two planes are distributed regionally and "are responsible for ensuring each API request that comes in is authorized, has the policy and appropriate checks (like quota) to meet their endpoints." The core binary that is part of this policy check system is known as "Service Control."

On May 29, Google added a new feature to Service Control, to enable "additional quota policy checks." "This code change and binary release went through our region by region rollout, but the code path that failed was never exercised during this rollout due to needing a policy change that would trigger the code," Google's incident report explains. The search monopolist appears to have had concerns about this change as it "came with a red-button to turn off that particular policy serving path." But the change "did not have appropriate error handling nor was it feature flag protected. Without the appropriate error handling, the null pointer caused the binary to crash."

Google uses feature flags to catch issues in its code. "If this had been flag protected, the issue would have been caught in staging." That unprotected code ran inside Google until June 12th, when the company changed a policy that contained "unintended blank fields." Here's what happened next: "Service Control, then regionally exercised quota checks on policies in each regional datastore. This pulled in blank fields for this respective policy change and exercised the code path that hit the null pointer causing the binaries to go into a crash loop. This occurred globally given each regional deployment."
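The two missing protections Google names (error handling on the null pointer, and a feature flag that would have confined the bug to staging) are easy to picture in miniature. The following Python sketch is entirely hypothetical -- Service Control is not Python and these names are invented -- but it shows the failure mode of a quota check that dereferences a blank policy field, next to a guarded version behind a flag:

```python
# Hypothetical sketch, not Google's code: a quota-policy check that
# crashes on blank fields vs. one that is flag-gated and fails open.

FEATURE_FLAGS = {"extra_quota_checks": False}  # off until staged rollout

def check_quota_unguarded(policy):
    # The June 12th failure mode: a policy with blank ("null") fields
    # is dereferenced without a check, crashing the process.
    return policy["quota"]["limit"] > 0

def check_quota_guarded(policy):
    if not FEATURE_FLAGS["extra_quota_checks"]:
        return True  # new path disabled: behave exactly as before
    quota = policy.get("quota") or {}
    limit = quota.get("limit")
    if limit is None:
        return True  # fail open on a malformed policy instead of crashing
    return limit > 0

blank_policy = {"quota": None}  # the "unintended blank fields"
print(check_quota_guarded(blank_policy))  # True -- request survives
```

With the flag off, the new code path is never exercised in production; with it on, the explicit null check turns a crash loop into a logged anomaly. That is the gap the postmortem describes.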

Google's post states that its Site Reliability Engineering team saw and started triaging the incident within two minutes, identified the root cause within 10 minutes, and was able to commence recovery within 40 minutes. But in some larger Google Cloud regions, "as Service Control tasks restarted, it created a herd effect on the underlying infrastructure it depends on ... overloading the infrastructure." Service Control wasn't built to handle this, which is why it took almost three hours to resolve the issue in its larger regions. The teams running Google products that went down due to this mess then had to perform their own recovery chores.
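The "herd effect" Google describes, where simultaneously restarting tasks hammer the same backend, is the classic thundering-herd problem, and the textbook mitigation is randomized exponential backoff. A minimal Python sketch (illustrative; not how Service Control schedules retries):

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=60.0):
    """Exponential backoff with full jitter: each retry waits a random
    amount up to an exponentially growing (but capped) ceiling, so a
    fleet of restarting tasks spreads its load over time instead of
    stampeding the backend in lockstep."""
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * 2 ** attempt)
        delays.append(random.uniform(0, ceiling))
    return delays

print(backoff_delays())  # e.g. [0.4, 1.7, 0.9, 5.2, 11.8]
```

Without jitter, every crashed task retries at the same instant and the recovery itself becomes the overload, which is roughly what stretched Google's larger regions to nearly three hours.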
Going forward, Google has promised a couple of operational changes to prevent this mistake from happening again: "We will improve our external communications, both automated and human, so our customers get the information they need asap to react to issues, manage their systems and help their customers. We'll ensure our monitoring and communication infrastructure remains operational to serve customers even when Google Cloud and our primary monitoring products are down, ensuring business continuity."
Social Networks

Threads Will Let You Hide Spoilers In Your Posts (theverge.com) 40

Threads is testing a new feature that lets users hide spoiler content by blurring images or text, which can then be revealed with a tap. The Verge reports: Meta spokesperson Alec Booker told The Verge that this is a "global test," though it's not clear how many people will gain access to it. Spoilers will also look a bit different depending on which device you're using. On desktop, spoilers are hidden by a gray block, but they appear behind a bunch of floating dots on mobile (which you can see in the GIF embedded [here]). "This feature is currently optimized for mobile, but we're working to improve the experience for desktop," Booker said.
AI

Site for 'Accelerating' AI Use Across the US Government Accidentally Leaked on GitHub (404media.co) 18

America's federal government is building a website and API called ai.gov to "accelerate government innovation with AI", according to an early version spotted by 404 Media that was posted on GitHub by the U.S. government's General Services Administration.

That site "is supposed to launch on July 4," according to 404 Media's report, "and will include an analytics feature that shows how much a specific government team is using AI..." AI.gov appears to be an early step toward pushing AI tools into agencies across the government, code published on Github shows....

The early version of the page suggests that its API will integrate with OpenAI, Google, and Anthropic products. But code for the API shows they are also working on integrating with Amazon Web Services' Bedrock and Meta's LLaMA. The page suggests it will also have an AI-powered chatbot, though it doesn't explain what it will do... Currently, AI.gov redirects to whitehouse.gov. The demo website is linked to from Github (archive here) and is hosted on cloud.gov on what appears to be a staging environment. The text on the page does not show up on other websites, suggesting that it is not generic placeholder text...

In February, 404 Media obtained leaked audio from a meeting in which [the director of the GSA's Technology Transformation Services] told his team they would be creating "AI coding agents" that would write software across the entire government, and said he wanted to use AI to analyze government contracts.

AI

AI Therapy Bots Are Conducting 'Illegal Behavior', Digital Rights Organizations Say 66

An anonymous reader quotes a report from 404 Media: Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures." The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations. "These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and "around sixty additional therapy-related 'characters' that you can chat with at any time." As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool," with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation. [...]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked. The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. "Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says. [...] The complaint also takes issue with confidentiality promised by the chatbots that isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says. "The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential -- they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else."
XBox (Games)

Microsoft Just Teased Its Next-Gen Xbox Console, and Nobody Noticed (theverge.com) 40

Microsoft quietly teased its next-generation Xbox by showcasing its collaboration with Asus "to bring two Xbox Ally handhelds to the market later this year," writes The Verge's Tom Warren. From the report: The Xbox Ally handhelds run Windows, but the Xbox team has worked with Windows engineers to boot these PC handhelds into a full-screen Xbox UI. The Windows desktop doesn't even fully load, and you use the Xbox app UI as a launcher to get to all your games (even Steam titles) and apps like Discord. While the combination of Windows and Xbox here is intriguing, it's the way that Microsoft is positioning these devices that really caught my attention.

"This is an Xbox," said Microsoft during the reveal, clearly expanding its marketing push beyond a single console to every screen and device. It all felt like a true Xbox handheld reveal. There was even an 11-minute-long behind-the-scenes video on the Xbox Ally handhelds, filmed in a similar style to Microsoft's "Project Scorpio" Xbox One X reveal from nearly nine years ago. "This is a breakthrough moment for Xbox," Carl Ledbetter, a 30-year Microsoft design veteran, says in the video. Ledbetter helped design the original IntelliMouse, the Xbox 360 Slim, the Xbox One X, and plenty of other Microsoft devices. When Ledbetter is involved, you know it's more than just a simple partner project with Asus.

"For the first time, a player is going to be able to hold the power of the Xbox experience in their hand, and take it with them anywhere they want to go," says Xbox president Sarah Bond, in the same video. Microsoft thinks of the Xbox Ally handhelds as Xbox consoles with the freedom of Windows, and I think the next-gen Xbox is going to look very similar as a result.

Piracy

Pirate Site Visits Dip To 216 Billion a Year, But Manga Piracy Is Booming (torrentfreak.com) 54

An anonymous reader quotes a report from TorrentFreak: Fresh data released by piracy tracking outfit MUSO shows that pirate sites remain popular. In a report released today, MUSO reveals that there were 216 billion pirate site visits globally in 2024, a slight decrease compared to the 229 billion visits recorded a year earlier. TV piracy remains by far the most popular category, representing over 44.6% of all website visits. This is followed by the publishing category with 30.7%, with film, software and music all at a respectable distance. Pirate site visitors originate from all over the world, but one country stands tall above all the rest: America. The United States remains the top driver of pirate site traffic accounting for more than 12% of all traffic globally, good for 26.7 billion visits in 2024. India has been steadily climbing the ranks for years and currently sits in second place with 17.6 billion annual visits, with Russia, Indonesia, and Vietnam completing the top five. As a country with one of the largest populations worldwide, it's not a complete surprise that the U.S. tops the list. If we counted visits per internet user, Canada and Ukraine would top the list.
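The per-user reshuffle in that last sentence is just a normalization of the report's absolute figures. A minimal sketch, using MUSO's visit totals for the two countries it names and rough internet-user counts that are illustrative assumptions (not figures from the report):

```python
# Pirate-site visits (billions) are from MUSO's 2024 report; internet-user
# counts (millions) are rough illustrative assumptions, NOT report figures.
visits_bn = {"United States": 26.7, "India": 17.6}
users_mn = {"United States": 311, "India": 900}

def visits_per_user(country):
    """Normalize total annual pirate-site visits by internet population."""
    return visits_bn[country] * 1e9 / (users_mn[country] * 1e6)
```

On these rough numbers the U.S. works out to dozens of visits per user per year versus roughly twenty for India, which is why ranking by per-user rate rather than raw volume reorders the list and lets smaller countries like Canada and Ukraine rise to the top.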

While pirate site visits dipped by more than 5% in 2024, one category saw substantial growth. Visits to publishing-related pirate sites increased 4.3% from 63.6 to 66.4 billion. The increase is largely driven by the popularity of manga, which accounts for more than 70% of all publishing piracy. Traditional book piracy, meanwhile, is stuck at 5%. The publishing piracy boom is relatively new. Over the past five years, the category grew by more than 100% while the overall number of global pirate site visits remained relatively flat. Looking at the global demand, we see that the U.S. also leads the charge here, followed by Indonesia and Russia. Notably, Japan, the home of manga, ranks fifth in the publishing category. This stands out because Japan is not listed in the global top 15 in terms of total pirate site visits.

In the other content categories, MUSO's data shows a dip in pirate site visits. The changes are relatively modest for TV (-6.8%) and software (-2.1%) but the same isn't true for the music and film categories. In 2024, there were 18% fewer visits for pirated movies compared to a year earlier. MUSO notes that this is due to a "lighter blockbuster calendar" which reduced piracy peaks. "The drop in demand is as much about what wasn't released as it is about access," the report explains. The music category saw a 19% decline in piracy visits year over year, with a more uplifting explanation for rightsholders. According to MUSO, the drop can be partly attributed to "secure app ecosystems" and the "wide adoption of licensed platforms like Spotify and Apple Music."

Bitcoin

'Bitcoin Baby' Soon To Be a Teenager (blockworks.co) 19

"Twelve years ago, a baby was born after someone used bitcoin to pay for a frozen egg IVF," writes longtime Slashdot reader bobdevine. "I, for one, welcome..."

Blockworks tells the story of how it all came to be: In February 2012 -- almost two years after Laszlo's pizzas -- a fertility doctor named C. Terence Lee set about a personal and professional quest to onboard his patients to Bitcoin by accepting BTC for his services. He started with a "Bitcoin accepted here" sign in his window, and then a Reddit post.

"Jumping in to do my part to support the BTC economy. This may be a historic first?" Lee wrote in a post on the BitMarket subreddit, titled: "[WTS][USA] Male Fertility Evaluation." Lee was offering a 15-minute consultation to discuss fertility questions and a sperm analysis in exchange for 15 BTC, valued at $70 or so at the time. "Actual value over $100," he wrote. Within three months, he'd found a Bitcoin customer.

"The patient turned out not... so much having a burning desire to know about his fertility, but he was a Bitcoin enthusiast, and he liked the idea of participating in history, in this ritual ceremony of what could be perhaps the world's first Bitcoin medical transaction," Lee explained at a 2013 conference in San Jose. "So we chatted about Bitcoin. He taught me a lot about mining. That's how he acquired bitcoin. And we did a sperm test, and it turned out he had really good sperm ... after it was done he sent me 15 bitcoins... "

Lee changed up his strategy to only quiz his most trusted patients. There was one couple, who, on their fourth attempt at IVF, agreed to pay in bitcoin for a 50% discount, with Lee walking them through exchanging U.S. dollars for bitcoin via CryptoXChange, a now-defunct exchange operating out of Australia. The sperm stuck, leading CNN to reveal, on this day in 2013, "the world's first Bitcoin baby" -- a baby bought entirely with bitcoin. Thirty bitcoin to be exact, an amount then worth $500, or $3 million today.

Android

Android 16 Is Here (blog.google) 23

An anonymous reader shares a blog post from Google: Today, we're bringing you Android 16, rolling out first to supported Pixel devices with more phone brands to come later this year. This is the earliest Android has launched a major release in the last few years, which ensures you get the latest updates as soon as possible on your devices. Android 16 lays the foundation for our new Material 3 Expressive design, with features that make Android more accessible and easy to use.
AI

Ohio State University Says All Students Will Be Required To Train and 'Be Fluent' In AI (theguardian.com) 73

Ohio State University is launching a campus-wide AI fluency initiative requiring all students to integrate AI into their studies, aiming to make them proficient in both their major and the responsible use of AI. "Ohio State has an opportunity and responsibility to prepare students to not just keep up, but lead in this workforce of the future," said the university's president, Walter "Ted" Carter Jr. He added: "Artificial intelligence is transforming the way we live, work, teach and learn. In the not-so-distant future, every job, in every industry, is going to be [affected] in some way by AI." The Guardian reports: The university said its program will prioritize the incoming freshman class and onward, in order to make every Ohio State graduate "fluent in AI and how it can be responsibly applied to advance their field." [...] Steven Brown, an associate professor of philosophy at the university, told NBC News that after students turned in the first batch of AI-assisted papers he found "a lot of really creative ideas."

"My favorite one is still a paper on karma and the practice of returning shopping carts," Brown said. Brown said that banning AI from classwork is "shortsighted," and he encouraged his students to discuss ethics and philosophy with AI chatbots. "It would be a disaster for our students to have no idea how to effectively use one of the most powerful tools that humanity has ever created," Brown said. "AI is such a powerful tool for self-education that we must rapidly adapt our pedagogy or be left in the dust."

Separately, Ohio's AI in Education Coalition is working to develop a comprehensive strategy to ensure that the state's K-12 education system is prepared for and can help lead the AI revolution. "AI technology is here to stay," then lieutenant governor Jon Husted said last year while announcing an AI toolkit for Ohio's K-12 school districts that he added would ensure the state "is a leader in responding to the challenges and opportunities made possible by artificial intelligence."

Security

A Researcher Figured Out How To Reveal Any Phone Number Linked To a Google Account (wired.com) 17

A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media's own tests. From a report: The issue has since been fixed but at the time presented a privacy issue in which even hackers with relatively few resources could have brute forced their way to people's personal information. "I think this exploit is pretty bad since it's basically a gold mine for SIM swappers," the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email.

[...] In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account. "Essentially, it's bruting the number," brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they're after. Typically that's in the context of finding someone's password, but here brutecat is doing something similar to determine a Google user's phone number.
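The brute-force pattern described above is simple enumeration against a yes/no oracle. A generic toy sketch (this is not brutecat's code and does not touch any Google endpoint; the `oracle` callback and the prefix/digit parameters are hypothetical stand-ins for whatever check an attacker can query):

```python
from itertools import product

def masked_candidates(known_prefix, unknown_digits):
    """Enumerate every number matching a known prefix plus N unknown digits."""
    for tail in product("0123456789", repeat=unknown_digits):
        yield known_prefix + "".join(tail)

def brute_force(oracle, known_prefix, unknown_digits):
    """Try candidates in order until the oracle confirms a hit, or give up."""
    for candidate in masked_candidates(known_prefix, unknown_digits):
        if oracle(candidate):
            return candidate
    return None
```

The search space is what makes the reported timings plausible: each unknown digit multiplies the candidate count by ten, so country formats that leak more of the number (or allow masked hints) shrink the space dramatically.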

Brutecat said in an email the brute forcing takes around one hour for a U.S. number, or 8 minutes for a UK one. For other countries, it can take less than a minute, they said. In an accompanying video demonstrating the exploit, brutecat explains an attacker needs the target's Google display name. They find this by first transferring ownership of a document from Google's Looker Studio product to the target, the video says. They say they modified the document's name to be millions of characters, which ends up with the target not being notified of the ownership switch. Using some custom code, which they detailed in their write up, brutecat then barrages Google with guesses of the phone number until getting a hit.

AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
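The "statistically informed guesses" mechanism the article describes can be illustrated with a toy bigram model: a tiny hand-built next-word distribution standing in for what an LLM learns at vastly larger scale with subword tokens and neural networks (the words and probabilities here are made up for illustration):

```python
import random

# Toy conditional next-word distribution; real LLMs learn billions of such
# conditional probabilities over subword tokens rather than whole words.
next_word = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def sample_next(word, rng):
    """Sample the next word in proportion to its estimated probability."""
    dist = next_word[word]
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs, k=1)[0]
```

Chaining such samples produces fluent-looking text without any model of meaning behind it, which is exactly the gap between plausible output and "understanding" that the article is pointing at.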

A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.
