AI

'AI' Is Coming For Your Online Gaming Servers Next (pcworld.com) 35

"Consumer PC parts aren't the only things being gobbled up by the 'AI' industry," writes PCWorld's Michael Crider. "A Starcraft-inspired strategy game is shutting down its multiplayer servers because the hosting company got bought out for 'AI.'" The game will still be playable offline for now, but the shutdown highlights the ripple effects of the AI boom on the gaming industry. Amid the ongoing hardware shortages, AI companies are basically gobbling up as much infrastructure as they can to repurpose it for AI workloads. From the report: The game in question is Stormgate, a crowdfunded revival of the real-time strategy genre that has languished in the last decade or so. The developer Frost Giant Studios told its players on Discord (spotted by PC Gamer) that it would be unable to continue multiplayer access past the end of this month. The "game server orchestration partner" was bought by an AI company -- the developer's words, not mine -- which means that the multiplayer aspects of the game will have a "planned outage."

The devs say the game will be patched for offline play, presumably including its single-player campaign mode and co-op modes, but "online modes will not be available at that point." They're hoping to bring back online play in a later update, but that'll depend on "finding a partner to support ongoing operations." That sounds like old-fashioned player-hosted games with lobbies aren't in the cards, at least not yet.

Frost Giant's server provider is Hathora, which was bought by a company called Fireworks AI last month. Fireworks describes its offerings as "open-source AI models at blazing speed, optimized for your use case, scaled globally with the Fireworks Inference Cloud." So, yeah, Hathora's infrastructure will likely be used for yet more generative "AI." And according to GamesBeat, it's planning to shut down the game service aspect of its company completely. That means Stormgate probably isn't going to be the last game affected. Hathora also provides online services for Splitgate 2, among others. I'm contacting Hathora for comment and will update this story if I receive a response.

Transportation

Uber's Deal Blitz To Stop a Robotaxi Monopoly (businessinsider.com) 17

Uber is aggressively partnering with multiple robotaxi companies to avoid a future dominated by Waymo or Tesla. The ride-hailing giant has struck deals with at least a dozen autonomous vehicle players in recent years. Just last week, it announced a $1.25 billion partnership with Rivian, with plans to deploy up to 50,000 driverless vehicles over the next decade. Business Insider reports: Uber announced three new robotaxi partnerships in the past few weeks with Zoox, Wayve-Nissan, and Rivian. In less than half a decade, the company has secured at least a dozen deals, including with WeRide, AVride, May Mobility, Momenta, Pony.AI, Wayve, Baidu's Apollo Go, Motional, and Lucid-Nuro. Still, less than a half-dozen of Uber's partners have deployed fully driverless, paid robotaxi operations, and only one, Waymo, operates in the US. Uber has a joint deployment with Waymo in Atlanta, Austin, and Phoenix, but in other cities, Waymo is a competitor.

Uber's partnership spree is less about seeking the singular, dominant player of autonomous driving. Instead, analysts told Business Insider that Uber is ensuring multiple vendors can participate in the expensive business of robotaxis -- fending off the real risk of a Waymo or Tesla scaling on its own -- and giving itself a stake in the robotaxi economy by being the aggregator of choice. "The more diversified the supplier base, the better for the network in the middle, which is Uber," Mark Mahaney, an Uber analyst for Evercore ISI, told Business Insider.

The Courts

Valve Faces Second Class-Action Lawsuit Over Loot Boxes (pcgamer.com) 110

Valve is facing a new consumer class-action lawsuit two weeks after New York sued the video game company for "letting children and adults illegally gamble" with loot boxes. The new lawsuit is similar, alleging that loot boxes in games like Counter-Strike 2, Dota 2, and Team Fortress 2 are "carefully engineered to extract money from consumers, including children, through deceptive, casino-style psychological tactics."

"We believe Valve deliberately engineered its gambling platform and profited enormously from it," Steve Berman, founder and managing partner at law firm Hagens Berman, said in a press release. "Consumers played these games for entertainment, unaware that Valve had allegedly already stacked the odds against them. We intend to hold Valve accountable and put money back in the pockets of consumers." PC Gamer reports: The system is well known to anyone who's played a Valve multiplayer game: Earn a locked loot box by playing, pay $2.50 for a key, unlock it, get a digital doohickey that's sometimes worth hundreds or even thousands of dollars but far more often is worth just a few pennies. Is that gambling? If these cases go to court, we'll find out.

The full complaint points out that the unlocking process is even designed to look like a slot machine: "Images of possible items scroll across the screen, spinning fast at first, then slowing to a stop on the player's 'prize.' Players buy and open loot boxes for the same reason people play slot machines -- the hope of a valuable payout." Loot boxes, the complaint continues, are not "incidental features" of Valve's games, but rather "a deliberate, carefully engineered revenue model." So too is the Steam Community Market, and Steam itself, which the suit claims is "deliberately designed" to enable the sale of digital items on third-party marketplaces through "trade URLs," despite Valve's terms of service prohibiting off-platform sales.

And while the debate over whether loot boxes constitute a form of gambling continues to rage, the suit claims Valve's system does indeed qualify under Washington law, which defines gambling as "staking or risking something of value upon the outcome of a contest of chance or a future contingent event not under the person's control or influence." "Valve's loot boxes satisfy every element of this definition," the lawsuit alleges. "Users stake money (the price of a key) on the outcome of a contest of chance (the random selection of a virtual item), and the items received are 'things of value' under RCW 9.46.0285 because they can be sold for real money through Valve's own marketplace and through third-party marketplaces that Valve has fostered and facilitated."
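The unboxing economics the complaint describes are easy to make concrete. Below is a minimal Monte Carlo sketch in Python; the tier odds roughly follow community-datamined case odds, while the resale values are invented for illustration (Valve publishes no official figures):

```python
import random

KEY_PRICE = 2.50  # typical key price in USD, per the report above

# Illustrative only: odds roughly follow community-datamined case odds;
# the resale values are invented. Valve publishes no official figures.
TIERS = [
    ("Mil-Spec (blue)",     0.7992,   0.05),  # (name, probability, resale $)
    ("Restricted (purple)", 0.1598,   0.50),
    ("Classified (pink)",   0.0320,   5.00),
    ("Covert (red)",        0.0064,  50.00),
    ("Knife/Gloves (gold)", 0.0026, 500.00),
]

def open_case():
    """Return the resale value of one randomly unboxed item."""
    roll, cumulative = random.random(), 0.0
    for _name, prob, value in TIERS:
        cumulative += prob
        if roll < cumulative:
            return value
    return TIERS[-1][2]  # guard against floating-point round-off

trials = 1_000_000
avg = sum(open_case() for _ in range(trials)) / trials
print(f"average item value: ${avg:.2f} vs. key price ${KEY_PRICE:.2f}")
# typical output: average item value: $1.90 vs. key price $2.50
```

With numbers anywhere in this ballpark, the average unboxing returns less than the key costs; the long-shot chance at a high-value item is precisely what the suit characterizes as a wager.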

Biotech

Human Brain Cells On a Chip Learned To Play Doom In a Week (newscientist.com) 35

Living human neurons grown on a chip at Cortical Labs learned how to play Doom in about a week. "While its performance is not up to par with humans, experts say it brings biological computers a step closer to useful real-world applications, like controlling robot arms," reports New Scientist. From the report: In 2021, the Australian company Cortical Labs used its neuron-powered computer chips to play Pong. The chips consisted of clumps of more than 800,000 living brain cells grown on top of microelectrode arrays that can both send and receive electrical signals. Researchers had to carefully train the chips to control the paddles on either side of the screen. Now, Cortical Labs has developed an interface that makes it easier to program these chips using the popular programming language Python. An independent developer, Sean Cole, then used Python to teach the chips to play Doom, which he did in around a week.
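New Scientist doesn't show what Cortical Labs' Python interface looks like, so the `NeuronChip` class below is a hypothetical stand-in, not the company's actual API. It only sketches the closed-loop pattern described in the earlier Pong work: read activity off the microelectrode array, decode an action, then feed back predictable stimulation for success and unpredictable noise for failure:

```python
import random

class NeuronChip:
    """Hypothetical stand-in for a microelectrode-array interface."""

    def read_spikes(self):
        # A real chip would return firing activity per electrode region.
        return [random.random() for _ in range(8)]

    def stimulate(self, pattern):
        # A real chip would deliver an electrical stimulus to the culture.
        pass

def decode_action(spikes):
    # Map relative activity of two electrode groups to a game action.
    return "left" if sum(spikes[:4]) > sum(spikes[4:]) else "right"

def play_one_frame(action):
    # Stub for a game binding: did the action produce a good outcome?
    return random.random() < 0.5

chip = NeuronChip()
for _ in range(1000):
    action = decode_action(chip.read_spikes())
    # Feedback scheme from the Pong work: predictable stimulation for
    # success, unpredictable noise for failure.
    chip.stimulate("predictable" if play_one_frame(action) else "noise")
```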

"Unlike the Pong work that we did a few years ago, which represented years of painstaking scientific effort, this demonstration has been done in a matter of days by someone who previously had relatively little expertise working directly with biology," says Brett Kagan of Cortical Labs. "It's this accessibility and this flexibility that makes it truly exciting."

The neuronal computer chip, which used about a quarter as many neurons as the Pong demonstration, played Doom better than a randomly firing player, but far below the performance of the best human players. However, it learnt much faster than traditional, silicon-based machine learning systems and should be able to improve its performance with newer learning algorithms, says Kagan. However, it's not useful to compare the chips with human brains, he says. "Yes, it's alive, and yes, it's biological, but really what it is being used as is a material that can process information in very special ways that we can't recreate in silicon."
Cortical Labs posted a YouTube video showing its CL1 biological computer running Doom. There's also source code available on GitHub, with additional details in a README file.
Television

TV Makers Are Taking AI Too Far (theverge.com) 53

TV manufacturers at CES 2026 in Las Vegas this week unveiled a wave of AI features that frequently consume significant screen space and take considerable time to deliver results -- all while global TV shipments declined 0.6% year over year in Q3, according to Omdia. Google demonstrated Veo generating video from a photo on a television, a process that took about two minutes to produce eight seconds of footage, The Verge writes in a column. Samsung presented a future where viewers ask their sets for sports predictions and recipes to share with kitchen displays. Hisense showed an AI agent that displays real-time stats for every soccer player on screen, a feature requiring so much space the company built a prototype 21:9 aspect ratio display to accommodate it.

Demos repeatedly showed video shrinking to make room for sports scores and information when viewers asked questions -- noticeable on 70-inch displays and likely worse on anything 50 inches or smaller. Amazon's Alexa Plus can jump to Prime Video scenes based on verbal descriptions. LG's sets switch homescreen recommendations based on voice recognition of individual family members.
AI

Furiosa's Energy-Efficient 'NPU' AI Chips Start Mass Production This Month, Challenging Nvidia (msn.com) 25

The Wall Street Journal profiles "the startup that is now one of a handful of chip makers nipping at the heels of Nvidia." Furiosa's AI chip is dubbed "RNGD" — short for renegade — and slated to start mass production this month. Valued at nearly $700 million based on its most recent fundraising, Furiosa has attracted interest from big tech firms. Last year, Meta Platforms attempted to acquire it, though the startup declined the offer. OpenAI used a Furiosa chip for a recent demonstration in Seoul. LG's AI research unit is testing the chip and said it offered "excellent real-world performance." Furiosa said it is engaged in talks with potential customers.

Nvidia's graphics processing units, or GPUs, dominated the initial push to train AI models. But companies like Furiosa are betting that for the next stage — referred to as "inference," or using AI models after they're trained — their specialty chips can be competitive. Furiosa makes chips called neural processing units, or NPUs, which are a rising class of chips designed specifically to handle the type of computing calculations underpinning AI and use less energy than GPUs. [Founder/CEO June] Paik said Furiosa's chips can provide similar performance to Nvidia's advanced GPUs with less electricity usage. That would drive down the total costs of deploying AI. The tech world, Paik says, shouldn't be so reliant on one chip maker for AI computing. "A market dominated by a single player — that's not a healthy ecosystem, is it?" Paik said...

In 2024, at Stanford's prestigious Hot Chips conference, Paik debuted Furiosa's RNGD chip as a solution for what he called "sustainable AI computing" in a keynote speech. Paik presented data showing how the chip could run the then-latest version of Meta's Llama large language model with more than twice the power efficiency of Nvidia's high-end chips. Furiosa's booth was swarmed with engineers from big tech firms, including Google, Meta and Amazon.com, wanting to see a live demo of the chip. "It was a moment where we felt we could really move forward with our chip with confidence," Paik said.

First Person Shooters (Games)

Sony Killed This Game in 2024. Three Developers Reverse-Engineered It Back to Life (aftermath.site) 19

An anonymous reader shared this post from the gaming news site Aftermath: Concord, Sony Interactive Entertainment and Firewalk Studios' Overwatch-like shooter, was live for just two weeks before it was pulled offline. Though Concord certainly had some dedicated players, it didn't have many — which is why it may be surprising to hear that a group of players are reverse-engineering the game and its servers to bring it back to life.

Publisher Sony removed Concord from stores and digital marketplaces, automatically refunded some purchases, and, later, shut down Firewalk Studios. Two hundred or so people were laid off, and any hopes of Concord's return were dashed. Poor sales -- estimated to be under 25,000 copies sold -- and low player numbers marred the release. Firewalk Studios' game director Ryan Ellis said in a blog post that pieces of the game "resonated with players," but "other aspects of the game and [Concord's] initial launch didn't land the way [Firewalk Studios] intended."

Concord wasn't a bad game, but it just didn't generate enough interest among enough players. Now, a group of three hobbyist reverse-engineers, who go by real, Red, and gwog online, are trying to make it playable again... "Sometimes there's enough of the server left in the game, that we can 'activate' that code and make the game believe it's a server," Red said. "We do pretty much always need to fill in the gaps though..." Concord used anti-tamper software to keep people from cheating, which also creates a problem for people reverse-engineering it. It's "nearly impossible" to crack, Red said, so the group didn't -- they found an exploit to "forcefully decrypt the game's code" to "restore the game and start working on servers...."

It's not open to the public, but people can sign up for future tests. Even former Firewalk Studios employees have joined the server. They're excited to see Concord come back to life, too, the developers said.

"Friday morning, a video of the playtest was posted to the Concord Reddit page," according to the article. (Though ironically by Friday night YouTube had had removed the video "due to a copyright claim by MarkScan Enforcement."
Games

Counter-Strike's Player Economy Is In a Multi-Billion Dollar Freefall (polygon.com) 66

Counter-Strike has long been known for two things: tight tactical FPS gameplay and a thriving player marketplace effectively valued at literal billions of dollars. Now, thanks to a recent update from Valve, the latter is in a downward spiral, having lost 25% of its value -- or $1.75 billion -- overnight. Polygon: First, some context. Counter-Strike is a free-to-play multiplayer shooter. As with most other F2P games, it generates revenue from selling cosmetics. They arrive in lootbox-like Cases, which are opened by Keys purchased with real-world currency. They can also be obtained through trading with other players and purchasing from Steam Community Market. Beyond Steam, unofficial third-party marketplaces for CS cosmetics have also popped up as channels for buying and selling items.

Because items are obtained at random through opening Cases, rarer items fetch the highest value on the open marketplaces. Items of lower-rarity tiers can also be traded in at volume for an item of a higher tier via trade-up contracts. Previously, Knives and Gloves could not be obtained through trade-up contracts, dramatically increasing their value as highly sought-after items. Prior to the most recent update, some Knives, like a Doppler Ruby Butterfly Knife, could fetch around $20,000 on third-party storefronts like CSFloat.

Following Valve's Oct. 22 update to Counter-Strike, items of the second-highest tier, Covert (Red), can now be traded up and turned into Knives and Gloves. Essentially, this means that a previously extremely rare and highly sought-after class of cosmetics is going to be much more obtainable, reducing the value of Knives and Gloves on the open marketplace. And this is where the market descends into freefall. Now, that Butterfly Knife mentioned above? It's going for around $12,000, as people are essentially dumping their stock, with 15 sold over the past 16 hours at the time of this writing.
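As a sanity check on the figures above: if the reported $1.75 billion loss corresponds to a 25% drop, the implied total market value follows directly (Python, using only the numbers in the report):

```python
loss = 1.75e9      # reported overnight loss, USD
fraction = 0.25    # reported share of total value wiped out

before = loss / fraction
after = before - loss
print(f"implied value: ${before / 1e9:.2f}B before, ${after / 1e9:.2f}B after")
# implied value: $7.00B before, $5.25B after
```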

Robotics

Scientists Built a Badminton-Playing Robot With AI-Powered Skills (arstechnica.com) 10

An anonymous reader quotes a report from Ars Technica: The robot built by [Yuntao Ma and his team at ETH Zurich] was called ANYmal and resembled a miniature giraffe that plays badminton by holding a racket in its teeth. It was a quadruped platform developed by ANYbotics, an ETH Zurich spinoff company that mainly builds robots for the oil and gas industries. "It was an industry-grade robot," Ma said. The robot had elastic actuators in its legs, weighed roughly 50 kilograms, and was half a meter wide and under a meter long. On top of the robot, Ma's team fitted an arm with several degrees of freedom produced by another ETH Zurich spinoff called Duatic. This is what would hold and swing a badminton racket. Shuttlecock tracking and sensing the environment were done with a stereoscopic camera. "We've been working to integrate the hardware for five years," Ma said.

Along with the hardware, his team was also working on the robot's brain. State-of-the-art robots usually use model-based control optimization, a time-consuming, sophisticated approach that relies on a mathematical model of the robot's dynamics and environment. "In recent years, though, the approach based on reinforcement learning algorithms became more popular," Ma told Ars. "Instead of building advanced models, we simulated the robot in a simulated world and let it learn to move on its own." In ANYmal's case, this simulated world was a badminton court where its digital alter ego was chasing after shuttlecocks with a racket. The training was divided into repeatable units, each of which required that the robot predict the shuttlecock's trajectory and hit it with a racket six times in a row. During this training, like a true sportsman, the robot also got to know its physical limits and learned to work around them.
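The article stays high-level, so the toy environment below is an invented illustration, not ETH Zurich's code. It mimics the training unit Ma describes -- an episode of six consecutive returns, with noisy shuttlecock observations standing in for camera error -- and a real setup would hand an environment like this to a reinforcement-learning algorithm such as PPO rather than the placeholder policy used here:

```python
import random

class BadmintonSim:
    """Toy simulated court; dynamics and rewards are invented."""

    def reset(self):
        self.hits = 0
        return self._observe()

    def _observe(self):
        # Noisy shuttlecock position, mimicking camera tracking error.
        return random.uniform(-1, 1) + random.gauss(0, 0.05)

    def step(self, swing_pos):
        # Reward swings that land close to the shuttle; an episode is six
        # consecutive returns, matching the training units described above.
        error = abs(swing_pos - self._observe())
        reward = max(0.0, 1.0 - error)
        self.hits += 1
        done = self.hits >= 6 or error > 0.5  # a badly missed return ends it
        return self._observe(), reward, done

env = BadmintonSim()
obs, total, done = env.reset(), 0.0, False
while not done:
    action = obs  # placeholder policy: swing where the shuttle was last seen
    obs, reward, done = env.step(action)
    total += reward
print(f"episode return: {total:.2f}")
```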

The idea behind training the control algorithms was to develop visuo-motor skills similar to those of human badminton players. The robot was supposed to move around the court, anticipating where the shuttlecock might go next and positioning its whole body, using all available degrees of freedom, for a swing that would mean a good return. This is why balancing perception and movement played such an important role. The training procedure included a perception model based on real camera data, which taught the robot to keep the shuttlecock in its field of view while accounting for the noise and resulting object-tracking errors.

Once the training was done, the robot learned to position itself on the court. It figured out that the best strategy after a successful return is to move back to the center and toward the backline, which is something human players do. It even came up with a trick where it stood on its hind legs to see the incoming shuttlecock better. It also learned fall avoidance and determined how much risk was reasonable to take given its limited speed. The robot did not attempt impossible plays that would create the potential for serious damage -- it was committed, but not suicidal. But when it finally played humans, it turned out ANYmal, as a badminton player, was amateur at best.
The findings have been published in the journal Science Robotics.

You can watch a video of the four-legged robot playing badminton on YouTube.
AI

OpenAI's o3 Model Beats Master-Level Geoguessr Player 32

In a blog post yesterday, Master I-ranked human GeoGuessr player Sam Patterson said that OpenAI's o3 model outscored him in a head-to-head match, "correctly identifying all five countries and twice landing within a few hundred meters." Geoguessing is a game -- most popularly known through the platform GeoGuessr -- where players are dropped into a random location in Google Street View and must figure out where in the world they are using only visual clues from the environment. With the release of OpenAI's newest models, o3 and o4-mini, ChatGPT now does a surprisingly good job of analyzing uploaded images to determine their locations using nothing but subtle visual clues.

"Even when I embedded fake GPS coordinates in the image EXIF, the model ignored the spoof and still pinpointed the real locations, showing its performance comes from visual reasoning and on-the-fly web sleuthing -- not hidden metadata," says Patterson. From the post: I notice that it often does a lot of unnecessary and repetitive cropping, and will sometimes spend way too much time on something unimportant. A human is very good at knowing what matters, and o3 is less knowledgeable about what things it should focus on. It got distracted by advertising multiple times. However, most of what it says about things like signs and road lines appears to be accurate, or at least close enough to truth that they meaningfully add up. Given the end result of these excellent guesses, it seems to arrive at the guesses from that information.

If it's using other information to arrive at the guess, then it's not metadata from the files, but instead web search. It seems likely that in the Austria round, the web search was meaningful, since it mentioned a website that named the town itself. It appeared less meaningful in the Ireland round. It was still very capable in the rounds without search.

So to put a bow on this:
- The o3 model isn't smoke and mirrors, tricking us by only using EXIF data. It's at a comparable Geoguessr skill level to Master I or better players now (at least according to my own ~20 or so rounds of testing).
- Humans still hold a big edge in decision time -- most of my guesses were 4 min.
- Spoofing EXIF data doesn't throw off the model.

Whether you view this as dystopian or as a technological marvel -- or both -- you can't claim it's a parlor trick.
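Patterson's EXIF control is also easy to replicate at home. Here's a minimal sketch using the piexif library (the file name and coordinates are placeholders) that writes fake GPS tags into a JPEG before upload:

```python
import piexif

def to_rationals(deg):
    """Decimal degrees -> EXIF (degrees, minutes, seconds) rationals."""
    d = int(deg)
    m = int((deg - d) * 60)
    s = round((deg - d - m / 60) * 3600 * 100)
    return ((d, 1), (m, 1), (s, 100))

# Placeholder spoof: claim the screenshot was captured in Reykjavik.
lat, lon = 64.1466, -21.9426
gps_ifd = {
    piexif.GPSIFD.GPSLatitudeRef: b"N" if lat >= 0 else b"S",
    piexif.GPSIFD.GPSLatitude: to_rationals(abs(lat)),
    piexif.GPSIFD.GPSLongitudeRef: b"E" if lon >= 0 else b"W",
    piexif.GPSIFD.GPSLongitude: to_rationals(abs(lon)),
}
exif_bytes = piexif.dump({"GPS": gps_ifd})
piexif.insert(exif_bytes, "round_screenshot.jpg")  # edits the file in place
```

Per Patterson, the model's guesses didn't move even with tags like these present, which is what rules out metadata leakage.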
AI

Microsoft's New AI-Generated Version of 'Quake 2' Now Playable Online (microsoft.com) 31

Microsoft has created a real-time AI-generated rendition of Quake II gameplay (playable on the web).

On Friday, Xbox's general manager of gaming AI posted the startling link to "an AI-generated gaming experience" at Copilot.Microsoft.com: "Move, shoot, explore — and every frame is created on the fly by an AI world model, responding to player inputs in real-time. Try it here."

They started with their "Muse" videogame world models, adding "a real-time playable extension" that players can interact with through keyboard/controller actions, "essentially allowing you to play inside the model," according to a Microsoft blog post. A concerted effort by the team resulted in both planning out what data to collect (what game, how should the testers play said game, what kind of behaviours might we need to train a world model, etc), and the actual collection, preparation, and cleaning of the data required for model training. Much to our initial delight we were able to play inside the world that the model was simulating. We could wander around, move the camera, jump, crouch, shoot, and even blow-up barrels similar to the original game. Additionally, since it features in our data, we can also discover some of the secrets hidden in this level of Quake II. We can also insert images into the models' context and have those modifications persist in the scene...

We do not intend for this to fully replicate the actual experience of playing the original Quake II game. This is intended to be a research exploration of what we are able to build using current ML approaches. Think of this as playing the model as opposed to playing the game... The interactions with enemy characters are a big area for improvement in our current WHAMM model. Often, they will appear fuzzy in the images and combat with them (damage being dealt to both the enemy/player) can be incorrect.

They warn that the model "can and will forget about objects that go out of view" for longer than 0.9 seconds. "This can also be a source of fun, whereby you can defeat or spawn enemies by looking at the floor for a second and then looking back up. Or it can let you teleport around the map by looking up at the sky and then back down. These are some examples of playing the model."

This generative AI model was trained on Quake II "with just over a week of data," reports Tom's Hardware — a dramatic reduction from the seven player-years of gameplay data used to train the original model launched in February.

Some context from The Verge: "You could imagine a world where from gameplay data and video that a model could learn old games and really make them portable to any platform where these models could run," said Microsoft Gaming CEO Phil Spencer in February. "We've talked about game preservation as an activity for us, and these models and their ability to learn completely how a game plays without the necessity of the original engine running on the original hardware opens up a ton of opportunity."
"Is porting a game like Gameday 98 more feasible through AI or a small team?" asks the blog Windows Central. "What costs less or even takes less time? These are questions we'll be asking and answering over the coming decade as AI continues to grow. We're in year two of the AI boom; I'm terrified of what we'll see in year 10."

"It's clear that Microsoft is now training Muse on more games than just Bleeding Edge," notes The Verge, "and it's likely we'll see more short interactive AI game experiences in Copilot Labs soon." Microsoft is also working on turning Copilot into a coach for games, allowing the AI assistant to see what you're playing and help with tips and guides. Part of that experience will be available to Windows Insiders through Copilot Vision soon.
AI

Microsoft Unveils New Voice-Activated AI Assistant For Doctors 18

Microsoft has introduced Dragon Copilot, a voice-activated AI assistant for doctors that integrates dictation and ambient listening tools to automate clinical documentation, including notes, referrals, and post-visit summaries. The tool is set to launch in May in the U.S. and Canada. CNBC reports: Microsoft acquired Nuance Communications, the company behind Dragon Medical One and DAX Copilot, for about $16 billion in 2021. As a result, Microsoft has become a major player in the fiercely competitive AI scribing market, which has exploded in popularity as health systems have been looking for tools to help address burnout. AI scribes like DAX Copilot allow doctors to draft clinical notes in real time as they consensually record their visits with patients. DAX Copilot has been used in more than 3 million patient visits across 600 health-care organizations in the last month, Microsoft said.

Dragon Copilot is accessible through a mobile app, browser or desktop, and it integrates directly with several different electronic health records, the company said. Clinicians will still be able to draft clinical notes with the assistant like they could with DAX Copilot, but they'll be able to use natural language to edit their documentation and prompt it further, Kenn Harper, general manager of Dragon products at Microsoft, told reporters on the call. For instance, a doctor could ask questions like, "Was the patient experiencing ear pain?" or "Can you add the ICD-10 codes to the assessment and plan?" Physicians can also ask broader treatment-related queries such as, "Should this patient be screened for lung cancer?" and get an answer with links to resources like the Centers for Disease Control and Prevention. [...]
AI

Microsoft Shows Progress Toward Real-Time AI-Generated Game Worlds (arstechnica.com) 23

An anonymous reader quotes a report from Ars Technica: For a while now, many AI researchers have been working to integrate a so-called "world model" into their systems. Ideally, these models could infer a simulated understanding of how in-game objects and characters should behave based on video footage alone, then create fully interactive video that instantly simulates new playable worlds based on that understanding. Microsoft Research's new World and Human Action Model (WHAM), revealed today in a paper published in the journal Nature, shows how quickly those models have advanced. But it also shows how much further we have to go before the dream of AI crafting complete, playable gameplay footage from just some basic prompts and sample video footage becomes a reality.

Much like Google's Genie model before it, WHAM starts by training on "ground truth" gameplay video and input data provided by actual players. In this case, that data comes from Bleeding Edge, a four-on-four online brawler released in 2020 by Microsoft subsidiary Ninja Theory. By collecting actual player footage since launch (as allowed under the game's user agreement), Microsoft gathered the equivalent of seven player-years' worth of gameplay video paired with real player inputs. Early in that training process, Microsoft Research's Katja Hofmann said the model would get easily confused, generating inconsistent clips that would "deteriorate [into] these blocks of color." After 1 million training updates, though, the WHAM model started showing basic understanding of complex gameplay interactions, such as a power cell item exploding after three hits from the player or the movements of a specific character's flight abilities. The results continued to improve as the researchers threw more computing resources and larger models at the problem, according to the Nature paper.

To see just how well the WHAM model generated new gameplay sequences, Microsoft tested the model by giving it up to one second's worth of real gameplay footage and asking it to generate what subsequent frames would look like based on new simulated inputs. To test the model's consistency, Microsoft used actual human input strings to generate up to two minutes of new AI-generated footage, which was then compared to actual gameplay results using the Fréchet Video Distance metric. Microsoft boasts that WHAM's outputs can stay broadly consistent for up to two minutes without falling apart, with simulated footage lining up well with actual footage even as items and environments come in and out of view. That's an improvement over even the "long horizon memory" of Google's Genie 2 model, which topped out at a minute of consistent footage. Microsoft also tested WHAM's ability to respond to a diverse set of randomized inputs not found in its training data. These tests showed broadly appropriate responses to many different input sequences based on human annotations of the resulting footage, even as the best models fell a bit short of the "human-to-human baseline."
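Microsoft's evaluation code isn't included in the report, but the protocol it describes -- seed the model with about a second of real frames plus the matching controller inputs, then let it predict frame after frame while consuming its own output -- is the standard autoregressive world-model rollout. A generic sketch with a stub standing in for WHAM:

```python
from collections import deque

CONTEXT = 10  # roughly one second of context frames, per the test above

def stub_world_model(frames, actions):
    """Stand-in for a trained world model predicting the next frame.

    A real model (WHAM, Genie, ...) is a large neural network; this stub
    just repeats the last frame so the loop runs end to end.
    """
    return frames[-1]

def rollout(seed_frames, input_log, horizon):
    """Generate `horizon` frames conditioned on real controller inputs."""
    frames = deque(seed_frames, maxlen=CONTEXT)
    actions = deque(input_log[:CONTEXT], maxlen=CONTEXT)
    generated = []
    for t in range(horizon):
        frame = stub_world_model(list(frames), list(actions))
        generated.append(frame)
        frames.append(frame)  # the model consumes its own output
        actions.append(input_log[CONTEXT + t])  # real human inputs
    return generated

seed = [f"real_frame_{i}" for i in range(CONTEXT)]
inputs = [f"input_{i}" for i in range(CONTEXT + 120)]
clip = rollout(seed, inputs, horizon=120)
print(len(clip), "generated frames")
# Consistency is then scored against real footage for the same inputs,
# e.g. with the Fréchet Video Distance metric mentioned above.
```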

The most interesting result of Microsoft's WHAM tests, though, might be in the persistence of in-game objects. Microsoft provided examples of developers inserting images of new in-game objects or characters into pre-existing gameplay footage. The WHAM model could then incorporate that new image into its subsequent generated frames, with appropriate responses to player input or camera movements. With just five edited frames, the new object "persisted" appropriately in subsequent frames anywhere from 85 to 98 percent of the time, according to the Nature paper.

Movies

A Videogame Meets Shakespeare in 'Grand Theft Hamlet' Film (yahoo.com) 9

The Los Angeles Times calls it "a guns-blazingly funny documentary about two out-of-work British actors who spent a chunk of their COVID-19 lockdown staging Shakespeare's masterpiece on the mean streets of Grand Theft Auto V."

Grand Theft Hamlet won SXSW's Jury Award for best documentary, and has now opened in U.S. theatres this weekend (and begun streaming on Mubi), after opening in the U.K. and Ireland. But nearly the entire film is set in Grand Theft Auto's crime-infested version of Los Angeles, the Times reports, "where even the good guys have weapons and a nihilistic streak — the vengeful Prince of Denmark fits right in." Yet when Sam Crane, a.k.a. @Hamlet_thedane, launches into one of the Bard's monologues, he's often murdered by a fellow player within minutes. Everyone's a critic.

Crane co-directed the movie with his wife, Pinny Grylls, a first-time gamer who functions as the film's camera of sorts. What her character sees, where she chooses to stand and look, makes up much of the film, although the editing team does phenomenal work splicing in other characters' points of view. (We're never outside of the game until the last 30 seconds; only then do we see anyone's real face....) The Bard's story is only half the point. Really, this is a classic let's-put-on-a-pixilated-show tale about the need to create beauty in the world — even this violent world — especially when stage productions in England have shuttered, forcing Crane, a husband and father, and Mark Oosterveen, single and lonely, to kill time speeding around the digital desert...

To our surprise (and theirs), the play's tussles with depression and anguish and inertia become increasingly resonant as the production and the pandemic limp toward their conclusions. When Crane and Oosterveen's "Grand Theft Auto" avatars hop into a van with an anonymous gamer and ask this online stranger for his thoughts on Hamlet's suicidal soliloquy, the man, a real-life delivery driver stuck at home with a broken leg, admits, "I don't think I'm in the right place to be replying to this right now...."

In 2014 Hamlet was also staged in Guild Wars 2, the article points out. "This is, however, the first attempt I'm aware of that attempts to do the whole thing live in one go, no matter if one of the virtual actors falls to their doom from a blimp.

"As Grylls says, 'You can't stop production just because somebody dies.'"
Open Source

VLC Tops 6 Billion Downloads, Previews AI-Generated Subtitles (techcrunch.com) 68

VLC media player, the popular open-source software developed by nonprofit VideoLAN, has topped 6 billion downloads worldwide and teased an AI-powered subtitle system. From a report: The new feature automatically generates real-time subtitles -- which can then also be translated into many languages -- for any video using open-source AI models that run locally on users' devices, eliminating the need for internet connectivity or cloud services, as VideoLAN demoed at CES.
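VideoLAN hasn't published implementation details, but fully local subtitle generation of the kind demoed is already possible with open-source speech models. A minimal sketch using OpenAI's open-source Whisper model (not necessarily what VLC uses; the video file name is a placeholder) that writes a standard .srt file:

```python
# pip install openai-whisper  (also requires ffmpeg on the PATH)
import whisper

def srt_time(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("base")      # downloads once, then runs offline
result = model.transcribe("video.mp4")  # placeholder file name

with open("video.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```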
Open Source

Slashdot's Interview with Bruce Perens: How He Hopes to Help 'Post Open' Developers Get Paid (slashdot.org) 61

Bruce Perens, original co-founder of the Open Source Initiative, has responded to questions from Slashdot readers about a new alternative he's developing that hopefully helps "Post Open" developers get paid.

But first, "One of the things that's clear from the Slashdot patter is that people are not aware of what I've been doing, in general," Perens says. "So, let's start by filling that in..."

Read on for the rest of his wide-ranging answers....
Iphone

'Punctuation Is Dead Because the iPhone Keyboard Killed It' (androidauthority.com) 138

Android Authority's Rita El Khoury argues that the decline in punctuation use and capitalization in social media writing, especially among younger generations, can largely be attributed to the iPhone keyboard. "By hiding the comma and period behind a symbol switch, the iPhone keyboard encourages the biggest grammar fiends to be lazy and skip punctuation," writes El Khoury. She continues: Pundits will say that it's just an extra tap to add a period (double-tap the space bar) or a comma (switch to the characters layout and tap comma), but it's one extra tap too many. When you're firing off replies and messages at a rapid rate, the jarring pause while the keyboard switches to symbols and then switches back to letters is just too annoying, especially if you're doing it multiple times in one message. I hate pausing mid-sentence so much that I will sacrifice a comma at the altar of speed. [...]

The real problem, at the end of the day, is that iPhones -- not Android phones -- are popular among Gen Z buyers, especially in the US -- a market with a huge online presence and influence. Add that most smartphone users tend to stick to default apps on their phones, so most of them end up with the default iPhone keyboard instead of looking at better (albeit often even slower) alternatives. And it's that same keyboard that's encouraging them to be lazy instead of making it easier to add punctuation.

So yes, I blame the iPhone for killing the period and slaughtering the comma, and I think both of those are great offenders in the death of the capital letter. But trends are cyclical, and if the cassette player can make a comeback, so can the comma. Who knows, maybe in a year or two, writing like a five-year-old will be passe, too, and it'll be trendy to use proper grammar again.

Games

Kurt Vonnegut's Lost Board Game Finally Published 15

An anonymous reader shares a report: Fans of literature most likely know Kurt Vonnegut for the novel Slaughterhouse-Five. The staunchly anti-war book first resonated with readers during the Vietnam War era, later becoming a staple in high school curricula the world over. When Vonnegut died in 2007 at the age of 84, he was widely recognized as one of the greatest American novelists of all time. But would you believe that he was also an accomplished game designer?

In 1956, following the lukewarm reception of his first novel, Player Piano, Vonnegut was one of the 16 million World War II veterans struggling to put food on the table. His moneymaking solution at the time was a board game called GHQ, which leveraged his understanding of modern combined arms warfare and distilled it into a simple game played on an eight-by-eight grid. Vonnegut pitched the game relentlessly to publishers all year long, according to game designer and NYU faculty member Geoff Engelstein, who recently found those letters sitting in the archives at Indiana University. But the real treasure was an original set of typewritten rules, complete with Vonnegut's own notes in the margins.

With the permission of the Vonnegut estate, Engelstein tells Polygon that he cleaned the original rules up just a little bit, buffed out the dents in GHQ's endgame, and spun up some decent art and graphic design. Now you can purchase the final product, titled Kurt Vonnegut's GHQ: The Lost Board Game, at your local Barnes & Noble -- nearly 70 years after it was created.
Role Playing (Games)

Playing D&D Helps Autistic Players In Social Interactions, Study Finds (arstechnica.com) 40

An anonymous reader quotes a report from Ars Technica: Since its introduction in the 1970s, Dungeons & Dragons has become one of the most influential tabletop role-playing games (TRPGs) in popular culture, featuring heavily in Stranger Things, for example, and spawning a blockbuster movie released last year. Over the last decade or so, researchers have turned their focus more heavily to the ways in which D&D and other TRPGs can help people with autism form healthy social connections, in part because the gaming environment offers clear rules around social interactions. According to the authors of a new paper published in the journal Autism, D&D helped boost the confidence of players with autism, giving them a strong sense of kinship or belonging, among other benefits.

"There are many myths and misconceptions about autism, with some of the biggest suggesting that those with it aren't socially motivated, or don't have any imagination," said co-author Gray Atherton, a psychologist at the University of Plymouth. "Dungeons & Dragons goes against all that, centering around working together in a team, all of which takes place in a completely imaginary environment. Those taking part in our study saw the game as a breath of fresh air, a chance to take on a different persona and share experiences outside of an often challenging reality. That sense of escapism made them feel incredibly comfortable, and many of them said they were now trying to apply aspects of it in their daily lives." [...] For this latest study. Atherton et al. wanted to specifically investigate how autistic players experience D&D when playing in groups with other autistic players. It's essentially a case study with a small sample size -- just eight participants -- and qualitative in nature, since the post-play analysis focused on semistructured interviews with each player after the conclusion of the online campaign, the better to highlight their individual voices.

The players were recruited through social media advertisements within the D&D, Reddit and Discord online communities; all had received an autism diagnosis from a medical professional. They were split into two groups of four players, with one of the researchers (who's been playing D&D for years) acting as the dungeon master. The online sessions featured in the study used the Waterdeep: Dragon Heist campaign. The campaign ran for six weeks, with sessions lasting between two and four hours (including breaks). Participants spoke repeatedly about the positive benefits they received from playing D&D, providing a friendly environment that helped them relax about social pressures. "When you're interacting with people over D&D, you're more likely to understand what's going on," one participant said in their study interview. "That's because the method you'll use to interact is written out. You can see what you're meant to do. There's an actual sort of reference sheet for some social interactions." That, in turn, helped foster a sense of belonging and kinship with their fellow players.

Participants also reported feeling emotionally invested and close to their characters, with some preferring to separate themselves from their character in order to explore other aspects of their personality or even an entirely new persona, thus broadening their perspectives. "I can make a character quite different from how I interact with people in real-life interactions," one participant said. "It helps you put yourself in the other person's perspective because you are technically entering a persona that is your character. You can then try to see how it feels to be in that interaction or in that scenario through another lens." And some participants said they were able to "rewrite" their own personal stories outside the game by adopting some of their characters' traits -- a psychological phenomenon known as "bleed."

Robotics

Google DeepMind Develops a 'Solidly Amateur' Table Tennis Robot (techcrunch.com) 20

An anonymous reader quotes a report from TechCrunch: In a newly published paper titled "Achieving Human Level Competitive Robot Table Tennis," Google's DeepMind Robotics team is showcasing its own work on the game. The researchers have effectively developed a "solidly amateur human-level player" when pitted against a human opponent. During testing, the table tennis bot was able to beat all of the beginner-level players it faced. With intermediate players, the robot won 55% of matches. It's not ready to take on pros, however. The robot lost every time it faced an advanced player. All told, the system won 45% of the 29 games it played. "This is the first robot agent capable of playing a sport with humans at human level and represents a milestone in robot learning and control," the paper claims. "However, it is also only a small step towards a long-standing goal in robotics of achieving human level performance on many useful real world skills. A lot of work remains in order to consistently achieve human-level performance on single tasks, and then beyond, in building generalist robots that are capable of performing many useful tasks, skillfully and safely interacting with humans in the real world."

The robot's biggest trouble areas are responding to fast balls and handling high and low balls. It also has trouble with backhand shots and with reading the spin on an incoming ball. Here's how the researchers plan to address the issue with fast balls: "To address the latency constraints that hinder the robot's reaction time to fast balls, we propose investigating advanced control algorithms and hardware optimizations. These could include exploring predictive models to anticipate ball trajectories or implementing faster communication protocols between the robot's sensors and actuators."
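The "predictive models to anticipate ball trajectories" the researchers mention can start very simply: with gravity fixed, a ballistic height model is linear in its unknowns, so it can be fit to a few noisy camera samples by least squares and extrapolated forward. A minimal sketch (drag and spin ignored; the sample data is invented):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def predict_height(times, heights, t_future):
    """Fit z(t) = z0 + v0*t - 0.5*G*t^2 and extrapolate to t_future.

    With G fixed, the model is linear in (z0, v0), so ordinary
    least squares suffices. Drag and spin are ignored in this sketch.
    """
    t = np.asarray(times)
    A = np.stack([np.ones_like(t), t], axis=1)
    b = np.asarray(heights) + 0.5 * G * t**2
    (z0, v0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return z0 + v0 * t_future - 0.5 * G * t_future**2

# Invented camera samples of a ball rising off the table:
t_obs = [0.00, 0.02, 0.04, 0.06]
z_obs = [0.100, 0.154, 0.205, 0.252]
print(f"predicted height at t=0.20 s: {predict_height(t_obs, z_obs, 0.20):.3f} m")
```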
