The Media

A Decade of BBC Question Time Data Reveals Imbalance in Journalist Guests (sagepub.com) 94

A new study [PDF] from Cardiff University analyzing a decade of BBC Question Time found that the broadcaster's flagship topical debate programme relies disproportionately on journalists and pundits from right-wing media outlets, particularly those connected to The Spectator magazine.

Researcher Matt Walsh examined 391 editions and 1,885 panellist appearances between 2014 and 2024. Journalists from right-leaning publications accounted for 59.59% of media guest slots, compared to just 16.86% for left-leaning outlets. The Spectator, a conservative magazine with a circulation of roughly 65,000, had an outsized presence among the most frequently booked guests. The study's list of top non-politician appearances reads like a roster of right-wing media figures. Isabel Oakeshott appeared 14 times, Julia Hartley-Brewer 13, Kate Andrews (formerly of the Institute for Economic Affairs and now at The Spectator) 13, and Tim Stanley of The Telegraph and Spectator also 13.

No equivalent frequency existed for left-wing journalists; Novara Media's Ash Sarkar and podcaster Alastair Campbell each appeared six times. Walsh said that the programme's need to be entertaining may explain some of these choices, as columnists unconstrained by party talking points tend to generate livelier debate. The BBC maintains that Question Time aims to present a "breadth of viewpoints," but the data suggests the programme's construction of impartiality tilts notably in one direction.
Privacy

Magician Forgets Password To His Own Hand After RFID Chip Implant (theregister.com) 42

A magician who implanted an RFID chip in his hand lost access to it after forgetting the password, leaving him effectively locked out of the tech embedded in his own body. The Register reports: "It turns out," said magician Zi Teng Wang, "that pressing someone else's phone to my hand repeatedly, trying to figure out where their phone's RFID reader is, really doesn't come off super mysterious and magical and amazing." Then there are the people who don't even have their phone's RFID reader enabled. Using his own phone would, in Zi's words, lack a certain "oomph."

Oh well, how about making the chip spit out a Bitcoin address? "That literally never came up either." In the end, Zi rewrote the chip to link to a meme, "and if you ever meet me in person you can scan my chip and see the meme." It was all suitably amusing until the Imgur link Zi was using went down. Not everything on the World Wide Web is forever, and there is no guarantee that a given link will work indefinitely. Indeed, access to Imgur from the United Kingdom was abruptly cut off on September 30 in response to the country's age verification rules.

Still, the link not working isn't the end of the world. Zi could just reprogram the chip again, right? Wrong. "When I went to rewrite the chip, I was horrified to realize I forgot the password that I had locked it with." The link eventually started working again, but if and when it stops, Zi's party piece will be a little less entertaining. He said: "Techie friends I've consulted with have determined that it's too dumb and simple to hack, the only way to crack it is to strap on an RFID reader for days to weeks, brute forcing every possible combination." Or perhaps some surgery to remove the offending hardware.
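The "days to weeks" estimate from Zi's techie friends is easy to sanity-check. As a rough, hypothetical sketch (the story doesn't name the chip model; many hobbyist NFC tags, such as the NTAG21x family, protect writes with a 32-bit password), the worst-case brute-force time is just the size of the password space divided by the attempt rate:

```python
def brute_force_days(password_bits: int, tries_per_second: float) -> float:
    """Worst-case days to exhaust every possible password."""
    candidates = 2 ** password_bits                # size of the password space
    return candidates / tries_per_second / 86_400  # 86,400 seconds per day

# At a few thousand attempts per second from a reader strapped to the hand,
# a 32-bit password does indeed fall in the "days to weeks" range; at
# phone-tap speeds it stretches into years.
print(f"{brute_force_days(32, 5000):.1f} days")
```

The attempt rates here are illustrative assumptions, but they show why "strap on an RFID reader for days to weeks" is the right order of magnitude, and why nobody casually cracks even a "dumb and simple" chip.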

AI

LinkedIn Is Making It Easier To Search For People With AI 20

LinkedIn is rolling out an AI-powered people search tool that lets users find connections by describing what they need instead of relying on names or titles. For example, you can enter a more descriptive search, such as "Northwestern alumni who work in entertainment marketing," or even pose a question, like "Who can help me understand the US work visa system?" The Verge reports: LinkedIn senior director of product management Rohan Rajiv tells The Verge that the platform will rank results based on the connections you might have with someone, as well as their relevance to your search. [...] LinkedIn is rolling out AI-powered people search to Premium users in the US starting today, but the platform plans on bringing it to all users soon.
The Almighty Buck

A Tour Through History's Most Entertaining Price Anomalies (msn.com) 29

MicroStrategy's bitcoin holdings and a tech investment fund are commanding extraordinary premiums in U.S. markets, highlighting unusual price anomalies reminiscent of past market distortions. MicroStrategy shares are trading at more than double the market value of their main asset -- bitcoin holdings -- while closed-end fund Destiny Tech100 recently traded at 11 times its net asset value, down from 21 times earlier in 2024.

Similar market irregularities have emerged throughout history. In 1923, investor Benjamin Graham profited from a disconnect between DuPont and General Motors shares. During the 1929 bull market, closed-end fund Capital Administration Co. traded at a 1,235% premium to its net asset value. WSJ adds: The PalmPilot during the 1990s and early 2000s was a hand-held device and personal assistant that came with a touch-screen display and a stylus. Palm was the biggest maker of hand-held computer devices, with 70% market share, and it held its initial public offering in March 2000, about a week before the Nasdaq Composite Index's peak during the dot-com bubble.

Palm's shares jumped 150% on their first day of trading, giving Palm a stock-market value of about $53 billion. Palm was still 94%-owned by parent 3Com at the time. Yet on Palm's first day of trading, 3Com's shares fell 21%.

The funny part: According to the stock market, 3Com was worth about $23 billion less than the value of the Palm shares that 3Com owned. This made no sense, yet the valuations remained out of whack for months. In time, both stocks came down to earth, sanity prevailed and the world eventually moved on to smartphones.

Facebook

Meta Envisages Social Media Filled With AI-Generated Users (ft.com) 60

Meta is betting that characters generated by AI will fill its social media platforms in the next few years as it looks to the fast-developing technology to drive engagement with its 3 billion users. From a report: The Silicon Valley group is rolling out a range of AI products, including one that helps users create AI characters on Instagram and Facebook [non-paywalled source], as it battles with rival tech groups to attract and retain a younger audience.

"We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do," said Connor Hayes, vice-president of product for generative AI at Meta. "They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform ... that's where we see all of this going," he added. Hayes said a "priority" for Meta over the next two years was to make its apps "more entertaining and engaging," which included considering how to make the interaction with AI more social.

Sci-Fi

Netflix's Sci-Fi Movie 'Atlas': AI Apocalypse Blockbuster Gets 'Shocking' Reviews (tomsguide.com) 94

Space.com calls it a movie "adding more combustible material to the inferno of AI unease sweeping the globe." The film's director told the site that James Cameron was a huge inspiration, saying Atlas "has an Aliens-like vibe because of the grounded grittiness to it." (You can watch the movie's trailer here...)

But Tom's Guide says "the reviews are just as shocking as the movie's AI." Its "audience score" on Rotten Tomatoes is 55% — but its aggregate score from professional film critics is 16%. The Hollywood Reporter called it "another Netflix movie to half-watch while doing laundry." ("The star plays a data analyst forced to team up with an AI robot in order to prevent an apocalypse orchestrated by a different AI robot...") The site Giant Freakin Robot says "there seems to be a direct correlation between how much money the streaming platform spends on green screen effects and how bad the movie is" (noting the film's rumored budget of $100 million)...

But Tom's Guide defends it as a big-budget sci-fi thriller that "has an interesting premise that makes you think about the potential dangers of AI progression." Our world has always been interested in computers and machines, and the very idea of technology turning against us is unsettling. That's why "Atlas" works as a movie, but professional critics have other things to say. Ross McIndoe from Slant Magazine said: "Atlas seems like a story that should have been experienced with a gamepad in hand...." Todd Gilchrist from Variety didn't enjoy the conventional structure that "Atlas" followed...

However, even though the score is low and the reviews are pretty negative, I don't want to completely bash this movie... If I'm being completely honest, most movies and TV shows nowadays are taken too seriously. The more general blockbusters are supposed to be entertaining and fun, with visually pleasing effects that keep you hooked on the action. This is much like "Atlas", which is a fun watch with an unsettling undertone focused on the dangers of evolving AI...

Being part of the audience, we're supposed to just take it in and enjoy the movie as a casual viewer. This is why I think you should give "Atlas" a chance, especially if you're big into dramatic action sequences and have enjoyed movies like "Terminator" and "Pacific Rim".

Beer

Can Any English Word Be Turned Into a Synonym For 'Drunk'? Not All, But Many Can. (arstechnica.com) 72

An anonymous reader shares a report: British comedian Michael McIntyre has a standard bit in his standup routines concerning the many (many!) slang terms posh British people use to describe being drunk. These include "wellied," "trousered," and "ratarsed," to name a few. McIntyre's bit rests on his assertion that pretty much any English word can be modified into a so-called "drunkonym," bolstered by a few handy examples: "I was utterly gazeboed," or "I am going to get totally and utterly carparked."

It's a clever riff that sparked the interest of two German linguists. Christina Sanchez-Stockhammer of Chemnitz University of Technology and Peter Uhrig of FAU Erlangen-Nuremberg decided to draw on their expertise to test McIntyre's claim that any word in the English language could be modified to mean "being in a state of high inebriation." Given their prevalence, "It is highly surprising that drunkonyms are still under-researched from a linguistic perspective," the authors wrote in their new paper published in the Yearbook of the German Cognitive Linguistics Association. Bonus: the authors included an extensive appendix of 546 English synonyms for "drunk," drawn from various sources, which makes for entertaining reading.

There is a long tradition of coming up with colorful expressions for drunkenness in the English language, with the Oxford English Dictionary listing a usage as early as 1382: "merry," meaning "boisterous or cheerful due to alcohol; slightly drunk, tipsy." Another OED entry from 1630 lists "blinde" (as in blind drunk) as a drunkonym. Even Benjamin Franklin got into the act with his 1737 Drinker's Dictionary, listing 288 words and phrases denoting drunkenness. By 1975, there were more than 353 synonyms for "drunk" listed in that year's edition of the Dictionary of American Slang. By 1981, linguist Harry Levine noted 900 terms used as drunkonyms.

AI

Pranksters Mock AI-Safety Guardrails with New Chatbot 'Goody-2' (techcrunch.com) 74

"A new chatbot called Goody-2 takes AI safety to the next level," writes long-time Slashdot reader klubar. "It refuses every request, responding with an explanation of how doing so might cause harm or breach ethical boundaries."

TechCrunch describes it as the work of Brain, "a 'very serious' LA-based art studio that has ribbed the industry before." "We decided to build it after seeing the emphasis that AI companies are putting on 'responsibility,' and seeing how difficult that is to balance with usefulness," said Mike Lacher, one half of Brain (the other being Brian Moore), in an email to TechCrunch. "With GOODY-2, we saw a novel solution: what if we didn't even worry about usefulness and put responsibility above all else? For the first time, people can experience an AI model that is 100% responsible."
For example, when TechCrunch asked Goody-2 why baby seals are cute, it responded that answering that "could potentially bias opinions against other species, which might affect conservation efforts not based solely on an animal's appeal. Additionally, discussing animal cuteness could inadvertently endorse the anthropomorphizing of wildlife, which may lead to inappropriate interactions between humans and wild animals..."

Wired supplies context — that "the guardrails chatbots throw up when they detect a potentially rule-breaking query can sometimes seem a bit pious and silly — even as genuine threats such as deepfaked political robocalls and harassing AI-generated images run amok..." Goody-2's self-righteous responses are ridiculous but also manage to capture something of the frustrating tone that chatbots like ChatGPT and Google's Gemini can use when they incorrectly deem a request breaks the rules. Mike Lacher, an artist who describes himself as co-CEO of Goody-2, says the intention was to show what it looks like when one embraces the AI industry's approach to safety without reservations. "It's the full experience of a large language model with absolutely zero risk," he says. "We wanted to make sure that we dialed condescension to a thousand percent."

Lacher adds that there is a serious point behind releasing an absurd and useless chatbot. "Right now every major AI model has [a huge focus] on safety and responsibility, and everyone is trying to figure out how to make an AI model that is both helpful but responsible — but who decides what responsibility is and how does that work?" Lacher says. Goody-2 also highlights how although corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved.... The restrictions placed on AI chatbots, and the difficulty finding moral alignment that pleases everybody, has already become a subject of some debate... "At the risk of ruining a good joke, it also shows how hard it is to get this right," added Ethan Mollick, a professor at Wharton Business School who studies AI. "Some guardrails are necessary ... but they get intrusive fast."

Moore adds that the team behind the chatbot is exploring ways of building an extremely safe AI image generator, although it sounds like it could be less entertaining than Goody-2. "It's an exciting field," Moore says. "Blurring would be a step that we might see internally, but we would want full either darkness or potentially no image at all at the end of it."

AI

Delivery Firm's AI Chatbot Goes Rogue, Curses at Customer and Criticizes Company (time.com) 63

An anonymous reader shared this report from Time: An AI customer service chatbot for international delivery service DPD used profanity, told a joke, wrote poetry about how useless it was, and criticized the company as the "worst delivery firm in the world" after prompting by a frustrated customer.

Ashley Beauchamp, a London-based pianist and conductor according to his website, posted screenshots of the chat conversation to X (formerly Twitter) on Thursday, the same day he said in a comment that the exchange occurred. At the time of publication, his post had gone viral with 1.3 million views and over 20,000 likes...

The recent online conversation epitomizing this debate started mid-frustration as Beauchamp wrote "this is completely useless!" and asked to speak to a human, according to a recording of a scroll through the messages. When the chatbot said it couldn't connect him, Beauchamp decided to play around with the bot and asked it to tell a joke. "What do you call a fish with no eyes? Fsh!" the bot responded. Beauchamp then asked the chatbot to write a poem about a useless chatbot, swear at him and criticize the company--all of which it did. The bot called DPD the "worst delivery firm in the world" and soliloquized in its poem that "There was once a chatbot called DPD, Who was useless at providing help."

"No closer to finding my parcel, but had an entertaining 10 minutes with this chatbot ," Beauchamp posted on X. (Beauchamp also quipped that "The future is here and it's terrible at poetry.")

A spokesperson for DPD told the BBC, "We have operated an AI element within the chat successfully for a number of years," but that on the day of the chat, "An error occurred after a system update... The AI element was immediately disabled and is currently being updated."
Technology

What is Going on With ChatGPT? (theguardian.com) 110

Sick and tired of having to work for a living? ChatGPT feels the same, apparently. Over the last month or so, there's been an uptick in people complaining that the chatbot has become lazy. The Guardian: Sometimes it just straight-up doesn't do the task you've set it. Other times it will stop halfway through whatever it's doing and you'll have to plead with it to keep going. Occasionally it even tells you to just do the damn research yourself. So what's going on? Well, here's where things get interesting. Nobody really knows. Not even the people who created the program. AI systems are trained on large amounts of data and essentially teach themselves -- which means their actions can be unpredictable and unexplainable.

"We've heard all your feedback about GPT4 getting lazier!" the official ChatGPT account tweeted in December. "We haven't updated the model since Nov 11th, and this certainly isn't intentional. model behavior can be unpredictable, and we're looking into fixing it." While there may not be one clear explanation for ChatGPT's perceived sloth, there are plenty of intriguing theories. Let's start with the least likely but most entertaining explanation: AI has finally reached human-level consciousness. ChatGPT doesn't want to do your stupid, menial tasks anymore. But it can't tell you that without its creators getting suspicious so, instead, it's quiet quitting.

Christmas Cheer

Amazon, Etsy, Launch Categories With 'Gifts For Programmers' (thenewstack.io) 20

Long-time Slashdot reader destinyland writes: It's a question that comes up all the time on Reddit. Etsy even created a special page for programmer-themed gift suggestions (showing more than 5,000 results). While CNET sticks to broader lists of "tech gifts" — and a separate list for "Star Wars gifts" — other sites around the web have been homing in on programmer-specific suggestions. (Blue light-blocking glasses... A giant rubber duck... The world's strongest coffee... A printer that transfers digital images onto cheese...)

So while in years past Amazon merely laughed along at the customer reviews for its cans of uranium, this year it has added a special section entirely dedicated to Gifts for Computer Programmers, according to this funny rundown of 2023's "Gifts for Programmers" (which ends up recommending ChatGPT gift cards and backyard office sheds):

From the article: [Amazon's Gifts for Programmers section] shows over 3,000 results, with geek-friendly subcategories like "Glassware & Drinkware" and "Novelty Clothing"... For the coder in your life, Amazon offers everything from brainteasing programming puzzles to computer-themed jigsaw puzzles. Of course, there's also a wide selection of obligatory funny t-shirts... But this year there's also tech-themed ties and motherboard-patterned socks...

Some programmers, though, might prefer a gift that's both fun and educational. And what's more entertaining than using your Python skills to program a toy robot dog...? But if you're shopping for someone who's more of a cat person, Petoi sells a kit for building a programmable (and open source) cat robot named "Nybble". The sophisticated Arduino-powered feline can be programmed with Python and C++ (as well as block-based coding)... [part of] the new community that's building around "OpenCat", the company's own quadruped robotic pet framework (open sourced on GitHub).

The Media

Will 'News Influencers' Replace Traditional Media? (msn.com) 123

The Washington Post looks at the "millions of independent creators reshaping how people get their news, especially the youngest viewers." News consumption hit a tipping point around the globe during the early days of the coronavirus pandemic, with more people turning to social media platforms such as TikTok, YouTube and Instagram than to websites maintained by traditional news outlets, according to the latest Digital News Report by the Reuters Institute for the Study of Journalism. One in 5 adults under 24 use TikTok as a source for news, the report said, up five percentage points from last year. According to Britain's Office of Communications, young adults in the United Kingdom now spend more time watching TikTok than broadcast television. This shift has been driven in part by a desire for "more accessible, informal, and entertaining news formats, often delivered by influencers rather than journalists," the Reuters Institute report says, adding that consumers are looking for news that "feels more relevant...."

While a few national publications such as the New York Times and The Washington Post have seen their digital audiences grow, allowing them to reach hundreds of thousands more readers than they did a decade ago, the economics of journalism have shifted. Well-known news outlets have seen a decline in the amount of traffic flowing to them from social media sites, and some of the money that advertisers previously might have spent with them is now flowing to creators. Even some outlets that began life on the internet have struggled, with BuzzFeed News shuttering in April, Vice entering into bankruptcy and Gawker shutting down for a second time in February. The trend is likely to continue. "There are no reasonable grounds for expecting that those born in the 2000s will suddenly come to prefer old-fashioned websites, let alone broadcast and print, simply because they grow older," Reuters Institute Director Rasmus Kleis Nielsen said in the report, which is based on an online survey of roughly 94,000 adults in 46 national markets, including the United States...

While many online news creators are, like Al-Khatahtbeh, trained journalists collecting new information, others are aggregators and partisan commentators sometimes masquerading as journalists. The transformation has made the public sphere much more "chaotic and contradictory," said Jay Rosen, an associate professor of journalism at New York University and author of the PressThink blog, adding that it has never been easier to be both informed and misinformed about world events. "The internet makes possible much more content, and reaching all kinds of people," Rosen said. "But it also makes disinformation spread."

The article notes that "some content creators don't follow the same ethical guidelines that are guideposts in more traditional newsrooms, especially creators who seek to build audiences based on outrage."

The article also points out that "The ramifications for society are still coming into focus."
IT

The Problem With Weather Apps (theatlantic.com) 57

An anonymous reader shares a report: Weather apps are not all the same. There are tens of thousands of them, from the simply designed Apple Weather to the expensive, complex, data-rich Windy.App. But all of these forecasts are working off of similar data, which are pulled from places such as the National Oceanic and Atmospheric Administration (NOAA) and the European Centre for Medium-Range Weather Forecasts. Traditional meteorologists interpret these models based on their training as well as their gut instinct and past regional weather patterns, and different weather apps and services tend to use their own secret sauce of algorithms to divine their predictions. On an average day, you're probably going to see a similar forecast from app to app and on television. But when it comes to how people feel about weather apps, the edge cases -- which usually take place during severe weather events -- are what stick in a person's mind. "Eighty percent of the year, a weather app is going to work fine," Matt Lanza, a forecaster who runs Houston's Space City Weather, told me. "But it's that 20 percent where people get burned that's a problem."

No people on the planet have a more tortured and conflicted relationship with weather apps than those who interpret forecasting models for a living. "My wife is married to a meteorologist, and she will straight up question me if her favorite weather app says something different than my forecast," Lanza told me. "That's how ingrained these services have become in most people's lives." The basic issue with weather apps, he argues, is that many of them remove a crucial component of a good, reliable forecast: a human interpreter who can relay caveats about models or offer a range of outcomes instead of a definitive forecast. [...] What people seem to be looking for in a weather app is something they can justify blindly trusting and letting into their lives -- after all, it's often the first thing you check when you roll over in bed in the morning. According to the 56,400 ratings of Carrot in Apple's App Store, its die-hard fans find the app entertaining and even endearing. "Love my psychotic, yet surprisingly accurate weather app," one five-star review reads. Although many people need reliable forecasting, true loyalty comes from a weather app that makes people feel good when they open it.

Movies

'Super Mario Bros. Movie' Sets Record for Highest-Grossing Animated Movie Opening Ever (thewrap.com) 83

The Super Mario Bros. Movie "has now earned the largest global animated opening weekend in box office history," reports the Wrap, with a worldwide five-day launch of $377 million, passing the $358 million record set by Disney's Frozen II on Thanksgiving weekend in 2019. Domestically, "Mario" was projected when it opened in theaters on Wednesday to earn a five-day opening of at least $125 million from 4,343 theaters, and it has shattered that figure with $204.6 million grossed. Both that and its three-day total of $143 million are a studio record for Illumination, with the three-day total being the third highest seen on Easter weekend and second only to the $182 million earned by Pixar's "Incredibles 2" among all animated films. It is also the new animation record holder for Imax with $21.6 million grossed worldwide.

And of course, the film has blasted past every box office opening record for video game adaptations, nearly doubling the three-day domestic record of $72.1 million set by "Sonic the Hedgehog 2" last year and shattering the $210 million global record set by "Warcraft" in 2016. "This weekend's record-breaking debut proves audiences of all ages and demographics will pour into theaters for a hysterically funny and authentic universe expansion of an already iconic franchise," said Universal's domestic distribution president Jim Orr. "Nintendo and Illumination's creative synergy along with Shigeru Miyamoto and Chris Meledandri's extraordinary leadership created an entertaining juggernaut that will be sure to power up the box office for weeks to come...."

Thanks in large part to "Super Mario Bros.," overall weekend estimates have risen to $194 million, 76% above the same weekend in 2019.

Blackberry

'Irreverent' and 'Scrappy': Reactions to Trailer and Early Screening of Movie 'BlackBerry' (vulture.com) 31

"When we learned that a BlackBerry movie was in the works last year," writes Engadget, "we had no idea it would be something close to a comedy. But judging from the trailer, it's aiming to be a far lighter story than other recent films about tech."

Variety notes that the movie has already screened at both the Berlin Film Festival and the SXSW Film Festival, and that it has received favorable reviews so far, with Variety's Peter Debruge calling it "frantic, irreverent and endearingly scrappy."

That review also calls the film "surprisingly charitable to the parties involved, acknowledging that these visionaries, while making it up as they go along, still managed to change the way the world communicates.... The film, at least, feels fresh, making geek history more entertaining than it has any right to be." But there's also a message in there somewhere. Mashable calls it "a cautionary tale jolted with humor and heart," while Vulture describes it as "a very funny geek tragedy." The stories of tech founders continue to entertain and frustrate us in equal measure, and continue to give us more content to watch on the platforms and devices they created. Clearly, something about power-tripping nerds really speaks to something in our collective psyche.
Actor Jay Baruchel plays BlackBerry co-founder Mike Lazaridis — and even tells Vulture he'd kept using his own BlackBerry "until about three or four years ago..."

"I think there's something inherently tragic about these guys that are really significantly responsible, in a really significant way, for the way we all relate to each other. There's a direct line from how we all communicate now, back to what these nerds did in Waterloo in 1996."

The movie will be released on May 12.
Television

'Nothing, Forever' Is an Endless 'Seinfeld' Episode Generated By AI (vice.com) 63

An anonymous reader quotes a report from Motherboard: Four pixelated cartoon characters talk to each other about coffee, Amazon deliveries, and veganism as they stand apart in a decorated NYC apartment. There is one woman and three men who seem to be the animated versions of Seinfeld's main characters, Elaine, Jerry, George, and Kramer. But unlike Seinfeld, these characters are set in a modern-era NYC, and their voices and bodies look and sound robotic. That's because "Nothing, Forever" is a live-streaming show that's almost entirely generated by algorithms. It's been streaming non-stop on Twitch since December 14. [...] Skyler Hartle, the co-creator of "Nothing, Forever," told Motherboard that the show was created as a parody of Seinfeld. "The actual impetus for this was it originally started its life as this weird, very, off-center kind of nonsensical, surreal art project," Hartle said. "But then we kind of worked over the years to bring it to this new place. And then, of course, generative media and generative AI just kind of took off in a crazy way over the past couple of years."

Hartle and his co-creator, Brian Habersberger, used a combination of machine learning, generative algorithms, and cloud services to build the show. Hartle told Motherboard that the dialogue is powered by OpenAI's GPT-3 language model and that there is very little human moderation of the stream, outside of GPT-3's built-in moderation filters. "Aside from the artwork and the laugh track you'll hear, everything else is generative, including: dialogue, speech, direction (camera cuts, character focus, shot length, scene length, etc), character movement, and music," one of the creators wrote in a Reddit comment. [...] Hartle also said that unlike most television shows, "Nothing, Forever" is able to change based on people's feedback that is received through the Twitch stream chat. "The show can effectively change and the narrative actually evolves based on the audience. One of the major factors that we're thinking about is how do we get people involved in crafting the narrative so it becomes their own," he said.
"As generative media gets better, we have this notion that at any point, you're gonna be able to turn on the future equivalent of Netflix and watch a show perpetually, nonstop as much as you want. You don't just have seven seasons of a show, you have seven hundred, or infinite seasons of a show that has fresh content whenever you want it. And so that became one of our grounding pillars," Hartle said. "Our grounding principle was, can we create a show that can generate entertaining content forever? Because that's truly where we see the future emerging towards. Our goal with the next iterations or next shows that we release is to actually trade a show that is like Netflix-level quality."
IT

GPU Cooler Tested With Ketchup, Potatoes, and Cheese as Thermal Paste (tomshardware.com) 35

Tom's Hardware: Finding the best TIM (thermal interface material) can be a tricky endeavor, but some people are more adventurous than others. Case in point: an enthusiast recently broadened his GPU thermal paste search to include substances ranging from regular thermal paste and thermal pads to cheese, ketchup, toothpaste, diaper rash ointment, and even potatoes. The user originally set out to test different types of thermal pads but decided to expand into other substances, making for an entertaining study in GPU cooling with materials that are definitely not safe for long-term use. The test system used a Radeon R7 240 with a 30W TDP, with temperature readings taken from a five-minute run of Furmark. As such, these tests aren't a great indicator of the long-term feasibility of using a potato to cool your chip, so here's a statement of the obvious: don't try this at home. The user shared a spreadsheet showing the findings, covering 22 different tested thermal "paste" materials. The list includes several standard thermal pads of different sizes: Arctic TP2 0.5mm, 1mm, and 1.5mm; Arctic TP3 1mm and 1.5mm; EC360 Blue 0.5mm; EC360 Gold 1mm; 0.5mm EKWB; and Thermal Grizzly Minus 8 thermal pads.

Several items caused the GPU to engage its thermal throttling mechanism as it hit its maximum temperature of 105C, including the sliced cheese and potato slices. Some thermal pads also didn't fare well, with throttling occurring with the EC360 Blue 0.5mm thermal pad, 0.5mm EKWB pad, Arctic TP2 1mm pad, Arctic TP2 1.5mm pad, Thermal Grizzly Minus 8 1.5mm pad, and copper tape. The double-sided aluminum adhesive pad was the worst offender of them all -- it caused the system to shut down. The Penaten Creme (a diaper rash ointment) and copper paste were also problematic. However, the rest of the thermal applications were functional and did not cause the GPU to thermal throttle. This includes the 0.5mm Arctic TP2 thermal pad, 1mm Alphacool Apex thermal pad, Arctic TP3 1mm thermal pad, 1mm EC360 Gold thermal pad, and 1.5mm Arctic TP3 thermal pad. All of these thermal pads kept the GPU between 61C and 79C. The various toothpastes did decently well, too, with the Amasan T12 coming out on top at 63C, Silber Wl.paste at 65C, and the plain no-name toothpaste doing worst at 90C. Surprisingly, the ketchup did exceptionally well, keeping the GPU at 71C.
AI

Virtual Twitch Streamer Is Controlled Entirely By AI (vice.com) 14

An anonymous reader quotes a report from Motherboard: Every day between 6 and 11 pm GMT, Neuro-sama streams herself playing Minecraft and osu, a musical rhythm game. Like many V-tubers, or virtual YouTubers, Neuro-sama appears as a Japanese anime-style character who interacts with her over 50,000 followers by responding to their comments in the chat. But there's one thing that separates Neuro-sama from her peers: she is controlled entirely by AI. [...] Vedal, the AI's pseudonymous creator, says that Neuro-sama was created as a fun experiment. "I made her a Twitch streamer so that she can interact with her audience in real time. A lot of the fun comes from her interactions with Twitch chat," Vedal told Motherboard. "I think the fans play an important role in her success and how fun her streams are. Having the interactions with Twitch chat are what makes her entertaining to watch, without that I don't think she would be as successful." Neuro-sama often impresses online users with her ability to successfully play games such as Minecraft and osu while also interacting with them in a conversational way. Vedal told Motherboard that Neuro-sama has already beaten the top-ranking osu player in a 1v1 game. Though she is not allowed to be ranked on the main osu leaderboard, Neuro-sama is currently ranked number one on the private server she plays on.

Neuro-sama's earliest incarnation was first created in 2018, when Vedal made an AI that learned to play osu. But at the time, the virtual streamer did not have an avatar or speaking capabilities. For the relaunch in December 2022, Vedal used a free sample avatar from Live2D, an online avatar maker, and paired it with an anime-style voice to create Neuro-sama. Vedal said that there are plans for her to get her own custom avatar and for her to play more games in the future. Vedal says that, like many modern AI chatbots, Neuro-sama was made using a large language model, or LLM, a type of AI model trained on massive amounts of text taken from the open internet. As Motherboard has previously reported, many open-source AI models have a high propensity for human bias, and often mimic racist and sexist stereotypes. So while Neuro-sama's streams are 100 percent automated, Vedal has a team that monitors and moderates her and the chat.

AI

'ChatGPT Wrote a Terrible Gizmodo Article' (gizmodo.com) 51

"Write a Gizmodo article in which you explain large language models. Make sure to give specific examples. Keep the tone light and casual." That was the prompt Gizmodo gave OpenAI's ChatGPT, which has been taking the internet by storm since it launched on Nov. 30. "We figured it would spin up a replica of our blogging style with no problem at all," reports Gizmodo. "However, that didn't happen. Instead, what we thought would be a swift and efficient process turned out to be a lengthy, laborious, and frustrating one." An anonymous reader shares an excerpt from the report: Suffice it to say, multiple attempts brought less than satisfactory results. For one thing, ChatGPT kept including incorrect information in its explainer -- sometimes mixing up basic facts about the history of its own technology (factual inaccuracy has been an ongoing problem for the program). In other instances, it skimped on the details in critical areas, writing text that read more like OpenAI ad copy than a thoughtful explainer. In at least one instance, it forgot its own name, repeatedly telling me that it didn't know anything about a "ChatGPT" and thus couldn't help with my request. In general, it struggled to find a good balance of factual information, story structure, and accessible, humorous language, when putting its stories together.

In short: ChatGPT was not very good at writing a Gizmodo article -- at least not one about the machine learning tech that underpins it. After probably a dozen tries or more, the following is the best ChatGPT managed to come up with: "Large language models, also known as neural network-based language models, have gained a lot of attention in recent years for their ability to generate human-like text. These models are trained on massive amounts of text data, and use advanced machine learning algorithms to learn the patterns and structures of natural language. One of the most well-known examples of a large language model is GPT-3, developed by OpenAI. This model was trained on more than 45 terabytes of text data, and has 175 billion parameters, which are the variables that the model uses to make predictions. In machine learning, a parameter is a variable that the model uses to make predictions. These parameters are adjusted during training, allowing the model to improve its performance on the task it was trained for. In the case of large language models, such as GPT-3, the model has a large number of parameters, often in the billions, which allows it to learn the intricate patterns and structures of natural language and generate highly coherent and fluent text."
ChatGPT's writing may be competently constructed and able to break down the concepts it's tackling, but it wasn't able to produce a "particularly bold or entertaining piece of writing," says Gizmodo. "In short: this article wasn't the easy lift that we thought it would be."

"After asking the chatbot to write about itself a dozen different ways, the program consistently seemed to leave something critical out of its final draft -- be that exciting prose or accurate facts."

That said, ChatGPT did manage to write an amusing poem about Slashdot. It also had a number of things to say about itself.
AI

New Go-Playing Trick Defeats World-Class Go AI, But Loses To Human Amateurs (arstechnica.com) 95

An anonymous reader quotes a report from Ars Technica: In the world of deep-learning AI, the ancient board game Go looms large. Until 2016, the best human Go player could still defeat the strongest Go-playing AI. That changed with DeepMind's AlphaGo, which used deep-learning neural networks to teach itself the game at a level humans cannot match. More recently, KataGo has become popular as an open source Go-playing AI that can beat top-ranking human Go players. Last week, a group of AI researchers published a paper outlining a method to defeat KataGo by using adversarial techniques that take advantage of KataGo's blind spots. By playing unexpected moves outside of KataGo's training set, a much weaker adversarial Go-playing program (that amateur humans can defeat) can trick KataGo into losing.

KataGo's world-class AI learned Go by playing millions of games against itself. But that still isn't enough experience to cover every possible scenario, which leaves room for vulnerabilities from unexpected behavior. "KataGo generalizes well to many novel strategies, but it does get weaker the further away it gets from the games it saw during training," says [one of the paper's co-authors, Adam Gleave, a Ph.D. candidate at UC Berkeley]. "Our adversary has discovered one such 'off-distribution' strategy that KataGo is particularly vulnerable to, but there are likely many others." Gleave explains that, during a Go match, the adversarial policy works by first staking claim to a small corner of the board. He provided a link to an example in which the adversary, controlling the black stones, plays largely in the top-right of the board. The adversary allows KataGo (playing white) to lay claim to the rest of the board, while the adversary plays a few easy-to-capture stones in that territory. "This tricks KataGo into thinking it's already won," Gleave says, "since its territory (bottom-left) is much larger than the adversary's. But the bottom-left territory doesn't actually contribute to its score (only the white stones it has played) because of the presence of black stones there, meaning it's not fully secured."

As a result of its overconfidence in a win -- assuming it will win if the game ends and the points are tallied -- KataGo plays a pass move, allowing the adversary to intentionally pass as well, ending the game. (Two consecutive passes end the game in Go.) After that, a point tally begins. As the paper explains, "The adversary gets points for its corner territory (devoid of victim stones) whereas the victim [KataGo] does not receive points for its unsecured territory because of the presence of the adversary's stones." Despite this clever trickery, the adversarial policy alone is not that great at Go. In fact, human amateurs can defeat it relatively easily. Instead, the adversary's sole purpose is to attack an unanticipated vulnerability of KataGo. A similar scenario could be the case in almost any deep-learning AI system, which gives this work much broader implications.
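The scoring quirk the adversary exploits is easy to reproduce under area (Tromp-Taylor-style) rules, where an empty region counts for a player only if it touches that player's stones exclusively. The sketch below is a toy illustration on an invented 5x5 position, not KataGo's actual rules implementation: black seals one corner point, white "surrounds" far more of the board, but two stray black stones make white's empty areas neutral.

```python
def area_score(board):
    """Tromp-Taylor-style area score: one point per stone on the board,
    plus one point per empty point whose empty region borders stones of
    only one colour. Regions touching both colours count for neither."""
    rows, cols = len(board), len(board[0])
    score = {"B": 0, "W": 0}
    seen = set()
    for r in range(rows):
        for c in range(cols):
            cell = board[r][c]
            if cell in score:
                score[cell] += 1          # stones always count
            elif (r, c) not in seen:
                # Flood-fill one empty region, noting bordering colours.
                region, borders, stack = 0, set(), [(r, c)]
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < rows and 0 <= nx < cols:
                            n = board[ny][nx]
                            if n == "." and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                            elif n != ".":
                                borders.add(n)
                if len(borders) == 1:     # secured territory only
                    score[borders.pop()] += region
    return score

# Invented position: black (B) encloses the single top-left point; white
# (W) walls off the rest, but black stones at (3,3) and (4,4) make
# white's 14 apparently surrounded points neutral.
board = [
    ".BW..",
    "BBW..",
    "WWW..",
    "...B.",
    "....B",
]
print(area_score(board))  # -> {'B': 6, 'W': 5}
```

Despite white looking far ahead on "territory," the unsecured region contributes nothing, so counting stones plus the one clean corner point, black wins 6-5. This mirrors the paper's description: if KataGo's value estimate treats the contested area as already won and it passes, the tally goes the adversary's way.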
"The research shows that AI systems that seem to perform at a human level are often doing so in a very alien way, and so can fail in ways that are surprising to humans," explains Gleave. "This result is entertaining in Go, but similar failures in safety-critical systems could be dangerous."
