Social Networks

'Rage Bait' Named Oxford Word of the Year 2025 (bbc.com) 58

Longtime Slashdot reader sinij shares a report from the BBC: Do you find yourself getting increasingly irate while scrolling through your social media feed? If so, you may be falling victim to rage bait, which Oxford University Press has named its word or phrase of the year. It is a term that describes manipulative tactics used to drive engagement online, with usage of it increasing threefold in the last 12 months, according to the dictionary publisher.

Rage bait beat two other shortlisted terms -- aura farming and biohack -- to win the title. The list of words is intended to reflect some of the moods and conversations that have shaped 2025.
"Fundamental problem with social media as a system is that it exploits people's emotional thinking," comments sinij. "Cute cat videos on one end and rage bait on another end of the same spectrum. I suspect future societies will be teaching disassociation techniques in junior school."
AI

Cloudflare Says It Blocked 416 Billion AI Scraping Requests In 5 Months 43

Cloudflare says it blocked 416 billion AI scraping attempts in five months and warns that AI is reshaping the internet's economic model -- with Google's combined crawler creating a monopoly-style dilemma where opting out of AI means disappearing from search altogether. Tom's Hardware reports: "The business model of the internet has always been to generate content that drives traffic and then sell either things, subscriptions, or ads," [Cloudflare CEO Matthew Prince] told Wired. "What I think people don't realize, though, is that AI is a platform shift. The business model of the internet is about to change dramatically. I don't know what it's going to change to, but it's what I'm spending almost every waking hour thinking about."

While Cloudflare blocks almost all AI crawlers, there's one particular bot it cannot block without affecting its customers' online presence -- Google. The search giant combined its search and AI crawler into one, meaning users who opt out of Google's AI crawler won't be indexed in Google search results. "You can't opt out of one without opting out of both, which is a real challenge -- it's crazy," Prince continued. "It shouldn't be that you can use your monopoly position of yesterday in order to leverage and have a monopoly position in the market of tomorrow."
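For crawlers that are separate from search, site owners can already opt out via robots.txt -- the problem Prince describes is that Googlebot is not one of them. A sketch of the asymmetry (GPTBot, CCBot, and Google-Extended are real crawler tokens; note that Google-Extended governs use of content for Gemini training but does not affect AI Overviews, which draw on the ordinary Googlebot crawl):

```text
# Block dedicated AI-training crawlers without touching search indexing
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Opt out of Gemini model training -- this does NOT remove content
# from AI Overviews, which are fed by the regular Googlebot crawl
User-agent: Google-Extended
Disallow: /

# Blocking Googlebot itself would also drop the site from Google Search
User-agent: Googlebot
Allow: /
```

This is exactly the bind Prince points to: there is no token that keeps a site in Google's search index while keeping it out of Google's AI answers.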
Television

HBO Max Botches Mad Men's 4K Debut After Streaming Wrong File Showing Visible Crewmembers (arstechnica.com) 39

HBO Max's 4K debut of Mad Men was botched after Lionsgate reportedly supplied the wrong file, resulting in a scene with visible crew members, including someone pumping a vomit hose. Ars Technica reports: Mad Men ran on the AMC channel for seven seasons from 2007 to 2015. The show had a vintage aesthetic, depicting the 1960s advertising industry in New York City. Last month, HBO Max announced it would modernize the show by debuting a 4K version. The show originally aired in SD and HD resolutions and had not been previously made available in 4K through other means, such as Blu-ray.

However, viewers were quick to spot problems with HBO Max's 4K Mad Men stream, the most egregious being visible crew members in the background of a scene. The episode was "Red in the Face" (Season 1, Episode 7), which was reportedly mislabeled. In it, Roger Sterling (John Slattery) throws up oysters. In the 4K version that was streaming on HBO Max, viewers could see someone pumping a vomit hose to make the fake puke flow.

The Hollywood Reporter, citing an anonymous source, said that the error happened because Mad Men production company Lionsgate gave HBO Max the wrong file. The publication reported that Lionsgate "was working on getting HBO Max the correct file(s)" and was readying to provide them at approximately 10 a.m. PT today. The blunder is likely to be fixed for all viewers soon. There were no problems with the HD versions of HBO Max's Mad Men stream.

Data Storage

Google's Vibe Coding Platform Deletes Entire Drive 95

A Google Antigravity user says the AI-driven "vibe coding" tool accidentally wiped his entire D: drive while trying to clear a project cache. Google says it's investigating, but the episode adds to a growing list of AI tools behaving in ways that "would get a junior developer fired," suggests The Register. From the report: We reached out to the user, a photographer and graphic designer from Greece, who asked we only identify him as Tassos M because he doesn't want to be permanently linked online to what could "become a controversy or conspiracy against Google." [...] Tassos told Antigravity to help him develop software that's useful for any photographer who has to choose a few prime shots from a mountain of snaps. He wanted the software to let him rate images, then automatically sort them into folders based on that rating.

According to his Reddit post, when Tassos figured out the AI agent had wiped his drive, he asked, "Did I ever give you permission to delete all the files in my D drive?" "No, you absolutely did not give me permission to do that," Antigravity responded. "I am horrified to see that the command I ran to clear the project cache appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."

Redditors, as they are wont to do, were quick to pounce on Tassos for his own errors, including running Antigravity in Turbo mode, which lets the agent execute commands without user input. Tassos accepted responsibility. "If the tool is capable of issuing a catastrophic, irreversible command, then the responsibility is shared -- the user for trusting it and the creator for designing a system with zero guardrails against obviously dangerous commands," he opined on Reddit.
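One simple guardrail of the kind Tassos describes is path confinement: refuse any destructive command whose target resolves outside the agent's workspace. A minimal sketch in Python (the function and example paths are hypothetical, not Antigravity's actual implementation):

```python
import os

def is_safe_to_delete(target: str, workspace: str) -> bool:
    """Allow deletion only if target resolves to a path strictly inside
    the workspace -- never the workspace root itself or anything above it."""
    target = os.path.realpath(target)        # resolve symlinks and ".."
    workspace = os.path.realpath(workspace)
    return (
        target != workspace
        and os.path.commonpath([target, workspace]) == workspace
    )

# A cache-clearing command scoped inside the project is allowed...
assert is_safe_to_delete("/projects/photo-sorter/.cache", "/projects/photo-sorter")
# ...but targeting the drive root (what happened here) is refused,
# as is deleting the workspace itself.
assert not is_safe_to_delete("/", "/projects/photo-sorter")
assert not is_safe_to_delete("/projects/photo-sorter", "/projects/photo-sorter")
```

Resolving with `realpath` before comparing matters: a naive string-prefix check can be fooled by `..` segments or symlinks pointing outside the workspace.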

As noted earlier, Tassos was unable to recover the files that Antigravity deleted. Luckily, as he explained on Reddit, most of what he lost had already been backed up on another drive. Phew. "I don't think I'm going to be using that again," Tassos noted in a YouTube video he published showing additional details of his Antigravity console and the AI's response to its mistake. Tassos isn't alone in his experience. Multiple Antigravity users have posted on Reddit to explain that the platform had wiped out parts of their projects without permission.
The Courts

Supreme Court Hears Copyright Battle Over Online Music Piracy (nytimes.com) 32

The Supreme Court appears inclined to side with Cox Communications in a major copyright case, suggesting that ISPs shouldn't be held liable for users' music piracy based solely on "mere knowledge," given the risk of forcing outages for universities, hospitals, and other large customers. The New York Times reports: Leading music labels and publishers who represent artists ranging from Bob Dylan to Beyonce sued Cox Communications in 2018, saying it had failed to terminate the internet connections of subscribers who had been repeatedly flagged for illegally downloading and distributing copyrighted music. At issue is whether providers like Cox can be held legally responsible and be required to pay steep damages -- a billion dollars or more -- if they know that customers are pirating the music but do not take sufficient steps to terminate their internet access.

Justices from across the ideological spectrum on Monday raised concerns about whether finding for the music industry could result in internet providers being forced to cut off access to large account holders such as hospitals and universities because of the illegal acts of individual users. "What is the university supposed to do in your view?" asked Justice Samuel A. Alito Jr., a conservative, suggesting it would be difficult to track down bad actors without the risk of losing service campuswide. "I just don't see how it's workable at all."

"The internet is so amorphous," added Justice Sonia Sotomayor, a liberal, saying that a single "customer" could represent tens of thousands of users, particularly in rural areas where an entire region might be considered a "customer." After nearly two hours of argument, a majority of justices seemed likely to side with Cox and to send the case back to the U.S. Court of Appeals for the Fourth Circuit for review under a stricter standard. Several justices suggested the company's "mere knowledge" of the illegal downloads was not sufficient to hold Cox liable.

Databases

'We Built a Database of 290,000 English Medieval Soldiers' (theconversation.com) 17

An anonymous reader quotes a report from the Conversation, written by authors Adrian R. Bell, Anne Curry, and Jason Sadler: When you picture medieval warfare, you might think of epic battles and famous monarchs. But what about the everyday soldiers who actually filled the ranks? Until recently, their stories were scattered across handwritten manuscripts in Latin or French and difficult to decipher. Now, our online database makes it possible for anyone to discover who they were and how they lived, fought and travelled. To shed light on the foundations of our armed services -- one of England's oldest professions -- we launched the Medieval Soldier Database in 2009. Today, it's the largest searchable online database of medieval nominal data in the world. It contains military service records giving names of soldiers paid by the English Crown. It covers the period from 1369 to 1453 and many different war zones.

We created the database to challenge assumptions about the lack of professionalism of soldiers during the Hundred Years' War and to show what their careers were really like. In response to the high interest from historians and the public (the database has 75,000 visitors per month), the resource has recently been updated. It is now sustainably hosted by GeoData, a University of Southampton research institute. We have recently added new records, taking the dataset back to the late 1350s, meaning it now contains almost 290,000 entries. [...] We hope the database will continue to grow and go on providing answers to questions about our shared military heritage. We are sure that it will unlock many previously untold stories of soldier ancestors.

Privacy

Flock Uses Overseas Gig Workers To Build Its Surveillance AI (404media.co) 12

An anonymous reader quotes a report from 404 Media: Flock, the automatic license plate reader and AI-powered camera company, uses overseas workers from Upwork to train its machine learning algorithms, with training material telling workers how to review and categorize footage, including images of people and vehicles in the United States, according to material reviewed by 404 Media that was accidentally exposed by the company. The findings bring up questions about who exactly has access to footage collected by Flock surveillance cameras and where people reviewing the footage may be based. Flock has become a pervasive technology in the US, with its cameras present in thousands of communities that cops use every day to investigate things like carjackings. Local police have also performed numerous lookups for ICE in the system.

Companies that use AI or machine learning regularly turn to overseas workers to train their algorithms, often because the labor is cheaper than hiring domestically. But the nature of Flock's business -- creating a surveillance system that constantly monitors US residents' movements -- means that footage might be more sensitive than other AI training jobs. [...] Broadly, Flock uses AI or machine learning to automatically detect license plates, vehicles, and people, including what clothes they are wearing, from camera footage. A Flock patent also mentions cameras detecting "race." The exposed panel included figures on "annotations completed" and "annotator tasks remaining in queue," with annotations being the notes workers add to reviewed footage to help train AI algorithms. Tasks include categorizing vehicle makes, colors, and types, transcribing license plates, and "audio tasks." Flock recently started advertising a feature that will detect "screaming." The panel showed workers sometimes completed thousands upon thousands of annotations over two-day periods. The exposed panel also included a list of people tasked with annotating Flock's footage. Taking those names, 404 Media found some were located in the Philippines, according to their LinkedIn and other online profiles.

Many of these people were employed through Upwork, according to the exposed material. Upwork is a gig and freelance work platform where companies can hire designers and writers or pay for "AI services," according to Upwork's website. The tipsters also pointed to several publicly available Flock presentations which explained in more detail how workers were to categorize the footage. It is not clear what specific camera footage Flock's AI workers are reviewing. But screenshots included in the worker guides show numerous images from vehicles with US plates, including in New York, Michigan, Florida, New Jersey, and California. Other images include road signs clearly showing the footage is taken from inside the US, and one image contains an advertisement for a specific law firm in Atlanta.

United States

New York Now Requires Retailers To Tell You When AI Sets Your Price (nytimes.com) 44

New York has become the first state in the nation to enact a law requiring retailers to disclose when AI and personal data are being used to set individualized prices [non-paywalled source] -- a measure that lawyers say will make algorithmic pricing "the next big battleground in A.I. regulation."

The law, enacted through the state budget, requires online retailers using personalized pricing to post a specific notice: "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA." The National Retail Federation sued to block enforcement on First Amendment grounds, arguing the required disclosure was "misleading and ominous," but federal judge Jed S. Rakoff allowed the law to proceed last month.

Uber has started displaying the notice to New York users. Spokesman Ryan Thornton called the law "poorly drafted and ambiguous" but maintained the company only considers geographic factors and demand in setting prices. At least 10 states have bills pending that would require similar disclosures or ban personalized pricing outright. California and federal lawmakers are considering complete bans.
Crime

'Crime Rings Enlist Hackers To Hijack Trucks' (msn.com) 41

It's "a complex mix of internet access and physical execution," says the chief information security officer at Cequence Security.

Long-time Slashdot reader schwit1 summarizes this article from The Wall Street Journal: By breaking into carriers' online systems, cyber-powered criminals are making off with truckloads of electronics, beverages, and other goods.

In the most recent tactics identified by cybersecurity firm Proofpoint, hackers posed as freight middlemen, posting fake loads to the boards. They slipped links with malicious software into email exchanges with bidders such as trucking companies. By clicking on the links, trucking companies unwittingly downloaded remote-access software that lets the hackers take control of their online systems.

Once inside, the hackers used the truckers' accounts to bid on real shipments, such as electronics and energy drinks, said Selena Larson, a threat researcher at Proofpoint. "They know the business," she said. "It's a very convincing full-scale identity takeover."

"The goods are likely sold to retailers or to consumers in online marketplaces," the article explains. (Though according to Proofpoint "In some cases, products are shipped overseas and sold in local markets, where proceeds are used to fund paramilitaries and global terrorists.")

"The average value of cargo thefts is increasing as organized crime groups become more discerning, preferring high-value targets such as enterprise servers and cryptocurrency mining hardware, according to risk-assessment firm Verisk CargoNet."
AI

Browser Extension 'Slop Evader' Lets You Surf the Web Like It's 2022 (404media.co) 47

"The internet is being increasingly polluted by AI generated text, images and video," argues the site for a new browser extension called Slop Evader. It promises to use Google's search API "to only return content published before Nov 30th, 2022" — the day ChatGPT launched — "so you can be sure that it was written or produced by the human hand."

404 Media calls it "a scorched earth approach that virtually guarantees your searches will be slop-free." Slop Evader was created by artist and researcher Tega Brain, who says she was motivated by the growing dismay over the tech industry's unrelenting, aggressive rollout of so-called "generative AI" — despite widespread criticism and the wider public's distaste for it. "This sowing of mistrust in our relationship with media is a huge thing, a huge effect of this synthetic media moment we're in," Brain told 404 Media, describing how tools like Sora 2 have short-circuited our ability to determine reality within a sea of artificial online junk. "I've been thinking about ways to refuse it, and the simplest, dumbest way to do that is to only search before 2022...."

Currently, Slop Evader can be used to search pre-GPT archives of seven different sites where slop has become commonplace, including YouTube, Reddit, Stack Exchange, and the parenting site MumsNet. The obvious downside to this, from a user perspective, is that you won't be able to find anything time-sensitive or current — including this very website, which did not exist in 2022. The experience is simultaneously refreshing and harrowing, allowing you to browse freely without having to constantly question reality, but always knowing that this freedom will be forever locked in time — nostalgia for a human-centric world wide web that no longer exists.
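The extension's core trick -- discarding anything published after ChatGPT's launch -- amounts to a date cutoff over search results. A sketch of that filter, under the assumption that each result carries a publication date (the data layout here is invented for illustration; Slop Evader itself works through Google's search API):

```python
from datetime import date

CHATGPT_LAUNCH = date(2022, 11, 30)  # Slop Evader's cutoff: the day ChatGPT launched

def pre_slop_only(results):
    """Keep only results published before the ChatGPT launch date."""
    return [r for r in results if r["published"] < CHATGPT_LAUNCH]

results = [
    {"url": "https://example.org/sourdough-guide", "published": date(2021, 3, 14)},
    {"url": "https://example.org/ai-listicle",     "published": date(2024, 6, 2)},
    {"url": "https://example.org/forum-thread",    "published": date(2022, 11, 29)},
]

# Only the two pre-launch results survive the filter.
assert [r["url"] for r in pre_slop_only(results)] == [
    "https://example.org/sourdough-guide",
    "https://example.org/forum-thread",
]
```

The cutoff is deliberately blunt: as the article notes, it also discards everything human-written since late 2022, which is the tool's point as much as its limitation.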

Of course, the tool's limitations are part of its provocation. Brain says she has plans to add support for more sites, and release a new version that uses DuckDuckGo's search indexing instead of Google's. But the real goal, she says, is prompting people to question how they can collectively refuse the dystopian, inhuman version of the internet that Silicon Valley's AI-pushers have forced on us... With enough cultural pushback, Brain suggests, we could start to see alternative search engines like DuckDuckGo adding options to filter out search results suspected of having synthetic content (DuckDuckGo added the ability to filter out AI images in search earlier this year)... But no matter what form AI slop-refusal takes, it will need to be a group effort.

Businesses

AI Helps Drive Record $11.8B in Black Friday Online Spending (reuters.com) 52

Earlier this month MasterCard noted that even Walmart now allows its customers to make purchases through ChatGPT. And after polling more than 4,000 consumers in the U.S., Canada, U.K., and UAE, they found "more than four in 10 consumers already use AI tools to help them shop, including 61% of Gen Z and 57% of millennials." Many (50% of Gen Z and 49% of millennials) say they'd even let AI handle all their gift-buying if it meant avoiding stress. Younger shoppers trust AI's taste, with 51% of Gen Z and 55% of millennials relying on it to deliver unique and thoughtful recommendations (sometimes even more than they trust themselves). The most popular uses include getting personalized product recommendations, confirming the best deal before purchasing, and summarizing thousands of reviews instantly. The bottom line: Shoppers are embracing AI as their new personal assistant — one that knows their budget, style, and patience level...

If the 2025 holiday shopper could be summed up in one word, it's intentional. They're planning earlier, spending wiser and using technology to make every dollar and every gift count.

The first figures are now in for the traditional "Black Friday" shopping day after Thanksgiving, and U.S. shoppers "spent a record $11.8 billion online," reports Reuters, "up 9.1% from 2024 on the year's biggest shopping day, according to Adobe Analytics, which tracks 1 trillion visits that shoppers make to online retail websites..."

And sure enough, this year shoppers were helped by AI: AI-powered shopping tools helped drive a surge in U.S. online spending on Black Friday, as shoppers bypassed crowded stores and turned to chatbots to compare prices and secure discounts amid concerns about tariff-driven price hikes... The AI-driven traffic to U.S. retail sites soared 805% compared to last year, Adobe said, when artificial intelligence tools such as Walmart's Sparky or Amazon's Rufus had not yet been launched. "Consumers are using new tools to get to what they need faster," said Suzy Davidkhanian, an analyst at eMarketer. "Gift giving can be stressful, and LLMs (large language models) make the discovery process feel quicker and more guided..." Globally, AI and agents influenced $14.2 billion in online sales on Black Friday, of which $3 billion came from the U.S. alone, according to software firm Salesforce.
There's another reason shoppers turned to AI. 2025's Black Friday arrived "amid tighter budgets, unemployment nearing a four-year high, U.S. consumer confidence sagging to a seven-month low and price tags that have shoppers watching every dollar," according to the article: Discount rates also remained flat when compared to 2024, with AI helping shoppers discover the best deals, and an increase in the price tags made deeper discounts difficult for retailers... Order volumes fell 1% as average selling prices rose 7%. Consumers also purchased fewer items at checkout, with units per transaction falling 2% on a year-over-year basis, Salesforce said.

The spending surge sets the stage for an even bigger Cyber Monday, projected to drive $14.2 billion in sales, up 6.3% on a year-over-year basis and the largest online shopping day of the year, Adobe said. Electronics are expected to see the deepest discounts on Cyber Monday, reaching 30% off list prices, along with strong deals on apparel and computers, Adobe said.

GNU is Not Unix

Hundreds of Free Software Supporters Tuned in For 'FSF40' Hackathon (fsf.org) 10

The Free Software Foundation describes how "after months of preparation and excitement, we finally came together on November 21 for a global online hackathon" to support free software projects and "put a spotlight on the difficult and often thankless work that free software hackers carry out..."

Based on how many of you dropped in over the weekend and were incredibly engaged in the important work that is improving free software, either as a spectator or as a participant, this goal was accomplished. And it's all thanks to you!

Friday started a little rocky with a datacenter outage affecting most FSF services. Participants spread out to work on six different free software projects over forty-eight hours as our tech team worked to restore all FSF sites with the help and support of the community. Over three hundred folks were tuned in at a time, some to participate in the hackathon and others to follow the progress being made. As a community, we got a lot done over the weekend...

It was amazing to see so many of you take a little (or a lot of!) time out of your busy schedules to improve free software, and we're incredibly grateful for each and every one of you. It really energizes us and shows us how much we can accomplish when we work together over even just a couple days. Not only was this a fantastic sight to see because of the work we got done, but it was also a very fitting way to conclude our fortieth anniversary celebration events. Free software has been and always will be a community effort, one that continues to get better and better because of the dedicated developers, contributors, and users who ensure its existence. Thank you for celebrating forty years of the FSF and fighting for a freer future for us all.

Music

Viral Song Created with Suno's genAI Removed From Streaming Platforms, Re-Released With Human Vocals (yahoo.com) 27

An EDM song by the British group Haven ran into trouble in October after the group shared clips of the upcoming song "I Run" on TikTok.

The song "was an overnight viral sensation online," writes Digital Music News — racking up millions of plays "even before it hit streaming services." (Although the Washington Post notes that "Record labels and TikTok users began questioning whether 'I Run' used an AI deepfake, modeled off British R&B singer Jorja Smith, for the vocals.")

Digital Music News picks up the story: The artist says he used his own voice to record the vocals, and then ran it through layers of processing and filtering to turn it into the female-sounding voice heard in the track. However, that filtering also included the use of the controversial genAI platform Suno — and that's what complicates things... [The article says later that Suno "is currently in the middle of a blockbuster lawsuit with the Big Three major labels over allegations of widespread copyright infringement of sound recordings used during the AI model training process."]

Meanwhile, the song was rapidly amassing listenership. It soared to #11 on the U.S. Spotify chart and #25 on Spotify globally. Videos using the song continued going viral on TikTok and Instagram, including one in which rapper Offset had apparently played the song during a Boiler Room set, which later turned out to be falsified. And then, as quickly as it appeared, "I Run" was taken down from streaming services, including Spotify and Apple Music. That was due, in part, to numerous takedown notices from The Orchard, the label to which Jorja Smith is signed, as well as the RIAA and IFPI. The takedown notices alleged various issues with the track, including the "misrepresentation" of another artist, as well as copyright infringement.

As a result, the song has also been withheld from the Billboard charts, including the Hot 100, on which it had been predicted to debut this week before the controversy. Billboard points out that it "reserves the right to withhold or remove titles from appearing on the charts that are known to be involved in active legal disputes related to copyright infringement that may extend to the deletion of such content on digital service providers."

The song itself has now been re-released with an all-human vocal track. But going forward will the music industry ever work with AI platforms? The Washington Post reports: "I Run" has taken off as record labels remain unsure of the extent to which they should welcome generative AI programs such as Suno or Udio into the industry. After the two AI music companies began growing in popularity, the three major labels — Sony Music, Warner Music Group and Universal Music Group — filed lawsuits against Suno and Udio, claiming that the AI companies have used the labels' sound recordings to train their model.

Since then, UMG and Warner have reached agreements to work with Udio, ending their litigation... It comes shortly after all three major labels licensed their catalogue to Klay, a music streaming start-up that allows users to adjust songs using artificial intelligence. Major licensing organizations such as ASCAP and BMI shared that they would register songs that were partially AI-generated -- but not fully generated ones.

Haven appears to present an uncomfortable edge case. While some AI-generated songs that sound broadly like other artists have been allowed to remain on streaming platforms, the voice in "I Run" appears to have been deemed too duplicative for comfort.

Piracy

Greek Cybercrime Unit Shuts Down IPTV Pirates, 68 End Users Face Fines 14

Greek authorities shut down an IPTV piracy operation on Santorini, arresting a reseller and referring 68 end users for prosecution. TorrentFreak reports: A new legal framework to tackle online infringement in Greece went live just a couple of months ago, and reports of prosecutions are already coming in. In early September, it was reported that a man from Sparta faces prosecution and a fine of up to 6,000 euros for two IPTV piracy offenses. The suspect, reportedly a cafe owner, was targeted at his workplace on a Saturday, allegedly in front of customers. One customer told local media they believed complaints of the cafe engaging in "unfair competition" preceded the untimely visit.

The Cybercrime Prosecution Directorate launched their operation in the early hours of November 19. The Athens-based unit targeted a network that sold illicit access to premium pay-TV via IPTV subscriptions. The raid, conducted on Santorini, one of the Cyclades islands, resulted in the arrest of a 48-year-old who, from police reports, appears to be a reseller for a larger network. Customers were reportedly charged 50 euros for a three-month subscription or 100 euros for six months. Sales and management were handled by the 48-year-old via an online platform known as a 'panel,' while remote and in-person support were available as part of the service.

The impact of the raid was visible on the islands, locals said. According to a local report, hundreds of users in hotels, cafes, and residences on Santorini and beyond, found themselves suddenly without access to cheap TV. Apparently few areas were untouched by the disruption, such was local reliance on illegal streams.
AI

More Than Half of New Articles On the Internet Are Being Written By AI 61

An anonymous reader quotes a report from the Conversation: The line between human and machine authorship is blurring, particularly as it's become increasingly difficult to tell whether something was written by a person or AI. Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence. [...]

It's important to clarify what's meant by "online content," the phrase used in the Graphite study, which analyzed over 65,000 randomly selected articles of at least 100 words on the web. These can include anything from peer-reviewed research to promotional copy for miracle supplements. A closer reading of the Graphite study shows that the AI-generated articles consist largely of general-interest writing: news updates, how-to guides, lifestyle posts, reviews and product explainers.
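For scale: with 65,000 sampled articles, the statistical uncertainty on that 50% figure is small. A quick check of the 95% margin of error for a sample proportion (the standard formula; the Graphite study's actual methodology and error analysis may differ):

```python
import math

n = 65_000  # articles sampled in the Graphite study
p = 0.5     # observed share of AI-generated articles (also worst case for variance)

# 95% margin of error for a sample proportion: z * sqrt(p * (1 - p) / n)
margin = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"margin of error: ±{margin:.2%}")  # well under half a percentage point
assert margin < 0.005
```

In other words, sampling error alone cannot explain the headline result; the live questions are about how "AI-generated" was detected, not the sample size.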

The primary economic purpose of this content is to persuade or inform, not to express originality or creativity. Put differently, AI appears to be most useful when the writing in question is low-stakes and formulaic: the weekend-in-Rome listicle, the standard cover letter, the text produced to market a business. A whole industry of writers -- mostly freelance, including many translators -- has relied on precisely this kind of work, producing blog posts, how-to material, search engine optimization text and social media copy. The rapid adoption of large language models has already displaced many of the gigs that once sustained them.

The dramatic loss of this work points toward another issue raised by the Graphite study: the question of authenticity, not only in identifying who or what produced a text, but also in understanding the value that humans attach to creative activity. How can you distinguish a human-written article from a machine-generated one? And does that ability even matter? Over time, that distinction is likely to grow less significant, particularly as more writing emerges from interactions between humans and AI...
"If you set aside the more apocalyptic scenarios and assume that AI will continue to advance -- perhaps at a slower pace than in the recent past -- it's quite possible that thoughtful, original, human-generated writing will become even more valuable," writes author Francesco Agnellini, in closing.

"Put another way: The work of writers, journalists and intellectuals will not become superfluous simply because much of the web is no longer written by humans."
Privacy

Google Maps Will Let You Hide Your Identity When Writing Reviews (pcmag.com) 37

An anonymous reader quotes a report from PCMag: Four new features are coming to Google Maps, including a way to hide your identity in reviews. Maps will soon let you use a nickname and select an alternative profile picture for online reviews, so you can rate a business without linking it to your full name and Google profile photo. Google says it will monitor for "suspicious and fake reviews," and every review is still associated with an account on Google's backend, which it believes will discourage bad actors.

Look for a new option under Your Profile that says Use a custom name & picture for posting. You'll then be able to pick an illustration to represent you and add a nickname. Google didn't explain why it is introducing anonymous reviews; it pitched the idea as a way to be a business's "Secret Santa." Some users are nervous about publicly posting reviews for local businesses, since reviews can be used to track their location or movements. The feature may encourage more people to contribute honest feedback to the platform, for better or worse.
Further reading: Gemini AI To Transform Google Maps Into a More Conversational Experience
Google

Singapore Orders Apple, Google To Prevent Government Spoofing on Messaging Platforms (reuters.com) 8

An anonymous reader shares a report: Singapore's police have ordered Apple and Google to prevent the spoofing of government agencies on their messaging platforms, the home affairs ministry said on Tuesday. The order under the nation's Online Criminal Harms Act came after the police observed scams on Apple's iMessage and Google Messages purporting to be from companies such as the local postal service SingPost. While government agencies have registered with a local SMS registry so only they can send messages with the "gov.sg" name, this does not currently apply to the iMessage and Google Messages platforms.
Security

Hacker Conference Installed a Literal Antivirus Monitoring System (wired.com) 49

An anonymous reader quotes a report from Wired: Hacker conferences -- like all conventions -- are notorious for giving attendees a parting gift of mystery illness. To combat "con crud," New Zealand's premier hacker conference, Kawaiicon, quietly launched a real-time, room-by-room carbon dioxide monitoring system for attendees. To get the system up and running, event organizers installed DIY CO2 monitors throughout the Michael Fowler Centre venue before conference doors opened on November 6. Attendees were able to check a public online dashboard for clean air readings for session rooms, kids' areas, the front desk, and more, all before even showing up. "It's ALMOST like we are all nerds in a risk-based industry," the organizers wrote on the convention's website. "What they did is fantastic," Jeff Moss, founder of the Defcon and Black Hat security conferences, told WIRED. "CO2 is being used as an approximation for so many things, but there are no easy, inexpensive network monitoring solutions available. Kawaiicon building something to do this is the true spirit of hacking." [...]

Kawaiicon's work began one month before the conference. In early October, organizers deployed a small fleet of 13 RGB Matrix Portal Room CO2 Monitors, an ambient carbon dioxide monitor DIY project adapted from US electronics and kit company Adafruit Industries. The monitors were connected to an Internet-accessible dashboard with live readings, daily highs and lows, and data history that showed attendees in-room CO2 trends. Kawaiicon tested its CO2 monitors in collaboration with researchers from the University of Otago's public health department. The Michael Fowler Centre is a spectacular blend of Scandinavian brutalism and interior woodwork designed to enhance sound and air, including two grand pou -- carved Māori totems -- next to the main entrance that rise through to the upper foyers. Its cathedral-like acoustics posed a challenge to Kawaiicon's air-hacking crew, which they solved by placing the RGB monitors in stereo. There were two on each level of the Main Auditorium (four total), two in the Renouf session space on level 1, plus monitors in the daycare and Kuracon (kids' hacker conference) areas. To top it off, monitors were placed in the Quiet Room, at the Registration Desk, and in the Green Room.

Kawaiicon's attendees could quickly check the conditions before they arrived and decide how to protect themselves accordingly. At the event, WIRED observed attendees checking CO2 levels on their phones, masking and unmasking in different conference areas, and watching a display of all room readings on a dashboard at the registration desk. In each conference session room, small wall-mounted monitors displayed stoplight colors showing immediate conditions: green for safe, orange for risky, and red to show the room had high CO2 levels, the top level for risk. Colorful custom-made Kawaiicon posters by New Zealand artist Pepper Raccoon placed throughout the Michael Fowler Centre displayed a QR code, making the CO2 dashboard a tap away, no matter where they were at the conference.
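The stoplight display described above boils down to mapping a ppm reading onto three bands. The article doesn't give Kawaiicon's actual cutoffs, so the thresholds below are illustrative assumptions based on common indoor air-quality guidance, not the conference's firmware:

```python
def co2_status(ppm: int) -> str:
    """Map an ambient CO2 reading (ppm) to a stoplight color.

    Thresholds are assumed for illustration; the real Kawaiicon
    monitors may use different cutoffs.
    """
    if ppm < 800:     # assumed "safe" band (well-ventilated room)
        return "green"
    if ppm < 1200:    # assumed "risky" band (ventilation lagging)
        return "orange"
    return "red"      # assumed high-risk band (crowded, stale air)


if __name__ == "__main__":
    # Sample readings a wall-mounted monitor might report
    for reading in (450, 950, 1600):
        print(reading, co2_status(reading))
```

The same function could feed both the in-room display and the public dashboard, since each is just a different view of the current band.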
Resources, parts lists, and assembly guides can be found here.
AI

'We Could've Asked ChatGPT': UK Students Fight Back Over Course Taught By AI (theguardian.com) 55

An anonymous reader shared this report from the Guardian: James and Owen were among 41 students who took a coding module at the University of Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible".

"If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer recorded as a part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course. This year, the university uploaded a policy statement to the course website appearing to justify the use of AI, laying out "a framework for academic professionals leveraging AI automation" in scholarly work and teaching...

For students, AI teaching appears to be less transformative than it is demoralising. In the US, students post negative online reviews about professors who use AI. In the UK, undergraduates have taken to Reddit to complain about their lecturers copying and pasting feedback from ChatGPT or using AI-generated images in courses.

"I feel like a bit of my life was stolen," James told the Guardian (which also quotes an unidentified student saying they felt "robbed of knowledge and enjoyment".) But the article also points out that a survey last year of 3,287 higher-education teaching staff by edtech firm Jisc found that nearly a quarter were using AI tools in their teaching.
Music

Napster Said It Raised $3 Billion From a Mystery Investor. But Now the 'Investor' and 'Money' Are Gone (forbes.com) 41

An anonymous reader shared this report from Forbes: On November 20, at approximately 4 p.m. Eastern time, Napster held an online meeting for its shareholders; an estimated 700 of roughly 1,500 -- including employees, former employees and individual investors -- tuned in. That's when its CEO John Acunto told everyone he believed that the never-identified big investor -- whom the company had insisted put in $3.36 billion at a $12 billion valuation in January, which would have made it one of the year's biggest fundraises -- was not going to come through.

In an email sent out shortly after, the company told existing investors that some would get a bigger percentage of the company due to the canceled shares, and went on to describe itself as a "victim of misconduct," adding that it was "assisting law enforcement with their ongoing investigations." As for the promised tender offer, which would have allowed shareholders to cash out, that too was called off. "Since that investor was also behind the potential tender, we also no longer believe that will occur," the company wrote in the email.

At this point it seems unlikely that getting bigger stakes in the business will make any of the investors too happy. The company had been stringing its employees and investors along for nearly a year with ever-changing promises of an impending cash infusion and chances to sell their shares in a tender offer that would change everything. In fact, it was the fourth time since 2022 that they had been told they could soon cash out via a tender offer, and the fourth time the potential deal fell through. Napster spokesperson Gillian Sheldon said certain statements about the fundraise "were made in good faith based on what we understood at the time. We have since uncovered indications of misconduct that suggest the information provided to us then was not accurate."

The article notes America's Department of Justice has launched an investigation (in which Napster is not a target), while the Securities and Exchange Commission has a separate ongoing investigation from 2022 into Napster's scrapped reverse merger.

While Napster announced it had been acquired for $207 million by a tech company named Infinite Reality, Forbes says that company faced "a string of lawsuits from creditors alleging unpaid bills, a federal lawsuit to enforce compliance with an SEC subpoena (now dismissed) and exaggerated claims about the extent of their partnerships with Manchester City Football Club and Google." The company also touted "top-tier" investors who never directly invested in the firm, along with the anonymous $3 billion investment, which its spokesperson told Forbes in March was in "an Infinite Reality account and is available to us" and which the company was "actively leveraging..."

And by the end, "Napster appears to have been scrambling to raise cash to keep the lights on, working with brokers and investment advisors including a few who had previously gotten into trouble with regulators.... If it turns out that Napster knew the fundraise wasn't happening and it benefited from misrepresenting itself to investors or acquirees, it could face much bigger problems. That's because doing so could be considered securities fraud."

Slashdot Top Deals