Social Networks

Social Media Platforms Need To Stop Never-Ending Scrolling, UK's Starmer Says (reuters.com) 54

UK Prime Minister Keir Starmer said social media platforms should remove addictive infinite-scroll features for young users as Britain considers new child-safety measures. "We're consulting on whether there should be a ban for under 16s," Starmer told BBC Radio. "But I think equally important, the addictive scrolling mechanisms are really problematic to my mind. They need to go." Reuters reports: Britain, like other countries, is considering restricting access to social media for children and it is testing bans, curfews and app time limits to see how they impact sleep, family life and schoolwork. Social media companies had designed algorithms that were intended to encourage addictive behavior, and parents were asking the government to intervene, Starmer said.

[...] More than 45,000 people had already responded to its consultation on children's online safety, the UK government said, adding that there was still time to contribute before a deadline of May 26. "We want to hear from mums and dads who are worried about the amount of time their children spend online and what they are viewing," Technology Secretary Liz Kendall said on Monday. "We want to hear from teenagers who know better than anyone what it is like to grow up in the age of social media. And we want to hear from families about their views on curfews, AI chatbots and addictive features."

AI

Neuroscientist's AI-Powered Startup Aims To Transform Human Cognition With Perfect, Infinite Memory (msn.com) 75

Bloomberg describes Gabriel Kreiman as a "former Harvard Medical School professor whose research has focused on the intersection of AI and neuroscience."

"For the past 20 years, I studied how the human brain stores and retrieves memories," Kreiman writes on LinkedIn. And now "My co-founder Spandan Madan and I built a new algorithm to endow humans with perfect and infinite memory." Engramme connects to your "memorome," i.e., your entire digital life. Large Memory Models work in the same way that your brain encodes and retrieves information. Then memories are recalled automatically — no searching, no prompting, no hallucinations. [The startup's web site promises "omniscient AI to augment human cognition."]

We have built the memory layer for EVERY app. Read our manifesto about augmenting human cognition. ["We are not just building software; we are enabling a complete transformation of human cognition. When the friction disappears between needing a piece of information and recalling it, the nature of thought itself changes. This synergy between biological intuition and digital precision will be the most disruptive force in modern history, fundamentally reshaping every profession... We are dedicated to creating a world where everyone has the power to remember everything they have ever learned, seen, or felt."]

Welcome to a new future where you can remember everything. This is the MEMORY SINGULARITY: after 300,000 years, this is the moment that humans stop forgetting.

Bloomberg reports that the startup (spun out of a lab at Harvard) is "in talks with investors to raise about $100 million, according to people familiar with the matter."

Social Networks

Will Social Media Change After YouTube and Meta's Court Defeat? (theverge.com) 54

Yes, this week YouTube and Meta were found negligent in a landmark case about social media addiction.

But "it's still far from certain what this defeat will change," argues The Verge's senior tech and policy editor, "and what the collateral damage could be." If these decisions survive appeal — which isn't certain — the direct outcome would be multimillion-dollar penalties. Depending on the outcome of several more "bellwether" cases in Los Angeles, a much larger group settlement could be reached down the road... For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don't change their business practices...

The best-case outcome of all this has been laid out by people like Julia Angwin, who wrote in The New York Times that companies should be pushed to change "toxic" features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize "shocking and crude" content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks that could be sued for letting users post and see First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinged partly on arguing that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users' privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.

Blake Reid, a professor at Colorado Law, is more circumspect. "It's hard right now to forecast what's going to happen," Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for "cold, calculated" ways to avoid legal liability with the minimum possible disruption, not fundamentally rethink their business models. "There are obviously harms here and it's pretty important that the tort system clocked those harms" in the recent cases, he told The Verge. "It's just that what comes in the wake of them is less clear to me."

The article also includes this prediction from legal blogger/Section 230 expert Eric Goldman. "There will be even stronger pushes to restrict or ban children from social media." Goldman argues, "This hurts many subpopulations of minors, ranging from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations."

Social Networks

Meta and YouTube Found Negligent in Landmark Social Media Addiction Case 113

A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. "Meta is responsible for 70 percent of that cost and YouTube for the remainder," notes The New York Times. "TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started." From the report: The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google's YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression.

The jury of seven women and five men will deliberate further to decide what punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.'s case -- one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat -- was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products.

The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.

Social Networks

The EU Moves To Kill Infinite Scrolling 37

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children.

The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.

Social Networks

Europe Accuses TikTok of 'Addictive Design' and Pushes for Change (nytimes.com) 36

TikTok's endless scroll of irresistible content, tailored for each person's tastes by a well-honed algorithm, has helped the service become one of the world's most popular apps. Now European Union regulators say those same features that made TikTok so successful are likely illegal. From a report: On Friday, the regulators released a preliminary decision that TikTok's infinite scroll, auto-play features and recommendation algorithm amount to an "addictive design" that violated European Union laws for online safety. The service poses potential harm to the "physical and mental well-being" of users, including minors and vulnerable adults, the European Commission, the 27-nation bloc's executive branch, said in a statement.

The findings suggest TikTok must overhaul the core features that made it a global phenomenon, or risk major fines. European officials said it was the first time that a legal standard for social media addictiveness had been applied anywhere in the world. "TikTok needs to change the basic design of its service," the European Commission said in a statement.

Social Networks

Internal Messages May Doom Meta At Social Media Addiction Trial (arstechnica.com) 54

An anonymous reader quotes a report from Ars Technica: This week, the first high-profile lawsuit -- considered a "bellwether" case that could set meaningful precedent in the hundreds of other complaints -- goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M., who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleged triggered depression, anxiety, self-harm, and suicidality. TikTok and Snapchat were also targeted by the lawsuit, but both have settled. The Snapchat settlement came last week, while TikTok settled on Tuesday just hours before the trial started, Bloomberg reported. For now, YouTube and Meta remain in the fight. K.G.M. allegedly started watching YouTube when she was 6 years old and joined Instagram by age 11. She's fighting to claim untold damages -- including potentially punitive damages -- to help her family recoup losses from her pain and suffering and to punish social media companies and deter them from promoting harmful features to kids. She also wants the court to require prominent safety warnings on platforms to help parents be aware of the risks. [...]

To win, K.G.M.'s lawyers will need to "parcel out" how much harm is attributed to each platform, due to design features, not the content that was targeted to K.G.M., Clay Calvert, a technology policy expert and senior fellow at a think tank called the American Enterprise Institute, wrote. Internet law expert Eric Goldman told The Washington Post that detailing those harms will likely be K.G.M.'s biggest struggle, since social media addiction has yet to be legally recognized, and tracing who caused what harms may not be straightforward. However, Matthew Bergman, founder of the Social Media Victims Law Center and one of K.G.M.'s lawyers, told the Post that K.G.M. is prepared to put up this fight. "She is going to be able to explain in a very real sense what social media did to her over the course of her life and how in so many ways it robbed her of her childhood and her adolescence," Bergman said.

The research is unclear on whether social media is harmful for kids or whether social media addiction exists, Tamar Mendelson, a professor at Johns Hopkins Bloomberg School of Public Health, told the Post. And so far, research only shows a correlation between Internet use and mental health, Mendelson noted, which could doom K.G.M.'s case and others'. However, social media companies' internal research might concern a jury, Bergman told the Post. On Monday, the Tech Oversight Project, a nonprofit working to rein in Big Tech, published a report analyzing recently unsealed documents in K.G.M.'s case that supposedly provide "smoking-gun evidence" that platforms "purposefully designed their social media products to addict children and teens with no regard for known harms to their wellbeing" -- while putting increased engagement from young users at the center of their business models.

Most of the unsealed documents came from Meta. An internal email shows Mark Zuckerberg decided Meta's top strategic priority was getting teens "locked in" to Meta's family of apps. Another damning document discusses allowing "tweens" to use a private mode inspired by fake Instagram accounts ("finstas"). The same document includes an admission that internal data showed Facebook use correlated with lower well-being.

Internal communications showed Meta seemingly bragging that "teens can't switch off from Instagram even if they want to" and an employee declaring, "oh my gosh yall IG is a drug," likening all social media platforms to "pushers."

United Kingdom

UK Mulls Australia-Like Social Media Ban For Users Under 16 (engadget.com) 25

The UK government has launched a public consultation on whether to ban social media use for children under 16, drawing inspiration from Australia's recently enacted age-based restrictions. "It would also explore how to enforce that limit, how to limit tech companies from being able to access children's data and how to limit 'infinite scrolling,' as well as access to addictive online tools," reports Engadget. "In addition to seeking feedback from parents and young people themselves, the country's ministers are going to visit Australia to see the effects of the country's social media ban for kids, according to Financial Times."

United States

The Rise and Fall of the American Monoculture (wsj.com) 66

The American monoculture -- the era when three television networks, seven movie studios, and a handful of record labels determined virtually everything the country watched and heard -- is collapsing under the weight of algorithmic recommendation engines and infinite streaming options. An estimated 200 million tickets were sold for "Gone With the Wind" in 1939 when the U.S. population was 130 million; more than 100 million people watched the M*A*S*H finale in 1983.

Only three American productions grossed more than $1 billion in 2025, down from nine in 2019. "That broad experience has become a more difficult thing for us studio people to manufacture," said Donna Langley, chairman of NBCUniversal Entertainment. "The audience wants a much better value for their money."

YouTube became the most popular video platform on televisions not by having the hottest shows but by having something for everyone. The internet broke Hollywood's hold on distribution; anyone can now stream to the same devices Disney and Netflix use.

Social Networks

'NY Orders Apps To Lie About Social Media Addiction, Will Lose In Court' (techdirt.com) 38

New York Governor Kathy Hochul has signed S4505, a law that requires websites to display warnings claiming that features like algorithmic feeds, push notifications, infinite scroll, like counts, and autoplay cause addiction -- despite, as TechDirt argues, the absence of scientific consensus supporting such claims.

State Senator Andrew Gounardes sponsored the legislation. The law's constitutional footing appears precarious. Courts have already rejected nearly identical compelled-speech schemes, most notably in the Texas pornography age-verification case that reached the Supreme Court. The Fifth Circuit, in that case, refused to uphold mandatory health warnings about pornography, ruling that such public health claims were "too contentious and controversial to receive Zauderer scrutiny" -- the legal standard that sometimes permits government-mandated disclosures.

The science around social media's purported addictiveness is even more disputed than the pornography research the Fifth Circuit rejected. Hochul's signing statement asserts that studies link increased social media use to anxiety and depression, but researchers in the field note these studies demonstrate correlation rather than causation. Some experts have suggested the causal relationship may run in the opposite direction: teenagers struggling with mental health issues turn to social media for community and coping mechanisms. The law's broad definitions could sweep in far more than major platforms like Facebook and TikTok. News sites, recipe apps, fitness trackers, and email clients could theoretically face enforcement if they employ the targeted features. New York's Attorney General holds sole authority to grant exemptions.

Facebook

You Can't Trust Your Eyes To Tell You What's Real Anymore, Says Instagram Head (theverge.com) 66

Instagram head Adam Mosseri closed out 2025 by acknowledging what many have long suspected: the era of trusting photographs as accurate records of reality is over, and the platform he runs will need to fundamentally adapt to an age of "infinite synthetic content."

In a slideshow posted to Instagram, Mosseri wrote that for most of his life he could safely assume photographs or videos were largely accurate captures of moments that happened, adding that this is clearly no longer the case. He predicted a shift from assuming what we see is real by default to starting with skepticism and paying attention to who is sharing something and why.

Social Networks

New York To Require Social Media Platforms To Display Mental Health Warnings (reuters.com) 37

Social media platforms with infinite scrolling, auto-play and algorithmic feeds will be required to display warning labels about their potential harm to young users' mental health under a new law, New York Governor Kathy Hochul announced on Friday. From a report: "Keeping New Yorkers safe has been my top priority since taking office, and that includes protecting our kids from the potential harms of social media features that encourage excessive use," Hochul said in a statement.

This month Australia imposed a social media ban for children under 16. New York joins states like California and Minnesota that have similar social media laws. The New York law includes platforms that offer "addictive feeds," auto-play or infinite scroll, according to the legislation. The law applies to conduct occurring partly or wholly in New York but not when the platform is accessed by users physically outside the state.

The Almighty Buck

What Happens When an 'Infinite-Money Machine' Unravels 78

Michael Saylor's software company Strategy, formerly known as MicroStrategy, built a financial model that some observers called an "infinite-money machine" by stockpiling hundreds of thousands of bitcoins and issuing stock and debt to buy more, but that machine appears to be breaking down. The company's stock peaked above $450 in mid-July and ended November at $177.18, a 60% decline. Bitcoin fell only 25% over the same period. The gap between Strategy's market cap and the value of its bitcoin holdings has nearly vanished.

At one point last week, the company's market value dipped below the value of its bitcoins after accounting for debt. Strategy announced it had built a $1.4 billion reserve by selling more stock to cover required dividend payments to preferred shareholders over the next twelve months. The company also disclosed it might sell some of its coins if bitcoin's value continues to fall, a reversal from Saylor's February tweet declaring "Never sell your Bitcoin." Professional short seller Jim Chanos, who had questioned the strategy's sustainability, told Sherwood he made money by shorting the stock and buying bitcoins.
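The comparison the article keeps making, market capitalization against the value of the bitcoin hoard net of debt, is often tracked as an "mNAV" ratio. A minimal sketch with hypothetical round numbers (these are illustrative figures, not Strategy's actual market cap, holdings, or debt):

```python
def mnav(market_cap: float, btc_held: float, btc_price: float, debt: float) -> float:
    """Market cap divided by bitcoin net asset value (holdings minus debt)."""
    net_asset_value = btc_held * btc_price - debt
    return market_cap / net_asset_value

# Hypothetical round numbers, for illustration only.
ratio = mnav(market_cap=60e9, btc_held=650_000, btc_price=100_000, debt=8e9)
print(f"mNAV = {ratio:.2f}")  # above 1.0: stock trades at a premium to its net bitcoin value
```

When that ratio falls toward 1.0, issuing new stock to buy more bitcoin no longer adds value per share, which is the sense in which the "machine" stalls.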

Music

Napster Said It Raised $3 Billion From a Mystery Investor. But Now the 'Investor' and 'Money' Are Gone (forbes.com) 41

An anonymous reader shared this report from Forbes: On November 20, at approximately 4 p.m. Eastern time, Napster held an online meeting for its shareholders; an estimated 700 of roughly 1,500 including employees, former employees and individual investors tuned in. That's when its CEO John Acunto told everyone he believed that the never-identified big investor — who the company had insisted put in $3.36 billion at a $12 billion valuation in January, which would have made it one of the year's biggest fundraises — was not going to come through.

In an email sent out shortly after, Napster told existing investors that some would get a bigger percentage of the company, due to the canceled shares, and went on to describe itself as a "victim of misconduct," adding that it was "assisting law enforcement with their ongoing investigations." As for the promised tender offer, which would have allowed shareholders to cash out, that too was called off. "Since that investor was also behind the potential tender, we also no longer believe that will occur," the company wrote in the email.

At this point it seems unlikely that getting bigger stakes in the business will make any of the investors too happy. The company had been stringing its employees and investors along for nearly a year with ever-changing promises of an impending cash infusion and chances to sell their shares in a tender offer that would change everything. In fact, it was the fourth time since 2022 they had been told they could soon cash out via a tender offer, and the fourth time the potential deal fell through. Napster spokesperson Gillian Sheldon said certain statements about the fundraise "were made in good faith based on what we understood at the time. We have since uncovered indications of misconduct that suggest the information provided to us then was not accurate."

The article notes America's Department of Justice has launched an investigation (in which Napster is not a target), while the Securities and Exchange Commission has a separate ongoing investigation from 2022 into Napster's scrapped reverse merger.

While Napster announced they'd been acquired for $207 million by a tech company named Infinite Reality, Forbes says that company faced "a string of lawsuits from creditors alleging unpaid bills, a federal lawsuit to enforce compliance with an SEC subpoena (now dismissed) and exaggerated claims about the extent of their partnerships with Manchester City Football Club and Google. The company also touted 'top-tier' investors who never directly invested in the firm, and its anonymous $3 billion investment that its spokesperson told Forbes in March was in 'an Infinite Reality account and is available to us' and that they were 'actively leveraging' it..."

And by the end, "Napster appears to have been scrambling to raise cash to keep the lights on, working with brokers and investment advisors including a few who had previously gotten into trouble with regulators.... If it turns out that Napster knew the fundraise wasn't happening and it benefited from misrepresenting itself to investors or acquirees, it could face much bigger problems. That's because doing so could be considered securities fraud."

AI

While Meta Crawls the Web for AI Training Data, Bruce Ediger Pranks Them with Endless Bad Data (bruceediger.com) 43

From the personal blog of interface expert Bruce Ediger: Early in March 2025, I noticed that a web crawler with a user agent string of

meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)

was hitting my blog's machine at an unreasonable rate.

I followed the URL and discovered this is what Meta uses to gather premium, human-generated content to train its LLMs. I found the rate of requests to be annoying.

I already have a PHP program that creates the illusion of an infinite website. I decided to answer any HTTP request that had "meta-externalagent" in its user agent string with the contents of a bork.php generated file...

This worked brilliantly. Meta ramped up to requesting 270,000 URLs on May 30 and 31, 2025...

After about 3 months, I got scared that Meta's insatiable consumption of Super Great Pages about condiments, underwear and circa 2010 C-List celebs would start costing me money. So I switched to giving "meta-externalagent" a 404 status code. I decided to see how long it would take one of the highest valued companies in the world to decide to go away.

The answer is 5 months.
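The pattern Ediger describes is easy to sketch. Below is a minimal, hypothetical Python version of both phases (his actual implementation was a PHP script, bork.php; the handler, word list, and link scheme here are invented for illustration, not his code). Requests whose User-Agent contains "meta-externalagent" get either a procedurally generated junk page full of links to more junk pages, or, after the flag is flipped, a 404:

```python
import random
from http.server import BaseHTTPRequestHandler

SERVE_JUNK = True  # phase 1: feed generated pages; set False for phase 2 (404s)
WORDS = ["ketchup", "mustard", "relish", "aioli", "sriracha", "tahini"]

def fake_page(path: str, rng: random.Random) -> str:
    """Build a junk page whose random links make the site look infinite."""
    links = "".join(
        f'<a href="/{rng.randrange(10**9)}.html">more</a> ' for _ in range(5)
    )
    body = " ".join(rng.choice(WORDS) for _ in range(200))
    return f"<html><body><h1>{path}</h1><p>{body}</p>{links}</body></html>"

class BorkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if "meta-externalagent" in self.headers.get("User-Agent", ""):
            if SERVE_JUNK:
                # Seed from the path: each URL yields a stable page, but the
                # URL space it links into is unbounded.
                page = fake_page(self.path, random.Random(self.path))
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(page.encode())
            else:
                self.send_error(404)
            return
        self.send_error(404)  # real blog handling omitted from this sketch

# To run: http.server.HTTPServer(("", 8080), BorkHandler).serve_forever()
```

Seeding the generator from the request path is what creates the "illusion of an infinite website": every link resolves to a real-looking page containing five more links, so a crawler that follows them never runs out of content.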

Social Networks

What Happens After the Death of Social Media? (noemamag.com) 112

"These are the last days of social media as we know it," argues a humanities lecturer from University College Cork exploring where technology and culture intersect, warning they could become lingering derelicts "haunted by bots and the echo of once-human chatter..."

"Whatever remains of genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks..." In recent years, Facebook and other platforms that facilitate billions of daily interactions have slowly morphed into the internet's largest repositories of AI-generated spam. Research has found what users plainly see: tens of thousands of machine-written posts now flood public groups — pushing scams, chasing clicks — with clickbait headlines, half-coherent listicles and hazy lifestyle images stitched together in AI tools like Midjourney... While content proliferates, engagement is evaporating. Average interaction rates across major platforms are declining fast: Facebook and X posts now scrape an average 0.15% engagement, while Instagram has dropped 24% year-on-year. Even TikTok has begun to plateau. People aren't connecting or conversing on social media like they used to; they're just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement.

And much of it is slop: Less than half of American adults now rate the information they see on social media as "mostly reliable" — down from roughly two-thirds in the mid-2010s... Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize. Systems designed to surface peer-to-peer engagement are now systematically filtering out such activity, because what counts as engagement has changed. Engagement is now about raw user attention — time spent, impressions, scroll velocity — and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

"These are the last days of social media, not because we lack content," the article suggests, "but because the attention economy has neared its outer limit — we have exhausted the capacity to care..." Social media giants have stopped growing exponentially, while a significant proportion of 18- to 34-year-olds even took deliberate mental health breaks from social media in 2024, according to an American Psychiatric Association poll. And "Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd."

Yet his 5,000-word essay predicts social media's death rattle "will not be a bang but a shrug," since "the model is splintering, and users are drifting toward smaller, slower, more private spaces, like group chats, Discord servers and federated microblogs — a billion little gardens." Intentional, opt-in micro-communities are rising in their place — like Patreon collectives and Substack newsletters — where creators chase depth over scale, retention over virality. A writer with 10,000 devoted subscribers can potentially earn more and burn out less than one with a million passive followers on Instagram... Even the big platforms sense the turning tide. Instagram has begun emphasizing DMs, X is pushing subscriber-only circles and TikTok is experimenting with private communities. Behind these developments is an implicit acknowledgement that the infinite scroll, stuffed with bots and synthetic sludge, is approaching the limit of what humans will tolerate....

The most radical redesign of social media might be the most familiar: What if we treated these platforms as public utilities rather than private casinos...? Imagine social media platforms with transparent algorithms subject to public audit, user representation on governance boards, revenue models based on public funding or member dues rather than surveillance advertising, mandates to serve democratic discourse rather than maximize engagement, and regular impact assessments that measure not just usage but societal effects... This could take multiple forms, like municipal platforms for local civic engagement, professionally focused networks run by trade associations, and educational spaces managed by public library systems... We need to "rewild the internet," as Maria Farrell and Robin Berjon mentioned in a Noema essay.

We need governance scaffolding, shared institutions that make decentralization viable at scale... [R]eal change will come when platforms are rewarded for serving the public interest. This could mean tying tax breaks or public procurement eligibility to the implementation of transparent, user-controllable algorithms. It could mean funding research into alternative recommender systems and making those tools open-source and interoperable. Most radically, it could involve certifying platforms based on civic impact, rewarding those that prioritize user autonomy and trust over sheer engagement.

"Social media as we know it is dying, but we're not condemned to its ruins. We are capable of building better — smaller, slower, more intentional, more accountable — spaces for digital interaction, spaces..."

"The last days of social media might be the first days of something more human: a web that remembers why we came online in the first place — not to be harvested but to be heard, not to go viral but to find our people, not to scroll but to connect. We built these systems, and we can certainly build better ones."

AI

Making Cash Off 'AI Slop': the Surreal Video Business Taking Over the Web (msn.com) 83

The Washington Post looks at the rise of low-effort, high-volume "AI slop" videos: The major social media platforms, scared of driving viewers away, have tried to crack down on slop accounts, using AI tools of their own to detect and flag videos they believe were synthetically made. YouTube last month said it would demonetize creators for "inauthentic" and "mass-produced" content. But the systems are imperfect, and the creators can easily spin up new accounts — or just push their AI tools to pump out videos similar to the banned ones, dodging attempts to snuff them out.

One place where they're coming from... Jiaru Tang, a researcher at the Queensland University of Technology who recently interviewed creators in China, said AI video has become one of the hottest new income opportunities there for workers in the internet's underbelly, who previously made money writing fake news articles or running spam accounts. Many university students, stay-at-home moms and the recently unemployed now see AI video as a kind of gig work, like driving an Uber. The average small creator she interviewed did their day jobs and then, at night, "spent two to three hours making AI-slop money," she said. A few she spoke with made $2,000 to $3,000 a month at it.
But the article provides other examples of the "wild cottage industry of AI-video makers, enticed by the possibility of infinite creation for minimal work":
  • A 31-year-old loan officer in eastern Idaho first went viral in June "with an AI-generated video on TikTok in which a fake but lifelike old man talked about soiling himself. Within two weeks, he had used AI to pump out 91 more, mostly showing fake street interviews and jokes about fat people to an audience that has surged past 180,000 followers..." (He told the Post the videos earn him about $5,000 a month through TikTok's creator program.)
  • "To stand out, some creators have built AI-generated influencers with lives a viewer can follow along. 'Why does everybody think I'm AI? ... I'm a human being, just like you guys,' says the AI woman in one since-removed TikTok video, which was watched more than 1 million times."
  • One AI-generated video of a dog biting a woman's face off (revealing a salad) received a quarter of a billion views.

Crime

Dev Gets 4 Years For Creating Kill Switch On Ex-Employer's Systems (bleepingcomputer.com) 113

Davis Lu, a former Eaton Corporation developer, has been sentenced to four years in prison for sabotaging his ex-employer's Windows network with malware and a custom kill switch that locked out thousands of employees once his account was disabled. The attack caused significant operational disruption and financial losses, with Lu also attempting to cover his tracks by deleting data and researching privilege escalation techniques. BleepingComputer reports: After a corporate restructuring and subsequent demotion in 2018, the DOJ says that Lu retaliated by embedding malicious code throughout the company's Windows production environment. The malicious code included an infinite Java thread loop designed to overwhelm servers and crash production systems. Lu also created a kill switch named "IsDLEnabledinAD" ("Is Davis Lu enabled in Active Directory") that would automatically lock all users out of their accounts if his account was disabled in Active Directory. When his employment was terminated on September 9, 2019, and his account disabled, the kill switch activated, causing thousands of users to be locked out of their systems.

"The defendant breached his employer's trust by using his access and technical knowledge to sabotage company networks, wreaking havoc and causing hundreds of thousands of dollars in losses for a U.S. company," said Acting Assistant Attorney General Matthew R. Galeotti. When he was instructed to return his laptop, Lu reportedly deleted encrypted data from his device. Investigators later discovered search queries on the device researching how to elevate privileges, hide processes, and quickly delete files. Lu was found guilty earlier this year of intentionally causing damage to protected computers. He will also serve three years of supervised release following his four-year prison term.

AI

Harvard Dropouts To Launch 'Always On' AI Smart Glasses That Listen, Record Every Conversation 68

Two Harvard dropouts are launching Halo X, a $249 pair of AI-powered smart glasses that continuously listen, record, and transcribe conversations while displaying real-time information to the wearer. "Our goal is to make glasses that make you super intelligent the moment you put them on," said AnhPhu Nguyen, co-founder of Halo. Co-founder Caine Ardayfio said the glasses "give you infinite memory."

"The AI listens to every conversation you have and uses that knowledge to tell you what to say ... kinda like IRL Cluely," Ardayfio told TechCrunch. "If somebody says a complex word or asks you a question, like, 'What's 37 to the third power?' or something like that, then it'll pop up on the glasses." From the report: Ardayfio and Nguyen have raised $1 million to develop the glasses, led by Pillar VC, with support from Soma Capital, Village Global, and Morningside Venture. The glasses will be priced at $249 and will be available for preorder starting Wednesday. Ardayfio called the glasses "the first real step towards vibe thinking."

The two Ivy League dropouts, who have since moved into their own version of the Hacker Hostel in the San Francisco Bay Area, recently caused a stir after developing a facial-recognition app for Meta's smart Ray-Ban glasses to prove that the tech could be used to dox people. As a potential early competitor to Meta's smart glasses, Ardayfio said Meta, given its history of security and privacy scandals, had to rein in its product in ways that Halo can ultimately capitalize on. [...]

For now, Halo X glasses only have a display and a microphone, but no camera, although the two are exploring the possibility of adding one to a future model. Users still need to have their smartphones handy to help power the glasses and get "real time info prompts and answers to questions," per Nguyen. The glasses, which are manufactured by another company that the startup didn't name, are tethered to an accompanying app on the owner's phone, where the glasses essentially outsource the computing since they don't have enough power to do it on the device itself. Under the hood, the smart glasses use Google's Gemini and Perplexity as their chatbot engines, according to the two co-founders. Gemini is better for math and reasoning, whereas they use Perplexity to scrape the internet, they said.
Piracy

How Napster Inspired a Generation of Rule-Breaking Entrepreneurs (fastcompany.com) 16

Napster's latest AI pivot "is the latest in a series of attempts by various owners to ride its brand cachet during emerging tech waves," Fast Company reported in July. In March, it sold for $207 million to Infinite Reality, an immersive digital media and e-commerce company, which also rebranded as Napster last month. Since 2020, other owners have included a British VR music startup (to create VR concerts) and two crypto-focused companies that bought it to anchor a Web3 music platform. Napster's launch follows a growing number of attempts to drive AI adoption beyond smartphones and laptops.
And tonight the Washington Post revisited the legacy of Napster's original mp3-sharing model, arguing Napster "inspired successive generations of entrepreneurs to risk flouting the law so they could grow enough to get the laws changed to suit them, including Airbnb and Uber." "Napster to me embodies the idea that it is better to seek forgiveness than permission," said Mark Lemley, director of Stanford Law School's Program in Law, Science & Technology. "It didn't work out well for Napster or for many of the others who got sued, but it worked out very well for everyone else — users, and eventually the content industry, too, which is making record profits...." [Napster co-founder Sean] Parker later advised Spotify, and Napster marketing chief Oliver Schusser is now Apple's vice president for music.

Although many users saw Napster as an extension of rock-and-roll rebellion, that was not the company's real plan. First, Fanning's majority-owning uncle, and then venture capital firm Hummer Winblad, wanted the start-up to leverage its knowledge of individual music consumers to make lucrative deals with the labels, according to internal documents this reporter found in researching a book on Napster. They warned that if no agreement were reached and Napster failed, more decentralized pirate services would take the audience and offer the labels nothing.

But settlement talks failed. The litigation blitz also took down a Napster competitor called Scour, which a young Travis Kalanick had joined shortly after its founding. Kalanick later created Uber, dedicated to overthrowing taxi regulations.

The article concludes that "Now it is Microsoft, Meta, Apple and Google, among the largest companies in the world, bankrolling the consumption of all media.

"They, too, have absorbed Napster's lessons in realpolitik, namely to build it first and hope the regulators will either yield or catch up."

Slashdot Top Deals