Electronic Frontier Foundation

Congress Wants To Hand Your Parenting To Big Tech

An anonymous reader quotes a report from the Electronic Frontier Foundation (EFF): Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing [Friday] on "examining the effect of technology on America's youth." Witnesses warned about "addictive" online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and "empower parents."

That's a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill's press release contains soothing language, KOSMA doesn't actually give parents more control. Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That's right -- this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem. [...] This bill doesn't just set an age rule. It creates a legal duty for platforms to police families. Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it "shall terminate any existing account or profile" belonging to that user. And "knows" doesn't just mean someone admits their age. The bill defines knowledge to include what is "fairly implied on the basis of objective circumstances" -- in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13.

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won't be kids sneaking around -- it will be minors who are following their parents' guidance, and the parents themselves. Imagine a child using their parent's YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, "Cool video -- I'll show this to my 6th grade teacher!" and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn't matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a "family" account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That's more than enough legal risk to make platforms err on the side of cutting people off. Platforms have no way to remove "just the kid" from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child's use, KOSMA forces Big Tech to override that family decision. [...] These companies don't know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.
AI

Microsoft Executives Discuss How AI Will Change Windows, Programming -- and Society (windowscentral.com)

"Windows is evolving into an agentic OS," Microsoft's president of Windows Pavan Davuluri posted on X.com, "connecting devices, cloud, and AI to unlock intelligent productivity and secure work anywhere."

But former Uber software engineer and engineering manager Gergely Orosz was unimpressed. "Can't see any reason for software engineers to choose Windows with this weird direction they are doubling down on. So odd because Microsoft has building dev tools in their DNA... their OS doesn't look like anything a builder who wants OS control could choose. Mac or Linux it is for devs."

Davuluri "has since disabled replies on his original post..." notes the blog Windows Central, "which some people viewed as an attempt to shut out negative feedback." But he also replied to that comment... Davuluri says "we care deeply about developers. We know we have work to do on the experience, both on the everyday usability, from inconsistent dialogs to power user experiences. When we meet as a team, we discuss these pain points and others in detail, because we want developers to choose Windows..." The good news is Davuluri has confirmed that Microsoft is listening, and is aware of the backlash it's receiving over the company's obsession with AI in Windows 11. That doesn't mean the company is going to stop with adding AI to Windows, but it does mean we can also expect Microsoft to focus on the other things that matter too, such as stability and power user enhancements.
Elsewhere on X.com, Microsoft CEO Satya Nadella shared his own thoughts on "the net benefit of the AI platform wave." The Times of India reports: Nadella said tech companies should focus on building AI systems that create more value for the people and businesses using them, not just for the companies that make the technology. He cited Bill Gates to emphasize the same: "A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it." Tesla CEO Elon Musk responded to Nadella's post with a facepalm emoji.

Nadella said this idea matters even more during the current AI boom, where many firms risk giving away too much of their own value to big tech platforms. "The real question is how to empower every company out there to build their own AI-native capabilities," he wrote. Nadella pointed to Microsoft's partnership with OpenAI as a counter to the industry's zero-sum mindset... [He also cited Microsoft's "work to bring AMD into the fleet."]

More from Satya Nadella's post: Thanks to AI, the [coding] category itself has expanded and may ultimately become one of the largest software categories. I don't ever recall any analyst ever asking me about how much revenue Visual Studio makes! But now everyone is excited about AI coding tools. This is another aspect of positive sum, when the category itself is redefined and the pie becomes 10x what it was! With GitHub Copilot we compete for our share and with GitHub and Agent HQ we also provide a platform for others.

Of course, the real test of this era won't be when another tech company breaks a valuation record. It will be when the overall economy and society themselves reach new heights. When a pharma company uses AI in silico to bring a new therapy to market in one year instead of twelve. When a manufacturer uses AI to redesign a supply chain overnight. When a teacher personalizes lessons for every student. When a farmer predicts and prevents crop failure. That's when we'll know the system is working.

Let us move beyond zero-sum thinking and the winner-take-all hype and focus instead on building broad capabilities that harness the power of this technology to achieve local success in each firm, which then leads to broad economic growth and societal benefits. And every firm needs to make sure they have control of their own destiny and sovereignty vs just a press release with a Tech/AI company or worse leak all their value through what may seem like a partnership, except it's extractive in terms of value exchange in the long run.

Crime

Google Begins Aggressively Using the Law To Stop Text Message Scams (bgr.com)

"Google is going to court to help put an end to, or at least limit, the prevalence of phishing scams over text message," reports BGR: Google said it's bringing suit against Lighthouse, an impressively large operation that allegedly provides tools customers can buy to set up their own specialized phishing scams. All told, Google estimates that Lighthouse-affiliated scams in the U.S. have stolen anywhere between 12.7 million and 115 million credit cards. "Bad actors built Lighthouse as a phishing-as-a-service kit to generate and deploy massive SMS phishing attacks," Google notes. "These attacks exploit established brands like E-Z Pass to steal people's financial information."

Google's legal action is comprehensive and is intent on completely dismantling Lighthouse's operations. The search giant is bringing claims under RICO, the Lanham Act, and the Computer Fraud and Abuse Act (CFAA). RICO, which often comes up in movies and television shows, allows authorities to treat Lighthouse's phishing operation as a broad criminal enterprise as opposed to isolated scams. By using RICO, Google also expands the list of individuals who can be found liable, whether it be the people who started Lighthouse, the people who run it, or even unaffiliated customers who used the company's services. The Lanham Act, for those unaware, targets malicious actors who misappropriate well-known company trademarks in order to confuse consumers. The Lanham Act comes into play because many phishing scams masquerade as legitimate messages from companies like Amazon and FedEx. The Computer Fraud and Abuse Act, meanwhile, is relevant because scammers typically use stolen credentials to gain unauthorized access to financial systems, something the CFAA is designed to target...

The fact that Google is invoking all three of the acts above underscores how serious the company is about putting a stop to SMS-based scams. By using all three, Google's legal attack is more potent and also expands the range of available remedies to include civil damages and criminal penalties. In short, Google isn't merely trying to win a legal case; it's aiming to emphatically and permanently stop Lighthouse in its tracks.

Getting even more aggressive, Google says it's also working with the U.S. Congress to pass new anti-scammer legislation, and endorsed these three new bipartisan bills:
  • The Scam Compound Accountability and Mobilization (SCAM) Act "would develop a national strategy to counter scam compounds, enhance sanctions and support survivors of human trafficking within these compounds."
  • The Foreign Robocall Elimination Act "would establish a taskforce focused on how to best block foreign-originated illegal robocalls before they ever reach American consumers."
  • The Guarding Unprotected Aging Retirees from Deception (GUARD) Act "would empower state and local law enforcement by enabling them to utilize federal grant funding to investigate financial fraud and scams specifically targeting retirees."

Thanks to Slashdot reader anderzole for sharing the article.


The Internet

Tim Berners-Lee Urges New Open-Source Interoperable Data Standard, Protections from AI (theguardian.com)

Tim Berners-Lee writes in a new article in the Guardian that "Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. Today, I look at my invention and I am forced to ask: is the web still free today? No, not all of it. We see a handful of large platforms harvesting users' private data to share with commercial brokers or even repressive governments. We see ubiquitous algorithms that are addictive by design and damaging to our teenagers' mental health. Trading personal data for use certainly does not fit with my vision for a free web. On many platforms, we are no longer the customers, but instead have become the product. Our data, even if anonymised, is sold on to actors we never intended it to reach, who can then target us with content and advertising...

We have the technical capability to give that power back to the individual. Solid is an open-source interoperable standard that I and my team developed at MIT more than a decade ago. Apps running on Solid don't implicitly own your data — they have to request it from you and you choose whether to agree, or not. Rather than being in countless separate places on the internet in the hands of whomever it had been resold to, your data is in one place, controlled by you. Sharing your information in a smart way can also liberate it. Why is your smartwatch writing your biological data to one silo in one format? Why is your credit card writing your financial data to a second silo in a different format? Why are your YouTube comments, Reddit posts, Facebook updates and tweets all stored in different places? Why is the default expectation that you aren't supposed to be able to look at any of this stuff? You generate all this data — your actions, your choices, your body, your preferences, your decisions. You should own it. You should be empowered by it...
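
The core idea Berners-Lee describes -- apps don't implicitly own your data, they must request it and you choose whether to agree -- can be illustrated with a toy access-control sketch. This is purely illustrative (the class and method names are invented for this example, not the actual Solid protocol or any solid-client API):

```python
# Toy sketch of Solid-style consent: your data lives in one user-controlled
# "pod", and an app can only read a resource the owner explicitly granted.
# All names here are hypothetical, invented for illustration.

class Pod:
    def __init__(self, owner):
        self.owner = owner
        self._data = {}          # resource path -> content
        self._grants = set()     # (app, resource) pairs the owner approved

    def write(self, resource, content):
        self._data[resource] = content

    def grant(self, app, resource):
        """The owner explicitly approves one app's access to one resource."""
        self._grants.add((app, resource))

    def read(self, app, resource):
        """Apps don't implicitly own the data; every read is checked."""
        if (app, resource) not in self._grants:
            raise PermissionError(f"{app} has no grant for {resource}")
        return self._data[resource]

pod = Pod(owner="alice")
pod.write("/health/heart-rate", "62 bpm resting")
pod.grant("fitness-app", "/health/heart-rate")

print(pod.read("fitness-app", "/health/heart-rate"))  # approved app succeeds
# pod.read("ad-broker", "/health/heart-rate")  would raise PermissionError
```

The point of the design is the inversion of the default: in the silo model the platform holds the data and decides who sees it, while here the pod owner holds it and every consumer starts with no access.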

We're now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency. In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can't the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can't let the same thing happen with AI.

Berners-Lee also says "we need a Cern-like not-for-profit body driving forward international AI research," arguing that if we muster the political willpower, "we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders.

"We can re-empower individuals, and take the web back. It's not too late."

Berners-Lee has also written a new book titled This is For Everyone.
Facebook

Facebook and Instagram Offer UK Users an Ad-Stopping Subscription Fee (bbc.com)

"Facebook and Instagram owner Meta is launching paid subscriptions for users who do not want to see adverts in the UK," reports the BBC: The company said it would start notifying users in the coming weeks to let them choose whether to subscribe to its platforms if they wish to use them without seeing ads. EU users of its platforms can already pay a fee starting from €5.99 (£5) a month to see no ads — but subscriptions will start from £2.99 a month for UK users.

"It will give people in the UK a clear choice about whether their data is used for personalised advertising, while preserving the free access and value that the ads-supported internet creates for people, businesses and platforms," Meta said. But UK users will not have an option to not pay and see "less personalised" adverts — a feature Meta added for EU users after regulators raised concerns...

Meta said its own model would see its subscription for no ads cost £2.99 a month on the web or £3.99 a month on iOS and Android apps — with the higher fee to offset cuts taken from transactions by Apple and Google... [Meta] reiterated its critical stance on the EU on Friday, saying its regulations were creating a worse experience for users and businesses unlike the UK's "more pro-growth and pro-innovation regulatory environment".

Even users not paying for an ad-free experience have "tools and settings that empower people to control their ads experience," according to Meta's announcement. These include Ad Preferences, which influences the data used to inform ads, including Activity Information from Ad Partners. "We also have tools in our products that explain 'Why am I seeing this ad?' and how people can manage their ad experience. We do not sell personal data to advertisers."
AI

Duolingo Faces Massive Social Media Backlash After 'AI-First' Comments (fastcompany.com)

"Duolingo had been riding high," reports Fast Company, until CEO Luis von Ahn "announced on LinkedIn that the company is phasing out human contractors, looking for AI use in hiring and in performance reviews, and that 'headcount will only be given if a team cannot automate more of their work.'"

But then "facing heavy backlash online after unveiling its new AI-first policy", Duolingo's social media presence went dark last weekend. Duolingo even temporarily took down all its posts on TikTok (6.7 million followers) and Instagram (4.1 million followers) "after both accounts were flooded with negative feedback." Duolingo previously faced criticism for quietly laying off 10% of its contractor base and introducing some AI features in late 2023, but the criticism barely went beyond a semi-viral post on Reddit. Now that Duolingo is cutting out all its human contractors whose work can technically be done by AI, and relying on more AI-generated language lessons, the response is far more pronounced. Although earlier TikTok videos are not currently visible, a Fast Company article from May 12 captured a flavor of the reaction:

The top comments on virtually every recent post have nothing to do with the video or the company — and everything to do with the company's embrace of AI. For example, a Duolingo TikTok video jumping on board the "Mama, may I have a cookie" trend saw replies like "Mama, may I have real people running the company" (with 69,000 likes) and "How about NO ai, keep your employees...."

And then... After days of silence, on Tuesday the company posted a bizarre video message on TikTok and Instagram, the meaning of which is hard to decipher... Duolingo's first video drop in days has the degraded, stuttering feel of a Max Headroom video made by the hackers at Anonymous. In it, a supposed member of the company's social team appears in a three-eyed Duo mask and black hoodie to complain about the corporate overlords ruining the empire the heroic social media crew built.
"But this is something Duolingo can't cute-post its way out of," Fast Company wrote on Tuesday, complaining the company "has not yet meaningfully addressed the policies that inspired the backlash against it..."

So the next video (Thursday) featured Duolingo CEO Luis von Ahn himself, being confronted by that same hoodie-wearing social media rebel, who says "I'm making the man who caused this mess accountable for his behavior. I'm demanding answers from the CEO..." [Though the video carefully sidesteps the issue of replacing contractors with AI or how "headcount will only be given if a team cannot automate more of their work."]

Rebel: First question. So are there going to be any humans left at this company?

CEO: Our employees are what make Duolingo so amazing. Our app is so great because our employees made it... So we're going to continue having employees, and not only that, we're actually going to be hiring more employees.

Rebel: How do we know that these aren't just empty promises? As long as you're in charge, we could still be shuffled out once the media fire dies down. And we all know that in terms of automation, CEOs should be the first to go.

CEO: AI is a fundamental shift. It's going to change how we all do work — including me. And honestly, I don't really know what's going to happen.

But I want us, as a company, to have our workforce prepared by really knowing how to use AI so that we can be more efficient with it.

Rebel: Learning a foreign language is literally about human connection. How is that even possible with AI-first?

CEO: Yes, language is about human connection, and it's about people. And this is the thing about AI. AI will allow us to reach more people, and to teach more people. I mean for example, it took us about 10 years to develop the first 100 courses on Duolingo, and now in under a year, with the help of AI and of course with humans reviewing all the work, we were able to release another 100 courses in less than a year.

Rebel: So do you regret posting this memo on LinkedIn?

CEO: Honestly, I think I messed up sending that email. What we're trying to do is empower our own employees to be able to achieve more and be able to have way more content to teach better and reach more people all with the help of AI.

Returning to where it all started, Duolingo's CEO posted again on LinkedIn Thursday with "more context" for his vision. It still emphasizes the company's employees while sidestepping contractors replaced by AI. But it puts a positive spin on how "headcount will only be given if a team cannot automate more of their work." I've always encouraged our team to embrace new technology (that's why we originally built for mobile instead of desktop), and we are taking that same approach with AI. By understanding the capabilities and limitations of AI now, we can stay ahead of it and remain in control of our own product and our mission.

To be clear: I do not see AI as replacing what our employees do (we are in fact continuing to hire at the same speed as before). I see it as a tool to accelerate what we do, at the same or better level of quality. And the sooner we learn how to use it, and use it responsibly, the better off we will be in the long run. My goal is for Duos to feel empowered and prepared to use this technology.

No one is expected to navigate this shift alone. We're developing workshops and advisory councils, and carving out dedicated experimentation time to help all our teams learn and adapt. People work at Duolingo because they want to solve big problems to improve education, and the people who work here are what make Duolingo successful. Our mission isn't changing, but the tools we use to build new things will change. I remain committed to leading Duolingo in a way that is consistent with our mission to develop the best education in the world and make it universally available.

"The backlash to Duolingo is the latest evidence that 'AI-first' tends to be a concept with much more appeal to investors and managers than most regular people," notes Fortune: And it's not hard to see why. Generative AI is often trained on reams of content that may have been illegally accessed; much of its output is bizarre or incorrect; and some leaders in the field are opposed to regulations on the technology. But outside particular niches in entry-level white-collar work, AI's productivity gains have yet to materialize.
Social Networks

Snap's New Spectacles Inch Closer To Compelling AR (theverge.com)

The Verge's Alex Heath reports: Snap's fifth-generation Spectacles have a richer, more immersive display. Using them feels snappier. They weigh less than their predecessor and last longer on a charge. Those are exactly the kinds of upgrades you'd expect from a product line that's technically eight years old. But the market for Spectacles -- and AR glasses in general -- still feels as nascent as ever. Snap has an idea for what could change that: developers. These new Spectacles, announced Tuesday at Snap's annual Partner Summit in Los Angeles, aren't being sold. Instead, Snap is repeating its playbook for the last version of Spectacles in 2021 and distributing them to the people who make AR lenses for Snapchat. This time around, though, there's an extra hurdle: you have to apply for access through Lens Studio, the company's desktop tool for creating AR software, and pay $1,188 to lease a pair for at least one year. (After a year, the subscription becomes $99 a month.)

Yes, Snap is asking developers to pay $1,188 to build software for hardware with no user base. Even still, Snap CEO Evan Spiegel believes the interest will be there. "Our goal is really to empower and inspire the developer and AR enthusiast communities," he tells me. "This really is an invitation, and hopefully an inspiration, to create." [...] Ultimately, I'm skeptical of why developers will want to build software for Spectacles right now, given the lack of a market and the cost of getting access to a pair. Still, Spiegel believes enough of them are excited about the promise of AR glasses and that they'll want to help shape that future. "I think it's the same reason why developers were really excited with the early desktop computer or the reason why developers were really excited by the early smartphones," he says. "I think this is a group of visionary technologists who are really excited about what the future holds." Spiegel may be right. AR glasses may be the future, and Spectacles may be well-positioned to become the next major computing platform, even with competition heating up. But there's still a lot of progress that needs to happen for Snap's vision to become reality.
Road to VR has a full list of specs embedded in their report. They also published a reveal trailer on YouTube.
Electronic Frontier Foundation

EFF Decries 'Brazen Land-Grab' Attempt on 900 MHz 'Commons' Frequency Used By Amateur Radio (eff.org)

An EFF article calls out a "brazen attempt to privatize" a wireless frequency band (900 MHz) which America's FCC left "as a commons for all... for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment." The spectrum has also become "a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, garage door openers." But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation. EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav [which sells a geolocation service] wants the FCC to reconfigure the 902-928 MHz band to grant them exclusive rights to the majority of the spectrum... This proposal would not only give NextNav their own lane, but expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from their portions of the band. All of this points to more power for NextNav at everyone else's expense.

This land-grab is purportedly to implement a Positioning, Navigation and Timing (PNT) network to serve as a US-specific backup of the Global Positioning System (GPS). This plan raises red flags off the bat. Dropping the "global" from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS. NextNav itself admits there is also little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers. If NextNav has a grand plan to implement a new and improved standard, it was left out of their FCC proposal. What NextNav did include, however, is its intent to resell their exclusive bandwidth access to mobile 5G networks. This isn't about national security or innovation; it's about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in their GPS backup vision, they should look to parts of the spectrum already allocated for 5G.

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band. One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail. This is the type of innovation that actually addresses crises raised by NextNav, and it's happening in the part of the spectrum allocated for unlicensed devices while empowering communities instead of a powerful intermediary. Yet, this proposal threatens to crush such grassroots projects, leaving them without a commons in which they can grow and improve.

This isn't just about a set of frequencies. We need an ecosystem which fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they avoid a technology monoculture unable to adapt to new threats and changing needs as technology progresses. Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

NextNav's proposal is a direct threat to innovation, public safety, and community empowerment. While FCC comments on the proposal have closed, replies remain open to the public until September 20th. The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons.

Transportation

Speed Limiters Now Mandatory In All New EU Cars (autoweek.com)

An anonymous reader shares a report: Cars have been able to figure out when they're speeding for a while, thanks to GPS as well as traffic sign recognition, and they've also been able to pump the brakes automatically when needed. Having a computer automatically slow down a car in response to posted speed limits, therefore, was not really a question of technical feasibility for some time -- but mandating it has been a question of political will. That political will has materialized in the European Union, and starting July 7 all new cars sold in the EU will feature intelligent speed assistance (ISA) systems.

The systems themselves have been working their way into newly introduced models of cars starting in 2022, so quite a few new cars on the road already feature them. The July 2024 regulation extends that mandate to all new vehicles being manufactured for sale in the EU. "The objective is to protect Europeans against traffic accidents, poor air quality and climate change, empower them with new mobility solutions that match their changing needs, and defend the competitiveness of European industry," the European Commission said in a statement. The systems themselves operate through traffic sign recognition, as well as navigation systems. There will be four ways in which ISA systems will work to slow the vehicle down, and it will be up to the manufacturers to pick which one they want to use. The EU regulations permit a system that can use a cascaded acoustic warning, a cascaded vibrating warning, an accelerator pedal with haptic feedback, or a speed control function in which the speed of the vehicle will be gradually reduced.
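
The "cascaded" behavior described above can be sketched as a simple escalation over how long the driver stays above the detected limit. The stage thresholds and the choice of sequencing below are illustrative assumptions for this sketch, not values from the regulation (which leaves the specific mechanism to manufacturers):

```python
# Illustrative sketch of a cascaded ISA response: the longer the vehicle
# stays above the detected speed limit, the stronger the intervention.
# Stage thresholds (3 s, 6 s) are made-up values for illustration only.

def isa_response(speed_kmh: float, limit_kmh: float, seconds_over: float) -> str:
    """Return the ISA action for the current speed and time spent speeding."""
    if speed_kmh <= limit_kmh:
        return "none"
    if seconds_over < 3:
        return "acoustic warning"        # cascaded audible alert
    if seconds_over < 6:
        return "haptic pedal feedback"   # counter-pressure on the accelerator
    return "speed control"               # gradually reduce vehicle speed

print(isa_response(118, 120, 0))   # within the limit: no intervention
print(isa_response(128, 120, 1))   # just started speeding: audible alert
print(isa_response(128, 120, 4))   # persisting: haptic pedal feedback
print(isa_response(128, 120, 10))  # sustained: gradual speed reduction
```

In a real vehicle the limit input would come from sign recognition and map data, and a manufacturer would implement only one of the four permitted mechanisms rather than chaining them as this sketch does.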

Advertising

Netflix To Take On Google and Amazon By Building Its Own Ad Server (techcrunch.com)

Lauren Forristal writes via TechCrunch: Netflix announced during its Upfronts presentation on Wednesday that it's launching its own advertising technology platform only a year and a half after entering the ads business. This move pits it against other industry heavyweights with ad servers, like Google, Amazon and Comcast. The announcement marks a significant shake-up in the streaming giant's advertising approach. The company originally partnered with Microsoft to develop its ad tech, letting Netflix enter the ad space quickly and catch up with rivals like Hulu, which has had its own ad server for over a decade.

With the launch of its in-house ad tech, Netflix is poised to take full control of its advertising future. This strategic move will empower the company to create targeted and personalized ad experiences that resonate with its massive user base of 270 million subscribers. [...] Netflix didn't say exactly how its in-house solution will change the way ads are delivered, but it's likely it'll move away from generic advertisements. According to the Financial Times, Netflix wants to experiment with "episodic" campaigns, which involve a series of ads that tell a story rather than delivering repetitive ads. During the presentation, Netflix also noted that it'll expand its buying capabilities this summer, which will now include The Trade Desk, Google's Display & Video 360 and Magnite as partners. Notably, competitor Disney+ also has an advertising agreement with The Trade Desk. Netflix also touted the success of its ad-supported tier, reporting that 40 million global monthly active users opt for the plan. The ad tier had around 5 million users within six months of launching.

The Courts

Social Media Giants Must Face Child Safety Lawsuits, Judge Rules (theverge.com) 53

Emma Roth reports via The Verge: Meta, ByteDance, Alphabet, and Snap must proceed with a lawsuit alleging their social platforms have adverse mental health effects on children, a federal court ruled on Tuesday. US District Judge Yvonne Gonzalez Rogers rejected the social media giants' motion to dismiss the dozens of lawsuits accusing the companies of running platforms "addictive" to kids. School districts across the US have filed suit against Meta, ByteDance, Alphabet, and Snap, alleging the companies cause physical and emotional harm to children. Meanwhile, 42 states sued Meta last month over claims Facebook and Instagram "profoundly altered the psychological and social realities of a generation of young Americans." This order addresses the individual suits and "over 140 actions" taken against the companies.

Tuesday's ruling states that the First Amendment and Section 230, which says online platforms shouldn't be treated as the publishers of third-party content, don't shield Facebook, Instagram, YouTube, TikTok, and Snapchat from all liability in this case. Judge Gonzalez Rogers notes many of the claims laid out by the plaintiffs don't "constitute free speech or expression," as they have to do with alleged "defects" on the platforms themselves. That includes having insufficient parental controls, no "robust" age verification systems, and a difficult account deletion process.

"Addressing these defects would not require that defendants change how or what speech they disseminate," Judge Gonzalez Rogers writes. "For example, parental notifications could plausibly empower parents to limit their children's access to the platform or discuss platform use with them." However, Judge Gonzalez Rogers still threw out some of the other "defects" identified by the plaintiffs because they're protected under Section 230, such as offering a beginning and end to a feed, recommending children's accounts to adults, the use of "addictive" algorithms, and not putting limits on the amount of time spent on the platforms.

Government

America's Net Neutrality Question: Should the FCC Define the Internet as a 'Common Carrier'? (fcc.gov) 132

The Washington Post's editorial board looks at America's "net neutrality" debate.

But first they note that America's communications-regulating FCC has "limited authority to regulate unless broadband is considered a 'common carrier' under the Telecommunications Act of 1996." The FCC under President Barack Obama moved to reclassify broadband so it could regulate broadband companies; the FCC under President Donald Trump reversed the change. Dismayed advocates warned the world that, without the protections in place, the internet would break. You'll never guess what happened next: nothing. Or, at least, almost nothing. The internet did not break, and internet service providers, for the most part, did not block or throttle traffic.

All the same, today's FCC, under Chairwoman Jessica Rosenworcel, has just moved to re-reclassify broadband. The interesting part is that her strongest argument doesn't have much to do with net neutrality, but with some of the other benefits the country could see from having a federal watchdog keeping an eye on the broadband business... Broadband is an essential service... Yet there isn't a single government agency with sufficient authority to oversee this vital tool. Asserting federal authority over broadband would empower regulation of any blocking, throttling or anti-competitive paid traffic prioritization that providers might engage in. But it could also help ensure the safety and security of U.S. networks.

The FCC has, on national security grounds, removed authorization for companies affiliated with adversary states, such as China's Huawei, from participating in U.S. telecommunications markets. The agency can do this for phone carriers. But it can't do it for broadband, because it isn't allowed to. Or consider public safety during a crisis. The FCC doesn't have the ability to access the data it needs to know when and where there are broadband outages — much less the ability to do anything about those outages if they are identified. Similarly, it can't impose requirements for network resiliency to help prevent those outages from occurring in the first place — during, say, a natural disaster or a cyberattack.

The agency has ample power to police the types of services that are becoming less relevant in American life, such as landline telephones, and little power to police those that are becoming more important every day.

The FCC acknowledges this power would also allow it to prohibit "throttling" of content. But the Post's editorial also argues that, here in 2023, that's "unlikely to have any major effect on the broadband industry in either direction... Substantial consequences have only become less likely as high-speed bandwidth has become less limited."

Social Networks

New York Seeks To Limit Social Media's Grip On Children's Attention (nytimes.com) 23

An anonymous reader quotes a report from the New York Times: New York State officials on Wednesday unveiled a bill to protect young people from potential mental health risks by prohibiting minors from accessing algorithm-based social media feeds unless they have permission from their parents. Gov. Kathy Hochul and Letitia James, the state attorney general, announced their support of new legislation to crack down on the often inscrutable algorithms, which they argue are used to keep young users on social media platforms for extended periods of time -- sometimes to their detriment. If the bill is passed and signed into law, anyone under 18 in New York would need parental consent to access those feeds on TikTok, Instagram, Facebook, YouTube, X and other social media platforms that use algorithms to display personalized content. While other states have sought far-reaching bans and measures on social media apps, New York is among a few seeking to target the algorithms more narrowly.

The legislation, for example, would target TikTok's central feature, its ubiquitous "For You" feed, which displays boundless reams of short-form videos based on user interests or past interactions. But it would not affect a minor's access to the chronological feeds that show posts published by the accounts that a user has decided to follow. The bill would also allow parents to limit the number of hours their children can spend on a platform and block their child's access to social media apps overnight, from midnight until 6 a.m., as well as pause notifications during that time.

The bill in New York, which could be considered as soon as January when the 2024 legislative session begins, is likely to confront resistance from tech industry groups. The bill's sponsors, State Senator Andrew Gounardes and Assemblywoman Nily Rozic, said they were readying for a fight. But Ms. Hochul's enthusiastic support of the bill -- she rarely joins lawmakers to introduce bills -- is a sign that it could succeed in the State Capitol, which Democrats control. A second bill unveiled on Wednesday is meant to protect children's privacy by prohibiting websites from "collecting, using, sharing, or selling personal data" from anyone under 18 for the purpose of advertising, unless they receive consent, according to a news release. Both bills would empower the state attorney general to go after platforms found in violation.

Government

Senate Panel Advances Bill To Childproof the Internet (theverge.com) 80

An anonymous reader quotes a report from The Verge: Congress is closer than ever to passing a pair of bills to childproof the internet after lawmakers voted to send them to the floor Thursday. The bills -- the Kids Online Safety Act (KOSA) and COPPA 2.0 -- were approved by the Senate Commerce Committee Thursday by a unanimous voice vote. Both pieces of legislation aim to address an ongoing mental health crisis amongst young people that some lawmakers blame social media for intensifying. But critics of the bills have long argued that they have the potential to cause more harm than good, like forcing social media platforms to collect more user information to properly enforce Congress' rules.

KOSA is supposed to establish a new legal standard for the Federal Trade Commission and state attorneys general, allowing them to police companies that fail to prevent kids from seeing harmful content on their platforms. The bill's authors, Sens. Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT), have said the bill keeps kids from seeing content that glamorizes eating disorders, suicidal thoughts, substance abuse, and gambling. It would also ban kids 13 and under from using social media and require companies to acquire parental consent before allowing children under 17 to use their platforms. At Thursday's markup, Blackburn proposed an amendment to remedy some of the concerns raised by digital rights groups, mainly language requiring platforms to verify the age of their users. Lawmakers approved those changes along with the bill, but the groups fear that platforms would still need to collect more data on all users to live up to the bill's other rules. [...] The other bill lawmakers approved, COPPA 2.0, raises the age of protection under the Children's Online Privacy Protection Act from 13 to 16 years of age, along with similar age-gating restrictions. It also bans platforms from targeting ads to kids.

"When it comes to determining the best way to help kids and teens use the internet, parents and guardians should be making those decisions, not the government," Carl Szabo, NetChoice vice president and general counsel, said. "Rather than violating free speech rights and handing parenting over to bureaucrats, we should empower law enforcement with the resources necessary to do its job to arrest and convict bad actors committing online crimes against children."
Social Networks

Reddit is Getting Rid of Its Gold Awards System (theverge.com) 44

Reddit is sunsetting its current coins and awards systems, meaning you soon won't be able to thank a kind stranger for giving you Reddit Gold for one of your posts. From a report: Awards are little icons on posts you might have come across while scrolling around Reddit, and they're given by other users to show appreciation for a post. Perhaps the most commonly known award is Reddit Gold, which shows up as a gold medal with a star, but there are also reaction awards and awards specific to certain communities.

[...] Reddit does have plans for some kind of award system in the future, but the post only provides vague hints about what that might look like. "Rewarding content and contribution (as well as something golden) will still be a core part of Reddit," venkman01 said. "In the coming months, we'll be sharing more about a new direction for awarding that allows redditors to empower one another and create more meaningful ways to reward high-quality contributions on Reddit." In a reply, venkman01 said that "we want to create a system that is simple, easy to use, and easy to understand."

United Kingdom

UK Universities Draw Up Guiding Principles on Generative AI (theguardian.com) 6

UK universities have drawn up a set of guiding principles to ensure that students and staff are AI literate, as the sector struggles to adapt teaching and assessment methods to deal with the growing use of generative artificial intelligence. From a report: Vice-chancellors at the 24 Russell Group research-intensive universities have signed up to the code. They say this will help universities to capitalise on the opportunities of AI while simultaneously protecting academic rigour and integrity in higher education. While once there was talk of banning software like ChatGPT within education to prevent cheating, the guidance says students should be taught to use AI appropriately in their studies, while also making them aware of the risks of plagiarism, bias and inaccuracy in generative AI.

Staff will also have to be trained so they are equipped to help students, many of whom are already using ChatGPT in their assignments. New ways of assessing students are likely to emerge to reduce the risk of cheating. All 24 Russell Group universities have reviewed their academic conduct policies and guidance to reflect the emergence of generative AI. The new guidance says: "These policies make it clear to students and staff where the use of generative AI is inappropriate, and are intended to support them in making informed decisions and to empower them to use these tools appropriately and acknowledge their use where necessary."

AI

Google Makes Its Text-To-Music AI Public (techcrunch.com) 16

An anonymous reader quotes a report from TechCrunch: Google [on Wednesday] released MusicLM, a new experimental AI tool that can turn text descriptions into music. Available in the AI Test Kitchen app on the web, Android or iOS, MusicLM lets users type in a prompt like "soulful jazz for a dinner party" or "create an industrial techno sound that is hypnotic" and have the tool create several versions of the song. Users can specify instruments like "electronic" or "classical," as well as the "vibe, mood, or emotion" they're aiming for, as they refine their MusicLM-generated creations.

When Google previewed MusicLM in an academic paper in January, it said that it had "no immediate plans" to release it. The coauthors of the paper noted the many ethical challenges posed by a system like MusicLM, including a tendency to incorporate copyrighted material from training data into the generated songs. But in the intervening months, Google says it's been working with musicians and hosting workshops to "see how [the] technology can empower the creative process." One of the outcomes? The version of MusicLM in AI Test Kitchen won't generate music with specific artists or vocals. Make of that what you will. It seems unlikely, in any case, that the broader challenges around generative music will be easily remedied.
You can sign up to try MusicLM here.

United Kingdom

Major Tech Firms Face Hefty Fines Under New Digital Consumer Bill (theguardian.com) 52

Major tech firms face the threat of multibillion-pound fines for breaching consumer protection rules under new legislation that will tackle issues including fake online reviews and subscriptions that are difficult to cancel. From a report: The digital markets, competition and consumers bill will empower the UK's competition watchdog to tackle the "excessive dominance" that a small number of tech firms hold over consumers and businesses. Firms that are deemed to have "strategic market status" -- such as tech firms Google and Apple, and online retailer Amazon -- will be given strict rules on how to operate under the bill and face a fine representing up to 10% of global turnover if they breach the new regime.

Without naming these companies, the government said firms could be required to open up their data to rival search engines or increase the transparency of how their app stores and review systems work. Oversight of major tech firms will be carried out by an arm of the Competition and Markets Authority (CMA), the Digital Markets Unit, which will also decide which firms receive strategic market status. The bill, which will be tabled in parliament on Tuesday, is expected to become law next year.

AI

Amazon Launches Startup Accelerator for Generative AI Companies (geekwire.com) 5

The newest startup accelerator from Amazon aims to attract companies building generative AI technologies. From a report: The Amazon Web Services accelerator, revealed Tuesday, is a 10-week program that aims to "empower companies applying generative AI to solutions from legal and marketing, to software engineering, green energy, and life sciences, including drug discovery." It also provides up to $300,000 in AWS credits. The hybrid program is open to all startups, with two week-long in-person events in San Francisco. AWS does not take equity from participating companies. The accelerator is a way for Amazon to draw early-stage startups into its cloud ecosystem.

United States

Biden's Semiconductor Plan Flexes the Power of the Federal Government (nytimes.com) 139

Semiconductor manufacturers seeking a slice of nearly $40 billion in new federal subsidies will need to ensure affordable child care for their workers, limit stock buybacks and share certain excess profits with the government, the Biden administration will announce on Tuesday. From a report: The new requirements represent an aggressive attempt by the federal government to bend the behavior of corporate America to accomplish its economic and national security objectives. As the Biden administration makes the nation's first big foray into industrial policy in decades, officials are also using the opportunity to advance policies championed by liberals that seek to empower workers. While the moves would advance some of the left-behind portions of the president's agenda, they could also set a fraught precedent for attaching policy strings to federal funding.

Last year, a bipartisan group of lawmakers passed the CHIPS Act, which devoted $52 billion to expanding U.S. semiconductor manufacturing and research, in hopes of making the nation less reliant on foreign suppliers for critical chips that power computers, household appliances, cars and more. The prospect of accessing those funds has already enticed domestic and foreign-owned chip makers to announce plans for or begin construction on new projects in Arizona, Texas, Ohio, New York and other states. On Tuesday, the Commerce Department will release its application for manufacturers seeking funds under the law. It will include a variety of requirements that go far beyond simply encouraging semiconductor production.
