AI

Elon Musk Says AI Could Eliminate Our Need to Work at Jobs (cnn.com) 289

In the future, "Probably none of us will have a job," Elon Musk said Thursday, speaking remotely to the VivaTech 2024 conference in Paris. Instead, jobs will be optional — something we'd do like a hobby — "But otherwise, AI and the robots will provide any goods and services that you want."

CNN reports that Musk added this would require "universal high income" — and "There would be no shortage of goods or services." In a job-free future, though, Musk questioned whether people would feel emotionally fulfilled. "The question will really be one of meaning — if the computer and robots can do everything better than you, does your life have meaning?" he said. "I do think there's perhaps still a role for humans in this — in that we may give AI meaning."
CNN accompanied their article with this counterargument: In January, researchers at MIT's Computer Science and Artificial Intelligence Lab found workplaces are adopting AI much more slowly than some had expected and feared. The report also said the majority of jobs previously identified as vulnerable to AI were not economically beneficial for employers to automate at that time. Experts also largely believe that many jobs that require a high emotional intelligence and human interaction will not need replacing, such as mental health professionals, creatives and teachers.
CNN notes that Musk "also used his stage time to urge parents to limit the amount of social media that children can see because 'they're being programmed by a dopamine-maximizing AI'."
Communications

American Radio Relay League Confirms Cyberattack Disrupted Operations (bleepingcomputer.com) 32

Roughly 160,000 U.S.-based amateur radio enthusiasts belong to the American Radio Relay League, a nonprofit with 100 full-time and part-time staff members.

Nine days ago it announced "that it suffered a cyberattack that disrupted its network and systems," reports BleepingComputer, "including various online services hosted by the organization." "We are in the process of responding to a serious incident involving access to our network and headquarters-based systems. Several services, such as Logbook of The World and the ARRL Learning Center, are affected," explained ARRL in a press release... [T]he ARRL took steps to allay members' concerns about the security of their data, confirming that they do not store credit card information or collect social security numbers.

However, the organization confirmed that its member database contains some private information, including names, addresses, and call signs. While the ARRL does not specifically say that email addresses are stored in the database, an email address is required to become a member of the organization.

"The ARRL has not specifically said that its member database has been accessed by hackers," Security Week points out, "but its statement suggests it's possible."

The site adds that it has also "reached out to ARRL to find out if this was a ransomware attack and whether the attackers made any ransom demand."

Thanks to Slashdot reader AzWa Snowbird for sharing the news.
Facebook

Meta, Activision Sued By Parents of Children Killed in Last Year's School Shooting (msn.com) 153

Exactly two years after the fatal shooting of 19 students and two teachers at Robb Elementary School in Texas, victims' parents filed a lawsuit against the publisher of the videogame Call of Duty, against Meta, and against the manufacturer of the AR-15-style weapon used in the attack, Daniel Defense.

The Washington Post says the lawsuits "may be the first of their kind to connect aggressive firearms marketing tactics on social media and gaming platforms to the actions of a mass shooter." The complaints contend the three companies are responsible for "grooming" a generation of "socially vulnerable" young men radicalized to live out violent video game fantasies in the real world with easily accessible weapons of war...

Several states, including California and Hawaii, passed consumer safety laws specific to the sale and marketing of firearms that would open the industry to more civil liability. Texas is not one of them. But it's just one vein in the three-pronged legal push by Uvalde families. The lawsuit against Activision and Meta, which is being filed in California, accuses the tech companies of knowingly promoting dangerous weapons to millions of vulnerable young people, particularly young men who are "insecure about their masculinity, often bullied, eager to show strength and assert dominance."

"To put a finer point on it: Defendants are chewing up alienated teenage boys and spitting out mass shooters," the lawsuit states...

The lawsuit alleges that Meta, which owns Instagram, easily allows gun manufacturers like Daniel Defense to circumvent its ban on paid firearm advertisements to reach scores of young people. Under Meta's rules, gunmakers are not allowed to buy advertisements promoting the sale of or use of weapons, ammunition or explosives. But gunmakers are free to post promotional material about weapons from their own account pages on Facebook and Instagram — a freedom the lawsuit alleges Daniel Defense often exploited.

According to the complaint, the Robb school shooter downloaded a version of "Call of Duty: Modern Warfare" in November 2021 that featured on its opening title page the DDM4V7 model rifle [shooter Salvador] Ramos would later purchase. Drawing from the shooter's social media accounts, the families' attorney Josh Koskoff argued he was being bombarded with explicit marketing and combat imagery from the company on Instagram... The complaint cites Meta's practice, first reported by The Washington Post in 2022, of giving gun sellers wide latitude to knowingly break its rules against selling firearms on its websites. The company has allowed buyers and sellers to violate the rule 10 times before they are kicked off, The Post reported.

The article adds that the lawsuit against Meta "echoes some of the complaints by dozens of state attorneys general and school districts that have accused the tech giant of using manipulative practices to hook... while exposing them to harmful content." It also includes a few excerpts from the text of the lawsuit.
  • It argues that both Meta and Activision "knowingly exposed the Shooter to the weapon, conditioned him to see it as the solution to his problems, and trained him to use it."
  • The lawsuit also compares their practices to another ad campaign accused of marketing harmful products to children: cigarettes. "Over the last 15 years, two of America's largest technology companies — Defendants Activision and Meta — have partnered with the firearms industry in a scheme that makes the Joe Camel campaign look laughably harmless, even quaint."

Meta and Daniel Defense didn't respond to the reporters' requests for comment, but the article quotes a statement from Activision expressing sympathy for the communities and families impacted by the "horrendous and heartbreaking" shooting.

Activision also added that "Millions of people around the world enjoy video games without turning to horrific acts."


The Almighty Buck

Best Buy and Geek Squad Were Most Impersonated Orgs By Scammers In 2023 (theregister.com) 20

An anonymous reader quotes a report from The Register: The Federal Trade Commission (FTC) has shared data on the most impersonated companies in 2023, with Best Buy, Amazon, and PayPal in the top three. The federal agency detailed the top ten companies scammers impersonate and how much they make depending on the impersonation. By far the most impersonated corp was Best Buy and its repair business Geek Squad, with a total of 52,000 reports. Amazon impersonators came in second place with 34,000 reports, and PayPal a distant third with 10,000. Proportionally, the top three made up roughly 72 percent of the reports among the top ten, and Best Buy and Geek Squad scam reports were about 39 percent on their own. High quantity doesn't necessarily translate to greater success for scammers, though, as the FTC also showed how much scammers made depending on which companies they impersonated. Best Buy and Geek Squad, Amazon, and PayPal scams made about $15 million, $19 million, and $16 million respectively, but that's nothing compared to the $60 million that Microsoft impersonators were able to fleece. [...]
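As a quick sanity check, the rounded report counts above are consistent with the article's percentages (the implied top-ten total below is derived from those rounded figures, not stated by the FTC):

```python
# Rounded FTC impersonation-report counts from the article.
best_buy = 52_000
amazon = 34_000
paypal = 10_000
top_three = best_buy + amazon + paypal  # 96,000

# The top three are said to be ~72% of top-ten reports, which implies
# a top-ten total of roughly top_three / 0.72.
implied_top_ten = top_three / 0.72
print(round(implied_top_ten))  # ~133,333 reports across the top ten

# Best Buy/Geek Squad alone as a share of that implied total:
print(round(best_buy / implied_top_ten * 100))  # ~39 percent, as reported
```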

The FTC also reported the vectors scammers use to contact their victims. Phone and email are still the most common means, but social media is becoming increasingly important for scamming and features the most costly scams. The feds additionally disclosed the kinds of payment methods scammers use for all sorts of frauds, including company and individual impersonation scams, investment scams, and romance scams. Cryptocurrency and bank transfers were popular for investment scammers, who are the most prolific on social media, while gift cards were most common for pretty much every other type of scam. However, not all scammers ask for digital payment, as the Federal Bureau of Investigation says that even regular old mail is something scammers are relying on to get their ill-gotten gains.

Transportation

Feds Add Nine More Incidents To Waymo Robotaxi Investigation (techcrunch.com) 36

Federal safety regulators have uncovered nine more accidents during their investigation of Waymo's self-driving vehicles in Phoenix and San Francisco. TechCrunch reports: The National Highway Traffic Safety Administration Office of Defects Investigation (ODI) opened an investigation earlier this month into Waymo's autonomous vehicle software after receiving 22 reports of robotaxis making unexpected moves that led to crashes and potentially violated traffic safety laws. The investigation, which has been designated a "preliminary evaluation," is examining the software and its ability to avoid collisions with stationary objects and how well it detects and responds to "traffic safety control devices" like cones. The agency said Friday it has added (PDF) another nine incidents since the investigation was opened.

Waymo reported some of these incidents. The others were discovered by regulators via public postings on social media and forums like Reddit, YouTube and X. The additional nine incidents include reports of Waymo robotaxis colliding with gates, utility poles, and parked vehicles, driving in the wrong lane with nearby oncoming traffic and into construction zones. The ODI said it's concerned the robotaxis "exhibiting such unexpected driving behaviors may increase the risk of crash, property damage, and injury." The agency said that while it's not aware of any injuries from these incidents, several involved collisions with visible objects that "a competent driver would be expected to avoid." The agency also expressed concern that some of these occurred near pedestrians. NHTSA has given Waymo until June 11 to respond to a series of questions regarding the investigation.

Encryption

Signal Slams Telegram's Security (techcrunch.com) 33

Messaging app Signal's president Meredith Whittaker criticized rival Telegram's security on Friday, saying Telegram founder Pavel Durov is "full of s---" in his claims about Signal. "Telegram is a social media platform, it's not encrypted, it's the least secure of messaging and social media services out there," Whittaker told TechCrunch in an interview. The comments come amid a war of words between Whittaker, Durov and Twitter owner Elon Musk over the security of their respective platforms. Whittaker said Durov's amplification of claims questioning Signal's security was "incredibly reckless" and "actually harms real people."

"Play your games, but don't take them into my court," Whittaker said, accusing Durov of prioritizing being "followed by a professional photographer" over getting facts right about Signal's encryption. Signal uses end-to-end encryption by default, while Telegram only offers it for "secret chats." Whittaker said many in Ukraine and Russia use Signal for "actual serious communications" while relying on Telegram's less-secure social media features. She said the "jury is in" on the platforms' comparative security and that Signal's open source code allows experts to validate its privacy claims, which have the trust of the security community.
AI

Meta AI Chief Says Large Language Models Will Not Reach Human Intelligence (ft.com) 78

Meta's AI chief said the large language models that power generative AI products such as ChatGPT would never achieve the ability to reason and plan like humans, as he focused instead on a radical alternative approach to create "superintelligence" in machines. From a report: Yann LeCun, chief AI scientist at the social media giant that owns Facebook and Instagram, said LLMs had "very limited understanding of logic... do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan... hierarchically."

In an interview with the Financial Times, he argued against relying on advancing LLMs in the quest to make human-level intelligence, as these models can only answer prompts accurately if they have been fed the right training data and are, therefore, "intrinsically unsafe." Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said this vision could take 10 years to achieve. Meta has been pouring billions of dollars into developing its own LLMs as generative AI has exploded, aiming to catch up with rival tech groups, including Microsoft-backed OpenAI and Alphabet's Google.

EU

EU Sets Benchmark For Rest of the World With Landmark AI Laws (reuters.com) 28

An anonymous reader quotes a report from Reuters: Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union's AI Act is more comprehensive than the United States' light-touch, voluntary-compliance approach, while China's aims to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021, after making a number of key changes. [...]

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter. It restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. "The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to EU privacy rules.

While the new legislation will apply in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the new regulation enters into force. Obligations for general-purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products after 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.

Science

'Pay Researchers To Spot Errors in Published Papers' 24

Borrowing the idea of "bug bounties" from the technology industry could provide a systematic way to detect and correct the errors that litter the scientific literature. Malte Elson, writing at Nature: Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In our industry, too, the costs of undetected errors are staggering. That's why I have joined with meta-scientist Ian Hussey at the University of Bern and psychologist Ruben Arslan at Leipzig University in Germany to pilot a bug-bounty programme for science, funded by the University of Bern. Our project, Estimating the Reliability and Robustness of Research (ERROR), pays specialists to check highly cited published papers, starting with the social and behavioural sciences (see go.nature.com/4bmlvkj). Our reviewers are paid a base rate of up to 1,000 Swiss francs (around US$1,100) for each paper they check, and a bonus for any errors they find. The bigger the error, the greater the reward -- up to a maximum of 2,500 francs.

Authors who let us scrutinize their papers are compensated, too: 250 francs to cover the work needed to prepare files or answer reviewer queries, and a bonus of 250 francs if no errors (or only minor ones) are found in their work. ERROR launched in February and will run for at least four years. So far, we have sent out almost 60 invitations, and 13 sets of authors have agreed to have their papers assessed. One review has been completed, revealing minor errors. I hope that the project will demonstrate the value of systematic processes to detect errors in published research. I am convinced that such systems are needed, because current checks are insufficient. Unpaid peer reviewers are overburdened, and have little incentive to painstakingly examine survey responses, comb through lists of DNA sequences or cell lines, or go through computer code line by line. Mistakes frequently slip through. And researchers have little to gain personally from sifting through published papers looking for errors. There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.
Linux

Linux 6.10 Honors One Last Request By Hans Reiser (phoronix.com) 71

Longtime Slashdot reader DVega shares a report from Phoronix: ReiserFS lead developer and convicted murderer Hans Reiser a few months back wrote letters to be made public apologizing for his social mistakes and other commentary. In his written communications he also made a last request for ReiserFS in the Linux kernel: "Assuming that the decision is to remove [ReiserFS] V3 from the kernel, I have just one request: that for one last release the README be edited to add Mikhail Gilula, Konstantin Shvachko, and Anatoly Pinchuk to the credits, and to delete anything in there I might have said about why they were not credited. It is time to let go."

Hans credits prison with improving his social and communication skills, among other details shared in the public letters. Per the request by Hans Reiser, SUSE's Jan Kara has now altered the ReiserFS README file, with the changes going into the Linux 6.10 kernel today. The negative language was removed and replaced with an acknowledgment of the three contributors.

Digital

Gordon Bell, an Architect of Our Digital Age, Dies At Age 89 (arstechnica.com) 6

An anonymous reader quotes a report from Ars Technica: Computer pioneer Gordon Bell, who as an early employee of Digital Equipment Corporation (DEC) played a key role in the development of several influential minicomputer systems and also co-founded the first major computer museum, passed away on Friday, according to Bell Labs veteran John Mashey. Mashey announced Bell's passing in a social media post on Tuesday morning. "I am very sad to report [the] death May 17 at age 89 of Gordon Bell, famous computer pioneer, a founder of Computer Museum in Boston, and a force behind the @ComputerHistory here in Silicon Valley, and good friend since the 1980s," wrote Mashey in his announcement. "He succumbed to aspiration pneumonia in Coronado, CA."

Bell was a pivotal figure in the history of computing and a notable champion of tech history, having founded Boston's Computer Museum in 1979, which later became the heart of the Computer History Museum in Mountain View, with his wife Gwen Bell. He was also the namesake of the ACM's prestigious Gordon Bell Prize, created to spur innovations in parallel processing.
Bell also joined Microsoft in 1995, where he "studied telepresence technologies and served as the subject of the MyLifeBits life-logging project," reports Ars. "The initiative aimed to realize Vannevar Bush's vision of a system that could store all the documents, photos, and audio a person experienced in their lifetime."

Former Windows VP Steven Sinofsky said Bell "was immeasurably helpful at Microsoft where he was a founding advisor and later full time leader in Microsoft Research. He advised and supported countless researchers, projects, and product teams. He was always supportive and insightful beyond words. He never hesitated to provide insights and a few sparks at so many of the offsites that were so important to the evolution of Microsoft."

"His memory is a blessing to so many," added Sinofsky in a post memorializing Bell. "His impact on all of us in technology will be felt for generations. May he rest in peace."
Technology

Match Group, Meta, Coinbase And More Form Anti-Scam Coalition (engadget.com) 23

An anonymous reader shares a report: Scams are all over the internet, and AI is making matters worse (no, Taylor Swift didn't give away Le Creuset pans, and Tom Hanks didn't promote a dental plan). Now, companies such as Match Group, Meta and Coinbase are launching Tech Against Scams, a new coalition focused on collaboration to prevent online fraud and financial schemes. They will "collaborate on ways to take action against the tools used by scammers, educate and protect consumers and disrupt rapidly evolving financial scams."

Meta, Coinbase and Match Group -- which owns Hinge and Tinder -- first joined forces on this issue last summer but are now teaming up with additional digital, social media and crypto companies, along with the Global Anti-Scam Organization. A major focus of this coalition is pig butchering scams, a type of fraud in which a scammer tricks someone into giving them more and more money through trusting digital relationships, both romantic and platonic in nature.

Transportation

Some People Who Rented a Tesla from Hertz Were Still Charged for Gas (thedrive.com) 195

"Last week, we reported on a customer who was charged $277 for gasoline his rented Tesla couldn't have possibly used," writes the automotive blog The Drive.

"And now, we've heard from other Hertz customers who say they've been charged even more." Hertz caught attention last week for how it handled a customer whom it had charged a "Skip the Pump" fee, which allows renters to pay a premium for Hertz to refill the tank for them. But of course, this customer's rented Tesla Model 3 didn't use gas — it draws power from a battery — and Hertz has a separate, flat fee for EV recharges. Nevertheless, the customer was charged $277.39 despite returning the car with the exact same charge they left with, and Hertz refused to refund it until after our story ran. It's no isolated incident either, as other customers have written in to inform us that it happened to them, too....

Evan Froehlich returned the rental at 21 percent charge, expecting to pay a flat $25 recharge fee. (It's ordinarily $35, but Hertz's loyalty program discounts it.) To Froehlich's surprise, he was hit with a $340.97 "Skip the Pump" fee, which can be applied after returning a car if it's not requested beforehand. He says Hertz's customer service was difficult to reach, and that it took making a ruckus on social media to get Hertz's attention. In the end, a Hertz representative was able to review the charge and have it reversed....

A March 2023 Facebook post documenting a similar case indicates this has been happening for more than a year.

After renting a Tesla Model 3, another customer even got a $475.19 "fuel charge," according to the article — in addition to a $25 charging fee: They also faced a $125.01 "rebill" for using the Supercharger network during their rental, a charge other Hertz customers have also met with surprise and frustration. Charging costs can vary, but a 75-percent charge from a Supercharger will often cost in the region of just $15.
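A back-of-envelope estimate shows how far the $125.01 rebill sits from typical charging costs. The pack size and per-kWh rate below are illustrative assumptions, not figures from the article:

```python
# Rough check on the article's ~$15 Supercharger figure.
# Assumptions (not from the article): a Tesla Model 3 pack of roughly
# 60 kWh and a typical Supercharger rate around $0.35 per kWh.
PACK_KWH = 60
RATE_USD_PER_KWH = 0.35

energy_for_75_pct = 0.75 * PACK_KWH          # ~45 kWh delivered
cost = energy_for_75_pct * RATE_USD_PER_KWH  # ~$15.75
print(f"${cost:.2f}")  # in the region of the ~$15 the article cites
```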
Crime

What Happened After a Reporter Tracked Down The Identity Thief Who Stole $5,000 (msn.com) 46

"$5,000 in cash had been withdrawn from my checking account — but not by me," writes journalist Linda Matchan in the Boston Globe. A police station manager reviewed footage from the bank — which was 200 miles away — and deduced that "someone had actually come into the bank and spoken to a teller, presented a driver's license, and then correctly answered some authentication questions to validate the account..." "You're pitting a teller against a national crime syndicate with massive resources behind them," says Paul Benda, executive vice president for risk, fraud, and cybersecurity at the American Bankers Association. "They're very well-funded, well-resourced criminal gangs doing this at an industrial scale."
The reporter writes that "For the past two years, I've worked to determine exactly who and what lay behind this crime..." [N]ow I had something new to worry about: Fraudsters apparently had a driver's license with my name on it... "Forget the fake IDs adolescents used to get into bars," says Georgia State's David Maimon, who is also head of fraud insights at SentiLink, a company that works with institutions across the United States to support and solve their fraud and risk issues. "Nowadays fraudsters are using sophisticated software and capable printers to create virtually impossible-to-detect fake IDs." They're able to create synthetic identities, combining legitimate personal information, such as a name and date of birth, with a nine-digit number that either looks like a Social Security number or is a real, stolen one. That ID can then be used to open financial accounts, apply for a bank or car loan, or for some other dodgy purpose that could devastate their victims' financial lives.



And there's a complex supply chain underpinning it all — "a whole industry on the dark web," says Eva Velasquez, president and CEO of the Identity Theft Resource Center, a nonprofit that helps victims undo the damage wrought by identity crime. It starts with the suppliers, Maimon told me — "the people who steal IDs, bring them into the market, and manufacture them. There's the producers who take the ID and fake driver's licenses and build the facade to make it look like they own the identity — trying to create credit reports for the synthetic identities, for example, or printing fake utility bills." Then there are the distributors who sell them in the dark corners of the web or the street or through text messaging apps, and finally the customers who use them and come from all walks of life. "We're seeing females and males and people with families and a lot of adolescents, because social media plays a very important role in introducing them to this world," says Maimon, whose team does surveillance of criminals' activities and interactions on the dark web. "In this ecosystem, folks disclose everything they do."

The reporter writes that "It's horrifying to discover, as I have recently, that someone has set up a tech company that might not even be real, listing my home as its principal address."

Two and a half months after the theft, the stolen $5,000 was back in the reporter's bank account — but it wasn't until a year later that the thief was identified. "The security video had been shared with New York's Capital Region Crime Analysis Center, where analysts have access to facial recognition technology, and was run through a database of booking photos. A possible match resulted.... She was already in custody elsewhere in New York... Evidently, Deborah was being sought by law enforcement in at least three New York counties. [All three cases involved bank-related identity fraud.]"

Deborah was finally charged with two separate felonies: grand larceny in the third degree for stealing property over $3,000, and identity theft. But Deborah missed her next two court dates, and disappeared. "She never came back to court, and now there were warrants for her arrest out of two separate courts."

After speaking to police officials, the reporter concludes "There was a good chance she was only doing the grunt work for someone else, maybe even a domestic or foreign-organized crime syndicate, and then suffering all the consequences."

The UK minister of state for security even says that "in some places people are literally captured and used as unwilling operators for fraudsters."
Government

Are AI-Generated Search Results Still Protected by Section 230? (msn.com) 63

Starting this week, millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded from liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it...

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness."

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online."

The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot."

The article also notes that several U.S. Congressional committees are considering "a bevy" of AI bills...
Social Networks

France Bans TikTok In New Caledonia (politico.eu) 48

In what's marked as an EU first, the French government has blocked TikTok in its territory of New Caledonia amid widespread pro-independence protests. Politico reports: A French draft law, passed Monday, would let citizens vote in local elections after 10 years' residency in New Caledonia, prompting opposition from independence activists worried it will dilute the representation of indigenous people. The violent demonstrations that have ensued in the South Pacific island of 270,000 have killed at least five people and injured hundreds. In response to the protests, the government suspended the popular video-sharing app -- owned by Beijing-based ByteDance and favored by young people -- as part of state-of-emergency measures alongside the deployment of troops and an initial 12-day curfew.

French Prime Minister Gabriel Attal didn't detail the reasons for shutting down the platform. The local telecom regulator began blocking the app earlier on Wednesday. "It is regrettable that an administrative decision to suspend TikTok's service has been taken on the territory of New Caledonia, without any questions or requests to remove content from the New Caledonian authorities or the French government," a TikTok spokesperson said. "Our security teams are monitoring the situation very closely and ensuring that our platform remains safe for our users. We are ready to engage in discussions with the authorities."

Digital rights NGO Quadrature du Net on Friday contested the TikTok suspension with France's top administrative court over a "particularly serious blow to freedom of expression online." A growing number of authoritarian regimes worldwide have resorted to internet shutdowns to stifle dissent. This unexpected -- and drastic -- decision by France's center-right government comes amid a rise in far-right activism in Europe and a regression on media freedom. "France's overreach establishes a dangerous precedent across the globe. It could reinforce the abuse of internet shutdowns, which includes arbitrary blocking of online platforms by governments around the world," said Eliska Pirkova, global freedom of expression lead at Access Now.

Privacy

User Outcry As Slack Scrapes Customer Data For AI Model Training (securityweek.com) 34

New submitter txyoji shares a report: Enterprise workplace collaboration platform Slack has sparked a privacy backlash with the revelation that it has been scraping customer data, including messages and files, to develop new AI and ML models. By default, and without requiring users to opt-in, Slack said its systems have been analyzing customer data and usage information (including messages, content and files) to build AI/ML models to improve the software.

The company insists it has technical controls in place to block Slack from accessing the underlying content and promises that data will not leak across workspaces but, despite these assurances, corporate Slack admins are scrambling to opt out of the data scraping. This line in Slack's communication sparked a social media controversy with the realization that content in direct messages and other sensitive content posted to Slack was being used to develop AI/ML models and that opting out would require sending e-mail requests: "If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line 'Slack global model opt-out request'. We will process your request and respond once the opt-out has been completed."

Social Networks

Reddit Reintroduces Its Awards System (techcrunch.com) 20

After shutting down its awards system last July, Reddit announced that it is bringing it back, with much of the same and some new features. There'll be "a new design for awards, a new award button under eligible posts and a leaderboard showing top awards earned for a comment or a post," reports TechCrunch. From the report: The company sunset its awards program last year along with the ability for users to purchase coins. At the same time, Reddit introduced "Golden Upvotes," which were purchased directly through cash. In a new post, the company said the system wasn't as expressive as awards. "While the golden upvote was certainly simpler in theory, in practice, it missed the mark. It wasn't as fun or expressive as legacy awards, and it was unclear how it benefited the recipient," the social network said.

Users who want to give awards to posts and comments will need to buy "gold," which kind of replaces coins. On a support page, the company mentioned that, on average, awards cost anywhere between 15 and 50 gold. Gold packages in Reddit's mobile apps currently start at $1.99 for 100 gold. Users can buy as much as 2,750 gold for $49.99. The company is also adding some safeguards to the awards system, such as disabling awards in NSFW subreddits, trauma and addiction support subreddits, and subreddits with mature content. Additionally, users will be able to report awards to avoid them being used for moderator removals.

Social Networks

Another Billionaire Pushes a Bid For TikTok, But To Decentralize It (techdirt.com) 68

An anonymous reader quotes a report from Techdirt, written by Mike Masnick: If you're a fan of chaos, well, the TikTok ban situation is providing plenty of chaos to follow. Ever since the US government made it clear it was seriously going to move forward with the obviously unconstitutional and counterproductive plan to force ByteDance to divest from TikTok or have the app effectively banned from the U.S., various rich people have been stepping up with promises to buy the app. There was former Trump Treasury Secretary Steven Mnuchin with plans to buy it. Then there was "mean TV investor, who wants you to forget his sketchy history" Kevin O'Leary with his own TikTok buyout plans. I'm sure there have been other rich dudes as well, though strikingly few stories of actual companies interested in purchasing TikTok.

But now there's another billionaire to add to the pile: billionaire real estate/property mogul Frank McCourt (who has had some scandals in his own history) has had an interesting second act over the last few years as a big believer in decentralized social media. He created and funded Project Liberty, which has become deeply involved in a number of efforts to create infrastructure for decentralized social media, including its own Decentralized Social Networking Protocol (DSNP).

Over the past few years, I've had a few conversations with people involved in Project Liberty and related projects. Their hearts are in the right place in wanting to rethink the internet in a manner that empowers users over big companies, even if I don't always agree with their approach (he also frequently seems to surround himself with all sorts of tech haters, who have somewhat unrealistic visions of the world). Either way, McCourt and Project Liberty have now announced a plan to bid on TikTok. They plan to merge it into his decentralization plans.
"Frank McCourt, Founder of Project Liberty and Executive Chairman of McCourt Global, today announced that Project Liberty is organizing a bid to acquire the popular social media platform TikTok in the U.S., with the goal of placing people and data empowerment at the center of the platform's design and purpose," reads a press release from Project Liberty.

"Working in consultation with Guggenheim Securities, the investment banking and capital markets business of Guggenheim Partners, and Kirkland & Ellis, one of the world's largest law firms, as well as world-renowned technologists, academics, community leaders, parents and engaged citizens, this bid for TikTok offers an innovative, alternative vision for the platform's infrastructure -- one that allows people to reclaim agency over their digital identities and data by proposing to migrate the platform to a new digital open-source protocol. In launching the bid, McCourt and his partners are seizing this opportunity to return control and value back into the hands of individuals and provide Americans with a meaningful voice, choice, and stake in the future of the web."
