Windows

Microsoft Says It's Not Planning To Use AI To Rewrite Windows From C To Rust

Microsoft has denied any plans to rewrite Windows 11 using AI and Rust after a LinkedIn post from one of its top-level engineers sparked a wave of online backlash by claiming the company's goal was to "eliminate every line of C and C++ from Microsoft by 2030."

Galen Hunt, a principal software engineer responsible for several large-scale research projects at Microsoft, made the claim in what was originally a hiring post for his team. His original wording described a "North Star" of "1 engineer, 1 month, 1 million lines of code" and outlined a strategy to "combine AI and Algorithms to rewrite Microsoft's largest codebases." The repeated use of "our" in the post led many to interpret it as an official company direction rather than a personal research ambition.

Frank X. Shaw, Microsoft's head of communications, told Windows Latest that the company has no such plans. Hunt subsequently edited his LinkedIn post to clarify that "Windows is NOT being rewritten in Rust with AI" and that his team's work is a research project focused on building technology to enable language-to-language migration. He characterized the reaction as "speculative reading between the lines."
Biotech

23andMe Says 15% of Customers Asked To Delete Their Genetic Data Since Bankruptcy (techcrunch.com)

Since filing for bankruptcy in March, 23andMe has received data deletion requests from 1.9 million users -- around 15% of its customer base. That number was revealed by 23andMe's interim chief executive Joseph Selsavage during a House Oversight Committee hearing, during which lawmakers scrutinized the company's sale following an earlier bankruptcy auction. "The bankruptcy sparked concerns that the data of millions of Americans who used 23andMe could end up in the hands of an unscrupulous buyer, prompting customers to ask the company to delete their data," adds TechCrunch. From the report: Pharmaceutical giant Regeneron won the court-approved auction in May, offering $256 million for 23andMe and its banks of customers' DNA and genetic data. Regeneron said it would use the 23andMe data to aid the discovery of new drugs, and committed to maintain 23andMe's privacy practices. Truly deleting your personal genetic information from the DNA testing company is easier said than done. But if you were a 23andMe customer and are interested, MIT Technology Review outlines the steps you can take.
Advertising

Will Consumer Data Collection Lead to Algorithm-Adjusted 'Surveillance Pricing'? (msn.com)

An anonymous reader shared this report from the Washington Post's "Tech Brief": Last fall, reports that Kroger was considering bringing facial recognition technology into its stores sparked outcry from lawmakers and customers. They worried personalized data could be used to charge different prices for different customers based on their shopping habits, financial circumstances or appearance. Kroger, the country's largest supermarket chain, had already been using digital price tags in its stores.

Kroger told lawmakers that it doesn't use facial recognition to help it set prices, a stance the company reiterated to the Tech Brief on Thursday. Still, the uproar helped to spark a push by consumer advocates who warn that the threat of invasive, personalized pricing schemes is real. Now, Democratic lawmakers in several states are working to ban so-called "surveillance pricing" — when businesses charge customers more or less for the same item based on their personal information.

Besides a bill in California, three more bills were introduced this month in Colorado, Georgia, and Illinois that also ban "surveillance wages," which the article defines as employers adjusting wages based on data collected about an employee. "Both surveillance pricing and surveillance wages really disrupt fundamental ideals of fairness," University of California, Irvine law professor Veena Dubal tells the Washington Post.

Dubal is one of the consumer advocates behind a new report which notes information released last month by America's consumer-protecting FTC that "suggests that surveillance pricing tools are being actively developed and marketed across a range of industries, including consumer-facing businesses like 'grocery stores, apparel retailers, health and beauty retailers, home goods and furnishing stores, convenience stores, building and hardware stores, and general merchandise retailers such as department or discount stores.'" The consumer advocates (which include the Electronic Privacy Information Center) put it this way:

"Imagine walking into a grocery store and seeing a price for milk that's higher than what the next shopper pays because an algorithm calculated that you're willing to spend more..."
Privacy

MoneyGram Says Hackers Stole Customers' Personal Information, Transaction Data (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: U.S. money transfer giant MoneyGram has confirmed that hackers stole its customers' personal information and transaction data during a cyberattack last month. The company said in a statement Monday that an unauthorized third party "accessed and acquired" customer data during the cyberattack on September 20. The cyberattack -- the nature of which remains unknown -- sparked a week-long outage that resulted in the company's website and app falling offline. MoneyGram says it serves over 50 million people in more than 200 countries and territories each year.

The stolen customer data includes names, phone numbers, postal and email addresses, dates of birth, and national identification numbers. The data also includes a "limited number" of Social Security numbers and government identification documents, such as driver's licenses and other documents that contain personal information, like utility bills and bank account numbers. MoneyGram said the types of stolen data will vary by individual. MoneyGram said that the stolen data also included transaction information, such as dates and amounts of transactions, and, "for a limited number of consumers, criminal investigation information (such as fraud)."

The Military

Telegram CEO Pavel Durov's Arrest Upends Kremlin Military Communications (politico.eu)

Telegram founder and CEO Pavel Durov was arrested Saturday night by French authorities on allegations that his social media platform was being used for child pornography, drug trafficking and organized crime. The move sparked debate over free speech worldwide from prominent anti-censorship figures including Elon Musk, Robert F. Kennedy Jr. and Edward Snowden. However, "the immediate freakout came from Russia," reports Politico. "That's because Telegram is widely used by the Russian military for battlefield communications thanks to problems with rolling out its own secure comms system. It's also the primary vehicle for pro-war military bloggers and media -- as well as millions of ordinary Russians." From the report: "They practically detained the head of communication of the Russian army," Russian military blogger channel Povernutie na Z Voine said in a Telegram statement. The blog site Dva Mayora said that Russian specialists are working on an alternative to Telegram, but that the Russian army's Main Communications Directorate has "not shown any real interest" in getting such a system to Russian troops. The site said Durov's arrest may actually speed up the development of an independent comms system. Alarmed Russian policymakers are calling for Durov's release.

"[Durov's] arrest may have political grounds and be a tool for gaining access to the personal information of Telegram users," the Deputy Speaker of the Russian Duma Vladislav Davankov said in a Telegram statement. "This cannot be allowed. If the French authorities refuse to release Pavel Durov from custody, I propose making every effort to move him to the UAE or the Russian Federation. With his consent, of course." Their worry is that Durov may hand over encryption keys to the French authorities, allowing access to the platform and any communications that users thought was encrypted.

French President Emmanuel Macron said Monday that the arrest of Durov was "in no way a political decision." The Russian embassy has demanded that it get access to Durov, but the Kremlin has so far not issued a statement on the arrest. "Before saying anything, we should wait for the situation to become clearer," said Kremlin spokesperson Dmitry Peskov. However, officials and law enforcement agencies were instructed to clear all their communications from Telegram, the pro-Kremlin channel Baza reported. "Everyone who is used to using the platform for sensitive conversations should delete those conversations right now and not do it again," Kremlin propagandist Margarita Simonyan said in a Telegram post. "Durov has been shut down to get the keys. And he's going to give them."

Businesses

Amazon is Bricking Primary Feature on $160 Echo Device After 1 Year (arstechnica.com)

Amazon is canceling its PhotosPlus subscription service for the Echo Show 8 Photos Edition, effectively ending the device's main selling point. The company will automatically cancel all PhotosPlus subscriptions on September 12 and cease support for the service on September 23. The Echo Show 8 Photos Edition, launched in September 2023, allowed users to display personal photos indefinitely on the home screen for a $2 monthly fee.

Without PhotosPlus, the device will revert to showing ads and promotions after three hours, like standard Echo Show 8 models. An Amazon spokesperson said the Photos Edition was discontinued in March, citing regular product evaluations based on customer feedback. Users can still display photos on the device, but not indefinitely. The move has sparked criticism from customers who paid a $10 premium for ad-free photo display.
Security

Cyberattack Knocks Mobile Guardian MDM Offline, Wipes Thousands of Student Devices (techcrunch.com)

Zack Whittaker reports via TechCrunch: A cyberattack on Mobile Guardian, a U.K.-based provider of educational device management software, has sparked outages at schools across the world and has left thousands of students unable to access their files. Mobile Guardian acknowledged the cyberattack in a statement on its website, saying it identified "unauthorized access to the iOS and ChromeOS devices enrolled to the Mobile Guardian platform." The company said the cyberattack "affected users globally," including in North America, Europe and Singapore, and that the incident resulted in an unspecified portion of its userbase having their devices unenrolled from the platform and "wiped remotely." "Users are not currently able to log in to the Mobile Guardian Platform and students will experience restricted access on their devices," the company said.

Mobile device management (MDM) software allows businesses and schools to remotely monitor and manage entire fleets of devices used by employees or students. Singapore's Ministry of Education, touted as a significant customer of Mobile Guardian on the company's website since 2020, said in a statement overnight that thousands of its students had devices remotely wiped during the cyberattack. "Based on preliminary checks, about 13,000 students in Singapore from 26 secondary schools had their devices wiped remotely by the perpetrator," the Singaporean education ministry said in a statement. The ministry said it was removing the Mobile Guardian software from its fleet of student devices, including affected iPads and Chromebooks.

Microsoft

Is the New 'Recall' Feature in Windows a Security and Privacy Nightmare? (thecyberexpress.com)

Slashdot reader storagedude shares a provocative post from the cybersecurity news blog of Cyble Inc. (a Y Combinator-backed company promising "AI-powered actionable threat intelligence").

The post delves into concerns that the new "Recall" feature planned for Windows (on upcoming Copilot+ PCs) is "a security and privacy nightmare." Copilot Recall will be enabled by default and will capture frequent screenshots, or "snapshots," of a user's activity and store them in a local database tied to the user account. The potential for exposure of personal and sensitive data through the new feature has alarmed security and privacy advocates and even sparked a UK inquiry into the issue. In a long Mastodon thread on the new feature, Windows security researcher Kevin Beaumont wrote, "I'm not being hyperbolic when I say this is the dumbest cybersecurity move in a decade. Good luck to my parents safely using their PC."

In a blog post on Recall security and privacy, Microsoft said that processing and storage are done only on the local device and encrypted, but even Microsoft's own explanations raise concerns: "Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry." Security and privacy advocates take issue with assertions that the data is stored securely on the local device. If someone has a user's password or if a court orders that data be turned over for legal or law enforcement purposes, the amount of data exposed could be much greater with Recall than would otherwise be exposed... And hackers, malware and infostealers will have access to vastly more data than they would without Recall.

Beaumont said the screenshots are stored in a SQLite database, "and you can access it as the user including programmatically. It 100% does not need physical access and can be stolen.... Recall enables threat actors to automate scraping everything you've ever looked at within seconds."
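Beaumont's point about programmatic access is easy to demonstrate. The sketch below is a hypothetical stand-in: the actual table and column names of Recall's local database are not public, so everything here is an assumption used only to show why a user-readable SQLite file offers no protection against code running as that same user.

```python
import sqlite3

# Hypothetical stand-in for Recall's local snapshot index. The real
# schema is not public; table and column names here are invented.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE snapshots (id INTEGER PRIMARY KEY,"
    " captured_at TEXT, app TEXT, ocr_text TEXT)")
db.execute(
    "INSERT INTO snapshots (captured_at, app, ocr_text) VALUES (?, ?, ?)",
    ("2024-06-01T10:00:00", "browser", "account: alice password: hunter2"))

# Any code running as the logged-in user can sweep the whole history
# with one query -- no physical access to the machine required.
leaked = list(db.execute(
    "SELECT captured_at, app, ocr_text FROM snapshots"
    " WHERE ocr_text LIKE '%password%'"))
print(leaked[0][2])
```

Since Recall does no content moderation, the OCR'd text of a snapshot can contain exactly this kind of credential, and a keyword query like the one above is all an infostealer would need.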

Beaumont's LinkedIn profile and blog say that starting in 2020 he worked at Microsoft for nearly a year as a senior threat intelligence analyst. And now Beaumont's Mastodon post is also raising other concerns (according to Cyble's blog post):
  • "Sensitive data deleted by users will still be saved in Recall screenshots... 'If you or a friend use disappearing messages in WhatsApp, Signal etc, it is recorded regardless.'"
  • "Beaumont also questioned Microsoft's assertion that all this is done locally."

The blog post also notes that Leslie Carhart, Director of Incident Response at Dragos, had this reaction to Beaumont's post. "The outrage and disbelief are warranted."


The Almighty Buck

A $700 Million Bonanza for the Winners of Crypto's Collapse: Lawyers (msn.com)

An anonymous Slashdot reader shared this report from the New York Times: The collapse in cryptocurrency prices last year forced a procession of major firms into bankruptcy, triggering a government crackdown and erasing the savings of millions of inexperienced investors. But for a small group of corporate turnaround specialists, crypto's implosion has become a financial bonanza.

Lawyers, accountants, consultants, cryptocurrency analysts and other professionals have racked up more than $700 million in fees since last year from the bankruptcies of five major crypto firms, including the digital currency exchange FTX, according to a New York Times analysis of court records. That sum is likely to grow significantly as the cases unfold over the coming months. Large fees are common in corporate bankruptcies, which require complex and time-intensive legal work to untangle. But in the crypto world, the mounting fees have sparked widespread outrage because many of the people owed money are amateur traders who lost their personal savings, rather than corporations with the ability to weather a financial crisis. Every dollar in fees is deducted from the pool of funds that will be returned to creditors at the end of the bankruptcies.

The fees are "exorbitant and ridiculous," said Daniel Frishberg, a 19-year-old investor who lost about $3,000 when the crypto company Celsius Network filed for bankruptcy last year. "At every hearing, they have an army of people there, and most of them don't need to be there. You don't need 20 people taking notes."

AI

OpenAI To Offer Remedies To Resolve Italy's ChatGPT Ban (apnews.com)

The company behind ChatGPT will propose measures to resolve data privacy concerns that sparked a temporary Italian ban on the artificial intelligence chatbot, regulators said Thursday. The Associated Press reports: In a video call late Wednesday between the watchdog's commissioners and OpenAI executives including CEO Sam Altman, the company promised to set out measures to address the concerns. Those remedies have not been detailed. The Italian watchdog said it didn't want to hamper AI's development but stressed to OpenAI the importance of complying with the 27-nation EU's stringent privacy rules. The regulators imposed the ban after some users' messages and payment information were exposed to others. They also questioned whether there's a legal basis for OpenAI to collect massive amounts of data used to train ChatGPT's algorithms and raised concerns the system could sometimes generate false information about individuals.

Other regulators in Europe and elsewhere have started paying more attention after Italy's action. Ireland's Data Protection Commission said it's "following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU Data Protection Authorities in relation to this matter." France's data privacy regulator, CNIL, said it's investigating after receiving two complaints about ChatGPT. Canada's privacy commissioner also has opened an investigation into OpenAI after receiving a complaint about the suspected "collection, use and disclosure of personal information without consent." In a blog post this week, the U.K. Information Commissioner's Office warned that "organizations developing or using generative AI should be considering their data protection obligations from the outset" and design systems with data protection as a default. "This isn't optional -- if you're processing personal data, it's the law," the office said.

In an apparent response to the concerns, OpenAI published a blog post Wednesday outlining its approach to AI safety. The company said it works to remove personal information from training data where feasible, fine-tune its models to reject requests for personal information of private individuals, and acts on requests to delete personal information from its systems.

Businesses

Are Drone Delivery Services Finally Taking Off? (kiplinger.com)

Amazon isn't the only company that's started drone-delivery services. Kiplinger.com reports: Walmart has 37 stores set up for drone delivery to homes and businesses — six stores in Arizona, four in Arkansas, nine Walmarts in Florida, two in North Carolina, 11 in Texas, two in Utah and three in Virginia. Walmart has partnered with drone delivery service DroneUp Delivery to deliver customers' packages that weigh 10 pounds or less. Walmart says that more than 10,000 items are available for drone delivery and items can arrive as quickly as 30 minutes after the order has been placed.

There are restrictions: Customers must live within one mile of participating stores. Orders are accepted on the DroneUp Delivery website from 8 a.m. until 8 p.m. local time. "If it fits safely, it flies," Walmart said in a statement. "Participating stores will house a DroneUp delivery hub inclusive of a team of certified pilots, operating within FAA guidelines, that safely manage flight operations for deliveries. Once a customer places an order, the item is fulfilled from the store, packaged, loaded into the drone and delivered right to their yard using a cable that gently lowers the package."

Oh, and the top-selling item at one of Walmart's drone ports? Hamburger Helper. Just sayin'.

The Street notes predictions of increasing numbers of drone deliveries: A March 2022 report by the consulting firm McKinsey & Co. found that more than 660,000 commercial drone deliveries were made to customers in the past three years and more than 2,000 drone deliveries are occurring each day worldwide. The report projected that this year close to 1.5 million deliveries will be made by drones, about triple the number in 2021.
But Business Insider reported last May that at least eight Amazon drones had crashed during testing in the past year, including one that sparked a 20-acre brush fire in eastern Oregon in June of 2021 after the drone's motors failed.

It's part of why The Street writes that the very idea of drone-delivery service has also "hit some turbulence along the way." There's plenty of skepticism about the practicality of broad-scale use of delivery drones. "[Because] of technical and financial limitations, drones are unlikely to be the future of package delivery on a mass scale," The New York Times' Shira Ovide reported in June. And safety is a critical concern. In 2018, hundreds of flights at Gatwick Airport near London were canceled following reports of drone sightings close to the runway. In September a delivery drone crashed into power lines in the Australian town of Browns Plains and knocked out power for more than 2,000 customers.

A survey by the business intelligence firm Morning Consult found that 57% of the respondents said they had little or no trust in the devices for deliveries, compared with 43% who said they had "a lot" or "some" trust. Respondents said they were worried about unsuccessful deliveries of items and threats to personal and data privacy related to using drones for delivery, including deliveries performed by Chinese-made drones.

United States

Intuit To Pay $141 Million Settlement Over 'Free' TurboTax Ads (apnews.com)

The company behind the TurboTax tax-filing program will pay $141 million to customers across the United States who were deceived by misleading promises of free tax-filing services, New York's attorney general announced Wednesday. From a report: Under the terms of a settlement signed by the attorneys general of all 50 states, Mountain View, California-based Intuit Inc. will suspend TurboTax's "free, free, free" ad campaign and pay restitution to nearly 4.4 million taxpayers, New York Attorney General Letitia James said. James said her investigation into Intuit was sparked by a 2019 ProPublica report that found the company was using deceptive tactics to steer low-income tax filers away from the federally supported free services for which they qualified -- and toward its own commercial products, instead.
The Courts

Web Scraping is Legal, US Appeals Court Reaffirms (techcrunch.com)

Good news for archivists, academics, researchers and journalists: Scraping publicly accessible data is legal, according to a U.S. appeals court ruling. From a report: The landmark ruling by the U.S. Court of Appeals for the Ninth Circuit is the latest in a long-running legal battle brought by LinkedIn aimed at stopping a rival company from scraping personal information from users' public profiles. The case reached the U.S. Supreme Court last year but was sent back to the Ninth Circuit for the original appeals court to re-review the case. In its second ruling on Monday, the Ninth Circuit reaffirmed its original decision and found that scraping data that is publicly accessible on the internet is not a violation of the Computer Fraud and Abuse Act, or CFAA, which governs what constitutes computer hacking under U.S. law.

The Ninth Circuit's decision is a major win for archivists, academics, researchers and journalists who use tools to mass collect, or scrape, information that is publicly accessible on the internet. Without a ruling in place, long-running projects to archive websites no longer online and using publicly accessible data for academic and research studies have been left in legal limbo. But there have been egregious cases of scraping that have sparked privacy and security concerns. Facial recognition startup Clearview AI claims to have scraped billions of social media profile photos, prompting several tech giants to file lawsuits against the startup. Several companies, including Facebook, Instagram, Parler, Venmo and Clubhouse have all had users' data scraped over the years.
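The kind of scraping the ruling covers is mechanically simple: fetch a publicly accessible page and parse out structured fields. The sketch below uses an invented HTML snippet in place of a live page (a real scraper would fetch the page with `urllib` first); the class names and profile fields are assumptions for the example.

```python
from html.parser import HTMLParser

# Invented stand-in for a publicly accessible profile page; a real
# scraper would download this over HTTP before parsing it.
PUBLIC_PROFILE = """
<html><body>
  <h1 class="name">Jane Doe</h1>
  <p class="title">Security Researcher</p>
</body></html>
"""

class ProfileScraper(HTMLParser):
    """Collect the text of elements whose class names we care about."""
    def __init__(self):
        super().__init__()
        self._field = None   # class name of the tag we are inside, if any
        self.fields = {}     # extracted field name -> text

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in ("name", "title"):
            self._field = cls

    def handle_data(self, data):
        if self._field and data.strip():
            self.fields[self._field] = data.strip()
            self._field = None

scraper = ProfileScraper()
scraper.feed(PUBLIC_PROFILE)
print(scraper.fields)  # {'name': 'Jane Doe', 'title': 'Security Researcher'}
```

The CFAA question turned on access, not technique: because pages like this are served to anyone without authentication, collecting them at scale is not "hacking" under the statute, though the Clearview cases show it can still raise the privacy concerns described above.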

EU

WhatsApp Gets EU Ultimatum After New Terms Spark Backlash (bloomberg.com)

Meta Platforms' WhatsApp was given a month to answer European Union concerns over new terms of service that sparked outrage among consumers and privacy campaigners. From a report: WhatsApp must provide "concrete commitments" to address EU concerns about a possible lack of "sufficiently clear information" to users, or the exchange of user data between WhatsApp and third parties, the European Commission said Thursday. "WhatsApp must ensure that users understand what they agree to and how their personal data is used," EU Justice Commissioner Didier Reynders said in a statement. "I expect from WhatsApp to fully comply with EU rules that protect consumers and their privacy."

WhatsApp announced the policy changes a year ago, but was forced to delay their introduction until May after a backlash over what data the messaging service collects and how it shares that information with parent Facebook. European consumer association BEUC complained to the EU, saying the new terms of service were opaque. "WhatsApp bombarded users for months with persistent pop-up messages," BEUC said in reaction to the commission announcement. "WhatsApp has been deliberately vague about this, laying the ground for far-reaching data processing without valid consent from consumers."

Privacy

Security Flaws Found in a Popular Guest Wi-Fi System Used in Hundreds of Hotels (techcrunch.com)

A security researcher says an internet gateway used by hundreds of hotels to offer and manage their guest Wi-Fi networks has vulnerabilities that could put the personal information of their guests at risk. From a report: Etizaz Mohsin told TechCrunch that the Airangel HSMX Gateway contains hardcoded passwords that are "extremely easy to guess." With those passwords, which we are not publishing, an attacker could remotely gain access to the gateway's settings and databases, which store records about guests using the Wi-Fi. With that access, an attacker could access and exfiltrate guest records, or reconfigure the gateway's networking settings to redirect unwitting guests to malicious webpages, he said. Back in 2018, Mohsin discovered one of these gateways on the network of a hotel where he was staying. He found that the gateway was synchronizing files from another server across the internet, which Mohsin said contained hundreds of gateway backup files from some of the most prestigious and expensive hotels in the world. The server also stored "millions" of guest names, email addresses and arrival and departure dates, he said. Mohsin reported the bug and the server was secured, but that sparked a thought: Could this one gateway have other vulnerabilities that could put hundreds of other hotels at risk? In the end, the security researcher found five vulnerabilities that he said could compromise the gateway -- including guests' information.
AI

Clearview AI Has New Tools To Identify People in Photos (wired.com)

Clearview AI has stoked controversy by scraping the web for photos and applying facial recognition to give police and others an unprecedented ability to peer into our lives. Now the company's CEO wants to use artificial intelligence to make Clearview's surveillance tool even more powerful. From a report: It may make it more dangerous and error-prone as well. Clearview has collected billions of photos from across websites that include Facebook, Instagram, and Twitter and uses AI to identify a particular person in images. Police and government agents have used the company's face database to help identify suspects in photos by tying them to online profiles. The company's cofounder and CEO, Hoan Ton-That, tells WIRED that Clearview has now collected more than 10 billion images from across the web -- more than three times as many as had previously been reported. Ton-That says the larger pool of photos means users, most often law enforcement, are more likely to find a match when searching for someone. He also claims the larger data set makes the company's tool more accurate.

Clearview combined web-crawling techniques, advances in machine learning that have improved facial recognition, and a disregard for personal privacy to create a surprisingly powerful tool. Ton-That demonstrated the technology through a smartphone app by taking a photo of the reporter. The app produced dozens of images from numerous US and international websites, each showing the correct person in images captured over more than a decade. The allure of such a tool is obvious, but so is the potential for it to be misused. Clearview's actions sparked public outrage and a broader debate over expectations of privacy in an era of smartphones, social media, and AI. [...] The pushback has not deterred Ton-That. He says he believes most people accept or support the idea of using facial recognition to solve crimes. "The people who are worried about it, they are very vocal, and that's a good thing, because I think over time we can address more and more of their concerns," he says.

Some of Clearview's new technologies may spark further debate. Ton-That says it is developing new ways for police to find a person, including "deblur" and "mask removal" tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to envision the covered part of a person's face using machine learning models that fill in missing details of an image using a best guess based on statistical patterns found in other images. These capabilities could make Clearview's technology more attractive but also more problematic. It remains unclear how accurately the new techniques work, but experts say they could increase the risk that a person is wrongly identified and could exacerbate biases inherent to the system.

Encryption

PGP Turns 30 (philzimmermann.com)

prz writes: PGP just hit its 30th birthday. Before 1991, the average person had essentially no tools to communicate securely over long distances. That changed with PGP, which sparked the Crypto Wars of the 1990s. "Here we are, three decades later, and strong crypto is everywhere," writes PGP developer Phil Zimmermann in a blog post. "What was glamorous in the 1990s is now mundane. So much has changed in those decades. That's a long time in dog years and technology years. My own work shifted to end-to-end secure telephony and text messaging. We now have ubiquitous strong crypto in our browsers, in VPNs, in e-commerce and banking apps, in IoT products, in disk encryption, in the TOR network, in cryptocurrencies. And in a resurgence of implementations of the OpenPGP protocol. It would seem impossible to put this toothpaste back in the tube."

He continues: "Yet, we now see a number of governments trying to do exactly that. Pushing back against end-to-end encryption. [...] The need for protecting our right to a private conversation has never been stronger. Many democracies are sliding into populist autocracies. Ordinary citizens and grassroots political opposition groups need to protect themselves against these emerging autocracies as best as they can. If an autocracy inherits or builds a pervasive surveillance infrastructure, it becomes nearly impossible for political opposition to organize, as we can see in China. Secure communications is necessary for grassroots political opposition in those societies."

"It's not only personal freedom at stake. It's national security," says Zimmermann. "We must push back hard in policy space to preserve the right to end-end encryption."
AI

A Disturbing, Viral Twitter Thread Reveals How AI-Powered Insurance Can Go Wrong (vox.com)

An anonymous reader quotes a report from Vox: Lemonade, the fast-growing, machine learning-powered insurance app, put out a real lemon of a Twitter thread on Monday with a proud declaration that its AI analyzes videos of customers when determining if their claims are fraudulent. The company has been trying to explain itself and its business model -- and fend off serious accusations of bias, discrimination, and general creepiness -- ever since. [...] Over a series of seven tweets, Lemonade claimed that it gathers more than 1,600 "data points" about its users -- "100X more data than traditional insurance carriers," the company claimed. The thread didn't say what those data points are or how and when they're collected, simply that they produce "nuanced profiles" and "remarkably predictive insights" which help Lemonade determine, in apparently granular detail, its customers' "level of risk." Lemonade then provided an example of how its AI "carefully analyzes" videos that it asks customers making claims to send in "for signs of fraud," including "non-verbal cues." Traditional insurers are unable to use video this way, Lemonade said, crediting its AI for helping it improve its loss ratios: that is, taking in more in premiums than it had to pay out in claims. Lemonade used to pay out a lot more than it took in, which the company said was "friggin terrible." Now, the thread said, it takes in more than it pays out.
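The loss ratio Lemonade is referring to is simple arithmetic: claims paid out divided by premiums taken in, where a value above 1.0 means the insurer pays out more than it collects. A minimal sketch with hypothetical figures (not Lemonade's actual numbers):

```python
def loss_ratio(claims_paid, premiums_earned):
    # Claims paid out as a fraction of premiums taken in;
    # above 1.0 the insurer loses money on underwriting.
    return claims_paid / premiums_earned

# Hypothetical figures for illustration only
early = loss_ratio(claims_paid=1_600_000, premiums_earned=1_000_000)  # 1.6: pays out more than it takes in
later = loss_ratio(claims_paid=700_000, premiums_earned=1_000_000)    # 0.7: takes in more than it pays out
```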

The Twitter thread made the rounds to a horrified and growing audience, drawing the requisite comparisons to the dystopian tech television series Black Mirror and prompting people to ask if their claims would be denied because of the color of their skin, or if Lemonade's claims bot, "AI Jim," decided that they looked like they were lying. What, many wondered, did Lemonade mean by "non-verbal cues?" Threats to cancel policies (and screenshot evidence from people who did cancel) mounted. By Wednesday, the company walked back its claims, deleting the thread and replacing it with a new Twitter thread and blog post. You know you've really messed up when your company's apology Twitter thread includes the word "phrenology." "The Twitter thread was poorly worded, and as you note, it alarmed people on Twitter and sparked a debate spreading falsehoods," a spokesperson for Lemonade told Recode. "Our users aren't treated differently based on their appearance, disability, or any other personal characteristic, and AI has not been and will not be used to auto-reject claims."

The company also maintains that it doesn't profit from denying claims and that it takes a flat fee from customer premiums and uses the rest to pay claims. Anything left over goes to charity (the company says it donated $1.13 million in 2020). But this model assumes that the customer is paying more in premiums than what they're asking for in claims. So, what's really going on here? According to Lemonade, the claim videos customers have to send are merely to let them explain their claims in their own words, and the "non-verbal cues" are facial recognition technology used to make sure one person isn't making claims under multiple identities. Any potential fraud, the company says, is flagged for a human to review and make the decision to accept or deny the claim. AI Jim doesn't deny claims. The blog post also didn't address -- nor did the company answer Recode's questions about -- how Lemonade's AI and its many data points are used in other parts of the insurance process, like determining premiums or if someone is too risky to insure at all.

United States

Justice Department Is Scrutinizing Takeover of Credit Karma by Intuit, Maker of TurboTax (propublica.org) 28

The Department of Justice is scrutinizing Silicon Valley giant Intuit's $7 billion takeover attempt of Credit Karma, an upstart personal finance firm that became a competitor when it launched a free tax prep offering that challenges Intuit's TurboTax product. From a report: The probe comes after ProPublica first reported in February that antitrust experts viewed the deal as concerning because it could allow a dominant firm to eliminate a competitor with an innovative business model. Intuit already dominates online tax preparation, with a 67% market share last year. The article sparked letters from Sen. Ron Wyden, D-Ore., and Rep. David Cicilline, D-R.I., urging the DOJ to investigate further. Cicilline is chair of the House Judiciary Committee's antitrust subcommittee. Government lawyers worry that allowing Intuit to snuff out a promising startup could harm American consumers seeking free tax prep options, according to a June memo describing Intuit's legal strategy that was obtained by ProPublica. The government is particularly interested in "the influence that Intuit's purchase of Credit Karma will have on consumer tax preparation platforms and [the] software market," according to the memo. Further reading: Inside TurboTax's 20-Year Fight to Stop Americans From Filing Their Taxes for Free.
Advertising

Facebook Won't Put Ads in WhatsApp -- For Now (newsweek.com) 29

Facebook "will no longer push through with its plans to sell ads on WhatsApp," writes Engadget, citing a report in the Wall Street Journal which says WhatsApp still "plans at some point to introduce ads to Status."

Newsweek reports: WhatsApp is the only app in Facebook's suite of products free from ads, which make up a vast amount of the parent company's revenue, bringing in the majority of its $17.65 billion in revenue during Q3 last year. As in rival apps Snapchat and TikTok, advertising features prominently in Messenger and Instagram. But what does it mean for Facebook? The impact of a delayed WhatsApp ad roll-out will not only mean a financial hit, but may also disrupt how much ad data Facebook can possibly extract from users of the app's desktop and web versions.

Currently, Facebook does not charge people for access to its products. Instead, it monetizes personal information by selling details about user preferences to companies for use in targeted ads. And there is clearly money to be made via mobile-based ads, which brought in about 94 percent of Facebook's total ad revenue during the third quarter of last year... "My assessment of this is it will be a delayed introduction of ads," social media consultant and commentator Matt Navarra told Newsweek today... "With the current climate of unrest surrounding data privacy and Facebook's plans to integrate its messaging apps backend, as well as the many legal battles they are facing, I suspect they are being cautious with yet more activity that could ruffle feathers at this time," Navarra told Newsweek. "But it's a case of when they do launch ads in WhatsApp, not if," he predicted.

The ad strategy sparked clashes between Facebook executives and WhatsApp founders Jan Koum and Brian Acton, and became a factor in their departures from the firm. Koum and Acton, pro-privacy technologists, reportedly feared the app's encryption could be put at risk.
