Patents

Smart TV Industry Rocked By Alleged Patent Conspiracy From Chipmaker (arstechnica.com) 27

An anonymous reader quotes a report from Ars Technica: During the pandemic, the demand for smart TVs dwindled as the supply chain for critical TV components became unreliable and consumers began tightening up on frivolous spending. Amid this smart TV demand slump, one of the world's top TV chipmakers, Taiwan-based Realtek, was hit with multiple lawsuits, which it calls meritless, from an alleged patent troll, Future Link Systems. These actions, Realtek said, drained its resources, made Realtek appear unreliable as a TV-chip supplier, and created "the harmful illusion of supply chain uncertainties in an already constrained industry." Determined to defend its reputation and maintain its dominant place in the market, Realtek filed a lawsuit (PDF) this week in a US district court in California. In it, the TV chipmaker alleged that Future Link launched "an unprecedented and unseemly conspiracy" with the world's leading TV-chip supplier, Taiwan-based MediaTek, and was allegedly paid a "bounty" to file frivolous patent infringement claims intended to drive Realtek out of the TV-chip market.

The scheme allegedly worked like this: Future Link "intentionally and knowingly" asked a US district court in Texas and the US International Trade Commission "for injunctions prohibiting importation of Realtek TV Chips and devices containing the same into the United States," Realtek alleged. This allowed MediaTek to reap the benefits of diminished competition in that market, Realtek claimed. Today, Reuters reported that MediaTek has officially responded to Realtek's allegations, vowing to defend itself against the lawsuit and claiming that MediaTek will supply evidence to dispute Realtek's claims.

Realtek's lawsuit seeks a jury trial to fight back against MediaTek and Future Link, as well as IPValue Management, which the complaint said owns and operates Future Link. The TV chipmaker alleged that defendants violated unfair competition laws in California, as well as federal laws. Any damages won from the lawsuit will be donated to charity, Realtek said. Realtek's complaint likens MediaTek to "robber barons of the Industrial Age," allegedly seeking to destroy competition and secure a monopoly in the TV-chip market. "With this action, Realtek seeks to stop a modern robber baron and its hired henchmen, protect itself from ongoing injury, and guard against the destruction of competition in the critical semiconductor industry by holding defendants accountable for their conspiracy," the complaint said.

China

After Being Wrongfully Accused of Spying for China, Professor Wins Appeal To Sue the Government 89

Xiaoxing Xi, a Temple University professor who was falsely accused of spying for China, will be able to bring a lawsuit against the Federal Bureau of Investigation. From a report: A judge at a federal appeals court ruled in favor of Xi on Wednesday, allowing the physicist to move forward with his case against the U.S. government for wrongful prosecution and violating his family's constitutional rights by engaging in unlawful search, seizure and surveillance. The decision comes after FBI agents swarmed Xi's Philadelphia home in 2015, rounded up his family at gunpoint, and arrested him on fraud charges related to economic espionage, before abruptly dropping the charges months afterward.

"I'm very, very glad that we can finally put the government under oath to explain why they decided to do what they did, violating our constitutional rights," Xi said in an exclusive interview with NBC News. "We finally have an opportunity to hold them accountable." The case will now be kicked back to the district court, continuing a long legal battle. Xi, who's represented in part by the American Civil Liberties Union, attempted to bring a suit against the government in 2017, alleging that FBI agents "made knowingly or recklessly false statements" to support their investigation and prosecution. Xi also claimed that his arrest was discriminatory, and that he was targeted due to his ethnicity, much like other scholars of Chinese descent. A district court dismissed his case in 2021, but Xi appealed the decision last year.

Android

Lawsuit Accuses DoorDash of Charging iPhone Users More For Identical Orders (arstechnica.com) 77

A class-action lawsuit has been filed against DoorDash, alleging that the company uses deceptive and fraudulent practices to charge higher delivery fees to iPhone users compared to Android users. Ars Technica reports: The lawsuit (PDF), filed May 5 in the District of Maryland, came in hot. Plaintiff Ross Hecox, in addition to his two children and a presumptive class of similarly situated customers, briefly defines DoorDash as an online marketplace with 32 million users and billions of dollars in annual revenue. "Yet, DoorDash generates its revenues not only through heavy-handed tactics that take advantage of struggling merchants and a significant immigrant driver workforce, but also through deceptive, misleading, and fraudulent practices that illegally deprive consumers of millions, if not billions, of dollars annually," the suit adds. "This lawsuit details DoorDash's illegal pricing scheme and seeks to hold DoorDash accountable for its massive fraud on consumers, including one of the most vulnerable segments of society, minor children."

Specifically, the suit claims that DoorDash misleads and defrauds customers by

- Making its "Delivery Fee" seem related to distance or demand, even though none of it goes to the delivery person.
- Offering an "Express" option that implies faster delivery, but then changing the wording to "Priority" in billing so it is not held to delivery times.
- Charging an "Expanded Range Delivery" fee that seems based on distance but is really based on a restaurant's subscription level and demand.
- Adding an undisclosed 99 cent "marketing fee," paid by the customer rather than the restaurant, to promote menu items that customers add to their carts.
- Obscuring minimum order amounts attached to its "zero-fee" DashPass memberships and coupon offers.
- Generally manipulating DashPass subscriptions to appear like substantial savings, when the company is "engineering" fees to seem reduced.

One of the more interesting and provocative claims is that DoorDash's fees, based in part on "other factors," are consistently higher for iPhone users of its app than for Android users placing the same orders. The plaintiffs and their law firm conducted a few tests of DoorDash's system, using different accounts to order the same food, from the same restaurant, at almost exactly the same time, delivered to the same address, with the same account type, delivery speed, and tip. [...] The plaintiffs are asking for $1 billion in damages for those who "fell prey to DoorDash's illegal pricing" over the past four years. The suit also includes allegations that DoorDash improperly allows children to enter into contracts with the company without proper vetting.

"The claims put forward in the amended complaint are baseless and simply without merit," said a DoorDash spokesperson in a statement. "We ensure fees are disclosed throughout the customer experience, including on each restaurant store page and before checkout. Building this trust is essential, and it's why the majority of delivery orders on our platform are placed by return customers. We will continue to strive to make our platform work even better for customers, and will vigorously fight these allegations."

Google

Google Reaches $39.9 Million Privacy Settlement With Washington State (reuters.com) 9

An anonymous reader quotes a report from Reuters: Google will pay Washington state $39.9 million to resolve a lawsuit accusing the Alphabet unit of misleading consumers about its location tracking practices, state Attorney General Bob Ferguson said on Thursday. The settlement resolves claims that Google deceived people into believing they controlled how the search and advertising company collected and used their personal data. In reality, the state said Google was able to collect and profit from that data even if consumers disabled its tracking technology on their smartphones and computers, invading consumers' privacy.

A consent decree filed on Wednesday in King County Superior Court requires Google to be more transparent about its tracking practices, and provide a more detailed "Location Technologies" webpage describing them. "Today's resolution holds one of the most powerful corporations accountable for its unethical and unlawful tactics," Ferguson said in a statement. Google, based in Mountain View, California, denied wrongdoing in agreeing to settle.

"In November, Google agreed to pay $391.5 million to resolve similar allegations by 40 U.S. states," notes Reuters. "Some states including Washington chose to sue Google on their own about its tracking practices."

Google

Google To Pay $8 Million Settlement For 'Lying To Texans,' State AG Says (arstechnica.com) 32

Google has agreed to an $8 million settlement with Texas over deceptive ads for its Pixel 4 smartphone, in which radio DJs were hired to provide testimonials without being given the phone to use. Texas Attorney General Ken Paxton made the announcement last week. Ars Technica reports: At issue was Google's trustworthiness as an advertiser after the tech giant "hired radio DJs to record and broadcast detailed testimonials about their personal experiences with the Pixel 4," but then "refused to provide the DJs with a phone for them to use," Paxton said. The tech giant had previously settled claims from the Federal Trade Commission and six other states for approximately $9 million, and Paxton seemed proud that his "settlement recovers $8 million for the State of Texas alone."

Paxton said that "if Google is going to advertise in Texas, their statements better be true." He decided to take action to hold Google "accountable for lying to Texans for financial gain," saying that large companies should not expect "special treatment under the law." "Texas will do whatever it takes to protect our citizens and our state economy from corporations' false and misleading advertisements," Paxton said.

United States

TurboTax to Pay $141M Settlement Over 'Deceiving' Millions of Low-Income Americans (msn.com) 28

The Washington Post reports: TurboTax will begin sending checks next week to nearly 4.4 million low-income Americans whom the company deceived into paying for tax services that should have been free, New York Attorney General Letitia James said.

The checks, part of a $141 million settlement reached in May 2022 between TurboTax owner Intuit and all 50 states and the District of Columbia, are for people who were eligible to file taxes for free through an IRS partner program but were "tricked" into paying TurboTax between 2016 and 2018, James (D) said in a statement Thursday.

The company was also accused of knowingly misleading customers and blocking its landing page for its IRS Free File Program, a public-private partnership with the IRS, from showing up on search engines such as Google. Because Intuit and other companies agreed to participate in that program, the IRS agreed not to offer its own free electronic tax services.

Intuit admitted no wrongdoing in the settlement.

Customers who qualify will receive between $29 and $85, depending on the number of years they paid for the services... Consumers who are eligible for the payments do not need to file a claim and will be notified by email, James's office said Thursday. Checks will be sent automatically and will be mailed throughout May.

"TurboTax's predatory and deceptive marketing cheated millions of low-income Americans who were trying to fulfill their legal duties to file their taxes," said Attorney General James. "Today we are righting that wrong and putting money back into the pockets of hardworking taxpayers who should have never paid to file their taxes." James described it as an effort "to stand up for ordinary Americans and hold companies who cheat consumers accountable," specifically calling out Intuit "for deceiving millions of low-income Americans into paying for tax services that should have been free."
The Courts

Google Gets Court Order To Take Down CryptBot That Infected Over 670,000 Computers (thehackernews.com) 14

An anonymous reader quotes a report from The Hacker News: Google on Wednesday said it obtained a temporary court order in the U.S. to disrupt the distribution of a Windows-based information-stealing malware called CryptBot and "decelerate" its growth. The tech giant's Mike Trinh and Pierre-Marc Bureau said the efforts are part of steps it takes to "not only hold criminal operators of malware accountable, but also those who profit from its distribution." CryptBot is estimated to have infected over 670,000 computers in 2022 with the goal of stealing sensitive data such as authentication credentials, social media account logins, and cryptocurrency wallets from users of Google Chrome. The harvested data is then exfiltrated to the threat actors, who sell it to other attackers for use in data breach campaigns. CryptBot was first discovered in the wild in December 2019.

The malware has been traditionally delivered via maliciously modified versions of legitimate and popular software packages such as Google Earth Pro and Google Chrome that are hosted on fake websites. [...] The major distributors of CryptBot, per Google, are suspected to be operating a "worldwide criminal enterprise" based out of Pakistan. Google said it intends to use the court order, granted by a federal judge in the Southern District of New York, to "take down current and future domains that are tied to the distribution of CryptBot," thereby kneecapping the spread of new infections.

Links

Man Battling Google Wins $500K For Search Result Links Calling Him a Pedophile (arstechnica.com) 32

An anonymous reader quotes a report from Ars Technica: A Montreal man spent years trying to hold Google accountable for search results linking to a defamatory post falsely accusing him of pedophilia that he said ruined his career. Now Google must pay $500,000 after a Quebec Supreme Court judge ruled that Google relied on an "erroneous" interpretation of Canadian law in denying the man's requests to remove the links. "Google variously ignored the Plaintiff, told him it could do nothing, told him it could remove the hyperlink on the Canadian version of its search engine but not the US one, but then allowed it to re-appear on the Canadian version after a 2011 judgment of the Supreme Court of Canada in an unrelated matter involving the publication of hyperlinks," judge Azimuddin Hussain wrote in his decision (PDF) issued on March 28.

The plaintiff was granted anonymity throughout the proceedings. Google has been ordered not to disclose any identifiable information about him in connection to the case for 45 days. The tech company must also remove all links to the defamatory post in search results viewable in Quebec. [...] Instead of compensatory and punitive damages originally sought -- amounting to $6 million -- the man was awarded $500,000 for moral injuries caused after successfully arguing that he lost business deals and suffered strains on his personal relationships due to being wrongly stigmatized as a pedophile. Hussain described the plaintiff's experience battling Google to preserve his reputation as a "waking nightmare." Due to Google's refusals to remove the defamatory posts, the man "found himself helpless in a surreal and excruciating contemporary online ecosystem as he lived through a dark odyssey to have the Defamatory Post removed from public circulation," Hussain wrote. The plaintiff, now in his early 70s, has the option to appeal the judge's order that Google may not release any of his identifiable information for 45 days.

AI

ChatGPT Sued for Lying (msn.com) 176

An anonymous readers shared this report from the Washington Post: Brian Hood is a whistleblower who was praised for "showing tremendous courage" when he helped expose a worldwide bribery scandal linked to Australia's National Reserve Bank. But if you ask ChatGPT about his role in the scandal, you get the opposite version of events. Rather than heralding Hood's whistleblowing role, ChatGPT falsely states that Hood himself was convicted of paying bribes to foreign officials, had pleaded guilty to bribery and corruption, and been sentenced to prison.

When Hood found out, he was shocked. Hood, who is now mayor of Hepburn Shire near Melbourne in Australia, said he plans to sue the company behind ChatGPT for telling lies about him, in what could be the first defamation suit of its kind against the artificial intelligence chatbot.... "There's never, ever been a suggestion anywhere that I was ever complicit in anything, so this machine has completely created this thing from scratch," Hood said — confirming his intention to file a defamation suit against ChatGPT. "There needs to be proper control and regulation over so-called artificial intelligence, because people are relying on them...."

If it proceeds, Hood's lawsuit would be the first defamation suit filed over ChatGPT's content, according to Reuters. If it reaches the courts, the case would test uncharted legal waters, forcing judges to consider whether the operators of an artificial intelligence bot can be held accountable for its allegedly defamatory statements.

The article notes that ChatGPT prominently warns users that it "may occasionally generate incorrect information." And another Post article notes that all the major chatbots now include disclaimers, "such as Bard's fine-print message below each query: 'Bard may display inaccurate or offensive information that doesn't represent Google's views.'"

But the Post also notes that ChatGPT still "invented a fake sexual harassment story involving a real law professor, Jonathan Turley — citing a Washington Post article that did not exist as its evidence." Long-time Slashdot reader schwit1 tipped us off to that story. But here's what happened when the Washington Post searched for accountability for the error: In a statement, OpenAI spokesperson Niko Felix said, "When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress...." Katy Asher, senior communications director at Microsoft, said the company is taking steps to ensure search results are safe and accurate. "We have developed a safety system including content filtering, operational monitoring, and abuse detection to provide a safe search experience for our users," Asher said in a statement, adding that "users are also provided with explicit notice that they are interacting with an AI system."

But it remains unclear who is responsible when artificial intelligence generates or spreads inaccurate information. From a legal perspective, "we just don't know" how judges might rule when someone tries to sue the makers of an AI chatbot over something it says, said Jeff Kosseff, a professor at the Naval Academy and expert on online speech. "We've not had anything like this before."

The Internet

Brazil Looks To Regulate Monetized Content On Internet (reuters.com) 9

The Brazilian government is studying whether to regulate Internet platforms with content that earns revenue such as advertising, its secretary for digital policies, Joao Brant, said on Friday. Reuters reports: The idea would be for a regulator to hold such platforms, not consumers, accountable for monetized content, Brant told Reuters. Another goal is "to prevent the networks from being used for the dissemination and promotion of crimes and illegal content," especially after the riots by supporters of former far-right President Jair Bolsonaro in Brasilia in January, fueled by misinformation about the election he lost in October.

Brant said President Luiz Inacio Lula da Silva's government also intends to make companies responsible for stopping misinformation, hate speech and other crimes on their social media platforms. Platforms would not be held responsible for content individually, but for how diligent they are in protecting the "digital environment," he said in an interview. Brant did not detail what the regulatory body would look like, but said the government wants to regulate monetized content and prevent the platforms from spreading misinformation.

AI

Microsoft Lays Off Key AI Ethics Team, Report Says (platformer.news) 131

According to Platformer, Microsoft's recent layoffs included its entire ethics and society team within the artificial intelligence organization. "The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said." From the report: Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company's AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs.

But employees said the ethics and society team played a critical role in ensuring that the company's responsible AI principles are actually reflected in the design of the products that ship. "People would look at the principles coming out of the office of responsible AI and say, 'I don't know how this applies,'" one former employee says. "Our job was to show them and to create rules in areas where there were none."

In recent years, the team designed a role-playing game called Judgment Call that helped designers envision potential harms that could result from AI and discuss them during product development. It was part of a larger "responsible innovation toolkit" that the team posted publicly. More recently, the team has been working to identify risks posed by Microsoft's adoption of OpenAI's technology throughout its suite of products. The ethics and society team was at its largest in 2020, when it had roughly 30 employees including engineers, designers, and philosophers. In October, the team was cut to roughly seven people as part of a reorganization.

"Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this," the company said in a statement. "Over the past six years we have increased the number of people across our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice. [...] We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey."

AI

'I Broke Into a Bank Account With an AI-Generated Voice' (vice.com) 46

An anonymous reader quotes a report from Motherboard, written by Joseph Cox: On Wednesday, I phoned my bank's automated service line. To start, the bank asked me to say in my own words why I was calling. Rather than speak out loud, I clicked a file on my nearby laptop to play a sound clip: "check my balance," my voice said. But this wasn't actually my voice. It was a synthetic clone I had made using readily available artificial intelligence technology. "Okay," the bank replied. It then asked me to enter or say my date of birth as the first piece of authentication. After typing that in, the bank said "please say, 'my voice is my password.'" Again, I played a sound file from my computer. "My voice is my password," the voice said. The bank's security system spent a few seconds authenticating the voice. "Thank you," the bank said. I was in.

I couldn't believe it -- it had worked. I had used an AI-powered replica of a voice to break into a bank account. After that, I had access to the account information, including balances and a list of recent transactions and transfers. Banks across the U.S. and Europe use this sort of voice verification to let customers log into their account over the phone. Some banks tout voice identification as equivalent to a fingerprint, a secure and convenient way for users to interact with their bank. But this experiment shatters the idea that voice-based biometric security provides foolproof protection in a world where anyone can now generate synthetic voices for cheap or sometimes at no cost. I used a free voice creation service from ElevenLabs, an AI-voice company. Abuse of AI voices can now extend to fraud and hacking. Some experts I spoke to after doing this experiment are now calling for banks to ditch voice authentication altogether, although real-world abuse at this time could be rare.

A Lloyds Bank spokesperson said in a statement that "Voice ID is an optional security measure, however we are confident that it provides higher levels of security than traditional knowledge-based authentication methods, and that our layered approach to security and fraud prevention continues to provide the right level of protection for customers' accounts, while still making them easy to access when needed."

The Consumer Financial Protection Bureau, one of the U.S. agencies that regulates the financial industry, said: "The CFPB is concerned with data security, and companies are on notice that they'll be held accountable for shoddy practices. We expect that any firm follow the law, regardless of technology used."

Government

Larry Magid: Utah Bill Threatens Internet Security For Everyone (mercurynews.com) 89

"Wherever you live, you should be paying attention to Utah Senate Bill 152 and the somewhat similar House Bill 311," writes tech journalist and long-time child safety advocate Larry Magid in an op-ed via the Mercury News. "Even though it's legislation for a single state, it could set a dangerous precedent and make it harder to pass and enforce sensible federal legislation that truly would protect children and other users of connected technology." From the report: SB 152 would require parents to provide their government-issued ID and physical address in order for their child or teenager to access social media. But even if you like those provisions, this bill would require everyone -- including adults -- to submit government-issued ID to sign up for a social media account, including not just sites like Facebook, Instagram, Snapchat and TikTok, but also video sharing sites like YouTube, which is commonly used by schools. The bill even bans minors from being online between 10:30 p.m. and 6:30 a.m., empowering the government to usurp the rights of parents to supervise and manage teens' screen time. Should it be illegal for teens to get up early to finish their homework (often requiring access to YouTube or other social media) or perhaps access information that would help them do early morning chores? Parents -- not the state -- should be making and enforcing their family's schedule.

I oppose these bills from my perch as a long-time child safety advocate (I wrote "Child Safety on the Information Highway" in 1994 for the National Center for Missing & Exploited Children and am currently CEO of ConnectSafely.org). However well-intentioned, they could increase risk and deny basic rights to children and adults. SB 152 would require companies to keep a "record of any submissions provided under the requirements," which means there would not only be databases of all social media users, but also of users under 18, which could be hacked by criminals or foreign governments seeking information on Utah children and adults. And, in case you think that's impossible, there was a breach in 2006 of a database of children that was mandated by the State of Utah to protect them from sites that displayed or promoted pornography, alcohol, tobacco and gambling. No one expects a data breach, but they happen on a regular basis. There is also the issue of privacy. Social media is both media and speech, and some social media are frequented by people who might not want employers, family members, law enforcement or the government to know what information they're consuming. Whatever their interests, people should have the right to at least anonymously consume information or express their opinions. This should apply to everyone, regardless of who they are, what they believe or what they're interested in. [...]

It's important to always look at the potential unintended consequences of legislation. I'm sure the lawmakers in Utah who are backing this bill have the best interests of children in mind. But this wouldn't be the first law designed to protect children that actually puts them at risk or violates adult rights in the name of child protection. I applaud any policymaker who wants to find ways to protect kids and hold technology companies accountable for doing their part to protect privacy and security as well as employing best practices when it comes to the mental health and well-being of children. But the legislation, whether coming from Utah, another state or Washington, D.C., must be sensible, workable, constitutional and balanced, so that, at the very least, it does more good than harm.

Businesses

Zoom To Lay Off 1,300 Employees, Or About 15% of Its Workforce (cnbc.com) 44

Zoom on Tuesday announced plans to cut about 1,300 workers, or 15% of its workforce, according to a blog post on the company's website. CNBC reports: CEO Eric Yuan wrote in the blog post that as the world continues to adjust to life after the Covid pandemic, the company needs to adapt to the "uncertainty of the global economy" as well as "its effect on our customers." "We worked tirelessly and made Zoom better for our customers and users. But we also made mistakes," Yuan said. "We didn't take as much time as we should have to thoroughly analyze our teams or assess if we were growing sustainably, toward the highest priorities."

Yuan said the cuts will impact every organization across Zoom, and employees who are laid off will be offered up to 16 weeks of salary and health-care coverage. The CEO also said he plans to reduce his own salary for the coming fiscal year by 98%, and he is also forgoing his 2023 corporate bonus. "As the CEO and founder of Zoom, I am accountable for these mistakes and the actions we take today -- and I want to show accountability not just in words but in my own actions," Yuan wrote in the post.

AI

Science Journals Ban Listing of ChatGPT as Co-Author on Papers (theguardian.com) 45

The publishers of thousands of scientific journals have banned or restricted contributors' use of an advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research. From a report: ChatGPT, a fluent but flaky chatbot developed by OpenAI in California, has impressed or distressed more than a million human users by rattling out poems, short stories, essays and even personal advice since its launch in November. But while the chatbot has proved a huge source of fun -- its take on how to free a peanut butter sandwich from a VCR, in the style of the King James Bible, is one notable hit -- the program can also produce fake scientific abstracts that are convincing enough to fool human reviewers. ChatGPT's more legitimate uses in article preparation have already led to it being credited as a co-author on a handful of papers.

The sudden arrival of ChatGPT has prompted a scramble among publishers to respond. On Thursday, Holden Thorp, the editor-in-chief of the leading US journal Science, announced an updated editorial policy, banning the use of text from ChatGPT and clarifying that the program could not be listed as an author. Leading scientific journals require authors to sign a form declaring that they are accountable for their contribution to the work. Since ChatGPT cannot do this, it cannot be an author, Thorp says. But even using ChatGPT in the preparation of a paper is problematic, he believes. ChatGPT makes plenty of errors, which could find their way into the literature, he says, and if scientists come to rely on AI programs to prepare literature reviews or summarise their findings, the proper context of the work and the deep scrutiny that results deserve could be lost. "That is the opposite direction of where we need to go," he said. Other publishers have made similar changes. On Tuesday, Springer Nature, which publishes nearly 3,000 journals, updated its guidelines to state that ChatGPT cannot be listed as an author. But the publisher has not banned ChatGPT outright. The tool, and others like it, can still be used in the preparation of papers, provided full details are disclosed in the manuscript.

Games

Ubisoft Devs Grill Boss On Shifting Blame And Chasing Trends (kotaku.com) 32

Ubisoft CEO Yves Guillemot faced tough questions from some exhausted and fed-up staff about recent missteps and future plans in a company-wide Q&A session on Wednesday. The meeting comes just a week after the Assassin's Creed publisher announced new cancellations, delays, and cost-cutting measures, and told employees "the ball is in your court" to help get the $3 billion company back on track. From a report: "The ball is now in our court -- for years it has been in your court so why did you mishandle the ball so badly so we, the workers, have to fix it for you?" read one upvoted question on a list submitted in advance through corporate communication channels and viewed by Kotaku. It was a reference to a now infamous email Guillemot sent to staff last week that appeared to shift blame for the publisher's recent mistakes and hold lower-level employees accountable for fixing the situation.

Guillemot opened the meeting by apologizing. "I heard your feedback and I'm sorry this was perceived that way," Guillemot said, according to sources present who were not authorized to speak to press. "When saying 'the ball is in your court' to deliver our lineup on time and at the expected level of quality, I wanted to convey the idea that more than ever I need your talent and energy to make it happen. This is a collective journey that starts of course with myself and with the leadership team to create the conditions for all of us to succeed together." While that clarification resonated with some developers, others who spoke with Kotaku still feel management is out of touch and found little in the meeting to reassure them.

Google

Google Says India Antitrust Order Poses Threat To National Security (techcrunch.com) 12

Google warned on Friday that if the Indian antitrust watchdog's ruling is allowed to progress, devices in the South Asian market will become more expensive and unchecked apps that pose threats to individual and national security will proliferate, escalating its concerns over the future of Android in the key overseas region. From a report: "Predatory apps that expose users to financial fraud, data theft and a number of other dangers abound on the internet, both from India and other countries. While Google holds itself accountable for the apps on Play Store and scans for malware as well as compliance with local laws, the same checks may not be in place for apps sideloaded from other sources," the company wrote in a blog post, titled "Heart of the Matter." The Competition Commission of India has slapped two fines against Google, alleging the Android-maker abused the Play Store's dominant position in the country and required Android device makers to pre-install its entire Google Mobile Suite.

United States

Joe Biden: Republicans and Democrats, Unite Against Big Tech Abuses (wsj.com) 147

Congress can find common ground on the protection of privacy, competition and American children, says U.S. President Joe Biden. In an op-ed in The Wall Street Journal, he shares why he has pushed for legislation to hold Big Tech accountable. From the start of his administration, says Biden, he has embraced three broad principles for reform: First, we need serious federal protections for Americans' privacy. That means clear limits on how companies can collect, use and share highly personal data -- your internet history, your personal communications, your location, and your health, genetic and biometric data. It's not enough for companies to disclose what data they're collecting. Much of that data shouldn't be collected in the first place. These protections should be even stronger for young people, who are especially vulnerable online. We should limit targeted advertising and ban it altogether for children.

Second, we need Big Tech companies to take responsibility for the content they spread and the algorithms they use. That's why I've long said we must fundamentally reform Section 230 of the Communications Decency Act, which protects tech companies from legal responsibility for content posted on their sites. We also need far more transparency about the algorithms Big Tech is using to stop them from discriminating, keeping opportunities away from equally qualified women and minorities, or pushing content to children that threatens their mental health and safety.

Third, we need to bring more competition back to the tech sector. My administration has made strong progress in promoting competition throughout the economy, consistent with my July 2021 executive order. But there is more we can do. When tech platforms get big enough, many find ways to promote their own products while excluding or disadvantaging competitors -- or charge competitors a fortune to sell on their platform. My vision for our economy is one in which everyone -- small and midsized businesses, mom-and-pop shops, entrepreneurs -- can compete on a level playing field with the biggest companies. To realize that vision, and to make sure American tech keeps leading the world in cutting-edge innovation, we need fairer rules of the road. The next generation of great American companies shouldn't be smothered by the dominant incumbents before they have a chance to get off the ground.

Education

Seattle Public Schools Sue Social Media Giants for Youth Mental Health Crisis (geekwire.com) 165

Long-time Slashdot reader theodp writes: "A new lawsuit filed by Seattle Public Schools against TikTok, YouTube, Facebook, Snap, Instagram, and their parent companies alleges that the social media giants have 'successfully exploited the vulnerable brains of youth' for their own profit, using psychological tactics that have led to a mental health crisis in schools," reports GeekWire. "The suit, filed Friday in U.S. District Court in Seattle, seeks 'the maximum statutory and civil penalties permitted by law,' making the case that the companies have violated Washington state's public nuisance law."

From GeekWire's report: The district alleges that it has suffered widespread financial and operational harm from social media usage and addiction among students. The lawsuit cites factors including the resources required to provide counseling services to students in crisis, and to investigate and respond to threats made against schools and students over social media. "This mental health crisis is no accident," the suit says. "It is the result of the Defendants' deliberate choices and affirmative actions to design and market their social media platforms to attract youth."

The lawsuit cites President Joe Biden's statement in his 2022 State of the Union address that "we must hold social media platforms accountable for the national experiment they're conducting on our children for profit." The suit says the school district "brings this action to do just that."

The Internet

Watching Porn Now Requires Age Verification in Louisiana Because of New Law 328

An anonymous reader shares a report: The porn industry has been around for a while, and in today's digital age, business is booming. When Laurie Schlegel isn't seeing her patients who struggle with sex addiction, she's at the Louisiana State Capitol. The Republican state representative from Metairie passed HB 142 earlier this year requiring age verification for any website that contains 33.3% or more pornographic material. "Pornography is destroying our children and they're getting unlimited access to it on the internet and so if the pornography companies aren't going to be responsible, I thought we need to go ahead and hold them accountable," said Schlegel. According to Schlegel, websites would verify someone's age in collaboration with LA Wallet. So, if you plan on using these sites in the future, you may want to download the app. "I would say so," said Sara Kelley, project manager with Envoc. "I mean, I think it's a must-have for anyone who has a Louisiana state ID or driver's license."

Kelley added that there are other ways websites could ask you to verify your age if you cannot access LA Wallet. She added that although some personal information will be required, companies must not retain personal data after verification is complete. "It doesn't identify your date of birth, it doesn't identify who you are, where you live, what part of the state you're in, or any information from your device or from your actual ID. It just returns that age to say that yes, this person is old enough to be allowed to go in," explained Kelley. It will be each website's responsibility to ensure age verification is performed when its site is accessed in Louisiana. Schlegel said there will be consequences for those who fail to follow the law.
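The privacy-preserving design Kelley describes -- the verifier answers only "old enough or not," never returning the user's date of birth, identity, or location -- can be sketched as follows. This is an illustrative sketch only; the function and field names are hypothetical and do not reflect the actual LA Wallet API.

```python
from datetime import date

def age_attestation(date_of_birth: date, today: date, minimum_age: int = 18) -> dict:
    """Return only a boolean attestation; all identifying data stays with the verifier."""
    # Compute age in whole years, accounting for whether the birthday
    # has occurred yet this calendar year.
    years = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    # Only the yes/no answer leaves the verifier -- no DOB, name, or address.
    return {"age_verified": years >= minimum_age}

print(age_attestation(date(2000, 1, 15), today=date(2023, 1, 10)))
```

The website consuming this response learns a single bit, which matches Kelley's claim that the check "just returns that age to say that yes, this person is old enough."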
