Privacy

Cellebrite Asks Cops To Keep Its Phone Hacking Tech 'Hush Hush' (techcrunch.com) 50

An anonymous reader shares a report: For years, cops and other government authorities all over the world have been using phone hacking technology provided by Cellebrite to unlock phones and obtain the data within. And the company has been keen on keeping the use of its technology "hush hush." As part of the deal with government agencies, Cellebrite asks users to keep its tech -- and the fact that they used it -- secret, TechCrunch has learned. This request concerns legal experts who argue that powerful technology like the one Cellebrite builds and sells, and how it gets used by law enforcement agencies, ought to be public and scrutinized.

In a leaked training video for law enforcement customers that was obtained by TechCrunch, a senior Cellebrite employee tells customers that "ultimately, you've extracted the data, it's the data that solves the crime, how you got in, let's try to keep that as hush hush as possible." "We don't really want any techniques to leak in court through disclosure practices, or you know, ultimately in testimony, when you are sitting in the stand, producing all this evidence and discussing how you got into the phone," the employee, who we are not naming, says in the video.

AI

Schools are Now Teaching About ChatGPT and AI So Their Students Aren't Left Behind (cnn.com) 73

Professors now fear that ignoring or discouraging the use of AI "will be a disservice to students and leave many behind when entering the workforce," reports CNN: According to a study conducted by higher education research group Intelligent.com, about 30% of college students used ChatGPT for schoolwork this past academic year and it was used most in English classes. Jules White, an associate professor of computer science at Vanderbilt University, believes professors should be explicit in the first few days of school about the course's stance on using AI and that it should be included in the syllabus. "It cannot be ignored," he said. "I think it's incredibly important for students, faculty and alumni to become experts in AI because it will be so transformative across every industry in demand so we provide the right training."

Vanderbilt is among the early leaders taking a strong stance in support of generative AI by offering university-wide training and workshops to faculty and students. A three-week 18-hour online course taught by White this summer was taken by over 90,000 students, and his paper on "prompt engineering" best practices is routinely cited among academics. "The biggest challenge is with how you frame the instructions, or 'prompts,'" he said. "It has a profound impact on the quality of the response and asking the same thing in various ways can get dramatically different results. We want to make sure our community knows how to effectively leverage this." Prompt engineering jobs, which typically require basic programming experience, can pay up to $300,000.
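To make White's point concrete, here is a minimal sketch of the kind of reframing prompt engineering involves: the same question posed three ways, using common patterns (persona, few-shot, step-by-step). The framings and function names below are generic illustrations, not examples taken from White's paper or course.

```python
# Three common prompt-engineering framings of one question. These are
# illustrative patterns only; a real workflow would send each string to
# a language model and compare the responses.

QUESTION = "Why does sorting a list before binary search matter?"

def persona_prompt(question: str) -> str:
    """Ask the model to answer in a specific role."""
    return f"You are a patient CS tutor. {question} Explain for a beginner."

def few_shot_prompt(question: str) -> str:
    """Prepend a worked example so the model imitates its format."""
    example = ("Q: Why use a hash map for lookups?\n"
               "A: Average O(1) access versus O(n) scanning.\n")
    return f"{example}Q: {question}\nA:"

def step_by_step_prompt(question: str) -> str:
    """Nudge the model toward showing intermediate reasoning."""
    return f"{question} Think through it step by step before answering."

if __name__ == "__main__":
    for build in (persona_prompt, few_shot_prompt, step_by_step_prompt):
        print(f"--- {build.__name__} ---")
        print(build(QUESTION))
```

Each framing carries identical underlying intent, which is exactly why, as White notes, the dramatic variation in model responses is a skill issue rather than a model quirk.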

Although White said concerns around cheating still exist, he believes students who want to plagiarize can still seek out other methods such as Wikipedia or Google searches. Instead, students should be taught that "if they use it in other ways, they will be far more successful...." Some schools are hiring outside experts to teach both faculty and students about how to use AI tools.

Security

White House Orders Federal Agencies To Shore Up Cybersecurity, Warns of Potential Exposure (cnn.com) 15

The White House ordered federal agencies to shore up their cybersecurity after agencies have lagged in implementing a key executive order President Joe Biden issued in 2021. From a report: Multiple federal departments and agencies have, as of the end of June, "failed to fully comply" with critical security practices prescribed by the executive order, "leaving the U.S. Government exposed to malicious cyber intrusions and undermining the example the Government must set for adequate cybersecurity practices," national security adviser Jake Sullivan said in a memo to Cabinet secretaries this week.

Sullivan asked senior officials from across the departments to ensure they achieve "full compliance" with the executive order's security requirements by the end of the year. His memo is addressed to agencies outside of the Pentagon. "This morning the National Security Advisor shared a memo with federal departments and agencies to ensure their cyber infrastructure is compliant with the President's Executive Order to improve the nation's cybersecurity," a National Security Council spokesperson told CNN. "As we've said, the Biden-Harris Administration has had a relentless focus on strengthening the cybersecurity of nation's most critical sectors since day one, and will continue to work to secure our cyber defenses."

Businesses

Linus Tech Tips Pauses Production as Controversy Swirls (theverge.com) 115

Linus Sebastian's Linus Media Group YouTube empire is currently in crisis, with accusations of theft, lapses in ethics, and most recently, allegations of sexual harassment. From a report: The company has currently paused all production to improve its review processes, and CEO Terren Tong tells The Verge an outside investigator will be hired to examine the harassment allegations. In a video posted this morning titled "What do we do now?" Linus Media Group CFO Yvonne Ho announced the entire channel was pausing production for the next week to address the issues raised by the YouTube channel Gamers Nexus about errors in videos and concerning ethical practices. "I agree with the community," Ho said in the video, "so I'm putting my foot down. Effective immediately all YouTube video production is on pause." The controversy started earlier this week, when Gamers Nexus posted a video outlining a number of factual errors and ethics concerns in recent Linus Tech Tips videos. "We've been seeing an alarming amount of conflicts from Linus Tech Tips as it relates to their corporate connections, their flow of money, and the potential bias as a result of those things," said Gamers Nexus host Steve Burke.

United States

US Watchdog To Announce Plans To Regulate 'Surveillance Industry' (reuters.com) 21

The top U.S. agency for consumer financial protection will announce plans at the White House on Tuesday to regulate companies that track and sell people's personal data, part of the Biden administration's widening scrutiny of that industry's privacy practices, officials said. From a report: Data brokers' conduct can be "particularly worrisome" because the sensitive data driving the use of artificial intelligence can be collected from military personnel, people experiencing dementia, and others, according to Rohit Chopra, director of the U.S. Consumer Financial Protection Bureau. "The CFPB will be taking steps to ensure that modern-day data brokers in the surveillance industry know that they cannot engage in illegal collection and sharing of our data," he said in a statement. President Joe Biden last year called on the U.S. Federal Trade Commission (FTC) to help protect the data privacy of women seeking reproductive healthcare who may face law enforcement action in some states. The FTC has also sued an Idaho company for selling mobile phone geolocation data, saying it could be traced to places like abortion clinics, churches and addiction treatment centers.

Government

Homeland Security Report Details How Teen Hackers Exploited Security Weaknesses In Some of the World's Biggest Companies (cnn.com) 31

An anonymous reader quotes a report from CNN: A group of teenage hackers managed to breach some of the world's biggest tech firms last year by exploiting systemic security weaknesses in US telecom carriers and the business supply chain, a US government review of the incidents has found, in what is a cautionary tale for America's critical infrastructure. The Department of Homeland Security-led review of the hacks, which was shared exclusively with CNN, determined US regulators should penalize telecom firms with lax security practices and Congress should consider funding programs to steer American youth away from cybercrime. The investigation of the hacks -- which hit companies like Microsoft and Samsung -- found that, in general, it was far too easy for the cybercriminals to intercept text messages that corporate employees use to log into systems. [...]

"It is highly concerning that a loose band of hackers, including a number of teenagers, was able to consistently break into the best-defended companies in the world," Homeland Security Secretary Alejandro Mayorkas told CNN in an interview, adding: "We are seeing a rise in juvenile cybercrime." After a series of high-profile cyberattacks marked his first four months in office, President Joe Biden established the DHS-led Cyber Safety Review Board in 2021 to study the root causes of major hacking incidents and inform policy on how to prevent the next big cyberattack. Staffed by senior US cybersecurity officials and executives at major technology firms like Google, the board does not have regulatory authority, but its recommendations could shape legislation in Congress and future directives from federal agencies. [...]

The board's first review, released in July 2022, concluded that it could take a decade to eradicate a vulnerability in software used by thousands of corporations and government agencies worldwide. The second review, to be released Thursday, focused on a band of young criminal hackers based in the United Kingdom and Brazil that last year launched a series of attacks on Microsoft, Uber, Samsung and identity management firm Okta, among others. The audacious hacks were often followed by extortion demands and taunts by hackers who seemed to be out for publicity as much as they were for money. The hacking group, known as Lapsus$, alarmed US officials because they were able to embarrass major tech firms with robust security programs. "If richly resourced cybersecurity programs were so easily breached by a loosely organized threat actor group, which included several juveniles, how can organizations expect their programs to perform against well-resourced cybercrime syndicates and nation-state actors?" the Cyber Safety Review Board's new report states.

Lapsus$, as well as other hacking groups, conducts "SIM-swapping" attacks that can take over a victim's phone number by having it transferred to another device, thereby gaining access to 2FA security codes and personal messages. These can then be used to reveal login credentials and access financial information.
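The weakness the report describes can be captured in a toy model: an SMS 2FA code is delivered to whatever device currently holds the phone number, not to a specific person. The carrier table, phone number, and device names below are entirely illustrative, not a real system.

```python
# Toy model of why SIM swapping defeats SMS-based 2FA. The carrier's
# record maps a phone number to its currently active device; the "bank"
# texts codes to that record without knowing who controls the device.

carrier = {"+1-555-0100": "victim-handset"}  # number -> active device

def send_sms_code(number: str, code: str) -> tuple[str, str]:
    """The bank texts the 2FA code to the number's current device."""
    return carrier[number], code

def sim_swap(number: str, new_device: str) -> None:
    """Attacker convinces the carrier to move the number to their SIM."""
    carrier[number] = new_device

# Before the swap, the code reaches the victim's handset.
device, _ = send_sms_code("+1-555-0100", "482913")
assert device == "victim-handset"

# After the swap, the same login flow delivers the code to the attacker.
sim_swap("+1-555-0100", "attacker-handset")
device, code = send_sms_code("+1-555-0100", "482913")
assert device == "attacker-handset"  # attacker now reads the 2FA code
```

Nothing in the login flow changes after the swap, which is why the board's recommendations target the carriers' transfer procedures rather than the companies consuming the codes.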

"The board wants telecom carriers to report SIM-swapping attacks to US regulatory agencies, and for those agencies to penalize carriers when they don't adequately protect customers from such attacks," reports CNN.

Transportation

California Will Probe Data-Collecting, Internet-Connected Cars (msn.com) 25

The Washington Post reports: California's newly empowered privacy regulators announced their first case Monday, a probe of the data practices of newer-generation cars that are often or always connected to the internet. The California Privacy Protection Agency said its enforcement division would review manufacturers' treatment of data collected from vehicles, including locations, smartphone connections and images from cameras.

The agency was established by a 2020 ballot initiative that toughened the California Consumer Privacy Act of 2018. As of July 1, it can conduct operations to enforce Californians' right to learn what is being collected about them, the right to stop that information from being spread and the right to have it deleted...

When combined with web surfing habits and other internet data collated by brokers, movement tracking can paint a full portrait that includes a person's home, workplace, shopping habits, religious attendance and medical treatments. Insurance companies also want data on how quickly drivers brake ahead of problems on the road, along with other performance indicators, and they are willing to pay to get it.

The Post notes that data is beamed to business partners of automakers under "vague privacy policies."

The Courts

Federal Judge Clears Way for US Antitrust Case Against Google (msn.com) 32

The Washington Post reports: A federal judge said the Department of Justice's landmark case, which alleges that Google's dominance over the online search business is anti-competitive, can go ahead, throwing out some of the government's claims but ruling that a trial is still necessary.

Google had asked for the judge to make a ruling before the trial, which is scheduled for September.

Some of the government's claims, including those put together by a consortium of state attorneys general that argued the way Google designed its search engine page was unfairly harming competitors like Yelp, were dismissed. But D.C. District Court Judge Amit Mehta said the allegations that Google's overall business practices constitute a monopoly that violates the 1890 Sherman Antitrust Act still deserve a trial. "This is a significant victory for Google, knocking out several claims and narrowing the range of activities at issue for trial," said David Olson, an associate professor and antitrust expert at Boston College's law school. "Having said that, the strongest claims against Google remain, so Google still remains at risk of a significant antitrust ruling against it."

The trial will be a major test for Google and the massive business empire it has assembled over the past two decades. The company is still the dominant portal to the internet, exercising immense power over what people see online... The eventual ruling will also be seen as a test for the U.S. government's more aggressive posture on antitrust.

Security

Microsoft Comes Under Blistering Criticism For 'Grossly Irresponsible' Security (arstechnica.com) 55

An anonymous reader quotes a report from Ars Technica: Microsoft has once again come under blistering criticism for the security practices of Azure and its other cloud offerings, with the CEO of security firm Tenable saying Microsoft is "grossly irresponsible" and mired in a "culture of toxic obfuscation." The comments from Amit Yoran, chairman and CEO of Tenable, come six days after Sen. Ron Wyden (D-Ore.) blasted Microsoft for what he said were "negligent cybersecurity practices" that enabled hackers backed by the Chinese government to steal hundreds of thousands of emails from cloud customers, including officials in the US Departments of State and Commerce. Microsoft has yet to provide key details about the mysterious breach, which involved the hackers obtaining an extraordinarily powerful encryption key granting access to a variety of its other cloud services. The company has taken pains ever since to obscure its infrastructure's role in the mass breach.

On Wednesday, Yoran took to LinkedIn to castigate Microsoft for failing to fix what the company said on Monday was a "critical" issue that gives hackers unauthorized access to data and apps managed by Azure AD, a Microsoft cloud offering for managing user authentication inside large organizations. Monday's disclosure said that the firm notified Microsoft of the problem in March and that Microsoft reported 16 weeks later that it had been fixed. Tenable researchers told Microsoft that the fix was incomplete. Microsoft set the date for providing a complete fix to September 28.

"To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank," Yoran wrote. "They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft." He continued: "Did Microsoft quickly fix the issue that could effectively lead to the breach of multiple customers' networks and services? Of course not. They took more than 90 days to implement a partial fix -- and only for new applications loaded in the service."

In response, Microsoft officials wrote: "We appreciate the collaboration with the security community to responsibly disclose product issues. We follow an extensive process involving a thorough investigation, update development for all versions of affected products, and compatibility testing among other operating systems and applications. Ultimately, developing a security update is a delicate balance between timeliness and quality, while ensuring maximized customer protection with minimized customer disruption." Microsoft went on to say that the initial fix in June "mitigated the issue for the majority of customers" and "no customer action is required."

In a separate email, Yoran responded: "It now appears that it's either fixed, or we are blocked from testing. We don't know the fix, or mitigation, so hard to say if it's truly fixed, or Microsoft put a control in place like a firewall rule or ACL to block us. When we find vulns in other products, vendors usually inform us of the fix so we can validate it effectively. With Microsoft Azure that doesn't happen, so it's a black box, which is also part of the problem. The 'just trust us' lacks credibility when you have the current track record."

Open Source

Hugging Face, GitHub and More Unite To Defend Open Source in EU AI Legislation (venturebeat.com) 19

A coalition of a half-dozen open-source AI stakeholders -- Hugging Face, GitHub, EleutherAI, Creative Commons, LAION and Open Future -- is calling on EU policymakers to protect open source innovation as they finalize the EU AI Act, which will be the world's first comprehensive AI law. From a report: In a policy paper released this week, "Supporting Open Source and Open Science in the EU AI Act," the open-source AI leaders offered recommendations "for how to ensure the AI Act works for open source" -- with the "aim to ensure that open AI development practices are not confronted with obligations that are structurally impractical to comply with or that would be otherwise counterproductive."

According to the paper, "overbroad obligations" that favor closed and proprietary AI development -- like models from top AI companies such as OpenAI, Anthropic and Google -- "threaten to disadvantage the open AI ecosystem." The paper was released as the European Commission, Council and Parliament debate the final EU AI Act in what is known as the "trilogue," which began after the European Parliament passed its version of the bill on June 14. The goal is to finish and pass the AI Act by the end of 2023 before the next European Parliament elections.

United States

Lindsey Graham and Elizabeth Warren: When It Comes To Big Tech, Enough Is Enough 142

Lindsey Graham and Elizabeth Warren, writing at The New York Times: Enough is enough. It's time to rein in Big Tech. And we can't do it with a law that only nibbles around the edges of the problem. Piecemeal efforts to stop abusive and dangerous practices have failed. Congress is too slow, it lacks the tech expertise, and the army of Big Tech lobbyists can pick off individual efforts easier than shooting fish in a barrel. Meaningful change -- the change worth engaging every member of Congress to fight for -- is structural.

For more than a century, Congress has established regulatory agencies to preserve innovation while minimizing harm presented by emerging industries. In 1887 the Interstate Commerce Commission took on railroads. In 1914 the Federal Trade Commission took on unfair methods of competition and later unfair and deceptive acts and practices. In 1934 the Federal Communications Commission took on radio (and then television). In 1975 the Nuclear Regulatory Commission took on nuclear power, and in 1977 the Federal Energy Regulatory Commission took on electric generation and transmission. We need a nimble, adaptable, new agency with expertise, resources and authority to do the same for Big Tech.

Our Digital Consumer Protection Commission Act would create an independent, bipartisan regulator charged with licensing and policing the nation's biggest tech companies -- like Meta, Google and Amazon -- to prevent online harm, promote free speech and competition, guard Americans' privacy and protect national security. The new watchdog would focus on the unique threats posed by tech giants while strengthening the tools available to the federal agencies and state attorneys general who have authority to regulate Big Tech.

Our legislation would guarantee common-sense safeguards for everyone who uses tech platforms. Families would have the right to protect their children from sexual exploitation, cyberbullying and deadly drugs. Certain digital platforms have promoted the sexual abuse and exploitation of children, suicidal ideation and eating disorders or done precious little to combat these evils; our bill would require Big Tech to mitigate such harms and allow families to seek redress if they do not.

EU

EU Opens Antitrust Probe Into Microsoft Over Teams Bundling (cnbc.com) 54

European Union regulators on Thursday opened an antitrust investigation into Microsoft's bundling of its video and chat app Teams with other Office products. From a report: The European Commission, the EU's executive arm, said that these practices may constitute anti-competitive behavior. It is the first antitrust investigation by the EU into Microsoft in over a decade. "The Commission is concerned that Microsoft may grant Teams a distribution advantage by not giving customers the choice on whether or not to include access to that product when they subscribe to their productivity suites and may have limited the interoperability between its productivity suites and competing offerings," the EU regulators said on Thursday in a press release. In other words, the EU is concerned Microsoft is not giving customers the choice to not buy Teams when they subscribe to the company's Office 365 product. In doing so, Microsoft might be stopping other companies from competing in the workplace messaging and video app space.

AI

Top Tech Companies Form Group Seeking To Control AI (ft.com) 33

Some of the world's most advanced artificial intelligence companies have formed a group to research increasingly powerful AI and establish best practices for controlling it, as public anxiety and regulatory scrutiny over the impact of the technology increases. From a report: On Wednesday, Anthropic, Google, Microsoft and OpenAI launched the Frontier Model Forum, with the aim of "ensuring the safe and responsible development of frontier AI models." In recent months, the US companies have rolled out increasingly powerful AI tools that produce original content in image, text or video form by drawing on a bank of existing material. The developments have raised concerns about copyright infringement, privacy breaches and that AI could ultimately replace humans in a range of jobs.

"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," said Brad Smith, vice-chair and president of Microsoft. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity." Membership of the forum is limited only to the handful of companies building "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models," according to its founders.

United States

FTC Readies Lawsuit That Could Break Up Amazon 62

The Federal Trade Commission is finalizing its long-awaited antitrust lawsuit against Amazon, POLITICO reported Tuesday, citing people with knowledge of the matter, a move that could ultimately break up parts of the company. From the report: The FTC has been investigating the company on a number of fronts, and the coming case would be one of the most aggressive and high-profile moves in the Biden administration's rocky effort to tame the power of tech giants. The wide-ranging lawsuit is expected as soon as August, and will likely challenge a host of Amazon's business practices, said the people, who were granted anonymity to discuss a confidential matter. If successful, it could lead to a court-ordered restructuring of the $1.3 trillion empire and define the legacy of FTC Chair Lina Khan.

Khan rose to prominence as a Big Tech skeptic with a 2017 academic paper specifically identifying Amazon as a modern monopolist needing to be reined in. Because any case will likely take years to wind through the courts, the final result will rest with her successors. The exact details of the final lawsuit are not known, and changes to the final complaint are expected until the eleventh hour. But personnel throughout the agency, including Khan herself, have homed in on several of Amazon's business practices, said some of the people.

AI

Ask Slashdot: What Happens After Every Programmer is Using AI? (infoworld.com) 127

There have been several articles on how programmers can adapt to writing code with AI. But presumably AI companies will then gather more data from how real-world programmers use their tools.

So long-time Slashdot reader ThePub2000 has a question. "Where's the generative leaps if the humans using it as an assistant don't make leaps forward in a public space?" Let's posit a couple of things:

- First, your AI responses are good enough to use.
- Second, because they're good enough to use you no longer need to post publicly about programming questions.

Where does AI go after it's "perfected itself"?

Or, must we live in a dystopian world where code is scrapable for free, regardless of license, but access to support in an AI from that code comes at a price?

United States

NYPD To Test Public Announcement Drones During Emergencies (vice.com) 49

An anonymous reader quotes a report from Motherboard: [T]he NYPD announced it's piloting test drones to fly over at-risk neighborhoods and make public announcements during emergencies. On Sunday, at the tail end of a weekend of heavy rainfall and flooding, New York City's emergency notification system tweeted that the NYPD would be "conducting a test of remote-piloted public messaging capabilities" at a location confirmed to AM New York as Hook Creek Park in Queens. The NYPD told AM New York that the drones were being tested to make announcements during weather-related emergencies, and were being tested in advance of more flooding expected this weekend. The comments suggest that public announcement drones could be deployed in a real-world scenario very soon.

Besides the eeriness of a drone instructing New Yorkers during life-threatening emergencies, the test raises questions about the NYPD's compliance with laws that require the agency to alert the public when deploying surveillance technology. The NYPD is required to post an impact statement and use policy on its website and seek public comment 90 days prior to deploying new surveillance technology to comply with the 2020 POST Act. However, according to the law, the NYPD merely has to amend old use policies if it is using previously existing surveillance tech for new purposes. In its use policy for unmanned aircraft, finalized in April 2021, there is no mention of the emergency announcements. The document says, "In situations where deployment of NYPD (drones) has not been foreseen or prescribed in policy, the highest uniformed member of the NYPD, the Chief of Department, will decide if deployment is appropriate and lawful. In accordance with the Public Oversight of Surveillance Technology Act, an addendum to this impact and use policy will be prepared as necessary to describe any additional uses of UAS." No such addendum appears on the website.
"This plan just isn't going to fly. The city already has countless ways of reaching New Yorkers, and it would take thousands of drones to reach the whole city," Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project told Motherboard by email.

"The drones are a terrible way to alert New Yorkers, but they are a great way to creep us out. More alarmingly, the NYPD is once again violating the landmark Public Oversight of Surveillance Technology (POST) Act, which requires public notice and comment before deploying new surveillance systems." Cahn added: "No gadget is going to be a substitute for effective city management and communication practices."

United States

Bank of America Fined $250M for 'Systematic' Overcharging, Opening Unwanted Credit Cards (msn.com) 80

Bank of America "will pay more than $250 million in refunds and fines," reports the Washington Post, "after federal regulators found the company systematically overcharged customers, withheld promised bonuses and opened accounts without customer approval." The Consumer Financial Protection Bureau [or CFPB] found the bank made "substantial additional revenue" for years by repeatedly charging customers $35 overdraft fees on the same transaction. The bank also denied cash and points bonuses it had pledged to tens of thousands of credit card customers. And starting in 2012, Bank of America employees enrolled customers in credit card accounts without their approval, obtaining credit reports without permission to complete the applications, the bureau said.

The bureau's director emphasized that "These practices are illegal and undermine customer trust," adding that America's CFPB "will be putting an end to these practices across the banking system."

The Post points out that Bank of America will now pay more than $100 million in restitution to customers, a $90 million fine to the CFPB and another $60 million fine to the Office of the Comptroller of the Currency. "Bank of America already has refunded customers denied credit card rewards and bonuses, the consumer bureau said. It will be repaying those it overcharged on fees by depositing funds into their account or sending a check..."

But how widespread is the problem? Hundreds of thousands of customers were harmed over several years, the consumer agency said. Bank of America is the second largest U.S. bank, with 68 million residential and small business customers... In extra fees alone, the bank charged customers "tens of millions of dollars" between March 2020 and November 2021, federal regulators found. The regulator said Bank of America in that period hit customers with a $35 fee if they had insufficient funds to cover a charge. If the customer still lacked funds when the merchant resubmitted the transaction, the company assessed another $35 penalty... And bank employees opened credit card accounts for customers without their knowledge in a bid to meet individual sales goals, the CFPB said...
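The re-presentment pattern the regulators describe is simple arithmetic: each time the merchant resubmits the same declined charge, a fresh $35 fee lands on the account. A toy sketch, with purely illustrative numbers:

```python
# Toy sketch of the re-presentment fee pattern described by the CFPB:
# one underlying purchase, resubmitted by the merchant, triggers a new
# $35 fee on every failed attempt. Amounts are illustrative only.

OVERDRAFT_FEE = 35

def present(balance: float, amount: float, fees: float) -> tuple[bool, float]:
    """Try to clear a charge; add a fee if funds are insufficient."""
    if balance >= amount:
        return True, fees
    return False, fees + OVERDRAFT_FEE

balance, charge, fees = 20.0, 50.0, 0.0
cleared, fees = present(balance, charge, fees)   # first attempt fails: +$35
cleared, fees = present(balance, charge, fees)   # merchant resubmits: +$35
assert fees == 70  # two fees for one underlying $50 purchase
```

Scaled across millions of accounts and repeated resubmissions, this mechanism is how the bank generated the "tens of millions of dollars" in extra fees cited by the regulators.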

[T]he practice has given the banking industry a major black eye in recent years. Wells Fargo reached a $3.7 billion settlement with federal regulators in December over a range of violations, including opening millions of fake accounts. The CFPB fined U.S. Bank $37.5 million last summer over its own sham accounts scandal.

This is not Bank of America's first brush with federal regulators over its treatment of customers. The CFPB ordered the company to pay $727 million in 2014 over illegal credit card practices. The company paid another $225 million last year in fines over mishandling state unemployment benefits during the pandemic and a separate $10 million civil penalty over unlawful garnishments.

"The company did not admit or deny wrongdoing in its settlement with the agency..." notes the article. But a statement from the chairman of the U.S. Senate Banking Committee said Bank of America "has clearly broken the law in yet another case of Wall Street banks taking Americans' money to pad their already-massive profits...

"This kind of abuse is why we will continue to hold the big banks accountable, and it's why we need the Consumer Financial Protection Bureau — so consumers can keep their hard-earned money."

The Courts

Texas' TikTok Ban Hit With First Amendment Lawsuit (cnn.com) 37

Texas's ban on TikTok at state institutions violates the First Amendment, claims a lawsuit filed Thursday by a group of academics and civil society researchers. CNN reports: The Knight First Amendment Institute at Columbia University filed the lawsuit on behalf of the Coalition for Independent Technology Research, which works to study the impact of technology on society. The lawsuit specifically challenges Texas' TikTok ban in relation to public universities, saying it compromises academic freedom and impedes vital research. "The ban is not just ineffective but counterproductive. It's impeding researchers and scholars from studying the very things that Texas says it's concerned about -- like data-collection and disinformation," Jameel Jaffer, executive director of the Institute, told CNN.

The lawsuit cites the example of a University of North Texas researcher who studies young people's use of social media, who has been forced to abandon research projects that rely on university computers and to remove material about TikTok from her courses. The Knight Institute lawsuit notes that Texas has not imposed a ban on other online platforms that collect similar user data, such as Meta and Google. It further argues that a ban doesn't "meaningfully" constrain China's ability to collect sensitive data about Americans, because this data is widely available from other data brokers.

"It's entirely legitimate for government officials to be concerned about social media platforms' data-collection practices, but imposing broad bans on Americans' access to the platforms isn't a reasonable, effective, or constitutional response to those concerns," Jaffer told CNN. "Like it or not, TikTok is an immensely popular communications platform, and its policies and practices are influencing culture and politics around the world," said Dave Karpf, a Coalition for Independent Technology Research board member and associate professor in the George Washington University School of Media and Public Affairs. "It's important that scholars and researchers be able to study the platform and illuminate the risks associated with it. Ironically, Texas's misguided ban is impeding our members from studying the very risks that Texas says it wants to address."

Businesses

Who Employs Your Doctor? Increasingly, a Private Equity Firm. (nytimes.com) 120

In recent years, private equity firms have been gobbling up physician practices to form powerful medical groups across the country, according to a new report. The New York Times: In more than a quarter of local markets -- in places like Tucson, Ariz.; Columbus, Ohio; and Providence, R.I. -- a single private equity firm owned more than 30 percent of practices in a given specialty in 2021. In 13 percent of the markets, the firms owned groups employing more than half the local specialists. The medical groups were associated with higher prices in their respective markets, particularly when they controlled a dominant share, according to a paper by researchers at the Petris Center at the University of California, Berkeley, and the Washington Center for Equitable Growth, a progressive think tank in Washington, D.C.

When a firm controlled more than 30 percent of the market, the cost of care in three specialties -- gastroenterology, dermatology, and obstetrics and gynecology -- increased by double digits. The paper, published by the American Antitrust Institute, documented substantial private equity purchases across multiple medical specialties over the last decade. Urology, ophthalmology, cardiology, oncology, radiology and orthopedics have also been major targets for such deals. "It's shocking when you look at it," said Laura Alexander, director of markets and competition policy for the Washington Center, who said private equity firms dominated only a handful of markets a decade ago. By looking at individual markets, the researchers were able to document the local impact. "National rates mask this much more acute problem in local markets," she said.

United States

OpenAI's ChatGPT Under Investigation by FTC (wsj.com) 32

The Federal Trade Commission is investigating whether OpenAI's ChatGPT artificial-intelligence system has harmed individuals by publishing false information about them, according to a letter the agency sent to the company. WSJ: The letter, reported earlier by The Washington Post and confirmed by a person familiar with the matter, also asked detailed questions about the company's data-security practices, citing a 2020 incident in which the company disclosed a bug that allowed users to see information about other users' chats and some payment-related information.
