Privacy

Logitech Reports Data Breach From Zero-Day Software Vulnerability (nerds.xyz) 5

BrianFagioli writes: Logitech has confirmed a cybersecurity breach after an intruder exploited a zero-day in a third-party software platform and copied internal data. The company says the incident did not affect its products, manufacturing or business operations, and it does not believe sensitive personal information like national ID numbers or credit card data was stored in the impacted system. The attacker still managed to pull limited information tied to employees, consumers, customers and suppliers, raising fair questions about how long the zero-day existed before being patched.

Logitech brought in outside cybersecurity firms, notified regulators and says the incident will not materially affect its financial results. The company expects its cybersecurity insurance policy to cover investigation costs and any potential legal or regulatory issues. Still, with zero-day attacks increasing across the tech world, even established hardware brands are being forced to acknowledge uncomfortable weaknesses in their internal systems.

AI

Russia's AI Robot Falls Seconds After Being Unveiled 112

Russia's first AI humanoid robot, Aidol, fell just seconds after its debut at a technology event in Moscow on Tuesday. "The robot was being led on stage to the soundtrack from the film 'Rocky,' before it suddenly lost its balance and fell," reports the BBC. "Assistants could then be seen scrambling to cover it with a cloth -- which ended up tangling in the process." Developers of Aidol blamed poor lighting and calibration issues for the collapse, saying the robot's stereo cameras are sensitive to light and the hall was dark.

Linux

Linus Torvalds Expresses Frustration With 'Garbage' Link Tags In Git Commits (phoronix.com) 82

"I have not pulled this, I'm annoyed by having to even look at this, and if you actually expect me to pull this I want a real explanation and not a useless link," Linus Torvalds posted Friday on the Linux kernel mailing list.

Phoronix explains: It's become a common occurrence seeing "Link: " tags within Git commits for the Linux kernel that point to the latest Linux kernel mailing list patches of the same patch... Linus Torvalds has had enough and will be more strict against accepting pull requests that have link tags of no value. He commented yesterday on a block pull request that he pulled and then backed out of:

"And dammit, this commit has that promising 'Link:' argument that I hoped would explain why this pointless commit exists, but AS ALWAYS that link only wasted my time by pointing to the same damn information that was already there. I was hoping that it would point to some oops report or something that would explain why my initial reaction was wrong.

"Stop this garbage already. Stop adding pointless Link arguments that waste people's time. Add the link if it has *ADDITIONAL* information....

"Yes, I'm grumpy. I feel like my main job — really my only job — is to try to make sense of pull requests, and that's why I absolutely detest these things that are automatically added and only make my job harder."

A longer discussion ensued...
  • Torvalds: [A] "perfect" model might be to actually have some kind of automation of "unless there was actual discussion about it". But I feel such a model might be much too complicated, unless somebody *wants* to explore using AI because their job description says "Look for actual useful AI uses". In today's tech world, I assume such job descriptions do exist. Sigh...
  • Torvalds: I do think it makes sense for patch series that (a) are more than a small handful of patches and (b) have some real "story" to them (ie a cover letter that actually explains some higher-level issues)...

Torvalds also had two responses to a poster who'd said "IMHO it's better to have a Link and it _potentially_ being useful than not to have it and then need to search around for it."

  • Torvalds: No. Really. The issue is "potentially — but very likely not — useful" vs "I HIT THIS TEN+ TIMES EVERY SINGLE F%^& RELEASE".

    There is just no comparison. I have literally *never* found the original submission email to be useful, and I'm tired of the "potentially useful" argument that has nothing to back it up with. It's literally magical thinking of "in some alternate universe, pigs can fly, and that link might be useful"
  • Torvalds: And just to clarify: the hurt is real. It's not just the disappointment. It's the wasted effort of following a link and having to then realize that there's nothing useful there. Those links *literally* double the effort for me when I try to be careful about patches...

    The cost is real. The cost is something I've complained about before... Yes, it's literally free to you to add this cost. No, *YOU* don't see the cost, and you think it is helpful. It's not. It's the opposite of helpful. So I want commit messages to be relevant and explain what is going on, and I want them to NOT WASTE MY TIME.

    And I also don't want to ignore links that are actually *useful* and give background information. Is that really too much to ask for?

Torvalds points out he's brought this up four times before — once in 2022.

  • Torvalds: I'm a bit frustrated, exactly because this _has_ been going on for years. It's not a new peeve.

    And I don't think we have a good central place for that kind of "don't do this". Yes, there's the maintainer summit, but that's a pretty limited set of people. I guess I could mention it in my release notes, but I don't know who actually reads those either.. So I end up just complaining when I see it.
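The "Link:" tags in dispute are ordinary Git commit trailers. As a minimal sketch of the distinction Torvalds is drawing, here is a small, purely illustrative Python heuristic (not a kernel tool; the lore.kernel.org URL pattern is an assumption for demonstration) that separates trailers pointing back at a patch's own mailing-list posting from links that might carry additional context such as a bug report:

```python
# Hypothetical heuristic: classify "Link:" trailers in a commit message.
# A link to the patch's own lore.kernel.org posting usually repeats what the
# commit already says; other links (bug trackers, oops reports) may add value.
def classify_link_trailers(commit_message):
    """Return (url, verdict) pairs for each Link: trailer found."""
    results = []
    for line in commit_message.splitlines():
        line = line.strip()
        if line.startswith("Link:"):
            url = line.split(None, 1)[1]
            if "lore.kernel.org" in url:
                results.append((url, "patch submission; likely adds nothing"))
            else:
                results.append((url, "may carry additional information"))
    return results

# Example commit message with one self-referential link and one bug link
# (both URLs are made up for illustration).
msg = """block: drop redundant check

Link: https://lore.kernel.org/r/20240101-example@example.com
Link: https://bugzilla.example.org/show_bug.cgi?id=1234
Signed-off-by: A Developer <dev@example.com>
"""

for url, verdict in classify_link_trailers(msg):
    print(url, "->", verdict)
```

In practice the kernel's trailers are added by maintainer tooling such as b4, which is exactly why Torvalds asks for human judgment about whether a link adds information before it is attached automatically.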

Bug

UK Courts Service 'Covered Up' IT Bug That Lost Evidence (bbc.co.uk) 20

Bruce66423 shares a report from the BBC: The body running courts in England and Wales has been accused of a cover-up, after a leaked report found it took several years to react to an IT bug that caused evidence to go missing, be overwritten or appear lost. Sources within HM Courts & Tribunals Service (HMCTS) say that as a result, judges in civil, family and tribunal courts will have made rulings on cases when evidence was incomplete. The internal report, leaked to the BBC, said HMCTS did not know the full extent of the data corruption, including whether or how it had impacted cases, as it had not undertaken a comprehensive investigation. It also found judges and lawyers had not been informed, as HMCTS management decided it would be "more likely to cause more harm than good." HMCTS says its internal investigation found no evidence that "any case outcomes were affected as a result of these technical issues." However, the former head of the High Court's family division, Sir James Munby, told the BBC the situation was "shocking" and "a scandal." Bruce66423 comments: "Given the relative absence of such stories from the USA, should I congratulate you for better-quality software or for being better at covering up disasters?"

Transportation

Skipping Over-The-Air Car Updates Could Be Costly (autoblog.com) 83

Longtime Slashdot reader Mr_Blank shares a report from Autoblog: Once a new OTA update becomes available, owners of GM vehicles have 45 days to install it. After that window closes, the company will not cover any damages or issues caused by ignoring the update. "Damage resulting from failure to install over-the-air software updates is not covered," states the warranty booklet for 2025 and 2026 models.

This same rule applies to all GM's brands in the USA: Chevrolet, Buick, Cadillac, and GMC. However, if the software update itself causes any component damage, that will be covered by the warranty. Owners coming from older GM vehicles will have to adapt as the company continues to implement its Global B electronic architecture on newer models, which relies heavily on OTA updates. Similar policies appear in the owner's manual for Tesla. Software-defined vehicles are here to stay, even if some of them have far more tech glitches than they should -- just ask Volvo.

Red Hat Software

Free Software Foundation Speaks Up Against Red Hat Source Code Announcement 126

PAjamian writes: Two years ago Red Hat announced an end to its public source code availability. This caused a great deal of outcry from the Enterprise Linux community at large. Since then, many have waited for a statement from the Free Software Foundation concerning its stance on the matter. Now, nearly two years later, the FSF has finally responded to questions regarding its stance on the issue with the following statement:

Generally, we don't agree with what Red Hat is doing. Whether it constitutes a violation of the GPL would require legal analysis and the FSF does not give legal advice. However, as the stewards of the GNU GPL we can speak to how it is intended to be applied, and Red Hat's approach is certainly contrary to the spirit of the GPL. This is unfortunate, because we would expect such flagship organizations to drive the movement forward.

When asked if the FSF would be willing to intervene on behalf of the community they had this to say:

As of today, we are not aware of any issue with Red Hat's new policy that we could pursue on legal grounds. However, if you do find a violation, please follow these instructions and send a report to license-violation@gnu.org.

Following is the full text of my original email to them and their response:

Subject: Statement about recent changes in source code distribution for Red Hat Enterprise Linux
Date: 2023-07-16 00:39:51

> Hi,
>
> I'm a user of Red Hat Enterprise Linux, Rocky Linux and other Linux
> distributions in the RHEL ecosystem. I am also involved in the EL
> (Enterprise Linux) community which is being affected by the statements
> and changes in policy made by Red Hat at
> https://www.redhat.com/en/blog/furthering-evolution-centos-stream and
> https://www.redhat.com/en/blog/red-hats-commitment-open-source-response-gitcentosorg-changes
> (note there are many many more links and posts about this issue which I
> believe you are likely already aware of). While a few of these questions
> are answered more directly by the license FAQ some of them are not and
> there are a not insignificant number of people who would very much
> appreciate a public statement from the FSF that answers these questions
> directly.
>
> Can you please comment or release a statement about the Free Software
> Foundation's position on this issue? Specifically:
>

Thank you for writing in with your questions. My apologies for the delay, but we are a small team with limited resources and it can be challenging to keep up with all the emails we receive.

Generally, we don't agree with what Red Hat is doing. Whether it constitutes a violation of the GPL would require legal analysis and the FSF does not give legal advice. However, as the stewards of the GNU GPL we can speak to how it is intended to be applied, and Red Hat's approach is certainly contrary to the spirit of the GPL. This is unfortunate, because we would expect such flagship organizations to drive the movement forward.

> Is Red Hat's removal of sources from git.centos.org a violation of the
> GPL and various other Free Software licenses for the various programs
> distributed under RHEL?
>
> Is Red Hat's distribution of source RPMs to their customers under their
> subscriber agreement sufficient to satisfy the above mentioned licenses?
>
> Is it a violation if Red Hat terminates a subscription early because
> their customer exercised their rights under the GPL and other Free
> Software licenses to redistribute the RHEL sources or create derivative
> works from them?
>
> Is it a violation if Red Hat refuses to renew a subscription that has
> expired because a customer exercised their rights to redistribute or
> create derivative works?
>
> A number of the programs distributed with RHEL are copyrighted by the
> FSF, some examples being bash, emacs, GNU core utilities, gcc, gnupg and
> glibc. Given that the FSF has standing to act in this matter would the
> FSF be willing to intervene on behalf of the community in order to get
> Red Hat to correct any of the above issues?
>

As of today, we are not aware of any issue with Red Hat's new policy that we could pursue on legal grounds. However, if you do find a violation, please [follow these instructions][0] and send a report to <license-violation@gnu.org>.

[0]: https://www.gnu.org/licenses/gpl-violation.html

If you are interested in something more specific on this, the Software Freedom Conservancy [published an article about the RHEL][1] situation and hosted a [panel at their conference in 2023][2]. These cover the situation fairly thoroughly.

[1]: https://sfconservancy.org/blog/2023/jun/23/rhel-gpl-analysis/
[2]: https://sfconservancy.org/blog/2023/jul/19/rhel-panel-fossy-2023/

AI

AI Workers Seek Whistleblower Cover To Expose Emerging Threats 7

Workers at AI companies want Congress to grant them specific whistleblower protection, arguing that advancements in the technology pose threats that they can't legally expose under current law. From a report: "What people should be thinking about is the 100 ways in which these companies can lose control of these technologies," said Lawrence Lessig, a Harvard law professor who represented OpenAI employees and former employees raising issues about the company. Current dangers range from deepfake videos to algorithms that discriminate, and the technology is quickly becoming more sophisticated. Lessig called the argument that big tech companies and AI startups can police themselves naive. "If there's a risk, which there is, they're not going to take care of it," he said. "We need regulation."

Piracy

US Court Orders LibGen To Pay $30 Million To Publishers, Issues Broad Injunction 27

A New York federal court has ordered (PDF) the operators of shadow library LibGen to pay $30 million in copyright damages to publishers. The default judgment also comes with a broad injunction that affects third-party services including domain registries, browser extensions, CDN providers, IPFS gateways, advertisers, and more. These parties must restrict access to the pirate site. An anonymous reader quotes a report from TorrentFreak: Yesterday, U.S. District Court Judge Colleen McMahon granted the default judgment without any changes. The anonymous LibGen defendants are responsible for willful copyright infringement and their activities should be stopped. "Plaintiffs have been irreparably harmed as a result of Defendants' unlawful conduct and will continue to be irreparably harmed should Defendants be allowed to continue operating the Libgen Sites," the order reads. The order requires the defendants to pay the maximum statutory damages of $150,000 per work, a total of $30 million, for which they are jointly and severally liable. While this is a win on paper, it's unlikely that the publishers will get paid by the LibGen operators, who remain anonymous.

To address this concern, the publishers' motion didn't merely ask for $30 million in damages, they also demanded a broad injunction. Granted by the court yesterday, the injunction requires third-party services such as advertising networks, payment processors, hosting providers, CDN services, and IPFS gateways to restrict access to the site. [...] The injunction further targets "browser extensions" and "other tools" that are used to provide direct access to the LibGen Sites. While site blocking by residential Internet providers is mentioned in reference to other countries, ISP blocking is not part of the injunction itself. In addition to the broad measures outlined above, the order further requires domain name registrars and registries to disable or suspend all active LibGen domains, or alternatively, transfer them to the publishers. This includes Libgen.is, the most used domain name with 16 million monthly visits, as well as Libgen.rs, Libgen.li and many others.

At the moment, it's unclear how actively managed the LibGen site is, as it has shown signs of decay in recent years. However, when faced with domain seizures, sites typically respond by registering new domains. The publishers are aware of this risk. Therefore, they asked the court to cover future domain names too. The court signed off on this request, which means that newly registered domain names can be taken over as well; at least in theory. [...] All in all, the default judgment isn't just a monetary win, on paper, it's also one of the broadest anti-piracy injunctions we've seen from a U.S. court.

The Courts

Courts Close the Loophole Letting the Feds Search Your Phone At the Border (reason.com) 46

On Wednesday, Judge Nina Morrison ruled that cellphone searches at the border are "nonroutine" and require probable cause and a warrant, likening them to more invasive searches due to their heavy privacy impact. As reported by Reason, this decision closes the loophole in the Fourth Amendment's protection against unreasonable searches and seizures, which Customs and Border Protection (CBP) agents have exploited. Courts have previously ruled that the government has the right to conduct routine warrantless searches for contraband at the border. From the report: Although the interests of stopping contraband are "undoubtedly served when the government searches the luggage or pockets of a person crossing the border carrying objects that can only be introduced to this country by being physically moved across its borders, the extent to which those interests are served when the government searches data stored on a person's cell phone is far less clear," the judge declared. Morrison noted that "reviewing the information in a person's cell phone is the best approximation government officials have for mindreading," so searching through cellphone data has an even heavier privacy impact than rummaging through physical possessions. Therefore, the court ruled, a cellphone search at the border requires both probable cause and a warrant. Morrison did not distinguish between scanning a phone's contents with special software and manually flipping through it.

And in a victory for journalists, the judge specifically acknowledged the First Amendment implications of cellphone searches too. She cited reporting by The Intercept and VICE about CBP searching journalists' cellphones "based on these journalists' ongoing coverage of politically sensitive issues" and warned that those phone searches could put confidential sources at risk. Wednesday's ruling adds to a stream of cases restricting the feds' ability to search travelers' electronics. The 4th and 9th Circuits, which cover the mid-Atlantic and Western states, have ruled that border police need at least "reasonable suspicion" of a crime to search cellphones. Last year, a judge in the Southern District of New York also ruled (PDF) that the government "may not copy and search an American citizen's cell phone at the border without a warrant absent exigent circumstances."

Japan

Japan Plans 310-Mile Conveyor Belt That Can Carry Freight of 25,000 Trucks a Day (newatlas.com) 108

The Japanese government plans to create zero-emissions logistics links between major cities, potentially using massive conveyor belts or autonomous electric carts. The initiative aims to shift millions of tons of cargo, reduce greenhouse gas emissions, and alleviate the anticipated 30% shortfall in parcel deliveries by 2030 due to a lack of drivers. New Atlas reports: According to The Japan News, the project has been under discussion since February by an expert panel at the Land, Infrastructure, Transport and Tourism ministry. A draft outline of an interim report was released Friday, revealing plans to complete an initial link between Tokyo and Osaka by 2034. Japan's well-known population collapse issues foretell severe labor squeezes in the coming years, and one specific pressure this project aims to address is the continuing rise in online shopping amid a forecast decline in the number of delivery drivers available to move goods around. The country is expecting some 30% of parcels simply won't make it from A to B by 2030, because there'll be nobody to move them. Hence this wild logistical link, the first iteration of which the team says will move as much small cargo between Tokyo and Osaka as 25,000 trucks.

Exactly how it'll do this is yet to be nailed down, but individual pallets will carry up to a ton of small cargo items, and they'll move without human interference from one end to the other. One possibility is to use massive conveyor belts to cover the 500-km (310-mile) distance between the two cities, running alongside the highway or potentially through tunnels underneath the road. Alternatively, the infrastructure could simply provide flat lanes or tunnels, and the pallets could be shifted by automated electric carts. A 500-km tunnel, mind you, would be insanely expensive at somewhere around $23 billion before any conveyor belts or autonomous carts are factored in. And one does have to wonder if autonomous electric trucks might be able to do the job without any of the infrastructure requirements [...].
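A quick back-of-envelope check helps put the stated figures in perspective. The one-pallet-per-truck equivalence and the conveyor speed below are illustrative assumptions, not project specifications:

```python
# Sanity-check arithmetic on the article's figures: 25,000 truckloads/day
# moved as 1-tonne pallets over a 500 km Tokyo-Osaka link.
SECONDS_PER_DAY = 24 * 60 * 60

pallets_per_day = 25_000            # assume one 1-tonne pallet per truck-equivalent
departure_interval_s = SECONDS_PER_DAY / pallets_per_day
print(f"a pallet must depart roughly every {departure_interval_s:.1f} s")

belt_speed_m_s = 2.0                # assumed conveyor speed, for illustration only
transit_hours = 500_000 / belt_speed_m_s / 3600
print(f"end-to-end transit at {belt_speed_m_s} m/s: about {transit_hours:.0f} h")

daily_tonnage = pallets_per_day * 1.0   # 1 tonne per pallet (stated maximum)
print(f"daily capacity: {daily_tonnage:,.0f} tonnes")
```

So the system would need a loaded pallet leaving every few seconds around the clock, and at a walking-pace belt speed each pallet would spend nearly three days in transit, which suggests why faster automated carts are on the table as an alternative.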

Businesses

GM's Cruise Names Former Amazon, Microsoft Xbox Executive As New CEO (cnbc.com) 6

Cruise, the autonomous vehicle unit from General Motors, named Amazon and Microsoft executive Marc Whitten as its new CEO, replacing former CEO and co-founder Kyle Vogt. CNBC reports: Whitten was a founding engineer at Microsoft's Xbox before leaving the company after more than 17 years to become chief product officer of audio company Sonos in 2014, according to his LinkedIn profile. He then worked at Amazon as vice president of entertainment devices and services before his most recent role as chief product and technology officer for software development company Unity's Create.

His appointment comes at a crucial time for Cruise, which is testing and relaunching its autonomous vehicles on public roadways. It ceased operations weeks after an Oct. 2 accident in which a pedestrian in San Francisco was dragged 20 feet by a Cruise robotaxi. A third-party probe into the October incident ordered by GM and Cruise found that culture issues, ineptitude and poor leadership fueled regulatory oversights that led to the accident. The probe also investigated allegations of a cover-up by Cruise leadership, but investigators did not find evidence to support those claims.

During that time, San Francisco-based Cruise was attempting to expand its operations into a revenue-generating business for GM, which has been a majority owner of the company since acquiring it in 2016. Other investors now include Honda Motor, Microsoft, T. Rowe Price, and Walmart. As of this month, Cruise has resumed supervised driving in Phoenix, Houston and Dallas, in addition to its ongoing testing in Dubai. It has not relaunched in San Francisco, where it remains under investigation related to the accident.

United Kingdom

Microsoft Admits No Guarantee of Sovereignty For UK Policing Data (computerweekly.com) 88

An anonymous reader shared this report from Computer Weekly: Microsoft has admitted to Scottish policing bodies that it cannot guarantee the sovereignty of UK policing data hosted on its hyperscale public cloud infrastructure, despite its systems being deployed throughout the criminal justice sector.

According to correspondence released by the Scottish Police Authority (SPA) under freedom of information (FOI) rules, Microsoft is unable to guarantee that data uploaded to a key Police Scotland IT system — the Digital Evidence Sharing Capability (DESC) — will remain in the UK as required by law. While the correspondence has not been released in full, the disclosure reveals that data hosted in Microsoft's hyperscale public cloud infrastructure is regularly transferred and processed overseas; that the data processing agreement in place for the DESC did not cover UK-specific data protection requirements; and that while the company has the ability to make technical changes to ensure data protection compliance, it is only making these changes for DESC partners and not other policing bodies because "no one else had asked".

The correspondence also contains acknowledgements from Microsoft that international data transfers are inherent to its public cloud architecture. As a result, the issues identified with the Scottish Police will equally apply to all UK government users, many of whom face similar regulatory limitations on the offshoring of data. The recipient of the FOI disclosures, Owen Sayers — an independent security consultant and enterprise architect with over 20 years' experience in delivering national policing systems — concluded it is now clear that UK policing data has been travelling overseas and "the statements from Microsoft make clear that they 100% cannot comply with UK data protection law".

Linux

T2 Linux 24.6 Goes Desktop with Integrated Windows Binary Support (t2sde.org) 35

T2's open development process and the collection of exotic, vintage and retro hardware can be followed live on YouTube and Twitch.

Now Slashdot reader ReneR writes: Embedded T2 Linux is known for its sophisticated cross compile features as well as supporting all CPU architectures, including: Alpha, Arc, ARM(64), Avr32, HPPA(64), IA64, M68k, MIPS(64), Nios2, PowerPC(64)(le), RISCV(64), s390x, SPARC(64), SuperH, x86(64). But now it's going Desktop!

24.6 comes as a major convenience update, with out-of-the-box Windows application compatibility as well as LibreOffice and Thunderbird cross-compiled and included in the default base ISO for the most popular CPU architectures.

Continuing its effort to keep Intel IA-64 Itanium alive, the project found a major, up-to-3x performance improvement for OpenSSL, doubling crypto performance for many popular algorithms and for SSH.

The project's CI unit testing was further expanded to now cover the whole installation in two variants. The graphical desktop defaults were also polished -- and a T2 branded wallpaper was added! ;-)
The release contains 606 changesets, including approximately 750 package updates, 67 issues fixed, 80 packages or features added, 21 removed and 9 other improvements.

Government

White House Looks To Curb Foreign Powers' Ability To Buy Americans' Sensitive Personal Data With Executive Order (cnn.com) 117

President Joe Biden will issue an executive order on Wednesday aimed at curbing foreign governments' ability to buy Americans' sensitive personal information such as health and geolocation data, according to senior US officials. From a report: The move marks a rare policy effort to address a longstanding US national security concern: the ease with which anyone, including foreign intelligence services, can legally buy Americans' data and then use the information for espionage, hacking and blackmail. The issue, a senior Justice Department official told reporters this week, is a "growing threat to our national security."

The executive order will give the Justice Department the authority to regulate commercial transactions that "pose an unacceptable risk" to national security by, for example, giving a foreign power large-scale access to Americans' personal data, the Justice Department official said. The department will also issue regulations that require better protection of sensitive government information, including geolocation data on US military members, according to US officials. A lot of the online trade in personal information runs through so-called data brokers, which buy information on people's Social Security numbers, names, addresses, income, employment history and criminal background, as well as other items.

"Countries of concern, such as China and Russia, are buying Americans' sensitive personal data from data brokers," a separate senior administration official told reporters. In addition to health and location data, the executive order is expected to cover other sensitive information like genomic and financial data. Administration officials told reporters the new executive order would be applied narrowly so as not to hurt business transactions that do not pose a national security risk.
The White House's press release.

Social Networks

Instagram and Threads Will Stop Recommending Political Content (theverge.com) 19

In a blog post today, Meta announced that it'll stop showing political content across Instagram and Threads unless users explicitly choose to have it recommended to them. The Verge reports: Meta announced that it's expanding an existing Reels policy that limits political content from people you're not following (including posts about social issues) from appearing in recommended feeds to more broadly cover the company's Threads and Instagram platforms. "Our goal is to preserve the ability for people to choose to interact with political content, while respecting each person's appetite for it," said Instagram head Adam Mosseri, announcing on Threads that the changes will be applied over the next few weeks. Facebook is also expected to roll out these new controls at a later, undisclosed date.

Users who still want to have content "likely to mention governments, elections, or social topics that affect a group of people and/or society at large" recommended to them can choose to turn off this limitation within their account settings. The changes will apply to public accounts when enabled and only in places where content is being recommended, such as Explore, Reels, in-feed recommendations, and suggested users. The update won't change how users view content from accounts they choose to follow, so accounts that aren't eligible to be recommended can still post political content to their followers via their feed and Stories.

For creators, Meta says that "if your account is not eligible to be recommended, none of your content will be recommended regardless of whether or not all of your content goes against our recommendations guidelines." When these changes do go live, professional accounts on Instagram will be able to use the Account Status feature to check if posting political content is impacting their eligibility for recommendation. Professional accounts can also use Account Status to contest decisions that revoke this eligibility, alongside editing, removing, or pausing politically related posts until the account is eligible to be recommended again.

Social Networks

Is AI Hastening the Demise of Quora? (slate.com) 57

Quora "used to be a thriving community that worked to answer our most specific questions," writes Slate. "But users are fleeing," while the site hosts "a never-ending avalanche of meaningless, repetitive sludge, filled with bizarre, nonsensical, straight-up hateful, and A.I.-generated entries..."

The site has faced moderation issues, spam, trolls, and bots re-posting questions from Reddit (plus competition for ad revenue from sites like Facebook and Google, which forced cuts in Quora's support and moderation teams). But automating its moderation "did not improve the situation..."

"Now Quora is even offering A.I.-generated images to accompany users' answers, even though the spawned illustrations make little sense." To top it all off, after Quora began using A.I. to "generate machine answers on a number of selected question pages," the site made clear the possibility that human-crafted answers could be used for training A.I. This meant that the detailed writing Quorans provided mostly for free would be ingested into a custom large language model. Updated terms of service and privacy policies went into effect at the site last summer. As angel investor and Quoran David S. Rose paraphrased them: "You grant all other Quora users the unlimited right to reuse and adapt your answers," "You grant Quora the right to use your answers to train an LLM unless you specifically opt out," and "You completely give up your right to be any part of any class action suit brought against Quora," among others. (Quora's Help Center claims that "as of now, we do not use answers, posts, or comments added to Quora to train LLMs used for generating content on Quora. However, this may change in the future." The site offers an opt-out setting, although it admits that "opting out does not cover everything.")

This raised the issue of consent and ownership, as Quorans had to decide whether to consent to the new terms or take their work and flee. High-profile users, like fantasy author Mercedes R. Lackey, are removing their work from their profiles and writing notes explaining why. "The A.I. thing, the terms of service issue, has been a massive drain of top talent on Quora, just based on how many people have said, Downloaded my stuff and I'm out of there," Lackey told me. It's not that all Quorans want to leave, but it's hard for them to choose to remain on a website where they now have to constantly fight off errors, spam, trolls, and even account impersonators....

The tragedy of Quora is not just that it crushed the flourishing communities it once built up. It's that it took all of that goodwill, community, expertise, and curiosity and assumed that it could automate a system that equated it, apparently without much thought to how pale the comparison is. [Nelson McKeeby, an author who joined Quora in 2013] has a grim prediction for the future: "Eventually Quora will be robot questions, robot answers, and nothing else." I wonder how the site will answer the question of why Quora died, if anyone even bothers to ask.

The article notes that Andreessen Horowitz gave Quora "a much-needed $75 million investment — but only for the sake of developing its on-site generative-text chatbot, Poe."
AI

MIT Group Releases White Papers On Governance of AI (mit.edu) 46

An anonymous reader quotes a report from MIT News: Providing a resource for U.S. policymakers, a committee of MIT leaders and scholars has released a set of policy briefs that outlines a framework for the governance of artificial intelligence. The approach includes extending current regulatory and liability approaches in pursuit of a practical way to oversee AI. The aim of the papers is to help enhance U.S. leadership in the area of artificial intelligence broadly, while limiting harm that could result from the new technologies and encouraging exploration of how AI deployment could be beneficial to society.

The main policy paper, "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," suggests AI tools can often be regulated by existing U.S. government entities that already oversee the relevant domains. The recommendations also underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit those applications. "As a country we're already regulating a lot of relatively high-risk things and providing governance there," says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. "We're not saying that's sufficient, but let's start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach." [...]

"The framework we put together gives a concrete way of thinking about these things," says Asu Ozdaglar, the deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT's Department of Electrical Engineering and Computer Science (EECS), who also helped oversee the effort. The project includes multiple additional policy papers and comes amid heightened interest in AI over the last year as well as considerable new industry investment in the field. The European Union is currently trying to finalize AI regulations using its own approach, one that assigns broad levels of risk to certain types of applications. In that process, general-purpose AI technologies such as language models have become a new sticking point. Any governance effort faces the challenges of regulating both general and specific AI tools, as well as an array of potential problems including misinformation, deepfakes, surveillance, and more.
These are the key policies and approaches mentioned in the white papers:

Extension of Current Regulatory and Liability Approaches: The framework proposes extending current regulatory and liability approaches to cover AI. It suggests leveraging existing U.S. government entities that oversee relevant domains for regulating AI tools. This is seen as a practical approach, starting with areas where human activity is already being regulated and deemed high risk.

Identification of Purpose and Intent of AI Tools: The framework emphasizes the importance of AI providers defining the purpose and intent of AI applications in advance. This identification process would enable the application of relevant regulations based on the specific purpose of AI tools.

Responsibility and Accountability: The policy brief underscores the responsibility of AI providers to clearly define the purpose and intent of their tools. It also suggests establishing guardrails to prevent misuse and determining the extent of accountability for specific problems. The framework aims to identify situations where end users could reasonably be held responsible for the consequences of misusing AI tools.

Advances in Auditing of AI Tools: The policy brief calls for advances in auditing new AI tools, whether initiated by the government, user-driven, or arising from legal liability proceedings. Public standards for auditing are recommended, potentially established by a nonprofit entity or a federal entity similar to the National Institute of Standards and Technology (NIST).

Consideration of a Self-Regulatory Organization (SRO): The framework suggests considering the creation of a new, government-approved "self-regulatory organization" (SRO) agency for AI. This SRO, similar to FINRA for the financial industry, could accumulate domain-specific knowledge, ensuring responsiveness and flexibility in engaging with a rapidly changing AI industry.

Encouragement of Research for Societal Benefit: The policy papers highlight the importance of encouraging research on how to make AI beneficial to society. For instance, there is a focus on exploring the possibility of AI augmenting and aiding workers rather than replacing them, leading to long-term economic growth distributed throughout society.

Addressing Legal Issues Specific to AI: The framework acknowledges the need to address specific legal matters related to AI, including copyright and intellectual property issues. Special consideration is also mentioned for "human plus" legal issues, where AI capabilities go beyond human capacities, such as mass surveillance tools.

Broadening Perspectives in Policymaking: The ad hoc committee emphasizes the need for a broad range of disciplinary perspectives in policymaking, advocating for academic institutions to play a role in addressing the interplay between technology and society. The goal is to govern AI effectively by considering both technical and social systems.
Space

Euclid Telescope: First Images Revealed From 'Dark Universe' Mission (bbc.com) 10

AmiMoJo shares a report from the BBC: Europe's Euclid telescope is ready to begin its quest to understand the greatest mysteries in the Universe. Exquisite imagery from the space observatory shows its capabilities to be exceptional. Over the next six years, Euclid will survey a third of the heavens to get some clues about the nature of so-called dark matter and dark energy. The 1.4 billion euro Euclid telescope went into space in July. Since then, engineers have been fine-tuning it. There were some early worries. Initially, Euclid's optics couldn't lock on to stars to take a steady image. This required new software for the telescope's fine guidance sensor. Engineers also found some stray light was polluting pictures when the observatory was pointed in a certain way. But with these issues all now resolved, Euclid is good to go -- as evidenced by the release of five sample images on Tuesday. "No previous space telescope has been able to combine the breadth, depth and sharpness of vision that Euclid can," notes the BBC. "The astonishing James Webb telescope, for example, has much higher resolution, but it can't cover the amount of sky that Euclid does in one shot."
Encryption

How the US is Preparing For a Post-Quantum World (msn.com) 45

To explore America's "transition to a post-quantum world," the Washington Post interviewed U.S. federal official Nick Polk (who is focused on national security issues including quantum computing and is also a senior advisor to a White House federal chief information security officer): The Washington Post: The U.S. is in the early stages of a major shift focused on bolstering government network defenses, pushing federal agencies to adopt a new encryption standard known as post-quantum cryptography that aims to prevent systems from being vulnerable to advanced decryption techniques enabled by quantum computers in the near future...

Nick Polk: We've been using asymmetric encryption for a very long time now, and it's been ubiquitous since about 2014, when the U.S. government and some of the large tech companies decided that they're going to make it a default on most web browsers... Interestingly enough, regarding the post-quantum cryptographic standards being developed, the only thing that's quantum about them is that it has "quantum" in the name. It's really just a different type of math that's much more difficult for a quantum computer to be able to reverse-engineer. The National Institute of Standards and Technology is looking at different mathematical models to cover all their bases. The interesting thing is that these post-quantum standards are actually being used to protect classical computers that we have now, like laptops...
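To make Polk's point concrete: the asymmetric encryption he describes rests on math problems, like the discrete logarithm, that are hard for classical computers but that a large quantum computer running Shor's algorithm could solve efficiently. Here is a toy Diffie-Hellman key exchange in plain Python illustrating the classical scheme; the prime and generator are illustrative placeholders only, far too small for real use, and real deployments use much larger groups or elliptic curves:

```python
import secrets

# Toy finite-field Diffie-Hellman. Security rests on the discrete
# logarithm problem: recovering a private key from G, P, and a public
# key is infeasible classically, but Shor's algorithm on a quantum
# computer would solve it -- which is what post-quantum schemes avoid.
P = 4294967291  # small prime (2**32 - 5), illustration only
G = 5           # group element used as the base

def keypair():
    priv = secrets.randbelow(P - 2) + 1  # random private exponent
    pub = pow(G, priv, P)                # public value: G^priv mod P
    return priv, pub

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each party raises the other's public value to its own private
# exponent; both arrive at the same shared secret.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
```

The post-quantum replacements NIST is standardizing keep this same key-exchange shape but swap the underlying math (e.g., lattice problems) for something no known quantum algorithm breaks, which is why, as Polk says, they run fine on today's classical computers.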

Given the breadth of the U.S. government and the amount of computing power we use, we really see ourselves and our role as a steward of the tech ecosystem. One of the things that came out of [this week's Inside Quantum Technology conference in New York City] was that we are very quickly moving along with the private sector to migrate to post-quantum cryptography. I think you're gonna see very shortly a lot of very sensitive private sector industries start to migrate or start to advertise that they're going to migrate. Banks are a perfect example. That means meeting with vendors regularly, and testing their algorithms to ensure that we can accurately and effectively implement them on federal systems...

The administration and national security memorandum set 2035 as our deadline as a government to migrate our [national security] systems to post-quantum cryptography. That's supposed to time with the development of operational quantum computers. We need to ensure that we start now, so that we don't end up not meeting the deadline before computers are operational... This is a prioritized migration for the U.S. government. We're going to start with our most critical systems — that includes what we call high-value assets, and high-impact systems. So for example, we're gonna prioritize systems that have personal health information.

That's our biggest emphasis — both when we talk to private industry and when we encourage agencies when they talk to their contractors and vendors — to really think about where your most sensitive data is and then prioritize those systems for migration.

AI

Getty Images Built a 'Socially Responsible' AI Tool That Rewards Artists (arstechnica.com) 26

An anonymous reader quotes a report from Ars Technica: Getty Images CEO Craig Peters told the Verge that he has found a solution to one of AI's biggest copyright problems: creators suing because AI models were trained on their original works without consent or compensation. To prove it's possible for AI makers to respect artists' copyrights, Getty built an AI tool using only licensed data that's designed to reward creators more and more as the tool becomes more popular over time. "I think a world that doesn't reward investment in intellectual property is a pretty sad world," Peters told The Verge. The conversation happened at Vox Media's Code Conference 2023, with Peters explaining why Getty Images -- which manages "the world's largest privately held visual archive" -- has a unique perspective on this divisive issue.

In February, Getty Images sued Stability AI over copyright concerns regarding the AI company's image generator, Stable Diffusion. Getty alleged that Stable Diffusion was trained on 12 million Getty images and even imitated Getty's watermark -- controversially seeming to add a layer of Getty's authenticity to fake AI images. Now, Getty has rolled out its own AI image generator that has been trained in ways that are unlike most of the popular image generators out there. Peters told The Verge that because of Getty's ongoing mission to capture the world's most iconic images, "Generative AI by Getty Images" was intentionally designed to avoid major copyright concerns swirling around AI images -- and compensate Getty creators fairly.

Rather than crawling the web for data to feed its AI model, Getty's tool is trained exclusively on images that Getty owns the rights to, Peters said. The tool was created out of rising demand from Getty Images customers who want access to AI generators that don't carry copyright risks. [...] With that as the goal, Peters told Code Conference attendees that the tool is "entirely commercially safe" and "cannot produce third-party intellectual property" or deepfakes because the AI model would have no references from which to produce such risky content. Getty's AI tool "doesn't know what the Pope is," Peters told The Verge. "It doesn't know what [Balenciaga] is, and it can't produce a merging of the two." Peters also said that if there are any lawsuits over AI images generated by Getty, then Getty will cover any legal costs for customers. "We actually put our indemnification around that so that if there are any issues, which we're confident there won't be, we'll stand behind that," Peters said.
When asked how Getty creators will be paid for AI training data, Peters said that there currently isn't a tool for Getty to assess which artist deserves credit every time an AI image is generated. "Instead, Getty will rely on a fixed model that Peters said determines 'what proportion of the training set does your content represent? And then, how has that content performed in our licensing world over time? It's kind of a proxy for quality and quantity. So, it's kind of a blend of the two,'" reports Ars.

"Importantly, Peters suggested that Getty isn't married to using this rewards system and would adapt its methods for rewarding creators by continually monitoring how customers are using the AI tool."
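Getty has not published its actual formula, but the blended model Peters describes, part training-set proportion, part historical licensing performance, could be sketched roughly like this (all names and the 50/50 blend weight are hypothetical assumptions, not Getty's real parameters):

```python
def creator_share(images_in_training_set: int,
                  total_training_images: int,
                  licensing_revenue: float,
                  total_licensing_revenue: float,
                  blend: float = 0.5) -> float:
    """Hypothetical payout fraction for one creator: a weighted blend of
    how much of the training set their content represents (quantity) and
    how their content has performed in licensing over time (quality)."""
    volume = images_in_training_set / total_training_images
    performance = licensing_revenue / total_licensing_revenue
    return blend * volume + (1 - blend) * performance

# A creator with 10 of 100 training images (10% by volume) who earned
# $50 of $1,000 in past licensing (5% by performance) would receive
# 7.5% of the AI revenue pool under an even blend.
share = creator_share(10, 100, 50.0, 1000.0)
```

The appeal of such a fixed model is that it sidesteps per-image attribution, which, as Peters concedes, no tool can currently compute for each generated output.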
