Software

Self-Pay Gas Station Pumps Break Across NZ As Software Can't Handle Leap Day (arstechnica.com) 92

An anonymous reader quotes a report from Ars Technica: Today is Leap Day, meaning that for the first time in four years, it's February 29. That's normally a quirky, astronomical factoid (or a very special birthday for some). But that unique calendar date broke gas station payment systems across New Zealand for much of the day. As reported by numerous international outlets, self-serve pumps in New Zealand were unable to accept card payments due to a problem with the gas pumps' payment processing software. The New Zealand Herald reported that the outage lasted "more than 10 hours." This effectively shuttered some gas stations, while others had to rely on in-store payments. The outage affected suppliers, including Allied Petroleum, BP, Gull, Waitomo, and Z Energy, and has reportedly been fixed. In-house payment solutions, such as BP fuel cards and the Waitomo app, reportedly still worked during the outage. A representative for Allied Petroleum, when prompted via Facebook to "maybe remember Leap Day in four years' time," responded: "We'll add it to our Outlook reminders :("
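The report doesn't say exactly what failed in the pumps' payment software, but leap-day outages typically trace to date handling that assumes February always ends on the 28th, or to naive year arithmetic on a February 29 date. A minimal Python sketch of the classic pitfall and a defensive fix (the function names here are illustrative, not from the affected software):

```python
from datetime import date

def is_leap_year(year: int) -> bool:
    # Gregorian rule: divisible by 4, except century years not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def add_one_year(d: date) -> date:
    # Naive arithmetic like date(d.year + 1, d.month, d.day) raises
    # ValueError when d is Feb 29 of a leap year; clamp to Feb 28 instead.
    try:
        return d.replace(year=d.year + 1)
    except ValueError:
        return d.replace(year=d.year + 1, day=28)

print(is_leap_year(2024))               # True
print(add_one_year(date(2024, 2, 29)))  # 2025-02-28
```

Python's `datetime` validates dates on construction, so the bad path at least fails loudly; systems that assemble dates from raw integers without such validation (as embedded payment firmware often does) can fail in much less visible ways.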

IT

Amazon Bricks Long-Standing Fire TV Apps With New Update (arstechnica.com) 64

Amazon has issued an update to Fire TV streaming devices and televisions that has broken apps that let users bypass the Fire OS home screen. From a report: The tech giant claims that its latest Fire OS update is about security but has refused to detail any potential security concerns. Users and app developers have reported that numerous apps that used to work with Fire TV devices for years have suddenly stopped working. As first reported by AFTVnews, the update has made apps unable to establish local Android Debug Bridge (ADB) connections and execute ADB commands with Fire TV devices.

The update, Fire OS 7.6.6.9, affects several Fire OS-based TVs, including models from TCL, Toshiba, Hisense, and Amazon's Fire TV Omni QLED Series. Other devices running the update include Amazon's first Fire TV Stick 4K Max, the third-generation Fire TV Stick, as well as the second- and third-generation Fire TV Cubes and the Fire TV Stick Lite. A code excerpt shared with AFTVnews by what the publication described as an "affected app developer," which you can view here, shows a line of code indicating that Fire TVs would not be allowed to make ADB connections with a local device or app. As pointed out by AFTVnews, such apps have been used by Fire TV modders for abilities like clearing installed apps' cache and using a different home screen than the Fire OS default.

AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, with contributors from numerous academic institutions as well as OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital "certificate," and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication being, if implemented incorrectly, that such a kill switch could become a target for cybercriminals to exploit.
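The licensing mechanism quoted above pairs an on-chip co-processor with a regulator-renewed certificate. A rough Python sketch of the idea, using an HMAC as a stand-in for the public-key signature a real scheme would use; all names, the key, and the policy string are invented for illustration:

```python
import hmac
import hashlib

# Stand-in for the regulator's signing key; a real design would use an
# asymmetric scheme so the chip only needs the public verification key.
REGULATOR_KEY = b"demo-signing-key"

def issue_license(chip_id: str, policy: str, expires_at: int) -> dict:
    # Regulator signs (chip_id, policy, expiry) as one message.
    msg = f"{chip_id}|{policy}|{expires_at}".encode()
    sig = hmac.new(REGULATOR_KEY, msg, hashlib.sha256).hexdigest()
    return {"chip_id": chip_id, "policy": policy,
            "expires_at": expires_at, "sig": sig}

def chip_allows_operation(lic: dict, chip_id: str, now: int) -> bool:
    # The on-chip co-processor re-derives the signature and checks it.
    msg = f"{lic['chip_id']}|{lic['policy']}|{lic['expires_at']}".encode()
    expected = hmac.new(REGULATOR_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, lic["sig"]):
        return False                      # forged or tampered license
    if lic["chip_id"] != chip_id:
        return False                      # license bound to a different chip
    return now < lic["expires_at"]        # expired license disables the chip

lic = issue_license("chip-42", "training<=1e26 FLOP", expires_at=1_900_000_000)
print(chip_allows_operation(lic, "chip-42", now=1_800_000_000))  # True
print(chip_allows_operation(lic, "chip-42", now=2_000_000_000))  # False
```

The expiry check is what makes the scheme a dead-man's switch: if the regulator simply stops renewing, the chip degrades or stops on its own, which is also exactly the property the authors warn could be hijacked if the verification path is ever compromised.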

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea being that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
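The multi-party sign-off proposal amounts to a k-of-n approval gate on large training runs. A hedged sketch of that gate, with the FLOP threshold, signer set, and quorum size all invented for illustration:

```python
def run_authorized(compute_flop: float, approvals: set,
                   signers: set, threshold_flop: float = 1e26,
                   k: int = 2) -> bool:
    # Small jobs proceed without sign-off; only runs over the threshold
    # trigger the permissive-action-link-style check.
    if compute_flop < threshold_flop:
        return True
    # Count only approvals from the designated signer set, so a party
    # outside the scheme cannot help reach quorum.
    valid = approvals & signers
    return len(valid) >= k

signers = {"regulator", "cloud-provider", "auditor"}
print(run_authorized(5e25, set(), signers))                     # True
print(run_authorized(2e26, {"regulator"}, signers))             # False
print(run_authorized(2e26, {"regulator", "auditor"}, signers))  # True
```

In practice the "approvals" would be cryptographic signatures rather than names in a set, but the policy logic is this simple: no single party, acting alone, can authorize a run above the threshold.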
United States

No 'GPT' Trademark For OpenAI (techcrunch.com) 22

The U.S. Patent and Trademark Office has denied OpenAI's attempt to trademark "GPT," ruling that the term is "merely descriptive" and therefore ineligible for registration. From a report: [...] The name, according to the USPTO, doesn't meet the standards to register for a trademark and the protections a "TM" after the name affords. (Incidentally, they refused once back in October, and this is a "FINAL" in all caps denial of the application.) As the denial document puts it: "Registration is refused because the applied-for mark merely describes a feature, function, or characteristic of applicant's goods and services."

OpenAI argued that it had popularized the term GPT, which stands in this case for "generative pre-trained transformer," describing the nature of the machine learning model. It's generative because it produces new (ish) material, pre-trained in that it is a large model trained centrally on a proprietary database, and transformer is the name of a particular method of building AIs (introduced by Google researchers in 2017) that allows for much larger models to be trained. But the patent office pointed out that GPT was already in use in numerous other contexts and by other companies in related ones.

Privacy

US Military Notifies 20,000 of Data Breach After Cloud Email Leak (techcrunch.com) 11

An anonymous reader quotes a report from TechCrunch: The U.S. Department of Defense is notifying tens of thousands of individuals that their personal information was exposed in an email data spill last year. According to the breach notification letter sent out to affected individuals on February 1, the Defense Intelligence Agency -- the DOD's military intelligence agency -- said, "numerous email messages were inadvertently exposed to the Internet by a service provider," between February 3 and February 20, 2023. TechCrunch has learned that the breach disclosure letters relate to an unsecured U.S. government cloud email server that was spilling sensitive emails to the open internet. The cloud email server, hosted on Microsoft's cloud for government customers, was accessible from the internet without a password, likely due to a misconfiguration.

The DOD is sending breach notification letters to around 20,600 individuals whose information was affected. "As a matter of practice and operations security, we do not comment on the status of our networks and systems. The affected server was identified and removed from public access on February 20, 2023, and the vendor has resolved the issues that resulted in the exposure. DOD continues to engage with the service provider on improving cyber event prevention and detection. Notification to affected individuals is ongoing," said DOD spokesperson Cdr. Tim Gorman in an email to TechCrunch.

United States

Climate Change Reversing Gains In Air Quality Across the US, Study Finds (axios.com) 121

An anonymous reader quotes a report from Axios: After decades of progress in the U.S. toward cleaner air, climate change-related events will cause a steady deterioration through 2054. New research from the nonprofit First Street Foundation is part of a hyperlocal air quality model showing shifts down to the property level between 2024 and 2054. Its conclusions flow from methods contained in three peer-reviewed studies published by the coauthors. The report itself is not peer reviewed, however. The study finds that climate change is increasing the prevalence of two of the air pollutants most harmful to human health: particulate matter, commonly referred to as PM2.5, and tropospheric ozone.

PM2.5 are tiny particles emitted by vehicles, power plants, wildfires and other sources. They can get lodged in people's lungs and enter the bloodstream, causing or exacerbating numerous health problems. Through the use of air quality observations and the development of the new model, First Street's researchers found that the West will be particularly hard hit by increasing amounts of PM2.5 emissions, as wildfires become more frequent and severe. [...] Future projections estimate a continued increase in PM2.5 levels of nearly 10% over the next 30 years, Jeremy Porter, head of climate implications at First Street, tells Axios in an interview. This would "completely" erase air quality gains made in the last two decades, he said.

Porter says that whereas pollutants from cars and factories could be targeted by regulations over the past few decades (and the EPA is proposing tightening some further), climate-related deterioration in air quality is a much tougher problem to solve. Instead of national regulations, climate action requires global emissions cuts, and even sharp declines in greenhouse gas emissions may not alter trend lines for the next few decades. The population exposed to "dangerous" days on the air quality index is likely to grow to 11.2 million between 2024 and 2054, an increase of about 13%. A 27% gain in the population exposed to "hazardous" (or maroon) days on the AQI is likely between the present climate and 30 years from now, the report finds. Porter said that while 83 million people are exposed to at least one "unhealthy" (red) day, this is likely to grow to over 125 million during the next three decades. "The climate penalty, associated with the rapidly increasing levels of air pollution, is perhaps the clearest signal we've seen regarding the direct impact climate change is having on our environment," Porter told Axios via email.

NASA

NASA Spots Signs of Twin Volcanic Plumes on Jupiter's Moon Io 12

The second of a pair of close flybys adds to the treasure trove of data that scientists have about Jupiter's volcanic moon. From a report: On Saturday, NASA's Juno orbiter got a second close-up with Io, Jupiter's third-largest moon and the most volcanic world of our solar system. The Juno spacecraft, which arrived at the gas giant in 2016, is on an extended mission to explore Jupiter's rings and moons. Its latest flyby, which complemented the mission's first close approach on Dec. 30, yielded even more views of the moon's hellish landscape.

Io's violent expulsions of sulfur and additional compounds give the moon its orange, yellow and blue hues. The process is similar to what happens around the volcanoes of Hawaii or the geysers in Yellowstone National Park, according to Scott Bolton, a physicist at the Southwest Research Institute who leads the Juno mission. "That must be what Io is like -- on steroids," he said. He added that it probably smells like those places, too.

Released on Sunday, the most recent shots from Juno are already ripe for discovery. Dr. Bolton saw on the surface of Io what appears to be a double volcanic plume spewing into space -- something that Juno has never caught before. Other scientists are noticing new lava flows and changes to familiar features spotted in past space missions like the Galileo probe, which made numerous close flybys of Io in the 1990s and 2000s. "That's the beauty of Io," said Jani Radebaugh, a planetary scientist at Brigham Young University who is not part of the Juno mission, but collaborates with the team on Io observations. Unlike our own moon, which remains frozen in time, Dr. Radebaugh said, "Io changes every day, every minute, every second."
The Courts

Self-Proclaimed Bitcoin Inventor's Claim 'a Brazen Lie,' London Court Told (reuters.com) 91

In a London court, lawyers for the Crypto Open Patent Alliance (COPA) argued that Craig Wright's claim to be the inventor of bitcoin is "a brazen lie," accusing him of extensive document forgery to substantiate it. Wright's defense disputes these allegations, maintaining that he has presented definitive proof of his role in creating bitcoin. Reuters reports: Craig Wright says he is the author of a 2008 white paper, the foundational text of bitcoin and other cryptocurrencies, published in the name "Satoshi Nakamoto". He argues this means he owns the copyright in the white paper and has intellectual property rights over the bitcoin blockchain. But the Crypto Open Patent Alliance (COPA) -- whose members include Twitter founder Dorsey's payments firm Block -- is asking London's High Court to rule that Wright is not Satoshi.

The five-week hearing, at which Wright will give evidence from Tuesday, is the culmination of years of speculation about the true identity of Satoshi. Wright first publicly claimed to be Satoshi in 2016 and has since taken legal action against cryptocurrency developers and exchanges. COPA, however, says Wright has never provided any genuine proof, accusing him of repeatedly forging documents to support his claim, which Wright denies. Wright sat in court as COPA's lawyer Jonathan Hough said his claim was "a brazen lie, an elaborate false narrative supported by forgery on an industrial scale." Hough said that "there are elements of Dr Wright's conduct that stray into farce," citing his alleged use of ChatGPT to produce forgeries.

But he added: "Dr Wright's conduct is also deadly serious. On the basis of his dishonest claim to be Satoshi, he has pursued claims he puts at hundreds of billions of dollars, including against numerous private individuals." Wright's lawyer Anthony Grabiner, however, argued in court filings that he has produced "clear evidence demonstrating his authorship of the white paper and creation of bitcoin." Grabiner added that it was "striking" that no one else had publicly claimed to be Satoshi. "If Dr Wright were not Satoshi, the real Satoshi would have been expected to come forward to counter the claim," he said.

Submission + - Rooftop Solar Industry Could Be On the Verge of Collapse (time.com)

SonicSpike writes: Some of the nation’s biggest public solar companies are struggling to stay afloat as questions arise over the viability of the financial products they sold to both consumers and investors to fund their growing operations.

These looming financial problems could topple the residential solar industry at a time when solar is supposed to be saving the world. Though solar represented just 3.4% of the nation’s electricity generation in 2022, studies show that rooftop solar could eventually meet residential electricity demand in many states if deployed widely, freeing American homes from dependency on fossil fuels. To help speed adoption, the Inflation Reduction Act extended a 30% tax credit for residential solar and battery installations.

Still, the residential solar industry is floundering. In late 2023 alone, more than 100 residential solar dealers and installers in the U.S. declared bankruptcy, according to Roth Capital Partners—six times the number in the previous three years combined. Roth expects at least 100 more to fail. The two largest companies in the industry, SunRun and Sunnova, both posted big losses in their most recent quarterly reports, and their shares are down 86% and 81% respectively from their peaks in January 2021. (This isn’t because of an economy-wide trend; the S&P 500 has grown 26% over the same time period.) Sunnova is also under the microscope for having received a $3 billion loan guarantee from the Department of Energy while facing numerous complaints about troubling sales practices that targeted low-income and elderly homeowners. Another solar giant, SunPower, saw shares plunge 41% on Dec. 18 after it said that it may not be able to continue to operate because of debt issues. Sunlight Financial, a big player in the solar finance space, filed for Chapter 11 bankruptcy in October; it also faces a lawsuit alleging that the company made false and misleading statements about its financial well-being.

Transportation

Apple Dials Back Car's Self-Driving Features and Delays Launch To 2028 (bloomberg.com) 67

Apple, reaching a make-or-break point in its decade-old effort to build a car, has pivoted to a less ambitious design with the intent of finally bringing an electric vehicle to market. Bloomberg: After previously envisioning a truly driverless car, the company is now working on an EV with more limited features, according to people with knowledge of the project. Even so, Apple's goal for a release date continues to slip. With the latest changes, the company looks to introduce the car in 2028 at the earliest, roughly two years after a recent projection, said the people, who asked not to be identified because the deliberations are private.

Apple's secretive effort to create a car is one of the most ambitious endeavors in its history, and one of its more tumultuous. Since it began taking shape in 2014, the project -- codenamed Titan and T172 -- has seen several bosses come and go. There have been multiple rounds of layoffs, key changes in strategy and numerous delays. But it remains one of the company's potential next big things -- an entirely new category for the device maker that could help reinvigorate sales growth. Apple's revenue stalled last year as it contended with a maturing smartphone industry and a slowdown in China, its biggest overseas market.

Security

JPMorgan Suffers 45 Billion Cyber Attacks a Day (cnn.com) 36

Speaking of cyber attacks, JPMorgan Chase is targeted by hackers trying to infiltrate its systems 45 billion times a day (Warning: source may be paywalled; alternative source) -- twice the rate at which it was attacked a year earlier -- the bank's head of asset and wealth management has said. FT: Speaking at Davos on Wednesday, Mary Erdoes said the bank spent $15bn on technology every year and employed 62,000 technologists, with many focused solely on combating the rise in cyber crime. "We have more engineers than Google or Amazon. Why? Because we have to," she said. "The fraudsters get smarter, savvier, quicker, more devious, more mischievous."

Western lenders have suffered a surge in cyber attacks in the past two years, which has been partly blamed on Russian hackers acting in response to sanctions placed on the country and its banks following its full-scale invasion of Ukraine. But the use of artificial intelligence by cyber criminals has also increased the number of incidents and level of sophistication of attacks.

UPDATE 1/18/24: In a statement provided to Slashdot, a JPMorgan spokesperson said: "The 45 billion per day figure measures numerous activities, not just hacking attempts. As updated by Bloomberg, 'Examples of activity can include user log ins like employee virtual desktops, and scanning activity, which are often highly automated and not targeted.'" Bloomberg and FT have updated their articles accordingly.
Security

Ukrainian Hacker Group Takes Down Moscow ISP In Revenge For Kyivstar Cyber Attack (dailysecurityreview.com) 85

Longtime Slashdot reader Plugh shares a report from Daily Security Review: A Ukrainian hacker group [...] carried out a destructive attack on the servers of a Moscow-based internet provider in revenge for the Kyivstar cyberattack. The group, known as Blackjack, successfully hacked into the systems of M9com, causing extensive damage by deleting terabytes of data. Numerous residents in Moscow experienced disruptions in their internet and television services. Additionally, the Blackjack hacker group has issued a warning of a potentially larger attack in the near future.

Based on the information provided by Ukrinform, the cyber attack on M9com deleted approximately 20 terabytes of data. The attack targeted various critical services of the company, including its official website, mail server, and cyber protection services. Furthermore, the hackers managed to access and download over 10 gigabytes of data from M9com's mail server and client databases. To make matters worse, they made this stolen information publicly accessible via the Tor browser. [...]

Based on the nature of the attack on M9com, it appears that when the hackers hit Moscow, they were able to gain access to the back-end operations of the company. This allowed them to effectively delete data from the servers, similar to what occurred in the Kyivstar incident. It is worth noting that this type of attack, which involves directly targeting and compromising the servers, is less common compared to the more frequently observed distributed denial-of-service (DDoS) attacks. DDoS attacks overwhelm a system by inundating it with automated requests, causing the service to become inaccessible.

AI

OpenAI Quietly Deletes Ban On Using ChatGPT For 'Military and Warfare' 52

An anonymous reader quotes a report from The Intercept: OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used. Up until January 10, OpenAI's "usage policies" page included a ban on "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare." That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to "use our service to harm yourself or others" and gives "develop or use weapons" as an example, but the blanket ban on "military and warfare" use has vanished.

The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document "clearer" and "more readable," and which includes many other substantial language and formatting changes. "We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs," OpenAI spokesperson Niko Felix said in an email to The Intercept. "A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples." Felix declined to say whether the vaguer "harm" ban encompassed all military use, writing, "Any use of our technology, including by the military, to '[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,' is disallowed."
"OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications," said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper (PDF) she co-authored with OpenAI researchers that specifically flagged the risk of military use. "There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law," she said. "Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties."

"I could imagine that the shift away from 'military and warfare' to 'weapons' leaves open a space for OpenAI to support operational infrastructures as long as the application doesn't directly involve weapons development narrowly defined," said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. "Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system -- including command and control infrastructures -- of which it's part." Suchman, a scholar of artificial intelligence since the 1970s and member of the International Committee for Robot Arms Control, added, "It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons."

Education

After Reports of His Own Wife's Plagiarism, Bill Ackman Threatens Plagiarism Reviews For All MIT Faculty (businessinsider.com) 293

This week Harvard's president Claudine Gay resigned "after conservative activists revealed she had plagiarized," writes Business Insider, adding that hedge fund manager/prominent Harvard donor Bill Ackman "helped lead the charge."

Then Business Insider "analyzed Ackman's wife's doctoral dissertation and found numerous instances of plagiarism." In most cases Ackman's wife put the author's name and publication date immediately after the material she used, but did not put quotation marks around it. And according to Business Insider, "At least 15 passages from her 2010 MIT doctoral dissertation were lifted without any citation from Wikipedia entries." Her husband, Ackman, has taken a hardline stance on plagiarism. On Wednesday, responding to news that Gay is set to remain a part of Harvard's faculty after she resigned as president, he wrote on X that Gay should be fired completely due to "serious plagiarism issues... Students are forced to withdraw for much less," Ackman continued. "Rewarding her with a highly paid faculty position sets a very bad precedent for academic integrity at Harvard."
Ackman's wife was a tenured MIT professor from 2017 to 2021, according to the article. "It is unfortunate that my actions to address problems in higher education have led to these attacks on my family," Ackman posted Friday night on Twitter.

Then Ackman threatened "a review of the work of all current MIT faculty members. We will begin with a review of the work of all current MIT faculty members, President Kornbluth, other officers of the Corporation, and its board members for plagiarism."

Business Insider notes that Ackman "has been vocal about wanting to see MIT's president, Sally Kornbluth, fired since Kornbluth testified on December 5 in front of a congressional panel examining how university presidents handled student protests against Israel's war in Gaza. Kornbluth said in her opening statement that she didn't support 'speech codes' that would restrict what students say during protests."
Games

Tekken 8's 'Colorblind' Mode Is Causing Migraines, Vertigo, and Debate (arstechnica.com) 19

An anonymous reader quotes a report from Ars Technica: Modern fighting games have come quite a long way from their origins when it comes to providing accessibility options. Street Fighter 6 has audio cues that can convey distance, height, health, and other crucial data to visually impaired players. King of Fighters 15 allows for setting the contrast levels between player characters and background. Competitors like BrolyLegs and numerous hardware hackers have taken the seemingly inhospitable genre even further. Tekken 8, due later this month, seems to aim even higher, offering a number of color vision options in its settings. This includes an unofficially monikered "colorblind mode," with black-and-white and detail-diminished backgrounds and characters' flattened shapes filled in with either horizontal or vertical striped lines. But what started out as excitement in the fighting game and accessibility communities about this offering has shifted into warnings about the potential for migraines, vertigo, or even seizures.

You can see the mode in action in the Windows demo or in a YouTube video shared by Gatterall -- which, of course, you should not view if you believe yourself susceptible to issues with strobing images. Gatterall's enthusiasm for Tekken 8's take on colorblind accessibility ("Literally no game has done this") drew comment from Katsuhiro Harada, head of the Tekken games for developer and publisher Bandai Namco, on X (formerly Twitter). Harada stated that he had developed and tested "an accessibility version" of Tekken 7, which was never shipped or sold, and said those "studies" made it into Tekken 8.

Not everybody in game accessibility circles was excited to see the new offerings, especially when it was shared directly with them by excited followers. Morgan Baker, game-accessibility lead at Electronic Arts, asked followers to "Please stop tagging me in the Tekken 8 'colorblind' stripe filters." The scenes had "already induced an aura migraine," Baker wrote, and she could not "afford to get another one right now." Accessibility consultant Ian Hamilton reposted a number of people citing migraines, nausea, or seizure concerns while also decrying the general nature of colorblind "filters" as an engineering-based approach to a broader design challenge. He added in the thread that shipping a game that contained a potentially seizure-inducing mode could result in people inadvertently discovering their susceptibility, similar to an infamous 1997 episode of the Pokemon TV series. Baker and Hamilton also noted problems with such videos automatically playing on sites like X/Twitter.
"Patterns of lines moving on a screen creates a contiguous area of high-frequency flashing, like an invisible strobe," explained James Berg, accessibility project manager at Xbox Game Studios. "Human meat-motors aren't big fans of that." People typically start to notice "flicker fusion frequency" at around 40 frames per second, notes Ars.

Tekken's Harada responded by saying a "very few" number of people misunderstood what his team was trying to do with this mode. There are multiple options, not just one colorblind mode, Harada wrote, along with brightness adjustments for effects and other elements.

"These color vision options are a rare part of the fighting game genre, but they are still being researched and we intend to expand on them in the future," Harada wrote. Developers "have been working with several research institutes and communities to develop this option," even before the unsold "accessibility version of Tekken 7," added Harada.
