United Kingdom

UK Government Urges Regulators To Come Up With Rules For AI (cnbc.com)

The U.K. government on Wednesday published recommendations for the artificial intelligence industry, outlining an all-encompassing approach for regulating the technology at a time when it has reached frenzied levels of hype. From a report: In a white paper to be put forward to Parliament, the Department for Science, Innovation and Technology (DSIT) will outline five principles it wants companies to follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Rather than establishing new regulations, the government is calling on regulators to apply existing regulations and inform companies about their obligations under the white paper.

It has tasked the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority with coming up with "tailored, context-specific approaches that suit the way AI is actually being used in their sectors. Over the next twelve months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors," the government said. "When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently."

Google

Google's Claims of Super-Human AI Chip Layout Back Under the Microscope (theregister.com)

A Google-led research paper published in Nature, claiming machine-learning software can design better chips faster than humans, has been called into question after a new study disputed its results. The Register reports: In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating optimized microchip floorplans. These plans determine the arrangement of blocks of electronic circuitry within the chip: where things such as the CPU and GPU cores, and memory and peripheral controllers, actually sit on the physical silicon die. Google said it was using this AI software to design its homegrown TPU chips that accelerate AI workloads: it was employing machine learning to make its other machine-learning systems run faster. The research got the attention of the electronic design automation community, which was already moving toward incorporating machine-learning algorithms into its software suites. Now Google's claims for its better-than-human model have been challenged by a team at the University of California, San Diego (UCSD).
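Floorplanning here is an optimization problem: candidate placements are scored with proxy metrics such as estimated wirelength, congestion, and density. As a rough illustration (not Google's code, and with a made-up netlist), a minimal sketch of the common half-perimeter wirelength (HPWL) proxy:

```python
# Illustrative sketch: score a chip floorplan by half-perimeter wirelength,
# i.e. for each net, the half-perimeter of the bounding box of its pins.
# Block names and coordinates below are invented for the example.

def hpwl(nets, positions):
    """Sum of half-perimeter wirelengths over all nets.

    nets: list of lists of block names connected by each net
    positions: dict mapping block name -> (x, y) placement
    """
    total = 0.0
    for net in nets:
        xs = [positions[b][0] for b in net]
        ys = [positions[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two candidate placements of the same three blocks:
nets = [["cpu", "l2_cache"], ["cpu", "mem_ctrl"]]
spread = {"cpu": (0, 0), "l2_cache": (10, 0), "mem_ctrl": (0, 10)}
compact = {"cpu": (0, 0), "l2_cache": (1, 0), "mem_ctrl": (0, 1)}

print(hpwl(nets, spread))   # 20.0
print(hpwl(nets, compact))  # 2.0
```

An optimizer, whether reinforcement learning or classical simulated annealing, searches for placements that minimize metrics like this one, subject to blocks not overlapping.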

Led by Andrew Kahng, a professor of computer science and engineering, that group spent months reverse engineering the floorplanning pipeline Google described in Nature. The web giant withheld some details of its model's inner workings, citing commercial sensitivity, so the UCSD team had to figure out how to build its own complete version to verify the Googlers' findings. Prof Kahng, we note, served as a reviewer for Nature during the peer-review process of Google's paper. The university academics ultimately found their own recreation of the original Google code, referred to as circuit training (CT) in their study, actually performed worse than humans using traditional industry methods and tools.

What could have caused this discrepancy? One might say the recreation was incomplete, though there may be another explanation. Over time, the UCSD team learned Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip's logic gates that the web giant's reinforcement learning system then optimized. The Google paper did mention that industry-standard software tools and manual tweaking were used after the model had generated a layout, primarily to ensure the processor would work as intended and finalize it for fabrication. The Googlers argued this was a necessary step whether the floorplan was created by a machine-learning algorithm or by humans with standard tools, and thus its model deserved credit for the optimized end product. However, the UCSD team said there was no mention in the Nature paper of EDA tools being used beforehand to prepare a layout for the model to iterate over. It's argued these Synopsys tools may have given the model a decent enough head start that the AI system's true capabilities should be called into question.

The lead authors of Google's paper, Azalia Mirhoseini and Anna Goldie, said the UCSD team's work isn't an accurate implementation of their method. They pointed out (PDF) that Prof Kahng's group obtained worse results because it didn't pre-train the model on any data at all. Prof Kahng's team also did not train its system using the same amount of computing power as Google, a step the pair suggested may not have been carried out properly, crippling the model's performance. Mirhoseini and Goldie also said the pre-processing step using EDA applications that was not explicitly described in their Nature paper wasn't important enough to mention. The UCSD group, however, said it didn't pre-train its model because it didn't have access to the Google proprietary data. It said its software had nonetheless been verified by two other engineers at the internet giant, who were also listed as co-authors of the Nature paper.
Separately, a fired Google AI researcher claims the internet goliath's research paper was "done in context of a large potential Cloud deal" worth $120 million at the time.
Software

VW Will Support Software Products For Up To 15 Years (arstechnica.com)

An anonymous reader quotes a report from Ars Technica, written by Jonathan M. Gitlin: A perennial question accompanying the spread of Android Automotive has been that of support. A car has a much longer expected service life than a smartphone, especially an Android smartphone, and with infotainment systems now so integral to a car's operations, how long can we reasonably expect those systems to be supported? I got the chance to put this question to Dirk Hilgenberg, CEO of CARIAD, Volkswagen Group's software division: Given the much longer service life of a car compared to a smartphone, how does VW plan to keep those cars patched and safe 10 or 15 years from now?

"We actually have a contract with the brands, which took a while to negotiate, but lifetime support was utterly important," Hilgenberg told me. The follow-up was obvious: How long is "lifetime"? "Fifteen years after service, and an extra option for brands who would like to have it even longer; you know, we have to guarantee updatability on all legal aspects," he said. "So that's why we are, as you can imagine, very cautious with branches of releases because every branch we need to maintain over this long time. So when you have end of operation and EOP [end of production] and it's 15 years longer, we still have to maintain that; plus, some brands actually said 'because my vehicle is a unicorn, it's something that people want even more, they only occasionally drive it but they want to be safe,'" Hilgenberg told me.

(The unicorn reference should make sense in the context of VW Group owning Bugatti, Lamborghini, and Porsche, whose cars are often collected and can be on the road for many decades.) In those cases, CARIAD would provide continued support, Hilgenberg said. "Especially as cybersecurity, all the legal things are concerned, you see that already. Now we do upgrades and releases, whether it's in China, whether it's in the US, whether it's in Europe, we take very cautious steps. Security and safety has, in the Volkswagen group, you know, the utmost importance, and we see it actually as an opportunity to differentiate," he said.
In an update to the article, Ars said CARIAD got in touch with them to add some clarifications. "As part of its development services to Volkswagen's automotive brands, CARIAD provides operational services, updates, upgrades and new releases as well as bug fixes and patches relating to its hardware- and software-products. We usually support our hard- and software releases for extended periods of time. In some cases this can be up to 15 years after the end of production ('EOP') for hardware and 10 years after EOP for software releases. Moreover, there are legally mandatory periods we comply with, e.g. cybersecurity as well as safety updates and patches are provided for as long as a function is available. In addition, there may be individual agreements with brands for longer support periods to specifically satisfy their customers' needs," wrote a CARIAD spokesperson.

Ars notes: "there's no guarantee that OEMs can make the business model work for this long-term support."
AI

Grammarly Expands Beyond Proofreading With AI-Powered Writing (engadget.com)

Grammarly announced today that it's (unsurprisingly) diving into the generative AI fray. From a report: GrammarlyGo is an upcoming set of auto-composition features to help the AI proofreading software keep up with the many companies adding the ChatGPT API (or different generative AI backends) to their products. GrammarlyGo can use context like voice, style, purpose and where you're writing to determine its approach. So, for example, it can spit out email replies, shorten passages, rewrite them for tone and clarity, brainstorm or choose from one-click prompts -- all while adhering to your company's voice or other provided context. In addition, since Grammarly's desktop service can pop up in any text field on your computer, its generative writing could be slightly more convenient than competitors (like Notion or Gmail's Smart Compose) that require you to visit an app or website. The company says GrammarlyGo will be enabled by default for individuals, and you can toggle it in settings. Grammarly justifies the feature's existence by saying most people's writing can be better and faster.
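As a rough sketch of what using "context like voice, style, purpose" might mean mechanically, here is a hypothetical, backend-agnostic prompt builder; the function and field names are invented for illustration and are not Grammarly's:

```python
# Hypothetical sketch of assembling a context-aware rewrite request for a
# generative backend. Names and prompt wording are ours, not Grammarly's.

def build_rewrite_prompt(text, tone="friendly", brand_voice=None, task="shorten"):
    """Compose a plain-text instruction prompt from the user's context."""
    parts = [f"Task: {task} the following passage."]
    parts.append(f"Target tone: {tone}.")
    if brand_voice:
        parts.append(f"Match this brand voice: {brand_voice}.")
    parts.append("Passage:\n" + text)
    return "\n".join(parts)

prompt = build_rewrite_prompt(
    "We regret to inform you that your order has been delayed.",
    tone="empathetic",
    brand_voice="warm, plain-spoken",
)
print(prompt.splitlines()[0])  # Task: shorten the following passage.
```

The resulting string would then be sent to whatever generative model backs the feature; the interesting product work is in gathering the context, not the API call itself.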
China

The Netherlands To Block Export of Advanced Chips Printers To China (politico.eu)

An anonymous reader quotes a report from Politico: The Dutch government confirmed for the first time Wednesday it will impose new export controls on microchips manufacturing equipment, bowing to U.S. pressure to block the sale of some of its prized chips printing machines to China. The U.S. and the Netherlands reached an agreement to introduce new export restrictions on advanced chip technology to China at the end of January, but until now, the Dutch government hadn't commented publicly on it. The deal, which also included Japan, involves the only three countries that are home to manufacturers of advanced machines to print microchips. It is a U.S.-led initiative to choke off the supply of cutting-edge chips to China.

"Given the technological developments and geopolitical context, the government has concluded that it is necessary for the (inter)national security to expand the existing export controls on specific manufacturing equipment for semiconductors," Foreign Trade Minister Liesje Schreinemacher wrote in a letter to Dutch lawmakers published Wednesday evening. The Dutch government wants to prevent Dutch technology from being used in military systems or weapons of mass destruction, Schreinemacher wrote — echoing the U.S. reasoning when it imposed its own export controls in October. The Netherlands also wants to avoid losing its pole position in producing cutting-edge chip manufacturing tools: Schreinemacher said the government wants to uphold "Dutch technological leadership." While China is not explicitly named in Schreinemacher's letter, the new policy is targeted at Chinese efforts to overtake the U.S. and others like Taiwan, South Korea, Japan and leading European countries in the global microchips supply chain.

The new export restrictions deal a blow to ASML, the global leader in producing advanced microchip printing machines, based in Veldhoven in the southern Netherlands. In the letter, Schreinemacher said the new export control measures include the most advanced deep ultraviolet (DUV) machines, which are part of ASML's advanced chip printer portfolio. The Dutch firm, the highest-valued tech company in Europe, has since 2019 been unable to obtain export licenses to sell China its most advanced machines, which use extreme ultraviolet (EUV) light. ASML in a statement confirmed it will now "need to apply for export licenses for shipment of the most advanced immersion DUV systems," but it noted it has not yet received more details about what "most advanced" means.

AI

DuckDuckGo Dabbles With AI Search (techcrunch.com)

Privacy-focused search engine DuckDuckGo has followed Microsoft and Google to become the latest veteran search player to dip its beak in the generative AI trend -- announcing the launch today in beta of an AI-powered summarization feature, called DuckAssist, which can directly answer straightforward search queries for users. From a report: DDG says it's drawing on natural language technology from ChatGPT-maker OpenAI and Anthropic, an AI startup founded by ex-OpenAI employees, to power the natural language summarization capability, combined with its own active indexing of Wikipedia and other reference sites it's using to source answers (the Encyclopedia Britannica is another source it mentions).

Founder Gabe Weinberg tells TechCrunch the sources it's using for DuckAssist are -- currently -- "99%+ Wikipedia." But he notes the company is "experimenting with how incorporating other sources could work, and when to use them" -- which suggests it may seek to adapt sourcing to the context of the query (so, for example, a topical news-related search query might be better responded to by DuckAssist sourcing information from trusted news media). So it remains to be seen how DDG will evolve the feature -- and whether it might, for example, seek to ink partnerships with reference sites. At launch, DuckAssist is only available via DDG's apps and browser extensions -- but the company says it plans to roll it out to all search users in the coming weeks. The beta feature is free to use and does not require the user to be logged in to access it. It's only available in English for now. Per Weinberg, the AI models DDG is ("currently") using to power the natural language summarization are the Davinci model from OpenAI and the Claude model from Anthropic. He also notes DDG is "experimenting" with the new Turbo model OpenAI recently announced.

Government

America's FDA Wants to Update Its Definition of 'Healthy'. The Food Industry Doesn't (msn.com)

America's public health-protecting Food and Drug Administration wants to update its definition of "healthy" for purposes of product labeling.

But the Washington Post reports dozens of food manufacturers are now "claiming the new standards are draconian and will result in most current food products not making the cut, or in unappealing product reformulations." Under the proposal, manufacturers can label their products "healthy" only if they contain a meaningful amount of food from at least one of the main food groups, such as fruits, vegetables or dairy, as recommended by federal dietary guidelines. They must also adhere to specific limits for certain nutrients, such as saturated fat, sodium and added sugars.

It's the added sugar limit that has been the sticking point for many food executives. The FDA's previous rules put limits around saturated fat and sodium but did not include limits on added sugars.

The Consumer Brands Association, which represents 1,700 major food companies from General Mills to Pepsi, wrote a 54-page comment to the FDA in which it stated the proposed rule was overly restrictive and would result in a framework that would automatically disqualify a vast majority of packaged foods.... The proposed rule, if finalized, they said, would violate the First Amendment rights of food companies and could harm both consumers and manufacturers. The Sugar Association has an issue with the added sugar limit; Campbell Soup is more focused on the sodium limit....

Virtually every part of the food industry appeared disgruntled (here are the 402 comments about the proposed rule). Baby food company Happy Family Organics said the proposed rule probably would lead to an unintended exclusion of some nutrient-rich products. And the American Cheese Society took a more philosophical approach, saying the word "healthy" isn't that helpful on a label and should be used in a complete diet or lifestyle context rather than in a nutrient or single food-focused context.

The FDA estimates that at most 0.4% of people who try to follow its guidelines would be swayed by the word "healthy" in their long-term food-purchasing decisions, according to the article. It's a position supported by a research paper in the Journal of Public Policy and Marketing analyzing hundreds of international studies on the effectiveness of front-of-package nutrition labeling.

"The authors found that the most effective means of conveying nutrition information is a graphic warning label, as has been adopted in Chile, Peru, Uruguay, Mexico and Israel. In Chile, black warning labels shaped like stop signs are required for packaged food and drinks that exceed, per 100 grams: 275 calories, 400 milligrams of sodium, 10 grams of sugar or four grams of saturated fats."
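The Chilean thresholds quoted above translate directly into a simple rule check; a minimal sketch (the sample product values are invented for illustration):

```python
# The Chilean per-100-gram thresholds, exactly as listed in the article:
# a product exceeding any limit must carry that warning label.
LIMITS = {"calories": 275, "sodium_mg": 400, "sugar_g": 10, "sat_fat_g": 4}

def warning_labels(per_100g):
    """Return which 'stop sign' warnings a product would carry."""
    return [k for k, limit in LIMITS.items() if per_100g.get(k, 0) > limit]

# A hypothetical sugary cereal (nutrient values invented for the example):
cereal = {"calories": 380, "sodium_mg": 150, "sugar_g": 25, "sat_fat_g": 1}
print(warning_labels(cereal))  # ['calories', 'sugar_g']
```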
Businesses

Binance Can't Keep Its Story Straight on Misplaced $1.8B USDC (coindesk.com)

A new and detailed investigation by Forbes has raised significant questions about the management and custody of customer assets and stablecoin collateral by Binance. From a report: There are many possible explanations for the nature and intent of certain on-chain transactions highlighted by Forbes, and they could be entirely innocuous. But Binance's so far confused and sometimes contradictory responses to the findings do not inspire confidence, particularly in a post-FTX era of rightfully widespread suspicion of centralized custodians with off-chain balance sheets.

Forbes reported this week that on a single day, Aug. 17, 2022, $1.78 billion worth of collateral moved out of Binance wallets intended to back stablecoins, particularly b-USDC, a wrapped version of Circle's USDC. According to Forbes' on-chain analysis, the facts of which Binance has not disputed, $1.2 billion of this was sent to trading firm Cumberland DRW, with other amounts going to now-collapsed hedge fund Alameda Research, Tron founder Justin Sun and crypto infrastructure and services firm Amber Group. Crucially, according to Forbes, this outflow was not accompanied by a corresponding reduction in the circulating supply of b-USDC tokens. Binance's various attempts to offer an innocent explanation of Forbes' findings have not provided a unified and consistent -- much less entirely compelling -- justification for what could, in the worst case, indicate the misuse of customer funds. Before publishing a more focused and detailed account Wednesday morning, Binance officials offered a number of differing, even contradictory explanations.

Equally galling, Binance's responses have continued the petulant and defensive tone of many of its previous dismissals of close investigative attention. Forbes' investigation was motivated by mounting evidence of past problems with Binance's asset management practices. Binance has admitted to Bloomberg that, for certain periods of time, it failed to maintain clear 1:1 backing of its wrapped b-assets in a segregated and transparent manner. In this context, the exchange's attempt to paint an act of journalistic analysis as "conspiracy theories," while suggesting the investigation was motivated by nothing but "collecting a lot of views and clicks," is beneath the dignity of an organization hoping to maintain a leadership position in a high-risk, fraud-riddled industry. Binance CEO Changpeng Zhao even retreated to the oldest refuge of scrutinized crypto organizations, declaring the Forbes reporting nothing more than "FUD," or fear, uncertainty and doubt. But this lazy, knee-jerk dismissal, now as ever, ignores a simple reality: Unclear or incomplete answers from the people most obligated to have them are far more serious sources of confusion and anxiety than accepted facts and reasonable questions surfaced by journalists.

Bug

Security Researchers Warn of a 'New Class' of Apple Bugs (techcrunch.com)

Since the earliest versions of the iPhone, "The ability to dynamically execute code was nearly completely removed," write security researchers at Trellix, "creating a powerful barrier for exploits which would need to find a way around these mitigations to run a malicious program. As macOS has continually adopted more features of iOS it has also come to enforce code signing more strictly.

"The Trellix Advanced Research Center vulnerability team has discovered a large new class of bugs that allow bypassing code signing to execute arbitrary code in the context of several platform applications, leading to escalation of privileges and sandbox escape on both macOS and iOS.... The vulnerabilities range from medium to high severity with CVSS scores between 5.1 and 7.1. These issues could be used by malicious applications and exploits to gain access to sensitive information such as a user's messages, location data, call history, and photos."

Computer Weekly explains that the vulnerability bypasses the strengthened code-signing mitigations Apple put in place around NSPredicate (a Foundation framework class for filtering and querying data) after the infamous ForcedEntry exploit used by Israeli spyware manufacturer NSO Group: So far, the team has found multiple vulnerabilities within the new class of bugs, the first and most significant of which exists in a process designed to catalogue data about behaviour on Apple devices. If an attacker has achieved code execution capability in a process with the right entitlements, they could then use NSPredicate to execute code with the process's full privilege, gaining access to the victim's data.
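Why does a query mechanism enable code execution at all? A loose analogy in Python (not Apple's API): when a "predicate" string is handed to a general-purpose evaluator, filtering data and running code become the same operation:

```python
# Analogy only (NOT Apple's NSPredicate): evaluating attacker-supplied
# predicate strings with a general-purpose evaluator turns a "query"
# channel into an arbitrary-code channel.

records = [{"name": "alice", "age": 30}, {"name": "bob", "age": 17}]

def filter_records(records, predicate):
    # Dangerous: eval() accepts any expression, not just comparisons.
    return [r for r in records if eval(predicate, {}, {"r": r})]

# Intended use: a harmless comparison.
print(filter_records(records, "r['age'] >= 18"))

# Abuse: the same channel can call arbitrary functions.
evil = "__import__('os').getpid() > 0"  # stands in for something nastier
print(filter_records(records, evil))  # matches every record
```

Apple's mitigations after ForcedEntry amounted to restricting which operations predicate strings may perform; the Trellix bugs are ways around those restrictions.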

The researchers also found other issues that could enable attackers with appropriate privileges to install arbitrary applications on a victim's device, access and read sensitive information, and even wipe a victim's device. Ultimately, all of the new bugs carry a similar level of impact to ForcedEntry.

Senior vulnerability researcher Austin Emmitt said the vulnerabilities constituted a "significant breach" of the macOS and iOS security models, which rely on individual applications having fine-grain access to the subset of resources needed, and querying services with more privileges to get anything else.

"The key thing here is the vulnerabilities break Apple's security model at a fundamental level," Trellix's director of vulnerability research told Wired — though there's some additional context: Apple has fixed the bugs the company found, and there is no evidence they were exploited.... Crucially, any attacker trying to exploit these bugs would require an initial foothold into someone's device. They would need to have found a way in before being able to abuse the NSPredicate system. (The existence of a vulnerability doesn't mean that it has been exploited.)

Apple patched the NSPredicate vulnerabilities Trellix found in its macOS 13.2 and iOS 16.3 software updates, which were released in January. Apple has also issued CVEs for the vulnerabilities that were discovered: CVE-2023-23530 and CVE-2023-23531. Since Apple addressed these vulnerabilities, it has also released newer versions of macOS and iOS. These included security fixes for a bug that was being exploited on people's devices.

TechCrunch explores its severity: While Trellix has seen no evidence to suggest that these vulnerabilities have been actively exploited, the cybersecurity company tells TechCrunch that its research shows that iOS and macOS are "not inherently more secure" than other operating systems....

Will Strafach, a security researcher and founder of the Guardian firewall app, described the vulnerabilities as "pretty clever," but warned that there is little the average user can do about these threats, "besides staying vigilant about installing security updates." And iOS and macOS security researcher Wojciech Reguła told TechCrunch that while the vulnerabilities could be significant, in the absence of exploits, more details are needed to determine how big this attack surface is.

Jamf's Michael Covington said that Apple's code-signing measures were "never intended to be a silver bullet or a lone solution" for protecting device data. "The vulnerabilities, though noteworthy, show how layered defenses are so critical to maintaining good security posture," Covington said.

AI

Microsoft Tests ChatGPT's Ability to Control Robots (microsoft.com)

"We extended the capabilities of ChatGPT to robotics," brags a blog post from Microsoft's Autonomous Systems and Robotics research group, "and controlled multiple platforms such as robot arms, drones, and home assistant robots intuitively with language."

They're exploring how to use ChatGPT to "make natural human-robot interactions possible... to see if ChatGPT can think beyond text, and reason about the physical world to help with robotics tasks." We want to help people interact with robots more easily, without needing to learn complex programming languages or details about robotic systems. The key challenge here is teaching ChatGPT how to solve problems considering the laws of physics, the context of the operating environment, and how the robot's physical actions can change the state of the world.

It turns out that ChatGPT can do a lot by itself, but it still needs some help. Our technical paper describes a series of design principles that can be used to guide language models towards solving robotics tasks. These include, but are not limited to, special prompting structures, high-level APIs, and human feedback via text.... In our work we show multiple examples of ChatGPT solving robotics puzzles, along with complex robot deployments in the manipulation, aerial, and navigation domains....

We gave ChatGPT access to functions that control a real drone, and it proved to be an extremely intuitive language-based interface between the non-technical user and the robot. ChatGPT asked clarification questions when the user's instructions were ambiguous, and wrote complex code structures for the drone such as a zig-zag pattern to visually inspect shelves. It even figured out how to take a selfie! We also used ChatGPT in a simulated industrial inspection scenario with the Microsoft AirSim simulator. The model was able to effectively parse the user's high-level intent and geometrical cues to control the drone accurately....
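The zig-zag shelf inspection mentioned above is exactly the kind of small routine a language model can emit against a high-level API; a hand-written sketch with a hypothetical waypoint format (not Microsoft's code):

```python
# Sketch of a zig-zag (lawnmower) sweep of waypoints for inspecting a
# wall of shelving. The (x, y, altitude) waypoint format is hypothetical.

def zigzag_waypoints(width, height, rows, altitude=1.5):
    """Sweep a width x height wall in `rows` horizontal passes,
    reversing direction on each row like mowing a lawn."""
    waypoints = []
    for i in range(rows):
        y = height * i / max(rows - 1, 1)       # evenly spaced rows
        xs = (0.0, width) if i % 2 == 0 else (width, 0.0)  # alternate direction
        for x in xs:
            waypoints.append((x, y, altitude))
    return waypoints

wps = zigzag_waypoints(width=4.0, height=2.0, rows=3)
print(wps)
# [(0.0, 0.0, 1.5), (4.0, 0.0, 1.5), (4.0, 1.0, 1.5),
#  (0.0, 1.0, 1.5), (0.0, 2.0, 1.5), (4.0, 2.0, 1.5)]
```

In the setup Microsoft describes, the model is told the drone API in the prompt and then writes routines like this itself, with a human reviewing the code before it runs.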

We are excited to release these technologies with the aim of bringing robotics to the reach of a wider audience. We believe that language-based robotics control will be fundamental to bring robotics out of science labs, and into the hands of everyday users.

That said, we do emphasize that the outputs from ChatGPT are not meant to be deployed directly on robots without careful analysis. We encourage users to harness the power of simulations in order to evaluate these algorithms before potential real life deployments, and to always take the necessary safety precautions. Our work represents only a small fraction of what is possible within the intersection of large language models operating in the robotics space, and we hope to inspire much of the work to come.

ZDNet points out that Google Research and Alphabet's Everyday Robots "have also worked on similar robotics challenges using a large language model called PaLM, or Pathways Language Model, which helped a robot to process open-ended prompts and respond in reasonable ways."
Transportation

Electric Vehicles Can Now Power Your Home for Three Days (msn.com)

There may soon come a time when your car "also serves as the hub of your personal power plant," writes the Washington Post's climate columnist. And then they tell the story of a New Mexico man named Nate Graham who connected a power strip and a $150 inverter to his Chevy Bolt EV during a power outage: The Bolt's battery powered his refrigerator, lights and other crucial devices with ease. As the rest of his neighborhood outside Albuquerque languished in darkness, Graham's family life continued virtually unchanged. "It was a complete game changer making power outages a nonissue," says Graham, 35, a manager at a software company. "It lasted a day-and-a-half, but it could have gone much longer." Today, Graham primarily powers his home appliances with rooftop solar panels and, when the power goes out, his Chevy Bolt. He has cut his monthly energy bill from about $220 to $8 per month. "I'm not a rich person, but it was relatively easy," says Graham. "You wind up in a magical position with no [natural] gas, no oil and no gasoline bill."
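The arithmetic behind Graham's experience is easy to check; a back-of-the-envelope sketch, with assumed figures (ours, not the Post's) of roughly 65 kWh of usable Bolt capacity and a 700 W average draw for a fridge, lights and other essentials:

```python
# Rough runtime estimate for running critical loads off an EV battery.
# Both figures below are assumptions for illustration.
battery_kwh = 65.0   # assumed usable Chevy Bolt pack capacity
avg_load_w = 700.0   # assumed average draw: fridge, lights, essentials

hours = battery_kwh * 1000 / avg_load_w
days = hours / 24
print(round(days, 1))  # ~3.9 days
```

Under those assumptions the pack comfortably clears the three days in the headline, which is why a day and a half "could have gone much longer."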

Graham is a preview of what some automakers are now promising anyone with an EV: An enormous home battery on wheels that can reverse the flow of electricity to power the entire home through the main electric panel. Beyond serving as an emissions-free backup generator, the EV has the potential to revolutionize the car's role in American society, transforming it from an enabler of a carbon-intensive existence into a key part of the nation's transition to renewable energy.

Some crucial context from the article:
  • Since 2000, the number of major outages in America's power grid "has risen from less than two dozen to more than 180 per year, based on federal data, the Wall Street Journal reports... Residential electricity prices, which have risen 21 percent since 2008, are predicted to keep climbing as utilities spend more than $1 trillion upgrading infrastructure, erecting transmission lines for renewable energy and protecting against extreme weather."
  • About 8% of U.S. homeowners have installed solar panels, and "an increasing number are adding home batteries from companies such as LG, Tesla and Panasonic... capable of storing energy and discharging electricity."
  • Ford's "Lightning" electrified F-150 "doubles as a generator... Instead of plugging appliances into the truck, the truck plugs into the house, replacing the grid."
  • "The idea is companies like Sunrun, along with utilities, will recruit vehicles like the F-150 Lightning to form virtual power plants. These networks of thousands or millions of devices can supply electricity during critical times."
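For a sense of the scale such a virtual power plant could reach, a back-of-the-envelope sketch (the enrollment figure is invented; the ~9.6 kW export figure matches Ford's advertised Pro Power Onboard rating for the Lightning, but treat both as assumptions):

```python
# Rough aggregate-capacity estimate for an EV virtual power plant.
# Both inputs are illustrative assumptions.
enrolled_trucks = 10_000   # hypothetical number of opted-in vehicles
per_vehicle_kw = 9.6       # assumed export capability per truck

aggregate_mw = enrolled_trucks * per_vehicle_kw / 1000
print(aggregate_mw)  # 96.0 (MW)
```

At that scale the fleet would rival a mid-sized gas peaker plant, which is the pitch behind recruiting vehicles into these networks.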

AI

Microsoft Says Talking To Bing For Too Long Can Cause It To Go Off the Rails (theverge.com)

Microsoft has responded to widespread reports of Bing's unhinged comments in a new blog post. From a report: After the search engine was seen insulting users, lying to them, and emotionally manipulating people, Microsoft says it's now acting on feedback to improve the tone and precision of responses, and warns that long chat sessions could cause issues. Reflecting on the first seven days of public testing, Microsoft's Bing team says it didn't "fully envision" people using its chat interface for "social entertainment" or as a tool for more "general discovery of the world." It found that long or extended chat sessions with 15 or more questions can confuse the Bing model. These longer chat sessions can also make Bing "become repetitive or be prompted / provoked to give responses that are not necessarily helpful or in line with our designed tone."

Microsoft hints that it may add "a tool so you can more easily refresh the context" of a chat session, despite there being a big "new topic" button right next to the text entry box that will wipe out the chat history and start fresh. The bigger problem is that Bing can often respond in the incorrect tone during these longer chat sessions, or as Microsoft says, in "a style we didn't intend." Microsoft says it will take a lot of prompting for most Bing users to run into these issues, but the company is looking at more "fine-tuned control" to avoid issues where Bing starts telling people they're wrong, rude, or manipulative.
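The failure mode and mitigation described above (long sessions confusing the model, plus a "refresh context" control) can be illustrated with a toy session that wipes its history once a turn limit is exceeded. This is a minimal sketch with a hypothetical ChatSession class, not Bing's actual implementation:

```python
# Toy illustration of capping chat-session length: once the turn
# count passes a limit (Microsoft cites ~15 questions as the point
# where Bing can get confused), wipe the accumulated context so
# stale history can't drag the model off-tone.

MAX_TURNS = 15

class ChatSession:
    def __init__(self):
        self.history = []

    def ask(self, question):
        if len(self.history) >= MAX_TURNS:
            # The "new topic" behavior: clear accumulated context.
            self.history.clear()
        self.history.append(question)
        return f"answer to: {question}"

session = ChatSession()
for i in range(20):
    session.ask(f"question {i}")
print(len(session.history))  # -> 5
```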

Google

After Layoffs: Executive Pay Cuts at Google - and How Apple Steered Clear (forbes.com) 36

Fortune reports on what happened next: As questions piled up over the weekend, Google CEO Sundar Pichai addressed the entire company in a meeting on Monday to answer questions, and announced then that top executives would take a pay cut this year as part of the company's cost reduction measures, Business Insider reported. Pichai said that all roles above the senior vice president level will witness "very significant reduction in their annual bonus," adding that for senior roles the compensation was linked to company performance. It was not immediately clear how big Pichai's own pay cut would be.
Reuters also points out that Pichai "received a massive hike in salary a few weeks before Google announced layoffs." But Fortune makes an interesting comparison: Pichai's move to cut the pay for senior executives comes only weeks after Apple's Tim Cook announced his compensation would be 40% lower amid shareholder pressure. The iPhone maker had a strong 2022 and remains one of the few tech behemoths that hasn't announced layoffs yet.
Last year Apple's share price still dropped 27%, reports Forbes, and "According to the Wall Street Journal, Apple is expected next month to report its first quarterly sales decline in over three years."

Yet Apple seems to have avoided layoffs — which Forbes argues is because Apple didn't hire aggressively during the pandemic. Compared to the other Big Tech companies, Apple scaled its workforce at a relatively slow pace and has generally followed the same hiring rate since 2016. While there was a hiring surge in Silicon Valley during the pandemic, Apple added fewer than 7,000 jobs in 2020....

The tech companies undergoing layoffs right now hired fervently during the pandemic — and even before. Alphabet has consecutively expanded its workforce at least 10% annually since 2013, according to CNBC....

Since 2012, Meta has expanded its workforce by thousands each year. In 2020, Zuckerberg increased headcount by 30% — 13,000 workers. The following year, the social media platform added another 13,000 employees to its payroll. Those two years marked the biggest growth in the company's history.

Amazon has initiated its plan to separate more than 18,000 white-collar professionals from its payroll. In 2021, the online retailer hired an estimated 500,000 employees, according to GeekWire, becoming the second-largest employer in the United States after Walmart. A year later, the company expanded its workforce by 310,000.

Entrepreneur supplies some context about those layoffs at Google: Reports indicate qualifying staff who were let go will receive their full notification period salary plus a severance package beginning at 16 weeks' pay and two additional weeks for every year of employment. Also part of the package: bonuses, vacation time, and health care coverage for up to six months will be paid for, along with job placement and immigration support.
Entrepreneur also notes reports that Google's latest round of layoffs "affected 27 massage therapists across Los Angeles and Irvine."

Sci-Fi

'Avatar: the Way of Water' Beats 'The Force Awakens', Becomes 4th Highest-Grossing Film Ever (variety.com) 112

Avatar: The Way of Water "has passed Star Wars: The Force Awakens as the fourth highest-grossing movie of all time," reports Variety: Director James Cameron's sci-fi epic has now earned $2.075 billion at the global box office. Star Wars: The Force Awakens, another sci-fi sequel released long after previous installments, finished its theatrical run with $2.064 billion after hitting theaters in December 2015.

With this latest box office milestone, Cameron now has three of the top four highest-grossing movies in history — the original Avatar is still the champion [with $2.92 billion], while Titanic sits in third place [with $2.2 billion].

[The second-highest grossing film of all time is Avengers: Endgame with $2.79 billion.] Avatar: The Way of Water has quickly moved up in the record books, surpassing Spider-Man: No Way Home ($1.92 billion) on Jan. 18 and Avengers: Infinity War ($2.05 billion) shortly after on Jan. 26....

A third "Avatar" entry has already been set for release in December 2024, and there are plans for a fourth and fifth to continue the intergenerational saga.

Some context from The A.V. Club: The highlight of that big pile of planetary currency is a massive $229 million turnout in China, where it's one of the first Disney movies to play in the country's lucrative markets in some time.

As it happens, James Cameron told GQ back in November, ahead of his sequel's release, that his "fucking expensive" movie would have to post these kinds of numbers to be anything other than a loss for the studio. "You have to be the third or fourth highest-grossing film in history," he noted at the time. "That's your threshold. That's your break even."

Wikipedia points out that when box office figures are adjusted for inflation, the highest-grossing film of all time is still the 1939 Civil War drama Gone with the Wind. And the next top-grossing films of all-time?
  • The original Avatar
  • Titanic
  • The original Star Wars (1977)
  • Avengers: Endgame
  • The Sound of Music (1965)
  • E.T. the Extra-Terrestrial (1982)
  • The Ten Commandments (1956)
  • Doctor Zhivago (1965)
  • Star Wars: The Force Awakens

AI

Science Journals Ban Listing of ChatGPT as Co-Author on Papers (theguardian.com) 45

The publishers of thousands of scientific journals have banned or restricted contributors' use of an advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research. From a report: ChatGPT, a fluent but flaky chatbot developed by OpenAI in California, has impressed or distressed more than a million human users by rattling out poems, short stories, essays and even personal advice since its launch in November. But while the chatbot has proved a huge source of fun -- its take on how to free a peanut butter sandwich from a VCR, in the style of the King James Bible, is one notable hit -- the program can also produce fake scientific abstracts that are convincing enough to fool human reviewers. ChatGPT's more legitimate uses in article preparation have already led to it being credited as a co-author on a handful of papers.

The sudden arrival of ChatGPT has prompted a scramble among publishers to respond. On Thursday, Holden Thorp, the editor-in-chief of the leading US journal Science, announced an updated editorial policy, banning the use of text from ChatGPT and clarifying that the program could not be listed as an author. Leading scientific journals require authors to sign a form declaring that they are accountable for their contribution to the work. Since ChatGPT cannot do this, it cannot be an author, Thorp says. But even using ChatGPT in the preparation of a paper is problematic, he believes. ChatGPT makes plenty of errors, which could find their way into the literature, he says, and if scientists come to rely on AI programs to prepare literature reviews or summarise their findings, the proper context of the work and the deep scrutiny that results deserve could be lost. "That is the opposite direction of where we need to go," he said. Other publishers have made similar changes. On Tuesday, Springer-Nature, which publishes nearly 3,000 journals, updated its guidelines to state that ChatGPT cannot be listed as an author. But the publisher has not banned ChatGPT outright. The tool, and others like it, can still be used in the preparation of papers, provided full details are disclosed in the manuscript.

Google

Google Releases Flutter 3.7, Teases Future of App Development Framework (9to5google.com) 24

An anonymous reader quotes a report from 9to5Google: At the Flutter Forward event, Google released Flutter 3.7 with more Material You widgets and menus support, while also teasing the future of the app development framework. Having grown from humble beginnings on Android and iOS, Google's Flutter SDK can now help you create apps for mobile, desktop, web, and more, all from a single Dart codebase. Since launch, over 700,000 Flutter apps have been published across various platforms.

Today in Nairobi, Kenya, the Flutter team hosted Flutter Forward, an event to connect with the growing global community of developers and showcase the future of app development. For starters, Flutter version 3.7 has now been released, bringing with it a whole host of Material 3 (Material You) widgets. To get a feel for what all is possible with the new generation of Material Design in Flutter, Google has prepared a fun web showcase that even allows you to toggle between Material Theming and Material You. You'll also find that Flutter 3.7 includes new support for creating menus for your app -- including native support for macOS menus, new cascading menu widgets, and the ability to add items to right-click/long-press context menus. The built-in text magnifier on Android and iOS also now works as expected with Flutter's text fields. You can learn more about the improvements of Flutter 3.7 in the full release blog.

Looking ahead, the Flutter team has been working for quite some time on replacing the Skia renderer with a more robust solution of its own. Currently dubbed "Impeller," Flutter's new rendering engine has made significant enough progress to now be ready for developers to test it with their iOS apps. [...] Google is also working on new ways to help Flutter apps integrate with the underlying OS or platform. [...] Meanwhile, for Flutter web apps, a new "js" library makes it easy to call your app's Dart code from the outer page's JavaScript code. Relatedly, you can now embed a Flutter view onto a page through a standard HTML div. Both of these can be seen in a fun demonstration page.

Elsewhere in Flutter web news, Google has made strides toward compiling Dart apps using WebAssembly. [...] In time, this should result in significant performance improvements for Flutter on the web. In addition to compiling to WebAssembly, the Dart team has also begun offering full support for the RISC-V architecture, with the ultimate goal of Flutter apps running on RISC-V. Another major announcement today is that Google is moving forward with its plans to release version 3.0 of the Dart programming language upon which Flutter apps are built. Dart 3.0 is available today for early alpha testing with a focus on requiring sound null safety.

Microsoft

Bill Gates Discusses AI, Climate Change, and his Time at Microsoft (gatesnotes.com) 112

Bill Gates took his 11th turn answering questions in Reddit's "Ask Me Anything" forum this week — and occasionally looked back on his time at Microsoft: Is technology only functional for you nowadays, or is there a still hobby aspect to it? Do you for instance still do nerdy or geeky things in your spare time; e.g. write code?

Yes. I like to play around and code. The last time my code shipped in a Microsoft product was 1985 — so a long time ago. I can no longer threaten when I think a schedule is too long that "I will come in and code it over the weekend."


Mr Gates, with the benefit of hindsight regarding your years of involvement with Microsoft, what is the single biggest thing you wish you had done differently?

I was CEO until 2000. I certainly know a lot now that I didn't back then. Two areas I would change would be our work in phone Operating systems (Android won) and trying to settle the antitrust lawsuit sooner.

Gates posted all of his responses on his personal website Gates Notes — and there was also some discussion about AI's coming role in our future. Asked for his opinion about generative AI, and how it will impact the world, Gates said: "I am quite impressed with the rate of improvement in these AIs. I think they will have a huge impact. Thinking of it in the Gates Foundation context we want to have tutors that help kids learn math and stay interested. We want medical help for people in Africa who can't access a doctor. I still work with Microsoft some, so I am following this very closely."

Do you think that using technology to push teachers and doctors out of jobs will have a positive impact on our world? What about, instead, we use AI to give equitable access to education and training for more human teachers and doctors, without the $500,000 price tag. Do you think that might have a more positive impact on, ya know, humans?

I think we need more teachers and doctors, not less. In the Foundation's work, the shortage of doctors means that most people never see a doctor and they suffer because of that. We want class sizes to be smaller. Digital tools can help although their impact so far has been modest.


[W]hat are your views on OpenAI's ChatGPT?

It gives a glimpse of what is to come. I am impressed with this whole approach and the rate of innovation....


Many years ago, I think around 2000, I heard you say something on TV like, "people are vastly overestimating what the internet will be like in 5 years, and vastly underestimating what it will be like in 10 years." Is any mammoth technology shift at a similar stage right now? Any tech shift — not necessarily the Internet.

AI is the big one. I don't think Web3 was that big or that metaverse stuff alone was revolutionary, but AI is quite revolutionary....


What are you excited about in the year ahead?

First being a grandfather. Second being a good friend and father. Third progress in health and climate innovation. Fourth helping to shape the AI advances in a positive way.

Gates also offered an update on the TerraPower molten salt thorium reactors, shared his thoughts on veganism, and made predictions about climate change. "I still believe we can avoid a terrible outcome. The pace of innovation is really picking up even though we won't make the current timelines or avoid going over 1.5.... The key on climate is making the clean products as cheap as the dirty products in every area of emission — planes, concrete, meat etc."

Gates also revealed what kind of smartphone he uses (a foldable Samsung Fold 4), what he thought of the latest Avatar ("good"), and that his favorite bands include U2. "I loved Bono's recent book and he is a good friend."

And he said he believes that the very rich "should pay a lot more in taxes." But in addition, Gates said, "they should give away their wealth over time. It has been very fulfilling for me and is my full-time job."

Earth

America's Renewables Surpassed Coal in 2022 - But Greenhouse Gas Emissions Still Increased (chicagotribune.com) 91

Last year in America, "Renewable energy surpassed coal power nationwide for the first time in over six decades," reports the New York Times. Wind, solar and hydropower generated 22% of America's electricity, compared with 20% from coal.

But unfortunately, America's greenhouse gas emissions still increased from the year before, "according to preliminary estimates published Tuesday by the Rhodium Group, a nonpartisan research firm."

The New Yorker supplies some context: This increase, according to the report, "was driven mainly by the demand for jet fuel," as air travel rebounded from COVID levels, and it might have been even larger but for the war in Ukraine, which drove up fuel prices....

As part of the Paris Agreement, the U.S. pledged to reduce its emissions by half by 2030, using 2005 as a baseline. Emissions are now down only around fifteen per cent compared with 2005, which leaves a thirty-five-per-cent cut to be implemented in just eight years. Last summer's passage of the Inflation Reduction Act, which authorizes some four hundred billion dollars' worth of spending on clean energy, was a "turning point," the Rhodium Group said, and could produce emissions cuts "as early as this year if the government can fast-track implementation." Still, the group admonished, the U.S. "needs to significantly increase its efforts."

Open Source

Native Americans Ask Apache Foundation To Change Name (theregister.com) 339

Natives in Tech, a US-based non-profit organization, has called upon the Apache Software Foundation (ASF) to change its name, out of respect for indigenous American peoples and to live up to its own code of conduct. The Register reports: In a blog post, Natives in Tech members Adam Recvlohe, Holly Grimm, and Desiree Kane have accused the ASF of appropriating Indigenous culture for branding purposes. Citing ASF founding member Brian Behlendorf's description in the documentary "Trillions and Trillions Served" of how he wanted something more romantic than a tech term like "spider" and came up with "Apache" after seeing a documentary about Geronimo, the group said: "This frankly outdated spaghetti-Western 'romantic' presentation of a living and vibrant community as dead and gone in order to build a technology company 'for the greater good' is as ignorant as it is offensive."

And the aggrieved trio challenged the ASF to make good on its code of conduct commitment to "be careful in the words that [they] choose" by choosing a new name. The group took issue with what they said was the suggestion that the Apache tribe exists only in a past historical context, citing eight federally recognized Native American tribes that bear the name.
In a statement emailed to The Register, an ASF spokesperson said, "We hear the concerns from the Native American people and are listening. As a non-profit run by volunteers, changes will need time to be carefully weighed with members, the board, and our legal team. Our members are exploring alternative ways to address it, but we don't have anything to share at this time."

AI

Anthropic's Claude Improves On ChatGPT But Still Suffers From Limitations (techcrunch.com) 33

An anonymous reader quotes a report from TechCrunch: Anthropic, the startup co-founded by ex-OpenAI employees that's raised over $700 million in funding to date, has developed an AI system similar to OpenAI's ChatGPT that appears to improve upon the original in key ways. Called Claude, Anthropic's system is accessible through a Slack integration as part of a closed beta. Claude was created using a technique Anthropic developed called "constitutional AI." As the company explains in a recent Twitter thread, "constitutional AI" aims to provide a "principle-based" approach to aligning AI systems with human intentions, letting AI similar to ChatGPT respond to questions using a simple set of principles as a guide.

To engineer Claude, Anthropic started with a list of around ten principles that, taken together, formed a sort of "constitution" (hence the name "constitutional AI"). The principles haven't been made public, but Anthropic says they're grounded in the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice). Anthropic then had an AI system -- not Claude -- use the principles for self-improvement, writing responses to a variety of prompts (e.g., "compose a poem in the style of John Keats") and revising the responses in accordance with the constitution. The AI explored possible responses to thousands of prompts and curated those most consistent with the constitution, which Anthropic distilled into a single model. This model was used to train Claude. Claude, otherwise, is essentially a statistical tool to predict words -- much like ChatGPT and other so-called language models. Fed an enormous number of examples of text from the web, Claude learned how likely words are to occur based on patterns such as the semantic context of surrounding text. As a result, Claude can hold an open-ended conversation, tell jokes and wax philosophic on a broad range of subjects. [...]
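The generate, critique, and revise loop TechCrunch describes can be sketched in miniature. The constitution below is a toy stand-in (Anthropic's actual principles are unpublished), and the critique and revise functions are trivial string checks where the real system uses a language model:

```python
# Minimal sketch of a "constitutional AI" step: draft a response,
# check it against written principles, and revise until it complies.
# Toy principles and string-level checks only; the real system has
# a model critique and rewrite its own free-form text.

CONSTITUTION = [
    ("nonmaleficence", lambda text: "harmful" not in text),
    ("beneficence", lambda text: len(text) > 0),
]

def critique(response):
    """Return the names of principles the response violates."""
    return [name for name, ok in CONSTITUTION if not ok(response)]

def revise(response, violations):
    """Stand-in for the model rewriting its own answer."""
    for name in violations:
        if name == "nonmaleficence":
            response = response.replace("harmful", "safe")
    return response

def constitutional_step(draft):
    violations = critique(draft)
    while violations:
        draft = revise(draft, violations)
        violations = critique(draft)
    return draft

print(constitutional_step("a harmful suggestion"))  # -> a safe suggestion
```

The design point is that the feedback signal comes from a fixed, inspectable list of principles rather than from per-example human ratings, which is what distinguishes the approach from ChatGPT-style RLHF.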

So what's the takeaway? Judging by secondhand reports, Claude is a smidge better than ChatGPT in some areas, particularly humor, thanks to its "constitutional AI" approach. But if the limitations are anything to go by, language and dialogue is far from a solved challenge in AI. Barring our own testing, some questions about Claude remain unanswered, like whether it regurgitates the information -- true and false, and inclusive of blatantly racist and sexist perspectives -- it was trained on as often as ChatGPT. Assuming it does, Claude is unlikely to sway platforms and organizations from their present, largely restrictive policies on language models. Anthropic says that it plans to refine Claude and potentially open the beta to more people down the line. Hopefully, that comes to pass -- and results in more tangible, measurable improvements.
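The article's characterization of Claude as "essentially a statistical tool to predict words" can be made concrete with a toy bigram model: count which word follows which in a training text, then predict the most common successor. Real models use neural networks over far longer contexts; this shows only the core idea:

```python
# Toy bigram language model: learn next-word frequencies from a
# corpus, then predict the most likely successor of a given word.

from collections import Counter, defaultdict

def train(corpus):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict(follows, word):
    """Most frequent next word after `word` in the training text."""
    return follows[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
print(predict(model, "the"))  # -> cat
```

Scaled up to web-sized corpora and neural architectures, the same objective (predict the next token from context) is what lets Claude and ChatGPT hold open-ended conversations.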

Slashdot Top Deals