AI

Bill Gates Calls AI's Risks 'Real But Manageable' (gatesnotes.com) 57

This week Bill Gates said "there are more reasons than not to be optimistic that we can manage the risks of AI while maximizing their benefits." One thing that's clear from everything that has been written so far about the risks of AI — and a lot has been written — is that no one has all the answers. Another thing that's clear to me is that the future of AI is not as grim as some people think or as rosy as others think. The risks are real, but I am optimistic that they can be managed. As I go through each concern, I'll return to a few themes:

- Many of the problems caused by AI have a historical precedent. For example, it will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, allowing computers in the classroom. We can learn from what's worked in the past.

- Many of the problems caused by AI can also be managed with the help of AI.

- We'll need to adapt old laws and adopt new ones — just as existing laws against fraud had to be tailored to the online world.

Later Gates adds that "we need to move fast. Governments need to build up expertise in artificial intelligence so they can make informed laws and regulations that respond to this new technology."

But Gates acknowledged and then addressed several specific threats:
  • He thinks AI can be taught to recognize its own hallucinations. "OpenAI, for example, is doing promising work on this front."
  • Gates also believes AI tools can be used to plug AI-identified security holes and other vulnerabilities — and does not see an international AI arms race. "Although the world's nuclear nonproliferation regime has its faults, it has prevented the all-out nuclear war that my generation was so afraid of when we were growing up. Governments should consider creating a global body for AI similar to the International Atomic Energy Agency."
  • He's "guardedly optimistic" about the dangers of deep fakes because "people are capable of learning not to take everything at face value" — and the possibility that AI "can help identify deepfakes as well as create them. Intel, for example, has developed a deepfake detector, and the government agency DARPA is working on technology to identify whether video or audio has been manipulated."
  • "It is true that some workers will need support and retraining as we make this transition into an AI-powered workplace. That's a role for governments and businesses, and they'll need to manage it well so that workers aren't left behind — to avoid the kind of disruption in people's lives that has happened during the decline of manufacturing jobs in the United States."

Gates ends with this final thought:

"I encourage everyone to follow developments in AI as much as possible. It's the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks.

"The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before."


AI

Nine AI-Powered Humanoid Robots Hold Press Conference at UN Summit (apnews.com) 30

We've just had the world's first press conference with AI-enabled, humanoid social robots. Click here to jump straight to Slashdot's transcript of all the robots' answers during the press conference, or watch the 40-minute video here.

It all happened as the United Nations held an "AI for Good" summit in Geneva, where the Guardian reports that the foyer was "humming with robotic voices, the whirring of automated wheels and limbs, and Desdemona, the 'rock star' humanoid, who is chanting 'the singularity will not be centralised' on stage backed by a human band, Jam Galaxy."

But the Associated Press describes how one UN agency had "assembled a group of robots that physically resembled humans at a news conference Friday, inviting reporters to ask them questions in an event meant to spark discussion about the future of artificial intelligence." "The nine robots were seated and posed upright along with some of the people who helped make them at a podium in a Geneva conference center... Among them: Sophia, the first robot innovation ambassador for the U.N. Development Program, or UNDP; Grace, described as a health care robot; and Desdemona, a rock star robot."

"I'm terrified by all of this," said one local newscaster, noting that the robots also said they "had no intention of rebelling against their creators."

But the Associated Press points out an important caveat: While the robots vocalized strong statements (that robots could be more efficient leaders than humans, but wouldn't take anyone's job away or stage a rebellion), organizers didn't specify to what extent the answers were scripted or programmed by people. The summit was meant to showcase "human-machine collaboration," and some of the robots are capable of producing preprogrammed responses, according to their documentation.
Two of the robots seemed to disagree on whether AI-powered robots should submit to stricter regulation. (Although since they're only synthesizing sentences from large-language models, can they really be said to "agree" or "disagree"?)

There were unintentionally humorous moments, starting right from the beginning. Click here to start reading Slashdot's transcript of the robots' answers.
Medicine

Dispute Over Database Use Could Disrupt US Organ Transplant System (wric.com) 20

"The flow of lifesaving organs to 63 U.S. transplant centers could be disrupted..." reported the Washington Post on Monday, "by a dispute over the use of data."

Or, as local news station WRIC puts it, "Two entities dedicated to fighting to save lives through organ transplant operations are now fighting with each other." Buckeye Transplant Services filed a lawsuit against the United Network for Organ Sharing — or UNOS — on July 3 after the Richmond-based non-profit accused the transplant screening service of putting donor and patient privacy at risk.

UNOS claimed Buckeye did so by using technology to gain unauthorized, improper access to a DonorNet database. Buckeye denied any wrongdoing and insisted that the company has always complied with data accessibility protocol... This isn't UNOS's first controversy, but the reason this particular debate has become high-profile is due to rumors that it could impact transplant operations. Prior to the lawsuit, UNOS threatened to cut off Buckeye's access to data necessary for its operation. UNOS still insists that no transplant program will experience any interruptions in receiving organ offers as a result of the dispute. However, Buckeye warned that if it loses access to crucial data, 63 hospitals across the country — two in Virginia — could have to take on extra burdens.

One of those healthcare systems, the University of Virginia's Transplant Center, told 8News that its team is closely monitoring the situation and is already coming up with plans to prevent any legal hiccups from interrupting the lifesaving organ donation process.

Buckeye was involved in over 13% of America's organ transplants in 2022, according to figures cited by the Washington Post. "Buckeye said it is doing nothing wrong," according to the article, "and that other organizations across the transplant system act similarly." Meanwhile, UNOS's general counsel "stressed that cutting off Buckeye is a last resort in a negotiation that has been underway for two months," the Washington Post reported. "Certain features of Buckeye's electronic systems are capable of and have collected from UNOS systems various large volumes of patient-specific and facility-specific information related to transplant services," a UNOS attorney wrote to Buckeye on June 21. Livingston, the UNOS general counsel, said in an interview that the data belongs to UNOS and that transplant centers are able to obtain it from the organization if they want it. But Buckeye is not allowed to collect it in bulk and sell it to its customers. He said if Buckeye retrieves and "scrapes" the data, UNOS does not know how well it is secured, whether it is being "misused or mishandled" and how it is being stored. He also said Buckeye could create an alternate database with the information.
On Tuesday the Washington Post reported that UNOS had issued a two-week extension (through July 19): Anne Paschke, a spokesperson for UNOS, said the group provided the extension to "allow the court an appropriate amount of time" to consider the company's request for a temporary restraining order. "We are confident in our position," Paschke said... Buckeye sued UNOS in federal court on Monday seeking an injunction that would stop the nonprofit group from blocking its access to the national transplant database system...

[The U.S. Health Resources and Services Administration] unveiled plans in March to overhaul the transplant system, including changes to the 37-year monopoly UNOS has held as manager of the organ database... Buckeye is potentially interested in bidding for a part of the contract UNOS now holds, according to company representatives. Its lawsuit contends UNOS "has monopolistic intent to squash the development of technology that could eventually supplant" the UNOS transplant system.

Thanks to long-time Slashdot reader belmolis for sharing the article.
Medicine

AI Tool Decodes Brain Cancer's Genome During Surgery 4

An anonymous reader quotes a report from Harvard Medical School: Scientists have designed an AI tool that can rapidly decode a brain tumor's DNA to determine its molecular identity during surgery -- critical information that under the current approach can take a few days and up to a few weeks. Knowing a tumor's molecular type enables neurosurgeons to make decisions such as how much brain tissue to remove and whether to place tumor-killing drugs directly into the brain -- while the patient is still on the operating table. A report on the work, led by Harvard Medical School researchers, is published July 7 in the journal Med.

The tool, called CHARM (Cryosection Histopathology Assessment and Review Machine), is freely available to other researchers. It still has to be clinically validated through testing in real-world settings and cleared by the FDA before deployment in hospitals, the research team said. [...] CHARM was developed using 2,334 brain tumor samples from 1,524 people with glioma from three different patient populations. When tested on a never-before-seen set of brain samples, the tool distinguished tumors with specific molecular mutations at 93 percent accuracy and successfully classified three major types of gliomas with distinct molecular features that carry different prognoses and respond differently to treatments.

Going a step further, the tool successfully captured visual characteristics of the tissue surrounding the malignant cells. It was capable of spotting telltale areas with greater cellular density and more cell death within samples, both of which signal more aggressive glioma types. The tool was also able to pinpoint clinically important molecular alterations in a subset of low-grade gliomas, a subtype of glioma that is less aggressive and therefore less likely to invade surrounding tissue. Each of these changes also signals different propensity for growth, spread, and treatment response. The tool further connected the appearance of the cells -- the shape of their nuclei, the presence of edema around the cells -- with the molecular profile of the tumor. This means that the algorithm can pinpoint how a cell's appearance relates to the molecular type of a tumor.
Power

Kentucky Mandates Tesla's Charging Plug For State-Backed Charging Stations (reuters.com) 75

Kentucky is requiring that electric vehicle charging companies include Tesla's plug if they want to be part of a state program to electrify highways using federal dollars, according to documents reviewed by Reuters. From the report: Kentucky's plan went into effect on Friday, making it the first state to mandate Tesla's charging technology, although Texas and Washington states previously shared such plans with Reuters. In addition to federal requirements for the rival Combined Charging System (CCS), Kentucky mandates Tesla's plug, called the North American Charging Standard (NACS), at charging stations, according to Kentucky's request for proposal (RFP) for the state's EV charging program on Friday.

"Each port must be equipped with an SAE CCS 1 connector. Each port shall also be capable of connecting to and charging vehicles equipped with charging ports compliant with the North American Charging Standard (NACS)," the documents say. The U.S. Department of Transportation earlier this year said that charging companies must provide CCS plugs to be eligible for federal funding to deploy 500,000 EV chargers by 2030. It added that the rule allows charging stations to have other connectors, as long as they support CCS, a national standard.

AI

WinGPT Is a New ChatGPT App For Your Ancient Windows 3.1 PC (theverge.com) 91

An anonymous reader quotes a report from The Verge: Someone has created a ChatGPT app for Windows 3.1 PCs. WinGPT brings a very basic version of OpenAI's ChatGPT responses into an app that can run on an ancient 386 chip. It's built by the same mysterious developer behind Windle, a Wordle clone for Microsoft's Windows 3.1 operating system. "I didn't want my Gateway 4DX2-66 from 1993 to be left out of the AI revolution, so I built an AI Assistant for Windows 3.1, based on the OpenAI API," says the developer in a Hacker News thread.

WinGPT is written in C using Microsoft's standard Windows API and connects to OpenAI's API server using TLS 1.3, so there's no need for a separate modern PC. That was a particularly interesting part of getting this app running on Windows 3.1, alongside managing the memory segmentation architecture on 16-bit versions of Windows and building the UI for the app. Neowin notes that the ChatGPT responses are only brief due to the limited memory support that can't handle the context of conversations. The icon for WinGPT was also designed in Borland's Image Editor, a clone of Microsoft Paint that's capable of making ICO files.

"I built most of the UI in C directly, meaning that each UI component had to be manually constructed in code," says the anonymous WinGPT developer. "I was surprised that the set of standard controls available to use by any program with Windows 3.1 is incredibly limited. You have some controls you'd expect -- push buttons, check boxes, radio buttons, edit boxes -- but any other control you might need, including those used across the operating system itself, aren't available."

AI

Google DeepMind's CEO Says Its Next Algorithm Will Eclipse ChatGPT 38

In 2016, an artificial intelligence program called AlphaGo from Google's DeepMind AI lab made history by defeating a champion player of the board game Go. Now Demis Hassabis, DeepMind's cofounder and CEO, says his engineers are using techniques from AlphaGo to make an AI system dubbed Gemini that will be more capable than that behind OpenAI's ChatGPT. From a report: DeepMind's Gemini, which is still in development, is a large language model that works with text and is similar in nature to GPT-4, which powers ChatGPT. But Hassabis says his team will combine that technology with techniques used in AlphaGo, aiming to give the system new capabilities such as planning or the ability to solve problems.

"At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models," Hassabis says. "We also have some new innovations that are going to be pretty interesting." Gemini was first teased at Google's developer conference last month, when the company announced a raft of new AI projects. AlphaGo was based on a technique DeepMind has pioneered called reinforcement learning, in which software learns to take on tough problems that require choosing what actions to take, as in Go or video games, by making repeated attempts and receiving feedback on its performance. It also used a method called tree search to explore and remember possible moves on the board. The next big leap for language models may involve them performing more tasks on the internet and on computers. Gemini is still in development, a process that will take a number of months, Hassabis says. It could cost tens or hundreds of millions of dollars. Sam Altman, OpenAI's CEO, said in April that creating GPT-4 cost more than $100 million.
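The two ingredients described above, reinforcement learning and search over possible moves, can be illustrated with a toy sketch. This is a generic textbook-style example (not DeepMind's code, and nothing like Gemini's scale): tabular Q-learning on a hypothetical five-state corridor, where the agent improves purely from repeated attempts and reward feedback, then acts greedily over its learned values.

```python
import random

# Toy environment: a 5-state corridor; reaching state 4 yields reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Tabular Q-learning: learn action values from repeated attempts and feedback.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)
for _ in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit learned values, sometimes explore.
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])
        s = nxt

# Lookahead over learned values: a one-step stand-in for tree search.
def policy(state):
    return max(ACTIONS, key=lambda a: Q[(state, a)])

s, steps = 0, 0
while s != GOAL and steps < 10:
    s, _, _ = step(s, policy(s))
    steps += 1
print(steps)  # the greedy policy walks to the goal (4 steps once learned)
```

Scaling this idea up to Go required deep networks in place of the table and Monte Carlo tree search in place of the one-step lookahead, but the learn-from-feedback loop is the same.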
AI

DeepMind Co-Founder Proposes a New Kind of Turing Test For Chatbots 84

Mustafa Suleyman, co-founder of DeepMind, suggests chatbots like ChatGPT and Google Bard should be put through a "modern Turing test" where their ability to turn $100,000 into $1 million is evaluated to measure human-like intelligence. He discusses the idea in his new book called "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma." Insider reports: In the book, Suleyman dismissed the traditional Turing test because it's "unclear whether this is a meaningful milestone or not," Bloomberg reported Tuesday. "It doesn't tell us anything about what the system can do or understand, anything about whether it has established complex inner monologues or can engage in planning over abstract time horizons, which is key to human intelligence," he added.

The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine. If the machine can pass for a human, then it passes the test. Instead of comparing AI's intelligence to humans, Suleyman proposes tasking a bot with short-term goals and tasks that it can complete with little human input in a process known as "artificial capable intelligence," or ACI.

To achieve ACI, Suleyman says an AI bot should pass a new Turing test in which it receives a $100,000 seed investment and has to turn it into $1 million. As part of the test, the bot must research an e-commerce business idea, develop a plan for the product, find a manufacturer, and then sell the item. He expects AI to achieve this milestone in the next two years. "We don't just care about what a machine can say; we also care about what it can do," he wrote, per Bloomberg.
Space

Webb Telescope Is Powerful Enough To See a Variety of Biosignatures In Exoplanets, Argues New Paper (phys.org) 39

A new study argues that the James Webb Space Telescope (JWST) is capable of detecting the chemical signs of life in exoplanet atmospheres -- the best hope for finding life on another world. Phys.Org reports: The team simulated atmospheric conditions for five broad types of Earth-like worlds: an ocean world, a volcanically active world, a rocky world during the heavy bombardment period, a super-Earth, and a world like Earth when life arose. They assumed all these worlds had a surface pressure of less than five Earth atmospheres, and calculated the absorption spectra for several organically produced molecules such as methane, ammonia, and carbon monoxide. These molecules can also be formed by non-biological methods, but they form a good baseline as a proof of concept.

They found that with a reasonably thick atmosphere, the JWST, specifically its NIRSpec G395M/H instrument, could confirm the presence of these molecules within 10 transits of the planet. It would be easiest to do with super-Earths and other worlds with a thick atmosphere, but it is still possible for potentially habitable worlds. Given the number of transits needed, our best shot at detecting biosignatures with JWST would be the close-orbiting worlds of red dwarf stars, such as the Trappist-1 system, which has several potentially habitable Earth-sized planets. Given the overlap between biological and non-biological origins, JWST observations might not be enough to confirm the existence of life, but this study shows that we are very close to that ability.
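The "10 transits" figure reflects a general principle: random photon noise averages down as the square root of the number of co-added observations. A numerical sketch with made-up numbers (not the study's actual instrument model) shows a weak absorption feature emerging from the noise after stacking:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: a 200 ppm absorption feature (say, a methane band)
# sitting on 300 ppm of photon noise per spectral channel in one transit.
n_channels = 50
feature_depth_ppm, noise_ppm = 200.0, 300.0
true_feature = np.zeros(n_channels)
true_feature[20:30] = feature_depth_ppm  # channels where the molecule absorbs

def observe_one_transit():
    """One noisy transmission spectrum, in ppm relative to the continuum."""
    return true_feature + rng.normal(0.0, noise_ppm, n_channels)

single = observe_one_transit()
stacked = np.mean([observe_one_transit() for _ in range(10)], axis=0)

# Rough detection metric: mean in-band signal over out-of-band scatter.
def snr(spec):
    return spec[20:30].mean() / spec[:20].std()

print(f"single-transit SNR ~ {snr(single):.1f}, "
      f"10-transit SNR ~ {snr(stacked):.1f}")
# Stacking 10 transits should cut the noise by roughly sqrt(10), about 3.2x.
```

This is also why close-orbiting planets of red dwarfs make the best targets: they transit frequently, so the required number of co-added observations accumulates quickly.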

Hardware

M2 Max Is Basically An M1 Ultra, and M2 Ultra Nearly Doubles the Performance (9to5mac.com) 42

The new Mac Studio started shipping to customers this week, giving product reviewers a chance to test Apple's "most capable chip ever." According to new benchmarks by YouTuber Luke Miani, the M2 Ultra features nearly double the GPU performance of last year's M1 Ultra, with notable performance improvements in other areas. 9to5Mac reports: While the M1 Max and M1 Ultra are blazing fast, the difference between the two wasn't as notable as some expected. In many tasks, the much cheaper M1 Max wasn't too far off from the top-end M1 Ultra variant, especially in video editing, photo editing, and 3D rendering. Despite the M1 Ultra literally being two M1 Max chips fused together, the performance was never doubled. For the M2 series, Apple has made some significant changes under the hood, especially in GPU scaling. In Luke's testing, he found that in some GPU-heavy applications, like Blender 3D and 3DMark, the M2 Ultra was sometimes precisely twice the performance of the M2 Max -- perfect GPU scaling! Final Cut Pro exports also nearly doubled. He further found that the M2 Ultra doubled the GPU performance of the M1 Ultra in these same benchmarks -- a genuinely remarkable year-over-year upgrade.

The reason for the massive performance improvement is that Apple added a memory controller chip to the M2 generation that balances the load between all of M2 Ultra's cores -- M1 Ultra required the RAM to be maxed out before using all cores. M1 Ultra was very good at doing many tasks simultaneously but struggled to do one task, such as benchmarking or rendering, faster than the M1 Max. With M2 Ultra, because of this new memory controller, Apple can now achieve the same incredible performance without the memory buffer needing to be maxed out. It's important to note that some applications cannot take advantage of the M2 Ultra fully, and in non-optimized applications, you should not expect double the performance.

Despite this incredible efficiency and performance, the better deal might be the M2 Max. In Luke's testing, the M2 Max performed very similarly or outperformed last year's M1 Ultra. In Blender, Final Cut Pro, 3DMark, and Rise of the Tomb Raider, the M2 Max consistently performed the same or better than the M1 Ultra. Instead of finding an M1 Ultra on eBay, it might be best to save money and get the M2 Max if you're planning on doing tasks that heavily utilize the GPU. While the GPU performance is similar, the M1 Ultra still has the advantage of far more CPU cores, and will outperform the M2 Max in CPU heavy workloads.
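The "perfect GPU scaling" claim is just a ratio: the fused chip's score divided by twice the single chip's score. With hypothetical benchmark numbers (illustrative only, not Miani's measurements), the arithmetic looks like this:

```python
# Hypothetical benchmark scores (illustrative, not measured values).
scores = {"M1 Max": 100.0, "M1 Ultra": 140.0,
          "M2 Max": 145.0, "M2 Ultra": 290.0}

def scaling_efficiency(single, fused):
    """Fused-chip score relative to the ideal of 2x the single chip."""
    return fused / (2 * single)

m1 = scaling_efficiency(scores["M1 Max"], scores["M1 Ultra"])
m2 = scaling_efficiency(scores["M2 Max"], scores["M2 Ultra"])
print(f"M1 Ultra scaling: {m1:.0%}, M2 Ultra scaling: {m2:.0%}")
# With these made-up numbers: M1 Ultra reaches 70% of ideal, M2 Ultra 100%.
```

An efficiency near 100% in a given benchmark is what the review calls perfect scaling; anything well below it means the workload isn't saturating both dies.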

Debian

Debian 12 'Bookworm' Released (debian.org) 62

Slashdot reader e065c8515d206cb0e190 shared the big announcement from Debian.org: After 1 year, 9 months, and 28 days of development, the Debian project is proud to present its new stable version 12 (code name bookworm).

bookworm will be supported for the next 5 years thanks to the combined work of the Debian Security team and the Debian Long Term Support team...

This release contains over 11,089 new packages for a total count of 64,419 packages, while over 6,296 packages have been removed as obsolete. 43,254 packages were updated in this release. The overall disk usage for bookworm is 365,016,420 kB (365 GB), and is made up of 1,341,564,204 lines of code.

bookworm has more translated man pages than ever thanks to our translators who have made man-pages available in multiple languages such as: Czech, Danish, Greek, Finnish, Indonesian, Macedonian, Norwegian (Bokmål), Russian, Serbian, Swedish, Ukrainian, and Vietnamese. All of the systemd man pages are now completely available in German.

The Debian Med Blend introduces a new package: shiny-server which simplifies scientific web applications using R. We have kept to our efforts of providing Continuous Integration support for Debian Med team packages. Install the metapackages at version 3.8.x for Debian bookworm.

The Debian Astro Blend continues to provide a one-stop solution for professional astronomers, enthusiasts, and hobbyists with updates to almost all versions of the software packages in the blend. astap and planetary-system-stacker help with image stacking and astrometry resolution. openvlbi, the open source correlator, is now included.

Support for Secure Boot on ARM64 has been reintroduced: users of UEFI-capable ARM64 hardware can boot with Secure Boot mode enabled to take full advantage of the security feature.

9to5Linux has screenshots, and highlights some new features: Debian 12 also brings read/write support for APFS (Apple File System) with the apfsprogs and apfs-dkms utilities, a new tool called ntfs2btrfs that lets you convert NTFS drives to Btrfs, a new malloc implementation called mimalloc, a new kernel SMB server called ksmbd-tools, and support for the merged-usr root file system layout...

This release also includes completely new artwork called Emerald, designed (once again) by Juliette Taka. New fonts are also present in this major Debian release, along with a new fnt command-line tool for accessing 1,500 DFSG-compliant fonts.

Debian 12 "bookworm" ships with several desktop environments, including:
  • Gnome 43,
  • KDE Plasma 5.27,
  • LXDE 11,
  • LXQt 1.2.0,
  • MATE 1.26,
  • Xfce 4.18

Robotics

Uber Eats to Deploy 2,000 Autonomous Delivery Robots (techcrunch.com) 20

"If you live in San Jose, Dallas, or Vancouver, you may soon be sharing the sidewalk with an army of delivery robots," reports PC Magazine (citing a report from TechCrunch). Uber Eats is expanding its partnership with Serve Robotics to deploy up to 2,000 zero-emission bots: Currently covering Los Angeles and San Francisco, Serve Robotics has been working with more than 200 California restaurants to dish out meals via the Uber Eats platform... Serve's sidewalk robots run seven days a week from 10 a.m. to 9 p.m. They're capable of Level 4 autonomy, allowing them to operate routinely without human intervention, TechCrunch reports.

Uber is no stranger to driverless robots. Together with AI-powered partner Cartken, the firm recently expanded a food delivery pilot from Miami to Fairfax, Virginia, where bots now roam the sidewalks, dropping off meals and providing curbside pickup to locals.

Last week Uber also announced it was making robotaxis available via the Uber app in Phoenix.

TechCrunch argues this new expansion "validates Serve's goal to mass commercialize robotics for autonomous delivery" — while also signalling Uber's deeper commitment to autonomy.
Transportation

US Proposes Requiring New Cars To Have Automatic Braking Systems (nytimes.com) 142

The National Highway Traffic Safety Administration (NHTSA) has proposed a rule that would require all new cars and trucks to have automatic braking systems capable of preventing collisions. The rule aims to address the rise in traffic fatalities and would mandate the use of advanced systems that can automatically stop and avoid hitting pedestrians and stationary or slow-moving vehicles. The New York Times reports: The agency is proposing that all light vehicles, including cars, large pickup trucks and sport utility vehicles, be equipped to automatically stop and avoid hitting pedestrians at speeds of up to 37 miles per hour. Vehicles would also have to brake and stop to avoid hitting stopped or slow-moving vehicles at speeds of up to 62 m.p.h. And the systems would have to perform well at night. About 90 percent of the new vehicles on sale now have some form of automatic emergency braking, but not all meet the standards the safety agency is proposing.

Automatic emergency braking systems typically use cameras, radar or both to spot vehicles, pedestrians, cyclists and other obstacles. By comparing a vehicle's speed and direction with those of other vehicles or people, these systems can determine that a collision is imminent, alert the driver through an alarm and activate the brakes if the driver fails to do so. [...] The safety agency will take comments on the rule from automakers, safety groups and the public before making it final -- a process that can take a year or more. The rule will go into effect three years after it is adopted.
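The sequence described above (estimate closing speed, warn the driver, then brake automatically) is commonly framed as a time-to-collision calculation. A simplified one-dimensional sketch, with illustrative thresholds rather than anything from the proposed rule:

```python
def aeb_decision(gap_m, own_speed_ms, lead_speed_ms,
                 warn_ttc_s=2.5, brake_ttc_s=1.5):
    """Classify a car-following situation by time-to-collision (TTC).

    A simplified 1-D model: TTC = gap / closing speed. Real systems fuse
    camera and radar tracks and model driver reaction time; the thresholds
    here are illustrative, not regulatory values.
    """
    closing = own_speed_ms - lead_speed_ms
    if closing <= 0:
        return "no action"   # pulling away from or matching the lead vehicle
    ttc = gap_m / closing
    if ttc < brake_ttc_s:
        return "brake"       # collision imminent and driver hasn't reacted
    if ttc < warn_ttc_s:
        return "warn"        # alert the driver first
    return "no action"

# 62 mph is about 27.7 m/s; approaching a stopped vehicle 30 m ahead
# gives a TTC of roughly 1.1 s, inside the automatic-braking threshold.
print(aeb_decision(gap_m=30.0, own_speed_ms=27.7, lead_speed_ms=0.0))
```

Production systems layer reaction-time margins and sensor confidence onto this core comparison; the proposal's 37 mph (pedestrian) and 62 mph (vehicle) figures set the speeds up to which the braking stage would have to succeed.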

Security

Unearthed: CosmicEnergy, Malware For Causing Kremlin-Style Power Disruptions (arstechnica.com) 45

An anonymous reader quotes a report from Ars Technica: Researchers have uncovered malware designed to disrupt electric power transmission that may have been used by the Russian government in training exercises for creating or responding to cyberattacks on electric grids. Known as CosmicEnergy, the malware has capabilities that are comparable to those found in malware known as Industroyer and Industroyer2, both of which have been widely attributed by researchers to Sandworm, the name of one of the Kremlin's most skilled and cutthroat hacking groups.

Researchers from Mandiant, the security firm that found CosmicEnergy, wrote: "COSMICENERGY is the latest example of specialized OT malware capable of causing cyber physical impacts, which are rarely discovered or disclosed. What makes COSMICENERGY unique is that based on our analysis, a contractor may have developed it as a red teaming tool for simulated power disruption exercises hosted by Rostelecom-Solar, a Russian cyber security company. Analysis into the malware and its functionality reveals that its capabilities are comparable to those employed in previous incidents and malware, such as INDUSTROYER and INDUSTROYER.V2, which were both malware variants deployed in the past to impact electricity transmission and distribution via IEC-104. The discovery of COSMICENERGY illustrates that the barriers to entry for developing offensive OT capabilities are lowering as actors leverage knowledge from prior attacks to develop new malware. Given that threat actors use red team tools and public exploitation frameworks for targeted threat activity in the wild, we believe COSMICENERGY poses a plausible threat to affected electric grid assets. OT asset owners leveraging IEC-104 compliant devices should take action to preempt potential in the wild deployment of COSMICENERGY."

Right now, the link is circumstantial and mainly limited to a comment found in the code suggesting it works with software designed for training exercises sponsored by the Kremlin. Consistent with the theory that CosmicEnergy is used in so-called Red Team exercises that simulate hostile hacks, the malware lacks the ability to burrow into a network to obtain environment information that would be necessary to execute an attack. The malware includes hardcoded information object addresses typically associated with power line switches or circuit breakers, but those mappings would have to be customized for a specific attack since they differ from manufacturer to manufacturer. "For this reason, the particular actions intended by the actor are unclear without further knowledge about the targeted assets," Mandiant researchers wrote.

Privacy

NSO Spyware Used in Armenia-Azerbaijan Conflict, Report Finds (nbcnews.com) 10

Invasive spyware capable of reading a smartphone's messages and listening to calls was found on the phones of at least 12 Armenian journalists, politicians and civil society members, according to a report published Thursday by a group of nonprofit organizations. From a report: The spyware, called Pegasus and made by the Israeli company NSO, had previously been found on the phones of thousands of people around the world, leading to U.S. sanctions in 2021 and a lawsuit from Apple. But researchers said their most recent findings are unique -- they believe it is the first time that the technology has been weaponized in an armed conflict between countries.

Armenia has intermittently battled its neighbor Azerbaijan for decades. In 2020, a cease-fire was broken in the disputed region of Nagorno-Karabakh, leaving thousands dead. Since then, the two countries have been mired in a sporadic shooting war which has killed dozens more. The report, a collaboration among the international internet rights group Access Now, Amnesty International and the University of Toronto's Citizen Lab, calls for "an immediate moratorium" on the sale and transfer of spyware technology. NSO is the most notorious mercenary spyware developer in the world. It creates powerful programs like Pegasus, which can hack smartphones to reveal information such as contacts, calls and location.

Python

Python's PyPI Package Repository Temporarily Halted New Signups, Citing 'Volume of Malicious Projects' (bleepingcomputer.com) 24

On Saturday PyPI, the official third-party registry of open source Python packages, "temporarily suspended new users from signing up, and new projects from being uploaded to the platform" reports BleepingComputer.

"The volume of malicious users and malicious projects being created on the index in the past week has outpaced our ability to respond to it in a timely fashion, especially with multiple PyPI administrators on leave," stated an incident notice posted by PyPI admins Saturday.

Hours ago they posted a four-word update: "Suspension has been lifted." No details were provided, but The Hacker News writes the incident "comes as software registries such as PyPI have proven time and time again to be a popular target for attackers looking to poison the software supply chain and compromise developer environments." Earlier this week, Israeli cybersecurity startup Phylum uncovered an active malware campaign that leverages OpenAI ChatGPT-themed lures to bait developers into downloading a malicious Python module capable of stealing clipboard content in order to hijack cryptocurrency transactions. ReversingLabs, in a similar discovery, identified multiple npm packages named nodejs-encrypt-agent and nodejs-cookie-proxy-agent in the npm repository that drop a trojan called TurkoRat.
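The clipboard-hijacking technique Phylum describes works by watching for text that matches the shape of a cryptocurrency address and silently swapping it for the attacker's. The same pattern check can serve defensively, as in this simplified sketch (the regexes below are illustrative approximations, not complete address validators):

```python
import re

# Simplified patterns for common cryptocurrency address formats.
# These approximate the shapes such malware watches for; they are
# not full validators (no checksum verification).
ADDRESS_PATTERNS = [
    re.compile(r"^(1|3)[A-HJ-NP-Za-km-z1-9]{25,34}$"),  # legacy Bitcoin
    re.compile(r"^bc1[a-z0-9]{25,62}$"),                # Bitcoin bech32
    re.compile(r"^0x[0-9a-fA-F]{40}$"),                 # Ethereum
]

def looks_like_crypto_address(text: str) -> bool:
    """Return True if the text matches a common crypto-address shape."""
    return any(p.match(text.strip()) for p in ADDRESS_PATTERNS)
```

A clipboard monitor built on a check like this could alert the user when pasted text that looked like an address no longer matches what was copied.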

AI

Meta's Building an In-House AI Chip to Compete with Other Tech Giants (techcrunch.com) 17

An anonymous reader shared this report from The Verge: Meta is building its first custom chip specifically for running AI models, the company announced on Thursday. As Meta increases its AI efforts — CEO Mark Zuckerberg recently said the company sees "an opportunity to introduce AI agents to billions of people in ways that will be useful and meaningful" — the chip and other infrastructure plans revealed Thursday could be critical tools for Meta to compete with other tech giants also investing significant resources into AI.

Meta's new MTIA chip, which stands for Meta Training and Inference Accelerator, is its "in-house, custom accelerator chip family targeting inference workloads," Meta VP and head of infrastructure Santosh Janardhan wrote in a blog post... But the MTIA chip is seemingly a long way off: it's not set to come out until 2025, TechCrunch reports.

Meta has been working on "a massive project to upgrade its AI infrastructure in the past year," Reuters reports, "after executives realized it lacked the hardware and software to support demand from product teams building AI-powered features."

As a result, the company scrapped plans for a large-scale rollout of an in-house inference chip and started work on a more ambitious chip capable of performing training and inference, Reuters reported...

Meta said it has an AI-powered system to help its engineers create computer code, similar to tools offered by Microsoft, Amazon and Alphabet.

TechCrunch calls these announcements "an attempt at a projection of strength from Meta, which historically has been slow to adopt AI-friendly hardware systems — hobbling its ability to keep pace with rivals such as Google and Microsoft."

Meta's VP of Infrastructure told TechCrunch "This level of vertical integration is needed to push the boundaries of AI research at scale." Over the past decade or so, Meta has spent billions of dollars recruiting top data scientists and building new kinds of AI, including AI that now powers the discovery engines, moderation filters and ad recommenders found throughout its apps and services. But the company has struggled to turn many of its more ambitious AI research innovations into products, particularly on the generative AI front. Until 2022, Meta largely ran its AI workloads using a combination of CPUs — which tend to be less efficient for those sorts of tasks than GPUs — and a custom chip designed for accelerating AI algorithms...

The MTIA is an ASIC, an application-specific integrated circuit that combines different circuits on a single chip and can be programmed to carry out one or many tasks in parallel... Custom AI chips are increasingly the name of the game among the Big Tech players. Google created a processor, the TPU (short for "tensor processing unit"), to train large generative AI systems like PaLM 2 and Imagen. Amazon offers proprietary chips to AWS customers both for training (Trainium) and inferencing (Inferentia). And Microsoft, reportedly, is working with AMD to develop an in-house AI chip called Athena.

Meta says that it created the first generation of the MTIA — MTIA v1 — in 2020, built on a 7-nanometer process. It can scale beyond its internal 128 MB of memory to up to 128 GB, and in a Meta-designed benchmark test — which, of course, has to be taken with a grain of salt — Meta claims that the MTIA handled "low-complexity" and "medium-complexity" AI models more efficiently than a GPU. Work remains to be done in the memory and networking areas of the chip, Meta says, which present bottlenecks as the size of AI models grows, requiring workloads to be split up across several chips. (Not coincidentally, Meta recently acquired an Oslo-based team building AI networking tech at British chip unicorn Graphcore.) And for now, the MTIA's focus is strictly on inference — not training — for "recommendation workloads" across Meta's app family...
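The memory bottleneck TechCrunch mentions is easy to see with back-of-the-envelope arithmetic: once a model's weights exceed one accelerator's memory, the workload has to be sharded across chips. The figures below are illustrative, not Meta's numbers:

```python
# Back-of-the-envelope sketch of the memory bottleneck: when a model
# no longer fits in one accelerator's memory, it must be split across
# several chips. All numbers here are illustrative.

def chips_needed(model_bytes: int, chip_memory_bytes: int) -> int:
    """Minimum chips required to hold the model weights, ignoring
    activation memory and other overheads."""
    return -(-model_bytes // chip_memory_bytes)  # ceiling division

GB = 1024**3
# A 7B-parameter model at 2 bytes per parameter is ~14 GB of weights:
model = 7_000_000_000 * 2
print(chips_needed(model, 16 * GB))  # 1 -- fits on a single 16 GB device
print(chips_needed(model, 8 * GB))   # 2 -- must be split across 8 GB devices
```

Splitting a model this way is what puts pressure on the chip's networking, since the shards must exchange data every inference step.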

If there's a common thread in today's hardware announcements, it's that Meta's attempting desperately to pick up the pace where it concerns AI, specifically generative AI... In part, Meta's feeling increasing pressure from investors concerned that the company's not moving fast enough to capture the (potentially large) market for generative AI. It has no answer — yet — to chatbots like Bard, Bing Chat or ChatGPT. Nor has it made much progress on image generation, another key segment that's seen explosive growth.

If the predictions are right, the total addressable market for generative AI software could be $150 billion. Goldman Sachs predicts that it'll raise GDP by 7%. Even a small slice of that could erase the billions Meta's lost in investments in "metaverse" technologies like augmented reality headsets, meetings software and VR playgrounds like Horizon Worlds.

AI

'Stack Overflow is ChatGPT Casualty' (similarweb.com) 150

SimilarWeb: Developers increasingly get advice from AI chatbots and GitHub Copilot rather than Stack Overflow message boards. While traffic to OpenAI's ChatGPT has been growing exponentially, Stack Overflow has been experiencing a steady decline -- losing some of its standing as the go-to source developers turn to for answers to coding challenges. In fact, traffic to Stack Overflow's community website has been dropping since the beginning of 2022. That may be in part because of a related development, the introduction of the Copilot coding assistant from Microsoft's GitHub business. Copilot is built on top of the same OpenAI large language model as ChatGPT, capable of processing both human language and programming language. A plugin to the widely used Microsoft Visual Studio Code allows developers to have Copilot write entire functions on their behalf, rather than going to Stack Overflow in search of something to copy and paste. Copilot now incorporates the latest GPT-4 version of OpenAI's platform.

On a year-over-year basis, traffic to Stack Overflow (stackoverflow.com) has been down by an average of 6% every month since January 2022 and was down 13.9% in March. ChatGPT doesn't have a year-over-year track record, having only launched at the end of November, but its website (chat.openai.com) has become one of the world's hottest digital properties in that short time, bigger than Microsoft's Bing search engine for worldwide traffic. It attracted 1.6 billion visits in March and another 920.7 million in the first half of April. The GitHub website has also been seeing strong growth, with traffic to github.com up 26.4% year-over-year in March to 524 million visits. That doesn't reflect all the usage of Copilot, which normally takes place within an editor like Visual Studio Code, but it would include people coming to the website to get a subscription to the service. Visits to the GitHub Copilot free trial signup page more than tripled from February to March, topping 800,000.
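The year-over-year figures above are simple percentage changes against the same month a year earlier. The visit counts in this sketch are made up to match the shape of the calculation, not SimilarWeb's underlying data:

```python
# How a year-over-year traffic change is computed. The visit counts
# below are illustrative, not SimilarWeb's actual figures.

def yoy_change(this_year: float, last_year: float) -> float:
    """Percent change versus the same month one year earlier."""
    return (this_year - last_year) / last_year * 100

# A site that drew 100M visits last March and 86.1M this March
# is down 13.9% year over year:
print(round(yoy_change(86_100_000, 100_000_000), 1))  # -13.9
```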

AI

Wendy's To Begin Replacing Drive-Thru Staff With AI Chatbots (newatlas.com) 107

An anonymous reader quotes a report from New Atlas: "It's at least as good as our best customer service representative, and it's probably on average better," said Wendy's CIO Kevin Vasconi to the Wall Street Journal. After successful early tests, the fifth biggest fast food chain in the USA will start using AI chatbots to interact with drive-thru customers next month. The company has been working with Google on a number of machine learning and AI tools behind the scenes, and is now extending that partnership to begin deploying a Large Language Model (LLM) generative AI system built on the Vertex AI platform, that's custom-trained to take over from human workers, taking drive-thru orders and talking with customers.

Verbal AI tech has advanced in leaps and bounds -- not that you'd know it trying to talk to my Google Home, mind you -- and the two companies have worked together to train up a system called FreshAI. This model understands the entire menu, including the street slang for certain orders, and it's capable of having conversations -- within a set of "guardrails" -- as well as taking custom orders and answering questions. It integrates with the company's point of sale systems and has been trained to follow the rules the company currently gives to its human drive-thru window staff. Wendy's will begin with a pilot program at a site in the Columbus, Ohio, area next month, expecting that some customers won't realize they're not talking to a human. From there, the company hopes to expand to include other drive-thru locations.
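The "guardrails" idea, in the abstract, means the bot only commits to items it can resolve against the menu (including slang aliases) and hands anything else to a human. Here's a minimal sketch of that pattern; the menu items, prices, and aliases are all invented, since Wendy's actual system isn't public:

```python
# Hypothetical sketch of menu-constrained order resolution.
# Menu items, prices, and slang aliases below are invented for
# illustration; Wendy's FreshAI internals are not public.

MENU = {"classic burger": 5.49, "frosty": 1.99, "spicy nuggets": 3.49}
ALIASES = {"biggie": "classic burger", "nugs": "spicy nuggets"}

def resolve_order_item(utterance: str):
    """Map a spoken phrase to a menu item, or None to escalate
    the exchange to a human employee."""
    item = utterance.strip().lower()
    item = ALIASES.get(item, item)
    return item if item in MENU else None
```

Anything the resolver returns as `None` falls outside the guardrails, which is the cue for the system to hand off rather than improvise.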

Google

Google Announces PaLM 2, Its Next Generation Language Model (blog.google) 6

Google, in a blog post: PaLM 2 is a state-of-the-art language model with improved multilingual, reasoning and coding capabilities.

Multilinguality: PaLM 2 [PDF] is more heavily trained on multilingual text, spanning more than 100 languages. This has significantly improved its ability to understand, generate and translate nuanced text -- including idioms, poems and riddles -- across a wide variety of languages, a hard problem to solve. PaLM 2 also passes advanced language proficiency exams at the "mastery" level.
Reasoning: PaLM 2's wide-ranging dataset includes scientific papers and web pages that contain mathematical expressions. As a result, it demonstrates improved capabilities in logic, common sense reasoning, and mathematics.
Coding: PaLM 2 was pre-trained on a large quantity of publicly available source code datasets. This means that it excels at popular programming languages like Python and JavaScript, but can also generate specialized code in languages like Prolog, Fortran and Verilog.

Even as PaLM 2 is more capable, it's also faster and more efficient than previous models -- and it comes in a variety of sizes, which makes it easy to deploy for a wide range of use cases. We'll be making PaLM 2 available in four sizes from smallest to largest: Gecko, Otter, Bison and Unicorn. Gecko is so lightweight that it can work on mobile devices and is fast enough for great interactive applications on-device, even when offline. This versatility means PaLM 2 can be fine-tuned to support entire classes of products in more ways, to help more people.

At I/O today, we announced over 25 new products and features powered by PaLM 2. That means that PaLM 2 is bringing the latest in advanced AI capabilities directly into our products and to people -- including consumers, developers, and enterprises of all sizes around the world. Here are some examples:

PaLM 2's improved multilingual capabilities are allowing us to expand Bard to new languages, starting today. Plus, it's powering our recently announced coding update.
Workspace features to help you write in Gmail and Google Docs, and help you organize in Google Sheets are all tapping into the capabilities of PaLM 2 at a speed that helps people get work done better, and faster.
Med-PaLM 2, trained by our health research teams with medical knowledge, can answer questions and summarize insights from a variety of dense medical texts. It achieves state-of-the-art results in medical competency, and was the first large language model to perform at "expert" level on U.S. Medical Licensing Exam-style questions. We're now adding multimodal capabilities to synthesize information like x-rays and mammograms to one day improve patient outcomes. Med-PaLM 2 will open up to a small group of Cloud customers for feedback later this summer to identify safe, helpful use cases.

Slashdot Top Deals