Businesses

Nvidia Posts Record Revenue Up 265% On Booming AI Business (cnbc.com) 27

In its fourth quarter earnings report today, Nvidia beat Wall Street's forecast for earnings and sales, causing shares to rise about 10% in extended trading. CNBC reports: Here's what the company reported compared with what Wall Street was expecting for the quarter ending in January, based on a survey of analysts by LSEG, formerly known as Refinitiv:

Earnings per share: $5.16 adjusted vs. $4.64 expected
Revenue: $22.10 billion vs. $20.62 billion expected

Nvidia said it expected $24.0 billion in sales in the current quarter. Analysts polled by LSEG were looking for $5.00 per share on $22.17 billion in sales. Nvidia CEO Jensen Huang addressed investor fears that the company may not be able to keep up this growth or level of sales for the whole year on a call with analysts. "Fundamentally, the conditions are excellent for continued growth" in 2025 and beyond, Huang told analysts. He says demand for the company's GPUs will remain high due to generative AI and an industry-wide shift away from central processors to the accelerators that Nvidia makes.

Nvidia reported $12.29 billion in net income during the quarter, or $4.93 per share, up 769% versus last year's $1.41 billion or 57 cents per share. Nvidia's total revenue rose 265% from a year ago, based on strong sales for AI chips for servers, particularly the company's "Hopper" chips such as the H100, it said. "Strong demand was driven by enterprise software and consumer internet applications, and multiple industry verticals including automotive, financial services and health care," the company said in commentary provided to investors. Those sales are reported in the company's Data Center business, which now comprises the majority of Nvidia's revenue. Data center sales were up 409% to $18.40 billion. Over half the company's data center sales went to large cloud providers. [...]
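
Those growth figures are simple year-over-year ratios; a small arithmetic sketch (TypeScript, using the rounded dollar amounts quoted above, so the results land near but not exactly on the reported percentages):

```typescript
// Year-over-year growth, expressed as a percentage increase.
const yoyGrowthPct = (current: number, prior: number): number =>
  (current / prior - 1) * 100;

// Net income: $12.29B vs. $1.41B a year earlier.
console.log(yoyGrowthPct(12.29, 1.41).toFixed(0)); // ~772; CNBC's 769% comes from unrounded figures

// A 265% revenue increase to $22.10B implies prior-year quarterly revenue of roughly $6.05B.
console.log((22.10 / (1 + 2.65)).toFixed(2)); // "6.05"
```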

The company's gaming business, which includes graphics cards for laptops and PCs, was merely up 56% year over year to $2.87 billion. Graphics cards for gaming used to be Nvidia's primary business before its AI chips started taking off, and some of Nvidia's graphics cards can be used for AI. Nvidia's smaller businesses did not show the same meteoric growth. Its automotive business declined 4% to $281 million in sales, and its OEM and other business, which includes crypto chips, rose 7% to $90 million. Nvidia's business making graphics hardware for professional applications rose 105% to $463 million.

Businesses

Nvidia Becomes Third Most Valuable US Company (cnbc.com) 75

Nvidia is now the third most valuable company in the U.S., surpassing Google parent Alphabet and Amazon. It's only behind Apple and Microsoft in terms of market cap. CNBC reports: Nvidia rose over 2% to close at $739.00 per share, giving it a market value of $1.83 trillion to Google's $1.82 trillion market cap. The move comes one day after Nvidia surpassed Amazon in terms of market value. The symbolic milestone is more confirmation that Nvidia has become a Wall Street darling on the back of elevated AI chip sales, valued even more highly than some of the large software companies and cloud providers that develop and integrate AI technology into their products.

Nvidia shares are up over 221% over the past 12 months on robust demand for its AI server chips that can cost more than $20,000 each. Companies like Google and Amazon need thousands of them for their cloud services. Before the recent AI boom, Nvidia was best known for consumer graphics processors it sold to PC makers to build gaming computers, a less lucrative market.

Portables (Apple)

Asahi Linux Project's OpenGL Support On Apple Silicon Officially Surpasses Apple's (arstechnica.com) 43

Andrew Cunningham reports via Ars Technica: For around three years now, the team of independent developers behind the Asahi Linux project has worked to support Linux on Apple Silicon Macs, despite Apple's total lack of involvement. Over the years, the project has gone from a "highly unstable experiment" to a "surprisingly functional and usable desktop operating system." Even Linus Torvalds has used it to run Linux on Apple's hardware. The team has been steadily improving its open source, standards-conformant GPU driver for the M1 and M2 since releasing it in December 2022, and today, the team crossed an important symbolic milestone: The Asahi driver's support for the OpenGL and OpenGL ES graphics APIs has officially surpassed what Apple offers in macOS. The team's latest graphics driver fully conforms with OpenGL version 4.6 and OpenGL ES version 3.2, the most recent version of either API. Apple's support in macOS tops out at OpenGL 4.1, announced in July 2010.

Developer Alyssa Rosenzweig wrote a detailed blog post that announced the new driver, which had to pass "over 100,000 tests" to be deemed officially conformant. The team achieved this milestone despite the fact that Apple's GPUs don't support some features that would have made implementing these APIs more straightforward. "Regrettably, the M1 doesn't map well to any graphics standard newer than OpenGL ES 3.1," writes Rosenzweig. "While Vulkan makes some of these features optional, the missing features are required to layer DirectX and OpenGL on top. No existing solution on M1 gets past the OpenGL 4.1 feature set... Without hardware support, new features need new tricks. Geometry shaders, tessellation, and transform feedback become compute shaders. Cull distance becomes a transformed interpolated value. Clip control becomes a vertex shader epilogue. The list goes on."

Now that the Asahi GPU driver supports the latest OpenGL and OpenGL ES standards -- released in 2017 and 2015, respectively -- the work turns to supporting the low-overhead Vulkan API on Apple's hardware. Vulkan support in macOS is limited to translation layers like MoltenVK, which translates Vulkan API calls to Metal ones that the hardware and OS can understand. [...] Rosenzweig's blog post didn't give any specific updates on Vulkan except to say that the team was "well on the road" to supporting it. In addition to supporting native Linux apps, supporting more graphics APIs in Asahi will allow the operating system to take better advantage of software like Valve's Proton, which already has a few games written for x86-based Windows PCs running on Arm-based Apple hardware.

Businesses

Sam Altman Seeks Trillions of Dollars To Reshape Business of Chips and AI (wsj.com) 54

Sam Altman was already trying to lead the development of human-level artificial intelligence. Now he has another great ambition: raising trillions of dollars to reshape the global semiconductor industry. From a report: The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world's chip-building capacity, expand its ability to power AI, among other things, and cost several trillion dollars, according to people familiar with the matter. The project could require raising as much as $5 trillion to $7 trillion, one of the people said.

The fundraising plans, which face significant obstacles, are aimed at solving constraints to OpenAI's growth, including the scarcity of the pricey AI chips required to train large language models behind AI systems such as ChatGPT. Altman has often complained that there aren't enough of these kinds of chips -- known as graphics processing units, or GPUs -- to power OpenAI's quest for artificial general intelligence, which it defines as systems that are broadly smarter than humans. Such a sum of investment would dwarf the current size of the global semiconductor industry. Global sales of chips were $527 billion last year and are expected to rise to $1 trillion annually by 2030. Global sales of semiconductor manufacturing equipment -- the costly machinery needed to run chip factories -- last year were $100 billion, according to an estimate by the industry group SEMI.

AI

AI PCs To Account for Nearly 60% of All PC Shipments by 2027, IDC Says (idc.com) 70

IDC, in a press release: A new forecast from IDC shows shipments of artificial intelligence (AI) PCs -- personal computers with specific system-on-a-chip (SoC) capabilities designed to run generative AI tasks locally -- growing from nearly 50 million units in 2024 to more than 167 million in 2027. By the end of the forecast, IDC expects AI PCs will represent nearly 60% of all PC shipments worldwide. [...] Until recently, running an AI task locally on a PC was done on the central processing unit (CPU), the graphics processing unit (GPU), or a combination of the two. However, this can have a negative impact on the PC's performance and battery life because these chips are not optimized to run AI efficiently. PC silicon vendors have now introduced AI-specific silicon to their SoCs called neural processing units (NPUs) that run these tasks more efficiently.

To date, IDC has identified three types of NPU-enabled AI PCs:
1. Hardware-enabled AI PCs include an NPU that offers less than 40 tera operations per second (TOPS) performance and typically enables specific AI features within apps to run locally. Qualcomm, Apple, AMD, and Intel are all shipping chips in this category today.

2. Next-generation AI PCs include an NPU with 40 to 60 TOPS performance and an AI-first operating system (OS) that enables persistent and pervasive AI capabilities in the OS and apps. Qualcomm, AMD, and Intel have all announced future chips for this category, with delivery expected to begin in 2024. Microsoft is expected to roll out major updates (and updated system specifications) to Windows 11 to take advantage of these high-TOPS NPUs.

3. Advanced AI PCs are PCs that offer more than 60 TOPS of NPU performance. While no silicon vendors have announced such products, IDC expects them to appear in the coming years. This IDC forecast does not include advanced AI PCs, but they will be incorporated into future updates.
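Taken together, the tiers amount to a simple threshold on NPU throughput. A minimal TypeScript sketch of that classification (the tier names and TOPS cutoffs come from the IDC list above; the function itself and the exact handling of the 60-TOPS boundary are illustrative assumptions):

```typescript
// IDC's NPU-based AI PC tiers, as described in the list above.
type AiPcTier = "hardware-enabled" | "next-generation" | "advanced" | "not an AI PC";

function classifyAiPc(npuTops: number): AiPcTier {
  if (npuTops <= 0) return "not an AI PC";       // no NPU at all
  if (npuTops < 40) return "hardware-enabled";   // <40 TOPS: specific in-app AI features run locally
  if (npuTops <= 60) return "next-generation";   // 40-60 TOPS: AI-first OS capabilities
  return "advanced";                             // >60 TOPS: no such silicon announced yet, per IDC
}

console.log(classifyAiPc(34)); // "hardware-enabled" (typical of chips shipping today)
console.log(classifyAiPc(45)); // "next-generation"
```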
Michael Dell, commenting on X: This is correct and might be underestimating it. AI PCs are coming fast and Dell is ready.
Movies

Avatar VFX Workers Vote To Unionize (hollywoodreporter.com) 28

Visual effects artists working on James Cameron's Avatar movies have voted to unionize in a National Labor Relations Board (NLRB) election. From the Hollywood Reporter: Of an eligible 88 workers at Walt Disney Studios subsidiary TCF US Productions 27, Inc. who assist with productions for Cameron's Lightstorm Entertainment, 57 voted to join the union and 19 voted against, while two ballots were void. These workers include creatures costume leads and environment artists as well as others in the stage, environments, render, post viz, sequence, turn over and kabuki departments. Management and labor now have a few days to file any objections, and if none are raised, the election results will be certified.

This bargaining unit doesn't include employees of VFX facility vendors, notably Weta FX, which is the lead VFX house on the Avatar films and employs the vast majority of the more than 1,000 artists who work on a typical Avatar movie. But unionizing the group represents a major inroad for the VFX industry labor movement, believes one VFX industry source who spoke with THR. "While insignificant as a number, this is the core team that answers to Jim Cameron," says the source. "They are not necessarily impressive in size, but in influence."

The workers first went public with their organizing bid in December, when they filed for a union election with the NLRB. At the time, participating workers said in public statements that they were aiming to gain benefits and pay comparable to their unionized peers' and to have greater input into working conditions. "Every one of my coworkers has dedicated so much time, creativity and passion to make these films a reality. So when you see them struggling to cover their health premiums, or being overworked because they took on multiple roles, or are just scraping by on their wages ... you cannot keep silent," said kabuki lead Jennifer Anaya.

Android

Google Is Rolling Out WebGPU For Next-Gen Gaming On Android 14

In a blog post today, Google announced that WebGPU is "now enabled by default in Chrome 121 on devices running Android 12 and greater powered by Qualcomm and ARM GPUs," with support for more Android devices rolling out gradually. Previously, the API was only available on Windows PCs that support Direct3D 12, macOS, and ChromeOS devices that support Vulkan.

Google says WebGPU "offers significant benefits such as greatly reduced JavaScript workload for the same graphics and more than three times improvements in machine learning model inferences." With lower-level access to a device's GPU, developers are able to enable richer and more complex visual content in web applications. This will be especially apparent with games, as you can see in this demo.
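
For a sense of what using the API looks like, here is a minimal compute example (a hedged sketch in TypeScript, assuming a WebGPU-enabled browser such as Chrome 121+ and the @webgpu/types typings; it is not taken from Google's post). It acquires an adapter and device, then doubles an array of floats in a WGSL compute shader:

```typescript
// Minimal WebGPU compute sketch: double an array of floats on the GPU.
async function doubleOnGpu(input: Float32Array): Promise<Float32Array> {
  const adapter = await navigator.gpu?.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available in this browser/device");
  const device = await adapter.requestDevice();

  const shader = device.createShaderModule({
    code: /* wgsl */ `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;
      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        if (id.x < arrayLength(&data)) { data[id.x] = data[id.x] * 2.0; }
      }`,
  });

  // Storage buffer the shader writes, plus a staging buffer we can map on the CPU.
  const storage = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(storage, 0, input);
  const readback = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
  });

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module: shader, entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: storage } }],
  });

  // Record the compute pass, copy the result to the staging buffer, and submit.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(storage, 0, readback, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  const result = new Float32Array(readback.getMappedRange().slice(0));
  readback.unmap();
  return result;
}
```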

Next up: WebGPU for Chrome on Linux.
Security

A Flaw In Millions of Apple, AMD, and Qualcomm GPUs Could Expose AI Data (wired.com) 22

An anonymous reader quotes a report from Wired: As more companies ramp up development of artificial intelligence systems, they are increasingly turning to graphics processing unit (GPU) chips for the computing power they need to run large language models (LLMs) and to crunch data quickly at massive scale. Between video game processing and AI, demand for GPUs has never been higher, and chipmakers are rushing to bolster supply. In new findings released today, though, researchers are highlighting a vulnerability in multiple brands and models of mainstream GPUs -- including Apple, Qualcomm, and AMD chips -- that could allow an attacker to steal large quantities of data from a GPU's memory. The silicon industry has spent years refining the security of central processing units, or CPUs, so they don't leak data in memory even when they are built to optimize for speed. However, since GPUs were designed for raw graphics processing power, they haven't been architected to the same degree with data privacy as a priority. As generative AI and other machine learning applications expand the uses of these chips, though, researchers from New York-based security firm Trail of Bits say that vulnerabilities in GPUs are an increasingly urgent concern. "There is a broader security concern about these GPUs not being as secure as they should be and leaking a significant amount of data," Heidy Khlaaf, Trail of Bits' engineering director for AI and machine learning assurance, tells WIRED. "We're looking at anywhere from 5 megabytes to 180 megabytes. In the CPU world, even a bit is too much to reveal."

To exploit the vulnerability, which the researchers call LeftoverLocals, attackers would need to already have established some amount of operating system access on a target's device. Modern computers and servers are specifically designed to silo data so multiple users can share the same processing resources without being able to access each others' data. But a LeftoverLocals attack breaks down these walls. Exploiting the vulnerability would allow a hacker to exfiltrate data they shouldn't be able to access from the local memory of vulnerable GPUs, exposing whatever data happens to be there for the taking, which could include queries and responses generated by LLMs as well as the weights driving the response. In their proof of concept, as seen in the GIF below, the researchers demonstrate an attack where a target -- shown on the left -- asks the open source LLM Llama.cpp to provide details about WIRED magazine. Within seconds, the attacker's device -- shown on the right -- collects the majority of the response provided by the LLM by carrying out a LeftoverLocals attack on vulnerable GPU memory. The attack program the researchers created uses less than 10 lines of code. [...] Though exploiting the vulnerability would require some amount of existing access to targets' devices, the potential implications are significant given that it is common for highly motivated attackers to carry out hacks by chaining multiple vulnerabilities together. Furthermore, establishing "initial access" to a device is already necessary for many common types of digital attacks.
The researchers did not find evidence that Nvidia, Intel, or Arm GPUs contain the LeftoverLocals vulnerability, but Apple, Qualcomm, and AMD all confirmed to WIRED that they are impacted. Here's what each of the affected companies had to say about the vulnerability, as reported by Wired:

Apple: An Apple spokesperson acknowledged LeftoverLocals and noted that the company shipped fixes with its latest M3 and A17 processors, which it unveiled at the end of 2023. This means that the vulnerability is seemingly still present in millions of existing iPhones, iPads, and MacBooks that depend on previous generations of Apple silicon. On January 10, the Trail of Bits researchers retested the vulnerability on a number of Apple devices. They found that Apple's M2 MacBook Air was still vulnerable, but the iPad Air 3rd generation A12 appeared to have been patched.
Qualcomm: A Qualcomm spokesperson told WIRED that the company is "in the process" of providing security updates to its customers, adding, "We encourage end users to apply security updates as they become available from their device makers." The Trail of Bits researchers say Qualcomm confirmed it has released firmware patches for the vulnerability.
AMD: AMD released a security advisory on Wednesday detailing its plans to offer fixes for LeftoverLocals. The protections will be "optional mitigations" released in March.
Google: For its part, Google says in a statement that it "is aware of this vulnerability impacting AMD, Apple, and Qualcomm GPUs. Google has released fixes for ChromeOS devices with impacted AMD and Qualcomm GPUs."
Wine

Wine 9.0 Released (9to5linux.com) 15

Version 9.0 of Wine, the free and open-source compatibility layer that lets you run Windows apps on Unix-like operating systems, has been released. "Highlights of Wine 9.0 include an experimental Wayland graphics driver with features like basic window management, support for multiple monitors, high-DPI scaling, relative motion events, as well as Vulkan support," reports 9to5Linux. From the report: The Vulkan driver has been updated to support Vulkan 1.3.272 and later, the PostScript driver has been reimplemented to work from Windows-format spool files and avoid any direct calls from the Unix side, and there's now a dark theme option on WinRT theming that can be enabled in WineCfg. Wine 9.0 also adds support for many more instructions to Direct3D 10 effects, implements the Windows Media Video (WMV) decoder DirectX Media Object (DMO), implements the DirectShow Audio Capture and DirectShow MPEG-1 Video Decoder filters, and adds support for video and system streams, as well as audio streams to the DirectShow MPEG-1 Stream Splitter filter.

Desktop integration has been improved in this release to allow users to close the desktop window in full-screen desktop mode by using the "Exit desktop" entry in the Start menu, as well as support for export URL/URI protocol associations as URL handlers to the Linux desktop. Audio support has been enhanced in Wine 9.0 with the implementation of several DirectMusic modules, DLS1 and DLS2 sound font loading, support for the SF2 format for compatibility with Linux standard MIDI sound fonts, Doppler shift support in DirectSound, Indeo IV50 Video for Windows decoder, and MIDI playback in dmsynth.

Among other noteworthy changes, Wine 9.0 brings loader support for ARM64X and ARM64EC modules, along with the ability to run existing Windows binaries on ARM64 systems and initial support for building Wine for the ARM64EC architecture. There's also a new 32-bit x86 emulation interface, a new WoW64 mode that supports running 32-bit apps on recent macOS versions that don't support 32-bit Unix processes, support for DirectInput action maps to improve compatibility with many old video games that map controller inputs to in-game actions, as well as Windows 10 as the default Windows version for new prefixes. Last but not least, the kernel has been updated to support address space layout randomization (ASLR) for modern PE binaries, to improve memory allocation performance through the Low Fragmentation Heap (LFH) implementation, and to support memory placeholders in the virtual memory allocator so apps can reserve virtual address space. Wine 9.0 also adds support for smart cards and for Diffie-Hellman keys in BCrypt, implements the Negotiate security package, adds support for network interface change notifications, and fixes many bugs.
For a full list of changes, check out the release notes. You can download Wine 9.0 from WineHQ.
China

China's Chip Imports Fell By a Record 15% Due To US Sanctions, Globally Weaker Demand (tomshardware.com) 49

According to Bloomberg, China's chip import value dropped significantly by 15.4% in 2023, from $413 billion to $349 billion. "Chip sales were down across the board in 2023 thanks to a weakening global economy, but China's chip imports indicate that its economy might be in trouble," reports Tom's Hardware. "The country's inability to import cutting-edge silicon is also certainly a factor in its decreasing chip imports." From the report: In 2022, the value of chip imports to China stood at $413 billion, and in 2023 the country only imported chips worth a total of $349 billion, a 15.4% decrease in value. That a drop happened at all isn't surprising; even TSMC, usually considered one of the most advanced fabbing corporations in the world, saw its sales decline by 4.5%. However, a 15.4% decrease in import value is much more significant, and indicates China has particular issues beyond globally weaker demand.

China's ongoing economic issues, such as deflation, could play a part. Deflation is when currency increases in value, the polar opposite of inflation, when currency loses value. As inflation has been a significant problem for countries such as the U.S. and UK, deflation might sound much more appealing, but economically it can be problematic. A deflationary economy encourages consumers not to spend, since money is increasing in value, meaning buyers can purchase more if they wait. In other words, deflation decreases demand for products like semiconductors.

However, shipment volume only decreased by 10.8% compared to the 15.4% decline in value, meaning the chips that China didn't buy in 2023 were particularly valuable. This likely reflects U.S. sanctions on China, which prevent it from buying top-end graphics cards, especially from Nvidia. The H100, H200, GH200, and the RTX 4090 are illegal to ship to China, and they're some of Nvidia's best GPUs. The moving target of U.S. sanctions could also make exporters and importers more cautious, as it's hard to tell whether more sanctions could suddenly upend plans and business deals.
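
A quick check of that inference from the two reported percentages: when value falls faster than volume, the average price of what is still being imported drops, which is what you would expect if the forgone chips were disproportionately the expensive ones.

```typescript
// Value fell 15.4%, volume fell 10.8%; what happened to the average price per imported chip?
const valueRatio = 1 - 0.154;   // 2023 import value as a share of 2022
const volumeRatio = 1 - 0.108;  // 2023 import volume as a share of 2022
const avgPriceChange = valueRatio / volumeRatio - 1;
console.log((avgPriceChange * 100).toFixed(1)); // ≈ -5.2%: the remaining mix skews cheaper
```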

Classic Games (Games)

Atari Will Release a Mini Edition of Its 1979 Atari 400 (Which Had An 8-Bit MOS 6502 CPU) (extremetech.com) 64

A 1979 Atari 8-bit system re-released in a tiny form factor? Yep.

Retro Games Ltd. is releasing a "half-sized" version of Atari's very first home computer, the Atari 400, "emulating the whole 8-bit Atari range, including the 400/800, XL and XE series, and the 5200 home console." ("In 1979 Atari brought the computer age home," remembers a video announcement, saying the new device represents "the iconic computer now reimagined.")

More info from ExtremeTech: For those of you unfamiliar with it, the Atari 400 and 800 were launched in 1979 as the company's first attempt at a home computer that just happened to double as an incredible game system. That's because, in addition to a faster variant of the excellent 8-bit MOS 6502 CPU found in the Apple II and Commodore PET, they also included Atari's dedicated ANTIC, GTIA, and POKEY coprocessors for graphics and sound, making the Atari 400 and 800 the first true gaming PCs...

If it's as good as the other Retro Games systems, the [new] 400Mini will count as another feather in the cap for Atari Interactive's resurgence following its excellent Atari50 compilation, reissued Atari 2600+ console, and acquisitions of key properties including Digital Eclipse, MobyGames, and AtariAge.

The 2024 version — launching in the U.K. March 28th — will boast high-definition HDMI output at 720p 50 or 60Hz, along with five USB ports. More details from Retro Games Ltd. Also included is THECXSTICK — a superb recreation of the classic Atari CX-40 joystick, with an additional seven seamlessly integrated function buttons. Play one of the included 25 classic Atari games, selected from a simple to use carousel, including all-time greats such as Berzerk, Missile Command, Lee, Millipede, Miner 2049er, M.U.L.E. and Star Raiders II, or play the games you own from USB stick. Plus save and resume your game at any time, or rewind by up to 30 seconds to help you finish those punishingly difficult classics!
Thanks to long-time Slashdot reader elfstones for sharing the article.
AI

Should Chatbots Teach Your Children? 94

"Sal Khan, the founder and CEO of Khan Academy, predicted last year that AI tutoring bots would soon revolutionize education," writes long-time Slashdot reader theodp: His vision of tutoring bots tapped into a decades-old Silicon Valley dream: automated teaching platforms that instantly customize lessons for each student. Proponents argue that developing such systems would help close achievement gaps in schools by delivering relevant, individualized instruction to children faster and more efficiently than human teachers ever could. But some education researchers say schools should be wary of the hype around AI-assisted instruction, warning that generative AI tools may turn out to have harmful or "degenerative" effects on student learning.
A ChatGPT-powered tutoring bot was tested last spring at the Khan Academy — and Bill Gates is enthusiastic about that bot and AI education in general (as well as the Khan Academy and AI-related school curriculums). From the original submission: Explaining his AI vision in November, Bill Gates wrote, "If a tutoring agent knows that a kid likes [Microsoft] Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor's lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today's text-based tutors."

The New York Times article notes that similar enthusiasm greeted automated teaching tools in the 1960s, but predictions that the mechanical and electronic "teaching machines" — which were programmed to ask students questions on topics like spelling or math — would revolutionize education didn't pan out.

So, is this time different?
Operating Systems

Biggest Linux Kernel Release Ever Welcomes bcachefs File System, Jettisons Itanium (theregister.com) 52

Linux kernel 6.7 has been released, including support for the new next-gen copy-on-write (COW) bcachefs file system. The Register reports: Linus Torvalds announced the release on Sunday, noting that it is "one of the largest kernel releases we've ever had." Among the bigger and more visible changes are a whole new file system, along with fresh functionality for several existing ones; improved graphics support for several vendors' hardware; and the removal of an entire CPU architecture. [...] The single biggest feature of 6.7 is the new bcachefs file system, which we examined in March 2022. As this is the first release of Linux to include the new file system, it definitely would be premature to trust any important data to it yet, but this is a welcome change. The executive summary is that bcachefs is a next-generation file system that, like Btrfs and ZFS, provides COW functionality. COW enables the almost instant creation of "snapshots" of all or part of a drive or volume, which enables the OS to make disk operations transactional: In other words, to provide an "undo" function for complex sets of disk write operations.
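
To make the snapshot-as-undo idea concrete, here is a toy copy-on-write sketch (plain TypeScript, purely illustrative and not bcachefs code): a snapshot is just a reference to the current block map, and later writes build a new map instead of mutating the old one, so the snapshot keeps seeing the pre-write state.

```typescript
// Toy copy-on-write volume (illustrative only, not bcachefs code).
// Snapshots are O(1): they just capture a reference to the current, never-mutated block map.
type BlockMap = ReadonlyMap<number, string>;

class CowVolume {
  private blocks: BlockMap = new Map<number, string>();

  write(blockNo: number, data: string): void {
    const next = new Map(this.blocks); // copy-on-write: the old map is left untouched
    next.set(blockNo, data);
    this.blocks = next;
  }

  read(blockNo: number): string | undefined {
    return this.blocks.get(blockNo);
  }

  snapshot(): BlockMap {
    return this.blocks; // instant: no block data is copied
  }

  rollback(snap: BlockMap): void {
    this.blocks = snap; // the "undo": discard everything written since the snapshot
  }
}

const vol = new CowVolume();
vol.write(0, "superblock v1");
const snap = vol.snapshot();
vol.write(0, "superblock v2 (bad write)");
vol.rollback(snap);
console.log(vol.read(0)); // "superblock v1"
```

A real COW file system shares subtrees of its on-disk structures rather than copying a whole map per write, but the user-visible property is the same: snapshots are effectively free, and rolling back is a pointer swap.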

Having a COW file system on Linux isn't new. The existing next-gen file system in the kernel, Btrfs, also supports COW snapshots. The version in 6.7 sees several refinements. It inherits a feature implemented for Steam OS: Two Btrfs file systems with the same ID can be mounted simultaneously, for failover scenarios. It also has improved quota support and a new raid_stripe_tree that improves handling of arrays of dissimilar drives. Btrfs remains somewhat controversial. Red Hat banished it from RHEL years ago (although Oracle Linux still offers it) but SUSE's distros depend heavily upon it. It will be interesting to see how quickly SUSE's Snapper tool gains support for bcachefs: This new COW contender may reveal unquestioned assumptions built into the code. Since Snapper is also used in several non-SUSE distros, including Spiral Linux, Garuda, and siduction, they're tied to Btrfs as well.

The other widely used FOSS next-gen file system, OpenZFS, also supports COW, but licensing conflicts prevent ZFS being fully integrated into the Linux kernel. So although multiple distros (such as NixOS, Proxmox, TrueNAS Scale, Ubuntu, and Void Linux) support ZFS, it must remain separate and distinct. This results in limitations, such as the ZFS Advanced Read Cache being separate from Linux's page cache. Bcachefs is all-GPL and doesn't suffer from such limitations. It aims to supply the important features of ZFS, such as integrated volume management, while being as fast as ext4 or XFS, and also surpass Btrfs in both performance and, crucially, reliability.
A full list of changes in this release can be viewed via KernelNewbies.
Television

LG Unveils the World's First Wireless Transparent OLED TV (engadget.com) 26

At CES, LG on Monday unveiled the OLED T, or as the firm describes it, "the first wireless transparent OLED TV," with 4K resolution and LG's wireless transmission tech for audio and video. Engadget: The unit also features a contrast screen that rolls down into a box at its base and can be raised or lowered at the press of a button. The OLED T is powered by LG's new Alpha 11 AI processor with four times the performance of the previous-gen chip. The extra power offers 70 percent greater graphics performance and 30 percent faster processing speeds, according to the company.

The OLED T model works with the company's Zero Connect Box, which debuted on last year's M3 OLED and sends video and audio wirelessly to the TV. You connect all of your streaming devices and game consoles to that box rather than the television. The OLED T's base houses down-firing speakers, which sound surprisingly good, as well as some other components. There are backlights as well, but you can turn those off for a fully transparent look. LG says the TV will come in standalone, against-the-wall, and wall-mounted options.
No word on when the TV will go on sale, or how much it would cost.
Virtualization

How 'Digital Twin' Technology Is Revolutionizing the Auto Industry (motortrend.com) 37

"Digital twin technology is one of the most significant disruptors of global manufacturing seen this century," argues Motor Trend, "and the automobile industry is embracing it in a big way." Roughly three-quarters of auto manufacturers are using digital twins as part of their vehicle development process, evolving not only how they design and develop new cars but also the way they monitor them, fix them, and even build them...

Nvidia, best known for its consumer graphics cards, also has a digital twin solution, called Omniverse, which manufacturers such as Mercedes-Benz are using to design their manufacturing processes. "Their factory planners now have every single element in the factory that they can then put in that virtual digital twin first, lay it all out, and then operate it," Danny Shapiro, VP of automotive at Nvidia said. At that point, those planners can run the entire manufacturing process virtually, ensuring every conveyor feeds the next step in the process, identifying and addressing factory floor headaches long before production begins...

Software developers can run their solutions within digital twins. That includes the code at the lowest level, basic stuff that controls ignition timing within the engine for example, all the way up to the highest level, like touchscreens responding to user inputs. "We're not just simulating the operation outside the car, but the user experience," Nvidia's Shapiro said. "We can simulate and basically run the real software that would be running in that car and display it on the screens." By bringing all these systems together virtually, developers can find and solve issues earlier, preventing costly development delays or, worse yet, buggy releases...

Using unique identifiers, manufacturers can effectively create internal digital copies of vehicles that have been produced. Those copies can be used for ongoing tests and verifications, helping to anticipate things like required maintenance or susceptibility to part failures. By using telematics, in-car services that remotely communicate a car's status back to the manufacturer in real-time, these digital twins can be updated to match the real thing. "By monitoring tire health, tire grip, vehicle weight distribution, and other critical parameters, engineers can anticipate potential problems and schedule maintenance proactively, reducing downtime and extending the vehicle's lifespan," Tactile Mobility's Tzur said.

Graphics

Nvidia Slowed RTX 4090 GPU By 11 Percent, To Make It 100 Percent Legal For Export In China (theregister.com) 22

Nvidia has throttled the performance of its GeForce RTX 4090 GPU by roughly 11%, allowing it to comply with U.S. sanctions and be sold in China. The Register reports: Dubbed the RTX 4090D, the device appeared on Nvidia's Chinese-market website Thursday and boasts performance roughly 10.94 percent lower than the model Nvidia announced in late 2022. This shows up in the form of a lower core count: 14,592 CUDA cores versus 16,384 on versions sold outside of China. Nvidia also told The Register today the card's tensor core count has also been cut down by a similar margin, from 512 to 456 on the 4090D variant. Beyond this the card is largely unchanged, with peak clock speeds rated at 2.52 GHz, 24 GB of GDDR6X memory, and a fat 384-bit memory bus.

As we reported at the time, the RTX 4090 was the only consumer graphics card barred from sale in the Middle Kingdom following the October publication of the Biden Administration's most restrictive set of export controls. The problem was the card narrowly exceeded the performance limits on consumer cards with a total processing performance (TPP) of more than 4,800. That number is calculated by doubling the max number of dense tera-operations per second -- floating point or integer -- and multiplying by the bit length of the operation.

The original 4090 clocked a TPP of 5,285, which meant Nvidia needed a US government-issued license to sell the popular gaming card in China. Note, consumer cards aren't subject to the performance density metric that restricts the sale of much less powerful datacenter cards like the Nvidia L4. As it happens, cutting performance by 10.94 percent is enough to bring the card under the metrics that trigger the requirement for the USA's Bureau of Industry and Security (BIS) to consider an export license.
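
A worked example of that TPP arithmetic (the dense-TOPS input for the RTX 4090 is an assumed figure chosen for illustration; the 4,800 licensing threshold and the 5,285 result come from the article):

```typescript
// BIS total processing performance (TPP): 2 x (dense tera-ops per second) x (bit length of the operation).
const tpp = (denseTops: number, bitLength: number): number => 2 * denseTops * bitLength;

const CONSUMER_LICENSE_THRESHOLD = 4800; // consumer cards above this need an export license

// Assumed for illustration: ~330 dense 8-bit TOPS lands near the article's 5,285 TPP figure.
const rtx4090Tpp = tpp(330.3, 8);               // ≈ 5,285 -> license required
const rtx4090dTpp = rtx4090Tpp * (1 - 0.1094);  // ~11% slower -> ≈ 4,707, under the threshold

console.log(rtx4090Tpp > CONSUMER_LICENSE_THRESHOLD);  // true
console.log(rtx4090dTpp > CONSUMER_LICENSE_THRESHOLD); // false
```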
Nvidia notes that the 4090D can be overclocked by end users, effectively allowing customers to recover some performance lost by the lower core count. "In 4K gaming with ray tracing and deep-learning super sampling (DLSS), the GeForce RTX 4090D is about five percent slower than the GeForce RTX 4090 and it operates like every other GeForce GPU, which can be overclocked by end users," an Nvidia spokesperson said in an email.
Desktops (Apple)

Inside Apple's Massive Push To Transform the Mac Into a Gaming Paradise (inverse.com) 144

Apple is reinvesting in gaming with advanced Mac hardware, improvements to Apple silicon, and gaming-focused software, aiming not to repeat its past mistakes and capture a larger share of the gaming market. In an article for Inverse, Raymond Wong provides an in-depth overview of this endeavor, including commentary from Apple's marketing managers Gordon Keppel, Leland Martin, and Doug Brooks. Here's an excerpt from the report: Gaming on the Mac in the 1990s until 2020, when Apple made a big shift to its own custom silicon, could be boiled down to this: Apple was in a hardware arms race with the PC that it couldn't win. Mac gamers were hopeful that the switch from PowerPC to Intel CPUs starting in 2005 would turn things around, but it didn't because by then, GPUs started becoming the more important hardware component for running 3D games, and the Mac's support for third-party GPUs could only be described as lackluster. Fast forward to 2023, and Apple has a renewed interest in gaming on the Mac, the likes of which it hasn't shown in the last 25 years. "Apple silicon has changed all that," Keppel tells Inverse. "Now, every Mac that ships with Apple silicon can play AAA games pretty fantastically. Apple silicon has been transformative of our mainstream systems that got tremendous boosts in graphics with M1, M2, and now with M3."

Ask any gadget reviewer (including myself) and they will tell you Keppel isn't just drinking the Kool-Aid because Apple pays him to. Macs with Apple silicon really are performant computers that can play some of the latest PC and console games. In three generations of desktop-class chip design, Apple has created a platform with "tens of millions of Apple silicon Macs," according to Keppel. That's tens of millions of Macs with monstrous CPU and GPU capabilities for running graphics-intensive games. Apple's upgrades to the GPUs on its silicon are especially impressive. The latest Apple silicon, the M3 family of chips, supports hardware-accelerated ray-tracing and mesh shading, features that only a few years ago didn't seem like they would ever be a priority, let alone ones that are built into the entire spectrum of MacBook Pros.

The "magic" of Apple silicon isn't just performance, says Leland Martin, an Apple software marketing manager. Whereas Apple's fallout with game developers on the Mac previously came down to not supporting specific computer hardware, Martin says Apple silicon started fresh with a unified hardware platform that not only makes it easier for developers to create Mac games for, but will allow for those games to run on other Apple devices. "If you look at the Mac lineup just a few years ago, there was a mix of both integrated and discrete GPUs," Martin says. "That can add complexity when you're developing games. Because you have multiple different hardware permutations to consider. Today, we've effectively eliminated that completely with Apple silicon, creating a unified gaming platform now across iPhone, iPad, and Mac. Once a game is designed for one platform, it's a straightforward process to bring it to the other two. We're seeing this play out with games like Resident Evil Village that launched first [on Mac] followed by iPhone and iPad."

"Gaming was fundamentally part of the Apple silicon design," Doug Brooks, also on the Mac product marketing team, tells Inverse. "Before a chip even exists, gaming is fundamentally incorporated during those early planning stages and then throughout development. I think, big picture, when we design our chips, we really look at building balanced systems that provide great CPU, GPU, and memory performance. Of course, [games] need powerful GPUs, but they need all of those features, and our chips are designed to deliver on that goal. If you look at the chips that go in the latest consoles, they look a lot like that with integrated CPU, GPU, and memory." [...] "One thing we're excited about with this most recent launch of the M3 family of chips is that we're able to bring these powerful new technologies, Dynamic Caching, as well as ray-tracing and mesh shading across our entire line of chips," Brooks adds. "We didn't start at the high end and trickle them down over time. We really wanted to bring that to as many customers as possible."

Intel

12VO Power Standard Appears To Be Gaining Steam, Will Reduce PC Cables and Costs (tomshardware.com) 79

An anonymous reader quotes a report from Tom's Hardware: The 12VO power standard (PDF), developed by Intel, is designed to reduce the number of power cables needed to power a modern PC, ultimately reducing cost. While industry uptake of the standard has been slow, a new slew of products from MSI indicates that 12VO is gaining traction.

MSI is gearing up with two 12VO-compliant motherboards, covering both Intel and AMD platforms, and a 12VO power supply that it's releasing simultaneously: The Pro B650 12VO WiFi, Pro H610M 12VO, and MSI 12VO PSU power supply are all 'coming soon,' which presumably means they'll officially launch at CES 2024. HardwareLux got a pretty good look at MSI's offerings during its EHA (European Hardware Awards) tech tour, including the 'Project Zero' we covered earlier. One of the noticeable changes is the absence of a 24-pin ATX connector, as the ATX12VO connectors use only ten-pin connectors. The publication also saw a 12VO-compliant FSP power supply in a compact system with a thick graphics card.

A couple of years ago, we reported on FSP's 650-watt and 750-watt SFX 12VO power supplies. Apart from that, there is a 1x 6-pin ATX12VO connector, termed an 'extra board connector' in the manual, and a 1x 8-pin 12V power connector for the CPU. There are two smaller 4-pin connectors that will provide the 5V power needed for SATA drives. It is likely each of these connectors provides power to two SATA-based drives. Intel proposed the ATX12VO standard several years ago, but adoption has been slow until now. This standard is designed to provide 12V exclusively, completely removing the direct 3.3V and 5V supplies. The success of the new standard will depend on the wide availability of compatible motherboards and power supplies.

AI

Will AI Be a Disaster for the Climate? (theguardian.com) 100

"What would you like OpenAI to build/fix in 2024?" the company's CEO asked on X this weekend.

But "Amid all the hysteria about ChatGPT and co, one thing is being missed," argues the Observer — "how energy-intensive the technology is." The current moral panic also means that a really important question is missing from public discourse: what would a world suffused with this technology do to the planet? Which is worrying because its environmental impact will, at best, be significant and, at worst, could be really problematic.

How come? Basically, because AI requires staggering amounts of computing power. And since computers require electricity, and the necessary GPUs (graphics processing units) run very hot (and therefore need cooling), the technology consumes electricity at a colossal rate. Which, in turn, means CO2 emissions on a large scale — about which the industry is extraordinarily coy, while simultaneously boasting about using offsets and other wheezes to mime carbon neutrality.

The implication is stark: the realisation of the industry's dream of "AI everywhere" (as Google's boss once put it) would bring about a world dependent on a technology that is not only flaky but also has a formidable — and growing — environmental footprint. Shouldn't we be paying more attention to this?

Thanks to long-time Slashdot reader mspohr for sharing the article.
NASA

NASA's Tech Demo Streams First Video From Deep Space Via Laser 24

NASA has successfully beamed an ultra-high definition streaming video from a record-setting 19 million miles away. The Deep Space Optical Communications experiment, as it is called, is part of a NASA technology demonstration aimed at streaming HD video from deep space to enable future human missions beyond Earth orbit. From a NASA press release: The [15-second test] video signal took 101 seconds to reach Earth, sent at the system's maximum bit rate of 267 megabits per second (Mbps). Capable of sending and receiving near-infrared signals, the instrument beamed an encoded near-infrared laser to the Hale Telescope at Caltech's Palomar Observatory in San Diego County, California, where it was downloaded. Each frame from the looping video was then sent "live" to NASA's Jet Propulsion Laboratory in Southern California, where the video was played in real time.
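
Two quick sanity checks on those numbers, in a short sketch: the roughly 101 seconds is essentially the one-way light travel time over 19 million miles, and 15 seconds at 267 Mbps works out to about half a gigabyte of video (assuming the downlink ran at that rate for the clip's full duration).

```typescript
// Light travel time over the reported distance, and the rough size of the test clip.
const DISTANCE_MILES = 19_000_000;
const LIGHT_SPEED_MILES_PER_SEC = 186_282;
console.log((DISTANCE_MILES / LIGHT_SPEED_MILES_PER_SEC).toFixed(0)); // ≈ 102 s (NASA reported 101 s)

const CLIP_SECONDS = 15;
const BIT_RATE_MBPS = 267;
const clipMegabytes = (CLIP_SECONDS * BIT_RATE_MBPS) / 8;
console.log(clipMegabytes.toFixed(0)); // ≈ 501 MB for the 15-second test video
```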

The laser communications demo, which launched with NASA's Psyche mission on Oct. 13, is designed to transmit data from deep space at rates 10 to 100 times greater than the state-of-the-art radio frequency systems used by deep space missions today. As Psyche travels to the main asteroid belt between Mars and Jupiter, the technology demonstration will send high-data-rate signals as far out as the Red Planet's greatest distance from Earth. In doing so, it paves the way for higher-data-rate communications capable of sending complex scientific information, high-definition imagery, and video in support of humanity's next giant leap: sending humans to Mars.

Uploaded before launch, the short ultra-high definition video features an orange tabby cat named Taters, the pet of a JPL employee, chasing a laser pointer, with overlaid graphics. The graphics illustrate several features from the tech demo, such as Psyche's orbital path, Palomar's telescope dome, and technical information about the laser and its data bit rate. Taters' heart rate, color, and breed are also on display. There's also a historical link: Beginning in 1928, a small statue of the popular cartoon character Felix the Cat was featured in television test broadcast transmissions. Today, cat videos and memes are some of the most popular content online.
"Despite transmitting from millions of miles away, it was able to send the video faster than most broadband internet connections," said Ryan Rogalin, the project's receiver electronics lead at JPL. "In fact, after receiving the video at Palomar, it was sent to JPL over the internet, and that connection was slower than the signal coming from deep space. JPL's DesignLab did an amazing job helping us showcase this technology -- everyone loves Taters."
