Music

After KISS's Final Show, They'll Become Digital Avatars From Industrial Light & Magic (go.com) 93

Gene Simmons is 74 years old. But as the singer for the classic rock band KISS left the stage after their final show, USA Today reports, there was a surprise. In the most on-brand KISS move even by KISS standards, the quartet had barely disappeared in the blizzard of smoke and confetti that accompanied the set-closing "Rock and Roll All Nite" when a message blasted on the video screens: "A new KISS era starts now."

Digital avatars of the band followed, playing their anthem, "God Gave Rock and Roll To You."

ABC News reports: The avatars were created by George Lucas' special-effects company, Industrial Light & Magic, in partnership with Pophouse Entertainment Group, the latter of which was co-founded by ABBA's Björn Ulvaeus. The two companies recently teamed up for the "ABBA Voyage" show in London, in which fans could attend a full concert by the Swedish band — as performed by their digital avatars. Per Sundin, CEO of Pophouse Entertainment, says this new technology allows Kiss to continue their legacy for "eternity." He says the band wasn't on stage during the virtual performance because "that's the key thing" about the future-seeking technology. "Kiss could have a concert in three cities in the same night across three different continents. That's what you could do with this."

In order to create their digital avatars, which are depicted as a kind of superhero version of the band, Kiss performed in motion capture suits.

Experimentation with this kind of technology has become increasingly common in certain sections of the music industry. In October K-pop star Mark Tuan partnered with Soul Machines to create an autonomously animated "digital twin" called "Digital Mark." In doing so, Tuan became the first celebrity to attach their likeness to OpenAI's GPT integration, artificial intelligence technology that allows fans to engage in one-on-one conversations with Tuan's avatar. Aespa, the K-pop girl group, frequently performs alongside its digital avatars — the quartet is meant to be viewed as an octet with digital twins. Another girl group, Eternity, is made up entirely of virtual characters — no humans necessary.

Kiss frontman Paul Stanley told ABC News that "The band deserves to live on because the band is bigger than we are."
Science

Physicists May Have Found a Hard Limit on The Performance of Large Quantum Computers (sciencealert.com) 71

For circuit-based quantum computations, the achievable circuit complexity is limited by the quality of timekeeping. That's according to a new analysis published in the journal Physical Review Letters exploring "the effect of imperfect timekeeping on controlled quantum dynamics."

An announcement from the Vienna University of Technology explains its significance. "The research team was able to show that since no clock has an infinite amount of energy available (or generates an infinite amount of entropy), it can never have perfect resolution and perfect precision at the same time. This sets fundamental limits to the possibilities of quantum computers."

ScienceAlert writes: While the issue isn't exactly pressing, our ability to grow systems based on quantum operations from backroom prototypes into practical number-crunching behemoths will depend on how well we can reliably dissect the days into ever finer portions. This is a feat the researchers say will become increasingly challenging...

"Time measurement always has to do with entropy," says senior author Marcus Huber, a systems engineer who leads a research group in the intersection of Quantum Information and Quantum Thermodynamics at the Vienna University of Technology. In their recently published theorem, Huber and his team lay out the logic that connects entropy as a thermodynamic phenomenon with resolution, demonstrating that unless you've got infinite energy at your fingertips, your fast-ticking clock will eventually run into precision problems. Or as the study's first author, theoretical physicist Florian Meier puts it, "That means: Either the clock works quickly or it works precisely — both are not possible at the same time...."

[F]or technologies like quantum computing, which rely on the temperamental nature of particles hovering on the edge of existence, timing is everything. This isn't a big problem when the number of particles is small, but as they increase in number, the risk that any one of them could be knocked out of its quantum critical state rises, leaving less and less time to carry out the necessary computations... This appears to be the first time researchers have looked at the physics of timekeeping itself as a potential obstacle. "Currently, the accuracy of quantum computers is still limited by other factors, for example the precision of the components used or electromagnetic fields," says Huber. "But our calculations also show that today we are not far from the regime in which the fundamental limits of time measurement play the decisive role."

Hardware

Apple's Chip Lab: Now 15 Years Old With Thousands of Engineers (cnbc.com) 68

"As of this year, all new Mac computers are powered by Apple's own silicon, ending the company's 15-plus years of reliance on Intel," according to a new report from CNBC.

"Apple's silicon team has grown to thousands of engineers working across labs all over the world, including in Israel, Germany, Austria, the U.K. and Japan. Within the U.S., the company has facilities in Silicon Valley, San Diego and Austin, Texas..." The latest A17 Pro announced in the iPhone 15 Pro and Pro Max in September enables major leaps in features like computational photography and advanced rendering for gaming. "It was actually the biggest redesign in GPU architecture and Apple silicon history," said Kaiann Drance, who leads marketing for the iPhone. "We have hardware accelerated ray tracing for the first time. And we have mesh shading acceleration, which allows game developers to create some really stunning visual effects." That's led to the development of iPhone-native versions from Ubisoft's Assassin's Creed Mirage, The Division Resurgence and Capcom's Resident Evil 4.

Apple says the A17 Pro is the first 3-nanometer chip to ship at high volume. "The reason we use 3-nanometer is it gives us the ability to pack more transistors in a given dimension. That is important for the product and much better power efficiency," said the head of Apple silicon, Johny Srouji. "Even though we're not a chip company, we are leading the industry for a reason." Apple's leap to 3-nanometer continued with the M3 chips for Mac computers, announced in October. Apple says the M3 enables features like 22-hour battery life and, similar to the A17 Pro, boosted graphics performance...

In a major shift for the semiconductor industry, Apple turned away from using Intel's PC processors in 2020, switching to its own M1 chip inside the MacBook Air and other Macs. "It was almost like the laws of physics had changed," said John Ternus, Apple's senior vice president of hardware engineering. "All of a sudden we could build a MacBook Air that's incredibly thin and light, has no fan, 18 hours of battery life, and outperformed the MacBook Pro that we had just been shipping." He said the newest MacBook Pro with Apple's most advanced chip, the M3 Max, "is 11 times faster than the fastest Intel MacBook Pro we were making. And we were shipping that just two years ago." Intel processors are based on x86 architecture, the traditional choice for PC makers, with a lot of software developed for it. Apple bases its processors on rival Arm architecture, known for using less power and helping laptop batteries last longer.

Apple's M1 in 2020 was a proving point for Arm-based processors in high-end computers, with other big names like Qualcomm — and reportedly AMD and Nvidia — also developing Arm-based PC processors. In September, Apple extended its deal with Arm through at least 2040.

Since Apple first debuted its homegrown semiconductors in 2010's iPhone 4, other companies have pursued their own custom semiconductor development, including Amazon, Google, Microsoft and Tesla.

CNBC notes that Apple is also reportedly working on its own Wi-Fi and Bluetooth chip. Apple's Srouji wouldn't comment on "future technologies and products" but told CNBC "we care about cellular, and we have teams enabling that."
Programming

Java Tries a New Way to Use Multithreading: Structured Concurrency (infoworld.com) 96

"Structured concurrency is a new way to use multithreading in Java," reports InfoWorld.

"It allows developers to think about work in logical groups while taking advantage of both traditional and virtual threads." Available in preview in Java 21, structured concurrency is a key aspect of Java's future, so now is a good time to start working with it... Java's thread model makes it a strong contender among concurrent languages, but multithreading has always been inherently tricky. Structured concurrency allows you to use multiple threads with structured programming syntax. In essence, it provides a way to write concurrent software using familiar program flows and constructs. This lets developers focus on the business at hand, instead of the orchestration of threading.

As the JEP for structured concurrency says, "If a task splits into concurrent subtasks then they all return to the same place, namely the task's code block." Virtual threads, now an official feature of Java, create the possibility of cheaply spawning threads to gain concurrent performance. Structured concurrency provides the simple syntax to do so. As a result, Java now has a unique and highly-optimized threading system that is also easy to understand...

Between virtual threads and structured concurrency, Java developers have a compelling new mechanism for breaking up almost any code into concurrent tasks without much overhead... Any time you encounter a bottleneck where many tasks are occurring, you can easily hand them all off to the virtual thread engine, which will find the best way to orchestrate them. The new thread model with structured concurrency also makes it easy to customize and fine-tune this behavior. It will be very interesting to see how developers use these new concurrency capabilities in our applications, frameworks, and servers going forward.

It involves a new class, StructuredTaskScope, located in the java.util.concurrent package. (InfoWorld points out that "you'll need to use --enable-preview and --source 21 or --source 22 to enable structured concurrency.")
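Here is a minimal sketch of the pattern. The weather-station task and its simulated latency are invented for illustration, but the scope, fork, join, and throwIfFailed calls are the Java 21 preview API:

    import java.util.concurrent.StructuredTaskScope;

    public class WeatherDemo {
        record Reading(String station, double tempC) {}

        // Stand-in for a blocking network call.
        static Reading read(String station) throws InterruptedException {
            Thread.sleep(100);
            return new Reading(station, 21.5);
        }

        public static void main(String[] args) throws Exception {
            // Both subtasks run in the scope; if one fails, the other is cancelled.
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                var east = scope.fork(() -> read("east"));
                var west = scope.fork(() -> read("west"));
                scope.join();          // wait for both subtasks to complete
                scope.throwIfFailed(); // propagate the first failure, if any
                System.out.println(east.get().tempC() + " / " + west.get().tempC());
            } // leaving the block guarantees both subtasks are done
        }
    }

Both forks return to the same code block, and leaving the try-with-resources block guarantees the subtasks have finished, which is exactly the structure the JEP describes.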

Their reporter shared an example on GitHub, and there are more examples in the Java 21 documentation. "The structured concurrency documentation includes an example of collecting subtask results as they succeed or fail and then returning the results."
AI

1960s Chatbot ELIZA Beat OpenAI's GPT-3.5 In a Recent Turing Test Study (arstechnica.com) 57

An anonymous reader quotes a report from Ars Technica: In a preprint research paper titled "Does GPT-4 Pass the Turing Test?", two researchers from UC San Diego pitted OpenAI's GPT-4 AI language model against human participants, GPT-3.5, and ELIZA to see which could trick participants into thinking it was human with the greatest success. But along the way, the study, which has not been peer-reviewed, found that human participants correctly identified other humans in only 63 percent of the interactions -- and that a 1960s computer program surpassed the AI model that powers the free version of ChatGPT. Even with limitations and caveats, which we'll cover below, the paper presents a thought-provoking comparison between AI model approaches and raises further questions about using the Turing test to evaluate AI model performance.

In the recent study, listed on arXiv at the end of October, UC San Diego researchers Cameron Jones (a PhD student in Cognitive Science) and Benjamin Bergen (a professor in the university's Department of Cognitive Science) set up a website called turingtest.live, where they hosted a two-player implementation of the Turing test over the Internet with the goal of seeing how well GPT-4, when prompted different ways, could convince people it was human. Through the site, human interrogators interacted with various "AI witnesses" representing either other humans or AI models that included the aforementioned GPT-4, GPT-3.5, and ELIZA, a rules-based conversational program from the 1960s. "The two participants in human matches were randomly assigned to the interrogator and witness roles," write the researchers. "Witnesses were instructed to convince the interrogator that they were human. Players matched with AI models were always interrogators."

The experiment involved 652 participants who completed a total of 1,810 sessions, of which 1,405 games were analyzed after excluding certain scenarios like repeated AI games (leading to the expectation of AI model interactions when other humans weren't online) or personal acquaintance between participants and witnesses, who were sometimes sitting in the same room. Surprisingly, ELIZA, developed in the mid-1960s by computer scientist Joseph Weizenbaum at MIT, scored relatively well during the study, achieving a success rate of 27 percent. GPT-3.5, depending on the prompt, scored a 14 percent success rate, below ELIZA. GPT-4 achieved a success rate of 41 percent, second only to actual humans.
"Ultimately, the study's authors concluded that GPT-4 does not meet the success criteria of the Turing test, reaching neither a 50 percent success rate (greater than a 50/50 chance) nor surpassing the success rate of human participants," reports Ars. "The researchers speculate that with the right prompt design, GPT-4 or similar models might eventually pass the Turing test. However, the challenge lies in crafting a prompt that mimics the subtlety of human conversation styles. And like GPT-3.5, GPT-4 has also been conditioned not to present itself as human."

"It seems very likely that much more effective prompts exist, and therefore that our results underestimate GPT-4's potential performance at the Turing Test," the authors write.
Books

Merriam-Webster's Word For 2023 Is 'Authentic' (apnews.com) 45

On Monday, Merriam-Webster announced its word of the year is "authentic -- the term for something we're thinking about, writing about, aspiring to, and judging more than ever." The Associated Press reports: Authentic cuisine. Authentic voice. Authentic self. Authenticity as artifice. Lookups for the word are routinely heavy on the dictionary company's site but were boosted to new heights throughout the year, editor at large Peter Sokolowski told The Associated Press in an exclusive interview. "We see in 2023 a kind of crisis of authenticity," he said ahead of Monday's announcement of this year's word. "What we realize is that when we question authenticity, we value it even more."

Sokolowski and his team don't delve into the reasons people head for dictionaries and websites in search of specific words. Rather, they chase the data on lookup spikes and world events that correlate. This time around, there was no particularly huge boost at any given time but a constancy to the increased interest in "authentic." [...] "Can we trust whether a student wrote this paper? Can we trust whether a politician made this statement? We don't always trust what we see anymore," Sokolowski said. "We sometimes don't believe our own eyes or our own ears. We are now recognizing that authenticity is a performance itself."

There's "not false or imitation: real, actual," as in an authentic cockney accent. There's "true to one's own personality, spirit or character." There's "worthy of acceptance or belief as conforming to or based on fact." There's "made or done the same way as an original." And, perhaps the most telling, there's "conforming to an original so as to reproduce essential features."

Portables (Apple)

Fanless AirJet Cooler Experiment Boosts MacBook Air To Match MacBook Pro's Performance (tomshardware.com) 31

Anton Shilov reports via Tom's Hardware: Engineers from Frore Systems have integrated the company's innovative solid-state AirJet cooling system, which provides impressive cooling capabilities despite a lack of moving parts, into an M2-based Apple MacBook Air. With proper cooling, the relatively inexpensive laptop matched the performance of a more expensive MacBook Pro based on the same processor. The lack of a fan is probably one of the main advantages of Apple's MacBook Air over its more performant siblings, but it also puts the laptop at a disadvantage: fanless cooling has no moving parts (which is a plus), but it cannot properly cool down Apple's M1 or M2 processor under high loads, which is why a 13-inch MacBook Air powered by the M1 or M2 system-on-chip is slower than the 13-inch MacBook Pro based on the same SoC. Making a MacBook Air run as fast as a 13-inch MacBook Pro is now possible, however. A video posted to YouTube by PC World shows how the AirJet system works, and Frore has also released a demo showing off the strength of the AirJet technology.
Hardware

Amazon Updates Homegrown Chips, Even as It Grows Nvidia Ties (bloomberg.com) 3

Amazon's cloud-computing unit announced updated versions of its in-house computer chips while also forging closer ties with Nvidia -- dual efforts designed to ensure it can get enough supplies of crucial data-center processors. From a report: New homegrown Graviton4 chips will have as much as 30% better performance than their predecessors, Amazon Web Services said at its annual re:Invent conference in Las Vegas. Computers using the processors will start coming online in the coming months.

The company also unveiled Trainium2, an updated version of a processor designed for artificial intelligence systems. It will begin powering new services starting next year, Amazon said. That chip provides an alternative to so-called AI accelerators sold by Nvidia -- processors that have been vital to the build-out of artificial intelligence services. But Amazon also touted "an expansion of its partnership" with Nvidia, whose chief executive officer, Jensen Huang, joined AWS counterpart Adam Selipsky on stage. AWS will be the first big user of an updated version of that company's Grace Hopper Superchip, and it will be one of the data-center companies hosting Nvidia's DGX Cloud service.

Power

US Energy Department Funds Next-Gen Semiconductor Projects to Improve Power Grids (energy.gov) 20

America's long-standing Advanced Research Projects Agency (or ARPA, today's DARPA) developed the foundational technologies for the internet.

This week ARPA-E, its counterpart at the Department of Energy, announced $42 million for projects enabling a "more secure and reliable" energy grid, "allowing it to utilize more solar, wind, and other clean energy." Specifically, it funded 15 projects across 11 states to improve the reliability, resiliency, and flexibility of the grid "through the next-generation semiconductor technologies." Streamlining the coordinated operation of electricity supply and demand will improve operational efficiency, prevent unforeseen outages, allow faster recovery, minimize the impacts of natural disasters and climate-change fueled extreme weather events, and reduce grid operating costs and carbon intensity.
Some highlights:
  • The Georgia Institute of Technology will develop a novel semiconductor switching device to improve grid control, resilience, and reliability.
  • Michigan's Great Lakes Crystal Technologies will develop a diamond semiconductor transistor to support the control infrastructure needed for an energy grid with more distributed generation sources and more variable loads.
  • Lawrence Livermore National Laboratory will develop an optically-controlled semiconductor transistor to enable future grid control systems to accommodate higher voltage and current than state-of-the-art devices.
  • California's Opcondys will develop a light-controlled grid protection device to suppress destructive, sudden transient surges on the grid caused by lightning or electromagnetic pulses.
  • Albuquerque's Sandia National Laboratories will develop a novel solid-state surge arrester protecting the grid from very fast electromagnetic pulses that threaten grid reliability and performance.

America's Secretary of Energy said the new investment "will support project teams across the country as they develop the innovative technologies we need to strengthen our grid security and bring reliable clean electricity to more families and businesses — all while combatting the climate crisis."


Businesses

EU, Chinese, French Regulators Seeking Info on Graphic Cards, Nvidia Says (reuters.com) 44

Regulators in the European Union, China and France have asked for information on Nvidia's graphics cards, with more requests expected in the future, the U.S. chip giant said in a regulatory filing. From a report: Nvidia is the world's largest maker of chips used both for artificial intelligence and for computer graphics. Demand for its chips jumped following the release of the generative AI application ChatGPT late last year. The California-based company has a market share of around 80%, built on its chips and other hardware and the powerful software that runs them.

Its graphics cards are high-performance devices that enable powerful graphics rendering and processing for use in video editing, video gaming and other complex computing operations. The company said this has attracted regulatory interest around the world. "For example, the French Competition Authority collected information from us regarding our business and competition in the graphics card and cloud service provider market as part of an ongoing inquiry into competition in those markets," Nvidia said in a regulatory filing dated Nov. 21.

IT

FFmpeg 6.1 Drops a Heaviside Dose of Codec Magic (theregister.com) 14

FFmpeg 6.1's codename is a tribute to the great 19th century mathematician Oliver Heaviside. This version includes support, added to the codebase in May, for multi-threaded hardware-accelerated decoding of H.264, HEVC, and AV1 video using the cross-platform Vulkan API, the next-gen replacement for OpenGL. The Register adds: The pace of development of FFmpeg has been speeding up slightly in recent years, given that it took 13 years to get to version 2.0. We can't help but wonder if that's connected with the departure of the former project lead in 2015. The developers are planning to release version 7.0 in about February next year. Even so, the "Heaviside" release, which has been refactored to support even more formats and introduce new methods for faster performance or reduced processor utilization, is smaller than previous releases.
China

China's Secretive Sunway Pro CPU Quadruples Performance Over Its Predecessor (tomshardware.com) 73

An anonymous reader shares a report: Earlier this year, the National Supercomputing Center in Wuxi (an entity blacklisted in the U.S.) launched its new supercomputer based on the enhanced China-designed Sunway SW26010 Pro processors with 384 cores. Sunway's SW26010 Pro CPU not only packs more cores than its non-Pro SW26010 predecessor, it also more than quadruples FP64 compute throughput thanks to microarchitectural and system architecture improvements, according to Chips and Cheese. However, while the manycore CPU is good on paper, it has several performance bottlenecks.

The first details of the manycore Sunway SW26010 Pro CPU and the supercomputers that use it emerged back in 2021. Now, at SC23, the company has showcased actual processors and disclosed more details about their architecture and design, which represent a significant leap in performance. The new CPU is expected to enable China to build high-performance supercomputers based entirely on domestically developed processors. Each Sunway SW26010 Pro has a maximum FP64 throughput of 13.8 TFLOPS, which is massive. For comparison, AMD's 96-core EPYC 9654 has a peak FP64 performance of around 5.4 TFLOPS.

The SW26010 Pro is an evolution of the original SW26010, so it maintains the foundational architecture of its predecessor but introduces several key enhancements. The new SW26010 Pro processor is based on an all-new proprietary 64-bit RISC architecture and packs six core groups (CGs) and a protocol processing unit (PPU). Each CG integrates 64 2-wide compute processing elements (CPEs) featuring a 512-bit vector engine as well as 256 KB of fast local store (scratchpad cache) for data and 16 KB for instructions; one management processing element (MPE), which is a superscalar out-of-order core with a vector engine, 32 KB/32 KB L1 instruction/data caches, and 256 KB of L2 cache; and a 128-bit DDR4-3200 memory interface.
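As a sanity check on that 13.8 TFLOPS figure, the disclosed configuration lines up if one assumes each CPE retires one 512-bit FP64 fused multiply-add per cycle at a clock of roughly 2.25 GHz (the clock speed is our assumption, not stated above):

    $6 \times 64\ \text{CPEs} \times \tfrac{512}{64}\ \text{FP64 lanes} \times 2\ \tfrac{\text{FLOPs}}{\text{FMA}} \times 2.25\ \text{GHz} \approx 13.8\ \text{TFLOPS}$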

Businesses

Nvidia Beats TSMC and Intel To Take Top Chip Industry Revenue Crown For the First Time (tomshardware.com) 21

Nvidia has swung from fourth to first place in an assessment of chip industry revenue published today. From a report: Taipei-based financial analyst Dan Nystedt noted that the green team took the revenue crown from contract chip-making titan TSMC as Q3 financials came into view. Those keeping an eye on the world of investing and finance will have seen our report about Nvidia's earnings explosion, evidenced by the firm's publishing of its Q3 FY24 results.

Nvidia charted an amazing performance, with a headlining $18.12 billion in revenue for the quarter, up 206% year-over-year (YoY). The firm's profits were also through the roof, and Nystedt posted a graph showing Nvidia elbowed past its chip industry rivals by this metric in Q3 2023, too. Nvidia's advance is supported by multiple highly successful operating segments, which have provided a multiplicative effect on its revenue and income. Again, we saw clear evidence of a seismic shift in revenue, with the latest set of financials shared with investors earlier this week.

Google

Some Pixel 8 Pro Displays Have Bumps Under the Glass (9to5google.com) 31

Some Pixel 8 Pro owners have noticed circular bumps in several places on the screen that look to be the result of something pressing up against the soft, fragile underside of the 6.7-inch OLED panel. From a report: A statement from the company today acknowledges how "some users may see impressions from components in the device that look like small bumps" in specific conditions. Google says there is "no functional impact to Pixel 8 performance or durability," which does line up with all current reports.
Science

'Electrocaloric' Heat Pump Could Transform Air Conditioning (nature.com) 160

The use of environmentally damaging gases in air conditioners and refrigerators could become redundant if a new kind of heat pump lives up to its promise. A prototype, described in a study published last week in Science, uses electric fields and a special ceramic instead of alternately vaporizing a refrigerant fluid and condensing it with a compressor to warm or cool air. From a report: The technology combines a number of existing techniques and has "superlative performance," says Neil Mathur, a materials scientist at the University of Cambridge, UK. Emmanuel Defay, a materials scientist at the Luxembourg Institute of Science and Technology in Belvaux, and his collaborators built their experimental device out of a ceramic with a strong electrocaloric effect. Materials that exhibit this effect heat up when exposed to electric fields.

In an electrocaloric material, the atoms have an electric polarization -- a slight imbalance in their distribution of electrons, which gives these atoms a 'plus' and a 'minus' pole. When the material is left alone, the polarization of these atoms continuously swivels around in random directions. But when the material is exposed to an electric field, all the electrostatic poles suddenly align, like hair combed in one direction. This transition from disorder to order means that the electrons' entropy -- physicists' way of measuring disorder -- suddenly drops, Defay explains. But the laws of thermodynamics say that the total entropy of a system can never decline, so if it falls somewhere it must increase somewhere else. "The only possibility for the material to get rid of this extra mess is to pour it into the lattice" of its crystal structure, he says. That extra disorder means that the atoms themselves start vibrating faster, resulting in a rise in temperature.
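The article stops short of the governing equation, but the textbook quantity here (a standard result derived from a Maxwell relation; the notation is ours, not the Science paper's) is the adiabatic temperature change produced by sweeping the applied field between $E_1$ and $E_2$:

    $\Delta T = -\int_{E_1}^{E_2} \frac{T}{C_E}\left(\frac{\partial P}{\partial T}\right)_{E}\, dE$

where $P$ is the polarization and $C_E$ the volumetric heat capacity at constant field; a strong electrocaloric material is one whose polarization falls off sharply with temperature.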

AI

Can AI Be Used to Fine-Tune Linux Kernel Performance? (zdnet.com) 66

An anonymous reader shared this report from ZDNet: At the Linux Plumbers Conference, the invite-only meeting for the top Linux kernel developers, ByteDance Linux Kernel Engineer Cong Wang proposed using AI and machine learning to tune the Linux kernel for maximum performance on specific workloads... There are thousands of parameters. Even for a Linux expert, tuning them for optimal performance is a long, hard job. And, of course, different workloads require different tunings for different sets of Linux kernel parameters... What ByteDance is working on is a first attempt to automate the entire Linux kernel parameter tuning process with minimal engineering efforts.

Specifically, ByteDance is working on tuning Linux memory management. ByteDance has found that with machine learning algorithms, such as Bayesian optimization, automated tuning could even beat most Linux kernel engineers. Why? Well, the idea, as Wang wryly put it, "is not to put Linux kernel engineers out of business." No, the goal is "to liberate human engineers from tuning performance for each individual workload. While making better decisions with historical data, which humans often struggle with. And, last, but never least, find better solutions than those we come up with using our current trial and error, heuristic methods."

In short, ByteDance's system optimizes resource usage by making real-time adjustments to things like CPU frequency scaling and memory management.
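To make the shape of such a system concrete, here is a heavily simplified sketch in Java. Everything in it is an illustrative assumption rather than ByteDance's implementation: the single swappiness knob, the random search standing in for Bayesian optimization, and the timing-based benchmark.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Random;

    // Black-box tuning loop over one kernel parameter. A real tuner would
    // search many parameters at once and replace the random proposals below
    // with a Bayesian-optimization model fitted to past measurements.
    public class SysctlTuner {
        static final Path KNOB = Path.of("/proc/sys/vm/swappiness");

        static void setKnob(int value) throws IOException {
            Files.writeString(KNOB, Integer.toString(value)); // requires root
        }

        // Stand-in benchmark: run the workload under test; lower is better.
        static double runWorkload() throws InterruptedException {
            long start = System.nanoTime();
            Thread.sleep(50); // placeholder for the real workload
            return (System.nanoTime() - start) / 1e6; // elapsed milliseconds
        }

        public static void main(String[] args) throws Exception {
            Random rng = new Random(42);
            int best = Integer.parseInt(Files.readString(KNOB).trim());
            double bestScore = Double.MAX_VALUE;
            for (int trial = 0; trial < 20; trial++) {
                int candidate = rng.nextInt(101); // swappiness ranges 0-100
                setKnob(candidate);
                double score = runWorkload();
                if (score < bestScore) {
                    bestScore = score;
                    best = candidate;
                }
            }
            setKnob(best); // leave the best setting in place
            System.out.println("best swappiness=" + best + " (" + bestScore + " ms)");
        }
    }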
Supercomputing

Linux Foundation Announces Intent to Form 'High Performance Software Foundation' (linuxfoundation.org) 5

This week the Linux Foundation "announced the intention to form the High Performance Software Foundation."

"Through a series of technical projects, the High Performance Software Foundation aims to build, promote, and advance a portable software stack for high performance computing by increasing adoption, lowering barriers to contribution, and supporting development efforts." As use of high performance computing becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation intends to leverage investments made by the United States Department of Energy's Exascale Computing Project, the EuroHPC Joint Undertaking, and other international projects in accelerated high performance computing to exploit the performance of this diversifying set of architectures. As an umbrella project under the Linux Foundation, HPSF intends to provide a neutral space for pivotal projects in the high performance software ecosystem, enabling industry, academia, and government entities to collaborate together on the scientific software stack.

The High Performance Software Foundation already benefits from strong support across the high performance computing landscape, including leading companies and organizations like Amazon Web Services, Argonne National Laboratory, CEA, CIQ, Hewlett Packard Enterprise, Intel, Kitware, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NVIDIA, Oak Ridge National Laboratory, Sandia National Laboratory, and the University of Oregon.

Its first open source technical projects include:
  • Spack: the high performance computing package manager.
  • Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
  • AMReX: a performance-portable software framework designed to accelerate solving partial differential equations on block-structured, adaptively refined meshes.
  • WarpX: a performance-portable Particle-in-Cell code with advanced algorithms that won the 2022 Gordon Bell Prize.
  • Trilinos: a collection of reusable scientific software libraries, known in particular for linear, non-linear, and transient solvers, as well as optimization and uncertainty quantification.
  • Apptainer: a container system and image format specifically designed for secure high-performance computing.
  • VTK-m: a toolkit of scientific visualization algorithms for accelerator architectures.
  • HPCToolkit: performance measurement and analysis tools for computers ranging from laptops to the world's largest GPU-accelerated supercomputers.
  • E4S: the Extreme-scale Scientific Software Stack.
  • Charliecloud: a lightweight, fully unprivileged container implementation tailored for high performance computing.

Python

How Mojo Hopes to Revamp Python for an AI World (acm.org) 28

Python "come with downsides," argues a new article in Communications of the ACM. "Its programs tend to run slowly, and because it is inefficient at running processes in parallel, it is not well suited to some of the latest AI programming."

"Hoping to overcome those difficulties, computer scientist Chris Lattner set out to create a new language, Mojo, which offers the ease of use of Python, but the performance of more complex languages such as C++ or Rust." Lattner tells the site "we don't want to break Python, we want to make Python better," while software architect Doug Meil says Mojo is essentially "Python for AI... and it's going to be way faster in scale across multiple hardware platforms." Lattner teamed up with Tim Davis, whom he had met when they both worked for Google, to form Modular in January 2022. The company, where Lattner is chief executive officer and Davis chief product officer, provides support for companies working on AI and is developing Mojo.

A modern AI programming stack generally has Python on top, Lattner says, but because that is an inefficient language, it has C++ underneath to handle the implementation. The C++ then must communicate with performance accelerators or GPUs, so developers add a platform such as Compute Unified Device Architecture (CUDA) to make efficient use of those GPUs. "Mojo came from the need to unify these three different parts of the stack so that we could build a unified solution that can scale up and down," Lattner says. The result is a language with the same syntax as Python, so people used to programming in Python can adopt it with little difficulty, but which, by some measures, can run up to 35,000 times faster. For AI, Mojo is especially fast at performing the matrix multiplications used in many neural networks because it compiles the multiplication code to run directly on the GPU, bypassing CUDA...

"Increasingly, code is not being written by computer programmers. It's being written by doctors and journalists and chemists and gamers," says Jeremy Howard, an honorary professor of computer science at the University of Queensland, Australia, and a co-founder of fast.ai, a. "All data scientists write code, but very few data scientists would consider themselves professional computer programmers." Mojo attempts to fill that need by being a superset of Python. A program written in Python can be copied into Mojo and will immediately run faster, the company says. The speedup comes from a variety of factors. For instance, Mojo, like other modern languages, enables threads, small tasks that can be run simultaneously, rather than in sequence. Instead of using an interpreter to execute code as Python does, Mojo uses a compiler to turn the code into assembly language.

Mojo also gives developers the option of using static typing, which defines data elements and reduces the number of errors... "Static behavior is good because it leads to performance," Lattner says. "Static behavior is also good because it leads to more correctness and safety guarantees."

Python creator Guido van Rossum "says he is interested to watch how Mojo develops and whether it can hit the lofty goals Lattner is setting for it..." according to the article, "but he emphasizes that the language is in its early stages and, as of July 2023, Mojo had not yet been made available for download."


In June, Lattner did an hour-long interview with the TWIML AI podcast. And in 2017 Chris Lattner answered questions from Slashdot's readers.
GUI

Raspberry Pi OS, elementary OS Will Default to Wayland (elementary.io) 75

Recently the Register pointed out that the new (Debian-based) Raspberry Pi OS 5.0 has "a completely new Wayland desktop environment replacing PIXEL, the older desktop based on LXDE and X.org, augmented with Mutter in its previous release."

And when elementary OS 8 finally arrives, "the development team plans to finally shift to the Wayland display server by default," reports Linux magazine (adding "If you'd like to get early access to daily builds, you can do so by becoming an elementary OS sponsor on GitHub.")

"This is a transition that we have been planning and working towards for several years," writes CEO/co-founder Danielle Foré, "and we're finally in the home stretch... Wayland will bring us improved performance, better app security, and opens the doors to support more complex display setups like mixed DPI multi-monitor setups." There are other things that we're experimenting with, like the possibility of an immutable OS, and there are more mundane things that will certainly happen like shipping Pipewire. You'll also see on the project board that we're looking to replace the onscreen keyboard and it's time to re-evaluate some things like SystemD Boot. You can expect lots more little features to be detailed over the coming months.

Meanwhile, Linux Mint is getting "experimental" Wayland support next month. And also in December, Firefox will enable Wayland support by default.

And last month the Register noted a merge request for GNOME to remove the gnome-xorg.desktop file. "To put this in context, the Fedora project is considering a comparable change: removing or hiding the GNOME on X.org session from the login menu, which is already the plan for the Fedora KDE spin when it moves to KDE version 6, which is still in development."
Facebook

MediaTek Partners With Meta To Develop Chips For AR Smart Glasses (9to5google.com) 7

During MediaTek's 2023 summit, MediaTek executive Vince Hu announced a new partnership with Meta to develop chips for smart glasses capable of augmented reality or mixed reality experiences. 9to5Google reports: As the current generation exists, the Ray-Ban Meta glasses feature a camera and microphone for sending and receiving messages. However, the next generation of Meta smart glasses is likely to have a built-in "viewfinder" display to merge the virtual and physical worlds, allowing users to scan QR codes, read messages, and more. Beyond that, the company wants to bring AR glasses into the fold, which presents a much broader set of challenges. To accomplish this, a few things need to change. AR glasses need to be built for everyday use and optimized to take on an industrial design that looks good but can pack enough tech to ensure a good experience. As it stands, mixed-reality headsets are bulky and take on a large profile. Ideally, Meta's fully AR glasses would be thinner and sleeker.

The new partnership means that MediaTek will help co-develop custom silicon with Meta, built specifically for AR use cases and the glasses. MediaTek brings expertise in developing low-power, high-performance SoCs that can fit within tight physical constraints, like the frame of a pair of AR glasses. Little to no details were revealed about the upcoming AR glasses, other than the direct statement that "MediaTek-powered AR glasses from Meta" would be a thing sometime in the future. Previous leaks position the next generation of smart glasses with a viewfinder as a 2025 release, while a more robust set of AR glasses was referred to as a 2027 product -- if done properly, it would be an incredible device.
