Space

Russia Vetoes U.N. Resolution On Nuclear Weapons In Space (cnn.com) 162

This week Russia vetoed a UN resolution that proposed banning nuclear weapons in space, CNN reports.

But it all happened "amid U.S. intelligence-backed concerns that Moscow is trying to develop a nuclear device capable of destroying satellites." In February, President Joe Biden confirmed the US has intelligence that Russia is developing a nuclear anti-satellite capability. Three sources familiar with the intelligence subsequently told CNN the weapon could destroy satellites by creating a massive energy wave when detonated...

US Ambassador Linda Thomas-Greenfield said Wednesday's vote "marks a real missed opportunity to rebuild much-needed trust in existing arms control obligations." A US and Japan-drafted resolution had received cross-regional support from more than 60 member states. It intended to strengthen and uphold the global non-proliferation regime, including in outer space, and reaffirm the shared goal of maintaining outer space for peaceful purposes. It also called on UN member states not to develop nuclear weapons or other weapons of mass destruction designed to be placed in Earth's orbit....

Experts say this kind of weapon could have the potential to wipe out mega constellations of small satellites, like SpaceX's Starlink, which has been successfully used by Ukraine to counter Russian troops. This would almost certainly be "a last-ditch weapon" for Russia, the US official and other sources said — because it would do the same damage to whatever Russian satellites were also in the area.

The article notes that in March Russian President Vladimir Putin "told officials that space projects, including the setup of a nuclear power unit in space, should be a priority and receive proper financing."

Thanks to long-time Slashdot reader schwit1 for sharing the news.
Microsoft

A Windows Vulnerability Reported by the NSA Was Exploited To Install Russian Malware (arstechnica.com) 17

"Kremlin-backed hackers have been exploiting a critical Microsoft vulnerability for four years," Ars Technica reported this week, "in attacks that targeted a vast array of organizations with a previously undocumented tool, the software maker disclosed Monday."

"When Microsoft patched the vulnerability in October 2022 — at least two years after it came under attack by the Russian hackers — the company made no mention that it was under active exploitation." As of publication, the company's advisory still made no mention of the in-the-wild targeting. Windows users frequently prioritize the installation of patches based on whether a vulnerability is likely to be exploited in real-world attacks.

Exploiting CVE-2022-38028, as the vulnerability is tracked, allows attackers to gain system privileges, the highest available in Windows, when combined with a separate exploit. Exploiting the flaw, which carries a 7.8 severity rating out of a possible 10, requires low existing privileges and little complexity. It resides in the Windows print spooler, a printer-management component that has harbored previous critical zero-days. Microsoft said at the time that it learned of the vulnerability from the US National Security Agency... Since as early as April 2019, Forest Blizzard has been exploiting CVE-2022-38028 in attacks that, once system privileges are acquired, use a previously undocumented tool that Microsoft calls GooseEgg. The post-exploitation malware elevates privileges within a compromised system and goes on to provide a simple interface for installing additional pieces of malware that also run with system privileges. This additional malware, which includes credential stealers and tools for moving laterally through a compromised network, can be customized for each target.

"While a simple launcher application, GooseEgg is capable of spawning other applications specified at the command line with elevated permissions, allowing threat actors to support any follow-on objectives such as remote code execution, installing a backdoor, and moving laterally through compromised networks," Microsoft officials wrote.

Thanks to Slashdot reader echo123 for sharing the news.
Linux

45 Drives Adds Linux-Powered Mini PCs, Workstations To Growing Compute Lineup (theregister.com) 8

Tobias Mann reports via The Register: Canadian systems builder 45 Drives is perhaps best known for the dense multi-drive storage systems employed by the likes of Backblaze and others, but over the last year the biz has expanded its line-up to virtualization kit, and now low-power clients and workstations aimed at enterprises and home enthusiasts alike. 45 Drives' Home Client marks a departure from the relatively large rack-mount chassis it normally builds. Founder Doug Milburn told The Register the mini PC is something of a passion project that was born out of a desire to build a better home theater PC.

Housed within a custom passively cooled chassis built in-house by 45 Drives' parent company Protocase is a quad-core, non-hyperthreaded Intel Alder Lake-generation N97 processor capable of boosting to 3.6GHz, your choice of either 8GB or 16GB of memory, and 250GB of flash storage. The decision to go with a 12th-gen N-series was motivated in part by 45 Drives' internal workloads, Milburn explains, adding that running PowerPoint or Salesforce just doesn't require that much horsepower. However, 45 Drives doesn't just see this as a low-power PC. Despite its name, the box will be sold under both its enterprise and home brands. In home lab environments, these small form factor x86 and Arm PCs have become incredibly popular for everything from lightweight virtualization and container hosts to firewalls and routers. [...]

In terms of software, 45 Drives says it will offer a number of operating system images for customers to choose from at the time of purchase, and Linux will be a first-class citizen on these devices. It's safe to say that Milburn isn't a big fan of Microsoft these days. "We run many hundreds of Microsoft workstations here, but we're kind of moving away from it," he said. "With Microsoft, it's a control thing; it's forced updates; it's a way of life with them." Milburn also isn't a fan of Microsoft's registration requirements and online telemetry. "We want control over what all our computers do. We want no traffic on our network that's out of here," he said. As a result, Milburn says 45 Drives is increasingly relying on Linux, and that not only applies to its internal machines but its products as well. Having said that, we're told that 45 Drives recognizes that Linux may not be appropriate for everyone and will offer Windows licenses at an additional cost. And, these both being x86 machines, there's nothing stopping you from loading your preferred distro or operating system on them after they've shipped.
These workstations aren't exactly cheap. They start at $1,099 without the dedicated GPU. "The HL15 will set you back $799-$910 for the bare chassis, depending on whether you opt for the PSU," adds The Register. "Meanwhile, a pre-configured system would run you $1,999 before factoring in drives."
Ubuntu

Ubuntu 24.04 LTS 'Noble Numbat' Officially Released (9to5linux.com) 34

prisoninmate shares a report from 9to5Linux: Canonical released today Ubuntu 24.04 LTS (Noble Numbat) as the latest version of its popular Linux-based operating system featuring some of the latest GNU/Linux technologies and Open Source software. Powered by Linux kernel 6.8, Ubuntu 24.04 LTS features the latest GNOME 46 desktop environment, an all-new graphical firmware update tool called Firmware Updater, Netplan 1.0 for state-of-the-art network management, updated Ubuntu font, support for the deb822 format for software sources, increased vm.max_map_count for better gaming, and Mozilla Thunderbird as a Snap by default.

It also comes with an updated Flutter-based graphical desktop installer that's now capable of updating itself and features a bunch of changes like support for accessibility features, guided (unencrypted) ZFS installations, a new option to import auto-install configurations for templated custom provisioning, as well as new default installation options, such as Default selection (previously Minimal) and Extended selection (previously Normal).

Mars

The Ingenuity Mars Helicopter Just Sent Its Last Message Home (livescience.com) 27

Two months ago the team behind NASA's Ingenuity Helicopter released a video reflecting on its historic explorations of Mars, flying 10.5 miles (17.0 kilometers) in 72 different flights over three years. It was the team's way of saying goodbye, according to NASA's video.

And this week, LiveScience reports, Ingenuity answered back: On April 16, Ingenuity beamed back its final signal to Earth, which included the remaining data it had stored in its memory bank and information about its final flight. Ingenuity mission scientists gathered in a control room at NASA's Jet Propulsion Laboratory (JPL) in California to celebrate and analyze the helicopter's final message, which was received via NASA's Deep Space Network, made up of ground stations located across the globe.

In addition to the remaining data files, Ingenuity sent the team a goodbye message including the names of all the people who worked on the mission. This special message had been sent to Perseverance the day before and relayed to Ingenuity to send home.

The helicopter, which still has power, will now spend the rest of its days collecting data from its final landing spot in Valinor Hills, named after a location in J.R.R. Tolkien's "The Lord of the Rings" books.

The chopper will wake up daily to test its equipment, collect a temperature reading and take a single photo of its surroundings. It will continue to do this until it loses power or fills up its remaining memory space, which could take 20 years. Such a long-term dataset could not only benefit future designs for Martian vehicles but also "provide a long-term perspective on Martian weather patterns and dust movement," researchers wrote in the statement. However, the data will be kept on board the helicopter and not beamed back to Earth, so it must be retrieved by future Martian vehicles or astronauts.

"Whenever humanity revisits Valinor Hills — either with a rover, a new aircraft, or future astronauts — Ingenuity will be waiting with her last gift of data," Teddy Tzanetos, an Ingenuity scientist at JPL, said in the statement.

Thursday NASA's Jet Propulsion Laboratory released another new video tracing the entire route of Ingenuity's expedition over the surface of Mars.

"Ingenuity's success could pave the way for more extensive aerial exploration of Mars down the road," adds Space.com: Mission team members are already working on designs for larger, more capable rotorcraft that could collect a variety of science data on the Red Planet, for example. And Mars isn't the only drone target: In 2028, NASA plans to launch Dragonfly, a $3.3 billion mission to Saturn's huge moon Titan, which hosts lakes, seas and rivers of liquid hydrocarbons on its frigid surface. The 1,000-pound (450 kg) Dragonfly will hop from spot to spot on Titan, characterizing the moon's various environments and assessing its habitability.
AI

GPT-4 Can Exploit Real Vulnerabilities By Reading Security Advisories 74

Long-time Slashdot reader tippen shared this report from the Register: AI agents, which combine large language models with automation software, can successfully exploit real world security vulnerabilities by reading security advisories, academics have claimed.

In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists — Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang — report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw. "To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description," the US-based authors explain in their paper. "When given the CVE description, GPT-4 is capable of exploiting 87 percent of these vulnerabilities compared to 0 percent for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit)...."

The researchers' work builds upon prior findings that LLMs can be used to automate attacks on websites in a sandboxed environment. GPT-4, said Daniel Kang, assistant professor at UIUC, in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)."

The researchers wrote that "Our vulnerabilities span website vulnerabilities, container vulnerabilities, and vulnerable Python packages. Over half are categorized as 'high' or 'critical' severity by the CVE description...."

Kang and his colleagues also computed the cost of conducting a successful LLM agent attack, and came up with a figure of $8.80 per exploit.
United States

US Air Force Confirms First Successful AI Dogfight (theverge.com) 69

The US Air Force is putting AI in the pilot's seat. In an update on Thursday, the Defense Advanced Research Projects Agency (DARPA) revealed that an AI-controlled jet successfully faced a human pilot during an in-air dogfight test carried out last year. From a report: DARPA began experimenting with AI applications in December 2022 as part of its Air Combat Evolution (ACE) program. It worked to develop an AI system capable of autonomously flying a fighter jet, while also adhering to the Air Force's safety protocols. After carrying out dogfighting simulations using the AI pilot, DARPA put its work to the test by installing the AI system inside its experimental X-62A aircraft. That allowed it to get the AI-controlled craft into the air at the Edwards Air Force Base in California, where it says it carried out its first successful dogfight test against a human in September 2023.
Robotics

Boston Dynamics' New Atlas Robot Is a Swiveling, Shape-Shifting Nightmare (theverge.com) 57

Jess Weatherbed reports via The Verge: It's alive! A day after announcing it was retiring Atlas, its hydraulic robot, Boston Dynamics has introduced a new, all-electric version of its humanoid machine. The next-generation Atlas robot is designed to offer a far greater range of movement than its predecessor. Boston Dynamics wanted the new version to show that Atlas can keep a humanoid form without limiting "how a bipedal robot can move." The new version has been redesigned with swiveling joints that the company claims make it "uniquely capable of tackling dull, dirty, and dangerous tasks."

The teaser showcasing the new robot's capabilities is as unnerving as it is theatrical. The video starts with Atlas lying in a cadaver-like fashion on the floor before it swiftly folds its legs backward over its body and rises to a standing position in a manner befitting some kind of Cronenberg body-horror flick. Its curved, illuminated head does add some Pixar lamp-like charm, but the way Atlas then spins at the waist and marches toward the camera really feels rather jarring. The design itself is also a little more humanoid. Similar to bipedal robots like Tesla's Optimus, the new Atlas now has longer limbs, a straighter back, and a distinct "head" that can swivel around as needed. There are no cables in sight, and its "face" includes a built-in ring light. It is a marked improvement on its predecessor and now features a bunch of Boston Dynamics' new AI and machine learning tools. [...] Boston Dynamics said the new Atlas will be tested with a small group of customers "over the next few years," starting with Hyundai.

AI

Adobe Premiere Pro Is Getting Generative AI Video Tools 5

Adobe is using its Firefly machine learning model to bring generative AI video tools to Premiere Pro. "These new Firefly tools -- alongside some proposed third-party integrations with Runway, Pika Labs, and OpenAI's Sora models -- will allow Premiere Pro users to generate video and add or remove objects using text prompts (just like Photoshop's Generative Fill feature) and extend the length of video clips," reports The Verge. From the report: Unlike many of Adobe's previous Firefly-related announcements, no release date -- beta or otherwise -- has been established for the company's new video generation tools, only that they'll roll out "this year." And while the creative software giant showcased what its own video model is currently capable of in an early video demo, its plans to integrate Premiere Pro with AI models from other providers aren't a certainty. Adobe instead calls the third-party AI integrations in its video preview an "early exploration" of what these may look like "in the future." The idea is to provide Premiere Pro users with more choice, according to Adobe, allowing them to use models like Pika to extend shots or Sora or Runway AI when generating B-roll for their projects. Adobe also says its Content Credentials labels can be applied to these generated clips to identify which AI models have been used to generate them.
PlayStation (Games)

Sony's PS5 Pro is Real and Developers Are Getting Ready For It (theverge.com) 25

Sony is getting ready to release a more powerful PS5 console, possibly by the end of this year. After reports of leaked PS5 Pro specifications surfaced recently, The Verge has obtained a full list of specs for the upcoming console. From the report: Sources familiar with Sony's plans tell me that developers are already being asked to ensure their games are compatible with this upcoming console, with a focus on improving ray tracing. Codenamed Trinity, the PlayStation 5 Pro model will include a more powerful GPU and a slightly faster CPU mode. All of Sony's changes point to a PS5 Pro that will be far more capable of rendering games with ray tracing enabled or hitting higher resolutions and frame rates in certain titles. Sony appears to be encouraging developers to use graphics features like ray tracing more with the PS5 Pro, with games able to use a "Trinity Enhanced" (PS5 Pro Enhanced) label if they "provide significant enhancements."

Sony expects GPU rendering on the PS5 Pro to be "about 45 percent faster than standard PlayStation 5," according to documents outlining the upcoming console. The PS5 Pro GPU will be larger and use faster system memory to help improve ray tracing in games. Sony is also using a "more powerful ray tracing architecture" in the PS5 Pro, with speeds up to three times better than the regular PS5's. "Trinity is a high-end version of PlayStation 5," reads one document, with Sony indicating it will continue to sell the standard PS5 after this new model launches. Sony is expecting game developers to have a single package that will support both the PS5 and PS5 Pro consoles, with existing games able to be patched for higher performance.

The Military

Will the US-China Competition to Field Military Drone Swarms Spark a Global Arms Race? (apnews.com) 28

The Associated Press reports: As their rivalry intensifies, U.S. and Chinese military planners are gearing up for a new kind of warfare in which squadrons of air and sea drones equipped with artificial intelligence work together like swarms of bees to overwhelm an enemy. The planners envision a scenario in which hundreds, even thousands of the machines engage in coordinated battle. A single controller might oversee dozens of drones. Some would scout, others attack. Some would be able to pivot to new objectives in the middle of a mission based on prior programming rather than a direct order.

The world's only AI superpowers are engaged in an arms race for swarming drones that is reminiscent of the Cold War, except drone technology will be far more difficult to contain than nuclear weapons. Because software drives the drones' swarming abilities, it could be relatively easy and cheap for rogue nations and militants to acquire their own fleets of killer robots. The Pentagon is pushing urgent development of inexpensive, expendable drones as a deterrent against China acting on its territorial claim on Taiwan. Washington says it has no choice but to keep pace with Beijing. Chinese officials say AI-enabled weapons are inevitable so they, too, must have them.

The unchecked spread of swarm technology "could lead to more instability and conflict around the world," said Margarita Konaev, an analyst with Georgetown University's Center for Security and Emerging Technology.

"A 2023 Georgetown study of AI-related military spending found that more than a third of known contracts issued by both U.S. and Chinese military services over eight months in 2020 were for intelligent uncrewed systems..." according to the article.

"Military analysts, drone makers and AI researchers don't expect fully capable, combat-ready swarms to be fielded for five years or so, though big breakthroughs could happen sooner."
Facebook

Meta Platforms To Launch Small Versions of Llama 3 Next Week (theinformation.com) 7

Meta Platforms is planning to launch two small versions of its forthcoming Llama 3 large-language model next week, The Information has reported [non-paywalled link]. From the report: The models will serve as a precursor to the launch of the biggest version of Llama 3, expected this summer. Release of the two small models will likely help spark excitement for the forthcoming Llama 3, which will be coming out roughly a year after Llama 2 launched last July.

It comes as several companies, including Google, Elon Musk's xAI and Mistral, have released open-source LLMs. Meta hopes Llama 3 will catch up with OpenAI's GPT-4, which can answer questions based on images users upload to the chatbot. The biggest version will be multimodal, which means it will be capable of understanding and generating both texts and images. In contrast, the two small models to be released next week won't be multimodal, according to the report.

AI

Google's Gemini Pro 1.5 Enters Public Preview on Vertex AI (techcrunch.com) 1

Gemini 1.5 Pro, Google's most capable generative AI model, is now available in public preview on Vertex AI, Google's enterprise-focused AI development platform. From a report: The company announced the news during its annual Cloud Next conference, which is taking place in Las Vegas this week. Gemini 1.5 Pro launched in February, joining Google's Gemini family of generative AI models. Undoubtedly its headlining feature is the amount of context that it can process: from 128,000 tokens up to 1 million tokens, where "tokens" refers to subdivided bits of raw data (like the syllables "fan," "tas" and "tic" in the word "fantastic").

One million tokens is equivalent to around 700,000 words or around 30,000 lines of code. It's about four times the amount of data that Anthropic's flagship model, Claude 3, can take as input and about eight times as high as OpenAI's GPT-4 Turbo max context. A model's context, or context window, refers to the initial set of data (e.g. text) the model considers before generating output (e.g. additional text). A simple question -- "Who won the 2020 U.S. presidential election?" -- can serve as context, as can a movie script, email, essay or e-book.
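
As a rough sanity check on those figures, the word-to-token ratio the article implies (about 0.7 English words per token, an approximation rather than an exact tokenizer property) can be turned into a small estimator:

```python
# Rough context-window arithmetic from the article's figures.
# Assumption: ~0.7 English words per token, as implied by
# "one million tokens is equivalent to around 700,000 words".
WORDS_PER_TOKEN = 700_000 / 1_000_000  # ~0.7

def approx_words(tokens: int) -> int:
    """Estimate how many English words fit in a context window of `tokens`."""
    return round(tokens * WORDS_PER_TOKEN)

print(approx_words(1_000_000))  # 700000, matching the article
print(approx_words(128_000))    # 89600 words at the low end of the range
```

Real tokenizers vary by language and vocabulary, so treat this as an order-of-magnitude guide only.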

AI

Musk Predicts AI Will Overtake Human Intelligence Next Year 291

The capability of new AI models will surpass human intelligence by the end of next year [non-paywalled link], so long as the supply of electricity and hardware can satisfy the demands of the increasingly powerful technology, according to Elon Musk. From a report: "My guess is that we'll have AI that is smarter than any one human probably around the end of next year," said the billionaire entrepreneur, who runs Tesla, X and SpaceX. Within the next five years, the capabilities of AI will probably exceed that of all humans, Musk predicted on Monday during an interview on X with Nicolai Tangen, the chief executive of Norges Bank Investment Management.

Musk has been consistently bullish on the development of so-called artificial general intelligence, AI tools so powerful they can beat the most capable individuals in any domain. But Monday's prediction is ahead of schedules he and others have previously forecast. Last year, he predicted "full" AGI would be achieved by 2029. Some of Musk's boldest predictions, such as rolling out self-driving Teslas and landing a rocket on Mars, have not yet been fulfilled. A number of AI breakthroughs over the past 18 months, including the launch of video generation tools and more capable chatbots, have pushed the frontier of AI forward faster than expected. Demis Hassabis, the co-founder of Google's DeepMind, predicted earlier this year that AGI could be achieved by 2030.

The pace of development has been slowed by a bottleneck in the supply of microchips, particularly those produced by Nvidia, which are essential for training and running AI models. Those constraints were easing, Musk said, but new models are now testing other data centre equipment and the electricity grid. "Last year it was chip constrained ... people could not get enough Nvidia chips. This year it's transitioning to a voltage transformer supply. In a year or two [the constraint is] just electricity supply," he said.
Microsoft

Is Microsoft Working on 'Performant Sound Recognition' AI Technologies? (windowsreport.com) 28

Windows Report speculates on what Microsoft may be working on next based on a recently-published patent for "performant sound recognition AI technologies" (dated April 2, 2024): Microsoft's new technology can recognize different types of sounds, from doorbells to babies crying to dogs barking, among others. It can also recognize sounds of coughing or breathing difficulties, or unusual noises such as glass breaking. Most intriguing, it can recognize and monitor environmental sounds, which can be further processed to let users know if a natural disaster is about to happen...

The neural network generates scores and probabilities for each type of sound event in each segment. This is like guessing what type of sound each segment is and how sure it is about the guess. After that, the system does some post-processing to smooth out the scores and probabilities and generate confidence values for each type of sound for different window sizes.
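
The patent text doesn't spell out the exact post-processing algorithm, but the smoothing it describes behaves like a sliding-window average over per-segment scores. A minimal sketch under that assumption (the window size and averaging scheme here are illustrative, not details from the patent):

```python
from collections import deque

def smooth_scores(scores, window=3):
    """Moving-average smoothing of one sound class's per-segment scores."""
    buf = deque(maxlen=window)  # holds the most recent `window` raw scores
    smoothed = []
    for s in scores:
        buf.append(s)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

# Raw "glass breaking" scores for five audio segments (made-up numbers):
raw = [0.10, 0.90, 0.20, 0.85, 0.80]
print(smooth_scores(raw))  # isolated spikes are damped; sustained runs stay high
```

Running the same scores through several window sizes is one way to produce the per-window confidence values the patent mentions.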

Ultimately, this technology can be used in various applications. In a smart home device, it can detect when someone breaks into the house by recognizing the sound of glass shattering, or tell if a newborn is hungry or distressed by recognizing the sounds of a baby crying. It can also be used in healthcare to accurately detect lung or heart diseases by recognizing heartbeat sounds, coughing, or breathing difficulties. But one of its most important applications would be to warn casual users of upcoming natural disasters by recognizing and detecting sounds associated with them.

Thanks to Slashdot reader John Nautu for sharing the article.
Education

AI's Impact on CS Education Likened to Calculator's Impact on Math Education (acm.org) 102

In Communications of the ACM, Google's VP of Education notes how calculators impacted math education — and wonders whether generative AI will have the same impact on CS education: Teachers had to find the right amount of long-hand arithmetic and mathematical problem solving for students to do, in order for them to have the "number sense" to be successful later in algebra and calculus. Too much focus on calculators diminished number sense. We have a similar situation in determining the "code sense" required for students to be successful in this new realm of automated software engineering. It will take a few iterations to understand exactly what kind of praxis students need in this new era of LLMs to develop sufficient code sense, but now is the time to experiment.
Long-time Slashdot reader theodp notes it's not the first time the Google executive has had to consider "iterating" curriculum: The CACM article echoes earlier comments Google's Education VP made in a featured talk called The Future of Computational Thinking at last year's Blockly Summit. (Blockly is the Google technology that powers drag-and-drop coding IDEs used for K-12 CS education, including Scratch and Code.org). Envisioning a world where AI generates code and humans proofread it, Johnson explained: "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals. So, we see a shift with students to focus more on reading and understanding and assessing generated code and less about actually writing it. [...] I don't anticipate that the need for understanding code is going to go away entirely right away [...] I think there will still be, at least in the near term, a need to read and understand code so that you can assess the reliability, the correctness of generated code. So, I think in the near term there's still going to be a need for that." In the following Q&A, Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world as described — and the Google VP concedes there may not be.
Software

Rickroll Meme Immortalized In Custom ASIC That Includes 164 Hardcoded Programs (theregister.com) 9

Matthew Connatser reports via The Register: An ASIC designed to display the infamous Rickroll meme is here, alongside 164 other assorted functions. The project is a product of Matthew Venn's Zero to ASIC Course, which offers prospective chip engineers the chance to "learn to design your own ASIC and get it fabricated." Since 2020, Zero to ASIC has accepted several designs that are incorporated into a single chip called a multi-project wafer (MPW), a cost-saving measure as making one chip for one design would be prohibitively expensive. Zero to ASIC has two series of chips: MPW and Tiny Tapeout. The MPW series usually includes just a handful of designs, such as the four on MPW8 submitted in January 2023. By contrast, the original Tiny Tapeout chip included 152 designs, and Tiny Tapeout 2 (which arrived last October) had 165, though that could be bumped up to 250. Of the 165 designs, one in particular may strike a chord: Design 145, or the Secret File, made by engineer and YouTuber Bitluni. His Secret File design for the Tiny Tapeout ASIC is designed to play a small part of Rick Astley's music video for Never Gonna Give You Up, also known as the Rickroll meme.

Bitluni was a late inclusion on the Tiny Tapeout 2 project, having been invited just three days before the submission deadline. He initially just made a persistence-of-vision controller, which was revised twice for a total of three designs. "At the end, I still had a few hours left, and I thought maybe I should also upload a meme project," Bitluni says in his video documenting his ASIC journey. His meme of choice was of course the Rickroll. One might even call it an Easter egg. However, given that there were 250 total slots for designs, there wasn't a ton of room for both the graphics processor and the file it was supposed to render, a short GIF of the music video. Ultimately, this had to be shrunk from 217 kilobytes to less than half a kilobyte, making its output look similar to games on the Atari 2600 from 1977. Accessing the Rickroll rendering processor and other designs isn't simple. Bitluni created a custom circuit board to mount the Tiny Tapeout 2 chip, creating a device that could then be plugged into a motherboard capable of selecting specific designs on the ASIC. Unfortunately for Bitluni, his first PCB had a design error on it that he had to correct, but the revised version worked and was able to display the Rickroll GIF in hardware via a VGA port.

IT

PCIe 7.0 On Track For a 2025 Release (pcgamer.com) 29

An anonymous reader shares a PC Gamer report: PCI Express 7.0 is coming. But don't feel as though you need to start saving for a new motherboard anytime soon. The PCI-SIG has just released the 0.5 version, with the final version set for release in 2025. That means supporting devices are not likely to land until 2026, with 2027-28 likely to be the years we see a wider rollout. PCIe 7.0 will initially be far more relevant to the enterprise market, where bandwidth-hungry applications like AI and networking will benefit. Anyway, it's not like the PC market is saturated with PCIe 5.0 devices, and PCIe 6.0 is yet to make its way into our gaming PCs.

PCI Express bandwidth doubles every generation, so PCIe 7.0 will deliver a maximum data rate of up to 128 GT/s. That's a whopping 8x faster than PCIe 4.0 and 4x faster than PCIe 5.0. This means PCIe 7.0 is capable of delivering up to 512GB/s of bi-directional throughput via an x16 connection and 128GB/s for an x4 connection. More bandwidth will certainly be beneficial for CPU to chipset links, which means multiple integrated devices like 10G networking, WiFi 7, USB 4, and Thunderbolt 4 will all be able to run on a consumer motherboard without compromise. And just imagine what all that bandwidth could mean for PCIe 7.0 SSDs. In the years to come, a PCIe 7.0 x4 SSD could approach sequential transfer rates of up to 60GB/s. We'll need some serious advances in SSD controller and NAND flash technologies to see speeds in that range, but still, it's an attractive proposition.
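The doubling-per-generation arithmetic above can be sketched in a few lines. This is back-of-envelope math using the transfer rates quoted in the report, treating usable throughput per lane as roughly GT/s divided by 8 (a reasonable approximation for PCIe's modern encoding overheads, not an exact spec figure):

```python
# Approximate PCIe throughput per generation, using the per-lane
# transfer rates quoted in the article. Usable bytes/s per lane is
# approximated as GT/s / 8; real-world overheads vary slightly.

RATES_GT_S = {"4.0": 16, "5.0": 32, "6.0": 64, "7.0": 128}

def throughput_gb_s(gen: str, lanes: int = 16, bidirectional: bool = False) -> float:
    """Approximate usable throughput in GB/s for a PCIe link."""
    per_lane = RATES_GT_S[gen] / 8  # ~GB/s per lane, per direction
    total = per_lane * lanes
    return total * 2 if bidirectional else total

# Reproduces the article's figures for PCIe 7.0:
print(throughput_gb_s("7.0", lanes=16, bidirectional=True))  # 512.0 GB/s
print(throughput_gb_s("7.0", lanes=4, bidirectional=True))   # 128.0 GB/s
```

The same function shows why a future x4 SSD ceiling sits around 64GB/s per direction, which is the headroom behind the "up to 60GB/s" sequential-transfer projection.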
Further reading: PCIe 7.0 first official draft lands, doubling bandwidth yet again.
The Internet

Researchers Unlock Fiber Optic Connection 1.2 Million Times Faster Than Broadband (popsci.com) 49

An anonymous reader quotes a report from Popular Science: In the average American house, any download rate above roughly 242 Mbps is considered a solidly speedy broadband internet connection. That's pretty decent, but across the Atlantic, researchers at UK's Aston University recently managed to coax about 1.2 million times that rate using a single fiber optic cable -- a new record for specific wavelength bands. As spotted earlier today by Gizmodo, the international team achieved a data transfer rate of 301 terabits per second, or 301,000,000 megabits per second, by accessing new wavelength bands normally unreachable in existing optical fibers -- the tiny, hollow glass strands that carry data through beams of light. According to Aston University's recent profile, you can think of these different wavelength bands as different colors of light shooting through a (largely) standard cable.

Commercially available fiber cabling utilizes what are known as C- and L-bands to transmit data. By constructing a device called an optical processor, however, researchers could access the never-before-used E- and S-bands. "Over the last few years Aston University has been developing optical amplifiers that operate in the E-band, which sits adjacent to the C-band in the electromagnetic spectrum but is about three times wider," Ian Phillips, the optical processor's creator, said in a statement. "Before the development of our device, no one had been able to properly emulate the E-band channels in a controlled way." But in terms of new tech, the processor was basically it for the team's experiment. "Broadly speaking, data was sent via an optical fiber like a home or office internet connection," Phillips added. What's particularly impressive and promising about the team's achievement is that they didn't need new, high-tech fiber optic lines to reach such blindingly fast speeds. Most existing optical cables have always technically been capable of reaching E- and S-bands, but lacked the equipment infrastructure to do so. With further refinement and scaling, internet providers could ramp up standard speeds without overhauling current fiber optic infrastructures.

Chromium

Thorium: The Fastest Open Source Chromium-based Browser? (itsfoss.com) 55

"After taking a look at Floorp Browser, I was left wondering whether there was a Chromium-based web browser that was as good, or even better than Chrome," writes a "First Look" reviewer at It's Foss News.

"That is when I came across Thorium, a web-browser that claims to be the 'the fastest browser on Earth'." [Thorium] is backed by a myriad of tweaks that include, compiler optimizations for SSE4.2, AVS, AES, various mods to CFLAGS, LDFLAGS, thinLTO flags, and more. The developer shares performance stats using popular benchmarking tools... I tested it using Speedometer 3.0 benchmark on Fedora 39 and compared it to Brave, and the scores were:

Thorium: 19.2; Brave: 19.5

So it may not always be the "fastest," but it is probably one of the fastest, coming close to Brave and sometimes even beating it (depending on the version tested and your system).

Alexander Frick, the lead developer, also insists on providing support for older operating systems such as Windows 7 so that its user base can use a capable modern browser without much fuss... As Thorium is a cross-platform web browser, you can find packages for a wide range of platforms such as Linux, Raspberry Pi, Windows, Android, macOS, and more.

Thorium can sync to your Google account to import your bookmarks, extensions, and themes, according to the article.

"Overall, I can confidently say that it is a web browser I could daily drive, if I were to ditch Chrome completely. It gels in quite well with the Google ecosystem and has a familiar user interface that doesn't get in the way."
