Software

Rickroll Meme Immortalized In Custom ASIC That Includes 164 Hardcoded Programs (theregister.com) 9

Matthew Connatser reports via The Register: An ASIC designed to display the infamous Rickroll meme is here, alongside 164 other assorted functions. The project is a product of Matthew Venn's Zero to ASIC Course, which offers prospective chip engineers the chance to "learn to design your own ASIC and get it fabricated." Since 2020, Zero to ASIC has accepted several designs that are incorporated into a single chip called a multi-project wafer (MPW), a cost-saving measure as making one chip for one design would be prohibitively expensive. Zero to ASIC has two series of chips: MPW and Tiny Tapeout. The MPW series usually includes just a handful of designs, such as the four on MPW8 submitted in January 2023. By contrast, the original Tiny Tapeout chip included 152 designs, and Tiny Tapeout 2 (which arrived last October) had 165, though its capacity could be bumped up to 250. Of the 165 designs, one in particular may strike a chord: Design 145, or the Secret File, made by engineer and YouTuber Bitluni. His Secret File design for the Tiny Tapeout ASIC is designed to play a small part of Rick Astley's music video for Never Gonna Give You Up, also known as the Rickroll meme.

Bitluni was a late inclusion on the Tiny Tapeout 2 project, having been invited just three days before the submission deadline. He initially just made a persistence-of-vision controller, which was revised twice for a total of three designs. "At the end, I still had a few hours left, and I thought maybe I should also upload a meme project," Bitluni says in his video documenting his ASIC journey. His meme of choice was of course the Rickroll. One might even call it an Easter egg. However, given that there were 250 total slots, one for each design, there wasn't a ton of room for both the graphics processor and the file it was supposed to render, a short GIF of the music video. Ultimately, this had to be shrunk from 217 kilobytes to less than half a kilobyte, making its output look similar to games on the Atari 2600 from 1977. Accessing the Rickroll rendering processor and other designs isn't simple. Bitluni created a custom circuit board to mount the Tiny Tapeout 2 chip, creating a device that could then be plugged into a motherboard capable of selecting specific designs on the ASIC. Unfortunately for Bitluni, his first PCB had a design error on it that he had to correct, but the revised version worked and was able to display the Rickroll GIF in hardware via a VGA port.
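The scale of that shrink is easy to sanity-check with a quick back-of-the-envelope. A sketch in Python (the frame count here is purely an assumption for illustration; the story doesn't say how many frames survived):

```python
# Figures from the story: a 217 KB GIF squeezed into under half a kilobyte.
original_bytes = 217 * 1024          # reported source GIF size
budget_bytes = 512                   # "less than half a kilobyte"
compression_factor = original_bytes / budget_bytes

# Hypothetical: if the clip kept, say, 24 frames, each frame would get
# only ~21 bytes -- which is why the output ends up looking like a
# 1977 Atari 2600 title.
assumed_frames = 24
bytes_per_frame = budget_bytes / assumed_frames

print(f"~{compression_factor:.0f}x smaller, ~{bytes_per_frame:.0f} bytes per frame")
```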

AI

Google Books Is Indexing AI-Generated Garbage (404media.co) 11

Google Books is indexing low-quality, AI-generated books that will turn up in search results, and could possibly impact the Google Ngram Viewer, an important tool used by researchers to track language use throughout history. From a report: I was able to find the AI-generated books with the same method we've previously used to find AI-generated Amazon product reviews, papers published in academic journals, and online articles. Searching Google Books for the term "As of my last knowledge update," which is associated with ChatGPT-generated answers, returns dozens of books that include that phrase. Some of the books are about ChatGPT, machine learning, AI, and other related subjects and include the phrase because they are discussing ChatGPT and its outputs. These books appear to be written by humans. However, most of the books in the first eight pages of results turned up by the search appear to be AI-generated and are not about AI.
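The detection method described above is simple phrase matching; a toy sketch of the same idea (the sample texts are invented, and, as the article notes, books legitimately discussing ChatGPT can trigger false positives):

```python
# The telltale phrase associated with ChatGPT-generated answers.
MARKER = "as of my last knowledge update"

def looks_ai_generated(text: str) -> bool:
    """Flag text containing the telltale phrase (case-insensitive)."""
    return MARKER in text.lower()

samples = [
    "As of my last knowledge update in April 2023, the market was volatile.",
    "This chapter examines how ChatGPT composes its answers.",
]
flags = [looks_ai_generated(s) for s in samples]
print(flags)  # first sample contains the marker, second does not
```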

For example, the 2024 book Bears, Bulls, and Wolves: Stock Trading for the Twenty-Year-Old by Tristin McIver, bills itself as "a transformative journey into the world of stock trading" and "a comprehensive guide designed for beginners eager to unlock the mysteries of financial markets." In reality, it reads like ChatGPT-generated text with surface, Wikipedia-level analysis of complex financial events like Facebook's initial public offering or the 2008 financial crisis summed up in a few short paragraphs. [...] Other books appear to be outdated to the point of being useless at the time they are published because they are generated with a version of ChatGPT with an old "knowledge update."

Space

Scientists Complete Construction of the Biggest Digital Camera Ever (gizmodo.com) 29

Isaac Schultz reports via Gizmodo: Nine years and 3.2 billion pixels later, it is complete: the LSST Camera stands as the largest digital camera ever built for astronomy and will serve as the centerpiece of the Vera Rubin Observatory, poised to begin its exploration of the southern skies. The Rubin Observatory's key goal is the 10-year Legacy Survey of Space and Time (LSST), a sweeping, near-constant observation of space. This endeavor will yield 60 petabytes of data on the composition of the universe, the nature and distribution of dark matter, dark energy and the expansion of the universe, the formation of our galaxy, our intimate little solar system, and more. The camera will use its 5.1-foot-wide optical lens to take a 15-second exposure of the sky every 20 seconds, automatically changing filters to view light in every wavelength from near-ultraviolet to the near-infrared. Its constant monitoring of the skies will eventually amount to a timelapse of the heavens; it will highlight fleeting events for other scientists to train their telescopes on, and monitor changes in the southern sky.
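The numbers above imply a ferocious data rate. A rough sketch of the arithmetic (observing hours per night and bytes per pixel are assumptions for illustration, not figures from the story):

```python
# Figures from the story
pixels = 3_200_000_000            # 3.2 gigapixels per image
exposure_cadence_s = 20           # one 15-second exposure every 20 seconds

# Assumptions for illustration only
observing_hours_per_night = 8
bytes_per_pixel = 2               # ~16-bit raw samples
nights = 365 * 10                 # the 10-year LSST survey

images_per_night = observing_hours_per_night * 3600 // exposure_cadence_s
total_images = images_per_night * nights
raw_petabytes = total_images * pixels * bytes_per_pixel / 1e15

print(images_per_night, f"{raw_petabytes:.0f} PB")
```

Even with these conservative assumptions, the raw-image volume lands in the same ballpark as the quoted 60-petabyte figure for the full survey's data products.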

To do this, the team needed a Rolls-Royce of a digital camera. Mind you, the camera actually cost many times that of an actual Rolls-Royce, and at 6,200 pounds (2,812 kilograms), it weighs a lot more than a fancy car. Each of the 21 rafts that make up the camera's focal plane costs as much as a Maserati, and they'll be worth every penny if they collect the sort of data scientists expect. "I'm personally most excited to study the expansion of the Universe using gravitational lenses to better understand Dark Energy," said Aaron Roodman, a physicist at SLAC and lead on the camera program, in an email to Gizmodo. "That means two things: 1) measuring the brightness in all six of our filters of literally billions of galaxies and very carefully measuring their shape, which has been subtly altered by the bending of light by matter, and 2) discovering and studying very special objects where a distant quasar is almost perfectly lined up with a more nearby galaxy."

Speaking through a SLAC release, Roodman said the camera's images could "resolve a golf ball from around 15 miles away, while covering a swath of the sky seven times wider than the full moon." The first images from the Rubin Observatory are slated to be publicly released in March 2025, which feels like a long way away. But several important agenda items still need to happen. For one, the SLAC team has to ship the LSST camera safely to Chile from its current lodgings in northern California. (Don't worry -- they've made a test run of the journey.) Then, the observatory's mirrors need to be readied for testing and the observatory's dome has to be completed, among some other tasks. But whenever all that is complete, the legacy survey will launch into a decade's worth of scientific discovery. Rubin Observatory estimates suggest that LSST could "increase the number of known objects by a factor of 10," according to a SLAC release.

Databases

Database For UK Nurse Registration 'Completely Unacceptable' (theregister.com) 42

Lindsay Clark reports via The Register: The UK Information Commissioner's Office has received a complaint detailing the mismanagement of personal data at the Nursing and Midwifery Council (NMC), the regulator that oversees worker registration. Employment as a nurse or midwife depends on enrollment with the NMC in the UK. According to whistleblower evidence seen by The Register, the databases on which the personal information is held lack rudimentary technical standards and practices. The NMC said its data was secure with a high level of quality, allowing it to fulfill its regulatory role, although it was on "a journey of improvement." But without basic documentation, or the primary keys or foreign keys common in database management, the Microsoft SQL Server databases -- holding information about 800,000 registered professionals -- are difficult to query and manage, making assurances on governance nearly impossible, the whistleblower told us.

The databases have no version control systems. Important fields for identifying individuals were used inconsistently -- for example, containing junk data, test data, or null data. Although the tech team used workarounds to compensate for the lack of basic technical standards, they were ad hoc and known by only a handful of individuals, creating business continuity risks should they leave the organization, according to the whistleblower. Despite having been warned of the issues of basic technical practice internally, the NMC failed to acknowledge the problems. Only after exhausting other avenues did the whistleblower raise concern externally with the ICO and The Register. The NMC stores sensitive data on behalf of the professionals that it registers, including gender, sexual orientation, gender identity, ethnicity and nationality, disability details, marital status, as well as other personal information.

The whistleblower's complaint claims the NMC falls well short of [the standards required under current UK law for data protection and the EU's General Data Protection Regulation (GDPR)]. The statement alleges that the NMC's "data management and data retrieval practices were completely unacceptable." "There is not even much by way of internal structure of the databases for self-documentation, such as primary keys, foreign keys (with a few honorable exceptions), check constraints and table constraints. Even fields that should not be null are nullable. This is frankly astonishing and not the practice of a mature, professional organization," the statement says. For example, the databases contain a unique ten-digit number (or PRN) to identify individuals registered to the NMC. However, the fields for PRNs sometimes contain individuals' names, start with a letter or other invalid data, or are simply null. The whistleblower's complaint says that the PRN problem, and other database design deficiencies, meant that it was nearly impossible to produce "accurate, correct, business critical reports ... because frankly no one knows where the correct data is to be found."
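The safeguards the complaint says are missing are standard, decades-old database features. A minimal SQLite sketch (the table and column names here are hypothetical, not the NMC's actual schema) shows how a primary key, NOT NULL, and a check constraint would reject exactly the junk PRNs described:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE registrant (
        prn TEXT PRIMARY KEY NOT NULL
            -- a PRN must be exactly ten digits
            CHECK (prn GLOB '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'),
        full_name TEXT NOT NULL
    )
""")

conn.execute("INSERT INTO registrant VALUES ('0123456789', 'Jane Doe')")  # accepted

# Exactly the failure modes the whistleblower describes: a PRN starting
# with a letter, a name stored in the PRN field, and a null PRN.
for bad_prn in ("A123456789", "Jane Doe", None):
    try:
        conn.execute("INSERT INTO registrant VALUES (?, 'X')", (bad_prn,))
    except sqlite3.IntegrityError as e:
        print(f"rejected {bad_prn!r}: {e}")
```

With constraints like these in place, the bad rows simply cannot exist, so reports never have to guess "where the correct data is to be found."
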
A spokesperson for the NMC said the register was "organized and documented" in the SQL Server database. "For clarity, the register of all our nurses, midwives and nursing practitioners is held within Dynamics 365 which is our system of record. This solution and the data held within it, is secure and well documented. It does not rely on any SQL database. The SQL database referenced by the whistleblower relates to our data warehouse which we are in the process of modernizing as previously shared."

Biotech

Neuralink Shows First Brain-Chip Patient Playing Online Chess Using His Mind 52

Neuralink, the brain-chip startup founded by Elon Musk, showed its first patient using his mind to play online chess. Reuters reports: Noland Arbaugh, the 29-year-old patient who was paralyzed below the shoulder after a diving accident, played chess on his laptop and moved the cursor using the Neuralink device. The implant seeks to enable people to control a computer cursor or keyboard using only their thoughts. Arbaugh had received an implant from the company in January and could control a computer mouse using his thoughts, Musk said last month.

"The surgery was super easy," Arbaugh said in the video streamed on Musk's social media platform X, referring to the implant procedure. "I literally was released from the hospital a day later. I have no cognitive impairments. I had basically given up playing that game," Arbaugh said, referring to the game Civilization VI, "you all (Neuralink) gave me the ability to do that again and played for 8 hours straight."

Elaborating on his experience with the new technology, Arbaugh said that it is "not perfect" and they "have run into some issues." "I don't want people to think that this is the end of the journey, there's still a lot of work to be done, but it has already changed my life," he added.

Space

Physicist Claims Universe Has No Dark Matter and Is Twice As Old As We Thought (sciencealert.com) 243

schwit1 shares a report from ScienceAlert: Sound waves fossilized in the maps of galaxies across the Universe could be interpreted as signs of a Big Bang that took place 13 billion years earlier than current models suggest. Last year, theoretical physicist Rajendra Gupta from the University of Ottawa in Canada published a rather extraordinary proposal that the Universe's currently accepted age is a trick of the light, one that masks its truly ancient state while also ridding us of the need to explain hidden forces. Gupta's latest analysis suggests oscillations from the earliest moments in time preserved in large-scale cosmic structures support his claims. "The study's findings confirm that our previous work about the age of the Universe being 26.7 billion years has allowed us to discover that the Universe does not require dark matter to exist," says Gupta. "In standard cosmology, the accelerated expansion of the Universe is said to be caused by dark energy but is in fact due to the weakening forces of nature as it expands, not due to dark energy." [...]

Current cosmological models make the reasonable assumption that certain forces governing the interactions of particles have remained constant throughout time. Gupta challenges a specific example of this 'coupling constant', asking how it might affect the spread of space over exhaustively long periods of time. It's hard enough for any novel hypothesis to survive the intense scrutiny of the scientific community. But Gupta's suggestion isn't even entirely new -- it's loosely based on an idea that was shown the door nearly a century ago. In the late 1920s, Swiss physicist Fritz Zwicky wondered if the reddened light of far distant objects was a result of lost energy, like a marathon runner exhausted by a long journey across the eons of space. His 'tired light' hypothesis was in competition with the now-accepted theory that light's red-shifted frequency is due to the cumulative expansion of space tugging at light waves like a stretched spring.
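For context, the now-accepted relation that Zwicky's idea competed with ties redshift directly to the growth of the cosmic scale factor $a(t)$ between emission and observation:

```latex
1 + z \;=\; \frac{\lambda_{\text{obs}}}{\lambda_{\text{emit}}} \;=\; \frac{a(t_{\text{obs}})}{a(t_{\text{emit}})}
```

Tired-light models instead attribute the wavelength stretch to energy lost in transit, which is the thread Gupta's CCC+TL proposal picks back up.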

The consequences of Gupta's version of the tired light hypothesis -- what is referred to as covarying coupling constants plus tired light, or CCC+TL -- would affect the Universe's expansion, doing away with mysterious pushing forces of dark energy and blaming changing interactions between known particles for the increased stretching of space. To replace existing models with CCC+TL, Gupta would need to convince cosmologists his model does a better job of explaining what we see at large. His latest paper attempts to do that by using CCC+TL to explain fluctuations in the spread of visible matter across space caused by sound waves in a newborn Universe, and the glow of ancient dawn known as the cosmic microwave background. While his analysis concludes his hybrid tired light theory can play nicely with certain features of the Universe's residual echoes of light and sound, it does so only if we also ditch the idea that dark matter is a thing.

The research has been published in The Astrophysical Journal.

Space

Scientists Reveal Never-Before-Seen Map of the Milky Way's Central Engine (space.com) 11

With funding from NASA, researchers from Villanova University have obtained a never-before-seen view of the central engine at the heart of our galaxy. Space.com reports: The new map of this central region of the Milky Way, which took four years to assemble, reveals the relationship between magnetic fields at the heart of our galaxy and the cold dust structures that dwell there. This dust forms the building blocks of stars, planets, and, ultimately, life as we know it. The central engine of the Milky Way drives this process. That means a clearer picture of dust and magnetic interactions builds a better understanding of the Milky Way and our place within it. The team's findings also have implications beyond our galaxy, offering glimpses of how dust and magnetic fields interact in the central engines of other galaxies.

"The center of the Milky Way and most of the space between stars is filled with a lot of dust, and this is important for our galaxy's life cycle," David Chuss, research team leader and a physics professor at Villanova University, told Space.com. "What we looked at was light emitted from these cool dust grains produced by heavy elements forged in stars and dispersed when those stars die and explode." [...] Chuss and colleagues received funding from NASA to investigate this dusty central zone using the Stratospheric Observatory for Infrared Astronomy (SOFIA), which was a telescope that circled the globe at an altitude of 45,000 feet (13,716 meters) aboard a Boeing 747 plane. The project's Far-Infrared Polarimetric Large Area CMZ Exploration (FIREPLACE) created an infrared map that spans around 500 light-years across the center of the Milky Way over nine flights. Using measurements of the polarization of the radiation emitted from dust that is aligned with magnetic fields, the intricate structure of those magnetic fields themselves was inferred by the team. This was then overlaid onto a three-color map that shows warm dust with a pink hue and cool dust clouds in blue. The image also shows radio-wave-emitting filaments in yellow.

"This is a journey, not a destination, but what we've found is this is a very complicated thing. The directions of the magnetic field vary all across the clouds at the center of the Milky Way," Chuss explained. "This is the first step in trying to figure out how the field that we see in the radiowaves across these large organized filaments may relate to the rest of the dynamics of the center of the Milky Way." Chuss explained that this complicated picture of magnetic fields was something that he and the FIREPLACE team had expected to see with the new SOFIA map; the observations agreed with smaller-scale infrared and radio wave observations previously made of the heart of the Milky Way. Where this new map, however, really comes into its own is the sheer scale. It manages to reveal some never-before mapped regions. The fine detail woven into it is stunning as well.

A preprint version of the SOFIA data is available on arXiv.

Space

Voyager 1, First Craft in Interstellar Space, May Have Gone Dark (nytimes.com) 80

The 46-year-old probe, which flew by Jupiter and Saturn in its youth and inspired earthlings with images of the planet as a "Pale Blue Dot," hasn't sent usable data from interstellar space in months. From a report: When Voyager 1 launched in 1977, scientists hoped it could do what it was built to do and take up-close images of Jupiter and Saturn. It did that -- and much more. Voyager 1 discovered active volcanoes, moons and planetary rings, proving along the way that Earth and all of humanity could be squished into a single pixel in a photograph, a "pale blue dot," as the astronomer Carl Sagan called it. It stretched a four-year mission into the present day, embarking on the deepest journey ever into space. Now, it may have bid its final farewell to that faraway dot.

Voyager 1, the farthest man-made object in space, hasn't sent coherent data to Earth since November. NASA has been trying to diagnose what the Voyager mission's project manager, Suzanne Dodd, called the "most serious issue" the robotic probe has faced since she took the job in 2010. The spacecraft encountered a glitch in one of its computers that has eliminated its ability to send engineering and science data back to Earth. The loss of Voyager 1 would cap decades of scientific breakthroughs and signal the beginning of the end for a mission that has given shape to humanity's most distant ambition and inspired generations to look to the skies.

AI

Tinder Owner Inks Deal With OpenAI (techcrunch.com) 27

An anonymous reader quotes a report from TechCrunch: In a press release written with help from ChatGPT, Match Group announced an enterprise agreement with the AI chatbot's maker, OpenAI. The new agreement includes over 1,000 enterprise licenses for the dating app giant and home to Tinder, Match, OkCupid, Hinge and others. The AI tech will be used to help Match Group employees with work-related tasks, the company says, and comes as part of Match's $20 million-plus bet on AI in 2024. [...] As for the news itself, Match Group says it will begin using the AI tech, and specifically ChatGPT-4, to aid with coding, design, analysis, template building, and other daily tasks, including, as you can tell, communications. To keep its corporate data protected, only trained and licensed Match Group employees will have access to OpenAI's tools, it noted.

Before being able to use these tools, Match Group employees will also have to undergo mandatory training that focuses on responsible use, the technology's capabilities, as well as its limitations. The use will be guided by the company's existing privacy practices and AI principles, too. The company declined to share the cost of the agreement or how it will impact the tech giant's bottom line, but Match believes that the AI tools will make teams more productive. Match execs recently spoke of the company's plans for AI during the company's fourth-quarter earnings, noting that, this year, the app maker will use AI technology to both evolve its existing products and build new ones. The company's Shareholder letter explained how AI could help to improve various aspects of the dating app journey. For instance, it could help with profile creation, where Match is testing features like an AI-powered photo picker, and generative AI for help making bios. The company said that AI will also improve its matching abilities and post-match guidance, in areas like conversation starters, nudges, and offering date ideas.

Mars

Martians Wanted: NASA Opens Call for Simulated Yearlong Mars Mission (nasa.gov) 55

"Would you like to live on Mars?" NASA asked Friday on social media.

"You can help us move humanity toward that goal by participating in a simulated, year-long Mars surface mission at NASA's Johnson Space Center." NASA is seeking applicants to participate in its next simulated one-year Mars surface mission to help inform the agency's plans for human exploration of the Red Planet. The second of three planned ground-based missions called CHAPEA (Crew Health and Performance Exploration Analog) is scheduled to kick off in spring 2025.

Each CHAPEA mission involves a four-person volunteer crew living and working inside a 1,700-square-foot, 3D-printed habitat based at NASA's Johnson Space Center in Houston. The habitat, called the Mars Dune Alpha, simulates the challenges of a mission on Mars, including resource limitations, equipment failures, communication delays, and other environmental stressors. Crew tasks include simulated spacewalks, robotic operations, habitat maintenance, exercise, and crop growth.

NASA is looking for healthy, motivated U.S. citizens or permanent residents who are non-smokers, 30-55 years old, and proficient in English for effective communication between crewmates and mission control. Applicants should have a strong desire for unique, rewarding adventures and interest in contributing to NASA's work to prepare for the first human journey to Mars...

As NASA works to establish a long-term presence for scientific discovery and exploration on the Moon through the Artemis campaign, CHAPEA missions provide important scientific data to validate systems and develop solutions for future missions to the Red Planet. With the first CHAPEA crew more than halfway through their yearlong mission, NASA is using research gained through the simulated missions to help inform crew health and performance support during Mars expeditions.

You can see the simulated Mars habitat in this NASA video.

The deadline for applicants is Tuesday, April 2, according to NASA. "A master's degree in a STEM field such as engineering, mathematics, or biological, physical or computer science from an accredited institution with at least two years of professional STEM experience or a minimum of one thousand hours piloting an aircraft is required."
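The compound requirement reads more clearly as logic. A toy Python sketch (the field names are invented, and this elides parts of NASA's full criteria):

```python
def chapea_eligible(age, us_citizen_or_pr, non_smoker, english_proficient,
                    stem_masters, stem_years, pilot_hours):
    """Rough encoding of the stated CHAPEA criteria (illustrative only)."""
    basics = (30 <= age <= 55) and us_citizen_or_pr and non_smoker and english_proficient
    # Either a STEM master's plus two years' professional experience,
    # or a minimum of 1,000 hours piloting an aircraft.
    experience = (stem_masters and stem_years >= 2) or pilot_hours >= 1000
    return basics and experience

print(chapea_eligible(35, True, True, True, stem_masters=True, stem_years=3, pilot_hours=0))     # True
print(chapea_eligible(40, True, True, True, stem_masters=False, stem_years=0, pilot_hours=1500)) # True: pilot route
print(chapea_eligible(28, True, True, True, stem_masters=True, stem_years=5, pilot_hours=0))     # False: under 30
```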
Programming

To Help Rust/C++ Interoperability, Google Gives Rust Foundation $1M (siliconangle.com) 61

An anonymous Slashdot reader shared this report from SiliconANGLE: The Rust Foundation, which supports the development of the popular open-source Rust programming language... shared that Google LLC had made a $1 million contribution specifically earmarked for a C++/Rust interoperability effort known as the "Interop Initiative." The initiative aims to foster seamless integration between Rust and the widely used C++ programming language, addressing one of the significant barriers to Rust's adoption in legacy systems entrenched in C++ code.

Rust has the ability to prevent common memory errors that plague C++ programs and offers a path toward more secure and reliable software systems. However, transitioning from C++ to Rust presents notable challenges, particularly for organizations with extensive C++ codebases. The Interop Initiative seeks to mitigate these challenges by facilitating smoother transitions and enabling organizations to leverage Rust's advantages without completely overhauling their existing systems.

As part of the initiative, the Rust Foundation will collaborate closely with the Rust Project Leadership Council, stakeholders and member organizations to develop a comprehensive scope of work. The collaborative effort will focus on enhancing build system integration, exploring artificial intelligence-assisted code conversion techniques and expanding upon existing interoperability frameworks. By addressing these strategic areas, the initiative aims to accelerate the adoption of Rust across the software industry and hence contribute to advancing memory safety and reducing the prevalence of software vulnerabilities.

A post on Google's security blog says they're excited to collaborate "to ensure that any additions made are suitable and address the challenges of Rust adoption that projects using C++ face. Improving memory safety across the software industry is one of the key technology challenges of our time, and we invite others across the community and industry to join us in working together to secure the open source ecosystem for everyone."

The blog post also includes this quote from Google's VP of engineering, Android security and privacy. "Based on historical vulnerability density statistics, Rust has proactively prevented hundreds of vulnerabilities from impacting the Android ecosystem. This investment aims to expand the adoption of Rust across various components of the platform."

The Register adds: Lars Bergstrom, director of Android platform tools and libraries and chair of the Rust Foundation Board, announced the grant and said that the funding will "improve the ability of Rust code to interoperate with existing legacy C++ codebases.... Integrating Rust today is possible where there is a fallback C API, but for high-performance and high-fidelity interoperability, improving the ability to work directly with C++ code is the single biggest initiative that will further the ability to adopt Rust...."
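The "fallback C API" Bergstrom describes is the same C-ABI bridge that any language's FFI uses. As a loose illustration (in Python rather than Rust, and calling the C math library rather than a C++ codebase), ctypes can reach any library that exports plain C symbols:

```python
import ctypes
import ctypes.util

# Load the C math library through its C ABI -- the lowest-common-denominator
# interface that Rust, Python, and C++ can all speak.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))
```

A real Rust/C++ bridge (via tools such as cxx or bindgen) works on the same principle: both sides agree on a C-compatible boundary, which is exactly the bottleneck the Interop Initiative wants to widen into direct C++ interoperability.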

According to Bergstrom, Google's most significant increase in the use of Rust has occurred in Android, where interoperability started receiving attention in 2021, although Rust is also being deployed elsewhere.... Bergstrom said that as of mid-2023, Google had more than 1,000 developers who had committed Rust code, adding that the ad giant recently released the training material it uses. "We also have a team working on building out interoperability," he added. "We hope that this team's work on addressing challenges specific to Google's codebases will complement the industry-wide investments from this new grant we've provided to the Rust Foundation."

Google's grant matches a $1 million grant last November from Microsoft, which also committed $10 million in internal investment to make Rust a "first-class language in our engineering systems." The Google-bucks are expected to fund further interoperability efforts, along the lines of KDAB's bidirectional Rust and C++ bindings with Qt.

Moon

Photo Shows Japan's Moon Lander Arrived Upside-Down (mashable.com) 22

"A photo of Japan's robotic moon lander shows that though the spacecraft did make the quarter-million-mile journey to the lunar surface, it landed upside down..." reports Mashable. Because of the lander's now-apparent inverted position, its solar panels weren't oriented correctly to generate power, according to the space agency. The team elected to conserve power by shutting down the spacecraft about 2.5 hours after landing.

What's perhaps as surprising as the photo of the lander is how it was taken. Two small rovers separated from the crewless mothership just prior to touchdown. It was one of these baseball-sized robots that was able to snap the image of the spacecraft with its head in the moondust. The rover, built with the help of Japanese toy maker Takara Tomy, is a sphere that splits in half to expose a pair of cameras that point front and back. The two hemispheres also become the rover wheels. "The company is perhaps most famous for originally creating the Transformers, the alien robots that can disguise themselves as machines," said Elizabeth Tasker, who provided commentary on the moon landing in English on Jan. 20.

The space agency still isn't entirely sure what went wrong. At about 55 yards above the ground, the spacecraft performed an obstacle avoidance maneuver, part of the pinpoint-landing demonstration. Just prior to this step, one of the two main engines stopped thrusting, throwing the lander's orientation off. JAXA is continuing to investigate what caused the engine problem... Despite the fact that the spacecraft is now sleeping, the SLIM team hasn't lost hope for a recovery. With solar panels facing west, the lander still has a chance of catching some rays and generating power. If the angle of sunlight changes, SLIM could still be awakened, mission officials said.

That would have to happen soon, though. Night will fall on the moon on Feb. 1, bringing about freezing temperatures. The spacecraft was not built to withstand those conditions.

NASA's Lunar Reconnaissance Orbiter spacecraft has now passed over the landing site at an altitude of about 50 miles (80 km) — and snapped its own photograph, which NASA says shows "the slight change in reflectance around the lander due to engine exhaust sweeping the surface."

Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale-from-Production starts with Clyso assisting "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]... I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which was quoted at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still a comfortable 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery...

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...
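The direct-to-drive FIO baseline described above can be sketched as a job file along these lines (the device path, block size, and queue depth here are hypothetical illustrations, not the parameters from the post; note that running a read job against a raw NVMe device touches the whole device):

```ini
; Sketch of a large-sequential-read FIO baseline against a raw NVMe drive.
; All values are assumptions for illustration -- adjust for your hardware.
[global]
ioengine=libaio   ; async IO via io_submit, the same path the OSD exercises
direct=1          ; bypass the page cache to measure the drive itself
iodepth=32
bs=4M             ; large IOs, matching the "large reads and writes" test

[nvme-seq-read]
filename=/dev/nvme0n1   ; hypothetical device path
rw=read
time_based=1
runtime=60
```

Comparing a job like this against the OSD-level results is what isolated the anomaly to Ceph's IO path rather than the drives themselves.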

For over a week, we looked at everything from BIOS settings, NVMe multipath, and low-level NVMe debugging to changing kernel/Ubuntu versions and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."
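Taken together, the three fixes come down to a handful of platform and build settings. A hedged sketch of what they might look like in practice (exact BIOS option names, kernel parameters, and build flags vary by platform and distribution; these are assumptions to verify, not the authors' exact commands):

```
# Fix One -- C-states: set the BIOS power profile to "Maximum Performance";
# a runtime approximation on Linux is:  cpupower idle-set -D 0

# Fix Two -- IOMMU: disabled via the kernel command line, e.g. on an AMD
# platform, add  amd_iommu=off  to GRUB_CMDLINE_LINUX and reboot.

# Fix Three -- RocksDB compile flags: rebuild Ceph so the embedded RocksDB
# is compiled with optimization flags (the post describes patched 17.2.7
# packages) rather than an unoptimized default build.
```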

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950 GiB/s — then tried some more performance optimizations...


Bitcoin

Is India Done With Crypto? (techcrunch.com) 35

An anonymous reader shares a column: Apple delisting a dozen global crypto apps -- relied on by big traders in India, in part for their tax-evasion potential -- from its Indian App Store seems like the final nail in the coffin, capping a brutal two years. The pending removal across Google Play, internet providers and beyond caps a journey mired with shutdowns, pivots and relocations abroad for Indian crypto startups. The web3 dreams of local entrepreneurs now appear dashed against the rocky shores of regulatory resistance.
Cellphones

Will Switching to a Flip Phone Fight Smartphone Addiction? (omanobserver.om) 152

"This December, I made a radical change," writes a New York Times tech reporter — ditching their $1,300 iPhone 15 for a $108 flip phone.

"It made phone calls and texts, and that was about it. It didn't even have Snake on it..." The decision to "upgrade" to the Journey was apparently so preposterous that my carrier wouldn't allow me to do it over the phone.... Texting anything longer than two sentences involved an excruciating amount of button pushing, so I started to call people instead. This was a problem because most people don't want their phone to function as a phone... [Most voicemails] were never acknowledged. It was nearly as reliable a method of communication as putting a message in a bottle and throwing it out to sea...

My black clamshell of a phone had the effect of a clerical collar, inducing people to confess their screen time sins to me. They hated that they looked at their phone so much around their children, that they watched TikTok at night instead of sleeping, that they looked at it while they were driving, that they started and ended their days with it. In a 2021 Pew Research survey, 31 percent of adults reported being "almost constantly online" — a feat possible only because of the existence of the smartphone.

This was the most striking aspect of switching to the flip. It meant the digital universe and its infinite pleasures, efficiencies and annoyances were confined to my computer. That was the source of people's skepticism: They thought I wouldn't be able to function without Uber, not to mention the world's knowledge, at my beck and call. (I grew up in the '90s. It wasn't that bad...

"Do you feel less well-informed?" one colleague asked. Not really. Information made its way to me, just slightly less instantly. My computer still offered news sites, newsletters and social media rubbernecking.

There were disadvantages — and not just living without Google Maps. ("I've got an electric vehicle, and upon pulling into a public charger, low on miles, realized that I could not log into the charger without a smartphone app... I received a robot vacuum for Christmas ... which could only be set up with an iPhone app.") Two-factor authentication was impossible.

But "Despite these challenges, I survived, even thrived during the month. It was a relief to unplug my brain from the internet on a regular basis and for hours at a time. I read four books... I felt that I had more time, and more control over what to do with it... my sleep improved dramatically."

"I do plan to return to my iPhone in 2024, but in grayscale and with more mindfulness about how I use it."
China

China Is Stealing AI Secrets To Turbocharge Spying, US Says 50

U.S. officials are worried about hacking and insider theft of AI secrets, which China has denied. From a report: On a July day in 2018, Xiaolang Zhang headed to the San Jose, Calif., airport to board a flight to Beijing. He had passed the checkpoint at Terminal B when his journey was abruptly cut short by federal agents. After a tipoff by Apple's security team, the former Apple employee was arrested and charged with stealing trade secrets related to the company's autonomous-driving program. It was a skirmish in a continuing shadow war between the U.S. and China for supremacy in artificial intelligence. The two rivals are seeking any advantage to jump ahead in mastering a technology with the potential to reshape economies, geopolitics and war.

Artificial intelligence has been on the Federal Bureau of Investigation's list of critical U.S. technologies to protect, just as China placed it on a list of technologies it wanted its scientists to achieve breakthroughs on by 2025. China's AI capabilities are already believed to be formidable, but U.S. intelligence authorities have lately made new warnings beyond the threat of intellectual-property theft. Instead of just stealing trade secrets, the FBI and other agencies believe China could use AI to gather and stockpile data on Americans at a scale that was never before possible. China has been linked to a number of significant thefts of personal data over the years, and artificial intelligence could be used as an "amplifier" to support further hacking operations, FBI Director Christopher Wray said, speaking at a press conference in Silicon Valley earlier this year.
Networking

New Internet Standard L4S: the Quiet Plan to Make the Internet Feel Faster (theverge.com) 79

Slow load times? Choppy videos? The real problem is latency, writes the Verge — but the good news is "there's a plan to almost eliminate latency, and big companies like Apple, Google, Comcast, Charter, Nvidia, Valve, Nokia, Ericsson, T-Mobile parent company Deutsche Telekom, and more have shown an interest." It's a new internet standard called L4S that was finalized and published in January, and it could put a serious dent in the amount of time we spend waiting around for webpages or streams to load and cut down on glitches in video calls. It could also help change the way we think about internet speed and help developers create applications that just aren't possible with the current realities of the internet... L4S stands for Low Latency, Low Loss, Scalable Throughput, and its goal is to make sure your packets spend as little time needlessly waiting in line as possible by reducing the need for queuing. To do this, it works on making the latency feedback loop shorter; when congestion starts happening, L4S means your devices find out about it almost immediately and can start doing something to fix the problem. Usually, that means backing off slightly on how much data they're sending... [L4S] makes it easier to maintain a good amount of data throughput without adding latency that increases the amount of time it takes for data to be transferred...

If you really want to get into it (and you know a lot about networking), you can read the specification paper on the Internet Engineering Task Force's website... The L4S standard adds an indicator to packets, which says whether they experienced congestion on their journey from one device to another. If they sail right on through, there's no problem, and nothing happens. But if they have to wait in a queue for more than a specified amount of time, they get marked as having experienced congestion. That way, the devices can start making adjustments immediately to keep the congestion from getting worse and to potentially eliminate it altogether... In terms of reducing latency on the internet, L4S or something like it is "a pretty necessary thing," according to Greg White, a technologist at research and development firm CableLabs who helped work on the standard. "This buffering delay typically has been hundreds of milliseconds to even thousands of milliseconds in some cases. Some of the earlier fixes to buffer bloat brought that down into the tens of milliseconds, but L4S brings that down to single-digit milliseconds...."
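The marking rule described here can be illustrated with a toy queue model. This is a sketch of the idea only: the threshold value, class names, and simulation are invented for illustration, and real implementations mark the ECN field in the IP header per the IETF specifications rather than a Python flag.

```python
# Toy model of L4S-style congestion marking: a packet that waits in the
# queue longer than a small sojourn-time threshold gets marked as having
# "experienced congestion" instead of being dropped, so the sender can
# back off slightly and immediately.
from collections import deque

SOJOURN_THRESHOLD_MS = 1.0  # hypothetical marking threshold (~1 ms)

class Packet:
    def __init__(self, enqueue_time_ms):
        self.enqueue_time_ms = enqueue_time_ms
        self.ce_marked = False  # "congestion experienced" indicator

def drain(queue, now_ms):
    """Dequeue one packet, marking it if it queued too long."""
    pkt = queue.popleft()
    if now_ms - pkt.enqueue_time_ms > SOJOURN_THRESHOLD_MS:
        pkt.ce_marked = True  # signal congestion, don't drop the packet
    return pkt

# A brief burst: 5 packets arrive at t=0; the link drains 1 packet per ms.
q = deque(Packet(0.0) for _ in range(5))
marks = [drain(q, now_ms=t).ce_marked for t in range(5)]
print(marks)  # -> [False, False, True, True, True]
```

The packets at the back of the burst sat in the queue past the threshold and come out marked; a scalable sender reacts to the proportion of marked packets by trimming its rate a little, rather than halving it as classic congestion control would on a loss.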

Here's the bad news: for the most part, L4S isn't in use in the wild yet. However, there are some big names involved with developing it... When we spoke to Greg White from CableLabs, he said there were already around 20 cable modems that support it today and that several ISPs like Comcast, Charter, and Virgin Media have participated in events meant to test how prerelease hardware and software work with L4S. Companies like Nokia, Vodafone, and Google have also attended, so there definitely seems to be some interest. Apple put an even bigger spotlight on L4S at WWDC 2023 after including beta support for it in iOS 16 and macOS Ventura... At around the same time as WWDC, Comcast announced the industry's first L4S field trials in collaboration with Apple, Nvidia, and Valve. That way, content providers can mark their traffic (like Nvidia's GeForce Now game streaming), and customers in the trial markets with compatible hardware like the Xfinity 10G Gateway XB7 / XB8, Arris S33, or Netgear CM1000v2 gateway can experience it right now...

The other factor helping L4S is that it's broadly compatible with the congestion control systems in use today...

Android

Nothing is Bringing iMessage To Its Android Phone (theverge.com) 146

Nothing Phone 2 owners get blue bubbles now. The company says it has added iMessage to its newest phone through a new "Nothing Chats" app powered by the messaging platform Sunbird. From a report: The feature will be available to users in North America, the EU, and other European countries starting this Friday, November 17th. Nothing writes on its page that it's doing this because "messaging services are dividing phone users," and it wants "to break those barriers down." But doing so here requires you to trust Sunbird. Nothing's FAQ says Sunbird's "architecture provides a system to deliver a message from one user to another without ever storing it at any point in its journey," and that messages aren't stored on its servers.

Marques Brownlee has also had a preview of Nothing Chats. He confirmed with Nothing that, similar to how other iMessage-to-Android bridge services have worked before, "...it's literally signing in on some Mac Mini in a server farm somewhere, and that Mac Mini will then do all of the routing for you to make this happen." Nothing's US head of PR, Jane Nho, told The Verge in an email that Sunbird stores user iCloud credentials as a token "in an encrypted database" and associated with one of its Mac Minis in the US or Europe, depending on the user's location, that then act as a relay for iMessages sent via the app. She added that, after two weeks of inactivity, Sunbird deletes the account information.

Social Networks

Tumblr Is Reportedly On Life Support As Its Latest Owner Reassigns Staff (arstechnica.com) 54

Tumblr may be nearing its end after its management sent memos to staff with the Lord Tennyson quote about having "loved and lost." Ars Technica reports: Internet statesman and Waxy.org proprietor Andy Baio posted what is "apparently an internal Automattic memo making the rounds on Tumblr" to Threads. The memo, written to employees at WordPress.com parent company Automattic, which bought Tumblr from Verizon's media arm in 2019, is titled or subtitled "You win or you learn." The posted memo states that a majority of the 139 employees working on product and marketing at Tumblr (in a team apparently named "Bumblr") will "switch to other divisions." Those working in "Happiness" (Automattic's customer support and service division) and "T&S" (trust and safety) would remain.

"We are at the point where after 600+ person-years of effort put into Tumblr since the acquisition in 2019, we have not gotten the expected results from our effort, which was to have revenue and usage above its previous peaks," the posted memo reads. After quotes and anecdotes about love, loss, mountain climbing, and learning on the journey, the memo notes that nobody will be let go and that team members can make a ranked list of their top three preferred assignments elsewhere inside Automattic.

Crime

FTX Founder Sam Bankman-Fried Found Guilty of Fraud (yahoo.com) 135

Slashdot readers schwit1 and Another Random Kiwi share the breaking news that FTX founder Sam Bankman-Fried has been found guilty of fraud. From the Associated Press: FTX founder Sam Bankman-Fried's spectacular rise and fall in the cryptocurrency industry -- a journey that included his testimony before Congress, a Super Bowl advertisement and dreams of a future run for president -- hit a new bottom Thursday when a New York jury convicted him of fraud in a scheme that cheated customers and investors of at least $10 billion. After the monthlong trial, jurors rejected Bankman-Fried's claim during four days on the witness stand in Manhattan federal court that he never committed fraud or meant to cheat customers before FTX, once the world's second-largest crypto exchange, collapsed into bankruptcy a year ago.

"His crimes caught up to him. His crimes have been exposed," Assistant U.S. Attorney Danielle Sassoon told the jury of the onetime billionaire just before they were read the law by Judge Lewis A. Kaplan and began deliberations. Sassoon said Bankman-Fried turned his customers' accounts into his "personal piggy bank" as up to $14 billion disappeared. [...] U.S. Attorney Damian Williams told reporters after the verdict that Bankman-Fried "perpetrated one of the biggest financial frauds in American history, a multibillion dollar scheme designed to make him the king of crypto." "But here's the thing: The cryptocurrency industry might be new. The players like Sam Bankman-Fried might be new. This kind of fraud, this kind of corruption is as old as time and we have no patience for it," he said.
