Transportation

Titanic: First Ever Full-sized Scans Reveal Wreck As Never Seen Before (bbc.co.uk) 41

"The first full-sized digital scan of the Titanic, which lies 3,800m (12,500ft) down in the Atlantic, has been created using deep-sea mapping," reports the BBC.

Their article includes a one-minute video showing the results. "It provides a unique 3D view of the entire ship, enabling it to be seen as if the water has been drained away." "There are still questions, basic questions, that need to be answered about the ship," Parks Stephenson, a Titanic analyst, told BBC News. He said the model was "one of the first major steps to driving the Titanic story towards evidence-based research — and not speculation."

The Titanic has been extensively explored since the wreck was discovered in 1985. But it's so huge that in the gloom of the deep, cameras can only ever show us tantalizing snapshots of the decaying ship — never the whole thing. The new scan captures the wreck in its entirety, revealing a complete view of the Titanic. It lies in two parts, with the bow and the stern separated by about 800m (2,600ft). A huge debris field surrounds the broken vessel.

The scan was carried out in summer 2022 by Magellan Ltd, a deep-sea mapping company, and Atlantic Productions, who are making a documentary about the project. Submersibles, remotely controlled by a team on board a specialist ship, spent more than 200 hours surveying the length and breadth of the wreck. They took more than 700,000 images from every angle, creating an exact 3D reconstruction...

In the surrounding debris field, items are scattered, including ornate metalwork from the ship, statues and unopened champagne bottles. There are also personal possessions, including dozens of shoes resting on the sediment.

AI

Google Search Gets AI-Powered 'Snapshots' (theverge.com) 14

"The AI takeover of Google Search starts now," writes The Verge's David Pierce. At Google I/O today, the company demoed a new opt-in feature called Search Generative Experience (SGE). The new experience generates AI "snapshots" that appear at the top of the search results page consisting of an AI-generated summary about your query, with links to sources of information and shopping. From the report: To demonstrate, Liz Reid, Google's VP of Search, flips open her laptop and starts typing into the Google search box. "Why is sourdough bread still so popular?" she writes and hits enter. Google's normal search results load almost immediately. Above them, a rectangular orange section pulses and glows and shows the phrase "Generative AI is experimental." A few seconds later, the glowing is replaced by an AI-generated summary: a few paragraphs detailing how good sourdough tastes, the upsides of its prebiotic abilities, and more. To the right, there are three links to sites with information that Reid says "corroborates" what's in the summary.

Google calls this the "AI snapshot." All of it is by Google's large language models, all of it sourced from the open web. Reid then mouses up to the top right of the box and clicks an icon Google's designers call "the bear claw," which looks like a hamburger menu with a vertical line to the left. The bear claw opens a new view: the AI snapshot is now split sentence by sentence, with links underneath to the sources of the information for that specific sentence. This, Reid points out again, is corroboration. And she says it's key to the way Google's AI implementation is different. "We want [the LLM], when it says something, to tell us as part of its goal: what are some sources to read more about that?"

A few seconds later, Reid clicks back and starts another search. This time, she searches for the best Bluetooth speakers for the beach. Again, standard search results appear almost immediately, and again, AI results are generated a few seconds later. This time, there's a short summary at the top detailing what you should care about in such a speaker: battery life, water resistance, sound quality. Links to three buying guides sit off to the right, and below are shopping links for a half-dozen good options, each with an AI-generated summary next to it. I ask Reid to follow up with the phrase "under $100," and she does so. The snapshot regenerates with new summaries and new picks.
"This is the new look of Google's search results page," concludes Pierce. "It's AI-first, it's colorful, and it's nothing like you're used to. It's powered by some of Google's most advanced LLM work to date, including a new general-purpose model called PaLM 2 and the Multitask Unified Model (MUM) that Google uses to understand multiple types of media."

"In the demos I saw, it's often extremely impressive. And it changes the way you'll experience search, especially on mobile, where that AI snapshot often eats up the entire first page of your results."
AI

A Brain Scanner Combined With an AI Language Model Can Provide a Glimpse Into Your Thoughts 23

An anonymous reader quotes a report from Scientific American: Functional magnetic resonance imaging (fMRI) captures coarse, colorful snapshots of the brain in action. While this specialized type of magnetic resonance imaging has transformed cognitive neuroscience, it isn't a mind-reading machine: neuroscientists can't look at a brain scan and tell what someone was seeing, hearing or thinking in the scanner. But gradually scientists are pushing against that fundamental barrier to translate internal experiences into words using brain imaging. This technology could help people who can't speak or otherwise outwardly communicate such as those who have suffered strokes or are living with amyotrophic lateral sclerosis. Current brain-computer interfaces require the implantation of devices in the brain, but neuroscientists hope to use non-invasive techniques such as fMRI to decipher internal speech without the need for surgery.

Now researchers have taken a step forward by combining fMRI's ability to monitor neural activity with the predictive power of artificial intelligence language models. The hybrid technology has resulted in a decoder that can reproduce, with a surprising level of accuracy, the stories that a person listened to or imagined telling in the scanner. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy. "There's a lot more information in brain data than we initially thought," said Jerry Tang, a computational neuroscientist at the University of Texas at Austin and the study's lead author, during a press briefing. The research, published on Monday in Nature Communications, is what Tang describes as "a proof of concept that language can be decoded from noninvasive recordings of brain activity."

The decoder technology is in its infancy. It must be trained extensively for each person who uses it, and it doesn't construct an exact transcript of the words they heard or imagined. But it is still a notable advance. Researchers now know that the AI language system, an early relative of the model behind ChatGPT, can help make informed guesses about the words that evoked brain activity just by looking at fMRI brain scans. While current technological limitations prevent the decoder from being widely used, for good or ill, the authors emphasize the need to enact proactive policies that protect the privacy of one's internal mental processes. [...] The model misses a lot about the stories it decodes. It struggles with grammatical features such as pronouns. It can't decipher proper nouns such as names and places, and sometimes it just gets things wrong altogether. But it achieves a high level of accuracy, compared with past methods. Between 72 and 82 percent of the time in the stories, the decoder was more accurate at decoding their meaning than would be expected from random chance.
Here's an example of what one study participant heard, as transcribed in the paper: "i got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back at me but instead finding only darkness." The model went on to decode: "i just continued to walk up to the window and open the glass i stood on my toes and peered out i didn't see anything and looked up again i saw nothing."

The research was published in the journal Nature Communications.
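
One plausible reading of the approach described above is a propose-and-score loop: a language model suggests likely continuations of the text, a separately trained encoding model predicts the fMRI response each candidate would evoke, and the candidate whose prediction best matches the measured scan is kept. Here is a minimal, hypothetical Python sketch of a single decoding step; every helper is an illustrative stand-in, not the authors' code.

    # Hypothetical propose-and-score decoding step. The helpers stand in for
    # a real language model and a real fMRI encoding model.
    import numpy as np

    def propose_continuations(prefix):
        # Stand-in for a language model suggesting likely next words.
        return [prefix + [word] for word in ("window", "door", "darkness")]

    def predict_response(words):
        # Stand-in for an encoding model mapping a word sequence to the
        # fMRI (BOLD) response it would be expected to evoke.
        rng = np.random.default_rng(abs(hash(tuple(words))) % (2 ** 32))
        return rng.normal(size=64)

    def decode_step(prefix, measured_response):
        # Keep the candidate whose predicted response best matches the scan.
        candidates = propose_continuations(prefix)
        errors = [np.linalg.norm(predict_response(c) - measured_response)
                  for c in candidates]
        return candidates[int(np.argmin(errors))]

    measured = np.zeros(64)  # placeholder for one measured response vector
    print(decode_step(["i", "looked", "out", "the"], measured))

A real decoder keeps many candidate sequences at once and, as noted above, must be trained extensively for each participant before any of this works.
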
Science

Covid's Effect on Mental Health Not as Great as First Thought, Study Suggests (theguardian.com) 110

Covid-19 may not have taken as great a toll on the mental health of most people as earlier research has indicated, a new study suggests. From a report: The pandemic resulted in "minimal" changes in mental health symptoms among the general population, according to a review of 137 studies from around the world led by researchers at McGill University in Canada, and published in the British Medical Journal. Brett Thombs, a psychiatry professor at McGill University and senior author, said some of the public narrative around the mental health impacts of Covid-19 were based on "poor-quality studies and anecdotes," which became "self-fulfilling prophecies," adding that there was a need for more "rigorous science."

However, some experts disputed this, warning such readings could obscure the impact on individual groups such as children, women and people with low incomes or pre-existing mental health problems. They also said other robust studies had reached different conclusions. Thombs said: "Mental health in Covid-19 is much more nuanced than people have made it out to be. Claims that the mental health of most people has deteriorated significantly during the pandemic have been based primarily on individual studies that are 'snapshots' of a particular situation, in a particular place, at a particular time. They typically don't involve any long-term comparison with what had existed before or came after."

IT

No, Remote Employees Aren't Becoming Less Engaged (hbr.org) 128

"Employees have gotten more — not less — engaged over the past three years since remote work became the norm for many knowledge workers," argues an assistant professor of management from the business school at the University of Texas at Austin. He'd teamed up with a software company providing analytics to large corporations to measure the number of spontaneously-happening individual remote meetings: Given the anecdotal evidence of workers recently disengaging or quiet quitting, we had originally predicted that one of the easiest ways to observe this effect would be a continual decrease in the number of times remote or hybrid coworkers were engaging — or meeting — with each other. However, we found quite the opposite.

To more deeply explore the nature of how remote collaboration is changing over time, we gathered metadata from all Zoom, Microsoft Teams, and Webex meetings (involving webcams on and/or off) from 10 large global organizations (seven of which are Fortune 500 firms) spanning a variety of fields, including technology, health care, energy, and financial services. Specifically, we compared six-week snapshots of raw meeting counts from April through mid-May in 2020 following the Covid-19 lockdowns, and the same set of six weeks in 2021 and 2022.... This dataset resulted in a total of more than 48 million meetings for more than half a million employees....

In 2020, 17% of meetings were one-on-one, but in 2022, 42% of meetings were one-on-one... In 2020, only 17% of one-on-one meetings were unscheduled, but in 2022, 66% of one-on-one meetings were unscheduled. Furthermore, the growth in one-on-one meetings between 2020 and 2022 was almost solely due to the increase in unscheduled meetings (whereas scheduled meetings remained relatively constant)... The combination of these findings presents an interesting picture: not that remote workers seem to be becoming less engaged, but rather — at least with respect to meetings — they are becoming more engaged with their colleagues.

This data also suggests that remote interactions are shifting to more closely mirror in-person interactions. Whereas there have been substantial concerns that employees are missing out on the casual and spontaneous rich interactions that happen in-person, these findings indicate that remote employees may be beginning to compensate for the loss of those interactions by increasingly having impromptu meetings remotely.
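
The percentages above come from straightforward counting over meeting metadata. As a rough illustration only, with made-up records and field names that are not the study's actual schema, the aggregation could look like this in Python:

    # Hypothetical meeting records; "attendees" and "scheduled" are
    # illustrative field names, not the dataset's real schema.
    meetings = [
        {"year": 2022, "attendees": 2, "scheduled": False},
        {"year": 2022, "attendees": 2, "scheduled": True},
        {"year": 2022, "attendees": 5, "scheduled": True},
        {"year": 2020, "attendees": 2, "scheduled": True},
        {"year": 2020, "attendees": 8, "scheduled": True},
    ]

    def one_on_one_shares(records, year):
        in_year = [m for m in records if m["year"] == year]
        one_on_one = [m for m in in_year if m["attendees"] == 2]
        unscheduled = [m for m in one_on_one if not m["scheduled"]]
        # Share of meetings that were one-on-one, and share of those
        # one-on-ones that were unscheduled (spontaneous).
        return len(one_on_one) / len(in_year), len(unscheduled) / len(one_on_one)

    for year in (2020, 2022):
        print(year, one_on_one_shares(meetings, year))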

IT

Diving into Digital Ephemera: Identifying Defunct URLs in the Web Archives (loc.gov) 7

Olivia Meehan, who worked on the web archiving team at the US Library of Congress, evaluates how well the online archives in the Papal Transition 2005 Collection have survived: Based on the results I have so far and conversations I've had with other web archivists, the lifecycle of websites is unpredictable to the extent that accurately tracking the status of a site inherently requires nuance, time, and attention -- which is difficult to maintain at scale. This data is valuable, however, and is worth pursuing when possible. Using a sample selection of URLs from larger collections could make this more manageable than comprehensive reviews.

Of the content originally captured in the Papal Transition 2005 Collection, 41% is now offline. Without the archived pages, the information, perspectives, and experiences expressed on those websites would potentially be lost forever. They include blogs, personal websites, individually-maintained web portals, and annotated bibliographies. They frequently represent small voices and unique perspectives that may be overlooked or under-represented by large online publications with the resources to maintain legacy pages and articles.

The internet is impermanent in a way that is difficult to quantify. The constant creation of new information obscures what is routinely deleted, overwritten, and lost. While the scope of this project is small within the context of the wider internet, and even within the context of the Library's Web Archive collections as a whole, I hope that it effectively demonstrates the value of web archives in preserving snapshots of the online world as it moves and changes at a record pace.
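
The mechanical part of the review described above is checking whether a sample of archived seed URLs still resolves. A minimal sketch of that check in Python, with placeholder URLs and no claim to be the Library's actual workflow:

    # Check whether a random sample of archived seed URLs is still reachable.
    # The URLs below are placeholders.
    import random
    import urllib.request

    seed_urls = [
        "https://example.com/",
        "https://example.org/papal-transition-2005/",
    ]

    def is_reachable(url, timeout=10):
        try:
            request = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.status < 400
        except (OSError, ValueError):
            # Covers DNS failures, timeouts, refused connections, bad URLs.
            return False

    sample = random.sample(seed_urls, k=min(50, len(seed_urls)))
    offline = [url for url in sample if not is_reachable(url)]
    print(f"{len(offline)} of {len(sample)} sampled URLs appear to be offline")

A real review still needs the nuance mentioned above: distinguishing a site that is gone from one that merely moved, redirects, or blocks automated requests.
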

Linux

What's New in Linux Mint 21 Cinnamon (linuxmint.com) 48

Today saw the release of Linux Mint 21 "Vanessa" Cinnamon Edition, a long-term support release (supported until 2027).

Release notes at LinuxMint.com promise that it comes with "refinements and many new features to make your desktop experience more comfortable." Among the highlights: its Bluetooth manager is now Blueman (instead of Blueberry). Blueberry depended on gnome-bluetooth, which was developed exclusively for GNOME. In contrast, Blueman relies on the standard Bluez stack which works everywhere and can even be used or queried from the command line. The Blueman manager and tray icon provide many features that weren't available in Blueberry and a lot more information which can be used to monitor your connection or troubleshoot Bluetooth issues.

Out of the box Blueman features better connectivity, especially when it comes to headsets and audio profiles. In preparation for Linux Mint 21 the Blueman user interface was improved and received support for symbolic icons. Upstream, Blueman and Bluez are actively developed and used in many environments.

The lack of thumbnails for some common file types was identified as a usability issue. To address it, a new Xapp project called xapp-thumbnailers was started and is now featured in Linux Mint 21. The project brings support for the following mimetypes:

- AppImage
- ePub
- MP3 (album cover)
- RAW pictures (most formats)
- Webp

Automated tasks are great for keeping your computer safe, but they can sometimes affect the system's performance while you're working on it. A little process monitor was added to Linux Mint to detect automated updates and automated system snapshots running in the background. Whenever an automated task is running, the monitor places an icon in your system tray. Your computer might still become slow momentarily during an update or a snapshot, but with a quick look at the tray you'll immediately know what's going on....

Linux Mint 21 uses IPP (the Internet Printing Protocol) for driverless printing and scanning, i.e. a standard protocol which communicates with printers/scanners without device-specific drivers. For most printers and scanners no drivers are needed, and the device is detected automatically.

And there's also a fabulous collection of new backgrounds.
IT

How One Company Survived a Ransomware Attack Without Paying the Ransom (esecurityplanet.com) 60

Slashdot reader storagedude writes: The first signs of the ransomware attack at data storage vendor Spectra Logic were reports from a number of IT staffers about little things going wrong at the beginning of the day. Matters steadily worsened within a very short time and signs of a breach became apparent. Screens then started to display a ransom demand, which said files had been encrypted by the NetWalker ransomware virus. The ransom demand was $3.6 million, to be paid in bitcoin within five days.

Tony Mendoza, Senior Director of Enterprise Business Solutions at Spectra Logic, laid out the details of the attack at the annual Fujifilm Recording Media USA Conference in San Diego late last month, as reported by eSecurity Planet.

"We unplugged systems, as the virus was spreading faster than we could investigate," Mendoza told conference attendees. "As we didn't have a comprehensive cybersecurity plan in place, the attack brought the entire business to its knees."

To make matters worse, the backup server had also been wiped out, but with the help of recovery specialist Ankura, uncorrupted snapshots and [offline] tape backups helped the company get back online in days, although full recovery took a month.

"We were able to restore everything and paid nothing," said Mendoza. "Other than a few files, all data was recovered."

The attack, which started from a successful phishing attempt, "took us almost a month to fully recover and get over the ransomware pain," said Mendoza.

Operating Systems

FreeBSD 13.1 Released (phoronix.com) 26

FreeBSD 13.1 was released today. Some of the new features include UEFI boot improvements for AMD64, a wide variety of hardware driver improvements, and support in freebsd-update for creating automated snapshots of the boot environment, in an effort to make operating system updates foolproof. Phoronix reports: Some of the other changes with FreeBSD 13.1 include enabling Position Independent Executable (PIE) support by default on 64-bit architectures, a new "zfskeys" service script for the automatic decryption of ZFS datasets, NVMe emulation with the Bhyve hypervisor, unprivileged chroot operations, various POWER and RISC-V improvements, big endian support improvements, support for the HiFive Unmatched RISC-V development board, updated OpenZFS file-system support from upstream, and many other changes throughout this BSD open-source ecosystem. Downloads and the full change-log for FreeBSD 13.1 can be found here.
Space

Webb Telescope Captures Five Different, Dazzling Views of a Nearby Galaxy (inverse.com) 29

Long-time Slashdot reader schwit1 shares a report from Inverse: It only took 25 years of development, 17 years of construction, eight launch delays, and five months of alignments, but finally, the James Webb Space Telescope is almost ready for prime time. New photos released by the European Space Agency — and an accompanying video from NASA — show images of stars taken by a fully aligned space telescope, instruments and all.

The image shows snapshots from each of Webb's three imaging instruments, plus its spectrograph and guidance sensor. The images show a field of stars in the Large Magellanic Cloud (LMC), a galaxy near the Milky Way about 158,000 light-years away. If it orbits our galaxy, that would make it, by far, the largest satellite galaxy. But there's a chance it's just passing through or slowly merging with our galaxy.

Security

Samsung Confirms Galaxy Source Code Breach (zdnet.com) 17

Samsung on Monday confirmed that the company recently suffered a cyberattack, but said that it doesn't anticipate any impact on its business or customers. From a report: Last week, South American hacking group Lapsus$ claimed it had stolen 190GB of confidential data, including source code, from the South Korean tech giant's servers. The group also posted snapshots of the alleged data online. Samsung has now confirmed in a statement, without naming the hacking group, that there was a security breach, but it asserted that no personal information of customers was compromised.
Science

Building the World's Brightest X-Ray Laser (cnet.com) 15

Thirty feet underground and a stone's throw from Stanford University, scientists are putting the finishing touches on a laser that could fundamentally change the way they study the building blocks of the universe. CNET reports: When completed next year, the Linac Coherent Light Source II, or the LCLS-II, will be the second world-class X-ray laser at the Department of Energy's SLAC National Accelerator Laboratory. CNET was given the rare opportunity to film inside the more than 2-mile-long tunnel ahead of the new laser's launch. The first LCLS, in operation since 2009, creates a beam capable of 120 light pulses per second. The LCLS-II will be capable of up to 1 million pulses per second, and a beam 10,000 times brighter than its predecessor.

You can think of the LCLS as being like a microscope with atomic resolution. At its core it is a particle accelerator, a device that speeds up charged particles and channels them into a beam. That beam is then run through a series of alternating magnets (a device called an undulator) to produce X-rays. Scientists can use those X-rays to create what they call molecular movies. These are snapshots of atoms and molecules in motion, captured within a few quadrillionths of a second, and strung together like a film. Scientists across nearly every scientific field have come from all over the world to run their experiments with the LCLS. Among other things, their molecular movies have shown chemical reactions as they happened, demonstrated the behavior of atoms inside stars, and produced live snapshots detailing the process of photosynthesis.

Though both lasers accelerate electrons to nearly the speed of light, they'll each do it differently. The LCLS's accelerator pushes the electrons down a copper pipe that operates at room temperature, designed to be activated only in short bursts. But the LCLS-II is designed to run continuously, which means it generates massive amounts of heat. A copper cavity would absorb too much of that heat. That's why engineers turned to a new superconducting accelerator, composed of dozens of 40-foot-long devices called cryomodules designed to run at two degrees above absolute zero (-456 degrees Fahrenheit). They're kept at operating temperature by a massive cryogenics plant above ground.

[T]he LCLS-II will allow SLAC scientists to answer questions they've been trying to solve for years. "How does energy transfer happen inside molecular systems? How does charge transfer happen? Once we understand some of these principles, we can start to apply them to understand how we can do artificial photosynthesis, how can we build better solar cells." Scientists at SLAC hope to produce their first electron beam with the LCLS-II in January, followed by their first X-ray in the summer, which they'll refer to as their first "big light" event.

Security

Chinese Espionage Tool Exploits Vulnerabilities In 58 Widely Used Websites (therecord.media) 23

A security researcher has discovered a web attack framework developed by a suspected Chinese government hacking group and used to exploit vulnerabilities in 58 popular websites to collect data on possible Chinese dissidents. From a report: Fifty-seven of the sites are popular Chinese portals, while the last is the site of the US newspaper the New York Times. In addition, the tool abused legitimate browser features in attempts to collect user keystrokes, a large swath of operating system details, geolocation data, and even webcam snapshots of a target's face -- although many of these capabilities weren't as silent as the exploits targeting third-party websites, since they also tended to trigger a browser notification prompt.

Named Tetris, the tool was found secretly uploaded on two websites with a Chinese readership. "The sites both appear to be independent newsblogs," said a security researcher going online under the pseudonym of Imp0rtp3, who analyzed the Tetris attack framework for the first time in a blog post earlier this month. "Both [sites] are focused on China, one site [is focused on China's] actions against Taiwan and Hong-Kong written in Chinese and still updated and the other about general atrocities done by the Chinese government, written in Swedish and last updated [in] 2016," the researcher said. According to Imp0rtp3, users who landed on these two websites were first greeted by Jetriz, the first of Tetris' two components, which would gather and read basic information about a visitor's browser.

Education

Anti-Cheating Technology Challenged at Dartmouth Medical School (yahoo.com) 85

Dartmouth College switched to remote tests when the coronavirus ended in-person exams — then accused 17 medical students of cheating, reports the New York Times: At the heart of the accusations is Dartmouth's use of the Canvas system to retroactively track student activity during remote exams without their knowledge. In the process, the medical school may have overstepped by using certain online activity data to try to pinpoint cheating, leading to some erroneous accusations, according to independent technology experts, a review of the software code and school documents obtained by The New York Times.

Dartmouth's drive to root out cheating provides a sobering case study of how the coronavirus has accelerated colleges' reliance on technology, normalizing student tracking in ways that are likely to endure after the pandemic. While universities have long used anti-plagiarism software and other anti-cheating apps, the pandemic has pushed hundreds of schools that switched to remote learning to embrace more invasive tools. Over the last year, many have required students to download software that can take over their computers during remote exams or use webcams to monitor their eye movements for possibly suspicious activity, even as technology experts have warned that such tools can be invasive, insecure, unfair and inaccurate.

Some universities are now facing a backlash over the technology....

While some students may have cheated, technology experts said, it would be difficult for a disciplinary committee to distinguish cheating from noncheating based on the data snapshots that Dartmouth provided to accused students. And in an analysis of the Canvas software code, the Times found instances in which the system automatically generated activity data even when no one was using a device. "If other schools follow the precedent that Dartmouth is setting here, any student can be accused based on the flimsiest technical evidence," said Cooper Quintin, senior staff technologist at the Electronic Frontier Foundation, a digital rights organization, who analyzed Dartmouth's methodology.

Seven of the 17 accused students have had their cases dismissed. In at least one of those cases, administrators said, "automated Canvas processes are likely to have created the data that was seen rather than deliberate activity by the user," according to a school email that students made public. The 10 others have been expelled, suspended or received course failures and unprofessional-conduct marks on their records that could curtail their medical careers... Tensions flared in early April when an anonymous student account on Instagram posted about the cheating charges. Soon after, Dartmouth issued a social media policy warning that students' anonymous posts "may still be traced back" to them.... The conduct review committee then issued decisions in 10 of the cases, telling several students that they would be expelled, suspending others and requiring some to retake courses or repeat a year of school at a cost of nearly $70,000...

Several students said they were now so afraid of being unfairly targeted in a data-mining dragnet that they had pushed the medical school to offer in-person exams with human proctors. Others said they had advised prospective medical students against coming to Dartmouth.

Privacy

Unlike Clearview AI, this Facial-Recognition Search Engine is Open to Everyone (cnn.com) 30

This week CNN investigated PimEyes, a "mysterious" but powerful facial-recognition search engine: If you upload a picture of your face to PimEyes' website, it will immediately show you any pictures of yourself that the company has found around the internet. You might recognize all of them, or be surprised (or, perhaps, even horrified) by some; these images may include anything from wedding or vacation snapshots to pornographic images. PimEyes is open to anyone with internet access. It's a stark contrast from Clearview AI, which became well-known for building its enormous stash of faces with images of people from social networks and limits its use to law enforcement (Clearview has said it has hundreds of such customers).

PimEyes' decision to make facial-recognition software available to the general public crosses a line that technology companies are typically unwilling to traverse, and opens up endless possibilities for how it can be used and abused. Imagine a potential employer digging into your past, an abusive ex tracking you, or a random stranger snapping a photo of you in public and then finding you online. This is all possible through PimEyes: Though the website instructs users to search for themselves, it doesn't stop them from uploading photos of anyone. At the same time, it doesn't explicitly identify anyone by name, but as CNN Business discovered by using the site, that information may be just clicks away from images PimEyes pulls up...

PimEyes lets users see a limited number of small, somewhat pixelated search results at no cost, or you can pay a monthly fee, which starts at $29.99, for more extensive search results and features (such as the ability to click through to see full-size images on the websites where PimEyes found them and to set up alerts for when PimEyes finds new pictures of faces online that its software believes match an uploaded face)... Although PimEyes instructs visitors to only search for their own face, there's no mechanism on the site to ensure it's used this way... There's also no way to ensure this facial-recognition technology isn't used to misidentify people...

The website currently lists no information about who owns or runs the search engine, or how to reach them, and users must submit a form to get answers to questions or help with accounts.

Open Source

FreeBSD's Close Call: How Flawed Code Almost Made It Into the Kernel (arstechnica.com) 60

"40,000 lines of flawed code almost made it into FreeBSD's kernel," writes Ars Technica, reporting on what happened when the CEO of Netgate, which makes FreeBSD-powered routers, decided it was time for FreeBSD to enjoy the same level of in-kernel WireGuard support that Linux does. The issue arose after Netgate offered a burned-out developer a contract to port WireGuard into the FreeBSD kernel (where Netgate could then use it in the company's popular pfSense router distribution): [The developer] committed his port — largely unreviewed and inadequately tested — directly into the HEAD section of FreeBSD's code repository, where it was scheduled for incorporation into FreeBSD 13.0-RELEASE. This unexpected commit raised the stakes for WireGuard founding developer Jason Donenfeld, whose project would ultimately be judged on the quality of any production release under the WireGuard name. Donenfeld identified numerous problems...but rather than object to the port's release, Donenfeld decided to fix the issues. He collaborated with FreeBSD developer Kyle Evans and with Matt Dunwoodie, an OpenBSD developer who had worked on WireGuard for that operating system...

How did so much sub-par code make it so far into a major open source operating system? Where was the code review which should have stopped it? And why did both the FreeBSD core team and Netgate seem more focused on the fact that the code was being disparaged than its actual quality?

There's more to the story, but ultimately Ars Technica confirmed the presence of multiple buffer overflows, printf statements that are still being triggered in production, and even an empty validation function which always "returns true" rather than actually validating the data. The original developer argued the real issue is an absence of quality reviewers, but Ars Technica sees a larger problem. "There seems to be an absence of process to ensure quality code review." Several FreeBSD community members would only speak off the record. In essence, most seem to agree, you either have a commit bit (enabling you to commit code to FreeBSD's repositories) or you don't. It's hard to find code reviews, and there generally isn't a fixed process ensuring that vitally important code gets reviewed prior to inclusion. This system thus relies heavily on the ability and collegiality of individual code creators.
Ars Technica published this statement from the FreeBSD Core Team: Core unconditionally values the work of all contributors, and seeks a culture of cooperation, respect, and collaboration. The public discourse over WireGuard in the past week does not meet these standards and is damaging to our community if not checked. As such, WireGuard development for FreeBSD will now proceed outside of the base system. For those who wish to evaluate, test, or experiment with WireGuard, snapshots will be available via the ports and package systems.

As a project, we remain committed to continually improving our development process. We'll also continue to refine our tooling to make code reviews and continuous integration easier and more effective. The Core Team asks that the community use these tools and work together to improve FreeBSD.

Ars Technica applauds the efforts — while remaining concerned about the need for them. "FreeBSD is an important project that deserves to be taken seriously. Its downstream consumers include industry giants such as Cisco, Juniper, NetApp, Netflix, Sony, Sophos, and more. The difference in licensing between FreeBSD and Linux gives FreeBSD a reach into many projects and spaces where the Linux kernel would be a difficult or impossible fit."
Programming

Rookie Coding Mistake Prior To Gab Hack Came From Site's CTO (arstechnica.com) 164

An anonymous reader quotes a report from Ars Technica: Over the weekend, word emerged that a hacker breached far-right social media website Gab and downloaded 70 gigabytes of data by exploiting a garden-variety security flaw known as an SQL injection. A quick review of Gab's open source code shows that the critical vulnerability -- or at least one very much like it -- was introduced by the company's chief technology officer. The change, which in the parlance of software development is known as a "git commit," was made sometime in February from the account of Fosco Marotto, a former Facebook software engineer who in November became Gab's CTO. On Monday, Gab removed the git commit from its website. Below is an image showing the February software change, as shown from a site that provides saved commit snapshots.

The commit shows a software developer using the name Fosco Marotto introducing precisely the type of rookie mistake that could lead to the kind of breach reported this weekend. Specifically, line 23 strips the code of "reject" and "filter," which are API functions that implement a programming idiom that protects against SQL injection attacks. This idiom allows programmers to compose an SQL query in a safe way that "sanitizes" the inputs that website visitors enter into search boxes and other web fields to ensure that any malicious commands are stripped out before the text is passed to backend servers. In their place, the developer added a call to the Rails function that contains the "find_by_sql" method, which accepts unsanitized inputs directly in a query string. Rails is a widely used website development toolkit.

"Sadly Rails documentation doesn't warn you about this pitfall, but if you know anything at all about using SQL databases in web applications, you'd have heard of SQL injection, and it's not hard to come across warnings that find_by_sql method is not safe," Dmitry Borodaenko, a former production engineer at Facebook who brought the commit to my attention wrote in an email. "It is not 100% confirmed that this is the vulnerability that was used in the Gab data breach, but it definitely could have been, and this code change is reverted in the most recent commit that was present in their GitLab repository before they took it offline." Ironically, Fosco in 2012 warned fellow programmers to use parameterized queries to prevent SQL injection vulnerabilities.

Data Storage

Ask Slashdot: What's the Ultimate Backup System? Cloud? Local? Sync? Dupes? Tape...? (bejoijo.com) 289

Long-time Slashdot reader shanen noticed a strange sound in one of their old machines, prompting them to ponder: what is the ultimate backup system? I've researched this topic a number of times in the past and never found a good answer...

I think the ultimate backup would be cloud-based, though I can imagine a local solution running on a smart storage device — not too expensive, and with my control over where the data is actually stored... Low overhead on the clients with the file systems that are being backed up. I'd prefer most of the work to be done on the server side, actually. That work would include identifying dupes while maintaining archival images of the original file systems, especially for my searches that might be based on the original folder hierarchies or on related files that I can recall being created around the same time or on the same machine...

How about a mail-in service to read old CDs and floppies and extract any recoverable data? I'm pretty sure I spotted an old box of floppies a few months ago. Not so much interested in the commercial stuff (though I do feel like I still own what I paid for) as I'm interested in old personal files — but that might call for access to the ancient programs that created those files.

Or maybe you want to share a bit about how you handle your backups? Or your version of the ultimate backup system...?
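
Of the wishes in that submission, duplicate identification is the easiest piece to prototype: group files by a content hash and report any hash that maps to more than one path, leaving the archived hierarchy untouched. A rough Python sketch, with an illustrative path and hash choice:

    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def file_digest(path, chunk_size=1 << 20):
        # Hash the file in chunks so large files never sit in memory at once.
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def find_duplicates(root):
        by_hash = defaultdict(list)
        for path in Path(root).rglob("*"):
            if path.is_file():
                by_hash[file_digest(path)].append(path)
        return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

    for digest, paths in find_duplicates("/path/to/archive").items():  # placeholder
        print(digest[:12], *paths)

A server-side dedup store keeps the same kind of hash index, which is how it can spot duplicates while leaving the client with little extra work.
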

Slashdot reader BAReFO0t recommends "three disks running ZFS mirroring with scrubbing and regular snapshots, and two other locations running the same setup, but with a completely independent implementation. Different system, different PSU, different CPU manufacturer, different disks, different OS, different file system, different backup software, different building construction style, different form of government, etc."

shanen then added "with minimal time and effort" to the original question — but leave your own thoughts and suggestions in the comments.

What's your ultimate backup solution?
Open Source

Slashdot Asks: How Do You Feel About Btrfs? (linuxjournal.com) 236

emil (Slashdot reader #695) shares an article from Linux Journal re-visiting the saga of the btrfs file system (initially designed at Oracle in 2007): The btrfs filesystem has taunted the Linux community for years, offering a stunning array of features and capability, but never earning universal acclaim. Btrfs is perhaps more deserving of patience, as its promised capabilities dwarf all peers, earning it vocal proponents with great influence. Still, [while] none can argue that btrfs is unfinished, many features are very new, and stability concerns remain for common functions.

Most of the intended goals of btrfs have been met. However, Red Hat famously cut continued btrfs support from their 7.4 release, and has allowed the code to stagnate in their backported kernel since that time. The Fedora project announced their intention to adopt btrfs as the default filesystem for variants of their distribution, in a seeming juxtaposition. SUSE has maintained btrfs support for their own distribution and the greater community for many years.

For users, the most desirable features of btrfs are transparent compression and snapshots; these features are stable, and relatively easy to add as a veneer to stock CentOS (and its peers). Administrators are further compelled by adjustable checksums, scrubs, and the ability to enlarge as well as (surprisingly) shrink filesystem images, while some advanced btrfs topics (i.e. deduplication, RAID, ext4 conversion) aren't really germane for minimal loopback usage. The systemd init package also has dependencies upon btrfs, among them machinectl and systemd-nspawn. Despite these features, there are many usage patterns that are not directly appropriate for use with btrfs. It is hostile to most databases and many other programs with incompatible I/O, and should be approached with some care.

The original submission drew reactions from three disgruntled btrfs users. But the article goes on to explore providers of CentOS-compatible btrfs-enabled kernels, ultimately opining that "There are many 'rough edges' that are uncovered above with btrfs capabilities and implementations, especially with the measures taken to enable it for CentOS. Still, this is far better than ext2/3/4 and XFS, discarding all the desirable btrfs features, in that errors can be known because all filesystem content is checksummed." It would be helpful if the developers of btrfs and ZFS could work together to create a single kernel module, with maximal sharing of "cleanroom" code, that implemented both filesystems... Oracle is itself unwilling to settle these questions with either a GPL or BSD license release of ZFS. Oracle also delivers a btrfs implementation that is lacking in features, with inapplicable documentation, and out-of-date support tools (for CentOS 8 conversion). Oracle is the impediment, and a community effort to purge ZFS source of Oracle's contributions and unify it with btrfs seems the most straightforward option... It would also be helpful if other parties refrained from new filesystem efforts that lack the extensive btrfs functionality and feature set (i.e. Microsoft ReFS).

Until such a day that an advanced filesystem becomes a ubiquitous commodity as Linux is as an OS, the user community will continue to be torn between questionable support, lack of features, and workarounds in a fragmented btrfs community. This is an uncomfortable place to be, and we would do well to remember the parties responsible for keeping us here.

So how do Slashdot's readers feel about btrfs?
Communications

Cold War Satellites Inadvertently Tracked Species Declines (sciencemag.org) 38

sciencehabit shares a report from Science Magazine: When the Soviet Union launched Sputnik into orbit in 1957, the United States responded with its own spy satellites. The espionage program, known as Corona, sought to locate Soviet missile sites, but its Google Earth-like photography captured something unintended: snapshots of animals and their habitats frozen in time. Now, by comparing these images with modern data, scientists have found a way to track the decline of biodiversity in regions that lack historic records.

The researchers tested the approach on bobak marmot (Marmota bobak) populations in the grassland region of northern Kazakhstan. There, Soviets converted millions of hectares of natural habitat into cropland in the 1960s. The scientists searched the satellites' black and white film images on a U.S. Geological Survey database for signs of the squirrel-like animal's burrows. They identified more than 5,000 historic marmot homes and compared them with contemporary digital images of the region, mapping more than 12,000 marmot burrows in all. About eight generations of marmots occupied the same burrows in the study area over more than 50 years, even when their habitats underwent major changes, the team reports in the Proceedings of the Royal Society B. Overall, the researchers estimate the number of marmot burrows dropped by 14% since the '60s. But the number of burrows in some of the oldest fields -- those persistently disturbed by humans plowing grassland to plant wheat -- plunged by much more -- about 60%.
