Google

Predatory Loan Apps Are Thriving in Google Play Store, Despite Ban (restofworld.org) 29

Tens of thousands of people have fallen victim to predatory loan apps, which extort users using sensitive information from their phones. Google has changed its policy to prevent the loan apps from being listed on the Play store, but enforcement is unreliable. Rest of World: According to Mexico City's Citizen Council for Safety and Justice, a consumer watchdog group, 135 reports to local authorities have been filed against JoyCredito for fraud and extortion. But despite the government attention, the app is still available to download from the Google Play store. For years, apps like JoyCredito have been exploiting borrowers from Mexico to India. They lend small amounts of money with few requirements and very high interest rates to financially vulnerable people -- and then extort them when the loan is due. After years of mounting pressure from watchdog groups, Google explicitly banned the apps from the Play store in October. But stories like those of Macias Gonzalez show how widespread the apps still are -- and how ineffective Google has been at enforcing its own policy.

Rest of World presented Google with 15 instances of exploitative loan apps based in Mexico that explicitly violate the terms of the Play store. All of them were still available in the store as of press time. Of the 15 apps, 12 explicitly asked for access to either the camera roll or contacts in the Google Play store's terms of service. Two others specified full access only in external documents. One other gave no data access information. Rest of World also found 10 apps in Peru that have been flagged as exploitative by SBS, a national body that oversees banking, insurance, and private pensions. All the apps are still available for download on the Google Play store.

IT

Most CEOs Won't Prioritize Return-to-Office Policies, Survey Finds (axios.com) 101

The pandemic may have proved to employers that remote and flexible-work arrangements were viable — and changed the way we work forever. Axios writes: Just 6 out of 158 U.S. CEOs said they'll prioritize bringing workers back to the office full-time in 2024, according to a new survey released by the Conference Board. Executives are increasingly resigned to a world where employees don't come in every day, as hybrid work arrangements — mixing work from home and in-office — become the norm for knowledge workers. "Maintain hybrid work," was cited as a priority by 27% of the U.S. CEOs who responded to the survey, conducted in October and November. A separate survey of chief financial officers by Deloitte, conducted in November, found that 65% of CFOs expect their company to offer a hybrid arrangement this year.

"Remote work appears likely to be the most persistent economic legacy of the pandemic," write Goldman Sachs economists in a recent note. About 20%-25% of workers in the U.S. work from home at least part of the week, according to data Goldman cites. That's below a peak of 47% during the pandemic but well above its prior average of around 3%.

"The battle is over," said Diana Scott, human capital center leader at The Conference Board. "There are so many other issues CEOs are facing." Headlines about CEOs determined to get butts in seats get attention, but they are the exception, says Brian Elliott, the cofounder of Future Forum, a future of work think tank. "There are a lot more CEOs that are actually quietly becoming more flexible...." Though the labor market has softened, employers still do care about keeping employees satisfied — and they don't want to fight with them. "It's not worth the fight," says Elliott.

Power

What's the Solution to Gridlocked EV Chargers? (sacbee.com) 426

"Some of the most convenient fast-charging stations — mostly those located off major highways — have become gridlocked, especially on busy weekends," complains the opinion editor of The Tribune, a newspaper in San Luis Obispo, California. Drivers are reporting waits of half an hour or more — sometimes much more. One driver who posted on Reddit waited three hours to charge in Kettleman City on Thanksgiving weekend, turning a five-and-a-half-hour trip into a 10-and-a-half-hour ordeal... Look, it's one thing to spend 30 or 40 minutes charging a battery, which is a given when you take an EV on a road trip. But having to wait in a long line just to get to an open charging bay? What's happening now is "potentially a nightmare for drivers as more EVs hit the road," wrote GreenBiz transportation writer Vartan Badalian [after a March visit to New York State]...

Badalian, the transportation writer, has an idea on how to deal with gridlock. "As you approach a full charging location, your EV (of any make) connects to the charging location and enters itself into a virtual queue, with entry to the queue dependent upon close geographical proximity. Drivers then park in an available normal parking spot, and only when prompted, proceed to plug in and charge. If a driver attempted to charge before their turn, the chargers would simply not communicate with the vehicle..."
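Badalian's proposal amounts to a geofenced, first-come-first-served reservation system. As a rough illustration (this is not any real charging network's API; the class, the proximity threshold, and the vehicle IDs below are invented for the sketch), the core logic fits in a few lines of Python:

```python
import math
from collections import deque

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class ChargingQueue:
    """First-come, first-served queue gated by geographic proximity."""

    def __init__(self, lat, lon, radius_km=0.5):
        self.lat, self.lon, self.radius_km = lat, lon, radius_km
        self.queue = deque()

    def request_entry(self, vehicle_id, lat, lon):
        """Enroll a vehicle only if it is near the station and not already queued."""
        if haversine_km(self.lat, self.lon, lat, lon) > self.radius_km:
            return False
        if vehicle_id not in self.queue:
            self.queue.append(vehicle_id)
        return True

    def may_charge(self, vehicle_id):
        """A charger only 'communicates' with the vehicle at the head of the queue."""
        return bool(self.queue) and self.queue[0] == vehicle_id

    def done(self, vehicle_id):
        """Release the head of the queue when its charging session ends."""
        if self.queue and self.queue[0] == vehicle_id:
            self.queue.popleft()
```

A real system would also need timeouts for no-shows and handling for vehicles that leave the geofence, but the gating idea is just this: chargers refuse to talk to any vehicle that is not at the head of the queue.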

If only that would work. Unfortunately, plug-in chargers have a tough enough time fulfilling their basic task of delivering electricity. Here's how bad it is: A survey of non-Tesla chargers conducted in the Bay Area in 2022 found that 27% of chargers were not working. This would be a good time to point out that Tesla superchargers have a much better performance record than other types of chargers, and that Tesla is opening "select" supercharger stations to other types of vehicles. Also, efforts are being made to increase the reliability of public chargers; the U.S. Department of Transportation just awarded $149 million in grants for the repair and replacement of broken chargers. The biggest share, $64 million, is going to California. In other words, hope is on the horizon. For now, though, we seem to be relying on a haphazard honor system.

How hard would it be to use some orange cones to designate a "waiting lane"? That way drivers pulling in could get an immediate read on how long they might have to wait... Also, limit drivers to an 80% charge, and require them to drive away within, say, five minutes after the charger has stopped. That might be hard to enforce, but peer pressure can be a powerful incentive. The point is, somebody has to step up and make charging stations more driver-friendly, and the obvious choice is whoever is in charge of the chargers.

Open Source

Hans Reiser Sends a Letter From Prison (arstechnica.com) 181

In 2003, Hans Reiser answered questions from Slashdot's readers...

Today Wikipedia describes Hans Reiser as "a computer programmer, entrepreneur, and convicted murderer... Prior to his incarceration, Reiser created the ReiserFS computer file system, which may be used by the Linux kernel but which is now scheduled for removal in 2025, as well as its attempted successor, Reiser4."

This week alanw (Slashdot reader #1,822), spotted a development on the Linux kernel mailing list. "Hans Reiser (imprisoned for the murder of his wife) has written a letter, asking it to be published to Slashdot." Reiser writes: I was asked by a kind Fredrick Brennan for my comments that I might offer on the discussion of removing ReiserFS V3 from the kernel. I don't post directly because I am in prison for killing my wife Nina in 2006.

I am very sorry for my crime — a proper apology would be off topic for this forum, but available to any who ask.

A detailed apology for how I interacted with the Linux kernel community, and some history of V3 and V4, are included, along with descriptions of what the technical issues were. I have been attending prison workshops, and working hard on improving my social skills to aid my becoming less of a danger to society. The man I am now would do things very differently from how I did things then.

Click here for the rest of Reiser's introduction, along with a link to the full text of the letter...

The letter is dated November 26, 2023, and ends with an address where Reiser can be mailed. Ars Technica has a good summary of Reiser's lengthy letter from prison — along with an explanation for how it came to be. With the ReiserFS recently considered obsolete and slated for removal from the Linux kernel entirely, Fredrick R. Brennan, font designer and (now regretful) founder of 8chan, wrote to the filesystem's creator, Hans Reiser, asking if he wanted to reply to the discussion on the Linux Kernel Mailing List (LKML). Reiser, 59, serving a potential life sentence in a California prison for the 2006 murder of his estranged wife, Nina Reiser, wrote back with more than 6,500 words, which Brennan then forwarded to the LKML. It's not often you see somebody apologize for killing their wife, explain their coding decisions around balanced trees versus extensible hashing, and suggest that elementary schools offer the same kinds of emotional intelligence curriculum that they've worked through in prison, in a software mailing list. It's quite a document...

It covers, broadly, why Reiser believes his system failed to gain mindshare among Linux users, beyond the most obvious reason. This leads Reiser to detail the technical possibilities, his interpersonal and leadership failings and development, some lingering regrets about dealings with SUSE and Oracle and the Linux community at large, and other topics, including modern Russian geopolitics... Reiser asks that a number of people who worked on ReiserFS be included in "one last release" of the README, and to "delete anything in there I might have said about why they were not credited." He says prison has changed how he approaches conflict resolution and his "tendency to see people in extremes...."

Reiser writes that he understood the difficulty ahead in getting the Linux world to "shift paradigms" but lacked the understanding of how to "make friends and allies of people" who might initially have felt excluded. This is followed by a heady discussion of "balanced trees instead of extensible hashing," Oracle's history with implementing balanced trees, getting synchronicity just right, I/O schedulers, block size, seeks and rotational delays on magnetic hard drives, and tails. It leads up to a crucial decision in ReiserFS' development, the hard non-compatible shift from V3 to Reiser 4. Format changes, Reiser writes, are "unwanted by many for good reasons." But "I just had to fix all these flaws, fix them and make a filesystem that was done right. It's hard to explain why I had to do it, but I just couldn't rest as long as the design was wrong and I knew it was wrong," he writes. SUSE didn't want a format change, but Reiser, with hindsight, sees his pushback as "utterly inarticulate and unsociable." The push for Reiser 4 in the Linux kernel was similar, "only worse...."

He encourages people to "allow those who worked so hard to build a beautiful filesystem for the users to escape the effects of my reputation." Under a "Conclusion" sub-heading, Reiser is fairly succinct in summarizing a rather wide-ranging letter, minus the minutiae about filesystem architecture.

I wish I had learned the things I have been learning in prison about talking through problems, and believing I can talk through problems and doing it, before I had married or joined the LKML. I hope that day when they teach these things in Elementary School comes.

I thank Richard Stallman for his inspiration, software, and great sacrifices,

It has been an honor to be of even passing value to the users of Linux. I wish all of you well.



It both is and is not a response to Brennan's initial prompt, asking how he felt about ReiserFS being slated for exclusion from the Linux kernel. There is, at the moment, no reply to the thread started by Brennan.

Space

Ultra-Large Structure Discovered In Distant Space Challenges Cosmological Principle (scitechdaily.com) 60

"The discovery of a second ultra-large structure in the remote universe has further challenged some of the basic assumptions about cosmology," writes SciTechDaily: The Big Ring on the Sky is 9.2 billion light-years from Earth. It has a diameter of about 1.3 billion light-years, and a circumference of about four billion light-years. If we could step outside and see it directly, the diameter of the Big Ring would need about 15 full Moons to cover it.

It is the second ultra-large structure discovered by University of Central Lancashire (UCLan) PhD student Alexia Lopez who, two years ago, also discovered the Giant Arc on the Sky. Remarkably, the Big Ring and the Giant Arc, which is 3.3 billion light-years across, are in the same cosmological neighborhood — they are seen at the same distance, at the same cosmic time, and are only 12 degrees apart on the sky. Alexia said: "Neither of these two ultra-large structures is easy to explain in our current understanding of the universe. And their ultra-large sizes, distinctive shapes, and cosmological proximity must surely be telling us something important — but what exactly?

"One possibility is that the Big Ring could be related to Baryonic Acoustic Oscillations (BAOs). BAOs arise from oscillations in the early universe and today should appear, statistically at least, as spherical shells in the arrangement of galaxies. However, detailed analysis of the Big Ring revealed it is not really compatible with the BAO explanation: the Big Ring is too large and is not spherical." Other explanations might be needed, explanations that depart from what is generally considered to be the standard understanding in cosmology...

And if the Big Ring and the Giant Arc together form a still larger structure, then the challenge to the Cosmological Principle becomes even more compelling... Alexia said, "From current cosmological theories we didn't think structures on this scale were possible."

Possible explanations include a Conformal Cyclic Cosmology, or the effect of cosmic strings passing through...

Thanks to long-time Slashdot reader schwit1 for sharing the article.

Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale-from-Production starts while assisting Clyso with "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]... I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which quoted at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still comfortably 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...

For over a week, we looked at everything from BIOS settings and NVMe multipath to low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."
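For readers curious what Fix One and Fix Two look like in practice: BIOS menus differ by vendor, but on Linux the same behavior can be inspected and pinned with kernel boot parameters. The fragment below is illustrative rather than taken from the blog post; the parameter names are standard kernel options, but check your distribution's documentation, and note that disabling the IOMMU has security implications:

```shell
# Fix One: check which idle (c-)states the CPUs may enter; deep states
# add wakeup latency that Ceph OSDs are sensitive to.
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name 2>/dev/null

# Pin the CPUs to shallow idle states at boot (in /etc/default/grub),
# roughly what "maximum performance" BIOS modes do:
#   GRUB_CMDLINE_LINUX="intel_idle.max_cstate=0 processor.max_cstate=1"

# Fix Two: disable the IOMMU to avoid contending on the mapping-update
# spin lock (use intel_iommu=off on Intel platforms):
#   GRUB_CMDLINE_LINUX="amd_iommu=off"
```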

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950 GiB/s — then tried some more performance optimizations...


Education

'A Groundbreaking Study Shows Kids Learn Better On Paper, Not Screens. Now What?' (theguardian.com) 130

In an opinion piece for the Guardian, American journalist and author John R. MacArthur discusses the alarming decline in reading skills among American youth, highlighted by a Department of Education survey showing significant drops in text comprehension since 2019-2020, a slide that began as early as 2012. Remote learning during the pandemic and factors like screen-based reading have both been blamed, but a new study by Columbia University suggests that reading on paper is more effective for comprehension than reading on screens, a finding that digitally focused schools have yet to act on. From the report: What if the principal culprit behind the fall of middle-school literacy is neither a virus, nor a union leader, nor "remote learning"? Until recently there has been no scientific answer to this urgent question, but a soon-to-be published, groundbreaking study from neuroscientists at Columbia University's Teachers College has come down decisively on the matter: for "deeper reading" there is a clear advantage to reading a text on paper, rather than on a screen, where "shallow reading was observed." [...] [Dr Karen Froud] and her team are cautious in their conclusions and reluctant to make hard recommendations for classroom protocol and curriculum. Nevertheless, the researchers state: "We do think that these study outcomes warrant adding our voices ... in suggesting that we should not yet throw away printed books, since we were able to observe in our participant sample an advantage for depth of processing when reading from print."

I would go even further than Froud in delineating what's at stake. For more than a decade, social scientists, including the Norwegian scholar Anne Mangen, have been reporting on the superiority of reading comprehension and retention on paper. As Froud's team says in its article: "Reading both expository and complex texts from paper seems to be consistently associated with deeper comprehension and learning" across the full range of social scientific literature. But the work of Mangen and others hasn't influenced local school boards, such as Houston's, which keep throwing out printed books and closing libraries in favor of digital teaching programs and Google Chromebooks. Drunk on the magical realism and exaggerated promises of the "digital revolution," school districts around the country are eagerly converting to computerized test-taking and screen-reading programs at the precise moment when rigorous scientific research is showing that the old-fashioned paper method is better for teaching children how to read.

Indeed, for the tech boosters, Covid really wasn't all bad for public-school education: "As much as the pandemic was an awful time period," says Todd Winch, the Levittown, Long Island, school superintendent, "one silver lining was it pushed us forward to quickly add tech supports." Newsday enthusiastically reports: "Island schools are going all-in on high tech, with teachers saying they are using computer programs such as Google Classroom, I-Ready, and Canvas to deliver tests and assignments and to grade papers." Terrific, especially for Google, which was slated to sell 600 Chromebooks to the Jericho school district, and which since 2020 has sold nearly $14bn worth of the cheap laptops to K-12 schools and universities.

If only Winch and his colleagues had attended the Teachers College symposium that presented the Froud study last September. The star panelist was the nation's leading expert on reading and the brain, John Gabrieli, an MIT neuroscientist who is skeptical about the promises of big tech and its salesmen: "I am impressed how educational technology has had no effect on scale, on reading outcomes, on reading difficulties, on equity issues," he told the New York audience. "How is it that none of it has lifted, on any scale, reading? ... It's like people just say, 'Here is a product. If you can get it into a thousand classrooms, we'll make a bunch of money.' And that's OK; that's our system. We just have to evaluate which technology is helping people, and then promote that technology over the marketing of technology that has made no difference on behalf of students ... It's all been product and not purpose." I'll only take issue with the notion that it's "OK" to rob kids of their full intellectual potential in the service of sales -- before they even get started understanding what it means to think, let alone read.

Medicine

Cancer Deaths Are Falling, but There May Be an Asterisk (nytimes.com) 29

Cancer deaths in the United States are falling, with four million deaths prevented since 1991, according to the American Cancer Society's annual report. At the same time, the society reported that the number of new cancer cases had ticked up to more than two million in 2023, from 1.9 million in 2022. The New York Times: Cancer remains the second leading cause of death in the United States, after heart disease. Doctors believe that it is urgent to understand changes in the death rate, as well as changes in cancer diagnoses. The cancer society highlighted three chief factors in reduced cancer deaths: declines in smoking, early detection and greatly improved treatments. Breast cancer mortality is one area where treatment had a significant impact. In the 1980s and 1990s, metastatic breast cancer "was regarded as a death sentence," said Donald Berry, a statistician at the University of Texas MD Anderson Cancer Center and an author of a new paper on breast cancer with Sylvia K. Plevritis of Stanford University and other researchers (several authors of the paper reported receiving payments from companies involved in cancer therapies).

The paper, published Tuesday in JAMA, found that the death rate from breast cancer had fallen to 27 per 100,000 women in 2019 from 48 per 100,000 in 1975. That includes metastatic cancer, which accounted for nearly 30 percent of the reduction in the breast cancer death rate. Breast cancer treatment has improved so much that it has become a bigger factor than screening in saving lives, said Ruth Etzioni, a biostatistician at the Fred Hutchinson Cancer Center. Death rates have even declined among women in their 40s, who generally did not have regular mammograms, "indicating a substantial effect of treatment," said Dr. Mette Kalager, a professor of medicine at the University of Oslo and Oslo University Hospital.

AI

Famous XKCD Comic Comes Full Circle With AI Bird-Identifying Binoculars (arstechnica.com) 70

An anonymous reader quotes a report from Ars Technica: Last week, Austria-based Swarovski Optik introduced the AX Visio 10x32 binoculars, which the company says can identify over 9,000 species of birds and mammals using image recognition technology. The company is calling the product the world's first "smart binoculars," and they come with a hefty price tag -- $4,799. "The AX Visio are the world's first AI-supported binoculars," the company says in the product's press release. "At the touch of a button, they assist with the identification of birds and other creatures, allow discoveries to be shared, and offer a wide range of practical extra functions."

The binoculars, aimed mostly at bird watchers, gain their ability to identify birds from the Merlin Bird ID project, created by Cornell Lab of Ornithology. As confirmed by a hands-on demo conducted by The Verge, the user looks at an animal through the binoculars and presses a button. A red progress circle fills in while the binoculars process the image, then the identified animal name pops up on the built-in binocular HUD screen within about five seconds. In 2014, a famous xkcd comic strip titled Tasks depicted someone asking a developer to create an app that, when a user takes a photo, will check whether the user is in a national park (deemed easy due to GPS) and check whether the photo is of a bird (to which the developer says, "I'll need a research team and five years"). The caption below reads, "In CS, it can be hard to explain the difference between the easy and the virtually impossible."

It's been just over nine years since the comic was published, and while identifying the presence of a bird in a photo was solved some time ago, these binoculars arguably go further by identifying the species of the bird in the photo (it also keeps track of location due to GPS). While apps to identify bird species already exist, this feature is now packed into a handheld pair of binoculars.

Chrome

Google Is No Longer Bringing the Full Chrome Browser To Fuchsia (9to5google.com) 24

Google has formally discontinued its efforts to bring the full Chrome browser experience to its Fuchsia operating system. 9to5Google reports: In 2021, we reported that the Chromium team had begun an effort to get the full Chrome/Chromium browser running on Google's in-house Fuchsia operating system. Months later, in early 2022, we were even able to record a video of the progress, demonstrating that Chromium (the open-source-only variant of Chrome) could work relatively well on a Fuchsia-powered device. This was far from the first time that the Chromium project had been involved with Fuchsia. Google's full lineup of Nest Hub smart displays is currently powered by Fuchsia under the hood, and those displays have limited web browsing capabilities through an embedded version of the browser.

In contrast to that minimal experience, Google was seemingly working to bring the full might of Chrome to Fuchsia. To observers, this was yet another signal that Google intended for Fuchsia to grow beyond the smart home and serve as a full desktop operating system. After all, what good is a laptop or desktop without a web browser? Fans of the Fuchsia project have anticipated its eventual expansion to desktop since Fuchsia was first shown to run on Google's Pixelbook hardware. However, in the intervening time -- a period that also saw significant layoffs in the Fuchsia division -- it seems that Google has since shifted Fuchsia in a different direction. The clearest evidence of that move comes from a Chromium code change (and related bug tracker post) published last month declaring that the "Chrome browser on fuchsia won't be maintained."

Robotics

The Global Project To Make a General Robotic Brain (ieee.org) 23

Generative AI "doesn't easily carry over into robotics," write two researchers in IEEE Spectrum, "because the Internet is not full of robotic-interaction data in the same way that it's full of text and images."

That's why they're working on a single deep neural network capable of piloting many different types of robots... Robots need robot data to learn from, and this data is typically created slowly and tediously by researchers in laboratory environments for very specific tasks... The most impressive results typically only work in a single laboratory, on a single robot, and often involve only a handful of behaviors... [W]hat if we were to pool together the experiences of many robots, so a new robot could learn from all of them at once? We decided to give it a try. In 2023, our labs at Google and the University of California, Berkeley came together with 32 other robotics laboratories in North America, Europe, and Asia to undertake the RT-X project, with the goal of assembling data, resources, and code to make general-purpose robots a reality...

The question is whether a deep neural network trained on data from a sufficiently large number of different robots can learn to "drive" all of them — even robots with very different appearances, physical properties, and capabilities. If so, this approach could potentially unlock the power of large datasets for robotic learning. The scale of this project is very large because it has to be. The RT-X dataset currently contains nearly a million robotic trials for 22 types of robots, including many of the most commonly used robotic arms on the market...

Surprisingly, we found that our multirobot data could be used with relatively simple machine-learning methods, provided that we follow the recipe of using large neural-network models with large datasets. Leveraging the same kinds of models used in current LLMs like ChatGPT, we were able to train robot-control algorithms that do not require any special features for cross-embodiment. Much like a person can drive a car or ride a bicycle using the same brain, a model trained on the RT-X dataset can simply recognize what kind of robot it's controlling from what it sees in the robot's own camera observations. If the robot's camera sees a UR10 industrial arm, the model sends commands appropriate to a UR10. If the model instead sees a low-cost WidowX hobbyist arm, the model moves it accordingly.
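The cross-embodiment idea can be caricatured in a few lines: one policy function serves every robot, and the embodiment is inferred from the observation rather than passed in as a parameter. The sketch below is a toy; the ACTION_DIMS table, the camera_tag lookup, and the trivial proportional controller merely stand in for what RT-X learns with a large neural network over real camera images:

```python
# Toy illustration of cross-embodiment control: one policy handles every
# robot, inferring the embodiment from the observation alone.

ACTION_DIMS = {"ur10": 6, "widowx": 5}  # joints per arm (illustrative numbers)

def identify_embodiment(observation):
    """Stand-in for the vision model: in RT-X this is learned, not looked up."""
    return observation["camera_tag"]  # e.g. "ur10" or "widowx"

def policy(observation, target):
    """One policy for all robots: size the action for whatever arm it sees."""
    robot = identify_embodiment(observation)
    dim = ACTION_DIMS[robot]
    # Trivial per-joint proportional "controller" toward the target pose.
    return [0.1 * (target[i] - observation["joints"][i]) for i in range(dim)]
```

The point of the sketch is only the shape of the interface: nothing outside the observation tells the policy which robot it is driving.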

"To test the capabilities of our model, five of the laboratories involved in the RT-X collaboration each tested it in a head-to-head comparison against the best control system they had developed independently for their own robot... Remarkably, the single unified model provided improved performance over each laboratory's own best method, succeeding at the tasks about 50 percent more often on average." And they then used a pre-existing vision-language model to successfully add the ability to output robot actions in response to image-based prompts.

"The RT-X project shows what is possible when the robot-learning community acts together... and we hope that RT-X will grow into a collaborative effort to develop data standards, reusable models, and new techniques and algorithms."

Thanks to long-time Slashdot reader Futurepower(R) for sharing the article.
Security

Water Pump Used To Get $1 Billion Stuxnet Malware Into Iranian Nuclear Facility (securityweek.com) 36

An anonymous reader quotes a report from SecurityWeek.com: A Dutch engineer recruited by the country's intelligence services used a water pump to deploy the now-infamous Stuxnet malware in an Iranian nuclear facility, according to a two-year investigation conducted by Dutch newspaper De Volkskrant. Stuxnet, whose existence came to light in 2010, is widely believed to be the work of the United States and Israel, its goal being to sabotage Iran's nuclear program by compromising industrial control systems (ICS) associated with nuclear centrifuges. The malware, which had worm capabilities, is said to have infected hundreds of thousands of devices and caused physical damage to hundreds of machines.

De Volkskrant's investigation, which involved interviews with dozens of people, found that the AIVD, the general intelligence and security service of the Netherlands, the Dutch equivalent of the CIA, recruited Erik van Sabben, a then 36-year-old Dutch national working at a heavy transport company in Dubai. Van Sabben was allegedly recruited in 2005 -- a couple of years before the Stuxnet malware was triggered -- after American and Israeli intelligence agencies asked their Dutch counterpart for help. However, the Dutch agency reportedly did not inform its country's government and it was not aware of the full extent of the operation. Van Sabben was described as perfect for the job as he had a technical background, he was doing business in Iran and was married to an Iranian woman.

It's believed that the Stuxnet malware was planted on a water pump that the Dutch national installed in the nuclear complex in Natanz, which he had infiltrated. It's unclear if Van Sabben knew exactly what he was doing, but his family said he appeared to have panicked at around the time of the Stuxnet attack. [...] Michael Hayden, who at the time was the chief of the CIA, did agree to talk to De Volkskrant, but could not confirm whether Stuxnet was indeed delivered via water pumps due to it still being classified information. One interesting piece of information that has come to light in De Volkskrant's investigation is that Hayden reportedly told one of the newspaper's sources that it cost between $1 and $2 billion to develop Stuxnet.

Software

Thousands of Software Engineers Say the Job Market Is Getting Much Worse (vice.com) 135

An anonymous reader quotes a report from Motherboard: For much of the 21st century, software engineering has been seen as one of the safest havens in the tenuous and ever-changing American job market. But there are a growing number of signs that the field is starting to become a little less secure and comfortable, due to an industry-wide downturn and the looming threat of artificial intelligence that is spurring growing competition for software jobs. "The amount of competition is insane," said Joe Forzano, an unemployed software engineer who has worked at the mental health startup Alma and private equity giant Blackstone. Since he lost his job in March, Forzano has applied to over 250 jobs. In six cases, he went through the "full interview gauntlet," which included between six and eight interviews each, before learning he had been passed over. "It has been very, very rough," he told Motherboard.

Forzano is not alone in his pessimism, according to a December survey of 9,338 software engineers performed on behalf of Motherboard by Blind, an online anonymous platform for verified employees. In the poll, nearly nine in 10 surveyed software engineers said it is more difficult to get a job now than it was before the pandemic, with 66 percent saying it was "much harder." Nearly 80 percent of respondents said the job market has become even more competitive over the last year. Only 6 percent of the software engineers were "extremely confident" they could find another job with the same total compensation if they lost their job today, while 32 percent said they were "not at all confident."

Over 2022 and 2023, the tech sector incurred more than 400,000 layoffs, according to the tracking site Layoffs.fyi. But up until recently, it seemed software engineers were more often spared compared to their co-workers in non-technical fields. One analysis found tech companies cut their recruiting teams by 50 percent, compared to only 10 percent of their engineering departments. At Salesforce, engineers were four times less likely to lose their jobs than those in marketing and sales, which Bloomberg has said is a trend replicated at other tech companies such as Dell and Zoom. But signs of dread among software engineers have started to become more common online. In December, one Amazon employee wrote a long post on the anonymous employee platform Blind saying that the "job market is terrible" and that he was struggling to get interviews of any sort.
"In the age of AI, computer science is no longer the safe major," Kelli Maria Korducki wrote in The Atlantic in September. AI programs like ChatGPT and Google Bard allow users to write code using natural language, greatly reducing the time it takes workers to complete coding tasks. It could lead to less job security and lower compensation for all but the very best in the software trade, warns Matt Welsh, a former computer science professor at Harvard.

"More than 60 percent of those surveyed said they believed their company would hire fewer people because of AI moving forward," reports Motherboard.
The Courts

Judges in England and Wales Given Cautious Approval To Use AI in Writing Legal Opinions (apnews.com) 23

Press2ToContinue writes: England's 1,000-year-old legal system -- still steeped in traditions that include wearing wigs and robes -- has taken a cautious step into the future by giving judges permission to use artificial intelligence to help produce rulings. The Courts and Tribunals Judiciary last month said AI could help write opinions but stressed it shouldn't be used for research or legal analyses because the technology can fabricate information and provide misleading, inaccurate and biased information.

"Judges do not need to shun the careful use of AI," said Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales. "But they must ensure that they protect confidence and take full personal responsibility for everything they produce." At a time when scholars and legal experts are pondering a future when AI could replace lawyers, help select jurors or even decide cases, the approach spelled out Dec. 11 by the judiciary is restrained. But for a profession slow to embrace technological change, it's a proactive step as government and industry -- and society in general -- react to a rapidly advancing technology alternately portrayed as a panacea and a menace.

Microsoft

Discontinued and Unreleased Microsoft Peripherals Revived By Licensing Deal (arstechnica.com) 46

An anonymous reader quotes a report from Ars Technica: In April, Microsoft announced that it would stop selling Microsoft-branded computer peripherals. Today, Onward Brands announced that it's giving those discarded Microsoft-stamped gadgets a second life under new branding. Products like the Microsoft Ergonomic Keyboard will become Incase products with "Designed by Microsoft" branding. Beyond the computer accessories saying "Designed by Microsoft," they should be the same keyboards, mice, webcams, headsets, and speakers, Onward, Incase's parent company, said, per The Verge. Onward said its Incase brand will bring back 23 Microsoft-designed products in 2024 and hopes for availability to start in Q2. Incase also plans to launch an ergonomic keyboard that Microsoft designed but never released. Onward CEO Charlie Tebele told The Verge that there's "potential" for Incase to release even more designs Microsoft never let us see.

The return of Microsoft peripheral designs resurrects (albeit in a new form) a line of computer gear started in 1983 when Microsoft released its first mouse, the Microsoft Mouse. Neither Onward nor Microsoft shared the full terms of their licensing agreement, but Onward claims that Incase will leverage the same supply chain and manufacturing components that Microsoft did, The Verge noted. "Microsoft will still retain ownership of its designs, so it could potentially bring back classic mice or keyboards itself in the future or continue to renew its license to Incase," The Verge reported, pointing out that Onward isn't licensing every single one of Microsoft's computer peripherals. Some classics, like the Intellimouse or its modern iterations, for example, don't make the Incase reboot list. For its part, Microsoft is still "convicted on going under one single" Surface brand, Nancie Gaskill, general manager of Surface, told The Verge.
Further reading: Microsoft Adding New Key To PC Keyboards For First Time Since 1994
Ubuntu

ZDNet Calls Rhino Linux 'New Coolest Linux Distro' (zdnet.com) 52

If you're starting the new year with a new Linux distro, ZDNet just ran an enthusiastic profile of Rhino Linux, calling it "beautiful" with "one of the more useful command-line package managers on the market." Rhino uses a modern take on the highly efficient and customizable Xfce desktop (dubbed "Unicorn") to help make the interface immediately familiar to anyone who logs in. You'll find a dock on the left edge of the screen that contains launchers for common applications, access to the Application Grid (where you can find all of your installed software), and a handy Search Bar (Ulauncher) that allows you to quickly search for and launch any installed app (or even the app settings) you need...

Thanks to myriad configuration options, Xfce can be a bit daunting. At the same time, the array of settings makes Xfce highly customizable, which is exactly what the Rhino developers did when they designed this desktop. For those who want a desktop that makes short work of accessing files, the Rhino developers have added a really nifty tool to the top bar. You'll find a listing of some folders you have in your Home directory (Files, Documents, Music, Pictures, Video). If you click on one of those entries, you'll see a list of the most recently accessed files within the directory. Click on the file you want to open with the default, associated application...

Rhino opts for the Pacstall package manager over the traditional apt-get. That's not to say apt-get isn't on the system — it is. But with Rhino Linux, there's a much easier path to getting the software you want installed... [W]hen you first run the installed OS, you are greeted with a window that allows you to select what package managers you want to use. You can select from Snap, Flatpak, and AppImages (or all three). Next, the developers added a handy tool (rhino-pkg) that makes installing from the command line very simple.

When the distro launched in August, 9to5Linux described it as "a unique distribution for Ubuntu fans who wanted a rolling-release system where they install once and receive updates forever." The theming looks gorgeous and it's provided by the Elementary Xfce Darker icon theme, Xubuntu's Greybird GTK theme, and Ubuntu's Yaru Dark WM theme. It also comes with some cool features, such as a dedicated and full-screen desktop switcher provided by Xfdashboard...
Space

Neptune Is Much Less Blue Than Depictions (seattletimes.com) 38

Long-time Slashdot reader necro81 writes: The popular vision of Neptune is azure blue. This comes mostly from the publicly released images from Voyager 2's flyby in 1989 — humanity's only visit to this icy giant at the edge of the solar system. But it turns out that view is a bit distorted — the result of color-enhancing choices made by NASA at the time. A new report from Oxford depicts Neptune's blue color as more muted, with a touch of green, not much different than Uranus. The truer-to-life view comes from re-analyzing the Voyager data, combined with ground-based observations going back decades. (Add'l links here, here, and here.)

This is nothing new: most publicity images released by space agencies — of planets, nebulae, or the surface of Mars — have undergone some color-enhancement for visual effect. (They'll also release "true-color" images, which try to best mimic what the human eye would see.) Many images — such as those from the infrared-seeing JWST — need wholesale coloration of their otherwise invisible wavelengths. The new report is a good reminder, though, to remember that scientific cameras are pretty much always black and white; color images come from combining filters in various ways.
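That last point — color images assembled from separate filtered exposures — can be sketched in a few lines. This is an illustration with synthetic values, not the agencies' actual processing pipelines: each filter yields one grayscale frame, and the frames are stacked into the display channels of an RGB composite afterward.

```python
import numpy as np

# Scientific cameras record one grayscale frame per filter. A color image
# is assembled later by assigning each filter's frame to a display channel.
# These frames are synthetic stand-ins for red/green/blue filter exposures.
h, w = 4, 4
red_filter = np.full((h, w), 0.9)
green_filter = np.full((h, w), 0.5)
blue_filter = np.full((h, w), 0.2)

# Stack the three monochrome exposures into an RGB composite.
rgb = np.stack([red_filter, green_filter, blue_filter], axis=-1)
print(rgb.shape)   # (4, 4, 3)
print(rgb[0, 0])   # [0.9 0.5 0.2] -- one "color" pixel from three filters
```

The color-enhancement choices the article describes amount to how each filter is mapped, weighted, and stretched before this stacking step — which is also how invisible wavelengths (as with JWST's infrared) get assigned visible colors at all.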

Also thanks to long-time Slashdot reader Geoffrey.landis for sharing the story.
Security

Russian Hackers Were Inside Ukraine Telecoms Giant For Months (reuters.com) 26

An anonymous reader quotes a report from Reuters: Russian hackers were inside Ukrainian telecoms giant Kyivstar's system from at least May last year in a cyberattack that should serve as a "big warning" to the West, Ukraine's cyber spy chief told Reuters. The hack, one of the most dramatic since Russia's full-scale invasion nearly two years ago, knocked out services provided by Ukraine's biggest telecoms operator for some 24 million users for days from Dec. 12. In an interview, Illia Vitiuk, head of the Security Service of Ukraine's (SBU) cybersecurity department, disclosed exclusive details about the hack, which he said caused "disastrous" destruction and aimed to land a psychological blow and gather intelligence. "This attack is a big message, a big warning, not only to Ukraine, but for the whole Western world to understand that no one is actually untouchable," he said. He noted Kyivstar was a wealthy, private company that invested a lot in cybersecurity.

The attack wiped "almost everything", including thousands of virtual servers and PCs, he said, describing it as probably the first example of a destructive cyberattack that "completely destroyed the core of a telecoms operator." During its investigation, the SBU found the hackers probably attempted to penetrate Kyivstar in March or earlier, he said in a Zoom interview on Dec. 27. "For now, we can say securely, that they were in the system at least since May 2023," he said. "I cannot say right now, since what time they had ... full access: probably at least since November." The SBU assessed the hackers would have been able to steal personal information, understand the locations of phones, intercept SMS-messages and perhaps steal Telegram accounts with the level of access they gained, he said. A Kyivstar spokesperson said the company was working closely with the SBU to investigate the attack and would take all necessary steps to eliminate future risks, adding: "No facts of leakage of personal and subscriber data have been revealed."

Investigating the attack is harder because of the wiping of Kyivstar's infrastructure. Vitiuk said he was "pretty sure" it was carried out by Sandworm, a Russian military intelligence cyberwarfare unit that has been linked to cyberattacks in Ukraine and elsewhere. A year ago, Sandworm penetrated a Ukrainian telecoms operator, but was detected by Kyiv because the SBU had itself been inside Russian systems, Vitiuk said, declining to identify the company. The earlier hack has not been previously reported. Vitiuk said SBU investigators were still working to establish how Kyivstar was penetrated or what type of trojan horse malware could have been used to break in, adding that it could have been phishing, someone helping on the inside or something else. If it was an inside job, the insider who helped the hackers did not have a high level of clearance in the company, as the hackers made use of malware used to steal hashes of passwords, he said. Samples of that malware have been recovered and are being analysed, he added.

IT

Is 'Work From Home' Here to Stay After 2023? (usatoday.com) 163

"Remote-work numbers have dwindled over the past few years as employers issue return-to-office mandates," reports USA Today. "But will that continue in 2024?" The numbers started to slide after spring 2020, when more than 60% of days were worked from home, according to data from WFH Research, a scholarly data collection project. By 2023, that number had dropped to about 25% -- much lower than its peak but still a fivefold increase from 5% in 2019. But work-from-home numbers have held steady throughout most of 2023. And according to remote-work experts, they're expected to rebound in the years to come as companies adjust to work-from-home trends. "Return-to-office died in '23," said Nick Bloom, an economics professor at Stanford University and work-from-home expert. "There's a tombstone with 'RTO' on it...."

Though a number of companies issued return-to-work mandates this year, most are allowing employees to work from home at least part of the week. That makes 2024 the year for employers to figure out the hybrid model. "We're never going to go back to a five-days-in-the-office policy," said Stephan Meier, professor of business at Columbia University. "Some employers are going to force people to come back, but I think over the next year, more and more firms will actually figure out how to manage hybrid well." Thirty-eight percent of companies require full-time in-office work, down from 39% one quarter ago and 49% at the start of the year, according to software firm Scoop Technologies...

[Stanford economics professor] Bloom called remote-work numbers in 2023 "pancake-flat." Yes, large companies like Meta and Zoom made headlines by ordering workers back to the office. But, Bloom said, just as many other companies were quietly reducing office attendance to cut costs.

Bloom thinks holograms and VR devices are possible within five years. "In the long run, the thing that really matters is technology."

One paper estimates that currently 37% of America's jobs can be done entirely at home, according to the article, and ZipRecruiter's chief economist seems to agree, predicting that as much as 33% of America's work days will eventually be completed from home. "I think the numbers will gradually go up as this becomes more of an accepted norm, as future generations grow up with it being so widely available, and as the technology for doing it gets better."

And the article notes that the ZipRecruiter economist sees another factor fueling the trend. "Reluctant leaders aging out of the workforce will help, too, she said."
Wireless Networking

Wi-Fi 7 Signals the Industry's New Priority: Stability (ieee.org) 45

Multi-link operations and the 6-GHz band promise more reliability than before. From a report: The key to a future Wi-Fi you can depend on is something called multi-link operations (MLO). "It is the marquee feature of Wi-Fi 7," says Kevin Robinson, president and CEO of the Wi-Fi Alliance. MLO comes in two flavors. The first -- and simpler -- of the two is a version that allows Wi-Fi devices to spread a stream of data across multiple channels in a single frequency band. The technique makes the collective Wi-Fi signal more resilient to interference at a specific frequency. Where MLO really makes Wi-Fi 7 stand apart from previous generations, however, is a version that allows devices to spread a data stream across multiple frequency bands. For context, Wi-Fi utilizes three bands: 2.4 gigahertz, 5 GHz, and, as of 2020, 6 GHz.

Whether MLO spreads signals across multiple channels in the same frequency band or across channels in two or three bands, the goals are the same: dependability and reduced latency. Devices will be able to split up a stream of data and send portions across different channels at the same time -- which cuts down on the overall transmission time -- or beam copies of the data across diverse channels, in case one channel is noisy or otherwise impaired. MLO is hardly the only feature new to Wi-Fi 7, even if industry experts agree it's the most notable. Wi-Fi 7 will also see channel size increase from 160 megahertz to a new maximum of 320 MHz. Bigger channels mean more throughput capacity, which means more data in the same amount of time.
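The two MLO modes described above — splitting a stream across links for speed, or duplicating it for reliability — can be sketched with a toy byte stream. This is only an illustration of the idea, not the 802.11be framing or scheduling logic:

```python
# Toy illustration of the two MLO modes: split a data stream across links
# for throughput, or duplicate it on every link for reliability.
data = list(b"hello wifi7")
links = ["2.4GHz", "5GHz", "6GHz"]

# Aggregation: round-robin the bytes across links (less time per link).
split = {link: data[i::len(links)] for i, link in enumerate(links)}

# Redundancy: send a full copy on every link (survives a noisy channel).
duplicated = {link: list(data) for link in links}

# Reassembling the split stream interleaves the per-link queues back.
reassembled = []
for i in range(len(data)):
    reassembled.append(split[links[i % len(links)]][i // len(links)])
assert bytes(reassembled) == b"hello wifi7"
```

In the redundant mode the receiver simply keeps whichever copy arrives intact, which is where the latency and dependability gains come from.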

That said, 320-MHz channels won't be universally available. Wi-Fi uses unlicensed spectrum -- and in some regions, contiguous 320-MHz chunks of unlicensed spectrum don't exist because of other spectrum allocations. In cases where full channels aren't possible, Wi-Fi 7 includes another feature, called puncturing. "In the past, let's say you're looking for 320 MHz somewhere, but right within, there's a 20-MHz interferer. You would need to look at going to either side of that," says Andy Davidson, senior director of technology planning at Qualcomm. Before Wi-Fi 7, you'd functionally be stuck with about a 160-MHz channel either above or below that interference.
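The arithmetic behind puncturing is simple enough to sketch. Assuming a 320-MHz block divided into 20-MHz subchannels with one interferer in the middle (the scenario Davidson describes), the old fallback keeps only the larger clean contiguous region, while puncturing keeps every clean subchannel:

```python
# Sketch of Wi-Fi 7 preamble puncturing: instead of retreating to a clean
# contiguous region when a 20-MHz interferer sits inside a 320-MHz block,
# the interfered 20-MHz subchannel is "punched out" and the rest is kept.
CHANNEL_MHZ = 320
SUBCHANNEL_MHZ = 20
subchannels = list(range(CHANNEL_MHZ // SUBCHANNEL_MHZ))  # 16 x 20-MHz units

interfered = {7}  # one busy 20-MHz subchannel near the middle

# Pre-Wi-Fi 7 fallback: use the largest contiguous clean region.
def best_contiguous(subs, busy):
    best = run = 0
    for s in subs:
        run = 0 if s in busy else run + 1
        best = max(best, run)
    return best * SUBCHANNEL_MHZ

# Wi-Fi 7 puncturing: keep every clean subchannel, contiguous or not.
punctured = (len(subchannels) - len(interfered)) * SUBCHANNEL_MHZ

print(best_contiguous(subchannels, interfered))  # 160 MHz at best
print(punctured)                                  # 300 MHz usable
```

So a single 20-MHz interferer costs roughly half the channel without puncturing, but only 20 MHz with it.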

Slashdot Top Deals