Graphics

Gamers React With Overwhelming Disgust To DLSS 5's Generative AI Glow-Ups (arstechnica.com) 124

Kyle Orland writes via Ars Technica: Since deep-learning super-sampling (DLSS) launched on 2018's RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday's tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by "generative AI." The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.

While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 -- which it plans to launch in Autumn -- "a real-time neural rendering model" that can "deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects." Nvidia CEO Jensen Huang said explicitly that the technology melds "generative AI" with "handcrafted rendering" for "a dramatic leap in visual realism while preserving the control artists need for creative expression."

Unlike existing generative video models, which Nvidia notes are "difficult to precisely control and often lack predictability," DLSS 5 uses a game's internal color and motion vectors "to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame." That underlying game data helps the system "understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast," the company says.
Nvidia's announcement video and detailed Digital Foundry breakdown can be found at their respective links.

"Reactions have compared the effect to air-brushed pornography, 'yassified, looks-maxed freaks,' or those uncanny, unavoidable Evony ads," writes Orland. "Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look."

Thomas Was Alone developer Mike Bithell said the technology seems designed "for when you absolutely, positively, don't want any art direction in your gaming experience."

Gunfire Games Senior Concept Artist Jeff Talbot added that "in every shot the art direction was taken away for the senseless addition of 'details.' Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter."

DLSS 5's "AI dogshit is actually depressing," said New Blood Interactive founder and CEO Dave Oshry, adding that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."
Google

Google Discover Replaces News Headlines With Sometimes Inaccurate AI-Generated Alternatives (theverge.com) 25

An anonymous reader shared this report from The Verge: In early December, I brought you the news that Google has begun replacing Verge headlines, and those of our competitors, with AI clickbait nonsense in its content feed [which appears on the leftmost homescreen page of many Android phones and the Google app's homepage]. Google appeared to be backing away from the experiment, but now tells The Verge that its AI headlines in Google Discover are a feature, one that "performs well for user satisfaction." I once again see lots of misleading claims every time I check my phone...

For example, Google's AI claimed last week that "US reverses foreign drone ban," citing and linking to this PCMag story for the news. That's not just false — PCMag took pains to explain that it's false in the story that Google links to...! What does the author of that PCMag story think? "It makes me feel icky," Jim Fisher tells me over the phone. "I'd encourage people to click on stories and read them, and not trust what Google is spoon-feeding them." He says Google should be using the headline that humans wrote, and if Google needs a summary, it can use the ones that publications already submit to help search engines parse our work.

Google claims it's not rewriting headlines. It characterizes these new offerings as "trending topics," even though each "trending topic" presents itself as one of our stories, links to our stories, and uses our images, all without competent fact-checking to ensure the AI is getting them right... The AI is also no longer restricted to roughly four words per headline, so I no longer see nonsense headlines like "Microsoft developers using AI" or "AI tag debate heats." (Instead, I occasionally see tripe like "Fares: Need AAA & AA Games" or "Dispatch sold millions; few avoided romance.")

But Google's AI has no clue what parts of these stories are new, relevant, significant, or true, and it can easily confuse one story for another. On December 26th, Google told me that "Steam Machine price & HDMI details emerge." They hadn't. On January 11th, Google proclaimed that "ASUS ROG Ally X arrives." (It arrived in 2024; the new Xbox Ally arrived months ago.) On January 20th, it wrote that "Glasses-free 3D tech wows," introducing readers to "New 3D tech called Immensity from Leia" — but linking to this TechRadar story about an entirely different company called Visual Semiconductor...

Google declined our request for an interview to more fully explain the idea.

The site Android Police spotted more inaccurate headlines in December: A story from 9to5Google, which was actually titled 'Don't buy a Qi2 25W wireless charger hoping for faster speeds — just get the 'slower' one instead', was retitled as 'Qi2 slows older Pixels.' Similarly, Ars Technica's 'Valve's Steam Machine looks like a console, but don't expect it to be priced like one' was changed to 'Steam Machine price revealed.' At the time, we believed that the inaccuracies were due to the feature being unstable and in early testing... Now, Google has stopped calling Discover's replacement of human-written headlines an "experiment."
"Google buries a 'Generated with AI, which can make mistakes' message under the 'See more' button in the summary," reports 9to5Google, "making it look like this is the publisher's intended headline." While it is obvious that Google has refined this feature over the past couple of months, it doesn't take long to still find plenty of misleading headlines throughout Discover... Another article from NotebookCheck about an Anker power bank with a retractable cable was given a headline that's about another product entirely. A pair of headlines from Tom's Hardware and PCMag, meanwhile, show the two sides of using AI for this purpose. The Tom's Hardware headline, "Free GPU & Amazon Scams," isn't representative of the actual article, which is about someone who bought a GPU from Amazon, canceled their order, and the retailer shipped it anyway. There's nothing about "Amazon Scams" in the article.
Government

Should Salesforce's Tableau Be Granted a Patent On 'Visualizing Hierarchical Data'? 72

Long-time Slashdot reader theodp says America's Patent and Trademark Office (USPTO) has granted Tableau (Salesforce's visual analytics platform) a patent covering "Data Processing For Visualizing Hierarchical Data": "A provided data model may include a tree specification that declares parent-child relationships between objects in the data model. In response to a query associated with objects in the data model: employing the parent-child relationships to determine a tree that includes parent objects and child objects from the objects based on the parent-child relationships; determining a root object based on the query and the tree; traversing the tree from the root object to visit the child objects in the tree; determining partial results based on characteristics of the visited child objects such that the partial results are stored in an intermediate table; and providing a response to the query that includes values based on the intermediate table and the partial results."

A set of 15 simple drawings is provided to support the legal and tech gobbledygook of the invention claims. A person can have a manager, Tableau explains in Figures 5-6 of its accompanying drawings, and that manager can also manage and be managed by other people. Not only that, Tableau illustrates in Figures 7-10 that computers can be used to count how many people report to a manager. How does this magic work, you ask? Well, you "generate [a] tree" [Fig. 13] and "traverse a tree" [Fig. 15], Tableau explains. But wait, there's more — you can also display the people who report to a manager in multi-level or nested pie charts (aka Sunburst charts), Tableau demonstrates in Fig. 11.
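For perspective, the heart of the claim (declare parent-child relationships, build a tree, traverse it from a root object, and accumulate partial results in an intermediate table) fits in a few lines of code. A minimal sketch in Python, assuming a toy employee/manager data model rather than Tableau's actual implementation:

    from collections import defaultdict

    # Toy "data model": (employee, manager) pairs declare parent-child relationships.
    rows = [("ann", None), ("bob", "ann"), ("cam", "ann"), ("dee", "bob")]

    children = defaultdict(list)
    for employee, manager in rows:
        children[manager].append(employee)

    def count_reports(root, partial):
        """Traverse the tree from the root object, storing per-node subtree
        sizes in `partial`, the claim's 'intermediate table' of partial results."""
        total = sum(1 + count_reports(child, partial) for child in children[root])
        partial[root] = total
        return total

    partial = {}
    count_reports("ann", partial)
    print(partial)  # {'dee': 0, 'bob': 1, 'cam': 0, 'ann': 3}

In other words, the depth-first traversal taught in any first-year data structures course.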

Interestingly, Tableau released a "pre-Beta" Sunburst chart type in late April 2023 but yanked it at the end of June 2023 (others have long supported Sunburst charts, including Plotly). So, do you think Tableau should be awarded a patent in 2025 on a concept that has roots in circa-1921 Sunburst charts and tree algorithms taught to first-year CS students in circa-1975 Data Structures courses?
Hardware

Meta Set To Unveil First Consumer-Ready Smart Glasses With a Display, Wristband (cnbc.com) 16

At its upcoming Connect conference next month, Meta is expected to unveil its first consumer-ready smart glasses with a built-in display, alongside a neural wristband controller. The $800 device, codenamed Hypernova, will be able to show simple visual content like text messages and support AI assistant interactions. CNBC reports: Connect is a two-day conference for developers focused on virtual reality, AR and the metaverse. It was originally called Oculus Connect and obtained its current moniker after Facebook changed its parent company name to Meta in 2021. The glasses are internally codenamed Hypernova and will include a small digital display in the right lens of the device, said the people, who asked not to be named because the details are confidential. The device is expected to cost about $800 and will be sold in partnership with EssilorLuxottica, the people said. CNBC reported in October that Meta was working with Luxottica on consumer glasses with a display. [...]

With Hypernova, Meta will finally be offering glasses with a display to consumers, but the company is setting low expectations for sales, some of the sources said. That's because the device requires more components than its voice-only predecessors, and will be slightly heavier and thicker, the people said. [...] Although Hypernova will feature a display, those visual features are expected to be limited, people familiar with the matter said. They said the color display will offer about a 20 degree field of view -- meaning it will appear in a small window in a fixed position -- and will be used primarily to relay simple bits of information, such as incoming text messages.

The Hypernova glasses will also come paired with a wristband that will use technology built by Meta's CTRL Labs, said people familiar with the matter. CTRL Labs, which Meta acquired in 2019, specializes in building neural technology that could allow users to control computing devices using gestures made with their arms. [...] In addition to Hypernova and the wristband, Meta will also announce a third generation of its voice-only smart glasses with Luxottica at Connect, one person said.

Medicine

Infrared Contact Lenses Allow People To See In the Dark, Even With Eyes Closed (phys.org) 50

An anonymous reader quotes a report from Phys.Org: Neuroscientists and materials scientists have created contact lenses that enable infrared vision in both humans and mice by converting infrared light into visible light. Unlike infrared night vision goggles, the contact lenses, described in the journal Cell, do not require a power source -- and they enable the wearer to perceive multiple infrared wavelengths. Because they're transparent, users can see both infrared and visible light simultaneously, though infrared vision was enhanced when participants had their eyes closed. [...] The contact lens technology uses nanoparticles that absorb infrared light and convert it into wavelengths that are visible to mammalian eyes (e.g., electromagnetic radiation in the 400-700 nm range). The nanoparticles specifically enable the detection of "near-infrared light," which is infrared light in the 800-1600 nm range, just beyond what humans can already see.

The team previously showed that these nanoparticles enable infrared vision in mice when injected into the retina, but they wanted to design a less invasive option. To create the contact lenses, the team combined the nanoparticles with flexible, nontoxic polymers that are used in standard soft contact lenses. After showing that the contact lenses were nontoxic, they tested their function in both humans and mice. They found that contact lens-wearing mice displayed behaviors suggesting that they could see infrared wavelengths. For example, when the mice were given the choice of a dark box and an infrared-illuminated box, contact-wearing mice chose the dark box whereas contact-less mice showed no preference. The mice also showed physiological signals of infrared vision: the pupils of contact-wearing mice constricted in the presence of infrared light, and brain imaging revealed that infrared light caused their visual processing centers to light up. In humans, the infrared contact lenses enabled participants to accurately detect flashing Morse code-like signals and to perceive the direction of incoming infrared light.

An additional tweak to the contact lenses allows users to differentiate between different spectra of infrared light by engineering the nanoparticles to color-code different infrared wavelengths. For example, infrared wavelengths of 980 nm were converted to blue light, wavelengths of 808 nm were converted to green light, and wavelengths of 1,532 nm were converted to red light. In addition to enabling wearers to perceive more detail within the infrared spectrum, these color-coding nanoparticles could be modified to help color-blind people see wavelengths that they would otherwise be unable to detect. [...] Because the contact lenses have limited ability to capture fine details (due to their close proximity to the retina, which causes the converted light particles to scatter), the team also developed a wearable glass system using the same nanoparticle technology, which enabled participants to perceive higher-resolution infrared information. Currently, the contact lenses are only able to detect infrared radiation projected from an LED light source, but the researchers are working to increase the nanoparticles' sensitivity so that they can detect lower levels of infrared light.

Moon

ESA Video Game Trains AI To Recognize Craters On the Moon 4

Longtime Slashdot reader Qbertino writes: German public news outlet Tagesschau reports (source: YouTube) on an ESA video game that helps train a future moon lander's guidance AI to spot craters. The game has already helped collect visual data on millions of craters. The University of Darmstadt developed the game, called IMPACT, to support ESA's efforts to establish a base on the moon. An older article from August 2024 provides further details on the project.
Privacy

ChatGPT Models Are Surprisingly Good At Geoguessing (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: There's a somewhat concerning new trend going viral: People are using ChatGPT to figure out the location shown in pictures. This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can uniquely "reason" through uploaded images. In practice, the models can crop, rotate, and zoom in on photos -- even blurry and distorted ones -- to thoroughly analyze them. These image-analyzing capabilities, paired with the models' ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.

In many cases, the models don't appear to be drawing on "memories" of past ChatGPT conversations, or EXIF data, which is the metadata attached to photos that reveals details such as where the photo was taken. X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snaps, facades, and self-portraits, and instructing o3 to imagine it's playing "GeoGuessr," an online game that challenges players to guess locations from Google Street View images. It's an obvious potential privacy issue. There's nothing preventing a bad actor from screenshotting, say, a person's Instagram Story and using ChatGPT to try to doxx them.
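For readers wondering what EXIF can expose, here is a minimal sketch of reading a photo's GPS tags with the Pillow library (the filename is hypothetical, and many platforms strip this metadata on upload):

    from PIL import ExifTags, Image

    # Read the GPS IFD from a photo's EXIF metadata; returns an empty dict if absent.
    exif = Image.open("photo.jpg").getexif()
    gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
    print({ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps.items()})

As the report notes, though, the models often aren't relying on EXIF at all, so stripping it no longer protects against this kind of visual geolocation.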

AI

Gemini App Rolling Out Veo 2 Video Generation For Advanced Users 2

Google is rolling out Veo 2 video generation in the Gemini app for Advanced subscribers, allowing users to create eight-second, 720p cinematic-style videos from text prompts. 9to5Google reports: Announced at the end of last year, Veo 2 touts "fluid character movement, lifelike scenes, and finer visual details across diverse subjects and styles," as well as "cinematic realism," thanks to an understanding of real-world physics and human motion. In Gemini, Veo 2 can create eight-second video clips at 720p resolution. Specifically, you'll get an MP4 download in a 16:9 landscape format. There's also the ability to share via a g.co/gemini/share/ link. To enter your prompt, select Veo 2 from the model dropdown on the web and mobile apps. Just describe the scene you want to create: "The more detailed your description, the more control you have over the final video." It takes 1-2 minutes for the clip to generate. [...]

On the safety front, each frame features a SynthID digital watermark. Veo 2 is only available to Gemini Advanced subscribers ($19.99 per month), and there is a "monthly limit" on how many videos you can generate, with Google notifying users when they're close. It is rolling out globally -- in all languages supported by Gemini -- starting today and will be fully available in the coming weeks.
AI

Midjourney Releases V7, Its First New AI Image Model In Nearly a Year 3

Midjourney's new V7 image model features a revamped architecture with smarter text prompt handling, higher image quality, and default personalization based on user-rated images. While some features like upscaling aren't yet available, it does come with a faster, cheaper Draft Mode. TechCrunch reports: To use it, you'll first have to rate around 200 images to build a Midjourney "personalization" profile, if you haven't already. This profile tunes the model to your individual visual preferences; V7 is Midjourney's first model to have personalization switched on by default. Once you've done that, you'll be able to turn V7 on or off on Midjourney's website and, if you're a member of Midjourney's Discord server, on its Discord chatbot. In the web app, you can quickly select the model from the drop-down menu next to the "Version" label.

Midjourney CEO David Holz described V7 as a "totally different architecture" in a post on X. "V7 is ... much smarter with text prompts," Holz continued in an announcement on Discord. "[I]mage prompts look fantastic, image quality is noticeably higher with beautiful textures, and bodies, hands, and objects of all kinds have significantly better coherence on all details." V7 is available in two flavors, Turbo (costlier to run) and Relax, and powers a new tool called Draft Mode that renders images at 10x the speed and half the cost of the standard mode. Draft images are of lower quality than standard-mode images, but they can be enhanced and re-rendered with a click.

A number of standard Midjourney features aren't available yet for V7, according to Holz, including image upscaling and retexturing. Those will arrive in the near future, he said, possibly within two months. "This is an entirely new model with unique strengths and probably a few weaknesses," Holz wrote on Discord. "[W]e want to learn from you what it's good and bad at, but definitely keep in mind it may require different styles of prompting. So play around a bit."
AI

Stable Diffusion 3 Mangles Human Bodies Due To Nudity Filters (arstechnica.com) 88

An anonymous reader quotes a report from Ars Technica: On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease. A thread on Reddit, titled, "Is this release supposed to be a joke? [SD3-2B]" details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread titled, "Why is SD3 so bad at generating girls lying on the grass?" shows similar issues, but for entire human bodies.

AI image fans are so far blaming Stable Diffusion 3's anatomy fails on Stability's insistence on filtering out adult content (often called "NSFW" content) from the SD3 training data that teaches the model how to generate images. "Believe it or not, heavily censoring a model also gets rid of human anatomy, so... that's what happened," wrote one Reddit user in the thread. The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans accurately, and AI researchers soon discovered that censoring adult content that contains nudity also severely hampers an AI model's ability to generate accurate human anatomy. At the time, Stability AI reversed course with SD 2.1 and SD XL, regaining some abilities lost by excluding NSFW content. "It works fine as long as there are no humans in the picture, I think their improved nsfw filter for filtering training data decided anything humanoid is nsfw," wrote another Redditor.

Basically, any time a prompt homes in on a concept that isn't represented well in its training dataset, the image model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying. Using a free online demo of SD3 on Hugging Face, we ran prompts and saw similar results to those being reported by others. For example, the prompt "a man showing his hands" returned an image of a man holding up two giant-sized backward hands, although each hand at least had five fingers.
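For anyone who wants to reproduce the results locally rather than through the Hugging Face demo, a minimal sketch using the diffusers library might look like this (assuming a CUDA GPU and that you've accepted the gated SD3 Medium license on Hugging Face):

    import torch
    from diffusers import StableDiffusion3Pipeline

    # Load Stable Diffusion 3 Medium in half precision.
    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        torch_dtype=torch.float16,
    ).to("cuda")

    # The prompt the article tested, which returned giant backward hands.
    image = pipe("a man showing his hands", num_inference_steps=28).images[0]
    image.save("hands.png")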

Programming

FORTRAN and COBOL Re-enter TIOBE's Ranking of Programming Language Popularity (i-programmer.info) 93

"The TIOBE Index sets out to reflect the relative popularity of computer languages," writes i-Programmer, "so it comes as something of a surprise to see two languages dating from the 1950's in this month's Top 20. Having broken into the the Top 20 in April 2021 Fortran has continued to rise and has now risen to it's highest ever position at #10... The headline for this month's report by Paul Jansen on the TIOBE index is:

Fortran in the top 10, what is going on?

Jansen's explanation points to the fact that there are more than 1,000 hits on Amazon for "Fortran Programming," while languages such as Kotlin and Rust barely hit 300 books for the same search query. He also explains that Fortran is still evolving, with the new ISO Fortran 2023 definition published less than half a year ago...

The other legacy language that is on the rise in the TIOBE index is COBOL. We noticed it re-enter the Top 20 in January 2024 and, having dropped out in the interim, it is there again this month.

More details from TechRepublic: Along with Fortran holding on to its spot in the rankings, there were a few small changes in the top 10. Go gained 0.61 percentage points year over year, rising from tenth place in May 2023 to eighth this year. C++ rose slightly in popularity year over year, from fourth place to third, while Java (-3.53%) and Visual Basic (-1.8%) fell.
Here's how TIOBE ranked the 10 most popular programming languages in May:
  1. Python
  2. C
  3. C++
  4. Java
  5. C#
  6. JavaScript
  7. Visual Basic
  8. Go
  9. SQL
  10. Fortran

On the rival PYPL ranking of programming language popularity, Fortran does not appear anywhere in the top 29.

A note on its page explains that "Worldwide, Python is the most popular language, Rust grew the most in the last 5 years (2.1%) and Java lost the most (-4.0%)." Here's how it ranks the 10 most popular programming languages for May:

  1. Python (28.98% share)
  2. Java (15.97% share)
  3. JavaScript (8.79%)
  4. C# (6.78% share)
  5. R (4.76% share)
  6. PHP (4.55% share)
  7. TypeScript (3.03% share)
  8. Swift (2.76% share)
  9. Rust (2.6% share)

AI

Ask Slashdot: DuckDB Queries JSON with SQL. But Will AI Change Code Syntax? (pgrs.net) 12

Long-time Slashdot reader theodp writes: Among the amazing features of the in-process analytical database DuckDB, writes software engineer Paul Gross in DuckDB as the New jq, is that it includes many data importers without requiring extra dependencies. This means it can natively read and parse JSON as a database table, among many other formats. "Once I learned DuckDB could read JSON files directly into memory," Gross explains, "I realized that I could use it for many of the things where I'm currently using jq. In contrast to the complicated and custom jq syntax, I'm very familiar with SQL and use it almost daily."
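As a rough illustration of the swap Gross describes, here is what a typical jq-style aggregation might look like in DuckDB, via its Python API (the file name and fields are hypothetical):

    import duckdb

    # jq equivalent (for a JSON array of objects):
    #   jq 'group_by(.name) | map({name: .[0].name, n: length})' events.json
    rows = duckdb.sql("""
        SELECT name, count(*) AS n
        FROM read_json_auto('events.json')  -- native JSON import, no extra dependencies
        GROUP BY name
        ORDER BY n DESC
    """).fetchall()
    print(rows)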

The stark difference between the two programming approaches to the same problem — terse-but-cryptic jq vs. more-straightforward-to-most SQL — also raises some interesting questions: Will the use of Generative AI coding assistants more firmly entrench the status quo of the existing programming paradigms on whose codebases it's been trained? Or could it help bootstrap the acceptance of new, more approachable programming paradigms?

Had something like ChatGPT been around back in the Programming Windows 95 days, might people have been content to use Copilot to generate reams of difficult-to-maintain-and-enhance Windows C code using models trained on the existing codebases instead of exploring easier approaches to Windows programming like Visual BASIC?

Science

Ask Slashdot: Can You Picture Things in Your Mind? (theguardian.com) 243

"It never occurred to me that having no visual imagery was unusual..." writes a science journalist at the Guardian.

"It's not that I forget what I look like, but I am sometimes a little surprised, and don't feel connected to my outward appearance as a matter of identity." There's been a surge of research on how aphantasia affects our lives... [F]or some it affects images alone; some can't imagine other sensory information, like sounds. Some people with aphantasia have visualizations when they dream (I do), and others don't. There's evidence that it can make it harder for people to recall visual details, though other studies show that aphants perform better on some memory tests unrelated to imagery... But overall, people with aphantasia don't seem to have serious problems navigating their day-to-day lives, unlike those with more severe memory conditions like episodic amnesia...

Some people consider aphantasia to be a deficit and wish they could reverse it. People have claimed they can train their way out of aphantasia, or use psychedelics to regain some sense of mental imagery (the jury is out on whether that works). I have no desire for this — my mind is plenty busy without a stream of imagery. If I was born with imagery, it would be commonplace for me, and I'm sure I'd enjoy it. But I already can find myself overwhelmed with thoughts and feelings that have no visual aspects to them.

Long-time Slashdot reader whoever57 writes that "Personally, I never realized before reading this article that people could create mental images." (And they also wonder if people with the condition tend to go into STEM fields.) There's what's known as the "red apple test," where you rate your own ability to visualize an apple on a scale of 1 to 5.

Any Slashdot readers want to share their own experiences in the comments?
Music

Remembering The 1970s-Era Technology of Devo (msn.com) 43

It's the 50th anniversary of Devo, the geek-friendly, dystopia-themed band that combined synthesizers with showmanship, founded in 1973.

As a new documentary about the group celebrates its Sundance world premiere, the Los Angeles Times explores how the band made innovative use of the technology of its time: With their yellow radiation suits, red "energy dome" hats and manic energy, part playful and part angry, the band Devo combined the futuristic glamour of new wave with atomic-age anxieties and post-'60s disillusionment.... Uniquely, the band developed a fully formed, intricate internal philosophy and mythology built around the idea that humans were "de-evolving" by becoming dumber and less sophisticated. The mascot of the band, known as "Booji Boy," was an infantile urchin in a rubber mask...

Was there an idea to document the band right from the very start? It's incredible that there's footage of the very first show in 1973.

GERALD CASALE:
We were that delusional, yes. And we were trying to document ourselves when nobody was interested in doing that. And when it was quite expensive and clumsy to do it. You're dealing with Sony U-matic reel-to-reel recorders and big heavy cameras and a scarcity of equipment and very little interest. I mean, my God, if a Devo of now existed like we did, then clearly, there'd be a million cellphone videos.

MARK MOTHERSBAUGH: [...] Bob was the first of us to direct a video, back when he was in high school. Bob and me, our dad, starting when we were like babies, like 1 year old, he'd bring out an 8-millimeter camera that didn't have sound, and so he shot hundreds and hundreds of these films through the years, just family stuff. So we always kind of liked that. And Jerry was doing films at Kent State with Chuck Statler before Chuck said, "Hey, let's do a film with a couple of the songs in it." So we were always audio-visual. We were always thinking in both worlds...

[DOCUMENTARY DIRECTOR] CHRIS SMITH: One of my favorite details in looking through the old footage is, there's an early show that was recorded in black-and-white, and they have such limited materials to work with, yet they do this thing where the light goes on and off on both sides of the stage. And to me it was so emblematic of where they were going because they were making something that you hadn't seen before that was super creative and visually distinctive and interesting out of something we all had to work with... You could see in that footage, the inventiveness that wasn't a result of means — it was something that was just created out of what they had to work with at that time.

MARK MOTHERSBAUGH: [...] Sonically, a lot of what we did was just related to the fact that Bob Mothersbaugh bought a four-track TEAC. So we had this machine that could record four little skinny channels on a quarter-inch tape. It was an amateur home-tape machine, but it made us think about our parts, because we thought, well, OK, you're only going to get to do the bass on one track, and the guitar on one track and the drums on one track and the synth. You're not going to do all these overdubs. We had to think about it, what was an essential part. So we'd work on the song till you could play it just in one pass. Everything essential. I think it really made the early stuff sound really strong because of that.

You really get a sense of that on their 1978 song "Mongoloid." But the 2023 documentary's director doesn't see his film as an ending bookmark for the band. "They're still touring. They're all still actively creatively pursuing many different things, as I hope that you would expect after seeing the film."

And speaking specifically about the documentary, Mark Mothersbaugh says Booji Boy "describes it as a halfway point to the year of 2073, where we'll celebrate the 100-year anniversary." Booji Boy also says the next 50 years will be more about action. "And it'll be about positive mutation. Mutate, don't stagnate."
Space

'Behold - the Best Space Images of 2023' (scientificamerican.com) 5

As the year comes to a close, "one constant, reliable source of awe and beauty is the sky over our head..." writes astronomer Phil Plait in Scientific American.

"And every year we see new things, or old things in new ways, and I've been set the wonderful task of selecting my favorites and relaying them and their import to you." End-of-year lists, especially those displaying astronomical imagery, tend to be splashy and colorful. That's understandable, but what they sometimes miss are the more subtle photographs, those that hide momentous discoveries in minor visual details or offer fresh perspectives on familiar objects. They may not leap off the page, but they still have an impact. That's what I've kept in mind while sorting through this year's celestial treasure trove. This gallery is by no means complete, but it shows what I think are some of the most interesting astronomical portraits to have emerged in 2023.

No gallery such as this would be complete without something from the James Webb Space Telescope (JWST), our newest infrared eye on the sky. This monster observatory has already brought so many small revolutions to astronomy that picking one from the past year is no small task. Should it be a baby star throwing an immense tantrum or a massive old star shedding material at colossal rates before it inevitably explodes as a supernova? Or should it be a map of a mind-stomping 100,000 galaxies?

Well, how about something very, very different — such as the skeletal structure of a nearby galaxy's intricate web of dust [also displayed at the top of Scientific American's article]...? [I]t has a beautiful spiral structure and shows the effects of a smaller galaxy colliding with it. In the phenomenally sharp and decidedly eerie false-color view from JWST's Mid-Infrared Instrument, we see countless clouds of cosmic dust in a skeletonlike pattern. Each of these clouds is made up of small grains of rocky and sooty carbon-based molecules expelled by dying stars...

Astronomers captured this image to better understand how stars are born in stellar nurseries and how they evolve over time.

Software

Meet Kosmik, a Visual Canvas With Built-In PDF Reader and Web Browser (techcrunch.com) 10

An anonymous reader quotes a report from TechCrunch: In recent years, tools such as Figma, TLDraw, Apple's Freeform and Arc browser's Easel functionality have tried to sell the idea of using an "infinite canvas" for capturing and sharing ideas. French startup Kosmik is building on that general concept with a knowledge-capturing tool that doesn't require the user to switch between different windows or apps to capture information. Kosmik was founded in 2018 by Paul Rony and Christophe Van Deputte. Prior to that, Rony worked at a video production company as a junior director, and he wanted a single whiteboard-type canvas instead of files and folders where he could put videos, PDFs, websites, notes and drawings. And that's when he started to build Kosmik, Rony told TechCrunch, drawing on a prior background in computing history and philosophy.

"It took us almost three years to make a working product to include baseline features like data encryption, offline-first mode and build a spatial canvas-based UI," Rony explained. "We have built all of this on IPFS, so when two people collaborate everything is peer-to-peer rather than relying on a server-based architecture." Kosmik offers an infinite canvas interface where you can insert text, images, videos, PDFs and links, which can be opened and previewed in a side panel. It also features a built-in browser, saving users from having to switch windows when they need to find a relevant website link. Additionally, the platform sports a PDF reader, which lets the user extract elements such as images and text.

The tool is useful for designers, architects, consultants, and students building boards of information for different projects: they don't need to open up a bunch of Chrome tabs and paste details into a document, which is not a very visual medium for mixed media types. Some retail investors are using the app to monitor stock prices, and consultants are using it for their project boards. Available via the web, Mac, and Windows, Kosmik ships with a basic free tier, though this has a limit of 50MB of files and 5GB of storage with 500 canvas "elements." For more storage and unlimited elements, the company offers a $5.99 monthly subscription, with plans in place to eventually offer a "pay-once" model for those who only want to use the software on a single device.

Movies

Nintendo Is Making a Live-Action 'Legend of Zelda' Movie (theverge.com) 32

Nintendo has confirmed that it's working on a live-action adaptation of The Legend of Zelda, directed by Wes Ball and produced by Zelda creator Shigeru Miyamoto. The Verge reports: "This is Miyamoto. I have been working on the live-action film of The Legend of Zelda for many years now with Avi Arad-san, who has produced many mega hit films," Miyamoto said in a statement posted on X, formerly Twitter. We might be waiting a while for the movie, however; Miyamoto said, "It will take time until its completion, but I hope you look forward to seeing it." While there aren't many details on the movie itself, Nintendo says that it will be co-financed by itself and Sony, with Nintendo footing more than 50 percent of the bill.

"By producing visual contents of Nintendo IP by itself, Nintendo is creating new opportunities to have people from around the world to access the world of entertainment which Nintendo has built, through different means apart from its dedicated game consoles," the company said in a statement about the Zelda film. "By getting deeply involved in the movie production with the aim to put smiles on everyone's faces through entertainment, Nintendo will continue its efforts to produce unique entertainment and deliver it to as many people as possible."

Graphics

Nvidia Details 'Neural Texture Compression', Claims Significant Improvements (techspot.com) 17

Slashdot reader indominabledemon shared this article from TechSpot: Games today use highly detailed textures that can quickly fill the frame buffer on many graphics cards, leading to stuttering and game crashes in recent AAA titles for many gamers... [T]he most promising development in this direction so far comes from Nvidia — neural texture compression could reduce system requirements for future AAA titles, at least when it comes to VRAM and storage... In a research paper published this week, the company details a new algorithm for texture compression that is supposedly better than both traditional block compression (BC) methods as well as other advanced compression techniques such as AVIF and JPEG-XL.

The new algorithm is simply called neural texture compression (NTC), and as the name suggests it uses a neural network designed specifically for material textures. To make this fast enough for practical use, Nvidia researchers built several small neural networks optimized for each material... [T]extures compressed with NTC preserve a lot more detail while also being significantly smaller than even these same textures compressed with BC techniques to a quarter of the original resolution... Researchers explain the idea behind their approach is to compress all these maps along with their mipmap chain into a single file, and then have them be decompressed in real time with the same random access as traditional block texture compression...
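To make that concrete, here is a toy stand-in for the general technique: one small per-material network that decodes a compact latent vector plus texel coordinates into all of the material's channels in a single random-access query. This is an illustrative PyTorch sketch of the idea, not Nvidia's actual architecture:

    import torch
    from torch import nn

    class MaterialDecoder(nn.Module):
        """Toy per-material decoder: latent features plus (u, v, mip level)
        map to all texture channels (e.g. albedo, normal, roughness) at once."""
        def __init__(self, latent_dim=8, channels=9):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + 3, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, channels),
            )

        def forward(self, latent, uv_mip):
            # Each texel is decoded independently of its neighbors, which is
            # what preserves the random access of block compression.
            return self.net(torch.cat([latent, uv_mip], dim=-1))

    decoder = MaterialDecoder()
    texel = decoder(torch.randn(1, 8), torch.tensor([[0.25, 0.75, 0.0]]))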

However, NTC does have some limitations that may limit its appeal. First, as with any lossy compression, it can introduce visual degradation at low bitrates. Researchers observed mild blurring, the removal of fine details, color banding, color shifts, and features leaking between texture channels. Furthermore, game artists won't be able to optimize textures in all the same ways they do today, for instance, by lowering the resolution of certain texture maps for less important objects or NPCs. Nvidia says all maps need to be the same size before compression, which is bound to complicate workflows. This sounds even worse when you consider that the benefits of NTC don't apply at larger camera distances.

Perhaps the biggest disadvantages of NTC have to do with texture filtering. As we've seen with technologies like DLSS, there is potential for image flickering and other visual artifacts when using textures compressed through NTC. And while games can utilize anisotropic filtering to improve the appearance of textures in the distance at a minimal performance cost, the same isn't possible with Nvidia's NTC at this point.

Open Source

Red Hat's 30th Anniversary: How a Microsoft Competitor Rose from an Apartment-Based Startup (msn.com) 47

For Red Hat's 30th anniversary, North Carolina's News & Observer newspaper ran a special four-part series of articles.

In the first article, Red Hat co-founder Bob Young remembers Red Hat's first big breakthrough: winning InfoWorld's "OS of the Year" award in 1998 — at a time when Microsoft's Windows controlled 85% of the market. "How is that possible," Young said, "that one of the world's biggest technology companies, on this strategically critical product, loses the product of the year to a company with 50 employees in the tobacco fields of North Carolina?" The answer, he would tell the many reporters who suddenly wanted to learn about his upstart company, strikes at "the beauty" of open-source software.

"Our engineering team is an order of magnitude bigger than Microsoft's engineering team on Windows, and I don't really care how many people they have," Young would say. "Like they may have thousands of the smartest operating system engineers that they could scour the planet for, and we had 10,000 engineers by comparison...."

Young was a 40-year-old Canadian computer equipment salesperson with a software catalog when he noticed what Marc Ewing was doing. [Ewing was a recent college graduate bored with his two-month job at IBM, selling customized Linux as a side hustle.] It's pretty primitive, but it's going in the right direction, Young thought. He began reselling Ewing's Red Hat product. Eventually, he called Ewing, and the two met at a tech conference in New York City. "I needed a product, and Marc needed some marketing help," said Young, who was living in Connecticut at the time. "So we put our two little businesses together."

Red Hat incorporated in March 1993, with the earliest employees operating the nascent business out of Ewing's Durham apartment. Eventually, the landlord discovered what they were doing and kicked them out.

The four articles capture the highlights. ("A visual effects group used its Linux 4.1 to design parts of the 1997 film Titanic.") And it doesn't leave out Red Hat's skirmishes with Microsoft. ("Microsoft was owned by the richest person in the world. Red Hat engineers were still linking servers together with extension cords.") "We were changing the industry and a lot of companies were mad at us," says Michael Ferris, Red Hat's VP of corporate development/strategy. Soon there were corporate partnerships with Netscape, Intel, Hewlett-Packard, Compaq, Dell, and IBM — and when Red Hat finally went public in 1999, its stock saw the eighth-largest first-day gain in Wall Street history, rising in value within days to over $7 billion and "making overnight millionaires of its earliest employees."

But there's also inspiring details like the quote painted on the wall of Red Hat's headquarters in Durham: "Every revolution was first a thought in one man's mind; and when the same thought occurs to another man, it is the key to that era..." It's fun to see the story told by a local newspaper, with subheadings like "It started with a student from Finland" and "Red Hat takes on the Microsoft Goliath."

Something I'd never thought of. 2001's 9/11 terrorist attack on the World Trade Center "destroyed the principal data centers of many Wall Street investment banks, which were housed in the twin towers. With their computers wiped out, financial institutions had to choose whether to rebuild with standard proprietary software or the emergent open source. Many picked the latter." And by the mid-2000s, "Red Hat was the world's largest provider of Linux..." according to part two of the series. "Soon, Red Hat was servicing more than 90% of Fortune 500 companies." By then, even the most vehement former critics were amenable to Red Hat's kind of software. Microsoft had begun to integrate open source into its core operations. "Microsoft was on the wrong side of history when open source exploded at the beginning of the century, and I can say that about me personally," Microsoft President Brad Smith later said.

In the 2010s, "open source has won" became a popular tagline among programmers. After years of fighting for legitimacy, former Red Hat executives said victory felt good. "There was never gloating," Tiemann said.

"But there was always pride."

In 2017 Red Hat's CEO answered questions from Slashdot's readers.
AI

Developer Builds a ChatGPT Client for MS-DOS (yeokhengmeng.com) 54

"With the recent attention on ChatGPT and OpenAI's release of their APIs, many developers have developed clients for modern platforms to talk to this super smart AI chatbot," writes maker/retro coding enthusiast yeokm1 . "However I'm pretty sure almost nobody has written one for a vintage platform like MS-DOS."

They share a blog post with all the details — including footage of their client ultimately running on a vintage IBM PC from 1984 (with a black and orange monitor and those big, boxy keys). "3.5 years ago, I wrote a Slack client to run on Windows 3.1," the blog post explains. "I thought to try something different this time and develop for an even older platform as a challenge."

One challenge was just finding a networking API for DOS. But everything came together, with the ChatGPT-for-DOS app written using the Visual Studio Code text editor (testing on a virtual machine running DOS 6.22), parsing the JSON output from OpenAI's Chat Completion API. "And before you ask, I did not use ChatGPT for help to code this app in any way," the blog post concludes. But after the app was working, he used it to ask ChatGPT how one would build such an app — and ChatGPT breezily (and erroneously) suggested that he just try accessing OpenAI's Python API from the DOS command line.

"What is the AI smoking...?"
