Comment Re:Second flying drone to explore another planet (Score 3) 70

Planetary scientists frequently refer to moons that are large enough to be in hydrostatic equilibrium as planets in the literature. Examples, just from a quick search:

"Locally enhanced precipitation organized by planetary-scale waves on Titan"

"3.3. Relevance to Other Planets" (section on Titan)

"Superrotation in Planetary Atmospheres" (article covers Titan alongside three other planets)

"All planets with substantial atmospheres (e.g., Earth, Venus, Mars, and Titan) have ionospheres which expand above the exobase"

"Clouds on Titan result from the condensation of methane and ethane and, as on other planets, are primarily structured by circulation of the atmosphere"

"... of the planet. However, rather than being scarred by volcanic features, Titan's surface is largely shaped..."

"Spectrophotometry of the Jovian Planets and Titan at 300- to 1000-nm Wavelength: The Methane Spectrum" (okay, it's mainly referring to the Jovian satellites as planets, but same point)

"Superrotation indices for Solar System and extrasolar atmospheres" - contains a table whose first column is "Planet", and has Titan in the list, alongside other planets

Etc. This is not to be confused with the phrase "minor planet", which is used for asteroids, etc. In general there's a clear difference in how commonly you see the large moons in hydrostatic equilibrium referred to as "planets" or described with "planetary" adjectives, versus smaller bodies that are not in hydrostatic equilibrium.

Comment Re:Titan or Bust! (Score 3, Informative) 70

Why?

NASA's obsession with Mars is weird, and it consumes the lion's share of their planetary exploration budget. We know vastly more about Mars than we know about anywhere else except Earth.

This news here is bittersweet for me. I *love* Titan - it and Venus are my two favourite worlds for further exploration, and Dragonfly is a superb way to explore Titan. But there's some sadness in the fact that they're launching it to an equatorial site, so we don't get to see the fascinating hydrocarbon seas and the terrain sculpted by them near the poles. I REALLY wish they were going to the north pole instead :(

In theory they could eventually get there, but the craft would have to survive far beyond its design limits and get a lot of mission extensions. At a maximum pace of travel it might cover 600 metres or so per Earth day on average. So we're talking something like 12 years to get to the first small hydrocarbon lakes and ~18 years to get to Ligeia Mare or Punga Mare (a bit further to Kraken Mare), *assuming* no detours, vs. a 2 1/2 year mission design. And that ignores the fact that they'll be going slower at the start - the nominal mission is only supposed to cover 175 km, just a few percent of the way, at under 200 metres per day. Sigh... Maybe it'll be possible to squeeze more range out of it once they're comfortable with its performance and reliability, but... it's a LONG way to the poles.
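
Rough sanity check on those numbers (the straight-line distances below are my own ballpark assumptions, back-of-the-envelope only, not mission figures):

    # Back-of-the-envelope check of the traverse times above (Python).
    # Distances are rough assumptions on my part, not mission numbers.
    max_rate_km_per_day = 0.6                          # ~600 metres per Earth day, best case
    nominal_rate_km_per_day = 175 / (2.5 * 365.25)     # 175 km over a ~2.5-year nominal mission

    targets_km = {
        "first small hydrocarbon lakes": 2600,         # assumed straight-line distance
        "Ligeia Mare / Punga Mare": 3900,              # assumed
    }

    for name, dist in targets_km.items():
        years = dist / max_rate_km_per_day / 365.25
        print(f"{name}: ~{years:.0f} years at the max pace")

    print(f"nominal mission pace: ~{nominal_rate_km_per_day * 1000:.0f} m/day")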

At least if it lasts that long it'll have seen a full transition between the wet and dry cycles, which should take ~15 years. So maybe surface liquids will be common at certain points, rare at others.

Comment H19 (Score 1) 79

The machine I had was a Heathkit H19. It had its own OS called HDOS. I'm not sure what the quality of that was or how it compared to CP/M. However, the hardware also had ROM mapped to the first 2K or so (to run the program controlling the front panel display), which made it incompatible with CP/M. I vaguely remember it already being clear that all the good software was CP/M-only, that I had the wrong machine, and that Heathkit had screwed up. Anybody else remember these, or have any comments on them? It does sound like creating HDOS was not a trivial amount of work; was anything interesting lost with it?
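
For anyone wondering why ROM at the bottom of memory is a showstopper for CP/M: stock CP/M-80 assumes writable RAM from address 0 up, since it keeps its warm-boot and BDOS jump vectors, the default FCB, and the default DMA buffer in page zero, with user programs loading at 0100h. A minimal sketch of the conflict (the ~2K ROM size is just my recollection above; the page-zero layout is standard CP/M, nothing H19/HDOS-specific):

    # Why ROM in low memory breaks stock CP/M-80: CP/M expects writable RAM
    # at these page-zero addresses (standard CP/M layout; ROM size assumed ~2K).
    ROM_START, ROM_END = 0x0000, 0x07FF    # ~2K of ROM mapped at the bottom of memory

    cpm_low_memory = {
        0x0000: "JMP to BIOS warm boot (must be RAM)",
        0x0005: "JMP to BDOS entry point (must be RAM)",
        0x005C: "default File Control Block",
        0x0080: "default DMA / command-tail buffer",
        0x0100: "start of TPA (user programs load here)",
    }

    for addr, purpose in cpm_low_memory.items():
        shadowed = ROM_START <= addr <= ROM_END
        print(f"{addr:#06x}: {purpose}" + ("  <-- shadowed by ROM" if shadowed else ""))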

Submission + - Windows vulnerability reported by the NSA exploited to install Russian malware (arstechnica.com)

echo123 writes: Kremlin-backed hackers have been exploiting a critical Microsoft vulnerability for four years in attacks that targeted a vast array of organizations with a previously undocumented tool, the software maker disclosed Monday.

When Microsoft patched the vulnerability in October 2022—at least two years after it came under attack by the Russian hackers—the company made no mention that it was under active exploitation. As of publication, the company’s advisory still made no mention of the in-the-wild targeting. Windows users frequently prioritize the installation of patches based on whether a vulnerability is likely to be exploited in real-world attacks.

On Monday, Microsoft revealed that a hacking group tracked under the name Forest Blizzard has been exploiting CVE-2022-38028 since at least June 2020—and possibly as early as April 2019. The threat group—which is also tracked under names including APT28, Sednit, Sofacy, GRU Unit 26165, and Fancy Bear—has been linked by the US and the UK governments to Unit 26165 of the Main Intelligence Directorate, a Russian military intelligence arm better known as the GRU. Forest Blizzard focuses on intelligence gathering through the hacking of a wide array of organizations, mainly in the US, Europe, and the Middle East.

Microsoft representatives didn't respond to an email asking why the in-the-wild exploits are being reported only now.

Monday’s advisory provided additional technical details:

Read the rest at ArsTechnica.

Submission + - Voyager 1 Is Communicating Well Again (scientificamerican.com)

fahrbot-bot writes: Scientific American is reporting that after five months of nonsensical transmissions from humanity’s most distant emissary, NASA’s iconic Voyager 1 spacecraft is finally communicating intelligibly with Earth again.

When the latest communications glitch occurred last fall, scientists could still send signals to the distant probe, and they could tell that the spacecraft was operating. But all they got from Voyager 1 was gibberish—what NASA described in December 2023 as “a repeating pattern of ones and zeros.” The team was able to trace the issue back to a part of the spacecraft’s computer system called the flight data subsystem, or FDS, and identified that a particular chip within that system had failed.

Mission personnel couldn’t repair the chip. They were, however, able to break the code held on the failed chip into pieces they could tuck into spare corners of the FDS’s memory, according to NASA. The first such fix was transmitted to Voyager 1 on April 18. With a total distance of 30 billion miles to cross from Earth to the spacecraft and back, the team had to wait nearly two full days for a response from the probe. But on April 20 NASA got confirmation that the initial fix worked. Additional commands to rewrite the rest of the FDS system’s lost code are scheduled for the coming weeks, according to the space agency, including commands that will restore the spacecraft’s ability to send home science data.
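
For scale, the "nearly two full days" is just light travel time over that distance; a quick check with round numbers:

    # Round-trip light time for a signal to Voyager 1 and back (round numbers).
    miles_one_way = 15e9                   # ~15 billion miles each way (~30 billion round trip)
    metres_one_way = miles_one_way * 1609.344
    c = 299_792_458                        # speed of light, m/s

    one_way_hours = metres_one_way / c / 3600
    print(f"one way: ~{one_way_hours:.1f} h, round trip: ~{2 * one_way_hours / 24:.1f} days")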

Also: Voyager 1 is sending data back to Earth for the first time in 5 months and NASA's Voyager 1 spacecraft finally phones home after 5 months of no contact

Submission + - Voyager 1 resumes sending information (nasa.gov)

quonset writes: Just over two weeks ago, NASA figured out why its Voyager 1 spacecraft stopped sending useful data. They suspected corrupted memory in its flight data system (FDS) was the culprit. Today, for the first time since November, Voyager 1 is sending useful data about its health and the status of its onboard systems back to NASA. How did NASA accomplish this feat of long distance repair? They broke up the code into smaller pieces and redistributed them throughout the memory. From NASA:

So they devised a plan to divide the affected code into sections and store those sections in different places in the FDS. To make this plan work, they also needed to adjust those code sections to ensure, for example, that they all still function as a whole. Any references to the location of that code in other parts of the FDS memory needed to be updated as well.

The team started by singling out the code responsible for packaging the spacecraft’s engineering data. They sent it to its new location in the FDS memory on April 18. A radio signal takes about 22 ½ hours to reach Voyager 1, which is over 15 billion miles (24 billion kilometers) from Earth, and another 22 ½ hours for a signal to come back to Earth. When the mission flight team heard back from the spacecraft on April 20, they saw that the modification worked: For the first time in five months, they have been able to check the health and status of the spacecraft.

During the coming weeks, the team will relocate and adjust the other affected portions of the FDS software. These include the portions that will start returning science data.
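
Conceptually this is the classic relocate-and-patch trick: move a block of code into whatever spare memory exists, then update every reference that pointed at its old location. A toy illustration of the idea (none of this reflects the real FDS memory map or software; it is only the general technique):

    # Toy illustration of relocating code sections and patching references.
    # Names, sizes and addresses are invented; this is not the actual FDS software.
    sections = {"pack_engineering_data": 120, "pack_science_data": 200}   # section -> size (words)
    free_regions = [(0x0400, 150), (0x0900, 260)]                         # (start address, size) of spare memory

    new_addresses = {}
    for (name, size), (start, room) in zip(sections.items(), free_regions):
        assert size <= room, f"{name} does not fit at {start:#06x}"
        new_addresses[name] = start                    # section relocated here

    # Every caller that referenced the old location must be updated to the new one.
    references = {"telemetry_scheduler": "pack_engineering_data"}
    patched = {caller: new_addresses[target] for caller, target in references.items()}
    print(new_addresses, patched)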

Comment Re: Year of the Wayland desktop... (Score 1) 66

No, ignoring the XY position of windows is a specific design decision by Wayland. They did it on purpose because they think it is a security problem. The idea that the desktop could just look at the requested positions and only ignore bad ones is apparently foreign to them. Instead they made it impossible for an application to store window positions.
They also purposely designed it so it is impossible to work with overlapping windows, by requiring that clicking in a window always raises it, a design that was removed from X10 to make X11. Their arrogance knows no bounds.
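
For what it's worth, the check being described is not hard to express; a sketch of the "honour the request unless it's clearly bad" policy (plain pseudocode, not any real Wayland or X11 API):

    # Sketch of the policy the comment suggests: honour a window's requested
    # position unless it is clearly bad. Not a real compositor/X11/Wayland API.
    def place_window(req_x, req_y, win_w, win_h, screen_w, screen_h):
        # Reject only positions that would leave the window entirely off-screen.
        if (req_x + win_w <= 0 or req_x >= screen_w or
                req_y + win_h <= 0 or req_y >= screen_h):
            return None                        # bad request: fall back to compositor placement
        return req_x, req_y                    # otherwise honour the app's saved position

    print(place_window(100, 50, 800, 600, 1920, 1080))    # (100, 50) - honoured
    print(place_window(5000, 50, 800, 600, 1920, 1080))   # None - off-screen, ignored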

Comment Re:AI is just Wikipedia (Score 1) 25

I've probably done tens of thousands of legit, constructive edits, but even I couldn't resist the temptation to prank it at one point. The article was on the sugar apple (Annona squamosa), and at the time, there was a big long list of the name of the fruit in different languages. I wrote that in Icelandic, the fruit was called "Hva[TH]er[TH]etta" (eth and thorn don't work on Slashdot), which means "What's that?", as in, "I've never seen that fruit before in my life" ;) Though the list disappeared from Wikipedia many years ago (as it shouldn't have been there in the first place), even to this day, I find tons of pages listing that in all seriousness as the Icelandic name for the fruit.

Comment Nonsense (Score 1) 25

The author has no clue what they're talking about:

Meta said the 15 trillion tokens on which its trained came from "publicly available sources." Which sources? Meta told The Verge that it didn't include Meta user data, but didn't give much more in the way of specifics. It did mention that it includes AI-generated data, or synthetic data: "we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3." There are plenty of known issues with synthetic or AI-created data, foremost of which is that it can exacerbate existing issues with AI, because it's liable to spit out a more concentrated version of any garbage it is ingesting.

1) *Quality classifiers* are not themselves training data. Think of a quality classifier as a second program that you run over your training data before training your model: it looks at each sample and decides how useful it looks, and thus how much to emphasize it in the training, or whether to just omit it.
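
In other words, the classifier sits in the data pipeline, not in the training set. A minimal sketch of how such a stage is typically used (the scoring function here is a placeholder heuristic; real quality classifiers are trained models):

    # Sketch of a quality-classifier stage in a data pipeline.
    # quality_score() is a placeholder; in practice it's a trained model.
    def quality_score(document: str) -> float:
        # Stand-in heuristic: vocabulary richness as a crude quality proxy.
        return min(1.0, len(set(document.split())) / 100)

    def filter_and_weight(corpus):
        kept = []
        for doc in corpus:
            score = quality_score(doc)
            if score < 0.2:
                continue                       # drop low-quality documents entirely
            kept.append((doc, score))          # score becomes a sampling/emphasis weight
        return kept

    # The classifier shapes the training mix; it never becomes training data itself.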

2) Synthetic training data *very much* can be helpful, in a number of different ways:

A) It can diversify existing data. E.g., instead of just a sentence "I was on vacation in Morocco and I got some hummus", maybe you generate different versions of the same sentence ("I was traveling in Rome and ordered some pasta", "I went on a trip to Germany and had some sausage", etc.), to deemphasize the specifics (Morocco, hummus, etc.) and focus on the generalization. One example can turn into millions, thus rendering rote memorization during training impossible.
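
A trivial sketch of that kind of diversification (templates and fillers invented purely for illustration):

    # Toy diversification: one sentence pattern, many surface realizations,
    # so the model learns the pattern rather than memorizing a single example.
    import itertools, random

    template = "I was {verb} in {place} and had some {food}."
    fillers = {
        "verb":  ["on vacation", "traveling", "on a trip"],
        "place": ["Morocco", "Rome", "Germany", "Japan"],
        "food":  ["hummus", "pasta", "sausage", "ramen"],
    }

    variants = [template.format(verb=v, place=p, food=f)
                for v, p, f in itertools.product(*fillers.values())]
    print(len(variants), "variants from one pattern;", random.choice(variants))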

B) It allows for programmatic filtration stages. Let's say that you're training a model to extract quotes from text. You task an LLM with creating training examples for your quote-extracting LLM (synthetic data). But you don't just blindly trust the outputs - first you do a text match to see if what it quoted is actually in the text and whether it's word-for-word right. Maybe you do a fuzzy match, and if it just got a word or two off, you correct it to the exact match, or whatnot. But the key is: you can postprocess the outputs to do sanity checks on them, and since those programmatic steps are deterministic, you can guarantee that the training data meets certain characteristics.
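
A minimal sketch of that kind of post-check, using a plain fuzzy match to accept, correct, or reject a generated quote (the threshold is arbitrary):

    # Verify a synthetic "extracted quote" against the source text:
    # exact match -> keep; near match -> snap to the exact source span; else reject.
    from difflib import SequenceMatcher

    def check_quote(quote: str, source: str, threshold: float = 0.9):
        if quote in source:
            return quote                                 # already word-for-word correct
        best_ratio, best_span, window = 0.0, None, len(quote)
        for i in range(max(1, len(source) - window + 1)):
            candidate = source[i:i + window]
            ratio = SequenceMatcher(None, quote, candidate).ratio()
            if ratio > best_ratio:
                best_ratio, best_span = ratio, candidate
        if best_ratio >= threshold:
            return best_span                             # correct it to the exact source text
        return None                                      # too far off: drop this training example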

C) It allows for the discovery of further interrelationships. Indeed, this is a key thing that we as humans do - learning from things we've already learned by thinking about them iteratively. If a model learned "The blue whale is a mammal" and it learned "All mammals feed their young with milk", a synthetic generation might include "Blue whales are mammals, and like all mammals, feed their young with milk". The new model now directly learns that blue whales feed their young with milk, and might chain new deductions off *that*.
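
A toy version of that kind of chaining (the facts here are hard-coded for illustration; in practice the generating model does this implicitly):

    # Toy deduction chaining: combine two learned facts into a new training
    # sentence that states the inferred fact directly.
    is_a = {"blue whale": "mammal", "bat": "mammal"}
    trait = {"mammal": "feed their young with milk"}

    synthetic = [f"{animal.capitalize()}s are {cat}s, and like all {cat}s, {trait[cat]}."
                 for animal, cat in is_a.items() if cat in trait]
    print(synthetic)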

D) It's not only synthetic data that can contain errors, but non-synthetic data as well. The internet is awash in wrong things; any single wrong statement is competing against a model that's been trained on reams of data, with high-quality / authoritative data boosted and garbage filtered out. Things being wrong in the training data is normal, expected, and fine, so long as the overall picture is accurate. If there are 1000 training samples that say Mars is the fourth planet from the sun, and one that says the fourth planet from the sun is Joseph Stalin, it's not going to decide that the fourth planet is Stalin - it's going to answer "Mars".

Indeed, the most common examples I see of "AI being wrong" that people share virally on the internet are actually RAG (Retrieval Augmented Generation), where it's tasked with basically googling things and then summing up the results - and the "wrong content" is actually things that humans wrote on the internet.

That's not to say you should rely only on generated data when building a generalist model (it's fine for a specialist). There may be specific details that the generating model never learned, or got wrong, or new information that's been discovered since then; you always want an influx of fresh data.

3) You don't just randomly guess whether a given training methodology (such as synthetic data - which, I'll reiterate, Meta did not say they used for training, although they might have) is having a negative impact. Models are assessed with a whole slew of evaluation metrics that measure how well and how accurately they respond to different queries. And LLaMA 3 scores superbly, relative to model size.

I'm not super-excited about LLaMA 3 simply because I hate the license - but there's zero disputing that it's an impressive series of models.

Comment Re: Cue all the people acting shocked about this.. (Score 1) 41

Under your theory (which directly contradicts their words), creative endeavour on the front end SHOULD count: if the person writes a veritable short story as the prompt, then that SHOULD count. It does not. Because, according to the Copyright Office, while the user controls the general theme, they do not control the specific details.

"Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output."

"if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style.[29] But the technology will decide the rhyming pattern, the words in each line, and the structure of the text"

It is the fact that the user does not control the specific details, only the overall concept, that (according to them) makes it uncopyrightable.

Comment Re: Cue all the people acting shocked about this.. (Score 1) 41

Based on the Office's understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.[28] For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style.[29] But the technology will decide the rhyming pattern, the words in each line, and the structure of the text.[30]

Compare with my summary:

" their argument was that because the person doesn't control the exact details of the composition of the work"

I'll repeat: I accurately summed up their argument. You did not.
