
Comment Re:Titan or Bust! (Score 2) 69

Venus's middle cloud layer is the most Earthlike place in the solar system apart from Earth**, is energy-abundant, has favourable orbital dynamics and easy entry, and the simple act of storing electricity for the night via reversible fuel cells - if plumbed in a cascade - can enrich deuterium (2 1/2 orders of magnitude more abundant on Venus), a natural export commodity if launch costs are sufficiently low. The atmosphere contains CHONP, S, Cl, F, noble gases, and even small amounts of iron - pretty much everything you need to build a floating habitat, which can be lofted by normal Earth air, meaning people can live inside the envelope. So unlike on Mars, where you live in a tiny tin-can pressure vessel, where any access to the outside tracks in toxic electrostatic dust, and where you waste away from low gravity, on Venus you'd be in a massive, brightly lit hanging garden, where you could live half a kilometer from a crewmate if they really got on your nerves.

Most Earthlike? Yes. Temperature, pressure, gravity, etc all similar. Natural radiation shielding equivalent to half a dozen meters or so of water over your head. Even storms seem to be of an Earthlike distribution. The "sulfuric acid" is overblown; it's a sparse vog, with visibility of several kilometers; with a face mask, you could probably stand outside in shirtsleeves, feeling an alien wind on your skin, only risking dermatitis if you stayed outside for too long.

Indeed, it'd actually be useful if the sulfuric vog were more common (to be fair, it's still unclear whether precipitation happens, and if so, whether it rains or snows; the Vega data is disputed). Why? Because it's your main source of hydrogen. It's highly hygroscopic and easily electrostatically attracted, so it's readily scrubbed through your propulsion system. When heated, it first releases free water vapour, then decomposes to more water plus SO3; if you want, you can further decompose the SO3 over a vanadium pentoxide catalyst to O2 + SO2, or you can reinject it into the scrubber as a conditioning agent to seed more water vapour. Of course, if precipitation happens, collection possibilities are basically limitless.
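The decomposition chain described above, written out as (simplified) reactions - temperatures are omitted since they vary with conditions, and the catalytic step is the reverse of the industrial contact process:

```latex
\begin{align*}
\mathrm{H_2SO_4 \cdot n\,H_2O} &\xrightarrow{\ \text{heat}\ } \mathrm{H_2SO_4} + n\,\mathrm{H_2O} \\
\mathrm{H_2SO_4} &\xrightarrow{\ \text{heat}\ } \mathrm{H_2O} + \mathrm{SO_3} \\
\mathrm{2\,SO_3} &\xrightarrow{\ \mathrm{V_2O_5}\ \text{catalyst}\ } \mathrm{2\,SO_2} + \mathrm{O_2}
\end{align*}
```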

The surface is certainly hostile, but even 1960s Soviet technology was landing on it (also, contrary to popular myth, there is no acid at the surface - it's unstable at those temperatures; the sulfur inventory there is only SO2). But in many ways, the surface is very gentle. Mars eats probes with its hard landings, but one Venera probe outright lost its parachute during descent and still landed intact, as the dense atmosphere slows one's fall. It's been calculated that with the right trajectory, a simple hollow titanium sphere launched from Earth could arrive at Venus, enter, descend and land, all intact. Simple thermal inertia (insulation + a phase-change material) can keep an object cool for a couple of hours; with heat pumps, indefinitely (and yes, heat pumps and power sources for the surface conditions have been designed). Even humans could walk there in insulated hard suits, like atmospheric diving suits. Indeed, some of the first space suits NASA designed for the Moon (ultimately ditched for weight reasons, despite their superior mobility) were similarly jointed hard-shell suits.

On Venus's surface, a lander or explorer can literally fly, via a compressible metal bellows balloon. Small wings / fins can allow for long glide ratios. Loose surface material can be dredged rather than requiring physical excavation, potentially with the same fan used for propulsion. Reversible ascent back to altitude can be done with phase change balloons - that is, at altitude, a lifting gas condenses and is collected in a valved container, and the craft can descend; at the surface, when one desires to rise, the valve is opened and the gas re-lofts the lander.

On Mars, you're stuck in one location. The problem is that minerals aren't all found in the same spot; different processes concentrate different minerals. And you can't exactly just get on a train to some other spot on the planet; long-distance travel requires rockets, and all their consumables. But on Venus the atmosphere superrotates, circling the planet every several days (the rate depending on altitude and latitude), while latitude shifts in a floating habitat or lander can be done with minimal motor requirements. So vast swaths of the planet are available to you. Furthermore, Venus is far more dramatic in terms of natural enrichment processes; wide ranges of minerals are sublimated or eaten out of rocks and then recondensed elsewhere. Temperatures and pressures vary greatly between the highlands and lowlands as well. There even appear to be outright semiconductor frosts on parts of the planet. Lava flows show signs of long cooling times, which promotes fractionation and pegmatites. Volcanism is common, primarily basaltic but also with potential secondary rhyolitic sources. A variety of unusual flows with no Earth analogues (or only rare ones) show signs of existing, including the longest "river" channel in the solar system (Baltis Vallis). While there's no global tectonic activity, there appear to be areas of intense local buckling between microplates. The surface conditions of the planet also appear to have been very different at many times in the past. It's all a perfect setup for diverse mineral enrichment processes. Yet there's almost no overburden (unlike Mars, which is covered in thick overburden over most of the planet).

As mentioned before, Venus has significantly superior orbital dynamics to Mars, due to the Oberth effect. Venus-Mars transfers are almost as fast and almost as low energy as Earth-Mars transfers. Venus-Earth transits are super-fast, esp. with extra delta-V added. The asteroid belt is, contrary to intuition, much more accessible from Venus than from Mars. Also, gravity assists are much more common around Venus - when we want to launch probes to the outer solar system, we generally start with sending them first inwards toward Venus, then back between Venus and Earth and outwards from there.
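The Oberth advantage can be sketched with a back-of-the-envelope patched-conic calculation: Hohmann transfer from a low circular parking orbit, with the departure burn made deep in the planet's gravity well. This is an idealized model (circular, coplanar orbits; approximate constants), not mission-design numbers:

```python
import math

MU_SUN = 1.327e20  # m^3/s^2, gravitational parameter of the Sun
R_ORBIT = {"venus": 1.082e11, "earth": 1.496e11, "mars": 2.279e11}  # heliocentric radii, m
MU = {"venus": 3.249e14, "earth": 3.986e14, "mars": 4.283e13}       # planetary mu, m^3/s^2
RADIUS = {"venus": 6.052e6, "earth": 6.371e6, "mars": 3.390e6}      # planet radii, m

def departure_dv(src, dst, parking_alt=3e5):
    """Delta-v to leave a low parking orbit onto a Hohmann transfer to dst."""
    r1, r2 = R_ORBIT[src], R_ORBIT[dst]
    v_circ_sun = math.sqrt(MU_SUN / r1)
    # hyperbolic excess speed needed relative to the source planet
    v_inf = abs(v_circ_sun * (math.sqrt(2 * r2 / (r1 + r2)) - 1))
    rp = RADIUS[src] + parking_alt
    v_park = math.sqrt(MU[src] / rp)
    # Oberth effect: the burn happens deep in the planet's gravity well
    return math.sqrt(v_inf**2 + 2 * MU[src] / rp) - v_park

for src, dst in [("earth", "mars"), ("earth", "venus"), ("venus", "mars"), ("venus", "earth")]:
    print(f"{src:>5} -> {dst:<5}: {departure_dv(src, dst) / 1000:.2f} km/s")
```

Under these assumptions a Venus-Earth departure actually comes out slightly cheaper than an Earth-Mars departure, and Venus-Mars lands within roughly a km/s of it, despite Venus's deeper gravity well.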

From a long-term perspective, both Venus and Mars have problems with terraforming, with some things you can do "relatively easily", and some that require megascale engineering on scales best left to fantasy. You can boil off Mars's polar caps, but the amount of CO2 there is still quite limited, and there's just not much nitrogen inventory on the planet (it's been lost to space), which also matters for plant cultivation. You could probably engineer active radiation shielding from orbit, maybe direct more light to the surface, but you can't increase the gravity. Etc.

With Venus, one of the earliest terraforming ideas came from Carl Sagan, before the planet was well known; he proposed seeding it with engineered bacteria to convert CO2 to graphite and release oxygen. He later rejected his own idea, on the grounds that a high-temperature surface of graphite and oxygen would be a bomb. Later studies showed that the timescales for said conversion would be tens of thousands to millions of years. But in a way, that slowness actually rescues his idea, because Venus's rocks contain unoxidized minerals. By analogy with the Great Oxidation Event on Earth that created our banded iron formations, Venus's rocks, slowly exposed to oxygen, would weather and sequester both the oxygen and the deposited carbon. Hot, high-pressure, high-oxygen conditions would never have a chance to exist.

Various faster methods have been proposed. A common one is the soletta, a thin orbital sunshade. Another is building an "alternative surface" - that is, propagating floating colonies to the point that they become the new surface - and indeed, that new surface could exclude sunlight from the atmosphere below. Regardless of the method, the cooler the atmosphere gets, the lower its pressure gets, to the point that you can start outright precipitating the atmosphere out as icecaps.

Just like Mars will never have high gravity and probably never much nitrogen, Venus would probably never be fully Earthlike. It would have enough nitrogen that, barring loss to weathering, people would have a constant mild nitrogen narcosis, like always being ever so slightly tipsy. It would remain a desert planet, barring massive influxes of ice (which present their own challenges and problems), or of hydrogen (pre-cooling). But then again, the very concept of terraforming anything has always required one to put on thick rose-coloured glasses ;)

I don't say all this to diss Mars. But our obsession with "surface conditions" has led us to ignore the fact that if you're going to the extremes of engineering an off-world habitat, having it be airborne is not that radical of an additional ask, esp. on a planet with such a big "fluffy" atmosphere as Venus. If Venus's atmosphere stopped at its Earthlike middle cloud layer - if there were a surface there - nobody would be talking about long-term habitation on Mars; the focus would have been entirely on Venus. But we can still have habitats there. The habitat can, in whole or in part, even potentially be its own reentry vehicle (ballute reentry), and can certainly at least inflate and descend as a ballute (with a small supply of Earth-provided helium as a temporary lifting gas until an Earthlike atmosphere can be produced). Unlike with Mars entry, you're never going to be "off course" or "crash into something" because you got the location or altitude wrong.

(Getting back to orbit is certainly challenging from Venus - all that gravity that's good for your body has its downsides - but the TL;DR is, hybrid and/or air-augmented nuclear thermal rockets look to be by far the best option. Far less hydrogen needed than chemical rockets, far lighter relative to their deliverable payload, only a single stage needed, and some designs have the ability to hover without consuming fuel. This is, of course, of great benefit for docking with a habitat, avoiding the need for descending rocket stages to deploy balloons and then dock those to the habitat. The hydrogen and mass budgets involved are totally viable.)

Comment Re:power (Score 2) 69

Titan's atmosphere is rather calm; not an issue. At the surface, the winds measured by Huygens were 0.3 m/s.

You actually can use solar power in extreme environments - even Venus's surface has been shown to be compatible with certain types of solar, though you certainly get very poor power density. Dragonfly, as noted above, uses an RTG.

Comment Re:Second flying drone to explore another planet (Score 3) 69

Planetary scientists frequently refer to moons that are large enough to be in hydrostatic equilibrium as planets in the literature. Examples, just from a quick search:

"Locally enhanced precipitation organized by planetary-scale waves on Titan"

"3.3. Relevance to Other Planets" (section on Titan)

"Superrotation in Planetary Atmospheres" (article covers Titan alongside three other planets)

"All planets with substantial atmospheres (e.g., Earth, Venus, Mars, and Titan) have ionospheres which expand above the exobase"

"Clouds on Titan result from the condensation of methane and ethane and, as on other planets, are primarily structured by circulation of the atmosphere"

"... of the planet. However, rather than being scarred by volcanic features, Titan's surface is largely shaped..."

"Spectrophotometry of the Jovian Planets and Titan at 300- to 1000-nm Wavelength: The Methane Spectrum" (okay, it's mainly referring to the Jovian satellites as planets, but same point)

"Superrotation indices for Solar System and extrasolar atmospheres" - contains a table whose first column is "Planet", and has Titan in the list, alongside other planets

Etc. This is not to be confused with the phrase "minor planet", which is used for asteroids, etc. In general there's a big distinction in how commonly you see the large moons in hydrostatic equilibrium referred to as "planets" and with "planetary" adjectives, vs. smaller bodies not in hydrostatic equilibrium.

Comment Re:Titan or Bust! (Score 3, Informative) 69

Why?

NASA's obsession with Mars is weird, and it consumes the lion's share of their planetary exploration budget. We know vastly more about Mars than we know of everywhere else except Earth.

This news here is bittersweet for me. I *love* Titan - it and Venus are my two favourite worlds for further exploration, and Dragonfly is a superb way to explore Titan. But there's some sadness in the fact that they're launching it to an equatorial site, so we don't get to see the fascinating hydrocarbon seas and the terrain sculpted by them near the poles. I REALLY wish they were going to the north pole instead :( In theory it could eventually get there, but the craft would have to survive far beyond design limits and get a lot of mission extensions. At a max pace of travel it might cover 600 metres or so per Earth day on average. So we're talking ~12 years to get to the first small hydrocarbon lakes and ~18 years to get to Ligeia Mare or Punga Mare (a bit further to Kraken Mare), *assuming* no detours, vs. a 2 1/2 year mission design. And that ignores the fact that it'll be going slower at the start - the nominal mission is only supposed to cover 175 km, just a few percent of the way, under 200 metres per day. Sigh... Maybe it'll be possible to squeeze more range out of it once they're comfortable with its performance and reliability, but... it's a LONG way to the poles.

At least if it lasts that long, it'll have seen a full transition between wet and dry cycles, which should take ~15 years. So surface liquids may be common at some points, rare at others.

Comment Re:AI is just Wikipedia (Score 1) 25

I've probably done tens of thousands of legit, constructive edits, but even I couldn't resist the temptation to prank it at one point. The article was on the sugar apple (Annona squamosa), and at the time, it contained a big long list of the fruit's name in different languages. I wrote that in Icelandic, the fruit was called "Hva[TH]er[TH]etta" (eth and thorn don't work on Slashdot), which means "What's that?", as in, "I've never seen that fruit before in my life" ;) Though the list disappeared from Wikipedia many years ago (as it shouldn't have been there in the first place), even to this day I find tons of pages listing that in all seriousness as the Icelandic name for the fruit.

Comment Nonsense (Score 1) 25

The author has no clue what they're talking about:

Meta said the 15 trillion tokens on which its trained came from "publicly available sources." Which sources? Meta told The Verge that it didn't include Meta user data, but didn't give much more in the way of specifics. It did mention that it includes AI-generated data, or synthetic data: "we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3." There are plenty of known issues with synthetic or AI-created data, foremost of which is that it can exacerbate existing issues with AI, because it's liable to spit out a more concentrated version of any garbage it is ingesting.

1) *Quality classifiers* are not themselves training data. Think of one as a second program that you run over your training data before training your model, to look over the data and decide how useful each piece looks - and thus how much to emphasize it in training, or whether to just omit it.
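In pipeline terms, a quality classifier is just a scoring pass over candidate documents before the training run. A minimal sketch - the scoring function and threshold here are trivial stand-ins for a learned classifier, not Meta's actual pipeline:

```python
def quality_score(doc: str) -> float:
    """Stand-in for a learned quality model: returns a score in [0, 1]."""
    words = doc.split()
    if not words:
        return 0.0
    # toy heuristics: penalize very short documents and heavy repetition
    uniqueness = len(set(words)) / len(words)
    length_ok = min(len(words) / 50, 1.0)
    return uniqueness * length_ok

def filter_corpus(docs, keep_threshold=0.2):
    """Return (doc, weight) pairs; low-scoring docs are dropped entirely."""
    kept = []
    for doc in docs:
        s = quality_score(doc)
        if s >= keep_threshold:
            kept.append((doc, s))  # weight training emphasis by score
    return kept
```

The point is that the classifier shapes which data gets trained on and how heavily; its own outputs never become training text.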

2) Synthetic training data *very much* can be helpful, in a number of different ways.

A) It can diversify existing data. E.g., instead of just a sentence "I was on vacation in Morocco and I got some hummus", maybe you generate different versions of the same sentence ("I was traveling in Rome and ordered some pasta", "I went on a trip to Germany and had some sausage", etc.), to deemphasize the specifics (Morocco, hummus, etc.) and focus on the generalization. One example can turn into millions, thus rendering rote memorization during training impossible.
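A toy illustration of that kind of diversification, using template substitution (hypothetical word lists; a real pipeline would use an LLM to paraphrase rather than fixed templates):

```python
import itertools

# Turn one concrete sentence pattern into many variants so the model learns
# the generalization ("travel + food") rather than memorizing specifics.
ACTIONS = ["was on vacation in", "was traveling in", "went on a trip to"]
PLACES = ["Morocco", "Rome", "Germany", "Japan"]
FOODS = ["hummus", "pasta", "sausage", "ramen"]

def variants():
    for action, place, food in itertools.product(ACTIONS, PLACES, FOODS):
        yield f"I {action} {place} and had some {food}."

samples = list(variants())
print(len(samples))  # 3 * 4 * 4 = 48 variants from a single template
```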

B) It allows for programmatic filtration stages. Let's say that you're training a model to extract quotes from text. You task an LLM with creating training examples for your quote-extracting LLM (synthetic data). But you don't just blindly trust the outputs - first you do a text match to see if what it quoted is actually in the text and whether it's word-for-word right. Maybe you do a fuzzy match, and if it just got a word or two off, you correct it to the exact match, or whatnot. The key is: you can postprocess the outputs to do sanity checks, and since those programmatic steps are deterministic, you can guarantee that the training data meets certain characteristics.
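That validation step is cheap to implement. A sketch using stdlib fuzzy matching - the 0.9 similarity threshold is an arbitrary choice for illustration:

```python
import difflib

def validate_quote(source_text: str, extracted: str, min_ratio=0.9):
    """Check a model-extracted quote against the source document.

    Returns the exact substring to train on, or None to discard the example.
    """
    if extracted in source_text:
        return extracted  # verbatim match: accept as-is
    # fuzzy pass: find the closest window of the same length in the source
    n = len(extracted)
    best, best_ratio = None, 0.0
    for i in range(len(source_text) - n + 1):
        window = source_text[i:i + n]
        r = difflib.SequenceMatcher(None, window, extracted).ratio()
        if r > best_ratio:
            best, best_ratio = window, r
    if best_ratio >= min_ratio:
        return best   # snap the near-miss to the exact source text
    return None       # too far off: drop the training sample

src = "He said, 'the report was finished on Tuesday', then left."
print(validate_quote(src, "the report was finished on Tuesday"))   # exact match
print(validate_quote(src, "the report was finishd on Tuesday"))    # typo: snapped
print(validate_quote(src, "completely unrelated text here"))       # None: dropped
```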

C) It allows for the discovery of further interrelationships. Indeed, this is a key thing that we as humans do - learning from things we've already learned by thinking about them iteratively. If a model learned "The blue whale is a mammal" and it learned "All mammals feed their young with milk", a synthetic generation might include "Blue whales are mammals, and like all mammals, feed their young with milk". The new model now directly learns that blue whales feed their young with milk, and might chain new deductions off *that*.

D) It's not only synthetic data that can contain errors, but non-synthetic data as well. The internet is awash in wrong things; a random thing on the internet is competing with a model that's been trained on reams of data, with high-quality / authoritative data boosted and garbage filtered out. Things being wrong in the training data is normal, expected, and fine, so long as the overall picture is accurate. If there are 1000 training samples that say that Mars is the fourth planet from the sun, and one that says the fourth planet from the sun is Joseph Stalin, the model isn't going to decide that the fourth planet is Stalin - it's going to answer "Mars".

Indeed, the most common examples I see of "AI being wrong" that people share virally on the internet are actually RAG (Retrieval Augmented Generation), where it's tasked with basically googling things and then summing up the results - and the "wrong content" is actually things that humans wrote on the internet.

That's not to say you should rely only on generated data when building a generalist model (it's fine for a specialist). There may be specific details that the generating model never learned, or got wrong, or new information that's been discovered since then; you always want an influx of fresh data.

3) You don't just randomly guess whether a given training methodology (such as synthetic data - which, I'll reiterate, Meta did not say they used, although they might have) is having a negative impact. Models are assessed with a whole slew of evaluation metrics to gauge how well and how accurately they respond to different queries. And LLaMA 3 scores superbly relative to model size.

I'm not super-excited about LLaMA 3 simply because I hate the license - but there's zero disputing that it's an impressive series of models.

Comment Re: Cue all the people acting shocked about this.. (Score 1) 41

Under your theory (which directly contradicts their words), creative endeavour on the front end SHOULD count. If the person writes a veritable short story as the prompt, then that SHOULD count. It does not. Because according to the copyright office, while the user controls the general theme, they do not control the specific details.

"Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output."

"if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style.[29] But the technology will decide the rhyming pattern, the words in each line, and the structure of the text."

It is the fact that the user does not control the specific details, only the overall concept, that (according to them) makes it uncopyrightable.

Comment Re: Cue all the people acting shocked about this.. (Score 1) 41

Based on the Office's understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.[28] For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style.[29] But the technology will decide the rhyming pattern, the words in each line, and the structure of the text.[30]

Compare with my summary:

" their argument was that because the person doesn't control the exact details of the composition of the work"

I'll repeat: I accurately summed up their argument. You did not.

Comment Re:AI Incest (Score 2, Interesting) 41

Yes, "you've been told" that by people who have no clue what they're talking about. Meanwhile, models just keep getting better and better. AI images have been out for years now. There's tons on the net.

First off, old datasets don't just disappear. So the *very worst case* is that you just keep developing your new models on pre-AI datasets.

Secondly, there is human selection on what gets posted. If humans don't like the look of something, they don't post it. In many regards, an AI image is replacing what would have been a much crappier alternative choice.

Third, dataset gatherers don't just blindly use a dump of the internet. If there's a place that tends to be a source of crappy images, they'll just exclude or downrate it.

Fourth, images are scored with aesthetic gradients before they're used. That is, humans train models to assess how much they like images, and then those models look at all the images in the dataset and rate them. Once again, crappy images are excluded / downrated.
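In pipeline terms, that aesthetic pass is just a score-and-threshold filter. A sketch with a hypothetical stand-in scorer - real ones are learned models trained on human ratings, typically regression heads on image embeddings:

```python
def filter_by_aesthetics(images, score_model, keep_threshold=5.0):
    """Drop images a learned aesthetic model rates poorly.

    `score_model` stands in for a predictor trained on human ratings
    (commonly scoring on a ~1-10 scale); low scorers never reach training.
    """
    scored = [(img, score_model(img)) for img in images]
    return [(img, s) for img, s in scored if s >= keep_threshold]

# toy stand-in: pretend the aesthetic score is precomputed metadata
images = [{"id": "a", "score": 7.2}, {"id": "b", "score": 3.1}, {"id": "c", "score": 6.0}]
kept = filter_by_aesthetics(images, score_model=lambda img: img["score"])
print([img["id"] for img, _ in kept])  # ['a', 'c']
```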

Fifth, trainers do comparative training, look at per-sample loss rates, and automatically exclude problematic ones. For example, if you have a thousand images labeled "watermelon" but one is actually a zebra, the zebra will show an anomalous loss spike that warrants more attention (either from humans or in an automated manner). Loss rates can also be compared between data +sources+ - whole websites or even whole datasets - and whatever is working best gets used.
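That loss-spike check is essentially outlier detection on per-sample loss. A minimal sketch with a z-score flag - the threshold of 3 is an arbitrary illustrative choice:

```python
import statistics

def flag_anomalous(losses, z_threshold=3.0):
    """Return indices of samples whose training loss is an outlier.

    A mislabeled sample (the zebra labeled "watermelon") tends to sit far
    above the loss distribution of correctly labeled examples.
    """
    mean = statistics.fmean(losses)
    sd = statistics.pstdev(losses)
    if sd == 0:
        return []
    return [i for i, l in enumerate(losses) if (l - mean) / sd > z_threshold]

# 999 well-behaved samples plus one mislabeled outlier
losses = [1.0] * 999 + [9.0]
print(flag_anomalous(losses))  # [999]
```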

Sixth, trainers also do direct blind human comparisons for evaluation.

This notion that AIs are just going to get worse and worse because of training on AI images is just ignorant. And demonstrably false.

Comment Re:Cue all the people acting shocked about this... (Score 4, Interesting) 41

As for why I think the ruling was bad: their argument was that because the person doesn't control the exact details of the composition of the work, the basic work (before postprocessing or selection) can't be copyrighted. But that exact same thing applies to photography outside of studio conditions. Ansel Adams wasn't out there going, "Okay, put a 20 meter oak over there, a 50 meter spruce over there, shape that mountain ridge a bit steeper, put a cliff on that side, cover the whole thing with snow... now add a rainbow to the sky... okay, cue the geese!" He was searching the search space for something to match a general vision - or just taking advantage of happenstance findings. And sure, a photographer has many options at their hands in terms of their camera and its settings, but if you think that's a lot, try messing around with AUTOMATIC1111 with all of its plugins some time.

The winner of Nature Photographer of the year in 2022 was Dmitry Kokh, with "House of Bears". He was stranded on a remote Russian archipelago and discovered that polar bears had moved into an abandoned weather station, and took photos of them. He didn't even plan to be there then. He certainly didn't plan on having polar bears in an abandoned weather station, and he CERTAINLY wasn't telling the bears where to stand and how to pose. Yet his work is a classic example of what the copyright office thinks should be a copyrightable work.

And the very notion that people don't control the layout with AI art is itself flawed. It was an obsolete notion even when they made their ruling - we already had img2img, instructpix2pix and ControlNet. The author CAN control the layout, down to whatever level of intricate detail they choose - unlike, say, a nature photographer. And modern models give increasing levels of control even within the prompt itself: with SD3 (unlike SD1/2 or SC) you can do things like "A red sphere on a blue cube to the left of a green cone". We're heading to the point - if not there already - where you could write a veritable short story's worth of detail to describe a scene.

I find it just plain silly that Person A could grab their cell phone and spend 2 seconds snapping a photo of whatever happens to be out their window, and that's copyrightable, but a person who spends hours searching through the latent space - let alone with ControlNet guidance (controlnet inputs can be veritable works of art in their own right) - isn't given the same credit for the amount of creative effort put into the work.

I think, rather, it's very simple: the human creative effort should be judged not on the output of the work (the work is just a transformation of the inputs), but the amount of creative effort they put into said inputs. Not just on the backend side - selection, postprocessing, etc - but on the frontend side as well. If a person just writes "a fluffy dog" and takes the first pic that comes up, obviously, that's not sufficient creative endeavour. But if a person spends hours on the frontend in order to get the sort of image they want, why shouldn't that frontend work count? Seems dumb to me.

Comment Cue all the people acting shocked about this... (Score 4, Informative) 41

... when the original ruling itself plainly said that though the generated content itself isn't copyrightable, human creative action such as postprocessing or selection can render it copyrightable.

I still think the basic ruling was bad for a number of reasons, and it'll increasingly come under stress in the coming years. But there's nothing shocking about this copyright; the copyright office basically invited people to do this.
