
Comment Re:power (Score 2) 63

Titan's atmosphere is rather calm; not an issue. At the surface, the winds measured by Huygens were 0.3 m/s.

You actually can use solar power in extreme environments - even Venus's surface has been shown to be compatible with certain types of solar, though you certainly get very poor power density. Dragonfly, as noted above, uses an RTG.

Comment Re:Second flying drone to explore another planet (Score 3) 63

Planetary scientists frequently refer to moons that are large enough to be in hydrostatic equilibrium as planets in the literature. Examples, just from a quick search:

"Locally enhanced precipitation organized by planetary-scale waves on Titan"

"3.3. Relevance to Other Planets" (section on Titan)

"Superrotation in Planetary Atmospheres" (article covers Titan alongside three other planets)

"All planets with substantial atmospheres (e.g., Earth, Venus, Mars, and Titan) have ionospheres which expand above the exobase"

"Clouds on Titan result from the condensation of methane and ethane and, as on other planets, are primarily structured by circulation of the atmosphere"

"... of the planet. However, rather than being scarred by volcanic features, Titan's surface is largely shaped..."

"Spectrophotometry of the Jovian Planets and Titan at 300- to 1000-nm Wavelength: The Methane Spectrum" (okay, it's mainly referring to the Jovian satellites as planets, but same point)

"Superrotation indices for Solar System and extrasolar atmospheres" - contains a table whose first column is "Planet", and has Titan in the list, alongside other planets

Etc. This is not to be confused with the phrase "minor planet", which is used for asteroids and the like. In general, there's a big difference between how commonly you see the large moons in hydrostatic equilibrium referred to as "planets" and described with "planetary" adjectives, versus smaller bodies not in hydrostatic equilibrium.

Comment Re:Titan or Bust! (Score 3, Informative) 63

Why?

NASA's obsession with Mars is weird, and it consumes the lion's share of their planetary exploration budget. We know vastly more about Mars than we know about anywhere else except Earth.

This news is bittersweet for me. I *love* Titan - it and Venus are my two favourite worlds for further exploration, and Dragonfly is a superb way to explore Titan. But there's some sadness in the fact that they're sending it to an equatorial site, so we don't get to see the fascinating hydrocarbon seas and the terrain sculpted by them near the poles. I REALLY wish they were going to the north pole instead :(

In theory it could eventually get there, but the craft would have to survive far beyond its design limits and get a lot of mission extensions. At a maximum pace of travel it might cover 600 meters or so per Earth day on average. So we're talking roughly 12 years to reach the first small hydrocarbon lakes and ~18 years to reach Ligeia Mare or Punga Mare (a bit further to Kraken Mare), *assuming* no detours, vs. a 2 1/2 year mission design. And that ignores the fact that it'll be going slower at the start - the nominal mission is only supposed to cover 175 km, just a few percent of the way, at under 200 metres per day. Sigh... Maybe it'll be possible to squeeze more range out of it once they're comfortable with its performance and reliability, but... it's a LONG way to the poles.
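For anyone who wants to check my math, here's the back-of-envelope in Python. The pace is the 600 m/day figure above; the distances are my own rough eyeball estimates of the overland haul from an equatorial site, not mission figures:

# Back-of-envelope traverse times at an optimistic average pace.
# Distances are rough guesses, not mission figures.
pace_km_per_day = 0.6  # ~600 m per Earth day

targets_km = {
    "first small hydrocarbon lakes": 2600,  # rough guess
    "Ligeia Mare / Punga Mare": 4000,       # roughly equator to pole
}

for name, dist_km in targets_km.items():
    years = dist_km / pace_km_per_day / 365.25
    print(f"{name}: ~{years:.0f} years")

# Nominal mission for comparison: 175 km at under 200 m/day
print(f"nominal plan (175 km at 0.2 km/day): ~{175 / 0.2 / 365.25:.1f} years")

Obviously a detour-free, straight-line path is wildly optimistic, so treat those as lower bounds.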

At least if it lasts that long it'll have done a full transition between wet and dry cycles, which should take ~15 years. So maybe surface liquids will be common at some points, rare at others.

Comment Re:Thanks (Score 2) 34

In controlled airspace you would be right... but these things are being pitched as suburban/urban commute options operating at low altitude where there's no ATC. That means they have to deal with birds, the GPS-shielding effects of tall buildings, the wind-tunnel effects created by those same structures, and a whole lot more.

The automation of air transport at 30,000 feet is a whole lot different to transport at a few hundred feet over a busy metropolis, where there may be buildings higher than the craft itself.

Comment Re:Thanks (Score 1) 34

You nailed it!

We've had VTOL passenger transport for decades now, in the form of tried and proven technology: helicopters.

Right now a bunch of startups are trying to reinvent the industry by claiming "carbon zero" and "autonomous" when we know:

1. the market is *very* limited (i.e., where are all the helicopter-based flying taxi services?)
2. the tech isn't ready (current battery tech isn't up to the task)
3. we don't trust autonomous systems on the road, so why would we trust them in the air?
4. regulators are still many years away from approving such things in Western nations
5. there is zero mitigation available for GPS failure (or malicious attack), and eVTOL craft don't autorotate in the event of power failure.

Call me in 10 years' time and we'll reconsider.

Comment Re:Lead By Example (Score 2) 146

I don't see it. For example, cell phone records are recorded but only accessible via a warrant, presented directly to the provider. The same could be done with E2EE data if it were forced through the cell phone provider's networks.

That would mean an end to E2EE APIs on cell phones and other devices, which may be practically impossible at this point.

Edward Snowden showed that this is not as true as you seem to think it is.

LK

Comment Re:Lead By Example (Score 2) 146

Oh dear lord, the hyperbole. We allow law enforcement access to all other forms of communication with a lawful warrant. So, should this particular technology be exempt from that?

Then, let them serve the warrant.

What is different is that for the first time in human history, it's not only possible but it's practical to have encrypted communications that no one can access except for the intended recipient.

All of "the most heinous of crimes" take place in the real world, there is some physical action that can be detected and punished. I don't care if this makes the job of law enforcement harder. I want law enforcement to be a difficult and time consuming job. Idle and bored cops tend to find ways to fill their time and it's never good.

LK

Comment Re:Threatened? (Score 2) 302

I'm sure the EVs the Chinese are currently selling in the EU would pass NHTSA certification. Are those the $10K Chinese EVs? Nope, those are absolute death traps that have no chance of getting certified. However, the ~$30K Chinese EVs - those have a shot at passing NHTSA certification. Honestly, I hope Ford and GM stop being distracted with hybrids and commit, but I think it will take a systemic shock like the Chinese EVs arriving to break the grip of their ICE agenda and parts revenue stream. The same thing happened with fuel-efficient Japanese imports in the 80s: the old guard pooh-poohed them and lost that market to the imports.

Comment Threatened? (Score 5, Interesting) 302

Ford and GM have been crowing that consumers don't want EVs. So what harm can possibly come from letting China flood the market with EVs nobody wants? If Ford and GM are right, the Chinese makers will be shooting themselves in the foot. However, I have a feeling Ford and GM are being a little disingenuous, and the issue might be that people don't want Ford and GM base EVs at $65K a pop.

Comment Re:AI is just Wikipedia (Score 1) 25

I've probably done tens of thousands of legit, constructive edits, but even I couldn't resist the temptation to prank it at one point. The article was on the sugar apple (Annona squamosa), and at the time, there was a big long list of the name of the fruit in different languages. I wrote that in Icelandic, the fruit was called "Hva[TH]er[TH]etta" (eth and thorn don't work on Slashdot), which means "What's that?", as in, "I've never seen that fruit before in my life" ;) Though the list disappeared from Wikipedia many years ago (as it shouldn't have been there in the first place), even to this day, I find tons of pages listing that in all seriousness as the Icelandic name for the fruit.

Comment Nonsense (Score 1) 25

The author has no clue what they're talking about:

Meta said the 15 trillion tokens on which it's trained came from "publicly available sources." Which sources? Meta told The Verge that it didn't include Meta user data, but didn't give much more in the way of specifics. It did mention that it includes AI-generated data, or synthetic data: "we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3." There are plenty of known issues with synthetic or AI-created data, foremost of which is that it can exacerbate existing issues with AI, because it's liable to spit out a more concentrated version of any garbage it is ingesting.

1) *Quality classifiers* are not themselves training data. Think of them as a second program that you run on your training data before training your model, to look over the data and decide how useful it looks and thus how much to emphasize it in the training, or whether to just omit it.

2) Synthetic training data *very much* can be helpful, in a number of different ways.

A) It can diversify existing data. E.g., instead of just a sentence "I was on vacation in Morocco and I got some hummus", maybe you generate different versions of the same sentence ("I was traveling in Rome and ordered some pasta", "I went on a trip to Germany and had some sausage", etc.), to de-emphasize the specifics (Morocco, hummus, etc.) and focus on the generalization. One example can turn into millions, thus rendering rote memorization during training impossible.

B) It allows for programmatic filtration stages. Let's say that you're training a model to extract quotes from text. You task an LLM with creating training examples for your quote-extracting LLM (synthetic data). But you don't just blindly trust the outputs - first you do a text match to see if what it quoted is actually in the text and whether it's word-for-word right. Maybe you do a fuzzy match, and if it just got a word or two off, you correct it to the exact match, or whatnot. But the key is: you can postprocess the outputs to do sanity checks on them, and since those programmatic steps are deterministic, you can guarantee that the training data meets certain characteristics (a minimal sketch of this kind of check follows point 3 below).

C) It allows for the discovery of further interrelationships. Indeed, this is a key thing that we as humans do - learning from things we've already learned by thinking about them iteratively. If a model learned "The blue whale is a mammal" and it learned "All mammals feed their young with milk", a synthetic generation might include "Blue whales are mammals, and like all mammals, feed their young with milk". The new model now directly learns that blue whales feed their young with milk, and might chain new deductions off *that*.

D) It's not only synthetic data that can contain errors, but non-synthetic data as well. The internet is awash in wrong things; a random thing on the internet is competing with a model that's been trained on reams of data and has high-quality / authoritative data boosted and garbage filtered out. "Things being wrong in the training data" is normal, expected, and fine, so long as the overall picture is accurate. If there are 1000 training samples that say that Mars is the fourth planet from the sun, and one that says that the fourth planet from the sun is Joseph Stalin, it's not going to decide that the fourth planet is Stalin - it's going to answer "Mars".

Indeed, the most common examples I see of "AI being wrong" that people share virally on the internet are actually RAG (Retrieval Augmented Generation), where it's tasked with basically googling things and then summing up the results - and the "wrong content" is actually things that humans wrote on the internet.

That's not to say that you should rely only on generated data when building a generalist model (it's fine for a specialist). There may be specific details that the generating model never learned, or got wrong, or new information that's been discovered since then; you always want an influx of fresh data.

3) You don't just randomly guess whether a given training methodology (such as synthetic data, which, I'll reiterate, Meta did not say that they used - although they might have) is having a negative impact. Models are assessed with a whole slew of evaluation metrics to assess how well and how accurately they respond to different queries. And LLaMA 3 scores superbly relative to model size.
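To make point B above concrete, here's a minimal sketch in Python of that kind of deterministic sanity check: accept an extracted quote only if it appears word-for-word in the source, snap it to the exact wording if it's merely close, and reject it otherwise. The function name, threshold, and example strings are all made up for illustration - this isn't anything Meta has published:

from difflib import SequenceMatcher

def verify_quote(quote: str, source: str, threshold: float = 0.9):
    """Return the exact source span if `quote` is an exact or near-exact
    match, otherwise None (i.e., reject the synthetic sample)."""
    if quote in source:
        return quote  # word-for-word match: accept as-is

    # Fuzzy pass: slide a window the length of the quote over the source
    # and keep the best-matching span.
    best_ratio, best_span = 0.0, None
    window = len(quote)
    for start in range(max(1, len(source) - window + 1)):
        candidate = source[start:start + window]
        ratio = SequenceMatcher(None, quote, candidate).ratio()
        if ratio > best_ratio:
            best_ratio, best_span = ratio, candidate

    # Close enough: correct the sample to the exact source wording.
    return best_span if best_ratio >= threshold else None

source_text = "The probe will arrive in 2034 and begin surface operations shortly after."
print(verify_quote("will arrive in 2034 and begin surface operations", source_text))
print(verify_quote("arrives sometime around 2035", source_text))  # rejected -> None

The point isn't this particular check; it's that because the postprocessing is deterministic, you can make hard guarantees about what ends up in the training set.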

I'm not super-excited about LLaMA 3 simply because I hate the license - but there's zero disputing that it's an impressive series of models.
