
Comment Re:Iridium tried this (Score 1) 34

The carbon burn-up from these satellites is tiny, and we can make a pretty easy estimate of it. The largest Starlinks are a little under 300 kg. The final planned configuration is going to have at most 50,000 satellites. So if the satellites have a 5 year lifespan, that is around 10,000 reentries a year, so if the satellites were instead pure gasoline, that would be equivalent to burning up 3 million kg of gasoline a year, or a little over 8,000 kg daily. US gasoline consumption is around 376 million gallons daily https://coltura.org/us-gasoline-consumption/ or roughly a billion kg of gasoline daily. So this is very little. Gasoline is of course not the only form of carbon emission, and the vast majority of a satellite is not carbon anyway.
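
For anyone who wants to check the arithmetic, here is the same estimate as a few lines of Python (the 2.8 kg per gallon density figure is my own rough number, not from the linked source):

import math

satellites = 50_000            # upper end of the planned constellation
lifespan_years = 5
mass_kg = 300                  # upper end for a single Starlink
reentry_kg_daily = satellites / lifespan_years * mass_kg / 365   # ~8,200 kg/day

gasoline_gal_daily = 376e6     # US consumption, per the linked source
kg_per_gallon = 2.8            # rough gasoline density; my assumption
gasoline_kg_daily = gasoline_gal_daily * kg_per_gallon           # ~1e9 kg/day

print(reentry_kg_daily / gasoline_kg_daily)  # ~8e-6, a few parts per million

So even under worst-case mass assumptions, the satellite burn-up is on the order of millionths of daily US gasoline consumption.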

Comment Iridium tried this (Score 4, Informative) 34

Iridium tried this https://en.wikipedia.org/wiki/Iridium_satellite_constellation . Now that was 25 years ago and the tech has changed, but doing this via large satellites proved very expensive then. There are some clear advantages to a few large satellites (more ability to do maneuvering and station keeping, better per-satellite scaling for many parts that are needed regardless of satellite size), but there are also advantages to having a large set of small satellites, including redundancy, economies of scale, the ability to quickly replace satellites, the ability to keep them in lower orbits, and the ability to iterate designs faster. How all of these end up shaking out I don't know.

Comment Re:"probably. We're not 100% sure about it...." (Score 1) 121

There's a large gap here, and essentially a false dichotomy. The person you are replying to didn't claim that humans read the same way AIs do. They did correctly note that the claim that a human learns to read from a single book is obviously false.

That said, it is pretty clear that the AIs are doing a lot of things far less efficiently than humans in terms of training data, which shouldn't be surprising given that human brains have had millions of years of evolution to optimize learning. It is also clear that humans have other high-bandwidth data channels, including vision, hearing, and other senses.

Comment Re:No (Score 1) 113

These are not "hard" problems. These are problems nobody competent has yet looked into

With all due respect, I'm a mathematician and you don't know what you are talking about. These problems are not "hard" in the sense of being the sort of problems people get fame for solving. They are hard in the sense that it would likely take an expert human days or weeks to solve them, and even that isn't guaranteed.

Hence it is entirely plausible OpenAI had somebody competent solve them in secret and put that in the training data. They certainly have lied enough so far to make that entirely plausible.

Much of the work on these problems has not been done by anyone affiliated with OpenAI or other AI groups. Your scenario would require them to have spent time having experts solve the problems, and then to have enlisted people outside OpenAI to try those same problems. And that's even before we get to the problem of turnaround: this is happening as there's a new systematic attempt to list and attack all the Erdos problems, and it isn't even obvious there would have been enough time for humans to do this under your proposal. At this point, your entire "plausible" (and no, it isn't) scenario is you constructing a conspiracy theory and grasping at straws so you can ignore evidence you don't want to hear.

Comment Re:Misleading title (Score 1) 113

Are you arguing that these systems are not general AI? Sure, no argument there. But it should also be clear that this sort of thing is something you could not do a year ago. The systems continue to improve rapidly. I suspect that these systems will never become genuinely intelligent in the sense humans are without fundamentally new insights, but that doesn't mean we cannot recognize the extreme improvements and the impact they're having. Worse, if suspicions like mine are incorrect, the situation could change drastically, very quickly.

Comment Re:LLM had a head start (Score 4, Informative) 113

Hahahha, no. You should maybe try to replicate this to see how incapable LLMs actually are. LLMs cannot do anything even mildly non-trivial.

I'm a mathematician. I've talked explicitly before on Slashdot about personal experiments using LLMs, such as here https://slashdot.org/comments.pl?sid=23789930&cid=65646656 where I discussed that yes, they can do non-trivial work. And I'm not the only example. Terry Tao, for example, has used them, and that's a name you should have at least heard of https://mathoverflow.net/questions/501066/is-the-least-common-multiple-sequence-textlcm1-2-dots-n-a-subset-of-t The fact that multiple mathematicians are now telling you that these systems are doing non-trivial work and you just ignore it says much more about you than it does about LLMs.

But I suppose I shouldn't be surprised, since the last time you and I discussed a similar topic, you claimed that what LLMs were doing could be done by software such as Mathematica or Maple, and then refused to show that even after you were given a direct incentive to make your case https://slashdot.org/comments.pl?sid=23748766&cid=65535428. I'm really struggling to imagine anything that would be sufficient evidence to change your mind, which says something about you, not about LLM systems.

Comment Re:No (Score 1) 113

There's no gaming of benchmarks here. These systems were used to solve genuinely open problems, and this sort of work would take a grad student months, after years of extensive training as an undergrad. Maybe you should rethink your knee-jerk position that LLM AIs cannot do anything interesting, no matter what evidence you see to the contrary?

Comment Re:LLM had a head start (Score 1) 113

Sigh. Despite your "Indeed" here, that's not what I'm saying at all. Adopting techniques from papers like this at the level of a beginning grad student is extremely non-trivial. It is true that this is largely adaptation of the training data, but it is adaptation to an extent that normally takes people years of prior training and guidance from mentors, requires them to be pretty bright, and then takes them months of work on top of that.

Comment Re:LLM had a head start (Score 2) 113

Mathematician here. The vast majority of new mathematical work uses existing ideas and techniques: they get combined in new ways, generalized, or tweaked. More broadly, most mathematicians have 10 or 15 major techniques they know really well and use them along with a bunch of tricks. To some extent, the better mathematicians are those who just know a lot more tricks. In that context, these AI systems are functioning very close to what one would expect of a first- or second-year graduate student with access to the existing literature. There are also different degrees to which this is true, with some areas having more of it than others. It is not a coincidence in that sense that these AI systems are right now being more successful in elementary number theory, where there's a lot of this sort of thing, and having less success in areas where there's less repetition of the same techniques (such as some areas of logic and combinatorial game theory).

Comment Re:Data centers on the moon (Score 1) 130

Yes, but the point is that you can do something here that you could not do just from a shared set of random measurements and shared classical information. The trick would not work if one had just the magic coins I mentioned earlier. What's going on here really does require cancellation of amplitudes.

Comment Re:Data centers on the moon (Score 1) 130

It is a little more than that. You can use previously shared entangled bits to do superdense coding https://en.wikipedia.org/wiki/Superdense_coding. Note that this is a trick that really does only work because amplitudes, unlike probabilities, can be negative or complex. The trick would not work if you just had the sort of magic coin in question, or a pair of magic coins which were merely highly correlated with each other.
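
If you want to see the amplitude bookkeeping concretely, here is a minimal NumPy sketch of superdense coding, written from the standard textbook description rather than any particular implementation. Note how the Z encoding flips the sign of an amplitude, which is exactly the step with no classical-probability analogue:

import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Shared Bell state |Phi+> = (|00> + |11>)/sqrt(2); Alice holds the first qubit.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Alice encodes two classical bits purely by acting on her own qubit.
encodings = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}

# The four Bell states, which Bob distinguishes with a joint measurement.
bell = {
    (0, 0): np.array([1, 0, 0, 1]) / np.sqrt(2),
    (0, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),
    (1, 0): np.array([1, 0, 0, -1]) / np.sqrt(2),  # sign flip from Z
    (1, 1): np.array([0, 1, -1, 0]) / np.sqrt(2),
}

for bits, U in encodings.items():
    state = np.kron(U, I2) @ phi_plus  # Alice's local operation only
    probs = {b: abs(v @ state) ** 2 for b, v in bell.items()}
    print(bits, "->", max(probs, key=probs.get))  # Bob recovers both bits

Two classical bits arrive with the transmission of a single qubit, but only because the entangled pair was shared beforehand; nothing here moves faster than the qubit itself.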

Comment Re:Data centers on the moon (Score 3, Interesting) 130

Quantum computers are not magic, so they won't help you there either. The no-communication theorem https://en.wikipedia.org/wiki/No-communication_theorem says that you cannot use quantum entanglement to send information faster than the speed of light. The rough intuition that may help is to imagine two coins which are entangled so that when one is fairly flipped and lands heads, you know the next flip of the other will be tails. You can do a lot of fun tricks with such a pair of coins, but since the tricks only work on fair flips, you cannot use the coins to transmit information directly. What's happening with quantum entanglement is a bit more subtle than this coin analogy, since amplitude, the quantum analog of probability, can be a complex number, but for this purpose the coin analogy should be sufficient to understand the basic idea.
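
To make the coin analogy concrete, here is a toy simulation in Python (my own illustration of the analogy, not of the actual quantum theorem): whatever Alice does to her coin locally, the statistics Bob sees on his end never change, so she cannot encode a message in her choice of action.

import random

def bob_heads_rate(alice_reflips, trials=200_000):
    heads = 0
    for _ in range(trials):
        hidden = random.random() < 0.5  # shared randomness, fixed at pair creation
        bob = not hidden                # Bob's coin, anti-correlated with Alice's
        alice = hidden
        if alice_reflips:
            alice = random.random() < 0.5  # Alice acts locally on her coin...
        heads += bob                       # ...but Bob's outcome never depends on it
    return heads / trials

# Bob sees ~50% heads either way, so Alice's choice carries no signal.
print(bob_heads_rate(alice_reflips=False))
print(bob_heads_rate(alice_reflips=True))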
