
Comment Re: 4GB has been insufficient for many years now (Score 2) 81

I have not seen AI code that is *more* efficient than human code, yet. I have seen AI write efficient, compact code when pressed very, very hard to do so, but only then. Otherwise, in my hands, and those of my developer colleagues, AI produces mostly correct, but inefficient, verbose code.

Could that change? Sure, I suppose. But right now it is not the case, and the value system driving auto-generated code (i.e., the training set of extant code) does not put a premium on efficiency.

Comment Re:4GB has been insufficient for many years now (Score 5, Informative) 81

Web browsers are absolute hogs, and, in part, that's because web sites are absolute hogs. Web sites are now full-blown applications written without regard to memory footprint or efficiency. I blame the developers who write their code on lovely, large, powerful machines (because devs should get good tools, I get that) but then don't suffer the pain of running it on perfectly good 8 GB laptops that *were* top-of-the-line 10 years ago but are now on eBay for $100. MS Teams is a perfect example of this. What a steaming pile of crap. My favored laptop is said machine, favored for its combination of ultra-light weight and eminently portable size; Zoom works just fine on it, but Teams is unusable. Slack is OK, if that's nearly the only web site you're visiting. Eight frelling GB to run a glorified chat room.

The thing that gets my goat, however, is that the laptop I used in the late 1990s was about the same form factor as this one, had 64 MB (yes, MB) of main memory, and booted Linux back then just about as fast. If memory serves, the system took about 2 MB once up. The CPU clock on that machine was in the 100 MHz range. Even without counting the massive architectural improvements, my 2010s-era laptop should boot an order of magnitude faster. It does not.

Why? Because a long time ago it became OK to include vast numbers of libraries because programmers were too lazy to implement things on their own, so you get 4, 5, 6 or more layers of abstraction, each library recursively calling packages only slightly lower-level than itself to achieve its goals. I fear that with AI coding, it will only get worse.

And don't get me started on the massive performance regression that so-called modern languages represent, even when compiled. Hell in a handbasket? Yes. Because CPU cycles are stupidly cheap now, and we don't have to work hard to eke out every bit of performance, so we don't bother.

Comment Re:Intel's political marketing has always been bad (Score 3, Insightful) 22

If you read this post it shows that AMD stole Intel's design and reverse engineered it.

If you dig deeper, you'll find that AMD originally reverse engineered the *8080*, not the 8086. The two companies had entered into a cross-licensing agreement by 1976. Intel agreed to let AMD second-source the 8086 in order to secure the PC deal with IBM, who insisted on having a second-source vendor.

There would have been no Intel success story without AMD to back them up.

(That actually would have been for the best. IBM would probably have selected a non-segmented CPU from somebody else instead of Intel's kludge.)

Comment Re:Seems pointlessly unsafe (Score 1) 183

A dummy load and some chemistry to use oxygen would do the same job with zero human risk.

If they're not putting boots on the Moon, they shouldn't have their asses in the rocket.

Remember kids, spaceflight is hard. Nature does not like us being in space, at all. She puts up serious, difficult barriers that we need to overcome. Just look how hard it was for a new program like SpaceX to start from scratch, even with all of the existing knowledge developed by NASA, ESA, etc. How many rapid unscheduled disassembly events did they suffer? I lost count. Even the Russians, who arguably have as much LEO experience as the US or more, continue to face challenges. Heck, so do we, as the current generation of engineers no longer has the direct experience from Gemini and Apollo to guide them. Space is deeply unforgiving of mistakes.

To the GP: if you think that your 5-second considered opinion is better than that of a fleet of talented folks, I'll wager that if you took more time and did some research, you'd change your opinion. I hope you do.

Comment Re:Clean room? (Score 5, Interesting) 124

Even if you use an AI to extract an extremely condensed specification out of the source code, it's hardly clean room if the LLM was pre-trained on the source code anyway.

I once worked at a place that had a clean room process to create code compatible with a proprietary product. Anybody who had ever seen the original code or even loaded the original binary into a debugger was not allowed to write any code at all for the cloned product. The clone writers generally worked only off of the specifications and user documentation.

There were a handful of people who were allowed to debug the original to resolve a few questions about low-level compatibility. The only way they were allowed to communicate with the software writers was through written questions and answers that left a clear paper trail, and the answers had to be as terse as possible (usually just yes or no). Everyone knew that these memos were highly likely to be used as evidence in legal proceedings.

I highly doubt that any AI tech bros have ever been this rigorous, and I'd bet that most of these AIs have been trained on the exact same source code that they are cloning.

Comment Re:The God-fearing and the Accountants (Score 1) 162

In the end, the real solution is to be able to grow parts as they're needed, not grow an entire body requiring expensive maintenance that you might have to throw away after you harvest one critical part.

I've been expecting that eventual outcome since the early 2000s when we (as in someone in an academic lab) grew a 3rd kidney in a mouse by grafting stem cells from a donor.

Comment Re:Here it comes (Score 1) 70

You're confusing the importance of avoiding Kessler syndrome in LEO with the difficulty of causing Kessler syndrome. GEO debris can potentially remain there for millions of years before interactions between the gravitational pull of the Sun, Earth, and Moon sufficiently perturb it. LEO debris remains for weeks to months. You have to have many orders of magnitude more debris in LEO to trigger Kessler Syndrome, where the rate of collisions exceeds the rate of debris loss.

The fact that a LEO Kessler syndrome would also be short-lived comes on top of that.

It's also worth noting that modern satellites are not only vastly better at properly disposing of themselves than they were in the 1970s when Kessler syndrome was proposed, but also vastly better at avoiding debris strikes. All of these factors multiply together.
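To put rough numbers on the "orders of magnitude" point above, here's a toy model (every number is made up; only the scaling matters). Treat the debris population N as dN/dt = k*N^2 - N/tau: collisional production versus natural removal. Runaway needs N above N_crit = 1/(k*tau), so shrinking the residence time tau by six orders of magnitude raises the threshold by six orders of magnitude:

def simulate(n0, k, tau, years, dt=0.01):
    """Euler-integrate dN/dt = k*N**2 - N/tau over `years` (toy model)."""
    n = float(n0)
    for _ in range(int(years / dt)):
        n += (k * n * n - n / tau) * dt
        if n < 0.0:
            n = 0.0
    return n

K = 1e-7  # per-pair collision/fragmentation rate, made up for illustration

for label, tau in (("LEO-like (tau = 0.5 yr)", 0.5),
                   ("GEO-like (tau = 1e6 yr)", 1e6)):
    n_crit = 1.0 / (K * tau)  # runaway threshold N_crit = 1/(k*tau)
    n_end = simulate(1000, K, tau, years=5000)
    print(f"{label}: runaway above ~{n_crit:,.0f} objects; "
          f"1,000 objects after 5,000 yr -> {n_end:,.0f}")

Below the threshold the population burns itself off; above it, it compounds. Real models track altitude shells and fragment size distributions, but the k*tau trade-off is the heart of the argument.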

Comment Re:Here it comes (Score 3, Insightful) 70

People forget that the primary concerns about Kessler Syndrome were about geosynchronous orbit, which used to be where all the most important satellites went (many of course still go there, but not the megaconstellations). It takes a long, long time for debris to leave GEO. But LEO is a very different beast.

Comment Re:Here it comes (Score 4, Informative) 70

Yeah. In particular:

with fragments likely to fall to Earth over the next few weeks

LEO FTW. Kessler syndrome is primarily a risk if you put too much stuff, with too poor an end-of-life disposal rate, in GEO. The rate of craft reaching end-of-life without proper disposal has dropped dramatically since Kessler syndrome was first proposed (manufacturers both understand the importance more and do a better job of decreasing the rate of failures before deorbit - in the past, sometimes there wasn't even an attempt to dispose of a craft at end-of-life). And now we're increasingly putting stuff in LEO, where debris falls out of orbit relatively quickly. It's not impossible in LEO, especially with higher LEO orbits - but it's much more difficult.

Or to put it another way: fragments can't build up to hit other things if they're gone after just a couple weeks.

And this trend is likely to continue - a lower percentage of premature failures, and decreasing altitudes / reentry times. Concerning ever-decreasing altitudes: we've already been moving this way via ion engines that provide more reboost (with mission lifespans designed for only several years before running out of propellant, instead of decades like the giant GEO birds). But there's also increasing interest in "sky-skimming" satellites that function in a way somewhat reminiscent of a ramjet - instead of krypton or xenon, the sparse atmospheric air itself is the propellant for the ion engine, so the craft can in effect fly indefinitely until it fails, at which point it quite rapidly enters the denser atmosphere and burns up.
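And just to illustrate how sharply reentry time falls with altitude, here's a crude numerical sketch. The satellite parameters (260 kg, 10 m^2, drag coefficient 2.2) are hypothetical Starlink-ish guesses, and the single-scale-height exponential atmosphere ignores solar-activity swings of 10x or more, so treat the outputs as order-of-magnitude at best:

import math

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

def air_density(alt_m):
    """Crude exponential atmosphere above ~200 km (kg/m^3)."""
    rho_200 = 2.5e-10        # rough density at 200 km altitude
    scale_height = 60_000.0  # ~60 km effective scale height (assumption)
    return rho_200 * math.exp(-(alt_m - 200_000.0) / scale_height)

def years_to_reenter(alt_km, mass=260.0, area=10.0, cd=2.2):
    """Integrate circular-orbit decay da/dt = -sqrt(mu*a)*(cd*area/mass)*rho
    down to ~150 km, where reentry is effectively immediate. Caps at 100 yr."""
    a = R_EARTH + alt_km * 1000.0
    t, dt = 0.0, 3600.0  # one-hour steps
    while a > R_EARTH + 150_000.0:
        a -= math.sqrt(MU * a) * (cd * area / mass) * air_density(a - R_EARTH) * dt
        t += dt
        if t > 100 * 3.156e7:  # give up past ~100 years
            break
    return t / 3.156e7

for alt in (300, 400, 550, 800):
    print(f"start at {alt} km: ~{years_to_reenter(alt):.2f} years to reenter")

In this toy, dropping from ~800 km to ~550 km takes a dead satellite from decades of residence down to around a year - which is exactly why the megaconstellations fly low.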

Comment Re:TypeScript? (Score 1) 65

Erlang is... weird. 15 years ago I wanted to learn a new and different language, and I tried it, but I could not wrap my brain around some of its constructs. Then I read a paper by a guy claiming that some things were impossible to do with Erlang (with examples in other languages), and since I didn't have any projects to do with it, I basically forgot all about it.

Comment Re:Doing the editor's job. (Score 5, Informative) 41

Relativity = gravity is represented by the curvature of spacetime. Curvature enters the equations linearly, as R. As things get closer and curvature spikes, the math just scales at a 1:1 rate.

Quadratic gravity = Squares the curvature. Doesn't really change things much when everything is far apart, but heavily changes things when everything is close together.
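To put that in formula terms (a schematic sketch; sign conventions and the exact choice of quadratic invariants vary by author), the Einstein-Hilbert action is linear in the curvature scalar R, while quadratic gravity adds curvature-squared terms with new couplings \alpha and \beta:

S_{\mathrm{EH}} = \frac{1}{16\pi G} \int d^4x \, \sqrt{-g} \, R

S_{\mathrm{quad}} = \int d^4x \, \sqrt{-g} \left( \frac{R}{16\pi G} + \alpha R^2 + \beta R_{\mu\nu} R^{\mu\nu} \right)

Far from mass, R is tiny and the squared terms are negligible; near a singularity, R blows up and the R^2 terms dominate.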

Pros: prevents infinities and other problems when trying to reconcile quantum theory with relativity ("makes the theory renormalizable"). E.g. you don't want to calculate "if I add up the probabilities of all of these possible routes to some specific event, what are the odds that it happens?" -> "Infinity percent odds". That's... a problem. Renormalization is a trick for electromagnetism that prevents this by letting the infinities cancel out. But it doesn't work with linear curvature - gravitons carry energy, which creates gravity, which carries more energy... it explodes, and renormalization attempts just create new infinities. But it does work with quadratic curvature - it weakens high-energy interactions and allows for convergence.

Cons: Creates "ghosts" (particles with negative energies or negative probabilities, which create their own problems). There are various proposed solutions, but none that's really a "eureka!" moment. They're generally along the lines of "they exist but are purely virtual and don't interact", "they exist but they're so massive that they decay before they can interact with the universe", "they don't exist; we're just using the math out of bounds and need a different representation of the same thing", or "if we don't stop at R^2 but also add in R^3, R^4, ... on to infinity, then they go away". Etc.
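For the curious, the pro and the con fall out of the same expression. Schematically (factors and signs vary by convention), the quadratic terms turn the graviton propagator's 1/k^2 into something that falls off like 1/k^4 at high momentum, which is what tames the loop integrals; but a partial-fraction split of that same propagator is

\frac{m^2}{k^2 (k^2 + m^2)} = \frac{1}{k^2} - \frac{1}{k^2 + m^2}

i.e., a healthy massless graviton minus a massive mode with a wrong-sign residue - and that minus sign is the ghost.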

The theory isn't new, BTW. The idea is from 1918 (just a few years after Einstein's theory of General Relativity was published), and the work that led to the "Pros" above is from 1977.
