Comment Re: 4GB has been insufficient for many years now (Score 3, Informative) 103

I have not seen AI code that is *more* efficient than human code, yet. I have seen AI write efficient, compact code when pressed very, very hard to do so, but only then. Otherwise, in my hands and those of my developer colleagues, AI produces mostly correct but inefficient, verbose code.

Could that change? Sure, I suppose. But right now it is not the case, and the value system driving auto-generated code (i.e., the training set of extant code) does not put a premium on efficiency.

Comment Re:4GB has been insufficient for many years now (Score 5, Informative) 103

Web browsers are absolute hogs, and, in part, that's because web sites are absolute hogs. Web sites are now full-blown applications written without regard to memory footprint or efficiency. I blame the developers who write their code on lovely, large, powerful machines (devs should get good tools, I get that) but then don't suffer the pain of running it on perfectly good 8 GB laptops that *were* top-of-the-line 10 years ago but are now on eBay for $100. MS Teams is a perfect example of this. What a steaming pile of crap. My favored laptop is said machine, favored for its combination of ultra-light weight and eminently portable size, and Zoom works just fine on it, but Teams is unusable. Slack is OK, if that's nearly the only web app you're running. Eight frelling GB to run a glorified chat room.

The thing that gets my goat, however, is that the laptop I used in the late 1990s was about the same form factor as this one, had 64 MB (yes, MB) of main memory, and booted Linux back then just about as fast. If memory serves, the system took about 2 MB once up. The CPU clock on that machine was in the 100 MHz range. Even without accounting for the massive architectural improvements, my 2010s-era laptop should boot an order of magnitude faster. It does not.

Why? Because a long time ago it became OK to pull in vast numbers of libraries because programmers were too lazy to implement something on their own, so you get 4, 5, 6, or more layers of abstraction, as each library recursively calls packages only slightly lower-level than itself to achieve its goals. I fear that with AI coding, it will only get worse.
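To make that layering concrete, here's a toy sketch (every package name here is made up for illustration) of how a "simple" app ends up four or five libraries deep before it touches anything real:

```python
# Hypothetical dependency graph: a chat app and the libraries under it.
DEPS = {
    "chat_app": ["ui_toolkit", "http_client"],
    "ui_toolkit": ["render_engine"],
    "render_engine": ["font_lib"],
    "font_lib": ["unicode_tables"],
    "http_client": ["tls_lib"],
    "tls_lib": ["bignum_lib"],
    "unicode_tables": [],
    "bignum_lib": [],
}

def dep_depth(pkg, graph):
    """Longest chain of transitive dependencies below pkg."""
    children = graph.get(pkg, [])
    if not children:
        return 0
    return 1 + max(dep_depth(c, graph) for c in children)

# chat_app -> ui_toolkit -> render_engine -> font_lib -> unicode_tables
print(dep_depth("chat_app", DEPS))  # -> 4
```

Each layer only calls something "slightly lower-level," yet the stack under the top-level app is already four deep, and real dependency trees are far bushier than this.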

And don't get me started on the massive performance regression that so-called modern languages represent, even when compiled. Hell in a handbasket? Yes. Because CPU cycles are stupidly cheap now, and we don't have to work hard to eke out every bit of performance, so we don't bother.

Comment Re:Intel's political marketing has always been bad (Score 4, Insightful) 23

If you read this post it shows that AMD stole Intel's design and reverse engineered it.

If you dig deeper, you'll find that AMD originally reverse engineered the *8080*, not the 8086. The two companies had entered into a cross-licensing agreement by 1976, and Intel agreed to let AMD second-source the 8086 in order to secure the PC deal with IBM, which insisted on having a second-source vendor.

There would have been no Intel success story without AMD to back them up.

(That actually would have been for the best. IBM would probably have selected a non-segmented CPU from somebody else instead of Intel's kludge.)

Comment Re:Speed enforcement (Score 4, Interesting) 184

2) Police officer hides, catches unsuspecting driver speeding, stops driver, issues summons.

This is the very best approach. It's got the perfect tension leading to the greatest safety.

When you're expecting such an ambush (getting caught a few times will teach you to do that), and you're really paying attention and playing "spot the ambush" then they won't catch you. But because you're being so damned focused and alert, you're also a safer driver.

OTOH, if they nail you, that means you weren't paying attention. So you weren't merely speeding; you really were speeding unsafely, and the ticket is the proof. (If you were so safe, then how come you didn't see the guy with the radar gun in time?)

Every. Single. Time. I got ticketed, my mind was wandering and not fully focused on the road. I wasn't looking for a speed trap, so I didn't see it in time. Busted. And those times I was looking? I didn't fall for it. I slowed down and avoided a ticket.

The ideal system (in terms of safety) happens to also be downright sporting! The ol' classic speed trap was almost... a game?

Comment Re:Time to innovate (Score 1) 63

Not really. I was in a factory in Shenzhen 25 years ago that was able to spit out ~20 5 MVA transformers per week with basic equipment and minimal staff. The only components brought in from outside were bolts and insulators. Doubling capacity only required equipment worth roughly the cost of one transformer.

Solid state solutions are an order of magnitude more complex.

Comment Re:really? (Score 1) 125

That's generally how it's being done. The robot reads the code and writes specs. Then another robot reads the specs and writes code. If courts still accept the traditional clean room defense (and why wouldn't they?) then they're probably going to say it isn't a derived work.

It looks like the big catch, the actual source of uncertainty, is that the instance of the robot that reads the specs and writes code may have seen the original code as part of its training data. That'll be enough to keep it from being a true clean room. In those cases, you'll be totally right.

But for any given project, was it trained on the original code? That'll be a case-by-case thing, and I think in the very long term the answer will increasingly be no, simply because codebots' need to keep training on newly published code will diminish.

As an analogy, imagine you're a human author, and for some weird reason, one thing you like to do is have people tell you high-level plot summaries (specs) and then you write a detailed story from that. Someone says "the moon is unusually bright one night and people fear something bad has happened" and you write a story much like Larry Niven's Inconstant Moon, from that prompt alone. And you do this with 100 more stories, and most of them honestly don't appear to be derived. You take specs like "bombardier has crazy war experiences" and your resulting story is nothing like Catch-22.

But then one day, you're up in the attic and you find an old box that's been sitting there for decades, and inside, you find an old, worn, dog-eared paperback of Larry Niven stories which happens to include Inconstant Moon. Oh shit, you must have read that 45 years ago and then somehow "forgot" that you had, so your story wasn't truly independent of Niven's work. Your story turned out to not be "clean" at all, whoops! It was a derived work after all, because you read it ("trained on it") when you were a kid.

But the other 100 stories? Nope, those really were clean. Your story-writing process was almost legally foolproof, except that you had to learn reading and writing at some point, so your childhood favorites needed to be off-limits.

Comment Things are illusorily fabulous (Score 5, Interesting) 107

The heat wave made March feel like late spring. Things that normally bloom in May bloomed in March. And yesterday I got my first MRGCD irrigation of the year, flooding my back yard and letting the shade trees greedily suck up the water. We're spending a lot more time outside on the patio compared to this time of year in previous years.

If I were stupid, I would be out of my mind with pleasure. Things feel wonderful right now.

But that water I just got... that is the snowpack, probably. Instead of getting it all throughout the summer, this first irrigation is probably the last, or second-to-last.

This summer is going to SUCK.

Comment Re:Seems pointlessly unsafe (Score 1) 183

A dummy load and some chemistry to use oxygen would do the same job with zero human risk.

If they're not putting boots on the Moon, they shouldn't have their asses in the rocket.

Remember kids, spaceflight is hard. Nature does not like us being in space, at all. She puts up serious, difficult barriers that we need to overcome. Just look how hard it was for a new program like SpaceX to start from scratch, even with all of the existing knowledge developed by NASA, ESA, etc. How many rapid unscheduled disassembly events did they suffer? I lost count. Even the Russians, who arguably have as much or more LEO experience than the US, continue to face challenges. Heck, so do we, as the current generation of engineers no longer has the direct experience from Gemini and Apollo to guide them. Space is deeply unforgiving of mistakes.

To the GP: if you think that your 5-second considered opinion is better than a fleet of talented folks', I'll wager that if you spent more time and did some research, you'd change your opinion. I hope you do.

Comment Re:Clean room? (Score 5, Interesting) 125

Even if you use an AI to extract an extremely condensed specification out of the source code, it's hardly clean room if the LLM was pre-trained on the source code anyway.

I once worked at a place that had a clean room process to create code compatible with a proprietary product. Anybody who had ever seen the original code or even loaded the original binary into a debugger was not allowed to write any code at all for the cloned product. The clone writers generally worked only off of the specifications and user documentation.

There were a handful of people who were allowed to debug the original to resolve a few questions about low-level compatibility. The only way they were allowed to communicate with the software writers was through written questions and answers that left a clear paper trail, and the answers had to be as terse as possible (usually just yes or no). Everyone knew that these memos were highly likely to be used as evidence in legal proceedings.

I highly doubt that any AI tech bros have ever been this rigorous, and I'd bet that most of these AIs have been trained on the exact same source code that they are cloning.

Comment Re:really? (Score 4, Interesting) 125

If a computer program ingests code (whether GPL or not) and then outputs some code, the big question is whether or not the resulting code is a derived work.

If it's not a derived work, then the license of the original code is irrelevant, and it doesn't matter if it's GPLed, fully proprietary, or somewhere in between. The license has no say in the matter, because nobody ever needs to agree to the license; whatever they're doing is legal under copyright law so they already had all the permission they needed, without ever needing the additional rights granted by a license.

If it is a derived work, then that's copyright infringement unless the person who does it has permission. And the only way to get permission (i.e. cause copyright infringement to have not happened) is to agree to the license. So yes, the output would have to be GPLed.
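Condensed into code (just the branching logic of the argument above, obviously not legal advice), the whole licensing question collapses to a single predicate:

```python
def output_license(is_derived_work: bool, input_license: str) -> str:
    """Sketch of the reasoning above: everything hinges on is_derived_work,
    which is exactly the question the courts haven't settled yet."""
    if not is_derived_work:
        # The original license is irrelevant: copyright law already permits
        # this use, so nobody ever needs to agree to the license at all.
        return "any license the author chooses"
    # Derived work: the only way to get permission is the original's terms.
    return input_license

print(output_license(True, "GPL"))   # -> GPL
print(output_license(False, "GPL"))  # -> any license the author chooses
```

Everything upstream and downstream of that one boolean is uncontroversial; the boolean itself is the open question.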

But I don't think we really know whether or not robots reading code and then writing code from what they "learned," are creating derived works. Ask again in a few years, after a few court cases. This is hard. Rational people can disagree and come up with pretty good arguments no matter what side they're on. We'll see what the courts decide.

I think the most interesting case for determining it won't involve a GPLed input. It'll be if Anthropic sues this project, since they will have contributed arguments to both sides. They'll have to argue "it is a derived work" in court, but to all their customers they have preached, and will continue to preach, "it's not a derived work."

Comment Re:The God-fearing and the Accountants (Score 1) 162

In the end, the real solution is to be able to grow parts as they're needed, not grow an entire body requiring expensive maintenance that you might have to throw away after you harvest one critical part.

I've been expecting that eventual outcome since the early 2000s when we (as in someone in an academic lab) grew a 3rd kidney in a mouse by grafting stem cells from a donor.
