Comment Re:Please stop caring of the American people only (Score 1) 130

Who are you addressing? If it is the Pentagon, they're supposed to care only for the American people

I think the administration would disagree with you. The Pentagon is only supposed to care for the right Americans, not all Americans. Oh, but, yeah, the administration is completely on board with not caring about the rest of the world. Except for the white people. But not Europeans. And it's not really clear whether they should care about Australians or New Zealanders. White South Africans, though, definitely the Pentagon should care about them.

Comment Re:Department of war lol (Score 1) 130

We shall see. If the US recovers from this, there may be a reckoning.

It's far from certain that the US will recover, though I think we will. But the same political divide that makes recovery uncertain makes it very certain that there will be no reckoning. Barring some incredible mistake that makes Trump's base turn hard on him and anyone remotely associated with him, that base will retain enough power and enough loyalty to shield him and his from significant consequences. And given that he survives near-daily scandals that would have taken down anyone else, anything bad enough to make his base turn on him would have to be so horrific that we really, really don't want it to happen.

His actions are nibbling away at his support, and the longer he's free to act with few restraints the more that will happen because he's an utter incompetent who cares about nothing but self-aggrandizement. And that will probably (probably!) be enough to turn the voters sufficiently sour on him that association with Trump will become a moderate liability. But it's very unlikely that he'll lose enough support to make any sort of reckoning possible.

Comment Re: Is it true? (Score 1) 103

Sure, if you ask it for unoriginal code, it'll give you unoriginal code. But outside of people playing around with it for fun, that's not how it's used. If you look at how it's being used by highly-skilled engineers at top-tier companies, which pay hundreds of dollars per engineer per month for access to the frontier models, basically none of that is unoriginal code. Not that the AI is writing "original code" by itself; there's a lot of human guidance and decision-making. The AI is writing pretty much every character of the code, but to specifications written in English -- sometimes pretty high-level specifications. I just told Claude "Implement AES-CBC support in the Rust layer" and it generated a thousand lines (which will shrink by ~20% when I review) of Rust code that implement AES-CBC support within the architecture and framework that I defined -- though a lot of the architecture and framework definition was also done with heavy AI support. I use the LLM for brainstorming and analysis.

The end result is all original, not regurgitated from anything, and disentangling the human and AI contributions is impossible. The core ideas are mine, but a lot of improvements came from Claude's suggestions, and some of those improvements are pretty deep. Claude also made a lot of stupid suggestions, which I obviously discarded. Nearly all of the actual code was generated by Claude rather than being typed by me, but I've reviewed every line of it and told Claude to fix lots of things that I didn't like. The result is definitely my style and pretty much indistinguishable from something I would have written myself -- except that it was produced several times faster than I'd have achieved on my own.

As for creativity... I'd say my current project is one of the most deeply creative endeavors of my career, and that higher level of creativity is in large part because the LLM is writing the code, freeing me from tedium and allowing me to think harder about the architecture and design that is embodied in the code.

Comment Re:Is it true? (Score 1) 103

Is it true that AI code can't be copyrighted?

Pretty much the whole industry assumes that AI-written code is owned by the company that employs the engineer who was using the AI. I don't think this has been litigated, but if it were to go the way you suggest it would create... problems.

Comment Re:AI Hype needs money (Score 1) 103

No way experienced developers are letting AI generate bug fixes or entirely new features using Slack to talk to AI on the way to work.

Depends on whether they can review the code and tests effectively first. I frequently push commits without ever typing a line of code myself: tell the LLM to write the test (and how to write it), check the test, tell the LLM how to tweak it if necessary; then tell the LLM to write the code and verify the test passes, check the code, tell the LLM what to fix, and repeat until it's good; then tell the LLM to write the commit message (which I also review), and finally tell it to commit and push.

Actually "tell the LLM what to fix/tweak" is often not right. More often it's "Ask the LLM why it chose to do X". I find I program via the Socratic Method a lot these days. The LLM usually immediately recognizes what I'm getting at and fixes it -- most often not because the code was wrong but because the implementation was more complex than necessary, or duplicated code that should be factored out, or similar. Sometimes it provides a good explanation and I agree that the LLM got it right.

As an example from immediately before I started typing this comment, the LLM wrote some code that included a line like [[maybe_unused]] auto ignored = ptr->release();. The LLM had recognized that the linter was going to flag the unused return value (which it had named "ignored" to make clear to readers that ignoring it was intentional) and inserted the annotation to suppress the warning. This was all unnecessarily complex, made necessary by the fact that it had previously used get() to grab the raw pointer value before checking it, and then (right after the release()) stuffed that raw pointer into another smart pointer object to return. The release() call was necessary to keep the first smart pointer from deleting the pointed-at object. I typed "Why not move the pointer directly from release() to the new smart pointer?". The LLM said "Oh, that would be cleaner and then I could get rid of the temporaries entirely" and reorganized the code that way. That's a trivial code-structure example, of course, but the pattern often holds with deeper bugs, including cases where my question makes the LLM realize that its whole approach (which is often what I suggested) was wrong and it needs to go into planning mode to develop a correct strategy.
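For illustration, here's a minimal before/after sketch of that refactor using std::unique_ptr. The function and variable names here are my invention for the example, not the actual code:

```cpp
#include <memory>

// Before: grab the raw pointer with get(), release() into a discarded
// variable (annotated to silence the linter), then re-wrap the raw pointer.
std::unique_ptr<int> transfer_awkward(std::unique_ptr<int> owner) {
    int* raw = owner.get();                            // raw value needed for the check
    if (raw == nullptr) return nullptr;
    [[maybe_unused]] auto* ignored = owner.release();  // keep owner from deleting it
    return std::unique_ptr<int>(raw);                  // re-wrap and return
}

// After: move the pointer directly from release() into the new smart
// pointer -- no temporaries, no annotation needed.
std::unique_ptr<int> transfer_clean(std::unique_ptr<int> owner) {
    if (!owner) return nullptr;
    return std::unique_ptr<int>(owner.release());
}
```

(In this toy case `return owner;` would move-transfer ownership even more simply; the release() form matters when the source and destination smart-pointer types differ.)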

There are exceptions, of course. Sometimes the LLM seems to be incredibly obtuse and after a couple of prompts I click "stop" and type what I want, at least enough that I can then tell the LLM "See what I did? That's what I mean."

"Writing code" with AI assistance is mostly reviewing code and you can often do that on a small screen and without a keyboard.

Comment Re:Time to address the real problem (Score 1) 339

The only way to establish change is to hit the primary contributors (corporations) to this problem where it hurts

Corporations aren't the primary contributors, their customers are. Corps just supply what people want to buy.

The solution is simple and well-understood: Apply carbon taxes, then let the market work. It's just not politically feasible until we convince voters to care.

Comment Re:Again (Score 1) 339

That is one of the benefits of a multi planet culture.

It's really not.

I'm all in favor of humanity becoming a multi-planetary species. I think it's a good goal and we should work toward it. But colonizing Mars is not a solution for climate change because living in Mars' climate is way, way harder than living in Earth's, even with extreme global warming. I suppose you could argue that learning how to live on Mars would prepare us for living on a hellscape Earth, but (a) it's not clear that we are capable of continuing our civilization under such conditions and (b) even if we can, it would be orders of magnitude more costly than simply fixing Earth's climate.

Colonizing Mars and then eventually turning the Mars colonies into a self-sufficient civilization is a good goal, and could be an important hedge against some other catastrophic risks (e.g. killer asteroids), but it's not a good solution for this problem.

Comment Re: And this is the problem. (Score 1) 105

I do not think that will work. Because who would make that decision and implement it? It will just be the start of open "value" manipulation.

Actually, I looked it up rather than going from memory: BTC difficulty updates don't happen every few months; they happen every 2016 blocks, which is roughly every two weeks. And the process is entirely automated, adjusting the difficulty to maintain a new-block rate of about one every 10 minutes. In fact, the number 2016 was chosen because it's the number of 10-minute intervals in two weeks. This is all part of Satoshi's original design.

The most recent adjustment happened on Feb 7, and it was a downward difficulty adjustment. This wasn't a new phenomenon; downward adjustments are much less common than upward adjustments, but they've happened many times in the past.

So, BTC adapts automatically to price changes that make mining more or less profitable.
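The retarget rule itself is just a ratio with a 4x clamp. Here's a simplified sketch (real Bitcoin operates on the 256-bit compact target rather than a floating-point difficulty value, but the proportions are the same):

```cpp
#include <algorithm>
#include <cstdint>

// Simplified Bitcoin difficulty retarget, run once every 2016 blocks.
// actual_timespan_secs is how long the last 2016 blocks actually took.
double retarget(double old_difficulty, int64_t actual_timespan_secs) {
    const int64_t expected = 2016 * 10 * 60;  // 2016 blocks at 10 min each = two weeks
    // Clamp so a single adjustment can't change difficulty by more than 4x.
    const int64_t clamped =
        std::clamp(actual_timespan_secs, expected / 4, expected * 4);
    // Blocks came too fast (clamped < expected) -> difficulty rises;
    // too slow (clamped > expected) -> difficulty falls.
    return old_difficulty * static_cast<double>(expected)
                          / static_cast<double>(clamped);
}
```

If hashpower leaves the network after a price drop, blocks take longer than 10 minutes, the actual timespan exceeds two weeks, and the next adjustment moves difficulty downward -- exactly the Feb 7 case.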

Comment Re: And this is the problem. (Score 1) 105

I do not think that will work. Because who would make that decision and implement it? It will just be the start of open "value" manipulation.

Difficulty is updated every few months. This is a routine process. I'm not sure that it has ever been decreased rather than increased, but I don't think that will be a significant obstacle.
