
Comment Re:Bad title, bad summary: missing key information (Score 1) 127

The actual problem: the buses need replacement batteries because the current ones pose a fire risk. To let the buses keep operating on the existing batteries, a software restriction was installed that prevents charging below 41°F. The software could be changed, or the buses could get their replacement batteries sooner. The summary makes it seem as though there are zero solutions to the issue.

Another solution: get buses whose batteries have proper thermal management systems. The batteries should be able to warm themselves to a safe charging temperature using stored power while driving to the charger.

They may actually have that, and it's some bug in the thermal management system that creates the fire risk, or something similar. But if they don't, that's an actual issue that should be looked into: Why did Vermont buy buses without such a critical cold-weather feature? If that's what's going on, there is an issue but it's a governance issue, not an EV issue. It would be like buying diesel buses in North Dakota without block heaters.

Comment Re: Working in Canada (Score 1) 127

The point is perhaps we should listen to the people who stated total EV tech might never reach the predicted nirvana in some climates.

"never" and "nirvana" are both excessively strong.

There will undoubtedly be teething issues as we learn how to build EVs for different environments. In this case, it sounds like these buses really need proper battery thermal management systems capable of warming the batteries to the required charging temperature. Done right, that adds only trivially to charging time, because the batteries can use stored energy to warm themselves on the way to the charging station. My car (a Tesla) does this, so it's not like the technology is in any way unusual.

On the other hand, there will be some ways in which EVs are forever inferior to ICEVs, just as there are ways in which ICEVs are forever inferior to EVs. In both cases, the solution is to structure the system around the strengths and weaknesses of the technology, which in some cases means choosing the otherwise less-desirable technology. "Nirvana" will never be achieved with any tech.

Comment Re:This is all so stupid (Score 1) 37

What matters is that LLMs reliably and dependably say how fucking awesome I am

LOL.

FYI, if that's not actually what you want, it's fairly easy to fix. All of the models allow you to specify a "personal preferences" prompt that is automatically applied to all conversations. For example, my preferences prompt for Claude says:

Ask clarifying questions when necessary and avoid trying to confirm my biases or opinions, or lauding my insights or views. Avoid calling me "astute", "shrewd", "incisive" or similar, or describing my comments with those sorts of superlatives. Take a neutral and fact-based position and err on the side of being critical of my ideas and positions.

I find this mostly fixes LLM obsequiousness, including countering the models' tendency to be too easily convinced to agree with me.
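If you'd rather apply the same thing programmatically instead of through the app's settings, here's a minimal sketch in C++ using libcurl against the Anthropic HTTP API. Assumptions flagged: the API's "system" field is the programmatic analogue of the app's preferences box, and the model id and user message below are just example values.

    #include <curl/curl.h>
    #include <cstdlib>
    #include <string>

    int main() {
        // API key from the environment; never hard-code it.
        const char* key = std::getenv("ANTHROPIC_API_KEY");
        if (!key) return 1;

        // The anti-sycophancy preferences applied as a system prompt.
        // "claude-sonnet-4-20250514" is just an example model id.
        std::string body = R"({
            "model": "claude-sonnet-4-20250514",
            "max_tokens": 512,
            "system": "Take a neutral and fact-based position and err on the side of being critical of my ideas and positions.",
            "messages": [{"role": "user", "content": "Critique this plan: ..."}]
        })";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        curl_slist* hdrs = nullptr;
        hdrs = curl_slist_append(hdrs, ("x-api-key: " + std::string(key)).c_str());
        hdrs = curl_slist_append(hdrs, "anthropic-version: 2023-06-01");
        hdrs = curl_slist_append(hdrs, "content-type: application/json");

        curl_easy_setopt(curl, CURLOPT_URL, "https://api.anthropic.com/v1/messages");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

        // libcurl's default write behavior prints the JSON response to stdout.
        CURLcode rc = curl_easy_perform(curl);

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }

The point being that the preferences box is effectively just a system prompt; whatever wording works in the app should work the same way here.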

Comment Re:Price (Score 2) 188

Slightly different, but a few years ago in Canada there was a push for plant-based meat replacements. The problem was not that I wasn't willing to eat it; it was the price. In fact, I was curious, and one of my siblings is a vegan, so it would have been nice to have something we could both enjoy. "Beyond Meat", for example, would sell 4 burger patties for $18, whereas I could buy 8 ground beef patties for $15. That's more than double the price per patty, and when a company starts out charging that kind of premium for a "meat substitute", it's hard to get people on board.

When lab-grown or plant-based meat substitutes taste the same as and cost half as much as real meat, people will find that their concerns about them not being "natural" subside and their concerns about the morality of eating "real" meat increase. Motivated reasoning FTW. Oh, there will still be some qualms for a while about whether they might not be as good as the real thing, but those will subside over time.

The real question is whether the stuff can be made and sold cheaply enough without economies of scale. I think it will most likely follow a typical sigmoid adoption curve: at the beginning, only people with strong moral motivations to avoid real meat will pay the high prices, but that will provide enough scale to bring the price down a little, which will increase demand a little, and so on. At some point it will become cheaper than "real" meat and adoption will skyrocket. States that banned it to protect their livestock industries (which is the real reason they're doing it) will find that public pressure pushes them to reverse those bans, even as the livestock industry's power wanes due to revenue lost in states that allow cultivated meat. Eventually, "real" meat will become a somewhat-distasteful luxury product that gets more and more expensive as the livestock industry scales back.
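For concreteness, the sigmoid I have in mind is the standard logistic curve (a sketch; L, k, and t_0 are hypothetical parameters, not numbers from any source):

    f(t) = \frac{L}{1 + e^{-k(t - t_0)}}

where L is the saturation level of adoption, k sets how sharply demand takes off once price parity is crossed, and t_0 is the midpoint of the transition.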

Assuming AI doesn't kill us all first, I predict that it will take about two generations for the curve to reach the upper inflection point.

Comment Re: Is it true? (Score 1) 106

I'm not saying those tools are not useful, or effective, only questioning the legality.

And I'm saying that effectively the whole software industry is using LLMs to write software approximately the way I am. Some use them a little less, some more. If the courts were to decide five years from now (it takes that long for courts to decide anything) that AI-produced code is not copyrightable, it would be an incredible rug pull, throwing years of work by hundreds of thousands of developers into legal limbo. Worse, it would be impossible even to tell what the legal status of any given code was, because there is no reliable way to distinguish LLM-written code from human-written code, even when the code is entirely one or the other. And in fact it rarely is, except for vibe-coded products produced for people who don't have the ability to write them themselves.

If the outcome of a legal decision would be incredibly disruptive, courts don't make that decision. Programmers tend to think of laws the way we think of code: instructions to be followed exactly, with no reflection on their impact. But that's not how courts work. Courts exercise judgment, and if interpreting the law one way produces an outcome that is too bad, they find a different interpretation that is not so bad. This is particularly true of copyright, whose legal basis is rooted in a clause of the Constitution that is explicitly focused on promoting the progress of science and the useful arts. Interpretations (and even laws) that embody a view of copyright that clearly harms progress are unconstitutional. "Clearly harms progress" is a high bar, of course.

As an example, consider Oracle v. Google, the case over whether Google violated Oracle's copyrights by reimplementing the Java APIs. The ultimate resolution was on fair use grounds, not copyrightability, but the initial district court ruling found that APIs could not be copyrighted, explicitly on the argument that allowing APIs to be copyrighted would be too harmful to the software industry. (Both that ruling and the later fair use ruling were overturned on appeal, but SCOTUS upheld the fair use argument and sidestepped the copyrightability question because it didn't actually need to decide it.)

So, if it comes up, courts will decide that AI-written code is copyrightable, and this will happen precisely because so much commerce today and in the near future is based on the assumption that it is.

In the longer run, AI may make this question moot, not by rendering code unworthy of copyright protection in legal terms, but by reducing the value of software to zero. Things that have no monetary value are generally not valid subjects for legal disputes (that is, not justiciable), because civil remedies are largely limited to monetary awards.

Comment Re:Mazda was correct (Score 1) 47

Only tactile feedback has any hope of keeping your eyes on the road while using the dash.

Definitely not the way Mazda did it.

With their interface you had physical wheels and buttons, sure, but they were used to move a selection around on a screen. So rather than a quick glance and a tap on the icon you wanted, you had to watch the screen as you scrolled over to the selection and "clicked" it.

It's the worst of both worlds.

Honestly, I don't mind touch-based UIs for infotainment as long as they're well-organized, keep the important things in fixed locations, and make the buttons big enough.

Comment Re:Please stop caring of the American people only (Score 2) 151

Who are you addressing? If it is the Pentagon, they're supposed to care only for the American people

I think the administration would disagree with you. The Pentagon is only supposed to care for the right Americans, not all Americans. Oh, but, yeah, the administration is completely on board with not caring about the rest of the world. Except for the white people. But not Europeans. And it's not really clear whether they should care about Australians or New Zealanders. White South Africans, though, definitely the Pentagon should care about them.

Comment Re:Department of war lol (Score 1) 151

We shall see. If the US recovers from this, there may be a reckoning.

It's far from certain that the US will recover, though I think we will. But the same political divide that makes recovery uncertain makes it very certain that there will be no reckoning. Barring some incredible mistake that makes Trump's base turn hard on him and anyone remotely associated with him, that base will retain enough power and enough loyalty to shield him and his from significant consequences. And given that he survives near-daily scandals that would have taken down anyone else, anything bad enough to make his base turn on him would have to be so horrific that we really, really don't want it to happen.

His actions are nibbling away at his support, and the longer he's free to act with few restraints the more that will happen because he's an utter incompetent who cares about nothing but self-aggrandizement. And that will probably (probably!) be enough to turn the voters sufficiently sour on him that association with Trump will become a moderate liability. But it's very unlikely that he'll lose enough support to make any sort of reckoning possible.

Comment Re: Is it true? (Score 1) 106

Sure, if you ask it for unoriginal code, it'll give you unoriginal code. But outside of people playing around with it for fun, that's not how it's used. If you look at how it's being used by highly skilled engineers at top-tier companies, which pay hundreds of dollars per engineer per month for access to the frontier models, basically none of it is unoriginal code. Not that the AI is writing "original code" by itself; there's a lot of human guidance and decision-making. The AI is writing pretty much every character of the code, but to specifications written in English, sometimes pretty high-level specifications. I just told Claude "Implement AES-CBC support in the Rust layer" and it generated a thousand lines of Rust code (which will shrink by ~20% when I review them) implementing AES-CBC support within the architecture and framework that I defined. However, a lot of that architecture and framework definition was also done with heavy AI support; I use the LLM for brainstorming and analysis.

The end result is all original, not regurgitated from anything, and disentangling the human and AI contributions is impossible. The core ideas are mine, but a lot of improvements came from Claude's suggestions, and some of those improvements are pretty deep. Claude also made a lot of stupid suggestions, which I obviously discarded. Nearly all of the actual code was generated by Claude rather than typed by me, but I've reviewed every line of it and told Claude to fix lots of things I didn't like. The result is definitely in my style and pretty much indistinguishable from something I would have written myself, except that it was produced several times faster than I'd have managed on my own.

As for creativity... I'd say my current project is one of the most deeply creative endeavors of my career, and that higher level of creativity is in large part because the LLM is writing the code, freeing me from tedium and allowing me to think harder about the architecture and design that is embodied in the code.

Comment Re:Is it true? (Score 1) 106

Is it true that AI code can't be copyrighted?

Pretty much the whole industry assumes that AI-written code is owned by the company that employs the engineer who was using the AI. I don't think this has been litigated, but if it were to go the way you suggest, it would create... problems.

Comment Re:AI Hype needs money (Score 1) 106

No way experienced developers are letting AI generate bug fixes or entirely new features using Slack to talk to AI on the way to work.

Depends on whether they can review the code and tests effectively first. I frequently push commits without ever typing a line of code myself: tell the LLM to write the test, and how to write it; check the test; tell the LLM how to tweak it if necessary; then tell the LLM to write the code and verify the test passes; check the code; tell the LLM what to fix; repeat until it's good; then tell the LLM to write the commit message (which I also review); then tell it to commit and push.

Actually "tell the LLM what to fix/tweak" is often not right. More often it's "Ask the LLM why it chose to do X". I find I program via the Socratic Method a lot these days. The LLM usually immediately recognizes what I'm getting at and fixes it -- most often not because the code was wrong but because the implementation was more complex than necessary, or duplicated code that should be factored out, or similar. Sometimes it provides a good explanation and I agree that the LLM got it right.

As an example from immediately before I started typing this comment: the LLM wrote some code that included a line like [[maybe_unused]] auto* ignored = ptr->release();. It had recognized that the linter would flag the unused return value (which it named "ignored" to make clear to readers that ignoring it was intentional) and inserted the annotation to suppress the warning. This was all unnecessarily complex, made necessary by the fact that it had earlier used get() to grab the raw pointer value before checking it, and then (right after the release()) stuffed the saved pointer into another smart pointer object to return. The release() call was needed to keep the first smart pointer from deleting the pointed-at object. I typed "Why not move the pointer directly from release() to the new smart pointer?" The LLM said "Oh, that would be cleaner, and then I could get rid of the temporaries entirely" and reorganized the code that way. That's a trivial code-structure example, of course, but the pattern often holds with deeper bugs, including cases where my question makes the LLM realize that its whole approach (which is often what I suggested) was wrong, and it goes into planning mode to develop a correct strategy.
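To make that concrete, here's a minimal reconstruction of the before/after. This is a sketch, not the actual code: Widget, take_before, and take_after are made-up names, since the real types weren't given.

    #include <memory>

    struct Widget {};

    // Before (as described above): grab the raw pointer with get(), check it,
    // suppress the linter on release(), then re-wrap the saved raw pointer.
    std::shared_ptr<Widget> take_before(std::unique_ptr<Widget>& owner) {
        Widget* raw = owner.get();
        if (raw == nullptr) return nullptr;
        [[maybe_unused]] auto* ignored = owner.release();  // stop owner from deleting it
        return std::shared_ptr<Widget>(raw);
    }

    // After the Socratic nudge: feed release() straight into the new smart
    // pointer; no raw-pointer temporary, no annotation needed.
    std::shared_ptr<Widget> take_after(std::unique_ptr<Widget>& owner) {
        if (!owner) return nullptr;
        return std::shared_ptr<Widget>(owner.release());
    }

The "after" version is exactly the "move the pointer directly from release()" suggestion: the ownership handoff happens in one expression, so there's nothing left for the linter to complain about.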

There are exceptions, of course. Sometimes the LLM seems to be incredibly obtuse and after a couple of prompts I click "stop" and type what I want, at least enough that I can then tell the LLM "See what I did? That's what I mean."

"Writing code" with AI assistance is mostly reviewing code and you can often do that on a small screen and without a keyboard.
