
Comment Re:All according to plan. (Score 1) 209

Yeah but I have to drive 1000 miles up hill (both ways) every day for work in temperatures where lithium itself freezes, and I only pee on Sundays.

I don't need 1000 miles. 600 (unencumbered) is definitely sufficient, and 500 might be okay. The thing is that I'll lose half to 2/3 of that range when towing my camp trailer, and that's not even considering that I'm typically towing it up into the mountains, gaining ~5000 vertical feet. I also need minimum 12k pounds of towing capacity and I'd like a little headroom, so call it 16k, and the bed payload has to be able to take at least 2000 pounds, because that's how much the trailer puts on the fifth-wheel hitch.
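As a back-of-the-envelope check, the range math above can be sketched like this (the rated range is the figure from this comment; the half-to-two-thirds towing loss is my rough estimate, not a measured number):

```python
# Rough towing-range estimate using the numbers in the comment above.
rated_range_mi = 600  # unencumbered range considered "definitely sufficient"

for loss in (0.5, 2 / 3):  # assumed fraction of range lost when towing the fifth-wheel
    towing_range = rated_range_mi * (1 - loss)
    print(f"{loss:.0%} loss -> {towing_range:.0f} mi while towing")
```

So a 600-mile truck becomes a 200- to 300-mile truck with the trailer on, before accounting for the climb into the mountains.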

I'm anxiously awaiting an EV pickup that can do this. I'd love to have essentially unlimited electricity to buffer cloudy days (I have 1 kW of solar panels on the trailer, and on sunny days they generate far more than enough, but consecutive cloudy days can leave me short).
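To illustrate why consecutive cloudy days are the pain point, here's a tiny energy-budget sketch (the array size is from this comment; the sun-hour and load figures are illustrative assumptions, not measurements from my trailer):

```python
# Daily net energy for a 1 kW trailer array under assumed conditions.
panel_kw = 1.0                             # array size from the comment
sun_hours = {"sunny": 5.0, "cloudy": 1.0}  # assumed equivalent full-sun hours/day
daily_load_kwh = 2.0                       # assumed camp-trailer consumption

for weather, hours in sun_hours.items():
    net_kwh = panel_kw * hours - daily_load_kwh
    print(f"{weather}: {net_kwh:+.1f} kWh/day")
```

A sunny day banks a surplus, but each cloudy day draws the battery down, so a run of them drains whatever buffer you have.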

3/4 ton and 1-ton gas and diesel pickups typically have oversized fuel tanks that provide about 600 miles of range, because that's what you actually need when you start hauling or towing significant loads. I don't think an EV pickup needs to have more range, but it needs to be comparable, and to be able to tow and haul comparable loads.

I'm not anti-EV by any means. I bought my first EV in 2011, and have had electric cars ever since. Trucks are a different sort of problem, though.

Comment Re:All according to plan. (Score 1) 209

Oh, I think the Silverado EVs are adequate. A 480+ mile range in the best conditions still puts it well beyond my bladder's ability to drive, even in the absolute worst conditions of towing plus cold weather. That thing will still get 200-ish miles of towing range in cold weather.

That's getting there, though I'd like to see some driving tests with a good-sized fifth wheel at highway speeds. The towing capacity is probably okay, though it provides very little headroom for when I'm towing both my camp trailer (~8k) and my boat (~3.5k), which I actually do several times each summer. But I think the payload capacity is too small to tow the trailer, which puts about 2000 pounds on the truck.

Comment Re:umm (Score 5, Insightful) 61

Actually, if anything he's saying his software package is so crappy that the tool *should* have found issues. He considers its failure to find issues not a testament to how awesome his software package is but to how lacking the tool is.

I've seen a few times where the curl developer has stood up to some asinine thing that most projects just roll with and I've appreciated his perspective each time.

His finding is consistent with another analysis I saw: Mythos was not good at finding issues at all. The one thing they could claim was that while other models found more issues, Mythos was able to craft a demonstrator to actually exploit the weakness, rather than just identifying the issue.

Comment Re:Programming graffiti (Score 1) 26

Getting their name on a project they are a fan of is a big factor

Also, they *truly* think they can *finally* be useful without having to actually understand things. They think a pull request will be accepted more readily than their feature requests or behavior-change suggestions ever got attention. They think their willingness to let CodeGen go nuts is a differentiator over the developers who might even be using the same CodeGen tools, but with more care. If nothing else, they see tokens as almost like currency, so by generating the code they save the developers from spending tokens.

So they mean well enough, but they flood folks with slop because they were never in a position to contribute anything but slop, and now CodeGen lets them produce it at scale.

It's just like AI art, which is generally slop for lack of any artistic vision to start with. If they had sat down to actually draw the art, it would still have been slop; the tooling just accelerates the process.

Comment Re:aaaand now I'm curious (Score 1) 26

Well, GitHub seems to be throwing a fit, but I can say from my experience:

- Uselessly verbose. One AI pull request I was asked to review aimed to make a minor adjustment to the layout of a single web UI element. The pull request was hundreds of lines of CSS, because the LLM just started firing the random-bullshit-CSS cannon, often repeating itself, probably because the operator kept saying "no, it's still messed up" until by some miracle the one change he wanted finally appeared, alongside a whole bunch of other crap with side effects the operator didn't bother to find out about. When you got to the heart of what they *actually* wanted, it was a single-line, one-minute CSS tweak.

- Missing the glaringly obvious. Had a pull request seeking to adjust behavior to be compatible with newer things in the ecosystem. Okay, great, but those adjustments had already been made and released a year ago; the operator had a stale container they had never updated. At no point in the clone/pull/modify/pull-request flow did the AI stop and say "oh, it appears equivalent changes have already been made"; instead it submitted different changes that were actually functionally broken.

- Operators tend to fire off tons of requests to many projects. A relatively low-traffic project I work with, which might see a pull request every couple of months, woke up to 50 pull requests from one guy, opened over the course of an hour. The operator had pointed the LLM at the issue tracker (which admittedly had poor issue hygiene, with issues still open that should have been closed) and said "make a pull request per issue to fix everything." One example was a 15-year-old issue asking to support Python 2.4; the project has since moved to require Python 3.9, but the LLM still submitted patches for the specific Python 2.4 incompatibilities cited in the issue, despite the result being ugly and useless since so much more of the codebase is Python 3 only. Several issues that had been fixed but never closed in the tracker each got a pull request to 'fix' them.

- Fixing issues that weren't issues. They pull a project, ask the LLM to do a code review, and then submit pull requests based on whatever the LLM represents as needing changes.

So tons of volume, useless changes, changes with side effects...

The main issue is that CodeGen enthusiasts who were formerly intimidated by code syntax and toolchains think they can finally make an impact. The problem is that code syntax and toolchains are the least of the challenges of good software. CodeGen can significantly mitigate that tedium, but now you have to contend with the people who were formerly filtered out by the intimidation.

Comment Re:All according to plan. (Score 1) 209

Agreed. My sedan has been electric for nearly a decade now, but I'm still driving a diesel pickup (1-ton, though a 3/4 ton would be sufficient) because EV pickup range is inadequate -- and I think it may be inadequate for a while. I need 250 miles of range when towing a trailer, which means I need ~500 -- maybe 600 -- miles of range without.

I'm not generally a fan of hybrids, but I think plug-in hybrids with large-ish batteries may be the sweet spot for a while with pickups. The Dodge Ramcharger is looking really good to me, though I'd like to see them make a 2500.

Comment Re:META is doing this to make them quit (Score 1) 91

That's actually a smart strategy.

It is effective at reducing staff cheaply, but it has a huge downside, shared with most attrition-based schemes for reducing payroll: The best employees are also the ones who find it the easiest to leave. The worst employees are also the ones who will grit their teeth and hold on to the bitter end.

It's harder and more costly (in the short term) to do targeted layoffs, which let the company cut low performers, or those who are low performers relative to their cost. It's the better choice, though.

But I wonder how many employees will quit in today's job market.

Lots of the top performers will.

Comment Re:not really surpising. (Score 3, Insightful) 43

The problem is that these things may have a whiff of influence, but they manifest as superstitions that GenAI users *swear* by.

I remember someone swearing up and down that he solved the 'hallucination' problem by putting "Be sure not to hallucinate" in his prompts.

It's similar here: say "secure" to a human and they'll have all sorts of potentially weird ideas, and you can't count on it being implemented correctly. Let alone an LLM.
