Nadella..
Nadella personally stands to lose out on billions if anything bad happens to OpenAI. Seems like his testimony, especially on subjective, non-factual matters, should largely be ignored.
"the big businesses will be very ineffective, with them not having the right number of human workers."
So... same as always.
Actually, if anything he's saying his software package is so crappy that the tool *should* have found issues. He considers its failure to find issues not a testament to how awesome his software is, but to how lacking the tool is.
I've seen a few times where the curl developer has stood up to some asinine thing that most projects just roll with and I've appreciated his perspective each time.
His finding is consistent with another analysis I saw: Mythos was not good at finding issues at all. The one thing they could claim was that while other models found more issues, Mythos was able to craft a demonstrator to actually exploit the weakness, rather than just identifying the issue.
Not particularly the message for graduates of Art and Humanities...
Of the touted benefits of AI, trashing the arts and humanities is not exactly one most folks were asking for.
If industrial equipment is running a 37 year old cpu, it isn't getting new software.
Getting their name on a project they are a fan of is a big factor
Also, they *truly* think they can *finally* be useful without having to actually understand things. They think a code request will be accepted more readily than their feature requests or behavior-change suggestions ever were. They think their willingness to let CodeGen go nuts is a differentiator over the developers who might even be using the same CodeGen tools, but with more care. If nothing else, they see tokens as almost like currency, so by generating the code they save the developers from spending tokens.
So they mean well enough, but they flood folks with slop because they were never in a position before to do anything but slop, but CodeGen lets them realize their slop.
Just like AI art is generally slop for lack of any artistic vision to start with. If they had sat down to actually draw it, it still would have been slop; the tooling just accelerates the process.
Well, GitHub seems to be throwing a fit, but I can say from my experience:
- Uselessly verbose. One AI pull request I was asked to review was aiming to make a minor adjustment to the layout of a single web UI element. The pull request was hundreds of lines of CSS, because the LLM just started firing the random-bullshit CSS cannon, often repeating itself, probably because the operator kept saying "no, it's still messed up" until by some miracle the one change he wanted finally appeared, alongside a whole bunch of other crap with side effects that the operator didn't bother to find out about. When you got to the heart of what they *actually* wanted, it was a single-line, one-minute CSS tweak.
- Missing the glaringly obvious. Had a pull request seeking to adjust behavior to be compatible with newer things in the ecosystem. OK, great, but those adjustments had already been made and released a year ago; the operator had a stale container they had never updated. At no point in the clone/pull/mod/pull-request flow did the AI stop and say "oh, it appears equivalent changes have already been made"; instead it submitted redundant changes that were actually functionally broken.
- Operators tend to fire off just tons of requests to many projects. A relatively low-traffic project I work with, which might see a pull request every couple of months, woke up to 50 pull requests from one guy, opened over the course of an hour. The operator had pointed the tool at the issue tracker (which admittedly had poor issue hygiene, with issues left open that should have been closed) and said "make a pull request per issue to fix everything." One example was a 15-year-old issue asking the project to support Python 2.4. The project has since moved to require Python 3.9, but the LLM still submitted patches around the specific Python 2.4 incompatibilities cited in the issue, which was both ugly and useless since so much of the codebase is Python 3 only. Several issues that had been fixed but never updated in the tracker each got a pull request to 'fix' them.
- Fixing issues that weren't issues. They pull a project, ask an LLM to do a code review, and then submit pull requests based on whatever the LLM represents as needing changes.
So tons of volume, useless changes, changes with side effects...
The main issue is that CodeGen enthusiasts who were formerly intimidated by code syntax and toolchains think they can finally make an impact. The problem is that code syntax and toolchains are the least of the challenges of good software. So CodeGen can significantly mitigate the tedium of those items, but now you have to contend with people who were formerly filtered out by the intimidation.
Problem is that these things may have a whiff of influence, but they manifest as superstitions that GenAI users *swear* by.
I remember someone swearing up and down that he solved the 'hallucination' problem by putting "Be sure not to hallucinate" in his prompts.
Similar here: say "secure" to a human and they'll have all sorts of potentially weird ideas, and you can't count on it being implemented correctly. Let alone an LLM.
Problem for everyone is that mindset does not save cost/produce value.
Even when parts of the AI companies try to show utility honestly, they get drowned out by their own executives bulldozing the nuance aside and pretending it's just a magical replacement for software developers.
Now there will be a bunch of vibe coders that think adding "but make it secure" to the prompt fixes the issues...
Well, you just said flat out that the user pays $150/month in lieu of ISP + power.
Note that last month my ISP+Power was less than $100/month (thanks to solar offsetting it).
"One big reason the XFRA model works is that the average American home only uses about 40 percent of its electrical capacity,"
Yes, sure, at the individual level a house may average 40 percent, and the 200A is just peak demand and/or anomalous residences, but I guarantee the grid is *not* sized for everyone to continuously pull 200A.
When power demand spikes during prolonged weather events, you get rolling blackouts precisely because the grid is not sized to handle the load, even though *technically* everyone is operating within their individual 'capacity'.
Power grids are oversubscribed, and this concept pretends that they aren't.
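Back-of-the-envelope on the oversubscription point, with assumed but typical numbers (the 8 homes per 50 kVA pole-top transformer figure is my assumption, and I'm treating kVA as roughly kW):

```python
# Compare the nameplate capacity of residential service panels against a
# shared distribution transformer, using assumed typical values.
service_amps = 200               # standard 200 A residential service
service_volts = 240              # split-phase service voltage
homes_per_transformer = 8        # assumption: suburban homes sharing one transformer
transformer_kva = 50             # assumption: common pole-top transformer size

peak_kw_per_home = service_amps * service_volts / 1000          # 48.0 kW per panel
combined_panel_kw = peak_kw_per_home * homes_per_transformer    # 384.0 kW total

oversubscription = combined_panel_kw / transformer_kva
print(f"Each panel: {peak_kw_per_home} kW")
print(f"{homes_per_transformer} homes combined: {combined_panel_kw} kW")
print(f"Oversubscription vs {transformer_kva} kVA transformer: {oversubscription:.1f}x")
```

So even under these rough assumptions, the shared transformer covers only about an eighth of what every panel could nominally draw at once. The whole grid works because nobody expects everyone to pull their full capacity simultaneously, which is exactly what this scheme undermines.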
Per the article, the homeowner has to pay to have this unit at their house, but the monthly fee also covers ISP service.
So you don't get anything free, but they claim that, in theory, you'd get a lower monthly bill than ISP + utility.
More like the opposite.
They would want to divert resources away from your usage and into a locked enclosure that you would not have access to.
It would not upgrade your home infrastructure one bit. You even still have to pay them for the privilege, though they argue it will be less than you would have otherwise paid to an ISP and utility company.
I agree with you, but the big tech companies *seem* to be winning their arguments, even when the plaintiff shows output bearing the plaintiff's own watermark on something that looks like the plaintiff's assets.
So it's at least more pragmatic to show that the acquisition and likely redistribution of the works while torrenting were a problem, without even bringing up the whole AI-ingest argument.
"Summit meetings tend to be like panda matings. The expectations are always high, and the results usually disappointing." -- Robert Orben