Comment Re:Probably a good thing (Score 1) 53

In a mirror network, the TLS cert is a weaker assurance than a signed payload from the presumably trusted originator. A mirror host means more exposure and more risk that the actual content gets modified, despite the mirror having 'a' valid certificate.

So the TLS assurance is redundant, less specific, and less rigorous than content signature validation from the originator. Further, unencrypted protocols are friendlier to things like proxying, which can dramatically reduce load on the internet hosts if proxies take on a good part of the burden from institutional clients that use an HTTP proxy to improve their performance.

So yes, if it's worth downloading, it's worth validating as best as possible, and for a lot of content the best possible is something like GPG validation, not TLS validation.
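To make that concrete, here's a minimal sketch of the end-to-end signature flow with a throwaway GnuPG key. The filenames and the "Demo Origin" UID are made up for illustration; in practice the originator's public key is distributed out of band and the mirror only ever handles the payload and its detached signature.

```shell
#!/bin/sh
set -e
# Isolated keyring so this demo can't touch the real one
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
# The originator generates a signing key (throwaway, no passphrase)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo Origin <origin@example.invalid>' default default never
# The originator signs the release artifact with a detached signature
echo 'release tarball contents' > payload.tar
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --output payload.tar.sig payload.tar
# A mirror can serve payload.tar and payload.tar.sig over plain HTTP;
# the client-side check below fails loudly if either file was tampered with
gpg --verify payload.tar.sig payload.tar && echo 'signature OK'
```

The point of the detached signature is that it travels with the file through any number of untrusted mirrors or caching proxies, and verification only depends on the originator's public key, not on who served the bytes.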

Comment Re:May not be their "best" devs after all then... (Score 1) 98

Yeah, I noticed that too: he felt the need to qualify 'best'. That sounds super odd; why would specifically only the best be able to claim that? I would have assumed it more likely for lower-level developers to claim that achievement. So if you have some people writing code, but only the 'best' not writing code....

Of course, I suspect it's like an executive I recently dealt with who felt that 'developers' were beneath him until they left that stupid 'coding' behind and became executives. So he would be very excited to declare a 'coder' who professed to not writing a line of code in months to be the best.

Comment Re:AI Hype needs money (Score 1) 98

Well, for a company like Spotify, the downside isn't so scary, because their software isn't exactly rocket science either. Their business is internet radio with on-demand capability. The technical piece is relatively basic enablement of that direction, it isn't difficult, and others can and have easily competed on technical design. See the three cited features: *super* easy-sounding stuff.

But I *have* seen a software sales guy fail to understand the point you just made. He was excited because, when a customer asked for some software, he thought his non-technical sales team members could just put that request into a Gemini prompt and then sell the customer the software the AI just generated. He completely failed to recognize that Gemini isn't some exclusive secret of his team, and if someone without skill on his team can do it, then the customer could just do it for themselves. For this vision to work even in theory, it has to be software for yourself, or software enabling your actual business objective, where the 'stickiness' comes from.

Comment Re:AI Hype needs money (Score 1) 98

Yeah, they speak to stupid shareholders, and to shareholders who might not be stupid but are willing to bank on the stupidity of others. Either way, money-wise, the money currently flows to the hype.

But very good point that this *should* be a double-edged sword. If their software can be constructed entirely by low-skilled use of an LLM, what prevents competition from just eating their lunch on the technical front? Of course, it is just Spotify, and it's not exactly a technical marvel to begin with; it's basically riding marketing and content rights, and the technical piece is pretty basic anyway.

Comment Re:AI Hype needs money (Score 1) 98

Very consistent with my experience. Sure, it can accelerate certain tasks, but it will blunder along the way.

Even if it gets something mostly right but I see a mistake, the advice is apparently to explain the mistake to it and let it try to fix it. For me, when I tried, that was very unreliable, and more work than just manually amending the code, especially since I know that correcting its mistake won't pay dividends, because it won't 'learn' from that interaction. I have similar experiences with people; it's sometimes a bit of work to explain things to them, but at least there I know they get better because of that direction. There was a person I just cut off because he never learned anything and kept making the same mistakes over and over again, and LLM usage feels like that. The LLM is better than that guy in one way: at least the mistake is right there in the proposed code, and I don't have to feel bad about failing to 'teach' when I go in and just fix it directly.

I note the weasel word 'best' in front of 'developers', so he has pretty much decided that, by definition, the 'best' developers are the ones exclusively working through AI.

Comment Re:So that's not really the problem (Score 1) 115

Well, there was the Steve Bannon proposal about identifying "risky" voting locations and deploying ICE to 'manage' the situation, and the White House refused to deny that course of action. More vaguely, Trump talked about the GOP "taking control" of voting locations more directly, though no one knows exactly what he meant. Around here they are shutting down a number of college campus polling locations and other Democratic hotspots to make it more difficult to vote. They are also trying to impair mail-in voting and reduce early voting.

Then there was what they pushed for in 2020, which Mike Pence wouldn't carry through on: just discarding the election results.

So a lot of that may fall under voter suppression, but it adds up to quite a bit that could hurt the elections.

Of course, even if they get a majority in the House, they still can't do *too* much about an executive branch that runs rogue unless the Republicans in the Senate work with them to do something about it. They can make the formal funding story more complicated, but an administration willing to ignore norms can do a lot...

Comment Re:So ... (Score 1) 115

Yes, the folks who murdered a US citizen, while instead saying it was related to an immigrant allegedly in some building nearby, even though neither the woman being pushed around nor the man who was shot was in their way....

Getting all pissy about being filmed and yelled at, while abusing and killing citizens instead of even trying to do the job they were allegedly there for.

I certainly wouldn't trust them with any equipment that requires a modicum of responsibility, judgment, and capability.

Comment Re:BYOB (Score 2) 36

Problem is, they happily do that, with noisy, polluting portable gas generators in big trailers.

Note that when Elon declared the only practical path forward was tens of thousands of Starship launches a year to get the datacenters built (which is stupid), his assertion was that the solar panels and launch logistics were easier than making more turbines for natural-gas generators. Again, a stupid stance, but it says they consider plopping down natural-gas generators the critical path.

Comment Re:Fun quote.. (Score 1) 160

To make that analogy work: you have someone who notices their toaster is running some novel SoC with a new instruction set, 'z36', that has never run Doom. So they go to the manufacturer's site, get the SDK, compile Doom with it, reflash their toaster, and show Doom running on it. Sure, a decent fun story in its own right.

Then they post that they "wrote a 3D game from scratch to run on a toaster".

Followed up a few days later by the guy's boss pointing to that blog post and declaring that Unity, Unreal, and PCs are all dead because this guy compiled Doom for z36.

Comment Re:Congratulations (Score 2) 160

Sure, they carefully spend weeks crafting the test cases with test data, then spend tens of thousands of dollars, and probably have to buy it a license of Word to use as a reference, comparing test execution on the LLM output versus the reference implementation...

Or they could have just stopped at buying Word....

The problem here is that this example leaned *heavily* on the desired software already existing, and on the LLM having access to run the original software as it endeavored to make a knock-off. And even then, per the analysis and the reported issues, it did a rather crappy, incomplete job...

The question is what happens if an LLM creates a knock-off and a human goes on to redistribute and maintain the work as a viable alternative... Does the original software vendor go nuclear because the LLM can't be considered to have done a 'clean room' reverse engineering?

Comment Re:Congratulations (Score 3, Informative) 160

Who is 'we'? I don't think the LLM is up to the task of making a C compiler that can target an architecture with at most 256k of RAM, without a reference C compiler to work with like it had here. The LLM basically got told to write a knock-off of gcc, and based on some of what happened, it absolutely needed a working gcc to work from to create the sort-of knock-off.

Comment Re:Congratulations (Score 2) 160

I mean, what next iteration?

I would also say that this seems the opposite of useful for no-longer-working hardware. If the hardware existed, then we still have a C compiler for it. And if you say you want to modernize that compiler, but the hardware is a dead platform, why are you trying to modernize for it anyway?

Let me extrapolate to the point that maybe you are talking about a compiler for some future architecture. Problem is this example needed:

- A human to carefully craft test cases and rescue the LLM when it spun out
- The ability to compare with an existing, working compiler's behavior... meaning you would have to write a compiler before you could have it write a compiler...
- The ability to re-execute over and over again after shuffling the code, meaning you need working hardware to actually run the tests repeatedly, and you need to know the *hardware* is good, because when a problem happens the LLM can't determine whether the hardware or the compiler is flawed; it's just too ambiguous when you have no basis of confidence in either one.
