Comment Re:click bait title? (Score 2) 68
The former, which is why TFS talks about space vehicles and "risky close approaches", rather than
I have no idea how good AI is right now for this task, but presumably existing off-the-shelf dedicated s/w for doing code inspections will use way less resources than an LLM.
Based on my experience, I am almost certain you are right: the static code analyzer / static application security testing tool that I have used professionally needs fewer resources than an LLM. But on the other hand, an LLM might catch things that the special purpose tool does not. The guy I interviewed said the race conditions escaped his static analyzer, and I've seen even a locally hosted mid-size (120B parameter) LLM flag cut-and-paste errors that an SCA tool might miss. (I did not run a dedicated analysis tool on the latter code, so I can't say for sure whether it would have missed the error, but my gut says it would have been missed.)
My question wasn't really meant to be about method chaining, but about the tedious boilerplate around testing error result values. There are answers to that too (https://stackoverflow.com/questions/18771569/avoid-checking-if-error-is-nil-repetition#18772200), but the general answer is still that you have to just suck it up and write N or N-1 "if err != nil { return ... }" blocks.
The question isn't "is AI ever useful", but rather "is it useful enough today for the specific use case?". That is what this guy is exploring. My gut feeling is that it isn't, but I don't have the experience to know for sure, and neither does anyone else.
Well, most of us don't have the specific experience to know whether AI is useful enough for Chris Mason's specific use case, but we have enough experience of our own to extrapolate.
I have found that AI is good enough for a first draft of code, or for providing comments on existing code -- but I want a human to review whatever it generates, and would expect the normal suite of other tools (linters, SAST/DAST, fuzzers, etc.) to pass the code before publishing it. I recently interviewed someone else with 25-ish years of professional experience developing software, and his assessment was that AI tools are very good at reviewing code as well. He said that the tool he used at his current job was able to diagnose some race conditions that he and a coworker both overlooked.
So I think the real questions are: What specific use cases can AI help you (specifically) with? How quickly is that set of use cases changing, and in what directions?
OK, please tell me how to avoid writing code like this:
a, err := foo()
if err != nil {
    return fmt.Errorf("foo failed: %w", err)
}
b, err := bar(a)
if err != nil {
    return fmt.Errorf("bar failed: %w", err)
}
(Ignore for the moment whether you're using a version of Go that rejects the second "err :=".)
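For what it's worth, one workaround in the spirit of the Stack Overflow answers is to thread the error through a small helper so each step collapses to one line. A sketch (foo and bar here are trivial stand-ins, not anyone's real code):

```go
package main

import "fmt"

// Stand-ins for the foo/bar calls in the snippet above.
func foo() (int, error)      { return 1, nil }
func bar(a int) (int, error) { return a + 1, nil }

// run threads the named return value err through a helper closure: each
// step is skipped once a previous one has failed, and the first failure
// is wrapped with its label, so the if-blocks appear exactly once.
func run() (b int, err error) {
	step := func(label string, f func() error) {
		if err != nil {
			return // an earlier step already failed
		}
		if e := f(); e != nil {
			err = fmt.Errorf("%s failed: %w", label, e)
		}
	}

	var a int
	step("foo", func() error { var e error; a, e = foo(); return e })
	step("bar", func() error { var e error; b, e = bar(a); return e })
	return b, err
}

func main() {
	b, err := run()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(b)
}
```

Whether the closure noise is actually an improvement over N bare if-blocks is a matter of taste; it mostly pays off when the chain is long.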
It's problematic for assurance because users cannot tell what undisclosed biases are baked into the weights -- and relying on someone else to do the bulk of the training means you cannot produce your own weights.
Even in classic open source, think of cases like an implementation of Dual_EC_DRBG, a malicious version of gotofail, or the xzutils backdoor. Those are all cases where the source code was available.
Define "vibe coding". I have an existing (small) code base to ingest and process XML, using a SAX interface for a couple of reasons. The input XML scheme embeds XHTML, but for the purposes of the app it's better to remove the namespace prefix from those tags than to keep it. I asked Claude to implement that. The logic to grab the right prefix and pass out to the worker function was fine. The changes to the "grabXhtml" worker function worked, but we're kind of crap. Overall, it saved me time even though I rewrote the core logic for clarity, brevity and performance.
I also recently asked Claude to either create a PDF to help me lay out a home gym, or recommend software to do that. It recommended some software but also wrote a little Python script to generate a PDF for me. The PDF took four iterations of prompting to get right -- the first version said Scale: 1" = 0.4". When I told it to fix that, the second one said Scale: 1" = 0". So it needed a lot of checking but still overall probably saved me time.
Those two things are worth (to me) most of what I paid for a year of service, so I think it's showing value even though there are major quality problems that mandate close supervision of its output.
The "thousands of antennas" are supposed to work around that, presumably by allowing much better beam-steering (narrower beams and attenuated sidelobes; the alternative is that they're fixed-beam antennas). The trade-off is that it requires a lot more processing power to steer the beams, even more processing if they're doing it in the digital domain (which they presumably are), and a tighter beam is less able to adapt to dynamics of the user -- managing a static velocity isn't hard, but variable acceleration is trickier and the atmosphere isn't uniform. Oh, and of course they need an ADC and/or DAC per antenna or per user. 5G channels at lower frequencies don't have super high bandwidths, so the converters don't need to be really high performance, but those add power as well.
You're not even wrong, to borrow a phrase from Wolfgang Pauli.
GPS and other GNSS signals don't use frequency hopping at all -- it would make it harder to acquire a signal and make ranging much less accurate. Except for GLONASS's legacy FDMA transmissions, they use direct-sequence spread spectrum with phase shift keying modulation (or variations on it), which is somewhat different from the spread spectrum technique that Hedy Lamarr co-invented. There are a lot of techniques for jamming GPS, which work different ways and require different transmitter power levels and sophistication. And a "bogus location" would imply spoofing, which GNSS people distinguish from jamming for a lot of reasons.
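The direct-sequence idea is easy to show in miniature: spread one data bit across many code chips, then recover it by correlating against the same code. This is a toy 8-chip code purely for illustration -- real C/A codes are 1023-chip Gold codes with carefully controlled cross-correlation:

```go
package main

import "fmt"

// correlate returns the inner product of two equal-length chip sequences.
func correlate(a, b []int) int {
	s := 0
	for i := range a {
		s += a[i] * b[i]
	}
	return s
}

func main() {
	prn := []int{1, -1, 1, 1, -1, -1, 1, -1} // toy spreading code, not a real C/A code
	data := -1                               // one BPSK data bit

	// Spreading: the single bit modulates every chip of the code.
	tx := make([]int, len(prn))
	for i, c := range prn {
		tx[i] = data * c
	}

	// Despreading: correlating with the matching code recovers the bit
	// sign at full amplitude (+/- code length).
	fmt.Println("matched correlation:", correlate(tx, prn)) // -8 → bit = -1

	// A different code correlates weakly, which is how many satellites
	// can share the same frequency (CDMA).
	other := []int{1, 1, -1, 1, 1, -1, -1, -1}
	fmt.Println("mismatched correlation:", correlate(tx, other)) // 0
}
```

The matched-vs-mismatched gap is the processing gain that lets a receiver pull one satellite's signal out from under all the others on the same carrier.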
Yes, this is a great idea. Then the company could work with Mastercard and Visa to set up a standard payment protocol so that anyone using a card from one of those issuers can pay at any retail location that supports the protocol. Maybe they could even call that protocol "EMV", with the "E" representing Europe.
Why hasn't anyone thought of this before?
on the same frequency. If you overpower the 4 watts of transmission power with 10 watts of noise, then the satellites will only see noise -- but why stop there? Hit it with 50 watts so that nothing gets through.
The first part isn't necessarily true: it depends on the waveform. A GPS satellite normally broadcasts with something like (depending on how you measure it) 20 W per channel. From a given user position and for L1, there are often a dozen other GPS satellites, and eight to ten Galileo satellites, broadcasting at basically the same power on the same frequency. Fewer GPS satellites broadcast on L5 but it's also crowded; other GNSSes don't broadcast at L2. A receiver can still track all of those satellites with low bit error rates, even though the signal-to-interference-plus-noise power ratio (SINR) is worse than your 4 W : 50 W. (The compromise is that the bit rate you get as throughput is really low compared to the bandwidth: for GPS L1 C/A, 50 bits per second via a 2+ MHz signal.)
But yes, jammers often go for brute power. The risk is that this makes them very easy to find, so anti-radiation missiles and similar weapons can home in on the jammers.
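The 50 bps over a 2+ MHz signal mentioned above is where the jamming margin comes from: despreading buys roughly the ratio of chip rate to bit rate, expressed in dB. A back-of-the-envelope sketch using the published L1 C/A numbers (1.023 Mchip/s, 50 bit/s):

```go
package main

import (
	"fmt"
	"math"
)

// processingGainDB is the classic DSSS processing-gain estimate:
// 10*log10(chipRate/bitRate). It's the extra interference margin the
// receiver recovers by correlating over a full data bit.
func processingGainDB(chipRate, bitRate float64) float64 {
	return 10 * math.Log10(chipRate / bitRate)
}

func main() {
	// GPS L1 C/A: 1.023 Mchip/s spreading code, 50 bit/s navigation data.
	g := processingGainDB(1.023e6, 50)
	fmt.Printf("GPS L1 C/A processing gain: %.1f dB\n", g)
}
```

That ~43 dB is why a receiver tolerates many equal-power satellites on one carrier, and also why a jammer needs so much more power than the signal itself before tracking actually breaks.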
There's a difference between "no retail presence" and "no prior relationship" -- at least for those of us paying attention. If someone claims to sell product X but relies on drop shipping without the consent of either the original seller or the end purchaser, that's a form of trademark infringement or even fraud: it's enriching the middleman by misrepresenting the source of the goods they sell.
Avionics can currently only be certified to use GPS (or maybe in theory GLONASS, but I don't think anyone outside of Russia's sphere of influence would do that). In the US, the FAA recognizes a bunch of RTCA guidance documents like DO-208 and DO-229, but those only cover GPS. DO-401 is new (the European equivalent, ED-259, was formally published one revision earlier) and allows use of multiple constellations, but is recognized in the industry as not ready to be certified against. The same is basically true for Europe and the Pacific Rim: they either recognize the RTCA DOs as applicable, or recognize the EUROCAE ED that is harmonized with the RTCA DO.
The jammers on L1/E1 probably affect both GPS and Galileo similarly (Galileo has slightly wider bandwidth on E1, but most of the energy is in the L1 C/A part). Until a year or so ago, most jammers and spoofers were single-frequency and GPS-only -- but new jammers and spoofers are multifrequency and multiconstellation, so even having DFMC avionics wouldn't be a universal fix now.
The long term solution is going to involve beamforming or similar active antenna techniques. Those are also still being standardized, and the Ukraine war is driving the state of the art for military CRPAs.
Name a specific problem and couch it in terms of memory bandwidth, memory capacity, and FLOPS -- then we can figure out whether GPUs are a good fit. Arm-waving about "data warehouse capabilities" doesn't cut it. Most of the time, the difficulty will be in getting the data you want into the database where you want to process it, rather than how fast the database can give you an answer.
For example, people don't tend to put databases on GPUs unless they do full scans of the data multiple times each second, because other use cases work just fine with CPU+RAM(+disk). Sometimes that means they have to wait a few minutes instead of seconds for an answer -- but they can do the math for how much that costs them compared to infrastructure. But if they don't have the data imported already, that's a lot of labor to clean and import it, and GPUs won't help there.
Even expensive GPUs are cheap compared to AI accelerator cards, and their profit margins are lower (in percent) as well. An H200 card reportedly goes for $30,000 and up, about 20 times as much as an RTX 5080. Nvidia's margin (in percent of sales price) is probably two or three times as much for the H200 -- so they need to sell many more GPUs to make the same profit. If AI demand went to near zero, Nvidia would still have a very healthy business, but their profits would drop enormously.