Comment Vibe coding is an intermediate step that will die (Score 1) 10

I don't think vibe coding is going to last long as a thing, because it's just an intermediate step toward telling the AI what you want and having it do that. Right now, people tell the LLM to write code to accomplish a thing, run the code to see how it works, then tell the LLM to refine it -- but that's a lot of unnecessary extra steps. I'm sure that in the not-too-distant future people will simply tell the LLM what they want to do. That may require creating a custom user interface to make interaction convenient, or creating databases, or performing network queries, or whatever -- and the LLM will understand what they want, and do it.

In that future, it's possible that the LLM may generate code to implement the requested functionality, but if it does so that will be a compute-saving shortcut, essentially a way to cache the LLM's work and be able to repeat it with less effort. There won't be any need to show any of the code to the user, or even tell the user that the LLM chose to generate some code.

As an aside, the whole notion of learning "prompt engineering" is another intermediate step that will die. The whole point of natural-language-capable AI is that it can understand what humans want when we express ourselves as we would to other humans. As LLMs get more capable, it will become less necessary to treat them as anything other than an entity fully capable of understanding and acting on human communication.

Comment Re:Very quick code reviews (Score 1) 35

At my company we don't have any dedicated Rust programmers. We all have to learn it (eventually). So passing a review off to a Rust developer or dedicated team isn't an option for us.

One of the things Android did very right with the Rust transition was to set up a small team of people who were entirely focused on Rust support. It wasn't a large team, only 2-6 people (it varied over time) out of approximately 1500 engineers. Having that core team who either were or became deep Rust language and toolchain experts was critical to smoothing the path for everyone else. It provided a group that had the knowledge and bandwidth to solve the problems that inevitably came up, as well as to offer advice and code review support to the early adopters.

That group no longer provides code reviews and design advice because Rust knowledge is now widespread enough that teams have their own, homegrown, Rust experts (not people designated as Rust experts, just engineers who became enthusiastic and dived deep), but the group still exists to resolve complex technical problems with language integration and to work on improving tooling and performance.

I think any shop adopting Rust (or any new language or complex tool) needs to have some people who become deeply expert in it and are allowed the time and freedom to support others who are picking it up.

C++ reviews go quick for us because we have 20 years of it in our code base.

So does Android. Google has been a primarily-C++ shop since its inception and although I'm not sure if Android had a lot of C++ in it when Google bought Android in 2005, it definitely became a C++-based system as soon as that happened.

And our changes tend to either be a tiny increment at the core. Or a massive dump of support for a new feature or chip that not every reviewer is familiar with.

The highly-segmented architecture of Android really helped facilitate the transition. Most of Android is structured as a web of collaborating services that communicate through a common language-independent [*] IPC mechanism (binder). Implementing Rust binder IDL generation and support libraries was a moderately big job, but once that was done it was easy to begin writing new system components (or replacing existing system components) in pure Rust, generally without any unsafe blocks at all.
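To make the "pure Rust, no unsafe" point concrete, here is a toy sketch of the shape such a service takes. The trait, service, and error-type names are all illustrative stand-ins, not the real AIDL-generated Rust (which produces binder proxies and stubs via the `binder` crate); this just shows that an IDL interface maps to an ordinary safe Rust trait.

```rust
// Stand-in for binder::Result; the real generated code uses binder's
// own status/error types.
type BinderResult<T> = Result<T, String>;

// Roughly what an IDL interface like
//   interface IEcho { String echo(String msg); }
// maps to on the Rust side:
trait IEcho {
    fn echo(&self, msg: &str) -> BinderResult<String>;
}

// A pure-Rust service implementation -- no unsafe blocks anywhere.
struct EchoService;

impl IEcho for EchoService {
    fn echo(&self, msg: &str) -> BinderResult<String> {
        Ok(format!("echo: {}", msg))
    }
}

fn main() {
    let svc = EchoService;
    let reply = svc.echo("hello").unwrap();
    assert_eq!(reply, "echo: hello");
    println!("{}", reply);
}
```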

If your code runs as a monolithic process, or has a lot of different IPC mechanisms, or uses a lot of existing libraries, the transition will be a lot harder, and the benefits will come more slowly. You'll have to wrap a lot of C interfaces in Rust -- and they will have to be C, not C++, since there isn't a good way for Rust to interoperate directly with C++. People are working on that, but it's a very hard problem, and at present the best option is to layer a C interface on top of your C++ code, then wrap a Rust interface around the C interface. Yuck. Alternatively, insert some other language-agnostic boundary between them.
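The layering described above (C++ under a C shim under a safe Rust wrapper) looks roughly like this. To keep the example self-contained, the "C shim" is simulated by a Rust function with C ABI; in real code it would be an `extern "C"` function in a .cpp file forwarding to the C++ class, declared on the Rust side in an `extern "C"` block (typically generated by bindgen). All names here are made up.

```rust
// The C shim layer. In reality this would be compiled from C/C++,
// e.g.  extern "C" uint32_t widget_count(uint32_t len) {
//           return Widget(len).count();  // calls into C++
//       }
pub extern "C" fn widget_count(len: u32) -> u32 {
    len * 2 // stand-in for whatever the C++ implementation computes
}

// The safe, idiomatic Rust wrapper around the C interface. With a real
// foreign function this body would be `unsafe { widget_count(len) }`,
// with the wrapper upholding whatever invariants the C API requires.
pub fn count(len: u32) -> u32 {
    widget_count(len)
}

fn main() {
    assert_eq!(count(21), 42);
    println!("count(21) = {}", count(21));
}
```

The value of the wrapper layer is that all the FFI-related `unsafe` is confined to one small module, and the rest of the Rust code sees only a safe API.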

So in a lot of ways Android got lucky because of its modular architecture and single, language-agnostic IPC mechanism. OTOH, that wasn't really "luck", it was a lot of work, done for good reasons, one of which was cross-language compatibility, notably between Java and C++.

[*] Language-independent-ish, maybe I should say. The binder IDL is decidedly Java-based, but it maps fairly nicely onto OO languages that support common primitive types (int, char, enum), basic composite types (array, vector, class/struct, and string -- which is just a vector, but used often enough to be worth treating as a first-class thing), and Java-like methods (fixed argument list, single return value). Further, it's based on "old" Java, before Java acquired functional extensions, when doing things like passing method references as arguments was uncommon, and therefore not supported. So it's moderately expressive but avoids things that get weird and complicated. My one big complaint is that I wish it supported unsigned integer types. That's my biggest gripe with Java, too.
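The missing-unsigned-types gripe has a concrete cost: an unsigned value has to travel over the wire as a signed one and be reinterpreted on the far side. A minimal sketch of that round-trip (the function names are invented for illustration; any Java-based IDL without unsigned types forces some variant of this):

```rust
// A u32 sent through an int-only IDL travels as an i32. The casts are
// plain two's-complement reinterpretation, so the bits survive intact
// even though large values look negative in transit.

fn to_wire(v: u32) -> i32 {
    v as i32 // bit-for-bit; values above i32::MAX come out negative
}

fn from_wire(w: i32) -> u32 {
    w as u32 // recovers the original bits on the receiving side
}

fn main() {
    let original: u32 = 0xDEAD_BEEF;
    let on_wire = to_wire(original);
    assert!(on_wire < 0);                     // looks negative in transit
    assert_eq!(from_wire(on_wire), original); // but round-trips exactly
    println!("{} -> {} -> {}", original, on_wire, from_wire(on_wire));
}
```

It works, but every such field needs casts (and a comment) at both ends, which is exactly the kind of friction a richer IDL would avoid.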

Comment Re: I'm so glad the government makes me safe. (Score 1) 53

The new law (according to the article) still allows the re-sale of tickets, but not for more than the original price.

Which is good, as I occasionally organise group outings to a play or similar. People pay me the cost of their ticket. I do sometimes profit, as some theatres will give me a free ticket if I buy more than 10 or so -- but that is not why I do it.

Comment Re:Oh, Such Greatness (Score 2) 131

Lincoln was a Free Soiler. He may have had a moral aversion to slavery, but it was secondary to his economic concerns. He believed that slavery could continue in the South but should not be extended into the western territories, primarily because it limited economic opportunities for white laborers, who would otherwise have to compete with enslaved workers.

From an economic perspective, he was right. The Southern slave system enriched a small aristocratic elite—roughly 5% of whites—while offering poor whites very limited upward mobility.

The politics of the era were far more complicated than the simplified narrative of a uniformly radical abolitionist North confronting a uniformly pro-secession South. This oversimplification is largely an artifact of neo-Confederate historical revisionism. In reality, the North was deeply racist by modern standards, support for Southern secession was far from universal, and many secession conventions were marked by severe democratic irregularities, including voter intimidation.

The current coalescence of anti-science attitudes and neo-Confederate interpretations of the Civil War is not accidental. Both reflect a willingness to supplant scholarship with narratives that are more “correct” ideologically. This tendency is universal—everyone does it to some degree—but in these cases, it is profoundly anti-intellectual: inconvenient evidence is simply ignored or dismissed. As in the antebellum South, this lack of critical thought is being exploited to entrench an economic elite. It keeps people focused on fears over vaccinations or immigrant labor while policies serving elite interests are quietly enacted.

Comment Re:Computers don't "feel" anything (Score 1) 52

It's different from humans in that human opinions, expertise and intelligence are rooted in their experience. Good or bad, and inconsistent as it is, it is far, far more stable than AI. If you've ever tried to work at a long running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough, and you will see the faults in his reasoning, sure, but it's just as good or bad as it was at the beginning.

Comment Re:Computers don't "feel" anything (Score 3, Informative) 52

Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychological terms, "confabulation" -- they patch up holes in their knowledge by making up plausible-sounding facts.

I have experimented with AI assistance for certain tasks, and find that generative AI absolutely passes the Turing test for short sessions -- if anything it's too good; too fast; too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.

This is because under the hood, what an LLM is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is that the context takes up lots of expensive, high-performance video RAM, and every user only gets so much of it. When you run out of space, the older stuff drops out of the context. This is why credibility drops the longer a session runs: you start with a nice empty context, bring in some internet search results, run them through the model, and it all makes sense. But once you start throwing out parts of the context, what remains turns into inconsistent mush.
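The eviction behavior described above can be modeled with a few lines. This is a deliberately toy sketch -- real systems evict whole conversation turns or summarize old context rather than dropping individual tokens -- but the effect on a long session is the same: the oldest material silently disappears first.

```rust
use std::collections::VecDeque;

// Toy model of a fixed-capacity context window with oldest-first eviction.
struct Context {
    max_tokens: usize,
    tokens: VecDeque<String>,
}

impl Context {
    fn new(max_tokens: usize) -> Self {
        Context { max_tokens, tokens: VecDeque::new() }
    }

    fn push(&mut self, token: &str) {
        if self.tokens.len() == self.max_tokens {
            self.tokens.pop_front(); // oldest context silently drops out
        }
        self.tokens.push_back(token.to_string());
    }
}

fn main() {
    let mut ctx = Context::new(3);
    for t in ["the", "user", "asked", "about", "rust"] {
        ctx.push(t);
    }
    // The earliest tokens are gone; only the most recent three remain.
    let remaining: Vec<&str> = ctx.tokens.iter().map(|s| s.as_str()).collect();
    assert_eq!(remaining, vec!["asked", "about", "rust"]);
    println!("{:?}", remaining);
}
```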

Comment Re:The price of wealth (Score 1) 83

Does a story like this make anybody else wonder if the lifestyle cost of wealth is too high?

The problem in this story is not the wealth, but its form. Cryptocurrency transactions are generally irreversible and not subject to the layers of process and protection that have been built up around large banking transactions. Keep your money in banks and brokerages like a sensible person and you don't have much risk.

Comment Re:Huh? Where? (Score 1) 60

No it's far from the most expensive option

Uh, yes, the 24-hour cancellation option is always the most expensive one for a given room (ignoring paying extra for add-ons like free breakfast or extra points). What other option would be more expensive? The one that gives the consumer the most flexibility is the one with the highest risk to the property, and that's priced in.

TFA postulates a scenario where the cancellations have disappeared.

Yeah, TFA overstated it. Though if you're not booking through the chain directly, in many cases it is hard to get a 24-hour cancellation policy. Many of the travel aggregator services hide them.

Comment Re:way more than some irrationality (Score 1) 55

The AI thing absolutely is a bubble, but it's not "sand-castle based or vapor based". It's very real. The problem is that the massive wave of investment is going to have to start generating returns within the next 3-4 years or else the financial deals that underpin it all will collapse. That doesn't mean the technology will disappear, it just means that the current investors will lose their shirts, other people will scoop up their assets at firesale prices, and those people will figure out how to deploy it effectively, and create trillions in economic value.

The problem is that the investors -- and lenders -- potentially losing their shirts include major international banks and pension funds, not just private shareholders. A recent J.P. Morgan analysis estimated that at least $650 billion in annual revenue will be required to deliver a mere 10% return on the projected AI spend, and banks like Deutsche Bank are already looking to hedge their lending exposure to AI-related projects.

If the AI bubble crashes hard, it could be a repeat of the 2007 global financial crisis.

Yep. That's all true even if AI is the most transformative technology ever invented, even if it generates trillions per year in economic output -- it might not do it soon enough to prevent another crash. You don't have to believe that AI is "sand-castle based or vapor based" (which it's really not) to see a big problem coming.

Comment Re:way more than some irrationality (Score 1) 55

Here is the thing, you are posting on Slashdot. Don't tell me you are not sharp enough to find a broker, and buy some long dated at the money PUTS either on the AI and AI adjacent firms or just the market over all with funds like SPY / QQQ.

The market can remain irrational longer than you can remain solvent.

The better strategy, IMO, is to keep your money safe and wait for the bubble to burst, then pile in for the recovery. Where to keep money safe is a good question, though. Just holding cash might be risky if inflation comes back, and the current administration seems anxious to pump up inflation.

Comment Re:way more than some irrationality (Score 1) 55

It is quite clear to everybody it is a bubble and a lot of the AI stuff is sand-castle based or vapor based... At least those of us understanding what the current crop of AI does

There's a pair of seriously bad assumptions underlying your analysis:

(1) What AI does right now is all it's ever going to do. Given the way capabilities have grown recently, this is a ludicrous assumption. Keep in mind that ChatGPT launched on November 30, 2022... it's less than three years old! And the reasoning models are barely a year old. There is no reason whatsoever to assume that this technology has peaked.

(2) We already know how to take full advantage of AI. Every time a new technology comes along it takes decades for us to fully understand how to effectively use it, and to deploy it everywhere it is useful. I'd say we still haven't fully incorporated the Internet into our society, and we've been working on that for over 30 years now. We're barely beginning to understand how to use what AI we've already got, and it'll take years, if not decades, for the full economic benefits to be achieved -- and in the meantime AI is probably going to continue improving.

The AI thing absolutely is a bubble, but it's not "sand-castle based or vapor based". It's very real. The problem is that the massive wave of investment is going to have to start generating returns within the next 3-4 years or else the financial deals that underpin it all will collapse. That doesn't mean the technology will disappear, it just means that the current investors will lose their shirts, other people will scoop up their assets at firesale prices, and those people will figure out how to deploy it effectively, and create trillions in economic value.

Well, assuming AI doesn't just kill us all.

Comment Re:Hardware will be fine (Score 3, Insightful) 55

This is a decent point, though one supposes the rush to build datacentres would slow further, so it won't all be gravy for the hardware companies either.

There comes a time where there has to be some actual utility for the software running on the hardware that is there however, because a significant amount of what it is being used for now quite often has zero, or negative utility itself. But it may mean some people are going to get access to compute power cheaper than they may have done previously once the realignment starts.

It's like the railroads. Enormous fortunes were made and then lost as the railroad boom played out and then the bubble burst. When people were driving hard to push rails across the continental US, the business case for doing so wasn't there. Yes, linking the east and west coasts had some value, but not much, since there really wasn't that much on the west coast. And there was a whole lot of nothing in between. But it was obvious to everyone that when the railroads connected the coasts and opened access to the interior, there would be enormous value. What exactly, no one knew, in the sense that no one knew where all of the railroad-enabled interior cities would be constructed or what kinds of things they would do. But it was clear that there was value in access to all of that land and that someone would do something with it.

On the other hand, realizing that value didn't happen right away. It took decades for all of the land granted to the railroads to become really valuable, because it wasn't valuable until people came and built farms, dug mines, established ranches and generally built lives and industry. The return on that massive investment was there... but it came far too late for most of the people that invested it. Lots of bankruptcies resulted, and others swooped in and snapped up the resources for bargain-basement prices, and they're the ones who became incredibly wealthy (well, they and the ones who supplied the steel, e.g. Carnegie).

It's been the same with pretty much every technology-driven bubble. Remember the telecom/dot-com bubble of the 90s, with all of the "dark fiber" that was laid everywhere? Bankruptcies and consolidations followed, and all of that fiber eventually got lit up and used. That bubble built the Internet, and huge fortunes were made as a result -- the top half-dozen most valuable companies on the planet are all a direct consequence.

OpenAI and Anthropic are betting that this time will be different, that the payoff will come fast enough to pay back the investment. Google is betting this somewhat, too, but Google has scale, diversity and resources to weather the bust -- and might be well-positioned to snap up the depreciated investments made by others. If history is any guide, OpenAI and Anthropic are wrong. But, then again, AI is fundamentally different from every other technology we've created.

Comment Re:Thanks for the research data (Score 1) 116

It also corresponds to a time when the US was a lot Whiter, but I'm pretty sure that's a "coincidence" you don't want to discuss.

Like most racists, your critical thinking skills (assuming you have them) are shoved aside by overwhelming confirmation bias. Otherwise, you'd have noticed that the US was also a lot whiter before the Pendleton Act, and that the post-Pendleton boom continued and even accelerated after the Civil Rights acts and a large influx of non-white immigrants. We became the world's sole superpower and continued increasing our economic, political and cultural dominance as a diverse, melting-pot society. The rise of China as an economic power (oh, wait... how is that, they're not white, how can they possibly do well?) has flummoxed us somewhat, but even with Trump beginning to throw away the apolitical civil service, our international partnerships and, well, the rule of law as a whole, we're still on top. But the decline is beginning, and it's not the brown-skinned immigrants who are taking us down, it's the white nationalist administration.

If you could discard your biases and examine the situation objectively and critically, you would notice that the timeline you're referring to completely and utterly refutes the conclusion that you're trying to draw.
