Comment Re:More features I don't need. As usual. (Score 1) 107

Because not fixing what isn't broke causes your market share to crater.

...is what marketing drones keep telling us, but I sure haven't seen evidence that this is generally true for all kinds of products. And specifically for Firefox, I see no evidence for the claim that their cratering market share had anything to do with a lack of UI changes.

They are trying to "fix" their market share. They are failing, but doing nothing (despite how much that would please power users) doesn't help them either.

(emphasis mine)
What reason is there to believe that?

Comment Re:Oh, yeay. (Score 1) 115

Great for you that you've managed to match your lifestyle to your personality, but you should realize you're an extreme outlier. Exceedingly few people want no social interaction at all, and such a lifestyle would drive many/most humans literally insane.
E.g. I am very introverted - people drain me, and I have never managed to understand why someone would actively want to go to a party. Still, I need (occasional) face-to-face interaction with (small numbers of carefully chosen) humans to function properly. It took me a while to realize this, since most socialization seems so pointless to me.

Comment Re:Most interesting part ot TFA (Score 1) 155

The Linux kernel is, sadly, an outstanding example of this problem. Syzkaller has found so many thousands of memory vulnerabilities that not only is no one fixing them, hardly anyone is even bothering to look any more. Oh, there are plenty of researchers making a good living by finding and reporting vulnerabilities, because there are plenty of companies -- white hat and black hat -- who will pay for them. But it's a meaningless treadmill, because there are far more than can ever be fixed. The only thing sort of saving us at the moment is that attackers generally find it easier to mine CVEs than to get their own. Deploying kernel updates is slow enough that they can just rely on finding unpatched kernels with published exploits. This dynamic is also what has led to the demise of LTS, but that's a topic for another post. As we get better at patching, we'll eventually force attackers to do their own research, and then hope that we can reverse engineer what they're doing to figure out which of the myriad bugs to fix.

I find it hard to believe the situation is really as dismal as you're implying. The reason is, I simply do not see Linux systems being exploited en masse in the wild, which I'd think they would be if it really was this bad. Yes, theoretical exploits make the news every few months, but most of them require very specific conditions to trigger, and are patched quickly. Instead, most actual attacks I read about are "supply chain vulnerabilities", which is fancy speak for "we downloaded and ran some random code from somewhere", and Rust with its "modern package management practices" is far more vulnerable to it than C.

Besides, if there are "more vulnerabilities than can ever be fixed", just who is going to rewrite all this code in Rust? Rewriting a major codebase from scratch in a different language is a far larger project than fixing memory issues in it.

This situation is not going to improve as long as we continue building complex, critical infrastructure with memory-unsafe languages.

My experience speaks otherwise. I am not a kernel developer, but I do contribute to a large and widely used C codebase, which 15+ years ago would segfault if you looked at it funny. Things have improved dramatically since then, and the main cause was a change in developer mindset. It became culturally unacceptable to crash on invalid input, leak memory on errors, etc. People got sponsored to fix the existing crashes one by one, and though their number seemed endless at first, things gradually got better. Not perfect of course, but no software ever is. I do not believe that a hypothetical Rust rewrite would improve things all that much at this point, and the effort required would be at least an order of magnitude larger, probably more. Not to mention the potential community-destroying drama and myriad other issues.
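To give a flavour of that mindset change, here is a hypothetical minimal sketch (the function and its name are mine, not from the codebase in question): every input and every allocation is checked, and every error path returns cleanly instead of crashing or leaking.

    /* Hypothetical illustration of the "no crash on bad input, no leak on error" rule. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Read one line from fp and return a heap-allocated copy, or NULL on any failure. */
    char *read_name(FILE *fp)
    {
        char buf[256];
        char *copy;

        if (!fp || !fgets(buf, sizeof(buf), fp))
            return NULL;                    /* invalid input: fail gracefully, don't crash */

        buf[strcspn(buf, "\n")] = '\0';     /* strip the trailing newline, if any */

        copy = strdup(buf);
        if (!copy)
            return NULL;                    /* allocation failure handled, nothing leaked */

        return copy;                        /* caller owns the result and must free() it */
    }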

Another thing that bugs me is the implied equivalence between "memory-safe" and "Rust". The implicit argument runs along the lines of

  1. we want memory safety
  2. Rust is memory-safe
  3. therefore, all C code should be rewritten in Rust

but the conclusion does not follow at all. Rust is far from being "memory-safe C" (whatever such a thing may be); it is a far more complex and ambitious language. It is also far from the first or the only memory-safe language. So why is "rewrite everything in Rust" being presented as THE solution to all our ills? In fact, given the sheer size of the Linux kernel, and the amount of other critical C code out there, it seems far more viable to me to create that "memory-safe C" - a less ambitious language that would prevent memory errors and that existing codebases could easily be ported to. Of course that is nowhere near as glamorous, and so it will never be done.
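To make the idea a bit more concrete, here is a hypothetical sketch in plain C of the kind of bounds-checked access a "memory-safe C" would build in and enforce (today you have to write such a wrapper by hand and remember to use it):

    /* Hypothetical sketch: a fat pointer ("slice") with checked indexing,
     * i.e. the construct a memory-safe C dialect would make the default. */
    #include <stdio.h>
    #include <stdlib.h>

    struct slice {
        int    *data;
        size_t  len;
    };

    /* Checked read: fails loudly instead of silently reading out of bounds. */
    static int slice_get(struct slice s, size_t i)
    {
        if (i >= s.len) {
            fprintf(stderr, "out-of-bounds read: index %zu, length %zu\n", i, s.len);
            abort();
        }
        return s.data[i];
    }

    int main(void)
    {
        int backing[4] = { 1, 2, 3, 4 };
        struct slice s = { backing, 4 };

        printf("%d\n", slice_get(s, 3));   /* fine */
        printf("%d\n", slice_get(s, 4));   /* caught at run time instead of reading garbage */
        return 0;
    }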

I think you're mischaracterizing what happens here. In my experience, it's less that rewrites inject new design or logic bugs and more that they identify unknown requirements which were accidentally, or at least quietly, met by the old code. Often this is because of features that were added to the old code without anyone documenting them. Sometimes it's because the world around the old code has actually adapted to rely on unintentional features (which would perhaps have been called bugs if they'd been noticed).

My experience is similar, but I would characterize it differently. When you're a young, smart, ambitious rewriter of an old and hairy codebase, there is a strong temptation to see things in black and white. The old code sucks, and its authors were obviously stupid and clueless. You are smart and have better tools, and therefore can just discard the old cruft and Do Things Correctly. The typical result is that you miss the reasons the old code is designed the way it is, and either run into the same problems that were solved 25 years ago, or write something that fails to account for some less obvious but important use cases. Then you hack in fixes for those issues, and suddenly your nice clean new codebase looks just as hairy as the old one, except without the decades of testing and bugfixes. I've seen this scenario happen many times, a few of them to myself. The correct way to do it is to understand the old code in detail before rewriting it, but that requires a lot of effort few people are willing to spend.

Comment Re:Most interesting part ot TFA (Score 1) 155

Languages are just tools ...

Strongly agree, but my impression is that this is not how most of the Rust community sees things. Rust to them is not just a tool with superior characteristics; rather, it is the messiah sent from on high to save us from our mistakes. Rust does everything well and everything is improved by adding more Rust. Rust is the shining future, C is the dung-covered past, and everyone who "still" uses C is a senile fossil at best.

...but I don't understand why people are so averse to another language to the point where they actively attempt to prevent its adoption.

The short answer is: nobody likes being preached at.

I would consider myself a highly experienced C developer. I used to be interested in Rust a few years ago, but that interest has since faded and I don't plan to invest time in it until and unless substantially more sanity is reached in at least the following:

  • Language complexity: AFAIK, Rust is FAR more complex than C. Worse, it keeps getting more complex, with new language features constantly being added. Rust proponents almost never acknowledge that this actually has a cost.
  • Language stability: There is a single implementation and no real specification. Rust projects routinely depend on beta/nightly features of the compiler and of their dependencies. Again, this is never acknowledged as an actual problem; instead, the blame is shifted onto the crusty fossils operating with silly obsolete notions like stability (see e.g. the recent drama around the bcachefs tools). This alone is enough for me to conclude the language and its ecosystem are not mature enough for serious use.
  • Cult-like community: Rust is love, Rust is life. Rust can do no wrong. Everything should be rewritten in Rust. I've lost count of the number of projects whose ENTIRE shtick is "rewrite $foo in Rust", for no reason other than Rust's inherent superiority. Their typical lifecycle seems to be reaching the PoC/toy stage, after which the project is promptly abandoned, because shipping mature, reliable software is an obsolete paradigm incompatible with Rust's values.
  • Obsession with memory safety: Yes, memory safety would be a nice thing to have in C. However, its importance is VASTLY overstated by Rust evangelists. In my experience, memory issues are typically easy to locate, debug, and fix (a small sketch of why follows this list). Meanwhile, most really deep and annoying bugs are in logic and high-level design, and those can happen in any language. And one of the best ways to invent more such bugs is a needless rewrite by overconfident zealots.
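
To illustrate that last point, here is a contrived, hypothetical example of the kind of memory bug in question:

    /* toy.c - a contrived off-by-one overflow; hypothetical example, not from any real codebase. */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(8);        /* room for 7 characters plus the NUL terminator */
        if (!buf)
            return 1;
        strcpy(buf, "12345678");      /* writes 9 bytes into an 8-byte buffer */
        free(buf);
        return 0;
    }

Build it with gcc -g -fsanitize=address and the very first run aborts on the strcpy with a heap-buffer-overflow report naming the offending line. That is what makes this class of bug comparatively cheap to track down, unlike a subtle logic or design error.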

Comment Re:No. Obviously not. (Score 1) 61

>As soon as you have a more complex problem, doing the spec can already become infeasible due to complexity

To elaborate on this, writing software can be roughly decomposed into two parts:
1) Determining its specification*, i.e. how it should behave for every possible input.
2) Writing down said specification as code in some computer language.

Most lay people (and, to my continuing surprise, also many programmers) believe that software development is mainly about 2). In fact, for larger software, it is almost entirely about 1). And this is why automated correctness proofs are of very limited usefulness - most bugs are in the spec, not the code (a toy example follows below).

* This may be an actual specification written down as a document, or a bunch of notes, or comments in the source, or just ideas in the programmer's head. Sometimes barely any explicit thought is given to it at all - the resulting programs are usually buggy and write-only.
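A toy illustration of "bugs are in the spec" (hypothetical code, though the overflow is the classic binary-search midpoint bug): the function below matches the spec its author had in mind, and a proof against that spec would go through, yet the program is still wrong, because the spec never said anything about overflow.

    /* Hypothetical example: correct against a naive spec, wrong in the real world. */
    #include <limits.h>
    #include <stdio.h>

    /* Spec as written: "return the midpoint of lo and hi". */
    static int midpoint_naive(int lo, int hi)
    {
        return (lo + hi) / 2;          /* signed overflow (undefined behaviour) when lo + hi > INT_MAX */
    }

    /* The fix comes from a better spec, not from cleverer code. */
    static int midpoint_fixed(int lo, int hi)
    {
        return lo + (hi - lo) / 2;     /* assumes lo <= hi, which the spec should also state */
    }

    int main(void)
    {
        int lo = INT_MAX - 10, hi = INT_MAX;
        printf("naive: %d\n", midpoint_naive(lo, hi));   /* garbage, or worse */
        printf("fixed: %d\n", midpoint_fixed(lo, hi));   /* INT_MAX - 5 */
        return 0;
    }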

Comment Programming is not coding (Score 2) 158

All this "coding will be replaced by AI" FUD stems from a fundamental misunderstanding of what programming is, and typically comes with an unstated assumption that "code" (i.e. programming languages) is an elitist nerd invention designed to keep normal people out of a lucrative job.

That is, of course, utterly absurd. Software development is not about writing code. It's about converting a vaguely-stated real-world problem into an exact specification of a program solving that problem. The act of writing down that specification as code in some programming language is almost trivial in comparison, and this only becomes more true as the problem being solved gets more complex. The only "coders" that can be replaced by LLMs are those that weren't doing any creative work to begin with.

Comment Re:"AI" is advanced autocomplete (Score 1) 127

The problem is that this operation won't be reliable. E.g. if your module has global state (as is depressingly common), extracting it would require creative refactoring, which LLMs are fundamentally unable to do. So your LLM of choice will confidently give you some result, but there's no telling whether it actually does what you want, or how it behaves under uncommon conditions.
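For context, a hypothetical before/after sketch of the kind of refactoring meant: hidden file-scope state has to become an explicit context that callers own and pass around, which changes every signature and call site rather than being a mechanical cut-and-paste.

    /* Hypothetical sketch of extracting global state from a module. */

    /* Before: the module depends on hidden file-scope state shared by all callers. */
    static int counter;

    int next_id_old(void)
    {
        return ++counter;
    }

    /* After: the state becomes an explicit context, so every caller
     * and every call site has to change along with the function. */
    struct id_ctx {
        int counter;
    };

    int next_id(struct id_ctx *ctx)
    {
        return ++ctx->counter;
    }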

Comment "AI" is advanced autocomplete (Score 2) 127

It is a probabilistic language model - for a given prompt it generates the "most likely" text according to its training data. It has no actual understanding of the problem you are solving, or programming in general. It has no common sense and no reasoning abilities. I can only see it being useful for simple discrete problems someone has already solved, or for generating large amounts of boilerplate. But in the former case not only are you reinventing the wheel, you are also not learning anything. In the latter, if your tools require excessive boilerplate then you should consider different tools.

Comment Re:Sounds very unsafe... (Score 2) 119

(In)sanity is not a clear binary distinction; there is a large amount of arbitrariness involved in labeling someone as insane, depending on the person in question, the person performing the diagnosis, the societal context in which this happens, etc. Pretty much everyone has some amount of mental problems.

Psychedelics typically do not forcibly change your life; they merely give you the opportunity to look at yourself and the world from a different angle, one that is not easily accessible to most people otherwise. What you do with this knowledge is up to you. I believe most people (except for those susceptible to psychosis, epilepsy, etc.) would benefit from having at least one psychedelic experience in their life.
