Comment Re:After destroying a launchpad? (Score 4, Insightful) 88

They have the best safety record by far.

I think that is pushing it.

If I'm counting correctly there have only been 9 crewed flights of Falcon 9/Dragon: 1 SpaceX demo, 6 operational flights for NASA and 2 private flights.

By contrast there were 135 crewed Space Shuttle flights, 2 of which resulted in loss of crew.

One accident would essentially take SpaceX's record on crewed flights from best to worst.

Now the total number of Falcon 9 launches is much higher, but those haven't been without incident: one blew up on the pad prior to launch, and one broke up in flight. Hopefully nothing similar will happen on a crewed launch, and if it does, hopefully the launch escape system will do its job.

Comment Re:It's in the Name (Score 1) 94

OTOH much of the point of a video game is to let you escape the drudgery of real life. People will tolerate an hour's drive in real life, not so much in a video game. You can put in fast uncongested highways and let the player drive like a maniac, but I'm still not convinced you can go much bigger than GTA V's map without it getting really annoying.

Comment Re:The only improvements to C (Score 2) 167

IMO there are multiple different categories of UB in C/C++.

Some UB just represents pig-headedness on the part of compiler writers and standards organisations. The optimisation benefits are massively outweighed by the fact that they make even the simplest operations (like addition, where signed overflow is UB) into potential footguns.

Some UB is arguable, stuff like the aliasing rules. On the one hand I'm not convinced the optimisation opportunities justify the extra mental load, on the other hand they relate to things (like pointer typecasts) that should be used sparingly anyway.

These first two categories could be gotten rid of quite easily if there were the will to do so. Indeed in gcc there are compiler flags (e.g. -fwrapv for signed overflow, -fno-strict-aliasing for the aliasing rules) to turn many of them into defined behaviour, for those who can be bothered to actually RTFM.

But there is a third category of UB: UB that represents fundamental consequences of the language and library design, and of low-level programming in general. This last category includes stuff like buffer overflows (and sometimes underflows), stale pointer dereferences, and double frees.

I don't think this third category can ever be done away with completely in low-level programming. The best you can hope for is to do what Rust tries to do and compartmentalize it, but I can't see how you can do even that without language and library changes radical enough that you may as well call it a new language.
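To illustrate what that compartmentalization looks like, here's a minimal sketch (a hypothetical function, standard library only): the operation that can trigger UB lives inside an unsafe block, and the safe wrapper around it is responsible for upholding the invariant.

```rust
/// Returns the first byte of a slice, or None if it is empty.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: we just checked the slice is non-empty, so index 0 is
        // in bounds. The buffer-overflow hazard is confined to this block.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"hello"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```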

A particular problem is the combination of sharing and mutability. Let's say, for example, I have a type that represents a string: it has a pointer to the string data, a length and a capacity. I read the pointer and start a loop of read character, process character, increment pointer, until I reach the end of the string.

However, while that code is working through the string, some other piece of code comes along: maybe another thread, maybe an interrupt handler, maybe a callback function. This other piece of code appends a character to the string, triggering reallocation of the string data. Suddenly the first piece of code has a stale pointer.

Rust gets around this by forbidding shared mutability in safe code except under very specific circumstances, but I really don't see how you can solve this problem for an existing language and library ecosystem.
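To make that concrete, here's roughly the scenario above in Rust terms (a sketch; the commented-out line is the one the borrow checker rejects):

```rust
fn main() {
    let mut s = String::from("hello");

    // The loop borrows the string data immutably for its duration...
    for c in s.chars() {
        // ...so appending here, which could reallocate the buffer and
        // leave the iterator holding a stale pointer, fails to compile:
        // s.push('!'); // error[E0502]: cannot borrow `s` as mutable
        //              // because it is also borrowed as immutable
        println!("{c}");
    }

    s.push('!'); // fine once the shared borrow has ended
}
```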

Comment Re: Google did a study about this (Score 1) 220

if all your members know how to move and there aren't any invariants you need to maintain between them, the compiler does the right thing by default.

Indeed, C++ can generate a move constructor that calls the move/copy constructors of your component types, but that doesn't change the fact that a C++ "move" is fundamentally different from a Rust "move".

A Rust "move" is an operation that is intrinsic to the language: the content of the memory representing the object is transferred, and the old location is treated as no longer containing a valid object.

This has a number of implications (there's a sketch after the list).

1. Since the operation is intrinsic to the language and does not involve calling any library code, the caller can "move" the object into a register and the callee can "move" it back out. That means that if the type is sufficiently small, it can be passed in registers rather than being forced to live on the stack.
2. It's generally safe to assume that a move is "cheap".
3. "Smart pointers" don't need to have a null state.
4. Moves cannot panic (a panic being the Rust equivalent of throwing an exception).
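Nothing below is from the parent post, just a minimal sketch of what a Rust move looks like in practice:

```rust
fn main() {
    let a = String::from("hello"); // String manages a heap buffer, so it is not Copy
    let b = a;                     // "move": the handle's bytes are copied and `a`
                                   // is no longer treated as a valid object

    // println!("{a}"); // error[E0382]: borrow of moved value: `a`
    println!("{b}");    // `b` is now the sole owner; no null state involved

    let x = 42_i32; // i32 is trivially copyable, so assignment is a plain copy
    let y = x;
    println!("{x} {y}"); // both still valid
}
```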

the things it's enforcing are things a good C++ programmer would be doing anyway.

I'd disagree strongly with that: the restrictions safe Rust imposes to make analysis tractable are far stricter than the restrictions most C++ codebases self-impose. Indeed I would say this is one of the issues Rust has in gaining traction.

Comment Re:Google did a study about this (Score 3, Interesting) 220

Implicit copy is the default because that is exactly what computer hardware does. When I move the value at address A to address B, the value is still at A.

Indeed, and that line of thinking works fine as long as your variables are aggregates of dumb values and unmanaged pointers. In fact, for "trivially copyable" types even Rust is copy by default.

Once you introduce managed pointers, though, you have a problem. If you make a copy of a type containing managed pointers and then run the destructor on both copies, you have a double-free bug. Similarly, if you overwrite the contents of such a type without freeing the old content first, you have a memory leak.

And it is for such "managed" types that Rust and C++ differ. In C++ a simple assignment can turn into a call to an arbitrarily complex copy constructor. In the case of collection types containing a managed type, that copy constructor will in turn call the copy constructors for all the types contained within.

C++11 introduced the concept of "moves", but aside from not being the default, they are also required to leave the source object in a "valid but otherwise unspecified" state. This means that each type needs its own custom move constructor, which in practice means the types cannot be passed in registers but must be passed on the stack. It also means that your smart pointer types have to support a "null" state.

Rust takes a different approach: the compiler keeps track of whether a given address contains a "valid" object or not. At an instruction level a move is just a copy, but the key difference is that after the move the compiler no longer treats the source as containing a valid object. So drop (Rust's equivalent of a destructor) will only be called once.
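A small sketch of that in practice (plain Rust; the type is invented for illustration):

```rust
struct Managed(String); // stands in for any type owning a resource

impl Drop for Managed {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn consume(m: Managed) {
    println!("consumed {}", m.0);
} // `m` is still valid here, so drop runs here, exactly once

fn main() {
    let m = Managed(String::from("resource"));
    consume(m); // moved: the compiler stops treating `m` as valid,
                // so no second drop happens at the end of main
}
```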

Comment Re: Just that one aspect (Score 3, Interesting) 220

You are not obligated to use C++ as if it were C. That the language allows you to do unsafe things does not mean that you should.

The problem is that even "modern C++" is still very much an "unsafe by default" programming environment.

1. You can use your fancy smart pointers to track memory ownership, but to actually do anything with the objects they point at you have to convert them, explicitly or implicitly (usually through the operator-> overload), to raw pointers or references. Once you do so there is nothing to guarantee that the smart pointer is not modified during the lifetime of the raw pointer or reference.
2. With the standard collection types, the pleasant-to-use access methods (operator[] rather than at()) are the ones that do not have bounds checks.
3. There is nothing to prevent inadvertent passing of a non-thread-safe type across a thread boundary. (For contrast, Rust's approach to this is sketched below.)
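On point 3, here's a sketch of how Rust turns that into a compile error: std::thread::spawn requires the closure to be Send, Rc (the non-thread-safe refcounted pointer) isn't, while Arc (its atomic sibling) is.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(vec![1, 2, 3]); // Arc: atomic refcount, so it is Send
    let handle = thread::spawn(move || println!("{:?}", shared));
    handle.join().unwrap();

    let local = Rc::new(vec![1, 2, 3]); // Rc: plain refcount, so it is not Send
    // thread::spawn(move || println!("{:?}", local));
    // ^ error[E0277]: `Rc<Vec<i32>>` cannot be sent between threads safely
    println!("{:?}", local);
}
```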

Comment Re:Just nonsense (Score 1) 184

There are different revisions of the USB Power Delivery spec. AFAIK revision 1 was for USB A/B connectors, revision 2 supports both the legacy A/B and the new C, while revisions 3.0 and 3.1 are for USB C connectors. To make things even more confusing, each of the revisions seems to have versions; USB-IF are awful at naming things.

USB-IF seem to put both revision 2.0 and revision 3.1 in the same zipfile ( https://www.usb.org/sites/defa... ). I have skimmed the specs at various times but not read them in detail.

My understanding is that PD on USB A/B was implemented using communication superimposed on VBUS. To protect against damage caused by non-compliant cables, the plugs on PD-compliant cables were required to be "marked"; this marking was mechanical for the standard A/B plugs and electronic for the micro A/B plugs.

In practice though I don't think USB PD on A/B connectors was ever widely implemented. I suspect the requirement for special cables and connectors put manufacturers off.

Comment Re:So basically C/C++ and some apps written in tho (Score 1) 54

I disagree.

C is just painful to work in. I don't want to have to worry about memory management and buffer overflows every time I manipulate a bloody string. The things the C++ standard library does better than the C standard library are possible *because* of the extra language features C++ has.

Most of the newer languages that compile to native code have no shared library story to speak of: you are expected to build everything into your binaries statically and/or rebuild everything on a new compiler release. The C++ shared library story is far from perfect, but at least C++ has one.

C++ has many problems, but I don't think the question of whether those are bigger or smaller problems than other languages have is an easy one to answer.

Comment Re:Well macs don't last as long as PCs, so of cour (Score 1) 99

That wasn't a hardware limitation though,

The 4GB address space limit pops up in various places.

* It's a limitation of CPUs without PAE (or with PAE disabled). In practice CPU support is a non-issue, as CPUs have supported PAE for years.
* It's a limitation of 32-bit desktop Windows since XP SP2, even with PAE enabled. In XP SP2 this limit was hardcoded into the kernel and infeasible to disable. In Vista and Win7 it was implemented through the licensing system and can be bypassed. I have no idea what the situation is in Win10.
* It's a limitation of some chipsets. IIRC Intel lifted the limit in their laptop chipsets one generation after the one in my 15-year-old MacBook; I presume mainstream desktop chipsets lifted it a bit earlier, and workstation/server/high-end desktop chipsets much earlier.

How exactly the 4GB address space limit translated into usable RAM was a bit variable: sometimes it could be as high as 3.75GB, and I heard of some systems where it was as low as 2.5GB.

it just occurred to me that the 4GB limit was becoming an issue 20 years ago

That is certainly true in the high-end workstation/server space; Intel introduced PAE as far back as 1995.

Comment Re:This is not the job of a payment processor (Score 1) 151

Here in the UK the problem is that the banking industry doesn't seem to have properly grasped the distinction between identification and authorisation. I believe the situation is even worse in the USA; I can't comment on elsewhere.

So by giving someone the information they need to send me money, I risk them fucking with me by setting up fraudulent direct debits. A possible workaround is to use a "savings" account that doesn't support direct debits, but I've seen some reports that even this isn't a totally watertight solution.

Comment Re:This is not the job of a payment processor (Score 1) 151

A stablecoin that has anonymity like Monero would gain tremendous ground, provided it is done "right", perhaps by a non-profit, who can assure that if 100% of coin owners want their coins exchanged into dollars, they can get them.

The fundamental problem with a stablecoin is that it has a centralised entity responsible for exchanging between stablecoins and regular currency. That centralised entity runs a heavy risk of being treated like a payment processor by regulators, but it's basically impossible for a stablecoin to comply with the regulations for a payment processor while retaining the other things that cryptocurrency users want.

Comment Re:Well macs don't last as long as PCs, so of cour (Score 1) 99

IME 10-year-old hardware is much more usable than 15-year-old hardware at this time.

I own a Core 2 Duo based MacBook, which spent most of its life running Linux (it's set up as a triple boot with Windows XP and some version of macOS, but it spent nearly all of its time in Linux). I think it's the 2.16GHz "mid 2007" model. I retired it for a few reasons.

1. RAM: the model in question only supported 3GB of usable RAM. AFAICT it was very common with machines of that vintage for the chipset to only support 4GB of address space (even though the CPU was 64-bit), with some systems being better than others about maximising usable RAM within that space.
2. The keyboard was developing issues.
3. The CPU was struggling to keep up with video playback.

It got replaced with a refurb ThinkPad T430, which I maxed out to 16GB of RAM (twice what Lenovo say can be fitted) and am still using to this day.

Comment Re:Rust is a waste of duplicated effort (Score 1) 83

There are things that C++ is bad at. ABI compatibility is one of them

Problematic as C++ can be in this department, Rust is even worse. Or to put it a different way, Rust is where C++ was decades ago. Rust has no ABI stability for "repr(Rust)" types; each version of the Rust compiler just does what its developers believe will give the best performance. Rust libraries also don't tend to care at all about ABI stability. This is why you will find that Rust programs are nearly always statically linked against Rust libraries (they may be dynamically linked against C libraries, of course).

gcc also used to do this for C++, but sometime in the early 2000s* the developers adopted a stable ABI (the Itanium C++ ABI). Clang also adopted this ABI. MSVC still does its own thing, but that is hardly relevant in a Linux discussion.

Now this is still far from perfect: seemingly minor changes to a library's code can easily cause ABI breaks in C++, and changes to the C++ standard can be a problem too (the C++11 string/list changes were a particular pain point), but with some care C++ ABIs can be kept stable enough to make shared libraries practical without rebuilding the world on every compiler update.

The kernel needs good ABI compatibility.

The Linux kernel cares very little about compatibility of its internal ABIs. The default assumption of the module system is that each kernel version, even a "micro" version, will require modules to be rebuilt.

The ABI used between kernel and userspace is another matter: that does have long-term stability promises, but it also has relatively little relation to the language used. Structures passed across the kernel/userspace boundary need to have a stable layout, but that is easily achieved in any of the languages under consideration.
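For example (a sketch with invented field names), Rust's repr(C) attribute pins a structure to the platform's C layout rules, which is the usual way to get a stable layout across such a boundary:

```rust
// Default ("repr(Rust)") layout: the compiler is free to reorder fields,
// and the layout may change between compiler versions.
#[allow(dead_code)]
struct Internal {
    flags: u8,
    id: u64,
    len: u32,
}

// repr(C) pins the layout to the platform's C struct rules, making it
// suitable for an ABI boundary such as kernel/userspace.
#[allow(dead_code)]
#[repr(C)]
struct WireHeader {
    id: u64,
    len: u32,
    flags: u8,
}

fn main() {
    println!("Internal:   {} bytes", std::mem::size_of::<Internal>());
    println!("WireHeader: {} bytes", std::mem::size_of::<WireHeader>());
}
```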

* It happened sometime between Debian woody and Debian sarge, I could trawl through upstream changelogs to figure out exactly when but meh.

Comment Re:don't worry (Score 1) 228

Given the context I suspect it's an abbreviation for "men who have sex with men", a term I've seen come up in public health messaging on monkeypox.

AIUI it refers to people who are physically male and have sex with other people who are physically male. So mostly gay men, but also some trans, bisexual, nonbinary, etc. people.
