
Comment Re:More features. (Score 1) 267

In 1997, std::vector was not legal syntax.

1997 was prior to C++ becoming standardised, so there was no standard library. The vector from vector.h in SGI's STL was similar to the standard library vector, but it was not part of the standard library. The STL was also still available for a good decade or so after it was deprecated in favour of the C++ standard library.

Another change that happened involved the meaning of delete with respect to arrays. Old code would introduce bugs if compiled with new compilers and new code would leak memory if compiled with old compilers. That fits your time window.

Only if your compiler is buggy. Arrays have required deletion with delete[], not delete, for as long as the language has had a standard.

Comment Re:C# vs Swift (Score 3, Interesting) 69

1. While you're completely correct in theory, if all of the mainstream implementations currently work in the assumed fashion then his point is still reasonably valid.

The mainstream implementations optimise for a particular point in the tradeoff space because they're mainstream (i.e. intended to be used in that space). GCs designed for mobile and embedded systems work differently.

2. ARC is designed to minimise the refcount editing / false sharing

I've worked on the ARC optimisations a little bit. They're very primitive and the design means that you often can't elide the count manipulations. The optimisations are really there to simplify the front end, not to make the code better. When compiling Objective-C, clang is free to emit a retain for the object that's being stored and a release for the object that's being replaced. It doesn't need to try to be efficient, because the optimisers have knowledge of data and control flow that the front end lacks, so it makes sense to emit redundant operations in the front end and delete them in the middle end. Swift does this subtly differently by having its own high-level IR that tracks dataflow, so the front end can feed cleaner IR into LLVM.

The design of ARC does nothing to reduce false sharing. Until 64-bit iOS, Apple was storing the refcount in a look-aside table (GNUstep put it in the object header about 20 years before Apple). This meant that you were acquiring a lock on a C++ map structure for each refcount manipulation, which made it incredibly expensive (roughly an order of magnitude more expensive than the GNUstep implementation).

Oh, and making the same optimisations work with C++ shared_ptr would be pretty straightforward, but it's mostly not needed because the refcount manipulations are inlined and simple arithmetic optimisations elide the redundant ones.

Comment Re:Deliberately missing the forest for the trees (Score 2) 372

My only hope is that Trump kicks California out of the union for having the most unbalanced budget of any state

Not sure if it's still true, but the last time I looked California would be running a net surplus if it cut its contribution to the Federal budget in half, so I guess that's one way of solving the problem. Kicking out Texas and a few other states that are significant net contributors would help even more...

Comment Re: Deliberately missing the forest for the trees (Score 2) 372

The bulk of immigrants from Eastern Europe do not even compete with the regular native as they tend to be better educated and work at a higher level

Which actually is the problem, and it's a social one not an economic one. Picture this:

You're a British citizen. You grew up in a poor town and went to a crappy comprehensive school. You had no expectations of getting a good job, because there aren't very many in your area. You see immigrants coming in and living in your poor communities and by and large they're not a problem, because they're suffering the same problems as you. After you get over the novelty, they're just other poor people like you. Fast forward a few years and they've now managed to get qualifications that are recognised here, and suddenly they're getting better jobs than you'd ever qualify for. Their house now has a better car in front of it than yours. They're wearing more expensive clothes than you. You're seeing that social mobility is a real thing - just not for people like you. How does that make you feel?

Sure, the economy as a whole is doing better, but that's not really a great consolation to the long-term unemployed.

Comment Re:how about modules? (Score 1) 267

You can't really have a clean separation between interface and implementation in C++ without radically redesigning the compilation model. Class and function interfaces are simple, but when you throw in templates you need access to all of the code required to construct a new instance of the template. The original Cfront compiler did this by a horrible dance of trying to compile the code, catching the linker errors, generating code for the missing symbols, and repeating. Modern C++ compilers just generate a copy of every template that a compilation unit needs and expect the linker to throw away the duplicates (which is why something like LLVM ends up with over a gigabyte of .o files for a 10MB binary). A lot of newer languages get around this by simply not supporting shared libraries, so you can ship the code (or some IR) along with the interface files and your compiler can generate specialised versions at static link time, or by including a big VM so that everything is distributed as some form of IR and compiled on demand.

Comment Re:Another SLOW Language (Score 1) 267

That's not necessarily true. Smalltalk VMs have long been written in a statically compiled subset of Smalltalk, and these days the JIT parts are written in the full language. In the case of Java, the sun.misc.Unsafe class includes pretty much everything that the JVM needs in terms of bypassing its own safety guarantees to be able to self host. In Jikes, the JIT and GC are written in Java. The main reason that JVMs are written in C/C++ is that you don't get much benefit from writing the bit of code that is explicitly allowed to violate a language's invariants in that language (and it makes bootstrapping harder).

Comment Re:C++ is due for deletion ... (Score 2) 267

When a "high" level language require half a dozen or so ways to implement a cast, it's time to go.

Of all of the criticisms of C++, this one makes the least sense. Different casts in C++ have very different semantics, and one of the worst things that a programming language can do (and which C++ does in many places) is give you the same syntax for different semantics. Are you taking the same set of bits and interpreting them as a different type (typesafe languages don't need this because they don't permit it)? Are you explicitly removing a const qualifier, and thus declaring that you know what you're doing when you do something that might break the invariants of an interface (similarly, not permitted in typesafe languages)? Are you asking to create a new instance of the target type, initialised from this one?

Remember when a programming language was truly object-oriented?

I still use some that are, but C++ is not one of them. If you try to write Smalltalk-like code in C++, the result will be almost as bad as if you tried to write C-like code in C++.

Comment Re:Just what we needed (Score 1) 267

Here's the thing. You might not want to look at this:

template <typename S, typename T>
requires Sequence<S> && Equality_comparable<Value_type<S>, T>
Iterator_of<S> find(S& seq, const T& value);

But today, you'll instead find something like this (from the C++17 spec):

template< class InputIt, class T >
InputIt find( InputIt first, InputIt last, const T& value );

When this breaks, it's because either T doesn't correctly implement the operator== overload that compares against the element type, or InputIt doesn't correctly implement operator++ and operator* per the (documented, but not expressed in the code anywhere) requirements of a sequential-access iterator. With the Concepts version, as soon as you read the signature you know that the first argument must be a sequence (of type S) with elements of some type X, that the second argument must be a type such that X == T is well defined, and that the return value will be an iterator into S, which can be dereferenced to give a reference to an X.

If your error is something syntactic, for example you've deleted the operator==(T&) on S, then your compiler will say 'This find function can't be used with this type because you're missing this method', whereas today it will give you a cryptic error about S::operator==(T&) not being defined somewhere in a deep set of template instantiations.

Concepts give you better compile-time error checking, better compile-time error reporting, and better in-code documentation. They're one of the few C++ language features that are pure benefit.

Comment Re:Epicycles (Score 2) 267

I'm no Java fan, but at least everything is a reference, so you don't have copy-by-accident ooga booga.

That's true, but Java doesn't really have an equivalent of the C++11 move idiom. If you want Java-like semantics from C++, just alias your pointers (ideally wrapping them in something like std::shared_ptr first). The term move is actually a little bit misleading: you're copying the object, but you're transferring ownership of the expensive contents of the object. For example, when you move a string you're creating a new string object, but you're not copying all of the string data, you're just transferring ownership of the underlying buffer. This is even more important for collection types, where you really don't want to do a deep copy and delete.

You can implement the same thing in Java by splitting your objects into cheap wrappers that hold a reference to an expensive object and then adding a move() method that transfers ownership of the expensive object to the new wrapper, but it's not integrated into the language. The language integration isn't actually essential in C++ either: people have implemented the same thing using a special Move<> template, which could be used as an argument to an overloaded constructor that would do the move operation. The nice thing about having it in the standard library and with language support is that tooling understands it. Your compiler / static analyser can warn you if you move an object and then try to do anything with the old version.

If copying is so bad (which apparently it is because you'll definitely get reamed during a code review if you do), force a copy action via clone(), like Java

Saying 'copying is bad' is just as wrong as most other 'X is always wrong' rules. Copying for small types is fine. A std::pair of an integer and a float is a couple of instructions to copy and move semantics would make no sense for it. clone() in Java is also problematic because the standard library doesn't distinguish between a deep and shallow clone.

Comment Re:More features. (Score 3, Informative) 267

For embedded systems, you really don't want exceptions. The runtime for RTTI and exceptions is bigger than the flash on most systems (I wrote the one that ships on FreeBSD, the PS4, and a few other places - it's not a huge amount of code in the context of a desktop OS, but it's 100KB of object code for the C++ bits, plus the generic stack unwinder - you don't want to burn 150-200KB of space on an embedded system for this), and stack unwinding performance is very hard to reason about in anything realtime.

The reason that the Embedded C++ subset excluded templates was that they make it very hard to reason about code size. A small amount of source code can easily become 10-100KB of object code if you instantiate templates with too many different types. A call to foo<Bar>() is no longer a simple matter of setting up the call stack and jumping: it is only that simple if someone else has already instantiated the same template function; otherwise it creates an entirely new copy of foo, and of every other template it refers to, specialised for the template parameters. This makes it very difficult to work out which changes were responsible for pushing the size above the available space. Actually, it's even worse, because the specialised function might now be simple enough to inline everywhere and give an overall space saving, but reasoning about exactly where that balance lies becomes very hard.

It's not that C++ generates bigger code than C, it's that object code size in C++ has far less of a direct correspondence with source code size than it does in C.

Comment Re:More features. (Score 3, Interesting) 267

Really? Prior to 1998, there was no standard library, though the Standard Template Library from SGI was pretty much treated as the standard library. When C++ was standardised in 1998, most of the STL was incorporated into the C++ standard library, so almost everything that you'd learned from the STL would still be relevant. The next bump to the standard was in 2011. Lots of stuff was added to the standard library, but very few things were changed in incompatible ways (auto_ptr was deprecated, because in 13 years no one had figured out how to use it without introducing more problems than it solved) and almost all C++98 code compiles without problems against a C++11 library. C++14 and C++17 have both added a lot more useful things but removed or made incompatible changes to very few things.

Let's look at a commonly used class, std::vector. The only incompatible changes in the last 18 years have been subtle changes to how two of the types that are accessible after template instantiation are defined. Code using these types will still work (because the changes are not on the public side of the interface), but the chain for defining them is now more explicit (e.g. the element type is now defined directly, rather than as the type of things allocated by the allocator - code would have failed to compile if these weren't the same type anyway). The changes in std::map are the same.

That said, you do need to learn new things. Modern C++ shouldn't use bare pointers anywhere and should create objects with std::make_shared or std::make_unique. The addition of std::variant, std::optional, and std::any in C++17 clean up a lot of code.

Comment Re: A new fad? (Score 4, Informative) 267

Yes, except with compile-time specialisation instead of run-time specialisation. One of the big problems that I have with C++ is that it has entirely separate mechanisms and syntax for implementing the same thing with compile-time and run-time specialisation and they don't always compose well. Languages such as Java sidestep this by providing only run-time specialisation and expecting the JIT compiler to generate the equivalent of compile-time specialisation.

With an abstract class in C++, you'd require that every method be called via a vtable, which makes inlining hard (though modern compilers can do devirtualisation to some extent). This often doesn't matter, but when it's something like an array access, which is 1-2 instructions, the cost of the method call adds up. In contrast, if you use a template then the compiler knows exactly which method implementation is called and will inline any trivial methods (at the cost of now having one version of each templated function for every data type, which can blow away your instruction cache if you're not careful). The down side of the template approach is that you have no (simple) way of saying 'this template argument must be a thing on which these operations are defined' and the error message when you get it wrong is often dozens of layers of template instantiation later and totally incomprehensible without a tool such as Templight.

Comment Re:C# vs Swift (Score 4, Insightful) 69

I'm not convinced by Chris' argument here. GC is an abstract policy (objects go away after they become unreachable), ARC is a policy (GC for acyclic data structures, deterministic destruction when the object is no longer reachable) combined with a mechanism (per object refcounts, refcount manipulation on every update). There is a huge design space for mechanisms that implement the GC policy and they all trade throughput and latency in different ways. It would be entirely possible to implement the C# GC requirements using ARC combined with either a cycle detector or a full mark-and-sweep-like mechanism for collecting cycles. If you used a programming style without cyclic data structures then you'd end up with almost identical performance for both.

Most mainstream GC implementations favour throughput over latency. In ARC, you're doing (at least) an atomic op for every heap pointer assignment. In parallel code, this leads to false sharing (two threads updating references to point to the same object will contend on the reference count, even if they're only reading the object and could otherwise have it in the shared state in their caches). There is a small cost with each operation, but it's deterministic and it doesn't add much to latency (until you're in a bit of code that removes the last reference to a huge object graph and then has to pause while it's all collected - one of the key innovations of OpenStep was the autorelease pool, which meant that this kind of deallocation almost always happens in between runloop iterations). A number of other GC mechanisms are tuned for latency, to the extent that they can be used in hard realtime systems with a few constraints on data structure design (but fewer than if you're doing manual memory management).

This is, unfortunately, a common misconception regarding GC: that it implies a specific implementation choice. The first GC papers were published almost 60 years ago and it's been an active research area ever since, filling up the design space with vastly different approaches.
