
Comment Re:how about modules? (Score 1) 184

You can't really have a clean separation between interface and implementation in C++ without radically redesigning the compilation model. Class and function interfaces are simple, but once you throw in templates you need access to all of the code required to construct a new instance of a template. The original Cfront compiler did this with a horrible dance of trying to compile the code, catching the linker errors, generating code for the missing symbols, and repeating. Modern C++ compilers just emit a copy of every template instantiation that a compilation unit needs and expect the linker to throw away the duplicates (which is why something like LLVM ends up with over a gigabyte of .o files for a 10MB binary). A lot of newer languages get around this either by simply not supporting shared libraries, so you can ship the code (or some IR) along with the interface files and the compiler can generate specialised versions at static link time, or by including a big VM, so that everything is distributed as some form of IR and compiled on demand.
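A minimal sketch of the duplicate-instantiation problem (file names are illustrative): every compilation unit that uses max_of<int> emits its own weak (COMDAT) copy of it, and the linker keeps one and discards the rest.

// algo.h
template <typename T>
T max_of(T a, T b) { return a < b ? b : a; }

// a.cpp
#include "algo.h"
int f() { return max_of(1, 2); } // emits a copy of max_of<int> into a.o

// b.cpp
#include "algo.h"
int g() { return max_of(3, 4); } // emits another copy of max_of<int> into b.o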

Comment Re:Another SLOW Language (Score 1) 184

That's not necessarily true. Smalltalk VMs have long been written in a statically compiled subset of Smalltalk, and these days the JIT parts are written in the full language. In the case of Java, the sun.misc.Unsafe class includes pretty much everything that the JVM needs in order to bypass its own safety guarantees and self-host. In Jikes RVM, the JIT and GC are written in Java. The main reason that JVMs are written in C/C++ is that you don't get much benefit from writing the bit of code that is explicitly allowed to violate a language's invariants in that language (and it makes bootstrapping harder).

Comment Re:C++ is due for deletion ... (Score 1) 184

When a "high" level language require half a dozen or so ways to implement a cast, it's time to go.

Of all of the criticisms of C++, this one makes the least sense. Different casts in C++ have very different semantics, and one of the worst things that a programming language can do (and which C++ does in many places) is give you the same syntax for different semantics. Are you taking the same set of bits and interpreting them as a different type (type-safe languages don't need this, because they don't permit it)? Are you explicitly removing a const qualifier, and thus declaring that you know what you're doing when you do something that might break the invariants of an interface (similarly, not permitted in type-safe languages)? Or are you creating a new instance of the target type, initialised from this one?
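A minimal sketch of those three intents, each spelled with its own cast (the function name is illustrative):

#include <cstdint>

void casts_demo(const int& ci, float f)
{
    // Same bits, different type: not something a type-safe language permits.
    std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(&ci);
    // Explicitly strip const: the caller asserts it knows what it's doing.
    int& mutable_ref = const_cast<int&>(ci);
    // Create a new instance of the target type, initialised from this one.
    int truncated = static_cast<int>(f);
    (void)bits; (void)mutable_ref; (void)truncated;
}

Grepping a codebase for const_cast or reinterpret_cast is trivial; finding every dangerous C-style cast is not, because the one syntax covers all of these meanings.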

Remember when a programming language was truly object-oriented?

I still use some that are, but C++ is not one of them. If you try to write Smalltalk-like code in C++, the result will be almost as bad as if you try to write C-like code in C++.

Comment Re:Just what we needed (Score 1) 184

Here's the thing. You might not want to look at this:

template <typename S, typename T>
requires Sequence<S> && Equality_comparable<Value_type<S>, T>
Iterator_of<S> find(S& seq, const T& value);

But today, you'll instead find something like this (from the C++17 spec):

template< class InputIt, class T >
InputIt find( InputIt first, InputIt last, const T& value );

When this version breaks, you get to work out that it broke because either T doesn't implement the operator== overload that compares against the element type correctly, or InputIt doesn't correctly implement operator++ and operator* with the (documented, but not expressed in the code anywhere) requirements of an input iterator. With the concepts version, as soon as you read the declaration you know that the first argument must be a sequence (of type S) with elements of some type X, that the second argument must be of a type T such that X == T is well defined, and that the return value will be an iterator into S, which can be dereferenced to give a reference to an X.

If your error is something structural, for example you've deleted the operator==(T&) on S's element type, then a compiler with concepts will say 'this find function can't be used with this type because you're missing this operation', whereas today it will give you a cryptic error about operator==(T&) not being defined somewhere in a deep set of template instantiations.

Concepts give you better compile-time error checking, better compile-time error reporting, and better in-code documentation. They're one of the few C++ language features that are pure benefit.
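For reference, here's a hedged sketch of roughly the same constraint in the concepts syntax that eventually shipped in C++20 (find_in and its body are illustrative, not the standard library's definition):

#include <concepts>
#include <ranges>

template <std::ranges::input_range S, typename T>
    requires std::equality_comparable_with<std::ranges::range_value_t<S>, T>
std::ranges::iterator_t<S> find_in(S& seq, const T& value)
{
    auto it = std::ranges::begin(seq);
    auto end = std::ranges::end(seq);
    while (it != end && !(*it == value))
        ++it;
    return it; // equals end if the value was not found
}

Misuse is now reported at the call site, against the named requirements, rather than pages deep inside the instantiation.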

Comment Re:Epicycles (Score 1) 184

I'm no Java fan, but at least everything is a reference, so you don't have copy-by-accident ooga booga.

That's true, but Java doesn't really have an equivalent of the C++11 move idiom. If you want Java-like semantics from C++, just alias your pointers (ideally wrapping them in something like std::shared_ptr first). The term move is actually a little misleading: you're creating a new object, but rather than copying the expensive contents you're transferring ownership of them. For example, when you move a string you're creating a new string object, but you're not copying all of the string data, you're just transferring ownership of the underlying buffer. This is even more important for collection types, where you really don't want to do a deep copy and delete.
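A minimal sketch of the difference (any C++11 compiler):

#include <utility>
#include <vector>

int main()
{
    std::vector<int> source(1000000, 42);
    // Copy: allocates a new buffer and copies a million elements.
    std::vector<int> copy = source;
    // Move: steals source's buffer; source is left in a valid but empty state.
    std::vector<int> moved = std::move(source);
    return copy.size() == moved.size() ? 0 : 1;
}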

You can implement the same thing in Java by splitting your objects into cheap wrappers that hold a reference to an expensive object and then adding a move() method that transfers ownership of the expensive object to the new wrapper, but it's not integrated into the language. The language integration isn't actually essential in C++ either: people implemented the same thing before C++11 using a special Move<> template, which could be used as an argument to an overloaded constructor that would do the move operation. The nice thing about having it in the standard library with language support is that tooling understands it: your compiler / static analyser can warn you if you move an object and then try to do anything with the old version.
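A hedged sketch of that pre-C++11, library-only approach (Move and Buffer are illustrative names, not any particular library's API):

#include <algorithm>
#include <cstddef>

template <typename T>
struct Move {
    explicit Move(T& v) : value(v) {}
    T& value;
};

class Buffer {
    char* data;
    std::size_t size;
public:
    explicit Buffer(std::size_t n) : data(new char[n]), size(n) {}
    // Ordinary copy constructor: duplicates the contents.
    Buffer(const Buffer& other) : data(new char[other.size]), size(other.size) {
        std::copy(other.data, other.data + size, data);
    }
    // 'Move' overload: steals the buffer and empties the source.
    Buffer(Move<Buffer> other) : data(other.value.data), size(other.value.size) {
        other.value.data = 0;
        other.value.size = 0;
    }
    ~Buffer() { delete[] data; }
};

Without language support, nothing stops you from touching the hollowed-out source object afterwards, which is exactly the mistake the tooling integration catches.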

If copying is so bad (which apparently it is because you'll definitely get reamed during a code review if you do), force a copy action via clone(), like Java

Saying 'copying is bad' is just as wrong as most other 'X is always wrong' rules. Copying small types is fine: a std::pair of an integer and a float takes a couple of instructions to copy, and move semantics would make no sense for it. clone() in Java is also problematic, because the standard library doesn't distinguish between a deep and a shallow clone.

Comment Re:More features. (Score 1) 184

For embedded systems, you really don't want exceptions. The runtime for RTTI and exceptions is bigger than the flash on most such systems (I wrote the one that ships on FreeBSD, the PS4, and a few other places - it's not a huge amount of code in the context of a desktop OS, but it's 100KB of object code for the C++ bits, plus the generic stack unwinder, and you don't want to burn 150-200KB of space on an embedded system for this), and stack unwinding performance is very hard to reason about in anything realtime.

The reason that the Embedded C++ subset excluded templates was that they make it very hard to reason about code size. A small amount of source code can easily become 10-100KB of object code if you instantiate templates with too many different types. Writing foo<Bar>() is no longer a simple case of setting up the call stack and jumping: it's that simple only if someone else has already instantiated the same template function; otherwise it creates an entirely new copy of foo, and of every other template that foo refers to using its template parameters. This makes it very difficult to work out which changes were responsible for pushing the size above the available space. Actually, it's even worse, because the specialised function might now be simple enough to inline everywhere and give an overall space saving, and reasoning about exactly where that balance lies is very hard. It's not that C++ generates bigger code than C; it's that object code size in C++ has far less of a direct correspondence with source code size than in C.
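A minimal sketch of the hazard (the function is illustrative): three call sites that look identical in the source produce three separate function bodies in the object file.

template <typename T>
T clamp_to_range(T value, T lo, T hi)
{
    return value < lo ? lo : (hi < value ? hi : value);
}

int main()
{
    // Three distinct instantiations: clamp_to_range<int>, <float>, and <double>
    // each get their own object code unless the compiler inlines them away.
    int i = clamp_to_range(1, 0, 10);
    float f = clamp_to_range(1.0f, 0.0f, 10.0f);
    double d = clamp_to_range(1.0, 0.0, 10.0);
    return i + static_cast<int>(f) + static_cast<int>(d) > 0 ? 0 : 1;
}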

Comment Re:More features. (Score 2) 184

Really? Prior to 1998 there was no standard library, though the Standard Template Library from SGI was pretty much treated as one. When C++ was standardised in 1998, most of the STL was incorporated into the C++ standard library, so almost everything that you'd learned from the STL was still relevant. The next bump to the standard came in 2011: lots of things were added to the standard library, but very few were changed in incompatible ways (auto_ptr was deprecated, because in 13 years no one had figured out how to use it without introducing more problems than it solved), and almost all C++98 code compiles without problems against a C++11 library. C++14 and C++17 have both added a lot more useful things, but have removed, or made incompatible changes to, very few.

Let's look at a commonly used class, std::vector. The only incompatible changes in the last 18 years have been subtle changes to how two of the types that are accessible after template instantiation are defined. Code using these types will still work (because the changes are not on the public side of the interface), but the chain that defines them is more explicit (e.g. the element type is now simply the element type, not the type of things allocated by the allocator - code would only fail to compile if these weren't the same type). The changes in std::map are the same.

That said, you do need to learn new things. Modern C++ shouldn't use bare pointers anywhere, and should create objects with std::make_shared or std::make_unique. The addition of std::variant, std::optional, and std::any in C++17 cleans up a lot of code.
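A short sketch of those idioms in practice (assuming a C++17 compiler; the types here are illustrative):

#include <memory>
#include <optional>
#include <string>

struct Widget {
    std::string name;
};

std::optional<std::string> widget_name(bool have_widget)
{
    if (!have_widget)
        return std::nullopt;                 // no nulls or sentinel values needed
    auto w = std::make_unique<Widget>();     // no bare new/delete
    w->name = "example";
    return w->name;                          // unique_ptr frees the Widget here
}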

Comment Re: A new fad? (Score 3, Interesting) 184

Yes, except with compile-time specialisation instead of run-time specialisation. One of the big problems that I have with C++ is that it has entirely separate mechanisms and syntax for implementing the same thing with compile-time and run-time specialisation and they don't always compose well. Languages such as Java sidestep this by providing only run-time specialisation and expecting the JIT compiler to generate the equivalent of compile-time specialisation.

With an abstract class in C++, you'd require that every method be called via a vtable, which makes inlining hard (though modern compilers can do devirtualisation to some extent). This often doesn't matter, but when the method is something like an array access, which is 1-2 instructions, the cost of the call adds up. In contrast, if you use a template then the compiler knows exactly which method implementation is called and will inline any trivial methods (at the cost of now having one version of each templated function for every data type, which can blow away your instruction cache if you're not careful). The downside of the template approach is that you have no (simple) way of saying 'this template argument must be a thing on which these operations are defined', and the error message when you get it wrong often arrives dozens of layers of template instantiation later and is totally incomprehensible without a tool such as Templight.
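A minimal sketch of the two mechanisms side by side (illustrative types):

struct AbstractArray {
    virtual ~AbstractArray() = default;
    virtual int at(int i) const = 0;   // run-time specialisation: vtable dispatch
};

long sum_virtual(const AbstractArray& a, int n)
{
    long total = 0;
    for (int i = 0; i < n; ++i)
        total += a.at(i);              // indirect call; hard to inline
    return total;
}

template <typename Array>
long sum_template(const Array& a, int n)
{
    long total = 0;
    for (int i = 0; i < n; ++i)
        total += a.at(i);              // statically bound; trivially inlined
    return total;                      // but one copy generated per array type
}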

Comment Re:C# vs Swift (Score 2) 29

I'm not convinced by Chris's argument here. GC is an abstract policy (objects go away after they become unreachable); ARC is a policy (GC for acyclic data structures, deterministic destruction when the object is no longer reachable) combined with a mechanism (per-object refcounts, with refcount manipulation on every update). There is a huge design space of mechanisms that implement the GC policy, and they all trade throughput against latency in different ways. It would be entirely possible to implement the C# GC requirements using ARC combined with either a cycle detector or a full mark-and-sweep-like mechanism for collecting cycles. If you used a programming style without cyclic data structures, you'd end up with almost identical performance from both.

Most mainstream GC implementations favour throughput over latency. In ARC, you're doing (at least) an atomic op for every heap pointer assignment. In parallel code, this leads to false sharing: two threads updating references to point to the same object will contend on the reference count, even if they're only reading the object and could otherwise keep it in the shared state in their caches. There is a small cost with each operation, but it's deterministic and it doesn't add much to latency (until you're in a bit of code that removes the last reference to a huge object graph and then has to pause while it's all collected - one of the key innovations of OpenStep was the autorelease pool, which meant that this kind of deallocation almost always happens between runloop iterations). A number of other GC mechanisms are tuned for latency, to the extent that they can be used in hard realtime systems with a few constraints on data structure design (but fewer than if you're doing manual memory management).
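A minimal sketch of where those atomic ops come from (illustrative types, not Swift's or Objective-C's actual runtime):

#include <atomic>

struct Object {
    std::atomic<int> refcount{1};
};

struct Ref {
    Object* obj;
    explicit Ref(Object* o) : obj(o) {}   // adopts one reference
    Ref(const Ref& other) : obj(other.obj) {
        // Every copy of a reference is an atomic RMW on shared memory:
        // two threads copying refs to the same object contend on this line.
        obj->refcount.fetch_add(1, std::memory_order_relaxed);
    }
    Ref& operator=(const Ref&) = delete;  // omitted to keep the sketch short
    ~Ref() {
        // Deterministic destruction: the last reference frees the object
        // immediately, however large the graph hanging off it is.
        if (obj->refcount.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete obj;
    }
};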

This is, unfortunately, a common misconception regarding GC: that it implies a specific implementation choice. The first GC papers were published almost 60 years ago and it's been an active research area ever since, filling up the design space with vastly different approaches.

Comment Re:Gouge the middle class to make them poor (Score 0) 193

Of course, the nuclear family of the 1950s had:
a 1200 (not 2200) sqft house,
formica (not granite) counters, ...

But the house was owned - with a mortgage affordable on a single income and substantial equity in place.

The car was also either owned or being purchased on an auto loan (rather than leased), again with substantial equity from the down payment, and again paid for out of that single income - which was also feeding and clothing the 2.3 children and taking a nontrivial vacation once a year or so.

And I have no idea where you are getting those square footage numbers. Our family's houses (we moved a couple times once Dad got done with his degree and was buying rather than living in a student ghetto) were substantially larger than you describe, and were typical of the neighborhoods around them.

Yes, Formica: It was the big deal of the time. Granite is a recent vanity - and a REALLY STUPID idea if you actually USE the kitchen to prepare food on a regular basis. Drop a ceramic or glass utensil on a granite counter and it breaks. Drop it on Formica-over-plywood-or-hardwood and it usually bounces.

stainless steel appliances,
automatic dishwasher,
automatic dryer,
*might* have had a TV (not a 54" LCD),

Yeah, we had all those boxes (though the appliances were enamel rather than stainless). Also a console sound system - pre "Hi Fi" - with AM, FM, and a four-speed record changer with a diamond needle in the pickup.

The non-electronic appliances lasted for decades, too. (Even the electronics lasted a long time with occasional maintenance - which was required for vacuum tube based equipment - and was AVAILABLE.) Quite unlike the modern stuff. (My own family has been in our townhouse for about 17 years now and is on its third set of "stainless steel appliances", thanks to the rotten construction of post-outsourcing equipment by formerly high-end manufacturers. We're even on our third WATER HEATER: The brain of the new, government mandated, eco-friendly replacement flaked out after less than a year - and the manufacturer sent TWO MORE defective replacement brains and one defective gas sensor before lemon-replacing it.)

Comment Re:Yes, custom ROMs are still necessary (Score 1) 146

even Google itself washes their hands of any phone that is older than about 2 years.

Three years. Google devices get system upgrades for two years, and security updates for three years. That's still well short of five years, as you say. On the other hand, while Apple has a history of supporting devices for that long, they've made no commitment to any specific support timeline.

Comment Re:Still no template definition checking :-( (Score 1) 184

Yes. That's the nature of STL concepts. Unlike Haskell typeclasses, it's not enough just to see that there are operations with the right types, because STL concepts are often defined in terms like "this is a valid expression, which returns a value convertible to bool".

Comment Re:As someone with a masters in this -exact field- (Score 2) 184

If you are a true master, you should be able to explain concepts in a way that even a child can understand. Richard Feynman was famous for this. So was Albert Einstein. Of course you can go too far, and simplify too much, so the children only think they understand.

Richard Feynman and Albert Einstein both did exactly this. You really can't understand quantum mechanics or general relativity without math. You can think you do, and both of them were great at providing simple explanations that gave the illusion of understanding... but it was only an illusion, which of course they knew perfectly well.
