

Comment: Re:D is a regression (Score 1) 386

by ardor (#48887719) Attached to: Is D an Underrated Programming Language?

> You only offer emotional arguments ("a blight", "relic from the past", etc.).

> Yes, the preprocessor does not work at the same level as the compiler - and that is a good thing, because it gives you leverage over what the compiler sees and lets you guarantee that the logic outside the #ifdefs is untouched by any changes - therefore you get much higher quality/stability.

Did you just say logic? Have you ever written complex sets of code with the preprocessor? A heap of nested macros? This thing scales very badly, and is not even Turing complete. C++ templates also scale badly, admittedly (complex metacode is a horror, just like complex macros calling other macros), but they have knowledge of the language's type information and semantics - namespaces, for example. Template metaprograms also run in a different realm (at compile time, not at run time), so where does that leave the preprocessor?

In case you missed it: I said that in C, the preprocessor needs to be used a lot more often. I do not argue against that (I know it firsthand from writing C code for embedded hardware). I do argue that *in C++*, the preprocessor does not have to be used nearly as often. There is *zero* reason for a MIN() macro when you can have a templated min() function, for example.
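To make the MIN() point concrete, here is a minimal sketch (names illustrative): the macro expands textually and can evaluate an argument twice, while the template is scoped, type-checked, and evaluates each argument exactly once.

```cpp
#include <algorithm>  // std::min does the same job in the standard library

// Textual expansion: MIN(i++, j) evaluates i++ twice -- a classic macro bug.
#define MIN(a, b) (((a) < (b)) ? (a) : (b))

// The template alternative: scoped, type-checked, single evaluation.
template <typename T>
T min_value(T a, T b) {
    return (a < b) ? a : b;
}
```

With `int i = 0;`, `MIN(i++, 10)` leaves `i == 2` because the winning argument is expanded and evaluated a second time, while `min_value(i++, 10)` leaves `i == 1`.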

> Your example with the untested feature can be solved by isolating the crazy untested code in its own module, and simply *not enabling that module in the build scripts*.

> So you have to have modules for every tiny feature?

> And all that bloat and overhead just to satisfy your emotional sense of aesthetics?

> So to avoid 2 lines of "ugly" code (#ifdef / #endif) you need to create a module, adapt the build system, etc. etc.?

> And we have not even gone into some "advanced" stuff like

> #if defined(TEST_1) && defined(TEST_2)

> So easy to do with the preprocessor - how do you do that with modules? Create a third module that contains just the code that is needed when both other modules are included? And hide everything in the build system so that nobody can find and/or debug it?

> And again, why all that overhead when all you get is a program that is slower, uses more RAM and (yes!) is much more difficult to understand and debug?

> Ideally, the build system should not contain any logic. All the logic should be in the source code.

> And of course your "aesthetics before function" approach may be acceptable on the PC, where all that bloat does not matter much. But it is an absolute no-go in embedded-systems programming. Just two years ago I worked on a project where we had only 128 KB (yes, that is kilobytes) of RAM. And we had to frequently cut the bloat to stay under that limit.

> In that situation you forget about "modules", object orientation and all those other buzzwords from the ivory tower pretty fast.

If you seriously believe that modularization and object oriented programming are stuff that has no practical usage, then you obviously do not know much about them. Here's a hint: these things are incredibly useful and can even be applied to tiny platforms like stuff that you program with Keil compilers. Yes, things with 32K SRAM or less, no full standard C library (usually hardly any library at all), no heap, etc. I have worked on these. I have applied modularization and object orientation to them. No, it wasn't bloated. No, object oriented programming does not imply huge amounts of registries, virtual function tables, or deep class hierarchies. It is all about having a proper architecture where separate concerns are handled by separate modules. Not one big piece of magical code doing it all, in a messy, convoluted way. The fact that you call such essential concepts "aesthetics" and "ivory tower stuff" speaks volumes.

And I obviously do not advocate imperative logic in the build scripts. It is trivial to see that I mean different configurations for different feature sets when the changes are big and it makes sense to do it this way (for example, some additional profiling and analysis utility functions in a Debug configuration). If it is small stuff you are talking about, for example some experimental changes to the code, then I suggest you make use of version control and set up a separate branch. I have also done this for embedded projects, ranging from tiny CSR chipsets with 8K SRAM and small-scale Cypress hardware to bigger Cortex-A9 based hardware. I *never* put #ifdefs for new experimental stuff in the code; I use git branches instead. Both the workflow and the code quality are vastly superior as a result.

I do sometimes put in an #if 0 block if a certain feature is known to be problematic with the current BSP and/or kernel, together with a very detailed comment that explains the issue and asks to please enable this once the faulty dependency is fixed. Conditional compilation is a feature that other languages have adopted without taking on the rest of the preprocessor. And hopefully, C++ will have static_if one day, so #if 0 can finally go. (Unlike #if 0, static_if can access metacode, and would make template metaprogramming vastly easier to read and write.)
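A sketch of that #if 0 pattern (the feature and the BSP issue here are made up for illustration):

```cpp
// Pattern described above: disable a known-broken feature with #if 0 and a
// comment saying why and when to re-enable it, instead of deleting the code.
int init_peripherals() {
    int initialized = 0;
    ++initialized;  // UART setup: always safe

#if 0
    // DISABLED: DMA setup hangs with the current BSP's cache configuration
    // (hypothetical example). Re-enable once the vendor ships the fix.
    ++initialized;  // DMA setup
#endif

    return initialized;
}
```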

Not to mention the combinatorial explosion with your TEST_1 / TEST_2 macros another poster already pointed at.

> So what do you do when you have a new revision of a circuit board that has a different pin layout?

> Do you throw away everything (several man-months of programming and testing) and create a sophisticated module system that will create numerous other problems and limitations to satisfy aesthetics?

> No: You use the preprocessor to add the new stuff while still avoiding any change to the old, so the old stuff can still be used and tested and (more importantly) you can compare the old with the new.

I put the pin layout data in separate .c files (one file for each layout), and link the one that is appropriate for the board, and keep one header around, which contains a forward declaration of that table. No #ifdef necessary. And this is not a new or complex idea. It has been around since the dawn of C. And no, this is not complicated to set up in a Makefile.
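A sketch of that layout-per-file approach (names invented for illustration). In a real project each definition lives in its own .c file and the Makefile links exactly one of them; here they are shown together, with the unused revision commented out:

```cpp
// pin_layout.h -- the one shared header: a forward declaration only.
struct PinLayout {
    int uart_tx;
    int uart_rx;
    int led;
};
extern const struct PinLayout board_pins;

// pin_layout_rev_a.c -- linked only when building for revision A boards.
const struct PinLayout board_pins = { 4, 5, 13 };

// pin_layout_rev_b.c -- linked only for revision B (different pinout).
// const struct PinLayout board_pins = { 10, 11, 2 };
```

Every consumer includes only the header, so no source file changes when a new board revision appears; the selection happens entirely at link time, with no #ifdef anywhere.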

Comment: Re:Problems in C++ (Score 1) 386

by ardor (#48863641) Attached to: Is D an Underrated Programming Language?

In general, the C++ committee eschews introducing new keywords unless it really has to. "interface" would be something they'd reject. That said, I do agree that it would reduce the noise, but since C++ has template metaprogramming, which uses a form of structural typing, interfaces are not as essential as in, say, Java. In Java a type needs to implement the "Comparable" interface to be compatible with sorting functions. In C++ there has to be a less-than operator defined for the type, but it does not have to inherit from anything special (see "C++ generic programming" for details).
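The sorting example sketched in code (the type is illustrative): std::sort only requires that operator< exists for the element type; no interface inheritance is involved.

```cpp
#include <algorithm>
#include <vector>

// No base class or interface needed -- std::sort only requires operator<.
struct Event {
    int timestamp;
    int id;
};

bool operator<(const Event& a, const Event& b) {
    return a.timestamp < b.timestamp;  // order events chronologically
}
```

Sorting a `std::vector<Event>` with `std::sort(v.begin(), v.end())` then works out of the box; this is the structural-typing flavor mentioned above.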

Of course, you can write C++ in a more Java-esque way, but you don't have to. That is my main point. And since you don't have to, the committee would rightfully argue that "interface" is not that essential.

Comment: Re:Problems in C++ (Score 1) 386

by ardor (#48863627) Attached to: Is D an Underrated Programming Language?

> The biggest gaps were filled in C++11 with replacements for atoi() and so forth, but there's still no replacement for strtok or some of the other functions in the core language.

stringstream and getline together can be used to form a tokenizer. I am not saying it is a paragon of beauty, but it works. See: http://stackoverflow.com/a/117...
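The approach from that answer, as a small self-contained sketch:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a string on a delimiter using std::getline over a stringstream --
// a workable stand-in for strtok, without strtok's hidden global state.
std::vector<std::string> tokenize(const std::string& input, char delim) {
    std::vector<std::string> tokens;
    std::istringstream stream(input);
    std::string token;
    while (std::getline(stream, token, delim)) {
        tokens.push_back(token);
    }
    return tokens;
}
```

For example, `tokenize("one,two,three", ',')` yields the three tokens "one", "two", "three".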

Comment: Re:D is a regression (Score 2) 386

by ardor (#48863609) Attached to: Is D an Underrated Programming Language?

Rule number one in C++ : avoid the preprocessor unless you really have to use it.

Things like:

                #include "crazy_new_untested_code.c"

Are a blight, and one of the first things that I remove from C++ code.
The preprocessor does not work on the same level as the compiler, and therefore has no knowledge of rather important aspects of the language like scoping or namespaces. If something can be done as a language-level construct instead of a preprocessor macro, do so. A good example is a templated min() function vs. a MIN() macro. Another is avoiding awful sins like "#define DWORD unsigned long". Oh, and include guards? Unfortunately a necessary evil, because #include is a relic from the past, and we have no modern, proper replacement (something like packages, modules, or units in other languages). And no, #include + include guards are not "good enough". Hacks like the pimpl idiom are necessary because of the stupidity of #include. I hope the C++ committee gets modules done in the next C++ standard revision. Then *finally*, I can say goodbye to C++ headers.
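The "necessary evil" in question, for reference (header and type names are illustrative): without the guard, a header pulled in twice - directly and via another header - would redefine its contents and fail to compile.

```cpp
// widget.h -- the classic include guard pattern.
#ifndef WIDGET_H
#define WIDGET_H

struct Widget {
    int id;
};

#endif  // WIDGET_H
```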

Your example with the untested feature can be solved by isolating the crazy untested code in its own module, and simply *not enabling that module in the build scripts*. Not by filling code with #ifdefs.

WebKit unfortunately uses #ifdefs in its code, even in its headers. Example: https://github.com/WebKit/webk... and it is a horrible design approach. It completely violates the open/closed principle, and as a result, integrating a new graphics API or toolkit is not as straightforward as it could be.

Of course the preprocessor can sometimes be useful, but it is not as much of a killer feature as you make it out to be. In C it needs to be used much more often than in C++.

Comment: Re:More Likely (Score 1) 278

by ardor (#48691535) Attached to: Snowden Documents Show How Well NSA Codebreakers Can Pry

And none of this counters anything I said. "They intercept traffic and insert a nice little exploit for FF" is exactly what I mentioned. They do not crack the encryption itself; they use loopholes, side channel attacks, improper configurations, and exploits to get to you. Also, intercepting won't work with HTTPS unless they take control of the CAs. This may work with CAs from the US, but not with overseas CAs.

Comment: Re:Again... (Score 1) 278

by ardor (#48687965) Attached to: Snowden Documents Show How Well NSA Codebreakers Can Pry

Instead of writing some vague stuff about an almighty NSA, do tell how they are supposed to break properly configured encryption algorithms. Do you think they have magical quantum computers in their basement which can crack AES-128 during a coffee break?

The actual NSA attacks are most likely focused on exploiting improper configurations (which are unfortunately far more common than one would think), side channel attacks, or outdated and broken encryption algorithms. Or they simply wrestle US CAs into forging certificates and then do a MITM attack.

Always remember http://xkcd.com/538/ .

Comment: Re:You will not go to wormhole today. (Score 1) 289

by ardor (#48515157) Attached to: Physicist Kip Thorne On the Physics of "Interstellar"

> Also, how are you applying the many worlds theory? Aside from the fact that it's not universally accepted, and the fact that I don't have a clue how to falsify it, it applies to phenomena that could go more than one way. When I measure the spin on an electron, there are two possible values. The many worlds theory says that there are now twice as many universes, half with spin one way and half with spin the other way. Are you claiming that, when I drop a banana, there are universes where it falls and universes where it doesn't?

This is correct. Note that Many Worlds is not a theory, but a QM interpretation. But you correctly described how it would be applied. What can happen, will happen, in one of the infinite number of universes. The trick is to see all frames of reference over all universes. This way, there really are no preferred ones (in other universes, you do the FTL travel, so you enter these frames of reference, and then a causality violation happens in these universes). If you just look at the frames of reference of your universe, then yes, there would be a preferred one.

The actual problem is that Many Worlds is an interpretation of quantum mechanics, and nobody has ever actually attempted to combine it with special and general relativity, both because Many Worlds is (currently, at least) not falsifiable, and because QM and relativity have fundamental incompatibilities, which need to be resolved anyway. So it's all speculation at this point. For instance, does "all frames of reference" extend to the frames of all universes, or only our own? Without such a merged theory, it is unclear.

Comment: Re:You will not go to wormhole today. (Score 1) 289

by ardor (#48506519) Attached to: Physicist Kip Thorne On the Physics of "Interstellar"

No, we can observe this other FTL travel as well, but *then* the universe in which it is observed ends. "It is impossible to observe", on the other hand, would mean that it cannot happen in *any* universe. From the point of view of our universe, you are right, it does appear as if some types of FTL travel are disallowed. But this is resolved by allowing them to happen in general; just the universes where they didn't happen "survive". So these other types of FTL travel only appear to be disallowed; they aren't really disallowed.

Another example would be a spontaneous transition to a lower quantum vacuum state. It is highly unlikely, but could happen. With the many worlds interpretation, it spontaneously happens in some universes, which then end, or at least we aren't around to observe it. In others, it doesn't happen, and we are still around to observe these universes.

Comment: Re:You will not go to wormhole today. (Score 1) 289

by ardor (#48501683) Attached to: Physicist Kip Thorne On the Physics of "Interstellar"

No, they *can* happen, but when they happen, those universes cease to exist. The many worlds interpretation then implies that only those universes where these things *didn't* happen survive. It's not about forbidding certain events, it's about how precisely these events prune universes.

Comment: Re:Faith based Order (Score 1) 289

by ardor (#48497591) Attached to: Physicist Kip Thorne On the Physics of "Interstellar"

> That's the scary thing about people with some education but not enough, they think they know FAR more than they really do and are more than happy to attack anything they think the people "above them" don't agree with even if they aren't qualified to do so..

Yep. That's the Dunning-Kruger effect in action.

Comment: Re:You will not go to wormhole today. (Score 1) 289

by ardor (#48495955) Attached to: Physicist Kip Thorne On the Physics of "Interstellar"

> There are solutions to GR equations which allow for spacetime to be bent to the point where something that *looks* like FTL to fall out, but they tend to require exotic matter, and there's no evidence to suggest that said matter exists.

This is the big one. Alcubierre's metric has been heavily optimized over time to require energy amounts that could be feasible one day, but the exotic matter requirement is the second problem. We can only hope that (a) exotic matter exists, or (b) an alternate solution can be found (perhaps something based on dark energy once it is understood).

As for the frame of reference, perhaps this isn't such a big deal. If, for example, the many worlds interpretation is valid, and a causality violation leads to some sort of breakdown of a universe, then you simply would never notice the violations, since the universes where they did happen just cease to exist. So if a spaceship FTL-flies from A to B, where B is a planet in motion relative to A, and the ship then FTL-flies back to A, perhaps in the "surviving" universes it makes the return flight more slowly, for example.

It's all hypothetical of course, but it shows that the causality problem could be circumvented.

Then again, we shouldn't be talking about FTL when we don't even have (relatively) cheap commercial mass transportation to LEO and beyond yet. The sun won't increase its luminosity to lethal levels for the next 700 million years or so, so we have time.
