
Comment: Re:Defining subsets of C++ (Score 1) 425

by DickBreath (#47673569) Attached to: Interviews: Ask Bjarne Stroustrup About Programming and C++
While I personally like to deeply understand the languages I program in, I don't think it should be necessary for an average programmer to have to understand the entire language specification in order to use the language.

If you can't just grab some code online and expect it to work on your compiler, then this is a major design fail in the language.

I can understand a language having unspecified aspects, poorly specified aspects, or things that are well specified to act in a compiler-defined manner. However, those should be extremely obscure features that an average programmer, writing average code, would never use.

Even better, don't have any compiler-dependent behavior in a language. Compiler extensions should be okay -- and they, by design, should cause a compile-time error on a different compiler -- or alternately have a way of being ignored, so you can stack compiler-specific blocks together in a single source file. Although having a source file that only supports a specific list of compilers also seems like a bad idea.

I'm glad it is not an issue in the languages I use.

Comment: Re:Defining subsets of C++ (Score 1) 425

by DickBreath (#47672141) Attached to: Interviews: Ask Bjarne Stroustrup About Programming and C++
> We don't need sub-sets of languages. We _already_ have those when programmers don't use all the complicated and over-engineered parts of C++.

We don't need sub-sets of languages. We already have them when compilers don't fully implement the entire language. (Rewind to the mid-1990s.)

Comment: Re:Why is C++ such an utter pile of shit? (Score 3, Funny) 425

by DickBreath (#47672115) Attached to: Interviews: Ask Bjarne Stroustrup About Programming and C++
There are optimizations you can use to improve your experience with C and C++.

Just insert these into your header files for both time and space improvements in your compiled code.

#define struct union // uses less memory
#define while if // makes code run faster

Now how can you say bad things about a language that is so easily improved?

Comment: Re:How do you feel about the haters? (Score 2) 425

by DickBreath (#47672085) Attached to: Interviews: Ask Bjarne Stroustrup About Programming and C++
> . . . all of this passion only exists because people are using ${SOMETHING}.

I feel passionate about SCO (in a strongly negative way), but not because they are important, popular, or their products widely used. I feel passionate about Clojure (in a positive way) even though it is not presently one of the top programming languages. How many people use something can be irrelevant to the legitimate reasons people feel passionately about it.

Comment: Re:Thanks Edward. (Score 5, Insightful) 206

Blaming Snowden for NSA abuses is like blaming Al Gore for Global Warming.

It is shooting the messenger.

If that messenger didn't tell us, some other messenger would have sooner or later. It was inevitable.

People only keep secrets (like global warming) when they feel it is their patriotic duty to do so for love of country. When they see widespread abuse contrary to the values of a democracy, with little or no oversight, and their peers feel the same way, it is inevitable that somebody is going to blow the whistle about global warming. If it hadn't been Snowden, it would have been someone else, eventually. This was never going to stay secret forever.

Comment: Re:So much Fail. Ignore. (Score 1) 315

by DickBreath (#47566081) Attached to: Programming Languages You'll Need Next Year (and Beyond)
Faster is both a characteristic of execution at runtime and of the time required for software development and maintenance. That development time has become a major factor. Time to market. Beat competitors. Also, programmer time is now vastly more expensive than hardware time. Just throw a few hundred gigabytes of RAM and a few racks of CPUs at it. This is still WAY cheaper than adding another programmer. (And more programmers add a certain drag factor to the development.)

But overall, with an outstanding GC, and with JIT compilers that are the product of over 15 years of research (see the JVM), you can achieve excellent performance. (See elsewhere in this Slashdot discussion where I describe some of the amazing things the JVM does. Also, Google why organizations like Twitter switched to the JVM. It may seem counterintuitive, but the results are real.)

Comment: Re:So much Fail. Ignore. (Score 1) 315

by DickBreath (#47566017) Attached to: Programming Languages You'll Need Next Year (and Beyond)
Yes, that!

Would you rather have your next whizbang software package one year sooner, but with, let's say, 75% of the performance? Or would you rather wait an extra year (or more) for a version with somewhat faster execution? You can substitute any reasonable number for the 75%, like 50%, and the question might still get the same answer. Not only is software delivered faster, it is less buggy. You can write bugs in high-level languages, but you tend to write fewer of them because the abstractions are designed to protect you from certain classes of bugs. Structured programming to avoid GOTO spaghetti. Type checking. GC. Functional programming. Immutable variables. Immutable data structures. Automated reasoning. Logic programming. Computer algebra systems. Automated theorem provers. Etc. Pick whichever level of abstraction is suitable for your project. C is not perfect for everything, but it is simply wonderful for certain things. The same can be said of other languages.

Comment: Re:The programming language for the next 20 years. (Score 3, Insightful) 315

by DickBreath (#47561191) Attached to: Programming Languages You'll Need Next Year (and Beyond)
Entire operating systems are written in C -- as they should be.

But C is a low level language. Not the best tool for writing applications.

Higher-level languages and managed runtimes have gained so much traction for a reason. They are very productive to use. They protect you from simple mistakes. They relieve the burden of memory management. GC simplifies library APIs by making the question of who should dispose of what irrelevant. We could still be programming in assembly language instead of C. Why aren't we? Why aren't OSes written in assembly? Because C is more productive and higher level. Similarly, there are languages at a higher level than C, and they have their place. C is not the end-all of abstraction.

Comment: Re:So much Fail. Ignore. (Score 4, Insightful) 315

by DickBreath (#47561139) Attached to: Programming Languages You'll Need Next Year (and Beyond)
So much fail about Garbage Collection.

GC is not about forgetting to free memory. It's about a higher level of abstraction removing the need for the programmer to do bookkeeping that the machine can do. Why don't we still program in assembler? Because it's less productive. It's about productivity. As data structures become extremely complex and get modified over time, keeping track of who owns what and who is supposed to dispose of it becomes difficult to impossible, and is the source of memory-leak bugs. In complex enough programs, you end up reinventing a poor GC when you could have used one that is the product of decades of research.

The article fails to understand that you can also run out of memory in a program that uses GC. Just keep allocating stuff while keeping references to everything you allocate.
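A minimal sketch of that leak pattern in Java (class and names invented for illustration): a static collection that is never cleared keeps every element reachable, so the GC can never reclaim any of it.

```java
import java.util.ArrayList;
import java.util.List;

public class GcLeak {
    // A collection that is never cleared: everything added stays
    // reachable from this static root, so the GC cannot reclaim it.
    static final List<byte[]> cache = new ArrayList<>();

    static int fill(int blocks) {
        for (int i = 0; i < blocks; i++) {
            cache.add(new byte[1024]); // allocated and kept forever
        }
        return cache.size();
    }

    public static void main(String[] args) {
        // With a large enough count, this ends in OutOfMemoryError,
        // GC or no GC, because every block is still referenced.
        System.out.println(fill(1000) + " blocks retained");
    }
}
```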

Reference counting is not real GC. Cyclic data structures will never get freed using reference counting alone.
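To make the cycle point concrete, here is a hypothetical two-node cycle in Java. A pure reference counter would see a nonzero count on each node forever, because each node references the other; a tracing GC reclaims both as soon as nothing outside the cycle reaches them.

```java
public class Cycle {
    static class Node {
        Node next;
    }

    static Node makeCycle() {
        Node a = new Node();
        Node b = new Node();
        a.next = b;
        b.next = a; // a -> b -> a: each node keeps the other's count at 1
        return a;
    }

    public static void main(String[] args) {
        Node a = makeCycle();
        a = null; // the cycle is now unreachable from any root: a tracing
                  // GC can free both nodes, but reference counting never would
        System.gc(); // only a hint, but a tracing collector can reclaim the pair
    }
}
```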

One of the major but under-recognized benefits of GC, which the article fails to mention, is that GC allows much simpler 'contracts' in APIs. No longer is memory management part of the 'contract' of an API. It doesn't matter which library or function created an object; nobody needs to worry about who is responsible for disposing of it. When nobody references the object any more, the GC can gobble it up.
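As an illustration (all names here are invented for the sketch), this is what a library API looks like when disposal is simply not part of its contract:

```java
public class Contracts {
    // A library type with no dispose()/free() in its API: under GC,
    // ownership never has to change hands.
    static final class Token {
        private final String value;
        Token(String value) { this.value = value; }
        String value() { return value; }
    }

    // In C, this function's contract would have to say who frees the
    // result. Here the caller just drops the reference when done.
    static Token issue(String user) {
        return new Token(user + "-token");
    }

    public static void main(String[] args) {
        Token t = issue("alice");
        System.out.println(t.value());
        // t goes out of scope; the GC reclaims it whenever it likes
    }
}
```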

On the subject of virtual machines, the article could mention some of the highly aggressive compilation techniques used in JIT compilers. Nearly every instance method call in Java is a virtual call. But a JIT compiler knows when only one loaded class implements a particular method, and it makes all calls to that method non-virtual. If another subclass is loaded (or dynamically generated on the fly), the JIT can recompile every caller so those calls become virtual again. Even then, the JIT may be able to prove that certain call sites only ever see a specific subclass, so those calls can stay non-virtual.
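Here is a sketch of the kind of call site HotSpot devirtualizes (class names are hypothetical): while Square is the only Shape implementation the JVM has loaded, the area() call below is monomorphic and can be compiled as a direct, even inlined, call; loading a second implementation later forces deoptimization and recompilation.

```java
public class Devirt {
    interface Shape {
        double area();
    }

    static final class Square implements Shape {
        final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    static double total(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) {
            // While Square is the only loaded Shape, this virtual call
            // can be devirtualized and inlined by the JIT.
            sum += s.area();
        }
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Square(2), new Square(3) };
        System.out.println(total(shapes)); // 4 + 9 = 13.0
    }
}
```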

The JIT compiler in the JVM can aggressively inline small methods. But if a class gets reloaded on the fly such that the body of an inlined method changes, the JIT knows to recompile every other method that inlined it. Based on the changes, it may or may not still make sense to inline the method -- so the inlining decision can change based on actual need.

The HotSpot JVM dynamically profiles code and doesn't waste time and memory compiling methods that have no significant effect on the system's overall performance. The profiling can vary depending on factors that differ from system to system and could not be predicted in advance by a static compiler. The JIT compiler can also compile your method using instructions that happen to exist on the microprocessor it finds at runtime -- something a static compiler could not determine in advance.

All of this may seem very complex. But it's why big Java systems run so darn fast. Not very many languages can have tens or even hundreds of gigabytes (yes GB) of heap with GC pause times of 10 ms. Yes, it may need six times the amount of memory, but for the overall benefits of speed, the cost of memory is cheap.
