The result would be exactly the same across a wide range of masses. It's clear that there are no objects of significant size near Earth's orbit that aren't Earth or orbiting Earth. This measure is far less sensitive to the exact values than a required planetary mass would be.
People don't lose their rights by forming into some sort of union. However, why would the union have rights?
Saying that a corporation can't donate to a political campaign does absolutely nothing to the individual rights of any owner, officer, customer, or nemesis of the corporation. There would be absolutely nothing stopping owners from contributing individually. I have no idea whatsoever why you think this could, under the Constitution, be otherwise.
I am saying that the Federalist papers have no official standing, were written by three people out of those working on the Constitution, and were written specifically to get people to want to adopt the Constitution.
Also, what do you mean by "regulate"? And why wouldn't the FCC be the right way of regulating the internet? There's mention, IIRC, of departments in the Federal government, indicating that the executive branch can have subordinate entities dedicated to enforcing the law. The FCC has no power not given to it by Congress, but they have a good deal of leeway in executing their commission.
In which case the compilers in gcc were preprocessors (at least way back when), since they created assembly-language code that went to as. Java is a preprocessor, since it compiles to JVM code which is either interpreted or compiled. Any front end for LLVM is a preprocessor, since they create intermediate code that will be transformed to executables by another program.
In other words, I think your definition of preprocessor is far too general to be really useful.
Simple variable declaration is where I'd advise not using auto. I hate "auto v = [initialization]" where v is of a fairly simple type. Floating-point and integer types have distinct semantics (what does a/b mean?), and I'd like to make that more explicit. auto is more useful for iterators, since one iterator is much like another.
C++ algorithms are useful. You're better off using them than coding things yourself. You're likely not to use some of them enough to remember them, but something like std::find_if is easier to write than a loop, and its result can be used directly as a value rather than computed in a separate loop beforehand.
On a scale of 1-10 for strong static typing, Haskell comes out at about 11. A lot of the typing is deduced, not explicitly declared, so there's the equivalent of heavy use of auto. Auto solves the problem of not knowing the type (common in templates, which preserve strong typing a whole lot better than any object-oriented equivalent), or having a type that is not easy to figure out, as well as simplifying iterator declaration and making it harder to get wrong.
Of course Apple hasn't said Objective-C is a dead end. There would be a revolt and a mass fleeing from the platform if they did that.
Apple has done that sort of thing several times in the past. They may have angered some developers, but there were plenty left.
Remember when we were supposed to use OpenDoc? Hypertalk? Java? Or haven't you followed Apple development as long as I have?
Okay, you make the decision that this thing owns the pointer, but that thing has a copy of it. We know that this thing and not that thing is responsible for destructing what's at the end of the pointer, but when should it do it? Instead of figuring that out, it's easier just to use std::shared_ptr.
Variable naming conventions can get difficult, particularly since the variable has to be named different things in different contexts, and comments can easily become wrong when they deal with such low-level issues. std::unique_ptr tells you immediately that this is the pointer owner.
You can run standard C++ on any modern general-purpose processor. It doesn't have a standard ABI, which C has in practice, but if you're writing your code you just use one C++ compiler and you're fine. Technically, C is more portable. If you're talking about programming a computer or a phone or something like that, it's just as portable.
C++ can be written as efficiently as C, since you can make any C program into C++ with minor changes. C++ also has facilities that make it easier to write efficient code than in C; std::sort is not only easier to use than qsort(), it's more efficient.
You can explain part of quantum entanglement by comparing it to two envelopes, one containing a black card and one a red, and that opening one envelope tells you what's in the other. That's at least useful for showing why it can't carry information faster than light.
Explaining what happens when you open the envelopes at different angles is where it gets difficult. (You turn your envelope 90 degrees before opening it, I open mine in the original orientation, and you have no idea whether I've got red or black.)
Auto doesn't kill programs, stupid programmers kill programs. The numeric conversion rules are well defined, APIs have to be defined to be used, and anybody who expects something to return an integral type and actually gets a floating-point type is doing something seriously wrong. It sounds like you were probably doing embedded programming (as any reasonably modern non-embedded processor does have built-in floating point), and doing it badly.
Do you want to know what "for (auto i = foo.crbegin(); i != foo.crend(); ++i)" does? It traverses foo from end to beginning, not altering the contents of foo. Is there a problem with that? Specifying the type of i instead of using auto doesn't guarantee that foo is the right container to be traversing, and if foo is the right container then there's nothing wrong with the statement.
If you haven't used anything in the algorithms library in fifteen years, you're missing the point of much of C++ and your opinion is worthless. You not only don't understand the new features of C++, you don't really understand C++98. The algorithms library is much too useful to be intelligently avoided.
Common Lisp also has a macro system that's comparable to C++'s template system, but much easier to use, and the Common Lisp Object System is more powerful than C++'s. (Stroustrup wanted to include CLOS's multidispatch system, in which a method is selected for execution by the class types of more than one parameter, but couldn't find a good way to put it in C++.)
C++ never was a C preprocessor. The transformations were too complex for that. Cfront was a compiler that compiled early C++ to C, and passed on a lot of stuff intact. See Stroustrup, The Design and Evolution of C++.
Objective-C was a strict superset of standard C (at least back then; I haven't followed C evolution since C99). Its extensions were strictly syntax errors in C. C with Classes (later to become C++) was never a strict superset, as, among other things, it made "class" a keyword. The fact that Objective-C's extensions are syntactically disjoint from C (and from C++) meant that they could be added to C++ as well, producing Objective-C++.
The O-O approaches are different. Objective-C was intended to put the Smalltalk O-O system into C, while C with Classes took O-O features from Simula and added them to C. Then, C++ started evolving into something considerably different that was mostly backwards compatible.
You can't learn object-oriented programming as a way of thinking in a month. At least I couldn't come within an order of magnitude of that, and I'm very good at picking up new programming ideas.
You seem to be advocating not only strong but static typing, which are different things. Any Common Lisp entity knows what type it is, for example, and you can't say that about C++. However, the C++ compiler knows what type something is, while in Common Lisp that's optional.
A really strong static type system is incompatible with C++'s OO system, in that, given a Base and Derived class, the compiler can't know whether a Base * actually points to a Base or to a Derived object, and that uncertainty is essential for polymorphism.
Most of C++'s type weaknesses were inherited from C. If you avoid C-style casts and unions (usually only necessary in fields where type safety is not a concern), C++'s type safety increases markedly.