the famous Outer Limits episode where the aliens come "To Serve Man", and it turns out that's the title of their cookbook.
In the interest of accuracy, that was a Twilight Zone episode, written by Rod Serling, based on a story by Damon Knight.
Here in California, we've got a law that says you can't have a video display operating anywhere the driver might see it, with exceptions for dedicated GPS/Nav/vehicle status displays.
A friend of mine used to have an online store for GPS navigation devices. Many of the manufacturers had "California" versions of the ROMs that he was required to ship to customers in California. The difference was that all the non-nav-related features (like games, calendar apps, etc.) were disabled while the device was in motion. This was to comply with the aforementioned law. While this was a long time ago, and the law has been amended substantially since then, I believe it still applies to this situation, but, of course, I am not a lawyer.
In general, I agree that the patent system is broken and creates perverse incentives that undermine the intent of the system. Nevertheless, allow me to play devil's advocate here.
Now please inform us as to how patent trolls promote the progress of science and/or useful arts.
Developing a technology requires an investment of time and money and possibly other resources. That investment may result in an invention that is useful for the investor, or it might not pan out at all.
Patent trolls provide a marketplace for patents that the inventor may not be able to use but that might have value to others. The existence of this marketplace reduces the risk of investing in R&D because some of the "failures" might be valued by the troll marketplace.
Reduced risk possibly spurs more investment in R&D. More R&D investment likely promotes the progress of science and the useful arts.
I want to be outraged about the use of PRISM for copyright enforcement, but I made the mistake of reading the article. It seems the connection between the surveillance of Dotcom and PRISM is rather tenuous.
If I'm understanding the article correctly, it seems somebody noticed that the term "selectors" was used in setting the parameters of the illegal surveillance, and somebody else noticed that "selectors" is exactly the same term that XKEYSCORE uses--OMG! Um, yeah. That doesn't mean XKEYSCORE or PRISM was actually involved. It might just be that "selectors" is part of the standard terminology for signals intelligence.
Recompiling is not enough because you can't trust the compiler either, unless you write your own bootstrapping compiler to compile the compiler.
The standard specifies compiler behavior and the run-time library behavior. I know GCC has been pretty up to date with respect to the language features, but there are still some "Partial" and "No" entries in the run-time library implementation's C++11 status. Is Clang's library implementation complete with respect to the C++11 standard?
However, agreeing to the second one made it a clear money grab and it violated the California law.
He wasn't charged under California law. He was charged and convicted under federal conspiracy laws.
Presumably the intent of the California law is to make it easier to convict someone of something without necessarily having to make the harder charge stick, just as open container laws make it easier to secure a DUI conviction. The feds apparently thought (and were right) that they could make the more difficult argument that he was a conspirator in the drug trafficking.
STL arrays are allocated on the heap, and that's a much slower and more wasteful form of allocation than the stack.
Actually, STL arrays (as in std::array) can be allocated on the stack. They have no overhead.
It seems you're referring to STL vectors (as in std::vector), which is an implementation of a dynamically-sized container, so, yes, it uses the heap—as does every other dynamic container I know of. That neither makes it significantly slower nor more wasteful.
Sure, you can use C arrays, but guess what: out go type safety and STL algorithms and C++ idioms.
False on all counts. C arrays can be used with type safety, with STL algorithms, and with C++ idioms.
"Disabling exceptions" is a non-standard extension to the language. If you disable exceptions, you're technically not using C++ but a nonstandard dialect.
What does it even mean to disable exceptions? I'll assume it makes new behave like new(std::nothrow), but is your STL implementation prepared to deal with that? It probably expects a failed allocation to throw. If it also has to check for nullptr, then there's another code path that's probably not well tested. And what should std::vector::at do when faced with an out-of-bounds index? The standard says that it shall throw an exception.
Disabling exceptions is different from simply not using a feature you don't want to use. If you don't want to write a template, that's fine. If you don't take advantage of dynamic_cast, that's OK, too. If you choose not to use STL and avoid the standard new operator and don't use RAII, then I suppose you can reasonably ignore exceptions. But STL and new and RAII constructors that fail will throw exceptions. You can choose not to catch any of those, but that doesn't mean that exceptions don't exist.
In particular, exceptions and RTTI are absolutely verboten.
Ignoring RTTI is fine, but forbidding exceptions requires a dangerous sort of doublethink. The language itself, as well as the STL, is defined to generate and use exceptions. By ignoring their existence, you banish yourself to a nonstandard purgatory.
For example, every new now must become new(std::nothrow). For every STL container type, you have to provide a custom allocator that doesn't throw. That's a bit unwieldy.
By denying exceptions, you force everyone to use error-prone idioms. For example, the only way a constructor can signal a failure is to throw an exception. If you forbid exceptions, then all constructors must always be failure-proof. And then you have to provide an extra initializer method to do the real initialization that can fail. Every user of the class must reliably call the init method after construction, which gets cumbersome when classes are nested or when you're putting instances into an STL container. It also means that objects can have a zombie state--that period of time between construction and initialization. Zombie states add complexity and create an explosion in the number of test cases. Separate initialization means you can't always use const when you should.
Exceptions are necessary to the C++ promise that your object will be fully constructed and later destructed, or it will not be constructed at all. This is the basis of the RAII pattern, which just happens to be the best pattern yet devised for resource management. Without RAII, you will almost certainly have leaks. Worse, you won't be able to write exception-safe code, so you are essentially closing the door to ever using exceptions.
In response, FDA told Threatpost that it is developing tools to disassemble and test medical device software and locate security problems and weak design.
Why would the FDA have to disassemble the code in the devices? The FDA approval process requires giving the FDA examiners access to the source code, along with design docs, schematics, etc. Why would they need to reverse-engineer a device? Something's not right here.
One man's "magic" is another man's engineering. "Supernatural" is a null word. -- Robert Heinlein