This is an incorrect assertion, one my previous post already debunked, but I suppose I'll re-explain:
You could have a drive full of PDFs, or full of PNGs, or whatever file format you like, and you could mount the drive as noexec. But when it comes down to it, a trusted program (NOT ON THAT DRIVE) still has to interact with those files. Since file formats can be complex, AND since the programs opening them are also complex, there's a chance the program will be vulnerable to a crafted file that tricks it into doing something a "regular movie" or whatever wouldn't do, something it may never have been tested for.
If you've written a file parser of any kind, you'll know how complicated it gets to have your code check a file for abnormalities before interacting with it. The complexity ramps up steeply, and all it takes is one unchecked array boundary for your program to let file data spill into memory it shouldn't touch, including its executable memory space.
The old adage holds true here: never trust user input.
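To make the unchecked-boundary point concrete, here's a toy sketch (the two-field record format and the function names are invented for illustration) of a parser that blindly trusts a length field read from the file, next to one that validates it first:

```python
import struct

def parse_record_naive(blob: bytes) -> bytes:
    """Toy format (hypothetical): 4-byte big-endian length, then payload."""
    (length,) = struct.unpack(">I", blob[:4])
    # BUG: the declared length is trusted blindly. Python merely returns a
    # short slice, but in a memory-unsafe language this is exactly where a
    # crafted file would read or write past the end of the buffer.
    return blob[4 : 4 + length]

def parse_record_checked(blob: bytes) -> bytes:
    """Same format, but the untrusted length field is validated first."""
    if len(blob) < 4:
        raise ValueError("truncated header")
    (length,) = struct.unpack(">I", blob[:4])
    if length > len(blob) - 4:
        raise ValueError("declared length exceeds actual file size")
    return blob[4 : 4 + length]

# A crafted "file" claiming a 4 GB payload it doesn't actually carry:
crafted = struct.pack(">I", 0xFFFFFFFF) + b"tiny"
```

The naive version silently accepts the crafted input; the checked version rejects it, which is the kind of validation every field of a complex format needs.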
The parent couldn't be more correct.
People discount regular data files as malicious vectors simply because they're not labelled executables. What they overlook is that those files are opened by executables, and often by trusted programs, which makes this an even bigger threat: the malicious code runs hidden under a legitimate process and does its work from there. Anything from buffer overruns to file-parsing mistakes can turn the programs that open these files into a conduit for abuse.
An example of this is Adobe Reader's long history of exploits involving the PDF file format.
I disagree. Applications should be optimised, but many developers no longer bother because it means more development time.
If you compare, proportionally, how many resources applications used in 2000-2005 against what they use now, you'll see that applications consume an unjustifiable amount. A large part of the problem is the "if your computer can't run X, upgrade your computer" mentality, instead of pressing developers to review how they can optimise.
Nearly no modern practical-use program (read: browsers, office productivity, etc.) can run on an average modern computer any more without paging to disk, and that is a disturbing trend.
This is correct: SSL induces significant overhead, both bandwidth- and CPU-wise. Most CPUs can handle an SSL website connection, but that's because the expensive SSL handshake only happens every so often (at the start of each connection). Implementing it in a "fast-acting" protocol like DNS is guaranteed to slow the protocol down, ergo clients will have to wait a non-trivial time before they can even connect to the resource in question.
And that doesn't even account for the DNS resolver's resource usage: given an average resolver's query load, the additional stress of doing SSL for each query would be operationally unacceptable, and keeping persistent connections open for an ISP's worth of users wouldn't be an option either, since the servers would exhaust their open file descriptors.
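As a rough back-of-envelope sketch of the client-side cost (the round-trip figure is an assumption, and I'm counting a classic full TLS handshake over TCP with no session reuse):

```python
# Back-of-envelope latency estimate; all figures are illustrative assumptions.
RTT_MS = 30  # assumed client<->resolver round-trip time in milliseconds

# Plain DNS over UDP: one round trip per query.
plain_dns_ms = 1 * RTT_MS

# DNS wrapped in SSL/TLS over TCP, with no session reuse:
# TCP handshake (1 RTT) + full TLS handshake (2 RTTs) + the query itself (1 RTT).
dns_over_tls_ms = (1 + 2 + 1) * RTT_MS

print(plain_dns_ms)     # 30
print(dns_over_tls_ms)  # 120
```

Under these assumptions every fresh lookup costs four round trips instead of one, before the client has even started connecting to the host it asked about.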
There is a large difference between a "user" and a "customer". The problem is you may think you are a "customer" (or at least a potential customer) of every site you visit, but that is incorrect.
"Customer" implies that there is a business relationship in play, however if it is a forum or other free resource, you will never be a customer as there is nothing to purchase. Not every website on the internet is a business.
It is often seen as abuse when a user needlessly downloads or accesses a resource (files) multiple times, and website administrators often have no qualms about blocking abuse: it means less load on the site's server, more resources (bandwidth, connection slots on the webserver daemon) free for other users, and, on top of that, a potentially lower bill.
Speaking from experience, I've seen people deliberately misconfigure download managers to open 20-100+ connections to a file, feeling that the website somehow owes them that file. Doing the same on a webpage with a browser is no different.
If the machines' BIOSes support it, you could use PXE (netboot), which lets you load the software directly over the network without using any portable media.
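For example, a minimal setup might look like the sketch below (assuming dnsmasq as the combined DHCP/TFTP server and a pxelinux bootloader; the interface, address range, and paths are placeholders for your own network):

```
# /etc/dnsmasq.conf -- hypothetical PXE boot server sketch
interface=eth0
dhcp-range=192.168.0.50,192.168.0.150,12h
dhcp-boot=pxelinux.0      # boot file handed to PXE clients
enable-tftp               # serve it via dnsmasq's built-in TFTP server
tftp-root=/srv/tftp       # pxelinux.0 and its config live here
```

Each machine then pulls its bootloader and installer image over the LAN at power-on, so no USB sticks or discs are needed.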
The problem is that spectrum is up for sale. Aside from governmental allocations, there really isn't "open spectrum" for specific classes of devices unless a manufacturer has a monopoly on both that slice of spectrum AND that type of device. Spectrum is either assigned to organisations based on money (auctions), or it is put up as a free-for-all, which results in communications that are either underutilised or overcrowded.
I bet if the FCC started allocating specific spectrum to specific industries (not organizations) the interference could drop quite quickly.
One thing has me curious about how these benchmarks were prepared: was the benchmark software compiled on the target platform/CPU combination with all of that platform's available optimisations?
Many of these benchmarks ship a binary/library (or a set thereof) built for a single target platform, namely the one the benchmark's original developers were working on. It is usually pre-compiled, usually for Intel, on an Intel system, by an Intel compiler, with Intel optimisations, or at least two of the four. That same binary is then run against whatever systems have compatible architectures, which has a high potential to produce skewed results on non-Intel platforms, since not all manufacturers use the same optimisations.
While this specific processor may not be as great as it should have been, I feel that benchmarks are flawed in themselves and must be taken with a grain of salt until real-world software, outside a lab-style environment, is tried on the hardware.
Any circuit design must contain at least one part which is obsolete, two parts which are unobtainable, and three parts which are still under development.