Special chips for offloading the CPU are standard today. Your PC has a sound chip, a GPU and various other chips (or even whole cards) which offload the CPU. It scales just fine. That was in no way the downfall of the Amiga. Commodore being inept was.
So where do I insist on anyone using any particular tool?
You have a maturity problem because it's not enough for you to dislike something; you have to justify it by calling it obsolete, and by insulting people who disagree with that justification.
Maybe when you get to be half my age you'll know better. But I'm not holding my breath, youngster. Now git off my lawn!
I am quite curious how it's dogmatic to simply use the best tool for the job, without trying to make the claim that one or the other is "obsolete".
You've got a serious maturity problem, kiddo.
The editor you use is just as obsolete as vi. This is what you fail to comprehend and acknowledge. You make this into some form of "newer exists so the older is inherently inferior", and this is precisely wrong. Just because something does not fit in your brain does not mean it is obsolete. If anything, it is likely to mean *you* are obsolete.
I refuse to believe you use whatever serves your current purpose best because you present a dogmatic attitude. That means you will choose based on incorrect criteria. That you do not even realize this makes your position worse.
Where have I "insisted" there's "One True Editor"? Your arguments against vi grow out of some kind of small minded misunderstanding you acquired back when I was selling my first software commercially (if, indeed, you used it during the 1970's), and that is what I reacted against. Nothing else.
Maybe by the time you act your age you'll realize how absurdly you're acting, spitting venom on an editor you last used as intended decades ago (if ever).
That is the whole point of git: it lets you work exactly that way. It's not like a centralized version control system. Using git you do indeed get multi-file undo, and the ability to keep your own private history of exactly what you did and when, to a level no IDE can match with its built-in tools. All without affecting the branch you're developing against until you're ready to merge.
And when you work that way, you're not tied to any IDE or editor, and you can do all manner of interesting statistics and analysis on your change history. Heck, you can even work fully without an editor or IDE and just integrate code using git.
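A minimal sketch of that workflow (branch name and commit messages made up, and your flow may differ):

    git checkout -b scratch          # private branch; the shared branch stays untouched
    git commit -am "try approach A"  # checkpoint spanning every changed file
    git commit -am "try approach B"
    git reset --hard HEAD~1          # multi-file undo, straight back to approach A
    git log --stat                   # your own record of exactly what changed, and when
    git rebase -i main               # tidy the history before anyone else sees it

Nothing reaches the branch you're developing against until you merge or push.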
The "average user" does not use an IDE, does not do programming and will use neither nano nor vim, and thus has no relevance to this
"The rest of the world" tends to use vi and emacs. You're the one stuck in some form of rut, and posting to get that rut validated. Get with the times, vi today is not what you remember from back when you (possibly) had a valid opinion on the subject.
So the proper way is for the IDE to intentionally break the code?
I can't even begin to fathom such an approach. Worst of all, it's probably considered proper by a huge chunk of the people working with C++. Small wonder software projects are perpetually late, over budget and bug-ridden.
Are you trying to make the case that embedded systems requiring text-file reconfiguration, and low-bandwidth connectivity to such systems, are something *rare*?
Indeed you are.
Remind me never to pay attention to your quite uninformed opinion.
Oh, and anything on a command line is "obscure" and "unintuitive" until learned. I don't find vi to be either. That you do speaks more of you than of vi.
Must be nice to only work with systems having plenty of resources and huge default installs, and always over high-bandwidth, low-latency connections. Not all of us have that luxury, and the one editor you can always count on to exist on a limited system, and the best editor for use over high-latency connections, is vi.
If you know it well, that is. Otherwise you simply can't work efficiently under those conditions. Which you evidently never have to do.
If they needed more than 30 days, they could have said so quite amicably within a week, without lawyers (or with them, but framed as a friendly request), and asked the researcher to withhold release until they were ready. Instead they barged in, lawyers blazing, trying to suppress any and all release of information.
That is an attempt to sweep the whole thing under the rug, and deserves only information release and the Streisand effect as a response.
How many vulnerabilities are there in Ubuntu 6?
39 total vulnerabilities, 7 high severity, 27 medium severity, 5 low severity.
Couldn't find that. It's in NVD though, if you're really interested.
Windows XP is FIFTEEN YEARS OLD
No it's not. It's still under development, and there is almost nothing left of the codebase from the original XP when you have patched up an XP install.
Otherwise Linux is TWENTY-FOUR YEARS OLD, but you know, writing that in all caps as if it means something just seems silly. Because it is.
And hardly any of the Linux vulnerabilities allow a web-client attack, as a whole slew of the Windows ones do. That's because Linux does not have a web browser with kernel access. Therefore, the low-level vulnerabilities in Linux are not like the low-level vulnerabilities you are used to.
A case can possibly be made for lossless, especially for complex music. A fan of Meshuggah can usually tell the difference between lossy-compressed and lossless versions of their tracks. However, a good MP3 encoder at a decent bitrate is so good that it's otherwise very hard to beat chance in an ABX test.
As to 24 bits and any sample rate over 60 kHz? Only useful for trying to blow up stereo systems and deafen people. The dynamic range of 16 bits alone is more than a healthy, young human can make use of outside the laboratory (or, for the most part, even inside it), and is much, much higher than that of any music. And if there is magical information hiding above roughly 20 kHz, we simply can't hear it - or see it with any existing measurement tools, which means we can't record it either.
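For reference, the arithmetic behind the 16-bit claim (standard figures, nothing controversial):

    dynamic range of 16 bits = 20 * log10(2^16) ≈ 96 dB

Even very dynamic recordings rarely span much more than about 60 dB between their quietest and loudest passages.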
4K downsized to 1080p gives a great deal more detail: where native 1080p in effect gives each block of four samples the same pixel, downsampled 4K gives each of those four a different source pixel to average together. It's very noticeable.
That downsampling can be done before the pixels are pushed to your TV and will yield the exact same benefits.
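For the curious, the simplest form of that downsample is a 2x2 box filter (real scalers use fancier kernels, but the principle is the same). Each output pixel averages four distinct source pixels:

    out(x, y) = ( src(2x, 2y) + src(2x+1, 2y) + src(2x, 2y+1) + src(2x+1, 2y+1) ) / 4

Done before transmission or done in the TV, the math is identical.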
I agree it is important to ask questions. But the questions should not take the strawman form, or be asked from a position of apparent and obvious ignorance about not only the subject, but the very study being questioned. There is no added value, and in fact negative value, in asking that kind of question.
Trying to think laterally about the issue and find other causes for the observed behaviour is completely different from ignorantly spouting off unfounded criticism of test methodology.