Comment Re:Bit = Binary Digit (Score 2, Interesting) 151
There is not a 1:1 correlation, but there might be now. With all physical bits being data bits we could gain up to 100% more data bits on the same area.
For the uninformed: with today's technology, a 1:1 correspondence between data bits and magnetic "bits" is nearly impossible. We have to interleave data bits with clock bits so that we can count runs of equal bits. The data bits are therefore encoded onto this interleaved stream of data and clock/sync bits before anything is actually stored on the physical medium. If the bit-patterned layout doubles as a clock/sync mechanism, we can store only the data bits (plus error-correcting codes, of course).
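To illustrate why clock bits eat capacity, here is a minimal sketch of the classic FM (frequency modulation) scheme, where every data bit is preceded by a clock bit. The function name `fm_encode` is ours, and real drives use denser schemes such as MFM or RLL; this is only meant to show the overhead the comment refers to.

```cpp
#include <vector>

// Sketch of classic FM encoding: every data bit is preceded by a clock
// bit that is always 1 (a guaranteed transition the read head can lock
// onto). Half of the physical bits are spent on clocking.
std::vector<int> fm_encode(const std::vector<int>& data) {
    std::vector<int> physical;
    for (int bit : data) {
        physical.push_back(1);   // clock bit
        physical.push_back(bit); // the actual data bit
    }
    return physical;
}
```

With bit-patterned media, the island layout itself supplies the transitions, so (in principle) the physical stream can carry only data bits plus ECC.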
When a change to the program can break a piece of code that the compiler conveniently wrote for me, yes, of course it's a language problem. Given the number of articles, web pages and C++ books that prominently mention workarounds for this issue, I'm clearly not alone in considering this to be a trap.
Do you realize that in almost every language you can break the whole program with a small change? How is this different from, say, creating an infinite loop by adding a ";" after a while (...) expression?
Overloading numeric types is a nice strawman, and conveniently lets you ignore the stream operator issue that I mentioned. Well done.
It is not a strawman. Operators are overloaded all the time in mathematics. Words are overloaded in human language. Why is overloading in a programming language so hard to accept?
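The mathematics analogy is easy to make concrete. Here is a minimal sketch (the `Fraction` type is hypothetical, purely for illustration) where `+` means exactly what it means on paper:

```cpp
// A minimal fraction type: '+' behaves exactly as in mathematics.
struct Fraction {
    int num, den;
};

// a/b + c/d = (a*d + c*b) / (b*d); no reduction, to keep it short.
Fraction operator+(Fraction a, Fraction b) {
    return Fraction{a.num * b.den + b.num * a.den, a.den * b.den};
}
```

Writing `half + third` instead of `half.add(third)` is the whole point of the feature.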
Again ignoring the issue I brought up. I'll make it a little more explicit. Take a reference to an element of a vector. Keep appending to the vector until it is reallocated. The "reference" now points to garbage. No temporary objects involved. I can guarantee you that anyone familiar with other OO languages would be quite surprised by that behavior.
It is a characteristic of std::vector then, not of references. The same happens if you hold a pointer or an iterator to an element of the container. The standard clearly states that references and iterators to elements of a std::vector may be invalidated after a reallocation. Use std::list and this problem goes away. There is no data structure without downsides; it is not C++'s fault.
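Both workarounds can be sketched in a few lines (function names are ours, for illustration): reserving capacity up front means push_back never reallocates, and std::list is guaranteed by the standard never to invalidate references to existing elements.

```cpp
#include <cstddef>
#include <list>
#include <vector>

// With room reserved first, push_back does not reallocate, so a pointer
// into the vector stays valid (guaranteed by the standard).
bool vector_ref_survives_with_reserve(std::size_t extra) {
    std::vector<int> v;
    v.reserve(extra + 1);                    // no reallocation will occur
    v.push_back(42);
    int* first = &v.front();
    for (std::size_t i = 0; i < extra; ++i) v.push_back(0);
    return first == &v.front();
}

// std::list never invalidates references to existing elements, no matter
// how much it grows.
bool list_ref_always_survives(std::size_t extra) {
    std::list<int> l{42};
    int* first = &l.front();
    for (std::size_t i = 0; i < extra; ++i) l.push_back(0);
    return first == &l.front() && *first == 42;
}
```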
Default assignment operator: All you need to do is add a pointer to your class and suddenly code that you don't see causes a bug. Yes, IF you know about this you can work around it. That's true of anything.
You mean you changed the class definition by adding pointers, without worrying about maintaining the class invariant (which is to protect those pointers), and you blame the language? You might want to learn a bit more about OO programming.
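Maintaining that invariant is the classic "rule of three": a class that owns a raw pointer defines its copy constructor, copy assignment, and destructor together. A minimal sketch (the `Buffer` class is hypothetical):

```cpp
#include <cstddef>
#include <cstring>

// A class owning a raw pointer must follow the "rule of three":
// copy constructor, copy assignment, and destructor, all defined.
class Buffer {
    char* data_;
    std::size_t size_;
public:
    explicit Buffer(std::size_t n) : data_(new char[n]()), size_(n) {}
    Buffer(const Buffer& o) : data_(new char[o.size_]), size_(o.size_) {
        std::memcpy(data_, o.data_, size_);      // deep copy, not pointer copy
    }
    Buffer& operator=(const Buffer& o) {
        if (this != &o) {
            char* copy = new char[o.size_];
            std::memcpy(copy, o.data_, o.size_);
            delete[] data_;                      // free the old storage
            data_ = copy;
            size_ = o.size_;
        }
        return *this;
    }
    ~Buffer() { delete[] data_; }
    char& operator[](std::size_t i) { return data_[i]; }
};
```

Rely on the compiler-generated copy here and two objects end up sharing one allocation, with a double free at destruction.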
Well, yes, when people see an operator, they "think" they know what it's doing. It's interesting to me that in this very first case of overloading, Stroustrup ran into this fundamental problem, and had to choose a somewhat obscure operator to get around it.
I'm sure you enjoy writing str.append(".") in your favorite language with no operator overloading at all. Even funnier must be 4 + 5 and 4.2 + 5.3, where + is already overloaded for integer and floating-point types.
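The stream operator mentioned earlier is just another overload of the same kind. A minimal sketch (the `Point` type and `to_string` helper are ours, for illustration) of letting a user type print like the built-ins:

```cpp
#include <ostream>
#include <sstream>
#include <string>

// A hypothetical Point type that prints itself via the usual
// stream operator, exactly like the built-in types do.
struct Point { int x, y; };

std::ostream& operator<<(std::ostream& os, const Point& p) {
    return os << '(' << p.x << ", " << p.y << ')';
}

// Helper so the formatted result can be inspected as a string.
std::string to_string(const Point& p) {
    std::ostringstream os;
    os << p;
    return os.str();
}
```

Whether `<<` was a good spelling for "insert into stream" is debatable, but the mechanism is the same one that makes `cout << 42` work.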
References: references aren't what most people think of as references.
Most people I know are aware that references are not smart pointers. Why would anyone think that? They are just like pointers that cannot be reseated. The only unusual usage is when you use them to keep temporary objects alive. Recall that C99 had to introduce a new keyword, restrict, to mitigate the aliasing problems that have always hurt the optimizer; by using references instead of pointers you avoid almost all of those problems.
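That "unusual usage" is lifetime extension: binding a temporary to a const reference keeps it alive for as long as the reference lives. A minimal sketch (function names are ours, for illustration):

```cpp
#include <string>

std::string make_greeting() { return std::string("hello"); }

// Binding a temporary to a const reference extends the temporary's
// lifetime to that of the reference -- the "unusual usage" above.
std::string observe_temporary() {
    const std::string& r = make_greeting(); // temporary kept alive by r
    return r + " world";                    // still valid here
}
```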
Remember, one of the definitions of cross platform is that it still works after a system restart.
Somehow I doubt the LCD could stand the amount of pressure a typical controller button receives. And who would be able to play without feeling the button? I don't want to have to look at the controller only to make sure my finger is over the correct button.
Not as low-hanging as you seem to think. They would have to buy those mod-chips, do some reverse engineering, and test the updates to make sure they don't break any revision of the Wii hardware. Most mod-chips seem to be upgradeable anyway; and it's not like buying a new mod-chip costs more than a Wii game.
In short, it's too risky, will cost too much, and will be mostly ineffective (everyone who bought one mod-chip won't mind buying a second one that is resistant to said mod-chip-killer update).
The funny thing is that the homebrew community does much more to fight piracy than Nintendo does. They ban any app that might even remotely be used to facilitate piracy. And still Nintendo goes after homebrew.
There are a few others you should count in (like the Paper Mario series and Yoshi's Story). I tried playing Super Paper Mario on the Wii once and gave up about 10 minutes into the game, tired of just pressing A to advance to the next dialog line. I never actually got to play any of it. I would call that a miss too. It's amazing how many game designers think the player needs to be schooled for minutes on the mechanics and/or story before they can start enjoying it; even more so when it's Nintendo committing the blunder with their very mascot. New Super Mario Bros goes back to the origins (once again), where you just play.
Really? Because the only thing that I dislike is actually the multiplayer mode. Almost every interaction between the characters is meant to be disruptive. Either you stay far away from your ally, or one of you will accidentally end up killing the other. Maybe the fun is in obstructing the other player? Well, not for me, or anyone I invited to play with me. Good ol' Contra is much more enjoyable.
Nonsense. Wii's flash memory is 512 MB, not 512 KB. It's unthinkable for a video player to be larger than, say, 50 MB; and at that mark it would still fit.
On the NVIDIA side, CUDA's performance and flexibility are still typically and substantially higher than what is achievable via OpenCL. That is no surprise: CUDA exists to optimally exploit NVIDIA's GPU architecture, whereas OpenCL is a vendor- and architecture-"neutral" platform that doesn't give as much card-specific control as CUDA (or CAL, in AMD's case).
That's not true. I've run many equivalent CUDA and OpenCL kernels on NVIDIA cards, and they both perform the same, pretty much in accordance with those benchmarks.
There's no reason for OpenCL code to be any slower than CUDA code (the same compiler is used, with only small changes in the frontend). Maintainability, on the other hand... with CUDA you can launch a kernel as if you were calling a function; with OpenCL you have almost a dozen setup steps (it reminds me of writing Win32 applications directly against the raw Win32 API). Function and operator overloading, templates... those are nice things to have at your disposal when you need them. Let's hope they make an "OpenCL++" standard too.
Uh... no, you are wrong. Quadros and GeForces have a lot of differences in their internal hardware. Just because they "do the same thing" (they draw triangles really, really fast) doesn't mean they are the same. GeForces, for example, don't have optimizations for drawing points and lines, nor do they assume you are abusing obsolete APIs like immediate-mode drawing; both are common in CAD applications and almost useless in games.
Are you suggesting people use XFS? Why would you do that? That's beyond mean.
I tried migrating all my data to XFS once. About a month later I was desperately migrating it all back to ext3. Not only does XFS have serious design flaws that make it one of the most fragile filesystems around, the driver implementation can even corrupt the stored data (that is, not just the directory structure, but the file contents too) during normal operation. Two weeks after setting up a server with XFS, I had to shut it down to fix the file system errors; after another two weeks of uptime I had to do it again, but this time only so I could back the data up and reinstall the system on an ext3 partition (same disk, not a single bad block to this day).