It's still not clear how an application rendering Japanese text could end up making the bad assumption. If it's using a Japanese font, why would it bother to switch to another font when the character to be rendered exists in the current font? Does the problem only occur when the current font *doesn't* contain the character, so the application goes hunting for it and ends up picking up glyphs from multiple, stylistically inconsistent fonts? That seems like an application issue: a failure to keep the font consistent during this fallback process. It points again to the notion that we shouldn't even be doing that, but rather force applications to use "Unicode fonts" if they want to support Unicode text properly. This seems like a font issue more than a Unicode issue. Does Unicode have separate code points for italic and bold characters in other languages? Why should that information be part of the character instead of the font?
What I still don't understand is, if there's only one code point for this character, where are the multiple renderings coming from? Multiple fonts? Is the source of the problem that Japanese fonts are providing a bad glyph/rendering for this character that doesn't match the style of the rest of the font, or is it that they are unable to provide both glyphs because there's only one code point? Would there still be a problem if they just changed their glyph to the other style; could this just be considered a bug in Japanese fonts?
So, pardon my apparent inexperience with Unicode, fonts, and glyphs, but this looks like an application or framework issue wherein someone decided that we should switch fonts in the middle of a string if, in some circumstances, another font contains a glyph for the character we're after. Is that what's happening? Why shouldn't all text-drawing operations be restricted to the currently active font, making it the responsibility of the application developer and user to pick a font that contains all the glyphs their application requires? This doesn't really seem like a fault in Unicode, but in how the application or framework outsmarted itself in trying to switch fonts. Following the K.I.S.S. principle, this never would have happened, right? The application should simply stick to a single font. Also, under what circumstances (if any) would that "wrong" character ever be desired? Is it ever correct? Does it have a similar meaning in those other circumstances?
I have been reading the comments for 20 minutes because I don't understand Japanese, but I still don't understand the problem. There's a Japanese character called "no" (の); it looks very much like a lowercase English/Latin "e" rotated clockwise about 80 degrees and then flipped over the vertical axis. Is this being mixed up with something else or rendered wrongly? Can anybody provide examples of what it's getting mixed up with, or of how or where it's being rendered improperly?
The reply is not responding to the sentence with "perception" in it. It's replying to the prior sentence.
Why then, at 40, do I still get weekly contacts from recruiters looking to fill local development positions? Is it possible your comment applies to a local market, possibly in Silicon Valley, but not to the Midwest? Or is it possible that every one of these recruiters is just trying to fill a quota of prospects despite the fact that the employer they're hunting for couldn't afford me?
From the user perspective, I think Wikipedia is correct. To any coder using a sparse array, it just looks and acts like an array where most of the elements are 0 or null. From the implementation perspective, when you know this is the case, there are some optimizations you can make to significantly reduce the memory usage of such a structure, which is why the term "implementation" was used to describe the relation of sparse arrays to maps. Internally, sparse arrays are implemented as maps so that space doesn't need to be allocated for all those zeros. Although a sparse array's implementation doesn't define it, it is a notable detail about how they are generally implemented. So if you want to split hairs over the definition of "is," Wikipedia probably has the better definition, but it's also not incorrect to say that sparse arrays are implemented as maps.
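To make the point concrete, here's a minimal sketch (in Python, with names I've made up for illustration): the structure presents an array-like interface, but the backing store is a map that only holds the non-default entries.

```python
class SparseArray:
    """Array-like wrapper over a dict: only non-default elements use memory."""

    def __init__(self, length, default=0):
        self.length = length
        self.default = default
        self._data = {}  # index -> value; the backing map

    def __len__(self):
        return self.length

    def __getitem__(self, i):
        if not 0 <= i < self.length:
            raise IndexError(i)
        # Missing keys read back as the default, just like a dense array of zeros.
        return self._data.get(i, self.default)

    def __setitem__(self, i, value):
        if not 0 <= i < self.length:
            raise IndexError(i)
        if value == self.default:
            # Storing the default frees the slot instead of occupying one.
            self._data.pop(i, None)
        else:
            self._data[i] = value
```

A "million-element" array with one nonzero entry costs one map slot, not a million:

```python
a = SparseArray(1_000_000)
a[123_456] = 7
```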
I thought the LCA involved in an H-1B visa is supposed to prevent paying the visa worker a wage lower than what would be paid to a native worker doing the same job. I can't find any reputable source for this, but Wikipedia states, "The LCA also contains an attestation section designed to prevent the program from being used to import foreign workers to break a strike or replace U.S. citizen workers." Is this a misconception that is not in fact backed up by any real requirement?
I created an easter egg in a product called Fourth Shift Edition for SAP Business One (http://findaccountingsoftware.com/directory/softbrands/fourth-shift-edition-for-sap-business-one/) maybe 5 years ago that rendered an interesting sequence of John Conway's Game of Life (starting from the acorn state) while displaying names of developers in a marquee. Trying to remember how to access it... I think it was just typing "LIFE!" while looking at the about dialog.

I work pretty efficiently, so it was hard to keep me busy at times. The easter egg was a (self-inspired) way to do something interesting related to the software I was working on for a couple hours while waiting to see what came next... and I thought it might someday briefly amuse someone too accustomed to nothing but business all day long. (The software is for ERP.)

I showed it to my boss and a few coworkers who, if I recall, all had positive reactions... or at least no negative reactions I'm aware of. I'm not sure if anyone would have expressed a negative reaction to me if they had one, because I feel pretty well respected there. I'm not sure anyone who knew about it is still with the company. Maybe I should tell a couple support people about it in case they feel like using it as a diversion while researching a solution to someone's inquiry, especially since it's Easter time.
OK, the spooky thing is that I read your comment, looked at my clock, and it said 2:10.
No, it's more appropriately celebrated than any typical holy holiday. It's a day of humor and jest whose purpose has not been lost. And celebrating it has more real and lasting effect than a typical holiday. What better way to lighten a spirit for a holiday than laughter? Nothing compares.
Am I reading the wrong article? The article I read doesn't contain the word "Fractal".
The mere fact that you appear to be putting people who use certain technologies on a scale from "less smart" to "smart" directly contradicts your assertion that complexity is subjective. If complexity were subjective, you would have simply referred to C++ users as "familiar with C++" and Ruby users as "familiar with Ruby," rather than putting them on a scale from Ruby == less smart to C++ == smart. But since you use the terms "smart" and "less smart," you imply that there is an absolute scale of complexity which can be measured by the smartness required to understand it.
All fine and good when there's no clean-up to be done. However, if you're in an error handler after opening a database connection, creating a temp file, and allocating a block of shared memory, now you've just leaked resources all over the place by skipping all the clean-up. Or you have to duplicate all that clean-up in this and every subsequent error handler within the function.
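One common way to keep that cleanup centralized rather than duplicated across every error handler, sketched here in Python with stand-in resources (the original context is presumably C-style error handling, where a `goto cleanup` chain plays the same role):

```python
from contextlib import ExitStack

log = []  # records open/close order so the unwinding is visible

class Resource:
    """Stand-in for a DB connection, temp file, or shared-memory block."""
    def __init__(self, name):
        self.name = name
        log.append(f"open {name}")

    def close(self):
        log.append(f"close {self.name}")

def process(fail_at=None):
    """Acquire three resources; ExitStack closes every one on any exit path."""
    with ExitStack() as stack:
        for name in ("db", "tempfile", "shm"):
            r = Resource(name)
            # Register cleanup immediately after acquisition, so an error
            # later in the function can never skip it.
            stack.callback(r.close)
            if name == fail_at:
                raise RuntimeError(f"error while holding {name}")
```

If `process` fails while holding the temp file, the stack unwinds in reverse order: the temp file and the database connection are both closed, and nothing leaks, without any per-handler duplication.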