Seems a shame that the heirs to the 68xx legacy these days just put out chips based on commodity standard architectures (ARM, PPC, etc.).
Is Freescale doing anything with the 68000 series these days? I assume the related-but-not-identical ColdFire is still in production, but last I looked it hadn't advanced very much since the last 68060s in the 1990s.
No need for guns; just station a half-dozen guard birds on the roof.
Holy fucking shit, who cares? If this were done by LETTER WIDTH, we wouldn't see the problem.
EXACTLY! That is why you do not want "N characters". I don't understand what your problem is here.
It is true that for this example most programmers would scan from the start, finding the longest string that fits with an ellipsis at the end.
What I was trying to point out is that if you want to be clever, you can guess at an insertion point. But 11 bytes is just as good a guess as 11 "characters", and since finding 11 characters requires scanning, you are not saving any time.
You are perfectly correct that after you stick the ellipsis in there you need to test whether the rendering fits, and perhaps try another guess. The idea is that you will do fewer measurements, that such insertion can be done using byte offsets, and that "N characters" is a useless concept that never enters into it.
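The byte-offset guess can be sketched in a few lines of C++ (the helper name is mine, purely illustrative): jump straight to byte N, then back up over UTF-8 continuation bytes so the ellipsis never lands mid-character. No scan of the prefix is needed.

```cpp
#include <cstddef>
#include <string>

// Hypothetical helper: snap a guessed byte offset back to the nearest
// UTF-8 character boundary at or before it. Continuation bytes match
// 10xxxxxx, so any byte that is NOT of that form starts a character
// (or starts an error, which also counts as a boundary here).
std::size_t snap_to_boundary(const std::string& s, std::size_t guess) {
    if (guess > s.size()) guess = s.size();
    while (guess > 0 &&
           (static_cast<unsigned char>(s[guess]) & 0xC0) == 0x80)
        --guess;
    return guess;
}
```

Counting "11 characters" would require scanning from the start; this is O(1) and, as argued above, no worse a starting point for the measure-and-retry loop.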
So you mean get rid of wide-character encodings and only use UTF-8 where Unicode characters are needed? I agree.
" HP wasn’t able keep up with its competitors. The company’s revenue share dropped from 25.5 percent to 23.8 percent, while its market share by volume dropped 2.6 percentage points to 20 percent, "
For anyone keeping score, this statement means 'HP is not keeping up, even though they are still in the lead; the gap is merely narrower'.
"Dell increased revenue and shipments, but it too wasn’t quite able to keep up with the market. Its share of revenue and shipments each slipped by just under 1 percentage point to 17.1 percent and 19 percent respectively"
This is a little less blatantly wrong. It is strictly true, since they said keep up with *the market*, which in aggregate grew, but Dell is the #2 vendor, and being #2 in the market isn't such a dire thing.
" IBM had the third-largest server revenue, followed by Lenovo and Cisco Systems, while Lenovo was third by server shipments, "
This particular statistic is pretty screwed up, because it doesn't correct for the fact that IBM sold off its x86-based server business partway through the year in some parts of the world, and at the end of the year in others. The article mentions this, but fails to recognize that the year-ago IBM figures still include business that is now Lenovo's. Lenovo's big year-over-year growth is mostly a change of ownership.
"Cisco’s year-over-year server revenue growth of 44.4 percent was well above average for the industry, and suggests the company is not done capturing incremental market share in the server market"
Impressive and all, but given that *after* that increase they still lag behind four other companies, big year-over-year percentages are to be expected. Just as a little crowding at the top of a market shouldn't cause anyone to write off the leader, a large percentage gain by a relatively small player shouldn't send everyone into an excited state. You could write similarly exciting stories about some of the 'lower tier' vendors, but since those aren't exciting brands, they get omitted.
Do you really think 12 happy faces fit in the same space as 12 letter 'i'?
This is why it is pointless to do such counting.
And what you propose would split between a letter and a combining accent, so it really is no better at avoiding trashed strings.
Basically, as soon as the words "N characters" come out of your mouth you are wrong, and your description does exactly that for many paragraphs. Don't feel too bad, however, as many, many other people, including ones working for Apple, are wrong as well.
PS: the surrogate order does not depend on the byte order in UTF-16. You might want to check what you are doing if you thought that.
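To illustrate that point (a sketch; the function name is my own invention): the lead/trail order of a surrogate pair is fixed by the encoding algorithm itself, and endianness only swaps the bytes within each 16-bit code unit.

```cpp
#include <cstdint>
#include <utility>

// Encode a supplementary-plane code point (above U+FFFF) as a UTF-16
// surrogate pair. The high (lead) surrogate always comes first; a
// UTF-16LE stream and a UTF-16BE stream differ only in the byte order
// inside each 16-bit unit, never in the order of the units themselves.
std::pair<std::uint16_t, std::uint16_t> to_surrogates(std::uint32_t cp) {
    cp -= 0x10000;
    return { static_cast<std::uint16_t>(0xD800 + (cp >> 10)),
             static_cast<std::uint16_t>(0xDC00 + (cp & 0x3FF)) };
}
```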
A lot of misinformed programmers use the term "Unicode" to mean encodings other than UTF-8, typically UTF-16 or UCS-2. For instance, a function called "toUnicode" is often a translator from UTF-8 to UTF-16. Therefore, when people say "Unicode strings" they almost always mean non-byte strings. I propose the best solution is to eliminate all such strings. It is true that byte strings would encode UTF-8 and thus be "Unicode", but the hope is that this would be so standard that there would be no need to ever specify it, and they would just be called "strings".
I don't think there are any useful algorithms that need random access, except for searches that do not require the index to be "exact". Therefore you certainly do not need a reliable string[i]. You can make it return a pointer to byte i, which will work for most uses (e.g. searching for ASCII, or replacing one ASCII byte with another).
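A sketch of why byte-level access is enough for those uses (the function name is mine): in UTF-8, bytes 0x00-0x7F only ever appear as complete ASCII characters, never inside a multi-byte sequence, so a byte-wise search or replace of an ASCII character cannot corrupt anything.

```cpp
#include <cstddef>
#include <string>

// Replace every occurrence of one ASCII byte with another, operating
// purely on bytes. This is safe in UTF-8 because all lead and
// continuation bytes of multi-byte characters are >= 0x80, so 'from'
// can never match in the middle of a character.
void replace_ascii(std::string& s, char from, char to) {
    for (std::size_t i = 0; i < s.size(); ++i)
        if (s[i] == from) s[i] = to;
}
```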
Some complex searches do want to jump arbitrary amounts ahead, but do not require any "exact" location. Instead they want "a character near the middle" and so forth. In this case it might be useful to produce an iterator that points near byte i. A simple UTF-32 one would jump to byte i and then back up to the last non-continuation byte, unless there are 3 or more in which case it would stay where it first went (as that is the start of an error). Ones that return normalization forms would be much more complex. Something like this:
utf16_iterator i(string, string.length()/2);
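That positioning rule can be sketched as a hypothetical free function rather than a full iterator class (my name, my simplification; the normalization-form variants the post mentions are omitted):

```cpp
#include <cstddef>
#include <string>

// Jump to byte i, then back up over UTF-8 continuation bytes
// (10xxxxxx) to the start of the character. A valid character has at
// most 3 continuation bytes, so if we have backed up 3 and are still
// looking at one, byte i sits inside an error sequence and we stay
// where we first landed.
std::size_t position_near(const std::string& s, std::size_t i) {
    if (i >= s.size()) return s.size();
    std::size_t j = i;
    int steps = 0;
    while (steps < 3 && j > 0 &&
           (static_cast<unsigned char>(s[j]) & 0xC0) == 0x80) {
        --j;
        ++steps;
    }
    if ((static_cast<unsigned char>(s[j]) & 0xC0) == 0x80)
        return i;  // still on a continuation byte: an error, stay put
    return j;
}
```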
Yes, we have UTF-8. You do know that it can hold non-English, right?
No, you are wrong. What I propose does not fail any worse than what I think you are proposing, which is "search N Unicode code units forward and put the ellipsis there".
My scheme will not add an error. Either it will find the start of a character, or, if there are enough trailing continuation bytes, it will know that the string ends with an error and add the ellipsis after that (thus neither adding an error nor removing one). As other posters here point out, there is absolutely no need to count Unicode code points, as it has nothing to do with how many "characters" there are.
A better scheme would be to actually measure the rendered string to see if it fits, and then do a weighted binary search for the correct location to place an ellipsis, giving the longest string containing an ellipsis that fits. This still assumes that shorter strings render in a shorter area, which is not strictly true, but true enough that I think this may work.
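Here is a sketch of that measure-and-search idea, with a caller-supplied measuring callback standing in for real text measurement (all names are mine, not any real API, and it assumes, as stated above, that a longer prefix never measures narrower):

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Binary-search byte offsets for the longest prefix + ellipsis that
// fits within max_width, snapping candidate offsets back to UTF-8
// character boundaries so no character is ever split. The 'measure'
// callback is a stand-in for querying the rendering engine.
std::string fit_with_ellipsis(
        const std::string& s, double max_width,
        const std::function<double(const std::string&)>& measure) {
    if (measure(s) <= max_width) return s;       // whole string fits
    const std::string ellipsis = "\xE2\x80\xA6"; // U+2026 in UTF-8
    std::size_t lo = 0, hi = s.size();           // empty prefix assumed to fit
    while (lo < hi) {
        std::size_t mid = (lo + hi + 1) / 2;
        // back up so we never cut a UTF-8 character in half
        while (mid > lo &&
               (static_cast<unsigned char>(s[mid]) & 0xC0) == 0x80)
            --mid;
        if (mid == lo) break;  // no boundary in range; fine for a sketch
        if (measure(s.substr(0, mid) + ellipsis) <= max_width)
            lo = mid;   // fits: try a longer prefix
        else
            hi = mid - 1;
    }
    return s.substr(0, lo) + ellipsis;
}
```

Each loop iteration costs one measurement, so a long string needs only O(log n) measurements instead of one per candidate truncation point.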
What I recommend is that anything that takes text input assume that the input could be any possible arrangement of the data units (i.e. any stream of bytes for UTF-8, and any stream of 16-bit words for UTF-16).
Don't "sanitize", because that is simply a step that produces a new string and feeds it to the next step. You have not fixed anything, because an error in the "sanitizing" will still crash (as quite a few posters here have tried to point out). The work must be done at the point where the data is translated to something other than a string; in this case that is the glyph layout in their renderer. That code should assume the input is ANY possible arrangement. Ideally it should draw something visible showing that there was an error, placed between glyphs so that it is clear where in the string the error was.
Relying on previous steps to only produce valid data is not only unsafe (as this bug shows) but also wasteful, because of the extra scanning of the data. And it is either lossy (errors are translated to a valid sequence, so two different inputs map to the same result) or a denial of service (an exception is thrown and all further processing is lost). Unfortunately, handling that is completely obvious for most data somehow becomes confusing to programmers when they are presented with Unicode.
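One way to sketch that policy (my function name; a real implementation would do this during glyph layout rather than produce a new string, and this sketch skips overlong-form and surrogate-range checks): every invalid byte maps to exactly one visible U+FFFD marker, so nothing is ever thrown and the error's location in the string stays visible.

```cpp
#include <cstddef>
#include <string>

// Decode-tolerantly copy a UTF-8 string, replacing each invalid byte
// with one U+FFFD replacement character. Never throws; valid
// sequences pass through untouched.
std::string show_errors(const std::string& in) {
    static const std::string bad = "\xEF\xBF\xBD"; // U+FFFD in UTF-8
    std::string out;
    std::size_t i = 0;
    while (i < in.size()) {
        unsigned char c = static_cast<unsigned char>(in[i]);
        std::size_t len = c < 0x80 ? 1
                        : (c & 0xE0) == 0xC0 ? 2
                        : (c & 0xF0) == 0xE0 ? 3
                        : (c & 0xF8) == 0xF0 ? 4 : 0;
        bool ok = len != 0 && i + len <= in.size();
        for (std::size_t k = 1; ok && k < len; ++k)
            ok = (static_cast<unsigned char>(in[i + k]) & 0xC0) == 0x80;
        if (ok) { out.append(in, i, len); i += len; }
        else    { out += bad; ++i; }  // one visible marker per bad byte
    }
    return out;
}
```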
From that description it does sound like the string is still valid. However if the display is crashing on a certain sequence containing an ellipsis, I am not clear why you can't construct that string directly, rather than rely on the insertion of the ellipsis.
It does sound like they may rely on "sanitizing", but with a far more complex scheme than I was aware of. This is still wrong, and maybe far worse: they are detecting and rejecting patterns containing an ellipsis and some other character. That is INSANE! Any such work should be delayed until the VERY LAST moment; in this case their glyph layout should simply not crash on any possible arrangement of bytes or words in the incoming string. This is very much the same stupidity I was ranting about for UTF-8. Nothing used to crash because you put misspelled words in your text and tried to print it. Apply the same logic to UTF-8 and Unicode. It is not hard and it seems really obvious, but for some reason Unicode turns some otherwise really smart programmers into total idiots.
In cases where a project is no longer actively being maintained, SourceForge has in some cases established a mirror of releases that are hosted elsewhere. This was done for GIMP-Win.
Editor's note: Gimp is actively maintained, and the word "mirror" is quite misleading here, as a modified binary is no longer a verbatim copy. Download statistics for Gimp on Windows show SourceForge offering over 1,000 downloads per day of the Gimp software. In an official response to this incident, the Gimp project team reminds users to stick to official download methods. Slashdotters may remember the last time news like this surfaced (2013), when the Gimp team decided to move downloads from SourceForge to their own FTP service.
Therefore, we remind you again that GIMP only provides builds for Windows via its official Downloads page.
Note: SourceForge and Slashdot share a corporate parent.