Comment Re:Reference count synchronization across threads (Score 1) 296

Actually the reason for the Global Interpreter Lock is that the CPython developers decided it had less overhead than making the reference counters atomic variables (plus you would still need some kind of locking when modifying any object with a reference count greater than one, though that is such a tiny fraction of what a typical Python program does that it is probably irrelevant).

I personally have doubts that this is still true, but the argument is not implausible. I wonder whether their measurements were done on older systems; modern ones are much better at atomic operations.
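
The trade-off in question, as a minimal C++ sketch (the struct and function names are mine, not CPython's actual internals): under a global lock an incref is one ordinary add, while the atomic version is a locked read-modify-write on most hardware.

    #include <atomic>

    // Hypothetical object headers, for illustration only.
    struct PlainObject  { long refcount; };              // protected by a global lock
    struct AtomicObject { std::atomic<long> refcount; };

    // With the global lock held, an incref is a single ordinary add.
    inline void incref(PlainObject* o) { ++o->refcount; }

    // Without it, every incref becomes an atomic read-modify-write,
    // which is noticeably more expensive on most hardware.
    inline void incref(AtomicObject* o) {
        o->refcount.fetch_add(1, std::memory_order_relaxed);
    }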

Comment Re:Lol (Score 1) 248

Holy fucking shit, who cares? If this was done by LETTER WIDTH, we wouldn't see the problem-

EXACTLY! That is why you do not want "N characters". I don't understand what your problem is here.

It is true that for this example most programmers would scan from the start, finding the longest string that fits with an ellipsis at the end.

What I was trying to point out is that if you want to be clever, you can guess at an insertion point. But 11 bytes is just as good a guess as 11 "characters", and since counting 11 characters requires scanning the string, you are not saving any time.

You are perfectly correct that after you stick the ellipsis in there you need to test whether the rendering fits and perhaps try another guess. The idea is that you will do fewer measurements, and that the insertion can be done using byte offsets; "N characters" is a useless concept that never enters into it.
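
Here is what "insertion by byte offset" can look like for UTF-8, as a minimal sketch (the function name is made up, and the trailing-error handling I describe in another comment is omitted for brevity): the guess is snapped back over continuation bytes so the ellipsis never lands inside a character.

    #include <string>

    // Truncate a UTF-8 string near a guessed byte offset and append an
    // ellipsis. The guess is snapped back to a character boundary by
    // skipping continuation bytes (those matching 10xxxxxx).
    std::string truncate_utf8(const std::string& s, size_t guess) {
        if (guess >= s.size()) return s;  // already short enough
        while (guess > 0 &&
               (static_cast<unsigned char>(s[guess]) & 0xC0) == 0x80)
            --guess;
        return s.substr(0, guess) + "\xE2\x80\xA6";  // U+2026 HORIZONTAL ELLIPSIS
    }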

Comment Re:Lol (Score 1) 248

Do you really think 12 happy faces fit in the same space as 12 letter 'i'?

This is why it is pointless to do such counting.

And what you propose would split between a letter and a combining accent, so it really isn't any better at avoiding trashed strings.

Basically, as soon as the words "N characters" come out of your mouth you are wrong. Your whole description just does that for many paragraphs. Don't feel too bad, however, as there are many, many other people, including some working for Apple, who are wrong as well.

PS: the surrogate order does not depend on the byte order in UTF-16. You might want to check what you are doing if you thought that.
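
If you want to convince yourself, here is a short demonstration (a sketch, nothing more): the high surrogate always comes first in the sequence of 16-bit code units; endianness only affects how the bytes inside each unit are serialized.

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Encode U+1F600 (a happy face, fittingly) as a UTF-16 surrogate pair.
        uint32_t v  = 0x1F600 - 0x10000;
        uint16_t hi = 0xD800 + (v >> 10);    // high (lead) surrogate
        uint16_t lo = 0xDC00 + (v & 0x3FF);  // low (trail) surrogate
        // Prints "D83D DE00": the unit order is identical in UTF-16LE and
        // UTF-16BE; byte order only swaps the two bytes inside each unit.
        std::printf("%04X %04X\n", hi, lo);
    }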

Comment Re:I am amazed (Score 1) 248

A lot of misinformed programmers use the term "Unicode" to mean encodings other than UTF-8, typically UTF-16 or UCS-2. For instance, a function called "toUnicode" is often a translator from UTF-8 to UTF-16. Therefore, when people say "Unicode strings" they almost always mean non-byte strings. I propose that the best solution is to eliminate all such strings. It is true that byte strings would encode UTF-8 and thus be "Unicode", but the hope is that this would become so standard that there would never be a need to specify it, and they would simply be called "strings".

Comment Re:I am amazed (Score 1) 248

I don't think there are any useful algorithms that need random access, except for searches that do not require the index to be "exact". Therefore you certainly do not need a reliable string[i]. You can make it return a pointer to byte i, which will work for most uses (i.e. searching for ASCII and replacing one ASCII byte with another).

Some complex searches do want to jump arbitrary amounts ahead, but they do not require any "exact" location; they want "a character near the middle" and so forth. In this case it might be useful to produce an iterator that points near byte i. A simple one producing UTF-32 would jump to byte i and then back up to the nearest non-continuation byte, unless there are 3 or more continuation bytes in a row, in which case it would stay where it first landed (as that is the start of an error). Ones that return normalization forms would be much more complex. Something like this:


      utf16_iterator i(string, string.length()/2); // i now points near middle of string
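
The positioning rule itself, for a UTF-8 string, might look like this (a sketch under my reading of the rule; utf16_iterator above is hypothetical too): back up over continuation bytes, but if a lead byte is not found within 3 steps, stay at byte i, since it can only be the start of an error.

    #include <string>

    // Find a position "near byte i" that is safe to decode from. A valid
    // UTF-8 character has at most 3 continuation bytes (10xxxxxx), so if
    // backing up 3 bytes never reaches a lead byte, byte i starts an error
    // and we stay where we first landed.
    size_t near(const std::string& s, size_t i) {
        for (size_t back = 0; back <= 3 && back <= i; ++back)
            if ((static_cast<unsigned char>(s[i - back]) & 0xC0) != 0x80)
                return i - back;  // lead or ASCII byte: a character starts here
        return i;
    }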

Comment Re:Lol (Score 1) 248

No, you are wrong. What I propose does not fail any worse than what I think you are proposing, which is "search N Unicode code units forward and put the ellipsis there".

My scheme will not add an error. Either it will find the start of a character, or, if there are enough trailing continuation bytes, it will know that the string ends with an error and add the ellipsis after that (thus neither adding an error nor removing one). As other posters here point out, there is absolutely no need to count Unicode code points, as their count has nothing to do with how many "characters" there are.

A better scheme would be to actually measure the rendered string to see if it fits, and then do a weighted binary search for the correct location to place the ellipsis, yielding the longest string, ellipsis included, that fits. This still assumes that a shorter string renders in a shorter area, which is not strictly true, but it is true often enough that I think this would work.
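
An unweighted version of that search, as a sketch (the measure callback is hypothetical; a real one would come from the text layout engine, and a weighted search would interpolate by width instead of halving):

    #include <functional>
    #include <string>

    // Binary-search byte offsets for the longest prefix that still fits
    // within max_width once an ellipsis is appended. Offsets are snapped
    // back over UTF-8 continuation bytes so no character is ever split.
    std::string fit_with_ellipsis(
        const std::string& s, double max_width,
        const std::function<double(const std::string&)>& measure) {
        if (measure(s) <= max_width) return s;  // whole string already fits
        auto snap = [&](size_t i) {
            while (i > 0 && (static_cast<unsigned char>(s[i]) & 0xC0) == 0x80)
                --i;
            return i;
        };
        size_t lo = 0, hi = s.size();  // assume "…" alone fits; prefix of hi does not
        while (hi - lo > 1) {
            size_t mid = snap(lo + (hi - lo) / 2);
            if (mid <= lo) break;  // no character boundary strictly between them
            if (measure(s.substr(0, mid) + "\xE2\x80\xA6") <= max_width)
                lo = mid;
            else
                hi = mid;
        }
        return s.substr(0, lo) + "\xE2\x80\xA6";
    }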

Comment Re: Lol (Score 1) 248

What I recommend is that anything that takes text input assume the input can be any possible arrangement of the data units (i.e. any stream of bytes for UTF-8, and any stream of 16-bit words for UTF-16).

Don't "sanitize", because that is simply a step that produces a new string and feeds it to the next step. You have not fixed anything because an error in "sanitizing" will still crash (as quite a few posters here have tried to point out). The work must be done at the point that the data is translated to something other than a string. In this case is is the glyph layout in their rendering. That code should assume the input is ANY possible arrangement. Ideally it should draw something visible showing that there was an error and place it between glyphs so that it is clear what location in the string the error was.

Relying on previous steps to only produce valid data is not only unsafe (as this bug shows) but also wasteful, because of the extra scanning of the data. And it is either lossy (because errors are translated to a valid sequence, so two different inputs map to the same result) or a denial of service (because an exception is thrown and further processing is lost). Unfortunately, error handling that is completely obvious for most kinds of data somehow becomes confusing to programmers when they are presented with Unicode.
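
To make that concrete, here is a sketch of such a decode step, under my assumptions about the interface (none of this is from a real rendering library, and the overlong/out-of-range checks are omitted for brevity): it never throws, never collapses two different inputs into one, and hands each bad byte to the caller flagged so the layout code can draw a visible error glyph at exactly that spot.

    #include <cstdint>
    #include <string>

    // Result of decoding one unit: either a code point, or the exact
    // erroneous byte, flagged so the renderer can show it visibly.
    struct Decoded {
        uint32_t value;   // code point, or the raw byte value on error
        int      length;  // bytes consumed (always at least 1)
        bool     error;   // true: draw an error indicator for this byte
    };

    // Decode one UTF-8 sequence starting at byte i. Accepts ANY byte
    // arrangement; on a malformed sequence it consumes exactly one byte
    // and reports it, losing no information and throwing nothing.
    Decoded decode_one(const std::string& s, size_t i) {
        unsigned char b = s[i];
        if (b < 0x80) return {b, 1, false};  // ASCII
        int len = (b >= 0xF8) ? 0 : (b >= 0xF0) ? 4
                : (b >= 0xE0) ? 3 : (b >= 0xC0) ? 2 : 0;
        if (len == 0 || i + len > s.size()) return {b, 1, true};
        uint32_t cp = b & (0x7F >> len);
        for (int k = 1; k < len; ++k) {
            unsigned char c = s[i + k];
            if ((c & 0xC0) != 0x80) return {b, 1, true};  // bad continuation
            cp = (cp << 6) | (c & 0x3F);
        }
        return {cp, len, false};
    }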
