Comment Re: Short of memory? (Score 3, Insightful) 165

What does "character" mean?

* Something represented by one Unicode code point? (making your statement a tautology)
* A grapheme cluster? (what most users would consider a character)
* A position in the character grid of a console?

Which brings us to the real question: to what extent do you want to support Unicode? Do you care about

* Grapheme clusters that take multiple code points to represent? (letters with multiple diacritics, unusual letter/diacritic combinations, etc.)
* Right-to-left languages? (Hebrew, Arabic, etc.)
* Languages where characters merge together such that computer output looks more like handwriting than type? (see above)
* Languages where "fixed" width fonts use two different widths, giving "single width" and "double width" characters? (Chinese, Japanese, Korean)
* Characters outside the Basic Multilingual Plane? (rare Chinese characters, dead languages, made-up languages, rare mathematical symbols)

Once you have worked through that design decision it will help you make others. What you find is that "length in Unicode code points" and "Unicode code point n" really aren't much more useful than "length in UTF-k code units" and "UTF-k code unit n". Either is fine for sanity-checking string length or iterating through a string looking for a delimiter. Neither is much use for anything more than that unless you are doing a very limited implementation.

UTF-32 seems enticing initially but turns out to be fairly pointless: by the time you get to caring about non-BMP characters you are probably also going to care about combining characters etc., and it will massively increase the size of the vast majority of text.

UTF-8 vs UTF-16 is something of a toss-up. UTF-16 lets you get away with treating each unit of the string as one "character" for much longer, which may be considered either a blessing (because you don't care about the cases where it doesn't work) or a curse (because you realise your assumptions were wrong much later, after basing much more code on them). UTF-8 is smaller for text with lots of Latin characters, UTF-16 is smaller for text with lots of CJK characters. UTF-8 is the usual choice on *nix systems and internet protocols; UTF-16 is the encoding chosen by Windows and Java.
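A quick sketch in Python 3 (whose strings are sequences of code points) illustrating both the grapheme/code-point gap and the size trade-off:

    # "é" written as 'e' plus a combining acute accent: one user-perceived
    # character (grapheme cluster), but two code points.
    s = "e\u0301"
    print(len(s))                             # 2

    # Size trade-off between encodings:
    print(len("héllo".encode("utf-8")))       # 6 bytes (Latin-heavy text wins)
    print(len("héllo".encode("utf-16-le")))   # 10 bytes
    print(len("日本語".encode("utf-8")))       # 9 bytes
    print(len("日本語".encode("utf-16-le")))   # 6 bytes (CJK-heavy text wins)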

Comment Re:Shrug (Score 4, Interesting) 161

Ways in which IPv6 sucks or sucked:

1: mechanisms for interoperability were bolted on later, not included as core features that every client and router should support and enable by default. The result is that relays for the transition mechanisms are in seriously short supply on the internet and often cause traffic to be routed significantly out of its way.
2: the designers were massively anti-NAT. As a result we don't have any interoperability mechanisms that go with the flow of NAT; instead we have two incompatible interoperability mechanisms, one of which doesn't work with NAT at all and the other of which makes itself unnecessarily fragile by fighting the NAT rather than going with it. The company behind the latter mechanism also disabled it by default for machines on "managed networks"*, presumably because they were afraid of annoying corporate network admins.
3: there was lots of dicking around with trying to solve other problems at the same time rather than focusing on the core problem of address shortage. For example, for a long time it was not possible to get IPv6 PI space because of pressure from people who wanted to reduce routing table size. Stateless autoconfiguration and the elimination of NAT seemed like good things at the time, but they raised privacy issues and added considerable complexity to home/small business deployments.
4: there was little incentive to support it, and so the time when you can use an IPv6-only system as a general internet client or server without resorting to transition mechanisms seems as far off as ever.

* Defined as any network with something Windows thinks is a domain controller.

Comment Re:If you don't want to upgrade your box (Score 1) 100

BS.

Ramdrives have several advantages.

1: they are explicitly volatile. Application developers don't know your use case and therefore often err on the side of preserving your data over power failures, so they use calls like fsync. Even when the app doesn't use fsync, the OS will usually try to push the data out to disk reasonably quickly. If you know you don't care about preserving the data across power cycles, and you know you have sufficient RAM, then a ramdrive can be a much better option (see the sketch after this list).
2: operating systems don't have precognition of what data you will need, when, and on what timescales; they can only make educated guesses based on the accesses that have happened recently. If you know you will be accessing a particular set of data a lot and you want those accesses to be low latency, then manually bringing it into memory in advance can be a better option than letting the OS fetch each piece as it needs it.
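As a sketch of point 1 (Python, assuming a tmpfs ramdisk mounted at /mnt/ramdisk; the path is hypothetical):

    import os

    # A durability-minded app calls fsync after writing. On a ramdisk the
    # flush is essentially free; on durable media it forces a synchronous
    # write that can dominate small-transaction latency.
    with open("/mnt/ramdisk/scratch.dat", "wb") as f:
        f.write(b"intermediate data we can afford to lose on power loss")
        f.flush()
        os.fsync(f.fileno())   # cheap here, expensive on spinning disk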

Comment Re:Well it also depends on chipset (Score 1) 100

Hence you can have a situation where for things like PCIe and USB the high end stuff is behind.

USB? Yes. SATA? Yes. PCIe? No.

None of Intel's chipsets has PCIe 3.0 on the chipset, not even X99. The only PCIe 3.0 lanes on Intel systems so far have been those from the processor, and the lanes on the processor have been PCIe 3.0 since "Sandy Bridge-E" on the high end and Ivy Bridge on the mainstream. So the high end got PCIe 3.0 before the mainstream did. Furthermore, the high-end platforms have a lot more PCIe lanes. One lane of 3.0 is equivalent to 2 lanes of 2.0 or 4 lanes of 1.0, so in terms of total PCIe data rate even the venerable X58/ICH10/LGA1366 setup (which offers 36 PCIe 2.0 lanes and 6 PCIe 1.0 lanes, adding up to the equivalent of 78 PCIe 1.0 lanes) is comparable to the current upper-mainstream Z97/LGA1150 (which offers 16 PCIe 3.0 lanes and 8 PCIe 2.0 lanes, adding up to the equivalent of 80 PCIe 1.0 lanes).
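As a sanity check on that arithmetic, using the rough rule that each PCIe generation doubles per-lane throughput:

    # Express total PCIe capacity in "PCIe 1.0 lane equivalents".
    def lane_equiv(lanes_by_gen):
        return sum(lanes * 2 ** (gen - 1) for gen, lanes in lanes_by_gen.items())

    print(lane_equiv({2: 36, 1: 6}))   # X58/ICH10: 78
    print(lane_equiv({3: 16, 2: 8}))   # Z97 + LGA1150 CPU: 80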

If we look back to the transition from PCIe 1.1 to 2.0, then again it seems the high-end X38 chipset was the first desktop chipset (not sure about server ones) to feature PCIe 2.0 (note: at that time you could mix and match a high-end chipset with a low-end CPU or vice versa), and it had 32 lanes of it, which is more than any mainstream chipset ever had.

Comment Re:Cert Pinning (Score 1) 163

The approach taken by the HTTP key pinning draft is to require sites using it to have at least one spare key. The spare key can then be used to order a new cert in the event that the main key is compromised.
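For illustration, a pin set under that draft (which became RFC 7469) might look something like this Python sketch; the hash values are placeholders, and the second pin-sha256 covers the offline spare key:

    # Hypothetical HPKP response header with a backup pin. Real values are
    # base64 SHA-256 hashes of the subject public key info.
    headers = {
        "Public-Key-Pins": (
            'pin-sha256="PRIMARY_KEY_HASH_PLACEHOLDER="; '
            'pin-sha256="BACKUP_KEY_HASH_PLACEHOLDER="; '
            "max-age=5184000"
        )
    }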

Of course, if you were stupid/careless enough to lose your spare key or have it stolen too, then you have a problem :(.

Comment Re:That's will be one dead astronugh (Score 1) 70

The GP post is clearly bullshit; it's in no way in SpaceX's interests to deliberately kill an astronaut, especially given that the government has multiple contractors working independently on commercial crew transport. However, I have a feeling you are being overoptimistic.

overall chance of casualties from launch and landing activities of its Dragon capsule at 30-in-1 million

NASA management came out with similar figures for the space shuttle. http://sunnyday.mit.edu/accide...

Yet the actual crew loss rate for manned space vehicles has been in the single-digit percentages. The space shuttle had two crew-loss failures and, according to Wikipedia, 135 launches, putting the crew loss rate at about 1.5%. Apollo had one crew-loss incident during launch preparations and about 17 manned missions (including Skylab, Apollo-Soyuz and the aforementioned failure on the pad), putting the crew loss rate at about 5.9%. Soyuz has had 124 manned launches and two crew-loss failures, putting its crew loss rate at about 1.6%.

Hopefully both failure analysis and our understanding of spaceship components have improved, but I'd still consider a figure two orders of magnitude better than previous vehicles hard to believe without substantial data from actual flights.
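The arithmetic, for what it's worth:

    # Crew-loss incidents per manned flight, versus the quoted estimate.
    shuttle = 2 / 135          # ~1.5%
    apollo  = 1 / 17           # ~5.9%
    soyuz   = 2 / 124          # ~1.6%
    quoted  = 30 / 1_000_000   # 0.003%
    print(soyuz / quoted)      # ~538: a gap of between two and three orders of magnitude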

Comment Re:With apologies (Score 2) 65

Sometimes it works out that way; other times, however, things do converge. It used to be that every phone vendor needed their own chargers (or at least adaptor cables, but even that could be dodgy); nowadays they all use 5V and most of them use a micro-USB connector to deliver it. In the early days of power over Ethernet there were several competing standards; nowadays all the major vendors use the IEEE standard. In the early days of computer networks there were many standards; now pretty much everyone uses BASE-T Ethernet with TCP/IP unless they have a very good reason not to.

Comment Re:Better way? (Score 1) 289

That works if all you care about is past events, but it breaks down as soon as you have to handle future and regularly scheduled events and/or deal with external constraints that are defined in terms of local time.

For example suppose you divide the day into shifts, and the shifts are defined in terms of local time (and have been since long before your computer system came along). Once a year you will have a shift that is an hour longer than normal and once a year you will have a shift that is an hour shorter than normal. That means many calculations can no longer assume shifts or "days" (where a "day" is a group of shifts) of constant length.

And then there's the fact that you can't reliably convert future local times to UTC, because DST rules are at the whim of the legislature. If your users schedule a meeting at 9am local time a few months out, they will expect it to stay scheduled at that local time even if the government changes the mapping from local time to universal time between the meeting being scheduled and it happening. Have fun with meetings between users in different jurisdictions.
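A sketch of the usual mitigation (Python's zoneinfo here): store the wall-clock time plus the zone as the source of truth, and derive UTC as late as possible so that a rule change is picked up from an updated tz database:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # The user asked for "9am New York time"; keep that, not a UTC instant
    # computed at scheduling time.
    meeting_local = datetime(2026, 3, 2, 9, 0, tzinfo=ZoneInfo("America/New_York"))

    # Convert only when needed; if the DST rules changed in the meantime,
    # updated tzdata gives the answer the user expects.
    print(meeting_local.astimezone(timezone.utc))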


Comment Re:Man vs Machine? (Score 1) 289

Personally I suspect that slight variations in the length of a "wall-clock second" would be much less disruptive than a special-case 23:59:60 time which can't be represented in many time formats and is sufficiently rare that problems with its handling are likely to go unnoticed during testing.

Yes, a mechanism for adjusting the speed of clocks would be needed, but we already have such mechanisms to deal with the crap tolerances of most local clocks.
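For scale, a minimal smear calculation (the 24-hour window is an assumption, borrowed from Google's published leap-smear scheme):

    # Spreading one leap second over a 24-hour window slews the clock by
    # about 11.6 ppm, comfortably inside the crystal tolerances that NTP
    # already corrects for on commodity hardware.
    smear_window = 24 * 3600           # seconds
    rate_error = 1 / smear_window      # ~1.16e-5
    print(rate_error * 1e6, "ppm")     # ~11.6 ppm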

Comment Re:Where should I hold my Bitcoins? (Score 2) 161

Given the overall shady nature of the organisations surrounding Bitcoin, trusting a service to store your bitcoins is folly. So you have to store them yourself.

How you do that is a tradeoff between cost/inconvenience and risk.

The normal method of risk management for those holding large numbers of bitcoins is to have a "hot wallet" and a "cold wallet". The hot wallet is where you keep the bitcoins you need on a day-to-day basis; you accept that if you get hacked you have a good chance of losing its contents.

The cold wallet is where you keep the bulk of your bitcoins. You keep the keys to the cold wallet offline, and possibly consider using a secret sharing algorithm with parts of the key distributed between multiple secret locations (e.g. if you use a 2-of-3 secret sharing setup then the compromise of any one location won't cause loss or compromise of the secret).
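A toy sketch of the 2-of-3 idea (Shamir's scheme over a prime field; illustration only, a real deployment should use a vetted library):

    import secrets

    P = 2**127 - 1   # a Mersenne prime; the secret must be smaller than this

    def split(secret):
        # Shares are points on the random line f(x) = secret + a*x (mod P).
        # Any two points recover the line; one point alone reveals nothing.
        a = secrets.randbelow(P)
        return [(x, (secret + a * x) % P) for x in (1, 2, 3)]

    def recover(share1, share2):
        (x1, y1), (x2, y2) = share1, share2
        slope = (y2 - y1) * pow(x2 - x1, -1, P) % P
        return (y1 - slope * x1) % P

    shares = split(123456789)
    print(recover(shares[0], shares[2]))   # 123456789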

Comment Re:Nothing about proxy though (Score 1) 67

The real distinction is "partial coverage" vs "full coverage".

AIUI there have been solutions that divert the authentication/setup traffic via a US ISP but still allow the bulk traffic to flow directly between the user and the Netflix CDN. Presumably this works because the CDN servers don't re-check the geo-blocking. This is much cheaper than diverting everything through a proxy or VPN, but also much easier for Netflix to stop if they decide to do so.
