using static ground stations like LORAN,
Reminds me of something I've not yet found the answer to: why don't we have ground-based GPS transmitters in addition to the satellites? Wouldn't this give improved reliability / accuracy / easier maintenance in places where you need it (e.g. near ports), with standard GPS receiver equipment (rather than needing extra equipment, as with differential GPS)?
oops - I should have read more closely...
because the source and destination can pick new X and Y values with every transmission
I see now that _that_ is what you gain for the additional bandwidth cost
This is the RSA algorithm. It hasn't been broken in the last 30 years by the smartest people. Either that, or the government (NSA) knows how to break it and is keeping it under wraps.
The algorithm in mark-t's post is not the one described on http://en.wikipedia.org/wiki/RSA : I read it as a variant that (using the Wikipedia page's notation) makes {p,q} public instead of {n,e}, with a corresponding adjustment to the messages that need to be exchanged.
This relies on the discrete logarithm of (d6 = d5^Ys mod C) being difficult to solve from step 6 (with d6, d5 and C known to an eavesdropper; Ys is what you need to figure out to break the encryption), whereas the Wikipedia article's RSA algorithm relies more directly on factorising n as the difficult step.
3. The source and destination then compute Ys and Yd, respectively, such that their own X*Y is congruent to 1 mod (A*B). They do not share this information.
Should that be 1 mod ((A-1)*(B-1))?
I'm not convinced that relying on the discrete logarithm problem (at the cost of 4x as much network communication), rather than directly on the factoring problem (like the more commonly discussed public-key systems), adds any security: aren't the two problems of essentially identical complexity?
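For what it's worth, the suggested correction can be checked numerically. Here is a minimal sketch in Python (toy-sized primes and my own variable names, following the thread's A, B, C, X, Y notation rather than the poster's actual protocol) showing that taking the inverse mod ((A-1)*(B-1)) is what makes decryption round-trip:

```python
# Toy RSA-style key setup using the thread's notation:
# A, B are the secret primes, C = A*B is the public modulus, and the
# exponent pair (X, Y) must satisfy X*Y = 1 mod ((A-1)*(B-1)),
# i.e. the inverse is taken modulo Euler's totient, not modulo A*B.
# Tiny primes for illustration only; real keys use huge primes.

A, B = 61, 53            # secret primes (toy-sized)
C = A * B                # public modulus: 3233
phi = (A - 1) * (B - 1)  # Euler's totient of C: 3120

X = 17                   # one party's exponent, coprime to phi
Y = pow(X, -1, phi)      # its inverse: X*Y = 1 (mod phi); Python 3.8+
assert (X * Y) % phi == 1

m = 42                   # a toy message
c = pow(m, X, C)         # "encrypt": m^X mod C
assert pow(c, Y, C) == m # "decrypt": (m^X)^Y = m (mod C)
```

Computing Y as the inverse of X mod A*B instead would fail the final assertion for most messages, which supports the ((A-1)*(B-1)) reading.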
From both the article and the summary re:
The cause appears to be a known issue with the Google search engine, in which the pages of defunct web sites containing sensitive directories remain cached and available to anyone
This makes it sound like the issue is with Google's search engine, and makes light of the real issue: that at some point this information was published for all the world to see (or for search engines to index) and for anyone to cache (or write down, or memorize).
Insisting that search engines remove this information from their indexes and purge it from their caches is just sweeping the problem under the rug: you or I taking a quick peek on the internet to see whether our credit-card information has been published anywhere would get a false sense of security if the search engines pretended it wasn't there and that the security breaches had never happened.
*tin-foil-hat-time* It seems analogous to re-writing history books to cover up prior misdeeds.
It's very difficult to compare, as typing-speed measurements will be limited either to different people as well as different keyboard layouts, or at least to different amounts of exposure to each layout. And what about some control cases of randomly generated or alphabetical layouts?
An interesting hypothesis to test would be that any keyboard layout gives similar typing speeds (within a factor of 2 or so) once a user has enough experience with it - at least for things that can be typed with single key presses.
I _do_ have some personal experience with the (standard two-hand) Dvorak keyboard layout, which anyone can try by selecting that layout in their OS's keyboard settings (irrespective of their physical keyboard). A side effect is that you will be forced to learn to touch-type, since the letters printed on your standard keyboard will no longer bear any relation to what comes out on the screen!
Speaking entirely qualitatively - it was surprising how easy it was to learn, and a few times since I abandoned it I've gone back and found that it can be picked up again within an hour or two once learnt (just like riding a bike?). And as a few other posters have already mentioned, for typing normal English it feels more comfortable, as less finger movement is required on average.
However (and this is the reason I've abandoned using it) - the Dvorak layout is inappropriate for most uses apart from simply typing English, such as computer programming, working with spreadsheets, Linux command-line usage, etc.
This is because, by arranging the characters by their frequency in standard English, many non-alphanumeric characters that are rarely used in English prose but very frequently used for other tasks on a computer end up in very awkward positions, requiring you to type with the little finger (or even worse, shift + little finger). Here are some examples:
- ':' (used a lot in C++) is where shift-'z' is on QWERTY.
- '{' and '}' are at shift-'-' and shift-'=' on QWERTY.
- '\'' and '"' are at 'q' and shift-'q' on QWERTY.
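To make those examples concrete, here is a rough sketch of a QWERTY-key-to-Dvorak-output lookup covering just those keys (a hand-picked subset for illustration, not a full layout table; the dict and the `dvorak_char` helper are my own names):

```python
# What pressing a given QWERTY key produces under the standard Dvorak
# layout, for the awkward programming characters mentioned above.
qwerty_to_dvorak = {
    "q": "'", "Q": '"',  # quote characters sit at the QWERTY 'q' position
    "z": ";", "Z": ":",  # colon is shift + the QWERTY 'z' position (left pinky)
    "-": "[", "_": "{",  # braces move out to the QWERTY '-' / '=' positions
    "=": "]", "+": "}",
}

def dvorak_char(qwerty_key):
    """Return what this QWERTY keypress produces on Dvorak (subset only);
    keys outside the subset are passed through unchanged."""
    return qwerty_to_dvorak.get(qwerty_key, qwerty_key)

print(dvorak_char("Z"))  # -> :
```

A full mapping would of course cover every letter as well; the point here is just how far out toward the pinky these frequently-typed programming characters land.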
There are quite a few situations where life would be simpler if there were just one definitive instance of something - mostly indexes / official repositories of some kind - but sadly at the moment we have a multitude of these things, with no single instance providing 100% coverage:
- Airline flight schedules: presumably every airline has planned its timetable months in advance, but there is no obvious place to search across all of them (cf. www.nationalrail.co.uk, which I think does have the authoritative timetable across all mainline railways).
- List of all properties for sale in the country; in the UK the seller has to choose which estate agents to list with, and the buyer then has to go around a load of different agents to find out what's available. It'd seem obvious for this to instead be a nationalized thing - what added value do estate agents / letting introduction agencies provide anyway? (as distinct from letting agencies that also do some property management)
- Web search: there is scope for different search engines to provide fundamentally different types of search (e.g. text, image, audio), but why do we need more than one of each type?
- Scientific journals: pre-print servers (e.g. arXiv.org) are starting to solve this problem, but lots of papers still get published in obscure / less popular journals, and if your university library doesn't subscribe to that one then you can't easily read it - and once you leave university they become pretty much unavailable to the average member of the public.
Pretty much all the things I can think of that would be better as a single instance have the problem that they are run commercially - so there are problems with monopolies if we ever get just one of them, and the tin-foil-hat brigade might have something to say if they were nationalized (and in many cases they could be right, too: imagine if our primary news outlet were government controlled).
Free stuff seems to find its own level of singleton-like-ness: some projects keep just one instance (IMDB, CPAN, w3c.org), while others fork to meet differing preferences, like Linux distributions, tech news websites, etc.
"And remember: Evil will always prevail, because Good is dumb." -- Spaceballs