
Comment Modify ad blockers (Score 2, Interesting) 291

A simple hack for ad blockers, though it would require a few changes to browsers, is to display ads at 0% opacity and absolutely position them where they cannot be seen. With a few changes to the browser, what you want is for the rendering engine to render everything as usual off screen, then mirror the elements into a second page with the ads rendered invisible, so that JavaScript running on the page sees the off-screen page, possibly with simulated mouse and keyboard activity derived from what the actual user is doing (filtering out keystrokes other than cursor keys). But sites powered by advertising need to learn that they must adopt conventions that keep advertising reasonable and reasonably unintrusive. If they cannot make ends meet doing that, they should get off the web.

Comment Re: Technical solution: browser based boycott (Score 2) 220

The best defense is to make the law look stupid, and likewise those trying to take advantage of it. If Stallman had refused to deal with copyright law and thus draft the GPL, much of the free software movement would not have happened. The idea of diagonalisation goes back a long way. When faced with silly laws, diagonalise them, as the GPL and copyleft diagonalise copyright. I am just suggesting we prepare to do likewise. If many immediately prepare such a diagonal response, maybe that will make it clear such a law is stupid. Then also demonstrate the use of data compression techniques to programmatically construct links. As water flows round obstacles, so the advance of freedom must flow round legislative stupidity.

Comment Technical solution: browser based boycott (Score 4, Interesting) 220

We need a robots.txt-like file in the root which grants linking permission. Then in Firefox have an option which flags unlinkable destinations, and by default blocks such sites. Put the option in the first-run dialog. Then actively campaign against sites whose copyright stance is not in the spirit of the open web, GPL style. Have an "open web general license" which permits only other open-web-general sites to link to the licensed site. Word the license carefully. That is my thought.
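A minimal sketch of the idea in Python, assuming a hypothetical /linkpolicy.txt file and an "Allow: *" directive (both are illustrative inventions here, not an existing standard), with the fetcher abstracted out so the sketch stays network-free:

```python
# Sketch of a robots.txt-style linking-permission check. The filename
# /linkpolicy.txt and the "Allow: *" directive are assumptions made for
# illustration only; no such standard exists.
from urllib.parse import urlparse


def linking_allowed(url, fetch):
    """Return True if the site's root policy grants blanket linking permission.

    `fetch` is any callable taking a URL and returning the body as text,
    or raising on failure (a real browser would use HTTP here).
    """
    root = "{0.scheme}://{0.netloc}/linkpolicy.txt".format(urlparse(url))
    try:
        policy = fetch(root)
    except Exception:
        return False  # default-block sites that publish no policy
    return "Allow: *" in policy


# Example with a stubbed fetcher standing in for a real HTTP request:
fake_site = {"https://example.org/linkpolicy.txt": "Allow: *\n"}
print(linking_allowed("https://example.org/some/page", fake_site.__getitem__))  # True
print(linking_allowed("https://closed.example/page", fake_site.__getitem__))    # False
```

Defaulting to "block" when no policy file exists mirrors the boycott idea: silence is treated as refusal, which is what would pressure sites to publish a policy.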

Comment Re: Sigh (Score 3, Interesting) 418

Put another way, one limiting factor is the availability of a computational means to verify a correct guess. If the false positive rate is too high, as happens with an OTP, you have problems. And devising encoding schemes, rather than just encrypting textual data, is not hard. If, for example, you only need 2000 different words for your messages, you could start with a basic Forth and work thus:

( assume 'append' appends a string to a word list, and 'say' outputs and clears the word list )
: wHelp S" help" append ;
: wThe S" the" append ;
: wHomeless S" homeless" append ;
: mHelpThe wHelp wThe ;
: mA mHelpThe wHomeless ;
: s1 mA say ;

Now we can map these definitions to 16-bit tokens, padding with random definitions, and store random definitions where the real words go, giving a non-functioning decode vector. To decode, we need a list of words and the locations at which to insert them. One vector of 64k Forth words could be used in many ways depending on which entries are overwritten and what is put there. The 64k vector need not even contain the API, since we need only overwrite, say, v[435] with 'say' and v[2789] with 'append', put 'S" help"' etc. in the right places, and know that v[6789] is a correct code for mA. The secret is in the modifications necessary, and without both pieces you have nothing: just the vector gives you a random assortment of words defined in terms of other words.
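The vector-plus-patch idea can be sketched in a few lines of Python (a tiny vector instead of 64k entries, and indices chosen arbitrarily for illustration). The vector alone is random chaff; only the secret patch list turns one entry into a real message:

```python
# Miniature sketch of the scheme above: a public vector of random
# "definitions" plus a secret patch list. Sizes and indices are
# illustrative, not the 64k vector described in the text.
import random

random.seed(1)

# Public part: each slot is either a literal word or a list of other slot
# indices. Here every slot starts as random chaff.
SIZE = 100
vector = [[random.randrange(SIZE), random.randrange(SIZE)] for _ in range(SIZE)]

# Secret part: without these patches the vector decodes to nothing useful.
patches = {
    10: "help",         # literal words...
    11: "the",
    12: "homeless",
    40: [10, 11],       # ...and compound definitions, Forth-style
    41: [40, 12],
}

def decode(vec, patch, start):
    v = list(vec)
    for i, d in patch.items():
        v[i] = d
    def expand(i):
        d = v[i]
        return [d] if isinstance(d, str) else [w for j in d for w in expand(j)]
    return " ".join(expand(start))

print(decode(vector, patches, 41))  # help the homeless
```

Note that expansion only ever follows patched slots here; an interceptor holding the vector alone would have no way to tell which of the 100 entries matter.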

The issue for GCHQ is not unbreakability, but that the above could be implemented in a few lines of Perl or PHP, and if it spreads through some social medium, like a computational Twitter on acid, the effort required to search would be prohibitive, given the potential for false positives and the fact that most messages are just for fun.

The Indiana Pi Bill did not get passed, but many equivalently stupid laws have, and this will be yet another. You cannot pass a law requiring that maths magically become easy. Trying to do so causes collateral damage for no gain. But I guess politicians live in a different universe.

Comment Re: Sigh (Score 5, Interesting) 418

People often overlook the issue of verification. If you take a small structured dictionary which takes in, say, 128 bits, and outputs a nonsense poem using the words of the dictionary and some simple rules, you have a reversible procedure for turning 128-bit hashes into literary nonsense. Reverse the procedure and apply a simple check to the original 128-bit hash to see if it contains a message. The check may include things about the sender. The trouble for crackers here is that there are many such procedures. A simple software example is to append 'Borg' to a message, hash it with shasum, and see if the first two hex digits are f7, say, else discard. Then use evolutionary programs to find a short procedure which generates indices recursively for words in a video file [with feedback, so that the second index requires having the correct video file on hand]. Guessing a random 128-bit passkey is bad enough, but guessing a random procedure is far worse. Having everybody just [just!] use AES-128 will seem like paradise compared to the output of the computational arms race the UK government is inadvertently about to kick off.
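The 'Borg' example is short enough to write out. A sketch in Python (the marker f7 and the counter-based framing are the illustrative choices from the text; `shasum` computes SHA-1 by default, so hashlib.sha1 stands in for it):

```python
# The "sugar the hash, then check a marker" test described above: append
# 'Borg', SHA-1 the result (what `shasum` computes by default), and accept
# only if the digest starts with f7. The sender loop shows how to pick a
# framing that passes the check.
import hashlib

def is_message(text, marker="f7"):
    digest = hashlib.sha1((text + "Borg").encode()).hexdigest()
    return digest.startswith(marker)

# Sender: tweak a counter until the sugared hash carries the marker
# (roughly a 1-in-256 chance per attempt, so this terminates quickly).
base = "meet at the usual place"
n = next(i for i in range(100000) if is_message(f"{base} #{i}"))
stamped = f"{base} #{n}"
print(is_message(stamped))  # True
```

A cracker who does not know the suffix, the marker, or even that SHA-1 is involved has no check to run, which is the point: the procedure itself is part of the secret.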

I have fond memories of the old MS-DOS program insults.exe. It has not escaped my attention that one can take a 128-bit number [possibly the output of a sugared hash] and use bits from it as indices into tables to generate phrases. There is much fun to be had, and so many variations. The paper from way back about chaffing and winnowing will perhaps have more attention paid to it.

Comment Re: It's a business opportunity! (Score 4, Insightful) 320

If the vendor has not managed to produce a properly written, secure, bug-free piece of software by the 10th attempt, what faith should one have in the 11th? Software updates have led to bloat, bug tolerance and laziness. If vendors were required to ship working software, rather than anything they liked, we would have less software, but far less low-quality software. Oracle, Apple and Adobe have some amazingly well-written code lurking in their products, but it is buried under tons of bloated rubbish that should never have been considered fit to release.

Comment Start with bash and a better selection of tools (Score 1) 270

The first thing I write in bash is a command I call 'd'. I use it a lot. It functions like Forth's colon definition, in the sense that I write:

d cmd 'line1' 'line2' 'line3'

and thus can create new commands with a single line. It is not perfect, and I am still playing with the idea. Bash's array functions were never intended for Lisp-style metaprogramming, but bash is standard, and I want things I can easily run on a webserver over ssh, which rules out esoteric languages. But the above is an example of a template script: it takes input (either command-line parameters or stdin) and generates a new script based upon it. My version of d is 9 lines long.
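The template-script idea can be sketched as follows. This is a rough Python analogue, not the author's 9-line bash original (which is not shown in the text); the function name `d` and the use of a temporary directory in place of a real bin directory are illustrative choices:

```python
# Rough analogue of the 'd' command described above: given a name and some
# lines, emit an executable shell script, so a new command can be minted
# from a single invocation. The real version is 9 lines of bash; this
# Python sketch just illustrates the template-script pattern.
import os
import stat
import tempfile

def d(cmd, *lines, bindir=None):
    """Write `lines` into an executable script called `cmd` and return its path."""
    bindir = bindir or tempfile.mkdtemp()
    path = os.path.join(bindir, cmd)
    with open(path, "w") as f:
        f.write("#!/bin/sh\n")
        f.write("\n".join(lines) + "\n")
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return path

# One line creates a new two-line command:
path = d("greet", 'echo "hello"', 'echo "world"')
print(open(path).read())
```

The generated file can then be invoked like any other command, which is the Forth-colon flavour: definitions are cheap enough that you make them constantly.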

Why I should need to write ten tons of boilerplate to get anything done is beyond me. The beauty of BASIC and Forth is that you can do useful stuff with one or two lines. Likewise bash. Object-oriented programming and class structures are great tools for teaching CS students how to design a program. In the modern world, this has turned into a computational cocaine habit, with programmers routinely writing 100,000-line behemoths with more bugs than one cares to count, and no serious possibility of turning them into something reliable.

The old Unix philosophy was to have a good selection of simple tools and a means for combining them. Forth did similarly. When I code in PHP or JavaScript, I do the same, and keep things as simple as possible. Importantly, the cult of adapting computers to suit humans has a nasty downside: computing devices come to be seen as magic black boxes with flashing lights which may as well be powered by unicorns. Only by keeping things simple can we hope for programmers to have an end-to-end understanding of what they are doing, from the foundations upward.

Comment A Foundational Mathematical Logician's View (Score 1, Interesting) 274

The problem physics faces is that it is using mathematical methods which assume physically implausible foundations. It is then faced with the problem of incomplete knowledge. I shall illustrate the issues using metaphors that anybody with half an ounce of computer science common sense should get. (For reference, my area of doctoral studies was models of PA, in the region of mathematics which gave birth to modern computing.)

Consider modern hashing. If I know the correct input, I get the correct output. If I am off by one bit, but do not know which bit, and the input is 128 bits, I have a 1/128 chance of guessing the right correction and getting the correct output. If I am off by two bits, I have a 1/(128 choose 2) chance, and as the number of bit errors increases, this probability gets close to zero. Quantum mechanical effects occur when the number of bits of entropy gets small, so that this probability becomes experimentally distinguishable from zero. Something like that.
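The numbers behind that metaphor are easy to exhibit. A short sketch (SHA-256 is just a stand-in for "modern hashing" here):

```python
# Odds of correcting k unknown bit errors in a 128-bit input, and a
# demonstration that a one-bit near-miss yields a completely unrelated
# digest, so near-misses carry no information about how close you are.
import hashlib
from math import comb

for k in range(1, 5):
    print(f"{k} unknown bit error(s): 1 in {comb(128, k)}")

key = bytes(16)                      # a 128-bit input, all zeros
near_miss = bytearray(key)
near_miss[0] ^= 1                    # off by a single bit
print(hashlib.sha256(key).hexdigest()[:16])
print(hashlib.sha256(bytes(near_miss)).hexdigest()[:16])  # unrelated prefix
```

The first loop prints 1 in 128, 1 in 8128, and so on; the two digest prefixes share nothing, which is the avalanche property doing its job.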

Now consider that energy and mass are equivalent via Einstein's famous equation. Neglect the complex stuff for now. The current best theoretical idea has matter as vibrations in strings; for now I will take a conceptually simple version to illustrate. A short vibration in a long string takes time to travel, and if this speed is c (lightspeed) and the string is long and coiled, it will take time to reach a place where one particular observer can see it. Likewise photons have to reach us before they can register. Of course, interactions between matter through the electromagnetic field happen via photons.

The obvious explanation is that there is some hidden delay in the underlying physics so that only, say, 5% of the energy in the universe is visible to an observer at any time. What this '5%' actually is will follow from the underlying structure, but quite possibly this cannot be probed by conventional experimental means since it is necessary that the part of the universe experimented on needs to be held constant, thus precluding conventional experiments using physical objects. Again, this is a sketch idea to be pondered, not a claimed 'final theory'.

The thing is, if energy is invisible due to delay, but still contributes to the overall mass inside our universe, these 'dark energy' type sum mismatches might be the only evidence they are there at all.

But getting this right means getting the mathematical framework right, and mainstream theoretical physicists are still mostly using stuff done with methods that were beginning to become unstuck in the late 19th century. Issues with calculus gave rise to analysis using limits, and these were founded on arithmetic and set theory. But these last two assumed an infinitude of distinct objects with which to perform computations. It is known now that this is physically implausible. Thus one needs to use more strictly bounded arithmetics and recursive constructions using precisely accounted computational resources to form foundational models which can correspond to physically plausible structures. By studying such structures and limiting towards the ultimate capacity of the physical universe (think Bekenstein bound here) we will be better placed to sort out this theoretical mess. Current mathematical methods are simply not up to the task.

(google "John Allsup Mathematical Genealogy" and see where I fit in the Ph.D. tree to get an idea of the area I was trained in: life circumstances rendered a conventional career infeasible, which is why I have no academic reputation, but I have kept an eye on progress, and have kept my logical reasoning skills sharp, just in case.)

Comment Re: sTEM (Score 1) 219

As one of the authors of SICP said at the start of that videoed course, compsci is not about computers, and is not a science. I am slowly building a better nomenclature. It begins with two disciplines: turing mechanics and lambda theory. The first is about irreversible physical manipulations; the second is about supplying humanly intuitive meaning to those manipulations. I find it strange that, having so loved infinity and set theory as a student, I am now compelled to be ever more a finitist.

Comment Re: sTEM (Score 1) 219

Computation lies at the foundations of mathematics where I did my Ph.D., and compsci is the discipline of making it practically useful. But it is pure mathematics and electronics more than anything else. The science part is physics, and while that is often underemphasised, you cannot get far in modern practical computing without relying on consequences of physics. Ultimately, however, Maths+Logic+Physics+Computation need to be understood and taught as an integrated whole, and the rest of science built on top of it. Most areas of science where this foundation is not well laid tend to be riddled with elementary logic errors building on each other like a mad teddy bear's tea party.

Comment Re:what about git? (Score 5, Interesting) 87

Immerman's point is essentially right. Here is a more thorough opinion.

Git does not use SHA1 for cryptographic purposes. The use of SHA1 for cryptographic purposes is what should be deprecated. If major git repositories start calculating SHA256 hashes too, and keep an eye out for in-the-wild collisions, it will probably be OK. Git does not need to be attack resistant the way TLS does. In any case, it is worth rejigging the code so that the hash is done via a plugin and can be migrated, if this isn't already done. I haven't read the git source and am not sure, but it would be easy to get this done before it becomes a problem for git.

I use md5sum for a lot of applications which don't require cryptographic-strength security. Cryptography is the Formula 1 of computation, and just as most vehicles don't need to compete against an F1 car, many of the trickle-down uses of cryptographic hashes will be fine for a while. Git only has an issue if two versions of files in the same repo produce the same hash. In practice that means two compilable source files, rather than arbitrary meaningful input. That makes cracking much harder, since you have a language-recognition problem bolted onto the front end of your hash, and most potentially colliding inputs will be excluded by it (if one colliding file is a C file and the other is bad French poetry, it is clear which is intended; cryptographic purposes cannot rely upon such applications of commonsense recognition). Do not worry about Git.

As an exercise, try and write two valid Python3 files between 10 and 30 lines long importing only sys, re and glob, such that they have identical md5sum outputs. By reducing the input space for a hash, you can make collisions less likely. What is important about this attack is that there is a round trip forward through the hash, and then backwards to a different input. By looking at the information discarded by the nonlinear parts of the hashing algorithm (that is, the non-reversible steps) you can start to make meaningful sense of what the hash is doing. Interestingly, if you produce a language specification which permits fewer valid inputs than the number of possible hash outputs, it is in principle possible that no collisions will occur. Indeed it would be a good exercise for a beginning cryptanalyst to try and construct a language such that valid inputs were guaranteed to get different md5sum outputs.
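A toy version of that restricted-language observation can be run directly. This sketch enumerates every sentence of a tiny made-up language (the word list is arbitrary) and checks that md5 maps them all to distinct digests; with only 1000 valid inputs against 2^128 possible outputs, a collision would be astronomically unlikely:

```python
# The closing exercise in miniature: a language with far fewer valid
# inputs than possible digests, checked for md5 collisions by brute
# enumeration. The ten-word vocabulary is an arbitrary illustration.
import hashlib
from itertools import product

words = ["help", "the", "homeless", "feed", "cat", "now",
         "please", "soon", "here", "there"]
sentences = [" ".join(t) for t in product(words, repeat=3)]  # 1000 inputs
digests = {hashlib.md5(s.encode()).hexdigest() for s in sentences}
print(len(sentences), len(digests))  # 1000 1000: no collisions
```

Scaling the same check to a realistic language is of course infeasible by enumeration, which is exactly why constructing a grammar with a *guarantee* of collision-freedom is the interesting exercise.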

Comment Yes. Turing Mechanics doing Turing Mechanics (Score 1) 99

I have watched carefully for years (note my /. serial 987). I am descended academically from Turing, and after letting the mental elf numpties try to destroy my mind, and concluding that they cannot, I am confident enough to ring the doorbell and offer my assistance. I will form the UK Guild of Turing Mechanics for the purpose of putting Dear Alan's legacy straight. For reference, here is my entry in the mathemalogical family tree: http://www.genealogy.math.ndsu...
