Comment Re:XKCD (Score 2) 487

You are totally missing the point.

Instead of using an "alphabet" with 26 characters (or 52 with capitals, or 70-something with capitals and punctuation) and choosing a short random string, you use an "alphabet" with 5000+ ideograms (i.e., words) and choose a short random string of these words.

For simplicity, just suppose there are 5000 commonly used English words. Then there are 5000^n passphrases of length n (i.e., containing n words). Obviously, this is much, much bigger than 70-something raised to the n. It does not matter that it is smaller than 70-something raised to the number of characters in the passphrase.

As a matter of fact, my computer's word list contains about 95,000 words. Try to guess the password I will generate with the following algorithm:

Pick 7 random numbers between 1 and 95,000. Look up the word at each of those indices. Memorize the resulting seven words.

My PRNG yielded:
74019,69542,70792,42388,32916,63978,55632

which maps to:
purchasing persecute platitudes escalations consummation mum intoned
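For anyone who wants to try it, here is a minimal sketch of that generation step in Python (the word-list path and the use of the secrets module are my assumptions, not part of the recipe above):

    import secrets

    # Load a local word list; /usr/share/dict/words is an assumption,
    # use whatever dictionary file your system provides.
    with open("/usr/share/dict/words") as f:
        words = [line.strip() for line in f if line.strip()]

    # Pick 7 words uniformly at random with a cryptographically strong RNG.
    passphrase = [secrets.choice(words) for _ in range(7)]
    print(" ".join(passphrase))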

A quick calculation shows that such a scheme has about 115 bits of entropy, compared to less than 44 bits for a "character" password built from the same number of random tokens drawn from the 70-something-character alphabet.

So what's the big deal about using words instead of just longer random strings from the smaller 70-something-character alphabet? You would need a 19-character random string drawn from an alphabet of 80 symbols to get as much entropy as 7 words drawn from a dictionary of 95,000 words. Clearly, the latter is far easier to memorize than something like "DtnqaELdIA=vozSkC" and provides the same cryptographic strength.
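For anyone who wants to check the arithmetic, a quick sketch (the 72- and 80-symbol alphabet sizes are just the round figures used above):

    import math

    WORDS = 95_000   # words in the dictionary
    CHARS = 72       # a "70-something" character alphabet
    BIG   = 80       # the 80-symbol alphabet mentioned above

    print(7 * math.log2(WORDS))   # about 115.7 bits for 7 words
    print(7 * math.log2(CHARS))   # about 43.2 bits for 7 characters
    print(math.ceil(7 * math.log2(WORDS) / math.log2(BIG)))   # 19 characters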

Comment Re:And Harry Nyquist is rolling around in his grav (Score 3, Interesting) 255

"X. Significance of the results
Given the existence of musical-instrument energy above 20 kilohertz, it is natural to ask whether the energy matters to human perception or music recording. The common view is that energy above 20 kHz does not matter, but AES preprint 3207 by Oohashi et al. claims that reproduced sound above 26 kHz "induces activation of alpha-EEG (electroencephalogram) rhythms that persist in the absence of high frequency stimulation, and can affect perception of sound quality." [4]
            Oohashi and his colleagues recorded gamelan to a bandwidth of 60 kHz, and played back the recording to listeners through a speaker system with an extra tweeter for the range above 26 kHz. This tweeter was driven by its own amplifier, and the 26 kHz electronic crossover before the amplifier used steep filters. The experimenters found that the listeners' EEGs and their subjective ratings of the sound quality were affected by whether this "ultra-tweeter" was on or off, even though the listeners explicitly denied that the reproduced sound was affected by the ultra-tweeter, and also denied, when presented with the ultrasonics alone, that any sound at all was being played.
            From the fact that changes in subjects' EEGs "persist in the absence of high frequency stimulation," Oohashi and his colleagues infer that in audio comparisons, a substantial silent period is required between successive samples to avoid the second evaluation's being corrupted by "hangover" of reaction to the first.
            The preprint gives photos of EEG results for only three of sixteen subjects. I hope that more will be published.

In a paper published in Science, Lenhardt et al. report that "bone-conducted ultrasonic hearing has been found capable of supporting frequency discrimination and speech detection in normal, older hearing-impaired, and profoundly deaf human subjects." [5] They speculate that the saccule may be involved, this being "an otolithic organ that responds to acceleration and gravity and may be responsible for transduction of sound after destruction of the cochlea," and they further point out that the saccule has neural cross-connections with the cochlea. [6]

Even if we assume that air-conducted ultrasound does not affect direct perception of live sound, it might still affect us indirectly through interfering with the recording process. Every recording engineer knows that speech sibilants (Figure 10), jangling key rings (Figure 15), and muted trumpets (Figures 1 to 3) can expose problems in recording equipment. If the problems come from energy below 20 kHz, then the recording engineer simply needs better equipment. But if the problems prove to come from the energy beyond 20 kHz, then what's needed is either filtering, which is difficult to carry out without sonically harmful side effects; or wider bandwidth in the entire recording chain, including the storage medium; or a combination of the two.
            On the other hand, if the assumption of the previous paragraph be wrong -- if it is determined that sound components beyond 20 kHz do matter to human musical perception and pleasure -- then for highest fidelity, the option of filtering would have to be rejected, and recording chains and storage media of wider bandwidth would be needed."

Comment Re:And Harry Nyquist is rolling around in his grav (Score 1, Interesting) 255

You can't improve audio quality of *audible frequencies* by increasing resolution of the horizontal axis (sampling frequency) beyond a rate which surpasses the Nyquist frequency for human hearing.

Nyquist-Shannon notwithstanding, the range of human hearing is wider than 20 kHz.

http://www.cco.caltech.edu/~boyk/spectra/spectra.htm (a properly conducted experiment)

That said, doubling the sampling rate of an already-digitized signal isn't going to do anything: no new information is added. At best, the new signal will simply play each of the old signal's samples twice.
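Here is a tiny numpy sketch of that naive doubling (sample repetition, i.e. a zero-order hold; the tone and the rates are placeholder values I picked):

    import numpy as np

    fs = 44_100                         # original sample rate
    t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
    x = np.sin(2 * np.pi * 1_000 * t)   # a 1 kHz tone

    # "Double" the sampling rate by playing every sample twice.
    # The result is nominally at 2 * fs, but it contains no information
    # about the source that was not already present in x.
    x2 = np.repeat(x, 2)
    fs2 = 2 * fs
    print(len(x), len(x2))              # 441 -> 882 samples, same content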

Comment Re:fearmongering (Score 1) 266

If you are willing to believe in a Platonic universe, you must be willing to believe in string theory. The whole point of string theory is that it is the logical theory (in the sense of first-order logic) obtained by taking the "known" laws of physics as axioms. This is Platonic realism in a scientific context.

Stop skipping class and lecturing us.

Comment Re:fearmongering (Score 2, Insightful) 266

I see this as sane. The risk of terrorism has always been overblown. But there are literally tens of thousands (or even hundreds of thousands or millions) of black hats out there totally willing to steal your identity or crack your voicemail, like the Murdoch family did to anybody they wanted to investigate or intimidate.

Comment Re:"Trust but verify" rule fools some people (Score 1) 79

"Require an amazing conspiracy" is closer to what trust means in terms of security than "trust but verify". But it is still too weak for a security context. And in some ways, it is the polar opposite of what "trust" means in context.

In security (of the mathematical, physical, or professional kind), a "trusted source" is a source that you are compelled to believe, because without its input the security model would be impossible. Indeed, you want to have as few trusted sources as possible. For example, suppose you rely on random numbers to seed a cryptographic system. Then you must trust your random number generator, because it is impossible (in general) to verify that it is not biased in some way. Likewise, you must trust your algorithm, because it is impossible to verify that it is unbreakable.
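A concrete illustration of that kind of trust (my own example, not from the post above): when you pull key material from the operating system's RNG, you take its quality on faith, because no test you run on the output can prove the generator is unbiased or unpredictable.

    import os

    # 32 bytes of key material from the OS CSPRNG.
    # os.urandom is a *trusted* source here: nothing in these bytes can
    # verify that the generator behind them is unbiased or unbackdoored;
    # we have to take the kernel's word for it.
    key = os.urandom(32)
    print(key.hex())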

The fewer things in your security model you have to take on faith, the more secure your model is, all else being equal. So "trust, but verify" runs counter to the professional usage of the word "trust", because trusted things are unverifiable by definition (in this context).

In security, everything that is not trusted is untrusted. And untrusted sources get all the scrutiny that is economically efficient.

Comment Re:Mindless drivel (Score 1) 123

I don't want to be excessively harsh, but the summary was seriously a bunch of drivel. In silico either means it's data on the computer, or that you are simulating a biological process computationally. But as other posters have mentioned, unless you are purposely simulating evolution, mycoplasma sequences in your human databases aren't going to cause any "arms race." Yes, it seriously screws with validity, but that's a completely different issue.

You're still missing the point.

Methods to screen out junk contamination will all miss something. And the data representation of a genome gets copied and reused, as a cost (and time) saving measure. In other words, the contamination that survives the screening process will "survive" as a silicon representation.

This is a problem in the long term, since we will presumably be using the genomic data to eradicate diseases. So our use of contaminated data will select for diseases which cannot be screened.
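A toy model of that selection effect (entirely my own sketch, with made-up numbers, just to illustrate the argument): start with contaminant sequences that are mostly detectable, screen imperfectly, copy the surviving data forward, and the unscreenable fraction quickly dominates.

    import random

    random.seed(0)

    # Toy contaminant pool: True = the screen can detect it, False = it cannot.
    # Start with 95% detectable; the screen only catches 90% of those.
    pool = [random.random() < 0.95 for _ in range(10_000)]
    DETECTION_RATE = 0.90

    for generation in range(5):
        survivors = [detectable for detectable in pool
                     if not (detectable and random.random() < DETECTION_RATE)]
        pool = survivors * 2   # surviving data is copied and reused downstream
        unscreenable = pool.count(False) / len(pool)
        print(f"generation {generation + 1}: "
              f"{unscreenable:.0%} of surviving contamination is unscreenable")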

Comment Re:Checks and balances (Score 2) 384

Although overly-broad laws are a serious problem, the real problem has little to do with them.

The police are not trained in the law. They are trained to a 350-page handbook, and are trained that if they have any doubt about whether an action is legal, they should arrest or fine and let the courts sort it out. They are trained to hide behind their badge when they are wrong.

This is a classic economic externality. It costs a policeman or woman nothing to arrest or fine someone they will probably never see again. But doing so imposes enormous costs on all of us, through the direct costs of defense, and the social costs of operating courts beyond their capacity.
