Comment Re:Yes (Score 1) 712

This is a good point.

On balance, I would suggest a fountain pen for the requirements the asker mentioned. A Pelikan with a fine point would work perfectly well for about $100. (Sheaffer used to make some pretty decent cheap refillable pens for about $12; that's how I got my start in fountain penmanship.) I've had my Pelikan since 2004. Actually, I'm on my second one: when the first one broke (it fell a few feet and hit the edge of a metal trash can, cracking the celluloid), I emailed Pelikan's American distributor about it, and they sent me a new $100 pen for free.

Also, you can de-burr a fountain pen with the kind of tool any woodworking or cooking nerd would own: a diamond-surface hone. Just write on the hone for thirty seconds and the nib's tip will be smooth.

Finally, a fountain pen lets the user cultivate some style. Pressing harder makes the nib's tip flex more, so the traced line ends up wider. This is very nice for readability; note how only the crappiest of fonts are as heavy in the horizontal direction as in the vertical.

As a matter of fact, people ask about pens on /. every year or two, and it used to be that fountain pens were always the top answer. I don't know where this new batch of losers recommending "technical" pens came from. They are more likely geeks than nerds.

Comment Re:Checkmate. (Score 1) 374

It's not merely ideology.

Free trade is (more-or-less) impossible with a planned economy. The planners can manipulate prices to effectively control the global market. Closing trade with the Soviet bloc was meant to prevent that.

On the plus side, there was never really a large threat of armed conflict with the Soviet bloc. Carving up the third world to increase each bloc's resource endowments was about the only source of fighting.

Comment Re:How can this be ? (Score 4, Insightful) 404

You evidently don't understand how business development works.

Demanding 100% ROI in six years is not realistic. At a nominal 8% return, it will take about nine years to recover their money (by the rule of 72: 72/8 = 9). And that's nominal, i.e., assuming a "normal" rate of return based on the empirical average. YouTube just became profitable, so it will nominally pay for itself in nine years.
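A quick sanity check on that nine-year figure (a minimal sketch in Python; 8% is just the nominal return assumed above):

    import math

    rate = 0.08  # assumed nominal annual return
    # Years for an investment to double (100% ROI) under compound growth:
    years = math.log(2) / math.log(1 + rate)
    print(f"{years:.1f} years")  # ~9.0 years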

On the other hand, acquiring YouTube turned Google into a media company. Have you noticed how Google has been putting resources into combating copyright infringement of movies recently? Why do you think that is? To drive users to legitimately licensed, Google-owned media distribution channels, which increases the investment's rate of return.

They also control a massive content distribution infrastructure, putting pressure on other distribution networks. Cable television companies are finding that they must lower their prices for all of their services in order to compete with the internet -- the largest legitimate chunks of which are represented by YouTube, Netflix, and Hulu. Eventually, the cable companies will be nothing but ISPs with perhaps some "premium" content for subscribers. But even that is doubtful -- the media companies are much better off selling licenses to anybody who wants them. Including Google. The only thing keeping the cable companies at all relevant is their valuable networking infrastructure.

Either way, Google gets more eyeballs on their pages and more licensing deals for YouTube distribution.

They bought a lot more than $1.65B worth of market power.

Comment Re:This is too simple to fix (Score 1) 487

No, the probability distribution is fixed at the time it is sampled.

For example, if I generate a random password from a uniform distribution of letters, I can end up with a "the" in the middle of it. That does not make it more likely that any given string will follow the "the". As I said, the entropy is a property of the distribution from which a password is drawn.

Consider the sequence of values:

"the aaa"
"the aab" ...
"the car" ...
"the zzz"

What is the probability that "car" is chosen from this list, given that the choice is uniformly distributed? It is 1 in 26^3 (assuming the alphabet is entirely lowercase), not 1 in the number of three-letter nouns and adjectives. For the latter to hold, we would have to be drawing uniformly from:

"the ace"
"the bat" ...
"the car" ...
"the red"
etc.

You seem to be confusing the underlying probability distribution from which a password is sampled with some kind of conditional probability relating the occurrence of a string to the underlying probability distribution. But that is a non-issue. The attacker doesn't have even partial information with which to compute Bayesian statistics. He doesn't know a priori that "the" is a part of your password, and he doesn't know which underlying probability distribution you chose to use.
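To put the arithmetic in one place, here is a minimal sketch; the 1,000-word list size below is invented purely for contrast:

    import math

    # Uniform over all 26**3 lowercase three-letter suffixes:
    p_uniform = 1 / 26**3            # ~5.7e-5 chance of drawing "car"
    bits_uniform = math.log2(26**3)  # ~14.1 bits of entropy

    # Uniform over a hypothetical 1,000-entry list of three-letter words:
    p_words = 1 / 1000               # 1e-3 chance of drawing "car"
    bits_words = math.log2(1000)     # ~10.0 bits of entropy

    print(p_uniform, bits_uniform)
    print(p_words, bits_words)

Same string "the car" either way; the entropy differs because the distributions differ.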

Comment Re:This is too simple to fix (Score 1) 487

Y'see, reusing a string does not significantly add entropy. That is why zip compression works.

Not quite. Entropy is a property of the probability distribution from which a sample (i.e., a password) is drawn. Whether or not reusing a string adds entropy depends on the underlying distribution.

Zip is designed to be most effective on text and other probability spaces where repetition is likely. Zip will not work nearly as well on samples drawn from a uniform distribution. Conversely, including a repetition increases entropy exactly to the extent that a repetition is unlikely under the underlying distribution.
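You can see this directly with Python's zlib module (a quick sketch; exact sizes vary slightly with the zlib version):

    import os
    import zlib

    repetitive = b"the " * 256   # 1024 bytes from a very skewed distribution
    uniform = os.urandom(1024)   # 1024 bytes drawn (approximately) uniformly

    print(len(zlib.compress(repetitive)))  # tiny: the repetition is predictable
    print(len(zlib.compress(uniform)))     # ~1024 or more: nothing to exploit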

Comment Re:XKCD (Score 2) 487

You are totally missing the point.

Instead of using an "alphabet" with 26 characters (or 52 with capitals, or 70-something with capitals and punctuation) and choosing a short random string, you use an "alphabet" with 5000+ ideograms (i.e., words) and choose a short random string of these words.

For simplicity, just suppose there are 5000 commonly used English words. Then there are 5000^n passphrases of length n (i.e., containing n words). Obviously, this is much, much bigger than 70-something raised to the n. It does not matter that it is smaller than 70-something raised to the number of characters in the passphrase.
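To put rough numbers on that (a quick sketch; four tokens is an arbitrary example length):

    import math

    n = 4                       # tokens per password
    print(math.log2(5000**n))   # ~49.1 bits: 4 words from a 5,000-word list
    print(math.log2(70**n))     # ~24.5 bits: 4 characters from a 70-char set
    # The attacker must search the space actually sampled from, so the fact
    # that 70**(total characters) would be larger still is beside the point.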

As a matter of fact, my computer's word list contains about 95,000 words. Try to guess the password I will generate with the following algorithm:

Pick 7 random numbers between 1 and 95000. Look up the word at each index. Memorize the resulting words.

My PRNG yielded:
74019,69542,70792,42388,32916,63978,55632

which maps to:
purchasing persecute platitudes escalations consummation mum intoned

A quick calculation shows that such a scheme has about 115 bits of entropy, compared to fewer than 44 bits for a "character" password with the same number of random tokens drawn from the alphabet.

So what's the big deal about using words instead of just longer random strings in the smaller 70-something-character alphabet? You would need a 19-character random string drawn from an alphabet of 80 to get as much entropy as 7 words drawn from a dictionary of 95,000 words. Clearly, the latter is far easier to memorize than something like "DtnqaELdIA=vozSkC" and provides the same cryptographic strength.
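For the curious, a minimal sketch of the scheme in Python. The word-list path is an assumption (the usual Unix location); substitute whatever list you have:

    import math
    import secrets

    # Assumed word-list location; any large one-word-per-line list works.
    with open("/usr/share/dict/words") as f:
        words = [line.strip() for line in f if line.strip()]

    # Draw 7 words uniformly with a cryptographically secure RNG.
    passphrase = " ".join(secrets.choice(words) for _ in range(7))

    bits = 7 * math.log2(len(words))  # ~115 bits for a ~95,000-word list
    print(passphrase)
    print(f"about {bits:.0f} bits of entropy")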

Comment Re:And Harry Nyquist is rolling around in his grav (Score 3, Interesting) 255

"X. Significance of the results
Given the existence of musical-instrument energy above 20 kilohertz, it is natural to ask whether the energy matters to human perception or music recording. The common view is that energy above 20 kHz does not matter, but AES preprint 3207 by Oohashi et al. claims that reproduced sound above 26 kHz "induces activation of alpha-EEG (electroencephalogram) rhythms that persist in the absence of high frequency stimulation, and can affect perception of sound quality." [4]
            Oohashi and his colleagues recorded gamelan to a bandwidth of 60 kHz, and played back the recording to listeners through a speaker system with an extra tweeter for the range above 26 kHz. This tweeter was driven by its own amplifier, and the 26 kHz electronic crossover before the amplifier used steep filters. The experimenters found that the listeners' EEGs and their subjective ratings of the sound quality were affected by whether this "ultra-tweeter" was on or off, even though the listeners explicitly denied that the reproduced sound was affected by the ultra-tweeter, and also denied, when presented with the ultrasonics alone, that any sound at all was being played.
            From the fact that changes in subjects' EEGs "persist in the absence of high frequency stimulation," Oohashi and his colleagues infer that in audio comparisons, a substantial silent period is required between successive samples to avoid the second evaluation's being corrupted by "hangover" of reaction to the first.
            The preprint gives photos of EEG results for only three of sixteen subjects. I hope that more will be published.

In a paper published in Science, Lenhardt et al. report that "bone-conducted ultrasonic hearing has been found capable of supporting frequency discrimination and speech detection in normal, older hearing-impaired, and profoundly deaf human subjects." [5] They speculate that the saccule may be involved, this being "an otolithic organ that responds to acceleration and gravity and may be responsible for transduction of sound after destruction of the cochlea," and they further point out that the saccule has neural cross-connections with the cochlea. [6]

Even if we assume that air-conducted ultrasound does not affect direct perception of live sound, it might still affect us indirectly through interfering with the recording process. Every recording engineer knows that speech sibilants (Figure 10), jangling key rings (Figure 15), and muted trumpets (Figures 1 to 3) can expose problems in recording equipment. If the problems come from energy below 20 kHz, then the recording engineer simply needs better equipment. But if the problems prove to come from the energy beyond 20 kHz, then what's needed is either filtering, which is difficult to carry out without sonically harmful side effects; or wider bandwidth in the entire recording chain, including the storage medium; or a combination of the two.
            On the other hand, if the assumption of the previous paragraph be wrong â" if it is determined that sound components beyond 20 kHz do matter to human musical perception and pleasure â" then for highest fidelity, the option of filtering would have to be rejected, and recording chains and storage media of wider bandwidth would be needed."

Comment Re:And Harry Nyquist is rolling around in his grav (Score 1, Interesting) 255

You can't improve audio quality of *audible frequencies* by increasing resolution of the horizontal axis (sampling frequency) beyond a rate which surpasses the Nyquist frequency for human hearing.

Nyquist-Shannon notwithstanding, the range of human hearing is wider than 20 kHz.

http://www.cco.caltech.edu/~boyk/spectra/spectra.htm (a properly conducted experiment)

That said, doubling the sampling rate isn't going to do anything for an existing digital signal; no information can be added after the fact. At best, the new signal will simply play each of the old signal's samples twice.
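A sketch of that naive doubling (a zero-order hold, i.e., plain sample repetition; numpy assumed):

    import numpy as np

    x = np.array([0.0, 1.0, 0.0, -1.0])  # original samples
    x2 = np.repeat(x, 2)                 # naive 2x "upsample": each sample twice
    # x2 carries exactly the information already in x; nothing audible is gained.
    print(x2)  # [ 0.  0.  1.  1.  0.  0. -1. -1.]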

Comment Re:fearmongering (Score 1) 266

If you are willing to believe in a Platonic universe, you must be willing to believe in string theory. The whole point of string theory is that it is the logical theory (in the sense of first-order logic) obtained by taking the "known" laws of physics as axioms. This is Platonic realism in a scientific context.

Stop skipping class and lecturing us.
