
Comment Re:Many methods to speed reading (Score 1) 92

I'm honestly not sure why your idea *would* increase reading speed.

It's very simple. As you suggest, the bottleneck for most readers is in the brain's ability to process the information rapidly, not in eye movement. Therefore, whether you learn to speed-"read" with audio or with text doesn't really matter. It's the back-end processing that needs improvement in both cases, and it's the same back-end. Improving one will improve the other.

Comment Re:Many methods to speed reading (Score 2) 92

I had a very similar idea, and it will work. Really. By the way, the poster above, Bysmuth, is dead wrong, labs and all. Feel free to contact me (Bill Cox - waywardgeek@gmail.com) if you need me as a reference to support this idea.

One of my contributions to open source and the blind community has been improving speech speedup algorithms. I listen at > 600 wpm, and have a blind friend who listens at double that. As part of this, I've done numerous A/B tests on many subjects (friends, family and acquaintances), trying to figure out what works for them. Here's what I found. First, anyone who is already a high-speed reader also very rapidly becomes able to listen at high speed. This is 100% correlated, after maybe 100-ish tests. I found no counterexamples, and listening-speed ability increases with the subject's reading speed. While some speed readers do not hear a voice while reading, they must still be using the speech centers in their brains, because high-speed readers are already prepared for speed listening, whether they claim to vocalise or not. There are other contributing factors, most notably age. I am the only non-blind person I know who learned to be comfortable speed-listening after the age of 40, though I do have a strong central vision loss. Every test I did with anyone over 40 backed up the finding that speed readers are also naturally speed listeners, but the > 40 crowd is almost violently opposed to speed listening, while the under-40 crowd thinks it's cool. I know... that's such an objective scientific observation :-)

Also, I found that non-blind listeners who force themselves to learn to speed listen (including me) discover that their regular reading speed increases naturally. People can argue all day long about vocalisation being good/bad while reading, but the fact is that the same centers in the brain are used regardless. If you train to listen fast, your reading speed will increase, and vice versa. This is the single most obvious conclusion I have been able to draw. It's a very real effect.

Another interesting point is that young people will, given a chance, naturally turn up the audio speed over time while listening to good books, very much like we see kids reading faster as they read a good series.

Reading a story both visually and audibly in parallel should enable a reader (whether mostly using their eyes or ears) to focus on the story the way that is more natural for him, and as he goes faster over time, his regular reading speed will increase, regardless of his preference for audio or printed text.

Comment Re:Commodore Amiga 3000T (Score 1) 702

I solved my cellphone battery life problem with a Moto-X from Republic Wireless. Republic still has a few growing pains to get past, but for big geeks who don't mind putting their phone in airplane mode and enabling wifi once or twice a day, it's amazing. In that mode, I go for days without having to charge it, though my phone is only a few feet from the wireless router most of the time. For $25/month for "unlimited" Sprint 3G everything but tethering, it's hard to beat.

Comment Re:To Crypt or Not To Crypt (Score 1) 171

I'm always amazed at how hard something as simple as password hashing can be. Yes, it's the user's fault for reusing passwords, but we should try to protect him anyway, because it's very common. Part of the job of the computer security industry is protecting stupid people. Improving this situation is one reason for the Password Hashing Competition.

You are right that password strengthening before encryption is a different problem from user authentication, but the solutions tend to be the same. You can use Bcrypt or Scrypt for strengthening a password hash on an authentication server just like you can while deriving a volume decryption key. The main difference seems to be that a busy server may not have a significant fraction of a second to spend on authenticating each user/password combo. TC has some additional constraints, like the volume needing to appear as random data, which makes it harder to embed various encryption parameters, such as which key stretching algorithm is in use. An attacker doesn't care whether the password/salt is protecting a login account or an encrypted volume. To him, it's just so many rounds of PBKDF2 (or whatever), then a quick check to see if he got the right answer, with as many guesses running in parallel as possible. Salt is used either way to defeat rainbow tables, so attackers instead use GPU farms to do massively parallel brute-force guessing, where each guess is user/salt specific.
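To make the symmetry concrete, here's a minimal Python sketch of salted key stretching with PBKDF2. The round count and password are illustrative only, not what TC or any real server uses; the point is that the defender's derivation and the attacker's guessing loop are the same computation:

```python
import hashlib
import os

def stretch(password: bytes, salt: bytes, rounds: int) -> bytes:
    # PBKDF2-HMAC-SHA256: the same primitive serves both login
    # verification and volume-key derivation.
    return hashlib.pbkdf2_hmac("sha256", password, salt, rounds)

salt = os.urandom(16)          # per-user/per-volume salt defeats rainbow tables
key = stretch(b"hunter2", salt, 100_000)

def crack(guesses, salt, target, rounds=100_000):
    # The attacker's loop is identical either way: guess, stretch, compare.
    # Each guess costs the full round count, which is the whole point.
    for guess in guesses:
        if stretch(guess, salt, target and rounds) == target:
            return guess
    return None

assert crack([b"letmein", b"hunter2"], salt, key) == b"hunter2"
```

Because the salt is per-user, the attacker cannot amortize work across accounts; he can only throw more parallel hardware at each one.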

However, the two cases I've mentioned are both encryption: TC encrypted volumes, and OpenSSH id_rsa private keys. We could argue about how much effort a server should put into protecting its users' passwords, but both TC and OpenSSH do *nothing* more than a typical server, devoting only a millisecond to key stretching. That's just lame.

Comment Re:To Crypt or Not To Crypt (Score 1) 171

I just added a keyfile as you suggested. I put it on a couple of USB keys, so I have a backup, and now in theory my encrypted volume can't be mounted without the physical key. That should greatly improve protection of both my passphrase and the volume contents (basically a list of all my user/password credentials at various sites). I'm still running TC in Windows, and several times I've answered "yes" to let various programs make changes to my hard disk, and my machine probably comes with back-doors from both Lenovo and Microsoft and maybe even Intel. I don't trust our company's closed-source VPN provider, either. So, I still don't feel secure, but at least it's an improvement. Thanks for the tip.
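For intuition, the effect of a keyfile can be sketched as mixing high-entropy file bytes into the key derivation before stretching. This is only an illustration of the idea, not TrueCrypt's actual keyfile algorithm (TC's real scheme is more involved):

```python
import hashlib
import os

def derive_key(passphrase: bytes, keyfile_bytes: bytes, salt: bytes) -> bytes:
    # Hash the keyfile into the passphrase before stretching, so an
    # attacker without the USB key has nothing useful to brute-force:
    # guessing the passphrase alone no longer yields the volume key.
    mixed = hashlib.sha256(passphrase + keyfile_bytes).digest()
    return hashlib.pbkdf2_hmac("sha256", mixed, salt, 100_000)

salt = os.urandom(16)
keyfile = os.urandom(64)        # random bytes stored on the USB sticks
key = derive_key(b"correct horse", keyfile, salt)

# Same passphrase, wrong keyfile: a completely different key.
assert derive_key(b"correct horse", os.urandom(64), salt) != key
```

The passphrase still matters (the keyfile can be stolen along with the laptop bag), but the brute-force search space now includes the keyfile's entropy.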

Comment Re:To Crypt or Not To Crypt (Score 2) 171

I don't do this for a living, but I'm not totally ignorant about this topic. TrueCrypt does a poor job strengthening passwords. TC's users would be far better protected if TC ran something even as lame as PBKDF2 for a full second, with rounds somewhere in the hundreds of thousands or millions. Not only does TC do a poor job protecting my data, but when an attacker does manage to guess a user's low-entropy password, he can then try that password all over the place to see where else the user has used it. This is why I say that the user's password is at risk due to TC, not just the data TC encrypts.
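A quick way to see what "a full second" buys you: time a probe run of PBKDF2 and scale the round count to the target delay. This Python sketch is machine-dependent by design; the numbers it prints are whatever your CPU earns, not a recommendation:

```python
import hashlib
import os
import time

def calibrate_rounds(target_seconds=1.0, probe_rounds=20_000):
    # Time a small probe, then scale linearly: PBKDF2 cost is
    # proportional to the iteration count.
    salt = os.urandom(16)
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"probe-password", salt, probe_rounds)
    elapsed = time.perf_counter() - start
    return max(probe_rounds, int(probe_rounds * target_seconds / elapsed))

rounds = calibrate_rounds()
print(rounds)  # on modern hardware this tends to land in the hundreds of thousands or more
```

A one-second delay is unnoticeable when mounting a volume once a day, but it multiplies the attacker's per-guess cost by roughly a thousand compared to TC's defaults.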

To give TC some credit, OpenSSH has the same lame password strengthening as TC, putting id_rsa passphrases at risk in addition to the user's private key. So, there seems to be plenty of lameness to go around. I hear that a bcrypt option is in the bleeding-edge version of OpenSSH. I wish they'd push out that patch along with the Heartbleed fix.

Comment Re:To Crypt or Not To Crypt (Score 2) 171

I use TrueCrypt. Not that it likely matters given all the other back-doors on my Lenovo Wintel laptop, but I use a passphrase from Hell, and I suspect even the NSA's biggest cracker would have trouble with it.

Other than the backdoors in various places on this toxic waste dump of security, the biggest security threat to my passphrase from Hell is TrueCrypt itself. TrueCrypt by default does nearly useless password strengthening (key stretching, or whatever it's called). Its strongest mode, which you have to select manually, is 2000 rounds of SHA-256. I can buy SHA-256 boxes that do 1 Giga-hash/second for $10 each. Figure a government has at least a few million dollars for such boxes, then go compute how strong your password needs to be. It isn't pretty.
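You can run those numbers yourself. This is a back-of-the-envelope sketch using the figures claimed above ($10 per GH/s, a few-million-dollar budget, 2000 rounds), not measurements:

```python
# Assumed attacker, per the comment above: $10 buys 1 GH/s of SHA-256
# hardware, and the budget is a few million dollars.
budget_dollars = 3_000_000
hashes_per_sec = (budget_dollars / 10) * 1e9   # total guessing throughput
tc_rounds = 2000                                # TC's strongest stretching

def seconds_to_exhaust(entropy_bits):
    # Worst-case time to sweep the whole password space.
    guesses = 2 ** entropy_bits
    return guesses * tc_rounds / hashes_per_sec

# A random 10-character lowercase password (~47 bits) falls in minutes;
# you need well past 64 bits before the sweep takes years.
for bits in (47, 64, 80):
    print(bits, "bits:", seconds_to_exhaust(bits) / 86_400, "days")
```

With a million-round KDF instead of 2000, every figure above grows by a factor of roughly 500, which is the difference between "minutes" and "months" for that 47-bit password.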

I use my password and TrueCrypt to protect my data. Why didn't it occur to the TrueCrypt authors to protect my password? I mean, Bcrypt at least, come on...

Comment Re:not developed by a responsible team? (Score 1) 301

Sometimes the individuals involved can be responsible while the team acts irresponsibly. For example, why is the passphrase on my id_rsa key protected by only one round of hashing, with no option for more rounds? I hear there are good things coming, like being able to use bcrypt, but this is a scandal. Only a security-ignorant fool would want his passphrase attached to an id_rsa key with no password stretching at all. So... how many fools do we have out there? I surely hope you weren't counting on your passphrase being secure just because the OpenSSL team was involved.

Comment Re:RMS mentions a comparable situation (Score 2) 266

There's one major problem there: most disabled people in the US are living on Supplemental Security Income of $600-850/month, and have no other source of money. Even a group of them are unlikely to be able to pool enough to hire somebody to fix a bug in something like Xorg.

This is also potentially a huge benefit. I really enjoy working to make GNU/Linux more accessible. I'd do it full time if I could, but I can't afford to. I don't have the time, and companies won't pay me to do it.

People with disabilities, as you suggest, often have no job and little money. They often have lots of free time that could be spent improving FOSS accessibility. A primary vision of the Accessible Computing Foundation is creating a world where people with disabilities help themselves by creating all of the accessible software they need. There are far more than enough brilliant blind people around the world to make Linux virtually 100% accessible to the blind. They just need to come together, learn to code, and make it happen. One of the primary messages for young blind kids is that this is even possible. We seem to live in a world where people with disabilities are encouraged to settle for less than what they can achieve. How cool would it be to organize this unemployed force to make the changes they need? How cool would it be to get young blind kids across the country learning to write code?

Comment Re:RMS mentions a comparable situation (Score 4, Interesting) 266

Kudos to RMS for believing accessibility is a human right, and taking action personally to promote accessibility in Linux. Fixing accessibility in Linux is a mess, but if we can get enough people involved, it's doable. This is the mission of multiple efforts, and the one I'm involved in is the ACF (Accessible Computing Foundation). The free software movement, and the goal of people with disabilities taking control of their computing environments are well aligned. GNU/Linux provides a platform where at least in theory any and all accessibility issues can be corrected, unlike Windows and Mac OS X.

Unfortunately there are considerable obstacles to "fixing" accessibility in Linux. I believe they can be overcome if enough people come together to make it happen, but there are huge challenges. There are also people who devote a lot of their lives to improving the situation, often for free or very low financial incentive. I spearheaded the 3.0 release of Vinux, which is Linux for the Vision Impaired. I fixed a dozen or so accessibility bugs, but the right fix in many cases would involve major changes to GNU/Linux. I'll list a few.

The accessibility API in GNU/Linux, atk/at-spi, should have shared more functionality with Windows. For typical corporate and FOSS anti-Windows reasons, the accessibility stack was intentionally built in a Windows-incompatible way. The result is that accessibility in Firefox and many other major applications never works as well in Linux as it does in Windows. It simply is not reasonable to make every software vendor do all their accessibility coding N times for N operating systems. There is even an effort called IAccessible2, which is basically a FOSS accessibility stack for Windows, which the creators seemed to hope could also work for Linux. The code was even donated to the Linux Foundation. However, there was never any money or motivation in FOSS land to actually port the software to Linux, SFAIK. Building a single accessibility API that works in Windows, GNU/Linux, Android, and Mac OS X would go a long way towards fixing accessibility in all of those places, but especially in GNU/Linux, since it is usually the OS that vendors put the least effort into. As it stands, few GNU/Linux distros are able to keep Firefox and LibreOffice accessibility working.

Then there's the problem of Linux being a multi-headed Hydra monster with no one in charge. At Microsoft, Bill Gates took a personal interest in accessibility, and that's all it took for the entire company to take accessibility seriously. In GNU/Linux land, RMS also takes a strong personal interest in accessibility, but it's not like most of the devs work for the guy. RMS can make his case, but when your boss is asking for prettier GTK+ widgets in Gnome 3 and you're late delivering, accessibility fixes fall by the wayside. When we are lucky enough for a patch to be developed, many times the GNU/Linux authors refuse to include it, because the "fix" is not perfect. For example, I added accessible descriptions to pixmaps in GTK+, which enabled blind users to hear 'star' for a star icon in a table containing pixmaps. The devs could not decide whether pixmap was the right place for this accessible description, which let them justify doing nothing, and the continued lack of support for accessible icons was the result. It saved them a few hours of testing work, which was their real priority. Multiply this asinine situation 100X, and you begin to understand why making Linux accessible is hard. GNU/Linux land seems to take pride in making it hard to fix accessibility, because we make it almost impossible to override any given stupid author's decision not to support accessibility. I should be able to patch GTK+, and have that patch automatically distributed to every user of every distro who believes my accessibility patches are something they want. Instead, we've built a system where patches have to be accepted by the authors, and then distributed slowly over years to the stable distros. Stupid, stupid, stupid...

Another major GNU/Linux accessibility problem is the lack of stability and portability between distros. If I write an important Linux accessibility app, like voxin for example, it would be great to compile it once, share it with every person who needs it, and have a way for those people to use that compiled binary as long as they like. This is mostly the case in Windows, and not at all the case in Linux. Voxin, a text-to-speech wrapper for the IBM engine preferred by many vision-impaired people, has to be ported to each release of Ubuntu, costing the author considerable effort just to maintain his package for one distro, even though there is never any new functionality. Unless you are an ace coder yourself, you pretty much won't be able to get voxin working on your preferred distro, and blind users may avoid your distro for just that reason. Even if you do go to this effort, it will be good for only one version of your distro, and you will have to repeat it forever. As a result, only the espeak TTS package is natively supported in even the most accessible GNU/Linux distros.

GNU/Linux is basically designed to break, and the first thing that breaks is typically accessibility. One problem is that while we can share source code between distros and releases, we cannot share testing, and often we can't even share packaging. If Debian goes the extra mile and ensures that the accessibility stack works from boot for each release, that effort does not help RedHat, who must also put in the huge additional testing effort. The result is that only the biggest and most popular distros and applications typically have a working accessibility stack at all. When I looked at what it would take to make Trisquel Linux accessible, I had to let the devs know that they simply didn't have the resources to get there. This was back in 2010, so things may have changed, but this remains the case for most distros.

All of these issues can in theory be fixed. We should stop purposely making GNU/Linux incompatible with any other OS, and instead work for cross-platform accessibility solutions. We should share well tested compiled binaries (which can be verified as matching the source) between distros for critical portions of the accessibility stack, such as TTS, so that it just works. We should make it easy to patch an author's broken accessibility code, compile and test patched binaries, and share them with people on many distros, without making the patch author jump through insane hoops like we do now to get fixes included.

The same problems holding back accessibility in GNU/Linux are also stifling innovation. The fact that we let petty gate-keepers decide which packages can be shared easily is a crime. It is insanely hard to get a new accessibility package into RedHat, Debian, etc. Accessibility just isn't cool enough. Most good ideas aren't cool enough. That's why so few people develop "apps" for GNU/Linux anymore. The fact that we refuse to share critical testing of binaries between distros, and make GNU/Linux APIs incompatible with the rest of the world on purpose... it all has to change. Otherwise, GNU/Linux will continue its decline.
