Comment Re:Need better security (Score 2) 71

As far as I can tell the OTP calculators are only issued for business accounts; normal "end user" accounts have minimal provisions. One example uses a user ID, a password (split into two entry fields), and a picture that you chose when you first activated the "web access", which the site displays back to you.

This isn't that secure, and because a lot of their site is plain HTTP there's a good chance that a "Firesheep"-style attack will work too.

Comment Need better security (Score 2, Interesting) 71

It looks like banks and government departments can no longer be trusted as normal web sites. They have to be set up to be available only through SSL, and they must use client certificates for authentication, with some way of verifying that the server certificate matches the client certificate.

Only then could the software (possibly a custom configuration of a web browser, maybe a normal one) actually be sure of defeating a phishing attack.

Of course the main reason it'd work is that with a client certificate there's no password to "phish" for.
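
Roughly what I mean, as a sketch using OpenSSL and curl (the file names and the bank URL here are made up, not any real bank's setup):

    # The customer generates a key and a certificate request; the bank's own CA
    # signs it and hands back customer.crt (hypothetical file names).
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout customer.key -out customer.csr -subj "/CN=customer-1234"

    # The client then authenticates with the certificate instead of a password,
    # and only trusts the bank's own CA, so a phishing site can't impersonate it.
    curl --cert customer.crt --key customer.key \
        --cacert bank-ca.crt https://bank.example/account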

Something tells me that the banks are too lazy to do this; every other web site will have to be SSL before they get on the bandwaggon.

Comment Re:Arguments of convenience (Score 1) 244

I didn't say there weren't some disadvantages. As I understand it the Perl thing is something of a documentation project, so they explicitly specify which responses are expected and which are artefacts of the implementation that the programmers weren't expecting. An example in C:

int x = 1;
x = x++ + x++; /* x is modified twice with no sequence point: undefined behaviour */
printf("%d\n", x);

... What's the value of "x"? Your implementation will give you one number; a different implementation may give a different number. The standard says both implementations are right, because the code is outside the standard even though the compilers accept it.

Once you have the documentation (standard) there may be better ways of coding a program that does what the standard says (but not necessarily in exactly the same way as the first implementation). For example, in the Perl case: how about a compiler rather than an interpreter, would it run faster? With the standard in place you have a fixed target and (in theory) a test suite to check your implementation against the standard. Without the standard you only have the first implementation to compare against; with any significant program you will have differences ... but are they significant differences? There's no way to tell.
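
Without a standard, "testing" a new implementation really only amounts to diffing it against the old one, something like this rough sketch (the myperl binary and the tests/ directory are made up):

    # Run the reference implementation and the candidate over the same inputs
    # and flag any test whose output differs.
    for t in tests/*.pl; do
        perl     "$t" > expected.out 2>&1
        ./myperl "$t" > actual.out   2>&1
        diff -u expected.out actual.out || echo "DIFFERS: $t"
    done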

Unfortunately, the web standards are so weakly specified that they don't really supply this advantage. Browsers don't have to throw any errors, they don't have to have "validation modes", and there's no way for an older version to identify code that will work in a later version. The result is that most web pages are "outside the standard", which means the browsers can do what they want with them. Hopefully this is better with HTML5, though.

Comment Re:Arguments of convenience (Score 1) 244

As I remember it Microsoft jumped the gun on the standard with IE6 and made a guess as to which way the (ambiguous) draft standard should be interpreted. The committee went the other way.

After the release of IE6 there was some political stuff that basically meant that Netscape was never released again. Without the competition Microsoft didn't care, and so IE6 froze too. Only when Mozilla threatened to take a larger market share than IE6 did Microsoft actually start updates again.

All through this the standards committee were still working (mostly without Microsoft's input) in their normal (slow) fashion.

The 'pissing contest' method is actually not a bad way of showing your users what you think they're asking for; normally it's called 'prototyping'. The important thing is to have a solid line between the parts that are 'stable' and the parts that are 'prototype'; in this case it's the "-webkit-" and similar prefixes. For IE6 there wasn't one.

Comment Re:Arguments of convenience (Score 2, Insightful) 244

I would strongly disagree with this.

Having a standards committee design the next step in a technical advance is one of the worst possible ways of working. What you usually end up with is a huge conglomeration of random ideas and special interests. For programming the result is frequently described as "feeping creaturitis".

The reason for web standards is not technical; standards don't help make better mousetraps. They exist so that a hundred mice can wrestle the cat into submission, so that the little guys can make stuff too and don't get forced out of the market by a brute who can throw either money or lawyers around to kill off the competition.

If WebKit became "the web browser" this would be no different from (for example) the single source of the Perl language. There wouldn't be the problem of the secret Trident, where nobody can compete or the technology can be politically leveraged to force the use of other software (eg an OS). Because it is freely forkable, if the current maintainers don't support an environment, patches against the source can be added by others. What's more, if the maintainers make enough of a fuckup they can be forced out completely.

But there is a problem for Microsoft; several years ago they claimed that IE was an essential component of their OS and then very hurriedly tried to make sure that this wasn't a complete lie. Because of this, lots of parts of the OS now use DLLs and libraries from IE to do simple jobs, or use IE as a local display processor. The result is that Microsoft will have a difficult job removing IE and its HTML engine, so much so that it's probably easier for them to fix IE than to navigate the maze of interdepartmental politics that would be involved in removing it.

Comment Re:Nostalgia (Score 1) 465

I think you're looking at this from the wrong side.
It's not the size of a book that's fixed; it's the size of the ereader that's fixed.

If you have a thousand random books there will be a large percentage that are 'paperback sized', and a few will be 'oversized paperbacks', the ones that are always a pain on the shelves. But the rest are random sizes, anything from 'C' size to huge. For books the shelves are a problem, but there's not much downside to having a few different shelves for the wrong-sized books.

For an ereader you only have one screen size. As you've noticed, it's usually about paperback sized, because that's convenient for books that are just words; no pictures, no tables of numbers, just 5-12 words per line like paperbacks or newspapers. But as soon as a book starts adding pictures it's forcing a minimum page size; if that's larger than the screen then you have a problem.

The answer is obvious: larger screens for the larger pictures ... but that makes the 'ereader' too big for a pocket, too expensive, and not really an ereader any more.

Oddly enough the writers of the original series of Star Trek noted this problem in that there have always been at least two forms of the "PADD" or "hall pass", the handheld style and the clipboard style.

Looks like someone needs to make a "big screen" version of their ereader: as identical as possible to the "little screen" one (and so sharing the development budget) but sized A4-ish. Or perhaps an A4 screen that you can attach the ereader to the back of.

Comment Re:Information density (Score 1) 465

Sticking fingers in as bookmarks ... okay, that sounds more like having the book "open" and "on the screen" more than once. That's not something that works with the current flock of ereaders, but it does work with ereader software on a large screen PC.

I suppose it comes down to 'rapidly'; that flash and redraw of the "e-ink" style screens is slow. The processors may be fast enough, but the software they're running is written for machines with ten times the clock rate or ten times the memory, so the software is slow.

Sigh, I think my next ereader better run Linux or Android. At least then I may have a chance for it to get fixed.

PS: Searching is nice; bookmarking searches is even nicer.

Comment Re:Information density (Score 1) 465

I would disagree with your downsides.

  • Tactile nature: I'd say this is an upside for the reader. I can put the book down without being forced to find a bookmark. My ebook reader is thinner than a paperback, so it actually fits in pockets and so forth.
  • Bookmarks: The reader does bookmarks; it seems to have an unlimited number and they're easier to label. You can even have bookmarks outside the book, eg 'hyperlinks between books'.
  • Flicking through: You can hit 'next page' pretty quickly and jump to 24% through the book or 30 pages back very easily. Of course you don't have the odd rippling sound ...

IMO, the major downside of an ebook reader is that it has ONE page size. So any document that forces a page size or a page width has a problem. This obviously includes 'PDF' type documents, but also HTML documents that use features later than about HTML3, and even text documents that assume a screen is 80 columns (or worse, "800 pixels of Arial 10 point") wide.

Obviously this can be worked around if the reader's page size is "large" (eg possibly A4 sized) and this is what happened with HTML4+ on a computer screen, but that kinda defeats the idea of an ebook reader.

This leads to the second downside: unless you're very careful, your ebooks will die when the reader does, either because a format change makes them unreadable or (worse yet) because DRM kills them.

Comment Re:Love my kindle and my Nexus 7 (Score 2) 465

All you really have to realise is that the DRM thing is a con.

Those people claiming that DRM software can stop anyone getting a non-DRM copy are wrong. DRM can do two things:

  1. Make it more difficult for a "legitimate" user to get at the data.
  2. Prevent EVERYONE, including "legitimate" users, from accessing the data.

Of the people that DRM is supposed to stop, only one of them has to do a little work. All the rest then have an easier time than any "legitimate" user.

DRM on Ebooks is actually one of the easiest to break; the data rate is so low compared to audio or video that the "analog hole" is a very reasonable way of un-DRMing the data.

So the correct solution for you as a user is to buy a "DRM copy" so that you're a "legitimate" user and then download the "pirate" non-DRM version to actually use. Please don't forget the first step ... um.

Comment Re:Frame rate shouldn't matter (Score 1) 245

Check out fractal compression. With MPEG compression, increasing the frame rate and resolution increases the size of the compressed video; with fractal compression the final stream is resolution- and frame-rate-independent.

Unfortunately, the algorithms were put under patent in the US and the holder of the patent made the licensing terms too onerous. So very few compressors and decompressors were written in software, and (unlike MPEG) hardware-assisted encoding and decoding never happened.

As this work was done in the late eighties and early nineties the patents are expiring, so hardware encoding may now become cost-effective.

Comment Re:Badblocks/Shred (Score 1) 348

If you do FDE you don't want to use badblocks with a random test pattern (-t random); it creates ONE random block and writes it out repeatedly.

I find one of these is better:

  • Testing the disk: four writes and four reads.

    testdisk () {
        # Bail out (using the shell's own error message) if the device doesn't exist.
        [ -e "$1" ] || { < "$1" ; return; }
        # Flush the drive's buffer cache; ignore failures on devices hdparm can't handle.
        hdparm -f "$1" 2>/dev/null ||:
        # Layer a throwaway dm-crypt mapping with a random key over the device ...
        cryptsetup create towipe "$1" -c aes-xts-plain -d /dev/urandom
        # ... so the four badblocks write/read passes hit the disk as random-looking data.
        badblocks -svw /dev/mapper/towipe
        cryptsetup remove towipe
        # Zero the first sector so the drive doesn't look like it holds a stray partition table.
        dd bs=512 count=1 if=/dev/zero of="$1"
    }

  • Fast wipe with truly random data; usually runs at full disk speed.

    wipedisk () {
        # Bail out (using the shell's own error message) if the device doesn't exist.
        [ -e "$1" ] || { < "$1" ; return; }
        hdparm -f "$1" 2>/dev/null ||:
        # Zero the first 100 sectors (partition table and friends).
        dd bs=512 count=100 if=/dev/zero of="$1"
        # Map the device, from sector 1 onward, through a random-keyed cipher ...
        cryptsetup create towipe "$1" --offset 1 -c aes-xts-plain -d /dev/urandom
        # ... so writing zeros through the mapping fills the disk with ciphertext.
        dd bs=1024k if=/dev/zero of=/dev/mapper/towipe
        cryptsetup remove towipe
    }

  • Alternative full-speed random wipe; sometimes faster.

    wipedisk () {
        # Bail out (using the shell's own error message) if the device doesn't exist.
        [ -e "$1" ] || { < "$1" ; return; }
        hdparm -f "$1" 2>/dev/null ||:
        # Encrypting /dev/zero with a random throwaway key gives a fast,
        # effectively random stream straight onto the disk.
        openssl enc -bf-cbc -nosalt -nopad \
            -pass "pass:`head -c 16 /dev/urandom | od -t x1`" \
            -in /dev/zero | dd bs=1024k of="$1"
        # Zero the first sector so the drive doesn't look like it holds a stray partition table.
        dd bs=512 count=1 if=/dev/zero of="$1" 2>/dev/null
    }

The end result is a drive filled with true cryptographically random data completely indistinguishable from an encrypted drive, because it is an encrypted drive!
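
For example, to run it (the device name is made up; triple-check it, since this overwrites the whole disk):

    # Hypothetical target device; everything on it is destroyed.
    wipedisk /dev/sdX
    # Quick sanity check: past the zeroed first sector it should look like pure noise.
    dd if=/dev/sdX bs=512 skip=1 count=8 2>/dev/null | od -A d -t x1 | head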

Comment Re:Related Anil Dash Blogs and earlier /. discussi (Score 1) 206

The VM you're describing IS java or silverlight (ie: msjava) or flash.

The problem always seems to go back to deep linking and scraping. So what if your VM runs wonderfully and displays everything perfectly to the user on a quad-core processor with a dual-slot GPU? If the search engine can't work out where you should be in a search list, you'll never get any visitors. And search engines are dumb; small and dumb, no GPU either. Then, if you have only one 'link' to your site, even if the search engine could index everything, all it could give is a vague list of things that are sort of near the URL it knows about. And even if a person creates a reference, without a deep link direct to the interesting bit nobody would bother.

HTML is an ugly, overstressed framework, JavaScript is brutalised by the libraries, and CSS is just crap. But even if the combined language were made perfect it wouldn't last. The current web is the bastard crossbreed that's needed to serve conflicting masters; those masters would still be there, trying to rip your 'language' apart and make it perfect for just one tiny slice of the problem.

I don't know what the solution is; hopefully HTML5 will help more than it hurts.

Comment They're all wrong! (Score 1) 373

border-radius: 15px;
-moz-border-radius: 15px;
-ms-border-radius: 15px;
-o-border-radius: 15px;
-webkit-border-radius: 15px;

Look at that mess! It's not what the web developer is trying to say. This is more like what they want to say ...

border-radius: 15px;
-w3cnext-border-radius: 15px;

The web developer wants to use the expected behaviour of the next CSS standard. This prefix says that; or perhaps a "-css4beta-" prefix, so we don't get caught out by a css5beta. IMO the web browsers should be saying what they are trying to provide, not just "this isn't the current version, it's mine".

I am NOT saying that the "-vendor-" prefixes should go away, just that once it's pretty much certain a particular change will be in the next standard, it goes under the extra prefix too. That prefix becomes what would be in the standard if it were ratified tomorrow.

At that point nobody cares how slow the W3C is.
