
Comment Re:"they" can fuck off, the binary units are the o (Score 1) 618

The notion of "sector" is a mere software convention, as the actual geometric placement of bits on the drive is irregular and model-specific.

(And by the way, assuming a specific sector length is a bad idea anyway, one that is now causing endless pain during the switch from 512-byte to 4096-byte sectors.)
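For what it's worth, here's a minimal sketch of how to see the mismatch for yourself (Linux-specific: it relies on the BLKSSZGET and BLKPBSZGET ioctls, and the device path is just an example). On a typical "512e" drive the two numbers it prints disagree, which is exactly the problem:

    /* sector_sizes.c - print logical vs. physical sector size of a block device.
       Build: cc -o sector_sizes sector_sizes.c
       Run:   ./sector_sizes /dev/sda   (needs read permission on the device) */
    #include <fcntl.h>
    #include <linux/fs.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s /dev/DEVICE\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        int logical = 0;           /* the size software is told (often still 512) */
        unsigned int physical = 0; /* the size the drive really uses (often 4096) */
        if (ioctl(fd, BLKSSZGET, &logical) < 0)   { perror("BLKSSZGET");  return 1; }
        if (ioctl(fd, BLKPBSZGET, &physical) < 0) { perror("BLKPBSZGET"); return 1; }

        printf("logical sector size:  %d bytes\n", logical);
        printf("physical sector size: %u bytes\n", physical);
        close(fd);
        return 0;
    }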

Comment Re:"they" can fuck off, the binary units are the o (Score 1) 618

Hard disks live inside computers, yet binary units are inappropriate for them for the same reasons. For memory as seen by the CPU, people can keep using kibibytes and friends; nobody wants to take those away. Just call those quantities by a more specific name so people won't get tricked^Wconfused when they're buying a hard drive.

Comment Re:"they" can fuck off, the binary units are the o (Score 5, Insightful) 618

Please remind me: how many bits are there in an SI byte? Is it 10, 100 or 1000?

There is no "byte" in the SI, so the question is moot. There is an IEC standard defining prefixes for 2^10, 2^20, 2^30 and so on, and those prefixes are kibi-, mebi-, gibi- and friends. The SI brochure officially references them, even though they're not strictly part of the SI.
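To make the difference concrete, a trivial computation (the constants are just the standard definitions, nothing assumed): a drive marketed as "500 GB" shows up as roughly 465 GiB.

    /* prefixes.c - decimal (SI) vs. binary (IEC) multiples of the byte. */
    #include <stdio.h>

    int main(void)
    {
        double GB  = 1e9;                      /* gigabyte, decimal/SI */
        double GiB = 1024.0 * 1024.0 * 1024.0; /* gibibyte, binary/IEC */

        printf("1 kB = 1000 B, 1 KiB = 1024 B\n");
        printf("500 GB = %.2f GiB\n", 500.0 * GB / GiB); /* prints ~465.66 */
        return 0;
    }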

If your byte contains 8 bits, you are either using the binary sizes, or you are mixing things to fool the customer.

What's the relationship between a byte having 8 bits and 2 being the base for the multiples of the byte? Moreover, deciding that "a byte" is *the* unit for the smallest addressable memory cell of a machine is an oversimplification: there were machines in the past, and there may be machines in the future, whose word size is not even a power of two. If anything, one might argue that using powers of two to "size" memory comes from the fact that the ranges addressable by a bus made of binary wires have widths that are by nature powers of two - but that has nothing to do with whether the addressed items are bytes, 37-bit words or whatever.

Hard disks are memory, yet counting that memory in powers of two makes no sense for them, since they store bits in very strange patterns; that's why hard disk manufacturers never adopted binary units. Computer networks transfer memory too, and counting it in powers of two makes no sense there either, especially since networks often transfer bits rather than bytes; hence network designers have always preferred bits and their decimal multiples over the binary counterparts.
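A worked example of the network convention (plain arithmetic, no assumptions): a 1 Gbit/s link carries 10^9 bits per second, i.e. 125 MB/s, which is only about 119.2 MiB/s once converted to binary units.

    /* link_rate.c - a 1 Gbit/s link expressed in byte-oriented units. */
    #include <stdio.h>

    int main(void)
    {
        double bits_per_s  = 1e9;              /* 1 Gbit/s, decimal by convention */
        double bytes_per_s = bits_per_s / 8.0; /* 125 MB/s */
        printf("1 Gbit/s = %.0f MB/s = %.1f MiB/s\n",
               bytes_per_s / 1e6,
               bytes_per_s / (1024.0 * 1024.0)); /* prints ~119.2 */
        return 0;
    }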

If you broaden your view, you'll see that it's transistor-based memory that is the exception. The onus should therefore be on that field to adopt the standard binary prefixes, ugly as they may sound (and no, I don't like them either), in order to avoid ambiguity with the terms used by the rest of the world.

Comment Nobel prize... for eternal peace (Score 1) 800

He started two or three wars, he runs a lager that he had promised to close, his administration is known to kidnap and torture innocent people abroad, and now we know that he kills even his own people simply by attaching the "terrorist" label to them?

The "pre-emptive" Nobel prize for peace given to him should be withdrawn, lest the prize itself become devoid of meaning, let alone prestige.

Comment What about students' privacy? (Score 1) 96

I don't think a platform based on collecting private data from its users should be adopted in schools.

It's fine if somebody who is an adult, informed about the consequences of his actions, and free to choose among other options picks up a Chromebook; I use many Google services myself. But forcing minors to use them doesn't seem right to me.

And besides, Chromebooks are walled gardens, so schools will need to buy real computers anyway unless they want to train their students in nothing but content consumption.

Comment Re:Amazing how you twisted that. (Score 1) 182

Hacked -- Improved. Those words are pretty much interchangeable depending on your own view and biases.

No, they aren't in this case. Adding long filenames to FAT, for instance, broke compatibility with previous implementations of the FAT file system precisely because it was done with a hack: invalid directory entries that happened to be ignored by earlier DOS versions but confused other software, such as disk utilities, that had been working perfectly until then. Remember the "LOCK" command Microsoft added in Windows 95 to keep those utilities from ruining the file system?
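For the curious, this is roughly what the hack looks like on disk: a sketch of the 32-byte long-name directory entry as commonly documented (the field names are mine). The attribute value 0x0F is the trick that made old DOS versions skip these entries.

    /* Layout of a VFAT long-file-name directory entry (32 bytes on disk).
       Attribute 0x0F = READ_ONLY|HIDDEN|SYSTEM|VOLUME_ID: an impossible
       combination for a real file, so pre-LFN DOS silently ignored these
       entries, while directory-walking disk tools could get badly confused. */
    #include <stdint.h>

    #define ATTR_LONG_NAME 0x0F

    #pragma pack(push, 1)
    struct lfn_dir_entry {
        uint8_t  sequence;       /* piece number; 0x40 marks the last piece */
        uint16_t name1[5];       /* characters 1-5 of the name, UTF-16LE    */
        uint8_t  attributes;     /* always ATTR_LONG_NAME                   */
        uint8_t  type;           /* always 0                                */
        uint8_t  checksum;       /* checksum of the paired 8.3 short name   */
        uint16_t name2[6];       /* characters 6-11                         */
        uint16_t first_cluster;  /* must be 0: these entries own no data    */
        uint16_t name3[2];       /* characters 12-13                        */
    };
    #pragma pack(pop)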

Also, I did explain what minor and insufficient improvements to FAT were made, as well as what major deficiencies remained unfixed, so there's no "bias" involved here.

Also, systems using FAT can use extended attributes if they wish. OS/2 does just fine with extended attributes on FAT, and so does cygwin. Just because FAT doesn't explicitly say this is where you stick them doesn't mean you can't write a file system driver on top of it that puts them wherever you want.

Then they're not using FAT, which does not support extended attributes; they're using a private extension of FAT, which itself is not FAT, is not interoperable with FAT, and will be unreadable to (and liable to be damaged by) other software designed for the standard FAT file system. That won't happen with a file system that actually supports extended attributes, such as NTFS or UDF.

Yes, FAT has poor performance on optical storage, but why would you use FAT on it in the first place? There doesn't need to be one file system that works great in every case.

That's exactly the point: nobody would ever use FAT if it weren't for interoperability requirements. It's inefficient on traditional media and can be extremely inefficient on non-traditional media; it never "works great". That's why nobody would ever dream of licensing it for its technology.

Bull. The algorithm isn't part of the FAT/FAT32 standard, it's part of what is known as the VFAT standard, which you don't have to implement.

But there is no such thing as a "vfat standard". "vfat" is the name that Linux informally attached to its file system driver supporting long file names, and the name stuck. It comes from the native Windows driver that the Linux driver aimed to interoperate with: that driver was called "VFAT", following the V-something-D (VxD) naming scheme of Windows "386 enhanced mode" drivers.

Both Microsoft's official specification of the FAT file system (the one referenced by the UEFI standard) and the SD card standard contain the short-name creation algorithm. Beware: that documentation is subject to a restrictive Microsoft license, which you have to accept in order to read the specification.
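The gist of that algorithm is the "numeric tail": the long name is uppercased and squeezed into 8+3 characters, and on loss or collision a ~1, ~2, ... suffix is appended. Here's a simplified sketch of the idea (the real specification also covers OEM code-page mapping, lossy-conversion flags and hash-based tails, none of which is shown here):

    /* short_name.c - simplified sketch of FAT 8.3 "numeric tail" generation. */
    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Build an 8.3 candidate like "LONGFI~1.TXT" from a long name. */
    static void make_short_name(const char *longname, int tail, char out[13])
    {
        char base[9] = {0}, ext[4] = {0};
        const char *dot = strrchr(longname, '.');
        size_t n = 0;

        for (const char *p = longname; *p && p != dot && n < 8; p++) {
            if (isalnum((unsigned char)*p))
                base[n++] = (char)toupper((unsigned char)*p);
            else if (*p != ' ' && *p != '.')
                base[n++] = '_';               /* invalid characters become '_' */
        }
        if (dot) {
            n = 0;
            for (const char *p = dot + 1; *p && n < 3; p++)
                if (isalnum((unsigned char)*p))
                    ext[n++] = (char)toupper((unsigned char)*p);
        }

        char tailbuf[8];                 /* "~N"; a real driver increments N */
        snprintf(tailbuf, sizeof tailbuf, "~%d", tail); /* until it's unique */
        size_t keep = 8 - strlen(tailbuf);  /* make room for the tail        */
        if (strlen(base) > keep)
            base[keep] = '\0';

        if (ext[0])
            snprintf(out, 13, "%s%s.%s", base, tailbuf, ext);
        else
            snprintf(out, 13, "%s%s", base, tailbuf);
    }

    int main(void)
    {
        char s[13];
        make_short_name("Long File Name.txt", 1, s);
        printf("%s\n", s);               /* prints LONGFI~1.TXT */
        return 0;
    }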

No, they don't have to only write 8.3 ASCII file names, they can implement any alternative they choose.

And then they're implementing something different from FAT, which is not interoperable, violates the standard, potentially makes Windows XP bluescreen, and so on.

or install a Virtual File System driver in windows that understands your new layout.

So you're proposing that, in order not to pay licensing fees to Microsoft, a manufacturer should write a device driver for each operating system that currently supports FAT, for all of its revisions past, present and future, and for all of the hardware architectures it supports, then distribute all of these drivers with its product and require their installation before the product can be used? This would be unrealistic even if it were possible, and it isn't even possible in principle, because many of the devices one might want to interoperate with do not have a user-extensible operating system. See Windows RT, for example.

Comment Re:Amazing how you twisted that. (Score 1) 182

FAT/FAT32 isn't a poor technology, it's a simple technology.

FAT was poor technology even when it was introduced in the 80s: UNIX file systems already had, back in the 70s, many of the features we enjoy today (see V6's file system from 1975), and 8.3 names were a restrictive choice even then.
Nor did it contain significant innovation: it was basically a reimplementation of the CP/M file system.
So nobody would use it except for compatibility reasons, which is the point of my comment. It certainly never was "innovative technology that one would pay to use" - not in the 80s, and it would be risible to claim that it is now. Yet Microsoft are asking for money for it now.

It's not very complicated, but the implementation has evolved over nearly 40 years.

It's only been hacked to support larger disks and longer file names, and it still does that poorly: high internal fragmentation, a small maximum file size (4 GiB), no support for extended attributes, poor performance on optical storage and flash - and let's not even talk about the missing features.
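To put a number on the internal fragmentation point: large FAT32 volumes end up with 32 KiB clusters, and every file then wastes half a cluster on average. (The 100,000-file volume below is hypothetical, purely for illustration.)

    /* slack.c - expected slack space from FAT cluster rounding. */
    #include <stdio.h>

    int main(void)
    {
        double cluster = 32.0 * 1024.0; /* bytes; common on large FAT32 volumes */
        double files   = 100000.0;      /* hypothetical file count              */
        /* Uniformly distributed tail sizes waste half a cluster per file. */
        printf("expected waste: %.1f MiB\n",
               files * cluster / 2.0 / (1024.0 * 1024.0)); /* ~1562.5 MiB */
        return 0;
    }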

Secondly, you don't have to pay royalties to Microsoft for using FAT/FAT32 itself. You have to pay Microsoft if you use the same exact algorithm for storing larger than 8.3 filenames on FAT. You are free to use a different algorithm, and not pay any royalties, or stick to 8.3 filenames as the original FAT/FAT32 did.

The algorithm is part of the standard; you must implement it as it is. And even if you could omit that part while still claiming to implement the standard, you would have to tell your customers that they may only write 8.3 ASCII filenames, and that they won't be able to see files written by others if the names exceed 8 characters or contain, say, an accented letter. Are you honestly convinced that a company could do this and stay competitive in 2012?

As additional information: Windows XP is known to blue-screen when it encounters short filenames that don't conform to the standard. Linux developers found that out while trying to implement an alternative 8.3 conversion scheme for the Linux vfat module.

Comment Re:Amazing how you twisted that. (Score 5, Insightful) 182

You don't choose to "use" standards. You are forced to implement them, either by government regulation or by interoperability needs. See what happens with the FAT file system: it's the result of an insignificant research effort and is itself extremely poor technology, yet every device manufacturer is currently forced to implement it, and therefore to pay money to Microsoft.

This adds a sunk cost to the barriers to entry into the device market, in favour of the established market dominators (which is what patents are all about) and to the detriment of the free market, consumers and technological progress.

Comment Re:The wrong way around (Score 1) 151

There's UDF. It works on Windows out of the box (the most important feature), supports Unicode, large files and volumes, UNIX, DOS, OS/2 and Mac file attributes, special files (symlinks, devices etc.) and extended attributes. It's much faster than FAT on flash drives, and its specification is freely available. But it might be covered by patents too.

Comment Big deal? (Score 4, Insightful) 220

I remember playing "Raid over Moscow" as a kid: a C64 game where you had to fly a bomber to the Kremlin, kill its guards, and blow it up. I suspect it wouldn't have sold well in the USSR, and that if somebody published the very same game today as "Raid over Washington", with the White House in place of the Kremlin, it wouldn't sell well in the USA. People don't enjoy being offended, especially by propaganda, and especially when it touches open wounds.

Comment Re:Be careful (Score 1) 222

So basically you haven't read TFA (they're not dumping LTS releases; it's the interim releases that are being discussed), you don't care about Linux ("I switched to Mac OS", as if that were an alternative), you don't know about Linux ("Maybe Linux got better", yet "I read about KDE vs Gnome"...), but you still feel the need to bitch about Linux ("I don't see it happening") and get modded +5 Insightful.
