Comment Re:"they" can fuck off, the binary units are the o (Score 1) 618
(And by the way, the assumption of a specific length for sectors is a bad idea anyway, which is now causing endless pain during the switch from 512 to 4096 bytes.)
Please remind me: How many bits are there in an SI byte? Is it 10, 100, or 1000?
There is no "byte" in the SI, so the question is moot. There is an IEC standard defining prefixes for 2^10, 2^20, 2^30 and so on, and those prefixes are kibi-, mebi-, gibi-, etc. The SI officially references them, even if they're not strictly part of it.
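To make the ambiguity concrete, here's a quick sketch (illustrative values only) comparing the decimal SI multiples with the binary IEC ones, and showing why a drive sold as "1 TB" looks smaller when measured in binary units:

```python
# Decimal (SI) prefixes: powers of 10.
SI = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
# Binary (IEC) prefixes: powers of 2.
IEC = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

# A "1 TB" drive holds 10**12 bytes, which is only about 0.909 TiB.
print(SI["TB"] / IEC["TiB"])  # roughly 0.9095
```

That ~9% gap at the tera- level is exactly the "fooling the customer" accusation that keeps coming up; the IEC prefixes exist so that both parties can be unambiguous about which unit they mean.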
If your byte contains 8 bits, you are either using the binary sizes, or you are mixing things to fool the customer.
What's the relationship between a byte having 8 bits and 2 being the base for the multiples of the byte? Moreover, deciding that "a byte" is *the* unit of the smallest addressable memory cell of machines is an oversimplification, because there were in the past, and there might be in the future, machines whose word size is not even a power of two. If anything, one might think that using powers of two to "size" memory comes from the fact that the ranges addressable by a bus made of binary wires have widths that are by nature powers of two; but that has nothing to do with whether the addressed items are bytes, 37-bit words, or whatever.
Hard disks are memory, and counting that memory in powers of two makes no sense for them, since they store bits in very strange physical patterns; hard disk manufacturers therefore never adopted it. Computer networks transfer memory too, and counting it in powers of two makes no sense there either, especially since networks often transfer bits rather than bytes; network designers have always preferred bits and their decimal multiples over the binary counterparts.
If you broaden your view, you'll see that transistor-based memory is the exception. The onus should therefore be on operators in that field to adopt the standard binary prefixes, as ugly as they may sound (and no, I don't like them either), in order to avoid ambiguity with the terms used by the rest of the world.
The "pre-emptive" Nobel Peace Prize given to him should be withdrawn, lest the prize itself become devoid of meaning, let alone prestige.
It's fine if somebody who is an adult, informed about the consequences of his actions, and free to choose among other options picks a Chromebook. I use many Google services myself. But minors being forced to use them doesn't seem right to me.
And besides, Chromebooks are walled gardens, so schools will need to buy real computers anyway if they don't want to train their students into content consumption only.
Hacked -- Improved. Those words are pretty much interchangeable depending on your own view and biases.
No, they aren't in this case. Adding long filenames to FAT, for instance, broke compatibility with previous implementations of the FAT file system, precisely because they were implemented with a hack: invalid directory entries that happened to be ignored by earlier DOS versions, but confused other software that had worked perfectly until then. Remember the "LOCK" command that Microsoft added in Windows 95 to prevent those utilities from ruining the file system?
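For the curious, the "invalid directory entries" hack is easy to see in a hexdump. A VFAT long-name entry sets the attribute byte (offset 11 of the 32-byte directory entry) to 0x0F, a combination of read-only, hidden, system, and volume-label that pre-VFAT DOS versions happened to skip. A minimal sketch (the entry here is a hypothetical fabricated one, not a real on-disk dump):

```python
# VFAT long-filename entries are marked by attribute byte 0x0F
# (read-only | hidden | system | volume-label), which old DOS
# directory-scanning code silently ignored -- hence the "hack".
LFN_ATTR = 0x0F

def is_lfn_entry(entry: bytes) -> bool:
    # A FAT directory entry is exactly 32 bytes; the attribute
    # byte lives at offset 11.
    return len(entry) == 32 and entry[11] == LFN_ATTR

# Hypothetical 32-byte entry with the LFN attribute set:
fake = bytes(11) + b"\x0f" + bytes(20)
print(is_lfn_entry(fake))  # True
```

Software that walked directory entries itself, rather than going through DOS, had no reason to expect this marker, which is exactly how disk utilities ended up corrupting long names.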
Also, I did explain what minor and insufficient improvements to FAT were made, as well as what major deficiencies remained unfixed, so there's no "bias" involved here.
Also, systems using FAT can use extended attributes if they wish. OS/2 does just fine with extended attributes on FAT, and so does cygwin. Just because FAT doesn't explicitly say this is where you stick them doesn't mean you can't write a file system driver on top of it that puts them wherever you want.
Then they're not using FAT, which does not support extended attributes; they're using a private extended file system derived from FAT, which itself is not FAT, is not interoperable with FAT, and will be unreadable by, and damaged by, other software designed for the standard FAT file system. This wouldn't happen with a file system that supports extended attributes, such as NTFS or UDF.
Yes, FAT has poor performance on optical storage, but why would you use FAT on it in the first place? There doesn't need to be one file system that works great in every case.
That's exactly the point: nobody would ever use FAT if it weren't for interoperability requirements. It's inefficient on traditional media and can be extremely inefficient on non-traditional media; it never works great. Therefore nobody would ever dream of licensing it for its technology.
Bull. The algorithm isn't part of the FAT/FAT32 standard, it's part of what is known as the VFAT standard, which you don't have to implement.
But there is no such thing as a "vfat standard". "vfat" is the name that Linux informally attached to its file system driver supporting long file names, and that name stuck. The reason is that the native Windows file system driver, with which the Linux driver aimed to interoperate, was called "VFATD", and that was because Windows "386 enhanced mode" drivers used to have a name in the form V-name-D.
Both Microsoft's official specification of the FAT file system, which is referenced by the UEFI standard, and the SD card standard contain the short name creation algorithm. Beware: that documentation is subject to a restrictive Microsoft license, which you must accept in order to view the specification.
No, they don't have to only write 8.3 ASCII file names, they can implement any alternative they choose.
And then they're implementing something different from FAT, which is not interoperable, violates the standard, potentially makes Windows XP bluescreen, etc. etc.
or install a Virtual File System driver in windows that understands your new layout.
So you're proposing that, in order not to pay licensing fees to Microsoft, a manufacturer should write a device driver for each operating system that currently supports FAT, for all of its revisions past, present, and future, for all of the hardware architectures it supports, then distribute all of these drivers with its product and require their installation before the product can be used? This would be unrealistic even if it were possible, and it isn't even doable in principle, because many devices one might want to interoperate with do not have a user-extensible operating system. See Windows RT, for example.
FAT/FAT32 isn't a poor technology, it's a simple technology.
FAT was a poor technology when it was introduced in the 80s: UNIX file systems already had, in the 70s, many of the features we enjoy today (see V6's file system in 1975), and 8.3 names were already a restrictive choice back then.
It also did not contain significant innovation, as it was basically an implementation of the CP/M file system.
So nobody would use it except for compatibility reasons, which is the point of my comment. It certainly never was "innovative technology that one would pay to use". Not in the 80s, and it would be risible if somebody said that it is now. And Microsoft are asking for money now.
It's not very complicated, but the implementation has evolved over nearly 40 years.
It's only been hacked to support larger disks and longer file names, and it still does that poorly (high internal fragmentation, small maximum file size, no support for extended attributes, poor performance on optical storage and flash; and let's not talk about missing features).
Secondly, you don't have to pay royalties to Microsoft for using FAT/FAT32 itself. You have to pay Microsoft if you use the same exact algorithm for storing larger than 8.3 filenames on FAT. You are free to use a different algorithm, and not pay any royalties, or stick to 8.3 filenames as the original FAT/FAT32 did.
The algorithm is part of the standard; you must implement it as it is. And even if you could omit that part of the standard while still claiming to implement it, you would have to tell your customers to write only 8.3 ASCII filenames, and that they won't be able to see files written by others if the names exceed 8 characters or contain, say, an accented letter. Are you honestly convinced that a company could do this and remain competitive in 2012?
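To give a flavour of what's at stake, here is a deliberately simplified sketch of numeric-tail short-name generation in the spirit of the patented algorithm. The real algorithm in Microsoft's specification has many more rules (OEM code pages, lossy-conversion flags, a different collision search); this is only an illustration of the general idea:

```python
import re

def short_name(long_name: str, taken: set[str]) -> str:
    """Simplified sketch: derive an 8.3 name with a ~N numeric tail.
    Not the real spec algorithm -- for illustration only."""
    base, _, ext = long_name.upper().rpartition(".")
    if not base:                      # no dot in the name
        base, ext = ext, ""
    # Strip characters outside a toy "legal" set (the spec's rules differ).
    base = re.sub(r"[^A-Z0-9_]", "", base)
    ext = re.sub(r"[^A-Z0-9_]", "", ext)[:3]
    # Always append a ~N tail (the real algorithm does so only on
    # truncation or lossy conversion) and search for a free slot.
    for n in range(1, 1000000):
        tail = f"~{n}"
        cand = base[:8 - len(tail)] + tail
        full = f"{cand}.{ext}" if ext else cand
        if full not in taken:
            return full
    raise RuntimeError("no free short name")

print(short_name("Long File Name.txt", set()))  # LONGFI~1.TXT
```

The point of the thread stands regardless of the details: any deviation from the exact scheme produces short names other FAT implementations won't expect, which is where the interoperability (and bluescreen) trouble comes from.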
Additionally, Windows XP is known to blue-screen when it encounters files whose short filenames don't conform to the standard: Linux developers found that out when they tried to implement an alternative 8.3 conversion scheme for the vfat Linux module.
This adds a sunk cost to the barriers to entry into the device market, in favour of the established market dominators (which is what patents are all about), and to the detriment of the free market, consumers, and technological progress.
I wonder if he even bothered to report a bug. Probably not.
Are you really applying the standard FOSS conversation-killer "shut up and report a bug" to Alan Cox, who wrote half of the Linux kernel(*) and worked at Red Hat for ten years?
(*) hyperbole
"More software projects have gone awry for lack of calendar time than for all other causes combined." -- Fred Brooks, Jr., _The Mythical Man Month_