...they're going to dig for copper cabling that's thousands of years old, to prove they had a phone network before everyone else. When they don't find any, they'll conclude the only reason for that can be that they moved to mobile phones even back then!
FYI: According to the internet, the joke in question was:
'What's the difference between Mark Bridger and Santa Claus? Mark Bridger comes in April.'
For playing a movie - maybe. For actually burning a torrent - fair enough... if that is the _only_ thing that happens.
The point is that multiple accesses are going to delay the drive by a huge amount. If you want to, say, copy that Linux ISO to your NAS while someone is playing a movie, the tape drive has to shuttle between the locations of the two files, which wrecks the access times, as I stated. Torrents are worse: you're downloading from and uploading to a bunch of other computers, each wanting to read from or write to a different location in the file. Again, that means moving between locations, with the resulting huge access times.
You may be able to alleviate the problem by putting an SSD or HDD in between as a cache, but I'm not sure there's off-the-shelf software to do that, and I'm not even sure it would work comfortably. Besides, if you're going to put an SSD or HDD in between, why not just use that directly?
The big disadvantage of tapes is their long seek times. Not 'long' as in a few times that of a hard disk, but 'long' as in: a seek can take a full minute. Accessing multiple files on a normal HDD works by reading a meg of the first file, seeking to the second file and reading a meg, going back to the first file and reading a meg, and so on. On a tape drive, even if the seek time were only, say, 10 seconds, you'd get a total throughput of about 100K/sec that way. And I'm not even talking about the havoc that storing torrent files wreaks on it: that's a random-access workload if I ever saw one, and the seek times on tape would kill your bandwidth very quickly, and probably your tapes too (because of wear and tear).
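The arithmetic above is easy to sketch. The seek time, chunk size, and streaming speed below are illustrative assumptions, not measurements of any particular drive:

```python
# Effective throughput when alternating 1 MB reads between two files on tape.
# All numbers are illustrative assumptions, not specs of a real drive.

def effective_throughput(seek_s, chunk_mb, stream_mb_per_s):
    """Average MB/s when every chunk read is preceded by one seek."""
    read_s = chunk_mb / stream_mb_per_s
    return chunk_mb / (seek_s + read_s)

# A tape that streams at 100 MB/s but needs a 10-second seek between files:
print(effective_throughput(10, 1, 100))   # ~0.0999 MB/s, i.e. roughly 100K/sec
```

The streaming speed barely matters: the seek dominates, so the result is essentially one chunk per seek interval.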
The device can detect the _way_ you touch it (one finger, complete hand,
I think Andrew Ng had an easier job: machine learning is a course with a better-defined curriculum than the almost all-encompassing AI class. That's why I felt the AI course sometimes jumped from subject to subject, while the ML class was more of a build-up to something. I agree the ML class was easier to follow; I don't think it's because of the teacher, though: swap Andrew and Sebastian/Peter and I think you'd have gotten the same result. Aside from that, I immensely enjoyed both courses (enough to enroll in one of Udacity's courses as soon as it was announced), so they both did an excellent job.
Sixpack for the win
The PC connected to the TV still runs a menu on top of X that I wrote. I also automated the beer list onto an LCD+touchscreen thing, and while it's made of bad solder joints and gaffer tape, somehow that contraption still manages to survive.
From what I know of flash, the 'bad bits' aren't repeatedly bad. The bad-sector-swap-out-routine in most flash drives and USB sticks will actually swap out a sector after a single read that can't be ECC-corrected, but that doesn't mean all the bits in the sector can't be written correctly ever again.
For example, in this article (IEEE Xplore, so paywalled for you, sorry) a generic NAND flash chip was tested for bit error rates. In the 5K write cycles after an average bit first failed, it failed to be written correctly only 4 more times. That means a single erase-rewrite cycle would write the complete sector without any bit errors about 99% of the time: to find 'most' of the bad bits, the sector would have to be rewritten thousands of times every time the software wanted to check the fingerprint.
Not only would that take a fair amount of time, it would also introduce new failed bits. That means the ID of the flash chip can only be checked so many times before the complete sector goes bad.
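A back-of-envelope check of those numbers, assuming (this is my assumption, not from the article) a dozen or so marginal bits per sector:

```python
# Back-of-envelope check of the flash fingerprinting numbers. The count of
# marginal ("previously failed") bits per sector is an assumed value.
p_fail = 4 / 5000          # per-write failure rate of one marginal bit
marginal_bits = 12         # assumed number of such bits in the sector

# Chance a single erase-rewrite cycle writes the whole sector cleanly:
p_clean = (1 - p_fail) ** marginal_bits
print(round(p_clean, 2))   # ~0.99

# Expected number of rewrites before one given marginal bit fails even
# once (geometric distribution), hence 'rewritten thousands of times':
print(1 / p_fail)          # 1250.0
```

So a clean write is the overwhelmingly likely outcome of any single cycle, and catching each marginal bit in the act takes on the order of a thousand rewrites per bit.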
It indeed is packed BCD. Some processors of that era have special support for that notation, which makes calculating with it not much harder than with plain binary. (The 6502 has a decimal mode that makes its ADC/SBC instructions operate on BCD; the Z80 has DAA, for example.) The advantage is that it makes blitting to screen really easy: instead of repeatedly dividing by 10, which is a processor-intensive task, you can just shift out the nibbles, which is much cheaper.
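The shift-versus-divide difference can be shown in a few lines; a minimal sketch in Python, with hypothetical helper names:

```python
# Extracting display digits from a packed-BCD byte versus a plain binary
# byte. Packed BCD stores one decimal digit per 4-bit nibble.

def digits_from_bcd(byte):
    """High and low decimal digits of a packed-BCD byte: shifts and masks only."""
    return (byte >> 4, byte & 0x0F)

def digits_from_binary(value):
    """Same digits from a plain binary value: needs division and modulo."""
    return (value // 10, value % 10)

# The decimal number 42 stored as packed BCD is the byte 0x42:
print(digits_from_bcd(0x42))     # (4, 2)
print(digits_from_binary(42))    # (4, 2)
```

On a CPU with no divide instruction, like the 6502 or Z80, the second function costs a loop of subtractions, while the first is a handful of shift/mask operations.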
2600/7800 DEVELOPMENT KIT<br>
CARE AND FEEDING INSTRUCTIONS<br>
Feel free to telephone John Feagans at Atari (U.S.) at area code
(408) 745-xxxx any time you have a question about using the
software. He wrote the download program and the transfer rom
code. He's the one who did not write any support documentation
to go with his software.
* From the base sw:
CPX #1 ;HACK: WE STOP AT 1
INX ;BIGGER HACK: PUSH X INTO RANGE.
LDA ZHACKMOD+2,X ;BIGGEST HACK: TABLE LOOKUP NEXT MODE.
* Of course, we have explicit words:
CMP #$FF ;SEE IF ANY INPUT
FUCKYOU BIT INPT4
LDA #0 ;ENOUGH TIME HAS ELAPSED TO ALLOW CAPS
STA $1 ;TO DISCHARGE SO CONTINUE FUCKING WITH
LDA #$14 ;IO HARDWARE
STA AUDC0,X ;GO POUND SAND IN YOUR ASS
* Citizen Kane anyone?
;THE FINAL VERSION
* In Galaga, at 'a boss hit':
JSR ABOSSHIT ; HOW YOU PRONOUNCE IT IS YOUR OWN
* Liek wtf?
* GROUND TARGET SECRET CODES (SSHHHH!)
* 0 regular dome logram
* 1 regular pyramid barra
* 2 detector dome zolbak (and your mama, too)
*And finally, an original comment which couldn't be more to the point in 2009:
*PROGRAMMERS BEWARE: THIS CODE IS OLD AND VERY UGLY! TAMPER AT YOUR OWN RISK
It looks like Hattrick is written mostly in Forth, by the way. I personally didn't know they wrote games in that language!