
Comment: Re: OR (Score 1) 250

From an iPhone on AT&T IPv6 does not work. Neither does it work on my Uverse connection.

That's the fault of the connection, however, and not the iPhone. iOS is fully IPv6 compatible; I take advantage of it all the time. I even wrote an IPv6 test utility for iOS a few years ago. You just need a WiFi router with autoconf advertising IPv6 routes, and you're all set.

The fact that all too many North American ISPs still haven't got their IPv6 implementations in play is the real story here. Computers and most smart phones are ready to connect -- they just need the ISP support to do it.


Comment: Re:Its Killer Feature (Score 1) 411

by Yaztromo (#47166621) Attached to: Apple WWDC 2014: Tim Cook Unveils Yosemite

I doubt it's high on Microsoft's priority list. Your earlier example shows a saving of a few hundred megabytes out of 8 GB, and RAM is really cheap.

I should point out that in my example, the memory pressure at the time was quite low. Had I pushed the memory pressure higher, the amount of compressed memory would also have been quite a bit higher.

RAM may be cheap, but there are still physical limits that can be hit on any given board or system before you reach the theoretical limits. I'm posting this on a 2009 iMac right now, and it has a maximum RAM configuration of 8GB (which is also how much RAM is installed). No matter how cheap RAM gets, this system can't accommodate any more.

Considering Mavericks was a free upgrade, installing it was like going up to 12GB of RAM or more -- for free. I don't have any metrics in front of me on the useful theoretical maximum compressed memory storage; I can only assume that it's somewhere in the neighbourhood of (Installed RAM - 1GB)*2 at best (or, in my case, 14GB; the 1GB is to ensure space is reserved for wired memory, which can't be swapped or compressed). I suspect it will be a bit less in practice, depending on how compressible your data is (the algorithm used is optimized for a 2:1 compression ratio; however, not all pages will be compressible to this degree -- my understanding is that if a pair of pages can't be compressed into a single page, the compression routine gives up on that pair).
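As a quick back-of-the-envelope sketch of that estimate (the 1GB wired reservation and the 2:1 ratio are my own assumptions from above, not published Apple figures):

```python
GIB = 1024 ** 3  # one gibibyte in bytes

def max_compressed_payload(installed, wired=1 * GIB, ratio=2.0):
    """Best-case amount of page data that can live compressed in RAM:
    everything except the wired reservation, at the assumed ratio."""
    return (installed - wired) * ratio

# My 8GB iMac, under these assumptions:
print(max_compressed_payload(8 * GIB) / GIB)  # → 14.0
```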

Note that as memory compression sits between the point where the OS identifies that it may need to evict old pages and the point where the pages are physically swapped to disk, the pages written to disk are also compressed (unless they were incompressible in the first place). This roughly halves the amount of data that needs to be written to swap, meaning the slowest operations in the paging-to-disk procedure take roughly half the time as well.

As Windows machines swap as well, being able to halve the time required to read data from and write data to disk would be a huge boost, and being able to serve a few million extra pages without the need to swap at all is an even bigger one. I'll point out this Ars Technica article on Apple's Compressed Memory subsystem -- note in particular the second graphic, which shows a system under much heavier memory pressure: a machine with 16GB of RAM has over 8GB compressed, and only 26.5MB (not a typo!) of data swapped to disk. That's a lot of data that didn't need to be written to a page file.


Comment: Re:Its Killer Feature (Score 5, Informative) 411

by Yaztromo (#47152217) Attached to: Apple WWDC 2014: Tim Cook Unveils Yosemite

you seem to know what you're talking about. can you explain this idea of memory compression, and what the heck the new activity monitor means? the old one made sense. Pie chart, showing free, available, and active. Now it's apparently using up all my memory I have 8 GB but it shows a line chart with a small amount of "memory pressure".

Sure -- I'll try to explain it the best I can. I won't make any specific judgements as to whether the new controls are better than the old, except to point out that there is more useful information in the new that wasn't present in the old. You're still perfectly welcome to prefer the old pie chart :). I'll try not to stray too far into the esoteric; if you need more details on a specific subject here, feel free to ask.

First, a bit on the theory of memory management in general. In most modern operating systems like Mac OS X, each application appears to get its own memory space, starting at 0 and running up to 0xffffffffffffffff (a fancy way of saying the addresses run from 0 to somewhere in the neighbourhood of 1.84*10^19 bytes). To make things easier to deal with, the operating system breaks this space up into chunks 4096 bytes in size, each called a 'page'. Now 1.84*10^19 bytes is almost certainly way more memory than you have available on your system, but that's okay -- while conceptually an application can use any of that memory space for pretty much anything it wants, the operating system keeps track only of which pages have actually been allocated to each application. This scheme is called 'virtual memory': each application has its own virtual memory space to play with that doesn't interact with the memory of any other application. This is the value that shows in the "Virtual Memory" box in the activity monitor.
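To make the page arithmetic concrete, here's a toy sketch (page size as described above; the example address is arbitrary):

```python
PAGE_SIZE = 4096  # bytes per page, as described above

def page_of(virtual_address):
    """Split a virtual address into its (page number, offset) pair."""
    return virtual_address // PAGE_SIZE, virtual_address % PAGE_SIZE

page, offset = page_of(0x1234)
print(hex(page), hex(offset))  # page 0x1 covers addresses 0x1000-0x1FFF
```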

Now of course, you have real, physical memory in your machine, and you don't have a separate set for each application (in a physical sense -- you don't have one set of chips for Safari and another set of chips for iPhoto, for example). The real memory has to hold the virtual memory somehow, and be able to map from one to the other. The operating system keeps a per-process structure known as a page table that holds this mapping for pages stored in physical memory (the CPU caches recently used entries in its Translation Lookaside Buffer to keep lookups fast). So it might have a bunch of entries for Safari, saying that the page the application sees as the memory area from 0x0000 to 0x0FFF is stored at physical location 0x40000000 (the 1GB mark), the page the application sees as the memory area from 0x1000 to 0x1FFF is at location 0x40096000, and so on. In fact, the pages can be all over the place, and not even in order -- the operating system keeps track of all the used memory pages for the application wherever they are stored. The amount of physical memory you have shows in the "Physical Memory" box of the activity monitor.
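The Safari example above can be sketched as a tiny lookup table (the physical addresses are the made-up ones from the paragraph):

```python
PAGE_SIZE = 4096

# One process's mapping: virtual page number -> physical address of its frame.
page_table = {
    0x0: 0x40000000,  # virtual 0x0000-0x0FFF stored at the 1GB mark
    0x1: 0x40096000,  # virtual 0x1000-0x1FFF stored somewhere else entirely
}

def translate(virtual_address):
    """Map an application's virtual address to a physical address."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    return page_table[page] + offset

print(hex(translate(0x1004)))  # → 0x40096004
```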

If you didn't get all that, don't worry -- the main takeaway is that these pages can be stored in memory, and the operating system keeps track of them when they are. Because we work with all of these pages, however, the operating system can also store them someplace else. Prior to Mavericks, this always meant writing them to disk in the "swap" file (also sometimes known as a "page file"). This happens when memory pressure gets higher than the operating system can handle in RAM alone; that is, programs are asking for more virtual pages than the operating system can fit into real memory. To try to make room for new requests without unloading applications, the operating system will periodically go through the list of pages when memory pressure is high, find the least-used ones (you might have some application running that you put into the background and haven't touched in hours, for example, or applications which have reserved pages for things such as documents you haven't looked at in hours, even if you've otherwise used the application itself), and write them to disk. This is known as "swapping". The pages of course are still there -- if memory pressure decreases, the operating system can load them again. Alternately, if you need to use that memory, the operating system will load it again, writing some other old, unused page to disk in its place. The size on disk of all of these old pages sent out for long-term storage like this is displayed in the "Swap Used" box in the activity monitor.
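The least-used-page hunt can be sketched with a simple least-recently-used policy (real kernels use cleverer approximations of "least used", so treat this purely as an illustration):

```python
from collections import OrderedDict

class Ram:
    """Toy LRU eviction: when RAM is full, the page untouched the
    longest gets written out to the 'swap file'."""
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.resident = OrderedDict()   # page -> data, oldest first
        self.swap = {}                  # evicted pages, on 'disk'

    def touch(self, page, data=b""):
        if page in self.resident:
            self.resident.move_to_end(page)        # used again: now newest
            return
        if page in self.swap:
            data = self.swap.pop(page)             # page fault: read back in
        if len(self.resident) >= self.capacity:
            old, old_data = self.resident.popitem(last=False)
            self.swap[old] = old_data              # evict the least-used page
        self.resident[page] = data

ram = Ram(capacity_pages=2)
ram.touch(1); ram.touch(2); ram.touch(3)   # page 1 has gone unused longest
print(sorted(ram.swap))                    # → [1]
```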

When memory pressure is really high, this can get really ugly really fast. Nearly everyone has a hard drive that is massive compared to the physically available RAM, so the system can keep swapping until the hard drive is full. For many people, however, the system will become nearly unusable long before that point: if the "least used" page is one that was used just a few milliseconds ago and will be needed again a few milliseconds from now (that is, all the pages have been used very recently and are expected to be used again), you can wind up in a situation where you're writing out some not-very-old pages in order to load some other not-very-old pages, going back and forth doing this again and again, and the speed at which you can do so is slower than the demand for the pages, slowing your system to a crawl. You've probably seen this before -- you load too many tabs in Safari and have a bunch of other stuff running, and all of a sudden the hard drive is running all the time and the system is unresponsive until you unload some tabs or programs and wait for the system to catch up to you (in effect, reducing the memory pressure). This phenomenon is known as "thrashing".

As an aside, modern operating systems take a shortcut when allocating memory to applications that request it. A memory page is typically only actually created when you put something into it. Empty pages are recorded by the system, but aren't given any real physical existence until they're first written to. This means you could (conceptually) ask for more pages than the system could ever handle -- an application's entire memory space, in fact -- and no real memory pages will be used until you put something into them. This is the reason why you can sometimes see the "Virtual Memory" value exceeding the amount of RAM, and still have no swapping happening.
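That shortcut -- pages springing into existence only on first write -- can be sketched like so:

```python
PAGE_SIZE = 4096

class LazyMemory:
    """Toy demand allocation: 'reserving' pages costs nothing; a
    physical page only materializes when first written to."""
    def __init__(self):
        self.pages = {}   # page number -> bytearray (the 'physical' pages)

    def write(self, page, offset, value):
        if page not in self.pages:            # first touch: allocate now
            self.pages[page] = bytearray(PAGE_SIZE)
        self.pages[page][offset] = value

mem = LazyMemory()
# An application can 'own' billions of pages, but until it writes...
print(len(mem.pages))          # → 0 physical pages in use
mem.write(123_456_789, 0, 42)  # first write to one faraway page
print(len(mem.pages))          # → 1
```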

That brings me to the "memory pressure" you asked about. This is basically a measure of how responsive the system can be in giving out new pages of memory when an application needs them. When it's a low, green line, you effectively have no pressure -- the operating system can hand out pages immediately, with little or no work, to an application that asks for them. When it's tall and red, the pressure is very high, with very little real RAM available to give out, lots of reliance on reading and writing pages to the hard drive swap file, and general slowness. You can think of this as the state where the programs you're running (including the operating system) have asked for way more RAM than you have, and the system is having a hard time rearranging pages to make room for new requests. As the memory pressure gets higher, the probability of thrashing also increases.

Now, in Mavericks Apple introduced a new subsystem they call "Compressed Memory". It basically sits in between the point where the system looks for old pages to write to the swap file and the actual writing of those pages to disk. Instead of writing those pages straight out, the operating system finds pairs of pages, runs them through a compression algorithm, and then stores them as a single page in memory. In effect, you take two pages and jam them into one, giving you an empty page to work with. Mavericks tries to do this with as many pages as possible instead of writing them out to disk, because it can do this operation a whole lot faster than writing all that data to disk, which is comparatively slow. The amount of memory the system is using to hold compressed data is listed in the bottom-right "Compressed" box in the activity monitor. Looking at mine right now, it says I have 392.3MB compressed. That's roughly 100,000 pages the system has made available again for application use by compressing pages I haven't used in a long time. If memory pressure were to increase, more memory would be compressed; if pressure continued to increase, the least-used compressed pages would be swapped to disk (in compressed form).
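Apple's actual compressor is WKdm, but the pairing idea can be illustrated with any general-purpose compressor (zlib here, purely as a stand-in):

```python
import os
import zlib

PAGE_SIZE = 4096

def try_compress_pair(page_a, page_b):
    """Try to squeeze two pages into the space of one; return None if
    the pair doesn't reach 2:1 (the routine gives up on that pair)."""
    packed = zlib.compress(page_a + page_b)
    return packed if len(packed) <= PAGE_SIZE else None

# Repetitive data (common in real memory) packs two pages into one...
print(try_compress_pair(b"A" * PAGE_SIZE, b"B" * PAGE_SIZE) is not None)  # → True
# ...while random, incompressible data doesn't fit, so it would be left alone.
print(try_compress_pair(os.urandom(PAGE_SIZE), os.urandom(PAGE_SIZE)))    # → None
```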

The key to performance, of course, is not to write anything to swap if it can be avoided. Swap slows things down. Conceptually, memory compression also slows things down, but it's so slight and imperceptible at the level it typically occurs that you'll never notice it as a user.

"App Memory" is the amount of memory actually in use by applications and the operating system itself. Memory pressure will stay green so long as this value is less than your Physical Memory. "Wired Memory" is a special subset of memory that cannot be compressed, swapped, or written to disk -- typically because it's too core to the operating system (imagine if the operating system paged out to disk the very code used to read pages back from the disk! You'd be stuck in a paradox and the system would stop working. Much of the OS kernel is always flagged as "Wired").

Which leads me to the last value in the activity monitor -- File Cache. It's well known in virtual memory circles that unused memory is wasted memory. Memory is very, very fast, and if you're not using it for anything at all, you're not gaining the benefit of its speed. For this reason, the operating system keeps a File Cache in memory where it stores bits and pieces of files it thinks you may need in the near future. It acts as a sort of predictive read-ahead of your hard drive: if you need a file in the cache, it will load in the blink of an eye without spending much time hitting the hard drive, making the system seem really, really fast. Modern OS designers use a file cache system that will use as much available memory as possible in order to benefit from this speed increase as many times as possible -- the larger the disk cache, the greater the probability you'll need something from that cache, and the faster the system will perform. As such, the File Cache claims nearly as many pages as are available in order to increase this probability and give you a faster experience loading files (note that some number of pages is usually left free in order to have somewhere to swap to and from as the need arises, but it doesn't have to be particularly large: maybe just a few MB of space).

The magic of the File Cache is that, since it's typically a read-only copy of what's actually on your hard drive, if an application needs more memory than is available, the operating system can simply drop pages in the cache with very little effort. They are never compressed or swapped (which would defeat the purpose -- why would you write data to the hard drive that already exists on the hard drive?) -- they just get dropped if an application needs more memory (perhaps by loading a new application). In this way, the File Cache will grow and shrink to fit whatever free physical memory your system has at the time. This is why you may see your "Memory Used" value getting very close to the "Physical Memory" value, but with the Memory Pressure still down in the green and no swap used: if you get into this situation and something asks for more memory pages, the OS will simply drop some of the disk cache pages and reassign them to the application.
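Since dropping a clean cache page requires no write-back, reclaiming file-cache memory is almost free; a minimal sketch:

```python
class FileCache:
    """Toy file cache: pages are read-only copies of disk blocks, so
    reclaiming them is just forgetting them -- the disk still has the data."""
    def __init__(self):
        self.pages = {}   # (filename, block number) -> cached bytes

    def reclaim(self, pages_needed):
        """Hand back up to pages_needed pages by dropping cache entries."""
        dropped = 0
        while self.pages and dropped < pages_needed:
            self.pages.popitem()   # no disk I/O required at all
            dropped += 1
        return dropped

cache = FileCache()
cache.pages[("notes.txt", 0)] = b"cached block"
cache.pages[("notes.txt", 1)] = b"another block"
print(cache.reclaim(1))    # → 1 page freed instantly
print(len(cache.pages))    # → 1 entry left
```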

For this reason, it's actually expected that Mavericks will always appear to be using all of your memory, and trying to fight this will actually hurt your performance. The truly important thing to keep an eye on is the "Memory Pressure", as this is a much more exact guide to what you should expect performance-wise as you use more memory. The old chart was great for people who know the ins and outs of virtual memory (like me), but it often led users to do weird things to try to "clear out" RAM in order to see more free space. You have to trust that the operating system can manage this faster and better than you can, and just seeing a pie chart showing all of your memory in use doesn't help with that. "Memory Pressure", on the other hand, is a much better measure, as only when it gets into the higher values should you start to look into doing something (such as unloading unused applications, browser tabs, documents, etc.).

I hope you didn't find this TL;DR -- I know it's long -- but the system is unfortunately very complicated, and you could spend an entire university semester dealing with the subject in depth. When I taught 3rd-year Operating Systems at a university, I'd usually spend at least a week on this subject, and even then you'd just scratch the surface and cover about as much background as you find here (albeit with some more algorithmic details). If you did find this TL;DR, here are the quick takeaways:

  1. Swapping to disk should be avoided as it slows your system down. Its main benefit is to allow you to use more memory than you physically have. Try to keep this value low.
  2. Mavericks tries to help you keep the swap size low by compressing pages before it has to write them to swap (with the hope the saved space means it never has to write to swap). And if it does eventually have to write to disk, it writes out the compressed pages, making swapping roughly twice as fast as it was prior to Mavericks.
  3. Try not to manually manage your memory. It's both okay and expected that the OS will try to use all of it. It does so to try to speed up your computer. If a program needs some of this used space, the OS will give it back immediately.
  4. The old memory pie chart is great for the uber-techies who understand virtual memory, but doesn't paint a useful picture if you don't. Memory Pressure is much, much more useful to laypeople, as it gives you an indication of when you should actually start to worry about reduced performance. When the memory pressure is low/green, the OS has lots of memory it can dole out at a moment's notice (even if the Memory Used value is close to the Physical Memory value). When it's yellow, you'll start to see some performance degradation if you don't unload some stuff. When it's red, performance is already suffering and you should unload some stuff right away.
  5. Virtual Memory is your friend! Try not to second-guess what the OS does with it -- it's smarter and faster than you are. You only need to take action if the Memory Pressure creeps into the red (or if it's yellow and you know you're going to need a bunch of memory in short order). If you find you're frequently in the yellow or red, a RAM upgrade would be of tremendous help (and conversely, if it's always green, a memory upgrade probably won't help you much).

Congrats if you made it to the end. That's my "short" version of a very complex topic. I hope it made sense and is useful to you.


Comment: Re:Its Killer Feature (Score 4, Interesting) 411

by Yaztromo (#47150467) Attached to: Apple WWDC 2014: Tim Cook Unveils Yosemite

Linux had the likes of zram, zcache, and zswap for years before Mavericks.

zram only graduated from the staging tree into the mainline Linux kernel in 3.14, released March 30, 2014 -- well after Mavericks shipped. And it's more about using a portion of compressed memory for swap -- it's a compressed RAM disk to swap to, and isn't the same as Apple's transparent page compression system.

zswap is much more akin to what Apple's Memory Compression scheme achieves, and it was merged into the Linux kernel mainline in kernel version 3.11, which was released on September 2, 2013, just a few weeks before Mavericks was released.

So you have my apologies -- I wasn't aware of zswap until now. If the topic comes up again, I'll ensure I only compare that feature to Windows (which AFAIK still has nothing like this available).


Comment: Re:Its Killer Feature (Score 4, Interesting) 411

by Yaztromo (#47150029) Attached to: Apple WWDC 2014: Tim Cook Unveils Yosemite

Do you know C? Any desire to implement such a feature in Linux? Seems like a good idea, and your claim of dramatic performance improvement has got me thinking. Perhaps this would be a good way to dip my toes into kernel hacking, and perhaps I'm not the only one thinking that.

Yup -- I even wrote an experimental real-time kernel for the Atmel AT90 a few years back.

To be honest, I have considered it, as I'm also a Linux user (OS X makes a fantastic interface into a bunch of headless Linux servers that do the grunt work around here), and I'd love to have this support there as well. I currently have 285 processes running on my iMac, and while I'm not really putting a lot of memory pressure on the system (7.97GB used out of 8GB, with only 8.76GB of virtual memory active and no swap), OS X has still managed to compress 395.6MB of memory, and I haven't noticed a thing. Indeed, it's probably saved me from having to page roughly 200MB to disk at the moment. That's a lot of pages available for use pretty quickly without the need to load them from disk first.

What's stopping me? Time. I used to do a lot of Open Source software development, and have had a few projects of my own over the years that have seen some moderate success, and would like to contribute more to the community -- but that was before I had a wife, and before we had a child who has a lot of medical needs. After a long day of commercial application development, and driving my daughter from one appointment to another six days a week, my hobbies currently reflect my desire to get out from behind the keyboard and do things outdoors.

I lament that things have gone this way -- there's nothing I'd love more than to do some deeper research on the type of compression algorithm Apple is using in their memory compression scheme (WKdm), re-implement it as part of the Linux kernel, look at algorithms to quickly identify candidates for compression, and all that good stuff. I get giddy just thinking about it -- but the last thing I need on my plate right now is another project.

If someone decides to take this up, they have my moral support. Maybe in a few years I can start working on interesting stuff like this again, but right now it would probably burn me out to take on something of this size.


Comment: Re:Its Killer Feature (Score 4, Insightful) 411

by Yaztromo (#47149569) Attached to: Apple WWDC 2014: Tim Cook Unveils Yosemite

I like the idea of free regular releases too. But the reality is that they don't seem to be able to break much technical ground with these. Like moving to ZFS or integrating virtual reality (kinda serious) .

While it is disappointing that their push towards ZFS fizzled and died, OS X 10.9 did make some serious technical improvements under the hood that go well beyond the competition.

Compressing and decompressing memory pages on the fly is one of them. It's a much (much!) faster operation than paging to disk, and can significantly reduce memory pressure. Many users felt like they had received a free hardware upgrade -- it can be pretty dramatic. AFAIK neither Windows nor Linux has transparent page compression like this. Timer coalescing was another significant kernel-level improvement (although certainly one that had been done before on other platforms). App Nap makes some significant adjustments to how threads and processes are allotted compute cycles. The overall effect can be significantly lessened power requirements, particularly on Apple's laptops, leading to increased battery life -- something no other OS vendor that I'm aware of is focussing on in the PC space (mobile being a bit of a different story, of course).

Perhaps not whiz-bang flashy stuff that end users notice first, but some pretty solid under-the-hood technology nonetheless.


Comment: Re:Decapitation. (Score 1) 483

by Yaztromo (#47094779) Attached to: Botched Executions Put Lethal Injections Under New Scrutiny

My personal stance on the legitimacy of the death penalty is a separate issue from how we'd implement it, if it is to be done.

While I support the death penalty, it would be an extremely rare event, confined to only applying to those so dangerous that allowing them to live will, statistically speaking, result in more death.

Keeping such offenders confined for long periods of time in a proper special handling unit serves the same purpose, but with one less death (the offenders).

Note that I don't have any sort of "soft spot" for dangerous offenders. What I'm more concerned about is a) what it does to those who have to carry out the sentence ("the executioner"), and b) what message it sends to society at large. Capital punishment is less about justice than it is vengeance. I often see a certain harshness among many people in the US when it comes to the penal system that doesn't seem to exist as much elsewhere in the Westernized world, and it's easy to find instances of mass murderers in the US who seem to feel it's their right to go out and take out anyone who has ever wronged them as "retribution" (the recent Santa Barbara rampage would be a good case in point). That's certainly not to say there aren't other things significantly wrong with such people, but when your society glorifies death as a solution to problems, would it be all that surprising to find that maladjusted children who grow up in such a society take that message in the wrong direction?

Back to a), however. You can't execute someone without an executioner, and unless you find an otherwise law-abiding psychopath to do the job, it messes people up. As mentioned in my last post, the Arizona warden who officiated at that state's last gas chamber execution threatened to quit if he ever had to do it again. I recall reading recently that Ronnie Lee Gardner's executioners all asked never to be assigned that duty again. Back when many countries still had professional executioners (many less than 100 years ago), many of them wound up as alcoholics with PTSD who had failed marriages and died relatively young. John Robert Radclive, state executioner of Canada between 1892 and 1899, started drinking after one particularly disturbing incident in which a sheriff had him hang a man who had already died on his way to the gallows; he died of alcohol-related illness at the age of 55. On capital punishment, he had this to say later in life: "I had always thought capital punishment was right, but not now. I believe the Almighty will visit the Christian nations with dire calamity if they don't stop taking the lives of their fellows, no matter how heinous the crime. Murderers should be allowed to live as long as possible and work out their salvation on behalf of the State."

I'm also not a fan of the idea of "the state" killing its own citizens -- for any reason. But I won't get into that at the moment.

That pretty much exhausts what I have to say about the subject, other than to thank-you for the polite and reasoned discourse. It's been interesting to see where our stands on the subject both intersect and diverge. I'm sure we could both agree that the ideal solution would simply be for our fellow citizens to no longer rape, torture, or kill others, mooting such debates in the future.


Comment: Re:Decapitation. (Score 1) 483

by Yaztromo (#47091141) Attached to: Botched Executions Put Lethal Injections Under New Scrutiny

Thus the orally administered anti-anxiety medicine.

I would think that would be its own can of worms. Ever try to administer oral medication to someone who doesn't want it? Besides which, anti-anxiety medication doesn't prevent you from having free will -- you'd still probably try to hold your breath if you're still conscious, particularly if you didn't want to die. The fight to live is quite strong for a lot of people.

Can't make anything entirely foolproof, but it's a lot easier to flood a room with N2 than it is for inexperienced people to find veins correctly.

Gas can be tricky to handle -- chambers have to be adequately sealed, and then have to be properly vented before anyone can enter the room to retrieve the deceased. In the case of the traditional hydrogen cyanide previously used by the US in gas chambers, the chambers had to be scrubbed by personnel in safety gear and oxygen masks using anhydrous ammonia before it would be safe to enter. Obviously this isn't a concern with nitrogen, but you still can't just open the door and walk in. The room would still need to be vented into the wider atmosphere prior to entry, and replaced with normal air. But I agree -- with adequate engineering and routine maintenance, it's probably more foolproof than sticking a needle in a vein.

Then again, the whole concept of capital punishment seems barbaric to me -- but then, I live in a country that hasn't executed anyone in over 50 years. Americans pour so much time, effort, and money into trying to figure out the most humane way to kill people -- the endless court cases and legal wrangling, the inequity in its application, the number of people later exonerated as having been innocent -- and for what? It has no deterrent value. It doesn't bring lost loved ones back to life. It has a tendency to negatively mess with the heads of the people carrying out the sentence (Arizona, for example, changed its primary method to lethal injection after the warden of the prison that carried out the last gas chamber execution in that state threatened to quit if he ever had to officiate over another one). The best solution to the problem isn't to find yet another way to kill people -- it's to stop killing people in the first place.


Comment: Re:Sickening (Score 1) 483

by Yaztromo (#47089451) Attached to: Botched Executions Put Lethal Injections Under New Scrutiny

If it is illegal to kill, it should be for the state as well. Anything else is hypocritical. Period. It is not about justice, nor does having capital punishment provide a deterrent that significantly affects violent crime rates.

It's always amazed me that Americans, who are one of the absolutely most distrustful of their governments in the entire westernized world, are often more than happy to permit said governments the power to kill their fellow citizens.

I heard on the radio just this morning that due to the supply difficulties, Tennessee is passing/has passed a law to bring back the electric chair. Now that's humane!

I've never been able to understand how the electric chair was ever able to surmount "cruel" and "unusual". Certainly the very first use of the electric chair would, by definition, have to constitute "unusual", and it only takes a few uses to see that it is substantially cruel.


Comment: Re:Nitrogen asphyxiation, if you must execute (Score 1) 483

by Yaztromo (#47089411) Attached to: Botched Executions Put Lethal Injections Under New Scrutiny

- It's completely painless and humane; one's physiology doesn't notice the lack of oxygen so the person just goes to sleep and then dies. People who were revived from asphyxia like this reported they had no idea until they woke up

I'm pretty sure they would have had some idea had they been marched manacled into a steel-walled room with sealed windows and an airlock-style door, strapped into a chair, and told they were now about to die by nitrogen asphyxiation.

It's potentially painless and humane when it's completely unexpected, but you can't say that about someone undergoing the ritual that is capital punishment. The two situations just aren't comparable.


Comment: Re:Human's a very good at not dying (Score 1) 483

by Yaztromo (#47089319) Attached to: Botched Executions Put Lethal Injections Under New Scrutiny

How many young women and girls were kidnapped, raped, tortured, and eventually killed by Ted Bundy after the state of Florida lit him up like a Christmas tree?

Coincidentally, it's the same number as were kidnapped, raped, tortured, and eventually killed after the state of Florida put Bundy in prison.

I'd also note that apparently none of the 66 people executed in Florida since Bundy were particularly deterred by Bundy's death.

Harder to calculate, of course, is how many murders have occurred in Florida since by people raised with the belief that murder == vengeance == justice via the example of state-sponsored killings like Bundy's.


Comment: Re:Decapitation. (Score 1) 483

by Yaztromo (#47089263) Attached to: Botched Executions Put Lethal Injections Under New Scrutiny

4. All evidence is that it's a fast, painless, and peaceful death.

Fast, painless, and peaceful if you're resigned to your death, sit or lie quietly, and inhale.

Slow, painful, and less peaceful if you try to fight it by holding your breath.

Of course either way the victim can experience convulsions prior to cardiac arrest, which is neither peaceful nor particularly pleasant to watch.

So in reality -- basically the opposite of what you posted.


Comment: Re:"Science" == "Argumentum ab auctoritate" ?!?!?! (Score 4, Informative) 247

by Yaztromo (#46838105) Attached to: Ask Slashdot: Books for a Comp Sci Graduate Student?

Cite Knuth... This is, of course, good science.

Well at least Professor Knuth is still alive, and I don't [YET!] need to refer to the poor man as spinning in his grave.

An AC posted an excellent response here. In the event you're filtering ACs, take the time to read it, as it's completely on point.

What I would add is this: if you've never completed a Masters thesis or Doctoral dissertation, just try submitting one to your committee without adequate citations. If you write somewhere "I used well-known algorithm ABC because of XYZ" and you don't have a citation for that algorithm, you'll be sent back for rewrites pretty quickly to add appropriate citations.

By way of example, in my Masters thesis several years ago I mentioned Unix diff without a citation. Why would this need a citation? It was mostly mentioned in passing, and every computer scientist under the sun knows what diff is, right?

The committee came back asking for further citations on a few things, including diff (which, for the record, is "Hunt, J. W., and McIlroy, M. D. An algorithm for differential file comparison. CSTR, 41 (1976).").

Using citations isn't an appeal to authority. It's akin to using an existing library call in programming. Just as you wouldn't roll your own quicksort when coding, someone writing a scientific paper doesn't re-derive every algorithm ever invented. You find someone who has already done that work, and you cite them. The AOCP is useful in this regard due to the sheer number of algorithms Knuth describes; it's hard to go through a Computer Science program and not use one of them. Knuth himself likewise cites the sources for all of the algorithms in the AOCP, so it's not an appeal to his authority -- he delegates that out to others appropriately. It's simply useful because instead of having to track down papers written in the 1960s on your own, you can cite Knuth, who cites those papers for you. This is why the AOCP is useful for a graduate student.

FWIW, I cited Knuth. I needed an algorithm to calculate variance, and another for the Box-Muller transformation. The Art of Computer Programming had one for each, which I adapted for my needs and cited appropriately.

