Interesting theory. I've been working with Java since version 1.0, on hardware as slow as an embedded 100 MHz device with 128 MB RAM, and I don't remember GC ever taking seconds.
Just out of curiosity, I tried to push our Java application (a data integration engine) to use both CPUs at 100% and dump the garbage collection stats to disk. Here's a typical sample:
133,091: [GC 30567K->10559K(60160K), 0,0052000 secs]
133,447: [GC 34943K->10347K(64832K), 0,0036360 secs]
133,873: [GC 39659K->10347K(63872K), 0,0028940 secs]
134,286: [GC 38699K->10531K(63104K), 0,0033140 secs]
134,674: [GC 37923K->10263K(61952K), 0,0019690 secs]
135,072: [GC 36759K->10351K(61184K), 0,0024490 secs]
135,462: [GC 36015K->10339K(60352K), 0,0022610 secs]
135,797: [GC 35171K->10739K(59840K), 0,0039780 secs]
136,134: [GC 34803K->10679K(59008K), 0,0033120 secs]
136,479: [GC 33975K->10567K(58048K), 0,0029140 secs]
136,801: [GC 33159K->10647K(57472K), 0,0026420 secs]
Note that this is without incremental garbage collection enabled. A graphics-intensive application might conceivably notice a few milliseconds of pause, but something tells me even that is unlikely.
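For anyone who wants to try something similar: the log above is the classic -verbose:gc / -Xloggc:&lt;file&gt; output (the -Xloggc file adds the timestamp prefix). Here's a rough stand-in for the kind of allocation churn that produces it; this is obviously not our integration engine, just a hypothetical micro-benchmark with a live set sized to roughly match the ~10 MB you see surviving each collection in the log.

import java.util.ArrayList;
import java.util.List;

// Run with:  java -verbose:gc -Xloggc:gc.log GcChurn
// (on the older JVMs of that era, add -Xincgc to try the incremental collector)
public class GcChurn {
    public static void main(String[] args) {
        List<byte[]> survivors = new ArrayList<byte[]>();
        for (long i = 0; i < 50000000L; i++) {
            // Mostly garbage: this buffer usually dies before the next minor GC.
            byte[] chunk = new byte[4 * 1024];
            // Keep a small fraction alive so the live set stays around 10 MB,
            // similar to the heap occupancy shown in the log above.
            if (i % 1000 == 0) {
                survivors.add(chunk);
                if (survivors.size() > 2500) {
                    survivors.remove(0); // cap the live set
                }
            }
        }
        System.out.println("done, kept " + survivors.size() + " chunks alive");
    }
}

Run it flat out on a couple of cores and you get the same pattern: frequent minor collections, each finishing in a handful of milliseconds.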