To cut a long story short, at "6.8 million years old" I assume they mean "the longest read (maximum number of consecutive GATC 'letters' in a row) you're possibly going to get is one". Imagine having a pile of letters which were once arranged into the collected works of William Shakespeare: could you re-assemble the original work? No. But what if you had 4-letter fragments? You might be able to learn something about the English language, indirectly, but you probably won't be able to reverse-engineer the complete original work. Now what if you had slightly longer fragments? That would help. What if the garbled pile of letters/fragments actually consisted of multiple, similarly (randomly!) shredded copies of Shakespeare? Well, as long as they're randomly fragmented in different ways, then wherever we guess that two fragments join, a fragment from the same region of another copy which spans that join makes us more and more confident about forming a plausible assembly. So we can take advantage of this redundancy and randomized fragmentation to attempt recovery of the original work.
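To make the overlap idea concrete, here's a toy greedy assembler in Python. Purely illustrative - real assemblers use far cleverer data structures (de Bruijn graphs, suffix arrays) - and the fragments here are made up, but it shows how two differently-sheared "copies" cover each other's joins:

```python
# Toy greedy overlap assembler (illustrative only, not how real tools work):
# repeatedly merge the pair of fragments with the longest suffix/prefix overlap.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(frags, min_len=3):
    frags = list(frags)
    while len(frags) > 1:
        best = (0, None, None)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b, min_len)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # no usable overlaps left; assembly stays fragmented
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags

# Two "copies" sheared at different points cover each other's joins:
copy1 = ["TO BE OR ", "OR NOT TO BE"]
copy2 = ["TO BE OR NOT", "NOT TO BE"]
print(greedy_assemble(copy1 + copy2))  # → ['TO BE OR NOT TO BE']
```

(Greedy merging like this breaks down badly on repetitive text - which is part of why short reads are genuinely hard.)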
In other words, the more degraded the DNA, the shorter the fragments and the harder it is to come up with an assembly. At some point the fragmentation might be so bad that the only way you can attempt to achieve anything is to use a relevant, well-understood reference sequence from a modern-day specimen/consensus for comparison (or clues, or to fill in the blanks)... if one exists. I'm no geneticist, but I think in those circumstances the confidence in the results starts to go from "hey, that's cool!" to "interesting" to, eventually, an artist's rendition of what an ancient genome might have looked like - drawing from long-lost cousins which are still alive today.
Happily, re-assembling short, fragmented DNA happens to be how commodity high-speed, high-throughput, low-cost sequencing works these days: DNA is split into small lengths, e.g. 500-ish basepairs, and then, depending on the experiment/purpose/targets etc., it's all (or partially) re-assembled by finding enough overlapping bits (hopefully beginning and ending with the proprietary markers used in the splitting process), with statistical tricks to qualify whether the data is sufficient, which areas are problematic in coverage/confidence, etc. And it helps enormously if you're working on an organism that's already been sequenced to death, for comparison.
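The coverage/confidence bookkeeping can be sketched too. Assuming hypothetical read alignments given as (start, length) positions against a reference - all the numbers below are made up - this toy snippet computes per-base depth and flags the thin spots:

```python
# Toy coverage check (made-up data, not a real pipeline): given read
# alignments as (start, length) positions on a reference, compute per-base
# depth and report regions that fall below a minimum coverage threshold.

def depth(ref_len, alignments):
    d = [0] * ref_len
    for start, length in alignments:
        for i in range(start, min(start + length, ref_len)):
            d[i] += 1
    return d

def low_coverage(d, min_depth=2):
    """Return (start, end) half-open intervals where depth < min_depth."""
    gaps, start = [], None
    for i, c in enumerate(d + [min_depth]):  # sentinel closes any open gap
        if c < min_depth and start is None:
            start = i
        elif c >= min_depth and start is not None:
            gaps.append((start, i))
            start = None
    return gaps

d = depth(20, [(0, 8), (5, 8), (6, 8), (15, 5)])
print(low_coverage(d))  # → [(0, 5), (13, 20)]
```

Those flagged intervals are exactly the "problematic areas" where you'd want more reads before trusting the consensus.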
So there are many well-advanced tools for coming up with contiguous DNA from a pile of short reads.
IIRC, the other trick with ancient DNA is, first of all, extracting enough useful material to begin with, without damage. As reads get shorter, increased redundancy helps - more randomly overlapping regions ease the task of re-assembly - but very short reads might mean that a number of different assemblies are possible. Not to mention delicate amplification methods which might amplify the noise as well as the signal...
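That ambiguity is easy to demonstrate: with short enough reads, genuinely different sequences can shred into the identical pile of fragments. A brute-force toy (tiny made-up "genome", 3-letter reads - nothing here resembles real scale) that enumerates every candidate consistent with the reads:

```python
# Toy demo (illustrative only): several different "genomes" can be
# consistent with exactly the same multiset of short reads, so no
# assembler, however clever, can tell them apart from the reads alone.
from collections import Counter
from itertools import product

def kmers(seq, k):
    """Multiset of all length-k substrings (the 'reads') of seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def consistent_assemblies(reads, k, length, alphabet="ACGT"):
    """Brute-force every sequence of the given length and keep those
    whose k-mer multiset matches the reads exactly."""
    target = Counter(reads)
    return ["".join(g) for g in product(alphabet, repeat=length)
            if kmers("".join(g), k) == target]

genome = "TGCTGATG"  # made up; the repeated "TG" is what causes the trouble
reads = [genome[i:i + 3] for i in range(len(genome) - 2)]
candidates = consistent_assemblies(reads, 3, len(genome))
print(candidates)  # multiple sequences, all equally consistent with the reads
```

Repeats longer than the read overlap are precisely what make the answer non-unique - longer reads shrink that candidate list back toward one.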