I don't know if ancient samples are processed differently, but for 'fresh' samples, the DNA gets broken up into small fragments (200-1000 base pairs long), and then these fragments get sequenced. All bits of the genome have a roughly even chance of getting sequenced, and with thousands or millions of fragments, you normally get reasonably even coverage over the whole genome.
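To give a feel for why random fragmentation tends to give even coverage, here's a toy simulation (just a sketch, nothing like a real pipeline): it drops random 200-1000 bp fragments onto a made-up genome and tallies how many land on each base. The genome length and fragment count are arbitrary numbers picked for illustration.

```python
# Toy sketch: random fragments over a hypothetical genome give roughly even coverage.
import random

GENOME_LEN = 100_000             # made-up genome length
N_FRAGMENTS = 50_000             # made-up number of sequenced fragments
FRAG_MIN, FRAG_MAX = 200, 1000   # fragment sizes, as in the text

coverage = [0] * GENOME_LEN
for _ in range(N_FRAGMENTS):
    frag_len = random.randint(FRAG_MIN, FRAG_MAX)
    start = random.randint(0, GENOME_LEN - frag_len)
    for pos in range(start, start + frag_len):
        coverage[pos] += 1           # one more fragment covers this base

mean_cov = sum(coverage) / GENOME_LEN
print(f"mean coverage: {mean_cov:.1f}x")
print(f"min/max coverage: {min(coverage)}x / {max(coverage)}x")
```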
The problem is when you map your sequences back onto a reference genome (ie the currently known chr1, chr2, chrX, etc). The aligning software will have trouble deciding where to place a fragment that is part of a highly repetitive sequence (like centromeres or telomeres), or is duplicated several/many times (eg large gene families that have large sections of the genes in common, or pseudogenes that look like copies of other genes). In addition, we don't even know the exact sequence for some of these regions, so our reference human genome is constantly being updated (currently up to version 38, GRCh38).
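This ambiguity is actually something you can see in the data: aligners record a mapping quality (MAPQ) for each read, which is low when the read could have been placed in several spots. Here's a rough sketch using pysam to count such reads; the BAM filename and the MAPQ cutoff of 20 are just assumptions for illustration.

```python
# Minimal sketch: count reads whose placement the aligner considers ambiguous,
# using the MAPQ score stored in a BAM file. "sample.bam" is a hypothetical,
# coordinate-sorted and indexed alignment file.
import pysam

bam = pysam.AlignmentFile("sample.bam", "rb")

total = ambiguous = 0
for read in bam.fetch("chr1"):          # look at reads mapped to chr1
    if read.is_unmapped:
        continue
    total += 1
    if read.mapping_quality < 20:       # low MAPQ = aligner unsure where this read belongs
        ambiguous += 1

print(f"{ambiguous} of {total} chr1 reads have MAPQ < 20 (ambiguous placement)")
```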
For bioinformatics analysis, sometimes it is easier to sweep some of this under the rug. For example, some people use a reference genome that masks out the centromeres and telomeres (ie our reference sequence just has NNNNNNNNNNNN bases there, instead of As, Cs, Gs and Ts). Alternatively, there are databases that list the regions containing repeated sequences or duplicated segments, so you can check any of your findings to make sure they aren't in a suspicious region.
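For that last point, the check can be as simple as intersecting your variant coordinates with a BED file of repeats or segmental duplications (eg the RepeatMasker or genomicSuperDups tracks you can export from UCSC). A rough sketch, with made-up filenames and variant positions:

```python
# Minimal sketch: flag findings that fall inside regions listed in a BED file of
# repeats/duplicated segments. Filenames and variant positions are hypothetical.
from collections import defaultdict

def load_bed(path):
    """Read a BED file into {chrom: [(start, end), ...]} (0-based, half-open)."""
    regions = defaultdict(list)
    with open(path) as fh:
        for line in fh:
            if line.startswith(("#", "track", "browser")):
                continue
            chrom, start, end = line.split("\t")[:3]
            regions[chrom].append((int(start), int(end)))
    return regions

def in_suspicious_region(chrom, pos, regions):
    """True if the 0-based position falls inside any listed region."""
    return any(start <= pos < end for start, end in regions.get(chrom, []))

repeats = load_bed("repeatmasker_hg38.bed")            # hypothetical filename
variants = [("chr1", 1234567), ("chr7", 55019017)]     # made-up example calls
for chrom, pos in variants:
    flag = "CHECK: repetitive/duplicated" if in_suspicious_region(chrom, pos, repeats) else "ok"
    print(chrom, pos, flag)
```

(For real datasets you'd use an interval tree or a tool like bedtools rather than scanning a plain list, but the idea is the same.)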