My father had a saying: when you hear hoof beats, it's probably not zebras. You look for horses first.
There is not one type of accuracy, but two: the chance of false positives and the chance of false negatives. Most of the time you care more about false positives (hey, this test says you have a deadly disease when you don't) than about false negatives (sorry, we failed to catch the fact that you have the disease).
Example: the deadly disease is rare, occurring only 4% of the time. Out of 1000 people, 40 actually have the disease and 960 don't. If the test gives a correct positive diagnosis 80% of the time, it will show 40*0.8 = 32 people as having it and fail to catch the other 8 (false negatives). This test is 80% "sensitive": it detects the problem 80% of the time when it exists.
But what about false positives? The test could be 80% accurate here too, showing 960*0.8 = 768 people as true negatives and leaving 192 false positives. This is called "specificity". And what if the test is only 50% specific? Then only 960*0.5 = 480 healthy people are correctly cleared, while 480 healthy people are falsely told they have the disease.
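To make that arithmetic easy to check, here is a minimal Python sketch. The function name and structure are my own, not anything standard; it just splits a population into the four outcomes given a prevalence, a sensitivity, and a specificity.

```python
def confusion_counts(population, prevalence, sensitivity, specificity):
    """Split a population into the four possible test outcomes."""
    sick = population * prevalence           # people who actually have the disease
    healthy = population - sick              # people who don't

    true_positives = sick * sensitivity      # sick people the test catches
    false_negatives = sick - true_positives  # sick people the test misses

    true_negatives = healthy * specificity       # healthy people correctly cleared
    false_positives = healthy - true_negatives   # healthy people wrongly flagged

    return true_positives, false_negatives, true_negatives, false_positives

# The example from the text: 4% prevalence, 80% sensitivity.
print(confusion_counts(1000, 0.04, 0.80, 0.80))  # (32.0, 8.0, 768.0, 192.0)
print(confusion_counts(1000, 0.04, 0.80, 0.50))  # (32.0, 8.0, 480.0, 480.0)
```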
Honestly, we usually care far more about specificity. Even at the 80% rate, it is pretty horrible to correctly catch 32 of the 40 truly sick people if the test also tells 192 of the 960 healthy people that they have the disease.
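One way to see why those false positives dominate: of everyone the 80%-sensitive, 80%-specific test flags as sick, only a small fraction actually has the disease. A quick back-of-the-envelope check using the counts above:

```python
true_positives = 32    # sick people correctly flagged
false_positives = 192  # healthy people wrongly flagged (80% specificity case)

flagged = true_positives + false_positives
print(f"{true_positives / flagged:.0%} of positive results are real")  # ~14%
```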
It is easy to create a test with higher sensitivity if you do not care about specificity, but that is rarely a good idea, especially for an initial exam.
Usually you want something with high specificity, to make sure you are not terrifying patients and starting dangerous, expensive, and/or painful treatments on healthy people.
Only later do you switch to the high-sensitivity exam, when you are double-checking that the result of the first test is accurate.
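The same arithmetic can be chained to see how that two-stage approach plays out: run the first exam on everyone, then the follow-up only on those it flags. The numbers below are purely hypothetical (and assume the two tests err independently); they are only meant to show how the counts combine.

```python
sick, healthy = 40, 960  # same 1000-person population as above

# Hypothetical stage 1: high specificity (98%), modest sensitivity (80%).
s1_sens, s1_spec = 0.80, 0.98
flagged_sick = sick * s1_sens              # 32 sick people flagged
flagged_healthy = healthy * (1 - s1_spec)  # ~19 healthy people flagged

# Hypothetical stage 2, run only on the flagged group:
# high sensitivity (99%), lower specificity (70%).
s2_sens, s2_spec = 0.99, 0.70
confirmed_sick = flagged_sick * s2_sens                 # ~31.7 sick people confirmed
still_flagged_healthy = flagged_healthy * (1 - s2_spec) # ~5.8 healthy people still flagged

print(confirmed_sick, still_flagged_healthy)
```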