From the sort of images we see as output, I gather that it's an ANN doing the work, so even if it achieved a 100% success rate it would be hard to understand the algorithm per se.
Also, we don't know *how* it was trained, so we can't tell whether it is decoding the raw visual input or some pre-parsed, this-is-an-A sort of input.
I'm waiting for someone with insider access to the articles to shed some light on this.
Sounds like something Microsoft would sell for $5999.99.
Outside of a dog, a book is man's best friend. Inside of a dog, it is too dark to read.