The problem with computer vision is not that it's not useful, but that it's sold as a complete solution comparable to a human.
In reality, it's only used where it doesn't really matter.
OCR - mistakes are corrected by spellcheckers or humans afterwards.
Mail systems - sure, there are postcode errors, but they result in a slight delay, not a catastrophic failure of the system.
Structure from motion - fair enough, but it's not "accurate", and most of that kind of work relies less on CV than on actual laser measurements and the like.
Photo stitching - I'd be hard pushed to see this as more than a toy. It's like a Photoshop filter. Sure, it's useful, but we could live without it or do it manually. Its biggest use is probably in mapping, where it's a time-saver and not much else. It doesn't work miracles.
Number plate recognition - well-defined formats on tuned cameras aimed at the right point, and I guarantee there are still errors. The systems I've been sold in the past claim 95% accuracy at best. Like OCR, if the number plate is read slightly wrongly, there are fallbacks before you issue a fine to someone based on the image.
Face detection is a joke in terms of accuracy. If we're talking about biometric logon, it's still a joke. If we're talking about working out if there's a face in-shot, still a joke. And, again, not put to serious use.
QR scanners - that I'll give you. But it's more to do with old barcode technology that we had 20 years ago, and a very well-defined (and heavily error-corrected) format.
Pick-and-place rarely relies on vision only. There are much better ways of making sure something is aligned that don't come down to CV (and, again, they usually involve actually measuring rather than just looking).
I'll give you medical imaging - things like MRI and microscopy are greatly enhanced with CV, and it's the only industry I know of where a friend with a CV doctorate has been hired. Counting luminescent genes/cells is a task easily done by CV because, again, accuracy is not key. I can also refer you to my girlfriend, who works in this field (not in CV) and will show you how many times the most expensive CV-using machine in the hospital gets it catastrophically wrong - hence there's a human to double-check.
CV is, hence, a tool. Used properly, you can save a human time. That's the extent of it. Used improperly, or relied upon to do the work all by itself, it's actually not so good.
I'm sorry to attack your field of study; it's a difficult and complex area, as I know myself, being a mathematician who adores coding theory (i.e. I can tell you how and why a QR code works even when large portions of the image are damaged, or how Voyager keeps communicating despite interference of an unbelievable magnitude).
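To make the coding-theory point concrete: QR codes use Reed-Solomon codes, which are too involved for a comment box, but the same principle can be shown with the much simpler Hamming(7,4) code. This is my own toy sketch, not anyone's production code - 4 data bits become 7 transmitted bits, and the receiver can locate and fix any single flipped bit:

```python
# Toy illustration of error correction: the principle behind why a QR
# code still scans with chunks missing, shown via the Hamming(7,4) code.
# Codeword layout (1-based positions): [p1, p2, d1, p3, d2, d3, d4].

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Decode a 7-bit codeword, correcting up to one flipped bit."""
    c = list(c)
    # Re-check each parity group; the three results, read as a binary
    # number, spell out the 1-based position of the error (0 = no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

# Transmit 4 bits, corrupt one of the 7 transmitted bits, still recover:
data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[4] ^= 1               # simulate interference
assert hamming74_decode(codeword) == data
```

No "intelligence" anywhere in that: the redundancy is designed in up front, which is exactly why QR decoding is reliable in a way general scene understanding is not.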
The problem is that, like AI, practical applications boil down to tool-time: saving a human from a laborious, repetitive task and helping that task along, but never replacing the human in the long run or operating entirely unsupervised. Meanwhile, the headlines tell us we've invented "yet another human brain" - claims so vastly untrue as to be truly laughable.
What you have is expertise in image manipulation. That's all CV is. You can manipulate the image to be more easily read by a computer, which can then extract some of the information it's after. How the machine deals with that, or how your manipulations cope with different scenarios, requires either a constrained environment (QR codes, number plates) or constant human intervention.
Yet it's sold as something that "thinks" or "sees" (and thus interprets the image) like we do. It's not.
The CV expert I know has code in an ATM-like machine in one of the southern American counties. It recognises dollar bills and things like that. Useful? Yes. Perfect? No. Intelligent? Far from it. From what I can tell, most of the system is things like edge detection (i.e. image manipulation via a matrix, not unlike every Photoshop-compatible filter going back 20 years), derived heuristics and error margins.
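That "image manipulation via a matrix" is nothing mysterious - it's just sliding a small kernel over the image and summing the products. A minimal sketch with the horizontal Sobel kernel (the tiny synthetic image here is purely illustrative, nothing to do with the actual machine):

```python
# Edge detection as plain image manipulation: filter the image with a
# small matrix (here the horizontal Sobel kernel) - the same kind of
# filter image editors have shipped for decades.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve(image, kernel):
    """Valid-mode 2D filtering of a grayscale image (list of rows)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            acc = 0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A tiny synthetic image: dark on the left, bright on the right.
image = [[0, 0, 255, 255]] * 4

# The response is large wherever brightness changes left-to-right,
# i.e. exactly at the vertical edge - and zero on flat regions.
edges = convolve(image, SOBEL_X)
```

Everything downstream of a kernel like this - thresholds, heuristics, error margins - is hand-tuned engineering, not "seeing".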
Hence, "computer vision" is really a misnomer, where "Photoshopping an image to make it easier to read" is probably closer.