It could be an interesting idea in linguistics and data mining to identify potential workplace threats and troubled workers.
Being an "interesting" idea from an intellectual point of view says absolutely *nothing* about whether it's a good idea or not.
There shouldn't be an expectation of privacy in workplace emails. If you want that, use a private account to discuss things.
Okay; the fact that you're giving that pat response here suggests you don't understand (or weren't paying attention to) the difference between this and the typical, straightforward "employers are reading my workplace email" thread. I actually wonder whether you got the point of the story at all.
This isn't spying on people directly expressing hostile or subversive thoughts against the company, this is using it on (potentially) superficially work-related and neutral email content to determine the underlying psychological attitude of the employee.
Given that the employee is probably *required* to use email in this manner as part of their job, and given that this isn't something they're likely to be doing consciously (else they'd avoid doing it, duh), it's not as if they have a choice in the matter.
Whether this is good or bad comes down to how you react to an alert.
The issue here, and the reason most people quite rightly expressed the (supposedly) "kneejerk" reaction you dismiss, is that they already know from past experience how large corporations and similar entities (i.e. the people likely to be buying this technology) will probably use this sort of power.
For genuinely troubled employees, however, this might actually be useful if it leads to a confidential meeting with a third party or ombudsman who tries to help the employee.
Yeah, because large US-style corporations are well-known for protecting employees with problems, and certainly wouldn't simply use this as an early warning to get rid of someone before they become a problem. Or who might never have become one, but why take the chance?
I saw the example in the story. A nice, touchy-feely way to justify an intrusive technology, but let's get real here.
If it's used to actually help troubled employees who might not reach out for help on their own, it could benefit people while protecting the company. If used properly, it's a good thing.
The question is, how likely do you think it is to be used "properly" in your sense of the word?
Your problem is that you seem to view the technology in a purely abstract sense, i.e. one that could theoretically be used for good or bad. Well, theoretically it could be, yes.
However, your so-called "tinfoil hat crowd" knows damn well that such technologies don't exist in isolation, knows what kind of people they're designed for, and knows the kind of people and organisations they're likely to be sold to. Based on past experience, it's not unreasonable to draw conclusions about how it's likely to be used.
So, you can keep dismissing its critics as "paranoid delusional", but that doesn't make your counter-argument any stronger.