First, thanks for the answer. Second, I don't know how interested you would be in a reply, but I'll reply anyway.
There's always that one person in any conversation who just can't take a joke. Tag, you're it. Here, I'll lay it out for you plainly, and I promise to use small words:
Are you saying there's no factual evidence to support this? You disagree with his observation?
Yes and yes. People display a wide variety of behaviors for a wide variety of reasons. Even worse, the emotions expressed are often inconsistent even when circumstances are identical or nearly so, because a person's emotional state, receptivity, and responsiveness depend not just on what's going on now, but on past experience, which isn't available for observation.
"People who are nervous often are hiding something."
The OP said "often", not always. And even then, he doesn't imply that they're hiding anything significant, much less criminal. What you seem to be proposing instead is that the range of emotions expressed for any given reason is so wide and random that the former cannot be significantly linked to the latter, or vice-versa. Maybe you have some special knowledge that supports your idea, but the OP's idea seems to speak more to common knowledge, especially since it is culturally normal to assume that people are sad because something bad happened, or happy because something good happened, etc.
I have no idea what this means. Every police officer doesn't profile? Or you can't say either way? Or they shouldn't profile?
Officers who observe others change the behavior of those they are observing.
I agree: if the observed people are aware they're being watched, that's definitely something to take into account.
Worse, they come with their own biases based on frequent exposure to extreme behaviors. Those biases in turn create behavioral interactions with those they are observing, which intensify the first order effect.
I agree also. I'm not sure how you're proposing it would affect an automated system that can't significantly interact with anyone, unless you're talking about the people who program it or the people who later operate it. I am not denying the possibility of a self-confirming bias emerging in the statistics.
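To make concrete what I mean by a self-confirming bias, here's a minimal sketch (the districts, numbers, and allocation rule are all hypothetical, invented for illustration, not anything from the article): if follow-up resources are allocated wherever past detections are highest, an initially trivial difference in the statistics snowballs, even when everyone behaves identically.

```python
# Hypothetical sketch of a self-confirming bias: extra patrols follow past
# detection counts, and patrols generate new detections, even though both
# districts offend at exactly the same underlying rate.
import random

random.seed(0)

detections = {"district_a": 11, "district_b": 10}  # nearly identical start

for year in range(10):
    target = max(detections, key=detections.get)   # allocate by the stats
    for district in detections:
        patrols = 10 if district == target else 2  # biased allocation
        # Same true offense rate everywhere; more patrols, more hits.
        hits = sum(random.random() < 0.3 for _ in range(patrols))
        detections[district] += hits

print(detections)  # the early "leader" runs away with the numbers
```

Whether the real system would drift this way depends entirely on how the operators assign follow-up attention, which is exactly your point about the people behind the machine.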
So... you agree with his premise? Isn't that how a premise works?
I'm saying the statement has no relevance. It's like saying:
1. It is raining.
2. It never rains on the moon.
3. Rain needs clouds.
Conclusion: It must be cloudy.
#2 is completely irrelevant, even though it is on the same topic.
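For the formally inclined, a minimal sketch in Lean (the propositions are invented for this toy argument) makes the irrelevance visible: the proof of the conclusion never touches premise #2.

```lean
-- Toy propositions for the example above; hypothesis h2 is declared but unused.
example (Raining Cloudy MoonRain : Prop)
    (h1 : Raining)            -- 1. It is raining.
    (h2 : ¬MoonRain)          -- 2. It never rains on the moon.
    (h3 : Raining → Cloudy)   -- 3. Rain needs clouds.
    : Cloudy :=
  h3 h1  -- the conclusion follows from h1 and h3 alone
```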
"There are lots of people out there, and since you can't really be expected to casually see the criminals in the act..."
I don't see how this is irrelevant to the discussion. He's justifying the need for the system based on the factual limits on the resources available for "solving" the issue. He is taking a constraint into account.
Where is his backward logic?
He's implying profiling is required to identify people likely to commit crimes. That isn't correct: you can identify people without profiling, for example by using their past criminal history.
"you need to profile them in order to pick out people who are likely to commit crimes."
This answer left me even more confused. If by "You can identify people without profiling" you mean "You can identify people likely to commit crimes without profiling", then your statement can only be true if you restrict the definition of profiling to the type proposed in the article, since what you proposed (selecting by past criminal history) is still a type of profiling. And then you would also be saying that the type you proposed is more likely to be correct, which brings back the self-confirming bias. If you meant something else, then your definitions are strange to me.
Also, your use of the word "required" needs clarification. The point the OP was making was that if there is not an "infinite" amount of resources (infinite cops) to casually come across every crime, then there is a "logistical need" to intelligently allocate the available resources.
You can't look for people who are about to commit a major crime? Or if you look for them, they won't commit crimes? Or if they commit it, you won't catch them?
Non sequitur. Your questions don't even come close to matching the statement I made. My point was that looking for crimes people are 'about' to commit isn't the problem -- it's the fact that increased surveillance of any sufficiently sized group will turn up more criminal activity than in a control group. The group placed under surveillance is largely arbitrary; the criterion suggested can just as easily ensnare the average person as a 'terrorist', 'person of interest', or whatever the latest phrase is for a political undesirable.

This system will simply provide prosecutors with more 'evidence' of a person's 'guilt' which, on closer examination, is a complete house of cards. It has zero evidentiary value -- a prosecutor can't show up and say "Well your honor, here's the camera footage of the defendant doing the crime, and... here's our form 193-B stating there was a .13% chance of him doing it this week... which, being that it was 4 times higher than his neighbors', PROVES he's guilty!"
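A toy simulation (all the numbers are invented for illustration) shows the effect: give two identical groups different levels of scrutiny and the watched group reliably "produces" several times more recorded crime.

```python
# Hypothetical numbers throughout: two groups with the SAME true offense
# rate, where the "watched" group is simply observed more closely.
import random

random.seed(42)

POPULATION = 10_000        # people per group
OFFENSE_RATE = 0.05        # identical underlying rate in both groups
DETECT_CONTROL = 0.10      # chance an offense is noticed normally
DETECT_WATCHED = 0.60      # chance under increased surveillance

def detected(detect_prob: float) -> int:
    """Count offenses that both occur and get noticed."""
    return sum(
        1
        for _ in range(POPULATION)
        if random.random() < OFFENSE_RATE and random.random() < detect_prob
    )

print("control:", detected(DETECT_CONTROL))  # about 10,000 * .05 * .10 = 50
print("watched:", detected(DETECT_WATCHED))  # about 10,000 * .05 * .60 = 300
```

Same behavior in both groups; the only difference in the recorded statistics is who was being watched.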
The method of identifying these factors is flawed. But even if it weren't, subjecting some people to a 'test' that increases the risk to their personal liberty without any due process circumvents the entire point of the judiciary, which is to be a fair and impartial system. And even if THAT weren't the case, and this system were supported by incontrovertible scientific accuracy, and the surveillance were subjected to due process, and it didn't violate reasonable expectations of privacy, etc., etc., it would still be wrong, because the justification for the warrant rests on statistical probability, also known as circumstantial evidence. But skipping all that, it comes down to this:
The only way we can have a fair system is when we punish people for the things they've done, not the things they could do. And increased surveillance is a form of punishment -- it subjects someone to scrutiny and deprives them of privacy, and even if nothing comes of it for them personally, the risk has an associated cost: the potential punishment (the time spent behind bars and the fines) multiplied by the percent chance that anyone who is subjected to this increased surveillance will be prosecuted and convicted.
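In expected-value terms (my notation, not the OP's or the article's):

```latex
% Expected cost imposed on a surveilled person, in my own notation:
%   p = P(prosecuted and convicted | placed under increased surveillance)
\[
  \mathbb{E}[\text{cost of being surveilled}]
  \;=\;
  p \times \bigl(\text{time behind bars} + \text{fines}\bigr)
\]
```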
So while I still don't know, I'll guess that your answer of "Warning: This statement will never evaluate." is a way of saying "It's not about that." Which actually dismisses many of the interesting points the OP made in order to introduce another problem (which you have now explained at length). That other problem may well exist, but the OP's points stand nonetheless. And if you go back and read his whole post, you'll see that he probably doesn't even disagree with you on most of it. Read his last line especially. He isn't proposing this system be used for evidence, only as a "tag-for-follow-up" system.