Collecting the data alone is already an O(n) job, and it can be worse if you are trying to keep the data sorted while you collect it (O(n log n) overall).
Use a red-black tree: it keeps the data sorted with an O(log n) worst-case bound per insertion.
How can you calculate the mean deviation at any time without revisiting all of the data points that you have collected so far? Can you do it in better than O(n) time? Calculating the standard deviation, by contrast, takes O(1) and does not require reexamining the data at all, provided you've been keeping track of the right things during data collection (which itself still takes O(n)).
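The "right things" to track are the count, the sum, and the sum of squares. Here is a minimal sketch of that bookkeeping (the class and method names are my own, not from any particular library):

```python
import math

class RunningStats:
    """Track count, sum, and sum of squares so that the mean and
    standard deviation are available in O(1) at any moment."""

    def __init__(self):
        self.n = 0
        self.total = 0.0
        self.total_sq = 0.0

    def add(self, x):
        # O(1) work per data point
        self.n += 1
        self.total += x
        self.total_sq += x * x

    def mean(self):
        return self.total / self.n

    def stddev(self):
        # Population standard deviation: sqrt(E[x^2] - E[x]^2)
        m = self.mean()
        return math.sqrt(self.total_sq / self.n - m * m)

rs = RunningStats()
for x in [2, 4, 4, 4, 5, 5, 7, 9]:
    rs.add(x)
print(rs.mean())    # → 5.0
print(rs.stddev())  # → 2.0
```

(For long streams of large values, the textbook E[x²] − E[x]² form can lose precision; Welford's online algorithm is the numerically stable alternative, but the O(1)-per-point idea is the same.)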
That is a typical exam question for a data structures class. You maintain a red-black tree, and in each node you keep the sum and count of the elements in its subtree (you need to update these during rotations, and that's it). Since a red-black tree has logarithmic height, you can find the sum of all elements greater than a given value in logarithmic time: walk down the tree as in a binary search, accumulating the values of the subtrees whose smallest element is greater than the searched value.
Once you have that, you get the total absolute deviation from the mean via

    (sum_greater(mean) - count_greater(mean) * mean) + (count_less(mean) * mean - sum_less(mean))

and each term is available in O(log n) time; divide by n for the mean absolute deviation.
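To make the idea concrete, here is a sketch using a plain (unbalanced) BST augmented with subtree count and sum; a real implementation would use red-black rebalancing and update the augmented fields during rotations, but the query logic is identical. All names here are illustrative, not from an existing library:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.count = 1      # number of keys in this subtree
        self.total = key    # sum of keys in this subtree

def insert(root, key):
    # Plain BST insert; a red-black tree would also rebalance here,
    # keeping count/total correct through rotations.
    if root is None:
        return Node(key)
    root.count += 1
    root.total += key
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def greater_stats(node, x):
    """(count, sum) of keys strictly greater than x, in O(height)."""
    count = total = 0.0
    while node is not None:
        if node.key > x:
            # This node and its whole right subtree exceed x.
            count += 1 + (node.right.count if node.right else 0)
            total += node.key + (node.right.total if node.right else 0)
            node = node.left
        else:
            node = node.right
    return count, total

def mean_abs_deviation(root):
    n, s = root.count, root.total
    mean = s / n
    cg, sg = greater_stats(root, mean)   # elements above the mean
    cl, sl = n - cg, s - sg              # elements at or below it
    # Elements equal to the mean contribute zero to either term.
    return ((sg - cg * mean) + (cl * mean - sl)) / n

root = None
for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]:
    root = insert(root, x)
print(mean_abs_deviation(root))  # → 2.5
```

With rebalancing in place, `insert` and `greater_stats` are both O(log n), so the mean absolute deviation is available in O(log n) at any point during collection.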