The error is not small. As the article shows, on certain very reasonable inputs (not pathological at all), you can wind up with only _four_ bits being correct.
The issue here is that any sine value outside the first quadrant (inputs from 0 to pi/2) is computed by first reducing the input. The function is periodic, so adding or subtracting any multiple of the period (2 pi) from the input is mathematically valid. The error is made small for each value in that first quadrant, and the Intel documentation correctly quotes the errors there. The argument-reduction step, however, doesn't use extended-precision arithmetic, so each subtraction of 2 pi introduces roundoff error, and that roundoff error propagates into the output.
Thus, the sin(1.14159) calculation, starting from a 10-decimal-digit-accurate representation of 1.14159, gives roughly a 10-decimal-digit determination of the proper sine value. But sin(1.14159 x 10**9) will get LOTS of leading digits cancelled when you reduce the input, leaving a reduced value accurate to only about one decimal digit, and thus only a 1-decimal-digit sine.
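The digit loss is easy to reproduce outside the FPU. A minimal Python sketch (the 50-digit value of pi and the Decimal-based reference reduction are my own illustration, not anything Intel does):

```python
import math
from decimal import Decimal, getcontext

x = 1.14159e9  # exactly representable as a double

# Naive reduction: remainder after dividing by the double-precision 2*pi.
# fmod itself is exact, so all the error comes from 2*pi being rounded.
naive = math.fmod(x, 2.0 * math.pi)

# Reference reduction using 50-digit arithmetic.
getcontext().prec = 50
two_pi = 2 * Decimal("3.14159265358979323846264338327950288419716939937510")
exact = float(Decimal(x) % two_pi)

# The ~2.4e-16 absolute error in the rounded 2*pi gets multiplied by the
# ~1.8e8 periods removed, so the reduced input is off by roughly 1e-8:
# about half the digits are already gone before sin() is even evaluated.
print(naive, exact, abs(naive - exact))
```

With a 66-bit internal approximation of 2 pi (what the x87 actually uses), the same effect appears, just starting from a smaller per-period error.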
And, if the hypothetical, perfect sine value has leading zeroes, any error at all looks terrible as a 'percent error'. The slightest roundoff error in a sin(3.14159265358979323846264338...) calculation gets a near-divide-by-zero boost when you calculate a percent error. The absolute error, though, is just what is to be expected from roundoff in a step that takes the remainder after dividing by (2pi + roundoff_error(2pi)).
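That effect can be seen in isolation, a quick Python sketch: math.pi is the double nearest to pi, so the true sine of it is tiny but nonzero, and even a superb absolute result looks like an enormous relative error (the numbers here are illustrative, not a claim about any particular FPU):

```python
import math

# math.pi differs from the true pi by roughly 1.22e-16, and to first
# order sin(pi - eps) = eps, so sin(math.pi) is a tiny positive number
# on the order of 1e-16 -- not zero.
tiny = math.sin(math.pi)
print(tiny)

# Absolute error of a correctly rounded result here is ~1e-16: excellent.
# Relative ("percent") error against such a tiny true value blows up,
# since even a 1e-16 slip is on the order of 100% of the answer.
```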