Yes, this is how science works. It is obvious that talking will help people make flint tools. We all know that. But how do we know that? Saying 'it's obvious' is not helpful. It is also obvious that you can get better at making tools when you can watch someone who is good at it. But you can take plenty of people who have never chipped flint tools and see how much better they are when they watch someone, when they mutely interact with someone, and when they talk. Some gifted people can pick up musical instruments just by watching, but making flint tools seems to be helped a lot by language.
The article also says that this is suggestive, but could not be considered a proof. The researchers know they have no ancient people to experiment on, and it is not practical to try the same tests with a mammoth hunt. It's not a time machine, but we use what we have.
No, it's how junk science works -- where you conduct a small and very badly flawed experiment, dominated by an effect that has been very well known in the literature for 30 years, but then claim the experiment is "suggestive" of a grandiose conclusion that it clearly doesn't support, in order to garner some publicity for yourself from journalists.
The researchers found the "exciting" result that the groups that received more feedback in their instruction performed better, and the group that received detailed feedback using whatever means the teacher deemed appropriate performed really well. Frankly, they could have had this "excitement" in 1984 if they'd read the educational research literature first -- the effects of better feedback on learning have been very well known for a very, very long time, studied on wider groups over longer periods of time (not some toy 25-minute one-time task). Bloom's "two sigma" paper would have been a good place for them to start reading, and they'd have found plenty more. Only those results are less exciting, because they don't tack on a grandiose claim about the evolutionary origins of instruction.
I may sound curmudgeonly, but widely-publicised science stories where some small, externally invalid experiment "suggests" a grandiose conclusion reduce science to the level of QI anecdotes.
You know Betteridge's law of headlines? Here's Billingsley's law of science headlines: if a science story headline says something "may have" [followed by a grandiose claim], then it almost certainly didn't -- if the experiment had produced enough evidence to support that conclusion, the headline would say "did".