The problem isn't really the textbooks---the books themselves are often relatively cheap (for example, a 9th edition of Sullivan's Precalculus can be had for $30 or $40 if you don't mind being an edition out of date). The problem is that students are also required to buy access to the publisher's website in order to do their homework.

One alternative is to hire advanced undergraduates to grade papers, or (better yet) hire more expensive graduate students, or even (heaven forbid) tenure track lecturers to teach smaller sections and/or grade papers. There is basically no money to do that, so it isn't going to happen. Another alternative is to use something like MAA's WeBWorK for homework. This might be quite feasible in the future as WeBWorK is improved (or another, better free, open source system comes along), and my department is doing as much as it can via WeBWorK, but the system is still not all there---there are simply things that, as bad as it is, MathXL can do much better than WeBWorK.
This might be evidence of my own lack of creativity, but I just don't see many other alternatives, and none of them are going to be any cheaper at the end of the day.
*Mathematics* isn't science. It is more properly a branch of philosophy that happens to be really, really useful for the sciences. The difference is that sciences are empirical---ideally, scientists observe the world, form explanations of their observations, then test those explanations with further observations. Mathematics is not empirical---mathematicians start from a set of fundamental assumptions, then use logic to deduce the consequences of those assumptions.
From the summary, it seems that the criticism is that economists are behaving more like mathematicians than like scientists---that is, they are making assumptions about how the world works, then using logic to determine the consequences of those assumptions. Instead, they should be making observations, then using the tools of mathematics to analyze data taken from the real world and test their explanations.
I don't know about that. A couple of back-of-the-envelope computations make me think that 10 years is not a long enough timeframe to make such a camera anywhere near common. Consider, for instance, the 3-ton weight. Suppose that technology develops such that an equivalent sensor halves in weight every year. Ten years then represents halving the weight 10 times, giving a weight of approximately 6 lbs. That definitely isn't iPhone weight, and comes from a pretty optimistic assumption about how quickly the technology will develop. The computation, for completeness: (3 tons) / 2^10 ~= 5.9 lbs
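For anyone who wants to check the arithmetic, here is the same halving computation as a quick script (I'm assuming the "3 tons" in the summary means US short tons of 2,000 lbs):

```python
# Back-of-the-envelope: sensor weight after ten annual halvings.
# Assumes a US short ton (2,000 lbs); "3 tons" comes from the summary.
start_weight_lbs = 3 * 2000            # 6,000 lbs
years = 10
final_weight_lbs = start_weight_lbs / 2 ** years
print(round(final_weight_lbs, 1))      # -> 5.9
```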
Or we could look at pixel counts. The summary claims that the camera will capture 3.2 gigapixel images. Apple claims that the iPhone 6 has an 8-megapixel camera. So the telescope camera will capture 400 times as much data. Assuming that the iPhone camera doubles its pixel count every year, it would take almost 9 years to get to 3.2 gigapixels. Even if we assume that the iPhone is used to take panoramas, where a panorama can have up to about 2^3 the pixel count of a non-panorama (again, see Apple's claims), this represents 6 years of doubling every year, which is, again, pretty optimistic.
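The doubling-time arithmetic works out like this (log base 2 of the pixel ratio gives the number of annual doublings needed):

```python
import math

# Years of annual pixel-count doubling to go from an 8 MP
# phone camera to a 3.2 GP telescope camera.
phone_mp = 8
telescope_mp = 3200                      # 3.2 gigapixels
ratio = telescope_mp / phone_mp          # 400x as much data
print(round(math.log2(ratio), 1))        # -> 8.6, i.e. "almost 9 years"

# Starting from a panorama instead (about 2^3 the pixel count):
pano_ratio = ratio / 2 ** 3              # 50x
print(round(math.log2(pano_ratio), 1))   # -> 5.6, i.e. about 6 years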
Long story short: yes, technology marches forward, but this is likely to be a pretty impressive instrument even 10-15 years in the future.
It can be difficult to tell the difference between rocks that have been modified by people and rocks that have been shaped by natural processes. That being said, there are things to look for.
First is material. From the photographs in the linked article, it appears that the purported tool is made from some kind of fine-grained siliceous material (high in silicon, rather than magnesium and iron, as evidenced by the color), whereas the surrounding rock appears to be basalt (mafic, therefore darker in color). If you work in an area, you get to know the geology of the region, and where rocks come from. Seeing rocks far from their sources often indicates human curation. That being said, it seems unlikely to me that anyone would bother to curate a general tool like the ones photographed, so that probably isn't going to be a huge factor in this case.
Second, after seeing hundreds or thousands of stone tools, you get good at identifying them. It is kind of like chicken sexing---it may be difficult to quantify *exactly* why something is a tool, but people get really good at it, nonetheless. Again, this isn't the whole story, but it gives you an idea about why one might pick up a rock in the field. People who have a lot of experience and training are more likely to recognize potential tools.
Third, there are morphological indications of human modification. Rocks that fall and break naturally tend to have random patterns of flaking, whereas intentionally modified rocks will show flaking that is concentrated in a particular place. This isn't foolproof (indeed, there were purported pre-Clovis tools found in California a few decades ago that, upon closer examination, turned out to be naturally formed), but, again, it is an indication.
Fourth, it is often possible to tell a tool from other contextual clues: is it near a hearth? a pile of animal bones? other easily identified tools? Again, given the age, this is unlikely to be useful in this context, but you asked a more general question, so this is part of a more general answer.
Finally, there are lab tests that can help. One can check for residue (e.g., blood or plant residue that might indicate use in preparing food), or microflaking that might indicate use, for example. These are things that you can't see in the field, and almost certainly can't see in a photograph that was taken in the field.
I had the rare misfortune of being one of the first people to try and implement a PL/1 compiler. -- T. Cheatham