Yep. I always use LibreOffice to edit and send back documents for work. It usually works OK, but with frequent glitches. I worried about that, so I once asked our admin if she had a problem with the docs I sent back. She said mine were no worse than those she got from everybody else, and she had never realized I wasn't actually using MS Word to edit them. Glitches and formatting errors are apparently completely normal even with the same version of MS Word on different computers.
Another way is to find a book that represents your company culture, and give it to each new arrival to read.
Give a different book to each new arrival. Put them all into the same, new greenfield project and let them fight it out. Adopt the winners' methodology as the new company culture.
John Cook (put his blog in your RSS feed if you don't already have it) made a very good point recently: The speed gains from Moore's Law are dwarfed by the speed gains from algorithmic improvements. And unlike Moore's Law, we're not yet seeing a limit approaching for better ways to solve stuff. The post in question: http://www.johndcook.com/blog/...
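To make the point concrete, here's a hypothetical back-of-the-envelope sketch (the function name and the n = 10^6 input size are my own choices, not from the post): moving from an O(n^2) algorithm to an O(n log n) one at a realistic problem size buys you a speedup that would take hardware many doubling periods to match.

```python
import math

def algorithmic_vs_moore(n):
    """Speedup from replacing an O(n^2) algorithm with an O(n log n)
    one at input size n, expressed also as the number of Moore's-law
    doublings hardware would need to deliver the same gain."""
    quadratic = n * n                    # operation count, naive algorithm
    linearithmic = n * math.log2(n)      # operation count, better algorithm
    speedup = quadratic / linearithmic
    doublings = math.log2(speedup)       # each doubling ~18-24 months
    return speedup, doublings

speedup, doublings = algorithmic_vs_moore(10**6)
print(f"~{speedup:,.0f}x speedup, worth ~{doublings:.1f} hardware doublings")
```

At a million elements the better algorithm is worth roughly fifteen generations of Moore's Law, which is the kind of gap the post is talking about.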
A lot of tasks intrinsically don't scale, or scale only up to some limit. Some people are running into this already in the HPC world, where we have big parallel machines that users can't take full advantage of. Their simulations simply don't scale above a certain number of cores.
This problem is becoming steadily worse, since people want to make models with more detail (which tends not to parallelize well), and simulate much longer timeframes than before. If you're simulating protein interactions over one millisecond, it might not matter whether it takes an hour or two. But if you want to use that model to understand LTP in neurons and simulate a second or two, it becomes a serious problem when the model can't parallelize further and the per-core speed stays put.
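The ceiling described above is just Amdahl's law: if any fraction of the work is serial, adding cores stops helping. A minimal sketch (the 95%-parallel figure is an assumed example, not from the comment):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: maximum speedup on `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A model that is 95% parallelizable caps out at 20x, no matter the machine.
for cores in (16, 256, 10**9):
    print(cores, round(amdahl_speedup(0.95, cores), 2))
```

Even on a billion cores the speedup approaches 20x and no more, which is why "simulate a longer timeframe" can't be bought with a bigger machine once the serial fraction dominates.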
Just remember that a real neuron is nothing like the "neurons" in neural networks. Each one is really computing a fairly complex set of functions. A single real neuron would be best represented by a decent-sized recurrent neural network all by itself.
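For contrast, here is everything a textbook artificial "neuron" computes: one weighted sum pushed through one squashing function. This sketch is mine, just to illustrate how little that is next to a real neuron's dynamics:

```python
import math

def ann_neuron(inputs, weights, bias):
    """The entire computation of a standard artificial neuron:
    a dot product plus a bias, passed through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(ann_neuron([1.0, 0.5], [0.2, -0.4], 0.1))
```

A biological neuron, with its dendritic nonlinearities and temporal dynamics, needs a whole recurrent network of these units to approximate, which is the comment's point.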
Geeks are just as good the world over, whether in Japan, Taiwan, the EU, the US or China. Product quality has nothing to do with the quality of the designers and builders and everything to do with the budget and time constraints they have to work under. And that is all about where their company wants to position itself in the price/quality/reputation landscape.
Sharp has a well-deserved reputation for good quality and sometimes off-beat or niche products that delight a few even if they don't become huge sellers. And that's of course part of the reason why they've been in trouble for some years now. Foxconn doesn't have a reputation for premium products or for doing their own thing.
I share the worry that Sharp as we know it will disappear, and just become another nameplate pasted on bland, forgettable me-too stuff.
Check your duplex switch.