The Epsilon rocket consists of three solid-fuel stages, like an ICBM. So there's no fueling on the pad, no plumbing, no cryogenics, and no turbopumps. The launch team has a lot less to do than with a liquid-fueled rocket.
They're also proudly proclaiming how quickly they can prepare the rocket for launch. I don't think that these features are coincidental, and I don't think that cost savings are the only driver behind developing this thing. North Korea's leadership is a bit unstable at times, it may have nuclear weapons, and Japan has had North Korean rockets fly over its territory before. It's a serious potential threat to them.
Since its defeat in WWII, Japan has been very pacifist, but in recent years it has begun to expand its military activities a bit, taking part in a UN peacekeeping mission, for instance. Outright developing an ICBM would probably go a bit too far at this point, but a civilian rocket that can be launched at short notice by a small crew and has the range to hit North Korea could just be an acceptable compromise: it mitigates the NK threat without rocking the domestic political boat with overly aggressive military moves.
Based on a sample size of 1. Nice generalization.
Hey! That's one better than some of the climate change theories!
(I know this was meant as a troll/joke, but you're hitting the nail on the head.) No, they have a sample size of exactly one Earth. That's due to the lack of spare Earths to compare ours to, of course, but it's exactly what makes this whole subject statistically challenging.
If all you could measure was the global average temperature then yes, you'd have one sample of a simple probability distribution, which contains so little information that you can't possibly derive any interesting knowledge from it about a system as complex as our planet. Fortunately, we have measurements of many aspects of that planet, not just temperature but atmospheric composition, ocean temperatures and salinity, albedo, ocean currents and wind, and so on, and not just global averages but measurements localised in space and time. So your one sample is actually a sample of an extremely high-dimensional, highly internally correlated probability distribution, which gives us much more information to work with.
Now, it's true that we can't do controlled experiments covering the entire Earth, since we don't have a control to compare against. However, we can do such experiments at a smaller scale and use the results to guide the construction of whole-planet models, and we can exploit the natural variety across our planet to test hypotheses and draw conclusions. So science is still possible; we just need different tools. Models make predictions, and if a model predicts our actual sample to be unlikely, we can rightfully conclude that that model is unlikely to be a good description of reality.
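As a toy illustration of that last point (entirely made-up numbers, nothing to do with real climate data): suppose a hypothetical "no trend" model claims annual anomalies are just i.i.d. noise. We can simulate that model and ask how often it produces a trend as steep as the one we supposedly observed:

```python
import random
import statistics

random.seed(0)

def slope(ys):
    """Least-squares slope of ys against 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

# Made-up "observed" trend, and a made-up null model: 30 annual
# anomalies that are i.i.d. N(0, 0.1) with no trend at all.
observed_slope = 0.02   # hypothetical value, degrees per year
trials = 2000
hits = sum(
    slope([random.gauss(0.0, 0.1) for _ in range(30)]) >= observed_slope
    for _ in range(trials)
)
p = hits / trials
print(p)  # if p is tiny, the no-trend model is a poor description of the "data"
```

If the model almost never reproduces the observation, the one sample we have is enough to make the model itself look implausible, which is the whole logic of testing hypotheses without a spare Earth.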
The modeller's challenge is to create a description of a complete planet that accurately describes the characteristics that you're interested in and correctly mimics the emergent behaviour (insofar as is relevant to your research question) of the actual planet, while still being simple enough to fit in a computer, give meaningful and comprehensible results, and have reasonable uncertainty bounds on its predictions given the limited amount of information we have available to feed it. I don't think that having a second Earth would make that job much easier.
We have 2 political parties in this country. They dictate the issues. They write the rules governing how you create a party and how you get on a ballot. Nearly everyone in the media belongs to one of the two parties. The parties control the message. You basically cannot vote for anyone who does not belong to one of the parties. You can write in a name, but the fact of the matter is that it's nearly impossible to coordinate a write-in voting effort.
I'm not from the US, but given all that's happened in the past 15 years it seems to me that at this point voting either Republican or Democrat in any federal election should be considered treason. A vote for either of these parties is a vote for a government of the people, by the elite, for the corporations, and as I understand it, that wasn't quite the idea of your country. Perhaps a write-in or third party vote is a wasted vote, but at least you're not actively voting for this abomination.
As for alternatives besides your current third parties, in the most recent elections in Italy (which had similar issues) the Five Star Movement got almost a third of the vote in what was previously a two-party (or two-coalition) system, with a strictly online and on-the-streets campaign (they're boycotting the Berlusconi-controlled mainstream media). They're promoting amongst others more direct (e-)democracy, limited terms in both houses of congress filled by ordinary people who take a few years out of their lives to serve the country, and reduction in campaign spending.
It's certainly not perfect: they are having issues with disagreements within the party, it turns out online voting doesn't work too well technically, and some of their other policy ideas probably wouldn't work in the US. You'd need your own version of such a party for sure, fix some things, and then it will still be a struggle to make it work. But it shows that it's not impossible to break a two-party system even if that system controls the mainstream media, and it's worth a try. Even inexperienced and/or somewhat incompetent representatives would be an improvement over what you currently have, as long as they're at least honestly trying to represent the people.
Actually, we recently started collecting plastic separately. That means we now collect glass (white and coloured often separately), paper, clothing, compostable waste, batteries and other small chemical waste, and plastic separately in most places. On top of that there's a deposit scheme that puts a small extra fee on PET soft drink bottles, glass beer bottles, and beer crates, which you get back when you hand them in at the shop. Supermarkets have machines that you put them into, and they're collected when the shop is resupplied. The bottles are stripped of their labels, cleaned, and reused up to 50 or so times IIRC before they're recycled.
It's not much of a burden: you just keep a few extra bins or bags of waste around and remember to take them with you when you go get your groceries. Just about every supermarket has a bunch of recycling bins on the forecourt. That's not to say there aren't lazy people who just toss everything in the garbage, but they're probably a minority.
And as my grandpa used to say "Girls want ponies, people in hell want ice water, I want a million dollars...that don't mean any of us are gonna get it".
Unless they're gonna Kickstarter the chips in the thing it'll be DAMN hard to make it FOSS, simply because the ones making the GPUs, wireless, etc. are about the most proprietary lot on the planet. Hell, I don't even think you CAN make a FOSS GPU, as everything from texture compression on up is patented up the ass. I know there was a project to make one using an FPGA, but I never heard any more about it; it probably hit the legal minefield and ran aground.
Basically, it ran out of money; the main contributors didn't have as much time available any more, and making an ASIC is expensive. Some prototype boards were manufactured, and the employer of the main developer (who let him use their tools and work on it some during office hours) made a commercial product based on the design. It never got to producing a consumer video card, though. I see now that Kickstarter actually existed in 2010, but I don't think any of us had ever heard of it, and I don't think we could have raised the couple of million dollars needed to have the cards produced.
For those interested, there's still an active mailing list, the project isn't quite dead.
Another option you may want to look into is working at a supercomputer centre. These are usually (semi-)independent organisations that maintain supercomputers and fast networks and help scientists use them. Jobs there include technical sysadmin-type work maintaining compute clusters, storage arrays, and networking equipment; programming with an emphasis on parallelisation, optimisation, and visualisation; and more consulting-type work where you advise researchers on how best to use the available facilities to achieve their goals, and gather requirements for the programmers. As a random US example, there's one in Chicago.
As for technical skills, if you're in the geosciences then you'll definitely want to brush up on your knowledge of Geographical Information Systems. ESRI ArcGIS is the big commercial vendor there, but there's also a lot of FOSS GIS software available. Also, some knowledge of geostatistics will help you communicate; some tutorials can be found here.
I don't understand why simply putting the closed source firmware on the card suddenly makes it ok for free software. Same code, just different home.
Back in the days of the Open Graphics Project (since defunct, although Timothy N. Miller is still working in this area and the mailing list is still active for those interested in the subject), we had several discussions about the borders between Free software, open firmware, and open hardware.
As I understood the FSF's position at that time, the point is that if the firmware is stored on the host, it can be changed, and frequently is (i.e. firmware updates). Typically, the manufacturer has some sort of assembler/compiler tool to convert firmware written in a slightly higher level language to a binary that is loaded into the hardware, which then contains some simplistic CPU to run it (that's how OGD1 worked anyway). So, the firmware is really just specialised software, and for the whole thing to be Free, you should have access to the complete corresponding source code, plus the tools to compile it, or at least a description of the bitstream format so you can create those. This last part is then an instance of the general rule that for hardware to be Free software-friendly, all its programming interfaces should be completely documented.
If the code is put into ROM, it cannot be changed without physically changing the hardware (e.g. desoldering the chip and putting in another one). At that point, the FSF considers it immutable, and therefore not having the firmware source code doesn't restrict the user's freedom to change the firmware, since they don't have any anyway. The consequences are a bit funny in practice, as you noted, but it is (as always with the FSF) a very consistent position.
We (of the OGP-related Open Hardware Foundation, now also defunct; the whole thing was just a bit too ambitious and too far ahead of its time) argued that since hardware can be changed (i.e. you can desolder and replace that ROM), keeping the design a secret restricts the user's freedom just as much. So we should have open hardware, which would be completely documented (not just the programming interfaces, but the whole design) and could therefore be changed/extended/repaired/parts-reused by the user. The FSF wasn't hostile to that idea, but considered it beyond their scope. Of course, any open hardware would automatically also be Free software-friendly.
I tend to agree that in practice, especially if there are no firmware updates forthcoming but it's just a cost-savings measure, loading the code from the host rather than from a ROM is a marginal issue. Strictly speaking though, I do think that the FSF have a point.
Also, in the Star Wars universe, Daala is the protégé of Grand Moff Tarkin, who gave his name to Xiph.org's earlier experimental wavelet-based video codec effort, so the name makes perfect sense from a historical perspective as well.
Those existing codecs are all very similar technically, and riddled with patents. If Monty can make something new (and he can, see CELT) and work around those patents (and he can, see Vorbis, Theora), then it's definitely a welcome addition. And a codec doesn't have to dominate to be useful; Vorbis is widely used (Wikipedia, all sorts of software that plays sound and music including a lot of if not most video games) and supported on a lot of platforms (including hardware players and set-top boxes) even if it never did completely replace MP3 and AAC. If nothing else, having a free and unencumbered option will keep the licensors of the proprietary codecs at least somewhat honest.
Incidentally, isn't it about time for Monty to get an EFF Pioneer award? He's been very successfully working on freely usable audio and video codecs for well over a decade now, starting at a time when many people didn't believe that a non-encumbered audio or video codec was even possible. Someone with his skills could probably make a very good living in proprietary codec development, but he chose to start Xiph.org and fight the good fight (and now works for Red Hat). He belongs in that list IMHO.
So here we are, at a crossroads. If a project produces the source code needed to build a complete, binary-perfect copy of their executable(s), but it was run through the C pre-processor, or C++ pre-processor, is that enough? It compiles, it builds with the version of tools the provider used... if you discount the pre-processor, it is effectively the original source code provided to the compiler. Is that enough?
I believe Stallman answered that question already, and as you would expect from him, it's a smart answer too. In the GPL (v3, but it goes back all the way to v1) it says "The “source code” for a work means the preferred form of the work for making modifications to it." So, if the creator of the source code actually works on the preprocessed source all the time, then it's okay to redistribute only that. If, in fact, any work done on the program is typically done on the original, non-preprocessed source, then that is the source code and that has to be distributed. This neatly avoids having to define a minimum level of readability by simply requiring that all users/developers be equal.
Suggesting that the purpose of intelligence in this man's random musings might be to increase the background levels of entropy for your own benefit.
That's close, I think. I am not a physicist and I skimmed the equations, but here's my take on what they're proposing. Physical systems have states, which can be described by a state vector. The state of these systems evolves according to some set of rules that describes how the state vector changes over time. They've built a simulator in which the probability of a certain state transition is computed by looking at how many different paths (in state space, i.e. future histories of the system) are possible from the new state, in such a way that the system tries to maximise the number of possibilities for the future. In one example, they have a particle that moves towards the centre of a box, because from there it can move in more directions than when it's close to a wall.
They then set up two simple models mimicking two basic intelligence tests, and find that their simulator solves them correctly. One is a cart with a pendulum suspended from it, which the system moves into an upright position because from there it's easiest (cheapest energetically, I gather) to reach any other given state. The other is an animal intelligence test, in which an animal is given some food in a space too small for it to reach, and a tool with which the food can be extracted. In their simulation, the "food" is indeed successfully moved out of the enclosed space, because it's easier to do various things with an object when it's close compared to when it's in a box. However, in neither case does the algorithm "know" the goal of the exercise. So they've shown that they've invented a search algorithm that can solve two particular problems, problems which are often considered tests of intelligence, without knowing the goal.
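To make the idea concrete, here's a minimal sketch (my own construction, not the authors' code) of a path-counting agent in one dimension: a walker in a bounded row of cells that always steps to the neighbouring cell with the most admissible futures ends up in the middle of the box, just like the particle in their example:

```python
from functools import lru_cache

N = 11      # cells 0..N-1, a one-dimensional "box"
DEPTH = 6   # how far into the future we count paths

@lru_cache(maxsize=None)
def paths(x, d):
    """Number of length-d move sequences (left/stay/right) from cell x
    that never leave the box."""
    if d == 0:
        return 1
    return sum(paths(nx, d - 1) for nx in (x - 1, x, x + 1) if 0 <= nx < N)

# A greedy "entropic" agent: always step to the neighbour with the most
# admissible futures, starting against the left wall.
x = 0
for _ in range(20):
    x = max((nx for nx in (x - 1, x, x + 1) if 0 <= nx < N),
            key=lambda nx: paths(nx, DEPTH))
print(x)  # ends at the centre cell (5), where the most futures remain open
```

Note that nothing here "knows" the goal of reaching the centre; the centripetal behaviour is an emergent consequence of counting futures, which is the point of their examples.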
Then they use this to support the hypothesis that intelligence essentially means maximising future possibilities. Another way of saying this, I think, is that an intelligent creature will seek to maximise the amount of power it has over its environment, and they've translated that concept into the language of physics. That's an intriguing concept, relating to the concept of liberty, power struggles between people at all scales, scientific and technological progress, and so on. I can't imagine this idea being new, though. So it all hinges on to what extent this simulation adds anything new to that discussion.
On the face of it, not much. You might as well say that they've found two tests for which the solution happens to coincide with the state that maximises the number of possible future histories. The only surprising thing then is that their stochastically greedy search algorithm (without having looked at the details, I wouldn't be surprised if it turned out to be yet another variation of Metropolis-Hastings with a particular objective function) finds the global solution without getting stuck in a local minimum, which could be entirely down to coincidence. It's easy to think of a problem that their algorithm won't solve: for example, putting the "food" into the box rather than taking it out. Their algorithm will never do that, because that would increase the future effort needed to do something with it. Of course, you might consider that pretty intelligent, and many young humans would certainly agree, although their parents might not. It would be interesting to see how many boxed objects you need before the algorithm considers it more efficient to leave them neatly packaged rather than randomly strewn about the floor, if that happens at all.
There's another issue in that the examples are laughably simple. While standing upright allows you to do more different things, no one spends their lives standing up, because it costs more energy to do that as a consequence of all sorts of random disturbances in the environment. The model ignores this completely. Similarly, you could argue that since in the simulation (unlike in the actual animal experiment) there is no reward for using the object, expending the energy to get it out of its box is not very intelligent at all.
Conclusion: interesting idea, but in its present state not much more than that.
The usual answer to questions like this is:
(1) Decide what you want the computer to do
(2) Acquire the right platform.
Saying "I've already got [whatever platform], how do I make it do what I want?" is often not a helpful approach.
If RMS and Linus had followed that advice, GNU, Linux, and probably Slashdot would never have existed. Why should one have to buy Windows and allow customer-hostile DRM software on one's computer to be able to watch a movie easily and legally? It's your computer, and the whole point of owning it is that you can make it do what you want. Trying to do just that seems perfectly reasonable to me, and I can't see how any system that doesn't allow it could be the "right platform" for anything.
I can somewhat relate to the documentation issue although I believe that it is more a question of organizing the documentation.
One of the things that bothers me about the documentation is that there's often no distinction between interface and implementation. Instead of a description of what a function does, you get implementation details mixed up with what it approximately hopes to achieve, leaving you unable to see the forest for the trees.
When you mention "a fundamental problem", you point at function implementations, i.e. library rather than language issues. R itself is an extremely expressive, functional (or rather multi-paradigm) language that can be implemented to run code efficiently. Yet it is syntactically minimalistic, without the unneeded syntax of scripting languages like perl/python/ruby. This makes it a truly postmodern language, IMO.
Well, there's only one implementation, so the fact that the language could in principle be implemented efficiently is rather moot. The language specification isn't good enough to create a competing, compatible implementation either. I agree that the syntax is minimalistic and that there's extremely little boilerplate, but I could really do with some way of defining data types (Python 2 is lacking there as well, IMO), and namespaces...
Efficiency can sometimes be a problem, but the break-even point for implementing parts in, say, C/C++ is only slightly different from that for other languages (say perl/python), and it's enabled by an excellent interface (the Rcpp package).
Ah, the universal solution to problems with R: here's how to do it in some other language or software instead. Sorry for being sarcastic, but it's amazing how often that advice effectively showed up whenever I searched the web for a solution to some problem I encountered with R.
As an example of my experience: I use JAGS to fit models to data, and JAGS wants the model as a text-file description. My model has a node for every combination of some 13000 sites and 11 years, and the text file gets to several tens of megabytes depending on model options. Creating it is basically a matter of running through all the combinations of sites and years, looking up some additional data, and spitting out a line of text describing them. My first implementation was very naive: nested for loops that essentially did a nested loop join on the data. It generated output at several tens of kilobytes per second, getting slower and slower as it went on. I managed to speed it up by preallocating memory (when a vector runs out of capacity, R seems not to double it as the C++ STL does, but to add a constant extra amount, so growing a vector made the loop run in quadratic time; except that when measured it actually seemed to be exponential, for who knows what reason), pre-sorting the data and changing to a merge join, and vectorising as much as possible. It now does about a megabyte per second, which is fast enough for my purposes. However, the code is now completely unreadable, and it's still nowhere near what the hardware can do (PostgreSQL does the equivalent nested loop join in less than a second). R turned what should have been a trivial programming task into a frustrating adventure, and the result is still not very good.
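The quadratic-growth pitfall isn't unique to R, by the way. Here's a minimal Python/NumPy sketch (function names are mine) of the same thing, since np.append also copies the whole array on every call:

```python
import time
import numpy as np

def grow(n):
    # naive: np.append re-allocates and copies the whole array every
    # time, so this loop takes time quadratic in n
    out = np.empty(0)
    for i in range(n):
        out = np.append(out, i)
    return out

def prealloc(n):
    # allocate once, fill in place: linear in n
    out = np.empty(n)
    for i in range(n):
        out[i] = i
    return out

n = 20_000
t0 = time.perf_counter(); a = grow(n);     t_grow = time.perf_counter() - t0
t0 = time.perf_counter(); b = prealloc(n); t_pre  = time.perf_counter() - t0
assert np.array_equal(a, b)
print(f"grow: {t_grow:.3f}s  prealloc: {t_pre:.3f}s")
```

Both produce the same array, but the growing version is dramatically slower, and the gap widens with n, which matches the "slower and slower as it went on" behaviour above.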
For myself, the biggest change was to start thinking in functional concepts, coming from a procedural background. Much of the criticism of R IMO stems from a failure to appreciate the conceptual differences between functional and procedural programming. Another problem that can spoil the impression of R is the plethora of packages of highly varying quality.
True, but this is really another instance of the don't-do-it-in-R solution, because those functional programming functions effectively just run your loop in C, rather than in R (if they don't forward the whole operation to a C scientific maths library), which makes the performance bearable. If R were really a multi-paradigm language, then you would be able to solve a problem procedurally as well if it happened to be the best way to do it.
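The same effect is easy to demonstrate in Python with NumPy (a sketch of mine, not taken from either language's documentation): an element-by-element loop in the interpreted language versus the identical computation pushed down into compiled code.

```python
import math
import numpy as np

a = np.arange(100_000, dtype=np.float64)

def loop_sum_squares(xs):
    # interpreted loop: one interpreter round trip per element
    total = 0.0
    for v in xs:
        total += v * v
    return total

def vec_sum_squares(xs):
    # the same reduction, run entirely in compiled code inside NumPy
    return float(np.dot(xs, xs))

# Same answer either way; the vectorised form is orders of magnitude
# faster because the per-element work never touches the interpreter.
assert math.isclose(loop_sum_squares(a), vec_sum_squares(a), rel_tol=1e-9)
```

In both R and Python, "vectorise it" really means "move the loop out of the interpreter", which is exactly why it feels like an escape hatch rather than evidence that the interpreted loop is adequate.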
I recently switched my scientific programming from R to Python with NumPy and Matplotlib, as I couldn't bear programming in such a misdesigned and underdocumented language any more. R is fine as a statistical analysis system, i.e. as a command line interface to the many ready-made packages available in CRAN, but for programming it's a perfect example of how not to design and implement a programming language. It's also unusably slow unless you vectorise your code or have a tiny amount of data. Unfortunately, vectorisation is not always possible (i.e. the algorithm may be inherently serial), and even when it is, it tends to yield utterly unreadable code. Then there is the dysfunctional memory management system, which leads you to run out of memory long before you should, and documentation, even of the core library, that leaves you no choice but to program by coincidence.
As an example of a fundamental problem, here's an R add-on package that has as its goal to be "[..] a set of simple wrappers that make R's string functions more consistent, simpler and easier to use. It does this by ensuring that: function and argument names (and positions) are consistent, all functions deal with NA's and zero length character appropriately, and the output data structures from each function matches the input data structures of other functions.". Needless to say that there is absolutely no excuse for having such problems in the first place; if you can't write consistent interfaces, you have no business designing the core API of any programming language, period.
Python has its issues as well, but it's overall much nicer to work with. It has sane containers including dictionaries (R's lists are interface-wise equivalent to Python's dictionaries, but the complexity of the various operations is...mysterious.) and with NumPy all the array computation features I need. Furthermore it has at least a rudimentary OOP system (speaking of Python 2 here, I understand they've overhauled it in 3, but I haven't looked into that) and much better performance than R. On the other hand, for statistics you'd probably be much better off with R than with Python. I haven't looked at available libraries much, but I don't think the Python world is anywhere near R in that respect.
Anyway, for doing statistics I don't really think there's anything more extensive out there than R, proprietary or not, although some proprietary packages have easier-to-learn GUIs. In that field, R is not going to go anywhere in the foreseeable future. For programming, almost anything is better than R, and I agree that those improvements you mention are not doing much to improve R's competitiveness in that area.