I'm running GNOME desktop on Debian/Hurd. Works just fine.
I suspect that most casual users run Hurd in a VM. Really not much need for USB.
But now it has it. Rejoice!
They announced work on Hurd when I was still in university. I've worked a career, ended up disabled, retired, and spent years on a pet project since then, producing 13 point releases. Over 30 years have gone by.
Maybe, what with all the time you have on your hands, you could have taken the time to research your facts.
There are two major distributions (Arch and Debian) that use Hurd as a kernel. Right now, I am running a GNOME desktop on Hurd.
No, it ain't Linux/BSD, but hardly a Duke Nukem scenario.
I would dedicate my time to learning Emacs. In addition to being very useful for a wide variety of tasks, it has a very good offline documentation system. There is enough built-in documentation and things to learn so as to keep you busy for the year. It is largely textual, and if you are just reading documentation and playing around, I would imagine that it is rather power-saver friendly (especially when run in a console).
It would be beneficial to learn Emacs LISP (Elisp) in order to extend the editor to do the sorts of things you want. It's an old LISP, but very practical. It's not a bad start for learning about this type of programming (a book-sized Introduction is included in the offline documentation).
Another nice feature of Emacs is that it includes the Gnus newsreader/email program. It has a pretty sweet offline mode. In principle, you could load it up with news (gmane) and read it offline. You can compose emails and replies, and queue them up for the next time you get back into the city, when you switch to online mode, send your replies, and fetch new groups.
In my (rather Occidental) mind, I envision the Himalayas as a place of natural beauty and spiritual renewal. Perhaps you could find religion in Emacs -- realizing just how important the editor is to a computer user's experience -- and come back as a guru.
Everyone who takes out a loan for any purpose (home, car, student loan) runs the same risk.
Lenders usually do not make money off of repossession; it is a mitigation of loss. The more difficult it is to mitigate loss, the less likely a lender is to make the loan in the first place.
And there goes any chance whatsoever of anyone but the rich ever entering ownership unless they have all the cash upfront.
Why do you call it "predatory lending?" That implies that there is some sort of fraud or misrepresentation.
I'm pretty sure that the terms and conditions are stated in full: "if you (subprime borrower who wants a car) miss a payment, then your car will shut off. These are the conditions, take it or leave it."
I am opposed to usury, but what rates/terms constitute usury is determined by the government, which is elected by the people. And let's not forget that these banks are not owned solely by billionaires -- the largest stakeholders include depositors (like middle-class you and me). If the lender failed to be aggressive in enforcing debts, the system would collapse. If the lender failed to expand into new markets (including subprime lending), then we depositors would take our business elsewhere.
It is "cool" in some circles to promote Robin Hood thinking -- steal from the rich, to give to the poor. But jeez, take the bus. If this is so onerous as to be oppressive, there is always the option to do what my ancestors did -- move to lands of better opportunity.
Responsibility is a two-way street, and if we are going to scrutinize the lenders, we ought to also scrutinize the decisions being made by the consumer.
beat me to it. I love my Leica M3.
NB. I can't believe that I am responding to this submission, but here I go.
I can guarantee that whatever disease was launched would make its way back to the population that ISIS purports to represent. I predict that its consequences would eventually be more devastating in the generally-poorer populations.
It seems chemical/biological warfare didn't make it much farther than the First World War for a simple reason: the "wind" changes direction.
I think Shylock said it best: "cut me, and do I not bleed?" We're all humans, and we need to cut this crap out.
I am a researcher in medical informatics, and statistics is a huge part of my job, though I am not a classically-trained statistician.
First, I would like to offer a stark contrast between two types of statistician: 1) statisticians of the old mold, who are wedded to SAS and related tools, and 2) research statisticians, who employ modern methods such as Bayesian statistics and rather advanced calculus. The former tend to force all problems into what is available in the canon of SAS routines, while the latter are capable of creating custom models that suit the problem at hand.
Then, there is a new breed of scientist -- the data scientist -- who tends to use black-box machine learning methods alongside the classical techniques, as programs such as SAS and R have "democratized" the field. I agree with the common gripe of many traditionally-trained statisticians who object that these "data scientists" tend not to understand the statistical background of the computer codes they run. In fact, it is easy to download R onto one's computer and start firing data through it, with little regard for the merits of the model or its results. (Not all data scientists are like this; I'm simply stating a general observation.)
Another problem with statistics is that it can be very confusing to understand just what things like p-values actually mean. A first course in statistics leaves many with a bad taste -- they find it either terribly confusing or rather boring. In my opinion, this is because of traditional (frequentist) statistics, which has its origins in luminaries such as Fisher and Pearson.
The "action" today is in Bayesian statistics. This formulation allows statistical concepts to be expressed in ways that (I believe) most people can understand. But executing Bayesian statistics demands that one understand the underlying formulation of the models; in general, they are not black-box methods. Furthermore, they can be quite computationally expensive for large data sets.
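As a toy illustration (my own sketch, not taken from any particular text): Bayesian updating of a coin's bias with a conjugate Beta prior, where the posterior has a closed form and the concepts -- prior, data, posterior -- map directly onto a few lines of code:

```python
# Hypothetical toy example: Bayesian inference for a coin's bias p.
# Prior: Beta(alpha, beta). Likelihood: Binomial. Conjugacy makes the
# posterior another Beta, so the "update" is just addition of counts.

def beta_binomial_update(alpha, beta, heads, tails):
    """Return the posterior Beta(alpha', beta') after observing flips."""
    return alpha + heads, beta + tails

# Uniform prior Beta(1, 1); observe 7 heads and 3 tails.
a, b = beta_binomial_update(1, 1, heads=7, tails=3)
posterior_mean = a / (a + b)  # 8 / 12 = 0.666...
print(a, b, round(posterior_mean, 3))
```

The point is that nothing here is a black box; the model is fully explicit. Real problems without conjugate priors need MCMC or similar samplers, which is where the computational expense comes in.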
Statistics is suffering from perceptions of being a button-pushing, boring profession. As has happened in many other fields (e.g. computational chemistry and CFD), computer programs have democratized the field so that those who have not had years of dedicated study and training can execute statistical models. In my experience, this can be a good thing, or a very bad thing. Another issue is that there is a significant build-up of half a century of code and protocols in both industry (think big business analysis) and government agencies (think FDA).
But modern statistics is actually a hot field. Provided that one understands the background, and is willing to go the extra mile to write custom code, the rewards are endless.
Because *some **people ***are &sick and *(&tired) **of *all &of ***the ****bullshit &that **goes &with writing C and C++ in order to get an order of magnitude performance increase over those dynamic languages that you allude to.
I recently replaced my third generation Airport Extreme with a new Netgear R7000 "Nighthawk." I loaded Tomato "Shibby" branch, and was able to replace my firewall, webserver, openvpn, and a few other services with this bad-boy. Also, I get QoS.
Two weeks later, everything is fine. I am satisfied. It is interesting to me that the range of the Airport Extreme (despite being seven? years old) is comparable to this new wireless router. However, I am happy to invest in a repeater unit running this free software, rather than sinking more into the good -- but utterly proprietary and less featureful -- Apple hardware.
I'm afraid that this article touches on what I perceive as a growing problem: it's the notion that "Everyone's answers and opinions are right and have value."
This might be fine in some areas where many things are subjective, in which case the axiom "there's no disputing taste" is appropriate. In these cases, then I agree that one should probably hold one's criticism.
But especially in technical areas, such as computer programming and the physical sciences, the laws of physics and logic often point to a more correct answer. In my own work, I find that I am constantly wading through massive amounts of literature, wondering -- what the hell happened to the peer review that used to weed much of the crap out? Eventually, wrong answers and half-baked opinions stack up to warp reality, such that it becomes difficult to find or promote the few that are rigorous and correct.
I think it's a similar situation on peer-reviewed sites like Stack Exchange. Often, the opinions posted as solutions to a problem run the freaking gamut. I am glad that a lot of the good opinions (based on sound reasoning and experience) are boosted up, while the dreck (based on fuzzy thinking, old wives' tales, and "antipatterns") is ranked downward, thus giving some help to an interested third party (such as me) who really doesn't have time to be patient and P.C.
Disclaimer: the right answer can be the minority opinion -- which may have been knocked hard by other reviewers. Here I am speaking about the 99% of the time that the best answer is the most highly rated.
NB: I love PostgreSQL with all my heart. I always upgrade to the most recent version, because they implement features that I really need. Added to the existing features of Postgres, it's totally awesome.
But as I have moved toward "Big Data" and the market segment that these new-fangled (non-relational) databases target, I find myself wishing that Postgres would be able to run my vanilla query (*singular*) using all processors. As it is now, I have to either write some awful functions that query manually-partitioned subtables, or simply wait while it plods through all billion or so rows.
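A rough sketch of what that manual workaround amounts to -- in Python rather than SQL, with plain lists standing in for the partitioned subtables (all names here are my own invention): run the same aggregate against each partition on its own worker, then combine the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(partition):
    """Aggregate one partition. Against a real database, this worker
    would run the vanilla query on one subtable over its own connection."""
    return sum(partition)

def parallel_sum(partitions):
    """Fan the aggregate out over partitions, then combine the results."""
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        return sum(pool.map(partial_sum, partitions))

data = list(range(1000))
partitions = [data[i::4] for i in range(4)]  # four manual "subtables"
assert parallel_sum(partitions) == sum(data)
```

Note that threads only pay off when the workers block on I/O (as they would waiting on a database server), since CPython's GIL serializes pure-Python computation; the awkwardness of having to orchestrate this by hand is exactly the complaint above.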
This is a very interesting example. Would you believe that gfortran 4.2.1 gives z=2 (two!) and gfortran 4.8 won't even compile, due to a bizarre REAL(4) vs REAL(8) error? There's something very wrong with this, and your point is taken. (The intent(inout) attribute of f90 would not have helped here, either.)
I would point out that, since this has side effects, I probably would have written it as a SUBROUTINE instead of a FUNCTION. Then things would have materialized in a predictable way. I try to write all of my functions as "pure" functions that have no side effects.
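The same idea can be sketched in a language-neutral way (Python here, with made-up names): a function with a hidden side effect gives answers that depend on call history and evaluation order, while a pure function does not:

```python
counter = {"n": 0}

def impure_next():
    counter["n"] += 1          # hidden side effect on shared state
    return counter["n"]

def pure_next(n):
    return n + 1               # pure: output depends only on the input

# The impure version's result depends on how many calls came before,
# so repeating or reordering calls changes the answer:
first = impure_next() + impure_next()    # 1 + 2 == 3
second = impure_next() + impure_next()   # 3 + 4 == 7, same expression!

# The pure version is referentially transparent: order never matters.
assert pure_next(0) + pure_next(1) == pure_next(1) + pure_next(0)
```

Python happens to pin down left-to-right evaluation; Fortran does not, which is exactly why a side-effecting FUNCTION can behave differently across compilers.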
You keep trying to convince me that Fortran is not a panacea, as if I might actually believe that it is. I have confirmed that you are trolling.
If one sets a random seed from a reproducible generator, and then starts a swarm of trajectories sampled from a Maxwell distribution of velocities, one should be able to get the exact same computer renditions of those trajectories on any computer that implements IEEE arithmetic. These are computationally deterministic simulations. This is reproducible research.
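That determinism claim is easy to demonstrate in miniature (a toy random walk standing in for a real trajectory integrator): fix the seed and the order of operations, and the output is identical run after run:

```python
import random

def trajectory(seed, steps=5):
    """Toy 'simulation': a seeded Gaussian random walk. With a fixed
    seed and a fixed order of floating-point operations, any conforming
    implementation reproduces the path exactly."""
    rng = random.Random(seed)
    x = 0.0
    path = []
    for _ in range(steps):
        x += rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Same seed, same arithmetic order -> identical results, every run.
assert trajectory(42) == trajectory(42)
```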
Or maybe you are invoking quantum mechanical uncertainty for a classical mechanics simulation. Even quantum mechanical simulations that start with the same wave packet (x and p) can be reproduced faithfully if the initial conditions are known and the order of operations is the same. This discussion has nothing to do with ensemble averages or quantum mechanical uncertainty.
Compilers most certainly can dictate the order of operations of a CPU -- that's the whole point. Whether this is most efficient for a given architecture is another matter, and one reason why performance can drop if the CPU is ordered to perform its instructions in a suboptimal manner.
Finally, I agree with you that MPI has nothing to do with IEEE arithmetic, and I really don't know why you have brought this up.
(I regret that I must refrain from further discussion on this topic. Ummm... you win.)