Comment: Single case anecdote. (Score 4, Interesting) 196

by aussersterne (#49632059) Attached to: Why Was Linux the Kernel That Succeeded?

I had been trying to afford a Unix installation at home as a CS student. All I knew was the Unix vendors. I was not aware of the social structure of the Unix world, various distributions, etc. I was crawling university surplus lots and calling Sun and DEC on the phone to try to find a complete package that I could afford (hardware + license and media). Nothing was affordable.

I was also a heavy BBS and UUCP user at the time over a dial-up line. One day, I found an upload that someone had described as "free Unix." It was Linux.

I downloaded it, installed it on the 80386 hardware I was already using, and the rest is history. This was 1993.

So in my case at least, Linux became the OS of choice because it had traveled in ways that the other free Unices didn't. It was simply available somewhere where I was.

This isn't an explanation for why Linux ended up there instead of some other free *nix, of course. But as far as the social diffusion of the actual files goes, I saw Linux distros passed around as floppy images on BBSs and newsgroups for several years, with no hint of the others.

For someone with limited network access (by today's standards), this meant that Linux was the obvious choice.

As to why Linux was there and not the others—perhaps packaging and ease of installation had something to do with it? Without much effort, I recognized that the disks were floppy images and wrote out a floppy set. Booted from the first one, and followed my nose. There was no documentation required, and it Just Worked, at least as much as any bare-bones, home-grown CLI *nix clone could be said to Just Work.

I had supported hardware, as it turned out, but then Linux did tend to support the most common commodity hardware at the time.

My hunch is that Linux succeeded because it happened to have the right drivers (developed for what people had in their home PCs, rather than what a university lab might happen to have) and the right packaging (an end-user-oriented install that made it a simple, step-by-step, floppy-by-floppy process to get it up), while the other free *nix systems were less able to go from nothing to a working system without help, and without additional hardware, for most home and tiny-lab users.

For comparison, I tried Minix around the same time (I can't remember if it was before or after) and struggled mightily just to get it installed, before questions of its capabilities were even at issue. I remember my first Linux install having taken an hour or two, and I was able to get X up and running the same day. It took me much longer to get the disks downloaded and written. Minix, by comparison, took about a week of evenings, and at the end, I was disappointed with the result.

Comment: Re:Rely on the counterfactual. (Score 1) 210

by aussersterne (#49589491) Attached to: Yes, You Can Blame Your Pointy-Haired Boss On the Peter Principle

Yes, in practice it's usually a mix of the two, so the principle is more an abstract model than an argument about real, concrete thresholding.

But the general idea is this: by the time someone stops being promoted, and stays in the same job without promotion for an extended period, they are likely not amongst the highest-merit individuals available for that particular job and responsibility list. If they were, they'd have been promoted, or would have moved to another job elsewhere that offered the equivalent of a promotion.

Comment: Rely on the counterfactual. (Score 5, Informative) 210

by aussersterne (#49588929) Attached to: Yes, You Can Blame Your Pointy-Haired Boss On the Peter Principle

The best way to understand the principle is to imagine the counterfactual.

When does a person *not* get promoted any longer? When they are not actually that great at the position into which they have most recently been promoted. At that point, they do not demonstrate enough merit to earn the next obvious promotion.

So, the cadence goes:

Demonstrates mastery of title A, promoted to title B.
Demonstrates mastery of title B, promoted to title C.
Demonstrates mastery of title C, promoted to title D.

Does not manage to demonstrate mastery of D = is not promoted and stays at that level indefinitely as "merely adequate" or "maybe next year" or "still has a lot to learn."

That's the principle in a nutshell—when you're actually good at your job, you get promoted out of it. When you're average at your job, you stay there for a long time.
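The cadence above can be sketched as a toy simulation. This is illustrative only; the mastery threshold and the random competence model are made-up assumptions, not anything drawn from the actual research on the principle:

```python
import random

def simulate_career(levels=("A", "B", "C", "D", "E"), mastery_threshold=0.7, seed=0):
    """Promote a worker up a title ladder until they fail to demonstrate
    mastery of their current level; then they stay there indefinitely."""
    rng = random.Random(seed)
    level = 0
    while level < len(levels) - 1:
        # Competence is re-rolled at each new level: mastery of title A
        # says little about mastery of title B.
        competence = rng.random()
        if competence < mastery_threshold:
            break  # merely adequate: no further promotion
        level += 1
    return levels[level]

# Under these assumptions, most simulated workers finish their careers
# parked at a level they never mastered.
final_titles = [simulate_career(seed=s) for s in range(1000)]
```

The key design choice, matching the counterfactual argument: promotion is triggered by demonstrated mastery of the *current* title, so the only stable resting point is a title the worker has not mastered.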

Comment: You are describing engineering on public works (Score 1) 634

by aussersterne (#49569463) Attached to: How To Increase the Number of Female Engineers

projects and in academic research, which are an already tiny, extremely competitive, and ever-smaller part of the general pool of engineering labor.

Most of what engineers do is in the broader consumer economy: engineering objects, systems, etc. that people already amongst the world's wealthy (i.e. consumers in the largest national economies) don't really need, in order to enrich still wealthier people who don't need any more enrichment.

I may be a woman underneath it all, because despite being gainfully employed in a high-skill position that makes use of my Ph.D., I can't stand my job, which is all about making stuff with little direct bearing on daily life to help make rich people even richer—yet of course it is taken deadly seriously by everyone in the company, and there is a general disdain for and scoffing at "causes," like say, preventing climate change, expanding human knowledge and capability, or helping to address the massive wealth inequality on the planet.

Comment: He's right, but the conclusion may require nuance. (Score 2) 145

by aussersterne (#49384039) Attached to: The End of College? Not So Fast

Here's the thing—we may not actually want every otherwise unmotivated late teen to be sitting dubiously through college courses just because it's either that or go back to their dorm and twiddle their thumbs. Some things:

- There is an oversupply of graduates these days in most fields and at most levels
- A dawdle-dawdle unmotivated student is not doing their highest quality learning
- Even students that will eventually use what they learn may not do so for years
- In the meantime, what they learned is getting very rusty between learning and use

So with these things said, *how about* a model in which:

- People are not motivated to learn something until they need to
- Once they need to, they are happy to blast through it intensely
- And they will put it to use right away
- And their motivation comes from needs (for a raise, to be competitive, etc.)

I would think this would help to mitigate some of the particular supply/demand problems on all sides (for an education/for students/for graduates as employees).

The one caveat, and it's an important one, is that we do of course want people to be generally mature, thoughtful, capable, and culturally literate if they are going to be participating in society, and right now high schools are failing utterly at even touching these points.

So to address that need, let's just require a minimal level of "general" college-level education: say, a one-year or two-year degree that has no "major" or "minor" selections and issues no grades, but certifies literacy in politics/citizenship, social science (particularly social problems), national culture, basic quantitative reasoning, and so on—enough to become a careful thinker and to better understand "how to learn stuff."

This general education certification would be required in order to:

- Vote
- Get a business license
- Sit on a corporate board

But it would be disconnected from particular vocational or other subject-oriented learning, which would be issued via, say, MOOCs as well as face-to-face alternatives. And instead of a major in a single discipline, outcomes from MOOC courses could be used to build a nationally databased, relatively involved (many measures) "bar chart" for each student: a tally of their experience and competence in particular subject areas, expressed quantitatively as figures without an upper bound, added to with each additional course, and perhaps incorporating quantitative feedback from employers about their performance as well:

So instead of wanting someone with 4-year degree and a "major" in computer science, employers could seek someone with their general education certification along with "at least a 1400 in OS design, a 650 in Java, and a 950 in medical organizations and systems" and so on.

Over the course of a lifetime, scores in any particular area could continue to increase, either by taking additional MOOCs to get more exposure, or by having employers report on accumulated skills and experience to the system.

So that someone that took only a few courses in X in school, but in the real world and on the job, became—over 20 years—the best X in the country, would have this gradually reflected in their national education/experience scores as the years of experience and successes mounted.
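A minimal sketch of the score ledger described above. All subject names and point values here are hypothetical, and this is just one of many ways such a per-subject, unbounded, accumulate-only tally could be structured:

```python
from collections import defaultdict

class SkillLedger:
    """Unbounded per-subject scores that only accumulate, fed by both
    coursework (e.g. MOOCs) and employer-reported experience."""

    def __init__(self):
        self.scores = defaultdict(int)

    def add_course(self, subject, points):
        self.scores[subject] += points

    def add_employer_report(self, subject, points):
        self.scores[subject] += points

    def meets(self, requirements):
        """Check an employer's minimums, e.g. {'OS design': 1400}."""
        return all(self.scores[s] >= p for s, p in requirements.items())

ledger = SkillLedger()
ledger.add_course("OS design", 900)
ledger.add_employer_report("OS design", 600)   # on-the-job experience counts too
ledger.add_course("Java", 650)

ledger.meets({"OS design": 1400, "Java": 650})  # True: 1500 and 650
```

Note there is no cap and no expiry: twenty years of employer reports in one area can eventually dwarf whatever was earned in school, which is exactly the "real-world best X in the country" effect described above.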

Meanwhile, we'd also no longer have the weird mismatches that come when an employee has a degree in Y but actually works in Z, and then has to explain this in various ways to various parties. First of all, at the level of the 1-or-2-year general education, they would no longer get a "degree in" Y. That would be handled by MOOCs and represented in various numbers that increased as the result of completing them.

But if someone did do an about-face and choose an entirely different subject or work area in life, this would also gradually be reflected in their education/experience scores. We'd know when someone who'd studied chemistry in their '20s finally became a "real biologist" because their scores in biological areas would begin to overtake their scores in chemical areas, and so on.

Of course none of this is plausible because social systems simply don't work this way. But (after a long digression) getting back to the point, it's not all that bad that a MOOC isn't the same as forcing someone in their late teens or early '20s to sit in a classroom and drag their feet through a BA/BS.

Comment: A social scientist translating for them (Score 2, Informative) 442

by aussersterne (#49366895) Attached to: Experts: Aim of 2 Degrees Climate Goal Insufficient

What they're trying to say, using the usual feminist-sociology over-loquaciousness, is:

For some on the planet, keeping it under 2 degrees will preserve a relatively familiar or at least acceptable quality of life.

For others on the planet, quality of life can only be preserved by keeping it under, say, 1.5 degrees, or even 1 degree.

The first group (those who can live with the higher threshold) are in the upper portions of the global economic scale, and a 2 degree rise is acceptable for them because they can afford the technologies and tools (to put it crudely: air conditioners, new home materials, new kinds of agricultural output, etc.) that make it tolerable.

The second group (that can't live at the 2 degree threshold, and really need a lower one) are going to tend to be in the lower portions of the global economic scale, who won't have access to the technologies and tools that make a 2 degree rise livable for those at the top of the scale.

Policymakers and scientists tend, by virtue of their privileged position, to be in the first group, and have thus set the 2 degree rise in connection with thinking of their own, best-case lifestyles, rather than—say—a member of one of the globe's largely impoverished equatorial populations without access to much in the way of resources, tools, or technologies already.

It's a good point: the effects are not uniform, and if 2 degrees is the upper bound for the people who are the globe's *most* comfortable, then it's probably a bad upper bound in general, because it will "cook" (even more than already occurs) those that are the *least* comfortable.

It was, however, expressed with poor language and clarity, which is a sin that social science commits far too often.

Their point is nonetheless well taken.

Comment: I'll join the chorus: Mac. (Score 1) 385

by aussersterne (#49289585) Attached to: Ask Slashdot: Choosing a Laptop To Support Physics Research?

Get a nicely configured MBP and be done with it.

It's the most common platform in research and academic settings for individual use these days, which means that there is a social dimension to the available support (i.e. people around you can help with problems). Meanwhile, the platform is narrow enough and the OS and hardware tightly bound together enough that one-off bugs and edge cases are exceedingly rare (which is not the case for Linux).

And Apple has very reasonable quality control in both hardware and software.

Having done a Ph.D. and dealt with the pressures and complexities that come therewith, I'd say that the overriding concerns should be reducing the PITA factor, keeping downtimes short, eliminating unexpected behavior and gotchas to whatever extent possible, and buying in to the largest on-the-ground support network (i.e. installed customer base) that you can find with identical hardware/software.

All of these things point to Mac for academic research settings.

Comment: You're right and wrong. (Score 2) 320

You're absolutely right about incentives and grant money.

How you tied this to the Nobel Prize is beyond me, so let's drop that.

The incentives are all about grant money and outside (the campus) capital. As a result, the science takes a back seat to market economics, market-ing (both of corporate partners and of academic institutions themselves, which increasingly operate in a competitive marketplace for enrollments), management concerns, investors, etc.

This incentive structure is increasingly becoming the norm well beyond U.S. shores.

So the problem isn't that science is increasingly wrong, it's that scientists are increasingly doing labor that may *involve* science, but that is in fact product-oriented R&D driven by short-term investment timelines and economic and investor-friendly optics, and whether any of it is good *science* is secondary or tertiary to whether it's profitable, whether directly or indirectly.

Let the scientists go back to doing science first and money-making (whether to support their own tenure lines or to support corporate profits) second or even better, third, fourth, or fifth, and you'll find that the ship rights itself.

Comment: Mr. Moynihan should have read on the (Score 1) 375

by aussersterne (#49162227) Attached to: Google Wants To Rank Websites Based On Facts Not Links

problems of epistemology, including in science.

Note that there are no shortage of facts whose veracity depends on nuanced facets of context and condition, some of which are disputed.

For example, fact or not: "Linux is a difficult operating system to use, and is a better choice for geeks and hackers than for regular users."

Or how about:

"Android is an operating system written by Google."

Or how about:

"The Bermuda Triangle region has seen an unusually high number of ship and plane disappearances over the years, and may be a particularly dangerous place to travel."

Because unless Google's algorithms are very, very nuanced in their approach, each of these is going to be seen as carrying high levels of factuality based on the preponderance of content out there, particularly in high-authority sources.

Of course, statements like the first and third are too complex for Google's rankings to evaluate and rank. If it can only work with very simple assertions on the order of "Milk is white" or "Obama is a Democrat," then it's going to do practically nothing (good or bad) for the rankings, since facts with this level of consensus are generally undisputed, even by those that promote falsehoods.

Comment: This shifts the weakness in Google's rankings (Score 3, Interesting) 375

by aussersterne (#49160739) Attached to: Google Wants To Rank Websites Based On Facts Not Links

from gameability (in short, SPAM) to politics. Rather than punish above-board or non-predatory websites, it will punish both subversive and innovative thought that runs well ahead of social consensus. Sure, it will also eliminate willful misinformation, but it turns Google into an inherently conservative, rather than socially innovative, force.

Can't say I think it's better. Probably not any worse, but certainly not a panacea.

Comment: Sociological problem: CYA (Score 5, Insightful) 158

by aussersterne (#49148335) Attached to: Invented-Here Syndrome

Part of the problem is the CYA issue.

If you're writing the code, you sound like a laborer ("I have to..."). If it breaks, it's your fault and you're on the hook publicly.

If you present a third-party component in a meeting, you sound like a manager ("I propose that we..."). Once three or four other people in the meeting have concurred, if something breaks it's the third party's fault. A ticket or two are initiated, it's someone else's problem and everybody gets to cast blame somewhere beyond the walls of the company.

Rational behavior, regrettably.

Comment: You're absolutely right. The desktop is over. (Score 1) 393

by aussersterne (#49071329) Attached to: PC-BSD: Set For Serious Growth?

I have no idea why people are arguing with you about this. The evidence (not least from the desktop computing industry) is everywhere, with catastrophically declining sales over the long term, offset by increases in mobiles and tablets—which, incidentally, Linux has already won, though in large part by leaving the distro community behind.

Linux could actually conquer the desktop in the end—a few years down the road when desktop computing is a specialized, professionals-only computing space. The users of other desktop operating systems are slowly bleeding off to mobile and tablet.

But this can only happen, ironically, if distros and devs stop trying to conquer the desktop in the present. If they continue down the path they're on, the long-term desktop community, which would be a natural fit for the Linux of yore, will probably be on some other OS. (MacOS? Surely not Windows at this point.)

Comment: Are the job description and your actual requirements (Score 1) 809

by aussersterne (#49048685) Attached to: Ask Slashdot: What Portion of Developers Are Bad At What They Do?

That is to say, did you call for applications from *deeply* experienced people that know esoteric systems X, Y, and Z and that have previously worked with New Hot Language Q and Languages Of The Week I and J?

Or did you ask for unusual gurus that understand and have a *broad* range of experience with a wide variety of fundamental computing concepts and theory and can apply them correctly while rapidly getting up to speed on new environments and/or languages?

Because you're complaining about not getting the second group, while most of the job listings posted in industry are the first group.

There is often little overlap between the two, and HR departments and managers seem to default to looking for the first even when they actually need the second.

At a more prosaic level, if you specifically need someone that is going to understand general purpose encryption tools, you can also put that in the description.

A lot of the frustration with "not being able to get talent" in tech comes down to not asking for (or being willing to hire based on) what is actually needed. Instead, everyone is in CYA mode and making job listings and hires that are buzzword-rich and, thus, easily quantifiable ("he hit the right series of checkboxes, it's not my fault that he sucks, I did my part...")
