Comment Unhealthy society. Not just in business or tech. (Score 5, Insightful) 184

This isn't just about startups, this is across U.S. society—there is zero work-life balance.

Sure, every other company proclaims how great they are WRT work-life balance, but it's pure bullshit.

During hiring (for employees) and/or funding (for startups), if you give any evidence that you will ever put anything before the company (family, health, whatever, it doesn't matter) in ANY way, or ever draw a line in the sand about hours/commitment at ANY number, you are totally noncompetitive/nonfundable (they won't use these words) and won't be hired/be funded. If there is any evidence in your CV, online persona, or history that you have ever done any of these things, you won't be hired/funded.

Even after employment/funding, you have to keep this up. Sure, you may be asked (or even pressed) to "slow down," but it's superficial. The moment you do, positive evaluations/promotions/funding dries up; there is a perception that you're "not serious," "not committed," "not a good risk," or simply "not as capable/investment-worthy" as those *other* supermen/women that work 100+ hours a week (at least) and always put work first.

Yes, they want you to take a break, take care of yourself, and balance your life. But hey, if someone else delivers more value or growth more quickly... Well, they'd be nuts not to go with them instead, and hope you stay healthy in the meantime, all the best.

So, in the interest of your self/family/relationships, you try to build a career, and keeping that career demands precisely that you destroy your self/family/relationships. Depression is easy to fall into when your life will fall apart no matter what you do.

Comment MORE importantly, in this case the modeling was (Score 4, Insightful) 193

done precisely in order to encourage behavior that would change the inputs to the model.

Nobody looks at cigarettes today and says, "Gosh, nobody smokes anyway and death rates are coming down, there was no need for all that worry, smoke away!"

The whole point of the data in that case (and in this one) was to encourage the world to change behavior (i.e. alter the inputs) to ensure that the modeled outcome didn't occur.

To peer at the originally modeled outcome after the fact and say that it was "wrong" makes no sense.

When we tell a kid, "finish high school or you're going to suffer!" and then they finish high school and don't suffer a decade down the road, we don't say "well, you didn't suffer after all, guess there was no point in you finishing high school!"

That would be silly. As is the idea that the modeling was wrong after the modeling itself led to a change in the behavior being modeled.
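For what it's worth, here's a minimal toy sketch of that reasoning in Python. The growth rates and numbers are entirely made up, not drawn from any real model; the point is only that the published projection is conditional on inputs staying as they were, so comparing it to an outcome produced by deliberately changed inputs is comparing two different scenarios.

```python
# Toy projection: a quantity grows by a constant factor per step unless
# behavior changes partway through. All numbers are made up for illustration.

def project(initial, growth_per_step, steps, change_at=None, reduced_growth=None):
    """Project a value forward, optionally switching to a lower growth rate
    (the 'behavior change') from step `change_at` onward."""
    value = initial
    for step in range(steps):
        rate = growth_per_step
        if change_at is not None and step >= change_at:
            rate = reduced_growth
        value *= rate
    return value

# The projection as published: assumes inputs (behavior) never change.
status_quo = project(initial=1_000, growth_per_step=1.3, steps=10)

# What actually happened: the warning itself prompted a change at step 3.
with_change = project(initial=1_000, growth_per_step=1.3, steps=10,
                      change_at=3, reduced_growth=1.05)

print(f"projected under unchanged inputs: {status_quo:,.0f}")
print(f"observed after inputs changed:    {with_change:,.0f}")
# The gap between the two is not evidence that the projection was "wrong";
# it is evidence that the conditional ("if nothing changes") no longer held.
```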

Comment To clarify, (Score 1) 364

what I mean is that more and more, there is a perception that for the *real* superstars in research, the job is coming up with wild ideas. The two criteria that a great idea must have are that it must be:

1) Radical, and
2) Internally logically and mathematically consistent

So long as these two criteria are met, the rewards seem to roll in. It's a version of postmodernist discourse/textual analysis; it's not about reference to anything outside the text, it's about the beautifully labyrinthine and provocative way in which the text (hypothesis, theory, etc.) is self-referential and self-consistent.

I'm not saying that PhDs and PhD students should be lab monkeys. I'm saying that more and more there's a perception that if you do have to touch a lab, ever, or if you do actually lower yourself to the point of carrying out "empirical research," it means that you're a couldn't-cut-it rather than a "superstar," i.e. a tier B or tier C scholar that has to operate in the realm of meat and matter, rather than in the realm of ideas.

Real "scholars" speculate beautifully about far out stuff in tremendously clever ways—or so goes the thinking, more and more. And I suspect the reason is because that is what is marketable from the attention and media standpoint. That's where the strong "branding" potential, for individual and for institution, lies. If you can get 50 people in a conerence session to yell at you for being an idiot misconstruing all of science and another 25 to call you the second coming of Albert Einstein (or some other prominent thinker), that's 75 more loud and attentive attendees than the guy down the hall with the methodological field innovation got for his session.

And if you can get covered in some national dead tree rag as a "controversial" scholar, your department tends to be thrilled and to put you front-and-center in their marketing materials.

Comment Not just physics. I see this in a number of fields (Score 4, Interesting) 364

in academics.

The problem is that there is a huge oversupply of Ph.D. and Ph.D.-candidate labor for the number of positions available in pure academic research and institutions.

People that offer incremental improvements or work—just lab stuff, just more data, just duplication research, or slight variations to tease out empirical nuances—are a dime a dozen and struggle to differentiate themselves. Real science is often workman-like and laborious.

On the other hand, if young Ph.D. candidates and people weaving their way through the identity-building process that is a Ph.D. focus on conceptual innovation and performances (ideas, narratives, radical departures), then they are seen as doing something "new" and "innovative," which has somehow become what science is about in popular discourse, and which creeps into academic discourse. That kind of work also sells better in the press and to the public when the monographs come out, enabling "crossover" works and coverage that are much more lucrative than straight empirical work that gets buried in the journals or small print runs. These people also draw more attendees at conferences, and by virtue of interviewing and appearing more often, get more coverage for the institutions that hire them, often doing more to drive prestige and enrollments.

I think market forces play into this in a significant way.

Comment Not entirely. That's what a market does sometimes. (Score 1) 216

But we also have voting, and the voting public chooses leadership, often in part on the basis of precisely the use of policy to incentivize behavior with government funds. Tax breaks for specific kinds of behavior being the most common of these.

This gives the public two ways to encourage people to do/build/invent/fix things: one for individual choices, and one for collective goods, presumably (though people don't often think in these terms) to avoid tragedy-of-the-commons situations.

You were never asked per se, but if you're complaining that Musk is being supported by your tax dollars, you are a member of said voting public. In such cases, you may or may not have a beef, but we're beyond complaining about Elon Musk there and into the basic tensions of democratic governance, which really isn't about whether Musk is a good guy or a bad guy, but about whether you like collective goods or not, and if so, what system you prefer for choosing them.

That's political science, political philosophy, and/or the fringe of sociology, but has little to do (once again) with this particular article.

Comment Seems a pointless point to me. What am I missing? (Score 5, Insightful) 216

The reason we use government funding to incentivize things is because we as a public want people to do/build/invent/fix those things and are willing to pay for that to happen.

So Elon Musk comes along and says he will and then he does. And then we pay him what, as a public, we planned to pay (via those incentives) to whoever did them.

Seems like everything is going according to plan, for all involved, and that we're lucky enough to have found something of a one-stop-shop for incentivized work that few others are willing to take on, but that seems to really move the needle on tech progress for something other than consumer electronics gadgets.

Win/win all around. Smells like right wing paranoia and demagoguery to me in here.

Comment High-priced is one key. (Score 1) 469

Sure, the hardware was pretty cool, but the prices were just really, really high.

I remember calling Sun to ask about a SunOS license for a used 3/80 I was thinking of buying from the department. It was going to run me something like $3,000 for the OS, or $5,000 with the development environment, and that was quoted to me as the *academic* price; on top of that, no matter which delivery media I chose, I would have had to buy additional hardware to read it, and so on. That's $5,000 to $8,000+ in today's cash for what was already an old, low-end Unix system at the time. I remember pleading with the person on the other end of the line to help me brainstorm and find other options, since I was a CS student and needed to be able to do my homework at home, but of course they just felt like I was tying up the line—I looked sort of ridiculous from their perspective.

And then they told me that it was really too old to be useful, that I wanted an IPC or an IPX, I forget which, and the prices were well into five digits, again *academic* price for a bare bones configuration. And here I am a broke CS undergrad already struggling to pay $4k/year in tuition to a state school. It was a total non-starter.

Meanwhile, the first purpose-specific Linux box that I assembled (as my Linux excitement grew and I knew I wanted to do a dedicated build) was a 386/40 with 8MB RAM and about 1GB of ESDI storage, along with a Tseng ET4000 VGA card. It was all used gear, again bought through surplus channels, but it got me beyond what the 3/80 would have provided in terms of performance, was perfectly serviceable at the time as an X+development box, and cost me a total of about $200. That was just plausible.

I built a sync converter circuit to connect an old Tektronix 19" fixed-sync color monitor to the VGA port and felt like I had a real, honest-to-god Unix workstation for $300, with a very competent development environment and Emacs, NCSA Mosaic, etc., rather than having had to spend 10x that much for less in the end.

People talk about *BSD, but the driver support on *BSD wasn't as good even a few years later when I looked at it. Linux supported crap hardware in addition to great hardware, which sounds bad until you realize it means that any broke student could scrounge around in the board bucket and put together a fairly decent Linux system for peanuts.

Comment To each his own (servers). (Score 1) 469

I didn't have access to that tier of internet service at home at the time—I had UUCP and a Fido tosser, both of which dialed out over a modem once a day for a multi-megabyte sync.

Sun 3/4, HP 9000, and some DEC boxes were all in use in the CS labs at my school in that period, though they were getting old. So I had ftp access there, but there was no removable storage on those machines, no direct way to reach them from my dial-up, and my account quota on the NFS server was pretty low, just enough to hold a bunch of C code for class and the binaries it produced, so space was pretty precious. For those reasons, it never occurred to me to ftp or gopher around at school for software I could take home. Plus lab access was limited, and you really wanted to spend your time there doing your homework problems and getting them to compile and run, not dinking around looking for freeware.

Minix I read about on Usenet and tracked down some binaries, IIRC. But it wasn't impressive. Every now and then I'd hear something about *BSD, but it really wasn't clear how to get my hands on it, and nobody could really tell me. The people "in the know" as far as I knew were in the CS department and they were using the commercial unices and knew those pretty well.

In the Fido and Usenet groups I was in, Linux was the thing that turned up often and easily. Maybe I was reading the wrong groups, who knows. There were a hell of a lot of them in those days, if you recall, and it was all pretty noisy.

But Linux seemed then and over the several years that followed to come up over and over again. Others—not so much. It is what it is.

Comment Single case anecdote. (Score 4, Interesting) 469

I had been trying to afford a Unix installation at home as a CS student. All I knew was the Unix vendors. I was not aware of the social structure of the Unix world, various distributions, etc. I was crawling university surplus lots and calling Sun and DEC on the phone to try to find a complete package that I could afford (hardware + license and media). Nothing was affordable.

I was also a heavy BBS and UUCP user at the time over a dial-up line. One day, I found an upload from someone described as "free Unix." It was Linux.

I downloaded it, installed it on the 80386 hardware I was already using, and the rest is history. This was 1993.

So in my case at least, Linux became the OS of choice because it had traveled in ways that the other free Unices didn't. It was simply available where I already was.

This isn't an explanation for why Linux ended up there instead of some other free *nix, of course, but by way of explaining the social diffusion of the actual files: I saw Linux distros circulating as floppy images on BBSs and in newsgroups for several years, with no hint of the others.

For someone with limited network access (by today's standards), this meant that Linux was the obvious choice.

As to why Linux was there and not the others—perhaps packaging and ease of installation had something to do with it? Without much effort, I recognized that the disks were floppy images and wrote out a floppy set. Booted from the first one, and followed my nose. There was no documentation required, and it Just Worked, at least as much as any bare-bones, home-grown CLI *nix clone could be said to Just Work.

I had supported hardware, as it turned out, but then Linux did tend to support the most common commodity hardware at the time.

My hunch is that Linux succeeded because it happened to have the right drivers (developed for what people had in their home PCs, rather than what a university lab might happen to have), and the right packaging (an end-user-oriented install that made it a simple, step-by-step, floppy-by-floppy process to get it up) while the other free *nix systems were less able to go from nothing to system without help and without additional hardware for most home and tiny lab users.

For comparison, I tried Minix around the same time (I can't remember if it was before or after) and struggled mightily just to get it installed, before questions of its capabilities were even at issue. I remember my first Linux install having taken an hour or two, and I was able to get X up and running the same day. It took me much longer to get the disks downloaded and written. Minix, by comparison, took about a week of evenings, and at the end, I was disappointed with the result.

Comment Re:Rely on the counterfactual. (Score 1) 211

Yes, in practice it's usually a mix of the two, so the principle is more an abstract model than an argument about real, concrete thresholding.

But the general idea is that by the time someone stops being promoted, if they continue in the job that they are in while not being promoted for an extended period of time, it means that they are likely not amongst the highest-merit individuals around for that particular job and responsibility list—because if they were, they'd have been promoted and/or would have moved to another job elsewhere that offered an equivalent to a promotion.

Comment Rely on the counterfactual. (Score 5, Informative) 211

The best way to understand the principle is to imagine the counterfactual.

When does a person *not* get promoted any longer? When they are not actually that great at the position into which they have most recently been promoted. At that point, they do not demonstrate enough merit to earn the next obvious promotion.

So, the cadence goes:

Demonstrates mastery of title A, promoted to title B.
Demonstrates mastery of title B, promoted to title C.
Demonstrates mastery of title C, promoted to title D.

Does not manage to demonstrate mastery of D = is not promoted and stays at that level indefinitely as "merely adequate" or "maybe next year" or "still has a lot to learn."

That's the principle in a nutshell—when you're actually good at your job, you get promoted out of it. When you're average at your job, you stay there for a long time.
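If it helps, here's a toy simulation of that cadence (the competence values and threshold are arbitrary, purely for illustration): people keep getting promoted while they master their current level, and park at the first level they don't.

```python
import random

# Toy model of the cadence above: a person is promoted as long as they
# demonstrate mastery of their current level, and parks at the first level
# they can't master. Competence values are random and purely illustrative.

LEVELS = ["A", "B", "C", "D", "E"]
MASTERY_THRESHOLD = 0.6  # arbitrary cutoff for "demonstrates mastery"

def final_level(rng):
    level = 0
    while level < len(LEVELS) - 1:
        competence_here = rng.random()   # how good they turn out to be at this level
        if competence_here < MASTERY_THRESHOLD:
            break                        # "merely adequate" -- no further promotion
        level += 1                       # mastery demonstrated -> promoted
    return LEVELS[level]

rng = random.Random(0)
population = [final_level(rng) for _ in range(10_000)]
for title in LEVELS:
    print(title, population.count(title))
# Most people end up parked at a level they did *not* master, which is the
# "good at your job -> promoted out of it; average -> you stay" effect.
```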

Comment You are describing engineering on public works (Score 1) 634

projects and in academic research, which are an already tiny, extremely competitive, and ever-smaller part of the general pool of engineering labor.

Most of what engineers do is in the broader consumer economy: engineering objects, systems, etc. that people who are already amongst the world's wealthy (i.e. consumers in the largest national economies) don't really need, in order to enrich still wealthier people who don't really need any more enrichment.

I may be a woman underneath it all, because despite being gainfully employed in a high-skill position that makes use of my Ph.D., I can't stand my job, which is all about making stuff with little direct bearing on daily life to help make rich people even richer—yet of course it is taken deadly seriously by everyone in the company, and there is a general disdain for and scoffing at "causes," like say, preventing climate change, expanding human knowledge and capability, or helping to address the massive wealth inequality on the planet.

Comment He's right, but the conclusion may require nuance. (Score 2) 145

Here's the thing—we may not actually want every otherwise unmotivated late teen to be sitting dubiously through college courses just because it's either that or go back to their dorm and twiddle their thumbs. A few things to consider:

- There is an oversupply of graduates these days in most fields and at most levels
- A dawdle-dawdle unmotivated student is not doing their highest quality learning
- Even students that will eventually use what they learn may not do so for years
- In the meantime, what they learned is getting very rusty between learning and use

So with these things said, *how about* a model in which:

- People are not motivated to learn something until they need to
- Once they need to, they are happy to blast through it intensely
- And they will put it to use right away
- And their motivation comes from needs (for a raise, to be competitive, etc.)

I would think this would help to mitigate some of the particular supply/demand problems on all sides (for an education/for students/for graduates as employees).

The one caveat, and it's an important one, is that we do of course want people to be generally mature, thoughtful, capable, and culturally literate if they are going to be participating in society, and right now high schools are failing utterly at even touching these points.

So to address that need, let's just require a minimal level of "general" college-level education, say a one-year or two-year degree that has no "major" or "minor" selections and issues no grades, but certifies literacy about politics/citizenship, social science (particularly social problems), national culture, basic quantitative reasoning, and so on—enough to become a careful thinker and to better understand "how to learn stuff."

This general education certification would be required in order to:

- Vote
- Get a business license
- Sit on a corporate board

But it would be disconnected from particular vocational or other subject-oriented learning, which would be delivered via, say, MOOCs as well as face-to-face alternatives. And instead of a major in a single discipline, outcomes from MOOC courses could be used to calculate a nationally databased and relatively involved (many measures) "bar chart" for each student: a tally of their experience and competence in particular subject areas, expressed quantitatively as a figure with no upper bound, added to with each additional course, and perhaps incorporating quantitative feedback about their performance from employers as well:

So instead of wanting someone with a 4-year degree and a "major" in computer science, employers could seek someone with their general education certification along with "at least a 1400 in OS design, a 650 in Java, and a 950 in medical organizations and systems," and so on.

Over the course of a lifetime, scores in any particular area could continue to increase, either by taking additional MOOCs to get more exposure, or by having employers report on accumulated skills and experience to the system.

So that someone that took only a few courses in X in school, but in the real world and on the job, became—over 20 years—the best X in the country, would have this gradually reflected in their national education/experience scores as the years of experience and successes mounted.
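As a rough sketch of what such a score ledger might look like in code (the class name, methods, and point values here are hypothetical; the subject names and thresholds are borrowed from the example above):

```python
from collections import defaultdict

class SkillLedger:
    """Toy version of the per-person, per-subject score described above:
    unbounded totals that only grow, fed by completed courses and by
    employer-reported experience. Names and point values are hypothetical."""

    def __init__(self):
        self.scores = defaultdict(float)

    def add_course(self, subject, points):
        # A completed MOOC (or classroom course) adds to the subject total.
        self.scores[subject] += points

    def add_employer_feedback(self, subject, points):
        # Reported on-the-job performance/experience also accumulates; no cap.
        self.scores[subject] += points

    def meets(self, requirements):
        # An employer's query, e.g. {"OS design": 1400, "Java": 650, ...}
        return all(self.scores[s] >= needed for s, needed in requirements.items())

ledger = SkillLedger()
ledger.add_course("OS design", 900)
ledger.add_course("Java", 700)
ledger.add_course("medical organizations and systems", 950)
ledger.add_employer_feedback("OS design", 600)   # years on the job keep adding up

print(ledger.meets({"OS design": 1400,
                    "Java": 650,
                    "medical organizations and systems": 950}))  # True
```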

Meanwhile, we'd also no longer have the weird mismatches that come when an employee has a degree in Y but actually works in Z, and then has to explain this in various ways to various parties. First of all, at the level of the 1-or-2-year general education, they would no longer get a "degree in" Y. That would be handled by MOOCs and represented in the various numbers that increased as a result of completing them.

But if someone did do an about-face and chose an entirely different subject or work area in life, this would also gradually be reflected in their education/experience scores. We'd know when someone who'd studied chemistry in their '20s finally became a "real biologist" because their scores in biological areas would begin to overtake their scores in chemical areas, and so on.

Of course none of this is plausible because social systems simply don't work this way. But (after a long digression) getting back to the point, it's not all that bad that a MOOC isn't the same as forcing someone in their late teens or early '20s to sit in a classroom and drag their feet through a BA/BS.
