
Comment Did not have this problem. (Score 1) 179

I switched from iPhone to Android after using iMessage extensively and did not have this problem. So clearly it depends on some particular status/configuration of all the involved parties.

Does this depend on:

1) Moving the SIM from your old phone to your new phone
2) Leaving your old phone on and connected to WiFi so that iMessage still sees you as being on the network

Or something like that?

I know that when I switched, it was a really quick thing—new Android phone arrived via USPS, pulled my old SIM, put it into the new phone, turned off the old phone, and away we went. I was in mid-conversation with several people and never experienced a hiccup over the course of the day. Even talked about it over SMS—complained about the default keyboard on the new phone and all kinds of stuff.

Wasn't aware of this issue and didn't experience it. What gives?

Comment The second company was in NY in my case— (Score 1) 263

that is, until the state got involved, thanks to me and one other person. Then, one day, the building was locked, the board had gone to Europe, and all dozen or so employees were standing outside the door, baffled and unpaid, from what I understand. I got out sooner, by about two pay periods, and got mine back before it *all* went down.

Comment Will they pay you? (Score 4, Interesting) 263

I've had it with small companies. During the '00s I twice started with small companies only to hear "pay will be late" at the end of an early pay period, then "pay is just around the corner" by the end of the next pay period. In one case, the CEO simply never paid; I left before the third no-pay period was over, demanding that I be paid for my hours, to which he basically replied "so sue us!" I did—but only managed to recoup some of what I was owed. In the other case, they eventually paid but then promptly fired me for the noises I'd made about leaving due to two periods with no pay; that CEO had the gall to act infuriated and hurt at my lack of loyalty to the company.

So make sure that, at a small company with a low capital/revenue stream, "employment" doesn't mean "you promise to do it for the love of the company if they can't afford to pay you."

Comment Um, this is why this is a bad thought experiment. (Score 1) 745

Because logical slippage due to the vagaries of language is a decided risk.

Here you're mistaking the location of the dream. Dreams in *our* world, as *we* understand them, have these properties. But again (and as I said in my other post), we're talking about another world that we have no reason to assume resembles this one. In fact, for reasons that don't need belaboring here, reasons bound up with the very logic of the proposition in relation to what we understand about our world, we have every reason to assume the opposite: that it *is* fundamentally different from this one.

How does a "dream" behave in another reality in which *this* entire reality can *be* such a "dream?" Who knows. Nothing of what we understand about "dreams" as we know them in practical conception is remotely similar to what we mean when we talk about *our entire reality.*

How does a "computer simulation" behave in another reality in which *this* entire reality can *be* such a "computer simulation?" Who knows. Nothing of what we understand about "computer simulations" as we know them in practical conception is remotely similar to what we mean when we talk about *our entire reality.*

All we have to do to call the universe either a dream or a computer simulation is completely throw out any particular characteristics that are unique and empirically attributable to what we mean when we say "dream" or "computer simulation," as we are actually able to use those terms.

In other words, sure, this universe is a computer simulation or it's a dream...for certain values of "computer simulation" or "dream" that, if we were to accept them as valid, make the terms able to encapsulate *just about any phenomenon*.

This universe could also just be another reality's version of a "jumbo citrus fruit" or of an "Oscar awards ceremony," for the same reasons, and with the same level of practical or logical utility obtaining for these statements. For Slashdot purposes, I propose that we collaboratively write a paper on how this universe is just another encapsulating universe's version of a "Netcraft confirms it, Linux is dying!" press release.

Comment Um, certainly it does, (Score 2) 745

if we're conflating matter with information or information-processing.

A blender perfectly simulates what happens in a blender, mapping matter to information. It is empirically perfect, in that every possible unit of information is represented by a dedicated unit of matter, without shortcuts; it is a perfect simulation of what happens in the theoretical case of "something being blended," which is a subset of the logically possible set of phenomena connected to the physical manifestations found in an appliance store as a "universe" of a particular kind.

"Ah," goes the response, "but in conventional simulations, the physical nature of the reality being simulated is different from the physical nature of the substance of the simulation, i.e. there is a logical congruence reliant upon some measure of generalization, but not a physical congruence, because the only reason to 'run a simulation' is for the case in which physical resources are inadequate to the computational task with complete fidelity, i.e. the case in which we can not 'simulate the concept' using a perfect and total material instance of it."

So be it. But that's my point. If all of this—you, me, the universe—is just a simulation in a "computer" of a physical order so radically different from it as to be analogous to the physical differences between—say—the simulation of a nuclear explosion and the explosion itself (the sorts of things we need to run simulations of), then we're talking about a "real" (i.e. non-computed, non-simulated) space so different from our own as to make the use of our terms ("computer," "simulation," and so on) in it—bound up as they are with our own ontological and epistemological limitations and assumptions—essentially meaningless. Or worse, ideological: suggestive of something (by virtue of the intuitive and connotative properties of 'computer' and 'simulation') that simply isn't the case, and practically speaking can't be, in any universe we're familiar with.

Comment Silly language games. (Score 5, Interesting) 745

For this to be true in even the most allegorical sense would require that we stretch the definitions of "computer" and "simulation" well beyond anything we currently understand and well beyond the bounds of our ability to be concise and specific about what the terms mean. Using these terms here is just mixing up apples and oranges.

We might as well, in other words, say that our universe is a blender inside a giant appliance store, a stageplay inside a giant theatre district, a mildewing blow tickler inside a giant hoarder's garage mess, or anything else bearing one of the rough relationships signal:carrier, content:form, fragment:whole, instance:structure, etc.

I mean, what sort of computer are we talking about here?
What is its nature, not just logically, but physically? Do we even know that we're speaking "physically"? Isn't this the scale at which such quantities break down?
And doesn't our idea of computation and simulation require precisely that mathematical rules apply for these to be carried out in the first place?

Comment Spoken like an arrogant developer. (Score 2) 503

Do they continue to be gainfully employed as a digger, yet still dig with their bare hands?

What do they and their boss know about their productivity and job requirements that you don't?

What are they digging for? Is it likely to be damaged by a spade? Are they relying on the tactile sensation in their hands as they dig to make critical digging decisions of some kind? What is the cost of spades? What is the urgency of this dig? Is the limited supply of spades reserved for cases in which rapid digs are needed, in order to avoid excessive spade wear? How long do they dig? Does the spade cause repetitive stress injuries or blisters that hamper their work later on, and for longer periods of time, despite the apparent productivity gains early on? Even if we go all the way to the silly end of the spectrum, are spades against their religion? Even if so, are they nonetheless the most productive member on their team even with bare hands, leading the boss to not give two damns whether they use a spade or a ball of cotton candy to do their work? If you mess with the magic sauce that makes them the most productive person on the team, are you going to be out of a job before they are, even if you believe that your orders for them to change are the "correct" ones?

It seems to me that the job of tech designers isn't to know about digging, but to listen to the diggers carefully as the experts on their kind of digging, digging needs, and the totality of their work life as diggers, and to thoughtfully provide the technical resources needed to enable diggers to do digging as they see fit. They are, after all, the diggers. We are the tech people. Our job is to make tech—which is merely a means to everyone else, not an end. Make the wrong means that doesn't help them to achieve their ends, and you will find that nobody values your tech, no matter how much you try to explain that a spade is a spade.

Comment Some of the GNOME problem is in evidence here. (Score 1) 503

We're conflating use cases and identities when we say "Newbie." As technology designers, we need to be concerned with use cases. There may be a statistical overlap between the two, but mistaking one for the other gets us into deep water for design purposes.

Rather than newbies, let's talk use cases.

Case #1: User is not "at desk, at work" but is rather "in flow, in everyday personal life." They need, for party-planning or kid-care purposes, for example, to "send an SMS," "send an email," or "buy more diapers on Amazon.com." These are use cases that are all much better handled by tablets or mobiles, particularly if the user does not spend most of their work or personal life sitting at a computing system. The larger computing system imposes an extraordinary amount of overhead for (say) the stay-at-home parent that just wants more diapers. Leave the playroom, go to the den, power up the desktop, sit down, confront a desktop full of resources, figure out which one is the right one, start the application, and so on. All of that is overhead when we have mobiles: pull iPhone from pocket, press button, tap Amazon, type "diapers", click Buy, put phone back in pocket.

As technology folks, we have a terrible habit of taking someone's bewilderment to mean that they need more training or they're a "newbie," rather than looking at it practically: they're being told that they have to do an *awful lot of work* (moving through the house, navigating a full suite of powerful computing resources, learning to manage them) just to get some more diapers in the midst of their *real life*, the one that they actually care about, which involves diapers, not computing.

Case #2: Person new to computing is also new to the job, but it is now their *full time job* to make charts and graphs with Excel. They will happily sit down with the 600 page book, online training tutorials, and get to work learning. Why? Because this is a set of resources that are not overhead to them—it is the productive work that they will be expected to do, so the investment in time and computing use makes perfect sense. It is work, not waste.

I'd argue that in most cases, trying to marry a full-on computing environment (hierarchical file system or DB storage in quantity, multiple applications, multiple peripherals and forms of connectivity) to a rapid, task-based interface is not going to work out because they're two different use cases. Rapid, task-based use demands lightness and speed. General-purpose "big computing" resources toward the achievement of office work demand feature-richness, open-endedness, and deliberateness (i.e. the opposite of lightness and speed). One is highly endpoint-oriented; the other has no endpoint and is highly process-oriented.

The right answer is not to redesign the desktop environment. The right answer is to get the stay-at-home parent an iPad, or a laptop with everything but the web browser uninstalled, one that preferably boots straight to the web browser—in which case, the UI doesn't matter at all, because the user has no intention to use it.

The "newbies" that we commonly reference are actually a use case—people that feel that their current goals are not well-served by a high-overhead investment in full-scale computing. To serve their needs with a full-on whitebox computer, we just have to strip out the general purpose computing entirely, or at the very least, hide it entirely—which makes the system all but useless for those embroiled in "general purpose computing" use cases, particularly in comparison to those that have a full desktop UI available to them.

Make a better desktop environment *and* make a better information appliance, and both sets of users will thank you.

Try to make a desktop environment *that is* an efficient information appliance, and the computing-for-work people will find it to be inefficient and unhelpful while the casual-net-users will find it to be slow and needlessly complex in comparison to their sister's iPad.

Comment You're quite wrong, and it's not "theorizing," (Score 3, Interesting) 503

there are 30 years of detailed field research on this. Again, see Suchman's "Plans and Situated Actions," Dourish's "Where the Action Is," etc., or visit the ACM digital library and look at usability research (i.e. involving observation of real people in real settings) in CSCW, HCI, etc.

You have one basic fact wrong: they *do* have to think about what it's "time" to do.

Users in computer-at-desk contexts do not have a detailed roadmap for what to do on a click-by-click basis, either from their boss or inside their heads. They have a general set of goals for, say, the quarter ("Get this project launched"), perhaps the week ("Make sure everyone is on-task and progress is being made; keep the CTO apprised of any roadblocks"), and the day ("Put together charts and graphs for Wednesday's meeting to detail progress").

But it is *these* tasks that are "theoretical" quantities. They translate into dozens and dozens of clicks, mouse movements, UI interactions, and so on, many of them interdependent (or, in Suchman/Dourish terms, indexical—that is to say, order-important and constitutive of an evolving informational and UI flow context).

The user may have "Tell Bob about tomorrow's meeting" already decided, but they are imagining Bob and imagining Bob *at* the meeting. From there, activity is practical and adaptive. They emphatically do *not* have this in their heads:

- Take mouse in right hand
- Flick mouse to lower-left to establish known position
- Move mouse 5 inches toward right, 0.5 inches toward top of desk to precise location of email icon
- Click email icon
- Wait 0.4 seconds for email window to appear
- Move mouse 7.2 inches toward top of desk, 2 inches toward left to precise location of To: field
- Click to focus on field
- Type "Bob"
- Wait 0.1 seconds for drop-down with completions to appear
- Hit down arrow three times to select correct Bob
- Press enter ...

You laugh, but in fact this is precisely what you're suggesting: that users have a roadmap already. They don't. That's why we invented the GUI—to provide a visual catalogue of available computing resources and an indication of how to access them on an as-needed basis. Then, the user has to decide, in the moment, what is needed. Every single attempt to make things more "simple" or more "efficient" by presenting *only* the one thing that designers imagined to be needed at a given time—the "obvious" next step—has led to users that either feel the system is useless, that fight it to get it to do what they want, or that simply go around the system ("I'll just do this task offline, on a pad of paper"). You can make very telling changes to users' workflows and productivity just by changing the ordering or location of icons. Marketers know this very well on the web, too (google "page hotspots" to see the research on ad positioning and how deeply it affects CPC and other factors in online marketing).

At a less granular level, something like "Get this project launched" is also not available in a detailed roadmap to a user. Go ahead, ask them to elaborate on the precise set of tasks involved in their big quarterly responsibility. They'll come up with 20, 30, maybe even 80 split into four or five sub-areas. But getting the project launched for an average middle manager over the course of a quarter involves tens of thousands or even hundreds of thousands of discrete actions, gestures, etc., some computing-based, some not, with the computing-based ones split across dozens of applications and contexts.

It cannot be mapped out because it is contingently assembled—it has to be done on an as-we-go basis. So the tasks in the to-do list (and, in fact, in cognitive behavior) are theorized ("Create a new instance of the platform on test VPN, set up credentials for team") rather than existing as a detailed, moment-by-moment list of actions. This is why user-docs people actually have to sit down and use the system, and interact with designers, to write good docs: when anybody tries to make a complete mental map of even a 10- or 15-step software task and write it out *without the software actually in front of them*, they leave stuff out. If it were a test, most people—even people that use a piece of software or a feature *every single day*—would not score a hundred percent.

The detailed, moment-by-moment behavior is assembled by observing context—what's going on in the office, what's going on on the screen (where the mouse pointer is, where the files are, etc.) Users don't place their files randomly. Even people here don't. Why do we put them in folder hierarchies, in some cases even tag them? Why do some people use Evernote and others Bento and so on? All of these things are about allowing us to recall where things are because we cannot track all of our files and filenames (or, in fact, all of our information) mentally. We just don't have that kind of cognitive architecture.

So people put things into folders: "Bryce Account" and so on. But ask them, even a week later, to list all of the files in "Bryce Account" and what they contain. They can't do it. That's why they created the folder—because they literally cannot keep track of the information—the mind won't do it. The "Bryce Account" folder is a practical tool: collect all resources related to the Bryce account in a folder called "Bryce Account" and continue to accumulate. Over time, whenever something needs to be done in relation to the Bryce account, open the "Bryce Account" folder—which is where all possible resources for the account live—and survey the resources to see what's needed from what's available (i.e. what's stored there).

The file icons on the desktop are essentially a folder/resource understood as "What I'm working on these days." The set of overlapping windows on the screen are essentially a folder/resource understood as "What I'm working on this minute." The dock/start menu/application menus are essentially a folder/resource understood as "What actions are available to me." Because *most users cannot keep track of all of this with any level of fidelity.* Humans are just bad at it. But what we're fabulous at is problem-solving: okay, given this list of "current tasks" and this list of "current enablements," what can I do next that will move me closer to "big goal?"

But for this to work, they have to have some other system (i.e. the desktop) that shows them these things, that collects their past actions and current options for them, so that rather than cognitive overhead, it's a ready resource for "deciding on and taking the next step."

This really isn't novel stuff. This has been around for a long, long time. Once again, it's why we invented:

- GUIs
- The desktop metaphor in particular
- Hierarchical file systems
- and so on

It's because we are practical, adaptive actors as a species. Our minds are oriented toward immediate problem-solving. We do not have deep, accurate recall or chess-computer-style planning capabilities. What we have is cybernetic in nature—conceive of general goal, make plausible initial action, observe state and relative progress toward goal following action, make plausible next action, observe state and relative progress toward goal following action, make plausible next action, and so on.

I believe the colloquial truism is "One step at a time."
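
If you want that loop in programmer's terms, here's a rough sketch in Python. To be clear, every name in it (situated_action, candidate_actions, progress, the toy counter example) is invented for illustration; it's just the shape of the loop, not anything taken from Suchman or the CSCW literature.

# Purely illustrative: the "one step at a time" loop described above.
# All parameters are hypothetical stand-ins supplied by the caller.
def situated_action(state, goal, candidate_actions, progress, max_steps=1000):
    """Pick whichever currently visible action seems to move the state
    closest to the goal, apply it, look at the world again, repeat."""
    for _ in range(max_steps):
        if progress(state, goal) >= 1.0:      # "good enough" counts as done
            break
        options = candidate_actions(state)    # what the environment affords right now
        if not options:
            break                             # nothing visible to do; stop and rethink
        # No click-by-click roadmap: just the most plausible next step,
        # judged from where we happen to be standing at this moment.
        state = max((act(state) for act in options),
                    key=lambda new_state: progress(new_state, goal))
    return state

# Toy example: the "goal" is a counter reaching 10; the only visible actions are +1 and +2.
print(situated_action(
    state=0,
    goal=10,
    candidate_actions=lambda s: [lambda x: x + 1, lambda x: x + 2],
    progress=lambda s, g: min(s / g, 1.0),
))  # prints 10

The point is only that the loop re-reads the world on every pass instead of executing a stored plan, which is exactly the role the desktop's visual field plays for a person.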

Removing useful metaphors and a UI paradigm in which what's on the screen represents a broad cross-section of the user's current work context (as opposed to the immediate computing task at hand) forces users to try to be what empirical research says they are not: detailed behavioral planners able to plot out click-by-click, motion-by-motion actions, acting on detailed maps of information, well ahead of time.

Again, check the research, academic and industry. This is not theory, this is well understood for decades now. It's why we bothered to invent most of what we bothered to invent in technology. It goes back to pre-PARC stuff. The unfounded theorizing, if any was done at all, was done by the GNOME people, IMHO.

Comment It did at first. That's when I went to GNOME (Score 1) 503

only to see GNOME do the same thing not so very long after that, in the grand scheme of things. That's when I went to Mac OS from Linux (having been a Linux user since 1993). And that was that.

I stuck with KDE4 for at least 2-3 months. But it was a lot of meta work (i.e. work on the environment itself, rather than work *in* the environment). By the time the desktop came back in some form, I was long gone.

At the time that I left KDE, I had been a KDE user since KDE Beta 3 (1.0 beta 3 that is), having switched from TWM and an old .twmrc file from my SunOS days. In fact, I wrote one of the earliest reviews on the web for KDE (during the early beta phase) to be published by (at that time) a top 10 internet property. I was one of those early "Will Linux someday overtake Windows on the desktop?" pundits on the strength of what I saw in the new KDE platform. I was a longtime, committed user. But the total break in the workflow from 3.0->4.0 was unforgivable. KDE 3.0 was chintzy and showed its Linux/X heritage far too much, sure, but 4.0 was flat-out unusable for the first months. It eliminated common workflow assumptions *at the same time* as being so buggy as to fail to do anything predictably. The result was that you never knew whether you were looking at a behavior (an unexplained one at that) or a bug. But it didn't matter, because something different would happen the next time anyway.

When I went to GNOME I found it to be usable in ways that hadn't been true in the GNOME 1.0 days (GNOME 1 was a disaster: it was tough enough to keep it running and see the "integration" at work, tougher still to actually use it). So I settled into GNOME and never logged into KDE again.

Then the "press" began to hit in Linux circles about what was coming for the GNOME 3 release, and about the adoption by major distros. I tried the stuff in the dev repository, decided to hackintosh my Thinkpad T60 on a spare partition to give Mac OS X a go, and three months later, after having been a Linux user for going on two decades, I bought a MacBook Pro and haven't had a Linux partition or installation around since (well, except in my phone and router).

Even veteran developers are relatively attached to their workflows. You have benchmarks to hit, as a general rule in modern life, and they do not involve configuring your desktop. Any time spent configuring, or learning to use, a GUI all over again is, quite simply, money lost.

In simpler terms than all of this discussion, that's where GNOME and KDE screwed up. Whatever you think of the theory behind the reimagination of the Linux desktop UX/UI, the fact is that there was no demand for it. Like all open source projects, it happened without any particular concern for whether there was demand or not, or for where demand was pointing.

If the Linux world had collectively been interested in driver support, OS X level polish, and interoperability with the most common/dominant commodity and infrastructure systems in use over the '00s, Linux might be *the* dominant operating system today, running the bulk of cloudspace/serverspace, the bulk of mobile computing/phone space, and the bulk of desktop space. Instead, the fatal flaw of open source software kicked in—nobody has to think about the market. The developers had their freedom, and they exercised it.

And the result is a bunch of Netcraft confirmations that Linux on the desktop is dying. (Has died?)

Comment Re:I think you're thinking too hard and the author (Score 5, Insightful) 503

Except that the desktop cannot work using the phone/tablet model because user expectations do not suggest that metaphor when they sit at a desktop.

Even if the desktop metaphor was too complex to master, users still sit down at a desktop and think, "now where are my files?" because they intend to "do work in general" (have an array of their current projects and workflows available to them) rather than "complete a single task."

As was the case with a desk, they expect to be able to construct a cognitive overview of their "current work" at a computer—an expectation that they don't have with a phone, which is precisely experienced as an *interruption to* their "current work." KDE, GNOME, and most recently Windows 8 made the mistake of trying to get users to adopt the "interruption of work" mental map *as* the flow of work. It's never going to happen; they need to be presented with a system that enables them to be "at work." In practice, being "at work" is not about a single task, but about having open access to a series of resources that the user can employ in order to *reason* about the relatedness of, and next steps across, a *variety* of ongoing tasks. That's the experience of work for most workers in the industrialized world today.

If you place them in a single-task flow for "regular work," they're going to be lost, because they don't know what task they ought to be working on without being able to survey the entirety of "what is going on" in their work life—say, by looking at what's collected on their desktop, what windows are currently open, how they're all positioned relative to one another, and what's visible in each window. À la Lucy Suchman (see her classic UX work "Plans and Situated Actions"), users do not have well-specified "plans" for use (i.e. step 1, step 2, step 3; task 1, task 2, task 3) but are constantly engaged in trying to "decide what to do next" in context, in relation to the totality of their projects, obligations, current situation, etc. Successful computing systems will provide resources to assist in deciding, on a moment-by-moment basis, "what to do next," and resources to assist in the construction of a decision-making strategy or set of habits surrounding this task.

The phone metaphor (or any single-task flow) works only once the user *has already decided* what to do next, and is useful only for carrying out *that task*. Once the task is complete, the user is back to having to decide "what to do next."

The KDE and GNOME experiments (at least early on) hid precisely the details necessary to make this decision easy, and to make the decision feel rational rather than arbitrary. An alternate metaphor was needed, one to tell users how to "see what is going on, overall" in their computing workday. The desktop did this and offered a metaphor for how to use it (survey the visual field, which is ordered conceptually by me as a series of objects). Not only did KDE and GNOME not offer a metaphor for how to use this "see what is going on" functionality, they didn't even offer the functionality—just a series of task flows.

This left users in the situation of having *lost* the primary mechanism by which they'd come to decide "what to do next" in work life for two decades. "Before, I looked at my desktop to figure out what to do next and what I'm working on. Now that functionality is gone—what should I do next?" It was the return of the post-it note and the Moleskine notebook sitting next to the computer, from the VisiCalc-on-green-screen days. It was a UX joke, frankly.

The problem is that human beings are culture and habit machines; making something possible in UX is not the same thing as making something usable, largely because users come with baggage of exactly this kind.

Comment I think you're thinking too hard and the author is (Score 5, Insightful) 503

using too many words. He means that users of personal computers (as opposed to mobile devices) simply want a "desktop."

As in, the metaphor—the one that has driven PC UI/UX for decades now.

The metaphor behind the desktop UI/UX was that a "real desktop" had:

- A single surface of limited space
- Onto which one could place or remove files
- And folders
- And rearrange them at will in ways that served as memory and reasoning aids
- With the option to discard them (throw them in the trash) once they were no longer needed on the single, bounded surface

Both of the "traditional breaking" releases from KDE and GNOME did violence to this metaphor; a screen no longer behaved—at least in symbolic ways—like the surface of a desk. The mental shortcuts that could draw conclusions about properties, affordances, and behavior based on a juxtaposition with real-world objects broke down.

Instead of "this is meant to be a desktop, so it's a limited, rectangular space on which I can put, stack, and arrange my stuff and where much of my workday will 'happen'" gave way to "this is obviously a work area of some kind, but it doesn't behave in ways that metaphorically echo a desk—but I don't have any basis on which to make suppositions about how it *does* behave, or what affordances/capabilities or constraints it offers, what sorts of 'objects' populate it, what their properties are,' and so on.

I think that's the biggest problem—the desktop metaphor was done away with, but no alternative metaphor took its place—no obvious mental shortcuts were on offer to imply how things worked enough to allow users to infer the rest. People have argued that the problem was that the new releases were too "phone like," but that's actually not true. The original iPhone, radical though it was, operated on a clear metaphor aided by its physical size and shape: that of a phone—buttons laid out in a grid, a single-task/single-thread use model, and very abbreviated, single-option tasks/threads (i.e. 'apps' that performed a single function, rather than 'software' with many menus and options for UX flow).

Though the iPhone on its surface was a radical anti-phone, in practice the use experience was very much like a phone: power on, address a grid of buttons, perform a single task with relatively little open-endedness of flow, power off and set down when complete. KDE4/GNOME3 did not behave this way. They retained the open-ended, large-screen, feature-heavy, "dwelling" properties of desktops (a desktop is a space where you spend time, not an object used to perform a single task and then 'end' that task), so the phone metaphor does not apply. But they also removed most of the considered representations, enablements, and constraints that could easily be metaphorically associated with a desktop.

The result was that you constantly had to look stuff up—even if you were an experienced computer user. They reintroduced *precisely* the problem that the desktop metaphor had solved decades earlier—the reason, in fact, that it was created in the first place. It was dumb.

That's what he means by "classic desktop." "Linux users want a desktop, not something else that remains largely unspecified or that must instead be enumerated for users on a feature-by-feature basis with no particular organizing cultural model."

Comment See Negri, affective labor, others. (Score 1) 533

There's already a decent tradition in the social sciences examining the role that emotion increasingly plays as a resource to be allocated in economic systems. Affect becomes labor, across industries (not just software programming).

And yes, there is more than an Orwellian whiff about this. But it is what it is—companies hire great attitude, drive, belief, positivity, team spirit, etc. In several contracts I've been involved with, companies actually had metrics for positivity vs. negativity in meetings, communication, and so on, and this was a part of weekly evaluations. They want to see your Facebook page. Everyone knows that it matters whether you "present" well. People with a "great outlook" and "enthusiasm about the company" are routinely promoted over those that are more competent but perhaps dour. In fact, supremely-competent-but-dour is the butt of jokes (as Slashdot's conventional wisdom is already aware).

There is nothing more personal than one's emotion and affect; it is perhaps the most human thing about us in our day-to-day experiences, and the most individual. But more and more, it's a metric to be evaluated, a "property" of yourself as a system that must be managed to remain compatible with the company. To some extent this makes sense in an increasingly rationalized world—what's in your head is a black box. Your interactive style and self-presentation on a moment-by-moment basis are effectively your API. So efficiency dictates that a certain predictability, compatibility, and growth-oriented, team-oriented set of assumptions will be valued, and thus, ought to be "implemented" by you as the manager of your own API.

At the same time, what good is a world in which nobody can have a bad day or a personal opinion on anything? In which your bank balance is directly correlated to your ability to feel the emotions that your boss has outlined for employees in the company handbook?

Is it really so great to live in an efficient and productive world that is ultimately lacking in what Hannah Arendt called humans' intrinsic "natality," the ability to do and feel something new, something individual, something that is an emergent property of the extremely complex phenomenon that is the self?

It's a bummer. (And of course, this very post is precisely the kind of post that they warn you about in the popular press, in articles about how "what you say on the internet will always be there" and future HR managers might exclude you for a position because of it, i.e. because of your negativity and clear lack of cooperation with basic emotion-and-opinion suppression culture.)

Of course, one group is exempt from these restrictions: the wealthy. They can say what they want, since as a matter of power relations, they are central in the system. Others (with less money) must amend their emotional style to be compatible with the rich, the powerful, the CEO, the venture capitalist. These latter people get to say and feel whatever is on their mind or in their gut, unlike the rest of us. And, irony of ironies, they are broadly applauded for it, no matter how extreme, atypical, or mundane the positions. The rest of us would simply be fired.

Comment Bad marketing. (Score 4, Insightful) 559

Didn't even realize that Wii U was substantively different from Wii. In fact, based on this story and the context here, still can't tell.

What would have been wrong with "Wii 2," which offers a much clearer indication that it's a next-generation console? (If, in fact, it is a next-generation console.)

First thing that comes to my mind with "Wii U" is that it's the educational version of the Wii.
