Comment Re:I think you're thinking too hard and the author (Score 5, Insightful) 503

Except that the desktop cannot work using the phone/tablet model because user expectations do not suggest that metaphor when they sit at a desktop.

Even if the desktop metaphor was too complex to master, users still sit down at a desktop and think, "now where are my files?" because they intend to "do work in general" (have an array of their current projects and workflows available to them) rather than "complete a single task."

As was the case with a desk, they expect to be able to construct a cognitive overview of their "current work" at a computer—an expectation that they don't have with a phone, which is precisely experienced as an *interruption to* their "current work." KDE, GNOME, and most recently Windows 8 made the mistake of trying to get users to adopt the "interruption of work" mental map *as* the flow of work. It's never going to happen; they need to be presented with a system that enables them to be "at work." In practice, being "at work" is not about a single task, but about having open access to a series of resources that the user can employ in order to *reason* about the relatedness and next steps across a *variety* of ongoing tasks. That's the experience of work for most workers in the industrialized world today.

If you place them in a single-task flow for "regular work," they're going to be lost, because they don't know what the task is that they ought to be working on without being able to survey the entirety of "what is going on" in their work life—say, by looking at what's collected on their desktop, what windows are currently open, how they're all positioned relative to one another, and what's visible in each window. À la Lucy Suchman (see her classic UX work "Plans and Situated Actions"), users do not have well-specified "plans" for use (i.e. step 1, step 2, step 3, task 1, task 2, task 3) but are constantly engaged in trying to "decide what to do next" in-context, in relation to the totality of their projects, obligations, current situation, etc. Successful computing systems will provide resources to assist in deciding, on a moment-by-moment basis, "what to do next," and resources to assist in the construction of a decision-making strategy or set of habits surrounding this task.

The phone metaphor (or any single-task flow) works only once the user *has already decided* what to do next, and is useful only for carrying out *that task*. Once the task is complete, the user is back to having to decide "what to do next."

The KDE and GNOME experiments (at least early on) hid precisely the details necessary to make this decision easy, and to make the decision feel rational rather than arbitrary. An alternate metaphor was needed, one to tell users how to "see what is going on, overall" in their computing workday. The desktop did this and offered a metaphor for how to use it (survey the visual field, which is ordered conceptually by me as a series of objects). Not only did KDE and GNOME fail to offer a metaphor for how to use this "see what is going on" functionality, they didn't even offer the functionality—just a series of task flows.

This left users in the situation of having *lost* the primary mechanism by which they'd come to decide "what to do next" in work life for two decades. "Before, I looked at my desktop to figure out what to do next and what I'm working on. Now that functionality is gone—what should I do next?" It was the return of the post-it note and the Moleskine notebook sitting next to the computer, from the VisiCalc-on-green-screen days. It was a UX joke, frankly.

The problem is that human beings are culture and habit machines; making something possible in UX is not the same thing as making something usable, largely because users come with baggage of exactly this kind.

Comment I think you're thinking too hard and the author is (Score 5, Insightful) 503

using too many words. He means that users of personal computers (as opposed to mobile devices) want simply a "desktop."

As in, the metaphor—the one that has driven PC UI/UX for decades now.

The metaphor behind the desktop UI/UX was that a "real desktop" had:

- A single surface of limited space
- Onto which one could place or remove files
- And folders
- And rearrange them at will in ways that served as memory and reasoning aides
- With the option to discard them (throw them in the trash) once they were no longer needed on the single, bounded surface

Both of the "traditional breaking" releases from KDE and GNOME did violence to this metaphor; a screen no longer behaved—at least in symbolic ways—like the surface of a desk. The mental shortcuts that could draw conclusions about properties, affordances, and behavior based on a juxtaposition with real-world objects broke down.

Instead of "this is meant to be a desktop, so it's a limited, rectangular space on which I can put, stack, and arrange my stuff and where much of my workday will 'happen'" gave way to "this is obviously a work area of some kind, but it doesn't behave in ways that metaphorically echo a desk—but I don't have any basis on which to make suppositions about how it *does* behave, or what affordances/capabilities or constraints it offers, what sorts of 'objects' populate it, what their properties are,' and so on.

I think that's the biggest problem—the desktop metaphor was done away with, but no alternative metaphor took its place; no obvious mental shortcuts were on offer to imply enough about how things worked for users to infer the rest. People have argued that the problem was that the new releases were too "phone-like," but that's actually not true. The original iPhone, radical though it was, operated on a clear metaphor aided by its physical size and shape: that of a phone—buttons laid out in a grid, a single-task/single-thread use model, and very abbreviated, single-option tasks/threads (i.e. 'apps' that performed a single function, rather than 'software' with many menus and options for UX flow).

Though the iPhone on its surface was a radical anti-phone, in practice the use experience was very much like a phone: power on, address a grid of buttons, perform a single task with relatively low flow open-endedness, power off and set it down when complete. KDE4/GNOME3 did not behave this way. They retained the open-endedness, large screen area, feature-heaviness, and "dwelling" properties of desktops (a desktop is a space where you spend time, not an object used to perform a single task and then 'end' that task), so the phone metaphor does not apply. But they also removed most of the considered representations, enablements, and constraints that could easily be metaphorically associated with a desktop.

The result was that you constantly had to look stuff up—even if you were an experienced computer user. They reintroduced *precisely* the problem that the desktop metaphor had solved decades earlier—the reason, in fact, that it was created in the first place. It was dumb.

That's what he means by "classic desktop." "Linux users want a desktop, not something else that remains largely unspecified or that must instead be enumerated for users on a feature-by-feature basis with no particular organizing cultural model."

Comment See Negri, affective labor, others. (Score 1) 533

There's already a decent tradition in the social sciences examining the role that emotion increasingly plays as a resource to be allocated in economic systems. Affect becomes labor, across industries (not just software programming).

And yes, there is more than an Orwellian whiff about this. But it is what it is—companies hire great attitude, drive, belief, positivity, team spirit, etc. In several contracts I've been involved with, companies actually had metrics for positivity vs. negativity in meetings, communication, and so on, and this was a part of weekly evaluations. They want to see your Facebook page. Everyone knows that it matters whether you "present" well. People with a "great outlook" and "enthusiasm about the company" are routinely promoted over those who are more competent but perhaps dour. In fact, supremely-competent-but-dour is the butt of jokes (as Slashdot's conventional wisdom is already aware).

There is nothing more personal than one's emotion and affect; it is perhaps the most human thing about us in our day-to-day experiences, and the most individual. But more and more, it's a metric to be evaluated, a "property" of yourself as a system that must be managed to remain compatible with the company. To some extent this makes sense in an increasingly rationalized world—what's in your head is a black box. Your interactive style and self-presentation on a moment-by-moment basis are effectively your API. So efficiency dictates that a certain predictability, compatibility, and growth-oriented, team-oriented set of assumptions will be valued, and thus, ought to be "implemented" by you as the manager of your own API.

At the same time, what good is a world in which nobody can have a bad day or a personal opinion on anything? In which your bank balance is directly correlated to your ability to feel the emotions that your boss has outlined for employees in the company handbook?

Is it really so great to live in an efficient and productive world that is ultimately lacking in what Hannah Arendt called humans' intrinsic "natality," the ability to do and feel something new, something individual, something that is an emergent property of the extremely complex phenomenon that is the self?

It's a bummer. (And of course, this very post is precisely the kind of post that they warn you about in the popular press, in articles about how "what you say on the internet will always be there" and future HR managers might exclude you for a position because of it, i.e. because of your negativity and clear lack of cooperation with basic emotion-and-opinion suppression culture.)

Of course, one group is exempt from these restrictions: the wealthy. They can say what they want, since as a matter of power relations, they are central in the system. Others (with less money) must amend their emotional style to be compatible with the rich, the powerful, the CEO, the venture capitalist. These latter people get to say and feel whatever is on their mind or in their gut, unlike the rest of us. And, irony of ironies, they are broadly applauded for it, no matter how extreme, atypical, or mundane the positions. The rest of us would simply be fired.

Comment Bad marketing. (Score 4, Insightful) 559

Didn't even realize that Wii U was substantively different from Wii. In fact, based on this story and the context here, still can't tell.

What would have been wrong with "Wii 2" which offers a much clearer indication that it's a next generation console? (If, in fact, it is a next generation console.)

First thing that comes to my mind with "Wii U" is that it's the educational version of the Wii.

Comment You need a system. Look for classes in the (Score 4, Insightful) 384

Educational Psychology department at the local U about study strategies/study skills. Usually these are geared toward teachers (how to help their students to develop strategies) but sometimes they're even geared toward students at said U (how to study in college, and so on).

These aren't classes about how to improve your brain, or about theory. They're very meat-and-potatoes: ways to organize note-taking, ways to organize reading activity and coordinate it with note-taking, ways to prepare for exams systematically and so on. What seems a problem of recall may be a problem of cognitive data architecture—not "it's not in there" but rather "you're not putting it in there in a way that lends itself to retrieval later on."

I don't know your case or just how hard it is for you, but it's not uncommon for a broad cross-section of students to have many of the same complaints, and often the remedy is to learn differently (i.e. different, time-tested, sample-studied methods for effectively acquiring, organizing, and storing information) rather than to try to "do mental exercises" or improve some immanent property of themselves.

And it's not common sense—they get down into things like how to lay out a page of notes, in geographical regions of the page; how to key words to paragraphs; how to note pages and where, etc. Very mechanical, technique-style stuff. You may find it helpful.

Comment General confusion is #1. (Score 4, Insightful) 293

What is a Surface?
Is it a tablet?
A laptop?
Is it highly mobile? (Well, sort of, but not like an iPad.)
Really lightweight and fast? (Well, sort of, but not like an iPad.)
Powerful for stationary work? (Well, sort of, but not like a laptop.)
Easy to carry? (Well, sort of, but not like an iPad.)
Heavy, substantial, and durable? (Well, sort of, but not like a laptop.)

People do two things:

(1) Use technology for work or play at their desk
(2) Use technology for work or play not at their desk

Two basic use cases. Just two, at the very bottom of things. In case (1), you go all-out on hardware and power; don't make them sit longer than they have to, let them get their work DONE! (Power, power, power, some ease of use, no compromises.) In case (2), you go all-out on not making them feel like they need to return to their desk; give them what they need to do what they need to do without feeling tethered. (Mobility, mobility, mobility, touch-friendliness, battery, no compromises.)

Two basic use cases and Microsoft managed to not hit either one of them well.

Comment Branding matters, both for consumers and for (Score 5, Insightful) 293

project management.

The product is called "Windows." Windows are static things. They are embedded into walls. They provide an unmoving portal into another space.

A monitor on your desktop behaves like a window in some sense. It is always in the same place. You sit and you look at it.

Windows Phone and Windows RT just don't make sense for mobile devices; the name breeds a kind of complacency in the project vision and gives the wrong (unpalatable) idea to consumers looking for mobile devices.

MS should call the mobile product something mobile:

MS Pathways
MS Journeys
MS Passages
MS Ways
MS Compass
MS Latitude

Then they should focus relentlessly on small-screen/long-battery/mobile UX for the mobile system; design toward the lightweight, mobile ethos of the new name, and market it relentlessly not as "the same as windows" but in fact as exactly different from it.

MS Windows in your office
MS Compass for going places
"Because you're not always sitting still.
"Busy people do more than sit by Windows."

I'm not saying that the marketing is the product; we all know that's ridiculous and leads exactly to a product fail (mismatched expectations vs. reality). I'm saying that if MS was as marketing-led as they ought to have been, they'd do the field research to know what mobile users need (field research they clearly haven't done well) and target the product to those needs, as well as the marketing campaign.

Who needs Windows in their pocket on the street? Nobody. Windows belong inside walls.

Same thing goes for the hardware product. "Surface?" Sounds static and architectural. The opposite of mobility. You can see that they themselves imagined the product this way based on what was shipped out the door. Come up with something lightweight and mobile.

The Microsoft Dispatch.
The Microsoft Portfolio.
The Microsoft Movement (tablet) and Microsoft Velocity (phone).

These are not great ideas yet, but they're light years ahead of "Windows" and "Surface" for a mobile device that ends up acting just like a "Window" or a "Surface."

Submission + - Console gaming is dead? How about $14,600 for a launch-day PS4 (terapeak.com)

aussersterne writes: Seven years after the release of the PS3, Sony released the PS4 on Friday to North American audiences. Research group Terapeak, which has direct access to eBay data, finds that despite claims to the contrary, console gaming still opens wallets. Millions of dollars' worth of PS4 consoles were sold on eBay before the launch even occurred, with prices for single consoles reaching as high as $14,600 on Friday. Would you be willing to pay this much to get a PS4 before Christmas? Or are you more likely to have been the seller, using your pre-orders to turn a tidy profit?

Comment Does it work without nursing bluetooth (Score 1) 365

connectivity along? This was my big gripe with the Sony MN2 (see my other comment in this story). I wanted it to do some basic things: notably, to give me a buzz about events (messages, calendaring, calls). It failed miserably at this task, because keeping it charged and connected to the phone all the time in the daily flow of life turned out not to be possible without making "Sony MN2 management" a new part-time job for myself. A distant second reason for failure (but one that still deserves mention) is that the touchscreen was so worthless that when it did manage to buzz me, I spent a comical ten minutes tap-tap-tapping on its screen just trying to get my taps to register well enough to see what the buzzing was all about.

It was much faster and less labor-intensive, in the end, to continue what I'd been doing, and what so many others do: fish the phone out of my pocket every ten minutes to see if anything was going down.

I thought about Pebble, but the Sony product made me gun-shy about smartwatches for the general consumer market at this point (though I'd still give an Apple product a look—but without much hope that it would work for me, since I use an Android phone now).

Comment Had a Sony MN2 briefly; problem was VERY familiar. (Score 1) 365

I got a smart watch (Sony MN2) last year because I kept missing the vibrate on my phone for meetings and calls, because my phone isn't always in my pocket. I thought that if I had a device on my wrist, I'd always get the buzz and never miss anything important.

SIMPLE task for the device, no? But it failed miserably.

Reason: Same as Windows CE back in the day. The device wasn't up to the job, because it was busy trying (miserably) to do a hundred other things that it simply wasn't suited for AT ALL.

- There were multiple "apps" on the watch, including for things like Twitter and Facebook
- But the screen is by nature so tiny and the device so limited, these were laughable rather than usable
- Rather than focusing like a laser on doing tiny-device things well, this led to compromises:
- Unusable touchscreen (inaccurate, insensitive)
- Useless battery life (lucky to make a day, often less)
- Worst of all, the device had to be tethered to be useful; lose tether, and it is effectively a bracelet

Compare to Windows CE:

- There were multiple applications on the devices, copying most MS desktop applications of the day
- But the device was by nature so tiny and so limited that these were laughable rather than usable
- Rather than focusing like a laser on doing mobile-device things well, this led to compromises:
- Crappy display, crappy resistive touchscreens, inexact and unpredictable input methods
- Useless battery life (lucky to make a few hours, often less)
- Worst of all, CE devices had to be synced to be useful; fail to sync several times a day and they were a data prison or data corrupter, rather than a data aid

The experience with the Sony MN2 was much the same as what I remember from CE: constantly nursing the device along, excessive time spent trying to "make it work" for the most simple tasks, paying WAY TOO MUCH ATTENTION all the time to the connectivity (your body and its attention are pressed into service as the mechanical tool that keeps the data flowing) to ensure that it was regular and sound, no intention of trying to use any of the laughable features, and continuous frustration (Oh god, the battery went dead / I lost Bluetooth sync / something went wrong and I can't tell what it is on this tiny-screen device with no error reporting, I didn't get buzzed about that meeting/call, WTF IS THE POINT OF THIS SHITTY DEVICE AND ALL THE TIME I SPEND NURSING IT ALONG ANYWAY?).

Wrongheaded.

I presume that if Apple decides to build one of these, they will have better success, given their reasonable HCI, design, and decision-making chops.

Comment Re:She will have to find out more than this. (Score 2) 189

I actually don't know. I have the luxury of having institutional access to a full range of print and electronic subscriptions. But even if they do, think about what you're asking a busy professional to do.

People are suggesting that she should just pony up $thousands annually, that she should dedicate days to travel and research, time away from patients or family, when there's no necessary technical reason to do so, and now, with ILL, that she should stick to a research project about a case or two for the many weeks that it takes to make ILL work.

Sure, there's ILL, and it may well work as it used to (though I doubt it for electronic resources, based on the ways that licenses right now are written). But we're asking her to stick to a project for $thousands and $weeks of constant attention. She's a professional. She is busy. And she ought to have access. The point is not to ask, "can it, plausibly, be done?" but rather "what is science for, and is this the way that it ought to work?"

We made society, as human beings. We can make it better. I'd suggest that this is a case in which it can be made to function much, much better than it currently does. The goal behind having therapists of all stripes is to help people to overcome real problems, not to test the therapists to see whether or not they can navigate arcane social structures and processes. We should make their jobs as easy as possible. Hell, this applies to virtually every job title. Jobs exist for a reason—because there is demand for what they do, because we value it. Why not, then, make the jobs of professionals as plausible and as easy as possible, rather than risking their doing a much worse job simply so that a few corporations that produce little of value (the value in academic publishing is produced by the academics and the researchers, not by the publishers in the era of easy print-on-demand and easy online access) can earn a decent chunk of change?

Comment She will have to find out more than this. (Score 1) 189

She will have to find out:

1) Which libraries have _print_ as opposed to _electronic only_ subscriptions, and
2) Amongst those that do not carry print (I'm guessing the majority), which ones allow access to electronic resources by non-students/non-faculty (this kind of access is expressly forbidden, at any cost, by many subscription packages offered to universities).

Even if she is able to identify a library that offers non-affiliated individuals access, she will have to pony up whatever the library charges the public for access, and then, at that stage, she will have access to _one_ journal. It is unlikely that all of the resources that she needs are to be found in that _one_ journal, and much more likely that relevant material is published in several or even several dozen journals, in which case all she has to do is grill library personnel for 20-30 minutes with a detailed list in each phone call, and likely pony up the access fees (and the transportation, and the Saturday mornings) to jump around from one library to another on a wild goose chase over many weeks to piece together the materials that an academic can assemble over a cup of coffee without leaving their screen. Just who, pray, are the academics producing their research _for_? Surely those who might actually be able to use it practically?

All of this stuff can technically be accessed from her office, too, in the space of 10 minutes, but for the profit-oriented restrictions (that do not reflect costs, see my previous post) imposed by journal "publishers."

Comment Two further things— (Score 1) 189

"irritating," not "irritable," my apologies for the misuse of the word (it's late where I am); and I should note that the department had to change the name of the journal and all of its graphics as they brought it entirely in-house and severed the Springer relationship, since Springer held the rights to everything, including all past issues, meaning that the new journal is just that—a clean slate, post-Springer (and good riddance).

Comment Having worked for a Springer journal, (Score 5, Informative) 189

as a managing editor, I can tell you that they do not incur substantial expenses, and that academics provide the important parts of the service, essentially for free, in the case of most journals. It's not like putting out a magazine; we didn't even have copy or layout editors for our journal, the most inexpensive components of editorial labor. Springer paid the university department that hosted the journal a mere few thousand dollars (single digits) per year. There were two "paid" staffers—myself and one other person. The rest of the "editorial board" consisted of faculty from our university and several others doing the work for free, under the auspices of the "professional duties" of the academics involved (paid not by Springer, but by their respective institutions). Peer reviewers—free. Editorial labor (copy, layout to production files according to specs, the submissions queue, even rough line editing and style work)—graduate students looking for a title to add to their emerging CVs.

Essentially, Springer's total cost for putting out the journal amounted to the several thousand dollars (again, single-digit thousands, split between myself and one other individual) that they (usually belatedly) paid our department annually for the entire journal in its substance, plus printing/distribution (a pittance given the circulation size of academic journals and the cost per print subscription—not to mention the increasing number of electronic-only subscriptions). They had one liaison who handled our entire "account," and the level of labor involved allowed this one person to be "over" several _dozen_ journals. That's the entire labor footprint our journal actually had inside the "publisher."

And for this, they held onto the reprint/reuse rights with an iron fist, requiring even authors and PIs to pay $$$ to post significant excerpts on their own blogs.

Seeing the direction the wind has been blowing over the last half-decade, the department decided (and rightfully so) that it's basically a scam and that academic publishing as we know it need not exist any longer. It wound down both the print journal and the relationship with Springer several years ago and now self-publishes the journal (which is easy these days), at much higher revenue for the department and with the ability to sensibly manage rights in the interest of academic production and values, rather than in the interest of Springer's oinking at the trough on the backs of academics.

Oh, and many university libraries (particularly in urban areas) do not admit just anyone off the street; you must generally hold an ID that grants access to the library (often student or faculty, plus a paid option for the general public, either monthly or annually, that can vary from somewhat affordable to somewhat expensive). Not to mention that for many people, yes, it is a significant professional hardship to lose a day or two of work to be trekking into foreign territory and sitting amongst the stacks—and that this hardship is made much more irritable by the fact that the very same articles are sitting there online, in 2013, yet can't be accessed at reasonable cost.

As an academic, I have the same frustration. We bemoan the state of science in this society, yet under the existing publishing model we essentially ensure that only a rarefied few scientists and the very wealthy elite have access to science at all. $30-$60 is not a small amount for the average person—and that is the cost to read _one_ article, usually very narrowly focused, of unclear utility until the money has already been paid, and borderline unreadable for the layperson (or for the magazine author hoping to make sense of science _for_ the layperson) anyway. Why, exactly, would we expect anyone to know any science at all beyond university walls, under this arrangement?
