
User Journal

Journal: False Memories in Real Time

Journal by DumbSwede
My wife and I enjoy watching the Chinese mini-series "A Taste of China." How clearly I can recall the English narration, spoken in a deep baritone voice and with a strong, but easy to understand, Chinese accent. The narrator's voice, style, and cadence are all very professional -- the only problem with this memory is that it is totally false.

My wife and I were watching the series online on YouTube with our daughter, and she asked me to get something from the kitchen. There had been a pause in the narration, and while I was in the kitchen it began again. This time, though, there was no pleasant Chinese-accented English, but unintelligible Mandarin. I was startled for a second, then remembered I had been reading subtitles. I returned to the table and continued watching the program. As I sat I was aware I was listening to Mandarin and reading subtitles, but the second I reached into my memory to recall what had just happened, the English narration returned.

If I had not had this realization, and you had asked me a year from now whether I had watched the series, I would have been convinced I had listened to an English-dubbed version. This may not seem like a false memory in the traditional sense -- I had merely converted the subtitles into an easier-to-remember-and-integrate English narrative -- but it illustrates how malleable our memories are. My unconscious mind knows I do not know Mandarin, and yet I remember words of the narration. I don't believe my mind was merely being lazy, but resolving the paradox by inventing the remembered narrator's voice.

Sometimes our perception of an event is in conflict with what seems to be fact. Rather than flag the contradiction, it seems our mind will often edit the memory into whatever our subconscious feels to be the most likely internally consistent explanation. None of this is news. However, just as 90% of all drivers think they are in the top 10% of safe drivers, most of us believe our memory of events to be superior to that of those around us. We are startled when our recollections differ, and often assume malice or ulterior motives in those who misremember what we remember.

We probably all know someone who either thinks they are never wrong or has a far more altruistic explanation for some past behavior that on the surface seemed quite self-serving or selfish. We intuitively believe their memories are false (which they probably are). We then give ourselves a mental pat on the back for not living in such a self-deluded state. Obviously our own memories are as infallible and as unyielding as the Rock of Gibraltar. The only trouble is that everyone's memory is fallible -- memories are under constant re-edit. Evolution didn't evolve memory to be accurate; evolution evolved memory to be useful. Memory is therefore a repository of non-contradictory facts (non-contradictory, also, with how we perceive or wish our personality to be). As new facts become evident, old memories are revised to fit them. Sometimes this can even make them more accurate -- say, looking at an old photograph and remembering more accurately the Members Only jacket you used to own (a fact your stylish new self may have edited out).

Unfortunately our desire to be part of a clan or to please others can be the motivation to reedit the facts in our memory. We know that lying is wrong, but if our memory is in conflict with what allows us to have what we want, then memory is often what needs to be changed.

I think it would be the truly rare individual whose head isn't full of false memories. The best we can do is to be aware that memories are not the concrete remembrance of the past that they seem. Evolution has probably installed a chalkboard in our heads, not a printing press. Be cautious of believing what you see on the board.

Journal: Data Corruption from Excel Autocorrect

Journal by Interrobang

Someone on TECHWR-L posted a link to this paper (under the paradoxical title "The Cupertino Effect"), which is about how Excel's autocorrect feature can corrupt statistical analysis of genetic data if/when Excel "makes the wrong assumption" about an entry based on how it looks:

When processing microarray data sets, we recently noticed that some gene names were being changed inadvertently to non-gene names. A little detective work traced the problem to default date format conversions and floating-point format conversions in the very useful Excel program package. The date conversions affect at least 30 gene names; the floating-point conversions affect at least 2,000 if Riken identifiers are included. These conversions are irreversible; the original gene names cannot be recovered.

As the author points out, this can cause gene names to come back in analyses as "unknown," because "[a] default date conversion feature in Excel ... was altering gene names that it considered to look like dates. For example, the tumor suppressor DEC1 ... was being converted to '1-DEC.'"

The authors also note that there is a problem with "RIKEN [4] clone identifiers of the form nnnnnnnEnn" being converted to a floating-point number.

The paper also gives some idea of the devastating scale of the problem and its significance for people doing these sorts of analyses: "A non-expert user might well fail to notice that approximately 3% of the identifiers on a microarray with tens of thousands of genes had been converted to an incorrect form, yet the potential for 2,000 identifiers to be transmogrified without notice is a considerable concern."

As far as I know personally and can glean from the paper, the autocorrect and/or conversion feature is nearly impossible to disable completely and can only be worked around -- and the workarounds may not succeed 100% of the time. This suggests that perhaps Excel is not the tool of choice for doing these sorts of analyses. (Does the spreadsheet application in OpenOffice work differently?)
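For anyone facing the same problem, a small pre-flight check can flag identifiers that Excel's default import is liable to convert. Here's a sketch in Python; the patterns are my own approximation of the date-style and scientific-notation conversions the paper describes (not an exhaustive rule set), and the function name is made up for illustration:

```python
import re

# Tokens Excel treats as month names when coercing text to dates
# (approximate list; includes common gene-name collisions).
MONTH_TOKENS = {"JAN", "FEB", "MAR", "MARCH", "APR", "APRIL", "MAY",
                "JUN", "JUNE", "JUL", "JULY", "AUG", "SEP", "SEPT",
                "OCT", "NOV", "DEC"}

DATE_LIKE = re.compile(r"^([A-Za-z]+)-?(\d{1,2})$")
# RIKEN clone identifiers of the form nnnnnnnEnn parse as scientific notation.
SCI_LIKE = re.compile(r"^\d+E\d+$", re.IGNORECASE)

def excel_mangles(identifier: str) -> bool:
    """Return True if Excel's default cell formatting would likely
    convert this identifier to a date or a floating-point number."""
    m = DATE_LIKE.match(identifier)
    if m and m.group(1).upper() in MONTH_TOKENS:
        return True   # e.g. "DEC1" becomes the date 1-Dec
    if SCI_LIKE.match(identifier):
        return True   # e.g. "2310009E13" becomes 2.310009E+19
    return False

genes = ["DEC1", "SEPT2", "TP53", "BRCA1", "2310009E13"]
at_risk = [g for g in genes if excel_mangles(g)]
```

Screening a gene list with a filter like this before (or instead of) opening it in a spreadsheet at least makes the damage detectable in advance; importing the column explicitly as Text is the usual workaround, though as noted above it is easy to get wrong.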


Journal: SETI Fireflies and Lightning

Journal by DumbSwede

This paper has received a new title since being put online. It picked up two comments, and I now realize that the original title gave the wrong idea as to its contents and intent. I would like to thank _a_lost_packet_ for being one of the few people to read and reply to my post, but his comments uncovered what I realized would be a common misconception with my essay given its original title (which had the word Supernova in it); likely few people did more than read the title and jump to some erroneous conclusions about the content. This essay has NOTHING to do with triggering or harnessing Supernova explosions. It is about exploiting the natural occurrence of Supernovas to facilitate the initiation of communications between extraterrestrial civilizations, much in the same way fireflies may all flash in unison when lightning strikes. The fireflies don't cause the lightning, nor are they close to it, but they all flash in unison with it. Now replace the lightning with a Supernova and the fireflies with ETs, and given the immense distances between stars, we would see the resulting 'simultaneous' signals spaced years apart (except in certain rare situations outlined below).

_a_lost_packet_ also posits that waiting around for Supernovas isn't what extraterrestrial civilizations would do -- in this case we would be one of those civilizations waiting around. Since we have already waited 40-plus years without contact, this isn't necessarily a matter we have control over. ETs are probably using more than one method to try and communicate with us and others; this would be one more method that, if exploited, would be long in coming, but efficient and productive when available. If there is more than one civilization within signaling distance, then the time between observable signals would be roughly 100 years (the approximate interval between galactic Supernovas) divided by the number of civilizations within signaling distance (assuming they too participate in this signal-synchronizing strategy).

Again, with emphasis: this strategy, while exploiting the occurrence of Supernovas for reasons of timing, does not require the triggering of Supernovas, the channeling of Supernovas, nor being close to Supernovas by either the senders or the receivers; they are merely timing aids to narrow search parameters.

There is a large camp, perhaps a majority of people, that sees SETI research as a waste of time and effort. Considering the world-changing implications of finding an ET signal, it is hard for me to see this as a waste even if it hasn't yet provided results. Of course the search itself has provided results and knowledge; it's just that the results so far have pushed farther out the boundary of where nearby technological societies might be. How much farther? That is a little hard to quantify, and is made murkier yet by the problem of synchronicity. Stated simply, we have to not only be looking towards where they will be transmitting from, but during a window of time when they will be transmitting -- and this is the problem this paper hopes to address. Many or most people probably think that SETI has been listening to all planetary systems in the Milky Way Galaxy more or less continuously for the last 40-50 years and come up with nada. The truth is we have only scrutinized relatively few close star systems in any great depth, and only for relatively limited amounts of time. There are larger general sky surveys, but they are piggybacked on other general astronomical research and also suffer from the problem of synchronicity.

The core problem is that the amount of energy needed to send signals across light-years of space is quite large, especially for omni-directional continuous signals. Even with megawatts of transmitting power, reception is problematic beyond a few dozen light-years' distance, and it is only beyond these distances that we are likely to encounter other technological civilizations, even if the Milky Way has thousands or millions of such civilizations. Even assuming conventional SETI, and being generous and assuming ET could have detected our early radio broadcasts 75 years ago (a very big assumption), then a round-trip reply would be possible from less than .000007% of our cosmic brethren. Or put another way, if there are 1,000,000 technological civilizations out there, we would have less than a 1 in 10 chance of detecting them even under the most ideal circumstances for a return signal (and it is a pretty safe bet our efforts to date have been far less than ideal).

If it were possible to look in the right direction at the right time, things would be far simpler for both sender and receiver. But how do we determine the best look and send times before communications have been established? Fortunately the universe has provided such synchronization-assist mechanisms in the form of various astronomical events, the most familiar and spectacular of these being Supernovas, though there are probably others that might also be exploited.

For ETs to make their presence known, they would wait for such a spectacular event and set off a large Electromagnetic Pulse (EMP) or other highly energetic signal carrier when it is observed. Assuming we know the distance to the Supernova and the distance to target star systems, we can work out when their respective signals would arrive and thus know when to look for them.

While the core concept is simple, there are a lot of problems remaining to be ironed out, the chief one being working out accurately when to observe each target star system. Unfortunately we do not know the distance to even relatively close stars with great precision, as the data below from the Hipparcos satellite shows.

accuracy level -- stars known -- light years
1 percent -- 400 stars -- 30 light years
5 percent -- 7,000 stars -- 150 light years
10 percent -- 28,000 stars -- 300 light years

Since there are approximately 100 billion stars in the Milky Way, this means we know the distance with 10% accuracy to less than .000028% of them.

Cosine to the Rescue

If we assume an ET in a star system 300 light years away sent a signal timed to a Supernova event, and we know the system's distance only to 10% accuracy, naïvely we would have to observe the system continuously for 30 years to detect the signal event. Even with only .000028% of the Milky Way to look at, that makes for a lot of systems (28,000) to watch continuously.

Supernovas are not only rare, but also most likely to be tens of thousands of light-years distant from us. If we assume the Milky Way is relatively densely populated with ETs, then some star systems will lie close to the line along which a Supernova's light travels to us. Take the case where a Supernova goes off 2,000 light years from us, and there is an ET civilization a mere 1 light year off this line at 1,000 light years (I have kept the numbers simple for purposes of illustration). The pulsed signal sent when ET observes the Supernova would arrive only about 1/3 day after the Supernova's own light.

Most likely ET will not lie so close along the line we observe the Supernova from, but this does illustrate how powerful the compression in time can be for signals from systems not too many arc-seconds off its path -- whether we know their exact distance or not. If we do know the distance, then we can work out the likely window of observation needed; the closer the system lies to this line, the less time is needed.

We can use the Law of Cosines to work out the arrival times for candidate star systems (assuming a reasonable guess as to distance, which even for very distant systems can be worked out by various methods, though with very large error bounds).

b^2 = a^2 + c^2 - 2*a*c*cos(B) (where a, b, c are the sides of a triangle and B is the angle opposite side b).

If we keep distances in light-years, then the final delay off a Supernova event is given by: T = d1 + sqrt(d1^2 + d2^2 - 2*d1*d2*cos(B)) - d2, where d1 is the distance to the target star system, d2 is the distance to the Supernova, B is the angular separation between them, and T is the delay in years (when B is small, T will probably/hopefully be a small fraction of a year).
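As a sketch of how the arithmetic plays out (in Python; the function name is mine), plugging the earlier worked example into this formula -- a Supernova at 2,000 light-years with an ET system 1 light-year off the sight line at the halfway point -- recovers the roughly 1/3-day delay:

```python
import math

def relay_delay_years(d1, d2, b_rad):
    """Delay between the Supernova's light reaching us and the arrival of
    a pulse the target system emitted when *it* saw the Supernova.
    d1: distance to the target system (light-years)
    d2: distance to the Supernova (light-years)
    b_rad: angular separation between them, as seen from Earth (radians)"""
    sn_to_star = math.sqrt(d1**2 + d2**2 - 2 * d1 * d2 * math.cos(b_rad))
    return d1 + sn_to_star - d2   # total relay path minus the direct path

# Worked example: Supernova at 2,000 ly, ET 1 ly off the line at 1,000 ly.
d1 = math.hypot(1000.0, 1.0)      # distance to the ET system, ~1000.0005 ly
b = math.atan2(1.0, 1000.0)       # ~0.057 degrees off the sight line
delay_days = relay_delay_years(d1, 2000.0, b) * 365.25
```

The delay comes out near 0.001 years, about a third of a day, matching the figure above.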

It is my hope that Supernova events would trigger a flurry of activity galaxy-wide. Not only would this activity be useful in alerting potential fledgling civilizations where to concentrate their observational energies, but such signals would be useful as cosmic surveyor's aids, helping pin down the distances to star systems with greater accuracy.

Assuming dozens if not hundreds or thousands of civilizations take this opportunity to signal their place in the heavens, it would appear as an expanding disc of flashes around a Supernova. While we may not know exactly where in the disc the flashes might occur, we can still work out statistically the circular arc where 95%-plus of the flashes should occur. Observing this disc while it is still small and synchronicity is at its best is probably our best chance of bagging an ET signal. Though we may never be able to communicate directly with the first ones we observe, as they will likely be many hundreds or thousands of light-years distant, observing just one such synchronistic event might be proof enough we are not alone -- two or more, a dead certainty. More targeted observations can be made of the few hundred thousand stars within, say, 200 light years. A synchronization signal from them would justify building huge new space-based observatories in multiple EM bands to try and detect not just simple hello pulses, but any directed message or stray EM leakage, even if it is just reruns of "I Love Lu-C-Prime".

I have the Power

During the Reagan Presidency there was an exotic missile defense scheme posited employing X-Ray lasers to knock out intercontinental ballistic missiles in mid-flight. Such exotic devices were never built, but exist in theory. The X-Ray lasers proposed were essentially focused atomic blasts. A nuclear device was to be placed in orbit, and affixed to it were several long, pipe-like attachments, each of which could be pointed at an individual incoming missile. When the nuclear device was detonated, the immense initial EM pulse was focused, converted, or caused lasing in the X-Ray portion of the spectrum along these beam pipes. A short, intense X-Ray beam would then have streamed out before the device was obliterated by the blast-wave that followed scant milliseconds later. Here then, in theory, is at least one device that could send a strong signal unmistakably technological in origin. I'm not suggesting this is the only way to send a strong pulsed signal on cue, just positing one device that could work -- though it obviously would be too dangerous a contraption to build in low Earth orbit.

It is possible to generate ultra-strong EM pulses with conventional explosives; I imagine they could be focused and harnessed in much the same fashion as the proposed Cold-War X-Ray laser if needed. In any event we need not obsess over how such signals are generated; we know that they can be. Truly gargantuan EMPs could be detonated on the far side of the Moon, thus shielding the Earth from any unintended electronic damage fallout. Our intended recipients would of course be seeing a signal many orders of magnitude smaller given the immense distances.

Depending on the technological level of the sending society, they may just use some brief omni-directional pulse and not worry about focusing on likely candidate worlds. If it is a society with technology advanced enough to know with certainty which worlds harbor life, they may use directional beams for greater efficiency and signal strength. The signal may last milliseconds or a day or two. Regardless, the onus is now on the receiving world to listen at the right time and in the right direction.

Where to Look Now

While not in the Milky Way proper but in the Large Magellanic Cloud, Supernova 1987A might still be worth investigating for transient SETI activity in star systems not far off its coordinates. 21 years is a relatively short time astronomically, and with this Supernova being 170,000 light years away there are lots of candidate systems relatively close to its sight line that would just now be showing a visible synchronized pulse. Similar to our previous example, a civilization 85,010.5 light-years away could be about 1,336 light-years off Supernova 1987A's sight line and its beacon or pulse would just now become visible. (Note: the 21-year delay accrues over both legs of the relay path, Supernova-to-ET and ET-to-us, so each leg exceeds the 85,000 light-year half-way distance by 10.5 light-years; 1,336 is the result of sqrt(85,010.5^2 - 85,000^2).)

If we truly want to join an intergalactic community we should put some effort into creating an EMP signal of some sort. I imagine a few million dollars would buy a firecracker of sufficient size to serve as a beacon. More importantly, what I hope to emphasize is that we need to be working out what we should be looking for, and where, in the aftermath of a Supernova explosion. The Supernova itself will be a wonderfully informative event, but I hope this paper stirs to action possible tangential discoveries potentially even more important. I speculate we could get some dual-use research out of this search. It is likely that in the few years after a Supernova its light will briefly illuminate the interplanetary dust clouds of systems close to it, providing additional astronomical information and a means to refine our galactic yardsticks. Since we might be looking in this direction anyway, we should be alert to the possibility of extremely transient events that might be intelligent in origin.
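The same geometry gives a quick way to size the ring of sky around an old Supernova where synchronized pulses from halfway-point systems would just now be arriving. This is a sketch under the assumptions above (the function name is mine); note that the delay T accrues over both legs of the relay path, Supernova-to-ET and ET-to-us:

```python
import math

def midpoint_ring_radius(d2, t):
    """Perpendicular offset (light-years) of a system lying halfway to a
    Supernova d2 light-years away, whose relayed pulse arrives t years
    after the Supernova's own light.
    Solves t = 2*sqrt((d2/2)**2 + p**2) - d2 for p."""
    return math.sqrt(2 * d2 * t + t**2) / 2

# Sanity check against the earlier example: a system 1 ly off the line,
# halfway to a Supernova 2,000 ly away, lags by ~0.001 years (~1/3 day).
assert abs(midpoint_ring_radius(2000, 0.001) - 1.0) < 0.01

# Supernova 1987A: ~170,000 ly away, first seen 21 years ago (as of writing).
ring = midpoint_ring_radius(170_000, 21)
```

Even 21 years on, the ring for halfway-point systems already spans over a thousand light-years, which is one more reason to watch the region soon after a Supernova, while the disc is still small.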

Role Playing (Games)

Journal: The Man Who Built The World Next Door

Journal by Interrobang
Gary Gygax, July 27, 1938 - March 4, 2008

This is an odd one for me -- depending on how you look at it, I was in the RPG world for many years, but not of it, or of it, but not in it. Most of my friends are gamers, some of them quite hardcore for years. When I was a teenager, I wrote a paper RPG. (In retrospect, it seems as though what I was reaching for -- completely independently of anything, since I didn't know the people involved and they didn't know me, and there was a lapse of several years involved -- was what someone else eventually wrote as Vampire: The Masquerade, although what I came up with was closer to GURPS and, admittedly, ultimately unplayable. At least I got class credit for it.) I had a front-page entry in the Daily Illuminator (under the name of the person whose e-mail I was borrowing at the time, since I didn't have one of my own -- scroll down to October 2). I even owned a set of dice. (I've since given them away.)

However... Aside from a few abortive attempts at LARPing in the mid-90s, I've never actually played a pen-and-paper RPG.

However again, if it hadn't been for the RPG culture, in which I was at least an interested bystander if not an out-and-out participant, I never would have become the (geeky-but-at-home-with-it) person I am today.

So I owe one to Gary Gygax. Thanks, man. I've never even played your game, but you still changed my life.

Journal: Integrating Help Using RoboHelp/VB -- Map IDs Question

Journal by Interrobang
Here's where you guys come in.

The Back Story: I've not done a lot of in-program help integrations, because I've usually been working with people who know how to do it. None of the programmers here do, and I don't either.

We're looking to streamline our help integration process for in-program, window-level help. The way we would like to do this is to have the developers assign Map IDs in the code right from the outset in the development process, before the help is written. However, doing so would mean that any map file I get from them will not include topic titles. Is this a problem?

(The advantage is that by using this system I could go through the application screen by screen during the testing phase and determine which screens do not have associated help topics.)

As of now, I can create a .h file in RoboHelp and import it into the project file, but if there are no topic titles specified, I can't see any of the Map IDs in RoboHelp's Edit Map IDs window.

If we can in fact implement this system, what does the developer need to give me (that will actually work)? What should the file content look like? Do any of you actually know?
It's funny.  Laugh.

Journal: On Being a Pedestrian in Car Land

Journal by Interrobang
When you live in a place where the car rules, and car drivers think they're the High Royalty of Turd Island, and you don't drive yourself, you very quickly develop a Traffic Glare. The Traffic Glare, patent pending, is a comprehensive form of non-verbal facial and kinesic communication (consisting partially of extended eye contact, and partially of appropriate facial/physical expression) which clearly notifies menacing drivers that you:

  • know they have seen you.

  • have memorised their license plate.

  • are carrying a concealed death ray on your person, and are not afraid to use it if they try anything hinky.

...and further, after having used it, you will file charges against their smoking remains.

Today I had a superlative Traffic Glare. I think it kept me from getting run over at least once...

Journal: Preliminary Course Proposal: Design for Developers

Journal by Interrobang
(...and friends.)

A friend of mine sent me this big long e-mail about "yak-shaving," which in this case involved wanting to do something with a piece of software, discovering that it wanted access to a particular library he didn't have, trying to find that library, and then trying to install it, just to get back to the original task. He mentioned what I noticed ages ago, which is that a lot of open-source stuff doesn't always include everything you need to get the software up and running, I think in part because the developer(s) assume you already have it (they're funny like that).

That got me thinking about a lot of the crap I've seen both in OSS projects and in proprietary projects -- I have to say, proprietary software has definitely got the drop on OSS in terms of including everything (hell, the other day I got an e-mail with a link to Acrobat Reader download in it, just in case you wanted to grab a linked PDF, and were the last person on earth still without a PDF reader, case in point). And I thought a lot of these problems reflect a lack of basic design knowledge on the part of most software developers.

By "design" I mean "visual, textual, and informational rhetoric," rather than "how to put together a program in a way that works and makes sense."

That got me thinking...

"Every CS programme should include a course on design fundamentals, tailored to the specific wants and needs of software development. I should write a syllabus for a course like that, and post it five or six different places, so the meme gets out..."

So, uh, if you were developing a postsecondary (2nd year+ university or third-year college) course on design fundamentals for software developers, what would you include?

What I'd propose would look something like this:
  • Unit 1: Introduction, What Is Design and Why Do You Care?
  • Unit 2: Audience (User) Analysis: Who are your users? What are their wants and needs? What can you assume about them? What shouldn't you assume about them?; Bias; How to find out
  • Unit 3: Information architecture and management for code, UI, and documentation
  • Unit 4: Documentation: Commenting and in-code documentation; User and peripheral documentation; Web documentation; documentation design
  • Unit 5: UI Design Issues: Consistency (ordered lists, menus, workalikes, labels, navigation controls, buttons), Layout, feature placement
  • Unit 6: International Issues in Design: Design for translation, localisation, language bias, interface bloopers, Basic English
  • Unit 7: Metaphorics, Analogy, and Software Design: Introduction to metaphorics, metaphorics and user analysis, metaphorics and assumptions, critical analysis. Assignment: Critique an application using metaphorics principles learned in class (instructor approved topic), 15%.
  • Unit 8: Troubleshooting Design Problems Before they Happen: Avoiding common problems (yak-shaving, inconsistencies, typos, the Comprehensible Only If Known problem), user acceptance testing with a design focus. Assignment: Using the application from the last assignment, devise a simple testing scenario to evaluate a design feature. 10%
  • Final Assignment: Subject to instructor approval of the application/critique, prepare a critique of a well-known application using at least three of the principles discussed in the course. Max. length 3500 words, 35%, include screenshots.

Note: I know that's only 70% of the total mark accounted for; I'm assuming 10% for attendance and 20% in 5%-apiece five-finger exercises, or a midterm exam or something...

Interface Blooper Example: I still remember from grad school, doing an analysis of Netscape as it appeared in four different languages, and finding that in German, what was then the "Guide" button in English was "Guide" in the German version as well. A quick trip to the English-German dictionary showed why -- according to that dictionary, anyway, the standard German word for "Guide" is "Führer." *headdesk*

I can hear Eth's feet whistling as he descends from 30 000 feet to jump all over me for that, but I'm just reporting what that particular dictionary said, and obviously it was of some concern, since they'd left the button untranslated (as far as we could tell).


Journal: The Case of the Bus-Driving Genius and the Melanoma Bandages

Journal by Interrobang
I have a friend who is possibly the world's most intelligent schoolbus driver. He is, literally, one of these mind-blowing geniuses (in fact, he once was asked to go to a research university in the US so they could recalibrate an intelligence test for people with freakishly high IQs); driving a bus is just what he does for a living.

He has a lot of unusual hobbies, however, like foiling abductions, becoming an official Friend of the Court in the Province of Ontario, and now, conducting a barefoot epidemiology.

You see, a while back he noticed that he was seeing a lot of people at work who were wearing those distinctive "melanoma bandages," indicating that they'd either had biopsies or lesions/suspicious spots removed. He started counting. He figured out that in six months, he'd seen 110 people in the office at his workplace, and of those 110 people, he'd observed 11 wearing melanoma bandages.

According to Health Canada, aggregate incidence statistics for the population he would be looking at (generally between age 30 and 65), are 18.28 per 100 000 (mean data), or 0.01828 percent.

Process Note: To derive these figures, I took the incidence data per 100 000 in the tables on that web page, selected for the cohort between 30 and 65, and then took the mean of the data. Afterwards, I divided the incidence rate per 100 000 by 1000 to obtain the incidence rate as a percentage. Please feel free to critique my math or my methodology, since although I'm not completely ignorant of statistics (it's the one branch of math I can do reasonably well), I will not claim to have even average math chops.
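To put my friend's count in perspective, here is a naive back-of-envelope model in Python. To be loud about the assumptions: it treats the quoted annual incidence as a Poisson rate over the six-month window, and it ignores both that bandages may indicate biopsies rather than confirmed melanoma and that incidence figures count new diagnoses per year rather than visible aftercare:

```python
from math import exp

rate_per_100k = 18.28    # annual incidence, per the Health Canada figures above
people = 110             # people observed at the office over six months
observed = 11            # of whom, wearing "melanoma bandages"
months = 6

# Expected new cases in this group over the observation window.
lam = people * (rate_per_100k / 100_000) * (months / 12)   # ~0.01

# Under a Poisson model, the chance of seeing even ONE case by chance.
p_at_least_one = 1 - exp(-lam)   # ~1%
```

With an expected count around 0.01, even a single case would be roughly a one-in-a-hundred event under this model, and eleven is wildly outside chance -- which mostly tells us the naive model's assumptions are wrong (bandages overcount diagnoses, the cohort isn't random), exactly the sort of thing the epidemiologist will have to untangle.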

No matter which way you want to slice and dice those numbers, statistically speaking, they're adding up. My friend is now working with an epidemiologist with the federal government who specialises in cancer clusters. (He likes that she's a postdoc or another form of newly-minted PhD.)

To fill in a bit more of the background detail, there are environmental factors at work. Most of the people who drive schoolbuses in this area are otherwise rather outdoorsy (as Ed says, "You have to like the outdoors to drive a bus, since you're in it so much.") -- they're often farmers who are supplementing their income; most of them are men (who have a slightly higher melanoma rate to begin with, especially on the upper body), and most are between 30 and 65, as I mentioned.

As Ed remarked, "If any other profession showed this kind of a cancer cluster, they'd regulate the hell out of it." Well, maybe they will, now that they know about it. (I'm not so sure it's directly professionally-related, but we'll see what happens.)

We already know that melanoma rates are going up, likely because of a combination of 60 years of collective tan-fetish and the thinning of the ozone layer. However, I'm writing this to make (yet again) a plea that people be mindful and pay attention to trends that they might notice around them. Who knows what you might find out? As Yogi Berra was once credited with saying, "You can learn a lot just by observing." (People mock him for it, but it's actually quite profound, if you think about it.)

I mentioned the story to Rustin, and he said, "You really should write this up as a journal entry. You really are the IGCU*, because you see this stuff, and you notice it; your blog entries have the potential to make a lot of people aware of stuff long before it hits the news." I do think things like this are quite significant in a number of different ways. I also think it's important to chronicle them and disseminate the information so other people can learn and contribute.

* For "Inter Geek Communications Unit," which, as he says, is because I function as kind of a nexus between a whole bunch of (groups of) geeks, and pass information through. Not only that, but I'm a huge clearinghouse of information myself. I just wish more people would tap into the resources I have available...

Journal: The Environmentally-Friendly Angel in the Off-the-Grid House

Journal by Interrobang
This interesting paper by Sherilyn MacGregor looks at the way much of "environmental citizenship" involves making lifestyle changes that inevitably (I blame the patriarchy) wind up increasing the amount of uncompensated labour expected of (primarily) women. Since, by and large, women still do the majority of family, relationship and household maintenance tasks (many while holding down full- or part-time jobs outside the home), many of the most frequently-touted "personal changes" we hear about mean... you guessed it, more work for women (generally).

Switch to cloth diapers? Great, but someone, probably mommy, winds up swamping out diaper pails (you could not pay me enough) and running the washing machine all those extra times.

Switch to environmentally friendly, biodegradable cleaning products? Also great, but someone, most likely the female someone who does most of the household shopping and provisioning, has to first find and buy such products (which, with the exception of vinegar-and-water, are not readily available at every grocery store here), and then apply the extra elbow grease they all seem to require.

Switch from sending Lunchables to school with Sally and Junior to packing brown-bag lunches? Definitely more environmentally friendly (and you can recycle those paper bags, too!), but someone has to make sure lunch-fixings are in the house, and someone has to pack lunches. (When I was a kid, past a certain age, I fixed my own lunch, but that's basically unreasonable below the age of about eight or so.)

Give up the second car in favour of public transportation, walking, or biking? An excellent idea, except that who's likely to find their daily commute prioritised, especially in an unequal relationship dynamic (all too common) where the male partner works full-time outside the home and the female partner works only part-time outside the home or not at all? (Judging by the experiences of friends of mine with children -- and this story by BitchPhD, I'd say it makes more practical sense to let unencumbered Hubby take the bus/train/subway/streetcar/bike and the errand-runner and schlepper of kids take the car...but that, too often, isn't what happens.)

Take up making "Slow Food"? Someone has to spend all those hours searching for the perfect raw ingredients and then spend even more hours preparing and cooking that stuff. One thing "Slow Food" proponents forget, I think, is that it's hard to savour a meal on which you've just put in four or five solid hours of back-breaking, persnicketty work.

Take up shopping locally? Watch the amount of time and effort you spend sourcing and buying food quadruple, and the amount of money you spend easily double, and that's assuming you're shopping around and bartering when you can. That has to come out of someone's time and effort budget...

All of which basically conspires to put straight, partnered women who want to be more environmentally conscious in a kind of a bind -- taking up a more environmentally friendly lifestyle will almost certainly result in a loss of quality of life for them, in terms of the amount of free time they have (cooking meals that take four hours of steady work to prepare does rather cut into the amount of time in the day one might have to, oh, say, be publicly involved, or visit friends, or catch up on one's reading) and the amount of work they have to do to maintain their households. It also denotationally represents a huge regression in terms of the lifestyle afforded to women -- a return to the primacy of the home-making role as key to the maintenance of the personal environment; a very Victorian "Angel in the House" concept, if you ask me.

This problem is, of course, a lot simpler for unpartnered people (of either sex) who have no children, because it is significantly more likely to be their own decision to do or not do (social pressure and acculturation notwithstanding) whatever they want/feel like/have time for/have money for. Add in a relationship where there's a power dynamic drawn across gender lines, and it gets significantly more complex in a big hurry. Add children to that relational power dynamic, and the complexity goes off the scale. I think this is a very important issue that more people should be talking about, especially people like me, and Rustin, who are proponents of sustainable/communitarian/off-the-grid living and so on. Regardless of the environmental benefits, a green lifestyle built on egregious gender inequality is as unsustainable in the long term as rampant consumerism (for reasons that have been adequately documented elsewhere, but not excluding financial dependence, backlash against social pressure, and out-of-control authoritarianism based on gender hierarchies).

I'm not entirely certain whether you could say of women as a class that taking this sort of self-ceding action is an exactly uncoerced choice, either -- women are socialised from birth to sacrifice their own well-being for that of others, and there is already a considerable amount of social pressure to do these things for the greater good, but why (I blame the patriarchy) must the vast bulk of the giving-up, doing-without, and labour-intensive alternative living happen because of the uncompensated labour of women?

As I've been saying for years, it's equally shitty to be a reactionary git because it's good for the trees as for any of the usual reasons...

Update: Fixed a thinko, and thought of something else, based on a discussion of wedding planning and Emily Post at Pandagon. Seems to me that a lot of these lifestyle traditionalists who prize things like home-cooked meals and houses cleaned without the convenience of modern chemicals forget that for the majority of people, particularly in the 19th Century but also in other historical periods, where there was a household to be run, there was also hired help. Lots of it. To maintain a similar quality of life to a modern household using late-Victorian technology, you'd need a staff of seven, at least. (This probably explains why women haven't really seen a decline in the amount of housework they do, or feel obligated to do, since the 1920s, despite the proliferation of labour-saving devices. The labour-saving devices have replaced the scullery maid, the housemaid, the butler, the lady's-maid, the housekeeper, and the cook, putting the onus on one person and technology to do what used to be the work of six. I don't even want to get into the idea of rising culturally-imposed standards facilitated by technology, but I will venture the opinion that modern people have more and cleaner clothes and larger and cleaner houses than their ancestors of 120 years ago.) The thing is, technology is not really going to operate your mop or vacuum cleaner for you, wet-dry Roombas notwithstanding. Nor is it going to load your dishwasher (if you have one), load and run your washing machine, and put away the laundry afterward. There are a lot of "collateral" tasks that wind up needing to be done...and redone...and done again after that.

The difference being, in the 19th Century, the heads of the household paid people to do these things. A labour-intensive eco-friendly lifestyle is an awful lot of for-free expended in service to the commonweal, possibly to the detriment of the person doing the work.

This is not, of course, to argue against being environmentally friendly, but on the other hand, I'd like to see people lose a little bit of the rockstar, back-to-the-land glitz, since it's a lot of hard work -- and if they're the male half of a heterosexual couple, they may not even be aware of how much work it really is (I blame the patriarchy), since a good deal of the work is basically invisible, uncompensated, "women's" work.

In short, be mindful, because your environmentally-conscious lifestyle might actually require you to have a consciousness-raising of an entirely other sort.

Journal: Slashdot just isn't "it" anymore 2

Journal by Safety Cap

My current gig is working on a major newspaper's website. Now that I have access to logs and stuff of an MSM, it is very interesting to see where the traffic comes from—and Slashdot doesn't really figure into the mix, even though we've been linked.

The three biggest sources of hits are Drudge, Digg and Fark, in that order, with Drudge being larger than #2 by an order of magnitude.

Lest you think this is a small-time site, the whole enchilada gets 6 million hits a month by users, sans bots.


Journal: Straw Poll: Do Technical Writers *Do* That? 8

Journal by Interrobang
In my current job, and in several of my past jobs, a significant portion of my time (say, as much as 20%) goes to finding inconsistencies/typos in the interface, noting and logging them, and tracking crashes and runtime errors and what I did to produce them. (In my current job, they've decided to use my method of error logging in their fix cases now!) An example of finding a typo in an interface would be noting and logging an incorrect apostrophe usage in a menu item ("Copy another suppliers' setup"); an example of finding an inconsistency would be noting that some fields refer to an item by one term and other fields refer to the same item by another term. (In the current application, we're having a persistent problem with switching the term "Dealer" over to "Ship To" and standardising the abbreviation for "Purchase Order" to "PO", not "P.O." or "Po." Incidentally, I've seen the abbreviation written both of those latter ways on the same tab in the application.)

Is this normal tech-writing kind of stuff (once a copy-editor, always a pain in the ass), or have I really and truly crossed over into testing (or what)?

Incidentally, this is why I think there should be a separate job title for someone who copy-edits the text in a UI.
Linux Business

Journal: Anyone looking for a Linux Sysadmin Job? 2

Journal by Interrobang
I know of two that are open right now. One is in the Washington DC area and requires an on-site presence, and the other is a telecommuting or on-site job located in southern California. There's also a Linux sysadmin internship available at the latter one, too.

If interested, leave me a comment with contact info, and I'll send the information.

Journal: Information Overload: I need a library catalogue system 11

Journal by Interrobang
I have realised that I am suffering from information overload, especially pertaining to the streetcar project, which encompasses 500+ electronic documents, several books, a collection of internet bookmarks, various photographs, two videocassettes, some e-mails, and various other stuff. I am starting to realise that I need a database of some sort. A program like Library Master is looking good -- if expensive -- since I also have a digital library of over 10K items, and a personal hardcopy library of over 2000 items I'd like to catalogue. Since it's getting to the point where one single throwaway reference in some obscure 40-year-old trade journal is actually a significant puzzle piece to me, I sort of need all this stuff indexed by keyword. Not all these things have indices! And I'm actually willing to do the scary amounts of data entry required. After all, once it's done, it's done, and the catchup work is negligible.

A sample information schema or record entry in my hypothetical database would look like this:

Keywords: Antitrust, EMD (Electro-Motive Division, Electro-Motive Company)
Source: "Is EMD a Monopoly?", Trains, June, 1961
Pages: 6, 11
Authors: Unknown
Notes: diesel locomotives, NY grand jury, indictment, repower, re-engine, Sherman Act, freight traffic, 12M tons/$211M waybills/1st 9 mos of 1959, market share, competitors out of business, percentage of diesels sold

I'm tempted to buy a copy of FileMaker Pro, although it's also rather expensive, since I know that one can easily set up fields in it that are string-searchable (I've used it before to manage a database of ~1000 address/contact information labels, back when I was doing targeted e-mails to schools in South Asia).

Is there anything comparable that I can get for significantly less money (like, optimally, none)? If I spend $250-400 of my research budget on software to manage my information, that's less money that I have for source materials.
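One zero-cost direction I've been idly poking at is Python's built-in sqlite3 module. Here's a minimal sketch, assuming nothing beyond a stock Python install -- the table and field names are just my hypothetical schema above, not anything from a real product:

```python
import sqlite3

# Zero-cost sketch: one flat table mirroring my hypothetical record schema.
conn = sqlite3.connect(":memory:")  # use a filename for a real catalogue
conn.execute("""
    CREATE TABLE records (
        keywords TEXT,
        source   TEXT,
        pages    TEXT,
        authors  TEXT,
        notes    TEXT
    )
""")
conn.execute(
    "INSERT INTO records VALUES (?, ?, ?, ?, ?)",
    ("Antitrust, EMD (Electro-Motive Division, Electro-Motive Company)",
     '"Is EMD a Monopoly?", Trains, June, 1961',
     "6, 11",
     "Unknown",
     "diesel locomotives, NY grand jury, indictment, Sherman Act"),
)
conn.commit()

# String-search the two fields I actually search by (keywords and notes):
rows = conn.execute(
    "SELECT source, pages FROM records "
    "WHERE keywords LIKE :kw OR notes LIKE :kw",
    {"kw": "%EMD%"},
).fetchall()
print(rows)
```

No Form Designer, granted, but a simple LIKE query gets me the keyword lookups for free, and SQLite's full-text-search extension could come later if plain substring matching ever gets too slow.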

Please don't suggest Base, which comes with OpenOffice. I've tried using it already, and, unless you can get the Form Designer walking and talking (I can't, and the documentation is beyond bad -- Hey, OO documentation team! Screenshots, maybe? Got a bit of a Comprehensible Only If Known Problem going on, too!!*), the fields aren't as customisable as I need. What I want is exactly what I've shown above, and I'm not willing to compromise on organization, since I know how I search for things (by important concepts or keywords), and I pretty much guarantee that would get me the results I want, 99.9% of the time.

* One of these days, once I no longer have seventeen other projects on the go, I'm going to sign on to the OO documentation team.

Journal: "And Where Would *We* Like to Go Today?" 4

Journal by Interrobang
Discourse Analytics Notes on Microsoft Design Guidelines

I've been here before.

A coworker e-mailed me a link to "How To Design a Great User Experience," which appears on the MSDN site. (They've helpfully abbreviated it in some places as "How To Design a Great UX." The Great and Powerful UX has Spoken! Pay no attention to the man behind the curtain.)

This stuff is, in general, really great (if basic) design advice.

If only Microsoft would take it themselves.

You could fisk* this entire article down to the subatomic particles, but I'm really only going to pick on seven of their 17 main points, the ones that offend me the most.

#1. Nail the basics.
Creeping featuritis. Do I really need to say any more?

#3. Don't be all things to all people.
Of course, Microsoft doesn't want you, the application developer, to be all things to all people, because it's too busy solidifying its cartel status in the OS and core application market -- that is, becoming all things to all people itself -- to want anyone to horn into its market share. You could almost say that Microsoft has a corporate philosophy of trying to be all things to all people. (Tie this in to my later point about the word "enable.")

#7. Make it just work.
This is truly a laudable goal, which wouldn't be quite so laughable if anything that came from Microsoft "just worked" outside of a user space roughly the size of a breadbox. "Just working" in the Microsoft paradigm often means "Do it our way, and you won't get hurt."

#10. Make it a pleasure to see.
This doubtless explains the Windows XP default interface, which only a colourblind person or a four-year-old with a huge collection of Duplo blocks could love. Actually, personally, I think this is a misstated goal entirely. I don't think an interface should be "a pleasure to see." I don't want to take pleasure in the interface; I want to not notice it 99% of the time. Granted, since a lot of that is habituation, the design goal for a UI should be Make it unobtrusive. (That's an awfully big word for this document, though.)

#11. Keep it simple.
It would be a bad design document if it didn't mention the "KISS Principle" at least once. However, in terms of user interface design, especially in applications (operating systems are an entirely other matter, kaff kaff), I think it's more important to achieve clarity than simplicity. Complex applications generally do not (and should not) have simple interfaces, but there's nothing saying that a complex interface can't be clear. (Terrible precision of language in this document, don't you think?) Perhaps I'll go into more detail about this sometime, as it's threatening to become an entire article on its own.

#15. Don't be annoying.
This one broke my irony meter and exploded the top off my head, all at the same time. Question for Microsoft: If you know this, why don't you practice it?! It took you bloody damn long enough to shoot the paperclip (and I notice there's still an "Office Assistant" on new copies of Word). Your paradigm is notorious for the "Oh! You said 'Do X,' you must mean 'Do Y'!" problem. In the name of making things easier for novice users -- how many completely novice users of Windows do you think there are left in the world?! -- you've implemented things that you've purposely made difficult to turn off and ignore (Automatic Updates, for example) that drive the rest of us nuts. Perhaps you might want to reconsider your design paradigm in terms of relative annoyingness?

#16. Reduce effort, knowledge, and thought.
While I agree with most of the actual concrete suggestions given as bullet points under this tip, the phrasing of the tip itself makes the hair stand up on the back of my neck. I am always in favour of increasing the user's knowledge and thought, although I advocate gentle learning curves wherever possible. Engaged, thoughtful users help you create better products in the long run, through constructive feedback. Taken as a semantic entity, this tip is just a little too close to the prevailing Microsoft dumb-down for comfort. Don't dumb down, smarten up!

A Further Terminological Note: In the next document in the series, called "Powerful and Simple," the writer explains "our definition of power" (in application terms) thusly:

An application is powerful when it enables its target users to realize their full potential efficiently. Thus, the ultimate measure of power is productivity, not the number of features. Different users need help in achieving their full potential in different ways.

Someone walked right into a connotational minefield here, mostly dealing with pop psychology and certain social situations. As I remarked earlier, I've been here before.

You can pretty much tell they were itching to use the word "empower" there, but it got scrapped in favour of the quasi-synonymic "enable." That particular word, however, conjures up some pretty scary connotations that are rather apropos to Microsoft itself, that of the codependent relationship. Microsoft depends on its user base, and they depend on it (not too many tech writing jobs using Linux or Mac these days), but the relationship is abusive at best. You see, they'll get you all sorts of cool toys (like that application-by-application volume control everyone's raving about in Vista), but smack you around with an ugly-ass UI full of DRM and phone-home features, and then get in your face about it when you complain -- "Whaddaya gonna do, switch? Yeah, go ahead and try it, bitch."

I'm also bristling a bit at the "need help achieving their full potential" phrasing, since the other place (besides Microsoft design manifestos) I hear that phrase a lot is in dealing with programmes and services for disabled people. Usually the kinds of programmes and services for disabled people where the staff ask their adult clients, "And how are we today?" *golf clap* Nicely done. I'd shudder to think what kind of a picture of Joe and Jane Averageuser this Microserf has in his/her head. And if that doesn't tell you practically everything you need to know about Microsoft's attitude towards UI design, as revealed subliminally (or sub rosa) by its discourse, I can't help you.

* Dismantle point-by-point, in the style of adversarial journalist Robert Fisk


Journal: [Python] What's with all the semicolons? 1

Journal by dthable

I'm looking at the /. journal extraction script written in Python and I keep seeing lots of lines that end in semicolons. Example:

        def getIndexItem(self):
                # create a summary of the entry
                digest = self.body[:self.DIGEST_LEN].replace('\n', '');
                digest = self.HTML_STRIP.sub('', digest);
                digest = digest.replace('>', '').replace('<', '');

When I learned about Python and started to work with it, I was told not to use semicolons. In fact, I thought they were illegal. Is the parser just throwing them away? Do they have any purpose other than making those C programmers happy?
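A quick experiment in the interpreter, for anyone else wondering (this assumes nothing beyond stock Python):

```python
# A trailing semicolon is just a statement separator followed by nothing,
# so the parser accepts it silently and it changes nothing at runtime.
digest = "some\nbody\ntext".replace('\n', '');  # legal, same as without ';'

# The only practical use: separating two statements on one line.
a = 1; b = 2

print(digest, a + b)
```

So yes: the semicolons in that script are harmless no-ops -- legal, but un-Pythonic, and style checkers will flag them.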
