User Journal

Journal Journal: Is Social Networking worthwhile? 3

There are plenty of online social networking sites - LinkedIn is the one I'm most familiar with. They seem to be designed around the notion of the Good Old Boys Club, the gentrified country clubs and the stratified societies of the Victorian era, where who you knew mattered more than what you knew.

But are they really so bad? So far, my experience has given me a resounding "maybe". People collect associations the way others collect baseball cards or antiques - to be looked at and prized, but not necessarily valued (prized and valued are not the same thing), and certainly not to be used. But this defeats the idea of social networking, which attempts to break down the walls and raise awareness. Well, that and make a handsome profit in the deal. Nothing wrong with making money, except when it's at the expense of what you are trying to achieve.

So why the "maybe", if my experience thus far has been largely negative? Because it has also been partially positive, and because I know perfectly well that "country club" attitudes can work for those who work them. The catch is that it has to be the right club and the right attitude. That matters, in such mindsets. It matters a lot.

So, I ask the question: Is there an online social networking site that has the "right" stuff?

User Journal

Journal Journal: Why is wordprocessing so primitive? 12

This is a serious question. I'm not talking about the complexity of the software, per se - if you stuffed any more macros or features into existing products, they'd undergo gravitational collapse. Rather, I'm talking about the whole notion on which word-processors, desktop publishing packages and even typesetting programs such as TeX are based.

What notion is that? That each and every type of writing is somehow magical, special and so utterly distinct from any other type of writing that special templates, special rules and special fonts are absolutely required.

Of course, anyone who has actually written anything in their entire life - from a grocery list onwards - knows that this is nonsense. We freely mix graphics, characters, designators, determinatives and other markings, from scribblings through to published texts. If word-processing is to have maximum usefulness, it must reflect how we actually process words, not some artificial constraint that reflects hardware limitations that ceased to exist about twenty years ago.

The simplest case is adorning the character with notation above it, below it, or as subscript or superscript to either the left or right. With almost no exceptions, this adornment will consist of one or more symbols that already exist in the font you are using. Having one special symbol for every permutation you feel like using is a waste of resources and limits you to the pitiful handful of permutations programmed in.
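
Just to make the idea concrete (the names below are my own invention, not any existing word-processor's API), an adorned character could be nothing more than a base symbol plus a list of already-existing symbols, each with a placement, a scale and an offset:

  from dataclasses import dataclass, field

  # Where an adornment can sit relative to the base symbol's box.
  PLACEMENTS = {"above", "below", "sub_left", "sub_right", "super_left", "super_right"}

  @dataclass
  class Adornment:
      symbol: str          # an ordinary symbol already present in the font
      placement: str       # one of PLACEMENTS
      scale: float = 0.6   # adornments are usually drawn smaller than the base
      dx: float = 0.0      # fine-tuning offsets, in fractions of an em
      dy: float = 0.0

  @dataclass
  class ComposedChar:
      base: str
      adornments: list = field(default_factory=list)

      def adorn(self, symbol, placement, **kw):
          assert placement in PLACEMENTS
          self.adornments.append(Adornment(symbol, placement, **kw))
          return self

  # An 'x' with a bar above it and a subscript 'i' to its lower right,
  # built entirely out of symbols the font already has.
  x_bar_sub_i = ComposedChar("x").adorn("-", "above").adorn("i", "sub_right")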

The next simplest case is any alphabet derived from the Phoenician alphabet (which includes all the Roman, Cyrillic and even Greek alphabets). So long as the program knows the language you want to work in, the translation rules are trivial. The German eszett (ß) is merely a character that replaces a double s when typing in that language. A simple lookup table - hardly a difficult problem.
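
A sketch of that lookup table, in Python for the sake of argument. Real German orthography is fussier than a blanket ss-to-ß rule, so treat the table entries as an illustration of the mechanism rather than of the language:

  import re

  # One substitution table per language; each entry is (pattern, replacement).
  SUBSTITUTIONS = {
      "de": [(re.compile(r"ss"), "ß")],
      "en": [],
  }

  def apply_language_rules(text, lang):
      for pattern, replacement in SUBSTITUTIONS.get(lang, []):
          text = pattern.sub(replacement, text)
      return text

  print(apply_language_rules("gross", "de"))   # -> "groß"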

Iconographic and Ideographic languages are just an extension to this. You specify a source language and a destination language, and provided you have such a mapping, one word gets substituted with one symbol. You could leave the text underneath and use it as a collection of filenames for grabbing the images, if you wanted to make it easy to edit and easy to program. As before, you already have all the symbols you're ever likely to want to overlay, so you're not talking about having every possible image in a distinct file. Anything not provided can be synthesized.
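
A rough sketch of that word-to-symbol substitution, keeping the source text alongside the glyph references so the document stays easy to edit. The mapping and the filenames are invented for illustration:

  # Hypothetical English-to-ideograph mapping: each source word maps to the
  # filename of a glyph image; the source word itself is kept as the key.
  WORD_TO_GLYPH = {
      "mountain": "glyphs/mountain.svg",
      "river": "glyphs/river.svg",
  }

  def substitute_words(text, mapping):
      # Unmapped words get None and fall back to being rendered as plain text.
      return [(word, mapping.get(word.lower())) for word in text.split()]

  print(substitute_words("mountain meets river", WORD_TO_GLYPH))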

Other languages can be more of a bugbear, but only marginally. A historical writing style like Cuneiform requires two sizes of line, two sizes of circle, a wedge shape and a half-moon shape. Everything else is a placement problem and can be handled with a combination of lookup tables, rotations and offsets. Computationally, this is trivial stuff.
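
Again purely as a sketch: a sign in such a system is nothing more than a placement table over a handful of primitives. The strokes below are invented, not a real cuneiform sign:

  # The full inventory of primitives the renderer has to know how to draw.
  PRIMITIVES = {"line_small", "line_large", "circle_small", "circle_large",
                "wedge", "half_moon"}

  # A sign is just a placement table: (primitive, rotation in degrees,
  # x offset, y offset, scale).
  EXAMPLE_SIGN = [
      ("wedge",      0, 0.0, 0.0, 1.0),
      ("wedge",     90, 0.4, 0.0, 1.0),
      ("line_small", 45, 0.2, 0.3, 0.8),
  ]

  def place(primitive, rotation, dx, dy, scale):
      # Stand-in for the real drawing call: describe the transform the
      # renderer would apply (rotate and scale the primitive, then offset it).
      assert primitive in PRIMITIVES
      return {"draw": primitive, "rotate_deg": rotation, "scale": scale,
              "offset": (dx, dy)}

  rendered = [place(*stroke) for stroke in EXAMPLE_SIGN]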

If the underlying engine, then, has a concept of overlaying characters with different offsets and scales, rotating characters, using lookup tables on regular expressions, and doing simple substitutions as needed, you have an engine that can do all of the atomic operations needed for word-processing or desktop publishing.

This method has been used countless times in the past, but past computers didn't have the horsepower to do a very good job of it. Word-processing has also been stifled in general by the idea that it's a glorified typewriter and that it operates on the character as the atomic unit. What I'm talking about is a fully compositional system. Each end-result character may be produced by a single source symbol, but that would be entirely by chance, as would any connection between any given source symbol and what would be considered a character by the user.

If it's so good, why isn't it used now? Because it's slow. Composing every single character from fundamental components is not a simple process. Because it's not totally repeatable. Two nominally identical characters could potentially differ, because floating-point arithmetic is inexact by nature - which is why you don't use equality comparisons much in floating-point code. Because it puts a crimp on the font market. Most fonts are simple derivatives of a basic font, and the whole idea of composition is that simple derivatives are nothing more than a lookup table or macro.
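
To make the floating-point point concrete (this is the standard illustration, nothing specific to typesetting):

  import math

  a = 0.1 + 0.2
  b = 0.3
  print(a == b)              # False - a is actually 0.30000000000000004
  print(math.isclose(a, b))  # True - compare within a tolerance instead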

If it's all that, then why want it? Because it makes writing any ancient or modern alphabet trivial, because you can do more in 20 fonts than you can do on existing systems with 2,000, and because it would bugger up the whole Unicode system which can't correctly handle the systems it is currently trying to represent. (The concept behind Unicode is good, but the implementation is a disaster. It needs replacing, but it won't be until someone has a provably superior method - which is the correct approach. It just means that a superior method needs to be found.)

Networking

Journal Journal: Anyone here on LinkedIn? (or Xing) 8

We went through this a few years back when it first started, and I'm already linked to about 20 or 30 of you, but I thought I'd check if there's anyone I missed. You should be able to figure out my "real" email address from my slashdot profile.

User Journal

Journal Journal: In Other News For Nerds

There is a new science/geek website out there, called Null Hypothesis, that covers highly unlikely but totally real science. The headline story at the moment is about the sounds of protein molecules. The BBC's coverage of this attempt to out-geek the geeks reports that there are only 60,000 readers - something like a hundredth of what I believe Slashdot's readership is. Even if nobody actually joins the site, it is our clear moral duty to our fellow nerds (and an interesting science experiment they can report on) to attempt to melt the server under a severe Slashdotting.

Biotech

Journal Journal: Joke of the day: A riddle 5

Q. What's orange and sounds like a parrot?

A. A carrot!

(Try it out on a seven-year-old)

Linuxcare

Journal Journal: turg's rule of computing #42 (and more dumb questions) 8

turg's rule of computing #42: Don't partition your hard drive at one o'clock in the morning.

(sigh)

So I thought I'd make an Ubuntu partition on the new laptop's hard drive -- just to play around with it a bit.

So I pop in the CD and boot it up, choose install and start answering the questionnaire. The question about what size to make the partition is confusing (i.e. is it asking what size to resize the existing partition to, or what size to make the new partition?), and after I answer it I realize I've answered it wrong, so I hit the back button (got an "Are you sure?" and said yes) and gave the right answer. I didn't realize that it had started partitioning immediately -- I thought it wasn't doing anything until I finished the whole questionnaire.

So what I wanted was 45 GB for the WinXP partition and 15 for Ubuntu.

What I got is:
-Windows thinks it has a 15 GB hard disk
-Ubuntu thinks it has a 60 GB hard disk with a 15 GB Ubuntu partition and a 45 GB WinXP partition -- except it shows the WinXP partition as 5 GB used and 10 GB available.

Presumably there's some tool I can use in Ubuntu to fix this? I haven't done anything more with it so I haven't tried connecting Ubuntu to the intarwebs yet.

Also, how do I tell GRUB to make WinXP the default?
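
If it's GRUB legacy (which I believe is what Ubuntu was shipping at the time), my guess is it's just a matter of editing /boot/grub/menu.lst: count the title stanzas from zero and point the default line at the Windows one. Something like this, though the entry number and stanza depend on the actual file:

  # Sketch of /boot/grub/menu.lst (GRUB legacy) - illustrative only.
  # "default" counts the title stanzas from zero, so set it to whatever
  # position the Windows stanza occupies in the real file.
  default 4

  title Microsoft Windows XP Professional
  root (hd0,0)
  makeactive
  chainloader +1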

Though, now that I'm thinking about it, I have since decided to make the laptop the primary home of the music collection so maybe I just want to blow away the Ubuntu partition and give the whole 60 GB back to WinXP. I can find someplace else to play with Linux on the desktop.

User Journal

Journal Journal: Software announcements (or: how to irritate JD) 3

Yes, back to the grumbling again. I do not enjoy this. If I could write about stuff I liked, I would vastly prefer it. However, that will have to wait until there's stuff I like happening.

This rant has to do with software announcements. I covered this to a degree in a previous journal entry on the secrecy of some open source projects. This time, I will be more concerned with neglect (the known version is truly ancient, compared to the published version), quality (compare the Slashdot description with the Freshmeat one for the same piece of software) and reaction.

Neglect is a big one. I own 113 project records on Freshmeat and have bookmarked an additional 303. Why so many? 303 bookmarks is a lot - can't I just look to see when the project updates are announced? I would, if they ever were. The bookmarks are reminders of correctly-assigned records that the authors can't be bothered to maintain. If they get updated at all, it's because I updated them. With the sheer volume of projects involved, you're damn right that I sometimes think some of the bigger Open Source consortia that develop these packages should be paying me for my time. Globus is no small concern; it's a friggin' international collaboration of multinational corporations. Why are they depending on volunteers to take on the unpaid, thankless, tedious task of fixing their neglect?

Ok, what about those 113? How many do you think I actually created? I'd say maybe half; the others I picked up, usually because the owner no longer existed. In a few cases, the records were so stale and decayed that the last update predated the owner field, yet the software has carried on just fine. Again, that's not good. At best, it means that reports of inaccuracies or other problems will go nowhere - there's nobody to send them to.

Freshmeat is not the only software inventory out there, although it's the only one I make any effort to assist. I've assisted a few paid sites as a consultant, and frankly the stagnation there was even worse. It would be so easy to spend every waking moment just bringing these databases up to speed. We're talking extreme neglect not in the hundreds of records but in the tens of thousands. These are professional sites, paid by customers who want accurate information. They aren't getting it. What they get is something that could be anywhere from a few days to a few years behind reality. Frankly, those customers would be infinitely better off buying a giant disk array and using Harvest to index every site that Google reports has at least one page with the word "download" on it. It would work out cheaper very quickly, and you can be sure of how fresh the information is.

Ok, what about quality? If there's no freshness, then quality is automatically suspect, as projects are evolving entities. They're not fixed for all time, except in rare cases. Ignoring that, though, how accurate are announcements as a rule? Not very. The quality of information is generally fairly poor - sometimes because the person providing it doesn't really understand what is being communicated ("Chinese Whispers") and sometimes because the information simply doesn't exist and has to be inferred from the meager clues that have been left. Sherlock Holmes may be a great detective, but he is also a work of fiction. And if anyone did have those skills, do you think they'd be spending them on correcting project records? Where it's good, it can be truly excellent, but since it would also take someone of the power of Holmes to tell you when the information is good, it's not that useful. If the only way to tell is if you already know the answer, you have no need to be able to tell.

What about reaction? Well, let me put it this way. Atlas' official version is 3.7.29. Fedora Core 7 beta 1 uses version 3.6.0. The official version of Geant is 8.2 patch-level 1, but Fedora Core 7 beta 1 uses version 3.21. I've done some experiments with my own Open Source projects and have found that updates and patches follow the laws of Brownian motion. It is simply not possible to predict if/when/how updates will ever occur within a single distribution, but across all variations of all distributions, the net rate of pickup and refinement is more-or-less constant. This is, of course, completely useless to most users - even those with subatomic vector plotters.

Overall, it's a nightmare to find what you want, a bigger nightmare to determine if it is actually what you want, and a total and utter diabolical nightmare from the 666th plane of hell to determine if what is actually available in any way reflects what it was that you thought you were getting.

User Journal

Journal Journal: Are distros worth the headaches? 6

One of my (oft repeated) complaints about standard distributions such as Gentoo, Debian or Fedora Core, is that I slaughter their package managers very quickly. I don't know if it's the combination of packages, the number of packages, the phase of the moon, or what, but I have yet to get even three months without having to do some serious manual remodelling of the package database to keep things going. By "keep things going", I literally mean just that. I have routinely pushed Gentoo (by doing nothing more than enabling some standard options and adding a few USE flags) to the point where it is completely incapable of building so much as a "Hello World" program, and have reduced Fedora Core to tears. That this is even possible on a modern distribution is shocking. Half the reason for moving away from the SLS and Slackware models is to eliminate conflicts and interdependency issues. Otherwise, there is zero advantage in an RPM over a binary tarfile. If anything, the tarfile has fewer overheads.

Next on my list of things to savagely maul is the content of distributions. People complain about distributions being too big, but that's because they're not organized. In the SLS days, if you didn't want a certain set of packages, you didn't download that set. It was really that simple. Slackware is still that way today and it's a good system. If Fedora Core was the baseline system and nothing more, it would take one CD, not one DVD. If every trove category took one or two more CDs each, you could very easily pick and choose the sets that applied to your personal needs, rather than some totally generic set.

My mood is not helped by the fact that my Freshmeat account shows me to have bookmarked close to three hundred fairly common programs that (glancing at their records) appear to be extremely popular, yet which do not exist on any of the three distributions I typically use. This is not good. Three hundred obscure programs I could understand. Three hundred extremely recent programs I could also understand - nobody would have had time to add them to the package collection. But some of these are almost as old as Freshmeat itself. In my books, that is more than enough time.

And what of the packages I have bookmarked that are in the distros? The distros can sometimes be many years out-of-date. When dependencies are often as tightly-coupled to particular versions as they generally are, a few weeks can be a long time. Four to five years is just not acceptable. In this line of work, four to five years is two entire generations of machine, an almost total re-write of the OS and possibly an entire iteration of the programming language. Nobody can seriously believe that letting a package stagnate that long is remotely sensible, can they?

I'll finish up with my favorite gripe - tuning - but this time I'm going to attack kernel tuning. There almost isn't any. Linux supports all kinds of mechanisms for auto-tuning - either built-in or as third-party patches. And if you look at Fedora Core's SRPM for the kernel, it becomes very very obvious almost immediately that those guys are not afraid of patches or of playing with the configuration file. So why do I end up invariably adding patches to the set for network and process tuning, and re-crafting the config file to eliminate impossible options, debug/trace code that should absolutely never be enabled on a production system (and should be reserved solely for the debug kernels they also provide), and clean up stuff that they could just as easily have probed for? (lspci isn't there as art deco. If a roll-your-own-kernel script isn't going to make use of the system information the kernel provides, what the hell is?)
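
As a sketch of what I mean, here's a throwaway script that asks lspci what hardware is actually present and spits out the corresponding kernel config options. The PCI-ID-to-config table is a made-up fragment, purely to illustrate the idea:

  import re
  import subprocess

  # Illustrative fragment of a PCI-ID-to-kernel-config lookup table.
  PCI_TO_CONFIG = {
      "8086:100e": ["CONFIG_E1000=m"],   # an Intel gigabit NIC, for example
      "10ec:8139": ["CONFIG_8139TOO=m"], # a Realtek NIC, for example
  }

  def probed_config_fragment():
      # "lspci -n" prints numeric vendor:device IDs; pick them out and map
      # each one to the config options the lookup table says it needs.
      out = subprocess.run(["lspci", "-n"], capture_output=True, text=True).stdout
      ids = set(re.findall(r"\b([0-9a-f]{4}:[0-9a-f]{4})\b", out.lower()))
      options = {opt for pci in ids for opt in PCI_TO_CONFIG.get(pci, [])}
      return "\n".join(sorted(options))

  if __name__ == "__main__":
      print(probed_config_fragment())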

User Journal

Journal Journal: Dumb WinXP questions 17

So I've got my new (that is, new to me -- refurbished lease return) laptop.

Last time I was in this position, the first thing I did was go online to download all the Windows updates. Even with the Windows Firewall turned on, I had several worms, etc., by the time the updates were complete. This time I want to avoid this. Though that was with the original WinXP (home) and this is SP2 (pro).

So:

1) What's the simplest procedure for getting the updates safely?

2) It's possible that this machine already has the updates. How do I find out if it does?

The Internet

Journal Journal: The fog of tech - web 2.0

Been a while. I wrote this quick article for Martin; it can be found at 2007FEB020931 & 2007FEB031726 on my flickr site, bootload, and is repeated here for anyone who's interested. The title is a play on the term 'fog of war'. As much as we try, we just cannot get a full grasp of what the future, or the near tech future, holds. Just outside our reach is something we just haven't foreseen ...

The fog of tech - web 2.0

'... sorry for offtopic i just don't know how all this 2.0 stuff works. ...'

Me neither. But I've been thinking for a while about what the good bits to look at are. Web 2.0 is pretty much a collective term coined by Tim O'Reilly & Dale Dougherty in a brainstorming session. You can read the original article, What Is Web 2.0 [1]. In that session, O'Reilly & Dougherty looked at the differences between the old Web companies & the new ones popping up after the crash in 2001, and summarised them into a diagram called the Web2MemeMap, which became the template for what constitutes Web 2.0.

But is this really what is happening? Web 2 is, after all, a made-up term describing a set of observations by some book & information publishers. Well, yes and no. I'll start with the no, because it highlights some shortcomings of the Web2 description.

No

Web 2 these days is a money-making machine for information companies. Protected, loosely defined, describing whole slabs of technologies & methodologies that companies have used and can use to build what is essentially just another iteration of first-round web development. This is why it's so difficult to know what Web 2.0 really stands for.

The Web 2 meme also ignores visionaries like Dave Winer, who created the technology for, and demonstrated, blogging and feeds with RSS. What you have here is top-down big business defining its view of the world to better its profit line (O'Reilly as an organisation is constantly looking out for emerging technology to sell to the Alpha geeks).

More than anything, Web2.0 is a 'top down' view of what is happening, complete with self-interest. Want to know what Web2.0 is? Well, come to the Web2 Summit, where we tell you what Web 2.0 really is (with the attendant costs) - as opposed to hacking whatever technology you find after looking at interesting problems & ideas, then organising a really cheap venue where anybody can just turn up, as Winer recently demonstrated. The 'Big Business vs. Hippy' approach is good in a way, as it serves the fragmented market at different levels.

Yes

But surprisingly, the original diagram really does describe some tech trends that were simply not as well exploited before the 2000 crash. For example, in no particular order:

* data: rss, atom and apis are available for a lot of sites

* rights to remix: with the Creative Commons license(s)

* components: using delicious to grab links, images from flickr, rss from your favourite news site

* perpetual beta: on the web nothing is ever finished & always in incremental development mode

* long tail: markets are fragmented into small groups of interested parties (e.g. itunes, amazon)

* hackability: with an api and/or rss you can extract data with a bit of code and rebuild it into something else on your own site (see the sketch after this list)

* mashing: with data via rss or an api, rebuild something unique, like crime over google maps (chicagocrime.org)

* granular access: extract data on delicious by tag, by person and by topic

* emergent behaviour: small companies or individuals building things & experimenting instead of following a top-down big business plan
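
To show how little code the 'hackability' bullet above actually needs, here's a minimal sketch using nothing but the Python standard library. The feed URL is a placeholder - substitute any RSS 2.0 feed you actually read:

  import urllib.request
  import xml.etree.ElementTree as ET

  FEED_URL = "http://example.org/feed.rss"

  def fetch_items(url):
      with urllib.request.urlopen(url) as response:
          tree = ET.parse(response)
      # RSS 2.0 keeps its entries under channel/item, each with title and link.
      for item in tree.findall("./channel/item"):
          yield item.findtext("title"), item.findtext("link")

  # Rebuild the feed into something of your own - say, a plain html list.
  for title, link in fetch_items(FEED_URL):
      print(f'<li><a href="{link}">{title}</a></li>')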

So if you have read this far and looked at the links, I hope you can see Web2 a bit more clearly. But where are the good bits to look at?

Data

The area I'm increasingly focusing on is Data. For instance: who owns you? If you don't own your own data, or at least a copy of it, who does? To me it's data that really matters. Don't be distracted too much by the AJAX stuff. It's important, but data is the key. Applications may change, but your important data is what matters the most. This is why RSS and service APIs matter.

You enter all your data into these fancy websites, only to repeat the process over & over again. In the poorly designed or deliberately hobbled sites (roach motels, like Google's reversal of the SOAP search API), you can never get your data back. Flickr, Delicious and Stickit are examples of sites that honour your data, allowing you to extract it, reuse it and re-program the application via code, not the keyboard.

One last concept that's not on the Web2MemeMap but is explained in What Is Web 2.0 is 'software written above the level of the single device'. The idea can be found in section 6 of Tim's "What Is Web 2.0" and expands on Dave Stutz's advice to Microsoft in his article, Advice to Microsoft regarding commodity software. Anyone who harnesses the Web as a computing platform can expect considerable financial gain through control of that platform. This is useful to understand, as key bits of infrastructure (links via Delicious, images via Flickr, etc.) are snapped up to build that API.

Takeaways

* web 2.0 is a top-down empirical description of new business & user models & technology, created by O'Reilly for O'Reilly & information consumers

* the descriptions of the technology, and of the models of the users & businesses who create them, have a basis in reality

* web 2 is top-down. there is a bottom-up description, from other developers rather than the 'anointed ones', that tends to get drowned out or under-reported

* web 2 as an idea is just as much an ideal as a description of a new approach

With that in mind, a caution. Like all misunderstood or new technology, Web2 is hyped a bit too seriously as a silver bullet for all problems, when it is really just a small piece of an evolving technology.

Reference

[1] Its full title is a bit of a mouthful: What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software.

Classic Games (Games)

Journal Journal: Best cafe ever: Gluten-Free + Free WiFi 2

I just discovered a relatively new cafe in my neighbourhood. I haven't eaten there yet but it might be my favourite place already: they have free wifi and offer a gluten-free menu.

Bug

Journal Journal: It was a year of new beginnings 6

As in: new transmission, new appliances, new furnace. About eleven grand on those (unplanned) expenses alone.

Oh, and we had another kid this year too.

Anyway, I'm really just fishing for a topical way to get into the furnace story.

Let me take you back in time a week or two. It's Wednesday the 20th. We're planning to leave for the in-laws on the morning of the 21st. Last night we called for someone to check out the furnace -- it's been cutting out recently, but will work fine again if we shut off the electricity to the furnace and turn it back on again* (it's a gas furnace with electronic ignition). So the guy shows up at 4 pm, about 15 minutes after I get home from doing the last of the Christmas shopping. He takes stuff apart. He calls me down to take a look.

There are cracks in the wall of one of the chambers where the flames are. This is a bad thing. I never really paid attention to this before but some previous owner (16 years ago) installed an 80,000 BTU furnace in our tiny house. The furnace guy figures, based on the assumption that we're heating about 1000 square feet (it's more like 800), that we only need about 35,000 BTU of output. So anyway, the ducts weren't taking enough heat away from the furnace and things cracked from overheating.

Then he says "I have to shut you down." As in he is legally required to shut off the gas to our house and seal it with a big red tag that says we can't have the gas back on until the equipment is certified to be safe. And a big red tag for the furnace too, indicating that it's unsafe and not to be used.

They can come and do the work the next day (Thursday) if we say so right now, but otherwise it'll have to be after Christmas. He calls around suppliers to see what furnaces are available. We could have kept it under three grand (all in) but figured we might as well go for a high-efficiency model and a two-stage fan.

So there's going to be 24 hours without heat and hot water. My wife quickly packs, and she and the kids leave immediately, a day early. I'll take the train when the furnace is done.

After that, it went smoothly. The biggest part of the job turned out to be the vent. The high-efficiency furnace is vented through a 3-inch pipe going through the wall, rather than through the chimney like the old mid-efficiency one. So they start drilling a big hole through our 100-year-old stone foundation wall. After an hour or two, one of the guys comes to me and asks if I know how thick the wall is because they've drilled 12 inches and there's still no sign of breaking through.

And the rest of the day (starting at 8 am and they were done around 4) they're in our tiny basement (just a crawlspace really) with three guys. There's about enough space for one guy to crouch (not stand up) on each of the three sides of the furnace that aren't against the wall -- and that's all the space there is down there.

Remind me to tell you the story of the appliances (a.k.a "Why Home Depot sucks") some day.

---
*Because doing this and changing the filter are the only things I'm qualified to do with regards to the furnace.

User Journal

Journal Journal: Highly Secret Open Source Projects 7

Nothing in this world will ever be more confusing than projects that are:
  1. Released as Open Source on public web sites
  2. Bragged about extensively on those websites - especially their Open Sourceness
  3. Never to be mentioned or referenced in any way, shape or form by anyone else

Pardon me for my obvious ignorance of the ways of the world, but it would seem obvious enough, even to the most demented, that once something has been posted on a public site, other people WILL find out about it - from search engines if by no other means.

It would also appear that secrecy and Open Source are mutually exclusive - if you publish the source under a GPL or BSD license, it's rather too late to start whining if others then start poking around the code. I'm not talking about people distributing closed-source and having people try to reverse-engineer or reverse-compile it. That's different. I'm strictly talking about code where the source is open to everyone, where the license is explicitly stated, and the license is - beyond all doubt, reasonable or otherwise - one of the standard Open Source licenses that we all know and love/hate/have-a-strong-opinion-on.

So what gives? Why do we have cases of individuals or organizations who obviously want to take advantage of the Open Source model but who do everything in their power to violate that same model (and possibly even their own licensing scheme)?

I'll offer a suggestion, and those guilty of the above offense will likely take even greater offense at this. I believe it is because Open Source has become the "in thing". It's "hip" to release something Open Source. It's fashionable. It's highly desirable. In some circles, it might even be considered sexy. So what's wrong with any of that? When these are the only reasons, there is a LOT wrong with it. When Open Source ceases to be open and has even less to do with the source but is solely used as a substitute for some perceived genital defect, it ceases to be Open Source. I'm not sure what you'd call it, but it has nothing to do with any community that has even the vaguest understanding of either openness or freedom.

So what should these people do? I'm not going to say that they need to do anything at all, other than be honest. If these programs are "invite only" or to be circulated only amongst friends, then get them the hell away from the public part of the web and use a .htaccess file to restrict who can get them. Or put them on a private FTP site where you can control who has the password. Or only e-mail them to people you like.
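
For the .htaccess route, a minimal sketch - this assumes Apache, and the paths and names are placeholders:

  # .htaccess sketch - assumes Apache; file locations and names are placeholders.
  # Create the password file with something like: htpasswd -c /home/you/.htpasswd afriend
  AuthType Basic
  AuthName "Invite only"
  AuthUserFile /home/you/.htpasswd
  Require valid-user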

Why? There's one very good reason why. If you advertise something as Open Source, offer it as Open Source, post it as Open Source, license it as Open Source, but deny the entirety of Open Source civilization any rights that are explicitly or implicitly granted by doing so, purely because they're not your type, they aren't the ones in the wrong. If you offer someone a hamburger but then give them a slice of pizza, they aren't being ungrateful swines if they tell you that's not what you offered.

This particular resentment has been brewing in me for some time, but some projects on Freshmeat recently got closed to editing and then willfully broken by the software developers concerned. Why? So that nobody would bother them. Get a few thousand extra eager eyes looking at the code and you needn't worry about being bothered, although you might have to start screening out all the screaming F/OSS fans who want a glimpse of the next megastar.

I guess I'm posting this today, right now, in a time that has traditionally (well, since the time of the Saturn cults in ancient Rome, at least) been associated with sharing far more than any other time, because the Grinch is not merely alive, well and extremely evil, he's now burning the houses down as he leaves.

Bug

Journal Journal: (1) Weird news story OTD, (2) Help! I'm surrounded! 4

First off: Dolphins saved in the arms of a very tall man

The world's tallest man helped save two dolphins in China by reaching into their stomachs and pulling out harmful plastic they had swallowed, state media said Thursday.

In other news: my wife, both kids, and the cat are all sick. I'm the last one standing. Help. Is there anyone out there? Can anyone hear me?

Hm. I think my throat is a little sore.

I've got a lot of eBay stuff to ship out this week. Can people get strep throat through the mail?
