Comment Re:The workflow the publisher uses is key (Score 1) 271

It's interesting that a technical writer wouldn't bother to learn anything about his toolkit. Word doesn't do anything to backticks, but it will, by default, change straight quotes to curly quotes.

Perhaps you misunderstand. I wasn't using Word. It wasn't part of my "toolkit". I didn't say that the backticks were changed to single quotes by some default behaviour. Nevertheless, there they are in the published book.

Comment The workflow the publisher uses is key (Score 2, Informative) 271

I published a book with Sams. Never again. There were two main problems. The first was that they published my material as two books and initially only paid me for the first (until I pointed out that they had 'forgotten' to pay me). The second was that their publication process is built entirely on MS Word. That's very common in publishing, but in my case the result was that quite a bit of the content got screwed up: the shell commands, for example, had backticks turned into single quotes. Gah. So I won't use Sams or any of the other imprints of Macmillan Computer Publishing again (confusingly, not the same publisher as Macmillan). Another thing that would give me pause is the number of completed pages per day they expect. I don't believe an individual author could come within a factor of 3 of that and maintain any level of quality.
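If you do end up in a Word pipeline again, it's worth scanning the proofs for this class of damage before anything goes to print. A minimal sketch in Python; the mapping covers the usual "smart" substitutions and is easy to extend:

    # Flag characters that word processors commonly substitute into code
    # listings. The mapping covers the usual "smart" punctuation; extend
    # it for whatever your own manuscript uses.
    SMART_PUNCTUATION = {
        "\u2018": "'",    # left single quotation mark
        "\u2019": "'",    # right single quotation mark
        "\u201c": '"',    # left double quotation mark
        "\u201d": '"',    # right double quotation mark
    }

    def find_damage(text):
        """Yield (line, column, char) for each suspect character."""
        for lineno, line in enumerate(text.splitlines(), start=1):
            for col, ch in enumerate(line, start=1):
                if ch in SMART_PUNCTUATION:
                    yield lineno, col, ch

    sample = "for f in \u2018ls *.txt\u2019; do wc -l $f; done"
    for lineno, col, ch in find_damage(sample):
        print("line %d, col %d: U+%04X" % (lineno, col, ord(ch)))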

Now for the good news. Next time around I would engage with any publisher whose workflow either produces camera-ready copy (e.g. with LaTeX) or uses something like DocBook -- essentially, any workflow that limits the opportunity for people who don't understand those funny symbols to accidentally mess them up (in my case I don't know whether it was the people or Word that mangled my shell code). I'd talk to O'Reilly and Addison-Wesley first, though there are other publishers who are equally accommodating.

OTOH, I suggest you take your existing computer science bookshelf, give each book a score out of, say, 5, and sort the books by score. That should give you a shortlist of publishers to talk to; the toy script below shows the idea.
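For what it's worth, the exercise is trivial to automate. A toy sketch in Python; the shelf data is a made-up placeholder, fill in your own books:

    # Rank publishers by the average score of the books you own from them.
    from collections import defaultdict

    shelf = [
        ("A Shell Scripting Book", "Publisher A", 5),
        ("A Networking Book",      "Publisher A", 4),
        ("A Rushed-to-Press Book", "Publisher B", 1),
        ("A Compiler Book",        "Publisher C", 4),
    ]

    scores = defaultdict(list)
    for title, publisher, score in shelf:
        scores[publisher].append(score)

    for publisher, vals in sorted(scores.items(),
                                  key=lambda kv: sum(kv[1]) / len(kv[1]),
                                  reverse=True):
        print("%-12s %.1f" % (publisher, sum(vals) / len(vals)))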

Comment Re:Control freak (Score 2) 543

RMS holds fast to a number of principles about software freedom and nomenclature. He is not known for compromising much (it does happen, but most often in pursuit of a more important goal). These aspects of RMS's personality, and the fact that he is apparently not motivated by many of the things that motivate other people (money, for example), make other people uncomfortable. They don't know how to deal with him, and they don't know what tools to use when negotiating with him (to negotiate, you have to control something the other party wants).

The point many people miss is that if RMS were easily given to compromise, or were not so very determined, he would have given up long ago. Don't forget that the GNU Manifesto was published in 1985. For a long time RMS was a voice crying in the wilderness. Most people would not give up a paying job to work on this "software freedom" thing when nobody else had even heard the phrase. To keep going when everybody else thinks free software is nuts takes a special kind of single-mindedness and determination.

Why anybody thinks RMS would stop being single-minded now is beyond me.

Comment Re:Ignoring the real problem. (Score 2, Informative) 271

And the answer is incoming links from around 86,000 pages, according to Google (link:domain.name). A lot of them are created internally, passing links from malware site to malware site, but the majority come from sites with PHP forms that add user posts to the site's pages. A number of months ago I found my site's contact forms were sending me a lot of garbage emails absolutely stuffed with URLs, and I wondered why anyone would bother, since I'm not going to visit the sites. Anyway, the cure was to only process forms containing no more than a few URLs. That stopped the junk hitting my inbox; it hasn't stopped the automated posting, but the forms are not processed and I don't get them any more. When I examined the links to the malware site, I found PHP-posted user comments packed with links, just like my emails had been; the difference was that these were posted, published, and being crawled. Because of these links, a site less than 4 weeks old is ranked highly on the strength of its inbound links -- and that's why I got to watch a display of XP-style virus and malware 'scanning'.
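(As an aside, the URL-count filter described above fits in a few lines. A minimal sketch in Python; the threshold of 2 is an assumption, tune it to what legitimate visitors actually send:)

    # Refuse to process a contact-form submission carrying more than a
    # few URLs; the threshold is an assumption to tune for your own site.
    import re

    MAX_URLS = 2
    URL_PATTERN = re.compile(r"https?://", re.IGNORECASE)

    def looks_like_spam(message):
        """True when the message contains more URLs than we allow."""
        return len(URL_PATTERN.findall(message)) > MAX_URLS

    print(looks_like_spam("See http://a.example and http://b.example"))  # False
    print(looks_like_spam(" ".join("http://spam%d.example" % i
                                   for i in range(9))))                  # True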

The general solution to this problem is to modify your software so that every link in a blog comment is served with rel="nofollow" added; see http://en.wikipedia.org/wiki/Nofollow for more details, and the sketch below for the idea. Of course that will not make the spam comment posts go away immediately, but if the technique is rolled out widely, the SEOs will figure out that posting spam blog comments gains them nothing.
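A minimal sketch of that idea in Python; a real blog engine should do this with its own HTML filter rather than a regex, and tags that already carry a rel attribute are left alone here:

    # Add rel="nofollow" to every <a> tag that doesn't already have a
    # rel attribute. A regex sketch for illustration only.
    import re

    def add_nofollow(html):
        def fix_tag(match):
            tag = match.group(0)
            if "rel=" in tag:
                return tag  # leave existing rel attributes alone
            return tag[:-1] + ' rel="nofollow">'
        return re.sub(r"<a\b[^>]*>", fix_tag, html)

    print(add_nofollow('Buy <a href="http://spam.example/">pills</a> now!'))
    # Buy <a href="http://spam.example/" rel="nofollow">pills</a> now!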

Comment Leaker pays, surely. (Score 4, Insightful) 29

It's fairly obvious that the cost of informing customers -- and other related costs -- should be borne by the organisation that failed in its duty to ensure the integrity and confidentiality of the data. After all, until it is cheaper to keep the data safe than to be delinquent, companies are incentivised to be delinquent.

Comment 2-node failover solution is probably a net lose (Score 1) 298

First, figure out what it means for your website to be available (do people just need to be able to fetch a page, or do they also need to be able to log in, etc.?). Then select monitoring software and set it up to test exactly that; a trivial probe might look like the sketch below.
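For instance, a probe for the weakest useful definition (front page answers with HTTP 200 and contains a known string) could be this simple; the URL and marker string are placeholders for your own definition of "up":

    # A minimal availability probe: "up" means the front page returns
    # HTTP 200 and contains a marker string. URL and marker are
    # placeholders; substitute your own definition of available.
    import urllib.request

    def site_is_up(url="https://www.example.com/", marker="Welcome"):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
                return resp.status == 200 and marker in body
        except OSError:
            return False

    print("UP" if site_is_up() else "DOWN")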

As for the serving architecture, at this level of load you're better off without clustering. You don't need it for the load, and it's probably a net loss for reliability: most outages I've seen in two-node clusters were either infrastructure failures that took both nodes out (power distribution failures, for example) or problems with the HA system itself (switches going into jabber-protection mode and provoking a failover, bugs in failure-detection scripts, etc.). If you really feel that a single machine does not offer enough protection, go for an active-active configuration and simplify the problem to directing incoming requests to whichever web servers are working, as opposed to "failing over".
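To illustrate what "directing requests to the working servers" collapses to, here is a toy sketch; the backend addresses are placeholders, and in practice this job belongs in your load balancer or front-end proxy, not application code:

    # Toy active-active selection: keep whichever backends pass a health
    # check and round-robin across them. Hostnames are placeholders.
    import itertools
    import urllib.request

    BACKENDS = ["http://web1.example.com/", "http://web2.example.com/"]

    def healthy(url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    live = [b for b in BACKENDS if healthy(b)] or BACKENDS  # degrade, don't die
    targets = itertools.cycle(live)
    print(next(targets))  # where the next request should go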

This changes a bit if your reliability needs are high enough to justify separate serving facilities in data centres in different cities. For that sort of setup you also need DNS to solve part of the problem, but the right approach there depends on the extent to which the website is static content.
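As a small illustration of the DNS half: a name published with one A record per data centre already spreads clients across sites. This sketch (hostname is a placeholder) just shows every address a client can resolve:

    # List every address a hostname resolves to; with one A record per
    # data centre, clients spread across sites.
    import socket

    def all_addresses(host, port=443):
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})

    print(all_addresses("www.example.com"))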
