Interbase Backdoor, Secret for Six Years, Revealed in Source 260

Diesel Dave writes "CERT Advisory CA-2001-01 announced today that the Interbase server database contains a compiled-in back door account. The thing is, it was not the result of a malicious code infection, but a direct addition by the original Borland/Inprise authors done before the program was released as open source." The backdoor was installed sometime between 1992 and 1994 and has been included in every version of Interbase released since.
  • cool dude. I can't believe there hasn't been a serious look at this code. Their proposed solution to this problem is to change the backdoor password to something else! Now if you randomized it or did anything remotely sane this would be ok, but it's still a flaw.
  • I mean, why would I come to Las Vegas?
    Linus Torvalds

    I'm sorry, but I just had to comment. When I saw your sig, the following line just popped into my head.

    "Me, the creator of the Linux Kernel? in las Vegas? with showgirls? What were they thinking?"

    People who've seen "The Fast Show" ("Brilliant" in the US) will know what I'm talking about.

    Rich

    You aint seen me, right!

  • by alteridem ( 46954 ) on Thursday January 11, 2001 @07:59AM (#514578) Homepage
    If you feel so strongly that every open source program should go through a security audit, then when is the last time you volunteered to do one? Open source is about people volunteering their time, which is often in competition with their real jobs, lives, families etc. In a perfect world, all software would go through a security audit, but it is not going to happen.

    At least with open source, things like this get found. Obviously Borland's security audit didn't find it when they originally released this as a commercial product! If it wasn't for open source, this would probably still be silently exploited by the original programmers and the few people they told.
  • by QuantumG ( 50515 ) <qg@biodome.org> on Thursday January 11, 2001 @08:05AM (#514583) Homepage Journal
    Borland was relying on security via obscurity on this one. I don't know why no-one took this up as an issue. Perhaps I will volunteer to security audit this code (it doesn't look like much) but I am honestly of the belief that there are companies out there relying on this software to run their business. Surely they have a responsibility to contribute back to a project that they are making money from. So if you're a company and you give half a damn about security, take some of the responsibility and pay for a security audit on the source! It's in your own interests.
  • There was a blurb about Microsoft being able to access Win95 registry when a user is connected to the Internet and thus gathering information about non-licensed MS software installed.

    This is probably just an urban myth. With the amount of personal firewall software people are running these days, someone would have logged the unauthorized data being transmitted and there would be sufficient evidence to get M$ in a whole load of shit.

  • by bluGill ( 862 ) on Thursday January 11, 2001 @06:22AM (#514587)

    OpenBSD has been undergoing a security audit for years. A couple of months ago they were able to claim there had been no known root hacks in the current release for 3 years. (That is, they were able to fix root hacks before they were discovered for the last 3 years.) Well, sometime this summer someone discovered a root hack in the released system, despite all those audits. (To be fair, they had fixed that hole in the unreleased code stream; nobody realized it was exploitable at the time, so there was no hurry to release it early.)

    Audits are good, but they take time. OpenBSD has proven they take a lot of time. There is no open source project with as much work in security auditing as OpenBSD. (Probably no closed source project either.) No open source project cares as much, yet they can't always get it right despite 5 years of work. To criticize any other project for not discovering all security holes is a mistake. Even if the OpenBSD audit team had decided to work on this with as much effort as went into OpenBSD, there is no reason to believe they would have discovered this sooner.

  • A back door can be a good thing on the local level, i.e. a sysadmin who can unlock a workstation even when the user has forgotten the password.
    By this (implied) definition, 'root' is a backdoor. If I accept that definition, then this becomes a question about the 'domain' of a backdoor. I.e. how many people should know of the existence and details of a given backdoor, and how 'editable' is the backdoor.

    In the case of root, the existence of the backdoor is well known, but the details (password) are nominally only known by a few people. On some systems, the 'root' name is changed to something else (e.g. toor) for obscurity reasons.

    In the case of Inprise, the existence and details of the backdoor were known to external persons (developers) but unknown by the actual user, and the details are unchangeable without source code. (Note: it looks like a quick fix here would be to edit the backdoor details in the source and recompile.) This was entirely 'security by obscurity' and, now that the cat is out of the bag, almost every user of the software is at risk.

    Point to be made here: Opening the source code simply made it much easier to find the backdoor. Overall, I think that this is a good thing. There may be some hackers out there who knew of this backdoor for many years. Now we have the knowledge and impetus to clean it up.

    I don't think that this was a malicious backdoor. The design of the software seemed to require it (oops!). The big mistake is that nobody who had access questioned its existence. The lesson to be learned is that people who have access to source code and see this sort of stuff should make waves to open up the process.

    The best generic solution is to remove the need for internal 'backdoors' in code. That being infeasible, the software should be changed so that the details of the backdoor are editable by the end-user (or randomized on every start of the software). Obviously, the user has to be made aware of the need to edit this data. That solution, of course, has its own security implications (exercise for the reader).
    `ø,,ø!

  • by doctor_oktagon ( 157579 ) on Thursday January 11, 2001 @08:09AM (#514590)

    I have two machines linked together by a crossover Ethernet cable. Can you hack into that network? I'd be impressed if you could


    A fairly simple matter of splitting the cable and installing my own junction, or attaching my laptop to one of your machines via a serial port /joke

    Anyway, as soon as I saw your comment, I got into your master server (which I noticed connected to the Internet on 127.0.0.1 hah!!), and have told the police about your massive pr0n and war3z collection! You should now notice your hard disk is thrashing as my rm -r * takes effect suX0r!

    Whoops! Hang on? Why is MY disk thrashing ... aargh!!

  • Have you worked on any Open Source projects, or have you seen what happens when a previously closed source project gets released? Strange as it may sound, there is a bit of reverse engineering that must take place, especially if this bundle of source is not brilliantly commented and documented.

    In this case, it was worse, because when Interbase was Open Sourced, it was not buildable. Important scripts, Makefiles, and other assets were missing. So, you had this whole mess of random source that you could begin to guess the function of, but if you hadn't been a former Interbase developer at Inprise, it was all a black box to you.

    Have you ever "read the code from start to finish with a pen and paper next to them" on any major project? Have you ever heard someone do that? Frequently? Or are you just trying to be a troll? That's just not the way it happens.

    Personally, I'm impressed that they've been able to find this. I mean the Firebird project found a working binary from Inprise and a collection of code rumored to, through some magic process, produce that binary. They worked out the magic process, produced their own binary, and moved on from there.

    Only then, after they had source code corresponding to a working program could they start doing the poking and reverse engineering it took to figure out the parts and the places. They had some luck, since they do have the original creators of Interbase on their side, but there were quite a few hurdles to go through before they could even start to make heads or tails of what they were looking at.

    Then again, I dunno, maybe I have it all wrong and these guys were just sitting on their thumbs all day.

  • The first thing to do is replace those with safe alternatives (strncpy, strncat, snprintf)
    Of course, just robotically changing strcpy to strncpy doesn't fix anything, since strncpy does not null-terminate the copy if the source is longer than the destination buffer. By contrast, strncat does guarantee that the destination buffer will have a null terminator.

    There is a fundamental problem at the root of this, which is that the C standard library is hideously irregular, and the C language itself is not meant to be "safe". It's an okay language for writing hardware drivers and other low-level system components, but a safer, more abstract language would be a better choice for applications.
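
    To make that concrete, here is a tiny illustration (hypothetical buffer names, nothing from the Interbase tree) of why a mechanical strcpy-to-strncpy swap still needs care:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char dst[8];
        const char *src = "this string is longer than eight bytes";

        strncpy(dst, src, sizeof(dst) - 1);  /* copies at most 7 bytes, no '\0' added */
        dst[sizeof(dst) - 1] = '\0';         /* so terminate the buffer by hand */

        printf("%s\n", dst);                 /* prints "this st" */
        return 0;
    }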

  • Don't forget the other side though. When someone jumps up and shouts "There's a hole in this open source code", you can run to your servers, bring down your firewall, patch the code, recompile and be back online in a time somewhat proportional to your typing speed. With closed source, all you can do is sit there and wait while script kiddies prod at your sensitive data, the best security you can apply yourself being to work around the problem (if you can) or take your $3 million a week transaction system off-line.

    Rich

  • I trust that the next time a significant source release of this kind is done that you, personally, will download the source and study it exhaustively in this fashion.
  • Which is, of course, the complete opposite of what you said.

    Which is why I like /. comments, because no mistake ever goes uncorrected. I had assumed from reading the security notification that the password was placed in the source just before it had been open sourced. As the yanks say, my bad. It was placed in the original program years ago, but only opensourced one year ago, and that was what led to the backdoor being discovered, I've got that now. I wonder how many people have taken advantage of this over the years.

    the AC
  • Open source makes such things impossible.

    Is it a good thing or not?

    Is there a good use for back doors?

  • by QuantumG ( 50515 ) <qg@biodome.org> on Thursday January 11, 2001 @06:34AM (#514600) Homepage Journal
    Well it took 20 minutes but if you grab the file interbase/qli/dtr.c from the firebird cvs you will see one of the very first things it does in main is:

    SCHAR home_directory[256];
    ...
    #ifdef UNIX
    /* If a Unix system, get home directory from environment */
    startup_file = getenv("HOME");
    if (startup_file == NULL)
    {
        startup_file = ".qli_startup";
    }
    else
    {
        strcpy(home_directory, startup_file);
        strcat(home_directory, "/.qli_startup");
        startup_file = home_directory;
    }
    #endif

    That's called a "buffer overflow" and I doubt it is the only one. Just a short grep over the files gives an idea here. 642 strcpy's, 139 strcat and 945 sprintf's. The first thing to do is replace those with safe alternatives (strncpy, strncat, snprintf) and then the fun begins. And I just know that next week I'm gunna be asked to install an Interbase server :)
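
    For what it's worth, the obvious shape of a fix for that snippet might be something like the following (a sketch only, assuming a C99-style snprintf is available; this is not a patch against the real tree):

    #include <stdio.h>
    #include <stdlib.h>

    static char home_directory[256];

    /* Bound the copy of $HOME into the fixed-size buffer instead of
       trusting its length. */
    const char *qli_startup_file(void)
    {
        const char *home = getenv("HOME");

        if (home == NULL)
            return ".qli_startup";

        /* snprintf truncates instead of overflowing and always terminates */
        if (snprintf(home_directory, sizeof(home_directory),
                     "%s/.qli_startup", home) >= (int) sizeof(home_directory))
            return ".qli_startup";   /* $HOME too long to trust; fall back */

        return home_directory;
    }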

  • Then why not make the default glibc strcpy/strcat etc. safe? Or make the compiler detect overflows (now there's an NP-complete problem for ya!). C is here to stay; claiming that we have to move on is not a solution. "Safe" libraries are a good start but people don't use them. Let's just make the default safe. Perhaps this is something OpenBSD would be interested in doing?
  • by [Xorian] ( 112258 ) on Thursday January 11, 2001 @06:36AM (#514603)

    Anybody running a pre-open-source Interbase seems to have only really unpleasant choices:

    • Use a binary-only patch (if it's even available for the version they're running) which fixes the problem and trust that they really did remove the backdoor and didn't just replace it with a different one (which I know I wouldn't be willing to do given the fact that they put it in there in the first place)
    • Spend an unknown amount of time and effort (and as we all know, time = $) to update to a new version which they know can be trusted (because they can compile it themselves)
    • Switch to a different database altogether
    • Leave it as-is and hope nobody notices

    I'm glad I'm not in that position.

  • by Russ Nelson ( 33911 ) <slashdot@russnelson.com> on Thursday January 11, 2001 @04:33AM (#514605) Homepage
    Turns out that a plain http [cert.org] transfer works as well.
    -russ
  • The only way to find it was to dis-assemble the compiler and codewalk the result.

    You're assuming that the disassembler wasn't also in on the joke, and that it wouldn't recognize that it was disassembling the compiler and casually omit the incriminating code. For that matter, you're assuming that the C compiler wasn't smart enough to recognize when it was compiling a disassembler and insert the appropriate code to implement the above.

    There are two questions to ask yourself: "am I being paranoid?" and "am I being paranoid enough?".
  • Well if you look at the dates, 92-94, security was very different. Back then security meant remembering to lock the server room door.
    1992 may have been pre-commercialization of the 'net, but it wasn't deep and savage pre-history. People already knew that backdoors were a bad thing. The people at Borland were either as a group ignorant of basic security issues, or chose to ignore them. I'd be inclined to bet on the latter.

    I'd also be inclined to bet on the probability that code is being designed and written today with these sorts of problems in them. Probably these people are justifying it to themselves.

    It's necessary.

    It's temporary.
    We're the only ones who know about it.
    Nobody will ever figure it out.
    The proper solution would take too long.
    We shouldn't burden the (ignorant) user with this administrivia.
    There's nothing really wrong with it! (is there?)
    Investment in security will continue until the cost of the security exceeds the cost of a breach -- or until someone insists on getting some useful work done.
    Murphy's laws file, ~1979

    `ø,,ø!
  • by b1t r0t ( 216468 ) on Thursday January 11, 2001 @08:26AM (#514611)
    Notice how many years it took ANYONE to discover this.

    Correction: how many years it took anyone to discover and announce this. Just because it was only now announced doesn't mean someone didn't know about it two years ago and kept quiet about it.

  • Then why not make the default glibc strcpy/strcat etc safe?

    OpenBSD have addressed this issue. glibc has not yet adopted their solution, although glib is expected to adopt g_strlcat and g_strlcpy in version 2.

    [openbsd.com]
    http://www.openbsd.com/papers/strlcpy-paper.ps
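
    In a nutshell, the interface from that paper gets used something like this (a sketch assuming strlcpy/strlcat are available, as they are on OpenBSD; elsewhere you would have to supply them yourself):

    #include <string.h>   /* strlcpy/strlcat live in <string.h> on OpenBSD */

    /* Build "$HOME/.qli_startup" with no possibility of overflow.
       Returns 0 on success, -1 if the result would not fit in dst. */
    int build_startup_path(char *dst, size_t dstsize, const char *home)
    {
        /* both functions return the length they tried to create, so a
           return value >= dstsize means the result was truncated */
        if (strlcpy(dst, home, dstsize) >= dstsize)
            return -1;
        if (strlcat(dst, "/.qli_startup", dstsize) >= dstsize)
            return -1;
        return 0;
    }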
  • by QuantumG ( 50515 ) <qg@biodome.org> on Thursday January 11, 2001 @06:41AM (#514613) Homepage Journal
    The bug in question was a one byte overflow in ftpd. The guy who invented [pulhas.org] one byte overflows had this to say:

    Conclusions could be drawn from this nearly impossible to exploit situation. Although I would be surprised to hear of anyone having applied this technique to a real world vulnerability, it for sure proves us that there is no such thing as a big or small overflow, nor is there such thing as a big or small vulnerability. Any flaw is exploitable, all you need is to find out how.

    So even he didn't think this would ever happen and the bug in ftpd was a direct result of this. No one knew it was there because no-one knew that such a bug even existed (and if it did it was most probably not possible to exploit). That is definitely not the case here. This is an obvious flaw in security written by a programmer who obviously never thought the code would be open sourced. It should have been one of those things that you picked up on the first day and said "this is bad, you never should have done this."
  • 484961 lines in .c files
    395521 lines in .h files
    116496 lines in .e files (like a script file)

    you call this big? From a security analysis point of view, this is a baby.
  • It's not laziness, I think. I think it is a lack of knowledge about these issues. Ignorance of security causes most of these issues, and it is not surprising. I have done a lot of formal education in programming and never once have I been formally taught anything about security issues. I have read lots of books about programming, but never once have I read (in a book) about security issues. Actually I've never seen a book about security issues that wasn't jam packed full of cryptography.
  • see my other post [slashdot.org] where I just discovered a buffer overflow, it took 20 minutes and I've never worked on the software. Believe it or not there are people who do this for a living, they are called "code auditors" and they perform "security audits" and it is a very local thing. You don't have to be a developer of the software, you don't have to compile the software, you just have to read the source!
  • The only way to stop a compiler from adding anything to your program is to compile the compiler from source code

    Unless the compiler source itself has obfuscated backdoors, of course.

    The solution is to have a basic compiler written in Assembler. This way you do not need to start with a binary compiler that you cannot know with 100% certainty is clean of any bad things

    And now you assume that more than about 1% (if even that) of the programming community has the skill to analyze 20,000 lines of assembler looking for backdoors! I'd much rather try to find a backdoor in 30,000 lines of C than 20,000 lines of assembler.

  • I'm not trying to understand it. I'm trying to find security flaws, and that's why it's called a "security audit". And yes, finding such a security flaw is such a "subtle process" that it just took me 20 minutes to find [slashdot.org] a buffer overflow in the said source code. Why is it you seem to think you know anything about security analysis? Do you do this for a living? Well, I have.
  • Nah, these are not permanent ACLs, just added on for a few weeks to see what kind of a problem this might be for downstream clients. They'll get removed after we see what is going on. Over the last few days, we've seen the exact same signature from a script, sequentially probing IP addresses to port 3050. We're using private I [opensystems.com] to capture and filter the logs so if we get asked by a clueless manager at some later date if we were doing our jobs, we can hand them a pretty report.

    Now that I know what to simulate, I'll rig one of the honeypots and see if the script tries the exploit, or if the crackers wait until later after a positive hit to try their luck. But that will wait until tomorrow, beer is calling :-)

    And besides, if I ever choke out one of the routers, it's good justification to accounting to buy bigger routers :-)

    the AC
  • I don't know about "extensively", I think that should be done by someone who actually gives a damn about Interbase. Say maybe one of the companies that is using it as a crucial part of their infrastructure.. but that could be just me. I did however just download the source and spend [slashdot.org] 20 minutes on it.
  • I wonder about MS SQL Server too. Their code is a branch from Sybase's ASE server and Sybase released an urgent security alert advisory for all platforms in September 2000. They gave no details on what the problem was but said "Sybase views this as a mandatory correction that you should implement immediately." Database servers presumably have very big source trees (the stripped ASE executable for Solaris is 11MB) and it must be relatively simple to hide a backdoor in the source code that could lie undiscovered for years. Here's their security advisory [sybase.com] from Sept 2000.
  • actually, the dude is right! I hate the halting problem.
  • buf = malloc(strlen(src)+1); strcpy(buf,src);

    Hey, don't forget to trap those null pointers. And handle them. Then free that buffer when you've finished with it.

    One item in particular that I was referring to was a quick one-liner to debug, to check that part of a project was working properly by feeding and receiving values. Minimal development time assigned to it, no likelihood of exploitation, and it only had to run once and its job was done. It turned out that another project needed exactly the same functionality on a regular and reliable basis. Now the code was available (I wrote it) and the necessary tightening was done. But just as easily, someone else could have just used it out-of-the-box. I think this points not to needing to have over-engineered, perfectly written software down to "Hello World" but rather to ensuring that you know the security implications of any software you exec.

    Rich
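
    Spelled out, that pattern looks something like this (a throwaway sketch with a hypothetical helper name, not code from any project discussed here):

    #include <stdlib.h>
    #include <string.h>

    /* Duplicate a string safely: allocate exactly enough, check the
       allocation, and leave freeing to the caller. */
    char *dup_string(const char *src)
    {
        char *buf;

        if (src == NULL)
            return NULL;
        buf = malloc(strlen(src) + 1);
        if (buf == NULL)
            return NULL;      /* trap the null pointer instead of crashing */
        strcpy(buf, src);     /* safe: buf is exactly big enough */
        return buf;
    }

    /* caller:  char *copy = dup_string(user_input);
                if (copy != NULL) { ... use it ...; free(copy); } */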

  • Apart from my previous statements about calling for companies that use the software to spring for a security audit, isn't this the same question as "who is going to pay for all these developers?" on open source projects.
  • But it could not be used as an exploit because the shell environment variable space could not hold enough data to "smash the stack".
  • I have no use for the software. Where's all the companies that have been using this software since day nought? Surely when they heard that Borland was open sourcing their favourite database package they should have ponied up the cash to have it done. Hell, there's probably even a couple of security companies that use this software themselves, they probably just didn't know that it was open sourced (like me!).
  • This brings up the question, to view from several angles, how would you try to sneak in a back door (or any kind of 'easter egg' type code that doesn't need to be there) into an open source project? Obscure hashing of keys in the code? Just naming functions so they sound official?

    There can't be a way to completely hide it. Just make the trail harder to follow. So, as an exercise of what to look for, how would you go about pulling something like this off?

    Jason
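
    For the sake of the exercise, here is one well-known shape such a thing can take (purely hypothetical code, not from Interbase or any real project): an assignment masquerading as a comparison, where the payoff is the side effect rather than the branch.

    struct session { int uid; };

    /* The single '=' below is not a comparison: whenever the magic flag
       value is seen, it silently promotes the session to uid 0 (root) as
       a side effect, while the surrounding check still reads like
       ordinary error handling and the branch itself is never taken. */
    int check_debug_request(struct session *s, int flags)
    {
        if (flags == 0x7f && (s->uid = 0))   /* "typo": meant s->uid == 0 */
            return -1;                       /* unreachable: (s->uid = 0) is 0 */
        return 0;
    }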
  • Have you ever "read the code from start to finish with a pen and paper next to them" on any major project? Have you ever heard someone do that? Frequently? Or are you just trying to be a troll? That's just not the way it happens.

    Actually that's exactly the way it happens. It's called a "security audit" and it involves reading the source. It is best done by a security expert who reads through the source, writes down everything that he is suspicious of, and then sits the programmers down in a room and asks them question by question what each of the variables involved are, where they come from, what resultant binaries they are used in, etc. I know this because I used to do security audits for a living, and it was during this actual hands-on experience with software that I decided that open source was better, because you can get more people reading the source simultaneously. As for whether I am trolling? No, but I appear to have attracted a few flame throwers anyway!

  • by Richy_T ( 111409 ) on Thursday January 11, 2001 @08:45AM (#514642) Homepage
    OK, first a comment on the fact that you keep saying it only took you 20 minutes to find the hole. Buffer overflows are well understood, and strcpy and strcat are obvious red flags (sprintf does not necessarily mean buffer overflows with correct format strings). As you've shown, a quick grep will give you some clue where to look. You could even almost say that use of these functions is an error in the code. Yet the backdoor you are berating people for not finding is not an error; it is deliberately written into the program, more than likely using perfectly valid code. To find that backdoor means understanding that piece of code and probably large pieces of code around it and, since you don't know in advance where to look, that means that the whole of the code has to be understood (though not necessarily by one person) to be sure there are no backdoors. Even then, if the understanding is held between more than one person, there may be an interaction between the parts which results in an unlocated problem.

    Add into this that this will be a HUGE source base with many, many lines of code, that open source contributors generally want to produce things and not be reading over other people's code, and that reading other people's code (and that usually includes the "you" from >6 months ago) sucks sucks sucks.

    But those criticisms aside, it does indicate that open source probably does need to consider security more, especially when inheriting code from closed source projects, but just as importantly for existing open source projects. It seems that OpenBSD is doing a good job of auditing their code. While I wouldn't even think of saying that open source projects *must* do x or y, perhaps there could be a central security auditing project which ranks other projects on their security and offers suggestions on common security errors and auditing methodology. Projects could apply these techniques or not as they desired, but the end user could check the security status by going to the security site. Interbase would have been ranked red_unsecure_not-yet-audited, sendmail could be blue_unsecure_script-kiddie-heaven etc.

    My second comment is more a query. Are there header files available which make sure that strcpy and friends can't be used? It would go some way to helping if you could use these headers and WARNING:STRCPY USED. COMPILE ABORTED would pop up as appropriate. It wouldn't be a final fix but it would help and might get programmers out of the habit of using these awful functions in the first place.

    Finally, with the front page story yesterday being about OOP, this is clearly the kind of thing where OOP helps. A good string class will take you a long way. Also, OOP is easier to read and understand in small chunks, so it's easier to audit (and easier to get people to audit).

    Rich

  • I'm not trying to understand it. I'm trying to find security flaws and that's why it's called a "security audit"

    A code backdoor is NOT a "security flaw"! Any decent C programmer can spot a buffer overflow in 20 minutes, but very few programmers could spot an obfuscated backdoor in a major application like a relational database system without a major investigation by a dedicated team of people!

    Why is it you seem to think you know anything about security analysis? Do you do this for a living? Well I have

    Well I'm a security consultant and could probably spot a hole in a set of firewall rules in 20 minutes, but it doesn't mean I could find a route through a unicode vulnerability in a www server, which accesses an open share on another server, which has trusted access through another firewall to a back-end Oracle system in 20 minutes ... I'd be looking at at least a 5 day penetration test for that!

    Please stop being defensive, and stand back and look at this particular situation!

  • hmm.. maybe because "writing a parser that will examine C code for security problems" is an NP-hard problem, I don't know, that could have something to do with it. And yes, you do need to know how to code to look for security problems, and yes, maybe a half-decent C programmer can do it (although I doubt that when you consider that the Interbase code was presumably written by someone with more than a year's experience at programming in C -- if not, what a quality product this is). But whether or not this "stupid programmer problem" is a "security problem", I'd dare to say that the folks who have all their data destroyed or their machine taken over as a result of it would tend to think it was. So, finally, if what you say is true and you are not a security expert, then shut the hell up about stuff you know nothing about! God damn. Do you give this kind of shit to lawyers who come on Slashdot and give their legal opinion? How about processor engineers? "Hey man, I may not be a hardware engineer but it only takes someone with one year's C experience to know that L1 cache is better than L2 man!" .. are you aware of how stupid you sound to everyone on here who has a clue about software security? If C programmers knew what the hell they were doing then we wouldn't see dozens of buffer overflows every month.
  • >"BACKDOOR_PASSWORD\0My_Sekret_Password\0"

    Rich

  • There is a very good fix at http://firebird.ibphoenix.com The fix is an image zapper that finds and replaces the account, password, and the doomsday function with randomized byte strings. It's available for almost all platforms and works for all versions of Interbase, except for the latest Firebird, which doesn't have the problem. The zapper, named ibsecure, is also ready to zap the anticipated new backdoor in Borland's latest release. Oh, maybe they did do a professional job this time? But since they're not talking, who knows?
  • It worked for 6 years. How much longer would it have worked if they hadn't opened their source?

    Of course you don't want obscurity to be your only method, but you shouldn't rely on peer review as your only method either. It's just that I've grown tired of people saying that obscurity is of no value at all.

  • Now that I know what to simulate, I'll rig one of the honeypots and see if the script tries the exploit, or if the crackers wait until later after a positive hit to try their luck. But that will wait until tomorrow, beer is calling :-)

    They actually let you run a honeypot? You lucky thing! The chances of me actually managing to produce a business justification for one are pretty slim. Management happily spend money on top-end NetRangers etc. which is nice, but this is one step too far for them!

    And besides, if I ever choke out one of the routers, its good justification to accounting to buy bigger routers :-)

    Extremely good point: like accounting would ever understand that processor saturation is down to multiple ACLs ....!

  • It worked for 6 years

    Did it? Are you sure? Do you know that the Interbase people that had it didn't abuse it to go poking around in companies' databases, reading people's private messages? Are you sure they didn't tell any friends? Can you be sure that Interbase didn't supply confidential information obtained illegally about one of their users to a "friendly" competitor? (I mean I'm sure they haven't, but the possibility is there)

    Can you be sure this hasn't been exploited somewhere somehow?

    Rich

  • <"BACKDOOR_PASSWORD\0My_Secret_Password\0"
    >"BACKDOOR_PASSWORD\0My_Sekret_Password\0"

    Rich

  • no, you're right. This is an easy thing to find. But I also had a look at the backdoor issue and I think an audit would spot that pretty quickly too. My point in saying that it took little time to find an overflow is to suggest that this code has not been audited, and it seems kind of strange to mention this particular flaw (the backdoor) when there are even more obvious flaws present. Severity is certainly an issue. I think the backdoor is probably more severe because it is trivial to exploit. As for headers to make strcpy and the like go away, I don't know, but they would be pretty trivial:

    #define strcpy STRCPY_NOT_ALLOWED_BABY!@#!@#@!

    which would cause a compiler error :) A string class will get rid of trivial buffer overflows like this, certainly, but at what cost?
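
    Fleshed out a little, such a header might look like the following sketch (include it after <string.h> from a common project header; any use of the banned calls then breaks the build, at worst at link time):

    /* no_unsafe_str.h -- ban the classic overflow-prone string calls */
    #ifndef NO_UNSAFE_STR_H
    #define NO_UNSAFE_STR_H

    #define strcpy   ERROR_strcpy_is_banned_use_a_bounded_copy
    #define strcat   ERROR_strcat_is_banned_use_a_bounded_copy
    #define sprintf  ERROR_sprintf_is_banned_use_snprintf
    #define gets     ERROR_gets_is_banned_full_stop

    #endif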
  • by jpiterak ( 112951 ) on Thursday January 11, 2001 @07:07AM (#514656)
    Hmmm... While I agree with the idea that perhaps more people should be checking out the source code of the open source apps they use, I think you missed the point.

    The backdoor was introduced in the commercial version of the software. It's only now that it is open source that we could even see the error. The people paying for the 'presumably...high-quality app' you extol the virtues of were receiving the backdoor-enabled product. Rather than being a failure of open-source software, I'd say this one was a success. I only wonder what other kind of 'crap' exists in all those apps whose sources are closed.

  • I read it moron. I personally don't think that a year was needed to find this. I would have thought that the first day that the source was released someone would have read the code from start to finish with a pen and paper next to them and written "obvious backdoor in eight files, remove" and fixed it.
    I second this notion: I run the following script on ANY source code I receive:

    grep -R 'obvious backdoor' `find . -name '*.[ch]' -print` | Mail -s 'Fix these' me

    (It's a one-liner. Re-assemble if necessary. Modify appropriately for other languages.)

    anybody who takes this seriously deserves to .
    `ø,,ø!

  • A couple of things about this story -- and points raised in earlier posts are interesting in light of the NSA's recent release of a secure Linux. First: A backdoor that had existed for years was discovered relatively quickly (hey -- nothing happens overnight) after the code was opened up and began to see common use. Second: As others have pointed out, security by obscurity does work after a fashion and up to a point. Trouble is, so does vulnerability by obscurity.
  • $ export BIGSTR='123456789012345678901234567890123456789012345 67890123456789012345678901234567890123456789012345 67890123456789012345678901234567890123456789012345 67890123456789012345678901234567890123456789012345 67890123456789012345678901234567890123456789012345 678901234567890123456789012345678901234567890'
    $ echo $BIGSTR
    123456789012345678901234567890123456789012345678 90 12345678901234567890123456789012345678901234567890 12345678901234567890123456789012345678901234567890 12345678901234567890123456789012345678901234567890 12345678901234567890123456789012345678901234567890 1234567890123456789012345678901234567890

    My environment variables hold way more than 256 bytes.. where you gettin' yours?

  • A string class will get rid of trivial buffer overflows like this, certainly, but at what cost?

    True. But that's up to the developer to decide when they are drawing up the project spec. For a small internal utility where I know no one is likely to want (or have the need) to perform an exploit, I might be lazy and use char[2000] and strcpy for paths (though I still think it would be better if those functions disappeared; note that strncpy is not proof against buffer overflows, and there are buffer overflow problems where strings may not be terminated properly [particularly in networked software]). For an absolutely robust, mission critical system such as one that stores credit card numbers (a database) I would be tempted to go for a string class, and for something that needed to be robust but definitely needed to be lean, I would probably go for writing some specialised string functions (strcpy and friends do not have to be used in dangerous ways).

    Rich

  • It sure gives a good argument for Open Source in security critical locations. What if the same exists in Exchange or IIS?

    If Borland could have it, why not Oracle, Sun, IBM? (Well, to be honest, you could get the source for Solaris from Sun.)

    / Balp
  • For a small internal utility where I know no one is likely to want (or have the need) to perform an exploit I might be lazy and use char[2000] and strcpy for paths

    I should state here that I have sometimes seen those small internal utilities go into full-scale production systems, usually requiring a rewrite to remove all those little nasties. It's probably best not to be lazy in general :)

    Rich

  • No, the senior developer currently on the project was present when the back door was implemented, and used the back door during development of the most recent version (V6). He just didn't think about the implications.
  • What's even funnier is blocking IE on Win98+ from making outgoing connections. It went nuts on me the first time I connected after that, but it's just been sulking quietly ever since. Opera forever, baby...
  • Once we get used to things, it's pretty easy to ignore the implications. Hitler got the people of Germany used to the idea of mass-murder by a gradual increase in the severity of the treatment of Jews.

    Whether it's profit-driven, back-doors, or mass-murder: How often have you heard the phrase:
    "That's just the way we do things."?
    `ø,,ø!

  • I want any non-system application (ie fingerd, bind, apache, etc.) written in a SAFE language so that these kinds of common, braindead errors are IMPOSSIBLE.

    The "trusted" unsafe C codebase should be as small as possible.
  • I have to be defensive, you're attacking me! I've just been flamed by five people for speaking the truth, and you can be pretty sure that none of them are security consultants like you are. Why is this not a security flaw? If I was looking through this code I would see lines like

    return (!strcmp (name, "USER") && !strcmp (project, "LOCKSMITH"));

    and immediately ask "what's this LOCKSMITH thing?" and then take a grep around and discover the #define LOCKSMITH PWD_ls_user() and have a look at that and discover

    char *PWD_ls_user()
    {
        if (strcmp(ls_user, "Firebird ") == 0)
        {
            mk_pwd(ls_user);
        }
        return ls_user;
    }

    char *PWD_ls_pw()
    {
        if (strcmp(ls_pw, "Phoenix") == 0)
        {
            mk_pwd(ls_pw);
        }
        return ls_pw;
    }

    and say "hey, this thing which is obviously a username and password is hard coded here? What the fuck?" and quickly come to the conclusion that there is a backdoor in the code. When I filed my security report I would include a section on the LOCKSMITH backdoor and when the programmers told me that they did that intentually I would have a little laugh and explain to them the risks of doing that and how to do it properly. They would tell me what is right and wrong with my proposed solution and the problem would get solved.

    BTW, here we distinguish between you guys as "network security" and us guys as "software security", but I've also done network security.
  • I wouldn't be surprised if the "breakins" into databases recently where millions of cc numbers were compromised were rooted in similar situations.

    It's really given me pause about entrusting my financial data to any online merchant. As such, I make a concerted effort to only use one cc for online purchasing, which I periodically "lose" so I get a new number.

    I recommend you all do this.
  • by Colin Smith ( 2679 ) on Thursday January 11, 2001 @07:27AM (#514689)
    Just because you didn't know about the backdoor doesn't mean that some cracker didn't know about it.

  • actually it's more than that. They claim no remote root hacks in the default install. I.e., it may very well be there, but if you don't turn on any services beyond those enabled by default then you're safe. That's the claim. I'm not sure what all the people who were running the ftp daemon and got exploited think. But I doubt it was "damn, I shouldn't have enabled that ftp server cause it wasn't enabled by default".
  • Fair comment but I would have used

    char *foo;

    foo = NULL;

    if (str_add(&foo, "Hello") != 0) {
        /* process error */
        ...
    }

    Where str_add is a function which attempts to allocate a buffer long enough to hold the concatenation of its two arguments, concatenates them, frees the original foo, then sticks the result back out into foo. If the buffer cannot be allocated, it doesn't mess with the pointer to foo and returns an error.

    Note that this is not efficient if you are creating long buffers from small chunks of data. In that case, I would make foo a struct containing char * and int and store the length of the allocated buffer in the int. I would #define a BUFF_CHUNK_SIZE and bump the size of the buffer up by that as required.

    But not for a ten line quickie program :)

    Rich
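
    For completeness, a sketch of what str_add might look like (my guess at the interface from the description above, not necessarily the real thing):

    #include <stdlib.h>
    #include <string.h>

    /* Append src to *dst, reallocating as needed.  Returns 0 on success;
       on failure *dst is left untouched and -1 is returned. */
    int str_add(char **dst, const char *src)
    {
        size_t oldlen = (*dst != NULL) ? strlen(*dst) : 0;
        char *buf = malloc(oldlen + strlen(src) + 1);

        if (buf == NULL)
            return -1;                /* allocation failed; *dst untouched */
        if (*dst != NULL)
        {
            memcpy(buf, *dst, oldlen);
            free(*dst);
        }
        strcpy(buf + oldlen, src);    /* buf is exactly big enough */
        *dst = buf;
        return 0;
    }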

  • No it doesn't; that's why you have the compiler insert back doors in code. I believe Ken Thompson wrote a paper on it, and some aspiring Karma Whore with more enthusiasm than I will surely dredge it up and point to it. :)

    Yes, this is a good thing; backdoors should be eliminated from commercial products. I don't want anyone sneaking into my database. Although Borland might not be too happy about this... :)
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • by Anonymous Coward
    Open Source makes those things unlikely, but not impossible...

    • Backdoors can be added by rogue compilers
    • Backdoors can be intentionally engineered into obfuscated code in such a way that it would appear as an oversight or bug if it were ever discovered.
    • Backdoors could be distributed in the RPM binary, which people routinely install on their machines without compiling from source.

    I'm sure other people could think of more scenarios

  • can be found at InterBase Developer Initiative web site: www.interbase2000.org [interbase2000.org].
  • Yes, and for 6 years the only people who knew about it were the appointed illuminati and the black hats.

    It's an age-old debate. Older than the computer. Some people feel that it's just torture to tell a terminally ill patient that they're about to die. Others welcome the opportunity to say goodbye to friends and spend their retirement money.
    `ø,,ø!

  • I've seen a steady increase in probes to TCP port 3050 the last few days, so obviously some mailing lists have had the info available for a while. There seems to already be at least one skiddy kit just to probe for this vulnerability.

    It will be interesting to see what various inquiries produce as to why this was put into the code, and why it existed for years in open source before being discovered.

    Off to modify some router ACLs to log and drop...

    the AC
  • by mark.odonohue ( 45542 ) on Thursday January 11, 2001 @07:18PM (#514703)
    Hi

    Have a closer look ;-)

    The code is initialised to the values in the .h file, and when the server starts up it replaces them with random data using chars with ASCII values 1-255.

    So every time the server starts up you get a different random password (a rough sketch of the idea appears at the end of this comment).

    I've posted elsewhere a bit about how this was done, just prior to Christmas, to fix the problem without introducing any unknowns.

    A more permanent fix will be applied. We found this when we were doing a review of the security.

    There are problems, but in Firebird we have several people who do crypto/PKI things for their day job, and we were doing a security review; that in part explains how we've found these. It also places us in a good position to fix these things. As far as Borland are concerned, they seem to be ignoring us.

    They wouldn't tell Jim they were working on a patch for prior versions of InterBase, so he felt compelled to write his own.

    But for now it's a good time to keep your Firebird/InterBase server locked behind a firewall

    Cheers

    Mark O'Donohue
    --
    Your database needs YOU!
    http://firebird.sourceforge.net
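
    A rough sketch of the start-up randomisation described above (an illustration only, not the Firebird source; the real code presumably draws on something stronger than rand()):

    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    /* The baked-in locksmith credentials get overwritten at start-up with
       unguessable bytes in the range 1-255, so the strings stay non-empty
       and never match what ships in the binary. */
    static char ls_user[32] = "Firebird ";
    static char ls_pw[32]   = "Phoenix";

    static void mk_pwd(char *buf)
    {
        int i;
        for (i = 0; i < 31; i++)
            buf[i] = (char)(1 + rand() % 255);
        buf[31] = '\0';
    }

    /* called once when the server starts */
    void scramble_locksmith(void)
    {
        srand((unsigned) time(NULL) ^ (unsigned) getpid());
        mk_pwd(ls_user);
        mk_pwd(ls_pw);
    }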

  • You are wrong about the time aspect: the second any backdoor is introduced, there is a security problem - not when someone other than the person that coded it finds it.

    That is the big difference - in an open source project, a security flaw is accidental, and not exploitable until someone finds it - and hopefully it is the community that finds it and not a hacker. With a backdoor in a closed source project, security is broken immediately. The person that coded it knows about it, his friends, the person that asked him to put it there, etc....


    Huh? This strikes me as a rather semantic argument. It all depends on how you define the word "problem". In any event, I'd say (and I think most others would agree) that the most pressing security concern of any product consumer, be it open source or closed, is the effective security of the product. How the problem came to be is not nearly as relevant to the consumer as IF and WHEN it becomes known. Notice: this is not the same thing as saying that just because a problem is unknown at some point in time it is irrelevant... as long as there is an (actual) risk it is relevant. However, it is equally stupid to somehow imply that any closed source product with any backdoor (no matter WHEN or IF it is discovered) is somehow, necessarily, more problematic for everyone than an Open Source product with a zillion "accidental" security flaws that are discovered haphazardly in great number.

    Put simply, I'd rather face the risk of ONE developer knowing a backdoor (or bug or flaw) exists than face a zillion hackers armed with many different exploits on the comparable open source product long before. One might argue this empirically: the percentage of highly exposed Interbase dbs that were hacked versus the somewhat equivalent MySQL database (which, incidentally, has seen its fair share of security problems).

    All of this, however, is entirely beside my original point. The point is that the poster I was replying to, and indeed a great deal of open source dogma, says that such backdoors are impossible to hide for an extended period of time in a popular Open Source project. I simply assert that if security flaws can lie dormant for years as the result of improper coding in popular Open Source products, then an honest-to-god backdoor can certainly be hidden in there by an intelligent coder with equal or greater success, even if it isn't triggered by something as trivial as "MY VOICE IS MY PASSWORD".

  • It really makes you wonder, when something like this has thus far escaped the relatively large community of InterBase users: what is hiding in the software that we acquire in binary-only form?

    I fully expect that somewhere, in a corporation's database, is the tacit knowledge that I wear brightly colored underpants each and every second Thursday of the month... };-)
  • absolutely right. After all, it's not your code, it's your company's (or whatever), and they may do things with it that end up screwing you. If it takes longer then I guess it really isn't worth the long term safety, but if it takes just as long to do buf = malloc(strlen(src)+1); strcpy(buf,src); instead of buf[1024]; strcpy(buf,src); then why not do the thing that is safe?
  • However, that's not only evil nastiness; the day might come when a company blesses a db manufacturer for implementing a backdoor, just after both dba's got run over by the same truck.

    When I was an MIS director, I had all the critical passwords written down on separate, sealed envelopes with my signature on them and put in a safe deposit box which could only be opened by the VP of finance -- specifically to guard against the event that the key sysadmins and/or I should come to an untimely end.

  • Joshua5?
  • On the contrary, it's VERY decent of them not to tamper with the code before releasing it. That way people can more easily learn about the problem and implement whatever fix they want.
  • Firstly, I didn't say that you just had to read the code from cover to cover; I said that you had to read the code from cover to cover, and you do. Secondly, terribly competent programmers write buffer overflows all the time, and if only incompetent people wrote buffer overflows then this whole project was written by morons, because it is full of them. Very good programmers write buffer overflows all the time because it is not in their job description to be a security expert. That's not to say that I think they shouldn't know what a buffer overflow is and how to avoid it, but I hardly think it reflects on their coding skill. Frankly, if I have the choice between a developer who can code to specification, on time, on budget and with efficiency and yet doesn't know strcpy from strncpy, and a guy who can't code for crap but can find a flaw in 10 minutes and write the sploit in 5, I'll take the good coder, cause that's who I hired. Fallibility is about ignorance, but I don't think ignorance of security issues makes you a bad programmer.
  • we didn't say bounds checking, we said detecting security faults in C source, and I know what I'm talking about: it's equivalent to the halting problem.

  • Hey, I only suggest Java because it is similar enough to C to possibly make my dreams come true in the short term. I don't like it either. =)

    You'll probably be interested to know that we DO have compilers which generate provably safe code. One piece of the puzzle is TAL: Type safe assembly language. Another is TILT: A type-preserving ML compiler. They've also got projects on compiling safe-c, proof carrying code for transmitting this stuff over the network (without sandboxing), etc. The technology is almost there.

    And while I agree with you that the compiler is an important source of more bugs... wouldn't it be nice to plug up holes on the programmer end (since compiler bugs right now also introduce more non-safety) while we wait for this stuff?
  • by Croaker ( 10633 ) on Thursday January 11, 2001 @04:41AM (#514727)

    Is there a good use for back doors?

    I can't think of one. The CERT advisory makes it sound like this particular one is there because the design of the system requires it:

    It turns out the LOCKSMITH is an entity needed to allow "authorized" interaction with the security accounts database between services. This LOCKSMITH is the user account in question compiled into the code with full-access to the security accounts database by default.

    So, at least it doesn't seem to be a Borland/Inprise employee being sneaky. But still, leaving such a gaping hole in the software, even by design, is stupid. Especially considering the password for said account is hard coded! I can't imagine that idea passing the giggle test for any security expert.

  • no.. they have to say, dude, what is this lame function used for? The one that is passing in a string called user_toc_man and then goes on to copy it into a 256 byte buffer. And then the programmer says something like "oh that, that's the user's total object count for all the manual changes he has made".. uh huh.. and the user specifies this? Where? "oh.. it's in a file in his home directory" and the security guy says "can he change this?" and the programmer scratches his head and says "yer.. of course he can change it" and whilst the programmer is rambling on about how stupid a question that was, the security expert writes it up in his report that function calc_man_response has an exploitable buffer overflow in it, because the programmer who is sitting beside him now rambling on about how brilliant he is trusted the length of a user supplied variable.
  • that's right. Code should be "the simplest thing that could possibly work". That is the goal of a programmer, and this aversion that people have to reading code is just scary. I can't believe they sit down and start hacking away at it without even reading it first.
  • by TermAnnex ( 154514 ) on Thursday January 11, 2001 @04:41AM (#514733)
    Borland was able to keep this secret for years, or rather the developers at Borland did.

    Since the source was released, it's obvious that the developers who added the backdoor have already left Borland, since it wasn't removed, and the other developers haven't noticed that there is a backdoor.

    So, if it can go undetected even when the whole world has access to the source, might this indicate that there is a very real possibility that the crackers who broke into MS DID backdoor the source?
  • We need a Meta-Godwin's Law, for people who mistakenly invoke Godwin's Law.

    FYI: Godwin's law is about comparing someone to a Nazi, or such. Simply using the Nazis as an example of something isn't covered.

    For instance, if I was to talk about military uniforms through the ages, discussing Nazis is perfectly reasonable.

    Now, Nazis aren't related to databases, and it is a stretch, but the poster is right: people see things happen and get comfortable, and then they don't think about where those things could lead if abused.
  • by stg ( 43177 ) on Thursday January 11, 2001 @04:43AM (#514739) Homepage
    Some extra info (mostly non-technical, but detailing the discovery and subsequent Borland (non)response) is available at the Interbase Developer Initiative [interbase2000.com].

    BTW, it seems that, as usual, they were not very concerned.
  • by blirp ( 147278 ) on Thursday January 11, 2001 @04:44AM (#514745)
    From the webpage [interbase2000.org]:
    For security reasons, the patch is available only as a binary and you will be required to register for this download.

    Nice, eh?

    M.

  • by brunox ( 152235 ) on Thursday January 11, 2001 @04:48AM (#514760) Journal
    Most of the old school software houses have compiled in some back door or provided a hidden way to get access to users' systems over the years. In my opinion it's common practice. They just love to have this kind of control/power over consumers.
    Losing this kind of control is one among other things that make industry afraid of going open...
  • by InsaneCreator ( 209742 ) on Thursday January 11, 2001 @04:48AM (#514761)
    Makes me wonder how many back doors there are in other Borland products, especially those intended for app development. Is it possible that a back door could be compiled into every Delphi/C++ Builder/JBuilder app ever written, or at least the apps compiled with Standard versions, which don't provide the source of the libs?
    Has something like that ever happened before?
  • by FallLine ( 12211 ) on Thursday January 11, 2001 @04:54AM (#514768)
    Oh bullshit. There are security flaws found all the time in Open Source products, many of them quite old. If careless coding can create a security flaw by accident that can slip past so-called "peer review", then certainly a reasonably intelligent person could slip in a very subtle backdoor that is infinitely harder to detect. About all you can really say generally about Open Source security is that an ultra-trivial backdoor opened with a string like "I AM BACKDOOR" is unlikely, because even the casual reader is apt to notice.

  • by sql*kitten ( 1359 ) on Thursday January 11, 2001 @05:05AM (#514776)
    On the contrary, there is a huge difference. The default passwords are documented, and easily changed. This backdoor was undocumented and would require a recompile to change.

    Of course, any computer is only as secure as its administrator.

  • by deusx ( 8442 ) on Thursday January 11, 2001 @05:05AM (#514777) Homepage
    ...and why it existed for years in open source before being discovered.

    Correction... Note that the blurb above says "...a direct addition by the original Borland/Inprise authors done before the program was released as open source." This wasn't done after the Open Source release.

    Furthermore, Interbase has only been under an Open Source license for less than a year. Inprise was considering the move around last December [slashdot.org], and it was finally (although missing parts and amidst great controversy which eventually forked the code [sourceforge.net]) released under an Open Source license around July 2000 [slashdot.org].

    So, the thing is from what I can see, this is an instance where an Open Source release allowed a security hole, hidden for years as closed source, to be found finally. Which is, of course, the complete opposite of what you said.

  • by alteridem ( 46954 ) on Thursday January 11, 2001 @05:24AM (#514778) Homepage
    Many people seem surprised that it took so long to find the backdoor. Their logic is that since it is opensource and has countless eyes looking at it, then it should have been noticed much sooner. What they don't realize is that a project like this is usually in the range of hundreds of thousands to millions of lines of code and when a developer goes into a project of that scale, he/she does not read everything, but only enough to learn the overall structure of the program, then zeroes in on sections that have been identified to need work or may contain known bugs.

    If anyone truly believes that things like this should be found faster, they should try reading through this amount of code. When their heads stop spinning they will probably have a change of heart.
  • by f5426 ( 144654 ) on Thursday January 11, 2001 @05:24AM (#514782)
    From what I understand, this security hole has been there for years. This was (mostly) harmless as long as the machines were not connected to a global network (well, it could be used to do a lot of harm, but only by someone who already has access to the network where the database runs. Anyone technically given access to the internal network of a company can do a lot of harm, anyway. Most internal security is security-through-obscurity. Hence, when you know how to search...)

    What most guys don't realise is that a lot of the closed-source software currently running on many computers contains such backdoors, generally implanted to ease remote maintenance (and cut costs). I, for one, would be _very_ surprised if there were no such backdoors in the various incarnations of proprietary operating systems.

    Cheers,

    --fred
  • by AftanGustur ( 7715 ) on Thursday January 11, 2001 @05:13AM (#514790) Homepage

    You can download the source here [sourceforge.net]

    According to the page, it was registered at SourceForge on 2000-Jan-28 15:37.
    --
    Why pay for drugs when you can get Linux for free ?

  • by deusx ( 8442 ) on Thursday January 11, 2001 @05:15AM (#514796) Homepage
    Wow.

    Even more... If you read the saga of the backdoor here [interbase2000.com], it seems that not only was the backdoor known to Inprise R&D engineers, but when the original creators of Interbase (no longer a part of Inprise, but now part of the Firebird development fork) brought the security breach to their attention, the Inprise engineers were forbidden to speak to them.

    Furthermore, they realized that this backdoor was not only in the Open Source release, but also in the last 3 closed-source versions of the database. So they fixed the Firebird source, and then, even with the company itself forbidding its own engineers to contact them, they wrote a binary patch program to disable the backdoor in previous versions.
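
    For what it's worth, the general idea behind such a binary patch is simple enough to sketch. This is not their actual patch tool, and the byte pattern below is a placeholder; it just shows the technique of scanning an executable for a hardcoded credential and overwriting it in place.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        const char pattern[] = "OLD_BACKDOOR_PW";   /* placeholder, not the real string */
        const char filler[]  = "XXXXXXXXXXXXXXX";   /* same length, neutralizes it */

        if (argc != 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "r+b");
        if (!f) { perror("fopen"); return 1; }

        /* slurp the whole file into memory -- fine for a sketch */
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);
        char *buf = malloc(size);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) { perror("read"); return 1; }

        /* find every occurrence of the pattern and overwrite it in the file */
        for (long i = 0; i + (long)sizeof(pattern) - 1 <= size; i++) {
            if (memcmp(buf + i, pattern, sizeof(pattern) - 1) == 0) {
                fseek(f, i, SEEK_SET);
                fwrite(filler, 1, sizeof(filler) - 1, f);
                printf("patched offset %ld\n", i);
            }
        }
        fclose(f);
        free(buf);
        return 0;
    }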

    Imagine that. Even while being slapped in the face, these guys fixed their product for them.

  • by alteridem ( 46954 ) on Thursday January 11, 2001 @05:15AM (#514797) Homepage
    I agree that many software houses do this, but I doubt it is for control or power. How many stupid users are out there who mess up their systems or forget their passwords? They end up calling tech support and expect to be able to get stuff fixed. These users just don't realize that if the tech support guys can get in, then it is a security risk. But then again, not much of reality makes sense to the suits...
  • by Outland Traveller ( 12138 ) on Thursday January 11, 2001 @05:18AM (#514805)
    Lots of people here are apparently surprised that it took so long for this backdoor to be found. I thought I'd try to present an explanation.

    1. Interbase wasn't officially released under an open source license until last summer. I, at least, did not spend any serious time with it until the license was correct.

    2. The open source interbase got off to a very slow start. Here's why:

    - Borland didn't release all the tools required to build and test interbase code.
    - Many of the original developers had left Borland, meaning that there was a shortage of mentors for new developers.
    - Borland yanked startup funding at the last minute from the group that was going to take over the management of the code base, causing many to question interbase's future.
    - Documentation of the code base is still unfinished.
    - The codebase is large and complex.

    Independent interbase builds (firebird on sourceforge) didn't start happening until very recently. In my mind they found this bug faster than I would have expected.

    -OT
  • by FallLine ( 12211 ) on Thursday January 11, 2001 @05:48AM (#514809)
    Uh. First off, that doesn't mean open source products are any more secure. Second, many of them do not involve buffer overflows at all, but rather race conditions, poor checking of passwords, fundamentally flawed security architecture, terribly stupid flaws (remember phf?), etc. Third, more difficult for whom and in what way?

    It would take a hacker a significant amount of time to discover a properly hidden, hardcoded backdoor in a closed-source product. Notice how many years it took ANYONE to discover this one. That is "difficult", or rather time-consuming, for the hacker. You might say it's easy to reproduce, but that's true for literally hundreds of Open Source security flaws. Once a hacker discovers a means and releases an exploit, the work is done. It doesn't matter to the hax0r, aka script kiddy, whether exploit.c sends "LET ME IN BACKDOOR" or a bunch of machine code to the target host. Furthermore, it's quite easy to test for the existence (or at least the probable existence) of a security flaw caused by improper bounds checking: you just send a bunch of different programs extra-long strings on various inputs until something crashes, then do the work to make the exploit happen. Compare this with trying to find a well-hidden backdoor in a closed-source product: you either try to reverse-engineer the binary or you try brute force. In either case, it's much harder to detect.
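
    As a crude illustration of that "keep sending longer strings until something falls over" approach, here is the sort of throwaway probe people write. The address and port are placeholders (3050 just happens to be Interbase's usual listener), and a dropped connection is only a hint of a possible overflow, not proof.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Connect, send len bytes of 'A', and report how the service responds. */
    static int probe(const char *host, int port, size_t len)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) return -1;

        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);
        inet_pton(AF_INET, host, &sa.sin_addr);
        if (connect(s, (struct sockaddr *)&sa, sizeof sa) < 0) { close(s); return -1; }

        char *payload = malloc(len + 1);
        memset(payload, 'A', len);
        payload[len] = '\n';
        write(s, payload, len + 1);

        char reply[64];
        int n = (int)read(s, reply, sizeof reply);   /* 0 or -1 may mean it died */
        free(payload);
        close(s);
        return n;
    }

    int main(void)
    {
        /* double the input size until the service stops answering */
        for (size_t len = 64; len <= 65536; len *= 2)
            printf("len=%6zu -> read returned %d\n", len, probe("127.0.0.1", 3050, len));
        return 0;
    }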

    So the question remains: easier for whom, and how is that relevant? It's really not terribly relevant, if you ask me. The question is how secure YOUR product is at the end of the day, in YOUR environment, for YOUR needs. If you start overgeneralizing by saying "Open Source is secure, Closed Source is not", then you're making a fundamental mistake. Rhetoric and dogma are not conducive to practical security.

  • It was checked - that's how the hole was found. You can't security audit code in a short period of time - it takes a while. Anyway, it was because of the source release that this was found. Otherwise, this _never_ would have been fixed.
  • Firebird doesn't have the problem!? Then why on their web page [interbase2000.com] do they have the advisory? And what is this code that I just pulled from the CVS doing?

    char *PWD_ls_user()
    {
        if (strcmp(ls_user, "Firebird ") == 0)
        {
            mk_pwd(ls_user);
        }
        return ls_user;
    }

    char *PWD_ls_pw()
    {
        if (strcmp(ls_pw, "Phoenix") == 0)
        {
            mk_pwd(ls_pw);
        }
        return ls_pw;
    }

    Perhaps you mean it doesn't use the same backdoor password? If you are using Firebird, I would suggest you change these lines in interbase/jrd/pwd.c to something else for the time being (note: *QUICKFIX* only). If there are any Firebird developers around, I wouldn't mind hearing why this isn't the same problem. What's more, the "solution" described on the home page, namely "change the super-secret backdoor password to something else", won't work. That's security through obscurity in its purest form.
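
    If you do take the quickfix route, at least don't just swap in another shared literal; a slightly less bad stopgap is to generate a per-install random value and paste that into pwd.c before building, so every site's magic string at least differs. A minimal sketch, assuming a system with /dev/urandom:

    #include <stdio.h>

    int main(void)
    {
        unsigned char raw[8];
        FILE *f = fopen("/dev/urandom", "rb");
        if (!f || fread(raw, 1, sizeof raw, f) != sizeof raw) {
            perror("/dev/urandom");
            return 1;
        }
        fclose(f);

        /* print a hex string suitable for pasting in place of the literal */
        printf("replacement literal: \"");
        for (size_t i = 0; i < sizeof raw; i++)
            printf("%02x", raw[i]);
        printf("\"\n");
        return 0;
    }

    That's still obscurity, of course; the real fix is to rip the special case out entirely.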
