On Measuring the Effectiveness of a Search Engine?

younix asks: "I am working at a major telecom provider in the Netherlands, and we are implementing a new system for knowledge management. Our system works quite well at the moment, but there is no way to actually measure the effectiveness of the search engine we are using. By effectiveness I mean: how much effort do I have to make to find a particular solution that is present in the knowledgebase? Is it possible to express this in a number? Has anyone in the Slashdot community done anything similar before? I would really appreciate any pointers to work done by others on this subject." I think that the single most important aspect of a search engine is the mean time it takes to get useful results out of it. But how do you measure such a thing (or any other usable characteristic) in an objective manner? Therein lies the problem.
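For what it's worth, expressing this as a single number is exactly what standard information-retrieval metrics try to do. One common measure is mean reciprocal rank; here is a minimal sketch, assuming a hypothetical log that records where the first useful result appeared for each query (the log format and numbers are invented):

```python
# Hypothetical log: for each query, the 1-based rank of the first result
# the user found useful, or None if nothing useful turned up.
logs = [1, 3, None, 2, 1]

def mean_reciprocal_rank(first_useful_ranks):
    """Average 1/rank over all queries; a failed search contributes 0."""
    scores = [1.0 / r if r else 0.0 for r in first_useful_ranks]
    return sum(scores) / len(scores)

print(mean_reciprocal_rank(logs))  # ~0.567; closer to 1.0 means useful hits near the top
```

Mean time-to-answer could be tracked the same way, averaging elapsed seconds per query instead of reciprocal ranks.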
  • How about a simple rating system? i.e., Rate This Search (1 ----- 10)

    Have the search result pop up in a framed window with the rating applet at the top (or bottom, or a popup, or whatever). The user then rates the relevance of the link based on the information it gives as a result of his/her query: 1 being a crappy link (keywords used to induce search engine hits) and 10 being a great link (relevant topic, good information).

    I've seen sites from Microsoft to CNN use things like this to rate support questions and news stories. So why not something that people use every day?
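The backend for such a rating widget can be tiny; here is a sketch with made-up names, assuming ratings are keyed by the query that produced the rated result:

```python
from collections import defaultdict

# Hypothetical store of 1-10 ratings submitted through the widget,
# keyed by the query string that produced the rated result.
ratings = defaultdict(list)

def rate(query, score):
    """Record one user's 1-10 rating for a result of this query."""
    if not 1 <= score <= 10:
        raise ValueError("rating must be between 1 and 10")
    ratings[query].append(score)

def average_rating(query):
    """Mean rating for a query, or None if it has never been rated."""
    scores = ratings[query]
    return sum(scores) / len(scores) if scores else None

rate("router config", 8)
rate("router config", 3)
print(average_rating("router config"))  # 5.5
```

Comparing these averages across queries (or across engines) is what makes the number meaningful, per the comment below.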
  • First off, you may be able to determine some "satisfaction" factor for your search engine, but it's meaningless unless compared against the same rating of other search engines. For example, if you create a rating system on a scale of 1-100, where a 10 means "worthless piece of garbage," a 10 may still be great if all other solutions average under a 5 (because search engines suck in general).

    The obvious problem now is: how do you come up with a rating? The non-tech approach, and probably the most effective, is to survey the users. Ask questions like: Did you find a suitable result? How many times did you refine your search? Where was the result located (1st, 50th, etc.)? Yadda-yadda.

    Another way is to keep track of the average rating of the links that users actually click on. The search engine probably assigns a numerical relevance score to each result; when a user clicks a link, store that number. You may also want to store the rank of the link (1st, 50th), because you may have 50 links with relevance of 99-100%, but the fact that the user has to scroll to link 40 to find what he/she is looking for isn't so great. The problem with both of these methods is: how do you know the link the user chose is what they wanted? Maybe it was a totally wrong result and they hit the back button and tried another one. Some session tracking would be in order, I would guess. With that, you could track the number of searches a user executes.

    Stats would look something like this:
    Session 1
    =======
    Searches: 4
    Links followed: 10
    Final Link Relevance: 85.4%
    Final Link Rank: 3

    Use these numbers, compare them to other search engines (along with surveys) and see how you're doing....
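The per-session stats above could be collected with something like this sketch (the click-log format, ranks, and relevance scores are all invented for illustration):

```python
class Session:
    """Tracks one user session: searches run and (rank, relevance) of clicks."""

    def __init__(self):
        self.searches = 0
        self.clicks = []  # list of (rank, relevance) for each link followed

    def search(self):
        self.searches += 1

    def follow(self, rank, relevance):
        self.clicks.append((rank, relevance))

    def report(self):
        """Summarize the session in the format shown above."""
        final_rank, final_relevance = self.clicks[-1]
        return {
            "Searches": self.searches,
            "Links followed": len(self.clicks),
            "Final Link Relevance": final_relevance,
            "Final Link Rank": final_rank,
        }

# Replay the hypothetical Session 1: 4 searches, 10 links followed,
# ending on the 3rd-ranked link with 85.4% relevance.
s = Session()
for _ in range(4):
    s.search()
for rank, rel in [(1, 60.0)] * 9 + [(3, 85.4)]:
    s.follow(rank, rel)
print(s.report())
```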


  • Just pick the ones you feel/believe/etc work best for you for the respective queries.

    If you can't tell the difference then it doesn't really matter does it? ;)

    If you're the 10% of the bunch with really unusual queries, you're too different to bother with. There's no point having someone watch and time you as you search for 101 different things that you may never search for again (since you'd have bookmarked or saved them). Besides, you've probably already figured out which search engine is good for what, e.g. Google, HotBot, AltaVista. Pity about Deja News, though :(. A whole class of searches died with it.

    If you're among the 80% who are "John/Jane Doe," then heck, Yahoo probably has what you want. For example, while setting up the Net for your aunt, set the default homepage to Yahoo. If after months the homepage still hasn't changed and she hasn't complained, then maybe she's not ready for a change, eh?

    The remaining 10%? Doh.

    Cheerio,
    Link.
  • It all depends upon the person who is searching.

    Suppose I want to find information about the Ikeda attractor. I do a search on 'Ikeda Attractor' and it turns up a few pages with pictures of it, but nothing that really describes what it is. From the images I realize that it is a chaotic attractor, so I do a search on 'chaotic attractors'. The second search turns up more pages that describe the phenomenon of chaos. These pages, however, have links to other pages that describe chaos in general and how to represent chaotic attractors. I then have to follow these links to get to the pages that describe exactly what an attractor is. Many of these include the Ikeda attractor as a specific example. Search completed.

    This is similar to any research-oriented search. In a university library, the preliminary survey of the Science Citation Index, or whatever literature search tool you use, rarely turns up good results. These initial results are typically only good for directing you to further results via the bibliography (the articles usually tell you in the introduction which other papers are good for what topic). You then locate the referenced papers, and sometimes you look at the references in those. I usually need to read the intros to about 5-6 articles for every good foundation-type paper that I come across.

    Search engines in general won't improve any time soon, because every search is personal and customized. A little bit of effort and a little bit of experience are all that you need.

    Finally, I'm not going to proofread this, since I'm late for work, but I hope it all makes sense.

  • I've been using a couple of Internet search portals these past few years (mainly Yahoo, HotBot, AltaVista, and now my favorite, Google), and if there's one thing I've learned, it's that it's all in how you write your query.

    The use of simple operators like AND, OR, NOT and parentheses can turn a fruitless search into a simple one.

    So what is useless for one user can be quite helpful for another one. It's all in how you put your mind to work.
    Tongue-tied and twisted, just an earth-bound misfit, I
  • The problem with trying to assess effectiveness for users is the old problem that everyone works in different ways, and users are often capable of great idiocy.

    I was involved in the Y2K coverage for MCI WorldCom's web services. We had constant calls from scared-silly customers who'd seen that support for their area was down around 50-60% and were worried they'd lose service.

    What actually happened: we were using Ultraseek. Customers would type in a term like ATM and then get the search results page. Next to each result was an accuracy score, but the page didn't say it was an accuracy score, so they all interpreted it as a service-level score; hence the panic.

    So, in conclusion, any metrics you impose are going to end up being subjective and will assume a basic level of user understanding that you just can't count on.

  • Have the search result pop up in a framed window with the rating applet at the top (or bottom, or a popup, or whatever). The user then rates the relevance of the link based on the information it gives as a result of his/her query: 1 being a crappy link (keywords used to induce search engine hits) and 10 being a great link (relevant topic, good information).

    There's more to it than just "good information." The biggest problem with a rating system is correcting for people's varying capacities for specifying good search terms. Someone who is bad at specifying search terms will rate perfectly good resources as "poor" because they were not what they were looking for.
