The Internet

CNN on Common Name Resolution Protocol

CamelMan wrote to us with an interesting story over at CNN. The Internet Engineering Task Force is apparently on a fast track to get the Common Name Resolution Protocol into place as early as next year. It will be, however, mostly aimed at intranets, and not at the Internet writ large.
This discussion has been archived. No new comments can be posted.

  • Ok -- I promise to fire up the search engine and learn more on my own right after this. Is this some sort of LDAP extension or is it more like a standardized crawler/search engine combo? Sorry for my cluelessness.
  • IMHO, this new "common name format" is just another unnecessary layer. The main bragging point was that you don't have to remember a long, complex URL; instead you can just type in a document name. The only problem with this is that if an intranet site is any good, you should never have to type in a URL at all. It's just a matter of a navigation interface that should be on the site anyway. For example, they said you could just type in "1996 budget report" instead of the URL. But if a site is designed correctly, you should be able to navigate to reports, then budget, then 1996. Or budget department, then 1996 budget report. This common name system just fixes something that isn't broken anyway -- the URL system. All it provides is an excuse for web site designers to be lazy about designing an intranet site (who needs nav bars? Just type in "memo from Robert T. Frog in Budget Department to Bob Q. Flair in the Widgets subcommittee of the Projects department on August 8, 1996!").
  • i'll take purer tcp/ip and DDNS in win2k over yet another layer.

    matt
  • I agree 100%

    don't forget, if they try to port this to the web then they can designate another internic to control the sales of keywords. just like domain names.

    completely unnecessary.

  • With this proposed standard set to eliminate the typing of "dots, dashes, and backslashes" to get to a page, it's unnecessary--I don't know about you, but I've never had to type a dash or backslash to get to a page.

    Either that, or the clueless reporter is getting computer-related stories wrong again, which wouldn't surprise me in today's media. Are there *any* reporters that get it right?
  • ...they said you could just type in "1996 budget report" instead of the URL. But if a site is designed correctly, you should be able to navigate to reports, then budget, then 1996. Or budget department, then 1996 budget report.

    Or even just type "1996 budget report" into a search line on the site itself. I agree that this could be handled on the individual webpage, but it would admittedly be rather convenient to just type it into the URL line. But, as always, I suspect this will get hijacked, so that typing "The Foo Company" will only work as I'd like if the Foo company has paid all the right fees to all the right people... sigh

  • URLs are not good navigation tools. A URL should represent a certain domain, a certain host, and some directory information about which file should be served. One of the main reasons that there are so many .com addresses is that URLs are too simple, so people use the system in a way it wasn't intended (having a different .com for every possible topic) instead of using hostnames as names of machines on the internet, with a user-level protocol for user-level information. This should have been done a long time ago.
  • umm, i dunno where you go on the internet, but i know a number of pages whose URLs include dashes and backslashes.
  • I can picture Mindspring or (shudder) AOL setting up a CNR server for their users, so their users can access common sites and information sans URL. A mercenary ISP could even sell entries in their CNR database.

    ----
  • Am I missing something? The whole reason URLs get hard to remember is because some sites are designed improperly and HTML documents are given cryptic names. But just as web maintainers fail to maintain an easy-to-remember document hierarchy, they will also be responsible (presumably) for setting up the common name servers. Inevitably, they will just make the same mistake once given this Common Name Format. (I.e., every large company I've ever worked with has had so much documentation that they had a numbering system for documents. I'd imagine this is what they'd use if given a Common Name layer. Is it that much easier to remember "RSV 1345 HM Control Interface Design Document" than "spudge/current/dds/" and grab the document from that directory?)

    At any rate, abstraction by way of layers like these only obfuscates matters more - and keeps end users from having any clue what's really going on. The reason help desks get so many calls isn't cause people are stupid .. it's because technologies designed to make stuff "user friendly" only inhibit users from learning how to solve problems themselves. Techies on a mission to make everything easy for end users are (knowingly or not) contributing to the dumbing down of "end users" - I often charge M$ does the same thing. Hiding functionality behind a nice curtain only makes matters worse ... and just when the general public was starting to get the hang of things .... sigh.

  • I've seen hyphens; a dash is actually about twice as long as a hyphen and used for a completely different purpose. I've never once seen a backslash. The author most likely confused the backslash in c:\windows with the slash in http://foobar.com/.
  • No, if you use SMB with NetBios, you still need WINS, no matter how many nifty tools you get on the internet side. If you want to dump WINS, dump Windows. If you can't do that, dump your Domain Controllers and install Samba [samba.org] on a Linux/BSD/Unix box. Samba's implementation of WINS integrates much better with internet naming systems.

    ----
  • It looks like the idea came forward from someone at realnames. I'm not particularly impressed with their idea. Just replace a URL with a simple text string... and whoever paid us most for that string will get the hit...

    I don't think that people have much trouble in using search engines... Why not just put a search engine on your intranet server and have the home page set to that server?

    It solves the problem of finding the 1996 budget report without knowing the URL... and it doesn't involve replacing browsers, adding in new servers or anything particularly complicated...
  • Not that I trust journalists -- technical and especially mainstream -- to accurately report on technical matters...

    The article implies that a local/intranet registry will have to be maintained describing the mappings between human friendly names and URLs. So instead of remembering URLs, users will have to remember how the thing's registered in the local registry, quite possibly in a way that's not intuitive to them. A user might want the 1998 budget report and look for "1998 Budget Report," but the finance people might've registered the document in the local registry with a title that's only intuitive to them (e.g., "1998 Expenditures"?) and others in their field. A title that's intuitive to person A in discipline A might not be all that intuitive to person B in another discipline. Sounds like we'd be back at square one rather quickly.

    There's a point at which we have to stop dumbing down computers and standards and start insisting on smarter users and smarter website maintainers (i.e., put a little thought into how documents and pages are organized). Perhaps we're reaching that point...

  • Well, ok, I will admit the hyphen/dash difference, but backslashes really do exist in some URLs. The only places I remember seeing them have been in query results, so maybe that means they are not "really" part of the URL. Then again, you can type it in and it will take you to the same results page from a different computer, and that's close enough for me.
  • As entertaining as it is to see passionate debates on slashdot waged between people who don't really know what they're talking about, I thought I'd clear up some potential questions here.

    This is aimed at Intranets. Why? Because this is, in short, a way to connect web requests with a bunch of resources listed in a directory server such as an LDAP server, Novell Directory Server or Micro~1 Active Directory (coming soon to foolish corporations everywhere). "Common Name" or CN, is a standard part of X.400(?)/X.500(?) naming schema which are used in such directories. If you look at the contents of an X.509 certificate, you'll see it as part of a "Distinguished Name" or DN. Something's DN should uniquely identify it. For example, here are some DN's:

    • CN="George Bush" OU="Texas Governor" CO="Republic of Texas"

    • CN="George Bush" OU="Ex-President" CO="United States of America"
    So, allowing someone to type in a CN or partial DN into their browser is of interest to corporations in the near term, because many of them are already deploying Directory Services to centralize their information management.

    The "Common Name" proposal also contains RDF [w3.org] schemas to describe and search for documents based on CN's and partial DN's. This might have some applicability to widely distributed, anarchic systems such as the web. Of course, RDF is a cutting-edge application of XML which isn't even fully ratified, so don't hold your breath.
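    To make the "search root as context" idea concrete, here's a toy sketch of a CN lookup. A plain Python dict stands in for a real LDAP/X.500 directory, and the organization, entries and URLs are all invented for illustration:

```python
# Toy sketch: resolving a Common Name against a directory "context".
# A real deployment would query an LDAP server; this dict stands in for
# the directory, and every name and URL here is invented.
directory = {
    ('o=Acme', 'ou=Finance', 'cn=1996 Budget Report'):
        'http://intranet.acme.example/finance/1996/budget.html',
    ('o=Acme', 'ou=Marketing', 'cn=1996 Budget Report'):
        'http://intranet.acme.example/marketing/1996/budget.html',
}

def resolve(cn, base=()):
    """Return URLs whose DN ends in the given CN under the search base."""
    return [url for dn, url in directory.items()
            if dn[-1] == 'cn=' + cn and dn[:len(base)] == base]

print(len(resolve('1996 Budget Report')))                       # ambiguous: 2
print(resolve('1996 Budget Report', ('o=Acme', 'ou=Finance')))  # unique hit
```

    Without a base the CN is ambiguous; scoping the search to a subtree is what makes a partial DN useful.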

  • I think what this will do is be basically a URL search engine. You type in something you are looking for and it pops up. I don't see anything much more advanced getting worked out so soon.
  • In all fairness, the explosion of the web could never have been foreseen. And it's not exactly easy to get a new protocol layer out the door.

    But more importantly, I disagree with your assessment of the use of domain names. I'd charge the popularity of .com addresses is the result of media hype. For example, www.such.com/sirslud seems inherently uncooler than www.sirslud.com ... because the public associates having a whole .com to yourself as pretty damn cool. Why, you'll never have a building as large as Coca-Cola's, but you can have a web address that's just as slick and simple as theirs!

    www.coke.com/somesuchpromo is just as easy to remember as www.somesuchpromo.com, if not more intuitive, but it seems to me that there's a certain unsaid rule that if you need to go beyond the .com, it's not a respectable front door to a commercial site. If anything, I'd say that on the commercial side, URLs are misused.

    Not to mention that document names have nothing to say about their hierarchy and context - URLs do if the site is structured properly. To pull the example off CNN, 1996 Budget Plan says nothing about the group, stage, and version of the document. At least a URL, and consequently knowing the directory structure leading to the document, enforces a certain amount of organization and context for the document. Common names simply add one more layer to maintain and organize, a thing we all know techies hate the worst. ;) If anything, I see this as a technology to keep people from having to type file extensions and slashes .. is this really worth a whole other server/protocol/admin_duty stuffed onto your network?

  • And since the same people that designed the badly designed site will also design the mapping of keywords to URLs, this mapping scheme will probably suck too.


  • Ok, I'll admit to my ignorance. I don't know jack about Windows, WTF is WINS?

    I'm speaking as someone who uses SMB over TCP/IP from Windows 95, Windows NT, FreeBSD and, most commonly, Linux. As near as I can tell, there are no WINS servers on our network (95 machines are set to use DNS, NT machines have blank WINS entries), but everything works fine.
  • Since you can register 1998 Expenditures, 1998 Budget, and any others that people are likely to look for, it won't be that difficult. Simply index _every_ way people try to find it, and if you find out they can't find it, register what they're searching for.
  • It looks like this could really be a good thing, in terms of standardizing access for search engines.

    The Macintosh has its "Sherlock" system right now, which lets you use the same UI to access a bunch of different search engines. Similarly, there's a way to provide an interface to Netscape so that typing in "? searchterm1 searchterm2" instead of a URL will work (the Google site describes how to configure Netscape to use Google for this). I'm sure there's something similar for Internet Explorer.

    A standardized interface for this sort of thing that all search engines and client software could agree upon would be a *very* good thing.

    You could even end up with a hierarchical search engine -- if a site has a "robots" file that prevents it from being externally indexed, but *also* provides this unified interface, then some searches could transparently be forwarded to the site's own engine (e.g. I could do a search on slashdot and get included up-to-date hits on freshmeat and linuxtoday). This should do wonders for outdated links.
  • Directory servers are ideal for just that. Directories. Directory information. Directory entries can contain attributes, security information, and organization hierarchy, and most importantly a schema of allowed attributes based on position within the hierarchy. Being a simpler, friendlier revision of X.500 (okay, I can't remember every protocol name :P) this is what it was designed for, and this is what it's good for.

    If your point is that corporations are pushing it into their intranets cause it was quite the buzzword for awhile, I'll agree. It does make storing vast amounts of hierarchical data very fast, but so does .. well, a file system. I'm not against the whole CN thing, and consequently the deployment of directory servers, but I /am/ against using it for the resolution of documents and files. It's perfect for looking up names, records, entries. But it's absolute overkill for document location resolution. Akin to using a tank to kill an ant.
  • I hadn't considered search engines. I think though the commonly used url naming schemes are pretty much solely alpha-numeric w/ .s and /s
  • Common names are used to navigate the Web today in the form of Centraal's RealNames, Netword's NetWords, America Online's KeyWords, Netscape Navigator's Smart Browsing and CompuServe's Go Words.
    And we all know just how helpful and easy those are...

    Seems more useful to build a quick site search. If you are looking for a 1996 Budget Report, there are bound to be a bunch of them on any decent sized company's intranet (i.e., one for each department, etc) so a CN solution could be painful ("Budget Report for 1996 for the Marketing Department" --oops, no, it's "Marketing Department 1996 Budget Report"...) but a site search would give you a whole set of choices right off. And it's a damn sight easier, too, than having to name all of your documents in every way that you think someone might try to access them...
  • They are legal. From RFC 2068 (HTTP/1.1):


    3.2.1 General Syntax

    URIs in HTTP can be represented in absolute form or relative to some
    known base URI, depending upon the context of their use. The two
    forms are differentiated by the fact that absolute URIs always begin
    with a scheme name followed by a colon.

    URI = ( absoluteURI | relativeURI ) [ "#" fragment ]

    absoluteURI = scheme ":" *( uchar | reserved )

    relativeURI = net_path | abs_path | rel_path

    net_path = "//" net_loc [ abs_path ]
    abs_path = "/" rel_path
    rel_path = [ path ] [ ";" params ] [ "?" query ]

    path = fsegment *( "/" segment )
    fsegment = 1*pchar
    segment = *pchar

    params = param *( ";" param )
    param = *( pchar | "/" )
    scheme = 1*( ALPHA | DIGIT | "+" | "-" | "." )
    net_loc = *( pchar | ";" | "?" )

    query = *( uchar | reserved )
    fragment = *( uchar | reserved )

    pchar = uchar | ":" | "@" | "&" | "=" | "+"
    uchar = unreserved | escape
    unreserved = ALPHA | DIGIT | safe | extra | national

    escape = "%" HEX HEX
    reserved = ";" | "/" | "?" | ":" | "@" | "&" | "=" | "+"
    extra = "!" | "*" | "'" | "(" | ")" | ","
    safe = "$" | "-" | "_" | "."
    unsafe = CTL | SP | <"> | "#" | "%" | "<" | ">"
    national = <any OCTET excluding ALPHA, DIGIT,
    reserved, extra, safe, and unsafe>



    I'd say they fall under national
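    A quick way to check that claim is to translate the RFC 2068 character classes quoted above into Python sets (control characters spelled out by code point) and see where '\' lands:

```python
# Check where '\' falls among the named RFC 2068 URI character classes.
import string

ALPHA, DIGIT = set(string.ascii_letters), set(string.digits)
reserved = set(';/?:@&=+')
extra    = set("!*'(),")
safe     = set('$-_.')
# CTL, SP, <">, "#", "%", "<", ">"
unsafe   = {chr(c) for c in range(0x20)} | {chr(0x7F)} | set(' "#%<>')

named = ALPHA | DIGIT | reserved | extra | safe | unsafe
print('\\' in named)   # False: backslash falls through to "national"
```

    So the article's backslashes are technically legal in a URL, if only by falling through to the catch-all class.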
  • WINS is basically a cheesy form of DNS for Windows. If you double-click Network Neighborhood, Windows sends a message out to your subnet saying "Who are you?", and the machines respond. As you can guess, this is horribly inefficient. WINS allows one machine (i.e. your Samba server) to respond to the questions about name -> IP resolution. It also can be used to allow multiple-subnet browsing.
    Now does anyone know how to make dhcpd under linux tell windows machines where the wins server is?
    My guess is it's something like
    option wins {172.16.0.1} ....
    • [A directory] does make storing vast amounts of hierarchical data very fast, but so does .. well, a file system.
    I think you're thinking on the wrong scale.

    Let's say you're PricewaterhouseCoopers. You are the recent merger of two huge companies that are over a century old. You have 150,000 people worldwide who need documents on everything from tax incentives in Botswana to OS/400 vulnerabilities. Even if you could organize all the data available in the firm, you'd never know when somebody who specializes in electronic banking might suddenly need to know about the Japanese fishing industry. File system? "Properly organized web site?" As if!

    I've pointed it out before, but little of the slashdot readership has experience with enterprises on this scale, so I'm just offering some perspective from someone who does.

  • From what I gather from the CNN story, the point is to make it easier for someone to find the information that they're looking for.
    ..."You won't have to remember the HTTP address. You can just call a document by its name,"... ... "Imagine a company like Boeing having a database of all the engineering documents for the F-22 fighter and being able to pull up documents by their regular names and not their URLs." ...
    if a company had a database for some set of information, say engineering documents, or phone numbers, or part numbers, or whatever, wouldn't it be fairly simple to just make a web interface to the database (.asp or cgi or perl etc...) so that an end user could just enter a keyword to search the database?

    I think that this protocol would lead to sloppier web site design. Yes, I've looked for information before and wound up on a page with alpha(-numeric-)bet soup for a URL, but if the site (inter- or intra- net) was organized better it wouldn't have to be like that. If you needed an annual report, it would be great if you could just go to the company's website and find it in two clicks, or just go right to www.somecompany.com/reports/year.html It seems that whenever I'm trying to look for something that I think is fairly straightforward on a website, I have to jump through multiple hoops to find it.

    The Common Name standard could eventually be integrated with e-mail standards to allow end users to send messages without knowing the recipient's e-mail address.
    great, mail to John Smith would go to how many people?

  • WINS never should have existed in the first place. I don't want another naming service to mess with at all - on our UNIX boxes we have NIS host resolution off and it uses just DNS; if MS would make their stuff a bit more flexible the same could be true for the PC world. For the most part every modern platform can talk DNS. Any new protocol that comes out will have to be added to existing systems - I don't want to mess with doing this. This whole name resolution nightmare was created by MS; make them change to fix it, not the rest of us.
  • To speculate and elaborate a little on Red Pen's "Reality Check" ....

    Presumably, as a sys admin, you have a web server running. Along comes your boss to tell you to implement the common name protocol. Here we go:

    1. You set up some sort of Directory Server. This will store the mapping and define the category schemas of your intranet's store of documents. Depending on how logical your file system's schemas are, your directory schema might be very similar.

    2. You add entries to your directory server - one for every document you wish to be accessible by common names. Good thing about this: your entry names can contain spaces .. yay. :P

    3. Browsers, once they support it, will allow you to configure a directory server to point at (Netscape already does, for use with your address book).

    4. Browsers will have functionality such that when you type in a document name, it searches said directory server, beginning at a particular root, for said entry. Results are returned, I suppose ... depends how this is implemented on the browser with regards to how results are handled/displayed.

    5. You have to concurrently maintain your filesystem in sync with your directory server, or alternatively, develop a tool which provides some sort of method to 'check in' a document to your intranet that handles the moving of the file to the proper location on the file system and additionally adds an entry to the directory structure reflecting its location in your document hierarchy.

    6. Your end users only have to remember document names, not full paths, but they will get different results based on search roots. (I.e. ... only under 1998 documents, for instance.) Each document will, I assume, have a full DN (distinguished name) that describes its location in the hierarchy (for example: ou="netscape.com", c="1998", c="Budgets", cn="Version 0.6") but if you search from, say, ou="netscape.com", c="1998", c="Budgets" as a search root, a search on Version 0.6 will return only the one record, since it must be a unique identifier (thus the name distinguished name).

    Keep in mind this is a speculation of how it will be implemented. I have lots of directory server experience, but I have yet to see it implemented as a layer between a browser and a file structure.

    Hope this helps - it's not exactly a search engine, and it does end up in more work for the sys admin, as far as I'm concerned. And as always, I never suggest that I'm always right. :) Suggestions are welcome.
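    Step 5 is the part most likely to bite, so here's a minimal sketch of what such a 'check in' tool might look like. The entry store is just a dict standing in for the directory server, and every path and DN below is invented:

```python
# Hypothetical "check in" helper: one call both copies a document into
# the intranet file tree and registers its common name, so the file
# system and the directory can't drift out of sync.
import os
import shutil

entries = {}   # DN -> file path; stands in for the directory server

def check_in(src_path, dest_dir, dn):
    """Copy the document into place and record its directory entry."""
    os.makedirs(dest_dir, exist_ok=True)
    dest = shutil.copy(src_path, dest_dir)
    entries[dn] = dest
    return dest
```

    A real tool would of course talk LDAP instead of updating a dict, but the point stands: one atomic "check in" operation, not two manual ones.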


  • I don't know the details on the Common Name Resolution Protocol, but if it would include support for, say, Unicode or 8-bit ASCII, it would be a blessing for many languages other than English. Right now it is really annoying because you can't make a simple URL of a word with, for example, the Swedish characters å, ä and ö. Other Scandinavian languages, German and perhaps others have the same problem. The solution that is commonly used is to make the URL with a or o instead of ä or ö, but that looks too awkward sometimes. So in many cases English is used instead, although it is a pure Swedish site. This is not good for the Swedish language, which already is marked with too much English influence.

    So I say - go Common Name Resolution Protocol!

  • I have two very large concerns with this. What sort of security measures will be built into this? XWZ Inc. might not appreciate an employee from ABC Inc. being able to type in "budget reports for XWZ Inc." and getting it. It also seems to me that this type of system would make internet censorship a hell of a lot easier to implement...

    "What is now proved was once only imagin'd"
  • I think it's a ridiculous idea, b/c it appears to be now making a search engine of itself, because now, instead of finding your page (whether or not you (1) know it actually exists (2) know where it is (3) care that much to find it) you're going to get every bloomin' page out there with the remotest semblance to yours.. as implied in the article:

    "For example, typing in the word "Apple" might bring up Apple Computer's Web site or information about growing apples, depending on the context of the request. "

    Of course, I also note that the quote says "depending on the context of the request," but isn't that delving, still, into the realm of some sort of pseudo-AI, high-tech search engine? Is this necessary? Are we having soo many problems finding web pages? Or is it some attempt to make us think we're being "more efficient" (re: lazy) while really making us work more? Hmmmm.......

    I think it's a waste of time, money, effort, and the lives of the members of the task/action team...and I personally don't like it one bit (could ya tell?)
  • > when somebody who specializes in electronic banking might suddenly need to know about the Japanese fishing industry

    I have experience in enterprises of large scale .. and I'll again concede it's a great technology for directory/HR, since these records are required frequently, and directories inherently support organization structure and authentication, but the case you describe .. well, I think they happen rarely enough that it's difficult to justify a whole other layer on your intranet management work. I think a properly maintained search engine and intranet does well enough. (And obviously, I concede that as a former sysadmin, these things do tend to be a matter of taste. :) I didn't expect the backlash this story is getting. I thought I was going to be presenting the unheard side. ;)

    However, I'll step back a little on my position on the basis that lots of companies /are/ moving their HR info over to directories .. moving document organization over as well is much more justifiable in this case. I still don't think document organization alone justifies the deployment of such servers.

    It /is/ nice to talk it out with someone who has a clear understanding of the technology involved, tho, since I'm always open to the idea that I'm often wrong. ;)
  • "Context" likely refers to the current root of your search, with respect to Directory Servers and the LDAP protocol.
  • Yes, CNs do support security via the use of ACIs (Access Control Information) in an LDAP entry. Check around for some LDAP documentation. I'm assuming of course that this Common Name Resolution stuff is for use with LDAP servers.

    There are a variety of different security restrictions you can impose on an entry that allow various actions depending on who is doing the query.

  • The problem is that no matter how well-designed your hierarchy is, most end users don't grasp the concept of hierarchies. Do you realize how many people think Yahoo is just a search engine and never, ever drill down through the categories? (Not that Yahoo is especially well-designed, but you get my point.)

    OTOH, the real problem is that end users want to be able to type what they want in plain English (or the badly spelled, ungrammatical, punctuation- and capitalization-free crud that passes for end user English) and get exactly what they want, even if they aren't sure what they want and couldn't express it if they were. Just because you can reference a document as "1996 budget report" doesn't mean your pointy-haired boss isn't going to type "bugdet report 96".
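    For what it's worth, even a dumb approximate matcher closes some of that gap. A sketch using Python's stdlib difflib (the registered names below are invented):

```python
# Approximate lookup: the boss types "bugdet report 96", and fuzzy
# matching still finds the registered common name.
import difflib

registered = ['1996 budget report', '1996 marketing plan', 'widget specs']

def lookup(query):
    # Return the closest registered name, if any is similar enough.
    return difflib.get_close_matches(query.lower(), registered,
                                     n=1, cutoff=0.5)

print(lookup('bugdet report 96'))   # -> ['1996 budget report']
```

    A production registry would want tokenization and spelling correction on top, but exact-string CNs alone clearly aren't enough.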
  • Since WINS is essentially Microsoft's name for their version of NBNS (see RFC1001 [linuxberg.com] and RFC1002 [linuxberg.com]), it's probably covered under DHCP's NBNS option [linuxberg.com] (code 44) [see section 8.5]. If that doesn't work, send bug reports to Microsoft :-)
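    In ISC dhcpd's dhcpd.conf, option 44 is spelled out by name rather than number; something along these lines should do it (the address is the parent post's example, and node type 8 means h-node, i.e. try WINS before broadcasting):

```
# dhcpd.conf fragment: point Windows clients at a WINS/NBNS server
subnet 172.16.0.0 netmask 255.255.0.0 {
    option netbios-name-servers 172.16.0.1;
    option netbios-node-type 8;
}
```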

    ----
  • the problem with your database approach is that it doesn't take into account that the data is (a) distributed (b) diverse
    It would probably work for something like Boeing's manual system (they probably have something like that) but you would basically have to do maintenance on both your server and client systems if you would want to store something else.
    I'm not sure yet what exactly the common name resolution protocol is (the article was a bit vague on that and I don't feel like doing a search on the matter), but I suspect it has something to do with organizing URLs into directories. Thus keywords have a context/directory-specific meaning. If I'm in the directory /emailaddresses/world and type John Smith, well ... But if I'm in /emailaddresses/world/Europe/Sweden/Ronneby/SoftCenter/IPD and type John Smith you will probably get an error because there's no such person at the department I work :)
  • Why not just have a common bookmark file for all those interested in calling things in a particular context? I don't see a need to create a standard around this. To some degree what is being described already exists. I.e., if I want a link to a how-to/faq doc for emacs I go to an emacs web site and click on the how-to or faq link. By going to the emacs webpage I've placed myself in the context of emacs, and when I click on the faq link I should get an emacs faq.

    I fear that this standard could eliminate a degree of freedom on the client side. If I use the phrase "emacs how-to" in the new system proposed, I have to rely on the fact that the person that created the database has selected the best emacs how-to source. I don't know that I would trust a stranger's expertise over mine in this area. Lord only knows the commercial abuse that could arise from this. I think somebody mentioned this in a previous post about a rogue ISP.

    IMHO it seems at best this new idea eliminates one mouse click for those too lazy to do their own research.
  • Many people have argued that "marketing budget report 1996" is easier to remember than widgets.com/divisions/marketing/1996_budget.html, but two other points arise from this:

    1. As many people have pointed out, the CN has to be typed exactly. So if someone said "marketing budget 96" or something then they wouldn't find it. Perhaps URLs, by making people remember the string exactly, actually make things easier to find.

    2. In a well-designed hierarchy, nobody should have to remember a web address/CN anyway.
  • The reason help desks get so many calls isn't cause people are stupid .. it's because technologies designed to make stuff "user friendly" only inhibit users from learning how to solve problems themselves.
    Dammit, SirSlud, it's really annoying when someone else crystallizes a thought that's been banging around in my own head for years, in a better and clearer manner than I ever could.

    Somebody moderate that comment up. And everybody heed this fellow's advice.

  • The only way this is going to work is to have a large common database on the backend. How long is it going to take the banner ad/porn people to start seeding it with keywords?
  • How about this:
    For end users, the standard means no longer having to remember or type in a series of dots, dashes and backslashes in order to find the information they need.

    Umm, is it just me, or is a "series of dots and dashes" completely meaningless?

    It sounds like the author has confused morse code and HTTP... I've never seen any url that looked like http://www.foo.com/..--.-\.-..-\..-\\-....--.-...-./
  • There seem to be people posting here who missed the context thingy mentioned in the article. It's basically a directory service they are proposing.

    I think directories are better than URLs for one reason: URLs are tied to physical locations and directories are not. For instance my email address jgurp@yahoo.com (don't flame please) is tied to Yahoo. If for whatever reason I would want to change providers, I would have to notify everybody I know not to use that address anymore. The same applies to documents. I don't give a flying fuck on which server company X has stored document Y, I just want to access it quickly.

    The way I see it company X would store its document Y somewhere and link one or more keywords in one or more directories to its location (i.e. /world/US/companies/Boeing/products/747/manual/screw10990814097254.3 instead of www.boeing.com/lots/of/cryptic/stuff/that/changes/all/the/time/manual.html).

    By using a local server access can be controlled. By linking the local server into another server (again under a directory), a global directory system can be created.

    This also makes searching a lot easier. Unlike with domain names, it would be no problem for each individual to have a unique branch in this global directory tree (for instance /world/europe/holland/citizens/hometown/me). Of course it should be possible to have multiple paths to the same directory so that the same directory can also be linked under the company I work.
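    A minimal sketch in Python of the indirection this comment describes, assuming a simple in-memory directory (all logical names and URLs below are invented for illustration):

```python
# Hypothetical sketch: a directory maps a stable logical name to a
# physical location, so the location can change without breaking the
# name. Every entry here is invented for illustration.

directory = {
    "/world/US/companies/Boeing/products/747/manual":
        "http://www.boeing.com/lots/of/cryptic/stuff/manual.html",
}

def resolve(logical_name):
    """Return the physical URL currently bound to a logical name."""
    return directory.get(logical_name)

# Moving the document only means updating the directory entry;
# everyone who uses the logical name is unaffected.
directory["/world/US/companies/Boeing/products/747/manual"] = \
    "http://docs.example.com/boeing/747/manual.html"

print(resolve("/world/US/companies/Boeing/products/747/manual"))
```

    The point is that clients hold only the logical name; the binding to a location lives in one place and can be updated there.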
  • ...another internic to control the sales of keywords. just like domain names.

    Exactly my fear. Imagine I have a site that deals with apples and pears. Do I have to register with someone so that people can find me? Will I have to outbid Apple Computers? Will I get sued for being referenced as "apple"? Who the hell is going to find MY site by looking up "apples and pears". I'm sure someone else has the same sick fascinations as I.
    As far as intranets go, you should have enough control over internal documents that navigation to them shouldn't be that difficult. On big sites, an internal search engine works fine as well.
    Sometimes I think these standards bodies just want to give O'Reilly another book to sell.


    _damnit_
  • That sounds like a bigger pain in the butt than URLs. If you're out to save money, that's not the way to do it.
    Joe Rabinoff
  • This common-name-to-URL-mapping technology already exists. It's called `Altavista'.

    Seriously, who ever types in URLs these days? I don't. They are all generated by Altavista searches, followed from links, or saved in bookmarks. I don't think I am unusual in this either.

    This common-name proposal seems to be Yet Another One of those marketing schemes designed to raise money without providing value in return.
  • if this "internet is the future" thing really is to happen, someone's gotta make getting on the web as easy for my mother as it is for me.

    that's the promise of technology, isn't it -- letting it make our lives easier? how in the world can "http://www.whatever.com/blah/etc/yadda.shtml" be considered better than "yadda from whatever" ...

    see my point? no matter how logically the sites are organized (and i'm all for that), the jargon still gets in the way.
  • well quite simply,

    They're rarely used 'properly', and that's because there's no standard for what a 'proper URL' looks like. Basically you're saying: if people don't behave badly, there's no problem. Well, people behave badly, so there is a problem.
  • Does anyone know how this relates to the long-in-the-works Uniform Resource Name standard? I thought URNs were supposed to replace URLs years ago.
  • Not really. CNRP based names are non-hierarchical.
    Contexts are parameters that are sent along with
    the Common-name (read as "unstructured string").
    One such context is locale (us/ga/atlanta).
  • I'm one of the co-authors. Yes I work for NSI and yes the other co-author works for RealNames. But the idea of human friendly names has been kicked around since '92. If you can find them, read some of the old URI discussion lists where we started talking about the addons that were needed for URLs to be really useful.
  • Nope and Nope. ;-)

    1) CNRP is not based on LDAP. The fact that the DIT uses the term CN is really just an acronym
    conflict. CNRP isn't based on LDAP or X.500 at all. One particular reason is that CNRP deals only in flat spaces. I.e. common-names are meant to be unstructured flat strings.

    2) it is aimed at the Internet in general but, like DNS, it can and should be used at each organizational level to allow for the existence of local names. We fully expect global CNRP services to be offered by the likes of RealNames, AOL, etc.
  • Correct!

    There is also one feature of this that differentiates it from general search engines: intent. I.e. if I search for "1996 Budget" in Altavista or any search engine I'm going to get not only the document I wanted but also any document that merely mentions it. CNRP would only return the document that was specifically meant to be bound with that common-name.
  • [sarcasm ON] Where's the RFC? It cannot become an internet standard without being an RFC.
  • Yep. i18n is a basic requirement for CNRP. There are some issues of encoding and matching that need
    to be ironed out but CNRP itself won't really care since those are server side issues.
  • Unless XWZ allows employees from ABC to enter common-names into their database it isn't a problem. CNRP servers are expected to be organized somewhat like DNS servers in that the local intranet has one that serves names just for that intranet.
  • The main difference is that if I'm looking for "1996 Budget Report" and I ask Altavista I'm going to get everything that even mentions it. CNRP databases _should_ only list those names that are actually bound to a resource, not those that simply mention the resource.
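    The distinction between full-text matching and deliberately bound names can be sketched like this (a toy model, not the actual protocol; all URLs and text are made up):

```python
# Hypothetical contrast: a full-text search returns every document
# that mentions a phrase, while a CNRP-style lookup returns only the
# resources explicitly bound to that common-name. Data is invented.

documents = {
    "http://intranet/budget96.html": "The 1996 Budget Report, final.",
    "http://intranet/memo.html": "Please read the 1996 Budget Report.",
}

# Only one resource is deliberately bound to the common-name.
bindings = {"1996 Budget Report": ["http://intranet/budget96.html"]}

def search(phrase):
    """Full-text style: every document whose text mentions the phrase."""
    return [url for url, text in documents.items() if phrase in text]

def cnrp_lookup(common_name):
    """Binding style: only resources explicitly bound to the name."""
    return bindings.get(common_name, [])

print(len(search("1996 Budget Report")))   # both documents mention it
print(cnrp_lookup("1996 Budget Report"))   # only the bound resource
```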
  • URNs don't replace URLs and were never really meant to. A Uniform Resource Name (RFC2141) is meant as a way to give a resource a persistent identifier that can be used for things like looking up the resource at a later date, after your current location identifier has gone out of date.

    URNs, URLs, and common-names all occupy different niches within an over all structure of Internet naming and addressing.
  • It's not even a working group yet! Geez, give us some time to actually get the work done! ;-)
  • Several misconceptions here:

    1) This isn't LDAP and isn't related to X.500 (regardless of the two using the term 'common-name').

    2) CNRP based names are unstructured, flat (probably Unicode) strings. I.e. no hierarchy. You can put slashes in the names but they won't mean anything to the protocol.

    3) A given name can be accompanied by one or more 'contexts'. A context qualifies the name by giving the query some kind of scope. Two examples of
    common contexts are locale (give me names that are valid only for this geographic region) and topic (give me names that are related to computing as opposed to agriculture).

    4) CNRP names are not unique. This means that two entities can both use the same name (assuming you aren't using trademarks, and then that's just an issue for the courts to decide). I.e. remember the 'pokey.org' thing? Well in that case both the kid and the cartoon character can have the same common-name of 'pokey'. Both would appear as a result if the provided contexts also matched.

    5) You can get involved! The intent is that this should be an IETF working group (that's still pending). If you want to get involved then join the mailing list and do so. You can find the list archive and subscription details at:
    http://lists.internic.net/archives/cnrp-ietf.html
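    Points 2 through 4 above can be illustrated with a toy resolver (not the real protocol, just a sketch of the model; the bindings are invented, reusing the 'pokey' example):

```python
# Hypothetical sketch of CNRP-style resolution: common-names are flat,
# non-unique strings, and a query may carry "contexts" (e.g. topic or
# locale) that narrow which bindings match. All data is invented.

bindings = [
    {"name": "pokey", "contexts": {"topic": "cartoon"},
     "url": "http://cartoon.example.com/pokey"},
    {"name": "pokey", "contexts": {"topic": "personal"},
     "url": "http://kid.example.org/"},
]

def resolve(name, **contexts):
    """Return every binding whose name and contexts match the query."""
    hits = []
    for b in bindings:
        if b["name"] != name:
            continue
        if all(b["contexts"].get(k) == v for k, v in contexts.items()):
            hits.append(b["url"])
    return hits

print(len(resolve("pokey")))              # both entities share the name
print(resolve("pokey", topic="cartoon"))  # context narrows the result
```

    With no contexts supplied, both 'pokey' bindings come back; adding a context scopes the query, without any hierarchy in the name itself.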
  • If you _like_ entering URLs and your brain is geared for remembering things, then you're more
    than likely not the intended user base. In much the same way that backbone router people tend to think about and use IP addresses more than they do domain-names, URLs will probably be used by us geeks more often than not.

    When you think of who might get the most use out of CNRP think about your grandmother, your parents or your boss.
  • Mmm, my words taste delicious! And guess what's for dessert? My foot! ;)

    Thanks for the correction ... Common Name ... it all sounded too much like leveraging LDAP.

    1. CNRP isn't based on LDAP or X.500 at all.
    2. We fully expect global CNRP services to be offered by the likes of RealNames, AOL, etc.
    1. Well, a detailed reading of the proposal makes it clear that this is a general-purpose naming system, but the affinity with directory naming is high. I have no doubt that corporate LDAP will be the backend database for a large number of CNRP systems, if in fact they get deployed. The claim that CNRP is a flat space is in conflict with the claim that it will be "context sensitive" (context == scope == hierarchy).

    2. If CNRP services are offered for the current web, they'll be in lieu of existing search engines. I would have to agree with the other posters in this case that the net effect (no pun intended) will be window dressing (again, no pun intended). Not until XML and/or RDF provide valuable metadata on a majority of web content will we be in a position to coherently catalog the data with "Common Names."
  • I agree. How come every time a new technology comes out, it has to be "dumbed down" to the lowest common denominator? I feel like this industry bends over backwards to make sure everyone understands, and there are still people who won't learn how to use a computer because "It's too hard..."

    It's just like the online columnists that say "Linux is not ready for the desktop, because it's not idiot-proof enough for the end-users." If Linux has to be dumbed down that much to become 'standard,' then it's probably best that it never does. I kinda like it as it is right now.

    -NG


    +--
    stack. the off .sig this pop I as Watch
  • > The main difference is that if I'm looking for
    > "1996 Budget Report" and I ask Altavista I'm
    > going to get everything that even mentions it.

    And how is the CNRP better? Someone pays to get a CNRP name attached to their `1996 budget report' URL, what is the likelihood that the returned report is the one that you wanted to retrieve? Damn poor, IMHO. Even when restricted to a company/intranet, which department's 1996 report should be returned? I'd rather not have a CNRP mechanism silently winnowing down my choices, thank you.

    As far as the Intranet angle goes, the Intranet administrator has the option of setting up a private search engine database, utilizing any of the search engine database building software that is available, some of it GPLed. This would enable you, the user, to search for all 1996 reports restricted to just your Intranet, thereby automatically ignoring those 1996 reports out on the WWW. It's just too easy.

    Clearly, the proposers of CNRP are either clueless about what can be done today, or they are pushing something slimy.

    And... Internet standards have historically been a community-driven, peer-reviewed process, done via RFCs. The RFC process is the precursor to today's OSS methodology. Who gave the CNRP people the authority to usurp this?
  • dammit...I forgot to login. That last AC
    post was from me....bad formatting and all...
  • One fundamental point of CNRP is that names are inherently non-unique. I.e. where currently only one entity can have foo.com, CNRP allows any number of entities to all have the name "foo".

    One of the main points of the goals draft is that there is no such thing as a "private namespace". I.e. RealNames may provide a CNRP service that contains tradenames but that doesn't mean they end up 'owning' those tradenames. They own their database but that's it. Anyone else can come along and setup a similar database if they have the data to put in it.
  • People are dumb. You and I can remember IP numbers, but people can't even remember URL names. We have already made it simple enough. Everyone that NEEDS this should be shot (ok, maybe that is my opinion, but it should be a fact). Now we are going to get a "better" system that will return more porno pages when I look up "C programming", or even more porn pages when those people who can't remember URLs try to look up how to make cereal. I have no problem admitting that the search engines now are not the best, but when that is the de facto standard, it will suck. I think that ultimately this will cause more harm than good.

    Computers are a useful tool, but they can't (and shouldn't) think for people. Cashiers can't even make change correctly without the cash register (... the cashier owed me 4 cents. She only had 2 pennies, but plenty of nickels. I gave her a penny and she looked at me like "What is this for?" For the next 5 minutes I explained that 4 + 1 = 5. The manager finally brought another roll of pennies over. She gave me 4 pennies, I turned around and gave her 5 pennies, and she gave me a nickel! This happens to me all the time, so I assume that it also happens to you).

    If this system sounded reasonable (yeah, it sounds good on paper, but so does communism) I would fully accept the idea, but after extended use of the computer, I have found that common sense/knowledge can not be replaced by a machine thinking for someone (yes I did use spell check, call me a hypocrite if you will, but if I had never used it, I would be a better speller). Computers can make things easier, but it seems as though this idea is just going to be another hassle for everyone involved.
  • Did anyone else catch the last line of the article?

    "The Common Name standard could eventually be integrated with e-mail standards to allow end users to send messages without knowing the recipient's e-mail address."

    Are you kidding? So, if I write an email message for my friend, then just type "Jason Smith" in the Common Name address field, it'll automatically figure out which Jason Smith will receive it? This sounds like just one more namespace that will quickly fill up and become even more confusing than http's for the regular public.

    Why don't they fast-track IPv6 so I can get my own friggin IP addresses??

    --Mid
