I got my latest issue of LR&TS and I actually saw an article that piqued my interest:
Roughly, the authors take a look at the NACO normalization rules, which have a significant impact on how catalogers search for and create authority headings. There is some weirdness in the current algorithm with its subfield retention, and the obvious solution is to get rid of it and simplify the normalization so that it is repeatable (idempotent) when applied to an already normalized string.
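To make the idempotence point concrete, here is a minimal sketch of a simplified normalizer in that spirit. This is not the official NACO algorithm (which has special rules like retaining the first comma); the exact character mappings here are my own assumptions, chosen so that every step is stable under re-application:

```python
import unicodedata

def normalize(heading: str) -> str:
    """Simplified, idempotent normalization sketch (NOT the official
    NACO rules): strip diacritics, map punctuation to spaces,
    collapse runs of spaces, and uppercase."""
    # Decompose accented characters and drop the combining marks
    # (a rough stand-in for the NACO diacritic-mapping tables).
    decomposed = unicodedata.normalize("NFD", heading)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Keep letters and digits; everything else becomes a space.
    mapped = "".join(c if c.isalnum() else " " for c in stripped)
    # Collapse whitespace and uppercase.
    return " ".join(mapped.split()).upper()

# Because no step reintroduces anything a later step removes,
# normalizing an already normalized string changes nothing:
h = "Smith,   Michael G. (Gregory)"
assert normalize(normalize(h)) == normalize(h)
```

The comma-retention rule is exactly what breaks this property in the real algorithm: a retained comma is punctuation that survives the first pass but would be handled differently on a second pass, so the output depends on how many times you run it.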
When I did NACO training, a good hour was spent stepping through the NACO string normalization so that you could manually ensure unique headings, and it struck me then that there had to be a better way to handle this.
At the moment the OCLC cataloging utilities don’t automatically verify the headings in a new authority record at the client. Ideally there would be a verification routine within the client (Connexion) that checks the uniqueness of the headings and references before they are loaded and gives the cataloger a chance to fix the duplicates. That would help maintain the integrity of the authority file regardless of the normalization scheme, and also keep me from having to remember, “Keep the first comma and don’t squeeze out the extra spaces.” So, moral of the story: maybe I should submit some sort of enhancement request.
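A pre-save check like the one I'm wishing for could be sketched roughly as follows. Everything here is hypothetical (Connexion exposes no such hook that I know of, and `naive_normalize` is a placeholder, not the real comparison rules); the point is just that the check reduces to comparing normalized keys against an index of existing headings:

```python
def naive_normalize(s: str) -> str:
    # Placeholder normalizer: punctuation to spaces, collapse
    # spaces, uppercase. Stands in for whatever rules the file uses.
    return " ".join("".join(c if c.isalnum() else " " for c in s).split()).upper()

def find_collisions(new_headings, existing_headings, normalize=naive_normalize):
    """Flag new headings or references whose normalized form collides
    with an existing heading, or with another heading in the same
    record, before the record is loaded."""
    index = {}
    for h in existing_headings:
        index.setdefault(normalize(h), h)
    collisions, seen = [], set()
    for h in new_headings:
        key = normalize(h)
        if key in index:
            collisions.append((h, index[key]))
        elif key in seen:
            collisions.append((h, h))  # duplicate within the new record itself
        seen.add(key)
    return collisions

existing = ["Smith, Michael G."]
new = ["Smith,  Michael G", "Jones, Ann"]
print(find_collisions(new, existing))
# [('Smith,  Michael G', 'Smith, Michael G.')]
```

The cataloger would then be shown each colliding pair and asked to differentiate the heading or confirm the match, which keeps the file clean no matter which normalization scheme is in force.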
One last thing that really surprised me was the integrity of the authority file with respect to duplicate headings; I was expecting a lot more. Then again, there are duplicates that are just plain errors and can’t be caught by any normalization routine, e.g. 100 1# Smith, Michael G. is the same person as 100 1# Smith, Michael G. (Gregory).
Updated: with a link to the preprint (gimme a break, I can’t figure out how to include a COinS on wordpress.com).