Category Archives: URN

The Uniqueness of Things

Found the below in my Drafts folder, unearthed after I imported my old blog to the WordPress instance on my own server. While it was written six years ago, I thought it was still worth publishing after I read it. I hope you think so too.

Two years after writing this (and having long since forgotten that I did), I presented the concepts behind URNs and the need for uniqueness in document management at XML Finland. The system was finished, and I was proud of it. It wasn’t perfect, but it was battle-tested and we knew about its weaknesses. I really wanted to talk about it with other markup people, colleagues who knew about angle brackets, and I was sure they’d understand. In fact, I feared some might say they had implemented it all years ago, only better. Yet what is described here also happened at XML Finland; the importance of uniqueness and the advantages of semantic naming using URNs went right past them, judging by the Q&A afterwards.

Or maybe it’s just that I’m wrong.

Anyway, here goes…

===

I’ve been busy finalising an authoring system that is supposed to identify every resource ever stored in it with URNs. What follows is just a rant, but I do think about it and would like to know the whys and the hows. I would like to know why the concept of uniqueness is so difficult to understand.

A URN, of course, is the unique name of a document, as opposed to its location, the URL. Compare with a book in a library. Sometimes books get reorganised in a library, meaning that they will be put on another shelf (another address), but the name will remain the same. The name is unique while the address is not. When identifying content to be reused, this is the principle you need to honour.

Anyway…

It’s been my primary concern all along to ensure that everything is identified with a URN. Everything. If you create a document and link to another, meaning to insert that other document in the one you’re editing, the link should take the form URN#id when checked into the database, where the hash separates the name of the document from a node pointed out within it. When checked out, in the XML editor, however, the link should take the form URL#id, since URLs are what most authoring systems can handle; we need the URL to style the document in the editor, to publish it, and to process it in various ways.

A URN is possible, of course, but it needs to be replaced with a URL when processing, one way or another, so the decision was to use a URL when a resource has been checked out and replace it with a URN when checked in.
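The rewriting at the check-in/check-out boundary can be sketched as follows. This is a minimal illustration, not the actual system: the URN, the URL, and the in-memory mapping are all hypothetical, and a real implementation would resolve names against the database.

```python
# Hypothetical name-to-address mapping; the real system would query the
# repository to resolve a URN to the checked-out file's current location.
URN_TO_URL = {
    "urn:x-demo:doc:42": "http://cms.example.com/checkout/doc42.xml",
}
URL_TO_URN = {v: k for k, v in URN_TO_URL.items()}

def check_out_link(link: str) -> str:
    """On check-out, replace the URN part of a URN#id link with a URL."""
    name, _, fragment = link.partition("#")
    address = URN_TO_URL.get(name, name)
    return f"{address}#{fragment}" if fragment else address

def check_in_link(link: str) -> str:
    """On check-in, replace the URL part of a URL#id link with the URN."""
    address, _, fragment = link.partition("#")
    name = URL_TO_URN.get(address, address)
    return f"{name}#{fragment}" if fragment else name
```

The point of the sketch is the round trip: checking a document out and back in must leave every link naming the same resource it named before.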

Early on, we did make a demo application that opened a document containing URNs pointing to other documents, replaced them with the corresponding URLs, normalised the resulting document, and published it using XSL and FOP. It worked like a charm.

Today, I found that the check-in does not replace the URLs with URNs. The file name is a pseudo-URN (with colons replaced by underscores) so I know my URN scheme is being used, but that’s as far as it goes. The URN-like file names remain.

Talking to a developer, I realised that he hadn’t even thought about it. He was using URNs to identify the resources in the database (the URN being an attribute on the object) but in spite of all our planning, all of our tests, the URLs were left in the links when the document containing them had been checked in. The object IDs in the database are unique, he said, but yes (he admitted), the file names are being used in the database so we can’t store two identically named files in the same folder in the database.

This is not a major problem, since we already have the code to do all the work, but what surprises me is that nobody made the connection. Me, I assumed everyone had understood but did not check. I simply assumed that following the tests, following the discussions, following the months of development, no one could fail to understand what the URNs were for.

Wrong.

What is it that makes the concept of URNs so difficult?

Semantic Profiles

Following my earlier post on semantic documents, I’ve given the subject some thought. In fact, I wrote a paper on a related subject and submitted it to XML Prague for next year’s conference. The paper wasn’t accepted (in all fairness, the paper was off-topic for the themes for the event), but I think the concept is both important and useful.

Briefly, the paper is about profiling XML content. The basics are well known and very frequently used: you profile a node by placing a condition on it. That condition, expressed using an attribute, is then compared to a publishing context defined using a similar condition on the root. If met, the node is included; if not, the node is discarded.

The matching is done with a simple string comparison, but the mechanism can be made a lot more advanced by, say, imposing Boolean logic on the conditions: a node must match something like A AND B AND NOT(C), or it is discarded. Etc.
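A Boolean condition of that shape can be evaluated with a few lines. This is a sketch under assumed conventions: the condition is split into names that must all be present and names that must all be absent, and the publishing context is a set of condition strings taken from the root.

```python
def node_included(condition: dict, context: set) -> bool:
    """Evaluate a profiling condition against a publishing context.
    condition: {"all": [...], "none": [...]} — every name in "all" must be
    in the context, and no name in "none" may be. Plain string comparison,
    as in the simple case; the Boolean structure is the only addition."""
    return (all(name in context for name in condition.get("all", []))
            and not any(name in context for name in condition.get("none", [])))

# A AND B AND NOT(C):
cond = {"all": ["A", "B"], "none": ["C"]}
```

With that condition, a context of {"A", "B"} includes the node, while {"A", "B", "C"} or a bare {"A"} discards it.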

The problem is that in the real world, the conditions, the string values, usually represent actual product names or variants, or perhaps an intended reader category. They can be used not only for string matching but for including content inline by using the condition attribute contents as variable text: a product variant, expressed as a string in an attribute in an EMPTY element, can easily be expanded in the resulting publication to provide specific content to personalise the document.

Which is fine and well, until the product variant label or the product itself is changed and the documents need to be updated to reflect this. All kinds of annoyances result, from having to convert values in legacy documents to not being able to do so (because the change is not compatible with the existing documents). Think about it:

If you have a condition “A” and a number of legacy documents using that condition, and need to update the name of the product variant to “B”, you need to update those existing documents accordingly, changing “A” to “B” everywhere. Problem is, someone owning the old product variant “A” now needs to accept documentation for a renamed product “B”. It’s done all the time but still causes confusion.

Or worse, if the change to “B” affects functionality and not just the name itself, you’ll have to add “B” to the list of conditions instead of renaming “A”, which in turn means that even if most of the existing documentation could be reused for both “A” and “B”, it can’t because there is no way to know. You’ll have to add “B” whenever you need to include a node, old or new.

This, in my considered opinion, happens because of the following:

  • The name, the condition, is used directly, both as a condition and as a value.
  • Conditions are not version handled. If “B” is a new version of “A”, then say so.

My solution? Use an abstraction layer. Define a semantic profile, a basic meaning for the condition, and version handle that profile, updating it when there is a change to the condition. The change could be a simple name change for the corresponding product, but it could just as well be a change to the product’s functionality. Doesn’t really matter. A significant change will always require a new version. Then, represent that semantic profile with a value used when publishing.

Since I like URNs, I think URNs are a terrific way to go. It’s easy to define a suitable URN scheme that includes versioning and use the URN string as the condition when filtering, but the URN’s corresponding value as expanded content. In the paper, I suggest some simple ways to do this, including an out-of-line profiling mechanism that is pretty much what the XLink spec included years ago.
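The separation can be sketched as a registry. The URNs and product names below are invented for illustration; the point is that filtering compares the stable, versioned URN strings, while publishing expands the (renameable) value.

```python
# Hypothetical profile registry: the versioned URN is the stable condition;
# the value is what gets expanded inline in the publication. Renaming the
# product only changes the value; the legacy documents keep their URNs.
PROFILES = {
    "urn:x-demo:profile:variant-a:1": "Product A",
    "urn:x-demo:profile:variant-a:2": "Product A Mk II",  # new version, same lineage
}

def matches(node_urn: str, context_urns: set) -> bool:
    """Filter by comparing URN strings — never the display names."""
    return node_urn in context_urns

def expand(urn: str) -> str:
    """Look up the current human-readable value for inline expansion."""
    return PROFILES[urn]
```

Because the version is part of the URN, a document profiled for version 1 stays valid for version 1 even after version 2 is introduced; nothing needs to be rewritten retroactively.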

Using abstraction layers in profiling is hardly a new approach, then, but it’s not being used, not to my knowledge, and I think it should. I fully intend to.

Semantic Documents

I’m back from XML Finland, where I held a presentation on how to use the concept of semantic documents in content management systems. Not everyone was convinced, but I wasn’t thrown out, either.

A semantic document is the core information carrier, before a language, or any other means of presentation to an audience, is added. It’s an abstraction; obviously, there can be no such thing in the real world, but as a concept, the semantic document is useful.

For example, using the concept, a translation of a document can be defined as a rendition of the original, just as a JPG image can be rendered as, say, PNG without the contents of the image changing. It is strictly a matter of definition: the rendition is not necessarily identical in every detail of content to the original; it is simply defined to be a matching rendition for a target audience.

Of course, for a semantic document and its rendition in a given language to be meaningful in a CMS, none of those varying details can be significant to the semantics of the basic information carrier; they can only clarify the core information for the target audience. In other words, a translation may differ from the original for, say, cultural reasons (if the details in question are bound to the original language and readership), but the basic meaning cannot be allowed to change.

To the concept I also added version handling, that is, a formal description of the evolution of the basic information over time. When a new version is required is, of course, also a matter of definition; I’d go with “a significant and (in some way) completed change”. What’s important is that two matching or equivalent renditions of the semantic document must always use matching versions.

Expressed using a pseudo-URN scheme, if the core semantic document in some well-defined version (say “1”) is defined as URN:1, the Swedish and Finnish versions would be defined as URN:1:sv and URN:1:fi, respectively. They would be defined to be different renditions of each other but identical in basic information. It follows that if a URN:2:sv were made, a new Finnish translation would have to be created, because the old translation would differ in some way, according to the definition.
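The version-matching rule can be made concrete with a small sketch of the pseudo-URN scheme above (the parsing convention is mine, for illustration):

```python
def parse(urn: str):
    """Split a pseudo-URN like 'URN:1:sv' into (version, language).
    The bare core document, 'URN:1', has no language part."""
    parts = urn.split(":")
    version = parts[1]
    language = parts[2] if len(parts) > 2 else None
    return version, language

def equivalent_renditions(a: str, b: str) -> bool:
    """Two renditions carry the same basic information iff their
    semantic-document versions match."""
    return parse(a)[0] == parse(b)[0]
```

So URN:1:sv and URN:1:fi are equivalent renditions, while URN:2:sv and URN:1:fi are not: the Finnish translation is stale and must be redone.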

This, of course, is largely a philosophical question. In practice, all kinds of questions arise. I had several objections from the floor, of which most seemed to have to do with the evolution of the translation independently from the original. In my basic definition, of course, this is not a problem since the whole schema is a matter of definition, but in the real world, an independent evolution of a translation is often a very real problem.

It could well be that a translation is worked on rather than the original, for example, in a multi-national environment where different teams manage different parts of the content. While theoretically perfectly manageable simply by bumping the versions of that particular translation, a system keeping track of, say, 40+ active target languages becomes a practical problem.

I don’t think the problem is unsolvable if there is a system in place to keep track of all those different URNs, but only if the basic principles are strictly adhered to. For example, you can never be allowed to develop the content in different languages independently of each other at the same time, because you would then have to deal with what the software development world knows as “forking”, that is, developing differing content from the same basic version. While that, too, is solvable, the benefits of such an approach in documentation are doubtful.

Far easier and probably better is to define a “master language” as the only language allowed to drive content change. In the above pseudo-URNs, Swedish could be defined as a master language, meaning that any new content would have to be added to it first and then translated to the other languages.
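The master-language rule amounts to a simple invariant, sketched here on the same pseudo-URNs (the check and its names are my illustration, not the Cassis implementation):

```python
MASTER = "sv"  # assumed master language for this sketch

def may_create(new_urn: str, existing: set) -> bool:
    """A new version may only be introduced in the master language;
    any other language must be translating an already-existing
    master rendition of that same version."""
    version, language = new_urn.split(":")[1:3]
    if language == MASTER:
        return True  # master drives content change
    return f"URN:{version}:{MASTER}" in existing
```

Under this rule, URN:2:fi can only be created once URN:2:sv exists, which is exactly what prevents the forking situation described above.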

This is the basic principle behind the CMS, Cassis, that we develop at Condesign. It works, in that the information remains consistent and traceable, regardless of language, and allows for freely modularising documents for maximum reuse.

I would be interested in hearing opposing views. Some I addressed during my talk in Finland, but I’m sure there is more. Is there a reason you can think of that would break the principle of the semantic document?

Permanent URLs, Addresses and Names

I found a link to an article by Taylor Cowan about persistent URLs on the web. It was mostly about what happens to metadata assertions (such as RDF statements) when links break, but there was a little something on persistent links and URNs, too. A comparison was made with Amazon.com and how books are referenced these days. A way to map an ISBN to a URN was described (URN:ISBN:0-395-36341-1 was mapped to a location by the PURL service, in this case at http://purl.org/urn/isbn/0-395-36341-1), which is quite cool and, in my opinion, both manageable and practical.
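The mapping itself is trivially mechanical, which is part of its appeal. A sketch, using the exact URN and PURL from the article (the function is mine, purely illustrative string handling):

```python
def isbn_urn_to_purl(urn: str) -> str:
    """Map a URN of the form 'URN:ISBN:<isbn>' to its PURL-style
    location, as described in the article."""
    scheme, namespace, isbn = urn.split(":", 2)
    if scheme.upper() != "URN" or namespace.upper() != "ISBN":
        raise ValueError(f"not an ISBN URN: {urn}")
    return f"http://purl.org/urn/isbn/{isbn}"
```

The name never changes; only the resolution from name to location does, which is the whole point of indirection through a resolver like PURL.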

The author thought otherwise, however: “But on the practical web, we don’t use PURLs or URNs for books, we use the Amazon.com url. I think in practical terms things are going to be represented on the web by the domain that has the best collection with the best open content.”

Now, what’s wrong about this? At first, it may seem reasonable that Amazon.com, indeed the domain with the (probably) largest collection of book titles, authors, and so on, should be used. Books are their business and they depend on offering as many titles as possible. In the everyday world, if you want to find a book, you look it up at Amazon.com. I do it and you do it, and the author does it. So what’s wrong about it?

Well, Amazon.com does not provide persistent content per se, they provide a commercial service funded by whatever books they sell. At any time, they may decide to change the availability of a title, relocate its page, offer a later version of the same title, or even some other title altogether. The latter is unlikely, of course, but since we are talking about URLs, addresses, rather than URNs, names, talking about the URL when discussing what essentially is a name is about as relevant as talking about the worn bookshelf in my study when discussing the Chicago Manual of Style.

Yes, I realise that my example is a bit extreme, and I realise that it’s easy enough to make the necessary assertions in RDF to properly reference something described by the address rather than the address itself, but to me, this highlights several key issues:

  • An address, by its very nature, is not persistent. Therefore, a “permanent URL” is to me a bit of an oxymoron.
  • Even if we accept a “permanent URL approach”, should we accept that the addresses are provided and controlled by a commercial entity? One of the reasons why some of us advocate XML so vigorously is that it is open and owned by no-one. Yes, I know perfectly well that we always rely on commercial vendors for everything from editors to databases, but my point here is that we still own our data; the commercial vendors don’t. I can take my data elsewhere.
  • Now, of course, in the world of metadata it’s sensible to give a “see-also” link (indeed that is what Mr Cowan suggests), but the problem is that the “see-also” link is another URL with the same implicit problems as the primary URL.
  • URLs have a hard time addressing (yes, the pun is mostly intentional) the problem with versioning a document. How many times have you looked up a book at Amazon.com and found either the wrong version or a list of several versions, some of which even list the wrong book?

Of course, I’m as guilty as anyone because I do that, too. I point to exciting new books using a link to Amazon.com (actually I order my books from The Book Depository, mostly) because it’s convenient. But if we discuss the principle rather than what we all do, it’s (in my opinion) wrong to suggest that the practice is the best way to solve a problem that stems from addressing rather than naming. It’s not a solution, it merely highlights the problem.