{{{
#!comment
NB! Make sure that you follow the guidelines: http://trac.posccaesar.org/wiki/IdsAdiEditing
}}}

[[TracNav(TracNav/RdsWip)]]

[[Image(wiki:IdsAdiBranding:Logo-128x128.gif)]]

= POSC-Caesar FIATECH IDS-ADI Projects =
== Intelligent Data Sets Accelerating Deployment of ISO15926 ==
== ''Realizing Open Information Interoperability'' ==

----

= Template Methodologies =

N-ary relations, that is "templates" in ISO 15926 parlance, can be defined using a wide variety of approaches. When such an approach takes the form of a systematic process, we call it a "methodology".

== The Purpose of a Methodology ==

The likelihood of two different templates working well together in any given scenario is much higher for templates that have been generated using the same methodology. In general, this is the very purpose of a methodology: to create a set of templates that work well together to solve some interoperability problem.

== Retain the Record of Methodology ==

Because multiple methodologies can be used to create definitions in the same modeling system, it is crucial to record the methodology that created a definition, along with any other methodologies it has been proven for. This allows the selection of a cogent, orthogonal set of definitions to solve a specific interoperability problem using a given modeling system. That is to say, the intersection of '''modeling system''' ''and'' '''methodology''' selects the working set of definitions.

== Coarse-to-Fine Approach ==

A coarse-to-fine approach takes relationships that already exist in highly agglutinated or generalized forms and then ''may'' break them down into finer relationships in order to explore the structure of the data. It is important to recognize that they often need not be broken down much, if at all: models built with this approach are frequently used purely for recording the data, with little emphasis placed on analysis of its structure beyond the level necessary to solve specific problems. Constraints on the structure of the data are likewise introduced only to the degree needed for those problems.

Coarse-to-fine approaches are typically empirical, or based on actual data: they model information as it is recorded by humans or other systems. This gives such approaches two very important features:
 * they are often purpose-driven - the model fits the data because it has been developed as an abstraction of the patterns already in the data.
 * they tend to reflect the way that humans think about the data in the disciplines that work with the problem set.

Most data model design processes follow this approach - not because it necessarily results in the best possible model, but because it quickly results in a model that fits the problem and the data. Perhaps more importantly, it is popular because it is successful, and it is successful because it does not challenge the participants to alter the way they think about the structure of the data.

Note: this is not intended as a criticism. Humans are not logical; humans think by making generalizations about observable patterns, and such generalizations need not be comprehensive - that is to say, it is implicit that the generalizations are not necessarily intended to cover all cases. Human language is built to communicate these kinds of observable patterns, or the cases which fit them. Frequently, this means that humans hold to lore that, while useful, might not be correct.
But its usefulness extends beyond its value as an approximation or rule of thumb: it is useful because human language makes it concise to communicate to other humans (human language being built on the same principle of generalizations that are not intended to be comprehensive in scope). As a result, these kinds of approaches can only be used for interoperability where the problem set is roughly shared across the communicating parties - that is to say, if one party wants to use this data to solve a different problem, there is a good chance it is actually useless to them.

== Coarse-to-Fine Approach ==

The [wiki:RdsWipWorldView/CoarseToFine Coarse-to-Fine] approach takes information and models it in its extant form (as it is exposed from language, data or usage). It then breaks down those "coarse" relations into finer relations to the depth required to address a specific problem set.

== Fine-to-Coarse Approach ==

The [wiki:RdsWipWorldView/FineToCoarse Fine-To-Coarse] approach seeks to model information from a set of founding principles. These principles determine a starting set of fine-grained relations from which successively coarser and coarser sets of relations are built, until (perhaps) relations that are generally useful for solving specific problem sets can be reached. A small illustrative sketch of both directions appears at the bottom of this page.

== The Reach of a Methodology ==

Coarse-to-fine approaches tend to be more neutral with respect to a modeling system - the shallower the coarse-to-fine approach (that is to say, the coarser the concepts), the more neutral it tends to be. This allows very shallow coarse-to-fine methodologies to be applied across different systems. Fine-to-coarse approaches are (by definition) founded on a specific modeling system, and so these methodologies are generally confined to a single modeling system.

----

@todo notes to incorporate/explore included as comments

{{{
#!comment
The distinction I wanted to make is that, from my point of view, templates as a term in 15926 need to include at least two extremes, and everything else in between them. I see these two extremes as being templates produced with a top-down process, where you start essentially with the template and then figure out what it really means by breaking it down into smaller and smaller relationships; and templates produced with a bottom-up process, where you start with binary relationships and work your way up, generating coarser and coarser templates.

Very simplistically, when you take a linguistic approach to defining templates, you are using a top-down approach; when you take a usage approach to defining templates (e.g. from existing spreadsheets and datasheets), you are also using a top-down approach; but when you use a formal logical approach to defining templates, you are using a bottom-up approach.

It is my *belief* that linguistic templates and usage templates are the fastest to develop, but with the caveat that, when working in a collaborative, distributed effort to produce them, they are unlikely to "fit" well with each other, instead resulting in little "cultures" of related, relatively well-fitting templates that do not address the needs of disciplines or environments they were not generated from - i.e. poor fit to other "cultures".
It is also my *belief* that bottom-up approaches to templates are slower to develop and more costly to map to, but they result in models that retain a higher level of precision across disciplines, and they can be produced as a cohesive set in a distributed, collaborative effort (because you have to build from smaller, existing, published blocks). You can still get cultures forming, but at least they will have a common basis at their root.
}}}

----
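To make the coarse-to-fine and fine-to-coarse directions described above a little more concrete, here is a minimal illustrative sketch. The template name, roles and relation names below are invented for this page (they are not drawn from any ISO 15926 part or reference data library); the sketch simply shows a coarse n-ary relationship being broken down into finer binary relationships - the coarse-to-fine direction - which, read in reverse, is the fine-to-coarse composition of binary relations into an n-ary template.

{{{
#!python
# Hypothetical illustration only: "PumpRatedCapacity", the role names and the
# binary relation names are invented for this sketch.
from typing import Dict, List, Tuple

# A "coarse" template instance: one n-ary relationship recorded as a single
# record, much as it might appear on a datasheet.
coarse_instance: Dict[str, str] = {
    "template": "PumpRatedCapacity",
    "pump": "P-101",
    "capacity_value": "40",
    "capacity_unit": "m3/h",
}

def expand_to_fine(instance: Dict[str, str]) -> List[Tuple[str, str, str]]:
    """Break the coarse n-ary relationship down into finer binary
    relationships, expressed as (subject, relation, object) triples."""
    # The decomposition introduces an intermediate object (the capacity
    # quantity itself) that the coarse form left implicit.
    quantity = instance["pump"] + "_rated_capacity"
    return [
        (instance["pump"], "hasRatedCapacity", quantity),
        (quantity, "hasValue", instance["capacity_value"]),
        (quantity, "hasUnit", instance["capacity_unit"]),
    ]

if __name__ == "__main__":
    # Coarse-to-fine: expose the structure hiding inside the coarse record.
    for triple in expand_to_fine(coarse_instance):
        print(triple)
    # Fine-to-coarse works the other way: the binary relations above would be
    # composed back into the single n-ary template form.
}}}

The sketch only illustrates the direction of travel; which intermediate objects and binary relations are introduced, and how far the decomposition is taken, is exactly what a given methodology and modeling system determine.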