{{{
#!comment
NB! Make sure that you follow the guidelines: http://trac.posccaesar.org/wiki/IdsAdiEditing
}}}

[[TracNav(TracNav/RdsWip)]]
[[Image(wiki:IdsAdiBranding:Logo-128x128.gif)]]
= POSC-Caesar FIATECH IDS-ADI Projects =
== Intelligent Data Sets Accelerating Deployment of ISO15926 ==
== ''Realizing Open Information Interoperability'' ==

----
= Automated Mapping: Reduction to a Common Base Set =

One path to interoperability worth exploring is a set of
generated mappings for any given point-to-point integration.
== Requirements ==

The requirement is that both reference data sets can be reduced
to a common base set of relations via rule-based means, using
a single rule language.
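As an illustration of this requirement, the sketch below reduces class definitions from two hypothetical reference data sets to a shared base-relation vocabulary and pairs terms whose reductions coincide. All class names, rules, and relation names here are invented for the example; they are not drawn from any actual RDS content.

```python
# Hypothetical sketch: reduce definitions from two reference data sets to a
# common base set of relations, then generate a mapping between terms whose
# reductions agree. All identifiers below are invented for illustration.

# Each rule rewrites a specialized class to a set of (relation, value)
# pairs drawn from the shared base vocabulary.
RULES_A = {
    "CentrifugalPump": {("performs", "Pumping"), ("mechanism", "Centrifugal")},
}
RULES_B = {
    "PUMP_CENTRIF": {("performs", "Pumping"), ("mechanism", "Centrifugal")},
}

def reduce_term(term, rules):
    """Rewrite a term to its base-relation set, or None if no rule applies."""
    return rules.get(term)

def generate_mapping(terms_a, terms_b):
    """Pair terms from the two sets whose base-relation reductions coincide."""
    mapping = []
    for a in terms_a:
        base_a = reduce_term(a, RULES_A)
        for b in terms_b:
            if base_a is not None and base_a == reduce_term(b, RULES_B):
                mapping.append((a, b))
    return mapping

print(generate_mapping(["CentrifugalPump"], ["PUMP_CENTRIF"]))
# [('CentrifugalPump', 'PUMP_CENTRIF')]
```

The point of the sketch is only that, once both sides reduce to the same base vocabulary, the mapping itself falls out by comparing reductions rather than being authored by hand.
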
== Limitations ==

The main limitation is that, for many sets of coarse-to-fine
generated reference data, it is highly unlikely that a rule
language can reduce the entire set of original reference data
to a common base set that is consistent enough to support
analysis and reasoning, which is likely to be a prerequisite
for generating the mapping.
== Accommodation ==

Since this likely limitation can be recognized at the outset,
it would be worthwhile to invest in techniques that ensure,
for example, that a coarse-to-fine generated reference data set
(such as one based on a linguistic approach) is reducible to
a consistent first-order logic model. With this accommodation
in place, the risk of not being able to generate mappings
could be somewhat ameliorated.
== Fidelity ==

Fidelity is another area that could suffer under such an approach,
though no more so than with any other mapping mechanism. Again,
if a methodology can be devised that drives explicitness and
precision in the definition of such a reference data set, then
the fidelity problems can be circumvented to some degree.
== Cost ==

Generating mappings from a rule language can substantially
reduce the '''cost''' of translating in a peer-to-peer or
spoke-to-hub scenario, especially if the binding is late.
== Approach ==

99% of the definitions that will be imported into the WIP from
existing sources will have been arrived at through coarse-to-fine
approaches, and 99% of those again will have no longhands
(i.e. base relation derivations) to start with.

The RDS/WIP then becomes a crucial environment for people to
collaboratively develop these base derivations. As mentioned
above, many will not be logically consistent, at least in the
first-order logic sense, but that knowledge at least allows
the user to scope and cost that aspect of the integration
exercise, knowing that some percentage will require special
attention.
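The scoping exercise described above can be sketched as a simple triage over imported definitions: which already have a longhand that reduced consistently, which have a longhand that did not, and which still await collaborative development. The field names and sample records below are purely hypothetical.

```python
# Hypothetical triage of imported definitions, as a sketch of the scoping
# and costing step described above. Field names and records are invented.

definitions = [
    {"id": "RDS001", "longhand": True,  "consistent": True},   # ready to map
    {"id": "RDS002", "longhand": True,  "consistent": False},  # needs attention
    {"id": "RDS003", "longhand": False, "consistent": None},   # no longhand yet
    {"id": "RDS004", "longhand": False, "consistent": None},   # no longhand yet
]

def triage(defs):
    """Count definitions that are ready for mapping, that have an
    inconsistent longhand, or that still lack a longhand entirely."""
    ready = sum(1 for d in defs if d["longhand"] and d["consistent"])
    needs_attention = sum(1 for d in defs if d["longhand"] and not d["consistent"])
    no_longhand = sum(1 for d in defs if not d["longhand"])
    return {"ready": ready, "needs_attention": needs_attention,
            "no_longhand": no_longhand}

print(triage(definitions))
# {'ready': 1, 'needs_attention': 1, 'no_longhand': 2}
```

Even this crude split gives the user the percentage figures needed to cost the integration exercise before committing to it.
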
== Conclusions ==

Being able to do this at all is predicated on allowing imperfect
(that is to say, unexpanded and/or logically inconsistent)
predicates to be present in the RDS/WIP.

----