
Documentation of domain specific features #10

Closed
EnnoMeijers opened this issue Aug 25, 2020 · 4 comments
Labels
documentation Improvements or additions to documentation

Comments

@EnnoMeijers
Contributor

The documentation lacks a description of how to configure domain-specific features such as vocabularies (and other features?).

@wouterbeek wouterbeek added the documentation Improvements or additions to documentation label Aug 25, 2020
@wouterbeek
Collaborator

We currently use an endpoint for the datasets in https://data.netwerkdigitaalerfgoed.nl/ld-wizard. These datasets are currently not named very well: we started out with Schema.org only, then added Dublin Core Terms and CIDOC-CRM later.

We may configure and document this as follows: there is a linked dataset that is specific to the LD Erfgoed Wizard. Configuration can mostly be performed in the linked dataset:

  • Vocabularies can be dynamically added/removed/curated over time.
  • Prefix declarations can be created/changed.
  • This works for all vocabularies that use RDF(S) and OWL appropriately (i.e., owl:Class, owl:DatatypeProperty, owl:ObjectProperty, rdf:Property, rdfs:label, and rdfs:comment).

There is also a code-side aspect to the configuration, but that aspect is kept as small as possible: only the endpoint URL and query request must be set.
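To make the shape of that code-side configuration concrete, here is a minimal sketch of the two settings mentioned above (an endpoint URL and a query). The endpoint URL and the function name are illustrative assumptions, not the actual LD Wizard configuration; the query retrieves exactly the RDF(S)/OWL terms listed above.

```python
# Hypothetical sketch: endpoint URL plus a query builder.
# ENDPOINT_URL is an assumed value, not the real configured endpoint.
ENDPOINT_URL = "https://data.netwerkdigitaalerfgoed.nl/ld-wizard/sparql"  # assumption

def vocabulary_query(language: str = "en") -> str:
    """Build a SPARQL query that retrieves classes and properties
    together with their rdfs:label and (optional) rdfs:comment."""
    return f"""
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?term ?label ?comment WHERE {{
  VALUES ?type {{ owl:Class owl:DatatypeProperty owl:ObjectProperty rdf:Property }}
  ?term a ?type ;
        rdfs:label ?label .
  OPTIONAL {{ ?term rdfs:comment ?comment }}
  FILTER (lang(?label) = "{language}" || lang(?label) = "")
}}
"""
```

Because the vocabulary data itself lives in the linked dataset, swapping vocabularies never requires a code change under this scheme; only the endpoint URL and query string are fixed in code.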

@EnnoMeijers
Contributor Author

Ok, thank you for clearing this up. If I understand you correctly, this means that a core functionality of LDWizard depends on specific API functions of the TriplyDB platform. Would this mean that the portability of the code is limited to environments where TriplyDB instances are available? Or will any implementation with a generic SPARQL or Elasticsearch service work? If so, should we demonstrate this too, e.g. by adding dockerized services and config files for them?

@wouterbeek
Collaborator

That depends a bit on what part of the API is concerned.

Where standards are available, we of course use them:

  • Standards are strong when it comes to class and property definitions (RDF(S) and OWL), so there the situation is ideal.
  • Checking whether a class or property exists within an existing namespace (support for "non-existing" properties #7) could be done with LDF or SPARQL.

Where standards are not (yet) available, this becomes more difficult:

  • There are no standards for matching and ranking classes and properties, so a less standardized solution is used there. Still, the Elasticsearch service that we currently use is populated with JSON documents that are directly based on the linked data descriptions. (These should be JSON-LD documents, BTW. This is something where we can improve and be more standards-compliant.)
  • Another example where standards are currently lacking is the retrieval of prefix declarations (but there are ideas to expose these as part of SPARQL 1.2 service descriptions: Ability to use default PREFIX values w3c/sparql-dev#70).
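The standards-based existence check mentioned in the first bullet list (issue #7) could be sketched as a SPARQL ASK query against the vocabulary endpoint. This is a hedged illustration, not the actual implementation; the helper name is hypothetical.

```python
# Illustrative sketch: does a given IRI exist as a class or
# property in the vocabulary data? ASK returns a single boolean.
def exists_query(iri: str) -> str:
    """Build a SPARQL ASK query testing whether `iri` is declared
    as a class or property using the RDF(S)/OWL terms."""
    return f"""
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
ASK {{
  VALUES ?type {{ owl:Class owl:DatatypeProperty owl:ObjectProperty rdf:Property }}
  <{iri}> a ?type .
}}
"""
```

Because ASK is part of standard SPARQL, such a check would work against any SPARQL endpoint, not only TriplyDB; the matching/ranking step is where the non-standard Elasticsearch service comes in.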

@wouterbeek
Collaborator

I was finally able to pick this up. I have added a new section about this to the LD Wizard main documentation: link
