Since the merge of #175 to resolve #135, updating the index with the custom metadata blocks from Dataverse during startup no longer works.
Reopening. This chicken-and-egg problem is not solvable as things stand: we need a reachable Dataverse to build the index config, and Dataverse needs a ready-to-serve Solr index (a circular dependency).
Idea: solve this via scripts and tools until the schema is managed by Dataverse itself...
The image will contain a standard schema, as it does now, but with include guards.
The image is extended with a startup script: if a schema.xml is provided at an opinionated (configurable) location and it differs from the one in place, replace it in our index and in the configset (so a new index will have the correct schema from its first start).
Open question: how do we retrieve an updated schema when schemas are changed during runtime? sidecar? push/pull? restart pod? who and how do we trigger a reload?
Make dvcli parse the TSV files and create the necessary `<field>`s and `<copyField>`s for us. (This needs more than just bash because of the complex structure, with multivalued attributes for compound fields.)
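The core of that transformation can be sketched as follows. Note this is a simplified illustration, not dvcli: real Dataverse metadata-block TSVs have many more columns, and the type mapping and `copyField` target here are assumptions, not the actual Dataverse schema conventions.

```python
import csv
import io

# Illustrative mapping from Dataverse field types to Solr field types;
# the actual mapping is defined by Dataverse's schema and may differ.
SOLR_TYPES = {"text": "text_en", "textbox": "text_en", "int": "plong"}

def tsv_to_schema_lines(tsv: str) -> list[str]:
    """Turn rows of a (simplified) metadata-block TSV into <field> and
    <copyField> declarations. Assumed columns: name, fieldType, allowmultiples."""
    out = []
    for row in csv.DictReader(io.StringIO(tsv), delimiter="\t"):
        multi = row["allowmultiples"].strip().upper() == "TRUE"
        solr_type = SOLR_TYPES.get(row["fieldType"], "text_en")
        out.append(f'<field name="{row["name"]}" type="{solr_type}" '
                   f'stored="true" indexed="true" multiValued="{str(multi).lower()}"/>')
        # assumed catch-all copy target for full-text search
        out.append(f'<copyField source="{row["name"]}" dest="_text_"/>')
    return out
```

Compound fields would additionally require emitting the parent field plus each child field, which is where the "more than just bash" complexity comes in.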
Since Dataverse can start without Solr's schema being updated (it only needs the right schema to index/reindex content), why do you consider this a circular dependency? Why wouldn't the sequence of installing an updated Dataverse and then running an update, using the Dataverse API to get the required fields to add to schema.xml, work?