Switch back to returning given DatasetType in registry imports.
When the given dataset type differs from the registered dataset type
in imports, it's not clear what the ideal behavior is, but the right
choice for *this* ticket is clearly to not change that behavior.
TallJimbo committed Oct 15, 2024
1 parent 39ab698 commit 152988e
Showing 3 changed files with 9 additions and 8 deletions.
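
To make concrete the behavior the commit message is preserving, here is a minimal standalone sketch (toy classes, not the real lsst.daf.butler API): when the dataset type passed to an import differs from the one already registered under that name (say, the storage class differs), the refs that come back keep the caller's object rather than the registered record.

from dataclasses import dataclass


@dataclass(frozen=True)
class ToyDatasetType:
    # Hypothetical stand-in for DatasetType: just a name and a storage class.
    name: str
    storage_class: str


def toy_import(given: ToyDatasetType, registered: dict[str, ToyDatasetType]) -> ToyDatasetType:
    # Existence is still checked against the registered definitions by name.
    if registered.get(given.name) is None:
        raise LookupError(f"Dataset type {given.name!r} has not been registered.")
    # The behavior this commit keeps: the *given* type is what ends up on the refs.
    return given


registered = {"flat": ToyDatasetType("flat", "ExposureF")}
ref_type = toy_import(ToyDatasetType("flat", "ImageF"), registered)
assert ref_type.storage_class == "ImageF"  # the caller-supplied type wins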
8 changes: 4 additions & 4 deletions python/lsst/daf/butler/registry/datasets/byDimensions/_manager.py
@@ -696,7 +696,7 @@ def insert(
 
     def import_(
         self,
-        dataset_type_name: str,
+        dataset_type: DatasetType,
         run: RunRecord,
         data_ids: Mapping[DatasetId, DataCoordinate],
     ) -> list[DatasetRef]:
@@ -705,8 +705,8 @@ def import_(
             # Just in case an empty mapping is provided we want to avoid
             # adding dataset type to summary tables.
             return []
-        if (storage := self._find_storage(dataset_type_name)) is None:
-            raise MissingDatasetTypeError(f"Dataset type {dataset_type_name!r} has not been registered.")
+        if (storage := self._find_storage(dataset_type.name)) is None:
+            raise MissingDatasetTypeError(f"Dataset type {dataset_type.name!r} has not been registered.")
         # Current timestamp, type depends on schema version.
         if self._use_astropy_ingest_date:
             # Astropy `now()` precision should be the same as `now()` which
@@ -751,7 +751,7 @@ def import_(
         )
         refs = [
             DatasetRef(
-                datasetType=storage.dataset_type,
+                datasetType=dataset_type,
                 id=dataset_id,
                 dataId=dataId,
                 run=run.name,
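
One detail of the hunks above worth calling out: the existence check still keys on the dataset type's name, binding the storage record with a walrus expression and raising if nothing is registered under that name. A standalone sketch of that lookup-or-raise shape, using a plain dict in place of self._find_storage:

class MissingDatasetTypeError(KeyError):
    # Toy stand-in for the registry exception of the same name used in the diff.
    pass


_storage_records = {"flat": object()}  # toy registry of storage records


def find_storage_or_raise(name: str) -> object:
    # Same shape as the diff above: bind with := and raise when absent.
    if (storage := _storage_records.get(name)) is None:
        raise MissingDatasetTypeError(f"Dataset type {name!r} has not been registered.")
    return storage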
7 changes: 4 additions & 3 deletions python/lsst/daf/butler/registry/interfaces/_datasets.py
@@ -419,16 +419,17 @@ def insert(
     @abstractmethod
     def import_(
         self,
-        dataset_type_name: str,
+        dataset_type: DatasetType,
         run: RunRecord,
         data_ids: Mapping[DatasetId, DataCoordinate],
     ) -> list[DatasetRef]:
         """Insert one or more dataset entries into the database.
 
         Parameters
         ----------
-        dataset_type_name : `str`
-            Name of the dataset type.
+        dataset_type : `DatasetType`
+            Type of dataset to import. Also used as the dataset type for
+            the returned refs.
         run : `RunRecord`
             The record object describing the `~CollectionType.RUN` collection
             these datasets will be associated with.
2 changes: 1 addition & 1 deletion python/lsst/daf/butler/registry/sql_registry.py
@@ -1176,7 +1176,7 @@ def _importDatasets(
         data_ids = {dataset.id: dataset.dataId for dataset in datasets}
 
         try:
-            refs = list(self._managers.datasets.import_(datasetType.name, runRecord, data_ids))
+            refs = list(self._managers.datasets.import_(datasetType, runRecord, data_ids))
             if self._managers.obscore:
                 self._managers.obscore.add_datasets(refs)
         except sqlalchemy.exc.IntegrityError as err:
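
On the caller side, _importDatasets already holds a resolved DatasetType, so the only change is forwarding the object itself rather than its name. A hedged, standalone sketch of the surrounding pattern (toy dataset objects, not the real SqlRegistry), showing the id-to-dataId mapping that is passed alongside it:

import uuid
from dataclasses import dataclass


@dataclass(frozen=True)
class ToyDataset:
    # Hypothetical stand-in for an incoming DatasetRef.
    id: uuid.UUID
    dataId: dict


def build_data_ids(datasets: list[ToyDataset]) -> dict[uuid.UUID, dict]:
    # Same comprehension shape as the context line in the diff above.
    return {dataset.id: dataset.dataId for dataset in datasets}


datasets = [ToyDataset(uuid.uuid4(), {"instrument": "HSC", "detector": 42})]
data_ids = build_data_ids(datasets)
# The full datasetType object (not datasetType.name) is now what gets handed to
# the datasets manager's import_ along with the run record and this mapping.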
