v0.1.0: Initial Release
What's Changed
- feat: add abstract base class for approaches by @tilman151 in #1
- fix: remove manual feature extractor by @tilman151 in #3
- feat: add feature extractor models by @tilman151 in #2
- feat: add head model by @tilman151 in #4
- feat: add losses by @tilman151 in #5
- feat: switch adaption losses over to torchmetrics by @tilman151 in #7
- fix: put dropout layers in the right positions for RNNs by @tilman151 in #8
- feat: make set_model open for override by @tilman151 in #9
- feat: create domain labels inside DANN loss by @tilman151 in #10
- fix: RUL score backward pass by @tilman151 in #11
- fix: make adaption metrics movable between devices by @tilman151 in #12
- fix: diverging devices in adaption metrics by @tilman151 in #13
- feat: add dropout option to FC head by @tilman151 in #15
- fix: remove dropout from CNN input by @tilman151 in #16
- fix: remove CPU-only PyTorch by @tilman151 in #18
- feat: enable checkpointing by @tilman151 in #19
- feat: add gradient weight to GRL by @tilman151 in #20
- fix: introduce epsilon for pairwise Euclidean by @tilman151 in #22
- feat: add LSTM DANN by @tilman151 in #14
- feat: add consistency DANN by @tilman151 in #17
- feat: add AdaRUL by @tilman151 in #21
- feat: add TBiGRU by @tilman151 in #23
- feat: add latent alignment approach by @tilman151 in #6
- docs: update intro and README by @tilman151 in #24
New Contributors
- @tilman151 made their first contribution in #1
Full Changelog: https://github.com/tilman151/rul-adapt/commits/0.1.0