Feat/ PreSync mode #270
base: main
Conversation
This is a pretty interesting proposal to me, but I'm not sure it needs to be a "run twice" type of thing - it could easily be something we do for every migration (with pod placement restrictions to ensure that RWX volumes aren't required). If this is something we want to keep as a "run twice" solution, it would absolutely need an expanded unit test, and ideally an integration test covering that workflow.
Force-pushed from 696a8ce to b691ac2
Allows reducing the downtime for the actual migration: we create the destination PVCs and rsync the data ahead of time, without having to scale down the pods/deployments/statefulsets. Once the copy/rsync stage is complete, we proceed by scaling down pods, doing another copy/rsync (to ensure data consistency), and finally swapping the PVCs and scaling up.

fix: proper matching PVCs count
feat: add max-pvs=n flag
feat(pre-sync-mode): run a "prefetch" copy before actually migrating

Signed-off-by: Clément Nussbaumer <[email protected]>
Force-pushed from b691ac2 to 2ced425
Can you take another look? I modified the code: the behaviour no longer depends on RWX vs RWO, it always works the same and does the migration in one go, which is much simpler 🙃 The change makes it so that if the
flag.BoolVar(&options.SkipSourceValidation, "skip-source-validation", false, "migrate from PVCs using a particular StorageClass name, even if that StorageClass does not exist")
flag.IntVar(&options.MaxPVs, "max-pvs", 0, "maximum number of PVs to process. default to 0 (unlimited)")
I would like an explanation here of what happens when the max is exceeded - do you process that number of PVs and then come back for the rest? Do you process that many PVs and then leave the rest? Or do you just exit early?
It looks like it's the second option right now, but that needs to be communicated.
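If the "second option" (process up to the limit and leave the rest for a later run) is the intended behaviour, it could be documented in the flag help and implemented as a simple truncation. A minimal sketch, with a hypothetical helper name and plain strings standing in for the PVC objects:

```go
package main

import "fmt"

// limitPVCs returns at most maxPVs entries from an ordered list of PVC names,
// leaving the remainder untouched for a later run. A maxPVs of 0 (or any
// non-positive value) means unlimited. Hypothetical helper, not from the PR.
func limitPVCs(pvcs []string, maxPVs int) (selected, skipped []string) {
	if maxPVs <= 0 || maxPVs >= len(pvcs) {
		return pvcs, nil
	}
	return pvcs[:maxPVs], pvcs[maxPVs:]
}

func main() {
	selected, skipped := limitPVCs([]string{"pvc-a", "pvc-b", "pvc-c"}, 2)
	fmt.Println(len(selected), len(skipped)) // 2 1
}
```

With this semantics the flag help could read something like "process at most n PVs per run; remaining PVs are left for a subsequent run".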
func scaleDownPods(ctx context.Context, w *log.Logger, clientset k8sclient.Interface, matchingPVCs map[string][]*corev1.PersistentVolumeClaim, checkInterval time.Duration) (map[string][]*corev1.PersistentVolumeClaim, error) {
	// build new map with complete pvcCtx
	updatedPVCs := matchingPVCs
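For context on why this variable is unneeded: assigning a map in Go copies only the map reference, so `updatedPVCs := matchingPVCs` aliases the same underlying map rather than building a new one. A small self-contained illustration (string slices stand in for the PVC values):

```go
package main

import "fmt"

func main() {
	matchingPVCs := map[string][]string{"ns": {"pvc-a"}}
	updatedPVCs := matchingPVCs // alias, not a copy: both names refer to one map

	// Mutating through the "new" variable is visible through the original.
	updatedPVCs["ns"] = append(updatedPVCs["ns"], "pvc-b")
	fmt.Println(len(matchingPVCs["ns"])) // 2
}
```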
well, thank you for removing this unneeded variable!
@@ -2796,7 +2797,6 @@ func Test_scaleDownPods(t *testing.T) {
	req.Equal(tt.wantPods, actualPods)
	req.Equal(tt.wantDeployments, actualDeployments)
	req.Equal(tt.wantSS, actualSS)
-	req.Equal(tt.wantMatchingPVCs, actualMatchingPVCs)
It might be good to remove the test entries for wantMatchingPVCs too, since they're not used elsewhere.
I added an integration test, and that adds a few worries for me (logs here):
The first concern is the logging. This says

The second concern comes from the data within the logging, particularly the lines
Adds a new feature to pvmigrate, which makes it possible to prepare a migration for RWX volumes by copying the data in advance, without actually moving to the new PV.
Then, when the proper migration actually happens, one only needs to run the command again without the -pre-sync-mode flag, which will compare source and dest, update the files that were modified, and finally complete the migration.
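The two-phase workflow described above can be sketched as control flow. This is an illustrative outline only, not the PR's implementation: the function name, the log callback, and the step strings are all hypothetical, and real pvmigrate operates on Kubernetes objects rather than printing steps.

```go
package main

import "fmt"

// migrate sketches the two-phase flow proposed in this PR. In pre-sync mode
// the data is copied while workloads stay up, and the run stops before any
// scale-down; a later run without the flag performs the delta copy, swaps
// the PVCs, and scales the workloads back up.
func migrate(preSyncMode bool, log func(string)) {
	log("create destination PVCs")
	log("copy data while workloads are still running")
	if preSyncMode {
		log("pre-sync complete; exiting without touching workloads")
		return
	}
	log("scale down pods")
	log("final delta copy for consistency")
	log("swap PVCs")
	log("scale up pods")
}

func main() {
	// A pre-sync run: only the prefetch copy happens, nothing is scaled down.
	migrate(true, func(step string) { fmt.Println(step) })
}
```

The point of the split is that the expensive bulk copy happens while the application is still serving traffic, so the downtime window only covers the (much smaller) delta copy and the PVC swap.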