warnings.warn(f"log_with={log_with} was passed but no supported trackers are currently installed.")
Before training: Unet First Layer lora up tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
...,
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
Before training: Unet First Layer lora down tensor([[-0.2483, -0.2883, 0.0217, ..., 0.3330, 0.2490, 0.2651],
[ 0.0295, -0.0237, -0.0601, ..., -0.1246, -0.1943, -0.1949],
[ 0.1155, 0.4895, 0.1387, ..., 0.1215, -0.0998, 0.1251],
[-0.1189, 0.0684, -0.0199, ..., 0.5316, -0.1501, -0.2685]])
Before training: text encoder First Layer lora up tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
...,
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
Before training: text encoder First Layer lora down tensor([[ 0.2033, -0.0032, 0.1602, ..., 0.1470, -0.3368, 0.4285],
[ 0.0142, -0.1524, 0.0096, ..., 0.2099, 0.5034, 0.1431],
[-0.3158, -0.1338, 0.0779, ..., -0.1055, 0.4085, 0.1229],
[-0.3604, 0.1553, -0.3339, ..., -0.3587, 0.6878, -0.1244]])
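These dumps match the usual LoRA initialization: the rank-4 `lora_down` weights are randomly initialized while `lora_up` starts at zero, so the adapter contributes exactly nothing before training. A minimal sketch of that scheme (layer sizes are illustrative, not taken from the script):

```python
import torch
import torch.nn as nn

rank, d_in, d_out = 4, 320, 320  # hypothetical dimensions; rank 4 matches the dumps above

# "down" projection: random init; "up" projection: zeros.
lora_down = nn.Linear(d_in, rank, bias=False)
lora_up = nn.Linear(rank, d_out, bias=False)
nn.init.normal_(lora_down.weight, std=1 / rank)
nn.init.zeros_(lora_up.weight)

# Because lora_up is all zeros, the LoRA delta up(down(x)) is identically
# zero, leaving the frozen base model's output untouched at step 0.
x = torch.randn(1, d_in)
assert torch.equal(lora_up(lora_down(x)), torch.zeros(1, d_out))
```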
/home/masaisai/anaconda3/envs/masaisiaxuexilora/lib/python3.10/site-packages/diffusers/configuration_utils.py:244: FutureWarning: It is deprecated to pass a pretrained model name or path to `from_config`. If you were trying to load a scheduler, please use <class 'diffusers.schedulers.scheduling_ddpm.DDPMScheduler'>.from_pretrained(...) instead. Otherwise, please make sure to pass a configuration dictionary instead. This functionality will be removed in v1.0.0.
deprecate("config-passed-as-path", "1.0.0", deprecation_message, standard_warn=False)
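As the FutureWarning says, loading the scheduler by passing a model path to `from_config` is deprecated in favor of `from_pretrained`. A minimal sketch of the suggested replacement, assuming the script keeps the base model path in an argument like `args.pretrained_model_name_or_path` (hypothetical name) and the scheduler config lives in the usual `scheduler` subfolder:

```python
from diffusers import DDPMScheduler

# Deprecated pattern that triggers the warning above:
#   noise_scheduler = DDPMScheduler.from_config(model_path, subfolder="scheduler")
# Replacement suggested by the warning itself:
noise_scheduler = DDPMScheduler.from_pretrained(
    args.pretrained_model_name_or_path,  # hypothetical arg name; use the script's model path
    subfolder="scheduler",
)
```

This only silences the deprecation notice; it is unrelated to the crash below.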
Traceback (most recent call last):
  File "/home/masaisai/lora源码/lora/train_lora_w_ti.py", line 1209, in <module>
    main(args)
  File "/home/masaisai/lora源码/lora/train_lora_w_ti.py", line 918, in main
    lr_scheduler = get_scheduler(
  File "/home/masaisai/anaconda3/envs/masaisiaxuexilora/lib/python3.10/site-packages/diffusers/optimization.py", line 325, in get_scheduler
    return schedule_func(optimizer, last_epoch=last_epoch)
  File "/home/masaisai/anaconda3/envs/masaisiaxuexilora/lib/python3.10/site-packages/diffusers/optimization.py", line 53, in get_constant_schedule
    return LambdaLR(optimizer, lambda _: 1, last_epoch=last_epoch)
  File "/home/masaisai/anaconda3/envs/masaisiaxuexilora/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 221, in __init__
    super().__init__(optimizer, last_epoch, verbose)
  File "/home/masaisai/anaconda3/envs/masaisiaxuexilora/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 79, in __init__
    self._initial_step()
  File "/home/masaisai/anaconda3/envs/masaisiaxuexilora/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 85, in _initial_step
    self.step()
  File "/home/masaisai/anaconda3/envs/masaisiaxuexilora/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 150, in step
    values = self.get_lr()
  File "/home/masaisai/anaconda3/envs/masaisiaxuexilora/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 268, in get_lr
    return [base_lr * lmbda(self.last_epoch)
  File "/home/masaisai/anaconda3/envs/masaisiaxuexilora/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 268, in <listcomp>
    return [base_lr * lmbda(self.last_epoch)
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
Process finished with exit code 1
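The crash happens because `get_lr` evaluates `base_lr * lmbda(self.last_epoch)` with `base_lr` equal to `None`: `LambdaLR` copies each optimizer param group's `lr` into `base_lrs` at construction, so some param group in `train_lora_w_ti.py` was built with `lr=None`, most likely because one of the script's learning-rate flags was left unset and defaulted to `None`. A minimal sketch reproducing the same `TypeError` (names and values are illustrative, not taken from the script):

```python
import torch

param = torch.nn.Parameter(torch.zeros(4))

# An explicit lr=None inside a param-group dict overrides the (valid)
# default lr below and survives optimizer construction unchecked.
optimizer = torch.optim.AdamW(
    [{"params": [param], "lr": None}],
    lr=1e-4,
)

# LambdaLR records each group's lr as base_lr; its first step computes
# base_lr * lmbda(last_epoch) -> None * 1, the exact error in the traceback.
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda _: 1)
# TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
```

Making sure every learning rate the script's optimizer param groups rely on is actually set (so no group ends up with `lr=None`) should let `get_scheduler(...)` build the constant schedule without this error.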