# Update FlowMatchEulerDiscreteScheduler with new design to support SD3 / SD3.5 / Flux moving forward #10511
Hi @ukaprch. Thanks for your interest in Flow Match scheduling support. A scheduling refactor is in progress: #10146. I've written an overview of the preliminary design below; see #10146 (comment) for more code samples and output examples. I'm working on adding more schedulers and refining the design. There is a lot to consider in the design, and we also need to handle deprecation gracefully and minimize downstream effects. We know this is highly anticipated and we appreciate your patience while we work on it 🤗

## Scheduling refactor

### Design

The new design aims to eliminate scheduler variants, improve support coverage, and allow easier customization and experimentation. We accomplish this by separating the schedule configuration (Beta, Flow Match, or a custom schedule) and the sigma schedule (Karras, Exponential, Beta, or a custom transform) from the scheduler itself, as the examples below illustrate.
#### Beta

Creates a beta schedule:

```python
BetaSchedule(
    beta_end=0.012,
    beta_schedule="scaled_linear",
    beta_start=0.00085,
    timestep_spacing="leading",
)
```

#### Flow Match

Creates a flow match schedule:

```python
flow_schedule = FlowMatchSchedule(
    shift=13.0,
    use_dynamic_shifting=False,
    base_schedule=FlowMatchSD3(),
)
```

A custom base schedule can also be supplied:
```python
import numpy as np


class FlowMatchCustom:
    def __call__(self, num_inference_steps: int, **kwargs) -> np.ndarray:
        """
        FlowMatchFlux with shifting on end split
        """
        sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
        half = num_inference_steps // 2
        sigmas[half:] = sigmas[half:] * 1.2
        return sigmas


flow_schedule = FlowMatchSchedule(
    use_dynamic_shifting=True,
    base_schedule=FlowMatchCustom(),
)
```
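Evaluating the custom callable directly shows the effect of the end-split scaling; this snippet is only an illustration, not part of the proposed API:

```python
import numpy as np

sigmas = FlowMatchCustom()(num_inference_steps=8)
print(np.round(sigmas, 3))
# -> 1.0, 0.875, 0.75, 0.625, 0.6, 0.45, 0.3, 0.15
# The first half is the plain linear ramp; the second half is scaled by 1.2.
```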
#### Custom

```python
class CustomSchedule:
    scale_model_input = False

    def __init__(
        self,
        ...,
        **kwargs,
    ):
        ...

    def __call__(
        self,
        num_inference_steps: int = None,
        device: Union[str, torch.device] = None,
        timesteps: Optional[List[int]] = None,
        sigmas: Optional[List[float]] = None,
        sigma_schedule: Optional[Union[KarrasSigmas, ExponentialSigmas, BetaSigmas]] = None,
        ...,
        **kwargs,
    ):
        ...
        return sigmas, timesteps
```
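To make the `CustomSchedule` contract concrete, below is a minimal sketch of a schedule implementing the callable protocol above. `LinearFlowSchedule` and everything inside it are hypothetical illustrations, not part of the proposed API; only the `(sigmas, timesteps)` return contract is taken from the template.

```python
from typing import List, Optional, Union

import numpy as np
import torch


class LinearFlowSchedule:
    scale_model_input = False

    def __init__(self, num_train_timesteps: int = 1000, **kwargs):
        # Hypothetical parameter: the training timestep range to scale into.
        self.num_train_timesteps = num_train_timesteps

    def __call__(
        self,
        num_inference_steps: int = None,
        device: Union[str, torch.device] = None,
        sigmas: Optional[List[float]] = None,
        **kwargs,
    ):
        # Evenly spaced sigmas from 1.0 down to 1/num_inference_steps,
        # the same ramp used in the FlowMatchCustom example above.
        if sigmas is None:
            sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
        sigmas = torch.tensor(sigmas, dtype=torch.float32, device=device)
        # Flow match convention: timesteps are sigmas scaled into the
        # training timestep range.
        timesteps = sigmas * self.num_train_timesteps
        return sigmas, timesteps


sigmas, timesteps = LinearFlowSchedule()(num_inference_steps=28)
```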
#### Karras

```python
from typing import Optional

import numpy as np
import torch


class KarrasSigmas:
    def __init__(
        self,
        sigma_min: Optional[float] = None,
        sigma_max: Optional[float] = None,
        rho: float = 7.0,
        **kwargs,
    ):
        self.sigma_min = sigma_min
        self.sigma_max = sigma_max
        self.rho = rho

    def __call__(self, in_sigmas: torch.Tensor, **kwargs):
        # Fall back to the endpoints of the incoming sigma schedule.
        sigma_min = self.sigma_min
        if sigma_min is None:
            sigma_min = in_sigmas[-1].item()
        sigma_max = self.sigma_max
        if sigma_max is None:
            sigma_max = in_sigmas[0].item()
        num_inference_steps = len(in_sigmas)
        rho = self.rho
        ramp = np.linspace(0, 1, num_inference_steps)
        min_inv_rho = sigma_min ** (1 / rho)
        max_inv_rho = sigma_max ** (1 / rho)
        sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
        return sigmas
```

#### Custom
```python
import torch


class ShiftSigmas:
    def __init__(
        self,
        shift: float,
        **kwargs,
    ):
        self.shift = shift

    def __call__(self, in_sigmas: torch.Tensor, **kwargs):
        # Uniformly rescale the incoming sigmas by a constant factor.
        sigmas = in_sigmas * self.shift
        return sigmas
```
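For intuition, here is a small, self-contained illustration of applying the two sigma-schedule callables above to a base sigma ramp; the base values are made-up stand-ins for whatever the scheduler would pass as `in_sigmas`:

```python
import numpy as np
import torch

base_sigmas = torch.linspace(14.6, 0.03, 10)  # illustrative SDXL-like sigma range

karras_sigmas = KarrasSigmas()(base_sigmas)           # Karras spacing, rho=7.0
shifted_sigmas = ShiftSigmas(shift=1.2)(base_sigmas)  # uniform 1.2x rescale

# Karras spacing concentrates steps near sigma_min; the shift simply scales.
print(np.round(karras_sigmas, 3))
print(shifted_sigmas)
```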
Putting it together, the same `EulerDiscreteScheduler` is configured for either a beta or a flow match schedule:

```python
euler = EulerDiscreteScheduler(
    schedule_config=BetaSchedule(
        beta_end=0.012,
        beta_schedule="scaled_linear",
        beta_start=0.00085,
        timestep_spacing="leading",
    ),
    sigma_schedule_config=KarrasSigmas(),
)

flow_match_euler = EulerDiscreteScheduler(
    schedule_config=FlowMatchSchedule(
        shift=13.0,
        use_dynamic_shifting=False,
        base_schedule=FlowMatchSD3(),
    ),
    sigma_schedule_config=None,
)
```
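Since `schedule_config` and `sigma_schedule_config` are independent constructor arguments in the sketch above, they can presumably be mixed; whether the flow match path accepts a sigma schedule is part of the design still being refined. A hedged sketch of that composability:

```python
# Hypothetical combination under the proposed API: a flow match schedule
# plus a custom sigma transform on the same Euler scheduler.
flow_match_euler_shifted = EulerDiscreteScheduler(
    schedule_config=FlowMatchSchedule(
        shift=13.0,
        use_dynamic_shifting=False,
        base_schedule=FlowMatchSD3(),
    ),
    sigma_schedule_config=ShiftSigmas(shift=1.1),
)
```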
I see what you're doing, but I think my design is much simpler and accomplishes the necessary goals without creating profiles. Yes, the end user would need to know what they want and how to do it, and this can be done by experimenting with settings and saving them in a 'cheat sheet', 'profile', 'workflow', etc. In essence, we can do what the Comfies, Forges, and SDNexts are doing with a much simpler design framework.

You really need to see what I've done. I'm using it, and it allows for a lot of divergence and flexibility. The design is very straightforward, as is the case now, with but a few additional parameters. I might also add that specific timestep spacing is more conducive to SDXL than to flow match; I don't need it in my proposed schedule. Also, with this schedule, using the beta sigmas you have, in effect, a built-in denoiser that can be easily tuned for more or less noise (i.e. crazy detail or smooth).

Take a look at the attachments, generated with the Flux Dev (QINT8) model and this scheduler. The following three images were generated without the Sigma Offset feature:

1. Native Flux Karras.
2. Flux Karras using betas.
3. Flux Karras using the 'Xtreme' beta values to create more noise/detail.

The following three images make use of the Sigma Offset feature:

1. Native Flux Karras with Sigma Offset set to 0.91.
2. Flux Karras using betas with Sigma Offset set to 0.91.
3. Flux Karras using the 'Xtreme' beta values with Sigma Offset set to 0.91.
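The scheduler code referenced here is not attached to the thread, so the exact semantics of the 'Sigma Offset' knob are unspecified. Purely as a guess at what a multiplicative offset like 0.91 could mean for a sigma schedule (this is not the actual implementation):

```python
import numpy as np


def apply_sigma_offset(sigmas: np.ndarray, offset: float = 0.91) -> np.ndarray:
    """Hypothetical: uniformly rescale all sigmas by `offset`.

    Values below 1.0 lower the noise level at every step; values above 1.0
    raise it. This is a guess at the kind of knob described above.
    """
    return sigmas * offset


sigmas = np.linspace(1.0, 1 / 28, 28)
print(apply_sigma_offset(sigmas)[:3])
```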
## Model/Pipeline/Scheduler description
Fully support SD3 / SD3.5 / FLUX models using a new scheduler design template with a more standardized approach, including new parameters to support these models. This approach can be utilized in existing schedules to support flow match models.

I have a working FlowMatchEulerDiscreteScheduler schedule in my local library to support this request.

A list of current issues and PRs (and there may be more) that may be fully or partially addressed by this request:
- #10001
- #9982
- #10146
- #9955
- #9951
- #9924 <== this current issue
## Open source status

## Provide useful links for the implementation
@yiyixuxu @linjiapro @vladmandic @hlky