Add UNHAP solver #18
base: main
Conversation
according to review Co-authored-by: Thomas Moreau <[email protected]>
according to review Co-authored-by: Thomas Moreau <[email protected]>
All class names in CamelCase Co-authored-by: Thomas Moreau <[email protected]>
a few comments
moment_matching: boolean, `default=False`
    If set to False, baseline, alpha and kernel parameters are randomly
    chosen. If set to True, baseline, alpha and kernel parameters are
    chosen using the smart init strategy. The smart init strategy is only
    implemented for truncated gaussian and raised_cosine kernels.
This should probably have the same structure as `init` in the other solvers?
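For reference, a minimal sketch of what a unified `init` argument could look like, dispatching between the two strategies. The function name, the accepted values, and the moment-matching heuristic below are illustrative assumptions, not the actual FaDIn API:

```python
import torch

def init_params(init, events, end_time, n_dim):
    """Hypothetical sketch of a single `init` entry point."""
    if init == "random":
        baseline = torch.rand(n_dim)
        alpha = torch.rand(n_dim, n_dim)
    elif init == "moment_matching":
        # First-order moment heuristic (assumed strategy): start the
        # baseline at the average event rate of each dimension.
        n_events = torch.tensor([float(len(ev)) for ev in events])
        baseline = n_events / end_time
        alpha = torch.full((n_dim, n_dim), 0.5)
    else:
        raise ValueError(f"Unknown init strategy: {init!r}")
    return baseline, alpha
```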
    Baseline parameter of the Hawkes process.

param_baseline_noise : `tensor`, shape (n_dim)
    Baseline parameter of the Hawkes process.
Typo: this description is duplicated from the entry above; it should describe the noise baseline.
marks: list of vector of marks (size number of events)
square_int_marks: integral of the square mark in the left part of the loss
rho: list of vector of size (number of events)
This does not match the function signature.
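As a generic illustration of keeping the two in sync (the function name and parameter descriptions below are hypothetical, inferred from the surrounding context), the documented parameters should mirror the signature exactly:

```python
def discrete_loss_terms(marks, square_int_marks, rho):
    """Hypothetical example whose documented parameters mirror the signature.

    Parameters
    ----------
    marks : list of tensor, shape (n_events,)
        Vector of marks per dimension.
    square_int_marks : float
        Integral of the squared mark in the left part of the loss.
    rho : list of tensor, shape (n_events,)
        Per-event mixture variable, one vector per dimension.
    """
```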
    return grad_baseline_noise


def get_grad_eta_mixture(precomputations, baseline, alpha, kernel,
Why do we separate the gradient computation for each parameter? It seems that several quantities are computed multiple times.
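To illustrate, a single function can share the intermediate quantities across all parameter gradients. This toy squared-loss sketch (not the actual UNHaP loss or API) shows the pattern:

```python
import torch

def get_all_grads(events, baseline, alpha, kernel_vals):
    """Toy sketch: gradients of sum((baseline + alpha * k - events) ** 2),
    computing the shared residual once instead of once per parameter."""
    residual = baseline + alpha * kernel_vals - events  # shared term
    grad_baseline = 2.0 * residual.sum()
    grad_alpha = 2.0 * (residual * kernel_vals).sum()
    grad_kernel = 2.0 * alpha * residual
    return grad_baseline, grad_alpha, grad_kernel
```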
        self.opt_intens.step()

        if i % self.batch_rho == 0:
Please add more comments explaining this block.
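For example, the block could carry comments along these lines (the role of `batch_rho` is inferred from the names and may need adjusting):

```python
# Gradient step on the intensity parameters at every iteration.
self.opt_intens.step()

# The mixture variable `rho` is refreshed only every `batch_rho`
# iterations to keep the per-iteration cost down.
if i % self.batch_rho == 0:
    ...
```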
@@ -3,6 +3,8 @@
import torch
from scipy.linalg import toeplitz

from fadin.utils.utils import convert_float_tensor


@numba.jit(nopython=True, cache=True)
Do we still use numba in the repo?
In this PR, we add the UNHaP solver and examples related to it.