
Add UNHAP solver #18

Open · wants to merge 25 commits into main

Conversation

GuillaumeStaermanML (Collaborator):

In this PR, we add the UNHaP solver and examples related to it.

@tomMoral (Collaborator) left a comment:

A few comments.

Comment on lines +411 to +416
moment_matching: boolean, `default=False`
If set to False, baseline, alpha and kernel parameters are randomly
chosen. If set to True, baseline, alpha and kernel parameters are
chosen using the smart init strategy.
The smart init strategy is only implemented
for truncated gaussian and raised_cosine kernels.

This should probably have the same structure as `init` in the other solvers?
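For reference, a minimal sketch of what that could look like (the names and accepted values here are assumptions modeled on the other solvers, not this PR's actual API):

```python
# Hypothetical sketch: a single `init` argument replacing the
# moment_matching flag (names and values are assumptions).
def resolve_init(init, kernel):
    """Validate an init spec: 'random', 'moment_matching', or a dict of params."""
    if isinstance(init, dict):
        # user-supplied values, e.g. {'baseline': ..., 'alpha': ..., 'kernel': [...]}
        return init
    if init == 'moment_matching':
        # the smart init is only implemented for these two kernels
        if kernel not in ('truncated_gaussian', 'raised_cosine'):
            raise ValueError(f"moment_matching init not implemented for {kernel!r}")
        return init
    if init == 'random':
        return init
    raise ValueError(f"unknown init strategy: {init!r}")
```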

Baseline parameter of the Hawkes process.

param_baseline_noise : `tensor`, shape (n_dim)
Baseline parameter of the Hawkes process.

Typo: the description of `param_baseline_noise` duplicates the baseline parameter's docstring; it should describe the noise baseline.

Comment on lines +503 to +505
marks: list of vector of marks (size number of events)
square_int_marks: integral of the square mark in the left part of the loss
rho: list of vector of size (number of events)

This does not match the function's signature.
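For illustration, the Parameters section should mirror the actual arguments, something like this (the parameter descriptions below are placeholders, not the real docstring):

```python
def mixture_loss_terms(marks, square_int_marks, rho):
    """Placeholder illustration: each documented parameter matches an argument.

    Parameters
    ----------
    marks : list of tensor, shape (n_events,)
        Vector of marks, one per dimension.
    square_int_marks : float
        Integral of the squared mark in the left part of the loss.
    rho : list of tensor, shape (n_events,)
        Mixture responsibilities, one vector per dimension.
    """
    ...
```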

return grad_baseline_noise


def get_grad_eta_mixture(precomputations, baseline, alpha, kernel,

Why do we separate the gradient computation for each parameter?
It looks like several quantities are recomputed multiple times.
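As a sketch of the alternative (assuming the loss is differentiable in all parameters; this uses autograd rather than the PR's closed-form gradients):

```python
import torch

# Sketch: evaluate the shared quantities once and obtain every
# parameter's gradient from a single backward pass.
baseline = torch.rand(2, requires_grad=True)
alpha = torch.rand(2, 2, requires_grad=True)

def loss_fn(baseline, alpha):
    # stand-in for the discretized loss; the real shared precomputations
    # would be evaluated once here and reused by all gradient terms
    return (baseline.sum() + alpha.mean()) ** 2

loss_fn(baseline, alpha).backward()
# baseline.grad and alpha.grad are now both populated from one pass
```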


self.opt_intens.step()

if i % self.batch_rho == 0:

Please add more comments here explaining what this block does.
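For instance, something along these lines (the exact semantics are my guesses from the surrounding loop, to be corrected):

```python
self.opt_intens.step()  # gradient step on the intensity parameters

# Re-estimate the mixture variable rho only every `batch_rho`
# iterations: its update is more costly, and the intensity
# parameters move little between consecutive steps.
if i % self.batch_rho == 0:
    ...
```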

@@ -3,6 +3,8 @@
import torch
from scipy.linalg import toeplitz

from fadin.utils.utils import convert_float_tensor


@numba.jit(nopython=True, cache=True)

Do we still use numba in the repo?
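If this is its last use, the jitted helper could presumably be rewritten with torch. An illustrative sketch (not the actual function in this file):

```python
import torch
import torch.nn.functional as F

# Illustrative sketch: a causal discrete convolution of an event grid
# with per-dimension kernel values, vectorized with torch instead of a
# numba loop.
def convolve_events(events_grid, kernel_values):
    """events_grid: (n_dim, n_grid); kernel_values: (n_dim, L)."""
    n_dim, L = kernel_values.shape
    padded = F.pad(events_grid, (L - 1, 0))  # left-pad so the output is causal
    # groups=n_dim applies each dimension's kernel to its own channel;
    # flip turns conv1d's cross-correlation into a true convolution
    return F.conv1d(padded.unsqueeze(0),
                    kernel_values.flip(-1).unsqueeze(1),
                    groups=n_dim).squeeze(0)
```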
