Minor docstring fixes #18

Open
wants to merge 6 commits into main
2 changes: 1 addition & 1 deletion chroma/data/system.py
@@ -20,7 +20,7 @@
import warnings
from dataclasses import dataclass
from functools import partial
from typing import Dict, List, Tuple
from typing import Dict, List, Tuple, Optional

import numpy as np
import torch
2 changes: 1 addition & 1 deletion chroma/data/xcs.py
@@ -28,7 +28,7 @@

`C` (LongTensor), the chain map encoding per-residue chain assignments with
shape `(num_batch, num_residues)`. The chain map codes positions as `0`
when masked, poitive integers for chain indices, and negative integers
when masked, positive integers for chain indices, and negative integers
to represent missing residues (of the corresponding positive integers).

`S` (LongTensor), the sequence of the protein as alphabet indices with
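A small illustration of the chain-map convention documented in the `xcs.py` hunk above. The tensor values and the derived masks are hypothetical, chosen only to show the `0` / positive / negative encoding; they are not taken from the repository.

```python
import torch

# Hypothetical chain map for one structure with seven positions:
# positions 1-3 belong to chain 1, position 4 is a missing residue of
# chain 1 (encoded as -1), positions 5-6 belong to chain 2, and the
# last position is masked padding (encoded as 0).
C = torch.tensor([[1, 1, 1, -1, 2, 2, 0]])  # shape (num_batch=1, num_residues=7)

observed = C > 0   # residues with chain assignments
missing = C < 0    # residues present in sequence but missing from the structure
masked = C == 0    # masked / padded positions
```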
10 changes: 5 additions & 5 deletions chroma/layers/attention.py
@@ -64,11 +64,11 @@ class MultiHeadAttention(nn.Module):
for details and intuition.

Args:
n_head (int): number of attention heads
d_k (int): dimension of the keys and queries in each attention head
d_v (int): dimension of the values in each attention head
d_model (int): input and output dimension for the layer
dropout (float): dropout rate, default is 0.1
n_head (int): number of attention heads
d_k (int): dimension of the keys and queries in each attention head
d_v (int): dimension of the values in each attention head
d_model (int): input and output dimension for the layer
dropout (float): dropout rate, default is 0.1

Inputs:
Q (torch.tensor): query tensor of shape ```(batch_size, sequence_length_q, d_model)```
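A minimal usage sketch of the `MultiHeadAttention` layer whose docstring is re-indented above. The constructor arguments follow the Args list; the forward call with `(Q, K, V)` and the output shape are assumptions based on the Inputs block (only `Q` is visible in this hunk), so the exact signature should be checked against the class itself.

```python
import torch
from chroma.layers.attention import MultiHeadAttention

# Illustrative sizes; argument names follow the Args list in the docstring.
attn = MultiHeadAttention(n_head=4, d_k=16, d_v=16, d_model=64, dropout=0.1)

batch_size, seq_len, d_model = 2, 10, 64
Q = torch.randn(batch_size, seq_len, d_model)  # (batch_size, sequence_length_q, d_model)
K = torch.randn(batch_size, seq_len, d_model)  # assumed key input, analogous to Q
V = torch.randn(batch_size, seq_len, d_model)  # assumed value input, analogous to Q

# Assumed forward signature; the layer may also accept an attention mask.
output = attn(Q, K, V)
```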
2 changes: 1 addition & 1 deletion chroma/layers/structure/protein_graph.py
@@ -101,7 +101,7 @@ class ProteinFeatureGraph(nn.Module):
for the third dimension are PDB order (`[N, CA, C, O]`).
C (LongTensor, optional): Chain map with shape
`(num_batch, num_residues)`. The chain map codes positions as `0`
when masked, poitive integers for chain indices, and negative
when masked, positive integers for chain indices, and negative
integers to represent missing residues of the corresponding
positive integers.
custom_D (Tensor, optional): Pre-computed custom distance map
8 changes: 6 additions & 2 deletions chroma/models/graph_design.py
@@ -1954,8 +1954,12 @@ def sample(
smoothing values less than 1.0 are recommended.
top_p (float, optional): Top-p cutoff for Nucleus Sampling, see
Holtzman et al ICLR 2020.
ban_S (tuple, optional): An optional set of token indices from
`chroma.constants.AA20` to ban during sampling.
mask_S (torch.Tensor, optional): Binary tensor mask indicating
masked/banned tokens during sampling at each residue with shape
`(num_batch, num_residues, num_alphabet)`.
bias (torch.Tensor, optional): Bias for each token at
each residue added to log probabilities with shape
`(num_batch, num_residues, num_alphabet)`.

Returns:
S_sample (torch.LongTensor): Sampled sequence of shape `(num_batch,
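A hedged sketch of how the new `mask_S` and `bias` tensors described in the `sample` docstring above might be built. The `0`-means-banned convention for `mask_S`, the specific banned and biased tokens, and the final keyword-argument call are illustrative assumptions; only the tensor shapes and the reference to `chroma.constants.AA20` come from the docstring.

```python
import torch
from chroma.constants import AA20

num_batch, num_residues = 1, 100
num_alphabet = len(AA20)  # 20 standard amino-acid tokens

# mask_S: binary mask over tokens at each residue. This sketch assumes the
# convention that 1 keeps a token available and 0 bans it; here cysteine is
# banned at every position, mirroring what the old ban_S=("C",) expressed.
mask_S = torch.ones(num_batch, num_residues, num_alphabet)
mask_S[..., AA20.index("C")] = 0.0

# bias: added to the per-token log-probabilities at each residue; a positive
# value up-weights that token (here, a mild preference for alanine).
bias = torch.zeros(num_batch, num_residues, num_alphabet)
bias[..., AA20.index("A")] = 0.5

# Both tensors would then be passed along, e.g. sample(..., mask_S=mask_S, bias=bias).
```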