
Changes to documentation #49

Open · wants to merge 14 commits into master
Conversation

shikhareddy (Contributor)

Hello @chiragnagpal. I made the necessary changes to README.md, all Python files in dsm, and the docs.

inputdim: int
Dimensionality of the input features.
optimizer: str
The choice of the gradient based optimization method. One of
shikhareddy (Contributor Author):

"One of" continues on the next line.

The choice of the gradient based optimization method. One of
'Adam', 'RMSProp' or 'SGD'.
risks: int
Uncertainty as to whether the parameters are appropriate for
shikhareddy (Contributor Author):

Remove risks doc
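For context, the `optimizer` string documented above is typically resolved to a torch optimizer class. A minimal sketch of such a lookup (the helper name `get_optimizer` is an assumption for illustration, not DSM's actual API):

```python
import torch

def get_optimizer(name, params, lr):
    """Map an optimizer name to a torch.optim instance.

    Raises ValueError for unsupported names, mirroring the
    'Adam', 'RMSProp' or 'SGD' choices in the docstring.
    """
    choices = {
        'Adam': torch.optim.Adam,
        'RMSProp': torch.optim.RMSprop,
        'SGD': torch.optim.SGD,
    }
    if name not in choices:
        raise ValueError(f"Unknown optimizer: {name}")
    return choices[name](params, lr=lr)

# usage with a single dummy parameter
param = torch.nn.Parameter(torch.zeros(1))
opt = get_optimizer('Adam', [param], lr=1e-3)
```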

@@ -74,8 +86,7 @@ def _gen_torch_model(self, inputdim, optimizer, risks):
def fit(self, x, t, e, vsize=0.15, val_data=None,
iters=1, learning_rate=1e-3, batch_size=100,
elbo=True, optimizer="Adam", random_state=100):

r"""This method is used to train an instance of the DSM model.
shikhareddy (Contributor Author):

Put back the `r` prefix (the docstring contains LaTeX escapes like `\( x \)`).

A numpy array of the input features, \( x \).

Returns:
Tensor: input features, \( x \).
shikhareddy (Contributor Author):

Changed the return type to `torch.Tensor` and documented it in more detail.

@@ -171,9 +181,40 @@ def compute_nll(self, x, t, e):
return loss

def _prepocess_test_data(self, x):
"""This function pre processes the test data.
shikhareddy (Contributor Author) · Jun 2, 2021:

Converts NumPy test data to a torch tensor.
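The conversion this comment describes is typically a one-liner. A hedged sketch of what such preprocessing might do for the non-recurrent model (the function name and the `.double()` cast to float64 are assumptions):

```python
import numpy as np
import torch

def preprocess_test_data(x):
    """Convert a NumPy feature array to a float64 torch tensor."""
    return torch.from_numpy(x).double()

x = np.random.rand(4, 3)            # 4 samples, 3 features
x_tensor = preprocess_test_data(x)  # torch.float64 tensor, shape (4, 3)
```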

@@ -397,10 +464,41 @@ def _gen_torch_model(self, inputdim, optimizer, risks):
risks=risks)

def _prepocess_test_data(self, x):
shikhareddy (Contributor Author):

Converts variable-length NumPy arrays to tensors.
Recurrent neural networks require special preprocessing to work with variable-sized sequences. This function pads the input NumPy arrays and creates appropriately sized torch tensors from them.

return torch.from_numpy(_get_padded_features(x))
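A minimal sketch of the padding step described above (the helper name `pad_features` and the NaN pad value are assumptions; DSM's actual `_get_padded_features` may differ):

```python
import numpy as np
import torch

def pad_features(x, pad_value=np.nan):
    """Pad a list of (length_i, features) arrays to one common
    (n, max_length, features) array so they can form a single tensor."""
    max_len = max(len(seq) for seq in x)
    n, d = len(x), x[0].shape[1]
    padded = np.full((n, max_len, d), pad_value)
    for i, seq in enumerate(x):
        padded[i, :len(seq)] = seq
    return padded

# two sequences of different lengths, same feature dimension
sequences = [np.ones((2, 3)), np.ones((5, 3))]
tensor = torch.from_numpy(pad_features(sequences))  # shape (2, 5, 3)
```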

def _prepocess_training_data(self, x, t, e, vsize, val_data, random_state):
"""RNNs require different preprocessing for variable length sequences"""
"""RNNs require different preprocessing for variable length sequences.
shikhareddy (Contributor Author) · Jun 2, 2021:

add new line

@@ -236,13 +239,13 @@ class DeepRecurrentSurvivalMachinesTorch(DeepSurvivalMachinesTorch):
Dimensionality of the input features.
k: int
The number of underlying parametric distributions.
typ: str
shikhareddy (Contributor Author) · Jun 2, 2021:

Choice of the recurrent neural architecture.
One of 'LSTM', 'RNN' or 'GRU'.
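These strings commonly map to torch's recurrent modules. A hedged sketch of such a dispatch (the helper name and keyword choices are assumptions, not the repository's code):

```python
import torch.nn as nn

def get_rnn_layer(typ, inputdim, hidden, layers):
    """Build the recurrent layer named by `typ`:
    one of 'LSTM', 'RNN' or 'GRU'."""
    cells = {'LSTM': nn.LSTM, 'RNN': nn.RNN, 'GRU': nn.GRU}
    if typ not in cells:
        raise ValueError(f"Unknown architecture: {typ}")
    # (input_size, hidden_size, num_layers); batch-first sequences
    return cells[typ](inputdim, hidden, layers, batch_first=True)

layer = get_rnn_layer('GRU', inputdim=3, hidden=16, layers=2)
```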
