model_lib

save_model(model, model_save_dir='agat_model')

Save a PyTorch model, including both parameters and structure, to disk. See: https://pytorch.org/tutorials/beginner/basics/saveloadrun_tutorial.html

Parameters:
  • model (PyTorch-based model) – The model to save.

  • model_save_dir (str, optional) – A directory to store the model, defaults to ‘agat_model’

Output:

A file saved to the disk under model_save_dir.

Output type:

A file.

load_model(model_save_dir='agat_model', device='cuda')

Load a PyTorch model from disk.

Parameters:
  • model_save_dir (str, optional) – The directory where the model was saved, defaults to ‘agat_model’

  • device (str, optional) – Device for the loaded model, defaults to ‘cuda’

Returns:

A PyTorch-based model.

Return type:

PyTorch-based model.
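
torch.save serializes the whole model object with Python's pickle module, which is why both structure and parameters are restored on load. A framework-free sketch of the same whole-object round trip (SimpleModel is a hypothetical stand-in, and pickle stands in directly for torch.save/torch.load):

```python
import os
import pickle
import tempfile

class SimpleModel:
    """Hypothetical stand-in for a PyTorch model: holds structure and parameters together."""
    def __init__(self, weights):
        self.weights = weights

# Save the entire object to disk (torch.save pickles the model the same way).
model = SimpleModel([0.1, 0.2, 0.3])
save_dir = tempfile.mkdtemp()
path = os.path.join(save_dir, 'agat.pth')
with open(path, 'wb') as f:
    pickle.dump(model, f)

# Load it back: no need to rebuild the architecture first.
with open(path, 'rb') as f:
    loaded = pickle.load(f)
print(loaded.weights)  # [0.1, 0.2, 0.3]
```

This whole-object approach is the trade-off behind save_model/load_model: convenient, but the pickled file is tied to the class definitions available at load time, which is why the state-dict functions below are often preferred for portability.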

save_state_dict(model, state_dict_save_dir='agat_model', **kwargs)

Save a state dict (model weights and other input info) to disk. See: https://pytorch.org/tutorials/beginner/basics/saveloadrun_tutorial.html

Parameters:
  • model (PyTorch-based model) – The model whose state dict is saved.

  • state_dict_save_dir (str, optional) – A directory to store the model state dict (model weights and other input info), defaults to ‘agat_model’

  • **kwargs – Any additional information you want to save with the state dict.

Output:

A file saved to the disk under state_dict_save_dir.

Output type:

A file.
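
One plausible reading of the **kwargs mechanism is that the extra keyword arguments are merged into the same checkpoint dict that holds the weights. The 'model_state_dict' and 'optimizer_state_dict' keys below match the load_state_dict example in this document, but save_checkpoint_sketch itself is illustrative, not AGAT's implementation; plain values stand in for tensors:

```python
def save_checkpoint_sketch(model_state, optimizer_state, **kwargs):
    """Bundle the weights plus any extra info into one checkpoint dict."""
    checkpoint = {
        'model_state_dict': model_state,
        'optimizer_state_dict': optimizer_state,
    }
    checkpoint.update(kwargs)  # extra keyword arguments travel with the weights
    return checkpoint

ckpt = save_checkpoint_sketch({'w': [1.0]}, {'lr': 0.001}, epoch=42, loss=0.05)
print(ckpt['epoch'], ckpt['loss'])  # 42 0.05
```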

load_state_dict(state_dict_save_dir='agat_model')

Load a state dict (model weights and other info) from disk. See: https://pytorch.org/tutorials/beginner/basics/saveloadrun_tutorial.html

Parameters:

state_dict_save_dir (str, optional) – The directory where the model state dict (model weights and other info) was saved, defaults to ‘agat_model’

Returns:

State dict.

Return type:

dict

Note

Reconstruct a model/optimizer before using the loaded state dict.

Example:

model = PotentialModel(...)
checkpoint = load_state_dict(state_dict_save_dir='agat_model')
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
model = model.to(device)
model.device = device
optimizer = ...
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])

config_parser(config)

Parse the input configurations/settings.

Parameters:

config (str or dict) – The configurations. If a str, it is treated as a path and the configurations are loaded from that JSON file.

Raises:

TypeError – Raised with the message ‘Wrong configuration type.’ if config is neither a str nor a dict.

Returns:

The parsed configurations.

Return type:

dict
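
Given the parameter description, the parsing logic is presumably along these lines (config_parser_sketch is illustrative, not the actual AGAT source):

```python
import json

def config_parser_sketch(config):
    """Return the configurations as a dict; load from a JSON file if a path is given."""
    if isinstance(config, dict):
        return config            # already parsed: pass through
    if isinstance(config, str):
        with open(config, 'r') as f:
            return json.load(f)  # treat the string as a path to a JSON file
    raise TypeError('Wrong configuration type.')

print(config_parser_sketch({'lr': 0.001}))  # {'lr': 0.001}
```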

class EarlyStopping

Stop training when model performance stops improving for a number of steps.

__init__(self, model, graph, logger, patience=10, folder='files')
Parameters:
  • model (torch.nn) – AGAT model

  • logger (_io.TextIOWrapper) – I/O file

  • patience (int, optional) – Stop patience, defaults to 10

  • folder (str, optional) – A directory to save the model, defaults to ‘files’

property model

AGAT model.

property patience

Patience steps.

property counter

Number of steps since last improvement of model performance.

property best_score

Best model performance.

property update

Update state.

property early_stop

Stop training if this variable is true.

step(self, score, model, optimizer)
Parameters:
  • score (float) – Metric of model performance.

  • model (agat) – AGAT model object.

  • optimizer (optimizer) – PyTorch Adam optimizer.
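
The properties above suggest the standard early-stopping pattern: reset the counter on improvement, stop once it reaches patience. A minimal, framework-free sketch (EarlyStoppingSketch is illustrative; the real class also saves the model when the score improves):

```python
class EarlyStoppingSketch:
    """Minimal early-stopping logic: stop after `patience` steps without improvement."""
    def __init__(self, patience=10):
        self.patience = patience
        self.counter = 0          # steps since the last improvement
        self.best_score = None    # best performance seen so far
        self.early_stop = False   # stop training once this is True

    def step(self, score):
        # Higher score = better here; flip the comparison if tracking a loss.
        if self.best_score is None or score > self.best_score:
            self.best_score = score
            self.counter = 0      # improvement: reset the counter
        else:
            self.counter += 1     # no improvement: count this step
            if self.counter >= self.patience:
                self.early_stop = True

stopper = EarlyStoppingSketch(patience=2)
for s in [0.5, 0.6, 0.55, 0.58]:
    stopper.step(s)
print(stopper.early_stop)  # True (two steps in a row without beating 0.6)
```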

save_model(self, model)

Save the model when the validation loss decreases.

Parameters:

model (agat) – AGAT model object.

load_graph_build_method(path)

Load graph building scheme. This file is normally saved when you build your dataset.

Parameters:

path (str) – Path to graph_build_scheme.json file.

Returns:

A dict that describes how to build the graph.

Return type:

dict

PearsonR(y_true, y_pred)

Calculate the Pearson correlation coefficient between two tensors.

Parameters:
  • y_true (torch.Tensor) – The ground-truth tensor.

  • y_pred (torch.Tensor) – The predicted tensor.

Returns:

Pearson coefficient

Return type:

torch.Tensor
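
For reference, PearsonR presumably computes r = cov(y_true, y_pred) / (σ_true · σ_pred); a pure-Python sketch of that formula (pearson_r_sketch is illustrative; the actual function operates on torch.Tensor):

```python
import math

def pearson_r_sketch(y_true, y_pred):
    """Pearson correlation: covariance normalized by both standard deviations."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    mean_p = sum(y_pred) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(y_true, y_pred))
    var_t = sum((t - mean_t) ** 2 for t in y_true)
    var_p = sum((p - mean_p) ** 2 for p in y_pred)
    return cov / math.sqrt(var_t * var_p)

print(pearson_r_sketch([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # 1.0 (perfectly linear)
```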

Note

It looks like the torch.jit.script decorator does not help when computing large torch.Tensor objects; see agat/test/tesor_computation_test.py in the GitHub repository for more details.