Use run_hopwise()

The run_hopwise() API wraps the full training and evaluation pipeline: dataset loading, dataset splitting, model initialization, model training, model evaluation, checkpoint loading, and the type of execution (evaluation-only from a checkpoint, or full training).

You can create a Python file (e.g., run.py) and write the following code into it.

from hopwise.quick_start import run_hopwise

run_hopwise(dataset=dataset, model=model, config_file_list=config_file_list, config_dict=config_dict)

dataset is the name of the dataset, such as ‘ml-100k’.

model is the name of the model, such as ‘BPR’.

config_file_list is the list of configuration files (e.g., ['file1.yaml', 'file2.yaml', ...]).

config_dict is a dictionary of parameters.

checkpoint is the full path of the torch .pth checkpoint file.

run selects whether to run the training (the default, run='train') or only the evaluation given a checkpoint (run='evaluate').

Important

The difference between config_dict and config_file_list is that config_dict is a Python dictionary containing the arguments you want to pass, while config_file_list points to one or more .yaml files in which you set parameters. The order of precedence is config_file_list < config_dict: if a parameter defined in config_file_list is also present in config_dict, the value from config_dict wins.
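The override behavior can be pictured with plain dictionaries. This is only a sketch of the merge semantics, not hopwise's actual internals, and the parameter names are illustrative:

```python
# Parameters as they would be read from the yaml files in config_file_list:
file_params = {'learning_rate': 0.01, 'epochs': 100}

# Parameters passed inline via config_dict:
config_dict = {'epochs': 50}

# config_dict has higher precedence, so its values win on conflicting keys.
final_config = {**file_params, **config_dict}
print(final_config)  # {'learning_rate': 0.01, 'epochs': 50}
```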