PEARLMGPT2

Introduction

Title: Faithful Path Language Modeling for Explainable Recommendation over Knowledge Graph (GPT-2 Variant)

Authors: Giacomo Balloccu, Ludovico Boratto, Gianni Fenu, Mirko Marras

Abstract: PEARLMGPT2 is a variant of PEARLM that uses GPT-2 as the base language model architecture. It inherits PEARLM’s constrained graph decoding mechanism while leveraging GPT-2’s pretrained transformer architecture for improved language understanding. This variant provides a balance between model capacity and computational efficiency.

PEARLM (Path Explainable and Aware Reasoning Language Model) extends PLM by adding a constrained graph decoding mechanism to ensure that generated paths are valid according to the knowledge graph structure. The model learns entity-relation sequences from the KG and uses attention-based masking during inference to restrict token predictions to valid graph neighbors.
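The constrained masking step described above can be sketched in plain Python. The adjacency map, token ids, and logits below are toy assumptions for illustration, not hopwise's actual data structures:

```python
import math
import random

def constrained_next_token(logits, current_entity, kg_neighbors, temperature=1.0):
    """Restrict sampling to valid KG neighbors of the current entity
    (a sketch of PEARLM-style constrained graph decoding)."""
    valid = kg_neighbors[current_entity]
    # Softmax over the valid candidates only; all other tokens get probability 0,
    # so the generated path can never leave the knowledge graph.
    scores = [math.exp(logits[t] / temperature) for t in valid]
    total = sum(scores)
    probs = [s / total for s in scores]
    return random.choices(valid, weights=probs, k=1)[0]

# Toy vocabulary of 5 tokens; entity 0 can only reach tokens 2 and 4 in the KG.
kg_neighbors = {0: [2, 4]}
logits = [1.0, 3.0, 0.5, 2.0, 0.2]
print(constrained_next_token(logits, 0, kg_neighbors))  # always 2 or 4
```

Raising the temperature flattens the distribution over the valid candidates; lowering it makes decoding closer to greedy, but in all cases only graph-valid tokens can be emitted.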

Running with hopwise

Model Hyper-Parameters:

  • embedding_size (int) : Size of the embeddings. Defaults to 100.

  • num_heads (int) : Number of heads in the multi-head attention. Defaults to 1.

  • num_layers (int) : Number of layers in the transformer. Defaults to 1.

  • temperature (float) : Temperature for sampling. Defaults to 1.0.

  • dropout (float) : Dropout rate. Defaults to 0.1.

  • bias (bool) : Whether to use bias in linear layers. Defaults to True.

  • base_model (str) : The base transformer model. Defaults to 'gpt2'.

  • sequence_postprocessor (str) : The postprocessor for sequence generation. Defaults to 'SampleSearch'.

  • MAX_PATHS_PER_USER (int) : Maximum paths per user during inference. Defaults to 1.
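
Any of these defaults can be overridden in your experiment's configuration file, assuming hopwise follows the RecBole-style YAML config format; the file name below is illustrative:

```yaml
# pearlm_gpt2.yaml -- overrides for the defaults listed above
embedding_size: 128
num_heads: 2
num_layers: 2
temperature: 1.0
dropout: 0.1
base_model: gpt2
sequence_postprocessor: SampleSearch
MAX_PATHS_PER_USER: 1
```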

A Running Example:

Write the following code into a Python file, such as run.py:

from hopwise.quick_start import run_hopwise

run_hopwise(model='PEARLMGPT2', dataset='ml-100k')

And then:

python run.py

Notes:

  • PEARLMGPT2 requires path sampling from the knowledge graph. Ensure your dataset has KG information.

  • Install the pathlm extra: uv pip install hopwise[pathlm]

  • This variant uses GPT-2 architecture with smaller default configuration.

Tuning Hyper Parameters

If you want to use HyperTuning to tune the hyper-parameters of this model, you can copy the following settings into a file named hyper.test.

learning_rate choice [1e-4,2e-4,5e-4]
embedding_size choice [64,100,200]
num_layers choice [1,2,3]
num_heads choice [1,2,4]
temperature choice [0.5,1.0,1.5]

Note that these hyper-parameter ranges are provided for reference only; we cannot guarantee that they are optimal for this model.

Then, with the source code of hopwise (which you can download from GitHub), you can start tuning:

hopwise tune --config_files=[config_files_path] --params_file=hyper.test

For more details about Parameter Tuning, refer to Parameter Tuning.