hopwise.model.knowledge_aware_recommender.kgrec¶
- Reference:
Yuhao Yang et al. “Knowledge Graph Self-Supervised Rationalization for Recommendation” in KDD 2023.
Classes¶
Module Contents¶
- class hopwise.model.knowledge_aware_recommender.kgrec.Contrast(num_hidden: int, tau: float = 0.7)[source]¶
Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self) -> None:
            super().__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will also have their parameters converted when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

- Variables:
training (bool) – Boolean represents whether this module is in training or evaluation mode.
- tau: float = 0.7¶
- mlp1¶
- mlp2¶
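A minimal sketch of what a contrastive head like this plausibly computes, assuming two MLP projection heads (mlp1, mlp2) and a temperature-scaled InfoNCE objective between two embedding views. The class name SimpleContrast and the method info_nce are illustrative assumptions, not the exact interface of hopwise's Contrast class.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleContrast(nn.Module):
        """Hypothetical InfoNCE-style contrast module with temperature tau."""

        def __init__(self, num_hidden: int, tau: float = 0.7):
            super().__init__()
            self.tau = tau
            self.mlp1 = nn.Sequential(nn.Linear(num_hidden, num_hidden), nn.ELU(), nn.Linear(num_hidden, num_hidden))
            self.mlp2 = nn.Sequential(nn.Linear(num_hidden, num_hidden), nn.ELU(), nn.Linear(num_hidden, num_hidden))

        def info_nce(self, z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
            # Project both views, normalize, and score each positive pair (the
            # diagonal) against all in-batch pairs at temperature tau.
            h1 = F.normalize(self.mlp1(z1), dim=-1)
            h2 = F.normalize(self.mlp2(z2), dim=-1)
            logits = h1 @ h2.t() / self.tau                      # [N, N] similarity matrix
            labels = torch.arange(h1.size(0), device=h1.device)  # positives on the diagonal
            return F.cross_entropy(logits, labels)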
- class hopwise.model.knowledge_aware_recommender.kgrec.AttnHGCN(embedding_size, n_hops, n_users, n_relations, mess_dropout_rate=0.1)[source]¶
Bases: torch.nn.Module

Heterogeneous Graph Convolutional Network.
- no_attn_convs¶
- embedding_size¶
- n_hops¶
- n_relations¶
- n_users¶
- mess_dropout_rate = 0.1¶
- relation_embedding¶
- W_Q¶
- n_heads = 2¶
- d_k¶
- mess_dropout¶
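A rough sketch, under stated assumptions, of one relation-aware attentive propagation hop such a network might perform: each KG edge is scored with a relation-conditioned dot-product attention using a shared projection (here w_q) split into n_heads chunks of size d_k, and the scores weight the messages aggregated into head entities. The function name, shapes, and scoring details are illustrative, not hopwise's exact implementation.

    import torch
    import torch.nn.functional as F

    def attentive_kg_hop(entity_emb, relation_emb, edge_index, edge_type, w_q, n_heads, d_k):
        # entity_emb: [n_entities, d], relation_emb: [n_relations, d]
        # edge_index: [2, n_edges] holding (head, tail), edge_type: [n_edges]
        # w_q: [d, n_heads * d_k] shared query/key projection (assumed shape)
        head, tail = edge_index
        rel = relation_emb[edge_type]                                # [n_edges, d]
        q = ((entity_emb[head] * rel) @ w_q).view(-1, n_heads, d_k)  # relation-modulated query
        k = (entity_emb[tail] @ w_q).view(-1, n_heads, d_k)
        score = (q * k).sum(-1).mean(-1) / d_k ** 0.5                # average over attention heads
        weight = torch.sigmoid(score).unsqueeze(-1)                  # edge-level rationale weight
        out = torch.zeros_like(entity_emb)
        out.index_add_(0, head, weight * (rel * entity_emb[tail]))   # weighted neighbor aggregation
        return F.normalize(out, dim=-1)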
- class hopwise.model.knowledge_aware_recommender.kgrec.KGRec(config, dataset)[source]¶
Bases: hopwise.model.abstract_recommender.KnowledgeRecommender

KGRec is a self-supervised knowledge-aware recommender that identifies and focuses on informative knowledge graph connections through an attentive rationalization mechanism. It combines a generative masked-reconstruction task with a contrastive learning task to highlight and align meaningful knowledge and interaction signals. By masking and rebuilding high-rationale edges while filtering out noisy ones, KGRec produces recommendations that are more interpretable and robust to noise.
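As a hedged illustration of the rationalization described above, the edges with the highest rationale (attention) scores can be selected for masking and later reconstruction, while the remaining edges keep carrying messages. The helper below only mirrors that idea; its name and signature are hypothetical, not the model's internals.

    import torch

    def select_rationale_edges(edge_scores: torch.Tensor, mask_size: int):
        # edge_scores: one rationale score per KG edge; mask_size plays the role
        # of the mae_msize hyperparameter listed below (assumed meaning).
        masked = torch.topk(edge_scores, k=mask_size).indices   # edges to mask and rebuild
        keep = torch.ones_like(edge_scores, dtype=torch.bool)
        keep[masked] = False                                    # drop masked edges from the graph view
        return masked, keep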
- input_type¶
- embedding_size¶
- reg_weight¶
- context_hops¶
- node_dropout_rate¶
- mess_dropout_rate¶
- mae_coef¶
- mae_msize¶
- cl_coef¶
- cl_tau¶
- cl_drop¶
- samp_func¶
- inter_edge¶
- kg_graph¶
- user_embedding¶
- entity_embedding¶
- mf_loss¶
- reg_loss¶
- restore_user_e = None¶
- restore_entity_e = None¶
- gcn¶
- contrast_fn¶
- node_dropout¶
- calculate_loss(interaction)[source]¶
Calculate the training loss for a batch of data.
- Parameters:
interaction (Interaction) – Interaction class of the batch.
- Returns:
Training loss, shape: []
- Return type:
torch.Tensor
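Based on the coefficients listed above (mae_coef, cl_coef, reg_weight), the returned scalar is plausibly a weighted sum of a pairwise recommendation loss, a masked-reconstruction loss, a contrastive loss, and an L2 regularization term. The sketch below only illustrates that composition as an assumption, not the method's actual code path.

    # Hypothetical composition of the training objective; each term is assumed
    # to be computed internally by calculate_loss.
    def total_loss(bpr_loss, mae_loss, cl_loss, reg_loss, mae_coef, cl_coef, reg_weight):
        return bpr_loss + mae_coef * mae_loss + cl_coef * cl_loss + reg_weight * reg_loss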
- predict(interaction)[source]¶
Predict the scores between users and items.
- Parameters:
interaction (Interaction) – Interaction class of the batch.
- Returns:
Predicted scores for given users and items, shape: [batch_size]
- Return type:
torch.Tensor
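A hypothetical usage sketch; model (a trained KGRec instance) and interaction (a batch Interaction with aligned user and item ID fields) are placeholders assumed to exist.

    import torch

    model.eval()
    with torch.no_grad():
        # One score per (user, item) pair in the batch, shape [batch_size].
        scores = model.predict(interaction)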
- full_sort_predict(interaction)[source]¶
Full sort prediction function. Given users, calculate the scores between users and all candidate items.
- Parameters:
interaction (Interaction) – Interaction class of the batch.
- Returns:
Predicted scores for given users and all candidate items, shape: [n_batch_users * n_candidate_items]
- Return type:
torch.Tensor
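A hypothetical post-processing sketch for the flattened output: reshape it to [n_batch_users, n_candidate_items] and rank the top items per user. model, interaction, and n_items are placeholders assumed to exist.

    import torch

    model.eval()
    with torch.no_grad():
        flat_scores = model.full_sort_predict(interaction)    # [n_batch_users * n_candidate_items]
        scores = flat_scores.view(-1, n_items)                # one row of item scores per user
        topk_scores, topk_items = torch.topk(scores, k=10, dim=1)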