hopwise.model.knowledge_aware_recommender.mcclk

Reference:

Ding Zou et al. “Multi-level Cross-view Contrastive Learning for Knowledge-aware Recommender System.” in SIGIR 2022.

Reference code:

https://github.com/CCIIPLab/MCCLK

Classes

Aggregator
    Base class for all neural network modules.

GraphConv
    Graph Convolutional Network

MCCLK
    MCCLK is a knowledge-based recommendation model.

Module Contents

class hopwise.model.knowledge_aware_recommender.mcclk.Aggregator(item_only=False, attention=True)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will also have their parameters converted when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean represents whether this module is in training or evaluation mode.

item_only = False
attention = True
forward(entity_emb, user_emb, relation_emb, edge_index, edge_type, inter_matrix)[source]
calculate_sim_hrt(entity_emb_head, entity_emb_tail, relation_emb)[source]

The attention weights here are computed following the authors' reference implementation, which differs slightly from the formulation described in the paper.
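
As a rough illustration of what such a relation-conditioned attention score can look like (the function name and the exact tensor operations below are illustrative assumptions, not necessarily hopwise's code), the sketch scores each (head, relation, tail) triple by the norms of the head and tail embeddings projected into the relation space:

import torch

def relation_aware_attention(head_emb, tail_emb, relation_emb):
    # Project head and tail entities into the relation space and take the
    # L2 norm of each projected embedding.
    head_rel = torch.norm(head_emb * relation_emb, p=2, dim=1, keepdim=True)
    tail_rel = torch.norm(tail_emb * relation_emb, p=2, dim=1, keepdim=True)
    # One attention score per triple: squared product of the two norms.
    return (head_rel * tail_rel) ** 2

These raw scores can then be normalized over each head entity's neighborhood (for example with a scatter softmax) before weighting the aggregated tail embeddings.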

class hopwise.model.knowledge_aware_recommender.mcclk.GraphConv(config, embedding_size, n_relations, edge_index, edge_type, inter_matrix, device)[source]

Bases: torch.nn.Module

Graph Convolutional Network

n_relations
edge_index
edge_type
inter_matrix
embedding_size
n_hops
node_dropout_rate
mess_dropout_rate
topk
lambda_coeff
build_graph_separately
device
relation_embedding
convs
node_dropout
mess_dropout
edge_sampling(edge_index, edge_type, rate=0.5)[source]
forward(user_emb, entity_emb)[source]
build_adj(context, topk)[source]

Construct a k-Nearest-Neighbor item-item semantic graph.

Returns:

Sparse tensor of the normalized item-item matrix.
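
As a sketch of one way such a k-NN semantic graph can be built (the function name and the symmetric normalization are assumptions; hopwise's exact construction may differ), the snippet below keeps the topk most cosine-similar items per item and normalizes the resulting adjacency matrix:

import torch
import torch.nn.functional as F

def knn_semantic_graph(item_emb, topk):
    # Pairwise cosine similarity between item embeddings.
    normed = F.normalize(item_emb, dim=1)
    sim = normed @ normed.T
    # Keep only the top-k most similar items for each item.
    values, indices = sim.topk(topk, dim=1)
    n_items = item_emb.size(0)
    rows = torch.arange(n_items, device=item_emb.device).repeat_interleave(topk)
    adj = torch.sparse_coo_tensor(
        torch.stack([rows, indices.reshape(-1)]),
        values.reshape(-1),
        size=(n_items, n_items),
    ).to_dense()
    # Symmetric normalization D^-1/2 A D^-1/2 (done densely here for brevity).
    deg_inv_sqrt = adj.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return (deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)).to_sparse()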

_build_graph_separately(entity_emb)[source]
class hopwise.model.knowledge_aware_recommender.mcclk.MCCLK(config, dataset)[source]

Bases: hopwise.model.abstract_recommender.KnowledgeRecommender

MCCLK is a knowledge-based recommendation model. It focuses on contrastive learning in KG-aware recommendation and proposes a novel multi-level cross-view contrastive learning mechanism. The model considers three graph views for KG-aware recommendation: a global-level structural view and local-level collaborative and semantic views. It then performs contrastive learning across the three views at both the local and global levels, mining comprehensive graph feature and structural information in a self-supervised manner.
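
To make the cross-view contrast concrete, the following is a generic InfoNCE-style objective between two views of the same set of nodes; the temperature hyperparameter corresponds to the attribute listed below, while the function name and the pairing of views are illustrative assumptions rather than hopwise's exact implementation:

import torch
import torch.nn.functional as F

def info_nce(view_a, view_b, temperature=0.1):
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    # Cosine similarities between every node in view A and every node in view B.
    logits = a @ b.T / temperature
    # Row i of each view is the positive pair; all other rows act as negatives.
    positives = logits.diag()
    return (-positives + torch.logsumexp(logits, dim=1)).mean()

Broadly, MCCLK applies this kind of contrast at the local level (between the collaborative and semantic views) and at the global level (between the structural view and the others), which is what the local_level_loss and global_level_loss_* methods listed below correspond to.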

input_type
embedding_size
reg_weight
lightgcn_layer
item_agg_layer
temperature
alpha
beta
loss_type
inter_matrix
inter_graph
kg_graph
user_embedding
entity_embedding
gcn
fc1
fc2
fc3
reg_loss
restore_user_e = None
restore_item_e = None
get_edges(graph)[source]
forward()[source]
light_gcn(user_embedding, item_embedding, adj)[source]
sim(z1: torch.Tensor, z2: torch.Tensor)[source]
calculate_loss(interaction)[source]

Calculate the training loss for a batch of data.

Parameters:

interaction (Interaction) – Interaction class of the batch.

Returns:

Training loss, shape: []

Return type:

torch.Tensor
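
Schematically, and only as an assumption about how such an objective is commonly composed (the exact weighting and the roles of alpha, beta, and reg_weight should be checked against the source), the training loss combines a ranking loss with the contrastive terms listed below and an L2 penalty:

import torch.nn.functional as F

def bpr_loss(pos_scores, neg_scores):
    # Bayesian Personalized Ranking: push positive scores above negative ones.
    return -F.logsigmoid(pos_scores - neg_scores).mean()

def schematic_total_loss(pos_scores, neg_scores, contrastive_loss, params,
                         beta=1e-3, reg_weight=1e-5):
    # Ranking loss + weighted self-supervised term + L2 regularization.
    reg = sum(p.pow(2).sum() for p in params)
    return bpr_loss(pos_scores, neg_scores) + beta * contrastive_loss + reg_weight * reg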

local_level_loss(A_embedding, B_embedding)[source]
global_level_loss_1(A_embedding, B_embedding)[source]
global_level_loss_2(A_embedding, B_embedding)[source]
predict(interaction)[source]

Predict the scores between users and items.

Parameters:

interaction (Interaction) – Interaction class of the batch.

Returns:

Predicted scores for given users and items, shape: [batch_size]

Return type:

torch.Tensor

full_sort_predict(interaction)[source]

Full sort prediction function. Given users, calculate the scores between users and all candidate items.

Parameters:

interaction (Interaction) – Interaction class of the batch.

Returns:

Predicted scores for given users and all candidate items, shape: [n_batch_users * n_candidate_items]

Return type:

torch.Tensor