examples/00_quick_start/dkn_MIND.ipynb
<i>Copyright (c) Recommenders contributors.</i>
<i>Licensed under the MIT License.</i>
DKN [1] is a deep learning model which incorporates information from a knowledge graph for better news recommendation. Specifically, DKN uses the TransX [2] family of methods for knowledge graph representation learning, then applies a CNN framework, named KCNN, to combine entity embeddings with word embeddings and generate a final embedding vector for a news article. CTR prediction is made via an attention-based neural scorer.
[label] [userid] [CandidateNews]%[impressionid]
e.g., 1 train_U1 N1%0
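As a minimal sketch of the line format above (the helper `parse_dkn_line` is hypothetical, not part of the recommenders library):

```python
def parse_dkn_line(line):
    # [label] [userid] [CandidateNews]%[impressionid]
    label, user_id, rest = line.strip().split(" ")
    candidate_news, impression_id = rest.split("%")
    return int(label), user_id, candidate_news, impression_id

print(parse_dkn_line("1 train_U1 N1%0"))
# -> (1, 'train_U1', 'N1', '0')
```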
The `history_size` parameter in the config file is the maximum number of clicked news items we use per user. If a user's click history is longer than `history_size`, we automatically keep only the last `history_size` items; if it is shorter, we automatically pad with 0. The format is: [Userid] [newsid1,newsid2...]
e.g., train_U1 N1,N2
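The keep-last-`history_size` and pad-with-0 behavior described above can be sketched as follows (a toy illustration with integer news ids; padding at the end is an assumption, and the library's iterator may use a different convention):

```python
def clip_or_pad_history(news_ids, history_size):
    # keep only the most recent `history_size` clicks
    ids = news_ids[-history_size:]
    # pad with 0 (the reserved padding id) up to the fixed length
    return ids + [0] * (history_size - len(ids))

print(clip_or_pad_history([11, 22, 33], 5))        # -> [11, 22, 33, 0, 0]
print(clip_or_pad_history([1, 2, 3, 4, 5, 6], 5))  # -> [2, 3, 4, 5, 6]
```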
CandidateNews:34,45,334,23,12,987,3456,111,456,432 and the title entity value may be: entity:45,0,0,0,0,0,0,0,0,0. Only the first value of the entity vector is non-zero, due to the word "Trump". The title values and entity values are hashed from 1 to n (where n is the number of distinct words or entities). Each feature length should be fixed at k (the doc_size parameter): if the number of words in a document is more than k, truncate the document to k words; if it is less than k, pad 0 to the end.
The format is: [Newsid] [w1,w2,w3...wk] [e1,e2,e3...ek]
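A hypothetical helper that builds one such line, applying the truncate-to-k / pad-to-k rule described above to both the word ids and the entity ids (the function name is our own, for illustration only):

```python
def doc_feature_line(news_id, word_ids, entity_ids, doc_size):
    # truncate to the first `doc_size` tokens, then pad with 0 to fixed length
    def fix_len(ids):
        ids = ids[:doc_size]
        return ids + [0] * (doc_size - len(ids))

    words = ",".join(map(str, fix_len(word_ids)))
    entities = ",".join(map(str, fix_len(entity_ids)))
    return f"{news_id} {words} {entities}"

print(doc_feature_line("N1", [34, 45, 334], [45, 0, 0], 5))
# -> N1 34,45,334,0,0 45,0,0,0,0
```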
*.npy files of pretrained embeddings. After loading, each file is an [n+1, k] two-dimensional matrix, where n is the number of words (or entities) in the corresponding hash dictionary and k is the dimension of the embedding; note that embedding 0 is reserved for zero padding. In this experiment, we used GloVe [4] vectors to initialize the word embeddings. We trained the entity embeddings using TransE [2] on the knowledge graph, and each context embedding is the average of the entity's neighbors in the knowledge graph.
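To make the [n+1, k] layout concrete, here is a toy example that builds and round-trips such a matrix with NumPy (the sizes and file name are made up for illustration; row 0 is the zero-padding embedding):

```python
import os
import tempfile

import numpy as np

n, k = 4, 100  # toy vocabulary size and embedding dimension
emb = np.zeros((n + 1, k), dtype=np.float32)  # row 0 reserved for padding
rng = np.random.default_rng(0)
emb[1:] = rng.normal(size=(n, k)).astype(np.float32)  # rows 1..n: real embeddings

path = os.path.join(tempfile.gettempdir(), "toy_word_embeddings_100.npy")
np.save(path, emb)

loaded = np.load(path)
print(loaded.shape)            # (5, 100)
print(bool(loaded[0].any()))   # False: the padding row stays all-zero
```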
The MIND dataset [3] is a large-scale English news dataset. It was collected from anonymized behavior logs of the Microsoft News website. MIND contains 1,000,000 users, 161,013 news articles and 15,777,377 impression logs. Every news article contains rich textual content, including title, abstract, body, category and entities. Each impression log contains the click events, non-clicked events and the historical news click behaviors of the user before that impression.
In this notebook we are going to use a subset of the MIND dataset, MIND demo. MIND demo contains 500 users, 9,432 news articles and 6,134 impression logs.
For this quick-start notebook, we provide all the necessary word embedding, entity embedding and context embedding files directly.
import warnings
warnings.filterwarnings("ignore")
import os
import sys
from tempfile import TemporaryDirectory
import tensorflow as tf
tf.get_logger().setLevel("ERROR") # only show error messages
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
from recommenders.models.deeprec.deeprec_utils import download_deeprec_resources, prepare_hparams
from recommenders.models.deeprec.models.dkn import DKN
from recommenders.models.deeprec.io.dkn_iterator import DKNTextIterator
from recommenders.utils.notebook_utils import store_metadata
print(f"System version: {sys.version}")
print(f"Tensorflow version: {tf.__version__}")
tmpdir = TemporaryDirectory()
data_path = os.path.join(tmpdir.name, "mind-demo-dkn")
yaml_file = os.path.join(data_path, "dkn.yaml")
train_file = os.path.join(data_path, "train_mind_demo.txt")
valid_file = os.path.join(data_path, "valid_mind_demo.txt")
test_file = os.path.join(data_path, "test_mind_demo.txt")
news_feature_file = os.path.join(data_path, "doc_feature.txt")
user_history_file = os.path.join(data_path, "user_history.txt")
wordEmb_file = os.path.join(data_path, "word_embeddings_100.npy")
entityEmb_file = os.path.join(data_path, "TransE_entity2vec_100.npy")
contextEmb_file = os.path.join(data_path, "TransE_context2vec_100.npy")
if not os.path.exists(yaml_file):
download_deeprec_resources("https://recodatasets.z20.web.core.windows.net/deeprec/", tmpdir.name, "mind-demo-dkn.zip")
EPOCHS = 10
HISTORY_SIZE = 50
BATCH_SIZE = 500
hparams = prepare_hparams(
    yaml_file,
    news_feature_file=news_feature_file,
    user_history_file=user_history_file,
    wordEmb_file=wordEmb_file,
    entityEmb_file=entityEmb_file,
    contextEmb_file=contextEmb_file,
    epochs=EPOCHS,
    history_size=HISTORY_SIZE,
    batch_size=BATCH_SIZE,
)
print(hparams)
model = DKN(hparams, DKNTextIterator)
print(model.run_eval(valid_file))
model.fit(train_file, valid_file)
Now we can check the performance on the test set:
res = model.run_eval(test_file)
print(res)
# Record results for tests - ignore this cell
store_metadata("auc", res["auc"])
store_metadata("group_auc", res["group_auc"])
store_metadata("ndcg@5", res["ndcg@5"])
store_metadata("ndcg@10", res["ndcg@10"])
store_metadata("mean_mrr", res["mean_mrr"])
[1] Wang, Hongwei, et al. "DKN: Deep Knowledge-Aware Network for News Recommendation." Proceedings of the 2018 World Wide Web Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2018.
[2] Knowledge Graph Embeddings including TransE, TransH, TransR and PTransE. https://github.com/thunlp/KB2E
[3] Wu, Fangzhao, et al. "MIND: A Large-scale Dataset for News Recommendation." Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020. https://msnews.github.io/competition.html
[4] GloVe: Global Vectors for Word Representation. https://nlp.stanford.edu/projects/glove/