I am trying to replicate this paper (https://arxiv.org/pdf/1904.12577.pdf), and for the graph convolution I would like to use StellarGraph. After researching other GCN libraries I find this one the most advanced, and I prefer to use Keras within TensorFlow 2.2.
My training dataset consists of about 20k heterogeneous graphs: PDF documents whose words form the nodes of a graph. My task is to classify each word and tag it with its respective meaning, i.e. retrieve specific financial metrics from written text. Since each word is a node in the graph, this should be a node classification problem.
Right now I use the RelationalGraphConvolution layer (I need multiple edge types) without a StellarGraph generator, as my input comes from my own fit generator, which provides the features for each document with a fixed (padded) number of words.
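To make the setup concrete, my fit generator looks roughly like this (a simplified sketch: the shapes, constants, and the `documents` dict layout are illustrative, and the `[features, *adjacencies]` input ordering is my own convention, not a StellarGraph API):

```python
import numpy as np

NUM_WORDS = 256       # fixed (padded) number of words per document (assumption)
FEAT_DIM = 32         # node-feature size (assumption)
NUM_EDGE_TYPES = 3    # number of relation/edge types (assumption)
NUM_CLASSES = 10      # number of word tags (assumption)

def document_batches(documents, batch_size=4):
    """Yield (inputs, labels) batches; every document is padded to NUM_WORDS nodes.

    Each document is assumed to be a dict with:
      "features": (NUM_WORDS, FEAT_DIM) node-feature matrix,
      "adj":      list of NUM_EDGE_TYPES (NUM_WORDS, NUM_WORDS) adjacency matrices,
      "labels":   (NUM_WORDS, NUM_CLASSES) one-hot word tags.
    """
    while True:
        idx = np.random.choice(len(documents), batch_size, replace=False)
        feats = np.stack([documents[i]["features"] for i in idx])        # (B, N, F)
        adjs = [np.stack([documents[i]["adj"][r] for i in idx])          # (B, N, N)
                for r in range(NUM_EDGE_TYPES)]
        labels = np.stack([documents[i]["labels"] for i in idx])         # (B, N, C)
        yield [feats, *adjs], labels
```

Padding every document to the same number of words is what lets me batch several graphs together, at the cost of wasted computation on the padding nodes.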
Another reason for not using a StellarGraph generator is the preprocessing of node features with character embeddings; this way I can use symbolic tensors as the inputs to RGCN.
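The character-embedding preprocessing I have in mind looks roughly like this (a minimal sketch in plain tf.keras; all sizes are illustrative, and average pooling stands in for whatever character encoder the paper actually uses):

```python
import tensorflow as tf

MAX_WORDS = 256   # padded number of words per document (assumption)
MAX_CHARS = 16    # padded number of characters per word (assumption)
CHAR_VOCAB = 100  # character-vocabulary size (assumption)
CHAR_DIM = 32     # learned character-embedding size (assumption)

# Input: character ids per word, shape (batch, words, chars).
char_ids = tf.keras.Input(shape=(MAX_WORDS, MAX_CHARS), dtype="int32")

# Learn character embeddings: (batch, words, chars, CHAR_DIM).
char_emb = tf.keras.layers.Embedding(CHAR_VOCAB, CHAR_DIM)(char_ids)

# Pool over the character axis to get one vector per word:
# (batch, words, CHAR_DIM).
word_features = tf.keras.layers.TimeDistributed(
    tf.keras.layers.GlobalAveragePooling1D()
)(char_emb)

encoder = tf.keras.Model(char_ids, word_features)
```

The symbolic `word_features` tensor, together with one adjacency input per edge type, is then what I would like to feed into the RelationalGraphConvolution layers, which is why a plain StellarGraph generator does not fit directly.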
Should I use some kind of custom RelationalFullBatchNodeGenerator and modify it into a "fit generator within a generator"? In that case I do not currently know how to preprocess those inputs and learn the character embeddings.
Putting aside that I currently cannot save the model (because of https://github.com/stellargraph/stellargraph/issues/1252), my network is performing poorly right now.
Any advice would be highly appreciated.
Thank you for this great Python library, and keep up the great work.