
StellarGraph is too slow on large graphs

Hi everyone,

I’m trying to use StellarGraph to compute embeddings for a large graph, around 7.5M nodes and 9.5M edges, but the estimated training time is far too long, around 3,000 hours. Can you please share some tips to speed up training?

Hi,

do you mind providing us with a bit more information about your use-case?

What algorithm are you using?

Are you solving a node classification or link prediction problem or just representation learning using, for example, unsupervised GraphSAGE?

What is the dimensionality of your nodes’ feature vectors?

Are you using a GPU?

Regards,

P.

I’m using unsupervised GraphSAGE to embed the graph structure and node features for downstream tasks. The node features contain both categorical and float values.

I deployed the training process on a server with 8 GPUs, but when I checked the logs, the process was not running on the GPUs.

Thanks for the reply.

Hi,

I am a bit concerned that you are getting loss: nan, so there might be something wrong with your setup or your StellarGraph object.

That said, to speed up unsupervised GraphSAGE, we recently updated the library to allow multi-threading in fit_generator. Please use the latest StellarGraph version available in the “develop” branch. Then, when calling fit_generator on your Keras model, set use_multiprocessing=False and workers=4 (or some other reasonable number, depending on how powerful your machine is). We have found that multi-threading can provide a 3-4x speed-up with unsupervised GraphSAGE; see this pull request for more information: https://github.com/stellargraph/stellargraph/pull/477
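For example, the call might look like this (a minimal sketch; model and train_gen are placeholders for your own compiled Keras model and the training sequence produced by your link generator):

```python
# Sketch only: "model" and "train_gen" stand in for your own compiled Keras
# model and the flow built from your GraphSAGE link generator. The last two
# arguments are the ones that enable multi-threaded batch preparation.
history = model.fit_generator(
    train_gen,
    epochs=1,
    verbose=2,
    shuffle=True,
    use_multiprocessing=False,  # keep multiprocessing off
    workers=4,                  # worker threads prepare batches in parallel
)
```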

The GPUs, unfortunately, will not help you much for this algorithm in our current implementation. Performance is limited by the dimensionality of your node feature vectors and the number of neighbors you sample per layer: the lower the feature dimensionality and the fewer neighbors sampled per layer, the faster the training. I suggest you start with a simpler model, say two GraphSAGE layers sampling 10 and 5 neighbors respectively, and see how you go.
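To illustrate, here is a rough sketch of such a two-layer unsupervised GraphSAGE setup, assuming G is your StellarGraph object with numeric node features attached; the batch size, walk parameters and layer sizes are placeholder values, and the exact constructor and method names may differ slightly between StellarGraph releases:

```python
from keras import Model, optimizers, losses, metrics

from stellargraph.data import UnsupervisedSampler
from stellargraph.layer import GraphSAGE, link_classification
from stellargraph.mapper import GraphSAGELinkGenerator

# G is assumed to be an existing StellarGraph with numeric node features.
nodes = list(G.nodes())
unsupervised_samples = UnsupervisedSampler(G, nodes=nodes, length=5, number_of_walks=1)

batch_size = 50
num_samples = [10, 5]  # 10 neighbors sampled in the first hop, 5 in the second

generator = GraphSAGELinkGenerator(G, batch_size, num_samples)
train_gen = generator.flow(unsupervised_samples)

# Note: some older releases expect generator=train_gen here rather than the
# generator object itself.
graphsage = GraphSAGE(
    layer_sizes=[50, 50], generator=generator, bias=True, dropout=0.0, normalize="l2"
)
x_inp, x_out = graphsage.build()

prediction = link_classification(
    output_dim=1, output_act="sigmoid", edge_embedding_method="ip"
)(x_out)

model = Model(inputs=x_inp, outputs=prediction)
model.compile(
    optimizer=optimizers.Adam(lr=1e-3),
    loss=losses.binary_crossentropy,
    metrics=[metrics.binary_accuracy],
)
```

model.fit_generator can then be called on train_gen with the multi-threading settings shown earlier.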

If the dimensionality of your feature vectors is high (> 1000), then consider applying a dimensionality reduction method to bring it down to something smaller, e.g., 100. It will certainly help with training speed.
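For instance, a hypothetical sketch using scikit-learn’s PCA, where node_features is a placeholder pandas DataFrame of raw node features indexed by node ID:

```python
import pandas as pd
from sklearn.decomposition import PCA

# node_features: placeholder DataFrame with one row per node and > 1000 columns.
pca = PCA(n_components=100)
reduced_features = pd.DataFrame(
    pca.fit_transform(node_features.values), index=node_features.index
)

# reduced_features can then be attached to the graph when constructing your
# StellarGraph object, in place of the original high-dimensional features.
```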

Hope the above helps. Let us know how you go!

Regards,

P.