ValueError: If specifying TensorSpec names for nested structures, either zero or all names have to be specified

Hi, I’m trying to export a GAT model in TF 2.1.0 SavedModel format. I was previously able to do so, but after refactoring to more concise code the following error appeared:

```
~/anaconda3/envs/recom/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in _get_defun_inputs(args, names, structure, flat_shapes)
   1189     specified_names = [arg.name for arg in tensor_specs if arg.name]
   1190     if specified_names and len(specified_names) < len(tensor_specs):
-> 1191       raise ValueError("If specifying TensorSpec names for nested structures, "
   1192                        "either zero or all names have to be specified.")
   1193 

ValueError: If specifying TensorSpec names for nested structures, either zero or all names have to be specified.
```

To reproduce, simply create a Python 3.7 env, install stellargraph and its dependencies with pip, train the model in the GAT demo, and then try to export it in the next cell with:

```python
tf.keras.models.save_model(model, export_dir)
```

or

```python
tf.saved_model.save(model, export_dir)
```

or even

```python
model.save(export_dir)
```
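For context, the model being exported comes straight from the GAT Cora demo, built roughly like this (a condensed sketch; names such as G, train_targets and export_dir follow the notebook, and older stellargraph releases use gat.build() in place of in_out_tensors()):

```python
from stellargraph.mapper import FullBatchNodeGenerator
from stellargraph.layer import GAT
from tensorflow import keras

# G and train_targets come from loading Cora as in the demo notebook
generator = FullBatchNodeGenerator(G, method="gat")
gat = GAT(
    layer_sizes=[8, train_targets.shape[1]],
    activations=["elu", "softmax"],
    attn_heads=8,
    generator=generator,
    in_dropout=0.5,
    attn_dropout=0.5,
)
x_inp, predictions = gat.in_out_tensors()  # gat.build() on older versions

model = keras.Model(inputs=x_inp, outputs=predictions)
# ... compile and fit as in the demo, then any of the save calls above fails:
model.save(export_dir)
```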

I’m running on macOS Catalina 10.15 with Anaconda.

Any guidance would be appreciated!

Thanks for letting us know. It seems this affects all of our “full batch” models. I’ve filed https://github.com/stellargraph/stellargraph/issues/1251, but we haven’t started investigating it yet. One temporary workaround would be to switch to a different model, such as GraphSAGE: https://github.com/stellargraph/stellargraph/blob/master/demos/node-classification/graphsage/graphsage-cora-node-classification-example.ipynb
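For example, the node classification model in that demo is built along these lines (a rough sketch; G and train_targets come from loading Cora as in the notebook, and older releases use graphsage.build() instead of in_out_tensors()). GraphSAGE works on mini-batches of sampled neighbourhoods rather than the full-batch input structure, so it should avoid the error:

```python
from stellargraph.mapper import GraphSAGENodeGenerator
from stellargraph.layer import GraphSAGE
from tensorflow import keras

# G and train_targets come from loading Cora as in the demo notebook
generator = GraphSAGENodeGenerator(G, batch_size=50, num_samples=[10, 5])
graphsage = GraphSAGE(
    layer_sizes=[32, 32], generator=generator, bias=True, dropout=0.5
)
x_inp, x_out = graphsage.in_out_tensors()  # graphsage.build() on older versions
predictions = keras.layers.Dense(
    units=train_targets.shape[1], activation="softmax"
)(x_out)

model = keras.Model(inputs=x_inp, outputs=predictions)
# ... compile and fit as usual ...
model.save(export_dir)  # mini-batch dense inputs, so SavedModel export should work
```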

We’ll be updating that issue and this thread as we resolve it.


Thanks! Just an update: I was able to export it using

```python
tf.compat.v1.keras.experimental.export_saved_model(model, export_path)
```

but was unable to import it back with:

```python
emb_model = tf.compat.v1.keras.experimental.load_from_saved_model(
    import_path, custom_objects=None
)
```

```
~/anaconda3/envs/recom/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name)
    248     cls = module_objects.get(class_name)
    249     if cls is None:
--> 250       raise ValueError('Unknown ' + printable_module_name + ': ' + class_name)
    251 
    252   cls_config = config['config']

ValueError: Unknown layer: SqueezedSparseConversion
```

Ah, good find. I’ll make a note on the issue about that too.

StellarGraph defines various custom Keras layers, which need to be passed to custom_objects. This can either be done manually, maybe something like:

```python
emb_model = tf.compat.v1.keras.experimental.load_from_saved_model(
    import_path,
    custom_objects={
        "GraphAttentionSparse": sg.layer.GraphAttentionSparse,
        "SqueezedSparseConversion": sg.layer.SqueezedSparseConversion,
    },
)
```

(Swap GraphAttentionSparse for GraphAttention if you created the FullBatchNodeGenerator with sparse=False.)

Or automatically using the custom_keras_layers variable that’s available at the top level of stellargraph:

```python
emb_model = tf.compat.v1.keras.experimental.load_from_saved_model(
    import_path,
    custom_objects=sg.custom_keras_layers,
)
```

I’ve filed https://github.com/stellargraph/stellargraph/issues/1261 about making this more obvious, since it applies even if we fix #1251 above.
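For what it’s worth, the same mapping is what you’d pass to the standard (non-compat) Keras loading API as well, assuming the saved artifact is something tf.keras.models.load_model can read:

```python
import stellargraph as sg
import tensorflow as tf

# custom_keras_layers maps layer names like "SqueezedSparseConversion"
# to the StellarGraph classes so Keras can deserialize them
emb_model = tf.keras.models.load_model(
    import_path, custom_objects=sg.custom_keras_layers
)
```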