Saving & Loading the model #15

Open
kiritbasu opened this issue Mar 28, 2018 · 9 comments

Comments

@kiritbasu

Do you have any examples of Saving the model and Loading it back up to run a prediction?

@shoaib77

I have the same problem regarding saving and loading this model.

@guillaume-chevalier
Owner

guillaume-chevalier commented Mar 30, 2018

There are a lot of questions online about this; for example, the following link might help you. To sum up, you just need to save the TensorFlow graph to disk and reload it, and you can reload it in Python or in C++, for example: https://stackoverflow.com/questions/35508866/tensorflow-different-ways-to-export-and-run-graph-in-c
I haven't read it in detail, but it seems well documented, and I have already saved models to disk this way in the past.

For now, I don't have the time to implement this, but let me give you a hint: you should give a name to the input placeholders and to the tensors you want to fetch back. For example, in this code:

# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

If I'm not wrong, we'll need to name those placeholders:

# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input], name="x")
y = tf.placeholder(tf.float32, [None, n_classes], name="y")

Then in sess.run, instead of referencing the Python variables, you reference strings formatted in a specific way that refer to the tensor names. From what I recall, the string should look like "x:0" in place of the Python tensor variable, like this:

one_hot_predictions, accuracy, final_loss = sess.run(
    [pred, accuracy, cost],
    feed_dict={
        "x:0": X_test,
        "y:0": one_hot(y_test)
    }
)

So once you load the model back, you'll need to use those strings, since you no longer have the named Python variables holding the placeholders. Hope this helps.
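
For completeness, here is a minimal, self-contained sketch of the save/restore-by-name idea (this assumes TensorFlow 1.x; the tiny graph, the "pred" name and the checkpoint path are only illustrative, not the exact code of this repo):

import numpy as np
import tensorflow as tf

# --- Build a small graph with named tensors and save it ---
x = tf.placeholder(tf.float32, [None, 4], name="x")
w = tf.Variable(tf.random_normal([4, 2]), name="w")
pred = tf.identity(tf.matmul(x, w), name="pred")  # name the output we want to fetch later

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "./model.ckpt")

# --- Restore into a fresh graph and run everything by name ---
tf.reset_default_graph()
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph("./model.ckpt.meta")
    new_saver.restore(sess, tf.train.latest_checkpoint("."))
    # Tensors are referenced by their string names now, not by Python variables:
    out = sess.run("pred:0", feed_dict={"x:0": np.zeros((1, 4), np.float32)})
    print(out)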

@jaemin93

jaemin93 commented Jun 11, 2019

I had the same problem. I saved a checkpoint and loaded it back for inference, but the results were very different between training and prediction. I checked the test data and found the cause: INPUT_SIGNAL_TYPES is a Python set, which is unordered, so the feature columns can be read in a different order when the train and test code are split into separate scripts. Changing INPUT_SIGNAL_TYPES to an ordered data type fixes it. I'm not fluent in English, but I hope this helps.
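
For example, replacing the set with a sorted list makes the column order deterministic across the training and inference scripts (a sketch; the signal names shown are the UCI HAR ones, adjust them to your own definition of INPUT_SIGNAL_TYPES):

# Hypothetical fix: use an ordered container so the feature columns are read
# in the same order at train time and at inference time.
INPUT_SIGNAL_TYPES = sorted([
    "body_acc_x_", "body_acc_y_", "body_acc_z_",
    "body_gyro_x_", "body_gyro_y_", "body_gyro_z_",
    "total_acc_x_", "total_acc_y_", "total_acc_z_",
])  # sorted() always returns a plain list in a stable, reproducible order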

@guillaume-chevalier
Owner

This issue will be fixed by PR #32.

@arvindchandel

arvindchandel commented Jan 11, 2021


@guillaume-chevalier I tried your approach above to run a prediction on a new sample X_val, but it's giving me the error 'pred' not defined. I am running it like below:

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/home/arvind/checkpoints/model.ckpt.meta')
    new = saver.restore(sess, tf.train.latest_checkpoint('/home/arvind/checkpoints/'))
    graph = tf.get_default_graph()
    input_x = graph.get_tensor_by_name("x:0")
    res = graph.get_tensor_by_name("y:0")
    feed_Dict = {input_x: X_val}
    output = sess.run([pred], feed_dict=feed_Dict)
    print(output)

Error: pred not defined.
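
I guess, from the comments above, that pred also has to be looked up by name from the restored graph rather than through the Python variable, something like this, assuming the prediction op was given a name such as "pred" when the graph was built (you would have to add that name yourself, e.g. with tf.identity(pred, name="pred")):

# Hypothetical: only works if the prediction op was named "pred" at graph-building time
pred_tensor = graph.get_tensor_by_name("pred:0")
output = sess.run(pred_tensor, feed_dict={input_x: X_val})
print(output)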

@arvindchandel

Got it working.

@zlg9folira

Got it working.

How did you get it working? Could you add the missing part(s) for reconstructing pred?

@GUIMINLONG

I have the same problem regarding reconstructing pred.

@GUIMINLONG

one_hot_predictions, accuracy, final_loss = sess.run(
    [pred, accuracy, cost],
    feed_dict={
        "x:0": X_test,
        "y:0": one_hot(y_test)
    }
)

How did you get it working?
