
Confused about training the classifier. #2

Open
siam7tong opened this issue Dec 29, 2017 · 0 comments
I am sorry to bother you again.
I trained the snn_autonet until delta_weight_mean reached 2.05338878e-07, then trained the classifier. I got 23.5% accuracy on the first batch of 2000 examples, but your reported result is 59.38%. I then trained the classifier with the learning rate from your original settings, but the final result is always around 75%–79%. I don't know what happened. I also want to use the snn_auto model included in your files, but I can't load it successfully. I am new to deep learning, so there may be something I didn't notice. I really need your help; could you give me some advice? Thank you so much.

Here are the classifier training results:

2017-12-29 09:50:42.822324: Learning rates LR: 1.000000
2017-12-29 09:52:42.108979: Iter: 2000 [0], loss: 99177.390625, acc: 0.00%, avg_loss: 85936.811722, avg_acc: 28.20%
2017-12-29 09:54:41.830846: Iter: 4000 [0], loss: 20514.828125, acc: 0.00%, avg_loss: 81864.085177, avg_acc: 34.17%
2017-12-29 09:56:40.925028: Iter: 6000 [0], loss: 0.000000, acc: 1.00%, avg_loss: 81793.218836, avg_acc: 37.23%
2017-12-29 09:58:42.632386: Iter: 8000 [0], loss: 0.000000, acc: 1.00%, avg_loss: 79121.567086, avg_acc: 40.06%
2017-12-29 10:00:41.175695: Iter: 10000 [0], loss: 42995.710938, acc: 0.00%, avg_loss: 78534.694568, avg_acc: 41.80%
2017-12-29 10:02:39.190052: Iter: 12000 [0], loss: 0.000000, acc: 1.00%, avg_loss: 77897.013161, avg_acc: 43.37%
2017-12-29 10:02:39.190443: Epoch 0: avg_Loss: 77903.505120136964, avg_Acc: 43.370280856738
2017-12-29 10:02:39.190583: Learning rates changed LR: 0.100000
2017-12-29 10:04:42.617513: Iter: 14000 [1], loss: 18558.640625, acc: 0.00%, avg_loss: 26327.748162, avg_acc: 69.40%
2017-12-29 10:06:40.323889: Iter: 16000 [1], loss: 0.000000, acc: 1.00%, avg_loss: 22216.912610, avg_acc: 71.53%
2017-12-29 10:08:40.542183: Iter: 18000 [1], loss: 0.000000, acc: 1.00%, avg_loss: 21290.121762, avg_acc: 72.17%
2017-12-29 10:10:39.691736: Iter: 20000 [1], loss: 0.000000, acc: 1.00%, avg_loss: 20656.107550, avg_acc: 72.12%
2017-12-29 10:12:36.612161: Iter: 22000 [1], loss: 0.000000, acc: 1.00%, avg_loss: 20444.464095, avg_acc: 72.05%
2017-12-29 10:14:35.092268: Iter: 24000 [1], loss: 118659.570312, acc: 0.00%, avg_loss: 20122.769527, avg_acc: 72.12%
2017-12-29 10:14:35.092541: Epoch 1: avg_Loss: 20124.446563906367, avg_Acc: 72.122676889741
2017-12-29 10:14:35.092582: Learning rates changed LR: 0.100000
2017-12-29 10:16:36.363787: Iter: 26000 [2], loss: 0.000000, acc: 1.00%, avg_loss: 17975.989564, avg_acc: 73.10%
2017-12-29 10:18:35.850549: Iter: 28000 [2], loss: 44473.710938, acc: 0.00%, avg_loss: 17481.193654, avg_acc: 73.25%
2017-12-29 10:20:34.975065: Iter: 30000 [2], loss: 0.000000, acc: 1.00%, avg_loss: 17638.474654, avg_acc: 73.12%
2017-12-29 10:22:36.304807: Iter: 32000 [2], loss: 0.000000, acc: 1.00%, avg_loss: 17455.829458, avg_acc: 72.91%
2017-12-29 10:24:34.476960: Iter: 34000 [2], loss: 0.000000, acc: 1.00%, avg_loss: 17744.024149, avg_acc: 72.76%
2017-12-29 10:26:32.921364: Iter: 36000 [2], loss: 0.000000, acc: 1.00%, avg_loss: 17528.401285, avg_acc: 72.91%
2017-12-29 10:26:32.921725: Epoch 2: avg_Loss: 17529.862106998906, avg_Acc: 72.914409534128
2017-12-29 10:26:32.921873: Learning rates changed LR: 0.010000
2017-12-29 10:28:39.264652: Iter: 38000 [3], loss: 0.000000, acc: 1.00%, avg_loss: 13054.521175, avg_acc: 75.95%
2017-12-29 10:30:26.309336: Iter: 40000 [3], loss: 0.000000, acc: 1.00%, avg_loss: 13008.920330, avg_acc: 76.38%
2017-12-29 10:32:11.410454: Iter: 42000 [3], loss: 0.000000, acc: 1.00%, avg_loss: 13880.511879, avg_acc: 76.25%
2017-12-29 10:34:05.807313: Iter: 44000 [3], loss: 23595.546875, acc: 0.00%, avg_loss: 13951.334126, avg_acc: 76.38%
2017-12-29 10:35:59.726245: Iter: 46000 [3], loss: 115823.546875, acc: 0.00%, avg_loss: 13987.278281, avg_acc: 76.33%
2017-12-29 10:37:50.667808: Iter: 48000 [3], loss: 0.000000, acc: 1.00%, avg_loss: 13714.742202, avg_acc: 76.51%
2017-12-29 10:37:50.668205: Epoch 3: avg_Loss: 13715.885192744245, avg_Acc: 76.514709559130
2017-12-29 10:37:50.668328: Learning rates changed LR: 0.001000
2017-12-29 10:39:45.801393: Iter: 50000 [4], loss: 21998.937500, acc: 0.00%, avg_loss: 14667.064186, avg_acc: 76.35%
2017-12-29 10:41:26.169617: Iter: 52000 [4], loss: 0.000000, acc: 1.00%, avg_loss: 13953.024168, avg_acc: 76.68%
2017-12-29 10:43:24.667357: Iter: 54000 [4], loss: 0.000000, acc: 1.00%, avg_loss: 13596.672170, avg_acc: 76.80%
2017-12-29 10:45:24.305483: Iter: 56000 [4], loss: 0.000000, acc: 1.00%, avg_loss: 13453.069836, avg_acc: 76.76%
2017-12-29 10:47:23.165318: Iter: 58000 [4], loss: 0.000000, acc: 1.00%, avg_loss: 13597.539448, avg_acc: 76.82%
2017-12-29 10:49:21.349746: Iter: 60000 [4], loss: 0.000000, acc: 1.00%, avg_loss: 19883.241314, avg_acc: 76.33%
2017-12-20 10:37:11.733672: Epoch 4: avg_Loss: 19884.898389319776, avg_Acc: 76.331527627302
2017-12-20 10:37:11.733709: Learning rates changed LR: 0.001000
2017-12-20 10:38:41.183413: Iter: 62000 [5], loss: 0.000000, acc: 1.00%, avg_loss: 18197.470076, avg_acc: 79.40%
2017-12-20 10:40:07.714596: Iter: 64000 [5], loss: 0.000000, acc: 1.00%, avg_loss: 18597.421187, avg_acc: 78.80%
2017-12-20 10:41:39.236288: Iter: 66000 [5], loss: 87509.156250, acc: 0.00%, avg_loss: 19343.797669, avg_acc: 78.13%
2017-12-20 10:43:02.023565: Iter: 68000 [5], loss: 2268.468750, acc: 0.00%, avg_loss: 19525.549633, avg_acc: 78.16%
2017-12-20 10:44:27.446234: Iter: 70000 [5], loss: 0.000000, acc: 1.00%, avg_loss: 19540.495293, avg_acc: 78.36%
2017-12-20 10:45:55.687838: Iter: 72000 [5], loss: 7991.203125, acc: 0.00%, avg_loss: 19458.476607, avg_acc: 78.35%
2017-12-20 10:45:55.688114: Epoch 5: avg_Loss: 19460.098281627750, avg_Acc: 78.356529710809
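For reference, the "Learning rates changed" lines in the log above trace out a step decay schedule: LR 1.0 during epoch 0, dropped 10x entering epochs 1, 3, and 4, and held otherwise. A minimal sketch of that schedule (the function name and structure are illustrative, not taken from the repository's code):

```python
# Step learning-rate decay matching the schedule visible in the log above.
# The decay epochs {1, 3, 4} are read off the printed "Learning rates
# changed" lines; this is a reconstruction, not the author's actual code.

def lr_for_epoch(epoch, base_lr=1.0, decay_epochs=(1, 3, 4), factor=0.1):
    """Return the learning rate used during a given epoch."""
    num_drops = sum(1 for e in decay_epochs if epoch >= e)
    return base_lr * factor ** num_drops

for epoch in range(6):
    print(f"epoch {epoch}: lr = {lr_for_epoch(epoch):.6f}")
```

This reproduces the sequence 1.0, 0.1, 0.1, 0.01, 0.001, 0.001 seen across epochs 0–5 of the log.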
