What is the question? I worked mostly with Theano as a backend computing engine and only a few times with TensorFlow.

I want to add two more layers to this code. When I add the first layer it's okay and the program runs, but when I add the second one, turning the network into four layers, it doesn't run:

InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[32,10] labels_size=[128,10]
[[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_2, Reshape_3)]]

Here is the original code:

from __future__ import print_function

import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.001
training_iters = 200000
batch_size = 128
display_step = 10

# Network Parameters
n_input = 784      # MNIST data input (img shape: 28*28)
n_classes = 10     # MNIST total classes (0-9 digits)
dropout = 0.75     # Dropout, probability to keep units

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)  # dropout (keep probability)


# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)


def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')


# Create model
def conv_net(x, weights, biases, dropout):
    # Reshape input picture
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    # Convolution Layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    # Max Pooling (down-sampling)
    conv1 = maxpool2d(conv1, k=2)

    # Convolution Layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    # Max Pooling (down-sampling)
    conv2 = maxpool2d(conv2, k=2)

    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    # Apply Dropout
    fc1 = tf.nn.dropout(fc1, dropout)

    # Output, class prediction
    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out


# Store layers weight & bias
weights = {
    # 5x5 conv, 1 input, 32 outputs
    'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    # fully connected, 7*7*64 inputs, 1024 outputs
    'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
    # 1024 inputs, 10 outputs (class prediction)
    'out': tf.Variable(tf.random_normal([1024, n_classes]))
}

biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = conv_net(x, weights, biases, keep_prob)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()
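The modified four-layer version is not shown, but this particular error usually means that the extra conv/pool blocks changed the size of the flattened feature map while 'wd1' kept its old 7*7*64 input size: when the element counts happen to divide evenly, the -1 in the tf.reshape before the fully connected layer folds the mismatch into the batch dimension (here 32 rows instead of 128), so the logits batch no longer matches the labels batch; otherwise the reshape itself fails. Below is a minimal sketch, reusing conv2d, maxpool2d and the weights/biases dictionaries from the code above, of how one extra conv+pool block could be added consistently. The names wc3/bc3 and the 128-filter count are illustrative assumptions, not the asker's actual modification.

# Assumed extra 5x5 conv layer: 64 input channels -> 128 output channels.
weights['wc3'] = tf.Variable(tf.random_normal([5, 5, 64, 128]))
biases['bc3'] = tf.Variable(tf.random_normal([128]))

# Three k=2 max-pools with 'SAME' padding shrink the 28x28 input
# 28 -> 14 -> 7 -> 4, so the flattened size is now 4*4*128, not 7*7*64.
weights['wd1'] = tf.Variable(tf.random_normal([4*4*128, 1024]))

def conv_net(x, weights, biases, dropout):
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    conv1 = maxpool2d(conv2d(x, weights['wc1'], biases['bc1']), k=2)      # 28x28x1  -> 14x14x32
    conv2 = maxpool2d(conv2d(conv1, weights['wc2'], biases['bc2']), k=2)  # 14x14x32 -> 7x7x64
    conv3 = maxpool2d(conv2d(conv2, weights['wc3'], biases['bc3']), k=2)  # 7x7x64   -> 4x4x128

    # Flatten using the updated row count of wd1; this must equal the actual
    # per-example size of conv3 (4*4*128 here) for the batch dimension to survive.
    fc1 = tf.reshape(conv3, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.nn.relu(tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1']))
    fc1 = tf.nn.dropout(fc1, dropout)

    return tf.add(tf.matmul(fc1, weights['out']), biases['out'])

These changes have to be in place before pred = conv_net(x, weights, biases, keep_prob) is built. A fourth conv+pool block would shrink the map further to 2x2, so 'wd1' would then need 2*2*(number of filters) input rows; the general rule is that every added pooling step must be reflected in the fully connected weight shape.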
