Two parallel models for semantic segmentation in Keras

Asked by Mateusz Bielecki on April 11, 2021 (Data Science)

I want to build two parallel models for image semantic segmentation in Keras.

    # Imports assumed for this snippet (standalone Keras):
    from keras.models import Model
    from keras.layers import Input, Conv2D, MaxPool2D, UpSampling2D, concatenate
    from keras.optimizers import Adam
    from keras.preprocessing.image import ImageDataGenerator

    input1 = Input(shape=(480,480,3))
    input2 = Input(shape=(480,480,1))

    c1_1 = Conv2D(filters=64, kernel_size=(3,3), activation='relu',  padding='same')(input1)
    c1_1 = MaxPool2D(strides=(2,2))(c1_1)

    c2_1 = Conv2D(filters=16, kernel_size=(3,3), activation='relu',  padding='same')(input2)
    c2_1 = MaxPool2D(strides=(2,2))(c2_1)

    (...)

    # Merge the two models:
    c = concatenate([c1_n, c2_n], axis=3)
    c = UpSampling2D(size=(2,2))(c)
    c = Conv2D(filters=512, kernel_size=(3,3), activation='relu', padding='same')(c)

    (...)

    output_layer = Conv2D(6, kernel_size=(1,1), activation='softmax')(c)

    model = Model([input1, input2], output_layer)

    model.compile(optimizer=Adam(2e-4), loss='categorical_crossentropy', metrics=['categorical_accuracy'])


    def my_generator(x_train, y_train, batch_size):
        data_generator = ImageDataGenerator(
            (...)).flow(x_train, x_train, batch_size, seed=42)
        mask_generator = ImageDataGenerator(
            (...)).flow(y_train, y_train, batch_size, seed=42)

        while True:
            x_batch, _ = data_generator.next()
            y_batch, _ = mask_generator.next()
            yield [x_batch[:,:,:,:3], x_batch[:,:,:,3]], y_batch

    # X = [72, 480, 480, 4]
    # Y = [72, 480, 480, 5]
    model.fit_generator(my_generator(X, Y, 7),
                        steps_per_epoch=60,
                        validation_data=(X_test, Y_test),
                        epochs=150, verbose=2)

But I get this error:

ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[ 0.33891213,  0.37238494,  0.33054393,  0.        ],
     [ 0.34728033,  0.38493724,  0.33472803,  0.        ],
     [ 0.35146444,  0.39330544,  0.35983264,  0.        ],
     ....

Can you help me with my error or tell me whether this assumption is correct?

One Answer

This is probably late, but in case someone else is stuck, check out this link.

Basically, your model expects two inputs as defined here:

    model = Model([input1, input2], output_layer)

Therefore you need to pass a list of two input arrays whose shapes match what you defined here:

    input1 = Input(shape=(480,480,3))
    input2 = Input(shape=(480,480,1))
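
For example, a minimal sketch of the fix (split_inputs is a hypothetical helper, and this assumes the last channel of the 4-channel arrays holds the single-channel input): the validation data, like the generator output, has to be a list of two arrays whose shapes match input1 and input2.

    # Hypothetical helper: split a 4-channel array into the two inputs the model expects.
    def split_inputs(x):
        # Slicing with 3: (rather than 3) keeps the trailing channel axis,
        # so the second array matches Input(shape=(480, 480, 1)).
        return [x[:, :, :, :3], x[:, :, :, 3:]]

    model.fit_generator(my_generator(X, Y, 7),
                        steps_per_epoch=60,
                        validation_data=(split_inputs(X_test), Y_test),
                        epochs=150, verbose=2)

The same slicing applies inside my_generator: yielding x_batch[:, :, :, 3:] rather than x_batch[:, :, :, 3] keeps the channel axis that input2 expects.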

Answered by Kosi on April 11, 2021
