AttributeError: 'google.protobuf.pyext._message.RepeatedCompositeCo' object has no attribute 'append' when running Django under gunicorn
I have developed a road segmentation model served from a Django application. The application loads a TensorFlow-based U-Net model from an .h5 file. It works fine on the development server, but now I am testing it under gunicorn, and the model fails to load there.
System check identified no issues (0 silenced).
April 23, 2022 - 10:23:03
Django version 4.0.4, using settings 'segmentation_site.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
This works perfectly on the development server, but not under gunicorn:
gunicorn --bind 127.0.0.1:8000 segmentation_site.wsgi.application
[2022-04-23 03:24:15 -0700] [8192] [INFO] Starting gunicorn 20.0.4
[2022-04-23 03:24:15 -0700] [8192] [INFO] Listening at: http://127.0.0.1:8000 (8192)
[2022-04-23 03:24:15 -0700] [8192] [INFO] Using worker: sync
[2022-04-23 03:24:15 -0700] [8194] [INFO] Booting worker with pid: 8194
2022-04-23 10:24:16.356150: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-04-23 10:24:16.356246: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
tensorflow imported.
2022-04-23 10:24:19.782405: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-04-23 10:24:19.782526: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-04-23 10:24:19.782567: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ubuntu): /proc/driver/nvidia/version does not exist
[2022-04-23 10:24:19 +0000] [8194] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/usr/lib/python3/dist-packages/gunicorn/workers/base.py", line 119, in init_process
self.load_wsgi()
File "/usr/lib/python3/dist-packages/gunicorn/workers/base.py", line 144, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/lib/python3/dist-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/lib/python3/dist-packages/gunicorn/app/wsgiapp.py", line 49, in load
return self.load_wsgiapp()
File "/usr/lib/python3/dist-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/lib/python3/dist-packages/gunicorn/util.py", line 383, in import_app
mod = importlib.import_module(module)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/junaidali/Documents/segmentation/segmentation_site/segmentation_site/wsgi.py", line 16, in <module>
application = get_wsgi_application()
File "/home/junaidali/.local/lib/python3.8/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
django.setup(set_prefix=False)
File "/home/junaidali/.local/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/junaidali/.local/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/home/junaidali/.local/lib/python3.8/site-packages/django/apps/config.py", line 126, in create
mod = import_module(mod_path)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/junaidali/Documents/segmentation/segmentation_site/predict/apps.py", line 103, in <module>
model = load_model()
File "/home/junaidali/Documents/segmentation/segmentation_site/predict/apps.py", line 17, in load_model
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer="he_normal", padding="same")(inputs)
File "/home/junaidali/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/junaidali/.local/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 176, in _variable_handle_from_shape_and_dtype
handle_data.shape_and_type.append(
AttributeError: 'google.protobuf.pyext._message.RepeatedCompositeCo' object has no attribute 'append'
[2022-04-23 10:24:19 +0000] [8194] [INFO] Worker exiting (pid: 8194)
[2022-04-23 03:24:20 -0700] [8192] [INFO] Shutting down: Master
[2022-04-23 03:24:20 -0700] [8192] [INFO] Reason: Worker failed to boot.
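One thing I notice in the traceback is that gunicorn is imported from the system `dist-packages` (`/usr/lib/python3/dist-packages`) while Django and Keras come from my user `site-packages` (`~/.local/lib/python3.8/site-packages`). To see which interpreter and which protobuf each entry point actually picks up, I run a small check like this (`describe` is just a throwaway helper I wrote for this question; output will vary per machine):

```python
# Report which interpreter is running and where a given package resolves from.
# `describe` is a debugging helper, not part of the project.
import importlib
import sys

def describe(module_name):
    """Return (version, file) for a module, or (None, error) if it fails to import."""
    try:
        mod = importlib.import_module(module_name)
        return getattr(mod, "__version__", "?"), getattr(mod, "__file__", "?")
    except ImportError as exc:
        return None, str(exc)

if __name__ == "__main__":
    print("interpreter:", sys.executable)
    print("protobuf   :", describe("google.protobuf"))
    print("tensorflow :", describe("tensorflow"))
```

If the development server and the gunicorn worker resolve different protobuf installations, that would at least explain why the same code behaves differently in the two setups.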
I load the model in apps.py so that it is loaded only once, when the server starts. The traceback above points at line 17 of apps.py, where the first layer c1 of the model is defined:
from django.apps import AppConfig
import tensorflow as tf
def load_model():
    # define the input layer, with dimensions 512 x 512 x 3
    inputs = tf.keras.layers.Input((512, 512, 3))
    # Extraction Process layer_1
    # 1. first convolution operation to identify 16 features, with a 3 x 3 filter size. The 'relu' activation
    #    function is used because it works better than the others here.
    # 2. the Dropout layer will skip neurons in the layer. 0.1 means 10% of the neurons will be excluded
    #    from the training process at each epoch.
    # 3. another convolution operation will identify another 16 features.
    # 4. MaxPooling2D will halve the image resolution by applying a 2 x 2 filter.
    c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer="he_normal", padding="same")(inputs)
    c1 = tf.keras.layers.Dropout(0.1)(c1)
    c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer="he_normal", padding="same")(c1)
    p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)
    # Extraction Process layer_2
    # the same process as layer 1.
    c2 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p1)
    c2 = tf.keras.layers.Dropout(0.1)(c2)
    c2 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c2)
    p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)
    # Extraction Process layer_3
    # the same process as layer 1.
    c3 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p2)
    c3 = tf.keras.layers.Dropout(0.2)(c3)
    c3 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c3)
    p3 = tf.keras.layers.MaxPooling2D((2, 2))(c3)
    # Extraction Process layer_4
    # the same process as layer 1.
    c4 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p3)
    c4 = tf.keras.layers.Dropout(0.2)(c4)
    c4 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c4)
    p4 = tf.keras.layers.MaxPooling2D((2, 2))(c4)
    # Extraction Process layer_5
    # the same process as layer 1, except for the MaxPooling layer,
    # because by this point the feature map is already down to 32 x 32.
    c5 = tf.keras.layers.Conv2D(256, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p4)
    c5 = tf.keras.layers.Dropout(0.3)(c5)
    c5 = tf.keras.layers.Conv2D(256, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c5)
    # Expansion Process Layer 1
    # Conv2DTranspose is the opposite of a Conv2D layer:
    # Conv2D tells what features are in the image, while Conv2DTranspose tells the location of the features
    # identified by Conv2D.
    # So, to combine the feature information with the location information, we concatenate the u6 and c4 layers.
    u6 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding="same")(c5)
    u6 = tf.keras.layers.concatenate([u6, c4])
    c6 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(u6)
    c6 = tf.keras.layers.Dropout(0.2)(c6)
    c6 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c6)
    # Expansion Process Layer 2
    # follows the same pattern as expansion layer 1.
    u7 = tf.keras.layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding="same")(c6)
    u7 = tf.keras.layers.concatenate([u7, c3])
    c7 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(u7)
    c7 = tf.keras.layers.Dropout(0.2)(c7)
    c7 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c7)
    # Expansion Process Layer 3
    # follows the same pattern as expansion layer 1.
    u8 = tf.keras.layers.Conv2DTranspose(32, (2, 2), strides=(2, 2), padding="same")(c7)
    u8 = tf.keras.layers.concatenate([u8, c2])
    c8 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(u8)
    c8 = tf.keras.layers.Dropout(0.1)(c8)
    c8 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c8)
    # Expansion Process Layer 4
    # follows the same pattern as expansion layer 1, except that instead of extracting further features,
    # the current features are used to get a probability between 0 and 1 for each pixel.
    # This value between 0 and 1 represents the paths,
    # and these probabilities are learned through the loss function.
    u9 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding="same")(c8)
    u9 = tf.keras.layers.concatenate([u9, c1], axis=3)
    outputs = tf.keras.layers.Conv2D(1, (1, 1), activation="sigmoid")(u9)
    model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.load_weights("model_for_road_segmentation.h5")
    return model

print("tensorflow imported.")
model = load_model()
print("model loaded.")
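Because `load_model()` runs at module import time, any failure kills the gunicorn worker before it can boot. As a sketch (not a fix for the protobuf error itself), the load could also be made lazy, so the model is still built only once per process but a failure surfaces in a request instead of at worker boot; `get_model` and `loader` are names I made up for this sketch:

```python
# Lazy, once-per-process model loading (sketch). `loader` would be the
# existing load_model function; the lock guards concurrent first calls.
import threading

_model = None
_lock = threading.Lock()

def get_model(loader):
    """Build the model with `loader` on first call, then return the cached instance."""
    global _model
    if _model is None:
        with _lock:
            # re-check under the lock so concurrent first calls build it only once
            if _model is None:
                _model = loader()
    return _model
```

A view would then call `get_model(load_model)` instead of relying on the module-level `model`.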
From settings.py
WSGI_APPLICATION = 'segmentation_site.wsgi.application'
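One side note on my own command: as I understand it, gunicorn's app URI takes the form `module[:callable]` (defaulting to a top-level `application`), whereas `WSGI_APPLICATION` is a Django-internal dotted path. So the invocation matching the setting above would normally be the colon form:

```shell
# module:callable form; "application" is the object created in wsgi.py
gunicorn --bind 127.0.0.1:8000 segmentation_site.wsgi:application
```

I don't think this alone explains the protobuf AttributeError, though, since the traceback shows wsgi.py was already being imported when the crash happened.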