
CUDA_ERROR_OUT_OF_MEMORY: out of memory. How to increase batch size?

Data Science Asked on July 30, 2020

I have one GPU: GTX 1050 with ~4GB memory.

I tried Mask R-CNN with 192x192 px images and a batch size of 7, and I got this error: CUDA_ERROR_OUT_OF_MEMORY: out of memory

I found:
https://www.tensorflow.org/guide/using_gpu#allowing_gpu_memory_growth

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)

  File "<ipython-input-2-0806c9f06bd0>", line 3
    session = tf.Session(config=config, ...)
                                       ^
SyntaxError: positional argument follows keyword argument

3 Answers

It could be that your GPU cannot handle the full model (Mask R-CNN) with batch sizes like 8 or 16.

I would suggest trying batch size 1 to see if the model can run at all, then slowly increasing it to find the point where it breaks.

You can also use the TensorFlow configuration you found, but it essentially does the same thing: it just stops TensorFlow from immediately reserving all GPU memory when you start a session. The session then only takes what it needs, which (for a fixed model) is determined by the batch size.


The SyntaxError comes from copying the literal ... placeholder from the documentation; remove it and change your code example to:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
session = tf.Session(config=config)
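
Note that ConfigProto and Session are TensorFlow 1.x APIs. If you are on TensorFlow 2.x, a rough equivalent (a minimal sketch, assuming the tf.config memory-growth API) is to enable memory growth per physical GPU before any other GPU work:

import tensorflow as tf

# TensorFlow 2.x: enable memory growth per physical GPU so the process
# only claims GPU memory as it is actually needed. This must run before
# the GPUs are initialized by any other operation.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)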

Answered by n1k31t4 on July 30, 2020

Environment:

1. CUDA 10.0
2. cuDNN 10.0
3. tensorflow 1.14.0
4. pip install opencv-contrib-python
5. git clone https://github.com/thtrieu/darkflow
6. Allowing GPU memory growth (a sketch of this step follows below)
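
The memory-growth step in a TensorFlow 1.14 setup like this one can be sketched as follows (the per_process_gpu_memory_fraction cap is an optional extra, and 0.8 is just an example value):

import tensorflow as tf

config = tf.ConfigProto()
# Grow GPU memory usage on demand rather than reserving it all up front.
config.gpu_options.allow_growth = True
# Optional: also cap this process at a fraction of total GPU memory.
config.gpu_options.per_process_gpu_memory_fraction = 0.8
session = tf.Session(config=config)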


Answered by Willie Cheng on July 30, 2020

If you are using a Jupyter Notebook, run the following code to free memory so that training can proceed:

import gc

gc.collect()  # force Python's garbage collector to release unreferenced objects
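
On its own, gc.collect() only frees unreferenced Python objects. If the notebook is using Keras/TensorFlow, also clearing the backend session (a minimal sketch, assuming tf.keras) usually releases the graph state that keeps hold of GPU memory:

import gc
import tensorflow as tf

# Drop the current Keras/TensorFlow graph and its cached state,
# then let the garbage collector reclaim the freed objects.
tf.keras.backend.clear_session()
gc.collect()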

If the problem still persists, use a smaller batch size, such as 4.

Answered by SrJ on July 30, 2020
