1. OOM definition
OOM = out of memory. A memory leak happens when a program no longer needs an object, but a reference to it still exists, so the garbage collector cannot reclaim it; the memory that object occupies can never be freed. Enough accumulated leaks eventually exhaust available memory, which is memory overflow, i.e. OOM.
In this case the error comes from TensorFlow running out of GPU memory, which can have several causes:
1. Your own code simply needs more GPU memory than is available; you can only find this by reviewing the code yourself (typically the fix is reducing the batch size or input resolution).
2. GPU memory is held by processes that never released it, or other people are running jobs on the same GPU, so not enough is left for your program; a minimal config sketch for this shared-GPU case follows this list.
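By default, TensorFlow 1.x pre-allocates nearly all free GPU memory when a session starts, which makes the shared-GPU case above much worse. A minimal sketch, assuming the TF 1.x session API that produced the log below, of asking TensorFlow to allocate memory on demand and optionally cap its share (the 0.5 fraction is just an example value):

import tensorflow as tf  # TF 1.x

config = tf.ConfigProto()
# Allocate GPU memory on demand instead of grabbing almost all of it up front.
config.gpu_options.allow_growth = True
# Optionally cap this process at a fixed share of the card (example value).
config.gpu_options.per_process_gpu_memory_fraction = 0.5

sess = tf.Session(config=config)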
2. Problem description
ResourceExhaustedError: OOM when allocating tensor with shape[1,512,1120,1120] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node rpn_model/rpn_conv_shared/convolution}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](fpn_p2/BiasAdd, rpn_conv_shared/kernel/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[{{node roi_align_mask/strided_slice_17/_4277}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3068_roi_align_mask/strided_slice_17", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
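The hint in the log refers to the TF 1.x RunOptions flag report_tensor_allocations_upon_oom. A minimal sketch of passing it to sess.run; the tiny placeholder graph only stands in for the real model so the call is runnable:

import tensorflow as tf  # TF 1.x

# When an OOM happens, TensorFlow will then print the list of live tensor allocations.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

x = tf.placeholder(tf.float32, [None, 4])   # toy graph in place of the real network
y = tf.layers.dense(x, 2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}, options=run_options)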
3. Solution
fuser -v /dev/nvidia*  # list the PIDs currently holding GPU devices
Then use the kill command to terminate the process that is quietly occupying the GPU, for example:
kill -9 27600
Then run nvidia-smi again and you can see that the GPU memory has been released.
Reference:
TensorFlow: keep any batch size from overflowing GPU memory (OOM) by using darknet's sub-batch method
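The referenced approach splits one logical batch into several small sub-batches and accumulates their gradients before applying a single update, so peak GPU memory depends only on the sub-batch size. A minimal TF 1.x sketch of that idea; the toy linear model, ACCUM_STEPS value, and random data are illustrative, not the referenced article's code:

import numpy as np
import tensorflow as tf  # TF 1.x

ACCUM_STEPS = 4  # one logical batch = 4 sub-batches (example value)

x = tf.placeholder(tf.float32, [None, 8])   # toy model standing in for the real network
y = tf.placeholder(tf.float32, [None, 1])
pred = tf.layers.dense(x, 1)
loss = tf.reduce_mean(tf.square(pred - y))

opt = tf.train.GradientDescentOptimizer(0.01)
tvars = tf.trainable_variables()
grads = tf.gradients(loss, tvars)

# One non-trainable accumulator per trainable variable.
accum = [tf.Variable(tf.zeros_like(v), trainable=False) for v in tvars]
zero_accum = [a.assign(tf.zeros_like(a)) for a in accum]
add_accum = [a.assign_add(g) for a, g in zip(accum, grads)]
apply_accum = opt.apply_gradients([(a / ACCUM_STEPS, v) for a, v in zip(accum, tvars)])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(zero_accum)
    for _ in range(ACCUM_STEPS):  # feed each small sub-batch separately
        xb = np.random.rand(2, 8).astype(np.float32)
        yb = np.random.rand(2, 1).astype(np.float32)
        sess.run(add_accum, feed_dict={x: xb, y: yb})
    sess.run(apply_accum)  # one optimizer step for the whole logical batch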