torch.cuda.memory_allocated()
0
Let’s have a look at what happens when a variable’s value falls off the end of a cell and becomes its output.
Above, I allocated a tensor in CUDA memory and displayed it as the cell output, then deleted it.
I did not use Lovely Tensors, just plain PyTorch.
Why is the CUDA memory not freed? Is there still a reference to the tensor somewhere?
Yes.
['In',
'Out',
'_',
'_2',
'_3',
'_4',
'_5',
'_VSCode_matplotLib_FigureFormats',
'__',
'___']
Do you see the `_`, `__`, and `___` entries (and `_2` through `_5`)?
They are created by IPython: the output of every cell you run is saved. See the docs:
https://ipython.readthedocs.io/en/stable/interactive/reference.html#output-caching-system
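The mechanism can be sketched in plain Python, no GPU required. Here `BigObject` is a hypothetical stand-in for the CUDA tensor, and the `Out` dict is a simplified model of IPython's output cache:

```python
import weakref

class BigObject:
    """Hypothetical stand-in for a large CUDA tensor."""
    pass

# Simplified model of IPython's output cache: every displayed
# result is stored in a dict keyed by the execution count.
Out = {}

obj = BigObject()
ref = weakref.ref(obj)    # track liveness without keeping obj alive
Out[2] = obj              # "displaying" the object stores it in the cache

del obj                   # deleting the variable is not enough...
assert ref() is not None  # ...the cache still holds a reference

Out.clear()               # only dropping the cached reference frees it
assert ref() is None
```

The same logic applies to the real cache: as long as `Out` (and the `_`-style shortcuts) reference the tensor, PyTorch cannot release its CUDA memory.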
If this is not the behavior you want, you can disable it by adding
%config ZMQInteractiveShell.cache_size = 0
at the beginning of your notebook, but I think this only works in plain Jupyter and not in Jupyter inside VS Code.
Alternatively, find your IPython config file (for me it’s ~/.ipython/profile_default/ipython_kernel_config.py) and set ZMQInteractiveShell.cache_size to 0 there.