lovely-tensors
  • ❤️ Lovely Tensors
  • 🔎 Tensor representations
    • 🧾 View as a summary
    • 🖌️ View as RGB images
    • 📊 View as a histogram
    • 📺 View channels
  • ✨ Misc
    • 🤔 Config
    • 🙉 Monkey-patching
    • 🎭 Matplotlib integration
    • 📜 IPython’s history obsession

📜 IPython’s history obsession

Let’s have a look at what happens when a variable falls off the end of a cell.

import gc
import torch

torch.cuda.memory_allocated()
0
t = torch.tensor(10, device="cuda")
t
tensor(10, device='cuda:0')
torch.cuda.memory_allocated()
512
del t
gc.collect()
torch.cuda.empty_cache()
torch.cuda.memory_allocated()
512

Above, I allocated a tensor in CUDA memory and displayed it as the cell output, then deleted it.
I did not use Lovely Tensors, just plain PyTorch.
Why is the CUDA memory not freed? Is there still a reference to the tensor somewhere?

Yes.

dir()[:10] # Global variables
['In',
 'Out',
 '_',
 '_2',
 '_3',
 '_4',
 '_5',
 '_VSCode_matplotLib_FigureFormats',
 '__',
 '___']

Do you see the `_` variables?
IPython creates them: the output of every cell you run is saved in its output cache:
https://ipython.readthedocs.io/en/stable/interactive/reference.html#output-caching-system

print(_3) # Here is my tensor from cell 3
tensor(10, device='cuda:0')
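The mechanism is ordinary Python reference counting: an object held in IPython’s `Out` cache stays alive after `del`, exactly like an object held in any other dict. A minimal sketch without CUDA or IPython (the `Tensor` class and `out_cache` dict are illustrative stand-ins):

```python
import gc
import weakref

class Tensor:            # stand-in for a real torch.Tensor
    pass

out_cache = {}           # plays the role of IPython's Out dict
t = Tensor()
out_cache[3] = t         # "cell 3" output gets cached
ref = weakref.ref(t)     # lets us observe when the object actually dies

del t                    # our reference is gone...
gc.collect()
print(ref() is not None)  # True: the cache still holds the object

del out_cache[3]         # evict the cached output
gc.collect()
print(ref() is None)      # True: now it can be freed
```

In a live session, `%reset -f out` clears IPython’s output cache, after which `gc.collect()` and `torch.cuda.empty_cache()` can actually release the memory.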

If this is not the behavior you want, you can disable it by adding

%config ZMQInteractiveShell.cache_size = 0

at the beginning of your notebook. In my experience, this works in plain Jupyter but not in the Jupyter integration inside VS Code.

Alternatively, find your IPython kernel config file (for me it’s ~/.ipython/profile_default/ipython_kernel_config.py)
and set ZMQInteractiveShell.cache_size to 0.
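If you go the config-file route, the relevant line looks like this (assuming the default IPython profile; `c` is the config object IPython provides when it loads the file):

```python
# ~/.ipython/profile_default/ipython_kernel_config.py
c = get_config()  # provided by IPython when the file is loaded

# Disable the Out cache so cell outputs are not kept alive
c.ZMQInteractiveShell.cache_size = 0
```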