🧾 View as a summary

from tinygrad.tensor import Tensor

x = Tensor([0, 0, 1])
((x == 0).min() == 1).realize().numpy()
array(0., dtype=float32)

source

lovely

 lovely (x:tinygrad.tensor.Tensor, verbose=False, depth=0, color=None)
|         | Type     | Default | Details                            |
|---------|----------|---------|------------------------------------|
| x       | Tensor   |         | Tensor of interest                 |
| verbose | bool     | False   | Whether to show the full tensor    |
| depth   | int      | 0       | Show stats in depth                |
| color   | NoneType | None    | Force color (True/False) or auto.  |

Examples
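The snippets below use two demo tensors, randoms and spicy, that are not defined in this section. A minimal sketch of an equivalent setup (the lovely_grad import path and the concrete values are assumptions, not taken from this page, so the printed numbers will differ):

import numpy as np
from tinygrad.tensor import Tensor
from tinygrad import dtypes                   # import path may differ across tinygrad versions
from lovely_grad import lovely, set_config    # package name assumed

np.random.seed(42)                            # illustrative values only
randoms = Tensor(np.random.randn(100).astype(np.float32))

# "spicy": a 2x6 tensor seasoned with +Inf, -Inf and NaN for the examples below
spicy_np = (np.random.randn(2, 6) * 1e4).astype(np.float32)
spicy_np[0, 1], spicy_np[0, 3], spicy_np[0, 4] = np.inf, -np.inf, np.nan
spicy = Tensor(spicy_np)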

Control laziness of repr
set_config(auto_realize=False)
lovely(spicy)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
lovely(spicy)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
set_config(auto_realize=True)
lovely(spicy)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
lovely(spicy)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
Show the stats and values
lovely(randoms[0])
Tensor CPU -0.703
lovely(randoms[:2])
Tensor[2] μ=-0.597 σ=0.151 CPU [-0.703, -0.490]
lovely(randoms[:6].reshape((2, 3))) # More than 2 elements -> show statistics
Tensor[2, 3] n=6 x∈[-2.011, 0.207] μ=-0.846 σ=0.862 CPU [[-0.703, -0.490, -0.322], [-1.755, 0.207, -2.011]]
lovely(randoms[:11])                # More than 10 -> suppress data output
Tensor[11] x∈[-2.011, 1.549] μ=-0.336 σ=1.162 CPU
Gradient
g=Tensor([1,2,3], requires_grad=True)
lovely(g)
Tensor[3] x∈[1.000, 3.000] μ=2.000 σ=1.000 grad CPU [1.000, 2.000, 3.000]
(g*g).sum().backward()
lovely(g)
Tensor[3] x∈[1.000, 3.000] μ=2.000 σ=1.000 grad+ CPU [1.000, 2.000, 3.000]
Note

Note the green ‘+’ when the gradient is available.

lovely(g.grad) # d/dg of (g*g).sum() is 2*g
Tensor[3] x∈[2.000, 6.000] μ=4.000 σ=2.000 CPU Realized ADD [2.000, 4.000, 6.000]
Do we have any floating point nasties?
# Statistics and range are calculated on good values only, if there are at least 3 of them.
lovely(spicy)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
lovely(spicy, color=False)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
lovely(Tensor([float("nan")]*11))
Tensor[11] NaN! CPU
Is the tensor all zeros?
lovely(Tensor.zeros(12))
Tensor[12] all_zeros CPU Realized EXPAND
# XXX empty tensors - fix when they work
# lovely(Tensor([], dtype=dtypes.float16).reshape((0, 0, 0)))
Shows the dtype if it’s not the default.
lovely(Tensor([1,2,3], dtype=dtypes.int8).realize())
Tensor[3] i8 x∈[1, 3] μ=2.000 σ=0.816 CPU [1, 2, 3]
lovely(spicy, verbose=True)
<Tensor <LB (2, 6) dtypes.float op=buffer<12, dtypes.float, 140164017002944> st=ShapeTracker(views=(View(shape=(2, 6), strides=(6, 1), offset=0, mask=None, contiguous=True),))> on CPU with grad None>
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
We need to go deeper
image = np.load("mysteryman.npy")
image[1,2,3] = float('nan')

image = Tensor(image)

lovely(image, depth=2) # Limited by set_config(deeper_lines=N)
Tensor[3, 196, 196] n=115248 x∈[-2.118, 2.640] μ=-0.388 σ=1.073 NaN! CPU
  Tensor[196, 196] n=38416 x∈[-2.118, 2.249] μ=-0.324 σ=1.036 CPU
    Tensor[196] x∈[-1.912, 2.249] μ=-0.673 σ=0.522 CPU
    Tensor[196] x∈[-1.861, 2.163] μ=-0.738 σ=0.418 CPU
    Tensor[196] x∈[-1.758, 2.198] μ=-0.806 σ=0.397 CPU
    Tensor[196] x∈[-1.656, 2.249] μ=-0.849 σ=0.369 CPU
    Tensor[196] x∈[-1.673, 2.198] μ=-0.857 σ=0.357 CPU
    Tensor[196] x∈[-1.656, 2.146] μ=-0.848 σ=0.372 CPU
    Tensor[196] x∈[-1.433, 2.215] μ=-0.784 σ=0.397 CPU
    Tensor[196] x∈[-1.279, 2.249] μ=-0.695 σ=0.486 CPU
    Tensor[196] x∈[-1.364, 2.249] μ=-0.637 σ=0.539 CPU
    ...
  Tensor[196, 196] n=38416 x∈[-1.966, 2.429] μ=-0.274 σ=0.973 NaN! CPU
    Tensor[196] x∈[-1.861, 2.411] μ=-0.529 σ=0.556 CPU
    Tensor[196] x∈[-1.826, 2.359] μ=-0.562 σ=0.473 CPU
    Tensor[196] x∈[-1.756, 2.376] μ=-0.622 σ=0.458 NaN! CPU
    Tensor[196] x∈[-1.633, 2.429] μ=-0.664 σ=0.430 CPU
    Tensor[196] x∈[-1.651, 2.376] μ=-0.669 σ=0.399 CPU
    Tensor[196] x∈[-1.633, 2.376] μ=-0.701 σ=0.391 CPU
    Tensor[196] x∈[-1.563, 2.429] μ=-0.670 σ=0.380 CPU
    Tensor[196] x∈[-1.475, 2.429] μ=-0.616 σ=0.386 CPU
    Tensor[196] x∈[-1.511, 2.429] μ=-0.593 σ=0.399 CPU
    ...
  Tensor[196, 196] n=38416 x∈[-1.804, 2.640] μ=-0.567 σ=1.178 CPU
    Tensor[196] x∈[-1.717, 2.396] μ=-0.982 σ=0.350 CPU
    Tensor[196] x∈[-1.752, 2.326] μ=-1.034 σ=0.314 CPU
    Tensor[196] x∈[-1.648, 2.379] μ=-1.086 σ=0.314 CPU
    Tensor[196] x∈[-1.630, 2.466] μ=-1.121 σ=0.305 CPU
    Tensor[196] x∈[-1.717, 2.448] μ=-1.120 σ=0.302 CPU
    Tensor[196] x∈[-1.717, 2.431] μ=-1.166 σ=0.314 CPU
    Tensor[196] x∈[-1.560, 2.448] μ=-1.124 σ=0.326 CPU
    Tensor[196] x∈[-1.421, 2.431] μ=-1.064 σ=0.383 CPU
    Tensor[196] x∈[-1.526, 2.396] μ=-1.047 σ=0.417 CPU
    ...
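The number of child rows printed per level is capped by the deeper_lines option referenced in the comment above. A minimal sketch of raising that cap (the option name is taken from that comment; output omitted):

set_config(deeper_lines=12)  # show up to 12 sub-tensors per nesting level
lovely(image, depth=2)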