from nbdev.showdoc import *
from fastcore.test import test_eq, test
🧾 View as a summary
x = Tensor([0, 0, 1])
((x == 0).min() == 1).realize().numpy()
array(False)
lovely
lovely (x:tinygrad.tensor.Tensor, verbose=False, depth=0, color=None)
| | Type | Default | Details |
|---|---|---|---|
| x | Tensor | | Tensor of interest |
| verbose | bool | False | Whether to show the full tensor |
| depth | int | 0 | Show stats in depth |
| color | NoneType | None | Force color (True/False) or auto. |
Examples
Control laziness of repr
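The `spicy` tensor used in the examples below is defined in an earlier cell that is not shown here. A plausible reconstruction (a hedged assumption, chosen only to match the shape and the `+Inf! -Inf! NaN!` flags in the outputs) is:

```python
import numpy as np

# Hypothetical reconstruction of `spicy` (an assumption; the real
# definition lives in an earlier notebook cell): 12 values with one
# large-magnitude outlier and some floating-point nasties injected.
spicy_np = np.random.randn(12).astype(np.float32)
spicy_np[0] *= 10000.0           # large-magnitude outlier
spicy_np[1] = float("inf")       # +Inf!
spicy_np[3] = float("-inf")      # -Inf!
spicy_np[4] = float("nan")       # NaN!
spicy_np = spicy_np.reshape((2, 6))
# spicy = Tensor(spicy_np)       # wrap for tinygrad
```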
set_config(auto_realize=False)
lovely(spicy)
Tensor[2, 6] n=12 CPU Lazy COPY
lovely(spicy)
Tensor[2, 6] n=12 CPU Lazy COPY
set_config(auto_realize=True)
lovely(spicy)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU Realized COPY
lovely(spicy)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
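The pattern above can be sketched as a toy model (an assumption about the mechanism, not the library's code): with `auto_realize=False`, `lovely` leaves a lazy tensor alone and cannot show stats; with `auto_realize=True`, the first call realizes the tensor, so later calls find it already realized and drop the `Realized COPY` tag.

```python
class ToyTensor:
    # Minimal stand-in for a lazy tinygrad tensor (an assumption).
    def __init__(self):
        self.realized = False

def toy_lovely(t, auto_realize):
    # Sketch of the repr policy implied by the outputs above.
    if not t.realized:
        if not auto_realize:
            return "Lazy"          # no stats without realizing
        t.realized = True          # lovely() triggers realization
        return "stats + Realized"
    return "stats"                 # already realized: no Lazy/Realized tag

t = ToyTensor()
toy_lovely(t, auto_realize=False)  # "Lazy"
toy_lovely(t, auto_realize=True)   # "stats + Realized"
toy_lovely(t, auto_realize=True)   # "stats"
```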
Show the stats and values
lovely(randoms[0])
Tensor CPU Realized RESHAPE -0.703
lovely(randoms[:2])
Tensor[2] μ=-0.597 σ=0.151 CPU [-0.703, -0.490]
lovely(randoms[:6].reshape((2, 3))) # More than 2 elements -> show statistics
Tensor[2, 3] n=6 x∈[-2.011, 0.207] μ=-0.846 σ=0.862 CPU [[-0.703, -0.490, -0.322], [-1.755, 0.207, -2.011]]
lovely(randoms[:11]) # More than 10 -> suppress data output
Tensor[11] x∈[-2.011, 1.549] μ=-0.336 σ=1.162 CPU
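The display thresholds can be summarized in a tiny helper (thresholds inferred from the examples above, not read from the library source):

```python
def display_policy(n):
    # Inferred from the comments above (an assumption):
    show_range = n > 2    # "More than 2 elements -> show statistics" (x∈[...])
    show_data = n <= 10   # "More than 10 -> suppress data output"
    return show_range, show_data
```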
Gradient
g = Tensor([1., 2, 3], requires_grad=True)
lovely(g)
Tensor[3] x∈[1.000, 3.000] μ=2.000 σ=1.000 grad CPU Realized COPY [1.000, 2.000, 3.000]
(g*g).sum().backward()
lovely(g)
Tensor[3] x∈[1.000, 3.000] μ=2.000 σ=1.000 grad+ CPU [1.000, 2.000, 3.000]
Note
Note the green ‘+’ when the gradient is available.
lovely(g.grad)
Tensor[3] x∈[2.000, 6.000] μ=4.000 σ=2.000 CPU Realized ADD [2.000, 4.000, 6.000]
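As a quick sanity check on the numbers above: for f(g) = (g*g).sum(), the gradient is 2g, which is exactly the `[2.000, 4.000, 6.000]` shown by `lovely(g.grad)`.

```python
import numpy as np

g = np.array([1.0, 2.0, 3.0])
grad = 2 * g  # d/dg of (g*g).sum() is 2*g
# grad -> array([2., 4., 6.]), matching lovely(g.grad) above
```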
Do we have any floating point nasties?
# Statistics and range are calculated on good values only, if there are at least 3 of them.
lovely(spicy)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
lovely(spicy, color=False)
Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
lovely(Tensor([float("nan")]*11))
Tensor[11] NaN! CPU Realized COPY
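A sketch of the stats-over-good-values rule described above, in plain NumPy (the 3-value threshold is taken from the comment; the rest of the logic is an assumption):

```python
import numpy as np

def good_stats(a):
    # Drop non-finite ("nasty") entries and report stats only when
    # at least 3 good values remain, per the comment above.
    good = a[np.isfinite(a)]
    if good.size < 3:
        return None
    return good.min(), good.max(), good.mean(), good.std()
```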
Is the tensor all zeros?
lovely(Tensor.zeros(12))
Tensor[12] CPU Lazy CONST
# XXX empty tensors - fix when they work
# lovely(jnp.array([], dtype=jnp.float16).reshape((0,0,0)))
Shows the dtype if it’s not the default.
lovely(Tensor([1,2,3], dtype=dtypes.int8).realize())
Tensor[3] dtypes.char x∈[1, 3] μ=2.000 σ=0.816 CPU [1, 2, 3]
lovely(spicy, verbose=True)
<Tensor <UOp CPU (2, 6) float ShapeTracker(views=(View(shape=(2, 6), strides=(6, 1), offset=0, mask=None, contiguous=True),))> on CPU with grad None> Tensor[2, 6] n=12 x∈[-7.032e+03, 1.549] μ=-781.232 σ=2.210e+03 +Inf! -Inf! NaN! CPU
We need to go deeper
image = np.load("mysteryman.npy")
image[1,2,3] = float('nan')
image = Tensor(image)
lovely(image, depth=2) # Limited by set_config(deeper_lines=N)
Tensor[3, 196, 196] n=115248 x∈[-2.118, 2.640] μ=nan σ=nan CPU Realized COPY
Tensor[196, 196] n=38416 x∈[-2.118, 2.249] μ=-0.324 σ=1.036 CPU
Tensor[196] x∈[-1.912, 2.249] μ=-0.673 σ=0.522 CPU
Tensor[196] x∈[-1.861, 2.163] μ=-0.738 σ=0.418 CPU
Tensor[196] x∈[-1.758, 2.198] μ=-0.806 σ=0.397 CPU
Tensor[196] x∈[-1.656, 2.249] μ=-0.849 σ=0.369 CPU
Tensor[196] x∈[-1.673, 2.198] μ=-0.857 σ=0.357 CPU
Tensor[196] x∈[-1.656, 2.146] μ=-0.848 σ=0.372 CPU
Tensor[196] x∈[-1.433, 2.215] μ=-0.784 σ=0.397 CPU
Tensor[196] x∈[-1.279, 2.249] μ=-0.695 σ=0.486 CPU
Tensor[196] x∈[-1.364, 2.249] μ=-0.637 σ=0.539 CPU
...
Tensor[196, 196] n=38416 x∈[-1.966, 2.429] μ=nan σ=nan CPU
Tensor[196] x∈[-1.861, 2.411] μ=-0.529 σ=0.556 CPU
Tensor[196] x∈[-1.826, 2.359] μ=-0.562 σ=0.473 CPU
Tensor[196] x∈[-1.756, 2.376] μ=nan σ=nan CPU
Tensor[196] x∈[-1.633, 2.429] μ=-0.664 σ=0.430 CPU
Tensor[196] x∈[-1.651, 2.376] μ=-0.669 σ=0.399 CPU
Tensor[196] x∈[-1.633, 2.376] μ=-0.701 σ=0.391 CPU
Tensor[196] x∈[-1.563, 2.429] μ=-0.670 σ=0.380 CPU
Tensor[196] x∈[-1.475, 2.429] μ=-0.616 σ=0.386 CPU
Tensor[196] x∈[-1.511, 2.429] μ=-0.593 σ=0.399 CPU
...
Tensor[196, 196] n=38416 x∈[-1.804, 2.640] μ=-0.567 σ=1.178 CPU
Tensor[196] x∈[-1.717, 2.396] μ=-0.982 σ=0.350 CPU
Tensor[196] x∈[-1.752, 2.326] μ=-1.034 σ=0.314 CPU
Tensor[196] x∈[-1.648, 2.379] μ=-1.086 σ=0.314 CPU
Tensor[196] x∈[-1.630, 2.466] μ=-1.121 σ=0.305 CPU
Tensor[196] x∈[-1.717, 2.448] μ=-1.120 σ=0.302 CPU
Tensor[196] x∈[-1.717, 2.431] μ=-1.166 σ=0.314 CPU
Tensor[196] x∈[-1.560, 2.448] μ=-1.124 σ=0.326 CPU
Tensor[196] x∈[-1.421, 2.431] μ=-1.064 σ=0.383 CPU
Tensor[196] x∈[-1.526, 2.396] μ=-1.047 σ=0.417 CPU
...
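The depth output above can be modeled as a simple recursion: emit one stats line for the current level, then recurse into at most `deeper_lines` sub-slices while depth remains. This is a sketch under assumptions (not the library's implementation); `deeper_lines=9` matches the 9 rows shown before each `...`.

```python
import numpy as np

def deep_summary(a, depth, deeper_lines=9):
    # Sketch of depth-wise summarization (mechanism is an assumption):
    # one line per level, recursing into the first `deeper_lines` slices.
    lines = [f"Tensor{list(a.shape)} μ={np.nanmean(a):.3f}"]
    if depth > 0 and a.ndim > 1:
        for sub in a[:deeper_lines]:
            lines += ["  " + l for l in deep_summary(sub, depth - 1, deeper_lines)]
    return lines
```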