A `Tensor` is a multi-dimensional matrix containing elements of a single data type.
from tinygrad import Tensor, dtypes, nn
import numpy as np
import math
np.set_printoptions(precision=4)
class Tensor extends MathTrait<Tensor> {
constructor(data?: ConstType | UOp | Uint8Array | any[] | Tensor | string, { device, dtype, requires_grad }: TensorOptions = {}, skip_constructor: boolean = false)
static registry = FinalizationRegistry
_ctx = InstanceType<ReturnType<typeof CreateFunction>>
static training = boolean
static no_grad = boolean
_id = bigint
requires_grad_ = (requires_grad: boolean | undefined) => Tensor
toString = () => unknown
static train = (fn: () => Promise<any> | any) => unknown
static test = (fn: () => Promise<any> | any) => unknown
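`Tensor.train` and `Tensor.test` run a callback with the static `training` flag toggled for its duration. A minimal sketch, assuming the semantics mirror tinygrad's Python `Tensor.train` context manager:
Tensor.train(() => {
  // Tensor.training is true inside the callback (assumed behavior)
  console.log(Tensor.training)
})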
Creates the schedule needed to realize these Tensor(s), with Variables. NOTE: A Tensor can only be scheduled once.
schedule_with_vars = (lst: Tensor[] = []) => [ScheduleItem[], Map<Variable, number>]
_debug_ast = () => unknown
Creates the schedule needed to realize these Tensor(s).
schedule = (...lst: Tensor[]) => ScheduleItem[]
realize = (lst: Tensor[] = [], do_update_stats: boolean = true) => Promise<Tensor>
static realize = (lst: Tensor[], do_update_stats: boolean = true) => unknown
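`realize` forces the lazy computation graph to run and materializes this tensor's buffer. A small usage sketch (`await` because the instance method returns a `Promise<Tensor>`):
const t = Tensor.arange(4).add(1)
await t.realize()  // t is now backed by an allocated buffer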
Replaces the data of this tensor with the data of another tensor. Only the shape of the tensors must match.
replace = (x: Tensor) => Tensor
assign_disk = (x: Tensor | number[] | string | Uint8Array) => Promise<Tensor>
assign = (x: Tensor | number[] | number | string | Uint8Array) => Tensor
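`assign` writes `x` into this tensor's existing buffer, which is how weights are updated in place. A minimal sketch, assuming the in-place semantics match tinygrad:
const w = Tensor.ones([2, 2]).contiguous()
w.assign(w.sub(0.1))  // hypothetical update step; reuses w's buffer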
Returns a new tensor with the same data as this tensor, but detached from the autograd graph.
detach = () => Tensor
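`detach` is the standard way to stop gradients from flowing through part of a graph. For example:
const t = new Tensor([2.0, 3.0], { requires_grad: true })
const frozen = t.mul(2).detach()  // no gradient flows back to t through frozen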
_data = () => Promise<MemoryView>
Returns the data of this tensor as a memoryview.
const t = new Tensor([1, 2, 3, 4])
console.log(await t.data())
data = () => Promise<MemoryView<any>>
Returns the value of this tensor as a plain number.
const t = new Tensor(42)
console.log(await t.item())
item = () => Promise<T>
Returns the value of this tensor as a nested array.
const t = new Tensor([1, 2, 3, 4])
console.log(await t.tolist())
tolist = () => Promise<T>
Creates a clone of this tensor allocating a separate buffer for the data.
clone = () => Tensor
Moves the tensor to the given device.
to = (device?: string | string[]) => Tensor
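`to` copies the tensor to another device and returns the copy; the original tensor is unchanged. A sketch (the device name 'CPU' is an assumption and depends on the available backends):
const t = new Tensor([1, 2, 3])
const t2 = t.to('CPU')  // 'CPU' is a hypothetical device name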
Moves the tensor to the given device in place.
to_ = (device?: string | string[]) => unknown
Shards the tensor across the given devices. Optionally specify which axis to shard on.
const t = Tensor.empty([2, 4])
console.log(t.shard([t.device, t.device], 1).lazydata)
shard = (devices: string[], axis?: number) => Tensor
Shards the tensor across the given devices in place.
shard_ = (devices: string[], axis?: number) => unknown
static from_uop = (y: UOp, opts: TensorOptions = {}) => Tensor
static _metaop = (op: Ops, shape: sint[], { dtype, device }: TensorOptions, arg?: any) => unknown
Creates an empty tensor with the given shape. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
const t = Tensor.empty([2, 3])
console.log(t.shape)
static empty = (shape: number[], opts: TensorOptions = {}) => unknown
Exposes the pointer as a Tensor without taking ownership of the original data. The pointer must remain valid for the entire lifetime of the created Tensor. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
static from_blob = (ptr: bigint, shape: number[], opts: TensorOptions) => Tensor
Create a Tensor from a URL. This is the preferred way to access Internet resources. It currently returns a DISK Tensor, but in the future it may return an HTTP Tensor. This also will soon become lazy (when possible) and not print progress without DEBUG. The `gunzip` flag will gzip-extract the resource and return an extracted Tensor.
static from_url = (url: string, opts?: TensorOptions) => Promise<Tensor>
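A usage sketch for `from_url` (the URL is illustrative only):
// hypothetical URL; downloads the resource and returns a DISK-backed tensor
const t = await Tensor.from_url('https://example.com/weights.bin', { dtype: dtypes.uint8 })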
static from_file = (path: string, opts?: TensorOptions) => Promise<Tensor>
static _seed = number
static _device_seeds = Record<string, Tensor>
static _device_rng_counters = Record<string, Tensor>
Sets the seed for random operations.
Tensor.manual_seed(42)
console.log(Tensor.rand([5]).numpy())
console.log(Tensor.rand([5]).numpy())
Tensor.manual_seed(42)  // reset to the same seed
console.log(Tensor.rand([5]).numpy())
console.log(Tensor.rand([5]).numpy())
static manual_seed = (seed: number = 0) => unknown
static _threefry_random_bits = (key: Tensor, counts0: Tensor, counts1: Tensor) => unknown
Creates a tensor with the given shape, filled with random values from a uniform distribution over the interval `[0, 1)`. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
Tensor.manual_seed(42)
const t = Tensor.rand([2, 3])
console.log(t.numpy())
static rand = (shape: number[], contiguous: boolean = true, { device, dtype }: TensorOptions = {}) => Tensor
Creates a tensor with the given shape, filled with the given value. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
console.log(Tensor.full([2, 3], 42).numpy())
console.log(Tensor.full([2, 3], false).numpy())
static full = (shape: sint[], fill_value: ConstType, opts?: TensorOptions) => Tensor
Creates a tensor with the given shape, filled with zeros. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
console.log(Tensor.zeros([2, 3]).numpy())
console.log(Tensor.zeros([2, 3], { dtype: dtypes.int32 }).numpy())
static zeros = (shape: sint[], opts?: TensorOptions) => Tensor
Creates a tensor with the given shape, filled with ones. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
console.log(Tensor.ones([2, 3]).numpy())
console.log(Tensor.ones([2, 3], { dtype: dtypes.int32 }).numpy())
static ones = (shape: sint[], opts?: TensorOptions) => Tensor
Returns a 1-D tensor of size `ceil((stop - start) / step)` with values from `[start, stop)`, with spacing between values given by `step`. If `stop` is not specified, values are generated from `[0, start)` with the given `step`. If `stop` is specified, values are generated from `[start, stop)` with the given `step`. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
console.log(Tensor.arange(5).numpy())
console.log(Tensor.arange(5, 10).numpy())
console.log(Tensor.arange(5, 10, 2).numpy())
console.log(Tensor.arange(5.5, 10, 2).numpy())
static arange = (start: number, stop?: number, step: number = 1, opts?: TensorOptions) => Tensor
Returns a 1-D tensor of `steps` evenly spaced values from `start` to `stop`, inclusive. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
console.log(Tensor.linspace(0, 10, 5).numpy())
console.log(Tensor.linspace(-1, 1, 5).numpy())
static linspace = (start: Tensor | number, stop: Tensor | number, steps: number, { dtype }: TensorOptions = {}) => Tensor
Returns a 2-D tensor with `n` rows and `m` columns, with ones on the diagonal and zeros elsewhere. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
console.log(Tensor.eye(3).numpy())
console.log(Tensor.eye(2, 4).numpy())
static eye = (n: number, m?: number, opts: TensorOptions = {}) => Tensor
Creates a tensor with the same shape as `this`, filled with the given value. If `dtype` is not specified, the dtype of `this` is used. You can pass in the `device` keyword argument to control the device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
const t = Tensor.ones([2, 3])
console.log(t.full_like(42).numpy())
full_like = (fill_value: ConstType, opts?: TensorOptions) => Tensor
Creates a tensor with the same shape as `this`, filled with zeros. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
const t = Tensor.ones([2, 3])
console.log(t.zeros_like().numpy())
zeros_like = (opts: TensorOptions) => Tensor
Creates a tensor with the same shape as `this`, filled with ones. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
const t = Tensor.zeros([2, 3])
console.log(t.ones_like().numpy())
ones_like = (opts?: TensorOptions) => Tensor
Creates a tensor with the same shape and sharding as `this`, filled with random values from a uniform distribution over the interval `[0, 1)`. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
const t = Tensor.ones([2, 3])
console.log(t.rand_like().numpy())
rand_like = ({ dtype, contiguous }: TensorOptions & { contiguous?: boolean } = {}) => Tensor
Creates a tensor with the given shape, filled with random values from a normal distribution with mean `0` and standard deviation `1`. If `dtype` is not specified, the default type is used. You can pass in the `device` keyword argument to control device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
Tensor.manual_seed(42)
console.log(Tensor.randn([2, 3]).numpy())
static randn = (shape: number[], { dtype, requires_grad }: TensorOptions = {}) => Tensor
Creates a tensor with the given shape, filled with random integer values generated uniformly from the interval `[low, high)`. If `dtype` is not specified, the default type is used. You can pass in the `device` keyword argument to control the device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
Tensor.manual_seed(42)
console.log(Tensor.randint([2, 3], 5, 10).numpy())
static randint = (shape: number[], low: number = 0, high: number = 10, { dtype }: TensorOptions = {}) => Tensor
Creates a tensor with the given shape, filled with random values from a normal distribution with the given `mean` and standard deviation `std`. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
Tensor.manual_seed(42)
console.log(Tensor.normal([2, 3], 10, 2).numpy())
static normal = (shape: number[], mean: number = 0, std: number = 1, { requires_grad }: TensorOptions = {}) => Tensor
Creates a tensor with the given shape, filled with random values from a uniform distribution over the interval `[low, high)`. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
Tensor.manual_seed(42)
console.log(Tensor.uniform([2, 3], 2, 10).numpy())
static uniform = (shape: number[], low: number = 0, high: number = 1, { dtype, requires_grad }: TensorOptions = {}) => Tensor
Creates a tensor with the given shape, filled with random values from a uniform distribution over the interval `[-prod(shape)**-0.5, prod(shape)**-0.5)`. You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
Tensor.manual_seed(42)
console.log(Tensor.scaled_uniform([2, 3]).numpy())
static scaled_uniform = (shape: number[], opts: TensorOptions) => Tensor
<https://www.tensorflow.org/api_docs/python/tf/keras/initializers/GlorotUniform> You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
Tensor.manual_seed(42)
console.log(Tensor.glorot_uniform([2, 3]).numpy())
static glorot_uniform = (shape: number[], opts: TensorOptions = {}) => Tensor
You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
Tensor.manual_seed(42)
console.log(Tensor.kaiming_uniform([2, 3]).numpy())
static kaiming_uniform = (shape: number[], a: number = 0.01, opts: TensorOptions) => Tensor
<https://pytorch.org/docs/stable/_modules/torch/nn/init.html#kaiming_normal_> You can pass in `dtype` and `device` keyword arguments to control the data type and device of the tensor. Additionally, all other keyword arguments are passed to the constructor of the tensor.
Tensor.manual_seed(42)
console.log(Tensor.kaiming_normal([2, 3]).numpy())
static kaiming_normal = (shape: number[], a: number = 0.01, opts: TensorOptions) => Tensor
multinomial = (num_samples: number = 1, replacement: boolean = false) => Tensor
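`multinomial` draws `num_samples` indices from the probability distribution described by the tensor's values. A small sketch, assuming semantics matching `torch.multinomial` (sampling more than one value requires `replacement = true`):
const probs = new Tensor([0.1, 0.2, 0.7])
console.log(await probs.multinomial(4, true).tolist())  // four indices drawn with replacement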
Compute the gradient of the targets with respect to this tensor.
const x = Tensor.eye(3)
const y = new Tensor([[2.0, 0, -2.0]])
const z = y.matmul(x).sum()
const [dx, dy] = z.gradient([x, y])
console.log(await dx.tolist())  // dz/dx
console.log(await dy.tolist())  // dz/dy
gradient = (targets: Tensor[], gradient?: Tensor) => Tensor[]
Propagates the gradient of a tensor backwards through the computation graph. If the `gradient` argument is not provided, the tensor must be a scalar, and the gradient is implicitly set to 1.0. If `retain_graph` is false, the graph used to compute the grads will be freed. Otherwise, it will be kept. Keeping it can increase memory usage.
const t = new Tensor([1.0, 2.0, 3.0, 4.0], { requires_grad: true })
t.sum().backward()
console.log(t.grad.numpy())
backward = (gradient?: Tensor, retain_graph: boolean = false) => Tensor
`.view` is an alias for `.reshape`.
view = (...shape: sint[]) => Tensor
Returns a tensor with the same data as the original tensor but with a different shape. `shape` can be passed as a tuple or as separate arguments.
const t = Tensor.arange(6)
console.log(t.reshape(2, 3).numpy())
reshape = (...shape: (sint | undefined)[]) => Tensor
Returns a tensor that is expanded to the shape that is specified. Expand can also increase the number of dimensions that a tensor has. Passing a `-1` or `undefined` to a dimension means that its size will not be changed.
const t = new Tensor([1, 2, 3])
console.log(t.expand(4, -1).numpy())
expand = (...shape: sint[]) => Tensor
Returns a tensor that is a permutation of the original tensor. The new tensor has the same data as the original tensor but with the dimensions permuted according to the order specified. `order` can be passed as a tuple or as separate arguments.
const t = Tensor.arange(6).reshape(2, 3)
console.log(t.numpy())
console.log(t.permute(1, 0).numpy())
permute = (...args: number[]) => Tensor
Returns a tensor that reverses the order of the original tensor along given `axis`. `axis` can be passed as a tuple or as separate arguments.
const t = Tensor.arange(6).reshape(2, 3)
console.log(t.numpy())
console.log(t.flip(0).numpy())
console.log(t.flip(0, 1).numpy())
flip = (...axis: number[]) => Tensor
Returns a tensor that shrinks each axis based on the input `arg`. `arg` must have the same length as `this.ndim`. For each axis, it can be `undefined`, which means no shrink, or a tuple `(start, end)` that works the same as a Python slice.
const t = Tensor.arange(9).reshape(3, 3)
console.log(t.numpy())
console.log(t.shrink(undefined, [1, 3]).numpy())
console.log(t.shrink([0, 2], [0, 2]).numpy())
shrink = (...arg: ([sint, sint] | undefined)[]) => Tensor
Returns a tensor with padding applied based on the input `padding`. `padding` supports two padding structures: 1. Flat padding: `(padding_left, padding_right, padding_top, padding_bottom, ...)` - This structure matches PyTorch's pad. - `padding` length must be even. 2. Group padding: `(..., (padding_top, padding_bottom), (padding_left, padding_right))` - This structure matches pad for JAX, NumPy, TensorFlow, and others. - For each axis, padding can be `undefined`, meaning no padding, or a tuple `(start, end)`. - `padding` must have the same length as `this.ndim`. Padding values can be negative, resulting in dimension shrinks that work similarly to Python negative slices. The padding mode is selected with `mode`, which supports `constant`, `reflect`, and `replicate`.
const t = Tensor.arange(9).reshape(1, 1, 3, 3)
console.log(t.numpy())
console.log(t.pad([1, 2, 0, -1]).numpy())
console.log(t.pad([undefined, undefined, [0, -1], [1, 2]]).numpy())
console.log(t.pad([1, 2, 0, -1], 'constant', -Infinity).numpy())
pad = (padding: sint[] | ([sint, sint] | undefined)[], mode: 'constant' | 'reflect' | 'replicate' | 'circular' = 'constant', value: number | bigint | boolean = 0) => Tensor
_getitem = (indices: TensorIndice[], v?: Tensor) => Tensor
Retrieve a sub-tensor using indexing. Supported Index Types: `int | slice | Tensor | None | List | Tuple | Ellipsis` Examples:
const t = Tensor.arange(12).reshape(3, 4)
console.log(t.numpy())
- Int Indexing: Select an element or sub-tensor using integers for each dimension.
console.log(t.get(1, 2).numpy())
- Slice Indexing: Select a range of elements using slice notation (`start:end:stride`).
console.log(t.get({ start: 0, stop: 2 }, { step: 2 }).numpy())  // slice indexing; the exact Slice object shape is an assumption
- Tensor Indexing: Use another tensor as indices for advanced indexing. Using `tuple` or `list` here also works.
console.log(t.get(new Tensor([2, 0, 1]), new Tensor([1, 2, 3])).numpy())
- `None` Indexing: Add a new dimension to the tensor.
console.log(t.get({}, undefined).shape)  // a full slice followed by a new axis; the exact index forms depend on the port's TensorIndice
NOTE: Out-of-bounds indexing results in a value of `0`.
const t2 = new Tensor([1, 2, 3])
console.log(t2.get(new Tensor([4, 3, 2])).numpy())
get = (...indices: TensorIndice[]) => unknown
set = (indices: TensorIndice[], v: Tensor | number) => unknown
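`set` is the indexed-assignment counterpart of `get`: pass the indices as an array and the value to write. A minimal sketch, assuming integer indices behave as in `get`:
const t = Tensor.zeros([3, 4]).contiguous()
t.set([1, 2], 42)  // writes 42 at row 1, column 2 (assumed semantics)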
Gathers values along an axis specified by `dim`.
const t = new Tensor([[1, 2], [3, 4]])
console.log(t.numpy())
console.log(t.gather(1, new Tensor([[0, 0], [1, 0]])).numpy())
gather = (dim: number, index: Tensor) => Tensor
Concatenates this with other `Tensor` in `args` along an axis specified by `dim`. All tensors must have the same shape except in the concatenating dimension.
const t0 = new Tensor([[1, 2]])
const t1 = new Tensor([[3, 4]])
const t2 = new Tensor([[5, 6]])
console.log(t0.cat([t1, t2], 0).numpy())
console.log(t0.cat([t1, t2], 1).numpy())
cat = (args: Tensor[], dim: number = 0) => Tensor
static cat = (tensors: Tensor[], dim: number = 0) => Tensor
Concatenates this with other `Tensor` in `args` along a new dimension specified by `dim`.
const t0 = new Tensor([1, 2])
const t1 = new Tensor([3, 4])
const t2 = new Tensor([5, 6])
console.log(t0.stack([t1, t2], 0).numpy())
console.log(t0.stack([t1, t2], 1).numpy())
static stack = (args: Tensor[], dim: number = 0) => Tensor
stack = (args: Tensor[], dim: number = 0) => unknown
Repeat elements of a tensor.
const t = new Tensor([1, 2, 3])
console.log(t.repeat_interleave(2).numpy())
repeat_interleave = (repeats: number, dim?: number) => Tensor
Repeats the tensor the given number of times along each dimension specified by `repeats`.
const t = new Tensor([1, 2, 3])
console.log(t.repeat([4, 2]).numpy())
console.log(t.repeat([4, 2, 1]).shape)
repeat = (repeats: sint[]) => Tensor
_resolve_dim = (dim: number, extra: boolean = false) => number
Splits the tensor into chunks along the dimension specified by `dim`. If `sizes` is an integer, it splits into equally sized chunks if possible, otherwise the last chunk will be smaller. If `sizes` is a list, it splits into `len(sizes)` chunks with size in `dim` according to `size`.
const t = Tensor.arange(10).reshape(5, 2)
console.log(t.numpy())
for (const x of t.split(2)) console.log(x.numpy())
for (const x of t.split([1, 4])) console.log(x.numpy())
split = (sizes: number | number[], dim: number = 0) => Tensor[]
Splits the tensor into `chunks` number of chunks along the dimension `dim`. If the tensor size along `dim` is not divisible by `chunks`, all returned chunks will be the same size except the last one. The function may return fewer than the specified number of chunks.
for (const x of Tensor.arange(11).chunk(6)) console.log(x.numpy())
for (const x of Tensor.arange(12).chunk(6)) console.log(x.numpy())
for (const x of Tensor.arange(13).chunk(6)) console.log(x.numpy())
chunk = (chunks: number, dim: number = 0) => Tensor[]
Generates coordinate matrices from coordinate vectors. Input tensors can be scalars or 1D tensors. `indexing` determines how the output grids are aligned. `ij` indexing follows matrix-style indexing and `xy` indexing follows Cartesian-style indexing.
const x = new Tensor([1, 2, 3])
const y = new Tensor([4, 5, 6])
const [grid_x, grid_y] = x.meshgrid([y])
console.log(grid_x.numpy())
console.log(grid_y.numpy())
const [grid_x2, grid_y2] = x.meshgrid([y], 'xy')
console.log(grid_x2.numpy())
console.log(grid_y2.numpy())
meshgrid = (args: Tensor[], indexing: 'ij' | 'xy' = 'ij') => Tensor[]
Returns a tensor with specified dimensions of input of size 1 removed. If `dim` is not specified, all dimensions with size 1 are removed.
const t = Tensor.zeros([2, 1, 2, 1, 2])
console.log(t.squeeze().shape)
console.log(t.squeeze(0).shape)
console.log(t.squeeze(1).shape)
squeeze = (dim?: number) => Tensor
Returns a tensor with a new dimension of size 1 inserted at the specified `dim`.
const t = new Tensor([1, 2, 3, 4])
console.log(t.unsqueeze(0).numpy())
console.log(t.unsqueeze(1).numpy())
unsqueeze = (dim: number) => Tensor
Returns a tensor that is a transposed version of the original tensor. The given dimensions `dim0` and `dim1` are swapped.
const t = Tensor.arange(6).reshape(2, 3)
console.log(t.numpy())
console.log(t.transpose(0, 1).numpy())
transpose = (dim0: number = 1, dim1: number = 0) => Tensor
Flattens the tensor by reshaping it into a one-dimensional tensor. If `start_dim` or `end_dim` are passed, only dimensions starting with `start_dim` and ending with `end_dim` are flattened.
const t = Tensor.arange(8).reshape(2, 2, 2)
console.log(t.flatten().numpy())
console.log(t.flatten(1).numpy())
flatten = (start_dim: number = 0, end_dim: number = -1) => unknown
Unflattens dimension `dim` of the tensor into multiple dimensions specified by `sizes`. `Tensor.flatten()` is the inverse of this function.
console.log(Tensor.ones([3, 4, 1]).unflatten(1, [2, 2]).shape)
console.log(Tensor.ones([3, 4, 1]).unflatten(1, [-1, 2]).shape)
console.log(Tensor.ones([5, 12, 3]).unflatten(-2, [2, 2, 3, 1, 1]).shape)
unflatten = (dim: number, sizes: number[]) => unknown
Rolls the tensor along specified dimension(s). The rolling operation is circular, meaning that elements that go beyond the edge are wrapped around to the beginning of the dimension.
const t = Tensor.arange(4)
console.log(t.roll(1, 0).numpy())
console.log(t.roll(-1, 0).numpy())
roll = (shifts: number | number[], dims: number | number[]) => Tensor
Rearranges input according to a formula. See: https://einops.rocks/api/rearrange/
const x = new Tensor([[1, 2], [3, 4]])
console.log(x.rearrange("batch channel -> (batch channel)").numpy())
rearrange = (formula: string, sizes: any) => Tensor
_reduce = (fxn: ReturnType<typeof CreateFunction>, axis?: number | number[], keepdim: boolean = false) => Tensor
Returns the sum of the elements of the tensor along the specified axis or axes. You can pass in `axis` and `keepdim` keyword arguments to control the axis along which the sum is computed and whether the reduced dimensions are retained. You can pass in the `acc_dtype` keyword argument to control the data type of the accumulation. If not specified, the accumulation data type is chosen based on the input tensor's data type.
const t = Tensor.arange(6).reshape(2, 3)
console.log(t.numpy())
console.log(t.sum().numpy())
console.log(t.sum(0).numpy())
console.log(t.sum(1).numpy())
sum = (axis?: number | number[], keepdim: boolean = false, acc_dtype?: DTypeLike) => unknown
Returns the product of the elements of the tensor along the specified axis or axes. You can pass in `axis` and `keepdim` keyword arguments to control the axis along which the product is computed and whether the reduced dimensions are retained. You can pass in the `acc_dtype` keyword argument to control the data type of the accumulation. If not specified, the accumulation data type is chosen based on the input tensor's data type.
const t = new Tensor([-1, -2, -3, 1, 2, 3]).reshape(2, 3)
console.log(t.numpy())
console.log(t.prod().numpy())
console.log(t.prod(0).numpy())
console.log(t.prod(1).numpy())
prod = (axis?: number | number[], keepdim: boolean = false, acc_dtype?: DTypeLike) => unknown
Returns the maximum value of the tensor along the specified axis or axes. You can pass in `axis` and `keepdim` keyword arguments to control the axis along which the maximum is computed and whether the reduced dimensions are retained.
const t = new Tensor([[1, 0, 2], [5, 4, 3]])
console.log(t.numpy())
console.log(t.max().numpy())
console.log(t.max(0).numpy())
console.log(t.max(1, true).numpy())
max = (axis?: number | number[], keepdim: boolean = false) => unknown
Returns the minimum value of the tensor along the specified axis or axes. You can pass in `axis` and `keepdim` keyword arguments to control the axis along which the minimum is computed and whether the reduced dimensions are retained.
const t = new Tensor([[1, 0, 2], [5, 4, 3]])
console.log(t.numpy())
console.log(t.min().numpy())
console.log(t.min(0).numpy())
console.log(t.min(1, true).numpy())
min = (axis?: number | number[], keepdim: boolean = false) => unknown
Tests if any element evaluates to `true` along the specified axis or axes. You can pass in `axis` and `keepdim` keyword arguments to control the reduce axis and whether the reduced dimensions are retained.
const t = new Tensor([[true, true], [true, false], [false, false]])
console.log(t.numpy())
console.log(t.any().numpy())
console.log(t.any(0).numpy())
console.log(t.any(1, true).numpy())
any = (axis?: number | number[], keepdim: boolean = false) => unknown
Tests if all elements evaluate to `true` along the specified axis or axes. You can pass in `axis` and `keepdim` keyword arguments to control the reduce axis and whether the reduced dimensions are retained.
const t = new Tensor([[true, true], [true, false], [false, false]])
console.log(t.numpy())
console.log(t.all().numpy())
console.log(t.all(0).numpy())
console.log(t.all(1, true).numpy())
all = (axis?: number | number[], keepdim: boolean = false) => Tensor
Returns the mean value of the tensor along the specified axis or axes. You can pass in `axis` and `keepdim` keyword arguments to control the axis along which the mean is computed and whether the reduced dimensions are retained.
Tensor.manual_seed(42)
const t = Tensor.normal([2, 3], 2.5, 0.5)
console.log(t.numpy())
console.log(t.mean().numpy())
console.log(t.mean(0).numpy())
console.log(t.mean(1).numpy())
mean = (axis?: number | number[], keepdim: boolean = false) => unknown
Returns the variance of the tensor along the specified axis or axes. You can pass in `axis`, `keepdim`, and `correction` keyword arguments to control the axis along which the variance is computed, whether the reduced dimensions are retained, and the Bessel's correction applied.
Tensor.manual_seed(42)
const t = Tensor.normal([2, 3], 2.5, 0.5)
console.log(t.numpy())
console.log(t.var().numpy())
console.log(t.var(0).numpy())
console.log(t.var(1).numpy())
var = (axis?: number | number[], keepdim: boolean = false, correction: number = 1) => unknown
Returns the standard deviation of the tensor along the specified axis or axes. You can pass in `axis`, `keepdim`, and `correction` keyword arguments to control the axis along which the standard deviation is computed, whether the reduced dimensions are retained, and the Bessel's correction applied.
Tensor.manual_seed(42)
const t = Tensor.normal([2, 3], 2.5, 0.5)
console.log(t.numpy())
console.log(t.std().numpy())
console.log(t.std(0).numpy())
console.log(t.std(1).numpy())
std = (axis?: number | number[], keepdim: boolean = false, correction: number = 1) => unknown
Calculates the standard deviation and mean over the dimensions specified by `dim`. Syntactic sugar around `Tensor.std` and `Tensor.mean` to match `torch.std_mean`.
Tensor.manual_seed(42)
const t = Tensor.normal([2, 3], 2.5, 0.5)
console.log(t.numpy())
const [std, mean] = t.std_mean()
console.log(std.numpy(), mean.numpy())
std_mean = (axis?: number | number[], keepdim: boolean = false, correction: number = 1) => unknown
_softmax = (axis: number | number[], dtype?: DTypeLike) => [Tensor, Tensor, Tensor]
Applies the softmax function to the tensor along the specified axis. Rescales the elements of the tensor such that they lie in the range [0, 1] and sum to 1. You can pass in the `axis` keyword argument to control the axis along which the softmax is computed.
Tensor.manual_seed(42)
const t = Tensor.randn([2, 3])
console.log(t.numpy())
console.log(t.softmax().numpy())
console.log(t.softmax(0).numpy())
softmax = (axis: number = -1, dtype?: DTypeLike) => unknown
Applies the log-softmax function to the tensor along the specified axis. The log-softmax function is a numerically stable alternative to the softmax function in log space. You can pass in the `axis` keyword argument to control the axis along which the log-softmax is computed.
Tensor.manual_seed(42)
const t = Tensor.randn([2, 3])
console.log(t.numpy())
console.log(t.log_softmax().numpy())
console.log(t.log_softmax(0).numpy())
log_softmax = (axis: number = -1, dtype?: DTypeLike) => unknown
Computes the log-sum-exp of the tensor along the specified axis or axes. The log-sum-exp function is a numerically stable way to compute the logarithm of the sum of exponentials. You can pass in `axis` and `keepdim` keyword arguments to control the axis along which the log-sum-exp is computed and whether the reduced dimensions are retained.
Tensor.manual_seed(42)
const t = Tensor.randn([2, 3])
console.log(t.numpy())
console.log(t.logsumexp().numpy())
console.log(t.logsumexp(0).numpy())
console.log(t.logsumexp(1).numpy())
logsumexp = (axis?: number | number[], keepdim: boolean = false) => unknown
Computes the log-cumsum-exp of the tensor along the specified axis or axes. The log-cumsum-exp function is a numerically stable way to compute the logarithm of the cumulative sum of exponentials. You can pass in the `axis` keyword argument to control the axis along which the log-cumsum-exp is computed.
Tensor.manual_seed(42)
const t = Tensor.randn([2, 3])
console.log(t.numpy())
console.log(t.logcumsumexp().numpy())
console.log(t.logcumsumexp(0).numpy())
console.log(t.logcumsumexp(1).numpy())
logcumsumexp = (axis: number = 0) => unknown
Returns the indices of the maximum value of the tensor along the specified axis. You can pass in `axis` and `keepdim` keyword arguments to control the axis along which the maximum is computed and whether the reduced dimensions are retained.
const t = new Tensor([[1, 0, 2], [5, 4, 3]])
console.log(t.numpy())
console.log(t.argmax().numpy())  // index of the maximum value in the flattened tensor
console.log(t.argmax(0).numpy())  // indices of the maximum values along axis 0
console.log(t.argmax(1).numpy())  // indices of the maximum values along axis 1
argmax = (axis?: number, keepdim: boolean = false) => Tensor
Returns the indices of the minimum value of the tensor along the specified axis. You can pass in `axis` and `keepdim` keyword arguments to control the axis along which the minimum is computed and whether the reduced dimensions are retained.
const t = new Tensor([[1, 0, 2], [5, 4, 3]])
console.log(t.numpy())
console.log(t.argmin().numpy())  // index of the minimum value in the flattened tensor
console.log(t.argmin(0).numpy())  // indices of the minimum values along axis 0
console.log(t.argmin(1).numpy())  // indices of the minimum values along axis 1
argmin = (axis?: number, keepdim: boolean = false) => unknown
Sums the product of the elements of the input tensors according to a formula based on the Einstein summation convention. See: https://pytorch.org/docs/stable/generated/torch.einsum.html
const x = new Tensor([[1, 2], [3, 4]])
const y = new Tensor([[5, 6], [7, 8]])
console.log(Tensor.einsum("ij,ij->", [x, y]).numpy())
static einsum = (formula: string, operands: Tensor | Tensor[], acc_dtype?: DTypeLike) => Tensor
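Since the formula string follows the Einstein summation convention, matrix multiplication can be expressed directly; for example, `"ij,jk->ik"` contracts over the shared index `j`:
const a = new Tensor([[1, 2], [3, 4]])
const b = new Tensor([[5, 6], [7, 8]])
console.log(Tensor.einsum("ij,jk->ik", [a, b]).numpy())  // same result as a.matmul(b)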
_pool = (k_: sint[], stride: number[] | number = 1, dilation: number[] | number = 1) => Tensor
_resolve_pool_pads = (padding: number | number[], dims: number) => number[]
_apply_ceil_mode = (pads: number[], k_: sint[], s_: number[] | number, d_: number | number[]) => number[]
avg_pool2d = (kernel_size: number[] = [2, 2], stride?: number, dilation: number = 1, padding: number = 0, ceil_mode: boolean = false, count_include_pad: boolean = true) => unknown
Applies average pooling over a tensor. This function supports three different types of `padding`: 1. `int` (single value): Applies the same padding value uniformly to all spatial dimensions. 2. `Tuple[int, ...]` (length = number of spatial dimensions): Specifies a distinct padding value for each spatial dimension in the form `(padding_height, padding_width, ...)`. 3. `Tuple[int, ...]` (length = 2 * number of spatial dimensions): Specifies explicit padding for each side of each spatial dimension in the form `(padding_left, padding_right, padding_top, padding_bottom, ...)`. When `ceil_mode` is set to `true`, the output shape is determined using ceil division. When `count_include_pad` is set to `false`, zero padding is not included in the averaging calculation. NOTE: unlike PyTorch, this implementation is not limited to only 2d pooling and instead works for any number of dimensions. See: https://paperswithcode.com/method/average-pooling
const t = Tensor.arange(25).reshape(1, 1, 5, 5)
console.log(t.avg_pool2d().numpy())
console.log(t.avg_pool2d(undefined, undefined, 1, 0, true).numpy())  // ceil_mode = true
console.log(t.avg_pool2d(undefined, undefined, 1, 1).numpy())  // padding = 1
console.log(t.avg_pool2d(undefined, undefined, 1, 1, false, false).numpy())  // padding = 1, count_include_pad = false
max_pool2d = (kernel_size: number[] = [2, 2], stride?: number, dilation: number = 1, padding: number = 0, ceil_mode: boolean = false) => unknown
static max_pool2d = (t: Tensor) => unknown
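`max_pool2d` takes the same `kernel_size`/`stride`/`dilation`/`padding` arguments as `avg_pool2d` but reduces with the maximum. For example, assuming the default 2x2 kernel as in tinygrad:
const t = Tensor.arange(25).reshape(1, 1, 5, 5)
console.log(t.max_pool2d().numpy())  // max over each 2x2 window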
Applies a convolution over a tensor with a given `weight` and optional `bias`. This function supports three different types of `padding`: 1. `int` (single value): Applies the same padding value uniformly to all spatial dimensions. 2. `Tuple[int, ...]` (length = number of spatial dimensions): Specifies a distinct padding value for each spatial dimension in the form `(padding_height, padding_width, ...)`. 3. `Tuple[int, ...]` (length = 2 * number of spatial dimensions): Specifies explicit padding for each side of each spatial dimension in the form `(padding_left, padding_right, padding_top, padding_bottom, ...)`. NOTE: unlike PyTorch, this implementation is not limited to only 2d convolutions and instead works for any number of dimensions. See: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html
const t = Tensor.arange(9).reshape(1, 1, 3, 3)
const w = Tensor.ones([1, 1, 2, 2])
console.log(t.conv2d(w).numpy())
conv2d = (weight: Tensor, bias?: Tensor, groups: number = 1, stride: number = 1, dilation: number | number[] = 1, padding: number | number[] = 0, acc_dtype?: DTypeLike) => Tensor
Applies a transposed convolution over a tensor with a given `weight` and optional `bias`. This function supports three different types of `padding`: 1. `int` (single value): Applies the same padding value uniformly to all spatial dimensions. 2. `Tuple[int, ...]` (length = number of spatial dimensions): Specifies a distinct padding value for each spatial dimension in the form `(padding_height, padding_width, ...)`. 3. `Tuple[int, ...]` (length = 2 * number of spatial dimensions): Specifies explicit padding for each side of each spatial dimension in the form `(padding_left, padding_right, padding_top, padding_bottom, ...)`. NOTE: unlike PyTorch, this implementation is not limited to only 2d transposed convolutions and instead works for any number of dimensions. See: https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html
const t = Tensor.arange(9).reshape(1, 1, 3, 3)
const w = Tensor.ones([1, 1, 2, 2])
console.log(t.conv_transpose2d(w).numpy())
conv_transpose2d = (weight: Tensor, bias?: Tensor, groups: number = 1, stride_: number = 1, dilation_: number = 1, padding_: number | number[] = 0, output_padding_: number = 0) => Tensor
Performs dot product between two tensors. If `w` is 1-D, it's a sum product over the last axis of `this` and `w`. If `w` is N-D with N>=2, it's a sum product over the last axis of `this` and the second-to-last axis of `w`. You can pass in the optional `acc_dtype` keyword argument to control the data type of the accumulation.
const a = new Tensor([1, 2, 3])
const b = new Tensor([1, 1, 0])
console.log(a.dot(b).numpy())
const a2 = new Tensor([[1, 2], [3, 4]])
const b2 = new Tensor([[5, 6], [7, 8]])
console.log(a2.dot(b2).numpy())
dot = (w: Tensor, acc_dtype?: DTypeLike) => Tensor
Performs matrix multiplication between two tensors. You can pass in the `reverse` keyword argument to control the order of the matrix multiplication. You can pass in the optional `acc_dtype` keyword argument to control the data type of the accumulation.
const a = new Tensor([[1, 2], [3, 4]])
const b = new Tensor([[5, 6], [7, 8]])
console.log(a.matmul(b).numpy())
matmul = (x: Tensor, reverse: boolean = false, acc_dtype?: DTypeLike) => Tensor
_cumalu = (axis: number, op: Ops, _include_initial: boolean = false) => Tensor
_split_cumalu = (axis: number, op: Ops) => Tensor
Computes the cumulative sum of the tensor along the specified `axis`.
const t = Tensor.ones([2, 3])
console.log(t.numpy())
console.log(t.cumsum(1).numpy())
cumsum = (axis: number = 0) => Tensor
Computes the cumulative max of the tensor along the specified `axis`.
const t = new Tensor([0, 1, -1, 2, -2, 3, -3])
console.log(t.numpy())
console.log(t.cummax(0).numpy())
cummax = (axis: number = 0) => Tensor
static _tri = (r: sint, c: sint, diagonal: number = 0, opts?: TensorOptions) => Tensor
Returns the upper triangular part of the tensor; the other elements are set to 0. The argument `diagonal` determines which diagonal is on the boundary. `diagonal = 0` means the main diagonal. Positive `diagonal` means above the main diagonal, and negative `diagonal` means below the main diagonal.
const t = new Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
console.log(t.numpy())
console.log(t.triu(0).numpy())
console.log(t.triu(1).numpy())
console.log(t.triu(-1).numpy())
triu = (diagonal: number = 0) => Tensor
Returns the lower triangular part of the tensor; the other elements are set to 0. The argument `diagonal` determines which diagonal is on the boundary. `diagonal = 0` means the main diagonal. Positive `diagonal` means above the main diagonal, and negative `diagonal` means below the main diagonal.
const t = new Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
console.log(t.numpy())
console.log(t.tril(0).numpy())
console.log(t.tril(1).numpy())
console.log(t.tril(-1).numpy())
tril = (diagonal: number = 0) => Tensor
Downsamples or Upsamples to the input `size`, accepts 0 to N batch dimensions. The interpolation algorithm is selected with `mode` which currently only supports `linear`, `nearest` and `nearest-exact`. To run `bilinear` or `trilinear`, pass in a 2D or 3D size.
const t = new Tensor([[1, 2, 3, 4], [21, 22, 23, 24], [41, 42, 43, 44]])
console.log(t.numpy())
console.log(t.interpolate([2, 3], 'linear').numpy())
interpolate = (size: number[], mode: 'linear' | 'nearest' | 'nearest-exact' = 'linear', align_corners: boolean = false) => Tensor
Scatters `src` values along an axis specified by `dim`. Apply `add` or `multiply` reduction operation with `reduce`.
const src = Tensor.arange(1, 11).reshape(2, 5)
console.log(src.numpy())
let index = new Tensor([[0, 1, 2, 0]])
console.log(Tensor.zeros([3, 5], { dtype: src.dtype }).scatter(0, index, src).numpy())
index = new Tensor([[0, 1, 2], [0, 1, 4]])
console.log(Tensor.zeros([3, 5], { dtype: src.dtype }).scatter(1, index, src).numpy())
console.log(Tensor.full([2, 4], 2.0).scatter(1, new Tensor([[2], [3]]), 1.23, 'multiply').numpy())
console.log(Tensor.full([2, 4], 2.0).scatter(1, new Tensor([[2], [3]]), 1.23, 'add').numpy())
scatter = (dim: number, index: Tensor, src: Tensor | ConstType, reduce?: 'multiply' | 'add') => Tensor
Computes the logical NOT of the tensor element-wise.
console.log(new Tensor([false, true]).logical_not().numpy())
logical_not = () => unknown
Negates the tensor element-wise.
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).neg().numpy())
neg = () => unknown
Returns a contiguous tensor.
contiguous = () => unknown
Inserts a contiguous operation in the backward pass.
contiguous_backward = () => unknown
Computes the natural logarithm element-wise. See: https://en.wikipedia.org/wiki/Logarithm
console.log(new Tensor([1., 2., 4., 8.]).log().numpy())
log = () => unknown
Computes the base-2 logarithm element-wise. See: https://en.wikipedia.org/wiki/Logarithm
console.log(new Tensor([1., 2., 4., 8.]).log2().numpy())
log2 = () => unknown
Computes the exponential function element-wise. See: https://en.wikipedia.org/wiki/Exponential_function
console.log(new Tensor([0., 1., 2., 3.]).exp().numpy())
exp = () => unknown
Computes the base-2 exponential function element-wise. See: https://en.wikipedia.org/wiki/Exponential_function
console.log(new Tensor([0., 1., 2., 3.]).exp2().numpy())
exp2 = () => unknown
Applies the Rectified Linear Unit (ReLU) function element-wise. - Described: https://paperswithcode.com/method/relu
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).relu().numpy())
relu = () => unknown
static relu = (t: Tensor) => unknown
Applies the Sigmoid function element-wise. - Described: https://en.wikipedia.org/wiki/Sigmoid_function
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).sigmoid().numpy())
sigmoid = () => unknown
Applies the Hardsigmoid function element-wise. NOTE: the default `alpha` and `beta` values are taken from torch. - Described: https://paperswithcode.com/method/hard-sigmoid - See: https://pytorch.org/docs/stable/generated/torch.nn.functional.hardsigmoid.html
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardsigmoid().numpy())
hardsigmoid = (alpha: number = 1 / 6, beta: number = 0.5) => unknown
Computes the square root of the tensor element-wise.
console.log(new Tensor([1., 2., 3., 4.]).sqrt().numpy())
sqrt = () => unknown
Computes the reciprocal of the square root of the tensor element-wise.
console.log(new Tensor([1., 2., 3., 4.]).rsqrt().numpy())
rsqrt = () => unknown
Computes the sine of the tensor element-wise.
console.log(new Tensor([0., Math.PI/2, Math.PI, 3*Math.PI/2, 2*Math.PI]).sin().numpy())
sin = () => unknown
Computes the cosine of the tensor element-wise.
console.log(new Tensor([0., Math.PI/2, Math.PI, 3*Math.PI/2, 2*Math.PI]).cos().numpy())
cos = () => unknown
Computes the tangent of the tensor element-wise.
console.log(new Tensor([0., Math.PI/4, Math.PI/2, 3*Math.PI/4, Math.PI]).tan().numpy())
tan = () => unknown
Computes the inverse sine (arcsine) of the tensor element-wise.
console.log(new Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).asin().numpy())
asin = () => unknown
Computes the inverse cosine (arccosine) of the tensor element-wise.
console.log(new Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).acos().numpy())
acos = () => unknown
Computes the inverse tangent (arctan) of the tensor element-wise.
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).atan().numpy())
atan = () => unknown
Truncates the tensor element-wise.
console.log(new Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).trunc().numpy())
trunc = () => Tensor
Rounds the tensor element-wise towards positive infinity.
console.log(new Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).ceil().numpy())
ceil = () => Tensor
Rounds the tensor element-wise towards negative infinity.
console.log(new Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).floor().numpy())
floor = () => Tensor
Rounds the tensor element-wise with rounding half to even.
console.log(new Tensor([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]).round().numpy())
round = () => Tensor
Checks the tensor element-wise to return true where the element is infinity, otherwise returns false.
console.log(new Tensor([1, Infinity, 2, -Infinity, NaN]).isinf().numpy())
isinf = (detect_positive: boolean = true, detect_negative: boolean = true) => unknown
Checks the tensor element-wise to return true where the element is NaN, otherwise returns false.
console.log(new Tensor([1, Infinity, 2, -Infinity, NaN]).isnan().numpy())
isnan = () => unknown
Linearly interpolates between `this` and `end` by `weight`.
console.log(new Tensor([1., 2., 3.]).lerp(new Tensor([4., 5., 6.]), 0.5).numpy())
lerp = (end: Tensor, weight: Tensor | number) => Tensor
Squares the tensor element-wise. Equivalent to `this*this`.
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).square().numpy())
square = () => unknown
Clips (clamps) the values in the tensor between `min_` and `max_` element-wise. If `min_` is `undefined`, there is no lower bound. If `max_` is `undefined`, there is no upper bound.
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).clip(-1, 1).numpy())
clamp = (min_?: number, max_?: number) => unknown
Alias for `Tensor.clamp`.
clip = (min_?: number, max_?: number) => unknown
Returns the sign of the tensor element-wise.
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).sign().numpy())
sign = () => unknown
Computes the absolute value of the tensor element-wise.
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).abs().numpy())
abs = () => unknown
Compute `1/x` element-wise.
console.log(new Tensor([1., 2., 3., 4.]).reciprocal().numpy())
reciprocal = () => unknown
Applies the Exponential Linear Unit (ELU) function element-wise. - Described: https://paperswithcode.com/method/elu - Paper: https://arxiv.org/abs/1511.07289v5
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).elu().numpy())
elu = (alpha: number = 1) => unknown
Applies the Continuously differentiable Exponential Linear Unit (CELU) function element-wise. - Described: https://paperswithcode.com/method/celu - Paper: https://arxiv.org/abs/1704.07483
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).celu().numpy())
celu = (alpha: number = 1) => unknown
Applies the Scaled Exponential Linear Unit (SELU) function element-wise. - Described: https://paperswithcode.com/method/selu - Paper: https://arxiv.org/abs/1706.02515v5
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).selu().numpy())
selu = (alpha: number = 1.67326, gamma: number = 1.0507) => unknown
See `.silu()` - Paper: https://arxiv.org/abs/1710.05941v1
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).swish().numpy())
swish = () => unknown
Applies the Sigmoid Linear Unit (SiLU) function element-wise. - Described: https://paperswithcode.com/method/silu - Paper: https://arxiv.org/abs/1606.08415
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).silu().numpy())
silu = () => unknown
static silu = (x: Tensor) => unknown
Applies the ReLU6 function element-wise. - Described: https://paperswithcode.com/method/relu6 - Paper: https://arxiv.org/abs/1704.04861v1
console.log(new Tensor([-9., -6., -3., 0., 3., 6., 9.]).relu6().numpy())
relu6 = () => unknown
Applies the Hardswish function element-wise. - Described: https://paperswithcode.com/method/hard-swish - Paper: https://arxiv.org/abs/1905.02244v5
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).hardswish().numpy())
hardswish = () => unknown
Applies the Hyperbolic Tangent (tanh) function element-wise. - Described: https://en.wikipedia.org/wiki/Hyperbolic_functions//Tanh
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).tanh().numpy())
tanh = () => unknown
Applies the Hyperbolic Sine (sinh) function element-wise. - Described: https://en.wikipedia.org/wiki/Hyperbolic_functions//Sinh
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).sinh().numpy())
sinh = () => unknown
Applies the Hyperbolic Cosine (cosh) function element-wise. - Described: https://en.wikipedia.org/wiki/Hyperbolic_functions//Cosh
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).cosh().numpy())
cosh = () => unknown
Applies the Inverse Hyperbolic Tangent (atanh) function element-wise. - Described: https://en.wikipedia.org/wiki/Inverse_hyperbolic_functions//atanh
console.log(new Tensor([-0.9, -0.6, -0.3, 0., 0.3, 0.6, 0.9]).atanh().numpy())
atanh = () => unknown
Applies the Inverse Hyperbolic Sine (asinh) function element-wise. - Described: https://en.wikipedia.org/wiki/Inverse_hyperbolic_functions//asinh
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).asinh().numpy())
asinh = () => unknown
Applies the Inverse Hyperbolic Cosine (acosh) function element-wise. - Described: https://en.wikipedia.org/wiki/Inverse_hyperbolic_functions//acosh
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).acosh().numpy())
acosh = () => unknown
Applies the Hardtanh function element-wise. - Described: https://paperswithcode.com/method/hardtanh-activation
console.log(new Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).hardtanh().numpy())
hardtanh = (min_val: number = -1, max_val: number = 1) => unknown
Applies error function element-wise. - Described: https://en.wikipedia.org/wiki/Error_function
console.log(new Tensor([-1.5, -1.0, -0.5, 0., 0.5, 1.0, 1.5]).erf().numpy())
erf = () => unknown
Applies the Gaussian Error Linear Unit (GELU) function element-wise. - Described: https://paperswithcode.com/method/gelu - Paper: https://arxiv.org/abs/1606.08415v5
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).gelu().numpy())
gelu = () => unknown
static gelu = (x: Tensor) => unknown
Applies the Sigmoid GELU approximation element-wise. - Described: https://paperswithcode.com/method/gelu
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).quick_gelu().numpy())
quick_gelu = () => unknown
Applies the Leaky ReLU function element-wise. - Described: https://paperswithcode.com/method/leaky-relu
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).leakyrelu().numpy())
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).leakyrelu(0.42).numpy())
leakyrelu = (neg_slope: number = 0.01) => unknown
Applies the Mish function element-wise. - Described: https://paperswithcode.com/method/mish - Paper: https://arxiv.org/abs/1908.08681v3
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).mish().numpy())
mish = () => unknown
Applies the Softplus function element-wise. - Described: https://paperswithcode.com/method/softplus
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).softplus().numpy())
softplus = (beta: number = 1) => unknown
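The `beta` parameter scales the curve: softplus computes `log(1 + exp(beta * x)) / beta`, so a larger `beta` makes it approach ReLU. A quick sketch passing `beta` positionally, per the signature above:
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).softplus(2).numpy())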
Applies the Softsign function element-wise. - Described: https://paperswithcode.com/method/softsign
console.log(new Tensor([-3., -2., -1., 0., 1., 2., 3.]).softsign().numpy())
softsign = () => unknown
_broadcast_to = (new_shape: sint[]) => Tensor
_broadcasted = (y: ConstType<Tensor | UOp>, reverse: boolean = false, match_dtype: boolean = true) => [Tensor, Tensor]
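These two are internal helpers: `_broadcast_to` expands `this` to `new_shape`, and `_broadcasted` returns `this` and `y` expanded to a common shape (with optional dtype promotion via `match_dtype`). The element-wise ops below call them implicitly, so broadcasting never has to be done by hand; a sketch of the effect, assuming NumPy-style rules as in tinygrad:
console.log(new Tensor([[1], [2]]).add(new Tensor([10, 20, 30])).shape)  // [2, 3]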
Adds `this` and `x`. Equivalent to `this + x`. Supports broadcasting to a common shape, type promotion, and integer, number, and boolean inputs.
Tensor.manual_seed(42); const t = Tensor.randn(4); console.log(t.numpy())
console.log(t.add(20).numpy())
console.log(t.add(new Tensor([[2.0], [3.5]])).numpy())
add = (x: ConstType<Tensor>, reverse: boolean = false) => unknown
Subtracts `x` from `this`. Equivalent to `this - x`. Supports broadcasting to a common shape, type promotion, and integer, number, and boolean inputs.
Tensor.manual_seed(42); const t = Tensor.randn(4); console.log(t.numpy())
console.log(t.sub(20).numpy())
console.log(t.sub(new Tensor([[2.0], [3.5]])).numpy())
sub = (x: ConstType<Tensor>, reverse: boolean = false) => Tensor
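The `reverse` flag swaps the operands, which matters for non-commutative ops like `sub`; a minimal sketch, assuming it mirrors tinygrad's Python `reverse` argument:
const t = new Tensor([1, 2, 3]); console.log(t.sub(20, true).numpy())  // 20 - t => [19, 18, 17]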
Multiplies `this` and `x`. Equivalent to `this * x`. Supports broadcasting to a common shape, type promotion, and integer, number, and boolean inputs.
Tensor.manual_seed(42); const t = Tensor.randn(4); console.log(t.numpy())
console.log(t.mul(3).numpy())
console.log(t.mul(new Tensor([[-1.0], [2.0]])).numpy())
mul = (x: ConstType<Tensor>, reverse: boolean = false) => Tensor
Divides `this` by `x` using integer division, truncating towards zero. Supports broadcasting to a common shape, type promotion, and integer inputs.
console.log(new Tensor([-4, 7, 5, 4, -7, 8]).idiv(new Tensor([2, -3, 8, -2, 3, 5])).numpy())
idiv = (x: ConstType<Tensor>, reverse: boolean = false) => Tensor
Divides `this` by `x`. Equivalent to `this / x`. Supports broadcasting to a common shape, type promotion, and integer, number, and boolean inputs. `div` performs true division.
Tensor.manual_seed(42); const t = Tensor.randn(4); console.log(t.numpy())
console.log(t.div(3).numpy())
console.log(new Tensor([1, 4, 10]).div(new Tensor([2, 3, 4])).numpy())
div = (x: ConstType<Tensor> | sint, reverse: boolean = false) => Tensor
Mods `this` by `x`. Equivalent to `this % x`. Supports broadcasting to a common shape, type promotion, and integer inputs.
console.log(new Tensor([-4, 7, 5, 4, -7, 8]).mod(new Tensor([2, -3, 8, -2, 3, 5])).numpy())
mod = (x: ConstType<Tensor>, reverse: boolean = false) => Tensor
Computes the bitwise XOR of `this` and `x`. Equivalent to `this ^ x`. Supports broadcasting to a common shape, type promotion, and integer and boolean inputs.
console.log(new Tensor([-1, -2, 3]).xor(new Tensor([1, 0, 3])).numpy())
console.log(new Tensor([true, true, false, false]).xor(new Tensor([true, false, true, false])).numpy())
xor = (x: ConstType<Tensor>, reverse: boolean = false) => Tensor
Computes the bitwise AND of `this` and `x`. Equivalent to `this & x`. Supports broadcasting to a common shape, type promotion, and integer and boolean inputs.
console.log(new Tensor([2, 5, 255]).bitwise_and(new Tensor([3, 14, 16])).numpy())
console.log(new Tensor([true, true, false, false]).bitwise_and(new Tensor([true, false, true, false])).numpy())
bitwise_and = (x: ConstType<Tensor>, reverse: boolean = false) => Tensor
Computes the bitwise OR of `this` and `x`. Equivalent to `this | x`. Supports broadcasting to a common shape, type promotion, and integer and boolean inputs.
console.log(new Tensor([2, 5, 255]).bitwise_or(new Tensor([4, 4, 4])).numpy())
console.log(new Tensor([true, true, false, false]).bitwise_or(new Tensor([true, false, true, false])).numpy())
bitwise_or = (x: ConstType<Tensor>, reverse: boolean = false) => Tensor
Compute the bit-wise NOT of `this`. Equivalent to `~this`.
console.log(new Tensor([0, 2, 5, 255], { dtype: dtypes.int8 }).bitwise_not().numpy())
console.log(new Tensor([true, false]).bitwise_not().numpy())
bitwise_not = () => Tensor
Computes left arithmetic shift of `this` by `x` bits. `this` must have unsigned dtype. Equivalent to `this << x`.
console.log(new Tensor([1, 3, 31], { dtype: dtypes.uint8 }).lshift(2).numpy())
lshift = (x: ConstType<Tensor>) => unknown
Computes right arithmetic shift of `this` by `x` bits. `this` must have unsigned dtype. Equivalent to `this >> x`.
console.log(new Tensor([4, 13, 125], { dtype: dtypes.uint8 }).rshift(2).numpy())
rshift = (x: ConstType<Tensor>) => unknown
Raises `this` to the power of `x`. Equivalent to `this ** x`.
console.log(new Tensor([-1, 2, 3]).pow(2).numpy())
console.log(new Tensor([-1, 2, 3]).pow(new Tensor([-1.5, 0.5, 1.5])).numpy())
console.log(new Tensor([-1, 2, 3]).pow(2, true).numpy())  // 2 ** t, via reverse = true
pow = (x: ConstType<Tensor>, reverse: boolean = false) => Tensor
Computes the element-wise maximum of `this` and `x`.
console.log(new Tensor([-1, 2, 3]).maximum(1).numpy())
console.log(new Tensor([-1, 2, 3]).maximum(new Tensor([-4, -2, 9])).numpy())
maximum = (x: ConstType<Tensor>) => Tensor
Computes the element-wise minimum of `this` and `x`.
console.log(new Tensor([-1, 2, 3]).minimum(1).numpy())
console.log(new Tensor([-1, 2, 3]).minimum(new Tensor([-4, -2, 9])).numpy())
minimum = (x: ConstType<Tensor>) => Tensor
Returns a tensor of elements selected from either `x` or `y`, depending on `this`: `output_i = this_i ? x_i : y_i`.
const cond = new Tensor([[true, true, false], [true, false, false]]); console.log(cond.where(1, 3).numpy())
Tensor.manual_seed(42); const cond = Tensor.randn(2, 3); console.log(cond.numpy())
console.log(cond.gt(0).where(cond, -Infinity).numpy())
where = (x: ConstType<Tensor>, y: ConstType<Tensor>) => unknown
masked_fill = (mask: Tensor, value: ConstType<Tensor>) => unknown
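`masked_fill` is undocumented above; by analogy with `torch.Tensor.masked_fill` (and tinygrad's `mask.where(value, self)`), it presumably replaces the elements of `this` where `mask` is true with `value`. A hypothetical usage sketch:
const t = new Tensor([1, 2, 3, 4]); const mask = new Tensor([true, false, true, false]); console.log(t.masked_fill(mask, 0).numpy())  // [0, 2, 0, 4]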
Applies a linear transformation to `this` using `weight` and `bias`. See: https://pytorch.org/docs/stable/generated/torch.nn.Linear.html
const t = new Tensor([[1, 2], [3, 4]]); const weight = new Tensor([[1, 2], [3, 4]]); const bias = new Tensor([1, 2]); console.log(t.linear(weight, bias).numpy())
linear = (weight: Tensor, bias?: Tensor) => unknown
Applies a sequence of functions to `this`, chaining the output of each function to the input of the next.
const t = new Tensor([1, 2, 3]); console.log(t.sequential([(x) => x.mul(2), (x) => x.add(1)]).numpy())
sequential = (ll: Layer[]) => unknown
sequentialAsync = (ll: LayerAsync[]) => unknown
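`sequentialAsync` is presumably the awaitable counterpart of `sequential` for layers that return promises. A hypothetical sketch (the async layers here are made up for illustration):
const t = new Tensor([1, 2, 3]); console.log((await t.sequentialAsync([async (x) => x.mul(2), async (x) => x.add(1)])).numpy())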
Applies Layer Normalization over a mini-batch of inputs. - Described: https://paperswithcode.com/method/layer-normalization - Paper: https://arxiv.org/abs/1607.06450v1
let t = Tensor.randn(8, 10, 16).mul(2).add(8); console.log(t.mean().item(), t.std().item())
t = t.layernorm(); console.log(t.mean().item(), t.std().item())
layernorm = (axis: number | number[] = -1, eps: number = 1e-5) => Tensor
Applies Batch Normalization over a mini-batch of inputs. - Described: https://paperswithcode.com/method/batch-normalization - Paper: https://arxiv.org/abs/1502.03167
let t = Tensor.randn(8, 4, 16, 16).mul(2).add(8); console.log(t.mean().item(), t.std().item())
t = t.batchnorm(undefined, undefined, t.mean([0, 2, 3]), t.var([0, 2, 3]).add(1e-5).rsqrt()); console.log(t.mean().item(), t.std().item())
batchnorm = (weight: undefined | Tensor, bias: undefined | Tensor, mean: Tensor, invstd: Tensor, axis: number | number[] = 1) => Tensor
Applies dropout to `this`. NOTE: dropout is only applied when `Tensor.training` is `true`. - Described: https://paperswithcode.com/method/dropout - Paper: https://jmlr.org/papers/v15/srivastava14a.html
Tensor.manual_seed(42); const t = Tensor.randn(2, 2); Tensor.training = true; console.log(t.dropout().numpy())
dropout = (p: number = 0.5) => Tensor
_one_hot_along_dim = (num_classes: number, dim: number = -1) => unknown
Converts `this` to a one-hot tensor. `num_classes` defaults to -1, which means num_classes will be inferred as max(this) + 1.
const t = new Tensor([0, 1, 3, 3, 4]); console.log((await t.one_hot(5)).numpy())
one_hot = (num_classes: number = -1) => Promise<Tensor>
Computes scaled dot-product attention. `this` is the query tensor, `key` is the key tensor, and `value` is the value tensor. - Described: https://paperswithcode.com/method/scaled - Paper: https://arxiv.org/abs/1706.03762v7
const q = Tensor.randn(2, 4, 8); const k = Tensor.randn(2, 4, 8); const v = Tensor.randn(2, 4, 8); console.log(q.scaled_dot_product_attention(k, v).numpy())
scaled_dot_product_attention = (key: Tensor, value: Tensor, attn_mask?: Tensor, dropout_p: number = 0, is_causal: boolean = false) => Tensor
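Setting `is_causal: true` presumably applies a lower-triangular mask so each position attends only to itself and earlier positions, and, as in PyTorch, should not be combined with an explicit `attn_mask`. A sketch with the arguments passed positionally per the signature above:
const q = Tensor.randn(2, 4, 8); const k = Tensor.randn(2, 4, 8); const v = Tensor.randn(2, 4, 8); console.log(q.scaled_dot_product_attention(k, v, undefined, 0, true).numpy())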
_do_reduction = (reduction: ReductionStr = "mean") => Tensor
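Internal helper: applies the `reduction` strategy to a computed element-wise loss; presumably `"mean"` averages, `"sum"` totals, and `"none"` returns the per-element losses. The loss functions below all accept this same `ReductionStr`.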
Computes the binary cross-entropy loss between `this` and `Y`. See: https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html
const t = new Tensor([0.1, 0.9, 0.2]); const Y = new Tensor([0, 1, 0]); console.log(t.binary_crossentropy(Y).item())
binary_crossentropy = (Y: Tensor, reduction: ReductionStr = "mean") => Tensor
Computes the binary cross-entropy loss between `this` and `Y`, where `this` is logits. See: https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html
const t = new Tensor([-1, 2, -3]); const Y = new Tensor([0, 1, 0]); console.log(t.binary_crossentropy_logits(Y).item())
binary_crossentropy_logits = (Y: Tensor, reduction: ReductionStr = "mean") => Tensor
Computes the sparse categorical cross-entropy loss between `this` and `Y`. NOTE: `this` is logits and `Y` is the target labels. NOTE: unlike PyTorch, this function expects the class axis to be -1. See: https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html
const t = new Tensor([[-1, 2, -3], [1, -2, 3]]); const Y = new Tensor([1, 2]); console.log(t.sparse_categorical_crossentropy(Y).item())
sparse_categorical_crossentropy = (Y: Tensor, ignore_index: number = -1, label_smoothing: number = 0, reduction: ReductionStr = "mean") => Tensor
Computes the cross-entropy loss between input logits and target. NOTE: `this` is logits and `Y` is the target labels or class probabilities. See: https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html
const t = new Tensor([[-1, 2, -3], [1, -2, 3]]); const Y = new Tensor([1, 2]); console.log(t.cross_entropy(Y).item())
console.log(t.cross_entropy(Y, "mean", 0.3).item())  // with label smoothing
cross_entropy = (Y: Tensor, reduction: ReductionStr = "mean", label_smoothing: number = 0) => Tensor
Computes the negative log likelihood loss between log-probabilities and target labels. NOTE: `this` is log-probabilities and `Y` is the target labels or class probabilities. See: https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html
const t = new Tensor([[-1, 2, -3], [1, -2, 3]]); const Y = new Tensor([1, 2]); console.log(t.log_softmax().nll_loss(Y).item())
console.log(t.log_softmax().nll_loss(Y, undefined, undefined, "none").numpy())
nll_loss = (Y: Tensor, weight?: Tensor, ignore_index?: number, reduction: ReductionStr = "mean") => Tensor
Returns the total number of elements in the tensor.
const t = new Tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]); console.log(t.numel())
numel = () => sint
Returns the size in bytes of an individual element in the tensor.
const t = new Tensor([5], { dtype: dtypes.int16 }); console.log(t.element_size())
element_size = () => number
Returns the total number of bytes of all elements in the tensor.
const t = new Tensor([8, 9], { dtype: dtypes.float32 }); console.log(t.nbytes())
nbytes = () => number
Returns `true` if the tensor contains floating point types, i.e. its dtype is one of `dtypes.float64`, `dtypes.float32`, `dtypes.float16`, `dtypes.bfloat16`.
const t = new Tensor([8, 9], { dtype: dtypes.float32 }); console.log(t.is_floating_point())
is_floating_point = () => boolean
Returns the size of the tensor. If `dim` is specified, returns the length along dimension `dim`. Otherwise returns the shape of the tensor.
const t = new Tensor([[4, 5, 6], [7, 8, 9]]); console.log(t.size())
console.log(t.size(1))
size = (dim?: number) => sint | sint[]
llvm_bf16_cast = (dtype: DTypeLike) => unknown
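Internal helper: presumably a workaround for backends without a native bfloat16 cast; in tinygrad the equivalent routes the tensor through the LLVM backend before casting to `dtype`.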
Casts `this` to the given `dtype`.
let t = new Tensor([-1, 2.5, 3], { dtype: dtypes.float32 }); console.log(t.dtype, t.numpy())
t = t.cast(dtypes.int32); console.log(t.dtype, t.numpy())
t = t.cast(dtypes.uint8); console.log(t.dtype, t.numpy())
cast = (dtype: DTypeLike) => Tensor
Bitcasts `this` to the given `dtype` of the same itemsize. `this` must not require a gradient.
let t = new Tensor([-1, 2, 3], { dtype: dtypes.int32 }); console.log(t.dtype, t.numpy())
t = t.bitcast(dtypes.uint32); console.log(t.dtype, t.numpy())
bitcast = (dtype: DTypeLike) => Tensor
Convenience method to cast `this` to a `float32` Tensor.
let t = new Tensor([-1, 2, 3], { dtype: dtypes.int32 }); console.log(t.dtype, t.numpy())
t = t.float(); console.log(t.dtype, t.numpy())
float = () => Tensor
Convenience method to cast `this` to a `float16` Tensor.
let t = new Tensor([-1, 2, 3], { dtype: dtypes.int32 }); console.log(t.dtype, t.numpy())
t = t.half(); console.log(t.dtype, t.numpy())
half = () => Tensor
Convenience method to cast `this` to an `int32` Tensor.
let t = new Tensor([-1.5, -0.5, 0.0, 0.5, 1.5]); console.log(t.dtype, t.numpy())
t = t.int(); console.log(t.dtype, t.numpy())
int = () => Tensor
Convenience method to cast `this` to a `boolean` Tensor.
let t = new Tensor([-1, 0, 1]); console.log(t.dtype, t.numpy())
t = t.bool(); console.log(t.dtype, t.numpy())
bool = () => Tensor
image_dot = (w: Tensor, acc_dtype?: DType) => Tensor
image_conv2d = (weight: Tensor, bias?: Tensor, groups: number = 1, stride: number = 1, dilation: number | number[] = 1, padding: number | number[] = 0, acc_dtype?: DType) => Tensor
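`image_dot` and `image_conv2d` mirror `dot` and `conv2d` but operate on image dtypes (see `ImageDType`); in tinygrad these exist for GPU backends where texture memory is faster than plain buffers, and they are normally selected automatically (via the IMAGE flag) rather than called directly.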
function [Symbol.for('nodejs.util.inspect.custom')] (_depth: number, _options: any): string
get device (): string | string[]
get shape (): sint[]
get shape_num (): number[]
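These getters are undocumented above; by analogy with tinygrad, `device` is the backing device (or an array of devices for multi-device tensors), `shape` is the symbolic shape (entries may be variables, hence `sint`), and `shape_num` presumably returns the shape as plain numbers, valid only once every dimension is concrete.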
`.T` is an alias for `.transpose()`.
get T (): Tensor
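A quick sketch of the alias:
const t = new Tensor([[1, 2], [3, 4]]); console.log(t.T.numpy())  // [[1, 3], [2, 4]]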
Returns the number of dimensions in the tensor.
const t = new Tensor([[1, 2], [3, 4]]); console.log(t.ndim)
get ndim (): number
}