What Is a Tensor in Machine Learning?
In machine learning, every computation ultimately happens on numeric values, and a tensor is the data structure used to store them. You can think of it as a multidimensional array.
The type of a tensor depends on its number of dimensions: tensors can be 0D, 1D, 2D, 3D, 4D, 5D, and so on, but for today's discussion, we will focus only on tensors up to 5D.
- 0D Tensor
If your array contains only a single value, then it is a 0D tensor. Let’s take an example.
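A minimal NumPy sketch of such a 0D tensor might look like this:

```python
import numpy as np

# A 0D tensor: an array holding a single value
arr = np.array(0)

print(arr.ndim)   # 0  -> number of dimensions
print(arr.shape)  # () -> empty tuple, no axes
print(arr.size)   # 1  -> total number of elements
```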
In the above example, we declared a NumPy array arr that contains the single value 0. Now, we can look at the properties of this array:
- ndim: Shows the dimension of the array.
- shape: A tuple giving the number of elements along each axis.
- size: The total number of elements in the array.
A single element like this is also called a scalar. One more important term is rank: the rank of a tensor is equal to its number of dimensions, so the rank of a scalar is 0.
- 1D Tensor
A group of scalars together forms a vector, and this list of scalars or vector is considered a 1D tensor.
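For instance, a vector of the scalars 1 through 4 can be built in NumPy like this:

```python
import numpy as np

# A 1D tensor: a vector of scalars
vec = np.array([1, 2, 3, 4])

print(vec.ndim)   # 1
print(vec.shape)  # (4,)
print(vec.size)   # 4
```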
In the above example, you can see the group of scalars [1, 2, 3, 4]. This is called a 1D tensor. Its dimension and rank are both 1, and its shape is (4,) because the vector contains 4 elements.
- 2D Tensor
When we group multiple vectors together, they form a matrix, and this matrix can be called a 2D tensor.
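A sketch of a 3 × 3 matrix in NumPy (the specific values are just placeholders):

```python
import numpy as np

# A 2D tensor: a 3x3 matrix (3 rows, 3 columns)
mat = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])

print(mat.ndim)   # 2
print(mat.shape)  # (3, 3)
```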
There are two dimensions in a matrix, i.e., rows and columns. As seen in the above example, the shape of the matrix is (3,3), which means it has 3 rows and 3 columns.
- 3D Tensor
A collection of two or more matrices is called a 3D tensor. It has three dimensions: depth, rows, and columns. You can visualize it as a cube.
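A sketch in NumPy, using np.arange purely to generate placeholder values:

```python
import numpy as np

# A 3D tensor: a stack of three 3x3 matrices (depth, rows, columns)
cube = np.arange(27).reshape(3, 3, 3)

print(cube.ndim)   # 3
print(cube.shape)  # (3, 3, 3)
```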
As you can see in the above example, the 3D tensor has a shape of (3,3,3), which represents depth, rows, and columns respectively.
- 4D Tensor
When two or more 3D tensors are grouped together, the result is a 4D tensor. You can think of it as a collection of cubes or a vector of cubes.
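A sketch in NumPy; the leading size of 2 (two cubes) is an arbitrary choice for illustration:

```python
import numpy as np

# A 4D tensor: a vector of cubes (index, depth, rows, columns)
vec_of_cubes = np.arange(2 * 3 * 3 * 3).reshape(2, 3, 3, 3)

print(vec_of_cubes.ndim)   # 4
print(vec_of_cubes.shape)  # (2, 3, 3, 3)
```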
In the above example, you can see the shape of a 4D tensor. It adds one more axis in front of the depth, so the shape of a 4D tensor contains 4 values: index, depth, rows, and columns, respectively.
- 5D Tensor
A matrix of cubes, i.e., a matrix of 3D tensors, can be called a 5D tensor.
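A sketch in NumPy with two rows of two cubes each; the exact sizes are illustrative:

```python
import numpy as np

# A 5D tensor: a matrix of cubes (2 rows x 2 columns of 3x3x3 cubes)
mat_of_cubes = np.arange(2 * 2 * 3 * 3 * 3).reshape(2, 2, 3, 3, 3)

print(mat_of_cubes.ndim)   # 5
print(mat_of_cubes.shape)  # (2, 2, 3, 3, 3)
```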
In this 5D tensor, there are two rows, and each row contains a vector of cubes, i.e., a vector of 3D tensors. That is why the shape contains 5 values.
Let's take one real-world example to relate the concept of tensors to actual data:
We are storing the lowest and highest prices of a stock for one year.
This data forms a matrix with a shape of (365, 2), where 365 represents the days, and 2 represents the highest and lowest prices.
Now suppose we have 10 years of data for the same stock.
This adds a depth dimension for the 10 years.
The shape becomes (10, 365, 2), which can be visualized as a cube.
Now suppose we have 25 such stocks for one company.
We now have a collection of 25 cubes. This forms a 4D tensor with the shape (25, 10, 365, 2).
If we extend this to 10 such companies, each with data for 25 stocks, we end up with a 10 × 25 grid of cubes, i.e., a matrix of cubes.
This forms a 5D tensor with the shape (10, 25, 10, 365, 2).
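A sketch of how these shapes build up in NumPy, using zero-filled arrays as stand-ins for the actual price data:

```python
import numpy as np

# One year of daily (high, low) prices for one stock
one_year = np.zeros((365, 2))               # 2D: (days, prices)

# Ten years of data for the same stock
ten_years = np.zeros((10, 365, 2))          # 3D: (years, days, prices)

# Twenty-five such stocks
stocks = np.zeros((25, 10, 365, 2))         # 4D: (stocks, years, days, prices)

# Ten companies, each with 25 stocks
companies = np.zeros((10, 25, 10, 365, 2))  # 5D: (companies, stocks, years, days, prices)

print(companies.ndim)   # 5
print(companies.shape)  # (10, 25, 10, 365, 2)
```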
Summary:
Tensors are multidimensional data structures used to store numerical data in machine learning. They range from 0D (scalars) up to higher-dimensional forms like 5D, classified by the number of axes they have. For example, time-series stock data can form tensors: daily prices create a 2D matrix, multiple years add depth for a 3D tensor, multiple stocks create a 4D tensor, and adding multiple companies forms a 5D tensor. Tensors provide a structured way to handle complex data efficiently.