Mar 22, 2025
8 min read
Tags: Rust, Candle, PyTorch

Pointwise Operations in the Rust Candle Framework and PyTorch Tensors

This article compares how pointwise tensor operations are implemented in PyTorch and in the Rust Candle framework, covering common operations such as absolute value, trigonometric functions, and exponentials.

Pointwise operations act on each element of a tensor independently. PyTorch provides numerous APIs for pointwise operations; this article focuses on some common ones and their equivalent implementations in Candle.

Pointwise Operations Overview:

  - ✅ equivalent implementation exists
  - 🚫 no equivalent implementation exists
  - ☢️ alternative implementation exists

| Operation | PyTorch | Candle |
| --- | --- | --- |
| Absolute Value | abs, absolute | abs ✅ |
| Inverse Cosine/Sine/Tangent | acos (arccos) / asin / atan | not implemented 🚫 |
| Cosine/Sine/Tangent | cos / sin / tan | cos / sin ✅ (tan not implemented 🚫) |
| Inverse Hyperbolic Cosine | acosh, arccosh | not implemented 🚫 |
| Addition/Subtraction/Multiplication/Division | add / sub / mul / div | add / sub / mul / div ✅ |
| Ceiling/Floor | ceil / floor | ceil / floor ✅ |
| Clamp All Elements to Range [min, max] | clamp, clip | clamp ✅ |
| Hyperbolic Cosine | cosh | not implemented 🚫 |
| Degrees to Radians | deg2rad | not implemented 🚫 |
| Exponential e^x | exp | exp ✅ |
| Truncate to Integer | fix, trunc | not implemented 🚫 |
| Float Tensor Power | float_power | powf ✅ |
| Fractional Part | frac | not implemented 🚫 |
| Decompose into Mantissa and Exponent | frexp | not implemented 🚫 |
| Natural Logarithm (base e) | log | log ✅ |
| Other Logarithms | log10 / log2 / log1p | not implemented 🚫 |
| Negation | neg, negative | neg ✅ |
| Reciprocal | reciprocal | recip ✅ |
| Square Root / Reciprocal Square Root | sqrt / rsqrt | sqrt ✅ / rsqrt not implemented, alternative exists ☢️ |
| Logistic Sigmoid Function | sigmoid, torch.special.expit | candle_nn::ops::sigmoid ✅ |
| Sign | sign | sign ✅ |
| Softmax | softmax | candle_nn::ops::softmax ✅ |
| Square | square | not implemented, alternative exists ☢️ |
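
Several of the missing operations are easy to emulate with primitives Candle does provide. For example, degrees-to-radians is just a scalar multiply; a minimal sketch using Candle's affine method (which computes mul * x + add):

    let degrees = Tensor::new(vec![0f64, 90., 180.], &Device::Cpu)?;
    // deg2rad(x) = x * (pi / 180), expressed as affine(mul, add) = mul * x + add
    let radians = degrees.affine(std::f64::consts::PI / 180.0, 0.0)?;
    println!("{radians}"); // [0.0000, 1.5708, 3.1416]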

Absolute Value

PyTorch:

    a = torch.tensor([-1, 2, -3])
    # Output tensor([1, 2, 3])
    print(a.abs())

Candle:

    let a_data = vec![-1i64, 2, -3];
    let a = Tensor::from_vec(a_data, 3, &Device::Cpu)?;
    let y = a.abs()?;
    // [-1, 2, -3] -> [1, 2, 3]
    println!("{y}");

Cosine/Sine/Tangent

In Candle, only cos and sin are implemented; tan is not. Unlike PyTorch, which promotes integer inputs to floating point, Candle's cos and sin operate on floating-point tensors, so the example below uses float literals.

PyTorch:

    a = torch.tensor([-1, 2, -3])
    print(a.cos()) # tensor([ 0.5403, -0.4161, -0.9900])
    print(a.sin()) # tensor([-0.8415,  0.9093, -0.1411])
    print(a.tan()) # tensor([-1.5574, -2.1850,  0.1425])

Candle:

    let a_data = vec![-1., 2., -3.];
    let x = Tensor::from_vec(a_data, 3, &Device::Cpu)?;
    let a = x.cos()?; 
    let b = x.sin()?; 
    println!("{a}"); // [ 0.5403, -0.4161, -0.9900]
    println!("{b}"); // [-0.8415,  0.9093, -0.1411]

Although Candle does not support the tangent (tan) operation, it can be indirectly implemented using the formula:

\tan(x) = \frac{\sin(x)}{\cos(x)}

Thus, tangent can be implemented as follows:

    // Tangent operation
    let c = (x.sin()? / x.cos()?)?;
    println!("{c}"); // [-1.5574, -2.1850,  0.1425]

Addition/Subtraction/Multiplication/Division

The addition/subtraction/multiplication/division methods behave much the same in PyTorch and Candle for everyday use, but differ in some details:

  1. PyTorch's addition and subtraction accept an alpha scaling parameter (computing self + alpha * other), which Candle lacks. The operations are only directly equivalent when no alpha scaling is used (see the sketch after the Candle example below).
  2. PyTorch supports operations between tensors and scalars, whereas Candle's named add/sub/mul/div methods expect another tensor of the same shape, so a scalar must first be materialized as a tensor. (Candle's overloaded +, -, *, / operators do accept an f64 scalar on the right-hand side, though.)

PyTorch:

    a = torch.tensor([1, 2, 3])
    print(a.add(10)) # tensor([11, 12, 13])
    print(a + 10) # tensor([11, 12, 13])

Candle:

    let a_data = vec![1., 2., 3.];
    let x1 = Tensor::from_vec(a_data, 3, &Device::Cpu)?;
    // Candle's add method expects another tensor of the same shape, so the scalar is expanded into a tensor first.
    let size = 3;
    let b_data = [10f64].repeat(size);
    let x2: Tensor = Tensor::from_vec(b_data, size, &Device::Cpu)?;
    let y: Tensor = x1.add(&x2)?;
    println!("{y}"); // [11., 12., 13.]
    let y = (x1 + x2)?;
    println!("{y}"); // [11., 12., 13.]

Ceiling/Floor

PyTorch:

    a = torch.tensor([0.5403, -0.4161, -0.9900])
    print(a.ceil())  # tensor([1., -0., -0.])
    print(a.floor()) # tensor([ 0., -1., -1.])

Candle:

    let a_data = vec![0.5403, -0.4161, -0.9900];
    let x = Tensor::new(a_data, &Device::Cpu)?;
    let ceil = x.ceil()?;
    let floor = x.floor()?;

    println!("ceil: {:?}", ceil); // ceil: Tensor[1, -0, -0; f64]
    println!("floor: {:?}", floor); // floor: Tensor[0, -1, -1; f64]

Clamp All Elements to Range [min, max]

Formula:

y_i = \min(\max(x_i, \text{min\_value}_i), \text{max\_value}_i)

PyTorch:

    a = torch.tensor([0.5403, -0.4161, -0.9900])
    print(a.clamp(-0.5, 0.5)) # tensor([ 0.5000, -0.4161, -0.5000])

Candle:

    // x = [0.5403, -0.4161, -0.9900]
    let y = x.clamp(-0.5, 0.5)?;
    println!("{y}"); // [0.5000, -0.4161, -0.5000]

Exponential e^x

Formula:

y_i = e^{x_i}

PyTorch:

    a = torch.tensor([0., -2., -3.])
    print(a.exp()) # tensor([1.0000, 0.1353, 0.0498])

Candle:

    // x = [0., -2., -3.]
    let y = x.exp()?;
    println!("{y}"); // [1.0000, 0.1353, 0.0498]

Truncate to Integer

This operation is not implemented in Candle; one workaround is to convert the tensor to a vector, truncate each element, and turn the result back into a tensor. The round-trip through host memory makes this approach somewhat costly.

    let a_data = vec![1.0000, -2.9353, -5.0498];
    let x = Tensor::new(a_data, &Device::Cpu)?;
    let v = x.to_vec1::<f64>()?;
    let v: Vec<f64> = v.iter().map(|v| v.trunc()).collect();
    let y = Tensor::new(v, &Device::Cpu)?;
    println!("{y}"); // [1., -2., -5.]

Power of Float Tensor

PyTorch:

    a = torch.tensor([6.0, 4.0, 7.0, 1.0])
    print(a.float_power(2)) # tensor([36., 16., 49., 1.], dtype=torch.float64)

Candle:

    let a_data = vec![6., 4., 7., 1.];
    let x = Tensor::new(a_data, &Device::Cpu)?;
    let y = x.powf(2f64)?;
    println!("{y}"); // [36., 16., 49., 1.]

Fractional Part

As with truncation, Candle does not implement this operation directly, but it can be achieved by converting to a vector and back.

    let a_data = vec![1.0000, -2.9353, -5.0498];
    let x = Tensor::new(a_data, &Device::Cpu)?;
    let v = x.to_vec1::<f64>()?;
    let v: Vec<f64> = v.iter().map(|v| v.fract()).collect();
    let y = Tensor::new(v, &Device::Cpu)?;
    println!("{y}"); // [0.0000, -0.9353, -0.0498]

Natural Logarithm

Formula:

y_i = \log_e(x_i)

PyTorch:

    a = torch.tensor([6.0, 4.0, 7.0, 1.0])
    print(a.log()) # tensor([1.7918, 1.3863, 1.9459, 0.0000])

Candle:

    let a_data = vec![6.0, 4.0, 7.0, 1.0];
    let x = Tensor::new(a_data, &Device::Cpu)?;
    let y = x.log()?;
    println!("{y}"); // [1.7918, 1.3863, 1.9459, 0.0000]

Negation

PyTorch:

    a = torch.tensor([6.0, 4.0, 7.0, 1.0])
    print(a.neg()) # tensor([-6., -4., -7., -1.])

Candle:

    let a_data = vec![6.0, 4.0, 7.0, 1.0];
    let x = Tensor::new(a_data, &Device::Cpu)?;
    let y = x.neg()?;
    println!("{y}"); // [-6., -4., -7., -1.]

Reciprocal

Formula:

\text{out}_i = \frac{1}{\text{input}_i}

PyTorch:

    a = torch.tensor([1.0, 2.0, 3.0])
    print(a.reciprocal()) # tensor([1.0000, 0.5000, 0.3333])

Candle:

    let a_data = vec![1.0, 2.0, 3.0];
    let x = Tensor::new(a_data, &Device::Cpu)?;
    let y = x.recip()?;
    println!("{y}"); // [1.0000, 0.5000, 0.3333]

Square Root/Reciprocal Square Root

Square root formula:

\text{out}_i = \sqrt{\text{input}_i}

Reciprocal square root formula:

\text{out}_i = \frac{1}{\sqrt{\text{input}_i}}

PyTorch:

    a = torch.tensor([1.0, 2.0, 3.0])
    print(a.sqrt()) # tensor([1.0000, 1.4142, 1.7321])
    print(a.rsqrt()) # tensor([1.0000, 0.7071, 0.5774])

Candle does not have rsqrt, but it can be calculated using the formula above:

    // [1.0, 2.0, 3.0]
    let y = x.sqrt()?;
    println!("{y}"); // [1.0000, 1.4142, 1.7321]
    let y = (1f64 / x.sqrt()?)?;
    println!("{y}"); // [1.0000, 0.7071, 0.5774]

Sigmoid

The sigmoid function is commonly used as an activation function; it squashes a neuron's output into the range (0, 1), where it can be interpreted as a probability.

Formula:

\text{out}_i = \frac{1}{1 + e^{-\text{input}_i}}

The two frameworks expose this operation slightly differently (Candle provides it in candle_nn::ops rather than as a tensor method), and the default print precision also differs.

PyTorch:

    a = torch.tensor([1.0, 2.0, 3.0])
    print(a.sigmoid()) # tensor([0.7311, 0.8808, 0.9526])

Candle:

    // [1.0, 2.0, 3.0]
    let y = candle_nn::ops::sigmoid(&x)?;
    println!("{:?}", y); // Tensor[0.7310585786300049, 0.8807970779778823, 0.9525741268224334; f64]

Sign

The sign function returns -1 for x < 0, 0 for x = 0, and 1 for x > 0.

Mathematical formula:

\operatorname{sign}(x) = \begin{cases} -1, & x < 0 \\ 0, & x = 0 \\ 1, & x > 0 \end{cases}

PyTorch:

    a = torch.tensor([1.0, -2.0, 3.0])
    print(a.sign()) # tensor([ 1., -1.,  1.])

Candle:

    // [1.0, -2.0, 3.0]
    let y = x.sign()?;
    println!("{:?}", y); // Tensor[1, -1, 1; f64]

Softmax

Softmax is a mathematical function that converts a numeric vector into a probability distribution: the outputs are non-negative and sum to 1, with each element representing the relative weight of the corresponding input across the whole vector.

Softmax is commonly used in classification tasks.

Formula:

\text{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}

PyTorch:

    a = torch.tensor([1.0, -2.0, 3.0])
    print(a.softmax(0)) # tensor([0.1185, 0.0059, 0.8756])

Candle:

    // [1.0, -2.0, 3.0]
    let y = candle_nn::ops::softmax(&x, 0)?;
    println!("{:?}", y); // Tensor[0.11849965453500959, 0.005899750401902781, 0.8756005950630876; f64]

Square

PyTorch has two ways to calculate squares:

    a = torch.tensor([1.0, -2.0, 3.0])
    print(a.square()) # tensor([1., 4., 9.])
    print(a.pow(2)) # tensor([1., 4., 9.])

Candle has no method named square; powf can be used instead (see also the sqr sketch below):

    // [1.0, -2.0, 3.0]
    let y = x.powf(2.)?;
    println!("{:?}", y); // Tensor[1, 4, 9; f64