Update README.md
adtzlr authored Dec 10, 2022
1 parent 4cb4c4e commit 271d542
Showing 1 changed file with 1 addition and 1 deletion.
README.md
@@ -28,7 +28,7 @@ Math on (Hyper-Dual) Tensors with Trailing Axes.
- No arbitrary-order gradients (only first- and second-order gradients)

# Why `tensortrax`?
- Compared to other libraries which introduce a new (hyper-)dual `dtype` (treated as `dtype=object` in NumPy), `tensortrax` relies on its own `Tensor` class. This approach requires re-defining all essential math operations (and NumPy functions), whereas the dtype approach supports most operations (even NumPy's) out of the box. However, in `tensortrax`, NumPy operates on default data types (e.g. `dtype=float`), which makes it possible to support functions like `np.einsum()`. Besides the differences concerning the underlying `dtype`, `tensortrax` is formulated on the (tensorial) calculus of variations.
+ Compared to other libraries which introduce a new (hyper-)dual `dtype` (treated as `dtype=object` in NumPy), `tensortrax` relies on its own `Tensor` class. This approach requires re-defining all essential math operations (and NumPy functions), whereas the dtype approach supports most operations (even NumPy's) out of the box. However, in `tensortrax`, NumPy operates on default data types (e.g. `dtype=float`), which makes it possible to support functions like `np.einsum()`. Besides the differences concerning the underlying `dtype`, `tensortrax` is formulated on the (tensorial) calculus of variations. Gradient- and Hessian-vector products are evaluated with very little overhead compared to analytic formulations.

# Usage
Let's define a scalar-valued function which operates on a tensor.
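
The example itself is collapsed in this diff view. For illustration only (this is not the repository's verbatim example), here is a minimal sketch of such a scalar-valued function and its derivatives; the `tr.function`, `tr.gradient` and `tr.hessian` helpers with `wrt` and `ntrax` arguments, as well as the `tensortrax.math` namespace, are assumed here rather than taken from this diff.

```python
import numpy as np
import tensortrax as tr
import tensortrax.math as tm

def fun(F):
    """Isochoric Neo-Hookean strain energy of a deformation gradient F (illustrative)."""
    C = tm.transpose(F) @ F         # right Cauchy-Green deformation tensor
    I1 = tm.trace(C)                # first invariant
    J = tm.linalg.det(F)            # volume ratio
    return J ** (-2 / 3) * I1 - 3

# random deformation gradients with two trailing (batch) axes
F = np.eye(3).reshape(3, 3, 1, 1) + np.random.rand(3, 3, 8, 50) / 10

# function value, gradient and hessian w.r.t. argument 0; ntrax=2 treats
# the last two axes as elementwise trailing axes (helper names and
# signatures are assumed, see the note above)
W = tr.function(fun, wrt=0, ntrax=2)(F)
dWdF = tr.gradient(fun, wrt=0, ntrax=2)(F)
d2WdF2 = tr.hessian(fun, wrt=0, ntrax=2)(F)
```

Here `ntrax=2` marks the two trailing axes as elementwise batch axes, matching the "Trailing Axes" in the project's tagline; the gradient- and Hessian-vector products mentioned in the added sentence are evaluated analogously via dedicated helpers not shown in this truncated diff.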