From 271d542e23504cd3f43b8471ee6272a14fad0053 Mon Sep 17 00:00:00 2001
From: Andreas Dutzler
Date: Sat, 10 Dec 2022 09:32:03 +0100
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index bf25d76..db69b6b 100644
--- a/README.md
+++ b/README.md
@@ -28,7 +28,7 @@ Math on (Hyper-Dual) Tensors with Trailing Axes.
 - No arbitrary-order gradients (only first- and second order gradients)
 
 # Why `tensortrax`?
-Compared to other libaries which introduce a new (hyper-) dual `dtype` (treated as `dtype=object` in NumPy), `tensortrax` relies on its own `Tensor` class. This approach involves a re-definition of all essential math operations (and NumPy-functions), whereas the dtype-approach supports most operations (even NumPy) out of the box. However, in `tensortrax` NumPy operates on default data types (e.g. `dtype=float`). This allows to support functions like `np.einsum()`. Beside the differences concerning the underlying `dtype`, `tensortrax` is formulated on (tensorial) calculus of variation.
+Compared to other libraries which introduce a new (hyper-)dual `dtype` (treated as `dtype=object` in NumPy), `tensortrax` relies on its own `Tensor` class. This approach requires re-defining all essential math operations (and NumPy functions), whereas the dtype approach supports most operations (even from NumPy) out of the box. However, in `tensortrax` NumPy operates on default data types (e.g. `dtype=float`), which makes it possible to support functions like `np.einsum()`. Besides the differences concerning the underlying `dtype`, `tensortrax` is formulated on the (tensorial) calculus of variations. Gradient- and Hessian-vector products are evaluated with very little overhead compared to analytic formulations.
 
 # Usage
 Let's define a scalar-valued function which operates on a tensor.
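The trailing `# Usage` context of the hunk leads into the README's example. To make the added sentence about cheap gradient and Hessian evaluations concrete, a minimal sketch (not part of the patch) could look like the following; it assumes `tensortrax`'s documented entry points `tr.function`, `tr.gradient` and `tr.hessian`, the `ntrax` trailing-axes argument, and `tensortrax.math` providing `trace` and `linalg.det`:

```python
import numpy as np
import tensortrax as tr
import tensortrax.math as tm

# A Neo-Hookean-type strain energy density as a scalar-valued function of
# the right Cauchy-Green deformation tensor C (symmetric 3x3, with
# trailing axes). Entry-point and function names follow the tensortrax
# docs; treat this as an illustrative sketch.
def W(C, mu=1.0):
    I1 = tm.trace(C)             # first invariant tr(C)
    J = tm.linalg.det(C) ** 0.5  # volume ratio, det(F) = sqrt(det(C))
    return mu / 2 * (J ** (-2 / 3) * I1 - 3)

# 3x3 identity tensors, broadcast over two trailing axes
# (e.g. 4 quadrature points x 10 cells).
C = np.tile(np.eye(3).reshape(3, 3, 1, 1), (1, 1, 4, 10))

# `ntrax=2` marks the two trailing axes; differentiation acts only on the
# leading 3x3 tensor part, while NumPy broadcasts over the trailing axes.
w = tr.function(W, ntrax=2)(C)      # values, expected shape (4, 10)
dWdC = tr.gradient(W, ntrax=2)(C)   # expected shape (3, 3, 4, 10)
d2WdC2 = tr.hessian(W, ntrax=2)(C)  # expected shape (3, 3, 3, 3, 4, 10)
```

Because the dual arithmetic is carried by plain NumPy arrays of default `dtype`, the per-point gradients and Hessians above are evaluated in vectorized calls rather than in a Python loop, which is where the low-overhead claim comes from.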