
Commit

linear calculations nb (#46)
sergiomarchio authored Nov 12, 2024
1 parent 6471c76 commit ba0088a
Showing 4 changed files with 673 additions and 8 deletions.
7 changes: 4 additions & 3 deletions guides/en/05. Linear Layer.ipynb
@@ -25,7 +25,7 @@
"\n",
"## Case with 1 Input and 1 Output\n",
"\n",
"In this case the math is similar to the well known 2D line equation $y = wx+b$. In this case $w$, $x$, $b$ and $y$ are all scalars, and we are just multiplying $x$ by $w$ and then adding $b$. \n",
"In this case the math is similar to the well known 2D line equation $y = wx+b$, where $w$, $x$, $b$ and $y$ are all scalars, and we are just multiplying $x$ by $w$ and then adding $b$. \n",
"\n",
"\n",
"\n",
@@ -43,7 +43,7 @@
"\n",
"Note that:\n",
"* $x w$ is now a matrix multiplication\n",
"* The order between $x$ and $w$ matters because matrix multiplication is not associative\n",
"* The order between $x$ and $w$ matters because matrix multiplication is not commutative\n",
" * A $1×I$ array ($x$) multiplied by another $I×O$ array ($w$) results in a $1×O$ array ($y$)\n",
" * The reverse definition, $y=wx$, would require that $x$ and $y$ be column vectors, or that $w$ has size $O×I$,\n",
"\n",
@@ -78,6 +78,7 @@
"source": [
"# Create a Linear layer with 2 input and 3 output values\n",
"# Initialize it with values sampled from a normal distribution\n",
"# with mean 0 and standard deviation 1e-12\n",
"\n",
"std = 1e-12\n",
"input_dimension = 2\n",
@@ -93,7 +94,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Forward Method\n",
"# Forward method\n",
"\n",
"Now that we know how to create and initialize `Linear` layer objects, let's move on to the `forward` method, which can be found in the `edunn/models/linear.py` file.\n",
"\n",
331 changes: 331 additions & 0 deletions guides/en/05b. Linear Calculations.ipynb
@@ -0,0 +1,331 @@
{
"cells": [
{
"metadata": {},
"cell_type": "markdown",
"source": [
"# Backward method\n",
"\n",
"In the `Bias` layer implementation, the formulas for the derivatives were relatively simple, and the complexity relied on how to use the framework and in the understanding of the difference between the derivative of the input and the one of the parameters.\n",
"\n",
"`Linear` layer's backward method requires teh calculation of $\\frac{dE}{dy}$ and $\\frac{dE}{dw}$. In terms of the framework the implementation is very similar to the `Bias` layer, but the formulas of the derivatives are more complex.\n",
"\n",
"First we'll assume there's only one input example $x$ to make it simpler, the we'll generalize to a batch of $N$ examples.\n"
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"## $dE / dx$\n",
"\n",
"Let's start with $\\frac{dE}{dx}$. This is symmetrical to $\\frac{dE}{dw}$,but easier to grasp conceptually.\n",
"\n",
"We'll think this derivative by scenario, from the simplest to the most complex, increasing the input and output dimensions.\n"
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### 1 input, 1 output\n",
"\n",
"When both the input and output are 1D then $x \\in R$ and $w \\in R$, they're scalars. Then $\\frac{dE}{dy}$ is also a scalar, and following the Chain Rule:\n",
"\n",
"$\\frac{dE}{dx} = \\frac{dE}{dy} \\frac{dy}{dx} = \\frac{dE}{dy} \\frac{d(wx)}{dx} = \\frac{dE}{dy} w$\n"
]
},
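As a sanity check of this scalar case, here is a minimal sketch (the toy error $E(y)$ below is an assumption chosen so that $\frac{dE}{dy}$ is constant):

```python
w, x = 3.0, 2.0
dE_dy = 5.0                              # pretend gradient coming from the next layer

dE_dx = dE_dy * w                        # the formula above: dE/dx = dE/dy * w

# Finite-difference check against a toy error E(y) = dE_dy * y
eps = 1e-6
E = lambda x_: dE_dy * (w * x_)
numeric = (E(x + eps) - E(x - eps)) / (2 * eps)
print(dE_dx, numeric)                    # both approximately 15.0
```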
{
"metadata": {},
"cell_type": "markdown",
"source": [
"\n",
"### I inputs, 1 output\n",
"\n",
"With $I$ inputs and 1 output $x$ is a vector with $I$ values, i.e. $x \\in R^I$, then $w \\in R^I$ is also a vector with $I$ values. Then we can think the output as the matrix product between $w$ and $x$\n",
"\n",
"$y = x . w = \\sum_{i=1}^I x_i w_i$\n",
"\n",
"We have one partial derivative for each input: $\\frac{dE}{dx_j}$. Given that $\\frac{dE}{dy}$ is still a scalar (there's only one output) and applying the Chain Rule, we can calculate this derivative:\n",
"\n",
"$\n",
"\\frac{dE}{dx_j} \n",
"= \\frac{dE}{dy} \\frac{dy}{dx_j} \\\\\n",
"= \\frac{dE}{dy} \\frac{d (\\sum_{i=1}^I w_i x_i)}{dx_j} \\\\\n",
"= \\frac{dE}{dy} \\sum_{i=1}^I \\frac{d (w_i x_i) }{dx_j} \\\\\n",
"= \\frac{dE}{dy} \\frac{d (w_j x_j) }{dx_j} \\\\\n",
"= \\frac{dE}{dy} w_j \n",
"$\n",
"\n",
"Then $\\frac{dE}{dx_j} = \\frac{dE}{dy} w_j$. We can generalize this definition and calculate the gradient with respect to the whole vector $x$ as:\n",
"\n",
"$\\frac{dE}{dx} = \\frac{dE}{dy} w$\n",
"\n",
"\n",
"#### Notes\n",
"\n",
"1. It's great that the same definition of $\\frac{dE}{dx}$ works in both scenarios, being with $1$ input or with an arbitrary amount of $I$ inputs.\n",
"1. It's important to consider that in this context we cant think of $\\frac{dE}{dy}$ as a constant, since it's values were calculated previously.\n",
"1. We could obtain $\\frac{dy}{dx}$ without taking into consideration the network error, and then get $\\frac{dE}{dx}$ applying the Chain Rule $\\frac{dE}{dx} =\\frac{dE}{dy} \\frac{dy}{dx}$. We're doing everythiong at the same time to be clearer in the context of the `backward` method of a network.\n"
]
},
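In code, this case is a single elementwise scaling of $w$ by the scalar $\frac{dE}{dy}$. A minimal `numpy` sketch with made-up sizes:

```python
import numpy as np

I = 4
w = np.random.randn(I)           # one weight per input
dE_dy = 2.5                      # scalar: there is a single output

dE_dx = dE_dy * w                # one partial derivative per input, shape (I,)
print(dE_dx.shape)               # (4,)
```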
{
"metadata": {},
"cell_type": "markdown",
"source": [
"\n",
"### I inputs, O outputs\n",
"\n",
"Again, let's go for one of the input's derivatives, i.e., $\\frac{dE}{dx_j}$:\n",
"\n",
"$\n",
"\\frac{dE}{dx_j} = \\frac{dE}{dy} \\frac{dy}{dx_j} \n",
"$\n",
"\n",
"In this case $y$ is a vector, so we have to add the contribution of each element of $y$ to the Chain Rule:\n",
"\n",
"$\n",
"\\frac{dE}{dx_j} \n",
"= \\frac{dE}{dy} \\frac{dy}{dx_j} \n",
"= \\sum_{i=1}^O \\frac{dE}{dy_i} \\frac{dy_i}{dx_j}\n",
"$\n",
"\n",
"We know that $y_i$ is the dot product between the column $i$ of $w$ with the input $x$, according to the matrix multiplication definition, then:\n",
"\n",
"$\n",
"\\frac{dE}{dx_j} \n",
"= \\sum_{i=1}^O \\frac{dE}{dy_i} \\frac{dy_i}{dx_j} \\\\\n",
"= \\sum_{i=1}^O \\frac{dE}{dy_i} \\frac{d(w_{:,i} \\cdot x)}{dx_j} \\\\\n",
"= \\sum_{i=1}^O \\frac{dE}{dy_i} \\frac{d(\\sum_{k=1}^I w_{k,i} x_k)}{dx_j} \\\\\n",
"= \\sum_{i=1}^O \\frac{dE}{dy_i} ( \\sum_{k=1}^I \\frac{d (w_{k,i} x_k)}{dx_j} ) \\\\\n",
"= \\sum_{i=1}^O \\frac{dE}{dy_i} w_{j,i}\n",
"$\n",
"\n",
"Now, $\\sum_{i=1}^O \\frac{dE}{dy_i} w_{j,i}$ is simply the dot product between the column $i$ of $w$ ($w_{:,i}$) and $\\frac{dE}{dy}$. Then we can write:\n",
"\n",
"$\n",
"\\frac{dE}{dx_j} = \\frac{dE}{dy} \\cdot w_{:,i}\n",
"$\n",
"\n",
"Generalizing to the entire vector $x$, if $\\frac{dE}{dx_j}$ is the product between tho vectors, where $j$ is the column of $w$, we can write $\\frac{dE}{dx}$ as a product between the $\\frac{dE}{dy}$ vector and the entire $w$ matrix:\n",
"\n",
"$\n",
"\\frac{dE}{dx} = w \\frac{dE}{dy}\n",
"$\n",
"\n",
"In this case again, the order matters. $w$ has size $I \\times O$ and $\\frac{dE}{dy}$ has size $O$, then $w \\frac{dE}{dy}$ has size $I$ (the same as $x$)\n",
"\n",
"\n"
]
},
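The same result in `numpy`, comparing the vectorized formula against the elementwise sum derived above (the sizes are arbitrary, chosen only for the sketch):

```python
import numpy as np

I, O = 4, 3
w = np.random.randn(I, O)
dE_dy = np.random.randn(O)

dE_dx = w @ dE_dy                # shape (I,): one derivative per input

# Elementwise version: dE/dx_j = sum_i dE/dy_i * w[j, i]
dE_dx_loop = np.array([sum(dE_dy[i] * w[j, i] for i in range(O)) for j in range(I)])
print(np.allclose(dE_dx, dE_dx_loop))   # True
```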
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### Batch implementation\n",
"\n",
"To implement the derivative for an example batch we can iterate over each and calculate the derivatives as we did before. Alternatively we can rewrite the derivative to make it work for a batch of $N$ examples (then, of $N$ vectors of derivatives, both for the input and for the output).\n",
"\n",
"In the batch implementation of $\\frac{dE}{dx}$ we have $x$ as a matrix of size $N \\times I$, then $\\frac{dE}{dx}$ also is. At the same time as $\\frac{dE}{dy}$ is next layer's $\\frac{dE}{dx}$, $\\frac{dE}{dy}$ is a matrix of size $N \\times O$.\n",
"\n",
"Given that, we can't multiply $w \\in R^{I \\times O}$ by $\\frac{dE}{dy} \\in R^{N \\times O}$. In this case you can verify that the correct formula is $\\frac{dE}{dy} w^T$, since when multiplying a matrix of size $N \\times O$ by one of size $O \\times I$ ($w^T$), we get a matrix of size $N \\times I$: the same size as $x$:\n",
"\n",
"$\n",
"\\frac{dE}{dx} = \\frac{dE}{dy} w^T\n",
"$"
]
},
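A quick shape check of the batch formula, again as a plain-`numpy` sketch with arbitrary sizes:

```python
import numpy as np

N, I, O = 5, 4, 3
w = np.random.randn(I, O)
dE_dy = np.random.randn(N, O)    # one gradient row per example

dE_dx = dE_dy @ w.T              # shape (N, I), the same as x
print(dE_dx.shape)               # (5, 4)

# Each row matches the single-example formula w @ dE_dy[i]
print(np.allclose(dE_dx[0], w @ dE_dy[0]))   # True
```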
{
"metadata": {},
"cell_type": "markdown",
"source": [
"## $dE/dw$\n",
"\n",
"For the gradient of the error with respect of $w$, we'll also assume at first that there's only one input example $x$, and we'll go from the simplest to the more complex scenarios.\n",
"\n"
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### 1 input, 1 output\n",
"\n",
"This is the simplest scneario, and it's symmetrical to $\\frac{dE}{dx}$:\n",
"\n",
"$\n",
"\\frac{dE}{dw} = \\frac{dE}{dw} \\frac{dw}{dx} = \\frac{dE}{dw} \\frac{d (wx)}{dw} = \\frac{dE}{dy} x\n",
"$\n"
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"\n",
"### I inputs, 1 output\n",
"\n",
"Now we have $I$ inputs and 1 output.\n",
"\n",
"$y = x . w = \\sum_{i=1}^I x_i w_i$\n",
"\n",
"As $w$ has $I$ elements there's a partial derivative for each value of $w$: $\\frac{dE}{dw_j}$. Keep in mind that $\\frac{dE}{dy}$ is still a scalar (there's only one output), so applying the Chain Rule we can calculate this derivative:\n",
"\n",
"$\n",
"\\frac{dE}{dw_j} \n",
"= \\frac{dE}{dy} \\frac{dy}{dw_j} \\\\\n",
"= \\frac{dE}{dy} \\frac{d \\sum_{i=1}^I w_i x_i }{dw_j} \\\\\n",
"= \\frac{dE}{dy} \\sum_{i=1}^I \\frac{d (w_i x_i) }{dxw_j} \\\\\n",
"= \\frac{dE}{dy} \\frac{d (w_j x_j) }{dw_j} \\\\\n",
"= \\frac{dE}{dy} x_j\n",
"$\n",
"\n",
"Then $\\frac{dE}{dw_j} = \\frac{dE}{dy} x_j$. We can generalize this definition and calculate the gradient with respect to the entire vector $x$ as:\n",
"\n",
"$\\frac{dE}{dw} = \\frac{dE}{dy} x$\n",
"\n",
"Again, this case is **symmetrical** with $x$, since $\\frac{dE}{dx} = \\frac{dE}{dy} w$.\n"
]
},
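A minimal sketch of this case, mirroring the $\frac{dE}{dx}$ example above (sizes are made up):

```python
import numpy as np

I = 4
x = np.random.randn(I)
dE_dy = 2.5                      # scalar: single output

dE_dw = dE_dy * x                # shape (I,), symmetric to dE/dx = dE_dy * w
print(dE_dw.shape)               # (4,)
```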
{
"metadata": {},
"cell_type": "markdown",
"source": [
"\n",
"### I inputs, O outputs\n",
"\n",
"In this case having $O$ outputs, we'll have to find the derivative of each weight for each input $i$ for each output $j$. We'll lose the previous symmetry, but it'll be recovered in the batch version.\n",
"\n",
"Given that, we'll find $\\frac{dE}{dw_{i,j}}$. Applying the Chain Rule:\n",
"\n",
"$\n",
"\\frac{dE}{dw_{i,j}}\n",
"= \\frac{dE}{dy} \\frac{dy}{dw_{i,j}}\n",
"= \\frac{dE}{dy} \\frac{d (xw)}{dw_{i,j}} \n",
"$\n",
"\n",
"As $y$ is a vector, we'll have to add for each value to apply the Chain Rule:\n",
"\n",
"$\n",
"\\frac{dE}{dw_{i,j}}\n",
"= \\frac{dE}{dy} \\frac{d (xw)}{dw_{i,j}}\n",
"= \\sum_{k=1}^O \\frac{dE}{dy_k} \\frac{d(xw)_k}{dw_{i,j}} \n",
"$\n",
"\n",
"As $y_k$ only depends on $w_{i,j}$ if $j=k$, i.e. if we're calculating the output for the column $k$, then:\n",
"\n",
"$\n",
"\\frac{dE}{dw_{i,j}}\n",
"= \\frac{dE}{dy} \\frac{d (xw)}{dw_{i,j}}\n",
"= \\frac{dE}{dy_j} \\frac{d(xw)_j}{dw_{i,j}} \n",
"$\n",
"\n",
"By the matrix multiplication definition, $(xw)_j = \\sum_{l=1}^O x_l w_{l,j}$, so we multiply $x$ for the column $j$ of $w$. Replacing the values:\n",
"\n",
"$\n",
"\\frac{dE}{dw_{i,j}}\n",
"= \\frac{dE}{dy_j} \\frac{d(xw)_j}{dw_{i,j}} \\\\\n",
"= \\frac{dE}{dy_j} \\frac{d(\\sum_{l=1}^O x_l w_{l,j})}{dw_{i,j}} \\\\\n",
"= \\frac{dE}{dy_j} \\sum_{l=1}^O \\frac{d (x_l w_{l,j}) }{dw_{i,j}}\n",
"$\n",
"\n",
"As $w_{i,j}$ is one particular weight of $w$, of the entire sum only remains the term that contains it: $\\frac{d (x_i w_{i,j})}{w_{i,j}} = x_i$. Replacing the values:\n",
"\n",
"$\n",
"\\frac{dE}{dw_{i,j}} \n",
"= \\frac{dE}{dy_j} \\sum_{l=1}^O \\frac{d (x_l w_{l,j})}{dw_{i,j}} \\\\\n",
"= \\frac{dE}{dy_j} \\frac{d(x_i w_{i,j})}{d w_{i,j}} \\\\ \n",
"= \\frac{dE}{dy_j} x_i\n",
"$\n"
]
},
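Written as a (deliberately naive) double loop over `i` and `j`, the formula above looks like the following sketch; the next section shows how to avoid the loops:

```python
import numpy as np

I, O = 4, 3
x = np.random.randn(I)
dE_dy = np.random.randn(O)

dE_dw = np.zeros((I, O))
for i in range(I):
    for j in range(O):
        dE_dw[i, j] = dE_dy[j] * x[i]    # dE/dw_{i,j} = dE/dy_j * x_i
print(dE_dw.shape)                       # (4, 3), same as w
```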
{
"metadata": {},
"cell_type": "markdown",
"source": [
"\n",
"### Vector expression\n",
"\n",
"The previous expression is helpful, but we should use a `for` loop with `i` and `j` indexes over the entire `w` matrix. Instead we can generalize by observing the pattern of the $\\frac{dE}{dw}$ matrix:\n",
"\n",
"$\n",
"\\frac{dE}{dw} = \\left(\n",
"\\begin{matrix} \n",
" \\frac{dE}{dy_1} x_1 & \\frac{dE}{dy_2} x_1 & \\dots & \\frac{dE}{dy_O} x_1 \\\\\n",
" \\frac{dE}{dy_1} x_2 & \\frac{dE}{dy_2} x_2 & \\dots & \\frac{dE}{dy_O} x_2 \\\\\n",
" \\vdots & \\vdots & \\ddots & \\vdots \\\\\n",
" \\frac{dE}{dy_1} x_I & \\frac{dE}{dy_2} x_I & \\dots & \\frac{dE}{dy_O} x_I \\\\\n",
"\\end{matrix}\n",
"\\right) = x \\otimes \\frac{dE}{dy}\n",
"$\n",
"\n",
"Where $\\otimes$ is the [outer product](https://en.wikipedia.org/wiki/Outer_product) between two vectors. With `numpy` the [`outer`](https://numpy.org/doc/stable/reference/generated/numpy.outer.html) function allows this kind of operation without the need of loops.\n",
"\n",
"Keep in mind that the outer product is *not* commutative: if $a$ and $b$ have sizes $p$ and $q$, then $ a \\otimes b$ has size $p \\times q$ and $b \\otimes a$ has size $q \\times p$. Given that, as $\\frac{dE}{dw}$ must have size $I \\times O$, we have to calculate $x \\otimes\\frac{dE}{dy}$ and not $\\frac{dE}{dy} \\otimes x$.\n"
]
},
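The same gradient without loops, using `np.outer` (a sketch; note the argument order):

```python
import numpy as np

I, O = 4, 3
x = np.random.randn(I)
dE_dy = np.random.randn(O)

dE_dw = np.outer(x, dE_dy)       # (I, O) matrix with x_i * dE/dy_j in position (i, j)
print(dE_dw.shape)               # (4, 3)
# np.outer(dE_dy, x) would give the transposed (O, I) matrix instead.
```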
{
"metadata": {},
"cell_type": "markdown",
"source": [
"\n",
"## Batch calculation\n",
"\n",
"When we have a batch of $N$ examples, $x$ has size $N \\times I$, $w$ has size $I \\times O$, and $\\frac{dE}{dy}$ has size $N \\times O$.\n",
"\n",
"As before with $b$, to calculate $\\frac{dE}{dw}$ we need to add the gradient that contributes with each example $x_i$, then:\n",
"\n",
"$\n",
"\\frac{dE}{dw} = \\sum_{i=1}^{n} x_{i,:} \\otimes \\frac{dE}{dy_{i,:}}\n",
"$\n",
"\n",
"Where $x_{i,:}$ is $x$'s $i$ row, i.e. the $i^{th}$ example (`numpy`'s equivalent would be `x[i,:]`)\n",
"\n",
"For example, for $N=2$ we can verify:\n",
"\n",
"$\n",
"\\frac{dE}{dw} = x_{1,:} \\otimes \\frac{dE}{dy_{1,:}} + x_{1,:} \\otimes \\frac{dE}{dy_{2,:}} \\\\\n",
"= \\left(\n",
"\\begin{matrix}\n",
" \\frac{dE}{dy_{1,1}} x_{1,1} + \\frac{dE}{dy_{2,1}} x_{2,1} & \\frac{dE}{dy_{1,2}} x_{1,1} + \\frac{dE}{dy_{2,2}} x_{2,1} & \\dots & \\frac{dE}{dy_{1,O}} x_{1,1} + \\frac{dE}{dy_{2,O}} x_{2,1} \\\\\n",
" \\frac{dE}{dy_{1,1}} x_{1,2} + \\frac{dE}{dy_{2,1}} x_{2,2} & \\frac{dE}{dy_{1,2}} x_{1,2} + \\frac{dE}{dy_{2,2}} x_{2,2} & \\dots & \\frac{dE}{dy_{1,O}} x_{1,2} + \\frac{dE}{dy_{2,O}} x_{2,2} \\\\\n",
" \\vdots & \\vdots & \\ddots & \\vdots \\\\\n",
" \\frac{dE}{dy_{1,1}} x_{1,I} + \\frac{dE}{dy_{2,1}} x_{2,I} & \\frac{dE}{dy_{1,2}} x_{1,I} + \\frac{dE}{dy_{2,2}} x_{2,I} & \\dots & \\frac{dE}{dy_{1,O}} x_{1,I} + \\frac{dE}{dy_{2,O}} x_{2,I} \\\\\n",
"\\end{matrix}\n",
"\\right) \\\\\n",
" = x^t \\frac{dE}{dy}\n",
"$\n",
"\n",
"This is valid for any $N$. We can confirm this identity given the sizes: multiplying $x^t$ (size $I \\times N$) by $\\frac{dE}{dy}$ (size $N \\times O$), we obtain a matrix of size $I \\times O$, same as $w$, just the size $\\frac{dE}{dw}$ must have!.\n",
"\n",
"Then we can see the symmetry between both derivatives;:\n",
"\n",
"$\n",
"\\frac{dE}{dw} = \\frac{dy}{dw} \\frac{dE}{dy} = x^t \\frac{dE}{dy} \\\\\n",
"\\frac{dE}{dx}= \\frac{dy}{dx} \\frac{dE}{dy} = \\frac{dE}{dy} w\n",
"$\n"
]
}
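To close, a small check that the vectorized batch formula matches the sum of per-example outer products (a sketch with arbitrary sizes, independent of the `edunn` implementation):

```python
import numpy as np

N, I, O = 5, 4, 3
x = np.random.randn(N, I)
dE_dy = np.random.randn(N, O)

dE_dw = x.T @ dE_dy                                   # vectorized: shape (I, O)

dE_dw_loop = sum(np.outer(x[i], dE_dy[i]) for i in range(N))
print(np.allclose(dE_dw, dE_dw_loop))                 # True
```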
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
