# An alternative definition for the Partial Derivative

## General idea:

Let’s suppose we are given $f: \mathcal{M} \to \mathbb{R}$, where $\mathcal{M}$ is a compact subset of $\mathbb{R}^n$ and $f$ is continuous. Now, instead of computing partial derivatives of this function of several variables directly, we would like to compute equivalent derivatives with respect to functions of a single variable. How should we proceed?

We note that if $e_i$ denotes the $i$-th standard basis vector, we may define:

\begin{equation} \frac{\partial{f}}{\partial{x_i}} = \lim_{n \to \infty} n \cdot \big(f(x+\frac{1}{n}\cdot e_i)-f(x)\big) = \lim_{n \to \infty}f_n^i \end{equation}
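As a quick numerical sanity check of (1), the sequence $f_n^i = n\big(f(x+\frac{1}{n}e_i)-f(x)\big)$ can be evaluated for growing $n$; the function $f(x, y) = x^2 y$ and the point $(1, 2)$ below are illustrative choices, not from the text, with known partial derivative $\partial f/\partial x_1 = 2xy = 4$.

```python
import numpy as np

# Illustrative test function (an assumption, not from the text):
# f(x, y) = x^2 * y, with exact partial derivative df/dx_1 = 2 * x * y.
def f(x):
    return x[0] ** 2 * x[1]

x = np.array([1.0, 2.0])
e1 = np.array([1.0, 0.0])  # first standard basis vector e_1

# f_n^1 = n * (f(x + e_1 / n) - f(x)) should approach df/dx_1 = 4 at (1, 2)
for n in (10, 1000, 100000):
    fn = n * (f(x + e1 / n) - f(x))
    print(n, fn)
```

The printed values approach 4 at a rate of roughly $O(1/n)$, as expected for a forward difference.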

This allows us to introduce the following equivalence:

\begin{equation} \lim_{x_{j \neq i} \to c_j} \frac{\partial f}{\partial x_i} = \frac{\partial}{\partial x_i} \lim_{x_{j \neq i} \to c_j} f \equiv \lim_{x_{j \neq i} \to c_j} \lim_{n \to \infty} f_n^i = \lim_{n \to \infty} \lim_{x_{j \neq i} \to c_j} f_n^i \end{equation}

and we can show that these limits are interchangeable due to the Moore-Osgood theorem since:

\begin{equation} \forall n \in \mathbb{N}, \lim_{x_{j \neq i} \to c_j} f_n^i(x) \end{equation}

exists due to the assumption that $f$ is continuous, and if we define $g_i = \frac{\partial f}{\partial x_i}$ we can show that:

\begin{equation} \lim_{n \to \infty} f_n^i = g_i \end{equation}

uniformly, though (4) may not be completely obvious, so it warrants a demonstration. In fact, the definition that interests us depends on the correctness of this proof.

## Proof of uniform convergence:

By the Heine-Cantor theorem, since $\mathcal{M}$ is compact and $g_i = \frac{\partial f}{\partial x_i}$ is assumed to be continuous, $g_i$ is uniformly continuous. It follows that for every $\epsilon > 0$ there exists $N \in \mathbb{N}$ such that for all $n \geq N$ and all $x \in \mathcal{M}$:

\begin{equation} d(f_n^i,g_i) = \lvert f_n^i(x)-g_i(x) \rvert = \Big\lvert \frac{f(x+\frac{1}{n}\cdot e_i)-f(x)}{\frac{1}{n}} - \frac{\partial f}{\partial x_i} \Big\rvert < \epsilon \end{equation}

Furthermore, by the Mean Value Theorem the difference quotient in (5) equals $g_i(x + \alpha \cdot e_i)$ for some $\alpha \in (0, \frac{1}{n})$, so (5) simplifies to:

\begin{equation} \exists \alpha \in (0, \frac{1}{n}), \lvert g_i(x+\alpha \cdot e_i) - g_i(x) \rvert < \epsilon \end{equation}

which holds by the uniform continuity of $g_i$ once $n$ is large enough that $\frac{1}{n}$ is smaller than the $\delta$ associated with $\epsilon$, and this concludes our proof.
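The uniform convergence claim (4) can also be checked numerically: the sup-norm error $\sup_x \lvert f_n^i(x) - g_i(x) \rvert$ over a grid should shrink as $n$ grows. The function $f(x,y) = \sin x \cos y$ and the grid below are illustrative assumptions.

```python
import numpy as np

# Illustrative smooth function (an assumption): f(x, y) = sin(x) * cos(y),
# with exact partial derivative g_1 = df/dx = cos(x) * cos(y).
def f(x, y):
    return np.sin(x) * np.cos(y)

def g1(x, y):
    return np.cos(x) * np.cos(y)

# Grid over the compact set [0, pi] x [0, pi]
xs = np.linspace(0.0, np.pi, 50)
ys = np.linspace(0.0, np.pi, 50)
X, Y = np.meshgrid(xs, ys)

sups = []
for n in (10, 100, 1000):
    fn = n * (f(X + 1.0 / n, Y) - f(X, Y))   # forward difference f_n^1
    sups.append(np.max(np.abs(fn - g1(X, Y))))  # sup-norm distance to g_1
print(sups)  # decreasing sequence of sup-norm errors
```

The maximum error over the whole grid decreases roughly like $1/(2n)$, which is what uniform (not merely pointwise) convergence predicts for a $C^2$ function.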

## Definition:

Given the following extrema:

\begin{equation} m=\min_{x\in \mathcal{M}} \lvert \langle x,e_i \rangle \rvert \end{equation}

\begin{equation} M=\max_{x\in \mathcal{M}} \lvert \langle x,e_i \rangle \rvert \end{equation}

we may define:

\begin{equation} \forall \lambda \in [m,M] \forall x \in \mathcal{M}, \tilde{f}(\lambda, x) = f(\lambda\cdot e_i + x \odot(1_n - e_i)) \tag{*} \end{equation}

where $\odot$ denotes the Hadamard product and $1_n \in \mathbb{R}^n$ denotes the vector of all ones.
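The construction (*) can be sketched directly: $\lambda \cdot e_i + x \odot (1_n - e_i)$ simply replaces the $i$-th coordinate of $x$ by $\lambda$. The function $f$ and the test point below are illustrative assumptions.

```python
import numpy as np

# Illustrative continuous function (an assumption, not from the text)
def f(x):
    return np.sin(x[0]) + x[0] * x[1]

def tilde_f(lam, x, i, n_dims=2):
    """Sketch of (*): restrict f along the i-th coordinate."""
    e_i = np.zeros(n_dims)
    e_i[i] = 1.0
    ones = np.ones(n_dims)  # the all-ones vector 1_n
    # lam * e_i + x ⊙ (1_n - e_i): replace the i-th coordinate of x by lam
    return f(lam * e_i + x * (ones - e_i))

x = np.array([0.5, 2.0])
# When lam equals the i-th coordinate of x, tilde_f agrees with f
print(tilde_f(0.5, x, i=0), f(x))
```

The `*` operator on NumPy arrays is elementwise, so it implements the Hadamard product $\odot$ directly.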

Now, due to the hypotheses on $f$, (2) is valid and so we may define the partial derivative with respect to $x_i$ at any point $\hat{x} \in \mathcal{M}$ using (*):

\begin{equation} \lim_{\lambda \to\hat{x_i}} \frac{\partial}{\partial \lambda} \lim_{x\to\hat{x}} \tilde{f}(\lambda,x)= \lim_{x \to \hat{x}} \frac{\partial f}{\partial x_i} \end{equation}

or simply,

\begin{equation} \lim_{\lambda \to\hat{x_i}} \frac{\partial \tilde{f}(\lambda,x=\hat{x})}{\partial \lambda} = \lim_{x \to \hat{x}} \frac{\partial f}{\partial x_i} \end{equation}

where $\tilde{f}(\lambda, x=\hat{x})$ is a function of the single variable $\lambda$.
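The final identity can be illustrated numerically: differentiating the single-variable function $\lambda \mapsto \tilde{f}(\lambda, \hat{x})$ recovers $\frac{\partial f}{\partial x_i}$ at $\hat{x}$. The function $f(x, y) = x y^2$, the point $\hat{x} = (3, 1.5)$, and the central difference below are illustrative assumptions.

```python
import numpy as np

# Illustrative function (an assumption): f(x, y) = x * y^2, so df/dy = 2 * x * y.
def f(x):
    return x[0] * x[1] ** 2

def tilde_f(lam, x_hat):
    # Freeze x = x_hat and vary only the 2nd coordinate through lam,
    # i.e. the i = 2 case of (*)
    return f(np.array([x_hat[0], lam]))

x_hat = np.array([3.0, 1.5])
h = 1e-6
# Central difference of the single-variable function tilde_f at lam = x_hat[1]
d_tilde = (tilde_f(x_hat[1] + h, x_hat) - tilde_f(x_hat[1] - h, x_hat)) / (2 * h)
print(d_tilde, 2 * x_hat[0] * x_hat[1])
```

Both printed values agree: the one-variable derivative of $\tilde{f}$ matches the exact partial derivative $2\hat{x}_1\hat{x}_2 = 9$ at the chosen point.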