## General idea:

Let’s suppose we are given $f:\mathcal{M} \rightarrow \mathbb{R}$ where $\mathcal{M}$ is a compact subset of $\mathbb{R}^n$ and $\forall i \in [1,n], \frac{\partial{f}}{\partial{x_i}}$ is continuous. Now, instead of computing partial derivatives of this function of several variables, we would like to compute equivalent ordinary derivatives of $n$ functions of a single variable. How should we proceed?

We note that if $e_i$ denotes the $i$-th standard basis vector, we may define:

\begin{equation} \frac{\partial{f}}{\partial{x_i}} = \lim_{n \to \infty} n \cdot \big(f(x+\frac{1}{n}\cdot e_i)-f(x)\big) = \lim_{n \to \infty} f_n^i(x) \end{equation}
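As a quick sanity check, the difference quotient in (1) can be evaluated numerically. Below is a minimal sketch; the test function $f(x) = x_0^2 + \sin x_1$ is an illustrative assumption, not taken from the text:

```python
# Numerical sketch of equation (1): the forward-difference quotient
# f_n^i(x) = n * (f(x + e_i/n) - f(x)) approaches df/dx_i as n grows.
# The test function below is an illustrative assumption.
import math

def f(x):
    return x[0] ** 2 + math.sin(x[1])

def forward_difference(f, x, i, n):
    """f_n^i(x): step 1/n along the i-th standard basis vector e_i."""
    y = list(x)
    y[i] += 1.0 / n
    return n * (f(y) - f(x))

x = (1.0, 0.5)
exact = 2 * x[0]  # df/dx_0 = 2 * x_0 for this f
for n in (10, 100, 1000):
    print(n, abs(forward_difference(f, x, 0, n) - exact))
```

For this $f$ the error shrinks like $\frac{1}{n}$, consistent with the limit in (1).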

This allows us to introduce the following equivalence:

\begin{equation} \lim_{x_{j \neq i} \to c_j} \frac{\partial f}{\partial x_i} = \frac{\partial}{\partial x_i} \lim_{x_{j \neq i} \to c_j} f \equiv \lim_{x_{j \neq i} \to c_j} \lim_{n \to \infty} f_n^i = \lim_{n \to \infty} \lim_{x_{j \neq i} \to c_j} f_n^i \end{equation}

and we can show that these limits are interchangeable due to the Moore-Osgood theorem since:

\begin{equation} \forall n \in \mathbb{N}, \lim_{x_{j \neq i} \to c_j} f_n^i(x) \end{equation}

exists due to the assumption that $f$ is continuous (continuity of all the partial derivatives implies that $f$ is differentiable, hence continuous), and if we define $g_i :=\frac{\partial f}{\partial x_i}$ we can show that:

\begin{equation} \lim_{n \to \infty} f_n^i = g_i \end{equation}

uniformly. As (4) may not be completely obvious, it warrants a demonstration; in fact, the definition that interests us depends on the correctness of this proof.

## Proof of uniform convergence:

By the Heine-Cantor theorem, since $\mathcal{M}$ is compact and $g_i$ is assumed to be continuous, $g_i$ is uniformly continuous: $\forall \epsilon > 0 \, \exists \delta > 0$ such that $\lvert g_i(y)-g_i(x) \rvert < \epsilon$ whenever $\lVert y-x \rVert < \delta$. Moreover, by the Mean Value Theorem, for every $x \in \mathcal{M}$ and $n \in \mathbb{N}$:

\begin{equation} \exists \alpha \in (0, \tfrac{1}{n}), \ f_n^i(x) = \frac{f(x+\frac{1}{n}\cdot e_i)-f(x)}{\frac{1}{n}} = g_i(x+\alpha \cdot e_i) \end{equation}

Now choose $N \in \mathbb{N}$ with $\frac{1}{N} < \delta$. Then $\forall n \geq N \, \forall x \in \mathcal{M}$:

\begin{equation} \lvert f_n^i(x)-g_i(x) \rvert = \lvert g_i(x+\alpha \cdot e_i) - g_i(x) \rvert < \epsilon \end{equation}

since $\lVert \alpha \cdot e_i \rVert < \frac{1}{n} \leq \frac{1}{N} < \delta$. As $N$ does not depend on $x$, the convergence is uniform, and this concludes our proof.
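The uniform convergence just proved can also be observed numerically: on a grid covering a compact set, the worst-case gap $\sup_x \lvert f_n^i(x) - g_i(x) \rvert$ shrinks as $n$ grows. The choices of $f$, the set $\mathcal{M} = [0,1]^2$, and the grid below are illustrative assumptions:

```python
# Numerical check of uniform convergence: the supremum over a grid on
# M = [0, 1]^2 of |f_n^0(x) - g_0(x)| shrinks as n grows. The test
# function f and the grid resolution are illustrative assumptions.
import math

def f(x):
    return math.exp(x[0]) * x[1]

def g0(x):
    """g_0 = df/dx_0, known in closed form for this f."""
    return math.exp(x[0]) * x[1]

def sup_gap(n, steps=20):
    """Max over grid points of |f_n^0(x) - g_0(x)|."""
    gap = 0.0
    for a in range(steps + 1):
        for b in range(steps + 1):
            x = (a / steps, b / steps)
            fn = n * (f((x[0] + 1 / n, x[1])) - f(x))
            gap = max(gap, abs(fn - g0(x)))
    return gap

print([round(sup_gap(n), 4) for n in (10, 100, 1000)])
```

Because $N$ in the proof does not depend on $x$, the printed worst-case gaps decrease with $n$ just as the pointwise errors do.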

## Definition:

Given the following extrema:

\begin{equation} m=\min_{x\in \mathcal{M}} \langle x,e_i \rangle \end{equation}

\begin{equation} M=\max_{x\in \mathcal{M}} \langle x,e_i \rangle \end{equation}

(note that $\langle x,e_i \rangle = x_i$, so $[m,M]$ is the range of the $i$-th coordinate over $\mathcal{M}$)

we may define:

\begin{equation} \forall \lambda \in [m,M] \forall x \in \mathcal{M}, \tilde{f}(\lambda, x) = f(\lambda\cdot e_i + x \odot(1_n - e_i)) \tag{*} \end{equation}

where $\odot$ denotes the Hadamard product and $1_n$ denotes the all-ones vector in $\mathbb{R}^n$.
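The construction in (*) freezes every coordinate of $x$ except the $i$-th, which is replaced by $\lambda$. A minimal sketch, in which the test function $f$ is an illustrative assumption:

```python
# Sketch of definition (*): f~(lambda, x) = f(lambda * e_i + x ⊙ (1_n - e_i)),
# where ⊙ is the Hadamard (elementwise) product. The test function f
# is an illustrative assumption.

def hadamard(u, v):
    """Elementwise (Hadamard) product of two vectors."""
    return [a * b for a, b in zip(u, v)]

def f(x):
    return sum(c ** 2 for c in x)

def f_tilde(f, lam, x, i):
    """f~(lambda, x): replace the i-th coordinate of x by lambda."""
    n = len(x)
    e_i = [1.0 if j == i else 0.0 for j in range(n)]
    ones = [1.0] * n
    frozen = hadamard(x, [o - e for o, e in zip(ones, e_i)])
    return f([lam * e + c for e, c in zip(e_i, frozen)])

x = [1.0, 2.0, 3.0]
print(f_tilde(f, 5.0, x, 1))  # f([1, 5, 3]) = 1 + 25 + 9 = 35.0
```

Setting $\lambda = x_i$ recovers $f(x)$ itself, which is what makes the restriction in the next equations consistent with $f$.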

Now, due to the hypotheses on $f$, (2) is valid, and so for all $i \in [1,n]$ we may express the partial derivatives of $f$ using (*):

\begin{equation} \lim_{\lambda \to\hat{x_i}} \frac{\partial}{\partial \lambda} \lim_{x\to\hat{x}} \tilde{f}(\lambda,x)= \lim_{x \to \hat{x}} \frac{\partial f}{\partial x_i} \end{equation}

or simply,

\begin{equation} \lim_{\lambda \to\hat{x_i}} \frac{\partial \tilde{f}(\lambda,x=\hat{x})}{\partial \lambda} = \lim_{x \to \hat{x}} \frac{\partial f}{\partial x_i} \end{equation}

where $\tilde{f}(\lambda,x=\hat{x})$ is a function of a single variable.
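This final identity can be checked numerically: the ordinary derivative of the single-variable function $\lambda \mapsto \tilde{f}(\lambda, x=\hat{x})$ at $\lambda = \hat{x}_i$ agrees with $\frac{\partial f}{\partial x_i}$ at $\hat{x}$. The function $f$ and the point $\hat{x}$ below are illustrative assumptions:

```python
# Numerical sketch of the final identity: the ordinary derivative of
# lambda -> f~(lambda, x = x_hat) at lambda = x_hat_i matches df/dx_i
# at x_hat. The test function f and the point x_hat are assumptions.
import math

def f(x):
    return x[0] * math.cos(x[1])

def restriction(lam, x_hat, i):
    """f~(lambda, x = x_hat): a function of the single variable lambda."""
    y = list(x_hat)
    y[i] = lam
    return f(y)

def derivative(h, t, eps=1e-6):
    """Central-difference derivative of a single-variable function h at t."""
    return (h(t + eps) - h(t - eps)) / (2 * eps)

x_hat = [2.0, 0.3]
d_lambda = derivative(lambda t: restriction(t, x_hat, 1), x_hat[1])
exact = -x_hat[0] * math.sin(x_hat[1])  # df/dx_1 = -x_0 * sin(x_1)
print(abs(d_lambda - exact))
```

The printed gap is tiny, illustrating that the $n$ single-variable restrictions carry the same derivative information as the original multivariable $f$.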