
Section 2.9 Integration and vector calculus

Next we recall a few basic facts about integration and vector calculus (MA20223). We will only need these facts for balls, and so we specialise to this case in order to simplify the presentation.

Definition 2.47.

The boundary \(\partial B_r(x_0)\) of a ball \(B_r(x_0) \subseteq \R^N\) is called a sphere. The (outward) unit normal at a point \(x \in \partial B_r(x_0)\) is the unit vector
\begin{equation*} n = \frac{x-x_0}{\abs{x-x_0}}. \end{equation*}
The normal derivative of a function \(u \in C^1(\overline{B_r(x_0)})\) at \(x\) is the directional derivative
\begin{equation*} \frac{\partial u}{\partial n}(x) = n \cdot \nabla u(x) \text{.} \end{equation*}
We will use the letter \(n\) both for such unit normal vectors and as a typical index for sequences \(x_n\) of points or \(f_n\) of functions. In the very few instances where the two notations overlap, there is typically no chance of confusion.
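As a quick numerical sanity check of Definition 2.47, one can compute the normal derivative for a concrete (hypothetical) choice of function: for \(u(x) = \abs{x-x_0}^2\) we have \(\nabla u(x) = 2(x-x_0)\), so on the sphere \(\partial B_r(x_0)\) the normal derivative should be \(n \cdot 2(x-x_0) = 2r\) at every boundary point.

```python
import numpy as np

# Hypothetical example for Definition 2.47: u(x) = |x - x0|^2 has
# gradient 2(x - x0), so du/dn = n . 2(x - x0) = 2r on the sphere.
x0 = np.array([1.0, -2.0])
r = 3.0

def grad_u(x):
    return 2 * (x - x0)

# pick a point on the sphere and form the outward unit normal
theta = 0.7
x = x0 + r * np.array([np.cos(theta), np.sin(theta)])
n = (x - x0) / np.linalg.norm(x - x0)

du_dn = n @ grad_u(x)
print(du_dn)  # ≈ 2r = 6.0
```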

Proof.

Assume for the sake of contradiction that \(\partial u/\partial n \lt 0\) at \(p\text{,}\) and consider the single-variable function \(g(t) = u(p-tn)\text{.}\) Since \(u \in C^1(\overline B)\text{,}\) we can check that \(g\) is \(C^1\) on some small interval \((0,\varepsilon)\text{,}\) and that the one-sided derivative
\begin{equation*} g'(0) = \lim_{t \searrow 0} g'(t) = -\frac{\partial u}{\partial n}(p) \gt 0\text{.} \end{equation*}
By continuity, \(g'(t) \gt 0\) for \(t \gt 0\) sufficiently small, and so by the mean value theorem we have
\begin{equation*} u(p-tn) = g(t) \gt g(0) = u(p) \end{equation*}
for \(t \gt 0\) sufficiently small, contradicting our assumption that \(u\) is maximised at \(p\text{.}\)
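The key identity in this proof, \(g'(0) = -\partial u/\partial n(p)\), can be illustrated numerically with a (hypothetical) example: \(u(x) = x_1\) on the closed unit ball, which is maximised on the boundary at \(p = (1,0)\) with outward normal \(n = p\).

```python
import numpy as np

# Illustrate g'(0) = -du/dn(p) for the hypothetical example u(x) = x_1
# on the closed unit ball, maximised at p = (1, 0) with normal n = p.
p = np.array([1.0, 0.0])
n = p.copy()

def u(x):
    return x[0]

du_dn = 1.0  # n . grad u = (1, 0) . (1, 0)

# one-sided finite difference for g(t) = u(p - t n) at t = 0
h = 1e-6
g_prime_0 = (u(p - h * n) - u(p)) / h
print(g_prime_0)  # ≈ -du/dn = -1
```

Consistent with the proof, \(g'(0) = -1 \lt 0\) here, matching \(\partial u/\partial n(p) = 1 \geq 0\) at the boundary maximum.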

Proof.

See MA20223 for \(N=2,3\text{.}\) The proof for general \(N\) is similar.

Example 2.50.

Consider the vector field \(F \maps \R^2 \to \R^2\) defined by \(F(x)=(1+x_2,x_1^2 x_2)\text{,}\) and let \(B=B_1(0)\) be the unit ball centred at the origin. Then
\begin{equation*} \nabla \cdot F = \partial_1 (1+x_2) + \partial_2 (x_1^2 x_2) = x_1^2 \end{equation*}
and so one side of (2.7) is
\begin{equation*} \int_B x_1^2\, dx = \int_0^{2\pi} \int_0^1 r^2 \cos^2 \theta \, r\, dr\, d\theta = \frac \pi 4, \end{equation*}
while the other side of (2.7) is
\begin{align*} \int_{\partial B} (1+x_2,x_1^2 x_2) \cdot (x_1,x_2)\, dS \amp = \int_{\partial B} (x_1+x_1 x_2 + x_1^2 x_2^2)\, dS\\ \amp = \int_0^{2\pi} (\cos \theta + \cos\theta \sin \theta + \cos^2 \theta \sin^2 \theta )\, d\theta\\ \amp = \frac \pi 4\text{.} \end{align*}
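The two integrals in Example 2.50 can be checked numerically, for instance with a midpoint-rule quadrature in polar coordinates (a sketch, not part of the notes):

```python
import numpy as np

# Numerically verify Example 2.50: both sides of the divergence theorem
# for F(x) = (1 + x2, x1^2 x2) on the unit ball should equal pi/4.
m = 4000
theta = (np.arange(m) + 0.5) * 2 * np.pi / m  # midpoints on [0, 2*pi]
dtheta = 2 * np.pi / m
r = (np.arange(m) + 0.5) / m                  # midpoints on [0, 1]
dr = 1.0 / m

# volume integral of div F = x1^2, in polar coordinates r^3 cos^2(theta);
# the integrand is separable, so the double sum factors
volume_integral = (np.sum(r**3) * dr) * (np.sum(np.cos(theta)**2) * dtheta)

# surface integral of F . n, with n = (cos theta, sin theta) on the circle
x1, x2 = np.cos(theta), np.sin(theta)
surface_integral = np.sum((1 + x2) * x1 + x1**2 * x2**2) * dtheta

print(volume_integral, surface_integral)  # both ≈ pi/4 ≈ 0.7854
```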

Proof.

See MA20223 for \(N=2,3\text{.}\) The general case can be proved by introducing spherical polar coordinates in \(\R^N\text{,}\) or else by using the more general ‘co-area formula’.

Exercises

1. Divergence theorem and the Laplacian.

Let \(f \in C^0(\R^N)\) and suppose that \(u \in C^2(\R^N)\) satisfies
\begin{gather} \int_{\partial B} \frac{\partial u}{\partial n}\, dS = \int_B f\, dx \tag{✶} \end{gather}
for all balls \(B \subset \R^N\text{.}\) Conclude that \(\Delta u = f \ina \R^N \text{.}\)
Hint.
First apply the divergence theorem to \(\nabla u\text{.}\) Then assume for the sake of contradiction that there exists a point \(x \in \R^N\) where \(\Delta u - f \gt 0\text{,}\) say. Now consider \(B=B_r(x)\) for small \(r \gt 0\text{,}\) and use the continuity of \(\Delta u - f\text{.}\)
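Before attempting the exercise, it may help to see the identity (✶) in action for a hypothetical example where everything is explicit: \(u(x) = x_1^2 + 2x_2^2\) in \(\R^2\), so \(\Delta u = f = 6\), on balls \(B_r(0)\).

```python
import numpy as np

# Check (✶) for the hypothetical example u(x) = x1^2 + 2 x2^2,
# whose Laplacian is f = 6, on balls B_r(0) in R^2.
m = 4000
theta = (np.arange(m) + 0.5) * 2 * np.pi / m
dtheta = 2 * np.pi / m

for r in [0.5, 1.0, 2.0]:
    x1, x2 = r * np.cos(theta), r * np.sin(theta)
    # du/dn = (x / r) . (2 x1, 4 x2) on the sphere of radius r
    du_dn = (2 * x1**2 + 4 * x2**2) / r
    lhs = np.sum(du_dn) * r * dtheta  # surface integral, dS = r dtheta
    rhs = 6 * np.pi * r**2            # integral of f = 6 over B_r(0)
    print(r, lhs, rhs)                # lhs ≈ rhs for every radius
```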

2. Dirichlet principle.

Let \(B\) be a ball, \(u \in C^2(\overline B)\) and \(f \in C^0(\overline B)\text{.}\) Suppose that \(v \in C^2(\overline B)\) vanishes on \(\partial B\text{.}\)
(a)
Show that
\begin{equation*} \int_B v\Delta u \, dx = -\int_B \nabla u \cdot \nabla v \, dx \text{.} \end{equation*}
Hint.
Apply the divergence theorem to \(v\nabla u\text{.}\)
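One can also sanity-check the identity in (a) numerically for hypothetical choices on the unit ball: \(u = x_1^2\) (so \(\Delta u = 2\)) and \(v = 1 - x_1^2 - x_2^2\), which vanishes on \(\partial B\).

```python
import numpy as np

# Numerical check of the integration-by-parts identity on the unit ball
# for u = x1^2 (Delta u = 2) and v = 1 - x1^2 - x2^2 (zero on dB).
m = 1000
r = (np.arange(m) + 0.5) / m
theta = (np.arange(m) + 0.5) * 2 * np.pi / m
dA = (1.0 / m) * (2 * np.pi / m)
R, T = np.meshgrid(r, theta)
X1, X2 = R * np.cos(T), R * np.sin(T)

v = 1 - X1**2 - X2**2
lhs = np.sum(v * 2 * R) * dA         # int_B v * Delta u (Jacobian r)

# grad u = (2 x1, 0), grad v = (-2 x1, -2 x2)
grad_dot = 2 * X1 * (-2 * X1)
rhs = -np.sum(grad_dot * R) * dA     # -int_B grad u . grad v

print(lhs, rhs)  # both ≈ pi
```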
(b)
For \(w \in C^2(\overline B)\text{,}\) define
\begin{equation*} I(w) = \int_B \Big(\frac 12 \abs{\nabla w}^2 + wf\Big)\, dx\text{.} \end{equation*}
Show that
\begin{equation*} \frac d{d\varepsilon} I(u+\varepsilon v)\bigg|_{\varepsilon=0} = \int_B (\nabla u \cdot \nabla v + v f)\, dx = \int_B (-\Delta u + f)v\, dx\text{.} \end{equation*}
Hint.
Expanding things out, \(I(u+\varepsilon v)\) is a quadratic polynomial in \(\varepsilon\text{.}\)
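To see the hint in action, one possible expansion (a sketch of the computation, using \(\abs{\nabla(u+\varepsilon v)}^2 = \abs{\nabla u}^2 + 2\varepsilon\, \nabla u \cdot \nabla v + \varepsilon^2 \abs{\nabla v}^2\)) is

```latex
I(u+\varepsilon v)
  = I(u)
  + \varepsilon \int_B (\nabla u \cdot \nabla v + vf)\, dx
  + \frac{\varepsilon^2}{2} \int_B \abs{\nabla v}^2\, dx,
```

so the derivative at \(\varepsilon = 0\) is the coefficient of \(\varepsilon\text{.}\)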
(c)
Conclude that if \(u\) solves the PDE \(\Delta u = f \ina B \text{,}\) then
\begin{equation*} \frac d{d\varepsilon} I(u+\varepsilon v) \bigg|_{\varepsilon=0}= 0 \end{equation*}
for all \(v \in C^2(\overline B)\) vanishing on \(\partial B\text{.}\)
(d)
(Optional) Show that the reverse implication is also true.

3. Non-constant coefficients.

Let \(a \in C^1(\R^N,\R^{N \times N})\) be symmetric, i.e. \(a_{ij}(x)=a_{ji}(x)\) for all \(x \in \R^N\text{.}\)
(a)
Suppose that the integral equation in Exercise 2.9.1 is replaced by
\begin{equation*} \int_{\partial B} (a\nabla u) \cdot n\, dS = \int_{\partial B} a_{ij} \partial_i u\, n_j\, dS = \int_B f\, dx \text{.} \end{equation*}
Show that the corresponding PDE is
\begin{gather} \nabla \cdot (a\nabla u) = \partial_i(a_{ij}\partial_j u) = f\text{.}\tag{†} \end{gather}
(b)
Suppose in Exercise 2.9.2 that \(I(w)\) is replaced by
\begin{equation*} I(w) = \int_B \Big(\frac 12 (a\nabla w) \cdot \nabla w + wf\Big)\, dx = \int_B \Big(\frac 12 a_{ij} \partial_i w \partial_j w + wf\Big)\, dx\text{.} \end{equation*}
Show that the corresponding PDE is again (†).
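The divergence-theorem step behind part (a) can be checked numerically for a hypothetical symmetric coefficient matrix: with \(a(x) = \operatorname{diag}(1+x_2^2,\, 2)\) and \(u = x_1^2\) on the unit ball, the flux of \(a\nabla u\) through \(\partial B\) should match the integral of \(\nabla \cdot (a\nabla u)\) over \(B\).

```python
import numpy as np

# Check the flux identity for the hypothetical choices
# a(x) = diag(1 + x2^2, 2) and u = x1^2 on the unit ball.
m = 1000
theta = (np.arange(m) + 0.5) * 2 * np.pi / m
dtheta = 2 * np.pi / m
x1, x2 = np.cos(theta), np.sin(theta)

# a grad u = ((1 + x2^2) * 2 x1, 0); n = (x1, x2) on the unit circle
flux = np.sum((1 + x2**2) * 2 * x1 * x1) * dtheta

# div(a grad u) = d/dx1 [(1 + x2^2) * 2 x1] = 2 (1 + x2^2)
r = (np.arange(m) + 0.5) / m
dA = (1.0 / m) * dtheta
R, T = np.meshgrid(r, theta)
volume = np.sum(2 * (1 + (R * np.sin(T))**2) * R) * dA

print(flux, volume)  # both ≈ 5*pi/2
```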

4. Gradients and spheres.

Let \(B = B_r(x_0) \subset \R^N\text{,}\) and let \(u \in C^1(\overline B)\text{.}\)
(a)
Suppose that \(u\) is constant on \(\partial B\text{.}\) Show that
\begin{gather} \nabla u = \frac{\partial u}{\partial n} n \tag{✶} \end{gather}
at every point on \(\partial B\text{.}\)
Solution.
Fix \(p \in \partial B\text{.}\) To show that (✶) holds at \(p\text{,}\) it suffices to show that
\begin{gather} e \cdot \nabla u (p) = \frac{\partial u}{\partial n} e \cdot n\tag{†} \end{gather}
for all unit vectors \(e \in \R^N\text{.}\) Clearly (†) holds when \(e=n\text{,}\) and so, by basic linear algebra, it is enough to show that (†) holds when \(e\) is perpendicular to \(n\text{,}\) in which case the right hand side is zero.
So let \(e\) be a unit vector perpendicular to \(n\text{.}\) An easy calculation shows that
\begin{gather*} x_0 + \cos(\theta) rn + \sin(\theta) re \end{gather*}
lies on \(\partial B\) for any \(\theta \in \R\text{;}\) this is a parametrization of a circle. By the assumption on \(u\text{,}\) we therefore have that
\begin{equation*} \theta \mapsto u(x_0 + r(\cos(\theta) n + \sin(\theta) e)) \end{equation*}
is a constant function. Differentiating using the chain rule, we deduce that
\begin{align*} 0 \amp = \frac d{d\theta}u(x_0 + r(\cos(\theta) n + \sin(\theta) e))\\ \amp = \nabla u(x_0 + r(\cos(\theta) n + \sin(\theta) e)) \cdot r(-\sin(\theta) n + \cos(\theta) e)\text{.} \end{align*}
In particular, at \(\theta = 0\) we find
\begin{gather*} 0 = \nabla u(x_0 + rn) \cdot re = r (\nabla u(p) \cdot e)\text{,} \end{gather*}
and hence \(\nabla u(p) \cdot e = 0\) as desired.
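The conclusion of part (a) can be illustrated with a hypothetical example: \(u(x) = \abs{x}^2\) is constant (equal to \(r^2\)) on \(\partial B_r(0)\), and there \(\nabla u = 2x = 2rn\) while \(\partial u/\partial n = 2r\), so \(\nabla u\) and \((\partial u/\partial n)\, n\) agree at every boundary point.

```python
import numpy as np

# Illustrate (✶) for u(x) = |x|^2, which is constant on dB_r(0):
# there grad u = 2x = 2r n and du/dn = 2r, so grad u = (du/dn) n.
r = 1.5
rng = np.random.default_rng(0)
for t in rng.uniform(0, 2 * np.pi, 5):
    x = r * np.array([np.cos(t), np.sin(t)])
    n = x / r
    grad_u = 2 * x
    du_dn = n @ grad_u
    print(np.allclose(grad_u, du_dn * n))  # True at each sampled point
```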
(b)
Suppose that \(u\) achieves its maximum (or minimum) over \(\overline B\) at a point \(p \in \partial B\text{.}\) Show that (✶) holds at \(p\text{.}\)
Solution.
We argue as in the previous part. As \(u\) is maximised at \(p = x_0 + rn\text{,}\) we know that the function
\begin{equation*} \theta \mapsto u(x_0 + r(\cos(\theta) n + \sin(\theta) e)) \end{equation*}
is maximised at \(\theta = 0\text{.}\) In particular, its derivative at \(\theta = 0\) must vanish, which is all we need for our argument above to work.