
Section 4.1 Interior estimates

As often happens in PDEs, our proof of existence will hinge on establishing certain estimates for a hypothetical solution. Suppose that \(u\) solves (4.1). Applying the weak maximum principle as in the proof of Corollary 3.5, we discover that
\begin{equation*} \max_{\overline \Omega} \abs u = \max_{\partial \Omega} \abs u = \max_{\partial \Omega} \abs g. \end{equation*}
In this section we establish versions of the above inequality where partial derivatives of \(u\) appear on the left hand side. The price we will pay for adding these derivatives is that the maximum will be over subsets of \(\Omega\) rather than all of \(\overline\Omega\text{.}\)
There are many ways to proceed, but we will use an elegant and flexible argument due to Bernstein, which involves applying the maximum principle to a carefully chosen function made up from \(u\) and its gradient.

Proof.

Suppose that \(u\) satisfies the hypotheses of the lemma in a ball \(B_r(x_0)\text{,}\) and (as in Exercise 2.3.1 and Exercise 3.1.2) define a function \(v\) by
\begin{equation*} v(x)=u(x_0 + rx)\text{.} \end{equation*}
It is easy to check that \(v\) satisfies the hypotheses of the lemma on \(B_1(0)\text{.}\) Moreover, by the chain rule we have
\begin{equation*} \sup_{B_{r/2}(x_0)} \abs{\nabla u} = \frac 1r\sup_{B_{1/2}(0)} \abs{\nabla v}\text{.} \end{equation*}
Thus it is enough to prove the lemma for \(v\text{,}\) i.e. we may take \(x_0 = 0\) and \(r=1\) without loss of generality.
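For completeness, here is the chain rule computation behind the scaling identity above; writing it out also confirms that \(v\) is harmonic:
\begin{align*} \partial_i v(x) \amp = r\, (\partial_i u)(x_0+rx), \amp \Delta v(x) \amp = r^2\, (\Delta u)(x_0+rx) = 0\text{.} \end{align*}
In particular \(\abs{\nabla v(x)} = r \abs{\nabla u(x_0+rx)}\text{,}\) and \(x\) ranges over \(B_{1/2}(0)\) exactly when \(x_0+rx\) ranges over \(B_{r/2}(x_0)\text{,}\) which together give the identity between the two suprema.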
So suppose that \(r=1\) and \(x_0=0\text{.}\) We now fix a cutoff function \(\eta \in C^2(\overline{B_1(0)})\) with \(\eta \equiv 1\) on \(B_{1/2}(0)\) and \(\eta \equiv 0\) on \(\partial B_1(0)\text{,}\) and set
\begin{equation*} w = \eta^2 \abs{\nabla u}^2 + \alpha u^2, \end{equation*}
where \(\alpha \gt 0\) is a parameter to be determined. Since \(u\) is \(C^3\text{,}\) \(w\) is \(C^2\text{.}\) Our goal is to pick \(\alpha \gt 0\) so that \(\Delta w \ge 0\text{,}\) and then invoke the maximum principle.
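Such a cutoff function is easily constructed. While the proof only uses its existence, one admissible (and by no means unique) choice is the radial function
\begin{equation*} \eta(x) = \psi(\abs x^2), \qquad \psi(t) = \begin{cases} 1 \amp t \le \frac 14,\\ 1 - \big(\frac{4t-1}3\big)^3 \amp \frac 14 \le t \le 1. \end{cases} \end{equation*}
One checks that \(\psi'\) and \(\psi''\) both vanish as \(t \to \frac 14^+\text{,}\) so that \(\psi \in C^2([0,1])\) and hence \(\eta \in C^2(\overline{B_1(0)})\text{,}\) while clearly \(\eta \equiv 1\) on \(B_{1/2}(0)\) and \(\eta = 0\) on \(\partial B_1(0)\text{.}\)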
Repeatedly applying the product rule and grouping terms, we find
\begin{align} \Delta w \amp = \partial_i \partial_i [ \eta^2 (\partial_j u)(\partial_j u) + \alpha u^2 ]\notag\\ \amp = 2 \partial_i [ \eta\, \partial_i \eta\, \partial_j u\, \partial_j u + \eta^2 \partial_{ij} u\, \partial_j u + \alpha u\partial_i u ]\notag\\ \amp = 2 [ \partial_i \eta\, \partial_i \eta\, \partial_j u\, \partial_j u + \eta \, \partial_{ii} \eta\, \partial_j u\, \partial_j u + 4 \eta \, \partial_i \eta\, \partial_{ij} u\, \partial_j u \tag{4.3}\\ \amp \qquad + \eta^2 \, \partial_{iij} u\, \partial_j u + \eta^2 \, \partial_{ij} u\, \partial_{ij} u + \alpha \partial_i u \, \partial_i u + \alpha u\partial_{ii} u ].\notag \end{align}
Since \(\Delta u = \partial_{ii} u = 0\text{,}\) differentiating with respect to \(x_j\) implies that \(\partial_j \Delta u = \partial_{iij} u = 0\text{.}\) Substituting into (4.3) and again regrouping terms, we obtain
\begin{align} \Delta w \amp = 2\Big(\abs{\nabla \eta}^2 + \eta (\Delta \eta) + \alpha \Big) \abs{\nabla u}^2 + 8 \eta \, \partial_i \eta\, \partial_{ij} u\, \partial_j u + 2\eta^2 \abs{D^2 u}^2\text{,}\tag{4.4} \end{align}
where the matrix norm \(\abs{D^2 u}\) is the one from Definition 2.5. We now estimate the middle term using the Cauchy–Schwarz inequality, or more precisely the elementary inequality \(8ab \ge -2a^2-8b^2\) with \(a=\eta \partial_{ij} u\) and \(b=\partial_i \eta\, \partial_j u\text{.}\) This yields
\begin{align*} 8 \eta \, \partial_i \eta\, \partial_{ij} u\, \partial_j u \amp \ge -2\eta^2 \partial_{ij} u\, \partial_{ij} u -8\partial_i \eta\, \partial_i \eta\, \partial_j u\, \partial_j u\\ \amp = -2\eta^2 \abs{D^2 u}^2 -8\abs{\nabla \eta}^2 \abs{\nabla u}^2\text{.} \end{align*}
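The elementary inequality \(8ab \ge -2a^2 - 8b^2\) invoked here is just a completed square in disguise:
\begin{equation*} 2a^2 + 8ab + 8b^2 = 2(a+2b)^2 \ge 0\text{.} \end{equation*}
The coefficients are chosen precisely so that the resulting \(-2\eta^2\abs{D^2u}^2\) is exactly cancelled by the final term in (4.4).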
Inserting the above estimate into (4.4), we are finally left with
\begin{equation*} \Delta w \ge 2\Big(-3\abs{\nabla \eta}^2 + \eta (\Delta \eta) + \alpha\Big) \abs{\nabla u}^2\text{.} \end{equation*}
By choosing \(\alpha \gt 0\) sufficiently large (compared to \(\n{\eta}_{C^2}^2\)), we can guarantee that the first factor on the right hand side is non-negative, and hence that \(\Delta w \ge 0\) as desired.
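Concretely, one admissible (though far from optimal) choice is
\begin{equation*} \alpha = \sup_{\overline{B_1(0)}} \big( 3\abs{\nabla \eta}^2 + \abs{\eta\, \Delta \eta} \big)\text{,} \end{equation*}
since then \(-3\abs{\nabla\eta}^2 + \eta(\Delta \eta) + \alpha \ge 0\) pointwise on \(B_1(0)\text{.}\) Note that this \(\alpha\text{,}\) and hence the constant in (4.2), depends only on the fixed cutoff \(\eta\) and thus ultimately only on the dimension \(N\text{.}\)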
With this choice of \(\alpha\text{,}\) we can now apply the weak maximum principle to find
\begin{equation} \sup_{B_{1/2}(0)} w \le \sup_{B_{1}(0)} w = \sup_{\partial B_{1}(0)} w\text{.}\tag{4.5} \end{equation}
Since \(\eta \equiv 1\) on \(B_{1/2}(0)\) and \(\eta = 0\) on \(\partial B_1(0)\text{,}\) (4.5) implies
\begin{gather*} \sup_{B_{1/2}(0)} \abs{\nabla u}^2 \le \sup_{B_{1/2}(0)} (\abs{\nabla u}^2 + \alpha u^2) \le \sup_{\partial B_{1}(0)} \alpha u^2, \end{gather*}
so that taking square roots yields (4.2) with \(C=\sqrt \alpha\text{.}\)
Since partial derivatives of smooth harmonic functions are again harmonic functions, Lemma 4.2 can be used inductively to bound higher partials as well.
The assumptions in the above results that \(u\) is \(C^3\) or even \(C^4\) can in fact be relaxed, but our proof will use tools from Section 4.2. We state the result here for reference, but will be careful not to use it until it has been proved!
Next we use our estimates to prove a result on uniform convergence of sequences of harmonic functions.

Proof.

Suppose that \(\overline{B_r(x)}\subseteq \Omega\text{.}\) Then for any \(n\) and \(m\) we can apply Lemma 4.2 and Corollary 4.3 to the harmonic function \(f_n-f_m\) to obtain
\begin{align*} \n{f_n-f_m}_{C^2(B_{r/2}(x))} \amp \le C_1 \sup_{B_{r/2}(x)} \Big(\abs{f_n-f_m} + \abs{\nabla(f_n-f_m)} + \abs{D^2(f_n-f_m)}\Big)\\ \amp \le C_2 \sup_{B_r(x)} \abs{f_n-f_m} \to 0 \end{align*}
as \(n,m \to \infty\text{,}\) for some constants \(C_1,C_2\) depending only on \(N\) and \(r\text{.}\) Here we have used the uniform convergence of the sequence \(f_n\) on the compact subset \(\overline{B_r(x)}\text{.}\) Thus the sequence \(f_n\) is Cauchy in \(C^2(\overline{B_{r/2}(x)})\text{,}\) and hence convergent in this space by Theorem 2.20 (see Remark 2.21). Using the uniqueness of limits, we deduce that \(f_n \to f\) in \(C^2(\overline{B_{r/2}(x)})\text{.}\) Hence we have uniform convergence \(f_n \to f\text{,}\) \(\partial_i f_n \to \partial_i f\) and \(\partial_{ij} f_n \to \partial_{ij} f\) on \(\overline{B_{r/2}(x)}\text{.}\) Since \(\Delta f_n = \partial_{ii} f_n \equiv 0\text{,}\) this in turn implies that \(\Delta f = 0\) on \(\overline{B_{r/2}(x)}\text{.}\)
To see that \(\partial_i f_n \to \partial_i f\) and \(\partial_{ij} f_n \to \partial_{ij} f\) uniformly on compact subsets of \(\Omega\text{,}\) let \(K \subset \Omega\) be compact. Since \(\Omega\) is open, for each \(x \in \Omega\) we can find \(r_x \gt 0\) so that \(B_{4r_x}(x) \subset \Omega\) and hence \(\overline{B_{2r_x}(x)} \subset \Omega\text{.}\) Consider the open cover of \(K\) consisting of all balls \(B_{r_x}(x)\) with \(x \in K\text{.}\) By compactness, there is a finite list of such balls \(B_{r_{x_1}}(x_1),\ldots,B_{r_{x_n}}(x_n)\) which still covers \(K\text{,}\) and by the above argument \(\partial_i f_n \to \partial_i f\) and \(\partial_{ij} f_n \to \partial_{ij} f\) uniformly on each of these balls. Thus
\begin{align*} \sup_{x \in K} \abs{\partial_i f_n - \partial_i f} \amp \le \max\left\{ \sup_{x \in B_{r_{x_1}}(x_1)} \abs{\partial_i f_n - \partial_i f},\ldots, \sup_{x \in B_{r_{x_n}}(x_n)} \abs{\partial_i f_n - \partial_i f}\right\}\\ \amp \to 0 \end{align*}
where in the last step it is crucial that the list of balls is finite, and the same argument also works for \(\partial_{ij} f_n\text{.}\)

Proof.

Since \(f_n\) is uniformly bounded, we can find \(M \gt 0\) so that \(\abs{f_n(x)} \le M\) for all \(n\) and all \(x \in \Omega\text{.}\) We first claim that \(f_n\) is equicontinuous. To this end, let \(x_0 \in \Omega\) and pick \(r \gt 0\) such that \(\overline{B_r(x_0)} \subseteq \Omega\text{.}\) By Lemma 4.2, we have
\begin{equation*} \sup_{B_{r/2}(x_0)} \abs{\nabla f_n} \le \frac Cr \sup_{B_{r}(x_0)} \abs{f_n} \le \frac{CM}r \end{equation*}
for all \(n\text{.}\) Thus, by Exercise 2.8.2, the sequence \(f_n\) is equicontinuous at \(x_0\text{.}\) Since \(x_0\) was arbitrary, the sequence \(f_n\) is then equicontinuous on all of \(\Omega\text{.}\) Applying Theorem 2.46, we conclude that \(f_n\) has a subsequence which converges uniformly on compact sets to a function \(f \in C^0(\Omega)\text{.}\) The result then follows by applying Lemma 4.5 to this subsequence.
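Incidentally, the appeal to Exercise 2.8.2 above is in essence the mean value theorem: since \(B_{r/2}(x_0)\) is convex, the gradient bound gives
\begin{equation*} \abs{f_n(x) - f_n(y)} \le \Big( \sup_{B_{r/2}(x_0)} \abs{\nabla f_n} \Big) \abs{x-y} \le \frac{CM}r \abs{x-y} \end{equation*}
for all \(x,y \in B_{r/2}(x_0)\) and all \(n\text{,}\) so the \(f_n\) are uniformly Lipschitz, and in particular equicontinuous, near \(x_0\text{.}\)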

Exercises

1. (PS6) A maximum principle for the gradient.

Let \(\Omega\) be bounded, and suppose that \(u \in C^3(\Omega) \cap C^1(\overline\Omega)\) is harmonic. Show that \(v=\abs{\nabla u}^2\) satisfies \(\Delta v = 2\abs{D^2 u}^2 \ge 0\text{,}\) and therefore conclude that
\begin{equation*} \max_{\overline\Omega} \abs{\nabla u} = \max_{\partial \Omega} \abs{\nabla u}\text{.} \end{equation*}
Hint.
Write \(v = \partial_j u \partial_j u\) using the summation notation, and then differentiate twice using the product rule. As in the proof of Lemma 4.2, \(u\) being harmonic means that \(\partial_j \Delta u = \partial_{iij} u = 0\text{.}\)
As in the proof of Lemma 4.2, \(\abs{D^2 u}^2 = \partial_{ij} u\, \partial_{ij} u\) is the matrix norm from Definition 2.5.
Solution.
The assumption \(u \in C^3(\Omega) \cap C^1(\overline\Omega)\) guarantees that \(v \in C^2(\Omega) \cap C^0(\overline\Omega)\text{.}\) Differentiating repeatedly using the product rule, we have
\begin{align*} \Delta v \amp = \partial_i \partial_i [\partial_j u \partial_j u]\\ \amp = 2 \partial_i [\partial_{ij} u \partial_j u]\\ \amp = 2 [\partial_{iij} u \partial_j u + \partial_{ij} u \partial_{ij} u]. \end{align*}
Since \(u\) is harmonic, \(\partial_j \Delta u = 0\) and so the first term vanishes, leaving us with
\begin{equation*} \Delta v = 2 \partial_{ij} u \partial_{ij} u = 2\abs{D^2 u}^2 \ge 0 \end{equation*}
as desired.
Applying the weak maximum principle to \(v\) and the uniformly elliptic operator \(L=\Delta\text{,}\) we conclude that
\begin{equation*} \max_{\overline\Omega} \abs{\nabla u}^2 = \max_{\overline\Omega} v = \max_{\partial \Omega} v = \max_{\partial \Omega} \abs{\nabla u}^2\text{,} \end{equation*}
which yields the desired statement about \(\abs{\nabla u}\) after taking square roots.

2. (PS6) Interior estimate for second derivatives.

Hint.
The idea is to apply Lemma 4.2 to the \(C^3\) harmonic function \(v = \partial_i u\text{,}\) and then to \(u\) itself. Perhaps the trickiest part of the exercise is to get the various balls to nest correctly. One way to do this is to first fix a point \(x \in B_{r/2}(x_0)\text{,}\) and then consider the three balls
\begin{equation*} B_{r/8}(x) \subset B_{r/4}(x) \subset B_{r/2}(x) \subset B_r(x_0)\text{.} \end{equation*}
Solution.
Throughout this solution, \(C\) will denote the constant from Lemma 4.2, and not the constant in the desired inequality. Consider the function \(v = \partial_i u\text{.}\) Since \(u\) is \(C^4\) and harmonic, \(v\) is \(C^3\) and harmonic, and so we can apply Lemma 4.2 to \(v\) on any ball \(B \subseteq B_r(x_0)\text{.}\) Our goal is to estimate \(\nabla v\) on \(B_{r/2}(x_0)\text{,}\) and so following the hint we fix \(x \in B_{r/2}(x_0)\text{.}\) By the triangle inequality, we have \(B_{r/2}(x) \subset B_r(x_0)\text{.}\) Thus we can certainly apply Lemma 4.2 on the strictly smaller ball \(B_{r/4}(x)\) to obtain
\begin{equation*} \sup_{B_{r/8}(x)} \abs{\nabla v} \le \frac {4C}r \sup_{B_{r/4}(x)} \abs v\text{,} \end{equation*}
or, in terms of \(u\text{,}\)
\begin{equation*} \sup_{B_{r/8}(x)} \abs{\nabla\partial_i u} \le \frac {4C}r \sup_{B_{r/4}(x)} \abs {\partial_i u}\text{.} \end{equation*}
In particular, for any \(j\) we have
\begin{equation*} \abs{\partial_{ij} u(x)} \le \frac {4C}r \sup_{B_{r/4}(x)} \abs {\partial_i u}. \end{equation*}
To estimate the right hand side, we apply Lemma 4.2 to \(u\) on \(B_{r/2}(x)\) to get
\begin{gather*} \sup_{B_{r/4}(x)} \abs {\partial_i u} \le \sup_{B_{r/4}(x)} \abs {\nabla u} \le \frac {2C}r \sup_{B_{r/2}(x)} \abs {u}. \end{gather*}
Putting things together, we conclude that
\begin{gather*} \abs{\partial_{ij} u(x)} \le \frac {8C^2}{r^2} \sup_{B_{r/2}(x)} \abs u \le \frac {8C^2}{r^2} \sup_{B_r(x_0)} \abs u\text{.} \end{gather*}
Since the above inequality is true for any \(i,j\) and any \(x \in B_{r/2}(x_0)\text{,}\) the desired inequality follows by taking a supremum. Indeed – and this is perhaps more detail than is really needed – we have
\begin{gather*} \sup_{B_{r/2}(x_0)} \abs{D^2 u} \le N \max_{i,j} \sup_{B_{r/2}(x_0)} \abs{\partial_{ij} u} \le \frac {8NC^2}{r^2} \sup_{B_r(x_0)} \abs u\text{.} \end{gather*}
Comment 1.
With these sorts of arguments one can replace the ball \(B_r(x_0)\) on the left hand sides of the inequalities in Lemma 4.2 and Corollary 4.3 with \(B_{\theta r}(x_0)\) for any \(\theta \in (0,1)\text{,}\) with the caveat that the constant \(C=C(N,\theta)\) now depends on \(\theta\) as well as the dimension \(N\text{.}\) One can also prove such a result directly by choosing the ‘cutoff function’ \(\eta\) in the proof of Lemma 4.2 differently.
Comment 2.
In this problem we need to pass back and forth between estimates for vector and matrix norms and estimates for their components. Since we are not particularly fussed about getting the best possible constants, it is enough to use the basic inequalities
\begin{alignat*}{2} \max_i \abs{a_i} \le \abs a \amp \le \sqrt N \max_i \abs{a_i} \amp \qquad \amp\fora a \in \R^N,\\ \max_{i,j} \abs{A_{ij}} \le \abs A \amp \le N \max_{i,j} \abs{A_{ij}} \amp \qquad \amp\fora A \in \R^{N \times N}. \end{alignat*}
Let me know if you have any questions about these inequalities or how to prove them.
Comment 3.
In the factor \(C/r\) on the right hand side of (4.2), \(r\) refers to the radius of the ball \(B_r(x_0)\) where Lemma 4.2 is being applied. So when we apply this result on balls with radius \(r/2\) and \(r/4\) in the argument above, the factors that appear are \(2C/r\) and \(4C/r\text{,}\) respectively, rather than just \(C/r\) each time.
Comment 4.
Note that Lemma 4.2 as written applies to a single harmonic function, not a vector or matrix whose components are each harmonic functions. In terms of symbols, Lemma 4.2 requires \(u \in C^3(\overline B)\) and does not allow, say, for \(u\) to instead lie in \(C^3(\overline B,\R^N)\text{;}\) see Notation 2.19. But we can get around this quite easily by, e.g., separately considering each of the components \(\partial_i u\) of the gradient \(\nabla u\) in the official solution.

3. Interior estimate for higher derivatives.

Let \(k \in \N\text{,}\) and suppose that \(u \in C^{k+2}(\overline{B_r(x_0)})\) is harmonic in \(B_r(x_0)\text{.}\) Using induction, show that there is a constant \(C=C(k,N)\) such that
\begin{equation*} \sup_{B_{r/2}(x_0)} \abs{\partial^\alpha u} \le \frac{C(k,N)}{r^k} \sup_{B_r(x_0)} \abs u \end{equation*}
for any multiindex \(\alpha\) of order \(\abs \alpha = k\text{.}\)