# A Promised Proof

I wrote about an odd symmetry I found while developing a method of integration by change of bounds, and I promised that I would demonstrate the proof.

**So, here:**

If we integrate a function, we are finding the *area under that curve*. So, when the area under one function between two points *equals* the area under another function between two different points, their integrals are equal. Sure, the functions may be totally different, but if the areas are always equal, then we have a general equality between the integrals of the two functions.

This equality depends upon the starting and ending points of the integrals. As you move the start or stop of the first function, the start and stop of the second function must move as well. There is a **new** function, which takes the start and stop points of the first function and outputs the start and stop points of the second. Let’s call the first function *f(x)*, the second function *g(x)*, and the third function *h(x)*, which maps the start and stop points of f(x) to those of g(x). For start and stop points called *a* and *b*, in mathy lingo, “the integral of f(x) from a to b equals the integral of g(x) from h(a) to h(b).”
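To make the claim concrete, here is a quick numerical check of one instance I chose purely for illustration: f(x)=2x, g(x)=1, and h(x)=x², for which both areas come out to b²−a².

```python
# A numerical check of the claim: the integral of f from a to b equals
# the integral of g from h(a) to h(b). The example functions below
# (f(x)=2x, g(x)=1, h(x)=x^2) are my own choice, for illustration only.

def integrate(fn, a, b, n=100_000):
    """Midpoint Riemann sum -- accurate enough for a sanity check."""
    w = (b - a) / n
    return sum(fn(a + (i + 0.5) * w) for i in range(n)) * w

f = lambda x: 2 * x
g = lambda x: 1.0
h = lambda x: x * x

a, b = 1.0, 3.0
lhs = integrate(f, a, b)        # area under f from a to b
rhs = integrate(g, h(a), h(b))  # area under g from h(a) to h(b)
print(lhs, rhs)                 # both come out to b^2 - a^2 = 8
```

Two very different curves, one a slope and one a flat line, yet the mapped bounds make their areas agree exactly.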

**So, if we’ve chosen two functions, f(x) and g(x), what is the function that maps their start and stop points, h(x)?**

This is the core of the problem. h(x), which supplies the bounds of the integral of g(x), must be found regardless of what the functions f(x) and g(x) might be. We need a general solution, not just a solution for a particular change of function. f(x) and g(x) might be quadratic functions, exponentials, even trigonometric functions. We want to solve every combination at once. Yet, we need to start somewhere, so let’s consider two *linear* functions, f(x)=x and g(x)=(1/2)x.

In this case, f(x) and g(x) are straight lines, and f(x) is twice as *steep* as g(x). Let’s look at some particular x=i as the starting point, and imagine a stopping point just a tiny bit further, at x=i+e. The integral, the area under the line f(x), from x=i to x=i+e, must equal the area under g(x) from the starting point x=i to some slightly more distant stopping point, x=i+e+j. That stopping point is h(i+e), the output of the mapping function we are searching for; the extra distance j is how much farther g(x) must integrate to equal the integral of f(x).

For that little slice, from x=i to x=i+e, f(x) is *nearly* constant. A straight line doesn’t change height much when you move a small enough distance to the side. Same goes for g(x). Yet, g(x) is at a *different* height than f(x). Their ratio, f(x)/g(h(x)), expresses how *much* higher f(x) is. And, that’s what we need, to find h(x).

If f(x) is twice as tall as g(x), then g(x) will need to be integrated for twice as far. Imagine f(2), the height of f(x) when x=2. Because f(x)=x, this height is 2. At the same point, when x=2, g(2)=1, because g(x)=(1/2)x. So, the area under f(x) accumulates *twice as quickly* as it does under g(x). g(x) must travel *twice as far* to catch up! (And, if the two functions had to travel in the same amount of *time*, then g(x) would have to travel *twice as fast*.)

This means that h(x), the function we are looking for, has a *rate of change* that equals the ratio between the two functions! Because the stopping point of the first function is just x, while the stopping point of the second function is h(x), in mathese, we have “the derivative of h(x) equals the ratio f(x)/g(h(x)).” At every point, the ‘speed’ of g(x)’s integration is given by the ratio of heights between the two functions, measured at that point. This doesn’t really answer the question, though. Sure, h’(x), the derivative of h(x), will need to equal the ratio f(x)/g(h(x)) at every point, but *what function does that*? Once we find h(x), we can confirm that its derivative equals that ratio. But, how do we derive h(x)?
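One way to see that this rate-of-change relation really pins down h(x) is to step it forward numerically. A minimal sketch, using the linear pair from above, f(x)=x and g(x)=(1/2)x; the starting condition h(1)=1 is my own choice, so that both integrals begin at x=1. (Separating the equation h’=2x/h with that condition gives the closed form √(2x²−1), which the stepped solution is checked against.)

```python
# Stepping h'(x) = f(x) / g(h(x)) forward with a simple Euler method,
# for f(x)=x and g(x)=x/2, starting from the assumed condition h(1)=1.
import math

f = lambda x: x
g = lambda x: x / 2

def solve_h(x_end, n=200_000):
    x, h = 1.0, 1.0                  # both integrals start at x = 1
    step = (x_end - 1.0) / n
    for _ in range(n):
        h += step * f(x) / g(h)      # h'(x) = f(x) / g(h(x))
        x += step
    return h

h2 = solve_h(2.0)
print(h2, math.sqrt(2 * 2**2 - 1))   # stepped solution vs closed form

# Do the mapped bounds really equalize the areas? For straight lines the
# areas have exact formulas, so we can compare directly:
area_f = (2.0**2 - 1.0**2) / 2       # integral of x from 1 to 2
area_g = (h2**2 - 1.0) / 4           # integral of x/2 from 1 to h(2)
print(area_f, area_g)
```

The two areas agree, which is exactly the promise that h(x) was supposed to keep.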

**Implying an Answer**

Maths lets us contort a truth into new truths. If we know that the derivative of h(x) is equal to f(x)/g(h(x)), then we *also* know that h(x) itself is equal to the *integral* of f(x)/g(h(x)). We’re hoping to solve this equation for h(x), yet h(x) appears on both sides of the equals sign! Only when h(x) is alone on one side, and what it equals is on the other, can we say that we have solved for h(x). The integral and g(h(x)) seem to be complications. How do we get around them, mathematically?

Let’s look back at the over-arching goal for a moment: we have an integral from a to b, the area underneath the curve f(x), and we cannot integrate f(x). We hope to change the function to g(x), because we **can** integrate g(x), and thereby find the area. h(x) gives the start and stop of the integral of g(x) that yields the same area as f(x) did. f(x) is where we begin; we choose a g(x), and then we use those two functions to find our h(x).

We can’t find h(x) for any old g(x), but we *can sometimes choose* an h(x) so that we can find g(x). Select a function, call it t(x), such that f(x) divided by that function is something we can integrate. Then, integrate that f(x)/t(x). But, the truth we found above was that the integral of f(x)/g(h(x)) equals h(x). So, if our integral of f(x)/t(x) is *set equal to* h(x), then t(x) must equal g(h(x)). Because we now know h(x), and we already knew t(x), we can find the g(x) function such that g(h(x))=t(x). This isn’t a perfect generalization, because it says “give me an answer, h(x), and I can find the *instance* where h(x) is appropriate for g(x).” Yet, this turns out to be sufficient. We can choose g(x) to be ‘whatever function lets me use this h(x) I found’. The problem isn’t just “given f(x), select g(x), and find h(x)” — it’s *also* “given f(x), select h(x), and find g(x).”
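The recipe above can be sketched numerically. The particular choices here are mine, picked only for illustration: f(x)=x³, with divisor t(x)=x, so that f(x)/t(x)=x² integrates easily to h(x)=x³/3 (taking the start at 0); then g is recovered from g(h(x))=t(x), which gives g(y)=(3y)^(1/3).

```python
# A sketch of the recipe: given f, *choose* a divisor t so that f/t is
# easy to integrate, set h(x) to that integral, then recover g from
# g(h(x)) = t(x). The choices f(x)=x^3 and t(x)=x are mine, for
# illustration only.

def integrate(fn, a, b, n=100_000):
    """Midpoint Riemann sum -- accurate enough for a sanity check."""
    w = (b - a) / n
    return sum(fn(a + (i + 0.5) * w) for i in range(n)) * w

f = lambda x: x ** 3
t = lambda x: x                    # chosen so f(x)/t(x) = x^2 integrates easily
h = lambda x: x ** 3 / 3           # integral of f/t, starting from 0
g = lambda y: (3 * y) ** (1 / 3)   # solves g(h(x)) = t(x)

a, b = 0.0, 2.0
lhs = integrate(f, a, b)           # area under f from a to b
rhs = integrate(g, h(a), h(b))     # area under g between the mapped bounds
print(lhs, rhs)                    # both come out to b^4 / 4 = 4
```

Notice the order of operations: h(x) was written down *first*, and g(x) fell out afterward, which is the “select h(x), and find g(x)” direction of the problem.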

**The Symmetry and its Interpretation**

What about when the initial function, f(x), is just equal to 1 all the time? Our equation for h(x) is then equal to the integral of 1/g(h(x)). Still an h(x) on both sides. But, let’s suppose that we knew what g(x) was, when we integrate it — call the integral of g(x), G(x). Integrating g(x) from h(a) to h(b) is equal to G(x) evaluated at h(a) and h(b). That is, the integral from h(a) to h(b) of g(x) equals G(h(b))-G(h(a)). If that integral is to equal the integral from a to b of f(x)=1, then G(h(b))-G(h(a))=b-a. The simplest way for that to be true is if G(h(b))=b and G(h(a))=a, which happens when h(x) equals the inverse of G(x), written G^-1(x).
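Here is the f(x)=1 case checked numerically for one concrete g of my own choosing: g(x)=e^x, so G(x)=e^x and h(x)=G^-1(x)=ln(x). The integral of 1 from a to b is just b−a, and the integral of e^x from ln(a) to ln(b) should match it.

```python
# Checking the f(x)=1 case: the integral of g from h(a) to h(b), with
# h = G^{-1}, should equal b - a. The choice g(x)=e^x is mine, for
# illustration only.
import math

def integrate(fn, a, b, n=100_000):
    """Midpoint Riemann sum -- accurate enough for a sanity check."""
    w = (b - a) / n
    return sum(fn(a + (i + 0.5) * w) for i in range(n)) * w

g = math.exp
h = math.log                     # G^{-1}, when G(x) = e^x

a, b = 2.0, 5.0
lhs = b - a                      # integral of f(x) = 1 from a to b
rhs = integrate(g, h(a), h(b))   # integral of g from h(a) to h(b)
print(lhs, rhs)                  # both come out to 3
```

This is just G(h(b))−G(h(a)) = e^(ln b) − e^(ln a) = b − a, seen through a numerical lens.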

So, if h(x) equals G^-1(x), then G^-1(x) equals the integral of 1/g(G^-1(x)). This equality is important, because it is a symmetry. A symmetry exists whenever you can start with something, do something to it, and end up with the thing you had when you started. The ‘something’ that you did didn’t end up changing anything! The ‘something’ done here is the integration of the reciprocal of the derivative of a function, taking the inverse of the function as input, which spits that same inverse of the function back out! This is true for *any* function G(x)!

*(Well, any continuous function where both its derivative and inverse exist… also, Eugene was quick to point out that the above symmetry can be derived, using the chain rule, from the fact that the derivative of x with respect to x equals one, and that x equals G(G^-1(x)), by the symmetry of inverse functions. These two equalities let 1 equal g(G^-1(x)) multiplied by the derivative of G-inverse. Tossing g(G^-1(x)) to the other side as the denominator of 1, we can integrate both sides, and integration of a derivative cancels, yielding: G^-1(x) equals the integral of 1/g(G^-1(x)). Yay, chain rule!)*
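And the symmetry itself can be tested numerically, again for a concrete G of my own choosing: G(x)=e^x, so g(x)=e^x and G^-1(x)=ln(x). Integrating 1/g(G^-1(s)) from 1 to x should hand back G^-1(x)=ln(x), since G^-1(1)=0.

```python
# The symmetry, checked for one function: integrating the reciprocal of
# the derivative of G, fed the inverse of G, returns that same inverse.
# The choice G(x)=e^x (so g(x)=e^x, G^{-1}(x)=ln(x)) is mine, for
# illustration only.
import math

def integrate(fn, a, b, n=100_000):
    """Midpoint Riemann sum -- accurate enough for a sanity check."""
    w = (b - a) / n
    return sum(fn(a + (i + 0.5) * w) for i in range(n)) * w

g = math.exp
g_inv = math.log                 # G^{-1}

# integral from 1 to x of 1/g(G^{-1}(s)) ds, which should equal G^{-1}(x)
recovered = integrate(lambda s: 1 / g(g_inv(s)), 1.0, 4.0)
print(recovered, g_inv(4.0))     # both come out to ln(4)
```

For this choice the integrand 1/e^(ln s) collapses to 1/s, so the check is really just rediscovering that the integral of 1/s is ln(s), which is the symmetry wearing its most familiar outfit.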