Background

In developing the digital signal processing for Deepstomp (a do-it-yourself digital multi-effect pedal platform), I had to optimize all computation in fixed-point math, since the employed controller is just a cheap 32-bit processor with no floating-point unit (an ARM Cortex-M3 microcontroller). The most important effect, especially for electric guitar, is distortion, which can be simulated using sigmoid or sigmoid-like functions such as arctan, tanh, or some others. I even tried the simplest approximation, using the sin function in the range from pi/4 to pi/2 with the CMSIS (Cortex Microcontroller Software Interface Standard) library, but the result was time consuming and ate too many CPU clocks. The fastest method is table lookup with linear interpolation, but the table size has to be limited to save program space. Several different sigmoid tables are needed to model various distortion types, and since the fuzz type has some extreme nonlinearity, I started worrying about its precision. After searching for alternative methods, I finally implemented my own method to improve the linear interpolation accuracy.

Linear Interpolation Extension

For two-dimensional and three-dimensional data, there are bilinear and trilinear interpolation, which can be viewed as extensions of linear interpolation. For one-dimensional data, I couldn't find any form of linear extension that improves accuracy by using multiple points of the available data without introducing higher-order derivatives or polynomials. The fact that I couldn't find such a method doesn't mean it hasn't been discovered or implemented by someone; it could simply be beyond my knowledge, published in limited circulation, kept secret under proprietary protection, or unavailable for some other reason. If there is already a known method for what I describe in this article, please let me know so I can add it as a reference here.

Figure 1. Linear Interpolation

In Figure 1, the blue line shows the original data between P1 and P4, and the red line is the interpolated value between P2 and P3. Basic linear interpolation uses only the points P2 and P3 to calculate all points in between. We can see that the interpolated values have some error distributed along the line, with the error amount proportional (non-linearly) to the distance from the nearest reference point (P2 or P3). This error is caused by the non-constant gradient along the line between P1 and P4, so we can try to correct it by considering the adjacent gradients. While linear interpolation considers only the gradient between P2 and P3, let us see how the gradients from P1 to P2 and from P3 to P4 can be employed to correct the linear interpolation between P2 and P3, by plotting the extrapolated values to grasp the idea.
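As a concrete baseline, here is a minimal sketch of the table lookup with linear interpolation mentioned in the background, in fixed point. The table layout, sizes, and names are my own illustration, not the actual Deepstomp source:

```c
#include <stdint.h>

#define TABLE_BITS 5                      /* 32 segments, 2^TABLE_BITS + 1 = 33 entries */
#define FRAC_BITS  (16 - TABLE_BITS)      /* remaining input bits index within a segment */

/* Look up a 16-bit input x in an equally spaced table and
 * linearly interpolate between the two surrounding entries. */
int32_t lut_lerp(const int32_t *table, uint16_t x)
{
    uint16_t i    = x >> FRAC_BITS;                  /* segment index          */
    int32_t  frac = x & ((1u << FRAC_BITS) - 1u);    /* offset inside segment  */
    int32_t  y0   = table[i];
    int32_t  y1   = table[i + 1];
    /* widen before multiplying so large table values cannot overflow */
    return y0 + (int32_t)(((int64_t)(y1 - y0) * frac) >> FRAC_BITS);
}
```

On a Cortex-M3 this costs one multiply, a shift, and an add per sample, which is why it is so much faster than computing the sigmoid directly.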

Figure 2. Looking at The Extension Components of Linear Interpolation: The Extrapolations

Figure 2 shows the yellow line as the extrapolated values using the lower gradient (between P1 and P2), and the green line as the extrapolated values using the upper gradient (between P3 and P4). The important thing shown by both the interpolated and extrapolated values is that the error is zero at the exact reference point and increases as the point moves farther away. If we mix all three lines to produce a better approximation, it makes sense to give them weights that are inversely proportional to the distance from each reference point.

• The interpolated value (red line) has zero error at both P2 and P3, so I select the closest of P2 or P3 as its reference point (to compute the weight).

• The lower gradient extrapolated value (yellow line) has zero error at P2, so P2 is its reference point.

• The upper gradient extrapolated value (green line) has zero error at P3, so P3 is its reference point.

With the coordinate data P1(x1,y1), P2(x2,y2), P3(x3,y3), and P4(x4,y4), I formulate the interpolated value at P(x,y) along the line between P2 and P3 as

y = f(x,x1,x2,x3,x4,y1,y2,y3,y4);

y = (interpolated * weight + lower_extrapolated * lowerweight + upper_extrapolated * upperweight)/(totalweight)

y = { {y2 + (x-x2)*(y3-y2)/(x3-x2)} * {1/min(x-x2, x3-x)}

    + {y2 + (x-x2)*(y2-y1)/(x2-x1)} * {1/(x-x2)}

    + {y3 + (x-x3)*(y4-y3)/(x4-x3)} * {1/(x3-x)} }

    / {1/min(x-x2, x3-x) + 1/(x-x2) + 1/(x3-x)}

Figure 3. The Extended 4 Point (3 Gradient) Linear Interpolation

Figure 3 shows the interpolated result using the above formula, with the red line as the final result, which is closer to the original data (blue line) than the other interpolated (yellow) and extrapolated (dark purple and green) lines. Looking at the final result in further detail, we can see that it still has some distributed error, which can be corrected further by increasing the weight of the linear interpolated component. I tried doubling it, so the final formula is

y = { {y2 + (x-x2)*(y3-y2)/(x3-x2)} * {2/min(x-x2, x3-x)}

    + {y2 + (x-x2)*(y2-y1)/(x2-x1)} * {1/(x-x2)}

    + {y3 + (x-x3)*(y4-y3)/(x4-x3)} * {1/(x3-x)} }

    / {2/min(x-x2, x3-x) + 1/(x-x2) + 1/(x3-x)}

Figure 4. Improvement by Doubling The Linear Weight

We can see in Figure 4 that the original and interpolated lines now match almost perfectly. The following figures (5-10) show interpolation results using this method for several different 4-point sets.

Figure 5. Original data (blue) y = sin(0.0942 x), linear interpolation (yellow), and extended linear interpolation (red)

Figure 6. Original data (blue) y = 169-x^2, linear interpolation (yellow), and extended linear interpolation (red)

Figure 7. Original data (blue) y = x^2, linear interpolation (yellow), and extended linear interpolation (red)

Figure 8. Extreme gradient change example, original data (blue) f(1) = 1, f(2) = 3, f(12)=12, f(13) = 12; linear interpolated (yellow), lower gradient extrapolated (green), upper gradient extrapolated (dark purple), extended linear interpolated (red)

Figure 9. More extreme gradient change example, original data (blue) f(1) = 1, f(2) = 1.3, f(12)=10, f(13) = 13; linear interpolated (yellow), lower gradient extrapolated (green), upper gradient extrapolated (dark purple), extended linear interpolated (red)

Figure 10. Another example, f(1) = 1, f(2)=1.5, f(12)=12.5, f(13)=13; the interpolated f(x) for x = 1 to x = 11 shows that the linear and extended linear interpolations coincide (yellow), since the upper and lower gradient extrapolations (green and dark purple) cancel each other

The interpolation extension works for original data sets like those shown in Figures 5, 6, and 7, where both the function and its derivative are monotonic and the gradient changes only slightly from the lower to the upper data points. The example in Figure 8 also has a (possibly) monotonic function and derivative, with a more extreme change in the gradient, but the interpolated result still seems acceptable. When the gradient change increases to a more extreme level, a case like the one in Figure 9 can appear, which is likely unacceptable, since we normally expect a smooth, monotonic final interpolated result. In the case shown in Figure 10, we can imagine that the original data has a smooth monotonic transition but a non-monotonic derivative, and the extended interpolation result is also disappointing, since it shows no improvement over pure linear interpolation. For most table lookup operations that are improved by linear interpolation, we can usually make sure that every four adjacent data points in the table change smoothly and that both the mapped original data and its derivative are monotonic.
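One way to enforce that assumption in practice is to validate a table offline before committing it to firmware. The helper below is hypothetical (the name and criteria are my own, not from the article): it checks that the values never decrease and that the sampled derivative (successive differences) never changes direction, which rules out cases like Figures 9 and 10:

```c
#include <stdbool.h>
#include <stddef.h>

/* Return true if the table values are non-decreasing and their
 * successive differences are monotonic (no curvature sign change),
 * i.e. the conditions assumed safe for the extended interpolation. */
bool table_is_safe(const double *y, size_t n)
{
    for (size_t i = 1; i < n; i++)
        if (y[i] < y[i - 1]) return false;       /* values must not decrease */

    int dir = 0;                                  /* sign of curvature seen  */
    for (size_t i = 2; i < n; i++) {
        double d = (y[i] - y[i - 1]) - (y[i - 1] - y[i - 2]);
        if (d > 0) { if (dir < 0) return false; dir = 1;  }
        if (d < 0) { if (dir > 0) return false; dir = -1; }
    }
    return true;
}
```

For instance, the x^2 samples of Figure 7 pass this check, while the Figure 10 data set (1, 1.5, 12.5, 13) fails it because the curvature flips sign.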