Weighted Color Accumulation Blending

This is just one of those little ideas that came to me one day back in high school, and I decided to follow it up. It originally started because I wasn’t all too happy with alpha blending as it’s done in most hardware accelerators. I also tried to see if anybody else had come across the same thoughts. As it happens, I could, at the time, find only one or two people on the web who had the same idea, and their academic web pages have since disappeared, presumably because they’d graduated. One person at BYU referred to them as “homogeneous color coordinates,” which, by naming alone, opened up a few interesting interpretations. After some time and further study, though, the relationship to register-accumulator-style ISA design couldn’t be denied, and really, the title you see above is a little more general.

So, as for the actual idea, it’s relatively simple. In most color averaging and blending operations (including alpha blending), one blends two colors, and that result is then blended with whatever subsequent color comes along. As you can probably guess, for multiple-color blending, this is flat out wrong: earlier samples progressively lose weight in the blended result as more and more colors get blended in. Typically, in cases such as color interpolation and image filtering, the colors are accumulated without limit and divided by the total number of samples. Similarly, in a filtering operation, certain colors are weighted more than others, so in those cases, the total sum of the weights becomes the divisor at the end of the blend. So I simply thought to carry this blending weight as a member field of the color coordinates. In short, a color is defined by four members, the fourth being its blending weight.

Let’s assume for now that all colors are defined in RGB space with floating-point fields. If we look upon it in the “homogeneous coordinate” sense, then we can consider the “visible” colors to be a subspace of all possible 4-d colors. This subspace, for simplicity’s sake, is the set of all colors [R G B 1.0], or in other words, the projection of all weighted sums onto colors with unit weight.

So what does this really accomplish? Well, first let’s consider a simple blending of two colors.

In a normal blending approach, we’d do something along the lines of –

R = (R1 + R2) * 0.5f;
G = (G1 + G2) * 0.5f;
B = (B1 + B2) * 0.5f;

But with accumulative blending, the approach is something more like –

R = R1 + R2;
G = G1 + G2;
B = B1 + B2;
W = W1 + W2;

And later, divide out by the weight.

(R,G,B,W) /= W;

So what, you say? Well, try to picture a few more than just two colors. The thing is that we can let the weight and the color terms keep on accumulating and perform that one divide at the end. So we get a nice performance advantage: 4 divisions (or rather an inversion and 4 multiplies) regardless of how many colors are blended together. The main advantage, though, is that all the accumulated colors actually contribute their intended weight to the resulting color, rather than having decreasing weight with each successive blending operation.

So for instance, if we had some 100 colors being blended together (assume that all 100 of them had a weight of 1), the resultant color would be some [R G B 100.0]. So when we are finally finished summing colors and want to write out to a pixel, we simply divide all the members by W.

Well, this all seems well and good. We have a simple method for accumulating large numbers of colors (albeit limited by the pangs of floating-point error), and it keeps track of the number of samples used. But there’s more to it than that. As the name suggests, weighted blending is what it’s really made for. Uses include things like image convolution, where each pixel to be sampled carries a weight in the sum. And to apply a weight, we first have to realize that any pixel in an already existing image is a color in the visible subspace, so it innately carries a weight of 1.0. So for it to have an arbitrary weight of some ‘w’, that weight needs to be multiplied through all the members (though multiplying the existing weight of 1 by ‘w’ is simply a replacement).

Of course, that’s just the beginning. One of the first applications that really sparked my interest was Monte Carlo rendering methods such as distribution raytracing and path tracing. Normally, one generates unweighted sample rays in directions distributed according to some probability density function that exhibits desirable results; in a good number of cases, this is a cosine-weighted distribution. Rather than generate unweighted rays in a desirably distributed fashion, we can instead generate PDF/BSDF-weighted samples in a quasi-random or jittered grid distribution without having to worry as much about the effective convergence rate. Moreover, it also means that, for instance in a Monte Carlo path tracing implementation, we can perform early culling of insignificant paths based on their weights. This can also help reduce noise: a low-weighted sample can still strike a highly emissive surface, but the probability of that happening is low enough that such hits work out to be a source of noise rather than signal, so culling them is a net win.

That makes sense in Monte Carlo rendering, but how about in a basic Whitted raytracer or raycaster? Well, in those cases, you have to remember that the effect of lights is a purely additive quantity. In the sense of homogeneous color coordinates, we can think of a lighting contribution not as a color, but as a vector in colorspace. More down-to-earth is the fact that lighting is a “contribution” to a pixel color and not a color in itself. Contributions to color are, in fact, “weightless”, or rather, they have a zero weight. Hypothetically, let’s say one lighting calculation makes a contribution of [1 1 0], and another makes a contribution of [0 0 1] at the same pixel; the result should be [1 1 1]. To accomplish this, we assume that we start with an empty image with no guarantee that anything will be visible, so every pixel in the image starts off as [0 0 0 1]. And with each ray and each lighting calculation, we add some [r g b 0.0] to the pixel. Will this potentially cause overflow beyond the [0..1] range? Of course. Lighting is just such a beast, and that possibility is there anyway, which is why it’s really more valid to think of rendering in its full dynamic range.

Well, that’s pretty much the long and short of it. I would go deeper into examples, but I’ve pretty much explained everything that really needs explaining. The same technique is perfectly applicable to other linear additive colorspaces wherever weighted color blending is a concern. I probably wouldn’t have even written anything about it if I hadn’t suddenly started thinking about it again, so in reality, I don’t have much more to say at this point. Anybody else who dares to play around with this, feel free. In the meantime, I’m simply hoping to get it down on paper… or e-paper, I guess.

- Parashar K.