Dithering

TD COMPs / TOPs / 2023 – Present

Dithering is the process of reducing color or intensity precision while preserving the perception of detail through structured distribution of quantization error. Instead of simply truncating values to a lower bit depth, dithering introduces spatial variation — either through ordered patterns, noise, or error diffusion — to simulate tones that no longer exist in the reduced color space.

This collection of components explores several classical and modern dithering strategies. Each operator provides explicit control over bit depth, output format, and pattern behavior, allowing dithering to function not only as a technical solution for color reduction, but also as a deliberate aesthetic tool.

Index:
Bayer Matrix Dither, Error Diffusion Dither, IGN Dither, Noise Dither

Bayer Matrix Dither

Ordered Dithering

This component implements ordered dithering using a structured threshold matrix. Instead of diffusing quantization error across neighboring pixels, this method compares each pixel value against a repeating threshold pattern derived from a Bayer Matrix. The result is a deterministic, grid‐based distribution of quantization error.

For each pixel, the input value is quantized to the selected bit depth, with a position‐dependent threshold from the Bayer Matrix determining whether that pixel rounds up or down within the reduced color space. This produces a controlled spatial pattern that simulates intermediate tones through structured pixel arrangements rather than randomness.

The Bayer Matrix itself is a structured threshold matrix used to distribute quantization error spatially. For example, a 3×3 Bayer Matrix can be defined as:

D_{B3} = \frac{1}{9} \begin{bmatrix} 0 & 7 & 3 \\ 6 & 5 & 2 \\ 4 & 1 & 8 \end{bmatrix}

And a common 4×4 Bayer Matrix is defined as:

D_{B4} = \frac{1}{16} \begin{bmatrix} 0 & 8 & 2 & 10 \\ 12 & 4 & 14 & 6 \\ 3 & 11 & 1 & 9 \\ 15 & 7 & 13 & 5 \end{bmatrix}

Because of its deterministic nature, Bayer dithering produces crisp, geometric structures that emphasize grid alignment and spatial rhythm — making it both a technical quantization strategy and a deliberate graphic aesthetic.
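As an illustrative CPU‐side sketch (the component itself runs on the GPU as a TOP), the thresholding step can be reproduced in a few lines of NumPy. The `bayer_dither` function and its parameters are hypothetical names for this example, not the component's API:

```python
import numpy as np

# 4x4 Bayer threshold matrix from above, normalized to [0, 1).
BAYER_4 = np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]) / 16.0

def bayer_dither(img, bits=1):
    """Ordered dithering: bias each pixel's rounding decision with a
    tiled, position-dependent threshold, then quantize to `bits`."""
    h, w = img.shape
    levels = 2 ** bits - 1
    ty, tx = np.indices((h, w))
    threshold = BAYER_4[ty % 4, tx % 4]   # tile the matrix across the image
    return np.floor(img * levels + threshold) / levels

# A horizontal gradient quantized to 1 bit becomes a structured
# black-and-white pattern whose density tracks the input tone.
gradient = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
out = bayer_dither(gradient, bits=1)
```

Because the threshold depends only on pixel position, the same input always produces the same output, which is what gives ordered dithering its stable, grid‐aligned character.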

Resources:

Bayer Dithering by surma.dev,
Ordered Dithering on Wikipedia,
Download the .tox files

Parameters

Matrix Scale:

Defines the dimensions of the Bayer threshold matrix used for dithering. Larger values increase the pattern size and spatial repetition.

Pattern Tiling:

Specifies how the Bayer matrix is tiled across the image:
Repeat tiles uniformly, while Mirror alternates orientation to reduce visible seams.


Bit Depth:

Defines the target bit depth used for quantization during dithering.

Pixel Format:

Specifies the output image pixel format.

In‐ / Outputs

Input 0: TOP image to be dithered.

Output 0: TOP dithered image.

Updated: 21/2/2025


Error Diffusion Dither

Adaptive Quantization Propagation

This component implements error diffusion dithering by propagating quantization error to neighboring pixels during processing. Rather than using a fixed threshold pattern, error diffusion distributes the difference between the original pixel value and its quantized result across surrounding pixels according to a weighted matrix.

The selected Algorithm determines how this error is spatially distributed. Each diffusion matrix defines a neighborhood pattern and weight distribution that shapes the resulting tonal texture. In the matrices below, the asterisk marks the pixel currently being processed; weights apply to the unvisited pixels to its right and below.

  1. Floyd‐Steinberg

    A compact and efficient diffusion matrix that produces sharp, high‐contrast dithering with minimal computational cost. Its limited neighborhood results in visible but crisp diffusion patterns.

    D = \frac{1}{16} \begin{bmatrix} \cdot & \ast & 7 \\ 3 & 5 & 1 \end{bmatrix}


  2. Jarvis‐Judice‐Ninke

    A wider diffusion matrix that distributes error across two rows of neighboring pixels. This produces smoother tonal gradients and softer transitions at the cost of additional computation.

    D = \frac{1}{48} \begin{bmatrix} \cdot & \cdot & \ast & 7 & 5 \\ 3 & 5 & 7 & 5 & 3 \\ 1 & 3 & 5 & 3 & 1 \end{bmatrix}


  3. Stucki

    A variation of Jarvis‐Judice‐Ninke with adjusted weighting for improved tonal balance. It preserves smooth gradients while maintaining slightly stronger local contrast.

    D = \frac{1}{42} \begin{bmatrix} \cdot & \cdot & \ast & 8 & 4 \\ 2 & 4 & 8 & 4 & 2 \\ 1 & 2 & 4 & 2 & 1 \end{bmatrix}


  4. Atkinson

    A simplified matrix with a smaller diffusion footprint. Produces a lighter, more stylized dithering pattern often associated with early Macintosh graphics.

    D = \frac{1}{8} \begin{bmatrix} \cdot & \ast & 1 & 1 \\ 1 & 1 & 1 & \cdot \\ \cdot & 1 & \cdot & \cdot \end{bmatrix}


  5. Burkes

    A reduced version of the Jarvis‐Judice‐Ninke matrix that diffuses error across two rows but with fewer weights, offering a balance between quality and performance.

    D = \frac{1}{32} \begin{bmatrix} \cdot & \cdot & \ast & 8 & 4 \\ 2 & 4 & 8 & 4 & 2 \end{bmatrix}


  6. Sierra

    A family of diffusion matrices offering multiple variants that trade diffusion range for performance.

    • Sierra (Full)

      Balanced diffusion across three rows.

      D = \frac{1}{32} \begin{bmatrix} \cdot & \cdot & \ast & 5 & 3 \\ 2 & 4 & 5 & 4 & 2 \\ \cdot & 2 & 3 & 2 & \cdot \end{bmatrix}


    • Sierra Two‐Row

      Reduced vertical spread with moderate smoothing.

      D = \frac{1}{16} \begin{bmatrix} \cdot & \cdot & \ast & 4 & 3 \\ 1 & 2 & 3 & 2 & 1 \end{bmatrix}


    • Sierra Lite

      Minimal diffusion footprint with sharper contrast.

      D = \frac{1}{4} \begin{bmatrix} \cdot & \ast & 2 \\ 1 & 1 & \cdot \end{bmatrix}


By providing multiple diffusion matrices within a single component, the Error Diffusion Dither COMP allows fine control over the spatial character of quantization. Smaller matrices emphasize contrast and pattern visibility, while larger matrices favor smoother tonal reproduction. The structure of the final image emerges directly from the interaction between the input image and the chosen diffusion weights.
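The scanline process described above can be sketched in NumPy for the Floyd‐Steinberg kernel. This is an illustrative CPU reference with hypothetical names, not the component's GPU implementation:

```python
import numpy as np

def floyd_steinberg(img, bits=1):
    """Error diffusion: quantize each pixel in scanline order and push
    the residual error to unvisited neighbors with weights 7, 3, 5, 1
    (out of 16), matching the Floyd-Steinberg kernel."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    levels = 2 ** bits - 1
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = round(old * levels) / levels   # quantize to target depth
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16          # right
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16  # below-left
                out[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16  # below-right
    return out

# A flat mid-gray field turns into an alternating pattern of black and
# white pixels whose density preserves the original tone.
flat = np.full((16, 16), 0.5)
dithered = floyd_steinberg(flat, bits=1)
```

The sequential dependency between pixels is what makes error diffusion less trivially parallel than ordered or noise dithering, and why the choice of kernel has such a direct effect on the resulting texture.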

Resources:

Download the .tox files,
Article by Tanner Helland,
Dizzy Dithering by Liam Appelbe,
Computerphile’s video on Dithering

Parameters

Algorithm:

Selects the error diffusion kernel used to distribute quantization error across neighboring pixels. Options are:
Floyd‐Steinberg, Jarvis‐Judice‐Ninke, Stucki, Atkinson, Burkes, or Sierra.

Alternate:

Defines the Sierra diffusion kernel variant to use:
Full, Two‐Row, or Lite.


Bit Depth:

Defines the target bit depth used for quantization during dithering.

Pixel Format:

Specifies the output image pixel format.

In‐ / Outputs

Input 0: TOP image to be dithered.

Output 0: TOP dithered image.

Updated: 21/2/2025


IGN Dither

Interleaved Gradient Noise Dithering

This component implements dithering using Interleaved Gradient Noise (IGN) — a deterministic pseudo‐random pattern designed to evenly distribute quantization offsets across image space. Conceptually, IGN sits between ordered dithering and random noise dithering: it avoids the visible grid structure of Bayer matrices while remaining deterministic and more spatially uniform than purely random noise.

Rather than propagating error (as in diffusion methods) or repeating a threshold grid (as in ordered dithering), IGN generates a lightweight procedural value per pixel that subtly influences rounding decisions during quantization. The result is a stable, fine‐grained distribution of tonal variation without directional streaking or rigid tiling artifacts.

The IGN pattern is defined per pixel position p in screen space as:

\mathrm{IGN}(p) = \operatorname{fract}\bigl( 52.9829189 \cdot \operatorname{fract}( 0.06711056\, p_x + 0.00583715\, p_y ) \bigr)

This formulation produces a deterministic scalar value in the range [0, 1), evenly distributed across image space while remaining computationally lightweight and frame‐stable.

Because IGN is procedural rather than texture‐based, it scales cleanly with resolution and remains temporally stable in animated workflows. The resulting dithering exhibits a subtle, uniform grain structure that avoids the geometric rigidity of ordered dithering and the diffusion streaking of error propagation methods.
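A minimal Python sketch of the formula and its use as a quantization threshold (function names are illustrative; the component evaluates this per pixel on the GPU):

```python
import math

def ign(x, y):
    """Interleaved Gradient Noise (Jimenez): deterministic per-pixel
    value in [0, 1), computed from the pixel coordinate alone."""
    frac = lambda v: math.modf(v)[0]
    return frac(52.9829189 * frac(0.06711056 * x + 0.00583715 * y))

def ign_dither(value, x, y, bits=1):
    """Use the IGN value as the rounding threshold during quantization."""
    levels = 2 ** bits - 1
    return math.floor(value * levels + ign(x, y)) / levels

# Deterministic: the same coordinate always yields the same threshold.
a, b = ign(10, 20), ign(10, 20)
```

Since the pattern is a pure function of pixel position, it needs no texture lookup and never changes between frames unless the coordinates do.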

Resources:

Download the .tox files,
Original Article by Jorge Jimenez,
Article on The Blog at the Bottom of the Sea,
Dithering Video by Acerola

Parameters

Bit Depth:

Defines the target bit depth used for quantization during dithering.

Pixel Format:

Specifies the output image pixel format.

In‐ / Outputs

Input 0: TOP image to be dithered.

Output 0: TOP dithered image.

Updated: 21/2/2025


Noise Dither

Stochastic Dithering

This component implements noise‐based dithering by introducing controlled randomness into the quantization process. Instead of using a fixed threshold matrix, each pixel is compared against a noise value, producing a stochastic distribution of quantization error across the image.

For each pixel, the input value is quantized to the selected bit depth, with a noise sample applied as the rounding threshold during the quantization step, influencing whether that pixel rounds up or down within the reduced color space. Unlike ordered dithering, the spatial distribution of tones is not determined by a repeating grid, but by a noise field.

The type of noise used directly influences the visual character of the result. White noise produces a purely random distribution of error, resulting in a grain‐like texture. Blue noise distributes error with higher spatial frequency separation, reducing visible clumping and producing a more visually pleasing and perceptually uniform result. The Random (GPU) mode generates procedural noise directly on the GPU, allowing for dynamic or animated dithering patterns.

Because noise‐based dithering does not rely on fixed spatial patterns, it avoids the geometric repetition seen in ordered methods. This makes it suitable for organic, natural‐looking quantization, subtle tone simulation, or stylized grain effects.
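A white‐noise variant of this idea can be sketched in NumPy. The function below is an illustrative stand‐in with hypothetical names; the component's Random (GPU) mode generates its noise in a shader instead:

```python
import numpy as np

def noise_dither(img, bits=1, seed=0):
    """Stochastic dithering: bias each pixel's rounding decision with
    a uniform random threshold in [0, 1) -- white noise."""
    rng = np.random.default_rng(seed)   # fixed seed: reproducible pattern
    levels = 2 ** bits - 1
    threshold = rng.random(img.shape)
    return np.floor(img * levels + threshold) / levels

# A flat mid-gray field becomes a random scatter of black and white
# pixels whose average intensity still approximates the input tone.
flat = np.full((64, 64), 0.5)
out = noise_dither(flat, bits=1, seed=42)
```

Holding the seed constant freezes the grain in place, while animating it produces the dynamic, film‐grain‐like dithering described above.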

Resources:

Download the .tox files

Parameters

Noise Type:

Selects the noise distribution used for dithering:
Random (GPU), White Noise, or Blue Noise.

Seed:

Defines the seed value used to generate the random noise pattern.
Active when Noise Type is set to Random (GPU).

Noise Dimensions:

Specifies the resolution of the noise texture used for dithering.
Active when Noise Type is set to White Noise or Blue Noise.

Noise Tiling:

Defines how the noise texture is tiled across the image (Repeat or Mirror).
Active when Noise Type is set to White Noise or Blue Noise.


Bit Depth:

Defines the target bit depth used for quantization during dithering.

Pixel Format:

Specifies the output image pixel format.

In‐ / Outputs

Input 0: TOP image to be dithered.

Output 0: TOP dithered image.

Updated: 21/2/2025