Pixel art scaling algorithms

Pixel art scaling algorithms are graphical filters that are often used in video game emulators to enhance 2D graphics. The re-scaling of pixel art is a specialist sub-field of image rescaling.

As pixel art graphics are usually in very low resolutions, they rely on careful placement of individual pixels, often with a limited palette of colors. The result is graphics that depend heavily on stylized visual cues to define complex shapes with very little resolution, down to individual pixels. This makes image scaling of pixel art a particularly difficult problem.

A number of specialized algorithms[1] have been developed to handle pixel art graphics, as the traditional scaling algorithms do not take such perceptual cues into account.

Since a typical application of this technology is improving the appearance of fourth-generation and earlier video games on arcade and console emulators, many of these algorithms are designed to run in real time on sufficiently small input images at 60 frames per second. This places constraints on the type of programming techniques that can be used for this sort of real-time processing. Many of them work only on specific scale factors: 2× is the most common, with 3×, 4×, 5×, and 6× also present.

Algorithms

EPX/Scale2×/AdvMAME2×

Eric's Pixel Expansion (EPX) is an algorithm developed by Eric Johnston at LucasArts around 1992, when porting the SCUMM engine games from the IBM PC (which ran at 320×200×256 colors) to the early color Macintosh computers, which ran at more or less double that resolution.[2] The algorithm works as follows:

  A    --\ 1 2
C P B  --/ 3 4
  D
 1=P; 2=P; 3=P; 4=P;
 IF C==A => 1=A
 IF A==B => 2=B
 IF D==C => 3=C
 IF B==D => 4=D
 IF of A, B, C, D, three or more are identical: 1=2=3=4=P

Later implementations of this same algorithm (as AdvMAME2× and Scale2×, developed around 2001) have a slightly more efficient but functionally identical implementation:

  A    --\ 1 2
C P B  --/ 3 4
  D
 1=P; 2=P; 3=P; 4=P;
 IF C==A AND C!=D AND A!=B => 1=A
 IF A==B AND A!=C AND B!=D => 2=B
 IF D==C AND D!=B AND C!=A => 3=C
 IF B==D AND B!=A AND D!=C => 4=D

The AdvMAME4×/Scale4× algorithm is just EPX applied twice to get 4× resolution.
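
As an illustration, here is a minimal Python sketch of the Scale2× rules above, with Scale4× obtained by applying it twice. The image is assumed to be a list of rows of comparable color values, and clamping at the borders is an implementation choice, not part of the original specification.

 def scale2x(img):
     # Scale2x/EPX: expand each pixel P into a 2x2 block using the rules above.
     h, w = len(img), len(img[0])
     get = lambda y, x: img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]  # clamp at the borders
     out = [[None] * (w * 2) for _ in range(h * 2)]
     for y in range(h):
         for x in range(w):
             P = img[y][x]
             A, B = get(y - 1, x), get(y, x + 1)   # above, right
             C, D = get(y, x - 1), get(y + 1, x)   # left, below
             p1 = p2 = p3 = p4 = P
             if C == A and C != D and A != B: p1 = A
             if A == B and A != C and B != D: p2 = B
             if D == C and D != B and C != A: p3 = C
             if B == D and B != A and D != C: p4 = D
             out[2 * y][2 * x],     out[2 * y][2 * x + 1]     = p1, p2
             out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = p3, p4
     return out

 def scale4x(img):
     # Scale4x/AdvMAME4x is simply Scale2x applied twice.
     return scale2x(scale2x(img))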

Scale3×/AdvMAME3× and ScaleFX

The AdvMAME3×/Scale3× algorithm can be thought of as a generalization of EPX to the 3× case. The corner pixels are calculated identically to EPX.

A B C --\  1 2 3
D E F    > 4 5 6
G H I --/  7 8 9
 1=E; 2=E; 3=E; 4=E; 5=E; 6=E; 7=E; 8=E; 9=E;
 IF D==B AND D!=H AND B!=F => 1=D
 IF (D==B AND D!=H AND B!=F AND E!=C) OR (B==F AND B!=D AND F!=H AND E!=A) => 2=B
 IF B==F AND B!=D AND F!=H => 3=F
 IF (H==D AND H!=F AND D!=B AND E!=A) OR (D==B AND D!=H AND B!=F AND E!=G) => 4=D
 5=E
 IF (B==F AND B!=D AND F!=H AND E!=I) OR (F==H AND F!=B AND H!=D AND E!=C) => 6=F
 IF H==D AND H!=F AND D!=B => 7=D
 IF (F==H AND F!=B AND H!=D AND E!=G) OR (H==D AND H!=F AND D!=B AND E!=I) => 8=H
 IF F==H AND F!=B AND H!=D => 9=F
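
The same rules translate directly into a Python sketch (illustrative only; borders are again handled by clamping, which is an implementation choice):

 def scale3x(img):
     # Scale3x/AdvMAME3x: expand each pixel E into a 3x3 block using the rules above.
     h, w = len(img), len(img[0])
     get = lambda y, x: img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]  # clamp at the borders
     out = [[None] * (w * 3) for _ in range(h * 3)]
     for y in range(h):
         for x in range(w):
             A, B, C = get(y - 1, x - 1), get(y - 1, x), get(y - 1, x + 1)
             D, E, F = get(y, x - 1),     img[y][x],     get(y, x + 1)
             G, H, I = get(y + 1, x - 1), get(y + 1, x), get(y + 1, x + 1)
             o = [[E] * 3 for _ in range(3)]                 # default: all nine outputs = E
             if D == B and D != H and B != F: o[0][0] = D
             if (D == B and D != H and B != F and E != C) or (B == F and B != D and F != H and E != A): o[0][1] = B
             if B == F and B != D and F != H: o[0][2] = F
             if (H == D and H != F and D != B and E != A) or (D == B and D != H and B != F and E != G): o[1][0] = D
             if (B == F and B != D and F != H and E != I) or (F == H and F != B and H != D and E != C): o[1][2] = F
             if H == D and H != F and D != B: o[2][0] = D
             if (F == H and F != B and H != D and E != G) or (H == D and H != F and D != B and E != I): o[2][1] = H
             if F == H and F != B and H != D: o[2][2] = F
             for dy in range(3):
                 for dx in range(3):
                     out[3 * y + dy][3 * x + dx] = o[dy][dx]
     return out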

There is also an improved variant of Scale3× called ScaleFX, developed by Sp00kyFox, and a version combined with Reverse-AA called ScaleFX-Hybrid.[3][4]

Eagle

Eagle works as follows: for every input pixel, four output pixels are generated. First, all four are set to the color of the input pixel currently being scaled (as in nearest-neighbor scaling). Next, the three pixels above-left, above, and to the left of the input pixel are examined; if all three are the same color, the top-left output pixel is set to that color. The same is done, with the corresponding neighbors, for the other three output pixels, and the algorithm then moves on to the next input pixel.[5]

Assume an input matrix of 3×3 pixels where the centermost pixel is the pixel to be scaled, and an output matrix of 2×2 pixels (i.e., the scaled pixel):

first:        |Then
. . . --\ CC  |S T U  --\ 1 2
. C . --/ CC  |V C W  --/ 3 4
. . .         |X Y Z
              | IF V==S==T => 1=S
              | IF T==U==W => 2=U
              | IF V==X==Y => 3=X
              | IF W==Z==Y => 4=Z

Thus a single black pixel on a white background will vanish. This is a flaw in the Eagle algorithm, but it does not occur in other algorithms such as EPX, 2×SaI, and HQ2x.
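
A Python sketch of these rules (illustrative only, using the same image representation and border clamping as the Scale2× sketch above) makes this single-pixel loss easy to reproduce:

 def eagle2x(img):
     # Eagle: start from nearest-neighbour, then fill a corner whenever its
     # three adjacent input neighbours all share one colour.
     h, w = len(img), len(img[0])
     get = lambda y, x: img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]  # clamp at the borders
     out = [[None] * (w * 2) for _ in range(h * 2)]
     for y in range(h):
         for x in range(w):
             C = img[y][x]
             S, T, U = get(y - 1, x - 1), get(y - 1, x), get(y - 1, x + 1)  # row above
             V, W    = get(y, x - 1),                    get(y, x + 1)     # left, right
             X, Y, Z = get(y + 1, x - 1), get(y + 1, x), get(y + 1, x + 1)  # row below
             p1 = p2 = p3 = p4 = C                                         # nearest-neighbour default
             if V == S == T: p1 = S
             if T == U == W: p2 = U
             if V == X == Y: p3 = X
             if W == Z == Y: p4 = Z
             out[2 * y][2 * x],     out[2 * y][2 * x + 1]     = p1, p2
             out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = p3, p4
     return out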

2×SaI

2×SaI, short for 2× Scale and Interpolation engine, was inspired by Eagle. It was designed by Derek Liauw Kie Fa, also known as Kreed, primarily for use in console and computer emulators, and it has remained fairly popular in this niche. Many of the most popular emulators, including ZSNES and VisualBoyAdvance, offer this scaling algorithm as a feature.

Since Kreed released[6] the source code under the GNU General Public License, it is freely available to anyone wishing to utilize it in a project released under that license. Developers wishing to use it in a non-GPL project would be required to rewrite the algorithm without using any of Kreed's existing code.

Super 2×SaI and Super Eagle

Several slightly different versions of the scaling algorithm are available, often referred to as Super 2×SaI and Super Eagle. Super Eagle, also written by Kreed, is similar to the 2×SaI engine but does more blending. Super 2×SaI, likewise by Kreed, also smooths the graphics, but it blends even more heavily than Super Eagle.

hqnx family

Main article: hqx

Maxim Stepin's hq2x, hq3x, and hq4x are for scale factors of 2:1, 3:1, and 4:1 respectively. Each works by comparing the color value of each pixel to those of its eight immediate neighbours, marking the neighbours as close or distant, and using a pregenerated lookup table to find the proper proportion of input pixels' values for each of the 4, 9, or 16 corresponding output pixels. The hq3x family will perfectly smooth any diagonal line whose slope is ±0.5, ±1, or ±2 and which is not anti-aliased in the input; a line with any other slope will alternate between two slopes in the output. It will also smooth very tight curves. Unlike 2×SaI, it anti-aliases the output.[7] A shader long thought to be hqx was in fact a different, lower-quality shader called HqFilter/ScaleHQ.[8] True hqx[9] is comparable in quality to early versions of xBR.
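
The pattern-matching step can be illustrated with a short Python sketch. This is not Stepin's code: the close predicate and the border clamping are assumptions standing in for hqx's YUV-threshold comparison.

 def hq2x_pattern(img, y, x, close):
     # Build the 8-bit neighbour pattern used to index the hqx lookup table.
     # close(a, b) is a colour-similarity predicate standing in for hqx's
     # YUV-threshold comparison; each "distant" neighbour sets one bit.
     h, w = len(img), len(img[0])
     get = lambda j, i: img[min(max(j, 0), h - 1)][min(max(i, 0), w - 1)]  # clamp at the borders
     centre = img[y][x]
     neighbours = [get(y - 1, x - 1), get(y - 1, x), get(y - 1, x + 1),
                   get(y,     x - 1),                get(y,     x + 1),
                   get(y + 1, x - 1), get(y + 1, x), get(y + 1, x + 1)]
     pattern = 0
     for bit, n in enumerate(neighbours):
         if not close(centre, n):
             pattern |= 1 << bit
     return pattern  # 0..255, used to look up the blending rule for the output block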

(Comparison images: the same image enlarged 3× with nearest-neighbor interpolation and with the hq3x algorithm.)

hqnx was initially created for the Super Nintendo emulator ZSNES. The author of bsnes has released a space-efficient implementation of hq2x to the public domain.[10]

xBR family

There are six filters in this family: xBR, xBRZ, xBR-Hybrid, Super xBR, xBR+3D, and Super xBR+3D.

xBR,[11] created by Hyllian, works much the same way as HQx (it is based on pattern recognition) and would generate the same result as HQx when given the above pattern. However, it goes further than HQx by using a two-stage set of interpolation rules, which better handle more complex patterns such as anti-aliased lines and curves. Scaled background textures keep the sharp characteristics of the original image rather than becoming blurred, as HQx (in reality ScaleHQ) tends to do. The newest xBR versions are multi-pass and can better preserve small details. There is also a version of xBR combined with the Reverse-AA shader, called xBR-Hybrid.[12] xBR+3D is a version with a 3D mask that only filters 2D elements.

xBRZ[13] is a modified version of xBR, created by Zenju and implemented from scratch as a CPU-based filter in C++. It uses the same basic idea as xBR's pattern recognition and interpolation, but with a different rule set designed to preserve fine image details as small as a few pixels. This makes it useful for scaling the details in faces, and in particular eyes. xBRZ is optimized for multi-core CPUs and 64-bit architectures, and shows 40–60% better performance than HQx even when running on only a single CPU core. It supports scaling images with an alpha channel, and scaling by factors from 2× up to 6×.

Super xBR[14][15] is an algorithm developed by Hyllian in 2015. It combines known linear filters with xBR edge-detection rules in a non-linear way. It works in two passes and can only scale an image by a factor of two (or powers of two, by reapplying it); it also has an anti-ringing filter. Super xBR+3D is a version with a 3D mask that only filters 2D elements. There is also a Super xBR version rewritten in C/C++.[16]

RotSprite

RotSprite is a scaling and rotation algorithm for sprites developed by Xenowhirl. It produces far fewer artifacts than nearest-neighbor rotation algorithms, and like EPX, it does not introduce new colors into the image (unlike most interpolation systems).[17]

The algorithm first scales the image to 8 times its original size with a modified Scale2× algorithm which treats similar (rather than identical) pixels as matches. It then calculates what rotation offset to use by favoring sampled points which are not boundary pixels. Next, the rotated image is created with a nearest-neighbor scaling and rotation algorithm that simultaneously shrinks the big image back to its original size and rotates the image. Finally, overlooked single-pixel details are restored if the corresponding pixel in the source image is different and the destination pixel has three identical neighbors.[18]
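
A heavily simplified Python sketch of the upscale-rotate-downscale idea follows. This is not Xenowhirl's implementation: plain pixel replication stands in for the modified Scale2× step, and the rotation-offset search and detail-restoration steps are omitted.

 import math

 def rotsprite_like(img, angle_deg):
     # Upscale 8x, then rotate and shrink back in a single nearest-neighbour pass.
     # Plain pixel replication stands in for the modified Scale2x upscale; the
     # offset search and the final detail-restoration pass are omitted.
     h, w = len(img), len(img[0])
     F = 8                                                   # upscale factor
     big = [[img[y // F][x // F] for x in range(w * F)] for y in range(h * F)]
     a = math.radians(angle_deg)
     cy, cx = (h * F) / 2.0, (w * F) / 2.0                   # rotate about the image centre
     out = [[None] * w for _ in range(h)]                    # None = rotated in from outside the sprite
     for oy in range(h):
         for ox in range(w):
             by, bx = (oy + 0.5) * F, (ox + 0.5) * F         # centre of the output pixel, in big-image coords
             sy = cy + (by - cy) * math.cos(a) - (bx - cx) * math.sin(a)   # inverse rotation
             sx = cx + (by - cy) * math.sin(a) + (bx - cx) * math.cos(a)
             iy, ix = int(sy), int(sx)
             if 0 <= iy < h * F and 0 <= ix < w * F:
                 out[oy][ox] = big[iy][ix]
     return out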

Kopf–Lischinski

The Kopf–Lischinski algorithm is a novel way to extract resolution-independent vector graphics from pixel art, described in the 2011 paper "Depixelizing Pixel Art".[19][20]

EDIUpsizer

EDIUpsizer[21] is a resampling filter that upsizes an image by a factor of two both horizontally and vertically using NEDI (new edge-directed interpolation).[22] EDIUpsizer also applies a few modifications to basic NEDI in order to prevent many of the artifacts that NEDI creates in detailed areas. These include condition-number testing and an adaptive window size,[23] as well as capping constraints. All modifications and constraints to NEDI are optional (they can be turned on and off) and are user-configurable. The filter is, however, rather slow.

FastEDIUpsizer

FastEDIUpsizer is a slimmed-down version of EDIUpsizer that is tuned slightly more for speed. It uses a constant 8×8 window size, performs NEDI only on the luma plane, and uses only bicubic or bilinear interpolation as the fallback interpolation method.

eedi3

eedi3 is another edge-directed interpolation filter. It works by minimizing a cost functional involving every pixel in a scan line, which makes it slow.

EEDI2

EEDI2 resizes an image to twice its height by copying each existing line to line 2n of the output and interpolating the missing lines (the missing field). It is intended as an edge-directed interpolator for deinterlacing (i.e., it is not primarily meant for resizing a normal image, but it can do that as well). EEDI2 can be used with both TDeint and TIVTC; see the linked discussion for more information on how to do this.[24]
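
A minimal Python/NumPy sketch of that line-doubling structure is shown below. It is not the actual EEDI2 code: plain averaging stands in for EEDI2's edge-directed interpolation of the missing lines.

 import numpy as np

 def double_height(field):
     # Copy each existing line n to output line 2n and fill the missing lines.
     # EEDI2 fills line 2n+1 with edge-directed interpolation; a plain average
     # of the lines above and below stands in for that step here.
     h, w = field.shape
     out = np.empty((2 * h, w), dtype=field.dtype)
     out[0::2] = field                                       # existing lines go to the even rows
     below = np.vstack([field[1:], field[-1:]])              # line below each line (repeat the last one)
     out[1::2] = ((field.astype(np.float32) + below) / 2).astype(field.dtype)
     return out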

SuperRes

The SuperRes[25] shaders use a different scaling method that can be used in combination with NEDI (or any other scaling algorithm); the method is explained in detail in the linked discussion.[26] It seems to give better results than using NEDI alone, and to rival those of NNEDI3. The shaders are also available as an MPDN renderscript.

NEDI

The idea behind edge-directed interpolation (EDI) is to use statistical sampling to ensure the quality of an image when scaling it up.[27] There were several earlier methods that involved detecting edges to generate blending weights for linear interpolation, or classifying pixels according to their neighbourhood and applying different, otherwise isotropic, interpolation schemes based on the classification. Any given interpolation approach boils down to weighted averages of neighbouring pixels, and the goal is to find optimal weights. Bilinear interpolation sets all the weights equal. Higher-order interpolation methods like bicubic or sinc interpolation consider a larger number of neighbours than just the adjacent ones. NEDI (new edge-directed interpolation) computes local covariances in the original image and uses them to adapt the interpolation at high resolution.
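
A simplified Python/NumPy sketch of that covariance-based weight estimation follows. It illustrates the idea rather than the published algorithm: the window size and the ridge term are assumptions made for this example.

 import numpy as np

 def nedi_weights(lowres, y, x, radius=3):
     # Estimate the four diagonal-neighbour weights for one interpolated pixel.
     # For every pixel in a local window, its four diagonal neighbours form a row
     # of C and the pixel itself an entry of d; the least-squares solution of
     # C a ~= d gives weights adapted to the local covariance.  A small ridge
     # term keeps the system well conditioned in flat areas.
     rows, d = [], []
     for j in range(y - radius, y + radius + 1):
         for i in range(x - radius, x + radius + 1):
             if 1 <= j < lowres.shape[0] - 1 and 1 <= i < lowres.shape[1] - 1:
                 rows.append([lowres[j - 1, i - 1], lowres[j - 1, i + 1],
                              lowres[j + 1, i - 1], lowres[j + 1, i + 1]])
                 d.append(lowres[j, i])
     C = np.asarray(rows, dtype=np.float64)
     d = np.asarray(d, dtype=np.float64)
     R = C.T @ C + 1e-6 * np.eye(4)          # local covariance plus ridge term
     r = C.T @ d
     return np.linalg.solve(R, r)            # adaptive weights; bilinear would be [0.25] * 4

The new pixel value is then the dot product of these weights with its four known diagonal neighbours; forcing all four weights to 0.25 reduces this to the bilinear case mentioned above.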

NNEDI

NNEDI[28] is an intra-field-only deinterlacer: it takes in a frame, throws away one field, and then interpolates the missing pixels using only information from the kept field. It has same-rate and double-rate modes, and works with YUY2 and YV12 input. NNEDI can also be used to enlarge images by powers of two.

NNEDI2

NNEDI2 is likewise an intra-field-only deinterlacer with same-rate and double-rate modes; in addition to YV12 and YUY2, it also accepts RGB24 input. NNEDI2 is also very good for enlarging images by powers of two, and includes a function 'nnedi2_rpow2' for that purpose.

ChromaNEDI

ChromaNEDI[29] is a way of using NEDI to upscale chroma using information from the luma channels.

NNEDI3

NNEDI3 is NNEDI2 with an improved predictor neural-network architecture and local neighborhood pre-processing. It also offers multiple local-neighborhood-size options to better handle image enlargement versus deinterlacing and to give more quality-versus-speed choices. NNEDI3's predictor neural network consists of neurons; the possible settings for NNEDI3 in madVR are 16, 32, 64, 128, and 256 neurons, where 16 is the fastest and 256 the slowest but should give the best quality. This is a quality-versus-speed option: differences between neuron counts are usually small for a given resize factor, but the performance gap between counts grows as the image size is quadrupled. When only doubling the resolution, there are no massive differences between 16 and 256 neurons; the difference between the highest and lowest options is still noticeable, but not orders of magnitude.

SuperChromaRes

With techniques similar to those of SuperRes, it is also possible to do chroma scaling. One major advantage is that this makes it possible to do chroma scaling in linear light, which would normally be impossible. This can greatly improve image quality for images consisting of saturated colours (especially red) on a white background. SuperChromaRes is available as an MPDN renderscript, and the original experimental shaders were also released for use with other renderers; support for those shaders is minimal, and they share some of the same issues as ChromaNEDI, but they generally work well for HD sources.

Waifu2x

Waifu2x[30] performs image super-resolution on anime-style art using deep convolutional neural networks. It also supports photos. A demo application can be found at http://waifu2x.udp.jp/

References

  1. "Pixel Scalers". Retrieved 19 February 2016.
  2. Thomas, Kas (1999). "Fast Blit Strategies: A Mac Programmer's Guide". MacTech.
  3. libretro. "common-shaders/scalenx at master · libretro/common-shaders · GitHub". GitHub. Retrieved 19 February 2016.
  4. "ScaleNx - Artifact Removal and Algorithm Improvement [Archive] - Libretro Forums". Retrieved 19 February 2016.
  5. "Eagle (idea)". Everything2. 2007-01-18.
  6. "Gmane Loom". Retrieved 19 February 2016.
  7. Stepin, Maxim. "hq3x Magnification Filter". Retrieved 2007-07-03.
  8. Hunter K. "Filthy Pants: A Computer Blog". Retrieved 19 February 2016.
  9. libretro. "common-shaders/hqx at master · libretro/common-shaders · GitHub". GitHub. Retrieved 19 February 2016.
  10. Byuu. Release announcement Accessed 2011-08-14.
  11. "xBR algorithm tutorial". Retrieved 19 February 2016.
  12. libretro. "common-shaders/xbr at master · libretro/common-shaders · GitHub". GitHub. Retrieved 19 February 2016.
  13. zenju. "xBRZ". SourceForge. Retrieved 19 February 2016.
  14. "Super-xBR.pdf". Google Docs. Retrieved 19 February 2016.
  15. libretro. "common-shaders/xbr/shaders/super-xbr at master · libretro/common-shaders · GitHub". GitHub. Retrieved 19 February 2016.
  16. http://pastebin.com/cbH8ZQQT
  17. "RotSprite". Sonic Retro. Retrieved 19 February 2016.
  18. "Sprite Rotation Utility". Sonic and Sega Retro Message Board. Retrieved 19 February 2016.
  19. Johannes Kopf and Dani Lischinski (2011). "Depixelizing Pixel Art". ACM Transactions on Graphics (Proceedings of SIGGRAPH 2011). 30 (4): 99:1–99:8. doi:10.1145/2010324.1964994. Archived from the original on 2015-09-01. Retrieved 24 October 2012.
  20. Johannes Kopf (2011). "Depixeling pixel art". SIGGRAPH. Retrieved 2016-05-22.
  21. http://web.missouri.edu/~kes25c/
  22. http://web.archive.org/web/20101126091759/http://neuron2.net/library/nedi.pdf
  23. http://web.archive.org/web/20041221052401/http://www.cs.ucdavis.edu:80/~bai/ECS231/finaltzeng.pdf
  24. "TDeint and TIVTC - Page 21 - Doom9's Forum". Retrieved 19 February 2016.
  25. "nnedi3 vs NeuronDoubler - Doom9's Forum". Retrieved 19 February 2016.
  26. "Shader implementation of the NEDI algorithm - Page 6 - Doom9's Forum". Retrieved 19 February 2016.
  27. https://www.doom9.org/showthread.php?s=7fb2fb184cfe82b7d76b63bb26df481a&t=170727
  28. "NNEDI - intra-field deinterlacing filter - Doom9's Forum". Retrieved 19 February 2016.
  29. "Shader implementation of the NEDI algorithm - Doom9's Forum". Retrieved 19 February 2016.
  30. nagadomi. "GitHub - nagadomi/waifu2x: Image Super-Resolution for Anime-Style Art". GitHub. Retrieved 19 February 2016.