Safir P. RGB AND YCbCr COLOR SPACES, STANDARDS AND CONVERSIONS FOR FPGA ENGINEERS // Universum: технические науки: electronic scientific journal. 2022. 10(103). URL: https://7universum.com/ru/tech/archive/item/14321 (accessed: 01.04.2023).
DOI: 10.32743/UniTech.2022.103.10.14321



This paper analyzes, from an FPGA engineer's point of view, the conversion of color spaces from RGB to YCbCr and back again. It reviews the color space standards and their transformation in the FPGA environment: when a particular standard is a better or worse choice, and the general mathematical rules of conversion. Conclusions are drawn about when conversion should be avoided and how it affects the speed of the whole process and the quality of the video signal, using the RGB24bit[888] and RGB16bit[565] standards as an example, and about why a conversion from RGB16bit[565] to RGB24bit[888] can go through the YCbCr color space. Several sampling formats of the YCbCr color space are also considered. Everything described below applies to hardware description languages such as VHDL[1] and Verilog[2].




Keywords: RGB, YCbCr, color space, FPGA.




FPGA development has progressed considerably in recent years. The performance of FPGA devices themselves has improved significantly, and many FPGA-based embedded systems are now on the market. Firms such as Xilinx and Intel produce FPGAs with an SoC (system on chip). Integrating an SoC (more precisely, an ARM processor) with the FPGA fabric combines the power of a software processor with that of programmable hardware on a single chip, which clearly increases the productivity and scalability of the entire system. For systems that must work in real time, the speed of data processing is critical. By choosing the right transformations, it is possible to save significantly both on the processing time of video signals and on the physical area of the FPGA chip itself.

Basic color space standards

RGB color model

RGB is a color model. What does this mean? The RGB color space has three primary colors, namely RED, GREEN and BLUE. When we add these colors in certain proportions to black, we get new shades of color. In the decimal system we can get 256 different shades for each of the three channels; in the binary system each channel uses 8 bits, from [00000000] to [11111111]. Black is defined as R=[00000000] G=[00000000] B=[00000000], and white as R=[11111111] G=[11111111] B=[11111111], which is the maximum value for an 8-bit-per-channel RGB color space. Changing the 8-bit value in any of the three channels changes the overall color that the human eye perceives. There are also RGB32bit and RGB64bit standards, but they require large amounts of memory and are not usually used in FPGA systems. Two RGB color model standards are primarily used for video in FPGAs: the RGB24bit[888] and RGB16bit[565] models.

RGB24bit[888] and RGB16bit[565] color models

Ready-made video modules (video cameras) used in embedded or FPGA-based systems have a built-in video codec. However, when the module outputs an analog signal instead of a digital RGB video signal, we need to add an external video codec such as the ADV7844[3]. At the output of the codec we get a 24-bit RGB signal. This video signal format is suitable only for displaying on a VGA display; if we need to work with the video signal, e.g. to apply different filters, this representation is not acceptable in our system. It is therefore necessary to convert RGB[888] into RGB[565]. This significantly reduces the amount of memory needed to process streaming video on our system, and it is also much faster to process 2 bytes than 3 bytes.


In this color model we use only 5 bits for the RED and BLUE channels and 6 bits for the GREEN channel. Why do we give the extra bit to GREEN? It is a basic fact of human evolution that our eyes are more sensitive to green than to other colors, so changes in the RED and BLUE channels affect the perceived picture less than changes in the GREEN channel. Therefore we allocate the additional bit to the green channel [4].

Color model conversion:

RGB24bit[888] to RGB16bit[565]:

In this conversion we simply drop the low-order bits, keeping only the high-order bits. The discarded low-order bits are lost and cannot be recovered.

       24bit RGB888                     to       16bit RGB565

       [R7 R6 R5 R4 R3 R2 R1 R0]                 [R7 R6 R5 R4 R3]

       [G7 G6 G5 G4 G3 G2 G1 G0]                 [G7 G6 G5 G4 G3 G2]

       [B7 B6 B5 B4 B3 B2 B1 B0]                 [B7 B6 B5 B4 B3]
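In VHDL or Verilog this truncation is plain bit slicing; as a language-neutral sketch, the same bit operations can be modeled in C (the function name and the 5-6-5 packing into one 16-bit word are illustrative assumptions, not part of any standard interface):

```c
#include <stdint.h>

/* Pack 8-bit R, G, B into one RGB565 word by dropping the low bits:
   keep R[7:3], G[7:2], B[7:3], exactly as in the table above. */
uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) |  /* 5 high bits of red   */
                      ((g >> 2) << 5)  |  /* 6 high bits of green */
                       (b >> 3));         /* 5 high bits of blue  */
}
```

In an HDL the whole function reduces to a wire concatenation such as `{r[7:3], g[7:2], b[7:3]}`, costing no logic at all.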

RGB16bit[565] to RGB24bit[888]:

In this method we fill the missing positions by appending copies of the low bits:

       16bit RGB565                     to       24bit RGB888

       [R4 R3 R2 R1 R0]                          [R4 R3 R2 R1 R0 R2 R1 R0]

       [G5 G4 G3 G2 G1 G0]                       [G5 G4 G3 G2 G1 G0 G1 G0]

       [B4 B3 B2 B1 B0]                          [B4 B3 B2 B1 B0 B2 B1 B0]
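The low-bit replication shown in the table can likewise be sketched in C (the 16-bit word layout and function name are assumptions; in an HDL this is again pure bit concatenation):

```c
#include <stdint.h>

/* Expand an RGB565 word back to RGB888 by appending a copy of each
   channel's own low bits, matching the table above. */
void rgb565_to_rgb888(uint16_t c, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint8_t r5 = (c >> 11) & 0x1F;
    uint8_t g6 = (c >> 5)  & 0x3F;
    uint8_t b5 =  c        & 0x1F;

    *r = (uint8_t)((r5 << 3) | (r5 & 0x07));  /* [R4..R0 R2 R1 R0] */
    *g = (uint8_t)((g6 << 2) | (g6 & 0x03));  /* [G5..G0 G1 G0]    */
    *b = (uint8_t)((b5 << 3) | (b5 & 0x07));  /* [B4..B0 B2 B1 B0] */
}
```

Both schemes map all-zeros to 0 and all-ones to 255; a common alternative replicates the high bits instead, e.g. `(r5 << 3) | (r5 >> 2)`, which spreads the reconstructed values more evenly across the 8-bit range.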

YCbCr color model

In the YCbCr color model, Y is luminance, Cb is chrominance-blue and Cr is chrominance-red. We perceive the luminance component (the Y component) much more strongly than color, so it can be separated into its own channel, independent of the color information. The intensity of the Y component can then be changed without affecting the color. Unlike the RGB color model, where luminance and color information are mixed in all three channels, the YCbCr model draws a strict distinction between the luminance channel Y and the chrominance channels Cb and Cr. By sub-sampling the chrominance we can therefore significantly reduce the amount of transmitted information, which lowers both the memory needed to store it and the time needed to process the video stream, while the luminance component Y stays at full resolution. The complete specification of the color model is given by the standards organization ITU-T[5] in Rec. ITU-R BT.601-6 [6]. Many video modules have embedded video codecs that output video in the YCbCr color model. For example, the ADV7181B video codec generates YCbCr in the 4:2:2 format, which is inconvenient for further work in the FPGA because we cannot pass a whole pixel in one clock cycle; the system works much more efficiently if we convert this format to YCbCr 4:4:4, which we can pass in one clock cycle.

YCbCr formats

YCbCr 4:4:4

The most obvious format, without compression. There is a Cb and a Cr chrominance component for every sample of the Y component; the chrominance components are passed through completely. Each component is 8 bits, so each uncompressed pixel in this format takes 3 bytes. This format is the most convenient for passing data inside the FPGA because a whole pixel can be transmitted in one clock cycle.

Four pixels:  [Y0 Cb0 Cr0] [Y1 Cb1 Cr1] [Y2 Cb2 Cr2] [Y3 Cb3 Cr3]

Pixels location in memory: Y0 Cb0 Cr0 Y1 Cb1 Cr1 Y2 Cb2 Cr2 Y3 Cb3 Cr3


YCbCr 4:2:2

In this format, for each sample of the luminance channel Y we keep half of the chrominance samples of the 4:4:4 horizontal scan. Each component is 1 byte (8 bits). To display the first two pixels in this format we need only four components, namely Y0, Cb0, Y1, Cr1, i.e. 4 bytes.

Four pixels: [Y0 Cb0 Cr0] [Y1 Cb1 Cr1] [Y2 Cb2 Cr2] [Y3 Cb3 Cr3]

Pixels location in memory: Y0 Cb0 Y1 Cr1 Y2 Cb2 Y3 Cr3

Output 4 pixels: [Y0 Cb0 Cr1] [Y1 Cb0 Cr1] [Y2 Cb2 Cr3] [Y3 Cb2 Cr3]
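The reconstruction shown above, where each pair of pixels shares one Cb and one Cr sample, can be sketched in C (buffer layout and names are assumptions; inside an FPGA this would be a small state machine over the input stream):

```c
#include <stdint.h>

/* Unpack a 4:2:2 memory stream (Y0 Cb0 Y1 Cr1 Y2 Cb2 Y3 Cr3 ...)
   into 4:4:4 pixels {Y, Cb, Cr}; each pair of pixels shares one
   Cb and one Cr, matching the output above. n must be even. */
void ycbcr422_to_444(const uint8_t *in, uint8_t out[][3], int n)
{
    for (int i = 0; i < n; i += 2) {
        const uint8_t *p = in + 2 * i;  /* 4 bytes per 2 pixels */
        uint8_t y0 = p[0], cb = p[1], y1 = p[2], cr = p[3];
        out[i][0]   = y0; out[i][1]   = cb; out[i][2]   = cr;
        out[i+1][0] = y1; out[i+1][1] = cb; out[i+1][2] = cr;
    }
}
```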

YCbCr 4:1:1

This is not the best compression format, but it is acceptable where high video quality is not needed. Four adjacent pixels take six bytes, assuming that each component is 1 byte. To display the first four pixels we need Y0 Y1 Y2 Y3 Cb0 Cr3, i.e. 6 bytes.

Four pixels: [Y0 Cb0 Cr0] [Y1 Cb1 Cr1] [Y2 Cb2 Cr2] [Y3 Cb3 Cr3]

Pixels location in memory: Y0 Cb0 Y1 Y2 Cr3 Y3

Output 4 pixels: [Y0 Cb0 Cr3] [Y1 Cb0 Cr3] [Y2 Cb0 Cr3] [Y3 Cb0 Cr3]

YCbCr 4:2:0

This is probably the most popular format. There is one Cb sample and one Cr sample for every four Y samples. The shared samples can be formed in two ways: the first takes the 4 closest Cb and Cr values and forms one sample; the second takes the 2 vertically adjacent Cb and Cr values and forms another sample.

Eight pixels: [Y0 Cb0 Cr0] [Y1 Cb1 Cr1] [Y2 Cb2 Cr2] [Y3 Cb3 Cr3]

[Y5 Cb5 Cr5] [Y6 Cb6 Cr6] [Y7 Cb7 Cr7] [Y8 Cb8 Cr8]

Pixels location in memory: Y0 Cb0 Y1 Y2 Cb2 Y3

Y5 Cr5 Y6 Y7 Cr7 Y8

Output 8 pixels: [Y0 Cb0 Cr5] [Y1 Cb0 Cr5] [Y2 Cb2 Cr7] [Y3 Cb2 Cr7]

[Y5 Cb0 Cr5] [Y6 Cb0 Cr5] [Y7 Cb2 Cr7] [Y8 Cb2 Cr7]
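The two-row layout above, where the top row carries Y and Cb and the bottom row carries Y and Cr and each 2x2 block of pixels shares one Cb/Cr pair, can be sketched in C (row buffers, names and the even-width restriction are assumptions of this sketch):

```c
#include <stdint.h>

/* Unpack the 4:2:0 layout above: top row = Y0 Cb0 Y1 Y2 Cb2 Y3 ...,
   bottom row = Y Cr Y Y Cr Y ...; each 2x2 pixel block shares one
   Cb (from the top row) and one Cr (from the bottom row).
   out0/out1 receive two rows of 4:4:4 pixels {Y, Cb, Cr}. */
void ycbcr420_to_444(const uint8_t *top, const uint8_t *bot,
                     uint8_t out0[][3], uint8_t out1[][3], int width)
{
    for (int i = 0; i < width; i += 2) {
        const uint8_t *t = top + 3 * (i / 2);  /* 3 bytes per 2 pixels */
        const uint8_t *b = bot + 3 * (i / 2);
        uint8_t cb = t[1], cr = b[1];
        uint8_t ytop[2] = { t[0], t[2] };
        uint8_t ybot[2] = { b[0], b[2] };
        for (int j = 0; j < 2; ++j) {
            out0[i+j][0] = ytop[j]; out0[i+j][1] = cb; out0[i+j][2] = cr;
            out1[i+j][0] = ybot[j]; out1[i+j][1] = cb; out1[i+j][2] = cr;
        }
    }
}
```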

YCbCr and RGB conversions

Conversion between the YCbCr and RGB formats is very simple. They are interchangeable, so the transition from one format to the other can easily be implemented in VHDL or Verilog. The luminance component Y is calculated as a weighted average of the R, G and B components:

Y = kr·R + kg·G + kb·B                                            (1.1)

where k is the color weight (multiplier) for each component.

Each chrominance component is the difference between the corresponding color component R, G, B and the luminance component Y:

Cb=B-Y                                                                    (1.2)

Cr=R-Y                                                                    (1.3)

Cg=G-Y                                                                   (1.4)

We now have a fourth component that did not exist before, namely Cg, even though there are only three components in RGB space. Since the weights k sum to 1, the weighted sum kr·Cr + kg·Cg + kb·Cb is always zero, so knowing only two of the three components, Cb and Cr, we can easily calculate the third. For displaying the image, however, it is better to convert back from YCbCr to RGB.

Direct transformation:

Y = kr·R + kg·G + kb·B
Cb = (B − Y) / (2(1 − kb))
Cr = (R − Y) / (2(1 − kr))

Inverse transformation:

R = Y + 2(1 − kr)·Cr
B = Y + 2(1 − kb)·Cb
G = (Y − kr·R − kb·B) / kg

Standard Rec. ITU-R BT.601-6 [6] recommends the following coefficients:

kr = 0.299, kg = 0.587, kb = 0.114

As a result, we get formulas for forward and reverse conversion:

YCbCr:                                                   RGB:

Y = 0.299R + 0.587G + 0.114B                             R = Y + 1.402Cr

Cb = 0.564(B − Y)                                        G = Y − 0.344Cb − 0.714Cr

Cr = 0.713(R − Y)                                        B = Y + 1.772Cb
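On an FPGA these floating-point coefficients are normally replaced by fixed-point integer multiplies and shifts. A minimal C sketch of the forward transform, assuming a scale factor of 256 (0.299·256 ≈ 77, 0.587·256 ≈ 150, 0.114·256 ≈ 29, 0.564·256 ≈ 144, 0.713·256 ≈ 183) and the common +128 offset so Cb and Cr fit an unsigned byte (that offset is an assumption not spelled out above):

```c
#include <stdint.h>

/* Fixed-point RGB -> YCbCr with the coefficients above scaled by 256.
   Integer division by 256 stands in for the arithmetic right shift
   an HDL implementation would use on the signed chroma terms. */
void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                  uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    int yi = (77 * r + 150 * g + 29 * b) >> 8;  /* Y, always >= 0 */
    *y  = (uint8_t)yi;
    *cb = (uint8_t)((144 * (b - yi)) / 256 + 128);  /* 0.564(B - Y) */
    *cr = (uint8_t)((183 * (r - yi)) / 256 + 128);  /* 0.713(R - Y) */
}
```

In hardware the three multiplications map onto DSP blocks (or shift-add trees) and the whole transform pipelines easily at one pixel per clock.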


The RGB color space can easily be converted to any YCbCr format, which saves a great deal of memory and processing time. It is also necessary, where required, to make sure the data packet fits in one clock cycle. Fortunately, in the YCbCr format the information can be compressed significantly and then returned to RGB format without significant losses.



  1. Orhan Gazi. A Tutorial Introduction to VHDL Programming, 1st ed. Springer, 2019. ISBN-10 9811323089. pp. 30-44.
  2. Samir Palnitkar. Verilog HDL: A Guide to Digital Design and Synthesis, 2nd ed. Prentice Hall. ISBN-10 9780132599702. pp. 28-45.
  3. Iain Richardson. Video Codec Design: Developing Image and Video Compression Systems, 1st ed. Wiley. ISBN-10 0471485535. pp. 120-132.
  4. Ellen J. Gerl, Molly R. Morris. The Causes and Consequences of Color Vision. Evolution: Education and Outreach, volume 1, pp. 476-486 (2008).
  5. Jamal Shahin. The International Telecommunication Union. Jaargang 34, nr. 154, 2010/2, pp. 3-5.
  6. International Telecommunication Union. Recommendation ITU-R BT.601-6. Studio encoding parameters of digital television for standard 4:3 and wide-screen 16:9 aspect ratios. pp. 7-8.
  7. Peter Marwedel. Embedded System Design: Embedded Systems Foundations of Cyber-Physical Systems, 2nd ed. Springer, 2011. pp. 78-90.
  8. E. Prathibha, A. Manjunath, R. Likitha. RGB to YCbCr Color Conversion using VHDL approach. International Journal of Engineering Research and Development, Volume 1, Issue 3 (June 2012), pp. 15-22.
Information about the authors

Bachelor of Science, The Azrieli College of Engineering in Jerusalem (JCE), Israel, Jerusalem

