Win -  HOW TO - Set up madVR for Kodi DSPlayer & External Players
#1
madVR Set up Guide (for Kodi DSPlayer and Media Player Classic)
madVR v0.92.17
LAV Filters 0.73
Last Updated: Feb 02, 2019

Please provide corrections to any technical information you believe is incorrect. Errors are not uncommon, as new features are added that are beyond my technical knowledge.

New to Kodi? Try this Quick Start Guide.

What is madVR?

This guide is an additional resource for those using Kodi DSPlayer or MPC. madVR setup is a lengthy topic, and its configuration remains fairly consistent regardless of the chosen media player.

Table of Contents:
  1. Devices;
  2. Processing;
  3. Scaling Algorithms;
  4. Rendering;
  5. Measuring Performance & Troubleshooting;
  6. Sample Settings Profiles & Profile Rules;
  7. Other Resources.
..............

Devices
Identification, Properties, Calibration, Display Modes, Color & Gamma, HDR and Screen Config.

Processing
Deinterlacing, Artifact Removal, Image Enhancements and Zoom Control.

Scaling Algorithms
Chroma Upscaling, Image Downscaling, Image Upscaling and Upscaling Refinement.

Rendering
General Settings, Windowed Mode Settings, Exclusive Mode Settings, Stereo 3D, Smooth Motion, Dithering and Trade Quality for Performance.

..............

Credit goes to Asmodian's madVR Options Explained, JRiver Media Center MADVR Expert Guide and madshi for most technical descriptions.

To access the control panel, open madHcCtrl in the installation folder:
Image

Double-click the tray icon or select Edit madVR Settings...

Image

During Video Playback: 

Ctrl + S opens the control panel. I suggest mapping this shortcut to your media remote.


..............

Resource Use of Each Setting

madVR can be very demanding on most graphics cards. Accordingly, each setting is ranked based on the amount of processing resources consumed: Minimum, Low, Medium, High and Maximum. Users of integrated graphics cards should not combine too many features labelled Medium and will be unable to use features labelled High or Maximum without performance problems.

This performance scale only relates to processing features requiring use of the GPU.

..............

GPU Overclocking

Overclocking the GPU with a utility such as MSI Afterburner can improve the performance of madVR. Increasing the memory clock speed alone is a simple adjustment that is often beneficial in lowering rendering times. Most overclocking utilities also offer the ability to create custom fan curves to reduce fan noise.

..............

Image gallery of madVR image processing settings

..............

Summary of the rendering process:

Image
#2
1. DEVICES
  • Identification
  • Properties
  • Calibration
  • Display Modes
  • Color & Gamma
  • HDR
  • Screen Config

Image

Devices contains settings necessary to describe the capabilities of your display, including: color space, bit depth, 3D support, calibration, display modes, HDR support and screen type.

device name
Customizable device name. The default name is taken from the device's EDID (Extended Display Information Data).

device type
The device type is only important when using a Digital Projector or a Receiver, Processor or Switch. If Digital Projector is selected, a new screen config section becomes available under devices.

Identification

The identification tab shows the EDID data read from the connected device.

Before continuing on, it can be helpful to have a refresher on basic video terminology. This information is completely optional:

Common Video Source Specifications & Definitions

Reading & Understanding Display Calibration Charts

Properties – RGB Output Levels

Image

Step one is to configure the video output levels, so black and white are shown correctly.

PCs and consumer video use different levels for output. Using 8-bits as a reference, the levels will be either full range RGB (0-255) (PC) or limited range RGB (16-235) (Video). In each case, reference black starts at a different value, but correctly converted 16-235 video content will appear identical. What you want are consistent levels from madVR all the way to the display without any unwanted conversions. What the display does with the signal is another matter; as long as black and white are the same as when they left the player, you can't ask for much more.

Note: The RGB Output levels checkboxes in LAV Video will not impact these conversions.
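To make the level math concrete, here is a minimal Python sketch of the range conversions discussed in the options below (the function names are mine, not madVR's):

```python
def limited_to_full(y):
    """Expand one limited-range (16-235) value to full range (0-255).

    This is the stretch madVR performs when set to PC levels; values
    below 16 (BtB) and above 235 (WtW) are clipped in the process.
    """
    y = min(max(y, 16), 235)            # clip BtB and WtW
    return round((y - 16) * 255 / 219)  # 219 steps stretched to 255

def full_to_limited(x):
    """Compress one full-range value back to 16-235, as a GPU set to
    limited range does; without dithering this rounding can band."""
    return round(x * 219 / 255) + 16
```

Converting twice (madVR to 0-255, then the GPU back to 16-235, as in Option 1) rounds 255 values into 219 steps, which is where banding can creep in if the GPU does not dither.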

Option 1:

If you just connect an HDMI cable from PC to TV, chances are you'll end up with a signal path like this:

(madVR) PC levels (0-255) -> (GPU) Limited Range RGB 16-235 -> (Display) Output as RGB 16-235

madVR expands the 16-235 source to full range RGB and it is converted back to 16-235 by the graphics card. Expanding the source prevents the GPU from clipping the image during conversion to 16-235. The desktop and videos will both look accurate. However, it is possible to introduce banding if the GPU doesn't use dithering when stretching 0-255 to 16-235. The range is converted twice: by madVR and the GPU. This option isn’t recommended because of the range compression by the GPU and should only be used if no other suitable option is possible.

If your graphics card doesn't allow for a full range setting (like many Intel iGPUs or older Nvidia cards), then this may be your only option. If so, it may be worth running madLevelsTweaker.exe in the madVR installation folder to see if you can force full range output from the GPU.

Option 2:

If your PC is a dedicated HTPC, you might consider this approach:

(madVR) TV levels (16-235) -> (media front-end) Use limited color range (16-235) -> (GPU) Full Range RGB 0-255 -> (Display) Output as RGB 16-235

In this configuration, the signal remains 16-235 all the way to the display. A GPU set to 0-255 will allow passthrough without clipping any levels output by madVR. Any media front-end used should also be configured to use 16-235. 

When set to 16-235, madVR will not clip Blacker-than-Black (BtB) and Whiter-than-White (WtW). This means it is possible to pass 0-15 and 236-255 if the source includes these values. Black and white clipping patterns should be used to adjust brightness and contrast until 16-235 are the only visible bars.

This can be the best option for GPUs that output full range to a display that only accepts limited RGB. Banding is unlikely as madVR handles the single range conversion (YCbCr -> RGB) and the GPU is bypassed. However, the desktop and other applications will output incorrect levels. PC applications render black at 0,0,0, while the display expects 16,16,16. The result is crushed blacks. This sacrifice is made to improve the quality of the video player at the expense of other computing.

Option 3:

A final option involves setting all sources to full range — identical to a traditional PC and computer monitor:

(madVR) PC levels (0-255) -> (GPU) Full Range RGB 0-255 -> (Display) Output as RGB 0-255

madVR expands 16-235 to 0-255 and it is presented in full range by the display. The display's HDMI black level must be set to display full range RGB (High or Normal vs. Low).

When converting YCbCr 16-235 to RGB 0-255, madVR will clip 0-15 and 236-255. Clipping below 16 and above 235 is acceptable as long as a correct grayscale is maintained. Test patterns for black and white clipping can be used to confirm video levels (16-235) are displayed correctly.

This should be the optimal setting for most with displays and GPUs supporting full range output. The desktop will output correct levels and banding is unlikely as madVR handles the only range conversion.

Prevention of banding is most likely when the GPU is set to RGB 0-255. Both Option 2 and Option 3 use this configuration.

Confirm the display is showing video levels correctly without any crush or clipping. This may require some adjustment to the brightness and contrast controls. For testing, start with these AVS Forum Black and White Clipping Patterns (under Basic Settings) to confirm the output of 16-25 and 230-235, and move on to these videos, which can be used to fine-tune "black 16" and "white 235." This is one of the most important aspects of setup for achieving a correct grayscale.

Video Levels Explained in Pictures

Spears' & Munsil's Choosing a Color Space

Discussion from madshi on RGB vs. YCbCr

How to Configure a Display and GPU for a HTPC 

Properties – Native Display Bit Depth

Image

Bit depth describes the number of available color shades for each RGB channel. 

Current source bit depths:

BT.709 - 8-bits (256 shades): common to current 1080p standard.
BT.2020 - 10-bits (1,024 shades) or 12-bits (4,096 shades): common to current UHD standard.

Remember, we are saying there are 256 steps per RGB channel or 1,024 steps per RGB channel.

At 8-bits, this equals 256 x 256 x 256 = 16,777,216 color shades.
At 10-bits, this equals 1024 x 1024 x 1024 = 1,073,741,824 color shades.
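The shade counts above can be verified with a line of Python:

```python
# Distinct color shades for a given bits-per-channel depth: the
# per-channel step count cubed, since R, G and B combine freely.
def total_shades(bits_per_channel):
    steps = 2 ** bits_per_channel   # 256 at 8-bit, 1024 at 10-bit
    return steps ** 3

# total_shades(8)  -> 16,777,216
# total_shades(10) -> 1,073,741,824
```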


The bit depth does not influence "colorfulness"; it is instead a measure of how finely colors blend together to prevent color banding. Higher bit depths are useful for lossy image processing such as mastering and compression. Presentation of an image with at least 8-bits (16 million color shades plus dithering) involves steps so small they are difficult for human eyes to detect when content is graded correctly. Human beings can see an estimated 10 million color shades across the visible spectrum. Having a new standard with one billion available color shades sounds great, but it isn't a change likely to make a visible difference. If banding isn't present, our simple human vision cones will perceive the additional steps as the same continuous colors.

Can You See the Difference Between 10-bit and 8-bit Images and Video Footage?

The native display bit depth is the value madVR dithers to when reducing its 32-bit processing result.

All display panels are manufactured to a specific bit depth, with most current displays being either 8-bit or 10-bit. However, many UHD displays are actually 8-bit with FRC temporal dithering despite being advertised as native 10-bit panels (typical of some VA 120 Hz TVs). You can confirm 10-bit output is displayed accurately with this test protocol.

10-bit output requires the following is checked in general settings:
  • use Direct3D 11 for presentation (Windows 7 and newer)

Other required options:
  • Windows 7/8: enable automatic fullscreen exclusive mode;
  • Windows 10: 10-bit output is possible in windowed mode or fullscreen exclusive mode.

A display with good processing should be fed its native bit depth (8-bit or 10-bit) if there are no settings conflicts. Feeding a 10-bit or 12-bit input to an 8-bit display without FRC temporal dithering will lead to one of two outcomes: low-quality dithering noise or color banding. If unsure, test both 8-bit and 10-bit output with the gradient from the test protocol linked above, with and without dithering enabled, to determine whether both look the same or one is superior.

Some factors that may force you to choose 8-bit output:
  • You are unable to find any official specs for the display’s bit depth;
  • The best option at 4K 60 Hz is 8-bit RGB due to the bandwidth limitations of HDMI 2.0;
  • You have created a custom resolution in madVR that has forced 8-bit output;
  • Display mode switching to 12-bits at 24 Hz is not working correctly with Nvidia video drivers;
  • The display has poor processing and creates banding with a 10/12-bit input even though it is a native 10-bit panel.

So is it a good idea to output a 10-bit source at 8-bits?

The answer to this depends on an understanding of madVR's processing.

The source starts as 10-bit. This means the capture and mastering processes will be more precise with more available bits and the video will compress much better to ensure there is no banding in the SOURCE.

The conversion from 10-bit Y'CbCr to RGB creates 32-bit floating point data. These bits are not invented but are available to assist in the rounding from one color space to another. Precision is maintained until the final processing result, which is dithered at the highest quality possible. So the end result is a 10-bit source upconverted to 32-bits and then downconverted for display.

madVR is designed to preserve the precision of its processing and of the initial YCbCr to RGB conversion when reducing to lower bit depths, so it should never introduce banding at any stage; the full-precision data is kept all the way to the final output. Whether banding appears at all then depends on the quality of the source and whether it had banding to begin with.

Color gamuts have fixed top and bottom values. You can manipulate the bit depth all you want without altering the colors you started with. You just get more shades of each color when the bit depth is increased; everything in between becomes smoother, not more colorful. And madVR makes all bit depths look smooth and nearly identical by adding invisible noise called dithering (explained here).

There is an argument that when capturing something with a digital camera there is no value in using 10-bits if the noise captured by the camera is not below a certain threshold (signal-to-noise ratio). If it is above this threshold, then the dithering added at 8-bits will be indiscernible from the noise captured at 10-bits. That is really what you are measuring when it comes to bit depths as high as 8-bits: detectable dithering noise. If the dithering noise is not detectable, then an 8-bit panel is an acceptable way to show 10-bit content. Dithering noise can be particularly hard to detect at 4K resolutions.

Check out these two images, which show the impact of dithering to a bit depth as low as 2-bits:

Dithering to 2-bits (64 colors):
2 bit Ordered Dithering
2 bit No Dithering

*Best viewed at 100% zoom for the dithering to look most accurate.

Seem remarkable? Regardless of the output bit depth, the high-quality dithering used by madVR makes the choice of bit depth somewhat unimportant, as gradients will always remain smooth without introducing any banding.

Given dithering is designed to spread out and erase any quantization (rounding) errors, it is not intended to remove source banding from 8-bit video. Rather, if the source (without dithering) is free of banding, that information can be maintained at lower display bit depths with dithering. At 8-bits or greater, this shading can blend so seamlessly that it completely avoids detection from the human eye.
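For the curious, here is a rough Python sketch of ordered (Bayer) dithering like the 2-bit example above. This is a simplified illustration of the technique, not madVR's actual algorithm (madVR uses higher-quality dithering such as error diffusion):

```python
# 4x4 Bayer threshold matrix (values 0-15, each used once)
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_channel(pixels, out_bits=2):
    """Quantize a 2-D list of 8-bit values to `out_bits` with ordered
    dithering, then re-expand to 0-255 for display. Neighboring pixels
    land on different output levels so the *average* preserves the
    original shade, which is how dithering hides quantization steps."""
    levels = (1 << out_bits) - 1        # 3 steps above zero at 2 bits
    out = []
    for y, row in enumerate(pixels):
        new_row = []
        for x, v in enumerate(row):
            threshold = (BAYER4[y % 4][x % 4] + 0.5) / 16.0
            scaled = v / 255.0 * levels
            frac = scaled - int(scaled)
            q = min(int(scaled) + (1 if frac > threshold else 0), levels)
            new_row.append(round(q * 255 / levels))
        out.append(new_row)
    return out
```

A flat 50%-gray patch quantized this way becomes a mix of two nearby levels whose average matches the original value; without the threshold matrix, every pixel would snap to the same level and a gradient would band.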

General Rule (best to worst): 10-bit RGB > 8-bit RGB > 10-bit YCbCr 4:2:2 > 10-bit YCbCr 4:2:0

With the above stated, there is a reason madVR defaults to 8-bit output. First, most displays are 8-bit, even though they will accept a 10-bit input without issue. Second, there is very little to no quality loss in selecting 8-bit output. Lastly, setting the GPU to 8-bits can often simplify GPU configuration given the bandwidth limitations of 4K & HDMI 2.0 and the needs of those using restrictive custom resolutions. Let your eyes be the judge of what bit depth to use; the only thing that should change is the noise floor of the image, and this subtlety can be invisible. A good 8-bits is more difficult to mess up than a poor 10-bits, so it is worth watching for banding when outputting at 10 to 12-bits. The odd UHD display may struggle with high bit depths due to the use of low bit depths for its internal processing or some other deficiency (like some LG OLEDs).

Determining Display-Panel Bit Depth

Example of Color Errors

A Technical Paper on How 8-bit LED Displays Can Be Used to Display 12-bit PQ Luminance Values

Properties – 3D Format

Image

3D playback in madVR is limited to MPEG4-MVC as found on 3D Blu-ray discs. It is easy to create a 3D mkv from a frame-packed source with MakeMKV.

The input format must be frame-packed MPEG4-MVC. The output format depends on the HDMI spec, operating system and display. 3D formats with the left and right images on the same frame will be sent out as 2D images.

3D playback requires four ingredients:
  • enable stereo 3d playback is checked in the madVR control panel (rendering -> stereo 3d);
  • A 3D decoder is installed (LAV Filters 0.68+ with 3D software decoder installation selected);
  • A 3D-capable display is used (with its 3D mode enabled);
  • Windows 8.1 or Windows 10 is the operating system.

Note: Some users may need to enable automatic fullscreen exclusive mode in general settings if 3D videos play in 2D. You can make a separate profile under rendering to apply fullscreen exclusive mode only to 3D videos. Nvidia users may also need a batch file to turn stereoscopic 3D on and off in the GPU control panel. This is the case if the switch enable stereo 3d playback in the madVR control panel doesn't toggle 3D mode in the Nvidia control panel at playback start and end.

Stereoscopic 3D is designed to capture separate images of the same object from slightly different angles to create an image for the left eye and right eye. The brain is able to combine the two images into one, which leads to a sense of enhanced depth.

The display type determines the way 3D images are displayed:
  • Active 3D TV: The left and right eye images are alternated.
  • Passive 3D TV: The left eye and right eye images are shown on the same frame.

Active 3D TVs display 3D content in frame-sequential format, where the left eye and right eye images are separated and alternated. This is done 48 times per second or 24 times per eye. Battery-powered 3D glasses use active shutters to open and close each eye in time with the image on-screen.

Passive 3D TVs are limited to showing a single image, which interweaves each eye onto a single frame. The display and 3D glasses use a polarizing filter, where only the portions of the screen meant for each eye are visible.

auto
The default output format is frame-packed 3D Blu-ray. The output is an extra-tall (1920 x 2205 - with padding) frame containing the left eye and right eye images stacked on top of each other at full resolution.

auto – (HDMI 1.4+, Windows 8+ & Display with HDMI 1.4+): Receives the full resolution, frame-packed output. On an active 3D display, each frame is split and shown sequentially. A passive 3D display interweaves the two images as a single image.

auto – (HDMI 1.3, Windows & Display with HDMI 1.3): Receives a downconverted, half side-by-side format. On an active 3D display, each frame is split, upscaled and shown sequentially. A passive 3D display upscales the two images and combines them as a single frame.

It is possible to override this behavior by selecting a specific 3D format.

Force 3D format below:

side-by-side

Side-by-side (SbS) stacks the left eye and right eye images horizontally. madVR outputs half SbS, where each eye is stored at half its horizontal resolution (960 x 1080) to fit on one 2D frame. This is done to reduce file sizes for HDMI 1.3. The display splits the frame and scales each image back to its original resolution.

An active 3D display shows half SbS sequentially. Passive 3D displays will split the screen into odd and even horizontal lines. The left eye and right eye odd sections are combined. Then the left eye and right eye even sections are combined. This weaving creates a sense of having two images.

top-and-bottom

Top-and-bottom (TaB) stacks the left eye and right eye images vertically. madVR outputs half TaB, where each eye is stored at half its vertical resolution (1920 x 540) to fit on one 2D frame. This is done to reduce file sizes for HDMI 1.3. The display splits the frame and scales each image back to its original resolution.

An active 3D display shows half TaB sequentially. Passive 3D displays will split the screen into odd and even horizontal lines. The left eye and right eye odd sections are combined. Then the left eye and right eye even sections are combined. This weaving creates a sense of having two images.
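The per-eye storage resolutions of the two half formats are simple arithmetic (a trivial sketch; the function names are mine):

```python
def half_sbs(width, height):
    """Per-eye storage resolution for half side-by-side 3D."""
    return (width // 2, height)     # horizontal resolution halved

def half_tab(width, height):
    """Per-eye storage resolution for half top-and-bottom 3D."""
    return (width, height // 2)     # vertical resolution halved

# A 1080p frame stores each eye at 960 x 1080 (half SbS)
# or 1920 x 540 (half TaB).
```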

line alternative

Line alternative is an interlaced 3D format designed for passive 3D displays. Each frame contains a left odd field and right odd field. The next frame contains a left even field and right even field. 3D glasses make the appropriate lines visible for the left or right eye. The display must be set to use its native resolution without any over or underscan.

column alternative

Column alternative is an interlaced 3D format similar to line alternative, except the frames are matched vertically as opposed to horizontally. This is another passive 3D format. One frame contains a left odd field and right odd field. The next frame contains a left even field and right even field. 3D glasses make the appropriate lines visible for the left or right eye. The display must be set to use its native resolution without any over or underscan.

swap left / right eye

Swaps the order in which frames are displayed. This corrects the behavior of some displays, which show the left eye and right eye images in the incorrect order. Incorrect eye order can be fixed for all formats, including line and column alternative. Many displays offer the same option to swap eyes in their picture menus.

Make certain your 3D glasses are synced with the display. If the image seems blurry (particularly, the background elements), your glasses are probably not enabled.

Further Detail on the Various 3D Formats

Calibration

Image

When doing any kind of gamut mapping or transfer function conversion, madVR uses the values in calibration as the target. This requires that you know your display's calibrated color gamut and gamma curve, and that you attach any available yCMS or 3D LUT calibration files.

Most 4K UHD displays have separate display modes for HDR and SDR. Calibration settings in madVR only apply to the display's default SDR mode. BT.2020 HDR content is passed through unless a special setting in hdr is enabled such as converting HDR to SDR.

disable calibration controls for this display

Turns off calibration controls for gamut and transfer function conversions. 

If you purchased your display and went through only basic calibration without any knowledge of its calibrated gamma or color gamut, this is the safest choice.

Turning off calibration controls defaults to:
  • primaries / gamut: BT.709
  • transfer function / gamma: pure power curve 2.20

this display is already calibrated

This impacts the mapping of content with a different gamut than the display. For example, a BT.2020 source, such as an UHD Blu-ray, may need to be mapped to the BT.709 color space of an SDR display, or a BT.709 source could be mapped to an UHD display calibrated to BT.2020. Any display with an Automatic color space setting can match the input color gamut, but all other displays require that the input gamut match the calibrated gamut to prevent over or undersaturation. madVR should convert any source that doesn't match the calibrated gamut.

If you want to use this feature but are unsure of how your display is calibrated, try the following values, which are most common.

1080p Display:
  • primaries / gamut: BT.709
  • transfer function / gamma: pure power curve 2.20

4K Display:
  • primaries / gamut: BT.709 (Auto/Normal) / BT.2020 (Wide/Extended/Native)
  • transfer function / gamma: pure power curve 2.20

Note: transfer function / gamma is only used if enable gamma processing is checked under color & gamma. Gamma processing is unnecessary as madVR will always use the source/mastering display gamma. This value is only applied by default for the conversion of HDR to SDR because madVR must convert a PQ source to a preferred SDR gamma curve.

HDR -> SDR Instructions: Mapping wide color gamuts

calibrate this display by using yCMS

Medium Processing

yCMS and 3DLUT files are forms of color management that use the GPU for gamut and transfer function correction. yCMS is the simpler of the two, only requiring a few measurements with a colorimeter and appropriate software. This is a lengthy topic beyond the scope of this guide.

yCMS files can be created with the use of HCFR. If you are going this route, it may be better to use the more accurate 3D LUT.

calibrate this display by using external 3DLUT files

Medium - High Processing

Display calibration software such as ArgyllCMS, CalMAN, LightSpace CMS or dispcalGUI is used along with madVR to create a 256 x 256 x 256 3D LUT.

A 3D LUT (3D lookup table) is an automated form of display calibration that uses the computer's GPU to produce corrected color values for sophisticated grayscale, transfer function and primary color calibration.

3D LUTs are created with display calibration software, a colorimeter and a set of test patterns. madTPG.exe (madVR Test Pattern Generator) found in the madVR installation folder provides the necessary patterns. The display software uses hundreds or thousands of color patches to assess the accuracy of the display before calibration, calculate necessary corrections and assess the performance of the display with those corrections enabled.

Display calibration software will generate .3dlut files that are used as the calibration profile for the monitor. Active 3D LUTs are indicated in the madVR OSD. A special split screen mode (Ctrl + Alt + Shift + 3) is available to show the unprofiled monitor on one side of the screen and the corrections provided by the 3D LUT on the other. 

Multiple 3D LUTs can be created to cover different gamuts, which includes HDR content. HDR 3D LUTs must be added separately from the hdr section.

A LUT is essentially a corrected output based on an input value for each RGB triplet. When done correctly, it should enforce near-ideal adherence to a desired color gamut when madVR renders content to your display. This is more sophisticated than traditional grayscale calibration as a 3D LUT is able to provide additional correction beyond the limited color controls of a typical high-definition display.
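To illustrate the lookup idea (this is a conceptual sketch, not the .3dlut file format), here is how a renderer samples a 3D LUT with trilinear interpolation; a GPU does the same thing in hardware when sampling a LUT texture:

```python
def identity_lut(size):
    """Build an identity 3D LUT: lut[r][g][b] -> (r, g, b), all in 0..1.
    A real calibration LUT would store corrected values instead."""
    s = size - 1
    return [[[(r / s, g / s, b / s) for b in range(size)]
             for g in range(size)]
            for r in range(size)]

def apply_lut(lut, rgb):
    """Sample the LUT at an RGB triplet (0..1 floats) with trilinear
    interpolation: blend the 8 lattice points surrounding the input."""
    s = len(lut) - 1
    idx, frac = [], []
    for c in rgb:
        pos = min(max(c, 0.0), 1.0) * s
        i = min(int(pos), s - 1)        # lower lattice index per axis
        idx.append(i)
        frac.append(pos - i)            # position between lattice points
    ri, gi, bi = idx
    rf, gf, bf = frac
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):                   # blend the 8 surrounding nodes
        for dg in (0, 1):
            for db in (0, 1):
                w = ((rf if dr else 1 - rf) *
                     (gf if dg else 1 - gf) *
                     (bf if db else 1 - bf))
                node = lut[ri + dr][gi + dg][bi + db]
                for k in range(3):
                    out[k] += w * node[k]
    return tuple(out)
```

A dense 256 x 256 x 256 LUT has one entry per 8-bit input value, so little interpolation is needed; smaller lattices (65-point LUTs are common elsewhere) rely on the interpolation above.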

Common HD color gamuts: BT.709, DCI-P3 and BT.2020.

Instructions on how to generate and use 3D LUT files with madVR are found below:
ArgyllCMS | CalMAN | LightSpace CMS

What Is a LUT?

Visual Representation of a 3D LUT

Luminance adds volume to a chromaticity diagram.
This creates a 256 x 256 x 256 cube like the RGB cube below:

Image

Any color space (e.g. XYZ) can be represented inside the cube.
Luminance (black to white) creates uneven distribution of colors:

Image

A 3D LUT is capable of correcting the three main aspects of display calibration:
  • Grayscale: Finding the achromatic point (D6500) and maintaining it from black to white (0 to 100% white) without any RGB intrusion.
  • Primaries: Combining values of red, green and blue to create the values placed on the corners of a triangular color gamut. These primaries are the base to create other colors.
  • Transfer Function: Producing gamma-corrected or perceptual quantization-corrected (PQ) color values. A capture device converts light to voltage. A display converts voltage to light for each pixel using a transfer function suitable for the gamut luminance range.

Manual grayscale calibration is only focused on adjusting the controls that directly influence the above qualities. If the display was perfectly linear — where any input signal produced a 100% predictable change in displayed color — a grayscale and primary color calibration would produce a perfect result. The problem is almost no consumer displays are perfectly linear and suffer from some color cross-coupling and a lack of RGB separation. A 3D LUT does not have to focus exclusively on the grayscale and primary colors and can treat all colors profiled within the cube equally to correct colors found between the white point and gamut edges. 3D LUT corrections use a small amount of GPU power, but can produce near-reference color when combined with a good display.

The Flaws of Using Delta E Alone for Display Calibration

disable GPU gamma ramps
Disables the default GPU gamma LUT. This will return to its default when madVR is closed. Using a windowed overlay means this setting only impacts madVR.

Enable if you have installed an ICC color profile in Windows Color Management. madVR cannot make use of ICC profiles.

report BT.2020 to display (Nvidia only)
Allows the gamut to be flagged as BT.2020 when outputting in DCI-P3. Can be useful in situations where a display or video processor requires or expects a BT.2020 container but DCI-P3 output is preferred.

Display Modes

Image

Display modes matches the source frame rate to the display refresh rate. This ensures the smoothest playback by playing content such as 23.976 fps video at a matching refresh rate or fixed multiple (24 Hz from the GPU and 120 Hz — 24 x 5 — at the display). Conversely, playing 23.976 fps content at 60 Hz presents a mismatch: the frame frequencies do not align, so frames are repeated in a 3:2 pulldown pattern, which creates motion judder. The goal of display modes is to eliminate motion judder caused by mismatched frame rates.
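The 3:2 pulldown judder can be seen in a short Python sketch (function name mine) that counts how many refresh cycles each source frame stays on screen:

```python
def pulldown_pattern(fps=24, hz=60, frames=4):
    """Refresh-cycle counts per source frame when showing `fps` content
    on a `hz` display. Uneven counts mean judder."""
    counts, shown, acc = [], 0, 0.0
    for _ in range(frames):
        acc += hz / fps                # ideal cumulative refresh count
        n = int(acc + 0.5) - shown     # cycles given to this frame
        shown += n
        counts.append(n)
    return counts

# 24 fps at 60 Hz  -> [3, 2, 3, 2]: alternating hold times (judder)
# 24 fps at 120 Hz -> [5, 5, 5, 5]: even cadence (smooth)
```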

Enter all of the refresh rates supported by your display into the blank textbox. At the start of playback, madVR will switch the GPU, and by extension the display, to the output mode that best matches the source frame rate.

A list of available refresh rates for the connected display can be found in Windows:
  • Right-click on the desktop and select Display settings;
  • Click on Advanced display settings;
  • Choose Display adapter properties -> Monitor;
  • A list of compatible refresh rates is shown in the drop-down.

Ideally, a GPU and display should be capable of the following refresh rates:
  • 23.976 Hz
  • 24 Hz
  • 25 Hz
  • 29.97 Hz
  • 30 Hz
  • 50 Hz
  • 59.94 Hz
  • 60 Hz

madVR recognizes display modes by output resolution and refresh rate. You only need to output to one resolution for all content, which includes 1080p 3D videos, to ensure image upscaling is applied when beneficial.

To cover all of the refresh rates above, eight entries are needed:

1080p Display: 1080p23, 1080p24, 1080p25, 1080p29, 1080p30, 1080p50, 1080p59, 1080p60

4K Display: 2160p23, 2160p24, 2160p25, 2160p29, 2160p30, 2160p50, 2160p59, 2160p60

Note: You can selectively choose which display modes to use. You may not need 2160p25, for example, if you choose to use 2160p50 (as 25p x 2 = 50p). Remember that refresh rates below 30 Hz can be needed for 4K 10-bit output.

In most cases, the display will output at a multiple of the source frame rate (29.97 fps x 2 = 59.94 Hz). Interpolation is avoided so long as the refresh rates match.
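The matching logic can be approximated with a small Python sketch (an illustration only, not madVR's actual selection code):

```python
# Refresh rates entered under display modes
REFRESH_RATES = [23.976, 24.0, 25.0, 29.97, 30.0, 50.0, 59.94, 60.0]

def best_refresh_rate(fps, rates=REFRESH_RATES):
    """Pick the rate closest to an integer multiple of the source frame
    rate; ties go to the earlier (lower) entry in the list."""
    def error(rate):
        multiple = max(1, round(rate / fps))
        return abs(rate - fps * multiple)
    return min(rates, key=error)

# best_refresh_rate(23.976) -> 23.976
# best_refresh_rate(29.97)  -> 29.97 (59.94 would also be judder-free)
```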

treat 25p movies as 24p (requires ReClock or VideoClock)
Check this box to remove PAL Speedup common to PAL region (European) content. madVR will slow down 25 fps film to its original 24 fps, undoing the roughly 4.2% speedup. Requires the use of an audio renderer such as ReClock or VideoClock (JRiver Media Center) to slow down the audio by the same amount.
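The speedup arithmetic, as a quick Python check (constant names are mine):

```python
# PAL speedup: 24 fps film telecined to 25 fps runs 25/24 - 1 ≈ 4.2% fast.
SPEEDUP = 25 / 24 - 1        # ≈ 0.0417

# Reversing it: play the 25 fps video at 24 fps and slow the audio by
# the same factor so A/V sync is preserved.
PLAYBACK_FACTOR = 24 / 25    # 0.96, i.e. a 4% slowdown
```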

hack Direct3D to make 24.000Hz and 60.000Hz work
madVR Explained: A hack to Direct3D that enables true 24 and 60 Hz display modes in Windows 8.1 or 10, which are usually locked to 23.976 Hz and 59.940 Hz. May cause presentation queues to not fill.

Note on 24p Smoothness:

Due to the low frame count, video with a frame rate of 24 fps (such as film and television) will display some stutter in panning shots even when shown at its native refresh rate (24p). The human eye can easily discern frame rates as high as 150 fps, so low frame counts will be visible and are no different than watching the same source at a commercial theatre. To simulate the motion of 24p, try switching your GPU to 24 Hz and moving the mouse cursor around.

Motion interpolation can improve the fluidity of 24 fps content, but it will introduce a noticeable and unwanted soap-opera effect. True 24 fps playback at a matching refresh rate (5:5 pulldown), even with small amounts of stutter or blur, remains the best way to accurately view film-based content.

A Beginner's Guide to 24p Playback

Custom Modes

This is actually a second tab under display modes. It is for users who do not want to use ReClock or other similar audio renderers to correct clock jitter, which can result in dropped/repeated frames every few minutes with many graphics cards. Generally, this is anyone who is bitstreaming rather than decoding to PCM. The goal is to reduce or eliminate the dropped/repeated frames counted by madVR.

What is clock jitter? Clock jitter is caused by a lack of synchronization between the playback clocks: the system clock, video clock and audio clock. The system clock always runs at 1.0x. The audio and video clocks tick away independently of each other. Having three independent clocks invites the possibility of losing synchronization during playback. These clocks are subject to variability caused by differences in A/V hardware, drivers and software. Any difference from the system clock is captured by the display and clock deviation values in madVR's rendering stats.

Let's use an example:

display (video): 23.97142 Hz

With an ideal value of 23.976 Hz, the 23.97142 Hz rate of the video clock means it is slower than the system clock.

clock deviation (audio): 0.00217%

With a deviation of 0.00217% (23.976 * (1 + 0.00217 / 100) = 23.97652 Hz), the audio clock is slightly faster than the system clock. This would be acceptable if the audio clock randomly matched the video clock. However, this is not the case:

audio/video synchronization: 23.97652 Hz (audio) - 23.97142 Hz (video) = 0.0051 Hz (deviation)

The audio and video are out-of-sync. This small deviation would lead to a slow drift between the audio and video during playback. The video clock yields to the audio clock — a frame is dropped or repeated every few minutes to resynchronize.

reported display rate < (movie frame rate * (1 +/- clock deviation / 100)) = dropped frames
The video runs faster than the display can show it, so a frame must be dropped now and then.


reported display rate > (movie frame rate * (1 +/- clock deviation / 100)) = repeated frames
The video runs slower than the display, so a frame must be repeated now and then.

The greater the mismatch between the two rates, the more frequent the corrections.
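
Using the numbers from the example above, the expected time between corrections can be estimated by dividing one second by the difference between the two clock rates. This is a back-of-the-envelope sketch, not madVR's exact algorithm:

```python
audio_rate = 23.97652   # Hz, effective rate demanded by the audio clock
video_rate = 23.97142   # Hz, reported display (video clock) rate

deviation_hz = audio_rate - video_rate           # 0.0051 Hz
seconds_per_correction = 1 / abs(deviation_hz)   # ~196 seconds
minutes_per_correction = seconds_per_correction / 60
```

One frame would be dropped roughly every 3.3 minutes, matching the "every few minutes" behaviour described above.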


Creating a custom mode is a means to improve the synchronization of the video clock in relation to the audio clock. This should result in fewer dropped or repeated frames.

madVR Explained:

Only custom timings can be optimized, but simply editing a mode and applying the "EDID / CTA" timing parameters creates a custom mode and is the recommended way to start optimizing a refresh rate. New timing parameters must be tested before they can be applied. A Delete button replaces the Add button when a custom mode is selected. madVR uses each GPU vendor's private API to add these modes, so it does not work with EDID override methods such as CRU; AMD, Intel and Nvidia GPUs are supported. With Nvidia, custom modes can only be set to 8-bit, but 10 or 12-bit output is still possible if the GPU is already using a high bit depth before switching to the custom resolution.

SimpleTutorial: How to Create Custom Modes

Detailed Tutorial: How to Create Custom Modes

Color & Gamma

Color and transfer function adjustments should be avoided unless you are unable to correct an issue using the calibration controls of your display.

enable gamma processing

This option works in conjunction with the gamma set in calibration. The value set in calibration is used as the base, which madVR maps to the gamma chosen below. A gamma must be set in calibration for this feature to work.

Most viewing environments call for a gamma between 2.20 and 2.40, although many other values are possible.

madVR Explained:

pure power curve
Uses the standard pure power gamma function.

BT.709/601 curve
Uses the inverse of the BT.709/601 camera (encoding) gamma function. This can be helpful if your display has crushed shadows.

2.20
Brightens mid-range values, which can be nice in a brightly lit room.

2.40
Darkens mid-range values, which might look better in a darker room.
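
If both the base gamma and the target are pure power curves, the remapping reduces to a single exponent. A simplified sketch of the idea (madVR's real processing also involves dithering and non-power curves such as BT.1886):

```python
def convert_gamma(signal, display_gamma=2.20, target_gamma=2.40):
    """Re-encode a [0, 1] video level so that a display calibrated to
    `display_gamma` (the base value set in calibration) produces the
    light output of `target_gamma` (the curve chosen in madVR)."""
    return signal ** (target_gamma / display_gamma)

# The display raises whatever it receives to display_gamma, so the light
# output equals the target curve applied to the original signal:
level = 0.5
sent = convert_gamma(level, 2.20, 2.40)
assert abs(sent ** 2.20 - level ** 2.40) < 1e-12
```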

It is best to leave these options alone. Without knowing what you're doing, it is more likely you will degrade the image rather than improve it.
#3
1. DEVICES (Continued...)

HDR

Image

The HDR section specifies how HDR sources are handled. HDR refers to High Dynamic Range content. This is the new standard for consumer media and includes sources ranging from 4K UHD Blu-ray to streaming services such as Netflix, Amazon, iTunes and Vudu as well as 4K HDR broadcasts.

HDR works by grading content in a way that maximizes peak white luminance to allow for brighter highlights and more texture detail in bright image areas. In the past, colorists worked with a peak brightness of 100 nits for BT.709 HD content, yet current displays are much brighter and have been so for a long time. HDR enhancement also defines reference white (normal diffuse white) as 100 nits, but avoids clipping the brightest details by leaving room for spectral highlight detail up to 10,000 nits. Much of what we can see was already contained in the SDR range, but this extra headroom provides the ability to call attention to bright elements like the sun on the horizon or the chrome reflection on a car. Combined with an expanded color gamut, the result can be an image with vastly improved contrast and color volume.

Current HDR support in madVR focuses on PQ HDR10 content.

Other formats such as Hybrid Log-Gamma (HLG) and Dolby Vision are not currently supported.

UHD Blu-ray separates PQ HDR metadata into two layers:
  • Base Layer - HDR10: 1,000 - 10,000 nits, DCI-P3 -> BT.2020, 10-bit HEVC;
  • Enhancement Layer - Dolby Vision & HDR10+: 4,000 or 1,000 - 10,000 nits, DCI-P3 -> BT.2020, 12 or 10-bit HEVC.

The display converts PQ HDR metadata in three steps:
  • Tone Mapping: Compressing the highlights to fit the peak luminance of the display;
  • Gamut Mapping: Mapping the DCI-P3 or BT.2020 primaries to the display's visible colors;
  • Gamma Transfer: Decoding the SMPTE 2084 PQ transfer function to the display transfer function.

madVR is capable of tone mapping, gamut mapping and gamma transfer conversion so that any display can show HDR10 content using the limitations of its available color gamut and peak luminance. 

madVR offers four methods for dealing with HDR10 metadata:
  • let madVR decide: madVR detects the display's capabilities. HDR-compatible displays receive the metadata untouched via passthrough; displays that are not HDR-compatible have the content converted to SDR via pixel shader math at reasonable, but not the highest, quality.
  • passthrough HDR to display: The display receives the HDR content untouched for conversion by the display (a setting of let madVR decide will also accomplish this). HDR passthrough should only be used for displays that natively support HDR playback. Two APIs are available for sending the HDR metadata to the display:
      - Nvidia's/AMD's private APIs: requires an Nvidia/AMD GPU with recent drivers and a minimum of Windows 7. This API dynamically switches HDR on and off when an HDR video is started and stopped, allowing for perfect SDR and HDR playback, and it requires that Windows 10 HDR and WCG be deactivated. AMD also requires two additional settings: use Direct3D 11 for presentation (Windows 7 and newer) in general settings and 10-bit output from madVR (8 bits at the GPU is possible). You do not need to select 10-bit output for Nvidia GPUs; dithered 8-bit output is acceptable and sometimes preferable for some displays.
      - Windows 10 API (D3D 11 only): for Intel users; requires Windows 10 and use Direct3D 11 for presentation (Windows 7 and newer). HDR and WCG should be enabled in Windows display settings. This is important because the Windows API will not dynamically switch in and out of HDR mode: it is all or nothing, all HDR all the time.
  • tone map HDR using pixel shaders: HDR is converted to SDR and the display receives SDR content. With the sub-option output video in HDR format checked, the display instead receives HDR content, with the HDR source tone mapped/downconverted to the target specs.
  • tone map HDR using external 3DLUT: The display receives HDR or SDR content with the 3D LUT downconverting the HDR source to some extent. The 3D LUT input is R'G'B' HDR (PQ). The output is either R'G'B' SDR (gamma) or R'G'B' HDR (PQ). The 3D LUT applies some tone and/or gamut mapping.

Most users with HDR displays will select passthrough HDR to display to take advantage of the display's additional brightness and optimized EOTF curve for HDR sources. madVR offers pixel shader tone mapping, but it may not be the best option for those with newer HDR displays due to incompatibility or the need to convert to SDR and balance HDR and SDR within the same display mode.

Signs Your Display Has Correctly Switched into HDR Mode:
  • An HDR icon typically appears in a corner of the screen;
  • Backlight in the Picture menu will go up to its highest level;
  • Display information should show a BT.2020 PQ SMPTE 2084 input signal;
  • The first line of the madVR OSD will indicate NV HDR or AMD HDR.

*A faulty video driver can prevent the display from correctly entering HDR mode. If this is the case, it is always worth rolling back to an older, working driver.

List of Video Drivers that Support HDR Passthrough

tone map HDR using pixel shaders

Pixel shader tone mapping rescales HDR sources before output by applying madVR's own tone mapping curve and proprietary gamut mapping algorithms. Tone mapping and gamut mapping combine to lower the dynamic range of an HDR source and bring any outlying pixels into the visible dynamic range of the target display.

How RGB Values Are Created  — Y'CbCr to RGB Conversion and the Relationship to Tone Mapping and Gamut Mapping

Pixel Shader HDR Output Formats:

Default: SDR Gamma

The default option converts HDR PQ to SDR gamma. madVR redistributes values along the SDR gamma curve with added dithering to mimic the response of a PQ EOTF. This is HDR converted at the source side rather than the display side to replace a display's HDR picture mode.

Advantages:
  • madVR has complete control over the output, especially if the calibrated gamma of the display matches the output gamma from madVR. There is no extra processing by the display;
  • The relative SDR gamma curve makes it easy to configure the brightness to your liking by adjusting the target peak nits up or down;
  • The dynamic tone mapping used by madVR is able to adjust the tone curve throughout the presentation without any interference from the display. Most HDR displays use static tone mapping based on a single HDR10 static metadata value (MaxCLL or mastering display peak). This most often results in a darker image than desired and some loss of detail as the fixed tone curve is unable to react to the constantly varying peak brightness of the video. This metadata is also known to be wholly unreliable in representing the average peak brightness of the source.

Disadvantages:
  • HDR -> SDR is more difficult to configure than simple HDR10 passthrough to get a good result;
  • The SDR gamma curve is relative with many choices for display gamma and the PQ EOTF is absolute, which again makes it more difficult to configure to get a good result;
  • The result is limited by the calibrated peak brightness of the display in SDR output. If the display uses a strict 100-120 nits SDR calibration, low-brightness HDR may be the result.

Best Usage Cases:

4K HDR projector owners with a BT.2020/DCI-P3 gamut may find this option particularly appealing as there is little difference in brightness between the projector's SDR and HDR modes, so balancing the two formats with one calibration makes sense (you can even balance the SDR and HDR gamuts in calibration). The method offered by madVR is customizable and should offer better quality than the typical static and inaccurate tone mapping used by current 4K HDR projectors. SDR display owners may also be interested in watching HDR content on older SDR screens. Think of this as a good option for SDR display owners without a dedicated HDR mode and HDR display owners with limited HDR brightness.

HDR -> SDR could be appealing for those with a bright HDR display, but relative to the display’s HDR mode, the image may seem too dim. SDR content looks best at close to 100 nits and HDR content looks best at 1,000 nits and brighter — it may be difficult to choose a single backlight setting given the sizable mismatch in peak brightness between SDR and HDR formats. If convenient, you might consider configuring a separate SDR picture mode just to handle HDR content.

output video in HDR format: PQ EOTF

Checking output video in HDR format outputs in the original PQ EOTF. madVR's tone mapping is applied and the metadata is altered to reflect the lowered RGB values after mapping. So the display receives the mapped RGB values along with the PQ transfer function and correct metadata to trigger its HDR mode. madVR does some pre-tone mapping for the display.

Advantages:
  • This can work with the existing tone curve of the display, which may already be fairly well-optimized;
  • You have less control over the brightness, but it is easier to select a target peak nits because you are only doing some processing for the display and not all of it;
  • The output can still take advantage of the extra peak brightness offered by the display's HDR mode to keep the overall APL higher. It can also complement a display that uses dynamic tone mapping that should react to the precise brightness of the remapped RGB values rather than have to interpret the static metadata output by madVR;
  • madVR uses some extra processing to render compressed parts of the image more sharply than the display could on its own, with possibly more accurate color rendition.

Disadvantages:
  • madVR only has partial control over the tone mapping. The PQ tone curve used by madVR and the display won't be identical, so there is some double processing involved. This may result in inaccurate colors if the processing of the display doesn't match well with madVR;
  • There is less control over the brightness;
  • It is more difficult for madVR's dynamic tone mapping to work when the display uses static tone mapping (one curve for the whole movie). The output from madVR will still vary dynamically depending on the brightness of each scene in the movie, but the display may force one consistent tone curve;
  • It is difficult to predict how the display will react to the metadata output by madVR making it difficult to configure the target peak nits. Certain changes to the metadata may result in a very different selection of tone curves by the display (or even no tone curve with some displays).

Best Usage Cases:

Bright LED displays (peak brightness of 1,000 nits and higher) typically use a soft roll-off at the display peak rather than a standard tone mapping curve, leading to a clipping of very bright highlight detail. madVR can bring back this bright detail into the frame without lowering the brightness of the rest of the movie. Any HDR display may benefit if madVR is able to consistently force it to select the same ideal internal curve by sending identical metadata and rescaled values for all sources. As mentioned, displays with dynamic tone mapping should react directly to the source values provided by madVR and may work more harmoniously with this setting.

This method is only advisable if it works with your display's own tone mapping, largely based on how it handles the altered static metadata provided by madVR (lowered MaxCLL and mastering display peak). The only way to determine if this option is compatible with your display without producing incorrect colors is by experimentation with different target peak nits.

*It has been determined that recent Nvidia drivers are sending incorrect HDR metadata to the display when madVR is set to passthrough HDR content. A fix for future drivers may be on the way, while drivers up to v398.11 appear to be sending the metadata correctly.

More Information on False HDR Metadata Being Sent by Nvidia GPUs

The best method to reliably defeat the display's tone mapping is to use the option tone map HDR using external 3DLUT and create a 3D LUT with display calibration software, which will trigger the display's HDR mode and adjust its color balance as it reacts to HDR metadata by using corrections provided by the 3D LUT table. 

Static HDR 3D LUTs can be created in various configurations to replace the selection of static curves used by the display: such as 500 nits, 1,000 nits, 1,500 nits and 4,000 nits. HDR 3D LUT curve selection is automated with HDR profile rules referencing hdrVideoPeak.

Example of image tone mapped by madVR
10,000 nits BT.2020 -> 150 nits BT.709 (425 target nits):

Image

Introduction to HDR Tone Mapping

The PQ SMPTE 2084 transfer curve is an absolute luminance scale that extends from a value of 0 to 1,023, where 0 = 0 nits and 1023 = 10,000 nits on a 10,000 nit mastering monitor. Current HDR displays with lesser peak white output are asked to represent this curve as faithfully as possible while recognizing any deficiencies in peak luminance. Compressing the luminance curve of an HDR source is called tone mapping and display manufacturers were not provided with a universal tone mapping solution when the HDR10 standard was introduced. While most current content can be represented faithfully on a display with 1,500 nits peak output, the majority of HDR displays will have to make frequent use of a tone mapping curve.

What tone mapping describes is the shape of the PQ curve used by an HDR display. The PQ curve stretches all the way from 0 to 10,000 nits. A display that clips will follow the original curve as far as possible and cut off any pixels beyond the luminance capabilities of the display. Tone mapping, on the other hand, starts a compression curve at a fixed knee point on the PQ curve that rolls off gradually towards the peak luminance of the display. Rather than create flat image areas, the extra space provided by the gradual roll-off allows pixels above the display's brightness to be mapped to a lower luminance in a manner that creates a perception that the entire image is contained within the display range. This maintains color relationships and accuracy as well as the creator's intent at the expense of slightly rolling off the APL (Average Picture Level).

Bright displays require less tone mapping than dim displays. A display such as a projector that may reach a maximum of 100 nits (usually less) will have to start the tone curve well below the original PQ curve, which will dim the majority of the source values. Placing the knee point lower on the PQ curve helps to retain the high dynamic range and contrast of the source and distinguish it from an SDR presentation. As the source brightness is lowered, steps between luminance values are compressed, which can lead to a loss of some fine texture detail (a loss of visible luminance steps) if values are not spaced out appropriately.

All displays need an active gamma curve, so tone mapping curves are almost always in effect for any display with less brightness than a mastering display. Displays with dynamic tone mapping may change where the compression curve (knee point) begins and alter the severity of the roll-off — increasing or decreasing the roll-off per scene or even per frame.
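
The PQ transfer function referenced throughout this section is published in SMPTE ST 2084 and can be written down directly. A sketch of the EOTF (signal to nits) and its inverse, using the standard constants:

```python
# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(signal):
    """PQ-encoded signal in [0, 1] -> absolute luminance in nits."""
    p = signal ** (1 / M2)
    return 10000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def pq_inverse_eotf(nits):
    """Absolute luminance in nits -> PQ-encoded signal in [0, 1]."""
    y = (nits / 10000) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2
```

A signal value of 1.0 (code 1023 in 10-bit) decodes to exactly 10,000 nits, and 0.0 to 0 nits, matching the absolute scale described above.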

Tone Mapping Curve Summarized:

Follow the PQ Curve -> Knee Point -> Tone Curve w/ Roll-off -> Display Peak Brightness 

Tone Curve: reference white at 90 nits w/ roll-off at 700 nits:
Image


Example 100 nit PQ tone curve for a projector

Discussion on the Merits of Deviating from the PQ Curve

Tone mapping with correct color is important. Color perception is based on luminance, saturation and hue. Compressing the luminance of pixels will lead to changes in color saturation that must be adjusted, and some pixels will not fit the destination color gamut after tone mapping. Tone mapping is designed to compress luminance, and luminance does not scale linearly (1:1) with RGB gamuts, producing many out-of-gamut pixels. Displays with bad tone mapping mishandle out-of-gamut colors by clipping them at the gamut edge or by failing to preserve the hue: the original ratio of red, green and blue. This can lead to noticeable hue shifts (for example, orange might become yellow, and blue could become purple). Undersaturated highlights are also created when a display overly desaturates these colors.

Examples of Hue Shifts Caused by Poor Gamut Mapping:

La La Land Correct Hue
La La Land Incorrect Hue

Life Untouched HDR Demo Correct Hue
Life Untouched HDR Demo Incorrect Hue 

Configuring madVR's Tone Mapping

target peak nits: [200] Peak nits is the target display brightness for tone mapping in PQ nits. This value is not meant to correlate to the actual brightness of the display when converting to SDR gamma. HDR content requires you to adapt the source to your display, so choose any value that looks best to you. To preserve high dynamic range, increase the nits target above the actual peak nits of the display to retain contrast. This will best preserve the HDR presentation and give the image greater detail and depth.

HDR -> SDR: Choosing a target peak nits

output video in HDR format requires an estimate of the absolute brightness of the display with a goal of finding a target value that prevents the display from doing any additional tone mapping or a value that is above the display’s peak brightness. Unlike SDR output, this value is absolute and not stretched by the relative SDR gamma curve. Values below the actual brightness of the display should make the image darker because tone mapping will compress the source to an increasingly lower peak brightness.

With output video in HDR format checked, only scenes above the target peak nits receive tone mapping. If set to 700 nits, for example, the majority of current HDR content would be output 1:1 and only the brightest scenes would have tone mapping applied (you can reference the brightness of any scene in the madVR OSD). Tone mapping should be most helpful with any HDR display that clips bright highlight detail.

High Processing

tone mapping curve: [BT.2390] BT.2390 is the default curve. The entire range will be compressed to match the set target peak nits.

A tone mapping curve is necessary because clipping the brightest information would cause the image to flatten and lose detail wherever pixels exceed the display capabilities. Tone mapping applies an S-shaped curve to create different amounts of compression to pixels of different luminances. The strongest compression is applied to the highlights while adjusting other pixels relative to each other to retain similar contrast between bright and dark detail relative to the original image. 

clipping is automatically substituted if the content peak brightness is below the target peak nits.

Report BT.2390 comes from the International Telecommunication Union (ITU).
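
The shape of the BT.2390 curve can be sketched in a few lines. It works in normalized PQ space: a knee point is placed at KS = 1.5 × peak − 0.5, values below the knee follow the PQ curve 1:1, and values above it are compressed by a Hermite spline that rolls off smoothly to the display peak. This follows the EETF published in the BT.2390 report (source black/white normalization omitted; madVR's exact implementation may differ):

```python
def bt2390_eetf(e, max_lum):
    """Tone map a normalized PQ value `e` in [0, 1] toward a display
    whose peak, in the same normalized PQ space, is `max_lum`."""
    ks = 1.5 * max_lum - 0.5      # knee start
    if e < ks:
        return e                  # below the knee: follow PQ 1:1
    t = (e - ks) / (1 - ks)       # position within the roll-off spline
    return ((2 * t**3 - 3 * t**2 + 1) * ks
            + (t**3 - 2 * t**2 + t) * (1 - ks)
            + (-2 * t**3 + 3 * t**2) * max_lum)
```

The brightest possible source value (e = 1.0, i.e. 10,000 nits) lands exactly on the display peak, while everything below the knee passes through untouched.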

clipping
No tone mapping curve is applied. All pixels higher than the set target nits are clipped and those lower are preserved 1:1. Pixels that clip will often shift hue. Obviously, this is not recommended if you want to preserve specular highlight detail.

arve custom curve
With the aid of Arve's Custom Gamma Tool, it is possible to create custom PQ curves that are converted to 2.20, 2.40, BT.1886 or PQ EOTFs. The Arve Tool is designed to work with a JVC projector via a network connection, but it may be possible to manually adjust the curve without direct access to the display by changing the curve parameters in Python and saving the output for madVR. Be prepared to do some reading as this tool is complicated.

Instructions:
color tweaks for fire & explosions: [balanced] Fire is mostly a mixture of red, orange and yellow hues. After tone mapping and gamut mapping are applied, yellow shifts towards white, which can cause fire and explosions to appear overly red. To correct this, madVR shifts bright red/orange pixels towards yellow to put a little yellow back into the flames and make fire appear more impactful. All bright red/orange pixels are affected, so this shift may not be desirable in every scene, but it is not always noticeable.

high strength
Bright red/orange out-of-gamut pixels are shifted towards yellow by 55.55% when gamut mapping is applied to compensate for the loss of yellow hues in fire and flames caused by tone mapping. This is meant to improve the impact of fire and explosions directly, but will have an effect on all bright red/orange pixels.

balanced [Default]
Bright red/orange out-of-gamut pixels are shifted towards yellow by 33.33% (and only the brightest pixels) when gamut mapping is applied to compensate for the loss of yellow hues in fire and flames caused by tone mapping. This is meant to improve the impact of fire and explosions directly, but will have an effect on all bright red/orange pixels.

disabled
All out-of-gamut pixels retain the same hue as the tone mapped result when moved in-gamut.

High - Maximum Processing

highlight recovery strength: [none] Detail in compressed image areas can become slightly smeared due to a loss of visible luminance steps. When adjacent pixels with large luminance steps become the same luminance or the difference between those steps is drastically reduced (e.g. a difference of 5 steps becomes a difference of 2 steps), a loss of texture detail is created. madVR attempts to correct this by simply adding back some detail lost in the luminance channel. The effect is similar to image sharpening applied to certain frequencies with the potential to give the image an unwanted sharpened appearance at higher strengths. 

A choice of strengths from low to are you nuts!? is offered. Higher strengths may be more desirable at lower target peak nits because the image can become increasingly flat. Expect a significant performance hit; only the fastest GPUs are advised to enable it with 4K 60 fps content.

Batman v Superman:
highlight recovery strength: none
highlight recovery strength: medium

none [Default]
highlight recovery strength is disabled.

low - are you nuts!?
Recovered frequency width varies from 3.25 to 22.0. Resource use remains the same across all strengths.

measure each frame's peak luminance: [Checked] Overcomes a limitation of HDR10 metadata, which provides a single value for peak luminance but no per-scene or per-frame dynamic metadata. madVR can measure the brightness of each pixel in each frame and provide a rolling average, as reported in the OSD. The brightness range of an HDR video varies from scene to scene. By measuring the peak luminance of each frame, madVR can adjust the tone mapping curve subtly throughout the video to provide optimized highlight detail. This is like having HDR10+ dynamic metadata without waiting for HDR10+ releases.

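The idea behind the measurement can be illustrated with a toy sketch: record each frame's measured peak and report a rolling average. The window size and simple averaging here are assumptions for illustration, not madVR's actual algorithm:

```python
from collections import deque

def rolling_peak_average(frame_peaks_nits, window=4):
    """Yield a rolling average of per-frame peak luminance values."""
    recent = deque(maxlen=window)
    for peak in frame_peaks_nits:
        recent.append(peak)
        yield sum(recent) / len(recent)

# Example: a bright highlight appears mid-scene, and the average adapts
peaks = [400, 400, 400, 1200, 1200, 400]
averages = list(rolling_peak_average(peaks, window=4))
```
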
Note: The checkbox compromise on tone & gamut mapping accuracy under trade quality for performance is checked by default. Gamut mapping is applied without hue and saturation correction when this is enabled. Unless you have limited processing resources available, you'll want to uncheck this to get the full benefit of tone mapping. 

Preview of Functionality of the Next Official madVR Build and Current AVS Forum Test Builds: 
Instructions: Using madMeasureHDR to create dynamic HDR10 metadata

HDR -> SDR: The following should also be ticked in devices -> calibration -> this display is already calibrated:
  • primaries / gamut (BT.709, DCI-P3 or BT.2020)
  • transfer function / gamma (pure power curve 2.xx)

If no calibration profile is selected (by ticking disable calibration controls for this display), madVR will use the target nits value to map HDR content to BT.709 and pure power curve 2.20.

When tone map HDR using pixel shaders is selected with SDR output, madVR will use any 3D LUTs attached in calibration. HDR is converted to SDR and the 3D LUT is left to process the SDR output as it would any other video.

In addition to configuration of madVR, HDR content requires:
  • LAV Filters 0.68+: To pass HDR metadata to madVR;
  • Unencrypted HDR Content: DRM-free content that can be played by the media player.

Image Gallery of HDR -> SDR Tone Mapping (150 nits - BT.709):
[Image: 45618507845_ae7aeae965_o.png]

*Flickr compresses the images when viewed through a browser. There isn't anything I can do about that. The images themselves are also slightly compressed.

Tip: To display the active HDR mode selected in madVR and the known HDR metadata for the source, create an empty folder in the madVR installation folder named "ShowHdrMode."

4K HDR Demos from YouTube with MPC-BE

HDR10 Metadata Explained

The Basics of How HDR Works

Understanding UHDTV Displays with PQ/HLG HDR and WCG

HDR (High Dynamic Range) Explained

TVs Are Only Getting Brighter, but How Much Light Is Enough?

Some Great Comparison Photos of HDR and SDR and Information on HDR Mastering

White Paper Discussion on Tone Mapping and Gamut Mapping

Screen Config

screen config becomes available when Digital Projector is selected as the device type. Use it to adjust the projected image to fit evenly on your screen. Projectors outputting to standard aspect ratios shouldn't require any adjustment.

Black bar cropping and zoom control are also offered (as discussed later). These features can provide value for users with Constant Image Height (CIH), Constant Image Width (CIW) or Constant Image Area (CIA) setups. The more common CIH projection attempts to show all content on a 2.35:1 ratio (extra-wide) screen. Content with a 16:9 (1.78:1) ratio fills the height of the screen but not its sides, while movies with an aspect ratio of 2.35:1 are zoomed to fill both the height and width of the screen. Thus, all content fills the height of the screen, and unwanted black bars never alter the height of the image.

madVR Explained:

define visible screen area by cropping masked borders
Allows the image to be scaled to a lower resolution by placing black pixels on the missing borders to simulate screen masking. madVR will maintain this framing when cropping black bars with its zoom control. Only active when fullscreen.

move OSD into active video area
Moves the madVR OSD into the defined screen area. madVR can also move some video player OSDs depending on the API it uses.

activate lens memory number
Sends a command to a JVC or Sony projector to activate an on-projector lens memory number.

anamorphic lens
Allows output to non-square pixels. Should be checked if your projector uses an anamorphic lens to allow for a vertical stretch.

Anamorphic lenses stretch the image horizontally to fill the width of a 2.35:1 screen. This leaves the projector to zoom the image vertically to fill the top and bottom of the frame and eliminate any black bars. A standard projector lens, by comparison, leaves a post-cropped image needing a resize in both height AND width to achieve the same effect. The advantage of anamorphic projection is a brighter image with less visible pixel structure. The smaller pixel structure is a result of the pixels being flattened before they are enlarged.

stretch factor
This is the ratio of vertical stretch applied by madVR. Vertical stretching should only be enabled for madVR or the projector, not both. madVR takes the vertical stretch into account when image scaling, so no extra scaling operation is performed. The vertical zoom performed by madVR should be of higher quality than most projectors.

More on (CIH) Constant Image Height Projection
#4
2. PROCESSING
  • Deinterlacing
  • Artifact Removal
  • Image Enhancements
  • Zoom Control

Deinterlacing

Image

Deinterlacing should be an automatic process if your sources are flagged correctly. It is becoming increasingly uncommon to encounter interlaced sources, so deinterlacing shouldn't be a significant concern for most. We are mostly talking about DVDs and broadcast HDTV. Native interlaced sources can put a large strain on madVR because the frame rate is doubled after deinterlacing.

Doom9 Forum: Deinterlacing is the process of converting interlaced video, such as common analog television signals or 1080i format HDTV signals, into a non-interlaced (progressive) form. Interlaced sources are measured in fields per second, which is equal to double the stored frame rate.

Deinterlacing is applied to content of two types:

Film: Film is photographic material produced for the cinema. It originated at 24 frames/second and has been converted to video, or telecined to 29.97 fps, for showing on 59.94 Hz NTSC TVs. Alternatively, film is sped up 4.2% to 25 fps for showing on 50 Hz PAL TVs.

Video: This is content shot on video for TVs. The frame rate used reflects the region in which it is produced. NTSC content, found in most of North and South America and East Asia, employs 59.94 half-frames, or fields, per second and 525 horizontal lines per frame, or 262.5 per field. PAL is a European TV format using 50 half-frames, or fields, per second. Both NTSC and PAL interlaced content is broadcast at 1080i with a frame rate of 59.94i (= 29.97 fps) or 50i (= 25 fps).

Deinterlacing a video source captured at 60 fields per second and stored as 29.97 fps interlaced will result in a doubling of the frame rate (29.97 x 2 = 59.94 fps) after deinterlacing. An interlaced signal shows a single frame of video as two half-frames. A good deinterlacer adds new frames to match each half-frame.

Removing interlaced frames from 24 fps film telecined to 29.97 fps, such as NTSC DVD and broadcast movies and television, is possible by using inverse telecine (IVTC). IVTC restores the original 24p presentation.
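
The 3:2 pulldown pattern and its inverse can be illustrated with a simplified sketch (real IVTC matches fields across frames rather than comparing whole frames, so treat this as the cadence arithmetic only):

```python
# Sketch of 3:2 pulldown and its inverse (IVTC). Four film frames (A-D) become
# ten interlaced fields (five video frames), turning 24 fps into ~30 fps.
film = ["A", "B", "C", "D"]

def telecine(frames):
    """3:2 pulldown: alternate 2 fields and 3 fields per film frame."""
    fields = []
    for i, f in enumerate(frames):
        fields += [f] * (2 if i % 2 == 0 else 3)
    return fields  # 10 fields = 5 interlaced video frames

def ivtc(fields):
    """Simplified inverse telecine: drop repeated fields to recover the film frames."""
    frames = []
    for f in fields:
        if not frames or frames[-1] != f:
            frames.append(f)
    return frames

fields = telecine(film)
print(fields)        # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(ivtc(fields))  # ['A', 'B', 'C', 'D'] -- the original 24p cadence
```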

Low Processing

automatically activate deinterlacing when needed
Deinterlaces video based on the content flag.

if in doubt, activate deinterlacing
Always deinterlaces if content is not flagged as progressive.

if in doubt, deactivate deinterlacing
Only deinterlaces if content is flagged as interlaced.

Low Processing

disable automatic source type detection
Overrides automatic deinterlacing with setting below.

force film mode
Forces inverse telecine (IVTC), reconstructing the original progressive frames from video encoded as interlaced, decimating duplicate frames if necessary. A source with a field rate of 60i (and a frame rate of 30 fps) would be converted to 24p under this method. Software (CPU) deinterlacing is used in this case.

force video mode
Forces DXVA deinterlacing, which uses the GPU’s deinterlacing as set in its drivers.

only look at pixels in the frame center
This is generally thought of as the best way to detect the video cadence to determine whether deinterlacing is necessary and the type that should be applied.

Deinterlacing is best set to automatically activate deinterlacing when needed unless you know the content flag is being read incorrectly by madVR and wish to override it. Note that using inverse telecine (IVTC) on a native interlaced source will lead to artifacts. In video mode, deinterlacing quality is determined by the GPU drivers.

Note: Hardware deinterlacing is not currently possible when D3D11 Automatic (Native) hardware decoding is used. DXVA2 (copy-back) can be substituted or the source can be deinterlaced by the media player.

Deinterlacing Explained

Artifact Removal

Image

Video artifacts are unavoidable byproducts of the capture, production and display process. Even well-produced, high-quality video is subject to common visual artifacts.

The list of potential artifacts can be lengthy:
  • compression artifacts;
  • digital artifacts;
  • signal noise;
  • signal distortion;
  • interlacing artifacts;
  • screen tearing;
  • color banding;
  • screen-door (projection) effect;
  • silk screen (rear projection) effect;
  • rainbow (DLP) effect;
  • camera effects (noise, chromatic aberration and purple fringing);
  • etc.

Not all of these artifacts are inherent to the video source itself, which is the focus of artifact removal in this context. You should know if you are looking at a high-quality source. Even so, artifact removal can be useful in many cases.

The aim of artifact removal algorithms is to be as precise as possible in removing the unwanted artifact while not harming overall image detail. However, some detail loss is possible, whether this is actually noticeable or not. You may choose to skip these settings if you desire the sharpest image possible, but sometimes a cleaner image is preferable to a sharper image.

Every source has some artifacts, but not all sources are offensive. You need to decide if the artifact is common enough to warrant using the filter at all times, or if it is better to turn it on or off for specific sources. Getting rid of an artifact completely can often mean using a high setting, which does not always work well for all sources. In the worst cases, some of the artifact may remain. You may need to lower some other settings to use some of the more demanding filters or create special profiles. It can be a great idea to program these settings to a keyboard shortcut in madVR and enable them when needed.

reduce banding artifacts

Wikipedia: Color banding is a problem of inaccurate color presentation in computer graphics. In 24-bit color modes, 8 bits per channel is usually considered sufficient to render images in BT.709 or sRGB. However, in some cases there is a risk of producing abrupt changes between shades of the same color. For instance, displaying natural gradients (like sunsets, dawns or clear blue skies) can show minor color bands.

Color banding is usually present at three stages:
  • It is present in the source;
  • It was added by the final codec due to low bitrates and/or poor encoding;
  • It was created due to inaccurate color conversions.

Banding created by inaccurate color conversions is addressed by the use of dithering at output. Instead, the debanding filter is concerned with banding which originated in the source from mastering or lossy compression. Display processing such as HDR tone mapping, screen uniformity issues and processing the image at too low of a bit depth can also create banding, but this can't be addressed by madVR's debanding.

Even sources such as 4K UHD and 1080p Blu-rays can display subtle color banding in large gradients or dark scenes. 4K UHD sources are less likely to exhibit banding due to the combination of higher bit depths and better compression codecs. 

Choosing to leave debanding enabled at a low value is usually an improvement in most cases with only the finest details being impacted. In general, the less compression applied to the source, the less likelihood of source banding. 
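
How quantization creates bands, and how dithering hides them, can be shown with a toy gradient. The 8-level quantization below is deliberately coarse to exaggerate the effect:

```python
import random

# Quantizing a smooth 0..1 gradient to very few levels creates visible bands;
# adding a little noise before quantizing (dithering) breaks the bands up.
random.seed(0)
LEVELS = 8  # deliberately low bit depth to exaggerate the effect

gradient = [i / 255 for i in range(256)]

def quantize(v, levels):
    return round(v * (levels - 1)) / (levels - 1)

banded   = [quantize(v, LEVELS) for v in gradient]
dithered = [quantize(min(max(v + random.uniform(-0.5, 0.5) / (LEVELS - 1), 0), 1),
                     LEVELS) for v in gradient]

# The banded image collapses 256 shades into 8 flat steps; the dithered image
# uses the same 8 output levels but preserves the average brightness locally.
print(len(set(banded)), len(set(dithered)))
```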

Demonstration of Debanding

1080p Blu-ray Credits:
Original
Debanding low
Debanding medium
Debanding high

Low - Medium Processing

reduce banding artifacts
Allows madVR to smooth the edges of color bands by recalculating new pixel values for gradients at much higher bit depths.

default debanding strength
Sets the amount of correction from low to high. Higher settings will slightly soften image detail.

strength during fade in/out
Five frames are rendered with correction when a fade is detected. This only applies if this setting is higher than the default debanding strength.

If banding is obviously present in the source, a setting of high/high may be necessary to provide adequate correction. However, this is not a set-it-and-forget-it scenario, as a clean source would be unnecessarily smoothed. A setting of high is considerably stronger than medium or low, so it may be safer to set debanding to low/medium or medium/medium if the majority of your sources are high-quality. A setting of low puts the highest priority on avoiding detail loss while still doing a decent amount of debanding; medium does effective debanding for most sources while accepting only the smallest amount of detail loss; and high removes nearly all banding, even from rather bad sources, with acceptable detail loss, but no more than necessary.

What Causes Banding?

reduce ringing artifacts

Ringing artifacts refer to source ringing — and not ringing caused by video rendering. Source ringing results from resizing a source master with upscaling or downscaling or is a consequence of attempted edge enhancement. This may sound sloppy, but there are many examples of high-quality sources that ship with ringing artifacts. For example, attempting to improve the 3D depth of a Blu-ray with edge enhancement most often leads to these artifacts.

Wikipedia: In signal processing, particularly digital image processing, ringing artifacts are artifacts that appear as spurious signals near sharp transitions in an image. Visually, they appear as bands or "ghosts" near edges. The term "ringing" is used because the output signal oscillates at a fading rate around a sharp transition in the input, similar to a bell after being struck.

Ringing can be introduced in various ways:
  • Image upscaling or downscaling is used during the mastering process;
  • Edge enhancement is applied during the mastering process;
  • The signal is bandwidth-limited, discarding too much information for high frequencies;
  • The video renderer resizes with image upscaling or downscaling;
  • The video renderer applies edge enhancement as a post-process.

madVR focuses on removing ringing added during the mastering process. These halos are different than those created by compression artifacts. Not all sources will display ringing. The deringing filter attempts to be non-destructive to these sources, but it is possible to remove some image detail.
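
The mechanism behind resize-induced ringing can be demonstrated directly: a windowed-sinc resampler such as Lanczos has negative kernel lobes, so interpolating across a hard edge overshoots and undershoots the original levels. A minimal sketch:

```python
import math

# Upsampling a hard edge with a Lanczos3 kernel: the negative lobes of the
# kernel overshoot above 1.0 and undershoot below 0.0 near the edge, and
# those over/undershoots are the bright/dark "ringing" halos.

def lanczos3(x):
    if x == 0:
        return 1.0
    if abs(x) >= 3:
        return 0.0
    px = math.pi * x
    return 3 * math.sin(px) * math.sin(px / 3) / (px * px)

def resample(src, t):
    """Evaluate the Lanczos3 interpolation of src at fractional position t."""
    lo, hi = int(math.floor(t)) - 2, int(math.floor(t)) + 3
    num = den = 0.0
    for i in range(lo, hi + 1):
        w = lanczos3(t - i)
        s = src[min(max(i, 0), len(src) - 1)]  # clamp at the borders
        num += w * s
        den += w
    return num / den

edge = [0.0] * 8 + [1.0] * 8                          # a perfectly sharp step edge
out = [resample(edge, t / 2) for t in range(0, 31)]   # 2x upsample
print(max(out), min(out))                             # > 1.0 and < 0.0: ringing
```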

Medium - High Processing

reduce ringing artifacts
Allows madVR to remove source ringing artifacts with a deringing filter.

reduce dark halos around bright edges, too
Ringing artifacts are of two types: bright halos or dark halos. Removing dark halos increases the likelihood of removing valid detail. This can be particularly true with animated content, which makes this a risk/reward setting. It may be a safer choice to focus on bright halos and leave dark halos alone.

Lighthouse Top:
No Deringing
madVR Deringing

DVD Animated:
No Deringing
madVR Deringing

Deringing is up to preference. If you are always noticing halos around objects (particularly, around actor's heads), it can be worth enabling. The filter can be more or less as destructive to a clean source as debanding.

Definitions and Causes of Ringing Artifacts

reduce compression artifacts

A video is compressed by a codec such as HEVC or H.264, which removes pixel data it considers redundant within a frame or across multiple frames. Codecs are impressive math algorithms that hold up surprisingly well even at moderate bitrates. Below a certain bitrate, however, the source will start to deteriorate rapidly, as too much pixel data is lost and can't be recovered. Outside of Blu-ray, few consumer sources (particularly streaming and broadcast sources) maintain high enough bitrates at all times to completely avoid compression artifacts.

Wikipedia: A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes simplified enough to be stored within the desired disk space or be transmitted (or streamed) within the bandwidth limitations (known as a data rate or bitrate for media that is streamed). If the compressor could not reproduce enough data in the compressed version to reproduce the original, the result is a diminishing of quality, or introduction of artifacts. Alternatively, the compression algorithm may not be intelligent enough to discriminate between distortions of little subjective importance and those objectionable to the viewer.

Common compression artifacts:
  • Posterizing (banding);
  • Contouring (related to banding);
  • Ringing;
  • Staircase noise (aliasing) along curving edges;
  • Blockiness in "busy" regions (block boundary artefacts, sometimes called macroblocking, quilting, or checkerboarding).

High - Maximum Processing

reduce compression artifacts
A shader designed to remove blocking and ringing (and some noise) caused by lossy video compression. This type of correction is beneficial for sources with low bitrates. The bitrate at which compression artifacts occur depends on a combination of factors such as the source bit depth, frame rate, input and output resolution and compression codec.

Bitrates: Constant Rate Factor Encoding Explained (0 lossless -> 51 terrible quality)

strength
The amount of correction applied. A strength value of 2-8 is recommended for most sources with lower values being the safest choice to preserve fine details.

quality
There are four quality settings: low, medium, high and very high. Each level will alter the effectiveness of the algorithm and the stress put on the GPU.

process chroma channels, too
By default, reduce compression artifacts works on the luma (black and white) channel only. Enabling this includes the chroma (color) layer in the algorithm's pass. Keep in mind, this setting almost doubles the resources used by the algorithm and removing chroma artifacts may be overkill. The soft chroma layer makes compression artifacts harder to notice.

activate only if it comes for free (as part of NGU sharp)
Only applies RCA when NGU Sharp medium, high or very high is used to upscale the image. NGU Sharp and RCA are fused together with no additional resource use.

NGU Sharp medium fuses RCA medium quality, NGU Sharp high fuses RCA high quality and NGU Sharp very high fuses RCA very high quality. The strength value of RCA is left to the user. This only applies when image upscaling.

Animated:
Original
NGU Sharp very high
NGU Sharp very high + RCA very high / strength:8

Face Close-up:
NGU Sharp very high
NGU Sharp very high + RCA very high / strength:12

The free version of RCA can be worth using because compression artifacts are so common. It also works well as a general denoiser for higher-quality sources.

Bad macroblocking cannot be adequately addressed by this shader. It seems to be designed more to remove the type of artifacts found in something such as an iTunes download. Film grain can also be smoothed, which will remove some image detail. Common sources with compression artifacts tend to be torrents and Internet streaming video. This is one you may want to map to your keyboard to use with your compressed or noisy sources.

Note: GPU strain must be considered when enabling RCA. It is very hard on the GPU if not used for free as part of image upscaling (especially at high and very high). So be warned!

Video Compression Artifacts Explained in Pictures

reduce random noise

Wikipedia: Image noise is random variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable byproduct of image capture that obscures the desired information.

Heavy film grain and images with excessive noise can be bothersome to many. The denoising filter is the best option to address this problem in exchange for some acceptable loss of fine detail.

High - Maximum Processing

reduce random noise
Removes all video noise and grain while attempting to leave the rest of the image undisturbed.

strength
Consider this a slider between preserving image detail and removing as much image noise as possible.

process chroma channels, too
reduce random noise focuses on the luma (black and white) channel by default. To include the chroma (color) layer, check this setting. Remember, you are almost doubling the resources used by the algorithm and chroma noise is much harder to see than luma noise.

It is possible to remove some noise with reduce compression artifacts, but RRN is much better at this task. Typically, reduce random noise is most effective when used with low strength values. Note that some image detail in the foreground is blurred to remove noise from the background. Such is the sometimes indiscriminate nature of denoising/degrain filters.

Saving Private Ryan:
Original
Denoising strength: 2
Denoising strength: 3
Denoising strength: 4
Denoising strength: 5

Definitions of Image Noise

Image Enhancements

Image

image enhancements are not used to remove artifacts, but are instead available to sharpen the image pre-resize. These shaders are applied before upscaling or to sources shown at their native resolution (e.g., 1080p at 1080p, or 4K UHD at 4K UHD). This can be a way to bring out more detail in the source, which can make the image more pleasing or more artificial, depending on your tastes. Soft video footage is typically sharpened/enhanced in post-production, but some sources can still appear soft. There are also those who like the additional depth and texture provided by sharpening shaders and prefer to have them enabled with all sources. If you find the image is too soft despite the use of sharp upscaling, image sharpening may be desirable.

Effective use of sharpening is about finding the right balance of enhancement without oversharpening. Noticeable enhancement of grain or visible halos or ringing around edges are two signs the image may be oversharpened.

image enhancements are not recommended for content that needs to be upscaled. Pre-resize sharpening will show a stronger effect than sharpening applied after resize like that under upscaling refinement. In many cases, this will lead to an image that is oversharpened and less natural in appearance.

You may choose to combine the shaders together to hit the image from different angles.

Saving Private Ryan:
Native Original
sharpen edges (4.0) + AR
crispen edges (3.0) + AR
LumaSharpen (1.50) + AR
AdaptiveSharpen (1.5) + LL + AR

Some Things to Watch for When Applying Sharpening to an Image

Medium Processing

activate anti-bloating filter
Reduces the line fattening that occurs when sharpening shaders are applied to an image. This uses more processing power than anti-ringing, but has the effect of blurring oversharpened pixels to produce a more natural result that better blends into the background elements.

Applies to LumaSharpen, sharpen edges and AdaptiveSharpen. Both crispen edges and thin edges are "skinny" by design and are omitted.

Low Processing

activate anti-ringing filter
Applies an anti-ringing filter to reduce ringing artifacts caused by aggressive edge enhancement. Uses a small amount of GPU resources and reduces the overall sharpening effect. Anti-ringing should be checked with all shaders as each will produce varying levels of ringing.

Applies to LumaSharpen, crispen edges, sharpen edges and AdaptiveSharpen.

Low Processing

enhance detail

Doom9 Forum: Focuses on making faint image detail in flat areas more visible. It does not discriminate, so noise and grain may be sharpened as well. It does not enhance the edges of objects but can work well with line sharpening algorithms to provide complete image sharpening.

LumaSharpen

SweetFX WordPress: LumaSharpen works its magic by blurring the original pixel with the surrounding pixels and then subtracting the blur. The end result is similar to what would be seen after an image has been enhanced using the Unsharp Mask filter in GIMP or Photoshop. While a little sharpening might make the image appear better, more sharpening can make the image appear worse than the original by oversharpening it. Experiment and apply in moderation.
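
The blur-and-subtract idea described in that quote can be sketched in one dimension. This is an illustration of the general unsharp-mask technique, not LumaSharpen's actual shader code:

```python
# Minimal 1D unsharp-mask sketch: blur the signal, subtract the blur from the
# original, and add the scaled difference back.
def box_blur(signal, radius=1):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, strength=1.0):
    blurred = box_blur(signal)
    return [s + strength * (s - b) for s, b in zip(signal, blurred)]

edge = [0.2] * 5 + [0.8] * 5
sharp = unsharp_mask(edge, strength=1.5)

# The step between the two sides is exaggerated: the value just before the
# edge dips under 0.2 and the value just after it rises over 0.8 -- which is
# also why oversharpening produces halos.
print(sharp)
```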

Medium Processing

crispen edges

Doom9 Forum: Focuses on making high-frequency edges crisper by adding light edge enhancement. This should lead to an image that appears more high-definition.

thin edges

Doom9 Forum: Attempts to make edges, lines and even full image features thinner/smaller. This can be useful after large upscales, as these features tend to become fattened after upscaling. May be most useful with animated content and/or used in conjunction with sharpen edges at low values.

sharpen edges

Doom9 Forum: A line/edge sharpener similar to LumaSharpen and AdaptiveSharpen. Unlike these sharpeners, sharpen edges introduces less bloat and fat edges.

AdaptiveSharpen

Doom9 Forum: Adaptively sharpen the image by sharpening more intensely near image edges and less intensely far from edges. The outer weights of the laplace matrix are variable to mitigate ringing on relative sharp edges and to provide more sharpening on wider and blurrier edges. The final stage is a soft limiter that confines overshoots based on local values.

General Usage of Image enhancements:

Each shader serves a different purpose. It may be desirable to match an edge sharpener with a detail enhancer such as enhance detail. The two algorithms will sharpen the image from different perspectives, filling in the flat areas of an image as well as its angles. A good combination might be:

sharpen edges (AB & AR) + enhance detail

sharpen edges provides subtle line sharpening for an improved 3D look, while enhance detail brings out texture detail in the remaining image.

Zoom Control

Image

The last set of settings is most applicable to projector owners using any form of Constant Image Height (CIH), Constant Image Width (CIW) or Constant Image Area (CIA) projection. zoom control detects and crops black bars that do not contain any visible video. The remaining image can be left alone or zoomed to fit the display aspect ratio. It is a good idea to visit the screen config section in devices before adjusting these settings.

When cropping using zoom control, madVR will resize based on the instruction of the media player. A media player set to 100% / no zoom will not resize a cropped image even when madVR is set to zoom. But a setting of touch window from inside / zoom could interact with the settings in zoom control. Only MPC-HC provides on-demand zoom status. All other media players should be set to notify media player about cropped black bars to communicate with the player and adjust madVR's zoom control to match the output configuration of the media player.

More Detail on Media Player Zoom Notification

madVR is capable of handling videos with multiple aspect ratios such as The Dark Knight, which switches frequently between a 1.78 and 2.35 aspect ratio. These videos can be cropped and resized on-the-fly.

Unfortunately, this process is not standardized and may involve experimentation with the various options below to find a compromise between eliminating all black bars and reducing excessive zooming during playback. Black bar detection is run on the CPU as opposed to the GPU.

Note: Detection of black bars is not currently possible when D3D11 Automatic (Native) hardware decoding is used. DXVA2 (copy-back) should be selected instead until full support is added.

madVR Explained:

disable scaling if image size changes by only
If the resolution needs scaling by the number of pixels set or less, image upscaling is disabled and black pixels are instead added to the right and/or bottom of the image.

move subtitles
This is important when removing black bars. Otherwise, it is possible to display subtitles outside the visible screen area.

automatically detect hard coded black bars
This setting unlocks a number of other settings designed to detect and crop black bars.

Black bar detection detects black bars added to fit video content to an aspect ratio other than the source's, or the small black bars left over from imprecise analog captures. Examples include 16:9 video with black bars on the top and bottom encoded as 4:3 video, or the few blank pixels on the left and right of a VHS capture. madVR can detect black bars on all sides.
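
The principle of black bar detection can be sketched as a row scan over the luma plane. The threshold and frame geometry below are illustrative assumptions, not madVR's actual detection logic:

```python
# Toy black-bar detector: scan rows of a luma plane from the top and bottom and
# stop at the first row whose average brightness exceeds a small threshold.
BLACK_THRESHOLD = 16  # 8-bit limited range: levels at or below ~16 read as black

def detect_bars(luma_rows):
    def is_black(row):
        return sum(row) / len(row) <= BLACK_THRESHOLD
    top = 0
    while top < len(luma_rows) and is_black(luma_rows[top]):
        top += 1
    bottom = 0
    while bottom < len(luma_rows) - top and is_black(luma_rows[-1 - bottom]):
        bottom += 1
    return top, bottom

# A fake 1920x1080 letterboxed frame: 140 black rows, 800 picture rows, 140 black rows.
frame = [[16] * 1920] * 140 + [[120] * 1920] * 800 + [[16] * 1920] * 140
print(detect_bars(frame))  # (140, 140)
```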

if black bars change pick one zoom factor
Sets a single zoom factor to avoid changing the zoom or crop factor of black bars which appear intermittently during playback. When set to which doesn't lose any image content, madVR will not zoom or crop a 16:9 portion of a 4:3 film. Conversely, when set to which doesn't show any black bars, madVR will zoom or crop all of the 4:3 footage by the amount needed to remove the black bars from 16:9 sections.

if black bars quickly change back and forth
This can be used in place of the option above. A limit is placed on how often madVR can change the zoom or crop during playback to remove black bars as they are detected. Without either of these options, madVR will always change the crop or zoom to remove all black bars.

notify media player about cropped black bars
Defines how often the media player is notified of changes to the black bars. Some media players use this information to resize the window.

always shift the image
Determines whether the top or bottom of the video is cropped when zooming.

keep bars visible if they contain subtitles
Disables zooming or cropping of black bars when subtitles are detected as part of the black bar. Black bars can remain visible permanently or for a set period of time.

cleanup image borders by cropping
Crops additional non-black pixels beyond the black bars or on all edges. When set to crop all edges, pixels are cropped even when no black bars are detected.

if there are big black bars
Defines a specific cropping for large black bars. This can include zooming the image to hide the black bars.

zoom small black bars away
This removes black bars by zooming the video slightly. This usually results in cropping a small amount of video information from one edge to maintain the original aspect ratio and resizing to the original display resolution. For example, the bottom of the image is cropped after removing small black bars on the left and right and the video scaled back to its original resolution.

crop black bars
Crops black bars to change the display aspect ratio and resolution. Cropping black bars increases performance as the pixels no longer need to be processed. Profile rules referencing resolution will use the post-crop resolution.
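
The effect of cropping on the resolution and aspect ratio seen by profile rules is simple arithmetic. The 140-pixel bars below are an example value for a 2.40:1 film letterboxed in a 1080p frame:

```python
# Cropping the bars changes both the resolution and the aspect ratio that any
# profile rules will see.
src_w, src_h = 1920, 1080
bar_height = 140                      # black bar on top AND bottom

crop_w, crop_h = src_w, src_h - 2 * bar_height
aspect = crop_w / crop_h

print(f"post-crop resolution: {crop_w}x{crop_h}")   # 1920x800
print(f"post-crop aspect ratio: {aspect:.2f}:1")    # 2.40:1
```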
#5
3. SCALING ALGORITHMS
  • Chroma Upscaling
  • Image Downscaling
  • Image Upscaling
  • Upscaling Refinement

Image

The real fun begins with madVR's image scaling algorithms. This is perhaps the most demanding and confusing aspect of madVR due to the sheer number of combinations available. It can be easy to simply turn all settings to their maximum. However, most graphics cards, even powerful ones, will be forced to compromise somewhere. To understand where to start, an introduction to scaling algorithms from the JRiver MADVR Expert Guide is in order.

“Scaling Algorithms

Image scaling is one of the main reasons to use madVR. It offers very high quality scaling options that rival or best anything I have seen.

Most video is stored using chroma subsampling in a 4:2:0 video format. In simple terms, what this means is that the video is basically stored as a black-and-white “detail” image (luma) with a lower resolution “color” image (chroma) layered on top. This works because the detail image helps to mask the low resolution of the color image that is being layered on top.

So the scaling options in madVR are broken down into three different categories: Chroma upscaling, which is the color layer. Image upscaling, which is the detail layer. Image downscaling, which only applies when the image is being displayed at a lower resolution than the source — 1080p content on a 720p display, or in a window on a 1080p display, for example.

Chroma upscaling is performed on all videos — it takes the half-resolution chroma image, and upscales it to the native luma resolution of the video. If there is any further scaling to be performed; whether that is upscaling or downscaling, then the image upscaling/downscaling algorithm is applied to both chroma and luma.”
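
The storage arithmetic behind that quoted description is easy to verify. In 4:2:0, each of the two chroma planes is stored at half the luma resolution in both axes:

```python
# Rough sample-count arithmetic for 4:2:0 chroma subsampling at 1080p.
luma_w, luma_h = 1920, 1080

luma_samples = luma_w * luma_h                      # full-resolution detail layer
chroma_samples = 2 * (luma_w // 2) * (luma_h // 2)  # Cb + Cr at quarter area each

total = luma_samples + chroma_samples
print(f"chroma share of all samples: {chroma_samples / total:.0%}")  # 33%

# Chroma upscaling is the step that rebuilds the two chroma planes back up to
# 1920x1080 (4:4:4) before the conversion to RGB.
```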

Not all displays are capable of receiving chroma 4:4:4 or RGB inputs and will instead convert the input signal to YCbCr 4:2:2 or 4:2:0. Many displays must downconvert to 4:2:2 to complete their internal video processing. This is even the case with current 4K UHD displays, which advertise 4:4:4 support, but often only in PC or game modes that come with their own shortcomings for video playback. This means some of the chroma pixels are missing and shared with neighboring luma pixels. When converted directly to RGB, this has the effect of lowering chroma resolution by blurring some of the chroma planes.

Chroma 4:4:4 Display Support Test Image — Must Be Viewed 1:1

Drag the image into the MPC window. If you can perfectly read the last two lines (those with red and blue backgrounds, as well as some other lines, like the blue and pink ones), then the chroma subsampling is 4:4:4. Otherwise, the chroma is 4:2:2 or 4:2:0 (most likely 4:2:2). You should also drop your refresh rate to something like 24 Hz to ensure the display is giving its best effort.

HTPC Chroma Subsampling:

(Source) YCbCr 4:2:0 -> (madVR) YCbCr 4:4:4 to RGB -> (GPU) RGB or YCbCr -> (Display) RGB or YCbCr to YCbCr 4:4:4/4:2:2/4:2:0 or RGB -> (Display Output) RGB

Chroma Subsampling Explained in Pictures

One Example of a YCbCr -> RGB Conversion Matrix
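
One such conversion can be sketched using the BT.709 coefficients (Kr = 0.2126, Kb = 0.0722) on normalized full-range values, where Y is in 0..1 and Cb/Cr are centered on 0. This shows the math only; madVR also handles limited-range levels, matrix selection and dithering:

```python
# YCbCr -> RGB with BT.709 luma coefficients.
KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB  # 0.7152

def ycbcr_to_rgb(y, cb, cr):
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG  # solve Y = Kr*R + Kg*G + Kb*B for G
    return r, g, b

print(ycbcr_to_rgb(1.0, 0.0, 0.0))  # neutral white: (1.0, 1.0, 1.0)
```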

Chroma and Image Scaling Options in madVR

The following section lists the chroma upscaling, image downscaling and image upscaling algorithms available in madVR. The algorithms are ranked by the amount of GPU processing required to use each setting. Keep in mind, super-xbr and higher scaling require significant GPU resources (especially when scaling content to 4K). Users with low-powered GPUs should stick with settings labeled Medium or lower.

The goal of image scaling is to replicate what a low-resolution image would look like if it were a high-resolution image. It is not about adding artificial detail or enhancement, but attempting to recreate what the source should look like at a higher or lower resolution.

Most algorithms offer a tradeoff between three factors:
  • sharpness: crisp, coarse detail.
  • aliasing: jagged, square edges on lines/curves.
  • ringing: haloing around objects.

The list below does not have to be considered an absolute quality scale from worst to best. You may have your own preference as to what looks best (sharp/soft), and this should be considered along with the power of your graphics card.

Sample of Scaling Algorithms:
Bilinear
Bicubic
Lanczos4
Jinc

[Default Values]

Chroma Upscaling [Bicubic 60]

Doubles the chroma layer in both directions (vertical and horizontal) to match the native luma layer. Chroma upsampling is a requirement for all videos before converting to RGB:

Y' (luma - 4) CbCr (chroma - 2:0) -> Y'CbCr 4:4:4 -> RGB

Note: If downscaling by a large amount, you may want to check scale chroma separately... in trade quality for performance to avoid chroma upscaling before downscaling.

activate SuperRes filter, strength: Applies a sharpening filter to the chroma layer after upscaling. Use of chroma sharpening is up to preference, although oversharpening chroma information is generally not recommended. A Medium Processing feature.

Minimum Processing
  • Nearest Neighbor
  • Bilinear

Low Processing
  • Cubic
    sharpness: 50 - 150 (anti-ringing filter)

Medium Processing
  • Lanczos
    3 - 4 taps (anti-ringing filter)
  • Spline
    3 - 4 taps (anti-ringing filter)
  • Bilateral
    old - sharp

High Processing
  • Jinc
    3 taps (anti-ringing filter)
  • super-xbr
    sharpness: 25 - 150

High - Maximum Processing
  • NGU
    low - very high
  • Reconstruction
    soft - placebo AR

Comparison of Chroma Upscaling Algorithms

Image Downscaling [Bicubic 150]

Downscales the luma and chroma as RGB when the source is larger than the output resolution:

RGB -> downscale -> RGB downscaled.

scale in linear light (recommended when image downscaling)

Low Processing
  • DXVA2 (overrides madVR processing and chroma upscaling)
  • Nearest Neighbor
  • Bilinear

Medium Processing
  • Cubic
    sharpness: 50 - 150 (scale in linear light) (anti-ringing filter)

High Processing
  • SSIM 1D
    strength: 25% - 100% (scale in linear light) (anti-ringing filter)
  • Lanczos
    3 - 4 taps (scale in linear light) (anti-ringing filter)
  • Spline
    3 - 4 taps (scale in linear light) (anti-ringing filter)

Maximum Processing
  • Jinc
    3 taps (scale in linear light) (anti-ringing filter)
  • SSIM 2D
    strength: 25% - 100% (scale in linear light) (anti-ringing filter)

Image Upscaling [Lanczos 3]

Upscales the luma and chroma as RGB when the source is smaller than the output resolution:

RGB -> upscale -> RGB upscaled.

scale in sigmoidal light (not recommended when image upscaling)

Minimum Processing
  • DXVA2 (overrides madVR processing and chroma upscaling)
  • Bilinear

Low Processing
  • Cubic
    sharpness: 50 - 150 (anti-ringing filter)

Medium Processing
  • Lanczos
    3 - 4 taps (anti-ringing filter)
  • Spline
    3 - 4 taps (anti-ringing filter)

High Processing
  • Jinc
    3 taps (anti-ringing filter)

Image Doubling [Off]

Doubles the resolution (2x) of the luma and chroma independently or as RGB when the source is smaller than the output resolution. This may require additional upscaling or downscaling to correct any undershoot or overshoot of the output resolution:

Y / CbCr / RGB -> Image doubling -> upscale or downscale -> RGB upscaled.

High Processing
  • super-xbr luma & chroma doubling
    sharpness: 25 - 150
    (always to 4x scaling factor)

High - Maximum Processing
  • NGU Anti-Alias luma & chroma doubling
    low - very high
    (always to 4x scaling factor)
  • NGU Soft luma & chroma doubling
    low - very high
    (always to 4x scaling factor)
  • NGU Standard luma & chroma doubling
    low - very high
    (always to 4x scaling factor)
  • NGU Sharp luma & chroma doubling
    low - very high
    (always to 4x scaling factor)

Image

Ranking the Image Downscaling Algorithms (Best to Worst):
  • SSIM 2D
  • SSIM 1D
  • Bicubic150
  • Lanczos
  • Spline
  • Jinc
  • DXVA2
  • Bilinear
  • Nearest Neighbor

What is Image Doubling?

Image doubling is simply another form of image upscaling that results in a doubling of resolution in both the X and Y directions, such as 540p to 1080p, or 1080p to 2160p. Once doubled, the image may be subject to further upscaling or downscaling to match the output resolution. Image doubling produces exact 2x resizes and can run multiple times (4x to 8x). The benefit of image doubling algorithms is that they do a good job of detecting and preserving edges, eliminating the staircase effect (aliasing) caused by simpler resizers. Some of the better image doubling algorithms, like NGU, can also be very sharp without introducing any ringing. Image doubling is generally regarded as the most effective way to upscale an image, and applied techniques such as deep learning and neural networks have helped to further refine these algorithms.

Chroma upscaling is considered a form of image doubling. You are, however, less likely to notice the benefits of image doubling when upscaling the soft chroma layer compared to the sharp luma layer.

Available Image Doubling Algorithms:

super-xbr
  • Resolution doubler;
  • Relies on RGB inputs —  luma and chroma are doubled together;
  • High sharpness, low aliasing, medium ringing.

NGU Family:
  • Neural network resolution doubler;
  • Next Generation Upscaler proprietary to madVR;
  • Uses YCbCr color space — capable of doubling luma and chroma independently.
  • Medium - high sharpness, low aliasing, no ringing.

madshi on how NGU's neural networks work:
Quote:This is actually very near to how madVR's "NGU Sharp" algorithm was designed: It tries to undo/revert a 4K -> 2K downscale in the best possible way. There's zero artificial sharpening going on. The algo is just looking at the 2K downscale and then tries to take a best guess at how the original 4K image might have looked like, by throwing lots and lots of GFLOPS on the task. The core part of the whole algo is a neural network (AI) which was carefully trained to "guess" the original 4K image, given only the 2K image. The training of such a neural network works by feeding it with both the downscaled 2K and the original 4K image, and then the training automatically analyzes what the neural network does and how much its output differs from the original 4K image, and then applies small corrections to the neural network to get nearer to the ideal results. This training is done hundreds of thousands of times, over and over again.

Sadly, if a video wasn't actually downscaled from 4K -> 2K, but is actually a native 2K source, the algorithm doesn't produce as good results as otherwise, but it's usually still noticeably better than conventional upscaling algorithms.
Source

Technical Study on How Neural Networks Are Used to Improve Image Scaling 

NGU Anti-Alias
  • Best choice for low to mid-quality sources with some aliasing or for those who don't like NGU Sharp;
  • Most natural lines, but more blurry than NGU Sharp and less detailed.

NGU Soft
  • Best choice for poor sources with a lot of artifacts or for those who hate sharp upscaling;
  • Softest and most blurry variant of NGU.

NGU Standard
  • Renders softer edges than NGU Sharp, but not nearly as soft as NGU Soft;
  • Similar to NGU Sharp, but a bit blurrier and less detailed.

NGU Sharp
  • Sharpest upscaler in madVR and best choice for high-quality sources with clean lines;
  • Produces the clearest image with the most detail, but can create some plastic images with lower-quality sources and very large upscales.

Note on the comparisons below: the "Original 1080p" images can make for a difficult reference because Photoshop tends to alter image detail significantly when downscaling, and the color is also a little different. They are still useful as a reference for how sharp the upscaled image should appear.

Video Game Poster:
Original 1080p
Photoshop Downscaled 480p
Lanczos3 - no AR
Jinc + AR
super-xbr100 + AR
NGU Anti-Alias very high
NGU Standard very high
NGU Sharp very high

American Dad:
Original
Jinc + AR
super-xbr100 + AR 
NNEDI3 256 neurons + SuperRes (4)
NGU Sharp very high

Wall of Books:
Original 480p
Lanczos3 - no AR
Jinc + AR
super-xbr-100 + AR
NGU Anti-Alias very high
NGU Standard very high
NGU Sharp very high

Comic Book:
Original 1080p
Photoshop Downscaled 540p
Lanczos3 - no AR
Jinc + AR
super-xbr-100 + AR
NGU Anti-Alias very high
NGU Standard very high
NGU Sharp very high

Corporate Photo:
Original 1080p
Photoshop Downscaled 540p
Lanczos3 - no AR
Jinc + AR
super-xbr100 + AR
NGU Anti-Alias very high
NGU Standard very high
NGU Sharp very high

Bilinear (For Nvidia Shield owners)

Image

algorithm quality <-- luma doubling:

luma doubling/quality always refers to image doubling of the Y layer of a Y'CbCr source. This provides the majority of the improvement in image quality, as most image detail resides in the black-and-white luma channel. Prioritize maximizing this value before adjusting other settings.

super-xbr: sharpness: 25 - 150
NGU Anti-Alias: low - very high
NGU Soft: low - very high
NGU Standard: low - very high
NGU Sharp: low - very high

algorithm quality <-- luma quadrupling:

luma quadrupling is doubling performed twice, or a direct quadruple (4x scaling factor).

let madVR decide: direct quadruple - same as luma doubling; double again (super-xbr & NGU Anti-Alias)
double again --> low - very high
direct quadruple --> low - very high

algorithm quality <-- chroma

chroma quality determines how the chroma layer (CbCr) will be doubled to match the luma layer (Y). This is separate from chroma upscaling that is performed on all videos. The chroma layer is inherently soft and lacks fine detail making chroma doubling overkill or unnecessary in most cases. Bicubic60 + AR provides the best bang for the buck here. It saves resources for luma doubling while still providing acceptable chroma quality. Adjust chroma quality last.

let madVR decide: Bicubic60 + AR unless using NGU very high. In that case, NGU medium is used instead.
normal: Bicubic60 + AR
high: NGU low
very high: NGU medium

activate doubling/quadrupling... <-- doubling

Determines the scaling factor when image doubling is activated.

let madVR decide: 1.2x

activate doubling/quadrupling... <-- quadrupling

Determines the scaling factor when image quadrupling is activated.

let madVR decide: 2.4x

if any (more) scaling needs to be done <-- upscaling algo

Image upscaling is applied after doubling if the scaling factor is greater than 2x but less than 4x, or greater than 4x but less than 8x. This is the case if scaling 480p -> 1080p, or 480p -> 2160p, for example. The luma and/or chroma is further upscaled after doubling to fill in any remaining pixels (960p -> 1080p, or 1920p -> 2160p). Upscaling after image doubling is not overly important.

let madVR decide: Bicubic60 + AR unless using NGU very high. In that case, Jinc + AR is used instead.

if any (more) scaling needs to be done <-- downscaling algo

Image downscaling reduces the size of the luma and/or chroma when the doubled result is larger than the target resolution. This is necessary when doubling with scaling factors less than 2x, or quadrupling with scaling factors less than 4x. This is true when scaling 720p -> 1080p, or 720p -> 2160p, for example. Much like upscaling after doubling and chroma quality, downscaling after image doubling is only somewhat important.

let madVR decide: Bicubic150 + LL + AR unless using NGU very high. In that case, SSIM 1D 100% + LL + AR is used instead.
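Putting the doubling and residual-scaling rules together, here is a rough sketch of the decision logic, assuming the "let madVR decide" activation thresholds of 1.2x (doubling) and 2.4x (quadrupling). `doubling_plan` is a hypothetical helper for illustration, not madVR's actual code.

```python
def doubling_plan(src_h, out_h, double_at=1.2, quadruple_at=2.4):
    """Sketch of the doubling decision using the 'let madVR decide'
    thresholds (1.2x doubling, 2.4x quadrupling). Returns the step
    chosen and the residual scale handled afterwards by the
    upscaling/downscaling algo (>1 means upscale, <1 means downscale)."""
    factor = out_h / src_h
    if factor >= quadruple_at:
        step, doubled_h = "quadruple", src_h * 4
    elif factor >= double_at:
        step, doubled_h = "double", src_h * 2
    else:
        return "no doubling", factor
    return step, out_h / doubled_h

print(doubling_plan(720, 1080))   # ('double', 0.75): 1440p -> downscale -> 1080p
print(doubling_plan(480, 1080))   # ('double', 1.125): 960p -> upscale -> 1080p
print(doubling_plan(480, 2160))   # ('quadruple', 1.125): 1920p -> upscale -> 2160p
```

The 720p case is exactly the worked example in the next section: doubling overshoots 1080p, so the downscaling algo finishes the job.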

Example of Image Doubling

Imagine a source scaled 1280 x 720p -> 1920 x 1080p.

This is a scaling factor of 1.5x.

Image

chroma > NGU Sharp (low)

The first entry is the chroma upscaling setting, which scales the half-resolution chroma to match the luma layer:

Y (luma - 4) CbCr (chroma - 2:0) -> YCbCr 4:4:4 -> RGB

luma > NGU Sharp (very high) < SSIM1D100AR

The luma layer (Y) is doubled using NGU Sharp (very high). However, the resulting output (720p -> 1440p) is too large for the target resolution (1080p). Therefore, image downscaling is used to reduce the image (1440p -> 1080p) using the setting from downscaling algo. In this case, SSIM 1D 100% + AR.

Y'CbCr 4:4:4 -> Y -> 1440p -> 1080p

chroma > Bicubic60AR

The upscaled chroma layer (CbCr) is scaled directly from 720p -> 1080p with Bicubic60 + AR to match the doubled luma layer using the chroma quality setting. Rather than waste resources on image doubling, Bicubic60 + AR allows for the use of higher settings for luma quality.

Y'CbCr 4:4:4 -> CbCr -> 1080p

Upscaling Refinement

upscaling refinement is also available to further improve the quality of upscaling.

upscaling refinement applies sharpening to the image post-resize. Post-resize luma sharpening is a means to combat the softness introduced by upscaling. In most cases, even sharp image upscaling is incapable of replicating the image as it should appear at a higher resolution.

To illustrate the impact of image upscaling, view the image below:

Original Castle Image (before 50% downscale)

The image is downscaled 50%. Then, upscaling is applied to bring the image back to the original resolution using super-xbr100. Despite the sharp upscaling of super-xbr, the image appears noticeably softer:

Downscaled Castle Image resized using super-xbr100

Now, image sharpening is layered on top of super-xbr. Note the progressive nature of each sharpener in increasing perceived detail. This can be good or bad depending on the sharpener. In this case, SuperRes occupies the middle ground in detail but is most faithful to the original, as it avoids adding extra detail not found in the original image.

superxbr100 + FineSharp (4.0)

superxbr100 + SuperRes (4)

superxbr100 + AdaptiveSharpen (0.8)

Compare the above images to the original. The benefit of image sharpening should become apparent as the image moves closer to its intended target. In practice, using slightly less aggressive values of each sharpener is best to limit artifacts such as excess ringing and aliasing. But clearly some added sharpening can be beneficial to the upscaling process.

Note: Extra sharpness is usually unnecessary when using NGU Sharp. In fact, the upscaling refinements soften edges and add grain are offered specifically to soften NGU Sharp's upscaling, which can be excessively sharp at high values, particularly with large upscales.

Sharpening shaders share four common settings:

refine the image after every ~2x upscaling step
Sharpening is applied after every 2x resize. This is mostly helpful for large upscales of 4x or larger where the image can become very soft. Uses extra processing for a small improvement in image sharpness.

refine the image only once after upscaling is complete
Sharpening is only applied after the resize is complete.

Medium Processing

activate anti-bloating filter
Reduces the line fattening that occurs when sharpening shaders are applied to an image. This uses more processing power than anti-ringing, but has the effect of blurring oversharpened pixels to produce a more natural result that better blends into the background elements.

Applies to LumaSharpen, sharpen edges and AdaptiveSharpen. Both crispen edges and thin edges are "skinny" by design and are omitted.

Low Processing

activate anti-ringing filter
Applies an anti-ringing filter to reduce ringing artifacts caused by aggressive edge enhancement. Uses a small amount of GPU resources and reduces the overall sharpening effect. Anti-ringing should be checked with all shaders as each will produce varying levels of ringing.

Applies to LumaSharpen, crispen edges, sharpen edges and AdaptiveSharpen. SuperRes includes its own built-in anti-ringing filter.

Low Processing

soften edges / add grain

Doom9 Forum: These options are meant to work with NGU Sharp. When trying to upscale a low-res image, it's possible to get the edges very sharp and very near to the "ground truth" (the original high-res image the low-res image was created from). However, texture detail which is lost during downscaling cannot properly be restored. This can lead to "cartoon" type images when upscaling by large factors with full sharpness, because the edges will be very sharp, but there's no texture detail. In order to soften this problem, I've added options to "soften edges" and "add grain." Here's a little comparison to show the effect of these options:

NGU Sharp | NGU Sharp + soften edges + add grain | Jinc + AR

enhance detail

Doom9 Forum: Focuses on making faint image detail in flat areas more visible. It does not discriminate, so noise and grain may be sharpened as well. It does not enhance the edges of objects but can work well with line sharpening algorithms to provide complete image sharpening.

Medium Processing

LumaSharpen

SweetFX WordPress: LumaSharpen works its magic by blurring the original pixel with the surrounding pixels and then subtracting the blur. The end result is similar to what would be seen after an image has been enhanced using the Unsharp Mask filter in GIMP or Photoshop. While a little sharpening might make the image appear better, more sharpening can make the image appear worse than the original by oversharpening it. Experiment and apply in moderation.
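The blur-and-subtract principle described above can be sketched in one dimension. This is a toy unsharp mask, not SweetFX's actual shader; note the overshoot and undershoot it creates around an edge, which is exactly the ringing that the anti-ringing filter targets.

```python
import numpy as np

def unsharp_mask_1d(signal, strength=0.5):
    """Toy 1-D unsharp mask, the principle behind LumaSharpen: blur the
    signal, subtract the blur from the original to isolate detail, then
    add a fraction of that detail back."""
    kernel = np.array([0.25, 0.5, 0.25])             # simple 3-tap blur
    blurred = np.convolve(signal, kernel, mode="same")
    detail = signal - blurred                        # high-frequency part
    return signal + strength * detail

# A step edge gains a dark undershoot and bright overshoot (halo):
edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(unsharp_mask_1d(edge))
```

Higher `strength` values widen and deepen those halos, which is why moderation (and anti-ringing) is advised.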

crispen edges

Doom9 Forum: Focuses on making high-frequency edges crisper by adding light edge enhancement. This should lead to an image that appears more high-definition.

Medium - High Processing

thin edges

Doom9 Forum: Attempts to make edges, lines and even full image features thinner/smaller. This can be useful after large upscales, as these features tend to become fattened after upscaling. May be most useful with animated content and/or used in conjunction with sharpen edges at low values.

sharpen edges

Doom9 Forum: A line/edge sharpener similar to LumaSharpen and AdaptiveSharpen. Unlike these sharpeners, sharpen edges introduces less bloat and fat edges. 

AdaptiveSharpen

Doom9 Forum: Adaptively sharpen the image by sharpening more intensely near image edges and less intensely far from edges. The outer weights of the laplace matrix are variable to mitigate ringing on relative sharp edges and to provide more sharpening on wider and blurrier edges. The final stage is a soft limiter that confines overshoots based on local values.

SuperRes

Doom9 Forum: The general idea behind the super resolution method is explained in the white paper by Alexey Lukin et al. The idea is to treat upscaling as inverse downscaling. So the aim is to find a high resolution image, which, after downscaling is equal to the low resolution image.

This concept is a bit complex, but can be summarized as follows:

Estimated upscaled image is calculated -> Image is downscaled -> Differences from the original image are calculated

Forces (corrections) are calculated based on the calculated differences -> Combined forces are applied to upscale the image

This process is repeated 2-4 times until the image is upscaled with corrections provided by SuperRes.

All of the above shaders focus on the luma channel.

upscaling refinement is useful for almost any upscale, particularly for those users who prefer a sharp image. There is no right or wrong combination, and what looks best mostly comes down to your tastes. As a general rule, the amount of sharpening suitable for a given source increases with the amount of upscaling applied, as sources will become softer with larger amounts of upscaling.
4. RENDERING
  • General Settings
  • Windowed Mode Settings
  • Exclusive Mode Settings
  • Stereo 3D
  • Smooth Motion
  • Dithering
  • Trade Quality for Performance

General Settings

General settings are designed to ensure hardware and operating system compatibility for smooth playback. Minor performance improvements may be experienced, but they aren't likely to be noticeable. The key is to achieve correct open and close behavior of the media player and eliminate any dropped frames or presentation glitches caused by system incompatibilities.

Expert Guide:

delay playback start until render queue is full

Pauses the video playback until a number of frames have been rendered in advance of playback. This potentially avoids some stuttering right at the start of video playback, or after seeking through a video — but it will add a slight delay to both. It is disabled by default, but I prefer to have it enabled. If you are having problems where a video fails to start playing, this is the first option I would disable when troubleshooting.

enable windowed overlay (Windows 7 and newer)

Windows 7/8/10

Changes the way that windowed mode is rendered, and will generally give you better performance. The downside to windowed overlay is that you cannot take screenshots of it with the Print Screen key on your keyboard. Other than that, it's mostly a “free” performance increase.

It does not work with AMD graphics cards or fullscreen exclusive mode. D3D9 Only.

enable automatic fullscreen exclusive mode

Windows 7/8/10*

Allows madVR to use fullscreen exclusive mode for video rendering. This allows for several frames to be sent to the video card in advance, which can help eliminate random stuttering during playback. It will also prevent things like notifications from other applications being displayed on the screen at the same time, and similar to the Windowed Overlay mode, it stops Print Screen from working. The main downside to fullscreen exclusive mode is that when switching in/out of FSE mode, the screen will flash black for a second (similar to changing refresh rates). A mouse-based interface is rendered in such a way that it would not be visible in FSE mode, so madVR gets kicked out of FSE mode any time you use it, and you get that black flash on the screen. I personally find this distracting, and as such, have disabled FSE mode. The "10ft interface" is unaffected and renders correctly inside FSE mode.

Required for 10-bit output with Windows 7 or 8. fullscreen exclusive mode is not recommended with Windows 10 due to the way Windows 10 handles this mode. In reality, fullscreen exclusive mode is no longer exclusive in Windows 10 and in fact fake, not to mention unreliable with many drivers and media players. Consider it unsupported. It is only useful in Windows 10 if you are unable to get smooth playback with the default windowed mode.

disable desktop composition (Vista and newer)

Windows Vista/7

This option will disable Aero during video playback. Back in the early days of madVR this may have been necessary on some systems, but I don't recommend enabling this option now. Typically, the main thing that happens is that it breaks VSync and you get screen tearing (horizontal lines over the video). Not available for Windows 8 and Windows 10.

use Direct3D 11 for presentation (Windows 7 and newer)

Windows 7/8/10

Uses a Direct3D 11 presentation path in place of Direct3D 9. This may allow for faster entering and exiting of fullscreen exclusive mode. Overrides windowed overlay.

Required for 10-bit output (all video drivers) and HDR passthrough (AMD).

present a frame for every VSync

Windows 7/8/10

Disabling this setting may improve performance, but it can cause presentation glitches on some systems, while enabling it causes presentation glitches on others. When disabled, madVR presents new frames only when needed, relying on Direct3D 11 to repeat frames as necessary to maintain VSync. Unless you are experiencing dropped frames, it is best to leave it enabled.

use a separate device for presentation (Vista and newer)

Windows Vista/7/8/10

By default, this option is now disabled. It could provide a small performance improvement or performance hit depending on the system. You will have to experiment with this one.

use a separate device for DXVA processing (Vista and newer)

Windows Vista/7/8/10

Also disabled by default. Similar to the option above, this may improve or impair performance slightly.

CPU/GPU queue size

This sets the size of the decoder/subtitle queues (CPU) (video & subtitle) and upload/render queues (GPU) (madVR). Unless you are experiencing problems, I would leave it at the default settings of 16/8. The higher these queue sizes are, the more memory madVR requires. With larger queues, you could potentially have smoother playback on some systems, but increased queue sizes also mean increased delays when seeking if the delay playback… options are enabled.

The default queue sizes should be more than enough for most systems. Some weaker PCs may benefit from lowering the CPU queue and possibly the GPU queue.
 
Windowed Mode Settings

Image

present several frames in advance

Provides a buffer to protect against dropped frames and presentation glitches by sending a predetermined number of frames in advance of playback to the GPU driver. This presentation buffer comes at the expense of some delay during seeking. Dropped frames will occur when the present queue shown in the madVR OSD reaches zero.

It is best to leave this setting enabled. For the most responsive playback, the majority should stick with smaller present queues (typically, 4-8 frames). If the number of frames presented in advance is increased, the size of the CPU and GPU queues may also need to be larger to fill the present queue.

If the present queue is stuck at zero, your GPU has likely run out of resources and madVR processing settings will have to be reduced until it fills.

Leave the flush settings alone unless you know what you are doing.

Exclusive Mode Settings

Image

show seek bar

This should be unchecked if using fullscreen exclusive mode and a desktop media player such as MPC. Otherwise, a seek bar will appear at the bottom of every video that cannot be removed during playback.

delay switch to exclusive mode by 3 seconds

Switching to FSE can sometimes be slow. Checking this option gives madVR time to fill its buffers and complete the switch to FSE, limiting the chance of dropped frames or presentation glitches.

present several frames in advance

Like the identical setting in windowed mode, present several frames in advance is protection against dropped frames and presentation glitches and should be left on. For the most responsive playback, the majority should stick with smaller present queues (typically, 4-8 frames). If the number of frames presented in advance is increased, the size of the CPU and GPU queues may also need to be larger to fill the present queue.

If the present queue is stuck at zero, your GPU has likely run out of resources and madVR processing settings will have to be reduced until it fills.

Again, flush settings should be left alone unless you know what you are doing.

Stereo 3D

Image

enable stereo 3d playback

Enables stereoscopic 3D playback for supported media, which is currently limited to frame-packed MPEG4-MVC 3D Blu-ray. 

when playing 2d content

Nvidia GPUs are known to crash on occasion when 3D mode is active in the operating system and 2D content is played. This most often occurs when use Direct3D 11 for presentation (Windows 7 and newer) is used by madVR. Disable OS stereo 3d support for all displays should be checked if using this combination.

when playing 3d content

Not all GPUs need to have 3D enabled in the operating system. If 3D mode is enabled in the operating system, some GPUs will change the display calibration to optimize playback for frame-packed 3D. This can interfere with the performance of madVR's 3D playback. Possible side effects include altered gamma curves (designed for frame-packed 3D) and screen flickering caused by the use of an active shutter. Disable OS stereo 3d support for all displays is a failsafe to prevent GPU 3D settings from altering the image in unwanted ways. 

restore OS stereo 3D settings when media player is closed

Returns the GPU back to the same state as before playback. So this is an override for any of the GPU control panel adjustments made by the two settings above. Overrides made by madVR will be enabled again when the media player is started.

It is best to leave all secondary 3D settings at their defaults, unless 3D playback is causing problems or 2D videos are not playing correctly.

madVR's approach to 3D is not failsafe and can be at the mercy of GPU drivers. If 3D mode is not engaged at playback start, try checking enable automatic fullscreen exclusive mode. If this does not work, a batch file may be needed to toggle 3D mode in the GPU control panel.

The use of batch files with madVR is beyond the scope of this guide, but a batch file that automatically enables stereoscopic 3D in the Nvidia control panel can be found here.

Smooth Motion

Image

Expert Guide: smooth motion is a frame blending system for madVR. What smooth motion is not, is a frame interpolation system — it will not introduce the “soap opera effect” like you see on 120 Hz+ TVs, or reduce 24p judder.

smooth motion is designed to display content where the source frame rate does not match up to any of the refresh rates that your display supports. For example, that would be 25/50fps content on a 60 Hz-only display, or 24p content on a 60 Hz-only display.

It does not replace ReClock or JRiver VideoClock, and if your display supports 1080p24, 1080p50, and 1080p60 then you should not need to use smooth motion at all.

Because smooth motion works by using frame blending you may see slight ghost images at the edge of moving objects — but this seems to be rare and dependent on the display you are using, and is definitely preferable to the usual judder from mismatched frame rates/refresh rates.
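As an illustration of frame blending (not madVR's exact weighting), each refresh of a 60 Hz display showing 24 fps content can present a weighted mix of the two nearest source frames. The same ratios apply to 23.976 fps on 59.94 Hz.

```python
from fractions import Fraction

def blend_weights(fps=24, hz=60, vsyncs=6):
    """Illustrative frame-blending schedule for fps content on an hz
    display: each refresh shows a weighted mix of the source frame its
    display time falls on and the next one. Not madVR's exact math."""
    plan = []
    for v in range(vsyncs):
        t = Fraction(v, hz) * fps          # position in source frames
        frame = int(t)
        w_next = t - frame                 # fraction of the next frame
        plan.append((frame, float(1 - w_next), float(w_next)))
    return plan

for frame, w_cur, w_nxt in blend_weights():
    print(f"vsync: {w_cur:.1f} x frame {frame} + {w_nxt:.1f} x frame {frame + 1}")
```

The blend pattern repeats every 5 vsyncs (2 source frames), replacing the uneven 3:2 repeat pattern with gradual crossfades, which is where the slight ghosting at object edges comes from.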

Medium Processing

only if there would be motion judder without it...
Enables smooth motion when 3/2 pulldown or any other irregular frame pattern is detected.

...or if the display refresh rate is an exact multiple of the movie frame rate
Also enables smooth motion when the refresh rate of the display is an exact multiple of the content frame rate.

always
Enables smooth motion for all content.

In general, if your display is limited to 60 Hz playback without the possibility of display mode switching, smooth motion may be an acceptable substitute for 3/2 pulldown, although use of smooth motion largely comes down to your taste for this form of frame smoothing. Keeping the same refresh rate and using smooth motion can also simplify life for projector owners with equipment that takes ages to change refresh rates.

Dithering

Image

madVR Explained:
Dithering is performed as the last step in madVR to convert its internal 32-bit data to the bit depth set for the display. Any time madVR does anything to the video (e.g. upsample or convert to another color space), high bit-depth information is created. Dithering allows much of this information to be preserved when displayed at 8-10 bits. For example, the conversion of Y'CbCr to RGB generates >10-bits of RGB data.

Rather than create a simple gradient consisting completely of "96 gray," for instance, dithering allows the quantization (rounding) error of each calculated RGB value to be distributed to neighboring pixels. This creates a random yet controlled pattern that better approximates the varied shades present in the original high bit-depth gradient. Such a randomized use of colors is a way to create an artificial sense of having an expanded color palette. This dithering pattern adds a little noise to the image. The larger the output bit depth, the lower the visible dithering noise.
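To make the idea concrete, here is a toy sketch of ordered dithering while quantizing a gradient down to 2 bits. The 2x2 Bayer threshold map and the `quantize` helper are illustrative only; madVR's dither patterns are far more sophisticated.

```python
import numpy as np

# Classic 2x2 ordered-dither (Bayer) threshold map.
BAYER = np.array([[0, 2],
                  [3, 1]])

def quantize(gray, bits, dither=True):
    """Quantize values in [0, 1] down to `bits` of precision, optionally
    adding an ordered-dither offset first. Toy sketch of the principle."""
    levels = 2 ** bits - 1
    if dither:
        h, w = gray.shape
        # Per-pixel offsets spanning one quantization step, tiled over the image.
        offsets = (np.tile(BAYER, (h // 2 + 1, w // 2 + 1))[:h, :w] + 0.5) / 4 - 0.5
        gray = gray + offsets / levels
    return np.clip(np.round(gray * levels), 0, levels) / levels

gradient = np.tile(np.linspace(0, 1, 16), (4, 1))
flat = quantize(gradient, bits=2, dither=False)     # hard bands of 4 shades
dithered = quantize(gradient, bits=2, dither=True)  # bands broken up by the pattern
```

Averaged over an area, the dithered result reproduces more of the gradient's intermediate shades than the undithered one, at the cost of a little visible noise.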

Dithering to 2-bits:
2 bit Ordered Dithering
2 bit No Dithering

Low Processing

Random Dithering
Very fast dithering. High-noise, no dither pattern.

Ordered Dithering
Very fast dithering. Low-noise, high dither pattern. This offers high-quality dithering basically for free.

use colored noise
Uses an inverted dither pattern for green ("opposite color"), which reduces luma noise but adds chroma noise.

change dither for every frame
Uses a new dither seed for every frame. With Ordered Dithering, it instead adds random offsets and rotates the dither texture 90° between frames.

Medium Processing

Error Diffusion - option 1
DirectCompute is used to perform very high-quality error diffusion dithering. Mid-noise, no dither pattern. Requires a DX 11-compatible graphics card.

Error Diffusion - option 2
DirectCompute is used to perform very high-quality error diffusion dithering. Low-noise, mid dither pattern. Requires a DX 11-compatible graphics card.

Regardless of the hardware used, dithering is best left on at all times because disabling it can introduce color banding. Ordered Dithering offers quality similar to Error Diffusion at lower resource use and should be considered the default setting unless your system has resources to spare.

Trade Quality for Performance

The final group of settings reduces GPU usage at the expense of image quality. Most, if not all, options cause only very small degradations in image quality. I would begin by disabling all options and only check them if you truly need the extra performance.

Those trying to squeeze the last bit of power from their GPU will want to start at the top and work their way to the bottom. It usually takes more than one checkbox to put rendering times under the movie frame interval or cause the present queue to fill.
5. MEASURING PERFORMANCE & TROUBLESHOOTING

How Do I Measure the Performance of My Chosen Settings?

Once the settings have been configured to your liking, it is important that they match the capabilities of your hardware. To determine this, press Ctrl + J during playback to overlay the menu below, which provides feedback on your PC’s rendering performance. Combining several settings labelled Medium or higher will place a large load on the graphics card.

Rendering performance is dependent upon the average rendering and present time of each frame in relation to the movie frame interval. In the example below, a new frame is drawn every 41.71ms. However, at an average rendering time of 49.29ms plus a present time of 0.61ms (49.29 + 0.61 = 49.90ms), the GPU is unable to keep up with the frame rate of the video. The result is dropped frames, presentation glitches and generally choppy playback. As such, settings in madVR will have to be dialed-down until rendering times are comfortably under the frame interval.

Predicting the load placed on the graphics processor is a function of how many pixels are in the source combined with the output resolution, as well as the frame rate and bit depth of the video. A video with a native frame rate of 29.97 fps requires madVR to work 25% faster than one at 23.976 fps, as the frame interval is shorter and madVR must present frames at a faster rate. Live TV broadcast at 1920 x 1080/60i can be particularly demanding because the source frame rate is doubled after deinterlacing.

Common Source Frame Intervals:
  • 23.976 fps -> 41.71ms
  • 25 fps -> 40.00ms
  • 29.97 fps -> 33.37ms
  • 50 fps -> 20.00ms
  • 59.94 fps -> 16.68ms
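
These intervals follow directly from the frame rate (1000 / fps); a quick sketch:

```python
# Frame interval (ms) is 1000 / fps: the time madVR has to render and
# present each frame before the next one is due.
def frame_interval_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (23.976, 25.0, 29.97, 50.0, 59.94):
    print(f"{fps} fps -> {frame_interval_ms(fps):.2f}ms")
```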

Display Rendering Stats:
Ctrl + J during fullscreen playback
Rendering must be comfortably under the frame interval:

Image

Rather than attempt to optimize one set of settings for your most demanding source, it is almost always preferable to create separate profiles for different content types: SD, 720p, 1080p, 2160p, etc. Each content type can often work best with different settings. Profile rules are covered in the last section.

Understanding madVR's Queues

Image

madVR’s rendering stats include a list of five queues. These are memory buffers for rendering. Each queue represents a measure of performance for a specific stage of the playback chain: decoding, subtitle rendering, upload to the GPU, rendering and presentation. Filling all queues in order is a prerequisite for rendering an image.

Summary of the Queues:

decoder queue: CPU memory buffer

subtitle queue: CPU memory buffer

upload queue: GPU memory buffer

render queue: GPU memory buffer

present queue: GPU memory buffer

The queue sizes set under rendering -> general settings and windowed mode or exclusive mode determine the length of each queue and the amount of CPU RAM or GPU VRAM devoted to it.

When a queue fails to fill, there is no immediate indication of the source, but the problem can often be inferred. The queues should fill in order. When all queues are empty, the cause can usually be traced to the first queue that fails to fill.

Summary of Causes of Empty Queues:

decoder queue: Insufficient system RAM; slow RAM speed for iGPUs/APUs; failed software decoding; bottleneck in shared hardware decoding; lack of PCIe bandwidth; network latency.

Network requirements for UHD Blu-ray: Gigabit Ethernet adapters, switches and routers; Cat5e or better cabling.

Test network transfer speeds: LAN Speed Test (Write access to the media folders is required to complete the test)

List of maximum and average Ethernet transfer speeds (Note: Blu-ray bitrates are expressed in Mbps) 

subtitle queue: Insufficient system RAM with subtitles enabled; slow RAM speed for APUs; weak CPU.

Monitoring CPU Performance: Windows Task Manager can be used to assess CPU load and system RAM usage during video playback.

upload queue: Insufficient VRAM; failed hardware decoding. 

render queue: Insufficient VRAM; lack of GPU rendering resources.

present queue: Insufficient VRAM; lack of GPU rendering resources; video driver problems.

Monitoring GPU Performance: GPU-Z (with the Sensors tab) can be used to assess GPU load and VRAM usage during video playback.

Note: Systems with limited system RAM and/or VRAM should stick with the smallest CPU and GPU queues possible that allow for smooth playback.
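
The triage described above can be sketched as a simple lookup. This is a hypothetical helper for reasoning about the OSD, not part of madVR:

```python
# madVR's queues fill in a fixed order, so the first queue that fails to
# fill usually points at the bottleneck.
QUEUE_ORDER = ["decoder", "subtitle", "upload", "render", "present"]

LIKELY_CAUSES = {
    "decoder": "insufficient/slow system RAM, failed decoding, PCIe or network bottleneck",
    "subtitle": "insufficient system RAM or weak CPU with subtitles enabled",
    "upload": "insufficient VRAM or failed hardware decoding",
    "render": "insufficient VRAM or lack of GPU rendering resources",
    "present": "insufficient VRAM, GPU overload or video driver problems",
}

def first_empty_queue(fill_levels):
    """fill_levels maps queue name -> frames currently buffered."""
    for name in QUEUE_ORDER:
        if fill_levels.get(name, 0) == 0:
            return name
    return None  # all queues are filling; no obvious bottleneck

# Example: CPU-side queues are full, GPU-side queues are empty.
stats = {"decoder": 8, "subtitle": 8, "upload": 8, "render": 0, "present": 0}
bottleneck = first_empty_queue(stats)
print(bottleneck, "->", LIKELY_CAUSES[bottleneck])
```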

Translation of the madVR Debug OSD

Image

display 23.97859Hz (NV HDR, 8-bit, RGB, full)
The reported refresh rate of the video clock. The second entry (NV HDR, 8-bit, RGB, full) indicates the active GPU output mode (Nvidia only). NV HDR or AMD HDR indicate that HDR10 metadata is being passed through using the private APIs of Nvidia or AMD.

composition rate 23.977Hz 
The measured refresh rate of the virtual Windows Aero desktop composition. This should be very close to the video clock, but it is not uncommon for the composition rate to be different, sometimes wildly different. The discrepancy between the display refresh rate and composition rate is only an issue if the OSD is reporting dropped frames or presentation glitches, or if playback is jerky. The composition rate should not appear in fullscreen exclusive mode.

clock deviation 0.00580%
The amount the audio clock deviates from the system clock. 

smooth motion off (settings)
Whether madVR’s smooth motion is enabled or disabled.

D3D11 fullscreen windowed (8-bit)
Indicates whether a D3D9 or D3D11 presentation path is used, the active windowed mode (windowed, fullscreen windowed or exclusive) and the output bit depth from madVR.

P010, 10-bit, 4:2:0 (DXVA11)
The decoded format provided by the video decoder. The last entry (DXVA11) is available if native hardware decoding is used (either DXVA11 or DXVA2). madVR is unable to detect if copy-back decoding is active.

movie 23.976 fps (says source filter)
The frame rate of the video as reported by the source filter. Videos subject to deinterlacing will report the frame rate before deinterlacing.

1 frame repeat every 14.12 minutes
Uses the difference between the reported video clock and audio clock deviation to estimate how often a frame correction will have to be made to restore VSync. This value is only an estimate and the actual dropped frames or repeated frames counters may contradict this number.
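
As a rough sketch of where such an estimate comes from (madVR's own figure also folds in the measured refresh rate, so the numbers will not match the OSD exactly):

```python
# A clock deviation of d (as a fraction) drifts d seconds per second, so
# one full frame interval of drift accumulates after frame_interval / d
# seconds.  At that point a frame must be dropped or repeated.
def seconds_per_correction(frame_interval_ms: float, deviation_pct: float) -> float:
    drift_per_second = deviation_pct / 100.0
    return (frame_interval_ms / 1000.0) / drift_per_second

mins = seconds_per_correction(41.71, 0.00580) / 60.0
print(f"about one frame drop/repeat every {mins:.1f} minutes")
```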

movie 3840x2160, 16:9
The pixel dimensions (resolution) and aspect ratio of the video.

scale 0,0,3840,2160 -> 0,0,1920,1080
Describes the position of the video before and after resizing: left,top,right,bottom. The example starts at 0 on the left and top of the screen and draws 1920 pixels horizontally and 1080 pixels vertically. Videos encoded without black bars and image cropping can lead to some shifting of the image after resize. 

touch window from inside
Indicates the active media player zoom mode. This is relevant when using madVR’s zoom control because the two settings can interact. 

chroma > Bicubic60 AR
The algorithm used to upscale the chroma resolution to 4:4:4, with AR indicating the use of an anti-ringing filter.

image < SSim2D75 LL AR
The image upscaling or downscaling algorithm used to resize the image, with AR indicating the use of an anti-ringing filter and LL indicating scaling in linear light.

vsync 41.71ms, frame 41.71ms
The vertical sync interval and frame interval of the video. In order to present each frame on time, rendering times must be comfortably under the frame interval.

matrix BT.2020 (says upstream)
The matrix coefficients used in deriving the original luma and chroma (YUV) from the RGB primaries and the coefficients used to convert back to RGB.

primaries BT.2020 (says upstream)
The chromaticity coordinates of the source primaries of the viewing/mastering display.

HDR 1102 nits, BT.2020 -> DCI-P3
Displayed when an HDR video is played. The first entry (1102 nits) indicates the source brightness as reported by a valid MaxCLL or the mastering display peak luminance. If a .measurements file is available, the reported source peak is replaced by the peak value measured by madVR. The second entry (BT.2020 -> DCI-P3) indicates that DCI-P3 primaries were used within a BT.2020 container.

frame/avg/scene/movie 0/390/1/1222 nits, tone map 0 nits
Displayed when an HDR video is played using tone map HDR using pixel shaders. This reporting changes to a detailed description when a .measurements file is available: peak of the measured frame / AvgFMLL of the movie / peak of the scene / peak of the movie. Tone mapping targets a combination of the measured frame peak brightness and scene peak brightness.

limited range (says upstream)
The video levels used by the source (either limited or full). 

deinterlacing off (dxva11)
Whether deinterlacing was used to deinterlace the video. The second entry indicates the source of the deinterlacing: (dxva11) D3D11 Native; (dxva2) DXVA2 Native; (says upstream) copy-back; (settings) madVR IVTC film mode.

How to Get Help
 
  • Take a Print Screen of the madVR OSD (Ctrl + J) during playback when the issue is present;
  • Post this screenshot along with a description of your issue at the Official Doom9 Support Forum;
  • If that isn't convenient, post your issue in this thread.

Important Information:
  1. Detailed description of the issue;
  2. List of settings checked under general settings;
  3. GPU model (e.g. GTX 1060 6GB);
  4. Video driver version: Nvidia/AMD/Intel (e.g. 417.22);
  5. Operating system or Windows 10 Version Number (e.g. Windows 10 1809);
  6. Details of the video source (e.g. resolution; frame rate; video codec; file extension/format; interlacing).

How to Capture a Crash Report for madVR

Crashes likely caused by madVR should be logged via a madVR crash report. Crash reports are produced by pressing CTRL+ALT+SHIFT+BREAK when madVR becomes unresponsive. This report will appear on the desktop. Copy and paste this log to Pastebin and provide a link.

Troubleshooting Dropped Frames/Presentation Glitches

Weak CPU

Problem: The decoder and subtitle queues fail to fill.

Solution: Ease the load on the CPU by enabling hardware acceleration in LAV Video. If your GPU does not support the format played (e.g. HEVC or VP9), consider upgrading to a card with support for these formats. GPU hardware decoding is particularly critical for smooth playback of high bitrate HEVC.

Empty Present Queue

Problem: Reported rendering stats are under the movie frame interval, but the present queue remains at zero and will not fill.

Solution: It is not abnormal to have the present queue contradict the rendering stats — in most cases, the GPU is simply overstrained and unable to render fast enough. Ease the load on the GPU by reducing processing settings until the present queue fills. If the performance deficit is very low, this situation can be cured by checking a few of the trade quality for performance checkboxes.

Lack of Headroom for GUI Overlays

Problem: Whenever a GUI element is overlaid, madVR enters low latency mode. This will temporarily reduce the present queue to 1-2/8 to maintain responsiveness of the media player. If the present queue reaches zero or fails to refill when the GUI element is removed, your madVR settings are too aggressive. This can also lead to a flickering OSD.

Solution: Ease the load on the GPU by reducing processing settings. If the performance deficit is very low, this situation can be cured by checking a few of the trade quality for performance checkboxes. Enabling GUI overlays during playback is the ultimate stress test for madVR settings — the present queue should recover effortlessly.

Inaccurate Rendering Stats

Problem: The average and max rendering stats indicate rendering is below the movie frame interval, but madVR still produces glitches and dropped frames.

Solution: A video with a frame interval of 41.71 ms should have average rendering stats of 35-37 ms to give madVR adequate headroom to render the image smoothly. Anything higher risks dropped frames or presentation glitches during performance peaks.
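
That headroom rule can be expressed as a simple check. The 0.88 ceiling is my assumption derived from the 35-37ms guidance for a 41.71ms interval, not a madVR constant:

```python
# Keep average rendering comfortably below the frame interval so that
# performance peaks don't blow past it and drop frames.
def has_headroom(avg_render_ms: float, frame_interval_ms: float,
                 ceiling: float = 0.88) -> bool:
    return avg_render_ms <= frame_interval_ms * ceiling

print(has_headroom(36.0, 41.71))  # 36ms for a 24 fps source: enough headroom
print(has_headroom(40.5, 41.71))  # under the interval, but too close
```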

Scheduled Frame Drops/Repeats

Problem: This generally refers to clock jitter. Clock jitter is caused by a lack of synchronization between three clocks: the system clock, video clock and audio clock. The system clock always runs at 1.0x. The audio and video clocks tick away independently of each other. Having three independent clocks invites the possibility of losing synchronization. These clocks are subject to variability caused by differences in A/V hardware, drivers and software. Any difference from the system clock is captured by the display and clock deviation values in madVR's rendering stats. If the audio and video clocks are synchronized by luck or randomness, then frames are presented "perfectly." However, any reported difference between the two leads to a slow drift between audio and video during playback. The video clock yields to the audio clock — a frame is dropped or repeated every few minutes to maintain synchronization.

Solution: Correcting clock jitter requires an audio renderer designed for this purpose. It also requires that all audio be output as multichannel PCM. ReClock is an example of an audio renderer that uses decoded PCM audio to correct audio/video clock synchronization. For those wishing to bitstream, use of custom resolutions can reduce the frequency of dropped frames to an acceptable amount, to as few as one interruption per hour or several hours. Frame drops or repeats caused by clock jitter are considered a normal occurrence with almost all HTPCs.

Interrupted Playback

Problem: Windows or other software interrupts playback with a notification or background process causing frame drops.

Solution: The most stable playback mode in madVR is enable automatic fullscreen exclusive mode (found in general settings). Exclusive mode will ensure madVR has complete focus during all aspects of playback and the most stable VSync. Some systems do not work well with fullscreen exclusive mode and will drop frames.
#8
6. SAMPLE SETTINGS PROFILES & PROFILE RULES

Note: Feel free to customize the settings within the limits of your graphics card. If color is your issue, consider buying a colorimeter and calibrating your display with a 3D LUT.

The settings posted represent my personal preferences. You may disagree, so don't assume these are the "best madVR settings" available. Some may want to use more shaders to create a sharper image, and others may use more artifact removal. Everyone has their own preference as to what looks good. The suggested settings are meant to err on the conservative side when it comes to processing the image.


Summary of the rendering process:

Image
Source

So, with all of the settings laid out, let's move on to some settings profiles...

It is important to know your graphics card when using madVR, as the program relies heavily on this hardware. Due to the large performance variability in graphics cards and the breadth of possible madVR configurations, it can be difficult to recommend settings for specific GPUs. However, I’ll attempt to provide a starting point by using some examples with my personal hardware. The example below demonstrates the difference in madVR performance between an integrated graphics card and a dedicated gaming GPU.

I own a laptop with an Intel HD 3000 graphics processor and Sandy Bridge i7. madVR runs with settings similar to its defaults:

Integrated GPU:
  • Chroma: Bicubic60 + AR
  • Downscaling: Bicubic150 + LL + AR
  • Image upscaling: Lanczos3 + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Off
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Ordered Dithering

I am upscaling primarily high-quality, 24 fps content to 1080p24. These settings are very similar to those provided by Intel DXVA rendering in Kodi with the quality benefits provided by madVR and offer a small subjective improvement.

I also owned an HTPC that combined an Nvidia GTX 750 Ti and a Core 2 Duo CPU.

Adding a dedicated GPU allows the flexibility to use more of everything: more demanding scaling algorithms, artifact removal, sharpening and high-quality dithering.

Settings assume all trade quality for performance checkboxes are unchecked save the one related to subtitles.

Given the flexibility of a gaming GPU, four different scenarios are outlined based on common sources:

Display: 1920 x 1080p

Scaling factor: Increase in vertical resolution or pixels per inch.

Resizes:
  • 1080p -> 1080p
  • 720p -> 1080p
  • SD -> 1080p
  • 4K -> 1080p
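
The pixel-increase and scaling-factor figures quoted in the profiles below follow from simple arithmetic; a quick sketch:

```python
# Scaling factor is the change in vertical resolution; pixel increase is
# the change in total pixel count.
def describe_resize(src, dst):
    sw, sh = src
    dw, dh = dst
    pixel_ratio = (dw * dh) / (sw * sh)
    scaling_factor = dh / sh
    return pixel_ratio, scaling_factor

for name, src in [("1080p", (1920, 1080)), ("720p", (1280, 720)),
                  ("SD", (640, 480)), ("4K", (3840, 2160))]:
    pixels, factor = describe_resize(src, (1920, 1080))
    print(f"{name} -> 1080p: pixels x{pixels:.2f}, scaling factor x{factor:.2f}")
```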

Profile: "1080p"

1080p -> 1080p
1920 x 1080 -> 1920 x 1080
Increase in pixels: 1x (no change)
Scaling factor: 1x (no resize)

Native 1080p sources require basic processing. The settings to be concerned with are Chroma upscaling, which is necessary for all videos, and Dithering. The only upscaling taking place is the resizing of the subsampled chroma layer.

Chroma Upscaling: Upscales the subsampled chroma of a 4:2:0 source to match the native resolution of the luma layer (upscale to 4:4:4 and convert to RGB). Chroma upscaling is where the majority of your resources should go with native sources. My preference is for NGU Anti-Alias over NGU Sharp, as it seems better suited to upscaling the soft chroma layer. The sharp, black-and-white luma and the soft chroma can often benefit from different treatment, though this is difficult to test. Reconstruction, NGU Sharp, NGU Standard and super-xbr100 are also good choices.

Comparison of Chroma Upscaling Algorithms

Read the following post before choosing a chroma upscaling algorithm

Image Downscaling: N/A.

Image Upscaling: Set this to Jinc + AR as a fallback in case a minor resize is needed. This setting should otherwise be ignored, as there is no upscaling involved at 1080p.

Image Doubling: N/A.

Upscaling Refinement: N/A.

Artifact Removal: Artifact removal includes Debanding, Deringing, Deblocking and Denoising. I typically choose to leave Debanding enabled at a low value because it is hard to find 8-bit sources that don't display some form of color banding, even when the source is an original Blu-ray. Banding is a common artifact and madVR's debanding algorithm is pretty good. To avoid removing image detail, a setting of low/medium or medium/medium is advisable. You can choose to disable this if you want the sharpest image possible.

Deringing, Deblocking and Denoising are not usually general use settings. These types of artifacts are less common, or the artifact removal algorithm can be guilty of smoothing an otherwise clean source. If you want to use these algorithms with your worst cases, try using madVR's keyboard shortcuts. This will allow you to quickly turn the algorithm on and off with your keyboard when needed and all profiles will simply reset when the video is finished.

Used in small amounts, artifact removal can improve image quality without having a significant impact on image detail. Some choose to offset any loss of image sharpness by adding a small amount of sharpening shaders. Deblocking is great for cleaning up compressed video. Even sources that have undergone light compression can benefit from it without harming image detail when low values are used. Deringing is very effective for any sources with noticeable edge enhancement. And Denoising will harm image detail, but can often be the only way to remove bothersome video noise or film grain. Some may believe Deblocking, Deringing or Denoising are general use settings, while others may not.

Image Enhancements: Applying sharpening shaders to the image shouldn't be necessary, as the source is already assumed to be of high quality. Image enhancements can still be attractive for those who feel chroma upscaling is simply not doing enough to sharpen the picture and who want more texture and image depth. The example profile avoids using any enhancements.

Dithering: This is the last step before presentation. The difference between Ordered Dithering and Error Diffusion is very small, especially at output bit depths of 8 bits or greater. But if you have the resources, you might as well use them: Error Diffusion produces a small quality improvement over Ordered Dithering. Conversely, the slight performance difference between the two makes Ordered Dithering a way to save a few resources when you need them. You aren't supposed to see dithering, anyway.

1080p:
  • Chroma: NGU Anti-Alias (high)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Jinc + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Supersampling is a way to provide noticeable enhancement to a native source. This requires a powerful GPU.

Supersampling involves doubling a source to twice its original size and returning it to its original resolution. The chain would look like this: Image doubling -> Upscaling refinement (optional) -> Image downscaling. Doubling a source and reducing it to a smaller image can lead to a sharper image than what you started with without actually applying any sharpening to the image.

Chroma Upscaling: NGU Anti-Alias is selected.

Image Downscaling: SSIM 1D + LL + AR + AB 100% is selected to retain the detail from the doubled image at 1080p. If you own a GTX 1060 or better, SSIM 2D is an even more effective downscaler (but far more expensive).

Image Upscaling: N/A.

Image Doubling: Supersampling involves image doubling followed by image downscaling. I recommend NGU Sharp as the supersampler/sharpener because of its ultra-sharp upscaling. Supersampling must be manually chosen: image upscaling  -> image doubling <-- Doubling: ...always - supersampling.

Upscaling Refinement: NGU Sharp is quite sharp. But you may want to add some extra sharpening to the doubled image; crispen edges is a good choice.

Artifact Removal: Debanding is set to low/medium.

Image Enhancements: N/A.

Dithering: Error Diffusion 2 is selected.

1080p -> 2160p Supersampling (for newer GPUs):
  • Chroma: NGU Anti-Alias (high)
  • Downscaling: SSIM 1D 100% + LL + AR + AB 100%
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (high))
  • <-- Chroma: let madVR decide (Bicubic60 + AR)
  • <-- Doubling: ...always - supersampling
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: use "image downscaling" settings
  • Upscaling refinement: Off
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

If you want to avoid any kind of sharpening or enhancement of native sources, avoid supersampling and use the first profile. If you want the sharpening effect to be more noticeable, applying image enhancements to the native source will create greater sharpening than supersampling can provide.

Profile: "720p"

720p -> 1080p
1280 x 720 -> 1920 x 1080
Increase in pixels: 2.25x
Scaling factor: 1.5x

Image upscaling is introduced at 720p to 1080p.

Upscaling the sharp luma channel is most important in resolving image detail, so settings for Image upscaling followed with Upscaling refinement are most critical for upscaled sources.

Chroma Upscaling: NGU Anti-Alias is selected.

Image Downscaling: N/A.

Image Upscaling: Jinc + AR is the chosen image upscaler. We are upscaling by RGB directly from 720p -> 1080p.

Image Doubling: N/A.

Upscaling Refinement: SuperRes (1) is layered on top of Jinc to provide additional sharpness. This is important as upscaling alone will create a noticeably soft image. Note that sharpening is added from Upscaling refinement, so it is applied to the post-resized image.

Artifact Removal: Debanding is set to low/medium.

Image Enhancements: N/A.

Dithering: Error Diffusion 2 is selected.

720p Regular upscaling:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Jinc + AR
  • Image doubling: Off
  • Upscaling refinement: SuperRes (1)
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Image doubling is another and often superior approach to upscaling a 720p source.

This will double the image (720p -> 1440p) and use Image downscaling to correct the slight overscale (1440p -> 1080p). 

Chroma Upscaling: NGU Anti-Alias is selected. Lowering the value of chroma upscaling is an option when trying to increase the quality of image doubling. Always try to maximize Luma doubling first, if possible. This is especially true if your display converts all inputs to 4:2:2 rather than 4:4:4. Chroma upscaling could be wasted by the display's processing. The larger quality improvements will come from improving the luma layer, not the chroma, and it will always retain the full resolution when it reaches the display.

Image Downscaling: N/A.

Image Upscaling: N/A.

Image Doubling: NGU Sharp is used to double the image. NGU Sharp is usually a safe choice for image upscaling because it is almost always sharp without any need for added enhancement.

Image doubling performs an exact 2x resize combined with image downscaling.

To calibrate image doubling, select image upscaling -> doubling -> NGU Sharp and use the drop-down menus. Set Luma doubling to its maximum value (very high) and everything else to let madVR decide.

If the maximum luma quality value is too aggressive, reduce Luma doubling until rendering times are under the movie frame interval (35-37ms for a 24 fps source). Leave the other settings to madVR. Luma quality always comes first and is most important.

Think of let madVR decide as madshi's expert recommendations for each upscaling scenario. This will help you avoid wasting resources on settings which do very little to improve image quality. So, let madVR decide. When you become more advanced, you may consider manually adjusting these settings, but only expect small improvements. In this case, I've added SSIM 1D for downscaling.

Luma & Chroma are upscaled separately:

Luma: RGB -> Y'CbCr 4:4:4 -> Y -> 720p -> 1440p -> 1080p

Chroma: RGB -> Y'CbCr 4:4:4 -> CbCr -> 720p -> 1080p

Keep in mind, NGU very high is three times slower than NGU high while only producing a small improvement in image quality. Attempting to use a setting of very high at all costs without considering GPU stress or high rendering times is not always a good idea. NGU very high is the best way to upscale, but only if you can accommodate the considerable performance hit. Higher values of NGU will cause fine detail to be slightly more defined, but the overall appearance produced by each type (Anti-Alias, Soft, Standard, Sharp) will remain identical through each quality level.

Upscaling Refinement: NGU Sharp shouldn’t require any added sharpening. If you want the image to be sharper, you can check some options here such as crispen edges or sharpen edges.  

Artifact Removal: Debanding is set to low/medium.

Image Enhancements: N/A.

Dithering: Error Diffusion 2 is selected.

720p Image doubling:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (high))
  • <-- Chroma: let madVR decide (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: SSIM 1D 100 AR Linear Light
  • Upscaling refinement: Off
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Profile: "SD"

SD -> 1080p
640 x 480 -> 1920 x 1080
Increase in pixels: 6.75x
Scaling factor: 2.25x

By the time SD content is reached, the scaling factor starts to become quite large (2.25x). Here, the image becomes soft due to the errors introduced by upscaling. Countering this soft appearance is possible by introducing more sophisticated image upscaling provided by madVR's image doubling. Image doubling does just that — it takes the full resolution luma and chroma information and scales it by factors of two to reach the desired resolution (2x for a double and 4x for a quadruple). If larger than needed, the result is interpolated down to the target.

Doubling a 720p source to 1080p involves overscaling by 0.5x and downscaling back to the target resolution. Improvements in image quality may go unnoticed in this case. However, image doubling applied to larger resizes of 540p to 1080p or 1080p to 2160p will, in most cases, result in the highest-quality image.
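
The doubling logic described here can be sketched as follows, using the 1.2x/2.4x thresholds quoted in the "let madVR decide" notes of the profiles. The function is illustrative only, not madVR code:

```python
# Image doubling scales by exact factors of two, then a plain upscale or
# an image downscale covers the remainder.
def doubling_plan(src_height: int, dst_height: int) -> str:
    factor = dst_height / src_height
    if factor >= 2.4:
        doubled = src_height * 4          # quadruple
    elif factor >= 1.2:
        doubled = src_height * 2          # double
    else:
        return f"{src_height}p -> {dst_height}p (no doubling)"
    if doubled == dst_height:
        return f"{src_height}p -> {dst_height}p (exact double/quadruple)"
    step = "downscale" if doubled > dst_height else "upscale"
    return f"{src_height}p -> {doubled}p -> {step} -> {dst_height}p"

print(doubling_plan(720, 1080))   # 720p -> 1440p -> downscale -> 1080p
print(doubling_plan(480, 1080))   # 480p -> 960p -> upscale -> 1080p
```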

Chroma Upscaling: NGU Anti-Alias is selected.

Image Downscaling: N/A.

Image Upscaling: N/A.

Image Doubling: As stated, NGU Sharp is the best choice for image doubling due to its high sharpness, low aliasing and lack of ringing. It does not require any added sharpening from upscaling refinement to appear appreciably sharp. In fact, NGU Sharp can look artificial at times when set to very high quality with large scaling factors, which reveal missing texture detail that upscaling cannot recover. To avoid creating "cartoon" edges with large upscales, it is recommended to enable soften edges and/or add grain in upscaling refinement at scaling factors greater than 2x.

NGU Sharp | NGU Sharp + soften edges + add grain | Jinc + AR

Alternatively, use NGU Anti-Alias, which better tolerates low-quality sources. I prefer NGU Standard with some added grain to reduce the plastic look caused by using a sharp upscaler such as NGU Sharp without losing too much of the desired sharpness and detail.

Luma & Chroma are upscaled separately:

Luma: RGB -> Y'CbCr 4:4:4 -> Y -> 480p -> 960p -> 1080p

Chroma: RGB -> Y'CbCr 4:4:4 -> CbCr -> 480p -> 1080p

Upscaling Refinement: add grain is used to mask some of the lost texture detail caused by upsampling a lower-quality SD source to a much higher resolution.

Artifact Removal: Debanding is set to low/medium.

Image Enhancements: N/A.

Dithering: Error Diffusion 2 is selected.

SD Image doubling:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Standard
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Standard (high))
  • <-- Chroma: let madVR decide (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: Jinc AR
  • <-- Downscaling algo: let madVR decide (Bicubic150 + LL + AR)
  • Upscaling refinement: add grain (2)
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Profile: "4K to 1080p"

2160p -> 1080p
3840 x 2160 -> 1920 x 1080
Decrease in pixels: 4x
Scaling factor: 0.5x (2x decrease)

The last 1080p profile is for the growing number of people who want to watch 4K UHD content on a 1080p display. madVR offers a high-quality HDR -> SDR conversion that can make watching HDR content palatable and attractive on an SDR display. This will apply to many who have put off upgrading to a 4K display for various reasons. HDR -> SDR is meant to replace the HDR mode of an HDR display by using the available brightness of the SDR calibration and reducing the color gamut by 26% from DCI-P3 to BT.709. The conversion from BT.2020/DCI-P3 to BT.709 is excellent and can closely match the 1080p Blu-ray when both were mastered from the same source.

The example graphics card is a GTX 1050 Ti outputting to an SDR display calibrated to 150 nits.

madVR is set to the following:

primaries / gamut: BT.709
transfer function / gamma: pure power curve 2.40

Note: The transfer function / gamma setting only applies to HDR -> SDR conversion and may require some adjustment.

Chroma Upscaling: Bicubic150 + AR is selected. Chroma upscaling to 3840 x 2160 before image downscaling is generally a waste of resources. If you check "scale chroma separately, if it saves performance" under trade quality for performance, chroma upscaling is skipped entirely because the native resolution of the chroma layer is already 1080p. This is exactly what you should do; the performance savings will allow you to use higher values for image downscaling.

Bicubic150 downscales the chroma channel.
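
Why this saves so much follows directly from 4:2:0 geometry; a quick check:

```python
# In 4:2:0, the chroma planes are half the luma resolution in each
# dimension.  For a 3840x2160 source shown at 1920x1080, the chroma is
# therefore already at the target resolution and needs no scaling at all.
def chroma_resolution_420(luma_w: int, luma_h: int):
    return luma_w // 2, luma_h // 2

print(chroma_resolution_420(3840, 2160))  # (1920, 1080): already 1080p
```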

Image Downscaling: SSIM 2D + LL + AR + AB 50% is selected. Image downscaling is also a big drag on performance, but it is obviously necessary when reducing a 4K source to 1080p. SSIM 2D is far and away the sharpest downscaler in madVR and the best choice to preserve the detail of the larger 4K source.

SSIM 1D 100% and Bicubic150 are also good, sharp downscalers. DXVA2 is the fastest (and lowest quality) option.

Image Upscaling: N/A.

Image Doubling: N/A.

Upscaling Refinement: N/A.

Artifact Removal: Artifact removal is disabled. The source is assumed to be a high-quality, 4K UHD rip.

Some posterization can be caused by compression from tone mapping; however, this cannot be detected and addressed by artifact removal. I recommend disabling debanding for 4K UHD content.

Image Enhancements: N/A

Dithering: Error Diffusion 2 is selected. Reducing from 10-bits to 8-bits makes high-quality dithering more critical, so Error Diffusion is an easy choice.

HDR: tone map HDR using pixel shaders

target peak nits: 
275 nits. The target nits can be thought of as a dynamic range slider. Increase it to preserve the dynamic range and contrast of the source at the expense of a darker overall image; decrease it to produce a brighter image at the expense of compressing the dynamic range of the source and flattening the picture. If the value is set too low, overall brightness is raised and the image looks washed out. The chosen static value is meant as a middle ground between sources with high and low dynamic range.
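As a rough illustration of the "dynamic range slider" idea, here is a toy model. This is NOT madVR's actual BT.2390 tone mapping, just a crude clip plus display normalization, but it shows the brightness/headroom trade-off described above:

```python
def displayed_nits(scene_nits, target_peak, display_peak=150.0):
    # Toy model: pass scene luminance through up to the target peak (a crude
    # clip instead of madVR's rolloff), then show the SDR output with the
    # target peak mapped to the display's calibrated white (150 nits here)
    mapped = min(scene_nits, target_peak)
    return mapped / target_peak * display_peak

print(displayed_nits(100, target_peak=275))  # ~54.5 nits: darker image, more headroom
print(displayed_nits(100, target_peak=150))  # 100.0 nits: brighter image...
print(displayed_nits(500, target_peak=150))  # 150.0 nits: ...but highlights clip sooner
```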

HDR -> SDR Instructions: Choosing a target peak nits

tone mapping curve: BT.2390.

color tweaks for fire & explosions: disabled. When enabled, bright reds are shifted towards yellow to compensate for changes in perceived color caused by gamut mapping. This hue correction is intended to improve the appearance of fire and explosions alone, but applies to any scenes with bright red/orange pixels. I find there are more bright reds and oranges in a movie that aren't related to fire or explosions and I prefer to have them appear as red as they were encoded, so I choose to disable this shift towards yellow.

highlight recovery strength: medium. You run the risk of enhancing artifacts slightly and overcooking the image by enabling this setting, but tone mapping can often leave the image overly flat due to a loss of visible luminance steps, so any help with fine detail is welcome. Another huge hog on performance. I prefer medium because it seems most natural without looking sharpened.

For 4K 60 fps content, highlight recovery strength should be set to none. This setting is too expensive for 4K 60 fps.

measure each frame's peak luminance: checked. 

Note: the trade quality for performance checkbox compromise on tone & gamut mapping accuracy is left unchecked. The quality of tone mapping drops considerably when it is enabled, so avoid it if possible and treat it only as a last resort.

4K to 1080p Downscaling:
  • Chroma: Bicubic150 + AR
  • Downscaling: SSIM 2D 75% + LL + AR + AB 50%
  • Image upscaling: Jinc + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Off
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Creating madVR Profiles

Now we will translate each profile into a resolution profile with profile rules.

Add this code to each profile group:

if (srcHeight > 1080) "2160p"
else if (srcWidth > 1920) "2160p"

else if (srcHeight > 720) and (srcHeight <= 1080) "1080p"
else if (srcWidth > 1280) and (srcWidth <= 1920) "1080p"

else if (srcHeight > 540) and (srcHeight <= 720) "720p"
else if (srcWidth > 960) and (srcWidth <= 1280) "720p"

else if (srcHeight <= 540) and (srcWidth <= 960) "SD"

deintFps (the source frame rate after deinterlacing) is another factor on top of the source resolution that greatly impacts the load placed on madVR. Doubling the frame rate, for example, doubles the demands placed on madVR. Profile rules such as (deintFps <= 25) and (deintFps > 25) may be combined with srcWidth and srcHeight to create additional profiles.
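The frame interval, the per-frame rendering budget madVR must meet, follows directly from the frame rate:

```python
def frame_interval_ms(fps):
    # madVR must present each frame within this many milliseconds
    return 1000.0 / fps

print(round(frame_interval_ms(23.976), 1))  # 41.7 ms for film content
print(round(frame_interval_ms(59.94), 1))   # 16.7 ms: 60 fps sources leave far less headroom
```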

A more "fleshed-out" set of profiles incorporating the source frame rate might look like this:
  • "2160p25"
  • "2160p60"
  • "1080p25"
  • "1080p60"
  • "720p25"
  • "720p60"
  • "540p25"
  • "540p60"

Click on scaling algorithms. Create a new folder by selecting create profile group.

Image

Each profile group offers a choice of settings to include.

Select all items, and name the new folder "Scaling."

Image


Select the Scaling folder. Using add profile, create eight profiles.

Name each profile: 2160p25, 2160p60, 1080p25, 1080p60, 720p25, 720p60, 540p25, 540p60.

Copy and paste the code below into Scaling:

if (deintFps <= 25) and (srcHeight > 1080) "2160p25"
else if (deintFps <= 25) and (srcWidth > 1920) "2160p25"

else if (deintFps > 25) and (srcHeight > 1080) "2160p60"
else if (deintFps > 25) and (srcWidth > 1920) "2160p60"

else if (deintFps <= 25) and ((srcHeight > 720) and (srcHeight <= 1080)) "1080p25"
else if (deintFps <= 25) and ((srcWidth > 1280) and (srcWidth <= 1920)) "1080p25"

else if (deintFps > 25) and ((srcHeight > 720) and (srcHeight <= 1080)) "1080p60"
else if (deintFps > 25) and ((srcWidth > 1280) and (srcWidth <= 1920)) "1080p60"

else if (deintFps <= 25) and ((srcHeight > 540) and (srcHeight <= 720)) "720p25"
else if (deintFps <= 25) and ((srcWidth > 960) and (srcWidth <= 1280)) "720p25"

else if (deintFps > 25) and ((srcHeight > 540) and (srcHeight <= 720)) "720p60"
else if (deintFps > 25) and ((srcWidth > 960) and (srcWidth <= 1280)) "720p60"

else if (deintFps <= 25) and ((srcWidth <= 960) and (srcHeight <= 540)) "540p25"

else if (deintFps > 25) and ((srcWidth <= 960) and (srcHeight <= 540)) "540p60"

A green check mark should appear above the box to indicate the profiles are correctly named and no code conflicts exist.
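The cascade above can be sanity-checked with a quick mirror in Python (the helper is illustrative, not part of madVR):

```python
def select_profile(src_w, src_h, deint_fps):
    # Mirrors the rule cascade: resolution tier first, then the frame-rate suffix
    rate = "25" if deint_fps <= 25 else "60"
    if src_h > 1080 or src_w > 1920:
        res = "2160p"
    elif src_h > 720 or src_w > 1280:
        res = "1080p"
    elif src_h > 540 or src_w > 960:
        res = "720p"
    else:
        res = "540p"
    return res + rate

print(select_profile(1920, 800, 23.976))  # 1080p25: a cropped 2.40:1 Blu-ray rip
print(select_profile(3840, 2160, 59.94))  # 2160p60
```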

Image

Additional profile groups must be created for processing and rendering.

Note: The use of eight profiles may be unnecessary for other profile groups. For instance, if I wanted image enhancements (under processing) to apply only to 1080p content, two folders would be required:

if (srcHeight > 720) and (srcHeight <= 1080) "1080p"
else if (srcWidth > 1280) and (srcWidth <= 1920) "1080p"

else "Other"

Disabling Image upscaling for Cropped Videos:

You may encounter some 1080p or 2160p videos cropped just short of their original size (e.g. width = 1916). Those few missing pixels will put an abnormal strain on madVR as it tries to resize to the original display resolution. zoom control in the madVR control panel contains a setting to disable image upscaling if the video falls within a certain range (e.g. 10 lines or less). Disabling scaling adds a few black pixels to the video and prevents the image upscaling algorithm from resizing the image. This may prevent cropped videos from pushing madVR over the frame interval.
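The idea behind that zoom control option can be sketched like this (names and defaults are illustrative, not madVR's internal code):

```python
def pad_instead_of_scale(src_w, src_h, disp_w=1920, disp_h=1080, max_lines=10):
    # If the video falls only a few pixels short of the display size, add black
    # padding instead of invoking the image upscaling algorithm
    return 0 <= disp_w - src_w <= max_lines and 0 <= disp_h - src_h <= max_lines

print(pad_instead_of_scale(1916, 1080))  # True: pad 4 black columns, skip the upscaler
print(pad_instead_of_scale(1280, 720))   # False: a real upscale is needed
```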

How to Configure madVR Profile Rules

Display: 3840 x 2160p

Let's repeat this process, this time assuming the display resolution is 3840 x 2160p (4K UHD). Two graphics cards will be used for reference. A Medium-level card such as the GTX 1050 Ti, and a High-level card similar to a GTX 1080 Ti. Again, the source is assumed to be of high quality with a frame rate of 24 fps.

Scaling factor: Increase in vertical resolution or pixels per inch.

Resizes:
  • 2160p -> 2160p
  • 1080p -> 2160p
  • 720p -> 2160p
  • SD -> 2160p

Profile: "2160p"

2160p -> 2160p
3840 x 2160 -> 3840 x 2160
Increase in pixels: 0
Scaling factor: 0

This profile is identical in appearance to that for a 1080p display. Without image upscaling, the focus is on settings for Chroma upscaling, which is necessary for all videos, and Dithering. The only upscaling taking place is the resizing of the subsampled chroma layer.

Chroma Upscaling: Upscales the half-resolution chroma planes of a 4:2:0 source to match the native resolution of the luma layer (upscale to 4:4:4 and convert to RGB). Chroma upscaling is where the majority of your resources should go with native sources. My preference is for NGU Anti-Alias over NGU Sharp, as it seems better suited to upscaling the soft chroma layer. The sharp, black and white luma and the soft chroma can often benefit from different treatment, though this is difficult to test. Reconstruction, NGU Sharp, NGU Standard and super-xbr100 are also good choices.

Comparison of Chroma Upscaling Algorithms

Read the following post before choosing a chroma upscaling algorithm

Image Downscaling: N/A.

Image Upscaling: Set this to Jinc + AR in case some pixels are missing. This setting should be ignored, however, as there is no upscaling involved at 2160p.

Image Doubling: N/A.

Upscaling Refinement: N/A.

Artifact Removal: Artifact removal includes Debanding, Deringing, Deblocking and Denoising. I typically choose to leave Debanding enabled at a low value, but banding should be less of an issue with 10-bit 4K UHD sources compressed with HEVC. So we will save debanding for other profiles.

Deringing, Deblocking and Denoising are not usually general use settings. These types of artifacts are less common, or the artifact removal algorithm can be guilty of smoothing an otherwise clean source. If you want to use these algorithms with your worst cases, try using madVR's keyboard shortcuts. This will allow you to quickly turn the algorithm on and off with your keyboard when needed and all profiles will simply reset when the video is finished.

Used in small amounts, artifact removal can improve image quality without having a significant impact on image detail. Some choose to offset any loss of image sharpness by adding a small amount of sharpening shaders. Deblocking is great for cleaning up compressed video. Even sources that have undergone light compression can benefit from it without harming image detail when low values are used. Deringing is very effective for any sources with noticeable edge enhancement. And Denoising will harm image detail, but can often be the only way to remove bothersome video noise or film grain. Some may believe Deblocking, Deringing or Denoising are general use settings, while others may not.

Image Enhancements: Applying sharpening shaders to the image shouldn't be necessary as the source is already assumed to be of high-quality. image enhancements can still be attractive for those who feel chroma upscaling is simply not doing enough to sharpen the picture and want more texture and image depth. The example profile avoids using any enhancements.

Dithering: This is the last step before presentation. The difference between Ordered Dithering and Error Diffusion is very small, especially at bit depths of 8-bits or greater, but if you have the resources to spare, Error Diffusion produces a small quality improvement over Ordered Dithering. The slight performance difference between the two is also a way to save a few resources when you need them; you aren't supposed to see dithering, anyway. With madVR set to 8-bit output, I would recommend Error Diffusion, as reducing a 10-bit source to 8-bits increases the need for high-quality dithering.
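To see why dithering matters when dropping 10-bit values to 8 bits, here is a toy random dither. madVR's Error Diffusion is far more sophisticated, but the principle of preserving the average level is the same:

```python
import random

def dither_10_to_8(value10, rng=random.Random(0)):
    # 1024 levels -> 256 levels; the fractional part decides whether to round up
    scaled = value10 / 4
    base = int(scaled)
    return base + (rng.random() < scaled - base)

# A 10-bit value of 514 sits halfway between 8-bit levels 128 and 129;
# dithering preserves that half-step on average instead of banding to one level
samples = [dither_10_to_8(514) for _ in range(10000)]
print(sum(samples) / len(samples))  # close to 128.5
```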

Both Medium and High profiles use Error Diffusion 2.

HDR: For HDR10 content, read the instructions in Devices -> HDR. Simple passthrough involves a few checkboxes. AMD users must output from madVR at 10-bits (but 8-bit output from the GPU is still possible).

Medium:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Jinc + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Off
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

High:
  • Chroma: NGU Anti-Alias (high)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Jinc + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Off
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Profile: "Tone Mapping HDR"

This profile makes one small adjustment to the one above for anyone using tone map HDR using pixel shaders. madVR’s tone mapping can be very resource-heavy with all of the HDR enhancements enabled. To make room, I would recommend simply reducing the value of chroma upscaling to Bicubic60 + AR. Bicubic is more than acceptable as a basic chroma upscaler and is not in any way as impactful as madVR’s tone mapping in improving image quality.

HDR -> SDR Instructions: Choosing a target peak nits

Recommended checkboxes:
color tweaks for fire & explosions: disabled or balanced
highlight recovery strength: medium-high
measure each frame's peak luminance: checked


tone map HDR using pixel shaders:
  • Chroma: Bicubic60 + AR
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Jinc + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Off
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Profile: "1080p"

1080p -> 2160p
1920 x 1080 -> 3840 x 2160
Increase in pixels: 4x
Scaling factor: 2x

A 1080p source requires image upscaling.

For upscaling FHD content to UHD, image doubling is a perfect match for the 2x resize. 

Chroma Upscaling: NGU Anti-Alias is selected. Lowering the value of chroma upscaling is an option when trying to increase the quality of image doubling. Always try to maximize Luma doubling first, if possible. This is especially true if your display converts all inputs to 4:2:2 rather than 4:4:4. Chroma upscaling could be wasted by the display's processing. The larger quality improvements will come from improving the luma layer, not the chroma, and it will always retain the full resolution when it reaches the display.

Image Downscaling: N/A.

Image Upscaling: N/A.

Image Doubling: NGU Sharp is used to double the image. NGU Sharp is usually a safe choice for image upscaling because it is almost always sharp without any need for added enhancement.

Image doubling performs an exact 2x resize.

To calibrate image doubling, select image upscaling -> doubling -> NGU Sharp and use the drop-down menus. Set Luma doubling to its maximum value (very high) and everything else to let madVR decide.

If the maximum luma quality value is too aggressive, reduce Luma doubling until rendering times are safely under the movie frame interval (roughly 41.7 ms for a 24 fps source; aiming for 35-37 ms leaves some headroom). Leave the other settings to madVR. Luma quality always comes first and is most important.

Think of let madVR decide as madshi's expert recommendations for each upscaling scenario. This will help you avoid wasting resources on settings which do very little to improve image quality. So, let madVR decide. When you become more advanced, you may consider manually adjusting these settings, but only expect small improvements.

Luma & Chroma are upscaled separately:

Luma: RGB -> Y'CbCr 4:4:4 -> Y -> 1080p -> 2160p

Chroma: RGB -> Y'CbCr 4:4:4 -> CbCr -> 1080p -> 2160p

Keep in mind, NGU very high is three times slower than NGU high while only producing a small improvement in image quality. Attempting to use a setting of very high at all costs, without considering GPU stress or high rendering times, is not always a good idea. NGU very high is the best way to upscale, but only if you can accommodate the considerable performance hit. Higher values of NGU will make fine detail slightly more defined, but the overall appearance produced by each type (Anti-Alias, Soft, Standard, Sharp) remains consistent across quality levels.

Upscaling Refinement: NGU Sharp shouldn’t require any added sharpening. If you want the image to be sharper, you can check some options here such as crispen edges or sharpen edges.  

Artifact Removal: Debanding is set to low/medium. Most 8-bit sources, even uncompressed Blu-rays, can display small amounts of banding because they don't compress as well as 10-bit HEVC sources. So I find it helpful to use a small level of debanding to help with these artifacts because they are so common, which shouldn't negatively harm image detail. You can choose to disable this if you want the sharpest image possible.

Image Enhancements: N/A.

Dithering: Both Medium and High profiles use Error Diffusion 2.

Medium:
  • Chroma: NGU Anti-Alias (low)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (high))
  • <-- Chroma: let madVR decide (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + LL + AR)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

High:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: very high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (very high))
  • <-- Chroma: let madVR decide (NGU medium)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Jinc + AR)
  • <-- Downscaling algo: let madVR decide (SSIM 1D 100% + LL + AR)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Profile: "720p"

720p -> 2160p
1280 x 720 -> 3840 x 2160
Increase in pixels: 9x
Scaling factor: 3x

At a 3x scaling factor, it is possible to quadruple the image.

The image is quadrupled (4x) and then downscaled to 75% of that size (2880p -> 2160p) to match the output resolution. This is the lone change from Profile 1080p. If quadrupling is used, it is best combined with a sharp image downscaler such as SSIM 1D or Bicubic150.
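The quadruple-then-downscale arithmetic for this profile works out as follows:

```python
quadrupled = 720 * 4      # two 2x doublings: 720p -> 1440p -> 2880p
print(quadrupled)         # 2880
print(2160 / quadrupled)  # 0.75: the downscaler removes the extra 25%
```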

Chroma Upscaling: NGU Anti-Alias is selected.

Image Downscaling: N/A.

Image Upscaling: N/A.

Image Doubling: NGU Sharp is the selected image doubler. Image doubling performs an exact 4x resize combined with image downscaling.

Luma & Chroma are upscaled separately:

Luma: RGB -> Y'CbCr 4:4:4 -> Y -> 720p -> 2880p -> 2160p

Chroma: RGB -> Y'CbCr 4:4:4 -> CbCr -> 720p -> 2160p

Upscaling Refinement: NGU Sharp shouldn’t require any added sharpening. If you want the image to be sharper, you can check some options here such as crispen edges or sharpen edges.  

soften edges is added to assist NGU Sharp. The flaw of NGU Sharp is that edges can often become too straight when the scaling factor becomes large. Using soften edges will apply a very small correction to all edges without having much or any impact on image detail. Some may also want to experiment with add grain with large upscales for similar reasons.

NGU Sharp | NGU Sharp + soften edges + add grain | Jinc + AR

Artifact Removal: Debanding is set to low/medium.

Image Enhancements: N/A.

Dithering: Both Medium and High profiles use Error Diffusion 2.

Medium:
  • Chroma: NGU Anti-Alias (low)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (high))
  • <-- Chroma: let madVR decide (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + LL + AR)
  • Upscaling refinement: soften edges (2)
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

High:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: very high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (very high))
  • <-- Chroma: let madVR decide (NGU medium)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Jinc + AR)
  • <-- Downscaling algo: let madVR decide (SSIM 1D 100% + LL + AR)
  • Upscaling refinement: soften edges (2)
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Profile: "SD"

SD -> 2160p
640 x 480 -> 3840 x 2160
Increase in pixels: 27x
Scaling factor: 4.5x

The final resize, SD to 2160p, is a monster (4.5x!). This is perhaps the only scenario where image quadrupling is not only useful but necessary to maintain the integrity of the original image.

The image is upscaled 4x by image doubling (480p -> 1920p) and the remaining 1.125x (1920p -> 2160p) by the Upscaling algo.
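The split between image doubling and the upscaling algo works out as follows:

```python
quadrupled = 480 * 4      # two 2x doublings: 480p -> 960p -> 1920p
print(quadrupled)         # 1920
print(2160 / quadrupled)  # 1.125: the upscaling algo finishes the resize
```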

Chroma Upscaling: NGU Anti-Alias is selected.

Image Downscaling: N/A.

Image Upscaling: N/A.

Image Doubling: Because we are upscaling SD sources, NGU Standard will be substituted for NGU Sharp. NGU Standard with some added grain can reduce the plastic look caused by using a sharp upscaler such as NGU Sharp on a lower-quality source without losing too much of the desired sharpness and detail. The softer NGU Anti-Alias is also an option.

Image doubling performs an exact 4x resize combined with image upscaling.

Luma & Chroma are upscaled separately:

Luma: RGB -> Y'CbCr 4:4:4 -> Y -> 480p -> 1920p -> 2160p

Chroma: RGB -> Y'CbCr 4:4:4 -> CbCr -> 480p -> 2160p

Upscaling Refinement: If you want the image to be sharper, try adding a small level of crispen edges or sharpen edges.  

soften edges and add grain are used to mask some of the lost texture detail caused by upsampling a lower-quality SD source to a much higher resolution. When performing such a large upscale, the upscaler can keep the edges of the image quite sharp, but fail to recreate the accompanying texture detail. 

Artifact Removal: Debanding is set to low/medium.

Image Enhancements: N/A.

Dithering: Both Medium and High profiles use Error Diffusion 2.

Medium:
  • Chroma: NGU Anti-Alias (low)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Standard
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Standard (high))
  • <-- Chroma: let madVR decide (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + LL + AR)
  • Upscaling refinement: soften edges (1); add grain (3)
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

High:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Standard
  • <-- Luma doubling: very high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Standard (very high))
  • <-- Chroma: let madVR decide (NGU medium)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Jinc + AR)
  • <-- Downscaling algo: let madVR decide (SSIM 1D 100% + LL + AR)
  • Upscaling refinement: soften edges (1); add grain (3)
  • Artifact removal - Debanding: low/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

Creating madVR Profiles

These profiles can be translated into madVR profile rules.

Add this code to each profile group:

if (srcHeight > 1080) "2160p"
else if (srcWidth > 1920) "2160p"

else if (srcHeight > 720) and (srcHeight <= 1080) "1080p"
else if (srcWidth > 1280) and (srcWidth <= 1920) "1080p"

else if (srcHeight > 540) and (srcHeight <= 720) "720p"
else if (srcWidth > 960) and (srcWidth <= 1280) "720p"

else if (srcHeight <= 540) and (srcWidth <= 960) "SD"

OR

if (deintFps <= 25) and (srcHeight > 1080) "2160p25"
else if (deintFps <= 25) and (srcWidth > 1920) "2160p25"

else if (deintFps > 25) and (srcHeight > 1080) "2160p60"
else if (deintFps > 25) and (srcWidth > 1920) "2160p60"

else if (deintFps <= 25) and ((srcHeight > 720) and (srcHeight <= 1080)) "1080p25"
else if (deintFps <= 25) and ((srcWidth > 1280) and (srcWidth <= 1920)) "1080p25"

else if (deintFps > 25) and ((srcHeight > 720) and (srcHeight <= 1080)) "1080p60"
else if (deintFps > 25) and ((srcWidth > 1280) and (srcWidth <= 1920)) "1080p60"

else if (deintFps <= 25) and ((srcHeight > 540) and (srcHeight <= 720)) "720p25"
else if (deintFps <= 25) and ((srcWidth > 960) and (srcWidth <= 1280)) "720p25"

else if (deintFps > 25) and ((srcHeight > 540) and (srcHeight <= 720)) "720p60"
else if (deintFps > 25) and ((srcWidth > 960) and (srcWidth <= 1280)) "720p60"

else if (deintFps <= 25) and ((srcWidth <= 960) and (srcHeight <= 540)) "540p25"

else if (deintFps > 25) and ((srcWidth <= 960) and (srcHeight <= 540)) "540p60"

Sony Reality Creation Processing Emulation

markmon1 at AVS Forum devised a set of settings that are meant to emulate the video processing used by Sony projectors and TVs. Sony’s Reality Creation processing combines advanced upscaling, sharpening/enhancement and noise reduction to reduce image noise while still producing a very sharp image.

To match the result of Reality Creation in madVR, markmon lined up a Sony VPL-VW675ES and a JVC DLA-RS640 side-by-side and adjusted various settings in madVR until the projected image from the JVC resembled the projected image from the Sony. The settings profiles created for 1080p ("4k Upscale") and 4K content combine sharp upscaling in madVR with a small amount of sharpening shaders, noise reduction and artifact removal, all intended to slightly lower the noise floor of the image without compromising too much detail or sharpness.

Gallery of settings for Sony Reality Creation emulation in madVR:
[Image: 32589565088_16e96e7d8d_o.jpg]
#9
7. OTHER RESOURCES

Advanced Topics

List of Compatible Media Players & Calibration Software

madVR Player Support Thread

Building a High-performance HTPC for madVR

Building a 4K madVR HTPC

Kodi Beginner's Guide

Kodi Quick Start Guide

Configuring a Remote Control

HOW TO - Configure a Logitech Harmony Remote for Kodi

HTPC Updater

This program is designed to download and install updated copies of MPC-HC, LAV Filters and madVR.

For this tool to work, a 32-bit version of MPC-HC must be installed on your system along with LAV Filters and madVR. Running the program will update each of them. The benefit for DSPlayer users is that it avoids the process of manually extracting and re-registering madVR with each update.

Note: On the first run, madVR components are dropped one level above the existing installation folder. If your installation was C:\Program Files\madVR, madVR installation files would be placed in the C:\Program Files directory. This is the default behavior of the program. Subsequent runs will overwrite the existing installation. If one component fails, try updating it manually before running the program again.

HTPC Updater

MakeMKV

MakeMKV is pain-free software for ripping Blu-rays and DVDs into an MKV container, which can be read by Kodi. By selecting the main title and an audio stream, it is possible to create bit-for-bit copies of Blu-rays with the accompanying lossless audio track in one hour or less. No encoding is required — the video is placed in a new container and packaged with the audio and subtitle track(s). From here, the file can be added directly to your Kodi library or compressed for storage using software such as Handbrake. This is the fastest way to import your Blu-ray collection into Kodi.

Tip: Set the minimum title length to 3600 seconds (60 minutes) and a default language preference in Preferences to ease the task of identifying the correct video, audio and subtitle tracks.

MakeMKV Homepage (Beta Registration Key)

Launcher4Kodi

Launcher4Kodi is an HTPC helper utility that can give a Windows-based HTPC running Kodi appliance-like behavior. It auto-starts Kodi on power on or resume from sleep and auto-closes Kodi on power off. It can also ensure Kodi remains focused when loaded fullscreen and set either Windows or Kodi to run as the shell.
#12
awesome information buddy
#13
(2016-02-10, 05:39)Derek Wrote: awesome information buddy

Thanks. Hopefully it will be of some use to others.
#14
Hi.
Excellent review, future-proof.
I wanted to ask just one thing, for Demo HDR played on a display 1920x1080 24Hz and Video Rendering madVR, rules + profile you entered for display: 3840 x 2160p can be inserted to a display 1920x1080 24Hz?
Thanks.
#15
(2016-02-12, 16:57)gotham_x Wrote: Hi.
Excellent review, future-proof.
I wanted to ask just one thing, for Demo HDR played on a display 1920x1080 24Hz and Video Rendering madVR, rules + profile you entered for display: 3840 x 2160p can be inserted to a display 1920x1080 24Hz?
Thanks.

If you're asking if you can use the profile rules for a 4K display for a 1080p display, then yes.