Windows - HOW TO - Set up madVR for Kodi DSPlayer & External Players

Warner306 Offline
Posting Freak
Posts: 2,666
Joined: Feb 2014
Reputation: 91
Location: Canada
Post: #1
madVR Set up Guide (for Kodi DSPlayer and Media Player Classic)
madVR 0.91.9
LAV Filters 0.69
Last Updated: 2017-04-28

Please provide corrections if you notice any technical information that appears incorrect. Errors are not uncommon, as new features are sometimes added that are beyond my technical acumen.

It is also my intent to make the content as easy to read as possible. If you find any typos or unclear statements, point them out and I will attempt to improve the language.

What is madVR?

New to Kodi? Try this Quick Start Guide.

This guide is an additional resource for those using Kodi DSPlayer or MPC-HC. Set up for madVR is a lengthy topic and its configuration will remain fairly consistent regardless of the chosen media player.

Table of Contents:
  1. Devices;
  2. Processing;
  3. Scaling Algorithms;
  4. Rendering;
  5. Measuring Performance;
  6. Sample Settings Profiles, Profile Rules & Advanced Settings;
  7. Other Resources.
..............

Devices
Identification, Properties, Calibration, Display Modes, Color & Gamma, HDR and Screen Config.

Processing
Deinterlacing, Artifact Removal, Image Enhancements and Zoom Control.

Scaling Algorithms
Chroma Upscaling, Image Downscaling, Image Upscaling and Upscaling Refinement.

Rendering
General Settings, Windowed Mode Settings, Exclusive Mode Settings, Stereo 3D, Smooth Motion, Dithering and Trade Quality for Performance.

..............

Credit goes to the JRiver Media Center madVR Expert Guide, Asmodian's madVR Options Explained and madshi for most technical descriptions.

madVR Rendering Path

The chart below is a summary of the rendering process.

[Image: madVR%20Chart_zpsacwqfwoc.png]
Source

..............

Resource Use of Each Setting

madVR can be very demanding on most graphics cards. Accordingly, each setting is ranked based on the amount of processing resources consumed: Minimum, Low, Medium, High and Maximum. Users of integrated graphics cards should not combine too many features labelled Medium and will be unable to use features labelled High or Maximum without performance problems.

This performance scale only relates to processing features requiring use of the GPU.

..............
(This post was last modified: 2017-04-29 04:12 by Warner306.)
1. DEVICES
  • Identification
  • Properties
  • Calibration
  • Display Modes
  • Color & Gamma
  • HDR
  • Screen Config
[Image: Devices_zps5q85dk7k.png]

Devices contains settings necessary to describe the capabilities of your display, including: color space, bit depth, 3D support, calibration, display modes, HDR support and screen type.

device name
Customizable device name. The default name is taken from the device's EDID (Extended Display Information Data).

device type
The device type is only important when using a Digital Projector. If Digital Projector is selected, a new screen config section becomes available under devices.

Identification

This tab doesn't provide anything to configure — it merely shows the EDID data for the chosen device.

madVR can be very technical for new users. Before continuing on, it can be useful to have a refresher on video formats and common terminology. Knowing a range of basic terms will make the sections that follow less daunting:

Common Video Source Specifications & Definitions

Properties – RGB Output Levels

[Image: RGB-Levels_zpsztecgs5j.png]

Correct RGB output levels are necessary when passing from PC to TV color spaces.

Note: LAV Video RGB settings are not relevant and will not impact these conversions.

Option 1:

When sending video via HDMI to a TV, the most straightforward color space is set as follows:

(madVR) PC levels (0-255) -> (GPU) Limited Range RGB 16-235 -> (TV) Output as RGB 16-235

madVR expands the source 16-235 signal to full range RGB, leaving the conversion back to 16-235 to the graphics card. Expanding the source prevents the GPU from clipping the image during the conversion to 16-235. Desktop levels remain accurate. However, it is possible to introduce banding if the GPU does not use dithering when compressing 0-255 to 16-235. The range is converted twice: once by madVR and once by the GPU.
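The double conversion can be sketched in a few lines of Python (illustrative 8-bit integer math only; madVR actually processes in 16 bits and dithers the result):

```python
def expand_tv_to_pc(v):
    """madVR's step: stretch TV range (16-235) to PC range (0-255)."""
    return round((v - 16) * 255 / 219)

def compress_pc_to_tv(v):
    """GPU's step: squeeze PC range (0-255) back to TV range (16-235)."""
    return round(v * 219 / 255 + 16)

# Reference black and white survive the round trip intact.
assert compress_pc_to_tv(expand_tv_to_pc(16)) == 16
assert compress_pc_to_tv(expand_tv_to_pc(235)) == 235

# Without dithering, the GPU's compression maps some neighboring input
# values to the same output step, which is where banding can creep in.
assert compress_pc_to_tv(3) == compress_pc_to_tv(4)
```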

This may be the only option for graphics cards that do not allow full range RGB (0-255) output over HDMI, such as many older Intel iGPUs and Nvidia cards. However, any video driver can be configured to output 0-255 by running madLevelsTweaker.exe in the madVR installation folder.

Option 2:

If your PC is a dedicated HTPC, an alternative approach is possible:

(madVR) TV levels (16-235) -> (Kodi) Use limited color range (16-235) -> (GPU) Full Range RGB 0-255 -> (TV) Output as RGB 16-235

In this configuration, the signal remains 16-235 until it reaches the display. A GPU set to 0-255 will allow passthrough without clipping the levels output by madVR. Kodi must be configured under System -> Video output to match madVR — using a limited color range for both Kodi and VideoPlayer.

When set to 16-235, madVR will not alter the source levels. This means it is possible to pass 0-15 and 236-255 if the source includes these values. The display will clip to 16-235. Clipping patterns such as these MP4 test patterns should be used to adjust brightness and contrast until bars 16-235 are visible.

This can be the best option for GPUs that output full range to a display that only accepts limited RGB. Banding is unlikely as madVR handles the single range conversion (Y'CbCr -> RGB) and the GPU is bypassed. However, the desktop and other applications will output incorrect levels. PC applications render black at 0,0,0 while the display expects 16,16,16. The result is crushed blacks. This sacrifice is made to improve the quality of the video player at the expense of other computing.

Option 3:

A final option involves setting all sources to full range:

(madVR) PC levels (0-255) -> (GPU) Full Range RGB 0-255 -> (TV) Output as RGB 0-255

madVR expands the source to 0-255 and displays it full range on your display. The display's HDMI black level must be set to display full range RGB (HDMI Normal vs. HDMI Low).

When converting Y'CbCr 16-235 to RGB 0-255, madVR will clip 0-15 and 236-255. Clipping below 16 and above 235 is acceptable as long as a correct grayscale is maintained. These MP4 test patterns for black and white clipping should be used to confirm video levels (16-235) are displayed correctly.
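The single conversion madVR performs here can be sketched as follows (BT.709 coefficients; the clipping behavior is the point, the exact rounding is illustrative):

```python
def ycbcr_to_full_rgb(Y, Cb, Cr):
    """Convert one 8-bit limited-range BT.709 Y'CbCr pixel to full-range
    RGB, clipping anything that lands outside 0-255 (illustrative only)."""
    y  = (Y - 16) / 219            # luma: 16-235 -> 0.0-1.0
    cb = (Cb - 128) / 224          # chroma: 16-240 -> -0.5..0.5
    cr = (Cr - 128) / 224
    r = y + 1.5748 * cr            # BT.709 conversion coefficients
    g = y - 0.1873 * cb - 0.4681 * cr
    b = y + 1.8556 * cb
    clip = lambda v: max(0, min(255, round(v * 255)))
    return clip(r), clip(g), clip(b)

# Video black (16) and white (235) map to PC black and white; a
# below-black value such as Y=8 is clipped to the same 0,0,0.
```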

This should be the optimal setting for displays and GPUs with a full range setting. The desktop maintains correct PC levels and banding is unlikely as madVR handles the lone range conversion.

I would recommend trying all three options. In my experience, the result of each setting can vary depending on the GPU and display, and this will not always conform with the theory of the ideal presentation path.

For testing, start with the referenced AVS Forum Calibration Patterns (Basic Settings) to confirm the output of 16-25 and 230-235, and move on to these videos, which can be used to fine-tune "black 16" and "white 235."

More information on PC vs TV color spaces here.

Properties – Native Display Bit Depth

The display bit depth is the value madVR dithers to when reducing its 16-bit processing result. Display panels are manufactured to a specific bit depth. The importance of the display bit depth can be misunderstood due to the use of high-quality dithering. madVR's dithering algorithms will smooth 8-bit gradients to the point that dithered 8-bit and dithered 10-bit look equally smooth when viewed from a normal seating position.

10-bit output requires the following be checked in general settings:
  • enable automatic fullscreen exclusive mode;
  • use Direct3D 11 for presentation (Windows 7 and newer).
Typically, TVs supporting UHD resolutions also support 10-bit input, but this is not always the case. UHD TVs supporting High Dynamic Range (HDR) are currently the only models required to support 10 bits. Many older TVs also use native 10-bit panels. To confirm 10-bit output is received, the following test protocol can be undertaken.

[Image: Display-Bit-Depth_zpsnqjv3xkr.png]

The bit depth determines the number of available shades of each color. This should not be confused with standard bit depths used with color spaces such as rec.709 & rec.2020:

Rec. 709 - Output at 8-bits (256 shades per primary color), common to current 1080p Blu-ray.
Rec. 2020 - Output at 10-bits (1,024 shades) or 12-bits (4,096 shades), common to the current UHD standard.

The bit depth defines the range of color values but does not imply enhanced primary colors or improved color saturation. The input color space will remain unchanged regardless of the chosen bit depth. Outputting an 8-bit (Rec. 709) source at 10-bits adds 2 bits of precision, increasing the number of steps between each color without changing the maximum or minimum values. The increased bit depth means less dithering is added to the image (when downconverting the 16-bit processing result to the display bit depth) to simulate the missing color steps. The result is a smoother picture with less overall noise.
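A toy example of why dithering can simulate the missing steps (plain random dither here; madVR's ordered and error-diffusion dithering is far more sophisticated):

```python
import random

def quantize(v, bits=8, dither=False):
    """Reduce a 0.0-1.0 value to `bits` of precision. Optional random
    dither (illustrative only) spreads the rounding error as noise."""
    levels = (1 << bits) - 1           # 255 steps at 8 bits
    x = v * levels
    if dither:
        x += random.uniform(-0.5, 0.5)
    return max(0, min(levels, round(x)))

random.seed(0)
v = 100.4 / 255                        # a shade between two 8-bit steps
plain = quantize(v)                    # always lands on 100: step is lost
samples = [quantize(v, dither=True) for _ in range(10000)]
average = sum(samples) / len(samples)  # mix of 100s and 101s averages ~100.4
```

Viewed from a distance, the eye averages the dithered pixels, so the in-between shade is perceived even though the panel cannot display it directly.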

It is always a safe bet to set madVR to 8-bits if the panel bit-depth is unknown, as this value is most common. Almost all displays will accept a 10-bit input but many will dither the output to 8-bits.

More on bit depths here.

Properties – 3D Format

[Image: 3D-Format_zps64piv6wk.png]

madVR supports stereoscopic 3D encoded in MPEG4-MVC. This includes 3D Blu-ray discs and MKV rips. MakeMKV is likely the easiest way to convert a frame-packed 3D source into a playable MKV.

Stereoscopic 3D is designed to capture separate images of the same object from slightly different angles to create an image for the left eye and right eye. The brain is able to combine the two images into one, which leads to a sense of enhanced depth.

The input format must be frame-packed MPEG4-MVC. The output format depends on the HDMI spec, operating system and display. 3D formats with the left and right images on the same frame will be sent out as 2D images.

3D playback requires four ingredients:
  • enable stereo 3d playback is checked in the madVR control panel (rendering -> stereo 3d);
  • A 3D decoder is installed (LAV Filters 0.68+ with 3D software decoder installation selected);
  • A 3D-capable display is used (with its 3D mode enabled);
  • Windows 8.1 or Windows 10 is the operating system.
The display type determines the way 3D images are displayed:
  • Active 3D TV: The left and right eye images are alternated.
  • Passive 3D TV: The left eye and right eye images are shown on the same frame.
Active 3D TVs display 3D content in frame-sequential format, where the left eye and right eye images are separated and alternated. This is done 48 times per second or 24 times per eye. Battery-powered 3D glasses use active shutters to open and close each eye in time with the image on-screen.

Passive 3D TVs are limited to showing a single image, which interweaves each eye onto a single frame. The display and 3D glasses use a polarizing filter, where only the portions of the screen meant for each eye are visible.

auto
The default output format is frame-packed 3D Blu-ray. The output is an extra-tall (1920 x 2205 - with padding) frame containing the left eye and right eye images stacked on top of each other at full resolution. A display will convert this output.

auto – (HDMI 1.4+, Windows 8+ & Display with HDMI 1.4+): Receives the full resolution, frame-packed output. On an active 3D display, each frame is split and shown sequentially. A passive 3D display interweaves the two images as a single image.

auto – (HDMI 1.3, Windows & Display with HDMI 1.3): Receives a downconverted, half side-by-side format. On an active 3D display, each frame is split, upscaled and shown sequentially. A passive 3D display upscales the two images and combines them as a single frame.

It is possible to override this behavior by selecting a specific 3D format.

Force 3D format below:

side-by-side

Side-by-side (SbS) stacks the left eye and right eye images horizontally. madVR outputs half SbS, where each eye is stored at half its horizontal resolution (960 x 1080) to fit on one 2D frame. This is done to reduce file sizes for HDMI 1.3. The display splits the frame and scales each image back to its original resolution.

An active 3D display shows half SbS sequentially. Passive 3D displays will split the screen into odd and even horizontal lines. The left eye and right eye odd sections are combined. Then the left eye and right eye even sections are combined. This weaving creates a sense of having two images.

top-and-bottom

Top-and-bottom (TaB) stacks the left eye and right eye images vertically. madVR outputs half TaB, where each eye is stored at half its vertical resolution (1920 x 540) to fit on one 2D frame. This is done to reduce file sizes for HDMI 1.3. The display splits the frame and scales each image back to its original resolution.

An active 3D display shows half TaB sequentially. Passive 3D displays will split the screen into odd and even horizontal lines. The left eye and right eye odd sections are combined. Then the left eye and right eye even sections are combined. This weaving creates a sense of having two images.

line alternative

Line alternative is an interlaced 3D format designed for passive 3D displays. Each frame contains a left odd field and right odd field. The next frame contains a left even field and right even field. 3D glasses make the appropriate lines visible for the left or right eye. The display must be set to use its native resolution without any over or underscan.

column alternative

Column alternative is an interlaced 3D format similar to line alternative, except the frames are matched vertically as opposed to horizontally. This is another passive 3D format. One frame contains a left odd field and right odd field. The next frame contains a left even field and right even field. 3D glasses make the appropriate lines visible for the left or right eye. The display must be set to use its native resolution without any over or underscan.

swap left / right eye

Swap the order in which the left eye and right eye frames are displayed. This corrects the behavior of some displays, which show the left eye and right eye images in the incorrect order. Incorrect eye order can be fixed for all formats, including line and column alternative. Many displays offer the same option to swap eyes in their menus.

Make certain your 3D glasses are synced with the display. If the image seems blurry (particularly, the background elements), your glasses are probably not turned on.

More detail on 3D formats here.

Calibration

[Image: Calibration_zpswzpyaknc.png]

Calibration controls are a base set of values for madVR to work with when doing any type of color matrixing or transfer function conversion. This requires you know basic information about your display such as its native color gamut and transfer function.

disable calibration controls for this display

Turns off calibration controls for gamut and transfer function conversions. The lone exception is High Dynamic Range (HDR) mapping, which remains on by default if HDR mapping is enabled.

If you purchased your display and went through only basic calibration without confirming a correct grayscale with test patterns or a colorimeter, this is the safest choice.

this display is already calibrated

This impacts the mapping of content with a different gamut than the display. For example, a Rec. 601 (SMPTE-C) source, such as an SD DVD, could be mapped to the Rec. 709 (BT.709) color space of most HD televisions. Gamut mapping is necessary due to the fact most displays are calibrated for a specific color space. It would take multiple calibrations and multiple 3D LUT files to handle all sources. Allowing madVR to handle these color space conversions is a simpler solution.

If you want to use this feature but don't own a calibrated display, try the following values, which are the most common:
  • primaries / gamut: BT.709
  • transfer function / gamma: pure power curve 2.20
calibrate this display by using yCMS

Medium Processing

yCMS and 3DLUT files are forms of color management that use the GPU for gamut and transfer function correction. yCMS is the simpler of the two, requiring only a few measurements with a colorimeter and appropriate software. This is a lengthy topic beyond the scope of this guide.

yCMS files can be created with the use of HCFR. If you are going this route, it may be better to use the more accurate 3D LUT.

calibrate this display by using external 3DLUT files

Medium Processing

Display calibration software such as ArgyllCMS, CalMAN, LightSpace CMS or dispcalGUI is used with madVR to create a 256 x 256 x 256 3D LUT.

A 3D LUT (3D lookup table) provides sophisticated grayscale, transfer function and primary color calibration by using the computer's GPU to produce corrected color values.

Display calibration software measures test patterns with the use of a colorimeter placed in a fixed position. Running madTPG.exe (madVR Test Pattern Generator) from the madVR installation folder provides the necessary patterns. The software will output a .3dlut file. To use the 3D LUT, select it from this menu before playback.

A LUT is essentially a table of corrected output values, one for each RGB input triplet. When done correctly, it should enforce near-ideal adherence to a desired color gamut when madVR renders content to your display. This is more sophisticated than traditional grayscale calibration, as a 3D LUT is able to provide additional correction beyond the limited color controls of a typical high-definition display.

Common HD color gamuts: Rec. 709 (BT.709), DCI-P3 and Rec. 2020.

Instructions on how to generate and use 3D LUT files with madVR are found below:
ArgyllCMS | CalMAN | LightSpace CMS

Link: What is a LUT?

Visual Representation of a 3D LUT

Luminance adds volume to a chromaticity diagram.
This creates a 256 x 256 x 256 cube like the RGB cube below:

[Image: 3D-LUT-Wireframe_zps7ixaz3em.png]

Any color space (e.g. XYZ) can be represented inside the cube.
Luminance (black to white) creates uneven distribution of colors:

[Image: 3D-LUT-XYZ_zpsiw1xblal.png]

A 3D LUT is capable of correcting the three main aspects of display calibration:
  • Grayscale: Finding the achromatic point (D6500) and maintaining it from black to white (0 to 100% white) without any RGB intrusion.
  • Primaries: Combining values of red, green and blue to create the values placed on the corners of a triangular color gamut. These primaries are the base to create other colors.
  • Transfer Function: Producing gamma-corrected or perceptual quantization-corrected (PQ) color values. A capture device converts light to voltage. A display converts voltage to light for each pixel using a transfer function suitable for the gamut luminance range.
Rather than using the dials of your display to create a balance that satisfies all of the above within an accepted measure of error (e.g. Delta-E), a 3D LUT can theoretically correct any RGB input by applying the adjustments stored in the table. This requires a small amount of GPU power but can produce near-perfect color.
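Stripped to its essence, a 3D LUT is a pre-computed answer for every possible input triplet, so playback becomes a table lookup instead of live math. The "correction" below is a made-up stand-in, not a real calibration result:

```python
# A hypothetical correction a calibration run might have produced: this
# imaginary display is slightly dark in red, so the LUT lifts red values.
def correct(r, g, b):
    return (round(255 * (r / 255) ** 0.95), g, b)

# Build one slice of the table (fixed green/blue) to keep the sketch small;
# a real 256 x 256 x 256 LUT stores an entry for all ~16.7M inputs.
slice_g, slice_b = 128, 128
table = {r: correct(r, slice_g, slice_b) for r in range(256)}

def lookup(r):
    return table[r]        # playback cost: a table read, no computation
```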

disable GPU gamma ramps
Disable the default GPU gamma LUT. This will return to its default when madVR is closed. Using a windowed overlay means this setting only impacts madVR.

Enable this option if you are using a 3D LUT.

Display Modes

[Image: Display-Modes_zpsdyfv4w5p.png]

Display modes adjusts the refresh rate of the display to match the source frame rate. This will ensure the smoothest playback by matching the source frame frequency to the display's refresh frequency. For example, playing 23.976 fps content such as a 1080p Blu-ray using the display's 24 Hz mode (23 Hz in Windows) ensures each frame is shown once (or at a fixed multiple). Conversely, playing 23.976 fps content at 60 Hz presents a mismatch — the refresh rates do not align — artificial frames are added by 3:2 pulldown, which creates motion judder.

It is recommended to fill the blank text box with all display modes compatible with your display. Upon starting playback, the screen should flash as the display switches to the output mode that best matches the source.

A list of available refresh rates for the connected display can be viewed in Windows:
  • Right-click on the desktop and select Display settings then Advanced display settings;
  • Choose Display adapter properties -> Monitor;
  • A list of compatible refresh rates is shown in the drop-down.
Ideally, a GPU and display should be capable of the following refresh rates:
  • 23.976 Hz
  • 24 Hz
  • 25 Hz
  • 29.97 Hz
  • 30 Hz
  • 50 Hz
  • 59.94 Hz
  • 60 Hz
In most cases, the display will output at a multiple of the input refresh rate (29.97 fps x 2 = 59.94 Hz). Telecine (3:2 pulldown) is avoided so long as the refresh rates match or remain close to the original.
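The matching logic amounts to picking the mode whose rate is the closest whole multiple of the source frame rate. A minimal sketch (the helper and mode list are hypothetical, not madVR's actual code):

```python
MODES = [23.976, 24.0, 25.0, 29.97, 30.0, 50.0, 59.94, 60.0]

def best_mode(fps, modes=MODES):
    """Pick the refresh rate closest to a whole multiple of fps."""
    def error(hz):
        repeat = max(1, round(hz / fps))   # frames shown per source frame
        return abs(hz - fps * repeat)      # leftover mismatch causes judder
    return min(modes, key=error)

best_mode(23.976)                      # 23.976: each frame shown once
best_mode(29.97, [50.0, 59.94, 60.0])  # 59.94: each frame shown twice
```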

madVR recognizes display modes by resolution and refresh rate. The five most important (for a 1080p display) are: 1080p23, 1080p24, 1080p50, 1080p59 and 1080p60.

treat 25p movies as 24p (requires ReClock or VideoClock)
Check this box to remove PAL speedup common to PAL region (European) content. madVR will slow down 25 fps film by 4% to its original 24 fps. This requires the use of an audio renderer such as ReClock or VideoClock (JRiver Media Center) to slow down the audio by the same amount.

Note on 24p Smoothness:

Refresh rate matching cannot compensate for flaws in the way content is captured or displayed. Video with a frame rate of 24 fps (such as film and television) will display some stutter in panning scenes even when shown at its native refresh rate (24p). This is due to the low frame count, which is insufficient to resolve fine detail in motion. The human eye can easily discern frame rates as high as 60 fps. As a result, even film shown at a commercial theater is capable of displaying less than perfect motion. While less than perfect, true 24 fps playback will show superior motion tracking to 3:2 pulldown and other forms of frame interpolation. Just don't expect it to be flawless.

A secondary concern is screen flicker. The low frame rate of 24 fps sources can produce varying degrees of flicker due to the slight pauses between each frame change. Most consumer displays address flicker by showing 24 fps content at some multiple. The higher the multiple, the less flicker. A 24 fps source shown at a multiple of at least three such as 72 Hz (24 fps x 3 = 72 Hz) or 120 Hz (24 fps x 5 = 120 Hz) should eliminate this problem. This repeated cadence, however, has the side effect of creating some blurring as your eye tracks the frame across the screen.

DSPlayer Users: Those experiencing audio sync issues with 24p playback may want to experiment with adding a fixed audio delay. It is also possible to leave display mode switching to Kodi (Video -> Playback -> Adjust display refresh rate) as opposed to madVR. Kodi offers the simplest solution to automatic refresh rate matching but not necessarily the most stable.

A beginner's guide to 24p playback can be found here.

Color & Gamma

[Image: Color-amp-Gamma_zpsiibiulwq.png]

Color and transfer function adjustments should be avoided unless you are unable to correct an issue using the calibration controls of your display.

enable gamma processing

This option works in conjunction with the gamma set in calibration. The value in calibration is used as the base, which madVR uses to map to a chosen gamma below. A gamma must be set in calibration for this feature to work.

Most viewing environments will require a gamma between 2.20 and 2.40 to conform to SD/HD color gamuts, though other values are available.

madVR Explained:

pure power curve
Use the standard pure power gamma function.

BT.709/601 curve
Use the inverse of the gamma function cameras are meant to use when recording. This can be helpful if your display has crushed shadows.

2.20
Brightens mid-range values, which can be nice in a brightly lit room.

2.40
Darkens mid-range values, which might look better in a darker room.
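The math behind this remapping is simple: decode with the target curve, then re-encode with the display's base curve. A sketch under that assumption (the function name is mine, not madVR's):

```python
def remap_gamma(v, base=2.20, target=2.40):
    """Re-encode a normalized 0-1 code value so a display calibrated to a
    `base` pure power gamma presents the `target` curve instead."""
    linear = v ** target          # luminance the target curve calls for
    return linear ** (1 / base)   # code value producing it on this display

# A 2.40 target on a 2.20 display darkens mid-range values, as described.
```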

It is best to leave these options alone. Without knowing what you're doing, it is more likely you will degrade the image rather than improve it.

HDR

[Image: HDR_zpsvpv03mp9.png]

HDR is for mapping High Dynamic Range content to your display. This includes a range of 4K UHD media including UHD Blu-ray and streaming services such as Netflix and Amazon. Currently, this support may be limited to HDR demos due to a lack of DRM-free content.

HDR content uses metadata to maximize peak white luminance, allowing for brighter highlights and more texture detail in bright image areas. In the past, colorists worked with a peak brightness of 100 nits for BT.709 HD content. This luminance range could be stretched by a bright display, but this was not intended and would distort the output. HDR expands the brightness range and color space when video is mastered, vastly improving the contrast ratio.

UHD Blu-ray separates HDR metadata into two layers:
  • Base Layer - HDR10: 1,000 - 10,000 nits, DCI-P3 -> Rec. 2020, 10-bit HEVC;
  • Enhancement Layer - Dolby Vision: 4,000 - 10,000 nits, DCI-P3 -> Rec. 2020, 12-bit HEVC.
A typical UHD display is designed to convert HDR metadata using its own proprietary algorithms into something it can display.

HDR metadata conversion involves:
  • Tone Mapping: Compressing highlights to fit the peak luminance of the display;
  • Gamut Mapping: Mapping the DCI-P3 or Rec. 2020 primaries to the display's visible colors;
  • Gamma Transfer: Decoding the SMPTE 2084 HDR (PQ) transfer function to the display EOTF.
This process makes HDR support less standardized than it seems. No current display is capable of showing the maximum 10,000 nits. Luckily, current HDR content uses a range that tops out at 1,200 - 4,000 nits, so even the content does not reach the limits of the standard. Second, although UHD content is designed for the Rec. 2020 color space, few TVs can produce even the smaller DCI-P3 gamut in full, and most HDR content falls well short of Rec. 2020. Lastly, HDR requires a new transfer function, SMPTE 2084 perceptual quantization (PQ), which maps each pixel to a desired luminance value. This transfer function is designed for HDR displays but can be decoded into an SDR (standard dynamic range) gamma curve with its own compression roll-off and specified peak luminance.
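The PQ transfer function mentioned above is fully specified by SMPTE ST 2084; the decode (EOTF) side can be written directly from the standard's constants:

```python
# SMPTE ST 2084 (PQ) EOTF: decode a normalized 0-1 code value into
# absolute luminance in nits. Constants come straight from the standard.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_to_nits(e):
    p = e ** (1 / M2)
    y = max(p - C1, 0) / (C2 - C3 * p)
    return 10000 * y ** (1 / M1)

# Code value 1.0 decodes to the full 10,000 nits, while the midpoint of
# the signal range sits near only ~92 nits: PQ spends most of its code
# values on the dark and mid range, where the eye is most sensitive.
```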

madVR is capable of tone mapping, gamut mapping and gamma transfer conversion so that any display can show HDR10 content using the limitations of its available color gamut and peak luminance. The algorithms used by madVR should be of a higher quality than those used by most UHD displays.

madVR offers five methods for dealing with HDR metadata:
  • passthrough HDR content to the display: The display receives the regular HDR content untouched. HDR passthrough should only be used for displays which natively support HDR playback. Two methods are offered to send the HDR metadata to the display:
    - use Nvidia's private API: requires an Nvidia GPU with recent drivers and a minimum of Windows 7;
    - use Windows 10 API (D3D 11 only): for AMD/Intel users; requires Windows 10, enable automatic fullscreen exclusive mode and use Direct3D 11 for presentation (Windows 7 and newer).
  • convert HDR content to SDR by using pixel shader math: HDR is converted to SDR. The display receives SDR content.
  • convert HDR content to SDR by using an external 3DLUT: HDR content is converted to SDR. The display receives SDR content. If you supply multiple 3DLUT files, the one which best matches the source gamut will be used. The 3DLUT receives untouched R'G'B' HDR (PQ) data, applies tone & gamut mapping, then outputs R'G'B' data in the display's native gamut and transfer function.
  • process HDR content by using pixel shader math: The display receives HDR content, but the HDR source is downconverted to the target specs.
  • process HDR content by using an external 3DLUT: The display receives HDR content, but the 3DLUT downconverts the HDR source to some extent. The 3DLUT input/output is R'G'B' HDR (PQ). The 3DLUT applies some tone and/or gamut mapping.
Converting HDR to SDR

this display's peak nits
The display peak luminance specifies maximum display brightness. This defines the upper range of the tone mapping curve or the point where values are clipped. This applies when doing any HDR to SDR conversion or when downconverting an HDR source before passthrough. There is no such thing as a correct setting, so experiment with this value. A display configured to Rec.709 (BT.709) should start with a value of 265 nits even if it is calibrated to 100 nits. Higher peak luminance values will progressively darken the image.

Medium Processing

preserve hue in ...
Out of gamut colors require gamut mapping to fit the display gamut. For example, a highly saturated green may come from the source as 50, 320, 40. madVR reads this value as being outside the target gamut, so it is clipped to 50, 255, 40. This creates a less saturated green, but one with an incorrect hue. Instead, madVR attempts to preserve the correct hue while desaturating out of gamut colors. Two methods are available, low quality and high quality, which are ranked by the processing resources used.
  • fix too bright & saturated pixels by: In addition to preserving the correct hue, gamut mapping and tone mapping are a balancing act between luminance and saturation. The % applied to luminance or saturation defines the priority of the tone mapping/gamut mapping algorithm. A setting of 100% luminance reduction and 0% saturation puts the priority on preserving the correct hue/saturation while reducing luminance. A setting of 0% luminance reduction and 100% saturation places the priority on preserving luminance while ignoring hue/saturation.
Medium Processing

compress highlights
Compress highlights applies tone mapping to reduce luminance values to fit under the chosen display peak nits. If unchecked, values larger than the display peak luminance value are simply clipped and ignored. madVR attempts to leave lower nits values unaltered either way (typically 0 to 100 nits). Compressing highlights preserves more detail than clipping them, even if it is less accurate.
  • measure each frame's peak luminance
    This overcomes the limitation of HDR metadata, which provides a single value for peak luminance but no dynamic metadata. madVR can measure the brightness of each pixel in each frame. The brightness range of a video will vary during playback. By measuring the peak luminance of each pixel, madVR is able to offer on-demand tone mapping that is adjustable per frame.
  • restore details in compressed highlights
    Compressing highlight image areas into a smaller data set will lead to a loss of detail as values are removed. madVR selectively restores some of the detail lost in compressed areas by employing image sharpening. This added sharpening can introduce bloating and ringing, so appropriate anti-bloating and anti-ringing filters are also provided.
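The difference between compressing and clipping highlights can be shown with a toy curve (the roll-off shape below is illustrative and not madVR's actual algorithm):

```python
def compress_highlights(nits, display_peak=265.0, source_peak=1000.0, knee=100.0):
    """Roll off luminance above `knee` so the source peak lands exactly on
    the display peak (toy curve, not madVR's tone mapping)."""
    if nits <= knee:
        return nits                              # low range left untouched
    x = (nits - knee) / (source_peak - knee)     # 0..1 above the knee
    y = x / (x + (1 - x) * 0.5)                  # gentle rational roll-off
    return knee + y * (display_peak - knee)

def clip_highlights(nits, display_peak=265.0):
    return min(nits, display_peak)               # the alternative: hard clip

# Compression keeps 500-nit and 900-nit highlights distinguishable;
# clipping flattens both to the same display peak, discarding the detail.
```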
The following should also be ticked in devices -> calibration:
  • primaries/gamut (e.g. BT.709)
  • transfer function/gamma (e.g. pure power curve 2.20)
Selecting the correct color gamut and gamma are important in making HDR to SDR conversions appear accurate. If no calibration profile is selected (by ticking disable calibration controls for this display), madVR will use the display peak luminance value to map HDR content to BT.709 and pure power curve 2.20. The contrast ratio of the display is as important as its peak luminance in making HDR appear accurate.

Using a 3D LUT in calibration is possible if convert HDR content to SDR by using pixel shader math is selected. madVR converts HDR to SDR and the 3D LUT is left to process the SDR output as it would any other video. All other settings will cause the 3D LUT to be ignored when HDR metadata is encountered. To use a 3D LUT for HDR -> HDR or HDR -> SDR mapping, select it from the HDR menu.

In addition to configuration of madVR, HDR content requires:
  • LAV Filters 0.68+: To pass HDR metadata to madVR;
  • Unencrypted HDR Content: DRM-free content that can be played by the media player.
If HDR technology still seems confusing, a basic course on High Dynamic Range photography can be helpful. Note that televisions are stuck with a single moving image, so the focus is on enhancement of contrast rather than combining photos of different exposures.

A more technical discussion on HDR technology can be found here.

Screen Config

When Digital Projector is selected as the device type, this option will appear. Use it to adjust the projected image so it fits evenly on your screen.

Screen configuration may be useful for users of projectors with a Constant Image Height (CIH) setup. CIH projection shows all content on a 2.35:1 ratio (extra-wide) screen. Content with a 16:9 (1.78:1) ratio fills the height of the screen but not its full width, while movies with an aspect ratio of 2.35:1 are zoomed to fill both the height and width of the screen. Thereby, all content fills the height of the screen.

madVR Explained:

define visible screen area by cropping masked borders
Allows the image to be scaled to a lower resolution by placing black pixels on the missing borders to simulate screen masking. madVR will maintain this framing when cropping black bars with its zoom control. Only active when fullscreen.

move OSD into active video area
Move the madVR OSD into the defined screen area. madVR can also move some video player OSDs depending on the API it uses.

activate lens memory number
Sends a command to a JVC or Sony projector to activate an on-projector lens memory number.

anamorphic lens
Allows output to non-square pixels. Must be checked if your projector uses an anamorphic lens to allow for a vertical stretch.

An anamorphic lens stretches the image horizontally to fill the width of the screen. This leaves the projector to zoom only the image height to fill the frame. A standard projector lens, by comparison, leaves a post-cropped image needing a resize in both height AND width. The advantage of anamorphic projection is a brighter image with less visible pixel structure. The smaller pixel structure is a result of the pixels being flattened before they are enlarged.

stretch factor
This is the ratio of vertical stretch applied by madVR. Vertical stretching should only be enabled for madVR or the projector, not both. madVR takes the vertical stretch into account when image scaling, so no extra scaling operation is performed. The vertical zoom performed by madVR should be of higher quality than most projectors.

These settings can be combined with zoom control (aka black bar cropping) to complete the CIH experience.

More on Constant Image Height (CIH) projection here.
2. PROCESSING
  • Deinterlacing
  • Artifact Removal
  • Image Enhancements
  • Zoom Control
Deinterlacing

Doom9 Forum: Deinterlacing is the process of converting interlaced video, such as common analog television signals or 1080i format HDTV signals, into a non-interlaced (progressive) form. Interlaced sources are measured in fields per second, which is equal to double the frame rate.

Deinterlacing is applied to content of three types:

Film: Film is photographic material produced for the cinema. It originated at 24 frames/second and has been converted to video, or telecined, to 29.97 fps for showing on 59.94 Hz NTSC TVs. Alternatively, film is sped up 4.2% to 25 fps for showing on 50 Hz PAL TVs.

NTSC: This is video content produced for TVs used in most of North and South America and East Asia. Normally, only news and sports broadcasts, together with some TV series, are produced as pure NTSC. In the US, NTSC televisions employ 59.94 half-frames, or fields, per second and 525 horizontal lines per frame or 262.5 per field. Content is broadcast as 23.976 fps or 29.97 fps in progressive (720p, 2160p) or interlaced (1080i) format.

PAL: PAL is a European TV format using 50 half-frames, or fields, per second. Content is broadcast as 25 fps progressive or interlaced.

Content shot as video is captured at 29.97 fps or 25 fps. Deinterlacing a source captured at 59.94 fields per second and stored as 29.97 fps interlaced results in a doubling of the frame rate (29.97 x 2 = 59.94 fps). An interlaced signal shows a single frame of video as two half-frames. A good deinterlacer adds a new frame to match each field or half-frame.

Content shot as film is captured at 24 fps. Film telecined to 29.97 fps, such as NTSC DVDs and broadcast movies and television, can have its interlacing removed using inverse telecine (IVTC). This form of deinterlacing is completed by the video renderer or transcoder.
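The telecine-and-reverse process can be sketched with a toy cadence model. This is a simplification that assumes frames are cleanly labeled; real IVTC must detect the 3:2 cadence by comparing fields, which this sketch glosses over.

```python
# Toy sketch of 3:2 pulldown (telecine) and its inverse (IVTC).
# Frames are just labels here; real IVTC detects the cadence by
# matching field content.

def telecine(frames):
    """Spread 4 film frames (24p) across 10 fields (60i) using the
    repeating 2:3 field pattern A A B B B C C D D D."""
    fields = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 else 2   # alternate 2 fields, 3 fields
        fields.extend([frame] * repeats)
    return fields

def inverse_telecine(fields):
    """Recover the progressive frames by decimating duplicate fields."""
    frames = []
    for f in fields:
        if not frames or frames[-1] != f:
            frames.append(f)
    return frames

film = ["A", "B", "C", "D"]           # 4 film frames
fields = telecine(film)               # 10 interlaced fields
assert len(fields) == 10              # 24 fps * (10 / 4) = 60 fields/s
assert inverse_telecine(fields) == film
```

The round trip is lossless for true film sources, which is why IVTC can restore the original 24p frames; applying it to native interlaced video, as the guide warns below, has no clean cadence to recover and produces artifacts.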

Low Processing

automatically activate deinterlacing when needed
Deinterlace video based on the content flag.

if in doubt, activate deinterlacing
Always deinterlace if content is not flagged as progressive.

if in doubt, deactivate deinterlacing
Only deinterlace if content is flagged as interlaced.

Low Processing

disable automatic source type detection
Override automatic deinterlacing with setting below.

force film mode
Force inverse telecine (IVTC), reconstructing the original progressive frames from video encoded as interlaced, decimating duplicate frames if necessary. A source with a field rate of 60i (and a frame rate of 30 fps) would be converted to 24p under this method. Software (CPU) deinterlacing is used in this case.

force video mode
Force DXVA deinterlacing, which uses the GPU’s deinterlacing as set in its drivers.

only look at pixels in the frame center
This is generally thought of as the best way to detect the video cadence and determine whether deinterlacing is necessary and which type should be applied.

Deinterlacing is best set to automatically activate deinterlacing when needed unless you know the content flag is being read incorrectly by madVR and wish to override it. Note that using inverse telecine (IVTC) on a native interlaced source will lead to artifacts. An interlaced source must be deinterlaced.

More on deinterlacing here.

Artifact Removal (reduce banding artifacts)

Wikipedia: Color banding is a problem of inaccurate color presentation in computer graphics. In 24 bit color modes, 8 bits per channel is usually considered sufficient to render images in Rec. 709 or sRGB. However, in some cases there is a risk of producing abrupt changes between shades of the same color. For instance, displaying natural gradients (like sunsets, dawns or clear blue skies) can show minor banding.

Banding can be introduced at various stages:
  • It is present in the source;
  • It was created due to inaccurate color conversions;
  • It was added by the final codec due to low bit rates and/or poor encoding.
Banding caused by inaccurate color conversions is addressed by dithering at output. This setting instead attempts to correct banding present in the source itself, whether that source is mostly uncompressed or poorly encoded. In either case, banding is a concern with most video.

Poorly mastered 8-bit Blu-rays can display small amounts of banding in shadow detail and bright highlights. Even an uncompressed 10-bit, 4K UHD source is capable of banding. 4K UHD sources have four times as many color shades as a 1080p Blu-ray (1,024 vs. 256) but ten times the luminance range (1,000 nits vs. 100 nits). This leads to a lack of bits to cover the expanded luminance range. The luminance curve allocates more bits to lower levels, which invites the possibility of banding at the top of the curve. In general, the less compression applied to the source, the lower the likelihood of banding.
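As a quick back-of-envelope check on bit depth versus luminance range:

```python
# Shades per channel grow 4x from 8-bit to 10-bit, while the HDR
# luminance range grows 10x (100 -> 1,000 nits), so each shade must
# cover a wider span of brightness -- the root of HDR banding risk.

shades_8bit  = 2 ** 8    # 256 shades per channel (1080p Blu-ray)
shades_10bit = 2 ** 10   # 1,024 shades per channel (4K UHD Blu-ray)

sdr_peak_nits = 100
hdr_peak_nits = 1000

print(shades_10bit / shades_8bit)       # 4.0  -- only 4x more shades...
print(hdr_peak_nits / sdr_peak_nits)    # 10.0 -- ...for 10x the range
```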

Example of Debanding

Medium Processing

reduce banding artifacts
Allows madVR to smooth the edges of color bands by applying dithering.

default debanding strength
Sets the amount of correction from Low to High. Higher settings will slightly soften image detail.

strength during fade in/out
Five frames are rendered with correction when a fade is detected. This only applies if this setting is higher than the default debanding strength.

If banding is obviously present in the source, a setting of high/high would be necessary to provide adequate correction. However, this is not a set-it-and-forget-it scenario, as a clean source would be unnecessarily smoothed. A setting of high is considerably stronger than medium or low. As such, it may be safer to set debanding to low/medium or medium/high if the majority of your sources are high-quality.

Artifact Removal (reduce ringing artifacts)

Wikipedia: In signal processing, particularly digital image processing, ringing artifacts are artifacts that appear as spurious signals near sharp transitions in an image. Visually, they appear as bands or "ghosts" near edges. The term "ringing" is because the output signal oscillates at a fading rate around a sharp transition in the input, similar to a bell after being struck.

Ringing can be introduced in various ways:
  • Image upscaling or downscaling is used during the mastering process;
  • Edge enhancement or sharpening is applied during the mastering process;
  • The signal is bandwidth-limited, discarding too much information for high frequencies;
  • The video renderer or media player resizes with image upscaling or downscaling;
  • The video renderer or media player applies image sharpening as a post-process.
madVR focuses on removing ringing added during the mastering process (due to image upscaling or downscaling and edge enhancement). These halos are different from those created by compression artifacts. Scaling algorithms and sharpening shaders have their own anti-ringing filters. Not all sources will display ringing. The deringing filter attempts to be non-destructive to such sources, but it is possible to remove some image detail.

Medium Processing

reduce ringing artifacts
Allows madVR to remove ringing artifacts with a deringing filter. This is ringing present in the source.

reduce dark halos around bright edges, too
Ringing artifacts are of two types: bright halos or dark halos. Removing dark halos increases the likelihood of removing valid detail. This can be particularly true with animated content, which makes this a risk/reward setting. It may be a safer choice to focus on bright halos and leave dark halos alone.

Image Comparison – Lighthouse Top:
No Deringing
madVR Deringing

Image Comparison – Animated:
No Deringing
madVR Deringing

Because the deringing filter may not be beneficial with all sources, its use is up to preference. The filter can be more destructive to a clean source than debanding.

Image Enhancements

Image enhancements are best used to enhance content shown at its native resolution: 1080p -> 1080p, or 2160p -> 2160p. Image enhancements apply sharpening to the image before image upscaling (pre-resize). Luma sharpening can make a soft Blu-ray appear sharper and add some additional depth to the image. Low values are recommended to avoid oversharpening.

Image enhancements are not recommended for content that needs to be upscaled. Pre-resize sharpening shows a stronger effect than sharpening applied after the resize, such as that under upscaling refinement. In many cases, this leads to an image that is oversharpened and less natural in appearance.

An alternative method of enhancing content shown at its native resolution is supersampling: applying a chain of image doubling with image downscaling (e.g. 1080p -> 2160p -> 1080p). This makes it possible to apply upscaling refinement sharpeners such as SuperRes to the doubled image. The image is then returned to its original resolution with image downscaling while retaining the added sharpening.

Greater precision may be achieved by sharpening a doubled image. However, less control is available over how sharp the image will appear and the overall sharpening effect is lessened. Supersampling is very resource-intensive and should only be reserved for powerful GPUs. Image enhancements present much less overhead and are a better choice for most users.

Low Processing

enhance detail:

Doom9 Forum: Focuses on making faint image detail in flat areas more visible. It does not discriminate, so noise and grain may be sharpened as well. It does not enhance the edges of objects but can work well with line sharpening algorithms to provide complete image sharpening.

LumaSharpen:

SweetFX WordPress: LumaSharpen works its magic by blurring the original pixel with the surrounding pixels and then subtracting the blur. The end result is similar to what would be seen after an image has been enhanced using the Unsharp Mask filter in GIMP or Photoshop. While a little sharpening might make the image appear better, more sharpening can make the image appear worse than the original by oversharpening it. Experiment and apply in moderation.
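The blur-and-subtract idea behind LumaSharpen can be sketched on a single row of pixels. This is a minimal 1-D illustration with an arbitrary 3-tap blur and strength value, not madVR's actual shader:

```python
# Minimal 1-D unsharp mask, the principle behind LumaSharpen: blur
# the signal, subtract the blur, and add a fraction of the difference
# back. The 3-tap box blur and 0.65 strength are arbitrary choices
# for illustration only.

def box_blur(row):
    """3-tap box blur; edges clamp to the nearest pixel."""
    out = []
    for i in range(len(row)):
        left  = row[max(i - 1, 0)]
        right = row[min(i + 1, len(row) - 1)]
        out.append((left + row[i] + right) / 3)
    return out

def unsharp_mask(row, strength=0.65):
    blurred = box_blur(row)
    return [p + strength * (p - b) for p, b in zip(row, blurred)]

row = [10, 10, 10, 200, 200, 200]   # a soft edge between dark and bright
print(unsharp_mask(row))
# Pixels beside the edge are pushed apart: the dark side undershoots
# and the bright side overshoots, which reads as a crisper edge --
# and, when overdone, as the halos/ringing the guide warns about.
```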

Medium Processing

crispen edges:

Doom9 Forum: Focuses on making high-frequency edges crisper by adding light edge enhancement. This should lead to an image that appears more high-definition.

thin edges:

Doom9 Forum: Attempts to make edges, lines and even full image features thinner/smaller. This can be useful after large upscales, as these features tend to become fattened after upscaling. May be most useful with animated content and/or used in conjunction with sharpen edges at low values.

sharpen edges:

Doom9 Forum: A line/edge sharpener similar to LumaSharpen and AdaptiveSharpen. Unlike these sharpeners, sharpen edges introduces less bloat and fat edges. More aggressive than crispen edges.

AdaptiveSharpen:

Doom9 Forum: Adaptively sharpen the image by sharpening more intensely near image edges and less intensely far from edges. The outer weights of the laplace matrix are variable to mitigate ringing on relative sharp edges and to provide more sharpening on wider and blurrier edges. The final stage is a soft limiter that confines overshoots based on local values.

Medium Processing

activate anti-bloating filter

Reduce the fattening that occurs when line sharpening algorithms are applied to an image. If sharpening is designed to exaggerate the difference between high frequency and low frequency pixels, then anti-bloating tames the frequencies that are too hot and removes the low frequencies that shouldn't be there. This uses more processing power than anti-ringing but has the effect of blurring oversharpened pixels to produce a more natural result that better blends into the background elements.

Applies to LumaSharpen, sharpen edges and AdaptiveSharpen. Both crispen edges and thin edges are "skinny" by design and are omitted.

Low Processing

activate anti-ringing filter

Reduce ringing artifacts. This is at the expense of a small decrease in GPU performance and a reduction in the sharpening effect. Anti-ringing should be checked with all shaders as each will produce varying levels of ringing. Applies to LumaSharpen, crispen edges, sharpen edges and AdaptiveSharpen.

General Usage of Image enhancements:

Each sharpener serves a different purpose. It may be desirable to match an edge sharpener with a detail enhancer such as enhance detail. The two algorithms will sharpen the image from different perspectives, filling in the flat areas of an image as well as its angles. A good combination might be:

sharpen edges (AB & AR) + enhance detail

sharpen edges provides subtle line sharpening for an improved 3D look, while enhance detail brings out texture in the remaining image.

Zoom Control

These settings are most applicable to projector owners using Constant Image Height (CIH) projection. Zoom control detects and crops black bars that do not contain any visible video. The remaining image can be left alone or zoomed to fit the display aspect ratio. The projector device and screen size should be defined in devices before adjusting these settings.

When cropping using zoom control, madVR will resize based on the instruction of the media player. A media player set to 100% / no zoom will not resize a cropped image even when madVR is set to zoom. But a setting of touch window from inside / zoom will follow the setting in zoom control. Only MPC-HC provides on-demand zoom status. All other media players should be set to notify media player about cropped black bars to inform madVR of the media player zoom setting and allow the player to carry out its own zoom.

Cropping black borders that spill off the screen offers the advantages of removing unneeded pixels and reducing the load placed on madVR, as those discarded pixels no longer need to be processed. madVR is capable of handling videos with multiple aspect ratios such as The Dark Knight, which switches frequently between a 1.78 and 2.35 aspect ratio. madVR can crop these scenes and resize on-the-fly.

Unfortunately, this process is not standardized and may involve experimentation with the various options below to find a compromise between eliminating all black bars and reducing excessive zooming during playback.

madVR Explained:

disable scaling if image size changes by only
If the resolution needs scaling by the number of pixels set or less, image upscaling is disabled and black pixels are instead added to the right and/or bottom of the image.

move subtitles
This is important when removing black bars. Otherwise, it is possible to display subtitles outside the visible screen area.

automatically detect hard coded black bars
This setting unlocks a number of other settings designed to detect and crop black bars.

Black bar detection detects black bars added to fit video content to an aspect ratio other than the source, or the small black bars left from imprecise analog captures. An example of imprecise analog captures includes 16:9 video with black bars on the top and bottom encoded as 4:3 video, or the few blank pixels on the left and right of a VHS capture. madVR can detect black bars on all sides.

if black bars change pick one zoom factor
Set a single zoom factor to avoid changing the zoom or crop factor of black bars which appear intermittently during playback. When set to which doesn't lose any image content, madVR will not zoom or crop a 16:9 portion of a 4:3 film. Conversely, when set to which doesn't show any black bars, madVR will zoom or crop all of the 4:3 footage the amount needed to remove the black bars from 16:9 sections.

if black bars quickly change back and forth
This can be used in place of the option above. A limit is placed on how often madVR can change the zoom or crop during playback to remove black bars as they are detected. Without either of these options, madVR will always change the crop or zoom to remove all black bars.

notify media player about cropped black bars
Define how often the media player is notified of changes to the black bars.

always shift the image
Determine whether the top or bottom of the video is cropped when zooming.

keep bars visible if they contain subtitles
Disable zooming or cropping of black bars when subtitles are detected as part of the black bar. Black bars can remain visible permanently or for a set period of time.

cleanup image borders by cropping
Crop additional pixels on the edges of black bars or on all edges. When set to crop all edges, pixels are cropped even when no black bars are detected.

if there are big black bars
Defines a specific cropping for large black bars. This can include zooming the image to hide the black bars.

zoom small black bars away
This removes black bars by zooming the video slightly. This usually results in cropping a small amount of video information from one edge to maintain the original aspect ratio and resizing to the original display resolution. For example, the bottom is cropped to remove small black bars on the left and right, and the video is upscaled back to its original resolution.
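The trade-off can be sketched with the example from the text: zooming a 1920x1080 frame to hide small bars on the left and right. The 8-pixel bar width is illustrative only:

```python
# Arithmetic behind "zoom small black bars away": stretch the active
# picture to full width, which forces a few rows to be cropped so the
# aspect ratio is preserved. Numbers are illustrative.

def zoom_away_side_bars(width, height, bar_px_each_side):
    """Return the zoom factor needed to fill the full width and how
    many rows of picture get cropped as a result."""
    active_w = width - 2 * bar_px_each_side
    zoom = width / active_w                   # stretch active area to full width
    rows_cropped = round(height * zoom) - height
    return zoom, rows_cropped

zoom, cropped = zoom_away_side_bars(1920, 1080, 8)
print(round(zoom, 4))   # ~1.0084 -- a sub-1% zoom
print(cropped)          # 9 rows of picture sacrificed to hide 16 px of bars
```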

crop black bars
Crop black bars to change the display aspect ratio and resolution. Cropping black bars increases performance as the pixels no longer need to be processed. Profile rules referencing resolution will use the post-crop resolution.
3. SCALING ALGORITHMS
  • Chroma Upscaling
  • Image Downscaling
  • Image Upscaling
  • Upscaling Refinement
[Image: Chroma-Upscaling_zpsdjah8tyq.png]

The real fun begins with madVR's image scaling algorithms. This is perhaps the most demanding and confusing aspect of madVR due to the sheer number of combinations available. It can be tempting to simply turn all settings up to their maximum. However, most graphics cards, even powerful ones, will be forced to compromise somewhere. To understand where to start, an introduction to scaling algorithms from the JRiver MADVR Expert Guide is in order.

“Scaling Algorithms

Image scaling is one of the main reasons to use madVR. It offers very high quality scaling options that rival or best anything I have seen.

Most video is stored using chroma subsampling in a 4:2:0 video format. In simple terms, what this means is that the video is basically stored as a black-and-white “detail” image (luma) with a lower resolution “color” image (chroma) layered on top. This works because the detail image helps to mask the low resolution of the color image that is being layered on top.

So the scaling options in madVR are broken down into three different categories: Chroma upscaling, which is the color layer. Image upscaling, which is the detail (luma) layer. Image downscaling, which only applies when the image is being displayed at a lower resolution than the source — 1080p content on a 720p display, or in a window on a 1080p display for example.

Chroma upscaling is performed on all videos — it takes the half-resolution chroma image, and upscales it to the native luma resolution of the video. If there is any further scaling to be performed; whether that is upscaling or downscaling, then the image upscaling/downscaling algorithm is applied to both chroma and luma.”
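The 4:2:0 layout described in the quote can be sketched in a few lines. Nearest-neighbour doubling is used here purely to show the data flow; it is the crudest possible resizer, nothing like madVR's actual chroma algorithms:

```python
# The 4:2:0 layout: a full-resolution luma plane plus a chroma plane
# at half resolution in each direction. "Chroma upscaling" brings
# the chroma back to luma resolution -- here with nearest-neighbour,
# chosen only to show the data flow, not for quality.

def chroma_plane_size(luma_w, luma_h):
    """4:2:0 stores chroma at half width and half height."""
    return luma_w // 2, luma_h // 2

def nearest_double(plane):
    """Double a 2-D plane in both directions (4:2:0 -> 4:4:4)."""
    out = []
    for row in plane:
        doubled = [v for v in row for _ in (0, 1)]   # repeat each pixel
        out.append(doubled)
        out.append(list(doubled))                    # repeat each row
    return out

assert chroma_plane_size(1920, 1080) == (960, 540)

cb = [[1, 2],
      [3, 4]]                 # a 2x2 chroma block
up = nearest_double(cb)       # 4x4, matching the luma resolution
assert up == [[1, 1, 2, 2],
              [1, 1, 2, 2],
              [3, 3, 4, 4],
              [3, 3, 4, 4]]
```

A 1080p frame therefore carries only a 960x540 chroma image, which is why every video needs chroma upscaling before display.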

More on chroma subsampling here.

Chroma 4:4:4 Display Support Test Image

Not all televisions are capable of receiving chroma 4:4:4 input; some will instead convert the signal to 4:2:2 or 4:2:0. This is due in part to the fact that many Blu-ray players output at 4:2:2.

HDMI bandwidth is also a factor when sending 4K signals. At frame rates higher than 30 fps, chroma subsampling is required at the GPU or TV level to meet the data transfer limitations of HDMI 1.4. HDMI 2.0 is capable of 4K 60 fps at 4:4:4, but only with 8-bit sources. 4K 10-bit exceeds the limits of HDMI bandwidth and requires a reduction in frame rate to pass 4:4:4 or chroma subsampling to pass 60 fps.
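A back-of-envelope check of those HDMI 2.0 limits, assuming the standard 594 MHz pixel clock for 4K60 (CTA-861 timing, blanking included) and roughly 14.4 Gbit/s of effective HDMI 2.0 payload after 8b/10b encoding overhead:

```python
# Rough bandwidth math for 4K60 over HDMI 2.0. The 594 MHz pixel
# clock and ~14.4 Gbit/s effective rate (18 Gbit/s raw minus 8b/10b
# overhead) are standard figures used here as assumptions.

PIXEL_CLOCK_4K60 = 594e6          # pixels per second, 3840x2160 @ 60 Hz
HDMI2_EFFECTIVE  = 14.4e9         # bits per second of video payload

def needed_bps(bits_per_component, components=3):
    """Bandwidth needed for full 4:4:4 at the 4K60 pixel clock."""
    return PIXEL_CLOCK_4K60 * bits_per_component * components

print(needed_bps(8) / 1e9)    # ~14.26 Gbit/s -> 8-bit 4:4:4 just fits
print(needed_bps(10) / 1e9)   # ~17.82 Gbit/s -> 10-bit 4:4:4 does not
assert needed_bps(8) <= HDMI2_EFFECTIVE < needed_bps(10)
```

This is why 4K60 10-bit requires either chroma subsampling (4:2:2/4:2:0) or a lower frame rate on HDMI 2.0.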

Drag image into MPC-HC window; support determined by the numbers most clearly visible: None (4:2:0) | 4:2:2 | 4:4:4.


HTPC Chroma Subsampling:

(Source) Y'CbCr 4:2:0 -> (madVR) Y'CbCr 4:4:4 to RGB -> (Display) Y'CbCr 4:4:4/4:2:2/4:2:0 or RGB to Y'CbCr 4:4:4/RGB -> (Display Output) RGB

Chroma and Image upscaling Options in madVR

The list below shows the chroma upscaling, image downscaling and image upscaling algorithms available in madVR. The algorithms are ranked by the amount of GPU processing required to use each setting. Keep in mind, super-xbr and higher algorithms require significant GPU resources (especially when scaling content to 4K). Users with low-powered GPUs should stick with settings labeled Medium or lower.

Each algorithm offers a tradeoff between three factors:
  • sharpness: crisp, coarse detail.
  • aliasing: jagged, square edges on lines/curves.
  • ringing: haloing around objects.
The list below should not be considered an absolute quality scale from worst to best. Experiment to find algorithms that fit your personal preference and the power of your graphics card.

Sample of Scaling Algorithms: Bilinear | Bicubic | Lanczos4 | Jinc

[Default Values]

Chroma Upscaling [Bicubic 60]

Double the chroma layer in both directions to match the luma layer:

Y' (luma - 4) CbCr (chroma - 2:0) -> Y'CbCr 4:4:4.

When downscaling by a large amount, the chroma is scaled to the target resolution rather than the luma resolution.

activate SuperRes filter, strength: Sharpening filter applied to the chroma layer after upscaling. Use of this filter is up to preference. This is a Medium Processing feature.

Minimum Processing
  • Bilinear
Low Processing
  • Cubic
    sharpness: 50 - 150 (anti-ringing filter)
Medium Processing
  • Lanczos
    3 - 4 taps (anti-ringing filter)
  • Spline
    3 - 4 taps (anti-ringing filter)
  • Bilateral
High Processing
  • super-xbr
    sharpness: 25 - 150 (anti-ringing filter)
  • Jinc
    3 taps (anti-ringing filter)
Maximum Processing
  • NGU
    low - very high
  • Reconstruction
    soft - sharp AR
  • NNEDI3
    16 - 256 neurons
Image Downscaling [Bicubic 150]

Downscale the luma and chroma as RGB when the source is larger than the output resolution:

RGB -> downscale -> RGB downscaled.

scale in linear light (recommended when image downscaling)

Minimum Processing
  • DXVA2 (overrides madVR processing)
  • Bilinear
Low Processing
  • Cubic
    sharpness: 50 - 150 (anti-ringing filter) (scale in linear light)
Medium Processing
  • SSIM 1D, 2D
    strength: 25% - 100% (anti-ringing filter) (scale in linear light)
  • Lanczos
    3 - 4 taps (anti-ringing filter) (scale in linear light)
  • Spline
    3 - 4 taps (anti-ringing filter) (scale in linear light)
High Processing
  • Jinc
    3 taps (anti-ringing filter) (scale in linear light)
Image Upscaling [Lanczos 3]

Upscale the luma and chroma as RGB when the source is smaller than the output resolution:

RGB -> upscale -> RGB upscaled.

scale in sigmoidal light (not recommended when image upscaling)

Minimum Processing
  • DXVA2 (overrides madVR processing)
  • Bilinear
Low Processing
  • Cubic
    sharpness: 50 - 150 (anti-ringing filter)
Medium Processing
  • Lanczos
    3 - 4 taps (anti-ringing filter)
  • Spline
    3 - 4 taps (anti-ringing filter)
High Processing
  • Jinc
    3 taps (anti-ringing filter)

Image Doubling [Off]

Double the resolution (2x) of the luma and chroma independently or as RGB. This may require additional upscaling or downscaling to correct any undershoot or overshoot of the output resolution:

Y / CbCr / RGB -> Image doubling -> upscale or downscale -> RGB upscaled.

High Processing (2x Image Doubling)
  • super-xbr luma & chroma doubling
    sharpness: 25 - 150
    (always to 4x scaling factor)
Maximum Processing (2x Image Doubling)
  • NGU Anti-Alias luma & chroma doubling
    low - very high
    (always to 4x scaling factor)
  • NGU Standard luma & chroma doubling
    low - very high
    (always to 4x scaling factor)
  • NGU Sharp luma & chroma doubling
    low - very high
    (always to 4x scaling factor)
  • NNEDI3 luma & chroma doubling
    16 - 256 neurons
    (always to 4x scaling factor)
Image Doubling

Image upscaling offers a choice of five image doublers: super-xbr, NNEDI3, NGU Anti-Alias, NGU Standard and NGU Sharp.

Image doubling is simply another form of image upscaling that results in a doubling of resolution — in either X or Y direction — such as 540p to 1080p, or 1080p to 2160p. Once doubled, the image may be subject to further upscaling or downscaling to match the output resolution. Image doubling produces exact 2x resizes and can run multiple times (x4 to x8). By detecting and preserving edges, these algorithms produce a sharp image which lacks the staircase effect (aliasing) of linear resizers.

Image doubling is most effective when applied to resizes of 2x or larger. Incremental improvement may be observed in smaller upscales, but the corresponding resources consumed upscaling and downscaling may not be worth the extra processing. For example, doubling a 720p source to 1080p requires image downscaling to correct the overshoot (720p -> 1440p -> 1080p). This uses a lot of resources for little gain in image quality.
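The chain of doublings plus a final correction can be sketched as a small calculator. The 1.2x activation threshold used here is just an example matching the lower end of the "let madVR decide" range given in the settings list below:

```python
# Sketch of the doubling chain: madVR doubles in 2x steps, then a
# conventional up- or downscale corrects any over/undershoot. The
# 1.2x activation threshold is an illustrative assumption.

def doubling_chain(src_h, target_h, activate_at=1.2):
    """Return the list of intermediate heights a madVR-style doubling
    chain would produce for a given source and target height."""
    steps = [src_h]
    h = src_h
    while target_h / h >= activate_at and h < target_h:
        h *= 2
        steps.append(h)
    if h != target_h:
        steps.append(target_h)   # final correction by up/downscaling
    return steps

print(doubling_chain(720, 1080))   # [720, 1440, 1080] -- overshoot, then downscale
print(doubling_chain(480, 2160))   # [480, 960, 1920, 2160] -- two doublings, then upscale
print(doubling_chain(1080, 1080))  # [1080] -- native resolution, nothing to do
```

The 720p case shows why small upscales are expensive: two full scaling passes are spent for a net 1.5x resize.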

An exception to this rule is supersampling. Supersampling involves doubling a source that does not need upscaling. On a 1080p display, this would be 1080p -> 2160p. Image doubling applied in this way will use image downscaling to correct the overshoot: 2160p -> 1080p. The upscaling and downscaling in effect cancel each other out.

Why would you want to use supersampling? Image doubling added to native sources allows image sharpening shaders from upscaling refinement to be applied when the image is doubled. It also allows madVR to use its high-quality image downscaling algorithms to attempt to retain some of the information found in the larger pixel count. The idea is to add extra detail to an image by increasing its size and sharpening it before reducing it to its original size. The result can be more precise but less sharp than enhancing an unscaled source with image enhancements. Supersampling is very resource-intensive and should only be reserved for GPUs with plenty of extra power.

Note: Chroma upscaling is a form of image doubling, and all four algorithms are available for this purpose. Visual differences between algorithms will be small when upscaling the chroma layer alone.

super-xbr
  • Resolution doubler;
  • Relies on RGB inputs - luma and chroma are doubled together;
  • Second fastest and second sharpest of the three. Slightly faster than Jinc;
  • High sharpness, low aliasing, medium ringing;
  • Less aliasing on edges than NNEDI3 16 neurons.
NNEDI3
  • Resolution doubler;
  • Uses Y'CbCr color space - capable of doubling luma and chroma independently;
  • Slowest of the three. NNEDI3 128 neurons is slower than NGU very high;
  • Similar sharpness to super-xbr. But more in-focus.
NGU Anti-Alias
  • Resolution doubler;
  • Next Generation Upscaler - proprietary to madVR;
  • Uses Y'CbCr color space - capable of doubling luma and chroma independently;
  • Fastest to second slowest of the three depending on the setting. Faster than NNEDI3 128 neurons;
  • Best for low quality sources with a lot of aliasing;
  • Second best overall image characteristics - sharpness, aliasing and ringing.
NGU Standard
  • Resolution doubler;
  • Next Generation Upscaler - proprietary to madVR;
  • Uses Y'CbCr color space - capable of doubling luma and chroma independently;
  • Fastest to second slowest of the three depending on the setting. Faster than NNEDI3 128 neurons;
  • Renders softer edges than NGU Sharp;
  • Third best overall image characteristics - sharpness, aliasing and ringing.
NGU Sharp
  • Resolution doubler;
  • Next Generation Upscaler - proprietary to madVR;
  • Uses Y'CbCr color space - capable of doubling luma and chroma independently;
  • Fastest to second slowest of the three depending on the setting. Faster than NNEDI3 128 neurons;
  • Best for high-quality sources with clean lines;
  • Best overall image characteristics - sharpness, aliasing and ringing.
Image Comparison – American Dad:
Original
Jinc3 + AR
super-xbr100
NNEDI3 256 neurons + SuperRes (4)
NGU Sharp (very high)

[Image: Image-Doubling_zps3xgpdses.png]

algorithm quality <-- luma doubling:

Luma doubling/quality always refers to image doubling of the Y layer of a Y'CbCr source. This provides the majority of the improvement in image quality, as the black-and-white luma layer contains the image detail. Priority should be given to maximizing this value before adjusting other settings.

super-xbr: sharpness: 25 - 150
NNEDI3: 16 - 256 neurons
NGU Anti-Alias: low - very high
NGU Standard: low - very high
NGU Sharp: low - very high

algorithm quality <-- luma quadrupling:

Luma quadrupling is doubling rendered twice or a direct quadruple (4x scaling factor).

let madVR decide: direct quadruple - same as luma doubling
double again --> low - very high
direct quadruple --> low - very high

algorithm quality <-- chroma

Chroma quality determines how the chroma layer (CbCr) will be doubled to match the luma layer (Y). This is separate from chroma upscaling that is performed on all videos. The chroma layer is inherently soft and lacks fine detail making chroma doubling overkill or unnecessary in most cases. Bicubic60 + AR provides the best bang for the buck here. It saves resources for luma doubling while still providing acceptable chroma quality. Adjust chroma quality last.

let madVR decide: Bicubic60 + AR unless using NNEDI3 128-256 neurons or NGU very high. In that case, NGU medium is used instead.
normal: Bicubic60 + AR
high: NGU low
very high: NGU medium

activate doubling/quadrupling... <-- doubling

Determines the scaling factor at which image doubling is activated.

let madVR decide: 1.2x to 1.5x (or bigger)

activate doubling/quadrupling... <-- quadrupling

Determines the scaling factor at which image quadrupling is activated.

let madVR decide: 2.4x to 3.0x (or bigger)

if any (more) scaling needs to be done <-- upscaling algo

Image upscaling is applied after doubling if the scaling factor is greater than 2x but less than 4x, or greater than 4x but less than 8x. This is the case when scaling 480p -> 1080p, or 480p -> 2160p, for example. The luma and/or chroma are further upscaled after doubling to fill in any remaining pixels (960p -> 1080p, or 1920p -> 2160p). Upscaling after image doubling is not overly important.

let madVR decide: Bicubic60 + AR unless using NNEDI3 128-256 neurons or NGU very high. In that case, Jinc3 + AR is used instead.

if any (more) scaling needs to be done <-- downscaling algo

Image downscaling will reduce the size of the luma and/or chroma layers if the scaling result is larger than the target resolution. Image downscaling is necessary for scaling factors less than 2x, or when quadrupling resolutions less than 4x. This is true when scaling 720p -> 1080p, or 720p -> 2160p, for example. Much like upscaling after doubling and chroma quality, downscaling after image doubling is not critical, so acceptable quality can be maintained with lesser algorithms.

let madVR decide: Bicubic150 + AR + LL unless using NNEDI3 128-256 neurons or NGU very high. In that case, SSIM 1D 100% + AR + LL is used instead.
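The activation thresholds and the follow-up scaling steps described above can be sketched as a small decision function. This is a simplification: only the 1.2x and 2.4x activation thresholds come from the guide, and madVR's real decision logic may differ.

```python
def doubling_plan(src_h, dst_h, activate=1.2, activate_quad=2.4):
    # Sketch of the "let madVR decide" behaviour: double from ~1.2x,
    # quadruple from ~2.4x, then upscale or downscale any remainder.
    factor = dst_h / src_h
    if factor < activate:
        return ("no doubling", "upscale" if factor > 1.0 else "none")
    mode = "quadruple" if factor >= activate_quad else "double"
    scaled = src_h * (4 if mode == "quadruple" else 2)
    if scaled < dst_h:
        return (mode, "upscale")    # e.g. 480 -> 960, then 960 -> 1080
    if scaled > dst_h:
        return (mode, "downscale")  # e.g. 720 -> 1440, then 1440 -> 1080
    return (mode, "none")

doubling_plan(720, 1080)   # ('double', 'downscale')
doubling_plan(480, 1080)   # ('double', 'upscale')
doubling_plan(540, 2160)   # ('quadruple', 'none')
```

The same function also reproduces the "no doubling" case: a 1920 -> 2160 resize (1.125x) falls under the 1.2x threshold and is handled by plain image upscaling.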

Example of Image Doubling

Imagine a source scaled 1280 x 720p -> 1920 x 1080p.

This is a scaling factor of 1.5x.

[Image: NGU-image--doubling_zpskmuuxalj.png]

chroma > NGU Sharp (low)

The first entry is the chroma upscaling setting, which scales the half-resolution chroma to match the luma layer:

Y'CbCr 4:2:0 (full-resolution luma, quarter-resolution chroma) -> Y'CbCr 4:4:4.

luma > NGU Sharp (very high) < SSIM1D100AR

The luma layer (Y) is doubled using NGU Sharp (very high). However, the resulting output (720p -> 1440p) is too large for the target resolution (1080p). Therefore, image downscaling is used to reduce the image using the setting from downscaling algo (1440p -> 1080p). In this case, SSIM 1D 100% + AR + LL.

chroma > Bicubic60AR

The upscaled chroma layer (CbCr) is scaled from 720p -> 1080p with Bicubic60 + AR to match the doubled luma layer using the chroma quality setting. Rather than waste resources on image doubling, Bicubic60 + AR allows the user to use higher settings for luma quality.

Demonstration of NNEDI3 Image Doubling

[Image: Chart-Scaling-Algorithms_zpstzh7zfmz.png]

Upscaling Refinement

Upscaling refinement can further improve the quality of upscaling.

Upscaling refinements apply sharpening to the image post-resize. Post-resize luma sharpening is a means to combat the softness introduced by upscaling. In most cases, even sharp image upscaling is incapable of replicating the image as it appeared before upscaling.

To illustrate the impact of image upscaling, view the image below:

Original Castle Image (before 50% downscale)

The image is downscaled 50%. Then, upscaling is applied to bring the image back to the original resolution using super-xbr100. Despite the sharp upscaling of super-xbr, the image appears noticeably softer:

Downscaled Castle Image resized using super-xbr100

Now, image sharpening is layered on top of super-xbr. Note how each sharpener progressively increases perceived detail. This can be good or bad depending on the sharpener. In this case, SuperRes occupies the middle ground in detail but is the most faithful to the original, as it avoids adding extra detail not found in the original image.

superxbr100 + FineSharp(4.0)

superxbr100 + SuperRes(4)

superxbr100 + Adaptive Sharpen(0.8)

Compare the above images to the original. The benefit of image sharpening should become apparent as the image moves closer to its intended target. In practice, using slightly less aggressive values of each sharpener is best to limit artifacts such as excess ringing and aliasing. But clearly some added sharpening can be beneficial to the upscaling process.

Note: Extra sharpness is unnecessary when using NGU Sharp. In fact, the upscaling refinements soften edges and add grain are offered to soften NGU Sharp's output, which can be excessively sharp at high settings.

Sharpening shaders share four common settings:

refine the image after every ~2x upscaling step
Sharpening is applied after every 2x resize.

refine the image only once after upscaling is complete
Sharpening is applied after resize is complete.

activate anti-bloating filter
Reduce the fattening that occurs when line sharpening algorithms are applied to an image. If sharpening is designed to exaggerate the difference between high frequency and low frequency pixels, then anti-bloating tames the frequencies that are too hot and removes the low frequencies that shouldn't be there. This uses more processing power than anti-ringing but has the effect of blurring oversharpened pixels to produce a more natural result that better blends into the background elements.

Applies to LumaSharpen, sharpen edges and AdaptiveSharpen. Both crispen edges and thin edges are "skinny" by design and are omitted.

activate anti-ringing filter
Reduce ringing artifacts. This is at the expense of a small decrease in GPU performance and a reduction in the sharpening effect. Anti-ringing should be checked with all shaders as each will produce varying levels of ringing. Applies to LumaSharpen, crispen edges, sharpen edges and AdaptiveSharpen. SuperRes includes its own built-in anti-ringing filter.

Low Processing

soften edges / add grain:

Doom9 Forum: These options are meant to work with NGU Sharp. When trying to upscale a low-res image, it's possible to get the edges very sharp and very near to the "ground truth" (the original high-res image the low-res image was created from). However, texture detail which is lost during downscaling cannot properly be restored. This can lead to "cartoon" type images when upscaling by large factors with full sharpness, because the edges will be very sharp, but there's no texture detail. In order to soften this problem, I've added options to "soften edges" and "add grain." Here's a little comparison to show the effect of these options:

NGU Sharp | NGU Sharp + soften edges + add grain | Jinc3 + AR

enhance detail:

Doom9 Forum: Focuses on making faint image detail in flat areas more visible. It does not discriminate, so noise and grain may be sharpened as well. It does not enhance the edges of objects but can work well with line sharpening algorithms to provide complete image sharpening.

Medium Processing

LumaSharpen:

SweetFX WordPress: LumaSharpen works its magic by blurring the original pixel with the surrounding pixels and then subtracting the blur. The end result is similar to what would be seen after an image has been enhanced using the Unsharp Mask filter in GIMP or Photoshop. While a little sharpening might make the image appear better, more sharpening can make the image appear worse than the original by oversharpening it. Experiment and apply in moderation.

crispen edges:

Doom9 Forum: Focuses on making high-frequency edges crisper by adding light edge enhancement. This should lead to an image that appears more high-definition.

Medium - High Processing

thin edges:

Doom9 Forum: Attempts to make edges, lines and even full image features thinner/smaller. This can be useful after large upscales, as these features tend to become fattened after upscaling. May be most useful with animated content and/or used in conjunction with sharpen edges at low values.

sharpen edges:

Doom9 Forum: A line/edge sharpener similar to LumaSharpen and AdaptiveSharpen. Unlike these sharpeners, sharpen edges introduces less bloat and fat edges. More aggressive than crispen edges.

AdaptiveSharpen:

Doom9 Forum: Adaptively sharpen the image by sharpening more intensely near image edges and less intensely far from edges. The outer weights of the laplace matrix are variable to mitigate ringing on relative sharp edges and to provide more sharpening on wider and blurrier edges. The final stage is a soft limiter that confines overshoots based on local values.

SuperRes:

Doom9 Forum: The general idea behind the super resolution method is explained in a white paper by Alexey Lukin et al. The idea is to treat upscaling as inverse downscaling. The aim is to find a high-resolution image which, after downscaling, is equal to the low-resolution image.

This concept is a bit complex, but can be summarized as follows:

Estimated upscaled image is calculated -> Image is downscaled -> Differences from the original image are calculated

Forces (corrections) are calculated based on the calculated differences -> Combined forces are applied to upscale the image

This process is repeated 2-4 times until the image is upscaled with corrections provided by SuperRes.
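The loop above can be sketched numerically. This is only an illustration of the iterate-and-correct idea: the block-average downscaler, nearest-neighbour upscaler, blur, `passes` and `strength` values are all stand-ins, not madVR's internals.

```python
import numpy as np

def downscale2x(img):
    # 2x2 block average: a stand-in for the downscaler in the loop
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2x(img):
    # nearest-neighbour doubling: a stand-in for the upscaler
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def box_blur(img):
    # crude 3x3 blur to mimic the softness of a real upscaled estimate
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def superres_sketch(low, passes=3, strength=0.5):
    high = box_blur(upscale2x(low))               # initial (soft) estimate
    for _ in range(passes):
        diff = downscale2x(high) - low            # how far the estimate misses the source
        high = high - strength * upscale2x(diff)  # apply the correction "forces"
    return high

rng = np.random.default_rng(0)
low = rng.random((8, 8))
initial_err = np.abs(downscale2x(box_blur(upscale2x(low))) - low).mean()
final_err = np.abs(downscale2x(superres_sketch(low)) - low).mean()
# final_err < initial_err: after correction, downscaling the estimate
# lands much closer to the original low-resolution image
```

Each pass shrinks the remaining difference by the `strength` factor, which is why a handful of iterations is enough.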

Again, sharpening is applied to the luma information with all shaders. Chroma is untouched.

Upscaling refinement is useful for almost any upscale, particularly for users who prefer a sharp image. There is no right or wrong combination; what looks best mostly comes down to your taste. As a general rule, the amount of sharpening suitable for a given source increases with the amount of upscaling applied, as sources become softer the more they are upscaled.
4. RENDERING
  • General Settings
  • Windowed Mode Settings
  • Exclusive Mode Settings
  • Stereo 3D
  • Smooth Motion
  • Dithering
  • Trade Quality for Performance
General Settings

These are general settings designed to ensure hardware and operating system compatibility for smooth playback. Minor performance improvements may be experienced but they aren't likely to be noticeable. The key is to achieve correct open and close behavior of the media player and eliminate presentation glitches caused by incompatibilities rather than a weak GPU.

Expert Guide:

delay playback start until render queue is full

Pause the video playback until a number of frames have been rendered in advance of playback. This potentially avoids some stuttering right at the start of video playback, or after seeking through a video—but it will add a slight delay to both. It is disabled by default, but I prefer to have it enabled. If you are having problems where a video fails to start playing, this is the first option I would disable when troubleshooting.

enable windowed overlay (Windows 7 and newer)

Changes the way that windowed mode is rendered, and will generally give you better performance. The downside to windowed overlay is that you cannot take screenshots of it with the Print Screen key on your keyboard. Other than that, it's mostly a “free” performance increase for people running Windows 7/8. It does not work with AMD graphics cards. D3D9 Only.

enable automatic fullscreen exclusive mode

Allows madVR to use fullscreen exclusive mode for video rendering. This can potentially give you some big performance improvements, and allows for several frames to be sent to the video card in advance, which can help eliminate random stuttering during playback. It will also prevent things like notifications from other applications being displayed on the screen at the same time, and similar to the Windowed Overlay mode, it stops Print Screen from working. The main downside to Fullscreen Exclusive mode is that when switching in/out of FSE mode, the screen will flash black for a second. (similar to changing refresh rates) Media Center's mouse-based interface is rendered in such a way that it would not be visible in FSE mode, so madVR gets kicked out of FSE mode any time you use it, and you get that black flash on the screen. I personally find this distracting, and as such, have disabled FSE mode, because I don't need the additional performance for smooth playback on my computer. (I have an Nvidia GTX 570) The "10ft interface" is unaffected, and renders correctly inside FSE mode.

disable desktop composition (Vista and newer)

This option will disable Aero during video playback. Back in the early days of madVR this may have been necessary on some systems, but I don't recommend enabling this option now. Typically the main thing that happens is that it breaks VSync and you get screen tearing (horizontal lines over the video). Not available for Windows 8 and Windows 10.

use Direct3D 11 for presentation (Windows 7 and newer)

Use a Direct3D 11 presentation path rather than Direct3D 9. This may allow for faster entering and exiting of fullscreen exclusive mode. Overrides windowed overlay. Required for 10-bit output along with fullscreen exclusive mode.

present a frame for every VSync

Disabling this setting may improve performance but can cause presentation glitches on some systems, while enabling it can cause glitches on others. When disabled, madVR presents new frames only when needed, relying on Direct3D 11 to repeat frames as necessary to maintain VSync.

use a separate device for presentation (Vista and newer)

By default this option is now disabled, but I see a big increase in performance when it is enabled using Nvidia graphics cards. You will have to experiment with this one.

use a separate device for DXVA processing (Vista and newer)

Similar to the option above, this may or may not improve performance.

CPU/GPU queue size

This sets the size of the decoder queue (CPU) (video & subtitle) and upload/render queues (GPU) (madVR). Unless you are experiencing problems, I would leave it at the default settings of 16/8. The higher these queue sizes are, the more memory madVR requires. With larger queues you could potentially have smoother playback on some systems, but increased queue sizes also mean increased delays when seeking if the delay playback… options are enabled. Depending on your system, if you are having trouble getting smooth playback with madVR, sometimes turning the queue sizes all the way up or all the way down seems to help. It really depends on the machine.

Windowed Mode Settings

[Image: Windowed-Mode_zpsqemuuu3i.png]

present several frames in advance

This can be thought of as a buffer to prevent presentation glitches. When the present queue (shown in the madVR OSD) reaches zero, presentation glitches will occur. Sending frames in advance provides protection against these glitches.

Problems filling the present queue cannot be fixed by increasing the number of frames presented. Either madVR's settings are too aggressive for the GPU, or the CPU/GPU queues need to be increased to fill the presentation buffer. Increasing the size of the CPU/GPU queues in general settings will increase the memory used by madVR.

The present queue should be the same size or smaller than the CPU/GPU queues (e.g. render: 8 frames / present: 8 frames). A present queue stuck at zero means your GPU has run out of resources and madVR processing settings will have to be reduced until it fills.

Flush settings should be left alone unless you know what you are doing.

Exclusive Mode Settings

[Image: Exclusive-Mode_zpse0p2l2ov.png]

show seek bar

This should be unchecked if using fullscreen exclusive mode and a desktop media player such as MPC-HC. Otherwise, a seek bar will appear at the bottom of every video that cannot be removed during playback.

delay switch to exclusive mode by 3 seconds

Switching to FSE can sometimes be slow. Checking this option gives madVR time to fill its buffers and complete the switch to FSE, limiting the chance of dropped frames or presentation glitches.

present several frames in advance

Like the identical setting in windowed mode, present several frames in advance is protection against presentation glitches and should be left on. If the number of frames presented in advance is increased, the size of the CPU/GPU queues in general settings may also need to be larger to adequately fill the present queue.

The present queue should be the same size or smaller than the CPU/GPU queues (e.g. render: 8 frames / present: 8 frames). A present queue stuck at zero means your GPU has run out of resources and madVR processing settings will have to be reduced until it fills.

Again, flush settings should be left alone unless you know what you are doing.

Stereo 3D

enable stereo 3d playback

Enable stereoscopic 3D playback. This is currently limited to frame-packed MPEG4-MVC 3D Blu-ray. The easiest way to create a digital copy of a 3D Blu-ray is with MakeMKV.

when playing 2d content

Nvidia GPUs are known to crash on occasion when 3D mode is active in the operating system and 2D content is played. This most often occurs when the Direct3D 11 presentation path is used by madVR. Disable OS stereo 3d support for all displays should be checked if using this combination.

when playing 3d content

If 3D mode is enabled in the operating system, some GPUs will change the display calibration to optimize playback for frame-packed 3D. This can interfere with the performance of madVR's 3D playback. Possible side effects include altered gamma curves (designed for frame-packed 3D) and screen flickering caused by the use of an active shutter. Disable OS stereo 3d support for all displays is a failsafe to prevent GPU 3D settings from altering the image in unwanted ways.

It is safest to check both options to disable 3D in the operating system. This will limit the potential for crashes and playback oddities.

Smooth Motion

Expert Guide: Smooth motion is a frame blending system for madVR. What smooth motion is not, is a frame interpolation system—it will not introduce the “soap opera effect” like you see on 120 Hz+ TVs, or reduce 24p judder.

Smooth motion is designed to display content where the source frame rate does not match up to any of the refresh rates that your display supports. For example, that would be 25/50fps content on a 60 Hz-only display, or 24p content on a 60 Hz-only display.

It does not replace ReClock, and if your display supports 1080p24, 1080p50, and 1080p60 then you should not need to use smooth motion at all.

Because smooth motion works by using frame blending you may see slight ghost images at the edge of moving objects—but this seems to be rare and dependent on the display you are using, and is definitely preferable to the usual judder from mismatched frame rates/refresh rates.
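The frame-blending idea can be illustrated by working out which source frames each display refresh would show. This assumes a simple linear blend for illustration; madVR's exact weighting is not documented here.

```python
from fractions import Fraction

def blend_weights(src_fps, dst_hz, refreshes):
    # For each display refresh, report which two source frames are blended
    # and with what weights (simple linear blend between neighbours).
    plan = []
    for i in range(refreshes):
        t = Fraction(i) * Fraction(src_fps) / Fraction(dst_hz)  # position in source frames
        base = int(t)
        frac = t - base
        plan.append((base, base + 1, float(1 - frac), float(frac)))
    return plan

# 24 fps on a 60 Hz display: a 5-refresh cycle covers 2 source frames
blend_weights(24, 60, 5)
# [(0, 1, 1.0, 0.0), (0, 1, 0.6, 0.4), (0, 1, 0.2, 0.8),
#  (1, 2, 0.8, 0.2), (1, 2, 0.4, 0.6)]
```

The partially weighted refreshes are what produce the faint ghosting mentioned above: most refreshes show a mix of two source frames rather than a clean single frame.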

Medium Processing

only if there would be motion judder without it...
Enable smooth motion only when 3/2 pulldown or another irregular frame repeat pattern would otherwise be required.

...or if the display refresh rate is an exact multiple of the movie frame rate
Enable smooth motion even when the display refresh rate is an exact multiple of the content frame rate.

always
Enable smooth motion for all content.

In general, if your display is limited to 60 Hz playback without the possibility of display mode switching, smooth motion may be an acceptable substitute for 3/2 pulldown, although its use largely comes down to your taste for this form of frame smoothing.

Dithering

madVR Explained: Dithering is performed as the last step in madVR to convert its internal 16 bit data to the bit depth set for the display. Any time madVR does anything to the video, high bit-depth information is created. Dithering allows much of this information to be preserved when displayed at 8-10 bits. For example, the conversion of Y'CbCr to RGB generates > 8-bits of RGB data. The higher the output bit depth, the lower the visible dithering noise.

Rather than create a simple gradient consisting completely of "96 gray," for instance, dithering allows the quantization error of each calculated RGB value to be distributed to neighboring pixels. This creates a random yet controlled pattern that better approximates the varied shades present in the original gradient. Such a randomized use of colors is a way to create an artificial sense of having an expanded color palette.

Dithering to 2-bits:
2 bit Ordered Dithering
2 bit No Dithering
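The error-distribution idea described above can be sketched with classic Floyd-Steinberg error diffusion, quantizing a smooth gradient to 2-bit output. This is an illustration only; madVR's Error Diffusion options use their own kernels.

```python
import numpy as np

def error_diffusion(img, levels=4):
    # Quantize each pixel to one of `levels` values, pushing its
    # quantization error onto unprocessed neighbours (Floyd-Steinberg weights).
    out = img.astype(float).copy()
    h, w = out.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = np.clip(round(old / step), 0, levels - 1) * step
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
dithered = error_diffusion(gradient, levels=4)  # "2-bit" output: 4 values
# Every pixel ends up at one of {0, 1/3, 2/3, 1}, yet the average
# brightness of the gradient is preserved.
```

This is exactly the effect described above: the quantization error of each pixel is distributed to its neighbours, so the limited palette still approximates the original shades on average.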

Low Processing

Random Dithering
Very fast dithering. High-noise, no dither pattern.

Ordered Dithering
Very fast dithering. Low-noise, high dither pattern. This offers high-quality dithering basically for free.

use colored noise
Use an inverted dither pattern for green ("opposite color"), which reduces luma noise but adds chroma noise.

change dither for every frame
Use a new dither seed for every frame. Or, for Ordered Dithering, add random offsets and rotate the dither texture 90° between every frame.

Medium Processing

Error Diffusion - option 1
Use Direct-Compute to perform very high-quality error diffusion dithering. Mid-noise, no dither pattern. Requires a DX 11-compatible graphics card.

Error Diffusion - option 2
Use Direct-Compute to perform very high-quality error diffusion dithering. Low-noise, mid dither pattern. Requires a DX 11-compatible graphics card.

Regardless of the hardware used, dithering is best left on at all times. Ordered Dithering offers quality approaching Error Diffusion at much lower resource cost and should be considered the default setting unless your system has resources to spare.

Trade Quality for Performance

These settings reduce GPU usage at the expense of image quality. Most, if not all, options cause only a very small degradation of image quality. The options are sorted by impact: the items at the top have the least impact on picture quality, while the items at the bottom have the greatest impact on picture quality (and performance).

Those trying to squeeze the last bit of power from their GPU will want to start at the top and work their way to the bottom. It usually takes more than one checkbox to put a rendering queue under the movie frame interval or cause the present queue to fill.
5. MEASURING PERFORMANCE

How Do I Measure the Performance of My Chosen Settings?

Once the settings have been configured to your liking, it is important that madVR's settings match the capabilities of your hardware. To determine this, the menu below can be overlaid during playback by pressing Ctrl + J to provide feedback on your PC's rendering performance. Combining several settings labelled Medium or higher will create a large load on the graphics card.

Rendering performance is dependent upon the average rendering and present time of each frame in relation to the movie frame interval. In the example below, a new frame is drawn every 41.71ms. However, at an average rendering time of 49.29ms plus a present time of 0.61ms (49.29 + 0.61 = 49.90ms), the computer is unable to keep up with the frame rate of the video. The result is dropped frames, presentation glitches and generally choppy playback. As such, settings in madVR will have to be dialed down.

Predicting the load placed on the graphics processor is a function of the resolution of the video and the display, as well as the frame rate and bit depth of the video. A video with a native frame rate of 29.97 frames/s will require madVR to work 25% faster than a video with a frame rate of 23.976 frames/s. Live TV broadcast at 1920 x 1080/60i can be particularly demanding because the source frame rate is doubled after deinterlacing.
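The arithmetic behind those figures is simple to reproduce. The `keeps_up` helper and its 5 ms headroom figure are illustrative assumptions (the guide suggests targeting roughly 35-37 ms for a 41.71 ms interval), not a madVR rule.

```python
def frame_interval_ms(fps):
    # time available to render and present each frame
    return 1000.0 / fps

def keeps_up(render_ms, present_ms, fps, headroom_ms=5.0):
    # hypothetical helper: require a few ms of headroom under the interval
    return render_ms + present_ms <= frame_interval_ms(fps) - headroom_ms

frame_interval_ms(23.976)       # ~41.71 ms
frame_interval_ms(29.97)        # ~33.37 ms: ~25% less time per frame
keeps_up(49.29, 0.61, 23.976)   # False: 49.90 ms misses the interval
keeps_up(35.00, 0.61, 23.976)   # True: comfortable headroom
```

The 29.97 fps interval is 25% shorter than the 23.976 fps one, which is why madVR must work that much faster for such content.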

It is advised to find a demanding source to test your settings, ensuring some headroom is left under the movie frame interval. The number of pixels processed by madVR greatly impacts the resources used. So, to stress a 4K UHD panel, a 1080p source (scaled 1080p -> 2160p) would actually be more taxing than a 720p source. Similarly, a 720p source (scaled 720p -> 1080p) would be more demanding than an SD source with a 1080p display.

Display Rendering Stats:
Ctrl + J during full screen playback:
[Image: madVR-Stats_zpsecmtquvt.png]

Understanding madVR's List of Queues

One of the most mysterious aspects of madVR's rendering stats is its queues. These are presentation buffers. Each queue represents a measure of performance for a specific component of your system: decoding, memory access, rendering and presentation. Filling all queues is a prerequisite for rendering an image.

[Image: madVR-Queues_zps3se05cca.png]

decoder queue: (CPU) software video decoder / (GPU) hardware video decoder

subtitle queue: (CPU) software subtitle decoder

upload queue: (CPU) system memory / (GPU) video memory

render queue: (GPU) video memory / (GPU) video rendering

present queue: (GPU) video memory / (GPU) video rendering

These queues should not be used as a guide to choose madVR settings. Rendering times are superior for this purpose. Rather, queues are most valuable for troubleshooting. Any queue that fails to fill may be a sign of a system weakness or system incompatibility (e.g. a weak CPU for HEVC decoding). When all queues are empty, the cause can usually be traced to the first queue that fails to fill, as the queues should fill in order.

The present queue should never be larger than the upload and render queues. In fact, increasing the CPU/GPU queue sizes while reducing the present queue may improve playback smoothness.

Rendering in the control panel contains settings for adjusting the size of specific queues and the amount of system memory (CPU/GPU) devoted to each. Think of these queues as protection against presentation issues.

Troubleshooting Dropped Frames/Presentation Glitches

Weak CPU

Problem: The decoder and subtitle queues fail to fill.

Solution: Ease the load on the CPU by enabling hardware acceleration in LAV Video. If your GPU does not support the format played (e.g. HEVC or VP9), consider upgrading to a card with support for these formats. GPU hardware decoding is particularly critical for smooth playback of high bit-rate HEVC.

Empty Present Queue

Problem: Reported rendering stats are under the movie frame interval, but the present queue remains at zero and will not fill.

Solution: It is not abnormal to have the present queue contradict the rendering stats — in most cases, the GPU is simply overstrained and unable to render fast enough. Ease the load on the GPU by reducing processing settings until the present queue fills. If the performance deficit is very low, this situation can be cured by checking a few of the trade quality for performance checkboxes.

Lack of Headroom for GUI Overlays

Problem: Whenever a GUI element is overlaid, madVR enters low latency mode. This will temporarily reduce the present queue to 1-2/8 to maintain responsiveness of the media player. If the present queue reaches zero or fails to refill when the GUI element is removed, your madVR settings are too aggressive.

Solution: Ease the load on the GPU by reducing processing settings. If the performance deficit is very low, this situation can be cured by checking a few of the trade quality for performance checkboxes. Enabling GUI overlays during playback is the ultimate stress test for madVR settings — the present queue should recover effortlessly.

Inaccurate Rendering Stats

Problem: The average and max rendering stats indicate rendering is below the movie frame interval, but madVR still produces glitches and dropped frames.

Solution: A video with a frame interval of 41.71 ms should have average rendering stats of 35-37 ms to give madVR adequate headroom to render the image smoothly. Anything higher risks dropped frames or presentation glitches during performance peaks.

Scheduled Frame Drops

Problem: This generally refers to clock jitter. Clock jitter is caused by a lack of synchronization between three clocks: the system clock, video clock and audio clock. The system clock always runs at 1.0x. The audio and video clocks tick away independently of each other. Having three independent clocks invites the possibility of losing synchronization. These clocks are subject to variability caused by differences in A/V hardware, drivers or software. Any difference from the system clock is captured by the display and clock deviation readings in madVR's rendering stats. If the audio and video clocks happen to be synchronized, frames are presented "perfectly." However, any reported difference between the two leads to a slow drift between audio and video during playback. The video clock yields to the audio clock: a frame is dropped or repeated every few minutes to maintain synchronization.

Solution: Correcting clock jitter requires an audio renderer designed for this purpose. It also requires that all audio be output as multichannel PCM. ReClock is an example of an audio renderer that uses decoded PCM audio to correct audio/video clock synchronization. For those wishing to bitstream, use of custom resolutions can reduce the frequency of dropped frames, although this won't eliminate the problem. Frame drops or repeats caused by clock jitter are considered a normal occurrence with almost all HTPCs.
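The drift arithmetic can be sketched to show why drops occur "every few minutes." The 0.01% deviation figure is illustrative, not a measured value; read the actual deviation from madVR's rendering stats.

```python
def seconds_between_drops(fps, clock_deviation_percent):
    # A clock mismatch of d% drifts one full frame every 100/d frames,
    # forcing one dropped or repeated frame to stay in sync.
    frames_per_drop = 100.0 / clock_deviation_percent
    return frames_per_drop / fps

seconds_between_drops(23.976, 0.01)  # ~417 s: one drop roughly every 7 minutes
```

Even a tiny mismatch between the audio and video clocks therefore produces the occasional, regular frame drop described above.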

Interrupted Playback

Problem: Windows or other software interrupts playback with a notification or background process causing frame drops.

Solution: The most stable playback mode in madVR is enable automatic fullscreen exclusive mode (found in general settings). Exclusive mode will ensure madVR has complete focus during all aspects of playback and the most stable VSync. Some systems do not work well with fullscreen exclusive mode and will drop frames.
6. SAMPLE SETTINGS PROFILES, PROFILE RULES & ADVANCED SETTINGS

So, with all of the settings laid out, let's move on to some settings profiles...

It is important to know your graphics card when using madVR, as the program relies heavily on this hardware. Due to the large performance variability in graphics cards and the breadth of possible madVR configurations, it can be difficult to recommend settings for specific GPUs. However, I'll attempt to provide a starting point by using some examples with my personal hardware. The example below demonstrates the difference in madVR performance between an integrated graphics card and a dedicated gaming card.

I have a laptop with an Intel HD 3000 graphics processor and Sandy Bridge i7. I can run madVR with settings similar to its default values.

Integrated GPU:
  • Chroma: Bicubic60 + AR
  • Image upscaling: Lanczos3 + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Off
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Ordered
I am upscaling primarily 24 fps content to 1080p24. Subjectively, the picture quality is superior to Kodi DXVA upscaling, with less noise, less noticeable color banding and improved color accuracy. DXVA upscaling with Intel processors already uses something similar to Lanczos3 + AR. This is probably why the picture seems similar: Lanczos produces a crisp, coarse scaling that is very identifiable.

I also have an older HTPC with a Nvidia GTX 750 Ti and Core 2 Duo CPU.

A dedicated gaming card allows the flexibility to use more demanding scaling algorithms, add sharpening and artifact removal, and increase the quality of dithering. Settings assume all trade quality for performance checkboxes are unchecked save the one related to subtitles.

Given the flexibility a gaming card provides, I will offer three different scenarios based on common resizes:

Display: 1920 x 1080p

Resizes:
  • 1080p -> 1080p
  • 720p -> 1080p
  • SD -> 1080p
Scaling factor: Increase in vertical resolution or pixels per inch.

Profile: "1080p"

1080p -> 1080p
1920 x 1080 -> 1920 x 1080
Increase in pixels: 0
Scaling factor: 0

At 1080p, image upscaling is unnecessary. Instead, the settings to be concerned with are Chroma upscaling (which is applied to all videos), Image enhancements (the lone form of image sharpening available for native content) and Dithering.

Artifact removal includes Debanding and Deringing. Debanding is non-destructive and useful for 8-bit sources, which often display some form of color banding even when the source is uncompressed. Deringing can be destructive, by comparison, and ringing artifacts are less common with high-quality sources. Deringing is not recommended as a general use setting.

A high-quality 1080p source should not require a lot of enhancement. If Image enhancements are used, they should be used in moderation. My preference is for small values of crispen edges or sharpen edges (with anti-bloating) to improve the high-definition look of this content.

1080p:
  • Chroma: NGU Sharp (high)
  • Downscaling: SSIM 1D 100% + AR + LL
  • Image upscaling: Jinc3 + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: crispen edges (0.5) + AR
  • Dithering: Error Diffusion 2
Supersampling (image doubling) is another approach to enhancing a 1080p source, though it is not generally recommended and requires a powerful GPU. Supersampling involves doubling a source to twice its original size, sharpening it, and returning it to its original resolution. The chain looks like this: Image doubling -> Upscaling refinement -> Image downscaling. The hope is that applying sharpening to a doubled image will lead to a subtler sharpening effect with fewer artifacts.
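As a sanity check on the chain above, here is a tiny Python sketch (the function name is my own, not part of madVR) that traces the resolutions through the supersampling steps:

```python
def supersample_chain(src_w, src_h):
    """Trace resolutions through: Image doubling -> Upscaling
    refinement (sharpening at the doubled size) -> Image downscaling."""
    doubled = (src_w * 2, src_h * 2)  # NGU doubles width and height
    final = (src_w, src_h)            # downscaled back to native size
    return doubled, final

# A 1080p source is doubled to 2160p, sharpened, then returned to 1080p.
doubled, final = supersample_chain(1920, 1080)
print(doubled, final)  # (3840, 2160) (1920, 1080)
```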

I recommend NGU as the supersampler/sharpener because of its clean lines and ultra-sharp upscaling.

1080p -> 2160p Supersampling (for high-end GPUs):
  • Chroma: NGU Sharp (medium)
  • Downscaling: SSIM 2D 100% + AR + LL
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (high))
  • <-- Chroma: normal (Bicubic60 + AR)
  • <-- Doubling: always - supersampling
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: SSIM 1D 100% + AR + LL
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Ordered
If you prefer to have more control over the application of image sharpening, avoid supersampling and use the first profile. Applying Image enhancements to the native source allows more influence over the final sharpening effect.

Profile: "720p"

720p -> 1080p
1280 x 720 -> 1920 x 1080
Increase in pixels: 2.25x
Scaling factor: 1.5x
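The pixel-increase and scaling-factor figures quoted in these profiles follow from simple ratios; a small helper (names are my own) reproduces them:

```python
def resize_stats(src_w, src_h, dst_w, dst_h):
    """Return (increase in pixels, scaling factor) for a resize.
    The scaling factor is the ratio of vertical resolutions."""
    pixel_increase = (dst_w * dst_h) / (src_w * src_h)
    scaling_factor = dst_h / src_h
    return pixel_increase, scaling_factor

print(resize_stats(1280, 720, 1920, 1080))  # 720p -> 1080p: (2.25, 1.5)
print(resize_stats(640, 480, 1920, 1080))   # SD -> 1080p: (6.75, 2.25)
```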

At 720p, image upscaling is introduced. Upscaling the luma channel matters most in resolving image detail. As such, Image upscaling, followed by Upscaling refinement, are the key settings for upscaled sources.

Jinc is the chosen upscaler. SuperRes is layered on top of Jinc to provide additional sharpness. This is important as upscaling alone will create a soft image. Note that sharpening is added from Upscaling refinement, so it is applied to the post-resized image.

720p Regular upscaling:
  • Chroma: NGU Sharp (medium)
  • Downscaling: SSIM 1D 100% + AR + LL
  • Image upscaling: Jinc3 + AR
  • Image doubling: Off
  • Upscaling refinement: SuperRes (1)
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2
Image doubling can further improve a 720p source. This involves doubling the resolution (720p -> 1440p) and using Image downscaling to correct the slight overscale (1440p -> 1080p). This is resource-intensive, so only attempt it if your GPU is capable.

NGU Sharp is the choice image doubler. It does not require any added sharpening and may actually benefit from enabling soften edges and/or add grain to counter the sharpness of its upscaling.

To calibrate NGU, select Image upscaling -> doubling -> NGU Sharp and use the drop-down menus. Set Luma doubling to its maximum value (very high), Chroma to normal and everything else to let madVR decide.

If the maximum value is too aggressive, reduce Luma doubling until rendering times are under the movie frame interval (35-37ms for a 24 fps source). Then increase Upscaling algo, Downscaling algo and Chroma in that order to use any remaining GPU resources. Luma quality is always first and most important.
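For reference, the frame interval is simply the reciprocal of the frame rate, and the 35-37 ms target above leaves some headroom below the full budget. A quick check (the function name is my own):

```python
def frame_interval_ms(fps):
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / fps

# A 23.976 fps movie allows roughly 41.7 ms per frame; aiming for
# 35-37 ms rendering times leaves a safety margin below that budget.
print(round(frame_interval_ms(23.976), 1))  # 41.7
print(round(frame_interval_ms(60), 1))      # 16.7
```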

Keep in mind, NGU Sharp (very high) is three times slower than NGU Sharp (high) while only producing a small improvement in image quality.

720p Image doubling:
  • Chroma: NGU Sharp (medium)
  • Downscaling: SSIM 1D 100% + AR + LL
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (high))
  • <-- Chroma: normal (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + AR + LL)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2
Profile: "SD"

SD -> 1080p
640 x 480 -> 1920 x 1080
Increase in pixels: 6.75x
Scaling factor: 2.25x

By the time SD content is reached, the scaling factor starts to become quite large (2.25x). Here, the image becomes soft due to the errors introduced by upscaling. Countering this soft appearance is possible by introducing more sophisticated image upscaling provided by madVR's image doubling. Image doubling does just that — it takes the full resolution luma and chroma information and scales it by factors of two to reach the desired resolution (2x for a double and 4x for a quadruple). If larger than needed, the result is interpolated down to the target.

Doubling a 720p source targeted at 1080p involves overscaling to 1440p and downscaling back to the target resolution. Improvements in image quality may go unnoticed in this case. However, image doubling applied to larger resizes of 480p to 1080p or 1080p to 2160p will, in most cases, result in the highest-quality image.

As stated, NGU is the best choice for image doubling due to its high sharpness, low aliasing and lack of ringing. It does not require added sharpening from Upscaling refinement to appear razor sharp. In fact, NGU Sharp can look artificial at times when set to very high quality with large scaling factors. NGU's sharpness tends to reveal missing detail from a downscaled source. To avoid creating "cartoon" edges, it is recommended to enable soften edges and/or add grain in Upscaling refinement when NGU is set to high or very high quality at scaling factors of 2x or greater. Alternatively, use NGU Anti-Alias, which better tolerates low quality sources.

NGU Sharp | NGU Sharp + soften edges + add grain | Jinc3 + AR

Again, settings for Chroma, Upscaling algo and Downscaling algo are of secondary importance. Always try to maximize Luma doubling first, if possible.

SD Image doubling:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + AR + LL
  • Image upscaling: Off
  • Image doubling: NGU Anti-Alias
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Anti-Alias (high))
  • <-- Chroma: normal (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + AR + LL)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2
If you do not like the look of image doubling, replacing it with Jinc is an alternative. NGU Sharp can be very sharp; Jinc produces an image that some may feel is softer and more natural.

SD Regular upscaling:
  • Chroma: Jinc3 + AR
  • Downscaling: SSIM 1D 100% + AR + LL
  • Image upscaling: Jinc3 + AR
  • Image doubling: Off
  • Upscaling refinement: SuperRes (3)
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2
Creating madVR Profiles

Now we will translate each profile into a resolution profile with profile rules.

Add this code to each profile group:

if (srcWidth > 1280) "1080p"
else if (srcWidth <= 1280) and (srcHeight > 720) "1080p"

else if (srcWidth > 960) and (srcWidth <= 1280) "720p"
else if (srcWidth <= 960) and ((srcHeight > 540) and (srcHeight <= 720)) "720p"

else if (srcWidth <= 960) and (srcHeight <= 540) "SD"
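These rules can be sanity-checked outside madVR. The Python sketch below (my own, not part of madVR) condenses the same bounds into an equivalent form and shows that every resolution falls into exactly one profile:

```python
def choose_profile(src_width, src_height):
    """Condensed but equivalent mirror of the madVR profile rules
    above for a 1080p display."""
    if src_width > 1280 or src_height > 720:
        return "1080p"
    if src_width > 960 or src_height > 540:
        return "720p"
    return "SD"

print(choose_profile(1920, 1080))  # "1080p"
print(choose_profile(1280, 720))   # "720p"
print(choose_profile(720, 576))    # "720p" (these rules class PAL DVDs as 720p)
print(choose_profile(640, 480))    # "SD"
```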

deintFps (the source frame rate after deinterlacing) is another factor on top of the source resolution that greatly impacts the load placed on madVR. Doubling the frame rate, for example, doubles the load placed on madVR. Profile rules such as (deintFps <= 25) and (deintFps > 25) may be combined with srcWidth and srcHeight to create additional profiles.

A more "fleshed-out" set of profiles incorporating the source frame rate might look like this:
  • "1080p25"
  • "1080p60"
  • "720p25"
  • "720p60"
  • "540p25"
  • "540p60"
Click on scaling algorithms. Create a new folder by selecting create profile group.

[Image: Create-Profile-Group_zpsvywrclwf.png]

Each profile group offers a choice of settings to include.

Select all items, and name the new folder "Scaling."

[Image: Profile-Checkboxes_zpshm81vzwl.png]

Select the Scaling folder. Using add profile, create six profiles.

Name each profile: 1080p25, 1080p60, 720p25, 720p60, 540p25, 540p60.

Paste the code below into Scaling:

if (deintFps <= 25) and (srcWidth > 1280) "1080p25"
else if (deintFps <= 25) and ((srcWidth <= 1280) and (srcHeight > 720)) "1080p25"

else if (deintFps > 25) and (srcWidth > 1280) "1080p60"
else if (deintFps > 25) and ((srcWidth <= 1280) and (srcHeight > 720)) "1080p60"

else if (deintFps <= 25) and ((srcWidth > 960) and (srcWidth <= 1280)) "720p25"
else if (deintFps <= 25) and ((srcWidth <= 960) and (srcHeight > 540) and (srcHeight <= 720)) "720p25"

else if (deintFps > 25) and ((srcWidth > 960) and (srcWidth <= 1280)) "720p60"
else if (deintFps > 25) and ((srcWidth <= 960) and (srcHeight > 540) and (srcHeight <= 720)) "720p60"

else if (deintFps <= 25) and ((srcWidth <= 960) and (srcHeight <= 540)) "540p25"

else if (deintFps > 25) and ((srcWidth <= 960) and (srcHeight <= 540)) "540p60"

A green check mark should appear above the box to indicate the profiles are correctly named and no code conflicts exist.

[Image: madVR-Control-Profile-Rules_zpstubyggsb.png]
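The fps-aware rules can be mirrored the same way; the sketch below (my own, not part of madVR) condenses the six rules into an equivalent form:

```python
def choose_profile(deint_fps, src_width, src_height):
    """Condensed but equivalent mirror of the six fps-aware
    profile rules pasted above."""
    if src_width > 1280 or src_height > 720:
        base = "1080p"
    elif src_width > 960 or src_height > 540:
        base = "720p"
    else:
        base = "540p"
    return base + ("25" if deint_fps <= 25 else "60")

print(choose_profile(23.976, 1920, 1080))  # "1080p25"
print(choose_profile(59.94, 1280, 720))    # "720p60"
print(choose_profile(25, 720, 480))        # "540p25"
```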

Additional profile groups must be created for processing and rendering.

Note: The use of six profiles may be unnecessary for other profile groups. For instance, if I wanted Image enhancements (under processing) to apply only to 1080p content, two profiles would be required:

if (srcWidth > 1280) "1080p"
else if (srcWidth <= 1280) and (srcHeight > 720) "1080p"

else "Other"

Disabling Image upscaling for Cropped 1080p Videos:

You may encounter some 1080p videos cropped just short of their original size (e.g. width = 1916). Those few missing pixels will put an abnormal strain on madVR as it tries to resize to the display resolution. zoom control in the madVR control panel contains a setting to disable image upscaling if the video falls within a certain range (e.g. 10 lines or less). Disabling scaling adds a few black pixels to the video and prevents the image upscaling algorithm from resizing the image. This can keep cropped videos from pushing madVR's rendering times over the frame budget.
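madVR performs this check internally through zoom control; the sketch below (names and default values are my own) only illustrates the rule being applied:

```python
def skip_upscaling(src_w, src_h, dst_w=1920, dst_h=1080, max_lines=10):
    """Return True when a video is cropped just short of the display
    resolution, so image upscaling can be skipped and the gap filled
    with black pixels instead."""
    return (0 <= dst_w - src_w <= max_lines) and (0 <= dst_h - src_h <= max_lines)

print(skip_upscaling(1916, 1080))  # True  - only 4 pixels short of 1920
print(skip_upscaling(1280, 720))   # False - a genuine upscale
```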

Link: How to Configure madVR Profile Rules

Display: 3840 x 2160p

Let's repeat this process, this time assuming the display resolution is 3840 x 2160p (4K UHD). Two graphics cards will be used for reference: a Medium-level card such as the GTX 960, and a High-level card similar to a GTX 1080. Again, the source frame rate is assumed to be 24 fps.

Resizes:
  • 2160p -> 2160p
  • 1080p -> 2160p
  • 720p -> 2160p
  • SD -> 2160p
Scaling factor: Increase in vertical resolution or pixels per inch.

Profile: "2160p"

2160p -> 2160p
3840 x 2160 -> 3840 x 2160
Increase in pixels: 0
Scaling factor: 0

This profile is identical in appearance to that for a 1080p display. Without image upscaling, the focus is on settings for Chroma upscaling, Image enhancements, Debanding and Dithering. If the source is 10-bit and high-quality, Debanding may be unnecessary.

Medium:
  • Chroma: NGU Sharp (high)
  • Downscaling: SSIM 1D 100% + AR + LL
  • Image upscaling: Jinc3 + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: crispen edges (0.5) + AR
  • Dithering: Ordered
High:
  • Chroma: NGU Sharp (very high)
  • Downscaling: SSIM 2D 100% + AR + LL
  • Image upscaling: Jinc3 + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: crispen edges (0.5) + AR
  • Dithering: Error Diffusion 2
Profile: "1080p"

1080p -> 2160p
1920 x 1080 -> 3840 x 2160
Increase in pixels: 4x
Scaling factor: 2x

At an even 2x resize, image doubling is an ideal match for FHD content upscaled to UHD. For this purpose, I've picked NGU Sharp as the image upscaler. NGU is very resource-hungry, but its sharp, artifact-free scaling remains the gold standard of madVR image scaling. If the image appears too sharp, using NGU Standard or soften edges is an option.

NGU Sharp | NGU Sharp + soften edges + add grain | Jinc3 + AR

To calibrate NGU, select Image upscaling -> doubling -> NGU Sharp and use the drop-down menus. Set Luma doubling to its maximum value (very high), Chroma to normal and everything else to let madVR decide.

If the maximum value is too aggressive, reduce Luma doubling until rendering times are under the movie frame interval (35-37ms for a 24 fps source). Then increase Upscaling algo, Downscaling algo and Chroma in that order to use any remaining GPU resources. Luma quality is always first and most important.

Keep in mind, NGU Sharp (very high) is three times slower than NGU Sharp (high) while only producing a small improvement in image quality.

Medium:
  • Chroma: NGU Sharp (medium)
  • Downscaling: SSIM 1D 100% + AR + LL
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: medium
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (med))
  • <-- Chroma: normal (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + AR + LL)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Ordered
High:
  • Chroma: NGU Sharp (high)
  • Downscaling: SSIM 2D 100% + AR + LL
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: very high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (very high))
  • <-- Chroma: let madVR decide (NGU medium)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Jinc3 + AR)
  • <-- Downscaling algo: let madVR decide (SSIM 1D 100% + AR + LL)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2
Profile: "720p"

720p -> 2160p
1280 x 720 -> 3840 x 2160
Increase in pixels: 9x
Scaling factor: 3x

At a 3x scaling factor, image quadrupling becomes possible. The image is upscaled 4x (720p -> 2880p) and then downscaled by 25% (2880p -> 2160p) to match the output resolution. This is the lone change from Profile 1080p.

Image quadrupling may not be a realistic setting for many graphics cards, especially when scaled via NGU. In either case, some form of image doubling remains desirable given the large scaling factor. If quadrupling is used, it is best combined with sharp Image downscaling such as SSIM or Bicubic150.

Medium:
  • Chroma: NGU Sharp (medium)
  • Downscaling: SSIM 1D 100% + AR + LL
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: medium
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (med))
  • <-- Chroma: normal (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + AR + LL)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Ordered
High:
  • Chroma: NGU Sharp (high)
  • Downscaling: SSIM 2D 100% + AR + LL
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: very high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (very high))
  • <-- Chroma: let madVR decide (NGU medium)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Jinc3 + AR)
  • <-- Downscaling algo: let madVR decide (SSIM 1D 100% + AR + LL)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2
Profile: "SD"

SD -> 2160p
640 x 480 -> 3840 x 2160
Increase in pixels: 27x
Scaling factor: 4.5x

The final resize, SD to 2160p, is a monster (4.5x!). This is perhaps the only scenario where image quadrupling is not only useful but necessary to maintain the integrity of the original image. The image is quadrupled (4x) by image doubling, and the Upscaling algo covers the remaining stretch to the 4.5x target. NGU Anti-Alias is substituted to minimize the enhancement of aliasing in low-quality SD sources. Again, if pushed for resources, other settings should be accommodated around image doubling, particularly Dithering and Chroma upscaling.

Medium:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + AR + LL
  • Image upscaling: Off
  • Image doubling: NGU Anti-Alias
  • <-- Luma doubling: medium
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Anti-Alias (med))
  • <-- Chroma: normal (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + AR + LL)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Ordered
High:
  • Chroma: NGU Anti-Alias (high)
  • Downscaling: SSIM 2D 100% + AR + LL
  • Image upscaling: Off
  • Image doubling: NGU Anti-Alias
  • <-- Luma doubling: very high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Anti-Alias (very high))
  • <-- Chroma: let madVR decide (NGU medium)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Jinc3 + AR)
  • <-- Downscaling algo: let madVR decide (SSIM 1D 100% + AR + LL)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Medium/High
  • Artifact removal - Deringing: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2
Creating madVR Profiles

These profiles can be translated into madVR profile rules.

Add this code to each profile group:

if (srcWidth > 1920) "2160p"
else if (srcWidth <= 1920) and (srcHeight > 1080) "2160p"

else if (srcWidth > 1280) and (srcWidth <= 1920) "1080p"
else if (srcWidth <= 1280) and ((srcHeight > 720) and (srcHeight <= 1080)) "1080p"

else if (srcWidth > 960) and (srcWidth <= 1280) "720p"
else if (srcWidth <= 960) and ((srcHeight > 540) and (srcHeight <= 720)) "720p"

else if (srcWidth <= 960) and (srcHeight <= 540) "SD"

OR

if (deintFps <= 25) and (srcWidth > 1920) "2160p25"
else if (deintFps <= 25) and ((srcWidth <= 1920) and (srcHeight > 1080)) "2160p25"

else if (deintFps > 25) and (srcWidth > 1920) "2160p60"
else if (deintFps > 25) and ((srcWidth <= 1920) and (srcHeight > 1080)) "2160p60"

else if (deintFps <= 25) and ((srcWidth > 1280) and (srcWidth <= 1920)) "1080p25"
else if (deintFps <= 25) and ((srcWidth <= 1280) and (srcHeight > 720) and (srcHeight <= 1080)) "1080p25"

else if (deintFps > 25) and ((srcWidth > 1280) and (srcWidth <= 1920)) "1080p60"
else if (deintFps > 25) and ((srcWidth <= 1280) and (srcHeight > 720) and (srcHeight <= 1080)) "1080p60"

else if (deintFps <= 25) and ((srcWidth > 960) and (srcWidth <= 1280)) "720p25"
else if (deintFps <= 25) and ((srcWidth <= 960) and (srcHeight > 540) and (srcHeight <= 720)) "720p25"

else if (deintFps > 25) and ((srcWidth > 960) and (srcWidth <= 1280)) "720p60"
else if (deintFps > 25) and ((srcWidth <= 960) and (srcHeight > 540) and (srcHeight <= 720)) "720p60"

else if (deintFps <= 25) and ((srcWidth <= 960) and (srcHeight <= 540)) "540p25"

else if (deintFps > 25) and ((srcWidth <= 960) and (srcHeight <= 540)) "540p60"

madVR Image Settings Hierarchy

Beyond the above profiles, I will offer some general settings advice...

Each settings category in madVR has a minimum and maximum value, where each step up offers the possibility of higher quality at the expense of greater resource use. However, the maximum value of each category can have a dramatically different impact on overall image fidelity. For example, the highest level of dithering will produce a very small improvement in image quality compared to the highest level of image upscaling.

Because most settings profiles involve a compromise, where one setting must be turned down so another can be turned up, it is important to understand which settings are important and which are luxuries only to be used when extra processing resources are available.

General Rules and Caveats:
  • The human eye has difficulty detecting changes to the chroma layer compared to changes in the black-and-white luma layer. This is why chroma subsampling is so widely used. As such, it can be difficult to see the difference between chroma upscaling settings beyond Catmull-Rom without good eyes and appropriate test patterns.
  • Detecting the difference in dither patterns between Ordered Dithering and Error Diffusion with real-world content can be equally challenging, especially at output bit depths of 8 bits or greater.
  • Image enhancements and Upscaling refinement improve the image only when applied to high-quality sources. Enhancing low-quality sources may amplify their artifacts and make them appear worse.
  • Artifact Removal will benefit low-quality sources the most, where these artifacts are most present.
Video Rendering is Divided into Two Cases:
  • Image Upscaling is used to resize the image;
  • Native Sources are involved, where the image is shown at its native resolution.
Settings Hierarchy: Rank order of each setting based on its relative impact in improving picture quality.

[Image: madVR-Hierarchy_zps4vnmy0ko.png]

7. OTHER RESOURCES

List of Compatible Media Players & Calibration Software

madVR Player Support Thread

Building a High-performance HTPC for madVR

Building a 4K madVR HTPC

Kodi Beginner's Guide

Kodi Quick Start Guide

Configuring a Remote Control

HOW TO - Configure a Logitech Harmony Remote for Kodi

HTPC Updater

This program is designed to download and install updated copies of MPC-HC, LAV Filters and madVR.

For this tool to work, 32-bit versions of MPC-HC, LAV Filters and madVR must be installed on your system. Running the program updates each component to its latest version, avoiding the process of manually extracting and re-registering madVR with each update.

Note: madVR components are dropped into the Local Disk C: folder by default. If one component fails, try updating it manually before running the program again.

HTPC Updater

MakeMKV

MakeMKV is pain-free software for ripping Blu-rays and DVDs into an MKV container, which can be read by Kodi. By selecting the main title and an audio stream, it is possible to create bit-for-bit copies of Blu-rays with the accompanying lossless audio track in one hour or less. No encoding is required — the video is placed in a new container and packaged with the audio and subtitle track(s). From here, the file can be added directly to your Kodi library or compressed for storage using software such as Handbrake. This is the fastest way to import your Blu-ray collection into Kodi.

Tip: Set the minimum title length to 3600 seconds (60 minutes) and a default language preference in Preferences to ease the task of identifying the correct video, audio and subtitle tracks.

MakeMKV Homepage (Beta Registration Key)

Launcher4Kodi

Launcher4Kodi is an HTPC helper utility that helps create appliance-like behavior on a Windows-based HTPC running Kodi. It auto-starts Kodi on power-on or resume from sleep and auto-closes Kodi on power-off. It can also ensure Kodi remains focused when loaded fullscreen and set either Windows or Kodi to run as the shell.
Derek Offline
Posting Freak
Posts: 1,714
Joined: Aug 2009
Reputation: 23
Location: Bonnie Scotland
Post: #12
awesome information buddy
(2016-02-10 05:39)Derek Wrote:  awesome information buddy

Thanks. Hopefully it will be of some use to others.
gotham_x Offline
Fan
Posts: 382
Joined: Jun 2014
Reputation: 1
Location: The City of Darkness _@_Italy_@_
Post: #14
Hi.
Excellent review, future-proof.
I wanted to ask just one thing, for Demo HDR played on a display 1920x1080 24Hz and Video Rendering madVR, rules + profile you entered for display: 3840 x 2160p can be inserted to a display 1920x1080 24Hz?
Thanks.
(2016-02-12 16:57)gotham_x Wrote:  Hi.
Excellent review, future-proof.
I wanted to ask just one thing, for Demo HDR played on a display 1920x1080 24Hz and Video Rendering madVR, rules + profile you entered for display: 3840 x 2160p can be inserted to a display 1920x1080 24Hz?
Thanks.

If you're asking if you can use the profile rules for a 4K display for a 1080p display, then yes.