Screen tearing[1] is a visual artifact in video display where a display device shows information from multiple frames in a single screen draw.[2]
The artifact occurs when the video feed to the device is not synchronized with the display's refresh rate. It can be caused by non-matching refresh rates, in which case the tear line moves as the phase difference changes (at a speed proportional to the difference between the frame rates). It can also occur simply from a lack of synchronization between two equal frame rates, in which case the tear line sits at a fixed location corresponding to the phase difference. During video motion, screen tearing creates a torn look as the edges of objects (such as a wall or a tree) fail to line up.
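As a rough illustration of that geometry, the following sketch (an assumed model; the rates and resolution are illustrative, not taken from the article) computes where the tear line lands on successive unsynchronized buffer swaps:

    #include <math.h>
    #include <stdio.h>

    /* Assumed model: an unsynchronized buffer swap at source rate f_s
       lands partway through a refresh at display rate f_d; the tear
       appears at whichever scanline is being scanned at that moment.
       With equal rates the phase is constant and the tear stays put. */
    int main(void) {
        const double f_s = 59.0, f_d = 60.0;  /* example rates in Hz */
        const int height = 1080;
        for (int k = 0; k < 5; ++k) {
            double swap_time = k / f_s;                /* k-th swap, in seconds */
            double phase = fmod(swap_time * f_d, 1.0); /* fraction of the refresh elapsed */
            printf("swap %d: tear near scanline %d\n", k, (int)(phase * height));
        }
        return 0;
    }

With these example rates the tear line drifts downward by roughly 18 scanlines per source frame; with f_s equal to f_d it would stay fixed.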
Tearing can occur with most common display technologies and video cards and is most noticeable in horizontally moving visuals, such as slow camera pans in a movie or classic side-scrolling video games.
Screen tearing is less noticeable when more than two frames finish rendering during the same refresh interval since that means the screen has several narrower tears, instead of a single wider one.
Ways to prevent video tearing depend on the display device and video card technology, the software in use, and the nature of the video material. The most common solution is to use multiple buffering.
Most systems use multiple buffering and some means of synchronization of display and video memory refresh cycles.[3]
Option "TearFree" "boolean": disable or enable TearFree updates. This option forces X to perform all rendering to a back buffer before updating the actual display. It requires an extra memory allocation the same size as a framebuffer, the occasional extra copy, and requires Damage tracking. Thus, enabling TearFree requires more memory and is slower (reduced throughput) and introduces a small amount of output latency, but it should not impact input latency. However, the update to the screen is then performed synchronously with the vertical refresh of the display so that the entire update is completed before the display starts its refresh. That is only one frame is ever visible, preventing an unsightly tear between two visible and differing frames. This replicates what the compositing manager should be doing, however, TearFree will redirect the compositor updates (and those of fullscreen games) directly onto the scan out thus incurring no additional overhead in the composited case. Not all compositing managers prevent tearing, and if the outputs are rotated, there will still be tearing without TearFree enabled.
— From Intel open source GPU driver, https://manpages.debian.org/buster/xserver-xorg-video-intel/intel.4.en.html
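The option quoted above is set in the Device section of xorg.conf; a minimal fragment for the intel driver might look like the following (the Identifier string is arbitrary):

    Section "Device"
        Identifier "Intel Graphics"
        Driver     "intel"
        Option     "TearFree" "true"
    EndSection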
Vertical synchronization is an option in most systems in which the video card is prevented from making any visible changes to the display memory until the monitor finishes its current refresh cycle.
During the vertical blanking interval, the driver orders the video card either to rapidly copy the off-screen graphics area into the active display area (double buffering) or to treat both memory areas as displayable and simply switch back and forth between them (page flipping).
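A minimal sketch of the two strategies just described; wait_for_vblank(), set_scanout(), and render_frame() are hypothetical stand-ins for driver and application services, not a real API:

    #include <stdint.h>
    #include <string.h>

    #define WIDTH  640
    #define HEIGHT 480

    static uint32_t buffers[2][WIDTH * HEIGHT];
    static uint32_t *front = buffers[0];  /* scanned out by the display */
    static uint32_t *back  = buffers[1];  /* drawn into off-screen */

    /* Hypothetical stand-ins; real drivers expose equivalents. */
    static void wait_for_vblank(void) { /* block until vertical blanking */ }
    static void set_scanout(uint32_t *fb) { (void)fb; /* retarget the display */ }
    static void render_frame(uint32_t *fb) { (void)fb; /* application drawing */ }

    /* Double buffering: draw off-screen, then copy during the blank. */
    void present_copy(void) {
        render_frame(back);
        wait_for_vblank();
        memcpy(front, back, sizeof buffers[0]);
    }

    /* Page flipping: both areas are displayable; swap roles, no copy. */
    void present_flip(void) {
        render_frame(back);
        wait_for_vblank();
        set_scanout(back);
        uint32_t *tmp = front; front = back; back = tmp;
    }

Page flipping avoids the full-frame copy, which is why it is generally preferred when the hardware can scan out from either buffer.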
Nvidia and AMD video adapters provide an 'Adaptive Vsync' option, which will turn on vertical synchronization only when the frame rate of the software exceeds the display's refresh rate, disabling it otherwise. That eliminates the stutter that occurs as the rendering engine frame rate drops below the display's refresh rate.[4]
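The policy can be sketched as a per-frame decision (an assumed model of the behavior described above, not vendor code):

    #include <stdbool.h>

    /* Synchronize only while rendering outpaces the display; otherwise
       present immediately rather than stalling for the next refresh. */
    static bool use_vsync_this_frame(double frame_time, double refresh_interval) {
        return frame_time < refresh_interval;  /* e.g. 12 ms < 16.7 ms at 60 Hz */
    }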
Alternatively, technologies like FreeSync[5] and G-Sync[6] reverse the concept and adapt the display's refresh rate to the content coming from the computer. Such technologies require specific support from both the video adapter and the display.
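In such variable-refresh schemes the roles invert: the display times its refresh to frame arrival, within a supported range. A toy model (the clamping bounds are illustrative, not from any specification):

    /* Toy model of variable refresh: the display refreshes when a frame
       arrives, clamped to the panel's supported interval range. */
    static double next_refresh_interval(double frame_interval,
                                        double min_interval,   /* fastest rate */
                                        double max_interval) { /* slowest rate */
        if (frame_interval < min_interval) return min_interval;
        if (frame_interval > max_interval) return max_interval; /* panel re-shows the last frame */
        return frame_interval;
    }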
When vertical synchronization is used, the frame rate of the rendering engine gets limited to the video signal frame rate. That feature normally improves video quality but involves trade-offs in some cases.
Vertical synchronization can also cause artifacts in video and movie presentations, since they are generally recorded at frame rates (24–30 frame/s) significantly lower than typical monitor refresh rates. When such a movie is played on a monitor set for a typical 60 Hz refresh rate, the video player frequently misses the monitor's refresh deadline, so successive frames are held on screen for unequal lengths of time, some slightly shorter and some slightly longer than intended, resulting in an effect similar to judder. (See Telecine: Frame rate differences.)
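The mismatch can be made concrete: at 24 frame/s each frame should cover 2.5 refreshes of a 60 Hz display, which is impossible, so frames alternate between 2 and 3 refreshes (the 3:2 pulldown pattern). A small sketch of that schedule:

    #include <stdio.h>

    int main(void) {
        const double fps = 24.0, hz = 60.0;
        int shown = 0;
        for (int f = 0; f < 8; ++f) {
            /* last refresh for which frame f is the newest frame available */
            int until = (int)((f + 1) * hz / fps);  /* ideal: 2.5 refreshes each */
            printf("frame %d held for %d refreshes\n", f, until - shown);
            shown = until;
        }
        return 0;
    }

The program prints an alternating 2, 3, 2, 3, ... pattern: no frame is shown for its intended 2.5 refreshes, which is the judder described above.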
Video games, which use a wide variety of rendering engines, tend to benefit visually from vertical synchronization since a rendering engine is normally expected to build each frame in real-time, based on whatever the engine's variables specify at the moment a frame is requested. However, because vertical synchronization causes input lag, it interferes with the interactive nature of games,[7] and particularly interferes with games that require precise timing or fast reaction times.
Lastly, benchmarking a video card or rendering engine generally implies that the hardware and software render the display as fast as possible, without regard to monitor capabilities or resultant video tearing. Otherwise, the monitor and video card throttle the benchmarking program, causing invalid results.
Some graphics systems let the software time its memory accesses so that they stay at a fixed point relative to the display hardware's refresh cycle, a technique known as raster interrupt or racing the beam. In that case, the software writes to the areas of the display that have just been updated, staying just behind the monitor's active refresh point. That allows for copy routines or rendering engines with less predictable throughput, as long as the rendering engine can "catch up" with the monitor's active refresh point whenever it falls behind.
Alternatively, the software can instead stay just ahead of the active refresh point. Depending on how far ahead one chooses to stay, that method may demand code that copies or renders the display at a fixed, constant speed. Too much latency causes the monitor to overtake the software on occasion, leading to rendering artifacts, tearing, etc.
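A sketch of the "stay behind the beam" variant, assuming the hardware exposes a readable current-scanline register (as the VIC-II raster register does on the Commodore 64; the register name here is hypothetical):

    #include <stdint.h>

    #define HEIGHT 480
    static volatile uint16_t RASTER_LINE;  /* hypothetical stand-in for the
                                              hardware's current-scanline register */
    static void draw_row(int y) { (void)y; /* application drawing */ }

    /* Redraw the frame top to bottom, touching only rows the display
       has already scanned during this refresh. */
    void race_behind_the_beam(void) {
        for (int y = 0; y < HEIGHT; ++y) {
            while (RASTER_LINE <= y) { /* spin until the beam passes row y */ }
            draw_row(y);
        }
    }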
Demo software on classic systems such as the Commodore 64 and ZX Spectrum frequently exploited these techniques, taking advantage of the predictable timing of their video systems to achieve effects that might otherwise be impossible.