Video card definition
In simple terms, a video card calculates (renders) an image and displays it on the monitor screen. In other words, the video adapter forms everything you see on your monitor. These are its main functions, but beyond them there is now a trend toward using its considerable computing power for tasks not directly related to forming and displaying an image.
All video cards can be divided into two large groups: integrated and discrete. Integrated (also called embedded or on-board) video cards are, as the name implies, part of the motherboard or the central processor. The following expressions are often used: embedded video, integrated graphics, integrated graphics controller, video adapter integrated into the chipset, and others. Integrated video reduces the cost and power consumption of a computer, but its performance is limited (it often has no video memory of its own and uses the computer's RAM), so it is used mainly in the lower and middle segments of the computer market.
A discrete graphics card is a separate expansion card installed in a dedicated slot on the motherboard. It carries everything it needs for fully independent operation, so it can deliver high performance, making it suitable for “heavy” 3D games and demanding graphics applications. Its main disadvantages are high cost and power consumption, which matters especially for laptops.
Discrete cards, in turn, fall into two classes: gaming and professional. The former are used mainly by ordinary users for games, while professional graphics cards are aimed at 3D modeling, CAD, and similar “heavy” graphics applications, where they can give a significant performance boost. Accordingly, the cost of high-performance models can be sky-high.
- Features of modern graphics cards
- Video Card Modes
- Color depth and resolution
- Hardware acceleration of graphic functions
- 3D pipeline
- Graphics APIs
- Characteristics of modern graphics cards
- Multi-monitor systems
- SLI and CrossFire Technologies
- Graphics card interfaces
- Graphics and GPU manufacturers
Features of modern graphics cards
A modern video card (video adapter) is, in fact, a second independent computer inside the personal computer. When a person plays a favorite 3D game or RPG, the graphics processor actually does most of the work, and the central processor recedes into the background. Thus a multiprocessor architecture is implemented, which, at its root, is a departure from the original PC ideology, in which the central processor does everything. But this is the result of people demanding from the computer not just the ability to calculate, but a communication environment where you can comfortably work and relax.
Below are two examples of what a modern video card looks like. The first, the ATI Radeon HD 5670, is an inexpensive model for home and office computers; the second, the ATI Radeon HD 5970, is a sophisticated and extremely powerful card for a gaming computer. Both video cards use the PCI Express interface and are equipped with powerful fans for cooling the chips, and the second one is quite large.
Despite the difference in size between these video cards, if you work only with office software packages there is practically no difference between them in speed. But everything changes when a three-dimensional game or a graphics-modeling program is launched. Here, quite apart from speed, it is visible to the naked eye that a more powerful video adapter (especially compared with older video cards) creates such a realistic image on the monitor screen that it is hard to believe the people and objects are mathematically modeled rather than copied from photographs.
True, if you experiment with various video cards and look closely at the fine details of the image, you can easily see that in one case the picture is clean, while in another there are jagged lines, or the flat surface of a balloon is covered with some kind of stains. Sometimes in a game, details of the playing field disappear, or a weapon looks completely different. The reason for these inconsistencies is that the video card, being a peripheral device, must fully conform to the motherboard and monitor only at the interface level; how the three-dimensional image is simulated inside it remains the prerogative of the developers. A more powerful GPU – faster and with a wider range of functions – will create a more realistic image, and it is precisely here that GPU developers compete.
Video Card Modes
Odd as it may sound today, the main video mode for personal computers is text mode. In this mode, graphic elements – lines and rectangles – are created using pseudographic symbols, and only on the instructions of the operating system does the video card switch to graphics mode. This is clearly visible when, after power-on, the computer runs its BIOS routines: during initial boot, information is displayed on the screen in text mode with a resolution of 720 × 400 (line frequency 31.5 kHz, frame rate 70 Hz). Only occasionally, during testing of the video adapter itself, does it switch to graphics mode with a resolution of 640 × 480 (line frequency 31.5 kHz, frame rate 60 Hz). Note that users encounter the text mode of the video subsystem mainly in MS-DOS mode or, for example, in the BIOS setup utility.
The existence of two different principles for constructing images on a monitor screen arose historically. Text mode came to the IBM PC from earlier computers, on which graphics mode was at the time a rare feature that required an unusually large amount of resources to support. To draw on the screen or on a printing device, various tricks were used, for example, building an image from the available set of letters, digits, and punctuation marks (the term “computer graphics tools” was actively used then). At the dawn of the computer era, text mode had the advantage that only about 4 KB of RAM was needed to store the screen image (80 characters per line and 25 lines). Each character required only 2 bytes of video memory: the 1st byte holds the character code, and the 2nd byte holds attributes such as brightness and color.
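The text-mode memory arithmetic above is easy to verify; a small Python snippet (the language choice here is just for illustration):

```python
# Text mode: 80 columns x 25 rows, 2 bytes per character cell
# (1st byte: character code; 2nd byte: attributes such as color and brightness)
columns = 80
rows = 25
bytes_per_cell = 2

buffer_size = columns * rows * bytes_per_cell
print(buffer_size)  # 4000 bytes, i.e. roughly the "4 KB" quoted above
```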
After memory chips became cheaper and processor performance grew, text mode lost its popularity among users, who now prefer to work in a graphical shell such as the Windows operating system. But in this case the computer has to remember every point on the screen: one byte controls not a group of points, as in text mode, but only one, and to display a better image, 2, 3, or 4 bytes must be allocated to store the color and brightness of each point. And since users, after a brief delight with their newest computer, very soon again become dissatisfied with its ability to display graphic information, the demands on the video subsystem keep growing.
At first, users had to work with black-and-white monitors; then color monitors appeared, with rather modest characteristics. After that, the number of points displayed on the screen and the number of colors the monitor could show increased step by step. Each standard was characterized by its resolution and color depth (at the same time the frame and line scan rates of the monitor changed, as did the image synchronization method). So that the video adapter and monitor would work correctly under any standard, mode numbers were introduced that uniquely characterize resolution, color depth, sweep frequency, and the operating mode – text or graphics (the mode number is used by programmers to work with video adapters). The old standards MDA (Monochrome Display Adapter), CGA (Color Graphics Adapter), HGC (Hercules Graphics Card), and EGA (Enhanced Graphics Adapter) use video card operating modes from 0 to 13h. The VGA (Video Graphics Array) standard uses modes from 18h to 27h. For modern operating modes, according to the VESA (Video Electronics Standards Association) standard, numbers from 101h to 11Ah are defined, and the same applies to SVGA modes.
A few words about the naming of the modes. After VGA comes SVGA; more precisely, all resolutions above 640 × 480 with 16 colors are attributed to SVGA (Super VGA). In principle, each combination of resolution and number of colors has its own designation, but these abbreviations are not widely used, because manufacturers name their new products mainly for advertising appeal. Attempts are periodically made to introduce a unified classification, but so far no international organization has achieved particular success (the issue is complicated by the need to take into account the display modes of a wide variety of devices, for example, cell phone displays). For a number of applications, such as LCD displays and projectors, the following classification is popular:
- VGA – 640 × 480
- SVGA – 800 × 600
- XGA – 1024 × 768
- SXGA – 1280 × 1024
Color depth and resolution
Currently, the most popular modes for desktop computers are 1024 × 768 and 1280 × 1024, and for laptops and netbooks, 1280 × 800. Moreover, the highest color depth, 32 bits per point, is usually used. Widescreen monitors and HD panels for TVs are also becoming popular. Counting the number of pixels (dots) for the simplest options: at a resolution of 800 × 600 the image on the monitor screen consists of 480,000 pixels, and at 1024 × 768 of 786,432. The first computer monitors supported only two brightness values for a point on the screen – the point is lit or the point is off – so only one bit was required to store information about it. Later, information about the brightness of the point (halftones) was added, and with the appearance of color, information about the three primary colors.
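The pixel counts quoted above follow directly from multiplying the resolution dimensions; a quick check in Python:

```python
def pixel_count(width, height):
    """Total number of pixels in a frame at the given resolution."""
    return width * height

print(pixel_count(800, 600))   # 480000
print(pixel_count(1024, 768))  # 786432
```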
In a modern monitor, a strictly defined color is specified for each image point, obtained as a mixture of the three primary colors – red, green, and blue (RGB). The total number of shades can reach millions, although 16 or 256 colors are used in the simplest modes. The minimum amount of required video memory is determined by the supported resolution (number of lines times the number of dots per line) and the color depth (number of bytes needed to store information about each point). Accordingly, the formula linking the amount of video memory with the resolution and the number of reproduced colors looks like this:
Video memory required = (number of dots per line) × (number of lines) × (number of bytes per point)
The first two values are determined by the desired mode, and the number of bits (bytes) per point depends on the number of colors:
- 16 colors – 4 bits per point;
- 256 colors – 8 bits (1 byte);
- 65,536 colors (High Color) – 16 bits (2 bytes);
- 16.7 million colors (True Color) – 24 bits (3 bytes), or 32 bits (4 bytes) when an alpha channel or padding byte is stored.
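The formula above translates directly into code; this Python sketch computes the minimum frame-buffer size for one of the modes discussed:

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Minimum video memory for one frame:
    (dots per line) x (number of lines) x (bytes per point)."""
    return width * height * (bits_per_pixel // 8)

# 1024 x 768 at 32-bit color depth:
size = framebuffer_bytes(1024, 768, 32)
print(size)                  # 3145728 bytes
print(size / (1024 * 1024))  # 3.0 MB
```

Note how small this is next to the hundreds of megabytes on a modern card: the frame buffer itself accounts for only a few megabytes, and the rest goes to 3D work.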
In practice, for office applications and watching videos, 8 MB of video memory is enough at a resolution of 800 × 600, or 16 MB at 1024 × 768. All the memory beyond this that today's video adapters carry is spent on other needs, in particular supporting the on-screen graphics of the Windows operating system (especially in Windows Vista and Windows 7). The use of 128, 256, and 512 MB of video memory is driven, first of all, by the interests of “gamers”, for whom, frankly, even 512 MB is not much. It should be said that this rapid growth of video memory is not connected with comparable progress in screen resolution: the ceiling for traditional video display systems has almost been reached. The main reason for the ever-increasing memory of the video adapter is that it now carries a video processor (GPU, Graphics Processing Unit) that can, under the control commands of the central processor, independently build images of various objects, including three-dimensional (3D) ones, and this requires an unusually large amount of resources for storing intermediate results of calculations and the texture samples that fill the surfaces of the modeled figures.
ATTENTION! Even for office applications today, if the Windows operating system uses the DirectX 9 or DirectX 10 interface, the video card must have at least 128 MB of memory. For more advanced users, 256-512 MB is desirable, and gamers had better aim for the 1 GB mark.
Hardware acceleration of graphic functions
The standard VGA graphics card of a personal computer, like its predecessors MDA, CGA, and EGA, is in fact a set of hard-wired logic and video memory. Everything the central processor writes into the video memory is converted, according to strictly defined algorithms, into an analog video signal that is fed to the monitor. Thus the central processor must calculate the parameters of every point currently displayed on the screen and load all the data into the video memory. Any change on the screen, even a mouse trail, is the work of the CPU (if hardware acceleration is not used). Accordingly, the higher the resolution and the number of colors, the more time the processor spends calculating all the points of the generated raster; in three-dimensional games, for example, such a scheme would leave the CPU no time for anything else.
Since the personal computer over time became inseparable from the Windows graphical interface (and the DirectX API) and from various two- and three-dimensional games, hardware developers took a number of steps to improve the standard video card and free the central processor from unnecessary work on drawing elementary images. Such devices are called graphics accelerators. Initially, hardware accelerators were made as separate boards, but these devices appeared too late and could not compete with modern solutions.
As semiconductor technology improved, it became possible to place all the elements of a hardware accelerator on the video card itself. Today the chip on the video card – the video or graphics processing unit (GPU, Graphics Processing Unit) – independently calculates the new parameters of the points on the screen on commands from the central processor: for example, move a Windows window to another place on the screen, or draw and fill a circle or rectangle. A little terminology:
- 2D graphics is two-dimensional graphics, which draws within a single plane. The user interface of the Windows operating system is a prime example of two-dimensional graphics;
- 3D graphics is three-dimensional graphics, which creates a visual display of a three-dimensional object on the plane of the screen. In this case, the video processor creates (mathematically calculates) the three-dimensional object in the video memory.
When describing methods of constructing two- and three-dimensional images, special terms are used, often direct loan translations of the corresponding English terms. For example, rendering refers to the process of creating an image on the screen using a mathematical model of the object and formulas that add color and shadow. Rasterization refers to the process of dividing an object into pixels. The frequently used term texture refers to a two-dimensional image of a surface, such as paper or metal, stored in memory in one of the standard pixel formats. From the point of view of circuitry, graphics accelerators for two-dimensional graphics are simple controllers.
When working with three-dimensional graphics, the same principles were used at first. But the demand for better image quality led the simple controller on the video card to gradually turn into a powerful specialized processor with its own instruction set. Since calculating three-dimensional images means a great deal of floating-point mathematics, the most advanced video processors acquired a mathematical coprocessor. Later the number of specialized coprocessors grew rapidly and specialization appeared: one group of coprocessors calculates the coordinates of the vertices of the figures, while another, for example, checks the visibility of points for the two-dimensional projection. As a result, a modern video processor can have several hundred coprocessors, and the GPU architecture itself has become very complex and unlike traditional central-processor (general-purpose) designs. An example is the pipeline model of the GeForce 8800 GPU, which has a relatively simple and understandable architecture; the small boxes on the block diagram, grouped in eights and labeled SP (Streaming Processor), are the specialized coprocessors.
3D pipeline
Since modern video cards are, above all, rich facilities for modeling realistic images of objects, when buying a new video card you should understand what developers mean by a given term. Each developer of video card chips has its own proprietary technology for modeling objects, and, unlike simple calculations, the methods of constructing objects differ slightly from card to card. Therefore, the basics of the 3D pipeline – the process of calculating a three-dimensional image (more precisely, its two-dimensional projection) – are explained below. The synthesis of a three-dimensional object involves several main stages (their number depends on the video processor used by the video card):
- Construction of a geometric model – at this stage, the coordinates of the control points and the equations of the lines connecting them are set, which leads to the creation of a wireframe model of the object (wireframe);
- Dividing the surface of an object into the simplest elementary elements – working with a complex object is very difficult, so curved surfaces are turned into a set of rectangles or triangles, creating a faceted object. The division process is called tessellation;
- Transformation – simple objects usually need to be changed (transformed) in some way to obtain a more natural object or to imitate its movement in space. For this, the coordinates of the vertices of the object's faces are recalculated using matrix algebra and geometric transformations. Modern graphics cards use the geometric coprocessor intensively for this; in older ones the central processor had to do it;
- Calculation of illumination and shading – for the object to be visible on the screen, the lighting and shading of each elementary rectangle or triangle must be calculated. Moreover, the actual distribution of illumination must be simulated, that is, the brightness jumps between rectangles or triangles must be hidden. Various interpolation methods are used for this, for example Gouraud shading or Phong shading;
- Projection – the three-dimensional object is converted into a two-dimensional one, but the distances from the vertices of the faces to the surface of the screen onto which the object is projected are remembered (the Z coordinate, stored in the Z-buffer);
- Processing the coordinates of the vertices – at the modeling stages, all vertex coordinates are obtained as floating-point numbers, but since only integers can be entered into video memory, a conversion step is necessary. At the same stage the vertices may be sorted in order to discard invisible faces. The calculations use subpixel correction, in which each pixel is represented as a matrix of subpixels, for example 3 × 3 or 4 × 4, over which the calculations are performed (i.e., one point is converted into 9, 16, or more subpixels);
- Removing hidden surfaces – all invisible surfaces are removed from the two-dimensional projection of the three-dimensional object. This is usually carried out in several passes at different stages of the 3D pipeline;
- Texture mapping – because the capabilities of the video card's processor are not endless, the surface of the object is modeled with a limited number of rectangles or triangles, so to create a realistic image a texture imitating a real surface is applied to each elementary surface. Textures are stored in memory as bitmap images; the smallest element of such a bitmap is called a texel (texture element). The texture-mapping stage is the most time-consuming and complicated: many problems arise in joining the edges of the textures of adjacent planes, and when an image is scaled there is the problem of matching the resolution of the texture to the resolution of the monitor, which is why techniques such as mip-mapping and texture filtering are applied;
- Creating transparency and translucency effects – at this stage the color of the pixels is corrected (alpha blending, fogging) taking into account the transparency of the simulated objects and the properties of the surrounding environment;
- Correction of defects – simulated lines and object borders that are neither vertical nor horizontal look jagged on the screen, so an image correction called anti-aliasing is carried out;
- Interpolation of missing colors – if you used a different number of colors when modeling objects than in the current mode of the video card, you need to calculate the missing colors or remove redundant ones. This process is called dithering.
After calculating all the points in the frame, information about each pixel is moved to video memory.
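The transformation and projection stages listed above can be sketched in miniature. The following Python fragment is only an illustration: the focal length, screen size, and vertex values are invented, and a real GPU does this with matrix hardware rather than scalar code:

```python
def project_vertex(x, y, z, focal=1.0, width=1024, height=768):
    """Perspective-project a 3D vertex onto the screen plane.

    Returns integer pixel coordinates (the conversion-to-integers step
    described above) together with z, which would go into the Z-buffer."""
    # Perspective divide: points farther from the camera (larger z)
    # move toward the center of the screen
    sx = (x * focal) / z
    sy = (y * focal) / z
    # Map from normalized coordinates (-1..1) to pixel coordinates
    px = int((sx + 1) * width / 2)
    py = int((1 - sy) * height / 2)
    return px, py, z

# A hypothetical vertex half a unit up and right, two units from the camera
print(project_vertex(0.5, 0.5, 2.0))  # (640, 288, 2.0)
```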
There are a great many terms and concepts in 3D graphics, so only the most important points of constructing three-dimensional objects were given above. Despite all the efforts of the developers, modeling photorealistic images is still a difficult problem; it is very hard, for example, to model such seemingly simple elements as hair. Progress in the field can be seen by comparing old and new movies that used computer graphics.
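Gouraud shading, mentioned in the illumination stage above, simply interpolates vertex intensities linearly across a face. A toy one-dimensional Python sketch (the function name and values are invented for illustration):

```python
def gouraud_interpolate(i0, i1, steps):
    """Linearly interpolate illumination between two vertex intensities,
    hiding the brightness jump between adjacent faces (Gouraud shading)."""
    return [round(i0 + (i1 - i0) * t / (steps - 1), 3) for t in range(steps)]

# Intensity ramp across a 5-pixel edge, from dark (0.2) to bright (1.0)
print(gouraud_interpolate(0.2, 1.0, 5))  # [0.2, 0.4, 0.6, 0.8, 1.0]
```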
Graphics APIs
Before the release of the Windows Vista operating system and the introduction of version 10 of the DirectX application programming interface (API), there was little need to talk in detail about 3D graphics software; the topic interested mainly gamers who wanted the most realistic picture of the virtual world. Now everything has changed: Windows Vista and Windows 7 demand so many resources from every node of the computer that it is worth briefly understanding what is needed and why. As already mentioned, hardware by itself can do nothing, so an intermediary program is always needed that knows how to handle a specific device model. For video cards, this intermediary turns the programmer's instructions into an image on the screen, since writing programs in machine code for each GPU would be prohibitively difficult.
At first, each manufacturer of video cards offered its own instruction set (its own way of accessing the video processor), which proved an unsuccessful and expensive approach. Later, two standards emerged that now determine both the development of new hardware and the writing of games (and other graphics-heavy programs). Today there are two generally accepted graphics APIs: OpenGL, the international open standard, where GL stands for Graphics Library, and Microsoft DirectX, tied to the Windows operating system. In addition to graphics functions, DirectX also describes sound and input-output interfaces. The OpenGL standard is more conservative but more reliable, being the collective work of many organizations; Microsoft DirectX is updated very quickly and, as it turned out, has become the de facto standard for games under Windows.
As GPUs improved and the amount of memory on video cards grew, the APIs developed as well. Currently, to obtain more realistic graphics, DirectX 10 is being introduced, which allows a fairly convincing picture of the virtual world to be created but demands a great deal of resources. We will not dwell on the novelties of image-construction theory; it is better to look at the resulting images on the websites of NVIDIA and AMD, where simple examples show how the same image changes when one or another standard or technology is applied.
Characteristics of modern graphics cards
The capabilities of modern graphics cards, once we move beyond text modes, are amazing. Today a computer can mathematically create a model of a person that is difficult to distinguish from a real photograph. But nothing comes for free, so the processor and video memory on board the video card must have very serious technical specifications. The required amount of video memory was discussed earlier; besides its size, there are two more parameters that determine the capabilities of a video card, affecting the quality of image construction and the image refresh rate.
The first and most important parameter is the video memory bus width: how many bits are transferred over the bus between the video memory and the video processor in one clock cycle. (This is not about the RAM on the system board; such solutions are inefficient and are now practically never used with discrete video cards.) While the first video processors made do with 8, 16, and 32 bits, to meet the modern level the video memory bus width should be approximately the following:
- budget and office options – 64 or 128 bits;
- mid-range gaming – 128 or 256 bits;
- High-End category – 512 bits and higher.
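Bus width matters because it multiplies directly into memory bandwidth. A rough Python estimate (the 2000 MHz effective clock below is only an illustrative assumption, not a figure from the text):

```python
def memory_bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Peak video-memory bandwidth in GB/s:
    (bus width in bytes) x (transfers per second)."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

# Hypothetical mid-range card: 256-bit bus, 2000 MHz effective GDDR clock
print(memory_bandwidth_gbs(256, 2000))  # 64.0 GB/s
```

This is also why bus width must be judged together with memory clock: halving the clock wipes out the gain from doubling the bus.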
But you should not look only at this parameter when buying a new video card: all the characteristics of a video card must match each other. A good graphics card needs a high-performance graphics processor, a sufficient amount of video memory, a wide video memory bus, and high clock frequencies for both the video processor and the video memory. Choosing by a single parameter is a big mistake; in particular, there should be an optimal combination of video processor and video memory frequencies and of video memory bus width. As a measure of this combination, FPS – the number of frames per second, measured in a game such as Quake 4 – is most often used. As a rule, all large video card manufacturers offer customers optimal configurations in the corresponding price ranges, and it is from these balanced models that you should choose.
It is harder when a video card is released by a little-known company and some single super-parameter is advertised without regard to all the others. It is possible that such a video card really deserves attention, but most often there will be little sense in it, or problems will surely appear later. Such “cheap” products most often use defective memory modules, overclocked video processors and memory chips, and various tricks for moving a video processor from one performance category to another.
NOTE Modern high-performance graphics cards use dynamic memory standards GDDR3, GDDR4 and GDDR5. Moreover, the GDDR4 memory, having just appeared, is already giving way to GDDR5. Note that all these standards are not related to the computer’s RAM. In particular, for video memory, single errors in data storage are allowed, because a random malfunction in the video memory affects only one image frame, which is not critical in most cases.
Multi-monitor systems
Until the PCI bus appeared, a personal computer could fully work with only one video card, displaying the image on one monitor, while a second video card on the ISA bus could only work in parallel in text mode. The PCI bus made it possible to install an arbitrary number of video cards in the system. With this in mind, starting with Windows 98, Microsoft began to develop software support for multi-monitor systems. By installing two video cards, or one video card with two video outputs, you can organize more convenient work with a number of applications; most often this is wanted in graphics editors. In that case, say, the image is edited on the primary monitor (the one that shows BIOS information when the computer starts), while the secondary monitor holds the palette windows used in the work. You can also use the second monitor simply to duplicate the image. Up to 9 video cards can be installed in the system, which allows, for example, stretching one Windows window across all monitors to create a video wall; monitors may be grouped in any way. An example of using multi-monitor systems during a game:
As a rule, all modern PCI Express video cards (not the lowest price category) allow you to connect up to two monitors. To connect more monitors, it is desirable to have a motherboard with two PCI Express x16 slots. In cases where it is not possible to install a second video card inside the computer, you can use special video cards to build video walls. Such a video card can be made either as a card for PCI Express or PCI slots, or as an external unit. True, the price of such solutions is high.
SLI and CrossFire Technologies
A certain mirror image of multi-monitor systems is the case when two video cards drive one monitor. NVIDIA proposed SLI (Scalable Link Interface) technology, in which two video processors (video cards) share the construction of the picture on the monitor screen between themselves, and AMD offers the similar CrossFire technology. To use SLI, two NVIDIA-based video cards are installed on a motherboard with two PCI Express slots. Connected via the NVIDIA SLI bridge, both cards work with one monitor, providing increased performance by redistributing the computing load between the two video processors. An example of the use of this technology is shown below.
Similar to SLI technology, AMD proposed CrossFire technology, which allows the use of two or more video cards (video processors) for processing a single image. For example, below is a flow chart of CrossFire technology.
Please note that using CrossFire or SLI implies increased requirements for the computer's power supply, cooling, and memory modules; in addition, the video cards must be certified to operate in this mode. Not all graphics cards support NVIDIA SLI or AMD CrossFire, so check certification before purchasing. When assembling a system with two video cards on your own, remember that you must use a certified system board and two certified video cards, as well as a high-power supply unit. To get a real performance gain, you also need to use high-performance memory modules.
NOTE. Using NVIDIA SLI and AMD CrossFire technologies is an expensive solution that is needed only for a limited circle of users. Accordingly, if you want to use this or that technology, it is necessary to study the technical material offered on corporate websites and only then buy the necessary hardware.
To watch TV programs on a personal computer, video capture cards and TV tuner cards installed in a PCI slot are traditionally used. Video capture cards are designed to digitize the analog video signal coming from a VCR or camcorder and therefore cannot work with a broadcast signal. TV tuner cards receive broadcast and cable television signals; for this, a high-frequency receiver is installed on the card. Video capture cards provide the best image quality, but their price is usually quite high. TV tuners are more affordable, but the quality of their digitization cannot always satisfy the tastes of movie lovers.
Below is an AverTV TV tuner designed for installation in a PCI slot. There are many versions of AverTV tuners, and they are among the most affordable for most users (from $60 to $100). These are fairly typical representatives of devices designed for watching television broadcasts and recordings from consumer-grade analog video cameras. When buying a TV tuner, pay attention to which television standards it can process.
Below is a TV tuner made as an external device connected via the USB 2.0 interface. Compared to PCI cards, external TV tuners have no particular advantages or drawbacks.
There is also a better way to input and output a television signal on a computer: using a modern video card whose powerful video processor can additionally process a TV signal. When considering the television capabilities of video cards, bear in mind that two functions allow the computer to interface with a TV:
- the first is the formation of a television signal for output to a household TV. The problem is rather complex, since no modern monitor video mode is compatible with television standards, but the power of today's video processors is quite sufficient to convert a computer image to a television one. However, not all video cards can boast this capability;
- the second, more interesting function is the digitization of a television image. Video card manufacturers, not wanting to enlarge their printed circuit boards or deal with the rather complicated problems posed by numerous television standards, often prefer to accept an already partially processed television signal. In this case, a component S-VHS (S-Video) signal, with luminance and chrominance carried separately, is fed to the video card through a 4-pin DIN connector. A number of VCRs and video cameras have such outputs. Some boards, however, such as ATI's (AMD's) All-in-Wonder boards with Radeon chipsets, carry a full-fledged high-frequency tuner unit.
When buying a video card with television features, be sure to check which color television standards it can handle, in particular whether it can process a SECAM signal. Also make sure the television software is designed for the operating system installed on your computer; otherwise you will run into problems configuring the video card to receive a television signal.
NOTE. Currently, AMD Avivo and NVIDIA PureVideo HD technologies can be used for hardware acceleration of video streams; they provide hardware decoding of HD video (1920 × 1080 resolution at 30 frames per second).
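A quick calculation shows why hardware-assisted decoding matters at these resolutions: even before compression is considered, a 1080p30 stream represents an enormous amount of pixel data per second.

```python
# Raw (uncompressed) data rate of 1080p30 video:
# 1920 x 1080 pixels, 24 bits per pixel, 30 frames per second.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 30
raw_bps = width * height * bits_per_pixel * fps
print(raw_bps / 1e6)  # 1492.992 -> almost 1.5 Gbit/s uncompressed
```

Decompressing a codec stream back into this pixel torrent in real time is exactly the workload that Avivo and PureVideo HD offload from the CPU to the video processor.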
NOTE. Today, video cards and monitors must carry the HD Ready certification, which means that the HDCP (High-bandwidth Digital Content Protection) protocol is supported. So far, however, this applies only to films distributed on HD DVD and Blu-ray media and transmitted via DVI and HDMI.
Graphics card interfaces
Modern video cards are equipped with several ports so that more than one monitor can be connected. Monitors, in turn, use different types of connectors, which the user will find useful to know.
VGA (Video Graphics Array) is a fairly old 15-pin blue connector that carries an analog signal. Its peculiarity was that various factors could affect the image: the length of the cable (up to about 5 meters) or the individual characteristics of the video card. It used to be one of the main connectors, but with the advent of flat-panel monitors it began to lose ground, because screen resolutions grew beyond what VGA could comfortably handle. It is still in use to this day.
S-Video is also an analog connector, often found on TVs and rarely on video cards. Its quality is worse than that of VGA, but its cable can reach 20 meters while still maintaining a good picture. Information is transmitted over separate luminance and chrominance channels.
DVI surpassed the well-known VGA by gaining the ability to transmit a digital signal. This connector is more familiar to the modern world because it made it possible to connect high-resolution monitors, which was impossible before. Its cable can reach 10 meters without affecting the quality of the displayed image. It quickly gained popularity with other equipment as well, such as projectors. There are three types: digital-only DVI-D, the now-rare analog DVI-A, and DVI-I, which combines the two. With a special adapter, it can also be connected to a monitor that has only a VGA connector.
HDMI has several advantages over DVI. Its main feature is that, in addition to the video channel, it also carries audio. Thanks to this, it gained great popularity and broad support among well-known companies. Other advantages include its compactness and the absence of the screw fasteners found on DVI. Besides video cards, it works perfectly with many other devices.
DisplayPort, in principle, is not far removed from HDMI: both can output a high-quality image to a large screen along with audio. However, DisplayPort has adapters for other popular connector types, and unlike HDMI it is royalty-free for manufacturers, which increases its popularity. Even so, it is still much less common among home users. The maximum cable length reaches 15 meters. Its bandwidth is higher than that of HDMI, although it varies depending on the version.
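The bandwidth differences between these interfaces matter because each display mode demands a fixed data rate. A simplified estimate (ignoring blanking intervals and link-encoding overhead, which add real-world overhead on top) multiplies resolution, refresh rate, and color depth:

```python
# Simplified estimate of the video bandwidth a display mode requires.
# Blanking intervals and link-encoding overhead are deliberately ignored,
# so real interfaces need somewhat more than these figures.
def required_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

modes = {"1080p60": (1920, 1080, 60), "4K60": (3840, 2160, 60)}
for name, (w, h, hz) in modes.items():
    print(name, round(required_gbps(w, h, hz), 2), "Gbit/s")
# 1080p60 -> ~2.99 Gbit/s; 4K60 -> ~11.94 Gbit/s
```

This makes it clear why higher resolutions and refresh rates push users toward the newer, higher-bandwidth versions of HDMI and DisplayPort.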
Graphics and GPU manufacturers
While personal computers were weak by today's standards and photo-realistic graphics were not expected from the video card, many companies produced video cards; in particular, one should recall the famous S3 cards found in many personal computers. But the growing complexity of graphics tasks, the need for a powerful graphics processor, and the requirement to develop original technologies for forming three-dimensional images reduced the number of developers and manufacturers to a handful, only slightly more than in the field of x86 processor development. In fact, it now makes sense to talk about only two companies that manufacture graphics processors:
- NVIDIA Corporation
- AMD (formerly ATI)
These two companies are so far ahead of their competitors that their position in the video chip market is almost the same as that of Intel and AMD in the processor market. The main battle over image output to the monitor screen therefore takes place between these two players. In particular, the generations and architectures of their graphics processors change so often that new, more revolutionary solutions can appear every six, or even three, months. The following companies are popular manufacturers of video cards based on NVIDIA and AMD chipsets, listed approximately in order of popularity: