What to choose: NVIDIA, Radeon or Intel?

There are two sides to the graphics card market ― green NVIDIA and red AMD/Radeon. Historically, NVIDIA has been the main player in the industry, driving the technology forward: hardware ray tracing, DLSS upscaling and Frame Gen frame interpolation are the latest examples that come to mind. The latest generations of the company's graphics cards have turned out to be quite expensive, but they remain the best option in terms of performance and stability. Designers, media content creators, game developers and other representatives of the creative industry appreciate NVIDIA products for a reason.

Radeon plays a rather interesting role. On the one hand, the company is always in the position of catching up. On the other hand, it acts as a Prometheus, bringing gamers free analogues of the competitor's technologies. In the last three years alone, the “reds” have introduced their answers to DLSS, Frame Gen and Anti-Lag, which do not require dedicated neural or tensor cores to work correctly. At the same time, prices for AMD graphics cards are often slightly lower than those of comparable NVIDIA models, although this depends heavily on the specific card.

In 2022, Intel tried to break this duopoly with its Arc line of graphics cards, but the debut cannot be called successful for a thousand different reasons. There is hope that over time the company's engineers will fix everything, but for now it's better to look towards NVIDIA and Radeon.

What exactly is a graphics card for?

GeForce RTX 3090: an ideal graphics card for both gaming and media content creation.

Graphics cards are divided into two types: professional and gaming. Professional graphics cards are optimized for complex graphics tasks such as rendering, precision modeling and working with large volumes of data. In most cases, they run on specialized drivers, and software like NVIDIA Studio is bundled with the card. Gaming graphics cards, in turn, are optimized for fast real-time rendering with high image quality and smooth animation.

However, after the high-profile debut of NVIDIA's RTX 3000 and 4000 series, the lines have blurred, and top-tier GPUs of the GeForce RTX 3080/3090/4080/4090 level are now actively used in studios around the world to develop games, create 3D animation, process video content, train neural networks and handle many other complex, resource-intensive tasks.

As for the simplest tasks like documents, e-mail and YouTube, a processor with integrated graphics is quite enough.

Which GPU generation should be chosen?

It's probably not a good idea to buy a six-year-old graphics card.

The latest generation of GPUs has made an impressive leap in performance, beating its predecessors several times over. Don't get us wrong, it is possible to get an adequate frame rate out of something like a GeForce GTX 1650 even in modern AAA projects, but it will not be the most pleasant experience. For example, in Baldur's Gate 3, the best game of 2023, this card can deliver the desired 50 – 60 FPS at 1080p, but the graphics settings will have to be lowered to the minimum, and the frame rate will still drop heavily during battles and in crowded NPC locations.

Considering that a graphics card is usually bought for 3 – 4 years of use, the question arises: is it worth saving money? No, because past generations of GPUs haven't really dropped in price. For example, the GeForce RTX 3070 Ti, which is still extremely popular, now costs about $500 against a launch price of $599. Minus $100 over two years is a questionable discount considering how quickly graphics cards become outdated.

In our opinion, such savings only make sense if every penny counts and you want to build an inexpensive gaming PC from affordable components for the price of a portable Steam Deck console. In this case, a combination of something like a Ryzen 5 5500, a cheap motherboard and a second-hand Radeon RX 5700 XT may look like a really interesting option, but it will be very limited in terms of upgrades, and many particularly demanding games like Alan Wake 2, Starfield and A Plague Tale: Requiem will struggle on it.

Ray tracing ― necessary or not?

Ray tracing significantly changes lighting in games.

Ray tracing in computer games is a rendering technique for creating more accurate and realistic shadows and lighting, including soft shadows that naturally change intensity and edges based on the distance and shape of objects blocking light. In theory, this technology makes the image much more beautiful through more realistic lighting and ubiquitous reflections in windows, mirrors, puddles, and passing cars.

The first graphics cards to support this technology were the NVIDIA GeForce RTX 2000 series. Despite the use of dedicated RT cores for lighting calculations, the technology turned out to be very resource-intensive, increasing the load on the GPU by one and a half to two times. At the same time, the results six years on look inconsistent, since it is not NVIDIA but the game developer who is responsible for implementing the technology. As a result, games like Minecraft, Quake II and Control become beautiful with ray tracing turned on, while Cyberpunk 2077 does not seem to get much better and starts to slow down a lot. It's no surprise that to this day there are videos on YouTube with titles in the style of "Ray Tracing Is Stupid".

But there is also good news. Let's talk about it in the next section.

DLSS is the coolest feature of modern graphics cards

DLSS 3.0 with frame generation is especially useful in games at high 2K/4K resolutions with ray tracing enabled.

Perhaps the coolest feature of modern graphics cards is NVIDIA's upscaling technology Deep Learning Super Sampling (or simply DLSS). This is a so-called supersampling algorithm which, using dedicated compute cores on the graphics card, upscales the image in the style of modern TV processors, increasing the smoothness of gameplay. At least that's how the first and second versions of DLSS worked. At the premiere of the GeForce RTX 4000 graphics cards, the “greens” presented the third version of DLSS with a revolutionary frame generation technology, which literally draws extra frames using AI. As tests have shown, this technology can increase overall FPS several times over (note: sometimes 2 – 4 times), but it does not work on the NVIDIA RTX 3000 and earlier series of graphics cards.
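To make the "renders at a lower resolution, then upscales" idea concrete, here is a minimal sketch of the arithmetic involved. The per-axis scale factors below are the commonly cited ones for the DLSS 2 quality presets; treat them as illustrative assumptions rather than official specifications.

```python
# Rough sketch of how DLSS-style upscaling trades render resolution for FPS.
# Scale factors are illustrative assumptions for the DLSS 2 quality presets.

DLSS_SCALE = {
    "Quality": 1 / 1.5,            # ~67% per axis
    "Balanced": 1 / 1.72,          # ~58% per axis
    "Performance": 1 / 2.0,        # 50% per axis
    "Ultra Performance": 1 / 3.0,  # ~33% per axis
}

def internal_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    """Resolution the GPU actually renders before the upscaler runs."""
    s = DLSS_SCALE[preset]
    return round(out_w * s), round(out_h * s)

for preset in DLSS_SCALE:
    w, h = internal_resolution(3840, 2160, preset)
    print(f"4K output, {preset}: renders at {w}x{h}")
```

Since GPU load scales roughly with pixel count, the Performance preset renders only a quarter of the pixels of native 4K, which is where the large FPS gains come from.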

Radeon lags slightly behind, but brings the technology to the masses. At first, its free analogue of DLSS, called FidelityFX Super Resolution (FSR), did not work very well, but with the second generation the situation changed dramatically, and it is now quite difficult to immediately see the difference between DLSS 2 and FSR 2. At the same time, FSR can run on almost any graphics card, including the old RX 570 and GeForce GTX 1060. FSR 3.0, introduced in 2023, also learned to generate frames, increasing FPS several times over, but for now the technology needs time to mature.

How much memory do modern games require?

The GeForce RTX 4090 carries a generous 24 GB of VRAM.

In recent years, the graphics card memory landscape has shifted dramatically. Previously, 4 GB was standard for mid-range cards, while 8 or 12 GB signified luxury. However, the arrival of current-generation consoles like the PlayStation 5 and Xbox Series X, with 16 GB of unified memory (roughly 12 GB of which is available to games), has reshaped priorities for game developers. With the widespread adoption of Unreal Engine 5, optimized for these consoles, it is becoming impractical for studios to target lower VRAM specs without sacrificing quality. Even for those not pursuing high resolutions or ray tracing, a card with 12 or preferably 16 GB of VRAM is now considered the most sensible investment for future-proofing. We recently published material on this topic.
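A quick back-of-the-envelope calculation shows why texture quality eats VRAM so fast. The figures below are illustrative assumptions: uncompressed RGBA textures at 4 bytes per pixel, with a full mipmap chain adding roughly one third on top (real games use block compression, so actual footprints are smaller, but the scaling is the same).

```python
# Back-of-the-envelope sketch of texture VRAM cost.
# Assumes uncompressed RGBA8 (4 bytes/pixel); a mipmap chain adds ~1/3.

def texture_vram_mb(width: int, height: int, bytes_per_pixel: int = 4,
                    mipmaps: bool = True) -> float:
    """Approximate VRAM footprint of one texture, in MiB."""
    base = width * height * bytes_per_pixel
    if mipmaps:
        base = base * 4 / 3  # geometric series 1 + 1/4 + 1/16 + ... ≈ 4/3
    return base / (1024 ** 2)

print(f"2K texture: {texture_vram_mb(2048, 2048):.1f} MB")
print(f"4K texture: {texture_vram_mb(4096, 4096):.1f} MB")
```

Each doubling of texture resolution quadruples the memory cost, which is why modern titles with 4K texture packs overflow 8 GB cards so easily.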

Dimensions and TDP

Before purchasing a graphics card, it is important to check its dimensions and the wattage of your power supply unit.

When choosing a graphics card, you should check it against the dimensions of your case; the maximum supported graphics card length is always indicated in the case specifications. If there are only a couple of centimeters to spare between the card and the case, the build process can become very complicated, and the graphics card itself will not be well ventilated.

The same advice applies to power consumption. Each model draws a certain amount of power, and to spare the user from calculating the required wattage for the entire system, most manufacturers indicate the recommended power supply rating in the specifications. We advise you to follow these recommendations, since the figure includes a small margin so that the power supply does not have to work at its limit.
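For the curious, the math behind those manufacturer recommendations can be sketched roughly as follows. The TDP figures and the ~30% margin below are illustrative assumptions for a mid-range build, not official formulas from any vendor.

```python
# Minimal sketch of the PSU headroom math manufacturers do for you.
# TDP figures and the 30% margin are illustrative assumptions.

def recommended_psu_watts(gpu_tdp: int, cpu_tdp: int,
                          other_watts: int = 75, headroom: float = 0.30) -> int:
    """Estimated system draw plus a safety margin, rounded up to a 50 W tier."""
    total = (gpu_tdp + cpu_tdp + other_watts) * (1 + headroom)
    return int(-(-total // 50) * 50)  # ceiling to the next 50 W step

# e.g. a 220 W GPU paired with a 105 W CPU:
print(recommended_psu_watts(220, 105))  # → 550
```

The rounding up to a standard wattage tier mirrors how real recommendations land on round numbers like 550 W or 750 W, and the margin keeps the PSU out of its least efficient, hottest operating range.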

Conclusion

Radeon RX 7800 XT turned out to be one of the most interesting graphics cards of 2023.

When choosing a graphics card for gaming or work, it is important to find a balance between its performance and the cost of the rest of the components. The laws of physics still apply: a chain is only as strong as its weakest link, so pairing a GeForce RTX 4090 with a six-core Core i5 makes about as much sense as the new seasons of The Walking Dead. In general, when choosing a graphics card, consider not only its price, year of release, and overall performance level, but also factors such as power consumption and compatibility with your case. You should also keep an eye on market conditions, because prices for older GPUs may drop once newer lines are announced.

For the most up-to-date information and advice, we recommend reading fresh reviews on specialized sites or in social media groups. For example, on our e-katalog we regularly publish materials with optimal builds for different budgets, as well as standalone reviews of the most successful models and TOP GPUs by category. The filter by price and popularity in the graphics cards section can be a very useful tool.