Whether you’re a regular computer user or an enthusiast, you’ve undoubtedly heard the terms RAM, DDR, or even GDDR. Everyone is familiar with DDR, but what about GDDR? Is there any real difference between the two? Stay tuned and find out in this DDR vs GDDR comparison!
What are the differences between DDR and GDDR?
The “G” in front of DDR was added to differentiate memory types specific to graphics applications, forming GDDR. As the name suggests, this distinctive chip is used in video cards, video game consoles, image rendering equipment, and more.
Video card with memory modules highlighted
GDDR is similar to the RAM sticks attached to the motherboard: both use the same storage architecture and implement the DDR (Double Data Rate) technique. The differences emerge when we look at the transfer rates they work with, which need to be much higher for video processing.
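The "double data rate" part means data is transferred on both the rising and the falling edge of the clock signal, so the effective transfer rate is twice the I/O clock. A minimal sketch in Python (the clock figure below is a typical illustrative value, not one from the article):

```python
def effective_transfer_rate(io_clock_mhz: float) -> float:
    """Double data rate: one transfer on each clock edge, so 2x the I/O clock.

    Returns the rate in MT/s (megatransfers per second)."""
    return io_clock_mhz * 2

# A DDR3-1600 module, for example, runs its I/O bus at 800 MHz:
print(effective_transfer_rate(800))  # 1600.0 MT/s
```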
As with main system memory, graphics memory has also undergone modifications and improvements over time. Each new standard is identified by the number that comes right after the name, ranging from GDDR2 to GDDR5; GDDR3 remains the most widely used among them.
Voltage and Bandwidth
Bandwidth here refers to the width of the path (also called the bus) between memory and its controller. The greater this width, the more data can enter and exit the chips simultaneously, greatly contributing to processing speed.
Since they arrived on the market almost 10 years ago, DDR memories have used a 64-bit bus, which can be expanded to an effective 128 bits if the motherboard supports dual-channel mode. In contrast, video cards use between four and eight channels, allowing a memory interface of up to 512 bits.
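Peak theoretical bandwidth follows directly from these figures: bytes moved per transfer (bus width ÷ 8) times transfers per second. A rough sketch, assuming typical numbers of the era (a dual-channel DDR3-1600 system versus a 512-bit GDDR5 interface at 4000 MT/s; both figures are illustrative, not from the article):

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: float) -> float:
    """Peak bandwidth in GB/s: bytes per transfer * transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mts * 1e6 / 1e9

print(peak_bandwidth_gbs(128, 1600))  # dual-channel DDR3-1600 -> 25.6 GB/s
print(peak_bandwidth_gbs(512, 4000))  # 512-bit GDDR5 at 4000 MT/s -> 256.0 GB/s
```

The ten-fold gap in these two results is the whole point of the wider graphics interface: the GPU can stream far more texture and frame data per second than system RAM could supply.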
Comparison between DDR and GDDR
Furthermore, the GDDR standard works at lower voltages than common DDR, allowing higher clock speeds while requiring simpler cooling solutions. The first version of GDDR ran at 2.5 V, reduced to 1.5 V in GDDR5.
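Lower voltage matters because the dynamic power of CMOS chips scales roughly with the square of the supply voltage (P ≈ C·V²·f). Taking the article's own figures, the drop from 2.5 V to 1.5 V alone would cut that component of power to roughly a third, all else being equal. A quick check (the quadratic scaling is a standard approximation, not a claim made in the article):

```python
v_gddr1, v_gddr5 = 2.5, 1.5  # supply voltages quoted above

# Dynamic power scales with V^2 at fixed capacitance and frequency.
relative_power = (v_gddr5 / v_gddr1) ** 2
print(relative_power)  # roughly a third of the original power (~0.36)
```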
On PCs, DDR3 is the highest standard currently on the market, but not for long. Semiconductor companies such as Samsung are already developing DDR4, which is expected to double the transfer speed of the previous standard and further reduce power consumption and heat output.
Why not use GDDR as system memory?
Despite being better in many respects, GDDR technology still isn't used for main system memory, for several reasons. The first is higher cost: even the most expensive video cards seldom carry more than 2 GB of GDDR memory, and it would be far too expensive on computers that already exceed 8 GB of RAM.
Another reason is the difficulty of adopting new standards. A video card is almost a separate computer: it has its own processor, memory, controllers, cooling system and even its own power delivery. And since a single manufacturer, such as XFX or MSI, puts all the components together, it's easier to redesign the next model to use the latest technologies.
Not so with computers. In addition to having many more subsystems that need to be compatible with each other, most PCs are not designed and built by a single manufacturer (when it isn't the user doing the assembly), so progress has to be slower so that everyone can keep up with new standards.