How did the first GPUs get support from CPUs?

I imagine CPUs have to have features that allow them to communicate and work with a GPU, and I can imagine this exists today. But in the early days of GPUs, how did companies get support from large CPU companies to have their devices be supported, and what features did CPU companies add to enable this?
You mean special support beyond just being devices on a bus like PCI? (Or even older, ISA or VLB.)
TL;DR: all the special CPU features that improve bandwidth for writing (and sometimes reading) video memory came after 3D graphics cards were commercially successful. They weren't necessary, just a performance boost.
Once GPUs were commercially successful and popular, and a necessary part of a gaming PC, it made obvious sense for CPU vendors to add features to make things better.
The same IO busses that let you plug in a sound card or network card already have the capabilities to access device memory and MMIO, and device IO ports, which is all that's necessary for video drivers to make a graphics card do things.
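To make that concrete, here's a minimal C sketch of what "just device memory on the bus" means for software: VRAM shows up at physical addresses, the driver maps it into the CPU's address space, and plain stores draw pixels. I'm using Linux's /dev/fb0 as a modern stand-in; the device path, resolution, and pixel format here are my assumptions, not anything specific to early cards.

```c
#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Map the framebuffer into our address space.  Early PC video cards
     * worked the same way in principle: VRAM appeared at fixed physical
     * addresses on the bus, and the CPU wrote it with ordinary stores. */
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    size_t size = 640 * 480 * 4;    /* assumed resolution and 32bpp format */
    uint32_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED)
        return 1;

    for (size_t i = 0; i < 640 * 480; i++)
        fb[i] = 0x00FF0000;         /* plain stores to device memory: red screen */

    munmap(fb, size);
    close(fd);
    return 0;
}
```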
Modern GPUs are often the highest-bandwidth devices in a system (especially in non-server systems), so they benefit from fast buses; hence AGP for a while, until PCI Express (PCIe) unified everything again.
Anyway, graphics cards could work on standard busses; it was only once 3D graphics became popular and commercially important (and fast enough for the PCI bus to be a bottleneck) that things needed to change. At that point, CPU / motherboard companies were fully aware that consumers cared about 3D games, and thus it made sense to develop a new bus specifically for graphics cards.
(Along with a GART, graphics address/aperture remapping table, an IOMMU that made it much easier / safer for drivers to let an AGP or PCIe video card read directly from system memory. Including I think with addresses under control of user-space, without letting user-space read arbitrary system memory, thanks to it being an IOMMU that only allows a certain address range.)
Before the GART was a thing, I assume drivers for PCI GPUs needed to have the host CPU initiate DMA to the device. Or if bus-master DMA by the GPU did happen, it could read any byte of physical memory in the system if it wanted, so drivers would have to be careful not to let programs pass arbitrary pointers.
Anyway, having a GART was new with AGP, which post-dates early 3D graphics cards like 3dfx's Voodoo and ATI 3D Rage. I don't know enough details to be sure I'm accurately describing the functionality a GART enables.
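With that caveat in mind, the usual one-sentence model is that a GART is a single-level page table that the chipset (not the CPU) walks: device accesses into a contiguous "aperture" range get redirected to scattered, pinned system-RAM pages. Here's a toy C model of just that translation step; the names, page size, and aperture size are entirely invented, and this is not real driver or hardware code.

```c
#include <stdint.h>

/* Invented constants: a 4 KiB page size and a 256 MiB aperture. */
#define GART_PAGE_SHIFT 12
#define GART_ENTRIES    (64 * 1024)

/* One entry per aperture page: the physical address of the pinned
 * system-RAM page the chipset should redirect that access to. */
static uint64_t gart_table[GART_ENTRIES];

/* What the chipset does (in hardware) for a device access into the
 * aperture: index the table with the page number, keep the offset. */
uint64_t gart_translate(uint64_t aperture_offset)
{
    uint64_t page_index = aperture_offset >> GART_PAGE_SHIFT;
    uint64_t page_base  = gart_table[page_index % GART_ENTRIES];
    return page_base | (aperture_offset & ((1u << GART_PAGE_SHIFT) - 1));
}
```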
So most of the support for GPUs was in terms of busses, and thus a chipset thing, not CPUs proper. (Back then, CPUs didn't have integrated memory controllers, instead just talking to the chipset northbridge over a frontside bus.)
Relevant CPU instructions include Intel's SSE and SSE2 instruction sets, whose streaming (NT = non-temporal) stores are good for writing large amounts of data that the CPU won't re-read any time soon, if at all.
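For example, a driver-style copy into WC-mapped video memory with SSE2 intrinsics might look like the sketch below. This isn't code from any real driver; it assumes 16-byte-aligned pointers and a size that's a multiple of 16.

```c
#include <emmintrin.h>  /* SSE2 */
#include <stddef.h>

/* Copy a buffer into (WC-mapped) video memory with non-temporal stores. */
void copy_to_vram_nt(void *dst, const void *src, size_t bytes)
{
    __m128i       *d = (__m128i *)dst;
    const __m128i *s = (const __m128i *)src;

    for (size_t i = 0; i < bytes / 16; i++) {
        __m128i v = _mm_load_si128(&s[i]); /* normal cached load from system RAM */
        _mm_stream_si128(&d[i], v);        /* NT store: write-combines, no RFO,
                                              doesn't pollute the cache */
    }
    _mm_sfence(); /* flush WC buffers before e.g. telling the GPU the data is ready */
}
```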
SSE4.1 in 2nd-gen Core 2 (2008-ish) added a streaming load instruction, movntdqa, which (still) only does anything special if used on memory regions marked in the CPU's page tables or MTRRs as WC (aka USWC: uncacheable, write-combining). Copying back from GPU memory to the host was the intended use-case. (See also: Non-temporal loads and the hardware prefetcher, do they work together?)

x86 CPUs introducing the MTRR (Memory Type Range Register) is another feature that improved CPU -> GPU write bandwidth. Again, this came after 3D graphics were commercially successful for gaming.
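The read-back direction looks similar; again, just a sketch with the same alignment and size assumptions, and it assumes the source really is mapped WC (on normal write-back memory, movntdqa behaves like a plain load).

```c
#include <smmintrin.h>  /* SSE4.1 */
#include <stddef.h>

/* Copy from WC-mapped GPU memory back to a normal buffer.  movntdqa pulls
 * a whole 64-byte line into a small streaming-load buffer, so the other
 * three 16-byte loads from that line are cheap. */
void copy_from_vram_nt(void *dst, const void *src, size_t bytes)
{
    __m128i *d = (__m128i *)dst;
    __m128i *s = (__m128i *)src;  /* older headers take a non-const pointer */

    for (size_t i = 0; i < bytes / 16; i++) {
        __m128i v = _mm_stream_load_si128(&s[i]); /* only special on WC memory */
        _mm_store_si128(&d[i], v);                /* normal store to system RAM */
    }
}
```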