As their name suggests, Graphics Processing Units (GPUs) were originally designed for graphics processing: computationally intensive workloads that are massively parallel, exhibit little branch divergence, and perform coalesced memory accesses. Built from numerous small cores, GPUs deliver parallel throughput far beyond that of a CPU for such workloads.
As technology progressed, it became clear that other areas of computing, such as mathematical libraries, exhibited workloads with the same characteristics. The idea of using GPUs for general-purpose computation, and no longer only for graphics processing, led NVIDIA to design CUDA. With GPUs now widely used in machine learning, artificial intelligence, and mathematical libraries, to name a few, CUDA can be seen as the first technology that made this possible.
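As a sketch of the kind of workload described above, the classic CUDA vector-addition kernel (the names here are illustrative, not from the original text) launches one thread per array element. Adjacent threads access adjacent memory locations, so accesses are coalesced, and every thread follows the same branch-free path, so there is no divergence:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per element: thread i reads a[i] and b[i] and writes c[i].
// Adjacent threads touch adjacent addresses (coalesced accesses), and all
// threads execute the same instructions (no branch divergence).
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the sketch short; explicit host/device copies
    // with cudaMemcpy are the more traditional pattern.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // cover all n elements
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The launch configuration (256 threads per block) is a conventional choice, not a requirement; the point is that a million additions map naturally onto thousands of lightweight GPU threads.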