How is optical fiber connectivity advancing the generative AI revolution?
By Mustafa Keskin
Published: June 6, 2024
What comes to mind when you think about Artificial Intelligence (AI)? For me, it all began last November, when an old friend posted on LinkedIn about how impressed they were with ChatGPT. After eventually signing up myself, what truly captivated me was its ability to provide human-like answers that were both contextually appropriate and technically sound.
Of course, its limitations were also clear: interacting with it felt almost like talking to an intelligent but slightly dull friend. It would respond with bullet-pointed answers, consistently remind me that it was, in fact, an AI model, and urge me to take its responses with a grain of salt. What I found most appealing was the way the answers unfolded on the screen, letter by letter and word by word, as if typed by a human on the other end of the connection.
Fast forward six months, and now when I type a question into ChatGPT, it responds so rapidly that it leaves me a bit dizzy. What happened during those six months? What changes did the creators of ChatGPT implement?
Most likely, OpenAI has scaled the inference capacity of its AI cluster to accommodate the demands of over 100 million subscribers. NVIDIA, a leading AI chip maker, is reported to have supplied around 20,000 graphics processing units (GPUs) to support the development of ChatGPT. Moreover, there are plans for significantly increased GPU usage, with speculation that its upcoming AI model may require as many as 10 million GPUs.