Sourcengine’s Tech Angle: The Rise of Generative AI


An artistic depiction of AI

Artificial intelligence (AI) is the word of 2023! While AI has been around for some time now, recent developments and new products within artificial intelligence have led to an explosion of interest. With the increasing demand for AI-capable chips to power large artificial intelligence models, original component manufacturers (OCMs) across the board are developing their own AI product lines.  

Now, artificial intelligence is a broad umbrella, a massive rabbit hole filled with many avenues of study. Its various subsets can be developed to perform different tasks, such as pattern recognition, one of the most common applications. The ongoing boom in AI popularity is no exception: it is driven by one relatively specific subset.  

Generative artificial intelligence is the driving force behind popular large language models and even recent art generation trends. One of the reasons generative AI has become so popular recently is the emergence of applications that aim to boost user productivity in various ways, spanning text, code, and even art.  

What Is Generative Artificial Intelligence?

So, what exactly is generative artificial intelligence? Generative AI aims to create new, synthetic content using artificial intelligence algorithms. Like other AI models, a generative AI application is trained on, and learns from, specific datasets. But unlike traditional AI models, which derive insights from data and present them to users to aid decision-making, generative AI can produce something entirely new based on what it has learned. Generative AI can "generate" several types of content, including text, images, video, code, data, and even 3D renderings.  

The “generative” part of this artificial intelligence model is the critical defining trait between generative AI and other AI applications.  

According to a 2023 report by Accenture, 97% of global executives within the study believe “that foundation models will enable connections across data types, revolutionizing where and how AI is used.” Generative AI will be a large part of this move within organizations and their corresponding workflows due to the capabilities it presents. Already, generative AI has proven itself to be an invaluable customer support tool for site chatbots.  

Accenture goes on to detail the impact generative AI will have in just a few years. “Generative AI will bring unprecedented speed and creativity to areas like design research and copy generation. It will take business process automation to a transformative new level, catalyzing a new era of efficiency in both the back and front offices. It will significantly boost productivity among software coders by automating code writing and rapidly converting one programming language to another. And in time, it will support enterprise governance and information security, protecting against fraud and improving regulatory compliance.”

Part of this improvement comes from the use of machine learning within most generative AI applications. Machine learning models use algorithms and statistical models to learn and adapt from data, improving a machine's or program's performance over time. For some generative AI programs, machine learning is one piece of a larger puzzle. In text-based content creation, such as chatbots, machine learning helps improve the generative AI model's performance over time: the more conversations a generative chatbot has with humans, the more human-like or accurate the content it generates becomes.  
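The learn-then-generate loop described above can be illustrated with a deliberately tiny sketch. Real generative models use large neural networks, not the simple Markov chain below, but the principle is the same: count patterns in a training dataset, then sample new sequences from those learned patterns. All names and the toy corpus here are illustrative assumptions.

```python
import random
from collections import defaultdict

def train(corpus, order=1):
    """Learn word-transition counts from a training text (the 'dataset')."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])  # record what follows each context
    return model

def generate(model, length=8, seed=0):
    """Sample a new word sequence from the learned transitions."""
    rng = random.Random(seed)
    key = rng.choice(list(model))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the chip powers the model and the model powers the chatbot"
model = train(corpus)
print(generate(model))  # emits a new sentence the corpus never contained verbatim
```

The output recombines patterns from the training text rather than copying it, which is the "generative" step in miniature; scaling this idea from word counts to billions of neural-network parameters is what separates this toy from a model like ChatGPT.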

These latest developments and growing interest in AI have resulted in international acclaim. According to statistics company Statista, the global AI market will grow to nearly $2 trillion by 2030, based on market data from March 2023. This prediction already rings true for chipmaker Nvidia, whose graphics processing units (GPUs), which power these generative AI applications, propelled the company into the $1 trillion club.


The Rise of Generative AI and Nvidia

OpenAI’s ChatGPT and DALL-E, a conversational chatbot and an AI image generator, are two of the most popular generative AI programs released within the last year. ChatGPT won the hearts of many thanks to its ability to instantaneously produce a wide variety of new content, including essays, emails, computer code, poems, and Excel formulas.  

Within a week of its launch in late November 2022, ChatGPT had already accumulated over one million users. As a result of its popularity boom, tech giants from Microsoft to Google raced to release their own generative AI bots, with varying success. Despite the hiccups, interest in the abilities of generative AI, and in solving its faults, has only continued to grow.  

Renewed, voracious interest in powering large language models like ChatGPT has sparked a revitalization of AI-capable product lines across the semiconductor industry.

Nvidia currently holds the crown as the king of artificial intelligence components. Jensen Huang, Nvidia’s CEO, attributes this recent success to the past 15 years of investment in artificial intelligence solutions. Nvidia’s flagship component lines, including the Nvidia H100, are leading a new age of powerful AI components. Everyone from OCMs to tech giants is searching high and low for a bundle of these coveted chips. ChatGPT alone requires nearly a dozen of Nvidia’s GPUs to operate due to its vast parameter requirements.

Within semiconductor manufacturing, Nvidia’s chips and AI inference platforms are transforming production line automation. Nvidia’s AI-enhanced method, called AutoDMP and powered by Nvidia’s DGX H100 GPUs, is said to optimize chip designs 30 times faster than other techniques. It also improves energy efficiency and decreases carbon emissions. Global foundry leader TSMC is said to be implementing AutoDMP in its ultra-fine fabrication processes to produce the latest advanced nodes, such as 3nm, 2nm, and, one day, 1nm.

At Computex Taipei, the annual Taipei International Information Technology Show, Nvidia and AI took center stage. With the continued interest in artificial intelligence, the electronic component industry is prioritizing new advances in artificial intelligence, including generative AI. The latest series of announcements on upcoming AI solutions at Computex came from chipmaking giants Qualcomm, NXP Semiconductors, and Texas Instruments.  

Even more recently, AMD announced its latest high-end GPU, called the MI300X, to challenge Nvidia’s H100 series and support large language models such as OpenAI’s ChatGPT. So far, the details around the MI300X have captured engineers’ and software developers’ interest. The MI300X boasts 192 GB of memory, far more than Nvidia’s 120 GB, along with a transistor count of 153 billion and a memory bandwidth of 5.2 terabytes per second.  

During its announcement, AMD’s chip ran a 40-billion-parameter model called Falcon. For comparison, the current ChatGPT model, GPT-3, has 175 billion parameters, making numerous GPUs a necessity. Like Nvidia, AMD plans to offer its Infinity Architecture, combining eight chips into one system, for AI applications. For developers, AMD stated the MI300X will come with its own software package, called ROCm, similar to Nvidia’s CUDA. This package allows developers to get up close and personal with the chip’s core hardware features.

With these growing endeavors in artificial intelligence, generative AI programs are set to benefit immensely from more advanced, AI-capable components. Nvidia currently dominates the market with its GPU line-up, but competition is heating up. That’s great news for AI developers, as further diversification will uncover new solutions and deliver more efficient components.

Now, all that’s left is to find the components to power your next generative AI project.  

Meet Your AI Needs with Sourcengine

As the leading e-commerce site for electronic components, Sourcengine hosts over 1 billion offers from 3,500+ suppliers globally. Nvidia, the current king of artificial intelligence, is a franchised partner of Sourcengine’s parent company, Sourceability. As a franchised partner, not only does Sourcengine offer Nvidia’s AI-capable products, but our engineers also work alongside Nvidia to deliver design solutions and support on how best to utilize Nvidia’s impressive GPUs.

Nvidia isn’t the only chip manufacturer on Sourcengine offering AI solutions, either. To find the components you need to power your latest artificial intelligence product, you can quickly upload your BOM through Sourcengine’s integrated BOM management tool, Quotengine. With Quotengine, users can quickly find offers for their needed components and filter them by specific needs. If you like the offers, you can add all of them, or a selection, to your cart for a streamlined, simple check-out process.  

If you can’t find what you need, no problem. Send our team an RFQ for your personalized quote on the necessary component stock. Ready to get started? Reach out to our team today!
