THE FUTURE IS HERE

Fine-Tuning Generative Models | Foundational LLMs for Generative AI

👉 Fundamental concepts and approaches for fine-tuning LLM foundation models of various parameter sizes (GPT 8 billion, GPT 43 billion, and GPT 530 billion). Alongside fine-tuning, the concept of p-tuning is also discussed. Fine-tuning LLMs economically requires combining model parallelism with data parallelism, so these concepts are covered as well. Finally, building an end-to-end pipeline with LLMs is addressed: integrating the model with an enterprise database through an informed network and preventing the LLM from hallucinating.
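
To make the p-tuning idea concrete, here is a minimal sketch in PyTorch (not code from the session): a small set of trainable "virtual token" embeddings is prepended to every input while all weights of the frozen base model stay untouched, which is what makes the approach economical. "gpt2" stands in for the much larger GPT models named above, and the class and parameter names are illustrative. Strictly, this is the prompt-tuning variant; full p-tuning additionally passes the virtual tokens through a small LSTM/MLP prompt encoder.

import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class PTunedLM(nn.Module):
    def __init__(self, base_model, num_virtual_tokens=20):
        super().__init__()
        self.lm = base_model
        for p in self.lm.parameters():       # freeze every base-model weight
            p.requires_grad = False
        hidden = self.lm.config.hidden_size  # embedding width of the LM
        # Trainable "soft prompt": one learned vector per virtual token.
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok_emb = self.lm.get_input_embeddings()(input_ids)
        batch = tok_emb.size(0)
        # Prepend the soft prompt to every sequence in the batch.
        soft = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([soft, tok_emb], dim=1)
        prompt_mask = torch.ones(batch, self.prompt.size(0),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.lm(inputs_embeds=inputs_embeds, attention_mask=mask)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PTunedLM(AutoModelForCausalLM.from_pretrained("gpt2"))
batch = tokenizer(["The capital of France is"], return_tensors="pt")
out = model(batch["input_ids"], batch["attention_mask"])
print(out.logits.shape)  # (1, num_virtual_tokens + seq_len, vocab_size)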
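
For the parallelism point, a back-of-the-envelope sketch of how the two compose: the tensor-parallel and pipeline-parallel degrees decide how many GPUs hold one copy of the model, and whatever factor of the cluster is left over becomes the data-parallel degree. The function and the example numbers below are illustrative, not tied to any particular framework or to the session's actual cluster layout.

def parallel_layout(num_gpus, tensor_parallel, pipeline_parallel):
    # GPUs needed to hold a single model replica (model parallelism).
    gpus_per_replica = tensor_parallel * pipeline_parallel
    if num_gpus % gpus_per_replica:
        raise ValueError("GPU count must be divisible by TP * PP")
    # Remaining factor of the cluster runs as data parallelism.
    data_parallel = num_gpus // gpus_per_replica
    return {"model replicas (data parallel)": data_parallel,
            "GPUs per replica (model parallel)": gpus_per_replica}

# e.g. a 530B-parameter model might need 8-way tensor parallelism within
# a node and 12-way pipeline parallelism across nodes just to fit; the
# leftover GPUs then provide 4-way data parallelism.
print(parallel_layout(num_gpus=384, tensor_parallel=8, pipeline_parallel=12))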
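
And for the end-to-end pipeline, a toy sketch of the grounding idea: retrieve the most relevant rows from an enterprise database (in-memory here) and place them in the prompt, so the model answers from retrieved facts rather than hallucinating. All data and names below are made up, and the scoring is a deliberately naive word-overlap measure; a real pipeline would use learned embeddings and a vector store.

def retrieve(question, documents, top_k=2):
    # Rank documents by how many question words they share.
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

enterprise_db = [
    "Order 1042 shipped on 2023-05-02 to the Berlin warehouse.",
    "Order 1043 is on hold pending a credit check.",
    "The Berlin warehouse handles all EU returns.",
]

question = "Where did order 1042 ship to?"
context = "\n".join(retrieve(question, enterprise_db))
prompt = ("Answer using ONLY the context below. If the answer is not "
          "in the context, say you do not know.\n"
          f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
print(prompt)  # this string would be sent to the fine-tuned LLM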

👉 Who is this DataHour for?
—- Students and freshers who want to build a career in the Data-tech domain.
—- Working professionals who want to transition into the Data-tech domain.
—- Data science professionals who want to accelerate their career growth.
—- Prerequisites: none; anyone from any domain, with any level of work experience and an interest in AI, can join.

👉 About the Speaker
Amit Kumar works as a Senior Enterprise Architect - Generative AI and Deep Learning at NVIDIA, where he helps enterprises across verticals architect and build end-to-end generative AI and LLM-based solutions. He completed his B.Tech at IIT Guwahati in 2010 and is pursuing higher education at Stanford University. He has previously worked at HP's research lab, VMware, and EFI, and completed internships at Google (as a Summer of Code student), the Indian Statistical Institute Kolkata, and the Technical University of Braunschweig, Germany. He holds 7 patents spanning deep learning, NLP, classical machine learning, statistical modeling, Industry 4.0, and 3D printing.

—————-
👉 Tags
—————-
generative ai
ai art
large language models
foundation model
generative ai examples
llm models
generative ai meaning
openai
jasper
chatgpt
gpt3 models
llm in ai
language model in artificial intelligence