Going Deeper into AI: The Guts of ChatGPT and Large Language Models

Nerding Out with That Nerdy Catholic
In this episode, we take some time to look under the hood of how a Large Language Model (LLM) like ChatGPT works. Building on the previous episode, where we looked at neural networks, the basis for all AI, we explore how these neural networks are used to comb through large quantities of text, find meaning in it, and respond with language that is easy to understand.

We move beyond simple neural networks to discuss the intricacies of the Transformer architecture and Retrieval-Augmented Generation (RAG), and how these approaches are central to understanding modern artificial intelligence and machine learning applications.
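The heart of the Transformer architecture is scaled dot-product attention, introduced in the "Attention Is All You Need" paper linked below. Here is a minimal, illustrative sketch of that one operation; real LLMs stack many attention heads with learned weight matrices and billions of parameters, so this is a toy, not an implementation of any actual model.

```python
# Toy scaled dot-product attention: every token (query) scores its similarity
# to every other token (key), and the output is a weighted mix of values.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query/key similarity, scaled
    # Softmax turns scores into attention weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted blend of the values

# Three "tokens", each a 4-dimensional vector; in self-attention Q = K = V.
x = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one blended vector per input token
```

Each output row is a context-aware version of its input token, which is how a Transformer lets words "look at" the rest of the sentence to find meaning.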

We also touch on the benefits of running your own local AI model and agent, and how we use Ollama at That Nerdy Catholic to help with some of our tasks. Want to know how an LLM actually works? Continue this exploration of AI with us.
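If you want to try a local model yourself, a typical Ollama session looks something like the sketch below. This assumes Ollama is installed on your machine; "llama3" is just an example model name, so swap in whatever model you prefer from the Ollama library.

```shell
# Download a model; after this, everything runs entirely on your own hardware.
ollama pull llama3

# Chat with it straight from the terminal.
ollama run llama3 "Explain transformer attention in one paragraph."

# Ollama also serves a local HTTP API (default port 11434) that your own
# scripts and agents can call:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello!", "stream": false}'
```

Running locally means your prompts never leave your machine, which is part of why we like it for our own tasks.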

Links from Episode:

Attention Is All You Need: https://arxiv.org/pdf/1706.03762

We want your input for this series, so head over to https://thatnerdycatholic.com/ai to share your thoughts and experiences.

Get your Nerding Out merch at https://thatnerdycatholic.com/merch
Support That Nerdy Catholic at https://thatnerdycatholic.com/support
Facebook: https://www.facebook.com/thatnerdycatholic/
Instagram: https://www.instagram.com/ThatNerdyCatholic
