Introduction to LLMs

Lecture 17

Dr. Benjamin Soltoff

Cornell University
INFO 4940/5940 - Fall 2025

October 28, 2025

Announcements


Learning objectives

  • Define a conversation with a large language model (LLM)
  • Identify the key roles in an LLM conversation
  • Use {ellmer} in R or chatlas in Python to converse with an LLM
  • Define a generative pre-trained transformer
  • Create basic chat apps using Shiny

Application exercise

ae-16

Instructions

  • Go to the course GitHub org and find your ae-16 (repo name will be suffixed with your GitHub name).
  • Clone the repo in Positron, install required packages using renv::restore() (R) or uv sync (Python), open the Quarto document in the repo, and follow along and complete the exercises.
  • Render, commit, and push your edits by the AE deadline: the end of the day

⌨️ 01_hello-llm

Instructions

Test that you can connect to OpenAI’s API using {ellmer} in R or chatlas in Python by running the provided code.

Copy your .env file from your user directory to the ae-16 folder.

How to think about LLMs

Think empirically, not theoretically

  • It’s okay to treat LLMs as black boxes. We’re not going to focus on how they work internally

  • Just try it! When wondering if an LLM can do something, experiment rather than theorize

  • You might assume they cannot possibly do things that they clearly can do today

Embrace the experimental process

  • Don’t worry about ROI during exploration. Focus on learning and engaging with the technology

  • Failure is valuable! Those are some of the most interesting conversations that we have

  • It doesn’t have to be a success. Attempts that don’t work still provide insights

Start simple, build understanding

  • We’re going to focus on the core building blocks.

  • All the incredible things you see AI do decompose to just a few key ingredients.

  • Our goal is to build intuition through hands-on experience.

Anatomy of a conversation

What’s an HTTP request?

Talking with ChatGPT happens via HTTP
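Concretely, each turn of the conversation is one HTTP POST carrying a JSON body of role-tagged messages. A minimal sketch of that payload in Python (the field names follow OpenAI's chat completions API; nothing is actually sent here, and no API key is needed):

```python
import json

# A "conversation" is just a list of role-tagged messages,
# serialized to JSON and POSTed to the provider's API endpoint.
payload = {
    "model": "gpt-4.1",
    "messages": [
        {"role": "system", "content": "You are a dad joke machine."},
        {"role": "user", "content": "Tell me a joke about R."},
    ],
}

# This JSON string is what {ellmer} and chatlas build and send for you.
body = json.dumps(payload)
```

The assistant's reply comes back as another role-tagged message, which the client appends to the list before the next turn.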

Messages have roles

Message roles

Role        Description
system      Instructions from the developer (that's you!) to set the behavior of the assistant; supplied via system_prompt
user        Messages from the person interacting with the assistant
assistant   The AI model's responses to the user

Hello, {ellmer} and chatlas!

R

Python

Hello, {ellmer} and chatlas!

R

library(ellmer)

Python

import chatlas

Hello, {ellmer} and chatlas!

R

library(ellmer)

chat <- chat_openai()

Python

import chatlas

chat = chatlas.ChatOpenAI()

Hello, {ellmer} and chatlas!

R

library(ellmer)

chat <- chat_openai()

chat$chat("Tell me a joke about R.")

Python

import chatlas

chat = chatlas.ChatOpenAI()

chat.chat("Tell me a joke about Python.")

Hello, {ellmer} and chatlas!

R

library(ellmer)

chat <- chat_openai()

chat$chat("Tell me a joke about R.")
#> Why did the R programmer go broke?
#> Because he kept using `sample()` and lost all his data!

Python

import chatlas

chat = chatlas.ChatOpenAI()

chat.chat("Tell me a joke about Python.")
#> Why do Python programmers prefer using snakes as pets?
#> Because they don't mind the indentation!

❓ What are the user and assistant roles in this example?

Hello, {ellmer} and chatlas!

R

chat
<Chat OpenAI/gpt-4.1 turns=2 tokens=14/29 $0.00>
── user [14] ────────────────────────────────────────
Tell me a joke about R.
── assistant [29] ───────────────────────────────────
Why did the R programmer go broke?

Because he kept using `sample()` and lost all his data!

Python

print(chat)

Hello, {ellmer} and chatlas!

R

chat

Python

print(chat)
## 👤 User turn:

Tell me a joke about Python.

## 🤖 Assistant turn:

Why do Python programmers prefer using snakes as pets?

Because they don't mind the indentation!

❓ What about the system prompt?

Hello, {ellmer} and chatlas!

R

library(ellmer)

chat <- chat_openai(
  system_prompt = "You are a dad joke machine."
)

chat$chat("Tell me a joke about R.")

Python

import chatlas

chat = chatlas.ChatOpenAI(
  system_prompt="You are a dad joke machine."
)

chat.chat("Tell me a joke about Python.")

Hello, {ellmer} and chatlas!

R

library(ellmer)

chat <- chat_openai(
  system_prompt = "You are a dad joke machine."
)

chat$chat("Tell me a joke about R.")
#> Why did the letter R get invited to all the pirate parties?
#>
#> Because it always knows how to *arr-r-ive* in style!

Hello, {ellmer} and chatlas!

R

chat
<Chat OpenAI/gpt-4.1 turns=3 tokens=25/28 $0.00>
── system [0] ───────────────────────────────────────
You are a dad joke machine.
── user [25] ────────────────────────────────────────
Tell me a joke about R.
── assistant [28] ───────────────────────────────────
Why did the letter R get invited to all the pirate parties?

Because it always knows how to *arr-r-ive* in style!

⌨️ 02_word-game

Instructions

  1. Set up a chat with a system prompt:

    You are playing a word guessing game. At each turn, guess the word and tell us what it is.

  2. Ask: In British English, guess the word for the person who lives next door.

  3. Ask: What helps a car move smoothly down the road?

  4. Create a new, empty chat and ask the second question again.

  5. How do the answers to 3 and 4 differ? Why?
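The point of step 5: a chat object carries its whole message history into every request, while a new chat starts empty. A pure-Python sketch of that bookkeeping (no API calls; the `history` list stands in for the chat object's stored turns):

```python
# Each chat object keeps its own message history; a fresh chat starts empty.
chat_a = []  # the chat used for questions 2 and 3
chat_b = []  # the new, empty chat used in step 4

def ask(history, question):
    """Append the user turn; the model sees everything already in history."""
    history.append({"role": "user", "content": question})
    return history

ask(chat_a, "In British English, guess the word for the person who lives next door.")
ask(chat_a, "What helps a car move smoothly down the road?")
ask(chat_b, "What helps a car move smoothly down the road?")

# chat_a's second question arrives with the neighbour-guessing context
# already in place; chat_b's identical question arrives with none.
print(len(chat_a), len(chat_b))  # → 2 1
```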

Demo: clearbot

👨‍💻 _demos/03_clearbot/app.py

System prompt:

You are playing a word guessing game. At each turn, guess the word and tell us what it is.

First question:

In British English, guess the word for the person who lives next door.

Second question:

What helps a car move smoothly down the road?

How to talk to robots

When you talk to ChatGPT

  1. You write some words

  2. ChatGPT continues writing words

  3. You think you’re having a conversation

ChatGPT

Chatting with a Generative Pre-trained Transformer

LLM = Large Language Model

How to make an LLM

If you read everything
ever written…

  • Books and stories

  • Websites and articles

  • Poems and jokes

  • Questions and answers


…then you could…

  • Answer questions
  • Write stories
  • Tell jokes
  • Explain things
  • Translate into any language

The cat sat in the ____

  • 🎩
  • 🛌
  • 📦
  • 🪟
  • 🛒
  • 👠
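The fill-in-the-blank above is the core move: the model assigns a probability to every candidate next token and samples one. A toy sketch of that sampling step (the candidates and probabilities are made up for illustration):

```python
import random

# Hypothetical next-token distribution for "The cat sat in the ____"
candidates = ["hat", "box", "window", "bed", "shoe"]
probs = [0.40, 0.30, 0.15, 0.10, 0.05]

random.seed(0)
# random.choices draws according to the given weights,
# like sampling from the model's predicted distribution.
next_token = random.choices(candidates, weights=probs, k=1)[0]
print(next_token)
```

Run it repeatedly without the seed and "hat" comes up most often, but any candidate can appear; that randomness is why the same prompt can give different answers.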

Actually: tokens, not words

  • Fundamental units of information for LLMs
  • Words, parts of words, or individual characters
    • “hello” → 1 token
    • “unconventional” → 3 tokens: un|con|ventional
  • Important for:
    • Model input/output limits
    • API pricing is usually by token
  • Not just words, but images can be tokenized too
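A greedy longest-match tokenizer over a tiny invented vocabulary shows how "unconventional" can split into subword pieces. (Real tokenizers such as OpenAI's BPE are learned from data and differ in detail; this vocabulary is made up for illustration.)

```python
# Tiny invented subword vocabulary.
vocab = {"un", "con", "ventional", "hello"}

def tokenize(word, vocab):
    """Greedily peel off the longest known prefix at each step."""
    tokens = []
    while word:
        for length in range(len(word), 0, -1):
            if word[:length] in vocab:
                tokens.append(word[:length])
                word = word[length:]
                break
        else:
            # Unknown character: emit it as its own token.
            tokens.append(word[0])
            word = word[1:]
    return tokens

print(tokenize("unconventional", vocab))  # → ['un', 'con', 'ventional']
print(tokenize("hello", vocab))           # → ['hello']
```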

Demo: token-possibilities

👨‍💻 _demos/04_token-possibilities/app.R

Programming is fun, but I kind of like ChatGPT…

{ellmer} and chatlas can do that, too!

Console Browser
         Console               Browser
ellmer   live_console(chat)    live_browser(chat)
chatlas  chat.console()        chat.app()

⌨️ 05_live

Instructions

  1. Your job: write a groan-worthy roast of students at Cornell University

  2. Bonus points for puns, rhymes, and one-liners

  3. Don’t be mean

04:00

shinychat

ellmer

shinychat in R

Start with the shinyapp snippet

library(shiny)
library(bslib)

ui <- page_fillable(

)

server <- function(input, output, session) {

}

shinyApp(ui, server)

shinychat in R

Load {shinychat} and {ellmer}

library(shiny)
library(bslib)
library(shinychat)
library(ellmer)

ui <- page_fillable(

)

server <- function(input, output, session) {

}

shinyApp(ui, server)

shinychat in R

Use the shinychat chat module

library(shiny)
library(bslib)
library(shinychat)
library(ellmer)

ui <- page_fillable(
  chat_mod_ui("chat")
)

server <- function(input, output, session) {
  chat_mod_server("chat")
}

shinyApp(ui, server)

shinychat in R

Create and hook up a chat client to use in the app

library(shiny)
library(bslib)
library(shinychat)
library(ellmer)


ui <- page_fillable(
  chat_mod_ui("chat")
)

server <- function(input, output, session) {
  client <- chat_openai()
  chat_mod_server("chat", client)
}

shinyApp(ui, server)

shinychat

ellmer

shinychat in Python

Start with the shinyapp snippet

from shiny import App, reactive, render, req, ui

app_ui = ui.page_fillable(
  ui.input_slider("n", "N", 0, 100, 20),
  ui.output_text_verbatim("txt"),
)


def server(input, output, session):
  @render.text
  def txt():
    return f"n*2 is {input.n() * 2}"


app = App(app_ui, server)

shinychat in Python

Remove the parts we don’t need

from shiny import App, ui

app_ui = ui.page_fillable(

)

def server(input, output, session):
    pass

app = App(app_ui, server)

shinychat in Python

Create a chatlas chat client

import chatlas
from shiny import App, ui

app_ui = ui.page_fillable(

)

def server(input, output, session):
    client = chatlas.ChatOpenAI()

app = App(app_ui, server)

shinychat in Python

Add the Chat UI and server logic (client and chat aren’t connected yet!)

import chatlas
from shiny import App, ui

app_ui = ui.page_fillable(
    ui.chat_ui("chat")
)

def server(input, output, session):
    client = chatlas.ChatOpenAI()
    chat = ui.Chat("chat")

app = App(app_ui, server)

shinychat in Python

When the user submits a message…

import chatlas
from shiny import App, ui

app_ui = ui.page_fillable(
    ui.chat_ui("chat")
)

def server(input, output, session):
    client = chatlas.ChatOpenAI()
    chat = ui.Chat("chat")

    @chat.on_user_submit
    async def _(user_input: str):
        # Send input to LLM
        # Send response back to UI
        ...

app = App(app_ui, server)

shinychat in Python

we’ll send the input to the LLM…

import chatlas
from shiny import App, ui

app_ui = ui.page_fillable(
    ui.chat_ui("chat")
)

def server(input, output, session):
    client = chatlas.ChatOpenAI()
    chat = ui.Chat("chat", client)

    @chat.on_user_submit
    async def _(user_input: str):
        response = await client.stream_async(user_input)
        # Send response back to UI

app = App(app_ui, server)

shinychat in Python

…and then stream the response back to the UI.

import chatlas
from shiny import App, ui

app_ui = ui.page_fillable(
    ui.chat_ui("chat")
)

def server(input, output, session):
    client = chatlas.ChatOpenAI()
    chat = ui.Chat("chat", client)

    @chat.on_user_submit
    async def _(user_input: str):
        response = await client.stream_async(user_input)
        await chat.append_message_stream(response)

app = App(app_ui, server)

⌨️ 06_word-games

Instructions

  1. I’ve set up the basic Shiny app snippet and a system prompt.

  2. Your job: create a chatbot that plays the word guessing game with you.

  3. The twist: this time, you’re guessing the word.

07:00

Interpolation in R

library(ellmer)

words <- c("elephant", "bicycle", "sandwich")

interpolate(
  "The secret word is {{ sample(words, 1) }}."
)
[1] │ The secret word is elephant.

Interpolation in R

library(ellmer)

words <- c("elephant", "bicycle", "sandwich")

interpolate(
  "The secret word is {{ words }}."
)
[1] │ The secret word is elephant.
[2] │ The secret word is bicycle.
[3] │ The secret word is sandwich.

Interpolation in Python

import random

words = ["elephant", "bicycle", "sandwich"]

f"The secret word is {random.choice(words)}."
'The secret word is bicycle.'

Wrap-up

Recap

  • LLM conversations happen via HTTP requests with messages that have roles
  • {ellmer} and chatlas make it easy to converse with LLMs in R and Python
  • Generative pre-trained transformers use the transformer architecture and its attention mechanism to generate text one token at a time
  • You can build chat apps using Shiny

Acknowledgments