Programming with LLMs

Lecture 18

Dr. Benjamin Soltoff

Cornell University
INFO 4940/5940 - Fall 2025

October 30, 2025

Announcements

  • Get your homework 05 APIs deployed
  • Project 01 draft due today
  • Project 01 final submissions due next week

Learning objectives

  • Understand the difference between LLM providers and models
  • Choose appropriate models for different tasks
  • Handle multi-modal inputs (images, PDFs)
  • Extract structured data from LLM responses
  • Perform parallel and batch LLM calls for efficiency

Application exercise

ae-17

Instructions

  • Go to the course GitHub org and find your ae-17 repo (the repo name is suffixed with your GitHub username).
  • Clone the repo in Positron, install the required packages with renv::restore() (R) or uv sync (Python), open the Quarto document, and complete the exercises.
  • Render, commit, and push your edits by the AE deadline (end of the day).

Providers and Models

Provider
The company that hosts and serves models.

Model
A specific LLM with particular capabilities.

How are models different?

  1. Context: How many tokens can you give the model?
  2. Speed: How many tokens per second?
  3. Cost: How much does it cost to use the model?
  4. Intelligence: How “smart” is the model?
  5. Capabilities: Vision, reasoning, tools, etc.
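These dimensions trade off against one another, and cost in particular is easy to reason about with arithmetic: you pay per token, usually at different rates for input and output. A back-of-the-envelope sketch (the model names and per-million-token prices below are made up for illustration, not real provider pricing):

```python
# Hypothetical per-million-token prices for two made-up model tiers
PRICES = {
    "big-model":   {"input": 3.00, "output": 15.00},  # $ per 1M tokens
    "small-model": {"input": 0.10, "output": 0.40},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate the dollar cost of one call: tokens / 1M * price per 1M."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Summarizing a 10,000-token document into a 500-token answer:
print(round(estimate_cost("big-model", 10_000, 500), 4))    # 0.0375
print(round(estimate_cost("small-model", 10_000, 500), 4))  # 0.0012
```

The same job is roughly 30x cheaper on the small tier, which is why matching the model to the task matters.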

Choose a model

Task             OpenAI       Anthropic          Gemini
Coding           GPT-5        Claude 4 Sonnet    Gemini 2.5 Pro
Fast/General     GPT-5 mini   Claude 3.5 Sonnet  Gemini 2.0 Flash
Complex Tasks    o3           Claude 4 Opus      Gemini 2.5 Pro
Cost-Effective   Mini         Haiku              Flash

Learn more

ellmer

Providers

  • chat_openai()

  • chat_anthropic()

  • chat_google_gemini()

Local models

  • chat_ollama()

Enterprise

  • chat_aws_bedrock()

chatlas

Providers

  • ChatOpenAI()

  • ChatAnthropic()

  • ChatGoogle()

Local models

  • ChatOllama()

Enterprise

  • ChatBedrockAnthropic()

Chat in Easy Mode

R

library(ellmer)

chat <- chat("anthropic")

Python

import chatlas

chat = chatlas.ChatAuto("anthropic")

Chat in Easy Mode

R

library(ellmer)

chat <- chat("anthropic")
#> Using model = "claude-sonnet-4-20250514".

Python

import chatlas

chat = chatlas.ChatAuto("anthropic")
#> Anthropic/claude-sonnet-4-0

Chat in Easy Mode

R

library(ellmer)

chat <- chat("openai")
#> Using model = "gpt-4.1".

Python

import chatlas

chat = chatlas.ChatAuto("openai")
#> OpenAI/gpt-4.1

Chat in Easy Mode

R

library(ellmer)

chat <- chat("openai/gpt-4.1-nano")

Python

import chatlas

chat = chatlas.ChatAuto("openai/gpt-4.1-nano")

⌨️ 07_models

  1. Use chatlas and {ellmer} to list available models from Anthropic and OpenAI.

  2. Send the same prompt to different models and compare the responses.

  3. Feel free to change the prompt!

04:00

Favorite models for exercises

  • OpenAI
    • gpt-4.1-nano
    • gpt-5
  • Anthropic
    • claude-sonnet-4-20250514
    • claude-3-5-haiku-20241022

Multi-modal input

A picture is worth a thousand words

Or, for an LLM, a picture is worth roughly 227 words (about 170 tokens).

🖼️ 🔍 Open Images Dataset

🌆 content_image_file

R

library(ellmer)

chat <- chat("openai/gpt-4.1-nano")
chat$chat(
  content_image_file("cute-cats.jpg"),
  "What do you see in this image?"
)

Python

import chatlas

chat = chatlas.ChatAuto("openai/gpt-4.1-nano")
chat.chat(
    chatlas.content_image_file("cute-cats.jpg"),
    "What do you see in this image?"
)

🐈 content_image_url

R

library(ellmer)

chat <- chat("openai/gpt-4.1-nano")
chat$chat(
  content_image_url("https://placecats.com/bella/400/400"),
  "What do you see in this image?"
)

Python

import chatlas

chat = chatlas.ChatAuto("openai/gpt-4.1-nano")
chat.chat(
    chatlas.content_image_url("https://placecats.com/bella/400/400"),
    "What do you see in this image?"
)

⌨️ 08_vision

  1. I’ve put some images of food in the data/recipes/images folder.

  2. Your job: show the food to the LLM and see if it gets hungry.

05:00

📑 content_pdf_file

R

library(ellmer)

chat <- chat("openai/gpt-4.1-nano")
chat$chat(
  content_pdf_file("financial-report.pdf"),
  "What's my tax liability for 2024?"
)

Python

import chatlas

chat = chatlas.ChatAuto("openai/gpt-4.1-nano")
chat.chat(
    chatlas.content_pdf_file("financial-report.pdf"),
    "What's my tax liability for 2024?"
)

📑 content_pdf_url

R

library(ellmer)

chat <- chat("openai/gpt-4.1-nano")
chat$chat(
  content_pdf_url("http://pdf.secdatabase.com/1757/0001104659-25-042659.pdf"),
  "Describe Tesla’s executive compensation and stock award programs."
)

Python

import chatlas

chat = chatlas.ChatAuto("openai/gpt-4.1-nano")
chat.chat(
    chatlas.content_pdf_url("http://pdf.secdatabase.com/1757/0001104659-25-042659.pdf"),
    "Describe Tesla’s executive compensation and stock award programs."
)

⌨️ 09_pdf

  1. We have the actual recipes as PDFs in the data/recipes/pdf folder.

  2. Your job: ask the LLM to convert the recipes to markdown.

05:00

Structured output

How would you extract name and age?

age_free_text <- list(
    "I go by Alex. 42 years on this planet and counting.",
    "Pleased to meet you! I'm Jamal, age 27.",
    "They call me Li Wei. Nineteen years young.",
    "Fatima here. Just celebrated my 35th birthday last week.",
    "The name's Robert - 51 years old and proud of it.",
    "Kwame here - just hit the big 5-0 this year."
)

If you wrote R code, it might look like this…

word_to_num <- function(x) {
    # normalize
    x <- tolower(x)
    # direct numbers
    if (grepl("\\b\\d+\\b", x)) return(as.integer(regmatches(x, regexpr("\\b\\d+\\b", x))))
    # hyphenated like "5-0"
    if (grepl("\\b\\d+\\s*-\\s*\\d+\\b", x)) {
        parts <- as.integer(unlist(strsplit(regmatches(x, regexpr("\\b\\d+\\s*-\\s*\\d+\\b", x)), "\\s*-\\s*")))
        return(10 * parts[1] + parts[2])
    }
    # simple word numbers
    ones <- c(
        zero=0, one=1, two=2, three=3, four=4, five=5, six=6, seven=7, eight=8, nine=9,
        ten=10, eleven=11, twelve=12, thirteen=13, fourteen=14, fifteen=15, sixteen=16,
        seventeen=17, eighteen=18, nineteen=19
    )
    tens <- c(twenty=20, thirty=30, forty=40, fifty=50, sixty=60, seventy=70, eighty=80, ninety=90)
    # e.g., "nineteen"
    if (x %in% names(ones)) return(ones[[x]])
    # e.g., "thirty five" or "thirty-five"
    x2 <- gsub("-", " ", x)
    parts <- strsplit(x2, "\\s+")[[1]]
    if (length(parts) == 2 && parts[1] %in% names(tens) && parts[2] %in% names(ones)) {
        return(tens[[parts[1]]] + ones[[parts[2]]])
    }
    if (length(parts) == 1 && parts[1] %in% names(tens)) return(tens[[parts[1]]])
    return(NA_integer_)
}

# Extract name candidates
extract_name <- function(s) {
    # patterns that introduce a name
    pats <- c(
        "I go by\\s+([A-Z][a-z]+)",
        "I'm\\s+([A-Z][a-z]+(?:\\s+[A-Z][a-z]+)?)",
        "They call me\\s+([A-Z][a-z]+(?:\\s+[A-Z][a-z]+)?)",
        "^([A-Z][a-z]+) here",
        "The name's\\s+([A-Z][a-z]+)",
        "^([A-Z][a-z]+)\\s" # fallback: leading capital word
    )
    for (p in pats) {
        m <- regexpr(p, s, perl = TRUE)
        if (m[1] != -1) {
            return(sub(p, "\\1", regmatches(s, m)))
        }
    }
    NA_character_
}

# Extract age phrases and convert to number
extract_age <- function(s) {
    # capture common age phrases around a number
    m <- regexpr("(\\b\\d+\\b|\\b\\d+\\s*-\\s*\\d+\\b|\\b[Nn][a-z-]+\\b)\\s*(years|year|birthday|young|this)", s, perl = TRUE)
    if (m[1] != -1) {
        token <- sub("(years|year|birthday|young|this)$", "", trimws(substring(s, m, m + attr(m, "match.length") - 1)))
        return(word_to_num(token))
    }
    # handle pure word-number without trailing keyword (e.g., "Nineteen years young." handled above)
    m2 <- regexpr("\\b([A-Z][a-z]+)\\b\\s+years", s, perl = TRUE)
    if (m2[1] != -1) {
        token <- tolower(sub("\\s+years.*", "", regmatches(s, m2)))
        return(word_to_num(token))
    }
    # handle hyphenated "big 5-0"
    m3 <- regexpr("big\\s+(\\d+\\s*-\\s*\\d+)", s, perl = TRUE)
    if (m3[1] != -1) {
        token <- sub("big\\s+", "", regmatches(s, m3))
        return(word_to_num(token))
    }
    NA_integer_
}

If you wrote R code, it might look like this…

dplyr::tibble(
  name = purrr::map_chr(age_free_text, extract_name),
  age = purrr::map_int(age_free_text, extract_age)
)
# A tibble: 6 × 2
  name     age
  <chr>  <int>
1 Alex      42
2 Jamal     NA
3 Li Wei    NA
4 Fatima    NA
5 Robert    51
6 Kwame      5
age_free_text
[[1]]
[1] "I go by Alex. 42 years on this planet and counting."

[[2]]
[1] "Pleased to meet you! I'm Jamal, age 27."

[[3]]
[1] "They call me Li Wei. Nineteen years young."

[[4]]
[1] "Fatima here. Just celebrated my 35th birthday last week."

[[5]]
[1] "The name's Robert - 51 years old and proud of it."

[[6]]
[1] "Kwame here - just hit the big 5-0 this year."

But if you ask an LLM…

library(ellmer)

chat <- chat(
  "openai/gpt-5-nano",
  system_prompt = "Extract the name and age."
)

chat$chat(age_free_text[[1]])
#>

chat$chat(age_free_text[[2]])
#>

But if you ask an LLM…

library(ellmer)

chat <- chat(
  "openai/gpt-5-nano",
  system_prompt = "Extract the name and age."
)

chat$chat(age_free_text[[1]])
#> Name: Alex; Age: 42

chat$chat(age_free_text[[2]])
#> Name: Jamal; Age: 27

Wouldn’t this be nice?

chat$chat(age_free_text[[1]])
#> list(
#>   name = "Alex",
#>   age = 42
#> )

Structured chat output

chat$chat_structured(age_free_text[[1]])
#> list(
#>   name = "Alex",
#>   age = 42
#> )

Structured chat output

type_person <- type_object(
  name = type_string(),
  age = type_integer()
)

chat$chat_structured(age_free_text[[1]], type = type_person)
#> list(
#>   name = "Alex",
#>   age = 42
#> )

Structured chat output

type_person <- type_object(
  name = type_string(),
  age = type_integer()
)

chat$chat_structured(age_free_text[[1]], type = type_person)
#> $name
#> [1] "Alex"
#>
#> $age
#> [1] 42

ellmer’s type functions

type_person <- type_object(
  name = type_string("The person's name"),
  age = type_integer("The person's age in years")
)
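Whether you use ellmer's type_*() functions or a Pydantic model, the type definition is compiled into a JSON Schema that is sent with the request, and the provider constrains the model's reply to match it. A rough, stdlib-only Python sketch of the schema that type_person above implies (the exact schema ellmer emits may differ in detail):

```python
import json

# Roughly the JSON Schema implied by
# type_object(name = type_string(...), age = type_integer(...))
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The person's name"},
        "age": {"type": "integer", "description": "The person's age in years"},
    },
    "required": ["name", "age"],
}

# A conforming reply from the model parses straight into the shape you asked for
reply = '{"name": "Alex", "age": 42}'
person = json.loads(reply)
print(person["name"], person["age"])  # Alex 42
```

This is why the field descriptions matter: they travel with the schema and act as miniature prompts for each field.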

In Python, use Pydantic

import chatlas
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

chat = chatlas.ChatAuto("openai/gpt-5-nano")
chat.chat_structured(
    "I go by Alex. 42 years on this planet and counting.",
    data_model=Person
)
#> Person(name='Alex', age=42)

In Python, use Pydantic

import chatlas
from pydantic import BaseModel, Field

class Person(BaseModel):
    name: str = Field(..., description="The person's name")
    age: int = Field(..., description="The person's age in years")

chat = chatlas.ChatAuto("openai/gpt-5-nano")
chat.chat_structured(
    "I go by Alex. 42 years on this planet and counting.",
    data_model=Person
)
#> Person(name='Alex', age=42)

In Python, use Pydantic

import chatlas
from pydantic import BaseModel, ConfigDict

class Person(BaseModel):
    model_config = ConfigDict(use_attribute_docstrings=True)

    name: str
    """The person's name"""
    age: int
    """The person's age in years"""

⌨️ 10_structured-output

  1. We also have text versions of the recipes in data/recipes/txt.

  2. Use ellmer::type_*() or a Pydantic model to extract structured data from the recipe you used in activity 09.

  3. I’ve given you the expected structure; you just need to implement it.

07:00

Parallel and batch calls

Structured chat output

type_person <- type_object(
  name = type_string(),
  age = type_integer()
)

chat$chat_structured(age_free_text[[1]], type = type_person)
#> $name
#> [1] "Alex"
#>
#> $age
#> [1] 42

Structured chat output

type_person <- type_object(
  name = type_string(),
  age = type_integer()
)

chat$chat_structured(age_free_text, type = type_person)
#> ???

Structured chat output

type_person <- type_object(
  name = type_string(),
  age = type_integer()
)

chat$chat_structured(age_free_text, type = type_person)
#> Error in `FUN()`:
#> ! `...` must be made up strings or <content> objects, not a list.

Parallel chat calls

parallel_chat_structured(
  chat,
  age_free_text,
  type = type_person
)

Parallel chat calls

parallel_chat_structured(
  chat,
  age_free_text,
  type = type_person
)
#> [working] (0 + 0) -> 2 -> 4 | ■■■■■■■■■■■■■■■■■■■■■             67%

Parallel chat calls

parallel_chat_structured(
  chat,
  age_free_text,
  type = type_person
)
#>     name age
#> 1   Alex  42
#> 2  Jamal  27
#> 3 Li Wei  19
#> 4 Fatima  35
#> 5 Robert  51
#> 6  Kwame  50

Batch chat calls

batch_chat_structured(
  chat,
  age_free_text,
  type = type_person
)

Batch chat calls

batch_chat_structured(
  chat,
  age_free_text,
  type = type_person,
  path = "people.json"
)

Batch chat calls

chat <- chat("anthropic/claude-3-5-haiku-20241022")
batch_chat_structured(
  chat,
  age_free_text,
  type = type_person,
  path = "people.json"
)

Batch vs. parallel

Parallel

  • 🌲 Works for any provider/model
  • ⚡ Much faster for small jobs

  • 💸 More expensive

  • 😓 Not easy to stop/resume

  • 🔜 Coming soon to chatlas

Batch

  • 🌱 Only works for some providers (OpenAI, Anthropic)

  • ⏲️ Finishes… eventually

  • 🏦 Much cheaper per prompt

  • 😎 Easy to stop/resume

  • 🧑‍🚀 Works in chatlas
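The cost gap is worth quantifying. Both OpenAI and Anthropic advertise a substantial discount for batch jobs (roughly 50% at the time of writing; check current pricing). A quick sketch using an assumed interactive price, purely for illustration:

```python
# Assumed interactive price: $1.00 per 1M input tokens (illustrative, not real pricing)
INTERACTIVE_PER_M = 1.00
BATCH_DISCOUNT = 0.50  # batch APIs commonly run ~50% cheaper

def job_cost(n_prompts, tokens_per_prompt, batch=False):
    """Total input-token cost for a job of n_prompts prompts."""
    total_tokens = n_prompts * tokens_per_prompt
    price = INTERACTIVE_PER_M * (1 - BATCH_DISCOUNT) if batch else INTERACTIVE_PER_M
    return total_tokens / 1e6 * price

# 100,000 prompts of 2,000 tokens each:
print(job_cost(100_000, 2_000))              # 200.0 (parallel/interactive)
print(job_cost(100_000, 2_000, batch=True))  # 100.0 (batch)
```

For a handful of prompts the savings are negligible and parallel's speed wins; at classroom-project scale and beyond, batch pays off if you can wait.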

chatlas.batch_chat_structured

from chatlas import ChatAuto, batch_chat_structured

chat = ChatAuto("anthropic/claude-3-haiku-20240307")
res = batch_chat_structured(
    chat=chat,
    prompts=age_free_text,
    data_model=Person,
    path="people.json",
)

⌨️ 11_batch

  1. Take your type_recipe or Recipe model from activity 10

  2. And apply it to all the text recipes in data/recipes/txt.

  3. Use parallel processing with gpt-4.1-nano from OpenAI.

  4. Use the Batch API with Claude 3.5 Haiku from Anthropic.

  5. Save the results to data/recipes/recipes.json and try out the Shiny recipe cookbook app!

07:00

Wrap-up

Recap

  • LLM providers and models have different capabilities, costs, and speeds
  • Choose models based on task requirements
  • Multi-modal inputs allow LLMs to process images and PDFs
  • Structured output can be achieved using type definitions
  • Parallel and batch calls improve efficiency for large-scale LLM tasks

Acknowledgments