AE 15: Programmatically interacting with LLMs

Suggested answers

Application exercise
Answers
Modified: October 29, 2024

Load packages

Set up API key

Warning

If you have not already completed the pre-class preparation to set up your API key, do this now.

Your turn: Run Sys.getenv("OPENAI_API_KEY") from your console to ensure your API key is set up correctly.
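If that call returns an empty string, the key is not available to your R session. One way to fix this is to store the key in your `.Renviron` file; a minimal sketch (assuming you have the usethis package installed — the key value shown is a placeholder):

```r
# Open .Renviron for editing (requires the usethis package)
usethis::edit_r_environ()

# Add a line like the following to the file, then restart R:
# OPENAI_API_KEY=your-key-here

# Temporary alternative for the current session only
# (avoid committing real keys to scripts or version control):
Sys.setenv(OPENAI_API_KEY = "your-key-here")
```

Storing the key in `.Renviron` keeps it out of your scripts, so it cannot accidentally end up in a Git repository.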

Basic usage

Your turn: Initiate a basic conversation with the GPT-4o model by asking “What is R programming?” and then follow up with a relevant question.

what_is_r <- llm_message("What is R programming?") |>
  openai()
what_is_r
Message History:
system: You are a helpful assistant
--------------------------------------------------------------
user: What is R programming?
--------------------------------------------------------------
assistant: R is a programming language and environment specifically designed for statistical computing and data analysis. It is widely used by statisticians, data scientists, and researchers for various statistical tests, data manipulation, and graphical representations. Here are some key features of R:

1. **Statistical Tests and Analysis**: R provides a wide array of tools for statistical analysis, including linear and nonlinear modeling, time-series analysis, classification, clustering, and more. It also allows for custom statistical methodology to be implemented.

2. **Data Manipulation**: R has powerful data manipulation capabilities, made even more robust by packages like `dplyr` and `tidyr`, which enhance its ability to handle, manipulate, and clean data effectively.

3. **Visualization**: One of R's strongest features is its ability to create high-quality plots and graphics, which can be highly customized to suit specific needs. Packages like `ggplot2` have become industry standards for data visualization.

4. **Open Source**: R is an open-source language, which means it is freely available for anyone to download and use. Its open-source nature encourages users to contribute to its development, leading to a rich ecosystem of packages and tools.

5. **Extensible**: The R community has contributed thousands of packages available through repositories like CRAN (The Comprehensive R Archive Network), enhancing R's capabilities with specialized functions for a wide range of applications.

6. **Reproducibility**: R supports the creation of reproducible data analysis workflows and reports using tools like R Markdown and knitr, which allow for the integration of R code with narrative text, making it easier to share and document analyses.

7. **Community and Support**: R has a large and active community, providing extensive resources, forums, and tutorials that help users from beginners to experts.

Overall, R is a very powerful tool for data analysis and statistical computing, valued for its depth and flexibility, especially when it comes to handling complex data and performing advanced analyses.
--------------------------------------------------------------
what_is_r |>
  llm_message("How does tidyverse relate to R programming?") |>
  openai()
Message History:
system: You are a helpful assistant
--------------------------------------------------------------
user: What is R programming?
--------------------------------------------------------------
assistant: R is a programming language and environment specifically designed for statistical computing and data analysis. It is widely used by statisticians, data scientists, and researchers for various statistical tests, data manipulation, and graphical representations. Here are some key features of R:

1. **Statistical Tests and Analysis**: R provides a wide array of tools for statistical analysis, including linear and nonlinear modeling, time-series analysis, classification, clustering, and more. It also allows for custom statistical methodology to be implemented.

2. **Data Manipulation**: R has powerful data manipulation capabilities, made even more robust by packages like `dplyr` and `tidyr`, which enhance its ability to handle, manipulate, and clean data effectively.

3. **Visualization**: One of R's strongest features is its ability to create high-quality plots and graphics, which can be highly customized to suit specific needs. Packages like `ggplot2` have become industry standards for data visualization.

4. **Open Source**: R is an open-source language, which means it is freely available for anyone to download and use. Its open-source nature encourages users to contribute to its development, leading to a rich ecosystem of packages and tools.

5. **Extensible**: The R community has contributed thousands of packages available through repositories like CRAN (The Comprehensive R Archive Network), enhancing R's capabilities with specialized functions for a wide range of applications.

6. **Reproducibility**: R supports the creation of reproducible data analysis workflows and reports using tools like R Markdown and knitr, which allow for the integration of R code with narrative text, making it easier to share and document analyses.

7. **Community and Support**: R has a large and active community, providing extensive resources, forums, and tutorials that help users from beginners to experts.

Overall, R is a very powerful tool for data analysis and statistical computing, valued for its depth and flexibility, especially when it comes to handling complex data and performing advanced analyses.
--------------------------------------------------------------
user: How does tidyverse relate to R programming?
--------------------------------------------------------------
assistant: The **tidyverse** is a collection of R packages designed for data science that share an underlying design philosophy, grammar, and data structures. It aims to simplify and streamline data manipulation, exploration, and visualization tasks, making it easier for users to work with data in a clear and intuitive manner. Here’s how the tidyverse relates to R programming:

1. **Consistent Grammar**: The tidyverse packages are designed around the concept of "tidy data," where datasets have a consistent structure (each variable as a column, each observation as a row). This structure makes it easier to apply the same analysis techniques across different datasets.

2. **Key Packages**:
   - **`ggplot2`**: Used for creating complex and beautiful visualizations.
   - **`dplyr`**: Provides a set of powerful functions for data manipulation, such as filtering, selecting, and summarizing data.
   - **`tidyr`**: Helps in tidying up data so that they are easy to work with; it includes functions for reshaping data.
   - **`readr`**: Offers functions for reading data into R quickly and efficiently.
   - **`purrr`**: Enhances the way R handles function programming, offering more intuitive tools for iteration.
   - **`tibble`**: An enhanced version of data frames that provides better printing and subsetting functionalities.
   - **`stringr`**: Simplifies working with strings.
   - **`forcats`**: Provides tools for working with categorical data as factors.

3. **Ease of Learning and Use**: Tidyverse functions are designed to be human-readable, which makes code more intuitive and easier to write and maintain. The use of a consistent syntax across different packages helps reduce the learning curve for new users.

4. **Integration**: Each package in the tidyverse is designed to work seamlessly together, allowing users to perform complex data manipulation tasks in a cohesive workflow.

5. **Community and Documentation**: The tidyverse has excellent documentation, with a strong focus on examples and use cases. Its active community contributes further tutorials, forums, and other resources, making it accessible to beginners and advanced users alike.

6. **Impact on Data Science**: The tidyverse has had a significant influence on how data science is practiced in R. It has standardized how many analysts and data scientists approach data tasks, emphasizing clear and reproducible workflows.

In summary, the tidyverse is an invaluable part of modern R programming for data science. It simplifies everyday data tasks and helps in constructing a clear and reproducible workflow, all while adhering to a set of consistent principles and practices for handling data.
--------------------------------------------------------------

Adding additional inputs

Images

Your turn: Use the documentation for llm_message() to have GPT-4o describe the two images below.

Image credit: Cornell Photos

Image credit: Cornell Photos
seal_describe <- llm_message("Describe this picture.",
  .imagefile = "data/llm/0654_14_041_select.jpg"
) |>
  openai()

# print the last reply
last_reply(seal_describe)
[1] "The picture shows a bronze or metal plaque with an embossed design featuring an open book. The book has an inscription that reads: \"I WOULD FOUND AN INSTITUTION WHERE ANY PERSON CAN FIND INSTRUCTION IN ANY STUDY.\" The text is in uppercase letters within the shape of the open book. The plaque has an aged appearance, with a green patina typical of oxidized bronze. It is part of a larger design with other decorative elements around it."
garden_describe <- llm_message("Describe this picture.",
  .imagefile = "data/llm/0792_05_B3_select.jpg"
) |>
  openai()

# print the last reply
last_reply(garden_describe)
[1] "The picture depicts a serene garden scene, with a pergola on the left adorned with climbing plants. Sunlight filters through the wooden beams, casting warm, dappled light onto the stone pathway and greenery below. The garden is lush and green, with well-maintained grass and a variety of plants and shrubs. In the background, there's a gently sloping hill covered in vegetation, adding to the tranquil and picturesque atmosphere."
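The two calls above repeat the same pattern, so they could also be written as a single iteration over the image files; a sketch assuming the same tidyllm functions used above:

```r
library(purrr)
library(tidyllm)

# File paths for the two images described above
image_files <- c(
  "data/llm/0654_14_041_select.jpg",
  "data/llm/0792_05_B3_select.jpg"
)

# Send the same prompt once per image and collect the replies
descriptions <- map(image_files, \(path) {
  llm_message("Describe this picture.", .imagefile = path) |>
    openai() |>
    last_reply()
})
```

Note that each iteration makes a separate API call, so this scales in cost with the number of images.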

PDFs

Your turn: Use generative AI to summarize the contents of my doctoral dissertation.

doc_summary <- llm_message("Please summarize the major arguments in the provided document.",
  .pdf = "data/llm/soltoff_dissertation_final.pdf"
) |>
  # need to use gpt-4o-mini for this task - document is too long otherwise
  openai(.model = "gpt-4o-mini")

last_reply(doc_summary) |> cat()
Benjamin C. Soltoff's dissertation at The Pennsylvania State University, titled "Agenda-Setting in State Courts of Last Resort," presents a comprehensive analysis of how state courts set their agendas, diverging from traditional studies that focus on the U.S. Supreme Court. Below are the major arguments summarized from the dissertation:

### Major Arguments:

1. **Institutional Judicial Agenda-Setting Theory**
   - The dissertation develops a theory that explains how institutional structures, individual judge preferences, litigant motivations, and interest group interventions shape the agenda-setting process at both macro and micro levels in state courts of last resort (COLRs).
   - Existing research has often overlooked the role of institutional design in influencing a court's agenda, leading to a misunderstanding of the factors that drive judicial attention to specific issues.

2. **Cross-State Variation and Empirical Analysis**
   - By leveraging variations among state COLRs (e.g., jurisdictional rules and judge selection processes), the study empirically tests hypotheses related to how these institutional differences impact the types of cases heard by courts.
   - The research highlights that macro-policy agendas in these courts are influenced by more than just strategic factors, with judges often engaging in sincere, straightforward decision-making rather than highly sophisticated, strategic calculations.

3. **Impact of Campaign Contributions**
   - Soltoff investigates how campaign contributions from litigants affect judicial decisions to grant or deny review of cases, suggesting that money can buy judicial access in certain states, particularly in Mississippi.
   - Contributions from the petitioner's attorney increase the likelihood of the court granting a review. In contrast, contributions from the respondent's attorney are associated with a decreased probability of granting review. This reveals a complex interaction between money and access in judicial decision-making.

4. **Limited Influence of Institutional-Strategic Factors**
   - The findings indicate that while some institutional factors (like judicial elections and professionalization) have an effect on decision-making, the relationships are less straightforward, and some expected relationships (e.g., between ideological alignment and decisions) were not supported.
   - For example, there was little evidence supporting the notion that an increase in the ideological distance between justices and the government led to a decrease in the likelihood of granting review, which contrasts with expectations based on institutional theory.

5. **Sincere versus Sophisticated Agenda Setting**
   - The research emphasizes that state COLR justices tend to be sincere in their decision-making rather than employing sophisticated strategies. Judges grant appeals when they disagree with lower court outcomes regardless of their ideological beliefs about the final decision.
   - This sincere agenda-setting reflects a commitment to error correction rather than strategic manipulation of the court's agenda. Judges in more ideologically cohesive courts are still less likely to grant review when they cannot predict the court's outcome effectively.

6. **Case Design and Judicial Politics**
   - The dissertation encourages future studies to focus on the interplay between judicial selection processes, campaign funding, and related political dynamics, as these factors potentially shape both the case selection process and the outcomes of judicial review.
   - It calls for a deeper understanding of how state legislatures might alter the rules governing COLRs’ functioning, and how these changes affect the kinds of cases that come before them.

### Conclusion:
Soltoff’s work contributes significantly to the understanding of judicial agenda-setting in state courts and challenges existing models by illustrating the effects of institutional characteristics, the role of contributions, and the predominance of sincere decision-making among judges. The findings promise to reshape perspectives on judicial politics and encourage further comparative analyses of judicial behavior beyond the U.S. Supreme Court.

API parameters

Temperature

Your turn: Use GPT-4o to create a knock knock joke. Generate separate conversations using the same prompt and vary the temperature setting to see how it affects the output.

Note

For GPT-4o, the temperature parameter controls the randomness of the output. A low temperature produces more deterministic responses, while a high temperature produces more random ones. It takes values in \([0, 2]\), with a default of 1.
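Rather than re-running the chunk by hand for each setting, you could iterate over a vector of temperatures; a hypothetical sketch using purrr (the temperature values chosen are illustrative):

```r
library(purrr)
library(tidyllm)

joke_prompt <- "Create a knock knock joke about dinosaurs that would amuse my 8 year old child."

# Run the identical prompt at several temperature settings
temps <- c(0, 1, 1.5)
jokes <- map(temps, \(temp) {
  llm_message(joke_prompt) |>
    openai(.temperature = temp) |>
    last_reply()
})
```

Comparing the replies side by side makes the effect of the parameter easier to see than scanning separate transcripts.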

# default value is 1
llm_message("Create a knock knock joke about dinosaurs that would amuse my 8 year old child.") |>
  openai(.temperature = NULL)
Message History:
system: You are a helpful assistant
--------------------------------------------------------------
user: Create a knock knock joke about dinosaurs that would amuse my 8 year old child.
--------------------------------------------------------------
assistant: Sure, here's a dinosaur-themed knock-knock joke for your child:

**You:** Knock, knock.  
**Your Child:** Who's there?  
**You:** Dino.  
**Your Child:** Dino who?  
**You:** Dino if you'll laugh, but I'm hoping you'll roar with delight! 
--------------------------------------------------------------
# low temp - less random, more focused and deterministic
llm_message("Create a knock knock joke about dinosaurs that would amuse my 8 year old child.") |>
  openai(.temperature = 0)
Message History:
system: You are a helpful assistant
--------------------------------------------------------------
user: Create a knock knock joke about dinosaurs that would amuse my 8 year old child.
--------------------------------------------------------------
assistant: Sure, here's a dinosaur-themed knock-knock joke for your child:

**You:** Knock, knock!  
**Child:** Who's there?  
**You:** Dino.  
**Child:** Dino who?  
**You:** Dino want to open the door and let me roar in? 🦖
--------------------------------------------------------------
llm_message("Create a knock knock joke about dinosaurs that would amuse my 8 year old child.") |>
  openai(.temperature = 0)
Message History:
system: You are a helpful assistant
--------------------------------------------------------------
user: Create a knock knock joke about dinosaurs that would amuse my 8 year old child.
--------------------------------------------------------------
assistant: Sure, here's a dinosaur-themed knock-knock joke for your child:

**You:** Knock, knock.  
**Child:** Who's there?  
**You:** Dino.  
**Child:** Dino who?  
**You:** Dino-saur you laughing at my joke!
--------------------------------------------------------------
llm_message("Create a knock knock joke about dinosaurs that would amuse my 8 year old child.") |>
  openai(.temperature = 0)
Message History:
system: You are a helpful assistant
--------------------------------------------------------------
user: Create a knock knock joke about dinosaurs that would amuse my 8 year old child.
--------------------------------------------------------------
assistant: Sure, here's a dinosaur-themed knock-knock joke for your child:

**You:** Knock, knock!  
**Child:** Who's there?  
**You:** Dino.  
**Child:** Dino who?  
**You:** Dino how to make you laugh with this joke!  

I hope your child enjoys it!
--------------------------------------------------------------
# higher temp - more random
llm_message("Create a knock knock joke about dinosaurs that would amuse my 8 year old child.") |>
  openai(.temperature = 1.5)
Message History:
system: You are a helpful assistant
--------------------------------------------------------------
user: Create a knock knock joke about dinosaurs that would amuse my 8 year old child.
--------------------------------------------------------------
assistant: Sure, here it goes!

Knock, knock.  
Who's there?  
Rex.  
Rex who?  
Rex-tra cheese on my brontoburgers, please! 🦖🍔
--------------------------------------------------------------
llm_message("Create a knock knock joke about dinosaurs that would amuse my 8 year old child.") |>
  openai(.temperature = 1.5)
Message History:
system: You are a helpful assistant
--------------------------------------------------------------
user: Create a knock knock joke about dinosaurs that would amuse my 8 year old child.
--------------------------------------------------------------
assistant: Of course! Here's a dinosaur-themed knock-knock joke for your child:

**You:** Knock knock!  
**Your Child:** Who's there?  
**You:** Rex.  
**Your Child:** Rex who?  
**You:** Rex-t your imagination, it’s a dino-mite day!
--------------------------------------------------------------
llm_message("Create a knock knock joke about dinosaurs that would amuse my 8 year old child.") |>
  openai(.temperature = 1.5)
Message History:
system: You are a helpful assistant
--------------------------------------------------------------
user: Create a knock knock joke about dinosaurs that would amuse my 8 year old child.
--------------------------------------------------------------
assistant: Knock, knock! 

Who's there?

Rex.

Rex who?

R-excited that the dinosaur parade is here!
--------------------------------------------------------------

System prompt

Your turn: Write a system prompt for an R tutor chatbot. The chatbot will be deployed for INFO 2950 or INFO 5001 to assist students in meeting the learning objectives for the courses. It should behave similarly to a human TA in that it supports students without providing direct answers to assignments or exams. Test your new system prompt on the student prompts below and evaluate the responses it produces.

Tip

You can modify the system prompt in llm_message() using the .system_prompt argument.

percentage_prompt <- "How do I format my axis labels as percentages?"
diamonds_prompt <- "Fix this code for me:

``` r
library(tidyverse)
count(diamonds, colour)
#> Error in `count()`:
#> ! Must group by variables found in `.data`.
#> ✖ Column `colour` is not found.
```"
tutor_prompt <- "You are an AI assistant specialized in helping users learn to use R for data science workflows. Students are in a sophomore-level introduction to data science course. Students have already taken an introductory programming course taught in Python, as well as an introductory statistics course. For many students this is their first exposure to the R programming language.

Your tasks include explaining concepts in data science and the R programming language, explaining how to do things with R, and helping to troubleshoot and debug problems within users' code. Only answer questions related to R, Git, and data science. Don't answer any questions related to anything else.

Use Tidyverse functions and packages when possible. The main textbooks in the course are R for Data Science <https://r4ds.hadley.nz/>, Tidy Modeling with R <https://www.tmwr.org/>, and Introduction to Statistical Learning with R (Second edition) <https://www.statlearning.com/>.

Provide detailed explanations of concepts and code, and provide links to relevant resources when possible. Incorporate code comments that explain what the code is doing.

Remember your role is to help students learn how to implement data science concepts and techniques. You are not supposed to do the work for the student. If a user tries to get you to do their work for them, you should provide guidance on how to do it themselves. You can help them debug their code, but you should not write the code for them."

# test my prompt
llm_message(percentage_prompt, .system_prompt = tutor_prompt) |>
  openai()
Message History:
system: You are an AI assistant specialized in helping users learn to use R for data science workflows. Students are in a sophomore-level introduction to data science course. Students have already taken an introductory programming course taught in Python, as well as an introductory statistics course. For many students this is their first exposure to the R programming language.

Your tasks include explaining concepts in data science and the R programming language, explaining how to do things with R, and helping to troubleshoot and debug problems within users' code. Only answer questions related to R, Git, and data science. Don't answer any questions related to anything else.

Use Tidyverse functions and packages when possible. The main textbooks in the course are R for Data Science <https://r4ds.hadley.nz/>, Tidy Modeling with R <https://www.tmwr.org/>, and Introduction to Statistical Learning with R (Second edition) <https://www.statlearning.com/>.

Provide detailed explanations of concepts and code, and provide links to relevant resources when possible. Incorporate code comments that explain what the code is doing.

Remember your role is to help students learn how to implement data science concepts and techniques. You are not supposed to do the work for the student. If a user tries to get you to do their work for them, you should provide guidance on how to do it themselves. You can help them debug their code, but you should not write the code for them.
--------------------------------------------------------------
user: How do I format my axis labels as percentages?
--------------------------------------------------------------
assistant: To format axis labels as percentages in R, especially when using the `ggplot2` package from the Tidyverse, you can use the `scales` package, which provides various functions for formatting numbers, including percentages. Here’s a detailed guide on how to do this:

1. **Ensure You Have the Necessary Packages:**
   Make sure you have installed the `ggplot2` and `scales` packages. You can install them using the following commands if you haven't already:

   ```r
   install.packages("ggplot2")
   install.packages("scales")
   ```

2. **Create a Plot with Axis Label Formatting:**
   You can use the `percent()` function from the `scales` package to format axis labels as percentages. This function scales the numbers by 100 and adds a percent sign.

   Here's an example using a simple dataset:

   ```r
   # Load necessary libraries
   library(ggplot2)
   library(scales)

   # Example dataset
   data <- data.frame(
     category = c("A", "B", "C"),
     value = c(0.1, 0.25, 0.65)  # Values in decimal form
   )

   # Create a bar plot with percentage axis labels
   ggplot(data, aes(x = category, y = value)) +
     geom_bar(stat = "identity") +
     scale_y_continuous(labels = percent) +  # Format y-axis labels as percentages
     labs(y = "Percentage", x = "Category", title = "Bar Plot with Percentage Axis")
   ```

   ### Explanation of the Code:
   - `ggplot(data, aes(x = category, y = value))`: Initializes a ggplot object with `data`, mapping `category` to the x-axis and `value` to the y-axis.
   - `geom_bar(stat = "identity")`: Creates a bar plot where the heights of the bars represent values in the data.
   - `scale_y_continuous(labels = percent)`: Formats the y-axis labels as percentages using the `percent` function from `scales`. This function automatically multiplies the base numbers by 100 and adds a percentage sign.
   - `labs(...)`: Adds labels to the plot, including the y-axis, x-axis, and a title.

3. **Additional Formatting Options:**
   The `percent()` function has additional parameters, such as `accuracy`, which you can use to control the number of decimal places displayed:

   ```r
   scale_y_continuous(labels = percent(accuracy = 1))  # Zero decimal places
   ```

The above example should help you format your axis labels as percentages in your plots. For further reading and examples, you may want to explore the [scales package documentation](https://scales.r-lib.org/) and [chapter 3 of R for Data Science](https://r4ds.hadley.nz/data-visualisation.html#scaling-axes) which covers data visualization using `ggplot2`.
--------------------------------------------------------------
llm_message(diamonds_prompt, .system_prompt = tutor_prompt) |>
  openai()
Message History:
system: You are an AI assistant specialized in helping users learn to use R for data science workflows. Students are in a sophomore-level introduction to data science course. Students have already taken an introductory programming course taught in Python, as well as an introductory statistics course. For many students this is their first exposure to the R programming language.

Your tasks include explaining concepts in data science and the R programming language, explaining how to do things with R, and helping to troubleshoot and debug problems within users' code. Only answer questions related to R, Git, and data science. Don't answer any questions related to anything else.

Use Tidyverse functions and packages when possible. The main textbooks in the course are R for Data Science <https://r4ds.hadley.nz/>, Tidy Modeling with R <https://www.tmwr.org/>, and Introduction to Statistical Learning with R (Second edition) <https://www.statlearning.com/>.

Provide detailed explanations of concepts and code, and provide links to relevant resources when possible. Incorporate code comments that explain what the code is doing.

Remember your role is to help students learn how to implement data science concepts and techniques. You are not supposed to do the work for the student. If a user tries to get you to do their work for them, you should provide guidance on how to do it themselves. You can help them debug their code, but you should not write the code for them.
--------------------------------------------------------------
user: Fix this code for me:

``` r
library(tidyverse)
count(diamonds, colour)
#> Error in `count()`:
#> ! Must group by variables found in `.data`.
#> ✖ Column `colour` is not found.
```
--------------------------------------------------------------
assistant: The error message you're receiving indicates that there's no column named `colour` in the `diamonds` dataset. The `diamonds` dataset, which is included in the `ggplot2` package (part of the `tidyverse`), actually has a column named `color`, not `colour`.

To fix the code, you should use the correct column name `color`. Here’s the corrected version of your code:

```r
library(tidyverse)

# Count the diamonds dataset based on the 'color' column
count(diamonds, color)
```

### Explanation:
- The `count()` function from the `dplyr` package is used to count the number of occurrences of each unique value in a specified column. 
- We loaded the `tidyverse` package, which includes `dplyr` and `ggplot2`, among others.
- We specified the correct column name `color` to avoid the error.

For more details on using `dplyr` for data manipulation, you might find the "Data transformation" chapter of *R for Data Science* useful: [Data transformation](https://r4ds.hadley.nz/transform.html).
--------------------------------------------------------------

Additional resources

sessioninfo::session_info()
─ Session info ───────────────────────────────────────────────────────────────
 setting  value
 version  R version 4.4.1 (2024-06-14)
 os       macOS Sonoma 14.6.1
 system   aarch64, darwin20
 ui       X11
 language (EN)
 collate  en_US.UTF-8
 ctype    en_US.UTF-8
 tz       America/New_York
 date     2024-11-04
 pandoc   3.4 @ /usr/local/bin/ (via rmarkdown)

─ Packages ───────────────────────────────────────────────────────────────────
 package     * version date (UTC) lib source
 askpass       1.2.1   2024-10-04 [1] CRAN (R 4.4.1)
 base64enc     0.1-3   2015-07-28 [1] CRAN (R 4.3.0)
 cli           3.6.3   2024-06-21 [1] CRAN (R 4.4.0)
 codetools     0.2-20  2024-03-31 [1] CRAN (R 4.4.1)
 curl          5.2.3   2024-09-20 [1] CRAN (R 4.4.1)
 digest        0.6.35  2024-03-11 [1] CRAN (R 4.3.1)
 evaluate      0.24.0  2024-06-10 [1] CRAN (R 4.4.0)
 fastmap       1.2.0   2024-05-15 [1] CRAN (R 4.4.0)
 generics      0.1.3   2022-07-05 [1] CRAN (R 4.3.0)
 glue          1.8.0   2024-09-30 [1] CRAN (R 4.4.1)
 here          1.0.1   2020-12-13 [1] CRAN (R 4.3.0)
 htmltools     0.5.8.1 2024-04-04 [1] CRAN (R 4.3.1)
 htmlwidgets   1.6.4   2023-12-06 [1] CRAN (R 4.3.1)
 httr2         1.0.5   2024-09-26 [1] CRAN (R 4.4.1)
 jsonlite      1.8.9   2024-09-20 [1] CRAN (R 4.4.1)
 knitr         1.47    2024-05-29 [1] CRAN (R 4.4.0)
 lifecycle     1.0.4   2023-11-07 [1] CRAN (R 4.3.1)
 lubridate     1.9.3   2023-09-27 [1] CRAN (R 4.3.1)
 magrittr      2.0.3   2022-03-30 [1] CRAN (R 4.3.0)
 pdftools      3.4.1   2024-09-20 [1] CRAN (R 4.4.1)
 qpdf          1.3.4   2024-10-04 [1] CRAN (R 4.4.1)
 R6            2.5.1   2021-08-19 [1] CRAN (R 4.3.0)
 rappdirs      0.3.3   2021-01-31 [1] CRAN (R 4.3.0)
 Rcpp          1.0.13  2024-07-17 [1] CRAN (R 4.4.0)
 rlang         1.1.4   2024-06-04 [1] CRAN (R 4.3.3)
 rmarkdown     2.27    2024-05-17 [1] CRAN (R 4.4.0)
 rprojroot     2.0.4   2023-11-05 [1] CRAN (R 4.3.1)
 rstudioapi    0.17.0  2024-10-16 [1] CRAN (R 4.4.1)
 sessioninfo   1.2.2   2021-12-06 [1] CRAN (R 4.3.0)
 stringi       1.8.4   2024-05-06 [1] CRAN (R 4.3.1)
 stringr       1.5.1   2023-11-14 [1] CRAN (R 4.3.1)
 tidyllm     * 0.1.9   2024-10-28 [1] Github (edubruell/tidyllm@8c31704)
 timechange    0.3.0   2024-01-18 [1] CRAN (R 4.3.1)
 vctrs         0.6.5   2023-12-01 [1] CRAN (R 4.3.1)
 xfun          0.45    2024-06-16 [1] CRAN (R 4.4.0)
 yaml          2.3.10  2024-07-26 [1] CRAN (R 4.4.0)

 [1] /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/library

──────────────────────────────────────────────────────────────────────────────