The Oxide Programming Language

Oxide is an alternative syntax for Rust that compiles to identical binaries via the Rust toolchain. It keeps Rust's performance, safety, and ecosystem while offering a more approachable, Swift/Kotlin-inspired surface syntax.

This book teaches Oxide from first principles. If you already know Rust, you will recognize the concepts and semantics, but the syntax will feel different. If you are new to systems programming, this book is designed to be friendly, incremental, and practical.

What You Will Learn

  • The core language constructs: variables, types, functions, and control flow
  • Ownership, borrowing, and lifetimes (the same rules as Rust)
  • Structs, enums, pattern matching, and traits
  • Error handling with Result, ?, and Oxide's nullability operators
  • Modules, packages, and the Cargo build system
  • Concurrency and async programming
  • How to build real programs in Oxide

How to Use This Book

The chapters build on each other. Follow them in order if you are new to Rust or systems programming. If you are experienced, you can jump to the chapters you care about most.

Throughout the book, you will see Oxide examples and occasional Rust equivalents to highlight syntax differences. The semantics are the same unless explicitly stated otherwise.

Foreword

Oxide exists to make Rust more approachable without compromising what makes Rust powerful. It is not a new runtime or a new ecosystem; it is a different syntax for the same language and compiler.

This book is adapted from The Rust Programming Language and follows its structure, examples, and teaching philosophy while presenting everything in Oxide syntax. Rust's core ideas - ownership, borrowing, lifetimes, and fearless concurrency - remain intact. Oxide simply presents those ideas with syntax that many developers already find familiar.

If you are a Rustacean, we hope this book feels like a friendly translation. If you are new to Rust, we hope the Oxide syntax helps you get to the ideas faster. In either case, thank you for exploring Oxide with us.

The Oxide Programming Language

Welcome to The Oxide Programming Language, an introductory book about Oxide.

Oxide is an alternative syntax for Rust that provides familiar Swift/Kotlin-inspired conventions while producing identical binary output to equivalent Rust code. It maintains all of Rust's safety guarantees, ownership model, and performance while making systems programming more accessible.

Who This Book Is For

This book is for developers transitioning from Swift, Kotlin, TypeScript, or C# who want to leverage Rust's performance and safety guarantees without navigating its unfamiliar syntax.

How to Use This Book

This book follows the same structure as The Rust Programming Language book. If you're already familiar with Rust, you can skip to the specific chapters that cover Oxide's syntax differences.

Rust Syntax Fallbacks (Important)

Oxide intentionally changes parts of Rust's grammar. Those Rust spellings are syntax errors in Oxide code (for example, ::, =>, and let mut). However, when there is no Oxide-specific alternative and no conflict with Oxide syntax, Oxide accepts Rust syntax as a compatibility fallback. Examples include try { }, const { }, impl Trait in return position, async move { }, and Rust-style macros like format!.

Rust's Option<T>, Some, and None are also accepted for interop, but idiomatic Oxide uses T? and null.
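To make the interop concrete, here is a small sketch in plain Rust showing the Option<T> type that Oxide's T? maps to. The function name find_even is made up for illustration; only Option, Some, and None come from the text above.

```rust
// Rust's Option<T> underlies Oxide's `T?`; `None` corresponds to `null`.
fn find_even(values: &[i32]) -> Option<i32> {
    // `find` returns Some(first match) or None if nothing matches.
    values.iter().copied().find(|v| v % 2 == 0)
}

fn main() {
    let nums = [1, 3, 4, 7];
    // Oxide code can pattern-match on these same two variants for interop.
    match find_even(&nums) {
        Some(n) => println!("found {}", n),
        None => println!("no even number"),
    }
}
```

Idiomatic Oxide would express the same return type as Int? and the empty case as null.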

Getting Started

This chapter helps you set up the Oxide toolchain and write your first programs. You'll install the compiler, print a classic greeting, and use Cargo to create and manage a project.

What You'll Learn

  • How to install the Oxide compiler and toolchain
  • How to compile and run a simple .ox program
  • How to use Cargo to create and build a project

Chapter Roadmap

  1. Installation - Install the toolchain and verify your setup
  2. Hello, World! - Write and run a tiny Oxide program
  3. Hello, Cargo! - Create a project with Cargo and run it

If you already have the toolchain installed, feel free to skip ahead to the Hello World section.

Installation

Let's get Oxide set up on your computer! We'll install the necessary tools to compile and run Oxide programs.

What is Oxide?

Oxide is an alternative syntax for Rust that compiles to identical binary output via a rustc fork. If you're coming from Swift, Kotlin, TypeScript, or C#, Oxide's syntax will feel familiar while giving you access to Rust's powerful safety guarantees and performance.

Key points about Oxide:

  • Rust with familiar syntax: Oxide uses Swift/Kotlin-inspired conventions as an alternative to Rust's conventions
  • 100% compatible with Rust: Both .ox files (Oxide) and .rs files (Rust) can coexist in the same project
  • Same performance: Oxide compiles to the exact same machine code as Rust—zero runtime overhead
  • All of Rust's power: You get ownership, borrowing, the type system, traits, generics, and everything else that makes Rust safe

Let's look at a quick example of what Oxide looks like:

// Oxide: familiar and readable
fn greet(name: &str): String {
    "Hello, \(name)!"
}

Compared to equivalent Rust:

// Rust: using Rust's conventions
fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}

Both compile to identical code. The difference is the syntax.

Prerequisites

Before installing Oxide, you'll need:

  1. Rust toolchain - The Rust compiler (rustc), Cargo package manager, and Rust standard library
  2. A text editor or IDE - Any editor works; VS Code with Rust Analyzer is recommended
  3. A terminal - Oxide development happens on the command line

Since Oxide compiles via a rustc fork, having the standard Rust toolchain installed is essential, even though you'll primarily use Oxide syntax.

Installation

Step 1: Install Rust

First, install Rust using rustup. Open your terminal and run:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

This downloads and installs rustup, which manages your Rust toolchain. Follow the on-screen prompts.

On Windows, download and run rustup-init.exe from https://rustup.rs.

After installation, verify Rust is installed:

rustc --version
cargo --version

You should see version numbers for both.

Step 2: Install Oxide (Future Reference)

Note: The Oxide compiler is currently in development. This section describes how you'll install it once it's released.

Once available, you'll install the Oxide compiler via Cargo:

cargo install oxide-compiler

This installs the oxide-compiler command, a wrapper around the rustc fork that handles .ox files transparently.

Verify the installation:

oxide-compiler --version

You should see the Oxide compiler version.

For Now: Setting Up Your First Project

While the compiler is under development, you can explore Oxide syntax and concepts by:

  1. Reading this book and the examples
  2. Experimenting with the ideas in Rust code
  3. Preparing your workflow for when Oxide releases

Setting Up an Oxide Project

Creating a new Oxide project is straightforward using Cargo with the --oxide flag:

cargo new --oxide hello_oxide
cd hello_oxide

This creates an Oxide project with src/main.ox containing:

fn main() {
    println!("Hello, world!")
}

Development Status: Oxide v1.0 is currently in development. The Oxide compiler (a fork of rustc) and the --oxide flag for Cargo are being implemented and will be available when the Oxide toolchain is released. For early testing before the release, you can manually create a project with cargo new and rename src/main.rs to src/main.ox.

Here's what a basic project structure looks like:

hello_oxide/
├── Cargo.toml
└── src/
    ├── main.ox       # Your Oxide code
    └── lib.rs        # Optional: Rust library code (can coexist)

When you run cargo build or cargo run, Oxide files (.ox) and Rust files (.rs) work together seamlessly. Both compile to identical machine code.

Building and Running

Build and run your Oxide project:

cargo build
cargo run

The Oxide compiler processes your .ox files and compiles them via the rustc fork.

Troubleshooting

"command not found: oxide-compiler"

Ensure you've installed the Oxide compiler with cargo install oxide-compiler and that ~/.cargo/bin is in your PATH.

"Can't find Rust toolchain"

The Oxide compiler requires the Rust toolchain. Verify Rust is installed with rustc --version. If not, follow Step 1 above.

"Unknown file extension: .ox"

Make sure your file has the .ox extension (lowercase). Oxide files must use this specific extension to be recognized.

IDE/Editor support

If your editor doesn't recognize .ox files:

  • VS Code: Install the Oxide extension (coming soon)
  • Other editors: Configure them to treat .ox files as Rust for syntax highlighting until Oxide support is available

Moving Forward

You're now ready to start learning Oxide! The next chapter will guide you through writing your first program. Whether you're new to systems programming or transitioning from another language, Oxide provides syntax that may feel familiar while giving you access to Rust's excellent type system and safety guarantees.

If you get stuck, remember:

  • The examples in this book are all valid Oxide code
  • You can always reference the Rust equivalent to understand the underlying semantics
  • Rust's extensive ecosystem documentation applies directly to Oxide code
  • Both Oxide and Rust are valid choices—Oxide simply offers an alternative syntax

Happy coding!

Hello, World!

It's traditional to begin learning a new programming language by writing a little program that prints the text "Hello, world!" to the screen, so that's what we'll do here as you write your first Oxide program!

Creating a Project Directory

First, create a directory where you'll store your Oxide code. It doesn't matter to Oxide where your code lives, but for the exercises and projects in this book, we suggest creating a projects directory in your home directory and keeping all your projects there.

Open a terminal and run the following commands to create a projects directory and a hello_world project within it.

On Linux, macOS, and PowerShell on Windows, enter this:

$ mkdir ~/projects
$ cd ~/projects
$ mkdir hello_world
$ cd hello_world

On Windows CMD, enter this:

> mkdir %USERPROFILE%\projects
> cd /d %USERPROFILE%\projects
> mkdir hello_world
> cd hello_world

Writing and Running the Program

Next, make a new source file and call it main.ox. Oxide files always end with the .ox extension. If you're using more than one word in your filename, the convention is to use an underscore to separate them. For example, you'd use hello_world.ox rather than helloworld.ox.

Now open the main.ox file you just created and enter the code in Listing 1-1:

Filename: main.ox

fn main() {
    println!("Hello, world!")
}

Listing 1-1: A program that prints "Hello, world!"

Save the file and go back to your terminal window in the ~/projects/hello_world directory. On Linux or macOS, enter the following commands to compile and run the file:

$ oxide-compiler main.ox
$ ./main
Hello, world!

On Windows, enter:

> oxide-compiler main.ox
> .\main.exe
Hello, world!

Regardless of your operating system, the string Hello, world! should print to the terminal. If you didn't see this output, refer to the Troubleshooting section for ways to get help.

Anatomy of an Oxide Program

Let's review this "Hello, world!" program in detail. Here's the first piece:

fn main() {

}

These lines define a function named main. The main function is special: it's always the first code that runs in every executable Oxide program. We declare it using the keyword fn, and it takes no parameters and returns nothing. If there were parameters, they would go inside the parentheses (). Also notice that the function body is wrapped in curly brackets {}. Oxide requires curly brackets around all function bodies.

Next is this line:

    println!("Hello, world!")

This line does all the work in this little program. It calls the println! macro with the string "Hello, world!" as an argument. The ! indicates that println! is a macro rather than a normal function; we'll cover macros in detail in Chapter 19. The macro prints the string to the screen.

Semicolons and Optional Syntax

You might notice this program doesn't have a semicolon at the end of the println! line. In Oxide, semicolons are optional in most contexts. They're not required at the end of function calls or statements, so we'll omit them unless we need to separate two expressions on a single line:

fn main() {
    println!("Hello, world!")
}

Comparison with Rust

If you're familiar with Rust, you might notice some differences. In Rust, you would write:

fn main() {
    println!("Hello, world!");
}

The key differences between Oxide and Rust in this example:

  1. File extension: Oxide uses .ox instead of .rs
  2. Semicolons: Oxide makes them optional; Rust requires them
  3. Compiler: Oxide uses oxide-compiler (a rustc fork); Rust uses rustc

Everything else—the function syntax, the println! macro, and the overall program structure—is identical to Rust. This is because Oxide is fundamentally Rust with a different surface syntax.

Compiling and Running Are Separate Steps

You've just seen how to run a new program, but let's examine the process more fully.

Before running an Oxide program, you must compile it using the Oxide compiler, even for simple programs. The command is oxide-compiler followed by your source filename:

$ oxide-compiler main.ox

If you're on Windows, use:

> oxide-compiler main.ox

This command compiles your main.ox file and creates an executable called main (or main.exe on Windows). You can then run the executable:

On Linux or macOS:

$ ./main
Hello, world!

On Windows:

> .\main.exe
Hello, world!

Even for one-line programs, you need to explicitly compile before running. This follows Rust's ahead-of-time compilation approach, which differs from interpreted languages like Python or JavaScript where you can run code directly. The benefit is performance: compiled code runs much faster than interpreted code, and you catch errors at compile time rather than runtime.

Troubleshooting

The most common problems beginners encounter:

Command not found: oxide-compiler

This usually means the Oxide compiler isn't installed or isn't in your system's PATH. Refer to the Installation chapter to properly install Oxide.

Compilation errors

If you see errors when running oxide-compiler, double-check that:

  • Your file is saved as main.ox (or another .ox filename)
  • You're running the command from the directory containing your source file
  • Your code matches Listing 1-1 exactly, including the curly brackets and parentheses

Program output doesn't appear

  • On macOS or Linux, make sure you're using ./main (with the dot-slash) to run the executable
  • On Windows, make sure you're using .\main.exe (with the backslash)
  • If the window closes immediately on Windows, try running it from PowerShell or Command Prompt directly

Congratulations! You've officially written your first Oxide program. Next, we'll look at Oxide's package manager and build system, Cargo, which makes creating more complex programs much easier.

Hello, Cargo!

Cargo is Rust's excellent build system and package manager, and it makes Rust projects much easier to manage. You can use Cargo to create new projects, build them, test them, and distribute them. The good news is that Cargo works seamlessly with Oxide—.ox files are treated identically to .rs files. Oxide benefits directly from Cargo's powerful tooling and ecosystem.

What is Cargo?

Cargo handles several important tasks for you:

  • Building your code with cargo build
  • Running your code with cargo run
  • Testing your code with cargo test
  • Generating documentation with cargo doc
  • Publishing libraries to crates.io
  • Managing dependencies through Cargo.toml

Without Cargo, you'd need to manually compile your code with rustc, manage compilation flags, handle dependencies by hand, and coordinate all these tasks yourself. With Cargo, everything is automated and standardized.

Creating an Oxide Project

To create a new Oxide project, use Cargo with the --oxide flag:

$ cargo new --oxide hello_oxide
     Created binary (application) `hello_oxide` package
$ cd hello_oxide

This generates an Oxide project with src/main.ox containing:

fn main() {
    println!("Hello, world!")
}

Development Status: Oxide v1.0 is currently in development. The --oxide flag will be available in Cargo once the Oxide toolchain is released. For early testing before the release, you can manually create a project with cargo new and rename src/main.rs to src/main.ox.

Let's see what Cargo generated:

$ cd hello_oxide
$ ls -la
drwxr-xr-x  .git
drwxr-xr-x  .gitignore
-rw-r--r--  Cargo.toml
drwxr-xr-x  src

Let's look at the directory structure:

$ tree .
.
├── Cargo.toml
└── src
    └── main.ox

Understanding Cargo.toml

The Cargo.toml file is the manifest for your project. It contains metadata about your package and its dependencies. Here's what was created for us:

[package]
name = "hello_oxide"
version = "0.1.0"
edition = "2021"

[dependencies]

The [package] section contains metadata:

  • name: The name of your project
  • version: The current version of your code
  • edition: The Rust edition (Oxide projects use the same editions as Rust)

The [dependencies] section is where you'd list any external crates your project depends on. We don't have any dependencies yet.
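As a sketch of what that section looks like once it's populated, here is a manifest fragment adding the rand crate (the same dependency this book uses later in the guessing game chapter):

```toml
[dependencies]
# Crate name = semver version requirement; Cargo resolves and downloads it.
rand = "0.8.5"
```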

Understanding the src Directory

Cargo expects your source files to be in the src directory. With the --oxide flag, Cargo creates main.ox with a simple "Hello, World!" program:

fn main() {
    println!("Hello, world!")
}

This is valid Oxide code that compiles and runs just like Rust. Let's make it feel more Oxide-like by using string interpolation:

fn main() {
    let language = "Oxide"
    println!("Hello from \(language)!")
}

The differences from the original:

  • Added a variable to demonstrate Oxide's string interpolation: "\(language)"
  • Omitted semicolons (optional in Oxide)
  • The fn main() and println!() syntax work identically in both Oxide and Rust
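For comparison, a plain-Rust rendering of the interpolated version above might look like the following. Rust has no \(…) interpolation, so println! uses {} placeholders instead:

```rust
fn main() {
    let language = "Oxide";
    // Rust's placeholder syntax; Oxide would write "Hello from \(language)!"
    println!("Hello from {}!", language);
}
```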

Building and Running Your Oxide Project

Now let's build the project using Cargo:

$ cargo build
   Compiling hello_oxide v0.1.0
    Finished dev [unoptimized + debuginfo] target(s)

Congratulations! You've successfully compiled your first Oxide program with Cargo. The executable has been created in target/debug/hello_oxide (or target/debug/hello_oxide.exe on Windows).

To run it, use cargo run:

$ cargo run
   Compiling hello_oxide v0.1.0
    Finished dev [unoptimized + debuginfo] target(s)
     Running `target/debug/hello_oxide`
Hello from Oxide!

The cargo run command compiles the code and then runs the resulting executable—all in one command. It's very convenient for projects you're actively working on.

A Note About Cargo.lock

When you first build your project, Cargo creates a Cargo.lock file. This file keeps track of the exact versions of dependencies you've built with. For binary projects (like this one), you should commit Cargo.lock to version control so everyone working on the project uses the same dependency versions. For libraries, it's typically not committed.

Cargo Check

If you want to verify that your code compiles without actually producing an executable, you can use cargo check:

$ cargo check
    Checking hello_oxide v0.1.0
    Finished dev [unoptimized + debuginfo] target(s)

This command is much faster than cargo build because it stops after type-checking and doesn't generate code. It's perfect for getting quick feedback as you're writing code.

Using Cargo with Mixed Oxide and Rust

One powerful feature of Oxide is that you can freely mix .ox and .rs files in the same Cargo project. Cargo doesn't care which extension you use—both are compiled and linked together seamlessly.

For example, if you have existing Rust code you want to reuse, or if you're gradually migrating a project to Oxide, you can simply keep both file types in the same src/ directory:

src/
├── main.ox          # Oxide code
├── utils.rs         # Rust code
├── lib.ox           # Oxide library code
└── integrations.rs  # Rust integrations

Cargo will compile all of them together:

$ cargo build
   Compiling hello_oxide v0.1.0
    Finished dev [unoptimized + debuginfo] target(s)

You can call Rust functions from Oxide and vice versa—they're compiled to the same intermediate representation and linked together. This makes it easy to adopt Oxide incrementally.

Thinking in Terms of Cargo

As you work with Rust and Oxide projects, here's a mental model for Cargo:

Cargo is to systems programming as npm is to Node.js or pip is to Python. It's the standard way projects are organized, built, and distributed. Rust's community designed Cargo exceptionally well, and Oxide benefits from this thoughtful tooling.

Every Rust and Oxide project you'll encounter follows the same structure:

  • Source code in src/
  • Dependencies listed in Cargo.toml
  • Binaries in target/debug/ or target/release/
  • Tests alongside your source code

Learning Cargo now means you'll immediately understand the structure of any Rust or Oxide project you encounter. This standardization is one of Rust's great strengths.

Building for Release

So far, we've been building in development mode with cargo build. This mode is great for development because compilation is fast and includes debug information. However, it doesn't optimize your code, so the resulting executable is slower.

When you're ready to ship your code, or when you care about performance, use the --release flag:

$ cargo build --release
   Compiling hello_oxide v0.1.0
    Finished release [optimized] target(s)

The executable will be in target/release/hello_oxide. Release builds take longer to compile but run much faster. You can also use:

$ cargo run --release
   Compiling hello_oxide v0.1.0
    Finished release [optimized] target(s)
     Running `target/release/hello_oxide`
Hello from Oxide!

For benchmarking or production deployment, always use --release.

Summary

Congratulations! You've learned the basics of working with Cargo and Oxide:

  • cargo new creates a new project structure
  • cargo build compiles your project
  • cargo run compiles and runs your project
  • cargo check quickly verifies your code compiles
  • .ox files work seamlessly with Cargo just like .rs files
  • Mixed projects are fully supported—use both .ox and .rs files in the same crate
  • Release builds provide optimizations for production use

You now have everything you need to start building Oxide projects. In the next chapter, we'll explore the fundamental concepts of the language itself.

Programming a Guessing Game

Let's jump into Oxide by working through a hands-on project together! This chapter introduces several common Oxide concepts by showing you how to use them in a real program. You'll learn about var, input/output, string interpolation, match expressions, and more!

The project is a classic beginner programming problem: we'll implement a guessing game. Here's how it works: the program generates a random integer between 1 and 100, then prompts you to enter a guess. After you enter a guess, the program indicates whether the guess is too low or too high. If your guess is correct, the game prints a congratulatory message and exits.

Setting Up a New Project

Let's set up a new Oxide project using Cargo with the --oxide flag:

$ cargo new --oxide guessing_game
$ cd guessing_game

This creates a new Oxide project with src/main.ox ready to go.

Development Status: Oxide v1.0 is currently in development. The --oxide flag will be available in Cargo once the Oxide toolchain is released. For early testing before the release, you can manually create a project with cargo new and rename src/main.rs to src/main.ox.

Now let's look at what Cargo generated. Open Cargo.toml:

[package]
name = "guessing_game"
version = "0.1.0"
edition = "2021"

[dependencies]

And here's the initial src/main.ox:

fn main() {
    println!("Hello, world!")
}

Let's test it with cargo run:

$ cargo run
   Compiling guessing_game v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 1.23s
     Running `target/debug/guessing_game`
Hello, world!

Great! The cargo run command compiles and runs the project in one step. Now let's make it into a guessing game.

Processing a Guess

The first part of the guessing game program will ask for user input, process that input, and check that the input is in the expected form. To start, we'll allow the player to input a guess.

Filename: src/main.ox

import std.io

fn main() {
    println!("Guess the number!")

    println!("Please input your guess.")

    var guess = String.new()

    io.stdin()
        .readLine(&mut guess)
        .expect("Failed to read line")

    println!("You guessed: \(guess)")
}

This code contains a lot of information, so let's go through it line by line.

Getting User Input

To obtain user input and then print the result, we need the io library from the standard library:

import std.io

In Oxide, we use import with dot notation. This replaces Rust's use std::io; syntax. Note that :: does not exist in Oxide - dot notation is the only path separator.

The main function is the entry point:

fn main() {

Next, we use println! to print a prompt:

println!("Guess the number!")
println!("Please input your guess.")

Storing Values with Variables

Now we'll create a variable to store the user input:

var guess = String.new()

In Oxide, we use var to create a mutable variable. This is equivalent to Rust's let mut. The variable guess is bound to a new, empty String. In Oxide, String is the same type as in Rust—a growable, UTF-8 encoded text type.
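The correspondence can be sketched in plain Rust, which is how the Oxide compiler would interpret the binding:

```rust
fn main() {
    // Oxide's `var guess = String.new()` corresponds to Rust's `let mut`.
    let mut guess = String::new();
    guess.push_str("6");
    assert_eq!(guess, "6");

    // Oxide's `let` corresponds to Rust's plain `let`: immutable by default.
    let answer = 42;
    assert_eq!(answer, 42);
}
```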

Receiving User Input

Next, we call readLine on the standard input handle:

io.stdin()
    .readLine(&mut guess)

The stdin function returns an instance of std.io.Stdin, a type representing a handle to the standard input. The .readLine(&mut guess) method reads user input into the string we pass to it.

We pass &mut guess as an argument. The & indicates this is a reference, which gives you a way to let multiple parts of your code access one piece of data without copying it into memory multiple times. References are immutable by default, so we need &mut to make it mutable.
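The semantics are identical to Rust's, so a minimal Rust sketch shows why the &mut is needed. The helper name append_guess is made up for illustration:

```rust
// A callee that takes `&mut String` can mutate the caller's buffer
// without taking ownership of it, just like readLine does.
fn append_guess(buffer: &mut String) {
    buffer.push_str("42\n");
}

fn main() {
    let mut guess = String::new();
    append_guess(&mut guess); // `&guess` alone would not allow mutation
    assert_eq!(guess.trim(), "42");
}
```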

Handling Potential Failure with Result

We're still working on this line:

    .expect("Failed to read line")

The readLine method returns a Result value. Result is an enum with variants Ok and Err. The Ok variant indicates the operation was successful, and inside Ok is the successfully generated value. The Err variant means the operation failed, and contains information about how or why the operation failed.

The expect method will cause the program to crash and display the message you passed to it if the Result is an Err value. If you don't call expect, the program will still compile, but you'll get a warning that the Result value is unused, because an error might have occurred but gone unhandled.
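Since Oxide's Result is Rust's Result, the two outcomes can be sketched in plain Rust (the values here are illustrative, not from the game):

```rust
fn main() {
    // Ok carries the value; `expect` unwraps it, or crashes with the message.
    let n: i32 = "6\n".trim().parse().expect("Failed to parse");
    assert_eq!(n, 6);

    // An Err value would make `expect` panic; here we just inspect it.
    let bad: Result<i32, _> = "six".parse();
    assert!(bad.is_err());
}
```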

Printing Values with String Interpolation

Finally, we print the guess:

println!("You guessed: \(guess)")

Oxide supports string interpolation using \(expression) syntax. This is one of Oxide's ergonomic features that makes string formatting more concise. It's equivalent to Rust's format! macro.
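For comparison, the Rust equivalent uses {} placeholders, either positional or (since Rust 1.58) capturing an in-scope variable by name:

```rust
fn main() {
    let guess = "6";
    println!("You guessed: {}", guess); // positional placeholder
    println!("You guessed: {guess}");   // inline named capture (Rust 1.58+)
}
```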

Let's test this code:

$ cargo run
   Compiling guessing_game v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 2.34s
     Running `target/debug/guessing_game`
Guess the number!
Please input your guess.
6
You guessed: 6

Great! The first part is working.

Generating a Secret Number

Next, we need to generate a secret number that the user will try to guess. The secret number should be different every time so the game is fun to play more than once. Let's use a random number between 1 and 100.

Rust's standard library doesn't include random number functionality, so we'll use the rand crate. Add it to Cargo.toml:

Filename: Cargo.toml

[dependencies]
rand = "0.8.5"

Now update src/main.ox:

Filename: src/main.ox

import std.io
import rand.Rng

fn main() {
    println!("Guess the number!")

    let secretNumber = rand.threadRng().genRange(1..=100)

    println!("The secret number is: \(secretNumber)")

    println!("Please input your guess.")

    var guess = String.new()

    io.stdin()
        .readLine(&mut guess)
        .expect("Failed to read line")

    println!("You guessed: \(guess)")
}

We add import rand.Rng. The Rng trait defines methods that random number generators implement.

We call rand.threadRng() to get a random number generator, then call genRange with the range 1..=100 (inclusive on both ends).
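The 1..=100 spelling is the same inclusive-range syntax Rust uses; a small std-only Rust sketch of its behavior (independent of the rand crate):

```rust
fn main() {
    let range = 1..=100;
    // An inclusive range contains both endpoints...
    assert!(range.contains(&1));
    assert!(range.contains(&100));
    // ...and therefore yields exactly 100 values.
    assert_eq!((1..=100).count(), 100);
}
```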

Note that we use let (not var) for secretNumber because we won't be changing it.

Let's run it:

$ cargo run
   Compiling rand v0.8.5
   Compiling guessing_game v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 3.45s
     Running `target/debug/guessing_game`
Guess the number!
The secret number is: 42
Please input your guess.
25
You guessed: 25

You should see a different random number, and it should change each time you run the program.

Comparing the Guess to the Secret Number

Now that we have user input and a random number, we can compare them:

Filename: src/main.ox

import std.io
import std.cmp.Ordering
import rand.Rng

fn main() {
    println!("Guess the number!")

    let secretNumber = rand.threadRng().genRange(1..=100)

    println!("The secret number is: \(secretNumber)")

    println!("Please input your guess.")

    var guess = String.new()

    io.stdin()
        .readLine(&mut guess)
        .expect("Failed to read line")

    let guess: Int = guess.trim().parse().expect("Please type a number!")

    println!("You guessed: \(guess)")

    match guess.cmp(&secretNumber) {
        Ordering.Less -> println!("Too small!"),
        Ordering.Greater -> println!("Too big!"),
        Ordering.Equal -> println!("You win!"),
    }
}

We import Ordering, which is an enum with variants Less, Greater, and Equal.

We convert the guess string to a number:

let guess: Int = guess.trim().parse().expect("Please type a number!")

We create a new variable also named guess. Oxide, like Rust, allows us to shadow the previous value. The trim method removes whitespace, and parse converts the string to a number. We specify the type as Int (Oxide's alias for i32).
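Shadowing works the same way in Rust, so the conversion step can be sketched there directly (recall that Oxide's Int is i32):

```rust
fn main() {
    // First binding: the raw input String, including whitespace.
    let guess = String::from("  42\n");
    // Second binding shadows the first with a parsed i32.
    let guess: i32 = guess.trim().parse().expect("Please type a number!");
    assert_eq!(guess, 42);
}
```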

Then we use a match expression to compare the guess to the secret number:

match guess.cmp(&secretNumber) {
    Ordering.Less -> println!("Too small!"),
    Ordering.Greater -> println!("Too big!"),
    Ordering.Equal -> println!("You win!"),
}

In Oxide, we use -> for match arms (Rust's => is invalid in Oxide), and dot notation for enum variants (Ordering.Less). Rust's double colon syntax (Ordering::Less) does not exist in Oxide - it will cause a syntax error.
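For comparison, the same match written in Rust's spelling, with => arms and :: paths. The helper function judge is made up to keep the example testable:

```rust
use std::cmp::Ordering;

// Rust spells the arms with `=>` and the enum variants with `::`.
fn judge(guess: i32, secret: i32) -> &'static str {
    match guess.cmp(&secret) {
        Ordering::Less => "Too small!",
        Ordering::Greater => "Too big!",
        Ordering::Equal => "You win!",
    }
}

fn main() {
    println!("{}", judge(76, 58)); // prints "Too big!"
}
```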

Let's try it:

$ cargo run
   Compiling guessing_game v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 1.23s
     Running `target/debug/guessing_game`
Guess the number!
The secret number is: 58
Please input your guess.
76
You guessed: 76
Too big!

Nice! But we can only make one guess. Let's fix that with a loop.

Allowing Multiple Guesses with Looping

The loop keyword creates an infinite loop:

Filename: src/main.ox

import std.io
import std.cmp.Ordering
import rand.Rng

fn main() {
    println!("Guess the number!")

    let secretNumber = rand.threadRng().genRange(1..=100)

    loop {
        println!("Please input your guess.")

        var guess = String.new()

        io.stdin()
            .readLine(&mut guess)
            .expect("Failed to read line")

        let guess: Int = guess.trim().parse().expect("Please type a number!")

        println!("You guessed: \(guess)")

        match guess.cmp(&secretNumber) {
            Ordering.Less -> println!("Too small!"),
            Ordering.Greater -> println!("Too big!"),
            Ordering.Equal -> {
                println!("You win!")
                break
            },
        }
    }
}

We've moved everything after the secret number generation into a loop. When the guess equals the secret number, we print "You win!" and break to exit the loop.

$ cargo run
   Compiling guessing_game v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 1.45s
     Running `target/debug/guessing_game`
Guess the number!
Please input your guess.
50
You guessed: 50
Too small!
Please input your guess.
75
You guessed: 75
Too big!
Please input your guess.
62
You guessed: 62
You win!

Excellent! Now it loops until you guess correctly.

Handling Invalid Input

Let's make the game more robust by handling invalid input gracefully:

Filename: src/main.ox

import std.io
import std.cmp.Ordering
import rand.Rng

fn main() {
    println!("Guess the number!")

    let secretNumber = rand.threadRng().genRange(1..=100)

    loop {
        println!("Please input your guess.")

        var guess = String.new()

        io.stdin()
            .readLine(&mut guess)
            .expect("Failed to read line")

        let guess: Int = match guess.trim().parse() {
            Ok(num) -> num,
            Err(_) -> {
                println!("Please type a number!")
                continue
            },
        }

        println!("You guessed: \(guess)")

        match guess.cmp(&secretNumber) {
            Ordering.Less -> println!("Too small!"),
            Ordering.Greater -> println!("Too big!"),
            Ordering.Equal -> {
                println!("You win!")
                break
            },
        }
    }
}

Instead of crashing with expect, we now use a match expression to handle the Result from parse. If parse returns Ok, we extract the number. If it returns Err, we print a message and continue to the next iteration of the loop.
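The same pattern in plain Rust, sketched with a fixed list of inputs in place of stdin:

```rust
fn main() {
    for input in ["42", "abc"] {
        let guess: i32 = match input.trim().parse() {
            Ok(num) => num,
            Err(_) => {
                println!("Please type a number!");
                continue; // skip to the next input
            }
        };
        println!("You guessed: {guess}");
    }
}
```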

Let's test it:

$ cargo run
   Compiling guessing_game v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 1.23s
     Running `target/debug/guessing_game`
Guess the number!
Please input your guess.
abc
Please type a number!
Please input your guess.
50
You guessed: 50
Too small!
Please input your guess.
62
You guessed: 62
You win!

Perfect! Note that, unlike our very first version, the program no longer prints the secret number - that debug line is gone, so the game is actually playable. Here is the final code:

Filename: src/main.ox

import std.io
import std.cmp.Ordering
import rand.Rng

fn main() {
    println!("Guess the number!")

    let secretNumber = rand.threadRng().genRange(1..=100)

    loop {
        println!("Please input your guess.")

        var guess = String.new()

        io.stdin()
            .readLine(&mut guess)
            .expect("Failed to read line")

        let guess: Int = match guess.trim().parse() {
            Ok(num) -> num,
            Err(_) -> {
                println!("Please type a number!")
                continue
            },
        }

        println!("You guessed: \(guess)")

        match guess.cmp(&secretNumber) {
            Ordering.Less -> println!("Too small!"),
            Ordering.Greater -> println!("Too big!"),
            Ordering.Equal -> {
                println!("You win!")
                break
            },
        }
    }
}

Summary

This project was a hands-on way to introduce you to many new Oxide concepts: var, match, functions, external crates, and more. In the next chapters, you'll learn about these concepts in more detail.

This project demonstrated Oxide's practical syntax while building on Rust's excellent type system and error handling. The guessing game uses the same robust foundation as any Rust program—ownership, borrowing, and the type system—but with syntax that may feel more familiar if you're coming from languages like Swift, Kotlin, or TypeScript.

In Chapter 3, you'll learn about concepts that most programming languages have, such as variables, data types, and functions, and see how they work in Oxide.

Common Programming Concepts

This chapter introduces the core building blocks of Oxide. If you have used other languages, many of these ideas will feel familiar, but the syntax and rules are tailored to Oxide and Rust's safety model.

What You'll Learn

  • Immutable and mutable bindings with let and var
  • Core scalar and compound data types
  • How to define and call functions
  • Comment styles and documentation basics
  • Control flow with if, match, and loops

A Quick Tour

Here is a small program that touches several of the concepts in this chapter:

fn main() {
    let name = "Oxide"
    var counter = 0

    while counter < 3 {
        println!("Hello, \(name)! Count = \(counter)")
        counter = counter + 1
    }

    let grade = 92
    let letter = match grade {
        90..=100 -> "A",
        80..=89 -> "B",
        70..=79 -> "C",
        60..=69 -> "D",
        _ -> "F",
    }

    println!("Final grade: \(letter)")
}

We'll unpack each of these pieces in the sections that follow.

Variables and Mutability

Variables are fundamental to any programming language. In Oxide, every variable binding follows clear rules about mutability that help prevent bugs and make your code easier to reason about. If you're familiar with Rust, you'll find Oxide's approach very similar, but with syntax inspired by languages like Swift and Kotlin.

Immutable Bindings with let

When you declare a variable with let, it creates an immutable binding. This means once you assign a value, you cannot change it:

fn main() {
    let x = 5
    println!("The value of x is: \(x)")
}

If you try to reassign an immutable variable, the compiler will stop you:

fn main() {
    let x = 5
    println!("The value of x is: \(x)")
    x = 6  // Error: cannot assign twice to immutable variable
}

This might seem restrictive at first, but immutability is a powerful feature. When a value cannot change, you can trust that it stays the same throughout your program. This eliminates entire categories of bugs and makes code easier to understand, especially in larger projects.

Rust comparison: The let keyword works exactly as it does in Rust. The only visible difference is that semicolons are optional in Oxide.

#![allow(unused)]
fn main() {
// Rust
let x = 5;
println!("The value of x is: {}", x);
}

Mutable Bindings with var

Of course, sometimes you need values that can change. Oxide uses the var keyword for mutable bindings:

fn main() {
    var x = 5
    println!("The value of x is: \(x)")
    x = 6
    println!("The value of x is: \(x)")
}

This outputs:

The value of x is: 5
The value of x is: 6

With var, you can modify the value as many times as needed. Use var when you know a value will change, like a counter in a loop or an accumulator.

Rust comparison: Oxide's var is equivalent to Rust's let mut. The choice of var is intentional, as it's a familiar keyword from Swift, Kotlin, JavaScript, and many other languages.

#![allow(unused)]
fn main() {
// Rust equivalent
let mut x = 5;
x = 6;
}

Constants

Constants are values that are bound to a name and cannot change throughout the entire program. Unlike let bindings, constants:

  • Must always have a type annotation
  • Must be set to a constant expression (evaluated at compile time)
  • Can be declared in any scope, including the global scope
  • Use the const keyword and SCREAMING_SNAKE_CASE by convention

const THREE_HOURS_IN_SECONDS: Int = 60 * 60 * 3

fn main() {
    println!("Three hours is \(THREE_HOURS_IN_SECONDS) seconds")
}

Constants are useful for values that many parts of your code need to know about, like the maximum number of players in a game or the speed of light. Naming hardcoded values as constants helps future readers understand the significance of the value.

Rust comparison: Constants work exactly as they do in Rust. The only difference is using Oxide's type aliases (Int instead of i32).

#![allow(unused)]
fn main() {
// Rust
const THREE_HOURS_IN_SECONDS: i32 = 60 * 60 * 3;
}

Shadowing

You can declare a new variable with the same name as a previous variable. This is called shadowing, and the new variable shadows the previous one:

fn main() {
    let x = 5

    let x = x + 1

    {
        let x = x * 2
        println!("The value of x in the inner scope is: \(x)")
    }

    println!("The value of x is: \(x)")
}

This outputs:

The value of x in the inner scope is: 12
The value of x is: 6

Shadowing is different from marking a variable as mutable with var. If we try to reassign without let, we get a compile-time error. By using let, we can perform transformations on a value but have the variable be immutable after those transformations.
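Rust comparison: shadowing behaves identically in Rust; this runnable version checks the value at each scope:

```rust
fn main() {
    let x = 5;
    let x = x + 1; // shadows the first x
    {
        let x = x * 2; // shadows again, only inside this block
        assert_eq!(x, 12);
    }
    assert_eq!(x, 6); // the inner shadow has gone out of scope
}
```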

Another advantage of shadowing is that you can change the type of a value while reusing the same name:

fn main() {
    let spaces = "   "
    let spaces = spaces.len()
    println!("Number of spaces: \(spaces)")
}

The first spaces is a &str, and the second spaces is a UIntSize (the return type of .len()). This is allowed because we're creating a new variable with let.

If we tried to use var and reassign, we'd get a type mismatch error:

fn main() {
    var spaces = "   "
    spaces = spaces.len()  // Error: mismatched types
}

Type Annotations

Oxide can usually infer the type of a variable from its value, but you can also be explicit:

fn main() {
    let count: Int = 42
    let name: String = "Alice".toString()
    let active: Bool = true

    var score: Float = 0.0
    score = 99.5
}

Type annotations use a colon followed by the type, consistent with languages like TypeScript, Kotlin, and Swift. Oxide provides intuitive type aliases for primitives:

Oxide Type   Rust Equivalent
Int          i32
Float        f64
Bool         bool
UIntSize     usize

We'll explore all the available types in the next section.

Rust comparison: The annotation syntax is the same as Rust, just with Oxide's type names.

#![allow(unused)]
fn main() {
// Rust
let count: i32 = 42;
let active: bool = true;
}

When to Use let vs var

As a general guideline:

  • Default to let: Start with immutable bindings. This makes your code safer and easier to reason about.
  • Use var when needed: If you find you need to modify a value, change it to var.

The compiler will tell you if you've marked something as immutable but try to change it. Following this approach helps you take advantage of the safety that Oxide provides while still having the flexibility to use mutable state when appropriate.

Summary

  • Use let for immutable bindings that cannot change after initialization
  • Use var for mutable bindings that you need to modify
  • Use const for compile-time constants with global scope
  • Shadowing allows reusing names while keeping immutability benefits
  • Type annotations use the format name: Type

Now that you understand how variables work, let's look at the different data types available in Oxide.

Data Types

Every value in Oxide has a data type, which tells the compiler what kind of data is being specified so it knows how to work with that data. Oxide is statically typed, meaning the compiler must know the types of all variables at compile time. The compiler can usually infer types from values and how we use them, but when many types are possible, we must add a type annotation.

Scalar Types

A scalar type represents a single value. Oxide has four primary scalar types: integers, floating-point numbers, Booleans, and characters. Oxide provides intuitive type aliases that will feel familiar if you're coming from Swift, Kotlin, or TypeScript.

Integer Types

An integer is a number without a fractional component. Oxide provides signed and unsigned integers of various sizes:

Oxide Type   Rust Equivalent   Size     Range
Int8         i8                8-bit    -128 to 127
Int16        i16               16-bit   -32,768 to 32,767
Int32        i32               32-bit   -2.1B to 2.1B
Int64        i64               64-bit   Very large
Int          i32               32-bit   Default signed integer
IntSize      isize             arch     Pointer-sized signed
UInt8        u8                8-bit    0 to 255
UInt16       u16               16-bit   0 to 65,535
UInt32       u32               32-bit   0 to 4.3B
UInt64       u64               64-bit   Very large
UInt         u32               32-bit   Default unsigned integer
UIntSize     usize             arch     Pointer-sized unsigned

The Int type (which maps to Rust's i32) is the default choice for integers and is generally the fastest, even on 64-bit systems. Use UIntSize when indexing collections, as it matches the size of memory addresses on your system.

fn main() {
    let age: Int = 30
    let temperature: Int = -15
    let count: UIntSize = 1000

    // Type inference works too
    let inferred = 42  // Defaults to Int
}

Integer Literals

You can write integer literals in various forms:

Literal   Example
Decimal   98_222
Hex       0xff
Octal     0o77
Binary    0b1111_0000

Note that underscores can be inserted for readability: 1_000_000 is the same as 1000000.
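These literal forms compile unchanged as Rust, so we can check them directly:

```rust
fn main() {
    assert_eq!(98_222, 98222);    // underscores are ignored
    assert_eq!(0xff, 255);        // hex
    assert_eq!(0o77, 63);         // octal
    assert_eq!(0b1111_0000, 240); // binary
    println!("all literal forms agree");
}
```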

Floating-Point Types

Oxide has two floating-point types for numbers with decimal points:

Oxide Type   Rust Equivalent   Size     Precision
Float32      f32               32-bit   ~6-7 digits
Float64      f64               64-bit   ~15-16 digits
Float        f64               64-bit   Default floating-point

The Float type (which maps to Rust's f64) is the default because modern CPUs handle double-precision floats nearly as fast as single-precision, and it provides more accuracy.

fn main() {
    let pi: Float = 3.14159
    let temperature = 98.6  // Inferred as Float
    let precise: Float64 = 2.718281828459045

    // Scientific notation
    let avogadro: Float = 6.022e23
}

Numeric Operations

Oxide supports the standard mathematical operations: addition, subtraction, multiplication, division, and remainder:

fn main() {
    // Addition
    let sum = 5 + 10

    // Subtraction
    let difference = 95.5 - 4.3

    // Multiplication
    let product = 4 * 30

    // Division
    let quotient = 56.7 / 32.2
    let truncated = 5 / 3  // Results in 1 (integer division)

    // Remainder
    let remainder = 43 % 5
}

The Boolean Type

Oxide's Boolean type has two possible values: true and false. Booleans are one byte in size and are specified using the Bool type:

fn main() {
    let isActive: Bool = true
    let isComplete = false  // Inferred as Bool

    // Booleans are often the result of comparisons
    let isGreater = 5 > 3  // true
}

Rust comparison: Oxide uses Bool instead of bool, following the convention of capitalizing type names.

The Character Type

The char type represents a single Unicode scalar value. Character literals use single quotes:

fn main() {
    let letter = 'a'
    let emoji = '😀'
    let heart = '❤'
}

The char type is four bytes and represents a Unicode Scalar Value, which means it can represent much more than just ASCII.
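Rust comparison: char is the same four-byte Unicode scalar type in Rust, which we can verify directly:

```rust
fn main() {
    let letter = 'a';
    let heart = '❤';

    // Every char occupies four bytes, regardless of the character
    assert_eq!(std::mem::size_of::<char>(), 4);
    assert_eq!(letter as u32, 97);    // Unicode scalar value of 'a'
    assert_eq!(heart as u32, 0x2764); // U+2764 HEAVY BLACK HEART
}
```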

Compound Types

Compound types can group multiple values into one type. Oxide has two primitive compound types: tuples and arrays.

Tuples

A tuple groups together values of different types into one compound type. Tuples have a fixed length; once declared, they cannot grow or shrink.

fn main() {
    let tup: (Int, Float, Bool) = (500, 6.4, true)

    // Destructuring
    let (x, y, z) = tup
    println!("The value of y is: \(y)")

    // Access by index
    let fiveHundred = tup.0
    let sixPointFour = tup.1
    let isTrue = tup.2
}

The tuple without any values, (), is called the unit type and represents an empty value or empty return type.
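Rust comparison: tuples work the same way in Rust, including the unit type:

```rust
fn main() {
    let tup: (i32, f64, bool) = (500, 6.4, true);

    // Destructuring
    let (x, y, z) = tup;
    assert_eq!(x, 500);
    assert_eq!(y, 6.4);
    assert!(z);

    // Access by index
    assert_eq!(tup.0, 500);

    // The unit type: a tuple with no values
    let unit: () = ();
    assert_eq!(unit, ());
}
```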

Arrays

Arrays contain multiple values of the same type with a fixed length. Use square brackets for array literals:

fn main() {
    let numbers: [Int; 5] = [1, 2, 3, 4, 5]
    let months: [&str; 12] = [
        "January", "February", "March", "April",
        "May", "June", "July", "August",
        "September", "October", "November", "December"
    ]

    // Initialize with same value
    let zeros: [Int; 5] = [0; 5]  // [0, 0, 0, 0, 0]

    // Accessing elements
    let first = numbers[0]
    let second = numbers[1]
}

Arrays are useful when you want data on the stack rather than the heap, or when you need a fixed number of elements. For a collection that can grow or shrink, use Vec<T> instead.
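Rust comparison: the array syntax is identical in Rust:

```rust
fn main() {
    let numbers: [i32; 5] = [1, 2, 3, 4, 5];
    let zeros = [0; 5]; // five zeros

    assert_eq!(zeros, [0, 0, 0, 0, 0]);
    assert_eq!(numbers[0], 1);
    assert_eq!(numbers.len(), 5);
}
```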

The String Type

Oxide has two string types:

  • str: A string slice, usually seen as &str. This is an immutable reference to string data.
  • String: A growable, heap-allocated string.

fn main() {
    // String literal (type is &str)
    let greeting = "Hello, world!"

    // Create an owned String
    let name: String = "Alice".toString()

    // String with interpolation
    let message = "Hello, \(name)!"
    println!("\(message)")
}

String interpolation with \(expression) is a key Oxide feature. Any expression inside \() is evaluated and converted to a string:

fn main() {
    let count = 42
    let price = 19.99

    println!("Count: \(count), Price: $\(price)")
    println!("Total: $\(count as Float * price)")
}

Rust comparison: Rust uses format!("{}", x) for string formatting. Oxide's \(x) syntax is inspired by Swift and is more concise.

#![allow(unused)]
fn main() {
// Rust
println!("Count: {}, Price: ${}", count, price);
}

Nullable Types

Oxide has first-class support for nullable (optional) types using the ? suffix. A T? type can hold either a value of type T or null:

fn main() {
    let maybeNumber: Int? = 42
    let nothing: String? = null

    // Check if value exists
    if let number = maybeNumber {
        println!("Got number: \(number)")
    }

    // Provide a default with ??
    let value = maybeNumber ?? 0

    // Force unwrap with !! (use carefully!)
    let forced = maybeNumber!!
}

When a context expects T?, you can assign or return a T directly and Oxide will implicitly wrap it in Some(...). You can still write Some(...) explicitly when you want to be clear.

The T? syntax is equivalent to Rust's Option<T>, and null is equivalent to None:

Oxide     Rust
Int?      Option<i32>
String?   Option<String>
null      None
Some(x)   Some(x)
x ?? y    x.unwrapOr(y)
x!!       x.unwrap()

fn findUser(id: Int): User? {
    if id == 1 {
        User { name: "Alice".toString() }
    } else {
        null
    }
}

fn main() {
    let user = findUser(1) ?? User { name: "Guest".toString() }
    println!("Hello, \(user.name)")
}
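The same findUser example translated to the Rust Option type that T? compiles to (the User struct here is the minimal shape the example assumes):

```rust
struct User {
    name: String,
}

// `User?` in Oxide becomes Option<User> in Rust
fn find_user(id: i32) -> Option<User> {
    if id == 1 {
        Some(User { name: "Alice".to_string() })
    } else {
        None // Oxide's null
    }
}

fn main() {
    // Oxide's `??` corresponds to unwrap_or here
    let user = find_user(2).unwrap_or(User { name: "Guest".to_string() });
    assert_eq!(user.name, "Guest");
    println!("Hello, {}", user.name);
}
```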

Collection Types

Oxide uses Rust's standard collection types directly:

Vectors

Vec<T> is a growable array type:

fn main() {
    // Create a vector with the vec! macro
    var numbers: Vec<Int> = vec![1, 2, 3]

    // Add elements
    numbers.push(4)
    numbers.push(5)

    // Access elements
    let first = numbers[0]
    let maybeTenth: Int? = numbers.get(10).copied()

    // Iterate
    for num in numbers.iter() {
        println!("\(num)")
    }
}
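Rust comparison: Vec is used exactly the same way; note how get returns an Option rather than panicking on an out-of-bounds index:

```rust
fn main() {
    let mut numbers: Vec<i32> = vec![1, 2, 3];
    numbers.push(4);
    numbers.push(5);

    assert_eq!(numbers[0], 1);
    // get returns Option<&T>; copied() turns Option<&i32> into Option<i32>
    assert_eq!(numbers.get(10).copied(), None);
    assert_eq!(numbers.len(), 5);
}
```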

HashMaps

HashMap<K, V> stores key-value pairs:

import std.collections.HashMap

fn main() {
    var scores: HashMap<String, Int> = HashMap.new()

    scores.insert("Blue".toString(), 10)
    scores.insert("Red".toString(), 50)

    let blueScore = scores.get(&"Blue".toString())

    for (team, score) in scores.iter() {
        println!("\(team): \(score)")
    }
}
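Rust comparison: the HashMap API is the same; only the import path and naming conventions differ:

```rust
use std::collections::HashMap;

fn main() {
    let mut scores: HashMap<String, i32> = HashMap::new();
    scores.insert("Blue".to_string(), 10);
    scores.insert("Red".to_string(), 50);

    // get borrows the key and returns Option<&V>
    assert_eq!(scores.get("Blue"), Some(&10));
    assert_eq!(scores.len(), 2);
}
```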

Note: Oxide v1.0 uses Rust's collection names directly (Vec, HashMap) rather than providing aliases like Array or Dict. This helps you learn the actual Rust types you'll encounter in the ecosystem.

Type Inference

Oxide has strong type inference. The compiler can usually figure out types from context:

fn main() {
    let x = 5          // Int
    let y = 3.14       // Float
    let z = true       // Bool
    let s = "hello"    // &str

    var items = vec![] // Vec<???> - needs annotation or usage
    items.push(1)      // Now compiler knows it's Vec<Int>
}

When the compiler cannot infer the type, you need to provide an annotation:

fn main() {
    // Compiler needs help here
    let guess: Int = "42".parse().unwrap()

    // Or specify the type in the turbofish
    let guess = "42".parse<Int>().unwrap()
}
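Rust comparison: the same two options exist in Rust, where the turbofish is spelled ::<>:

```rust
fn main() {
    // Annotation on the binding
    let a: i32 = "42".parse().unwrap();

    // Rust's turbofish syntax
    let b = "42".parse::<i32>().unwrap();

    assert_eq!(a, 42);
    assert_eq!(a, b);
}
```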

Summary

Oxide provides intuitive type names that feel familiar to developers from many language backgrounds while mapping directly to Rust's type system:

  • Integers: Int, Int64, UInt, UIntSize, etc.
  • Floats: Float, Float32, Float64
  • Boolean: Bool
  • Character: char
  • Tuples: (T, U, V)
  • Arrays: [T; N]
  • Strings: &str, String with \(expr) interpolation
  • Nullable: T? with null, ??, and !! operators
  • Collections: Vec<T>, HashMap<K, V>

Since Oxide types ARE Rust types (just with different names), you get full compatibility with the entire Rust ecosystem.

Functions

Functions are pervasive in Oxide code. You've already seen the main function, which is the entry point of many programs. The fn keyword allows you to declare new functions.

Oxide uses camelCase for function and variable names, in contrast to Rust's snake_case. This follows the conventions of Swift, Kotlin, and TypeScript. When you call Rust code from Oxide, name conversion happens automatically.

Defining Functions

Functions are defined with fn, followed by a name, parameters in parentheses, and a body in curly braces:

fn main() {
    println!("Hello, world!")
    anotherFunction()
}

fn anotherFunction() {
    println!("Another function.")
}

Functions can be defined before or after main; Oxide doesn't care where you define them, as long as they're in scope.

Parameters

Functions can have parameters, which are special variables that are part of the function's signature. When a function has parameters, you provide concrete values (called arguments) when you call it.

fn main() {
    greet("Alice")
    printSum(5, 3)
}

fn greet(name: &str) {
    println!("Hello, \(name)!")
}

fn printSum(a: Int, b: Int) {
    println!("\(a) + \(b) = \(a + b)")
}

Parameters must have type annotations. This is a deliberate design decision; requiring types in function signatures means the compiler rarely needs type annotations elsewhere.

Rust comparison: The syntax is identical, except Oxide uses camelCase for function names and its own type aliases.

#![allow(unused)]
fn main() {
// Rust
fn print_sum(a: i32, b: i32) {
    println!("{} + {} = {}", a, b, a + b);
}
}

Return Values

Functions can return values. Declare the return type after a colon (:) following the parameter list:

fn five(): Int {
    5
}

fn add(a: Int, b: Int): Int {
    a + b
}

fn main() {
    let x = five()
    let sum = add(10, 20)
    println!("x = \(x), sum = \(sum)")
}

The return value is the final expression in the function body. You can also return early using the return keyword:

fn absoluteValue(x: Int): Int {
    if x < 0 {
        return -x
    }
    x
}
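The same function compiles directly as Rust (with -> for the return type and snake_case naming):

```rust
fn absolute_value(x: i32) -> i32 {
    if x < 0 {
        return -x; // early return with the return keyword
    }
    x // otherwise, the final expression is the return value
}

fn main() {
    assert_eq!(absolute_value(-5), 5);
    assert_eq!(absolute_value(7), 7);
    assert_eq!(absolute_value(0), 0);
}
```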

Rust comparison: Oxide uses : for return types instead of Rust's ->. This aligns with TypeScript, Kotlin, and Swift conventions.

#![allow(unused)]
fn main() {
// Rust
fn add(a: i32, b: i32) -> i32 {
    a + b
}
}

Statements and Expressions

Function bodies are made up of a series of statements optionally ending in an expression. Understanding the difference is important:

  • Statements perform actions but don't return a value
  • Expressions evaluate to a resulting value

fn main() {
    // This is a statement (variable declaration)
    let y = 6

    // This block is an expression that evaluates to 4
    let x = {
        let temp = 3
        temp + 1  // No semicolon - this is the block's value
    }

    println!("x = \(x)")  // Prints: x = 4
}

Note that temp + 1 has no semicolon. Adding a semicolon would turn it into a statement, and the block would return () (the unit type) instead.
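This distinction is easy to verify in Rust, where the rule is identical:

```rust
fn main() {
    // Block ends in an expression: evaluates to 4
    let x = {
        let temp = 3;
        temp + 1
    };
    assert_eq!(x, 4);

    // Block ends in a statement: evaluates to the unit type ()
    let y = {
        let temp = 3;
        let _ = temp + 1; // the semicolon-terminated statement yields no value
    };
    assert_eq!(y, ());
}
```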

Visibility

By default, functions are private to their module. Use public to make them accessible from other modules:

public fn createUser(name: &str): User {
    User { name: name.toString() }
}

fn helperFunction() {
    // This is only accessible within this module
}

Rust comparison: Oxide uses public instead of pub. The word is spelled out for clarity.

#![allow(unused)]
fn main() {
// Rust
pub fn create_user(name: &str) -> User {
    User { name: name.to_string() }
}
}

Generic Functions

Functions can be generic over types:

import std.fmt.Display

fn identity<T>(value: T): T {
    value
}

fn printPair<T, U>(first: T, second: U)
where
    T: Display,
    U: Display,
{
    println!("(\(first), \(second))")
}

fn main() {
    let x = identity(42)
    let s = identity("hello")
    printPair(1, "one")
}

Generic constraints can be specified inline or with a where clause, just like in Rust.
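Rust comparison: the generic machinery is unchanged; only the return-type arrow and naming conventions differ:

```rust
use std::fmt::Display;

fn identity<T>(value: T) -> T {
    value
}

fn print_pair<T, U>(first: T, second: U)
where
    T: Display,
    U: Display,
{
    println!("({first}, {second})");
}

fn main() {
    assert_eq!(identity(42), 42);
    assert_eq!(identity("hello"), "hello");
    print_pair(1, "one");
}
```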

Functions That Return Nothing

Functions that don't return a value implicitly return (), the unit type. You can omit the return type:

fn greet(name: &str) {
    println!("Hello, \(name)!")
}

// This is equivalent:
fn greetExplicit(name: &str): () {
    println!("Hello, \(name)!")
}

Functions That Never Return

Some functions never return, like those that always panic or loop forever. Use the Never type for these:

fn diverges(): Never {
    panic!("This function never returns!")
}

fn infiniteLoop(): Never {
    loop {
        // Do something forever
    }
}

Closures

Closures are anonymous functions you can store in variables or pass as arguments. Oxide uses a Swift-inspired syntax with curly braces:

fn main() {
    // No parameters
    let sayHello = { println!("Hello!") }

    // One parameter
    let double = { x -> x * 2 }

    // Multiple parameters
    let add = { x, y -> x + y }

    // With type annotations
    let parse = { s: &str -> s.parse<Int>().unwrap() }

    sayHello()
    println!("Double 5: \(double(5))")
    println!("3 + 4: \(add(3, 4))")
}

Rust comparison: Oxide uses { params -> body } instead of Rust's |params| body.

#![allow(unused)]
fn main() {
// Rust
let double = |x| x * 2;
let add = |x, y| x + y;
}

Implicit it Parameter

In trailing closures (closures passed as the last argument to a function), you can use the implicit it parameter for single-argument closures:

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    // Using implicit `it`
    let doubled = numbers.iter().map { it * 2 }.collect<Vec<Int>>()

    // Equivalent with explicit parameter
    let doubled = numbers.iter().map { x -> x * 2 }.collect<Vec<Int>>()

    // Filter with `it`
    let evens = numbers.iter().filter { it % 2 == 0 }

    // More complex usage
    let users = vec![user1, user2, user3]
    let activeNames = users
        .iter()
        .filter { it.isActive }
        .map { it.name.clone() }
        .collect<Vec<String>>()
}
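Rust has no implicit it, so each of these pipelines needs an explicit closure parameter. A runnable sketch of the first two:

```rust
fn main() {
    let numbers = vec![1, 2, 3, 4, 5];

    // Oxide: numbers.iter().map { it * 2 }.collect<Vec<Int>>()
    let doubled: Vec<i32> = numbers.iter().map(|x| x * 2).collect();

    // Oxide: numbers.iter().filter { it % 2 == 0 }
    let evens: Vec<i32> = numbers.iter().filter(|x| **x % 2 == 0).copied().collect();

    assert_eq!(doubled, vec![2, 4, 6, 8, 10]);
    assert_eq!(evens, vec![2, 4]);
}
```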

Important: The implicit it is only available in trailing closure position. You cannot use it in variable bindings:

// NOT allowed - it only works in trailing closures
let f = { it * 2 }  // Error!

// Use explicit parameter instead
let f = { x -> x * 2 }  // OK

Trailing Closure Syntax

When the last argument to a function is a closure, you can write it outside the parentheses:

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    // Trailing closure syntax
    numbers.forEach { println!("\(it)") }

    // Equivalent to:
    numbers.forEach({ println!("\(it)") })

    // With other arguments
    numbers.iter().fold(0, { acc, x -> acc + x })
}
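Rust comparison: Rust has no trailing-closure form, so the closure always sits inside the call parentheses:

```rust
fn main() {
    let numbers = vec![1, 2, 3, 4, 5];

    // Oxide: numbers.forEach { println!("\(it)") }
    numbers.iter().for_each(|n| println!("{n}"));

    // Oxide: numbers.iter().fold(0, { acc, x -> acc + x })
    let sum = numbers.iter().fold(0, |acc, x| acc + x);
    assert_eq!(sum, 15);
}
```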

Multi-Statement Closures

Closures can contain multiple statements:

fn main() {
    let process = { item ->
        let validated = validate(item)
        let transformed = transform(validated)
        transformed
    }

    let result = process(myItem)
}

Async Functions

Async functions allow non-blocking I/O operations. Oxide uses prefix await instead of Rust's postfix .await:

async fn fetchData(url: &str): Result<String, Error> {
    let response = await client.get(url).send()?
    let body = await response.text()?
    Ok(body)
}

async fn main(): Result<(), Error> {
    let data = await fetchData("https://example.com")?
    println!("Got: \(data)")
    Ok(())
}

Rust comparison: Oxide uses await expr (prefix) instead of Rust's expr.await (postfix).

#![allow(unused)]
fn main() {
// Rust
async fn fetch_data(url: &str) -> Result<String, Error> {
    let response = client.get(url).send().await?;
    let body = response.text().await?;
    Ok(body)
}
}

The prefix await reads naturally left-to-right and matches the convention in Swift, Kotlin, JavaScript, and Python.

Function Pointers

You can store functions in variables and pass them around:

fn add(a: Int, b: Int): Int {
    a + b
}

fn multiply(a: Int, b: Int): Int {
    a * b
}

fn applyOperation(a: Int, b: Int, op: (Int, Int) -> Int): Int {
    op(a, b)
}

fn main() {
    let result1 = applyOperation(5, 3, add)
    let result2 = applyOperation(5, 3, multiply)

    println!("5 + 3 = \(result1)")
    println!("5 * 3 = \(result2)")
}

Rust comparison: Oxide writes function types as (T) -> U, whereas Rust spells them fn(T) -> U. The arrow is kept in function types; only function declarations switch to : for return types.

#![allow(unused)]
fn main() {
// Rust
fn apply_operation(a: i32, b: i32, op: fn(i32, i32) -> i32) -> i32 {
    op(a, b)
}
}

Summary

  • Functions are declared with fn and use camelCase names
  • Parameters require type annotations
  • Return types follow : (not ->)
  • Use public (not pub) for public visibility
  • Closures use { params -> body } syntax
  • The implicit it parameter works in trailing closures
  • Async functions use prefix await

Functions in Oxide are designed to be familiar to developers from modern language backgrounds while maintaining full compatibility with Rust's function system.

Comments

Comments are essential for documenting your code. Good comments explain why code exists, not just what it does. Oxide's comment syntax is identical to Rust's, which will be familiar if you've used C, C++, Java, or JavaScript.

Line Comments

The most common comment form is the line comment, which starts with // and continues to the end of the line:

fn main() {
    // This is a line comment
    let x = 5  // This comment follows code

    // Comments can span
    // multiple lines
    // like this

    let y = 10
}

Use line comments liberally to explain non-obvious logic:

fn calculateDiscount(price: Float, memberYears: Int): Float {
    // Base discount for all members
    var discount = 0.05

    // Long-term members get additional rewards
    // The formula was approved by marketing in Q3 2024
    if memberYears >= 5 {
        discount += 0.02 * (memberYears - 4).min(5) as Float
    }

    price * discount
}

Block Comments

For longer explanations, use block comments that start with /* and end with */:

fn main() {
    /* This is a block comment.
       It can span multiple lines
       and is useful for longer explanations. */

    let x = 5

    /*
     * Some developers prefer to format
     * block comments with asterisks
     * on each line for readability.
     */
}

Block comments can also be nested, which is useful when commenting out code that already contains comments:

fn main() {
    /*
    This outer comment contains:
    /* An inner comment */
    And continues after it.
    */
    println!("Hello!")
}

In practice, line comments (//) are more commonly used than block comments.

Documentation Comments

Oxide supports special documentation comments that can be processed by documentation tools. These come in two forms.

Outer Documentation Comments (///)

Use /// to document the item that follows (functions, structs, enums, etc.):

/// Calculates the factorial of a non-negative integer.
///
/// # Arguments
///
/// * `n` - The number to calculate factorial for
///
/// # Returns
///
/// The factorial of `n`, or `null` if `n` is negative
///
/// # Examples
///
/// ```oxide
/// let result = factorial(5)
/// assertEq!(result, Some(120))
/// ```
public fn factorial(n: Int): Int? {
    if n < 0 {
        return null
    }
    if n <= 1 {
        return Some(1)
    }
    Some(n * factorial(n - 1)?)
}

Documentation comments support Markdown formatting, so you can include:

  • Headers with #
  • Code blocks with triple backticks
  • Lists with * or -
  • Bold with **text**
  • Links with [text](url)

Inner Documentation Comments (//!)

Use //! at the beginning of a file or module to document the module itself:

//! # String Utilities
//!
//! This module provides helper functions for string manipulation.
//!
//! ## Features
//!
//! - Case conversion
//! - Trimming and padding
//! - Search and replace
//!
//! ## Example
//!
//! ```oxide
//! import mylib.strings
//!
//! let result = strings.toTitleCase("hello world")
//! assertEq!(result, "Hello World")
//! ```

public fn toTitleCase(s: &str): String {
    // Implementation here
    s.toString()
}

public fn capitalize(s: &str): String {
    // Implementation here
    s.toString()
}

Inner documentation comments are typically placed at the very top of a file, before any code.

Common Documentation Sections

By convention, documentation for public APIs follows a standard structure:

/// Brief one-line description of the function.
///
/// More detailed explanation if needed. This can span multiple
/// paragraphs and include any relevant background information.
///
/// # Arguments
///
/// * `param1` - Description of the first parameter
/// * `param2` - Description of the second parameter
///
/// # Returns
///
/// Description of what the function returns.
///
/// # Errors
///
/// Description of when this function returns an error.
///
/// # Panics
///
/// Description of when this function might panic.
///
/// # Safety
///
/// For unsafe functions, describe safety requirements.
///
/// # Examples
///
/// ```oxide
/// let result = myFunction(arg1, arg2)
/// ```
public fn myFunction(param1: Int, param2: &str): Result<String, Error> {
    // Implementation
}

Not all sections are needed for every function. Use the sections that are relevant:

  • # Arguments - When parameters aren't self-explanatory
  • # Returns - For non-obvious return values
  • # Errors - For functions returning Result
  • # Panics - When the function can panic
  • # Safety - Required for unsafe functions
  • # Examples - Highly recommended for public APIs
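
As a minimal example, a function returning Result might include only the sections that apply (a sketch; readConfig is a hypothetical function):

/// Loads the application configuration from the given path.
///
/// # Errors
///
/// Returns an error if the file does not exist or cannot be parsed
/// as valid configuration.
///
/// # Examples
///
/// ```oxide
/// let config = readConfig("app.toml")?
/// ```
public fn readConfig(path: &str): Result<String, Error> {
    // Implementation omitted in this sketch
    Ok(path.toString())
}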

Documenting Structs and Enums

Document each field or variant:

/// Represents a user in the system.
///
/// Users are the primary actors in our application and
/// can perform various actions based on their role.
#[derive(Debug, Clone)]
public struct User {
    /// Unique identifier for the user.
    id: Int,

    /// Display name shown in the UI.
    name: String,

    /// Email address for notifications.
    /// Must be verified before the user can post.
    email: String,

    /// Whether the user has admin privileges.
    isAdmin: Bool,
}

/// Possible states for an order.
public enum OrderStatus {
    /// Order has been placed but not yet processed.
    Pending,

    /// Order is being prepared for shipment.
    Processing,

    /// Order has been shipped to the customer.
    /// Contains the tracking number.
    Shipped { trackingNumber: String },

    /// Order has been delivered successfully.
    Delivered,

    /// Order was cancelled.
    /// Contains the reason for cancellation.
    Cancelled { reason: String },
}

Best Practices

Write Comments for Your Future Self

Code that seems obvious today might be confusing in six months:

// BAD: States what the code does (obvious from reading it)
// Increment counter by 1
counter += 1

// GOOD: Explains why
// We count from 1 because the API expects 1-indexed results
counter += 1

Keep Comments Up to Date

Outdated comments are worse than no comments. When you change code, update the corresponding comments:

// BAD: Comment doesn't match code
// Returns the user's full name
fn getUsername(user: &User): String {
    user.email.clone()  // Actually returns email!
}

// GOOD: Comment matches code
// Returns the user's email as their display identifier
fn getUsername(user: &User): String {
    user.email.clone()
}

Use Comments to Explain "Why", Not "What"

// BAD: Describes what the code does
// Loop through users and filter by active status
let activeUsers = users.iter().filter { it.isActive }

// GOOD: Explains the business reason
// Only active users should receive the weekly newsletter
let activeUsers = users.iter().filter { it.isActive }

Document Public APIs Thoroughly

Internal code can have lighter documentation, but public APIs deserve comprehensive docs:

/// Parses a date string in ISO 8601 format.
///
/// Accepts dates in the format `YYYY-MM-DD`. The time component
/// is optional and defaults to midnight UTC if not provided.
///
/// # Arguments
///
/// * `input` - A string slice containing the date to parse
///
/// # Returns
///
/// A `DateTime` if parsing succeeds, or `null` if the input
/// is not a valid ISO 8601 date string.
///
/// # Examples
///
/// ```oxide
/// let date = parseIsoDate("2024-03-15")
/// assert!(date.isSome())
///
/// let invalid = parseIsoDate("not a date")
/// assert!(invalid.isNone())
/// ```
public fn parseIsoDate(input: &str): DateTime? {
    // Implementation
}

Summary

  • Use // for line comments (most common)
  • Use /* */ for block comments (can be nested)
  • Use /// to document the following item
  • Use //! to document the containing module
  • Documentation comments support Markdown
  • Follow standard sections: Arguments, Returns, Errors, Panics, Examples
  • Comment the "why", not the "what"
  • Keep comments synchronized with code

Good documentation makes your code more maintainable and helps others (including your future self) understand your intent.

Control Flow

Control flow constructs let you run code conditionally or repeatedly. Oxide provides several ways to control execution: if expressions, match expressions, guard statements, and various loops. While the semantics match Rust exactly, some syntax is designed to be more approachable.

if Expressions

An if expression lets you branch your code based on conditions:

fn main() {
    let number = 7

    if number < 5 {
        println!("condition was true")
    } else {
        println!("condition was false")
    }
}

The condition must be a Bool. Unlike some languages, Oxide won't automatically convert non-Boolean types:

fn main() {
    let number = 3

    // This won't compile!
    if number {  // Error: expected Bool, found Int
        println!("number was three")
    }

    // This works:
    if number != 0 {
        println!("number was not zero")
    }
}

Multiple Conditions with else if

Chain conditions with else if:

fn main() {
    let number = 6

    if number % 4 == 0 {
        println!("number is divisible by 4")
    } else if number % 3 == 0 {
        println!("number is divisible by 3")
    } else if number % 2 == 0 {
        println!("number is divisible by 2")
    } else {
        println!("number is not divisible by 4, 3, or 2")
    }
}

Using if in a let Statement

Because if is an expression, you can use it on the right side of a let:

fn main() {
    let condition = true
    let number = if condition { 5 } else { 6 }

    println!("The value of number is: \(number)")
}

Both branches must return the same type:

fn main() {
    let condition = true

    // This won't compile!
    let number = if condition { 5 } else { "six" }  // Error: incompatible types
}

if let for Pattern Matching

The if let syntax combines pattern matching with conditionals. It's especially useful for nullable types:

fn main() {
    let maybeNumber: Int? = 42

    // Auto-unwrap: the Some() wrapper is implicit for T?
    if let number = maybeNumber {
        println!("Got number: \(number)")
    }

    // Explicit Some() also works
    if let Some(number) = maybeNumber {
        println!("Got number: \(number)")
    }

    // With else branch
    if let value = maybeNumber {
        println!("Value: \(value)")
    } else {
        println!("No value present")
    }
}

Rust comparison: In Oxide, if let x = nullable automatically wraps the pattern in Some() when the right-hand side is a nullable type. In Rust, you must always write if let Some(x) = nullable.

// Rust requires explicit Some()
if let Some(number) = maybe_number {
    println!("Got number: {}", number);
}

guard Statements

The guard statement is for early returns when conditions aren't met. The else block must diverge (return, break, continue, or panic):

fn processUser(user: User?): Result<String, Error> {
    guard let user = user else {
        return Err(anyhow!("User not found"))
    }
    // `user` is now available and non-null

    guard user.isActive else {
        return Err(anyhow!("User is not active"))
    }
    // We know user is active here

    Ok("Processing \(user.name)")
}

guard is particularly useful for validation at the start of functions:

fn divide(a: Int, b: Int): Result<Int, String> {
    guard b != 0 else {
        return Err("Cannot divide by zero".toString())
    }

    Ok(a / b)
}

fn processItems(items: Vec<Item>): Result<Summary, Error> {
    guard !items.isEmpty() else {
        return Err(anyhow!("No items to process"))
    }

    // Continue with non-empty items...
    Ok(summarize(items))
}

Rust comparison: Oxide's guard let x = expr else { } is equivalent to Rust's let Some(x) = expr else { }. The guard condition else { } form is similar to an inverted if.

// Rust
let Some(user) = user else {
    return Err(anyhow!("User not found"));
};

// Or for conditions:
if items.is_empty() {
    return Err(anyhow!("No items"));
}

match Expressions

The match expression compares a value against a series of patterns. Oxide uses -> for match arms (instead of Rust's =>) and _ as the wildcard:

fn main() {
    let number = 3

    match number {
        1 -> println!("one"),
        2 -> println!("two"),
        3 -> println!("three"),
        _ -> println!("something else"),
    }
}

Matching Multiple Patterns

Use | to match multiple values:

fn main() {
    let number = 2

    match number {
        1 | 2 -> println!("one or two"),
        3 -> println!("three"),
        _ -> println!("other"),
    }
}

Matching Ranges

Use ..= for inclusive ranges:

fn main() {
    let number = 7

    match number {
        1..=5 -> println!("one through five"),
        6..=10 -> println!("six through ten"),
        _ -> println!("something else"),
    }
}

Matching with Guards

Add conditions to patterns with if:

fn main() {
    let pair = (2, -2)

    match pair {
        (x, y) if x == y -> println!("twins"),
        (x, y) if x + y == 0 -> println!("opposites"),
        (x, _) if x % 2 == 0 -> println!("first is even"),
        _ -> println!("no match"),
    }
}

Matching Enums

Match is essential for working with enums:

enum Status {
    Active,
    Inactive,
    Pending { reason: String },
}

fn describeStatus(status: Status): String {
    match status {
        Status.Active -> "User is active".toString(),
        Status.Inactive -> "User is inactive".toString(),
        Status.Pending { reason: r } -> "Pending: \(r)".toString(),
    }
}

Rust comparison: Oxide uses dot notation (Status.Active) for enum variants. Rust's double colon syntax (Status::Active) does not exist in Oxide and will cause a syntax error.

Matching Nullable Types

Use null in pattern position to match None:

fn describe(value: Int?): String {
    match value {
        Some(0) -> "zero".toString(),
        Some(n) if n > 0 -> "positive: \(n)".toString(),
        Some(n) -> "negative: \(n)".toString(),
        null -> "no value".toString(),
    }
}

Note that the unguarded Some(n) arm is required: like Rust, Oxide ignores if guards when checking exhaustiveness, so a match built only from guarded Some arms plus Some(0) and null would be rejected as non-exhaustive.

Match as Expression

Like if, match is an expression and returns a value:

fn main() {
    let number = 3

    let description = match number {
        1 -> "one",
        2 -> "two",
        3 -> "three",
        _ -> "many",
    }

    println!("Number is \(description)")
}

Multi-Statement Arms

Use blocks for complex match arms:

fn process(value: Int): String {
    match value {
        0 -> "zero".toString(),
        n if n > 0 -> {
            let doubled = n * 2
            let squared = n * n
            "positive: doubled=\(doubled), squared=\(squared)".toString()
        },
        _ -> {
            println!("Warning: negative value")
            "negative".toString()
        },
    }
}

Loops

Oxide provides three loop constructs: loop, while, and for.

Infinite Loops with loop

The loop keyword creates an infinite loop:

fn main() {
    var counter = 0

    loop {
        counter += 1
        println!("Count: \(counter)")

        if counter >= 5 {
            break
        }
    }
}

Returning Values from Loops

You can return a value from a loop using break:

fn main() {
    var counter = 0

    let result = loop {
        counter += 1

        if counter == 10 {
            break counter * 2
        }
    }

    println!("Result: \(result)")  // Prints: Result: 20
}

Loop Labels

Use labels to break or continue outer loops:

fn main() {
    var count = 0

    'outer: loop {
        println!("count = \(count)")
        var remaining = 10

        loop {
            println!("remaining = \(remaining)")

            if remaining == 9 {
                break
            }
            if count == 2 {
                break 'outer
            }
            remaining -= 1
        }

        count += 1
    }

    println!("End count = \(count)")
}

Conditional Loops with while

Execute code while a condition is true:

fn main() {
    var number = 3

    while number != 0 {
        println!("\(number)!")
        number -= 1
    }

    println!("LIFTOFF!")
}

while let for Conditional Pattern Matching

Similar to if let, but loops while the pattern matches:

fn main() {
    var stack: Vec<Int> = vec![1, 2, 3]

    while let value = stack.pop() {
        println!("Popped: \(value)")
    }
}

Iterating with for

The for loop iterates over collections:

fn main() {
    let numbers = [10, 20, 30, 40, 50]

    for number in numbers {
        println!("Value: \(number)")
    }
}

Iterating with Ranges

fn main() {
    // 1 to 4 (exclusive end)
    for i in 1..5 {
        println!("\(i)")
    }

    // 1 to 5 (inclusive end)
    for i in 1..=5 {
        println!("\(i)")
    }

    // Reverse order
    for i in (1..=5).rev() {
        println!("\(i)")
    }
}

Iterating with Index

Use enumerate() to get both index and value:

fn main() {
    let names = vec!["Alice", "Bob", "Charlie"]

    for (index, name) in names.iter().enumerate() {
        println!("\(index): \(name)")
    }
}

Loop Control

Use break and continue to control loop execution:

fn main() {
    for i in 1..=10 {
        if i == 3 {
            continue  // Skip 3
        }
        if i == 8 {
            break  // Stop at 8
        }
        println!("\(i)")
    }
}

Combining Control Flow

Control flow constructs can be combined for complex logic:

fn processUsers(users: Vec<User>): Vec<String> {
    var results: Vec<String> = vec![]

    for user in users.iter() {
        // Skip inactive users
        guard user.isActive else {
            continue
        }

        // Handle different user types
        let message = match user.role {
            Role.Admin -> "Admin: \(user.name)".toString(),
            Role.Moderator -> "Mod: \(user.name)".toString(),
            Role.User -> {
                if let email = user.email {
                    "User: \(user.name) <\(email)>".toString()
                } else {
                    "User: \(user.name)".toString()
                }
            },
        }

        results.push(message)
    }

    results
}

Summary

Construct    Purpose                       Oxide Syntax
if           Conditional branching         if cond { } else { }
if let       Pattern match + conditional   if let x = nullable { }
guard        Early return on failure       guard cond else { return }
match        Multi-way pattern matching    match x { P -> e, _ -> d }
loop         Infinite loop                 loop { }
while        Conditional loop              while cond { }
while let    Pattern match loop            while let x = iter.next() { }
for          Iteration                     for item in collection { }

Key Oxide differences from Rust:

  • Match arms use -> (Rust's => is invalid in Oxide)
  • Match wildcard arm is _
  • guard provides clean early-return syntax
  • if let x = nullable auto-unwraps without Some()
  • Enum variants use dot notation (Enum.Variant); Rust's :: syntax does not exist in Oxide

These constructs give you precise control over program flow while maintaining Oxide's goal of being approachable and readable.

Understanding Ownership

Ownership is the feature that makes Rust (and therefore Oxide) both safe and fast without a garbage collector. It determines when values are created, moved, borrowed, and dropped.

What You'll Learn

  • The ownership rules and how moves work
  • Borrowing with references and the rules that keep them safe
  • The slice type for working with parts of collections

A First Look

fn main() {
    let name = String.from("Oxide")

    // `name` moves into `owner`
    let owner = name

    // name is no longer valid here
    // println!("\(name)")

    let greeting = String.from("Hello")
    let length = stringLength(&greeting)
    println!("'\(greeting)' has length \(length)")
}

fn stringLength(text: &String): Int {
    text.len()
}

Ownership errors can feel strict at first, but they prevent entire classes of bugs at compile time. The next sections explain the rules in detail and show how to design programs around them.

What is Ownership?

Ownership is the defining feature of Oxide. It's the system that allows Oxide to make memory safety guarantees without needing a garbage collector. Understanding ownership is crucial to becoming proficient in Oxide. In this chapter, we'll explore ownership and related features that help manage memory automatically.

Oxide inherits Rust's ownership system with one simple observation: Rust got this right. Rather than reinvent the wheel, Oxide keeps ownership semantics identical to Rust and changes only the surface syntax. This means you get the same safety guarantees, the same zero-cost abstractions, and the same performance, just with a syntax that feels more natural if you're coming from Swift, Kotlin, or TypeScript.

The Ownership Rules

Oxide has three fundamental ownership rules:

  1. Each value in Oxide has a variable that is its owner
  2. There can only be one owner at a time
  3. When the owner goes out of scope, the value is dropped (freed)

These rules prevent memory leaks and use-after-free bugs at compile time. Let's explore each with examples.

Variable Scope

A scope is the range within a program where an item is valid. Consider this example:

fn main() {
    {  // s is not yet declared
        let s = "hello"  // s is valid from this point forward
        println!("\(s)")  // s is still valid
    }  // this scope is now over, and s is no longer valid
    // println!("\(s)")  // Error: s is out of scope
}

When s comes into scope, it is valid. It remains valid until it goes out of scope. At that point, the string's memory is automatically freed.

The String Type and the Move Semantic

Let's look at how ownership works with dynamically allocated data. We'll use String as our example because it's more interesting than the &str literals we've seen so far:

fn main() {
    let s1 = String.from("hello")
    let s2 = s1  // s1's data is MOVED to s2

    // println!("\(s1)")  // Error: s1 no longer owns the data
    println!("\(s2)")     // OK: s2 owns the data
}

When we assign s1 to s2, the ownership transfers. This is different from copying the data—it's moving ownership. s1 is no longer valid, and s2 owns the string data.

Why move instead of copy? Moving is Oxide's way of ensuring that only one owner exists at a time. This prevents multiple parts of your code from trying to manage the same memory, which would be a recipe for bugs.

Here's what happens in memory:

  1. s1 is created and points to allocated memory containing "hello"
  2. s2 = s1 moves the pointer, length, and capacity from s1 to s2
  3. s1 is invalidated (its ownership is lost)
  4. When s2 goes out of scope, the memory is freed

This is sometimes called a "shallow copy" in other languages, but Oxide goes further by making the original binding invalid. There's no risk of both variables trying to free the same memory.
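
If you do want a full copy of the heap data rather than a move, you can ask for one explicitly. Assuming Oxide exposes Rust's clone() method (it appears elsewhere in this book), the sketch looks like this:

fn main() {
    let s1 = String.from("hello")
    let s2 = s1.clone()  // deep copy: s2 gets its own heap allocation

    // Both bindings remain valid because no move occurred
    println!("s1 = \(s1), s2 = \(s2)")
}

Cloning is explicit by design: an expensive deep copy never happens silently.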

Ownership and Functions

The same ownership rules apply when passing values to functions:

fn main() {
    let s = String.from("hello")
    takesOwnership(s)
    // println!("\(s)")  // Error: s has been moved into the function
}

fn takesOwnership(someString: String) {
    println!("\(someString)")
}  // someString goes out of scope and the string is dropped

When s is passed to takesOwnership, ownership of the string is transferred to the function's parameter. Once the function returns, the string is dropped.

If you want to use s after calling the function, you need to get the ownership back:

fn main() {
    let s = String.from("hello")
    let s = takesAndReturnsOwnership(s)
    println!("\(s)")  // OK: we got ownership back
}

fn takesAndReturnsOwnership(someString: String): String {
    someString  // ownership is returned
}

This pattern—taking ownership and returning it—is cumbersome. This is why Oxide has references and borrowing, which we'll explore in the next section. For now, understand that ownership follows these simple rules everywhere in your code.

Ownership and Copying

Some types in Oxide are simple enough that their values can be copied bit-by-bit without issues. These types implement the Copy trait. If a type implements Copy, ownership is not moved—the value is copied instead:

fn main() {
    let x = 5
    let y = x  // x is COPIED, not moved

    println!("x = \(x), y = \(y)")  // Both are valid!
}

Integer types, floating-point numbers, booleans, and characters all implement Copy because they're small and live on the stack. More complex types like String do not implement Copy because their data lives on the heap.

As a rule of thumb:

  • Stack types (integers, floats, bools, chars) implement Copy
  • Heap types (strings, vectors, collections) do NOT implement Copy
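
The same distinction applies to function calls: a Copy value is still usable after being passed, while a non-Copy value is moved (a sketch with hypothetical helper functions):

fn main() {
    let x = 5
    printNumber(x)  // x is copied into the function
    println!("x is still valid: \(x)")

    let s = String.from("hello")
    printString(s)  // s is moved into the function
    // println!("\(s)")  // Error: s was moved
}

fn printNumber(n: Int) {
    println!("\(n)")
}

fn printString(text: String) {
    println!("\(text)")
}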

Implicit Returns and Ownership

In Oxide, the last expression in a function is the return value. This interacts with ownership in an important way:

fn createString(): String {
    let s = String.from("hello")
    s  // ownership is transferred to the caller
}

fn main() {
    let result = createString()
    println!("\(result)")  // OK: createString returned ownership
}

The value s goes out of scope at the end of createString, but because it's being returned, ownership is transferred to the caller. The memory is not dropped.

Why Ownership Matters

Oxide's ownership system provides several critical benefits:

  1. Memory Safety: Ownership prevents use-after-free and double-free bugs
  2. No Garbage Collector: Memory is freed automatically without runtime overhead
  3. No Runtime Errors: Memory errors are caught at compile time, not in production
  4. Zero-Cost Abstractions: The safety checks have no runtime cost

The elegance of Rust's ownership system is that it aligns with how we actually think about resources. When a function takes ownership, it's clear that the function is responsible for that resource. When it returns something, it transfers responsibility to the caller. This natural model prevents accidental resource leaks.

Rust's Ownership: The Gold Standard

Rust's ownership system is considered one of the greatest achievements in programming language design. It solved a problem that plagued systems programming for decades: how to provide memory safety without garbage collection. Oxide embraces this system completely.

If you've used languages with garbage collectors, this might feel unfamiliar at first. But once you understand the rules, you'll find that ownership makes code clearer and safer. You're not fighting the language—the language is helping you express your intent precisely.

Summary

  • Each value has one owner at a time
  • Ownership transfers when assigning to a new variable or passing to a function
  • When the owner goes out of scope, the value is dropped (memory is freed)
  • Types that implement Copy are copied instead of moved
  • The ownership rules are the same as Rust—we kept what worked perfectly

The ownership system might seem strict, but it's what makes Oxide safe by default. In the next section, we'll learn about references and borrowing, which lets you use data without taking ownership of it.

References and Borrowing

Ownership is powerful, but constantly moving values in and out of functions gets tedious. Fortunately, Oxide has a feature for using values without transferring ownership: references.

A reference allows you to refer to a value without taking ownership of it. Instead of passing the value itself, you pass a reference to it. When the function returns, ownership remains with the original owner.

Creating and Using References

You create a reference using the ampersand (&) operator:

fn main() {
    let s = String.from("hello")

    let length = calculateLength(&s)

    println!("'\(s)' has length \(length)")  // s is still valid!
}

fn calculateLength(s: &String): UIntSize {
    s.len()
}  // s goes out of scope, but it doesn't own the String, so nothing happens

The &s syntax creates a reference to s. The calculateLength function receives a reference (&String) instead of the string itself. Since calculateLength doesn't own the string, the string is not dropped when the function returns.

Notice that we can still use s after calling calculateLength. The original owner retains ownership; we only loaned it to the function.

Mutable References

By default, references are immutable. If you try to modify the data through a reference, the compiler stops you:

fn main() {
    let s = String.from("hello")

    changeString(&s)
}

fn changeString(s: &String) {
    // s.pushStr(" world")  // Error: cannot mutate through immutable reference
}

To modify data through a reference, you need a mutable reference, declared with &mut:

fn main() {
    var s = String.from("hello")

    changeString(&mut s)

    println!("\(s)")  // "hello world"
}

fn changeString(s: &mut String) {
    s.pushStr(" world")
}

Important: The original binding must be mutable (var s) to allow mutable references. You cannot create a mutable reference to an immutable binding.
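
To illustrate, a mutable borrow of a let binding is rejected, while the same borrow of a var binding compiles (sketch):

fn main() {
    let frozen = String.from("hello")
    // var r1 = &mut frozen  // Error: cannot borrow an immutable binding as mutable

    var editable = String.from("hello")
    var r2 = &mut editable  // OK: the binding is mutable
    r2.pushStr(" world")
    println!("\(r2)")
}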

The Rules of Borrowing

Oxide's borrow checker enforces two critical rules:

  1. Either multiple immutable references OR one mutable reference at a time
  2. References must always be valid (no use-after-free)

Let's explore these rules with examples.

Multiple Immutable References

You can have multiple immutable references to the same data:

fn main() {
    let s = String.from("hello")

    let r1 = &s
    let r2 = &s
    let r3 = &s

    println!("\(r1), \(r2), \(r3)")  // All three references work fine
}

This is safe because all readers are immutable. Multiple readers cannot corrupt data.

Immutable and Mutable References Cannot Coexist

Once you create a mutable reference, you cannot have any immutable references:

fn main() {
    var s = String.from("hello")

    let r1 = &s
    let r2 = &s
    // var r3 = &mut s  // Error: cannot borrow as mutable while already borrowed as immutable

    println!("\(r1), \(r2)")
}

The compiler prevents the mutable reference because r1 and r2 are still in use. If we allowed &mut s, code using r1 or r2 might suddenly see the data change, which would be surprising and dangerous.

Using Scope to Release Borrows

A borrow ends when the reference is last used, not necessarily when it goes out of scope:

fn main() {
    var s = String.from("hello")

    let r1 = &s
    let r2 = &s
    println!("\(r1), \(r2)")  // Last use of r1 and r2

    // r1 and r2 are no longer needed after this point
    var r3 = &mut s  // OK: r1 and r2's borrows have ended
    r3.pushStr(" world")

    println!("\(r3)")
}

This is called non-lexical lifetimes (NLL). Rust introduced this feature to make borrowing less restrictive. Oxide inherits it, which means you often get mutable access sooner than you might expect.

Mutable References Are Exclusive

Only one mutable reference can exist at a time:

fn main() {
    var s = String.from("hello")

    var r1 = &mut s
    // var r2 = &mut s  // Error: cannot have two mutable references

    r1.pushStr(" world")
    println!("\(r1)")
}

This rule prevents data races and ensures that if you modify data, no other code can see it in an inconsistent state.
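
A second mutable borrow becomes legal once the first one is finished, and you can use a scope to end it early (sketch):

fn main() {
    var s = String.from("hello")

    {
        var r1 = &mut s
        r1.pushStr(" world")
    }  // r1 goes out of scope here, ending its borrow

    var r2 = &mut s  // OK: no other borrow is active
    r2.pushStr("!")
    println!("\(r2)")  // "hello world!"
}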

The &str Lightweight Reference

We've seen &str in function parameters. This is a reference to a string slice, which we'll explore in the next section. For now, understand that &str borrows a string—it's lighter weight than &String because it doesn't own any heap allocation:

fn main() {
    let s = String.from("hello world")

    let word = firstWord(&s)

    println!("\(word)")  // "hello"
}

fn firstWord(s: &str): &str {
    let bytes = s.asBytes()

    for (i, &item) in bytes.iter().enumerate() {
        if item == ' ' as UInt8 {
            return &s[0..i]
        }
    }

    &s[..]
}

Using &str is more flexible than &String because it accepts both String references and string literals.

Why Borrowing Matters

The borrowing system solves the ownership problem we encountered earlier. Instead of constantly moving values and returning them, you can borrow them:

// Without borrowing: tedious
fn main() {
    let s1 = String.from("hello")
    let (s1, len) = calculateLength(s1)
    println!("'\(s1)' has length \(len)")
}

fn calculateLength(s: String): (String, UIntSize) {
    let length = s.len()
    (s, length)
}

// With borrowing: clean
fn main() {
    let s1 = String.from("hello")
    let len = calculateLength(&s1)
    println!("'\(s1)' has length \(len)")
}

fn calculateLength(s: &String): UIntSize {
    s.len()
}

Borrowing lets functions use data without taking responsibility for it.

Mutable References Enable Controlled Mutation

Mutable references are Oxide's way of saying "this function needs to modify this data." They make your code's intent clear:

fn main() {
    var user = User { name: "Alice".toString() }
    updateUserName(&mut user, "Bob")
    println!("\(user.name)")  // "Bob"
}

fn updateUserName(user: &mut User, newName: &str) {
    user.name = newName.toString()
}

The &mut syntax makes it obvious that the function will modify the argument. This is much clearer than passing a regular parameter and having side effects.

Lifetime Annotations

Sometimes the compiler needs you to explicitly specify how long a reference is valid. This is called a lifetime. We'll explore lifetimes in depth in a later chapter, but here's a simple example:

fn longest<'a>(x: &'a str, y: &'a str): &'a str {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

fn main() {
    let s1 = "short"
    let s2 = "a much longer string"
    let result = longest(s1, s2)
    println!("\(result)")
}

The 'a notation tells the compiler that the returned reference lives as long as both input references. This ensures the return value is always valid.

You don't need to understand lifetimes yet. Just know that Oxide sometimes requires you to be explicit about how long references live, which is another safety feature.

Rust Comparison

The referencing and borrowing system is identical to Rust. The same &T and &mut T syntax, the same rules, the same benefits:

// Rust - identical to Oxide
fn calculate_length(s: &String) -> usize {
    s.len()
}

fn change_string(s: &mut String) {
    s.push_str(" world");
}

Summary

  • References allow using values without taking ownership
  • Immutable references (&T) are the default
  • Mutable references (&mut T) allow modification, with restrictions
  • Oxide enforces: either many immutable references OR one mutable reference
  • Borrows end when the reference is no longer used (non-lexical lifetimes)
  • References prevent data races and use-after-free bugs at compile time
  • Lifetime annotations are sometimes required to state explicitly how long references live

Borrowing is one of Oxide's most powerful features. It lets you express ownership clearly while remaining flexible about how data flows through your program. Combined with ownership, it enables memory safety without garbage collection—the same achievement Rust pioneered.

In the next section, we'll explore a special kind of reference: slices.

The Slice Type

A slice is a reference to a contiguous sequence of elements in a collection. Unlike references to the whole collection, slices let you reference a specific portion of it. Slices are incredibly useful and appear throughout Oxide code.

String Slices

A string slice is a reference to part of a String:

fn main() {
    let s = String.from("hello world")

    let hello = &s[0..5]
    let world = &s[6..11]

    println!("\(hello)")  // "hello"
    println!("\(world)")  // "world"
}

Rather than taking a reference to the entire string, &s[0..5] creates a reference to a portion of the string. The range syntax [startingIndex..endingIndex] includes startingIndex but excludes endingIndex.

Rust comparison: String slices work identically to Rust. The type annotation is &str.

#![allow(unused)]
fn main() {
// Rust - identical syntax
let s = String::from("hello world");
let hello = &s[0..5];
let world = &s[6..11];
}

Slice Shorthand Syntax

You can omit the starting index if it's 0 or the ending index if it's the length:

fn main() {
    let s = String.from("hello")

    let slice1 = &s[0..2]  // "he"
    let slice2 = &s[..2]   // "he" - same, omit start

    let slice3 = &s[3..5]  // "lo"
    let slice4 = &s[3..]   // "lo" - same, omit end

    let slice5 = &s[..]    // "hello" - entire string
}

This shorthand makes slice syntax less verbose for common cases.
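The same shorthand forms work verbatim in Rust, so the habits transfer directly:

```rust
// Rust - identical range shorthand for slices
fn main() {
    let s = String::from("hello");

    let a = &s[0..2]; // "he"
    let b = &s[..2];  // "he" - start omitted
    let c = &s[3..];  // "lo" - end omitted
    let d = &s[..];   // "hello" - whole string

    assert_eq!(a, b);
    assert_eq!(c, "lo");
    assert_eq!(d, "hello");
}
```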

Using Slices in Functions

Slices are particularly useful in function parameters because they're more flexible than &String:

fn firstWord(s: &str): &str {
    let bytes = s.asBytes()

    for (i, &item) in bytes.iter().enumerate() {
        if item == ' ' as UInt8 {
            return &s[0..i]
        }
    }

    &s[..]
}

fn main() {
    let myString = String.from("hello world")
    let word = firstWord(&myString)
    println!("\(word)")  // "hello"

    // Also works with string literals
    let word = firstWord("hello world")
    println!("\(word)")  // "hello"
}

Notice that firstWord takes &str, not &String. This makes it more flexible. We can pass:

  • A reference to a String: &myString (automatically coerced to &str)
  • A string literal: "hello world" (which is already &str)

If the function required &String, we couldn't pass string literals directly.

The Power of &str Over &String

Using &str in your API makes your code more flexible:

// Restrictive: only accepts String references
fn processBad(s: &String): UIntSize {
    s.len()
}

// Flexible: accepts string slices and String references
fn processGood(s: &str): UIntSize {
    s.len()
}

fn main() {
    let myString = String.from("hello")
    let literal = "world"

    // processBad(literal)  // Error: literal is &str, not &String
    processGood(&myString)  // OK: &String coerces to &str
    processGood(literal)    // OK: already &str
}

This is a key principle in Oxide and Rust: prefer &str over &String, and &[T] over &Vec<T>. Your APIs are more useful when they accept slices rather than owned collections.
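The same principle holds in Rust, where deref coercion lets a &Vec<T> (or an array reference) feed a function that takes &[T]. Here is a small sketch; the helper name `sum` is just for illustration:

```rust
// Rust - a &[T] parameter accepts vectors, arrays, and explicit slices
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];
    let arr = [4, 5, 6];

    assert_eq!(sum(&v), 6);      // &Vec<i32> coerces to &[i32]
    assert_eq!(sum(&arr), 15);   // array references coerce too
    assert_eq!(sum(&v[1..]), 5); // explicit slices work as well
}
```

Had `sum` taken &Vec<i32> instead, the array and the explicit slice would both be rejected.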

Array Slices

Slices work with arrays and vectors, not just strings:

fn main() {
    let arr = [1, 2, 3, 4, 5]

    // Slice part of the array
    let slice = &arr[1..4]  // [2, 3, 4]

    // Iterate over the slice
    for item in slice {
        println!("\(item)")
    }
}

With vectors, slices are even more useful:

fn main() {
    let v = vec![1, 2, 3, 4, 5]

    let slice = &v[2..]  // [3, 4, 5]

    println!("Length of slice: \(slice.len())")

    for &item in slice {
        println!("\(item)")
    }
}

Slices and Borrowing

Slices respect the borrowing rules. You cannot create a mutable reference to a collection while slices exist:

fn main() {
    var s = String.from("hello world")

    let word = firstWord(&s)  // immutable borrow

    // s.clear()  // Error: cannot borrow as mutable while borrowed as immutable

    println!("\(word)")  // word's borrow ends here

    s.clear()  // OK now: word is no longer used
}

fn firstWord(s: &str): &str {
    let bytes = s.asBytes()

    for (i, &item) in bytes.iter().enumerate() {
        if item == ' ' as UInt8 {
            return &s[0..i]
        }
    }

    &s[..]
}

This prevents a common bug: modifying a collection while holding a reference to its contents. The slice is only valid as long as the underlying data doesn't change.

The Slice Type Syntax

The type of a slice is &[T], where T is the element type:

fn main() {
    let s = String.from("hello")
    let slice: &str = &s[..]  // &str is a string slice (a view of UTF-8 bytes, not a slice of chars)

    let arr = [1, 2, 3]
    let slice: &[Int] = &arr[..]  // &[Int] slice

    let v = vec![1, 2, 3]
    let slice: &[Int] = &v[..]  // &[Int] slice from vector
}

The syntax &[T] represents "a reference to a contiguous sequence of T". Notice that &str is special—it's the slice type for strings, optimized for UTF-8 text.

Practical Example: Splitting Words

Let's build something practical. A function that splits a string into words and returns an array of slices:

fn splitWords(s: &str): Vec<&str> {
    var words = vec![]
    var start = 0

    for (i, &c) in s.asBytes().iter().enumerate() {
        if c == ' ' as UInt8 {
            words.push(&s[start..i])
            start = i + 1
        }
    }

    if start < s.len() {
        words.push(&s[start..])
    }

    words
}

fn main() {
    let sentence = "the quick brown fox"
    let words = splitWords(sentence)

    for word in words {
        println!("Word: \(word)")
    }
}

Output:

Word: the
Word: quick
Word: brown
Word: fox

This function takes a string slice and returns a vector of slices pointing into the original string. No data is copied—only references are created.
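For comparison, a direct Rust translation of the same algorithm (renamed split_words per Rust convention) looks like this:

```rust
// Rust - same algorithm: collect slices into the original string
fn split_words(s: &str) -> Vec<&str> {
    let mut words = Vec::new();
    let mut start = 0;

    for (i, b) in s.bytes().enumerate() {
        if b == b' ' {
            words.push(&s[start..i]);
            start = i + 1;
        }
    }

    if start < s.len() {
        words.push(&s[start..]);
    }

    words
}

fn main() {
    let words = split_words("the quick brown fox");
    assert_eq!(words, ["the", "quick", "brown", "fox"]);
}
```

In real Rust code you would usually reach for the standard library's split_whitespace instead; the manual loop is shown here to mirror the Oxide version.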

Why Slices Are Important

Slices embody three key principles:

  1. Zero-Cost Abstractions: Slices have no runtime overhead. They're just a pointer and length.
  2. Safety: Out-of-bounds slice access is caught at runtime with a panic, rather than causing undefined behavior.
  3. Flexibility: Functions accepting slices work with owned collections, string literals, and stack arrays.

Because of these properties, slices appear everywhere in Oxide code. They're the idiomatic way to work with sequences.

Common Slice Methods

Slices provide useful methods for working with sequences:

fn main() {
    let v = vec![1, 2, 3, 4, 5]
    let slice = &v[1..4]  // [2, 3, 4]

    // Length
    println!("Length: \(slice.len())")  // 3

    // Access elements
    println!("First: \(slice[0])")      // 2
    println!("Last: \(slice[2])")       // 4

    // Iteration
    for &item in slice {
        println!("Item: \(item)")
    }

    // Check if empty
    if slice.isEmpty() {
        println!("Empty slice")
    } else {
        println!("Not empty")
    }
}
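These methods map one-to-one onto Rust's slice API, which also offers safe accessors like first and last that return options instead of panicking:

```rust
// Rust - everyday slice methods, including the Option-returning accessors
fn main() {
    let v = vec![1, 2, 3, 4, 5];
    let slice = &v[1..4]; // [2, 3, 4]

    assert_eq!(slice.len(), 3);
    assert_eq!(slice.first(), Some(&2)); // safe: no panic on empty slices
    assert_eq!(slice.last(), Some(&4));
    assert!(slice.contains(&3));
    assert!(!slice.is_empty());
}
```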

Summary

  • Slices are references to contiguous portions of collections
  • String slices are denoted &str; array/vector slices are &[T]
  • Create slices with range syntax: &collection[start..end]
  • Use shorthand: &s[..n] to omit the start, &s[n..] to omit the end, &s[..] for the whole collection
  • Slices are more flexible than references to whole collections; prefer them in APIs
  • Slices respect borrowing rules: no modification while slices exist
  • Slices are just a pointer and a length; indexing into them is bounds-checked at runtime

Slices combine safety with flexibility. They're one of the features that makes Oxide (and Rust) pleasant to use. By understanding ownership, references, and slices, you now have the foundation to write safe, efficient code.

The ownership system—ownership itself, borrowing, and slices—is complete. These three concepts form the bedrock of Oxide's memory safety story.

Using Structs to Structure Related Data

Structs let you group related data into a single type. In Oxide, structs look similar to Rust, but methods are implemented using extension blocks.

What You'll Learn

  • How to define and instantiate structs
  • How to use field shorthand and update syntax
  • How to add methods with extension

A Simple Example

public struct User {
    public username: String,
    public email: String,
    public active: Bool,
}

extension User {
    public static fn new(username: String, email: String): User {
        User {
            username,
            email,
            active: true,
        }
    }

    public mutating fn deactivate() {
        self.active = false
    }
}

In the next sections, we'll explore how struct syntax works and how to build methods that make your types easier to use.

Defining and Instantiating Structs

Structs are one of the fundamental ways to create custom types in Oxide. A struct, short for "structure," lets you package together related values under a single name. If you're coming from object-oriented languages, a struct is similar to a class's data attributes.

Defining Structs

To define a struct, use the struct keyword followed by the struct name and curly braces containing the field definitions. Each field has a name and a type, using Oxide's camelCase naming convention for field names.

struct User {
    active: Bool,
    username: String,
    email: String,
    signInCount: UInt64,
}

Notice that:

  • Field names use camelCase (e.g., signInCount, not sign_in_count)
  • Types use Oxide's type aliases (Bool, UInt64) or standard types (String)
  • The struct body uses curly braces, just like Rust

Public Structs and Fields

To make a struct accessible from other modules, use the public keyword:

public struct User {
    public active: Bool,
    public username: String,
    email: String,           // Private by default
    signInCount: UInt64,     // Private by default
}

The public keyword replaces Rust's pub for visibility. You can apply it to both the struct itself and individual fields.

Creating Instances

To create an instance of a struct, specify the struct name followed by curly braces containing the field values:

fn main() {
    let user = User {
        active: true,
        username: "alice123".toString(),
        email: "alice@example.com".toString(),
        signInCount: 1,
    }
}

The order of fields doesn't matter when creating an instance, but all fields must be provided (unless they have default values through other mechanisms).

Mutable Instances

If you need to modify a struct instance after creation, use var instead of let:

fn main() {
    var user = User {
        active: true,
        username: "alice123".toString(),
        email: "alice@example.com".toString(),
        signInCount: 1,
    }

    // Now we can modify fields
    user.email = "newemail@example.com".toString()
    user.signInCount += 1
}

Note that in Oxide, the entire instance must be mutable to change any field. You cannot mark only certain fields as mutable.

Accessing Field Values

Use dot notation to access struct fields:

fn main() {
    let user = User {
        active: true,
        username: "alice123".toString(),
        email: "alice@example.com".toString(),
        signInCount: 1,
    }

    println!("Username: \(user.username)")
    println!("Email: \(user.email)")
    println!("Sign-in count: \(user.signInCount)")
}

Oxide's string interpolation with \(expression) makes it easy to embed field values directly in output strings.

Field Init Shorthand

When variable names match field names, you can use the shorthand syntax:

fn createUser(username: String, email: String): User {
    User {
        active: true,
        username,   // Same as username: username
        email,      // Same as email: email
        signInCount: 1,
    }
}

This shorthand reduces repetition when the parameter names match the field names.
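Rust has the identical shorthand, so the pattern carries over directly:

```rust
// Rust - identical field init shorthand
struct User {
    active: bool,
    username: String,
    email: String,
    sign_in_count: u64,
}

fn create_user(username: String, email: String) -> User {
    User {
        active: true,
        username, // same as username: username
        email,    // same as email: email
        sign_in_count: 1,
    }
}

fn main() {
    let u = create_user("alice123".to_string(), "alice@example.com".to_string());
    assert_eq!(u.username, "alice123");
    assert_eq!(u.email, "alice@example.com");
    assert_eq!(u.sign_in_count, 1);
    assert!(u.active);
}
```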

Struct Update Syntax

When creating a new instance that reuses most values from an existing instance, use the struct update syntax with ..:

fn main() {
    let user1 = User {
        active: true,
        username: "alice123".toString(),
        email: "alice@example.com".toString(),
        signInCount: 1,
    }

    let user2 = User {
        email: "bob@example.com".toString(),
        ..user1  // Use remaining fields from user1
    }

    // user2 has bob's email but alice's username, active status, and signInCount
}

The ..user1 must come last in the struct literal. It copies all remaining fields from the source instance.

Important ownership note: The struct update syntax moves data from the source. After the update, user1 cannot be used if any of its fields were moved (like String fields). However, if only copyable types (like Bool or UInt64) are transferred, the source remains valid.
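The move behavior is easiest to see in Rust, where the rules are the same: ..source moves each non-Copy field it transfers, while Copy fields remain usable on the source afterward.

```rust
// Rust - struct update moves non-Copy fields out of the source
struct User {
    active: bool,     // Copy
    username: String, // not Copy
    email: String,    // not Copy
}

fn main() {
    let user1 = User {
        active: true,
        username: String::from("alice123"),
        email: String::from("alice@example.com"),
    };

    let user2 = User {
        email: String::from("bob@example.com"),
        ..user1 // moves `username` out of user1; copies `active`
    };

    assert_eq!(user2.username, "alice123");
    assert_eq!(user2.email, "bob@example.com");

    // println!("{}", user1.username); // ERROR: `username` was moved
    assert!(user1.active); // OK: `active` is Copy, still accessible
}
```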

Tuple Structs

Oxide supports tuple structs, which are structs without named fields. These are useful when you want to give a tuple a distinct type name:

struct Color(Int, Int, Int)
struct Point(Int, Int, Int)

fn main() {
    let black = Color(0, 0, 0)
    let origin = Point(0, 0, 0)

    // Access fields by index
    let red = black.0
    let green = black.1
    let blue = black.2
}

Even though Color and Point have the same field types, they are different types. A function expecting a Color won't accept a Point.

Unit-Like Structs

You can also define structs with no fields, called unit-like structs:

struct AlwaysEqual

fn main() {
    let subject = AlwaysEqual
}

Unit-like structs are useful when you need to implement a trait on a type but don't need to store any data.

Adding Attributes

Structs commonly use derive attributes to automatically implement traits:

#[derive(Debug, Clone, PartialEq)]
public struct User {
    active: Bool,
    username: String,
    email: String,
    signInCount: UInt64,
}

The #[derive(...)] attribute automatically generates implementations for common traits:

  • Debug: Enables printing with {:?} format specifier
  • Clone: Enables creating deep copies with .clone()
  • PartialEq: Enables comparison with == and !=
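The derives behave exactly as in Rust, which makes them easy to demonstrate:

```rust
// Rust - Debug, Clone, and PartialEq derives in action
#[derive(Debug, Clone, PartialEq)]
struct User {
    username: String,
    active: bool,
}

fn main() {
    let a = User { username: "alice".to_string(), active: true };
    let b = a.clone();              // Clone: deep copy
    assert_eq!(a, b);               // PartialEq: field-by-field comparison

    let s = format!("{:?}", a);     // Debug: {:?} formatting
    assert!(s.contains("alice"));
}
```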

Complete Example

Here's a complete example showing struct definition and usage:

#[derive(Debug, Clone)]
public struct Rectangle {
    width: Int,
    height: Int,
}

fn main() {
    let rect = Rectangle {
        width: 30,
        height: 50,
    }

    println!("Rectangle: {:?}", rect)
    println!("Width: \(rect.width)")
    println!("Height: \(rect.height)")

    // Create a modified copy
    let wider = Rectangle {
        width: 60,
        ..rect
    }

    println!("Wider rectangle: {:?}", wider)
}

Rust Comparison

Aspect           Rust                      Oxide
Visibility       pub                       public
Field naming     snake_case                camelCase
Struct syntax    struct { }                struct { } (same)
Tuple structs    struct Point(i32, i32)    struct Point(Int, Int)
Types            i32, bool, u64            Int, Bool, UInt64

The underlying semantics are identical. Oxide structs are Rust structs with different naming conventions and type aliases. The compiled binary is exactly the same as equivalent Rust code.

Summary

Structs let you create custom types that package related data together:

  • Use struct with curly braces for named fields
  • Use camelCase for field names
  • Use public for visibility
  • Create instances with struct literals
  • Use var for mutable instances
  • Derive common traits with #[derive(...)]

In the next section, we'll build a complete program using structs to see how they work in practice.

An Example Program Using Structs

To understand when structs are useful, let's build a program that calculates the area of a rectangle. We'll start with simple variables and progressively refactor to use structs, demonstrating how they improve code organization and clarity.

Starting with Simple Variables

Here's a basic approach using individual variables:

fn main() {
    let width = 30
    let height = 50

    println!(
        "The area of the rectangle is \(area(width, height)) square pixels."
    )
}

fn area(width: Int, height: Int): Int {
    width * height
}

This works, but the area function has two parameters that conceptually belong together. The relationship between width and height isn't explicit in the code.

Refactoring with Tuples

We can group the dimensions using a tuple:

fn main() {
    let rect = (30, 50)

    println!(
        "The area of the rectangle is \(area(rect)) square pixels."
    )
}

fn area(dimensions: (Int, Int)): Int {
    dimensions.0 * dimensions.1
}

This groups the data, but now we've lost meaning. Is dimensions.0 the width or the height? The tuple doesn't convey this information, making the code harder to understand.

Refactoring with Structs

Structs solve this by giving meaningful names to both the type and its fields:

struct Rectangle {
    width: Int,
    height: Int,
}

fn main() {
    let rect = Rectangle {
        width: 30,
        height: 50,
    }

    println!(
        "The area of the rectangle is \(area(&rect)) square pixels."
    )
}

fn area(rectangle: &Rectangle): Int {
    rectangle.width * rectangle.height
}

Now the code clearly shows that width and height are dimensions of a Rectangle. The function signature area(rectangle: &Rectangle) immediately conveys what the function operates on.

Notice that area takes a reference &Rectangle. This means:

  • The function borrows the rectangle rather than taking ownership
  • The original rect remains valid after the function call
  • No data is copied, just a reference to the existing rectangle

Adding Debug Output

When developing, you often want to print struct values for debugging. Let's see what happens if we try to print our rectangle:

struct Rectangle {
    width: Int,
    height: Int,
}

fn main() {
    let rect = Rectangle {
        width: 30,
        height: 50,
    }

    println!("rect is \(rect)")  // This won't compile!
}

This fails because Rectangle doesn't implement the Display trait that string interpolation requires. For debugging purposes, we can use the Debug trait:

#[derive(Debug)]
struct Rectangle {
    width: Int,
    height: Int,
}

fn main() {
    let rect = Rectangle {
        width: 30,
        height: 50,
    }

    // Use {:?} for Debug formatting
    println!("rect is {:?}", rect)

    // Use {:#?} for pretty-printed Debug output
    println!("rect is {:#?}", rect)
}

Output:

rect is Rectangle { width: 30, height: 50 }
rect is Rectangle {
    width: 30,
    height: 50,
}

The #[derive(Debug)] attribute automatically generates an implementation of the Debug trait, enabling the {:?} and {:#?} format specifiers.
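Because the derived output is deterministic, you can even assert on it in Rust:

```rust
// Rust - comparing {:?} and {:#?} output from a derived Debug impl
#[derive(Debug)]
struct Rectangle {
    width: i32,
    height: i32,
}

fn main() {
    let rect = Rectangle { width: 30, height: 50 };

    let compact = format!("{:?}", rect);
    let pretty = format!("{:#?}", rect);

    assert_eq!(compact, "Rectangle { width: 30, height: 50 }");
    assert!(pretty.contains('\n')); // pretty form spans multiple lines
}
```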

Using dbg! Macro

For quick debugging, the dbg! macro is even more convenient:

#[derive(Debug)]
struct Rectangle {
    width: Int,
    height: Int,
}

fn main() {
    let scale = 2
    let rect = Rectangle {
        width: dbg!(30 * scale),
        height: 50,
    }

    dbg!(&rect)
}

Output:

[src/main.ox:9:16] 30 * scale = 60
[src/main.ox:13:5] &rect = Rectangle {
    width: 60,
    height: 50,
}

The dbg! macro:

  • Prints the file and line number
  • Shows the expression being evaluated
  • Returns ownership of the value (so it can be used inline)
  • Outputs to stderr rather than stdout

Notice that we use dbg!(&rect) with a reference to avoid moving ownership.
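The pass-through behavior is what makes dbg! usable inline. In Rust it works the same way: the macro prints to stderr and hands the value straight back.

```rust
// Rust - dbg! logs to stderr and returns the value unchanged
fn main() {
    let scale = 2;

    // Prints the file, line, expression, and value to stderr,
    // then evaluates to 60 so it can be used in the assignment.
    let width = dbg!(30 * scale);

    assert_eq!(width, 60);
}
```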

Complete Working Example

Here's the complete program with all improvements:

#[derive(Debug)]
struct Rectangle {
    width: Int,
    height: Int,
}

fn main() {
    let rect = Rectangle {
        width: 30,
        height: 50,
    }

    let area = calculateArea(&rect)

    println!("Rectangle: {:#?}", rect)
    println!("Area: \(area) square pixels")

    // Example with multiple rectangles
    let rectangles = vec![
        Rectangle { width: 10, height: 20 },
        Rectangle { width: 30, height: 50 },
        Rectangle { width: 5, height: 15 },
    ]

    println!("\nAll rectangles:")
    for rect in rectangles.iter() {
        println!("  {:?} -> area: \(calculateArea(rect))", rect)
    }
}

fn calculateArea(rectangle: &Rectangle): Int {
    rectangle.width * rectangle.height
}

Output:

Rectangle: Rectangle {
    width: 30,
    height: 50,
}
Area: 1500 square pixels

All rectangles:
  Rectangle { width: 10, height: 20 } -> area: 200
  Rectangle { width: 30, height: 50 } -> area: 1500
  Rectangle { width: 5, height: 15 } -> area: 75

Deriving Multiple Traits

In practice, you'll often derive several traits together:

#[derive(Debug, Clone, PartialEq, Eq)]
struct Rectangle {
    width: Int,
    height: Int,
}

fn main() {
    let rect1 = Rectangle { width: 30, height: 50 }
    let rect2 = rect1.clone()  // Create a copy
    let rect3 = Rectangle { width: 40, height: 50 }

    println!("rect1 == rect2: \(rect1 == rect2)")  // true
    println!("rect1 == rect3: \(rect1 == rect3)")  // false
}

Common derivable traits:

  • Debug: Enables {:?} formatting for debugging
  • Clone: Enables .clone() for deep copying
  • PartialEq: Enables == and != comparison
  • Eq: Indicates that equality is reflexive, symmetric, and transitive
  • Hash: Enables use as a key in HashMap
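For instance, deriving Hash (together with Eq) is all it takes to use the struct as a HashMap key, as this Rust sketch shows:

```rust
// Rust - a derived Hash + Eq makes the struct usable as a HashMap key
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct Rectangle {
    width: i32,
    height: i32,
}

fn main() {
    let mut areas: HashMap<Rectangle, i32> = HashMap::new();
    let r = Rectangle { width: 30, height: 50 };

    areas.insert(r.clone(), r.width * r.height);
    assert_eq!(areas[&r], 1500);
}
```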

Why Use Structs?

This example demonstrates several benefits of structs:

  1. Semantic clarity: Rectangle is more meaningful than (Int, Int)
  2. Self-documenting code: Field names like width and height explain themselves
  3. Type safety: A Rectangle can't be confused with other (Int, Int) tuples
  4. Extensibility: Easy to add more fields or functionality later
  5. Maintainability: Changes to the struct definition are centralized

Moving Toward Methods

The calculateArea function works, but it's disconnected from the Rectangle type. Conceptually, calculating area is something a rectangle does, not something done to a rectangle.

In the next section, we'll learn about methods, which let us define functions that are directly associated with a struct:

extension Rectangle {
    fn area(): Int {
        self.width * self.height
    }
}

fn main() {
    let rect = Rectangle { width: 30, height: 50 }
    println!("Area: \(rect.area())")  // More natural!
}

This syntax, using extension blocks and implicit self, is one of Oxide's major features and is covered in detail in the next section.

Summary

In this section, we saw how to:

  • Refactor code to use structs for better organization
  • Derive the Debug trait for printing struct values
  • Use dbg! for quick debugging
  • Access struct fields through references
  • Appreciate the benefits of structured data

Next, we'll explore method syntax to make our struct-related functions even more intuitive and powerful.

Method Syntax

Methods are functions defined within the context of a type. In Oxide, we define methods using extension blocks, which associate functions with a struct, enum, or trait. This is one of the places where Oxide differs most visibly from Rust's impl blocks.

Defining Methods with Extension Blocks

Let's transform our calculateArea function from the previous section into a method on Rectangle:

#[derive(Debug)]
struct Rectangle {
    width: Int,
    height: Int,
}

extension Rectangle {
    fn area(): Int {
        self.width * self.height
    }
}

fn main() {
    let rect = Rectangle {
        width: 30,
        height: 50,
    }

    println!("Area: \(rect.area()) square pixels")
}

Key points:

  • extension Rectangle { } replaces Rust's impl Rectangle { }
  • Methods have implicit access to self
  • No self parameter is written in the method signature
  • Call methods with dot notation: rect.area()

Understanding self in Oxide Methods

In Oxide, self is implicit in non-static methods. The method modifier determines how self is accessed:

Modifier     Self Type    Description
(none)       &self        Immutable borrow (default)
mutating     &mut self    Mutable borrow
consuming    self         Takes ownership
static       (none)       No self parameter

This is a fundamental difference from Rust, where you explicitly write &self, &mut self, or self as the first parameter.

Default Methods: Immutable Borrow

When you define a method without any modifier, it receives an immutable borrow of self:

extension Rectangle {
    // Default: borrows self immutably (&self)
    fn area(): Int {
        self.width * self.height
    }

    fn perimeter(): Int {
        2 * (self.width + self.height)
    }

    fn isSquare(): Bool {
        self.width == self.height
    }
}

These methods can read from self but cannot modify it. This is the most common method type and makes sense for any operation that doesn't change the object's state.

Rust Equivalent

The Oxide code above translates to this Rust code:

#![allow(unused)]
fn main() {
impl Rectangle {
    fn area(&self) -> i32 {
        self.width * self.height
    }

    fn perimeter(&self) -> i32 {
        2 * (self.width + self.height)
    }

    fn is_square(&self) -> bool {
        self.width == self.height
    }
}
}

Mutating Methods: Mutable Borrow

Use the mutating modifier when a method needs to modify self:

extension Rectangle {
    mutating fn scale(factor: Int) {
        self.width *= factor
        self.height *= factor
    }

    mutating fn setWidth(newWidth: Int) {
        self.width = newWidth
    }

    mutating fn setHeight(newHeight: Int) {
        self.height = newHeight
    }

    mutating fn double() {
        self.scale(2)  // Can call other mutating methods
    }
}

fn main() {
    var rect = Rectangle { width: 10, height: 20 }
    println!("Before: {:?}", rect)

    rect.scale(3)
    println!("After scale(3): {:?}", rect)

    rect.setWidth(100)
    println!("After setWidth(100): {:?}", rect)
}

Output:

Before: Rectangle { width: 10, height: 20 }
After scale(3): Rectangle { width: 30, height: 60 }
After setWidth(100): Rectangle { width: 100, height: 60 }

Important notes:

  • You can only call mutating methods on mutable bindings (var, not let)
  • The mutating keyword clearly signals that the method modifies state
  • This pattern is inspired by Swift's mutating keyword

Rust Equivalent

#![allow(unused)]
fn main() {
impl Rectangle {
    fn scale(&mut self, factor: i32) {
        self.width *= factor;
        self.height *= factor;
    }

    fn set_width(&mut self, new_width: i32) {
        self.width = new_width;
    }
}
}

Consuming Methods: Taking Ownership

Use the consuming modifier when a method takes ownership of self:

extension Rectangle {
    consuming fn intoSquare(): Rectangle {
        let size = (self.width + self.height) / 2
        Rectangle { width: size, height: size }
    }

    consuming fn decompose(): (Int, Int) {
        (self.width, self.height)
    }

    consuming fn destroy() {
        // self is dropped at the end of this method
        println!("Rectangle destroyed!")
    }
}

fn main() {
    let rect = Rectangle { width: 30, height: 50 }
    let (w, h) = rect.decompose()
    println!("Width: \(w), Height: \(h)")

    // rect can no longer be used - ownership was consumed
    // println!("{:?}", rect)  // ERROR: value moved
}

Consuming methods are used when:

  • Transforming a value into something else
  • Extracting owned data from a struct
  • Intentionally consuming a resource (like closing a file handle)

By convention, consuming methods often use an "into" prefix (like intoSquare) to signal that the original value will be consumed.

Rust Equivalent

#![allow(unused)]
fn main() {
impl Rectangle {
    fn into_square(self) -> Rectangle {
        let size = (self.width + self.height) / 2;
        Rectangle { width: size, height: size }
    }

    fn decompose(self) -> (i32, i32) {
        (self.width, self.height)
    }
}
}

Static Methods: No Self Parameter

Use the static modifier for functions that don't operate on an instance:

extension Rectangle {
    static fn new(width: Int, height: Int): Rectangle {
        Rectangle { width, height }
    }

    static fn square(size: Int): Rectangle {
        Rectangle { width: size, height: size }
    }

    static fn zero(): Rectangle {
        Rectangle { width: 0, height: 0 }
    }

    static fn fromDimensions(dimensions: (Int, Int)): Rectangle {
        Rectangle {
            width: dimensions.0,
            height: dimensions.1,
        }
    }
}

fn main() {
    let rect1 = Rectangle.new(30, 50)
    let rect2 = Rectangle.square(25)
    let rect3 = Rectangle.zero()

    println!("rect1: {:?}", rect1)
    println!("rect2: {:?}", rect2)
    println!("rect3: {:?}", rect3)
}

Static methods are called on the type itself using dot notation: Rectangle.new(30, 50) rather than rect.new(30, 50).

Common uses for static methods:

  • Constructors (like new, default, fromXxx)
  • Factory methods that create instances
  • Utility functions related to the type

Using Self in Static Methods

Inside an extension block, Self (capital S) refers to the type being extended:

extension Rectangle {
    static fn square(size: Int): Self {
        Self { width: size, height: size }
    }

    static fn default(): Self {
        Self.zero()  // Can call other static methods
    }
}

Rust Equivalent

#![allow(unused)]
fn main() {
impl Rectangle {
    fn new(width: i32, height: i32) -> Rectangle {
        Rectangle { width, height }
    }

    fn square(size: i32) -> Self {
        Self { width: size, height: size }
    }
}
}

Note: In Rust, these are called "associated functions" when they don't take self. Oxide uses static to make this explicit.

Methods with Additional Parameters

Methods can take additional parameters beyond the implicit self:

extension Rectangle {
    fn canHold(other: &Rectangle): Bool {
        self.width > other.width && self.height > other.height
    }

    mutating fn resizeTo(width: Int, height: Int) {
        self.width = width
        self.height = height
    }

    fn areaRatio(other: &Rectangle): Float {
        self.area() as Float / other.area() as Float
    }
}

fn main() {
    let rect1 = Rectangle { width: 30, height: 50 }
    let rect2 = Rectangle { width: 10, height: 20 }

    println!("rect1 can hold rect2: \(rect1.canHold(&rect2))")
    println!("Area ratio: \(rect1.areaRatio(&rect2))")
}

Multiple Extension Blocks

You can split methods across multiple extension blocks:

struct Rectangle {
    width: Int,
    height: Int,
}

// Constructors
extension Rectangle {
    static fn new(width: Int, height: Int): Self {
        Self { width, height }
    }

    static fn square(size: Int): Self {
        Self { width: size, height: size }
    }
}

// Geometry calculations
extension Rectangle {
    fn area(): Int {
        self.width * self.height
    }

    fn perimeter(): Int {
        2 * (self.width + self.height)
    }
}

// Mutations
extension Rectangle {
    mutating fn scale(factor: Int) {
        self.width *= factor
        self.height *= factor
    }
}

This helps organize methods by category, though it's also fine to keep everything in a single block.

Implementing Traits with Extension Blocks

Extension blocks also implement traits using the syntax extension Type: Trait:

import std.fmt.{ Display, Formatter, Result }

#[derive(Clone)]
struct Rectangle {
    width: Int,
    height: Int,
}

extension Rectangle: Display {
    fn fmt(f: &mut Formatter): Result {
        write!(f, "Rectangle(\(self.width) x \(self.height))")
    }
}

fn main() {
    let rect = Rectangle { width: 30, height: 50 }
    println!("\(rect)")  // Uses Display implementation
}

This replaces Rust's impl Trait for Type syntax. The colon reads naturally: "extend Rectangle with Display capability."
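For comparison, here is the Rust form of the same trait implementation, using impl Trait for Type:

```rust
// Rust equivalent: impl Trait for Type instead of `extension Type: Trait`
use std::fmt;

struct Rectangle {
    width: i32,
    height: i32,
}

impl fmt::Display for Rectangle {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "Rectangle({} x {})", self.width, self.height)
    }
}

fn main() {
    let rect = Rectangle { width: 30, height: 50 };
    // Display also provides to_string() via a blanket impl
    assert_eq!(rect.to_string(), "Rectangle(30 x 50)");
}
```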

Multiple Trait Implementations

import std.fmt.{ Display, Formatter, Result }
import std.cmp.{ Ord, Ordering }

extension Rectangle: Display {
    fn fmt(f: &mut Formatter): Result {
        write!(f, "\(self.width)x\(self.height)")
    }
}

extension Rectangle: PartialOrd {
    fn partialCmp(other: &Self): Ordering? {
        self.area().partialCmp(&other.area())
    }
}

Visibility in Extension Blocks

Use public to make methods accessible from other modules:

public struct Rectangle {
    public width: Int,
    public height: Int,
}

extension Rectangle {
    public static fn new(width: Int, height: Int): Self {
        Self { width, height }
    }

    public fn area(): Int {
        self.width * self.height
    }

    // Private helper method
    fn validate(): Bool {
        self.width > 0 && self.height > 0
    }

    public mutating fn scale(factor: Int) {
        if self.validate() {
            self.width *= factor
            self.height *= factor
        }
    }
}

Complete Example: A Point Struct

Here's a comprehensive example showing all method modifiers:

#[derive(Debug, Clone, PartialEq)]
public struct Point {
    x: Float,
    y: Float,
}

extension Point {
    // Static: constructors
    public static fn new(x: Float, y: Float): Self {
        Self { x, y }
    }

    public static fn origin(): Self {
        Self { x: 0.0, y: 0.0 }
    }

    public static fn fromAngle(angle: Float, distance: Float): Self {
        Self {
            x: distance * angle.cos(),
            y: distance * angle.sin(),
        }
    }

    // Default (&self): read-only operations
    public fn distanceFromOrigin(): Float {
        (self.x * self.x + self.y * self.y).sqrt()
    }

    public fn distanceTo(other: &Point): Float {
        let dx = self.x - other.x
        let dy = self.y - other.y
        (dx * dx + dy * dy).sqrt()
    }

    public fn midpointTo(other: &Point): Point {
        Point {
            x: (self.x + other.x) / 2.0,
            y: (self.y + other.y) / 2.0,
        }
    }

    // Mutating (&mut self): modifications
    public mutating fn translate(dx: Float, dy: Float) {
        self.x += dx
        self.y += dy
    }

    public mutating fn scale(factor: Float) {
        self.x *= factor
        self.y *= factor
    }

    public mutating fn normalize() {
        let dist = self.distanceFromOrigin()
        if dist != 0.0 {
            self.x /= dist
            self.y /= dist
        }
    }

    // Consuming (self): ownership transfer
    public consuming fn intoPolar(): (Float, Float) {
        let r = self.distanceFromOrigin()
        let theta = self.y.atan2(self.x)
        (r, theta)
    }

    public consuming fn add(other: Point): Point {
        Point {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}

fn main() {
    // Using static methods
    var point = Point.new(3.0, 4.0)
    let origin = Point.origin()

    // Using default (immutable) methods
    println!("Distance from origin: \(point.distanceFromOrigin())")
    println!("Distance to origin: \(point.distanceTo(&origin))")

    // Using mutating methods
    point.translate(1.0, 1.0)
    println!("After translate: \(point:?)")

    point.scale(2.0)
    println!("After scale: \(point:?)")

    // Using consuming method
    let polar = point.intoPolar()
    println!("Polar coordinates: r=\(polar.0), theta=\(polar.1)")

    // point is now consumed and cannot be used
}

Summary of Method Modifiers

Modifier     Self Access   Use Case                         Example
(none)       &self         Reading data, calculations       fn area(): Int
mutating     &mut self     Modifying state                  mutating fn scale(f: Int)
consuming    self          Ownership transfer, transforms   consuming fn into(): T
static       (none)        Constructors, utilities          static fn new(): Self

Comparison with Rust

Aspect                 Rust                        Oxide
Implementation block   impl Type { }               extension Type { }
Trait implementation   impl Trait for Type { }     extension Type: Trait { }
Immutable borrow       fn foo(&self)               fn foo()
Mutable borrow         fn foo(&mut self)           mutating fn foo()
Take ownership         fn foo(self)                consuming fn foo()
No self                fn foo() (associated fn)    static fn foo()
Method call            obj.method()                obj.method() (same)
Static call            Type::method()              Type.method()

Note: Rust's :: path separator does not exist in Oxide. Using Type::method() in Oxide code will cause a syntax error. Oxide uses . as its only path separator.

The key insight is that Oxide makes the method's relationship to self explicit through modifiers rather than through the first parameter. This makes the intent clearer when reading method signatures.
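To make the mapping concrete, here is a small Rust sketch using a hypothetical Counter type (Int again assumed to be i64), showing the receiver form each Oxide modifier corresponds to:

```rust
struct Counter {
    value: i64,
}

impl Counter {
    // Oxide: `static fn new(): Self` — no self parameter at all
    fn new() -> Self {
        Counter { value: 0 }
    }

    // Oxide: `fn value(): Int` — default methods borrow &self
    fn value(&self) -> i64 {
        self.value
    }

    // Oxide: `mutating fn increment()` — &mut self
    fn increment(&mut self) {
        self.value += 1;
    }

    // Oxide: `consuming fn intoValue(): Int` — takes self by value
    fn into_value(self) -> i64 {
        self.value
    }
}

fn main() {
    let mut c = Counter::new(); // Type::method() in Rust, Type.method() in Oxide
    c.increment();
    println!("{}", c.value());
    let v = c.into_value(); // `c` is moved here and cannot be used afterwards
    println!("{v}");
}
```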

Summary

Extension blocks are Oxide's way of adding methods to types:

  • Use extension Type { } for inherent methods
  • Use extension Type: Trait { } for trait implementations
  • Method modifiers (mutating, consuming, static) replace explicit self parameters
  • Default methods borrow immutably (&self)
  • mutating methods can modify state (&mut self)
  • consuming methods take ownership (self)
  • static methods have no self and are called on the type
  • Use public for visibility, just like with structs

This syntax makes method signatures more readable and the relationship between methods and their data more explicit, while compiling to exactly the same code as equivalent Rust.

Enums and Pattern Matching

Enums let you define a type by enumerating its possible variants. They are a core tool for modeling state and making illegal states unrepresentable. Oxide uses match with -> arms and _ as the wildcard.

What You'll Learn

  • How to define enums and attach data to variants
  • How to use match to branch on variants
  • How to write concise control flow with if let

A Quick Example

public enum Message {
    Quit,
    Move { x: Int, y: Int },
    Write(String),
}

fn describe(msg: Message): String {
    match msg {
        Message.Quit -> "Quit message",
        Message.Move { x, y } -> "Move to \(x), \(y)",
        Message.Write(text) -> "Text: \(text)",
        _ -> "Unknown message",
    }
}

In the following sections, you'll see how enums and pattern matching work together to express rich, safe control flow.

Defining an Enum

Enums allow you to define a type by enumerating its possible variants. Where structs give you a way of grouping together related fields and data, enums give you a way of saying a value is one of a possible set of values. For example, we may want to say that a Rectangle is one of a set of possible shapes that also includes Circle and Triangle. Oxide lets us express these possibilities as an enum.

Let's look at a situation we might want to express in code and see why enums are useful and more appropriate than structs in this case. Say we need to work with IP addresses. Currently, two major standards are used for IP addresses: version four and version six. Because these are the only possibilities for an IP address that our program will come across, we can enumerate all possible variants, which is where enumeration gets its name.

Any IP address can be either a version four or a version six address, but not both at the same time. That property of IP addresses makes the enum data structure appropriate because an enum value can only be one of its variants. Both version four and version six addresses are still fundamentally IP addresses, so they should be treated as the same type when the code is handling situations that apply to any kind of IP address.

We can express this concept in code by defining an IpAddrKind enumeration and listing the possible kinds an IP address can be, V4 and V6. These are the variants of the enum:

enum IpAddrKind {
    V4,
    V6,
}

IpAddrKind is now a custom data type that we can use elsewhere in our code.

Enum Values

We can create instances of each of the two variants of IpAddrKind like this:

let four = IpAddrKind.V4
let six = IpAddrKind.V6

Note that the variants of the enum are namespaced under its identifier using dot notation: IpAddrKind.V4 and IpAddrKind.V6. This is useful because now both values are of the same type: IpAddrKind. We can then, for instance, define a function that takes any IpAddrKind:

fn route(ipKind: IpAddrKind) {
    // handle routing
}

And we can call this function with either variant:

route(IpAddrKind.V4)
route(IpAddrKind.V6)

Using enums has even more advantages. Thinking more about our IP address type, at the moment we don't have a way to store the actual IP address data; we only know what kind it is. Given that you just learned about structs, you might be tempted to tackle this problem with structs:

enum IpAddrKind {
    V4,
    V6,
}

struct IpAddr {
    kind: IpAddrKind,
    address: String,
}

let home = IpAddr {
    kind: IpAddrKind.V4,
    address: "127.0.0.1".toString(),
}

let loopback = IpAddr {
    kind: IpAddrKind.V6,
    address: "::1".toString(),
}

Here, we've defined a struct IpAddr that has two fields: a kind field that is of type IpAddrKind (the enum we defined previously) and an address field of type String. We have two instances of this struct.

However, representing the same concept using just an enum is more concise: rather than an enum inside a struct, we can put data directly into each enum variant. This new definition of the IpAddr enum says that both V4 and V6 variants will have associated String values:

enum IpAddr {
    V4(String),
    V6(String),
}

let home = IpAddr.V4("127.0.0.1".toString())
let loopback = IpAddr.V6("::1".toString())

We attach data to each variant of the enum directly, so there is no need for an extra struct. Here, it's also easier to see another detail of how enums work: the name of each enum variant that we define also becomes a function that constructs an instance of the enum. That is, IpAddr.V4() is a function call that takes a String argument and returns an instance of the IpAddr type.

There's another advantage to using an enum rather than a struct: each variant can have different types and amounts of associated data. Version four IP addresses will always have four numeric components that will have values between 0 and 255. If we wanted to store V4 addresses as four UInt8 values but still express V6 addresses as one String value, we wouldn't be able to with a struct. Enums handle this case with ease:

enum IpAddr {
    V4(UInt8, UInt8, UInt8, UInt8),
    V6(String),
}

let home = IpAddr.V4(127, 0, 0, 1)
let loopback = IpAddr.V6("::1".toString())

We've shown several different ways to define data structures to store version four and version six IP addresses. However, as it turns out, wanting to store IP addresses and encode which kind they are is so common that the standard library has a definition we can use! Let's look at how the standard library defines IpAddr: it has the exact enum and variants that we've defined and used, but it embeds the address data inside the variants in the form of two different structs, which are defined differently for each variant.
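In plain Rust, that standard-library type looks like this; each variant embeds a dedicated address struct rather than a raw String:

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

fn main() {
    // Each variant wraps a purpose-built struct for that address kind
    let home = IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1));
    let loopback = IpAddr::V6(Ipv6Addr::LOCALHOST); // the "::1" address

    println!("home: {home}, loopback: {loopback}");
    println!("home is v4: {}", home.is_ipv4());
}
```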

Enums with Named Fields

Enum variants can also have named fields, similar to structs:

#[derive(Debug, Clone)]
public enum Message {
    Quit,
    Move { x: Int, y: Int },
    Write(String),
    ChangeColor(Int, Int, Int),
}

This enum has four variants with different types:

  • Quit has no data associated with it at all.
  • Move has named fields, like a struct.
  • Write includes a single String.
  • ChangeColor includes three Int values.

Defining an enum with variants such as the ones above is similar to defining different kinds of struct definitions, except the enum doesn't use the struct keyword and all the variants are grouped together under the Message type.

Creating instances of these variants:

let quit = Message.Quit
let moveMsg = Message.Move { x: 10, y: 20 }
let write = Message.Write("hello".toString())
let color = Message.ChangeColor(255, 128, 0)

Defining Methods on Enums

We're also able to define methods on enums using extension blocks. Here's a method named call that we could define on our Message enum:

extension Message {
    fn call() {
        match self {
            Message.Quit -> println!("Quit"),
            Message.Move { x, y } -> println!("Move to (\(x), \(y))"),
            Message.Write(text) -> println!("Write: \(text)"),
            Message.ChangeColor(r, g, b) -> println!("Color: (\(r), \(g), \(b))"),
        }
    }
}

let m = Message.Write("hello".toString())
m.call()

The body of the method uses self to get the value that we called the method on. In this example, we've created a variable m that has the value Message.Write("hello".toString()), and that is what self will be in the body of the call method when m.call() runs.
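The Rust equivalent of the call method uses an impl block with an explicit &self parameter. (This sketch returns the String rather than printing it, so the result is easy to check, and again assumes Int is i64.)

```rust
enum Message {
    Quit,
    Move { x: i64, y: i64 },
    Write(String),
    ChangeColor(i64, i64, i64),
}

impl Message {
    // Oxide's default methods become `&self` methods in Rust
    fn call(&self) -> String {
        match self {
            Message::Quit => "Quit".to_string(),
            Message::Move { x, y } => format!("Move to ({x}, {y})"),
            Message::Write(text) => format!("Write: {text}"),
            Message::ChangeColor(r, g, b) => format!("Color: ({r}, {g}, {b})"),
        }
    }
}

fn main() {
    let m = Message::Write("hello".to_string());
    println!("{}", m.call());
}
```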

The Nullable Type: Using T? Instead of Option

Oxide provides a built-in way to express the concept of a value being present or absent using nullable types. Instead of writing Option<T> as you would in Rust, Oxide uses the more concise T? syntax. This is so common and useful that it's built into the language itself.

The nullable type encodes the very common scenario in which a value could be something or it could be nothing. For example, if you request the first item of a non-empty list, you would get a value. If you request the first item of an empty list, you would get nothing.

Expressing this concept in terms of the type system means the compiler can check whether you've handled all the cases you should be handling; this functionality can prevent bugs that are extremely common in other programming languages.

Here's how you use nullable types in Oxide:

let someNumber: Int? = Some(5)
let someString: String? = Some("a string".toString())

let absentNumber: Int? = null
let absentString: String? = null

The type of someNumber is Int?. The type of someString is String?. For the absent values, the type annotation is required: from null alone, the compiler can't infer what type the corresponding Some variant would hold.

When we have a Some value, we know that a value is present and the value is held within the Some. When we have a null value, in some sense it means the same thing as null in other languages: we don't have a valid value.

So why is T? any better than having null? In short, because Int? and Int are different types, the compiler won't let us use an Int? value as if it were definitely an Int. For example, this code won't compile because it's trying to add an Int? to an Int:

let x: Int = 5
let y: Int? = Some(5)

let sum = x + y  // Error! Can't add Int and Int?

When we have a value of a type like Int in Oxide, the compiler will ensure we always have a valid value. We can proceed confidently without having to check for null before using that value. Only when we have an Int? do we have to worry about possibly not having a value, and the compiler will make sure we handle that case before using the value.

In other words, you have to convert a T? to a T before you can perform T operations with it. Generally, this helps catch one of the most common issues with null: assuming that something isn't null when it actually is.

Eliminating the risk of incorrectly assuming a not-null value helps you to be more confident in your code. In order to have a value that can possibly be null, you must explicitly opt in by making the type of that value T?. Then, when you use that value, you are required to explicitly handle the case when the value is null. Everywhere that a value has a type that isn't a T?, you can safely assume that the value isn't null.

So how do you get the T value out of a Some variant when you have a value of type T? so that you can use that value? The T? type has a large number of methods that are useful in a variety of situations; you can find them in the Rust documentation for Option<T>. Becoming familiar with the methods on Option<T> will be extremely useful in your journey with Oxide.

In general, in order to use a T? value, you want to have code that will handle each variant. You want some code that will run only when you have a Some(T) value, and this code is allowed to use the inner T. You want some other code to run only if you have a null value, and that code doesn't have a T value available. The match expression is a control flow construct that does just this when used with enums: it will run different code depending on which variant of the enum it has, and that code can use the data inside the matching variant.

The match Expression

Oxide has an extremely powerful control flow construct called match that allows you to compare a value against a series of patterns and then execute code based on which pattern matches. Patterns can be made up of literal values, variable names, wildcards, and many other things. The power of match comes from the expressiveness of the patterns and the fact that the compiler confirms that all possible cases are handled.

Think of a match expression as being like a coin-sorting machine: coins slide down a track with variously sized holes along it, and each coin falls through the first hole it encounters that it fits into. In the same way, values go through each pattern in a match, and at the first pattern the value "fits," the value falls into the associated code block to be used during execution.

Speaking of coins, let's use them as an example using match! We can write a function that takes an unknown US coin and, in a similar way as the counting machine, determines which coin it is and returns its value in cents:

enum Coin {
    Penny,
    Nickel,
    Dime,
    Quarter,
}

fn valueInCents(coin: Coin): Int {
    match coin {
        Coin.Penny -> 1,
        Coin.Nickel -> 5,
        Coin.Dime -> 10,
        Coin.Quarter -> 25,
    }
}

Let's break down the match in the valueInCents function. First we list the match keyword followed by an expression, which in this case is the value coin. This seems very similar to a conditional expression used with if, but there's a big difference: with if, the condition needs to evaluate to a Boolean value, but here it can be any type. The type of coin in this example is the Coin enum that we defined.

Next are the match arms. An arm has two parts: a pattern and some code. The first arm here has a pattern that is the value Coin.Penny and then the -> that separates the pattern and the code to run. The code in this case is just the value 1. Each arm is separated from the next with a comma.

When the match expression executes, it compares the resultant value against the pattern of each arm, in order. If a pattern matches the value, the code associated with that pattern is executed. If that pattern doesn't match the value, execution continues to the next arm, much as in a coin-sorting machine.

The code associated with each arm is an expression, and the resultant value of the expression in the matching arm is the value that gets returned for the entire match expression.
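For comparison, the same function in Rust syntax: match arms use => instead of ->, and variants are namespaced with :: (Int assumed to be i64):

```rust
enum Coin {
    Penny,
    Nickel,
    Dime,
    Quarter,
}

fn value_in_cents(coin: Coin) -> i64 {
    // The whole match is an expression; the matching arm's value is returned
    match coin {
        Coin::Penny => 1,
        Coin::Nickel => 5,
        Coin::Dime => 10,
        Coin::Quarter => 25,
    }
}

fn main() {
    println!("{}", value_in_cents(Coin::Dime));
}
```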

We don't typically use curly brackets if the match arm code is short, as it is in the previous example where each arm just returns a value. If you want to run multiple lines of code in a match arm, you must use curly brackets, and the comma following the arm is then optional:

fn valueInCents(coin: Coin): Int {
    match coin {
        Coin.Penny -> {
            println!("Lucky penny!")
            1
        },
        Coin.Nickel -> 5,
        Coin.Dime -> 10,
        Coin.Quarter -> 25,
    }
}

Patterns That Bind to Values

Another useful feature of match arms is that they can bind to the parts of the values that match the pattern. This is how we can extract values out of enum variants.

As an example, let's change one of our enum variants to hold data inside it. From 1999 through 2008, the United States minted quarters with different designs for each of the 50 states on one side. No other coins got state designs, so only quarters have this extra value. We can add this information to our enum by changing the Quarter variant to include a UsState value stored inside it:

#[derive(Debug, Clone, Copy)]
enum UsState {
    Alabama,
    Alaska,
    Arizona,
    Arkansas,
    California,
    // ... etc
}

enum Coin {
    Penny,
    Nickel,
    Dime,
    Quarter(UsState),
}

Let's imagine that a friend is trying to collect all 50 state quarters. While we sort our loose change by coin type, we'll also call out the name of the state associated with each quarter so that if it's one our friend doesn't have, they can add it to their collection.

In the match expression for this code, we add a variable called state to the pattern that matches values of the variant Coin.Quarter. When a Coin.Quarter matches, the state variable will bind to the value of that quarter's state. Then we can use state in the code for that arm:

fn valueInCents(coin: Coin): Int {
    match coin {
        Coin.Penny -> 1,
        Coin.Nickel -> 5,
        Coin.Dime -> 10,
        Coin.Quarter(state) -> {
            println!("State quarter from \(state:?)")
            25
        },
    }
}

If we were to call valueInCents(Coin.Quarter(UsState.Alaska)), coin would be Coin.Quarter(UsState.Alaska). When we compare that value with each of the match arms, none of them match until we reach Coin.Quarter(state). At that point, the binding for state will be the value UsState.Alaska. We can then use that binding in the println! expression, thus getting the inner state value out of the Coin enum variant for Quarter.

Matching with Nullable Types

In the previous section, we wanted to get the inner T value out of the Some case when using T?; we can also handle T? using match, as we did with the Coin enum! Instead of comparing coins, we'll compare the variants of T?, but the way the match expression works remains the same.

Let's say we want to write a function that takes an Int? and, if there's a value inside, adds 1 to that value. If there isn't a value inside, the function should return the null value and not attempt to perform any operations.

This function is very easy to write, thanks to match:

fn plusOne(x: Int?): Int? {
    match x {
        null -> null,
        Some(i) -> Some(i + 1),
    }
}

let five: Int? = Some(5)
let six = plusOne(five)
let none = plusOne(null)

Let's examine the first execution of plusOne in more detail. When we call plusOne(five), the variable x in the body of plusOne will have the value Some(5). We then compare that against each match arm:

null -> null,

The Some(5) value doesn't match the pattern null, so we continue to the next arm:

Some(i) -> Some(i + 1),

Does Some(5) match Some(i)? It does! We have the same variant. The i binds to the value contained in Some, so i takes the value 5. The code in the match arm is then executed, so we add 1 to the value of i and create a new Some value with our total 6 inside.

Now let's consider the second call of plusOne, where x is null. We enter the match and compare to the first arm:

null -> null,

It matches! There's no value to add to, so the program stops and returns the null value on the right side of ->. Because the first arm matched, no other arms are compared.

Combining match and enums is useful in many situations. You'll see this pattern a lot in Oxide code: match against an enum, bind a variable to the data inside, and then execute code based on it. It's a bit tricky at first, but once you get used to it, you'll wish you had it in all languages. It's consistently a user favorite.
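In Rust syntax, the same function operates on Option<T>, whose None variant plays the role of Oxide's null:

```rust
fn plus_one(x: Option<i64>) -> Option<i64> {
    match x {
        None => None,
        Some(i) => Some(i + 1),
    }
}

fn main() {
    let five = Some(5);
    println!("{:?}", plus_one(five)); // Some(6)
    println!("{:?}", plus_one(None)); // None
}
```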

Matches Are Exhaustive

There's one other aspect of match we need to discuss: the arms' patterns must cover all possibilities. Consider this version of our plusOne function, which has a bug and won't compile:

fn plusOne(x: Int?): Int? {
    match x {
        Some(i) -> Some(i + 1),
    }
}

We didn't handle the null case, so this code will cause a bug. Luckily, it's a bug the compiler knows how to catch. If we try to compile this code, we'll get an error indicating that we haven't handled all possible cases.

Matches in Oxide are exhaustive: we must exhaust every last possibility in order for the code to be valid. Especially in the case of T?, when the compiler ensures we explicitly handle the null case, it protects us from assuming we have a value when we might have null, thus making the mistake discussed earlier impossible.

Catch-all Patterns with _

Using enums, we can also take special actions for a few particular values, but for all other values take one default action. Let's look at an example where we want to implement game logic for a dice roll:

fn handleDiceRoll(diceRoll: Int) {
    match diceRoll {
        3 -> addFancyHat(),
        7 -> removeFancyHat(),
        _ -> reroll(),
    }
}

fn addFancyHat() {}
fn removeFancyHat() {}
fn reroll() {}

For the first two arms, the patterns are the literal values 3 and 7. For the last arm that covers every other possible value, the pattern is the wildcard _. The code that runs for the wildcard arm calls the reroll function.

This code compiles, even though we haven't listed all the possible values an Int can have, because the _ pattern will match all values not specifically listed. The catch-all pattern meets the requirement that match must be exhaustive. Note that we have to put the catch-all arm last because the patterns are evaluated in order. If we put the catch-all arm earlier, the other arms would never run.

Catch-all with a Bound Variable

Sometimes you want to use the matched value in your catch-all arm. You can bind the value to a variable by using a name other than _:

fn handleDiceRoll(diceRoll: Int) {
    match diceRoll {
        3 -> addFancyHat(),
        7 -> removeFancyHat(),
        other -> movePlayer(other),
    }
}

fn movePlayer(spaces: Int) {
    println!("Moving \(spaces) spaces")
}

Here, we're using the variable other to capture all values that don't match 3 or 7, and we use that value in the arm's code.

Ignoring Values with _

When you want a catch-all but don't need the value, use _:

fn handleDiceRoll(diceRoll: Int) {
    match diceRoll {
        3 -> addFancyHat(),
        7 -> removeFancyHat(),
        _ -> {},  // Do nothing
    }
}

Here, we're telling the compiler explicitly that we aren't going to use any other value, by using _ with an empty block.

Matching Multiple Patterns

You can match multiple patterns in a single arm using the | operator:

fn describeLetter(letter: Char): String {
    match letter {
        'a' | 'e' | 'i' | 'o' | 'u' -> "vowel".toString(),
        'a'..='z' -> "consonant".toString(),
        _ -> "not a lowercase letter".toString(),
    }
}
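The Rust version is nearly identical; the | operator and ..= ranges work the same way:

```rust
fn describe_letter(letter: char) -> String {
    match letter {
        // Arms are tried in order, so vowels are caught before the range below
        'a' | 'e' | 'i' | 'o' | 'u' => "vowel".to_string(),
        'a'..='z' => "consonant".to_string(),
        _ => "not a lowercase letter".to_string(),
    }
}

fn main() {
    println!("{}", describe_letter('e'));
    println!("{}", describe_letter('b'));
    println!("{}", describe_letter('7'));
}
```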

Matching with Guards

Sometimes pattern matching alone isn't expressive enough. Match guards allow you to add an additional condition to a pattern:

fn checkNumber(x: Int?) {
    match x {
        Some(n) if n < 0 -> println!("Negative: \(n)"),
        Some(n) if n > 0 -> println!("Positive: \(n)"),
        Some(_) -> println!("Zero"),
        null -> println!("No value"),
    }
}

The if n < 0 part is called a match guard. It's an additional condition on a match arm that must also be true for that arm to be chosen. Match guards are useful for expressing more complex ideas than a pattern alone allows.
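Match guards work the same way in Rust. This sketch returns the message as a String rather than printing it, so the behavior is easy to check:

```rust
fn check_number(x: Option<i64>) -> String {
    match x {
        // The `if n < 0` guard must also hold for this arm to be chosen
        Some(n) if n < 0 => format!("Negative: {n}"),
        Some(n) if n > 0 => format!("Positive: {n}"),
        Some(_) => "Zero".to_string(),
        None => "No value".to_string(),
    }
}

fn main() {
    println!("{}", check_number(Some(-3)));
    println!("{}", check_number(None));
}
```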

Destructuring Enums with Named Fields

When matching enums with named fields, you can destructure them using struct-like syntax:

enum Message {
    Quit,
    Move { x: Int, y: Int },
    Write(String),
    ChangeColor(Int, Int, Int),
}

fn processMessage(msg: Message) {
    match msg {
        Message.Quit -> {
            println!("Quit received")
        },
        Message.Move { x, y } -> {
            println!("Moving to x=\(x), y=\(y)")
        },
        Message.Write(text) -> {
            println!("Text message: \(text)")
        },
        Message.ChangeColor(r, g, b) -> {
            println!("Changing color to RGB(\(r), \(g), \(b))")
        },
    }
}

You can also rename the bound variables:

match msg {
    Message.Move { x: horizontal, y: vertical } -> {
        println!("Moving horizontally: \(horizontal), vertically: \(vertical)")
    },
    _ -> {},
}

Nested Patterns

Patterns can be nested to match complex data structures:

enum Color {
    Rgb(Int, Int, Int),
    Hsv(Int, Int, Int),
}

enum Message {
    Quit,
    ChangeColor(Color),
}

fn processMessage(msg: Message) {
    match msg {
        Message.ChangeColor(Color.Rgb(r, g, b)) -> {
            println!("RGB: \(r), \(g), \(b)")
        },
        Message.ChangeColor(Color.Hsv(h, s, v)) -> {
            println!("HSV: \(h), \(s), \(v)")
        },
        Message.Quit -> println!("Quit"),
    }
}
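The nested patterns translate directly to Rust, with :: in place of the dot (this sketch returns the formatted String so the result is checkable):

```rust
enum Color {
    Rgb(i64, i64, i64),
    Hsv(i64, i64, i64),
}

enum Message {
    Quit,
    ChangeColor(Color),
}

fn process_message(msg: Message) -> String {
    match msg {
        // One pattern reaches through both enums at once
        Message::ChangeColor(Color::Rgb(r, g, b)) => format!("RGB: {r}, {g}, {b}"),
        Message::ChangeColor(Color::Hsv(h, s, v)) => format!("HSV: {h}, {s}, {v}"),
        Message::Quit => "Quit".to_string(),
    }
}

fn main() {
    println!("{}", process_message(Message::ChangeColor(Color::Rgb(255, 0, 0))));
}
```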

The match expression is one of Oxide's most powerful features. It's used extensively throughout Oxide code for control flow, error handling, and working with optional values. As you become more familiar with pattern matching, you'll find yourself reaching for match whenever you need to handle multiple cases based on the structure of your data.

if let and while let

The if let syntax lets you combine if and let into a less verbose way to handle values that match one pattern while ignoring the rest. Consider the following program that matches on a Int? value in the configMax variable but only wants to execute code if the value is a Some variant:

let configMax: Int? = Some(3)

match configMax {
    Some(max) -> println!("The maximum is configured to be \(max)"),
    null -> {},
}

If the value is Some, we print out the value in the Some variant by binding the value to the variable max in the pattern. We don't want to do anything with the null value. To satisfy the match expression, we have to add null -> {} after processing just one variant, which is annoying boilerplate code to add.

Instead, we could write this in a shorter way using if let. The following code behaves the same as the match above:

let configMax: Int? = Some(3)

if let Some(max) = configMax {
    println!("The maximum is configured to be \(max)")
}

The syntax if let takes a pattern and an expression separated by an equal sign. It works the same way as a match, where the expression is given to the match and the pattern is its first arm. In this case, the pattern is Some(max), and the max binds to the value inside the Some. We can then use max in the body of the if let block in the same way we used max in the corresponding match arm. The code in the if let block isn't run if the value doesn't match the pattern.

Using if let means less typing, less indentation, and less boilerplate code. However, you lose the exhaustive checking that match enforces. Choosing between match and if let depends on what you're doing in your particular situation and whether gaining conciseness is an appropriate trade-off for losing exhaustive checking.

In other words, you can think of if let as syntax sugar for a match that runs code when the value matches one pattern and then ignores all other values.
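The Rust construct looks the same apart from Option. This sketch wraps it in a hypothetical helper function returning Option<String>, so the desugaring behavior is checkable:

```rust
// Hypothetical helper: returns the message the `if let` would print, if any
fn config_message(config_max: Option<i64>) -> Option<String> {
    if let Some(max) = config_max {
        Some(format!("The maximum is configured to be {max}"))
    } else {
        None // the "all other values" arm of the equivalent match
    }
}

fn main() {
    if let Some(msg) = config_message(Some(3)) {
        println!("{msg}");
    }
}
```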

Auto-Unwrapping for Nullable Types

Oxide provides a powerful feature for working with nullable types: automatic unwrapping in if let expressions. When the right-hand side of an if let is a nullable type (T?), you don't need to explicitly write Some(...) in your pattern. Oxide will automatically unwrap the value for you.

Consider this code:

let maybeUser: User? = findUser(id)

// Traditional way with explicit Some
if let Some(user) = maybeUser {
    println!("Hello, \(user.name)")
}

// Oxide auto-unwrap - simpler and more readable
if let user = maybeUser {
    println!("Hello, \(user.name)")
}

Both forms are equivalent and compile to the same code. The auto-unwrap syntax (if let user = maybeUser) is more concise and reads naturally: "if there's a user, use it."

This is particularly useful when working with function return values:

fn findUserById(id: Int): User? {
    // ... lookup logic
    null
}

fn processUser(id: Int) {
    if let user = findUserById(id) {
        println!("Found user: \(user.name)")
        sendWelcomeEmail(user)
    }
}

The auto-unwrap also works with chained method calls:

if let email = user.profile?.email {
    sendNotification(email)
}

Using else with if let

We can include an else with an if let. The block of code that goes with the else is the same as the block of code that would go with the null case in the match expression:

let coin = Coin.Quarter(UsState.Alaska)

if let Coin.Quarter(state) = coin {
    println!("State quarter from \(state:?)")
} else {
    println!("Not a quarter")
}

This is equivalent to:

match coin {
    Coin.Quarter(state) -> println!("State quarter from \(state:?)"),
    _ -> println!("Not a quarter"),
}

Combining if let with else if

You can chain if let expressions with else if and else if let:

fn describeValue(value: Int?) {
    if let n = value {
        if n > 100 {
            println!("Large number: \(n)")
        } else if n > 0 {
            println!("Positive number: \(n)")
        } else if n < 0 {
            println!("Negative number: \(n)")
        } else {
            println!("Zero")
        }
    } else {
        println!("No value")
    }
}

Or matching against multiple nullable types:

fn processCoordinates(x: Int?, y: Int?) {
    if let xVal = x {
        if let yVal = y {
            println!("Point: (\(xVal), \(yVal))")
        } else {
            println!("Only X coordinate: \(xVal)")
        }
    } else if let yVal = y {
        println!("Only Y coordinate: \(yVal)")
    } else {
        println!("No coordinates")
    }
}

if let with Conditions

You can combine if let with additional conditions using &&:

let user: User? = findUser(id)

if let user = user && user.isActive {
    greet(user)
}

This is equivalent to:

if let Some(user) = user {
    if user.isActive {
        greet(user)
    }
}

The combined syntax is more concise and clearly expresses the intent: "if we have a user AND they are active, greet them."

while let

Similar to if let, Oxide provides while let for looping as long as a pattern continues to match. This is particularly useful when working with iterators or any sequence that returns nullable values.

var stack: Vec<Int> = vec![1, 2, 3]

while let Some(top) = stack.pop() {
    println!("Popped: \(top)")
}

This code pops values from the stack and prints them until the stack is empty. The pop method returns Int?, returning Some(value) when there's a value and null when the stack is empty.

With auto-unwrap syntax:

var stack: Vec<Int> = vec![1, 2, 3]

while let top = stack.pop() {
    println!("Popped: \(top)")
}

Practical Examples

Working with Configuration

struct Config {
    databaseUrl: String?,
    maxConnections: Int?,
    timeout: Int?,
}

fn loadConfig(): Config {
    Config {
        databaseUrl: Some("postgres://localhost/db".toString()),
        maxConnections: Some(10),
        timeout: null,
    }
}

fn initializeDatabase() {
    let config = loadConfig()

    if let url = config.databaseUrl {
        println!("Connecting to: \(url)")

        let connections = config.maxConnections ?? 5
        println!("Max connections: \(connections)")

        if let timeout = config.timeout {
            println!("Timeout: \(timeout)s")
        } else {
            println!("No timeout configured, using default")
        }
    } else {
        println!("No database URL configured!")
    }
}

Displaying Search Results
struct SearchResult {
    title: String,
    url: String,
    snippet: String?,
}

fn search(query: &str): SearchResult? {
    // ... search logic
    Some(SearchResult {
        title: "Example".toString(),
        url: "https://example.com".toString(),
        snippet: Some("An example result".toString()),
    })
}

fn displaySearchResult(query: &str) {
    if let result = search(query) {
        println!("Found: \(result.title)")
        println!("URL: \(result.url)")

        if let snippet = result.snippet {
            println!("Snippet: \(snippet)")
        }
    } else {
        println!("No results found for '\(query)'")
    }
}

Iterating with while let

struct Node {
    value: Int,
    next: Box<Node>?,
}

fn sumLinkedList(head: Box<Node>?): Int {
    var sum = 0
    var current = head

    while let node = current {
        sum += node.value
        current = node.next.clone()
    }

    sum
}

Handling User Input

fn readValidNumber(): Int? {
    // Simulating user input
    Some(42)
}

fn processInput() {
    while let number = readValidNumber() {
        if number == 0 {
            println!("Exiting...")
            break
        }
        println!("Processing: \(number)")
    }
}

When to Use if let vs match

Use if let when:

  • You only care about one specific pattern
  • You want concise code for simple cases
  • The "else" case is trivial or can be ignored

Use match when:

  • You need to handle multiple patterns explicitly
  • You want the compiler to ensure you've handled all cases
  • The logic for different patterns is complex

// Good use of if let - only care about Some case
if let user = findUser(id) {
    greet(user)
}

// Good use of match - need to handle all cases explicitly
match command {
    Command.Start -> startServer(),
    Command.Stop -> stopServer(),
    Command.Restart -> {
        stopServer()
        startServer()
    },
    Command.Status -> printStatus(),
}

The if let and while let constructs provide a more ergonomic way to work with nullable types and pattern matching when you don't need the full power of match. Combined with Oxide's auto-unwrap feature for nullable types, they make working with optional values concise and readable.

Packages, Crates, and Modules

As your programs grow larger, you'll find it essential to organize your code into logical units. Oxide (like Rust) provides a powerful module system that lets you:

  • Break your code into reusable pieces
  • Keep related functionality together
  • Control which parts of your code are public and which are private
  • Avoid naming conflicts

In this chapter, we'll explore:

  • Packages and crates: The containers for your code
  • Modules: How to organize code within crates
  • import statements: How to bring names into scope
  • Visibility: Controlling what parts of your API are public

These features form the foundation of larger Oxide projects and libraries. Whether you're organizing a single crate or distributing code across multiple crates, mastering modules will make your code more maintainable and your intentions clearer.

Overview

Before diving into syntax, let's clarify the key concepts:

  • Package: A Cargo feature that contains one or more crates
  • Crate: A tree of modules that produces a library or executable
  • Module: A way to organize code into logical hierarchies with control over privacy
  • Path: A way to name an item in the module tree (e.g., restaurant.food.appetizers.Appetizer)

We'll start with packages and crates, then move to modules and visibility, and finally explore how to construct paths to access items.

Let's begin!

Packages and Crates

Packages and crates are closely related concepts, but they serve different purposes. Understanding the distinction is crucial for organizing larger Oxide projects.

What is a Crate?

A crate is the smallest amount of code that the Oxide compiler considers at a time. When you run oxc (the Oxide compiler), it treats the input as a single crate. Similarly, Cargo treats your project as a crate.

A crate can be in one of two forms:

  • Binary crate: A standalone executable program
  • Library crate: A collection of code meant to be used by other programs

Each crate has a root module that defines the structure of the entire crate:

  • For binary crates: The root module is typically src/main.ox
  • For library crates: The root module is typically src/lib.ox

When you compile a crate, the compiler starts at the root module and looks for code that needs to be compiled, including any code referenced through module declarations or imports.
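
As an illustration, a minimal binary crate root might look like this (a sketch; the file location follows the convention above):

// src/main.ox - the crate root of a binary crate
fn main() {
    println!("Hello from the crate root")
}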

What is a Package?

A package is one or more crates that work together. It contains a Cargo.toml file that describes how to build those crates.

A package can contain:

  • At most one library crate: If present, it's named after the package
  • Any number of binary crates: These are placed in src/bin/

Package Structure

Here's a typical package structure:

my_oxide_project/
├── Cargo.toml
└── src/
    ├── main.ox       (binary crate root)
    ├── lib.ox        (library crate root)
    └── bin/
        ├── tool1.ox  (another binary crate)
        └── tool2.ox  (another binary crate)

The Cargo.toml file at the package root is the manifest that describes the entire package.
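
A minimal manifest might look like this (the field values are placeholders):

[package]
name = "my_oxide_project"
version = "0.1.0"

[dependencies]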

Creating a Package

When you create a new package with Cargo, it automatically sets up the structure:

$ cargo new my_oxide_project
     Created binary (application) package

This creates:

my_oxide_project/
├── Cargo.toml
└── src/
    └── main.ox

The package name in Cargo.toml defaults to the directory name. This package contains only a binary crate (because of main.ox).

To create a package with a library crate, use the --lib flag:

$ cargo new --lib my_oxide_lib
     Created library package

This creates:

my_oxide_lib/
├── Cargo.toml
└── src/
    └── lib.ox

Binary and Library Crates in One Package

You can have both binary and library crates in a single package. For example:

$ cargo new my_oxide_project
$ cargo new --lib my_oxide_project  # Error: the directory already exists, so instead:

Simply add a lib.ox file alongside your main.ox:

my_oxide_project/
├── Cargo.toml
└── src/
│   ├── lib.ox        (library crate)
│   └── main.ox       (binary crate)

Now you have both. The binary can import and use code from the library crate because they're in the same package.

Multiple Binary Crates

If you need multiple binary crates beyond the default main.ox, place them in src/bin/:

my_oxide_project/
├── Cargo.toml
└── src/
    ├── lib.ox
    ├── main.ox       (default binary)
    └── bin/
        ├── tool1.ox
        └── tool2.ox

Build a specific binary:

$ cargo build --bin tool1

Run a specific binary:

$ cargo run --bin tool2

Library Crates vs. Binary Crates

Library Crates

Use a library crate when you're building code meant to be used by other programs:

  • Contains reusable functionality
  • Has a lib.ox root
  • Publishes code with public visibility for others to use
  • Cannot be run directly with cargo run

Example: A math library that others can import and use in their projects.
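
For instance, a tiny math library crate might expose a single public function (a sketch in Oxide syntax):

// src/lib.ox - a small reusable library crate
public fn add(a: Int, b: Int): Int {
    a + b
}

Other packages could then depend on this crate and call add through an import.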

Binary Crates

Use a binary crate for executable programs:

  • Has a main.ox root with a main() function
  • Typically uses code from library crates
  • Can be run with cargo run
  • Can be installed with cargo install

Example: A command-line tool that users can run.

Separating Concerns: bin and lib

A common pattern is to have both:

  • lib.ox: Contains the core logic and reusable functionality
  • main.ox: Contains the command-line interface or user-facing code

This separation makes testing easier and allows the library to be reused by other programs.

For example, a search tool might have:

// lib.ox - Core search logic (reusable)
public fn searchFiles(directory: &str, pattern: &str): Vec<String> {
    // Implementation
    []
}

// main.ox - CLI interface
import lib

fn main() {
    let args = getCommandLineArgs()
    let results = lib.searchFiles(args[0], args[1])
    for result in results {
        println!("\(result)")
    }
}

Other programs can import and use searchFiles from the library, while the binary provides a command-line interface.

Rust Comparison

In Rust:

  • The default root module for a binary is src/main.rs
  • The default root module for a library is src/lib.rs
  • Binary files in src/bin/ follow the same pattern
  • Package structure is identical to Oxide

The main difference is file extensions (.rs vs .ox) and Oxide's syntax for imports and visibility.

Summary

  • Crates are the unit of compilation, either binary or library
  • Packages contain crates and are managed with Cargo.toml
  • Binary crates have main.ox and can be executed
  • Library crates have lib.ox and provide reusable code
  • Multiple binaries go in src/bin/
  • Separating core logic in lib.ox and interface in main.ox is a best practice

Now that you understand packages and crates, let's explore how to organize code within a crate using modules.

Defining and Organizing Modules

Modules allow you to organize code within a crate into logical, hierarchical groups. They provide a namespace for your code and allow you to control what's public and what's private.

Module Basics

A module is declared with the module keyword (not mod as in Rust). It creates a namespace that can contain functions, structs, traits, and other items:

module restaurant {
    fn prepareFood() {
        println!("Preparing food")
    }
}

fn main() {
    restaurant.prepareFood()
}

In this example, prepareFood is defined inside the restaurant module and accessed using dot notation: restaurant.prepareFood().

Nested Modules

Modules can be nested inside other modules, creating a hierarchy:

module restaurant {
    module food {
        module appetizers {
            public fn bruschetta() {
                println!("Making bruschetta")
            }
        }

        module mains {
            public fn pasta() {
                println!("Making pasta")
            }
        }
    }

    module house {
        fn greet() {
            println!("Welcome!")
        }
    }
}

fn main() {
    restaurant.food.appetizers.bruschetta()
    restaurant.food.mains.pasta()
}

Paths use dot notation, just like accessing nested objects or properties in other languages.

Module Organization Conventions

While Oxide allows you to nest modules deeply, it's often clearer to organize them in files:

Single-File Organization

For small projects, keep everything in src/main.ox or src/lib.ox:

// src/main.ox
module restaurant {
    module food {
        public fn appetizer() { }
        public fn mainCourse() { }
    }

    module house {
        public fn greet() { }
    }
}

fn main() {
    restaurant.food.appetizer()
}

Multi-File Organization

For larger projects, split modules into separate files. The conventional approach is:

src/
├── lib.ox           (declares modules, defines some items)
├── restaurant.ox    (or restaurant/mod.ox)
└── restaurant/
    ├── food.ox
    ├── house.ox
    └── payment.ox

In src/lib.ox:

external module restaurant

The external module keyword tells Oxide that the module is defined in an external file, not inline.

In src/restaurant.ox (or src/restaurant/mod.ox):

public module food {
    public fn appetizer() {
        println!("Appetizer")
    }
}

public module house {
    public fn greet() {
        println!("Welcome")
    }
}

In src/restaurant/food.ox:

public fn appetizer() {
    println!("Appetizer served")
}

public fn dessert() {
    println!("Dessert served")
}

Then import and use:

import restaurant.food

fn main() {
    food.appetizer()
}

File Naming and Location

When you declare external module foo, Oxide looks for:

  1. A file named foo.ox in the same directory, or
  2. A directory named foo/ with a mod.ox file inside

For nested modules, you can either:

Option 1: Inline with dots

external module restaurant.food.appetizers

Option 2: Nested directories

src/
├── restaurant.ox
└── restaurant/
    ├── food.ox
    └── food/
        └── appetizers.ox

Both approaches work. Choose the one that feels most natural for your project structure.
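
With the nested-directory layout in Option 2, each level declares its children; a sketch:

// src/restaurant.ox
external module food

// src/restaurant/food.ox
external module appetizers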

Public and Private Items

By default, all items in a module are private:

module restaurant {
    fn secret() {
        println!("Secret recipe")
    }
}

fn main() {
    restaurant.secret()  // Error: secret is private
}

Use the public keyword to make items available outside the module:

public module restaurant {
    public fn greeting() {
        println!("Welcome!")
    }

    fn secret() {
        println!("Secret recipe")  // Private, not accessible from outside
    }
}

fn main() {
    restaurant.greeting()  // OK
    restaurant.secret()    // Error: private
}

Note: A module can be public at the declaration, but items inside it are still private by default:

public module restaurant {
    // This module itself is public
    fn secret() { }        // But this item is private
    public fn welcome() { } // This item is public
}

Re-exporting with public import

Sometimes you want to reorganize your internal module structure without breaking the public API. Use public import to re-export items:

// src/lib.ox
external module internals

public import internals.helpers.createMessage

// Now users can do:
// import myLib.createMessage
// Instead of:
// import myLib.internals.helpers.createMessage

This is useful for:

  • Simplifying the public API
  • Reorganizing internal code without breaking user code
  • Grouping related functionality under a simple name

Privacy Rules

Oxide's privacy model is hierarchical:

  1. Private by default: Items are private unless marked public
  2. Public items only at boundaries: You can only make items public that are directly in a public module
  3. Public all the way down: To access a deeply nested public item, all parent modules must also be public

Example:

module restaurant {                    // Private module (default)
    public module food {               // Public submodule
        public fn appetizer() { }      // Public function
    }
}

// In another file:
import restaurant.food.appetizer      // Error! restaurant is private
// The rule: parent must be public too

To fix:

public module restaurant {             // Make parent public
    public module food {
        public fn appetizer() { }
    }
}

// Now it works!
import restaurant.food.appetizer

Best Practices

1. Group Related Functionality

public module httpServer {
    public module handlers {
        public fn handleRequest() { }
        public fn handleError() { }
    }

    public module middleware {
        public fn logRequest() { }
        public fn validateAuth() { }
    }
}

2. Keep Public APIs Simple

Use public import to flatten your public interface:

// Organize internally
external module internals.utils
external module internals.validators

// Expose cleanly
public import internals.utils.createConfig
public import internals.validators.validateInput

3. Use Consistent Naming

Module names should be descriptive but concise:

// Good
public module user_management { }
public module payment { }
public module notifications { }

// Avoid overly nested or redundant names
public module users.user_management.user_utils { }

Comparison with Rust

In Rust:

  • Modules are declared with mod, not module
  • External modules are declared with mod foo; (with semicolon)
  • File organization is similar but uses .rs extension
  • snake_case is used for module names (Oxide follows the same convention)

Oxide syntax:

  • module foo { } for inline modules
  • external module foo for file-based modules
  • Dot notation for paths instead of ::
  • snake_case for module names by convention

Summary

  • Modules organize code into hierarchical namespaces
  • Inline modules are defined with module keyword
  • External modules are file-based and declared with external module
  • Public modules and items use the public keyword
  • Privacy is hierarchical: parent modules must be public to access nested public items
  • Dot notation accesses paths: restaurant.food.appetizers
  • public import re-exports items to simplify the public API

Now that you understand how to organize code with modules, let's explore how to refer to those items using paths.

Paths for Referring to Items in the Module Tree

A path is a way to refer to an item in the module tree. Paths can take two forms: absolute and relative.

Absolute Paths

An absolute path starts from the crate root and uses the full path to an item:

public module restaurant {
    public module food {
        public module appetizers {
            public fn bruschetta() {
                println!("Making bruschetta")
            }
        }
    }
}

fn main() {
    // Absolute path
    restaurant.food.appetizers.bruschetta()
}

This is the most explicit and clear form. It tells readers exactly where the item comes from.

Relative Paths

A relative path starts from the current module and builds from there. You can reference items without writing the full path:

module restaurant {
    module food {
        public fn appetizer() {
            println!("Appetizer")
        }

        public fn describe() {
            // Relative path - within the same module
            appetizer()
        }
    }

    module house {
        fn greet() {
            // This won't work - food is a sibling, not parent
            // food.appetizer()  // Error!
        }
    }
}

Within a module, you can call other items in the same module directly. However, sibling modules require you to use their full name relative to a common parent.

Understanding the Module Hierarchy

Think of the module tree like a filesystem:

crate_root
├── restaurant          (module)
│   ├── food           (module)
│   │   ├── appetizers (module)
│   │   │   └── bruschetta (function)
│   │   └── mains      (module)
│   │       └── pasta  (function)
│   └── house          (module)
│       └── greet      (function)
└── main               (function)

To access an item, you navigate this tree. For example:

  • restaurant.food.appetizers.bruschetta - Start at root, go to restaurant, then food, then appetizers, then call bruschetta
  • restaurant.house.greet - Start at root, go to restaurant, then house, then call greet

Public vs. Private in Paths

Paths work only for public items. If an item or any parent module is private, you can't access it from outside:

module restaurant {                  // Private (not marked public)
    public fn greeting() { }
}

fn main() {
    restaurant.greeting()  // Error! restaurant is private
}

To fix:

public module restaurant {           // Make parent public
    public fn greeting() { }
}

fn main() {
    restaurant.greeting()  // OK
}

This is the public visibility rule: all ancestors in the path must be public for you to access an item.

Paths in Imports

When you import, you're creating a new name binding using a path:

import restaurant.food.appetizers as starters

fn main() {
    starters.bruschetta()  // starters is the imported name
}

The import statement says: "Follow the path restaurant.food.appetizers and bind the result to the name starters."

Paths with Generics and Complex Types

When dealing with generic types or trait objects, paths can include type information:

import std.collections.HashMap

fn main() {
    // HashMap is a path referring to a generic type
    let map: HashMap<String, Int> = HashMap()
}

The path std.collections.HashMap refers to the type itself, which can be instantiated with type arguments.

Documenting Paths

When you document your code, include paths in comments and doc comments:

/// Represents a restaurant's menu.
///
/// The `Restaurant` struct is found at `restaurant.Restaurant`.
/// To add items, use the `addItem` method from `restaurant.menu.Menu`.
public struct Restaurant {
    name: String,
}

This helps users understand how to navigate your module structure.

Common Path Patterns

Pattern 1: Accessing Sibling Modules

module server {
    module http {
        fn handleRequest() { }
    }

    module websocket {
        fn handleConnection() {
            // To call a sibling module, you need a path from a common parent
            // http.handleRequest()  // Error: http is not in scope here
        }
    }
}

From within websocket, you need to reference http as a sibling. The safest approach is to use the full path:

module server {
    module http {
        public fn handleRequest() { }
    }

    module websocket {
        fn handleConnection() {
            // Use the full path (but this requires http to be public)
            server.http.handleRequest()
        }
    }
}

Or more simply, within the same crate, you can use the parent module:

public module server {
    public module http {
        public fn handleRequest() { }
    }

    public module websocket {
        fn handleConnection() {
            // Within the same public parent, access via dot notation
            http.handleRequest()  // This works within the server module
        }
    }
}

Pattern 2: Accessing Parent Items

public module restaurant {
    public module kitchen {
        fn cook() {
            // You can't easily reference parent items
            // Instead, structure code to avoid this need
        }
    }

    public fn announceReady() {
        println!("Food is ready!")
    }
}

fn main() {
    // You access via the full path
    restaurant.announceReady()
}

If you need parent functionality in a child module, consider:

  1. Passing it as a parameter
  2. Creating a shared utility module
  3. Restructuring to avoid the parent dependency
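
For example, option 1 might look like the following sketch, which assumes Oxide supports passing functions as values (as Rust does with fn pointers):

public module restaurant {
    public module kitchen {
        // Accept the announcement behavior as a parameter
        // instead of reaching up to the parent module
        public fn cook(announce: fn()) {
            println!("Cooking...")
            announce()
        }
    }

    public fn announceReady() {
        println!("Food is ready!")
    }
}

fn main() {
    restaurant.kitchen.cook(restaurant.announceReady)
}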

Pattern 3: Items in the Same Module

public module restaurant {
    public fn appetizer() {
        println!("Appetizer")
    }

    public fn mainCourse() {
        // Within the same module, call directly
        appetizer()  // This works
    }
}

Re-exports and Path Visibility

When you re-export an item, you create an alternative path to it:

// src/lib.ox
external module internals

public import internals.helpers.createMessage

// Now there are two paths to the same item:
// 1. internals.helpers.createMessage (private, internal only)
// 2. createMessage (public, preferred)

Users see the shorter path, while your internal structure remains hidden.

Full Paths in Error Messages

When the compiler reports an error, it uses paths to tell you about items:

error: cannot find function `bruschetta` in module `restaurant.food.appetizers`
 --> main.ox:3:5
  |
3 |     bruschetta()
  |     ^^^^^^^^^^ not found in this scope
  |
help: consider using the full path with the item's type
  |
3 |     restaurant.food.appetizers.bruschetta()
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The compiler shows the full path and suggests how to fix it.

Comparison with Rust

In Rust:

  • Absolute paths start with crate::
  • Relative paths use self:: or super::
  • Paths use :: not .
  • You can use super to reference parent modules

Rust example:

#![allow(unused)]
fn main() {
// Absolute path
crate::restaurant::food::appetizers::bruschetta();

// Relative path with super
super::other_module::do_something();

use crate::restaurant::food::appetizers as starters;
}

Oxide equivalent:

// Absolute path
restaurant.food.appetizers.bruschetta()

// Relative path (within same module)
otherModule.doSomething()

import restaurant.food.appetizers as starters

Oxide's simpler path syntax (no ::, crate::, or super::) makes navigation more intuitive, especially for those familiar with object-oriented languages.

Summary

  • Paths refer to items in the module tree using dot notation
  • Absolute paths start from the crate root
  • Relative paths navigate from the current module
  • Privacy rules: all ancestors in a path must be public
  • Paths are clear: they explicitly show where items come from
  • Importing creates shorter names for paths
  • Re-exports create alternative paths to the same items
  • Oxide uses simple dot notation, unlike Rust's :: syntax

Now that you understand paths, let's look at how to bring items into scope with import statements.

Import Statements and Bringing Names Into Scope

Once you've organized your code into modules, you need a way to bring those items into scope so you can use them without always writing the full path. This is where import statements come in.

Basic Import

The simplest form of an import brings an item directly into scope:

import restaurant.food.appetizers

fn main() {
    appetizers.bruschetta()  // Can use appetizers without the full path
}

Without the import, you'd need to write:

fn main() {
    restaurant.food.appetizers.bruschetta()  // Full path
}

Importing Specific Items

Import a single function or struct directly:

import restaurant.food.appetizers.bruschetta

fn main() {
    bruschetta()  // No prefix needed
}

This brings just bruschetta into scope.

Importing Multiple Items

Import multiple items from the same module using braces:

import restaurant.food.appetizers.{bruschetta, calamari, soup}

fn main() {
    bruschetta()
    calamari()
    soup()
}

Importing an Entire Module

Import a module and access its contents with dot notation:

import restaurant.food

fn main() {
    food.appetizers.bruschetta()
    food.mains.pasta()
    food.desserts.tiramisu()
}

Nested Imports

You can nest imports to bring multiple modules from a parent:

import restaurant.{food, payment, house}

fn main() {
    food.appetizer()      // Via imported module
    payment.process()     // Via imported module
    house.greet()         // Via imported module
}

This is equivalent to:

import restaurant.food
import restaurant.payment
import restaurant.house

Wildcard Imports

Import everything from a module using *:

import restaurant.food.*

fn main() {
    appetizers.bruschetta()
    mains.pasta()
}

Note: While wildcard imports are convenient, they can make code harder to read because it's not clear where appetizers comes from. Use them sparingly and mainly in tests or small scopes.

Renaming Imports

If two modules export items with the same name, or if you just prefer a different name, use as to rename:

import restaurant.food.appetizers as starters

fn main() {
    starters.bruschetta()
}

Multiple renames in one import:

import restaurant.{
    food.appetizers as starters,
    payment.creditCard as cardPayment,
}

fn main() {
    starters.bruschetta()
    cardPayment.process()
}

Relative Paths

Within the same file or module, you can use relative paths:

module restaurant {
    module food {
        public fn appetizer() { }
    }

    module house {
        fn greet() {
            // Access sibling module using dot notation
            let name = "appetizer"
            // Can access food.appetizer() here
            food.appetizer()
        }
    }
}

Importing from External Crates

If your project depends on other crates, import from them using the crate name:

# In Cargo.toml
[dependencies]
serde = "1.0"

// In your .ox source file
import serde.json  // Import from the serde crate

fn main() {
    let data = json.parse("{\"key\": \"value\"}")
}

Import Styles

There are several valid import styles. Choose the one that makes sense for your code:

Style 1: Import Functions Directly

Use when you call the function many times:

import myLib.math.fibonacci

fn main() {
    println!("\(fibonacci(10))")
    println!("\(fibonacci(20))")
}

Style 2: Import the Module

Use when you need multiple items from the same module:

import myLib.math

fn main() {
    println!("\(math.fibonacci(10))")
    println!("\(math.add(5, 3))")
}

Style 3: Import with Alias

Use when dealing with naming conflicts:

import myLib.v1.process as processV1
import myLib.v2.process as processV2

fn main() {
    processV1.handle()
    processV2.handle()
}

Style 4: Full Paths

Use for items you reference rarely:

fn main() {
    let result = myLib.math.fibonacci(10)
}

Scoped Imports

Imports can be declared at any scope level, not just the top of a file:

fn processData() {
    import utils.math
    let result = math.fibonacci(10)
}

fn formatOutput() {
    import utils.formatting
    let text = formatting.indent("Text")
}

This can help keep imports close to where they're used, though top-level imports are more common.

Circular Imports

Oxide, like Rust, prevents circular dependencies at the crate level. However, within a crate, you can have modules that reference each other:

// src/lib.ox
module a {
    public fn callB() {
        b.doSomething()
    }
}

module b {
    public fn doSomething() {
        println!("B does something")
    }
}

This works because both modules are in the same compilation unit. The key is that you can't have circular dependencies between crates.
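
Sibling modules can even call each other; here is a sketch of mutual reference within one crate:

// src/lib.ox - sibling modules referencing each other
module a {
    public fn ping(n: Int) {
        if n > 0 {
            b.pong(n - 1)
        }
    }
}

module b {
    public fn pong(n: Int) {
        if n > 0 {
            a.ping(n - 1)
        }
    }
}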

Re-exporting

When you import something in a public scope, it becomes available to your module's users:

// internal/utils.ox
public fn createConfig() { }

// src/lib.ox
import internal.utils.createConfig

// Users of this library can now do:
// import myLib.createConfig

To make this explicit, use public import:

public import internal.utils.createConfig

This makes it clear that createConfig is part of your public API.

Organizing Imports

A common convention is to organize imports into groups:

// Standard library imports (if any)
import io
import collections

// Internal imports
import utils.math
import utils.formatting

// Re-exports
public import math.fibonacci
public import formatting.indent

fn main() {
    let fib = fibonacci(10)
}

Comparison with Rust

In Rust:

  • Modules are brought into scope with use, not import
  • Paths use :: not .
  • Wildcard imports use use module::*;
  • pub use re-exports items (equivalent to Oxide's public import)

Rust example:

#![allow(unused)]
fn main() {
use restaurant::food::appetizers::bruschetta;
use restaurant::food::*;
use restaurant::food::appetizers as starters;

pub use internal::utils::createMessage;
}

Oxide equivalent:

import restaurant.food.appetizers.bruschetta
import restaurant.food.*
import restaurant.food.appetizers as starters

public import internal.utils.createMessage

Summary

  • Basic imports bring items into scope to avoid writing full paths
  • Selective imports bring only what you need using braces
  • Module imports let you access items with dot notation
  • Wildcard imports bring everything into scope (use sparingly)
  • Renaming with as helps avoid conflicts and improve clarity
  • Nested imports import multiple items from the same parent
  • Relative paths work within modules without explicit imports
  • Scoped imports can be declared at any level
  • Re-exports with public import make items part of your public API

Now that you understand imports, let's explore practical file organization strategies.

File Organization and Directory Hierarchy

As your project grows, you'll want to split your code into multiple files organized in a directory structure. Oxide provides flexible conventions for organizing files and modules.

Converting Inline Modules to Files

When an inline module becomes large, you can move it to a separate file.

Starting with Inline Modules

// src/lib.ox
module restaurant {
    public fn greet() {
        println!("Welcome!")
    }

    public fn serve() {
        println!("Your food is ready")
    }
}

Extracting to a File

Create a new file src/restaurant.ox:

// src/restaurant.ox
public fn greet() {
    println!("Welcome!")
}

public fn serve() {
    println!("Your food is ready")
}

Then update src/lib.ox to declare the external module:

// src/lib.ox
external module restaurant

Now your module is in a separate file, but the module structure is identical.

Module Organization Patterns

Pattern 1: Flat Structure (For Small Projects)

Use one file for all modules:

src/
└── lib.ox  (contains all modules inline)

When to use: Projects with < 500 lines of code in a single logical unit.

Pattern 2: One Module Per File

Each module gets its own file:

src/
├── lib.ox          (declares modules)
├── restaurant.ox   (restaurant module)
├── menu.ox         (menu module)
└── payment.ox      (payment module)

src/lib.ox:

external module restaurant
external module menu
external module payment

When to use: Most projects. Clear one-to-one mapping between modules and files.

Pattern 3: Nested Modules in Directories

Create a directory for complex modules:

src/
├── lib.ox
├── restaurant.ox
└── restaurant/
    ├── food.ox
    ├── payment.ox
    └── house.ox

src/lib.ox:

external module restaurant

src/restaurant.ox:

external module food
external module payment
external module house

public fn welcome() {
    println!("Welcome to our restaurant!")
}

Or alternatively, use a mod.ox file:

src/
├── lib.ox
└── restaurant/
    ├── mod.ox       (equivalent to restaurant.ox)
    ├── food.ox
    ├── payment.ox
    └── house.ox

src/restaurant/mod.ox:

external module food
external module payment
external module house

public fn welcome() {
    println!("Welcome!")
}

When to use: When a module has several sub-modules and related code.

Pattern 4: Deeply Nested Structure

For complex projects with many sub-modules:

src/
├── lib.ox
└── server/
    ├── mod.ox
    ├── http/
    │   ├── mod.ox
    │   ├── handlers.ox
    │   ├── middleware.ox
    │   └── routing.ox
    ├── websocket/
    │   ├── mod.ox
    │   ├── connection.ox
    │   └── protocol.ox
    └── common.ox

src/lib.ox:

external module server

src/server/mod.ox:

external module http
external module websocket
external module common

src/server/http/mod.ox:

external module handlers
external module middleware
external module routing

public fn startServer() {
    // Implementation
}

When to use: Large projects with multiple subsystems. Keep nesting to 2-3 levels max for clarity.

File Naming Conventions

Naming Rules

  • File names use snake_case to match module names
  • Directory names match module names
  • For mod.ox files, the directory name is the module name

Examples

Module Name                   File Location
restaurant                    src/restaurant.ox
restaurant.food               src/restaurant/food.ox
restaurant.food.appetizers    src/restaurant/food/appetizers.ox
my_api.users                  src/my_api/users.ox

Absolute vs. Relative File Declarations

Absolute Declaration

Declare the full path from the crate root:

// src/lib.ox
external module restaurant.food.appetizers

Oxide will look for: src/restaurant/food/appetizers.ox

Hierarchical Declaration

Declare modules at each level:

// src/lib.ox
external module restaurant

// src/restaurant.ox (or src/restaurant/mod.ox)
external module food

// src/restaurant/food.ox
external module appetizers

// src/restaurant/food/appetizers.ox
// Contains the actual items
public fn bruschetta() { }

Both approaches are valid. The hierarchical approach is more common because it keeps each file focused.
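Because the semantics match Rust's module system, the same hierarchy can be sketched in one Rust file with inline modules; a file-based layout resolves paths identically (the names below are illustrative):

```rust
// Inline sketch of the restaurant -> food -> appetizers hierarchy.
mod restaurant {
    pub mod food {
        pub mod appetizers {
            // Corresponds to src/restaurant/food/appetizers.ox
            pub fn bruschetta() -> &'static str {
                "bruschetta"
            }
        }
    }
}
```

Whether the modules are inline or split across files, the path restaurant::food::appetizers::bruschetta stays the same.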

Practical Example: A Complete Project Structure

Let's build a restaurant management library:

Directory Structure

my_restaurant/
├── Cargo.toml
└── src/
    ├── lib.ox
    ├── restaurant.ox
    └── restaurant/
        ├── menu.ox
        ├── kitchen.ox
        ├── payment.ox
        └── users.ox

src/lib.ox

/// Restaurant management library
external module restaurant

// Re-export public API
public import restaurant.menu.Menu
public import restaurant.menu.MenuItem
public import restaurant.kitchen.startCooking
public import restaurant.payment.processPayment

src/restaurant.ox

external module menu
external module kitchen
external module payment
external module users

public fn createRestaurant(name: &str): Restaurant {
    Restaurant {
        name: String.from(name),
        menu: menu.createMenu(),
        users: [],
    }
}

public struct Restaurant {
    public name: String,
    public menu: menu.Menu,
    public users: Vec<users.Employee>,
}

src/restaurant/menu.ox

public struct Menu {
    public items: Vec<MenuItem>,
}

public struct MenuItem {
    public name: String,
    public price: Float,
    public description: String,
}

public fn createMenu(): Menu {
    Menu {
        items: [],
    }
}

public fn addItem(menu: &mut Menu, item: MenuItem) {
    menu.items.push(item)
}

src/restaurant/kitchen.ox

public fn startCooking(order: Order): Dish {
    println!("Chef is cooking: \(order.item.name)")
    Dish {
        name: order.item.name,
        preparedAt: getCurrentTime(),
    }
}

public struct Dish {
    public name: String,
    public preparedAt: UInt64,
}

src/restaurant/payment.ox

public fn processPayment(amount: Float, method: PaymentMethod): Result<String> {
    match method {
        PaymentMethod.Credit -> {
            println!("Processing credit card payment: $\(amount)")
            Ok("Payment successful".toString())
        },
        PaymentMethod.Cash -> {
            println!("Received cash payment: $\(amount)")
            Ok("Payment successful".toString())
        },
        PaymentMethod.Check -> {
            println!("Depositing check: $\(amount)")
            Ok("Payment successful".toString())
        },
    }
}

public enum PaymentMethod {
    Credit,
    Cash,
    Check,
}

src/restaurant/users.ox

public struct Employee {
    public id: String,
    public name: String,
    public role: Role,
}

public enum Role {
    Waiter,
    Chef,
    Manager,
}

public fn hireEmployee(id: String, name: String, role: Role): Employee {
    Employee { id, name, role }
}

Directory Conventions

Key Principles

  1. One module per file (usually): Makes it easy to find code
  2. Match directory structure to module structure: module a.b.c lives in a/b/c.ox
  3. Use mod.ox for module hubs: If a directory contains sub-modules
  4. Keep nesting shallow: No more than 3-4 levels deep for readability
  5. Group related functionality: Put related modules in the same directory

When NOT to Create Separate Files

  • For very small modules (< 50 lines)
  • For internal helper modules
  • In test modules

Circular Dependencies

Oxide prevents circular dependencies between modules:

module a imports module b
module b imports module a  // Error!

To fix circular dependencies:

  1. Extract shared code: Create a third module both depend on
// Before (circular)
module a { ... }  // imports b
module b { ... }  // imports a

// After (acyclic)
module shared { ... }
module a { ... }  // imports shared
module b { ... }  // imports shared
  2. Reorganize module hierarchy: Move items to appropriate levels
module parent {
    module child1 { ... }  // child1 can reference child2 via parent
    module child2 { ... }
}
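Rust comparison: the "extract shared code" fix translates directly. In this hypothetical sketch, a and b each needed the other's tax logic; moving it into shared breaks the cycle (module and function names are illustrative):

```rust
// The shared module both sides depend on.
mod shared {
    pub fn tax_percent() -> u32 {
        20
    }
}

// `a` depends only on `shared`, not on `b`.
mod a {
    use crate::shared;

    pub fn price_with_tax(net: u32) -> u32 {
        net + net * shared::tax_percent() / 100
    }
}

// `b` also depends only on `shared`.
mod b {
    use crate::shared;

    pub fn tax_amount(net: u32) -> u32 {
        net * shared::tax_percent() / 100
    }
}
```

The dependency graph is now acyclic: a -> shared and b -> shared, with no edge between a and b.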

Public Item Visibility in Files

When items are in separate files, visibility rules still apply:

// src/restaurant.ox
public fn publicFunction() { }  // Public
fn privateFunction() { }        // Private

Users can only access publicFunction. The file organization doesn't change privacy rules.

Comparison with Rust

In Rust:

  • Modules use mod keyword
  • File-based modules use mod name;
  • The conventional file for a mod server is server.rs or server/mod.rs
  • Paths use :: not .
  • snake_case is used for module and file names

Rust example:

#![allow(unused)]
fn main() {
// src/lib.rs
mod restaurant;

// src/restaurant.rs
pub fn greet() { }
}

Oxide equivalent:

// src/lib.ox
external module restaurant

// src/restaurant.ox
public fn greet() { }

Best Practices

For Small Projects (< 1000 lines)

src/
└── lib.ox  (inline modules)

For Medium Projects (1000-10000 lines)

src/
├── lib.ox
├── users.ox
├── products.ox
├── orders.ox
└── payment.ox

For Large Projects (> 10000 lines)

src/
├── lib.ox
├── auth/
│   ├── mod.ox
│   ├── login.ox
│   ├── permission.ox
│   └── encryption.ox
├── api/
│   ├── mod.ox
│   ├── handlers.ox
│   ├── middleware.ox
│   └── routing.ox
└── data/
    ├── mod.ox
    ├── models.ox
    ├── database.ox
    └── cache.ox

Summary

  • Inline modules are useful for small, related functionality
  • External modules in files scale better for larger projects
  • File names match module names in snake_case
  • Directories organize nested modules: module.submodule lives in module/submodule.ox
  • Keep nesting shallow: Maximum 3-4 levels for clarity
  • Use mod.ox as a hub for modules in a directory
  • Privacy rules apply regardless of file organization
  • Extract shared code to prevent circular dependencies
  • Cargo handles compilation: No need to manually include files

You now understand the full module system—from crates and packages, through module organization, imports, and file hierarchies. These tools will help you structure even the largest Oxide projects into clean, maintainable code!

Common Collections

Oxide's standard library includes a number of very useful data structures called collections. Most other data types represent one specific value, but collections can contain multiple values. Unlike the built-in array and tuple types, the data that these collections point to is stored on the heap, which means the amount of data does not need to be known at compile time and can grow or shrink as the program runs. Each kind of collection has different capabilities and costs, and choosing an appropriate one for your current situation is a skill you'll develop over time. In this chapter, we'll discuss three collections that are used very often in Oxide programs:

  • A vector allows you to store a variable number of values next to each other.
  • A string is a collection of characters. We've mentioned the String type previously, but in this chapter we'll talk about it in depth.
  • A hash map allows you to associate a value with a specific key. It's a particular implementation of the more general data structure called a map.

To learn about the other kinds of collections provided by the standard library, see the standard library documentation.

We'll discuss how to create and update vectors, strings, and hash maps, as well as what makes each special.

Collection Types in Oxide

Oxide uses Rust's standard collection types directly. There are no type aliases for collections in Oxide v1.0:

Type             Description
Vec<T>           A growable, heap-allocated array
String           A growable, UTF-8 encoded string
HashMap<K, V>    A hash map for key-value pairs

This design keeps things simple and ensures you learn Rust's actual collection types, which is essential for reading documentation and understanding how your code works under the hood.

Rust comparison: The collection types are identical to Rust. Oxide uses the same Vec<T>, String, and HashMap<K, V> types.

#![allow(unused)]
fn main() {
// Rust - identical syntax
use std::collections::HashMap;
let v: Vec<i32> = Vec::new();
let s: String = String::new();
let h: HashMap<String, i32> = HashMap::new();
}

Storing Lists of Values with Vectors

The first collection type we'll look at is Vec<T>, also known as a vector. Vectors allow you to store more than one value in a single data structure that puts all the values next to each other in memory. Vectors can only store values of the same type. They are useful when you have a list of items, such as the lines of text in a file or the prices of items in a shopping cart.

Creating a New Vector

To create a new, empty vector, we call the Vec.new function:

fn main() {
    let v: Vec<Int> = Vec.new()
}

Note that we added a type annotation here. Because we aren't inserting any values into this vector, Oxide doesn't know what kind of elements we intend to store. This is an important point. Vectors are implemented using generics; we'll cover how to use generics with your own types in a later chapter. For now, know that the Vec<T> type provided by the standard library can hold any type. When we create a vector to hold a specific type, we can specify the type within angle brackets. In the example above, we've told Oxide that the Vec<T> in v will hold elements of the Int type.

More often, you'll create a Vec<T> with initial values, and Oxide will infer the type of value you want to store, so you rarely need to do this type annotation. Oxide provides the vec! macro, which will create a new vector that holds the values you give it:

fn main() {
    let v = vec![1, 2, 3]
}

Because we've given initial Int values, Oxide can infer that the type of v is Vec<Int>, and the type annotation isn't necessary. Next, we'll look at how to modify a vector.

Rust comparison: The main syntax difference is using dot notation for the constructor: Vec.new() instead of Vec::new(). The vec! macro works identically.

#![allow(unused)]
fn main() {
// Rust
let v: Vec<i32> = Vec::new();
let v = vec![1, 2, 3];
}

Updating a Vector

To create a vector and then add elements to it, we can use the push method:

fn main() {
    var v: Vec<Int> = Vec.new()
    v.push(5)
    v.push(6)
    v.push(7)
    v.push(8)
}

As with any variable, if we want to be able to change its value, we need to make it mutable using the var keyword. The numbers we place inside are all of type Int, and Oxide infers this from the data, so we don't need the Vec<Int> annotation.

Rust comparison: Oxide uses var instead of let mut for mutable bindings.

#![allow(unused)]
fn main() {
// Rust
let mut v: Vec<i32> = Vec::new();
v.push(5);
v.push(6);
}

Reading Elements of Vectors

There are two ways to reference a value stored in a vector: via indexing or by using the get method. In the following examples, we've annotated the types of the values that are returned from these functions for extra clarity.

fn main() {
    let v = vec![1, 2, 3, 4, 5]

    let third: &Int = &v[2]
    println!("The third element is \(third)")

    let third: Int? = v.get(2).copied()
    match third {
        Some(value) -> println!("The third element is \(value)"),
        null -> println!("There is no third element."),
    }
}

Note a few details here. We use the index value of 2 to get the third element because vectors are indexed by number, starting at zero. Using & and [] gives us a reference to the element at the index value. When we use the get method with the index passed as an argument, we get an Option<&T> (which in Oxide we can think of as (&T)?) that we can use with match.

Oxide provides these two ways to reference an element so that you can choose how the program behaves when you try to use an index value outside the range of existing elements. As an example, let's see what happens when we have a vector of five elements and then we try to access an element at index 100 with each technique:

fn main() {
    let v = vec![1, 2, 3, 4, 5]

    let doesNotExist = &v[100]        // This will panic!
    let doesNotExist = v.get(100)     // This returns null
}

When we run this code, the first [] method will cause the program to panic because it references a nonexistent element. This method is best used when you want your program to crash if there's an attempt to access an element past the end of the vector.

When the get method is passed an index that is outside the vector, it returns null without panicking. You would use this method if accessing an element beyond the range of the vector may happen occasionally under normal circumstances. Your code will then have logic to handle having either Some(&element) or null.
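Rust comparison: the same choice exists in Rust, where get returns Option<&T>. A small sketch of the non-panicking path (the helper name is illustrative):

```rust
// Returns the third element if it exists, or a fallback of 0.
fn third_or_default(v: &[i32]) -> i32 {
    // `get` returns Option<&i32>; `copied` converts it to Option<i32>.
    v.get(2).copied().unwrap_or(0)
}
```

Indexing with v[100] would panic at runtime, while v.get(100) simply yields the "no value" case for your code to handle.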

Ownership and Borrowing with Vectors

When the program has a valid reference, the borrow checker enforces the ownership and borrowing rules to ensure that this reference and any other references to the contents of the vector remain valid. Recall the rule that states you can't have mutable and immutable references in the same scope. That rule applies here, where we hold an immutable reference to the first element in a vector and try to add an element to the end. This program won't work if we also try to refer to that element later in the function:

fn main() {
    var v = vec![1, 2, 3, 4, 5]

    let first = &v[0]

    v.push(6)  // Error! Cannot borrow `v` as mutable

    println!("The first element is: \(first)")
}

Compiling this code will result in an error:

error[E0502]: cannot borrow `v` as mutable because it is also borrowed as immutable

The code might look like it should work: Why should a reference to the first element care about changes at the end of the vector? This error is due to the way vectors work: Because vectors put the values next to each other in memory, adding a new element onto the end of the vector might require allocating new memory and copying the old elements to the new space, if there isn't enough room to put all the elements next to each other where the vector is currently stored. In that case, the reference to the first element would be pointing to deallocated memory. The borrowing rules prevent programs from ending up in that situation.
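One common fix is to finish mutating before taking the reference, or to copy the element's value out instead of holding a borrow across the push. A Rust sketch of the copy-out approach (the function name is illustrative):

```rust
// Push first, then read: no reference is held across the potential
// reallocation, so the borrow checker accepts this.
fn push_then_first(mut v: Vec<i32>) -> i32 {
    v.push(6);
    v[0] // copies the i32 out; no borrow outlives this expression
}
```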

Iterating Over the Values in a Vector

To access each element in a vector in turn, we would iterate through all of the elements rather than use indices to access one at a time. Here's how to use a for loop to get immutable references to each element in a vector of Int values and print them:

fn main() {
    let v = vec![100, 32, 57]
    for i in &v {
        println!("\(i)")
    }
}

We can also iterate over mutable references to each element in a mutable vector in order to make changes to all the elements. The following for loop will add 50 to each element:

fn main() {
    var v = vec![100, 32, 57]
    for i in &mut v {
        *i += 50
    }
}

To change the value that the mutable reference refers to, we have to use the * dereference operator to get to the value in i before we can use the += operator. We'll talk more about the dereference operator in a later chapter.

Iterating over a vector, whether immutably or mutably, is safe because of the borrow checker's rules. If we attempted to insert or remove items in the for loop body, we would get a compiler error. The reference to the vector that the for loop holds prevents simultaneous modification of the whole vector.
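Rust comparison: the mutable iteration above translates directly, and the dereference works the same way (the helper name is illustrative):

```rust
// Adds 50 to every element in place, then returns the vector.
fn add_fifty(mut v: Vec<i32>) -> Vec<i32> {
    for i in &mut v {
        *i += 50; // dereference the &mut i32 to reach the value
    }
    v
}
```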

Using an Enum to Store Multiple Types

Vectors can only store values that are of the same type. This can be inconvenient; there are definitely use cases for needing to store a list of items of different types. Fortunately, the variants of an enum are defined under the same enum type, so when we need one type to represent elements of different types, we can define and use an enum!

For example, say we want to get values from a row in a spreadsheet in which some of the columns in the row contain integers, some floating-point numbers, and some strings. We can define an enum whose variants will hold the different value types, and all the enum variants will be considered the same type: that of the enum. Then we can create a vector to hold that enum and so, ultimately, hold different types:

fn main() {
    enum SpreadsheetCell {
        IntValue(Int),
        FloatValue(Float),
        Text(String),
    }

    let row = vec![
        SpreadsheetCell.IntValue(3),
        SpreadsheetCell.FloatValue(10.12),
        SpreadsheetCell.Text("blue".toString()),
    ]
}

Oxide needs to know what types will be in the vector at compile time so that it knows exactly how much memory on the heap will be needed to store each element. We must also be explicit about what types are allowed in this vector. If Oxide allowed a vector to hold any type, there would be a chance that one or more of the types would cause errors with the operations performed on the elements of the vector. Using an enum plus a match expression means that Oxide will ensure at compile time that every possible case is handled.

If you don't know the exhaustive set of types a program will get at runtime to store in a vector, the enum technique won't work. Instead, you can use a trait object, which we'll cover in a later chapter.
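Rust comparison: the SpreadsheetCell pattern looks almost identical in Rust, and a match over the enum is checked for exhaustiveness at compile time (the helper function is illustrative):

```rust
// One enum type whose variants carry different payload types.
enum SpreadsheetCell {
    IntValue(i64),
    FloatValue(f64),
    Text(String),
}

// Exhaustive match: forgetting a variant is a compile error.
fn cell_kind(cell: &SpreadsheetCell) -> &'static str {
    match cell {
        SpreadsheetCell::IntValue(_) => "int",
        SpreadsheetCell::FloatValue(_) => "float",
        SpreadsheetCell::Text(_) => "text",
    }
}
```

A Vec<SpreadsheetCell> can then mix "different types" in one vector while remaining a vector of a single known type.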

Common Vector Methods

Now that we've discussed some of the most common ways to use vectors, here are some additional useful methods defined on Vec<T>:

fn main() {
    var v = vec![1, 2, 3]

    // Add an element to the end
    v.push(4)

    // Remove and return the last element
    let last: Int? = v.pop()  // Returns Some(4)

    // Get the length
    let len = v.len()  // Returns 3

    // Check if empty
    let empty = v.isEmpty()  // Returns false

    // Clear all elements
    v.clear()

    // Create with capacity (optimization)
    let withCapacity: Vec<Int> = Vec.withCapacity(10)
}

Rust comparison: Method names are generally the same, but Oxide uses camelCase for method names like isEmpty instead of is_empty. Also note Vec.withCapacity uses dot notation instead of Vec::with_capacity.

#![allow(unused)]
fn main() {
// Rust
let mut v = vec![1, 2, 3];
v.push(4);
let last = v.pop();
let len = v.len();
let empty = v.is_empty();
v.clear();
let with_capacity: Vec<i32> = Vec::with_capacity(10);
}

Dropping a Vector Drops Its Elements

Like any other value, a vector is freed when it goes out of scope:

fn main() {
    {
        let v = vec![1, 2, 3, 4]

        // do stuff with v
    } // <- v goes out of scope and is freed here
}

When the vector gets dropped, all of its contents are also dropped, meaning the integers it holds will be cleaned up. The borrow checker ensures that any references to contents of a vector are only used while the vector itself is valid.

Let's move on to the next collection type: String!

Storing UTF-8 Encoded Text with Strings

We talked about strings in the ownership chapter, but we'll look at them in more depth now. New programmers commonly get stuck on strings for a combination of three reasons: the language's propensity for exposing possible errors, strings being a more complicated data structure than many programmers give them credit for, and UTF-8. These factors combine in a way that can seem difficult when you're coming from other programming languages.

We discuss strings in the context of collections because strings are implemented as a collection of bytes, plus some methods to provide useful functionality when those bytes are interpreted as text. In this section, we'll talk about the operations on String that every collection type has, such as creating, updating, and reading. We'll also discuss the ways in which String is different from the other collections, namely how indexing into a String is complicated by the differences between how people and computers interpret String data.

What Is a String?

Oxide has only one string type in the core language, which is the string slice str that is usually seen in its borrowed form, &str. We talked about string slices in the ownership chapter, which are references to some UTF-8 encoded string data stored elsewhere. String literals, for example, are stored in the program's binary and are therefore string slices.

The String type, which is provided by the standard library rather than coded into the core language, is a growable, mutable, owned, UTF-8 encoded string type. When we refer to "strings" in Oxide, we might be referring to either the String or the string slice &str types, not just one of those types. Although this section is largely about String, both types are used heavily in the standard library, and both String and string slices are UTF-8 encoded.

Creating a New String

Many of the same operations available with Vec<T> are available with String as well because String is actually implemented as a wrapper around a vector of bytes with some extra guarantees, restrictions, and capabilities. An example of a function that works the same way with Vec<T> and String is the new function to create an instance:

fn main() {
    let s = String.new()
}

This line creates a new, empty string called s, into which we can then load data. Often, we'll have some initial data with which we want to start the string. For that, we use the toString method, which is available on any type that implements the Display trait, as string literals do:

fn main() {
    let data = "initial contents"
    let s = data.toString()

    // Or more directly:
    let s = "initial contents".toString()
}

This code creates a string containing initial contents.

We can also use the function String.from to create a String from a string literal:

fn main() {
    let s = String.from("initial contents")
}

Because strings are used for so many things, we can use many different APIs for strings, providing us with a lot of options. In this case, String.from and toString do the same thing, so which one you choose is a matter of style and readability.

Rust comparison: Oxide uses dot notation (String.new(), String.from()) instead of path notation (String::new(), String::from()). Also, Oxide uses toString() in camelCase instead of to_string().

#![allow(unused)]
fn main() {
// Rust
let s = String::new();
let s = "initial contents".to_string();
let s = String::from("initial contents");
}

Remember that strings are UTF-8 encoded, so we can include any properly encoded data in them:

fn main() {
    let hello = String.from("Hola")
    let hello = String.from("Hello")
    let hello = String.from("Здравствуйте")
    let hello = String.from("Bonjour")
    let hello = String.from("Hallo")
    let hello = String.from("Ciao")
    let hello = String.from("Olá")
}

All of these are valid String values.

String Interpolation

One of Oxide's most convenient features for working with strings is string interpolation. Instead of using the format! macro with placeholders, you can embed expressions directly in string literals using \(expression) syntax:

fn main() {
    let name = "Alice"
    let age = 30

    // String interpolation
    let greeting = "Hello, \(name)! You are \(age) years old."
    println!("\(greeting)")

    // Expressions work too
    let message = "Next year you'll be \(age + 1)."
    println!("\(message)")
}

This is much more readable than the equivalent code using format!:

fn main() {
    let name = "Alice"
    let age = 30

    // Using format! macro (also works)
    let greeting = format!("Hello, {}! You are {} years old.", name, age)
}

Rust comparison: Rust requires the format! macro for string formatting. Oxide's \(expr) syntax is inspired by Swift and provides a cleaner alternative.

#![allow(unused)]
fn main() {
// Rust
let name = "Alice";
let age = 30;
let greeting = format!("Hello, {}! You are {} years old.", name, age);
}

Updating a String

A String can grow in size and its contents can change, just like the contents of a Vec<T>, if you push more data into it. In addition, you can conveniently use the + operator or string interpolation to concatenate String values.

Appending with pushStr and push

We can grow a String by using the pushStr method to append a string slice:

fn main() {
    var s = String.from("foo")
    s.pushStr("bar")
    println!("\(s)")  // Prints: foobar
}

After these two lines, s will contain foobar. The pushStr method takes a string slice because we don't necessarily want to take ownership of the parameter. For example, in the following code, we want to be able to use s2 after appending its contents to s1:

fn main() {
    var s1 = String.from("foo")
    let s2 = "bar"
    s1.pushStr(s2)
    println!("s2 is \(s2)")  // s2 is still valid!
}

If the pushStr method took ownership of s2, we wouldn't be able to print its value on the last line. However, this code works as we'd expect!

The push method takes a single character as a parameter and adds it to the String:

fn main() {
    var s = String.from("lo")
    s.push('l')
    println!("\(s)")  // Prints: lol
}

Rust comparison: Oxide uses camelCase method names: pushStr instead of push_str.

#![allow(unused)]
fn main() {
// Rust
let mut s = String::from("foo");
s.push_str("bar");
s.push('l');
}

Concatenating with + or String Interpolation

Often, you'll want to combine two existing strings. One way to do so is to use the + operator:

fn main() {
    let s1 = String.from("Hello, ")
    let s2 = String.from("world!")
    let s3 = s1 + &s2  // Note: s1 has been moved here and can no longer be used
}

The string s3 will contain Hello, world!. The reason s1 is no longer valid after the addition, and the reason we used a reference to s2, has to do with the signature of the method that's called when we use the + operator. The + operator uses the add method, whose signature looks something like this:

consuming fn add(s: &str): String

This means s1 will be moved into the add call and will no longer be valid after that. So, although let s3 = s1 + &s2 looks like it will copy both strings and create a new one, this statement actually takes ownership of s1, appends a copy of the contents of s2, and then returns ownership of the result.

If we need to concatenate multiple strings, the behavior of the + operator gets unwieldy:

fn main() {
    let s1 = String.from("tic")
    let s2 = String.from("tac")
    let s3 = String.from("toe")

    let s = s1 + "-" + &s2 + "-" + &s3
}

At this point, s will be tic-tac-toe. With all of the + and " characters, it's difficult to see what's going on. For combining strings in more complicated ways, we can instead use string interpolation:

fn main() {
    let s1 = String.from("tic")
    let s2 = String.from("tac")
    let s3 = String.from("toe")

    let s = "\(s1)-\(s2)-\(s3)"
}

This code also sets s to tic-tac-toe. String interpolation is much easier to read, and unlike the + operator, it doesn't take ownership of any of its parameters because it uses references internally.
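Rust comparison: without interpolation, Rust reaches for the format! macro, which likewise borrows its arguments rather than taking ownership (the helper name is illustrative):

```rust
// format! borrows a, b, and c; none of them are moved.
fn join_with_dashes(a: &str, b: &str, c: &str) -> String {
    format!("{a}-{b}-{c}")
}
```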

Indexing into Strings

In many other programming languages, accessing individual characters in a string by referencing them by index is a valid and common operation. However, if you try to access parts of a String using indexing syntax in Oxide, you'll get an error:

fn main() {
    let s1 = String.from("hello")
    let h = s1[0]  // Error! Strings cannot be indexed by integers
}

The error tells the story: Oxide strings don't support indexing. But why not? To answer that question, we need to discuss how Oxide stores strings in memory.

Internal Representation

A String is a wrapper over a Vec<UInt8>. Let's look at some of our properly encoded UTF-8 example strings. First, this one:

let hello = String.from("Hola")

In this case, len will be 4, which means the vector storing the string "Hola" is 4 bytes long. Each of these letters takes 1 byte when encoded in UTF-8. The following line, however, may surprise you (note that this string begins with the capital Cyrillic letter Ze, not the number 3):

let hello = String.from("Здравствуйте")  // Russian greeting

If you were asked how long the string is, you might say 12. In fact, Oxide's answer is 24: That's the number of bytes it takes to encode "Здравствуйте" in UTF-8, because each Unicode scalar value in that string takes 2 bytes of storage. Therefore, an index into the string's bytes will not always correlate to a valid Unicode scalar value.

Imagine asking for the first "letter" with something like &hello[0]: you already know the answer would not be З, the first letter. When encoded in UTF-8, the first byte of З is 208 and the second is 151, so it would seem that the answer should in fact be 208, but 208 is not a valid character on its own. Returning 208 is likely not what a user would want if they asked for the first letter of this string; however, that's the only data that Oxide has at byte index 0.

The answer, then, is that to avoid returning an unexpected value and causing bugs that might not be discovered immediately, Oxide doesn't compile this code at all and prevents misunderstandings early in the development process.

Bytes, Scalar Values, and Grapheme Clusters

Another point about UTF-8 is that there are actually three relevant ways to look at strings from Oxide's perspective: as bytes, scalar values, and grapheme clusters (the closest thing to what we would call letters).

If we look at the Hindi word "नमस्ते" written in the Devanagari script, it is stored as a vector of UInt8 values that looks like this:

[224, 164, 168, 224, 164, 174, 224, 164, 184, 224, 165, 141, 224, 164, 164, 224, 165, 135]

That's 18 bytes and is how computers ultimately store this data. If we look at them as Unicode scalar values, which are what Oxide's Char type is, those bytes look like this:

['न', 'म', 'स', '्', 'त', 'े']

There are six Char values here, but the fourth and sixth are not letters: They're diacritics that don't make sense on their own. Finally, if we look at them as grapheme clusters, we'd get what a person would call the four letters that make up the Hindi word: ["न", "म", "स्", "ते"].

Oxide provides different ways of interpreting the raw string data that computers store so that each program can choose the interpretation it needs, no matter what human language the data is in.
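The byte and scalar-value views are exposed by the standard library directly (grapheme clusters need an external crate); in Rust syntax, the numbers above can be verified like this:

```rust
fn main() {
    let namaste = "नमस्ते";
    // 18 bytes of UTF-8...
    assert_eq!(namaste.len(), 18);
    // ...but only 6 Unicode scalar values (char in Rust, Char in Oxide)
    assert_eq!(namaste.chars().count(), 6);
    println!("bytes: {:?}", namaste.as_bytes());
}
```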

A final reason Oxide doesn't allow us to index into a String to get a character is that indexing operations are expected to always take constant time (O(1)). But it isn't possible to guarantee that performance with a String, because Oxide would have to walk through the contents from the beginning to the index to determine how many valid characters there were.

Slicing Strings

Indexing into a string is often a bad idea because it's not clear what the return type of the string-indexing operation should be: a byte value, a character, a grapheme cluster, or a string slice. If you really need to use indices to create string slices, Oxide asks you to be more specific.

Rather than indexing using [] with a single number, you can use [] with a range to create a string slice containing particular bytes:

let hello = "Здравствуйте"  // Russian greeting in Cyrillic

let s = &hello[0..4]

Here, s will be a &str that contains the first 4 bytes of the string. Earlier, we mentioned that each of these characters was 2 bytes, which means s will be "Зд", the first two Cyrillic characters.

If we were to try to slice only part of a character's bytes with something like &hello[0..1], Oxide would panic at runtime in the same way as if an invalid index were accessed in a vector:

thread 'main' panicked at 'byte index 1 is not a char boundary'

You should use caution when creating string slices with ranges, because doing so can crash your program.
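In Rust terms, you can see both the byte-boundary slice succeeding and check whether a given byte index is safe before slicing; a sketch using the standard library's is_char_boundary:

```rust
fn main() {
    let hello = "Здравствуйте";
    // The first 4 bytes are two complete 2-byte characters
    assert_eq!(&hello[0..4], "Зд");
    // Byte index 1 falls inside З, so &hello[0..1] would panic;
    // is_char_boundary lets you check before slicing
    assert!(!hello.is_char_boundary(1));
    assert!(hello.is_char_boundary(2));
}
```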

Iterating Over Strings

The best way to operate on pieces of strings is to be explicit about whether you want characters or bytes. For individual Unicode scalar values, use the chars method. Calling chars on a string separates out and returns values of type Char, and you can iterate over the result to access each element:

fn main() {
    for c in "Hello".chars() {
        println!("\(c)")
    }
}

This code will print:

H
e
l
l
o

Alternatively, the bytes method returns each raw byte, which might be appropriate for your domain:

fn main() {
    for b in "Hello".bytes() {
        println!("\(b)")
    }
}

This code will print the bytes that make up this string:

72
101
108
108
111

But be sure to remember that valid Unicode scalar values may be made up of more than 1 byte.
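The divergence between the two views is easy to demonstrate in Rust syntax: on ASCII text, chars and bytes line up one-to-one, but on multibyte text they don't:

```rust
fn main() {
    // ASCII: one byte per scalar value
    assert_eq!("Hello".chars().count(), "Hello".bytes().count());
    // Cyrillic: "Зд" is 2 chars but 4 bytes
    assert_eq!("Зд".chars().collect::<Vec<char>>(), vec!['З', 'д']);
    assert_eq!("Зд".bytes().count(), 4);
}
```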

Getting grapheme clusters from strings is complex, so this functionality is not provided by the standard library. Crates are available on crates.io if this is the functionality you need.

Common String Methods

Here are some commonly used methods on String and &str:

fn main() {
    let s = String.from("Hello, World!")

    // Check if empty
    let empty = s.isEmpty()  // false

    // Get length in bytes
    let len = s.len()  // 13

    // Check if string contains a substring
    let hasWorld = s.contains("World")  // true

    // Replace occurrences
    let replaced = s.replace("World", "Oxide")  // "Hello, Oxide!"

    // Convert to uppercase/lowercase
    let upper = s.toUppercase()  // "HELLO, WORLD!"
    let lower = s.toLowercase()  // "hello, world!"

    // Trim whitespace
    let padded = "  hello  "
    let trimmed = padded.trim()  // "hello"

    // Split into parts
    let csv = "a,b,c"
    for part in csv.split(',') {
        println!("\(part)")
    }
}

Rust comparison: Method names use camelCase in Oxide: isEmpty instead of is_empty, toUppercase instead of to_uppercase.

// Rust
let s = String::from("Hello, World!");
let empty = s.is_empty();
let upper = s.to_uppercase();
let lower = s.to_lowercase();

Strings Are Not So Simple

To summarize, strings are complicated. Different programming languages make different choices about how to present this complexity to the programmer. Oxide has chosen to make the correct handling of String data the default behavior for all Oxide programs, which means programmers have to put more thought into handling UTF-8 data up front. This trade-off exposes more of the complexity of strings than is apparent in other programming languages, but it prevents you from having to handle errors involving non-ASCII characters later in your development life cycle.

The good news is that the standard library offers a lot of functionality built off the String and &str types to help handle these complex situations correctly. Be sure to check out the documentation for useful methods like contains for searching in a string and replace for substituting parts of a string with another string.

Let's switch to something a bit less complex: hash maps!

Storing Keys with Associated Values in Hash Maps

The last of our common collections is the hash map. The type HashMap<K, V> stores a mapping of keys of type K to values of type V using a hashing function, which determines how it places these keys and values into memory. Many programming languages support this kind of data structure, but they often use a different name, such as hash, map, object, hash table, dictionary, or associative array, just to name a few.

Hash maps are useful when you want to look up data not by using an index, as you can with vectors, but by using a key that can be of any type. For example, in a game, you could keep track of each team's score in a hash map in which each key is a team's name and the values are each team's score. Given a team name, you can retrieve its score.

We'll go over the basic API of hash maps in this section, but many more goodies are hiding in the functions defined on HashMap<K, V> by the standard library. As always, check the standard library documentation for more information.

Creating a New Hash Map

One way to create an empty hash map is to use new and to add elements with insert. In the following example, we're keeping track of the scores of two teams whose names are Blue and Yellow. The Blue team starts with 10 points, and the Yellow team starts with 50:

import std.collections.HashMap

fn main() {
    var scores: HashMap<String, Int> = HashMap.new()

    scores.insert("Blue".toString(), 10)
    scores.insert("Yellow".toString(), 50)
}

Note that we need to first import the HashMap from the collections portion of the standard library. Of our three common collections, this one is the least often used, so it's not included in the features brought into scope automatically in the prelude. Hash maps also have less support from the standard library; there's no built-in macro to construct them, for example.

Just like vectors, hash maps store their data on the heap. This HashMap has keys of type String and values of type Int. Like vectors, hash maps are homogeneous: All of the keys must have the same type, and all of the values must have the same type.

Rust comparison: Oxide uses dot notation (HashMap.new()) instead of path notation (HashMap::new()), and imports use dot notation (std.collections.HashMap) instead of std::collections::HashMap.

// Rust
use std::collections::HashMap;

fn main() {
    let mut scores: HashMap<String, i32> = HashMap::new();
    scores.insert(String::from("Blue"), 10);
    scores.insert(String::from("Yellow"), 50);
}

Accessing Values in a Hash Map

We can get a value out of the hash map by providing its key to the get method:

import std.collections.HashMap

fn main() {
    var scores: HashMap<String, Int> = HashMap.new()

    scores.insert("Blue".toString(), 10)
    scores.insert("Yellow".toString(), 50)

    let teamName = "Blue".toString()
    let score: Int = scores.get(&teamName).copied().unwrapOr(0)

    println!("Blue team score: \(score)")
}

Here, score will have the value that's associated with the Blue team, and the result will be 10. The get method returns an Option<&V> (or (&V)? in Oxide terms); if there's no value for that key in the hash map, get will return null. This program handles the Option by calling copied to get an Option<Int> rather than an Option<&Int>, then unwrapOr to set score to zero if scores doesn't have an entry for the key.

We can iterate over each key-value pair in a hash map in a similar manner as we do with vectors, using a for loop:

import std.collections.HashMap

fn main() {
    var scores: HashMap<String, Int> = HashMap.new()

    scores.insert("Blue".toString(), 10)
    scores.insert("Yellow".toString(), 50)

    for (key, value) in &scores {
        println!("\(key): \(value)")
    }
}

This code will print each pair in an arbitrary order:

Yellow: 50
Blue: 10

Rust comparison: Oxide uses camelCase for method names: unwrapOr instead of unwrap_or.

// Rust
let score = scores.get(&team_name).copied().unwrap_or(0);

Hash Maps and Ownership

For types that implement the Copy trait, like Int, the values are copied into the hash map. For owned values like String, the values will be moved and the hash map will be the owner of those values:

import std.collections.HashMap

fn main() {
    let fieldName = "Favorite color".toString()
    let fieldValue = "Blue".toString()

    var map: HashMap<String, String> = HashMap.new()
    map.insert(fieldName, fieldValue)

    // fieldName and fieldValue are invalid at this point!
    // println!("\(fieldName)")  // Error: value has been moved
}

We aren't able to use the variables fieldName and fieldValue after they've been moved into the hash map with the call to insert.

If we insert references to values into the hash map, the values won't be moved into the hash map. The values that the references point to must be valid for at least as long as the hash map is valid. We'll talk more about these issues in the chapter on lifetimes.
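In Rust terms, the move-versus-borrow distinction looks like this; a sketch (the variable names are illustrative):

```rust
use std::collections::HashMap;

fn main() {
    // Owned values are moved into the map; the variables become invalid.
    let field_name = String::from("Favorite color");
    let field_value = String::from("Blue");
    let mut owned: HashMap<String, String> = HashMap::new();
    owned.insert(field_name, field_value);
    // println!("{field_name}"); // would not compile: value moved

    // Inserting references leaves ownership with the caller,
    // but the referents must outlive the map.
    let key = String::from("Favorite color");
    let val = String::from("Blue");
    let mut borrowed: HashMap<&String, &String> = HashMap::new();
    borrowed.insert(&key, &val);
    assert_eq!(borrowed[&key], &val); // key and val are still usable here
}
```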

Updating a Hash Map

Although the number of key-value pairs is growable, each unique key can only have one value associated with it at a time (but not vice versa: for example, both the Blue team and the Yellow team could have the value 10 stored in the scores hash map).

When you want to change the data in a hash map, you have to decide how to handle the case when a key already has a value assigned. You could replace the old value with the new value, completely disregarding the old value. You could keep the old value and ignore the new value, only adding the new value if the key doesn't already have a value. Or you could combine the old value and the new value. Let's look at how to do each of these!

Overwriting a Value

If we insert a key and a value into a hash map and then insert that same key with a different value, the value associated with that key will be replaced. Even though the following code calls insert twice, the hash map will only contain one key-value pair because we're inserting the value for the Blue team's key both times:

import std.collections.HashMap

fn main() {
    var scores: HashMap<String, Int> = HashMap.new()

    scores.insert("Blue".toString(), 10)
    scores.insert("Blue".toString(), 25)

    println!("\(scores)")  // Prints: {"Blue": 25}
}

This code will print {"Blue": 25}. The original value of 10 has been overwritten.
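The Rust equivalent behaves the same way: the second insert replaces the first, and the map still holds a single entry:

```rust
use std::collections::HashMap;

fn main() {
    let mut scores: HashMap<String, i32> = HashMap::new();
    scores.insert(String::from("Blue"), 10);
    scores.insert(String::from("Blue"), 25); // replaces the 10
    assert_eq!(scores["Blue"], 25);
    assert_eq!(scores.len(), 1); // still only one key-value pair
}
```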

Adding a Key and Value Only If a Key Isn't Present

It's common to check whether a particular key already exists in the hash map with a value and then to take the following actions: If the key does exist in the hash map, the existing value should remain the way it is; if the key doesn't exist, insert it and a value for it.

Hash maps have a special API for this called entry that takes the key you want to check as a parameter. The return value of the entry method is an enum called Entry that represents a value that might or might not exist. Let's say we want to check whether the key for the Yellow team has a value associated with it. If it doesn't, we want to insert the value 50, and the same for the Blue team:

import std.collections.HashMap

fn main() {
    var scores: HashMap<String, Int> = HashMap.new()
    scores.insert("Blue".toString(), 10)

    scores.entry("Yellow".toString()).orInsert(50)
    scores.entry("Blue".toString()).orInsert(50)

    println!("\(scores)")
}

The orInsert method on Entry is defined to return a mutable reference to the value for the corresponding Entry key if that key exists, and if not, it inserts the parameter as the new value for this key and returns a mutable reference to the new value. This technique is much cleaner than writing the logic ourselves and, in addition, plays more nicely with the borrow checker.

Running this code will print {"Yellow": 50, "Blue": 10}. The first call to entry will insert the key for the Yellow team with the value 50 because the Yellow team doesn't have a value already. The second call to entry will not change the hash map, because the Blue team already has the value 10.

Rust comparison: Oxide uses camelCase: orInsert instead of or_insert.

// Rust
scores.entry(String::from("Yellow")).or_insert(50);

Updating a Value Based on the Old Value

Another common use case for hash maps is to look up a key's value and then update it based on the old value. For instance, the following code counts how many times each word appears in some text. We use a hash map with the words as keys and increment the value to keep track of how many times we've seen that word. If it's the first time we've seen a word, we'll first insert the value 0:

import std.collections.HashMap

fn main() {
    let text = "hello world wonderful world"

    var map: HashMap<String, Int> = HashMap.new()

    for word in text.splitWhitespace() {
        let count = map.entry(word.toString()).orInsert(0)
        *count += 1
    }

    println!("\(map)")
}

This code will print {"world": 2, "hello": 1, "wonderful": 1}. You might see the same key-value pairs printed in a different order: Recall that iterating over a hash map happens in an arbitrary order.

The splitWhitespace method returns an iterator over subslices, separated by whitespace, of the value in text. The orInsert method returns a mutable reference (&mut V) to the value for the specified key. Here, we store that mutable reference in the count variable, so in order to assign to that value, we must first dereference count using the asterisk (*). The mutable reference goes out of scope at the end of the for loop, so all of these changes are safe and allowed by the borrowing rules.

Rust comparison: Oxide uses splitWhitespace in camelCase instead of split_whitespace.

// Rust
for word in text.split_whitespace() {
    let count = map.entry(String::from(word)).or_insert(0);
    *count += 1;
}

Hashing Functions

By default, HashMap uses a hashing function called SipHash that can provide resistance to denial-of-service (DoS) attacks involving hash tables. This is not the fastest hashing algorithm available, but the trade-off for better security that comes with the drop in performance is worth it. If you profile your code and find that the default hash function is too slow for your purposes, you can switch to another function by specifying a different hasher. A hasher is a type that implements the BuildHasher trait. We'll talk about traits and how to implement them in a later chapter. You don't necessarily have to implement your own hasher from scratch; crates.io has libraries shared by other users that provide hashers implementing many common hashing algorithms.
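In Rust terms, swapping the hasher means changing the usually-invisible third type parameter of HashMap. A sketch using only the standard library's DefaultHasher wrapped in BuildHasherDefault (a real project would more likely pull in a crate providing a faster algorithm):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::BuildHasherDefault;

fn main() {
    // HashMap<K, V, S>: S can be any type implementing BuildHasher.
    let mut scores: HashMap<String, i32, BuildHasherDefault<DefaultHasher>> =
        HashMap::default();
    scores.insert(String::from("Blue"), 10);
    assert_eq!(scores["Blue"], 10);
}
```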

A Complete Example

Let's put together a more complete example that demonstrates working with hash maps. This program tracks player scores in a game:

import std.collections.HashMap

public struct GameScores {
    scores: HashMap<String, Int>,
}

extension GameScores {
    public static fn new(): GameScores {
        GameScores { scores: HashMap.new() }
    }

    public mutating fn addPlayer(name: String) {
        self.scores.entry(name).orInsert(0)
    }

    public mutating fn addPoints(name: &str, points: Int) {
        if let Some(score) = self.scores.getMut(&name.toString()) {
            *score += points
        }
    }

    public fn getScore(name: &str): Int {
        self.scores.get(&name.toString()).copied().unwrapOr(0)
    }

    public fn displayScores() {
        println!("Current Scores:")
        for (name, score) in &self.scores {
            println!("  \(name): \(score)")
        }
    }
}

fn main() {
    var game = GameScores.new()

    game.addPlayer("Alice".toString())
    game.addPlayer("Bob".toString())

    game.addPoints("Alice", 10)
    game.addPoints("Alice", 5)
    game.addPoints("Bob", 20)

    game.displayScores()

    let aliceScore = game.getScore("Alice")
    println!("Alice's final score: \(aliceScore)")
}

This example demonstrates:

  • Creating and initializing a HashMap
  • Using the entry API for safe insertions
  • Iterating over key-value pairs
  • Getting mutable references to update values
  • Working with hash maps inside a struct

Summary

Vectors, strings, and hash maps will provide a large amount of functionality necessary in programs when you need to store, access, and modify data. Here are some exercises you should now be equipped to solve:

  1. Given a list of integers, use a vector and return the median (when sorted, the value in the middle position) and mode (the value that occurs most often; a hash map will be helpful here) of the list.

  2. Convert strings to pig latin. The first consonant of each word is moved to the end of the word and -ay is added, so first becomes irst-fay. Words that start with a vowel have -hay added to the end instead (apple becomes apple-hay). Keep in mind the details about UTF-8 encoding!

  3. Using a hash map and vectors, create a text interface to allow a user to add employee names to a department in a company; for example, "Add Sally to Engineering" or "Add Amir to Sales." Then, let the user retrieve a list of all people in a department or all people in the company by department, sorted alphabetically.

The standard library API documentation describes methods that vectors, strings, and hash maps have that will be helpful for these exercises!

We're getting into more complex programs in which operations can fail, so it's a perfect time to discuss error handling. We'll do that next!

Error Handling

Errors are a fact of life in software, so Oxide has a number of features for handling situations in which something goes wrong. In many cases, Oxide requires you to acknowledge the possibility of an error and take some action before your code will compile. This requirement makes your program more robust by ensuring that you'll discover errors and handle them appropriately before deploying your code to production!

Oxide groups errors into two major categories: recoverable and unrecoverable errors. For a recoverable error, such as a file not found error, we most likely just want to report the problem to the user and retry the operation. Unrecoverable errors are always symptoms of bugs, such as trying to access a location beyond the end of an array, and so we want to immediately stop the program.

Most languages don't distinguish between these two kinds of errors and handle both in the same way, using mechanisms such as exceptions. Oxide doesn't have exceptions. Instead, it has:

  • The type Result<T, E> for recoverable errors
  • The panic! macro that stops execution when the program encounters an unrecoverable error
  • Nullable types T? for values that may or may not exist
  • Ergonomic operators ?? and !! for working with nullable values

This chapter covers:

  1. Unrecoverable Errors with panic! - When to stop the program entirely
  2. Recoverable Errors with Result - When to give callers a chance to handle failure
  3. Working with Nullable Types - Using T?, ??, and !! effectively
  4. To panic! or Not to panic! - Guidelines for choosing the right approach

Oxide's Error Handling Philosophy

Oxide inherits Rust's philosophy of making error handling explicit and type-safe. However, Oxide adds ergonomic operators that make common patterns more concise:

Oxide            Rust Equivalent        Purpose
-----            ---------------        -------
T?               Option<T>              Value that might be absent
null             None                   Absence of a value
x ?? default     x.unwrapOr(default)    Provide fallback for nullable
x!!              x.unwrap()             Force unwrap (panics if null)
x?               x?                     Propagate errors (unchanged)

These operators make error handling more readable while maintaining the same safety guarantees as Rust.
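The Rust equivalents from the table behave like this (Rust spells T? as Option<T> and null as None):

```rust
fn main() {
    let present: Option<i32> = Some(3);
    let absent: Option<i32> = None;

    // x ?? default  ==>  unwrap_or
    assert_eq!(present.unwrap_or(0), 3);
    assert_eq!(absent.unwrap_or(0), 0);

    // x!!  ==>  unwrap (panics when the value is None/null)
    assert_eq!(present.unwrap(), 3);
}
```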

Unrecoverable Errors with panic!

Sometimes bad things happen in your code, and there's nothing you can do about it. In these cases, Oxide has the panic! macro. There are two ways to cause a panic in practice: by taking an action that causes our code to panic (such as accessing an array past the end) or by explicitly calling the panic! macro. In both cases, we cause a panic in our program. By default, these panics will print a failure message, unwind, clean up the stack, and quit. Via an environment variable, you can also have Oxide display the call stack when a panic occurs to make it easier to track down the source of the panic.

Unwinding the Stack or Aborting in Response to a Panic

By default, when a panic occurs, the program starts unwinding, which means Oxide walks back up the stack and cleans up the data from each function it encounters. However, walking back and cleaning up is a lot of work. Oxide therefore allows you to choose the alternative of immediately aborting, which ends the program without cleaning up.

Memory that the program was using will then need to be cleaned up by the operating system. If in your project you need to make the resultant binary as small as possible, you can switch from unwinding to aborting upon a panic by adding panic = 'abort' to the appropriate [profile] sections in your Cargo.toml file. For example, if you want to abort on panic in release mode, add this:

[profile.release]
panic = 'abort'

Let's try calling panic! in a simple program:

fn main() {
    panic!("crash and burn")
}

When you run the program, you'll see something like this:

thread 'main' panicked at src/main.ox:2:5:
crash and burn
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

The call to panic! causes the error message contained in the last two lines. The first line shows our panic message and the place in our source code where the panic occurred: src/main.ox:2:5 indicates that it's the second line, fifth character of our src/main.ox file.

In this case, the line indicated is part of our code, and if we go to that line, we see the panic! macro call. In other cases, the panic! call might be in code that our code calls, and the filename and line number reported by the error message will be someone else's code where the panic! macro is called, not the line of our code that eventually led to the panic! call.

Using a panic! Backtrace

We can use the backtrace of the functions the panic! call came from to figure out the part of our code that is causing the problem. To understand how to use a panic! backtrace, let's look at another example and see what it's like when a panic! call comes from a library because of a bug in our code instead of from our code calling the macro directly. Here's some code that attempts to access an index in a vector beyond the range of valid indexes:

fn main() {
    let v = vec![1, 2, 3]

    v[99]
}

Here, we're attempting to access the 100th element of our vector (which is at index 99 because indexing starts at zero), but the vector has only three elements. In this situation, Oxide will panic. Using [] is supposed to return an element, but if you pass an invalid index, there's no element that Oxide could return here that would be correct.
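If you want to handle an out-of-range index gracefully rather than panic, the standard library's checked accessor returns an optional value instead; in Rust terms:

```rust
fn main() {
    let v = vec![1, 2, 3];
    // Indexing with [] out of range panics, but get returns None instead
    assert_eq!(v.get(2), Some(&3));
    assert_eq!(v.get(99), None);
}
```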

In C, attempting to read beyond the end of a data structure is undefined behavior. You might get whatever is at the location in memory that would correspond to that element in the data structure, even though the memory doesn't belong to that structure. This is called a buffer overread and can lead to security vulnerabilities if an attacker is able to manipulate the index in such a way as to read data they shouldn't be allowed to that is stored after the data structure.

To protect your program from this sort of vulnerability, if you try to read an element at an index that doesn't exist, Oxide will stop execution and refuse to continue. Let's try it and see:

thread 'main' panicked at src/main.ox:4:5:
index out of bounds: the len is 3 but the index is 99
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

This error points at line 4 of our main.ox where we attempt to access index 99 of the vector in v.

The note: line tells us that we can set the RUST_BACKTRACE environment variable to get a backtrace of exactly what happened to cause the error. A backtrace is a list of all the functions that have been called to get to this point. Backtraces in Oxide work as they do in other languages: The key to reading the backtrace is to start from the top and read until you see files you wrote. That's the spot where the problem originated. The lines above that spot are code that your code has called; the lines below are code that called your code.

Let's try getting a backtrace by setting the RUST_BACKTRACE environment variable to any value except 0:

$ RUST_BACKTRACE=1 cargo run
thread 'main' panicked at src/main.ox:4:5:
index out of bounds: the len is 3 but the index is 99
stack backtrace:
   0: rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::panicking::panic_bounds_check
   3: <usize as core::slice::index::SliceIndex<[T]>>::index
   4: core::slice::index::<impl core::ops::index::Index<I> for [T]>::index
   5: <alloc::vec::Vec<T,A> as core::ops::index::Index<I>>::index
   6: panic::main
             at ./src/main.ox:4:5
   7: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

That's a lot of output! The exact output you see might be different depending on your operating system and Oxide version. In order to get backtraces with this information, debug symbols must be enabled. Debug symbols are enabled by default when using cargo build or cargo run without the --release flag, as we have here.

In the output above, line 6 of the backtrace points to the line in our project that's causing the problem: line 4 of src/main.ox. If we don't want our program to panic, we should start our investigation at the location pointed to by the first line mentioning a file we wrote. The way to fix the panic is to not request an element beyond the range of the vector indexes. When your code panics in the future, you'll need to figure out what action the code is taking with what values to cause the panic and what the code should do instead.

We'll come back to panic! and when we should and should not use panic! to handle error conditions in the "To panic! or Not to panic!" section later in this chapter. Next, we'll look at how to recover from an error using Result.

Recoverable Errors with Result

Most errors aren't serious enough to require the program to stop entirely. Sometimes when a function fails, it's for a reason that you can easily interpret and respond to. For example, if you try to open a file and that operation fails because the file doesn't exist, you might want to create the file instead of terminating the process.

The Result enum is defined as having two variants, Ok and Err, as follows:

enum Result<T, E> {
    Ok(T),
    Err(E),
}

The T and E are generic type parameters. T represents the type of the value that will be returned in a success case within the Ok variant, and E represents the type of the error that will be returned in a failure case within the Err variant.
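To make T and E concrete, here is a small hypothetical function in Rust syntax that fills them in with f64 and String:

```rust
// A fallible division: T = f64 on success, E = String on failure.
fn divide(a: f64, b: f64) -> Result<f64, String> {
    if b == 0.0 {
        Err(String::from("division by zero"))
    } else {
        Ok(a / b)
    }
}

fn main() {
    assert_eq!(divide(10.0, 2.0), Ok(5.0));
    assert_eq!(divide(1.0, 0.0), Err(String::from("division by zero")));
}
```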

Let's call a function that returns a Result value because the function could fail. Here we try to open a file:

import std.fs.File

fn main() {
    let greetingFileResult = File.open("hello.txt")
}

The return type of File.open is a Result<T, E>. The generic parameter T has been filled in by the implementation of File.open with the type of the success value, std.fs.File, which is a file handle. The type of E used in the error value is std.io.Error. This return type means the call to File.open might succeed and return a file handle that we can read from or write to. The function call also might fail: For example, the file might not exist, or we might not have permission to access the file.

We need to add code to take different actions depending on the value File.open returns. Here's one way to handle the Result using a match expression:

import std.fs.File

fn main() {
    let greetingFileResult = File.open("hello.txt")

    let greetingFile = match greetingFileResult {
        Ok(file) -> file,
        Err(error) -> panic!("Problem opening the file: \(error)"),
    }
}

Note that, like nullable types, the Result enum and its variants have been brought into scope by the prelude, so we don't need to specify Result. before the Ok and Err variants in the match arms.

When the result is Ok, this code will return the inner file value out of the Ok variant, and we then assign that file handle value to the variable greetingFile. After the match, we can use the file handle for reading or writing.

The other arm of the match handles the case where we get an Err value from File.open. In this example, we've chosen to call the panic! macro. If there's no file named hello.txt in our current directory and we run this code, we'll see the following output from the panic! macro:

thread 'main' panicked at src/main.ox:9:23:
Problem opening the file: Os { code: 2, kind: NotFound, message: "No such file or directory" }

Matching on Different Errors

The code above will panic! no matter why File.open failed. However, we want to take different actions for different failure reasons. If File.open failed because the file doesn't exist, we want to create the file and return the handle to the new file. If File.open failed for any other reason--for example, because we didn't have permission to open the file--we still want the code to panic!. For this, we add an inner match expression:

import std.fs.File
import std.io.ErrorKind

fn main() {
    let greetingFileResult = File.open("hello.txt")

    let greetingFile = match greetingFileResult {
        Ok(file) -> file,
        Err(error) -> match error.kind() {
            ErrorKind.NotFound -> match File.create("hello.txt") {
                Ok(fc) -> fc,
                Err(e) -> panic!("Problem creating the file: \(e)"),
            },
            _ -> panic!("Problem opening the file: \(error)"),
        },
    }
}

The type of the value that File.open returns inside the Err variant is io.Error, which is a struct provided by the standard library. This struct has a method, kind, that we can call to get an io.ErrorKind value. The enum io.ErrorKind is provided by the standard library and has variants representing the different kinds of errors that might result from an io operation. The variant we want to use is ErrorKind.NotFound, which indicates the file we're trying to open doesn't exist yet.
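You can see ErrorKind in action in Rust by opening a file that presumably does not exist (the filename here is made up for illustration):

```rust
use std::fs::File;
use std::io::ErrorKind;

fn main() {
    // Assumes no file with this arbitrary name exists in the working directory.
    match File::open("definitely-not-here.txt") {
        Ok(_) => println!("unexpectedly found the file"),
        Err(e) => assert_eq!(e.kind(), ErrorKind::NotFound),
    }
}
```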

Alternatives to Using match with Result<T, E>

That's a lot of match! The match expression is very useful but also very much a primitive. In Chapter 13, you'll learn about closures, which are used with many of the methods defined on Result<T, E>. These methods can be more concise than using match when handling Result<T, E> values in your code.

For example, here's another way to write the same logic using closures and the unwrapOrElse method:

import std.fs.File
import std.io.ErrorKind

fn main() {
    let greetingFile = File.open("hello.txt").unwrapOrElse { error ->
        if error.kind() == ErrorKind.NotFound {
            File.create("hello.txt").unwrapOrElse { error ->
                panic!("Problem creating the file: \(error)")
            }
        } else {
            panic!("Problem opening the file: \(error)")
        }
    }
}

Although this code has the same behavior as the nested match, it doesn't contain any match expressions and is cleaner to read.

Shortcuts for Panic on Error

Using match works well enough, but it can be a bit verbose and doesn't always communicate intent well. The Result<T, E> type has many helper methods defined on it to do various, more specific tasks.

The unwrap Method

The unwrap method is a shortcut method implemented just like the match expression we wrote earlier. If the Result value is the Ok variant, unwrap will return the value inside the Ok. If the Result is the Err variant, unwrap will call the panic! macro for us:

import std.fs.File

fn main() {
    let greetingFile = File.open("hello.txt").unwrap()
}

If we run this code without a hello.txt file, we'll see an error message from the panic! call that the unwrap method makes:

thread 'main' panicked at src/main.ox:4:49:
called `Result.unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }

The expect Method

Similarly, the expect method lets us also choose the panic! error message. Using expect instead of unwrap and providing good error messages can convey your intent and make tracking down the source of a panic easier:

import std.fs.File

fn main() {
    let greetingFile = File.open("hello.txt")
        .expect("hello.txt should be included in this project")
}

We use expect in the same way as unwrap: to return the file handle or call the panic! macro. The error message used by expect in its call to panic! will be the parameter that we pass to expect, rather than the default panic! message that unwrap uses:

thread 'main' panicked at src/main.ox:5:10:
hello.txt should be included in this project: Os { code: 2, kind: NotFound, message: "No such file or directory" }

In production-quality code, most developers choose expect rather than unwrap and give more context about why the operation is expected to always succeed. That way, if your assumptions are ever proven wrong, you have more information to use in debugging.

Propagating Errors

When a function's implementation calls something that might fail, instead of handling the error within the function itself, you can return the error to the calling code so that it can decide what to do. This is known as propagating the error and gives more control to the calling code, where there might be more information or logic that dictates how the error should be handled than what you have available in the context of your code.

For example, here's a function that reads a username from a file. If the file doesn't exist or can't be read, this function will return those errors to the code that called the function:

import std.fs.File
import std.io
import std.io.Read

fn readUsernameFromFile(): Result<String, io.Error> {
    let usernameFileResult = File.open("hello.txt")

    var usernameFile = match usernameFileResult {
        Ok(file) -> file,
        Err(e) -> return Err(e),
    }

    var username = String.new()

    match usernameFile.readToString(&mut username) {
        Ok(_) -> Ok(username),
        Err(e) -> Err(e),
    }
}

This function can be written in a much shorter way, but we're going to start by doing a lot of it manually in order to explore error handling; at the end, we'll show the shorter way.

The return type of the function is Result<String, io.Error>. This means the function is returning a value of the type Result<T, E>, where the generic parameter T has been filled in with the concrete type String and the generic type E has been filled in with the concrete type io.Error.

If this function succeeds without any problems, the code that calls this function will receive an Ok value that holds a String--the username that this function read from the file. If this function encounters any problems, the calling code will receive an Err value that holds an instance of io.Error that contains more information about what the problems were.

This pattern of propagating errors is so common that Oxide provides the question mark operator ? to make this easier.

The ? Operator Shortcut

Here's an implementation of readUsernameFromFile that has the same functionality, but this implementation uses the ? operator:

import std.fs.File
import std.io
import std.io.Read

fn readUsernameFromFile(): Result<String, io.Error> {
    var usernameFile = File.open("hello.txt")?
    var username = String.new()
    usernameFile.readToString(&mut username)?
    Ok(username)
}

The ? placed after a Result value is defined to work in almost the same way as the match expressions that we defined to handle the Result values earlier. If the value of the Result is an Ok, the value inside the Ok will get returned from this expression, and the program will continue. If the value is an Err, the Err will be returned from the whole function as if we had used the return keyword so that the error value gets propagated to the calling code.

There is a difference between what the match expression does and what the ? operator does: Error values that have the ? operator called on them go through the from function, defined in the From trait in the standard library, which is used to convert values from one type into another. When the ? operator calls the from function, the error type received is converted into the error type defined in the return type of the current function. This is useful when a function returns one error type to represent all the ways a function might fail, even if parts might fail for many different reasons.
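
Since Oxide compiles through the Rust toolchain, this mechanism is easiest to see in plain Rust. In this sketch (the ConfigError type is hypothetical), the ? in parse_port converts a ParseIntError into our own error type via the From implementation:

```rust
use std::num::ParseIntError;

// Hypothetical error type for illustration
#[derive(Debug)]
struct ConfigError(String);

// Teach `?` how to convert a ParseIntError into a ConfigError
impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self {
        ConfigError(format!("invalid number: {e}"))
    }
}

fn parse_port(s: &str) -> Result<u16, ConfigError> {
    // On failure, `?` returns early with a ConfigError, not a
    // ParseIntError, because of the From impl above
    let port: u16 = s.parse()?;
    Ok(port)
}

fn main() {
    assert_eq!(parse_port("8080").unwrap(), 8080);
    assert!(parse_port("not a number").is_err());
}
```

Because the conversion happens automatically, parse_port never has to mention ParseIntError in its signature.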

The ? operator eliminates a lot of boilerplate and makes this function's implementation simpler. We could even shorten this code further by chaining method calls immediately after the ?:

import std.fs.File
import std.io
import std.io.Read

fn readUsernameFromFile(): Result<String, io.Error> {
    var username = String.new()
    File.open("hello.txt")?.readToString(&mut username)?
    Ok(username)
}

Or even more concisely using std.fs.readToString:

import std.fs

fn readUsernameFromFile(): Result<String, io.Error> {
    fs.readToString("hello.txt")
}

Reading a file into a string is a fairly common operation, so the standard library provides the convenient fs.readToString function that opens the file, creates a new String, reads the contents of the file, puts the contents into that String, and returns it.

Where to Use the ? Operator

The ? operator can only be used in functions whose return type is compatible with the value the ? is used on. This is because the ? operator is defined to perform an early return of a value out of the function.

Let's look at the error we'll get if we use the ? operator in a main function with a return type that is incompatible:

import std.fs.File

fn main() {
    let greetingFile = File.open("hello.txt")?
}

This code opens a file, which might fail. The ? operator follows the Result value returned by File.open, but this main function has the return type of (), not Result. When we compile this code, we get an error message:

error[E0277]: the `?` operator can only be used in a function that returns `Result` or `Option`
 --> src/main.ox:4:48
  |
3 | fn main() {
  | --------- this function should return `Result` or `Option` to accept `?`
4 |     let greetingFile = File.open("hello.txt")?
  |                                               ^ cannot use the `?` operator in a function that returns `()`

To fix the error, you have two choices. One choice is to change the return type of your function to be compatible with the value you're using the ? operator on. The other choice is to use a match or one of the Result<T, E> methods to handle the Result<T, E> in whatever way is appropriate.

The ? operator can also be used with nullable types (T?). The behavior is similar: if the value is null, null will be returned early from the function at that point. If the value is non-null, the inner value is the resultant value of the expression, and the function continues:

fn lastCharOfFirstLine(text: &str): Char? {
    text.lines().next()?.chars().last()
}

This function returns Char? because it's possible that there is a character there, but it's also possible that there isn't. This code takes the text string slice argument and calls the lines method on it, which returns an iterator over the lines in the string. Because this function wants to examine the first line, it calls next on the iterator to get the first value from the iterator. If text is the empty string, this call to next will return null, in which case we use ? to stop and return null from lastCharOfFirstLine.
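
For comparison, the Rust equivalent is nearly identical; the main syntactic difference is that Rust spells the nullable type as Option<char> rather than Char?:

```rust
fn last_char_of_first_line(text: &str) -> Option<char> {
    // Each `?` returns None early if the chain hits an absent value
    text.lines().next()?.chars().last()
}

fn main() {
    assert_eq!(last_char_of_first_line("Hello, world\nHow are you?"), Some('d'));
    assert_eq!(last_char_of_first_line(""), None);
}
```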

Note that you can use the ? operator on a Result in a function that returns Result, and you can use the ? operator on a nullable type in a function that returns a nullable type, but you can't mix and match. The ? operator won't automatically convert a Result to a nullable type or vice versa; in those cases, you can use methods like the ok method on Result or the okOr method on nullable types to do the conversion explicitly.
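
A short sketch of those explicit conversions (types annotated for clarity; the AuthError enum is hypothetical):

// Result -> nullable: ok() keeps the success value, discards the error
let port: Int? = "8080".parse().ok()

// Nullable -> Result: okOr supplies the error to use for the null case
let id: Result<String, AuthError> = maybeId.okOr(AuthError.MissingToken)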

main Can Return Result

The main function can also return a Result<(), E>:

import std.error.Error
import std.fs.File

fn main(): Result<(), Box<dyn Error>> {
    let greetingFile = File.open("hello.txt")?

    Ok(())
}

The Box<dyn Error> type is a trait object. For now, you can read Box<dyn Error> to mean "any kind of error." Using ? on a Result value in a main function with the error type Box<dyn Error> is allowed because it allows any Err value to be returned early.

When a main function returns a Result<(), E>, the executable will exit with a value of 0 if main returns Ok(()) and will exit with a nonzero value if main returns an Err value.

Now that we've discussed the details of calling panic! or returning Result, let's look at Oxide's ergonomic operators for working with nullable types.

Nullable Operators (?? and !!)

Oxide provides two ergonomic operators for working with nullable types (T?) that make common patterns more concise and readable. These operators complement the ? try operator and give you fine-grained control over how you handle the absence of values.

The Null Coalescing Operator (??)

The null coalescing operator ?? provides a default value when the left-hand side is null. This is one of the most common patterns when working with optional values.

Basic Usage

let username = maybeUsername ?? "Guest"
let port = configPort ?? 8080
let config = Config.load() ?? Config.default()

When maybeUsername contains a value, that value is used. When it's null, the right-hand side ("Guest") is used instead.

How It Works

The ?? operator desugars to method calls on the underlying type:

// Oxide
let name = optionalName ?? "Anonymous"

// Desugars to (conceptually):
let name = optionalName.unwrapOr("Anonymous")

For complex right-hand side expressions, Oxide uses lazy evaluation:

// Oxide
let config = loadConfig() ?? computeExpensiveDefault()

// Desugars to:
let config = loadConfig().unwrapOrElse { computeExpensiveDefault() }

This means computeExpensiveDefault() is only called if loadConfig() returns null.
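
You can observe the laziness directly in the Rust desugaring. In this sketch (load_config and the default function are hypothetical), the expensive default is never computed because the left side already has a value:

```rust
fn load_config() -> Option<String> {
    Some("config from file".to_string())
}

fn compute_expensive_default() -> String {
    // With unwrap_or_else, this body never runs when load_config() is Some
    panic!("expensive default should not be computed");
}

fn main() {
    let config = load_config().unwrap_or_else(|| compute_expensive_default());
    assert_eq!(config, "config from file");
}
```

If this used the eager unwrap_or(compute_expensive_default()) instead, the argument would be evaluated up front and the program would panic.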

Chaining ??

You can chain multiple ?? operators to provide a sequence of fallbacks:

let username = primaryName ?? secondaryName ?? "Guest"

This tries primaryName first, then secondaryName, and finally falls back to "Guest" if both are null.

?? with Different Types

The right-hand side of ?? must be the same type as the value inside the nullable type:

let count: Int? = Some(5)
let value = count ?? 0  // OK: Int? ?? Int -> Int

let maybeString: String? = null
let text = maybeString ?? "default".toString()  // OK: String? ?? String -> String

Important: ?? Works Only with T? (Option), Not with Result<T, E>

This is an intentional design decision. Result<T, E> contains typed error information that should not be silently discarded:

// This will NOT compile:
let value: Result<Int, Error> = Err(someError)
let result = value ?? 0  // ERROR: ?? only works with T?

Why? If ?? worked with Result, it would silently discard error information, making debugging difficult and hiding potential issues in your code.

For Result, use explicit methods:

// Provide a default value (discards the error)
let value = riskyOperation().unwrapOr(default)

// Handle the error with a closure
let value = riskyOperation().unwrapOrElse { err ->
    log("Error occurred: \(err)")
    computeDefault()
}

// Convert Result to nullable (discards error info)
let maybeValue: Int? = riskyOperation().ok()
let value = maybeValue ?? default

Practical Examples

Configuration with defaults:

struct ServerConfig {
    host: String,
    port: Int,
    maxConnections: Int,
}

fn loadServerConfig(env: &Environment): ServerConfig {
    ServerConfig {
        host: env.get("HOST") ?? "localhost".toString(),
        port: env.get("PORT")?.parse().ok() ?? 3000,
        maxConnections: env.get("MAX_CONN")?.parse().ok() ?? 100,
    }
}

Safe dictionary access:

fn getUserDisplayName(users: &HashMap<String, User>, id: &str): String {
    let user = users.get(id)
    let displayName = user?.displayName ?? user?.username ?? "Unknown User".toString()
    displayName
}

Combining with if let:

fn processItem(item: Item?): String {
    // Use ?? when you just need a value
    let name = item?.name ?? "unnamed".toString()

    // Use if let when you need to do more complex processing
    if let item = item {
        processFullItem(item)
    } else {
        processDefault()
    }
}

The Force Unwrap Operator (!!)

The force unwrap operator !! extracts the value from a nullable type, panicking if the value is null. Use this when you are certain a value exists and want to express that certainty explicitly.

Basic Usage

let user = findUser(id)!!  // Panics if null
let first = nonEmptyList.first()!!  // Panics if null

How It Works

The !! operator desugars to an unwrap call:

// Oxide
let user = findUser(id)!!

// Desugars to:
let user = findUser(id).unwrap()

When to Use !!

Use !! when:

  1. You have verified the value exists:
let items = vec![1, 2, 3]
if !items.isEmpty() {
    // We KNOW this won't be null because we checked isEmpty()
    let first = items.first()!!
    println!("First item: \(first)")
}
  2. In tests where you want to fail fast:
#[test]
fn testUserCreation() {
    let user = createUser("test@example.com")!!
    assertEq!(user.email, "test@example.com")
}
  3. In prototypes where proper error handling is deferred:
// Quick prototype - will add proper error handling later
fn main() {
    let config = Config.loadFromFile("config.toml")!!
    let server = Server.new(config)!!
    server.run()!!
}
  4. When the value being null would indicate a bug:
// If we reach this code, currentUser MUST be set by the auth middleware
let user = getCurrentUser()!!

Warning: Use !! Sparingly

The !! operator is a clear signal that "if this is null, something has gone fundamentally wrong." Overuse of !! defeats the purpose of nullable types:

// BAD: Using !! everywhere
fn processUser(id: String): String {
    let user = findUser(id)!!  // What if user doesn't exist?
    let profile = user.profile()!!  // What if profile is incomplete?
    let address = profile.address()!!  // What if address is optional?
    format!("User lives at \(address)")
}

// BETTER: Handle the nullable cases appropriately
fn processUser(id: String): String? {
    let user = findUser(id)?
    let profile = user.profile()?
    let address = profile.address()?
    Some(format!("User lives at \(address)"))
}

// Or using ?? for defaults
fn processUser(id: String): String {
    let user = findUser(id) ?? return "Unknown user".toString()
    let address = user.profile()?.address() ?? "No address".toString()
    format!("User lives at \(address)")
}

Combining ??, !!, and ?

These three operators serve different purposes and can be combined effectively:

Operator   Purpose                              When to Use
?          Propagate null/error to caller       When caller should handle the absence
??         Provide a fallback value             When you have a sensible default
!!         Assert value exists (panic if not)   When null indicates a bug

Example: User Authentication Flow

fn authenticateAndLoadProfile(token: String?): Result<UserProfile, AuthError> {
    // Use ? to propagate - caller handles missing token
    let token = token.okOr(AuthError.MissingToken)?

    // Use ? to propagate - caller handles invalid token
    let userId = validateToken(token)?

    // Use ?? for default - missing profile is OK, use default
    let profile = loadProfile(userId) ?? UserProfile.default()

    Ok(profile)
}

fn loadUserDashboard(user: User): Dashboard {
    // Use !! - user MUST have a primary account after login
    let primaryAccount = user.primaryAccount()!!

    // Use ?? - optional secondary accounts
    let secondaryAccounts = user.secondaryAccounts() ?? vec![]

    Dashboard {
        main: AccountView.new(primaryAccount),
        others: secondaryAccounts.map { AccountView.new(it) },
    }
}

Operator Precedence

Understanding precedence is important when combining these operators:

Precedence   Operator   Description
2 (high)     ?  !!      Try operator, force unwrap (postfix)
14 (low)     ??         Null coalescing

This means:

// a?.b ?? c is parsed as (a?.b) ?? c
let value = user?.name ?? "Anonymous"

// a!! ?? b would be unusual but parses as (a!!) ?? b
// (If a!! succeeds, ?? never evaluates; if a!! panics, we never reach ??)

// await and ?? interact predictably
let data = await fetchData() ?? defaultData  // (await fetchData()) ?? defaultData

Best Practices

  1. Prefer ?? over !! when a default makes sense:
// Good
let timeout = configuredTimeout ?? Duration.fromSecs(30)

// Avoid if null is actually possible
let timeout = configuredTimeout!!  // Panic if not configured!
  2. Use ? to propagate, ?? to provide defaults:
fn loadConfig(path: String?): Result<Config, Error> {
    let path = path ?? "config.toml".toString()  // Default path
    let content = fs.readToString(&path)?  // Propagate file errors
    parseConfig(&content)  // Propagate parse errors
}
  3. Document why you're using !!:
// The auth middleware guarantees currentUser is set for all authenticated routes
let user = getCurrentUser()!!
  4. In libraries, prefer returning T? or Result over panicking:
// Library code - let the caller decide
public fn findById(id: &str): User? {
    self.users.get(id).cloned()
}

// Application code - can use !! when appropriate
let user = db.findById(requiredId)!!

Summary

Oxide's ?? and !! operators make working with nullable types more ergonomic:

Operator       Desugars To            Behavior on null
x ?? default   x.unwrapOr(default)    Returns default
x!!            x.unwrap()             Panics
x?             Early return           Returns null from function

Remember:

  • ?? only works with nullable types (T?), not Result<T, E>
  • !! should be used sparingly and indicates "null here is a bug"
  • ? propagates the absence to the caller

These operators, combined with guard let, if let, and pattern matching, give you complete control over how to handle values that might be absent.

To Panic or Not to Panic

Now that we've covered how to use panic!, Result<T, E>, nullable types T?, and the operators ??, !!, and ?, we need to discuss when to use each approach. This is a critical design decision that impacts both the robustness and usability of your code.

The Core Decision

When you have a situation that could fail, you face a choice:

  1. Panic - Stop execution immediately (unrecoverable error)
  2. Return Result - Let the caller decide what to do (recoverable error)
  3. Use T? - Let the caller handle absence of values (nullable types)

The key principle is: Returning Result is the default choice for library code. Use panic! only when you're certain the situation is truly unrecoverable or when panic would serve debugging better than error handling.

Why Result is the Default

When you return Result<T, E>, you're saying to the caller: "This operation can fail, and I'm giving you the information you need to decide how to handle it." This respects the caller's knowledge of their own situation:

// Bad: Making the decision for the caller
fn loadConfig(): Config {
    File.open("config.toml")!!  // Panics if file missing
}

// Good: Letting the caller decide
fn loadConfig(): Result<Config, Error> {
    let content = fs.readToString("config.toml")?
    parseConfig(&content)
}

// Now the caller can choose:
fn main(): Result<(), Box<dyn Error>> {
    // Option 1: Propagate the error
    let config = loadConfig()?

    // Option 2: Use a default
    let config = loadConfig().unwrapOr(Config.default())

    // Option 3: Custom handling
    let config = match loadConfig() {
        Ok(cfg) -> cfg,
        Err(e) -> {
            eprintln!("Warning: {}", e)
            Config.default()
        }
    }

    Ok(())
}

When to Use panic!

Use panic! in these specific situations:

1. Examples and Prototypes

When writing example code or prototyping, panicking makes sense because your focus is on illustrating the happy path, not writing production-grade error handling:

// Example code - clarity is the goal
fn demonstrateStringParsing() {
    let numbers = vec!["1", "2", "3"]
    let parsed: Vec<Int> = numbers.iter()
        .map { Int.parse(it)!! }
        .collect()

    println!("Numbers: {:?}", parsed)
}

The !! here signals to readers: "In this example, we know these conversions will succeed. In real code, you'd handle errors."

2. Tests

In tests, panic indicates test failure. Using !! or .expect() is exactly right:

#[test]
fn testUserCreation() {
    let user = User.new("alice@example.com")!!
    assertEq!(user.email, "alice@example.com")
}

If user creation fails, the test should fail. There's no recovery.

3. When You Have More Information Than the Compiler

Sometimes you know something the compiler doesn't. You've verified through logic that a value will be present, but the type system can't express that guarantee. In these cases, use expect() with a detailed message explaining your reasoning:

fn parseIpAddress(): IpAddr {
    // We hardcoded this address, so we know it's valid
    "127.0.0.1".parse()
        .expect("Hardcoded IP address is always valid")
}

fn loadAuthenticatedUser(currentUserId: UserId?): User {
    // The auth middleware guarantees currentUserId is set after authentication
    let userId = currentUserId!!

    // This should never fail in a correctly functioning system
    database.findUser(userId)
        .expect("User must exist; auth middleware verifies this")
}

The message is crucial—it explains why you believe the panic can't happen, helping future maintainers understand the assumption.

4. Invalid State That Violates Contracts

When your function has a contract (documented preconditions), breaking that contract indicates a programmer error that should be caught immediately:

/// Creates a Guess from a value.
///
/// # Panics
///
/// Panics if the value is less than 1 or greater than 100.
struct Guess {
    value: Int,
}

extension Guess {
    public fn new(value: Int): Guess {
        guard value >= 1 && value <= 100 else {
            panic!("Guess must be between 1 and 100, got {}", value)
        }

        Guess { value }
    }
}

// Client code violation leads to panic during development
let guess = Guess.new(150)  // Panics - contract violation caught immediately

This is different from recoverable errors like "file not found"—it's a programming mistake.

5. External Code in Unexpected State

When calling external code that's out of your control and it returns an invalid state you can't fix:

fn processWebResponse(response: HttpResponse): Result<Data, Error> {
    // HttpStatus is supposed to be one of a defined set
    // If we get an undefined status, that's a library bug, not our error
    let status = match response.status() {
        HttpStatus.Ok -> HttpStatus.Ok,
        HttpStatus.NotFound -> HttpStatus.NotFound,
        HttpStatus.ServerError -> HttpStatus.ServerError,
        unknown -> panic!("HTTP library returned undefined status: {}", unknown),
    }

    // ... continue processing
    Ok(Data {})
}

6. Security-Critical Operations

When operating on invalid data would compromise security, panic rather than silently continuing:

fn processBuffer(buffer: &[UInt8], maxSize: UIntSize): Result<Vec<UInt8>, Error> {
    guard buffer.len() <= maxSize else {
        panic!("Buffer size {} exceeds maximum {}", buffer.len(), maxSize)
    }

    // Panicking here catches the contract violation immediately rather than
    // silently processing an oversized buffer, which would be a security issue
    Ok(process(buffer))
}

When to Return Result

Use Result<T, E> in these situations:

1. Expected Failure

When failure is an expected part of normal operation, return Result:

// File might not exist - this is expected
fn readConfigFile(path: String): Result<String, Error> {
    fs.readToString(path)
}

// Network request might fail - this is expected
fn fetchUserData(id: String): Result<UserData, HttpError> {
    http.get(&format!("/users/{}", id))?.json()
}

// User input might be invalid - this is expected
fn parseUserInput(input: String): Result<Command, ParseError> {
    Command.parse(&input)
}

2. Errors You Can't Control

When the error comes from outside your code and you can't prevent it, return Result:

fn downloadFile(url: String): Result<Vec<UInt8>, DownloadError> {
    let response = http.get(&url)?  // Network failure
    let bytes = response.bytes()?    // Response reading failure
    Ok(bytes)
}

The caller can retry, use a fallback, or notify the user.
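
For example, a caller could wrap downloadFile in a simple retry loop (a sketch; the retry count and loop syntax are illustrative):

fn downloadWithRetry(url: String, attempts: Int): Result<Vec<UInt8>, DownloadError> {
    var lastError: DownloadError? = null

    for _ in 0..attempts {
        match downloadFile(url.clone()) {
            Ok(bytes) -> return Ok(bytes),
            Err(e) -> lastError = Some(e),
        }
    }

    // attempts > 0, so lastError must have been set by the time we get here
    Err(lastError!!)
}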

3. Library Code

Libraries should return Result to give callers maximum flexibility. Application code can panic; libraries can't know the right error handling strategy:

// Good library code
public fn findUserById(id: String): Result<User, NotFoundError> {
    self.database.find(id)
        .okOr(NotFoundError { id })
}

// Caller decides:
// - In a CLI tool: panic to stop
// - In a web server: return HTTP 404
// - In a batch job: log and continue

4. Recoverable State Issues

When continuing with a reasonable fallback makes sense:

fn loadServerConfig(env: &Environment): Result<Config, ConfigError> {
    // User might not set these - use defaults
    let host = env.get("HOST").unwrapOr("localhost".toString())
    let port = env.get("PORT")?.parse().ok() ?? 8080
    let workers = env.get("WORKERS")?.parse().ok() ?? 4

    Ok(Config { host, port, workers })
}

5. Parsing and Validation

When converting user input or external data:

fn parseJson(data: String): Result<JsonValue, ParseError> {
    // User data is inherently unreliable
    // Return error, don't panic
    JsonParser.parse(&data)
}

fn parseDate(dateStr: String): Result<Date, ParseError> {
    // User might enter invalid date - expected failure
    Date.parse(&dateStr)
}

Using Nullable Types (T?)

The nullable type T? represents values that might be absent, and it's useful for:

1. Optional Data

When a value might not exist but absence isn't an error:

struct UserProfile {
    name: String,
    email: String,
    phoneNumber: String?,  // Not everyone provides a phone
    bio: String?,           // Bio is optional
}

fn getDisplayName(user: UserProfile): String {
    user.bio ?? "No bio provided".toString()
}

2. Collection Operations

Getting values from collections that might not contain them:

fn getFirstElement<T>(items: Vec<T>): T? {
    items.first()
}

fn getByKey<K, V>(map: HashMap<K, V>, key: K): V? {
    map.get(&key)
}

// Using it:
let users: Vec<User> = vec![...]
if let firstUser = users.first() {
    println!("First user: {}", firstUser.name)
}

3. Graceful Degradation

When you can continue with a default:

fn getUserPreference(userId: String, key: String): String {
    let value = database.getUserPref(userId, key)
    value ?? "default_preference".toString()
}

Decision Tree

Here's a practical decision tree:

Is the error unexpected or indicates a programming bug?
├─ Yes, and continuing would be unsafe/incorrect
│  └─ Use panic! or !!
├─ Yes, but caller should know about it
│  └─ Use Result<T, E>
└─ No, absence is normal and expected
   └─ Use T?

More specifically:

Can callers handle this situation better than you can?
├─ Yes (they have more context)
│  └─ Return Result or use T?
└─ No (it's a fundamental programming error)
   ├─ Is the value missing, or is the entire operation wrong?
   │  ├─ Just missing → Use T?
   │  └─ Operation invalid → Use panic!
   └─ Are you in library code?
      ├─ Yes → Always return Result or T?
      └─ No → panic! is OK if it simplifies code

Pattern: Graceful Error Handling with Three Levels

Many applications use three levels of error handling:

fn main(): Result<(), Box<dyn Error>> {
    let config = loadConfigOrDefault()  // Level 1: Defaults
    let client = createClient(&config)?  // Level 2: Return error
    client.run()                          // Level 3: Panic on unexpected
}

// Level 1: Use ?? for optional configuration
fn loadConfigOrDefault(): Config {
    let configPath = env.var("CONFIG").ok() ?? "config.toml".toString()
    fs.readToString(configPath)
        .ok()
        .flatMap { parseConfig(&it).ok() }
        ?? Config.default()
}

// Level 2: Return errors from operations that can fail
fn createClient(config: &Config): Result<Client, Error> {
    let database = Database.connect(&config.dbUrl)?
    let cache = Cache.new(&config.cacheUrl)?
    Ok(Client { database, cache })
}

// Level 3: Panic if invariants are violated
extension Client {
    fn run(): Result<(), Error> {
        guard self.database.isConnected() else {
            panic!("Database connection lost during operation")
        }

        // ... continue
        Ok(())
    }
}

Guidelines Summary

Situation                                  Use                 Reasoning
Expected failure (file, network, parse)    Result<T, E>        Caller can handle
Value might be absent                      T?                  Absence is normal
Example code                               !! or .expect()     Clarity over robustness
Test code                                  .expect() or !!     Panic = test failure
You have more info than compiler           .expect("why...")   Document your assumption
Contract violation                         panic!              Programmer error
Library code, general                      Result<T, E>        Respect caller's context
Application code, main logic               Result<T, E>        Robust production code
Prototype/spike solution                   !!                  Speed over perfection
Security-critical code                     panic! or Result    Never silently ignore

Real-World Examples

Web Server - Mixed Strategies

fn handleRequest(req: HttpRequest): Result<HttpResponse, Box<dyn Error>> {
    // Level 1: Optional headers - use ??
    let contentType = req.header("Content-Type") ?? "application/octet-stream".toString()

    // Level 2: Expected failures - use Result
    let userId = parseUserId(&req.body())?

    // Level 2: Expected failure - a missing user becomes an error for the caller
    let user = database.findUser(userId)
        .okOr(UserNotFound)?

    // Level 3: Contract violations - panic
    guard user.isActive else {
        panic!("Attempting to process inactive user {}", userId)
    }

    Ok(HttpResponse.ok().body(format!("Hello {}", user.name)))
}

Data Processing - Graceful Degradation

fn processData(input: Vec<DataPoint>): Result<Summary, ProcessError> {
    // Return error if no data - expected failure
    guard !input.isEmpty() else {
        return Err(ProcessError.EmptyInput)
    }

    let results: Vec<Int> = input.iter()
        .map { it.parse() }
        .collect<Result<Vec<Int>, ProcessError>>()?  // Propagate parse errors

    let avg = results.iter().sum<Int>() / results.len() as Int
    let maxValue = results.iter().max()  // Returns Option
        .copied()
        ?? 0  // Default if empty (shouldn't happen after guard, but defensive)

    Ok(Summary { average: avg, maximum: maxValue })
}

Configuration - All Three Levels

struct AppConfig {
    database: String,
    port: Int,
    logLevel: LogLevel,
}

fn loadAppConfig(): AppConfig {
    let dbUrl = env.var("DATABASE_URL")  // Returns Result
        .ok()  // Convert to Option
        ?? "sqlite:memory:".toString()  // Level 1: Default

    let port = env.var("PORT")
        .ok()
        .flatMap { it.parse().ok() }
        ?? 8080  // Level 1: Default

    let logLevel = env.var("LOG_LEVEL")
        .ok()
        .flatMap { LogLevel.parse(it) }
        ?? LogLevel.Info  // Level 1: Default

    AppConfig { database: dbUrl, port, logLevel }
}

fn initializeApp(config: AppConfig): Result<App, Error> {
    let db = Database.connect(&config.database)?  // Level 2: Return error
    let cache = Cache.new()?  // Level 2: Return error
    Ok(App { db, cache, config })
}

extension App {
    fn run(): Result<(), Error> {
        guard self.db.isHealthy() else {
            panic!("Database failed health check before main loop")
        }

        // ... continue with main event loop
        Ok(())
    }
}

Summary

The decision between panic!, Result, and T? is fundamental to Rust/Oxide's error handling philosophy:

  1. Default to Result - Especially in library code and functions that can fail for external reasons
  2. Use T? - For values that might not exist but absence isn't an error
  3. Panic for contracts - When callers violate documented preconditions
  4. Expect with explanation - When you know better than the compiler but need to document why
  5. Minimize !! - It should be rare in production code

Remember: The goal is to write code that's both safe and clear about what can go wrong and how to handle it. Your error handling strategy is part of your API's contract with users.

Generic Types, Traits, and Lifetimes

Every programming language has tools for effectively handling the duplication of concepts. In Oxide, one such tool is generics: abstract stand-ins for concrete types or other properties. We can express the behavior of generics or how they relate to other generics without knowing what will be in their place when compiling and running the code.

Functions can take parameters of some generic type, instead of a concrete type like Int or String, in the same way they take parameters with unknown values to run the same code on multiple concrete values. In fact, we already used generics in Chapter 6 with Option<T> (written as T? in Oxide), in Chapter 8 with Vec<T> and HashMap<K, V>, and in Chapter 9 with Result<T, E>. In this chapter, you will explore how to define your own types, functions, and methods with generics!

First, we will review how to extract a function to reduce code duplication. We will then use the same technique to make a generic function from two functions that differ only in the types of their parameters. We will also explain how to use generic types in struct and enum definitions.

Then, you will learn how to use traits to define behavior in a generic way. You can combine traits with generic types to constrain a generic type to accept only those types that have a particular behavior, as opposed to just any type.

Finally, we will discuss lifetimes: a variety of generics that give the compiler information about how references relate to each other. Lifetimes allow us to give the compiler enough information about borrowed values so that it can ensure that references will be valid in more situations than it could without our help.

Oxide Syntax vs. Rust

Since Oxide compiles to Rust, the generic syntax is largely the same:

Feature          | Rust                    | Oxide
-----------------|-------------------------|-------------------------
Generic function | fn foo<T>() {}          | fn foo<T>() {}
Generic struct   | struct Point<T> {}      | struct Point<T> {}
Generic enum     | enum Option<T> {}       | enum Option<T> {}
Trait bound      | fn foo<T: Display>() {} | fn foo<T: Display>() {}
Where clause     | where T: Clone          | where T: Clone
Lifetime         | fn foo<'a>(x: &'a T) {} | fn foo<'a>(x: &'a T) {}
Trait impl       | impl Trait for Type {}  | extension Type: Trait {}
Method modifiers | &self, &mut self        | Implicit, with mutating

The key difference in Oxide is the extension syntax for implementing traits, which reads more naturally than Rust's impl Trait for Type.

Chapter Structure

We'll take the same approach that the Rust Book does:

  1. First, we'll see how to extract functions to reduce duplication
  2. Then we'll learn about generic data types for functions, structs, enums, and methods
  3. Next, we'll explore traits to define shared behavior
  4. We'll use trait bounds to constrain generics
  5. Finally, we'll tackle lifetimes and how they interact with generics

Let's dive in!

Removing Duplication by Extracting a Function

Generics allow us to replace specific types with a placeholder that represents multiple types to remove code duplication. Before diving into generics syntax, let's first look at how to remove duplication in a way that doesn't involve generic types by extracting a function that replaces specific values with a placeholder that represents multiple values. Then, we will apply the same technique to extract a generic function! By looking at how to recognize duplicated code you can extract into a function, you will start to recognize duplicated code that can use generics.

We will begin with a short program that finds the largest number in a list:

fn main() {
    let numberList = vec![34, 50, 25, 100, 65]

    var largest = &numberList[0]

    for number in &numberList {
        if number > largest {
            largest = number
        }
    }

    println!("The largest number is \(largest)")
}

We store a list of integers in the variable numberList and place a reference to the first number in the list in a variable named largest. We then iterate through all the numbers in the list, and if the current number is greater than the number stored in largest, we replace the reference in that variable. However, if the current number is less than or equal to the largest number seen so far, the variable doesn't change, and the code moves on to the next number in the list. After considering all the numbers in the list, largest should refer to the largest number, which in this case is 100.

We have now been tasked with finding the largest number in two different lists of numbers. To do so, we can choose to duplicate the code and use the same logic at two different places in the program:

fn main() {
    let numberList = vec![34, 50, 25, 100, 65]

    var largest = &numberList[0]

    for number in &numberList {
        if number > largest {
            largest = number
        }
    }

    println!("The largest number is \(largest)")

    let numberList = vec![102, 34, 6000, 89, 54, 2, 43, 8]

    var largest = &numberList[0]

    for number in &numberList {
        if number > largest {
            largest = number
        }
    }

    println!("The largest number is \(largest)")
}

Although this code works, duplicating code is tedious and error-prone. We also have to remember to update the code in multiple places when we want to change it.

To eliminate this duplication, we will create an abstraction by defining a function that operates on any list of integers passed in as a parameter. This solution makes our code clearer and lets us express the concept of finding the largest number in a list abstractly.

We extract the code that finds the largest number into a function named largest. Then, we call the function to find the largest number in the two lists:

fn largest(list: &[Int]): &Int {
    var largest = &list[0]

    for item in list {
        if item > largest {
            largest = item
        }
    }

    largest
}

fn main() {
    let numberList = vec![34, 50, 25, 100, 65]
    let result = largest(&numberList)
    println!("The largest number is \(result)")

    let numberList = vec![102, 34, 6000, 89, 54, 2, 43, 8]
    let result = largest(&numberList)
    println!("The largest number is \(result)")
}

The largest function has a parameter called list, which represents any concrete slice of Int values we might pass into the function. As a result, when we call the function, the code runs on the specific values that we pass in.

In summary, here are the steps we took to change the code:

  1. Identify duplicate code.
  2. Extract the duplicate code into the body of the function, and specify the inputs and return values of that code in the function signature.
  3. Update the two instances of duplicated code to call the function instead.

Next, we will use these same steps with generics to reduce code duplication. In the same way that the function body can operate on an abstract list instead of specific values, generics allow code to operate on abstract types.

For example, say we had two functions: one that finds the largest item in a slice of Int values and one that finds the largest item in a slice of Char values. How would we eliminate that duplication? Let's find out!

Generic Data Types

We use generics to create definitions for items like function signatures or structs, which we can then use with many different concrete data types. Let's first look at how to define functions, structs, enums, and methods using generics. Then, we will discuss how generics affect code performance.

In Function Definitions

When defining a function that uses generics, we place the generics in the signature of the function where we would usually specify the data types of the parameters and return value. Doing so makes our code more flexible and provides more functionality to callers of our function while preventing code duplication.

Continuing with our largest function, here are two functions that both find the largest value in a slice. We will then combine these into a single function that uses generics.

fn largestInt(list: &[Int]): &Int {
    var largest = &list[0]

    for item in list {
        if item > largest {
            largest = item
        }
    }

    largest
}

fn largestChar(list: &[Char]): &Char {
    var largest = &list[0]

    for item in list {
        if item > largest {
            largest = item
        }
    }

    largest
}

fn main() {
    let numberList = vec![34, 50, 25, 100, 65]
    let result = largestInt(&numberList)
    println!("The largest number is \(result)")

    let charList = vec!['y', 'm', 'a', 'q']
    let result = largestChar(&charList)
    println!("The largest char is \(result)")
}

The largestInt function is the one we extracted previously that finds the largest Int in a slice. The largestChar function finds the largest Char in a slice. The function bodies have the same code, so let's eliminate the duplication by introducing a generic type parameter in a single function.

To parameterize the types in a new single function, we need to name the type parameter, just as we do for the value parameters to a function. You can use any identifier as a type parameter name. But we will use T because, by convention, type parameter names are short, often just one letter, and the type-naming convention is UpperCamelCase. Short for type, T is the default choice of most programmers.

When we use a parameter in the body of the function, we have to declare the parameter name in the signature so that the compiler knows what that name means. Similarly, when we use a type parameter name in a function signature, we have to declare the type parameter name before we use it. To define the generic largest function, we place type name declarations inside angle brackets, <>, between the name of the function and the parameter list, like this:

fn largest<T>(list: &[T]): &T {
    // ...
}

We read this definition as "The function largest is generic over some type T." This function has one parameter named list, which is a slice of values of type T. The largest function will return a reference to a value of the same type T.

Here is the combined largest function definition using the generic data type in its signature. The listing also shows how we can call the function with either a slice of Int values or Char values. Note that this code won't compile yet:

fn largest<T>(list: &[T]): &T {
    var largest = &list[0]

    for item in list {
        if item > largest {
            largest = item
        }
    }

    largest
}

fn main() {
    let numberList = vec![34, 50, 25, 100, 65]
    let result = largest(&numberList)
    println!("The largest number is \(result)")

    let charList = vec!['y', 'm', 'a', 'q']
    let result = largest(&charList)
    println!("The largest char is \(result)")
}

If we compile this code right now, we will get an error:

error[E0369]: binary operation `>` cannot be applied to type `&T`
 --> src/main.rs:5:17
  |
5 |         if item > largest {
  |            ---- ^ ------- &T
  |            |
  |            &T
  |
help: consider restricting type parameter `T`
  |
1 | fn largest<T: std::cmp::PartialOrd>(list: &[T]): &T {
  |             ++++++++++++++++++++++

The help text mentions std.cmp.PartialOrd, which is a trait, and we are going to talk about traits in the next section. For now, know that this error states that the body of largest won't work for all possible types that T could be. Because we want to compare values of type T in the body, we can only use types whose values can be ordered. To enable comparisons, the standard library has the std.cmp.PartialOrd trait that you can implement on types. By restricting T to only those types that implement PartialOrd, the code will compile because the standard library implements PartialOrd on both Int and Char.
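Since Oxide compiles to Rust, we can see the fix the compiler suggests directly in Rust terms: adding the PartialOrd bound gives the body access to the > operator for any ordered type.

```rust
// The generic `largest`, with the trait bound the compiler suggested.
// `PartialOrd` is what makes `>` available on values of type `&T`.
fn largest<T: std::cmp::PartialOrd>(list: &[T]) -> &T {
    let mut largest = &list[0];
    for item in list {
        if item > largest {
            largest = item;
        }
    }
    largest
}

fn main() {
    let numbers = vec![34, 50, 25, 100, 65];
    println!("The largest number is {}", largest(&numbers)); // 100
    let chars = vec!['y', 'm', 'a', 'q'];
    println!("The largest char is {}", largest(&chars)); // y
}
```

The same one-line change works in Oxide: fn largest<T: PartialOrd>(list: &[T]): &T.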

In Struct Definitions

We can also define structs to use a generic type parameter in one or more fields using the <> syntax. Here is a Point<T> struct to hold x and y coordinate values of any type:

struct Point<T> {
    x: T,
    y: T,
}

fn main() {
    let integer = Point { x: 5, y: 10 }
    let float = Point { x: 1.0, y: 4.0 }
}

The syntax for using generics in struct definitions is similar to that used in function definitions. First, we declare the name of the type parameter inside angle brackets just after the name of the struct. Then, we use the generic type in the struct definition where we would otherwise specify concrete data types.

Note that because we have used only one generic type to define Point<T>, this definition says that the Point<T> struct is generic over some type T, and the fields x and y are both that same type, whatever that type may be. If we create an instance of a Point<T> that has values of different types, our code won't compile:

struct Point<T> {
    x: T,
    y: T,
}

fn main() {
    let wontWork = Point { x: 5, y: 4.0 }
}

In this example, when we assign the integer value 5 to x, we let the compiler know that the generic type T will be an integer for this instance of Point<T>. Then, when we specify 4.0 for y, which we have defined to have the same type as x, we will get a type mismatch error like this:

error[E0308]: mismatched types
 --> src/main.rs:7:38
  |
7 |     let wontWork = Point { x: 5, y: 4.0 }
  |                                      ^^^ expected integer, found floating-point number

To define a Point struct where x and y are both generics but could have different types, we can use multiple generic type parameters. For example, we change the definition of Point to be generic over types T and U where x is of type T and y is of type U:

struct Point<T, U> {
    x: T,
    y: U,
}

fn main() {
    let bothInteger = Point { x: 5, y: 10 }
    let bothFloat = Point { x: 1.0, y: 4.0 }
    let integerAndFloat = Point { x: 5, y: 4.0 }
}

Now all the instances of Point shown are allowed! You can use as many generic type parameters in a definition as you want, but using more than a few makes your code hard to read. If you find you need lots of generic types in your code, it could indicate that your code needs restructuring into smaller pieces.

In Enum Definitions

As we did with structs, we can define enums to hold generic data types in their variants. Let's take another look at the Option<T> enum that the standard library provides, which we use through Oxide's T? syntax:

// This is what Option<T> looks like in Rust
enum Option<T> {
    Some(T),
    None,
}

This definition should now make more sense to you. As you can see, the Option<T> enum is generic over type T and has two variants: Some, which holds one value of type T, and a None variant that doesn't hold any value. By using the Option<T> enum (or T? in Oxide), we can express the abstract concept of an optional value, and because Option<T> is generic, we can use this abstraction no matter what the type of the optional value is.

In Oxide, we typically write this using the nullable type syntax:

fn findFirst<T: Clone>(items: &[T], predicate: (&T) -> Bool): T? {
    for item in items {
        if predicate(item) {
            return Some(item.clone())
        }
    }
    null
}
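A rough Rust translation of findFirst might look like the following sketch. Note that cloning the found item requires a Clone bound on T, and the Oxide nullable return type T? corresponds to Option<T>.

```rust
// A sketch of `findFirst` in Rust: the `Clone` bound is needed
// because we return an owned copy of the matching item.
fn find_first<T: Clone>(items: &[T], predicate: impl Fn(&T) -> bool) -> Option<T> {
    for item in items {
        if predicate(item) {
            return Some(item.clone());
        }
    }
    None
}

fn main() {
    let nums = vec![1, 3, 4, 7];
    // Find the first even number.
    let first_even = find_first(&nums, |n| n % 2 == 0);
    println!("{:?}", first_even); // Some(4)
}
```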

Enums can use multiple generic types as well. The definition of the Result enum that we used in Chapter 9 is one example:

enum Result<T, E> {
    Ok(T),
    Err(E),
}

The Result enum is generic over two types, T and E, and has two variants: Ok, which holds a value of type T, and Err, which holds a value of type E. This definition makes it convenient to use the Result enum anywhere we have an operation that might succeed (return a value of some type T) or fail (return an error of some type E). In fact, this is what we used to open a file in Chapter 9, where T was filled in with the type std.fs.File when the file was opened successfully and E was filled in with the type std.io.Error when there were problems opening the file.

When you recognize situations in your code with multiple struct or enum definitions that differ only in the types of the values they hold, you can avoid duplication by using generic types instead.

In Method Definitions

We can implement methods on structs and enums and use generic types in their definitions too. Here is the Point<T> struct with a method named x implemented on it:

struct Point<T> {
    x: T,
    y: T,
}

extension Point<T> {
    fn x(): &T {
        &self.x
    }
}

fn main() {
    let p = Point { x: 5, y: 10 }
    println!("p.x = \(p.x())")
}

Here, we have defined a method named x on Point<T> that returns a reference to the data in the field x.

Note that we have to declare T just after extension so that we can use T to specify that we are implementing methods on the type Point<T>. By declaring T as a generic type after extension, Oxide can identify that the type in the angle brackets in Point is a generic type rather than a concrete type. We could have chosen a different name for this generic parameter than the generic parameter declared in the struct definition, but using the same name is conventional.

Rust Equivalent

The Oxide code above translates to this Rust code:

struct Point<T> {
    x: T,
    y: T,
}

impl<T> Point<T> {
    fn x(&self) -> &T {
        &self.x
    }
}

We can also specify constraints on generic types when defining methods on the type. We could, for example, implement methods only on Point<Float> instances rather than on Point<T> instances with any generic type. Here we use the concrete type Float, meaning we don't declare any types after extension:

extension Point<Float> {
    fn distanceFromOrigin(): Float {
        (self.x.powi(2) + self.y.powi(2)).sqrt()
    }
}

This code means the type Point<Float> will have a distanceFromOrigin method; other instances of Point<T> where T is not of type Float will not have this method defined. The method measures how far our point is from the point at coordinates (0.0, 0.0) and uses mathematical operations that are available only for floating-point types.

Generic type parameters in a struct definition aren't always the same as those you use in that same struct's method signatures. Here is an example that uses the generic types X1 and Y1 for the Point struct and X2 and Y2 for the mixup method signature to make the example clearer:

struct Point<X1, Y1> {
    x: X1,
    y: Y1,
}

extension Point<X1, Y1> {
    consuming fn mixup<X2, Y2>(other: Point<X2, Y2>): Point<X1, Y2> {
        Point {
            x: self.x,
            y: other.y,
        }
    }
}

fn main() {
    let p1 = Point { x: 5, y: 10.4 }
    let p2 = Point { x: "Hello", y: 'c' }

    let p3 = p1.mixup(p2)

    println!("p3.x = \(p3.x), p3.y = \(p3.y)")
}

In main, we have defined a Point that has an Int for x (with value 5) and a Float for y (with value 10.4). The p2 variable is a Point struct that has a string slice for x (with value "Hello") and a Char for y (with value 'c'). Calling mixup on p1 with the argument p2 gives us p3, which will have an Int for x because x came from p1. The p3 variable will have a Char for y because y came from p2. The println! macro call will print p3.x = 5, p3.y = c.

The purpose of this example is to demonstrate a situation in which some generic parameters are declared with extension and some are declared with the method definition. Here, the generic parameters X1 and Y1 are declared after extension because they go with the struct definition. The generic parameters X2 and Y2 are declared after fn mixup because they are only relevant to the method.
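In Rust, the same two-level declaration looks like this: the struct's parameters go on the impl block, and the method-only parameters go on the method.

```rust
struct Point<X1, Y1> {
    x: X1,
    y: Y1,
}

// X1/Y1 are declared on the impl because they belong to the struct;
// X2/Y2 are declared on the method because only `mixup` uses them.
impl<X1, Y1> Point<X1, Y1> {
    fn mixup<X2, Y2>(self, other: Point<X2, Y2>) -> Point<X1, Y2> {
        Point {
            x: self.x,
            y: other.y,
        }
    }
}

fn main() {
    let p1 = Point { x: 5, y: 10.4 };
    let p2 = Point { x: "Hello", y: 'c' };
    let p3 = p1.mixup(p2);
    println!("p3.x = {}, p3.y = {}", p3.x, p3.y); // p3.x = 5, p3.y = c
}
```

Note that the Rust method takes self by value because it moves self.x into the new Point.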

Performance of Code Using Generics

You might be wondering whether there is a runtime cost when using generic type parameters. The good news is that using generic types won't make your program run any slower than it would with concrete types.

Oxide and Rust accomplish this by performing monomorphization of the code using generics at compile time. Monomorphization is the process of turning generic code into specific code by filling in the concrete types that are used when compiled. In this process, the compiler does the opposite of the steps we used to create the generic function: The compiler looks at all the places where generic code is called and generates code for the concrete types the generic code is called with.

Let's look at how this works by using the standard library's generic Option<T> enum:

let integer: Int? = Some(5)
let float: Float? = Some(5.0)

When Oxide compiles this code, it performs monomorphization. During that process, the compiler reads the values that have been used in Option<T> instances and identifies two kinds of Option<T>: One is Int and the other is Float. As such, it expands the generic definition of Option<T> into two definitions specialized to Int and Float, thereby replacing the generic definition with the specific ones.

The monomorphized version of the code looks similar to the following (the compiler uses different names than what we're using here for illustration):

enum OptionInt {
    Some(Int),
    None,
}

enum OptionFloat {
    Some(Float),
    None,
}

fn main() {
    let integer = OptionInt.Some(5)
    let float = OptionFloat.Some(5.0)
}

The generic Option<T> is replaced with the specific definitions created by the compiler. Because Oxide compiles generic code into code that specifies the type in each instance, we pay no runtime cost for using generics. When the code runs, it performs just as it would if we had duplicated each definition by hand. The process of monomorphization makes generics extremely efficient at runtime.

Traits: Defining Shared Behavior

A trait defines the functionality a particular type has and can share with other types. We can use traits to define shared behavior in an abstract way. We can use trait bounds to specify that a generic type can be any type that has certain behavior.

Note: Traits are similar to a feature often called interfaces in other languages, although with some differences.

Defining a Trait

A type's behavior consists of the methods we can call on that type. Different types share the same behavior if we can call the same methods on all of those types. Trait definitions are a way to group method signatures together to define a set of behaviors necessary to accomplish some purpose.

For example, let's say we have multiple structs that hold various kinds and amounts of text: a NewsArticle struct that holds a news story filed in a particular location and a SocialPost that can have, at most, 280 characters along with metadata that indicates whether it was a new post, a repost, or a reply to another post.

We want to make a media aggregator library crate named aggregator that can display summaries of data that might be stored in a NewsArticle or SocialPost instance. To do this, we need a summary from each type, and we will request that summary by calling a summarize method on an instance. Here is the definition of a public Summary trait that expresses this behavior:

public trait Summary {
    fn summarize(): String
}

Here, we declare a trait using the trait keyword and then the trait's name, which is Summary in this case. We also declare the trait as public so that crates depending on this crate can make use of this trait too, as we will see in a few examples. Inside the curly brackets, we declare the method signatures that describe the behaviors of the types that implement this trait, which in this case is fn summarize(): String.

After the method signature, instead of providing an implementation within curly brackets, we simply end the declaration (the Rust equivalent uses a trailing semicolon here; Oxide does not require one). Each type implementing this trait must provide its own custom behavior for the body of the method. The compiler will enforce that any type that has the Summary trait will have the method summarize defined with this signature exactly.

A trait can have multiple methods in its body: The method signatures are listed one per line.

Rust Equivalent

The Oxide trait definition above translates to this Rust code:

pub trait Summary {
    fn summarize(&self) -> String;
}

Note that in Oxide, trait methods use the same receiver modifiers as methods in extension blocks: fn implies &self, mutating fn implies &mut self, consuming fn implies self, and static fn has no receiver.
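That receiver mapping can be sketched in Rust, using a hypothetical Counter type (the Oxide modifier that would produce each signature is noted in the comments):

```rust
struct Counter {
    n: i32,
}

impl Counter {
    fn new() -> Counter { Counter { n: 0 } }   // Oxide: static fn (no receiver)
    fn value(&self) -> i32 { self.n }          // Oxide: fn (implicit &self)
    fn bump(&mut self) { self.n += 1 }         // Oxide: mutating fn (&mut self)
    fn into_inner(self) -> i32 { self.n }      // Oxide: consuming fn (self)
}

fn main() {
    let mut c = Counter::new();
    c.bump();
    println!("{}", c.value()); // 1
    println!("{}", c.into_inner()); // 1; `c` is moved and can't be used again
}
```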

Implementing a Trait on a Type

Now that we have defined the desired signatures of the Summary trait's methods, we can implement it on the types in our media aggregator. In Oxide, we use extension Type: Trait syntax to implement a trait on a type:

public struct NewsArticle {
    public headline: String,
    public location: String,
    public author: String,
    public content: String,
}

extension NewsArticle: Summary {
    fn summarize(): String {
        "\(self.headline), by \(self.author) (\(self.location))"
    }
}

public struct SocialPost {
    public username: String,
    public content: String,
    public reply: Bool,
    public repost: Bool,
}

extension SocialPost: Summary {
    fn summarize(): String {
        "\(self.username): \(self.content)"
    }
}

Implementing a trait on a type is similar to implementing regular methods. The difference is that after extension, we put the type name, then a colon, then the trait name we want to implement. Within the extension block, we put the method signatures that the trait definition has defined and fill in the method body with the specific behavior that we want the methods of the trait to have for the particular type.

Now that the library has implemented the Summary trait on NewsArticle and SocialPost, users of the crate can call the trait methods on instances of NewsArticle and SocialPost in the same way we call regular methods. The only difference is that the user must bring the trait into scope as well as the types. Here is an example of how a binary crate could use our aggregator library crate:

import aggregator.{ Summary, SocialPost }

fn main() {
    let post = SocialPost {
        username: "horse_ebooks".toString(),
        content: "of course, as you probably already know, people".toString(),
        reply: false,
        repost: false,
    }

    println!("1 new post: \(post.summarize())")
}

This code prints 1 new post: horse_ebooks: of course, as you probably already know, people.

Rust Equivalent

The Oxide trait implementation above translates to this Rust code:

impl Summary for NewsArticle {
    fn summarize(&self) -> String {
        format!("{}, by {} ({})", self.headline, self.author, self.location)
    }
}

impl Summary for SocialPost {
    fn summarize(&self) -> String {
        format!("{}: {}", self.username, self.content)
    }
}

Note that in Rust, the syntax is impl Trait for Type, while in Oxide it's extension Type: Trait. The Oxide syntax reads naturally as "extend Type with Trait capability."

Coherence and the Orphan Rule

One restriction to note is that we can implement a trait on a type only if either the trait or the type, or both, are local to our crate. For example, we can implement standard library traits like Display on a custom type like SocialPost as part of our aggregator crate functionality because the type SocialPost is local to our aggregator crate. We can also implement Summary on Vec<T> in our aggregator crate because the trait Summary is local to our aggregator crate.

But we can't implement external traits on external types. For example, we can't implement the Display trait on Vec<T> within our aggregator crate, because Display and Vec<T> are both defined in the standard library and aren't local to our aggregator crate. This restriction is part of a property called coherence, and more specifically the orphan rule, so named because the parent type is not present. This rule ensures that other people's code can't break your code and vice versa. Without the rule, two crates could implement the same trait for the same type, and the compiler wouldn't know which implementation to use.
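The allowed and forbidden cases can be sketched in Rust (Summary here is a minimal local stand-in for the book's trait):

```rust
// Allowed: `Summary` is local to this crate, even though `Vec<T>` is not.
trait Summary {
    fn summarize(&self) -> String;
}

impl<T> Summary for Vec<T> {
    fn summarize(&self) -> String {
        format!("a vec of {} items", self.len())
    }
}

// Forbidden by the orphan rule: both `Display` and `Vec<T>` are foreign,
// so this impl would not compile in our crate:
// impl<T> std::fmt::Display for Vec<T> { ... }

fn main() {
    let v = vec![1, 2, 3];
    println!("{}", v.summarize()); // a vec of 3 items
}
```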

Default Implementations

Sometimes it's useful to have default behavior for some or all of the methods in a trait instead of requiring implementations for all methods on every type. Then, as we implement the trait on a particular type, we can keep or override each method's default behavior.

Here we specify a default string for the summarize method of the Summary trait instead of only defining the method signature:

public trait Summary {
    fn summarize(): String {
        "(Read more...)".toString()
    }
}

To use a default implementation to summarize instances of NewsArticle, we specify an empty extension block with extension NewsArticle: Summary {}.

Even though we are no longer defining the summarize method on NewsArticle directly, we have provided a default implementation and specified that NewsArticle implements the Summary trait. As a result, we can still call the summarize method on an instance of NewsArticle, like this:

let article = NewsArticle {
    headline: "Penguins win the Stanley Cup Championship!".toString(),
    location: "Pittsburgh, PA, USA".toString(),
    author: "Iceburgh".toString(),
    content: "The Pittsburgh Penguins once again are the best \
              hockey team in the NHL.".toString(),
}

println!("New article available! \(article.summarize())")

This code prints New article available! (Read more...).

Creating a default implementation doesn't require us to change anything about the implementation of Summary on SocialPost. The reason is that the syntax for overriding a default implementation is the same as the syntax for implementing a trait method that doesn't have a default implementation.

Default implementations can call other methods in the same trait, even if those other methods don't have a default implementation. In this way, a trait can provide a lot of useful functionality and only require implementors to specify a small part of it. For example, we could define the Summary trait to have a summarizeAuthor method whose implementation is required, and then define a summarize method that has a default implementation that calls the summarizeAuthor method:

public trait Summary {
    fn summarizeAuthor(): String

    fn summarize(): String {
        "(Read more from \(self.summarizeAuthor())...)".toString()
    }
}

To use this version of Summary, we only need to define summarizeAuthor when we implement the trait on a type:

extension SocialPost: Summary {
    fn summarizeAuthor(): String {
        "@\(self.username)".toString()
    }
}

After we define summarizeAuthor, we can call summarize on instances of the SocialPost struct, and the default implementation of summarize will call the definition of summarizeAuthor that we have provided:

let post = SocialPost {
    username: "horse_ebooks".toString(),
    content: "of course, as you probably already know, people".toString(),
    reply: false,
    repost: false,
}

println!("1 new post: \(post.summarize())")

This code prints 1 new post: (Read more from @horse_ebooks...).

Note that it isn't possible to call the default implementation from an overriding implementation of that same method.

Traits as Parameters

Now that you know how to define and implement traits, we can explore how to use traits to define functions that accept many different types. We will use the Summary trait we implemented on the NewsArticle and SocialPost types to define a notify function that calls the summarize method on its item parameter, which is of some type that implements the Summary trait. In Oxide, we express this using a trait bound on a generic type:

public fn notify<T: Summary>(item: &T) {
    println!("Breaking news! \(item.summarize())")
}

This parameter accepts any type that implements the specified trait. In the body of notify, we can call any methods on item that come from the Summary trait, such as summarize. We can call notify and pass in any instance of NewsArticle or SocialPost. Code that calls the function with any other type, such as a String or an Int, won't compile because those types don't implement Summary.
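The Rust equivalent of notify uses the same trait-bound syntax; this sketch returns the message instead of printing it, so the result is easy to check:

```rust
trait Summary {
    fn summarize(&self) -> String;
}

struct SocialPost {
    username: String,
}

impl Summary for SocialPost {
    fn summarize(&self) -> String {
        format!("@{}", self.username)
    }
}

// The trait bound restricts T to types implementing Summary.
fn notify<T: Summary>(item: &T) -> String {
    format!("Breaking news! {}", item.summarize())
}

fn main() {
    let post = SocialPost { username: String::from("horse_ebooks") };
    println!("{}", notify(&post)); // prints "Breaking news! @horse_ebooks"
}
```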

Trait Bound Syntax

Trait bounds can also express more complex cases. For example, we can have two parameters that each implement Summary but are allowed to be different types:

public fn notify<T: Summary, U: Summary>(item1: &T, item2: &U) {
    // ...
}

If we want to force both parameters to have the same type, we use a single generic parameter:

public fn notify<T: Summary>(item1: &T, item2: &T) {
    // ...
}

The generic type T specified as the type of the item1 and item2 parameters constrains the function so that the values passed as arguments for item1 and item2 must have the same concrete type.

Specifying Multiple Trait Bounds with the + Syntax

We can also specify more than one trait bound. Say we wanted notify to use display formatting as well as summarize on item: We specify in the notify definition that item must implement both Display and Summary. We do so using the + syntax on the trait bound:

public fn notify<T: Summary + Display>(item: &T) {
    // ...
}

With the two trait bounds specified, the body of notify can call summarize and use {} to format item.
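In Rust syntax, a sketch of the same double bound: the body can both format item with {} (Display) and call summarize (Summary). The Display output here is made up for illustration:

```rust
use std::fmt;

trait Summary {
    fn summarize(&self) -> String;
}

struct SocialPost {
    username: String,
}

impl Summary for SocialPost {
    fn summarize(&self) -> String {
        format!("@{}", self.username)
    }
}

impl fmt::Display for SocialPost {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "post by @{}", self.username)
    }
}

// Both bounds: Display enables `{}` formatting, Summary enables summarize().
fn notify<T: Summary + fmt::Display>(item: &T) -> String {
    format!("{}: {}", item, item.summarize())
}

fn main() {
    let post = SocialPost { username: String::from("horse_ebooks") };
    println!("{}", notify(&post));
}
```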

Clearer Trait Bounds with where Clauses

Using too many trait bounds has its downsides. Each generic has its own trait bounds, so functions with multiple generic type parameters can contain lots of trait bound information between the function's name and its parameter list, making the function signature hard to read. For this reason, Oxide (like Rust) has alternate syntax for specifying trait bounds inside a where clause after the function signature. So, instead of writing this:

fn someFunction<T: Display + Clone, U: Clone + Debug>(t: &T, u: &U): Int {
    // ...
}

we can use a where clause, like this:

fn someFunction<T, U>(t: &T, u: &U): Int
where
    T: Display + Clone,
    U: Clone + Debug,
{
    // ...
}

This function's signature is less cluttered: The function name, parameter list, and return type are close together, similar to a function without lots of trait bounds.
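The Rust equivalent of the where-clause form looks almost identical; this sketch gives the function a checkable body (the original leaves it elided):

```rust
use std::fmt::{Debug, Display};

// Bounds moved into a where clause, keeping the signature readable.
fn some_function<T, U>(t: &T, u: &U) -> String
where
    T: Display + Clone,
    U: Clone + Debug,
{
    format!("{} {:?}", t.clone(), u.clone())
}

fn main() {
    println!("{}", some_function(&1, &vec![2, 3])); // prints "1 [2, 3]"
}
```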

Returning Types That Implement Traits

In Oxide, you can hide a concrete return type with impl Trait when the function always returns a single concrete type:

fn returnsSummarizable(): impl Summary {
    SocialPost {
        username: "horse_ebooks".toString(),
        content: "of course, as you probably already know, people".toString(),
        reply: false,
        repost: false,
    }
}

When you need to return different concrete types, use a trait object. A trait object is a value like Box<dyn Summary> that can hold any type implementing the trait.

fn returnsSummarizable(switch: Bool): Box<dyn Summary> {
    if switch {
        Box.new(NewsArticle {
            headline: "Penguins win the Stanley Cup Championship!".toString(),
            location: "Pittsburgh, PA, USA".toString(),
            author: "Iceburgh".toString(),
            content: "The Pittsburgh Penguins once again are the best \
                      hockey team in the NHL.".toString(),
        })
    } else {
        Box.new(SocialPost {
            username: "horse_ebooks".toString(),
            content: "of course, as you probably already know, people".toString(),
            reply: false,
            repost: false,
        })
    }
}

Because the return type is a trait object, the caller does not need to know the concrete type. This is especially useful for closures and iterators, which often have compiler-generated types that are difficult to name.

We will cover trait objects in detail in Chapter 18.
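For comparison, a Rust-syntax sketch of the same function, with the structs cut down to one field each:

```rust
trait Summary {
    fn summarize(&self) -> String;
}

struct NewsArticle {
    headline: String,
}

struct SocialPost {
    username: String,
}

impl Summary for NewsArticle {
    fn summarize(&self) -> String {
        self.headline.clone()
    }
}

impl Summary for SocialPost {
    fn summarize(&self) -> String {
        format!("@{}", self.username)
    }
}

// One trait-object return type stands in for two concrete types.
fn returns_summarizable(switch: bool) -> Box<dyn Summary> {
    if switch {
        Box::new(NewsArticle { headline: String::from("Penguins win!") })
    } else {
        Box::new(SocialPost { username: String::from("horse_ebooks") })
    }
}

fn main() {
    println!("{}", returns_summarizable(true).summarize());
    println!("{}", returns_summarizable(false).summarize());
}
```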

Using Trait Bounds to Conditionally Implement Methods

By using a trait bound with an extension block that uses generic type parameters, we can implement methods conditionally for types that implement the specified traits. For example, the type Pair<T> always implements the new function to return a new instance of Pair<T>. But Pair<T> only implements the cmpDisplay method if its inner type T implements the PartialOrd trait that enables comparison and the Display trait that enables printing:

struct Pair<T> {
    x: T,
    y: T,
}

extension Pair<T> {
    static fn new(x: T, y: T): Self {
        Self { x, y }
    }
}

extension Pair<T>
where
    T: Display + PartialOrd,
{
    fn cmpDisplay() {
        if self.x >= self.y {
            println!("The largest member is x = \(self.x)")
        } else {
            println!("The largest member is y = \(self.y)")
        }
    }
}

We can also conditionally implement a trait for any type that implements another trait. Implementations of a trait on any type that satisfies the trait bounds are called blanket implementations and are used extensively in the standard library. For example, the standard library implements the ToString trait on any type that implements the Display trait. The extension block in the standard library looks similar to this code:

extension<T: Display> T: ToString {
    // --snip--
}

Because the standard library has this blanket implementation, we can call the toString method defined by the ToString trait on any type that implements the Display trait. For example, we can turn integers into their corresponding String values like this because integers implement Display:

let s = 3.toString()

Blanket implementations appear in the documentation for the trait in the "Implementors" section.
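You can write a blanket implementation for your own traits, too. Here is a Rust-syntax sketch using a made-up Labeled trait: every type that implements Display gets Labeled for free.

```rust
use std::fmt::Display;

// Labeled is a hypothetical trait, invented for this illustration.
trait Labeled {
    fn label(&self) -> String;
}

// Blanket implementation: applies to any T that implements Display.
impl<T: Display> Labeled for T {
    fn label(&self) -> String {
        format!("[{}]", self)
    }
}

fn main() {
    println!("{}", 3.label());    // prints "[3]"
    println!("{}", "hi".label()); // prints "[hi]"
}
```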

Traits and trait bounds let us write code that uses generic type parameters to reduce duplication but also specify to the compiler that we want the generic type to have particular behavior. The compiler can then use the trait bound information to check that all the concrete types used with our code provide the correct behavior. In dynamically typed languages, we would get an error at runtime if we called a method on a type that didn't define the method. But Oxide moves these errors to compile time so that we are forced to fix the problems before our code is even able to run. Additionally, we don't have to write code that checks for behavior at runtime because we have already checked at compile time. Doing so improves performance without having to give up the flexibility of generics.

Lifetimes: Validating References with Lifetimes

One detail we didn't discuss in Chapter 4 is that every reference in Oxide has a lifetime, which is the scope for which that reference is valid. Most of the time, lifetimes are implicit and inferred, just like most of the time, types are inferred. But just as we sometimes must annotate types when multiple types are possible, we must annotate lifetimes when the lifetimes of references could be related in a few different ways.

Oxide requires us to annotate the relationships using generic lifetime parameters to ensure the actual references used at runtime will definitely be valid.

Preventing Dangling References with Lifetimes

The main aim of lifetimes is to prevent dangling references, which cause a program to reference data other than the data it's intended to reference. Consider this pseudocode, where we explicitly show the lifetime of a reference:

{
    let r: &Int // lifetime must satisfy: 'a

    {
        let x = 5
        r = &x // x has lifetime 'b
    } // x is dropped here, ending 'b

    println!("{}", r) // r still tries to use x here, but x no longer exists!
}

In this example, r tries to reference an Int (x) that goes out of scope before we try to use the reference. The variable x doesn't live long enough. The reason is that x will be dropped when the inner scope ends, but r will still be referring to that location in memory.

Oxide's borrow checker prevents this problem from compiling. Let's look at how lifetimes help us write valid code.

The Borrow Checker

Oxide's compiler has a borrow checker that compares scopes to determine whether all borrows are valid. Here's the logic:

  1. Each reference has a lifetime that corresponds to the scope of the code it's being used in
  2. You cannot borrow a value for a lifetime that outlives the value
  3. The compiler will reject your code if it violates these rules

Let's look at an example where the borrow checker catches a dangling reference:

fn main() {
    let r: &Int

    {
        let x = 5
        r = &x
    }

    println!("r: {}", r) // error: `x` does not live long enough
}

When we compile this code, Oxide will give us an error:

error[E0597]: `x` does not live long enough
 --> src/main.ox:6:13
  |
6 |         r = &x
  |             ^^ borrowed value does not live long enough
7 |     }
  |     - `x` dropped here while still borrowed
8 |
9 |     println!("r: {}", r)
  |                       - borrow later used here

The variable r has a reference to x, but x goes out of scope right away. So r will be referencing memory that no longer contains valid data. This is a classic memory safety issue that Oxide prevents at compile time.

Lifetime Annotations in Function Signatures

In most cases, lifetimes are implicit. However, when a function returns a reference, we need to specify which input parameter's lifetime the returned reference is tied to. We do this with lifetime annotations.

Lifetime annotations use the syntax 'a (pronounced "lifetime a"). Let's look at an example:

fn longest<'a>(x: &'a String, y: &'a String): &'a String {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

This function compares two string slices and returns the longer one. The lifetime annotation 'a tells the compiler:

  • The function takes two parameters that are references with lifetime 'a
  • The function returns a reference with the same lifetime 'a
  • The returned reference will be valid as long as both input references are valid

Let's examine what happens without the lifetime annotations:

fn longest(x: &String, y: &String): &String {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

The compiler can't tell which input parameter the return reference relates to. It could be x or y. So we get an error:

error[E0106]: missing lifetime specifier
 --> src/main.ox:1:37
  |
1 | fn longest(x: &String, y: &String): &String {
  |               -------     -------   ^ expected lifetime parameter
  |
  = help: this function's return type contains a borrowed value, but the signature does not say whether it is borrowed from `x` or `y`

Let's use our annotated version correctly:

fn longest<'a>(x: &'a String, y: &'a String): &'a String {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

fn main() {
    let string1 = "abracadabra".toString()
    let string2 = "xyz".toString()
    let result = longest(&string1, &string2)
    println!("The longest string is '{}'", result)
}

This code compiles because:

  • Both string1 and string2 are in the main function's scope
  • We return a reference to one of them
  • That reference will be valid as long as both input references are valid
  • When we print result, both string1 and string2 still exist
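For comparison, the Rust equivalent of this working example (using string slices, as Rust idiom prefers):

```rust
// 'a ties the returned reference to the shorter of the two input lifetimes.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

fn main() {
    let string1 = String::from("abracadabra");
    let string2 = String::from("xyz");
    // Both inputs live through this call, so the result is valid here.
    let result = longest(&string1, &string2);
    println!("The longest string is '{}'", result);
}
```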

But this wouldn't work:

fn longest<'a>(x: &'a String, y: &'a String): &'a String {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

fn main() {
    let string1 = "long string".toString()

    let result: &String
    {
        let string2 = "xyz".toString()
        result = longest(&string1, &string2)
    }

    println!("The longest string is '{}'", result)
}

The problem is:

  • string2 goes out of scope at the end of the inner block
  • But result might reference string2
  • The lifetime annotation says the return reference must be valid as long as both inputs
  • Since string2 becomes invalid before we use result, this is an error

Lifetime Annotations in Struct Definitions

We can also use lifetime annotations in struct definitions when a struct holds references:

struct ImportantExcerpt<'a> {
    part: &'a String,
}

fn main() {
    let novel = "Call me Ishmael. Some years ago...".toString()
    let firstSentence = novel.split('.').next() ?? ""
    let excerpt = ImportantExcerpt { part: &firstSentence }
}

The lifetime annotation 'a means that the ImportantExcerpt struct can hold a reference to a String, and that reference must live at least as long as the ImportantExcerpt instance.
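The Rust equivalent of this struct uses a borrowed string slice for the annotated field:

```rust
struct ImportantExcerpt<'a> {
    part: &'a str,
}

fn main() {
    let novel = String::from("Call me Ishmael. Some years ago...");
    // `first_sentence` borrows from `novel`, so the excerpt cannot outlive it.
    let first_sentence = novel.split('.').next().unwrap_or("");
    let excerpt = ImportantExcerpt { part: first_sentence };
    println!("{}", excerpt.part);
}
```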

This struct can't outlive the reference it holds:

struct ImportantExcerpt<'a> {
    part: &'a String,
}

fn main() {
    let excerpt: ImportantExcerpt

    {
        let novel = "Call me Ishmael. Some years ago...".toString()
        excerpt = ImportantExcerpt { part: &novel }
    }

    println!("{}", excerpt.part) // error: `novel` does not live long enough
}

The Three Lifetime Rules

The borrow checker uses three rules to determine whether borrows are valid.

Rule 1: Each reference has its own lifetime

Each time you have a reference, it has an associated lifetime. For example:

fn foo<'a>(x: &'a Int) { }
fn bar<'a, 'b>(x: &'a Int, y: &'b Int) { }

Each parameter gets its own lifetime variable.

Rule 2: Return lifetimes must relate to input lifetimes

If a function takes references as parameters and returns a reference, the return type's lifetime must relate to the lifetimes of the input parameters:

fn foo<'a>(x: &'a Int, y: &Int): &'a Int { }

Here, the return value's lifetime is tied to x's lifetime, not y's. This means the returned reference is valid as long as x is valid.

Rule 3: Methods use &self's lifetime

For methods (in extension blocks), the returned reference's lifetime is implicitly tied to &self:

extension ImportantExcerpt<'a> {
    fn announceAndReturnPart(): &String {
        println!("Attention please: {}", self.part)
        self.part
    }
}

This is equivalent to writing the output lifetime explicitly:

extension ImportantExcerpt<'a> {
    fn announceAndReturnPart(): &'a String {
        println!("Attention please: {}", self.part)
        self.part
    }
}

Lifetime Elision

Because the rules are common, Oxide (like Rust) allows you to omit lifetime annotations in many cases. This is called lifetime elision. The compiler will infer the lifetimes for you automatically in these situations:

Elision works for input references

If there's only one input reference, its lifetime is automatically used for all outputs:

// You can write this...
fn firstWord(s: &String): &String {
    let bytes = s.asBytes()
    for (i, &item) in bytes.iter().enumerate() {
        if item == b' ' {
            return &s[0..i]
        }
    }
    &s[..]
}

// Without specifying: fn firstWord<'a>(s: &'a String): &'a String { }
// The compiler figures it out!
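The Rust equivalent relies on the same elision rule, since there is exactly one input reference:

```rust
// Elided: the single input lifetime is used for the output.
// Explicit form would be: fn first_word<'a>(s: &'a str) -> &'a str
fn first_word(s: &str) -> &str {
    let bytes = s.as_bytes();
    for (i, &item) in bytes.iter().enumerate() {
        if item == b' ' {
            return &s[0..i];
        }
    }
    &s[..]
}

fn main() {
    println!("{}", first_word("hello world")); // prints "hello"
}
```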

Elision works for methods

Methods always have an implicit &self lifetime:

struct Excerpt {
    text: String,
}

extension Excerpt {
    fn returnText(): &String {
        &self.text
    }
}

// Implicit: fn returnText<'a>(&'a self): &'a String { }

Combining Lifetimes with Generics and Traits

You can combine lifetimes with generic type parameters and trait bounds:

fn longest<'a, T: PartialOrd + Display>(x: &'a T, y: &'a T): &'a T {
    if x > y {
        println!("x is larger: {}", x)
        x
    } else {
        println!("y is larger: {}", y)
        y
    }
}

This function takes two generic parameters with the same lifetime, compares them, and returns a reference with that lifetime.
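A Rust-syntax sketch of the same combination, simplified to return the larger value without printing:

```rust
use std::fmt::Display;

// A lifetime parameter and a trait-bounded type parameter in one signature.
fn longest_value<'a, T: PartialOrd + Display>(x: &'a T, y: &'a T) -> &'a T {
    if x > y {
        x
    } else {
        y
    }
}

fn main() {
    let a = 3;
    let b = 7;
    println!("larger: {}", longest_value(&a, &b));
}
```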

Here's a more complex example with trait bounds and lifetimes:

struct Pair<'a, T> {
    items: &'a [T],
}

extension<'a, T: Clone> Pair<'a, T> {
    fn first(): T? {
        self.items.first().cloned()
    }
}

Static Lifetime

The 'static lifetime is a special lifetime that means the reference is valid for the entire duration of the program. All string literals have the 'static lifetime:

let s: &'static String = "Hello, world!"

String literals are baked directly into the program's binary, so they're always available.

You'll often see 'static bounds on types when you want to ensure they own their data:

fn printIt<T: std.fmt.Display + 'static>(t: T) {
    println!("{}", t)
}

This says T must implement Display and must not hold any references with a lifetime shorter than 'static, which in practice usually means it owns its data.

Advanced Lifetime Patterns

Higher-Ranked Trait Bounds

Sometimes you need to specify that a function accepts references with any lifetime. This uses for<'a> syntax:

fn acceptAnyLifetime<F>(f: F)
where
    F: for<'a> Fn(&'a Int) -> &'a Int,
{
    // f works with references of any lifetime
}

Lifetime Subtyping

A longer lifetime is a subtype of a shorter lifetime:

fn demo<'a, 'b>(x: &'a Int): &'b Int
where
    'a: 'b, // 'a outlives 'b
{
    x
}

The constraint 'a: 'b means 'a outlives 'b, so we can treat a &'a Int as a &'b Int.
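The Rust equivalent of demo compiles for the same reason, and we can exercise it directly:

```rust
// 'a: 'b reads "'a outlives 'b": a longer-lived reference can be used
// anywhere a shorter-lived one is expected.
fn demo<'a, 'b>(x: &'a i32) -> &'b i32
where
    'a: 'b,
{
    x
}

fn main() {
    let value = 42;
    println!("{}", demo(&value));
}
```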

Common Lifetime Mistakes

Mistake 1: Returning a reference to a local variable

fn badFunction(): &String {
    let s = "hello".toString()
    &s // Error: `s` does not live long enough
}

Solution: Return owned data instead:

fn goodFunction(): String {
    let s = "hello".toString()
    s
}

Mistake 2: Mismatching input and output lifetimes

fn problem<'a, 'b>(x: &'a String): &'b String {
    x // Error: lifetime mismatch
}

Solution: Make sure the output lifetime matches an input:

fn fixed<'a>(x: &'a String): &'a String {
    x
}

Summary

Lifetimes are Oxide's way of ensuring that references are always valid. While the syntax can seem intimidating at first, the rules are logical:

  1. Every reference has a lifetime
  2. Function signatures must make the relationship between input and output lifetimes explicit
  3. The compiler will reject code where references might outlive the data they point to

In practice, you'll rarely need to write lifetime annotations because:

  • Single input references automatically determine output lifetime
  • Methods automatically use &self's lifetime
  • The compiler will tell you exactly where you need them

With lifetimes, Oxide ensures memory safety without garbage collection or runtime overhead. This is one of the most powerful features of the language!

Comparison with Rust

Oxide's lifetime syntax is identical to Rust's because lifetimes are a fundamental part of the compiler's borrow checking algorithm. The only difference is in how method syntax works, but the lifetime rules remain exactly the same.

Both Oxide and Rust use the same three rules for lifetime elision and the same mechanisms for explicit annotation. If you understand lifetimes in Oxide, you understand them in Rust as well!

Writing Automated Tests

Testing in Oxide follows Rust's model: you write test functions annotated with #[test] and run them with cargo test. Assertions are provided by the standard library.

What You'll Learn

  • How to write and run tests
  • How to organize tests within a crate
  • How to use common assertions

A Small Example

fn addTwo(value: Int): Int {
    value + 2
}

#[test]
fn addsTwo() {
    let result = addTwo(40)
    assertEq!(result, 42)
}

The next sections dig into test organization, runtime options, and best practices for building reliable code.

How to Write Tests

Rust includes first-class support for writing automated tests, and Oxide inherits this capability with the same syntax. Tests are Oxide functions annotated with the #[test] attribute that verify your code behaves as expected.

The Anatomy of a Test Function

A test in Oxide is a function annotated with #[test]. When you run cargo test, Cargo builds a test runner binary that runs all functions marked with this attribute and reports whether each test passes or fails.

Let's create a new library project to explore testing:

cargo new adder --lib
cd adder

Cargo automatically generates a test module in src/lib.ox:

public fn add(left: UIntSize, right: UIntSize): UIntSize {
    left + right
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn itWorks() {
        let result = add(2, 2)
        assertEq!(result, 4)
    }
}

Let's examine this code:

  • The #[test] attribute marks itWorks as a test function
  • The #[cfg(test)] attribute tells Oxide to compile this module only when running tests
  • Inside tests, we import everything from the parent module with import super.*
  • The assertEq! macro checks that two values are equal

Run the tests with:

cargo test

Output:

running 1 test
test tests.itWorks ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

Note that even though we named our function itWorks in camelCase, Oxide will map it to snake_case when crossing into Rust. This is Oxide's automatic case conversion at work.

Adding More Tests

Let's add more tests to understand how test failures work:

public fn add(left: UIntSize, right: UIntSize): UIntSize {
    left + right
}

public fn multiply(a: Int, b: Int): Int {
    a * b
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn itWorks() {
        let result = add(2, 2)
        assertEq!(result, 4)
    }

    #[test]
    fn multiplicationWorks() {
        let result = multiply(3, 4)
        assertEq!(result, 12)
    }

    #[test]
    fn anotherTest() {
        let result = add(10, 5)
        assertEq!(result, 15)
    }
}

Running cargo test:

running 3 tests
test tests.anotherTest ... ok
test tests.itWorks ... ok
test tests.multiplicationWorks ... ok

test result: ok. 3 passed; 0 failed; 0 ignored

What Happens When Tests Fail

Tests fail when the test function panics. Each test runs in its own thread, and when the main thread sees that a test thread has died, the test is marked as failed.

Let's add a failing test:

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn thisFails() {
        let expected = 5
        let actual = 2 + 2
        assertEq!(expected, actual, "Math is broken!")
    }
}

Output:

running 1 test
test tests.thisFails ... FAILED

failures:

---- tests.thisFails stdout ----
thread 'tests.thisFails' panicked at src/lib.ox:15:9:
assertion `left == right` failed: Math is broken!
  left: 5
 right: 4

failures:
    tests.thisFails

test result: FAILED. 0 passed; 1 failed; 0 ignored

The output shows exactly where the assertion failed and what values were compared.

Checking Results with assert! Macros

The standard library provides several assertion macros for testing.

assert! Macro

The assert! macro checks that a condition is true. If false, it panics:

#[derive(Debug)]
struct Rectangle {
    width: Int,
    height: Int,
}

extension Rectangle {
    fn canHold(other: &Rectangle): Bool {
        self.width > other.width && self.height > other.height
    }
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn largerCanHoldSmaller() {
        let larger = Rectangle { width: 8, height: 7 }
        let smaller = Rectangle { width: 5, height: 1 }

        assert!(larger.canHold(&smaller))
    }

    #[test]
    fn smallerCannotHoldLarger() {
        let larger = Rectangle { width: 8, height: 7 }
        let smaller = Rectangle { width: 5, height: 1 }

        assert!(!smaller.canHold(&larger))
    }
}

assertEq! and assertNe! Macros

These macros compare two values for equality or inequality:

public fn addTwo(a: Int): Int {
    a + 2
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn itAddsTwoEqual() {
        assertEq!(4, addTwo(2))
    }

    #[test]
    fn itAddsTwoNotEqual() {
        assertNe!(5, addTwo(2))
    }
}

When these assertions fail, they print both values, making it easy to see what went wrong:

assertion `left == right` failed
  left: 4
 right: 5

Important: Values compared with assertEq! and assertNe! must implement the PartialEq and Debug traits. For custom types, derive these traits:

#[derive(Debug, PartialEq)]
struct Point {
    x: Int,
    y: Int,
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn pointsAreEqual() {
        let p1 = Point { x: 3, y: 4 }
        let p2 = Point { x: 3, y: 4 }
        assertEq!(p1, p2)
    }
}

Adding Custom Failure Messages

You can add custom messages to assertion macros:

public fn greeting(name: &str): String {
    format!("Hello, \(name)!")
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn greetingContainsName() {
        let result = greeting("Carol")
        assert!(
            result.contains("Carol"),
            "Greeting did not contain name, value was `\(result)`"
        )
    }
}

If the test fails, the custom message appears:

thread 'tests.greetingContainsName' panicked at src/lib.ox:14:9:
Greeting did not contain name, value was `Hello, Carol!`

Testing for Panics with #[shouldPanic]

Sometimes you want to verify that code panics under certain conditions. Use the #[shouldPanic] attribute:

public struct Guess {
    value: Int,
}

extension Guess {
    public static fn new(value: Int): Guess {
        if value < 1 || value > 100 {
            panic!("Guess value must be between 1 and 100, got \(value)")
        }
        Guess { value }
    }
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    #[shouldPanic]
    fn greaterThan100() {
        Guess.new(200)
    }
}

This test passes because the code panics as expected.

Expected Panic Messages

To ensure tests don't pass for the wrong reason, add an expected parameter:

extension Guess {
    public static fn new(value: Int): Guess {
        if value < 1 {
            panic!("Guess value must be greater than or equal to 1, got \(value)")
        } else if value > 100 {
            panic!("Guess value must be less than or equal to 100, got \(value)")
        }
        Guess { value }
    }
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    #[shouldPanic(expected = "less than or equal to 100")]
    fn greaterThan100() {
        Guess.new(200)
    }
}

The test passes only if the panic message contains the expected substring.
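Roughly speaking, the harness catches the panic and checks the expected substring against the panic payload. That mechanism can be sketched in Rust with std::panic::catch_unwind; guess_new here is a hypothetical free-function stand-in for Guess.new:

```rust
use std::panic;

// Hypothetical stand-in for Guess.new from the example above.
fn guess_new(value: i32) -> i32 {
    if value < 1 {
        panic!("Guess value must be greater than or equal to 1, got {}", value);
    } else if value > 100 {
        panic!("Guess value must be less than or equal to 100, got {}", value);
    }
    value
}

fn main() {
    // Silence the default panic printout so only our result is shown.
    panic::set_hook(Box::new(|_| {}));

    let result = panic::catch_unwind(|| guess_new(200));
    let payload = result.unwrap_err();

    // panic! with a format string yields a String payload.
    let message = payload.downcast_ref::<String>().expect("expected a String payload");
    assert!(message.contains("less than or equal to 100"));
    println!("panic message matched the expected substring");
}
```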

Using Result<T, E> in Tests

Tests can also return Result<T, E>, which lets you use the ? operator:

#[cfg(test)]
module tests {
    #[test]
    fn itWorks(): Result<(), String> {
        if 2 + 2 == 4 {
            Ok(())
        } else {
            Err("two plus two does not equal four".toString())
        }
    }
}

With Result, you can write more ergonomic tests:

#[cfg(test)]
module tests {
    import std.num.ParseIntError

    #[test]
    fn parseAndAdd(): Result<(), ParseIntError> {
        let a = "10".parse<Int>()?
        let b = "20".parse<Int>()?
        assertEq!(a + b, 30)
        Ok(())
    }
}

Note: You cannot use #[shouldPanic] on tests that return Result<T, E>. To assert that an operation returns an Err, use assert!(result.isErr()) instead.
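A Rust-syntax sketch of that workaround, asserting on an Err result instead of expecting a panic:

```rust
fn main() {
    // parse::<i32> returns a Result; bad input yields Err.
    let result = "not a number".parse::<i32>();
    assert!(result.is_err());

    // Good input yields Ok; in a Result-returning test, ? would propagate the Err.
    let sum = "10".parse::<i32>().unwrap() + "20".parse::<i32>().unwrap();
    assert_eq!(sum, 30);

    println!("all assertions passed");
}
```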

Testing Optional Values

Oxide's nullable types integrate naturally with tests:

fn findItem(items: &Vec<Int>, target: Int): Int? {
    for item in items.iter() {
        if *item == target {
            return Some(*item)
        }
    }
    null
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn findsExistingItem() {
        let items = vec![1, 2, 3, 4, 5]
        let result = findItem(&items, 3)

        assert!(result.isSome())
        assertEq!(result!!, 3)
    }

    #[test]
    fn returnsNullForMissing() {
        let items = vec![1, 2, 3]
        let result = findItem(&items, 10)

        assert!(result.isNone())
    }

    #[test]
    fn usesNullCoalescing() {
        let items = vec![1, 2, 3]
        let result = findItem(&items, 10) ?? -1

        assertEq!(result, -1)
    }
}

Summary

Oxide tests use the same testing infrastructure as Rust:

  • Mark test functions with #[test]
  • Use #[cfg(test)] to compile test modules only during testing
  • Use assert!, assertEq!, and assertNe! for assertions
  • Add custom messages to explain failures
  • Use #[shouldPanic] to test for expected panics
  • Return Result<T, E> to use the ? operator in tests
  • Derive Debug and PartialEq for types you want to compare

The next section covers how to control test execution.

Controlling How Tests Are Run

Just as cargo run compiles and runs your code, cargo test compiles your code in test mode and runs the resulting test binary. The default behavior is to run all tests in parallel and capture output. You can customize this behavior with command line options.

Command Line Options

Some options go to cargo test, and some go to the test binary. To separate them, use --:

cargo test --help        # Options for cargo test
cargo test -- --help     # Options for the test binary

Running Tests in Parallel or Consecutively

By default, tests run in parallel using threads. This makes tests complete faster but means they should not depend on each other or share mutable state.

If your tests depend on shared state, run them consecutively:

cargo test -- --test-threads=1

This sets the number of test threads to 1, running tests one at a time.

Showing Function Output

By default, successful tests capture any println! output. If a test passes, you won't see its printed output. Only failing tests show their output.

To see output from passing tests:

cargo test -- --show-output

Example:

fn printsAndReturnsValue(a: Int): Int {
    println!("I got the value \(a)")
    a
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn thisPasses() {
        let value = printsAndReturnsValue(4)
        assertEq!(value, 4)
    }

    #[test]
    fn thisFails() {
        let value = printsAndReturnsValue(8)
        assertEq!(value, 5)
    }
}

Running cargo test:

running 2 tests
test tests.thisPasses ... ok
test tests.thisFails ... FAILED

failures:

---- tests.thisFails stdout ----
I got the value 8
thread 'tests.thisFails' panicked at src/lib.ox:17:9:
assertion `left == right` failed
  left: 8
 right: 5

Notice only the failing test shows its println! output.

Running cargo test -- --show-output:

running 2 tests
test tests.thisFails ... FAILED
test tests.thisPasses ... ok

successes:

---- tests.thisPasses stdout ----
I got the value 4


successes:
    tests.thisPasses

failures:

---- tests.thisFails stdout ----
I got the value 8
thread 'tests.thisFails' panicked at src/lib.ox:17:9:
assertion `left == right` failed
  left: 8
 right: 5

Now both tests show their output.

Running a Subset of Tests by Name

Running the full test suite takes time. You can run specific tests by name.

Running Single Tests

Pass the test name to cargo test:

cargo test thisPasses

Output:

running 1 test
test tests.thisPasses ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out

Note: You can use either the Oxide camelCase name (thisPasses) or the Rust snake_case name (this_passes).

Filtering to Run Multiple Tests

Specify part of a test name to run all matching tests:

public fn addTwo(a: Int): Int {
    a + 2
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn addTwoAndTwo() {
        assertEq!(4, addTwo(2))
    }

    #[test]
    fn addThreeAndTwo() {
        assertEq!(5, addTwo(3))
    }

    #[test]
    fn oneHundred() {
        assertEq!(102, addTwo(100))
    }
}

Running all tests:

cargo test
running 3 tests
test tests.addThreeAndTwo ... ok
test tests.addTwoAndTwo ... ok
test tests.oneHundred ... ok

Running only the "add" tests:

cargo test add
running 2 tests
test tests.addThreeAndTwo ... ok
test tests.addTwoAndTwo ... ok

Running Tests by Module Name

You can also filter by module name:

cargo test tests::

This runs all tests in the tests module.

Ignoring Tests Unless Specifically Requested

Some tests are expensive and you want to skip them in normal runs. Use the #[ignore] attribute:

#[cfg(test)]
module tests {
    #[test]
    fn itWorks() {
        assertEq!(2 + 2, 4)
    }

    #[test]
    #[ignore]
    fn expensiveTest() {
        // This test takes a long time to run
        // Code that takes an hour to run...
    }
}

Running cargo test:

running 2 tests
test tests.expensiveTest ... ignored
test tests.itWorks ... ok

test result: ok. 1 passed; 0 failed; 1 ignored

To run only the ignored tests:

cargo test -- --ignored

To run all tests including ignored ones:

cargo test -- --include-ignored
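
Rust's test harness also accepts a reason string on the attribute, and that reason appears next to ignored in the listing; assuming Oxide inherits this form (the test below is illustrative):

#[test]
#[ignore = "requires a live database"]
fn databaseRoundTrip() {
    // Run explicitly with: cargo test -- --ignored
}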

Running Tests by Type

Cargo can run different types of tests:

cargo test --lib          # Run library tests only
cargo test --doc          # Run documentation tests only
cargo test --bins         # Run binary tests only
cargo test --tests        # Run integration tests only

Running a Specific Test File

For integration tests, run tests from a specific file:

cargo test --test integration_test

This runs tests only from tests/integration_test.rs (or tests/integration_test.ox).

Useful Test Options

Here are commonly used test options:

Option                Description
--test-threads=N      Number of parallel test threads
--show-output         Show captured output from passing tests
--ignored             Run only ignored tests
--include-ignored     Run all tests, including ignored ones
--nocapture           Don't capture output; stream it as tests run
--exact               Match test names exactly
--skip PATTERN        Skip tests matching the pattern

Note that --test NAME, which runs a specific test binary, is a Cargo option and goes before the -- separator; the options above go after it.

Exact Matching

By default, test name matching is a substring match. For exact matching:

cargo test thisPasses -- --exact

Skipping Tests

Skip tests matching a pattern:

cargo test -- --skip expensive
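
Harness options after the -- separator can be combined. For example, to run the matching tests one at a time while skipping the slow ones:

cargo test add -- --test-threads=1 --skip expensive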

Summary

  • Use cargo test to run all tests
  • Add -- to pass options to the test binary
  • Control parallelism with --test-threads
  • See output with --show-output
  • Filter tests by name with cargo test <pattern>
  • Use #[ignore] for expensive tests
  • Run ignored tests with --ignored
  • Use --exact for exact name matching
  • Skip patterns with --skip

The next section covers how to organize your tests into unit tests and integration tests.

Test Organization

The Oxide and Rust communities think about tests in terms of two main categories: unit tests and integration tests. Unit tests are small and focused; they test one module in isolation and can exercise private interfaces. Integration tests are external to your library and use your code the same way external code would, through the public interface only.

Unit Tests

The purpose of unit tests is to test each unit of code in isolation so you can quickly pinpoint where code is or isn't working as expected. Unit tests live in the src directory, in the same file as the code they test.

The Tests Module and #[cfg(test)]

The #[cfg(test)] annotation tells Oxide to compile and run the test code only when you run cargo test, not when you run cargo build. This saves compile time and reduces the binary size.

public fn add(left: UIntSize, right: UIntSize): UIntSize {
    left + right
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn itWorks() {
        let result = add(2, 2)
        assertEq!(result, 4)
    }
}

The #[cfg(test)] attribute on the tests module means:

  • The module is only compiled during cargo test
  • All code inside is test-only, including helper functions
  • The module is stripped from release builds
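
Because the whole module is compiled only for tests, shared helper functions can live inside it without bloating release builds. A small sketch (the helper is illustrative):

#[cfg(test)]
module tests {
    import super.*

    // Test-only helper; never compiled into release builds
    fn sampleValues(): Vec<Int> {
        vec![1, 2, 3]
    }

    #[test]
    fn helperProvidesThreeValues() {
        assertEq!(sampleValues().len(), 3)
    }
}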

Testing Private Functions

Unlike some languages, Oxide allows you to test private functions directly. Since tests are in the same file as the code, they have access to private items:

fn internalAdd(a: Int, b: Int): Int {
    a + b
}

public fn publicAdd(a: Int, b: Int): Int {
    internalAdd(a, b)
}

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn testInternalAdd() {
        // We can test the private function directly
        assertEq!(internalAdd(2, 3), 5)
    }

    #[test]
    fn testPublicAdd() {
        assertEq!(publicAdd(2, 3), 5)
    }
}

Whether you choose to test private functions is a matter of opinion. Oxide makes it possible, but you're not required to do so.

Organizing Unit Tests

For larger modules, you might organize tests into submodules:

public struct Calculator {
    value: Int,
}

extension Calculator {
    public static fn new(): Calculator {
        Calculator { value: 0 }
    }

    public fn add(amount: Int): Int {
        self.value + amount
    }

    public fn multiply(factor: Int): Int {
        self.value * factor
    }
}

#[cfg(test)]
module tests {
    import super.*

    module additionTests {
        import super.*

        #[test]
        fn addsPositiveNumbers() {
            let calc = Calculator { value: 5 }
            assertEq!(calc.add(3), 8)
        }

        #[test]
        fn addsNegativeNumbers() {
            let calc = Calculator { value: 5 }
            assertEq!(calc.add(-3), 2)
        }
    }

    module multiplicationTests {
        import super.*

        #[test]
        fn multipliesPositiveNumbers() {
            let calc = Calculator { value: 5 }
            assertEq!(calc.multiply(3), 15)
        }

        #[test]
        fn multipliesByZero() {
            let calc = Calculator { value: 5 }
            assertEq!(calc.multiply(0), 0)
        }
    }
}

Integration Tests

Integration tests are entirely external to your library. They use your library in the same way any other code would, which means they can only call public functions. Their purpose is to test that multiple parts of your library work together correctly.

The tests Directory

To create integration tests, create a tests directory at the top level of your project, next to src:

adder/
  Cargo.toml
  src/
    lib.ox
  tests/
    integration_test.ox

Let's create tests/integration_test.ox:

import adder

#[test]
fn itAddsTwo() {
    assertEq!(4, adder.add(2, 2))
}

Key differences from unit tests:

  1. No #[cfg(test)] needed - Cargo knows the tests directory contains tests
  2. External perspective - We import our crate with import adder
  3. Public API only - We can only use public functions and types

Run integration tests with:

cargo test --test integration_test

Or run all tests:

cargo test

Output:

running 1 test
test tests.itWorks ... ok

     Running tests/integration_test.ox (target/debug/deps/integration_test-...)

running 1 test
test itAddsTwo ... ok

test result: ok. 1 passed; 0 failed

Each file in tests is compiled as a separate crate.

Submodules in Integration Tests

As you add more integration tests, you might want to organize them. Each file in tests compiles as its own crate, so they don't share behavior like modules in src.

For shared helper code, create a subdirectory with a mod.ox file:

tests/
  common/
    mod.ox
  integration_test.ox

In tests/common/mod.ox:

public fn setup(): String {
    // Setup code that might be needed by multiple tests
    "test_database".toString()
}

public struct TestConfig {
    public name: String,
    public debug: Bool,
}

extension TestConfig {
    public static fn default(): TestConfig {
        TestConfig {
            name: "test".toString(),
            debug: true,
        }
    }
}

In tests/integration_test.ox:

external module common

import adder
import crate.common.{ setup, TestConfig }

#[test]
fn itAddsTwo() {
    let _config = TestConfig.default()
    let _db = setup()
    assertEq!(4, adder.add(2, 2))
}

#[test]
fn itAddsLargeNumbers() {
    assertEq!(1000000002, adder.add(1000000000, 2))
}

Files in subdirectories of tests don't get compiled as separate test crates. The common/mod.ox pattern prevents Cargo from treating common as a test file while allowing other tests to import it.

Integration Tests for Binary Crates

If your project only contains a src/main.ox and no src/lib.ox, you can't create integration tests in the tests directory and import functions with import cratename.

This is one reason Oxide projects with a binary have a straightforward src/main.ox that calls logic in src/lib.ox. The library can be tested with integration tests while the main file remains minimal.

Multiple Integration Test Files

For larger projects, organize integration tests by feature:

tests/
  common/
    mod.ox
  api_tests.ox
  database_tests.ox
  user_tests.ox

Each file runs as its own test suite. Run a specific one:

cargo test --test api_tests

Test Organization Best Practices

Unit Test Guidelines

  1. Keep tests close to code - Tests in the same file as the code they test
  2. Test one thing - Each test should verify a single behavior
  3. Use descriptive names - testAddWithNegativeNumbers not test1
  4. Arrange-Act-Assert - Structure tests clearly

For example:
#[test]
fn userCanChangeName() {
    // Arrange
    var user = User.new("Alice")

    // Act
    user.setName("Bob")

    // Assert
    assertEq!(user.name, "Bob")
}

Integration Test Guidelines

  1. Test the public interface - Don't try to access private internals
  2. Test realistic scenarios - Combine operations as users would
  3. Share setup code - Use the common module pattern
  4. One concern per file - Organize by feature or subsystem

When to Use Each

Test Type            Use For
Unit tests           Individual functions, edge cases, error handling
Integration tests    Feature workflows, API contracts, module interactions

Documentation Tests

Oxide also runs code examples in documentation comments as tests:

/// Adds two numbers together.
///
/// # Examples
///
/// ```oxide
/// let result = adder.add(2, 2)
/// assertEq!(result, 4)
/// ```
public fn add(left: UIntSize, right: UIntSize): UIntSize {
    left + right
}

Run documentation tests with:

cargo test --doc

Documentation tests ensure your examples stay correct as code evolves.
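
rustdoc hides example lines that start with #, compiling and running them but omitting them from the rendered documentation; assuming Oxide inherits this convention, setup code can be hidden from readers like so (sumAll is an illustrative function):

/// Returns the sum of a slice of numbers.
///
/// # Examples
///
/// ```oxide
/// # // Hidden from rendered docs, but still compiled and run
/// # let numbers = vec![1, 2, 3]
/// assertEq!(sumAll(&numbers), 6)
/// ```
public fn sumAll(values: &Vec<Int>): Int {
    var total = 0
    for value in values.iter() {
        total += value
    }
    total
}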

Summary

Oxide's testing features help you write and organize tests effectively:

Unit Tests:

  • Placed in #[cfg(test)] modules alongside code
  • Can test private functions
  • Fast to compile and run
  • Use for isolated, focused tests

Integration Tests:

  • Placed in the tests directory
  • Use your library as an external consumer would
  • Test only the public API
  • Use for testing feature workflows

Best Practices:

  • Keep tests close to the code they test
  • Test one behavior per test
  • Use descriptive test names
  • Share setup code through common modules
  • Run tests frequently during development

Testing is a skill that improves with practice. Start with simple tests and grow your test suite as your codebase grows.

Chapter 12: An I/O Project - Building a CLI Program

In this chapter, we'll build a practical command-line program that demonstrates several concepts we've learned: file I/O, handling command-line arguments, error handling with the ? operator, and testing. We'll create a grep-like tool called oxgrep that searches for a query string in a file and prints matching lines.

Project Overview

Our oxgrep program will:

  1. Accept a search query and a filename as command-line arguments
  2. Read the file
  3. Find and print lines containing the query
  4. Handle errors gracefully with custom error types and the ? operator
  5. Include comprehensive tests

This is an excellent opportunity to practice:

  • Command-line argument parsing - Processing user input
  • File I/O - Reading files efficiently
  • Error handling - Creating custom error types and using the ? operator
  • Code organization - Separating concerns into modules
  • Testing - Writing tests for different scenarios
  • Refactoring - Improving code quality and maintainability

Getting Started

Let's create a new binary project:

cargo new oxgrep
cd oxgrep

This creates a project with the following structure:

oxgrep/
├── Cargo.toml
└── src/
    └── main.ox

Throughout this chapter, we'll build up the program incrementally, explaining each piece as we go. We'll start with the simplest working version and then add features and improve the design.

What We'll Build

By the end of this chapter, you'll have a working program that can be used like this:

cargo run -- to sample.txt
cargo run -- is poem.txt
cargo run -- to poem.txt --ignore-case

The program will output matching lines and handle missing files, invalid arguments, and other error conditions gracefully.

Let's dive in!

Next Steps

  1. Accepting Command Line Arguments - Parse user input
  2. Reading a File - Learn how to work with the file system
  3. Refactoring to Improve Modularity and Error Handling - Build better error handling
  4. Adding Functionality with Test Driven Development - Ensure your code works correctly
  5. Working with Environment Variables - Handle case-insensitive search
  6. Redirecting Errors to Standard Error - Keep stdout clean
  7. Improving Our I/O Project (Chapter 13) - Polish your implementation

Accepting Command-Line Arguments

A good command-line program needs to parse and validate arguments correctly. In this section, we'll improve how our program handles user input.

The Problem with Raw Arguments

Our current approach directly accesses args[1] and args[2], which is error-prone:

let args = std.env.args().collect<Vec<String>>()

if args.len() < 3 {
    eprintln!("usage: oxgrep <query> <filename>")
    std.process.exit(1)
}

let query = args[1].clone()
let filename = args[2].clone()

This works, but it's hard to maintain and extend. As we add features (like case-insensitive search), the argument parsing becomes messier.

Refactoring into a Config Struct

Let's create a Config struct to encapsulate argument parsing:

import std.fs
import std.env
import std.error.Error

struct Config {
    query: String,
    filename: String,
    ignoreCase: Bool,
}

extension Config {
    static fn new(args: &Vec<String>): Result<Config, String> {
        if args.len() < 3 {
            return Err("not enough arguments".toString())
        }

        let query = args[1].clone()
        let filename = args[2].clone()
        let ignoreCase = args.len() > 3 && args[3] == "--ignore-case"

        Ok(Config {
            query,
            filename,
            ignoreCase,
        })
    }
}

fn main(): Result<(), Box<dyn Error>> {
    let args = std.env.args().collect<Vec<String>>()

    let config = Config.new(&args)
        .unwrapOrElse { err ->
            eprintln!("Problem parsing arguments: {}", err)
            std.process.exit(1)
        }

    run(config)
}

fn run(config: Config): Result<(), Box<dyn Error>> {
    let contents = std.fs.readToString(&config.filename)?

    for line in contents.lines() {
        if line.contains(&config.query) {
            println!("{}", line)
        }
    }

    Ok(())
}

Now the argument parsing is isolated in the Config struct, making it easy to test and modify.
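
Because Config.new takes a plain Vec<String>, it can be unit tested without touching the real command line. A sketch:

#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn rejectsTooFewArguments() {
        let args = vec!["oxgrep".toString(), "query".toString()]
        assert!(Config.new(&args).isErr())
    }

    #[test]
    fn parsesQueryAndFilename() {
        let args = vec![
            "oxgrep".toString(),
            "needle".toString(),
            "haystack.txt".toString(),
        ]
        let config = Config.new(&args).unwrap()
        assertEq!(config.query, "needle".toString())
        assertEq!(config.filename, "haystack.txt".toString())
    }
}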

Improving Error Messages

The generic "not enough arguments" message could be more helpful:

extension Config {
    static fn new(args: &Vec<String>): Result<Config, String> {
        if args.len() < 3 {
            return Err(
                "not enough arguments\n\
                 usage: oxgrep <query> <filename> [--ignore-case]".toString()
            )
        }

        let query = args[1].clone()
        let filename = args[2].clone()
        let ignoreCase = args.len() > 3 && args[3] == "--ignore-case"

        Ok(Config {
            query,
            filename,
            ignoreCase,
        })
    }
}

Handling Invalid Options

What if the user passes an unrecognized flag? Let's validate:

extension Config {
    static fn new(args: &Vec<String>): Result<Config, String> {
        if args.len() < 3 {
            return Err(
                "not enough arguments\n\
                 usage: oxgrep <query> <filename> [--ignore-case]".toString()
            )
        }

        let query = args[1].clone()
        let filename = args[2].clone()
        var ignoreCase = false

        // Parse remaining arguments
        for i in 3..args.len() {
            match args[i].asStr() {
                "--ignore-case" -> {
                    ignoreCase = true
                },
                _ -> {
                    return Err(format!("Unknown option: {}", args[i]))
                }
            }
        }

        Ok(Config {
            query,
            filename,
            ignoreCase,
        })
    }
}

Iterator-Based Parsing

For more complex programs, you might use iterators for elegant argument parsing:

extension Config {
    static fn new(var args: Vec<String>): Result<Config, String> {
        args.remove(0) // Remove program name

        if args.isEmpty() {
            return Err("not enough arguments".toString())
        }

        let query = args.remove(0)

        if args.isEmpty() {
            return Err("filename required".toString())
        }

        let filename = args.remove(0)
        let ignoreCase = args.contains(&"--ignore-case".toString())

        Ok(Config {
            query,
            filename,
            ignoreCase,
        })
    }
}

Using Environment for Options

Some tools allow configuration via environment variables. For example, you might set OXGREP_IGNORE_CASE:

extension Config {
    static fn new(args: &Vec<String>): Result<Config, String> {
        if args.len() < 3 {
            return Err("not enough arguments".toString())
        }

        let query = args[1].clone()
        let filename = args[2].clone()

        // Check both command-line flag and environment variable
        let ignoreCase =
            args.len() > 3 && args[3] == "--ignore-case"
            || std.env.var("OXGREP_IGNORE_CASE").isOk()

        Ok(Config {
            query,
            filename,
            ignoreCase,
        })
    }
}

Adding Help Text

A professional CLI tool includes help documentation:

extension Config {
    static fn new(args: &Vec<String>): Result<Config, String> {
        if args.len() < 2 {
            return Err(Self.helpText())
        }

        // Check for help flag first
        if args[1] == "--help" || args[1] == "-h" {
            println!("{}", Self.helpText())
            std.process.exit(0)
        }

        if args.len() < 3 {
            return Err(Self.helpText())
        }

        let query = args[1].clone()
        let filename = args[2].clone()
        let ignoreCase = args.len() > 3 && args[3] == "--ignore-case"

        Ok(Config {
            query,
            filename,
            ignoreCase,
        })
    }

    static fn helpText(): String {
        "oxgrep - A simple text search tool\n\n\
         USAGE:\n    \
         oxgrep <QUERY> <FILENAME> [OPTIONS]\n\n\
         OPTIONS:\n    \
         --ignore-case    Case-insensitive search\n    \
         --help, -h       Show this help message".toString()
    }
}

Summary

Good argument parsing leads to better user experience:

  • Encapsulation - Use structs to group related arguments
  • Validation - Check arguments early and provide helpful error messages
  • Flexibility - Support both command-line flags and environment variables
  • Documentation - Include helpful error messages and help text
  • Extensibility - Make it easy to add new options without refactoring the entire program

Next, we'll improve error handling with custom error types.

Reading and Writing Files

In Oxide (like Rust), working with the file system requires the std.fs module. This section covers reading files, a fundamental operation for our grep-like program.

Reading a File

Let's start with the most basic operation: reading a file's contents into a string.

Create a sample file named poem.txt:

I'm nobody! Who are you?
Are you nobody too?
Then there's a pair of us!
Don't tell! they'd banish us, you know.

How dreary to be somebody!
How public, like a Frog
To tell one's name the livelong June
To an admiring Bog!

Now, let's write code to read this file. Update src/main.ox:

import std.fs

fn main() {
    let filename = "poem.txt"
    let contents = std.fs.readToString(filename)
        .expect("should have been able to read the file")

    println!("File contents:\n\(contents)")
}

Run the program:

cargo run

Output:

File contents:
I'm nobody! Who are you?
Are you nobody too?
...

Handling Errors with Result

What happens if the file doesn't exist? The readToString function returns a Result<String, Error>, which means it can fail. Rather than panicking with expect, we can propagate the error with the ? operator.

import std.fs

fn main() {
    let filename = "poem.txt"
    let contents = std.fs.readToString(filename)?

    println!("File contents:\n\(contents)")
}

But wait—main doesn't return Result, it returns nothing. We need to change that:

import std.fs
import std.error.Error

fn main(): Result<(), Box<dyn Error>> {
    let filename = "poem.txt"
    let contents = std.fs.readToString(filename)?

    println!("File contents:\n\(contents)")
    Ok(())
}

Now if the file doesn't exist, the error message will be printed by Rust's error handling system:

Error: Os { code: 2, kind: NotFound, message: "No such file or directory" }

Using Variables from Arguments

In a real program, the filename shouldn't be hardcoded. Let's accept it as a command-line argument.

First, let's use std.env to get command-line arguments:

import std.fs
import std.env
import std.error.Error

fn main(): Result<(), Box<dyn Error>> {
    let args = std.env.args().collect<Vec<String>>()

    if args.len() < 3 {
        eprintln!("usage: oxgrep <query> <filename>")
        std.process.exit(1)
    }

    let query = args[1].clone()
    let filename = args[2].clone()

    let contents = std.fs.readToString(&filename)?

    println!("In file \(filename)")
    println!("Contents: \(contents)")

    Ok(())
}

Now you can run:

cargo run -- to poem.txt

Output:

In file poem.txt
Contents:
I'm nobody! Who are you?
...

Processing File Contents

Now that we can read files, let's search for matching lines. We'll create a function to do the heavy lifting:

import std.fs
import std.env
import std.error.Error

fn search(query: &str, contents: &str): Vec<String> {
    var results = Vec.new()

    for line in contents.lines() {
        if line.contains(query) {
            results.push(line.toString())
        }
    }

    results
}

fn main(): Result<(), Box<dyn Error>> {
    let args = std.env.args().collect<Vec<String>>()

    if args.len() < 3 {
        eprintln!("usage: oxgrep <query> <filename>")
        std.process.exit(1)
    }

    let query = args[1].clone()
    let filename = args[2].clone()

    let contents = std.fs.readToString(&filename)?

    let results = search(&query, &contents)

    for line in results {
        println!("{}", line)
    }

    Ok(())
}

Let's test it:

cargo run -- is poem.txt

Output:

I'm nobody! Who are you?
Are you nobody too?
Then there's a pair of us!
How public, like a Frog

Perfect! But this approach is inefficient for large files because it stores all matching lines in memory. For a real grep tool, we'd print lines as we find them. However, for learning purposes, this demonstrates the concept clearly.
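
The collecting search function above can also be expressed with iterator adapters, assuming Oxide exposes Rust's filter and map adapters with its trailing-closure syntax:

fn search(query: &str, contents: &str): Vec<String> {
    contents.lines()
        .filter { line -> line.contains(query) }
        .map { line -> line.toString() }
        .collect()
}

The behavior is the same; which form is clearer is a matter of taste.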

Improving Efficiency

For large files, a better approach is to print directly without storing results:

import std.fs
import std.env
import std.error.Error

fn main(): Result<(), Box<dyn Error>> {
    let args = std.env.args().collect<Vec<String>>()

    if args.len() < 3 {
        eprintln!("usage: oxgrep <query> <filename>")
        std.process.exit(1)
    }

    let query = args[1].clone()
    let filename = args[2].clone()

    let contents = std.fs.readToString(&filename)?

    for line in contents.lines() {
        if line.contains(&query) {
            println!("{}", line)
        }
    }

    Ok(())
}

This version is simpler and more memory-efficient—perfect for real applications.

Writing Files

While our grep tool only reads files, it's useful to know how to write files. Use std.fs.write:

import std.fs
import std.error.Error

fn main(): Result<(), Box<dyn Error>> {
    let content = "Hello, World!"
    std.fs.write("output.txt", content)?

    println!("File written successfully")
    Ok(())
}

For more complex file operations (appending, seeking), use OpenOptions:

import std.fs
import std.io
import std.error.Error

fn main(): Result<(), Box<dyn Error>> {
    // Append to a file
    var file = std.fs.OpenOptions.new()
        .append(true)
        .open("log.txt")?

    std.io.Write.writeAll(&mut file, b"Log entry\n")?

    Ok(())
}

Summary

File I/O in Oxide follows Rust's patterns:

  • std.fs.readToString - Read an entire file into a String
  • std.fs.write - Write content to a file
  • std.fs.OpenOptions - For more control over how files are opened
  • Result-based errors - Always handle potential failures with ? or match
  • Iterating over lines - Use .lines() to process files line-by-line

Next, we'll make our program more user-friendly by handling command-line arguments better.

Improving Error Handling with Custom Error Types

As our program grows, generic String errors become problematic. Oxide (following Rust) allows us to create custom error types for better error handling.

The Problem with String Errors

Currently, we're returning Result<Config, String>. This has issues:

  1. Loss of context - A string doesn't tell us what kind of error occurred
  2. Difficult error handling - Callers can't distinguish between different errors
  3. Hard to extend - Adding new error types requires refactoring

// Current approach - hard to work with
let config = Config.new(&args)
    .unwrapOrElse { err ->
        eprintln!("Error: {}", err)
        std.process.exit(1)
    }

Creating a Custom Error Type

Let's define an AppError enum that represents the different errors our program can encounter:

import std.fmt
import std.error.Error
import std.io

enum AppError {
    ArgumentError(String),
    FileError(io.Error),
    SearchError(String),
}

Now we need to implement Display and Error traits for proper error handling:

extension AppError: fmt.Display {
    fn fmt(f: &mut fmt.Formatter): fmt.Result {
        match self {
            AppError.ArgumentError(msg) -> {
                write!(f, "Argument error: {}", msg)
            },
            AppError.FileError(err) -> {
                write!(f, "File error: {}", err)
            },
            AppError.SearchError(msg) -> {
                write!(f, "Search error: {}", msg)
            },
        }
    }
}

extension AppError: fmt.Debug {
    fn fmt(f: &mut fmt.Formatter): fmt.Result {
        // Delegate to the Display implementation; writing "{:?}" here
        // would call this method again and recurse forever
        fmt.Display.fmt(self, f)
    }
}

extension AppError: From<io.Error> {
    static fn from(err: io.Error): Self {
        AppError.FileError(err)
    }
}

Using the Custom Error Type

Now update Config to return our custom error:

extension Config {
    static fn new(args: &Vec<String>): Result<Config, AppError> {
        if args.len() < 3 {
            return Err(AppError.ArgumentError(
                "not enough arguments\nusage: oxgrep <query> <filename>".toString()
            ))
        }

        let query = args[1].clone()
        let filename = args[2].clone()
        let ignoreCase = args.len() > 3 && args[3] == "--ignore-case"

        Ok(Config {
            query,
            filename,
            ignoreCase,
        })
    }
}

Update the main function:

fn main(): Result<(), Box<dyn Error>> {
    let args = std.env.args().collect<Vec<String>>()

    let config = Config.new(&args)
        .unwrapOrElse { err ->
            eprintln!("Problem parsing arguments: {}", err)
            std.process.exit(1)
        }

    run(config)
}

fn run(config: Config): Result<(), Box<dyn Error>> {
    let contents = std.fs.readToString(&config.filename)?

    for line in contents.lines() {
        if line.contains(&config.query) {
            println!("{}", line)
        }
    }

    Ok(())
}

The From<io.Error> implementation allows automatic conversion: when a function that returns Result<_, AppError> applies ? to readToString, the IO error is converted into AppError.FileError for us.
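
A sketch of this conversion in action (readFileContents is a hypothetical helper returning our error type):

fn readFileContents(path: &str): Result<String, AppError> {
    // On failure, ? calls AppError.from(ioError) and returns early
    let contents = std.fs.readToString(path)?
    Ok(contents)
}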

Detailed Error Messages

With custom error types, we can provide context-specific messages:

enum AppError {
    ArgumentError {
        message: String,
        usage: String,
    },
    FileNotFound(String),
    PermissionDenied(String),
    InvalidEncoding,
}

extension AppError: fmt.Display {
    fn fmt(f: &mut fmt.Formatter): fmt.Result {
        match self {
            AppError.ArgumentError { message, usage } -> {
                write!(
                    f,
                    "Error: {}\n\n{}",
                    message, usage
                )
            },
            AppError.FileNotFound(path) -> {
                write!(f, "File not found: {}", path)
            },
            AppError.PermissionDenied(path) -> {
                write!(f, "Permission denied reading: {}", path)
            },
            AppError.InvalidEncoding -> {
                write!(f, "File is not valid UTF-8")
            },
        }
    }
}

The ?? Operator

Oxide provides a convenient ?? operator for working with Option types. This null-coalescing operator provides a default value when an Option is None:

fn findValue(data: Vec<String>, key: String): String? {
    for item in data.iter() {
        if item.contains(&key) {
            return Some(item.clone())
        }
    }
    null
}

fn main(): Result<(), Box<dyn Error>> {
    let data = vec!["key:value".toString(), "other:data".toString()]

    // Using ?? to provide a default value
    let value = findValue(data, "key".toString()) ?? "default".toString()

    println!("Value: {}", value)
    Ok(())
}

The ?? operator:

  • Returns the inner value if Some
  • Uses the right-hand default value if None
  • Makes code more concise than unwrapOr
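
A small sketch combining ?? with parsing (assuming parse and ok mirror their Rust counterparts):

fn parsePort(input: String?): Int {
    // Fall back to "8080" when input is None, then to 8080 if parsing fails
    let text = input ?? "8080".toString()
    text.parse<Int>().ok() ?? 8080
}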

Error Handling in Functions

When functions can fail, propagate errors with ?:

fn search(
    query: &str,
    contents: &str
): Result<Vec<String>, AppError> {
    var results = Vec.new()

    for line in contents.lines() {
        if line.contains(query) {
            results.push(line.toString())
        }
    }

    // Optionally treat an empty result as an error:
    // if results.isEmpty() {
    //     return Err(AppError.SearchError("No matches found".toString()))
    // }

    Ok(results)
}

Summary

Custom error types improve error handling:

  • Better semantics - Error types reflect actual problems
  • Composability - Use From to convert between error types
  • Easier debugging - Detailed context helps find issues
  • Type safety - Match on specific error variants
  • Operator support - Use ? and ?? for elegant error propagation

Next, we'll add support for case-insensitive searching using environment variables.

Writing Tests for Our CLI Program

Testing CLI programs requires special techniques. We need to test file I/O, argument parsing, and search logic in isolation. Let's write comprehensive tests for our grep clone.

Organizing Code for Testing

First, let's restructure our code to be testable. Create src/lib.ox alongside src/main.ox:

// src/lib.ox
import std.fs
import std.env
import std.error.Error

public struct Config {
    public query: String,
    public filename: String,
    public ignoreCase: Bool,
}

extension Config {
    public static fn new(args: &Vec<String>): Result<Config, String> {
        if args.len() < 3 {
            return Err("not enough arguments".toString())
        }

        let query = args[1].clone()
        let filename = args[2].clone()

        let commandLineFlag = args.len() > 3 && args[3] == "--ignore-case"
        let envVariable = std.env.var("OXGREP_IGNORE_CASE").isOk()

        let ignoreCase = commandLineFlag || envVariable

        Ok(Config {
            query,
            filename,
            ignoreCase,
        })
    }
}
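Rust comparison: a rough equivalent of the Config constructor above (snake_cased; `Bool` becomes `bool`, and the environment check reads `OXGREP_IGNORE_CASE` just as the Oxide version does):

```rust
use std::env;

pub struct Config {
    pub query: String,
    pub filename: String,
    pub ignore_case: bool,
}

impl Config {
    pub fn new(args: &[String]) -> Result<Config, String> {
        if args.len() < 3 {
            return Err("not enough arguments".to_string());
        }

        let query = args[1].clone();
        let filename = args[2].clone();

        // Either the command-line flag or the environment variable enables it
        let command_line_flag = args.len() > 3 && args[3] == "--ignore-case";
        let env_variable = env::var("OXGREP_IGNORE_CASE").is_ok();

        Ok(Config {
            query,
            filename,
            ignore_case: command_line_flag || env_variable,
        })
    }
}
```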

public fn search(config: &Config, contents: &str): Vec<String> {
    var results = Vec.new()

    let query = if config.ignoreCase {
        config.query.toLowercase()
    } else {
        config.query.clone()
    }

    for line in contents.lines() {
        let searchLine = if config.ignoreCase {
            line.toLowercase()
        } else {
            line.toString()
        }

        if searchLine.contains(&query) {
            results.push(line.toString())
        }
    }

    results
}

public fn run(config: &Config): Result<(), Box<dyn Error>> {
    let contents = std.fs.readToString(&config.filename)?
    let results = search(config, &contents)

    for line in results {
        println!("{}", line)
    }

    Ok(())
}

Now src/main.ox just handles argument parsing and calls run:

// src/main.ox
import std.env
import std.error.Error
import oxgrep.*

fn main(): Result<(), Box<dyn Error>> {
    let args = std.env.args().collect<Vec<String>>()

    let config = Config.new(&args)
        .unwrapOrElse { err ->
            eprintln!("Problem parsing arguments: {}", err)
            std.process.exit(1)
        }

    run(&config)
}

Testing the Search Function

Now we can test the core search logic:

// In src/lib.ox
#[cfg(test)]
module tests {
    import super.*

    #[test]
    fn caseInsensitive() {
        let query = "duct".toString()
        let contents = "Rust:\nsafe, fast, productive.\nPick three.".toString()

        var config = Config {
            query,
            filename: "test.txt".toString(),
            ignoreCase: true,
        }

        let results = search(&config, &contents)

        assertEq!(results.len(), 1)
        assertEq!(results[0], "safe, fast, productive.".toString())
    }

    #[test]
    fn caseSensitive() {
        let query = "duct".toString()
        let contents = "Rust:\nsafe, fast, productive.\nPick three.".toString()

        var config = Config {
            query,
            filename: "test.txt".toString(),
            ignoreCase: false,
        }

        let results = search(&config, &contents)

        assertEq!(results.len(), 1)
        assertEq!(results[0], "safe, fast, productive.".toString())
    }

    #[test]
    fn noMatches() {
        let query = "xyz".toString()
        let contents = "Rust:\nsafe, fast, productive.\nPick three.".toString()

        var config = Config {
            query,
            filename: "test.txt".toString(),
            ignoreCase: false,
        }

        let results = search(&config, &contents)

        assertEq!(results.len(), 0)
    }

    #[test]
    fn multipleMatches() {
        let query = "a".toString()
        let contents = "apple\napricot\navocado".toString()

        var config = Config {
            query,
            filename: "test.txt".toString(),
            ignoreCase: false,
        }

        let results = search(&config, &contents)

        assertEq!(results.len(), 3)
        assertEq!(results[0], "apple".toString())
        assertEq!(results[1], "apricot".toString())
        assertEq!(results[2], "avocado".toString())
    }
}

Testing Argument Parsing

Test the Config struct:

#[cfg(test)]
module configTests {
    import super.*

    #[test]
    fn requiredArguments() {
        let args = vec!["oxgrep".toString()]
        let result = Config.new(&args)

        assert!(result.isErr())
    }

    #[test]
    fn parsesQueryAndFilename() {
        let args = vec![
            "oxgrep".toString(),
            "test".toString(),
            "file.txt".toString(),
        ]
        let result = Config.new(&args)

        assert!(result.isOk())

        let config = result.unwrap()
        assertEq!(config.query, "test".toString())
        assertEq!(config.filename, "file.txt".toString())
        assert!(!config.ignoreCase)
    }

    #[test]
    fn parsesIgnoreCaseFlag() {
        let args = vec![
            "oxgrep".toString(),
            "test".toString(),
            "file.txt".toString(),
            "--ignore-case".toString(),
        ]
        let result = Config.new(&args)

        assert!(result.isOk())

        let config = result.unwrap()
        assert!(config.ignoreCase)
    }
}

Testing With Temporary Files

For more integration-like tests, use temporary files:

import std.fs
import std.path.PathBuf
import std.env
import std.error.Error

fn createTempFile(contents: &str): Result<PathBuf, Box<dyn Error>> {
    let tempDir = std.env.tempDir()
    let filename = tempDir.join("oxgrep_test.txt")

    std.fs.write(&filename, contents)?

    Ok(filename)
}

#[cfg(test)]
module integrationTests {
    import super.*

    #[test]
    fn searchesRealFile(): Result<(), Box<dyn Error>> {
        let filename = createTempFile("apple\napricot\navocado")?

        var config = Config {
            query: "a".toString(),
            filename: filename.toString(),
            ignoreCase: false,
        }

        let contents = std.fs.readToString(&config.filename)?
        let results = search(&config, &contents)

        assertEq!(results.len(), 3)

        // Cleanup
        std.fs.removeFile(&filename)?

        Ok(())
    }

    #[test]
    fn handlesNonExistentFile(): Result<(), Box<dyn Error>> {
        var config = Config {
            query: "test".toString(),
            filename: "nonexistent.txt".toString(),
            ignoreCase: false,
        }

        let result = run(&config)

        assert!(result.isErr())

        Ok(())
    }
}

Running Tests

Run all tests:

cargo test

Output:

running 9 tests
test configTests.parsesIgnoreCaseFlag ... ok
test configTests.parsesQueryAndFilename ... ok
test configTests.requiredArguments ... ok
test integrationTests.handlesNonExistentFile ... ok
test integrationTests.searchesRealFile ... ok
test tests.caseInsensitive ... ok
test tests.caseSensitive ... ok
test tests.multipleMatches ... ok
test tests.noMatches ... ok

test result: ok. 9 passed; 0 failed

Testing Error Conditions

Test that errors are handled gracefully:

#[test]
fn invalidArguments() {
    let args = vec!["oxgrep".toString(), "query".toString()]
    let result = Config.new(&args)

    assert!(result.isErr())

    match result {
        Err(msg) -> {
            assert!(msg.contains("not enough arguments"))
        },
        _ -> panic!("Expected error"),
    }
}

#[test]
fn emptySearch() {
    let query = "".toString()
    let contents = "Line 1\nLine 2\nLine 3".toString()

    var config = Config {
        query,
        filename: "test.txt".toString(),
        ignoreCase: false,
    }

    let results = search(&config, &contents)

    // Empty query matches all lines (contains("") is true)
    assertEq!(results.len(), 3)
}

#[test]
fn emptyFile() {
    let query = "test".toString()
    let contents = "".toString()

    var config = Config {
        query,
        filename: "test.txt".toString(),
        ignoreCase: false,
    }

    let results = search(&config, &contents)

    assertEq!(results.len(), 0)
}
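Rust comparison: the empty-query behavior above follows from the fact that every string contains the empty string. A minimal sketch of the same line-matching logic:

```rust
// Collect the lines of `contents` that contain `query`.
// An empty query matches every line, because contains("") is always true.
fn matches(query: &str, contents: &str) -> Vec<String> {
    contents
        .lines()
        .filter(|line| line.contains(query))
        .map(|line| line.to_string())
        .collect()
}
```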

Test Organization Best Practices

  1. Group related tests - Use modules to organize test functions
  2. Descriptive names - Make test purpose clear from the name
  3. Test one thing - Each test should verify one behavior
  4. Use fixtures - Create helper functions for common setup
  5. Test edge cases - Empty input, large input, invalid input
  6. Integration tests - Test the complete flow
  7. Document assumptions - Explain why tests work as they do

Running Specific Tests

# Run tests matching a name
cargo test caseInsensitive

# Run tests in a specific module
cargo test configTests

# Run with output
cargo test -- --nocapture

# Run tests on a single thread (slower, but output appears in order)
cargo test -- --test-threads=1

Summary

Effective testing for CLI programs:

  • Separate concerns - Move logic into a library for easier testing
  • Unit tests - Test individual functions with test data
  • Integration tests - Test complete workflows with real files
  • Error cases - Verify graceful failure handling
  • Assertions - Use assert!, assertEq!, and assertNe!
  • Organization - Group tests logically in modules

Next, we'll add support for case-insensitive searching using environment variables.

Working with Environment Variables

Many CLI programs allow configuration through environment variables. Let's add case-insensitive search support using OXGREP_IGNORE_CASE.

Reading Environment Variables

The std.env module provides functions to read environment variables:

import std.env

fn main() {
    // Get a variable - returns Result<String, VarError>
    let debugMode = std.env.var("DEBUG")

    match debugMode {
        Ok(value) -> println!("DEBUG is set to: \(value)"),
        Err(_) -> println!("DEBUG is not set"),
    }
}

Checking for a Flag

To check if an environment variable exists, just see if the result is Ok:

import std.env

fn main() {
    let ignoreCase = std.env.var("OXGREP_IGNORE_CASE").isOk()

    if ignoreCase {
        println!("Case-insensitive search enabled")
    }
}

Updating Our Config Struct

Let's integrate environment variable support into our Config struct:

import std.env

struct Config {
    query: String,
    filename: String,
    ignoreCase: Bool,
}

extension Config {
    static fn new(args: &Vec<String>): Result<Config, String> {
        if args.len() < 3 {
            return Err(
                "not enough arguments\nusage: oxgrep <query> <filename>".toString()
            )
        }

        let query = args[1].clone()
        let filename = args[2].clone()

        // Check both command-line flag and environment variable
        let commandLineFlag = args.len() > 3 && args[3] == "--ignore-case"
        let envVariable = std.env.var("OXGREP_IGNORE_CASE").isOk()

        let ignoreCase = commandLineFlag || envVariable

        Ok(Config {
            query,
            filename,
            ignoreCase,
        })
    }
}

Now update the search function to use the ignoreCase field:

fn search(config: &Config, contents: &str): Vec<String> {
    var results = Vec.new()

    for line in contents.lines() {
        if config.ignoreCase {
            if line.toLowercase().contains(&config.query.toLowercase()) {
                results.push(line.toString())
            }
        } else {
            if line.contains(&config.query) {
                results.push(line.toString())
            }
        }
    }

    results
}

Or more concisely:

fn search(config: &Config, contents: &str): Vec<String> {
    var results = Vec.new()

    let query = if config.ignoreCase {
        config.query.toLowercase()
    } else {
        config.query.clone()
    }

    for line in contents.lines() {
        let searchLine = if config.ignoreCase {
            line.toLowercase()
        } else {
            line.toString()
        }

        if searchLine.contains(&query) {
            results.push(line.toString())
        }
    }

    results
}

Updated Main Function

Update main and run to pass the config around:

import std.fs
import std.env
import std.error.Error

fn main(): Result<(), Box<dyn Error>> {
    let args = std.env.args().collect<Vec<String>>()

    let config = Config.new(&args)
        .unwrapOrElse { err ->
            eprintln!("Problem parsing arguments: {}", err)
            std.process.exit(1)
        }

    run(&config)
}

fn run(config: &Config): Result<(), Box<dyn Error>> {
    let contents = std.fs.readToString(&config.filename)?

    let results = search(config, &contents)

    for line in results {
        println!("{}", line)
    }

    Ok(())
}

Testing Environment Variable Behavior

You can test environment variable behavior from the command line:

# Case-sensitive (default)
cargo run -- "is" poem.txt
# Output: Lines containing "is" (lowercase)

# Case-insensitive via environment variable
OXGREP_IGNORE_CASE=1 cargo run -- "is" poem.txt
# Output: Lines containing "is" or "IS" or "Is"

# Case-insensitive via command-line flag
cargo run -- "is" poem.txt --ignore-case
# Output: Same as above

Getting Multiple Values

For more complex configuration, you might read multiple environment variables:

import std.env

struct AppConfig {
    query: String,
    filename: String,
    ignoreCase: Bool,
    maxResults: UInt,
    verbose: Bool,
}

extension AppConfig {
    static fn fromEnv(): Self {
        let ignoreCase = std.env.var("OXGREP_IGNORE_CASE").isOk()

        let maxResults = std.env.var("OXGREP_MAX_RESULTS")
            .ok()
            .andThen { s -> s.parse<UInt>().ok() }
            .unwrapOr(1000)

        let verbose = std.env.var("OXGREP_VERBOSE").isOk()

        AppConfig {
            query: String.new(),
            filename: String.new(),
            ignoreCase,
            maxResults,
            verbose,
        }
    }
}
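The ok()/andThen/unwrapOr chain above maps directly onto Rust's Option combinators. A sketch that factors the environment lookup out of the parsing logic, so the fallback behavior can be tested without touching real environment variables (the helper names are illustrative):

```rust
// Parse an optional raw value into a count, falling back to a default.
// Separating this from the env lookup keeps the logic easy to test.
fn max_results_from(raw: Option<String>) -> usize {
    raw.and_then(|s| s.parse::<usize>().ok()).unwrap_or(1000)
}

// The production wiring simply feeds the env lookup into the parser.
fn max_results() -> usize {
    max_results_from(std::env::var("OXGREP_MAX_RESULTS").ok())
}
```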

Environment Variables in Tests

When writing tests, you can set environment variables programmatically:

#[test]
fn testIgnoreCaseFromEnv() {
    // In real tests, you'd need to set the environment before creating Config
    // This is more complex due to test concurrency
}

Important Notes About Environment Variables

  1. Thread Safety - Reading environment variables is thread-safe, but setting them is not. Set variables before spawning threads.

  2. Performance - Environment variable lookups are relatively expensive. Cache values if you use them frequently.

  3. Naming Conventions - Use UPPERCASE_WITH_UNDERSCORES for environment variable names.

  4. Documentation - Always document which environment variables your program recognizes.

  5. Security - Be careful with sensitive data in environment variables (passwords, API keys). They're visible in process listings.

Summary

Environment variables allow flexible configuration:

  • std.env.var - Read a variable, returns Result<String, VarError>
  • Defaults - Use unwrapOr to provide defaults
  • Combining sources - Command-line flags and env vars work together
  • Caching - Read environment variables early, not in loops
  • Testing - Be aware that environment variable state is global

Next, we'll look at sending error messages to standard error instead of standard output.

Writing to Standard Error Instead of Standard Output

Command-line tools often send normal output to standard output (stdout) and errors to standard error (stderr). This makes it easy for users to pipe output to files while still seeing errors in the terminal.

Using eprintln!

The simplest way to write to stderr is eprintln!, which mirrors println!:

fn main() {
    eprintln!("error: expected a filename")
}

Writing Directly to stderr

For more control, you can write to std.io.stderr():

import std.io.Error as IoError
import std.io.{ Write }

fn reportError(message: &str): Result<(), IoError> {
    var stderr = std.io.stderr()
    stderr.writeAll("\(message)\n".asBytes())?
    Ok(())
}

This approach is useful when you want to avoid formatting macros or when you need to write binary data.
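Rust comparison: the same idea, written against the Write trait so the helper can target stderr in production and an in-memory buffer in tests (the `report_error` helper is illustrative):

```rust
use std::io::Write;

// Write an error line to any destination that implements Write.
// In production, pass std::io::stderr(); in tests, a Vec<u8> works.
fn report_error<W: Write>(out: &mut W, message: &str) -> std::io::Result<()> {
    writeln!(out, "error: {}", message)
}
```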

Why It Matters

Separating stdout and stderr lets users do things like:

cargo run -- file.txt > results.txt

If your program writes errors to stdout, the error messages will end up in the output file. Writing errors to stderr keeps the output clean.

Functional Language Features: Iterators and Closures

Oxide's design incorporates ideas from many existing languages and paradigms, and functional programming has significantly influenced its design. Programming in a functional style often includes using functions as values by passing them in arguments, returning them from other functions, assigning them to variables for later execution, and so forth.

In this chapter, we'll cover two powerful features that enable functional programming patterns in Oxide:

  • Closures: Anonymous functions that can capture their environment, with a clean { params -> body } syntax
  • Iterators: A way of processing a series of elements lazily and efficiently

We'll also explore how closures and iterators enable you to write clear, expressive code that's also fast. You'll learn that these high-level abstractions compile down to code that's just as efficient as what you might write by hand.

Closures and iterators are central to idiomatic Oxide code. Mastering them will help you write code that's both elegant and performant.

What You'll Learn

Closures

Oxide's closure syntax is inspired by Swift and Kotlin, using curly braces and an arrow:

// Rust closure: |x| x * 2
// Oxide closure:
let double = { x -> x * 2 }

// Trailing closures with implicit `it`
items.filter { it > 0 }.map { it * 2 }

You'll learn:

  • How to define closures with various parameter forms
  • How closures capture values from their environment
  • The three Fn traits that determine how closures interact with captured values
  • The implicit it parameter for concise trailing closures

Iterators

Iterators provide a powerful, lazy way to process sequences of values:

let sum = numbers
    .iter()
    .filter { it % 2 == 0 }
    .map { it * it }
    .sum()

You'll learn:

  • How iterators work under the hood
  • Consuming adaptors that produce final values
  • Iterator adaptors that transform iterators
  • How to create your own custom iterators

Performance

A common concern with high-level abstractions is runtime overhead. You'll learn:

  • Why Oxide's iterators and closures are "zero-cost abstractions"
  • How the compiler optimizes iterator chains
  • When to use iterators vs. traditional loops (spoiler: iterators are usually just as fast)

Let's start by exploring closures in depth.

Closures: Anonymous Functions That Capture Their Environment

Oxide's closures are anonymous functions you can save in a variable or pass as arguments to other functions. Unlike regular functions, closures can capture values from the scope in which they're defined. This makes them incredibly useful for customizing behavior and creating concise, expressive code.

Closure Syntax

Oxide uses a Swift/Kotlin-inspired syntax for closures that differs from Rust:

// No parameters
let sayHello = { println!("Hello!") }

// Single parameter with explicit name
let double = { x -> x * 2 }

// Multiple parameters
let add = { x, y -> x + y }

// With type annotations
let format = { x: Int -> "Number: \(x)" }

// Multi-statement body
let process = { item ->
    let validated = validate(item)
    let transformed = transform(validated)
    transformed
}

Rust comparison: Oxide uses { params -> body } instead of Rust's |params| body:

#![allow(unused)]
fn main() {
// Rust
let double = |x| x * 2;
let add = |x, y| x + y;
let format = |x: i32| format!("Number: {}", x);
}

Type Inference with Closures

Unlike functions, closures don't require you to annotate the types of parameters or the return value. The compiler can usually infer these types from context.

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    // Type of x is inferred as &Int from the iterator
    let doubled: Vec<Int> = numbers.iter().map { x -> x * 2 }.collect()

    // Can also add explicit annotations when needed
    let parsed = { s: &str -> s.parse<Int>().unwrapOr(0) }
}

However, once the compiler infers concrete types for a closure, those types are fixed:

fn main() {
    let identity = { x -> x }

    let s = identity("hello")  // x is inferred as &str
    let n = identity(5)        // Error: expected &str, found integer
}

If you need a closure that works with multiple types, you'll need to define a generic function instead.

Capturing the Environment

One of the most powerful features of closures is their ability to capture values from the enclosing scope. This is something regular functions cannot do:

fn main() {
    let multiplier = 3

    // This closure captures `multiplier` from the environment
    let multiply = { x -> x * multiplier }

    println!("5 * 3 = \(multiply(5))")  // Prints: 5 * 3 = 15
}
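Rust comparison: the same capture works with Rust's |x| syntax, wrapped in a function here so the captured value's scope is explicit:

```rust
fn demo_capture() -> i32 {
    let multiplier = 3;

    // The closure borrows `multiplier` from the enclosing scope
    let multiply = |x| x * multiplier;

    multiply(5)
}
```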

Closures can capture values in three ways, corresponding to the three ways a function can take a parameter: borrowing immutably, borrowing mutably, and taking ownership.

Immutable Borrow (Default)

By default, closures borrow values immutably:

fn main() {
    let list = vec![1, 2, 3]

    // Closure borrows `list` immutably
    let printList = { println!("List: \(list:?)") }

    printList()
    printList()

    // We can still use `list` here because it was only borrowed
    println!("Original list: \(list:?)")
}

Mutable Borrow

If the closure needs to modify a captured value, it will borrow mutably:

fn main() {
    var list = vec![1, 2, 3]

    // This closure borrows `list` mutably
    var addToList = { item -> list.push(item) }

    addToList(4)
    addToList(5)

    // After the closure is done being used, we can use `list` again
    println!("Updated list: \(list:?)")  // Prints: Updated list: [1, 2, 3, 4, 5]
}
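Rust comparison: the mutable-borrow capture looks like this (note that the closure binding itself must be `mut`, just as the Oxide version uses `var`):

```rust
fn demo_mutable_capture() -> Vec<i32> {
    let mut list = vec![1, 2, 3];

    // The closure borrows `list` mutably for as long as it is in use
    let mut add_to_list = |item| list.push(item);

    add_to_list(4);
    add_to_list(5);

    // The mutable borrow ends at the closure's last use, so `list` is usable again
    list
}
```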

Note that between the point where the mutable closure is defined and where it's last used, you can't have any other borrows of list. The borrow checker enforces this:

fn main() {
    var list = vec![1, 2, 3]

    var addToList = { item -> list.push(item) }

    println!("\(list:?)")  // Error: cannot borrow `list` as immutable
                            // because it is also borrowed as mutable

    addToList(4)
}

Taking Ownership with move

Sometimes you want a closure to take ownership of the values it captures, even when the body of the closure doesn't strictly need ownership. This is common when passing a closure to a new thread:

import std.thread

fn main() {
    let list = vec![1, 2, 3]

    // Use `move` to force the closure to take ownership
    thread.spawn(move { println!("From thread: \(list:?)") })
        .join()
        .unwrap()

    // Error: `list` has been moved into the closure
    // println!("\(list:?)")
}

The move keyword is placed before the opening brace of the closure. Without move, the closure would try to borrow list, but since the thread might outlive the function, the borrow checker would reject this.

The Fn Traits

Closures automatically implement one or more special traits that define how they can be called. These traits determine what a closure can do with the values it captures:

Trait     What it means                    Can be called...
FnOnce    Might move captured values out   Once only
FnMut     Might mutate captured values     Multiple times
Fn        Only reads captured values       Multiple times

Every closure implements FnOnce because every closure can be called at least once. Closures that don't move captured values also implement FnMut, and closures that don't need mutable access also implement Fn.

FnOnce: Called Once

A closure that moves a value out of its environment can only be called once:

fn consumeWithCallback<F>(f: F)
where
    F: FnOnce(),
{
    f()
}

fn main() {
    let greeting = "Hello".toString()

    // This closure moves `greeting` out when called
    let consume = move {
        let moved = greeting  // Takes ownership of greeting
        println!("\(moved)")
    }

    consumeWithCallback(consume)
    // consume() // Error: closure cannot be called again
}
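Rust comparison: a sketch of the same FnOnce pattern, returning the moved value so the single call is observable:

```rust
// The FnOnce bound means the caller may invoke `f` at most once
fn consume_with_callback<F: FnOnce() -> String>(f: F) -> String {
    f()
}

fn demo_fn_once() -> String {
    let greeting = String::from("Hello");

    // `move` transfers ownership of `greeting` into the closure;
    // returning it moves the value out, so this closure is only FnOnce
    let consume = move || greeting;

    consume_with_callback(consume)
}
```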

FnMut: Mutable Access

A closure that mutates captured values but doesn't move them out implements FnMut:

fn main() {
    var count = 0

    // This closure mutates `count`
    var increment = { count += 1 }

    increment()
    increment()
    increment()

    println!("Count: \(count)")  // Prints: Count: 3
}

Fn: Immutable Access

A closure that only reads from its environment implements Fn:

fn callTwice<F>(f: F)
where
    F: Fn(),
{
    f()
    f()
}

fn main() {
    let message = "Hello"

    // This closure only reads `message`
    let greet = { println!("\(message)") }

    callTwice(greet)
}
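Rust comparison: the Fn bound lets the caller invoke the closure repeatedly, since it only reads captured state:

```rust
// Calls the closure twice and joins the results,
// which is only possible because of the Fn bound
fn call_twice<F>(f: F) -> String
where
    F: Fn() -> String,
{
    format!("{} {}", f(), f())
}

fn demo_fn() -> String {
    let message = "Hello";

    // The closure only reads `message`, so it implements Fn
    call_twice(|| message.to_string())
}
```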

Closures as Function Parameters

When writing functions that accept closures, you specify the trait bound:

// Accepts any closure that can be called with an Int and returns a Bool
fn filterNumbers<F>(numbers: Vec<Int>, predicate: F): Vec<Int>
where
    F: Fn(Int) -> Bool,
{
    var result = vec![]
    for n in numbers {
        if predicate(n) {
            result.push(n)
        }
    }
    result
}

fn main() {
    let numbers = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    let evens = filterNumbers(numbers, { x -> x % 2 == 0 })
    println!("Evens: \(evens:?)")
}
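Rust comparison: the same generic function with a trait-bounded closure parameter (Int becomes i32):

```rust
// Accepts any closure that can be called with an i32 and returns a bool
fn filter_numbers<F>(numbers: Vec<i32>, predicate: F) -> Vec<i32>
where
    F: Fn(i32) -> bool,
{
    let mut result = Vec::new();
    for n in numbers {
        if predicate(n) {
            result.push(n);
        }
    }
    result
}
```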

Choose the most flexible trait that works for your use case:

  • Use FnOnce when the closure only needs to be called once
  • Use FnMut when the closure might mutate state
  • Use Fn when the closure only needs to read state

Returning Closures from Functions

Functions can return closures using impl Trait:

fn makeMultiplier(factor: Int): impl Fn(Int) -> Int {
    move { x -> x * factor }
}

fn main() {
    let double = makeMultiplier(2)
    let triple = makeMultiplier(3)

    println!("5 * 2 = \(double(5))")  // Prints: 5 * 2 = 10
    println!("5 * 3 = \(triple(5))")  // Prints: 5 * 3 = 15
}

Note the move keyword - it's needed because factor is a local variable that would go out of scope when makeMultiplier returns. With move, the closure takes ownership of factor.
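Rust comparison: the same factory function with impl Trait in the return position:

```rust
fn make_multiplier(factor: i32) -> impl Fn(i32) -> i32 {
    // `move` gives the closure ownership of `factor`, which would
    // otherwise go out of scope when this function returns
    move |x| x * factor
}
```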

The Implicit it Parameter

Oxide provides a special convenience for single-parameter closures in trailing closure position: the implicit it parameter. When you write a closure without an explicit parameter list and use it in the body, Oxide automatically creates a single-parameter closure:

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    // Using implicit `it`
    let doubled: Vec<Int> = numbers.iter().map { it * 2 }.collect()
    let evens: Vec<Int> = numbers.iter().filter { it % 2 == 0 }.copied().collect()

    // Equivalent explicit forms
    let doubled: Vec<Int> = numbers.iter().map { x -> x * 2 }.collect()
    let evens: Vec<Int> = numbers.iter().filter { x -> x % 2 == 0 }.copied().collect()
}

Important restriction: The implicit it is only available in trailing closure position (closures passed as the last argument to a function call). You cannot use it when assigning a closure to a variable:

// NOT allowed - `it` only works in trailing closures
let f = { it * 2 }  // Error: `it` only valid in trailing closure

// Use explicit parameter instead
let f = { x -> x * 2 }  // OK

This restriction keeps the code clear by ensuring it only appears in contexts where its meaning is obvious.

Trailing Closure Syntax

When the last argument to a function is a closure, you can write it outside the parentheses:

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    // Trailing closure - closure written after the argument list
    numbers.iter().forEach() { println!("\(it)") }

    // Equivalent non-trailing form
    numbers.iter().forEach({ x -> println!("\(x)") })

    // When there are no other arguments, parentheses can be omitted entirely
    numbers.iter().forEach { println!("\(it)") }
}

Trailing closures make code more readable, especially when the closure body spans multiple lines:

fn main() {
    let result = someFunction(arg1, arg2) {
        let step1 = processStep1(it)
        let step2 = processStep2(step1)
        finalizeResult(step2)
    }
}

Real-World Example: unwrap_or_else

Many methods in the standard library accept closures. A common example is unwrapOrElse on Option (or T? in Oxide):

fn main() {
    let config = loadConfig()  // Returns Config?

    // If config is null, call the closure to create a default
    let settings = config ?? Config.default()

    // The ?? operator is equivalent to:
    let settings = config.unwrapOrElse { Config.default() }
}

The closure is only called if the value is null, allowing you to defer expensive default computation:

fn expensiveDefault(): Config {
    println!("Computing expensive default...")
    // ... expensive computation ...
    Config { /* ... */ }
}

fn main() {
    let config: Config? = loadConfig()  // suppose this returns a value

    // Because config is non-null, expensiveDefault() is never called
    let settings = config ?? expensiveDefault()
}
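Rust comparison: a sketch that makes the laziness observable by counting how often the default closure actually runs (the counter is illustrative, using Cell for interior mutability):

```rust
use std::cell::Cell;

fn demo_lazy_default() -> (i32, u32) {
    let calls = Cell::new(0);

    let expensive_default = || {
        calls.set(calls.get() + 1); // record that the default was computed
        99
    };

    let config: Option<i32> = Some(7);

    // unwrap_or_else only calls the closure when the value is None
    let settings = config.unwrap_or_else(expensive_default);

    (settings, calls.get())
}
```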

Closures in Iterator Methods

Closures really shine when combined with iterator methods. Here's a preview of what we'll cover in the next section:

#[derive(Debug, Clone)]
struct User {
    name: String,
    age: Int,
    active: Bool,
}

fn main() {
    let users = vec![
        User { name: "Alice".toString(), age: 30, active: true },
        User { name: "Bob".toString(), age: 25, active: false },
        User { name: "Carol".toString(), age: 35, active: true },
    ]

    // Find active users over 28, get their names
    let activeAdultNames: Vec<String> = users
        .iter()
        .filter { it.active && it.age > 28 }
        .map { it.name.clone() }
        .collect()

    println!("Active adults: \(activeAdultNames:?)")
    // Prints: Active adults: ["Alice", "Carol"]
}

Summary

Closures in Oxide provide:

  • Clean syntax: { params -> body } is concise and readable
  • Environment capture: Closures can access variables from their enclosing scope
  • Flexible ownership: Choose immutable borrow, mutable borrow, or move as needed
  • Trait-based polymorphism: Fn, FnMut, and FnOnce allow generic closure parameters
  • Implicit it: Single-parameter trailing closures can use it for brevity
  • Trailing syntax: Closures can appear outside parentheses for readability

Understanding closures is essential for writing idiomatic Oxide code, especially when working with iterators, which we'll explore next.

Processing a Series of Items with Iterators

The iterator pattern allows you to perform some task on a sequence of items in turn. An iterator is responsible for the logic of iterating over each item and determining when the sequence has finished. When you use iterators, you don't have to reimplement that logic yourself.

In Oxide, iterators are lazy, meaning they have no effect until you call methods that consume the iterator. This lets you chain multiple operations together efficiently.

The Iterator Trait

All iterators implement the Iterator trait from the standard library. The trait definition looks like this:

trait Iterator {
    type Item
    mutating fn next(): Self.Item?
}

The trait requires you to define one method: next. Each call to next returns one item of the iterator wrapped in Some, and when iteration is over, it returns null.

Creating Iterators

The most common way to create an iterator is by calling a method on a collection:

fn main() {
    let numbers = vec![1, 2, 3]

    // Creates an iterator over references
    var iter = numbers.iter()

    // Manually calling next()
    println!("\(iter.next():?)")  // Some(1)
    println!("\(iter.next():?)")  // Some(2)
    println!("\(iter.next():?)")  // Some(3)
    println!("\(iter.next():?)")  // null
}
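Rust comparison: the same manual iteration, collected into a Vec so the sequence of next() results is visible (copied() turns the &i32 references into owned values):

```rust
fn demo_next() -> Vec<Option<i32>> {
    let numbers = vec![1, 2, 3];
    let mut iter = numbers.iter();

    // Each call to next() advances the iterator by one element
    vec![
        iter.next().copied(),
        iter.next().copied(),
        iter.next().copied(),
        iter.next().copied(), // exhausted: None
    ]
}
```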

There are three common methods to create iterators from collections:

Method       Produces                      Use when
iter()       &T (references)               You want to read values
iterMut()    &mut T (mutable references)   You want to modify values
intoIter()   T (owned values)              You want to take ownership

fn main() {
    var numbers = vec![1, 2, 3]

    // iter() - borrows immutably
    for n in numbers.iter() {
        println!("Read: \(n)")
    }

    // iterMut() - borrows mutably
    for n in numbers.iterMut() {
        *n *= 2  // Double each value
    }

    println!("After doubling: \(numbers:?)")

    // intoIter() - takes ownership
    for n in numbers.intoIter() {
        println!("Owned: \(n)")
    }
    // numbers is no longer usable here
}
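Rust comparison: the three iteration methods side by side (iterMut and intoIter become iter_mut and into_iter in Rust):

```rust
fn demo_iteration() -> (Vec<i32>, Vec<i32>) {
    let mut numbers = vec![1, 2, 3];

    // iter() - immutable references; copied() gives us owned values to keep
    let read: Vec<i32> = numbers.iter().copied().collect();

    // iter_mut() - mutable references let us change values in place
    for n in numbers.iter_mut() {
        *n *= 2;
    }

    // into_iter() - takes ownership; `numbers` is unusable afterwards
    let owned: Vec<i32> = numbers.into_iter().collect();

    (read, owned)
}
```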

Iterators Are Lazy

A critical characteristic of iterators is that they're lazy - they don't do anything until you consume them:

fn main() {
    let numbers = vec![1, 2, 3]

    // This does NOTHING by itself
    let iter = numbers.iter().map { it * 2 }

    // The work happens when we consume the iterator
    let doubled: Vec<Int> = iter.collect()
}
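Rust comparison: laziness can be made observable by counting how many times the map closure runs (the Cell-based counter is illustrative):

```rust
use std::cell::Cell;

fn demo_laziness() -> (u32, u32) {
    let calls = Cell::new(0);
    let numbers = vec![1, 2, 3];

    // Building the iterator chain runs nothing yet
    let iter = numbers.iter().map(|x| {
        calls.set(calls.get() + 1);
        x * 2
    });

    let before = calls.get(); // still 0: nothing has been consumed

    let _doubled: Vec<i32> = iter.collect(); // the work happens here

    (before, calls.get())
}
```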

The compiler will warn you if you create an iterator and don't use it:

warning: unused `Map` that must be used
  --> src/main.ox:4:5
   |
4  |     numbers.iter().map { it * 2 }
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: iterators are lazy and do nothing unless consumed

Consuming Adaptors

Methods that call next are called consuming adaptors because they use up the iterator. Let's look at the most common ones.

sum

The sum method consumes the iterator and adds all elements:

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    let total: Int = numbers.iter().sum()

    println!("Sum: \(total)")  // Prints: Sum: 15
}

collect

The collect method consumes an iterator and collects the results into a collection:

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    let doubled: Vec<Int> = numbers
        .iter()
        .map { it * 2 }
        .collect()

    println!("Doubled: \(doubled:?)")  // Prints: Doubled: [2, 4, 6, 8, 10]
}

You often need to specify the target type, either with a type annotation or using the turbofish syntax:

fn main() {
    let numbers = vec![1, 2, 3]

    // Type annotation on the variable
    let doubled: Vec<Int> = numbers.iter().map { it * 2 }.collect()

    // Or turbofish syntax
    let doubled = numbers.iter().map { it * 2 }.collect<Vec<Int>>()
}

Other Consuming Adaptors

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    // count - counts elements
    let count = numbers.iter().count()

    // last - gets the last element
    let last = numbers.iter().last()

    // nth - gets the nth element (0-indexed)
    let third = numbers.iter().nth(2)

    // fold - reduces with an initial value and accumulator
    let product: Int = numbers.iter().fold(1, { acc, x -> acc * x })

    // any - checks if any element satisfies a predicate
    let hasEven = numbers.iter().any { it % 2 == 0 }

    // all - checks if all elements satisfy a predicate
    let allPositive = numbers.iter().all { it > 0 }

    // find - finds the first element matching a predicate
    let firstEven = numbers.iter().find { it % 2 == 0 }

    // position - finds the index of the first matching element
    let evenIndex = numbers.iter().position { it % 2 == 0 }
}

Iterator Adaptors

Iterator adaptors are methods that transform an iterator into a different iterator. You can chain multiple adaptors together because they produce new iterators. However, because iterators are lazy, you must call a consuming adaptor at the end to get results.

map

The map adaptor transforms each element:

fn main() {
    let numbers = vec![1, 2, 3]

    let squares: Vec<Int> = numbers
        .iter()
        .map { it * it }
        .collect()

    println!("Squares: \(squares:?)")  // Prints: Squares: [1, 4, 9]
}

filter

The filter adaptor keeps only elements that match a predicate:

fn main() {
    let numbers = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    let evens: Vec<Int> = numbers
        .iter()
        .filter { it % 2 == 0 }
        .copied()  // Convert &Int to Int
        .collect()

    println!("Evens: \(evens:?)")  // Prints: Evens: [2, 4, 6, 8, 10]
}

Chaining Adaptors

The real power comes from chaining multiple adaptors:

fn main() {
    let numbers = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    // Filter evens, square them, keep those under 50
    let result: Vec<Int> = numbers
        .iter()
        .filter { it % 2 == 0 }
        .copied()  // Convert &Int to Int before transforming
        .map { it * it }
        .filter { it < 50 }
        .collect()

    println!("Result: \(result:?)")  // Prints: Result: [4, 16, 36]
}

More Iterator Adaptors

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    // take - takes at most n elements
    let first3: Vec<Int> = numbers.iter().take(3).copied().collect()

    // skip - skips the first n elements
    let last2: Vec<Int> = numbers.iter().skip(3).copied().collect()

    // takeWhile - takes while predicate is true
    let underFour: Vec<Int> = numbers.iter().takeWhile { it < &4 }.copied().collect()

    // skipWhile - skips while predicate is true
    let fromFour: Vec<Int> = numbers.iter().skipWhile { it < &4 }.copied().collect()

    // enumerate - adds indices
    for (index, value) in numbers.iter().enumerate() {
        println!("\(index): \(value)")
    }

    // zip - combines two iterators
    let letters = vec!['a', 'b', 'c']
    let zipped: Vec<(Int, Char)> = numbers.iter().copied().zip(letters.iter().copied()).collect()

    // chain - concatenates two iterators
    let more = vec![6, 7, 8]
    let all: Vec<Int> = numbers.iter().copied().chain(more.iter().copied()).collect()

    // flatten - flattens nested iterators
    let nested = vec![vec![1, 2], vec![3, 4], vec![5, 6]]
    let flat: Vec<Int> = nested.iter().flatten().copied().collect()

    // flatMap - map then flatten
    let doubledFlat: Vec<Int> = numbers
        .iter()
        .copied()
        .flatMap { n -> vec![n, n].intoIter() }
        .collect()

    // rev - reverses the iterator
    let reversed: Vec<Int> = numbers.iter().copied().rev().collect()

    // cloned/copied - clones/copies the elements
    let cloned: Vec<Int> = numbers.iter().cloned().collect()
    let copied: Vec<Int> = numbers.iter().copied().collect()

    // inspect - peek at values (useful for debugging)
    let result: Vec<Int> = numbers
        .iter()
        .inspect { println!("Before map: \(it)") }
        .map { it * 2 }
        .inspect { println!("After map: \(it)") }
        .collect()
}

Closures That Capture Their Environment

Iterator adaptors that take closures benefit from closures' ability to capture their environment:

#[derive(Debug)]
struct Shoe {
    size: Int,
    style: String,
}

fn shoesInMySize(shoes: Vec<Shoe>, mySize: Int): Vec<Shoe> {
    shoes
        .intoIter()
        .filter { it.size == mySize }  // Captures mySize from environment
        .collect()
}

fn main() {
    let shoes = vec![
        Shoe { size: 10, style: "sneaker".toString() },
        Shoe { size: 13, style: "sandal".toString() },
        Shoe { size: 10, style: "boot".toString() },
    ]

    let myShoes = shoesInMySize(shoes, 10)
    println!("My shoes: \(myShoes:?)")
}

Creating Your Own Iterators

You can create custom iterators by implementing the Iterator trait. You only need to implement the next method:

struct Counter {
    count: Int,
    max: Int,
}

extension Counter {
    public static fn new(max: Int): Counter {
        Counter { count: 0, max: max }
    }
}

extension Counter: Iterator {
    type Item = Int

    mutating fn next(): Int? {
        if self.count < self.max {
            self.count += 1
            Some(self.count)
        } else {
            null
        }
    }
}

fn main() {
    let counter = Counter.new(5)

    for n in counter {
        println!("Count: \(n)")
    }
    // Prints: Count: 1, Count: 2, Count: 3, Count: 4, Count: 5
}

Once you implement next, you get all the other iterator methods for free:

fn main() {
    let counter = Counter.new(10)

    let sum: Int = counter
        .filter { it % 2 == 0 }
        .map { it * it }
        .sum()

    println!("Sum of squares of evens: \(sum)")
    // 4 + 16 + 36 + 64 + 100 = 220
}

Using Iterator Methods vs. Loops

Iterator methods can often replace explicit loops with more declarative code. Compare these two approaches:

Using a Loop

fn search(query: &str, contents: &str): Vec<&str> {
    var results: Vec<&str> = vec![]

    for line in contents.lines() {
        if line.contains(query) {
            results.push(line)
        }
    }

    results
}

Using Iterators

fn search(query: &str, contents: &str): Vec<&str> {
    contents
        .lines()
        .filter { it.contains(query) }
        .collect()
}

The iterator version:

  • Has no mutable state (var results)
  • Is more declarative (says what to do, not how)
  • Is easier to parallelize later (swap iter() for parIter() with rayon)
  • Is typically just as fast (see the Performance section)

Common Patterns

Here are some common iterator patterns you'll encounter:

Transforming Collections

fn main() {
    let strings = vec!["1", "2", "three", "4", "five"]

    // Parse valid numbers, ignore errors
    let numbers: Vec<Int> = strings
        .iter()
        .filterMap { it.parse<Int>().ok() }
        .collect()

    println!("Numbers: \(numbers:?)")  // Prints: Numbers: [1, 2, 4]
}

Finding Elements

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    // Find first even
    let firstEven = numbers.iter().find { it % 2 == 0 }

    // Find last odd
    let lastOdd = numbers.iter().rev().find { it % 2 == 1 }

    // Find or default
    let target = numbers.iter().find { it > &10 }.copied().unwrapOr(0)
}

Grouping and Partitioning

fn main() {
    let numbers = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    // Partition into two collections
    let (evens, odds): (Vec<Int>, Vec<Int>) = numbers
        .iter()
        .copied()
        .partition { it % 2 == 0 }

    println!("Evens: \(evens:?)")  // [2, 4, 6, 8, 10]
    println!("Odds: \(odds:?)")    // [1, 3, 5, 7, 9]
}

Building Strings

fn main() {
    let words = vec!["Hello", "World", "from", "Oxide"]

    let sentence = words.join(" ")

    // Or using fold
    let sentence = words
        .iter()
        .skip(1)
        .fold(words[0].toString(), { acc, word -> "\(acc) \(word)" })

    println!("\(sentence)")  // Hello World from Oxide
}

Numeric Ranges

fn main() {
    // Sum of 1 to 100
    let sum: Int = (1..=100).sum()

    // Squares of 1 to 10
    let squares: Vec<Int> = (1..=10).map { it * it }.collect()

    // Even numbers from 0 to 20
    let evens: Vec<Int> = (0..=20).filter { it % 2 == 0 }.collect()
}

Summary

Iterators in Oxide provide:

  • Lazy evaluation: Work is only done when needed
  • Composability: Chain multiple operations together
  • Clean syntax: Trailing closures with it make chains readable
  • Zero cost: Compiles to efficient code (see next section)
  • Rich API: Many built-in adaptors for common operations

Iterators and closures together form a powerful toolkit for processing data in a declarative, efficient way. In the next section, we'll see why you don't have to sacrifice performance for this expressiveness.

Refactoring and Best Practices

Now that we have a working, tested program, let's improve it with better design patterns and performance optimizations.

Current Implementation Review

Our current search function works but has some inefficiencies:

public fn search(config: &Config, contents: &str): Vec<String> {
    var results = Vec.new()

    let query = if config.ignoreCase {
        config.query.toLowercase()
    } else {
        config.query.clone()
    }

    for line in contents.lines() {
        let searchLine = if config.ignoreCase {
            line.toLowercase()
        } else {
            line.toString()
        }

        if searchLine.contains(&query) {
            results.push(line.toString())
        }
    }

    results
}

Issues:

  1. Allocates query multiple times - We prepare the query inside the function
  2. Converts each line - For case-insensitive search, we convert every line
  3. Collects all results - For large files, this uses a lot of memory
  4. String cloning - Unnecessary cloning of lines

Refactoring: Pre-Process the Query

Prepare the query once:

public struct SearchConfig {
    public query: String,
    public originalQuery: String,
    public ignoreCase: Bool,
}

extension SearchConfig {
    public static fn new(query: String, ignoreCase: Bool): SearchConfig {
        let searchQuery = if ignoreCase {
            query.toLowercase()
        } else {
            query.clone()
        }

        SearchConfig {
            query: searchQuery,
            originalQuery: query,
            ignoreCase,
        }
    }

    public fn matches(line: &str): Bool {
        if self.ignoreCase {
            line.toLowercase().contains(&self.query)
        } else {
            line.contains(&self.query)
        }
    }
}

Refactoring: Using Iterators

Modern Rust/Oxide code uses iterators instead of manual loops. Here's a cleaner version:

public fn search(config: &Config, contents: &str): Vec<String> {
    let searchConfig = SearchConfig.new(
        config.query.clone(),
        config.ignoreCase
    )

    contents
        .lines()
        .filter { line -> searchConfig.matches(line) }
        .map { line -> line.toString() }
        .collect()
}

This is more idiomatic and easier to understand at a glance.

Refactoring: Streaming Results

For large files, collecting all results in memory is wasteful. Instead, stream results directly:

public fn search<F>(config: &Config, contents: &str, var callback: F)
where F: FnMut(&str) {
    let searchConfig = SearchConfig.new(
        config.query.clone(),
        config.ignoreCase
    )

    for line in contents.lines() {
        if searchConfig.matches(line) {
            callback(line)
        }
    }
}

public fn run(config: &Config): Result<(), Box<dyn Error>> {
    let contents = std.fs.readToString(&config.filename)?

    search(config, &contents) { line ->
        println!("{}", line)
    }

    Ok(())
}

Refactoring: Better Config Struct

Separate concerns in the Config struct:

public struct Config {
    public query: String,
    public filename: String,
    public searchOptions: SearchOptions,
}

public struct SearchOptions {
    public ignoreCase: Bool,
    public invertMatch: Bool,  // Show non-matching lines
    public lineNumbers: Bool,  // Show line numbers
    public countOnly: Bool,    // Just show count
}

extension SearchOptions {
    public static fn new(): SearchOptions {
        SearchOptions {
            ignoreCase: std.env.var("OXGREP_IGNORE_CASE").isOk(),
            invertMatch: false,
            lineNumbers: false,
            countOnly: false,
        }
    }
}

extension Config {
    public static fn new(args: &Vec<String>): Result<Config, String> {
        if args.len() < 3 {
            return Err("not enough arguments".toString())
        }

        let query = args[1].clone()
        let filename = args[2].clone()
        var searchOptions = SearchOptions.new()

        for i in 3..args.len() {
            match args[i].asStr() {
                "--ignore-case" -> searchOptions.ignoreCase = true,
                "--invert-match" -> searchOptions.invertMatch = true,
                "--line-numbers" -> searchOptions.lineNumbers = true,
                "--count" -> searchOptions.countOnly = true,
                flag -> return Err(format!("Unknown flag: {}", flag)),
            }
        }

        Ok(Config {
            query,
            filename,
            searchOptions,
        })
    }
}

Refactoring: Add Features Incrementally

Now adding new features is easier. Here's inverted matching (this assumes SearchConfig also carries the invertMatch flag from SearchOptions):

extension SearchConfig {
    public fn matches(line: &str): Bool {
        let lineToSearch = if self.ignoreCase {
            line.toLowercase()
        } else {
            line.toString()
        }

        if self.invertMatch {
            !lineToSearch.contains(&self.query)
        } else {
            lineToSearch.contains(&self.query)
        }
    }
}

And line numbers:

public fn searchWithLineNumbers(
    config: &Config,
    contents: &str
): Vec<(UIntSize, String)> {
    let searchConfig = SearchConfig.new(
        config.query.clone(),
        config.searchOptions.ignoreCase
    )

    contents
        .lines()
        .enumerate()
        .filter { (_, line) -> searchConfig.matches(line) }
        .map { (num, line) -> (num + 1, line.toString()) }
        .collect()
}

Refactoring: Error Handling

Use custom error types for better error handling:

import std.fmt
import std.error.Error as StdError
import std.io

#[derive(Debug)]
public enum AppError {
    IoError(std.io.Error),
    ConfigError(String),
}

extension AppError: fmt.Display {
    fn fmt(f: &mut fmt.Formatter): fmt.Result {
        match self {
            AppError.IoError(err) -> write!(f, "IO error: {}", err),
            AppError.ConfigError(msg) -> write!(f, "Config error: {}", msg),
        }
    }
}

extension AppError: StdError {}

extension AppError: From<std.io.Error> {
    static fn from(err: std.io.Error): Self {
        AppError.IoError(err)
    }
}

extension AppError: From<String> {
    static fn from(err: String): Self {
        AppError.ConfigError(err)
    }
}

public fn run(config: &Config): Result<(), AppError> {
    let contents = std.fs.readToString(&config.filename)?

    search(config, &contents) { line ->
        println!("{}", line)
    }

    Ok(())
}

Refactoring: Documentation

Add documentation for public APIs:

/// Searches for a pattern in text, with optional case-insensitivity.
///
/// # Arguments
///
/// * `config` - Search configuration including the query and options
/// * `contents` - The text to search in
/// * `callback` - Function to call for each matching line
///
/// # Example
///
/// ```oxide
/// let config = Config {
///     query: "test".toString(),
///     filename: "file.txt".toString(),
///     searchOptions: SearchOptions.new(),
/// }
///
/// search(&config, "test\nno match\ntest again") { line ->
///     println!("{}", line)
/// }
/// ```
public fn search<F>(config: &Config, contents: &str, var callback: F)
where F: FnMut(&str) {
    // Implementation
}

Refactoring: Benchmarking

For production code, benchmark different approaches:

# Add to Cargo.toml
[[bench]]
name = "search"
harness = false

Create benches/search.rs:

fn benchmarkCaseSensitive() {
    let largeContent = "..." // Large file content
    let config = Config {
        query: "search".toString(),
        filename: "bench.txt".toString(),
        searchOptions: SearchOptions { ignoreCase: false, .. },
    }

    // Measure time
    search(&config, &largeContent) { _ -> }  // Discard matches; we only measure time
}

Run benchmarks:

cargo bench

Final Refactored Code Structure

Here's the complete, refactored program:

// src/lib.ox
import std.fs
import std.env

public struct Config {
    public query: String,
    public filename: String,
    public searchOptions: SearchOptions,
}

public struct SearchOptions {
    public ignoreCase: Bool,
    public invertMatch: Bool,
}

public struct SearchConfig {
    public query: String,
    public ignoreCase: Bool,
}

extension Config {
    public static fn new(args: &Vec<String>): Result<Config, String> {
        // ... argument parsing
        Ok(Config { /* ... */ })
    }
}

extension SearchConfig {
    public static fn new(query: String, ignoreCase: Bool): SearchConfig {
        let query = if ignoreCase {
            query.toLowercase()
        } else {
            query
        }
        SearchConfig { query, ignoreCase }
    }

    public fn matches(line: &str): Bool {
        let line = if self.ignoreCase {
            line.toLowercase()
        } else {
            line.toString()
        }
        line.contains(&self.query)
    }
}

public fn search<F>(config: &Config, contents: &str, var callback: F)
where F: FnMut(&str) {
    let searchConfig = SearchConfig.new(
        config.query.clone(),
        config.searchOptions.ignoreCase
    )

    for line in contents.lines() {
        if searchConfig.matches(line) {
            callback(line)
        }
    }
}

public fn run(config: &Config): Result<(), Box<dyn std.error.Error>> {
    let contents = std.fs.readToString(&config.filename)?

    search(config, &contents) { line ->
        println!("{}", line)
    }

    Ok(())
}

Summary

Good refactoring practices:

  • Extract methods - Move complex logic into helper functions
  • Use iterators - They're idiomatic and often faster
  • Separate concerns - Each struct should have one responsibility
  • Document APIs - Use doc comments for public items
  • Add tests first - Tests ensure refactoring doesn't break things
  • Benchmark - Profile before optimizing
  • Version incrementally - Make small, focused changes

This completes our CLI project! You now have a well-structured, tested, and maintainable program that demonstrates core Oxide concepts.

Comparing Performance: Loops vs. Iterators

When you see high-level abstractions like iterator chains and closures, you might worry about runtime performance. Won't all those function calls and intermediate values slow things down?

The short answer: no. Oxide's iterators and closures are zero-cost abstractions, meaning you can use them without paying a runtime penalty compared to hand-written low-level code.

Zero-Cost Abstractions

The term "zero-cost abstraction" comes from C++, where Bjarne Stroustrup defined it as:

What you don't use, you don't pay for. And further: What you do use, you couldn't hand code any better.

Oxide (through Rust) follows this principle. Iterators compile down to roughly the same code you would write if you implemented the operations manually with loops.

A Concrete Example

Let's compare two implementations of a search function:

Using a Loop

fn search(query: &str, contents: &str): Vec<&str> {
    var results: Vec<&str> = vec![]

    for line in contents.lines() {
        if line.contains(query) {
            results.push(line)
        }
    }

    results
}

Using Iterators

fn search(query: &str, contents: &str): Vec<&str> {
    contents
        .lines()
        .filter { it.contains(query) }
        .collect()
}

If you benchmark these two implementations, you'll find they perform nearly identically. In fact, the iterator version is sometimes slightly faster because the compiler can optimize it more aggressively.

How the Compiler Optimizes Iterators

The compiler performs several optimizations on iterator chains:

Inlining

Closures and iterator methods are typically small enough to be inlined. This means the function call overhead is eliminated - the code is inserted directly at the call site.

Loop Fusion

When you chain multiple iterator adaptors:

numbers
    .iter()
    .map { it * 2 }
    .filter { it > 10 }
    .map { it + 1 }
    .collect()

The compiler doesn't create intermediate collections. Instead, it fuses these operations into a single loop that processes each element through all transformations in one pass.

Bounds Check Elimination

When iterating over a collection, the compiler can often prove that index bounds checks are unnecessary and eliminate them. This is easier to do with iterators than with manual indexing.

Loop Unrolling

For known-size collections or simple operations, the compiler may unroll loops to eliminate loop overhead:

// The compiler might transform this:
let sum: Int = [1, 2, 3, 4].iter().sum()

// Into something like this:
let sum = 1 + 2 + 3 + 4

A More Complex Example

Let's look at a more complex example - computing audio samples:

fn computeBuffer(buffer: &mut [Float], coefficients: &[Float]) {
    for (i, sample) in buffer.iterMut().enumerate() {
        let decay = (0.99 as Float).powi(i as Int)
        let weighted = coefficients
            .iter()
            .zip(0..)
            .map { (coef, j) -> coef * (i + j) as Float }
            .sum<Float>()

        *sample = weighted * decay
    }
}

This code:

  1. Iterates over buffer with indices
  2. For each position, zips coefficients with indices
  3. Maps and sums to compute a weighted value
  4. Applies exponential decay

Despite the nested iterators and multiple closures, this compiles to tight, efficient machine code. The compiler:

  • Inlines all closures
  • Fuses the inner iterator chain
  • Eliminates intermediate allocations
  • May vectorize (use SIMD) for parallel processing

When to Use Iterators vs. Loops

Given that iterators and loops perform similarly, when should you use each?

Prefer Iterators When:

  • Clarity: The operation is a clear transform/filter/reduce
  • Composition: You need to chain multiple operations
  • Parallelism: You might later want to parallelize (with libraries like rayon)
  • Functional style: The logic fits the functional paradigm
// Clear intent: filter and transform
let activeUserEmails: Vec<String> = users
    .iter()
    .filter { it.isActive }
    .map { it.email.clone() }
    .collect()

Prefer Loops When:

  • Complex control flow: Multiple breaks, continues, or early returns
  • Multiple outputs: You need to update several things at once
  • Index-heavy logic: The algorithm heavily depends on indices
  • Readability: A loop is clearer for the specific case
// Complex control flow with early exit
for item in items {
    if shouldSkip(item) {
        continue
    }

    match process(item) {
        Ok(result) -> outputs.push(result),
        Err(e) if e.isRecoverable() -> {
            log(e)
            continue
        },
        Err(e) -> return Err(e),
    }

    if outputs.len() >= maxResults {
        break
    }
}

Common Performance Myths

Myth: "Closures are slow"

Closures in Oxide are not like closures in languages with garbage collection. They don't allocate on the heap (unless you box them), and they're typically inlined away completely.

// This closure is completely inlined
let doubled: Vec<Int> = numbers.iter().map { it * 2 }.collect()

// Compiles to essentially the same code as:
var doubled = Vec.withCapacity(numbers.len())
for n in numbers.iter() {
    doubled.push(n * 2)
}

Myth: "Iterator chains allocate intermediate collections"

Iterator adaptors like map and filter don't allocate. They return new iterator types that wrap the original. Only consuming adaptors like collect allocate.

// No allocations until collect()
let result: Vec<Int> = (0..1_000_000)
    .map { it * 2 }      // No allocation
    .filter { it > 100 } // No allocation
    .take(10)            // No allocation
    .collect()           // Allocates Vec with ~10 elements

Myth: "Functional code is slow in systems languages"

This might be true in languages where functional constructs have runtime overhead, but Oxide (through Rust) is specifically designed to make abstractions zero-cost.

Practical Tips

Use Release Mode for Benchmarks

Always benchmark with optimizations enabled. Debug builds disable most optimizations, making iterator code appear slower than it is.

# Debug build - don't benchmark this
oxide build

# Release build - benchmark this
oxide build --release

Don't Over-Optimize Prematurely

Write clear, idiomatic code first. Use iterators where they make code clearer. Only optimize after profiling shows a bottleneck.

// Good: Clear and efficient
let sum: Int = numbers.iter().filter { it > 0 }.sum()

// Unnecessary: Manual optimization that's not faster
var sum = 0
for n in numbers.iter() {
    if n > 0 {
        sum += n
    }
}

Consider collect() Placement

If you need to iterate multiple times, collecting once can be more efficient than recreating the iterator:

// Inefficient if baseIter() is expensive
let count = baseIter().filter { predicate(it) }.count()
let sum: Int = baseIter().filter { predicate(it) }.sum()

// Better: collect once, iterate twice
let filtered: Vec<Int> = baseIter().filter { predicate(it) }.collect()
let count = filtered.len()
let sum: Int = filtered.iter().sum()

Use Appropriate Iterator Methods

Some methods are more efficient than others for specific tasks:

// Use any() instead of filter().count() > 0
let hasEven = numbers.iter().any { it % 2 == 0 }

// Use find() instead of filter().next()
let firstEven = numbers.iter().find { it % 2 == 0 }

// Use position() instead of enumerate().filter().map()
let evenIndex = numbers.iter().position { it % 2 == 0 }

Summary

Iterators and closures in Oxide are zero-cost abstractions:

  • No runtime overhead: They compile to the same code as manual loops
  • Compiler optimizations: Inlining, fusion, and unrolling
  • Choose for clarity: Use the approach that makes your code clearest
  • Profile before optimizing: Don't sacrifice readability for premature optimization

You can confidently use iterators and closures throughout your Oxide code, knowing that you're not trading performance for expressiveness. This is one of Oxide's (and Rust's) core strengths: high-level abstractions with low-level performance.

More About Cargo and Crates.io

We've already covered the basics of Cargo in earlier chapters—how to create projects, build code, and run tests. Now we'll explore more advanced features that Cargo provides for optimizing your builds, sharing your code with the world, organizing large projects, and distributing binary tools.

Release Profiles

Cargo has a concept called profiles that control how your code is compiled in different contexts. A profile is a collection of settings that determine compilation behavior. Cargo ships with four built-in profiles: dev, release, test, and bench. Each profile has its own set of optimization settings.

The Dev Profile

When you run cargo build without the --release flag, Cargo uses the dev profile. This profile is optimized for iteration during development:

$ cargo build
   Compiling my_oxide_app v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 0.45s

The dev profile prioritizes compilation speed over runtime performance. It includes debug information, which makes binaries larger and slower but allows you to debug them effectively with tools like gdb. The compiled binary is placed in target/debug/.

The Release Profile

When you're ready to deploy your code, use the --release flag:

$ cargo build --release
   Compiling my_oxide_app v0.1.0
    Finished release [optimized] target(s) in 2.31s

The release profile applies aggressive optimizations:

  • Optimizes for speed and size
  • Removes debug information by default
  • Takes longer to compile but produces much faster binaries

For production environments, always use --release builds. The performance difference can be dramatic—sometimes 10-100x faster than debug builds.

Customizing Profiles

You can customize profile settings in Cargo.toml by adding profile sections. For example, to add more optimizations to the release profile or add debug info to release builds:

[profile.release]
opt-level = 3
lto = true
codegen-units = 1
debug = true

Common profile settings include:

Setting          | Default (dev)       | Default (release)   | Purpose
opt-level        | 0                   | 3                   | Optimization level (0-3, higher = more optimization)
debug            | true                | false               | Include debug symbols
split-debuginfo  | (platform-specific) | (platform-specific) | How to handle debug information
debug-assertions | true                | false               | Include runtime assertions
overflow-checks  | true                | false               | Panic on integer overflow
lto              | false               | false               | Link-Time Optimization (slower compile, faster binary)
panic            | unwind              | unwind              | Panic strategy
incremental      | true                | false               | Enable incremental compilation
codegen-units    | 256                 | 16                  | Parallel codegen units (lower = more optimization, slower compile)

Creating Custom Profiles

You can create custom profiles beyond the built-in ones. For example, create a fast-compile profile that's optimized for quick iteration:

[profile.fast-compile]
inherits = "dev"
opt-level = 1

Build with your custom profile using cargo build --profile fast-compile.

Profile Settings in Practice

Here's a realistic release configuration that balances compile time with runtime performance:

[profile.release]
opt-level = 3
lto = "thin"
codegen-units = 16
strip = true

[profile.dev]
opt-level = 0
debug-assertions = true

The strip = true setting removes symbols from the final binary, making it smaller. The lto = "thin" setting provides most of the benefits of link-time optimization with faster compile times than lto = true.

Publishing Crates to crates.io

Cargo and Rust's package registry crates.io make it straightforward to publish libraries for others to use. Publishing is free, and your code is permanently available.

Setting Up for Publication

Before publishing, ensure your Cargo.toml includes these fields:

[package]
name = "my_oxide_lib"
version = "0.1.0"
edition = "2021"
authors = ["Your Name <you@example.com>"]
license = "MIT"
description = "A short description of what your library does"
repository = "https://github.com/yourusername/my_oxide_lib"
documentation = "https://docs.rs/my_oxide_lib"

Key Metadata

  • name: Must be unique on crates.io and follow specific naming rules (alphanumeric, hyphens, underscores only)
  • version: Must follow semantic versioning
  • description: A brief description displayed on crates.io; required for publishing
  • license: At least one license identifier (MIT, Apache-2.0, GPL-3.0, etc.); crates.io requires a license or a license-file
  • edition: The Rust edition your code targets
  • authors: Your name and email (optional but recommended)

Documentation

Include a documentation comment at the top of your library's main file:

//! # My Oxide Library
//!
//! `myOxideLib` provides utilities for working with sequences of data.
//!
//! ## Examples
//!
//! ```
//! import myOxideLib.SearchOptions
//!
//! let options = SearchOptions.default()
//! ```

Documentation comments starting with //! document the enclosing item (placed at the top of a file, they document the crate or module itself), while /// comments document the item that follows them. Code blocks inside doc comments are documentation tests, which cargo test compiles and runs automatically.
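For comparison, here is the Rust-side equivalent of the same conventions (the add_one function is a hypothetical example, not part of the library above):

```rust
//! # My Oxide Library
//!
//! Inner doc comments (`//!`) document the enclosing item: placed at the
//! top of lib.rs, they document the crate itself.

/// Outer doc comments (`///`) document the item that follows them.
///
/// A fenced code block in a doc comment is a documentation test that
/// `cargo test` compiles and runs:
///
/// ```
/// // assert_eq!(my_oxide_lib::add_one(1), 2);
/// ```
pub fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    println!("{}", add_one(1)); // prints 2
}
```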

Exporting a Convenient Public API

Use the public import re-export pattern to control your public API:

//! My library documentation
public import self.kinds.PrimaryColor
public import self.utils.mix

public module kinds { ... }
module utils { ... }

This approach allows you to organize internal code while exposing a clean public API. Internal modules can remain private.
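The Rust-side equivalent of this pattern uses pub use. A minimal self-contained sketch (the module contents are illustrative):

```rust
// Re-exports lift items from internal modules to the crate root,
// so users write `PrimaryColor` and `mix` without knowing the layout.
pub use self::kinds::PrimaryColor;
pub use self::utils::mix;

pub mod kinds {
    #[derive(Debug, Clone, Copy, PartialEq)]
    pub enum PrimaryColor {
        Red,
        Yellow,
        Blue,
    }
}

mod utils {
    use super::kinds::PrimaryColor;

    // Illustrative helper: combine two colors into a label.
    pub fn mix(a: PrimaryColor, b: PrimaryColor) -> String {
        format!("{a:?} + {b:?}")
    }
}

fn main() {
    // Callers use the re-exported names directly.
    let label = mix(PrimaryColor::Red, PrimaryColor::Blue);
    println!("{label}");
}
```

Note that utils stays private; only the re-exported mix function is visible to users.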

Publishing Your Crate

First, create an account on crates.io and get an API token from your account settings.

Log in to Cargo:

$ cargo login

Paste your token when prompted. This saves your credentials locally.

Then publish your crate:

$ cargo publish
    Uploading my_oxide_lib v0.1.0 to registry index
    Uploaded my_oxide_lib v0.1.0

Congratulations! Your crate is now available on crates.io. Others can use it by adding it to their Cargo.toml:

[dependencies]
my_oxide_lib = "0.1.0"

Semantic Versioning

Follow semantic versioning when publishing updates:

  • MAJOR.MINOR.PATCH (e.g., 1.2.3)
  • Increment MAJOR when making incompatible API changes
  • Increment MINOR when adding new features in a backward-compatible way
  • Increment PATCH for bug fixes

For pre-release versions, append a suffix like 0.1.0-alpha or 1.0.0-rc.1.

Deprecating Crate Versions

If you need to deprecate a version after publishing, use the yank command to prevent new downloads:

$ cargo yank --vers 1.0.1

This prevents new projects from depending on that version while allowing existing dependents to continue using it. You can undo a yank with --undo.

Workspaces

As projects grow, you often need to organize code into multiple related crates. Cargo workspaces allow you to manage multiple crates in a single repository with shared settings and dependencies.

Creating a Workspace

A workspace is defined by a Cargo.toml file in the root directory listing member crates:

[workspace]
members = [
    "lib",
    "app",
    "cli",
]

[workspace.package]
version = "0.1.0"
edition = "2021"
authors = ["Your Name <you@example.com>"]

The [workspace.package] section defines shared metadata across all member crates.

Workspace Structure

my_workspace/
├── Cargo.toml
├── lib/
│   ├── Cargo.toml
│   └── src/
│       └── lib.ox
├── app/
│   ├── Cargo.toml
│   └── src/
│       └── main.ox
└── cli/
    ├── Cargo.toml
    └── src/
        └── main.ox

Each member crate has its own Cargo.toml file. The workspace root Cargo.toml coordinates the build.

Member Crate Configuration

Each member crate specifies its metadata:

lib/Cargo.toml:

[package]
name = "my_oxide_lib"
version.workspace = true
edition.workspace = true
authors.workspace = true

[lib]
name = "my_oxide_lib"
path = "src/lib.ox"

app/Cargo.toml:

[package]
name = "my_oxide_app"
version.workspace = true
edition.workspace = true

[dependencies]
my_oxide_lib = { path = "../lib" }

Using version.workspace = true references the version from the workspace manifest, avoiding duplication.

Building Workspaces

Build all members from the workspace root:

$ cargo build
   Compiling my_oxide_lib v0.1.0
   Compiling my_oxide_app v0.1.0
    Finished dev [unoptimized + debuginfo]

Build specific members:

$ cargo build -p my_oxide_app
$ cargo run -p my_oxide_cli

Run tests for all members:

$ cargo test --workspace

Shared Dependencies

Dependencies can be shared across workspace members. When you specify a dependency in one member's Cargo.toml, Cargo downloads and compiles it once, then reuses it for other members.

This significantly speeds up builds and ensures consistency. If two crates depend on the same library, Cargo uses a single compiled version.
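You can also centralize dependency versions in the workspace root so every member uses the same one. This sketch uses Cargo's [workspace.dependencies] table; the serde dependency is illustrative:

```toml
# Root Cargo.toml: declare the version once for the whole workspace.
[workspace.dependencies]
serde = { version = "1.0", features = ["derive"] }

# In a member's Cargo.toml: inherit the shared entry.
# [dependencies]
# serde = { workspace = true }
```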

Workspaces vs. Monorepos

Workspaces are ideal for related crates that are versioned together. Each member crate:

  • Has its own src/ directory and tests
  • Can be published independently to crates.io
  • Can have different configurations

Use workspaces when:

  • Multiple crates form a cohesive project
  • You want to share code and dependencies
  • Crates are released together

Use separate repositories when:

  • Crates are independent projects
  • Different teams maintain them
  • They have different release cycles

Installing Binary Tools with cargo install

Cargo isn't just for building libraries and applications—it can also install binary tools globally on your system. This is how you install command-line utilities written in Rust or Oxide.

Installing Binaries

Install a binary crate from crates.io:

$ cargo install ripgrep
  Installing ripgrep v13.0.0
   Compiling regex v1.7.0
   Compiling ripgrep v13.0.0
    Finished `release` profile [optimized]
  Installed binary `rg` to ~/.cargo/bin/rg

The binary is installed to ~/.cargo/bin/, which should be in your PATH. If not, add it:

export PATH="$HOME/.cargo/bin:$PATH"

Installing from Local Paths

Install a binary from your local filesystem:

$ cargo install --path .

This is useful for distributing your own tools to team members or for development.

Installing Specific Versions

Install a particular version:

$ cargo install ripgrep --version 13.0.0

Or allow any version matching a pattern:

$ cargo install ripgrep --version ">=13"

Installing with Features

Some crates have optional features. Install with specific features enabled:

$ cargo install ripgrep --features pcre2

Listing Installed Binaries

View what binaries you've installed:

$ cargo install --list
my_oxide_tool v0.1.0 (path+file:///Users/yourname/project):
    my_oxide_tool
ripgrep v13.0.0:
    rg

Updating Installed Binaries

cargo install doesn't have a built-in update command, but you can reinstall to get the latest version:

$ cargo install ripgrep --force

The --force flag forces a reinstall even if the tool is already installed. To remove an installed tool, use cargo uninstall:

$ cargo uninstall ripgrep

Making Your Crate Installable

To make your crate installable with cargo install, include a [[bin]] section in Cargo.toml:

[[bin]]
name = "my_oxide_tool"
path = "src/main.ox"

If your crate has exactly one binary in src/main.ox, this is implicit.

For crates with multiple binaries:

[[bin]]
name = "my_oxide_tool"
path = "src/bin/main.ox"

[[bin]]
name = "other_tool"
path = "src/bin/other.ox"

Users can then install your tool:

$ cargo install my_oxide_tool

Practical Example: A Complete Workspace

Let's create a real-world example: a workspace with a shared library and multiple applications using it.

Setting Up the Workspace

Create the workspace structure:

$ mkdir my_oxide_workspace
$ cd my_oxide_workspace
$ cargo new --oxide --lib shared
$ cargo new --oxide app
$ cargo new --oxide cli

Root Cargo.toml

[workspace]
members = ["shared", "app", "cli"]

[workspace.package]
version = "0.1.0"
edition = "2021"
authors = ["Your Name <you@example.com>"]

Shared Library (shared/src/lib.ox)

public struct Message {
    public content: String,
    public timestamp: UInt64,
}

public fn createMessage(content: String): Message {
    Message {
        content,
        timestamp: getCurrentTimestamp(),
    }
}

fn getCurrentTimestamp(): UInt64 {
    // Implementation here
    0
}

App Crate (app/Cargo.toml)

[package]
name = "my_oxide_app"
version.workspace = true
edition.workspace = true

[dependencies]
shared = { path = "../shared" }

Building and Running

$ cargo build --workspace
   Compiling shared v0.1.0
   Compiling my_oxide_app v0.1.0
   Compiling my_oxide_cli v0.1.0
    Finished dev [unoptimized + debuginfo]

$ cargo run -p my_oxide_app
    Finished dev [unoptimized + debuginfo]
     Running `target/debug/my_oxide_app`

Summary

We've explored Cargo's advanced features for professional development:

  • Release Profiles let you customize compilation settings for different scenarios
  • Publishing to crates.io makes your libraries available to the entire community
  • Workspaces help you organize multiple related crates in a single project
  • cargo install distributes binary tools for easy system-wide installation

These features make Cargo an exceptionally powerful tool for managing Oxide and Rust projects at scale. Whether you're building libraries for others, organizing complex multi-crate projects, or distributing command-line tools, Cargo provides the structure and automation to do it efficiently.

Oxide-Specific Notes

Remember that Oxide code (.ox files) works seamlessly in all of these scenarios:

  • Profiles: Apply to Oxide code just like Rust code
  • Publishing: Oxide libraries can be published to crates.io just like Rust libraries
  • Workspaces: Mix Oxide and Rust crates freely in the same workspace
  • cargo install: Works with Oxide binaries just as well as Rust binaries

When creating new projects in a workspace, use cargo new --oxide to create Oxide-specific projects with .ox files instead of .rs files.

In the next chapter, we'll explore smart pointers, which give Oxide programs more flexibility and safety when managing heap-allocated data.

Extending Cargo with Custom Commands

Cargo is extensible. Any executable named cargo-<command> on your PATH can be invoked as cargo <command>.

A Simple Example

If you create an executable named cargo-oxide, you can run:

cargo oxide

Cargo will locate cargo-oxide and execute it, passing along any arguments.

Why This Matters

Custom Cargo commands are a convenient way to build project tooling:

  • Code generators
  • Lint wrappers
  • Release automation
  • Project-specific scripts

Because these commands are just executables, you can write them in Oxide, Rust, or any language.

Smart Pointers

A smart pointer is a data structure that acts like a pointer but also has additional metadata and capabilities. Smart pointers are a powerful feature in Oxide that give you more flexibility and safety than ordinary references.

In this chapter, we'll explore the most important smart pointers in Oxide's standard library:

  • Box<T> - For allocating values on the heap
  • Rc<T> - For multiple ownership via reference counting
  • RefCell<T> - For interior mutability patterns
  • Reference cycles - How to avoid memory leaks with circular references

What are Smart Pointers?

Smart pointers are pointers with additional behavior and metadata. Whereas references only borrow the data they point to, the most common smart pointers in the standard library own their data, managing its allocation and cleanup through the same ownership rules that govern the rest of Oxide.

Smart pointers are typically implemented using structs, but they implement the Deref and Drop traits to give them pointer-like behavior:

  • The Deref trait allows a smart pointer to be treated like a regular reference through deref coercion
  • The Drop trait lets you customize what happens when a smart pointer goes out of scope

When to Use Smart Pointers

Smart pointers solve different problems:

  • Box<T>: When you need to move a large value or allocate something on the heap
  • Rc<T>: When you need multiple parts of your program to own the same data
  • RefCell<T>: When you need interior mutability—the ability to mutate data even through an immutable reference

Let's explore each one in detail.

Comparing Smart Pointers

Here's a quick comparison of the three main smart pointers:

Smart Pointer   Use Case                               Cost
Box<T>          Single ownership, heap allocation      No runtime overhead
Rc<T>           Multiple ownership (single-threaded)   Reference count tracking
RefCell<T>      Interior mutability                    Runtime borrow checking

As you read through this chapter, you'll understand when and why to use each one.

A Note on Performance

The Oxide compiler optimizes smart pointer operations aggressively. In many cases, the overhead of smart pointers is eliminated through inlining and monomorphization. However, Rc<T> and RefCell<T> do have runtime costs because they maintain additional state, so use them thoughtfully in performance-critical code.

In the next section, we'll dive into Box<T>, the simplest and most commonly used smart pointer.

Box<T>: Simple Heap Allocation

The most straightforward smart pointer is Box<T>, which allows you to store data on the heap rather than the stack. When you box a value, ownership of that value moves into the box. When the box goes out of scope, the boxed value is dropped and the memory is freed.

Using Box<T> to Store Data on the Heap

In most cases, we know at compile time whether we need data on the stack or the heap. However, there are cases where storing data on the heap is advantageous:

1. When you have a large value and want to move it cheaply

When you have a large struct, moving it by value copies all the data. Using Box<T> instead moves just a pointer:

struct LargeData {
    public data: [Int; 4096],
}

fn main() {
    // Without Box: moving this struct copies all 4096 elements
    let large = LargeData { data: [0; 4096] }

    // With Box: only a heap pointer is moved
    let boxed = Box { LargeData { data: [0; 4096] } }
    processLargeData(boxed)
}

fn processLargeData(data: Box<LargeData>) {
    println!("Processing data")
}

2. When you need trait objects

The most important use of Box<T> is creating trait objects for dynamic dispatch. We'll explore this more in the OOP chapter, but here's a simple example:

public trait Draw {
    fn draw(): Void
}

public struct Circle {}
public struct Square {}

extension Circle: Draw {
    fn draw() { println!("Drawing a circle") }
}

extension Square: Draw {
    fn draw() { println!("Drawing a square") }
}

fn main() {
    let shapes: Vec<Box<dyn Draw>> = vec![
        Box { Circle {} } as Box<dyn Draw>,
        Box { Square {} } as Box<dyn Draw>,
    ]

    for shape in shapes {
        shape.draw()
    }
}

3. When you want to avoid stack overflow

Very large structs can overflow the stack. Boxing the data allocates it on the heap instead:

// This could overflow the stack
struct HugeArray {
    public data: [Int; 1000000],  // several megabytes on the stack!
}

// This is safer - only stores a pointer
let huge = Box { HugeArray { data: [0; 1000000] } }

Deref Coercion

Box<T> implements the Deref trait, which means you can use a boxed value as if it were a regular reference. This is called deref coercion.

fn main() {
    let boxed = Box { 5 }
    println!("Boxed value: \(boxed)")  // Automatically deref'd
}

fn printInt(value: &Int) {
    println!("The value is: \(value)")
}

fn main() {
    let boxed = Box { 5 }
    printInt(&boxed)  // Deref coercion happens here
}

Because Box<T> implements Deref, the Oxide compiler automatically converts &Box<T> to &T when needed. This makes boxed values feel natural to work with.
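In Rust terms, the same coercion looks like this (the function name is illustrative):

```rust
// &Box<String> coerces to &String and then to &str because both
// Box<T> and String implement Deref.
fn print_len(s: &str) -> usize {
    println!("length: {}", s.len());
    s.len()
}

fn main() {
    let boxed: Box<String> = Box::new(String::from("hello"));
    let n = print_len(&boxed); // deref coercion: &Box<String> -> &str
    assert_eq!(n, 5);
}
```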

Recursive Types

One of the classic uses for Box<T> is building recursive data structures, like a linked list or tree. Without Box<T>, recursive types would have infinite size because the compiler wouldn't know how to calculate the size.

Consider a simple linked list:

public enum List {
    case Cons(Int, Box<List>)
    case Nil
}

fn main() {
    let list = List.Cons(1,
        Box { List.Cons(2,
            Box { List.Cons(3,
                Box { List.Nil }
            )}
        )}
    )
}

Why does this work? Because Box<T> has a known size (the size of a pointer), the compiler can now calculate the size of List:

  • Cons(Int, Box<List>) is an Int plus a Box pointer, both known sizes
  • Nil is a zero-sized variant

Without the Box, Cons(Int, List) would be infinitely sized because List contains itself.
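The Rust equivalent makes the sizing argument concrete: Box gives the recursive field a fixed, pointer-sized footprint, so the enum's total size is known at compile time:

```rust
use std::mem::size_of;

enum List {
    Cons(i64, Box<List>),
    Nil,
}

fn main() {
    use List::{Cons, Nil};

    // Build [1, 2, 3] as nested Cons cells.
    let list = Cons(1, Box::new(Cons(2, Box::new(Cons(3, Box::new(Nil))))));

    // The size is fixed no matter how long the list grows.
    println!("size of List: {} bytes", size_of::<List>());

    // Walk the list to show it is usable.
    let mut sum = 0;
    let mut cur = &list;
    while let Cons(value, rest) = cur {
        sum += value;
        cur = rest;
    }
    assert_eq!(sum, 6);
}
```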

A Better Example: Binary Tree

Here's a practical example—a simple binary search tree:

public struct TreeNode<T> {
    public value: T,
    public left: Box<TreeNode<T>>?,
    public right: Box<TreeNode<T>>?,
}

extension TreeNode<T> {
    public static fn new(value: T): TreeNode<T> {
        TreeNode {
            value: value,
            left: null,
            right: null,
        }
    }

    public mutating fn insertLeft(value: T) {
        self.left = Box { TreeNode.new(value) }
    }

    public mutating fn insertRight(value: T) {
        self.right = Box { TreeNode.new(value) }
    }
}

fn main() {
    var root = TreeNode.new(5)
    root.insertLeft(3)
    root.insertRight(7)

    println!("Root: \(root.value)")
    println!("Left: \(root.left?.value)")
    println!("Right: \(root.right?.value)")
}

When NOT to Use Box<T>

  • For small stack-allocated values: Boxing adds indirection (pointer dereferencing) without benefit
  • When you don't need trait objects: Just use references or ownership directly
  • When you need multiple ownership: Use Rc<T> instead

Box<T> vs Rust's Box<T>

In Oxide, Box<T> works identically to Rust's Box<T>. The syntax is the same, and the semantics are identical. The main difference is in how you construct boxes:

// Oxide - uses `Box { value }` syntax
let boxed = Box { String.from("hello") }

// Rust - uses Box::new(value)
let boxed = Box::new(String::from("hello"));

Both are equivalent. Oxide's syntax is more consistent with the struct literal syntax.

Summary

Box<T> is Oxide's simplest smart pointer. Use it when you:

  • Need to allocate something on the heap
  • Want cheap moves of large values
  • Need to create trait objects for dynamic dispatch
  • Want to build recursive data structures

In the next section, we'll explore Rc<T>, which allows multiple ownership of the same value.

Treating Smart Pointers Like Regular References

The Deref trait lets smart pointers behave like references. When a type implements Deref, you can use the * operator to access its inner value, and Rust's deref coercions apply automatically.

Implementing Deref

Here is a simple smart pointer that wraps a value:

public struct MyBox<T> {
    value: T,
}

extension MyBox<T> {
    public static fn new(value: T): MyBox<T> {
        MyBox { value }
    }
}

extension MyBox<T>: Deref {
    type Target = T

    fn deref(): &T {
        &self.value
    }
}

Deref Coercions

Once Deref is implemented, Oxide can coerce references for you:

fn greet(name: &str) {
    println!("Hello, \(name)")
}

fn main() {
    let name = MyBox.new(String.from("Oxide"))

    // Manual deref
    greet(&*name)

    // Deref coercion
    greet(&name)
}

Deref coercions make APIs ergonomic while still keeping explicit ownership and borrowing rules.
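For reference, the Rust equivalent of MyBox implements the same trait:

```rust
use std::ops::Deref;

// A minimal wrapper type, mirroring the Oxide MyBox sketch.
struct MyBox<T>(T);

impl<T> MyBox<T> {
    fn new(value: T) -> MyBox<T> {
        MyBox(value)
    }
}

impl<T> Deref for MyBox<T> {
    type Target = T;

    fn deref(&self) -> &T {
        &self.0
    }
}

fn greet(name: &str) -> String {
    format!("Hello, {name}")
}

fn main() {
    let name = MyBox::new(String::from("Oxide"));

    // Manual deref and deref coercion produce the same &str.
    assert_eq!(greet(&*name), "Hello, Oxide");
    assert_eq!(greet(&name), "Hello, Oxide");
    println!("{}", greet(&name));
}
```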

Running Code on Cleanup with the Drop Trait

The Drop trait lets you run code automatically when a value goes out of scope. This is the foundation of resource management in Rust and Oxide.

Implementing Drop

public struct Connection {
    host: String,
}

extension Connection {
    public static fn new(host: String): Connection {
        Connection { host }
    }
}

extension Connection: Drop {
    mutating fn drop() {
        println!("Closing connection to \(self.host)")
    }
}

When a Connection is dropped, the drop method runs automatically.

Dropping Early

You can drop a value before the end of its scope by calling drop:

fn main() {
    let conn = Connection.new("db.example.com")
    drop(conn)
    println!("Connection closed early")
}

This is useful when you want to release resources as soon as possible.

Rc<T>: Reference-Counted Shared Ownership

In most cases, ownership is clear: you know exactly which variable owns a given value. However, sometimes a single value needs to be owned by multiple parts of your program. For example, in a graph data structure, multiple edges might point to the same node, and logically that node is owned by all the edges that point to it.

For these cases, Oxide provides Rc<T>, a type that enables reference counting. Rc stands for "Reference Counted." An Rc<T> keeps track of how many owners a value has and only frees the value when there are no more owners.

When to Use Rc<T>

  • When you have data that needs to be owned by multiple parts of your program
  • When you don't know at compile time which part will finish using the data last
  • In graph structures where multiple nodes reference the same data
  • In single-threaded code (for multi-threaded, use Arc<T>)

A Practical Example: Graph with Multiple Owners

Let's say we're building a graph where multiple nodes can reference the same data:

public struct Node {
    public name: String,
    public neighbors: Vec<Rc<Node>>,
}

fn main() {
    // Create a node using Rc
    let node1 = Rc { Node {
        name: "Node 1",
        neighbors: vec![],
    } }

    // Clone the Rc to create another reference (not a copy of the data)
    let node2 = Rc.clone(&node1)

    // Both node1 and node2 point to the same data
    println!("Node 1: \(node1.name)")
    println!("Node 2: \(node2.name)")

    // The reference count is 2 at this point
    println!("Reference count: \(Rc.strongCount(&node1))")
}

Cloning an Rc<T>

When you call Rc.clone(&rcValue), you're creating another reference to the same value, not copying the value itself. This is different from calling clone() on the value inside.

let original = Rc { String.from("hello") }

// This creates another Rc pointing to the same String
let cloned = Rc.clone(&original)

// Both original and cloned point to the same String
println!("Reference count: \(Rc.strongCount(&original))")  // Prints 2

Why the explicit Rc.clone() instead of just .clone()? Because Rc.clone() is cheap—it just increments a counter—while clone() on the String inside would copy the entire string. Using Rc.clone() makes it explicit that you're doing cheap reference counting, not expensive data copying.

Reference Counting in Action

Let's trace through what happens with reference counts:

fn main() {
    // rc1 points to a String, reference count = 1
    let rc1 = Rc { String.from("hello") }
    println!("Count after creating rc1: \(Rc.strongCount(&rc1))")  // 1

    {
        // rc2 is a new reference to the same String, count = 2
        let rc2 = Rc.clone(&rc1)
        println!("Count after creating rc2: \(Rc.strongCount(&rc1))")  // 2

        // Inside this scope, both rc1 and rc2 are valid
        println!("rc1: \(rc1), rc2: \(rc2)")
    }  // rc2 goes out of scope, count decrements to 1

    println!("Count after rc2 goes out of scope: \(Rc.strongCount(&rc1))")  // 1
}  // rc1 goes out of scope, count = 0, String is dropped
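The Rust equivalent lets you assert the counts directly:

```rust
use std::rc::Rc;

fn main() {
    let rc1 = Rc::new(String::from("hello"));
    assert_eq!(Rc::strong_count(&rc1), 1);

    {
        // Rc::clone bumps the count; the String is not copied.
        let rc2 = Rc::clone(&rc1);
        assert_eq!(Rc::strong_count(&rc1), 2);
        assert!(Rc::ptr_eq(&rc1, &rc2)); // same allocation
    } // rc2 dropped here, count back to 1

    assert_eq!(Rc::strong_count(&rc1), 1);
    println!("final count: {}", Rc::strong_count(&rc1));
}
```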

Rc<T> with Structs

Here's a more practical example using structs:

public struct Cons<T> {
    public head: T,
    public tail: Rc<Cons<T>>?,
}

fn main() {
    // Create a list: [3, [5, [10]]]
    let list1 = Rc { Cons {
        head: 3,
        tail: Rc { Cons {
            head: 5,
            tail: Rc { Cons {
                head: 10,
                tail: null,
            } },
        } },
    } }

    // Create list2 by sharing the tail of list1
    let list2 = Rc { Cons {
        head: 20,
        tail: Rc.clone(&list1),
    } }

    // Create list3 by sharing the tail of list1
    let list3 = Rc { Cons {
        head: 30,
        tail: Rc.clone(&list1),
    } }

    // Now list1, list2, and list3 share the same tail data
    println!("list1 reference count: \(Rc.strongCount(&list1))")  // 3
}

Rc<T> Does NOT Enable Mutation

An important limitation of Rc<T> is that it gives you immutable references to the data inside. You cannot mutate data inside an Rc<T>:

let rc = Rc { vec![1, 2, 3] }
// rc.push(4)  // Error: cannot mutate through an Rc

This is by design. Since multiple parts of your code might be referencing the same data, allowing mutation could cause data races and undefined behavior.

If you need interior mutability alongside reference counting, combine Rc<T> with RefCell<T>, which we'll explore in the next section.

Rc<T> vs Other Ownership Models

Ownership      Cost                       When to Use
Owned value    None                       Single owner, known at compile time
&T reference   None                       Temporary borrowing
Box<T>         Minimal                    Single owner on heap
Rc<T>          Reference count overhead   Multiple owners (single-threaded)
Arc<T>         Atomic reference count     Multiple owners (multi-threaded)

Reference Counting Performance

Rc<T> has a small but real performance cost:

  1. Memory overhead: Each Rc allocates extra space for the reference count
  2. CPU overhead: Cloning increments a counter; dropping decrements it (atomic operations in Arc<T>)
  3. Pointer indirection: Accessing the data requires dereferencing the pointer

For most applications, this overhead is negligible. However, in performance-critical code or when creating millions of references, consider whether you really need Rc<T> or if another approach would be better.

Rc<T> is Single-Threaded

Rc<T> is designed for single-threaded programs. If you need reference counting in a multi-threaded program, use Arc<T> (Atomic Reference Counted) instead, which uses atomic operations for thread safety.

// Single-threaded (use Rc)
let rc = Rc { data }

// Multi-threaded (use Arc)
let arc = Arc { data }
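A short Rust sketch shows why the atomic variant matters: an Arc handle can be moved to another thread, which an Rc handle cannot:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![1, 2, 3]);

    // Clone the handle and move the clone into a worker thread.
    let worker_data = Arc::clone(&data);
    let handle = thread::spawn(move || worker_data.iter().sum::<i32>());

    let total = handle.join().unwrap();
    assert_eq!(total, 6);

    // The original handle is still usable on the main thread.
    println!("total: {total}, len: {}", data.len());
}
```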

Real-World Example: Document Structure

Here's a practical example of using Rc<T> for a document structure where paragraphs might share common styling:

public struct Paragraph {
    public text: String,
    public style: Rc<Style>,
}

public struct Style {
    public fontSize: Int,
    public fontColor: String,
}

fn main() {
    // Create a style that will be shared
    let headingStyle = Rc { Style {
        fontSize: 24,
        fontColor: "blue",
    } }

    let heading = Paragraph {
        text: "Welcome",
        style: Rc.clone(&headingStyle),
    }

    let subheading = Paragraph {
        text: "Introduction",
        style: Rc.clone(&headingStyle),
    }

    // Both paragraphs share the same style object
    println!("Style reference count: \(Rc.strongCount(&headingStyle))")  // 3
}

Summary

Rc<T> enables multiple ownership of the same value through reference counting:

  • Use Rc<T> when you need multiple parts of your program to own the same data
  • Clone an Rc<T> with Rc.clone() (cheap—increments a counter)
  • The data is only dropped when the reference count reaches zero
  • Rc<T> provides immutable access; combine with RefCell<T> for interior mutability
  • Use Arc<T> for multi-threaded code

In the next section, we'll see how to achieve interior mutability—the ability to mutate data through an immutable reference—using RefCell<T>.

RefCell<T>: Interior Mutability

Interior mutability is a design pattern that allows you to mutate data even when there are only immutable references to that data. Ordinarily the borrow rules forbid this: to mutate data, you need an exclusive, mutable reference.

RefCell<T> is a smart pointer that enforces the borrowing rules at runtime instead of at compile time. This trades compile-time safety for runtime flexibility.

When to Use RefCell<T>

Use RefCell<T> when:

  • You need to mutate data through an immutable reference (interior mutability)
  • You're sure the borrowing rules will be satisfied at runtime, but the compiler cannot verify it
  • You have a single-threaded program (use Mutex<T> for multi-threaded code)

A common scenario is when you have a value that has a method that takes &self but needs to modify some internal state.

A Motivating Example: Test Mock Objects

Imagine you're writing a test and you need to create a mock object that tracks how many times certain methods were called:

public trait Logger {
    fn log(message: String): Void
}

public struct MockLogger {
    // We want to count calls, but log() takes &self, not &mut self
    public callCount: Int,
}

extension MockLogger: Logger {
    fn log(message: String) {
        // This won't compile: callCount is immutable!
        // self.callCount = self.callCount + 1
    }
}

The problem: callCount is immutable, but we want to increment it inside log(). The solution: use RefCell<T> to enable interior mutability.

Using RefCell<T>

import std.cell.RefCell

public struct MockLogger {
    // Wrap callCount in a RefCell
    public callCount: RefCell<Int>,
}

extension MockLogger: Logger {
    fn log(message: String) {
        // Borrow callCount mutably at runtime
        var count = self.callCount.borrowMut()
        count = count + 1
    }
}

fn main() {
    let logger = MockLogger {
        callCount: RefCell { 0 }
    }

    logger.log("test message")
    println!("Call count: \(logger.callCount.borrow())")  // 1
}

The borrow() and borrowMut() Methods

RefCell<T> uses the borrow() and borrowMut() methods to enforce borrowing rules at runtime:

let cell = RefCell { 5 }

// Immutable borrow
let ref1 = cell.borrow()
println!("\(ref1)")  // 5

// You can have multiple immutable borrows
let ref2 = cell.borrow()
println!("\(ref2)")  // 5

// Mutable borrow
var refMut = cell.borrowMut()
refMut = 10
println!("\(refMut)")  // 10

// After the mutable borrow goes out of scope, you can borrow again
let ref3 = cell.borrow()
println!("\(ref3)")  // 10

Runtime Panics

If you violate the borrowing rules at runtime, RefCell<T> will panic:

let cell = RefCell { String.from("hello") }

// Immutable borrow
let ref1 = cell.borrow()

// This panics! We already have an immutable borrow
// let refMut = cell.borrowMut()  // Panic: already borrowed

// After ref1 goes out of scope, we can borrow mutably

This is the trade-off: RefCell<T> catches borrowing violations at runtime instead of compile time. You get more flexibility, but you must be careful not to violate the rules.
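In Rust, the same runtime check is observable without panicking by using the fallible try_borrow_mut:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(String::from("hello"));

    let r1 = cell.borrow(); // immutable borrow is active
    // borrow_mut() would panic here; try_borrow_mut() reports the
    // conflict as an Err instead.
    assert!(cell.try_borrow_mut().is_err());
    drop(r1); // release the immutable borrow

    // Now a mutable borrow succeeds.
    cell.borrow_mut().push_str(", world");
    assert_eq!(*cell.borrow(), "hello, world");
    println!("{}", cell.borrow());
}
```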

Rc<T> + RefCell<T>: Mutable Shared State

The real power comes from combining Rc<T> (multiple ownership) with RefCell<T> (interior mutability). This pattern allows you to have multiple owners that can mutate shared data:

import std.rc.Rc
import std.cell.RefCell

public struct Node {
    public value: Int,
    public next: Rc<RefCell<Node>>?,
}

fn main() {
    let node = Rc { RefCell { Node {
        value: 5,
        next: null,
    } } }

    // Clone the Rc to get another reference
    let node2 = Rc.clone(&node)

    // Both node and node2 point to the same data, and we can mutate it
    var borrowed = node.borrowMut()
    borrowed.value = 10

    println!("Node value: \(node2.borrow().value)")  // 10
}
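
The equivalent Rust shows the same combination, with the list-node structure simplified to just a value for clarity:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    value: i32,
}

fn main() {
    let node = Rc::new(RefCell::new(Node { value: 5 }));

    // Cloning the Rc adds another owner of the same allocation
    let node2 = Rc::clone(&node);

    // Mutate through one handle...
    node.borrow_mut().value = 10;

    // ...and observe the change through the other
    assert_eq!(node2.borrow().value, 10);
}
```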

RefCell<T> in Practice: Tracking State

A practical use case is a component that tracks internal state (here, a click counter) behind an immutable interface:

public struct Button {
    public label: String,
    public onClickCallCount: RefCell<Int>,
}

extension Button {
    fn simulateClick() {
        var count = self.onClickCallCount.borrowMut()
        count = count + 1
        println!("Button clicked \(count) times")
    }
}

fn main() {
    let button = Button {
        label: "Click me",
        onClickCallCount: RefCell { 0 },
    }

    button.simulateClick()  // Clicked 1 times
    button.simulateClick()  // Clicked 2 times
    button.simulateClick()  // Clicked 3 times
}

RefCell<T> vs Mutable References

When should you use RefCell<T> instead of &mut T?

Approach           | When to Use                            | Cost
-------------------|----------------------------------------|-----------------------------------
&mut T             | Single owner, compile-time flexibility | None
RefCell<T>         | Multiple borrows, need mutability      | Runtime checks
Rc<T> + RefCell<T> | Multiple owners with mutation          | Reference counts + runtime checks

Use &mut T when you can—it's checked at compile time and has no runtime cost. Use RefCell<T> when the borrow rules prevent what you need to do.

Common Patterns with RefCell<T>

Pattern 1: Cached Values

Store a cached value that is computed on first access:

public struct Expensive {
    public value: Int,
    public cached: RefCell<Int?>,
}

extension Expensive {
    fn getValue(): Int {
        if let Some(cached) = self.cached.borrow() {
            return cached
        }

        let result = self.value * 2
        self.cached.borrowMut() = result
        result
    }
}
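
A Rust sketch of the same caching pattern, with the nullable Int? field spelled as Option<i32>:

```rust
use std::cell::RefCell;

struct Expensive {
    value: i32,
    cached: RefCell<Option<i32>>,
}

impl Expensive {
    fn get_value(&self) -> i32 {
        // Return the cached result if we have one
        if let Some(cached) = *self.cached.borrow() {
            return cached;
        }
        // Otherwise compute, store, and return it
        let result = self.value * 2;
        *self.cached.borrow_mut() = Some(result);
        result
    }
}

fn main() {
    let e = Expensive { value: 21, cached: RefCell::new(None) };
    assert_eq!(e.get_value(), 42);
    assert_eq!(e.get_value(), 42); // second call returns the cached value
}
```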

Pattern 2: Tracking Internal State

Track internal state without exposing mutability in the public interface:

public struct Counter {
    public name: String,
    public count: RefCell<Int>,
}

extension Counter {
    public fn increment() {
        var c = self.count.borrowMut()
        c = c + 1
    }

    public fn getValue(): Int {
        self.count.borrow()
    }
}

Performance Considerations

  • No compile-time checks: The cost is paid at runtime when borrowing
  • Panic risk: Borrowing violations abort the program at runtime instead of failing to compile
  • Indirection: Accessing data requires dereferencing through RefCell<T>

Use RefCell<T> sparingly in performance-critical code. For hot loops, prefer compile-time verified borrowing.

RefCell<T> is Single-Threaded

RefCell<T> is designed for single-threaded code. In multi-threaded code, use Mutex<T> instead, which uses locks instead of runtime panic checking.
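
For a taste of the multi-threaded counterpart, here is a minimal Rust sketch of Mutex<T>, which offers the same interior-mutability shape behind a lock:

```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(5);

    {
        // lock() blocks until the lock is available, instead of panicking
        let mut guard = m.lock().unwrap();
        *guard += 1;
    } // the lock is released when the guard goes out of scope

    assert_eq!(*m.lock().unwrap(), 6);
}
```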

Real-World Example: Observer Pattern

Here's how you might implement the observer pattern with Rc<T> and RefCell<T>:

public struct Subject {
    public observers: RefCell<Vec<String>>,
}

extension Subject {
    public fn notifyObservers(message: String) {
        for observer in self.observers.borrow() {
            println!("Notifying \(observer): \(message)")
        }
    }

    public fn attachObserver(observer: String) {
        self.observers.borrowMut().push(observer)
    }
}

fn main() {
    let subject = Subject {
        observers: RefCell { vec![] }
    }

    subject.attachObserver("Observer A")
    subject.attachObserver("Observer B")

    subject.notifyObservers("Hello, observers!")
}

Summary

RefCell<T> provides interior mutability by moving borrow checking from compile time to runtime:

  • Use RefCell<T> when you need to mutate data through an immutable reference
  • Call .borrow() for immutable access and .borrowMut() for mutable access
  • Borrowing violations cause runtime panics, not compile errors
  • Combine Rc<T> and RefCell<T> for multiple owners with mutable access
  • Use Mutex<T> for multi-threaded code instead

In the next section, we'll explore how circular references can cause memory leaks even with Rc<T>, and how to prevent them.

Reference Cycles and Memory Leaks

Oxide's ownership system prevents most memory leaks, but it's still possible to leak memory by creating reference cycles: values that refer to each other in a circle, preventing their reference counts from ever reaching zero.

Creating a Reference Cycle

Let's see how a reference cycle can occur with Rc<T> and RefCell<T>:

import std.rc.Rc
import std.cell.RefCell

public struct Node {
    public value: Int,
    public next: RefCell<Rc<Node>?>,
}

fn main() {
    // Create a node
    let a = Rc { Node {
        value: 5,
        next: RefCell { null }
    } }

    // Create another node
    let b = Rc { Node {
        value: 10,
        next: RefCell { null }
    } }

    // This creates a cycle: a -> b -> a
    a.next.borrowMut() = Rc.clone(&b)
    b.next.borrowMut() = Rc.clone(&a)

    // When a and b go out of scope, the memory is NOT freed
    // because each holds a reference to the other
}

Why Reference Cycles Cause Memory Leaks

Let's trace the reference counts:

  1. After creating a: a ref count = 1
  2. After creating b: b ref count = 1
  3. After a.next = b: b ref count = 2
  4. After b.next = a: a ref count = 2
  5. a goes out of scope: a ref count = 1 (not zero!)
  6. b goes out of scope: b ref count = 1 (not zero!)

Because the reference counts never reach zero, the memory is never freed—even though both a and b are unreachable from the rest of the program. This is a memory leak.
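
The trace above can be observed directly in Rust with Rc::strong_count; after the cycle is built, each node's count is stuck at 2:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(None) });

    // Build the cycle: a -> b -> a
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    *b.next.borrow_mut() = Some(Rc::clone(&a));

    // Each node is now kept alive by the other,
    // so neither count can ever fall back to zero
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
}
```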

The Solution: Weak References

The solution is to use weak references instead of strong references. Weak<T> is like Rc<T>, but it doesn't prevent the value from being dropped:

  • Strong reference (Rc<T>): Keeps a value alive; prevents it from being dropped
  • Weak reference (Weak<T>): Does not keep a value alive; allows it to be dropped

When all strong references are gone, the value is dropped, even if there are weak references remaining. Weak references that point to dropped values become null.

import std.rc.Rc
import std.rc.Weak
import std.cell.RefCell

public struct Node {
    public value: Int,
    public next: RefCell<Rc<Node>?>,
    public previous: RefCell<Weak<Node>?>,  // Use Weak for back-references
}

fn main() {
    let a = Rc { Node {
        value: 5,
        next: RefCell { null },
        previous: RefCell { null },
    } }

    let b = Rc { Node {
        value: 10,
        next: RefCell { null },
        previous: RefCell { null },
    } }

    // a -> b (strong reference)
    a.next.borrowMut() = Rc.clone(&b)

    // b -> a (weak reference - doesn't create a cycle!)
    b.previous.borrowMut() = Weak.clone(&a)

    // Now when we drop a and b, the memory is properly freed
}
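
In Rust, the weak back-reference is created with Rc::downgrade (the role Weak.clone plays in the Oxide code above). A sketch with the reference counts made explicit:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    next: RefCell<Option<Rc<Node>>>,
    previous: RefCell<Option<Weak<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None), previous: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(None), previous: RefCell::new(None) });

    // a -> b: strong reference
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    // b -> a: weak back-reference, so no cycle of strong counts
    *b.previous.borrow_mut() = Some(Rc::downgrade(&a));

    assert_eq!(Rc::strong_count(&a), 1); // the weak reference adds no strong count
    assert_eq!(Rc::weak_count(&a), 1);
    assert_eq!(Rc::strong_count(&b), 2);
}
```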

When to Use Weak References

Use weak references when you have a hierarchical relationship where:

  • A parent owns its children (strong references)
  • Children reference their parent (weak references)

Common examples:

1. Tree Structure

public struct TreeNode<T> {
    public value: T,
    public children: RefCell<Vec<Rc<TreeNode<T>>>>,  // Strong: owns children
    public parent: RefCell<Weak<TreeNode<T>>?>,      // Weak: doesn't own parent
}

extension TreeNode {
    public fn newRoot(value: T): Rc<TreeNode<T>> {
        Rc { TreeNode {
            value: value,
            children: RefCell { vec![] },
            parent: RefCell { null },
        } }
    }

    public fn addChild(parent: Rc<TreeNode<T>>, child: T) {
        let newChild = Rc { TreeNode {
            value: child,
            children: RefCell { vec![] },
            parent: RefCell { Weak.clone(&parent) },
        } }

        parent.children.borrowMut().push(newChild)
    }
}

2. Graph with Shared Edges

public struct Person {
    public name: String,
    public friends: RefCell<Vec<Weak<Person>>>,  // Weak references to prevent cycles
}

fn main() {
    let alice = Rc { Person {
        name: "Alice",
        friends: RefCell { vec![] },
    } }

    let bob = Rc { Person {
        name: "Bob",
        friends: RefCell { vec![] },
    } }

    // Both can have weak references to each other
    alice.friends.borrowMut().push(Weak.clone(&bob))
    bob.friends.borrowMut().push(Weak.clone(&alice))

    // Memory is properly freed when alice and bob go out of scope
}

Using Weak References

To use a weak reference, you must upgrade it to a strong reference first:

let weakRef: Weak<MyType> = ...

// Upgrade to a strong reference
if let Some(strongRef) = weakRef.upgrade() {
    // Use strongRef
    println!("Value: \(strongRef.value)")
}

The upgrade() method returns Rc<T>? because the value might have been dropped.
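
A Rust sketch showing both outcomes of upgrade(): Some while a strong reference is alive, None after the last one is dropped:

```rust
use std::rc::Rc;

fn main() {
    let strong = Rc::new(42);
    let weak = Rc::downgrade(&strong);

    // While a strong reference exists, upgrade() returns Some
    assert_eq!(weak.upgrade().map(|rc| *rc), Some(42));

    drop(strong);

    // Once the last strong reference is gone, the value is dropped
    // and upgrade() returns None
    assert!(weak.upgrade().is_none());
}
```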

Complete Example: Doubly-Linked List

Here's a complete example of a doubly-linked list that uses weak references to prevent memory leaks:

import std.rc.Rc
import std.rc.Weak
import std.cell.RefCell

public struct ListNode<T> {
    public data: T,
    public next: RefCell<Rc<ListNode<T>>?>,
    public previous: RefCell<Weak<ListNode<T>>?>,
}

public struct DoublyLinkedList<T> {
    public head: RefCell<Rc<ListNode<T>>?>,
    public tail: RefCell<Weak<ListNode<T>>?>,
}

extension DoublyLinkedList {
    public static fn new(): DoublyLinkedList<T> {
        DoublyLinkedList {
            head: RefCell { null },
            tail: RefCell { null },
        }
    }

    public mutating fn push(value: T) {
        let newNode = Rc { ListNode {
            data: value,
            next: RefCell { null },
            previous: RefCell { null },
        } }

        if let Some(tailNode) = self.tail.borrow().upgrade() {
            tailNode.next.borrowMut() = Rc.clone(&newNode)
            newNode.previous.borrowMut() = Weak.clone(&tailNode)
        } else {
            self.head.borrowMut() = Rc.clone(&newNode)
        }

        self.tail.borrowMut() = Weak.clone(&newNode)
    }
}

fn main() {
    var list: DoublyLinkedList<Int> = DoublyLinkedList.new()
    list.push(1)
    list.push(2)
    list.push(3)

    // List is properly cleaned up when it goes out of scope
}

Detecting Reference Cycles in Your Code

Ask yourself these questions:

  1. Do I have circular references? If A points to B and B points to A, you have a potential cycle
  2. Are they all strong references? If yes, you have a memory leak
  3. Is there an owner relationship? Parent owns children → parent uses Rc<T>, children use Weak<T>

If you answer "yes" to questions 1 and 2, you probably need weak references.

Reference Cycles and Rc.strongCount()

You can use Rc.strongCount() to check the reference count and detect cycles:

let a = Rc { Node { value: 5 } }
println!("Count: \(Rc.strongCount(&a))")  // 1

let b = Rc.clone(&a)
println!("Count: \(Rc.strongCount(&a))")  // 2

// If the count doesn't decrease when a value goes out of scope,
// it might be part of a reference cycle

Performance Impact of Weak References

Weak references have minimal overhead:

  • Slightly larger allocation (each Rc allocation stores both a strong and a weak count)
  • Slight cost to upgrade() operation
  • Minimal cost compared to fixing a memory leak

Summary

Reference cycles can prevent values from being dropped, causing memory leaks even in Oxide:

  • A reference cycle occurs when values refer to each other circularly
  • Strong references (Rc<T>) in a cycle keep every value alive, so nothing is ever dropped
  • Use Weak<T> for back-references in hierarchical structures
  • Parent owns children (strong Rc<T>); children reference parent (Weak<T>)
  • Always upgrade() a weak reference before using it
  • Detecting cycles: ask if there's an ownership hierarchy; if so, use weak references for back-references

This completes our exploration of smart pointers! You now understand:

  • Box<T> for single ownership on the heap
  • Rc<T> for multiple ownership
  • RefCell<T> for interior mutability
  • How to prevent memory leaks with weak references

These tools give you the flexibility to write complex data structures while maintaining Oxide's memory safety guarantees.

Fearless Concurrency

Handling concurrent programming safely and efficiently is one of Rust's major goals, and Oxide inherits all of these powerful guarantees. Concurrent programming, where different parts of a program execute independently, and parallel programming, where different parts execute at the same time, are becoming increasingly important as computers take advantage of multiple processors.

Historically, programming in these contexts has been difficult and error-prone: it's notoriously hard to get right, and debugging concurrent bugs can feel like chasing ghosts. Rust's ownership and type checking systems are a powerful set of tools to help manage memory safety and concurrency problems. By leveraging ownership and type checking, many concurrency errors are caught at compile time rather than at runtime. We call this approach fearless concurrency.

Fearless concurrency allows you to write code that is free of subtle bugs and is easy to refactor without introducing new bugs.

What You'll Learn

This chapter covers:

  1. Threads - How to create threads to run multiple pieces of code at the same time
  2. Message Passing - Using channels to send messages between threads, following the principle "Do not communicate by sharing memory; share memory by communicating"
  3. Shared State - Using Mutex and Arc for multiple threads to access the same data safely
  4. The Sync and Send Traits - How Oxide extends its concurrency guarantees to user-defined types

Oxide's Concurrency Model

Oxide inherits Rust's full concurrency model without changes. The syntax for working with threads, channels, mutexes, and atomic types is largely the same, with Oxide's standard syntactic differences:

Concept              | Oxide Syntax         | Notes
---------------------|----------------------|---------------------------
Import thread module | import std.thread    | Dot notation for paths
Spawn with closure   | thread.spawn { ... } | Trailing closure syntax
Move closure         | move { ... }         | Move keyword before brace
Channel creation     | mpsc.channel()       | Dot notation
Arc cloning          | Arc.clone(&counter)  | Associated function call

The ownership rules that make Rust's concurrency safe are preserved exactly in Oxide. When you spawn a thread with a closure, the borrow checker ensures you either move ownership into the thread or use thread-safe reference types like Arc<T>.

Why Fearless?

Many languages provide tools for handling concurrent problems, but Rust (and by extension, Oxide) is different: the type system catches concurrency bugs at compile time. Consider this: if you write concurrent code in other languages and make a mistake, you might not discover the bug until your code is running in production under heavy load. In Oxide, the compiler catches these bugs before your code even runs.

Here's what the type system prevents:

  • Data races - Two threads accessing the same memory where at least one is writing, without synchronization
  • Dangling references - A thread holding a reference to data that another thread has freed
  • Use after move - Accessing data that has been moved into another thread

The rest of this chapter explores how to use threads effectively while letting the compiler ensure your concurrent code is correct.

A Note on Async

This chapter focuses on OS threads using std.thread. Oxide also supports asynchronous programming with async/await, which provides a different model for concurrency. Async programming is covered separately and uses Oxide's prefix await syntax:

// Oxide uses prefix await
let result = await fetchData(url)?

For now, let's dive into thread-based concurrency.

Using Threads to Run Code Simultaneously

In most current operating systems, an executed program's code runs in a process, and the operating system manages multiple processes at once. Within a program, you can also have independent parts that run simultaneously. The features that run these independent parts are called threads.

Splitting the computation in your program into multiple threads to run multiple tasks at the same time can improve performance, but it also adds complexity. Because threads can run simultaneously, there's no guarantee about the order in which parts of your code on different threads will run. This can lead to problems such as:

  • Race conditions - Threads accessing data or resources in an inconsistent order
  • Deadlocks - Two threads waiting for each other, preventing both from continuing
  • Bugs that happen only in certain situations - Hard to reproduce and fix reliably

Oxide's ownership system and type checker help prevent these problems at compile time. Let's explore how to work with threads safely.

Creating a New Thread with spawn

To create a new thread, we call the thread.spawn function and pass it a closure containing the code we want to run in the new thread:

import std.thread
import std.time.Duration

fn main() {
    thread.spawn {
        for i in 1..10 {
            println!("hi number \(i) from the spawned thread!")
            thread.sleep(Duration.fromMillis(1))
        }
    }

    for i in 1..5 {
        println!("hi number \(i) from the main thread!")
        thread.sleep(Duration.fromMillis(1))
    }
}

Note that when the main thread of an Oxide program completes, all spawned threads are shut down, whether or not they have finished running. The output from this program might be a little different every time, but you'll see something like this:

hi number 1 from the main thread!
hi number 1 from the spawned thread!
hi number 2 from the main thread!
hi number 2 from the spawned thread!
hi number 3 from the main thread!
hi number 3 from the spawned thread!
hi number 4 from the main thread!
hi number 4 from the spawned thread!
hi number 5 from the spawned thread!

The calls to thread.sleep force a thread to stop its execution for a short duration, allowing a different thread to run. The threads will probably take turns, but that isn't guaranteed: it depends on how your operating system schedules the threads.

In this run, the main thread printed first, even though the print statement from the spawned thread appears first in the code. And even though we told the spawned thread to print until i is 9, it only got to 5 before the main thread shut down.

Waiting for All Threads to Finish Using join Handles

The code above has two problems: because the main thread ends, it usually stops the spawned thread prematurely, and there is no guarantee that the spawned thread will get to run at all.

We can fix the problem of the spawned thread not running or ending prematurely by saving the return value of thread.spawn in a variable. The return type of thread.spawn is JoinHandle. A JoinHandle is an owned value that, when we call the join method on it, will wait for its thread to finish:

import std.thread
import std.time.Duration

fn main() {
    let handle = thread.spawn {
        for i in 1..10 {
            println!("hi number \(i) from the spawned thread!")
            thread.sleep(Duration.fromMillis(1))
        }
    }

    for i in 1..5 {
        println!("hi number \(i) from the main thread!")
        thread.sleep(Duration.fromMillis(1))
    }

    handle.join().unwrap()
}
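
The same join pattern in Rust; as a bonus, a JoinHandle also carries the spawned thread's return value, which makes the wait observable without relying on interleaved output:

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        // The closure's return value travels back through the JoinHandle
        (1..10).sum::<i32>()
    });

    // join() blocks until the spawned thread finishes
    let result = handle.join().unwrap();
    assert_eq!(result, 45);
}
```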

Calling join on the handle blocks the thread currently running until the thread represented by the handle terminates. Blocking a thread means that thread is prevented from performing work or exiting. Because we've put the call to join after the main thread's for loop, running this should produce output similar to:

hi number 1 from the main thread!
hi number 1 from the spawned thread!
hi number 2 from the main thread!
hi number 2 from the spawned thread!
hi number 3 from the main thread!
hi number 3 from the spawned thread!
hi number 4 from the main thread!
hi number 4 from the spawned thread!
hi number 5 from the spawned thread!
hi number 6 from the spawned thread!
hi number 7 from the spawned thread!
hi number 8 from the spawned thread!
hi number 9 from the spawned thread!

The two threads continue alternating, but the main thread waits because of the call to handle.join() and does not end until the spawned thread is finished.

But let's see what happens when we instead move handle.join() before the for loop in main:

import std.thread
import std.time.Duration

fn main() {
    let handle = thread.spawn {
        for i in 1..10 {
            println!("hi number \(i) from the spawned thread!")
            thread.sleep(Duration.fromMillis(1))
        }
    }

    handle.join().unwrap()

    for i in 1..5 {
        println!("hi number \(i) from the main thread!")
        thread.sleep(Duration.fromMillis(1))
    }
}

The main thread will wait for the spawned thread to finish and then run its for loop, so the output won't be interleaved anymore:

hi number 1 from the spawned thread!
hi number 2 from the spawned thread!
hi number 3 from the spawned thread!
hi number 4 from the spawned thread!
hi number 5 from the spawned thread!
hi number 6 from the spawned thread!
hi number 7 from the spawned thread!
hi number 8 from the spawned thread!
hi number 9 from the spawned thread!
hi number 1 from the main thread!
hi number 2 from the main thread!
hi number 3 from the main thread!
hi number 4 from the main thread!

Small details, such as where join is called, can affect whether or not your threads run at the same time.

Using move Closures with Threads

We'll often use the move keyword with closures passed to thread.spawn because the closure will then take ownership of the values it uses from the environment, transferring ownership of those values from one thread to another.

Here's an example of attempting to use a vector created in the main thread inside a spawned thread:

import std.thread

fn main() {
    let v = vec![1, 2, 3]

    let handle = thread.spawn {
        println!("Here's a vector: \(v:?)")
    }

    handle.join().unwrap()
}

The closure uses v, so it will capture v and make it part of the closure's environment. Because thread.spawn runs this closure in a new thread, we should be able to access v inside that new thread. But when we compile this example, we get the following error:

error[E0373]: closure may outlive the current function, but it borrows `v`,
which is owned by the current function
 --> src/main.ox:6:23
  |
6 |     let handle = thread.spawn {
  |                               ^ may outlive borrowed value `v`
7 |         println!("Here's a vector: \(v:?)")
  |                                      - `v` is borrowed here
  |
note: function requires argument type to outlive `'static`
help: to force the closure to take ownership of `v` (and any other referenced
variables), use the `move` keyword
  |
6 |     let handle = thread.spawn move {
  |                               ++++

Oxide infers how to capture v, and because println! only needs a reference to v, the closure tries to borrow v. However, there's a problem: Oxide can't tell how long the spawned thread will run, so it doesn't know if the reference to v will always be valid.

Consider this potentially problematic scenario:

import std.thread

fn main() {
    let v = vec![1, 2, 3]

    let handle = thread.spawn {
        println!("Here's a vector: \(v:?)")
    }

    drop(v)  // Oh no!

    handle.join().unwrap()
}

If Oxide allowed this code to run, there's a possibility the spawned thread would be immediately put in the background without running at all. The spawned thread has a reference to v inside, but the main thread immediately drops v. Then, when the spawned thread starts to execute, v is no longer valid, so a reference to it is invalid. Dangerous!

To fix the compile error, we use the move keyword:

import std.thread

fn main() {
    let v = vec![1, 2, 3]

    let handle = thread.spawn move {
        println!("Here's a vector: \(v:?)")
    }

    handle.join().unwrap()
}

By adding the move keyword before the closure, we force the closure to take ownership of the values it's using rather than borrowing. This modification compiles and runs as we intend.
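
The corrected example in Rust, where the same fix is spelled move ||:

```rust
use std::thread;

fn main() {
    let v = vec![1, 2, 3];

    // `move` transfers ownership of `v` into the spawned thread
    let handle = thread::spawn(move || {
        println!("Here's a vector: {v:?}");
        v.len()
    });

    assert_eq!(handle.join().unwrap(), 3);
}
```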

What would happen if we tried to use v in the main thread after the move closure? Let's try:

import std.thread

fn main() {
    let v = vec![1, 2, 3]

    let handle = thread.spawn move {
        println!("Here's a vector: \(v:?)")
    }

    println!("Main thread: \(v:?)")  // Error!

    handle.join().unwrap()
}

The compiler gives us this error:

error[E0382]: borrow of moved value: `v`
  --> src/main.ox:10:31
   |
4  |     let v = vec![1, 2, 3]
   |         - move occurs because `v` has type `Vec<Int>`, which does not
   |           implement the `Copy` trait
5  |
6  |     let handle = thread.spawn move {
   |                               ---- value moved into closure here
7  |         println!("Here's a vector: \(v:?)")
   |                                      - variable moved due to use in closure
...
10 |     println!("Main thread: \(v:?)")
   |                               ^ value borrowed here after move

The ownership rules have saved us again! We got an error because Oxide is being conservative and only borrowing v for the thread, which meant the main thread could theoretically invalidate the spawned thread's reference. By telling Oxide to move ownership of v to the spawned thread, we guarantee that the main thread won't use v anymore. The compiler enforces this guarantee.

Rust Comparison

The thread API is nearly identical between Oxide and Rust. The main differences are syntactic:

Concept        | Rust                      | Oxide
---------------|---------------------------|---------------------
Import         | use std::thread;          | import std.thread
Spawn          | thread::spawn(|| { ... }) | thread.spawn { ... }
Move closure   | move || { ... }           | move { ... }
Path separator | ::                        | .

The semantics, including how ownership transfers to threads and how join handles work, are exactly the same. Oxide's trailing closure syntax makes thread spawning more readable, while maintaining all the safety guarantees of Rust's type system.

Summary

  • Use thread.spawn with a closure to create new threads
  • Threads execute concurrently and may interleave in unpredictable ways
  • Use JoinHandle.join() to wait for a thread to finish
  • Use move closures to transfer ownership of data into threads
  • The borrow checker prevents data races at compile time

Using Message Passing to Transfer Data Between Threads

One increasingly popular approach to ensuring safe concurrency is message passing, where threads communicate by sending each other messages containing data. Here's the idea in a slogan from the Go language documentation: "Do not communicate by sharing memory; instead, share memory by communicating."

To accomplish message-sending concurrency, Oxide's standard library provides an implementation of channels. A channel is a programming concept by which data is sent from one thread to another.

You can imagine a channel in programming as being like a directional channel of water, such as a stream or a river. If you put something like a rubber duck into a river, it will travel downstream to the end of the waterway.

A channel has two halves: a transmitter and a receiver. The transmitter half is the upstream location where you put rubber ducks into the river, and the receiver half is where the rubber duck ends up downstream. One part of your code calls methods on the transmitter with the data you want to send, and another part checks the receiving end for arriving messages. A channel is said to be closed if either the transmitter or receiver half is dropped.

Creating a Channel

Let's start by creating a channel that doesn't do anything:

import std.sync.mpsc

fn main() {
    let (tx, rx) = mpsc.channel()
}

We create a new channel using the mpsc.channel function. The name mpsc stands for multiple producer, single consumer. This means a channel can have multiple sending ends that produce values but only one receiving end that consumes those values. Think of multiple streams flowing into one big river: everything sent down any of the streams will end up in one river at the end.

The mpsc.channel function returns a tuple, the first element of which is the transmitter (often called tx) and the second element is the receiver (often called rx). We use let with a pattern to destructure the tuple.

Let's move the transmitting end into a spawned thread and have it send one string so the spawned thread is communicating with the main thread:

import std.sync.mpsc
import std.thread

fn main() {
    let (tx, rx) = mpsc.channel()

    thread.spawn move {
        let val = "hi".toString()
        tx.send(val).unwrap()
    }
}

We're using thread.spawn to create a new thread and then using move to move tx into the closure so the spawned thread owns tx. The spawned thread needs to own the transmitter to send messages through the channel.

The transmitter has a send method that takes the value we want to send. The send method returns a Result<T, E> type, so if the receiver has already been dropped and there's nowhere to send a value, the send operation will return an error. In this example, we're calling unwrap to panic in case of an error.

Receiving Values from the Channel

Now let's receive the value in the main thread:

import std.sync.mpsc
import std.thread

fn main() {
    let (tx, rx) = mpsc.channel()

    thread.spawn move {
        let val = "hi".toString()
        tx.send(val).unwrap()
    }

    let received = rx.recv().unwrap()
    println!("Got: \(received)")
}

The receiver has two useful methods: recv and tryRecv. We're using recv, short for receive, which will block the main thread's execution and wait until a value is sent down the channel. Once a value is sent, recv will return it in a Result<T, E>. When the transmitter closes, recv will return an error to signal that no more values will be coming.

The tryRecv method doesn't block, but will instead return a Result<T, E> immediately: an Ok value holding a message if one is available and an Err value if there aren't any messages this time. Using tryRecv is useful if this thread has other work to do while waiting for messages: we could write a loop that calls tryRecv every so often, handles a message if one is available, and otherwise does other work for a little while until checking again.

We've used recv in this example for simplicity; we don't have any other work for the main thread to do other than wait for messages, so blocking the main thread is appropriate.

When we run this code, we'll see the value printed from the main thread:

Got: hi
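
The equivalent Rust, for comparison:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        // send() moves the value into the channel
        tx.send(String::from("hi")).unwrap();
    });

    // recv() blocks until a value arrives
    let received = rx.recv().unwrap();
    assert_eq!(received, "hi");
}
```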

Channels and Ownership Transference

The ownership rules play a vital role in message sending because they help you write safe, concurrent code. Preventing errors in concurrent programming is the advantage of thinking about ownership throughout your Oxide programs. Let's do an experiment to show how channels and ownership work together to prevent problems.

Consider what would happen if we tried to use val in the spawned thread after we've sent it down the channel:

import std.sync.mpsc
import std.thread

fn main() {
    let (tx, rx) = mpsc.channel()

    thread.spawn move {
        let val = "hi".toString()
        tx.send(val).unwrap()
        println!("val is \(val)")  // Error!
    }

    let received = rx.recv().unwrap()
    println!("Got: \(received)")
}

Here, we try to print val after we've sent it down the channel via tx.send. Allowing this would be a bad idea: once the value has been sent to another thread, that thread could modify or drop it before we try to use the value again. Potentially, the other thread's modifications could cause errors or unexpected results due to inconsistent or nonexistent data.

The compiler catches this mistake:

error[E0382]: borrow of moved value: `val`
  --> src/main.ox:10:32
   |
8  |         let val = "hi".toString()
   |             --- move occurs because `val` has type `String`, which does
   |                 not implement the `Copy` trait
9  |         tx.send(val).unwrap()
   |                 --- value moved here
10 |         println!("val is \(val)")
   |                            ^^^ value borrowed here after move

Our concurrency mistake has caused a compile time error. The send function takes ownership of its parameter, and when the value is moved, the receiver takes ownership of it. This stops us from accidentally using the value again after sending it; the ownership system checks that everything is okay.

Sending Multiple Values and Seeing the Receiver Waiting

The previous code compiled and ran, but it didn't clearly show us that two separate threads were talking to each other over the channel. Let's make a modification that will prove the code is running concurrently: the spawned thread will send multiple messages and pause for a second between each message:

import std.sync.mpsc
import std.thread
import std.time.Duration

fn main() {
    let (tx, rx) = mpsc.channel()

    thread.spawn move {
        let vals = vec![
            "hi".toString(),
            "from".toString(),
            "the".toString(),
            "thread".toString(),
        ]

        for val in vals {
            tx.send(val).unwrap()
            thread.sleep(Duration.fromSecs(1))
        }
    }

    for received in rx {
        println!("Got: \(received)")
    }
}

This time, the spawned thread has a vector of strings that we want to send to the main thread. We iterate over them, sending each individually, and pause between each by calling the thread.sleep function with a Duration value of 1 second.

In the main thread, we're not calling the recv function explicitly anymore: instead, we're treating rx as an iterator. For each value received, we print it. When the channel is closed, iteration will end.

When running this code, you should see the following output with a 1-second pause between each line:

Got: hi
Got: from
Got: the
Got: thread

Because we don't have any code that pauses or delays in the for loop in the main thread, we can tell that the main thread is waiting to receive values from the spawned thread.

Creating Multiple Producers by Cloning the Transmitter

Earlier we mentioned that mpsc stands for multiple producer, single consumer. Let's put mpsc to use and expand the code to create multiple threads that all send values to the same receiver. We can do so by cloning the transmitter:

import std.sync.mpsc
import std.thread
import std.time.Duration

fn main() {
    let (tx, rx) = mpsc.channel()

    // Clone the transmitter for the second thread
    let tx1 = tx.clone()

    // First producer thread
    thread.spawn move {
        let vals = vec![
            "hi".toString(),
            "from".toString(),
            "the".toString(),
            "thread".toString(),
        ]

        for val in vals {
            tx1.send(val).unwrap()
            thread.sleep(Duration.fromSecs(1))
        }
    }

    // Second producer thread
    thread.spawn move {
        let vals = vec![
            "more".toString(),
            "messages".toString(),
            "for".toString(),
            "you".toString(),
        ]

        for val in vals {
            tx.send(val).unwrap()
            thread.sleep(Duration.fromSecs(1))
        }
    }

    for received in rx {
        println!("Got: \(received)")
    }
}

This time, before we create the first spawned thread, we call clone on the transmitter. This will give us a new transmitter we can pass to the first spawned thread. We pass the original transmitter to a second spawned thread. This gives us two threads, each sending different messages to the one receiver.

When you run the code, your output will probably look something like this:

Got: hi
Got: more
Got: from
Got: messages
Got: the
Got: for
Got: thread
Got: you

You might see the values in another order, depending on your system. This is what makes concurrency interesting as well as difficult. If you experiment with thread.sleep, giving it various values in the different threads, each run will be more nondeterministic and create different output each time.

Rust Comparison

The channel API works identically between Rust and Oxide:

Concept              Rust                   Oxide
Import               use std::sync::mpsc;   import std.sync.mpsc
Create channel       mpsc::channel()        mpsc.channel()
Send                 tx.send(val)           tx.send(val)
Receive              rx.recv()              rx.recv()
Try receive          rx.try_recv()          rx.tryRecv()
Clone transmitter    tx.clone()             tx.clone()

The ownership transfer semantics are identical: when you send a value through a channel, ownership moves to the receiver. This prevents data races by ensuring only one thread can access the data at a time.

Summary

  • Channels allow threads to communicate by passing messages
  • Use mpsc.channel() to create a transmitter/receiver pair
  • send transfers ownership of the value to the channel
  • recv blocks until a value is available; tryRecv returns immediately
  • Clone the transmitter to create multiple producers
  • The receiver can be used as an iterator
  • Ownership rules prevent accessing values after they're sent

Using Shared-State Concurrency

We've explored message passing as a way for threads to communicate with each other. Now let's look at another method: shared-state concurrency. Shared-state concurrency is when multiple threads have access to the same data. While message passing ("share memory by communicating") is often the better choice, Oxide's type system makes shared-state concurrency safe and practical through the use of Mutex<T> and Arc<T>.

Mutex Provides Mutual Exclusion

Mutex is short for mutual exclusion, and as the name suggests, a mutex allows only one thread to access some data at any given time. To access the data in a mutex, a thread must first signal that it wants access by asking to acquire the mutex's lock. The lock is a data structure, part of the mutex, that keeps track of who currently has exclusive access to the data.

A mutex is described as guarding the data it holds via the locking system.

The API of Mutex<T>

Let's first look at how to use a mutex:

import std.sync.Mutex

fn main() {
    let m = Mutex.new(5)

    {
        var num = m.lock().unwrap()
        *num = 6
    }

    println!("m = \(m:?)")
}

Like many types, we create a Mutex<T> using the associated function new. To access the data inside the mutex, we use the lock method to acquire the lock. This call will block the current thread so it can't do any work until it's our turn to have the lock.

The call to lock returns a Result<MutexGuard>. If another thread holding the lock panicked, the lock call will fail and return an Err. Here we use unwrap() to panic in that situation.

The lock method returns a smart pointer called MutexGuard. This smart pointer implements Deref to point at our inner data. The smart pointer also has a Drop implementation that releases the lock automatically when the MutexGuard drops (goes out of scope).

When we run this code, we'll see:

m = Mutex { data: 6 }

The mutex successfully protected the integer inside, preventing data races.
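
If panicking on a poisoned lock is too drastic for your program, you can recover the data instead. This is a sketch that assumes unwrapOrElse and intoInner as the Oxide spellings of Rust's unwrap_or_else and PoisonError::into_inner:

import std.sync.Mutex

fn main() {
    let m = Mutex.new(5)

    // Recover the guard even if a previous holder panicked while locked
    var num = m.lock().unwrapOrElse(|poisoned| poisoned.intoInner())
    *num = 6
}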

Sharing a Mutex<T> Between Multiple Threads

Now let's try to share a value between multiple threads using a Mutex<T>. We'll spin up 10 threads and have each increment a counter by 1, so the counter goes from 0 to 10:

import std.sync.Mutex
import std.thread

fn main() {
    let counter = Mutex.new(0)
    var handles = vec![]

    for _ in 0..10 {
        let handle = thread.spawn move {
            var num = counter.lock().unwrap()
            *num += 1
        }
        handles.push(handle)
    }

    for handle in handles {
        handle.join().unwrap()
    }

    println!("Result: \(counter.lock().unwrap())")
}

We create a Mutex<Int> with an initial value of 0. We then create 10 threads by looping 10 times. For each thread, we use move to move the counter into the thread closure.

However, if we try to compile this, we get an error:

error[E0382]: borrow of moved value: `counter`
 --> src/main.ox:11:37
  |
9 |     for _ in 0..10 {
10|         let handle = thread.spawn move {
11|             var num = counter.lock().unwrap()
   |                          ------- value borrowed here after move

The problem is that counter is moved into the first thread's closure because we use move. So the second iteration of the loop tries to move an already-moved value! The compiler correctly tells us we can't move counter multiple times.

Arc<T>: Atomic Reference Counting

The solution is to use Arc<T>, which stands for Atomic Reference Counting. The Arc<T> type lets us have multiple owners of a value. The atomic part means Arc<T> is safe to use in concurrent situations.

Let's modify our code to use Arc<Mutex<Int>>:

import std.sync.Arc
import std.sync.Mutex
import std.thread

fn main() {
    let counter = Arc.new(Mutex.new(0))
    var handles = vec![]

    for _ in 0..10 {
        let counter = Arc.clone(&counter)
        let handle = thread.spawn move {
            var num = counter.lock().unwrap()
            *num += 1
        }
        handles.push(handle)
    }

    for handle in handles {
        handle.join().unwrap()
    }

    println!("Result: \(counter.lock().unwrap())")
}

The key part is that we clone the Arc for each thread. The Arc.clone(&counter) call creates a new Arc that points to the same value on the heap. Now when we move the cloned Arc into each thread's closure, the reference count increases by one, meaning the data won't be deallocated until all threads are done with it.

When we run this code, we'll see:

Result: 10

Perfect! Each thread successfully incremented the counter.

How Arc<Mutex<T>> Works

Let's understand the combination:

  • Mutex<T> - Provides interior mutability, allowing us to mutate the contents even when we only have an immutable reference to the Mutex.
  • Arc<T> - Allows multiple ownership with automatic cleanup when the reference count reaches zero. Each clone increments the reference count.

Together, Arc<Mutex<T>> is a safe way to share mutable state across threads:

import std.sync.Arc
import std.sync.Mutex
import std.thread
import std.time.Duration

fn main() {
    let data = Arc.new(Mutex.new(vec![]))

    for i in 0..5 {
        let data = Arc.clone(&data)
        thread.spawn move {
            var list = data.lock().unwrap()
            list.push(i)
            // Lock is released here when list goes out of scope
        }
    }

    // Crude wait; real code would collect the JoinHandles and join them
    thread.sleep(Duration.fromMillis(100))

    let finalData = data.lock().unwrap()
    println!("Final data: \(finalData:?)")
}

Each thread clones the Arc, takes ownership of the clone, acquires the lock, modifies the data, and releases the lock when the MutexGuard drops. The Arc ensures the underlying data lives as long as any thread needs it.

Comparing Message Passing and Mutex

When should you use message passing versus a Mutex? Here are some guidelines:

Scenario                                        Use
Passing data once from one thread to another    Message passing (channels)
Sharing mutable state across threads            Arc<Mutex<T>>
Complex communication patterns                  Message passing
Simple shared counters or flags                 Mutex<T>
Want to avoid lock contention                   Message passing

In general, prefer message passing for most concurrent code. It's easier to reason about and naturally encourages a design where threads have clear responsibilities. Use Arc<Mutex<T>> when you genuinely need shared mutable state.

Deadlock Risk

One downside of using Mutex is the risk of deadlocks. A deadlock occurs when:

  1. Operation A needs locks on resources 1 and 2
  2. Operation B needs locks on resources 2 and 1
  3. Operation A locks resource 1, then waits for resource 2
  4. Operation B locks resource 2, then waits for resource 1

Both threads are now blocked forever. Oxide's type system prevents data races, but it cannot prevent logic errors like deadlocks. Always:

  • Acquire locks in a consistent order across all code paths
  • Keep the lock scope as small as possible
  • Avoid nested locks when possible
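
The two-lock scenario above can be written out directly. Whether this actually deadlocks on a given run depends on timing, which is exactly what makes such bugs hard to find:

import std.sync.Arc
import std.sync.Mutex
import std.thread
import std.time.Duration

fn main() {
    let a = Arc.new(Mutex.new(0))
    let b = Arc.new(Mutex.new(0))

    let (a1, b1) = (Arc.clone(&a), Arc.clone(&b))
    let handle = thread.spawn move {
        let _ga = a1.lock().unwrap()          // locks a first...
        thread.sleep(Duration.fromMillis(10))
        let _gb = b1.lock().unwrap()          // ...then waits for b
    }

    let _gb = b.lock().unwrap()               // locks b first...
    let _ga = a.lock().unwrap()               // ...then waits for a: deadlock
    handle.join().unwrap()
}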

Rust Comparison

The Mutex and Arc APIs are nearly identical between Rust and Oxide:

Concept         Rust                           Oxide
Import          use std::sync::{Mutex, Arc};   import std.sync.Mutex / import std.sync.Arc
Create mutex    Mutex::new(5)                  Mutex.new(5)
Acquire lock    m.lock().unwrap()              m.lock().unwrap()
Clone Arc       Arc::clone(&arc)               Arc.clone(&arc)

The semantics are identical: Mutex<T> provides mutual exclusion, and Arc<T> provides shared ownership with reference counting. Both are essential for safe shared-state concurrency in Oxide.

Summary

  • Mutex<T> allows only one thread at a time to access the data
  • Arc<T> enables multiple ownership of a value with automatic cleanup
  • Arc<Mutex<T>> is the combination needed for safe shared mutable state across threads
  • The lock is automatically released when the MutexGuard drops
  • Prefer message passing for most concurrent code, use Arc<Mutex<T>> when you genuinely need shared state
  • Be aware of deadlock risks when using multiple mutexes

Extensible Concurrency with the Sync and Send Traits

One of the interesting aspects of Oxide's concurrency story is that the language defines the concurrency primitives quite minimally. Nearly all concurrency features we've talked about are part of the standard library, not the language itself.

However, two concurrency concepts are embedded in the language: the Send and Sync traits.

Send: Transferring Ownership Between Threads

The Send marker trait indicates that ownership of values of the type implementing Send can be transferred between threads. Almost every Oxide type is Send, but there are some exceptions, such as Rc<T>, which is not Send.

Rc<T> is not Send because of how it maintains reference counts. When you clone an Rc<T>, the reference count is incremented without using atomic operations. If you sent an Rc<T> to another thread and that thread cloned it while the original thread was also cloning it, the reference count could be corrupted.

Most basic types are Send: integers, floats, booleans, strings, and most collections built from Send types. Types that contain raw pointers are generally not Send because it's unsafe to send a raw pointer to another thread.

Examples of Send Types

// All of these are Send
let x: Int = 5
let s: String = "hello".toString()
let v: Vec<Int> = vec![1, 2, 3]

// This is Send because it contains only Send types
struct SendStruct {
    value: Int,
    text: String
}

Examples of Non-Send Types

// Rc is not Send (reference counting is not atomic)
import std.rc.Rc

let rcValue = Rc.new(5)
// thread.spawn move { println!("\(rcValue)") }  // Error: Rc is not Send

Sync and Thread-Safe Shared References

The Sync marker trait indicates that a type is safe to share by reference between threads. In other words, a type T is Sync if &T is Send. A reference is safe to send to another thread if the type implements Sync.

For example, Int is Sync because references to Int are safe to share with threads (integers are thread-safe). Cell<T>, on the other hand, is not Sync because it uses interior mutability without synchronization.

Why These Traits Matter

These traits help enforce thread safety at compile time:

import std.sync.Mutex
import std.sync.Arc
import std.thread

fn main() {
    let safeCounter = Arc.new(Mutex.new(0))

    let counterClone = Arc.clone(&safeCounter)
    thread.spawn move {
        var count = counterClone.lock().unwrap()
        *count += 1
    }
    // Code continues...
}

This compiles because:

  • Arc<T> is Send and Sync when T is both
  • Mutex<Int> is both Send and Sync
  • The compiler verifies ownership can be safely transferred

Implementing Send and Sync Manually

Most of the time, you don't need to implement Send and Sync manually. Oxide automatically derives these traits for structs and enums composed entirely of Send and Sync types.

However, in rare cases where you're working with raw pointers or other unsafe code, you might need to manually implement these traits:

// Only do this if you're sure your type is actually safe!
// This is unsafe to implement incorrectly.

struct MyType {
    ptr: *const Int
}

// UNSAFE: only implement if you know what you're doing
unsafe extension MyType: Send {}

unsafe extension MyType: Sync {}

As you can see, implementing Send and Sync requires the unsafe keyword. This is because the compiler can't verify that your type is actually safe to send or share across threads; you're promising it with the unsafe extension.

Common Patterns

Arc<Mutex<T>> is Send and Sync

When T is Send and Sync, Arc<Mutex<T>> is both:

import std.sync.Arc
import std.sync.Mutex
import std.thread

fn main() {
    let counter = Arc.new(Mutex.new(0))

    // Works because Arc<Mutex<Int>> is Send + Sync
    for _ in 0..5 {
        let c = Arc.clone(&counter)
        thread.spawn move {
            var n = c.lock().unwrap()
            *n += 1
        }
    }
}

Rc<T> is Neither Send nor Sync

Rc<T> is not Send because the reference counting isn't atomic. It's also not Sync because &Rc<T> is not Send.

import std.rc.Rc
import std.thread

fn main() {
    let rc = Rc.new(5)

    // This won't compile
    // thread.spawn move {
    //     println!("\(rc)")  // Error: Rc is not Send
    // }
}

If you need shared ownership across threads, use Arc<T> instead of Rc<T>.
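
The fix is mechanical; swapping Rc for Arc lets essentially the same program compile (a sketch, with the spawned thread left unjoined as in the earlier examples):

import std.sync.Arc
import std.thread

fn main() {
    let shared = Arc.new(5)

    let clone = Arc.clone(&shared)
    thread.spawn move {
        // OK: Arc is Send because its reference counts are atomic
        println!("\(clone)")
    }
}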

Cell<T> is Send but Not Sync

Cell<T> provides interior mutability through simple get and set operations, with no locking. A Cell<Int> is Send, so you can move it into another thread and mutate it there. It is not Sync, though: unsynchronized mutation through shared references from multiple threads would be a data race, so references to a Cell cannot be shared across threads:

import std.cell.Cell
import std.thread

fn main() {
    let cell = Cell.new(5)

    // Sharing a reference across threads won't compile: Cell is not Sync
    // let r = &cell
    // thread.spawn move { r.set(10) }  // Error: Cell is not Sync

    // Moving the Cell into a single thread is fine: Cell is Send
    thread.spawn move {
        cell.set(10)
    }
}

Rust Comparison

The Send and Sync traits work identically between Rust and Oxide:

Concept             Rust                        Oxide                         Notes
Send trait          Built-in                    Built-in                      Indicates safe to transfer ownership
Sync trait          Built-in                    Built-in                      Indicates safe to share by reference
Manual impl         unsafe impl Send for T {}   unsafe extension T: Send {}   Oxide uses extension
Arc is Send+Sync    When T is                   When T is                     For thread-safe shared ownership
Rc is not Send      Correct                     Correct                       Reference counting is not atomic

The behavior and semantics are identical. Both languages use these traits to provide compile-time verification of thread safety without runtime overhead.

Summary

  • Send - A marker trait indicating that a type can be safely transferred between threads
  • Sync - A marker trait indicating that a type is safe to share by reference between threads
  • Types composed of Send/Sync types are automatically Send/Sync
  • Arc<T> is Send and Sync when T is both
  • Rc<T> is neither Send nor Sync
  • You can manually implement these traits with unsafe extension, but should only do so when you're certain of thread safety
  • The compiler uses these traits to prevent data races at compile time

Async/Await: Asynchronous Programming

Many operations we ask the computer to perform can take a while to complete. For example, downloading a video from a web server involves waiting for network data to arrive, while exporting that video uses intensive CPU computation. These are fundamentally different kinds of waiting, and handling them efficiently is crucial for responsive, performant applications.

Async vs. Blocking

When you call a function that performs I/O (like reading a file or making a network request), you have two choices:

  1. Blocking: The function waits until the operation completes before returning. Simple, but your program can't do anything else while waiting.

  2. Asynchronous: The function returns immediately, giving you a "future" that will eventually contain the result. Your program can do other work while waiting.

// Blocking approach - program waits for download to complete
let data = downloadFile(url)  // Blocks until done
processData(data)

// Async approach - program can do other work while waiting
let future = downloadFileAsync(url)
doOtherWork()
let data = await future  // Wait for result when we need it
processData(data)

Concurrency vs. Parallelism

These terms are related but distinct:

  • Concurrency is about dealing with multiple things at once. A single chef managing multiple dishes, switching between them as needed.

  • Parallelism is about doing multiple things at once. Multiple chefs each working on their own dish simultaneously.

Async programming primarily enables concurrency: efficiently managing multiple tasks even on a single CPU core. Whether those tasks actually run in parallel depends on your hardware and runtime configuration.

When to Use Async

Async programming shines for I/O-bound operations:

  • Network requests (HTTP calls, database queries)
  • File system operations
  • User input handling
  • Timer-based events

For CPU-bound operations (heavy computation), traditional multithreading with std.thread might be more appropriate. However, many real-world applications mix both patterns.

Oxide's Async Model

Oxide provides async programming with a few key components:

  1. async fn: Declares a function that can be paused and resumed
  2. await: Waits for an async operation to complete (prefix syntax!)
  3. Futures: Values representing work that may complete in the future
  4. Runtimes: Execute async code (like Tokio or async-std)

Prefix Await: A Key Oxide Difference

Unlike Rust, which uses postfix expr.await, Oxide uses prefix await:

// Oxide - prefix await (reads left-to-right)
let response = await fetch(url)
let data = await response.json()

Rust equivalent:

let response = fetch(url).await;
let data = response.json().await;

This syntax matches JavaScript, Python, Swift, and Kotlin, making async code read naturally from left to right.

Chapter Overview

In this chapter, we'll explore:

  1. Futures and Async Syntax - How to write async functions and understand futures
  2. Concurrency with Async - Running multiple async operations together
  3. Working with More Futures - Advanced patterns like racing and timeouts
  4. Streams - Processing sequences of async values
  5. Traits for Async - Understanding Future, Pin, and Stream traits

By the end of this chapter, you'll be comfortable writing async code in Oxide and understand how it differs from traditional blocking code.

A Note on Runtimes

Unlike languages with built-in runtimes (JavaScript, Python), Rust and Oxide require you to choose an async runtime. The most popular options are:

  • Tokio: Full-featured, production-ready runtime
  • async-std: Standard library-like API
  • smol: Lightweight, minimal runtime

This book uses Tokio in examples, but the concepts apply to any runtime. The runtime handles scheduling, executing, and coordinating your async code.

Let's dive in!

Futures and the Async Syntax

At the heart of async programming in Oxide are futures: values that represent work that may not be complete yet. When you call an async function, it returns a future immediately. The actual work happens later, when you await that future.

What Is a Future?

A future is like a receipt for a meal at a restaurant. When you order, you get the receipt immediately, but your food isn't ready yet. The receipt represents your eventual meal. You can either:

  • Stand at the counter waiting (blocking)
  • Go do other things and come back when called (async)

In code:

// This returns immediately with a future, not the actual response
let responseFuture = fetchWebPage(url)

// The actual network request happens when we await
let response = await responseFuture

Your First Async Function

Let's write an async function that fetches a web page title:

import tokio
import reqwest.{ Client, Error }

async fn pageTitle(url: &str): String? {
    let client = Client.new()

    // await the HTTP request
    let response = await client.get(url).send()

    // Handle potential errors
    guard let resp = response.ok() else {
        return null
    }

    // await getting the response body
    let body = await resp.text()
    guard let text = body.ok() else {
        return null
    }

    // Parse the HTML and extract the title
    extractTitle(&text)
}

fn extractTitle(html: &str): String? {
    // Simple extraction (real code would use an HTML parser)
    let start = html.find("<title>")?
    let end = html.find("</title>")?
    let titleStart = start + 7  // length of "<title>"
    Some(html[titleStart..end].toString())
}

Let's break down what's happening:

  1. async fn marks the function as asynchronous. It can contain await expressions and returns a future.

  2. await client.get(url).send() suspends execution until the HTTP request completes. The await keyword comes before the expression (prefix syntax).

  3. await resp.text() similarly waits for the response body to be read.

  4. The function returns String?, not Future<String?>. The async keyword handles wrapping the return type in a future automatically.

Prefix Await Syntax

Oxide uses prefix await, which differs from Rust's postfix .await:

// Oxide - prefix await
let response = await client.get(url).send()
let body = await response.text()

Rust equivalent:

let response = client.get(url).send().await;
let body = response.text().await;

Why Prefix Await?

Prefix await reads naturally from left to right, matching how we think about the operation: "await the result of this expression." This syntax is familiar to developers from:

  • JavaScript: await fetch(url)
  • Python: await response.json()
  • Swift: await fetchData()
  • Kotlin: Uses suspend functions, where calls like deferred.await() suspend without any extra keyword

Precedence

The await operator binds tighter than ?, so error propagation works naturally:

// await binds first, then ? propagates any error
let response = await client.get(url).send()?
let body = await response.text()?

// Equivalent to:
let response = (await client.get(url).send())?
let body = (await response.text())?

Chaining with Prefix Await

When chaining async operations, each await handles one async step:

async fn fetchAndProcess(url: &str): Result<Data, Error> {
    // Each await handles one async operation
    let response = await client.get(url).send()?
    let json = await response.json()?
    let processed = processData(json)  // sync operation, no await needed

    Ok(processed)
}

For long chains, you might use intermediate variables or format across lines:

async fn complexFetch(url: &str): Result<String, Error> {
    let response = await client
        .get(url)
        .header("Authorization", token)
        .timeout(Duration.fromSecs(30))
        .send()?

    let body = await response.text()?
    Ok(body)
}

Futures Are Lazy

A crucial concept: futures don't execute until awaited. Simply calling an async function creates a future but doesn't start the work:

async fn printMessage() {
    println!("Hello from async!")
}

fn main() {
    let future = printMessage()  // Nothing printed yet!
    println!("Future created")

    // The message only prints when we await the future
    // (We'd need a runtime to actually run this)
}

This laziness allows you to compose futures before executing them:

async fn main() {
    let futureA = fetchData(urlA)  // Doesn't start fetching yet
    let futureB = fetchData(urlB)  // Doesn't start fetching yet

    // Now we can choose how to run them:
    // - Sequentially: await futureA, then await futureB
    // - Concurrently: race them or join them
}

How Async Functions Compile

Under the hood, async fn is transformed into a regular function returning a type that implements the Future trait:

// You write:
async fn fetchData(url: &str): String {
    let response = await client.get(url).send()
    await response.text()
}

// Conceptually compiles to something like:
fn fetch_data(url: &str) -> impl Future<Output = String> {
    // Returns a state machine that can be polled
}

The compiler generates a state machine that tracks progress through each await point. When polled, it either:

  • Continues execution if the awaited future is ready
  • Returns Pending if still waiting

You rarely need to think about this, but it explains why async functions can suspend and resume without losing their state.
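
For reference, the trait behind all of this has roughly the following shape. This is an Oxide rendering of Rust's std::future::Future; the real definition involves Pin and a Context parameter, which are simplified away here:

trait Future {
    type Output

    // Called repeatedly by the runtime: returns Ready(value) once the
    // work is finished, or Pending if it should be polled again later
    fn poll(self, cx: &Context): Poll<Self.Output>
}

enum Poll<T> {
    Ready(T),
    Pending
}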

Running Async Code

Async code needs a runtime to execute. The runtime handles:

  • Polling futures to make progress
  • Managing I/O operations efficiently
  • Scheduling tasks across available resources

Here's a complete example using Tokio:

import tokio

#[tokio.main]
async fn main() {
    let title = await pageTitle("https://www.rust-lang.org")

    match title {
        Some(t) -> println!("Page title: \(t)"),
        null -> println!("Could not extract title"),
    }
}

async fn pageTitle(url: &str): String? {
    // ... implementation from earlier
}

The #[tokio.main] attribute sets up the Tokio runtime and converts our async fn main into a regular fn main that blocks on the async code.

The Runtime's Role

The runtime is essential because main() itself cannot be async - someone has to start the async machinery! Here's what happens:

// With the attribute:
#[tokio.main]
async fn main() {
    await doAsyncStuff()
}

// Is roughly equivalent to:
fn main() {
    let rt = tokio.runtime.Runtime.new().unwrap()
    rt.blockOn(async {
        await doAsyncStuff()
    })
}

You can also manually create and use runtimes:

import tokio.runtime.Runtime

fn main() {
    let runtime = Runtime.new().unwrap()

    runtime.blockOn(async {
        let result = await fetchData("https://example.com")
        println!("Result: \(result)")
    })
}

Async Blocks

Besides async fn, you can create anonymous async blocks:

fn main() {
    let runtime = Runtime.new().unwrap()

    // An async block creates a future inline
    let future = async {
        let a = await fetchA()
        let b = await fetchB()
        a + b
    }

    let result = runtime.blockOn(future)
    println!("Combined result: \(result)")
}

Async blocks are useful when you need a future but don't want to define a separate function. They capture variables from their environment like closures:

async fn processItems(items: Vec<String>): Vec<String> {
    let client = Client.new()

    var results = vec![]
    for item in items {
        // Async block capturing `client` and `item`
        let result = async {
            await client.process(&item)
        }
        results.push(await result)
    }
    results
}

Async with Move

Like closures, async blocks can use move to take ownership of captured variables:

fn spawnTask(data: String) {
    // Without move, this would borrow `data`
    // But the task might outlive this function!
    tokio.spawn(async move {
        println!("Processing: \(data)")
        await processData(&data)
    })
}

The async move pattern is essential when spawning tasks that need to own their data, since the task may run on a different thread and outlive the current scope.

Summary

In this section, we covered:

  • Futures represent values that will be available later
  • async fn declares functions that can be suspended and resumed
  • Prefix await waits for a future (Oxide's key difference from Rust!)
  • Futures are lazy - they don't execute until awaited
  • Runtimes (like Tokio) execute async code
  • Async blocks create inline futures that capture their environment

The prefix await syntax is one of Oxide's distinctive features, making async code read naturally from left to right. In the next section, we'll explore how to run multiple async operations concurrently.

Concurrency with Async

One of async programming's greatest strengths is efficiently handling multiple operations at once. In this section, we'll explore how to run async operations concurrently, communicate between tasks, and coordinate complex workflows.

Sequential vs. Concurrent Execution

First, let's understand the difference between sequential and concurrent async code:

import tokio
import tokio.time.{ sleep, Duration }

async fn fetchData(name: &str, delayMs: UInt64): String {
    println!("Starting fetch for \(name)...")
    await sleep(Duration.fromMillis(delayMs))
    println!("Finished fetch for \(name)")
    "data_\(name)".toString()
}

#[tokio.main]
async fn main() {
    // Sequential: total time ~3 seconds
    let a = await fetchData("A", 1000)
    let b = await fetchData("B", 1000)
    let c = await fetchData("C", 1000)
    println!("Sequential results: \(a), \(b), \(c)")
}

Each await waits for completion before starting the next fetch. With three 1-second fetches, this takes about 3 seconds total.

Running Futures Concurrently with join!

To run futures concurrently, use tokio.join!:

import tokio

#[tokio.main]
async fn main() {
    // Concurrent: total time ~1 second (they run together!)
    let (a, b, c) = await tokio.join!(
        fetchData("A", 1000),
        fetchData("B", 1000),
        fetchData("C", 1000)
    )
    println!("Concurrent results: \(a), \(b), \(c)")
}

All three fetches start immediately and run concurrently. Since they each take 1 second and run in parallel, the total time is about 1 second.

Note the prefix await before tokio.join! - the macro produces a future that we then await.

Spawning Independent Tasks

Sometimes you want a task to run independently in the background. Use tokio.spawn:

import tokio

#[tokio.main]
async fn main() {
    // Spawn a background task
    let handle = tokio.spawn(async {
        await sleep(Duration.fromSecs(2))
        println!("Background task complete!")
        42
    })

    // Main task continues immediately
    println!("Main task doing work...")
    await sleep(Duration.fromSecs(1))
    println!("Main task still working...")

    // Wait for the spawned task to complete
    let result = await handle
    println!("Background task returned: \(result.unwrap())")
}

Output:

Main task doing work...
Main task still working...
Background task complete!
Background task returned: 42

The spawned task runs concurrently with the main task. tokio.spawn returns a JoinHandle that you can await to get the result.
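For comparison - one of this book's occasional Rust asides - plain threads have the same shape: std::thread::spawn also returns a JoinHandle, except you block on .join() instead of awaiting. A std-only sketch:

```rust
use std::thread;
use std::time::Duration;

fn main() {
    // Spawn a background thread (blocking analog of tokio.spawn)
    let handle = thread::spawn(|| {
        thread::sleep(Duration::from_millis(200));
        println!("Background thread complete!");
        42
    });

    // Main thread continues immediately
    println!("Main thread doing work...");
    thread::sleep(Duration::from_millis(100));

    // join() blocks until the thread finishes, like awaiting a JoinHandle
    let result = handle.join().unwrap();
    println!("Background thread returned: {result}"); // 42
}
```

The difference is cost: a thread blocks an OS thread while waiting, whereas an awaited task yields its thread back to the runtime.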

Important: Move Semantics with Spawn

Spawned tasks may outlive the current function, so they must own their data:

async fn processUser(user: User) {
    // ERROR: task may outlive this function, but borrows `user`
    tokio.spawn(async {
        println!("Processing \(user.name)")
    })

    // CORRECT: move ownership into the task
    tokio.spawn(async move {
        println!("Processing \(user.name)")
    })
}
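This borrow-versus-move rule is not unique to async. Plain threads in Rust enforce the same thing: std::thread::spawn requires a closure that can outlive the caller, so captures must be moved in. A hedged std-only equivalent (process_user is an illustrative stand-in):

```rust
use std::thread;

fn process_user(name: String) {
    // Without `move`, the closure would borrow `name`, but the thread
    // may outlive this function - the compiler rejects that.
    let handle = thread::spawn(move || {
        println!("Processing {name}");
        name.len()
    });
    let len = handle.join().unwrap();
    println!("Name length: {len}");
}

fn main() {
    process_user(String::from("Alice"));
}
```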

Message Passing Between Tasks

Tasks often need to communicate. Tokio provides async channels for this:

import tokio
import tokio.sync.mpsc

#[tokio.main]
async fn main() {
    // Create a channel with buffer size 32
    let (tx, mut rx) = mpsc.channel(32)

    // Spawn a producer task
    let producer = tokio.spawn(async move {
        for i in 0..5 {
            await tx.send(format!("Message \(i)"))
            await sleep(Duration.fromMillis(100))
        }
        // tx is dropped here, closing the channel
    })

    // Spawn a consumer task
    let consumer = tokio.spawn(async move {
        while let Some(msg) = await rx.recv() {
            println!("Received: \(msg)")
        }
        println!("Channel closed")
    })

    // Wait for both tasks
    await tokio.join!(producer, consumer)
}

Output:

Received: Message 0
Received: Message 1
Received: Message 2
Received: Message 3
Received: Message 4
Channel closed

Key points:

  • mpsc.channel(n) creates a multi-producer, single-consumer channel
  • tx.send() is async and may wait if the buffer is full
  • rx.recv() is async and returns null when the channel closes
  • Dropping all senders closes the channel
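The sender-drop behavior mirrors Rust's synchronous std::sync::mpsc channels, where recv() returns Err once every Sender is gone. A std-only sketch:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        for i in 0..5 {
            tx.send(format!("Message {i}")).unwrap();
        }
        // `tx` is dropped here, closing the channel
    });

    // recv() returns Err once the channel is closed and drained
    while let Ok(msg) = rx.recv() {
        println!("Received: {msg}");
    }
    println!("Channel closed");
}
```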

Multiple Producers

Clone the sender to allow multiple tasks to send:

import tokio
import tokio.sync.mpsc

#[tokio.main]
async fn main() {
    let (tx, mut rx) = mpsc.channel(32)

    // Spawn multiple producer tasks
    for i in 0..3 {
        let tx = tx.clone()  // Clone for each task
        tokio.spawn(async move {
            for j in 0..3 {
                await tx.send(format!("Producer \(i), message \(j)"))
                await sleep(Duration.fromMillis(50))
            }
        })
    }

    // Drop the original sender so channel closes when tasks finish
    drop(tx)

    // Receive all messages
    while let Some(msg) = await rx.recv() {
        println!("\(msg)")
    }
}

Racing Futures with select!

Sometimes you want the result of whichever future completes first:

import tokio

async fn fetchFastest(): String {
    await tokio.select! {
        result = fetchFromServerA() -> result,
        result = fetchFromServerB() -> result,
    }
}

#[tokio.main]
async fn main() {
    let fastest = await fetchFastest()
    println!("Got response: \(fastest)")
}

async fn fetchFromServerA(): String {
    await sleep(Duration.fromMillis(100))
    "Response from A".toString()
}

async fn fetchFromServerB(): String {
    await sleep(Duration.fromMillis(200))
    "Response from B".toString()
}

The select! macro races the futures and returns when the first one completes. The other futures are cancelled.

Select with Timeouts

A common pattern is racing an operation against a timeout:

import tokio
import tokio.time.timeout

async fn fetchWithTimeout(url: &str): Result<Response, Error> {
    match await timeout(Duration.fromSecs(5), fetch(url)) {
        Ok(response) -> Ok(response),
        Err(elapsed) -> Err(Error.Timeout(elapsed)),
    }
}

// Or using select! directly:
async fn fetchWithTimeoutSelect(url: &str): Result<Response, Error> {
    await tokio.select! {
        response = fetch(url) -> Ok(response),
        _ = sleep(Duration.fromSecs(5)) -> Err(Error.Timeout),
    }
}

Fair Scheduling with join!

The join! macro schedules its futures fairly, giving each one a chance to make progress at every await point:

import tokio

#[tokio.main]
async fn main() {
    await tokio.join!(
        countTo("A", 5),
        countTo("B", 5),
    )
}

async fn countTo(name: &str, n: Int) {
    for i in 1..=n {
        println!("\(name): \(i)")
        await tokio.task.yieldNow()  // Let other tasks run
    }
}

Output (interleaved):

A: 1
B: 1
A: 2
B: 2
A: 3
B: 3
...

The yieldNow() function explicitly yields control to the runtime, allowing other tasks to make progress.

Handling Multiple Channels

Use select! to handle messages from multiple sources:

import tokio
import tokio.sync.mpsc

#[tokio.main]
async fn main() {
    let (tx1, mut rx1) = mpsc.channel(10)
    let (tx2, mut rx2) = mpsc.channel(10)

    // Spawn producers
    tokio.spawn(async move {
        for i in 0..3 {
            await tx1.send(format!("From channel 1: \(i)"))
            await sleep(Duration.fromMillis(100))
        }
    })

    tokio.spawn(async move {
        for i in 0..3 {
            await tx2.send(format!("From channel 2: \(i)"))
            await sleep(Duration.fromMillis(150))
        }
    })

    // Handle messages from both channels
    loop {
        await tokio.select! {
            msg = rx1.recv() -> {
                match msg {
                    Some(m) -> println!("RX1: \(m)"),
                    null -> break,
                }
            },
            msg = rx2.recv() -> {
                match msg {
                    Some(m) -> println!("RX2: \(m)"),
                    null -> break,
                }
            },
        }
    }
}

Shared State Between Tasks

For shared mutable state, use tokio.sync.Mutex:

import tokio
import tokio.sync.Mutex
import std.sync.Arc

#[tokio.main]
async fn main() {
    let counter = Arc.new(Mutex.new(0))

    var handles = vec![]

    for _ in 0..10 {
        let counter = Arc.clone(&counter)
        let handle = tokio.spawn(async move {
            for _ in 0..100 {
                var lock = await counter.lock()
                *lock += 1
            }
        })
        handles.push(handle)
    }

    for handle in handles {
        (await handle).unwrap()
    }

    println!("Counter: \(await counter.lock())")  // Prints: Counter: 1000
}

Note: tokio.sync.Mutex is designed for async code. It allows the task to yield while waiting for the lock, unlike std.sync.Mutex which blocks the thread.
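For contrast, here is the thread-based version of the same counter in plain Rust, using std::sync::Mutex - lock() here blocks the whole thread rather than yielding:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..100 {
                // lock() blocks this thread until the mutex is free
                let mut lock = counter.lock().unwrap();
                *lock += 1;
            }
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Counter: {}", *counter.lock().unwrap()); // 1000
}
```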

Task Cancellation

When you drop a future before it completes, it's cancelled:

import tokio

#[tokio.main]
async fn main() {
    let handle = tokio.spawn(async {
        println!("Task starting...")
        await sleep(Duration.fromSecs(10))
        println!("Task complete!")  // Never printed if cancelled
    })

    // Wait a bit then cancel
    await sleep(Duration.fromSecs(1))
    handle.abort()  // Cancel the task

    // Check if it was cancelled
    match await handle {
        Ok(_) -> println!("Task completed"),
        Err(e) if e.isCancelled() -> println!("Task was cancelled"),
        Err(e) -> println!("Task failed: \(e)"),
    }
}

Cancellation happens at await points - if a task is in the middle of non-async code when cancelled, it will continue until the next await.

Summary

This section covered key concurrency patterns:

  • join! runs multiple futures concurrently and waits for all
  • tokio.spawn creates independent background tasks
  • Channels (mpsc) enable message passing between tasks
  • select! races futures and returns the first to complete
  • yieldNow() explicitly yields control to other tasks
  • Async Mutex provides safe shared mutable state
  • Task cancellation happens when futures are dropped

Remember: all await expressions use prefix syntax in Oxide. The examples above consistently show await tokio.join!(...), await tx.send(...), and similar patterns.

In the next section, we'll explore more advanced future patterns including custom timeouts and composing futures in sophisticated ways.

Working With More Futures

Now that we understand the basics of async and concurrency, let's explore more advanced patterns. We'll learn about yielding control, building custom async abstractions, and handling dynamic collections of futures.

Yielding Control to the Runtime

Async code is cooperatively scheduled: the runtime can only switch between tasks at await points. If your code does a lot of work without awaiting, it can starve other tasks:

import tokio
import tokio.time.{ sleep, Duration }

async fn greedyTask() {
    // This blocks other tasks for the entire loop!
    for i in 0..1_000_000 {
        heavyComputation(i)  // No await points
    }
    println!("Greedy task done")
}

async fn politeTask() {
    println!("Polite task trying to run...")
}

#[tokio.main]
async fn main() {
    await tokio.join!(
        greedyTask(),
        politeTask(),
    )
}

In this example, politeTask can't run until greedyTask finishes because there are no await points where the runtime can switch tasks.

Solution 1: Add Await Points

Break up long-running work with await:

async fn friendlyTask() {
    for i in 0..1_000_000 {
        heavyComputation(i)

        // Periodically yield control
        if i % 10_000 == 0 {
            await sleep(Duration.fromMillis(0))
        }
    }
    println!("Friendly task done")
}

Solution 2: Use yieldNow

A more explicit way to yield control:

import tokio.task.yieldNow

async fn yieldingTask() {
    for i in 0..1_000_000 {
        heavyComputation(i)

        if i % 10_000 == 0 {
            await yieldNow()  // Explicitly give other tasks a chance
        }
    }
    println!("Yielding task done")
}

yieldNow() is more efficient than sleep(0) and clearly expresses intent.

Solution 3: Spawn Blocking Work

For truly CPU-intensive work, use spawnBlocking to run it on a dedicated thread pool:

import tokio.task.spawnBlocking

async fn heavyLifting() {
    // Run on a blocking thread, not the async runtime
    let result = (await spawnBlocking {
        // This can take as long as it needs
        expensiveComputation()
    }).unwrap()

    println!("Result: \(result)")
}

This keeps the async runtime responsive for I/O tasks while heavy computation runs elsewhere.

Building Custom Async Abstractions

One of async's strengths is composability. Let's build a timeout function that races any future against a timer:

import tokio
import tokio.time.{ sleep, Duration }

async fn timeout<T, F>(
    future: F,
    duration: Duration
): Result<T, TimeoutError>
where
    F: Future<Output = T>,
{
    await tokio.select! {
        result = future -> Ok(result),
        _ = sleep(duration) -> Err(TimeoutError.new(duration)),
    }
}

#[derive(Debug)]
struct TimeoutError {
    duration: Duration,
}

extension TimeoutError {
    static fn new(duration: Duration): TimeoutError {
        TimeoutError { duration }
    }
}

Now we can use it with any async operation:

#[tokio.main]
async fn main() {
    match await timeout(fetchData(), Duration.fromSecs(5)) {
        Ok(data) -> println!("Got data: \(data)"),
        Err(e) -> println!("Timed out after \(e.duration:?)"),
    }
}

Building a Retry Function

Here's a more complex example - a function that retries failed operations:

async fn retry<T, E, F, Fut>(
    operation: F,
    maxAttempts: UInt32,
    delayBetween: Duration,
): Result<T, E>
where
    F: Fn() -> Fut,
    Fut: Future<Output = Result<T, E>>,
{
    var lastError: E? = null

    for attempt in 1..=maxAttempts {
        match await operation() {
            Ok(value) -> return Ok(value),
            Err(e) -> {
                println!("Attempt \(attempt) failed: \(e:?)")
                lastError = Some(e)

                if attempt < maxAttempts {
                    await sleep(delayBetween)
                }
            }
        }
    }

    Err(lastError.unwrap())
}

// Usage
#[tokio.main]
async fn main() {
    let result = await retry(
        { -> fetchUnreliableData() },
        3,  // max attempts
        Duration.fromSecs(1),  // delay between attempts
    )

    match result {
        Ok(data) -> println!("Success: \(data)"),
        Err(e) -> println!("All attempts failed: \(e)"),
    }
}

Combining Patterns

You can combine timeout and retry:

async fn fetchWithRetryAndTimeout(): Result<Data, Error> {
    await retry(
        { -> timeout(fetchData(), Duration.fromSecs(5)) },
        3,
        Duration.fromSecs(1),
    )
}

Working with Dynamic Collections of Futures

Sometimes you don't know at compile time how many futures you'll have. The FuturesUnordered collection handles this:

import futures.stream.{ FuturesUnordered, StreamExt }

async fn fetchAll(urls: Vec<String>): Vec<Response> {
    var futures = FuturesUnordered.new()

    // Add all fetch operations
    for url in urls {
        futures.push(fetchUrl(url))
    }

    // Collect results as they complete
    var results = vec![]
    while let Some(result) = await futures.next() {
        results.push(result)
    }

    results
}

Results come back in completion order, not submission order. This is efficient when you don't care about order but want results as fast as possible.
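Completion-order delivery can be reproduced with plain threads and a channel: whichever worker finishes first delivers its result first, regardless of spawn order. A std-only Rust sketch:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Spawn "slow" first, but "fast" finishes first
    for (name, delay_ms) in [("slow", 200u64), ("fast", 20u64)] {
        let tx = tx.clone();
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(delay_ms));
            tx.send(name).unwrap();
        });
    }
    drop(tx); // channel closes once both workers are done

    // Results arrive in completion order, not spawn order
    let order: Vec<_> = rx.iter().collect();
    println!("{order:?}"); // ["fast", "slow"]
}
```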

Preserving Order

If you need results in the original order, use indices:

async fn fetchAllOrdered(urls: Vec<String>): Vec<Response> {
    var futures = FuturesUnordered.new()

    // Track indices with each future
    for (i, url) in urls.intoIter().enumerate() {
        futures.push(async move {
            let response = await fetchUrl(&url)
            (i, response)
        })
    }

    // Collect and sort by index
    var results: Vec<(UIntSize, Response)> = vec![]
    while let Some(result) = await futures.next() {
        results.push(result)
    }

    results.sortBy { it.0 }
    results.intoIter().map { it.1 }.collect()
}

Limiting Concurrency

Too many concurrent operations can overwhelm resources. Use a semaphore to limit concurrency:

import tokio.sync.Semaphore
import std.sync.Arc

async fn fetchWithLimit(urls: Vec<String>, maxConcurrent: UIntSize): Vec<Response> {
    let semaphore = Arc.new(Semaphore.new(maxConcurrent))
    var handles = vec![]

    for url in urls {
        let semaphore = Arc.clone(&semaphore)
        let handle = tokio.spawn(async move {
            // Acquire permit (waits if at limit)
            let _permit = (await semaphore.acquire()).unwrap()

            // Permit is dropped when this scope ends, releasing it
            await fetchUrl(&url)
        })
        handles.push(handle)
    }

    var results = vec![]
    for handle in handles {
        results.push((await handle).unwrap())
    }
    results
}

#[tokio.main]
async fn main() {
    let urls = getUrlList()  // Might be thousands of URLs
    // Only 10 concurrent fetches at a time
    let results = await fetchWithLimit(urls, 10)
}
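std has no semaphore type, but the core idea behind Semaphore.new(maxConcurrent) is small enough to sketch: a counter guarded by a mutex, plus a condition variable to wait when the count reaches zero. A hedged std-only Rust sketch (real async semaphores, like Tokio's, additionally hand out RAII permits and yield instead of blocking):

```rust
use std::sync::{Condvar, Mutex};

// A minimal counting semaphore: acquire() waits until a permit is free.
struct Semaphore {
    count: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Semaphore { count: Mutex::new(permits), cv: Condvar::new() }
    }

    fn acquire(&self) {
        let mut count = self.count.lock().unwrap();
        // Block until at least one permit is available
        while *count == 0 {
            count = self.cv.wait(count).unwrap();
        }
        *count -= 1;
    }

    fn release(&self) {
        *self.count.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    let sem = Semaphore::new(2);
    sem.acquire();
    sem.acquire(); // both permits taken; a third acquire would block
    sem.release();
    sem.acquire(); // succeeds again after the release
    println!("permits left: {}", *sem.count.lock().unwrap()); // 0
}
```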

Buffered Streams

For processing streams with limited concurrency, use bufferUnordered:

import futures.stream.{ self, StreamExt }

async fn processAll(items: Vec<Item>): Vec<Result> {
    let results = stream.iter(items)
        .map { item -> processItem(item) }  // Create futures
        .bufferUnordered(10)               // Run up to 10 concurrently
        .collect()                          // Collect results

    await results
}

Graceful Shutdown

Real applications need to shut down cleanly. Here's a pattern using a shutdown signal:

import tokio
import tokio.sync.broadcast

async fn worker(id: Int, mut shutdown: broadcast.Receiver<()>) {
    loop {
        await tokio.select! {
            _ = shutdown.recv() -> {
                println!("Worker \(id) shutting down")
                return
            },
            _ = doWork() -> {
                println!("Worker \(id) completed work")
            },
        }
    }
}

#[tokio.main]
async fn main() {
    let (shutdownTx, _) = broadcast.channel(1)

    var handles = vec![]
    for i in 0..3 {
        let rx = shutdownTx.subscribe()
        handles.push(tokio.spawn(worker(i, rx)))
    }

    // Let workers run for a while
    await sleep(Duration.fromSecs(5))

    // Signal shutdown
    println!("Sending shutdown signal...")
    shutdownTx.send(()).unwrap()

    // Wait for all workers to finish
    for handle in handles {
        (await handle).unwrap()
    }
    println!("All workers stopped")
}

Error Handling in Concurrent Operations

When running multiple operations, you need to decide how to handle errors:

Fail Fast (Stop on First Error)

async fn fetchAllFailFast(urls: Vec<String>): Result<Vec<Response>, Error> {
    var futures = FuturesUnordered.new()
    for url in urls {
        futures.push(fetchUrl(url))
    }

    var results = vec![]
    while let Some(result) = await futures.next() {
        // Return immediately on error
        results.push(result?)
    }
    Ok(results)
}

Collect All Results (Errors and Successes)

async fn fetchAllResults(urls: Vec<String>): Vec<Result<Response, Error>> {
    var futures = FuturesUnordered.new()
    for url in urls {
        futures.push(fetchUrl(url))
    }

    var results = vec![]
    while let Some(result) = await futures.next() {
        results.push(result)
    }
    results
}

// Later, separate successes from failures
fn processResults(results: Vec<Result<Response, Error>>) {
    let (successes, failures): (Vec<Result<Response, Error>>, Vec<Result<Response, Error>>) =
        results.intoIter().partition { it.isOk() }

    println!("\(successes.len()) succeeded, \(failures.len()) failed")
}

Summary

This section covered advanced async patterns:

  • Yielding control with yieldNow() prevents task starvation
  • spawnBlocking handles CPU-intensive work without blocking the runtime
  • Custom abstractions like timeout and retry compose naturally
  • FuturesUnordered handles dynamic collections of futures
  • Semaphores limit concurrent operations
  • Buffered streams process items with bounded concurrency
  • Graceful shutdown uses channels to coordinate stopping
  • Error handling strategies for concurrent operations

Remember: async code in Oxide uses prefix await syntax. All the examples above demonstrate this: await timeout(...), await semaphore.acquire(), await futures.next(), and so on.

Next, we'll explore streams - async sequences of values that arrive over time.

Streams: Async Sequences

While a Future represents a single value that will be available later, a Stream represents a sequence of values that arrive over time. Streams are the async equivalent of iterators.

Understanding Streams

Think about these real-world scenarios:

  • Messages arriving in a chat application
  • Log entries being written to a file
  • Sensor readings coming from IoT devices
  • Chunks of data downloading from the network

Each produces multiple values over time, not just one. That's what streams model.

// Iterator: produces values synchronously
for item in collection.iter() {
    process(item)
}

// Stream: produces values asynchronously
while let Some(item) = await stream.next() {
    await process(item)
}

Creating Streams

From Iterators

The simplest way to create a stream is from an existing iterator:

import futures.stream.{ self, StreamExt }

async fn processNumbers() {
    let numbers = vec![1, 2, 3, 4, 5]
    var stream = stream.iter(numbers)

    while let Some(n) = await stream.next() {
        println!("Got: \(n)")
    }
}

From Channels

Channels naturally produce streams:

import tokio.sync.mpsc

async fn receiveMessages() {
    let (tx, mut rx) = mpsc.channel(100)

    // Producer sends messages
    tokio.spawn(async move {
        for i in 0..5 {
            await tx.send(format!("Message \(i)"))
            await sleep(Duration.fromMillis(100))
        }
    })

    // Receiver processes the stream
    while let Some(msg) = await rx.recv() {
        println!("Received: \(msg)")
    }
}

Using stream! Macro

The async-stream crate provides a convenient macro:

import asyncStream.stream

fn countingStream(max: Int): impl Stream<Item = Int> {
    stream! {
        for i in 0..max {
            await sleep(Duration.fromMillis(100))
            yield i
        }
    }
}

async fn useCountingStream() {
    var stream = countingStream(5)

    while let Some(n) = await stream.next() {
        println!("Count: \(n)")
    }
}

Stream Combinators

Like iterators, streams have powerful combinators for transformation and filtering. You need to import StreamExt to access these methods.

Map

Transform each element:

import futures.stream.StreamExt

async fn doubleStream() {
    let numbers = stream.iter(vec![1, 2, 3, 4, 5])

    var doubled = numbers.map { it * 2 }

    while let Some(n) = await doubled.next() {
        println!("\(n)")  // 2, 4, 6, 8, 10
    }
}

Filter

Keep only matching elements:

async fn filterStream() {
    let numbers = stream.iter(vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

    var evens = numbers.filter { it % 2 == 0 }

    while let Some(n) = await evens.next() {
        println!("\(n)")  // 2, 4, 6, 8, 10
    }
}

Filter Map

Combine filter and map - returns Some to include, null to skip:

async fn filterMapStream() {
    let items = stream.iter(vec!["1", "two", "3", "four", "5"])

    var parsed = items.filterMap { s ->
        s.parse<Int>().ok()  // Only keeps successfully parsed numbers
    }

    while let Some(n) = await parsed.next() {
        println!("\(n)")  // 1, 3, 5
    }
}

Take and Skip

Limit the stream:

async fn limitStream() {
    let numbers = stream.iter(0..100)

    // Skip first 10, then take 5
    var limited = numbers.skip(10).take(5)

    while let Some(n) = await limited.next() {
        println!("\(n)")  // 10, 11, 12, 13, 14
    }
}

Collect

Gather all stream elements into a collection:

async fn collectStream() {
    let numbers = stream.iter(vec![1, 2, 3, 4, 5])
    let doubled = numbers.map { it * 2 }

    // Collect all results into a Vec
    let results: Vec<Int> = await doubled.collect()
    println!("Results: \(results:?)")  // [2, 4, 6, 8, 10]
}

Fold

Reduce a stream to a single value:

async fn sumStream() {
    let numbers = stream.iter(vec![1, 2, 3, 4, 5])

    let sum = await numbers.fold(0, { acc, n -> acc + n })
    println!("Sum: \(sum)")  // 15
}

Async Operations in Streams

Then

Apply an async function to each element:

async fn asyncTransform() {
    let urls = stream.iter(vec![
        "https://example.com/1",
        "https://example.com/2",
        "https://example.com/3",
    ])

    // fetch is async, so we use then
    var responses = urls.then { url -> fetch(url) }

    while let Some(response) = await responses.next() {
        println!("Got response: \(response.status())")
    }
}

Buffered Processing

Process multiple items concurrently with bounded parallelism:

async fn bufferedFetch() {
    let urls = stream.iter(getUrls())

    // Create futures (doesn't execute yet)
    let fetches = urls.map { url -> fetch(url) }

    // Execute up to 5 concurrently
    var results = fetches.bufferUnordered(5)

    while let Some(result) = await results.next() {
        println!("Completed: \(result:?)")
    }
}

Results arrive in completion order, not original order. Use buffered(n) if you need original order.

Looping Over Streams

Oxide has no special for await syntax, and an ordinary for loop won't do either: for n in await stream.next() would iterate over a single optional value - at most one item - not the whole stream. The idiomatic way to consume a stream is a while let loop that awaits each item:

async fn whileLetStream() {
    var stream = countingStream(5)

    while let Some(n) = await stream.next() {
        println!("Got: \(n)")
    }
}

Combining Streams

Merge

Combine multiple streams into one, interleaving items:

import futures.stream.{ select, StreamExt }

async fn mergeStreams() {
    let streamA = stream.iter(vec![1, 3, 5])
    let streamB = stream.iter(vec![2, 4, 6])

    // Items from both streams interleave
    var merged = select(streamA, streamB)

    while let Some(n) = await merged.next() {
        println!("\(n)")  // Order depends on timing
    }
}

Chain

Concatenate streams (one after another):

async fn chainStreams() {
    let first = stream.iter(vec![1, 2, 3])
    let second = stream.iter(vec![4, 5, 6])

    var chained = first.chain(second)

    while let Some(n) = await chained.next() {
        println!("\(n)")  // 1, 2, 3, 4, 5, 6 (in order)
    }
}

Zip

Pair items from two streams:

async fn zipStreams() {
    let names = stream.iter(vec!["Alice", "Bob", "Carol"])
    let ages = stream.iter(vec![30, 25, 35])

    var zipped = names.zip(ages)

    while let Some((name, age)) = await zipped.next() {
        println!("\(name) is \(age) years old")
    }
}

Real-World Example: Log Tail

Here's a practical example - watching a log file for new lines:

import tokio.fs.File
import tokio.io.{ AsyncBufReadExt, BufReader }

async fn tailLog(path: &str) {
    let file = (await File.open(path)).unwrap()
    var reader = BufReader.new(file)
    var line = String.new()

    println!("Watching \(path) for new lines...")

    loop {
        line.clear()
        match await reader.readLine(&mut line) {
            Ok(0) -> {
                // End of file, wait and try again
                await sleep(Duration.fromMillis(100))
            },
            Ok(_) -> {
                print!("\(line)")  // Line already has newline
            },
            Err(e) -> {
                println!("Error reading: \(e)")
                break
            },
        }
    }
}

Real-World Example: Event Throttling

Throttle UI events to prevent overwhelming handlers:

import tokio.time.{ interval, Duration }
import futures.stream.StreamExt
import asyncStream.stream

fn throttle<S: Stream>(stream: S, period: Duration): impl Stream<Item = S.Item> {
    var interval = interval(period)
    var pending: S.Item? = null
    var stream = Box.pin(stream)

    stream! {
        loop {
            await tokio.select! {
                item = stream.next() -> {
                    match item {
                        Some(i) -> pending = Some(i),
                        null -> {
                            // Source exhausted: flush any pending item, then stop
                            if let Some(p) = pending.take() {
                                yield p
                            }
                            return
                        },
                    }
                },
                _ = interval.tick() -> {
                    if let Some(p) = pending.take() {
                        yield p
                    }
                },
            }
        }
    }
}

// Usage: only emit at most one event per 100ms
async fn handleEvents() {
    let events = getEventStream()
    var throttled = throttle(events, Duration.fromMillis(100))

    while let Some(event) = await throttled.next() {
        await handleEvent(event)
    }
}

Stream vs Iterator Comparison

Operation   Iterator              Stream
Next item   iter.next()           await stream.next()
Transform   iter.map(f)           stream.map(f)
Filter      iter.filter(p)        stream.filter(p)
Collect     iter.collect()        await stream.collect()
Fold        iter.fold(init, f)    await stream.fold(init, f)
For loop    for x in iter         while let Some(x) = await stream.next()

The patterns are nearly identical - the key difference is adding await where async operations occur.
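As a quick check of the Iterator column, here is the synchronous side of the table in plain Rust - one of this book's occasional Rust equivalents:

```rust
fn main() {
    let numbers = vec![1, 2, 3, 4, 5];

    // Transform, filter, fold - no await needed anywhere
    let doubled: Vec<i32> = numbers.iter().map(|n| n * 2).collect();
    let big: Vec<i32> = doubled.iter().copied().filter(|n| *n > 5).collect();
    let sum: i32 = big.iter().fold(0, |acc, n| acc + n);

    println!("{doubled:?} {big:?} {sum}"); // [2, 4, 6, 8, 10] [6, 8, 10] 24
}
```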

Summary

Streams are the async version of iterators:

  • Create streams from iterators, channels, or the stream! macro
  • Transform streams with familiar combinators: map, filter, take, etc.
  • Process async operations with then and bufferUnordered
  • Combine streams with merge, chain, and zip
  • Iterate using while let Some(item) = await stream.next()

Remember: stream operations that produce values use prefix await: await stream.next(), await stream.collect(), await stream.fold(...).

In the final section, we'll look at the underlying traits that make async programming possible: Future, Pin, and Stream.

Traits for Async: Under the Hood

Understanding the traits that power async programming helps you write better async code and debug issues when they arise. In this section, we'll explore Future, Pin, and Stream - the building blocks of Rust and Oxide's async system.

The Future Trait

Every async operation in Oxide is backed by the Future trait:

trait Future {
    type Output

    mutating fn poll(cx: &mut Context): Poll<Self.Output>
}

Let's break this down:

  • Output: The type of value the future produces when complete
  • poll: Called by the runtime to check if the future is ready
  • Poll: An enum with two variants:
    • Poll.Ready(value): The future completed with a value
    • Poll.Pending: The future isn't ready yet

How Await Works

When you write await someFuture, the compiler transforms it into code that repeatedly calls poll() until the future returns Poll.Ready:

// You write:
let result = await someFuture

// Conceptually becomes something like:
loop {
    match someFuture.poll(cx) {
        Poll.Ready(value) -> break value,
        Poll.Pending -> {
            // Yield to runtime, which will poll again later
            suspend()
        },
    }
}

The runtime handles the actual polling loop and decides when to poll each future based on readiness notifications.
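To make the polling contract concrete, here is a hedged, std-only Rust sketch: a hand-written future that is Pending on its first poll, plus the simplest possible executor that busy-polls it. (A real runtime parks until the waker fires instead of spinning.)

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A hand-written future: Pending on the first poll, Ready(42) on the second.
struct YieldOnce {
    polled: bool,
}

impl Future for YieldOnce {
    type Output = u32;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        if self.polled {
            Poll::Ready(42)
        } else {
            self.polled = true;
            cx.waker().wake_by_ref(); // ask the runtime to poll us again
            Poll::Pending
        }
    }
}

// A waker that does nothing - enough for a busy-polling executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// The simplest possible "runtime": poll in a loop until Ready.
fn block_on<F: Future + Unpin>(mut future: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(value) = Pin::new(&mut future).poll(&mut cx) {
            return value;
        }
    }
}

fn main() {
    let value = block_on(YieldOnce { polled: false });
    println!("{value}"); // 42
}
```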

State Machines

When you write an async fn, the compiler generates a state machine:

async fn fetchAndProcess(url: &str): Data {
    let response = await fetch(url)      // State 1
    let body = await response.text()     // State 2
    processData(&body)                   // State 3
}

The compiler creates something like:

enum FetchAndProcessState {
    Start { url: String },
    WaitingForFetch { fetchFuture: FetchFuture },
    WaitingForBody { bodyFuture: BodyFuture },
    Done,
}

Each await point becomes a state transition. The poll method checks which state we're in, polls the appropriate inner future, and transitions when ready.

The Pin Type

In Rust, poll's receiver is written self: Pin<&mut Self>; Oxide's mutating fn in the Future trait compiles down to that same pinned receiver. Pin is crucial for async safety but often confusing at first.

The Problem: Self-Referential Futures

State machines can contain references to their own fields:

async fn example() {
    let data = vec![1, 2, 3]
    let reference = &data[0]  // reference points to data
    await someOperation()
    println!("\(reference)")  // use the reference after await
}

The state machine stores both data and reference. But what if the state machine is moved in memory? The reference would point to the old location!

Pin's Solution

Pin guarantees that pinned data won't move in memory:

import std.pin.Pin

// Once pinned, the value cannot be moved
var boxed = Box.pin(MyFuture.new())

// Inside a poll implementation (where a Context is in scope),
// asMut() gives the pinned mutable reference that poll requires
boxed.asMut().poll(cx)

The Unpin Marker Trait

Most types are Unpin, meaning they're safe to move even when pinned:

// These are all Unpin - can be moved freely
let x: Int = 42
let s: String = "hello".toString()
let v: Vec<Int> = vec![1, 2, 3]

Only self-referential types (like compiler-generated futures) are !Unpin. In practice, you rarely need to think about Pin unless:

  1. You're implementing Future manually
  2. You're working with !Unpin types in collections
  3. You're writing low-level async utilities

Working with Pin in Practice

When you need to pin a future:

import std.pin.pin

async fn example() {
    // Use the pin! macro for stack pinning
    let future = someFuture()
    pin!(future)

    // Now we can poll it
    await future
}

// Or use Box.pin for heap allocation
async fn heapPinned() {
    let future = Box.pin(someFuture())
    await future
}
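Because Oxide compiles through the Rust toolchain, the same pinning machinery can be exercised in plain Rust. The sketch below uses std's real `RawWaker` API, but the `noop_waker` helper is our own illustrative name, and the future is trivially ready, so a single poll completes it:

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A minimal no-op waker: enough to poll a future that is already ready.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    // Box::pin heap-allocates and pins the future; as_mut() yields the
    // Pin<&mut _> that Future::poll requires.
    let mut fut = Box::pin(async { 21 * 2 });
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => println!("ready: {v}"), // prints "ready: 42"
        Poll::Pending => println!("pending"),
    }
}
```

Note that `fut.as_mut()` hands `poll` a fresh `Pin<&mut _>` each time, which is exactly why `Pin<&mut Self>` appears in the poll signature.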

The Stream Trait

Stream is to Future what Iterator is to single values:

trait Stream {
    type Item

    mutating fn pollNext(cx: &mut Context): Poll<Self.Item?>
}

The return type Poll<Self.Item?> combines:

  • Poll.Ready(Some(item)): An item is available
  • Poll.Ready(null): The stream has ended
  • Poll.Pending: No item ready yet, try again later

Stream vs Future

Aspect         Future                 Stream
Produces       One value              Many values
Poll returns   Poll<T>                Poll<T?>
Method         poll()                 pollNext()
Analog         Single async result    Async iterator

The StreamExt Trait

Like Iterator has many useful methods, Stream has StreamExt:

trait StreamExt: Stream {
    mutating async fn next(): Self.Item?
    consuming fn map<F, T>(f: F): Map<Self, F>
    consuming fn filter<F>(f: F): Filter<Self, F>
    consuming async fn collect<C>(): C
    // ... many more
}

StreamExt provides the convenient next() method that hides the polling details:

// Instead of manual polling:
loop {
    match stream.pollNext(cx) {
        Poll.Ready(Some(item)) -> process(item),
        Poll.Ready(null) -> break,
        Poll.Pending -> suspend(),
    }
}

// You can write:
while let Some(item) = await stream.next() {
    process(item)
}
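The manual polling loop above can be reproduced in Rust. Since std has no Stream trait, the sketch below mimics the shape with an inherent `poll_next` method; `Counter` and `noop_waker` are our own illustrative names, not library APIs:

```rust
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A hand-rolled stream-like type mirroring pollNext: yields 1..=3, then None.
struct Counter {
    n: u32,
}

impl Counter {
    fn poll_next(&mut self, _cx: &mut Context<'_>) -> Poll<Option<u32>> {
        if self.n < 3 {
            self.n += 1;
            Poll::Ready(Some(self.n)) // an item is available
        } else {
            Poll::Ready(None) // the stream has ended
        }
    }
}

// Minimal no-op waker, sufficient because Counter never returns Pending.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut stream = Counter { n: 0 };
    let mut out = Vec::new();
    // The manual polling loop that a next() method would normally hide:
    while let Poll::Ready(Some(item)) = stream.poll_next(&mut cx) {
        out.push(item);
    }
    println!("{out:?}"); // prints "[1, 2, 3]"
}
```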

Implementing Future Manually

Sometimes you need to implement Future yourself. Here's a simple timer:

import std.future.Future
import std.pin.Pin
import std.task.{ Context, Poll }
import std.time.{ Duration, Instant }

struct Timer {
    deadline: Instant,
}

extension Timer {
    static fn new(duration: Duration): Timer {
        Timer {
            deadline: Instant.now() + duration,
        }
    }
}

extension Timer: Future {
    type Output = ()

    mutating fn poll(cx: &mut Context): Poll<()> {
        if Instant.now() >= self.deadline {
            Poll.Ready(())
        } else {
            // Schedule wakeup (simplified - real impl uses a timer wheel)
            cx.waker().wakeByRef()
            Poll.Pending
        }
    }
}

// Usage
async fn useTimer() {
    await Timer.new(Duration.fromSecs(1))
    println!("Timer fired!")
}
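The same timer can be written against Rust's std Future trait directly, which shows what the Oxide `mutating fn poll` desugars to. This is an illustrative sketch: the busy-poll loop in `main` (and the hand-rolled `noop_waker`) stand in for a real runtime:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
use std::time::{Duration, Instant};

struct Timer {
    deadline: Instant,
}

impl Future for Timer {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if Instant::now() >= self.deadline {
            Poll::Ready(())
        } else {
            // Simplified: request an immediate re-poll (a real
            // implementation registers with a timer wheel instead).
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// Minimal no-op waker for demonstration purposes.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut timer = Box::pin(Timer {
        deadline: Instant::now() + Duration::from_millis(10),
    });
    // A tiny busy-poll loop standing in for a runtime:
    while timer.as_mut().poll(&mut cx).is_pending() {}
    println!("Timer fired!");
}
```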

Wakers

The Context contains a Waker that tells the runtime when to poll again:

mutating fn poll(cx: &mut Context): Poll<Output> {
    if self.isReady() {
        Poll.Ready(self.result())
    } else {
        // Store the waker to call later when ready
        self.waker = Some(cx.waker().clone())
        Poll.Pending
    }
}

// Later, when the operation completes:
fn onComplete() {
    if let Some(waker) = &self.waker {
        waker.wake()  // Tell runtime to poll again
    }
}
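To see how a waker actually drives polling, here is a minimal `block_on` in Rust built from std's `Wake` trait and thread parking. This is a common teaching pattern, not a production executor: the waker unparks the blocked thread, which then polls again:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks a parked thread - the heart of a minimal block_on.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            // Sleep until some waker calls unpark, then poll again.
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    let answer = block_on(async { 40 + 2 });
    println!("{answer}"); // prints "42"
}
```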

Async Runtimes

Runtimes tie everything together. They:

  1. Execute the top-level future (from blockOn or #[tokio.main])
  2. Poll futures when they're ready to make progress
  3. Manage wakers and notifications
  4. Schedule tasks across threads (for multi-threaded runtimes)

Runtime Comparison

Different runtimes make different trade-offs:

Runtime                  Threads        Use Case
Tokio (current_thread)   Single         Simple applications
Tokio (multi_thread)     Multiple       High-performance servers
async-std                Multiple       std-like API
smol                     Configurable   Minimal footprint

Choosing a Runtime

For most applications, Tokio is the standard choice:

// Single-threaded (simpler, no Send requirements)
#[tokio.main(flavor = "current_thread")]
async fn main() {
    // ...
}

// Multi-threaded (default; runs tasks in parallel across worker threads)
#[tokio.main]
async fn main() {
    // ...
}

Send and Sync with Async

For multi-threaded runtimes, futures must be Send:

// This works with single-threaded runtime
async fn nonSendExample() {
    let cell = Rc.new(RefCell.new(0))  // Rc is !Send
    await doWork()
    *cell.borrowMut() += 1
}

// For multi-threaded, use Send types
async fn sendExample() {
    let counter = Arc.new(AtomicUInt32.new(0))  // Arc is Send
    await doWork()
    counter.fetchAdd(1, Ordering.SeqCst)
}

If you hold a !Send value across an await point, the entire future becomes !Send:

async fn problematic() {
    let rc = Rc.new(42)
    await someOperation()  // rc is held across await
    println!("\(rc)")      // Future is !Send
}

Solutions:

  1. Use Send types (Arc instead of Rc)
  2. Don't hold !Send values across await
  3. Use a single-threaded runtime
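In Rust terms, the Send requirement can be checked at compile time with a trivial helper bound. The `assert_send` helper and `noop_waker` below are our own illustrative names; the example compiles precisely because `Arc<AtomicU32>` is Send:

```rust
use std::future::Future;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Compiles only if the future is Send - i.e. safe to move between
// the worker threads of a multi-threaded runtime.
fn assert_send<F: Future + Send>(fut: F) -> F {
    fut
}

async fn send_example(counter: Arc<AtomicU32>) {
    // Arc<AtomicU32> is Send, so holding it across an await point
    // keeps the whole compiler-generated future Send.
    std::future::ready(()).await;
    counter.fetch_add(1, Ordering::SeqCst);
}

// Minimal no-op waker for demonstration purposes.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let counter = Arc::new(AtomicU32::new(0));
    let fut = assert_send(send_example(counter.clone()));

    // Drive the future with one poll (it never actually suspends):
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    assert!(matches!(fut.as_mut().poll(&mut cx), Poll::Ready(())));
    println!("counter = {}", counter.load(Ordering::SeqCst)); // prints "counter = 1"
}
```

Swapping the `Arc<AtomicU32>` for an `Rc<RefCell<u32>>` held across the await makes `assert_send` fail to compile, which is the error you would see when handing such a future to a multi-threaded runtime.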

Debugging Async Code

Common Issues

1. Future not awaited:

async fn forgetfulExample(url: &str) {
    fetch(url)  // WARNING: future created but never awaited!
    // The fetch never happens
}

2. Blocking in async context:

async fn badExample() {
    std.thread.sleep(Duration.fromSecs(1))  // Blocks the runtime!
    // Use tokio.time.sleep instead
}

3. Deadlock with sync Mutex:

async fn deadlockRisk() {
    let lock = syncMutex.lock()  // Holds lock across await
    await someOperation()        // Other tasks can't acquire lock
    drop(lock)
}
// Use tokio.sync.Mutex instead

Debugging Tools

  • tokio-console: Real-time async debugging
  • Tracing: Add instrumentation to async code
  • Careful logging: Log at await points to track progress

Summary

The async system is built on three key traits:

  • Future: Represents a single async computation

    • poll() checks for completion
    • Returns Poll.Ready(value) or Poll.Pending
  • Pin: Ensures self-referential futures don't move

    • Most types are Unpin and can be moved freely
    • Compiler-generated futures may be !Unpin
  • Stream: Represents an async sequence

    • pollNext() yields items over time
    • StreamExt provides convenient methods

Understanding these traits helps you:

  • Debug mysterious async behavior
  • Write custom futures when needed
  • Choose appropriate types for concurrent code
  • Work effectively with async runtimes

Remember: Oxide uses prefix await throughout. Whether you're writing await future, await stream.next(), or await Timer.new(Duration.fromSecs(1)), the await keyword always comes before the expression.

This concludes our tour of async programming in Oxide. You now have the knowledge to write efficient, concurrent applications using futures, streams, and async/await!

Futures, Tasks, and Threads

Async programming introduces new execution units: tasks. A task is a future that the async runtime schedules, usually on a pool of operating system threads. Understanding the distinction between tasks and threads helps you design efficient systems.

Tasks vs. Threads

  • Threads are managed by the operating system. Each thread has its own stack and OS scheduling overhead.
  • Tasks are managed by the async runtime. Many tasks can run on a small number of threads.

Tasks are lightweight and great for I/O-bound work. Heavy CPU-bound work is better done on threads (or offloaded to a dedicated blocking pool) so that it doesn't stall the async runtime.

Spawning Tasks

Most runtimes provide a task API. For example, with Tokio:

import tokio.task

async fn fetch(url: &str): String {
    // Imagine an HTTP request here
    "ok".toString()
}

async fn fetchAll(urls: Vec<String>): Vec<String> {
    let handles = urls.map { url ->
        task.spawn { -> fetch(&url) }
    }

    handles.map { handle -> await handle }
}

Each spawn call schedules a task. The runtime polls those tasks on worker threads and wakes them when they can make progress.

When to Use Threads

If you need to run blocking or CPU-heavy work, use threads (or a dedicated blocking pool) so you don't stall the async runtime:

import std.thread

fn heavyComputation(): Int {
    // CPU-intensive work
    42
}

fn runInThread(): Int {
    let handle = thread.spawn { -> heavyComputation() }
    handle.join().unwrap()
}
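The Rust equivalent uses std::thread directly; `heavy_computation` is a stand-in for real CPU-bound work:

```rust
use std::thread;

fn heavy_computation() -> i32 {
    // Stand-in for CPU-intensive work
    42
}

fn run_in_thread() -> i32 {
    // spawn returns a JoinHandle; join blocks until the thread finishes
    let handle = thread::spawn(heavy_computation);
    handle.join().unwrap()
}

fn main() {
    println!("{}", run_in_thread()); // prints "42"
}
```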

Async is about waiting efficiently. Threads are about doing work in parallel. Choose the tool that matches the kind of work you are doing.

Object-Oriented Programming Features

Oxide is fundamentally a systems programming language rooted in the functional and type-system capabilities of Rust. However, Oxide incorporates many object-oriented programming (OOP) features that allow you to write code in a style familiar to those coming from traditional OOP languages. This chapter explores how Oxide supports OOP patterns while maintaining its emphasis on safety and performance.

What Is OOP?

Object-oriented programming is a programming paradigm organized around objects that contain data and behavior. When we talk about OOP in Oxide, we're discussing design patterns and language features that enable this style of programming. Some debate whether Oxide truly qualifies as an OOP language, but it provides the tools you need to structure your programs using OOP principles.

Rust, and by extension Oxide, takes a different approach than some traditional OOP languages. Rather than inheriting implementations from parent classes, Oxide uses trait composition and extension blocks to achieve similar goals with better safety guarantees and more explicit control.

Topics Covered

This chapter covers three core OOP concepts as they apply to Oxide:

  1. Encapsulation - Bundling data with the methods that operate on it, while controlling which details are exposed to the outside world.

  2. Inheritance vs. Composition - Understanding how Oxide achieves code reuse through trait composition rather than class inheritance, and how to design flexible, maintainable abstractions.

  3. Trait Objects - Using dynamic dispatch with dyn Trait to work with multiple types through a common interface at runtime.

These features work together to enable you to build scalable, maintainable systems in Oxide.

The Oxide Approach to OOP

Rather than traditional class inheritance hierarchies, Oxide emphasizes:

  • Traits for defining behavior contracts
  • Extension blocks for adding methods to types
  • Composition for building complex types from simpler ones
  • Type safety enforced at compile time

This approach reduces common OOP pitfalls like fragile base class problems and provides more predictable behavior.

Let's begin by exploring how Oxide achieves encapsulation.

Encapsulation: Hiding Implementation Details

Encapsulation is one of the foundational principles of object-oriented programming. It means bundling related data and behavior together while hiding implementation details from the outside world. In Oxide, encapsulation is achieved through a combination of structs, access modifiers, and extension blocks.

What Is Encapsulation?

Encapsulation serves two main purposes:

  1. Data hiding - Keep internal state private so it can only be modified in controlled ways
  2. Interface stability - Expose a stable public interface while remaining free to change implementation details

By making fields private and providing public methods to interact with them, you ensure that users of your type can't accidentally break its invariants.

Public and Private in Oxide

By default, struct fields and methods are private - they can only be accessed from within the same module. To make something accessible outside the module, use the public keyword:

public struct BankAccount {
    public accountNumber: String,
    private balance: Decimal,  // Only accessible within this module
}

Note: In Oxide, private is explicit for clarity, though fields without any visibility modifier are private by default.

Private Fields with Public Methods

The key to encapsulation is exposing a public interface while keeping the internal state private. Here's a complete example:

public struct BankAccount {
    public accountNumber: String,
    private balance: Decimal,
}

extension BankAccount {
    static fn create(accountNumber: String): Self {
        Self {
            accountNumber,
            balance: Decimal(0),
        }
    }

    public fn getBalance(): Decimal {
        self.balance
    }

    public fn deposit(amount: Decimal) {
        if amount > Decimal(0) {
            self.balance = self.balance + amount
        }
    }

    public fn withdraw(amount: Decimal): Bool {
        if amount > Decimal(0) && amount <= self.balance {
            self.balance = self.balance - amount
            return true
        }
        return false
    }
}

In this example:

  • accountNumber is public because it's just a reference number
  • balance is private because modifying it directly could break the account's invariants
  • All operations on balance go through public methods that maintain the account's validity

Users of BankAccount are forced to use the deposit and withdraw methods, ensuring that the balance never becomes negative unintentionally.

Extension Blocks for Encapsulation

Oxide uses extension blocks to add methods to types, which enables clean encapsulation patterns. Extension blocks can be in the same module as the struct or in external modules, giving you flexibility in organizing your code:

// In user.rs
public struct User {
    public id: UInt64,
    public username: String,
    private passwordHash: String,
}

// Also in user.rs - an extension in the same module as User

extension User {
    public fn verifyPassword(password: String): Bool {
        // Verify password against passwordHash
        // This method can read the private passwordHash field
        // because the extension lives in the same module as the struct
        self.hashPassword(password) == self.passwordHash
    }
    }

    private fn hashPassword(password: String): String {
        // Implementation details hidden
        // ...
    }
}

Extension blocks in the same module can access private fields, enabling you to group related functionality together while maintaining encapsulation.

Getters and Setters

Use public methods to control access to private fields. This pattern is sometimes called "getter" and "setter" methods:

public struct Temperature {
    private celsius: Float,
}

extension Temperature {
    static fn fromCelsius(celsius: Float): Self {
        Self { celsius }
    }

    static fn fromFahrenheit(fahrenheit: Float): Self {
        Self { celsius: (fahrenheit - 32) * 5 / 9 }
    }

    public fn getCelsius(): Float {
        self.celsius
    }

    public fn getFahrenheit(): Float {
        self.celsius * 9 / 5 + 32
    }

    public fn setCelsius(celsius: Float) {
        self.celsius = celsius
    }
}

By controlling access through methods, you can:

  • Validate input before storing it
  • Perform calculations when retrieving values
  • Change the internal representation without affecting the public API
  • Add logging or other side effects
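A Rust equivalent of the Temperature type (method names adapted to Rust's snake_case convention) makes the conversions easy to sanity-check:

```rust
// Rust sketch of the Temperature type: the field stays private, so all
// access goes through the methods below.
pub struct Temperature {
    celsius: f64,
}

impl Temperature {
    pub fn from_celsius(celsius: f64) -> Self {
        Self { celsius }
    }

    pub fn from_fahrenheit(fahrenheit: f64) -> Self {
        Self { celsius: (fahrenheit - 32.0) * 5.0 / 9.0 }
    }

    pub fn celsius(&self) -> f64 {
        self.celsius
    }

    pub fn fahrenheit(&self) -> f64 {
        self.celsius * 9.0 / 5.0 + 32.0
    }

    pub fn set_celsius(&mut self, celsius: f64) {
        self.celsius = celsius;
    }
}

fn main() {
    let t = Temperature::from_fahrenheit(212.0);
    println!("{} C", t.celsius());    // prints "100 C"
    println!("{} F", t.fahrenheit()); // prints "212 F"
}
```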

Invariant Enforcement

One of the primary benefits of encapsulation is maintaining object invariants - conditions that must always be true about an object's state. For example, a Stack type maintains the invariant that elements are added and removed only at the top, so pop always returns the most recently pushed item:

public struct Stack<T> {
    private items: Vec<T>,
}

extension Stack<T> {
    public fn new(): Self {
        Self { items: Vec<T>() }
    }

    public fn push(item: T) {
        self.items.append(item)
    }

    public fn pop(): T? {
        if self.items.isEmpty() {
            return null
        }
        return self.items.removeLast()
    }

    public fn size(): UInt {
        self.items.count()
    }

    public fn isEmpty(): Bool {
        self.items.isEmpty()
    }
}

The invariant here is LIFO order: elements enter and leave only through push and pop, so pop always returns the most recently pushed item. By making items private and only exposing operations through methods, you guarantee that this invariant is always maintained. Users cannot bypass these operations to corrupt the internal state.
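Here is the same stack in Rust (method names follow Rust convention), which you can run to confirm the behavior:

```rust
// Rust sketch of the Stack: `items` is private, so elements can only
// move through push and pop, preserving LIFO order.
pub struct Stack<T> {
    items: Vec<T>,
}

impl<T> Stack<T> {
    pub fn new() -> Self {
        Self { items: Vec::new() }
    }

    pub fn push(&mut self, item: T) {
        self.items.push(item);
    }

    pub fn pop(&mut self) -> Option<T> {
        self.items.pop()
    }

    pub fn size(&self) -> usize {
        self.items.len()
    }

    pub fn is_empty(&self) -> bool {
        self.items.is_empty()
    }
}

fn main() {
    let mut stack = Stack::new();
    stack.push(1);
    stack.push(2);
    println!("{:?}", stack.pop()); // prints "Some(2)" - most recent first
    println!("{}", stack.size());  // prints "1"
}
```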

Encapsulation and Module Boundaries

Encapsulation works hand-in-hand with Oxide's module system. A struct doesn't need to explicitly mark fields as private if they're only used within the module - they're private by default. The module boundary provides the first level of encapsulation:

// In banking.rs module
struct InternalTransaction {
    // No 'public' keyword - private to the module
    timestamp: UInt64,
    amount: Decimal,
    description: String,
}

public struct Account {
    public id: String,
    private transactions: Vec<InternalTransaction>,
}

extension Account {
    public fn getTransactionHistory(): Vec<(UInt64, Decimal, String)> {
        // Convert private transactions to public data
        // This controls what information is exposed
        self.transactions.map { t ->
            (t.timestamp, t.amount, t.description)
        }
    }
}

Comparison with Rust

In Rust, encapsulation is achieved with the pub keyword:

#![allow(unused)]
fn main() {
pub struct BankAccount {
    pub account_number: String,
    balance: Decimal,  // Private by default
}

impl BankAccount {
    pub fn new(account_number: String) -> Self {
        Self {
            account_number,
            balance: Decimal::new(0),
        }
    }

    pub fn deposit(&mut self, amount: Decimal) {
        if amount > Decimal::new(0) {
            self.balance = self.balance + amount;
        }
    }
}
}

The key difference in Oxide is the use of public instead of pub and the extension block syntax instead of impl. The underlying semantics are identical.

Summary

Encapsulation in Oxide is achieved through:

  • Private fields by default - Only what you explicitly mark as public is exposed
  • Public methods - Control how external code can interact with your types
  • Extension blocks - Organize methods logically and group related functionality
  • Invariant maintenance - Ensure that object state remains valid through controlled access
  • Module boundaries - First level of access control, allowing module-level privacy

By using these tools effectively, you create robust, maintainable code where types control their own state and guarantee their own correctness. In the next section, we'll explore how to achieve code reuse through composition rather than inheritance.

Using Trait Objects for Dynamic Dispatch

In previous sections, we explored how to use traits as trait bounds on generic types to achieve compile-time polymorphism. This approach requires knowing all concrete types at compile time.

Sometimes, you need to work with multiple types through a common interface where the actual type is only known at runtime. Oxide provides trait objects using the dyn keyword to achieve this dynamic dispatch.

What Are Trait Objects?

A trait object is a dynamically-sized type that represents any type implementing a specific trait. It allows you to store or pass around values of different types as long as they all implement the trait.

The syntax for a trait object is &dyn Trait (for borrowed trait objects) or Box<dyn Trait> (for owned trait objects).

Why Trait Objects?

Consider a scenario where you want to create a collection of different types that all share the same behavior:

public trait Shape {
    fn area(): Float
    fn perimeter(): Float
}

public struct Circle {
    public radius: Float,
}

public struct Rectangle {
    public width: Float,
    public height: Float,
}

extension Circle: Shape {
    fn area(): Float {
        3.14159 * self.radius * self.radius
    }

    fn perimeter(): Float {
        2.0 * 3.14159 * self.radius
    }
}

extension Rectangle: Shape {
    fn area(): Float {
        self.width * self.height
    }

    fn perimeter(): Float {
        2.0 * (self.width + self.height)
    }
}

If you wanted to store both Circle and Rectangle in the same vector, you'd need trait objects:

fn main() {
    // This won't compile: a bare trait is an unsized type, so Vec can't store it
    // let shapes: Vec<Shape> = vec![Circle { ... }, Rectangle { ... }]

    // This works! Vec<Box<dyn Shape>> is a vector of trait objects
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box { Circle { radius: 5.0 } } as Box<dyn Shape>,
        Box { Rectangle { width: 10.0, height: 20.0 } } as Box<dyn Shape>,
    ]

    var totalArea = 0.0
    for shape in shapes {
        totalArea = totalArea + shape.area()
    }

    println!("Total area: \(totalArea)")
}
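Because the Oxide snippet compiles to ordinary Rust trait objects, a directly runnable Rust version looks like this (same vtable mechanism, Rust's `Box::new` in place of Oxide's `Box { }`):

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle {
    radius: f64,
}

struct Rectangle {
    width: f64,
    height: f64,
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        3.14159 * self.radius * self.radius
    }
}

impl Shape for Rectangle {
    fn area(&self) -> f64 {
        self.width * self.height
    }
}

fn main() {
    // Box<dyn Shape> erases the concrete type; calls go through a vtable.
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Circle { radius: 5.0 }),
        Box::new(Rectangle { width: 10.0, height: 20.0 }),
    ];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    println!("Total area: {total}");
}
```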

Trait Objects vs. Generics

When should you use trait objects instead of generics? Let's compare:

Generics: Compile-Time Polymorphism

// Generic approach - compile time polymorphism
fn printShape<T: Shape>(shape: &T) {
    println!("Area: \(shape.area())")
    println!("Perimeter: \(shape.perimeter())")
}

fn main() {
    let circle = Circle { radius: 5.0 }
    let rect = Rectangle { width: 10.0, height: 20.0 }

    printShape(&circle)  // Type known at compile time
    printShape(&rect)    // Type known at compile time
}

Advantages:

  • Zero runtime overhead
  • Can call trait methods efficiently
  • Compiler knows the concrete type

Disadvantages:

  • Can't store different types in a single collection
  • Monomorphization increases binary size
  • All types must be known at compile time

Trait Objects: Runtime Polymorphism

// Trait object approach - runtime polymorphism
fn printShape(shape: &dyn Shape) {
    println!("Area: \(shape.area())")
    println!("Perimeter: \(shape.perimeter())")
}

fn main() {
    let circle = Circle { radius: 5.0 }
    let rect = Rectangle { width: 10.0, height: 20.0 }

    printShape(&circle)  // Type checked at runtime
    printShape(&rect)    // Type checked at runtime

    // Can also store mixed types
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box { circle },
        Box { rect },
    ]
}

Advantages:

  • Can store different types together
  • More compact binary (no monomorphization)
  • Flexible collection types

Disadvantages:

  • Small runtime overhead for dynamic dispatch
  • Can only call methods from the trait, not type-specific methods
  • Calls can't be inlined across the vtable boundary

Creating and Using Trait Objects

Borrowed Trait Objects (&dyn Trait)

Use &dyn Trait when you want to pass a reference to something implementing the trait:

public trait Draw {
    fn draw()
}

public struct Button {
    public label: String,
}

public struct TextField {
    public placeholder: String,
}

extension Button: Draw {
    fn draw() {
        println!("[Button: \(self.label)]")
    }
}

extension TextField: Draw {
    fn draw() {
        println!("[TextField: \(self.placeholder)]")
    }
}

fn renderUI(component: &dyn Draw) {
    component.draw()
}

fn main() {
    let button = Button { label: "OK".toString() }
    let field = TextField { placeholder: "Enter text".toString() }

    renderUI(&button)  // Passes as &dyn Draw
    renderUI(&field)   // Passes as &dyn Draw
}

Owned Trait Objects (Box<dyn Trait>)

Use Box<dyn Trait> when you need to store trait objects that you own, typically in collections:

public trait Plugin {
    fn getName(): String
    fn execute()
}

public struct AudioPlugin {
    public name: String,
}

public struct VideoPlugin {
    public name: String,
}

extension AudioPlugin: Plugin {
    fn getName(): String {
        self.name
    }

    fn execute() {
        println!("Playing audio...")
    }
}

extension VideoPlugin: Plugin {
    fn getName(): String {
        self.name
    }

    fn execute() {
        println!("Playing video...")
    }
}

fn main() {
    var plugins: Vec<Box<dyn Plugin>> = vec![]

    plugins.append(Box { AudioPlugin { name: "MP3 Player".toString() } } as Box<dyn Plugin>)
    plugins.append(Box { VideoPlugin { name: "MP4 Player".toString() } } as Box<dyn Plugin>)

    for plugin in plugins {
        println!("Plugin: \(plugin.getName())")
        plugin.execute()
    }
}

Object Safety

Not all traits can be used as trait objects. A trait is object-safe if:

  1. The trait doesn't contain static methods (unless they're bounded by Self: Sized)
  2. The trait doesn't require Self to be Sized
  3. Methods don't return Self or take Self by value, except as the receiver (fn, mutating fn, or consuming fn)

Strictly speaking, a trait may still declare methods that return Self: it remains usable as a dyn trait object as long as those methods carry a Self: Sized bound, in which case they simply aren't callable on the trait object. This is the same object-safety rule as Rust's.

For example, this trait is NOT object-safe:

public trait Drawable {
    fn draw(): Self  // Returns Self - not object-safe!
}

// This won't compile:
// let obj: Box<dyn Drawable> = Box { ... }

But you can make it object-safe by using a different return type:

public trait Drawable {
    fn draw(): String  // Returns String instead - object-safe!
}

// This works:
let obj: Box<dyn Drawable> = Box { SomeType { ... } }
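In Rust, the `Self: Sized` escape hatch looks like this: a Self-returning method gated on `Self: Sized` keeps the trait object-safe, at the cost of that method not being callable through the trait object. `Dot` and `duplicate` are illustrative names:

```rust
trait Drawable {
    fn name(&self) -> String;

    // Returning Self is allowed when bounded by Self: Sized - the trait
    // stays object-safe, but this method can't be called via dyn Drawable.
    fn duplicate(&self) -> Self
    where
        Self: Sized;
}

struct Dot;

impl Drawable for Dot {
    fn name(&self) -> String {
        "dot".to_string()
    }

    fn duplicate(&self) -> Self {
        Dot
    }
}

fn main() {
    let obj: Box<dyn Drawable> = Box::new(Dot);
    println!("{}", obj.name()); // prints "dot" - works through the trait object
    // obj.duplicate();         // error: `duplicate` requires Self: Sized
    let copy = Dot.duplicate(); // fine on the concrete type
    let _ = copy;
}
```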

Practical Example: A Media Player

Here's a realistic example showing trait objects in action:

public trait MediaSource {
    fn load(path: String)
    fn play()
    fn pause()
    fn stop()
    fn getDuration(): Float
}

public struct AudioFile {
    public path: String,
    public duration: Float,
    public isPlaying: Bool,
}

public struct VideoFile {
    public path: String,
    public duration: Float,
    public width: Int,
    public height: Int,
    public isPlaying: Bool,
}

extension AudioFile: MediaSource {
    fn load(path: String) {
        println!("Loading audio: \(path)")
    }

    fn play() {
        println!("Playing audio")
    }

    fn pause() {
        println!("Pausing audio")
    }

    fn stop() {
        println!("Stopping audio")
    }

    fn getDuration(): Float {
        self.duration
    }
}

extension VideoFile: MediaSource {
    fn load(path: String) {
        println!("Loading video: \(path) (\(self.width)x\(self.height))")
    }

    fn play() {
        println!("Playing video")
    }

    fn pause() {
        println!("Pausing video")
    }

    fn stop() {
        println!("Stopping video")
    }

    fn getDuration(): Float {
        self.duration
    }
}

public struct MediaPlayer {
    private currentMedia: Box<dyn MediaSource>?,
    private playlist: Vec<Box<dyn MediaSource>>,
    private currentTrackIndex: Int,
}

extension MediaPlayer {
    static fn create(): Self {
        Self {
            currentMedia: null,
            playlist: vec![],
            currentTrackIndex: 0,
        }
    }

    public fn addToPlaylist(media: Box<dyn MediaSource>) {
        self.playlist.append(media)
    }

    public fn loadNext() {
        if self.currentTrackIndex < self.playlist.count() {
            self.currentMedia = self.playlist.remove(self.currentTrackIndex)
            // remove() shifts later tracks down one slot, so the same
            // index now refers to the next track - don't advance it
        }
    }

    public fn playCurrentMedia() {
        if let media = self.currentMedia {
            media.play()
        }
    }

    public fn getPlaylistDuration(): Float {
        var total = 0.0
        for media in self.playlist {
            total = total + media.getDuration()
        }
        return total
    }
}

fn main() {
    var player = MediaPlayer.create()

    player.addToPlaylist(
        Box { AudioFile {
            path: "song.mp3".toString(),
            duration: 240.0,
            isPlaying: false,
        } } as Box<dyn MediaSource>
    )

    player.addToPlaylist(
        Box { VideoFile {
            path: "video.mp4".toString(),
            duration: 180.0,
            width: 1920,
            height: 1080,
            isPlaying: false,
        } } as Box<dyn MediaSource>
    )

    println!("Total playlist duration: \(player.getPlaylistDuration()) seconds")

    player.loadNext()
    player.playCurrentMedia()
}

Dynamic Dispatch Overhead

When you use trait objects, method calls are dispatched at runtime using a virtual method table (vtable). This adds a small performance cost compared to static dispatch with generics:

// Static dispatch - zero cost abstraction
fn process<T: Shape>(shape: &T) {
    shape.area()  // Inlined or directly called
}

// Dynamic dispatch - small runtime cost
fn process(shape: &dyn Shape) {
    shape.area()  // Looked up in vtable at runtime
}

The overhead is typically minimal and well worth the flexibility. Only use static dispatch if:

  1. You need maximum performance in hot code paths
  2. You're working with a small, fixed set of types

Comparison with Rust

Rust's trait object syntax is identical to Oxide's:

Rust:

#![allow(unused)]
fn main() {
let shapes: Vec<Box<dyn Shape>> = vec![
    Box::new(Circle { radius: 5.0 }),
    Box::new(Rectangle { width: 10.0, height: 20.0 }),
];
}

Oxide:

let shapes: Vec<Box<dyn Shape>> = vec![
    Box { Circle { radius: 5.0 } } as Box<dyn Shape>,
    Box { Rectangle { width: 10.0, height: 20.0 } } as Box<dyn Shape>,
]

The core mechanism is the same - dynamic dispatch through vtables.

Summary

Trait objects (dyn Trait) enable runtime polymorphism:

  • &dyn Trait - Borrowed trait object, pass references to different types
  • Box<dyn Trait> - Owned trait object, store different types together
  • Object safety - Not all traits can be trait objects (static methods and Self-returning methods must be bounded by Self: Sized)
  • Dynamic dispatch - Method calls are resolved at runtime, with slight overhead
  • When to use - When you have a collection of different types or don't know the type until runtime

Trait objects are a powerful tool for creating flexible, extensible architectures. Combined with the encapsulation and composition patterns covered in this chapter, they enable you to write robust, maintainable object-oriented code in Oxide.

Inheritance vs. Composition

One of the most important differences between Oxide and traditional object-oriented languages is how it approaches code reuse. Rather than using class inheritance, Oxide favors composition and trait implementation. This chapter explores why Oxide made this choice and how to use composition effectively.

The Problem with Inheritance

Traditional OOP languages use inheritance to achieve code reuse and polymorphism. A derived class inherits all the behavior of its parent class and can override or extend that behavior. While this seems convenient, inheritance has several well-documented problems:

The Fragile Base Class Problem

When you inherit from a class, you depend on the internals of that class. If the base class author changes the implementation in a way you don't expect, your derived class can break. For example:

#![allow(unused)]
fn main() {
// Traditional inheritance pseudocode
class Bird {
    fn fly() { /* implementation */ }
}

class Penguin extends Bird {
    // Inherits fly(), which doesn't match penguin behavior!
    // Need to override with an error or fake behavior
}
}

Tight Coupling

Inheritance creates tight coupling between parent and child classes. The derived class must understand the parent's implementation details, making changes risky.

Deep Hierarchies

Inheritance encourages deep class hierarchies that are hard to navigate, maintain, and reason about:

Animal
├── Mammal
│   ├── Cat
│   ├── Dog
│   └── Whale
└── Bird
    ├── Eagle
    ├── Penguin
    └── Ostrich

Adding a new category or changing the hierarchy becomes increasingly difficult.

Oxide's Solution: Composition and Traits

Oxide avoids these problems by emphasizing composition and trait-based design. Instead of "is-a" relationships (inheritance), Oxide uses "has-a" relationships (composition) and explicit behavior contracts (traits).

Composition: Building with Smaller Pieces

Rather than inheriting from a base class, build your types from smaller, focused components:

public struct Bird {
    public name: String,
    public age: UInt,
    public canFly: Bool,
}

public struct FlyingAbility {
    public maxAltitude: Int,
    public speed: Float,
}

public struct Penguin {
    public bird: Bird,  // Has a bird, not is a bird
    public swimSpeed: Float,
    // Note: no flying ability - penguins can't fly
}

public struct Eagle {
    public bird: Bird,  // Has a bird, not is a bird
    public flyingAbility: FlyingAbility,
}

This approach is more flexible because:

  1. Explicit composition - You can see exactly what capabilities each type has
  2. Flexibility - You can easily add or remove capabilities without changing hierarchies
  3. No false hierarchies - Penguins don't pretend to be flying birds
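Because Oxide compiles through the Rust toolchain, the same composition maps directly onto Rust structs. Here is a hedged Rust sketch of the types above (fields simplified; the `penguin` constructor is illustrative only):

```rust
// Composition in plain Rust: each type *has* the pieces it needs.
struct Bird {
    name: String,
    age: u32,
}

struct FlyingAbility {
    max_altitude: i32,
}

struct Penguin {
    bird: Bird, // has a Bird, is not a Bird
    swim_speed: f32,
}

struct Eagle {
    bird: Bird,
    flying: FlyingAbility,
}

// Illustrative helper: build a Penguin with some default values.
fn penguin(name: &str) -> Penguin {
    Penguin {
        bird: Bird { name: name.to_string(), age: 2 },
        swim_speed: 9.5,
    }
}
```

Note that nothing here is inherited: every capability a `Penguin` or `Eagle` has is visible in its struct definition.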

Traits Define Behavior Contracts

Traits specify what behaviors a type can perform. Multiple types can implement the same trait, creating a polymorphic interface without inheritance:

public trait Animal {
    fn getName(): String
    fn makeSound(): String
}

public trait Swimmer {
    fn swim(distance: Float)
    fn getSwimSpeed(): Float
}

public trait Flyer {
    fn fly(altitude: Int)
    fn getMaxAltitude(): Int
}

// Penguin implements Animal and Swimmer, but not Flyer
extension Penguin: Animal {
    fn getName(): String {
        self.bird.name
    }

    fn makeSound(): String {
        "squawk".toString()
    }
}

extension Penguin: Swimmer {
    fn swim(distance: Float) {
        println!("Swimming \(distance)m at \(self.swimSpeed) m/s")
    }

    fn getSwimSpeed(): Float {
        self.swimSpeed
    }
}

// Eagle implements all three
extension Eagle: Animal {
    fn getName(): String {
        self.bird.name
    }

    fn makeSound(): String {
        "screech".toString()
    }
}

extension Eagle: Flyer {
    fn fly(altitude: Int) {
        println!("Flying to \(altitude)m")
    }

    fn getMaxAltitude(): Int {
        self.flyingAbility.maxAltitude
    }
}

extension Eagle: Swimmer {
    fn swim(distance: Float) {
        println!("Swimming \(distance)m")
    }

    fn getSwimSpeed(): Float {
        20.5  // Eagles swim slower than penguins
    }
}
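The extension blocks above correspond one-to-one with Rust `impl Trait for Type` blocks. A minimal Rust equivalent, with fields flattened for brevity:

```rust
trait Animal {
    fn name(&self) -> String;
    fn sound(&self) -> String;
}

trait Swimmer {
    fn swim_speed(&self) -> f32;
}

struct Penguin { name: String, swim_speed: f32 }
struct Eagle { name: String }

// Penguin implements Animal and Swimmer, but not a Flyer trait.
impl Animal for Penguin {
    fn name(&self) -> String { self.name.clone() }
    fn sound(&self) -> String { "squawk".to_string() }
}

impl Swimmer for Penguin {
    fn swim_speed(&self) -> f32 { self.swim_speed }
}

impl Animal for Eagle {
    fn name(&self) -> String { self.name.clone() }
    fn sound(&self) -> String { "screech".to_string() }
}
```

Each trait implementation is a separate, explicit block, so you can see at a glance which contracts each type satisfies.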

Practical Example: UI Components

Here's a realistic example showing composition and traits in action:

public struct Button {
    public label: String,
    public enabled: Bool,
}

public struct Clickable {
    public onClickHandler: () -> Unit,
}

public struct Styleable {
    public backgroundColor: String,
    public textColor: String,
    public borderWidth: Int,
}

// A button is composed of these behaviors
public struct StyledButton {
    public button: Button,
    public clickable: Clickable,
    public styleable: Styleable,
}

public trait Interactive {
    fn handleClick()
}

public trait Visual {
    fn render(): String
}

extension StyledButton: Interactive {
    fn handleClick() {
        if self.button.enabled {
            self.clickable.onClickHandler()
        }
    }
}

extension StyledButton: Visual {
    fn render(): String {
        "<button style='background: \(self.styleable.backgroundColor); color: \(self.styleable.textColor);'>\(self.button.label)</button>".toString()
    }
}

// Usage
fn createButton(): StyledButton {
    StyledButton {
        button: Button {
            label: "Click me".toString(),
            enabled: true,
        },
        clickable: Clickable {
            onClickHandler: {
                println!("Button clicked!")
            },
        },
        styleable: Styleable {
            backgroundColor: "#007bff".toString(),
            textColor: "white".toString(),
            borderWidth: 1,
        },
    }
}

Default Trait Implementations

Traits can provide default implementations for common behavior, reducing duplication:

public trait Drawable {
    fn draw(): String {
        "[Drawing object]".toString()
    }

    fn getSize(): (Int, Int) {
        (100, 100)
    }
}

public struct Circle {
    public radius: Int,
}

// Option 1: accept the default implementations
extension Circle: Drawable {
    // Both draw() and getSize() use defaults
}

// Option 2: override them instead (a type can implement
// a trait only once, so choose one of these two blocks)
extension Circle: Drawable {
    fn draw(): String {
        "○ (radius: \(self.radius))".toString()
    }

    fn getSize(): (Int, Int) {
        (self.radius * 2, self.radius * 2)
    }
}
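The Rust equivalent makes the choice explicit by using two different types, one keeping the defaults and one overriding them (type names are my own for illustration):

```rust
trait Drawable {
    // Default implementations that implementors may keep or override.
    fn draw(&self) -> String {
        "[Drawing object]".to_string()
    }
    fn size(&self) -> (i32, i32) {
        (100, 100)
    }
}

struct Square;
struct Circle { radius: i32 }

// Keep both defaults: an empty impl block is enough.
impl Drawable for Square {}

// Override both defaults.
impl Drawable for Circle {
    fn draw(&self) -> String {
        format!("circle r={}", self.radius)
    }
    fn size(&self) -> (i32, i32) {
        (self.radius * 2, self.radius * 2)
    }
}
```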

Delegation Pattern

Sometimes you want one type to forward method calls to another. This is called delegation and is a key composition pattern:

public struct Logger {
    private logLevel: String,

    fn log(message: String) {
        println!("[\(self.logLevel)] \(message)")
    }
}

public struct Application {
    public logger: Logger,
    public name: String,
}

extension Application {
    fn log(message: String) {
        // Delegate to the logger
        self.logger.log(message)
    }
}

// Usage
var app = Application {
    logger: Logger { logLevel: "INFO".toString() },
    name: "MyApp".toString(),
}
app.log("Application started") // Delegates to app.logger.log()
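In Rust the delegation pattern looks the same: the outer type forwards the call to the component it contains. In this sketch `log` returns the formatted line instead of printing it, so the behavior is easy to check:

```rust
struct Logger {
    log_level: String,
}

impl Logger {
    fn log(&self, message: &str) -> String {
        format!("[{}] {}", self.log_level, message)
    }
}

struct Application {
    logger: Logger,
}

impl Application {
    // Delegation: forward the call to the contained Logger.
    fn log(&self, message: &str) -> String {
        self.logger.log(message)
    }
}
```

Delegation keeps the `Logger` reusable on its own while giving `Application` a convenient `log` method without any inheritance.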

When to Use Composition vs. Traits

Use composition (has-a) when:

  • One type contains instances of other types
  • You want to reuse implementation
  • The relationship is "part of" or "contains"

Use traits (can do) when:

  • Multiple types share the same behavior interface
  • You want polymorphism (same operation, different implementations)
  • You want to define a contract that types must satisfy

Often use both together:

  • Compose types from smaller components
  • Implement traits to define how they behave
  • Use traits to write generic code that works with any type implementing the trait

Code Reuse Through Traits

Here's how to achieve the code reuse benefit of inheritance using traits:

public trait Named {
    fn getName(): String
}

public trait Identifiable {
    fn getId(): String
}

public struct Person {
    public id: String,
    public name: String,
    public age: UInt,
}

public struct Company {
    public id: String,
    public name: String,
    public employeeCount: UInt,
}

// Both types can implement the same traits
extension Person: Named {
    fn getName(): String {
        self.name
    }
}

extension Person: Identifiable {
    fn getId(): String {
        self.id
    }
}

extension Company: Named {
    fn getName(): String {
        self.name
    }
}

extension Company: Identifiable {
    fn getId(): String {
        self.id
    }
}

// Write generic code that works with any Named type
fn greet<T: Named>(entity: &T) {
    println!("Hello, \(entity.getName())!")
}

fn main() {
    let person = Person {
        id: "p123".toString(),
        name: "Alice".toString(),
        age: 30,
    }

    let company = Company {
        id: "c456".toString(),
        name: "TechCorp".toString(),
        employeeCount: 100,
    }

    greet(&person)    // Works!
    greet(&company)   // Works too!
}
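The same generic function in Rust, reduced to the `Named` trait for brevity; the trait bound `T: Named` is what lets one function accept both types:

```rust
trait Named {
    fn name(&self) -> String;
}

struct Person { name: String }
struct Company { name: String }

impl Named for Person {
    fn name(&self) -> String { self.name.clone() }
}

impl Named for Company {
    fn name(&self) -> String { self.name.clone() }
}

// Generic over any type implementing Named.
fn greeting<T: Named>(entity: &T) -> String {
    format!("Hello, {}!", entity.name())
}
```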

Comparison with Rust

Rust doesn't have classes or inheritance either. Both Rust and Oxide use the same composition + traits approach:

Rust:

impl Animal for Penguin {
    fn make_sound(&self) -> String {
        "squawk".to_string()
    }
}

Oxide:

extension Penguin: Animal {
    fn makeSound(): String {
        "squawk".toString()
    }
}

The underlying semantics are identical; Oxide just uses different syntax that emphasizes extending a type with additional capabilities.

Summary

Oxide avoids inheritance in favor of composition and traits because:

  1. Safety - No fragile base class problem
  2. Flexibility - Types can have multiple capabilities without deep hierarchies
  3. Clarity - Relationships are explicit: what each type contains and what it can do
  4. Composability - Build complex types from simple, focused pieces
  5. Explicitness - The compiler forces you to be clear about what you're doing

Key takeaways:

  • Use composition to structure types (has-a relationships)
  • Use traits to define behavior contracts (can-do capabilities)
  • Implement the same trait on different types for polymorphism
  • Write generic code using trait bounds to accept any type implementing a trait

In the next section, we'll explore trait objects, which enable runtime polymorphism when you need to work with multiple types through a common interface.

Patterns and Matching

Patterns are a special syntax in Oxide for matching against the structure of types, both complex and simple. Using patterns in combination with match expressions and other constructs gives you more control over your program's control flow. A pattern consists of some combination of the following:

  • Literal values
  • Destructured arrays, enums, structs, or tuples
  • Variables
  • Wildcards
  • Placeholders

Some example patterns include x, (a, b), and Point { x, y }.

What You'll Learn

In this chapter, we cover:

All the Places Patterns Can Be Used

Patterns appear in several places in Oxide code:

  • match arms
  • if let expressions
  • while let loops
  • Function parameters
  • let statements

Refutability: Whether a Pattern Might Fail to Match

Patterns come in two forms:

  • Refutable patterns: patterns that might fail to match for some values
  • Irrefutable patterns: patterns that will always match for any value passed

Understanding this distinction is crucial for writing correct Oxide code.

Pattern Syntax

We'll explore all the ways you can construct patterns to match values:

  • Literal patterns
  • Named variables
  • Multiple patterns with |
  • Destructuring structs, enums, and tuples
  • Wildcard patterns with _
  • Range patterns
  • Match guards and @ bindings

Pattern matching is one of the most powerful features in Oxide, and mastering it will help you write cleaner, more expressive code that fully leverages the compiler's ability to ensure correctness.

Let's dive into how patterns work!

All the Places Patterns Can Be Used

We've seen patterns used in several places in the previous chapters. This section explores all the places where patterns can appear in Oxide and how to use them effectively.

match Expressions

As we discussed in the "Enums and Pattern Matching" chapter, match expressions use patterns in their arms. The syntax is:

match VALUE {
    PATTERN1 -> EXPRESSION1,
    PATTERN2 -> EXPRESSION2,
    PATTERN3 -> EXPRESSION3,
}

One requirement of match expressions is that they must be exhaustive: every possible value of the type being matched must be covered. You can either list every case explicitly, as in the example below, or finish with a catch-all pattern such as _:

fn describeValue(value: Int?) {
    match value {
        Some(n) if n > 0 -> println!("Positive: \(n)"),
        Some(n) -> println!("Non-positive: \(n)"),
        null -> println!("No value"),
    }
}
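Since the semantics match Rust exactly, here is the Rust equivalent of the same match, using `Option<i32>` for the nullable Int and returning the description so it can be checked:

```rust
fn describe(value: Option<i32>) -> String {
    match value {
        // A guard refines the pattern: this arm matches only positive values.
        Some(n) if n > 0 => format!("Positive: {}", n),
        Some(n) => format!("Non-positive: {}", n),
        None => "No value".to_string(),
    }
}
```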

Conditional if let Expressions

As we discussed in the "if let and while let" chapter, if let is a concise way to match one pattern while ignoring the rest. The syntax is:

if let PATTERN = EXPRESSION {
    // code that runs if the pattern matches
} else {
    // optional else block
}

The if let construct is less strict than match because it doesn't require exhaustive pattern matching. You can use it when you only care about one specific pattern:

let coin = Coin.Quarter(UsState.Alaska)

if let Coin.Quarter(state) = coin {
    println!("State quarter from \(state:?)")
}

One advantage of if let is that it's more concise when dealing with nullable types, thanks to Oxide's auto-unwrap feature:

let maybeValue: Int? = Some(5)

if let value = maybeValue {
    println!("The value is: \(value)")
}

while let Loops

The while let construct allows a loop to run as long as a pattern continues to match. This is useful when working with iterators or sequences that return nullable values:

var numbers: Vec<Int> = vec![1, 2, 3]

while let Some(num) = numbers.pop() {
    println!("Popped: \(num)")
}

Or with auto-unwrap:

var stack: Vec<Int> = vec![1, 2, 3]

while let value = stack.pop() {
    println!("Got: \(value)")
}
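The Rust equivalent of the stack-draining loop: `Vec::pop` returns `Option<T>`, and the loop runs until it returns `None`. The helper collects the popped values so the order is visible:

```rust
fn drain(mut stack: Vec<i32>) -> Vec<i32> {
    let mut popped = Vec::new();
    // The loop body runs as long as pop() returns Some(value).
    while let Some(value) = stack.pop() {
        popped.push(value);
    }
    popped
}
```

Because `pop` removes from the back, the values come out in reverse order.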

Function Parameters

Patterns can be used in function parameters, allowing you to destructure arguments directly:

fn printPoint(point: (Int, Int)) {
    let (x, y) = point
    println!("Point is at x=\(x), y=\(y)")
}

But you can also destructure directly in the function signature:

fn printPoint((x, y): (Int, Int)) {
    println!("Point is at x=\(x), y=\(y)")
}

This works with structs too:

struct Point {
    x: Int,
    y: Int,
}

fn printPoint(Point { x, y }: Point) {
    println!("Point is at x=\(x), y=\(y)")
}
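Both forms of parameter destructuring work the same way in Rust; the pattern appears where the parameter name would normally go:

```rust
struct Point { x: i32, y: i32 }

// Destructure a tuple directly in the signature.
fn sum((x, y): (i32, i32)) -> i32 {
    x + y
}

// The same works for structs.
fn flatten(Point { x, y }: Point) -> (i32, i32) {
    (x, y)
}
```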

let Statements

Every let statement you've written uses patterns:

let x = 5  // matches the pattern 'x'
let (x, y, z) = (1, 2, 3)  // destructuring tuple
let Point { x, y } = point  // destructuring struct

The pattern comes after let. In the simplest case (let x = 5), the pattern is just a variable name.

You can also use patterns to destructure more complex values:

struct User {
    name: String,
    email: String,
    age: Int,
}

let user = User {
    name: "Alice".toString(),
    email: "alice@example.com".toString(),
    age: 30,
}

// Destructure the struct
let User { name, email, age } = user
println!("User: \(name), Email: \(email), Age: \(age)")

// Or rename fields while destructuring
let User { name: userName, email, age } = user
println!("Name: \(userName)")

Pattern Syntax in Practice

Ignoring Values in let Statements

Sometimes you want to bind only some values from a destructuring:

let (x, _, z) = (1, 2, 3)
// x is 1, z is 3, we ignore the middle value

Or using _ to ignore:

let (x, _) = (1, 2)
// x is 1, we ignore the second value

Ignoring Remaining Values

You can use .. to ignore remaining values in a destructuring:

struct Point {
    x: Int,
    y: Int,
    z: Int,
}

let Point { x, .. } = point
// x is bound, y and z are ignored

This is particularly useful with larger structs where you only need a few fields:

struct Config {
    host: String,
    port: Int,
    username: String,
    password: String,
    timeout: Int,
    retries: Int,
}

let Config { host, port, .. } = config
println!("Connecting to \(host):\(port)")
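A Rust sketch of the same idea: `..` in the struct pattern binds only the fields you name and ignores the rest (the `Config` fields here are simplified from the Oxide example):

```rust
struct Config {
    host: String,
    port: u16,
    timeout: u32,
    retries: u32,
}

fn endpoint(config: &Config) -> String {
    // Bind only host and port; `..` ignores the remaining fields.
    let Config { host, port, .. } = config;
    format!("{}:{}", host, port)
}
```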

Practical Examples

Processing Command Line Arguments

fn processArgs(args: Vec<String>) {
    match args.count() {
        0 -> println!("No arguments"),
        1 -> println!("One argument: \(args[0])"),
        2 -> {
            let [first, second] = [args[0], args[1]]
            println!("Two args: \(first) and \(second)")
        },
        _ -> println!("Many arguments: \(args.count())"),
    }
}

Parsing Configuration Files

enum ConfigValue {
    String(String),
    Number(Int),
    Boolean(Bool),
    List(Vec<ConfigValue>),
}

fn printConfigValue(value: ConfigValue) {
    match value {
        ConfigValue.String(s) -> println!("String: \(s)"),
        ConfigValue.Number(n) -> println!("Number: \(n)"),
        ConfigValue.Boolean(b) -> println!("Boolean: \(b)"),
        ConfigValue.List(items) -> println!("List with \(items.count()) items"),
    }
}

Working with Results and Options

fn processFile(path: String) {
    if let content = readFile(path) {
        for line in content.split("\n") {
            println!("Line: \(line)")
        }
    } else {
        println!("Failed to read file")
    }
}

Patterns are a fundamental part of Oxide's expressiveness. They allow you to extract values from complex data structures and ensure that your code handles all cases correctly. By understanding where and how patterns can be used, you'll write more powerful and concise Oxide code.

Refutability: Whether a Pattern Might Fail to Match

Patterns come in two forms: refutable and irrefutable. It's important to understand the difference because the Oxide compiler enforces certain rules about which patterns can be used where.

Irrefutable Patterns

An irrefutable pattern is a pattern that will always match for any value you pass to it. Examples include:

let x = 5
let (x, y) = (1, 2)
let Point { x, y } = point

These patterns always match because they bind to values unconditionally. Irrefutable patterns are the ones you'll most commonly use in let statements, function parameters, and other places where the pattern must always match.

Refutable Patterns

A refutable pattern is one that might not match for some values. Examples include:

Some(x) // might be null instead
Coin.Penny // might be a different coin variant
n if n > 5 // might not satisfy the condition

These patterns could fail to match some values. When you use a refutable pattern, you need to handle the case where the pattern doesn't match.

Where Each Pattern Type Is Allowed

The Oxide compiler requires irrefutable patterns in certain contexts and allows refutable patterns in others:

Irrefutable Patterns Required

In these contexts, you must use irrefutable patterns:

let Statements

// Valid - irrefutable pattern
let x = 5

// Valid - irrefutable tuple destructuring
let (x, y) = (1, 2)

// Invalid - refutable pattern in let statement
// This would not compile:
// let Some(x) = value  // error: refutable pattern

// Must use if let instead:
if let Some(x) = value {
    println!("Got: \(x)")
}

Function Parameters

Function parameters require irrefutable patterns because the function must be prepared to handle any value passed to it:

// Valid - irrefutable parameter
fn printPoint((x, y): (Int, Int)) {
    println!("x: \(x), y: \(y)")
}

// Invalid - refutable pattern in function parameter
// fn printPoint(Some(x): Int?) {  // error: refutable pattern
//     println!("x: \(x)")
// }

// If you need to work with a refutable pattern, handle it inside the function:
fn processValue(value: Int?) {
    if let x = value {
        println!("Got: \(x)")
    }
}
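The Rust compiler enforces the same rule. This sketch shows the correct shape; the commented-out line is the one that would be rejected:

```rust
fn process(value: Option<i32>) -> String {
    // `let Some(x) = value;` would not compile here: the pattern is
    // refutable, and a plain `let` has no way to handle the None case.
    // `if let` handles the non-matching case explicitly.
    if let Some(x) = value {
        format!("Got: {}", x)
    } else {
        "No value".to_string()
    }
}
```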

Refutable Patterns Allowed

In these contexts, you can use refutable patterns:

match Arms

match expressions are designed to handle refutable patterns. In fact, if you use an irrefutable pattern as the first arm, the compiler will warn you that subsequent arms are unreachable:

match value {
    Some(x) -> println!("Got: \(x)"),  // refutable pattern - OK
    null -> println!("No value"),       // refutable pattern - OK
}

match value {
    // Warning: irrefutable pattern as first arm makes other arms unreachable
    x -> println!("Got: \(x)"),
    Some(y) -> println!("Got y: \(y)"),  // This code is unreachable!
}

if let Expressions

if let is specifically designed for refutable patterns:

if let Some(x) = value {
    println!("Got: \(x)")
}

if let Coin.Quarter(state) = coin {
    println!("State: \(state:?)")
}

while let Loops

while let loops continue as long as a refutable pattern matches:

var stack: Vec<Int> = vec![1, 2, 3]

while let value = stack.pop() {
    println!("Popped: \(value)")
}

Understanding Refutability Through Examples

Example 1: Pattern That Might Fail

fn processOptional(value: Int?) {
    // This would be an error:
    // let Some(x) = value  // error: refutable pattern in let statement

    // Must use if let instead:
    if let x = value {
        println!("Got value: \(x)")
    } else {
        println!("Value was null")
    }
}

The pattern Some(x) is refutable because the value might be null. Using if let is the correct way to handle this.

Example 2: Pattern That Always Matches

fn processAny(value: Int) {
    // This is fine - the pattern will always match:
    let x = value
    println!("Value: \(x)")
}

The pattern x is irrefutable because any value can bind to a variable.

Example 3: Guard Conditions Make Patterns Refutable

A pattern with a guard condition is refutable:

fn checkNumber(n: Int) {
    // This is refutable because the guard might not match:
    // let x if x > 5 = n  // error: refutable pattern in let

    // Use match instead:
    match n {
        x if x > 5 -> println!("Greater than 5"),
        x if x < 0 -> println!("Negative"),
        x -> println!("Between 0 and 5"),
    }
}

Example 4: Multiple Patterns in match

match coin {
    Coin.Penny -> println!("One cent"),
    Coin.Nickel -> println!("Five cents"),
    Coin.Dime -> println!("Ten cents"),
    Coin.Quarter(_) -> println!("Twenty-five cents"),
}

Each pattern is refutable (the value might be a different variant), but together they cover all possibilities.

Common Mistakes and How to Fix Them

Mistake 1: Using a Refutable Pattern in let

// Error: refutable pattern
// let Some(x) = maybeValue

// Fix: Use if let
if let x = maybeValue {
    println!("Got: \(x)")
}

Mistake 2: Unreachable Code in match

// Warning: this code is unreachable
match value {
    x -> println!("Any value"),      // matches everything
    Some(y) -> println!("Some: \(y)"), // unreachable!
}

// Fix: put the catch-all pattern last
match value {
    Some(y) -> println!("Some: \(y)"),
    _ -> println!("Any other value"),
}

Mistake 3: Forgetting to Handle the Failure Case

// If you only care about one case, use if let:
if let Coin.Quarter(state) = coin {
    println!("State: \(state:?)")
}

// Not match without handling other cases:
// match coin {
//     Coin.Quarter(state) -> println!("State: \(state:?)"),
//     // error: missing patterns
// }

Best Practices

  1. Use let for irrefutable patterns - When you know a pattern will always match
  2. Use if let for single refutable patterns - When you only care about one case
  3. Use match for multiple refutable patterns - When you need to handle several cases
  4. Always handle the failure case - Either with if let ... else or with exhaustive match
  5. Use guards carefully - Remember that guards make patterns refutable

Understanding refutability helps you write safer, more expressive code while leveraging the Oxide compiler's ability to catch mistakes at compile time rather than runtime.

Pattern Syntax

In this section, we explore the different kinds of patterns you can use in Oxide. Patterns combine literals, variables, wildcards, and destructuring forms to match against specific values or any value at all. Each kind of pattern has its own use cases.

Literal Patterns

You can match against literal values directly:

fn handleValue(x: Int) {
    match x {
        1 -> println!("One"),
        2 -> println!("Two"),
        3 -> println!("Three"),
        _ -> println!("Other"),
    }
}

fn handleText(text: String) {
    match text {
        "hello" -> println!("Hello there!"),
        "goodbye" -> println!("See you!"),
        _ -> println!("Unknown greeting"),
    }
}

fn handleBoolean(b: Bool) {
    match b {
        true -> println!("It is true"),
        false -> println!("It is false"),
    }
}

Named Variable Patterns

A named variable pattern matches any value and binds the value to a variable:

fn printValue(value: Int) {
    // The pattern 'value' will match any Int
    // and the value is already bound to the parameter
    println!("Got: \(value)")
}

let x = 5      // pattern 'x' matches 5
let y = Some(3) // pattern 'y' matches Some(3)

In match expressions, variables capture the matched value:

fn processNumber(value: Int) {
    match value {
        0 -> println!("Zero"),
        n -> println!("Number: \(n)"),  // 'n' captures the value
    }
}

Wildcard Patterns in Match Arms

Use _ as the wildcard pattern when no other pattern matches:

fn ignoreValue(x: Int) {
    match x {
        1 -> println!("One"),
        2 -> println!("Two"),
        _ -> println!("Something else"),
    }
}

// Using _ in destructuring to ignore values
let (x, _, z) = (1, 2, 3)
// x is 1, the middle value is ignored, z is 3

// Using _ in let statements
let (first, _) = tuple

You can also use _ inside a larger pattern to ignore individual positions:

match point {
    (0, 0) -> println!("Origin"),
    (x, 0) -> println!("On x-axis at \(x)"),
    (0, y) -> println!("On y-axis at \(y)"),
    (_, _) -> println!("Somewhere else"),
}

Multiple Patterns with |

The | operator lets you match multiple patterns in a single arm:

fn describeNumber(n: Int) {
    match n {
        1 | 2 | 3 -> println!("One, two, or three"),
        4 | 5 | 6 -> println!("Four, five, or six"),
        _ -> println!("Something else"),
    }
}

fn describeVowel(c: char) {
    match c {
        'a' | 'e' | 'i' | 'o' | 'u' -> println!("Vowel"),
        _ -> println!("Consonant"),
    }
}
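Rust's or-patterns use the same `|` syntax. The vowel check below additionally uses the standard `matches!` macro, which tests a pattern and returns a Bool:

```rust
fn bucket(n: i32) -> &'static str {
    match n {
        1 | 2 | 3 => "one, two, or three",
        4 | 5 | 6 => "four, five, or six",
        _ => "something else",
    }
}

fn is_vowel(c: char) -> bool {
    // matches! is shorthand for a match that returns true/false.
    matches!(c, 'a' | 'e' | 'i' | 'o' | 'u')
}
```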

Range Patterns

You can use ranges to match multiple values:

fn describeNumber(n: Int) {
    match n {
        1..=5 -> println!("Between 1 and 5"),
        6..=10 -> println!("Between 6 and 10"),
        _ -> println!("Outside the range"),
    }
}

fn describeGrade(grade: char) {
    match grade {
        'a'..='z' -> println!("Lowercase letter"),
        'A'..='Z' -> println!("Uppercase letter"),
        '0'..='9' -> println!("Digit"),
        _ -> println!("Other character"),
    }
}

Range patterns are inclusive on both ends with ..=.
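Rust uses the identical `..=` syntax for inclusive range patterns, so the character classifier translates directly:

```rust
fn classify(c: char) -> &'static str {
    match c {
        'a'..='z' => "lowercase letter",
        'A'..='Z' => "uppercase letter",
        '0'..='9' => "digit",
        _ => "other character",
    }
}
```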

Destructuring Structs

You can destructure struct fields in patterns:

struct Point {
    x: Int,
    y: Int,
}

fn printPoint(point: Point) {
    let Point { x, y } = point
    println!("x: \(x), y: \(y)")
}

// In match expressions:
match point {
    Point { x: 0, y: 0 } -> println!("Origin"),
    Point { x, y: 0 } -> println!("On x-axis at \(x)"),
    Point { x: 0, y } -> println!("On y-axis at \(y)"),
    Point { x, y } -> println!("At (\(x), \(y))"),
}

You can also rename fields while destructuring:

match point {
    Point { x: horizontal, y: vertical } -> {
        println!("Horizontal: \(horizontal), Vertical: \(vertical)")
    },
    _ -> {},
}

And use .. to ignore remaining fields:

struct User {
    name: String,
    email: String,
    age: Int,
    city: String,
}

let user = User {
    name: "Alice".toString(),
    email: "alice@example.com".toString(),
    age: 30,
    city: "NYC".toString(),
}

match user {
    User { name, age, .. } -> println!("\(name) is \(age) years old"),
}
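The axis-matching example maps straight onto Rust struct patterns, mixing literal fields (`x: 0`) with bound fields (`x`):

```rust
struct Point { x: i32, y: i32 }

fn locate(p: Point) -> String {
    match p {
        Point { x: 0, y: 0 } => "origin".to_string(),
        Point { x, y: 0 } => format!("on x-axis at {}", x),
        Point { x: 0, y } => format!("on y-axis at {}", y),
        Point { x, y } => format!("at ({}, {})", x, y),
    }
}
```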

Destructuring Enums

We've seen enum destructuring before, but let's review the full syntax:

enum Message {
    Quit,
    Move { x: Int, y: Int },
    Write(String),
    ChangeColor(Int, Int, Int),
}

fn processMessage(msg: Message) {
    match msg {
        // Unit variant
        Message.Quit -> println!("Quit"),

        // Struct-like variant
        Message.Move { x, y } -> println!("Moving to (\(x), \(y))"),

        // Tuple-like variant with single value
        Message.Write(text) -> println!("Writing: \(text)"),

        // Tuple-like variant with multiple values
        Message.ChangeColor(r, g, b) -> println!("RGB(\(r), \(g), \(b))"),
    }
}

Destructuring Tuples

Tuples can be destructured to extract individual values:

fn printTuple((x, y): (Int, String)) {
    println!("x: \(x), y: \(y)")
}

// In match expressions:
let point = (1, 2, 3)
match point {
    (0, 0, 0) -> println!("Origin"),
    (x, 0, 0) -> println!("On x-axis"),
    (x, y, z) -> println!("Point: (\(x), \(y), \(z))"),
}

You can use _ to ignore values:

let (x, _) = tuple
let (first, _, third) = tuple

Nested Patterns

Patterns can be nested for destructuring complex data:

enum Color {
    Rgb(Int, Int, Int),
    Hsv(Int, Int, Int),
}

enum Message {
    Quit,
    ChangeColor(Color),
}

fn processMessage(msg: Message) {
    match msg {
        Message.ChangeColor(Color.Rgb(r, g, b)) -> {
            println!("RGB: \(r), \(g), \(b)")
        },
        Message.ChangeColor(Color.Hsv(h, s, v)) -> {
            println!("HSV: \(h), \(s), \(v)")
        },
        Message.Quit -> println!("Quit"),
    }
}

// Nested tuple destructuring:
let ((x, y), (a, b)) = ((1, 2), (3, 4))
println!("x: \(x), y: \(y), a: \(a), b: \(b)")
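The nested enum example in Rust, where `::` replaces Oxide's `.` for variant paths; the inner `Color` variant is destructured in the same arm as the outer `Message`:

```rust
enum Color {
    Rgb(u8, u8, u8),
    Hsv(u8, u8, u8),
}

enum Message {
    Quit,
    ChangeColor(Color),
}

fn render(msg: Message) -> String {
    match msg {
        // Patterns nest: one arm matches both the outer and inner variant.
        Message::ChangeColor(Color::Rgb(r, g, b)) => format!("rgb({}, {}, {})", r, g, b),
        Message::ChangeColor(Color::Hsv(h, s, v)) => format!("hsv({}, {}, {})", h, s, v),
        Message::Quit => "quit".to_string(),
    }
}
```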

Match Guards

A match guard is an additional if condition specified after the pattern in a match arm that must also be true for that arm to be chosen:

fn checkNumber(n: Int?) {
    match n {
        Some(x) if x < 0 -> println!("Negative: \(x)"),
        Some(x) if x == 0 -> println!("Zero"),
        Some(x) -> println!("Positive: \(x)"),
        null -> println!("No value"),
    }
}

fn processUser(user: User) {
    match user {
        User { age, .. } if age >= 18 -> println!("Adult"),
        User { age, .. } if age > 0 -> println!("Minor"),
        _ -> println!("Invalid age"),
    }
}

Match guards are useful when you need to express conditions that patterns alone cannot express:

fn classifyNumber(n: Int) {
    match n {
        n if n % 2 == 0 -> println!("Even"),
        n if n % 2 != 0 -> println!("Odd"),
        _ -> println!("Unreachable"),  // guards cover every Int, but the compiler can't verify that
    }
}

You can use complex conditions in guards:

fn processValue(value: Int, max: Int) {
    match value {
        v if v > 0 && v < max -> println!("In range"),
        v if v == max -> println!("At max"),
        v if v < 0 -> println!("Negative"),
        _ -> println!("Out of range"),
    }
}
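The range-check example with compound guard conditions carries over to Rust unchanged apart from syntax:

```rust
fn in_range(value: i32, max: i32) -> &'static str {
    match value {
        v if v > 0 && v < max => "in range",
        v if v == max => "at max",
        v if v < 0 => "negative",
        // Needed for exhaustiveness: the compiler does not analyze guards.
        _ => "out of range",
    }
}
```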

Binding with @ Pattern

The @ operator lets you bind a value while also matching against a pattern:

fn checkRange(num: Int) {
    match num {
        n @ 1..=5 -> println!("Small number: \(n)"),
        n @ 6..=10 -> println!("Medium number: \(n)"),
        n -> println!("Large number: \(n)"),
    }
}

enum Message {
    Hello { id: Int },
}

fn processMessage(msg: Message) {
    match msg {
        Message.Hello { id: id @ 5..=7 } -> {
            println!("Hello with special ID: \(id)")
        },
        Message.Hello { id } -> println!("Hello with ID: \(id)"),
    }
}
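Rust's `@` bindings work identically: the range to the right of `@` is tested, and on a match the value is bound to the name on the left:

```rust
fn check(num: i32) -> String {
    match num {
        // `n @ 1..=5` tests the range AND binds the matched value to `n`.
        n @ 1..=5 => format!("small: {}", n),
        n @ 6..=10 => format!("medium: {}", n),
        n => format!("large: {}", n),
    }
}
```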

Practical Examples

Complex Configuration Matching

struct Config {
    port: Int,
    host: String,
    tls: Bool?,
}

fn setupServer(config: Config) {
    match config {
        Config { port: 80, host, tls: null } -> {
            println!("HTTP server on \(host):80")
        },
        Config { port: 443, host, tls: true } -> {
            println!("HTTPS server on \(host):443")
        },
        Config { port, host, tls } -> {
            println!("Server on \(host):\(port)")
        },
    }
}

Processing Nested Data

struct Address {
    street: String,
    city: String,
    country: String,
}

struct Person {
    name: String,
    address: Address?,
}

fn printLocation(person: Person) {
    match person {
        Person {
            name,
            address: Some(Address { city, country, .. }),
        } -> println!("\(name) lives in \(city), \(country)"),
        Person { name, address: null } -> println!("\(name) has no address"),
    }
}

Handling Multiple Enum Variants

enum Result {
    Success(String),
    Error(String),
    Pending,
}

fn processResult(result: Result) {
    match result {
        // An or-pattern must bind the same variables in every alternative
        Result.Success(msg) | Result.Error(msg) -> {
            println!("Finished: \(msg)")
        },
        Result.Pending -> println!("Still pending"),
    }
}

Pattern syntax in Oxide is extremely powerful and expressive. By mastering these various pattern forms, you can write code that is both safe and concise, with the compiler ensuring that you handle all cases correctly.

Advanced Pattern Techniques

Now that you understand the basics of patterns, let's explore some more advanced techniques that will help you write cleaner, more expressive Oxide code.

Guard let: Conditionally Unwrapping in Guards

The guard let construct combines pattern matching with conditional logic, allowing you to unwrap a value and check a condition in a single statement. This is particularly useful at the beginning of functions to handle error cases early.

Basic guard let Syntax

fn processOptionalNumber(value: Int?) {
    guard let num = value else {
        println!("No value provided")
        return
    }

    println!("Got number: \(num)")
    println!("Double: \(num * 2)")
}

The guard let statement can be read as: "Guard against the case where this pattern doesn't match." If the pattern doesn't match, the else block executes and typically returns early.

guard let vs if let

guard let and if let both unwrap nullable types, but they're used in different situations:

// Use if let when you want to handle just the success case
if let user = findUser(id) {
    displayUser(user)
}

// Use guard let when you need to handle the failure case first
fn processUser(userId: Int) {
    guard let user = findUser(userId) else {
        println!("User not found")
        return
    }

    // Now user is guaranteed to be unwrapped for the rest of the function
    println!("Processing: \(user.name)")
    updateUserStatus(user)
    sendNotification(user)
}
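Rust has no guard let keyword, but its let-else statement (stable since Rust 1.65) expresses the same early-return shape: bind on success, or run a diverging else block (return, break, continue, or panic). A minimal sketch:

```rust
fn process(value: Option<i32>) -> String {
    // Equivalent of Oxide's `guard let`: bind `num`, or bail out early.
    let Some(num) = value else {
        return "No value provided".to_string();
    };

    // From here on, num is an unwrapped i32 for the rest of the function.
    format!("Got number: {}, double: {}", num, num * 2)
}
```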

Multiple guard let Statements

You can chain multiple guard let statements to handle several optional values:

fn setupConnection(host: String?, port: Int?, credentials: String?) {
    guard let h = host else {
        println!("Host is required")
        return
    }

    guard let p = port else {
        println!("Port is required")
        return
    }

    guard let creds = credentials else {
        println!("Credentials are required")
        return
    }

    println!("Connecting to \(h):\(p) with provided credentials")
    connect(h, p, creds)
}

Or more concisely with a single guard statement:

fn setupConnection(host: String?, port: Int?, credentials: String?) {
    guard let h = host && let p = port && let creds = credentials else {
        println!("Host, port, and credentials are required")
        return
    }

    println!("Connecting to \(h):\(p)")
    connect(h, p, creds)
}

guard let with Conditions

You can add conditions to guard let for more complex validation:

fn validateUser(user: User?) {
    guard let u = user && u.isActive else {
        println!("User is not active")
        return
    }

    println!("User \(u.name) is active and ready")
}

fn processPayment(amount: Int?) {
    guard let amt = amount && amt > 0 && amt < 1000000 else {
        println!("Invalid amount")
        return
    }

    println!("Processing payment of \(amt)")
}
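In Rust, a guard let with an extra condition can be approximated by filtering the Option before a let-else. A hedged sketch; validate_amount and its bounds mirror the Oxide example above:

```rust
// Option::filter keeps the value only when the predicate holds,
// mirroring `guard let amt = amount && amt > 0 && amt < 1000000`.
fn validate_amount(amount: Option<i64>) -> Option<i64> {
    let Some(amt) = amount.filter(|&a| a > 0 && a < 1_000_000) else {
        return None; // "Invalid amount" path
    };
    Some(amt)
}
```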

guard let in Different Contexts

In Function Bodies

fn findAndProcessUser(userId: Int): String? {
    guard let user = fetchUserFromDatabase(userId) else {
        return "User not found"
    }

    updateLastSeen(user)
    return "Processing \(user.name)"
}

In Method Bodies

struct DataProcessor {
    fn processData(input: String?) {
        guard let data = input else {
            println!("No input data")
            return
        }

        let processed = transform(data)
        save(processed)
    }
}

In Closure Bodies

let users: Vec<User> = vec![]

// Using guard let in a closure
users.forEach { user ->
    guard let profile = user.profile else {
        println!("Skipping user without profile")
        return
    }

    displayProfile(profile)
}

Combining Patterns with Multiple Conditions

You can create complex pattern matching scenarios by combining multiple features:

Multiple Conditions with Guards

fn categorizeRequest(request: Request) {
    match request {
        Request { method: "GET", path, .. } if path.starts(with: "/api") -> {
            println!("API GET request for \(path)")
        },
        Request { method: "POST", path, .. } if path.starts(with: "/api") -> {
            println!("API POST request for \(path)")
        },
        Request { method: m, path: p, .. } if p.contains("health") -> {
            println!("Health check: \(m) \(p)")
        },
        _ -> println!("Other request"),
    }
}
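The same struct patterns and guards compile directly in Rust. A sketch of the request example above; the Request fields and category strings are illustrative:

```rust
struct Request {
    method: &'static str,
    path: &'static str,
}

// Struct patterns combined with `if` guards, as in the Oxide example.
fn categorize(req: &Request) -> &'static str {
    match req {
        Request { method: "GET", path, .. } if path.starts_with("/api") => "API GET",
        Request { method: "POST", path, .. } if path.starts_with("/api") => "API POST",
        Request { path, .. } if path.contains("health") => "health check",
        _ => "other",
    }
}
```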

Combining Multiple Pattern Types

enum NetworkEvent {
    Connected(Int),
    Disconnected(String),
    DataReceived(String),
    Error(String),
}

fn handleNetworkEvent(event: NetworkEvent) {
    match event {
        // Binding and range pattern
        NetworkEvent.Connected(port) if port >= 1024 && port <= 65535 -> {
            println!("Connected on valid port \(port)")
        },

        // Destructuring with condition
        NetworkEvent.Error(msg) if msg.contains("timeout") -> {
            println!("Timeout error: \(msg)")
        },

        // Multiple patterns
        NetworkEvent.DataReceived(data) | NetworkEvent.Connected(_) -> {
            println!("Received something")
        },

        _ -> println!("Other event"),
    }
}

Pattern Refining Strategy

When writing complex patterns, use this strategy to keep code readable:

1. Start with the Most Specific Patterns

// Good: specific patterns first
match status {
    Status.Success(code) if code == 200 -> handleSuccess(),
    Status.Success(code) if code >= 300 && code < 400 -> handleRedirect(),
    Status.Error(msg) if msg.contains("timeout") -> handleTimeout(),
    Status.Error(msg) -> handleError(msg),
    _ -> handleUnknown(),
}

// Bad: a catch-all first makes every later arm unreachable
match status {
    _ -> handleAny(),  // Matches everything first
    Status.Success(_) -> handleSuccess(),  // Unreachable!
}

2. Group Related Patterns by Functionality

// Group by functionality
match data {
    // All success cases
    ParseResult.Json(obj) | ParseResult.Xml(obj) -> {
        processObject(obj)
    },

    // All error cases
    ParseResult.InvalidFormat(err) | ParseResult.DecodeError(err) -> {
        logError(err)
    },

    // Default
    ParseResult.Empty -> println!("No data"),
}

3. Use Helper Functions for Complex Patterns

fn isValidEmail(email: String): Bool {
    email.contains("@") && email.contains(".")
}

fn processUser(user: User) {
    match user {
        User { email, .. } if isValidEmail(email) -> {
            sendWelcome(user)
        },
        User { email, .. } -> {
            println!("Invalid email: \(email)")
        },
    }
}

Practical Examples

Configuration Validation with guard let

struct AppConfig {
    databaseUrl: String?,
    apiKey: String?,
    debugMode: Bool,
}

fn startApp(config: AppConfig?) {
    guard let cfg = config else {
        println!("Configuration is required")
        return
    }

    guard let dbUrl = cfg.databaseUrl else {
        println!("Database URL is required")
        return
    }

    guard let apiKey = cfg.apiKey else {
        println!("API key is required")
        return
    }

    println!("Starting app with DB: \(dbUrl)")
    println!("Debug mode: \(cfg.debugMode)")

    initializeDatabase(dbUrl)
    setApiKey(apiKey)
}

Type-Safe API Response Handling

enum ApiResponse {
    Success(String),
    Failure(Int, String),
    NetworkError(String),
}

fn processApiResponse(response: ApiResponse) {
    match response {
        ApiResponse.Success(data) -> {
            println!("Success: \(data)")
        },
        ApiResponse.Failure(code, message) if code >= 400 && code < 500 -> {
            println!("Client error \(code): \(message)")
        },
        ApiResponse.Failure(code, message) if code >= 500 -> {
            println!("Server error \(code): \(message)")
            retryRequest()
        },
        ApiResponse.NetworkError(err) -> {
            println!("Network error: \(err)")
            retryRequest()
        },
        _ -> println!("Unknown response"),
    }
}

Data Extraction with Nested Patterns and Guards

struct Message {
    from: String,
    to: String?,
    content: String,
    attachments: Vec<String>,
}

fn processEmail(message: Message) {
    guard let recipient = message.to else {
        println!("Message has no recipient")
        return
    }

    match message {
        Message { content, attachments, .. }
            if attachments.len() > 0 && content.contains("invoice") -> {
            println!("Processing invoice with attachments")
            processInvoice(content, attachments)
        },

        Message { content, .. } if content.starts(with: "URGENT:") -> {
            println!("Urgent message to \(recipient)")
            markAsUrgent()
        },

        Message { from, content, .. } -> {
            println!("Regular message from \(from)")
            saveToArchive(content)
        },
    }
}

Best Practices for Advanced Patterns

  1. Use guard let for early returns - It makes your intent clear and improves readability
  2. Put the most specific patterns first - Ensures they get evaluated before catch-all patterns
  3. Use guards for additional conditions - When pattern matching alone isn't expressive enough
  4. Group related patterns with | - Reduces repetition and groups similar logic
  5. Keep patterns readable - Use helper functions if patterns become too complex
  6. Prefer exhaustive matching - Use match instead of if let when handling multiple cases

By mastering these advanced techniques, you'll be able to write Oxide code that is both powerful and maintainable, with the compiler ensuring that you handle all cases correctly.

Advanced Features

This chapter explores the advanced features of Oxide that allow you to write powerful, expressive, and efficient code. These topics build on what you've learned so far and enable you to tackle complex programming challenges.

What You'll Learn

This chapter covers five major advanced features:

  1. Unsafe Code - How to tell the compiler to trust you and use unsafe Oxide when necessary for performance or system-level programming
  2. Advanced Traits - Mastering trait objects, associated types, default generic type parameters, and the blanket implementation pattern
  3. Advanced Types - Working with type aliases, the never type, dynamically sized types, and function pointers
  4. Advanced Functions and Closures - Function pointers, closures as function parameters and return types, and macro-like closures
  5. Macros - Writing declarative and procedural macros to generate code and extend Oxide's syntax

Why Advanced Features Matter

Advanced features exist for good reasons:

  • Performance: Sometimes you need unsafe code to write high-performance system libraries or implement low-level algorithms
  • Expressiveness: Advanced traits and types let you write generic code that works across many types
  • Code Generation: Macros eliminate boilerplate and allow you to extend the language itself
  • Integration: Unsafe code lets you call C libraries and implement Oxide libraries that other languages can call

A Word of Caution

These features are called "advanced" for a reason. They give you more power, but with that power comes more responsibility:

  • Unsafe code requires careful reasoning to maintain memory safety
  • Complex trait bounds can be confusing to read and maintain
  • Macros can hide what your code is actually doing
  • Generic code can produce large binaries if not carefully designed

That said, don't be intimidated! These features have their place, and understanding them will make you a better Oxide programmer even if you don't use them every day.

Prerequisites

This chapter assumes you've mastered the previous chapters, particularly:

  • Ownership and borrowing (Chapter 4)
  • Generics and traits (Chapters 9 and 10)
  • Closures (Chapter 13)

Let's dive in!

Unsafe Code

By default, Oxide enforces strict safety rules at compile time. The borrow checker, ownership system, and type system all work together to prevent entire classes of bugs. However, sometimes you need to do things that the compiler can't prove are safe. In these cases, Oxide provides unsafe code blocks.

Unsafe Oxide allows you to:

  • Dereference raw pointers
  • Call unsafe functions
  • Mutate statics
  • Implement unsafe traits

It's important to understand that unsafe doesn't mean "no rules"—it means "I promise the compiler that these rules are satisfied" where the compiler can't verify them itself.

Raw Pointers

Unsafe Oxide has two new pointer types called raw pointers: *const T (immutable) and *mut T (mutable). These are like references, but without the borrow checker's guarantees.

Creating Raw Pointers

You can create raw pointers from safe code:

fn main() {
    var num = 5

    // Create immutable and mutable raw pointers
    let r1 = &num as *const Int
    let r2 = &mut num as *mut Int

    unsafe {
        println!("r1 is: \(*r1)")
        println!("r2 is: \(*r2)")
    }
}

Note that creating raw pointers is safe—dereferencing them is what requires unsafe.

Dereferencing Raw Pointers

To read or write through a raw pointer, you must use the dereference operator *, and you must do it in an unsafe block:

fn main() {
    var x = 5
    let r = &mut x as *mut Int

    unsafe {
        *r = 10
        println!("x is now: \(x)")  // Prints: x is now: 10
    }
}

Why Raw Pointers?

Raw pointers are useful when:

  1. Interfacing with C code - C libraries use raw pointers extensively
  2. Performance-critical code - Sometimes avoiding the borrow checker's overhead matters
  3. Complex pointer manipulations - Like building custom data structures

Here's an example that demonstrates raw pointers' flexibility:

fn main() {
    var data = vec![1, 2, 3, 4, 5]
    let ptr = data.asMutPtr()

    unsafe {
        // Access the pointer directly
        *ptr = 100
        *(ptr.offset(1)) = 101
        *(ptr.offset(2)) = 102
    }

    println!("\(data:?)")  // Prints: [100, 101, 102, 4, 5]
}
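For comparison, the same program in Rust syntax, where `add` advances a raw pointer by elements (a sketch; the function name is ours):

```rust
// Rust equivalent of the Oxide raw-pointer example above.
fn bump_first_three(data: &mut Vec<i32>) {
    let ptr = data.as_mut_ptr();
    unsafe {
        // Safety: the caller must pass a vector with at least three elements.
        *ptr = 100;
        *ptr.add(1) = 101;
        *ptr.add(2) = 102;
    }
}
```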

Calling Unsafe Functions

An unsafe function is one that has requirements that the compiler can't check. You must call them in an unsafe block:

fn unsafeOperation() {
    println!("This is an unsafe function")
}

unsafe fn veryUnsafeOperation() {
    println!("This does something dangerous")
}

fn main() {
    unsafeOperation()  // OK - not marked unsafe

    unsafe {
        veryUnsafeOperation()  // OK - inside unsafe block
    }

    // veryUnsafeOperation()  // Error: unsafe function requires unsafe block
}

Declaring Unsafe Functions

When you declare a function as unsafe, you're making a contract: callers must ensure safety preconditions are met:

/// Divides a by b. Caller must ensure b is not zero.
///
/// # Safety
///
/// Calling this function with `b == 0` is undefined behavior.
unsafe fn divide(a: Int, b: Int): Int {
    a / b  // Undefined if b == 0
}

fn main() {
    let result = unsafe {
        divide(10, 2)
    }
    println!("10 / 2 = \(result)")
}

The # Safety section in documentation comments is the standard way to document unsafe function preconditions.

Safe Abstractions Over Unsafe Code

Often you'll want to provide a safe interface to unsafe operations. This is the key to using unsafe code effectively:

fn main() {
    var v = vec![1, 2, 3, 4, 5]

    // This is safe because splitAtMut checks the index before doing unsafe operations
    let (left, right) = v.splitAtMut(2)

    println!("Left: \(left:?)")   // [1, 2]
    println!("Right: \(right:?)")  // [3, 4, 5]
}

// This is what splitAtMut might look like internally:
fn splitAtMut<T>(v: &mut Vec<T>, mid: Int): (&mut [T], &mut [T]) {
    // Safe because we check the index
    if mid > v.len() {
        panic!("Index out of bounds")
    }

    unsafe {
        let ptr = v.asMutPtr()
        let left = std.slice.fromRawPartsMut(ptr, mid)
        let right = std.slice.fromRawPartsMut(ptr.offset(mid as IntSize), v.len() - mid)
        (&mut *left, &mut *right)
    }
}

The principle here is: safe boundary around unsafe code. Do all the validation and safety checks in the safe wrapper, leaving the dangerous operations in unsafe blocks.
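In Rust, this safe-boundary pattern looks like the following, which is essentially how the standard library implements split_at_mut:

```rust
// A safe wrapper around unsafe slice splitting: validate first,
// then do the raw-pointer work inside a small unsafe block.
fn split_at_mut(v: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = v.len();
    assert!(mid <= len, "index out of bounds");
    let ptr = v.as_mut_ptr();
    unsafe {
        // Safety: the two halves are disjoint, and mid <= len was checked above.
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}
```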

Using extern for FFI

When calling C functions from Oxide, you use extern to declare foreign functions:

extern "C" {
    // Declare C functions
    fn strlen(s: *const UInt8): UInt
    fn malloc(size: UInt): *mut UInt8
    fn free(ptr: *mut UInt8)
}

fn main() {
    unsafe {
        let ptr = malloc(1024)
        free(ptr)
    }
}

You can also expose Oxide functions to C:

#[no_mangle]
extern "C" fn oxideAdd(a: Int, b: Int): Int {
    a + b
}

Mutable Statics

You can declare global mutable variables using var at module scope:

var COUNTER: Int = 0

fn incrementCounter() {
    unsafe {
        COUNTER += 1
    }
}

fn main() {
    incrementCounter()
    unsafe {
        println!("Counter: \(COUNTER)")
    }
}

Accessing mutable statics is unsafe because:

  • Multiple threads could access and modify the value simultaneously
  • The compiler can't enforce the usual borrowing rules across the program

For safe multi-threaded access to shared state, use std.sync.Mutex or std.sync.atomic.
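The safe atomic alternative mentioned above looks like this in Rust (the counter and function name are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A thread-safe counter: no unsafe block needed, unlike a mutable static.
static COUNTER: AtomicU64 = AtomicU64::new(0);

fn increment_counter() -> u64 {
    // fetch_add returns the previous value, so add 1 for the new count.
    COUNTER.fetch_add(1, Ordering::SeqCst) + 1
}
```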

Unsafe Traits

Sometimes a trait has requirements that can't be checked by the compiler. You mark such traits as unsafe:

unsafe trait UnsafeMarkerTrait {
    fn importantInvariant()
}

// Implementing an unsafe trait requires unsafe
unsafe extension SomeType: UnsafeMarkerTrait {
    fn importantInvariant() {
        // Must uphold the invariant
    }
}

A real example from the standard library is Send and Sync:

// These are marker traits - they have no methods
unsafe trait Send {}
unsafe trait Sync {}

// Only types that are safe to send between threads implement Send
// The compiler implements this automatically for most types

When to Use Unsafe

Use unsafe code when:

  1. You must for the task - Calling C functions, low-level system programming
  2. It's worth the risk - The performance gain or expressive power justifies the safety trade-off
  3. You can isolate it - Keep unsafe code in small, well-documented modules
  4. You can verify it - You can convince yourself (and reviewers) it's actually safe

Don't use unsafe when:

  1. Safe alternatives exist - The standard library usually provides safe versions
  2. You're not sure it's safe - If you can't prove it's safe, it probably isn't
  3. It makes code much more complex - The safety trade-off should be worth it

Guidelines for Safe Unsafe Code

When you do write unsafe code, follow these principles:

Document the Safety Contract

/// Performs an operation that requires careful pointer manipulation.
///
/// # Safety
///
/// The caller must ensure:
/// - `ptr` is a valid pointer to at least `len` elements of type T
/// - `len` is the actual number of elements `ptr` points to
/// - `ptr` is properly aligned for type T
/// - The memory pointed to by `ptr` is not accessed elsewhere while this function runs
unsafe fn dangerousOperation<T>(ptr: *const T, len: Int) {
    // Implementation
}

Validate Before Acting

fn validateAndOperate(slice: &[UInt8], index: Int) {
    // Safe checks first
    if index >= slice.len() {
        panic!("Index out of bounds")
    }

    // Only then do unsafe operations
    unsafe {
        let ptr = slice.asPtr().offset(index as IntSize)
        // ...
    }
}

Keep Unsafe Blocks Small

// Good: the function contains no unsafe operations, so it stays safe
fn findZero(data: &[UInt8]): Int? {
    for (i, &byte) in data.iter().enumerate() {
        if byte == 0 {
            return Some(i)
        }
    }
    null
}

// Avoid: marking a whole function unsafe when nothing in it requires unsafe
unsafe fn findZeroBad(data: &[UInt8]): Int? {
    for (i, &byte) in data.iter().enumerate() {
        if byte == 0 {
            return Some(i)
        }
    }
    null
}

Common Unsafe Patterns

Pattern: Working with Raw Pointers

fn processBuffer(buf: &mut [UInt8]) {
    unsafe {
        let ptr = buf.asMutPtr()

        // Operate on the pointer
        for i in 0..<buf.len() {
            *ptr.offset(i as IntSize) = (*ptr.offset(i as IntSize)).wrappingAdd(1)
        }
    }
}

Pattern: Calling C Functions

extern "C" {
    fn systemCall(command: *const UInt8): Int
}

fn runCommand(cmd: String): Int {
    cmd.asBytes().withCStr { cPtr ->
        unsafe { systemCall(cPtr) }
    }
}

Pattern: Casting Between Types

fn castPtrToInt(ptr: *const UInt8): Int {
    unsafe {
        ptr as Int
    }
}

Testing Unsafe Code

Unsafe code deserves extra testing:

#[test]
fn testUnsafeOperation() {
    var x = 5
    let ptr = &mut x as *mut Int

    unsafe {
        *ptr = 10
    }

    assertEq!(x, 10)
}

#[test]
fn testRawPointerOffset() {
    var array = [1, 2, 3, 4, 5]
    let ptr = array.asMutPtr()

    unsafe {
        assertEq!(*ptr, 1)
        assertEq!(*ptr.offset(1), 2)
        assertEq!(*ptr.offset(4), 5)
    }
}

Summary

Unsafe code in Oxide:

  • Exists for a reason - Sometimes you need it for performance or interoperability
  • Requires careful thought - Document your safety requirements clearly
  • Should be isolated - Keep it in small, well-tested modules
  • Isn't the default - Most Oxide code is safe, and that's a feature
  • Doesn't bypass the type system - Unsafe code still gets type-checked

Remember: unsafe doesn't mean "do whatever you want." It means "the compiler can't verify this is safe, so you must verify it yourself." Take that responsibility seriously, and unsafe code can be a powerful tool in your Oxide toolkit.

Advanced Traits

Traits are a core feature of Oxide, enabling abstraction and code reuse. In this chapter, we'll explore advanced trait techniques that let you write flexible, powerful code.

Associated Types

Associated types let you define placeholder types inside a trait that concrete types will specify:

trait Iterator {
    type Item

    mutating fn next(): Item?
}

Here, Item is an associated type. When you implement Iterator, you specify what type Item is:

struct CountUp {
    current: Int,
    max: Int,
}

extension CountUp: Iterator {
    type Item = Int

    mutating fn next(): Int? {
        if current < max {
            current += 1
            return Some(current)
        }
        null
    }
}

fn main() {
    var counter = CountUp { current: 0, max: 3 }
    println!("\(counter.next():?)")  // Some(1)
    println!("\(counter.next():?)")  // Some(2)
    println!("\(counter.next():?)")  // Some(3)
    println!("\(counter.next():?)")  // null
}
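For comparison, the same counter in Rust syntax, implementing the real standard-library Iterator trait:

```rust
struct CountUp {
    current: u32,
    max: u32,
}

// Rust equivalent of the Oxide extension above: Item is the
// associated type, and next() advances the counter.
impl Iterator for CountUp {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.current < self.max {
            self.current += 1;
            Some(self.current)
        } else {
            None
        }
    }
}
```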

Why Associated Types Matter

Associated types are more flexible than generic type parameters. Compare these approaches:

// Using generics (less flexible)
trait IteratorGeneric<Item> {
    mutating fn next(): Item?
}

// Using associated types (more flexible)
trait Iterator {
    type Item
    mutating fn next(): Item?
}

With generics, one type could implement IteratorGeneric<Int> and IteratorGeneric<String>. But with associated types, each implementation must choose exactly one Item type. This prevents ambiguity and is usually what you want.

Associated Types in Generic Code

You can use associated types in generic bounds:

fn processIterator<I>(mut iter: I)
where
    I: Iterator,
{
    while let Some(item) = iter.next() {
        println!("Processing: \(item)")
    }
}

fn main() {
    let counter = CountUp { current: 0, max: 3 }
    processIterator(counter)
}

Default Generic Type Parameters

You can specify default types for generic parameters:

trait Add<Rhs = Self> {
    type Output

    consuming fn add(rhs: Rhs): Output
}

Here, Rhs defaults to Self. This means you can write:

extension Int: Add {
    type Output = Int
    consuming fn add(rhs: Int): Int {
        // ...
    }
}

extension Int: Add<String> {
    type Output = String
    consuming fn add(rhs: String): String {
        // ...
    }
}

Default generic type parameters enable:

  1. Backward compatibility - Adding generic parameters without breaking existing code
  2. Operator overloading - Different types can use operators in different ways
  3. Convenience - Sensible defaults reduce boilerplate

Trait Objects

Sometimes you want to store different types that implement the same trait. You can use trait objects:

trait Animal {
    fn speak()
}

struct Dog;
struct Cat;

extension Dog: Animal {
    fn speak() {
        println!("Woof!")
    }
}

extension Cat: Animal {
    fn speak() {
        println!("Meow!")
    }
}

fn main() {
    // Create a vector of trait objects
    let animals: Vec<Box<dyn Animal>> = vec![
        Box.new(Dog),
        Box.new(Cat),
        Box.new(Dog),
    ]

    for animal in animals {
        animal.speak()
    }
    // Output:
    // Woof!
    // Meow!
    // Woof!
}
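The Rust equivalent of the animals example; here speak returns a String rather than printing, so the result is easy to inspect:

```rust
trait Animal {
    fn speak(&self) -> String;
}

struct Dog;
struct Cat;

impl Animal for Dog {
    fn speak(&self) -> String { "Woof!".to_string() }
}

impl Animal for Cat {
    fn speak(&self) -> String { "Meow!".to_string() }
}

// Box<dyn Animal> stores differently sized types behind one interface.
fn all_sounds(animals: &[Box<dyn Animal>]) -> Vec<String> {
    animals.iter().map(|a| a.speak()).collect()
}
```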

Trait Objects and Dynamic Dispatch

Trait objects enable dynamic dispatch: the method to call is determined at runtime:

trait Shape {
    fn area(): Float
}

struct Circle {
    radius: Float,
}

struct Rectangle {
    width: Float,
    height: Float,
}

extension Circle: Shape {
    fn area(): Float {
        3.14159 * radius * radius
    }
}

extension Rectangle: Shape {
    fn area(): Float {
        width * height
    }
}

fn printAreas(shapes: Vec<Box<dyn Shape>>) {
    for shape in shapes {
        println!("Area: \(shape.area())")
    }
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box.new(Circle { radius: 2.0 }),
        Box.new(Rectangle { width: 3.0, height: 4.0 }),
    ]

    printAreas(shapes)
}

Trait Object Syntax

The syntax &dyn TraitName creates a trait object reference. Key rules:

// Trait objects must use reference or Box
let obj: &dyn Animal = &dog     // OK: reference
let obj: Box<dyn Animal> = Box.new(dog)  // OK: Box
// let obj: dyn Animal = dog     // Error: cannot have unboxed trait objects

// Multiple trait bounds
let obj: &(dyn Animal + Debug) = &dog  // OK: requires Animal and Debug

Limitations of Trait Objects

Trait objects have limitations compared to generics:

  1. Object safety - The trait must be "object safe"
  2. Performance - Dynamic dispatch is slower than static dispatch
  3. Size - You can't know the size of the concrete type at compile time

A trait is object safe if:

  • None of its methods return Self
  • None of its methods have generic type parameters
  • It has no static methods

trait ObjectSafe {
    fn method()
    fn returnsString(): String
}

trait NotObjectSafe {
    fn returnsSelf(): Self  // Error: returns Self
    fn generic<T>(t: T)    // Error: has generic parameter
}

// Can't create trait objects of NotObjectSafe
// let obj: Box<dyn NotObjectSafe> = Box.new(something)  // Error!

Blanket Implementations

You can implement a trait for any type that implements another trait:

trait MyTrait {
    fn doSomething()
}

trait AnotherTrait {}

// Blanket implementation: implement MyTrait for ANY type that implements AnotherTrait
extension<T> T: MyTrait
where
    T: AnotherTrait,
{
    fn doSomething() {
        println!("Doing something!")
    }
}

struct MyType;

extension MyType: AnotherTrait {}

fn main() {
    let obj = MyType
    obj.doSomething()  // Works because MyType implements AnotherTrait
}

Real-World Example: ToString

The standard library uses blanket implementations effectively:

// Simplified version of what's in std:
trait Display {
    fn fmt(f: &mut Formatter): Result
}

trait ToString {
    fn toString(): String
}

// Blanket implementation
extension<T> T: ToString
where
    T: Display,
{
    fn toString(): String {
        format!("\(self)")
    }
}

Now any type that implements Display automatically gets toString():

extension Int: Display {
    fn fmt(f: &mut Formatter): Result {
        // implementation
    }
}

fn main() {
    let n = 42
    println!("\(n.toString())")  // Works!
}
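Here is a compilable Rust sketch of the same pattern. It uses our own Describe trait, since the standard library already provides the real ToString blanket implementation:

```rust
use std::fmt::Display;

trait Describe {
    fn describe(&self) -> String;
}

// Blanket implementation: every Display type gets describe() for free.
impl<T: Display> Describe for T {
    fn describe(&self) -> String {
        format!("value: {}", self)
    }
}
```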

Supertraits

A trait can require that implementors also implement another trait:

trait OutlineDisplay: Display {
    fn outlinePrint() {
        println!("*** \(toString()) ***")
    }
}

struct Point {
    x: Int,
    y: Int,
}

extension Point: Display {
    fn fmt(f: &mut Formatter): Result {
        write!(f, "(\(x), \(y))")
    }
}

extension Point: OutlineDisplay {}

fn main() {
    let p = Point { x: 5, y: 10 }
    p.outlinePrint()  // Prints: *** (5, 10) ***
}
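The supertrait example translates to Rust as follows; outline_print returns a String here so the result can be checked:

```rust
use std::fmt;

// OutlinePrint requires Display; the default method can rely on
// Display when formatting self.
trait OutlinePrint: fmt::Display {
    fn outline_print(&self) -> String {
        format!("*** {} ***", self)
    }
}

struct Point {
    x: i32,
    y: i32,
}

impl fmt::Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "({}, {})", self.x, self.y)
    }
}

impl OutlinePrint for Point {}
```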

The syntax trait OutlineDisplay: Display means:

  • "To implement OutlineDisplay, you must also implement Display"
  • Inside methods of OutlineDisplay, you can call methods from Display

Associated Type Bounds

You can constrain associated types with trait bounds:

trait Container {
    type Item

    fn capacity(): Int
}

fn printItems<C>(container: C)
where
    C: Container,
    C.Item: Display,
{
    println!("Capacity: \(container.capacity())")
    for item in container {
        println!("Item: \(item)")
    }
}

The constraint C.Item: Display means "the associated Item type must implement Display".

Implementing Trait Methods with Defaults

Trait methods can have default implementations:

trait Animal {
    fn speak()

    fn sleep() {
        println!("Zzz...")
    }

    fn eat() {
        println!("Nom nom!")
    }
}

struct Dog;

extension Dog: Animal {
    fn speak() {
        println!("Woof!")
    }
    // Can use default implementations for sleep and eat
}

fn main() {
    let dog = Dog
    dog.speak()  // Prints: Woof!
    dog.sleep()  // Prints: Zzz...
    dog.eat()    // Prints: Nom nom!
}

You can override defaults when needed:

struct Cat;

extension Cat: Animal {
    fn speak() {
        println!("Meow!")
    }

    fn sleep() {
        println!("Cat naps for 16 hours...")
    }
}

Advanced Generic Bounds

Combine multiple traits with +:

fn process<T>(item: T)
where
    T: Clone + Display + Debug,
{
    let cloned = item.clone()
    println!("Original: \(item)")
    println!("Cloned: \(cloned:?)")
}

Use lifetime bounds with traits:

trait Produces<'a> {
    type Output: 'a
}

extension<'a> SomeType: Produces<'a> {
    type Output = &'a str
}

Higher-ranked trait bounds:

// For all lifetimes 'a, T must implement Fn(&'a str) -> UInt
fn takesClosure<F>(f: F)
where
    F: for<'a> Fn(&'a str) -> UInt,
{
    // ...
}

Example: Building a Plugin System

Let's combine these concepts into a real-world plugin system:

trait Plugin: Send + Sync {
    fn name(): &str
    fn version(): &str
    fn execute(input: String): String?
}

struct PluginManager {
    plugins: Vec<Box<dyn Plugin>>,
}

extension PluginManager {
    static fn new(): Self {
        PluginManager { plugins: vec![] }
    }

    mutating fn register<P: Plugin + 'static>(plugin: P) {
        self.plugins.push(Box.new(plugin))
    }

    fn executeAll(input: String): Vec<String> {
        self.plugins
            .iter()
            .filterMap { plugin ->
                plugin.execute(input.clone())
            }
            .collect()
    }
}

struct UppercasePlugin;

extension UppercasePlugin: Plugin {
    fn name(): &str {
        "Uppercase"
    }

    fn version(): &str {
        "1.0"
    }

    fn execute(input: String): String? {
        Some(input.toUppercase())
    }
}

fn main() {
    var manager = PluginManager.new()
    manager.register(UppercasePlugin)

    let results = manager.executeAll("hello".toString())
    for result in results {
        println!("\(result)")  // Prints: HELLO
    }
}

Summary

Advanced traits enable:

  • Associated types - Flexible placeholder types in traits
  • Default generic parameters - Sensible defaults and backward compatibility
  • Trait objects - Storing different types implementing the same trait
  • Blanket implementations - Implement traits for broad categories of types
  • Supertraits - Require implementations of multiple traits
  • Complex bounds - Fine-grained control over generic constraints

Understanding these patterns will help you write more flexible, reusable Oxide code and better understand the standard library.

Advanced Types

Oxide's type system is powerful and flexible. Let's explore some advanced type features that enable you to write expressive, type-safe code.

Type Aliases

Type aliases give an existing type another name:

type Kilometers = Int
type Pounds = Int

fn main() {
    let distance: Kilometers = 5
    let weight: Pounds = 50

    // These are the same underlying type, so they can be mixed
    let total = distance + weight

    println!("Total: \(total)")  // Prints: Total: 55
}

The key point: type aliases create aliases, not new types. Kilometers and Pounds are both Int, so values of these types can be used interchangeably.

When to Use Type Aliases

Use aliases to:

  1. Reduce repetition - Shorten long type names
  2. Clarify intent - Make code more readable
  3. Refactor - Change a type in one place

// Reduce repetition
type Result<T> = std.result.Result<T, String>

fn readFile(path: &str): Result<String> {
    // ...
}

fn parseJson(json: &str): Result<Value> {
    // ...
}

The Result type alias shows up throughout the standard library, making error handling code more readable.

Generics in Type Aliases

Type aliases can be generic:

type Callback<T> = (T) -> T

fn applyTwice<T>(f: Callback<T>, x: T): T {
    f(f(x))
}

fn main() {
    let double: Callback<Int> = { x -> x * 2 }
    println!("\(applyTwice(double, 5))")  // Prints: 20
}

The Never Type !

The ! type, called the "never type," represents a function that never returns:

fn fail(msg: &str)! {
    panic!("\(msg)")
}

fn loopForever()! {
    loop {
        println!("Forever!")
    }
}

fn main() {
    let condition = true

    // Never type is compatible with any type: both arms unify to Int
    let x: Int = if condition {
        5
    } else {
        fail("Error!")  // Returns !, which coerces to Int
    }
}

Why Never Type Matters

The never type is useful in several situations:

// In match expressions
fn example(x: Int) {
    let msg = match x {
        1 -> "one",
        2 -> "two",
        _ -> panic!("Unknown"),  // Returns !
    }
}

// In Option handling
fn getOrPanic(opt: Int?): Int {
    opt.unwrapOrElse { panic!("No value") }
}

// In loops
fn keepAsking(): String {
    loop {
        let input = getUserInput()
        if valid(input) {
            return input
        }
        println!("Invalid!")
    }
}

The never type allows these patterns to work because the compiler understands that ! can be treated as any type.

Dynamically Sized Types (DSTs)

Most types have a size known at compile time. But some types, called dynamically sized types (DSTs), don't:

// Sized type - compiler knows the size
let x: Int = 5

// DST - compiler doesn't know the size
let x: [Int] = [1, 2, 3]  // Error: the slice type [Int] has no size known at compile time

You can't use DSTs directly because Oxide needs to know the size at compile time. Instead, use references or pointers:

let x: &[Int] = &[1, 2, 3]  // OK: reference to a slice
let x: &str = "hello"       // OK: reference to a str

// Trait objects are also DSTs
let obj: &dyn Display = &value  // OK: reference to trait object

Deref Coercion

Oxide automatically converts between types when using the Deref trait. This is called deref coercion:

fn takesStr(s: &str) {
    println!("\(s)")
}

fn main() {
    let s = "hello".toString()
    takesStr(&s)  // &String coerces to &str
}

Under the hood, Oxide is calling the deref method:

  • String implements Deref<Target = str>
  • So &String is coerced to &str

The Deref Trait

You can implement Deref for your own types:

import std.ops.Deref

struct MyBox<T> {
    value: T,
}

extension<T> MyBox<T>: Deref {
    type Target = T

    fn deref(): &T {
        &value
    }
}

fn main() {
    let x = MyBox { value: 5 }
    println!("\(*x)")  // Prints: 5
}

Function Pointers

Function pointers let you pass functions like values:

fn add(a: Int, b: Int): Int {
    a + b
}

fn multiply(a: Int, b: Int): Int {
    a * b
}

fn executeOperation(op: (Int, Int) -> Int, a: Int, b: Int): Int {
    op(a, b)
}

fn main() {
    let result1 = executeOperation(add, 5, 3)
    println!("add: \(result1)")  // Prints: add: 8

    let result2 = executeOperation(multiply, 5, 3)
    println!("multiply: \(result2)")  // Prints: multiply: 15
}

Function Pointers vs Closures

Function pointers (fn) are different from closure types:

// Non-capturing closure - coerces to a function pointer type
// (function pointers implement Fn, FnMut, and FnOnce)
let f: (Int) -> Int = { x -> x * 2 }

// Capturing closure - has its own unique closure type
let multiplier = 3
let g = { x -> x * multiplier }  // Can't assign to a function pointer type!

// But closures can be assigned to function pointer types if they don't capture
let h: (Int) -> Int = { x -> x * 2 }  // OK: no captured variables

When to use each:

  • Function pointers (fn) - When you need a simple function type without closure capture
  • Closures - When you need to capture variables from the environment
  • Trait objects (&dyn Fn) - For maximum flexibility

Function Item Types

When you write a function name without calling it, you get its item type:

fn add(a: Int, b: Int): Int {
    a + b
}

fn main() {
    // Three ways to refer to the same function
    let f = add                       // add's unique function item type
    let g: (Int, Int) -> Int = add    // coerced to a function pointer type
    let h = add as (Int, Int) -> Int  // explicit cast to a function pointer type

    println!("\(f(5, 3))")  // Prints: 8
}

Function Traits

All functions and closures implement one of the Fn* traits:

// Regular function
fn regular(x: Int): Int { x * 2 }

// Closure with no captures
let noCapture = { x: Int -> x * 2 }

// Closure with immutable capture
let value = 3
let immutCapture = { x -> x * value }

// Closure with mutable capture
var counter = 0
let mutCapture = { counter += 1 }

// Closure that takes ownership
let owned = "hello".toString()
let moveCapture = move { owned.len() }

All of these can be used as function parameters:

fn applyTwice<F>(f: F, x: Int): Int
where
    F: Fn(Int) -> Int,
{
    f(f(x))
}

fn main() {
    println!("\(applyTwice(regular, 2))")        // Prints: 8
    println!("\(applyTwice(noCapture, 2))")      // Prints: 8
    println!("\(applyTwice(immutCapture, 2))")   // Prints: 18
}

Generic Trait Bounds

You can use complex trait bounds to express sophisticated type constraints:

// Multiple bounds with +
fn process<T>(item: T)
where
    T: Clone + Display + Debug,
{
    // Can use Clone, Display, and Debug methods
}

// Higher-ranked bounds
fn takesRefs<F>(f: F)
where
    F: for<'a> Fn(&'a str) -> UInt,
{
    // f can accept &str with any lifetime
}

// Where clauses for clarity
fn example<T, U>(t: T, u: U)
where
    T: Clone,
    U: Clone,
    T: Display,
    U: Display,
{
    // Clearer than T: Clone + Display, U: Clone + Display
}

Type Inference Limitations

Oxide's type inference is powerful but not unlimited:

fn main() {
    // Inference works
    let v = vec![1, 2, 3]  // Inferred as Vec<Int>

    // Sometimes you need to help
    let v: Vec<Int> = vec![]  // Can't infer Int from empty vec

    // Explicit type arguments (no turbofish ::<> needed in Oxide)
    let v = Vec<Int>.new()
    let nums = "1,2,3".split(",").map { s -> s.parse<Int>().unwrap() }.collect<Vec<Int>>()
}

Phantom Types

Sometimes you want a generic parameter that doesn't actually store a value:

import std.marker.PhantomData

struct PhantomType<T> {
    data: Int,
    phantom: PhantomData<T>,  // Has size 0
}

extension<T> PhantomType<T> {
    static fn new(data: Int): Self {
        PhantomType {
            data,
            phantom: PhantomData,
        }
    }
}

fn main() {
    let p1: PhantomType<String> = PhantomType.new(5)
    let p2: PhantomType<Int> = PhantomType.new(5)

    // These are different types even though they have the same data
}

Phantom types are useful for:

  • Maintaining type information without storing it
  • Implementing type-safe abstractions
  • Working with unsafe code

Generic Specialization

Sometimes you want different implementations for different types:

// Generic implementation
extension<T> Vec<T>: Clone
where
    T: Clone,
{
    fn clone(): Self {
        // Clone each element
    }
}

// Specialized for Copy types (faster)
extension Vec<Int>: Clone {
    fn clone(): Self {
        // Can use memcpy because Int is Copy
    }
}

Advanced Example: Type-Safe Builder

Here's a real-world pattern using advanced types:

struct Builder<S> {
    name: String?,
    age: Int?,
    phantom: PhantomData<S>,
}

trait BuilderState {}
struct NoName;
struct HasName;
struct Complete;

extension NoName: BuilderState {}
extension HasName: BuilderState {}
extension Complete: BuilderState {}

extension Builder<NoName> {
    static fn new(): Self {
        Builder {
            name: null,
            age: null,
            phantom: PhantomData,
        }
    }

    consuming fn name(name: String): Builder<HasName> {
        Builder {
            name: name,
            age: self.age,
            phantom: PhantomData,
        }
    }
}

extension Builder<HasName> {
    consuming fn age(age: Int): Builder<Complete> {
        Builder {
            name: self.name,
            age: age,
            phantom: PhantomData,
        }
    }
}

extension Builder<Complete> {
    consuming fn build(): Person {
        Person {
            name: self.name.unwrap(),
            age: self.age.unwrap(),
        }
    }
}

struct Person {
    name: String,
    age: Int,
}

fn main() {
    // Compile error: can't build without setting both fields
    // let p = Builder.new().build()

    // OK: set both fields
    let p = Builder.new()
        .name("Alice".toString())
        .age(30)
        .build()

    println!("Person: \(p.name), age \(p.age)")
}

This pattern uses phantom types to enforce at compile time that the builder is in the correct state.

Summary

Advanced types in Oxide:

  • Type aliases - Give names to complex types for clarity
  • Never type - Represents functions that don't return
  • DSTs and Deref - Work with unsized types safely
  • Function pointers - Pass functions as values
  • Function traits - Flexible function parameters
  • Generic bounds - Express sophisticated constraints
  • Phantom types - Type information without storage
  • Specialization - Different implementations for different types

These features combine to give Oxide a type system that's both expressive and safe, letting you write code that's both correct by construction and readable.

Advanced Functions and Closures

Functions are fundamental to Oxide, but there's more to them than basic definitions and calls. Let's explore advanced function patterns that enable powerful abstractions.

Function Pointers as Parameters

We can write functions that accept function pointers:

fn applyOperation(x: Int, y: Int, op: (Int, Int) -> Int): Int {
    op(x, y)
}

fn add(a: Int, b: Int): Int {
    a + b
}

fn multiply(a: Int, b: Int): Int {
    a * b
}

fn main() {
    println!("5 + 3 = \(applyOperation(5, 3, add))")        // 8
    println!("5 * 3 = \(applyOperation(5, 3, multiply))")   // 15
}

This is straightforward, but it's less flexible than using trait bounds because function pointers can't capture variables.

Returning Function Pointers

Functions can return function pointers:

fn chooseOperation(isAddition: Bool): (Int, Int) -> Int {
    if isAddition {
        { a, b -> a + b }
    } else {
        { a, b -> a * b }
    }
}

fn main() {
    let addFn = chooseOperation(true)
    let mulFn = chooseOperation(false)

    println!("5 + 3 = \(addFn(5, 3))")  // 8
    println!("5 * 3 = \(mulFn(5, 3))")  // 15
}

Returning Closures with Trait Objects

When closures capture variables, they can't be assigned to function pointer types. Instead, return a boxed trait object:

fn makeAdder(x: Int): Box<dyn Fn(Int) -> Int> {
    Box.new(move { y -> x + y })
}

fn main() {
    let addFive = makeAdder(5)
    println!("5 + 3 = \(addFive(3))")  // 8

    let addTen = makeAdder(10)
    println!("10 + 3 = \(addTen(3))")  // 13
}

The move keyword is essential here—without it, the closure would try to borrow x, but x goes out of scope when makeAdder returns.

Returning Different Closures

If you need to return different closures from different branches, use trait objects:

fn getClosure(isDouble: Bool): Box<dyn Fn(Int) -> Int> {
    if isDouble {
        Box.new(move { x -> x * 2 })
    } else {
        Box.new(move { x -> x + 1 })
    }
}

fn main() {
    let f1 = getClosure(true)
    let f2 = getClosure(false)

    println!("\(f1(5))")  // 10
    println!("\(f2(5))")  // 6
}

Higher-Order Functions

A function is higher-order if it takes functions as parameters or returns functions. We've already seen examples, but let's explore them more:

fn map<T, U, F>(items: Vec<T>, f: F): Vec<U>
where
    F: Fn(T) -> U,
{
    var result = vec![]
    for item in items {
        result.push(f(item))
    }
    result
}

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    let doubled = map(numbers.clone(), { x -> x * 2 })
    println!("Doubled: \(doubled:?)")

    let strings = map(numbers, { x -> x.toString() })
    println!("Strings: \(strings:?)")
}

Flexible Function Types with Trait Bounds

Using trait bounds gives more flexibility than function pointers:

// Using function pointer - restrictive
fn processWithPointer(items: Vec<Int>, op: (Int) -> Int) {
    for item in items {
        println!("\(op(item))")
    }
}

// Using trait bound - flexible
fn processWithTrait<F>(items: Vec<Int>, op: F)
where
    F: Fn(Int),
{
    for item in items {
        println!("\(op(item))")
    }
}

fn main() {
    let numbers = vec![1, 2, 3]

    // Can use closure with captured variable
    let multiplier = 10
    let closure = { x -> println!("\(x * multiplier)") }

    // This works with trait bound
    processWithTrait(numbers.clone(), closure)

    // But not with function pointer
    // processWithPointer(numbers, closure)  // Error: closure captures multiplier
}

Closures with Trait Bounds

You can use multiple trait bounds on closure types:

fn callWithDifferentValues<F>(var f: F)
where
    F: FnMut(Int),
{
    f(1)
    f(2)
    f(3)
}

fn main() {
    var count = 0

    callWithDifferentValues({ x ->
        count += x
        println!("Count: \(count)")
    })
}

Remember the three function traits:

  • Fn - Immutable borrows, can call multiple times
  • FnMut - Mutable borrow, can call multiple times
  • FnOnce - Takes ownership, can call once

Function Item Types

Every function has a unique type, sometimes called a function item type:

fn add(x: Int, y: Int): Int { x + y }
fn multiply(x: Int, y: Int): Int { x * y }

fn main() {
    // These have different types!
    let f = add
    let g = multiply

    // But both can be converted to a common function pointer type
    let fPtr: (Int, Int) -> Int = add
    let gPtr: (Int, Int) -> Int = multiply
}

This is why you might see fn types in error messages—they're the actual types of functions, before being converted to function pointers.

Combining Trait Bounds with Closures

Complex scenarios require combining trait bounds:

fn processItems<F, G>(
    items: Vec<Int>,
    filter: F,
    transform: G,
): Vec<Int>
where
    F: Fn(Int) -> Bool,
    G: Fn(Int) -> Int,
{
    items
        .iter()
        .filter { filter(it) }
        .map { transform(it) }
        .collect()
}

fn main() {
    let numbers = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    let result = processItems(
        numbers,
        { x -> x % 2 == 0 },      // filter: even numbers
        { x -> x * x },             // transform: square
    )

    println!("Result: \(result:?)")  // [4, 16, 36, 64, 100]
}

The Never Type and Functions

Functions can return the never type ! to indicate they never return:

fn failWithMessage(msg: &str)! {
    panic!("\(msg)")
}

fn infiniteLoop()! {
    loop {
        println!("Forever!")
    }
}

fn example(condition: Bool): Int {
    if condition {
        42
    } else {
        failWithMessage("Invalid condition!")
    }
}

The never type allows these patterns to work: even though failWithMessage doesn't return, the if expression is valid because ! is compatible with any type.

Variadic Functions

Oxide doesn't have true variadic functions like some languages, but you can simulate them:

// Using vectors
fn sum(numbers: Vec<Int>): Int {
    var total = 0
    for num in numbers {
        total += num
    }
    total
}

// Using the vec! macro
fn main() {
    println!("\(sum(vec![1, 2, 3]))")           // 6
    println!("\(sum(vec![10, 20, 30, 40]))")   // 100
}

Or with variadic arguments using macros (see the Macros chapter for more):

// Simulate variadic with macro
sum!(1, 2, 3, 4, 5)

Function Composition

Build complex operations from simpler functions:

fn compose<A, B, C, F, G>(f: F, g: G): impl Fn(A) -> C
where
    F: Fn(A) -> B,
    G: Fn(B) -> C,
{
    move { x -> g(f(x)) }
}

fn main() {
    let addOne = { x: Int -> x + 1 }
    let double = { x: Int -> x * 2 }

    let addOneThenDouble = compose(addOne, double)
    let doubleThenAddOne = compose(double, addOne)

    println!("(5 + 1) * 2 = \(addOneThenDouble(5))")  // 12
    println!("(5 * 2) + 1 = \(doubleThenAddOne(5))")  // 11
}

Currying

Transform functions with multiple parameters into chains of single-parameter functions:

fn curry<A, B, C, F>(f: F): impl Fn(A) -> impl Fn(B) -> C
where
    F: Fn(A, B) -> C + 'static,
    A: 'static,
    B: 'static,
    C: 'static,
{
    move { a -> move { b -> f(a, b) } }
}

fn add(x: Int, y: Int): Int {
    x + y
}

fn main() {
    let curriedAdd = curry(add)
    let addFive = curriedAdd(5)
    let result = addFive(3)

    println!("5 + 3 = \(result)")  // 8
}

Memoization Pattern

Cache function results to improve performance:

import std.collections.HashMap

struct Memoized<T> {
    cache: HashMap<Int, T>,
    func: (Int) -> T,
}

extension<T: Clone> Memoized<T> {
    static fn new(func: (Int) -> T): Self {
        Memoized {
            cache: HashMap.new(),
            func,
        }
    }

    mutating fn call(x: Int): T {
        if let Some(result) = cache.get(&x) {
            return result.clone()
        }

        let result = (func)(x)
        cache.insert(x, result.clone())
        result
    }
}

fn expensiveOperation(n: Int): Int {
    println!("Computing for \(n)...")
    var result = 0
    for i in 0..<n {
        result += i
    }
    result
}

fn main() {
    var memo = Memoized.new(expensiveOperation)

    println!("First call:")
    println!("\(memo.call(5))")  // Computes

    println!("Second call:")
    println!("\(memo.call(5))")  // Returns cached value
}

Advanced Iterator Patterns

Functions and closures shine with iterators:

fn main() {
    let numbers = vec![1, 2, 3, 4, 5]

    // Chain operations
    let result: Vec<Int> = numbers
        .iter()
        .filter { it % 2 == 0 }
        .map { it * it }
        .collect()

    println!("Evens squared: \(result:?)")  // [4, 16]

    // Use fold to aggregate
    let sum = numbers
        .iter()
        .fold(0) { acc, x -> acc + x }

    println!("Sum: \(sum)")  // 15

    // Use find to search
    let firstEven = numbers
        .iter()
        .find { it % 2 == 0 }

    println!("First even: \(firstEven:?)")  // Some(2)
}

Summary

Advanced functions in Oxide:

  • Function pointers - Pass and return functions as values
  • Trait bounds - More flexible than function pointers
  • Trait objects - Return closures and different closure types
  • Higher-order functions - Functions that work on other functions
  • Function composition - Build complex operations from simple ones
  • Currying - Transform multi-parameter functions
  • Memoization - Cache function results
  • Iterator patterns - Powerful function chains

These patterns form the foundation of functional programming in Oxide and enable elegant, expressive code.

Macros

Macros are one of Oxide's most powerful features. They allow you to write code that writes other code, enabling abstractions that would be difficult or impossible with functions alone.

What Are Macros?

Macros are programs that generate code at compile time. They're different from functions in important ways:

  • Type checking - A function body is checked once; macro-generated code is checked after expansion
  • Parameters - Function arguments must match declared types; macros can accept a variable number of arguments
  • Monomorphization - A function is compiled once and shared; macro expansions can duplicate code at each call site
  • Speed - Functions run at runtime; macros expand at compile time

There are two kinds of macros in Oxide:

  1. Declarative macros - Define rules for pattern matching and code generation
  2. Procedural macros - Functions that manipulate token streams

Declarative Macros

Declarative macros, also called "macros by example," let you define a syntax pattern and the code to generate when that pattern matches.

The vec! Macro

The vec! macro is a classic example:

// Calling the macro
let v = vec![1, 2, 3]

// Equivalent to
let v = {
    var v = Vec.new()
    v.push(1)
    v.push(2)
    v.push(3)
    v
}

Defining Declarative Macros

The syntax is:

Note: macro_rules! blocks use Rust's macro syntax (including => for rule arms). This is a deliberate compatibility island; outside macros, Oxide uses ->.

macro_rules! name {
    (pattern1) => { expansion1 };
    (pattern2) => { expansion2 };
}

Here's a simple example:

macro_rules! shout {
    ($e:expr) => {
        println!("\(($e).toUppercase())!")
    };
}

fn main() {
    shout!("hello")  // Prints: HELLO!
    shout!("world")  // Prints: WORLD!
}

Macro Patterns and Fragments

The part in parentheses after the macro name defines what patterns it accepts. Common fragment specifiers:

  • expr - Expressions
  • stmt - Statements
  • item - Items (functions, structs, etc.)
  • ty - Types
  • ident - Identifiers
  • path - Paths
  • tt - Token trees
  • block - Block expressions

Here's a macro that accepts multiple expressions:

macro_rules! printAll {
    ($($e:expr),*) => {
        $(
            println!("\($e)")
        )*
    };
}

fn main() {
    printAll!(1, 2, 3, "hello", true)
}

The $(...)* syntax means "repeat this pattern zero or more times."

Building the vec! Macro

Let's build our own version of vec!:

macro_rules! myVec {
    // Empty vec
    () => {
        Vec.new()
    };

    // Single element
    ($e:expr) => {
        {
            var v = Vec.new()
            v.push($e)
            v
        }
    };

    // Multiple elements
    ($($e:expr),+) => {
        {
            var v = Vec.new()
            $(
                v.push($e)
            )*
            v
        }
    };

    // Comma-separated list with trailing comma
    ($($e:expr),+ ,) => {
        myVec!($($e),*)
    };
}

fn main() {
    let v1 = myVec!()
    let v2 = myVec!(1)
    let v3 = myVec!(1, 2, 3)
    let v4 = myVec!(1, 2, 3,)

    println!("\(v1:?)")  // []
    println!("\(v2:?)")  // [1]
    println!("\(v3:?)")  // [1, 2, 3]
    println!("\(v4:?)")  // [1, 2, 3]
}

Repetition Patterns

The repetition syntax uses several operators:

// * means 0 or more times
($($e:expr),*)

// + means 1 or more times
($($e:expr),+)

// ? means 0 or 1 times
($e:expr)?

// You can separate with any token
($($e:expr);*)  // semicolon-separated
($($e:expr)|*)  // pipe-separated

Advanced Example: DSL for Assertions

Here's a declarative macro that creates a domain-specific language (DSL) for assertions:

macro_rules! assertEqWithMsg {
    ($left:expr, $right:expr) => {
        assertEqWithMsg!($left, $right, "")
    };

    ($left:expr, $right:expr, $msg:expr) => {
        {
            let left = &($left)
            let right = &($right)
            if left != right {
                panic!(
                    "assertion failed: {} == {}, message: {}",
                    left, right, $msg
                )
            }
        }
    };
}

fn main() {
    assertEqWithMsg!(5, 5)
    assertEqWithMsg!(10, 5 * 2, "Basic math")
    // assertEqWithMsg!(5, 6)  // Would panic
}

Procedural Macros

Procedural macros are more powerful but also more complex. They're functions that take a token stream as input and produce a token stream as output.

Derive Macros

The most common procedural macros are derive macros, which automatically implement traits:

#[derive(Clone, Copy, Debug)]
struct Point {
    x: Int,
    y: Int,
}

fn main() {
    let p = Point { x: 5, y: 10 }
    let p2 = p.clone()

    println!("\(p:?)")   // Point { x: 5, y: 10 }
    println!("\(p2:?)")  // Point { x: 5, y: 10 }
}

In this example, #[derive(Debug)] automatically implements the Debug trait, so we can print the struct with the :? format specifier.

Common derive macros:

  • Debug - Printable representation
  • Clone - Cloning
  • Copy - Copy semantics
  • Default - Default values
  • Hash - Hashing
  • PartialEq, Eq - Equality
  • PartialOrd, Ord - Ordering

Attribute Macros

Attribute macros modify items:

#[route(GET, "/")]
fn index(): String {
    "Hello, world!".toString()
}

The #[route(...)] macro is called with the attribute parameters and the item it decorates.

Function-like Macros

Function-like procedural macros look like function calls but process tokens:

let sql = sql!(
    SELECT * FROM users
    WHERE age > 18
)

Macro Rules and Macros Defined in Terms of Other Macros

Macros can call other macros:

macro_rules! fiveTimes {
    ($e:expr) => {
        $e + $e + $e + $e + $e
    };
}

macro_rules! timesFifteen {
    ($e:expr) => {
        fiveTimes!(2 * $e) + fiveTimes!($e)
    };
}

fn main() {
    println!("\(timesFifteen!(2))")  // (2 * 2) * 5 + 2 * 5 = 30
}

Common Macro Patterns

Variadic Functions

Implement function-like behavior with variable arguments:

macro_rules! sum {
    ($e:expr) => { $e };
    ($e:expr, $($rest:expr),+) => {
        $e + sum!($($rest),+)
    };
}

fn main() {
    println!("\(sum!(1))")           // 1
    println!("\(sum!(1, 2))")        // 3
    println!("\(sum!(1, 2, 3, 4))")  // 10
}

Type Repetition

Generate code for multiple types:

macro_rules! implForTypes {
    ($traitName:ident, $method:ident, $($ty:ty),*) => {
        $(
            extension $ty: $traitName {
                fn $method(): String {
                    stringify!($ty).toString()
                }
            }
        )*
    };
}

Debug Printing

Create convenient debugging utilities:

macro_rules! dbg {
    ($($e:expr),*) => {
        $(
            eprintln!("{} = {:?}", stringify!($e), &$e)
        )*
    };
}

fn main() {
    let x = 5
    let y = 10
    dbg!(x, y)  // Prints debug info
}

Hygiene

Macros are "hygienic," meaning they don't accidentally capture or shadow variables:

macro_rules! setup {
    ($n:expr) => {
        let x = $n  // This x is local to the macro
    };
}

fn main() {
    let x = "outer"
    setup!(5)
    println!("\(x)")  // Prints: outer, not 5
}

This is different from macros in C, where they're simple text substitution.

Useful Built-in Macros

Oxide provides several useful built-in macros:

// Print debugging
println!("Hello, \(name)!")

// Panic with a message
panic!("Something went wrong")

// Assert conditions
assert!(condition)
assertEq!(left, right)

// Create vectors
vec![1, 2, 3]

// Stringify expressions
let s = stringify!(x + y)

// Get file and line info
file!()
line!()
column!()

// Access environment variables at compile time
env!("HOME")

When to Use Macros

Use macros when:

  1. Repetitive code generation - Avoid boilerplate
  2. Domain-specific languages - Create natural syntax
  3. Metaprogramming - Generate code based on input
  4. Zero-cost abstractions - Generate specialized code
  5. Syntax extensions - Extend the language

Don't use macros when:

  1. A function would work - Functions are simpler and more testable
  2. You need dynamic behavior - Macros run at compile time
  3. Code clarity matters more - Macros can hide what's happening
  4. You need debugging - Macro expansion can be hard to debug

Debugging Macros

To see what a macro expands to, use the cargo-expand tool (install it with cargo install cargo-expand; it uses the nightly compiler for expansion):

cargo expand

This will show the expanded code, helping you understand what the macro is generating.

Best Practices

Document Your Macros

/// Creates a vector with the given elements.
///
/// # Examples
///
/// ```
/// let v = myVec![1, 2, 3]
/// assertEq!(v.len(), 3)
/// ```
macro_rules! myVec {
    // ...
}

Keep Them Simple

// Good: simple, focused
macro_rules! triple {
    ($e:expr) => { $e + $e + $e };
}

// Avoid: complex, hard to understand
macro_rules! complex {
    // ... 50 lines of rules
}

Use Descriptive Names

// Good
macro_rules! assertError {
    // ...
}

// Avoid
macro_rules! ae {
    // ...
}

Test Your Macros

#[test]
fn testMyVecMacro() {
    let v = myVec![1, 2, 3]
    assertEq!(v.len(), 3)
    assertEq!(v[0], 1)
}

Summary

Macros in Oxide:

  • Declarative macros - Pattern matching and code generation
  • Procedural macros - More powerful, work with token streams
  • Derive macros - Automatically implement traits
  • Hygiene - Safe variable scoping
  • Powerful abstractions - Can eliminate boilerplate and create DSLs

Macros are advanced because they're powerful but require careful design. Use them judiciously, and they can make your code cleaner and more expressive. When in doubt, prefer functions—they're easier to understand and debug.

Final Project: Building a Multithreaded Web Server

This chapter brings together everything you've learned so far to build a practical, real-world project: a web server.

Project Overview

We'll build a web server that:

  • Listens for incoming TCP connections
  • Parses HTTP requests
  • Responds with HTML content
  • Handles multiple requests concurrently using a thread pool
  • Implements graceful shutdown

This project demonstrates several important concepts:

  • TCP and HTTP networking - Understanding how web servers communicate
  • Thread pools - Managing concurrent work efficiently
  • Graceful shutdown - Properly cleaning up resources
  • Error handling - Building robust systems

What You'll Learn

Throughout this chapter, you'll practice:

  • Working with the standard library's networking APIs
  • Designing concurrent systems with thread pools
  • Using channels for thread communication
  • Implementing clean shutdown patterns
  • Building a real, working application

Project Structure

We'll build this incrementally:

  1. Single-threaded server - Start simple with a basic working server
  2. Multithreaded server - Add concurrency using a thread pool
  3. Graceful shutdown - Implement clean termination

Each step builds on the previous one, giving you a chance to understand each concept before moving forward.

Setting Up

Create a new binary project:

cargo new webserver
cd webserver

Your project structure should look like:

webserver/
├── Cargo.toml
└── src/
    └── main.ox

We'll also create a library module for the thread pool:

// src/lib.ox

Ready? Let's start building!

Project Summary

This chapter guides you through building a web server that:

  • Listens on a TCP socket for incoming connections
  • Parses HTTP requests
  • Handles multiple requests concurrently using a thread pool
  • Gracefully shuts down when signaled

Key Oxide Features Demonstrated

  • var keyword - Mutable variable declaration (replaces Rust's let mut)
  • String interpolation - Using \(variable) syntax
  • Path notation - Using . instead of :: (e.g., std.io.Write)
  • extension keyword - Implementing methods on types
  • Match expressions - Pattern matching on message types
  • Closures - Using { params -> body } closures with thread spawning
  • Trait objects - dyn Fn() + Send + 'static
  • Null types - Using T? syntax
  • Result type - Error handling with Result<T, E>

Main Topics

TCP and HTTP Networking

  • Binding to a TCP socket with TcpListener.bind()
  • Accepting connections with listener.incoming()
  • Reading HTTP requests with BufReader
  • Writing HTTP responses with Write trait

Thread Pool Implementation

  • Using mpsc channels for job distribution
  • Arc<Mutex<T>> for thread-safe shared state
  • Worker threads spawned with thread.spawn()
  • Message passing between threads
  • Graceful shutdown with explicit terminate messages

Concurrency Patterns

  • Fixed-size thread pools vs. unbounded thread spawning
  • Queue-based job distribution
  • Thread synchronization with mutexes and channels
  • Clean shutdown coordination

Building the Project

# Create the project
cargo new webserver
cd webserver

# Build
cargo build

# Run
cargo run

# Test with curl
curl http://127.0.0.1:7878

This chapter is adapted from: The Rust Book Chapter 21: Final Project - Building a Multithreaded Web Server

Learning Outcomes

After completing this chapter, you will understand:

  • How TCP connections and HTTP protocols work
  • How to build concurrent systems with thread pools
  • Thread-safe communication using channels
  • Graceful shutdown patterns
  • Real-world systems design in Oxide/Rust
  • Performance considerations in concurrent programming

Important Oxide vs Rust Differences

  • let mut x → var x - Mutable bindings use var
  • format!() → string interpolation - Use "\(var)" syntax
  • std::io → std.io - Path separators are . not ::
  • |x| x * 2 → { x -> x * 2 } - Closures use braces and ->
  • _ wildcard → _ wildcard - Pattern matching uses _ in both
  • fn new() → static fn new() - Static methods use the static keyword
  • None → null - Nullability is built-in with T?

Testing the Server

The single-threaded version demonstrates basic functionality.

The multithreaded version can be tested with concurrent requests:

# Terminal 1
cargo run

# Terminal 2
curl http://127.0.0.1:7878 &
curl http://127.0.0.1:7878 &
curl http://127.0.0.1:7878 &
wait

All requests should complete concurrently.

Extension Ideas

To further explore the concepts, consider:

  • Adding logging with a logging crate
  • Implementing more complex routing (pattern matching on paths)
  • Adding middleware for request/response processing
  • Implementing HTTPS with TLS
  • Adding static file serving with caching
  • Building a REST API handler

Notes

  • This is a learning project demonstrating concurrency concepts
  • For a production web server, use an established framework like Actix or Tokio
  • The thread pool is intentionally simple to show the concepts clearly
  • Real servers would handle errors more gracefully

Building a Single-Threaded Web Server

Before diving into concurrency, let's build a basic web server that handles one request at a time. This gives us a solid foundation to build upon.

Understanding TCP and HTTP

A web server operates at two levels:

  1. TCP (Transmission Control Protocol) - The low-level networking protocol that handles connection establishment and data transmission
  2. HTTP (HyperText Transfer Protocol) - The application-level protocol that defines the format of requests and responses

Our server will:

  1. Listen on a TCP socket
  2. Accept incoming connections
  3. Read HTTP requests
  4. Send back HTTP responses

Creating the Server

Let's start with the main structure. Update src/main.ox:

import std.io.{BufRead, BufReader, Write}
import std.net.{TcpListener, TcpStream}
import std.fs.readToString

fn main() {
    let listener = TcpListener.bind("127.0.0.1:7878").expect("Failed to bind to port 7878")
    println!("Server listening on http://127.0.0.1:7878")

    for stream in listener.incoming() {
        var stream = stream.expect("Failed to accept connection")
        handleConnection(&mut stream)
    }
}

fn handleConnection(stream: &mut TcpStream) {
    let bufReader = BufReader.new(stream)
    let requestLine = bufReader.lines().next().expect("Should have first line")
        .expect("Should read first line")

    let (status, filename) = if requestLine == "GET / HTTP/1.1" {
        ("200 OK", "hello.html")
    } else {
        ("404 NOT FOUND", "404.html")
    }

    let contents = readToString(filename).unwrapOrElse { _ -> "Error reading file".toString() }
    let length = contents.len()

    let response = "HTTP/1.1 \(status)\r\nContent-Length: \(length)\r\n\r\n\(contents)"
    stream.writeAll(response.asBytes()).expect("Failed to write response")
}

Imports and Modules

Let's understand the imports:

  • std.io.{BufRead, BufReader, Write} - For reading buffered input and writing output
  • std.net.{TcpListener, TcpStream} - For listening on a TCP socket and working with accepted connections
  • std.fs.readToString - For reading file contents

Notice the Oxide syntax for method paths using . instead of Rust's ::

Listening for Connections

let listener = TcpListener.bind("127.0.0.1:7878").expect("Failed to bind to port 7878")

The TcpListener.bind method:

  • Takes an address and port as a string
  • Returns a Result that we unwrap with expect
  • The address 127.0.0.1 is localhost (your own machine)
  • Port 7878 is somewhat arbitrary - choose any unused port
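The same call exists in Rust's std::net, which Oxide compiles down to. A small sketch, using port 0 so the OS picks any free port (handy in tests, where a fixed port might already be taken):

```rust
use std::net::TcpListener;

// Ask the OS for any free port and report which one we were given.
fn bind_any_port() -> u16 {
    let listener = TcpListener::bind("127.0.0.1:0").expect("Failed to bind");
    listener.local_addr().expect("Failed to read address").port()
}

fn main() {
    println!("Bound to port {}", bind_any_port());
}
```

Binding to 7878 works the same way; the string is simply parsed into an address and a port.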

Iterating Over Connections

for stream in listener.incoming() {
    var stream = stream.expect("Failed to accept connection")
    handleConnection(&mut stream)
}

The listener.incoming() method returns an iterator of incoming TCP connections. Each item is a Result<TcpStream, Error>. We:

  • Use expect to handle the error
  • Make the stream mutable with var since we'll write to it
  • Pass it to our handler function

Parsing HTTP Requests

let bufReader = BufReader.new(stream)
let requestLine = bufReader.lines().next().expect("Should have first line")
    .expect("Should read first line")

A typical HTTP request looks like:

GET / HTTP/1.1
Host: 127.0.0.1:7878
User-Agent: curl/7.64.0
Accept: */*

We read just the first line (the request line) which contains:

  • The HTTP method (GET, POST, etc.)
  • The path being requested
  • The HTTP version

Routing Requests

let (status, filename) = if requestLine == "GET / HTTP/1.1" {
    ("200 OK", "hello.html")
} else {
    ("404 NOT FOUND", "404.html")
}

This is simple routing:

  • If the request is for /, we serve hello.html with a 200 OK status
  • For anything else, we serve 404.html with a 404 NOT FOUND status

Building the Response

let contents = readToString(filename).unwrapOrElse { _ -> "Error reading file".toString() }
let length = contents.len()

let response = "HTTP/1.1 \(status)\r\nContent-Length: \(length)\r\n\r\n\(contents)"
stream.writeAll(response.asBytes()).expect("Failed to write response")

An HTTP response has the format:

HTTP/1.1 200 OK
Content-Length: 44

<html body content>

We:

  • Read the HTML file contents
  • Calculate the content length
  • Build the response string with proper HTTP headers
  • Write the bytes to the stream

HTML Files

Create hello.html:

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8" />
        <title>Hello!</title>
    </head>
    <body>
        <h1>Hello!</h1>
        <p>Hi from our Oxide web server</p>
    </body>
</html>

Create 404.html:

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8" />
        <title>Hello!</title>
    </head>
    <body>
        <h1>Oops!</h1>
        <p>Sorry, I don't know what you're asking for.</p>
    </body>
</html>

Running the Server

Compile and run:

cargo run

Then visit http://127.0.0.1:7878 in your browser. You should see "Hello!" displayed.

Try visiting http://127.0.0.1:7878/something-else to see the 404 page.

Testing the Server

You can also test with curl:

curl http://127.0.0.1:7878
curl http://127.0.0.1:7878/not-found

The Problem with This Approach

Our current server handles one request at a time. If a client connects and takes a long time to send data, no other clients can connect until that request is fully processed.

Try this to see the problem:

# In one terminal:
cargo run

# In another terminal:
curl http://127.0.0.1:7878/sleep

The server has no /sleep route yet, but imagine one that sleeps for 5 seconds before responding: while it is being handled, every other request blocks behind it.

This is where threading comes in. In the next section, we'll implement a thread pool to handle multiple requests concurrently.

Summary

We've built a basic working web server that:

  • Listens on a TCP socket
  • Reads HTTP requests
  • Parses the request path
  • Sends back HTML responses

The single-threaded design is simple but limited. Let's make it concurrent in the next section.

Converting to a Multithreaded Web Server

The single-threaded server works, but it can only handle one request at a time. Let's add concurrency using a thread pool so multiple clients can be served simultaneously.

The Challenge

We could spawn a new thread for each incoming connection:

for stream in listener.incoming() {
    var stream = stream.expect("Failed to accept connection")

    thread.spawn {
        handleConnection(&mut stream)
    }
}

While this works, it's inefficient. Creating a new thread for each connection consumes resources and the operating system can only create so many threads before performance degrades.

Thread Pool Design

A better approach is a thread pool: a fixed number of worker threads that wait for jobs.

The workflow:

  1. Main thread accepts connections
  2. Main thread puts the work (handling a connection) in a job queue
  3. Worker threads take jobs from the queue and process them
  4. When a job is done, the worker waits for the next job

Creating the Library Structure

Create src/lib.ox:

import std.sync.{mpsc, Arc, Mutex}
import std.thread

public struct ThreadPool {
    workers: Vec<Worker>,
    sender: mpsc.Sender<Message>,
}

struct Worker {
    id: UIntSize,
    thread: thread.JoinHandle<()>?,
}

enum Message {
    NewJob(Job),
    Terminate,
}

type Job = Box<dyn FnOnce() + Send + 'static>

extension ThreadPool {
    public static fn new(size: UIntSize): ThreadPool {
        assert!(size > 0, "Thread pool size must be greater than 0")

        let (sender, receiver) = mpsc.channel()
        let receiver = Arc.new(Mutex.new(receiver))

        var workers = Vec.new()

        for id in 0..size {
            workers.push(Worker.new(id, Arc.clone(&receiver)))
        }

        ThreadPool { workers, sender }
    }

    public fn execute<F>(f: F)
    where
        F: FnOnce() + Send + 'static,
    {
        let job = Box.new(f)
        let message = Message.NewJob(job)
        self.sender.send(message).expect("Failed to send job")
    }
}

extension Worker {
    fn new(id: UIntSize, receiver: Arc<Mutex<mpsc.Receiver<Message>>>): Worker {
        let thread = thread.spawn(move {
            loop {
                let message = receiver.lock().expect("Mutex poisoned").recv().expect("Failed to receive message")

                match message {
                    Message.NewJob(job) -> {
                        println!("Worker \(id) got a job; executing.")
                        job()
                    }
                    Message.Terminate -> {
                        println!("Worker \(id) was told to terminate.")
                        break
                    }
                }
            }
        })

        Worker { id, thread: Some(thread) }
    }
}

extension ThreadPool: Drop {
    mutating fn drop() {
        println!("Sending terminate message to all workers.")

        for _ in &self.workers {
            self.sender.send(Message.Terminate).expect("Failed to send terminate message")
        }

        println!("Shutting down all workers.")

        for worker in &mut self.workers {
            println!("Shutting down worker \(worker.id)")

            if let Some(thread) = worker.thread.take() {
                thread.join().expect("Failed to join worker thread")
            }
        }
    }
}

Understanding the Design

The Job Type

type Job = Box<dyn FnOnce() + Send + 'static>

This defines a type alias for:

  • Box<...> - A boxed value on the heap
  • dyn FnOnce() - A dynamic trait object for a closure that takes no arguments, returns nothing, and runs at most once (each job is executed exactly once)
  • Send - The closure can be sent between threads
  • 'static - The closure has no borrowed data with a limited lifetime
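In Rust terms, a value of this type is just a heap-allocated closure that can cross thread boundaries and run once. A minimal sketch (here the job returns an i32 so the result is observable; the pool's own jobs return nothing):

```rust
// A job is a boxed closure that can be sent to another thread and run once.
type Job = Box<dyn FnOnce() -> i32 + Send + 'static>;

fn run_job(job: Job) -> i32 {
    job()
}

fn main() {
    let captured = 20;
    let job: Job = Box::new(move || captured + 22);
    println!("{}", run_job(job));
}
```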

The Message Enum

enum Message {
    NewJob(Job),
    Terminate,
}

The job queue sends Message enums:

  • NewJob(job) - A new job to execute
  • Terminate - Signal to shutdown

Thread Pool Creation

public static fn new(size: UIntSize): ThreadPool {
    let (sender, receiver) = mpsc.channel()
    let receiver = Arc.new(Mutex.new(receiver))

    var workers = Vec.new()

    for id in 0..size {
        workers.push(Worker.new(id, Arc.clone(&receiver)))
    }

    ThreadPool { workers, sender }
}

We:

  • Create a multi-producer, single-consumer channel
  • Wrap the receiver in Arc<Mutex<...>> so multiple threads can share it
  • Spawn size worker threads, each with a clone of the receiver
  • Return the thread pool with the sender

Worker Thread Loop

let thread = thread.spawn(move {
    loop {
        let message = receiver.lock().expect("Mutex poisoned").recv().expect("Failed to receive message")

        match message {
            Message.NewJob(job) -> {
                job()
            }
            Message.Terminate -> {
                break
            }
        }
    }
})

Each worker:

  • Continuously loops waiting for messages
  • Locks the mutex to access the receiver
  • Blocks until a message arrives
  • Either executes the job or terminates
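The whole loop can be modeled in a few lines of plain Rust. This sketch differs from the pool above in one way: instead of a Terminate message, the workers stop when the channel closes (recv returns an error once the sender is dropped):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Spawn `n` workers that pull numbers off a shared receiver and add them up.
fn run_workers(n: usize, jobs: Vec<usize>) -> usize {
    let (sender, receiver) = mpsc::channel();
    let receiver = Arc::new(Mutex::new(receiver));
    let total = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..n)
        .map(|_| {
            let receiver = Arc::clone(&receiver);
            let total = Arc::clone(&total);
            thread::spawn(move || loop {
                // Hold the lock only long enough to take one message.
                let message = receiver.lock().unwrap().recv();
                match message {
                    Ok(value) => { total.fetch_add(value, Ordering::SeqCst); }
                    Err(_) => break, // channel closed: no more jobs
                }
            })
        })
        .collect();

    for job in jobs {
        sender.send(job).unwrap();
    }
    drop(sender); // close the channel so workers exit their loops

    for handle in handles {
        handle.join().unwrap();
    }
    total.load(Ordering::SeqCst)
}

fn main() {
    println!("sum = {}", run_workers(2, vec![1, 2, 3, 4]));
}
```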

Sending Jobs

public fn execute<F>(f: F)
where
    F: FnOnce() + Send + 'static,
{
    let job = Box.new(f)
    let message = Message.NewJob(job)
    self.sender.send(message).expect("Failed to send job")
}

The main thread sends jobs through the channel. Thanks to mpsc (multi-producer), multiple threads could send jobs (though in our case only the main thread does).

Graceful Shutdown

extension ThreadPool: Drop {
    mutating fn drop() {
        for _ in &self.workers {
            self.sender.send(Message.Terminate).expect("Failed to send terminate message")
        }

        for worker in &mut self.workers {
            if let Some(thread) = worker.thread.take() {
                thread.join().expect("Failed to join worker thread")
            }
        }
    }
}

When the thread pool is dropped (goes out of scope):

  • Send a Terminate message to each worker
  • Wait for each worker thread to finish with join()

This ensures clean shutdown.

Using the Thread Pool in main.ox

Update src/main.ox:

import webserver.ThreadPool

import std.io.{BufRead, BufReader, Write}
import std.net.{TcpListener, TcpStream}
import std.fs.readToString

fn main() {
    let listener = TcpListener.bind("127.0.0.1:7878").expect("Failed to bind to port 7878")
    let pool = ThreadPool.new(4)

    println!("Server listening on http://127.0.0.1:7878")

    for stream in listener.incoming() {
        var stream = stream.expect("Failed to accept connection")

        pool.execute {
            handleConnection(&mut stream)
        }
    }
}

fn handleConnection(stream: &mut TcpStream) {
    let bufReader = BufReader.new(stream)
    let requestLine = bufReader.lines().next().expect("Should have first line")
        .expect("Should read first line")

    let (status, filename) = if requestLine == "GET / HTTP/1.1" {
        ("200 OK", "hello.html")
    } else {
        ("404 NOT FOUND", "404.html")
    }

    let contents = readToString(filename).unwrapOrElse { _ -> "Error reading file".toString() }
    let length = contents.len()

    let response = "HTTP/1.1 \(status)\r\nContent-Length: \(length)\r\n\r\n\(contents)"
    stream.writeAll(response.asBytes()).expect("Failed to write response")
}

Key Changes

  1. Create a thread pool with 4 workers:

    let pool = ThreadPool.new(4)
    
  2. Execute jobs in the pool instead of spawning threads:

    pool.execute {
        handleConnection(&mut stream)
    }
    

The closure captures the stream and will be executed by one of the worker threads.

Testing the Multithreaded Server

Compile and run:

cargo run

Open multiple browser tabs to http://127.0.0.1:7878. The server can now handle them concurrently!

You can also test with concurrent curl requests:

curl http://127.0.0.1:7878 &
curl http://127.0.0.1:7878 &
curl http://127.0.0.1:7878 &
curl http://127.0.0.1:7878 &
wait

All four requests should complete without waiting for the previous one to finish.

How the Thread Pool Handles Concurrency

When you make multiple requests:

  1. The main thread accepts each connection
  2. Instead of spawning a new thread, it sends the connection to the thread pool
  3. An available worker thread takes the job from the queue
  4. The worker processes the request while the main thread can accept more connections
  5. With 4 workers, up to 4 requests are handled simultaneously
  6. Additional requests queue up and are processed when workers become free

This is much more efficient than creating a new thread per request!

Performance Improvement

With the thread pool approach:

  • Fixed resources - Always 4 threads running, not thousands
  • Better latency - No thread creation overhead per request
  • Predictable performance - System resources are bounded
  • Concurrent handling - Multiple requests processed simultaneously

Summary

We've implemented a thread pool that:

  • Maintains a fixed number of worker threads
  • Uses channels for job distribution
  • Provides graceful shutdown
  • Enables concurrent request handling

Next, we'll add a mechanism to gracefully stop the server.

Implementing Graceful Shutdown

Currently, the server runs forever. Pressing Ctrl+C terminates it abruptly. Let's implement a graceful shutdown that finishes in-flight requests before terminating.

The Problem

When you press Ctrl+C, the OS sends a signal that terminates the program immediately. Any workers processing requests are cut off mid-execution. In a real server, you want to:

  1. Stop accepting new connections
  2. Let existing requests complete
  3. Clean up resources
  4. Exit cleanly

Improving the Thread Pool

We need to improve our thread pool to handle partial shutdown. Let's modify src/lib.ox:

import std.sync.{mpsc, Arc, Mutex}
import std.thread

public struct ThreadPool {
    workers: Vec<Worker>,
    sender: mpsc.Sender<Message>,
}

struct Worker {
    id: UIntSize,
    thread: thread.JoinHandle<()>?,
}

enum Message {
    NewJob(Job),
    Terminate,
}

type Job = Box<dyn FnOnce() + Send + 'static>

extension ThreadPool {
    public static fn new(size: UIntSize): ThreadPool {
        assert!(size > 0, "Thread pool size must be greater than 0")

        let (sender, receiver) = mpsc.channel()
        let receiver = Arc.new(Mutex.new(receiver))

        var workers = Vec.new()

        for id in 0..size {
            workers.push(Worker.new(id, Arc.clone(&receiver)))
        }

        ThreadPool { workers, sender }
    }

    public fn execute<F>(f: F)
    where
        F: FnOnce() + Send + 'static,
    {
        let job = Box.new(f)
        let message = Message.NewJob(job)
        self.sender.send(message).expect("Failed to send job")
    }

    // Graceful shutdown: finish existing jobs then terminate
    public consuming fn shutdown() {
        println!("Sending terminate message to all workers.")

        for _ in &self.workers {
            self.sender.send(Message.Terminate).expect("Failed to send terminate")
        }

        println!("Shutting down all workers.")

        for worker in &mut self.workers {
            println!("Shutting down worker \(worker.id)")

            if let Some(thread) = worker.thread.take() {
                thread.join().expect("Failed to join worker thread")
            }
        }
    }
}

extension Worker {
    fn new(id: UIntSize, receiver: Arc<Mutex<mpsc.Receiver<Message>>>): Worker {
        let thread = thread.spawn(move {
            loop {
                let message = receiver
                    .lock()
                    .expect("Mutex poisoned")
                    .recv()
                    .expect("Failed to receive message")

                match message {
                    Message.NewJob(job) -> {
                        println!("Worker \(id) got a job; executing.")
                        job()
                    }
                    Message.Terminate -> {
                        println!("Worker \(id) was told to terminate.")
                        break
                    }
                }
            }
        })

        Worker { id, thread: Some(thread) }
    }
}

// Graceful shutdown via Drop trait
extension ThreadPool: Drop {
    mutating fn drop() {
        // Don't do anything - we want explicit shutdown via shutdown()
        // This prevents automatic shutdown on scope exit
    }
}

Controlling Server Shutdown

The key improvement is the explicit shutdown() method. Now we can control when the server stops accepting requests.

Update src/main.ox to handle a fixed number of requests before shutdown:

import webserver.ThreadPool

import std.io.{BufRead, BufReader, Write}
import std.net.{TcpListener, TcpStream}
import std.fs.readToString

fn main() {
    let listener = TcpListener.bind("127.0.0.1:7878").expect("Failed to bind to port 7878")
    let pool = ThreadPool.new(4)

    println!("Server listening on http://127.0.0.1:7878")
    println!("The server will accept 2 requests, then shut down gracefully.")

    for stream in listener.incoming().take(2) {
        var stream = stream.expect("Failed to accept connection")

        pool.execute {
            handleConnection(&mut stream)
        }
    }

    println!("Shutting down server.")
    pool.shutdown()
}

fn handleConnection(stream: &mut TcpStream) {
    let bufReader = BufReader.new(stream)
    let requestLine = bufReader.lines().next().expect("Should have first line")
        .expect("Should read first line")

    let (status, filename) = if requestLine == "GET / HTTP/1.1" {
        ("200 OK", "hello.html")
    } else {
        ("404 NOT FOUND", "404.html")
    }

    let contents = readToString(filename).unwrapOrElse { _ -> "Error reading file".toString() }
    let length = contents.len()

    let response = "HTTP/1.1 \(status)\r\nContent-Length: \(length)\r\n\r\n\(contents)"
    stream.writeAll(response.asBytes()).expect("Failed to write response")
}

Key Changes

  1. Limited incoming connections:

    for stream in listener.incoming().take(2) {
    

    The .take(2) method limits iteration to 2 items. After 2 requests, the loop exits.

  2. Explicit shutdown:

    pool.shutdown()
    

    Call the shutdown method to gracefully terminate all workers.
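The take adapter is ordinary iterator machinery, the same in Rust and Oxide. A quick Rust check of its behavior:

```rust
// `take(n)` stops any iterator, even an infinite one, after n items.
fn first_n(n: usize) -> Vec<u32> {
    (0..).take(n).collect()
}

fn main() {
    println!("{:?}", first_n(2));
}
```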

Advanced: Handling Signals (for Real Servers)

For a production server, you'd want to respond to signals like SIGTERM. Here's how you could handle that:

import std.sync.atomic.{AtomicBool, Ordering}
import std.sync.Arc

fn main() {
    let listener = TcpListener.bind("127.0.0.1:7878").expect("Failed to bind to port 7878")
    let pool = ThreadPool.new(4)
    let shouldRun = Arc.new(AtomicBool.new(true))

    // In a real app, you'd set shouldRun.store(false, Ordering.SeqCst)
    // when a signal handler is invoked

    println!("Server listening on http://127.0.0.1:7878")

    for stream in listener.incoming() {
        if !shouldRun.load(Ordering.SeqCst) {
            println!("Received shutdown signal, stopping acceptance of new connections")
            break
        }

        var stream = stream.expect("Failed to accept connection")

        pool.execute {
            handleConnection(&mut stream)
        }
    }

    println!("Shutting down server.")
    pool.shutdown()
}

Testing Graceful Shutdown

Compile and run with our limited-request version:

cargo run

In another terminal, make requests:

curl http://127.0.0.1:7878 &
curl http://127.0.0.1:7878 &

You'll see output like:

Server listening on http://127.0.0.1:7878
The server will accept 2 requests, then shut down gracefully.
Worker 0 got a job; executing.
Worker 1 got a job; executing.
Shutting down server.
Sending terminate message to all workers.
Shutting down all workers.
Shutting down worker 0
Shutting down worker 1
Shutting down worker 2
Shutting down worker 3

Both requests complete before the workers are terminated.

What Happens During Graceful Shutdown

  1. Stop accepting connections - The loop exits, no new work enters the queue
  2. Send terminate messages - One per worker thread
  3. Wait for workers - Each worker thread processes its current job, then sees the Terminate message and exits
  4. Join threads - The main thread waits for all worker threads to finish
  5. Exit cleanly - Once all workers are done, the program terminates
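Steps 2 through 4 can be modeled in plain Rust with a single worker. The channel delivers messages in order, so the Terminate message is only seen after every job sent before it:

```rust
use std::sync::mpsc;
use std::thread;

enum Message {
    NewJob(u32),
    Terminate,
}

// Feed a worker some jobs, then shut it down and collect its tally.
fn run_and_shutdown(jobs: &[u32]) -> u32 {
    let (sender, receiver) = mpsc::channel();
    let worker = thread::spawn(move || {
        let mut processed = 0;
        loop {
            match receiver.recv().unwrap() {
                Message::NewJob(n) => processed += n,
                Message::Terminate => break, // finish up, then exit
            }
        }
        processed
    });

    for &job in jobs {
        sender.send(Message::NewJob(job)).unwrap();
    }
    sender.send(Message::Terminate).unwrap(); // step 2: signal shutdown
    worker.join().unwrap() // step 4: wait for the worker to finish
}

fn main() {
    println!("processed = {}", run_and_shutdown(&[1, 2, 3]));
}
```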

Key Concepts

Arc (Atomic Reference Counting)

let receiver = Arc.new(Mutex.new(receiver))

Arc allows multiple threads to safely share ownership of the same data. When the last Arc clone is dropped, the data is deallocated.
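A small Rust demonstration: one Vec shared by several threads through Arc clones, with no copy of the underlying data:

```rust
use std::sync::Arc;
use std::thread;

// Share one Vec across `n` threads; each thread sums it without copying it.
fn shared_sum(n: usize) -> Vec<i32> {
    let data = Arc::new(vec![1, 2, 3]);
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let data = Arc::clone(&data); // bumps the refcount, no deep copy
            thread::spawn(move || data.iter().sum::<i32>())
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    println!("{:?}", shared_sum(3));
}
```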

Mutex (Mutual Exclusion)

receiver.lock().expect("Mutex poisoned")

Mutex ensures only one thread accesses the receiver at a time. Calling lock() blocks until the lock is available.
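The classic demonstration in Rust: many threads incrementing one counter. Without the Mutex the updates could race; with it, every increment lands:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Each thread adds 1 to a shared counter behind a Mutex.
fn locked_count(threads: usize) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // lock() blocks until no other thread holds the lock.
                *counter.lock().expect("Mutex poisoned") += 1;
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("count = {}", locked_count(10));
}
```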

Message Passing

self.sender.send(message)

The channel allows threads to communicate without sharing memory directly. This is Rust's (and Oxide's) philosophy: "do not communicate by sharing memory; instead, share memory by communicating."

Summary

We've implemented graceful shutdown by:

  • Adding an explicit shutdown() method to the thread pool
  • Finishing in-flight requests before terminating
  • Properly cleaning up all resources
  • Demonstrating how to limit the server to a fixed number of requests

The complete server now demonstrates:

  • TCP networking fundamentals
  • Thread pool design
  • Channel-based communication
  • Graceful shutdown patterns

This thread-pool pattern underlies many real servers, although production implementations add far more robust error handling.

Appendix

The appendices provide quick reference material for Oxide and its Rust foundations. These sections are designed to be skimmed, bookmarked, and revisited as needed.

You will find:

  • Keyword and operator references
  • A list of derivable traits
  • Tooling recommendations
  • Notes about editions and stability
  • Information about translations and nightly features

Appendix A: Keywords

The Oxide programming language uses a combination of Oxide-specific keywords and keywords shared with Rust. This appendix lists all keywords and provides information about their usage.

Oxide-Specific Keywords

These keywords are unique to Oxide or have different semantics from their Rust counterparts:

| Keyword | Category | Rust Equivalent | Description |
|---------|----------|-----------------|-------------|
| var | Binding | let mut | Mutable variable binding. Declares a variable that can be reassigned. |
| public | Visibility | pub | Public visibility modifier. Makes items accessible from outside the current module. |
| import | Module | use | Module imports. Brings items from other modules into scope. |
| external | Module | (with module) mod X; | External module declaration. Declares a module whose body is in a separate file. |
| module | Module | mod | Module definition. Defines a module either inline or as an external reference. |
| extension | Implementation | impl | Type extension/implementation. Adds methods to a type or implements a trait for a type. |
| guard | Control Flow | (no direct equivalent) | Early return guard. Ensures a condition holds or executes a diverging else block. |
| mutating | Method Modifier | &mut self | Mutable method modifier. Indicates the method borrows self mutably. |
| consuming | Method Modifier | self | Consuming method modifier. Indicates the method takes ownership of self. |
| static | Method Modifier | (no self) | Static method modifier. Indicates the method has no self parameter. |
| null | Literal | None | Null literal. Represents the absence of a value in nullable types (T?). |

Detailed Usage

var - Mutable Variable Binding

var counter = 0
counter += 1  // Allowed because counter is mutable

var items: Vec<String> = vec![]
items.push("hello")

See Chapter 3.1: Variables and Mutability for more details.

public - Public Visibility

public fn createUser(name: &str): User {
    User { name: name.toString() }
}

public struct Config {
    path: PathBuf,
    verbose: Bool,
}

import - Module Imports

import std.collections.HashMap
import std.fs
import anyhow.{ Result, Error }
import crate.engine.{ Action, scan }

external module - External Module Declaration

// Declares a module in a separate file (engine.ox or engine/mod.ox)
external module engine
external module config

module - Inline Module Definition

module tests {
    import super.*

    #[test]
    fn testSomething() {
        assert!(true)
    }
}

extension - Type Extension

extension Config {
    public fn validate(): Bool {
        self.path.exists()
    }
}

// Trait implementation
import std.fmt.{ Display, Formatter, Result }

extension Config: Display {
    fn fmt(f: &mut Formatter): Result {
        write!(f, "Config: \(self.path)")
    }
}

See Chapter 5.3: Method Syntax for more details.

guard - Early Return Guard

guard condition else {
    return  // Must diverge!
}

guard let user = findUser(id) else {
    return Err(anyhow!("User not found"))
}
// user is now available

See Chapter 3.5: Control Flow for more details.

mutating, consuming, static - Method Modifiers

extension Config {
    // Default: &self (immutable borrow)
    fn validate(): Bool { self.path.exists() }

    // mutating: &mut self
    mutating fn setPath(path: PathBuf) { self.path = path }

    // consuming: self (takes ownership)
    consuming fn destroy() { drop(self) }

    // static: no self parameter
    static fn load(): Config? { Self.fromFile("config.toml") }
}

null - Null Literal

let name: String? = null
let value: Int? = null

match optional {
    Some(x) -> process(x),
    null -> handleNull(),
}

See Chapter 6.3: Concise Control Flow with if let for more details.

Shared Keywords

These keywords are shared with Rust and have the same or very similar semantics:

| Keyword | Category | Description |
|---------|----------|-------------|
| let | Binding | Immutable variable binding |
| fn | Function | Function definition |
| struct | Type | Structure type definition |
| enum | Type | Enumeration type definition |
| trait | Type | Trait definition |
| type | Type | Type alias definition |
| const | Binding | Compile-time constant |
| async | Async | Asynchronous function modifier |
| await | Async | Await a future (prefix in Oxide, postfix in Rust) |
| if | Control Flow | Conditional expression |
| else | Control Flow | Alternative branch in if / guard |
| match | Control Flow | Pattern matching expression |
| for | Control Flow | For loop |
| while | Control Flow | While loop |
| loop | Control Flow | Infinite loop |
| break | Control Flow | Break out of a loop |
| continue | Control Flow | Continue to next loop iteration |
| return | Control Flow | Return from a function |
| self | Reference | Reference to the current instance |
| Self | Type | Type alias for the implementing type |
| super | Module | Parent module reference |
| crate | Module | Crate root reference |
| where | Generic | Generic type constraints |
| as | Conversion | Type casting |
| in | Control Flow | Used in for loops |
| unsafe | Safety | Unsafe code block or function |
| dyn | Type | Dynamic trait object |
| move | Closure | Move semantics for closures |
| ref | Pattern | Reference pattern binding |
| mut | Modifier | Mutable reference or pattern binding |
| true | Literal | Boolean true |
| false | Literal | Boolean false |
| impl | Implementation | Rust's implementation keyword (use extension in Oxide) |
| pub | Visibility | Rust's public visibility (use public in Oxide) |
| use | Module | Rust's import keyword (use import in Oxide) |
| mod | Module | Rust's module keyword (use module in Oxide) |
| extern | FFI | External function declaration |

Special Note: await

While await is shared with Rust, its position differs:

// Oxide: prefix await
let response = await client.get(url).send()?
let data = await response.json()?

// Rust: postfix .await
let response = client.get(url).send().await?;
let data = response.json().await?;

Oxide uses prefix await because it reads more naturally from left to right and matches the syntax of Swift, JavaScript, C#, and Python.

Special Note: Match Wildcards

In Oxide, _ is the wildcard pattern in match expressions (same as Rust):

match command {
    Command.Run -> executeRun(),
    Command.Build -> executeBuild(),
    _ -> showHelp(),  // Wildcard - equivalent to Rust's _
}

Reserved Keywords

The following keywords are reserved for potential future use:

abstract, become, box, do, final, macro, override, priv, try, typeof, unsized, virtual, yield

These keywords cannot be used as identifiers even though they do not currently have a defined meaning in Oxide.

Raw Identifiers

If you need to use a keyword as an identifier (for example, when interfacing with Rust code that uses reserved keywords as names), you can use the raw identifier syntax with backticks:

let `type` = "keyword"  // Uses 'type' as an identifier

This is the Oxide equivalent of Rust's r#type.
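The Rust side looks like this; the r#-prefixed form compiles exactly like a normal identifier:

```rust
fn main() {
    // r# lets the keyword `type` serve as a variable name.
    let r#type = "keyword";
    println!("{}", r#type);
}
```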

Quick Reference

Keyword Mapping: Oxide to Rust

| Oxide | Rust |
|-------|------|
| var x = ... | let mut x = ... |
| public fn | pub fn |
| import a.b | use a::b; |
| Type.method() | Type::method() |
| external module x | mod x; |
| module x { } | mod x { } |
| extension T { } | impl T { } |
| extension T: Trait { } | impl Trait for T { } |
| guard c else { } | if !c { } or let-else |
| mutating fn | fn(&mut self) |
| consuming fn | fn(self) |
| static fn | fn() (no self) |
| null | None |
| await expr | expr.await |
| match { _ -> } | match { _ => } |

IMPORTANT: Most Rust spellings in the right column are not valid in Oxide code. These are grammar changes, not style preferences. For example, :: does not exist in Oxide paths and => is not valid in match arms. Oxide uses . as its only path separator.

Exceptions: Oxide intentionally accepts a small Rust-syntax fallback surface for compatibility where no Oxide-specific form exists. This includes macro invocations like format!, Rust Option<T>/Some/None (idiomatic Oxide uses T? and null), impl Trait in return position, and try { }/const { }/async move { } blocks. Rust impl Trait for Type is not supported; use extension Type: Trait instead.

Categories at a Glance

Bindings: let, var, const

Functions: fn, async, return

Types: struct, enum, trait, type, Self

Methods: extension, mutating, consuming, static, self

Control Flow: if, else, match, for, while, loop, break, continue, guard

Modules: import, module, external, public, super, crate

Async: async, await

Safety: unsafe

Literals: true, false, null

Modifiers: mut, ref, move, dyn, where, as, in

Appendix B: Operators and Symbols

This appendix provides a quick reference for all operators in Oxide, including their precedence and Oxide-specific operators.

Operator Precedence Table

Operators are listed from highest precedence (evaluated first) to lowest (evaluated last).

Precedence     Operators                            Description                          Associativity
1 (highest)    .                                    Path/field access                    Left
2              () [] ? !!                           Call, index, try, force unwrap       Left
3              await ! - * & &mut                   Prefix operators                     Right
4              as                                   Type cast                            Left
5              * / %                                Multiplication, division, remainder  Left
6              + -                                  Addition, subtraction                Left
7              << >>                                Bit shifts                           Left
8              &                                    Bitwise AND                          Left
9              ^                                    Bitwise XOR                          Left
10             |                                    Bitwise OR                           Left
11             == != < > <= >=                      Comparisons                          Requires parentheses
12             &&                                   Logical AND                          Left
13             ||                                   Logical OR                           Left
14             ??                                   Null coalescing                      Left
15             .. ..=                               Range operators                      Requires parentheses
16             = += -= *= /= %= &= |= ^= <<= >>=    Assignment                           Right
17 (lowest)    return break continue                Control flow                         Right

Operators by Category

Arithmetic Operators

Operator    Description         Example
+           Addition            5 + 3
-           Subtraction         10 - 4
*           Multiplication      6 * 7
/           Division            20 / 4
%           Remainder           17 % 5
-           Negation (unary)    -42

Comparison Operators

Operator    Description              Example
==          Equal to                 x == 5
!=          Not equal to             x != y
<           Less than                x < y
>           Greater than             x > y
<=          Less than or equal       x <= 5
>=          Greater than or equal    x >= 10

Logical Operators

Operator    Description                    Example
&&          Logical AND (short-circuit)    a && b
||          Logical OR (short-circuit)     a || b
!           Logical NOT                    !a

Bitwise Operators

Operator    Description    Example
&           Bitwise AND    a & b
|           Bitwise OR     a | b
^           Bitwise XOR    a ^ b
<<          Left shift     a << 2
>>          Right shift    a >> 2

Reference Operators

Operator    Description                 Example
&           Create shared reference     &value
&mut        Create mutable reference    &mut value
*           Dereference                 *ptr

Assignment Operators

Operator              Equivalent
=                     Assign
+=                    x = x + y
-=                    x = x - y
*=                    x = x * y
/=                    x = x / y
%=                    x = x % y
&= |= ^= <<= >>=      Bitwise compound assignment

Range Operators

Operator    Description        Example
..          Exclusive range    0..5 (0 to 4)
..=         Inclusive range    1..=5 (1 to 5)

Oxide-Specific Operators

Null Coalescing (??)

Provides a default value when the left-hand side is null.

let name = userName ?? "Anonymous"
let config = Config.load() ?? Config.default()

CRITICAL: ?? works with T? ONLY, NOT Result<T, E>

// This will NOT compile:
let value: Result<Int, Error> = Err(someError)
let result = value ?? 0  // ERROR: ?? only works with T?

Why? Result<T, E> contains typed error information that should not be silently discarded. Using ?? would hide important error details.

For Result, use explicit methods:

let value = riskyOperation().unwrapOr(default)
let value = riskyOperation().unwrapOrElse { err -> handleError(err) }

Rust equivalent: .unwrap_or() or .unwrap_or_else()

Force Unwrap (!!)

Forcefully unwraps an optional value. Panics if null.

let user = findUser(id)!!  // Panics if null

Rust equivalent: .unwrap()

Warning: Use sparingly. Prefer if let, pattern matching, or ?? when possible.

Prefix Await

Oxide uses prefix await (unlike Rust's postfix .await).

let data = await fetchData()
let response = await client.get(url).send()?

Precedence: await binds tighter than ?, so await expr? means (await expr)?

Rust equivalent: expr.await

Try Operator (?)

Unchanged from Rust: ? propagates errors out of functions that return Result or Option.

fn readConfig(): Result<Config, Error> {
    let content = std.fs.readToString("config.toml")?
    Ok(parseConfig(content)?)
}

Symbols and Delimiters

Symbol    Usage
{ }       Blocks, closures, struct/enum bodies
( )       Grouping, function parameters, tuples
[ ]       Array literals, indexing
< >       Generic type parameters
#[ ]      Attributes
.         Field access, method call, path separator
:         Type annotation, return type
->        Closure params, match arms
,         List separator
;         Statement terminator (optional)

Summary: Differences from Rust

Oxide         Rust            Description
??            .unwrap_or()    Null coalescing (Option only)
!!            .unwrap()       Force unwrap
await expr    expr.await      Prefix await
.             ::              Path separator
:             ->              Return type annotation
->            =>              Match arm, closure params

IMPORTANT: The Rust spellings above (::, -> for return types, => for match arms) are not valid Oxide syntax. These are grammar changes, not style preferences. Using :: in Oxide code is a syntax error; Oxide uses . as its only path separator.

Appendix C: Derivable Traits

Oxide uses the same derive system as Rust. You can automatically implement common traits with #[derive(...)].

Here are the most commonly derivable traits:

  • Debug - Enables formatting with {:?}
  • Clone - Allows explicit cloning
  • Copy - Allows bitwise copies for simple types
  • Eq / PartialEq - Equality comparisons
  • Ord / PartialOrd - Ordering comparisons
  • Hash - Enables hashing, useful for HashMap keys
  • Default - Provides a default value

Example

#[derive(Debug, Clone, PartialEq, Eq)]
public struct Point {
    public x: Int,
    public y: Int,
}

Not all traits are derivable for all types. For example, Copy requires that all fields are also Copy.

Appendix D: Useful Development Tools

Oxide is built on the Rust toolchain, so many Rust tools apply directly. These tools help you format, lint, test, and document your projects.

Formatting

  • cargo fmt formats your code using rustfmt rules.

Linting

  • cargo clippy runs Clippy lints to catch common mistakes and style issues.

Documentation

  • cargo doc --open generates API documentation.

Testing

  • cargo test runs your unit and integration tests.

Editor Support

  • rust-analyzer powers IDE features like autocomplete and inline errors.

Other Useful Tools

  • cargo audit checks dependencies for known vulnerabilities.
  • cargo expand shows macro expansion output.
  • miri helps detect undefined behavior in unsafe code.

Tool availability depends on your installed toolchain and system setup.

Appendix E: Editions

Rust editions are opt-in language revisions that allow the language to evolve without breaking existing code. Oxide tracks Rust editions because it compiles through the Rust compiler.

You set the edition in Cargo.toml:

[package]
edition = "2021"

What Editions Change

An edition can introduce new keywords, lints, or language rules. Code written for an older edition continues to compile as long as that edition is selected.

Choosing an Edition

For new projects, choose the most recent stable edition supported by your toolchain. For existing projects, keep the edition consistent across crates to avoid unnecessary friction.

Appendix F: Translations of the Book

We welcome translations of the Oxide Book. The best way to contribute is to coordinate with the core documentation team so translations stay current with the specification and the English source.

Guidelines

  • Keep code examples in Oxide syntax.
  • Preserve meaning and tone rather than translating word-for-word.
  • Note any terms that are intentionally kept in English.
  • Submit updates when the spec changes.

If you are interested in translating, open a documentation issue or contact the maintainers to coordinate the effort.

Appendix G: How Rust Is Made and "Nightly Rust"

Rust development happens in public, with new features landing in nightly builds before they stabilize. Oxide tracks Rust's stability model, so nightly features remain unstable until they are stabilized in Rust.

Stability Channels

  • Stable: The default release channel, intended for production use.
  • Beta: A preview of the next stable release.
  • Nightly: The cutting edge, where unstable features live.

Using Unstable Features

Unstable features require a nightly toolchain and a feature gate:

#![feature(some_unstable_feature)]

fn main() {
    // code that uses the unstable feature
}

Because Oxide compiles through Rust, the same stability rules apply. Unless you specifically need an unstable feature, prefer stable releases for production code.