
Developing AI Applications with Effect

Integrating with large language models (LLMs) has become essential for developing modern applications. Whether you’re generating content, analyzing data, or building conversational interfaces, adding AI-powered features to your application has the potential to enhance your product’s capabilities and improve user experience.

However, successfully integrating LLM-powered interactions into an application can be quite challenging. Developers must navigate a complex landscape of potential failures: network errors, provider outages, rate limits, and more, all while keeping the underlying application stable and responsive for the end user. In addition, the differences between LLM provider APIs can force developers to write brittle “glue code” which can become a significant source of technical debt.

Today, we are going to discuss Effect’s AI integration packages — a set of libraries designed to make working with LLMs simple, flexible, and provider-agnostic.

Effect’s AI packages provide simple, composable building blocks for modeling LLM interactions in a safe, declarative manner. With Effect’s AI integrations, you can:

🔌 Write Provider-Agnostic Business Logic

Define your LLM interactions once and plug in the specific provider you need later. Switch between any supported provider without changing your business logic.

🧪 Test LLM Interactions

Test your LLM interactions by providing mock service implementations during testing, ensuring your AI-dependent business logic behaves the way you expect.

🧵 Utilize Structured Concurrency

Run concurrent LLM calls, cancel stale requests, stream partial results, or race multiple providers — all safely managed by Effect’s structured concurrency model.

🔍 Gain Deep Observability

Instrument your LLM interactions with Effect’s built-in tracing, logging, and metrics to identify performance bottlenecks or failures in production.

Effect’s AI ecosystem consists of several focused packages, each with a specific purpose:

  • @effect/ai: The core package that defines provider-agnostic services and abstractions for interacting with LLMs

  • @effect/ai-openai: Concrete implementations of AI services backed by the OpenAI API

  • @effect/ai-anthropic: Concrete implementations of AI services backed by the Anthropic API

This architecture allows you to describe your LLM interactions with provider-agnostic services, and then provide a concrete implementation once you are ready to run your program.
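Assuming an npm-based setup (any package manager works), the packages above can be installed along with the platform package used later for the HTTP client:

```shell
npm install effect @effect/ai @effect/ai-openai @effect/platform-node
```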

The central philosophy behind Effect’s AI integrations is provider-agnostic programming.

Instead of hardcoding calls to a specific LLM provider’s API, you describe your interaction using generic services provided by the base @effect/ai package.

Let’s look at a simple example to understand this concept better:

```ts
import { Completions } from "@effect/ai"
import { Effect } from "effect"

// Define a provider-agnostic AI interaction
const generateDadJoke = Effect.gen(function*() {
  // Get the Completions service from the Effect environment
  const completions = yield* Completions.Completions
  // Use the service to generate text
  const response = yield* completions.create("Generate a dad joke")
  // Return the response
  return response
})
```

Notice that this code doesn’t specify which LLM provider to use - it simply describes what we want to do (generate a dad joke), not how or where to do it.

This separation of concerns is at the heart of Effect’s approach to LLM interactions.

To bridge the gap between provider-agnostic business logic and concrete LLM providers, Effect introduces the AiModel abstraction.

An AiModel represents a specific LLM from a provider that can be used to satisfy service requirements, such as Completions or Embeddings.

Here is an example of how you can create and use an AiModel designed to satisfy the Completions service using OpenAI:

```ts
import { OpenAiCompletions } from "@effect/ai-openai"
import { Completions } from "@effect/ai"
import { Effect } from "effect"

// Define a provider-agnostic AI interaction
const generateDadJoke = Effect.gen(function*() {
  const completions = yield* Completions.Completions
  const response = yield* completions.create("Generate a dad joke")
  return response
})

// Create an AiModel for OpenAI's GPT-4o
const Gpt4o = OpenAiCompletions.model("gpt-4o")

// Use the model to provide the Completions service to our program
const main = Effect.gen(function*() {
  // Build the AiModel into a Provider
  const gpt4o = yield* Gpt4o
  // Provide the implementation to our generateDadJoke program
  const response = yield* gpt4o.provide(generateDadJoke)
  console.log(response.text)
})
```

This approach offers several key benefits:

  1. Reusability: You can reuse the same model for multiple operations
  2. Flexibility: Easily switch between providers or models based on your needs
  3. Abstraction: Extract your AI logic into services that hide implementation details

Now let’s walk through a complete example of setting up an LLM interaction with Effect:

```ts
import { OpenAiClient, OpenAiCompletions } from "@effect/ai-openai"
import { Completions } from "@effect/ai"
import { NodeHttpClient } from "@effect/platform-node"
import { Config, Effect, Layer } from "effect"

// 1. Define our provider-agnostic AI interaction
const generateDadJoke = Effect.gen(function*() {
  const completions = yield* Completions.Completions
  const response = yield* completions.create("Generate a dad joke")
  return response
})

// 2. Create an AiModel for a specific provider and model
const Gpt4o = OpenAiCompletions.model("gpt-4o")

// 3. Create a program that uses the model
const main = Effect.gen(function*() {
  const gpt4o = yield* Gpt4o
  const response = yield* gpt4o.provide(generateDadJoke)
  console.log(response.text)
})

// 4. Create a Layer that provides the OpenAI client
const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY")
})

// 5. Provide an HTTP client implementation
const OpenAiWithHttp = Layer.provide(OpenAi, NodeHttpClient.layerUndici)

// 6. Run the program with the provided dependencies
main.pipe(
  Effect.provide(OpenAiWithHttp),
  Effect.runPromise
)
```

One of Effect’s greatest strengths is its robust error handling, which is particularly valuable for LLM interactions where failure scenarios can be complex and varied. With Effect, these errors are typed and can be handled explicitly.

For example, if our generateDadJoke program were rewritten to possibly fail with a RateLimitError or an InvalidInputError, we could write logic to handle those errors:

```ts
import { AiResponse, AiRole } from "@effect/ai"
import { Completions } from "@effect/ai"
import { Data, Effect } from "effect"

class RateLimitError extends Data.TaggedError("RateLimitError") {}
class InvalidInputError extends Data.TaggedError("InvalidInputError") {}

declare const generateDadJoke: Effect.Effect<
  AiResponse.AiResponse,
  RateLimitError | InvalidInputError,
  Completions.Completions
>

const withErrorHandling = generateDadJoke.pipe(
  Effect.catchTags({
    RateLimitError: (error) =>
      // Log the failure at the ERROR level, then retry the interaction
      Effect.logError("Rate limited", error).pipe(
        Effect.andThen(generateDadJoke)
      ),
    InvalidInputError: (error) =>
      Effect.logError("Invalid input", error).pipe(
        Effect.andThen(generateDadJoke)
      )
  })
)
```

This function logs messages at the ERROR level, suitable for reporting application errors or failures. These logs are typically used for unexpected issues that need immediate attention.

@since2.0.0

logError
("Rate limited, retrying in a moment").
Pipeable.pipe<Effect.Effect<void, never, never>, Effect.Effect<void, never, never>, Effect.Effect<AiResponse.AiResponse, RateLimitError | InvalidInputError, Completions.Completions>>(this: Effect.Effect<...>, ab: (_: Effect.Effect<...>) => Effect.Effect<...>, bc: (_: Effect.Effect<...>) => Effect.Effect<...>): Effect.Effect<...> (+21 overloads)
pipe
(
import Effect

@since2.0.0

@since2.0.0

@since2.0.0

Effect
.
const delay: (duration: DurationInput) => <A, E, R>(self: Effect.Effect<A, E, R>) => Effect.Effect<A, E, R> (+1 overload)

Delays the execution of an effect by a specified Duration.

**Details

This function postpones the execution of the provided effect by the specified duration. The duration can be provided in various formats supported by the Duration module.

Internally, this function does not block the thread; instead, it uses an efficient, non-blocking mechanism to introduce the delay.

Example

import { Console, Effect } from "effect"
const task = Console.log("Task executed")
const program = Console.log("start").pipe(
Effect.andThen(
// Delays the log message by 2 seconds
task.pipe(Effect.delay("2 seconds"))
)
)
Effect.runFork(program)
// Output:
// start
// Task executed

@since2.0.0

delay
("1 seconds"),
import Effect

@since2.0.0

@since2.0.0

@since2.0.0

Effect
.
const andThen: <Effect.Effect<AiResponse.AiResponse, RateLimitError | InvalidInputError, Completions.Completions>>(f: Effect.Effect<...>) => <A, E, R>(self: Effect.Effect<...>) => Effect.Effect<...> (+3 overloads)

Chains two actions, where the second action can depend on the result of the first.

Syntax

const transformedEffect = pipe(myEffect, Effect.andThen(anotherEffect))
// or
const transformedEffect = Effect.andThen(myEffect, anotherEffect)
// or
const transformedEffect = myEffect.pipe(Effect.andThen(anotherEffect))

When to Use

Use andThen when you need to run multiple actions in sequence, with the second action depending on the result of the first. This is useful for combining effects or handling computations that must happen in order.

Details

The second action can be:

  • A constant value (similar to

as

)

  • A function returning a value (similar to

map

)

  • A Promise
  • A function returning a Promise
  • An Effect
  • A function returning an Effect (similar to

flatMap

)

Note: andThen works well with both Option and Either types, treating them as effects.

Example (Applying a Discount Based on Fetched Amount)

import { pipe, Effect } from "effect"
// Function to apply a discount safely to a transaction amount
const applyDiscount = (
total: number,
discountRate: number
): Effect.Effect<number, Error> =>
discountRate === 0
? Effect.fail(new Error("Discount rate cannot be zero"))
: Effect.succeed(total - (total * discountRate) / 100)
// Simulated asynchronous task to fetch a transaction amount from database
const fetchTransactionAmount = Effect.promise(() => Promise.resolve(100))
// Using Effect.map and Effect.flatMap
const result1 = pipe(
fetchTransactionAmount,
Effect.map((amount) => amount * 2),
Effect.flatMap((amount) => applyDiscount(amount, 5))
)
Effect.runPromise(result1).then(console.log)
// Output: 190
// Using Effect.andThen
const result2 = pipe(
fetchTransactionAmount,
Effect.andThen((amount) => amount * 2),
Effect.andThen((amount) => applyDiscount(amount, 5))
)
Effect.runPromise(result2).then(console.log)
// Output: 190

@since2.0.0

andThen
(
const generateDadJoke: Effect.Effect<AiResponse.AiResponse, RateLimitError | InvalidInputError, Completions.Completions>
generateDadJoke
)
),
type InvalidInputError: (error: InvalidInputError) => Effect.Effect<AiResponse.AiResponse, never, never>
InvalidInputError
: (
error: InvalidInputError
error
) =>
import Effect

@since2.0.0

@since2.0.0

@since2.0.0

Effect
.
const succeed: <AiResponse.AiResponse>(value: AiResponse.AiResponse) => Effect.Effect<AiResponse.AiResponse, never, never>

Creates an Effect that always succeeds with a given value.

When to Use

Use this function when you need an effect that completes successfully with a specific value without any errors or external dependencies.

Example (Creating a Successful Effect)

import { Effect } from "effect"
// Creating an effect that represents a successful scenario
//
// ┌─── Effect<number, never, never>
// ▼
const success = Effect.succeed(42)

@seefail to create an effect that represents a failure.

@since2.0.0

succeed
(
import AiResponse
AiResponse
.
class AiResponse

@since1.0.0

AiResponse
.
AiResponse.fromText(options: {
role: AiRole.AiRole;
content: string;
}): AiResponse.AiResponse

@since1.0.0

fromText
({
role: AiRole.AiRole
role
:
import AiRole
AiRole
.
const model: AiRole.AiRole

@since1.0.0

model
,
content: string
content
: "I couldn't generate a joke right now."
}))
})
)

For more complex scenarios where you need reliability across multiple providers, Effect offers the powerful AiPlan abstraction.

AiPlan lets you create a structured execution plan for your LLM interactions with built-in retry logic, fallback strategies, and error handling:

import { AiPlan, Completions } from "@effect/ai"
import { OpenAiCompletions } from "@effect/ai-openai"
import { AnthropicCompletions } from "@effect/ai-anthropic"
import { Data, Effect, Schedule } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const completions = yield* Completions.Completions
  const response = yield* completions.create("Generate a dad joke")
  return response
})

// Define domain-specific error types
class NetworkError extends Data.TaggedError("NetworkError") {}
class ProviderOutage extends Data.TaggedError("ProviderOutage") {}

// Build a resilient plan that:
// - Attempts to use OpenAI's `"gpt-4o"` model up to 3 times
// - Waits with an exponential backoff between attempts
// - Only re-attempts the call to OpenAI if the error is a `NetworkError`
// - Falls back to using Anthropic otherwise
const DadJokePlan = AiPlan.fromModel(OpenAiCompletions.model("gpt-4o"), {
  attempts: 3,
  schedule: Schedule.exponential("100 millis"),
  while: (error: NetworkError | ProviderOutage) =>
    error._tag === "NetworkError"
}).pipe(
  AiPlan.withFallback({
    model: AnthropicCompletions.model("claude-3-7-sonnet-latest")
  })
)

// Use the plan just like an AiModel
const main = Effect.gen(function*() {
  const plan = yield* DadJokePlan
  const response = yield* plan.provide(generateDadJoke)
})

With AiPlan, you can:

  • Create sophisticated retry policies with configurable backoff strategies
  • Define fallback chains across multiple providers
  • Specify which error types should trigger retries vs. fallbacks

This is particularly valuable for production systems where reliability is critical, as it allows you to leverage multiple LLM providers as fallbacks for one another, all while keeping your business logic provider-agnostic.

Effect’s structured concurrency model also makes it easy to manage concurrent LLM interactions:

import { Completions } from "@effect/ai"
import { Effect } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const completions = yield* Completions.Completions
  const response = yield* completions.create("Generate a dad joke")
  return response
})

// Generate multiple jokes concurrently
const concurrentDadJokes = Effect.all(
  [generateDadJoke, generateDadJoke, generateDadJoke],
  { concurrency: 2 } // Limit to 2 concurrent requests
)

Effect’s AI integrations support streaming responses via Effect’s Stream type:

import { Completions } from "@effect/ai"
import { Effect, Stream } from "effect"

const streamingJoke = Effect.gen(function*() {
  const completions = yield* Completions.Completions
  // Create a streaming response
  const stream = completions.stream("Tell me a long dad joke")
  // Process each chunk as it arrives
  return yield* stream.pipe(
    Stream.runForEach((chunk) =>
      Effect.sync(() => {
        process.stdout.write(chunk.text)
      })
    )
  )
})

Whether you’re building an intelligent agent, an interactive chat application, or a system that leverages LLMs for background tasks, Effect’s AI packages provide the tools you need. Our provider-agnostic approach ensures your code remains adaptable as the AI landscape continues to evolve.

Ready to try out Effect for your next AI application? Take a look at our Getting Started guide.

The Effect AI integration packages are currently in the experimental/alpha stage, but we encourage you to give them a try and provide feedback to help us improve and expand their capabilities.

We’re excited to see what you build! Check out the full documentation to dive deeper, and join our community to share your experiences and get help along the way.