Execution Planning

Imagine that we’ve refactored our generateDadJoke program from our Getting Started guide. Now, instead of handling all errors internally, the code can fail with domain-specific issues like network interruptions or provider outages:

```ts
import type { AiLanguageModel, AiResponse } from "@effect/ai"
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { Data, Effect } from "effect"

class NetworkError extends Data.TaggedError("NetworkError") {}

class ProviderOutage extends Data.TaggedError("ProviderOutage") {}

declare const generateDadJoke: Effect.Effect<
  AiResponse.AiResponse,
  NetworkError | ProviderOutage,
  AiLanguageModel.AiLanguageModel
>

const main = Effect.gen(function*() {
  const response = yield* generateDadJoke
  console.log(response.text)
}).pipe(Effect.provide(OpenAiLanguageModel.model("gpt-4o")))
```

This is fine, but what if we want to:

  • Retry the program a fixed number of times on NetworkErrors
  • Add some backoff delay between retries
  • Fallback to a different model provider if OpenAi is down

How can we accomplish such logic?

The ExecutionPlan module from Effect provides a robust method for creating structured execution plans for your Effect programs. Rather than making a single model call and hoping that it succeeds, you can use ExecutionPlan to describe how to handle errors, retries, and fallbacks in a clear, declarative way.

This is especially useful when:

  • You want to fall back to a secondary model if the primary one is unavailable
  • You want to retry on transient errors (e.g. network failures)
  • You want to control timing between retry attempts

To create an ExecutionPlan, we can use the ExecutionPlan.make constructor.

Example (Creating an ExecutionPlan for LLM Interactions)

```ts
import type { AiLanguageModel, AiResponse } from "@effect/ai"
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { Data, Effect, ExecutionPlan, Schedule } from "effect"

class NetworkError extends Data.TaggedError("NetworkError") {}

class ProviderOutage extends Data.TaggedError("ProviderOutage") {}

declare const generateDadJoke: Effect.Effect<
  AiResponse.AiResponse,
  NetworkError | ProviderOutage,
  AiLanguageModel.AiLanguageModel
>

const DadJokePlan = ExecutionPlan.make({
  provide: OpenAiLanguageModel.model("gpt-4o"),
  attempts: 3,
  schedule: Schedule.exponential("100 millis", 1.5),
  while: (error: NetworkError | ProviderOutage) =>
    error._tag === "NetworkError"
})

// ┌─── Effect<void, NetworkError | ProviderOutage, OpenAiClient>
// ▼
const main = Effect.gen(function*() {
  const response = yield* generateDadJoke
  console.log(response.text)
}).pipe(Effect.withExecutionPlan(DadJokePlan))
```

This plan contains a single step, which will:

  • Provide OpenAi’s "gpt-4o" model as an AiLanguageModel for the program
  • Attempt to call OpenAi up to 3 times
  • Wait with an exponential backoff between attempts (starting at 100ms)
  • Only re-attempt the call to OpenAi if the error is a NetworkError
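To make the timing concrete: with `attempts: 3` there are at most two delays between the three calls, and `Schedule.exponential` computes each delay as `base * factor^n`. A quick sanity check of those delays:

```typescript
// Delays between attempts for Schedule.exponential("100 millis", 1.5):
// delay(n) = base * factor^n, starting at n = 0
const base = 100
const factor = 1.5
const delays = [0, 1].map((n) => base * Math.pow(factor, n))
console.log(delays) // → [ 100, 150 ]
```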

To make your interactions with large language models resilient to provider outages, you can define fallback models to use. This allows the plan to automatically fall back to another model when the previous step in the execution plan fails.

Use this when:

  • You want to make your model interactions resilient to provider outages
  • You want to potentially have multiple fallback models

Example (Adding a Fallback to Anthropic from OpenAi)

```ts
import type { AiLanguageModel, AiResponse } from "@effect/ai"
import { AnthropicLanguageModel } from "@effect/ai-anthropic"
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { Data, Effect, ExecutionPlan, Schedule } from "effect"

class NetworkError extends Data.TaggedError("NetworkError") {}

class ProviderOutage extends Data.TaggedError("ProviderOutage") {}

declare const generateDadJoke: Effect.Effect<
  AiResponse.AiResponse,
  NetworkError | ProviderOutage,
  AiLanguageModel.AiLanguageModel
>

const DadJokePlan = ExecutionPlan.make({
  provide: OpenAiLanguageModel.model("gpt-4o"),
  attempts: 3,
  schedule: Schedule.exponential("100 millis", 1.5),
  while: (error: NetworkError | ProviderOutage) =>
    error._tag === "NetworkError"
}, {
  provide: AnthropicLanguageModel.model("claude-3-7-sonnet-latest"),
  attempts: 2,
  schedule: Schedule.exponential("100 millis", 1.5),
  while: (error: NetworkError | ProviderOutage) =>
    error._tag === "ProviderOutage"
})

// ┌─── Effect<..., ..., AnthropicClient | OpenAiClient>
// ▼
const main = Effect.gen(function*() {
  const response = yield* generateDadJoke
  console.log(response.text)
}).pipe(Effect.withExecutionPlan(DadJokePlan))
```
const name = 'Will Robinson';
myConsole.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to err

@seesource

console
.
Console.log(message?: any, ...optionalParams: any[]): void

Prints to stdout with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to printf(3) (the arguments are all passed to util.format()).

const count = 5;
console.log('count: %d', count);
// Prints: count: 5, to stdout
console.log('count:', count);
// Prints: count: 5, to stdout

See util.format() for more information.

@sincev0.1.100

log
(
const response: AiResponse.AiResponse
response
.
AiResponse.text: string

Returns the generated text content of the response.

text
)
}).
Pipeable.pipe<Effect.Effect<void, NetworkError | ProviderOutage, AiLanguageModel.AiLanguageModel>, Effect.Effect<void, NetworkError | ProviderOutage, OpenAiClient | AnthropicClient>>(this: Effect.Effect<...>, ab: (_: Effect.Effect<...>) => Effect.Effect<...>): Effect.Effect<...> (+21 overloads)
pipe
(
import Effect

@since2.0.0

@since2.0.0

@since2.0.0

Effect
.
const withExecutionPlan: <NetworkError | ProviderOutage, AiLanguageModel.AiLanguageModel, never, OpenAiClient | AnthropicClient>(plan: ExecutionPlan.ExecutionPlan<...>) => <A, E, R>(effect: Effect.Effect<...>) => Effect.Effect<...> (+1 overload)

Apply an ExecutionPlan to the effect, which allows you to fallback to different resources in case of failure.

@since3.16.0

withExecutionPlan
(
const DadJokePlan: ExecutionPlan.ExecutionPlan<{
provides: AiLanguageModel.AiLanguageModel;
input: NetworkError | ProviderOutage;
error: never;
requirements: OpenAiClient | AnthropicClient;
}>
DadJokePlan
))

This plan contains two steps.

Step 1

The first step will:

  • Provide OpenAi’s "gpt-4o" model as an AiLanguageModel for the program
  • Attempt to call OpenAi up to 3 times
  • Wait with an exponential backoff between attempts (starting at 100ms)
  • Only attempt the call to OpenAi if the error is a NetworkError

If all of the above logic fails to run the program successfully, the plan will try to run the program using the second step.

Step 2

The second step will:

  • Provide Anthropic’s "claude-3-7-sonnet-latest" model as an AiLanguageModel for the program
  • Attempt to call Anthropic up to 2 times
  • Wait with an exponential backoff between attempts (starting at 100ms)
  • Only attempt the fallback if the error is a ProviderOutage

The following is the complete program with the desired ExecutionPlan fully implemented:

import type { AiLanguageModel, AiResponse } from "@effect/ai"
import { AnthropicClient, AnthropicLanguageModel } from "@effect/ai-anthropic"
import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai"
import { NodeHttpClient } from "@effect/platform-node"
import {
  Config,
  Data,
  Effect,
  ExecutionPlan,
  Layer,
  Schedule
} from "effect"

class NetworkError extends Data.TaggedError("NetworkError") {}

class ProviderOutage extends Data.TaggedError("ProviderOutage") {}

declare const generateDadJoke: Effect.Effect<
  AiResponse.AiResponse,
  NetworkError | ProviderOutage,
  AiLanguageModel.AiLanguageModel
>

const DadJokePlan = ExecutionPlan.make({
  provide: OpenAiLanguageModel.model("gpt-4o"),
  attempts: 3,
  schedule: Schedule.exponential("100 millis", 1.5),
  while: (error: NetworkError | ProviderOutage) =>
    error._tag === "NetworkError"
}, {
  provide: AnthropicLanguageModel.model("claude-3-7-sonnet-latest"),
  attempts: 2,
  schedule: Schedule.exponential("100 millis", 1.5),
  while: (error: NetworkError | ProviderOutage) =>
    error._tag === "ProviderOutage"
})

const main = Effect.gen(function*() {
  const response = yield* generateDadJoke
  console.log(response.text)
}).pipe(Effect.withExecutionPlan(DadJokePlan))

const Anthropic = AnthropicClient.layerConfig({
  apiKey: Config.redacted("ANTHROPIC_API_KEY")
}).pipe(Layer.provide(NodeHttpClient.layerUndici))

const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY")
}).pipe(Layer.provide(NodeHttpClient.layerUndici))

main.pipe(
  Effect.provide([Anthropic, OpenAi]),
  Effect.runPromise
)