# Getting Started
import { Aside, Steps, Tabs, TabItem } from "@astrojs/starlight/components"
In this getting started guide, we will demonstrate how to generate a simple text completion from an LLM provider (OpenAI) using the Effect AI integration packages.
We’ll walk through:
- Writing provider-agnostic logic to interact with an LLM
- Declaring the specific LLM model to use for the interaction
- Using a provider integration to make the program executable
## Installation
First, we will need to install the base `@effect/ai` package to gain access to the core AI abstractions. In addition, we will need to install at least one provider integration package (in this case `@effect/ai-openai`):
<Tabs syncKey="package-manager">
<TabItem label="npm" icon="seti:npm">
```sh showLineNumbers=false
# Install the base package for the core abstractions (always required)
npm install @effect/ai
# Install one (or more) provider integrations
npm install @effect/ai-openai
# Also add the core Effect package (if not already installed)
npm install effect
```
</TabItem>
<TabItem label="pnpm" icon="pnpm">
```sh showLineNumbers=false
# Install the base package for the core abstractions (always required)
pnpm add @effect/ai
# Install one (or more) provider integrations
pnpm add @effect/ai-openai
# Also add the core Effect package (if not already installed)
pnpm add effect
```
</TabItem>
<TabItem label="Yarn" icon="seti:yarn">
```sh showLineNumbers=false
# Install the base package for the core abstractions (always required)
yarn add @effect/ai
# Install one (or more) provider integrations
yarn add @effect/ai-openai
# Also add the core Effect package (if not already installed)
yarn add effect
```
</TabItem>
<TabItem label="Bun" icon="bun">
```sh showLineNumbers=false
# Install the base package for the core abstractions (always required)
bun add @effect/ai
# Install one (or more) provider integrations
bun add @effect/ai-openai
# Also add the core Effect package (if not already installed)
bun add effect
```
</TabItem>
</Tabs>
## Define an Interaction with a Language Model
Let's start by defining a simple interaction with a large language model (LLM):
**Example** (Using the `LanguageModel` Service to Generate a Dad Joke)
```ts twoslash
import { LanguageModel } from "@effect/ai"
import { Effect } from "effect"

// Using `LanguageModel` will add it to your program's requirements
//
//      ┌─── Effect<GenerateTextResponse<{}>, AiError, LanguageModel>
//      ▼
const generateDadJoke = Effect.gen(function*() {
  // Use the `LanguageModel` to generate some text
  const response = yield* LanguageModel.generateText({
    prompt: "Generate a dad joke"
  })
  // Log the generated text to the console
  console.log(response.text)
  // Return the response
  return response
})
```
<Aside type="note" title="Declarative LLM Interactions">
Notice that the above code does not know or care which LLM provider (OpenAI, Anthropic, etc.) will be used. Instead, we focus on _what_ we want to accomplish (i.e. our business logic), not _how_ to accomplish it.
</Aside>
## Select a Provider
Next, we need to select which model provider we want to use:
**Example** (Using a Model Provider to Satisfy the `LanguageModel` Requirement)
```ts twoslash collapse={5-11}
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { Effect } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "Generate a dad joke"
  })
  console.log(response.text)
  return response
})

// Create a `Model` which provides a concrete implementation of
// `LanguageModel` and requires an `OpenAiClient`
//
//      ┌─── Model<"openai", LanguageModel | ProviderName, OpenAiClient>
//      ▼
const Gpt4o = OpenAiLanguageModel.model("gpt-4o")

// Provide the `Model` to the program
//
//      ┌─── Effect<GenerateTextResponse<{}>, AiError, OpenAiClient>
//      ▼
const main = generateDadJoke.pipe(
  Effect.provide(Gpt4o)
)
```
Before moving on, it is important that we understand the purpose of the `Model` data type.
## Understanding `Model`
The `Model` data type represents a **provider-specific implementation** of one or more services, such as `LanguageModel` or `EmbeddingsModel`. It is the primary way that you can plug a real large language model into your program.
```ts showLineNumbers=false
export interface Model<ProviderName, Provides, Requires> {}
```
A `Model` has three generic type parameters:
- **ProviderName** - the name of the large language model provider that will be used
- **Provides** - the services this model will provide when built
- **Requires** - the services this model will require to be built
This allows Effect to track which services the `Model` requires as well as which services the `Model` will provide.
### Creating a `Model`
To create a `Model`, you can use the model-specific factory from one of Effect's provider integration packages.
**Example** (Defining a `Model` to Interact with OpenAI)
```ts showLineNumbers=false
import { OpenAiLanguageModel } from "@effect/ai-openai"

//      ┌─── Model<"openai", LanguageModel | ProviderName, OpenAiClient>
//      ▼
const Gpt4o = OpenAiLanguageModel.model("gpt-4o")
```
This creates a `Model` that:
- **Provides** the `ProviderName` service, which allows introspection of the current provider in use by the program
- **Provides** an OpenAI-specific implementation of the `LanguageModel` service using `"gpt-4o"`
- **Requires** an `OpenAiClient` to be built
### Providing a `Model`
Once you've created a `Model`, you can directly `Effect.provide` it to your Effect programs just like any other service:
```ts
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { Effect } from "effect"

//      ┌─── Model<"openai", LanguageModel | ProviderName, OpenAiClient>
//      ▼
const Gpt4o = OpenAiLanguageModel.model("gpt-4o")

//      ┌─── Effect<GenerateTextResponse<{}>, AiError, OpenAiClient>
//      ▼
const program = LanguageModel.generateText({
  prompt: "Generate a dad joke"
}).pipe(Effect.provide(Gpt4o))
```
### Benefits of `Model`
There are several benefits to this approach:
**Reusability**
You can provide the same `Model` to as many programs as you like.
**Example** (Providing a `Model` to Multiple Programs)
```ts twoslash {18-20} collapse={5-11}
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { Effect } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "Generate a dad joke"
  })
  console.log(response.text)
  return response
})

const Gpt4o = OpenAiLanguageModel.model("gpt-4o")

const main = Effect.gen(function*() {
  // You can provide the `Model` individually to each
  // program, or to all of them at once (as we do here)
  const res1 = yield* generateDadJoke
  const res2 = yield* generateDadJoke
  const res3 = yield* generateDadJoke
}).pipe(Effect.provide(Gpt4o))
```
**Flexibility**
If we know that one model or provider performs better at a given task than another, we can freely mix and match models and providers together.
For example, if we know Anthropic's Claude generates some really great dad jokes, we can mix it into our existing program with just a few lines of code:
**Example** (Mixing Multiple Providers and Models)
```ts twoslash {22} collapse={6-12}
import { AnthropicLanguageModel } from "@effect/ai-anthropic"
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { Effect } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "Generate a dad joke"
  })
  console.log(response.text)
  return response
})

const Gpt4o = OpenAiLanguageModel.model("gpt-4o")
const Claude37 = AnthropicLanguageModel.model("claude-3-7-sonnet-latest")

//      ┌─── Effect<void, AiError, AnthropicClient | OpenAiClient>
//      ▼
const main = Effect.gen(function*() {
  const res1 = yield* generateDadJoke
  const res2 = yield* generateDadJoke
  const res3 = yield* Effect.provide(generateDadJoke, Claude37)
}).pipe(Effect.provide(Gpt4o))
```
Because Effect performs type-level dependency tracking, we can see that an `AnthropicClient` is now required to make our program runnable.
**Abstractability**
A `Model` can also be `yield*`'ed to lift its dependencies into the calling Effect. This is particularly useful when creating services that depend on AI interactions, where you want to avoid leaking service-level dependencies into the service interface.
For example, in the code below the `main` program is only dependent upon the `DadJokes` service. All AI requirements are abstracted away into `Layer` composition.
**Example** (Abstracting LLM Interactions into a Service)
```ts twoslash collapse={18-24}
import { AnthropicLanguageModel } from "@effect/ai-anthropic"
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { Effect } from "effect"

const Gpt4o = OpenAiLanguageModel.model("gpt-4o")
const Claude37 = AnthropicLanguageModel.model("claude-3-7-sonnet-latest")

class DadJokes extends Effect.Service<DadJokes>()("app/DadJokes", {
  effect: Effect.gen(function*() {
    // Yielding the model will return a layer with no requirements
    //
    //      ┌─── Layer<LanguageModel | ProviderName>
    //      ▼
    const gpt = yield* Gpt4o
    const claude = yield* Claude37

    const generateDadJoke = Effect.gen(function*() {
      const response = yield* LanguageModel.generateText({
        prompt: "Generate a dad joke"
      })
      console.log(response.text)
      return response
    })

    return {
      generateDadJoke: Effect.provide(generateDadJoke, gpt),
      generateBetterDadJoke: Effect.provide(generateDadJoke, claude)
    }
  })
}) {}

// Programs which utilize the `DadJokes` service have no knowledge of
// any AI requirements
//
//      ┌─── Effect<void, AiError, DadJokes>
//      ▼
const main = Effect.gen(function*() {
  const dadJokes = yield* DadJokes
  const res1 = yield* dadJokes.generateDadJoke
  const res2 = yield* dadJokes.generateBetterDadJoke
})

// The AI requirements are abstracted away into `Layer` composition
//
//      ┌─── Layer<DadJokes, never, AnthropicClient | OpenAiClient>
//      ▼
DadJokes.Default
```
## Create a Provider Client
To make our code executable, we must finish satisfying our program's requirements.
Let's take another look at our program from earlier:
```ts twoslash "OpenAiClient" collapse={5-11}
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { Effect } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "Generate a dad joke"
  })
  console.log(response.text)
  return response
})

const Gpt4o = OpenAiLanguageModel.model("gpt-4o")

//      ┌─── Effect<GenerateTextResponse<{}>, AiError, OpenAiClient>
//      ▼
const main = generateDadJoke.pipe(
  Effect.provide(Gpt4o)
)
```
We can see that our `main` program still requires us to provide an `OpenAiClient`.
Each of our provider integration packages exports a client module that can be used to construct a client for that provider.
**Example** (Creating a Client Layer for a Model Provider)
```ts twoslash /{ (OpenAiClient),/ {24-26} collapse={5-17}
import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { Config, Effect } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "Generate a dad joke"
  })
  console.log(response.text)
  return response
})

const Gpt4o = OpenAiLanguageModel.model("gpt-4o")

const main = generateDadJoke.pipe(
  Effect.provide(Gpt4o)
)

// Create a `Layer` which produces an `OpenAiClient` and requires
// an `HttpClient`
//
//      ┌─── Layer<OpenAiClient, ConfigError, HttpClient>
//      ▼
const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY")
})
```
In the code above, we use the `layerConfig` constructor from the `OpenAiClient` module to create a `Layer` which will produce an `OpenAiClient`. The `layerConfig` constructor allows us to read in configuration variables using Effect's [configuration system](/docs/configuration/).
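When running the program from a shell, the `OPENAI_API_KEY` value read by `Config.redacted` can be supplied as an environment variable (placeholder value shown):

```sh showLineNumbers=false
# Supply the API key read by `Config.redacted("OPENAI_API_KEY")`
# (placeholder value - substitute your actual key)
export OPENAI_API_KEY="sk-placeholder"
```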
The provider clients also have a dependency on an `HttpClient` implementation to avoid any platform dependencies. This way, you can provide whichever `HttpClient` implementation is most appropriate for the platform your code is running upon.
For example, if we know we are going to run this code in Node.js, we can utilize the `NodeHttpClient` module from `@effect/platform-node` to provide an `HttpClient` implementation:
```ts twoslash /{ (NodeHttpClient) }|, (HttpClient)>|, (never)>/ {34} collapse={6-18}
import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { NodeHttpClient } from "@effect/platform-node"
import { Config, Effect, Layer } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "Generate a dad joke"
  })
  console.log(response.text)
  return response
})

const Gpt4o = OpenAiLanguageModel.model("gpt-4o")

const main = generateDadJoke.pipe(
  Effect.provide(Gpt4o)
)

// Create a `Layer` which produces an `OpenAiClient` and requires
// an `HttpClient`
//
//      ┌─── Layer<OpenAiClient, ConfigError, HttpClient>
//      ▼
const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY")
})

// Provide a platform-specific implementation of `HttpClient` to our
// OpenAi layer
//
//      ┌─── Layer<OpenAiClient, ConfigError, never>
//      ▼
const OpenAiWithHttp = Layer.provide(OpenAi, NodeHttpClient.layerUndici)
```
## Running the Program
Now that we have a `Layer` which provides us with an `OpenAiClient`, we're ready to make our `main` program runnable.
Our final program looks like the following:
```ts twoslash
import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { NodeHttpClient } from "@effect/platform-node"
import { Config, Effect, Layer } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "Generate a dad joke"
  })
  console.log(response.text)
  return response
})

const Gpt4o = OpenAiLanguageModel.model("gpt-4o")

const main = generateDadJoke.pipe(
  Effect.provide(Gpt4o)
)

const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY")
})

const OpenAiWithHttp = Layer.provide(OpenAi, NodeHttpClient.layerUndici)

main.pipe(
  Effect.provide(OpenAiWithHttp),
  Effect.runPromise
)
```
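As a variant, if your entrypoint is a Node.js script, you could hand the final program to `NodeRuntime.runMain` from `@effect/platform-node` instead of calling `Effect.runPromise`. `runMain` wires up interruption on process signals (e.g. Ctrl+C) and reports failures before setting the process exit code. A minimal sketch of the same program with that change:

```ts
import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai"
import { LanguageModel } from "@effect/ai"
import { NodeHttpClient, NodeRuntime } from "@effect/platform-node"
import { Config, Effect, Layer } from "effect"

const generateDadJoke = Effect.gen(function*() {
  const response = yield* LanguageModel.generateText({
    prompt: "Generate a dad joke"
  })
  console.log(response.text)
  return response
})

const Gpt4o = OpenAiLanguageModel.model("gpt-4o")

const main = generateDadJoke.pipe(
  Effect.provide(Gpt4o)
)

const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY")
})

const OpenAiWithHttp = Layer.provide(OpenAi, NodeHttpClient.layerUndici)

// `runMain` handles process signals and logs any failure
// before the process exits
main.pipe(
  Effect.provide(OpenAiWithHttp),
  NodeRuntime.runMain
)
```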
In this getting started guide, we will demonstrate how to generate a simple text completion using an LLM provider (OpenAi) using the Effect AI integration packages.
We’ll walk through:
Writing provider-agnostic logic to interact with an LLM
Declaring the specific LLM model to use for the interaction
Using a provider integration to make the program executable
Installation
First, we will need to install the base @effect/ai package to gain access to the core AI abstractions. In addition, we will need to install at least one provider integration package (in this case @effect/ai-openai):
Provides a way to write effectful code using generator functions, simplifying
control flow and error handling.
When to Use
Effect.gen allows you to write code that looks and behaves like synchronous
code, but it can handle asynchronous tasks, errors, and complex control flow
(like loops and conditions). It helps make asynchronous code more readable
and easier to manage.
The generator functions work similarly to async/await but with more
explicit control over the execution of effects. You can yield* values from
effects and return the final result at the end.
The console module provides a simple debugging console that is similar to the
JavaScript console mechanism provided by web browsers.
The module exports two specific components:
A Console class with methods such as console.log(), console.error() and console.warn() that can be used to write to any Node.js stream.
A global console instance configured to write to process.stdout and
process.stderr. The global console can be used without importing the node:console module.
Warning: The global console object's methods are neither consistently
synchronous like the browser APIs they resemble, nor are they consistently
asynchronous like all other Node.js streams. See the note on process I/O for
more information.
Example using the global console:
console.log('hello world');
// Prints: hello world, to stdout
console.log('hello %s', 'world');
// Prints: hello world, to stdout
console.error(newError('Whoops, something bad happened'));
// Prints error message and stack trace to stderr:
// Error: Whoops, something bad happened
// at [eval]:5:15
// at Script.runInThisContext (node:vm:132:18)
// at Object.runInThisContext (node:vm:309:38)
// at node:internal/process/execution:77:19
// at [eval]-wrapper:6:22
// at evalScript (node:internal/process/execution:76:60)
// at node:internal/main/eval_string:23:3
constname='Will Robinson';
console.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to stderr
Example using the Console class:
constout=getStreamSomehow();
consterr=getStreamSomehow();
constmyConsole=new console.Console(out, err);
myConsole.log('hello world');
// Prints: hello world, to out
myConsole.log('hello %s', 'world');
// Prints: hello world, to out
myConsole.error(newError('Whoops, something bad happened'));
// Prints: [Error: Whoops, something bad happened], to err
Prints to stdout with newline. Multiple arguments can be passed, with the
first used as the primary message and all additional used as substitution
values similar to printf(3)
(the arguments are all passed to util.format()).
Provides a way to write effectful code using generator functions, simplifying
control flow and error handling.
When to Use
Effect.gen allows you to write code that looks and behaves like synchronous
code, but it can handle asynchronous tasks, errors, and complex control flow
(like loops and conditions). It helps make asynchronous code more readable
and easier to manage.
The generator functions work similarly to async/await but with more
explicit control over the execution of effects. You can yield* values from
effects and return the final result at the end.
The console module provides a simple debugging console that is similar to the
JavaScript console mechanism provided by web browsers.
The module exports two specific components:
A Console class with methods such as console.log(), console.error() and console.warn() that can be used to write to any Node.js stream.
A global console instance configured to write to process.stdout and
process.stderr. The global console can be used without importing the node:console module.
Warning: The global console object's methods are neither consistently
synchronous like the browser APIs they resemble, nor are they consistently
asynchronous like all other Node.js streams. See the note on process I/O for
more information.
Example using the global console:
console.log('hello world');
// Prints: hello world, to stdout
console.log('hello %s', 'world');
// Prints: hello world, to stdout
console.error(newError('Whoops, something bad happened'));
// Prints error message and stack trace to stderr:
// Error: Whoops, something bad happened
// at [eval]:5:15
// at Script.runInThisContext (node:vm:132:18)
// at Object.runInThisContext (node:vm:309:38)
// at node:internal/process/execution:77:19
// at [eval]-wrapper:6:22
// at evalScript (node:internal/process/execution:76:60)
// at node:internal/main/eval_string:23:3
constname='Will Robinson';
console.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to stderr
Example using the Console class:
constout=getStreamSomehow();
consterr=getStreamSomehow();
constmyConsole=new console.Console(out, err);
myConsole.log('hello world');
// Prints: hello world, to out
myConsole.log('hello %s', 'world');
// Prints: hello world, to out
myConsole.error(newError('Whoops, something bad happened'));
// Prints: [Error: Whoops, something bad happened], to err
Prints to stdout with newline. Multiple arguments can be passed, with the
first used as the primary message and all additional used as substitution
values similar to printf(3)
(the arguments are all passed to util.format()).
constprovide: <LanguageModel.LanguageModel|ProviderName, never, OpenAiClient>(layer:Layer<LanguageModel.LanguageModel|ProviderName, never, OpenAiClient>) => <A, E, R>(self:Effect.Effect<A, E, R>) =>Effect.Effect<A, E, OpenAiClient|Exclude<R, LanguageModel.LanguageModel|ProviderName>> (+9overloads)
Provides necessary dependencies to an effect, removing its environmental
requirements.
Details
This function allows you to supply the required environment for an effect.
The environment can be provided in the form of one or more Layers, a
Context, a Runtime, or a ManagedRuntime. Once the environment is
provided, the effect can run without requiring external dependencies.
You can compose layers to create a modular and reusable way of setting up the
environment for effects. For example, layers can be used to configure
databases, logging services, or any other required dependencies.
Before moving on, it is important that we understand the purpose of the Model data type.
Understanding Model
The Model data type represents a provider-specific implementation of one or more services, such as LanguageModel or EmbeddingsModel. It is the primary way that you can plug a real large language model into your program.
Provides a way to write effectful code using generator functions, simplifying
control flow and error handling.
When to Use
Effect.gen allows you to write code that looks and behaves like synchronous
code, but it can handle asynchronous tasks, errors, and complex control flow
(like loops and conditions). It helps make asynchronous code more readable
and easier to manage.
The generator functions work similarly to async/await but with more
explicit control over the execution of effects. You can yield* values from
effects and return the final result at the end.
The console module provides a simple debugging console that is similar to the
JavaScript console mechanism provided by web browsers.
The module exports two specific components:
A Console class with methods such as console.log(), console.error() and console.warn() that can be used to write to any Node.js stream.
A global console instance configured to write to process.stdout and
process.stderr. The global console can be used without importing the node:console module.
Warning: The global console object's methods are neither consistently
synchronous like the browser APIs they resemble, nor are they consistently
asynchronous like all other Node.js streams. See the note on process I/O for
more information.
Example using the global console:
console.log('hello world');
// Prints: hello world, to stdout
console.log('hello %s', 'world');
// Prints: hello world, to stdout
console.error(newError('Whoops, something bad happened'));
// Prints error message and stack trace to stderr:
// Error: Whoops, something bad happened
// at [eval]:5:15
// at Script.runInThisContext (node:vm:132:18)
// at Object.runInThisContext (node:vm:309:38)
// at node:internal/process/execution:77:19
// at [eval]-wrapper:6:22
// at evalScript (node:internal/process/execution:76:60)
// at node:internal/main/eval_string:23:3
constname='Will Robinson';
console.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to stderr
Example using the Console class:
constout=getStreamSomehow();
consterr=getStreamSomehow();
constmyConsole=new console.Console(out, err);
myConsole.log('hello world');
// Prints: hello world, to out
myConsole.log('hello %s', 'world');
// Prints: hello world, to out
myConsole.error(newError('Whoops, something bad happened'));
// Prints: [Error: Whoops, something bad happened], to err
Prints to stdout with newline. Multiple arguments can be passed, with the
first used as the primary message and all additional used as substitution
values similar to printf(3)
(the arguments are all passed to util.format()).
Provides a way to write effectful code using generator functions, simplifying
control flow and error handling.
When to Use
Effect.gen allows you to write code that looks and behaves like synchronous
code, but it can handle asynchronous tasks, errors, and complex control flow
(like loops and conditions). It helps make asynchronous code more readable
and easier to manage.
The generator functions work similarly to async/await but with more
explicit control over the execution of effects. You can yield* values from
effects and return the final result at the end.
constprovide: <LanguageModel.LanguageModel|ProviderName, never, OpenAiClient>(layer:Layer<LanguageModel.LanguageModel|ProviderName, never, OpenAiClient>) => <A, E, R>(self:Effect.Effect<A, E, R>) =>Effect.Effect<A, E, OpenAiClient|Exclude<R, LanguageModel.LanguageModel|ProviderName>> (+9overloads)
Provides necessary dependencies to an effect, removing its environmental
requirements.
Details
This function allows you to supply the required environment for an effect.
The environment can be provided in the form of one or more Layers, a
Context, a Runtime, or a ManagedRuntime. Once the environment is
provided, the effect can run without requiring external dependencies.
You can compose layers to create a modular and reusable way of setting up the
environment for effects. For example, layers can be used to configure
databases, logging services, or any other required dependencies.
If we know that one model or provider performs better at a given task than another, we can freely mix and match models and providers together.
For example, if we know Anthropic’s Claude generates some really great dad jokes, we can mix it into our existing program with just a few lines of code:
Provides a way to write effectful code using generator functions, simplifying
control flow and error handling.
When to Use
Effect.gen allows you to write code that looks and behaves like synchronous
code, but it can handle asynchronous tasks, errors, and complex control flow
(like loops and conditions). It helps make asynchronous code more readable
and easier to manage.
The generator functions work similarly to async/await but with more
explicit control over the execution of effects. You can yield* values from
effects and return the final result at the end.
The console module provides a simple debugging console that is similar to the
JavaScript console mechanism provided by web browsers.
The module exports two specific components:
A Console class with methods such as console.log(), console.error() and console.warn() that can be used to write to any Node.js stream.
A global console instance configured to write to process.stdout and
process.stderr. The global console can be used without importing the node:console module.
Warning: The global console object's methods are neither consistently
synchronous like the browser APIs they resemble, nor are they consistently
asynchronous like all other Node.js streams. See the note on process I/O for
more information.
Example using the global console:
console.log('hello world');
// Prints: hello world, to stdout
console.log('hello %s', 'world');
// Prints: hello world, to stdout
console.error(newError('Whoops, something bad happened'));
// Prints error message and stack trace to stderr:
// Error: Whoops, something bad happened
// at [eval]:5:15
// at Script.runInThisContext (node:vm:132:18)
// at Object.runInThisContext (node:vm:309:38)
// at node:internal/process/execution:77:19
// at [eval]-wrapper:6:22
// at evalScript (node:internal/process/execution:76:60)
// at node:internal/main/eval_string:23:3
constname='Will Robinson';
console.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to stderr
Example using the Console class:
constout=getStreamSomehow();
consterr=getStreamSomehow();
constmyConsole=new console.Console(out, err);
myConsole.log('hello world');
// Prints: hello world, to out
myConsole.log('hello %s', 'world');
// Prints: hello world, to out
myConsole.error(newError('Whoops, something bad happened'));
// Prints: [Error: Whoops, something bad happened], to err
Prints to stdout with newline. Multiple arguments can be passed, with the
first used as the primary message and all additional used as substitution
values similar to printf(3)
(the arguments are all passed to util.format()).
Provides a way to write effectful code using generator functions, simplifying
control flow and error handling.
When to Use
Effect.gen allows you to write code that looks and behaves like synchronous
code, but it can handle asynchronous tasks, errors, and complex control flow
(like loops and conditions). It helps make asynchronous code more readable
and easier to manage.
The generator functions work similarly to async/await but with more
explicit control over the execution of effects. You can yield* values from
effects and return the final result at the end.
Provides necessary dependencies to an effect, removing its environmental
requirements.
Details
This function allows you to supply the required environment for an effect.
The environment can be provided in the form of one or more Layers, a
Context, a Runtime, or a ManagedRuntime. Once the environment is
provided, the effect can run without requiring external dependencies.
You can compose layers to create a modular and reusable way of setting up the
environment for effects. For example, layers can be used to configure
databases, logging services, or any other required dependencies.
constprovide: <LanguageModel.LanguageModel|ProviderName, never, OpenAiClient>(layer:Layer<LanguageModel.LanguageModel|ProviderName, never, OpenAiClient>) => <A, E, R>(self:Effect.Effect<A, E, R>) =>Effect.Effect<A, E, OpenAiClient|Exclude<R, LanguageModel.LanguageModel|ProviderName>> (+9overloads)
Provides necessary dependencies to an effect, removing its environmental
requirements.
Details
This function allows you to supply the required environment for an effect.
The environment can be provided in the form of one or more Layers, a
Context, a Runtime, or a ManagedRuntime. Once the environment is
provided, the effect can run without requiring external dependencies.
You can compose layers to create a modular and reusable way of setting up the
environment for effects. For example, layers can be used to configure
databases, logging services, or any other required dependencies.
Because Effect performs type-level dependency tracking, we can see that an AnthropicClient is now required to make our program runnable.
Abstractability
An Model can also be yield*’ed to lift its dependencies into the calling Effect. This is particularly useful when creating services that depend on AI interactions, where you want to avoid leaking service-level dependencies into the service interface.
For example, in the code below the main program is only dependent upon the DadJokes service. All AI requirements are abstracted away into Layer composition.
Example (Abstracting LLM Interactions into a Service)
To construct the client, we use the `layerConfig` constructor from the `OpenAiClient` module to create a `Layer` that produces an `OpenAiClient`. The `layerConfig` constructor reads configuration values (such as the API key) through Effect's configuration system.
The provider clients also depend on an `HttpClient` implementation rather than bundling one, which keeps the integration packages platform-agnostic. You can supply whichever `HttpClient` implementation is most appropriate for the platform your code runs on.
For example, if we know this code will run on Node.js, we can use the `NodeHttpClient` module from `@effect/platform-node` to provide an `HttpClient` implementation:
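A minimal sketch of that wiring, assuming the environment variable name `OPENAI_API_KEY` and the Undici-based client (`NodeHttpClient` also exposes other implementations):

```ts
import { OpenAiClient } from "@effect/ai-openai"
import { NodeHttpClient } from "@effect/platform-node"
import { Config, Layer } from "effect"

// A Layer producing an OpenAiClient, reading the API key from configuration
const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY")
})

// Satisfy the client's HttpClient requirement with the Node.js
// (Undici-based) implementation
const OpenAiWithHttp = OpenAi.pipe(Layer.provide(NodeHttpClient.layerUndici))
```

Once composed this way, `OpenAiWithHttp` has no remaining requirements and can be passed directly to `Effect.provide`.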