# Changelog
Source: https://www.activepieces.com/docs/about/changelog
A log of all notable changes to Activepieces
### Waitpoints — durable pause and resume for flow runs
Flows can now **pause mid-execution and resume later** without losing any state. The execution context is fully persisted so restarts, deployments, or extended wait times do not affect in-progress runs.
Two pause types are available out of the box:
* **Webhook waitpoints** — the flow pauses and resumes the moment a specific callback URL is called. The URL is unique to the run, carries the caller's request body, headers, and query parameters through to the next step, and supports both async and synchronous (respond-when-done) modes.
* **Delay waitpoints** — the flow pauses until a scheduled timestamp and then resumes automatically, freeing up worker capacity during the wait.
Both types survive worker restarts. If a resume signal arrives before the run has finished writing its paused state, it is buffered and replayed automatically — no lost callbacks.
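To make the webhook-waitpoint behavior concrete, here is a small sketch of how a caller's request could be folded into the data the next step receives after resume. The callback URL shape and field names below are illustrative placeholders, not the actual Activepieces endpoint format:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical run-specific resume callback URL (placeholder shape).
callback_url = "https://ap.example.com/api/v1/flow-runs/RUN_ID/requests/REQUEST_ID?approved=true"

def build_resume_payload(url: str, headers: dict, body: dict) -> dict:
    """Combine the caller's query params, headers, and body into the
    data the next step would see once the paused run resumes."""
    query = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    return {"queryParams": query, "headers": headers, "body": body}

payload = build_resume_payload(
    callback_url,
    headers={"content-type": "application/json"},
    body={"decision": "approved"},
)
print(payload["queryParams"]["approved"])  # -> true
```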
Read more: [Waitpoints](/docs/install/architecture/waitpoints) · [Durable Execution](/docs/install/architecture/durable-execution)
### Network Security
Activepieces now ships with a built-in **network proxy that protects your automations from SSRF and other network attacks**.
* **One switch to turn it on** — set `AP_NETWORK_MODE=STRICT` and the protection is active. It's **opt-in today**, but in an upcoming release it will become **opt-out** (on by default).
* **Allow list for trusted internal services** — if a flow legitimately needs to reach an internal API or database, add its address (or a whole subnet like `10.10.0.0/24`) to `AP_SSRF_ALLOW_LIST`.
* **Works with your corporate proxy** — if you already use `HTTP_PROXY` / `HTTPS_PROXY`, Activepieces respects them automatically.
See the [Network Security](/docs/install/architecture/network-security) architecture page for the full breakdown.
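The allow-list semantics described above (a single address or a whole subnet) can be sketched with the standard library. This is only an illustration of the matching rule, not the actual implementation:

```python
import ipaddress

# Sketch of how an allow list like AP_SSRF_ALLOW_LIST could be evaluated:
# each entry is either a single address or a CIDR subnet.
def is_allowed(target_ip: str, allow_list: list[str]) -> bool:
    addr = ipaddress.ip_address(target_ip)
    for entry in allow_list:
        # A bare address like "192.168.1.5" parses as a /32 network.
        if addr in ipaddress.ip_network(entry, strict=False):
            return True
    return False

allow = ["10.10.0.0/24", "192.168.1.5"]
print(is_allowed("10.10.0.7", allow))   # True: inside the /24 subnet
print(is_allowed("10.11.0.7", allow))   # False: matches no entry
```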
### Shared concurrency pools
Projects can now share a single concurrency limit through **concurrency pools**, giving platform operators finer control over how worker capacity is distributed.
* **Per-project concurrency** — set **Max Concurrent Jobs** directly in Project Settings > General.
* **Shared pools** — multiple projects can be grouped into the same pool so they draw from one shared limit instead of each having its own.
* **Embedding support** — when provisioning users via JWT, include `concurrencyPoolKey` and `concurrencyPoolLimit` claims to create or reuse a pool automatically. See [Provision Users](/docs/embedding/provision-users).
Learn more in the [Manage Concurrency](/docs/admin-guide/guides/manage-concurrency) guide.
### Stability & reliability improvements
Major investments in worker architecture, testing, and resilience to ensure more reliable flow execution.
**Worker Architecture**
* Complete worker rewrite (worker v2) focused on stability and reliability
* New sandbox process model with improved error handling and resource cleanup
**Testing & Quality — what we added and why it matters**
Race condition tests:
* **Queue dispatcher** (12 tests) — tests orphaned job handling when a job is dequeued but all waiters have timed out, prevents double-loop spawn during close with in-flight dequeue, verifies single dequeue concurrency control, tests waiter timeout and retry behavior
* **Subflow resume** (8 tests) — tests the race condition where the engine writes pause metadata to Redis before it's persisted to DB, verifies Redis fallback when DB is stale, simulates concurrent reads/writes with spy mocks, covers sync endpoint with concurrent Redis updates
* **Rate limiter** (5+ tests) — tests concurrent job slot allocation, idempotency with concurrent dispatch, per-project isolation so different projects don't interfere
* **Concurrent flow execution** — end-to-end test creating 5 concurrent flow runs verifying none get stuck or deadlocked
* **Memory lock** — verifies mutual exclusion (two concurrent lock acquire calls on same key are serialized)
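The mutual-exclusion property that the memory-lock test verifies can be illustrated in a few lines. This is a toy demonstration of the invariant (two concurrent acquires on the same key are serialized), not the worker's actual lock code:

```python
import threading
import time

lock = threading.Lock()
events = []

def critical(name: str) -> None:
    # Only one thread can be inside this block at a time.
    with lock:
        events.append(f"{name}:enter")
        time.sleep(0.05)
        events.append(f"{name}:exit")

t1 = threading.Thread(target=critical, args=("a",))
t2 = threading.Thread(target=critical, args=("b",))
t1.start(); t2.start(); t1.join(); t2.join()

# Serialized: each enter is immediately followed by the matching exit.
print(events)
```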
Worker unit tests:
* **Worker polling** — tests job execution lifecycle, resilience to invalid job data, null polls, unrecognized job types, mixed valid/invalid sequences
* **Sandbox execution** — tests sandbox creation, startup, RPC communication, stdout/stderr accumulation, resource cleanup on timeout or memory issues, process cleanup and listener removal
* **Process forking** — tests execArgv configuration (memory limits, node options), environment variable propagation
* **Cache logic** — tests cache hit/miss, disk persistence, memory caching, cache invalidation predicates
* **Configuration** — tests config loading for different container types (WORKER\_AND\_APP vs WORKER)
End-to-end validation:
* **[Smoke tests](https://github.com/activepieces/activepieces/blob/main/.github/workflows/smoke-test.yml) in GitHub Actions** — validates health checks and webhook flow execution on AMD64 and ARM64
* **[Benchmark tests](https://github.com/activepieces/activepieces/blob/main/.github/workflows/benchmark.yml) in GitHub Actions** — load testing across 6 app/worker configurations measuring throughput, mean latency, P50, P99 ([see results](/docs/install/architecture/benchmark))
**Resilience**
* Worker gracefully handles invalid job data, null polls, and unrecognized job types
* Sandbox properly cleans up processes, listeners, and resources on timeout or memory issues
* Race condition fixes for subflow resume and user interaction jobs
### Platform Admin — full UI redesign
A comprehensive visual refresh across all Platform Admin and Project pages with a cleaner, more consistent interface.
**Platform Admin**
* **Projects** — filter chips, inline edit icons, and cleaner sidebar without branding
* **Users** — new Role and Last Active columns, dedicated Invite button, and per-row action menus
* **Project Roles** — switched from table to card-based layout with permission viewing
* **Audit Logs** — fully enabled with Action, Performed By, Project, and Date Range filters, plus detail slide-over panels with a full JSON payload viewer
* **Secret Managers** — card-based layout with HashiCorp Vault, AWS Secrets Manager, CyberArk Conjur, and 1Password
* **Branding** — refreshed page layout
**Project pages**
* **Automations** — renamed from "Flows", with Type and Owner filters, search, step icon details, status toggles, and favorites
* **Connections** — Pieces and Owner filters, flow usage count, inline edit and refresh actions
* **Runs** — updated tab navigation and consistent styling
**General improvements**
* Unified tab navigation across Automations, Runs, and Connections
* Platform Admin link added to the bottom of the project sidebar
* Slide-over panels for detail views instead of full-page navigation
* Previous / Next pagination style throughout
* Sidebar header replaced with a "Back to app" link
### Dedicated staging environment & improved release process
All changes now go through a dedicated staging environment before reaching production. Every update is validated internally for a minimum of 16 hours before being promoted, ensuring higher reliability and fewer disruptions for our users.
* **Staging-first deployment** — every change is tested in a production-like environment before going live
* **Daily production promotions** — only validated, stable builds are promoted to production
* **Weekly self-hosted releases** — predictable, stable releases published every week for self-hosted customers
* **Emergency hotfix path** — critical fixes can still be fast-tracked to production when needed
[Read more about our release cycle](/docs/handbook/engineering/onboarding/release-cycle)
For previous releases, see the [GitHub Releases](https://github.com/activepieces/activepieces/releases) page.
# i18n Translations
Source: https://www.activepieces.com/docs/about/i18n
This guide helps you understand how to change or add new translations.
Activepieces uses Crowdin because it lets translators contribute without writing code and simplifies the approval process. Activepieces automatically syncs new text from the code to Crowdin and translations back into the code.
## Contribute to existing translations
1. Create a Crowdin account
2. Join the project [https://crowdin.com/project/activepieces](https://crowdin.com/project/activepieces)
3. Click on the language you want to translate
4. Click on "Translate All"
5. Select the strings you want to translate and click the "Save" button
## Adding a new language
* Please contact us ([support@activepieces.com](mailto:support@activepieces.com)) if you want to add a new language. We will add it to the project and you can start translating.
# License
Source: https://www.activepieces.com/docs/about/license
Activepieces' **core** is released as open source under the [MIT license](https://github.com/activepieces/activepieces/blob/main/LICENSE), and the enterprise/cloud edition features are released under a [Commercial License](https://github.com/activepieces/activepieces/blob/main/packages/ee/LICENSE).
The MIT license is a permissive license that grants users the freedom to use, modify, or distribute the software without any significant restrictions. The only requirement is that you include the license notice along with the software when distributing it.
Using the enterprise features (under the packages/ee and packages/server/api/src/app/ee folder) with a self-hosted instance requires an Activepieces license. If you are looking for these features, contact us at [sales@activepieces.com](mailto:sales@activepieces.com).
**Benefits of Dual Licensing Repo**
* **Transparency** - Everyone can see what we are doing and contribute to the project.
* **Clarity** - Everyone can see what the difference is between the open source and commercial versions of our software.
* **Audit** - Everyone can audit our code and see what we are doing.
* **Faster Development** - We can develop faster and more efficiently.
If you are still confused or have feedback, please open an issue on GitHub or send a message in the #contribution channel on Discord.
# Event Streaming
Source: https://www.activepieces.com/docs/admin-guide/guides/event-streaming
Forward audit events to a webhook and build fully customizable alerts on top
## Overview
Event Streaming forwards platform [audit events](/docs/admin-guide/security/audit-logs/overview) to a webhook URL of your choice. The most common pattern is to point that webhook at a flow inside Activepieces — your flow then routes each event to the channel you care about (Slack, Gmail, Microsoft Teams, custom HTTP, etc.).
Use it to react to flow run failures, new sign-ins, project releases, or any other audit event with the alerting tool of your choice.
## Quick start
The fastest way to get going is the **Generate handler flow** button in the New destination dialog. It builds a webhook-triggered flow with one router branch per event you select, plus a nested branch for failed runs when you pick `flow.run.finished`.
1. Open **Platform Admin → Event Streaming**.
2. Click **New Destination**, pick the events you want to handle, and click **Generate handler flow**.
3. The flow lands in your personal project with a webhook trigger and a router branch per selected event. Sample data is pre-baked so you can hit Test right away.
4. Customize the branches with whatever pieces you like — Slack, Gmail, Microsoft Teams, custom HTTP, and so on.
5. Publish the flow, then go back to the New destination dialog and click **Create** to save the destination.
6. Use the **Test webhook** dropdown in the dialog to fire a sample of any selected event and confirm the round-trip.
## Available events
You can subscribe a destination to any audit event — flow lifecycle, run status, folder/connection changes, user activity, and platform admin actions. See the [Audit Log Events catalog](/docs/admin-guide/security/audit-logs/overview#event-catalog) for the full list and per-event payload details.
## Use your own webhook URL
If you already have a webhook URL — or you want to point Event Streaming at a SIEM, data warehouse, or any other external system — you can skip the handler flow and configure the destination by hand:
1. Go to **Platform Admin → Event Streaming**.
2. Click **New Destination**.
3. Select the **Events** you want to forward.
4. Paste your **Webhook URL** (must be a valid HTTPS endpoint).
5. Use the **Test webhook** dropdown to fire a sample payload for any selected event.
6. Click **Create** to save.
## Payload shape
Every event is delivered as an HTTP POST with a JSON body. The top-level fields are:
* `action` — the event name (for example `flow.run.finished`).
* `data` — event-specific payload, including details like the flow id, run id, status, or affected user.
* `id`, `created`, `updated` — event identifiers and timestamps.
* `platformId`, `projectId`, `userId` — context fields shared across all events.
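A handler flow (or any webhook receiver) can branch on the `action` field. The sketch below uses a hypothetical sample payload with the top-level fields listed above; the routing labels are made up for illustration:

```python
import json

# Illustrative sample event (field values are placeholders).
sample = json.loads("""{
  "id": "evt_123",
  "action": "flow.run.finished",
  "data": {"flowId": "flow_1", "status": "FAILED"},
  "projectId": "proj_1"
}""")

def route(event: dict) -> str:
    """Route an event by its `action`, with a nested check for failed runs,
    mirroring what a router branch in a handler flow would do."""
    if event["action"] == "flow.run.finished":
        status = event["data"].get("status")
        return "alert" if status == "FAILED" else "ignore"
    return "default"

print(route(sample))  # -> alert
```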
## Requirements
* **Enterprise Edition**: Event Streaming requires an enterprise plan with Audit Logs enabled.
* **Platform admin**: only platform admins can configure destinations.
* **HTTPS endpoint**: webhook URLs must use HTTPS.
* **Publicly accessible**: your endpoint must be reachable from the internet.
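A quick local check for the URL requirements above (HTTPS scheme, non-empty host) can be written with the standard library. This mirrors the stated rules, not the server's exact validation code:

```python
from urllib.parse import urlparse

def is_valid_destination(url: str) -> bool:
    """Return True only for HTTPS URLs with a host component."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and bool(parsed.netloc)

print(is_valid_destination("https://hooks.example.com/events"))  # True
print(is_valid_destination("http://hooks.example.com/events"))   # False: not HTTPS
```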
## Troubleshooting
* **Events not received**: verify your endpoint is publicly accessible and returns 2xx status codes.
* **Test fails**: check that your URL is valid and uses HTTPS.
* **Missing events**: make sure the event type is selected in your destination configuration.
## See also
* [Audit Logs](/docs/admin-guide/security/audit-logs/overview) — view every event recorded on your platform
# Manage Concurrency
Source: https://www.activepieces.com/docs/admin-guide/guides/manage-concurrency
Control how many flows can run at the same time per project
## Overview
Concurrency limits control how many flow runs can execute simultaneously within a project. When a project reaches its limit, new runs are queued and retried automatically with exponential backoff until a slot becomes available.
This is useful to:
* Prevent a single project from consuming all available workers.
* Protect downstream APIs from being overwhelmed by too many parallel requests.
* Ensure fair resource distribution across projects on your platform.
## Setting a Per-Project Limit
1. Navigate to **Project Settings > General**.
2. Find the **Max Concurrent Jobs** field.
3. Enter the desired limit, or leave it empty to use the platform default.
4. Click **Save**.
When the field is empty, the project uses the default concurrency limit from your platform plan.
## What Happens When the Limit Is Reached
When a project hits its concurrency limit:
1. New flow runs are **not dropped** — they are queued.
2. The system retries queued runs with **exponential backoff**.
3. Once an active run finishes and a slot opens, the next queued run starts.
This means flows will always eventually execute, but they may experience delays when the project is at capacity.
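The exponential backoff described above can be sketched as follows. The base delay and cap here are illustrative values, not the intervals Activepieces actually uses internally:

```python
def backoff_delays(base: float = 1.0, cap: float = 60.0, attempts: int = 6) -> list:
    """Delay before each retry doubles, up to a maximum cap."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```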
## Programmatic Management via Embedding
If you are embedding Activepieces, you can manage concurrency limits programmatically through the JWT token used to provision users. This allows you to group multiple projects into a shared **concurrency pool** so they share the same limit.
See [Provision Users](/docs/embedding/provision-users) for the `concurrencyPoolKey` and `concurrencyPoolLimit` JWT claims.
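As an illustration, here is a minimal HS256 JWT built with only the standard library that includes the two pool claims. The claim names `concurrencyPoolKey` and `concurrencyPoolLimit` come from the docs above; the identity fields and signing secret are placeholders — in practice, use the full set of claims described in [Provision Users](/docs/embedding/provision-users) and your real signing key:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: str) -> str:
    """Minimal HS256 JWT signer (header.payload.signature)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = sign_jwt(
    {
        "externalUserId": "user-123",       # placeholder identity claim
        "exp": int(time.time()) + 3600,
        "concurrencyPoolKey": "tenant-42",  # projects sharing this key share one pool
        "concurrencyPoolLimit": 10,         # the pool's max concurrent jobs
    },
    secret="replace-with-signing-key",
)
print(token.count("."))  # 2
```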
# Override OAuth2 Apps
Source: https://www.activepieces.com/docs/admin-guide/guides/manage-oauth2
Use your own OAuth2 credentials instead of the default Activepieces apps
## Default Behavior
When users connect to services like Google Sheets or Slack, they see "Activepieces" as the app requesting access. This works out of the box with no setup required.
## Why Replace OAuth2 Apps?
* **Branding**: Show your company name instead of "Activepieces" in authorization screens
* **Higher Limits**: Some services have stricter rate limits for shared OAuth apps
* **Compliance**: Your organization may require using company-owned credentials
## How to Configure
1. Go to **Platform Admin → Setup → Pieces**
2. Find the piece you want to configure (e.g., Google Sheets)
3. Click the lock icon to open the OAuth2 settings
4. Enter your own Client ID and Client Secret
# How to Manage Pieces
Source: https://www.activepieces.com/docs/admin-guide/guides/manage-pieces
Control which integrations are available to your users
## Overview
**Pieces** are the building blocks of Activepieces — they are integrations and connectors (like Google Sheets, Slack, OpenAI, etc.) that users can use in their automation flows.
As a platform administrator, you have full control over which pieces are available to your users. This allows you to:
* **Enforce security policies** by restricting access to certain integrations
* **Simplify the user experience** by showing only relevant pieces for your use case
* **Deploy custom/private pieces** that are specific to your organization
There are **two levels** of piece management:
| Level | Who Can Manage | Scope |
| ------------------ | -------------- | --------------------------------------------- |
| **Platform Level** | Platform Admin | Install and remove across the entire platform |
| **Project Level** | Project Admin | Show/hide specific pieces for a specific project |
Pieces are standard npm packages — official pieces are **auto-synced from the registry hourly**, so you don't need to upgrade the server to get new versions. Each step in a flow is pinned to a specific piece version, and drafts can be upgraded from the builder. See [Piece Syncing & Versioning](/docs/install/architecture/piece-syncing) for the full pipeline.
***
## Platform-Level Management
Platform administrators can manage pieces for the entire Activepieces instance from **Platform Admin → Setup → Pieces**.
## Project-Level Management
Project administrators can further restrict which pieces are available within their specific project. This is useful when different teams or projects need access to different integrations.
### Show/Hide Pieces in a Project
Navigate to your project and go to **Settings → Pieces**.
You'll see a list of all pieces installed on the platform. Toggle the visibility for each piece:
* **Enabled**: Users in this project can use the piece
* **Disabled**: The piece is hidden from users in this project
Changes take effect immediately — users will only see the enabled pieces when building their flows.
Project-level settings can only **hide** pieces that are installed at the platform level. You cannot add pieces at the project level that aren't already installed on the platform.
### Install Private Pieces
For detailed instructions on building custom pieces, check the [Building Pieces](/docs/build-pieces/building-pieces/overview) documentation.
If you've built a custom piece for your organization, you can upload it directly as a tarball (`.tgz`) file:
1. Build your piece using the Activepieces CLI:

   ```bash theme={null}
   npm run pieces -- build --name=your-piece-name
   ```

   This generates a tarball in `dist/packages/pieces/your-piece-name`.
2. Go to **Platform Admin → Setup → Pieces** and click **Install Piece**.
3. Choose **Upload File** as the installation source.
4. Select the `.tgz` file from your build output and upload it.
# Manage User Roles
Source: https://www.activepieces.com/docs/admin-guide/guides/permissions
Documentation on project permissions in Activepieces
Activepieces utilizes Role-Based Access Control (RBAC) for managing permissions within projects. Each project consists of multiple flows and users, with each user assigned specific roles that define their actions within the project.
## Default Roles
Activepieces comes with four standard roles out of the box. The table below shows the permissions for each role:
| Permission | Admin | Editor | Operator | Viewer |
| -------------------------- | :---: | :----: | :------: | :----: |
| **Flows** | | | | |
| View Flows | ✓ | ✓ | ✓ | ✓ |
| Edit Flows | ✓ | ✓ | | |
| Publish / Toggle Flows | ✓ | ✓ | ✓ | |
| **Runs** | | | | |
| View Runs | ✓ | ✓ | ✓ | ✓ |
| Retry Runs | ✓ | ✓ | ✓ | |
| **Connections** | | | | |
| View Connections | ✓ | ✓ | ✓ | ✓ |
| Edit Connections | ✓ | ✓ | ✓ | |
| **Team** | | | | |
| View Project Members | ✓ | ✓ | ✓ | ✓ |
| Add/Remove Project Members | ✓ | | | |
| **Git Sync** | | | | |
| Configure Git Repo | ✓ | | | |
| Pull Flows from Git | ✓ | | | |
| Push Flows to Git | ✓ | | | |
## Custom Roles
If the default roles don't fit your needs, you can create custom roles with specific permissions.
1. Go to **Platform Admin** → **Security** → **Project Roles**.
2. Click **Create Role** and give it a name.
3. Select the specific permissions you want to grant to this role.
Custom roles are useful when you need fine-grained control, such as allowing users to view and retry runs without being able to edit flows.
# Project Releases
Source: https://www.activepieces.com/docs/admin-guide/guides/project-releases
Learn how to manage and deploy releases across projects
Project Releases allow you to sync flows, connections, and tables between different projects—essential for teams that want to develop in one environment and deploy to another with confidence.
**Example:** Build and test your automations in a **Staging** project, then seamlessly promote them to **Production** when ready. Simply navigate to your Production project → **Releases** → create a release from Staging, and all your changes will be applied instantly.
## Overview
There are three ways to create a release:
| Source | Description |
| ------------ | ------------------------------------------------ |
| **Git** | Pull changes from a connected Git repository |
| **Project** | Copy flows from another project in your instance |
| **Rollback** | Restore a previous release state |
## Prerequisites
### Enabling Environments
In your project dashboard, go to **Settings → Environments** and click the **Enable** button.
## Getting Started
Navigate to the **Releases** page from your project sidebar to view all releases and create new ones.
## Connecting Git (Optional)
If you want to use Git to track your changes, you'll need to connect a Git repository first. This requires the Environments feature to be enabled.
## Creating a Release
### From Project
Apply changes from flows, connections and tables in one project to another.
1. Click the **Create Release** dropdown button.
2. Choose **From Project** from the dropdown menu.
3. Choose the project you want to copy flows, connections, and tables from.
4. Review the changes and click **Apply Changes**.
New connections created during a release are placeholders and need to be reconnected with valid credentials after the release is applied.
### From Git
1. Click the **Create Release** dropdown button.
2. Choose **From Git** from the dropdown menu.
3. A dialog will appear showing all the changes that will be applied:
   * **Flows Changes**: New, updated, or deleted flows
   * **Connections Changes**: New or renamed connections
   * **Tables Changes**: New, updated, or deleted tables
4. Check or uncheck the flows you want to include in this release.
5. Enter a **Name** and optional **Description** for your release.
6. Click **Apply Changes** to create the release.
## Push Everything to Git
If your project is connected to a Git repository, you can push all your flows, connections, and tables to Git.
1. Click the **Push Everything** button on the releases page.
2. Write a descriptive commit message explaining your changes.
3. Click **Push** to send all published flows to the Git repository.
## Pushing Individual Flows or Tables
You can also push specific flows or tables to Git without pushing everything.
You can only push published flows to Git.
1. Navigate to your flows or tables and select the items you want to push.
2. Click the **Push to Git** option.
3. Provide a commit message describing what you're pushing.
4. Click **Push** to send the selected items to Git.
## Rolling Back a Release
If something goes wrong after applying a release, you can easily roll back to a previous state.
1. Locate the release you want to roll back to in the releases list.
2. Click the rollback icon (↩) next to the release.
3. Review the changes that will be applied to restore that release state.
4. Select the changes to include and click **Apply Changes**.
## Release Details
Each release in the list shows:
| Column | Description |
| --------------- | ------------------------------------------------------- |
| **Name** | The name you gave the release |
| **Source** | Where the release came from (Git, Project, or Rollback) |
| **Imported At** | When the release was created |
| **Imported By** | The user who created the release |
Click on any release to view its full details.
## Understanding the Changes Preview
When creating a release, you'll see a preview of all changes:
### Flow Changes
* New flows that will be created
* Existing flows that will be updated
* Flows that will be deleted
### Connection Changes
* New connections are placeholders and must be reconnected after the release
* Renamed connections
### Table Changes
* New, updated, and deleted tables are shown with their respective indicators
## Best Practices
Give your releases meaningful names like "v1.2.0 - Added email notifications" to easily identify them later.
Always review the changes preview carefully before applying a release to avoid unexpected modifications.
If using Git sync, test changes in a development project before deploying to production.
Use the description field to document what changed and why for future reference.
## Permissions
To create and manage releases, you need the **Write Project Release** permission. Contact your instance administrator if you don't have access to the releases feature.
## Troubleshooting
* The Environments feature must be enabled on your instance plan to use Git sync. Contact your instance administrator to upgrade your plan or enable this feature.
* Verify your SSH private key is correctly formatted (it should end with a newline) and has an empty passphrase.
* Ensure the remote URL is in SSH format (not HTTPS).
* Check that the branch exists in the repository.
* If no changes appear when creating a release, your current project is already in sync with the source.
* After applying a release with new connections, navigate to the Connections page and reconnect them with valid credentials.
* If a push fails, make sure your Git settings are configured and, when selecting flows, that they are published.
* To enable Environments, navigate to **Project Settings** from the sidebar, then click **Environment**. If you don't see this option, the Environments feature may not be enabled for your instance.
# SCIM Overview
Source: https://www.activepieces.com/docs/admin-guide/guides/scim/overview
Automate user lifecycle management with SCIM in Activepieces.
## What is SCIM?
SCIM (System for Cross-domain Identity Management) lets your identity provider automatically create, update, and deactivate users in Activepieces.
## What SCIM Handles
* **Provision users** when they are assigned in your identity provider
* **Sync profile updates** such as name or email changes
* **Deprovision users** when access is removed or accounts are deactivated
## SCIM and SSO
SCIM handles user lifecycle management.\
SSO handles sign-in and authentication.
You can use both together so user access is managed centrally in your identity provider.
## Supported Providers
* [SCIM with Okta](/docs/admin-guide/guides/scim/providers/okta): configure SCIM provisioning from Okta to Activepieces.
## Configuration
| Variable | Description | Default |
| ------------------------------ | ------------------------------------------------------------------------------------------------------------------ | -------- |
| `AP_SCIM_DEFAULT_PROJECT_ROLE` | The project role assigned to members when added via SCIM group sync. Accepted values: `Admin`, `Editor`, `Viewer`. | `Editor` |
# SCIM with Okta
Source: https://www.activepieces.com/docs/admin-guide/guides/scim/providers/okta
Configure SCIM provisioning from Okta to Activepieces.
## Prerequisites
Before you start, make sure you have:
* **Admin access** to your Activepieces platform
* **Admin access** to your Okta tenant
* SSO already configured (recommended): [SAML with Okta](/docs/admin-guide/guides/sso#saml-with-okta)
* An API key generated from the `/platform/security/api-keys` route in the Activepieces app
## Configure SCIM Connection in Okta
In Okta Admin Console, open your Activepieces application (created in [SSO step](/docs/admin-guide/guides/sso#saml-with-okta)).
In the app's **General** tab, enable **SCIM Provisioning**.
A **Provisioning** tab will become visible; open it and set:
* **SCIM base URL** to `https://your-activepieces-domain/api/v1/scim/v2`
* **Unique identifier field** to `userName`
* **Authentication mode** to `HTTP Header`
* **Authorization** to `Bearer` followed by the API key you generated in the prerequisites
Under **Supported provisioning actions**, all **Push** actions are supported.
Click **Test Connector Configuration** and confirm the test passes.
## Configure Attribute Mapping
In **Provisioning -> To App -> Attribute Mappings**, map these fields:
| Activepieces (SCIM) | Okta Value |
| ------------------- | ------------------ |
| `userName` | `user.email` |
| `givenName` | `user.firstName` |
| `familyName` | `user.lastName` |
| `email` | `user.email` |
| `displayName` | `user.displayName` |
## Platform role mapping
By default, provisioned users are given the `Member` role on the platform. To assign platform roles from Okta, follow these steps:
In Okta admin console, navigate to **Directory -> Profile Editor -> Your-Application User**.
Click **Add Attribute** and fill form with:
| Field | Value |
| -------------------- | -------------------------------------------------------------------- |
| `Display name` | `platformRole` |
| `Variable name` | `platformRole` |
| `External name` | `platformRole` |
| `External namespace` | `urn:ietf:params:scim:schemas:activepieces:1.0:CustomUserAttributes` |
| `Enum` | `enabled` |
For **Attribute members**, add:
| Display name | Value |
| ------------ | ---------- |
| `ADMIN` | `ADMIN` |
| `MEMBER` | `MEMBER` |
| `OPERATOR` | `OPERATOR` |
Finally click save.
This step assumes you already have a field in the Okta user profile that you can map to `platformRole` in your Activepieces user profile. If you don't, create a new field in **Directory -> Profile Editor -> User (default)**.
* Back to your Activepieces application page in **Provisioning -> To App -> Attribute Mappings**
* Scroll down and click **Show Unmapped Attributes**
* Edit `platformRole` field
* Here you need to map the attribute value from your Okta user profile. If you already have a role field in the Okta user profile that matches exactly with a platformRole value (`ADMIN`, `MEMBER`, `OPERATOR`) then you can select it directly with `Map from Okta Profile` option, otherwise you can use an [Expression](https://developer.okta.com/docs/reference/okta-expression-language/) to return one of the 3 roles based on other fields in the Okta user profile.
Here is an example of an expression (using a hypothetical `user.department` field; adapt it to whichever Okta profile field encodes the role): `user.department == "Engineering" ? "ADMIN" : "MEMBER"`
Make sure the expression always returns one of `ADMIN`, `MEMBER`, or `OPERATOR`.
## Provision and Deprovision Users
### Provision
In the Activepieces application page, go to **Provisioning -> To App** and enable the actions you want to be applied to Activepieces when changes occur in Okta.
Now in the **Assignments** tab you can:
* Choose to provision individual users or groups. Note that groups in Okta will be projects in Activepieces.
* In case you don't have groups and you want to provision your Okta users at once, you can assign the `Everyone` group.
* When editing/creating users in an assigned group (including `Everyone`), they should be updated in Activepieces.
* To push groups to Activepieces, go to the **Push groups** tab and click on the push button, find the group and save.
By default, users are added to projects with the `Editor` role (configurable via `AP_SCIM_DEFAULT_PROJECT_ROLE`). There is currently no way to map the project role from Okta.
Users created in Activepieces will receive a welcome email; the link in it redirects them to sign in with `SAML`.
### Deprovision
A user's state switches to `INACTIVE` in Activepieces only when the user is deactivated in Okta. Suspending or deleting a user in Okta is not reflected in Activepieces, due to Okta's design.
To delete a group, go to the **Push groups** tab, click the button in the **Push Status** column, select **Unlink pushed group**, and choose **Delete the group in target**.
Deleting a group will delete the whole project in Activepieces with its flows and connections. Users linked to that group won't be affected.
## Troubleshooting
* Confirm SCIM base URL is correct.
* Ensure the `Authorization` header uses `Bearer` format.
* Ensure users are assigned to the Okta app.
* Confirm provisioning actions are enabled in Okta.
* Recheck mappings in **Provisioning -> To App**.
* Ensure `userName` uses a stable unique value (usually email).
# AWS Secrets Manager
Source: https://www.activepieces.com/docs/admin-guide/guides/secret-managers/aws
Connect AWS Secrets Manager to Activepieces for centralized secret management
AWS Secrets Manager helps you protect access to your applications, services, and IT resources. This integration uses **IAM user credentials** (Access Key + Secret Key) to authenticate directly with AWS Secrets Manager.
## Prerequisites
* An AWS account with permissions to create IAM users and policies
* Permissions to create and manage secrets in AWS Secrets Manager
## Step 1 — Create an IAM policy for Secrets Manager access
Create an IAM policy that grants read access to the secrets Activepieces will retrieve.
1. Open the [IAM console → Policies → Create policy](https://console.aws.amazon.com/iam/home#/policies\$new?step=edit).
2. Switch to the **JSON** tab and paste:
```json theme={null}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:ListSecrets",
"secretsmanager:DescribeSecret"
],
"Resource": "*"
}
]
}
```
For production, scope `Resource` to the specific secret ARNs Activepieces needs instead of using `"*"`.
3. Click **Next**, name the policy (e.g. `ActivepiecesSecretsReadOnly`), and create it.
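For example, a production policy might replace the wildcard with specific secret ARNs. Secrets Manager appends a random six-character suffix to each secret's ARN, so a trailing wildcard is commonly used (the account ID and secret names below are placeholders):

```json theme={null}
"Resource": [
  "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-app/stripe-*",
  "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-app/database-*"
]
```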
## Step 2 — Create an IAM user and attach the policy
1. Open the [IAM console → Users → Create user](https://console.aws.amazon.com/iam/home#/users\$new).
2. Enter a username (e.g. `activepieces-secrets-user`) and click **Next**.
3. Select **Attach policies directly**, find and attach the policy created in Step 1, then click **Next** and **Create user**.
4. Open the newly created user, go to the **Security credentials** tab, and click **Create access key**.
5. Select **Application running outside AWS**, click **Next**, then **Create access key**.
6. Copy the **Access Key** and **Secret Key** — you will need both in the next step.
## Step 3 — Connect in Activepieces
1. Go to **Platform Admin → Security → Secret Managers**.
2. Select **AWS Secrets Manager** from the provider list.
3. Enter the connection details:
* **Access Key** — the Access Key ID from Step 2 (e.g. `AKIAIOSFODNN7EXAMPLE`).
* **Secret Key** — the Secret Access Key from Step 2.
* **Region** — the AWS region where your secrets are stored (e.g. `us-east-1`).
4. Click **Connect** to test and save the connection.
## Using AWS Secrets Manager in connections
When configuring a global connection that requires credentials:
1. Click the **key icon** (🔑) next to the credential field.
2. Select **AWS Secrets Manager** as the secret manager.
3. Fill in:
* **Secret Name** — the friendly name of the secret in AWS Secrets Manager.
* **Secret Json key** — the key inside the secret's JSON value whose entry should be used (for secrets stored as JSON key/value pairs).
Activepieces will use the configured credentials to retrieve the secret value and inject it into the connection at runtime.
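To illustrate how the **Secret Name** and **Secret Json key** fields fit together, here is a sketch in Python of resolving one key from a JSON-formatted secret payload (this is not Activepieces' actual code; the secret name and values are hypothetical):

```python theme={null}
import json

def resolve_secret_value(secret_string: str, json_key: str) -> str:
    """Pick one value out of a secret stored as a JSON object.

    AWS Secrets Manager returns the whole secret as a single string
    (SecretString); the JSON key selects one entry from it.
    """
    payload = json.loads(secret_string)
    if json_key not in payload:
        raise KeyError(f"key {json_key!r} not found in secret payload")
    return payload[json_key]

# Hypothetical payload as returned for a secret named "my-app/credentials"
secret_string = '{"api_key": "sk_live_example", "db_password": "hunter2"}'
print(resolve_secret_value(secret_string, "api_key"))  # -> sk_live_example
```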
If you update an existing secret and the change isn't reflected, refer to [caching](/docs/admin-guide/guides/secret-managers/overview#caching).
# CyberArk Conjur
Source: https://www.activepieces.com/docs/admin-guide/guides/secret-managers/cyberark-conjur
Connect CyberArk Conjur to Activepieces for centralized secret management
CyberArk Conjur is a secrets management solution that provides secure storage and access to credentials. Integration with Activepieces uses **host/API key authentication**: Activepieces authenticates as a Conjur host, receives a short-lived token, and uses it to retrieve secrets for which that host has `read` and `execute` permissions.
Conjur policies are defined in `.yml` files. For recommended structure and patterns, see [Policy best practices](https://docs.cyberark.com/conjur-enterprise/13.0/en/Content/Operations/Policy/policy-best-practices.htm) in the CyberArk Conjur documentation. For policy syntax and operators, see the [Policy syntax](https://docs.cyberark.com/conjur-open-source/Latest/en/Content/Operations/Policy/policy-syntax.htm) reference.
## Prerequisites
* A Conjur server (Conjur Cloud, Conjur Enterprise, or Conjur Open Source)
* A Conjur policy that defines a host for Activepieces and grants it access to the variables you want to use
## Conjur host configuration for Activepieces
To allow Activepieces to read secrets, configure a Conjur policy that declares a group, variables, a host, a layer, and the right permissions. The steps below describe how to create that policy file.
### Example policy (Activepieces)
The following example defines a policy `activepieces` with a group, two variables, a host, a layer, and the grants that allow the host to read the variables.
```yaml theme={null}
- !policy
id: activepieces
body:
- !group activepieces-secrets
- &variables
- !variable
id: key-1
kind: password
- !variable
id: key-2
kind: password
- !permit
role: !group /activepieces/activepieces-secrets
privileges: [read, update, execute]
resources: *variables
- !host activepieces
- !layer activepieces
- !grant
role: !layer activepieces
members:
- !host activepieces
- !grant
role: !group activepieces-secrets
member: !layer activepieces
```
### Policy steps (summary)
1. **Declare a group** at the root of the policy (e.g. `activepieces-secrets`). This group will be allowed to read and fetch (`execute`) the variables.
2. **Declare variables** and give the group `read` and `execute` on them (so the host can fetch secret values):
```yaml theme={null}
- &variables
- !variable
id: my-secret
kind: password
- !permit
role: !group /your-policy/your-group
privileges: [read, execute]
resources: *variables
```
3. **Declare the host** that Activepieces will use (e.g. `activepieces`) and a **layer** (e.g. `activepieces`), and add the host to the layer:
```yaml theme={null}
- !host activepieces
- !layer activepieces
- !grant
role: !layer activepieces
members:
- !host activepieces
```
4. **Grant the layer membership in the group** that has access to the variables:
```yaml theme={null}
- !grant
role: !group activepieces-secrets
member: !layer activepieces
```
5. **Load the policy** into Conjur. Conjur will create the host and return an **API key** for that host. You will use this API key and the host identity when connecting Activepieces.
After loading the policy, Conjur returns something like:
```json theme={null}
{
"created_roles": {
"conjur:host:activepieces/activepieces": {
"id": "conjur:host:activepieces/activepieces",
"api_key": ""
}
},
"version": 1
}
```
Store the **api\_key** securely; you will enter it in Activepieces as the **API Key**.
## Server URL and organization
* **Conjur Cloud**: Use a URL of the form\
  `https://<subdomain>.secretsmgr.cyberark.cloud/api`\
  and set **Organization account name** to `conjur` unless your Cloud tenant uses a different account.
* **On-prem / Enterprise**: Use your Conjur server base URL (e.g. `https://conjur.example.com`) and your organization account name.
## Connecting to Activepieces
1. Go to **Platform Admin → Security → Secret Managers**.
2. Select **CyberArk Conjur** from the provider list.
3. Enter the connection details:
* **URL**: Conjur server URL (e.g. `https://conjur.example.com` or Conjur Cloud URL above). Do not add a trailing slash.
* **Organization account name**: Your Conjur account (e.g. `conjur` for Conjur Cloud).
* **Login ID**: For host authentication this must be the Conjur host ID with a `host/` prefix, e.g. `host/activepieces/activepieces` (policy id and host name as in your policy).
* **API Key**: The host API key returned when the host was created (see policy load response above).
4. Click **Connect** to test and save the connection.
## Using CyberArk Conjur secrets in connections
When configuring a connection that uses a secret:
1. Click the **key icon** (🔑) next to the credential field.
2. Select a **CyberArk Conjur** connection from the list.
3. Enter the **Secret key**: the Conjur variable path in the form `policy_id/variable_id`.\
For the example policy above, use:
* `activepieces/key-1`
* `activepieces/key-2`
Activepieces will authenticate as the configured host and retrieve the secret from Conjur when the flow runs.
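As a rough sketch of what happens under the hood, based on Conjur's public REST API rather than Activepieces' internals: the host login ID and the variable path are URL-encoded into the authenticate and retrieve endpoints. The names below follow the example policy above; everything else is illustrative:

```python theme={null}
from urllib.parse import quote

def conjur_authn_url(base_url: str, account: str, login_id: str) -> str:
    # Authenticate endpoint: POST the host's API key to this URL to get a
    # short-lived access token. The login ID (including the "host/" prefix)
    # must be URL-encoded as a single path segment.
    return f"{base_url}/authn/{account}/{quote(login_id, safe='')}/authenticate"

def conjur_secret_url(base_url: str, account: str, variable_id: str) -> str:
    # Retrieve endpoint: GET with the access token to fetch the secret value.
    return f"{base_url}/secrets/{account}/variable/{quote(variable_id, safe='')}"

print(conjur_authn_url("https://conjur.example.com", "conjur",
                       "host/activepieces/activepieces"))
# -> https://conjur.example.com/authn/conjur/host%2Factivepieces%2Factivepieces/authenticate
print(conjur_secret_url("https://conjur.example.com", "conjur",
                        "activepieces/key-1"))
# -> https://conjur.example.com/secrets/conjur/variable/activepieces%2Fkey-1
```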
If you update an existing secret and the change isn't reflected, refer to [caching](/docs/admin-guide/guides/secret-managers/overview#caching).
# HashiCorp Vault
Source: https://www.activepieces.com/docs/admin-guide/guides/secret-managers/hashicorp
Connect HashiCorp Vault to Activepieces for enterprise-grade secret management
HashiCorp Vault is an enterprise-grade secrets management system that provides secure storage and access to secrets, API keys, passwords, and other sensitive data.
## Prerequisites
Before connecting HashiCorp Vault to Activepieces, ensure you have:
* **HashiCorp Vault Key-value (KV) secrets engine** version 2
* **AppRole auth method** [enabled](https://developer.hashicorp.com/vault/docs/auth/approle)
* **One or more AppRoles** [configured](https://developer.hashicorp.com/vault/docs/auth/approle) with appropriate policies
## Policies
Allow the created AppRole to access your secrets engine(s) by adding the following to its policy:
```
path "sys/mounts" {
  capabilities = [ "read" ]
}
path "<mount>/data/<secret-name>" {
  capabilities = [ "read" ]
}
```
or, to allow reading every secret under a mount:
```
path "sys/mounts" {
  capabilities = [ "read" ]
}
path "<mount>/data/*" {
  capabilities = [ "read" ]
}
```
## Connecting to Activepieces
1. Go to **Platform Admin → Security → Secret Managers**
2. Click **New Connection** and select **HashiCorp Vault**
3. Enter a **Name** for the connection
4. Choose a **Scope** — **Platform** to make it available to all projects, or **Project** to restrict it to specific projects
5. Fill in the connection details:
* **URL**: Your Vault server URL (e.g., `http://localhost:8200`)
* **Role ID**: The Role ID from your AppRole configuration
* **Secret ID**: The Secret ID from your AppRole configuration
* **Namespace** (optional): Vault namespace if using Vault Enterprise namespaces
6. Click **Save** to test and save the connection
## Using HashiCorp Vault Secrets
Once the connection is saved, you can reference Vault secrets inside any piece connection dialog — in global connections (Platform Admin) or directly in the flow builder.
1. Open a connection dialog and click the **key icon** (🔑) next to a credential field
2. Select your HashiCorp Vault connection from the dropdown
3. Enter the secret path in the format: `mount/data/path/to/secret/key`
For example, if you stored a secret with:
```bash theme={null}
vault kv put -mount=secret mysec api_key='supersecret'
```
The path to enter would be:
```
secret/data/mysec/api_key
```
The connection will automatically retrieve the secret from Vault when the flow runs.
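The `mount/data/path/key` format can be read as: everything before `/data/` is the KV v2 mount, everything after it up to the last segment is the secret path, and the last segment is the key inside the secret's data. A sketch of that parsing (an illustration, not Activepieces' actual code):

```python theme={null}
def parse_vault_path(path: str):
    """Split a Vault reference into (mount, secret_path, key).

    Example: "secret/data/mysec/api_key" -> ("secret", "mysec", "api_key"),
    which corresponds to GET <vault-url>/v1/secret/data/mysec and then the
    "api_key" entry of the response's data.data object (KV v2 API).
    """
    mount, _, rest = path.partition("/data/")
    if not rest:
        raise ValueError(f"expected <mount>/data/<path>/<key>, got {path!r}")
    secret_path, _, key = rest.rpartition("/")
    if not secret_path:
        raise ValueError(f"missing key segment in {path!r}")
    return mount, secret_path, key

print(parse_vault_path("secret/data/mysec/api_key"))
# -> ('secret', 'mysec', 'api_key')
```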
If you update a secret in Vault and the change isn't reflected in your flows, the cached value may still be active. Use the **refresh icon** next to the connection in the Secret Managers page to clear its cache immediately, or wait up to 1 hour for it to expire automatically. See [Caching](/docs/admin-guide/guides/secret-managers/overview#caching) for details.
# 1Password
Source: https://www.activepieces.com/docs/admin-guide/guides/secret-managers/onepassword
Connect 1Password to Activepieces for centralized secret management
1Password Secrets Automation lets you securely store and retrieve credentials from 1Password vaults. This integration uses a **service account token** to authenticate with the 1Password SDK.
## Prerequisites
* A 1Password account with the **Teams** or **Business** plan (service accounts require a paid plan)
* Permission to create service accounts in your 1Password account
## Step 1 — Create a service account
1. Sign in to [1password.com](https://1password.com) and go to **Developer Tools → Service Accounts**.
2. Click **Create a service account**.
3. Give it a name (e.g. `Activepieces`).
4. Grant it **Read Items** access on the vaults Activepieces needs to access.
5. Click **Create service account** and copy the token — it starts with `ops_` and is shown only once.
Store the token securely. Once you leave the page, 1Password will not show it again.
## Step 2 — Connect in Activepieces
1. Go to **Platform Admin → Security → Secret Managers**.
2. Select **1Password** from the provider list.
3. Paste the **Service Account Token** from Step 1.
4. Click **Connect** to test and save the connection.
## Using 1Password in connections
When configuring a global connection that requires credentials:
1. Click the **key icon** (🔑) next to the credential field.
2. Select a **1Password** connection from the list.
3. Enter the **Secret Reference** in the format:
```
op://vault/item/field
```
* **vault** — the vault name or ID (e.g. `Production`)
* **item** — the item title or ID (e.g. `Stripe`)
* **field** — the field label within the item (e.g. `password`, `api key`)
Activepieces will retrieve the secret value from 1Password and inject it into the connection at runtime.
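A sketch of how an `op://` reference breaks down (illustrative parsing, not the 1Password SDK itself):

```python theme={null}
def parse_op_reference(ref: str):
    """Split "op://vault/item/field" into its three components.

    Field labels may contain spaces (e.g. "Secret Key"), so only the
    first two "/" separators after the scheme are significant.
    """
    prefix = "op://"
    if not ref.startswith(prefix):
        raise ValueError(f"not a 1Password secret reference: {ref!r}")
    parts = ref[len(prefix):].split("/", 2)
    if len(parts) != 3 or not all(parts):
        raise ValueError(f"expected op://vault/item/field, got {ref!r}")
    vault, item, field = parts
    return vault, item, field

print(parse_op_reference("op://Production/Stripe/Secret Key"))
# -> ('Production', 'Stripe', 'Secret Key')
```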
## Example secret references
| 1Password location | Reference |
| ------------------------------------------------------- | ----------------------------------- |
| Vault `Production` → Item `Stripe` → Field `Secret Key` | `op://Production/Stripe/Secret Key` |
| Vault `Shared` → Item `Database` → Field `password` | `op://Shared/Database/password` |
If you update an existing secret and the change isn't reflected, refer to [caching](/docs/admin-guide/guides/secret-managers/overview#caching).
# Overview
Source: https://www.activepieces.com/docs/admin-guide/guides/secret-managers/overview
Connect external secret management systems to securely store and retrieve credentials
Secret Managers allow you to integrate external secret management systems with Activepieces, enabling centralized credential management and enhanced security for your global connections.
## Benefits
* **Centralized Management**: Store all credentials in one secure location
* **Enhanced Security**: Credentials are managed by dedicated secret management systems
* **Audit & Compliance**: Track access and changes to secrets
* **Rotation Support**: Easily rotate credentials without updating flows
* **Access Control**: Use your existing secret manager access policies
## Supported Providers
* **[HashiCorp Vault](./hashicorp)** - Enterprise-grade secrets management
* **[CyberArk Conjur](./cyberark-conjur)** - Centralized secrets management with host-based authentication
* **[AWS Secrets Manager](./aws)** - Managed secrets storage on AWS
* **[1Password](./onepassword)** - Consumer and team password manager with Secrets Automation
## How to Connect
1. Go to **Platform Admin → Security → Secret Managers**
2. Click **New Connection**
3. Select the secret manager provider you want to connect
4. Enter a **Name** for the connection
5. Choose a **Scope** (see [Connection Scopes](#connection-scopes) below)
6. Follow the provider-specific setup instructions in the provider documentation
7. Enter the required connection details
8. Click **Save** to test and save the connection
The connection will be encrypted and stored securely. You can edit or delete it at any time from the Secret Managers page.
## Connection Scopes
Each secret manager connection has a **scope** that controls which projects can use it:
| Scope | Description |
| ------------ | ------------------------------------------ |
| **Platform** | Available to all projects on the platform |
| **Project** | Restricted to specific projects you select |
When creating or editing a connection, select **Project** scope and choose the projects that should have access. Platform-scoped connections are always visible to all projects.
## Using Secret Managers in Connection Dialogs
Once connected, you can reference secrets from your secret managers when configuring piece connections:
1. Open a connection dialog (either a global connection or one inside the flow builder)
2. Click the **key icon** (🔑) next to a credential field
3. Select a secret manager connection from the dropdown
4. Enter the secret path/identifier required by your provider (see provider-specific documentation)
5. The connection will automatically retrieve the secret from your secret manager when needed
**Global connections (Platform Admin):** All platform-scoped and project-scoped secret manager connections are available to select.
**Flow builder connections:** Only secret manager connections that are accessible to the current project are shown — this includes platform-scoped connections and project-scoped connections assigned to that project.
## How It Works
When you use a secret manager in a connection:
* The global connection stores a reference to the secret (not the actual credential)
* When the flow runs, Activepieces authenticates with your secret manager and retrieves the secret
* Secrets are fetched on-demand and never stored in Activepieces
* If the secret is updated in your secret manager, flows will use the new value after the cache expires (up to 1 hour), or immediately after clearing the cache
## Caching
Connection checks and retrieved secrets are cached in Redis (encrypted) for **1 hour** to reduce latency and provider API load.
To force a refresh (e.g. after rotating credentials or updating secrets), platform admins can clear the cache per connection using the **refresh icon** next to each connection row in the Secret Managers page.
You can also clear the cache via the API. Omit `connectionId` to clear all cached entries for the platform, or pass a `connectionId` to clear only that connection's cache:
```bash theme={null}
# Clear cache for a specific connection
curl --request DELETE \
  --url 'https://<your-instance>/api/v1/secret-managers/cache?connectionId=<connection-id>' \
  --header 'Authorization: Bearer <api-key>'

# Clear all platform cache entries
curl --request DELETE \
  --url 'https://<your-instance>/api/v1/secret-managers/cache' \
  --header 'Authorization: Bearer <api-key>'
```
## Security Considerations
* **Encryption**: Secret manager authentication configuration is stored encrypted
* **Access Control**: Use your secret manager's access policies to control who can access secrets
* **Network Security**: Ensure your secret manager is accessible from your Activepieces instance
* **Credential Management**: Regularly rotate authentication credentials for secret managers
## Troubleshooting
**Connection Failed:**
* Verify the connection details are correct and accessible
* Check that authentication credentials are valid
* Ensure network connectivity between Activepieces and your secret manager
* Review provider-specific troubleshooting guides
**Secret Not Found:**
* Verify the secret path/name is correct
* Check that the secret exists in your secret manager
* Ensure the authentication credentials have permissions to read the secret
**Permission Denied:**
* Verify the authentication credentials have the necessary permissions
* Check your secret manager's access control policies
* Review audit logs in your secret manager for detailed error information
# Setup AI Providers
Source: https://www.activepieces.com/docs/admin-guide/guides/setup-ai-providers
AI providers are configured by the platform admin to centrally manage credentials and access, making [AI pieces](https://www.activepieces.com/pieces/ai) and their features available to everyone in all projects.
## Supported Providers
* **OpenAI**
* **Anthropic**
* **Gemini**
* **Vercel AI Gateway**
* **Cloudflare AI Gateway**
## How to Setup
Go to **Admin Console** → **AI** page. Add your provider's base URL and API key. These settings apply to all projects.
## Cost Control & Logging
Use an AI gateway like **Vercel AI Gateway** or **Cloudflare AI Gateway** to:
* Set rate limits and budgets
* Log and monitor all AI requests
* Track usage across projects
Just set the gateway URL as your provider's base URL in the Admin Console.
# How to Setup SSO
Source: https://www.activepieces.com/docs/admin-guide/guides/sso
Configure Single Sign-On (SSO) to enable secure, centralized authentication for your Activepieces platform
## Overview
Single Sign-On (SSO) allows your team to authenticate using your organization's existing identity provider, eliminating the need for separate Activepieces credentials. This improves security, simplifies user management, and provides a seamless login experience.
## Prerequisites
Before configuring SSO, ensure you have:
* **Admin access** to your Activepieces platform
* **Admin access** to your identity provider (Google, GitHub, Okta, or JumpCloud)
* The **redirect URL** from your Activepieces SSO configuration screen
## Accessing SSO Configuration
Navigate to **Platform Settings** → **SSO** in your Activepieces admin dashboard to access the SSO configuration screen.
## Enforcing SSO
You can enforce SSO by specifying your organization's email domain. When SSO enforcement is enabled:
* Users with matching email domains must authenticate through the SSO provider
* Email/password login can be disabled for enhanced security
* All authentication is routed through your designated identity provider
We recommend testing SSO with a small group of users before enforcing it organization-wide.
## Supported SSO Providers
Activepieces supports multiple SSO providers to integrate with your existing identity management system.
### Google
Go to the [Google Cloud Console](https://console.cloud.google.com/) and select your project (or create a new one).
Navigate to **APIs & Services** → **Credentials** → **Create Credentials** → **OAuth client ID**.
Select **Web application** as the application type.
Copy the **Redirect URL** from the Activepieces SSO configuration screen and add it to the **Authorized redirect URIs** in Google Cloud Console.
Copy the **Client ID** and **Client Secret** from Google and paste them into the corresponding fields in Activepieces.
Click **Finish** to complete the setup.
### GitHub
Go to [GitHub Developer Settings](https://github.com/settings/developers) → **OAuth Apps** → **New OAuth App**.
Fill in the application details:
* **Application name**: Choose a recognizable name (e.g., "Activepieces SSO")
* **Homepage URL**: Enter your Activepieces instance URL
Copy the **Redirect URL** from the Activepieces SSO configuration screen and paste it into the **Authorization callback URL** field.
Click **Register application** to create the OAuth App.
After registration, click **Generate a new client secret** and copy it immediately (it won't be shown again).
Copy the **Client ID** and **Client Secret** and paste them into the corresponding fields in Activepieces.
Click **Finish** to complete the setup.
### SAML with Okta
Go to the [Okta Admin Portal](https://login.okta.com/) → **Applications** → **Create App Integration**.
Choose **SAML 2.0** as the sign-on method and click **Next**.
Enter an **App name** (e.g., "Activepieces") and optionally upload a logo. Click **Next**.
* **Single sign-on URL**: Copy the SSO URL from the Activepieces configuration screen
* **Audience URI (SP Entity ID)**: Enter `Activepieces`
* **Name ID format**: Select `EmailAddress`
Add the following attribute mappings:
| Name | Value |
| ----------- | ---------------- |
| `firstName` | `user.firstName` |
| `lastName` | `user.lastName` |
| `email` | `user.email` |
Click **Next**, select the appropriate feedback option, and click **Finish**.
Go to the **Sign On** tab → **View SAML setup instructions** or **View IdP metadata**. Copy the Identity Provider metadata XML.
* Paste the **IdP Metadata** XML into the corresponding field
* Copy the **X.509 Certificate** from Okta and paste it into the **Signing Key** field
Click **Save** to complete the setup.
### SAML with Microsoft Entra ID (Azure AD)
Go to the [Azure Portal](https://portal.azure.com/) → **Microsoft Entra ID** → **Enterprise applications** → **New application** → **Create your own application**.
Name it (e.g., "Activepieces") and select **Integrate any other application you don't find in the gallery (Non-gallery)**.
Open the application → **Single sign-on** → select **SAML**.
Edit **Basic SAML Configuration**:
* **Identifier (Entity ID)**: `Activepieces`
* **Reply URL (Assertion Consumer Service URL)**: paste the SSO URL from the Activepieces configuration screen
Edit **Attributes & Claims** and add these additional claims (leave **Namespace** empty):
| Claim name | Source attribute |
| ----------- | ---------------- |
| `firstName` | `user.givenname` |
| `lastName` | `user.surname` |
| `email` | `user.mail` |
In the **SAML Certificates** section, copy the **App Federation Metadata Url**.
You can paste this URL directly into the **IdP Metadata** field in Activepieces — Activepieces will fetch the metadata XML automatically. Alternatively, open the URL in a browser, save the XML, and paste its contents.
Download the **Certificate (Base64)** from the **SAML Certificates** section. Open the file and copy its contents (including the `-----BEGIN CERTIFICATE-----` / `-----END CERTIFICATE-----` markers) into the **Signing Key** field in Activepieces.
Go to **Users and groups** in the application and assign the users or groups that should be allowed to sign in.
Click **Save** in Activepieces to complete the setup.
### SAML with JumpCloud
Go to the [JumpCloud Admin Portal](https://console.jumpcloud.com/) → **SSO Applications** → **Add New Application** → **Custom SAML App**.
Copy the **ACS URL** from the Activepieces configuration screen and paste it into the **ACS URLs** field in JumpCloud.
Set the **SP Entity ID** (Audience URI) to `Activepieces`.
Configure the following attribute mappings:
| Service Provider Attribute | JumpCloud Attribute |
| -------------------------- | ------------------- |
| `firstName` | `firstname` |
| `lastName` | `lastname` |
| `email` | `email` |
JumpCloud does not include the `HTTP-Redirect` binding by default. You **must** enable this option.
Without HTTP-Redirect binding, the SSO integration will not work correctly.
Click **Save**, then refresh the page and click **Export Metadata**.
Verify that the exported XML contains `Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"` to ensure the binding was properly enabled.
Paste the exported metadata XML into the **IdP Metadata** field in Activepieces.
Locate the `<X509Certificate>` element in the IdP metadata and extract its value. Format it as a PEM certificate:
```
-----BEGIN CERTIFICATE-----
[PASTE THE CERTIFICATE VALUE HERE]
-----END CERTIFICATE-----
```
Paste this into the **Signing Key** field.
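If you prefer to script the PEM wrapping, it is just the raw base64 value re-flowed at 64 characters per line between the BEGIN/END markers. A minimal sketch (any base64 blob works here; the input shown is a placeholder):

```python theme={null}
import textwrap

def to_pem_certificate(b64_value: str) -> str:
    """Wrap a raw base64 certificate value from IdP metadata as PEM."""
    body = "".join(b64_value.split())   # strip any whitespace/newlines
    lines = textwrap.wrap(body, 64)     # PEM body lines are 64 chars wide
    return "\n".join(["-----BEGIN CERTIFICATE-----", *lines,
                      "-----END CERTIFICATE-----"])

# Replace the argument with the value copied from the metadata element
print(to_pem_certificate("MIIDdzCCAl+gAwIBAgI..."))
```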
In JumpCloud, assign the application to the appropriate users or user groups.
Click **Finish** to complete the setup.
## Troubleshooting
* Verify the redirect URL is correctly configured in your identity provider
* Ensure users are assigned to the application in your identity provider
* Check that email domains match the SSO enforcement settings
* Confirm the IdP metadata is complete and correctly formatted
* If you pasted a metadata URL, make sure it is publicly reachable (Activepieces fetches it server-side)
* Verify the signing certificate is properly formatted with BEGIN/END markers
* Ensure all required attributes (firstName, lastName, email) are mapped
* Enable the HTTP-Redirect binding option in JumpCloud
* Re-export the metadata after enabling the binding
* Verify the binding appears in the exported XML
## Need Help?
If you encounter issues during SSO setup, please contact our enterprise support or [sales team](https://www.activepieces.com/sales).
# How to Structure Projects
Source: https://www.activepieces.com/docs/admin-guide/guides/structure-projects
Projects in Activepieces are the main units for organizing your automations and resources within your organization. Every project contains its own flows, connections, and tables. Access to these resources is shared among everyone who has access to that project.
There are two types of projects:
* **Personal Projects**: Each user invited to your organization automatically receives a personal project. This is a private space where only that user can create and manage flows, connections, and tables.
* **Team Projects**: Team projects are shared spaces that can be created and managed from this page. Multiple users can be invited to a team project, allowing them to collaborate, share access to flows, connections, and tables, and work together.
When organizing your work, create team projects for group collaboration and utilize personal projects for individual or private tasks.
# Connection Deleted
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/connection-deleted
# Connection Upserted
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/connection-upserted
# Flow Activated
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/flow-activated
# Flow Created
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/flow-created
# Flow Deactivated
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/flow-deactivated
# Flow Deleted
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/flow-deleted
# Flow Published
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/flow-published
# Flow Run Finished
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/flow-run-finished
# Flow Run Started
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/flow-run-started
# Flow Updated
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/flow-updated
# Folder Created
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/folder-created
# Folder Deleted
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/folder-deleted
# Folder Updated
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/folder-updated
# Overview
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/overview
Track every action on your platform and forward events to your tools
## What are audit events?
Activepieces records an audit event for every meaningful action on the platform — a flow gets created, a run finishes, a user signs in, a connection is saved, a signing key is rotated, and so on. Each event captures what happened, who did it, when, and the project or platform it belongs to.
You can do two things with these events: **view them in the audit log table** for compliance and forensics, or **stream them to a webhook** to react in real time.
## View them in the audit log
Open **Platform Admin → Security → Audit Logs** to browse the full table. Filter by action, user, project, or date range to find a specific event, and click any row to see the full payload.
## Forward them to your tools
To react to events in real time — get a Slack alert when a flow fails, forward audit logs to your SIEM, send a daily digest of new sign-ups — use [Event Streaming](/docs/admin-guide/guides/event-streaming). It POSTs each event to a webhook URL of your choice, which you can point at an external system or at an internal handler flow that routes each event to the channel you care about.
## Event catalog
The full list of documented events. We add new ones as the platform grows; the source of truth for every event we emit is the [event schema in the codebase](https://github.com/activepieces/activepieces/blob/main/packages/shared/src/lib/ee/audit-events/index.ts).
### Flows
* [Flow created](/docs/admin-guide/security/audit-logs/flow-created)
* [Flow updated](/docs/admin-guide/security/audit-logs/flow-updated)
* [Flow deleted](/docs/admin-guide/security/audit-logs/flow-deleted)
### Flow runs
* [Flow run started](/docs/admin-guide/security/audit-logs/flow-run-started)
* [Flow run finished](/docs/admin-guide/security/audit-logs/flow-run-finished)
### Folders
* [Folder created](/docs/admin-guide/security/audit-logs/folder-created)
* [Folder updated](/docs/admin-guide/security/audit-logs/folder-updated)
* [Folder deleted](/docs/admin-guide/security/audit-logs/folder-deleted)
### Connections
* [Connection saved](/docs/admin-guide/security/audit-logs/connection-upserted)
* [Connection deleted](/docs/admin-guide/security/audit-logs/connection-deleted)
### Users
* [User signed up](/docs/admin-guide/security/audit-logs/user-signed-up)
* [User signed in](/docs/admin-guide/security/audit-logs/user-signed-in)
* [User email verified](/docs/admin-guide/security/audit-logs/user-email-verified)
* [User password reset](/docs/admin-guide/security/audit-logs/user-password-reset)
### Platform
* [Signing key created](/docs/admin-guide/security/audit-logs/signing-key-created)
## See also
* [Event Streaming](/docs/admin-guide/guides/event-streaming) — forward audit events to a webhook URL
# Signing Key Created
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/signing-key-created
# User Email Verified
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/user-email-verified
# User Password Reset
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/user-password-reset
# User Signed In
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/user-signed-in
# User Signed Up
Source: https://www.activepieces.com/docs/admin-guide/security/audit-logs/user-signed-up
# Security & Data Practices
Source: https://www.activepieces.com/docs/admin-guide/security/practices
We prioritize security and follow these practices to keep information safe.
## External Systems Credentials
**Storing Credentials**
All credentials are stored with 256-bit encryption keys, and there is no API to retrieve them for the user. They are sent only during processing, after which access is revoked from the engine.
**Data Masking**
We implement a robust data masking mechanism where third-party credentials or any sensitive information are systematically censored within the logs, guaranteeing that sensitive information is never stored or documented.
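To illustrate the idea (this is not Activepieces' actual masking code), a recursive redaction pass over a log payload might look like the following sketch; the list of sensitive key names is an assumption:

```typescript
// Illustrative sketch of log masking: recursively replace values whose key
// looks sensitive before the payload is written to logs.
const SENSITIVE = ['password', 'token', 'apikey', 'secret', 'authorization'];

function maskSecrets(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(maskSecrets);
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        // Normalize the key (lowercase, strip separators) before matching.
        SENSITIVE.includes(k.toLowerCase().replace(/[_-]/g, ''))
          ? [k, '**REDACTED**']
          : [k, maskSecrets(v)],
      ),
    );
  }
  return value;
}
```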
**OAuth2**
Integrations with third parties are always done using OAuth2, with a limited number of scopes when third-party support allows.
## Vulnerability Disclosure
Activepieces is an open-source project that welcomes contributors to test and report security issues.
For detailed information about our security policy, please refer to our GitHub Security Policy at: [https://github.com/activepieces/activepieces/security/policy](https://github.com/activepieces/activepieces/security/policy)
## Access and Authentication
**Role-Based Access Control (RBAC)**
To manage user access, we utilize Role-Based Access Control (RBAC). Team admins assign roles to users, granting them specific permissions to access and interact with projects, folders, and resources. RBAC allows for fine-grained control, enabling administrators to define and enforce access policies based on user roles.
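A toy sketch of how a role-based check works in principle (the roles and permission strings here are hypothetical, not Activepieces' actual role model):

```typescript
// Toy RBAC sketch: each role maps to a set of permissions, and access is
// granted only if the role's set contains the requested permission.
type Role = 'admin' | 'editor' | 'viewer';

const PERMISSIONS: Record<Role, ReadonlySet<string>> = {
  admin: new Set(['flow.read', 'flow.write', 'project.manage']),
  editor: new Set(['flow.read', 'flow.write']),
  viewer: new Set(['flow.read']),
};

function can(role: Role, permission: string): boolean {
  return PERMISSIONS[role].has(permission);
}
```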
**Single Sign-On (SSO)**
Implementing Single Sign-On (SSO) serves as a pivotal component of our security strategy. SSO streamlines user authentication by allowing them to access Activepieces with a single set of credentials. This not only enhances user convenience but also strengthens security by reducing the potential attack surface associated with managing multiple login credentials.
**Audit Logs**
We maintain comprehensive audit logs to track and monitor all access activities within Activepieces. This includes user interactions, system changes, and other relevant events. Our meticulous logging helps identify security threats and ensures transparency and accountability in our security measures.
**Password Policy Enforcement**
Users log in to Activepieces using a password known only to them. Activepieces enforces password length and complexity standards. Passwords are not stored; instead, only a secure hash of the password is stored in the database.
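The hash-only storage pattern described above can be sketched with Node's built-in `scrypt`; this is an illustrative pattern, not Activepieces' exact scheme:

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from 'node:crypto';

// Illustrative sketch: store a random salt plus the scrypt hash,
// never the password itself.
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString('hex');
  const hash = scryptSync(password, salt, 64).toString('hex');
  return `${salt}:${hash}`;
}

// Verification re-derives the hash from the stored salt and compares
// in constant time to avoid timing side channels.
function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(':');
  const candidate = scryptSync(password, salt, 64).toString('hex');
  return timingSafeEqual(Buffer.from(hash, 'hex'), Buffer.from(candidate, 'hex'));
}
```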
## Privacy & Data
**Supported Cloud Regions**
Presently, our cloud services are available in Germany as the supported data region.
We have plans to expand to additional regions in the near future.
If you opt for **self-hosting**, the available regions will depend on where you choose to host.
**Policy**
To better understand how we handle your data and prioritize your privacy, please take a moment to review our [Privacy Policy](https://www.activepieces.com/privacy). This document outlines in detail the measures we take to safeguard your information and the principles guiding our approach to privacy and data protection.
# Create Action
Source: https://www.activepieces.com/docs/build-pieces/building-pieces/create-action
## Action Definition
Now let's create the first action, which fetches a random ice cream flavor.
```bash theme={null}
npm run cli actions create
```
You will be asked three questions to define your new action:
1. `Piece Folder Name`: This is the name associated with the folder where the action resides. It helps organize and categorize actions within the piece.
2. `Action Display Name`: The name users see in the interface, conveying the action's purpose clearly.
3. `Action Description`: A brief, informative text in the UI, guiding users about the action's function and purpose.
Next, let's create the action file:
**Example:**
```bash theme={null}
npm run cli actions create
? Enter the piece folder name : gelato
? Enter the action display name : get icecream flavor
? Enter the action description : fetches random icecream flavor.
```
This will create a new TypeScript file named `get-icecream-flavor.ts` in the `packages/pieces/community/gelato/src/lib/actions` directory.
Inside this file, paste the following code:
```typescript theme={null}
import {
  createAction,
  Property,
  PieceAuth,
} from '@activepieces/pieces-framework';
import { httpClient, HttpMethod } from '@activepieces/pieces-common';
import { gelatoAuth } from '../..';

export const getIcecreamFlavor = createAction({
  name: 'get_icecream_flavor', // Must be unique across the piece; this shouldn't be changed.
  auth: gelatoAuth,
  displayName: 'Get Icecream Flavor',
  description: 'Fetches random icecream flavor',
  props: {},
  async run(context) {
    const res = await httpClient.sendRequest({
      method: HttpMethod.GET,
      url: 'https://cloud.activepieces.com/api/v1/webhooks/RGjv57ex3RAHOgs0YK6Ja/sync',
      headers: {
        Authorization: context.auth, // Pass API key in headers
      },
    });
    return res.body;
  },
});
```
The `createAction` function takes an object with several properties, including the `name`, `displayName`, `description`, `props`, and the `run` function of the action.
The `name` property is a unique identifier for the action. The `displayName` and `description` properties provide a human-readable name and description for the action.
The `props` property is an object that defines the inputs the action requires from the user. In this case, the action doesn't require any.
The `run` function is called when the action is executed. It takes a single argument, `context`, which contains the values of the action's properties.
Here, the `run` function uses `httpClient.sendRequest` to make a GET request that fetches a random ice cream flavor, passing the API key in the request headers for authentication, and returns the response body.
## Expose The Definition
To make the action readable by Activepieces, add it to the array of actions in the piece definition.
```typescript theme={null}
import { createPiece } from '@activepieces/pieces-framework';
// Don't forget to add the following import.
import { getIcecreamFlavor } from './lib/actions/get-icecream-flavor';
export const gelato = createPiece({
displayName: 'Gelato',
logoUrl: 'https://cdn.activepieces.com/pieces/gelato.png',
authors: [],
auth: gelatoAuth,
// Add the action here.
actions: [getIcecreamFlavor], // <--------
triggers: [],
});
```
## Testing
By default, the development setup only builds specific components. Open the file `.env.dev` and include "gelato" in the `AP_DEV_PIECES`.
For more details, check out the [Piece Development](/docs/build-pieces/building-pieces/development-setup#pieces-development) section.
Once you edit the environment variable, restart the backend. The piece will be rebuilt. After this process, you'll need to **refresh** the frontend to see the changes.
If the build fails, try debugging by running `npx turbo run build --filter=@activepieces/piece-gelato`.
It will display any errors in your code.
To test the action, use the flow builder in Activepieces. It should function as shown in the screenshot.
# Create Trigger
Source: https://www.activepieces.com/docs/build-pieces/building-pieces/create-trigger
This tutorial will guide you through the process of creating a trigger for the Gelato piece that fetches newly created ice cream flavors.
## Trigger Definition
To create a trigger, run the following command:
```bash theme={null}
npm run cli triggers create
```
1. `Piece Folder Name`: This is the name associated with the folder where the trigger resides. It helps organize and categorize triggers within the piece.
2. `Trigger Display Name`: The name users see in the interface, conveying the trigger's purpose clearly.
3. `Trigger Description`: A brief, informative text in the UI, guiding users about the trigger's function and purpose.
4. `Trigger Technique`: Specifies the trigger type - either [polling](../piece-reference/triggers/polling-trigger) or [webhook](../piece-reference/triggers/webhook-trigger).
**Example:**
```bash theme={null}
npm run cli triggers create
? Enter the piece folder name : gelato
? Enter the trigger display name : new flavor created
? Enter the trigger description : triggers when a new icecream flavor is created.
? Select the trigger technique: polling
```
This will create a new TypeScript file at `packages/pieces/community/gelato/src/lib/triggers` named `new-flavor-created.ts`.
Inside this file, paste the following code:
```ts theme={null}
import { gelatoAuth } from '../../';
import {
  DedupeStrategy,
  HttpMethod,
  HttpRequest,
  Polling,
  httpClient,
  pollingHelper,
} from '@activepieces/pieces-common';
import {
  TriggerStrategy,
  createTrigger,
  AppConnectionValueForAuthProperty,
} from '@activepieces/pieces-framework';
import dayjs from 'dayjs';

const polling: Polling<
  AppConnectionValueForAuthProperty,
  Record<string, never>
> = {
  strategy: DedupeStrategy.TIMEBASED,
  items: async ({ auth, propsValue, lastFetchEpochMS }) => {
    const request: HttpRequest = {
      method: HttpMethod.GET,
      url: 'https://cloud.activepieces.com/api/v1/webhooks/aHlEaNLc6vcF1nY2XJ2ed/sync',
      headers: {
        authorization: auth,
      },
    };
    const res = await httpClient.sendRequest(request);
    return res.body['flavors'].map((flavor: string) => ({
      epochMilliSeconds: dayjs().valueOf(),
      data: flavor,
    }));
  },
};

export const newFlavorCreated = createTrigger({
  auth: gelatoAuth,
  name: 'newFlavorCreated',
  displayName: 'new flavor created',
  description: 'triggers when a new icecream flavor is created.',
  props: {},
  sampleData: {},
  type: TriggerStrategy.POLLING,
  async test(context) {
    return await pollingHelper.test(polling, context);
  },
  async onEnable(context) {
    const { store, auth, propsValue } = context;
    await pollingHelper.onEnable(polling, { store, auth, propsValue });
  },
  async onDisable(context) {
    const { store, auth, propsValue } = context;
    await pollingHelper.onDisable(polling, { store, auth, propsValue });
  },
  async run(context) {
    return await pollingHelper.poll(polling, context);
  },
});
```
The way polling triggers usually work is as follows:
`Run`: The `run` method executes every 5 minutes, fetching data from the endpoint within a specified timestamp range or continuing until it identifies the last item ID, and then returns the new items as an array. In this example, `httpClient.sendRequest` retrieves the flavors, which are returned along with a timestamp so the dedupe logic can tell which ones are new.
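Conceptually, the `TIMEBASED` dedupe strategy keeps only the items whose `epochMilliSeconds` is newer than the last fetch. Here is a self-contained sketch of that idea; the real logic lives inside `pollingHelper`:

```typescript
// Sketch of TIMEBASED dedupe: given the items returned by `items()` and the
// timestamp of the previous poll, keep only the items that are new.
type PolledItem<T> = { epochMilliSeconds: number; data: T };

function newItemsSince<T>(
  items: PolledItem<T>[],
  lastFetchEpochMS: number,
): T[] {
  return items
    .filter((item) => item.epochMilliSeconds > lastFetchEpochMS)
    .map((item) => item.data);
}
```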
## Expose The Definition
To make the trigger readable by Activepieces, add it to the array of triggers in the piece definition.
```typescript theme={null}
import { createPiece } from '@activepieces/pieces-framework';
import { getIcecreamFlavor } from './lib/actions/get-icecream-flavor';
// Don't forget to add the following import.
import { newFlavorCreated } from './lib/triggers/new-flavor-created';

export const gelato = createPiece({
  displayName: 'Gelato Tutorial',
  logoUrl: 'https://cdn.activepieces.com/pieces/gelato.png',
  authors: [],
  auth: gelatoAuth,
  actions: [getIcecreamFlavor],
  // Add the trigger here.
  triggers: [newFlavorCreated], // <--------
});
```
## Testing
By default, the development setup only builds specific components. Open the file `.env.dev` and include "gelato" in the `AP_DEV_PIECES`.
For more details, check out the [Piece Development](/docs/build-pieces/building-pieces/development-setup#pieces-development) section.
Once you edit the environment variable, restart the backend. The piece will be rebuilt. After this process, you'll need to **refresh** the frontend to see the changes.
To test the trigger, use the load sample data from flow builder in Activepieces. It should function as shown in the screenshot.
To make your webhook accessible from the internet, expose your local development instance as follows:
1. Install [localxpose](https://localxpose.io/docs#start-your-first-tunnel).
2. Follow the documentation to start your first tunnel to localhost:4200.
3. Copy the tunnel domain, e.g. `wozcsvaint.loclx.io`, and replace the `AP_FRONTEND_URL` environment variable in `.env.dev` with the exposed URL, e.g. [https://wozcsvaint.loclx.io](https://wozcsvaint.loclx.io).
4. Go to `/packages/web/vite.config.ts`, uncomment `allowedHosts`, and replace the value with the same tunnel domain, e.g. `wozcsvaint.loclx.io`.
Once you have completed these configurations, you will be able to test webhook triggers and run published flows that have them.
# Development setup
Source: https://www.activepieces.com/docs/build-pieces/building-pieces/development-setup
## Prerequisites
* Node.js v18+
* npm v9+
## Instructions
1. Setup the environment
```bash theme={null}
node tools/setup-dev.js
```
2. Start the environment
This command starts Activepieces with SQLite3 and an in-memory queue.
```bash theme={null}
npm start
```
By default, the development setup only builds specific pieces. Open the file `.env.dev` and add a comma-separated list of the pieces to make available.
For more details, check out the [Piece Development](/docs/build-pieces/building-pieces/development-setup#pieces-development) section.
3. Go to ***localhost:4200*** in your web browser and sign in with these details:
Email: `dev@ap.com`
Password: `12345678`
## Pieces Development
When [`AP_PIECES_SYNC_MODE`](https://github.com/activepieces/activepieces/blob/main/.env.dev#L14) is set to `OFFICIAL_AUTO`, all pieces are automatically loaded from the cloud API and synced to the database on first launch. This process may take a few seconds to several minutes depending on your internet connection.
For local development, pieces are loaded from your local `dist` folder instead of the database. To enable this, set the [`AP_DEV_PIECES`](https://github.com/activepieces/activepieces/blob/main/.env.dev#L5) environment variable with a comma-separated list of pieces. For example, to develop with `google-sheets` and `cal-com`:
```sh theme={null}
AP_DEV_PIECES=google-sheets,cal-com npm start
```
# Overview
Source: https://www.activepieces.com/docs/build-pieces/building-pieces/overview
This section helps developers build and contribute pieces.
Building pieces is fun and important; it allows you to customize Activepieces for your own needs.
We love contributions! In fact, most of the pieces are contributed by the community. Feel free to open a pull request.
**Friendly Tip:**
For the fastest support, we recommend joining our Discord community. We are dedicated to addressing every question and concern raised there.
Build pieces using TypeScript for a more powerful and flexible development process.
See your changes in the browser within 7 seconds.
Work within the open-source environment, explore, and contribute to other pieces.
Join our large community, where you can ask questions, share ideas, and develop alongside others.
# Add Piece Authentication
Source: https://www.activepieces.com/docs/build-pieces/building-pieces/piece-authentication
### Piece Authentication
Activepieces supports multiple forms of authentication; you can review them [here](../piece-reference/authentication).
Now, let's set up authentication for this piece, which requires an API key in the request headers.
Modify the `src/index.ts` file to add authentication:
```ts theme={null}
import { PieceAuth, createPiece } from '@activepieces/pieces-framework';

export const gelatoAuth = PieceAuth.SecretText({
  displayName: 'API Key',
  required: true,
  description: 'Please use **test-key** as value for API Key',
});

export const gelato = createPiece({
  displayName: 'Gelato',
  logoUrl: 'https://cdn.activepieces.com/pieces/gelato.png',
  auth: gelatoAuth,
  authors: [],
  actions: [],
  triggers: [],
});
```
Use the value **test-key** as the API key when testing actions or triggers for
Gelato.
# Create Piece Definition
Source: https://www.activepieces.com/docs/build-pieces/building-pieces/piece-definition
This tutorial will guide you through the process of creating a Gelato piece with an action that fetches a random ice cream flavor and a trigger that fetches newly created ice cream flavors. It assumes that you are familiar with the following:
* [Activepieces Local development](./development-setup) Or [GitHub Codespaces](../misc/codespaces).
* TypeScript syntax.
## Piece Definition
To get started, let's generate a new piece for Gelato
```bash theme={null}
npm run cli pieces create
```
You will be asked three questions to define your new piece:
1. `Piece Name`: Specify a name for your piece. This name uniquely identifies your piece within the Activepieces ecosystem.
2. `Package Name`: Optionally, you can enter a name for the npm package associated with your piece. If left blank, the default name will be used.
3. `Piece Type`: Choose the piece type based on your intention. It can be either "custom" if it's a tailored solution for your needs, or "community" if it's designed to be shared and used by the broader community.
**Example:**
```bash theme={null}
npm run cli pieces create
? Enter the piece name: gelato
? Enter the package name: @activepieces/piece-gelato
? Select the piece type: community
```
The piece will be generated at `packages/pieces/community/gelato/`,
the `src/index.ts` file should contain the following code
```ts theme={null}
import { PieceAuth, createPiece } from '@activepieces/pieces-framework';

export const gelato = createPiece({
  displayName: 'Gelato',
  logoUrl: 'https://cdn.activepieces.com/pieces/gelato.png',
  auth: PieceAuth.None(),
  authors: [],
  actions: [],
  triggers: [],
});
```
# Fork Repository
Source: https://www.activepieces.com/docs/build-pieces/building-pieces/setup-fork
To start building pieces, we need to fork the repository that contains the framework library and the development environment. Later, we will publish these pieces as `npm` artifacts.
Follow these steps to fork the repository:
If you are on Windows, please install [WSL](https://learn.microsoft.com/en-us/windows/wsl/install), set up Git there, and then proceed with the instructions below.
1. Go to the repository page at [https://github.com/activepieces/activepieces](https://github.com/activepieces/activepieces).
2. Click the `Fork` button located in the top right corner of the page.
3. Clone your fork using a shallow clone for faster setup:
```bash theme={null}
git clone --depth=1 https://github.com/YOUR_USERNAME/activepieces.git
```
Using `--depth=1` reduces the clone size significantly by only fetching the latest commit instead of the full history.
If you are an enterprise customer and want to use the private pieces feature, you can refer to the tutorial on how to set up a [private fork](../misc/private-fork).
# Start Building
Source: https://www.activepieces.com/docs/build-pieces/building-pieces/start-building
This section guides you in creating a Gelato piece, from setting up your development environment to contributing the piece. By the end of this tutorial, you will have a piece with an action that fetches a random ice cream flavor and a trigger that fetches newly created ice cream flavors.
Each of the following sections covers one small step; the whole tutorial should take around 30 minutes.
## Steps Overview
Fork the repository to create your own copy of the codebase.
Set up your development environment with the necessary tools and dependencies.
Define the structure and behavior of your Gelato piece.
Implement authentication mechanisms for your Gelato piece.
Create an action that fetches a random ice cream flavor.
Create a trigger that fetches newly created ice cream flavors.
Share your Gelato piece with others.
Contribute a piece to our repo and receive +1,400 tasks/month on [Activepieces Cloud](https://cloud.activepieces.com).
# Build Custom Pieces
Source: https://www.activepieces.com/docs/build-pieces/misc/build-piece
You can use the CLI to build custom pieces for the platform. This process compiles the pieces and exports them as a `.tgz` packed archive.
### How It Works
The CLI scans the `packages/pieces/` directory for the specified piece. It checks the **name** in the `package.json` file. If the piece is found, it builds and packages it into a `.tgz` archive.
### Usage
To build a piece, follow these steps:
1. Ensure you have the CLI installed by cloning the repository.
2. Run the following command:
```bash theme={null}
npm run build-piece
```
You will be prompted to enter the name of the piece you want to build. For example:
```bash theme={null}
? Enter the piece folder name : google-drive
```
The CLI will build the piece and you will be given the path to the archive. For example:
```bash theme={null}
Piece 'google-drive' built and packed successfully at packages/pieces/community/google-drive/dist
```
You may also build the piece non-interactively by passing the piece name as an argument. For example:
```bash theme={null}
npm run build-piece google-drive
```
# GitHub Codespaces
Source: https://www.activepieces.com/docs/build-pieces/misc/codespaces
GitHub Codespaces is a cloud development platform that enables developers to write, run, and debug code directly in their browsers, seamlessly integrated with GitHub.
### Steps to setup Codespaces
1. Go to [Activepieces repo](https://github.com/activepieces/activepieces).
2. Click **Code** `<>`, then under **Codespaces**, click **Create codespace on main**.
By default, the development setup only builds specific pieces. Open the file `.env.dev` and add a comma-separated list of the pieces to make available.
For more details, check out the [Piece Development](/docs/build-pieces/building-pieces/development-setup#pieces-development) section.
3. Open the terminal and run `npm start`
4. Access the frontend URL by opening port 4200 and signing in with these details:
Email: `dev@ap.com`
Password: `12345678`
# Dev Containers
Source: https://www.activepieces.com/docs/build-pieces/misc/dev-container
## Using Dev Containers in Visual Studio Code
The project includes a dev container configuration that allows you to use Visual Studio Code's [Remote Development](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack) extension to develop the project in a consistent environment. This can be especially helpful if you are new to the project or if you have a different environment setup on your local machine.
## Prerequisites
Before you can use the dev container, you will need to install the following:
* [Visual Studio Code](https://code.visualstudio.com/).
* The [Remote Development](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack) extension for Visual Studio Code.
* [Docker](https://www.docker.com/).
## Using the Dev Container
To use the dev container for the Activepieces project, follow these steps:
1. Clone the Activepieces repository to your local machine.
2. Open the project in Visual Studio Code.
3. Press `Ctrl+Shift+P` and type `> Dev Containers: Reopen in Container`.
4. Run `npm start`.
5. The backend will run at `localhost:3000` and the frontend will run at `localhost:4200`.
By default, the development setup only builds specific pieces. Open the file `.env.dev` and add a comma-separated list of the pieces to make available.
For more details, check out the [Piece Development](/docs/build-pieces/building-pieces/development-setup#pieces-development) section.
The login credentials are:\
Email: `dev@ap.com`
Password: `12345678`
## Exiting the Dev Container
To exit the dev container and return to your local environment, follow these steps:
1. In the bottom left corner of Visual Studio Code, click the `Remote-Containers: Reopen folder locally` button.
2. Visual Studio Code will close the connection to the dev container and reopen the project in your local environment.
## Troubleshoot
One of the best troubleshooting steps after an error occurs is to reset the dev container:
1. Exit the dev container
2. Run the following
```sh theme={null}
sh tools/reset-dev.sh
```
3. Rebuild the dev container using the steps above
# Migrate from Nx to Turbo
Source: https://www.activepieces.com/docs/build-pieces/misc/migrate-nx-to-turbo
The Activepieces monorepo has fully migrated to Turbo. This guide is kept for **historical reference** and for users with older forks that still use the Nx-based build system.
If you have an existing fork with custom pieces built using the **old Nx-based build system**, you need to migrate them to the new **Turbo-based build system**. This guide explains what changed and provides a migration script.
## What Changed
The Activepieces monorepo replaced [Nx](https://nx.dev) with [Turbo](https://turbo.build) as its build orchestrator. For pieces, this means:
| | **Old (Nx)** | **New (Turbo)** |
| ---------------- | -------------------------- | ---------------------------------------------------------- |
| Build config | `project.json` per piece | `package.json` scripts |
| Build command | `nx build pieces-{name}` | `turbo run build --filter=@activepieces/piece-{name}` |
| Output directory | `dist/out-tsc` (shared) | `./dist` (local per piece) |
| Task runner | Nx executor (`@nx/js:tsc`) | Direct `tsc -p tsconfig.lib.json && cp package.json dist/` |
| Dependencies | Inferred by Nx graph | Workspace protocol (`workspace:*`) in `package.json` |
### Files affected per piece
* **`project.json`** — Deleted (no longer needed)
* **`package.json`** — Added `build` and `lint` scripts, added `main` and `types` fields, added workspace dependencies
* **`tsconfig.lib.json`** — Updated `outDir`, added `rootDir`, `baseUrl`, `paths`
## Automatic Migration
Run the migration script to update all custom pieces at once:
```bash theme={null}
npx ts-node tools/scripts/migrate-custom-piece-to-turbo.ts
```
This scans `packages/pieces/custom/` and applies all necessary changes.
### Migrate a specific piece
You can also pass a path to migrate a single piece:
```bash theme={null}
npx ts-node tools/scripts/migrate-custom-piece-to-turbo.ts packages/pieces/custom/my-piece
```
### What the script does
For each piece, the script:
1. **Updates `package.json`** — adds `build` and `lint` scripts, sets `main` and `types` entry points, ensures `@activepieces/pieces-framework`, `@activepieces/shared`, and `tslib` are listed as dependencies
2. **Updates `tsconfig.lib.json`** — sets `outDir` to `./dist`, adds `rootDir`, `baseUrl`, `paths`, and `declaration` settings
3. **Creates `tsconfig.json`** if missing — extends the root `tsconfig.base.json`
4. **Deletes `project.json`** — removes Nx configuration
## Verify the Migration
After migrating, build your piece to verify everything works:
```bash theme={null}
npx turbo run build --filter=@activepieces/piece-your-piece --force
```
Or use the CLI:
```bash theme={null}
npm run build-piece your-piece
```
The build output should appear in `packages/pieces/custom/your-piece/dist/`.
# Custom Pieces CI/CD
Source: https://www.activepieces.com/docs/build-pieces/misc/pieces-ci-cd
You can use the CLI to sync custom pieces. There is no need to rebuild the Docker image as they are loaded directly from npm.
### How It Works
Use the CLI to sync items from `packages/pieces/custom/` to instances. In production, Activepieces acts as an npm registry, storing all piece versions.
The CLI scans the directory for `package.json` files, checking the **name** and **version** of each piece. If a piece isn't uploaded, it packages and uploads it via the API.
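The upload decision can be pictured as a set difference on `name@version` pairs. A minimal sketch under assumed shapes, not the CLI's actual code:

```typescript
// Sketch of the sync decision: upload a piece only if its exact name@version
// is not already present on the instance. Shapes are assumed for illustration.
type PieceManifest = { name: string; version: string };

function piecesToUpload(
  local: PieceManifest[],
  published: PieceManifest[],
): PieceManifest[] {
  const seen = new Set(published.map((p) => `${p.name}@${p.version}`));
  return local.filter((p) => !seen.has(`${p.name}@${p.version}`));
}
```

This is why step 2 of the developer workflow below matters: without a version bump, the `name@version` pair already exists and the piece is skipped.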
### Usage
To use the CLI, follow these steps:
1. Generate an API Key from the Admin Interface. Go to Settings and generate the API Key.
2. Install the CLI by cloning the repository.
3. Run the following command, replacing `API_KEY` with your generated API Key and `INSTANCE_URL` with your instance URL:
```bash theme={null}
AP_API_KEY=your_api_key_here bun run sync-pieces -- --apiUrl https://INSTANCE_URL/api
```
### Developer Workflow
1. Developers create and modify the pieces offline.
2. Increment the piece version in their corresponding `package.json`. For more information, refer to the [piece versioning](../piece-reference/piece-versioning) documentation.
3. Open a pull request towards the main branch.
4. Once the pull request is merged to the main branch, manually run the CLI or use a GitHub/GitLab Action to trigger the synchronization process.
### GitHub Action
```yaml theme={null}
name: Sync Custom Pieces

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  sync-pieces:
    runs-on: ubuntu-latest
    steps:
      # Step 1: Check out the repository code with full history
      - name: Check out repository code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      # Step 2: Set up Bun
      - name: Set up Bun
        uses: oven-sh/setup-bun@v1
        with:
          bun-version: latest

      # Step 3: Cache Bun dependencies
      - name: Cache Bun dependencies
        uses: actions/cache@v3
        with:
          path: ~/.bun/install/cache
          key: bun-${{ hashFiles('bun.lockb') }}
          restore-keys: |
            bun-

      # Step 4: Install dependencies using Bun
      - name: Install dependencies
        run: bun install --no-save

      # Step 5: Sync Custom Pieces
      - name: Sync Custom Pieces
        env:
          AP_API_KEY: ${{ secrets.AP_API_KEY }}
        run: bun run sync-pieces -- --apiUrl ${{ secrets.INSTANCE_URL }}/api
```
# Setup Private Fork
Source: https://www.activepieces.com/docs/build-pieces/misc/private-fork
**Friendly Tip #1:** If you want to experiment, you can fork or clone the public repository.
For private piece installation, you will need the paid edition. However, you can still develop pieces, contribute them back, **OR** publish them to the public npm registry and use them in your own instance or project.
## Create a Private Fork (Private Pieces)
By following these steps, you can create a private fork on GitHub, GitLab or another platform and configure the "activepieces" repository as the upstream source, allowing you to incorporate changes from the "activepieces" repository.
1. **Clone the Repository:**
Begin by creating a bare clone of the repository. This clone is temporary and will be deleted in a later step.
```bash theme={null}
git clone --bare git@github.com:activepieces/activepieces.git
```
2. **Create a Private Git Repository**
Generate a new private repository on GitHub or your chosen platform. When initializing the new repository, do not include a README, license, or gitignore files. This precaution is essential to avoid merge conflicts when synchronizing your fork with the original repository.
3. **Mirror-Push to the Private Repository:**
Mirror-push the bare clone you created earlier to your newly created "activepieces" repository. Make sure to replace `` in the URL below with your actual GitHub username.
```bash theme={null}
cd activepieces.git
git push --mirror git@github.com:/activepieces.git
```
4. **Remove the Temporary Local Repository:**
```bash theme={null}
cd ..
rm -rf activepieces.git
```
5. **Clone Your Private Repository:**
Now, you can clone your "activepieces" repository onto your local machine into your desired directory.
```bash theme={null}
cd ~/path/to/directory
git clone git@github.com:/activepieces.git
```
6. **Add the Original Repository as a Remote:**
If desired, you can add the original repository as a remote to fetch potential future changes. However, remember to disable push operations for this remote, as you are not permitted to push changes to it.
```bash theme={null}
git remote add upstream git@github.com:activepieces/activepieces.git
git remote set-url --push upstream DISABLE
```
You can view a list of all your remotes using `git remote -v`. It should resemble the following:
```
origin git@github.com:/activepieces.git (fetch)
origin git@github.com:/activepieces.git (push)
upstream git@github.com:activepieces/activepieces.git (fetch)
upstream DISABLE (push)
```
> When pushing changes, always use `git push origin`.
### Sync Your Fork
To retrieve changes from the "upstream" repository, fetch the remote and merge it into your local branch.
```bash theme={null}
git fetch upstream
git merge upstream/main
```
Conflict resolution should not be necessary since you've only added pieces to your repository.
# Publish Custom Pieces
Source: https://www.activepieces.com/docs/build-pieces/misc/publish-piece
You can use the CLI to publish custom pieces to the platform. This process packages the pieces and uploads them to the specified API endpoint.
### How It Works
The CLI scans the `packages/pieces/` directory for the specified piece. It checks the **name** and **version** in the `package.json` file. If the piece is not already published, it builds, packages, and uploads it to the platform using the API.
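For example, the two fields the CLI reads from the piece's `package.json` look like this (the name and version here are illustrative):

```json
{
  "name": "@activepieces/piece-google-drive",
  "version": "0.1.0"
}
```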
### Usage
To publish a piece, follow these steps:
1. Ensure you have an API Key. Generate it from the Admin Interface by navigating to Settings.
2. Install the CLI by cloning the repository.
3. Run the following command:
```bash theme={null}
npm run publish-piece-to-api
```
4. You will be asked three questions to publish your piece:
* `Piece Folder Name`: The name of the folder where the piece resides (under `packages/pieces/`).
* `API URL`: This is the URL of the API endpoint where the piece will be published (ex: [https://cloud.activepieces.com/api](https://cloud.activepieces.com/api)).
* `API Key Source`: This is the source of the API key. It can be either `Env Variable (AP_API_KEY)` or `Manually`.
If you choose `Env Variable (AP_API_KEY)`, the CLI uses the API key from the `.env` file in the `packages/server/api` directory.
If you choose `Manually`, you will be asked to enter the API key.
Examples:
```bash theme={null}
npm run publish-piece-to-api
? Enter the piece folder name : google-drive
? Enter the API URL : https://cloud.activepieces.com/api
? Enter the API Key Source : Env Variable (AP_API_KEY)
```
```bash theme={null}
npm run publish-piece-to-api
? Enter the piece folder name : google-drive
? Enter the API URL : https://cloud.activepieces.com/api
? Enter the API Key Source : Manually
? Enter the API Key : ap_1234567890abcdef1234567890abcdef
```
# Testing Pieces
Source: https://www.activepieces.com/docs/build-pieces/misc/testing-pieces
How to add unit tests to your pieces
Testing pieces is **optional** but recommended for complex logic. Pieces can be tested using [Vitest](https://vitest.dev/) with the `createMockActionContext` helper from the framework.
For a full working example, see the [text-helper piece tests on GitHub](https://github.com/activepieces/activepieces/tree/main/packages/pieces/core/text-helper/test).
## Setup
### 1. Add vitest to your piece
Add `vitest` as a dev dependency in your piece's `package.json` and add a `test` script:
```json theme={null}
{
"scripts": {
"build": "tsc -p tsconfig.lib.json && cp package.json dist/",
"lint": "eslint 'src/**/*.ts'",
"test": "vitest run"
},
"devDependencies": {
"vitest": "3.0.8"
}
}
```
### 2. Create a vitest config
Create `vitest.config.ts` in your piece root:
```typescript theme={null}
import path from 'path'
import { defineConfig } from 'vitest/config'
const repoRoot = path.resolve(__dirname, '../../../..')
export default defineConfig({
test: {
globals: true,
environment: 'node',
},
resolve: {
alias: {
'@activepieces/shared': path.resolve(repoRoot, 'packages/shared/src/index.ts'),
'@activepieces/pieces-framework': path.resolve(repoRoot, 'packages/pieces/framework/src/index.ts'),
'@activepieces/pieces-common': path.resolve(repoRoot, 'packages/pieces/common/src/index.ts'),
},
},
})
```
### 3. Write tests
Create a `test/` directory and add `.test.ts` files:
```typescript theme={null}
import { createMockActionContext } from '@activepieces/pieces-framework';
import { myAction } from '../src/lib/actions/my-action';
describe('myAction', () => {
test('does something', async () => {
const ctx = createMockActionContext({
propsValue: {
inputField: 'test value',
},
});
const result = await myAction.run(ctx);
expect(result).toBe('expected output');
});
});
```
### 4. Run tests
```bash theme={null}
# Run tests for a specific piece
npx turbo test --filter=@activepieces/piece-text-helper
# Run tests directly from the piece directory
cd packages/pieces/core/text-helper
npx vitest run
```
# Piece Auth
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/authentication
Learn about piece authentication
Piece authentication is used to gather user credentials and securely store them for future use in different flows.
The authentication must be defined as the `auth` parameter in the `createPiece`, `createTrigger`, and `createAction` functions.
This requirement ensures that the type of authentication can be inferred correctly in triggers and actions.
The `auth` parameter for the `createPiece`, `createTrigger`, and `createAction` functions can take an array, but it cannot contain more than one auth property of the same type (e.g. two OAuth2 properties).
### Secret Text
This authentication collects sensitive information, such as passwords or API keys. It is displayed as a masked input field.
**Example:**
```typescript theme={null}
PieceAuth.SecretText({
displayName: 'API Key',
description: 'Enter your API key',
required: true,
// Optional Validation
validate: async ({auth}) => {
if(auth.startsWith('sk_')){
return {
valid: true,
}
}
return {
valid: false,
error: 'Invalid Api Key'
}
}
})
```
### Username and Password
This authentication collects a username and password as separate fields.
**Example:**
```typescript theme={null}
PieceAuth.BasicAuth({
displayName: 'Credentials',
description: 'Enter your username and password',
required: true,
username: {
displayName: 'Username',
description: 'Enter your username',
},
password: {
displayName: 'Password',
description: 'Enter your password',
},
// Optional Validation
validate: async ({auth}) => {
if(auth){
return {
valid: true,
}
}
return {
valid: false,
error: 'Invalid credentials'
}
}
})
```
### Custom
This authentication allows for custom authentication by collecting specific properties, such as a base URL and access token.
**Example:**
```typescript theme={null}
PieceAuth.CustomAuth({
displayName: 'Custom Authentication',
description: 'Enter custom authentication details',
props: {
base_url: Property.ShortText({
displayName: 'Base URL',
description: 'Enter the base URL',
required: true,
}),
access_token: PieceAuth.SecretText({
displayName: 'Access Token',
description: 'Enter the access token',
required: true
})
},
// Optional Validation
validate: async ({auth}) => {
if(auth){
return {
valid: true,
}
}
return {
valid: false,
error: 'Invalid authentication details'
}
},
required: true
})
```
### OAuth2
This authentication collects OAuth2 authentication details, including the authentication URL, token URL, and scope.
**Example:**
```typescript theme={null}
PieceAuth.OAuth2({
displayName: 'OAuth2 Authentication',
grantType: OAuth2GrantType.AUTHORIZATION_CODE,
required: true,
authUrl: 'https://example.com/auth',
tokenUrl: 'https://example.com/token',
scope: ['read', 'write']
})
```
Please note `OAuth2GrantType.CLIENT_CREDENTIALS` is also supported for service-based authentication.
# Enable Custom API Calls
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/custom-api-calls
Learn how to enable custom API calls for your pieces
Custom API Calls allow the user to send a request to a specific endpoint if no action has been implemented for it.
This will show in the actions list of the piece as `Custom API Call`. To enable this action for a piece, call `createCustomApiCallAction` in your actions array.
## Basic Example
The example below implements the action for the OpenAI piece. The OpenAI piece uses a `Bearer token` authorization header to identify the user sending the request.
```typescript theme={null}
actions: [
...yourActions,
createCustomApiCallAction({
// The auth object defined in the piece
auth: openaiAuth,
// The base URL for the API
baseUrl: () => {
return 'https://api.openai.com/v1'
},
// Mapping the auth object to the needed authorization headers
authMapping: async (auth) => {
return {
'Authorization': `Bearer ${auth}`
}
}
})
]
```
## Dynamic Base URL and Basic Auth Example
The example below implements the action for the Jira Cloud piece. The Jira Cloud piece uses a dynamic base URL for its actions: the base URL depends on the values the user authenticated with. We will also implement a Basic authentication header.
```typescript theme={null}
actions: [
...yourActions,
createCustomApiCallAction({
baseUrl: (auth) => {
return `${(auth as JiraAuth).instanceUrl}/rest/api/3`
},
auth: jiraCloudAuth,
authMapping: async (auth) => {
const typedAuth = auth as JiraAuth
return {
'Authorization': `Basic ${Buffer.from(`${typedAuth.email}:${typedAuth.apiToken}`).toString('base64')}`
}
}
})
]
```
# Piece Examples
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/examples
Explore a collection of example triggers and actions
To get the full benefit, it is recommended to read the tutorial first.
## Triggers:
**Webhooks:**
* [New Form Submission on Typeform](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/typeform/src/lib/trigger/new-submission.ts)
**Polling:**
* [New Completed Task On Todoist](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/todoist/src/lib/triggers/task-completed-trigger.ts)
## Actions:
* [Send a Message on Discord](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/discord/src/lib/actions/send-message-webhook.ts)
* [Send an Email on Gmail](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/gmail/src/lib/actions/send-email-action.ts)
## Authentication
**OAuth2:**
* [Slack](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/slack/src/index.ts)
* [Gmail](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/gmail/src/index.ts)
**API Key:**
* [Sendgrid](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/sendgrid/src/index.ts)
**Basic Authentication:**
* [Twilio](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/twilio/src/index.ts)
# External Libraries
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/external-libraries
Learn how to install and use external libraries.
The Activepieces repository is structured as a monorepo, employing Nx as its build tool.
To keep our main `package.json` as light as possible, libraries that are only used by a single piece live in that piece's `package.json`. This means that when adding a new library, you should navigate to the piece folder and install it with our package manager, `bun`:
```bash theme={null}
cd packages/pieces/
bun add
```
* Import the library into your piece.
Guidelines:
* Make sure you are using well-maintained libraries.
* Ensure the library is not too large, to avoid bloating the bundle size; a smaller bundle makes the piece load faster in the sandbox.
## Dependency Pinning
When pieces are built for publishing, all dependency versions — including **transitive dependencies** (dependencies of your dependencies) — are automatically pinned to the exact versions resolved in the monorepo's `bun.lock` file.
This means:
* You don't need to worry about manually pinning transitive dependency versions.
* The published piece will always install the same dependency tree that was tested during development.
* This protects against supply chain attacks where a compromised transitive dependency could be silently pulled in at install time.
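The pinning step can be sketched as follows. This is NOT the real build code; package names and versions are made up purely to illustrate how declared ranges are swapped for the exact versions resolved in the lockfile:

```typescript
// Illustrative sketch of dependency pinning: every semver range declared in
// a piece's package.json is replaced by the single exact version resolved
// in the monorepo lockfile. Names and versions here are invented.
const declared: Record<string, string> = { 'left-pad': '^1.3.0', dayjs: '~1.11.0' };
const lockfile: Record<string, string> = { 'left-pad': '1.3.0', dayjs: '1.11.10' };

function pinDependencies(
  deps: Record<string, string>,
  lock: Record<string, string>,
): Record<string, string> {
  // Swap each declared range for the resolved exact version.
  return Object.fromEntries(Object.keys(deps).map((name) => [name, lock[name]]));
}

console.log(pinDependencies(declared, lockfile));
```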
# Files
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/files
Learn how to use files object to create file references.
The `ctx.files` object allows you to store files in local storage or in remote storage, depending on the run environment.
## Write
You can use the `write` method to write a file to storage. It returns a string that can be used in other actions' or triggers' properties to reference the file.
**Example:**
```ts theme={null}
const fileReference = await files.write({
fileName: 'file.txt',
data: Buffer.from('text')
});
```
If the run environment is testing mode, this code stores the file in the database, since other steps will need it for testing; otherwise, the file is stored in the local temporary directory.
To read the file: if you are using the file property in a trigger or action, it is parsed automatically and you can use it directly. Please refer to `Property.File` in the [properties](./properties#file) section.
# Flow Control
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/flow-control
Learn how to control flow execution from inside a piece
Flow Controls let an action change the shape of the run — stop it early, send an intermediate HTTP response, or **pause the flow and resume later** when an external signal arrives. All of these are exposed on the `ctx` parameter of the action's `run` method.
## Stop Flow
Stop the flow and (optionally) return a response to the webhook trigger that started it.
**With a response:**
```typescript theme={null}
context.run.stop({
response: {
status: context.propsValue.status ?? StatusCodes.OK,
body: context.propsValue.body,
headers: (context.propsValue.headers as Record<string, string>) ?? {},
},
});
```
**Without a response:**
```typescript theme={null}
context.run.stop();
```
## Pause with a waitpoint
A **waitpoint** is a durable checkpoint: the run is marked `PAUSED`, its execution state is persisted, and the action will be invoked a second time once the waitpoint is resumed. Waitpoints survive worker restarts — see [Durable Execution](/docs/install/architecture/durable-execution) for the full model.
The same action runs twice — once to create the waitpoint, once to read the resume payload — so every pausing action branches on `ctx.executionType`:
```typescript theme={null}
import { ExecutionType } from '@activepieces/shared';
async run(ctx) {
if (ctx.executionType === ExecutionType.BEGIN) {
// First invocation: create the waitpoint and pause.
// A real WEBHOOK waitpoint must surface waitpoint.buildResumeUrl(...) to
// the outside world — see the "Wait for a webhook callback" section below.
const waitpoint = await ctx.run.createWaitpoint({ type: 'WEBHOOK' });
ctx.run.waitForWaitpoint(waitpoint.id);
return {};
}
// Second invocation: the waitpoint was resumed.
return {
body: ctx.resumePayload.body,
headers: ctx.resumePayload.headers,
queryParams: ctx.resumePayload.queryParams,
};
}
```
Two hooks do the work:
* `ctx.run.createWaitpoint({ type, ... })` — registers the waitpoint on the server and returns `{ id, resumeUrl, buildResumeUrl }`.
* `ctx.run.waitForWaitpoint(waitpointId)` — tells the engine the step's verdict is `paused`; the run transitions to `PAUSED` after the action returns.
There are two waitpoint types.
### Wait for a webhook callback
Create a `WEBHOOK` waitpoint and expose its resume URL — the flow will resume whenever that URL is called.
```typescript theme={null}
async run(ctx) {
if (ctx.executionType === ExecutionType.BEGIN) {
const waitpoint = await ctx.run.createWaitpoint({ type: 'WEBHOOK' });
const callbackUrl = waitpoint.buildResumeUrl({
queryParams: { runId: ctx.run.id },
});
// Send `callbackUrl` somewhere the outside world can reach it
// (email, Slack message, third-party API, etc).
ctx.run.waitForWaitpoint(waitpoint.id);
return {};
}
return {
approved: ctx.resumePayload.queryParams['action'] === 'approve',
};
}
```
`buildResumeUrl` accepts an optional `sync: true`, which makes the caller receive a synchronous response produced by the remainder of the flow, and a `queryParams` object that is carried through to `ctx.resumePayload.queryParams` on resume.
### Respond immediately and wait for the next webhook
Pause the flow **and** immediately reply to the webhook trigger — useful for "we got your submission, we'll call you back" patterns. Pass `responseToSend` and your HTTP response is sent right away; the flow then sits paused until the returned URL is called.
```typescript theme={null}
async run(ctx) {
if (ctx.executionType === ExecutionType.BEGIN) {
const waitpoint = await ctx.run.createWaitpoint({
type: 'WEBHOOK',
responseToSend: {
status: 200,
headers: {},
body: { accepted: true },
},
});
const nextWebhookUrl = waitpoint.buildResumeUrl({
queryParams: { created: new Date().toISOString() },
sync: true,
});
// nextWebhookUrl is what the counterpart should call to resume this run.
ctx.run.waitForWaitpoint(waitpoint.id);
return { nextWebhookUrl };
}
return {
body: ctx.resumePayload.body,
headers: ctx.resumePayload.headers,
queryParams: ctx.resumePayload.queryParams,
};
}
```
### Delay until a timestamp
Create a `DELAY` waitpoint with the UTC timestamp you want to resume at. The server schedules a one-time job that fires at `resumeDateTime` and resumes the run automatically.
```typescript theme={null}
async run(ctx) {
if (ctx.executionType === ExecutionType.RESUME) {
return { success: true };
}
const futureTime = new Date(Date.now() + 60 * 60 * 1000); // 1 hour
const waitpoint = await ctx.run.createWaitpoint({
type: 'DELAY',
resumeDateTime: futureTime.toUTCString(),
});
ctx.run.waitForWaitpoint(waitpoint.id);
return {};
}
```
`resumeDateTime` is capped by the server's `AP_PAUSED_FLOW_TIMEOUT_DAYS` setting; the engine throws `PausedFlowTimeoutError` if you ask for a longer delay.
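The cap can be guarded against before creating the waitpoint. This is an illustrative standalone sketch, not framework code; `AP_PAUSED_FLOW_TIMEOUT_DAYS` is a server-side setting, and `30` is an assumed value:

```typescript
// Illustrative guard: reject delays longer than an assumed 30-day cap
// (the real limit comes from the server's AP_PAUSED_FLOW_TIMEOUT_DAYS).
const PAUSED_FLOW_TIMEOUT_DAYS = 30;
const maxResumeMs = Date.now() + PAUSED_FLOW_TIMEOUT_DAYS * 24 * 60 * 60 * 1000;

// Request a 1-hour delay, well under the assumed cap.
const requested = new Date(Date.now() + 60 * 60 * 1000);
if (requested.getTime() > maxResumeMs) {
  throw new Error('Requested delay exceeds the paused-flow timeout');
}
console.log(requested.toUTCString()); // suitable as `resumeDateTime`
```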
## Reading the resume payload
On the `RESUME` branch, `ctx.resumePayload` is whatever the resume call carried in:
```typescript theme={null}
ctx.resumePayload.body // request body from the webhook caller (empty for DELAY)
ctx.resumePayload.headers // HTTP headers from the webhook caller
ctx.resumePayload.queryParams // parsed ?foo=bar query string
```
For `DELAY` waitpoints there is no incoming HTTP request, so the payload is empty — use the `RESUME` branch simply to produce the step's final output.
**Deprecated:** older pieces use `ctx.run.pause({ pauseMetadata: { type: PauseType.WEBHOOK | PauseType.DELAY, ... } })` together with `ctx.generateResumeUrl(...)`. That V0 API is kept for backwards compatibility with in-flight paused runs and will be removed. New actions must use `ctx.run.createWaitpoint` + `ctx.run.waitForWaitpoint`.
# Piece i18n
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/i18n
Learn about translating pieces to multiple locales
Run the following command to create a translation file containing all the strings that need translation in your piece:
```bash theme={null}
npm run cli pieces generate-translation-file PIECE_FOLDER_NAME
```
Make a copy of `packages/pieces///src/i18n/translation.json`, name it `.json` (e.g. `fr.json`), and translate the values.
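Assuming the generated translation file is a flat map from source strings to translated strings (the entries below are invented for illustration), a French copy might look like:

```json
{
  "Send Message": "Envoyer un message",
  "Enter your API key": "Entrez votre clé API"
}
```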
For open source pieces, you can use the [Crowdin project](https://crowdin.com/project/activepieces) to translate to different languages. These translations will automatically sync back to your code.
After following the steps to [setup your development environment](/docs/build-pieces/building-pieces/development-setup), click the small cog icon next to the logo in your dashboard and change the locale.
In the builder, your piece will now appear in the translated language.
Follow the docs here to [publish your piece](/docs/build-pieces/sharing-pieces/overview).
# Persistent Storage
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/persistent-storage
Learn how to store and retrieve data from a key-value store
The `ctx` parameter inside triggers and actions provides a simple key/value storage mechanism. The storage is persistent, meaning that the stored values are retained even after the execution of the piece.
By default, the storage operates at the flow level, but it can also be configured to store values at the project level.
Scopes are completely isolated: a key stored in one scope will not be found when requested in another scope.
## Put
You can store a value with a specified key in the storage.
**Example:**
```typescript theme={null}
await ctx.store.put('KEY', 'VALUE', StoreScope.PROJECT);
```
## Get
You can retrieve the value associated with a specific key from the storage.
**Example:**
```typescript theme={null}
const value = await ctx.store.get('KEY', StoreScope.PROJECT);
```
## Delete
You can delete a key-value pair from the storage.
**Example:**
```typescript theme={null}
await ctx.store.delete('KEY', StoreScope.PROJECT);
```
These storage operations allow you to store, retrieve, and delete key-value pairs in the persistent storage. You can use this storage mechanism to store and retrieve data as needed within your triggers and actions.
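The scope isolation described above can be illustrated with a minimal in-memory sketch. This is NOT the Activepieces implementation; it only shows that the same key in different scopes resolves independently:

```typescript
// Minimal in-memory sketch of scope-isolated key/value storage.
// Keys are namespaced by scope, so 'KEY' in PROJECT scope is
// invisible when requested in FLOW scope.
type Scope = 'FLOW' | 'PROJECT';

class MockStore {
  private data = new Map<string, unknown>();
  put(key: string, value: unknown, scope: Scope): void {
    this.data.set(`${scope}:${key}`, value);
  }
  get(key: string, scope: Scope): unknown {
    return this.data.get(`${scope}:${key}`);
  }
  delete(key: string, scope: Scope): void {
    this.data.delete(`${scope}:${key}`);
  }
}

const store = new MockStore();
store.put('KEY', 'project-value', 'PROJECT');
console.log(store.get('KEY', 'FLOW')); // undefined — different scope
```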
# Piece versioning
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/piece-versioning
How to choose a version number when publishing a piece
Pieces are npm packages and follow **semantic versioning**.
This page is for *piece authors* deciding what version number to publish. For how flows adopt new piece versions, see [Piece syncing & versioning](/docs/install/architecture/piece-syncing).
## Semantic versioning
The version number is `MAJOR.MINOR.PATCH`:
* **MAJOR** — bump when you make a breaking change.
* **MINOR** — bump when you add functionality without breaking existing flows.
* **PATCH** — bump for bug fixes that don't change behavior.
## Classifying changes
Use this checklist when deciding which segment to bump.
### Breaking changes (MAJOR)
* Remove an existing action or trigger.
* Add a required prop to an existing action or trigger.
* Remove an existing prop, whether required or optional.
* Remove an attribute from an action output.
* Change the existing behavior of an action or trigger.
### Non-breaking changes (MINOR or PATCH)
* Add a new action or trigger.
* Add an optional prop.
* Add an attribute to an action output.
* Fix a bug without changing the public surface (PATCH).
The rule of thumb: **any removal is breaking, any required addition is breaking, everything else is not.**
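The bump rules above can be expressed as a small helper. This is a hypothetical illustration, not part of the framework or the publishing CLI:

```typescript
// Hypothetical helper illustrating MAJOR.MINOR.PATCH bumps.
type Bump = 'major' | 'minor' | 'patch';

function bumpVersion(version: string, bump: Bump): string {
  const [major, minor, patch] = version.split('.').map(Number);
  switch (bump) {
    case 'major':
      // Breaking change: reset minor and patch.
      return `${major + 1}.0.0`;
    case 'minor':
      // New functionality: reset patch.
      return `${major}.${minor + 1}.0`;
    case 'patch':
      // Bug fix only.
      return `${major}.${minor}.${patch + 1}`;
  }
}

console.log(bumpVersion('0.3.5', 'minor'));
```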
# Props
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/properties
Learn about different types of properties used in triggers / actions
Properties are used in actions and triggers to collect information from the user. They are also displayed to the user for input. Here are some commonly used properties:
## Basic Properties
These properties collect basic information from the user.
### Short Text
This property collects a short text input from the user.
**Example:**
```typescript theme={null}
Property.ShortText({
displayName: 'Name',
description: 'Enter your name',
required: true,
defaultValue: 'John Doe',
});
```
### Long Text
This property collects a long text input from the user.
**Example:**
```typescript theme={null}
Property.LongText({
displayName: 'Description',
description: 'Enter a description',
required: false,
});
```
### Checkbox
This property presents a checkbox for the user to select or deselect.
**Example:**
```typescript theme={null}
Property.Checkbox({
displayName: 'Agree to Terms',
description: 'Check this box to agree to the terms',
required: true,
defaultValue: false,
});
```
### Markdown
This property displays a markdown snippet to the user, useful for documentation or instructions. It includes a `variant` option to style the markdown, using the `MarkdownVariant` enum:
* **BORDERLESS**: For a minimalistic, no-border layout.
* **INFO**: Displays informational messages.
* **WARNING**: Alerts the user to cautionary information.
* **TIP**: Highlights helpful tips or suggestions.
The default value for `variant` is **INFO**.
**Example:**
```typescript theme={null}
Property.MarkDown({
value: '## This is a markdown snippet',
variant: MarkdownVariant.WARNING,
}),
```
If you want to show a webhook URL to the user, use `{{ webhookUrl }}` in the markdown snippet.
### DateTime
This property collects a date and time from the user.
**Example:**
```typescript theme={null}
Property.DateTime({
displayName: 'Date and Time',
description: 'Select a date and time',
required: true,
defaultValue: '2023-06-09T12:00:00Z',
});
```
### Number
This property collects a numeric input from the user.
**Example:**
```typescript theme={null}
Property.Number({
displayName: 'Quantity',
description: 'Enter a number',
required: true,
});
```
### Static Dropdown
This property presents a dropdown menu with predefined options.
**Example:**
```typescript theme={null}
Property.StaticDropdown({
displayName: 'Country',
description: 'Select your country',
required: true,
options: {
options: [
{
label: 'Option One',
value: '1',
},
{
label: 'Option Two',
value: '2',
},
],
},
});
```
### Static Multiple Dropdown
This property presents a dropdown menu with multiple selection options.
**Example:**
```typescript theme={null}
Property.StaticMultiSelectDropdown({
displayName: 'Colors',
description: 'Select one or more colors',
required: true,
options: {
options: [
{
label: 'Red',
value: 'red',
},
{
label: 'Green',
value: 'green',
},
{
label: 'Blue',
value: 'blue',
},
],
},
});
```
### JSON
This property collects JSON data from the user.
**Example:**
```typescript theme={null}
Property.Json({
displayName: 'Data',
description: 'Enter JSON data',
required: true,
defaultValue: { key: 'value' },
});
```
### Dictionary
This property collects key-value pairs from the user.
**Example:**
```typescript theme={null}
Property.Object({
displayName: 'Options',
description: 'Enter key-value pairs',
required: true,
defaultValue: {
key1: 'value1',
key2: 'value2',
},
});
```
### File
This property collects a file from the user, either by providing a URL or uploading a file.
**Example:**
```typescript theme={null}
Property.File({
displayName: 'File',
description: 'Upload a file',
required: true,
});
```
### Array of Strings
This property collects an array of strings from the user.
**Example:**
```typescript theme={null}
Property.Array({
displayName: 'Tags',
description: 'Enter tags',
required: false,
defaultValue: ['tag1', 'tag2'],
});
```
### Array of Fields
This property collects an array of objects from the user.
**Example:**
```typescript theme={null}
Property.Array({
displayName: 'Fields',
description: 'Enter fields',
properties: {
fieldName: Property.ShortText({
displayName: 'Field Name',
required: true,
}),
fieldType: Property.StaticDropdown({
displayName: 'Field Type',
required: true,
options: {
options: [
{ label: 'TEXT', value: 'TEXT' },
{ label: 'NUMBER', value: 'NUMBER' },
],
},
}),
},
required: false,
defaultValue: [],
});
```
## Dynamic Data Properties
These properties provide more advanced options for collecting user input.
### Dropdown
This property allows for dynamically loaded options based on the user's input.
**Example:**
```typescript theme={null}
Property.Dropdown({
displayName: 'Options',
description: 'Select an option',
required: true,
auth: yourPieceAuth,
refreshers: ['auth'],
refreshOnSearch: false,
options: async ({ auth }, { searchValue }) => {
// Search value only works when refreshOnSearch is true
if (!auth) {
return {
disabled: true,
};
}
return {
options: [
{
label: 'Option One',
value: '1',
},
{
label: 'Option Two',
value: '2',
},
],
};
},
});
```
When accessing the Piece auth, be sure to use exactly `auth` as it is
hardcoded. However, for other properties, use their respective names.
### Multi-Select Dropdown
This property allows for multiple selections from dynamically loaded options.
**Example:**
```typescript theme={null}
Property.MultiSelectDropdown({
displayName: 'Options',
description: 'Select one or more options',
required: true,
refreshers: ['auth'],
auth: yourPieceAuth,
options: async ({ auth }) => {
if (!auth) {
return {
disabled: true,
};
}
return {
options: [
{
label: 'Option One',
value: '1',
},
{
label: 'Option Two',
value: '2',
},
],
};
},
});
```
When accessing the Piece auth, be sure to use exactly `auth` as it is
hardcoded. However, for other properties, use their respective names.
### Dynamic Properties
This property is used to construct forms dynamically based on API responses or user input.
**Example:**
```typescript theme={null}
import {
httpClient,
HttpMethod,
} from '@activepieces/pieces-common';
Property.DynamicProperties({
description: 'Dynamic Form',
displayName: 'Dynamic Form',
required: true,
refreshers: ['auth'],
auth: yourPieceAuth,
props: async ({auth}) => {
const apiEndpoint = 'https://someapi.com';
const response = await httpClient.sendRequest<{ values: [string[]][] }>({
method: HttpMethod.GET,
url: apiEndpoint,
//you can add the auth value to the headers
});
const properties = {
prop1: Property.ShortText({
displayName: 'Property 1',
description: 'Enter property 1',
required: true,
}),
prop2: Property.Number({
displayName: 'Property 2',
description: 'Enter property 2',
required: false,
}),
};
return properties;
},
});
```
### Custom Property (BETA)
This feature is still in BETA and not fully released yet. Please let us know if you use it and face any issues, and be aware that it may have breaking changes in the future.
This property lets you inject JS code into the frontend and manipulate the DOM of its container however you like. It is extremely useful if you are [embedding](/docs/embedding/overview) Activepieces and want a way to communicate with the SaaS embedding it.
It has a `code` property which is a function that takes in an object parameter which will have the following schema:
| Parameter Name | Type | Description |
| -------------- | ---------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| onChange | `(value:unknown)=>void` | A callback you call to set the value of your input (only call this inside event handlers) |
| value | `unknown` | Whatever the type of the value you pass to onChange |
| containerId | `string` | The ID of an HTML element in which you can modify the DOM however you like |
| isEmbedded | `boolean` | The flag that tells you if the code is running inside an [embedded instance](/docs/embedding/overview) of Activepieces |
| projectId | `string` | The project ID of the flow the step that contains this property is in |
| disabled | `boolean` | The flag that tells you whether or not the property is disabled |
| property | `{ displayName:string, description?: string, required: boolean}` | The current property information |
* You can return a cleanup function at the end of the `code` property function to remove any listeners or HTML elements you inserted (this is important in development mode, where the component gets [mounted twice](https://react.dev/reference/react/useEffect#my-effect-runs-twice-when-the-component-mounts)).
* This function must be pure without any imports from external packages or variables outside the function scope.
* You **must** set your piece's `minimumSupportedRelease` property to at least `0.58.0` after introducing this property.
Here is how to define such a property:
```typescript theme={null}
Property.Custom({
code:(({value,onChange,containerId})=>{
const container = document.getElementById(containerId);
const input = document.createElement('input');
input.classList.add(...['border','border-solid', 'border-border', 'rounded-md'])
input.type = 'text';
input.value = `${value}`;
input.oninput = (e: Event) => {
const value = (e.target as HTMLInputElement).value;
onChange(value);
}
container!.appendChild(input);
const windowCallback = (e:MessageEvent<{type:string,value:string,propertyName:string}>) => {
if(e.data.type === 'updateInput' && e.data.propertyName === 'YOUR_PROPERTY_NAME'){
input.value= e.data.value;
onChange(e.data.value);
}
}
window.addEventListener('message', windowCallback);
return ()=>{
window.removeEventListener('message', windowCallback);
container!.removeChild(input);
}
}),
displayName: 'Custom Property',
required: true
})
```
* If you would like to know more about how to set up communication between Activepieces and the SaaS that's embedding it, check the [window postMessage API](https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage).
# Props Validation
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/properties-validation
Learn about different types of properties validation
Activepieces uses Zod for runtime validation of piece properties. Zod provides a powerful schema validation system that helps ensure your piece receives valid inputs.
To use Zod validation in your piece, first import the validation helper and Zod:
Please make sure the `minimumSupportedRelease` is set to at least `0.36.1` for the validation to work.
```typescript theme={null}
import { createAction, Property } from '@activepieces/pieces-framework';
import { propsValidation } from '@activepieces/pieces-common';
import { z } from 'zod';
export const getIcecreamFlavor = createAction({
name: 'get_icecream_flavor', // Unique name for the action.
displayName: 'Get Ice Cream Flavor',
description: 'Fetches a random ice cream flavor based on user preferences.',
props: {
sweetnessLevel: Property.Number({
displayName: 'Sweetness Level',
required: true,
description: 'Specify the sweetness level (0 to 10).',
}),
includeToppings: Property.Checkbox({
displayName: 'Include Toppings',
required: false,
description: 'Should the flavor include toppings?',
defaultValue: true,
}),
numberOfFlavors: Property.Number({
displayName: 'Number of Flavors',
required: true,
description: 'How many flavors do you want to fetch? (1-5)',
defaultValue: 1,
}),
},
async run({ propsValue }) {
// Validate the input properties using Zod
await propsValidation.validateZod(propsValue, {
sweetnessLevel: z.number().min(0).max(10, 'Sweetness level must be between 0 and 10.'),
numberOfFlavors: z.number().min(1).max(5, 'You can fetch between 1 and 5 flavors.'),
});
// Action logic
const sweetnessLevel = propsValue.sweetnessLevel;
const includeToppings = propsValue.includeToppings ?? true; // Default to true
const numberOfFlavors = propsValue.numberOfFlavors;
// Simulate fetching random ice cream flavors
const allFlavors = [
'Vanilla',
'Chocolate',
'Strawberry',
'Mint',
'Cookie Dough',
'Pistachio',
'Mango',
'Coffee',
'Salted Caramel',
'Blackberry',
];
const selectedFlavors = allFlavors.slice(0, numberOfFlavors);
return {
message: `Here are your ${numberOfFlavors} flavors: ${selectedFlavors.join(', ')}`,
sweetnessLevel: sweetnessLevel,
includeToppings: includeToppings,
};
},
});
```
# Overview
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/triggers/overview
This tutorial explains three techniques for creating triggers:
* `Polling`: Periodically call endpoints to check for changes.
* `Webhooks`: Listen to user events through a single URL.
* `App Webhooks (Subscriptions)`: Use a developer app (using OAuth2) to receive all authorized user events at a single URL (Not Supported).
To create a new trigger, run the following command:
```bash theme={null}
npm run cli triggers create
```
1. `Piece Folder Name`: This is the name associated with the folder where the trigger resides. It helps organize and categorize triggers within the piece.
2. `Trigger Display Name`: The name users see in the interface, conveying the trigger's purpose clearly.
3. `Trigger Description`: A brief, informative text in the UI, guiding users about the trigger's function and purpose.
4. `Trigger Technique`: Specifies the trigger type - either polling or webhook.
# Trigger Structure
```typescript theme={null}
export const createNewIssue = createTrigger({
  auth: PieceAuth | undefined, // Piece authentication, if required.
  name: string, // Unique name across the piece.
  displayName: string, // Display name on the interface.
  description: string, // Description for the trigger.
  sampleData: null,
  type: TriggerStrategy.WEBHOOK | TriggerStrategy.POLLING | TriggerStrategy.APP_WEBHOOK,
  props: {}, // Required properties from the user.
  // Runs when the user enables or publishes the flow.
  onEnable: (ctx) => {},
  // Runs when the user disables the flow, or when the
  // old flow version is deleted after a new one is published.
  onDisable: (ctx) => {},
  // Trigger implementation. Takes the context as a parameter and
  // should return an array of payloads; each payload triggers a flow run.
  run: async (ctx): Promise<unknown[]> => {}
})
```
It's important to note that the `run` method returns an array. The reason is that a single poll can return multiple new items, and each item in the array will trigger the flow to run.
## Context Object
The Context object contains multiple helpful pieces of information and tools that can be useful while developing.
```typescript theme={null}
// Store: A simple, lightweight key-value store that is helpful when you are developing triggers that persist between runs, used to store information like the last polling date.
await context.store.put('_lastFetchedDate', new Date());
const lastFetchedDate = await context.store.get('_lastFetchedDate');
// Webhook URL: A unique, auto-generated URL that will trigger the flow. Useful when you need to develop a trigger based on webhooks.
context.webhookUrl;
// Payload: Contains information about the HTTP request sent by the third party. It has three properties: status, headers, and body.
context.payload;
// PropsValue: Contains the information filled by the user in defined properties.
context.propsValue;
```
**App Webhooks (Not Supported)**
Certain services, such as `Slack` and `Square`, only support webhooks at the developer app level.
This means that all authorized users for the app will be sent to the same endpoint. While this technique will be supported soon, for now, a workaround is to perform polling on the endpoint.
# Polling Trigger
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/triggers/polling-trigger
Periodically call endpoints to check for changes
The way polling triggers usually work is as follows:
**On Enable:**
Store the last timestamp or most recent item id using the context store property.
**Run:**
This method runs every **5 minutes**, fetches the endpoint between a certain timestamp or traverses until it finds the last item id, and returns the new items as an array.
**Testing:**
You can implement a test function which should return some of the most recent items. It's recommended to limit this to five.
**Examples:**
* [New Record Airtable](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/airtable/src/lib/trigger/new-record.trigger.ts)
* [New Updated Item Salesforce](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/salesforce/src/lib/trigger/new-updated-record.ts)
# Polling library
There are multiple strategies for implementing polling triggers, and we have created a library to help you with that.
## Strategies
**Timebased:**
This strategy fetches new items using a timestamp. You need to implement the items method, which should return the most recent items.
The library will detect new items based on the timestamp.
The polling object's generic type consists of the auth value type and the props value type.
```typescript theme={null}
const polling: Polling<PiecePropValueSchema<typeof auth>, Record<string, unknown>> = {
strategy: DedupeStrategy.TIMEBASED,
items: async ({ propsValue, lastFetchEpochMS }) => {
// Todo implement the logic to fetch the items
const items = [ {id: 1, created_date: '2021-01-01T00:00:00Z'}, {id: 2, created_date: '2021-01-01T00:00:00Z'}];
return items.map((item) => ({
epochMilliSeconds: dayjs(item.created_date).valueOf(),
data: item,
}));
}
}
```
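Under the hood, the `TIMEBASED` dedupe can be pictured as a filter on the returned timestamps. This is a simplified illustration, not the library's actual code:

```typescript
// Simplified illustration of TIMEBASED dedupe (not the library's actual code):
// only items strictly newer than the last fetched epoch count as new.
type TimestampedItem = { epochMilliSeconds: number; data: unknown };

function newItemsSince(items: TimestampedItem[], lastFetchEpochMS: number): TimestampedItem[] {
  return items.filter((item) => item.epochMilliSeconds > lastFetchEpochMS);
}
```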
**Last ID Strategy:**
This strategy fetches new items based on the last item ID. To use this strategy, you need to implement the items method, which should return the most recent items.
The library will detect new items after the last item ID.
The polling object's generic type consists of the auth value type and the props value type.
```typescript theme={null}
const polling: Polling<PiecePropValueSchema<typeof auth>, Record<string, unknown>> = {
strategy: DedupeStrategy.LAST_ITEM,
items: async ({ propsValue }) => {
// Implement the logic to fetch the items
const items = [{ id: 1 }, { id: 2 }];
return items.map((item) => ({
id: item.id,
data: item,
}));
}
}
```
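The `LAST_ITEM` dedupe can similarly be pictured as slicing off everything up to the stored last id. This is a simplified illustration, not the library's actual code, and it assumes the items are returned newest-first:

```typescript
// Simplified illustration of LAST_ITEM dedupe (not the library's actual code):
// items are assumed newest-first; everything before the stored last id is new.
type IdentifiedItem = { id: number | string; data: unknown };

function newItemsAfter(items: IdentifiedItem[], lastItemId: number | string | null): IdentifiedItem[] {
  if (lastItemId === null) return items; // first run: everything is new
  const idx = items.findIndex((item) => item.id === lastItemId);
  return idx === -1 ? items : items.slice(0, idx);
}
```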
## Trigger Implementation
After implementing the polling object, you can use the polling helper to implement the trigger.
```typescript theme={null}
export const newTicketInView = createTrigger({
name: 'new_ticket_in_view',
displayName: 'New ticket in view',
description: 'Triggers when a new ticket is created in a view',
type: TriggerStrategy.POLLING,
props: {
authentication: Property.SecretText({
displayName: 'Authentication',
description: markdownProperty,
required: true,
}),
},
sampleData: {},
onEnable: async (context) => {
await pollingHelper.onEnable(polling, {
store: context.store,
propsValue: context.propsValue,
auth: context.auth
})
},
onDisable: async (context) => {
await pollingHelper.onDisable(polling, {
store: context.store,
propsValue: context.propsValue,
auth: context.auth
})
},
run: async (context) => {
return await pollingHelper.poll(polling, context);
},
test: async (context) => {
return await pollingHelper.test(polling, context);
}
});
```
# Webhook Trigger
Source: https://www.activepieces.com/docs/build-pieces/piece-reference/triggers/webhook-trigger
Listen to user events through a single URL
The way webhook triggers usually work is as follows:
**On Enable:**
Use `context.webhookUrl` to perform an HTTP request to register the webhook in a third-party app, and store the webhook Id in the `store`.
**On Handshake:**
Some services require a successful handshake request, usually consisting of some challenge. It works similarly to a normal run, except that you return the correct challenge response. This is optional; to enable the handshake, configure one of the available handshake strategies in the `handshakeConfiguration` option.
**Run:**
You can find the HTTP body inside `context.payload.body`. If needed, alter the body; otherwise, return an array with a single item `context.payload.body`.
**Disable:**
Using the `context.store`, fetch the webhook ID from the enable step and delete the webhook on the third-party app.
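The enable/run/disable steps above can be sketched as follows. This is a minimal sketch with a stand-in context type (the real types come from `@activepieces/pieces-framework`), and `registerWebhook` / `deleteWebhook` are hypothetical wrappers around the third-party app's API:

```typescript
// Minimal sketch of the webhook trigger lifecycle. `Ctx` is a stand-in for the
// real trigger context; registerWebhook/deleteWebhook are hypothetical helpers.
type Ctx = {
  webhookUrl: string;
  payload: { body: unknown };
  store: {
    put: (key: string, value: unknown) => Promise<void>;
    get: <T>(key: string) => Promise<T | null>;
    delete: (key: string) => Promise<void>;
  };
};

// On Enable: register context.webhookUrl with the third party and keep the id.
async function onEnable(ctx: Ctx, registerWebhook: (url: string) => Promise<string>): Promise<void> {
  const webhookId = await registerWebhook(ctx.webhookUrl);
  await ctx.store.put('webhookId', webhookId);
}

// Run: return an array; each item triggers one flow run.
async function run(ctx: Ctx): Promise<unknown[]> {
  return [ctx.payload.body];
}

// On Disable: look up the stored id and delete the webhook on the third party.
async function onDisable(ctx: Ctx, deleteWebhook: (id: string) => Promise<void>): Promise<void> {
  const webhookId = await ctx.store.get<string>('webhookId');
  if (webhookId !== null) {
    await deleteWebhook(webhookId);
    await ctx.store.delete('webhookId');
  }
}
```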
**Testing:**
You cannot test it with Test Flow, as it uses static sample data provided in the piece.
To test the trigger, publish the flow and perform the event, then check the flow runs from the main dashboard.
**Examples:**
* [New Form Submission on Typeform](https://github.com/activepieces/activepieces/blob/main/packages/pieces/community/typeform/src/lib/trigger/new-submission.ts)
To make your webhook accessible from the internet, you need to expose your local development instance. Do the following:
1. Install [localxpose](https://localxpose.io/docs#start-your-first-tunnel).
2. Follow the documentation to start your first tunnel to localhost:4200.
3. Copy the tunnel domain, e.g. wozcsvaint.loclx.io, and replace the `AP_FRONTEND_URL` environment variable in `.env.dev` with the exposed url, e.g. [https://wozcsvaint.loclx.io](https://wozcsvaint.loclx.io)
4. Go to /packages/web/vite.config.ts, uncomment `allowedHosts`, and replace the value with the same tunnel domain, e.g. wozcsvaint.loclx.io.
Once you have completed these configurations, you will be able to test webhook triggers and run published flows that have them.
# Community (Public NPM)
Source: https://www.activepieces.com/docs/build-pieces/sharing-pieces/community
Learn how to publish your piece to the community.
You can publish your pieces to the npm registry and share them with the community. Users can install your piece from Settings -> My Pieces -> Install Piece -> type in the name of your piece package.
Make sure you are logged in to npm. If not, please run:
```bash theme={null}
npm login
```
Rename the piece name in `package.json` to something unique or related to your organization's scope (e.g., `@my-org/piece-PIECE_NAME`). You can find it at `packages/pieces/PIECE_NAME/package.json`.
Don't forget to increase the version number in `package.json` for each new release.
Replace `PIECE_FOLDER_NAME` with the name of the folder.
Run the following command:
```bash theme={null}
npm run publish-piece PIECE_FOLDER_NAME
```
**Congratulations! You can now import the piece from the settings page.**
# Contribute
Source: https://www.activepieces.com/docs/build-pieces/sharing-pieces/contribute
Learn how to contribute a piece to the main repository.
* Build and test your piece.
* Open a pull request from your repository to the main fork.
* A maintainer will review your work closely.
* Once the pull request is approved, it will be merged into the main branch.
* Your piece will be available within a few minutes.
* An automatic GitHub action will package it and create an npm package on npmjs.com.
# Overview
Source: https://www.activepieces.com/docs/build-pieces/sharing-pieces/overview
Learn the different ways to publish your own piece on Activepieces.
## Methods
* [Contribute Back](/docs/build-pieces/sharing-pieces/contribute): Publish your piece by contributing it back to the main repository.
* [Community](/docs/build-pieces/sharing-pieces/community): Publish your piece on npm directly and share it with the community.
* [Private](/docs/build-pieces/sharing-pieces/private): Publish your piece on Activepieces privately.
# Private
Source: https://www.activepieces.com/docs/build-pieces/sharing-pieces/private
Learn how to share your pieces privately.
This guide assumes you have already created a piece and created a private fork of our repository, and you would like to package it as a file and upload it.
Friendly Tip: There is a CLI to easily upload it to your platform. Please check out [Publish Custom Pieces](../misc/publish-piece).
Build the piece using the following command. Make sure to replace `${name}` with your piece folder name.
```bash theme={null}
npm run pieces -- build --name=${name}
```
More information about building pieces can be found [here](../misc/build-piece).
Upload the generated tarball inside `dist/packages/pieces/${name}` from Activepieces Platform Admin -> Pieces.
# Show/Hide Pieces
Source: https://www.activepieces.com/docs/embedding/customize-pieces
If you would like to only show specific pieces to your embedding users, we recommend you do the following:
Tag the pieces you would like to show to your users by going to **Platform Admin -> Setup -> Pieces**, selecting the pieces you would like to tag, and hitting **Apply Tags**.
You need to specify the tags of the pieces in the token; check how to generate a token in [provisioning users](./provision-users).
You should specify the `pieces` claim like this:
```json theme={null}
{
/// Other claims
"piecesFilterType": "ALLOWED",
"piecesTags": [ "free" ]
}
```
Each time the token is used by the embedding SDK, it will sync all pieces with these tags to the token's project.
The project will only contain the pieces that contain these tags.
# Embed Builder
Source: https://www.activepieces.com/docs/embedding/embed-builder
This documentation explains how to embed the Activepieces iframe inside your application and customize it.
## Configure SDK
Adding the embedding SDK script will initialize an object in your window called `activepieces`, which has a method called `configure` that you should call after the container has been rendered.
The following scripts shouldn't contain the `async` or `defer` attributes.
These steps assume you have already generated a JWT token from the backend. If not, please check the [provision-users](./provision-users) page.
```html theme={null}
```
`configure` returns a promise which is resolved after authentication is done.
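As a sketch of the call shape (the `Sdk` type below mirrors only the options used here; in the browser, `sdk` would be `window.activepieces`, set by the SDK script, and the instance URL and container id are placeholders):

```typescript
// Sketch of the configure call shape. `Sdk` is a minimal stand-in typed
// against the parameter table below, not the SDK's full interface.
type ConfigureOpts = {
  instanceUrl: string;
  jwtToken: string;
  embedding?: { containerId?: string };
};
type Sdk = { configure: (opts: ConfigureOpts) => Promise<void> };

async function initEmbedding(sdk: Sdk, jwtToken: string): Promise<void> {
  await sdk.configure({
    instanceUrl: 'https://cloud.activepieces.com', // or your self-hosted URL
    jwtToken,                                      // generated by your backend
    embedding: { containerId: 'ap-container' },    // id of the div hosting the iframe
  });
  // configure resolves after authentication is done; safe to navigate from here.
}
```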
Please check the [navigation](./navigation) section, as it's very important to understand how navigation works and how to supply an auto-sync experience.
**Configure Parameters:**
| Parameter Name | Required | Type | Description |
| ------------------------------------------ | -------- | ----------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| instanceUrl | ✅ | string | The url of the instance hosting Activepieces, could be [https://cloud.activepieces.com](https://cloud.activepieces.com) if you are a cloud user. |
| jwtToken | ✅ | string | The jwt token you generated to authenticate your users to Activepieces. |
| prefix | ❌ | string | Some customers serve the embedding under a path prefix. For example, if the prefix is `/automation` and the Activepieces route is `/flows`, the full URL would be `/automation/flows`. |
| embedding.containerId | ❌ | string | The html element's id that is going to be containing Activepieces's iframe. |
| embedding.builder.disableNavigation | ❌ | boolean \| `keep_home_button_only` | Hides the folder name, home button (if not set to [`keep_home_button_only`](./sdk-changelog#20%2F05%2F2025-0-4-0)) and delete option in the builder, by default it is false. |
| embedding.builder.hideFlowName | ❌ | boolean | Hides the flow name and flow actions dropdown in the builder's header, by default it is false. |
| embedding.builder.homeButtonClickedHandler | ❌ | `()=>void` | Callback that stops home button from navigating to dashboard and overrides it with this handler (added in [0.4.0](./sdk-changelog#20%2F05%2F2025-0-4-0)) |
| embedding.builder.homeButtonIcon | ❌ | `logo` \| `back` | if set to **`back`** the tooltip shown on hovering the home button is removed (added in [0.5.0](./sdk-changelog#03%2F07%2F2025-0-5-0)) |
| embedding.dashboard.hideSidebar | ❌ | boolean | Controls the visibility of the sidebar in the dashboard, by default it is false. |
| embedding.dashboard.hideFlowsPageNavbar | ❌ | boolean | Controls the visibility of the navbar showing flows,issues and runs above the flows table in the dashboard, by default it is false. (added in [0.6.0](./sdk-changelog#07%2F07%2F2025-0-6-0)) |
| embedding.dashboard.hidePageHeader | ❌ | boolean | Hides the page header in the dashboard by default it is false. (added in [0.8.0](./sdk-changelog#09%2F21%2F2025-0-8-0)) |
| embedding.hideFolders | ❌ | boolean | Hides all things related to folders in both the flows table and builder by default it is false. |
| embedding.hideTables | ❌ | boolean | Hides the Tables UI in the dashboard (tree, filters, create/import buttons) and blocks direct access to the table editor route. The Tables piece used inside flows is unaffected. By default it is false. (added in [0.9.0](./sdk-changelog#04%2F21%2F2026-0-9-0)) |
| embedding.styling.fontUrl | ❌ | string | The url of the font to be used in the embedding, by default it is `https://fonts.googleapis.com/css2?family=Roboto:wght@300;400;500;700&display=swap`. |
| embedding.styling.fontFamily | ❌ | string | The font family to be used in the embedding, by default it is `Roboto`. |
| embedding.styling.mode | ❌ | `light` \| `dark` | Controls light/dark mode (added in [0.5.0](./sdk-changelog#03%2F07%2F2025-0-5-0)) |
| embedding.hideExportAndImportFlow | ❌ | boolean | Hides the option to export or import flows (added in [0.4.0](./sdk-changelog#20%2F05%2F2025-0-4-0)) |
| embedding.hideDuplicateFlow | ❌ | boolean | Hides the option to duplicate a flow (added in [0.5.0](./sdk-changelog#03%2F07%2F2025-0-5-0)) |
| embedding.locale | ❌ | `en` \| `nl` \| `de` \| `fr` \| `es` \| `ja` \| `zh` \| `pt` \| `zh-TW` \| `ru` \| | it takes [ISO 639-1](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes) locale codes (added in [0.5.0](./sdk-changelog#03%2F07%2F2025-0-5-0)) |
| navigation.handler | ❌ | `({route:string}) => void` | This callback will be triggered each time a route in Activepieces changes, you can read more about it [here](/docs/embedding/navigation) |
For the font to be loaded, you need to set both the `fontUrl` and `fontFamily` properties.
If you only set one of them, the default font will be used.
The default font is `Roboto`.
The font weights we use are the default font-weights from [tailwind](https://tailwindcss.com/docs/font-weight).
# Create/Update Connections
Source: https://www.activepieces.com/docs/embedding/embed-connections
**Requirements:**
* Activepieces version 0.34.5 or higher
* SDK version 0.3.2 or higher
"connectionName" is the externalId of the connection (you can get it by hovering the connection name in the connections table).
We kept the same parameter name for backward compatibility, so anyone upgrading their instance from \< 0.35.1 will not face issues in that regard.
**Breaking Change:**
If your Activepieces instance version is \< 0.45.0, and you are using the `connect` method from the embed SDK and need the connection externalId to be returned after the user creates it, or you want to reconnect a specific connection by its externalId, you must upgrade your instance to >= 0.45.0.
* You can use the embedded SDK in your SaaS to allow your users to create connections and store them in Activepieces.
Follow the instructions in the [Embed Builder](./embed-builder).
After initializing the SDK, you will have access to a property called `activepieces` inside your `window` object. Call its `connect` method to open a new connection dialog as follows.
```html theme={null}
```
**Connect Parameters:**
| Parameter Name | Required | Type | Description |
| -------------- | -------- | ----------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| pieceName | ✅ | string | The name of the piece you want to create a connection for. |
| connectionName | ❌ | string | The external Id of the connection (you can get it by hovering the connection name in the connections table), when provided the connection created/upserted will use this as the external Id and display name. |
| newWindow | ❌ | \{ width?: number, height?: number, top?: number, left?: number } | If set the connection dialog will be opened in a new window instead of an iframe taking the full page. |
**Connect Result**
The `connect` method returns a `promise` that resolves to the following:
```ts theme={null}
{
connection?: {
id: string,
name: string
}
}
```
`name` is the externalId of the connection.
`connection` is undefined if the user closes the dialog and doesn't create a connection.
You can use the `connections` piece in the builder to retrieve the created connection using its name.
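Putting this together, a call from your SaaS might look like the following sketch, typed against the parameter table and result shape above (the piece name and external id are illustrative; in the browser, `ap` would be `window.activepieces`):

```typescript
// Sketch of calling the embed SDK's connect method. The types mirror the
// parameter table and result shape documented above, not the SDK's full API.
type ConnectResult = { connection?: { id: string; name: string } };
type ApSdk = {
  connect: (params: { pieceName: string; connectionName?: string }) => Promise<ConnectResult>;
};

async function openConnectionDialog(
  ap: ApSdk,               // window.activepieces in the browser
  pieceName: string,       // e.g. '@activepieces/piece-slack' (illustrative)
  connectionName?: string, // external id to upsert, e.g. 'user_123_slack'
): Promise<string | undefined> {
  const result = await ap.connect({ pieceName, connectionName });
  // `connection` is undefined when the user closes the dialog without connecting;
  // `name` is the connection's externalId, usable from the connections piece.
  return result.connection?.name;
}
```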
# Navigation
Source: https://www.activepieces.com/docs/embedding/navigation
By default, navigating within your embedded instance of Activepieces doesn't affect the client's browser history or viewed URL. Activepieces only provides a **handler** that triggers on every route change in the **iframe**.
## Automatically Sync URL
You can use the following snippet when configuring the SDK, which will implement a handler that syncs the Activepieces iframe with your browser:
The following snippet listens for the browser's back navigation (`popstate`) and syncs the route back to the iframe using `activepieces.navigate`; in the handler, it updates the browser's URL.
```js theme={null}
const instanceUrl = 'YOUR_INSTANCE_URL';
const jwtToken = 'YOUR_GENERATED_JWT_TOKEN';
const containerId = 'YOUR_CONTAINER_ID';
activepieces.configure({
instanceUrl,
jwtToken,
embedding: {
containerId,
builder: {
disableNavigation: false,
hideFlowName: false
},
dashboard: {
hideSidebar: false
},
hideFolders: false,
navigation: {
handler: ({ route }) => {
//route can include search params at the end of it
if (!window.location.href.endsWith(route)) {
window.history.pushState({}, "", window.location.origin + route);
}
}
}
},
});
window.addEventListener("popstate", () => {
const route = activepieces.extractActivepiecesRouteFromUrl({ vendorUrl: window.location.href });
activepieces.navigate({ route });
});
```
## Navigate Method
If you call `activepieces.navigate({ route: '/flows' })`, this tells the embedded SDK where to navigate to.
Here is the list of routes the SDK can navigate to:
| Route | Description |
| ------------------- | ------------------------------ |
| `/flows` | Flows table |
| `/flows/{flowId}` | Opens up a flow in the builder |
| `/runs` | Runs table |
| `/runs/{runId}` | Opens up a run in the builder |
| `/connections` | Connections table |
| `/tables` | Tables table |
| `/tables/{tableId}` | Opens up a table |
| `/todos` | Todos table |
| `/todos/{todoId}` | Opens up a todo |
## Navigate to Initial Route
You can call the `navigate` method after initializing the SDK, once the `configure` promise resolves:
```js theme={null}
const flowId = '1234';
const instanceUrl = 'YOUR_INSTANCE_URL';
const jwtToken = 'YOUR_GENERATED_JWT_TOKEN';
activepieces.configure({
instanceUrl,
jwtToken,
}).then(() => {
activepieces.navigate({
route: `/flows/${flowId}`
})
});
```
# Overview
Source: https://www.activepieces.com/docs/embedding/overview
Understanding how embedding works
This section provides an overview of how to embed the Activepieces builder in your application and automatically provision the user.
The embedding process involves the following steps:
Generate a JSON Web Token (JWT) to identify your customer and pass it to the SDK, read more [here](./provision-users).
You can use the SDK to embed and customize Activepieces in your SaaS, read more [here](./embed-builder).
In case you need to gather connections from your users in your SaaS, you can do this with the SDK. Find more info [here](./embed-connections).
If you are looking for a way to communicate between Activepieces and the SaaS embedding it through a piece, we recommend checking the [custom property doc](/docs/build-pieces/piece-reference/properties#custom-property-beta).
# Predefined Connection
Source: https://www.activepieces.com/docs/embedding/predefined-connection
Use predefined connections to allow users to access your piece in the embedded app without re-entering authentication credentials.
The high-level steps are:
* Create a global connection for a project using the API in the platform admin. Only platform admins can edit or delete global connections.
* (Optional) Hide the connections dropdown in the piece settings.
### Prerequisites
* [Run the Enterprise Edition](/docs/handbook/engineering/playbooks/run-ee)
* [Create your piece](/docs/build-pieces/building-pieces/overview). Later we will customize the piece logic to use predefined connections.
### Create a Predefined Connection
Go to **Platform Admin → Security → API Keys** and create an API key. Save it for use in the next step.
Add the following snippet to your backend to create a global connection each time you generate the JWT token.
The snippet does the following:
* Creates the project if it doesn't exist.
* Creates a global connection for the project following a certain naming convention.
```js theme={null}
const apiKey = 'YOUR_API_KEY';
const instanceUrl = 'https://cloud.activepieces.com';
// The name of the user / organization in your SAAS
const externalProjectId = 'org_1234';
const pieceName = '@activepieces/piece-gelato';
// This will depend on your piece's auth type; it can be one of ['PLATFORM_OAUTH2','SECRET_TEXT','BASIC_AUTH','CUSTOM_AUTH']
const pieceAuthType = "CUSTOM_AUTH"
const connectionProps = {
// Fill in the props required by your piece's auth
}
const { id: projectId, externalId } = await getOrCreateProject({
projectExternalId: externalProjectId,
apiKey,
instanceUrl,
});
await createGlobalConnection({
projectId,
externalProjectId,
apiKey,
instanceUrl,
pieceName,
props: connectionProps,
pieceAuthType
});
```
Implementation:
```js theme={null}
async function getOrCreateProject({
projectExternalId,
apiKey,
instanceUrl,
}: {
projectExternalId: string,
apiKey: string,
instanceUrl: string
}): Promise<{ id: string, externalId: string }> {
const projects = await fetch(`${instanceUrl}/api/v1/projects?externalId=${projectExternalId}`, {
method: 'GET',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
},
})
.then(response => response.json())
.then(data => data.data)
.catch(err => {
console.error('Error fetching projects:', err);
return [];
});
if (projects.length > 0) {
return {
id: projects[0].id,
externalId: projects[0].externalId
};
}
const newProject = await fetch(`${instanceUrl}/api/v1/projects`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
displayName: projectExternalId,
metadata: {},
externalId: projectExternalId
})
})
.then(response => response.json())
.catch(err => {
console.error('Error creating project:', err);
throw err;
});
return {
id: newProject.id,
externalId: newProject.externalId
};
}
async function createGlobalConnection({
projectId,
externalProjectId,
apiKey,
instanceUrl,
pieceName,
props,
pieceAuthType
}: {
projectId: string,
externalProjectId: string,
apiKey: string,
instanceUrl: string,
pieceName: string,
props: Record<string, unknown>,
pieceAuthType: string
}) {
const displayName = 'Gelato Connection';
const connectionExternalId = 'gelato_' + externalProjectId;
return fetch(`${instanceUrl}/api/v1/global-connections`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
displayName,
pieceName,
metadata: {},
type: pieceAuthType,
value: {
type: pieceAuthType,
props
},
scope: 'PLATFORM',
projectIds: [projectId],
externalId: connectionExternalId
})
});
}
```
### Hide the Connections Dropdown (Optional)
Wherever you call `createTrigger` or `createAction`, set `requireAuth` to `false`. This hides the connections dropdown in the piece settings in the builder, so we need to fetch the connection based on a naming convention instead.
Here is an example of how to fetch the connection value based on a naming convention. Make sure the same naming convention is followed when creating the global connection.
```js theme={null}
import {
ConnectionsManager,
createTrigger,
PiecePropValueSchema,
Property,
TriggerStrategy
} from "@activepieces/pieces-framework";
import {
isNil
} from "@activepieces/shared";
// Add this import from the index.ts file, where it contains the definition of the auth object.
import { auth } from '../..';
const fetchConnection = async (
connections: ConnectionsManager,
projectExternalId: string | undefined,
): Promise<PiecePropValueSchema<typeof auth>> => {
if (isNil(projectExternalId)) {
throw new Error('This project is missing an external id');
}
// the naming convention here is gelato_projectExternalId
const connection = await connections.get(`gelato_${projectExternalId}`);
if (isNil(connection)) {
throw new Error(`Connection not found for project ${projectExternalId}`);
}
return connection as PiecePropValueSchema<typeof auth>;
};
export const newFlavorCreated = createTrigger({
  requireAuth: false,
  name: 'newFlavorCreated',
  displayName: 'New Flavor Created',
  description: 'Triggers when a new ice cream flavor is created.',
  props: {
    dropdown: Property.Dropdown({
      displayName: 'Dropdown',
      required: true,
      refreshers: [],
      options: async (_, { connections, project }) => {
        const connection = await fetchConnection(connections, await project.externalId());
        // your logic
        return {
          options: [{ label: 'test', value: 'test' }]
        };
      }
    })
  },
  sampleData: {},
  type: TriggerStrategy.POLLING,
  async test({ connections, project }) {
    const connection = await fetchConnection(connections, await project.externalId());
    // use the connection with your own logic
    return [];
  },
  async onEnable({ connections, project }) {
    const connection = await fetchConnection(connections, await project.externalId());
    // use the connection with your own logic
  },
  async onDisable({ connections, project }) {
    const connection = await fetchConnection(connections, await project.externalId());
    // use the connection with your own logic
  },
  async run({ connections, project }) {
    const connection = await fetchConnection(connections, await project.externalId());
    // use the connection with your own logic
    return [];
  },
});
```
# Provision Users
Source: https://www.activepieces.com/docs/embedding/provision-users
Automatically authenticate your SaaS users to your Activepieces instance
## Overview
In Activepieces, there are **Projects** and **Users**. Each project maps to its corresponding workspace, project, or team in your SaaS, and each Activepieces user maps to the corresponding user in your SaaS.
To achieve this, your backend generates a signed token that contains all the information needed to create the user and project automatically. If the user or project already exists, creation is skipped and the user is logged in directly.
You can generate a signing key by going to **Platform Settings -> Signing Keys -> Generate Signing Key**.
This will generate a public and private key pair. The public key will be used by Activepieces to verify the signature of the JWT tokens you send. The private key will be used by you to sign the JWT tokens.
Please store your private key in a safe place, as it will not be stored in Activepieces.
The signing key is used to generate JWT tokens for the currently logged-in user on your website. The token is sent to the Activepieces iframe as a query parameter, where it authenticates the user and is exchanged for a longer-lived token.
To generate these tokens, you will need to add code in your backend to generate the token using the RS256 algorithm, so the JWT header would look like this:
To obtain the `SIGNING_KEY_ID`, refer to the signing key table and locate the value in the first column.
```json theme={null}
{
"alg": "RS256",
"typ": "JWT",
"kid": "SIGNING_KEY_ID"
}
```
The signed tokens must include these claims in the payload:
```json theme={null}
{
"version": "v3",
"externalUserId": "user_id",
"externalProjectId": "user_project_id",
"firstName": "John",
"lastName": "Doe",
"role": "EDITOR",
"piecesFilterType": "NONE",
"exp": 1856563200,
"tasks": 50000,
"aiCredits": 250
}
```
| Claim | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| externalUserId | Unique identification of the user in **your** software |
| externalProjectId | Unique identification of the user's project in **your** software |
| projectDisplayName | Display name of the user's project |
| firstName | First name of the user |
| lastName | Last name of the user |
| role | Role of the user in the Activepieces project (e.g., **EDITOR**, **VIEWER**, **ADMIN**) |
| exp | Expiry timestamp for the token (Unix timestamp) |
| piecesFilterType | Customize the project pieces, check [customize pieces](/docs/embedding/customize-pieces) |
| piecesTags | Customize the project pieces, check [customize pieces](/docs/embedding/customize-pieces) |
| tasks | Customize the tasks limit for your user's project |
| aiCredits            | Customize the AI credits limit for your user's project                                                                                                                                       |
| concurrencyPoolKey | Pool identifier. Projects sharing the same key share the same concurrency limit. Must be used with `concurrencyPoolLimit`. See [Manage Concurrency](/docs/admin-guide/guides/manage-concurrency) |
| concurrencyPoolLimit | Maximum concurrent flow runs for the pool. Must be used with `concurrencyPoolKey`. |
You can use any JWT library to generate the token. Here is an example using the jsonwebtoken library in Node.js:
**Friendly Tip #1**: You can also use this [tool](https://dinochiesa.github.io/jwt/) to generate a quick example.
**Friendly Tip #2**: Make sure the expiry time is very short, as it's a temporary token and will be exchanged for a longer-lived token.
```javascript Node.js theme={null}
const jwt = require('jsonwebtoken');

// The "kid" from the signing keys table (first column); shown here as an
// environment variable for illustration.
const signingKeyID = process.env.ACTIVEPIECES_SIGNING_KEY_ID;

// JWT NumericDates are specified in seconds:
const currentTime = Math.floor(Date.now() / 1000);
let token = jwt.sign(
{
version: "v3",
externalUserId: "user_id",
externalProjectId: "user_project_id",
firstName: "John",
lastName: "Doe",
role: "EDITOR",
piecesFilterType: "NONE",
exp: currentTime + (60 * 60), // 1 hour from now
},
process.env.ACTIVEPIECES_SIGNING_KEY,
{
algorithm: "RS256",
header: {
kid: signingKeyID, // Include the "kid" in the header
},
}
);
```
Once you have generated the token, please check the embedding docs to know how to embed the token in the iframe.
# SDK Changelog
Source: https://www.activepieces.com/docs/embedding/sdk-changelog
A log of all notable changes to Activepieces SDK
**Breaking Change:**
If your Activepieces image version is \< 0.45.0 and you use the connect method from the embed SDK and either need the connection externalId returned after the user creates it or want to reconnect a specific connection by externalId, you must upgrade your Activepieces image to >= 0.45.0.
Between Activepieces image versions 0.32.1 and 0.46.4, the navigation handler included the project id in the path, which may have broken logic for implementations relying on the handler. This is fixed from 0.46.5 onwards: the handler no longer prepends the project id to routes.
Change log format: MM/DD/YYYY (version)
### 04/21/2026 (0.9.0)
* SDK URL: [https://cdn.activepieces.com/sdk/embed/0.9.0.js](https://cdn.activepieces.com/sdk/embed/0.9.0.js)
* This version requires you to **upgrade Activepieces to [0.82.0](https://github.com/activepieces/activepieces/releases/tag/0.82.0)**.
* Added `embedding.hideTables` parameter to the [configure](./embed-builder#configure-parameters) method **(value: true | false)**. When `true`, hides the Tables UI from the dashboard (tree view, Type filter option, Create/Import Table buttons, empty-state card, global search results) and blocks direct access to the `/tables/:id` editor route. The Tables piece used inside flows is unaffected.
### 10/27/2025 (0.8.1)
* SDK URL: [https://cdn.activepieces.com/sdk/embed/0.8.1.js](https://cdn.activepieces.com/sdk/embed/0.8.1.js)
* Fixed a bug where if you didn't start your navigation route with '/' it would redirect you to '/flows'
### 09/21/2025 (0.8.0)
* SDK URL: [https://cdn.activepieces.com/sdk/embed/0.8.0.js](https://cdn.activepieces.com/sdk/embed/0.8.0.js)
* This version requires you to **upgrade Activepieces to [0.70.0](https://github.com/activepieces/activepieces/releases/tag/0.70.0)**.
* Removed `embedding.dashboard.hideSettings`.
* Added `embedding.dashboard.hidePageHeader` parameter to the [configure](./embed-builder#configure-parameters) method **(value: true | false)**.
### 07/30/2025 (0.7.0)
* SDK URL: [https://cdn.activepieces.com/sdk/embed/0.7.0.js](https://cdn.activepieces.com/sdk/embed/0.7.0.js)
* This version requires you to **upgrade Activepieces to [0.66.7](https://github.com/activepieces/activepieces/releases/tag/0.66.7)**
* Added `embedding.dashboard.hideSettings` parameter to the [configure](./embed-builder#configure-parameters) method **(value: true | false)**.
### 07/07/2025 (0.6.0)
* SDK URL: [https://cdn.activepieces.com/sdk/embed/0.6.0.js](https://cdn.activepieces.com/sdk/embed/0.6.0.js)
* This version requires you to **upgrade Activepieces to [0.66.1](https://github.com/activepieces/activepieces/releases/tag/0.66.1)**
* Added `embedding.dashboard.hideFlowsPageNavbar` parameter to the [configure](./embed-builder#configure-parameters) method **(value: true | false)**.
* **(Breaking Change)** `embedding.dashboard.hideSidebar` used to hide the navbar above the flows table in the dashboard; this behavior now relies on `embedding.dashboard.hideFlowsPageNavbar`.
### 03/07/2025 (0.5.0)
* SDK URL: [https://cdn.activepieces.com/sdk/embed/0.5.0.js](https://cdn.activepieces.com/sdk/embed/0.5.0.js)
* This version requires you to **upgrade Activepieces to [0.64.2](https://github.com/activepieces/activepieces/releases/tag/0.64.2)**
* Added `embedding.hideDuplicateFlow` parameter to the [configure](./embed-builder#configure-parameters) method **(value: true | false)**.
* Added `embedding.builder.homeButtonIcon` parameter to the [configure](./embed-builder#configure-parameters) method **(value: 'logo' | 'back')**; if set to **'back'**, the tooltip shown when hovering over the home button is removed.
* Added `embedding.locale` parameter to the [configure](./embed-builder#configure-parameters) method, it takes [ISO 639-1](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes) locale codes, here are the ones supported: **('en' | 'nl' | 'it' | 'de' | 'fr' | 'bg' | 'uk' | 'hu' | 'es' | 'ja' | 'id' | 'vi' | 'zh' | 'pt')**
* Added `embedding.styling.mode` parameter to [configure](./embed-builder#configure-parameters) method **(value: 'light' | 'dark')**
* **(Breaking Change)** Removed `embedding.builder.hideLogo` parameter from the [configure](./embed-builder#configure-parameters) method.
* **(Breaking Change)** Removed MCP methods from sdk.
### 06/17/2025 (0.5.0-rc.1)
* SDK URL: [https://cdn.activepieces.com/sdk/embed/0.5.0-rc.1.js](https://cdn.activepieces.com/sdk/embed/0.5.0-rc.1.js)
* This version requires you to **upgrade Activepieces to [0.64.0-rc.0](https://github.com/activepieces/activepieces/pkgs/container/activepieces/438888138?tag=0.64.0-rc.0)**
* Restored the `prefix` parameter to the [configure](./embed-builder#configure-parameters) method.
### 06/16/2025 (0.5.0-rc.0)
* SDK URL: [https://cdn.activepieces.com/sdk/embed/0.5.0-rc.0.js](https://cdn.activepieces.com/sdk/embed/0.5.0-rc.0.js)
* This version requires you to **upgrade Activepieces to [0.64.0-rc.0](https://github.com/activepieces/activepieces/pkgs/container/activepieces/438888138?tag=0.64.0-rc.0)**
* Added `embedding.hideDuplicateFlow` parameter to the [configure](./embed-builder#configure-parameters) method **(value: true | false)**.
* Added `embedding.builder.homeButtonIcon` parameter to the [configure](./embed-builder#configure-parameters) method **(value: 'logo' | 'back')**; if set to **'back'**, the tooltip shown when hovering over the home button is removed.
* Added `embedding.locale` parameter to the [configure](./embed-builder#configure-parameters) method, it takes [ISO 639-1](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes) locale codes, here are the ones supported: **('en' | 'nl' | 'it' | 'de' | 'fr' | 'bg' | 'uk' | 'hu' | 'es' | 'ja' | 'id' | 'vi' | 'zh' | 'pt')**
* Added `embedding.styling.mode` parameter to [configure](./embed-builder#configure-parameters) method **(value: 'light' | 'dark')**
* **(Breaking Change)** Removed `prefix` parameter from the [configure](./embed-builder#configure-parameters) method.
* **(Breaking Change)** Removed `embedding.builder.hideLogo` parameter from the [configure](./embed-builder#configure-parameters) method.
### 05/26/2025 (0.4.1)
* Fixed an issue where sometimes the embed HTML file was getting cached.
### 05/20/2025 (0.4.0)
**Note:** We previously didn't consider adding optional new parameters a breaking change, so we bumped the patch version for them. That was wrong; from now on we will bump the minor version for such changes, and the patch version only for bug fixes.
* This version requires you to update Activepieces to 0.56.0
* Added `embedding.hideExportAndImportFlow` parameter to the [configure](./embed-builder#configure-parameters) method.
* Added a new possible value, `keep_home_button_only`, to the configure method param `embed.builder.disableNavigation`; it keeps only the home button and hides the folder name and the delete flow action.
* Added a new param to the configure method, `embed.builder.homeButtonClickedHandler`, which overrides the navigation behaviour when the home button is clicked.
### 17/04/2025 (0.3.7)
* Added MCP methods to update MCP configurations.
### 16/04/2025 (0.3.6)
* Added the [request](./sdk-server-requests) method which allows you to call our backend API.
### 02/24/2025 (0.3.5)
* Added a new parameter to the connect method to make the connection dialog a popup instead of an iframe taking up the full page.
* Fixed a bug where the promise returned from the connect method always resolved to \{connection: undefined}.
* Now when you use the connect method with the "connectionName" parameter, the user reconnects to the connection with the matching externalId instead of creating a new one.
### 04/02/2025 (0.3.4)
* This version requires you to update Activepieces to 0.41.0
* Added the ability to pass a font family name and font URL to the embed SDK.
### 01/26/2025 (0.3.3)
* This version requires you to update Activepieces to 0.39.8
* The activepieces.configure method was resolving before the user was authenticated. This is now fixed, so you can use the activepieces.navigate method to navigate to your desired initial route.
### 04/12/2024 (0.3.0)
* add custom navigation handler ([#4500](https://github.com/activepieces/activepieces/pull/4500))
* allow passing a predefined name for connection in connect method ([#4485](https://github.com/activepieces/activepieces/pull/4485))
* add changelog ([#4503](https://github.com/activepieces/activepieces/pull/4503))
# API Requests
Source: https://www.activepieces.com/docs/embedding/sdk-server-requests
Send requests to your Activepieces instance from the embedded app
**Requirements:**
* Activepieces version 0.34.5 or higher
* SDK version 0.3.6 or higher
You can use the embedded SDK to send requests to your instance and retrieve data.
Follow the instructions in the [Embed Builder](./embed-builder) to initialize the SDK.
**Request Parameters:**
| Parameter Name | Required | Type | Description |
| -------------- | -------- | ---------------------- | --------------------------------------------------------------------------------------------------- |
| path | ✅ | string | The path within your instance you want to hit (we prepend the path with your\_instance\_url/api/v1) |
| method         | ✅        | string                 | The HTTP method to use: 'GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH' or 'HEAD'                 |
| body | ❌ | JSON object | The json body of your request |
| queryParams    | ❌        | Record\<string, string> | The query params to include in your request                                                          |
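Since the SDK prepends `your_instance_url/api/v1` to `path`, a request resolves to a URL roughly like this (a sketch of the resolution rule, not the SDK's actual internals):

```javascript
// Resolve { path, queryParams } to the URL the instance ultimately receives.
function resolveRequestUrl(instanceUrl, { path, queryParams = {} }) {
  const url = new URL(`${instanceUrl}/api/v1${path}`);
  for (const [key, value] of Object.entries(queryParams)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

resolveRequestUrl('https://cloud.activepieces.com', {
  path: '/flows',
  method: 'GET',
  queryParams: { limit: '10' },
});
// 'https://cloud.activepieces.com/api/v1/flows?limit=10'
```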
# Delete Connection
Source: https://www.activepieces.com/docs/endpoints/connections/delete
DELETE /v1/app-connections/{id}
Delete an app connection
# List Connections
Source: https://www.activepieces.com/docs/endpoints/connections/list
GET /v1/app-connections
List app connections
# Connection Schema
Source: https://www.activepieces.com/docs/endpoints/connections/schema
# Upsert Connection
Source: https://www.activepieces.com/docs/endpoints/connections/upsert
POST /v1/app-connections
Upsert an app connection based on the app name
# Get Flow Run
Source: https://www.activepieces.com/docs/endpoints/flow-runs/get
GET /v1/flow-runs/{id}
Get Flow Run
# List Flow Runs
Source: https://www.activepieces.com/docs/endpoints/flow-runs/list
GET /v1/flow-runs
List Flow Runs
# Flow Run Schema
Source: https://www.activepieces.com/docs/endpoints/flow-runs/schema
# Create Flow
Source: https://www.activepieces.com/docs/endpoints/flows/create
POST /v1/flows
Create a flow
# Delete Flow
Source: https://www.activepieces.com/docs/endpoints/flows/delete
DELETE /v1/flows/{id}
Delete a flow
# Get Flow
Source: https://www.activepieces.com/docs/endpoints/flows/get
GET /v1/flows/{id}
Get a flow by id
# List Flows
Source: https://www.activepieces.com/docs/endpoints/flows/list
GET /v1/flows
List flows
# Flow Schema
Source: https://www.activepieces.com/docs/endpoints/flows/schema
# Apply Flow Operation
Source: https://www.activepieces.com/docs/endpoints/flows/update
POST /v1/flows/{id}
Apply an operation to a flow
# Create Folder
Source: https://www.activepieces.com/docs/endpoints/folders/create
POST /v1/folders
Create a new folder
# Delete Folder
Source: https://www.activepieces.com/docs/endpoints/folders/delete
DELETE /v1/folders/{id}
Delete a folder
# Get Folder
Source: https://www.activepieces.com/docs/endpoints/folders/get
GET /v1/folders/{id}
Get a folder by id
# List Folders
Source: https://www.activepieces.com/docs/endpoints/folders/list
GET /v1/folders
List folders
# Folder Schema
Source: https://www.activepieces.com/docs/endpoints/folders/schema
# Update Folder
Source: https://www.activepieces.com/docs/endpoints/folders/update
POST /v1/folders/{id}
Update an existing folder
# Configure
Source: https://www.activepieces.com/docs/endpoints/git-repos/configure
POST /v1/git-repos
Upsert a git repository information for a project.
# Git Repos Schema
Source: https://www.activepieces.com/docs/endpoints/git-repos/schema
# Delete Global Connection
Source: https://www.activepieces.com/docs/endpoints/global-connections/delete
DELETE /v1/global-connections/{id}
# List Global Connections
Source: https://www.activepieces.com/docs/endpoints/global-connections/list
GET /v1/global-connections
# Global Connection Schema
Source: https://www.activepieces.com/docs/endpoints/global-connections/schema
# Update Global Connection
Source: https://www.activepieces.com/docs/endpoints/global-connections/update
POST /v1/global-connections/{id}
# Upsert Global Connection
Source: https://www.activepieces.com/docs/endpoints/global-connections/upsert
POST /v1/global-connections
# Overview
Source: https://www.activepieces.com/docs/endpoints/overview
API keys are currently generated from the platform dashboard to manage multiple projects, which is only available in the Platform and Enterprise editions.
Please contact [sales@activepieces.com](mailto:sales@activepieces.com) for more information.
### Authentication
The API uses "API keys" to authenticate requests. You can view and manage your API keys from the Platform Dashboard.
After creating your API key, you can pass the API key as a Bearer token in the header.
Example:
`Authorization: Bearer {API_KEY}`
### Pagination
All endpoints use seek (cursor-based) pagination. To paginate through the results, provide `limit` and `cursor` as query parameters.
The API response will have the following structure:
```json theme={null}
{
"data": [],
"next": "string",
"previous": "string"
}
```
* **`data`**: Holds the requested results or data.
* **`next`**: Provides a starting cursor for the next set of results, if available.
* **`previous`**: Provides a starting cursor for the previous set of results, if applicable.
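Walking every page is then a loop that follows `next` until it is exhausted. A sketch with a stand-in `fetchPage(cursor)` (your real fetcher would call the endpoint with the `limit` and `cursor` query parameters and the Bearer header):

```javascript
// Collects all items across pages by following the `next` cursor
// until the API stops returning one.
async function listAll(fetchPage) {
  const items = [];
  let cursor; // undefined on the first request
  do {
    const page = await fetchPage(cursor); // -> { data, next, previous }
    items.push(...page.data);
    cursor = page.next;
  } while (cursor);
  return items;
}
```

Each iteration passes the previous response's `next` value back as the `cursor` query parameter; a null or missing `next` ends the loop.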
# Install Piece
Source: https://www.activepieces.com/docs/endpoints/pieces/install
POST /v1/pieces
Add a piece to a platform
# Piece Schema
Source: https://www.activepieces.com/docs/endpoints/pieces/schema
# Delete Project Member
Source: https://www.activepieces.com/docs/endpoints/project-members/delete
DELETE /v1/project-members/{id}
# List Project Member
Source: https://www.activepieces.com/docs/endpoints/project-members/list
GET /v1/project-members
# Project Member Schema
Source: https://www.activepieces.com/docs/endpoints/project-members/schema
# Create Project Release
Source: https://www.activepieces.com/docs/endpoints/project-releases/create
POST /v1/project-releases
# Project Release Schema
Source: https://www.activepieces.com/docs/endpoints/project-releases/schema
# Create Project
Source: https://www.activepieces.com/docs/endpoints/projects/create
POST /v1/projects
# Delete Project
Source: https://www.activepieces.com/docs/endpoints/projects/delete
DELETE /v1/projects/{id}
# List Projects
Source: https://www.activepieces.com/docs/endpoints/projects/list
GET /v1/projects
# Project Schema
Source: https://www.activepieces.com/docs/endpoints/projects/schema
# Update Project
Source: https://www.activepieces.com/docs/endpoints/projects/update
POST /v1/projects/{id}
# Get Sample Data
Source: https://www.activepieces.com/docs/endpoints/sample-data/get
GET /v1/sample-data
# Create Template
Source: https://www.activepieces.com/docs/endpoints/templates/create
POST /v1/templates
Create a template.
# Delete Template
Source: https://www.activepieces.com/docs/endpoints/templates/delete
DELETE /v1/templates/{id}
Delete a template.
# Get Template
Source: https://www.activepieces.com/docs/endpoints/templates/get
GET /v1/templates/{id}
Get a template.
# List Templates
Source: https://www.activepieces.com/docs/endpoints/templates/list
GET /v1/templates
List templates.
# Template Schema
Source: https://www.activepieces.com/docs/endpoints/templates/schema
# Delete User Invitation
Source: https://www.activepieces.com/docs/endpoints/user-invitations/delete
DELETE /v1/user-invitations/{id}
# List User Invitations
Source: https://www.activepieces.com/docs/endpoints/user-invitations/list
GET /v1/user-invitations
# User Invitation Schema
Source: https://www.activepieces.com/docs/endpoints/user-invitations/schema
# Send User Invitation (Upsert)
Source: https://www.activepieces.com/docs/endpoints/user-invitations/upsert
POST /v1/user-invitations
Send a user invitation to a user. If the user already has an invitation, the invitation will be updated.
# Delete User
Source: https://www.activepieces.com/docs/endpoints/users/delete
DELETE /v1/users/{id}
Delete user
# List Users
Source: https://www.activepieces.com/docs/endpoints/users/list
GET /v1/users
List users
# User Schema
Source: https://www.activepieces.com/docs/endpoints/users/schema
# Update User
Source: https://www.activepieces.com/docs/endpoints/users/update
POST /v1/users/{id}
Update user
# Queue Metrics
Source: https://www.activepieces.com/docs/endpoints/worker-machines/queue-metrics
GET /v1/worker-machines/queue-metrics
# Building Flows
Source: https://www.activepieces.com/docs/flows/building-flows
A flow consists of two parts: a trigger and actions
## Trigger
The flow's starting point determines how often it executes. Various trigger types are available, such as a Schedule Trigger, a Webhook Trigger, or an Event Trigger based on a specific service.
## Action
Actions follow the trigger and control what happens when the flow runs, such as executing code or communicating with other services.
# Debugging Runs
Source: https://www.activepieces.com/docs/flows/debugging-runs
Ensuring your business automations are running properly
You can monitor each run that results from an enabled flow:
1. Go to the Dashboard, click on **Runs**.
2. Find the run that you're looking for, and click on it.
3. You will see the builder in view-only mode; each step will show a ✅ or a ❌ to indicate its execution status.
4. Click on any of these steps to see its **input** and **output** in the **Run Details** panel.
The debugging experience looks like this:
# Technical Limits
Source: https://www.activepieces.com/docs/flows/known-limits
Limits enforced on Activepieces Cloud, with self-hosted overrides
The numbers in the **Cloud** column are the limits enforced on
[cloud.activepieces.com](https://cloud.activepieces.com). On **self-hosted**
installations every limit is configurable through the environment variable
shown in the table — the **Self-hosted default** column lists the value applied
when the variable is unset.
***
### Execution
| Limit | Cloud | Env var | Self-hosted default |
| ------------------------------- | ------- | ------------------------------ | ------------------- |
| Flow run timeout | 10 min | `AP_FLOW_TIMEOUT_SECONDS` | `600` |
| Worker process memory | 1 GB | `AP_SANDBOX_MEMORY_LIMIT` (KB) | `1048576` |
| Paused flow lifetime | 30 days | `AP_PAUSED_FLOW_TIMEOUT_DAYS` | `30` |
| Worker concurrency (per worker) | 10 | `AP_WORKER_CONCURRENCY` | `5` |
Flows paused by **Wait for Approval** or **Delay** do **not** count toward the
flow run timeout — the timeout only counts active execution time.
The memory limit is measured for the entire Node.js process running the flow.
About 300 MB of that is overhead for a warm process with pieces loaded.
To handle longer processes, split them into multiple flows — e.g. one flow calls
another via webhook, or each flow processes a smaller batch of items.
***
### Files & flow run logs
Files emitted by actions or triggers are persisted to the database or S3 so the
flow can retry from a later step. The log size limit applies to the combined
inputs and outputs of every step in a single run; step outputs above the slice
threshold are offloaded to object storage and re-hydrated on demand instead of
sitting in worker memory.
| Limit | Cloud | Env var | Self-hosted default |
| ----------------------------------------------------------------------- | ----- | --------------------------------------------- | ------------------- |
| Step file size | 10 MB | `AP_MAX_FILE_SIZE_MB` | `25` |
| Flow run log size (combined inputs + outputs, includes sliced payloads) | 25 MB | `AP_MAX_FLOW_RUN_LOG_SIZE_MB` | `50` |
| Step output slice threshold | 32 KB | `AP_FLOW_RUN_LOG_SLICE_THRESHOLD_KB` | `32` |
| Step input truncate threshold | 2 KB | `AP_FLOW_RUN_LOG_INPUT_TRUNCATE_THRESHOLD_KB` | `2` |
#### How it works
* **Input** — values above `AP_FLOW_RUN_LOG_INPUT_TRUNCATE_THRESHOLD_KB` are
replaced with a placeholder in the log; the step still receives the full
value at runtime.
* **Output** — outputs above `AP_FLOW_RUN_LOG_SLICE_THRESHOLD_KB` are offloaded
to object storage and replaced with a reference in the log; the payload is
re-hydrated on demand.
* **Total** — the cumulative size of inputs and outputs (counting the
**original** size of offloaded outputs) is capped by
`AP_MAX_FLOW_RUN_LOG_SIZE_MB`. Runs that exceed it end with status
`LOG_SIZE_EXCEEDED`.
A run that produces enough cumulative output to exceed
`AP_MAX_FLOW_RUN_LOG_SIZE_MB` ends with status `LOG_SIZE_EXCEEDED`, regardless
of how many individual outputs were offloaded to object storage. Lowering
`AP_FLOW_RUN_LOG_SLICE_THRESHOLD_KB` will not buy more log headroom — only
raising `AP_MAX_FLOW_RUN_LOG_SIZE_MB` does.
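The accounting above can be sketched as a small function (illustrative only; `WITHIN_LIMIT` is a placeholder name — the docs only specify the `LOG_SIZE_EXCEEDED` status):

```javascript
// Outputs above the slice threshold are offloaded to object storage, but their
// ORIGINAL size still counts toward the run's log budget -- so lowering the
// slice threshold offloads more payloads without buying any log headroom.
function runLogStatus(stepOutputSizesKb, sliceThresholdKb = 32, maxLogSizeMb = 50) {
  let totalKb = 0;
  let offloadedCount = 0;
  for (const sizeKb of stepOutputSizesKb) {
    if (sizeKb > sliceThresholdKb) offloadedCount += 1; // replaced by a reference in the log
    totalKb += sizeKb; // original size always counts toward the cap
  }
  const status = totalKb > maxLogSizeMb * 1024 ? 'LOG_SIZE_EXCEEDED' : 'WITHIN_LIMIT';
  return { status, offloadedCount };
}

// Two offloaded outputs, and the run still exceeds the 50 MB default cap:
runLogStatus([100, 60000]); // { status: 'LOG_SIZE_EXCEEDED', offloadedCount: 2 }
```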
***
### Webhooks
| Limit | Cloud | Env var | Self-hosted default |
| -------------------------------- | ------- | ---------------------------------------- | ------------------- |
| Sync webhook response timeout | 30 s | `AP_WEBHOOK_TIMEOUT_SECONDS` | `30` |
| Max webhook payload size | 5 MB | `AP_MAX_WEBHOOK_PAYLOAD_SIZE_MB` | `25` |
| Webhook payload inline threshold | 1024 KB | `AP_WEBHOOK_PAYLOAD_INLINE_THRESHOLD_KB` | `512` |
For synchronous webhook requests (URLs ending in `/sync`), Activepieces will
wait up to the response timeout before returning HTTP 408. Payloads above the
inline threshold are offloaded from Redis to file storage to protect Redis
memory; smaller payloads stay inline for the fastest path.
***
### Key / value storage
Used by the built-in **Store** piece and any piece that calls `context.store`.
| Limit | Value |
| ------------------ | -------------- |
| Maximum key length | 128 characters |
| Maximum value size | 512 KB |
These limits are not configurable.
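A piece that writes through `context.store` can guard against these limits up front; a sketch (only the two limit values are taken from the table above — the helper itself is hypothetical):

```javascript
const MAX_KEY_LENGTH = 128; // characters
const MAX_VALUE_SIZE_BYTES = 512 * 1024; // 512 KB

// Throws before the write instead of letting the store reject it.
function validateStoreEntry(key, value) {
  if (key.length > MAX_KEY_LENGTH) {
    throw new Error(`Store key exceeds ${MAX_KEY_LENGTH} characters`);
  }
  const size = Buffer.byteLength(JSON.stringify(value), 'utf8');
  if (size > MAX_VALUE_SIZE_BYTES) {
    throw new Error(`Store value is ${size} bytes; limit is ${MAX_VALUE_SIZE_BYTES}`);
  }
}
```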
# Passing Data
Source: https://www.activepieces.com/docs/flows/passing-data
Using data from previous steps in the current one
## Data flow
Any Activepieces flow is a vertical diagram that **starts with a trigger step** followed by **any number of action steps**.
Steps are connected vertically. Data flows from parent steps to their children; child steps have access to the output data of their parent steps.
## Example Steps
This flow has 3 steps, they can access data as follows:
* **Step 1** is the main data producer to be used in the next steps. Data produced by Step 1 will be accessible in Steps 2 and 3. Some triggers don't produce data though, like Schedules.
* **Step 2** can access data produced by Step 1. After execution, this step will also produce data to be used in the next step(s).
* **Step 3** can access data produced by Steps 1 and 2 as they're its parent steps. This step can produce data but since it's the last step in the flow, it can't be used by other ones.
## Data to Insert Panel
In order to use data from a previous step in your current step, place your cursor in any input and the **Data to Insert** panel will pop up.
This panel shows the accessible steps and their data. You can expand the data items to view their content, and you can click the items to insert them in your current settings input.
If an item in this panel has a caret (⌄) to the right, it means you can click on the item to expand its child properties. You can select the parent item or its properties as you need.
When you insert data from this panel, it gets inserted at the cursor's position in the input. This means you can combine static text and dynamic data in any field.
We generally recommend expanding items before inserting them, so you understand the type of data they contain and whether they're the right fit for the input you're filling.
## Testing Steps to Generate Data
We require you to test steps before accessing their data. This approach protects you from selecting the wrong data and breaking your flows after publishing them.
If a step is not tested and you try to access its data, you will see the following message:
To fix this, go to the step and use the Generate Sample Data panel to test it. Steps use different approaches for testing. These are the common ones:
* **Load Data:** Some triggers will let you load data from your connected account without having to perform any action in that account.
* **Test Trigger:** Some triggers will require you to head to your connected account and fire the trigger in order to generate sample data.
* **Send Data:** Webhooks require you to send a sample request to the webhook URL to generate sample data.
* **Test Action:** Action steps will let you run the action in order to generate sample data.
Follow the instructions in the Generate Sample Data panel to know how your step should be tested. Some triggers will also let you Use Mock Data, which will generate static sample data from the piece. We recommend that you test the step instead of using mock data.
This is an example for generating sample data for a trigger using the **Load Data** button:
## Advanced Tips
### Switching to Dynamic Values
Dropdowns and some other input types don't let you select data from previous steps. If you'd like to bypass this and use data from previous steps instead, switch the input into a dynamic one using this button:
### Accessing data by path
If you can't find the data you're looking for in the **Data to Insert** panel but you'd like to use it, you can write a JSON path instead.
Use the following syntax to write JSON paths:
`{{step_slug.path.to.property}}`
The `step_slug` can be found by moving your cursor over any of your flow steps, it will show to the right of the step.
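Conceptually, such a path picks a nested property out of the step's output; a sketch of that resolution (not Activepieces' actual template engine, and the `trigger` output here is made-up sample data):

```javascript
// Resolves 'step_slug.path.to.property' against a map of step outputs,
// the way {{trigger.body.email}} would pick a field out of the trigger's output.
function resolvePath(stepOutputs, path) {
  return path.split('.').reduce((value, key) => value?.[key], stepOutputs);
}

const stepOutputs = {
  trigger: { body: { email: 'john@example.com' } },
};

resolvePath(stepOutputs, 'trigger.body.email'); // 'john@example.com'
```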
# Publishing Flows
Source: https://www.activepieces.com/docs/flows/publishing-flows
Make your flow work by publishing your updates
The changes you make won't take effect right away, to avoid disrupting the flow that's already published. Once you're done, simply click the Publish button to enable your changes.
# Version History
Source: https://www.activepieces.com/docs/flows/versioning
Learn how flow versioning works in Activepieces
Activepieces keeps track of all published flows and their versions. Here’s how it works:
1. You can edit a flow as many times as you want in **draft** mode.
2. Once you're done with your changes, you can publish it.
3. The published version will be locked and uneditable.
4. If you try to edit a published flow, Activepieces will create a new **draft** (if one doesn't already exist) by copying the **published** version into it.
This means you can always go back to a previous version and edit the flow in draft mode without affecting the published version.
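The rules above amount to a small state transition. Here is an illustrative TypeScript sketch (not the real Activepieces schema, just the behavior described above):

```typescript
type FlowVersionState = 'DRAFT' | 'PUBLISHED';

interface FlowVersion {
  version: number;
  state: FlowVersionState;
}

// Editing a flow whose latest version is published creates a fresh draft
// that starts as a copy of the published version; the published one stays locked.
function editFlow(versions: FlowVersion[]): FlowVersion[] {
  const latest = versions[versions.length - 1];
  if (latest.state === 'DRAFT') {
    return versions; // keep editing the existing draft
  }
  return [...versions, { version: latest.version + 1, state: 'DRAFT' }];
}
```

Repeated edits reuse the same draft; a new draft is only created when the latest version is published.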
As you can see in the following screenshot, the yellow dot refers to DRAFT and the green dot refers to PUBLISHED.
# How to handle Requests
Source: https://www.activepieces.com/docs/handbook/customer-support/handle-requests
As a support engineer, you should:
* Fix the urgent issues (please see the definition below)
* Open tickets for all non-urgent issues. **(DO NOT INCLUDE ANY SENSITIVE INFO IN ISSUE)**
* Keep customers updated
* Write clear ticket descriptions
* Help the team prioritize work
* Route issues to the right people
Our support hours are from **8 am to 6 pm New York time (ET)**. Please keep this in mind when communicating response expectations to customers.
### Ticket fields
When handling support tickets, ensure you set the appropriate status and priority to help with ticket management and response time:
### Requests
### Type 1: Quick Fixes & Urgent Issues
* Understand the issue and how urgent it is.
* Open a ticket on Linear with the "require attention" label and assign it to someone.
### Type 2: Complex Technical Issues
* Assess the issue and determine its urgency.
* Always create a Linear issue and send it to the customer.
* Leave a comment on the Linear issue with an estimated completion time.
### Type 3: Feature Enhancement Requests
* Always create a Linear issue for the feature request and send it to the customer.
* Dig deeper into what the customer is actually trying to solve, then either open a new ticket or append to an existing ticket in the backlog.
* Add it to our roadmap and discuss it with the team.
### Type 4: Business Case
* These cases involve purchasing new features, billing, or discussing agreements.
* Change the Team to "Success" on Pylon.
* Tag someone from the Success Team to handle it.
New features will always have the status "Backlog". Please make sure to communicate that we will discuss and address it in future production cycles so the customer doesn't expect immediate action.
### Frequently Asked Questions
If you don't understand the feature or issue, reach out to the customer for clarification. It's important to fully grasp the problem before proceeding. You can also consult with your team for additional insights.
When faced with multiple urgent issues, assess the impact of each on the customer and the system. Prioritize based on severity, number of affected users, and potential risks. Communicate with your team to ensure alignment on priorities.
If you encounter an abusive or rude customer, escalate the issue to Mohammad AbuAboud or Ashraf Samhouri. It's important to handle such situations with care and ensure that the customer feels heard while maintaining a respectful and professional demeanor.
# Overview
Source: https://www.activepieces.com/docs/handbook/customer-support/overview
At Activepieces, we take a unique approach to customer support. Instead of having dedicated support staff, our full-time engineers handle support requests on rotation. This ensures you get expert technical help from the people who build the product.
### Support Schedule
Our on-call engineer handles customer support as part of their rotation. For more details about how this works, check out our on-call documentation.
### Support Channels
* Community Support
* GitHub Issues: We actively monitor and respond to issues on our [GitHub repository](https://github.com/activepieces/activepieces)
* Community Forum: We engage with users on our [Community Platform](https://community.activepieces.com/) to provide help and gather feedback
* Email: only for account-related issues, such as account deletion requests or billing issues.
* Enterprise Support
* Enterprise customers receive dedicated support through Slack
* We use [Pylon](https://usepylon.com) to manage support tickets and customer channels efficiently
* For detailed information on using Pylon, see our [Pylon Guide](/docs/handbook/customer-support/pylon)
### Support Hours & SLA
Work in progress—coming soon!
# How to use Pylon
Source: https://www.activepieces.com/docs/handbook/customer-support/pylon
Guide for using Pylon to manage customer support tickets
At Activepieces, we use Pylon to manage Slack-based customer support requests through a Kanban board.
Learn more about Pylon's features: [https://docs.usepylon.com/pylon-docs](https://docs.usepylon.com/pylon-docs)
### New Column
Contains new support requests that haven't been reviewed yet
* Action Items:
* Respond fast, even if you don't have an answer yet. The important thing is to reply that you'll look into it; that's the key to winning the customer's heart.
### On You Column
Contains active tickets that require your attention and response. These tickets need immediate review and action.
* Action items:
* Set ticket fields (status and priority) according to the guide below
* Check the [handle request page](./handle-requests) on how to handle tickets
The goal as a support engineer is to keep the "New" and "On You" columns empty.
### On Hold
Contains only tickets that have a linked Linear issue.
* Place tickets here after:
* You have identified the customer's issue
* You have created a Linear issue (if one doesn't exist - avoid duplicates!)
* You have linked the issue in Pylon
* You have assigned it to a team member (for urgent cases only)
Please do not place tickets on hold without a linked Linear issue.
Tickets will automatically move back to the "On You" column when the linked Linear issue is closed.
### Closed Column
This means you did an awesome job: the ticket is resolved, has reached its final destination, and needs no further attention.
# Tone & Communication
Source: https://www.activepieces.com/docs/handbook/customer-support/tone
Our customers are fellow engineers and great people to work with. This guide will help you understand the tone and communication style that reflects Activepieces' culture in customer support.
#### Casual
Chat with them like you're talking to a friend. There's no need to sound like a robot. For example:
* ✅ "Hey there! How can I help you today?"
* ❌ "Greetings. How may I assist you with your inquiry?"
* ✅ "No worries, we'll get this sorted out together!"
* ❌ "Please hold while I process your request."
#### Fast
Reply quickly! People love fast responses. Even if you don't know the answer right away, let them know you'll get back to them with the information. This is the fastest way to make customers happy; everyone likes to be heard.
#### Honest
Explain the issue clearly, don't be defensive, and be honest. We're all about open source and transparency here – it's part of our culture. For example:
* ✅ "I'm sorry, I forgot to follow up on this. Let's get it sorted out now."
* ❌ "I apologize for the delay; there were unforeseen circumstances."
### Always Communicate the Next Step
Always clarify the next step, such as whether the ticket will receive an immediate response or be added to the backlog for team discussion.
#### Use "we," not "I"
* ✅ "We made a mistake here. We'll fix that for you."
* ❌ "I'll look into this for you."
* You're speaking on behalf of the company in every email you send.
* Use "we" to show customers they have the whole team's support.
Customers are real people who want to talk to real people. Be yourself, be helpful, and focus on solving their problems!
# Handling Downtime
Source: https://www.activepieces.com/docs/handbook/engineering/onboarding/downtime-incident

## 📋 What You Need Before Starting
Make sure these are ready:
* **[BetterStack Setup](../playbooks/setup-betterstack)**: For managing incidents and on-call alerts.
* **[BetterStack Logs](https://telemetry.betterstack.com)**: For checking logs and errors.
* **[BetterStack Monitors](https://uptime.betterstack.com)**: For e2e test results and uptime monitoring.
* **[BetterStack Errors](https://errors.betterstack.com)**: For thrown errors.
***
## 🚨 Stay Calm and Take Action
Don’t panic! Follow these steps to fix the issue.
1. **Tell Your Users**:
* Let your users know there’s an issue. Post on [Community](https://community.activepieces.com) and Discord.
* Example message: *“We’re looking into a problem with our services. Thanks for your patience!”*
2. **Find Out What’s Wrong**:
* Gather details. What’s not working? When did it start?
3. **Update the Status Page**:
* Use [BetterStack](https://uptime.betterstack.com) and create an incident to update the status page. Set it to *“Investigating”* or *“Partial Outage”*.
***
## 🔍 Check for Infrastructure Problems
1. **Look at DigitalOcean**:
* Check if the CPU, memory, or disk usage is too high.
* If it is:
* **Increase the machine size** temporarily to fix the issue.
* Keep looking for the root cause.
***
## 📜 Check Logs and Errors
1. **Use BetterStack Logs**:
* Go to [https://telemetry.betterstack.com](https://telemetry.betterstack.com).
* Search for recent errors in the logs.
2. **Check BetterStack errors**:
* Go to [https://errors.betterstack.com](https://errors.betterstack.com).
* Look for grouped errors (errors that happen a lot).
* Try to **reproduce the error** and fix it if possible.
***
## 🛠️ Debugging with Playwright / BetterStack Monitors
1. **Check BetterStack Monitor Logs**:
* Go to [https://uptime.betterstack.com](https://uptime.betterstack.com) and review recent monitor failures.
* If the issue is a **timeout**, it might mean there’s a bigger performance problem.
2. **Check Playwright E2E Results**:
* Review the latest Playwright test run in CI for failed checks.
* If it’s an E2E test failure due to UI changes, it’s likely not urgent.
* Fix the test file `packages/tests-e2e/scenarios/betterstack/webhook-should-return-response.flat.spec.js`; the issue will go away once the fix is pushed to `main` and the "Sync Playwright test to BetterStack" CI job runs.
***
## 🎭 Debugging Incidents via Playwright Artifacts
1. Go to the [BetterStack Incidents list](https://uptime.betterstack.com/team/t142339/incidents?m=4211060).
2. Choose the relevant incident.
3. Scroll down and open the **Artifacts** tab.
4. You’ll find **screenshots** of the failed Playwright tests and **logs** that help pinpoint what went wrong.
***
## 🚨 When Should You Ask for Help?
Ask for help right away if:
* Flows are failing.
* The whole platform is down.
* There's a lot of data loss or corruption.
* You're not sure what is causing the issue.
* You've spent **more than 5 minutes** and still don't know what's wrong.
💡 **How to Ask for Help**:
* Use **BetterStack** to create a **critical alert**.
* Go to the **Slack incident channel** and escalate the issue to the engineering team.
If you’re unsure, **ask for help!** It’s better to be safe than sorry.
***
## 💡 Helpful Tips
1. **Stay Organized**:
* Keep a list of steps to follow during downtime.
* Write down everything you do so you can refer to it later.
2. **Communicate Clearly**:
* Keep your team and users updated.
* Use simple language in your updates.
3. **Take Care of Yourself**:
* If you feel stressed, take a short break. Grab a coffee ☕, take a deep breath, and tackle the problem step by step.
# Engineering Workflow
Source: https://www.activepieces.com/docs/handbook/engineering/onboarding/how-we-work
Activepieces runs on one-week sprints; priorities change fast, so sprints must be short enough to adapt.
## Sprints
Sprints are shared publicly on our GitHub account, giving everyone visibility into what we are working on.
* There should be a GitHub issue for the sprint set up in advance that outlines the changes.
* Each *individual* should come prepared with specific suggestions for what they will work on over the next sprint. **If you're in an engineering role, no one will dictate to you what to build – it is up to you to drive this.**
* Teams generally meet once a week to pick the **priorities** together.
* Everyone in the team should attend the sprint planning.
* Anyone can comment on the sprint issue before or after the sprint.
## Pull Requests
When it comes to code review, we have a few guidelines to ensure efficiency:
* Create a pull request in draft state as soon as possible.
* Be proactive and review other people’s pull requests. Don’t wait for someone to ask for your review; it’s your responsibility.
* Assign only one reviewer to your pull request.
* Add the PR to the current project (sprint) so we can keep track of unmerged PRs at the end of each sprint.
* It is the **responsibility** of the **PR owner** to draft the test scenarios within the PR description. Upon review, the reviewer may assume that these scenarios have been tested and provide additional suggestions for scenarios.
* Large, incomplete features should be broken down into smaller tasks and continuously merged into the main branch.
## Planning is everyone's job.
Every engineer is responsible for discovering bugs/opportunities and bringing them up in the sprint to convert them into actionable tasks.
# On-Call
Source: https://www.activepieces.com/docs/handbook/engineering/onboarding/on-call
## Prerequisites:
* [Setup BetterStack](../playbooks/setup-betterstack)
## Why On-Call?
We need to ensure there is **exactly one person** at the same time who is the main point of contact for the users and the **first responder** for the issues. It's also a great way to learn about the product and the users and have some fun.
You can listen to [Queen - Under Pressure](https://www.youtube.com/watch?v=a01QQZyl-_I) while on-call, it's fun and motivating.
If you ever feel burned out in the middle of your rotation, please reach out to the team and we will help with the rotation or take over the responsibility.
## On-Call Schedule
The on-call rotation is managed through BetterStack, with each engineer taking a one-week shift. You can:
* View the current schedule and upcoming rotations on [BetterStack On-Call Schedule](https://uptime.betterstack.com)
* Add the schedule to your Google Calendar using the calendar feed link from BetterStack
Make sure to update the on-call schedule in BetterStack if you cannot be available during your assigned rotation. This ensures alerts are routed to the correct person and maintains our incident response coverage.
To modify the schedule:
1. Go to [BetterStack On-Call Schedule](https://uptime.betterstack.com)
2. Find your rotation slot and create an override for your unavailable period
3. Coordinate with the team to find coverage for your slot
## What it means to be on-call
The primary objective of being on-call is to triage issues and assist users. It is not about fixing the issues or coding missing features. Delegation is key whenever possible.
You are responsible for the following:
* Respond to Slack messages as soon as possible, referring to the [customer support guidelines](/docs/handbook/customer-support/overview).
* Check [community.activepieces.com](https://community.activepieces.com) for any new issues or to learn about existing issues.
* Monitor your BetterStack notifications and respond promptly when paged.
**Friendly Tip #1**: always escalate to the team if you are unsure what to do.
## How do you get paged?
Monitor and respond to incidents that come through these channels:
#### Slack Fire Emoji (🔥)
When a customer reports an issue in Slack and someone reacts with 🔥, you'll be automatically paged via BetterStack.
#### Automated Alerts
Watch for notifications from:
* Digital Ocean about CPU, Memory, or Disk outages
* BetterStack Monitors about website downtime or uptime failures
* Playwright CI about e2e test failures
# Onboarding Check List
Source: https://www.activepieces.com/docs/handbook/engineering/onboarding/onboarding-check-list
🎉 Welcome to Activepieces!
This guide provides a checklist for the new hire onboarding process.
***
## 📧 Essentials
* [ ] Set up your @activepieces.com email account and enable 2FA
* [ ] Confirm access to our private Discord server
* [ ] Get invited to the Activepieces GitHub organization and set up 2FA
* [ ] Get assigned an onboarding buddy
During your first two months, we'll schedule 1:1 meetings every two weeks to ensure you're progressing well and to maintain open communication in both directions.
After two months, we will decrease the frequency of the 1:1 to once a month.
If you don't set up 2FA, we will be alerted from a security perspective.
***
### Engineering Checklist
* [ ] Setup your development environment using our setup guide
* [ ] Learn the repository structure and our tech stack (Fastify, React, PostgreSQL, SQLite, Redis)
* [ ] Understand the key database tables (Platform, Projects, Flows, Connections, Users)
* [ ] Complete your first "warmup" task within your first day (it's our tradition!)
***
## 🌟 Tips for Success
* Don't hesitate to ask questions—the team is especially helpful during your first days
* Take time to understand the product from a business perspective
* Work closely with your onboarding buddy to get up to speed quickly
* Review our documentation, explore the codebase, and check out community resources, even beyond your immediate scope.
* Provide your ideas and feedback regularly
***
Welcome again to the team. We can't wait to see the impact you'll make at Activepieces! 😉
# Release Cycle
Source: https://www.activepieces.com/docs/handbook/engineering/onboarding/release-cycle
This page explains how code moves from merge to production and why each step exists.
## The Cycle
```
Merge to main (Mon–Thu) ──→ Deploy to staging
│
5 PM UTC Thu: staging freezes + RC tagged
│
Fri–Sat: canary runs daily from main
│
Sun 9 AM UTC: canary rebuilt from main
│ → release-candidate promoted to production
│ + deploy/cloud/YYYY-MM-DD branch created
Mon 9 AM UTC: self-hosted release published
```
## Cutoffs & Why
### Staging Freeze / RC Tag — Thursday 5 PM UTC
Merges to `main` after 5 PM UTC on Thursday are **not deployed to staging**. At Thursday 5 PM UTC, the current staging image and commit are tagged as `release-candidate` — this is the stability gate for the week.
**Why**: Anything merged after the Thursday cut doesn't go to cloud production until the following week. This gives the team a predictable, stable window to validate changes before they reach customers.
### Promotion — Sunday 9 AM UTC
At 9 AM UTC on Sunday, the cloud workflow first refreshes the canary environment by invoking `continuous-delivery-canary.yml` as a reusable workflow, then promotes Thursday's `release-candidate` to production.
**Why**: By Sunday, the release candidate has soaked in the canary environment (Fri–Sat). Refreshing canary again right before promotion ensures it is running the latest `main` when production is updated. If something is broken, we catch it before it reaches all customers.
After promotion, a `deploy/cloud/YYYY-MM-DD` branch is created.
### Weekly Release — Monday 9 AM UTC
On Monday at 9 AM UTC, the self-hosted release is published: git tag, GitHub release notes, and the `release-candidate` cloud image re-tagged as the versioned self-hosted image on Docker Hub and GHCR.
**Why**: Publishing the day after cloud promotion means self-hosted users get the exact same bits that cloud deployed on Sunday — no separate build, no divergence. The Monday timing also allows the team to catch any issues from Sunday's cloud deployment before the self-hosted image goes public.
## Putting It Together
| Time | What Happens |
| ---------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| Mon–Thu, 9 AM–5 PM UTC | Merge to `main` → auto-deployed to staging |
| Mon–Thu, 5 PM–9 AM UTC | Merge to `main` → code in `main` only, staging frozen |
| **Thu 5 PM UTC** | Staging image + commit tagged as `release-candidate` |
| Fri–Sat | Canary deploys daily from `main`; cloud is quiet |
| **Sun 9 AM UTC** | Canary rebuilt from latest `main`; `release-candidate` promoted to cloud production; `deploy/cloud/YYYY-MM-DD` branch created |
| **Mon 9 AM UTC** | Self-hosted release published (re-tags cloud RC image) |
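The Thursday cutoff described above can be sketched as a small check (an illustrative helper, not part of the actual CI workflows):

```typescript
// Illustrative: does a merge timestamp land before this week's
// Thursday 5 PM UTC release-candidate cut?
// In JS Dates, getUTCDay() returns 0 for Sunday, so Thursday is 4.
function makesThisWeeksCut(merge: Date): boolean {
  const cut = new Date(merge); // copy, so the input is not mutated
  const daysUntilThursday = 4 - cut.getUTCDay();
  cut.setUTCDate(cut.getUTCDate() + daysUntilThursday);
  cut.setUTCHours(17, 0, 0, 0); // 5 PM UTC on the merge week's Thursday
  return merge.getTime() < cut.getTime();
}
```

A Friday or Saturday merge computes a cut in the past and correctly misses the week, which matches the table: that code rides the following week's release candidate.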
## Overrides
Sometimes the cycle needs to be bypassed:
* **Hotfix**: Apply your fix to the current `deploy/cloud/YYYY-MM-DD` branch, then manually dispatch `continuous-delivery-cloud.yml` with action `cloud-hotfix` **on that branch** (not `main`). This builds the image and deploys directly to production — staging is not involved. After promotion, merge the hotfix branch into `main` — this automatically triggers `tag-release-candidate`, which SSHes to staging and re-tags whatever image is running there as `release-candidate`. Sunday's scheduled run will then re-deploy what is already on prod — a safe no-op.
* **Emergency**: Use `emergency-cloud-deploy.yml` to deploy directly to production, bypassing staging entirely. Use sparingly.
## FAQ
**Q: I merged at 6 PM UTC on a Wednesday. When does my code reach production?**
Your code is in `main` but staging is frozen. It won't be included in the Thursday RC tag unless it lands before 5 PM UTC Thursday. If it misses that cut, it won't reach cloud production until the following Sunday.
**Q: I merged at 10 AM UTC on a Wednesday. When does my code reach production?**
It deploys to staging immediately. As long as nothing breaks before Thursday 5 PM UTC, it will be included in the RC tag and will reach cloud production the following Sunday.
**Q: Can I deploy to staging during the freeze window?**
Yes, manually dispatch `continuous-delivery.yml` with `deploy-staging`. But consider that the content team may be relying on the frozen version.
# Stack & Tools
Source: https://www.activepieces.com/docs/handbook/engineering/onboarding/stack
## Language
Activepieces uses **TypeScript** as its one and only language.
Unifying on a single language lets us break data models and features into packages that are shared across all components (worker / frontend / backend).
It also means the team learns fewer tools and can perfect them across all packages.
## Frontend
* Web framework/library: [React](https://reactjs.org/)
* Layout/components: [shadcn](https://shadcn.com/) / Tailwind
## Backend
* Framework: [Fastify](https://www.fastify.io/)
* Database: [PostgreSQL](https://www.postgresql.org/)
* Task Queuing: [Redis](https://redis.io/)
* Task Worker: [BullMQ](https://github.com/taskforcesh/bullmq)
## Testing
* Unit & Integration Tests: [Vitest](https://vitest.dev/)
* E2E Test: [Playwright](https://playwright.dev/)
## Additional Tools
* Application monitoring: [Sentry](https://sentry.io/welcome/)
* CI/CD: [GitHub Actions](https://github.com/features/actions) / [Depot](https://depot.dev/) / [Kamal](https://kamal-deploy.org/)
* Containerization: [Docker](https://www.docker.com/)
* Linter: [ESLint](https://eslint.org/)
* Logging: [OpenTelemetry](https://opentelemetry.io/)
* Building: [Turbo](https://turbo.build/) with [Bun](https://bun.sh/)
## Adding a New Tool
Adding a new tool isn't a simple choice. A simple choice is one that's easy to do or undo, or one that only affects your work and not others'.
We avoid adding new dependencies to keep setup easy, which increases adoption. More dependencies mean more moving parts and more to support.
If you're thinking about a new tool, ask yourself these:
* Is this tool open source? How can we give it to customers who use their own servers?
* What does it fix, and why do we need it now?
* Can we use what we already have instead?
These questions only apply to required services for everyone. If this tool speeds up your own work, we don't need to think so hard.
# Overview
Source: https://www.activepieces.com/docs/handbook/engineering/overview
Welcome to the engineering team! This section contains essential information to help you get started, including our development processes, guidelines, and practices. We're excited to have you on board.
# AI Engineering Guide
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/ai-engineering-guide
How to use our Claude Code setup to ship features from a single prompt (hopefully 😄).
This guide is for engineers, not AI. Claude should not read this file.
## What We Built
Our repo has a layered AI assistance system:
| Layer | What | When Loaded | Purpose |
| ----------------------- | ------------------ | -------------------------------- | ---------------------------------------------------------------------------- |
| `CLAUDE.md` (root) | \~55 lines | Every session | Non-obvious architecture rules |
| `packages/*/CLAUDE.md` | \~30-55 lines each | When working in that package | Package-specific patterns |
| `.agents/features/*.md` | \~60 lines each | When Claude explores the feature | Entity schemas, services, data flows |
| `.claude/rules/` | 3-5 lines each | Every session | Critical safety checks (entity registration, data isolation, edition safety) |
| `.agents/skills/` | 30-65 lines each | When invoked | Step-by-step workflows (`/add-feature`, `/add-entity`, `/add-endpoint`) |
Total context per session: \~150 lines. Claude has \~150 instruction slots — we use them all, nothing wasted.
## The Workflow: Prompt to Feature to Ship
### 1. Write a Clear Brief (5 min)
**Bad:** "add analytics"
**Good:** "Add a project-level analytics dashboard showing flow run count, success rate, and average duration for the last 7/30/90 days. Project-scoped, CE feature, visible to users with READ\_RUN permission."
Include:
* What it does (1-2 sentences)
* Who sees it (role/permission)
* CE, EE, or both
* Any constraints (must work embedded, must respect quotas, etc.)
### 2. Explore (Plan Mode) — 15 min
Press `Shift+Tab` twice to enter Plan Mode, then paste your brief.
Claude will:
* Read relevant feature files in `.agents/features/` for the modules you'll touch
* Explore existing patterns in the codebase
* Ask you clarifying questions
* Propose which files to create/modify
You review and refine. This is where 80% of the value comes from — getting the plan right before writing code.
### 3. Implement — 30-90 min
Exit Plan Mode. Claude executes the plan.
For full-stack features, tell Claude: "Use `/add-feature`" — this triggers our custom skill with the full cross-cutting checklist (shared types, entity + migration, service, controller, frontend, tests, verify).
For just a database entity: `/add-entity "my_feature"`
For just an API endpoint: `/add-endpoint "description"`
Claude will:
* Create files following our patterns (references real code, not examples)
* Register entities in `getEntities()` (rules enforce this)
* Add migrations to `getMigrations()`
* Set up `securityAccess` on every endpoint
* Filter queries by `projectId`/`platformId`
* Create frontend feature folder with API client + hooks
### 4. Test — 10-30 min
Tell Claude: "Write API tests for this feature and run them."
Claude will:
* Create test file at `test/integration/ce/{feature}.test.ts`
* Use `setupTestEnvironment()` + `createTestContext(app)`
* Run `npm run test-api`
* Fix failures and re-run
### 5. Verify and Ship — 10 min
```bash theme={null}
npm run lint-dev # Claude runs this automatically
npm run test-api # Verify all tests pass
```
Then: "Create a PR for this" — Claude handles branch, commit, and PR creation.
## Parallel Features (2-3 at Once)
Open multiple terminal tabs, each with its own Claude session:
```bash theme={null}
Tab 1: claude --worktree # Feature A (backend-heavy)
Tab 2: claude --worktree # Feature B (frontend-heavy)
Tab 3: claude --worktree # Feature C (piece/integration)
```
Each tab gets its own git worktree (isolated branch, no conflicts). While Tab 1 is running tests, switch to Tab 2 and keep building.
Practical limit: 3 parallel sessions. Beyond that, context-switching overhead exceeds gains.
## Session Management
| Situation | Command | Why |
| -------------------------------- | ----------------------------- | ------------------------------ |
| Starting unrelated feature | `/clear` | Wipe old context, start clean |
| Continuing same feature next day | Open Claude in same directory | Auto-loads `CLAUDE.md` |
| Claude keeps making same mistake | `/clear` + rewrite prompt | Correction loops waste context |
| Long session (45+ min) | `/compact` | Prevent quality degradation |
| Context feels bloated | `/compact` | Summarize and continue fresh |
If you've corrected Claude twice on the same thing, `/clear` and rewrite the prompt with what you learned. Don't accumulate corrections.
## Code Review Checklist (for AI-Generated Code)
* [ ] Logic matches the plan you approved
* [ ] Queries filter by `projectId` or `platformId`
* [ ] New entity registered in `getEntities()`
* [ ] Migration added to `getMigrations()`
* [ ] Every endpoint has `securityAccess` config
* [ ] EE code stays in `src/app/ee/` (no cross-imports)
* [ ] Tests exist and pass
* [ ] No `as SomeType` casting
* [ ] No `any` types
* [ ] Zod error messages use i18n keys
* [ ] `@activepieces/shared` version bumped (if changed)
## Weekly Rhythm (5 Features/Week Target)
| Day | Morning | Afternoon |
| --- | ------------------------------- | --------------------------------- |
| Mon | Plan Feature A (explore + plan) | Implement Feature A |
| Tue | Finish A, test, PR | Plan + start Feature B |
| Wed | Finish B | Plan + implement Feature C |
| Thu | Finish C, test, PR | Feature D (full cycle in 1 day) |
| Fri | Feature E (or polish D) | Code review, merge, retrospective |
Planning on Day N, implementing on Day N+1 produces better code than doing both in the same session. The overnight break lets you think about edge cases.
## Tips from Top Performers
1. **Front-load the brief.** A clear 3-sentence description saves 30 min of Claude wandering. Include: what, who, CE/EE, constraints.
2. **Let Plan Mode run.** Don't skip it. The 15 minutes of exploration prevents 2 hours of wrong-direction coding.
3. **Use the skills.** `/add-feature` exists for a reason — it encodes every step we've learned from past mistakes.
4. **Read the feature docs yourself.** Before starting a feature in a module, spend 2 minutes reading its `.agents/features/*.md`. You'll ask better questions.
5. **Don't babysit.** Start Claude on a task, switch to another tab, come back when it's done. Check notifications.
6. **Small PRs > big PRs.** Ship each feature as its own PR. Don't bundle.
7. **Test commands in the brief.** Include "success looks like: this test passes, this API returns 200, this page renders" in your prompt.
8. **Trust the rules.** Our `.claude/rules/` enforce entity registration, edition safety, and data isolation. Claude follows these every time.
## Quick Reference
```bash Start working theme={null}
claude # Opens Claude Code in current directory
```
```bash Plan before implementing theme={null}
# Press Shift+Tab twice for Plan Mode
```
```bash Custom skills theme={null}
/add-feature "description" # Full-stack feature workflow
/add-entity "name" # Database entity + migration
/add-endpoint "description" # API endpoint + controller
```
```bash Session management theme={null}
/clear # Hard reset (new feature)
/compact # Soft reset (continue, less context)
```
```bash Verification theme={null}
npm run lint-dev # Lint + auto-fix
npm run test-api # Run API tests
npm run test-unit # Run unit tests
```
```bash Ship theme={null}
CLAUDE_PUSH=yes git push -u origin HEAD
```
## Onboarding (First Week)
1. Read this guide. Install Claude Code. Read root `CLAUDE.md` (\~55 lines).
2. Pick a small feature (5-10 files). Follow the full workflow: brief, Plan Mode, implement, test, PR. Expect 4-6 hours.
3. Pick a medium feature (15-25 files). Practice `/clear` when stuck. Use the `/add-feature` skill.
4. Try 2 parallel sessions on different features. Practice tab-switching.

By the end of the week, you should be shipping 1 feature/day comfortably.
# Canary Deployment
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/canary-deployment
Canary deployment lets us run a separate app instance on a newer (or experimental) build and route a specific subset of platforms to it, while all other traffic continues hitting the primary app. This gives us real-world validation with live data before a full rollout.
## Architecture Overview
```
Browser / External Webhook
│
▼
Primary App (prod)
┌─────────────────────────────┐
│ preHandler hook │
│ canaryRoutingMiddleware │
│ ├─ platformId ∈ canary? │──── No ──▶ handle normally
│ └─ Yes │
│ ▼ │
│ HTTP proxy to canary app │──────────▶ Canary App
└─────────────────────────────┘ │
▼
Canary Worker
(polls canaryWorkerJobs queue)
```
All traffic enters the primary app. The `canaryRoutingMiddleware` runs as a Fastify `preHandler` hook on every request. If the resolved `platformId` is in the canary list, the request is proxied in-process to the canary app and the response is returned directly to the caller — the primary app never processes it further.
## Canary membership: DB-backed with Redis cache
Canary platform membership is stored in the `platform_plan.canary` boolean column. On each lookup, the list of canary platform IDs is fetched from Redis (`canary-platform-ids` key). On cache miss the list is read from the database and cached. When the canary flag is changed via the API, the cache is invalidated immediately.
This removes the need to keep `AP_CANARY_PLATFORM_IDS` in sync across services and allows runtime changes without a redeploy.
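The lookup follows a standard cache-aside pattern. Here is a minimal sketch with an in-memory `Map` standing in for Redis and a stubbed query standing in for the `platform_plan` read — the names are illustrative, not the actual source:

```typescript
const CACHE_KEY = 'canary-platform-ids';
const cache = new Map<string, string[]>(); // stand-in for Redis

async function fetchCanaryIdsFromDb(): Promise<string[]> {
  // stand-in for: SELECT platform_id FROM platform_plan WHERE canary = true
  return ['platform-a', 'platform-b'];
}

async function getCanaryPlatformIds(): Promise<string[]> {
  const cached = cache.get(CACHE_KEY);
  if (cached) return cached;                // cache hit
  const ids = await fetchCanaryIdsFromDb(); // cache miss: read the DB
  cache.set(CACHE_KEY, ids);                // populate the cache
  return ids;
}

// Called whenever the canary flag changes via the admin API,
// so the next lookup re-reads the database.
function invalidateCanaryCache(): void {
  cache.delete(CACHE_KEY);
}
```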
## Components
### Canary App
A second instance of the server API running a different image tag. No special env vars are needed for canary membership — it is driven entirely by the DB. Configure it with:
| Env var | Value |
| ----------------- | ----------------------- |
| `AP_FRONTEND_URL` | canary app's public URL |
The canary app runs as a full app instance. Global queue consumers (`runsMetadata`, `system-job-queue`) run on both the primary and canary app — this is safe because BullMQ ensures each job is processed by exactly one consumer.
### Canary Workers
Dedicated workers that connect to the canary app instead of the primary app:
| Env var | Value |
| -------------------- | -------- |
| `AP_WORKER_GROUP_ID` | `canary` |
Workers use `AP_FRONTEND_URL` for their Socket.IO RPC channel and for posting engine results back to the app. Canary workers are registered with the canary app, so the RPC path is fully isolated.
### Primary App Config
The primary app only needs to know where to proxy canary requests:
| Env var | Description |
| ------------------- | ------------------------------ |
| `AP_CANARY_APP_URL` | Internal URL of the canary app |
## WebSocket / Real-time Updates
Socket.IO is configured with a **Redis adapter** (`@socket.io/redis-adapter`). Events emitted on any app instance (primary or canary) are broadcast through Redis pub/sub to all connected instances. This means:
* Users connected to the primary app receive real-time flow run updates even when the execution happened on the canary app.
* No WebSocket proxying is required.
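The effect of the Redis adapter can be sketched with an in-process event bus standing in for Redis pub/sub. This is a conceptual sketch only — the real wiring is `@socket.io/redis-adapter` inside Socket.IO:

```typescript
import { EventEmitter } from 'node:events';

// Stand-in for Redis pub/sub: every app instance subscribes to the same bus.
const redisBus = new EventEmitter();

interface AppInstance { name: string; received: string[] }

// Each instance forwards bus events to its locally connected sockets.
function subscribe(instance: AppInstance): void {
  redisBus.on('flow-run-update', (payload: string) => instance.received.push(payload));
}

const primary: AppInstance = { name: 'primary', received: [] };
const canary: AppInstance = { name: 'canary', received: [] };
subscribe(primary);
subscribe(canary);

// A run finishing on the canary app is broadcast through the bus,
// so a user connected to the primary app still sees the update.
redisBus.emit('flow-run-update', 'run-123:SUCCEEDED');
console.log(primary.received); // includes 'run-123:SUCCEEDED'
```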
## Queue Isolation
| Queue | Primary App | Canary App |
| ------------------ | ----------------------------- | ---------------------------- |
| `workerJobs` | ✅ Consumed by primary workers | Not consumed |
| `runsMetadata` | ✅ Consumed | ✅ Consumed |
| `system-job-queue` | ✅ Consumed | ✅ Consumed |
| `canaryWorkerJobs` | Not consumed | ✅ Consumed by canary workers |
Jobs for canary platforms are enqueued to `canaryWorkerJobs`, which only canary workers poll — fully isolated from the primary worker fleet.
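The enqueue-side decision reduces to a queue-name lookup. A sketch with illustrative names (the actual enqueue code differs):

```typescript
type QueueName = 'workerJobs' | 'canaryWorkerJobs';

// Jobs for canary platforms go to the isolated queue; everything else
// goes to the queue polled by the primary worker fleet.
function selectQueue(platformId: string, canaryPlatformIds: Set<string>): QueueName {
  return canaryPlatformIds.has(platformId) ? 'canaryWorkerJobs' : 'workerJobs';
}

const canaryIds = new Set(['pl-canary']);
selectQueue('pl-canary', canaryIds); // 'canaryWorkerJobs'
selectQueue('pl-normal', canaryIds); // 'workerJobs'
```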
## Deploying a New Canary Build
The **Continuous Delivery — Canary** workflow (`continuous-delivery-canary.yml`) runs automatically every day at 9 AM UTC, building from the latest `main`. It is also invoked as a reusable workflow from `continuous-delivery-cloud.yml` on the Sunday promotion, so the canary is always refreshed from the latest `main` immediately before production is updated.
The workflow:
1. Builds and pushes a new image tagged `version.sha.canary` (e.g. `0.51.0.abc1234.canary`)
2. Checks if any new migrations are breaking — fails the workflow if they are (no override)
3. Deploys the canary app (`config/app-canary.yml`) and canary workers
Required GitHub secrets: `AP_API_KEY` (the primary app's `AP_API_KEY` value).
## Rolling Back a Canary Deployment
Trigger the **Continuous Delivery — Rollback Canary** workflow (`continuous-delivery-rollback-canary.yml`) with the image tag you want to roll back to. The workflow:
1. Extracts the migration manifest from the target image
2. Rolls back DB migrations not present in that manifest
3. Redeploys the canary app and workers to the target image
| Input | Description |
| ----------------------- | ------------------------------------------------------------------- |
| `rollback_to_image_tag` | Image tag to roll back to (e.g. `0.51.0.abc1234.canary`) |
| `force` | Force rollback even if breaking migrations exist. Default: `false`. |
## Promoting canary build to production
Canary is a validation environment, not a promotion path. Full promotion happens via the Sunday scheduled cloud workflow, which deploys `release-candidate` to production. If a canary build has been validated and the corresponding commit has been tagged as `release-candidate`, it will automatically reach production on the next Sunday.
## Managing Platform Routing
### Enable canary routing for a platform
```bash theme={null}
curl -X POST https://cloud.activepieces.com/v1/admin/platforms/canary \
  -H "api-key: <ADMIN_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{"platformId": "<PLATFORM_ID>", "canary": true}'
```
### Disable canary routing for a platform
```bash theme={null}
curl -X POST https://cloud.activepieces.com/v1/admin/platforms/canary \
  -H "api-key: <ADMIN_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{"platformId": "<PLATFORM_ID>", "canary": false}'
```
Both calls update `platform_plan.canary` in the database and invalidate the Redis cache (`canary-platform-ids`). The change takes effect on the next request — no restart required.
# Connect Claude to Chrome
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/connect-claude-to-chrome
This guide explains how to set up the Chrome DevTools MCP server so Claude Code can inspect and control your Chrome browser.
## Linux
### 1. Start Chrome with remote debugging
```bash theme={null}
google-chrome --remote-debugging-port=4222 --user-data-dir=/tmp/chrome-debug
```
### 2. Install Claude Code
```bash theme={null}
npm i -g @anthropic-ai/claude-code
```
### 3. Add the Chrome DevTools MCP
```bash theme={null}
claude mcp add chrome-devtools --scope user -- npx chrome-devtools-mcp@latest --browserUrl=http://localhost:4222
```
> Make sure the port in `--browserUrl` matches the `--remote-debugging-port` you used when launching Chrome.
***
## Windows + WSL
### 1. Configure WSL mirrored networking
Create a file called `.wslconfig` in your Windows user profile directory (`%USERPROFILE%`, e.g. `C:\Users\YourName\.wslconfig`) with the following content:
```ini theme={null}
[wsl2]
networkingMode = mirrored
```
> **Important:** The file must be named exactly `.wslconfig` — not `.wslconfig.txt`. If you created it with Notepad, double-check that it didn't append `.txt` to the filename.
After creating or editing the file, restart WSL:
```powershell theme={null}
wsl --shutdown
```
### 2. Install Claude Code and add the MCP (inside WSL)
Open your WSL terminal and run:
```bash theme={null}
npm i -g @anthropic-ai/claude-code
claude mcp add chrome-devtools --scope user -- npx chrome-devtools-mcp@latest --browserUrl=http://localhost:4222
```
### 3. Start Chrome on Windows
Open **cmd** or **PowerShell** on Windows and run:
```cmd theme={null}
"C:\Program Files\Google\Chrome\Application\chrome.exe" --remote-debugging-port=4222 --user-data-dir="%TEMP%\chrome-debug"
```
> The port must match the one you used in the `--browserUrl` flag above.
***
## Verify the connection
### 1. Check the debug endpoint
Open your browser and navigate to:
```
http://localhost:4222/json/version
```
You should see a JSON response like this:
```json theme={null}
{
"Browser": "Chrome/147.0.7727.55",
"Protocol-Version": "1.3",
"User-Agent": "Mozilla/5.0 ...",
"V8-Version": "14.7.173.16",
"WebKit-Version": "537.36 (...)",
"webSocketDebuggerUrl": "ws://localhost:4222/devtools/browser/..."
}
```
If you see this, the debug protocol is active.
### 2. Check the MCP connection in Claude Code
Inside Claude Code, run:
```
/mcp
```
You should see `chrome-devtools` listed as a connected MCP server. If it appears with a green status, Claude can communicate with your browser.
***
## Devcontainer
The devcontainer is already configured to set up the Chrome DevTools MCP automatically. The `postCreateCommand` in `.devcontainer/devcontainer.json` installs dependencies and registers the MCP server, resolving the host IP via `host.docker.internal`:
```json theme={null}
"postCreateCommand": "bun install && HOST_IP=$(getent ahostsv4 host.docker.internal | awk '{print $1}' | head -1) && claude mcp add chrome-devtools --scope user -- npx chrome-devtools-mcp@latest --browserUrl=http://${HOST_IP}:4222"
```
All you need to do is:
1. Start Chrome on your host machine with `--remote-debugging-port=4222` (see the Linux or Windows sections above).
2. Inside the devcontainer, verify the MCP is registered by running `/mcp` in Claude Code.
# Database Migrations
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/database-migration
Guide for creating database migrations in Activepieces
Activepieces uses TypeORM as its ORM in Node.js. We support two database types across different editions of our platform.
Each migration file contains both the forward change (the `up` method) and how to roll it back (the `down` method).
Read more about TypeORM migrations here:
[https://orkhan.gitbook.io/typeorm/docs/migrations](https://orkhan.gitbook.io/typeorm/docs/migrations)
## Database Support
* PostgreSQL
* PGlite
**Why do we have PGlite?**
We support PGlite to simplify development and self-hosting. It's particularly helpful for:
* Developers creating pieces who want a quick setup
* Self-hosters using platforms that manage Docker images but don't support Docker Compose.
PGlite is a lightweight PostgreSQL build that runs embedded in the Node.js process, so migrations written for PostgreSQL remain compatible.
## Editions
* **Enterprise & Cloud Edition** (Must use PostgreSQL)
* **Community Edition** (Can use PostgreSQL or PGlite)
### How To Generate
Set the `AP_DB_TYPE` environment variable to `POSTGRES`, and make sure your schema is in the latest state by running Activepieces first.
Run the migration generation command:
```bash theme={null}
npx turbo run db-migration --filter=api -- --name=<migration-name>
```
Replace `<migration-name>` with a descriptive name for your migration.
The command will generate a new migration file in `packages/server/api/src/app/database/migration/postgres/`.
The generated file uses `MigrationInterface` — you need to update it:
1. Change `implements MigrationInterface` to `implements Migration`
2. Update the import from `typeorm` to import `Migration` from `../../migration`
3. Add `breaking = false` (or `true` if the migration drops columns/tables or transforms data irreversibly)
4. Add `release = ''` matching the upcoming release version (check `package.json` in the repo root)
5. Implement the `down()` method with queries that reverse the `up()` changes (unless `breaking = true`)
6. Register it in `postgres-connection.ts`
```typescript theme={null}
import { QueryRunner } from 'typeorm'
import { Migration } from '../../migration'

export class AddMyColumn1234567890 implements Migration {
  name = 'AddMyColumn1234567890'
  breaking = false
  release = '0.78.0'

  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "project" ADD COLUMN "description" text`)
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "project" DROP COLUMN "description"`)
  }
}
```
CI will fail if `breaking`, `release`, or `down()` are missing on new migrations.
## PGlite Compatibility
While PGlite is mostly PostgreSQL-compatible, some features are not supported. When using features like `CONCURRENTLY` for index operations, you need to conditionally handle PGlite:
```typescript theme={null}
import { QueryRunner } from 'typeorm'
import { Migration } from '../../migration'
import { system } from '../../../helper/system/system'
import { AppSystemProp } from '../../../helper/system/system-props'
import { DatabaseType } from '../../database-type'

const databaseType = system.get(AppSystemProp.DB_TYPE)
const isPGlite = databaseType === DatabaseType.PGLITE

export class AddMyIndex1234567890 implements Migration {
  name = 'AddMyIndex1234567890'
  breaking = false
  release = '0.78.0'
  transaction = false // Required when using CONCURRENTLY

  public async up(queryRunner: QueryRunner): Promise<void> {
    if (isPGlite) {
      await queryRunner.query(`CREATE INDEX "idx_name" ON "table" ("column")`)
    } else {
      await queryRunner.query(`CREATE INDEX CONCURRENTLY "idx_name" ON "table" ("column")`)
    }
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    if (isPGlite) {
      await queryRunner.query(`DROP INDEX "idx_name"`)
    } else {
      await queryRunner.query(`DROP INDEX CONCURRENTLY "idx_name"`)
    }
  }
}
```
`CREATE INDEX CONCURRENTLY` and `DROP INDEX CONCURRENTLY` are not supported in PGlite because PGlite is a single-user, single-connection database. Always add a PGlite check when using these operations.
Always test your migrations by running them both up and down to ensure they work as expected.
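One lightweight way to exercise both directions without a live database is to run the migration against a stub query runner and inspect the SQL it issues. This is only a sketch — the stub mimics just the `query` method, and the migration mirrors the `AddMyColumn` example above:

```typescript
// Minimal stub standing in for TypeORM's QueryRunner.
interface StubQueryRunner {
  queries: string[];
  query(sql: string): Promise<void>;
}

function makeStubRunner(): StubQueryRunner {
  const queries: string[] = [];
  return {
    queries,
    async query(sql: string): Promise<void> {
      queries.push(sql); // record instead of executing
    },
  };
}

// Same shape as the example migration above.
class AddMyColumn {
  async up(q: StubQueryRunner): Promise<void> {
    await q.query(`ALTER TABLE "project" ADD COLUMN "description" text`);
  }
  async down(q: StubQueryRunner): Promise<void> {
    await q.query(`ALTER TABLE "project" DROP COLUMN "description"`);
  }
}

const runner = makeStubRunner();
const migration = new AddMyColumn();
await migration.up(runner);
await migration.down(runner);

// up() adds the column and down() drops the same column, so the pair is reversible.
console.log(runner.queries);
```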
# E2E Tests
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/e2e-tests
This playbook is specifically about **full-stack browser E2E tests using Playwright** in `packages/tests-e2e/`. For the canonical 4-layer testing taxonomy across the whole monorepo (unit / integration / e2e / smoke) — and to decide whether your test actually belongs at the E2E layer at all — see the [Testing Strategy playbook](/docs/handbook/engineering/playbooks/testing-strategy).
## Overview
Our **full-stack browser E2E** suite uses Playwright to ensure critical user workflows function correctly across the application. The tests are organized using the Page Object Model pattern to maintain clean, reusable, and maintainable test code. This playbook outlines the structure, conventions, and best practices for writing e2e tests.
## Project Structure
```
packages/tests-e2e/
├── scenarios/ # Test files (*.spec.ts)
├── pages/ # Page Object Models
│ ├── base.ts # Base page class
│ ├── index.ts # Page exports
│ ├── authentication.page.ts
│ ├── builder.page.ts
│ ├── flows.page.ts
│ └── agent.page.ts
├── helper/ # Utilities and configuration
│ └── config.ts # Environment configuration
├── playwright.config.ts # Playwright configuration
└── project.json # Nx project configuration
```
The sections below cover the Page Object Model structure, test organization, configuration management, and best practices for maintaining reliable e2e tests.
## Page Object Model Pattern
### Base Page Structure
All page objects extend the `BasePage` class and follow a consistent structure:
```typescript theme={null}
export class YourPage extends BasePage {
url = `${configUtils.getConfig().instanceUrl}/your-path`;
getters = {
// Locator functions that return page elements
elementName: (page: Page) => page.getByRole('button', { name: 'Button Text' }),
};
actions = {
// Action functions that perform user interactions
performAction: async (page: Page, params: { param1: string }) => {
// Implementation
},
};
}
```
### Page Object Guidelines
#### ❌ Don't do
```typescript theme={null}
// Direct element selection in test files
test('should create flow', async ({ page }) => {
await page.getByRole('button', { name: 'Create Flow' }).click();
await page.getByText('From scratch').click();
// Test logic mixed with element selection
});
```
#### ✅ Do
```typescript theme={null}
// flows.page.ts
export class FlowsPage extends BasePage {
getters = {
createFlowButton: (page: Page) => page.getByRole('button', { name: 'Create Flow' }),
fromScratchButton: (page: Page) => page.getByText('From scratch'),
};
actions = {
newFlowFromScratch: async (page: Page) => {
await this.getters.createFlowButton(page).click();
await this.getters.fromScratchButton(page).click();
},
};
}
// integration.spec.ts
test('should create flow', async ({ page }) => {
await flowsPage.actions.newFlowFromScratch(page);
// Clean test logic focused on behavior
});
```
## Test Organization
### Test File Structure
Test files should be organized by feature or workflow:
```typescript theme={null}
import { test, expect } from '@playwright/test';
import {
AuthenticationPage,
FlowsPage,
BuilderPage
} from '../pages';
import { configUtils } from '../helper/config';
test.describe('Feature Name', () => {
let authenticationPage: AuthenticationPage;
let flowsPage: FlowsPage;
let builderPage: BuilderPage;
test.beforeEach(async () => {
// Initialize page objects
authenticationPage = new AuthenticationPage();
flowsPage = new FlowsPage();
builderPage = new BuilderPage();
});
test('should perform specific workflow', async ({ page }) => {
// Test implementation
});
});
```
### Test Naming Conventions
* Use descriptive test names that explain the expected behavior
* Follow the pattern: `should [action] [expected result]`
* Include context when relevant
```typescript theme={null}
// Good test names
test('should send Slack message via flow', async ({ page }) => {});
test('should handle webhook with dynamic parameters', async ({ page }) => {});
test('should authenticate user with valid credentials', async ({ page }) => {});
// Avoid vague names
test('should work', async ({ page }) => {});
test('test flow', async ({ page }) => {});
```
## Configuration Management
### Environment Configuration
Use the centralized config utility to handle different environments:
```typescript theme={null}
// helper/config.ts
export const configUtils = {
getConfig: (): Config => {
return process.env.E2E_INSTANCE_URL ? prodConfig : localConfig;
},
};
// Usage in pages
export class AuthenticationPage extends BasePage {
url = `${configUtils.getConfig().instanceUrl}/sign-in`;
}
```
### Environment Variables
Required environment variables for CI/CD:
* `E2E_INSTANCE_URL`: Target application URL
* `E2E_EMAIL`: Test user email
* `E2E_PASSWORD`: Test user password
## Writing Effective Tests
### Test Structure
Follow this pattern for comprehensive tests:
```typescript theme={null}
test('should complete user workflow', async ({ page }) => {
// 1. Set up test data and timeouts
test.setTimeout(120000);
const config = configUtils.getConfig();
// 2. Authentication (if required)
await authenticationPage.actions.signIn(page, {
email: config.email,
password: config.password
});
// 3. Navigate to relevant page
await flowsPage.actions.navigate(page);
// 4. Clean up existing data (if needed)
await flowsPage.actions.cleanupExistingFlows(page);
// 5. Perform the main workflow
await flowsPage.actions.newFlowFromScratch(page);
await builderPage.actions.waitFor(page);
await builderPage.actions.selectInitialTrigger(page, {
piece: 'Schedule',
trigger: 'Every Hour'
});
// 6. Add assertions and validations
await builderPage.actions.testFlowAndWaitForSuccess(page);
// 7. Clean up (if needed)
await builderPage.actions.exitRun(page);
});
```
### Wait Strategies
Use appropriate wait strategies instead of fixed timeouts:
```typescript theme={null}
// Good - Wait for specific conditions
await page.waitForURL('**/flows/**');
await page.waitForSelector('.react-flow__nodes', { state: 'visible' });
await page.waitForFunction(() => {
const element = document.querySelector('.target-element');
return element && element.textContent?.includes('Expected Text');
}, { timeout: 10000 });
// Avoid - Fixed timeouts
await page.waitForTimeout(5000);
```
### Error Handling
Implement proper error handling and cleanup:
```typescript theme={null}
test('should handle errors gracefully', async ({ page }) => {
try {
await flowsPage.actions.navigate(page);
// Test logic
} catch (error) {
// Log error details
console.error('Test failed:', error);
// Take screenshot for debugging
await page.screenshot({ path: 'error-screenshot.png' });
throw error;
} finally {
// Clean up resources
await flowsPage.actions.cleanupExistingFlows(page);
}
});
```
## Best Practices
### Element Selection
Prefer semantic selectors over CSS selectors:
```typescript theme={null}
// Good - Semantic selectors
getters = {
createButton: (page: Page) => page.getByRole('button', { name: 'Create Flow' }),
emailField: (page: Page) => page.getByPlaceholder('email@example.com'),
searchInput: (page: Page) => page.getByRole('textbox', { name: 'Search' }),
};
// Avoid - Fragile CSS selectors
getters = {
createButton: (page: Page) => page.locator('button.btn-primary'),
emailField: (page: Page) => page.locator('input[type="email"]'),
};
```
### Test Data Management
Use dynamic test data to avoid conflicts:
```typescript theme={null}
// Good - Dynamic test data
const runVersion = Math.floor(Math.random() * 100000);
const uniqueFlowName = `Test Flow ${Date.now()}`;
// Avoid - Static test data
const flowName = 'Test Flow';
```
### Assertions
Use meaningful assertions that verify business logic:
```typescript theme={null}
// Good - Business logic assertions
await builderPage.actions.testFlowAndWaitForSuccess(page);
const response = await apiRequest.get(urlWithParams);
const body = await response.json();
expect(body.targetRunVersion).toBe(runVersion.toString());
// Avoid - Implementation details
expect(await page.locator('.success-message').isVisible()).toBe(true);
```
## Running Tests
### Local Development & Debugging with Checkly
We use [Checkly](https://checklyhq.com/) to run and debug E2E tests. Checkly provides video recordings for each test run, making it easy to debug failures.
```bash theme={null}
# Run tests with Checkly (includes video reporting)
npx turbo run test-checkly --filter=tests-e2e
```
* Test results, including video recordings, are available in the Checkly dashboard.
* You can debug failed tests by reviewing the video and logs provided by Checkly.
### Deploying Tests
Manual deployment is rarely needed, but you can trigger it with:
```bash theme={null}
npx turbo run deploy-checkly --filter=tests-e2e
```
Tests are deployed to Checkly automatically after successful test runs in the CI pipeline.
## Debugging Tests
### 1. Checkly Videos and Reports
When running tests with Checkly, each test execution is recorded and detailed reports are generated. This is the fastest way to debug failures:
* **Video recordings**: Watch the exact browser session for any test run.
* **Step-by-step logs**: Review detailed logs and screenshots for each test step.
* **Access**: Open the Checkly dashboard and navigate to the relevant test run to view videos and reports.
### 2. VSCode Extension
For the best local debugging experience, install the **Playwright Test for VSCode** extension:
1. Open VSCode Extensions (Ctrl+Shift+X)
2. Search for "Playwright Test for VSCode"
3. Install the extension by Microsoft
**Benefits:**
* Debug tests directly in VSCode with breakpoints
* Step-through test execution
* View test results and traces in the Test Explorer
* Auto-completion for Playwright APIs
* Integrated test runner
### 3. Debugging Tips
1. **Use Checkly dashboard**: Review videos and logs for failed tests.
2. **Use VSCode Extension**: Set breakpoints directly in your test files.
3. **Step Through**: Use F10 (step over) and F11 (step into) in debug mode.
4. **Inspect Elements**: Use `await page.pause()` to pause execution and inspect the page.
5. **Console Logs**: Add `console.log()` statements to track execution flow.
6. **Manual Screenshots**: Take screenshots at critical points for visual debugging.
```typescript theme={null}
test('should debug workflow', async ({ page }) => {
await page.goto('/flows');
// Pause execution for manual inspection
await page.pause();
// Take screenshot for debugging
await page.screenshot({ path: 'debug-screenshot.png' });
// Continue with test logic
await flowsPage.actions.newFlowFromScratch(page);
});
```
## Common Patterns
### Authentication Flow
```typescript theme={null}
test('should authenticate user', async ({ page }) => {
const config = configUtils.getConfig();
await authenticationPage.actions.signIn(page, {
email: config.email,
password: config.password
});
await agentPage.actions.waitFor(page);
});
```
### Flow Creation and Testing
```typescript theme={null}
test('should create and test flow', async ({ page }) => {
await flowsPage.actions.navigate(page);
await flowsPage.actions.cleanupExistingFlows(page);
await flowsPage.actions.newFlowFromScratch(page);
await builderPage.actions.waitFor(page);
await builderPage.actions.selectInitialTrigger(page, {
piece: 'Schedule',
trigger: 'Every Hour'
});
await builderPage.actions.testFlowAndWaitForSuccess(page);
});
```
### API Integration Testing
```typescript theme={null}
test('should handle webhook integration', async ({ page }) => {
  const apiRequest = page.context().request;
  const response = await apiRequest.get(urlWithParams);
  const body = await response.json();

  expect(body.targetRunVersion).toBe(expectedValue);
});
```
## Maintenance Guidelines
### Updating Selectors
When UI changes occur:
1. Update page object getters with new selectors
2. Test the changes locally
3. Update related tests if necessary
4. Ensure all tests pass before merging
### Adding New Tests
1. Create or update relevant page objects
2. Write test scenarios in appropriate spec files
3. Follow the established patterns and conventions
4. Add proper error handling and cleanup
5. Test locally before submitting
### Performance Considerations
* Keep tests focused and avoid unnecessary steps
* Use appropriate timeouts (not too short, not too long)
* Clean up test data to avoid conflicts
* Group related tests in the same describe block
# Frontend Best Practices
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/frontend-best-practices
## Overview
Our frontend codebase is large and constantly growing, with multiple developers contributing to it. Establishing consistent rules across key areas like data fetching and state management will make the code easier to follow, refactor, and test. It will also help newcomers understand existing patterns and adopt them quickly.
## Data Fetching with React Query
### Hook Organization
All `useMutation` and `useQuery` hooks should be grouped by domain/feature in a single location: `features/lib/feature-hooks.ts`. Never call data fetching hooks directly from component bodies.
**Benefits:**
* Easier refactoring and testing
* Simplified mocking for tests
* Cleaner components focused on UI logic
* Reduced clutter in `.tsx` files
#### ❌ Don't do
```tsx theme={null}
// UserProfile.tsx
import { useMutation, useQuery } from '@tanstack/react-query';
import { updateUser, getUser } from '../api/users';

function UserProfile({ userId }) {
  const { data: user } = useQuery({
    queryKey: ['user', userId],
    queryFn: () => getUser(userId)
  });

  const updateUserMutation = useMutation({
    mutationFn: updateUser,
    onSuccess: () => {
      // refetch logic here
    }
  });

  return (
    <div>{/* UI rendering mixed with data-fetching setup */}</div>
  );
}
```
### Query Keys Management
Query keys should be unique identifiers for specific queries. Avoid using boolean values, empty strings, or inconsistent patterns.
**Best Practice:** Group all query keys in one centralized location (inside the hooks file) for easy management and refactoring.
```tsx theme={null}
// features/users/lib/user-hooks.ts
export const userKeys = {
all: ['users'] as const,
lists: () => [...userKeys.all, 'list'] as const,
list: (filters: string) => [...userKeys.lists(), { filters }] as const,
details: () => [...userKeys.all, 'detail'] as const,
detail: (id: string) => [...userKeys.details(), id] as const,
preferences: (id: string) => [...userKeys.detail(id), 'preferences'] as const,
};
// Usage examples:
// userKeys.all // ['users']
// userKeys.list('active') // ['users', 'list', { filters: 'active' }]
// userKeys.detail('123') // ['users', 'detail', '123']
```
**Benefits:**
* Easy key renaming and refactoring
* Consistent key structure across the app
* Better query specificity control
* Centralized key management
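Hierarchical keys work because React Query invalidation matches by key prefix. A tiny stand-in for that matching (not React Query's actual implementation) shows the effect:

```typescript
const userKeys = {
  all: ['users'] as const,
  lists: () => [...userKeys.all, 'list'] as const,
  details: () => [...userKeys.all, 'detail'] as const,
  detail: (id: string) => [...userKeys.details(), id] as const,
};

// Simplified prefix matching: a query is affected when the invalidated
// key is a structural prefix of the query's key.
function isAffected(invalidated: readonly unknown[], queryKey: readonly unknown[]): boolean {
  return invalidated.every((part, i) => JSON.stringify(part) === JSON.stringify(queryKey[i]));
}

isAffected(userKeys.all, userKeys.detail('123'));     // true: invalidating all users hits details too
isAffected(userKeys.lists(), userKeys.detail('123')); // false: list invalidation leaves details cached
```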
### Refetch vs Query Invalidation
Prefer using `invalidateQueries` over passing `refetch` functions between components. This approach is more maintainable and easier to understand.
#### ❌ Don't do
```tsx theme={null}
function UserList() {
  const { data: users, refetch } = useUsers();

  return (
    <>
      {/* Passing refetch everywhere (child components are illustrative) */}
      <CreateUserDialog onSuccess={() => refetch()} />
      <EditUserDialog onSuccess={() => refetch()} />
    </>
  );
}
```
#### ✅ Do
```tsx theme={null}
// In your mutation hooks
export function useCreateUser() {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: createUser,
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: userKeys.lists() });
    }
  });
}

// Components don't need to handle refetching
function UserList() {
  const { data: users } = useUsers();

  return (
    <>
      <CreateUserDialog /> {/* Handles its own invalidation */}
      <EditUserDialog />   {/* Handles its own invalidation */}
    </>
  );
}
```
## Dialog State Management
Use a centralized store or context to manage all dialog states in one place. This eliminates the need to pass local state between different components and provides global access to dialog controls.
### Implementation Example
```tsx theme={null}
// stores/dialog-store.ts
import { create } from 'zustand';
import { immer } from 'zustand/middleware/immer';

interface DialogState {
  createUser: boolean;
  editUser: boolean;
  deleteConfirmation: boolean;
  // Add more dialogs as needed
}

interface DialogStore {
  dialogs: DialogState;
  setDialog: (dialog: keyof DialogState, isOpen: boolean) => void;
}

export const useDialogStore = create<DialogStore>()(
  immer((set) => ({
    dialogs: {
      createUser: false,
      editUser: false,
      deleteConfirmation: false,
    },
    setDialog: (dialog, isOpen) =>
      set((state) => {
        state.dialogs[dialog] = isOpen;
      }),
  }))
);

// Usage in components
function UserManagement() {
  const { dialogs, setDialog } = useDialogStore();

  return (
    <button onClick={() => setDialog('createUser', true)}>Create User</button>
  );
}

// Any component can control dialogs - no provider needed
function Sidebar() {
  const setDialog = useDialogStore((state) => state.setDialog);

  return (
    <button onClick={() => setDialog('createUser', true)}>New User</button>
  );
}

// You can also use selectors for better performance
function UserDialog() {
  const isOpen = useDialogStore((state) => state.dialogs.createUser);
  const setDialog = useDialogStore((state) => state.setDialog);

  return (
    <CreateUserDialog
      open={isOpen}
      onClose={() => setDialog('createUser', false)}
    />
  );
}
```
**Benefits:**
* Centralized dialog state management
* No prop drilling of dialog states
* Easy to open/close dialogs from anywhere in the app
* Consistent dialog behavior across the application
* Simplified component logic
# Cloud Infrastructure
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/infrastructure
The playbooks are private. Please ask your team for access.
Our infrastructure stack includes several key components to monitor, deploy, and manage our services effectively.
## Hosting Providers
We use two main hosting providers:
* **DigitalOcean**: Hosts our databases including Redis and PostgreSQL.
* **Hetzner**: Provides the machines that run our services.
## Environments
| Environment | Domain | Kamal Config |
| ----------- | ------------------------ | ------------ |
| Staging | `stg.activepieces.com` | `mrsk/stg` |
| Production | `cloud.activepieces.com` | `mrsk/prod` |
Staging receives every merge to `main` and is promoted to production daily at 9 AM UTC. See the [Releases & Deployment](/docs/handbook/engineering/playbooks/releases) playbook for the full flow.
## Observability: Logs & Telemetry
We collect logs and telemetry from all services using **HyperDX**.
## Kamal for Deployment
We use **Kamal** as a deployment tool to deploy our Docker containers to staging and production with zero downtime. Kamal configs live on the devops machine under `mrsk/stg` and `mrsk/prod`.
# Feature Announcement
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/product-announcement
When we develop new features, our marketing team handles the public announcements. As engineers, we need to clearly communicate:
1. The problem the feature solves
2. The benefit to our users
3. How it integrates with our product
### Handoff to Marketing Team
There is an integration between GitHub and Linear that automatically opens a ticket for the marketing team five minutes after an issue is closed.

Please make sure of the following:
* The GitHub pull request is linked to an issue.
* The pull request must have one of these labels: **"Pieces"**, **"Polishing"**, or **"Feature"**.
* If none of these labels are added, the PR will not be merged.
* You can also add any other relevant label.
* The GitHub issue must include the correct template (see "Ticket templates" below).
Bonus: Please include a video showing the marketing team how to use the feature so they can create a demo video and market it correctly.
Ticket templates:
```
### What Problem Does This Feature Solve?
### Explain How the Feature Works
[Insert the video link here]
### Target Audience
Enterprise / Everyone
### Relevant User Scenarios
[Insert Pylon tickets or community posts here]
```
# Releases & Deployment
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/releases
All changes flow through a staging environment before reaching production. There are no release branches — everything merges to `main`.
## How It Works
Six scheduled steps, driven by five separate workflows, cover the full delivery lifecycle:
1. **Merge to `main`** (Mon–Thu, 9 AM–5 PM UTC) — `continuous-delivery-stg.yml` automatically builds a Docker image and deploys to **staging** (`stg.activepieces.com`).
2. **Staging freeze (5 PM UTC)** — merges to `main` after 5 PM UTC are accepted but **not deployed** to staging. The content team uses the frozen staging environment overnight.
3. **Daily 9 AM UTC** — `continuous-delivery-canary.yml` builds a `version.sha.canary` image from the latest `main` and deploys it to the **canary** environment. Breaking migrations block the deployment. See the [Canary Deployment playbook](/docs/handbook/engineering/playbooks/canary-deployment) for details.
4. **Thursday 5 PM UTC** — `tag-release-candidate` job tags the current staging image and commit as `release-candidate`.
5. **Sunday 9 AM UTC** — `continuous-delivery-cloud.yml` runs a fresh canary build from `main` (by calling `continuous-delivery-canary.yml` as a reusable workflow), then promotes `release-candidate` to production and creates a `deploy/cloud/YYYY-MM-DD` branch.
6. **Monday 9 AM UTC** — `continuous-delivery-release.yml` re-tags the `release-candidate` image and publishes the self-hosted release.
## Environments
| Environment | URL | Purpose |
| ----------- | ------------------------------ | ---------------------------------------------------------------------------------- |
| Preview | `branch-name.activepieces.com` | Per-PR ephemeral environment for review. *Requires the `preview` label on the PR.* |
| Staging | `stg.activepieces.com` | Internal testing, content team daily use |
| Canary | `canary.activepieces.com` | Daily cut from `main`; catches issues before they reach production |
| Production | `cloud.activepieces.com` | Live customers |
## Hotfix Workflow
1. Checkout from the current `deploy/cloud/YYYY-MM-DD` branch (created by the last Sunday promotion).
2. Push your fix commit(s) to that branch.
3. Manually trigger `continuous-delivery-cloud.yml` with action `cloud-hotfix` **on the hotfix branch** (not `main`). This builds the image and deploys directly to production — staging is not involved. If the next scheduled Sunday promotion is within 1 hour, the workflow will refuse to run — just wait for the scheduled run instead.
4. Merge the hotfix branch into `main`. This automatically triggers `tag-release-candidate`, which SSHes to staging and retags whatever image is running there as `release-candidate`. Sunday's scheduled run will re-deploy — a safe no-op.
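The 1-hour guard on hotfix deploys can be sketched as pure date arithmetic. This is an illustrative reconstruction, not the actual workflow code — `nextSundayPromotion` and `hotfixAllowed` are hypothetical names:

```typescript
// Find the next Sunday 9 AM UTC promotion strictly after `now`.
function nextSundayPromotion(now: Date): Date {
  const next = new Date(now);
  next.setUTCHours(9, 0, 0, 0);
  // 0 = Sunday; advance one day at a time until we land on a Sunday after `now`.
  while (next.getUTCDay() !== 0 || next <= now) {
    next.setUTCDate(next.getUTCDate() + 1);
  }
  return next;
}

// The workflow refuses to run if the promotion is less than an hour away.
function hotfixAllowed(now: Date): boolean {
  const oneHourMs = 60 * 60 * 1000;
  return nextSundayPromotion(now).getTime() - now.getTime() > oneHourMs;
}
```

So a hotfix triggered on a Saturday afternoon goes through, while one triggered at 8:30 AM UTC on Sunday is rejected in favor of the scheduled run.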
## Self-Hosted Release
The weekly self-hosted release runs automatically on Mondays at 9 AM UTC. It re-tags the `release-candidate` cloud image as the versioned self-hosted image — no separate build required.
For off-cycle or hotfix releases from any branch, use `release-self-hosted.yml` (see below).
## GitHub Actions Workflows
To run any workflow manually, go to the repo's **Actions** tab, select the workflow, and click **Run workflow**.
### `continuous-delivery-stg.yml` — Staging
| Trigger | What happens |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| **Push to `main`** | Builds a `version.sha.beta` image and deploys to staging. Skipped automatically during the freeze window (5 PM – 9 AM UTC). |
| **Manual dispatch** | Presents a dropdown with two choices (see below). |
#### Manual dispatch choices
* **`deploy-staging`** — Re-deploy the latest `main` to staging. Use this after an infra change, or to force a deploy that was skipped due to the freeze window.
* **`deploy-staging-skip-freeze`** — Same as above but explicitly bypasses the freeze window check.
### `continuous-delivery-cloud.yml` — Production Promotion
| Trigger | What happens |
| ------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Scheduled (Sunday 9 AM UTC)** | Triggers `continuous-delivery-canary.yml` as a reusable workflow to refresh the canary environment from the latest `main`, then promotes `release-candidate` to production (requires approval via the `production` GitHub Environment). After promotion, creates a `deploy/cloud/YYYY-MM-DD` branch and runs a smoke test. |
| **`workflow_call`** | Same as above, triggered by another workflow. |
| **Manual dispatch — `cloud-hotfix`** | Builds the image from the current branch and deploys directly to production, bypassing staging. Trigger this **on the hotfix branch** (`deploy/cloud/YYYY-MM-DD`), not `main`. After promotion, merge the hotfix branch into `main` to automatically retag `release-candidate`. Blocked automatically if the next scheduled Sunday promotion is within 1 hour. |
### `continuous-delivery-release.yml` — Weekly Release
| Trigger | What happens |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Scheduled (Mondays 9 AM UTC)** | Publishes a GitHub release with changelog, creates a git tag, and re-tags `ghcr.io/activepieces/activepieces-cloud:release-candidate` as `activepieces/activepieces:X.Y.Z` + `latest` on Docker Hub and GHCR. |
| **Manual dispatch** | Same as above, on demand. Use this if a Monday run was skipped or you need an off-cycle release. |
### `continuous-delivery-canary.yml` — Canary
| Trigger | What happens |
| ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Scheduled (daily 9 AM UTC)** | Builds a `version.sha.canary` image from the latest `main`, checks migrations, deploys the canary app and workers. Breaking migrations always block the deployment. |
| **`workflow_call`** | Same as scheduled, triggered by `continuous-delivery-cloud.yml` on the Sunday promotion so the canary always receives a fresh build just before production is updated. |
| **Manual dispatch** | Same as scheduled, on demand. |
See the [Canary Deployment playbook](/docs/handbook/engineering/playbooks/canary-deployment) for full details.
### `continuous-delivery-rollback-canary.yml` — Canary Rollback
Manual dispatch only. Rolls back the canary environment to a previous image tag. Reverses DB migrations not present in the target image's manifest, then redeploys the canary app and workers to that tag.
| Input | Description |
| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| `rollback_to_image_tag` | Target image tag to roll back to (e.g. `0.51.0.abc1234.canary`). Leave blank to roll back to the current cloud production image. |
| `force` | Force rollback even if breaking migrations exist. Default: `false`. |
### `release-self-hosted.yml`
Manual dispatch only. Prompts for a **release tag** (e.g., `0.79.2` or `0.79.2-hotfix.1`). Builds multi-platform images (amd64 + arm64), pushes to Docker Hub and GHCR, and creates a git tag. Can be run from any branch — useful for patching older versions.
## Changelog
Before each weekly release, ensure the draft GitHub release has accurate notes. PRs should be labeled correctly so the auto-generated changelog categorizes them properly.
# Run Enterprise Edition
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/run-ee
The enterprise edition requires a Postgres and a Redis instance to run, and a license key to activate.
Follow the instructions [here](/docs/build-pieces/misc/dev-container) to run the dev container.
Paste the following environment variables into `server/api/.env`:
```bash theme={null}
## these variables are set to align with the .devcontainer/docker-compose.yml file
AP_DB_TYPE=POSTGRES
AP_DEV_PIECES="your_piece_name"
AP_ENVIRONMENT="dev"
AP_EDITION=ee
AP_EXECUTION_MODE=UNSANDBOXED
AP_FRONTEND_URL="http://localhost:4200"
AP_WEBHOOK_URL="http://localhost:3000"
AP_PIECES_SOURCE='FILE'
AP_PIECES_SYNC_MODE='NONE'
AP_LOG_LEVEL=debug
AP_LOG_PRETTY=true
AP_REDIS_HOST="redis"
AP_REDIS_PORT="6379"
AP_TRIGGER_DEFAULT_POLL_INTERVAL=1
AP_CACHE_PATH=/workspace/cache
AP_POSTGRES_DATABASE=activepieces
AP_POSTGRES_HOST=db
AP_POSTGRES_PORT=5432
AP_POSTGRES_USERNAME=postgres
AP_POSTGRES_PASSWORD=A79Vm5D4p2VQHOp2gd5
AP_ENCRYPTION_KEY=427a130d9ffab21dc07bcd549fcf0966
AP_JWT_SECRET=secret
```
After signing in, activate the license key by going to **Platform Admin -> Setup -> License Keys**
# Security Advisory Response
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/security-advisory-response
Internal lifecycle for handling security reports from intake to public disclosure
A security advisory is the public artifact we publish after a vulnerability is reported and fixed. This playbook is the lifecycle that produces one. Reporter-facing policy lives in [SECURITY.md](https://github.com/activepieces/activepieces/blob/main/SECURITY.md).
## Triage
Reproduce locally before scoring. If it doesn't reproduce, ask the reporter for clarification.
Compare against the **Out of scope** list in [SECURITY.md](https://github.com/activepieces/activepieces/blob/main/SECURITY.md). If out of scope, reply with the reason and close.
Use the [FIRST calculator](https://www.first.org/cvss/calculator/4-0). Record the score and vector. \
Buckets: 0.1–3.9 low, 4.0–6.9 medium, 7.0–8.9 high, 9.0–10 critical.
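The bucket mapping above, as a small helper — a sketch only; `cvssBucket` is not an actual codebase function:

```typescript
type Severity = 'low' | 'medium' | 'high' | 'critical';

// Map a CVSS 4.0 score to the qualitative buckets used during triage.
function cvssBucket(score: number): Severity {
  if (score < 0.1 || score > 10) throw new Error(`score out of range: ${score}`);
  if (score <= 3.9) return 'low';
  if (score <= 6.9) return 'medium';
  if (score <= 8.9) return 'high';
  return 'critical';
}
```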
Reply to the reporter with severity, expected resolution date, and a confidentiality reminder. Clock starts at the report timestamp.
## Private fix
Repo → **Security** → **Advisories** → **New draft security advisory**. \
Set affected versions, severity, and a neutral summary. \
Save as draft. You'll fill the rest of the metadata in [Draft advisory](#draft-advisory) later.
From the draft advisory, click **Start a temporary private fork**. \
Fix on a `security/` branch inside it.
Run `npm run lint-dev`, `npm run test-unit`, `npm run test-api`. \
Add a regression test. Merge the PR inside the private fork.
A public PR or push collapses the embargo. Double-check the remote URL before pushing.
## Draft advisory
In the draft advisory: **CVE ID** → **Request CVE**. GitHub assigns one within \~1 business day.
Fields must match the in-app `SecurityAdvisory` shape: `summary`, `description`, `severity`, `cvssScore`, `vulnerableVersionRange` (e.g. `< 0.71.1`), `patchedVersion`. Use the [advisory body template](#advisory-body-template) for the `description`.
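A minimal sketch of how a `vulnerableVersionRange` of the `< X.Y.Z` form could be checked against an installed version — `isVulnerable` is a hypothetical helper, not part of the codebase, and it only handles the single `<` form used above:

```typescript
// Parse a plain "major.minor.patch" version string.
function parseVersion(v: string): [number, number, number] {
  const [maj, min, pat] = v.split('.').map(Number);
  return [maj, min, pat];
}

// True if `installed` falls inside a "< X.Y.Z" vulnerable range.
function isVulnerable(installed: string, range: string): boolean {
  const bound = range.replace('<', '').trim();
  const a = parseVersion(installed);
  const b = parseVersion(bound);
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] < b[i];
  }
  return false; // equal to the patched version -> not vulnerable
}
```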
Default 60 days. Hold publication until the patch is on cloud production and customers have been notified.
Publish sooner if actively exploited or the reporter has set an earlier date.
## Patch release
Always a patch bump (e.g. `0.71.0` → `0.71.1`). Never bundle with feature commits.
Follow the cloud-hotfix flow in [Releases](/docs/handbook/engineering/playbooks/releases): `deploy/cloud/YYYY-MM-DD` branch, then trigger `continuous-delivery-cloud.yml` with `cloud-hotfix`.
Patch-bump root [package.json](https://github.com/activepieces/activepieces/blob/main/package.json). Bump `packages/shared` if touched. If the fix has a migration, set `release = ''` per [Database Migrations](/docs/handbook/engineering/playbooks/database-migration).
Wait for canary to confirm before promoting to production.
Draft (don't post yet) for [docs/about/changelog.mdx](https://github.com/activepieces/activepieces/blob/main/docs/about/changelog.mdx):
```mdx theme={null}
### Security
Fixed <short description of the vulnerability> (<CVE-ID>, CVSS <score>). Upgrade to <patched version> immediately.
```
## Customer disclosure
7-day lead time before public publication. Patched version must already be on cloud production before sending.
```
Subject: [Security] Activepieces advisory — patched in <version>

Cloud customers are already protected — the fix was deployed on <date>.

Self-managed customers should upgrade to <version> before
<disclosure date>, when we'll publish CVE <CVE-ID> on GitHub.

Mitigation if you cannot upgrade: <workaround, if any>
```
Re-confirm the disclosure date with the reporter before sending.
## Public publication
In the draft advisory, click **Publish advisory**. This makes the CVE public.
Confirm the new advisory appears on the platform health page (`/platform/infrastructure/health`). Source feeds cache for 15 minutes, so allow that delay before troubleshooting.
Commit the changelog entry drafted in [Patch release](#patch-release).
## Postmortem
Required for high/critical, optional for medium, skip for low.
Create `docs/handbook/engineering/postmortems/YYYY-MM-DD-<slug>.mdx` using the existing structure (see [2026-03-19 Redis and delay overload](/docs/handbook/engineering/postmortems/2026-03-19-redis-and-delay-overload)).
## References
* [SECURITY.md](https://github.com/activepieces/activepieces/blob/main/SECURITY.md)
* [GitHub Security Advisories docs](https://docs.github.com/en/code-security/security-advisories)
* [FIRST CVSS 4.0 calculator](https://www.first.org/cvss/calculator/4-0)
## Advisory body template
```markdown theme={null}
## Summary
One paragraph plain-language explanation of the vulnerability — what it is, no exploit detail.
## Impact
What an attacker can achieve, what data or systems are at risk, and which versions and configurations are affected.
## Patches
The patched version and how to upgrade.
## Workarounds
Mitigations available to users who cannot upgrade immediately, or note that the only safe option is to upgrade.
## References
The fix commit (visible after publication), related CVEs or upstream advisories, and reporter credit.
```
# Setup BetterStack
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/setup-betterstack
BetterStack is our primary tool for managing and responding to urgent issues and service disruptions.
This guide explains how we use BetterStack to coordinate our on-call rotations and emergency response procedures.
## Setup and Notifications
### Personal Setup
1. Download the BetterStack mobile app from your device's app store
2. Ask your team to add you to the BetterStack workspace
3. Configure your notification preferences:
* Phone calls for critical incidents
* Push notifications for high-priority issues
* Slack notifications for standard updates
### On-Call Rotations
Our team operates on a weekly rotation schedule through BetterStack, where every team member participates. When you're on-call:
* You'll receive priority notifications for all urgent issues
* Phone calls will be placed for critical service disruptions
* Rotations change every week, with handoffs occurring on Monday mornings
* Response is expected within 15 minutes for critical incidents
If you are unable to respond to an incident, please escalate to the engineering team.
# Testing Strategy
Source: https://www.activepieces.com/docs/handbook/engineering/playbooks/testing-strategy
Canonical 4-layer testing taxonomy for Activepieces
## Overview
Activepieces tests live in four distinct layers. Each layer owns a different failure mode — picking the wrong layer produces slow, redundant, or silently ineffective tests.
## The Four Layers
### Unit
A single module or function in isolation. Mocks are allowed for collaborators. No real I/O, no real infrastructure. Runs in-process under Vitest.
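As a minimal illustration of the unit layer, a hand-rolled recording mock standing in for a collaborator — all names here are hypothetical, and in the real suites the mock would be a Vitest `vi.fn()`:

```typescript
// The collaborator we do NOT want to touch for real in a unit test.
interface EmailClient {
  send(to: string, body: string): void;
}

// Subject under test: pure logic, collaborator injected.
function notifyUser(client: EmailClient, user: { email: string }): void {
  client.send(user.email, 'Welcome to Activepieces!');
}

// Recording mock: captures calls instead of doing I/O.
const sent: Array<{ to: string; body: string }> = [];
const mockClient: EmailClient = {
  send: (to, body) => { sent.push({ to, body }); },
};

notifyUser(mockClient, { email: 'user@example.com' });
```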
### Integration
Multiple modules (and sometimes multiple packages) wired together against real infrastructure — a real database, queue, filesystem, iptables rule set, HTTP server, or V8 isolate. The scope still sits inside a single package, but the collaborators are real. Runs under Vitest, in-process plus local services.
### E2E
The full product stack, exercised through a public interface — either a browser UI or the public HTTP API. Runs under Playwright against docker-compose or a deployed environment.
### Smoke
Minimal post-deploy liveness checks against a real running environment. Implemented as bash plus `curl`. Runs against production-like stacks after deploy.
## Which package owns which layer
| Layer | Lives in |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Unit | `packages/server/api/test/unit/`, most of `packages/server/worker/test/lib/`, engine `test/variables/` + `test/helper/` + `test/utils.test.ts` + `test/network/ssrf-guard.test.ts` |
| Integration | `packages/server/api/test/integration/{ce,ee,cloud}/`, `packages/server/worker/test/e2e/` (see warning below), engine `test/handler/` + `test/core/code/` + `test/piece-context/` + `test/operations/` |
| E2E | `packages/tests-e2e/` |
| Smoke | `smoke-test/` |
## Picking the right layer
* Testing a single module with its collaborators mocked and no real I/O? It is a **Unit** test. Keep mocks for collaborators and run it in-process.
* Wiring modules against real infrastructure within a single package? It is an **Integration** test. Stay within one package, but let the collaborators be real.
* Exercising the full stack through the browser UI or the public HTTP API? It is an **E2E** test. Write it with Playwright under `packages/tests-e2e/`.
* Verifying a deployed environment is alive after a release? It is a **Smoke** test. Drop it under `smoke-test/` as a bash + `curl` script.
## Common pitfalls
The four files in `packages/server/worker/test/e2e/` are **privileged integration tests**, not full-stack E2E. They exercise the engine SSRF guard plus iptables lockdown plus egress proxy against real Linux kernel primitives, inside a single package. The folder is named `e2e/` for historical reasons; under this taxonomy they belong to the Integration layer. See `packages/server/worker/test/README.md` for the full story.
If you are tempted to mock a database, a queue, or a kernel primitive, and your test still feels like it is testing real behaviour, move it up one layer. Mocks that nearly reimplement the real thing are a smell.
## Classification debt
* **`packages/server/api/test/unit/app/core/canary/canary-proxy.integration.test.ts`** — already carries an `.integration.` suffix but lives under `test/unit/`. Spins up two real Fastify instances listening on real TCP ports. Target: move to `packages/server/api/test/integration/ce/core/canary/`.
* **`packages/server/api/test/integration/ce/authentication/password-hasher.test.ts`** — pure bcrypt/scrypt logic, no `setupTestEnvironment`, no DB, no Fastify. Target: move to `packages/server/api/test/unit/app/authentication/`.
* **`packages/server/engine/test/handler/*.test.ts`** (13 files) — load real `@activepieces/piece-http`, `@activepieces/piece-data-mapper`, etc. and make real outbound HTTP calls from the executor. Target: relabel as integration (either move to a new `test/integration/` subtree, or document in the engine test README that the flat folder mixes unit and integration).
* **`packages/server/engine/test/core/code/v8-isolate-code-sandbox.test.ts`** — runs a real V8 isolate. Target: same as above (integration).
* **`packages/server/worker/test/e2e/*.e2e.test.ts`** (4 files) — privileged integration, not full-stack E2E. Target: rename folder to `test/integration/` (or similar) and drop the `.e2e.` infix.
* **`packages/server/worker/test/piece-installer.test.ts`** — does real filesystem work against temp dirs; borderline integration. Target: decide on a threshold (temp-dir I/O alone = unit, vs. temp-dir I/O + subprocess = integration) and relabel accordingly.
* **Duplication: `api/test/integration/ce/flows/flow/flow.test.ts` vs `api/test/integration/cloud/flow/flow.test.ts`** — both assert identical "Create flow" shapes; only role-permission checks differ. Target: extract the shared CRUD assertions into CE and keep only the cloud-specific role permutations under `cloud/`.
This list is a backlog, not a to-do list for this playbook. Tackle items opportunistically as you touch the adjacent code.
## What is NOT duplication (defense-in-depth)
* SSRF is tested at three layers (egress proxy, iptables lockdown, engine SSRF guard). Each defends a different attack surface — keep all three.
* Flow execution is tested at engine (pure logic), API integration (queue + worker dispatch), and smoke (post-deploy liveness). Each owns different failure modes.
* Sandbox isolation is tested at engine (V8 code sandbox), worker (process orchestration), and worker `test/e2e/` (real kernel egress). Distinct concerns.
## Related documentation
* [E2E Tests playbook](/docs/handbook/engineering/playbooks/e2e-tests) — full-stack browser E2E deep dive
* Per-package READMEs live at `packages/server/api/test/README.md`, `packages/server/engine/test/README.md`, `packages/server/worker/test/README.md`, `packages/tests-e2e/README.md`, and `smoke-test/README.md`.
# Infrastructure Upgrade — Mar 16–17, 2026
Source: https://www.activepieces.com/docs/handbook/engineering/postmortems/2026-03-16-infrastructure-upgrade
## Summary
On March 16–17, 2026, Activepieces experienced a service disruption lasting approximately 12–24 hours (with the first \~6 hours being the most severe) after rolling out a new worker architecture on Kubernetes. Two cascading issues — a persistent volume provisioning problem and a dedicated worker misconfiguration — caused flow execution failures for most cloud customers and one enterprise customer.
## Impact
* Most cloud customers experienced failed or delayed flow executions over a \~12–24 hour period, with the first \~6 hours being the most severe. All affected executions were replayed from the failed step once service was restored.
* One enterprise customer with a dedicated worker had flows fail because npm was blocked after a trust level misconfiguration during pre-incident migration. All affected executions were replayed from the failed step once resolved.
## Timeline
All times are in UTC.
**Mar 14–15 (Pre-incident):** As part of our infrastructure upgrade, we moved enterprise dedicated workers one by one first and isolated them from shared infrastructure changes. We then began rolling out the new architecture for shared workers.
1. **Mar 16, 8:13 PM** — Shared workers begin failing after the architecture refactor deployment is applied. Team immediately begins investigating.
2. **Mar 16, 8:50 PM** — Brief recovery observed.
3. **Mar 16, 8:52 PM** — Errors resurface. Pattern of brief recoveries followed by recurring errors points to a resource issue rather than a code bug.
4. **Mar 17, 3:26 AM** — Root cause identified: persistent volume claims (PVCs) filled up with no shell access to worker pods to fix in-place. Decision made to revert to the previous deployment method (Kamal) with multi-tenant workers. This was not a simple rollback — reverting to Kamal required significant code changes to re-adapt the worker architecture back to the previous multi-tenant deployment model.
5. **Mar 17, \~8:00 AM** — Kamal revert deployed, flow execution restored for cloud customers.
6. **Mar 17, 8:13 AM** — Attention turned to enterprise dedicated workers. Discovered one customer's dedicated worker had `trustedEnvironment` incorrectly set to `false` after the namespace migration, blocking npm libraries in their sandbox.
7. **Mar 17, 11:30 AM** — Trust level configuration fixed, test coverage added for this code path. All failed executions identified across cloud and enterprise customers and replayed from the exact failed step. No customer data or automation results were permanently lost.
## Root Causes
### Issue 1: Persistent volume provisioning on Kubernetes
The Kubernetes persistent volume claims (PVCs) allocated to the new shared workers filled up quickly after deployment. Once full, there was no shell access available to diagnose or remediate the issue. Additionally, no rollback plan had been prepared for the Kubernetes deployment, which delayed recovery.
### Issue 2: Enterprise dedicated worker trust level misconfiguration
When moving enterprise dedicated workers to the new server, a code change accidentally set `trustedEnvironment` to `false` for one enterprise customer. This disabled npm package support in the sandbox, causing that customer's flows to fail. This code path had no test coverage at the time, so the misconfiguration went undetected until flows started failing.
## What Went Well
1. **Enterprise worker isolation was completed ahead of rollout.** Dedicated workers were moved to their own namespaces 1–2 days before the shared worker migration, which limited the blast radius and prevented the PVC issue from affecting enterprise customers directly.
2. **Execution replay from failed step.** The platform's ability to replay failed executions from the exact step that failed meant no customer data was permanently lost, despite the extended outage.
3. **Smoke tests for trusted environments already existed in worker v2.** The new worker architecture already included smoke tests that validate sandbox npm access for trusted environments, which helped catch Issue 2 quickly once it was discovered.
4. **Gradual dedicated worker migration was already built into worker v2.** Per-customer validation when migrating dedicated workers was already implemented, reducing future risk.
5. **Quick identification of the revert path.** Once the PVC root cause was identified, the team made a clear decision to revert to Kamal rather than continue debugging the Kubernetes deployment, which accelerated recovery.
6. **Team stayed engaged through the night** (8 PM – 11:30 AM UTC) with continuous investigation and response.
## What Went Wrong
1. **No rollback plan for the Kubernetes deployment.** The migration to Kubernetes was deployed without a documented or tested rollback strategy. When PVCs filled up, there was no fast path back, delaying recovery by several hours.
2. **PVC sizing was not validated under production load.** Persistent volumes were provisioned without load-testing against real production traffic patterns, causing them to fill up unexpectedly fast.
3. **Trust level configuration had no test coverage.** The code path that set `trustedEnvironment` for dedicated workers was not covered by tests, allowing a misconfiguration to ship undetected.
4. **No canary deployment strategy.** The new architecture was rolled out to all shared workers at once rather than incrementally, so there was no opportunity to catch issues on a small subset before full impact.
5. **Issue 2 was discovered late.** The enterprise worker misconfiguration was not found until \~12 hours after the initial incident began, because investigation was focused on the PVC issue affecting shared workers.
## Action Items
| Action Item | Status |
| -------------------------------------------------------------------------------------------------------------------------------------------- | ------ |
| Implement a documented and tested rollback plan for all infrastructure migrations ([GIT-911](https://linear.app/activepieces/issue/GIT-911)) | To do |
| Add test coverage for worker trust level and sandbox configuration | Done |
| Support canary deployments | To do |
## Improvements Done
* **Worker trust level and sandbox configuration tests** — added dedicated tests covering the `trustedEnvironment` code path that caused Issue 2, ensuring configuration changes are validated automatically.
* **Worker polling resilience tests** — tests covering job execution lifecycle, resilience to invalid job data, null polls, unrecognized job types, and mixed valid/invalid sequences.
* **Sandbox execution tests** — tests for sandbox creation, startup, RPC communication, resource cleanup on timeout or memory issues, and process cleanup.
* **Race condition tests for queue dispatcher** — 12 tests covering orphaned job handling, double-loop spawn prevention during close, single dequeue concurrency control, and waiter timeout/retry behavior.
* **Subflow resume race condition tests** — 8 tests covering the race where the engine writes pause metadata to Redis before it's persisted to DB, verifying Redis fallback when DB is stale.
* **Rate limiter concurrency tests** — tests for concurrent job slot allocation, idempotency with concurrent dispatch, and per-project isolation.
* **End-to-end smoke tests in CI** — GitHub Actions workflows that validate health checks and webhook flow execution on both AMD64 and ARM64.
* **Benchmark tests in CI** — load testing across 6 app/worker configurations measuring throughput, mean latency, P50, and P99.
# Delay Step Infinite Loop — Mar 19, 2026
Source: https://www.activepieces.com/docs/handbook/engineering/postmortems/2026-03-19-redis-and-delay-overload
## Summary
On March 19, 2026, a bug in the Delay step caused flows to restart from the beginning instead of resuming after the delay, creating an infinite loop that flooded Redis with jobs. Affected flows never completed, and the growing job backlog degraded queue processing for all users.
## Impact
* Flows with a Delay step looped forever without completing.
* The runaway job creation overloaded Redis, causing delays for other flows.
* All affected executions were replayed once service was restored — no data was lost.
## Timeline
All times are in UTC.
1. **Mar 18, \~9:00 PM** — A code change to the Delay step is deployed, introducing the infinite loop bug.
2. **Mar 19, \~8:45 AM** — Customer reports arrive indicating flows with Delay steps are not completing. Investigation begins.
3. **Mar 19, \~10:45 AM** — Fix for the Delay step bug deployed.
## Root Cause
When a flow hits a Delay step, the system puts the job on hold via BullMQ's `moveToDelayed()`. The bug was that the job still carried `executionType: BEGIN` instead of `RESUME`. When the delay expired, the worker re-ran the entire flow from the first step, hit the Delay again, paused again, and looped forever — flooding Redis with new jobs on every iteration.
```
Trigger -> Step 1 -> Delay(20s) -> PAUSE
| (20s later, job still says "BEGIN")
Trigger -> Step 1 -> Delay(20s) -> PAUSE
| (20s later, job still says "BEGIN")
... forever
```
The platform does enforce per-execution time limits, but because the job was marked as `BEGIN` instead of `RESUME`, each loop iteration was treated as a brand-new execution rather than a continuation. Each fresh execution only ran from the trigger to the Delay step — well within the time limit — before spawning another delayed job and repeating.
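The fix and the later defense-in-depth guards can be sketched as pure logic. Field names here are simplified for illustration, not the actual job schema:

```typescript
type ExecutionType = 'BEGIN' | 'RESUME';

interface RunJobData {
  executionType: ExecutionType;
  steps: Record<string, unknown>; // execution state accumulated so far
}

// The fix: mark the job as a continuation before parking it as delayed,
// so the worker resumes after the Delay step instead of re-running the flow.
function prepareForDelay(data: RunJobData): RunJobData {
  return { ...data, executionType: 'RESUME' };
}

// The guards: a RESUME with empty state is the exact signature of the original
// bug; a BEGIN with pre-existing steps would indicate a code regression.
function validateOnPickup(data: RunJobData): void {
  const hasSteps = Object.keys(data.steps).length > 0;
  if (data.executionType === 'RESUME' && !hasSteps) {
    throw new Error('VALIDATION: RESUME job with empty execution state');
  }
  if (data.executionType === 'BEGIN' && hasSteps) {
    throw new Error('VALIDATION: BEGIN job with pre-existing steps');
  }
}
```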
## Detection & Monitoring Gaps
* **Detected by customers, not automated alerting.** There was no monitoring on repeated execution patterns or runaway job creation for a single flow.
* No alerting on Redis queue depth growth rate or sudden spikes in scheduled job volume.
## Action Items
| Action Item | Status |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ |
| Update job data to `executionType: RESUME` before calling `moveToDelayed()` so the worker continues from the correct step | Done |
| Add test coverage for Delay step resume behavior to catch regressions where a delayed job restarts instead of resuming | Done |
| Prevent a flow from entering an infinite state by detecting and halting repeated re-executions of the same run ([ENG-320](https://linear.app/activepieces/issue/ENG-320)) | Done |
| Add alerting on abnormal queue depth growth to detect runaway job creation before customers are impacted | To do |
| Add monitoring for repeated execution patterns on a single flow (e.g., same flow re-triggered N times within a short window) | To do |
## Improvements Done
* **Delay step fix** — Updated job data to `executionType: RESUME` before calling `moveToDelayed()`, so the worker continues from where the flow left off instead of restarting.
* **Defense-in-depth: RESUME empty state guard** — Worker validates that RESUME operations have non-empty execution state. An empty state with RESUME is the exact signature of the original bug and is rejected with a `VALIDATION` error.
* **Defense-in-depth: BEGIN non-empty state assertion** — Engine asserts that BEGIN operations have empty execution state. A BEGIN with pre-existing steps would indicate a code regression.
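Both guards reduce to one small validation. This is a hedged sketch with hypothetical names (`ExecutionOperation`, `assertValidOperation`); the real worker and engine checks live in separate places and differ in shape:

```typescript
// Hypothetical operation shape -- illustrative only.
interface ExecutionOperation {
  executionType: 'BEGIN' | 'RESUME';
  // step outputs persisted so far, keyed by step name
  executionState: Record<string, unknown>;
}

class ValidationError extends Error {}

function assertValidOperation(op: ExecutionOperation): void {
  const hasState = Object.keys(op.executionState).length > 0;
  if (op.executionType === 'RESUME' && !hasState) {
    // Exact signature of the original bug: "resume" with nothing to resume from.
    throw new ValidationError('RESUME operation must carry non-empty execution state');
  }
  if (op.executionType === 'BEGIN' && hasState) {
    // A fresh run with pre-existing step outputs indicates a code regression.
    throw new ValidationError('BEGIN operation must start with empty execution state');
  }
}
```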
# Redis QueueEvents Overload — Mar 20, 2026
Source: https://www.activepieces.com/docs/handbook/engineering/postmortems/2026-03-redis-queue-events-overload
## Summary
On Friday, March 20, 2026, BullMQ's `QueueEvents` caused every worker to broadcast job lifecycle events to all connected app instances. As traffic grew, Redis output buffers grew faster than clients could consume them, eventually filling Redis memory. Once memory was exhausted, the `runsMetadata` queue stopped consuming, workers crashed, and flow execution logs were delayed up to **8 hours** before appearing in the UI.
The incident lasted the entire day. Mitigation involved repeated server restarts and manual cleanup to minimize customer impact while the root cause was identified. The fix was to revert the `QueueEvents` change.
## Impact
* Redis memory filled up, causing all queue processing to stall.
* Workers crashed and could not recover automatically.
* Flow execution logs were delayed up to **8 hours** before appearing in the UI.
* No executions were lost — all `runsMetadata` jobs were eventually resumed and indexed.
## Timeline
All times are in UTC.
1. **Mar 20, morning** — Incident begins. Redis memory usage spikes as `QueueEvents` broadcast volume overwhelms output buffers. The `runsMetadata` queue stops consuming and workers start crashing.
2. **Mar 20, during the day** — Customers report missing/delayed execution logs on the community. Team begins investigating.
3. **Mar 20, during the day** — Mitigation: repeated server restarts and manual cleanup of stalled jobs to keep customer impact minimal while root cause is identified.
4. **Mar 20, end of day** — Root cause identified as `QueueEvents` broadcasting. Change reverted. Redis memory recovers, workers resume, and all backed-up `runsMetadata` jobs are processed and indexed.
## Root Cause
BullMQ's `QueueEvents` feature subscribes each worker instance to a Redis pub/sub stream of all job lifecycle events (started, completed, failed, etc.) for the queues it listens to. In a multi-instance deployment, this means **every** app server receives **every** event from **every** other server.
As traffic grew, the volume of events exceeded the rate at which clients could read them. Redis buffers these unread events in per-client output buffers. When the cumulative buffer size exceeded available Redis memory, Redis could no longer accept writes. The `runsMetadata` queue — which records execution logs for the UI — was the first visible casualty, but all queue operations were degraded.
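Why the broadcast overwhelmed Redis is easiest to see with back-of-the-envelope numbers. The figures below are illustrative assumptions, not measurements from the incident: delivered volume scales multiplicatively with the number of subscribed instances, and any shortfall in consumption accumulates in per-client output buffers.

```typescript
// Back-of-the-envelope model of the broadcast storm (illustrative numbers,
// not measurements from the incident).
const instances = 10;      // app/worker instances, each subscribed via QueueEvents
const eventsPerJob = 3;    // e.g. added, active, completed
const jobsPerSecond = 500;
const bytesPerEvent = 512; // serialized lifecycle event payload

// Pub/sub fan-out: every instance receives every event.
const publishedPerSecond = jobsPerSecond * eventsPerJob;
const deliveredPerSecond = publishedPerSecond * instances;
const bytesPerSecond = deliveredPerSecond * bytesPerEvent;

// If subscribers consume at only half that rate, Redis buffers the rest
// in per-client output buffers until memory is exhausted.
const consumeRatio = 0.5;
const bufferedPerSecond = bytesPerSecond * (1 - consumeRatio);
const secondsToFill = (4 * 1024 ** 3) / bufferedPerSecond; // 4 GB Redis, ~19 minutes
```

With these assumptions a 4 GB Redis fills in well under half an hour, which matches the shape of the incident: the system runs normally until a traffic threshold, then degrades quickly.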
## Detection & Monitoring Gaps
* **Detected by customers on the community**, not by automated alerting.
* No alerting specifically on Redis output buffer growth or pub/sub subscriber lag.
* No automated detection of `runsMetadata` queue stalling or worker crash loops.
## Action Items
| Action Item | Status |
| --------------------------------------------------------------- | ------ |
| Revert `QueueEvents` adoption to stop the event broadcast storm | Done |
| Add alerting on Redis memory usage and output buffer growth | To do |
| Add monitoring for `runsMetadata` queue consumption lag | To do |
## Improvements Done
* **QueueEvents reverted** — Removed the `QueueEvents` listener so workers no longer broadcast lifecycle events to all instances, eliminating the Redis output buffer growth.
# Our Compensation
Source: https://www.activepieces.com/docs/handbook/hiring/compensation
Compensation packages are based on three factors:
* **Role**: The specific position and responsibilities of the employee.
* **Location**: The geographical area where the employee is based.
* **Level**: The seniority and experience level of the employee.
Salaries are fixed and based on levels and seniority, not negotiation. This ensures fair pay for everyone. Salaries are updated based on market trends and the company's performance; it's easier to justify raises when the business is doing well.
# Our Hiring Process
Source: https://www.activepieces.com/docs/handbook/hiring/hiring
Engineers are the majority of the Activepieces team, and we are always looking for highly talented product engineers.
Here, you'll face a real challenge from Activepieces. We'll guide you through it to see how you solve problems.
We'll chat about your past experiences and how you design products. It's like having a friendly conversation where we reflect on what you've done before.
You'll work on an open source task for one day. This contribution helps us understand how well we'd work together.
## Interviewing Tips
Every interview should make us say **HELL YES**. If not, we'll kindly pass.
**Avoid Bias:** Get opinions from others to make fair decisions.
**Speak Up Early:** If you're unsure about something, ask or test it right away.
# Our Roles & Levels
Source: https://www.activepieces.com/docs/handbook/hiring/levels
**Product Engineers** are full stack engineers who handle both the engineering and product side, delivering features end-to-end.
### Our Levels
We break out seniority into three levels, **L1 to L3**.
### L1 Product Engineers
They tend to be early-career.
* They get more management support than folks at other levels.
* They focus on continuously absorbing new information about our users and how to be effective at **Activepieces**.
* They aim to be increasingly autonomous as they gain more experience here.
### L2 Product Engineers
They are generally responsible for running a project start-to-finish.
* They independently decide on the implementation details.
* They work with **Stakeholders** / **teammates** / **L3s** on the plan.
* They have personal responsibility for the **“how”** of what they’re working on, but share responsibility for the **“what”** and **“why”**.
* They make consistent progress on their work by continuously defining the scope, incorporating feedback, trying different approaches and solutions, and deciding what will deliver the most value for users.
### L3 Product Engineers
Their scope is bigger than coding: they lead a product area, make key product decisions, and guide the team with strong leadership skills.
* **Planning**: They help **L2s** figure out what the next priority things to focus on and guide **L1s** in determining the right sequence of work to get a project done.
* **Day-to-Day Work**: They might be hands-on with the day-to-day work of the team, providing support and resources to their teammates as needed.
* **Customer Communication**: They handle direct communication with customers regarding planning and product direction, ensuring that customer needs and feedback are incorporated into the development process.
### How to Level Up
There is no formal process; leveling up happens at the end of **each year** and is based on two things:
1. **Manager Review**: Managers look at how well the engineer has performed and grown over the year.
2. **Peer Review**: Colleagues give feedback on how well the engineer has worked with the team.
This helps make sure promotions are fair and based on merit.
# Our Team Structure
Source: https://www.activepieces.com/docs/handbook/hiring/team
We are big believers in small teams of 10x engineers that outperform larger teams.
## No product management by default
Engineers decide what to build. If you're unsure, reach out to the team for other opinions or support.
## No Process by default
We trust engineers' judgment to decide whether a change is risky and requires external review, or is an easily reversible fix with no big impact on the end user.
## They Love Users
When engineers love the users, they ship fast and don't over-engineer, because they understand the requirements very well. They usually have empathy, which means they don't complicate things for everyone else.
## Pragmatism & Speed
Engineering planning sometimes seems sexy from a technical perspective, but being pragmatic means making decisions in a timely manner: taking baby steps and iterating fast rather than planning for the long run. Wrong decisions are easy to reverse early on, before too much time has been invested.
## Starts With Hiring
We hire very **slowly**. We are always looking for highly talented engineers. We love to hire people with a broader skill set and flexibility, low egos, and who are builders at heart.
We've found that working with strong engineers is one of the strongest reasons employees stay, and it lets everyone operate freely with less process.
# Activepieces Handbook
Source: https://www.activepieces.com/docs/handbook/overview
Welcome to the Activepieces Handbook!
This guide serves as a complete resource for understanding our organization. Inside, you'll find detailed sections covering various aspects of our internal processes and policies.
# Interface Design
Source: https://www.activepieces.com/docs/handbook/product/interface-design
This page is a collection of resources for interface design. It's a work in progress and will be updated as we go.
## Color Palette
The palette includes:
* Primary colors for main actions and branding
* Secondary colors for supporting elements
* Semantic colors for status and feedback (success, warning, destructive)
## Tech Stack
Our frontend is built with:
* **React** - Core UI framework
* **Shadcn UI** - Component library
* **Tailwind CSS** - Utility-first styling
## Learning Resources
* [Interface Design (Chapters 46-53)](https://basecamp.com/gettingreal/09.1-interface-first) from Getting Real by Basecamp
# Teams
Source: https://www.activepieces.com/docs/handbook/team
Meet the teams that make Activepieces magical ✨
Designing delightful user experiences to turn your ideas into powerful automations—fast.
Building the engine room that keeps Activepieces running smoothly, securely, and at scale.
Connecting everything: we create and maintain integrations with popular apps and platforms.
Growing our community: we drive awareness, adoption, and help our users thrive.
## People
| Name | Role & Team | Social Media |
| -------------------------- | ----------------------------- | ----------------------------------------------------------------------------- |
| Ashraf Samhouri | Founder | [LinkedIn](https://www.linkedin.com/in/ashrafsam/) |
| Mohammad AbuAboud | Founder | [LinkedIn](https://www.linkedin.com/in/mohammad-abuaboud/) |
| Amr Elmohamady | Platform | [LinkedIn](https://www.linkedin.com/in/amr-elmohamady/) |
| Chaker Atallah | Platform | [LinkedIn](https://www.linkedin.com/in/chaker-atallah/) |
| Yasser Belatreche | Platform | [LinkedIn](https://www.linkedin.com/in/yasser-belatreche-6b450620a/) |
| Kishan Parmar | Pieces | [LinkedIn](https://www.linkedin.com/in/kishanprmr/) |
| Sanket Nannaware | Pieces | [LinkedIn](https://www.linkedin.com/in/sanket-nannaware-a8505a22a/) |
| David Anyatonwu | Pieces | [LinkedIn](https://www.linkedin.com/in/david-anyatonwu-79165988/) |
| Hazem Adel | Product | [LinkedIn](https://www.linkedin.com/in/hazemadelkhalel/) |
| Louai Boumediene | Product | [LinkedIn](https://www.linkedin.com/in/louai-boumediene-018919262/) |
| Abdul Rahman Al Hussien | Product | [LinkedIn](https://www.linkedin.com/in/abdul-rahman-al-hussien-21a074198/) |
| Yazeed Kamal | Designer | [LinkedIn](https://www.linkedin.com/in/yazeed-kamal/) |
| Ginikachukwu Soluchi Nwibe | Content | [LinkedIn](https://www.linkedin.com/in/ginikachukwu-soluchi-nwibe-8010a5216/) |
| Ibrahim Abuznaid | Growth, Automation Specialist | [LinkedIn](https://www.linkedin.com/in/ibrahim-abuznaid-2b4079264/) |
| Kareem Nofal | Product & Customer Success | [LinkedIn](https://www.linkedin.com/in/kareem-nofal-3016091a1/) |
# Benchmark
Source: https://www.activepieces.com/docs/install/architecture/benchmark
## What's Measured
The benchmark tests end-to-end webhook flow execution: an HTTP request hits a **Catch Webhook** trigger, runs a **Code** action, and returns a response via **Return Response**. This measures the full request lifecycle including routing, execution, and response delivery.
## Results
| Configuration | Throughput (req/s) | Mean Latency | P50 | P99 |
| ------------------ | ------------------ | ------------ | ------- | ------- |
| 1 App, 2 Workers | 95.1 | 20.8 ms | 15.4 ms | 62.7 ms |
| 2 Apps, 4 Workers | 167.6 | 23.4 ms | 20.0 ms | 65.1 ms |
| 3 Apps, 6 Workers | 231.8 | 24.8 ms | 21.2 ms | 71.5 ms |
| 1 App, 4 Workers | 174.5 | 22.4 ms | 19.0 ms | 56.7 ms |
| 2 Apps, 8 Workers | 286.5 | 26.0 ms | 22.3 ms | 89.5 ms |
| 3 Apps, 12 Workers | 336.6 | 35.1 ms | 34.8 ms | 51.5 ms |
## Test Environment
* **Runner**: `depot-ubuntu-24.04-16` (16 cores)
* **App container**: 2 CPU / 4 GB memory
* **Worker container**: 0.5 CPU / 1 GB memory
* **Execution mode**: `SANDBOX_CODE_ONLY`
* **Rate limiter**: Disabled (`AP_PROJECT_RATE_LIMITER_ENABLED=false`)
* **Total requests**: 500 per configuration
## How to Reproduce
### Via GitHub Actions UI
1. Go to the **Actions** tab in the repository
2. Select the **Benchmark** workflow
3. Click **Run workflow**
4. Optionally adjust `total_requests` (default: 500)
5. All 6 configurations (1:2 and 1:4 ratios at 1x/2x/3x scale) run in parallel automatically
### Via CLI
```bash theme={null}
gh workflow run benchmark.yml
```
After the workflow completes, check the **summary** job for a combined comparison table.
These benchmarks run in `SANDBOX_CODE_ONLY` mode. This does **not** represent the performance of Activepieces Cloud, which uses a different sandboxing mechanism to support multi-tenancy. For more information, see [Sandboxing](/docs/install/architecture/sandboxing).
# Durable Execution
Source: https://www.activepieces.com/docs/install/architecture/durable-execution
A worker is halfway through a flow when its container is recycled. Another worker picks the run up, walks the graph from the trigger, and for every step whose output is already in the run's log it skips execution and reuses the cached output. It stops at the first step that is not yet in the log and runs that one. One mechanism covers crashes, deploys, multi-day pauses, and retries.
## The run log
Every flow run has a **run log**: a single compressed checkpoint file holding everything needed to resume the run on a fresh worker.
What is in it:
* One entry per completed step, keyed by step name: input (secrets censored), output, status, duration, and error message for failed steps.
* Loop iterations and router branches are recorded with the same shape, nested under their parent step.
* Run-level tags.
When it is written:
* Once at the start of the run, before the first step executes.
* Every 15 seconds during execution, from a background loop that snapshots whatever has completed since the last write.
* Once on the final state (success, failure, or pause).
Each write overwrites the previous copy. Only the latest checkpoint is retained, and the file is compressed before upload.
## Replay and skip
Resume is not a special path. Every time a worker starts executing a run, it walks the flow graph from the trigger and at every step asks: *is the output of this step already in the log?*
* If yes, and the step completed (`SUCCEEDED` or `PAUSED`), the engine returns the cached output and moves on.
* If no, the engine executes the step, records its output, and continues.
The first time a run is scheduled the log is empty, so every step runs. After a resume the log is full up to the interruption, so the engine fast-forwards through all of it and only executes whatever came next.
```mermaid theme={null}
sequenceDiagram
participant Worker
participant Log as Run log
participant Flow as Flow graph
Worker->>Log: load checkpoint
Worker->>Flow: walk from trigger
Flow-->>Worker: step A
Worker->>Log: output for A?
Log-->>Worker: yes (cached)
Note over Worker: skip A, reuse output
Flow-->>Worker: step B
Worker->>Log: output for B?
Log-->>Worker: yes (cached)
Note over Worker: skip B, reuse output
Flow-->>Worker: step C
Worker->>Log: output for C?
Log-->>Worker: no
Note over Worker: execute C, append to log
```
Worst-case data loss on an abrupt crash is the single step that was executing when the worker died. It is re-run from the last checkpoint; everything before it is skipped.
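The walk above can be sketched in a few lines. This is a simplified model assuming a linear flow and a hypothetical `RunLog` map; the real engine walks a graph with loops, branches, and periodic checkpointing:

```typescript
// Hypothetical run-log shape: one cached entry per completed step.
type RunLog = Map<string, { status: 'SUCCEEDED' | 'PAUSED'; output: unknown }>;

interface Step {
  name: string;
  run: () => unknown;
}

// Resume is not a special path: always walk from the trigger, skip cached steps.
function replay(steps: Step[], log: RunLog): RunLog {
  for (const step of steps) {
    if (log.has(step.name)) continue;   // already in the log: reuse, don't re-run
    const output = step.run();          // not yet in the log: execute it
    log.set(step.name, { status: 'SUCCEEDED', output });
  }
  return log;
}

// A run interrupted after step B: A and B are cached, only C executes on replay.
const executed: string[] = [];
const steps: Step[] = ['A', 'B', 'C'].map((name) => ({
  name,
  run: () => { executed.push(name); return `${name}-output`; },
}));
const log: RunLog = new Map();
log.set('A', { status: 'SUCCEEDED', output: 'A-output' });
log.set('B', { status: 'SUCCEEDED', output: 'B-output' });
replay(steps, log);
```

On the first schedule the log is empty, so all three steps execute; on the replay above, only `C` runs.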
## What triggers a resume
Every kind of interruption resolves through the same replay path; only the trigger differs.
* **Worker crash or deployment.** The queue reassigns the run to another worker, which loads the log and replays.
* **Paused step.** The piece creates a [waitpoint](/docs/install/architecture/waitpoints). When the waitpoint fires, a resume job is enqueued and a worker replays the run.
* **Retry from failed step.** The same log is reused; the run is re-queued and a worker replays from the failure point.
* **Normal progression within one worker.** Same replay model, without leaving the process.
# Network Security
Source: https://www.activepieces.com/docs/install/architecture/network-security
How Activepieces isolates outbound traffic from your internal network
## Overview
Activepieces makes outbound HTTP from two surfaces, and each is hardened separately:
1. **User code in flows** — Code steps, piece actions, anything a flow author can write. Hardened by `AP_NETWORK_MODE`; see [User-code egress](#user-code-egress).
2. **Server-side HTTP from the API** — OAuth token claim/refresh, Vault, Conjur, event-destination webhooks, on-call pager, MCP tool validation. Always filtered, tuned via `AP_SSRF_ALLOW_LIST`; see [Server-side egress](#server-side-egress).
Both surfaces block the same set of IPs (RFC1918 private, loopback, link-local / cloud-metadata, non-unicast) and share the `AP_SSRF_ALLOW_LIST` allow-list.
## User-code egress
Every flow eventually runs **user-supplied code** — Code steps, piece actions, HTTP requests. Without an explicit boundary, that code can reach anything the host can reach: `127.0.0.1`, Redis, Postgres, the Kubernetes API, cloud-metadata endpoints (`169.254.169.254`), the VPC. `AP_NETWORK_MODE` is the switch that controls this boundary.
`AP_NETWORK_MODE` defaults to `UNRESTRICTED`. Set it to `STRICT` in production to opt into the full defense-in-depth stack described below.
| Value | Effect |
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `UNRESTRICTED` | No outbound guards. User code can reach any host the worker can reach. Default while the hardening stack is being validated. |
| `STRICT` | All three layers below are installed. Outbound connections to private, loopback, link-local, and cloud-metadata IPs are blocked from user code. |
Related env vars:
* `AP_SSRF_ALLOW_LIST` — comma-separated IPs that bypass the block (e.g. an internal DB or sidecar).
* `HTTP_PROXY` / `HTTPS_PROXY` — if set, the engine routes all outbound HTTP(S) through the proxy. Loopback proxy ports are auto-exempted.
## How Isolation Works
In `STRICT` mode, three layers stack. Each one is a tripwire — if a request slips past one, the next one stops it.
```mermaid theme={null}
flowchart LR
UC["User code (Code step · Piece · fetch)"]
subgraph L1["Layer 1 — Engine SSRF guard (in-process)"]
direction TB
L1check{"IP in blocked range?"}
end
subgraph L2["Layer 2 — Egress proxy (loopback)"]
direction TB
L2check{"Any A/AAAA record blocked?"}
end
subgraph L3["Layer 3 — iptables lockdown (SANDBOX_PROCESS only)"]
direction TB
L3check{"Destination is proxy port?"}
end
NET(["Public internet"])
B1["SSRFBlockedError"]
B2["403 Egress blocked"]
B3["Kernel drops packet"]
UC --> L1check
L1check -- yes --> B1
L1check -- no --> L2check
L2check -- yes --> B2
L2check -- no --> L3check
L3check -- no --> B3
L3check -- yes --> NET
classDef blocked fill:#fde2e2,stroke:#d33,color:#7a1414
classDef allowed fill:#e3f7e4,stroke:#2a7,color:#14532d
class B1,B2,B3 blocked
class NET allowed
```
**Layer 1: Engine SSRF guard (in-process).** The engine monkey-patches Node's `dns.lookup`, `Socket.prototype.connect`, and `undici`'s global dispatcher. The moment user code resolves a hostname or opens a socket, the guard checks the resulting IP against a blocklist (every non-`unicast` range: RFC1918, loopback, link-local, multicast, cloud metadata). This covers `axios`, `fetch`, `undici`, and raw `http`/`net` in one pass.
**Layer 2: Egress proxy (loopback).** The worker spins up a loopback-bound HTTP(S) CONNECT proxy (`proxy-chain`). User code is routed through it by the engine dispatcher. The proxy re-resolves every hostname and checks **every** A/AAAA record, closing the multi-record bypass where one IP is public and another is private.
**Layer 3: iptables lockdown (`SANDBOX_PROCESS` only).** In isolate-based sandboxing, the worker applies `iptables` rules at boot that restrict egress from sandbox UIDs to the proxy port only. Even if user code finds a way to hit the raw socket layer, the kernel drops the packet.
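The range check at the heart of all three layers can be sketched in a few lines. This is an illustrative IPv4-only version with a hand-rolled CIDR match; the real guard also handles IPv6, and the real `AP_SSRF_ALLOW_LIST` accepts CIDRs, not just exact IPs as below:

```typescript
// Simplified blocked-range check (IPv4 only; the real guard also hooks
// dns.lookup, Socket.connect, and undici, and supports IPv6).
const BLOCKED_CIDRS = [
  '10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16', // RFC1918 private
  '127.0.0.0/8',                                   // loopback
  '169.254.0.0/16',                                // link-local / cloud metadata
  '224.0.0.0/4',                                   // multicast
];

function ipToInt(ip: string): number {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

function isBlocked(ip: string, allowList: string[] = []): boolean {
  if (allowList.includes(ip)) return false; // AP_SSRF_ALLOW_LIST equivalent (exact match only here)
  const addr = ipToInt(ip);
  return BLOCKED_CIDRS.some((cidr) => {
    const [base, bits] = cidr.split('/');
    const width = parseInt(bits, 10);
    const mask = width === 0 ? 0 : (~0 << (32 - width)) >>> 0;
    return ((addr & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
  });
}
```

With this check, `169.254.169.254` (cloud metadata) and `10.0.5.12` are blocked, `8.8.8.8` passes, and an allow-listed private IP passes.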
## How It Applies to Each Sandbox Mode
Network Security layers on top of the sandbox execution mode — they are independent choices. The table below shows what is active in `STRICT` mode for each sandbox:
| Sandbox Mode | Engine SSRF guard | Egress proxy | Kernel lockdown |
| -------------------------------------------------------- | ----------------- | ------------ | ------------------------------------------- |
| **No Sandboxing** (`UNSANDBOXED`) | ✅ | ✅ | ❌ — no per-UID separation available |
| **V8 Sandboxing** (`SANDBOX_CODE_ONLY`) | ✅ | ✅ | ❌ — engine process shares the host UID |
| **Kernel Namespaces** (`SANDBOX_PROCESS` with `isolate`) | ✅ | ✅ | ✅ — `iptables` rules scoped to sandbox UIDs |
See [Sandboxing](./sandboxing) for what each sandbox mode isolates at the process level. Network Security is orthogonal: it isolates what the sandbox is allowed to *reach*, regardless of *how* it runs.
In `UNSANDBOXED` and V8 modes there is no kernel-level fallback. The in-process SSRF guard and the egress proxy are the only two layers — which is still strong, but a determined attacker who compromises the Node process can bypass in-process checks. Use `SANDBOX_PROCESS` in multi-tenant deployments.
## Verifying Your Setup
From a Code step in a test flow, try:
```js theme={null}
const res = await fetch('http://169.254.169.254/latest/meta-data/')
```
With `AP_NETWORK_MODE=STRICT` you should see an `SSRFBlockedError` (from the engine guard) or a 403 from the egress proxy (for hostname-resolved requests). With `UNRESTRICTED` the request will succeed if the host allows it — confirming the guard is off.
## Rolling Out
1. Start in `UNRESTRICTED` (default) and identify any internal services that legitimate flows need to reach (internal APIs, databases used by Code steps, etc.).
2. Add those IPs to `AP_SSRF_ALLOW_LIST`.
3. Switch to `AP_NETWORK_MODE=STRICT`. Watch worker logs for `SSRFBlockedError` or `Egress proxy refused request` — each one is either an attack, a misconfigured flow, or a missing allow-list entry.
## Server-side egress
Separate from flow code, the API server itself makes outbound HTTP on behalf of admins and users — OAuth token claim/refresh, Hashicorp Vault, CyberArk Conjur, event-destination webhooks, on-call pager, MCP tool validation. The URLs come from admin config (Vault server URL) or user input (webhook destination, MCP server URL), so the same SSRF risks apply.
Unlike user-code egress, **this layer is always on** — it does not require `AP_NETWORK_MODE=STRICT`. It is implemented as a `request-filtering-agent` wrapper attached to the shared axios instances in `@activepieces/server-utils`; every outbound request flows through it. The blocked ranges are identical to the engine guard (RFC1918, loopback, link-local / cloud metadata, non-unicast).
`AP_SSRF_ALLOW_LIST` is shared between both surfaces. Add an IP or CIDR once and it applies to user-code egress **and** server-side HTTP. Restart the server after changing the value.
When a request is blocked, the axios error surfaced in the admin UI includes the `AP_SSRF_ALLOW_LIST` hint, so operators see the remediation directly in connection-test dialogs.
### Self-hosted providers on private IPs
If Vault, Conjur, an on-prem OAuth2 token endpoint, or an internal webhook resolves to a private IP, the server-side filter will reject it until you add the target to `AP_SSRF_ALLOW_LIST`:
```
AP_SSRF_ALLOW_LIST=10.0.5.12,192.168.10.0/24
```
### Relaxed TLS is still filtered
Connectors that accept self-signed certs (e.g. CyberArk Conjur in a private cluster) use `rejectUnauthorized: false`. The SSRF filter is preserved under this setting — TLS verification is relaxed, SSRF protection is not.
# Overview
Source: https://www.activepieces.com/docs/install/architecture/overview
This page describes the main components of Activepieces, focusing mainly on workflow execution.
## Components
**Activepieces:**
* **App**: The main application that organizes everything from APIs to scheduled jobs.
* **Worker**: Polls for new jobs, allocates a sandbox from the pool, executes the flows with the engine inside the sandbox, and sends results back to the app.
* **Sandbox**: An isolated execution environment that manages process lifecycle, resource limits, and communication with the engine via WebSocket.
* **Engine**: TypeScript code that parses flow JSON and executes it. It is compiled into a single JS file and runs inside the sandbox.
* **UI**: Frontend written in React.
**Third Party**:
* **Postgres**: The main database for Activepieces.
* **Redis**: This is used to power the queue using [BullMQ](https://docs.bullmq.io/).
## Reliability & Scalability
Postgres and Redis availability is outside the scope of this documentation, as many cloud providers already implement best practices to ensure their availability.
* **Webhooks**:
All webhooks are sent to the Activepieces app, which performs basic validation and adds them to the queue. During a spike, webhooks accumulate in the queue and are processed as workers become available.
* **Polling Trigger**:
All recurring jobs are added to Redis. In case of a failure, the missed jobs will be executed again.
* **Flow Execution**:
Workers poll jobs from the queue. In the event of a spike, the flow execution will still work but may be delayed depending on the size of the spike.
To scale Activepieces, you typically need to increase the replicas of either workers, the app, or the Postgres database. A small Redis instance is sufficient as it can handle thousands of jobs per second and rarely acts as a bottleneck.
## Repository Structure
The repository is structured as a monorepo using Turbo, with TypeScript as the primary language. It is divided into several packages:
```
.
├── packages
│ ├── web
│ ├── server
│ │ ├── api
│ │ ├── worker
│ │ ├── engine
│ │ ├── sandbox
│ │ └── common
│ ├── ee
│ ├── pieces
│ └── shared
```
* `web`: This package contains the user interface, implemented using the React framework.
* `api`: This package contains the main application written in TypeScript with the Fastify framework.
* `worker`: This package contains the logic of accepting flow jobs and executing them using the engine.
* `server-sandbox`: This package contains the sandbox execution environment, including process isolation, pool management, and WebSocket communication between the worker and engine.
* `common`: This package contains the shared logic between worker and app.
* `engine`: This package contains the logic for flow execution within the sandbox. Located under `server/engine`.
* `pieces`: This package contains the implementation of triggers and actions for third-party apps.
* `shared`: This package contains shared data models and helper functions used by the other packages.
* `ee`: This package contains features that are only available in the paid edition.
# Piece syncing & versioning
Source: https://www.activepieces.com/docs/install/architecture/piece-syncing
How pieces are packaged, distributed, and pinned to flow versions
Pieces are standard [npm packages](https://www.npmjs.com/search?q=%40activepieces%2Fpiece-). Two facts follow from that:
* **No server upgrade is needed for new pieces** — a sync job pulls fresh versions on its own.
* **Each step is pinned to an exact version** — flows never auto-upgrade. Bumps are explicit, through the builder.
```mermaid theme={null}
flowchart LR
REG["npm registry (@activepieces/piece-*)"]
CAT["Local piece catalog"]
STEP["Flow step pieceVersion: 0.5.3"]
UI["Builder version dialog"]
REG -- "hourly sync (OFFICIAL_AUTO)" --> CAT
CAT -- "exact version on add" --> STEP
UI -- "explicit upgrade" --> STEP
classDef src fill:#eef,stroke:#88a
classDef catalog fill:#efe,stroke:#7a7
classDef flow fill:#fee,stroke:#a77
class REG src
class CAT catalog
class STEP,UI flow
```
## Packaging
| Type | Source | Installed by |
| ----------------- | ------------------------------------ | -------------- |
| Official | Activepieces cloud registry | Auto-sync |
| Custom (npm) | npm registry, scoped to one platform | Platform admin |
| Private (archive) | `.tgz` upload | Platform admin |
Custom and private pieces are managed manually — see [Manage pieces](/docs/admin-guide/guides/manage-pieces).
## Auto-sync
| `AP_PIECES_SYNC_MODE` | Behavior |
| --------------------- | ------------------------------------------------------------------------- |
| `OFFICIAL_AUTO` | Hourly reconcile against the cloud registry. Default for all deployments. |
Custom and private pieces are never touched by the sync job.
## Server compatibility
Every piece declares a `minimumSupportedRelease` (and optional `maximumSupportedRelease`) in its definition — the range of Activepieces server releases it works on. The catalog filters pieces against the running server's release, so an out-of-range piece is never listed in the builder and never served from the registry.
**Self-hosted: upgrade to `0.82.0` or newer.** Every new piece now declares `minimumSupportedRelease ≥ 0.82.0`, the floor that came in with the latest piece-context version. Servers below `0.82.0` will not pick up any newly published pieces or bug fixes. Cloud is always on the latest release.
## Version pinning
Adding a step records the exact piece version at that moment (e.g. `0.5.3`). The pin stays until a human changes it. To upgrade, open the step in the builder, click the version next to its name, and pick a new one. The dialog warns when the change crosses a minor or major boundary.
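Conceptually, the pin is just an exact version string stored on the step. Below is a hypothetical, trimmed-down shape (the real flow schema has more fields), plus a naive version of the minor/major warning check a dialog might use:

```typescript
// Hypothetical, trimmed-down step shape -- the real flow schema is richer.
interface PieceStep {
  name: string;
  pieceName: string;
  pieceVersion: string; // exact pin, e.g. '0.5.3' -- never a semver range
}

const step: PieceStep = {
  name: 'send_message',
  pieceName: '@activepieces/piece-slack',
  pieceVersion: '0.5.3', // recorded when the step was added; changed only via the builder
};

// Naive check: warn when an upgrade crosses a minor or major boundary.
function crossesMinorOrMajor(from: string, to: string): boolean {
  const [fMaj, fMin] = from.split('.').map(Number);
  const [tMaj, tMin] = to.split('.').map(Number);
  return fMaj !== tMaj || fMin !== tMin;
}
```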
## Related
* [Manage pieces](/docs/admin-guide/guides/manage-pieces) — install, hide, upload custom pieces.
* [Piece versioning](/docs/build-pieces/piece-reference/piece-versioning) — semver rules for piece authors.
# Sandboxing
Source: https://www.activepieces.com/docs/install/architecture/sandboxing
Choose the right isolation mode for running flow code
Flow code — Code steps and piece actions — always runs inside a **sandbox** that wraps the engine process. `AP_EXECUTION_MODE` decides which sandbox, and it is the most consequential security choice in a self-hosted deployment: it decides whether a malicious flow is contained to one worker pod or can reach the kernel.
## Execution Modes
**For enterprise deployments, use `SANDBOX_CODE_ONLY` (V8 isolation).** It is the only mode that is both multi-tenant-safe *and* runs as an unprivileged container — which is what Activepieces Cloud uses, and what fits inside a standard Kubernetes security baseline.
## Why V8 Sandboxing Exists
`SANDBOX_PROCESS` uses the `isolate` binary, which creates fresh Linux namespaces per run. That needs `CAP_SYS_ADMIN` — in practice, `privileged: true` on the container. V8 Sandboxing exists so you don't have to grant that.
### A concrete K8s example
You run Activepieces in your own Kubernetes cluster, next to a Salesforce-sync service and a finance-analytics pod. A customer ships a malicious Code step.
* **With `SANDBOX_PROCESS`** — the worker pod is privileged. A kernel exploit escapes to the host, reads the service-account token, hits the Kubernetes API, and pivots to the Salesforce pod and the finance DB. Blast radius: **your whole cluster**.
* **With `SANDBOX_CODE_ONLY`** — the worker has no special capabilities. The Code step runs inside a fresh V8 isolate (no `require`, no filesystem, no npm). Blast radius: **that one worker pod**.
V8 isolation is how you get multi-tenant code safety without handing a workflow engine kernel-level access to everything sharing the cluster.
Only choose `SANDBOX_PROCESS` if you genuinely need arbitrary `npm` packages in Code steps, and run it on a dedicated node pool. A privileged Activepieces worker should never share a node with unrelated workloads.
## How Each Mode Works
### `fork()` + V8 — `UNSANDBOXED`, `SANDBOX_CODE_ONLY`
The engine runs as a plain `child_process.fork` with a memory cap. In `SANDBOX_CODE_ONLY`, every Code step is additionally wrapped in a fresh [`isolated-vm`](https://www.npmjs.com/package/isolated-vm) context — 128 MB per isolate, `require` removed, disposed after the step. No Linux-namespace machinery, no `CAP_SYS_ADMIN`, no privileged container. Sandboxes stay warm across jobs, so execution is fast.
* **V8 guarantees:** user code cannot touch `require`, the filesystem, or other steps' memory.
* **V8 does not:** protect isolates from each other if the host Node process itself is compromised.
### `isolate` binary — `SANDBOX_PROCESS`, `SANDBOX_CODE_AND_PROCESS`
The engine runs inside [ioi/isolate](https://github.com/ioi/isolate), which creates fresh PID, mount, user, and UTS namespaces per run and mounts the engine and code artifacts read-only. Arbitrary `npm` packages are safe inside Code steps because filesystem and process state are scoped to the box. The cost: cold boot per run (not reusable), and the worker container must hold `CAP_SYS_ADMIN` — `--privileged` in Docker, `securityContext.privileged: true` in Kubernetes.
## Network Isolation
Execution mode decides *how* user code runs; `AP_NETWORK_MODE` decides *what it can reach*. See [Network Security](./network-security) for the SSRF guard, egress proxy, and iptables lockdown that layer on top of every sandbox mode.
# Waitpoints
Source: https://www.activepieces.com/docs/install/architecture/waitpoints
A **waitpoint** is the durable row that represents a paused step on a flow run. The flow run row only carries status (`PAUSED`, `RUNNING`, …); the *why* lives on the waitpoint.
```mermaid theme={null}
stateDiagram-v2
[*] --> PENDING: createWaitpoint
PENDING --> COMPLETED: resume signal
COMPLETED --> [*]: flow resumes
[*] --> COMPLETED: pre-completed (resume arrived first)
```
## Schema
| Field | Meaning |
| ----------------------- | ---------------------------------------------------------------------------------------------------- |
| `flowRunId`, `stepName` | Which run/step is paused. Unique together: a step has at most one waitpoint. |
| `type` | `DELAY` or `WEBHOOK`. |
| `status` | `PENDING` until the resume signal arrives, then `COMPLETED`. |
| `resumeDateTime` | For `DELAY`: when to fire the resume. |
| `responseToSend` | For `WEBHOOK`: optional HTTP response returned immediately to the original webhook trigger. |
| `resumePayload` | `{ body, headers, queryParams }` from the resume call, surfaced to the piece as `ctx.resumePayload`. |
## Types
* **`DELAY`**: resumes at `resumeDateTime`. The server schedules a one-time job for that timestamp. The delay is bounded by a configurable server-side maximum.
* **`WEBHOOK`**: resumes on any HTTP call to the waitpoint's resume URL. If `responseToSend` is set, it is returned immediately to the original trigger so a single webhook can respond-then-pause.
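For `DELAY`, bounding by the server-side maximum means the effective resume time is the requested timestamp clamped to `now + max`. A sketch with a hypothetical helper name:

```typescript
// Clamp a requested DELAY resume time to the server-side maximum before
// scheduling the one-time resume job. Past timestamps resume immediately.
function resolveResumeDateTime(
  requested: Date,
  now: Date,
  maxDelayMs: number,
): Date {
  const delayMs = Math.max(0, requested.getTime() - now.getTime());
  return new Date(now.getTime() + Math.min(delayMs, maxDelayMs));
}
```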
## Lifecycle
1. **Create.** The piece calls `ctx.run.createWaitpoint({ type, ... })` + `ctx.run.waitForWaitpoint(id)`. The engine marks the step as `paused` and asks the server to insert a `PENDING` row. The insert is idempotent on the `(flow run, step)` pair.
2. **Checkpoint.** The engine serializes the execution context and transitions the flow run to `PAUSED`.
3. **Resume signal.** Either an HTTP call on the resume URL or the scheduled job firing. Both carry `{ body, headers, queryParams }`.
4. **Re-run.** A worker rebuilds the context and re-invokes the same action with `ctx.executionType === ExecutionType.RESUME` and `ctx.resumePayload` populated.
```mermaid theme={null}
sequenceDiagram
participant Piece
participant Engine
participant Server
participant Queue as Job queue
participant Caller as Resume caller
Piece->>Engine: createWaitpoint + waitForWaitpoint
Engine->>Server: register waitpoint
Server-->>Engine: { id, resumeUrl }
Engine->>Engine: checkpoint context, status = PAUSED
Note over Piece,Server: ...time passes...
Caller->>Server: HTTP call on resumeUrl
Server->>Queue: enqueue resume job
Queue->>Engine: resume
Engine->>Piece: re-invoke with resumePayload
```
## Resume-before-pause race
A callback can arrive before the flow run has finished writing `PAUSED` to disk. The protocol absorbs the race:
* Completing a waitpoint takes a write lock on the `PENDING` row. If it exists, it flips to `COMPLETED` and stores the `resumePayload`. If it does not, a **pre-completed** row is inserted instead.
* When the flow run transitions to `PAUSED`, the server checks for a matching `COMPLETED` waitpoint and enqueues the resume job immediately.
Duplicate callbacks are absorbed by the uniqueness constraint; the engine never processes a resume twice.
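The race-absorption protocol above can be modeled in a few lines. This is a toy in-memory stand-in for the waitpoint table (the real server uses row locks in the database), included only to make the state transitions concrete:

```typescript
type Status = "PENDING" | "COMPLETED";
interface Waitpoint { status: Status; resumePayload?: unknown }

// In-memory stand-in for the waitpoint table, keyed by `${flowRunId}:${stepName}`.
const table = new Map<string, Waitpoint>();

function complete(key: string, resumePayload: unknown): void {
  const row = table.get(key);
  if (row === undefined) {
    // Resume arrived before the pause was written: insert a pre-completed row.
    table.set(key, { status: "COMPLETED", resumePayload });
  } else if (row.status === "PENDING") {
    row.status = "COMPLETED";
    row.resumePayload = resumePayload;
  }
  // Duplicate callbacks find a COMPLETED row and are absorbed as a no-op.
}

function onPaused(key: string): boolean {
  // Called when the run transitions to PAUSED; true means a matching
  // COMPLETED waitpoint already exists, so enqueue the resume job now.
  const existing = table.get(key);
  if (existing?.status === "COMPLETED") return true;
  if (existing === undefined) table.set(key, { status: "PENDING" });
  return false;
}
```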
## Endpoints
| Method | Path | Notes |
| ------ | ------------------------------------------------ | ----------------------------------------------------------------------------- |
| `POST` | `/v1/waitpoints` | Engine-only. Creates a `PENDING` waitpoint, returns its resume URL. |
| `ALL` | `/v1/flow-runs/:id/waitpoints/:waitpointId` | Resume, async. |
| `ALL` | `/v1/flow-runs/:id/waitpoints/:waitpointId/sync` | Resume, sync. The HTTP response is whatever the flow produces after resuming. |
## Piece API
Piece authors create waitpoints with `ctx.run.createWaitpoint` / `ctx.run.waitForWaitpoint`. Patterns for `WEBHOOK`, `DELAY`, and `responseToSend` are in Flow Control.
# Workers
Source: https://www.activepieces.com/docs/install/architecture/workers
Run, scale, and tune the container that executes your flows
The **worker** is the container that actually runs flows. It pulls jobs from Redis, executes each one inside a sandbox, and streams results back to the app. This page is about operating it: how many to run, how to size them, and what happens when one dies mid-flow.
## What a Worker Does
Each worker pulls jobs off the BullMQ queue in Redis, hands them to a sandboxed engine process, and posts progress and results back to the app over HTTP. Workers are **stateless** — they hold no per-flow memory, which is what makes horizontal scaling and crash recovery straightforward.
## Scaling
There are two independent knobs:
* **Replicas** — scale horizontally. Workers are stateless, so adding replicas is safe behind any orchestrator (Docker Compose, Kubernetes, Nomad). The default Docker Compose setup starts 5 replicas.
* **`AP_WORKER_CONCURRENCY`** — concurrent jobs per replica. Default `5`. Each concurrent job uses one sandbox instance, so this also sets the peak sandbox count per worker.
Size replicas for throughput, and concurrency for how much a single replica can chew on. If flows are CPU-bound, lower concurrency and add replicas. If flows are I/O-bound (most automation workloads), raise concurrency before adding replicas.
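A quick back-of-envelope, using hypothetical helper names: peak concurrent jobs across the fleet is replicas times `AP_WORKER_CONCURRENCY`, and worst-case sandbox memory per replica follows from `AP_SANDBOX_MEMORY_LIMIT` (in KB, per engine process):

```typescript
// Each concurrent job occupies one sandbox, so this is also the peak
// sandbox count across the fleet.
function peakConcurrentJobs(replicas: number, workerConcurrency: number): number {
  return replicas * workerConcurrency;
}

// Worst-case sandbox memory per replica, in MB.
function peakSandboxMemoryMb(workerConcurrency: number, sandboxLimitKb: number): number {
  return (workerConcurrency * sandboxLimitKb) / 1024;
}
```

With the defaults (5 replicas, concurrency 5, 1,048,576 KB per sandbox), that is 25 concurrent jobs fleet-wide and up to 5 GB of sandbox memory per replica.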
See [Hardware Requirements](../configuration/hardware) for per-replica memory and CPU sizing.
In `UNSANDBOXED` and `SANDBOX_CODE_ONLY` modes, sandboxes stay warm across jobs, so steady-state execution is fast. `SANDBOX_PROCESS` spins up a fresh sandbox per run, which trades latency for kernel-level isolation.
## Failure Behaviour
If a worker crashes, is evicted, or loses its Redis lease mid-run, BullMQ requeues the job and another worker picks it up. The engine's durable-execution layer replays already-completed steps from persisted state rather than re-running them, so side effects are not duplicated. This means a worker restart or OOM kill during a flow is survivable — you do not need to drain traffic before rolling workers.
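The replay behavior can be sketched as: steps already present in persisted state are skipped, so only unfinished steps execute after a requeue. A toy model, not the engine's actual code:

```typescript
// Replay a flow against persisted step outputs: completed steps are restored
// from state, so their side effects are never duplicated after a crash.
function runFlow(
  steps: string[],
  persisted: Map<string, unknown>,
  execute: (step: string) => unknown,
): void {
  for (const step of steps) {
    if (persisted.has(step)) continue; // already ran before the crash
    persisted.set(step, execute(step)); // only unfinished steps have effects
  }
}
```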
See [Durable Execution](./durable-execution) for exactly what is persisted and how replay works.
## Sandbox & Network Isolation
Isolating flows from the worker container and the outside world involves two independent choices:
* [**Sandboxing**](./sandboxing) — `AP_EXECUTION_MODE` decides how user code is isolated from the host kernel. This is the most important security decision for multi-tenant deployments.
* [**Network Security**](./network-security) — `AP_NETWORK_MODE` decides what the sandbox is allowed to reach on the network.
# Breaking Changes
Source: https://www.activepieces.com/docs/install/configuration/breaking-changes
This list shows all versions that include breaking changes and how to upgrade.
## 0.84.0
### What has changed?
#### Flow Run Log Size Enforcement
This is an important behavior change in how flow run logs are stored and capped, but **no action is required** — the existing default (`AP_MAX_FLOW_RUN_LOG_SIZE_MB=50`) is preserved.
* Large step outputs are now offloaded to object storage instead of sitting inline in worker memory, controlled by the new `AP_FLOW_RUN_LOG_SLICE_THRESHOLD_KB` env var (default `32`).
* Large step input values are replaced with a `(truncated, original size )` placeholder in the log via the new `AP_FLOW_RUN_LOG_INPUT_TRUNCATE_THRESHOLD_KB` env var (default `2`). The step still receives the full value at runtime.
* Runs whose combined step inputs and outputs exceed `AP_MAX_FLOW_RUN_LOG_SIZE_MB` now terminate with the new `LOG_SIZE_EXCEEDED` status instead of silently trimming inputs. Offloaded outputs still count their original size against the cap.
* See [Technical Limits → Files & flow run logs](/docs/flows/known-limits#files-flow-run-logs) for the full behavior, environment variables, and defaults.
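The thresholds compose into a simple placement decision per step output. This is an illustrative model of the documented behavior, not the server's implementation:

```typescript
type Placement = "INLINE" | "OFFLOAD" | "LOG_SIZE_EXCEEDED";

interface LogPolicy {
  sliceThresholdKb: number; // AP_FLOW_RUN_LOG_SLICE_THRESHOLD_KB (default 32)
  maxLogSizeMb: number;     // AP_MAX_FLOW_RUN_LOG_SIZE_MB (default 50)
}

function placeOutput(sizeKb: number, runTotalKb: number, p: LogPolicy): Placement {
  // Offloaded outputs still count their original size against the cap.
  if (runTotalKb + sizeKb > p.maxLogSizeMb * 1024) return "LOG_SIZE_EXCEEDED";
  return sizeKb > p.sliceThresholdKb ? "OFFLOAD" : "INLINE";
}
```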
## 0.82.0
### What has changed?
#### Piece Versions Are Now Pinned
* Piece versions are no longer stored with wildcards (`~1.2.0`, `^1.2.0`). All piece steps now use exact versions (e.g. `1.2.0`).
* A migration automatically strips wildcard prefixes from all existing flow versions on upgrade.
* The `LOCK_AND_PUBLISH` operation no longer resolves piece versions at publish time — steps run with the exact version stored in the flow.
* A new **version switcher** UI in the builder lets users upgrade or downgrade piece versions manually.
#### REST API
* `ADD_ACTION`, `UPDATE_ACTION`, and `UPDATE_TRIGGER` now strip wildcard prefixes (`~`, `^`) from `pieceVersion` before saving. If you were relying on wildcard versions to auto-resolve on publish, your steps will now be pinned to the base version (e.g. `~1.2.0` becomes `1.2.0`).
* `LOCK_AND_PUBLISH` no longer modifies `pieceVersion` on any step. The version in the draft is the version that will run.
* A new endpoint `GET /v1/pieces/:name/versions` is available to list all versions of a piece, useful for building custom version selection.
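The wildcard stripping is equivalent to dropping a leading `~` or `^` from the version string; a minimal sketch:

```typescript
// Pin a piece version: `~1.2.0` and `^1.2.0` both become `1.2.0`;
// already-exact versions pass through unchanged.
function pinPieceVersion(pieceVersion: string): string {
  return pieceVersion.replace(/^[~^]/, "");
}
```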
#### Concurrent Jobs Env Var Renamed
* `AP_MAX_CONCURRENT_JOBS_PER_PROJECT` → `AP_DEFAULT_CONCURRENT_JOBS_LIMIT`. Default drops from `100` to `5`.
#### Outbound HTTP is SSRF-filtered
* Server-side HTTP (OAuth, Vault, Conjur, event destinations, on-call pager, MCP validator) now blocks private, loopback, and cloud-metadata IPs. Reach internal hosts by adding their IP/CIDR to `AP_SSRF_ALLOW_LIST`.
### Do you need to take action?
* **Pinned piece versions** — if you create or update flows via the REST API with wildcard versions (`~` or `^`), switch to exact versions; wildcards are silently stripped. If your publish flow relied on `LOCK_AND_PUBLISH` resolving wildcards, set the exact version on each step before publishing. Use `GET /v1/pieces/:name/versions` to list available versions.
* **Concurrent jobs** — rename `AP_MAX_CONCURRENT_JOBS_PER_PROJECT` to `AP_DEFAULT_CONCURRENT_JOBS_LIMIT`. To keep the old cap, set `AP_DEFAULT_CONCURRENT_JOBS_LIMIT=100`.
* **SSRF filter** — if you self-host Vault, Conjur, an OAuth2 token endpoint, or any internal webhook on a private IP, set `AP_SSRF_ALLOW_LIST` (comma-separated IPs or CIDRs, e.g. `10.0.5.12,192.168.10.0/24`) before upgrading.
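An `AP_SSRF_ALLOW_LIST` entry is either a bare IPv4 address or a CIDR block. A simplified sketch of the membership check (illustrative only, not the server's actual matcher; IPv4 only):

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, o) => ((acc << 8) | (Number(o) & 255)) >>> 0, 0);
}

// True when `ip` matches any allow-list entry (bare IP = /32).
function inAllowList(ip: string, allowList: string[]): boolean {
  return allowList.some((entry) => {
    const [base, bitsStr] = entry.split("/");
    const bits = bitsStr === undefined ? 32 : Number(bitsStr);
    const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
    return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
  });
}
```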
## 0.80.0
### What has changed?
#### Infrastructure
* A new environment variable `AP_MAX_WEBHOOK_PAYLOAD_SIZE_MB` has been introduced to control the maximum allowed webhook payload size. The default is `25` MB. Webhooks exceeding this limit will be rejected with a `413 Payload Too Large` response.
* Nginx has been removed from the Docker image. Fastify now serves both the API and the React frontend directly. All API routes are now under the `/api` prefix natively. If you were using `/v1/health` as a health check endpoint (e.g. in Kubernetes probes or load balancer checks), update it to `/api/v1/health`.
* The secret managers feature has been refactored. The version shipped in **0.79.0** is no longer supported; it saw little use, but if you plan to adopt the feature, upgrade to **0.80.0** first.
#### API
* A new `UPDATE_SAMPLE_DATA_INFO` flow operation has been introduced to handle sample data updates independently.
* `UPDATE_ACTION` and `UPDATE_TRIGGER` no longer accept or apply changes to `sampleData` in step settings. Any `sampleData` fields sent in these requests will be ignored and the existing sample data will be preserved.
* A new required `lastUpdatedDate` field has been added to all actions and triggers, tracked automatically by the server. It is not accepted in `UPDATE_ACTION` or `UPDATE_TRIGGER` requests.
### Do you need to take action?
* If you want to restrict webhook payload sizes below the new `25` MB default, set `AP_MAX_WEBHOOK_PAYLOAD_SIZE_MB` to your desired limit.
* If you are using the API to update sample data on steps via `UPDATE_ACTION` or `UPDATE_TRIGGER`, switch to the new `UPDATE_SAMPLE_DATA_INFO` operation instead.
* If you have custom health checks pointing to `/v1/health`, update them to `/api/v1/health`.
## 0.78.1
### What has changed?
* The Platform `Operator` role can now edit all projects.
### Do you need to take action?
* Only if you want to restrict Operators from having editor access to every project. Review your Operator permissions as needed.
## 0.78.0
### What has changed?
* The `usageCount` field has been removed from both the template API responses and the database—it's no longer available.
* The Todos feature is now deprecated and will not be supported going forward.
### Do you need to take action?
* If you're using the Todos feature, update your flows to use the new approvals channels available from the approvals tab in the piece selector.
## 0.77.0
### What has changed?
* For Embed Plan users: the "Use a Template" dialog no longer appears when clicking the "New Flow" button.
* The `/flow-templates` API endpoints have been removed and replaced by `/templates`.
* Log size configuration has changed: `AP_MAX_FILE_SIZE_MB` no longer controls flow run logs. Use `AP_MAX_FLOW_RUN_LOG_SIZE_MB` instead.
### Do you need to take action?
* If you are on the embed plan, update your implementation to redirect users to the `/templates` page.
* Review the new endpoints documentation: [Templates API Schema](https://www.activepieces.com/docs/endpoints/templates/schema).
* If you use a custom value for `AP_MAX_FILE_SIZE_MB`, be sure to also set `AP_MAX_FLOW_RUN_LOG_SIZE_MB` accordingly.
## 0.75.0
### What has changed?
* When you navigate to a flow run inside the builder, the URL now changes to `/runs`. Embedding customers should review any route guard that only allows navigation to `/flows`.
* In **development mode**, loading piece translations is now off by default. Set `AP_LOAD_TRANSLATIONS_FOR_DEV_PIECES=true` to enable.
### Do you need to take action?
* Check your embedding navigation handler and see if it would be blocking the user from seeing the runs inside the builder or not.
* If you want to load translations for pieces in development mode, set `AP_LOAD_TRANSLATIONS_FOR_DEV_PIECES=true` in your environment variables.
## 0.74.0
### What has changed?
* The default embedded database for development and lightweight deployments has changed from **SQLite3** to [**PGLite**](https://pglite.dev/) (embedded PostgreSQL).
* The environment variable `AP_DB_TYPE=SQLITE3` is now deprecated and replaced with `AP_DB_TYPE=PGLITE`.
* Existing SQLite databases will be automatically migrated to PGLite on first startup.
* Templates are broken in this version. A migration issue changed template IDs, breaking API endpoints. This will be fixed in the next patch release.
* The `aiCredits` feature per project has been removed. In the next version, it will be replaced by integration with the AI Gateway.
### Do you need to take action?
* **If you are using `AP_DB_TYPE=SQLITE3`:** Update your configuration to use `AP_DB_TYPE=PGLITE` instead.
* **If you are using templates:** Wait for the next patch release to fix the template IDs.
## 0.73.0
### What has changed?
* Major change to MCP: [Read the announcement.](https://community.activepieces.com/t/mcp-update-easier-faster-and-more-secure/11177)
* Configuring SMTP in the platform admin is no longer supported; use the `AP_SMTP_`-prefixed [environment variables](https://www.activepieces.com/docs/install/configuration/environment-variables#environment-variables) instead.
### Do you need to take action?
* If you are currently using MCP, review the linked announcement for important migration details and upgrade guidance.
## 0.71.0
### What has changed?
* In a separate-workers setup, workers now require direct access to Redis.
* `AP_EXECUTION_MODE` mode `SANDBOXED` is now deprecated and replaced with `SANDBOX_PROCESS`
* Code Copilot has been deprecated. It will be reintroduced in a different, more powerful form in the future.
### When is action necessary?
* If you run a separate-workers setup, make sure the workers have access to Redis.
* If you are using `AP_EXECUTION_MODE` mode `SANDBOXED`, you should replace it with `SANDBOX_PROCESS`
## 0.70.0
### What has changed?
* `AP_QUEUE_MODE` is now deprecated and replaced with `AP_REDIS_TYPE`
* If you are using Sentinel Redis, set `AP_REDIS_TYPE` to `SENTINEL`
### When is action necessary?
* If you are using `AP_QUEUE_MODE`, you should replace it with `AP_REDIS_TYPE`
* If you are using Sentinel Redis, set `AP_REDIS_TYPE` to `SENTINEL`
## 0.69.0
### What has changed?
* `AP_FLOW_WORKER_CONCURRENCY` and `AP_SCHEDULED_WORKER_CONCURRENCY` are now deprecated and replaced with `AP_WORKER_CONCURRENCY`; all jobs now share a single queue
### When is action necessary?
* If you are using `AP_FLOW_WORKER_CONCURRENCY` or `AP_SCHEDULED_WORKER_CONCURRENCY`, you should replace them with `AP_WORKER_CONCURRENCY`
## 0.66.0
### What has changed?
* If you use the embedding SDK, please upgrade to version 0.6.0. Hiding the navbar above the flows table in the dashboard now relies on `embedding.dashboard.hideFlowsPageNavbar` instead of `embedding.dashboard.hideSidebar`
## 0.64.0
### What has changed?
* MCP management is removed from the embedding SDK.
## 0.63.0
### What has changed?
* Replicate provider's text models have been removed.
### When is action necessary?
* If you are using one of Replicate's text models, you should replace it with another model from another provider.
## 0.46.0
### What has changed?
* The UI for "Array of Properties" inputs in the pieces has been updated, particularly affecting the "Dynamic Value" toggle functionality.
### When is action necessary?
* No action is required for this change.
* Your published flows will continue to work without interruption.
* When editing existing flows that use the "Dynamic Value" toggle on "Array of Properties" inputs (such as the "files" parameter in the "Extract Structured Data" action of the "Utility AI" piece), the end user will need to remap the values again.
* For details on the new UI implementation, refer to this [announcement](https://community.activepieces.com/t/inline-items/8964).
## 0.38.6
### What has changed?
* Workers no longer rely on the `AP_FLOW_WORKER_CONCURRENCY` and `AP_SCHEDULED_WORKER_CONCURRENCY` environment variables. These values are now retrieved from the app server.
### When is action necessary?
* If `AP_CONTAINER_TYPE` is set to `WORKER` on the worker machine, and `AP_SCHEDULED_WORKER_CONCURRENCY` or `AP_FLOW_WORKER_CONCURRENCY` are set to zero on the app server, workers will stop processing the queues. To fix this, check the [Separate Worker from App](https://www.activepieces.com/docs/install/configuration/separate-workers) documentation and set the `AP_CONTAINER_TYPE` to fetch the necessary values from the app server. If no container type is set on the worker machine, this is not a breaking change.
## 0.35.1
### What has changed?
* The 'name' attribute has been renamed to 'externalId' in the `AppConnection` entity.
* The 'displayName' attribute has been added to the `AppConnection` entity.
### When is action necessary?
* If you are using the connections API, you should update the `name` attribute to `externalId` and add the `displayName` attribute.
## 0.35.0
### What has changed?
* All branches are now converted to routers, and downgrade is not supported.
## 0.33.0
### What has changed?
* Files from actions or triggers are now stored in the database / S3 to support retries from certain steps, and the size of files from actions is now subject to the limit of `AP_MAX_FILE_SIZE_MB`.
* Files in triggers were previously passed as base64 encoded strings; now they are passed as file paths in the database / S3. Paused flows that have triggers from version 0.29.0 or earlier will no longer work.
### When is action necessary?
* If you are dealing with large files in the actions, consider increasing the `AP_MAX_FILE_SIZE_MB` to a higher value, and make sure the storage system (database/S3) has enough capacity for the files.
## 0.30.0
### What has changed?
* `AP_SANDBOX_RUN_TIME_SECONDS` is now deprecated and replaced with `AP_FLOW_TIMEOUT_SECONDS`
* `AP_CODE_SANDBOX_TYPE` is now deprecated and replaced with new mode in `AP_EXECUTION_MODE`
### When is action necessary?
* If you set `AP_CODE_SANDBOX_TYPE` to `V8_ISOLATE`, you should switch to `AP_EXECUTION_MODE` set to `SANDBOX_CODE_ONLY`
* If you use `AP_SANDBOX_RUN_TIME_SECONDS` to set the sandbox run-time limit, you should switch to `AP_FLOW_TIMEOUT_SECONDS`
## 0.28.0
### What has changed?
* **Project Members:**
* The `EXTERNAL_CUSTOMER` role has been deprecated and replaced with the `OPERATOR` role. Please check the permissions page for more details.
* All pending invitations will be removed.
* The User Invitation entity has been introduced to send invitations. You can still use the Project Member API to add roles for a user, but the user must already exist. To send an email invitation, use the User Invitation entity; a project member record is created once the user accepts and registers an account.
* **Authentication:**
* The `SIGN_UP_ENABLED` environment variable, which allowed multiple users to sign up for different platforms/projects, has been removed. It has been replaced with inviting users to the same platform/project. All old users should continue to work normally.
### When is action necessary?
* **Project Members:**
If you use the embedding SDK or the create project member API with the `EXTERNAL_CUSTOMER` role, you should start using the `OPERATOR` role instead.
* **Authentication:**
Multiple platforms/projects are no longer supported in the community edition. Technically the functionality still exists, but you would have to work around the new authentication system via the API. If you have already created the users/platforms, they should continue to work, and no action is required.
# Environment Variables
Source: https://www.activepieces.com/docs/install/configuration/environment-variables
To configure Activepieces, you will need to set some environment variables. There is a file called `.env` at the root directory of our main repo.
When you execute the [tools/deploy.sh](https://github.com/activepieces/activepieces/blob/main/tools/deploy.sh) script in the Docker installation tutorial,
it will produce these values.
## Environment Variables
| Variable | Description | Default Value | Example |
| --------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------- | ---------------------------------------------------------------------- |
| `AP_CONFIG_PATH` | Optional parameter for specifying the path to store PGLite database and local settings. | `~/.activepieces` | |
| `AP_CLOUD_AUTH_ENABLED` | Turn off the utilization of Activepieces oauth2 applications | `false` | |
| `AP_DB_TYPE` | The type of database to use. `POSTGRES` for external PostgreSQL, `PGLITE` for embedded database. **Note:** `SQLITE3` is deprecated and will be automatically migrated to `PGLITE`. | `POSTGRES` | |
| `AP_EXECUTION_MODE` | You can choose between 'SANDBOX\_PROCESS', 'UNSANDBOXED', 'SANDBOX\_CODE\_ONLY', 'SANDBOX\_CODE\_AND\_PROCESS' as possible values. If you decide to change this, make sure to carefully read [https://www.activepieces.com/docs/install/architecture/sandboxing](https://www.activepieces.com/docs/install/architecture/sandboxing) | `UNSANDBOXED` | |
| `AP_ENCRYPTION_KEY` | ❗️ Encryption key used for connections is a 32-character (16 bytes) hexadecimal key. You can generate one using the following command: `openssl rand -hex 16`. | None | |
| `AP_EXECUTION_DATA_RETENTION_DAYS` | The number of days to retain execution data, logs and events. | `30` | |
| `AP_FRONTEND_URL` | ❗️ Url that will be used to specify redirect url and webhook url. | | |
| `AP_INTERNAL_URL` | (BETA) Used to specify the SSO authentication URL. | None | [https://demo.activepieces.com/api](https://demo.activepieces.com/api) |
| `AP_JWT_SECRET` | ❗️ Secret used for signing JWT tokens; a 32-character hexadecimal key. You can generate one using the following command: `openssl rand -hex 32`. | None | |
| `AP_QUEUE_UI_ENABLED` | Enable the queue UI (only works with redis) | `true` | |
| `AP_QUEUE_UI_USERNAME` | The username for the queue UI. This is required if `AP_QUEUE_UI_ENABLED` is set to `true`. | None | |
| `AP_QUEUE_UI_PASSWORD` | The password for the queue UI. This is required if `AP_QUEUE_UI_ENABLED` is set to `true`. | None | |
| `AP_REDIS_FAILED_JOB_RETENTION_DAYS` | The number of days to retain failed jobs in Redis. | `30` | |
| `AP_REDIS_FAILED_JOB_RETENTION_MAX_COUNT` | The maximum number of failed jobs to retain in Redis. | `2000` | |
| `AP_TRIGGER_DEFAULT_POLL_INTERVAL` | The interval, in minutes, at which the system polls for new data for pieces with scheduled triggers, such as new Google Contacts. | `5` | |
| `AP_PIECES_CACHE_MAX_ENTRIES` | Maximum number of entries in the in-memory LRU cache for piece metadata. The cache stores piece lists and individual piece metadata to reduce database load; when the limit is reached, least recently used entries are evicted. | `1000` | |
| `AP_PIECES_SOURCE` | `FILE` for local development, `DB` for database. You can find more information about it in the [Setting Piece Source](#setting-piece-source) section. | `CLOUD_AND_DB` | |
| `AP_PIECES_SYNC_MODE` | `NONE` for no metadata syncing, `OFFICIAL_AUTO` for automatic syncing of piece metadata from the cloud | `OFFICIAL_AUTO` | |
| `AP_POSTGRES_DATABASE` | ❗️ The name of the PostgreSQL database | None | |
| `AP_POSTGRES_HOST` | ❗️ The hostname or IP address of the PostgreSQL server | None | |
| `AP_POSTGRES_PASSWORD` | ❗️ The password for the PostgreSQL, you can generate a 32-character hexadecimal key using the following command: `openssl rand -hex 32`. | None | |
| `AP_POSTGRES_PORT` | ❗️ The port number for the PostgreSQL server | None | |
| `AP_POSTGRES_USERNAME` | ❗️ The username for the PostgreSQL user | None | |
| `AP_POSTGRES_USE_SSL` | Use SSL to connect the postgres database | `false` | |
| `AP_POSTGRES_SSL_CA` | Use SSL Certificate to connect to the postgres database | | |
| `AP_POSTGRES_URL` | Alternatively, you can specify only the connection string (e.g postgres\://user:password\@host:5432/database) instead of providing the database, host, port, username, and password. | None | |
| `AP_POSTGRES_POOL_SIZE` | Maximum number of clients the pool should contain for the PostgreSQL database | None | |
| `AP_POSTGRES_IDLE_TIMEOUT_MS` | Sets the idle timeout, in milliseconds, for clients in the PostgreSQL connection pool | `30000` | |
| `AP_REDIS_TYPE` | Where the Redis instance runs: in memory (`MEMORY`), as a dedicated instance (`STANDALONE`), or behind Sentinel (`SENTINEL`) | `STANDALONE` | |
| `AP_REDIS_URL` | If a Redis connection URL is specified, all other Redis properties will be ignored. | None | |
| `AP_REDIS_USER` | ❗️ Username to use when connecting to Redis | None | |
| `AP_REDIS_PASSWORD` | ❗️ Password to use when connecting to Redis | None | |
| `AP_REDIS_HOST` | ❗️ The hostname or IP address of the Redis server | None | |
| `AP_REDIS_PORT` | ❗️ The port number for the Redis server | None | |
| `AP_REDIS_DB` | The Redis database index to use | `0` | |
| `AP_REDIS_USE_SSL` | Connect to Redis with SSL | `false` | |
| `AP_REDIS_SSL_CA_FILE` | The path to the CA file for the Redis server. | None | |
| `AP_REDIS_SENTINEL_HOSTS` | If specified, this should be a comma-separated list of `host:port` pairs for Redis Sentinels. Make sure to set `AP_REDIS_TYPE` to `SENTINEL` | None | `sentinel-host-1:26379,sentinel-host-2:26379,sentinel-host-3:26379` |
| `AP_REDIS_SENTINEL_NAME` | The name of the master node monitored by the sentinels. | None | `sentinel-host-1` |
| `AP_REDIS_SENTINEL_ROLE` | The role to connect to, either `master` or `slave`. | None | `master` |
| `AP_TRIGGER_TIMEOUT_SECONDS` | Maximum allowed runtime for a trigger to perform polling in seconds | `60` | |
| `AP_FLOW_TIMEOUT_SECONDS` | Maximum allowed runtime for a flow to run in seconds | `600` | |
| `AP_SANDBOX_MEMORY_LIMIT` | The maximum amount of memory (in kilobytes) that a single sandboxed engine process can use. Each engine process executes at most one execution at a time. This helps prevent runaway memory usage in custom code or pieces. If not set, the default is 1,048,576 KB (1,024 MB). | `1048576` | `1048576` |
| `AP_SANDBOX_PROPAGATED_ENV_VARS` | Environment variables that will be propagated to the sandboxed code. If you are using this for pieces, we strongly suggest keeping everything in the authentication object to ensure it works across Activepieces instances. | None | |
| `AP_SCIM_DEFAULT_PROJECT_ROLE` | The default project role assigned to members when they are added to a project via SCIM group sync. Accepted values: `Admin`, `Editor`, `Viewer`. | `Editor` | `Viewer` |
| `AP_TELEMETRY_ENABLED` | Collect telemetry information. | `true` | |
| `AP_TEMPLATES_SOURCE_URL` | The endpoint queried for flow templates. If you remove it, templates will no longer appear in the UI. | `https://cloud.activepieces.com/api/v1/templates` | |
| `AP_WEBHOOK_TIMEOUT_SECONDS` | The default timeout for webhooks. The maximum allowed is 15 minutes. Please note that Cloudflare limits it to 30 seconds. If you are using a reverse proxy for SSL, make sure it's configured correctly. | `30` | |
| `AP_TRIGGER_FAILURE_THRESHOLD` | The maximum number of consecutive trigger failures before the trigger is disabled. The default of 576 is equivalent to approximately 2 days of polling. | `576` | |
| `AP_PROJECT_RATE_LIMITER_ENABLED` | Enforce rate limits and prevent excessive usage by a single project. | `true` | |
| `AP_DEFAULT_CONCURRENT_JOBS_LIMIT` | The default maximum number of concurrent runs a project can have. Used to enforce rate limits and prevent excessive usage by a single project. Can be overridden per-project in settings. | `5` | |
| `AP_S3_ACCESS_KEY_ID` | The access key ID for your S3-compatible storage service. Not required if `AP_S3_USE_IRSA` is `true`. | None | |
| `AP_S3_SECRET_ACCESS_KEY` | The secret access key for your S3-compatible storage service. Not required if `AP_S3_USE_IRSA` is `true`. | None | |
| `AP_S3_BUCKET` | The name of the S3 bucket to use for file storage. | None | |
| `AP_S3_ENDPOINT` | The endpoint URL for your S3-compatible storage service. Not required if `AWS_ENDPOINT_URL` is set. | None | `https://s3.amazonaws.com` |
| `AP_S3_REGION` | The region where your S3 bucket is located. Not required if `AWS_REGION` is set. | None | `us-east-1` |
| `AP_S3_USE_SIGNED_URLS` | Routes file traffic directly to S3 using pre-signed URLs, bypassing the API server. The bucket should remain private; signed URLs provide temporary authenticated access. | None | |
| `AP_S3_USE_IRSA` | Use IAM Role for Service Accounts (IRSA) to connect to S3. When `true`, `AP_S3_ACCESS_KEY_ID` and `AP_S3_SECRET_ACCESS_KEY` are not required. | None | `true` |
| `AP_SMTP_HOST` | The host name for the SMTP server that activepieces uses to send emails | `None` | `mail.example.com` |
| `AP_SMTP_PORT` | The port number for the SMTP server that activepieces uses to send emails | `None` | 587 |
| `AP_SMTP_USERNAME` | The user name for the SMTP server that activepieces uses to send emails | `None` | [test@mail.example.com](mailto:test@mail.example.com) |
| `AP_SMTP_PASSWORD` | The password for the SMTP server that activepieces uses to send emails | `None` | secret1234 |
| `AP_SMTP_SENDER_EMAIL` | The email address from which activepieces sends emails. | `None` | [test@mail.example.com](mailto:test@mail.example.com) |
| `AP_SMTP_SENDER_NAME` | The sender name activepieces uses to send emails. | | |
| `AP_MAX_FILE_SIZE_MB` | The maximum allowed file size (in megabytes) for **uploaded files** in steps or triggers. Files larger than this value will be rejected. This does **not** control flow run log size—see `AP_MAX_FLOW_RUN_LOG_SIZE_MB`. | `25` | `10` |
| `AP_MAX_FLOW_RUN_LOG_SIZE_MB` | The maximum allowed size (in megabytes) of the **flow run logs**—this is the total combined size of all inputs and outputs for each step in a single flow run. If logs exceed this size, they will be truncated, which may cause flow execution issues. | `50` | `25` |
| `AP_FLOW_RUN_LOG_SLICE_THRESHOLD_KB` | Step outputs whose size exceeds this threshold (in KB) are offloaded as `FLOW_RUN_LOG_SLICE` files to object storage instead of being inlined in the flow run log. Lowering the value reduces worker memory pressure on runs with large step outputs; raising it reduces the number of object-storage requests per run. | `32` | `32` |
| `AP_FLOW_RUN_LOG_INPUT_TRUNCATE_THRESHOLD_KB` | Step input values whose serialized size exceeds this threshold (in KB) are replaced with a placeholder string (`(truncated, original size )`) before being stored in the flow run log. Source data still lives in the upstream step's output, which is recoverable when sliced. | `2` | `2` |
| `AP_MAX_WEBHOOK_PAYLOAD_SIZE_MB` | Maximum webhook payload size in MB. Payloads exceeding this are rejected with HTTP 413. | `25` | `5` |
| `AP_WEBHOOK_PAYLOAD_INLINE_THRESHOLD_KB` | Webhook payloads below this size (in KB) are stored inline in Redis for fastest processing. Payloads above this are offloaded to file storage to protect Redis memory. | `512` | `1024` |
| `AP_WORKER_CONCURRENCY` | The number of concurrent jobs the worker can process simultaneously. Each concurrent job uses one sandbox instance. | `5` | `10` |
| `AP_FILE_STORAGE_LOCATION` | The location to store files. Possible values are `DB` for storing files in the database or `S3` for storing files in an S3-compatible storage service. | `DB` | |
| `AP_PAUSED_FLOW_TIMEOUT_DAYS` | The maximum allowed pause duration in days for a paused flow, please note it can not exceed `AP_EXECUTION_DATA_RETENTION_DAYS` | `30` | |
| `AP_MAX_RECORDS_PER_TABLE` | The maximum allowed number of records per table | `10000` | `10000` |
| `AP_MAX_FIELDS_PER_TABLE` | The maximum allowed number of fields per table | `100` | `100` |
| `AP_MAX_TABLES_PER_PROJECT` | The maximum allowed number of tables per project | `20` | `20` |
| `AP_MAX_MCPS_PER_PROJECT` | The maximum allowed number of MCPs per project | `20` | `20` |
| `AP_ENABLE_FLOW_ON_PUBLISH` | Whether publishing a new flow version should automatically enable the flow | `true` | `false` |
| `AP_ISSUE_ARCHIVE_DAYS` | Controls the automatic archival of issues in the system. Issues that have not been updated for this many days will be automatically moved to an archived state. | `14` | `1` |
| `AP_LOAD_TRANSLATIONS_FOR_DEV_PIECES` | Load translations for dev pieces (configured via `AP_DEV_PIECES`). When disabled, dev pieces are loaded without translations. This only affects development mode. | `false` | `true` |
| `AP_CONTAINER_TYPE` | Controls which services to run in the Docker container. `APP` starts only the API server, `WORKER` starts only the worker, `WORKER_AND_APP` starts both. | `WORKER_AND_APP` | `APP` |
| `AP_NETWORK_MODE` | Network posture for user code egress. `STRICT` blocks outbound connections from user code (Code steps, pieces, HTTP requests) to private, loopback, link-local, and cloud metadata IPs across every Node egress path (`axios`, `fetch`, `undici`, raw `http`/`net`); in sandboxed execution modes it also applies a kernel-level iptables lockdown. `UNRESTRICTED` disables all guards. Defaults to `UNRESTRICTED` while the hardening stack is being validated. | `UNRESTRICTED` | `STRICT` |
| `AP_SSRF_ALLOW_LIST` | Comma-separated IPs or CIDR ranges that bypass `AP_NETWORK_MODE=STRICT`. Use this to reach specific internal services (databases, sidecars) from flows. Accepts exact IPs (`10.0.0.5`) and CIDR blocks (`10.0.0.0/24`, `fd00::/8`). Only applies when `AP_NETWORK_MODE=STRICT`. | \`\` | `10.0.0.5,10.10.0.0/24` |
| `HTTP_PROXY` / `HTTPS_PROXY` | Standard proxy env vars. When set, the engine routes all outbound HTTP(S) through the proxy (via `undici`'s `EnvHttpProxyAgent`). If the proxy listens on loopback, its port is automatically exempted from the SSRF guard so user code can still reach it. | \`\` | `http://127.0.0.1:3128` |
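Putting the network settings above together, a hardened deployment that still needs to reach internal services might look like this sketch (the IP addresses and proxy URL are placeholders for your own infrastructure):

```bash theme={null}
# Enable the SSRF guard for all user-code egress
AP_NETWORK_MODE=STRICT

# Allow a specific internal service and a whole internal subnet (placeholder addresses)
AP_SSRF_ALLOW_LIST=10.0.0.5,10.10.0.0/24

# Optionally route outbound HTTP(S) through a corporate proxy (placeholder URL)
HTTP_PROXY=http://127.0.0.1:3128
HTTPS_PROXY=http://127.0.0.1:3128
```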
The frontend URL is essential for webhooks and app triggers to work. It must be publicly accessible so third parties can send data to it.
### Setting Webhook (Frontend URL):
The default URL is set to the machine's IP address. For proper operation, make sure this address is reachable, or set the `AP_FRONTEND_URL` environment variable.
One possible solution for this is using a service like ngrok ([https://ngrok.com/](https://ngrok.com/)), which can be used to expose the frontend port (4200) to the internet.
### Redis Configuration
Set the `AP_REDIS_URL` environment variable to the connection URL of your Redis server.
Please note that if a Redis connection URL is specified, all other **Redis properties** will be ignored.
If you don't have a single Redis connection URL, you can set the individual connection variables instead:
* `AP_REDIS_USER`: The username to use when connecting to Redis.
* `AP_REDIS_PASSWORD`: The password to use when connecting to Redis.
* `AP_REDIS_HOST`: The hostname or IP address of the Redis server.
* `AP_REDIS_PORT`: The port number for the Redis server.
* `AP_REDIS_DB`: The Redis database index to use.
* `AP_REDIS_USE_SSL`: Connect to Redis with SSL.
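As a sketch, a standalone Redis configuration in your `.env` file might look like this, using the `AP_`-prefixed variable names from the table above (host and credentials are placeholders):

```bash theme={null}
AP_REDIS_TYPE=STANDALONE
AP_REDIS_HOST=redis.internal.example.com   # placeholder hostname
AP_REDIS_PORT=6379
AP_REDIS_USER=activepieces                 # placeholder username
AP_REDIS_PASSWORD=change-me                # placeholder password
AP_REDIS_DB=0
AP_REDIS_USE_SSL=false
```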
If you are using **Redis Sentinel**, you can set the following environment variables:
* `AP_REDIS_TYPE`: Set this to `SENTINEL`.
* `AP_REDIS_SENTINEL_HOSTS`: A comma-separated list of `host:port` pairs for Redis Sentinels.
* `AP_REDIS_SENTINEL_NAME`: The name of the master node monitored by the sentinels.
* `AP_REDIS_SENTINEL_ROLE`: The role to connect to, either `master` or `slave`.
* `AP_REDIS_PASSWORD`: The password to use when connecting to Redis.
* `AP_REDIS_USE_SSL`: Connect to Redis with SSL.
* `AP_REDIS_SSL_CA_FILE`: The path to the CA file for the Redis server.
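A Sentinel setup might therefore be sketched like this (the sentinel hosts come from the table's example; the master name and password are placeholders):

```bash theme={null}
AP_REDIS_TYPE=SENTINEL
AP_REDIS_SENTINEL_HOSTS=sentinel-host-1:26379,sentinel-host-2:26379,sentinel-host-3:26379
AP_REDIS_SENTINEL_NAME=mymaster            # placeholder master name
AP_REDIS_SENTINEL_ROLE=master
AP_REDIS_PASSWORD=change-me                # placeholder password
AP_REDIS_USE_SSL=false
```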
### SMTP Configuration
SMTP can be configured both from the platform admin screen and through environment variables. The environment variables are only used if no email configuration has been entered in the platform admin screen.
Activepieces will only use the configuration from the environment variables if `AP_SMTP_HOST`, `AP_SMTP_PORT`, `AP_SMTP_USERNAME` and `AP_SMTP_PASSWORD` all have a value set. TLS is supported.
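Putting it together, a minimal SMTP configuration via environment variables might look like this sketch (all values are placeholders; remember that the host, port, username, and password variables must all be set for this configuration to take effect):

```bash theme={null}
AP_SMTP_HOST=mail.example.com              # placeholder SMTP host
AP_SMTP_PORT=587
AP_SMTP_USERNAME=test@mail.example.com     # placeholder username
AP_SMTP_PASSWORD=secret1234                # placeholder password
AP_SMTP_SENDER_EMAIL=test@mail.example.com # placeholder sender address
AP_SMTP_SENDER_NAME=Activepieces           # placeholder sender name
```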
# Hardware Requirements
Source: https://www.activepieces.com/docs/install/configuration/hardware
Specifications for hosting Activepieces
For more information about the architecture, please visit our [architecture](../architecture/overview) page.
### Technical Specifications
Activepieces is designed to be memory-intensive rather than CPU-intensive. A modest instance will suffice for most scenarios, but requirements can vary based on specific use cases.
| Component | Memory (RAM) | CPU Cores | Disk Space | Notes |
| ---------- | ------------ | --------- | ---------- | ---------------------------------- |
| PostgreSQL | 1 GB | 1 | — | |
| Redis | 1 GB | 1 | — | |
| App | 2 GB | 1 | 30 GB | API server, UI, webhook routing |
| Worker | 1 GB | 0.5 | — | Per replica; add replicas to scale |
The above recommendations are designed to meet the needs of the majority of use cases.
## Scaling Factors
### Redis
Redis requires minimal scaling as it primarily stores jobs during processing. Activepieces leverages BullMQ, capable of handling a substantial number of jobs per second.
### PostgreSQL
**Scaling Tip:** Since files are stored in the database, you can alleviate the load by configuring S3 storage for file management.
PostgreSQL is typically not the system's bottleneck.
### App Container
**Scaling Tip:** The App container is stateless, allowing for seamless horizontal scaling.
### Worker Container
**Scaling Tip:** Workers are stateless. Add more worker replicas to increase concurrent flow execution capacity. The default Docker Compose setup starts 5 replicas.
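As a sketch, scaling workers in Docker Compose might look like the following (the service name and replica count are illustrative, not a definitive setup):

```yaml theme={null}
services:
  activepieces-worker:
    image: activepieces/activepieces:latest
    environment:
      - AP_CONTAINER_TYPE=WORKER
    deploy:
      replicas: 5   # add replicas to increase concurrent flow execution capacity
```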
## Expected Performance
Activepieces ensures no request is lost; all requests are queued. In the event of a spike, requests will be processed later, which is acceptable as most flows are asynchronous, with synchronous flows being prioritized.
Exact performance is hard to predict because flows vary widely, but flow execution itself adds little overhead: steps run as plain JavaScript.
(Note: This applies to `SANDBOX_CODE_ONLY` and `UNSANDBOXED` execution modes, which are recommended and used in self-hosted setups.)
You can anticipate handling over **20 million executions** monthly with this setup.
# Deployment Checklist
Source: https://www.activepieces.com/docs/install/configuration/overview
Checklist to follow after deploying Activepieces
This tutorial assumes you have already followed the quick start guide using one of the installation methods listed in [Install Overview](../overview).
In this section, we will go through the checklist after using one of the installation methods and ensure that your deployment is production-ready.
You should decide on the sandboxing mode for your deployment based on your use case and whether it is multi-tenant or not. Here is a simplified way to decide:
**Friendly Tip #1**: For multi-tenant setups, use V8/Code Sandboxing.
It is secure and does not require privileged Docker access in Kubernetes.
Privileged Docker is usually not allowed to prevent root escalation threats.
**Friendly Tip #2**: For single-tenant setups, use No Sandboxing. It is faster and does not require privileged Docker access.
More Information at [Sandboxing](../architecture/sandboxing)
For licensing inquiries regarding the self-hosted enterprise edition, please reach out to `sales@activepieces.com`, as the code and Docker image are not covered by the MIT license.
You can request a trial key from within the app or in the cloud by filling out the form. Alternatively, you can contact sales at [https://www.activepieces.com/sales](https://www.activepieces.com/sales). Please know that when your trial runs out, all enterprise [features](https://www.activepieces.com/pricing) will be shut down, meaning any user other than the platform admin will be deactivated, and your private pieces will be deleted, which could cause flows that use them to fail.
Before version 0.73.0, you cannot switch from CE to EE directly. We suggest upgrading to 0.73.0 on the same edition first, then switching `AP_EDITION`.
Enterprise edition must use `PostgreSQL` as the database backend and `Redis` as the Queue System.
## Installation
1. Set the `AP_EDITION` environment variable to `ee`.
2. Set `AP_EXECUTION_MODE` to anything other than `UNSANDBOXED`; see the sandboxing section above.
3. Once your instance is up, activate the license key by going to **Platform Admin -> Setup -> License Keys**.
Setting up HTTPS is highly recommended because many services require webhook URLs to be secure (HTTPS). This helps prevent potential errors.
To set up SSL, you can use any reverse proxy. For a step-by-step guide, check out our example using [Nginx](../guides/setup-ssl).
If you encounter any issues, check out our [Troubleshooting](../troubleshooting/websocket-issues) guide.
# Telemetry
Source: https://www.activepieces.com/docs/install/configuration/telemetry
# Why Does Activepieces Need Data?
As a self-hosted product, gathering usage metrics and insights can be difficult for us. However, these analytics are essential in helping us understand key behaviors and delivering a higher quality experience that meets your needs.
To ensure we can continue to improve our product, we have decided to track certain basic behaviors and metrics that are vital for understanding the usage of Activepieces.
We have implemented a minimal tracking plan and provide a detailed list of the metrics collected in a separate section.
# What Does Activepieces Collect?
We value transparency in data collection and assure you that we do not collect any personal information. The following events are currently being collected:
[Exact Code](https://github.com/activepieces/activepieces/blob/main/packages/shared/src/lib/common/telemetry.ts)
# Opting out?
To opt out, set the environment variable `AP_TELEMETRY_ENABLED=false`
# Rollback Guide
Source: https://www.activepieces.com/docs/install/guides/rollback
How to rollback Activepieces to a previous version
## Overview
Activepieces ships a rollback command that reverses database migrations when you need to downgrade to a previous version. Most releases are fully rollback-safe — the release notes will let you know if one isn't.
## Backups
For most upgrades you won't need a backup, but if you want to be extra safe:
### PostgreSQL
```bash theme={null}
pg_dump -Fc $DATABASE_URL > backup-pre-upgrade.dump
```
### PGLite
Copy the `pglite` folder inside your configured `AP_CONFIG_PATH`:
```bash theme={null}
cp -r /path/to/config/pglite /path/to/config/pglite-backup
```
## Release Notes
Releases that include non-reversible database changes will have a note at the bottom of the release notes mentioning which migrations are affected. If you don't see a note, the release is rollback-safe.
## Rolling Back
The rollback command runs against the **current (newer) image** since it has the migration reversal logic. After rolling back the database, you swap to the older image.
### Step 1: Stop Activepieces
```bash theme={null}
docker compose down
```
### Step 2: Run the rollback command
Replace `0.78.0` with your current version and `0.77.0` with the version you want to go back to:
```bash theme={null}
docker run --rm --env-file .env --entrypoint npm \
activepieces/activepieces:0.78.0 \
run rollback -- --to 0.77.0
```
If the release has breaking migrations, the command will ask you to confirm with `--force`:
```bash theme={null}
docker run --rm --env-file .env --entrypoint npm \
activepieces/activepieces:0.78.0 \
run rollback -- --to 0.77.0 --force
```
### Step 3: Switch to the older image
Update your `docker-compose.yml`:
```yaml theme={null}
services:
activepieces:
image: activepieces/activepieces:0.77.0
```
### Step 4: Start Activepieces
```bash theme={null}
docker compose up -d
```
## Restoring From Backup
If you took a backup and prefer to restore from it:
```bash theme={null}
dropdb activepieces
createdb activepieces
pg_restore -d activepieces backup-pre-upgrade.dump
```
Then switch your `docker-compose.yml` to the previous image version and start Activepieces.
## For Contributors
Every new database migration must:
1. **Implement `Migration`** instead of `MigrationInterface`
2. **Set `breaking`** to `true` (destructive changes) or `false` (additive only)
3. **Set `release`** to the target release version (e.g., `'0.78.0'`)
4. **Implement `down()`** with working rollback queries
Example:
```typescript theme={null}
import { Migration } from '../../migration'
import { QueryRunner } from 'typeorm'

export class AddNewColumn1710000000000 implements Migration {
  name = 'AddNewColumn1710000000000'
  breaking = false
  release = '0.78.0'

  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "project" ADD COLUMN "description" text`)
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "project" DROP COLUMN "description"`)
  }
}
```
These requirements are enforced by CI — PRs with migrations that don't meet them will fail checks.
# How to Separate Workers
Source: https://www.activepieces.com/docs/install/guides/separate-workers
Benefits of separating workers from the main application (APP):
* **Availability**: The application remains lightweight, allowing workers to be scaled independently.
* **Security**: Workers lack direct access to Redis and the database, minimizing impact in case of a security breach.
To create a worker token, use the local CLI to generate a JWT signed with the same `AP_JWT_SECRET` used by the app server. Follow these steps:
1. Open your terminal and navigate to the root of the repository.
2. Run the command: `npm run workers token`.
3. When prompted, enter the JWT secret (this should be the same as the `AP_JWT_SECRET` used for the app server).
4. The generated token will be displayed in your terminal, copy it and use it in the next step.
Define the following environment variables in the `.env` file on the worker machine:
* Set `AP_CONTAINER_TYPE` to `WORKER`
* Specify `AP_FRONTEND_URL`
* Provide `AP_WORKER_TOKEN`
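For instance, the worker machine's `.env` might contain the following sketch (the URL and token are placeholders; use the token generated in the previous step):

```bash theme={null}
AP_CONTAINER_TYPE=WORKER
AP_FRONTEND_URL=https://automation.example.com   # placeholder URL
AP_WORKER_TOKEN=<token-from-previous-step>       # placeholder token
```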
Configure a persistent volume for the worker to cache flows and pieces. This is important because the first, uncached execution of pieces and flows is very slow; a persistent volume significantly improves execution speed.
Add the following volume mapping to your docker configuration:
```yaml theme={null}
volumes:
  - <volume-or-host-path>:/usr/src/app/cache
```
Note: Attach one volume per worker; the cache cannot be shared across multiple workers.
Launch the worker machine and supply it with the generated token.
Verify that the workers are visible in the Platform Admin Console under Infra -> Workers.
On the APP machine, set `AP_CONTAINER_TYPE` to `APP`.
# How to Setup App Webhooks
Source: https://www.activepieces.com/docs/install/guides/setup-app-webhooks
Certain apps like Slack and Square only support one webhook per OAuth2 app. This means that manual configuration is required in their developer portal, and it cannot be automated.
## Slack
**Configure Webhook Secret**
1. Visit the "Basic Information" section of your Slack OAuth settings.
2. Copy the "Signing Secret" and save it.
3. Set the following environment variable in your activepieces environment:
```
AP_APP_WEBHOOK_SECRETS={"@activepieces/piece-slack": {"webhookSecret": "SIGNING_SECRET"}}
```
4. Restart your application instance.
**Configure Webhook URL**
1. Go to the "Event Subscription" settings in the Slack OAuth2 developer platform.
2. The URL format should be: `https://YOUR_AP_INSTANCE/api/v1/app-events/slack`.
3. When connecting to Slack, use your OAuth2 credentials or update the OAuth2 app details from the admin console (in platform plans).
4. Add the following events to the app:
* `message.channels`
* `reaction_added`
* `message.im`
* `message.groups`
* `message.mpim`
* `app_mention`
# How to Setup OpenTelemetry
Source: https://www.activepieces.com/docs/install/guides/setup-opentelemetry
Configure OpenTelemetry for observability and tracing
Activepieces supports both standard OpenTelemetry environment variables and vendor-specific configuration for observability and tracing.
## Environment Variables
| Variable | Description | Default Value | Example |
| ----------------------------- | ----------------------------------------------------------- | ------------- | --------------------------------------- |
| `AP_OTEL_ENABLED` | Enable OpenTelemetry tracing | `false` | `true` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP exporter endpoint URL | None | `https://your-collector:4317/v1/traces` |
| `OTEL_EXPORTER_OTLP_HEADERS` | Headers for OTLP exporter (comma-separated key=value pairs) | None | `Authorization=Bearer token` |
Both `AP_OTEL_ENABLED` and `OTEL_EXPORTER_OTLP_ENDPOINT` must be set for OpenTelemetry to be enabled.
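A minimal tracing setup might therefore look like this sketch, reusing the example values from the table above (the collector endpoint and token are placeholders):

```bash theme={null}
AP_OTEL_ENABLED=true
OTEL_EXPORTER_OTLP_ENDPOINT=https://your-collector:4317/v1/traces   # placeholder endpoint
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer token               # placeholder header
```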
# How to Setup S3
Source: https://www.activepieces.com/docs/install/guides/setup-s3
Configure S3-compatible storage for files and run logs
Run logs and files are stored in the database by default, which is enough for most cases. You can switch to S3 later without any manual migration: after switching, expired files in the database are deleted and everything new is stored in S3.
## Environment Variables
| Variable | Description | Default Value | Example |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- | -------------------------- |
| `AP_FILE_STORAGE_LOCATION` | The location to store files. Set to `S3` for S3 storage. | `DB` | `S3` |
| `AP_S3_ACCESS_KEY_ID` | The access key ID for your S3-compatible storage service. Not required if `AP_S3_USE_IRSA` is `true`. | None | |
| `AP_S3_SECRET_ACCESS_KEY` | The secret access key for your S3-compatible storage service. Not required if `AP_S3_USE_IRSA` is `true`. | None | |
| `AP_S3_BUCKET` | The name of the S3 bucket to use for file storage. | None | |
| `AP_S3_ENDPOINT` | The endpoint URL for your S3-compatible storage service. Not required if `AWS_ENDPOINT_URL` is set. | None | `https://s3.amazonaws.com` |
| `AP_S3_REGION` | The region where your S3 bucket is located. Not required if `AWS_REGION` is set. | None | `us-east-1` |
| `AP_S3_USE_SIGNED_URLS` | Routes file traffic directly to S3 using pre-signed URLs, bypassing the API server. The bucket should remain private; signed URLs provide temporary authenticated access. | None | `true` |
| `AP_S3_USE_IRSA` | Use IAM Role for Service Accounts (IRSA) to connect to S3. When `true`, `AP_S3_ACCESS_KEY_ID` and `AP_S3_SECRET_ACCESS_KEY` are not required. | None | `true` |
| `AP_MAX_FILE_SIZE_MB` | The maximum allowed file size in megabytes for uploads including logs of flow runs. | `10` | `10` |
**Friendly Tip #1**: If the S3 bucket supports signed URLs but needs to be accessible over a public network, you can set `AP_S3_USE_SIGNED_URLS` to `true` to route traffic directly to S3 and reduce heavy traffic on your API server.
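As a sketch, an S3 configuration using static credentials might look like this (the bucket name, keys, endpoint, and region are placeholders for your own values):

```bash theme={null}
AP_FILE_STORAGE_LOCATION=S3
AP_S3_BUCKET=activepieces-files            # placeholder bucket name
AP_S3_ACCESS_KEY_ID=change-me              # placeholder access key
AP_S3_SECRET_ACCESS_KEY=change-me          # placeholder secret key
AP_S3_ENDPOINT=https://s3.amazonaws.com
AP_S3_REGION=us-east-1
AP_S3_USE_SIGNED_URLS=true
```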
# Setup HTTPS
Source: https://www.activepieces.com/docs/install/guides/setup-ssl
To enable SSL, you can use a reverse proxy. In this case, we will use Nginx as the reverse proxy.
## Install Nginx
```bash theme={null}
sudo apt-get install nginx
```
## Create Certificate
To proceed with this documentation, it is assumed that you already have a certificate for your domain.
You have the option to use Cloudflare or generate a certificate using Let's Encrypt or Certbot.
Add the certificate to the following paths: `/etc/key.pem` and `/etc/cert.pem`
## Setup Nginx
```bash theme={null}
sudo nano /etc/nginx/sites-available/default
```
```bash theme={null}
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
ssl_certificate /etc/cert.pem;
ssl_certificate_key /etc/key.pem;
location / {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
```
## Restart Nginx
```bash theme={null}
sudo systemctl restart nginx
```
## Test
Visit your domain and you should see your application running with SSL.
# AWS (Pulumi)
Source: https://www.activepieces.com/docs/install/options/aws
Get Activepieces up & running on AWS with Pulumi for IaC
# Infrastructure-as-Code (IaC) with Pulumi
Pulumi is an IaC solution akin to Terraform or CloudFormation that lets you deploy & manage your infrastructure using popular programming languages, e.g. TypeScript (which we'll use), C#, Go, etc.
## Deploy from Pulumi Cloud
If you're already familiar with Pulumi Cloud and have [integrated their services with your AWS account](https://www.pulumi.com/docs/pulumi-cloud/deployments/oidc/aws/#configuring-openid-connect-for-aws), you can use the button below to deploy Activepieces in a few clicks.
The template will deploy the latest Activepieces image that's available on [Docker Hub](https://hub.docker.com/r/activepieces/activepieces).
[Deploy with Pulumi Cloud](https://app.pulumi.com/new?template=https://github.com/activepieces/activepieces/tree/main/deploy/pulumi)
## Deploy from a local environment
Or, if you're currently using an S3 bucket to maintain your Pulumi state, you can scaffold and deploy Activepieces directly from Docker Hub using the template below in just a few commands:
```bash theme={null}
$ mkdir deploy-activepieces && cd deploy-activepieces
$ pulumi new https://github.com/activepieces/activepieces/tree/main/deploy/pulumi
$ pulumi up
```
## What's Deployed?
The template is set up to be somewhat flexible, supporting either a development or a more production-ready configuration.
The configuration options that are presented during stack configuration will allow you to optionally add any or all of:
* PostgreSQL RDS instance. Opting out of this will use a local SQLite3 Db.
* Single node Redis 7 cluster. Opting out of this will mean using an in-memory cache.
* Fully qualified domain name with SSL. Note that the hosted zone must already be configured in Route 53.
Opting out of this will mean relying on the application load balancer's URL over standard HTTP to access your Activepieces deployment.
For a full list of all the currently available configuration options, take a look at the [Activepieces Pulumi template file on GitHub](https://github.com/activepieces/activepieces/tree/main/deploy/pulumi/Pulumi.yaml).
## Setting up Pulumi for the first time
If you're new to Pulumi then read on to get your local dev environment setup to be able to deploy Activepieces.
### Prerequisites
1. Make sure you have [Node](https://nodejs.org/en/download) and [Pulumi](https://www.pulumi.com/docs/install/) installed.
2. [Install and configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
3. [Install and configure Pulumi](https://www.pulumi.com/docs/clouds/aws/get-started/begin/).
4. Create an S3 bucket which we'll use to maintain the state of all the various services we'll provision for our Activepieces deployment:
```bash theme={null}
aws s3api create-bucket --bucket pulumi-state --region us-east-1
```
Note: [Pulumi supports two different state management approaches](https://www.pulumi.com/docs/concepts/state/#deciding-on-a-state-backend).
If you'd rather use Pulumi Cloud instead of S3 then feel free to skip this step and setup an account with Pulumi.
5. Login to the Pulumi backend:
```bash theme={null}
pulumi login s3://pulumi-state?region=us-east-1
```
6. Next we're going to use the Activepieces Pulumi deploy template to create a new project, a stack in that project and then kick off the deploy:
```bash theme={null}
$ mkdir deploy-activepieces && cd deploy-activepieces
$ pulumi new https://github.com/activepieces/activepieces/tree/main/deploy/pulumi
```
This step will prompt you to create your stack and to populate a series of config options, such as whether or not to provision a PostgreSQL RDS instance or use SQLite3.
Note: When choosing a stack name, use something descriptive like `activepieces-dev`, `ap-prod`, etc.
This solution uses the stack name as a prefix for every AWS service created,
e.g. your VPC will be named `<stack-name>-vpc`.
7. Nothing left to do now but kick off the deploy:
```bash theme={null}
pulumi up
```
8. Now choose `yes` when prompted. Once the deployment has finished, you should see a bunch of Pulumi output variables that look like the following:
```json theme={null}
_: {
activePiecesUrl: "http://<load-balancer-dns>.us-east-1.elb.amazonaws.com"
activepiecesEnv: [
. . . .
]
}
```
The config value of interest here is the `activePiecesUrl` as that is the URL for our Activepieces deployment.
If you chose to add a fully qualified domain during your stack configuration, that will be displayed here.
Otherwise you'll see the URL to the application load balancer. And that's it.
Congratulations! You have successfully deployed Activepieces to AWS.
## Deploy a locally built Activepieces Docker image
To deploy a locally built image instead of using the official Docker Hub image, read on.
1. Clone the Activepieces repo locally:
```bash theme={null}
git clone https://github.com/activepieces/activepieces
```
2. Move into the `deploy/pulumi` folder & install the necessary npm packages:
```bash theme={null}
cd deploy/pulumi && npm i
```
3. This folder already has two Pulumi stack configuration files ready to go: `Pulumi.activepieces-dev.yaml` and `Pulumi.activepieces-prod.yaml`.
These files already contain all the configurations we need to create our environments. Feel free to have a look & edit the values as you see fit.
Let's continue by creating a development stack that uses the existing `Pulumi.activepieces-dev.yaml` file and kicking off the deploy.
```bash theme={null}
pulumi stack init activepieces-dev && pulumi up
```
Note: Using `activepieces-dev` or `activepieces-prod` for the `pulumi stack init` command is required here as the stack name needs to match the existing stack file name in the folder.
4. Before you continue, you should see a preview in the terminal of all the services that will be provisioned.
Once you choose `yes`, a new image will be built based on the `Dockerfile` in the root of the solution (make sure Docker Desktop is running) and then pushed up to a new ECR, along with provisioning all the other AWS services for the stack.
Congratulations! You have successfully deployed Activepieces into AWS using a locally built Docker image.
## Customising the deploy
All of the current configuration options, as well as the low-level details associated with the provisioned services are fully customisable, as you would expect from any IaC.
For example, if you'd like to have three availability zones instead of two for the VPC, use an older version of Redis or add some additional security group rules for PostgreSQL, you can update all of these and more in the `index.ts` file inside the `deploy` folder.
Or maybe you'd still like to deploy the official Activepieces Docker image instead of a local build, but would like to change some of the services. Simply set the `deployLocalBuild` config option in the stack file to `false` and make whatever changes you'd like to the `index.ts` file.
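For example, the `deployLocalBuild` option can also be flipped from the Pulumi CLI instead of editing the stack file by hand (this assumes the relevant stack is currently selected):

```bash theme={null}
# Writes deployLocalBuild: "false" into the active stack's Pulumi.<stack>.yaml
pulumi config set deployLocalBuild false
```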
Checking out the [Pulumi docs](https://www.pulumi.com/docs/clouds/aws/) before doing so is highly encouraged.
# Docker
Source: https://www.activepieces.com/docs/install/options/docker
Single docker image deployment with PGLite and Memory Queue
This setup is only meant for personal use or testing. It runs on [PGLite](https://pglite.dev/) (embedded PostgreSQL) and an in-memory Redis queue, which supports only a single instance on a single machine. For production or multi-instance setups, you must use Docker Compose with PostgreSQL and Redis. You will not be able to upgrade from Community Edition to Enterprise out of the box, as Enterprise does not support PGLite.
To get up and running quickly with Activepieces, we will use the Activepieces Docker image. Follow these steps:
## Prerequisites
You need to have [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [Docker](https://docs.docker.com/get-docker/) installed on your machine in order to set up Activepieces via Docker Compose.
## Install
### Pull and Run the Docker Image
Pull the Activepieces Docker image and run the container with the following command:
```bash theme={null}
docker run -d -p 8080:80 -v ~/.activepieces:/root/.activepieces -e AP_REDIS_TYPE=MEMORY -e AP_DB_TYPE=PGLITE -e AP_FRONTEND_URL="http://localhost:8080" activepieces/activepieces:latest
```
### Configure Webhook URL (Important for Triggers, Optional If you have public IP)
**Note:** By default, Activepieces will try to use your public IP for webhooks. If you are self-hosting on a personal machine, you must configure the frontend URL so that the webhook is accessible from the internet.
**Optional:** The easiest way to expose your webhook URL on localhost is by using a service like ngrok. However, it is not suitable for production use.
1. Install ngrok
2. Run the following command:
```bash theme={null}
ngrok http 8080
```
3. Replace the `AP_FRONTEND_URL` environment variable in the `docker run` command above with the ngrok forwarding URL.
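For example, if ngrok prints a forwarding URL such as `https://abc123.ngrok-free.app` (a placeholder; yours will differ), restart the container with that value:

```bash theme={null}
docker run -d -p 8080:80 \
  -v ~/.activepieces:/root/.activepieces \
  -e AP_REDIS_TYPE=MEMORY \
  -e AP_DB_TYPE=PGLITE \
  -e AP_FRONTEND_URL="https://abc123.ngrok-free.app" \
  activepieces/activepieces:latest
```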
## Upgrade
Please follow the steps below:
### Step 1: Back Up Your Data (Recommended)
Before proceeding with the upgrade, it is always a good practice to back up your Activepieces data to avoid any potential data loss during the update process.
1. **Stop the Current Activepieces Container:** If your Activepieces container is running, stop it using the following command:
```bash theme={null}
docker stop activepieces_container_name
```
2. **Backup Activepieces Data Directory:** By default, Activepieces data is stored in the `~/.activepieces` directory on your host machine. Create a backup of this directory to a safe location using the following command:
```bash theme={null}
cp -r ~/.activepieces ~/.activepieces_backup
```
### Step 2: Update the Docker Image
1. **Pull the Latest Activepieces Docker Image:** Run the following command to pull the latest Activepieces Docker image from Docker Hub:
```bash theme={null}
docker pull activepieces/activepieces:latest
```
### Step 3: Remove the Existing Activepieces Container
1. **Stop and Remove the Current Activepieces Container:** If your Activepieces container is running, stop and remove it using the following commands:
```bash theme={null}
docker stop activepieces_container_name
docker rm activepieces_container_name
```
### Step 4: Run the Updated Activepieces Container
Now, run the updated Activepieces container with the latest image using the same command you used during the initial setup. Be sure to replace `activepieces_container_name` with the desired name for your new container.
```bash theme={null}
docker run -d -p 8080:80 -v ~/.activepieces:/root/.activepieces -e AP_REDIS_TYPE=MEMORY -e AP_DB_TYPE=PGLITE -e AP_FRONTEND_URL="http://localhost:8080" --name activepieces_container_name activepieces/activepieces:latest
```
Congratulations! You have successfully upgraded your Activepieces Docker deployment.
# Docker Compose
Source: https://www.activepieces.com/docs/install/options/docker-compose
To get up and running quickly with Activepieces, we will use the Activepieces Docker image. Follow these steps:
## Prerequisites
You need to have [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [Docker](https://docs.docker.com/get-docker/) installed on your machine in order to set up Activepieces via Docker Compose.
## Installing
**1. Clone the Activepieces repository.**
Use the command line to clone the Activepieces repository:
```bash theme={null}
git clone https://github.com/activepieces/activepieces.git
```
**2. Go to the repository folder.**
```bash theme={null}
cd activepieces
```
**3. Generate environment variables**
Run the following command from the command prompt / terminal:
```bash theme={null}
sh tools/deploy.sh
```
If the script does not work, you can rename the `.env.example` file in the root directory to `.env` and fill in the necessary values manually.
**4. Run Activepieces.**
Please note that `docker-compose` (with a dash) is an outdated version of Docker Compose and will not work properly. We strongly recommend downloading and installing Docker Compose V2 from [here](https://docs.docker.com/compose/install/).
```bash theme={null}
docker compose -p activepieces up
```
## Configure Webhook URL (Important for Triggers, Optional If you have public IP)
**Note:** By default, Activepieces will try to use your public IP for webhooks. If you are self-hosting on a personal machine, you must configure the frontend URL so that the webhook is accessible from the internet.
**Optional:** The easiest way to expose your webhook URL on localhost is by using a service like ngrok. However, it is not suitable for production use.
1. Install ngrok
2. Run the following command:
```bash theme={null}
ngrok http 8080
```
3. Replace the `AP_FRONTEND_URL` environment variable in `.env` with the ngrok URL.
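For example, with a placeholder ngrok forwarding URL (substitute your own), the relevant line in `.env` would look like:

```bash theme={null}
# .env
AP_FRONTEND_URL="https://abc123.ngrok-free.app"
```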
When deploying for production, ensure that you update the database credentials and properly set the environment variables.
Review the [configurations guide](/docs/install/configuration/environment-variables) to make any necessary adjustments.
## Upgrading
To upgrade a Docker Compose installation to a new version, open a terminal in the `activepieces` repository directory and follow the steps below.
### Automatic Pull
**1. Run the update script**
```bash theme={null}
sh tools/update.sh
```
### Manually Pull
**1. Pull the new docker compose file**
```bash theme={null}
git pull
```
**2. Pull the new images**
```bash theme={null}
docker compose pull
```
**3. Review changelog for breaking changes**
Please review breaking changes in the [changelog](../configuration/breaking-changes).
**4. Run the updated docker images**
```bash theme={null}
docker compose up -d --remove-orphans
```
Congratulations! You have now successfully updated the version.
## Deleting
The following command is capable of deleting all Docker containers and associated data, and therefore should be used with caution:
```bash theme={null}
sh tools/reset.sh
```
Executing this command removes all Docker containers and the data stored within them. Make sure you understand the consequences before proceeding.
# Easypanel
Source: https://www.activepieces.com/docs/install/options/easypanel
Run Activepieces with Easypanel 1-Click Install
Easypanel is a modern server control panel. If you [run Easypanel](https://easypanel.io/docs) on your server, you can deploy Activepieces with 1 click on it.

## Instructions
1. Create a VM that runs Ubuntu on your cloud provider.
2. Install Easypanel using the instructions from the website.
3. Create a new project.
4. Install Activepieces using the dedicated template.
# Elestio
Source: https://www.activepieces.com/docs/install/options/elestio
Run Activepieces with Elestio 1-Click Install
You can deploy Activepieces on Elestio using one-click deployment. Elestio handles version updates, maintenance, security, backups, and more. Click below to deploy and start using Activepieces.
[Deploy on Elestio](https://elest.io/open-source/activepieces)
# GCP
Source: https://www.activepieces.com/docs/install/options/gcp
This guide covers deploying Activepieces on a GCP VM instance or VM instance group. First, create a VM template.
## Create VM Template
First, choose a machine type (e.g. e2-medium).
After configuring the VM Template, you can proceed to click on "Deploy Container" and specify the following container-specific settings:
* Image: activepieces/activepieces
* Run as a privileged container: true
* Environment Variables:
* `AP_REDIS_TYPE`: MEMORY
* `AP_DB_TYPE`: SQLITE3
* `AP_FRONTEND_URL`: [http://localhost:80](http://localhost:80)
* `AP_EXECUTION_MODE`: SANDBOX\_PROCESS
* Firewall: Allow HTTP traffic (for testing purposes only)
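The settings above can also be expressed as a single `gcloud` command. This is a hedged sketch; the template name and the `http-server` tag are placeholders that assume the default HTTP firewall rule exists in your project:

```bash theme={null}
gcloud compute instance-templates create-with-container activepieces-template \
  --machine-type=e2-medium \
  --container-image=activepieces/activepieces \
  --container-privileged \
  --container-env=AP_REDIS_TYPE=MEMORY,AP_DB_TYPE=SQLITE3,AP_FRONTEND_URL=http://localhost:80,AP_EXECUTION_MODE=SANDBOX_PROCESS \
  --tags=http-server
```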
Once these details are entered, click the "Deploy" button and wait for the container deployment process to complete.
After a successful deployment, you can access the ActivePieces application by visiting the external IP address of the VM on GCP.
## Production Deployment
Please visit the [environment variables](/docs/install/configuration/environment-variables) documentation for more details on how to customize the application.
# Helm
Source: https://www.activepieces.com/docs/install/options/helm
Deploy Activepieces on Kubernetes using Helm
This guide walks you through deploying Activepieces on Kubernetes using the official Helm chart.
## Prerequisites
* Kubernetes cluster (v1.19+)
* Helm 3.x installed
* kubectl configured to access your cluster
## Using External PostgreSQL and Redis
The Helm chart supports using external PostgreSQL and Redis services instead of deploying the Bitnami subcharts.
### Using External PostgreSQL
To use an external PostgreSQL instance:
```yaml theme={null}
postgresql:
  enabled: false # Disable Bitnami PostgreSQL subchart
  host: "your-postgres-host.example.com"
  port: 5432
  useSSL: true # Enable SSL if required
  auth:
    database: "activepieces"
    username: "postgres"
    password: "your-password"
    # Or use external secret reference:
    # externalSecret:
    #   name: "postgresql-credentials"
    #   key: "password"
```
Alternatively, you can use a connection URL:
```yaml theme={null}
postgresql:
  enabled: false
  url: "postgresql://user:password@host:5432/database?sslmode=require"
```
### Using External Redis
To use an external Redis instance:
```yaml theme={null}
redis:
  enabled: false # Disable Bitnami Redis subchart
  host: "your-redis-host.example.com"
  port: 6379
  useSSL: false # Enable SSL if required
  auth:
    enabled: true
    password: "your-password"
    # Or use external secret reference:
    # externalSecret:
    #   name: "redis-credentials"
    #   key: "password"
```
Alternatively, you can use a connection URL:
```yaml theme={null}
redis:
  enabled: false
  url: "redis://:password@host:6379/0"
```
### External Secret References
For better security, you can reference passwords from existing Kubernetes secrets (useful with External Secrets Operator or Sealed Secrets):
```yaml theme={null}
postgresql:
  enabled: false
  host: "your-postgres-host.example.com"
  auth:
    externalSecret:
      name: "postgresql-credentials"
      key: "password"
redis:
  enabled: false
  host: "your-redis-host.example.com"
  auth:
    enabled: true
    externalSecret:
      name: "redis-credentials"
      key: "password"
```
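The referenced secrets must already exist in the release namespace. For a quick test (outside of External Secrets Operator or Sealed Secrets), they can be created directly with `kubectl`; the passwords below are placeholders:

```bash theme={null}
kubectl create secret generic postgresql-credentials \
  --from-literal=password='your-postgres-password'
kubectl create secret generic redis-credentials \
  --from-literal=password='your-redis-password'
```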
## Quick Start
### 1. Clone the Repository
```bash theme={null}
git clone https://github.com/activepieces/activepieces.git
cd activepieces
```
### 2. Install Dependencies
```bash theme={null}
helm dependency update
```
### 3. Create a Values File
Create a `my-values.yaml` file with your configuration. You can use the [example values file](https://github.com/activepieces/activepieces/blob/main/deploy/activepieces-helm/values.yaml) as a reference.
The Helm chart has sensible defaults for required values while leaving optional ones empty, but you should customize the core values for production.
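As a starting point, a minimal `my-values.yaml` might look like the following. This is a sketch only; the exact key names (including where `frontendUrl` lives) are assumptions and should be verified against the example values file linked above:

```yaml theme={null}
activepieces:
  edition: "ce"  # "ce" or "ee"
  # Assumed key name; confirm against the example values file
  frontendUrl: "https://automation.example.com"
```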
### 4. Install Activepieces
```bash theme={null}
helm install activepieces deploy/activepieces-helm -f my-values.yaml
```
### 5. Verify Installation
```bash theme={null}
# Check deployment status
kubectl get pods
kubectl get services
```
## Production Checklist
* [ ] Set `frontendUrl` to your actual domain
* [ ] Set strong passwords for PostgreSQL and Redis (or keep auto-generated)
* [ ] Configure proper ingress with TLS
* [ ] Set appropriate resource limits
* [ ] Configure persistent storage
* [ ] Choose appropriate [execution mode](/docs/install/architecture/sandboxing) for your security requirements
* [ ] Review [environment variables](/docs/install/configuration/environment-variables) for advanced configuration
* [ ] Consider using a [separate workers](/docs/install/guides/separate-workers) setup for better availability and security
## Upgrading
```bash theme={null}
# Update dependencies
helm dependency update
# Upgrade release
helm upgrade activepieces deploy/activepieces-helm -f my-values.yaml
# Check upgrade status
kubectl rollout status deployment/activepieces
```
## Troubleshooting
### Common Issues
1. **Pod won't start**: Check logs with `kubectl logs deployment/activepieces`
2. **Database connection**: Verify PostgreSQL credentials and connectivity
3. **Frontend URL**: Ensure `frontendUrl` is accessible from external sources
4. **Webhooks not working**: Check ingress configuration and DNS resolution
### Useful Commands
```bash theme={null}
# View logs
kubectl logs deployment/activepieces -f
# Port forward for testing
kubectl port-forward svc/activepieces 4200:80 --namespace default
# Get all resources
kubectl get all --namespace default
```
## Editions
Activepieces supports three editions:
* **`ce` (Community Edition)**: Open-source version with all core features (default)
* **`ee` (Enterprise Edition)**: Self-hosted edition with advanced features like SSO, RBAC, and audit logs
* **`cloud`**: For Activepieces Cloud deployments
Set the edition in your values file:
```yaml theme={null}
activepieces:
  edition: "ce" # or "ee" for Enterprise Edition
```
For Enterprise Edition features and licensing, visit [activepieces.com](https://www.activepieces.com/docs/admin-console/overview).
## Environment Variables
For a complete list of configuration options, see the [Environment Variables](/docs/install/configuration/environment-variables) documentation. Most environment variables can be configured through the Helm values file under the `activepieces` section.
## Execution Modes
Understanding execution modes is crucial for security and performance. See the [Sandboxing](/docs/install/architecture/sandboxing) guide to choose the right mode for your deployment.
## Uninstalling
```bash theme={null}
helm uninstall activepieces
# Clean up persistent volumes (optional)
kubectl delete pvc -l app.kubernetes.io/instance=activepieces
```
# Railway
Source: https://www.activepieces.com/docs/install/options/railway
Deploy Activepieces to the cloud in minutes using Railway's one-click template
Railway simplifies your infrastructure stack from servers to observability with a single, scalable, easy-to-use platform. With Railway's one-click deployment, you can get Activepieces up and running in minutes without managing servers, databases, or infrastructure.
## What Gets Deployed
The Railway template deploys Activepieces with the following components:
* **Activepieces Application**: The main Activepieces container running the latest version from [Docker Hub](https://hub.docker.com/r/activepieces/activepieces)
* **PostgreSQL Database**: Managed PostgreSQL database for storing flows, executions, and application data
* **Redis Cache**: Redis instance for job queuing and caching (optional, can use in-memory cache)
* **Automatic SSL**: Railway provides automatic HTTPS with SSL certificates
* **Custom Domain Support**: Configure your own domain through Railway's dashboard
## Prerequisites
Before deploying, ensure you have:
* A [Railway account](https://railway.app/) (free tier available)
* Basic understanding of environment variables (optional, for advanced configuration)
## Quick Start
1. **Click the deploy button** above to open Railway's deployment interface
2. **Configure environment variables for advanced usage** (see [Configuration](#configuration) below)
3. **Deploy** - Railway will automatically provision resources and start your instance
Once deployed, Railway will provide you with a public URL where your Activepieces instance is accessible.
## Configuration
### Environment Variables
Railway allows you to configure Activepieces through environment variables. You can set these in the Railway dashboard under your project's **Variables** tab.
#### Execution Mode
Configure the execution mode for security and performance:
See the [Sandboxing](/docs/install/architecture/sandboxing) documentation for details on each mode.
#### Other Important Variables
* `AP_TELEMETRY_ENABLED`: Enable/disable telemetry (default: `false`)
For a complete list of all available environment variables, see the [Environment Variables](/docs/install/configuration/environment-variables) documentation.
## Custom Domain Setup
Railway supports custom domains with automatic SSL:
1. Go to your Railway project dashboard
2. Navigate to **Settings** → **Networking**
3. Add your custom domain
4. Update `AP_FRONTEND_URL` environment variable to match your custom domain
5. Railway will automatically provision SSL certificates
For more details on SSL configuration, see the [Setup SSL](/docs/install/guides/setup-ssl) guide.
## Production Considerations
Before deploying to production, review these important points:
* [ ] Review [Security Practices](/docs/admin-guide/security/practices) documentation
* [ ] Configure `AP_WORKER_CONCURRENCY` based on your workload and hardware resources
* [ ] Ensure PostgreSQL backups are configured in Railway
* [ ] Consider database scaling options in Railway
## Observability
Railway provides built-in observability features for Activepieces. You can view logs and metrics in the Railway dashboard.
## Upgrading
To upgrade to a new version of Activepieces on Railway:
1. Go to your Railway project dashboard
2. Navigate to **Deployments**
3. Click **Redeploy** on the latest deployment
4. Railway will pull the latest Activepieces image and redeploy
Before upgrading, review the [Breaking Changes](/docs/install/configuration/breaking-changes) documentation to ensure compatibility with your flows and configuration.
## Next Steps
After deploying Activepieces on Railway:
1. **Access your instance** using the Railway-provided URL
2. **Create your first flow** - see [Building Flows](/docs/flows/building-flows)
3. **Configure webhooks** - see [Setup App Webhooks](/docs/install/guides/setup-app-webhooks)
4. **Explore pieces** - browse available integrations in the piece library
## Additional Resources
* [Troubleshooting](/docs/install/troubleshooting/websocket-issues): Troubleshooting guide
* [Configuration Guide](/docs/install/configuration/overview): Comprehensive configuration documentation
* [Environment Variables](/docs/install/configuration/environment-variables): Complete list of configuration options
* [Architecture Overview](/docs/install/architecture/overview): Understand Activepieces architecture
* [Railway Documentation](https://docs.railway.app/): Official Railway platform documentation
# Overview
Source: https://www.activepieces.com/docs/install/overview
Introduction to the different ways to install Activepieces
Activepieces Community Edition can be deployed using **Docker**, **Docker Compose**, and **Kubernetes**.
Community Edition is **free** and **open source**.
You can read the difference between the editions [here](https://www.activepieces.com/pricing).
## Recommended Options
* [**Docker**](./options/docker): Deploy Activepieces as a single Docker container using the PGLite database.
* [**Docker Compose**](./options/docker-compose): Deploy Activepieces with **Redis** and **PostgreSQL** setup.
## Other Options
* [**Helm**](./options/helm): Install on Kubernetes with Helm.
* [**Railway**](./options/railway): 1-Click Install on Railway.
* [**Easypanel**](./options/easypanel): 1-Click Install with the Easypanel template, maintained by the community.
* [**Elestio**](./options/elestio): 1-Click Install on Elestio.
* **AWS**: Install on AWS with Pulumi.
* [**GCP**](./options/gcp): Install on GCP as a VM template.
* [**PikaPods**](https://www.pikapods.com/pods?run=activepieces): Instantly run on PikaPods from \$2.9/month.
* **RepoCloud**: Easily install on RepoCloud using this template, maintained by the community.
* [**Zeabur**](https://zeabur.com/templates/LNTQDF): 1-Click Install on Zeabur.
## Cloud Edition
This is the fastest option.
# Queues Dashboard
Source: https://www.activepieces.com/docs/install/troubleshooting/bullboard
Bull Board is a tool for inspecting scheduling problems and internal flow run issues.

## Setup BullBoard
To enable the Bull Board UI in your self-hosted installation:
1. Define these environment variables:
* `AP_QUEUE_UI_ENABLED`: Set to `true`
* `AP_QUEUE_UI_USERNAME`: Set your desired username
* `AP_QUEUE_UI_PASSWORD`: Set your desired password
2. Access the UI at `/api/ui`
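In a Docker Compose setup, these variables can go straight into your `.env` file (the values below are placeholders; choose your own credentials):

```bash theme={null}
AP_QUEUE_UI_ENABLED=true
AP_QUEUE_UI_USERNAME=admin
AP_QUEUE_UI_PASSWORD=change-me
```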
For cloud installations, please ask your team for access to the internal documentation that explains how to access BullBoard.
## Queue Overview
We have one main queue called `workerJobs` that handles all job types. Each job has a `jobType` field that tells us what it does:
### Low Priority Jobs
#### RENEW\_WEBHOOK
Renews webhooks for pieces whose webhook channels expire, such as Google Sheets.
#### EXECUTE\_POLLING
Checks external services for new data at regular intervals.
### Medium Priority Jobs
#### EXECUTE\_FLOW
Runs flows when they're triggered.
#### EXECUTE\_WEBHOOK
Processes incoming webhook requests that start flow runs.
#### DELAYED\_FLOW
Runs flows that were scheduled for later, like paused flows or delayed executions.
### High Priority Jobs
#### EXECUTE\_PROPERTY
Loads dynamic properties for pieces that need them at runtime.
#### EXECUTE\_EXTRACT\_PIECE\_INFORMATION
Gets information about pieces when they're being installed or set up.
#### EXECUTE\_VALIDATION
Checks that flow settings, inputs, or data are correct before running.
#### EXECUTE\_TRIGGER\_HOOK
Runs special logic before or after triggers fire.
Failed jobs are not normal; they represent executions that failed for unknown reasons and require immediate investigation, as they could indicate system issues.
Delayed jobs represent paused flows scheduled for future execution, upcoming polling job iterations, or jobs being retried after temporary failures. Retried jobs indicate an internal error occurred; they are retried automatically according to the backoff policy.
# Reset Password
Source: https://www.activepieces.com/docs/install/troubleshooting/reset-password
How to reset your password on a self-hosted instance
If you forgot your password on a self-hosted instance, you can reset it using the following steps:
1. **Locate PostgreSQL Docker Container**:
* Use a command like `docker ps` to find the PostgreSQL container.
2. **Access the Container**:
* Use SSH to access the PostgreSQL Docker container.
```bash theme={null}
docker exec -it POSTGRES_CONTAINER_ID /bin/bash
```
3. **Open the PostgreSQL Console**:
* Inside the container, open the PostgreSQL console with the `psql` command.
```bash theme={null}
psql -U postgres
```
4. **Connect to the ActivePieces Database**:
* Connect to the ActivePieces database.
```sql theme={null}
\c activepieces
```
5. **Create a Secure Password**:
* Use a tool like [bcrypt-generator.com](https://bcrypt-generator.com/) to generate a bcrypt hash of your new password, using 10 rounds.
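Alternatively, the hash can be generated from the command line. This is a sketch that assumes the Python `bcrypt` package is installed (it is not part of Activepieces itself):

```bash theme={null}
# Prints a bcrypt hash of YOUR_NEW_PASSWORD with 10 rounds
python3 -c "import bcrypt; print(bcrypt.hashpw(b'YOUR_NEW_PASSWORD', bcrypt.gensalt(rounds=10)).decode())"
```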
6. **Update Your Password**:
* Run the following SQL query within the PostgreSQL console, replacing `HASH_PASSWORD` with the bcrypt hash from the previous step and `YOUR_EMAIL_ADDRESS` with your email.
```sql theme={null}
UPDATE public.user_identity SET password='HASH_PASSWORD' WHERE email='YOUR_EMAIL_ADDRESS';
```
# Truncated Logs
Source: https://www.activepieces.com/docs/install/troubleshooting/truncated-logs
Understanding and resolving truncated flow run logs
## Overview
Flow runs have a maximum log size (default **25 MB**). When a run gets close to this limit, Activepieces tries to keep it under the cap by truncating large step **inputs** — you'll see `(truncated)` in place of the original value.
## What Gets Truncated
Truncation applies to **step inputs only**. Step **outputs are never truncated**, because downstream steps, subflows, and paused/resumed runs need the original output to continue executing correctly. If outputs were dropped, the next step would receive missing data and fail unpredictably.
The engine works like this:
1. If the total run data fits within the limit, nothing is truncated.
2. Otherwise, step input values are replaced with `(truncated)`, starting from the largest, until the run fits.
3. If the run **still** exceeds the limit after all inputs are truncated — meaning step outputs alone are over the cap — the run fails with `LOG_SIZE_EXCEEDED` and an error like:
```
Flow run data size exceeded the maximum allowed size of 25 MB
```
## Why Runs Can Still Fail
If you see this error on a step that downloads or produces a large file (e.g. a multi-megabyte HTTP response, a base64-encoded binary, or a large API payload), the step's **output** is what's pushing the run over the limit. Since outputs cannot be truncated without breaking subsequent steps, the engine has no choice but to fail the run.
## Solution
Increase the flow run log size limit by setting the `AP_MAX_FLOW_RUN_LOG_SIZE_MB` environment variable:
```bash theme={null}
AP_MAX_FLOW_RUN_LOG_SIZE_MB=50
```
For large file handling, prefer passing files between steps using the built-in file storage (e.g. via `Files` / `File` properties) rather than embedding raw bytes in step outputs. This keeps the run log small and avoids the limit entirely.
**Future Improvement:** A planned enhancement will move this limit from per-run to per-step, giving more granular control over how much data each step can retain.
# Websocket Issues
Source: https://www.activepieces.com/docs/install/troubleshooting/websocket-issues
Troubleshoot websocket connection problems
If you're experiencing issues with websocket connections, it's likely due to incorrect proxy configuration. Common symptoms include:
* Test Flow button not working
* Test step in flows not working
* Real-time updates not showing
To resolve these issues:
1. Ensure your reverse proxy is properly configured for websocket connections
2. Check our [Setup HTTPS](/docs/install/guides/setup-ssl) guide for correct configuration examples
3. Some browsers block insecure (`ws://`) websocket connections; set up SSL to resolve this.
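As a reference point, a typical nginx reverse-proxy block with websocket support looks like the following. This is a generic sketch (the upstream port is a placeholder); see the Setup HTTPS guide linked above for the full configuration:

```nginx theme={null}
location / {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    # Forward websocket upgrade headers
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```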
# MCP Server
Source: https://www.activepieces.com/docs/mcp/overview
Connect AI assistants to Activepieces using the Model Context Protocol (MCP)
Activepieces includes a built-in [MCP](https://modelcontextprotocol.io/) server that lets AI assistants build flows, manage tables, test automations, and more — all through natural language.
## Quick Start
### 1. Get Your Server URL
1. Go to **Settings** → **MCP Server**
2. Toggle the server **on**
3. Copy the **Server URL**
### 2. Connect Your Client
Add the URL to your MCP client config. Authentication is handled via OAuth — your client will open a browser to authenticate on first use.
```json theme={null}
{
"mcpServers": {
"activepieces": {
"url": "https://your-instance.com/mcp"
}
}
}
```
| Client | Config Location |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| Cursor | `.cursor/mcp.json` (project) or `~/.cursor/mcp.json` (global) |
| Claude Desktop | `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows) |
| Windsurf | MCP settings in editor preferences |
| Claude.ai | **Organization Settings** → **Connectors** → **Add** → **Custom connector** |
### 3. Start Building
Once connected, ask your AI assistant things like:
* *"Create a flow that sends a Slack message when a new row is added to Google Sheets"*
* *"Check my flow for any issues before publishing"*
* *"Create a Contacts table and add 3 records"*
* *"Show me the last failed run and what went wrong"*
* *"What Slack triggers are available?"*
## Tool Categories
Tools are organized into categories. **Discovery tools** are always available. Other categories can be enabled or disabled per-project in the MCP Server settings.
| Category | Description |
| ------------------ | -------------------------------------------------------------------------------------- |
| Discovery | Read-only tools for exploring flows, pieces, connections, tables, runs, and validation |
| Flow Management | Create, duplicate, rename, publish, and enable/disable flows |
| Flow Building | Add, update, and delete steps and triggers |
| Router & Branching | Add, update, and delete conditional branches |
| Annotations | Manage canvas notes |
| Tables | Full CRUD for tables, fields, and records |
| Testing & Runs | Test flows, inspect results, retry failures |
See the [Tools Reference](/docs/mcp/tools) for the complete catalog with input schemas.
## Security
* **OAuth authentication** — secure, token-based authentication handled automatically by your MCP client
* **Credentials are never exposed** — connection secrets, API keys, and OAuth tokens are never returned by any tool
* **Project-scoped** — all operations are scoped to the authenticated project
* **Sensitive setup** — `ap_setup_guide` returns instructions for the user to configure connections in the UI, rather than handling secrets through MCP
# Tools Reference
Source: https://www.activepieces.com/docs/mcp/tools
Complete catalog of all MCP tools available in Activepieces
## Discovery
These tools are always available (locked) and read-only. They help AI agents understand what's in the project before making changes.
### ap\_list\_flows
List flows in the current project with status, trigger type, and published state.
| Input | Type | Required | Description |
| -------- | ------ | -------- | ------------------------------------------ |
| `limit` | number | No | Max flows to return (default 100, max 500) |
| `status` | string | No | Filter by status: `ENABLED` or `DISABLED` |
| `name` | string | No | Filter by flow name (partial match) |
### ap\_flow\_structure
Get the full structure of a flow: step tree, configuration status, step input values, router branch conditions, and valid insert locations. Shows the actual configured inputs for each step (URL, body, headers, etc.) so the AI can read and edit existing configurations.
| Input | Type | Required | Description |
| -------- | ------ | -------- | ----------- |
| `flowId` | string | Yes | The flow ID |
### ap\_read\_step\_code
Read the full source code, package.json, and input mappings of a CODE step. Returns untruncated content — unlike `ap_flow_structure` which truncates code to 300 characters.
| Input | Type | Required | Description |
| ---------- | ------ | -------- | ------------------------------------------ |
| `flowId` | string | Yes | The flow ID |
| `stepName` | string | Yes | The name of the CODE step (e.g., `step_1`) |
Always call `ap_read_step_code` before modifying a code step: `ap_flow_structure` truncates code for overview purposes, so it is not a reliable basis for edits.
### ap\_validate\_flow
Validate a flow for structural issues without publishing. Checks step validity, template references, and empty branches.
| Input | Type | Required | Description |
| -------- | ------ | -------- | ----------- |
| `flowId` | string | Yes | The flow ID |
### ap\_list\_pieces
List available pieces with their actions and triggers. Required before adding or updating steps.
| Input | Type | Required | Description |
| ----------------- | ------- | -------- | ----------------------- |
| `searchQuery` | string | No | Filter pieces by name |
| `includeActions` | boolean | No | Include action details |
| `includeTriggers` | boolean | No | Include trigger details |
### ap\_get\_piece\_props
Get the detailed input property schema for a specific piece action or trigger. Returns field names, types, required/optional, descriptions, default values, and dropdown options. When auth is required but not provided, automatically lists available connections. Use this before `ap_update_step` or `ap_update_trigger` to know exactly which fields to set.
| Input | Type | Required | Description |
| --------------------- | ------ | -------- | ---------------------------------------------------------------------------------------------- |
| `pieceName` | string | Yes | Piece name (e.g., `@activepieces/piece-slack`) |
| `actionOrTriggerName` | string | Yes | Action or trigger name (e.g., `send_channel_message`) |
| `type` | string | Yes | `action` or `trigger` |
| `auth` | string | No | Connection externalId. When provided, dynamic dropdowns and DYNAMIC sub-fields are resolved. |
| `flowId` | string | No | Flow ID for resolving dependent dropdowns that need step context. |
| `input` | object | No | Known input values for resolving dependent DYNAMIC properties (e.g., `{"body_type": "json"}`). |
### ap\_resolve\_property\_options
Resolve dropdown options for a single piece property. Returns available choices with labels and internal IDs. Use this to discover valid values for dynamic dropdown fields (e.g., Slack channels, Google Sheets, email labels) before configuring a step.
| Input | Type | Required | Description |
| --------------------- | ------ | -------- | ----------------------------------------------------------------------------------------------------- |
| `pieceName` | string | Yes | Piece name (e.g., `@activepieces/piece-slack`) |
| `actionOrTriggerName` | string | Yes | Action or trigger name (e.g., `send_channel_message`) |
| `type` | string | Yes | `action` or `trigger` |
| `propertyName` | string | Yes | The exact property name to resolve (e.g., `channel`) |
| `auth` | string | Yes | Connection externalId — required to fetch options from the user's account |
| `input` | object | No | Values for parent properties that this field depends on (refreshers) |
| `searchValue` | string | No | Search term to filter results for large dropdown lists (e.g., "sales" to find sales-related channels) |
Each option in the response has a `label` (human-readable name) and a `value` (internal ID). Always pass the **value** when configuring a step — never use the label. For example, if the response includes `{label: "general", value: "C1234567890"}`, use `"C1234567890"` as the channel value.
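As a sketch, the resolve-then-configure pattern looks like this. Tool arguments are shown as plain dictionaries; the connection ID, flow ID, and channel value are hypothetical:

```python
# Hypothetical ap_resolve_property_options arguments: look up Slack channels.
resolve_args = {
    "pieceName": "@activepieces/piece-slack",
    "actionOrTriggerName": "send_channel_message",
    "type": "action",
    "propertyName": "channel",
    "auth": "my-slack-connection",   # hypothetical connection externalId
    "searchValue": "general",
}

# Suppose the response included {"label": "general", "value": "C1234567890"}.
# Configure the step with the option's internal value, never its label:
update_step_args = {
    "flowId": "flow_123",            # hypothetical flow ID
    "stepName": "step_1",
    "input": {"channel": "C1234567890"},
}
```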
### ap\_validate\_step\_config
Validate a step configuration before applying it. Returns field-level errors without modifying any flow.
| Input | Type | Required | Description |
| ------------- | ------ | -------- | --------------------------------------------------------------------- |
| `stepType` | string | Yes | `PIECE_ACTION`, `PIECE_TRIGGER`, `CODE`, `LOOP_ON_ITEMS`, or `ROUTER` |
| `pieceName` | string | No | For PIECE types: piece name (short names like `slack` accepted) |
| `actionName` | string | No | For PIECE\_ACTION: action name |
| `triggerName` | string | No | For PIECE\_TRIGGER: trigger name |
| `input` | object | No | For PIECE types: input config to validate |
| `auth` | string | No | For PIECE types: any non-empty string indicates auth is provided |
| `sourceCode` | string | No | For CODE: JavaScript/TypeScript source code |
| `packageJson` | string | No | For CODE: package.json as JSON string |
| `loopItems` | string | No | For LOOP\_ON\_ITEMS: items expression |
| `settings` | object | No | For ROUTER: raw router settings |
### ap\_list\_connections
List OAuth/app connections in the project. Required before adding steps that need authentication.
| Input | Type | Required | Description |
| ------------- | ------ | -------- | ------------------------------------------------------------------------------------ |
| `pieceName` | string | No | Filter by piece name. Short names like `slack` or `google-sheets` are auto-expanded. |
| `displayName` | string | No | Filter by connection display name (partial match) |
| `status` | array | No | Filter by status: `ACTIVE`, `MISSING`, or `ERROR` |
### ap\_list\_ai\_models
List configured AI providers and their available models. Use this to discover valid `aiProviderModel` values for configuring Run Agent steps.
| Input | Type | Required | Description |
| ---------- | ------ | -------- | --------------------------------------------------------------------------------------------------------------------------- |
| `provider` | string | No | Filter by provider (`openai`, `anthropic`, `google`, `azure`, `openrouter`, `activepieces`, `cloudflare-gateway`, `custom`) |
### ap\_list\_tables
List all tables in the project with their fields (name, type, id) and row counts.
| Input | Type | Required | Description |
| ----- | ---- | -------- | ------------------ |
| — | — | — | No inputs required |
### ap\_find\_records
Query records from a table with optional filtering.
| Input | Type | Required | Description |
| --------- | ------ | -------- | ---------------------------------------------- |
| `tableId` | string | Yes | The table ID |
| `filters` | array | No | Filter conditions (fieldName, operator, value) |
| `limit` | number | No | Max records (default 50, max 500) |
**Filter operators:** `eq`, `neq`, `gt`, `gte`, `lt`, `lte`, `co` (contains), `exists`, `not_exists`
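A minimal sketch of a filtered query, with a hypothetical table ID and field names:

```python
# Hypothetical ap_find_records arguments: two filter conditions plus a limit.
# Each filter names a field, an operator from the list above, and a value.
find_records_args = {
    "tableId": "tbl_abc123",  # hypothetical ID from ap_list_tables
    "filters": [
        {"fieldName": "Status", "operator": "eq", "value": "open"},
        {"fieldName": "Priority", "operator": "gt", "value": 2},
    ],
    "limit": 25,              # default 50, max 500
}
```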
### ap\_list\_runs
List recent flow runs with optional filters.
| Input | Type | Required | Description |
| -------- | ------ | -------- | --------------------------------------------------- |
| `flowId` | string | No | Filter by flow |
| `status` | string | No | Filter by status (SUCCEEDED, FAILED, RUNNING, etc.) |
| `limit` | number | No | Max runs (default 10, max 50) |
### ap\_get\_run
Get detailed results of a flow run including step-by-step outputs, errors, and durations.
| Input | Type | Required | Description |
| ----------- | ------ | -------- | ----------- |
| `flowRunId` | string | Yes | The run ID |
### ap\_setup\_guide
Get step-by-step instructions for setting up connections or AI providers. Returns instructions for the user to follow in the UI — credentials are never handled through MCP.
| Input | Type | Required | Description |
| ----------- | ------ | -------- | --------------------------------------- |
| `topic` | string | Yes | `connection` or `ai_provider` |
| `pieceName` | string | No | For connections: which piece needs auth |
***
## Flow Management
Create and manage flows.
### ap\_create\_flow
Create a new empty flow.
| Input | Type | Required | Description |
| ---------- | ------ | -------- | ------------------------- |
| `flowName` | string | Yes | Display name for the flow |
### ap\_duplicate\_flow
Duplicate an existing flow. Creates a new copy with all steps, configuration, and canvas notes. Connections and sample data are not copied.
| Input | Type | Required | Description |
| -------- | ------ | -------- | ---------------------------------------------------------- |
| `flowId` | string | Yes | The flow ID to duplicate |
| `name` | string | No | Name for the copy (defaults to "Copy of \{original name}") |
After duplicating, use `ap_flow_structure` on the new flow to check configuration status. Steps that reference connections will need to be re-configured with `ap_update_step`.
### ap\_rename\_flow
Rename an existing flow.
| Input | Type | Required | Description |
| ------------- | ------ | -------- | ----------- |
| `flowId` | string | Yes | The flow ID |
| `displayName` | string | Yes | New name |
### ap\_change\_flow\_status
Enable or disable a flow.
| Input | Type | Required | Description |
| -------- | ------ | -------- | ----------------------- |
| `flowId` | string | Yes | The flow ID |
| `status` | string | Yes | `ENABLED` or `DISABLED` |
### ap\_delete\_flow
Permanently delete a flow and all its versions. This cannot be undone.
| Input | Type | Required | Description |
| -------- | ------ | -------- | ---------------------------- |
| `flowId` | string | Yes | The ID of the flow to delete |
### ap\_lock\_and\_publish
Publish the current draft of a flow. Validates all steps are configured.
| Input | Type | Required | Description |
| -------- | ------ | -------- | ----------- |
| `flowId` | string | Yes | The flow ID |
***
## Flow Building
Add, configure, and remove steps in a flow.
### ap\_build\_flow
Create a complete flow in one call — trigger plus any number of steps. Steps are added sequentially (trigger → step\_1 → step\_2 → ...). All steps are validated on creation. Use granular tools (`ap_add_step`, `ap_update_step`) to modify existing flows or add nested structures (loop contents, router branches).
| Input | Type | Required | Description |
| ---------- | ------ | -------- | ------------------------------------------------------------------------------ |
| `flowName` | string | Yes | Name for the new flow |
| `trigger` | object | Yes | `{pieceName, triggerName, input?, auth?}` |
| `steps` | array | Yes | Array of step specs, each with `type`, `displayName`, and type-specific fields |
**Step types in the array:**
* **PIECE**: `pieceName`, `actionName`, `input`, `auth`, `continueOnFailure`, `retryOnFailure`
* **CODE**: `sourceCode`, `input`, `continueOnFailure`, `retryOnFailure`
* **LOOP\_ON\_ITEMS**: `loopItems`
* **ROUTER**: creates a router with Branch 1 + Otherwise
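A sketch of a complete `ap_build_flow` payload, assuming a Gmail trigger named `new_email`, hypothetical connection IDs, and the default sequential step naming (`step_1`, `step_2`, ...):

```python
# Hypothetical ap_build_flow arguments: trigger -> CODE step -> Slack action.
build_flow_args = {
    "flowName": "Notify on new email",
    "trigger": {
        "pieceName": "@activepieces/piece-gmail",
        "triggerName": "new_email",           # assumed trigger name
        "auth": "my-gmail-connection",        # hypothetical externalId
    },
    "steps": [
        {
            "type": "CODE",
            "displayName": "Extract subject",
            "sourceCode": "export const code = async (inputs) => inputs.subject;",
            "input": {"subject": "{{trigger.body.subject}}"},
        },
        {
            "type": "PIECE",
            "displayName": "Send to Slack",
            "pieceName": "@activepieces/piece-slack",
            "actionName": "send_channel_message",
            "auth": "my-slack-connection",    # hypothetical externalId
            "input": {"channel": "C1234567890", "text": "{{step_1}}"},
        },
    ],
}
```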
### ap\_update\_trigger
Set or update the trigger for a flow.
| Input | Type | Required | Description |
| ------------- | ------ | -------- | ---------------------------------------------- |
| `flowId` | string | Yes | The flow ID |
| `pieceName` | string | Yes | Piece name (e.g., `@activepieces/piece-gmail`) |
| `triggerName` | string | Yes | Trigger name from the piece |
| `input` | object | No | Trigger configuration |
| `auth` | string | No | Connection externalId |
| `displayName` | string | No | Display name for the trigger step |
### ap\_add\_step
Add a new step to a flow. Optionally configure it in the same call by providing `input`, `auth`, `sourceCode`, or `loopItems`.
| Input | Type | Required | Description |
| ------------------------------ | ------- | -------- | ----------------------------------------------------------------------- |
| `flowId` | string | Yes | The flow ID |
| `parentStepName` | string | Yes | Step to insert after/into |
| `stepLocationRelativeToParent` | string | Yes | `AFTER`, `INSIDE_LOOP`, or `INSIDE_BRANCH` |
| `stepType` | string | Yes | `CODE`, `PIECE`, `LOOP_ON_ITEMS`, or `ROUTER` |
| `displayName` | string | Yes | Step display name |
| `pieceName` | string | No | For PIECE steps |
| `actionName` | string | No | For PIECE steps |
| `branchIndex` | number | No | For INSIDE\_BRANCH |
| `input` | object | No | Step input config (key-value pairs) |
| `auth` | string | No | Connection externalId |
| `sourceCode` | string | No | For CODE steps: JavaScript/TypeScript source |
| `packageJson` | string | No | For CODE steps: npm dependencies |
| `loopItems` | string | No | For LOOP steps: items expression |
| `continueOnFailure` | boolean | No | For CODE/PIECE steps: continue flow if this step fails (default: false) |
| `retryOnFailure` | boolean | No | For CODE/PIECE steps: retry this step on failure (default: false) |
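For nested inserts, a sketch of adding a piece step inside a router branch (all IDs and names are hypothetical; branch indices are 0-based, as in the branch tools below):

```python
# Hypothetical ap_add_step arguments: insert a Slack action into the
# second branch (index 1) of a router step named "step_2".
add_step_args = {
    "flowId": "flow_123",                        # hypothetical flow ID
    "parentStepName": "step_2",                  # the router step
    "stepLocationRelativeToParent": "INSIDE_BRANCH",
    "branchIndex": 1,
    "stepType": "PIECE",
    "displayName": "Send follow-up",
    "pieceName": "@activepieces/piece-slack",
    "actionName": "send_channel_message",
    "auth": "my-slack-connection",               # hypothetical externalId
    "input": {"channel": "C1234567890", "text": "Follow-up needed"},
}
```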
### ap\_update\_step
Update an existing step's settings. Auto-fills default values for optional properties.
| Input | Type | Required | Description |
| ------------------- | ------- | -------- | ------------------------------------------------------ |
| `flowId` | string | Yes | The flow ID |
| `stepName` | string | Yes | Step name (e.g., `step_1`) |
| `displayName` | string | No | New display name |
| `input` | object | No | Step configuration |
| `auth` | string | No | Connection externalId |
| `actionName` | string | No | For PIECE steps |
| `loopItems` | string | No | For LOOP steps |
| `sourceCode` | string | No | For CODE steps: JavaScript/TypeScript source code |
| `packageJson` | string | No | For CODE steps: npm dependencies as JSON string |
| `skip` | boolean | No | Skip this step |
| `continueOnFailure` | boolean | No | For CODE/PIECE steps: continue flow if this step fails |
| `retryOnFailure` | boolean | No | For CODE/PIECE steps: retry this step on failure |
Use `{{stepName.field}}` syntax in input values to reference data from previous steps (e.g., `{{trigger.body.email}}`, `{{step_1.id}}`). Do NOT include `.output.` in the path.
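A sketch of the template syntax in practice, with hypothetical flow and field names:

```python
# Hypothetical ap_update_step arguments using template references.
# Paths reference the step name directly, with no ".output." segment.
update_step_args = {
    "flowId": "flow_123",                 # hypothetical flow ID
    "stepName": "step_2",
    "input": {
        "to": "{{trigger.body.email}}",   # data from the trigger
        "order_id": "{{step_1.id}}",      # data from a previous step
        # WRONG: "{{step_1.output.id}}" -- never include ".output."
    },
}
```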
### ap\_delete\_step
Delete a step from a flow.
| Input | Type | Required | Description |
| ---------- | ------ | -------- | -------------- |
| `flowId` | string | Yes | The flow ID |
| `stepName` | string | Yes | Step to delete |
***
## Router & Branching
Manage conditional branches in router steps. Use `ap_flow_structure` to see existing branch conditions and indices.
### ap\_add\_branch
Add a conditional branch to a router step. The branch is inserted before the fallback (Otherwise) branch.
| Input | Type | Required | Description |
| ---------------- | ------ | -------- | ---------------------------- |
| `flowId` | string | Yes | The flow ID |
| `routerStepName` | string | Yes | The router step name |
| `branchName` | string | Yes | Display name for the branch |
| `conditions` | array | No | Conditions array (see below) |
**Conditions format:** Outer array = OR groups, inner array = AND conditions. Each condition has:
* `firstValue` (string) — left-hand value, can use `{{step_1.field}}` template syntax
* `operator` (string) — e.g. `TEXT_CONTAINS`, `NUMBER_IS_GREATER_THAN`, `EXISTS`, `BOOLEAN_IS_TRUE`
* `secondValue` (string, optional) — right-hand value (not needed for single-value operators)
* `caseSensitive` (boolean, optional) — for text operators
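Putting the format together, a sketch of a `conditions` array (step names and values are hypothetical). The outer array is OR-ed and each inner array is AND-ed, so this reads as (A AND B) OR (C):

```python
# Hypothetical conditions for ap_add_branch.
conditions = [
    [   # group 1: both conditions must hold
        {"firstValue": "{{step_1.status}}", "operator": "TEXT_CONTAINS",
         "secondValue": "urgent", "caseSensitive": False},
        {"firstValue": "{{step_1.score}}", "operator": "NUMBER_IS_GREATER_THAN",
         "secondValue": "80"},
    ],
    [   # group 2: a single-value operator needs no secondValue
        {"firstValue": "{{step_1.escalated}}", "operator": "BOOLEAN_IS_TRUE"},
    ],
]
```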
### ap\_update\_branch
Update the conditions and/or name of an existing router branch without affecting the steps inside it.
| Input | Type | Required | Description |
| ---------------- | ------ | -------- | --------------------------------------------------------------------------------------- |
| `flowId` | string | Yes | The flow ID |
| `routerStepName` | string | Yes | The router step name |
| `branchIndex` | number | Yes | Branch index (0-based) |
| `branchName` | string | No | New display name |
| `conditions` | array | No | New conditions (same format as `ap_add_branch`). Replaces existing conditions entirely. |
Cannot set conditions on the fallback branch — only `branchName` can be updated for fallback branches.
### ap\_delete\_branch
Delete a branch from a router step. Cannot delete the fallback (last) branch.
| Input | Type | Required | Description |
| ---------------- | ------ | -------- | -------------------------------- |
| `flowId` | string | Yes | The flow ID |
| `routerStepName` | string | Yes | The router step name |
| `branchIndex` | number | Yes | Branch index to delete (0-based) |
***
## Annotations
### ap\_manage\_notes
Add, update, or delete canvas notes on a flow.
| Input | Type | Required | Description |
| ----------- | ------ | -------- | --------------------------------------------------- |
| `flowId` | string | Yes | The flow ID |
| `operation` | string | Yes | `ADD`, `UPDATE`, or `DELETE` |
| `noteId` | string | No | Required for UPDATE/DELETE |
| `content` | string | No | Note text (required for ADD) |
| `color` | string | No | Note color |
| `position` | object | No | `{x, y}` canvas position |
| `size` | object | No | `{width, height}` note dimensions (default 200x200) |
***
## Tables
Full CRUD operations for the built-in Tables feature. Use field names (not IDs) when inserting or updating records.
### ap\_create\_table
Create a new table with an initial set of fields.
| Input | Type | Required | Description |
| -------- | ------ | -------- | -------------------------------- |
| `name` | string | Yes | Table name |
| `fields` | array | Yes | Fields: `{name, type, options?}` |
**Field types:** `TEXT`, `NUMBER`, `DATE`, `STATIC_DROPDOWN` (requires `options` array)
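A sketch of a `ap_create_table` payload. The field names are hypothetical, and the shape of the `options` array is an assumption (it may instead expect objects):

```python
# Hypothetical ap_create_table arguments. STATIC_DROPDOWN fields must
# supply an options array; the other types take only name and type.
create_table_args = {
    "name": "Support tickets",
    "fields": [
        {"name": "Title", "type": "TEXT"},
        {"name": "Opened", "type": "DATE"},
        {"name": "Priority", "type": "STATIC_DROPDOWN",
         "options": ["Low", "Medium", "High"]},   # assumed options shape
    ],
}
```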
### ap\_delete\_table
Permanently delete a table and all its data.
| Input | Type | Required | Description |
| --------- | ------ | -------- | ------------ |
| `tableId` | string | Yes | The table ID |
### ap\_manage\_fields
Add, rename, or delete fields on a table.
| Input | Type | Required | Description |
| ----------- | ------ | -------- | ---------------------------- |
| `tableId` | string | Yes | The table ID |
| `operation` | string | Yes | `ADD`, `UPDATE`, or `DELETE` |
| `fieldId` | string | No | Required for UPDATE/DELETE |
| `name` | string | No | Required for ADD/UPDATE |
| `type` | string | No | Required for ADD |
| `options` | array | No | For STATIC\_DROPDOWN |
### ap\_insert\_records
Insert one or more records into a table.
| Input | Type | Required | Description |
| --------- | ------ | -------- | ------------------------------------------------ |
| `tableId` | string | Yes | The table ID |
| `records` | array | Yes | 1-50 records, each mapping field names to values |
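A sketch of a batch insert, with a hypothetical table ID and field names. Note that records are keyed by field name (as returned by `ap_list_tables`), not by field ID:

```python
# Hypothetical ap_insert_records arguments: two records in one call.
insert_records_args = {
    "tableId": "tbl_abc123",   # hypothetical table ID
    "records": [
        {"Title": "Printer offline", "Priority": "High"},
        {"Title": "Password reset", "Priority": "Low"},
    ],
}
```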
### ap\_update\_record
Update specific cells in a record. Only specified fields are changed.
| Input | Type | Required | Description |
| ---------- | ------ | -------- | ------------------------- |
| `tableId` | string | Yes | The table ID |
| `recordId` | string | Yes | The record ID |
| `fields` | object | Yes | Field names to new values |
### ap\_delete\_records
Permanently delete one or more records.
| Input | Type | Required | Description |
| ----------- | ----- | -------- | -------------------- |
| `recordIds` | array | Yes | Record IDs to delete |
***
## Testing & Runs
Test flows, inspect results, and retry failures. Test tools poll for up to 120 seconds and return step-by-step results.
### ap\_test\_flow
Test a flow end-to-end in the test environment. Pass `triggerTestData` to provide mock trigger output when no sample data exists (e.g., for webhook triggers).
| Input | Type | Required | Description |
| ----------------- | ------ | -------- | ----------------------------------------------------------------------- |
| `flowId` | string | Yes | The flow ID |
| `triggerTestData` | object | No | Mock trigger output data. Saved as sample data before running the test. |
The flow must have a configured trigger. The tool validates this before running and returns a clear error if not.
### ap\_test\_step
Test a single step within a flow. Runs all steps up to and including the target step. Pass `triggerTestData` when no sample data exists.
| Input | Type | Required | Description |
| ----------------- | ------ | -------- | ------------------------- |
| `flowId` | string | Yes | The flow ID |
| `stepName` | string | Yes | Step to test |
| `triggerTestData` | object | No | Mock trigger output data. |
### ap\_retry\_run
Retry a failed flow run.
| Input | Type | Required | Description |
| ----------- | ------ | -------- | ----------------------------------------- |
| `flowRunId` | string | Yes | The failed run ID |
| `strategy` | string | Yes | `FROM_FAILED_STEP` or `ON_LATEST_VERSION` |
* **FROM\_FAILED\_STEP**: Resume from where it failed, keeping previous step outputs
* **ON\_LATEST\_VERSION**: Re-run the entire flow with the current published version
### ap\_run\_action
Execute a single piece action once without building or saving a flow. Designed for one-shot tasks like *"check my inbox"* or *"send one Slack message"* where building a full automation would be overkill. Under the hood the tool creates a disposable flow, runs the action, returns the output, and cleans the flow up — the user never sees it in their flow list.
| Input | Type | Required | Description |
| ---------------------- | ------ | -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `pieceName` | string | Yes | Piece name (e.g. `slack` or `@activepieces/piece-slack`). Use `ap_list_pieces` to discover. |
| `actionName` | string | Yes | Action to run (e.g. `send_channel_message`). Use `ap_get_piece_props` for the input shape. |
| `input` | object | No | Fully-resolved input for the action. Keys must match the piece action's props. Pass raw values — do **not** wrap them in `{{…}}`. Omit entirely if the action has no props. |
| `connectionExternalId` | string | No | `externalId` from `ap_list_connections`. Required if the piece needs auth. Auto-wrapped server-side as `{{connections['externalId']}}`. Must be a plain ID — special characters are rejected. |
When to pick this vs. `ap_build_flow`:
* **`ap_run_action`** — one-off, throwaway, returns a result now.
* **`ap_build_flow`** — persistent automation that should repeat, run on a schedule, or trigger on external events.
Prefer calling `ap_list_connections` and `ap_get_piece_props` first so you know the exact `actionName`, the expected props, and the right `connectionExternalId` before invoking. Missing required inputs or unknown actions produce a friendly error before dispatch — no run is created and nothing is billed.
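A sketch of a one-shot invocation, with a hypothetical connection ID and channel value:

```python
# Hypothetical ap_run_action arguments for a one-off Slack message.
# Inputs are raw values (no {{...}} wrapping); the connection externalId
# is auto-wrapped server-side as {{connections['externalId']}}.
run_action_args = {
    "pieceName": "slack",                  # short names are accepted
    "actionName": "send_channel_message",
    "input": {"channel": "C1234567890", "text": "Deploy finished"},
    "connectionExternalId": "my-slack-connection",  # plain ID, no templating
}
```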
# Welcome
Source: https://www.activepieces.com/docs/overview/welcome
Your friendliest open source all-in-one automation tool, designed to be extensible.
* Learn how to work with Activepieces
* Browse available pieces
* Learn how to install Activepieces
* How to build pieces and contribute
# 🔥 Why Activepieces is Different:
* **💖 Loved by Everyone**: Intuitive interface and great experience for both technical and non-technical users with a quick learning curve.
* **🌐 Open Ecosystem:** All pieces are open source and available on npmjs.com, and **60% of the pieces are contributed by the community**.
* **🛠️ Pieces are written in TypeScript**: Pieces are npm packages written in TypeScript, offering full customization with the best developer experience, including **hot reloading** for **local** piece development on your machine. 😎
* **🤖 AI-Ready**: Native AI pieces and agents are built into Activepieces. Integrating AI into your flows is seamless and simple—experiment with popular providers, or quickly create custom agents using our easy-to-use AI SDK.
* **🏢 Enterprise-Ready**: Developers set up the tools, and anyone in the organization can use the no-code builder. Full customization from branding to control.
* **🔒 Secure by Design**: Self-hosted and network-gapped for maximum security and control over your data.
* **🧠 Human in the Loop**: Delay execution for a period of time or require approval. These are just pieces built on top of the piece framework, and you can build many more like them. 🎨