# ElectricSQL

> url: /blog/posts/2024-07-17-electric-next.md

---

# Source: https://electric-sql.com/blog/posts/2024-07-17-electric-next.md

---
url: /blog/posts/2024-07-17-electric-next.md
description: Electric Next is a new approach to building the ElectricSQL sync engine.
---

Electric Next is a new approach that we've adopted to building ElectricSQL. It's informed by the lessons learned building the [previous system](https://legacy.electric-sql.com) and inspired by new insight from [Kyle Mathews](https://electric-sql.com/about/team#kyle) joining the team.

What started as tinkering is now the way forward for Electric. So, what's changed and what does it mean for you?

## What is Electric Next?

Electric Next was a clean rebuild of the Electric sync engine that now forms the basis of ElectricSQL moving forward. We created a new repo and started by porting the absolute minimum code necessary from the [previous repo](https://github.com/electric-sql/electric-old). Once we were confident that Electric Next was the way forward, we froze the old system and moved the new code into our main repo at [electric-sql/electric](https://github.com/electric-sql/electric).

The new approach provides an [HTTP API](/docs/api/http) for syncing [Shapes](/docs/guides/shapes) of data from Postgres. This can be used directly or via [client libraries](/docs/api/clients/typescript) and [integrations](/docs/integrations/react). It's also simple to write your own client in any language.

## Why build a new system?

Electric has its [heritage](https://electric-sql.com/about/team#advisors) in [distributed database research](/docs/reference/literature). When we started, our plan was to use this research to build a next-generation distributed database: something like a CockroachDB for the AP side of the CAP theorem. However, the adoption dynamics for creating a new database from scratch are tough. So we pivoted to building a replication layer for existing databases. This allowed us to do active-active replication between multiple Postgres instances, in the cloud or at the edge.

However, rather than stopping at the edge, we kept seeing that it was more optimal to take the database-grade replication guarantees all the way into the client. So we built a system to sync data into embedded databases in the client, where our core technology could solve the concurrency challenges of local-first software architecture. Thus, ElectricSQL was born, as an [open source platform for building local-first software](/blog/2023/09/20/introducing-electricsql-v0.6).

### Optimality and complexity

To go from core database replication technology to a viable solution for building local-first software, we had to build a lot of stuff: tooling for [migrations](https://legacy.electric-sql.com/docs/usage/data-modelling/migrations), [permissions](https://legacy.electric-sql.com/docs/usage/data-modelling/permissions), [client generation](https://legacy.electric-sql.com/docs/api/cli#generate), [type-safe data access](https://legacy.electric-sql.com/docs/usage/data-access/client), [live queries](https://legacy.electric-sql.com/docs/integrations/frontend/react#uselivequery), [reactivity](https://legacy.electric-sql.com/docs/reference/architecture#reactivity), [drivers](https://legacy.electric-sql.com/docs/integrations/drivers), etc.

Coming from a research background, we wanted the system to be optimal. As a result, we often picked the more complex solution from the design space and, as a vertically integrated system, that solution became the only one available to use with Electric.
For example, we designed the [DDLX rule system](https://legacy.electric-sql.com/docs/api/ddlx) in a certain way because we wanted authorization that supported finality of local writes. However, rules in general (and our rules in particular) are only one way to do authorization in a local-first system. Many applications would be happy with a simpler solution, such as Postgres RLS or server-authoritative middleware.

These decisions not only made Electric more complex to use but also more complex to develop. Despite our best efforts, this has slowed us down and tested the patience of even the most forgiving of our early adopters.

Many of those early adopters have also reported performance and reliability issues. The complexity of the stack has provided a wide surface for bugs. So where we've wanted to be focusing on core features, performance and stability, we've ended up fixing issues with things like [docker networking](https://github.com/electric-sql/electric/issues/582), [migration tooling](https://github.com/electric-sql/electric/issues/668) and [client-side build tools](https://github.com/electric-sql/electric/issues/798).

The danger, as [Teej](https://x.com/teej_m) has articulated, is building a system that demos well, with magic sync APIs, but that never actually scales out reliably. The very features and choices that make the demo magic prevent the system from being simple enough to be bulletproof in production.

### Refocusing our product strategy

One of the many insights that Kyle has brought is that successful systems evolve from simple systems that work. This is [Gall's law](https://archive.org/details/systemanticshows00gall):

> “A complex system that works is invariably found to have evolved from a simple system that worked.”

This has been echoed in conversations we've had with [Paul Copplestone](https://linkedin.com/in/paulcopplestone) at [Supabase](https://supabase.com). His approach to successfully building our type of software is to make the system incremental and composable, as reflected in the [Supabase Architecture](https://supabase.com/docs/guides/getting-started/architecture) guide:

> Supabase is composable. Even though every product works in isolation, each product on the platform needs to 10x the other products.

To make a system that's incremental and composable, we need to decouple the Electric stack. So it's not a one-size-fits-all vertical stack but, instead, more of a loosely coupled set of primitives around a smaller core, where we do the essential bits and then allow our users to choose how to integrate and compose these with other layers of the stack.

This aligns with the principle of [Worse is Better](https://en.wikipedia.org/wiki/Worse_is_better), defined by Richard P. Gabriel:

> Software quality does not necessarily increase with functionality: there is a point where less functionality ("worse") is a preferable option ("better") in terms of practicality and usability.

Gabriel contrasts "Worse is Better" with a "Right Thing" approach that aims to create the optimal solution. Which sounds painfully like our ambitions to make an optimal local-first platform. Moving functionality out of scope will actually allow us to make the core better and deliver on the opportunity.

#### The motivation for Electric Next

So, hopefully, our motivation is now clear. We needed to find a way to simplify Electric and make it more loosely coupled: to pare it back to its core and iterate on solid foundations.

## What's changed?
Electric Next is a [sync engine](/products/postgres-sync), not a local-first software platform. It can be used for a wide range of [use cases](/sync), syncing data into apps, workers, services, agents and environments. These include but are not limited to local-first software development. ### Sync engine When we look at our stack, the part that we see as most core is the [sync engine](/products/postgres-sync). This is the component of Electric that syncs data between Postgres and local clients. Consuming Postgres logical replication, managing partial replication using Shapes and syncing data to and from clients over a replication protocol. It’s where there’s the most complexity. Where we can add the most value and is hardest to develop yourself. #### Core responsibilities We now see Electric as a sync engine that does partial replication on top of Postgres. We've pushed other, non-core, aspects of the system out of scope, as we pare down to our essential core and then iterate on this to re-build the capabilities of the previous system. The diagram above and table below summarise what we see as core and what we've pushed out of scope. | Aspect | Is it core? | Who should/can provide? | | --------------------------------------------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Syncing data | yes | Electric | | Partial replication ([Shapes](/docs/guides/shapes)) | yes | Electric | | Schema management / propagation / matching | partial | Application specific. In some cases it may be useful or necessary to replicate and validate schema information. In others, it can be the responsibility of the client to connect with the correct schema. | | Type safety in the client | partial | Important in many cases for DX and can be assisted by the sync service (e.g.: by providing an endpoint to query types for a shape). But use of types is optional and in many cases types can be provided by ORMs and other client-libraries. | | Permissions / authorization | no | There are many valid patterns here. Auth middleware, proxies, rule systems. Authorize at connect, per shape, per row/operation. A sync engine may provide some hooks and options but should not prescribe a solution. | | Client-side data access library | no | There are many ways of mapping a replication stream to objects, graphs or databases in the client. For example using existing ORMs like Drizzle and Prisma, or reactivity frameworks like LiveStore and TinyBase. | | Client-side reactivity | no | Client specific. Can be provided by reactivity frameworks. | | Connection management | no | Client specific. | | Database adapters | no | Client specific. Can be provided by ORMs and reactivity frameworks. | | Framework integrations | no | Client specific. Can be provided by reactivity frameworks. | | Client-side debug tooling | no | Client specific. | ### HTTP Protocol One of the key aspects that has changed in the core sync engine is a switch from the [Satellite web socket replication protocol](https://legacy.electric-sql.com/docs/api/satellite) to an HTTP replication protocol. Switching to an HTTP protocol may at first seem like a regression or a strange fit. Web sockets are built on top of HTTP specifically to serve the kind of realtime data stream that Electric provides. However, they are also more stateful and harder to cache. 
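To make the trade-off concrete, here's a rough sketch of what the HTTP approach looks like from the client's point of view. This is illustrative rather than a reference client: the hostname is a placeholder and only the `table` and `offset` parameters used in the examples later in these posts are shown.

```ts
// Rough sketch: the initial sync for a shape is a single, cacheable GET.
// The hostname is a placeholder; error handling and live mode are omitted.
async function initialSync(): Promise<unknown[]> {
  // offset=-1 asks for the shape log from the beginning.
  const response = await fetch(
    'https://electric.example.com/v1/shape?table=items&offset=-1'
  )

  // The body is a JSON log of change operations that the client
  // materialises into local state. Follow-up requests resume from the
  // last offset the client has seen, which is why proxies and CDNs can
  // cache each request like any other GET.
  return await response.json()
}
```

A real client also handles live updates and would normally use the official libraries; the point here is just that each step is an ordinary HTTP request.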
By switching to the [new HTTP API](/docs/api/http), the new system:

* minimises state, making the sync engine more reliable and easier to scale out
* integrates with standard HTTP tooling, including proxies and CDNs

This allows us to optimise initial data sync by making sync requests cacheable. And it facilitates moving concerns like authentication and authorization out of scope, as these can be handled by HTTP proxies.

### Write patterns

Electric has always been envisaged as an active-active replication system that supports bi-directional sync between clients and the server. This means it syncs data out to clients (the "read path") and syncs data back from clients (the "write path").

The previous Electric supported a single primary write-path pattern: [writing through the local database](https://legacy.electric-sql.com/docs/usage/data-access/writes). This is very powerful (and [abstracts state transfer](/blog/2022/12/16/evolution-state-transfer) out of the application domain). However, it is only one of many valid write patterns.

Many applications don't write data at all; for example, syncing data into an application for visualisation or analysis. Some fire-and-forget writes to an ingest API. Other applications write data via API calls or mutation queues. Some of these are online writes. Some use local optimistic state. For example, when applying a mutation with [Relay](https://relay.dev) you can [define an `optimisticResponse`](https://relay.dev/docs/guided-tour/updating-data/imperatively-modifying-store-data/#optimistic-updaters-vs-updaters) to update the client store with temporary optimistic state whilst the write is sent to the server. Or, to give another example, when [making secure transactions](/blog/2023/12/15/secure-transactions-with-local-first) a local-first app will explicitly want to send writes to the server, in order to validate and apply them in a secure and strongly consistent environment.

So, following the strategy of paring down to the core and then progressively layering on more complex functionality, Electric Next has taken the following approach:

1. start with read-path only
2. then add support for optimistic write patterns with tentativity
3. then add support for through-the-DB writes

This explicitly reduces the capability of the system in the short term, in order to build a better, more resilient system in the long term. The beauty is that, because we no longer prescribe a write-path strategy, you can choose and, if necessary, implement any write-path strategy you like. We will only focus on the more complex strategies ourselves once the simpler ones are bulletproof. And we hope that others, like [LiveStore](https://www.schickling.dev/projects/livestore) and [Drizzle](https://orm.drizzle.team/), will build better client-side libraries than we can.

#### A note on finality of local writes

One of the key differentiators of the previous ElectricSQL system was the ability to write to the local database without conflicts or rollbacks. The principle is [finality of local-writes](https://legacy.electric-sql.com/docs/reference/architecture#local-writes), which means that writes are final, not tentative. I.e.: once a write is accepted locally, it won't be rejected as invalid later on.

In contrast, Electric Next embraces tentativity. With the new system, you can choose your write pattern(s) and the guarantees you want them to provide.
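For instance, here is a minimal sketch of a tentative (optimistic) write, assuming a hypothetical `/todos` endpoint on your own API: apply optimistic state locally, send the write to the server, and roll the local state back if the server rejects it.

```ts
// Sketch of a tentative write. The endpoint and the shape of the local
// store are hypothetical; the point is that the local write is optimistic
// and can be rolled back if the server rejects it.
type Todo = { id: string; title: string; pending?: boolean }

const localTodos = new Map<string, Todo>()

async function createTodo(todo: Todo) {
  // 1. Apply tentative, optimistic state immediately.
  localTodos.set(todo.id, { ...todo, pending: true })

  // 2. Send the write to the server through your own API.
  const res = await fetch('/todos', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(todo),
  })

  // 3. If the server rejects it, the write was only ever tentative:
  //    roll the optimistic state back.
  if (!res.ok) {
    localTodos.delete(todo.id)
  }

  // On success, the confirmed row syncs back in on the read path and
  // replaces the pending local entry.
}
```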
We still believe that a local-first stack that provides finality of local writes can provide a better DX and UX than one that doesn't, because of the absence of rollbacks. So we are committed in the longer term to building support for finality of local writes. However, it is no longer a key tenet of the system design.

### Use cases

The core use case for Electric is to sync subsets of data out of Postgres into local environments, wherever you need the data. You can sync data into:

* apps, replacing data fetching with data sync
* development environments, for example syncing data into [an embedded PGlite](/products/pglite)
* edge workers and services, for example maintaining a low-latency [edge data cache](/docs/integrations/redis)
* local AI systems running RAG

## What's the status?

### Previous system

Electric Next has superseded the previous Electric.

* some parts of the old system were cherry-picked and ported over
* some parts may be cut out into optional libraries, for example the [DDLX implementation](https://github.com/electric-sql/electric/pull/1393)
* most parts were not and will not be needed

You're welcome to continue to use the old system and choose your moment to migrate. The code is preserved at [electric-sql/electric-old](https://github.com/electric-sql/electric-old) and the website and docs remain published at [legacy.electric-sql.com](https://legacy.electric-sql.com). However, caveat emptor — we are not supporting the old system.

### New system

At the time of writing this document, we are early in the development of Electric Next. The repo was created on the 1st July 2024. As a clean re-write, there are many things not yet supported.

However, even just with the first release of Electric Next you can already sync partial subsets of data from a Postgres database into a wide variety of clients and environments, for example:

* syncing data into local apps using the [TypeScript](/docs/api/clients/typescript) and [Elixir](/docs/api/clients/elixir) clients
* replacing hot-path data fetching and database queries in apps using [React](/docs/integrations/react), [MobX](/docs/integrations/react) and [TanStack](/docs/integrations/tanstack)
* maintaining live caches with automatic invalidation, as per [our Redis example](https://github.com/electric-sql/electric/blob/main/examples/redis-sync/src/index.ts)

### Roadmap

You can track development on [Discord](https://discord.electric-sql.com) and via the [GitHub Issues milestones](https://github.com/electric-sql/archived-electric-next/milestones).

***

## Next steps

Electric Next is available to use today. We welcome community contributions.

### Using Electric Next

See the:

* [Quickstart](/docs/quickstart)
* [HTTP API](/docs/api/http)
* [Examples](/demos)

If you have any questions or need support, ask on the `#help-and-support` channel in the [Electric Discord](https://discord.electric-sql.com).

### Get involved in development

Electric is open source (Apache 2.0) and developed on GitHub at [electric-sql/electric](https://github.com/electric-sql/electric). See the [open issues](https://github.com/electric-sql/electric/issues) on the repo and the [contributing guide](https://github.com/electric-sql/electric/blob/main/CONTRIBUTING.md).

---

# Source: https://electric-sql.com/blog/posts/2024-11-21-local-first-with-your-existing-api.md

---
url: /blog/posts/2024-11-21-local-first-with-your-existing-api.md
description: 'How to develop local-first apps incrementally, using your existing API.'
--- One of the exciting things about [local-first software](/sync) is the potential to eliminate APIs and microservices. Instead of coding across the network, you code against a local store, data syncs in the background and your stack is suddenly much simpler. But what if you don't want to eliminate your API? What if you want or need to keep it. How do you develop local-first software then? With [Electric](/products/postgres-sync), you can develop local-first apps incrementally, [using your existing API](#how-it-works). I gave a talk on this subject at the second Local-first meetup in Berlin in December 2024: ## The Toaster Project There's a great book by Harvey Molotch called [Where stuff comes from](https://www.amazon.com/Where-Stuff-Comes-Toasters-Computers/dp/0415944007) which talks about how nothing exists in isolation. One of his examples is a toaster. At first glance, a toaster seems like a pretty straightforward, standalone product. However, look a bit closer and it integrates with a huge number of other things. Like sliced bread and all the supply chain behind it. It runs on electricity. Through a standard plug. It sits on a worktop. The spring in the lever that you press down to put the toast on is calibrated to match the strength of your arm. Your API is a toaster. It doesn't exist in isolation. It's tied into other systems, like your monitoring systems and the way you do migrations and deployment. It's hard to just rip it out, because then you break these integrations and ergonomics — and obviate your own tooling and operational experience. For example, REST APIs are stateless. We know how to scale them. We know how to debug them. They show up in the [browser console](#browser-console). Swapping them out is all very well in theory, but what happens with your new system when it goes down in production? ### Electric's approach At Electric, our mission is to make [sync](/sync) and [local-first](/sync) adoptable for mainstream software. So, one of the main challenges we've focused on is how to use Electric with your existing software stack. This is why we work with [any data model](/docs/guides/deployment#data-model-compatibility) in [any standard Postgres](/docs/guides/deployment#_1-running-postgres). It's why we allow you to sync data into anything from a [JavaScript object](/docs/api/clients/typescript#shape) to a [local database](/products/pglite). And it's why we focus on providing [composable primitives](/blog/2024/07/17/electric-next) rather than a one-size-fits-all solution. As a result, with Electric, you can develop local-first apps incrementally, using your existing API. So you can get the benefits of local-first, without having to re-engineer your stack or re-invent sliced bread, just to make toast in the morning. ## How it works First use Electric to [sync data into your app](#electric-sync). This allows your app to work with local data without it getting stale. Then [use your API](#using-your-api) to handle: * [auth](#auth) * [writes](#writes) As well as, optionally, other concerns like: * [encryption](#encryption) * [filtering](#filtering) Because Electric syncs data [over HTTP](#http-and-json), you can use existing middleware, integrations and instrumentation. Like [authorization services](#external-services) and [the browser console](#browser-console). ### Electric sync To build local-first you have to have the data locally. If you're doing that with data fetching then you have a stale data problem. 
Because if you're working with local data without keeping it in sync, then how do you know that it's not stale? This is why you need [data sync](/sync). To keep the local data fresh when it changes. Happily, this is exactly what Electric does. It [syncs data into local apps and services](/products/postgres-sync) and keeps it fresh for you. Practically what does this look like? Well, instead of fetching data using web service calls, i.e.: something like this: ```jsx import React, { useState, useEffect } from 'react' const MyComponent = () => { const [items, setItems] = useState([]) useEffect(() => { const fetchItems = async () => { const response = await fetch('https://example.com/v1/api/items') const data = await response.json() setItems(data) } fetchItems() }, []) return } ``` Sync data using Electric, like this: ```jsx import { useShape } from '@electric-sql/react' const MyComponent = () => { const { data } = useShape({ url: `https://electric.example.com/v1/shape`, params: { table: 'items', }, }) return } ``` For example: * [Trigger.dev](https://trigger.dev/) started out with Electric by syncing status data from their background jobs platform into their [Realtime dashboard](https://trigger.dev/launchweek/0/realtime) * [Otto](https://ottogrid.ai) swapped out the way they loaded data into their [AI spreadsheet](https://ottogrid.ai) You can go much further with Electric, all the way to [syncing into a local database](/products/pglite). But you can do this *incrementally* as and when you need to. #### Read-path Electric [only does the read-path sync](/docs/guides/writes#local-writes-with-electric). It syncs data out-of Postgres, into local apps. Electric does not do write-path sync. It does not provide (or prescribe) a solution for getting data back into Postgres from local apps and services. In fact, it's explicitly designed for you to [handle writes yourself](#writes). #### HTTP The other key thing about Electric sync is that [it's just JSON over HTTP](/docs/api/http). Because it's JSON you can parse it and [work with it](/docs/guides/client-development) in any language and environment. Because it's HTTP you can proxy it. Which means you can use existing HTTP services and middleware to authorize access to it. In fact, whatever you want to do to the replication stream — [encrypt](#encryption), [filter](#filtering), transform, split, remix, buffer, you name it — you can do through a proxy. Extensibility is built in at the protocol layer. ## Using your existing API So far, we've seen that Electric handles read-path sync and leaves [writes](#writes) up to you. We've seen how it syncs over HTTP and how this allows you to implement [auth](#auth) and other concerns like [encryption](#encryption) and [filtering](#filtering) using proxies. Now, let's now dive in to these aspects and see exactly how to implement them using your existing API. With code samples and links to example apps. ### Auth Web-service based apps typically authorize access to resources in a controller or middleware layer. When switching to use a sync engine without an API, you cut out these layers and typically need to codify your auth logic as database rules. For example in [Firebase](https://firebase.google.com) you have [Security Rules](https://firebase.google.com/docs/rules) that look like this: ```js service <> { // Match the resource path. match <> { // Allow the request if the following conditions are true. 
allow <> : if <> } } ``` In Postgres-based systems, like [Supabase Realtime](https://supabase.com/docs/guides/realtime) you use Postgres [Row Level Security (RLS)](https://supabase.com/docs/guides/database/postgres/row-level-security) rules, e.g.: ```sql create policy "Individuals can view their own todos." on todos for select using ( (select auth.uid()) = user_id ); ``` With Electric, you don't need to do this. Electric syncs [over HTTP](/docs/api/http). You make HTTP requests to a [Shape](/docs/guides/shapes) endpoint (see spec here) at: ```http GET /v1/shape ``` Because this is an HTTP resource, you can authorize access to it just as you would any other web service resource: using HTTP middleware. Route the request to Electric through an authorizing proxy that you control: #### API proxy You can see this pattern implemented in the [Proxy auth example](/demos/proxy-auth). This defines a proxy that takes an HTTP request, reads the user credentials from an `Authorization` header, uses them to authorize the request and if successful, proxies the request onto Electric: <<< @../../examples/proxy-auth/app/shape-proxy/route.ts{typescript} You can run this kind of proxy as part of your existing backend API. Here's [another example](/demos/gatekeeper-auth), this time using a [Plug](https://hexdocs.pm/phoenix/plug.html) to authorize requests to a [Phoenix](/docs/integrations/phoenix) application: <<< @../../examples/gatekeeper-auth/api/lib/api\_web/plugs/auth/verify\_token.ex{elixir} #### Edge proxy If you're running Electric [behind a CDN](/docs/api/http#caching), you're likely to want to deploy your authorizing proxy in front of the CDN. Otherwise routing requests through your API adds latency and can become a bottleneck. You can achieve this by deploying your proxy as an edge function or worker in front of the CDN, for example using [Cloudflare Workers](/docs/integrations/cloudflare#auth-example) or [Supabase Edge Functions](/docs/integrations/supabase#sync-into-edge-function). Here's a Supabase edge function using Deno that verifies that the [shape definition](/docs/guides/shapes#defining-shapes) in a JWT matches the shape definition in the request params: <<< @../../examples/gatekeeper-auth/edge/index.ts{typescript} #### External services You can also use external authorization services in your proxy. For example, [Authzed](https://authzed.com) is a low-latency, distributed authorization service based on Google Zanzibar. You can use it in an edge proxy to authorize requests in front of a CDN, whilst still ensuring strong consistency for your authorization logic. 
```ts import jwt from 'jsonwebtoken' import { v1 } from '@authzed/authzed-node' const AUTH_SECRET = Deno.env.get('AUTH_SECRET') || 'NFL5*0Bc#9U6E@tnmC&E7SUN6GwHfLmY' const ELECTRIC_URL = Deno.env.get('ELECTRIC_URL') || 'http://localhost:3000' const HAS_PERMISSION = v1.CheckPermissionResponse_Permissionship.HAS_PERMISSION function verifyAuthHeader(headers: Headers) { const auth_header = headers.get('Authorization') if (auth_header === null) { return [false, null] } const token = auth_header.split('Bearer ')[1] try { const claims = jwt.verify(token, AUTH_SECRET, { algorithms: ['HS256'] }) return [true, claims] } catch (err) { console.warn(err) return [false, null] } } Deno.serve(async (req) => { const url = new URL(req.url) const [isValidJWT, claims] = verifyAuthHeader(req.headers) if (!isValidJWT) { return new Response('Unauthorized', { status: 401 }) } // See https://github.com/authzed/authzed-node and // https://authzed.com/docs/spicedb/getting-started/discovering-spicedb const client = v1.NewClient(claims.token) const resource = v1.ObjectReference.create({ objectType: `example/table`, objectId: claims.table, }) const user = v1.ObjectReference.create({ objectType: 'example/user', objectId: claims.user_id, }) const subject = v1.SubjectReference.create({ object: user, }) const permissionRequest = v1.CheckPermissionRequest.create({ permission: 'read', resource, subject, }) const checkResult = await new Promise((resolve, reject) => { client.checkPermission(permissionRequest, (err, response) => err ? reject(err) : resolve(response) ) }) if (checkResult.permissionship !== HAS_PERMISSION) { return new Response('Forbidden', { status: 403 }) } return fetch(`${ELECTRIC_URL}/v1/shape${url.search}`, { headers: req.headers, }) }) ``` #### Gatekeeper pattern Another pattern, illustrated in our [gatekeeper-auth example](/demos/gatekeeper-auth), is to: 1. use an API endpoint to authorize shape access 2. generate shape-scoped auth tokens 3. validate these tokens in the proxy This allows you to keep more of your auth logic in your API and minimise what's executed on the "hot path" of the proxy. This is actually what the code example shown in the [edge proxy](#edge-proxy) section above does, using an edge worker to validate a shape-scoped auth token. You can also achieve the same thing using a standard reverse proxy like [Caddy](https://caddyserver.com/), [Nginx](https://nginx.org) or [Varnish](https://varnish-cache.org). For example, [using Caddy](https://github.com/electric-sql/electric/tree/main/examples/gatekeeper-auth/caddy): <<< @../../examples/gatekeeper-auth/caddy/Caddyfile{hcl} The workflow from the client's point of view is to first hit the gatekeeper endpoint to generate a shape-scoped auth token, e.g.: ```console $ curl -sX POST "http://localhost:4000/gatekeeper/items" | jq { "headers": { "Authorization": "Bearer " }, "url": "http://localhost:4000/proxy/v1/shape", "table": "items" } ``` Then use the token to authorize requests to Electric, via the proxy, e.g.: ```console $ curl -sv --header "Authorization: Bearer " \ "http://localhost:4000/proxy/v1/shape?table=items&offset=-1" ... < HTTP/1.1 200 OK ... ``` The [Typescript client](/docs/api/clients/typescript) supports auth headers and `401` / `403` error handling, so you can wrap this up using, e.g.: <<< @../../examples/gatekeeper-auth/client/index.ts{ts} ### Writes Electric does [read-path](#read-path) sync. That's the bit between Postgres and the client in the diagram below. Electric **does not** handle writes. 
That's the dashed blue arrows around the outside, back from the client into Postgres: Instead, Electric is designed for you to implement writes yourself. There's a comprehensive [Writes guide](/docs/guides/writes) and [Write patterns example](/demos/write-patterns) that walks through a range of approaches for this that integrate with your existing API. You can also see a number of the examples that use an API for writes, including the [Linearlite](/demos/linearlite), [Phoenix LiveView](/demos/phoenix-liveview) and [Tanstack](/demos/tanstack) examples. #### API server To highlight a couple of the key patterns, let's look at the shared API server for the write-patterns example. It is an [Express](https://expressjs.com) app that exposes the write methods of a REST API for a table of `todos`: * `POST {todo} /todos` to create a todo * `PUT {partial-todo} /todos/:id` to update * `DELETE /todos/:id` to delete <<< @../../examples/write-patterns/shared/backend/api.js{js} #### Optimistic writes If you then look at the [optimistic state pattern](/docs/guides/writes#optimistic-state) (one of the approaches illustrated in the write-patterns example) you can see this being used, together with Electric sync, to support instant, local, offline-capable writes: <<< @../../examples/write-patterns/patterns/2-optimistic-state/index.tsx{tsx} You can also see the [shared persistent optimistic state](https://github.com/electric-sql/electric/tree/main/examples/write-patterns/patterns/3-shared-persistent) pattern for a more resilient, comprehensive approach to building local-first apps with Electric on optimistic state. #### Write-path sync Another pattern covered in the Writes guide is [through the database sync](/docs/guides/writes#through-the-db). This approach uses Electric to sync into an local, embedded database and then syncs changes made to the local database back to Postgres, via your API. The [example implementation](https://github.com/electric-sql/electric/tree/main/examples/write-patterns/patterns/4-through-the-db) uses Electric to sync into [PGlite](/products/pglite) as the local embedded database. All the application code needs to do is read and write to the local database. The [database schema](https://github.com/electric-sql/electric/blob/main/examples/write-patterns/patterns/4-through-the-db/local-schema.sql) takes care of everything else, including keeping a log of local changes to send to the server. This is then processed by a sync utility that sends data to a: * `POST {transactions} /changes` endpoint Implemented in the [shared API server](https://github.com/electric-sql/electric/blob/main/examples/write-patterns/patterns/4-through-the-db/shared/backend/api.js) shown above: <<< @../../examples/write-patterns/patterns/4-through-the-db/sync.ts{ts} #### Authorizing writes Just as [with reads](#auth), because you're sending writes to an API endpoint, you can use your API, middleware, or a proxy to authorize them. Just as you would any other API request. Again, to emphasise, this allows you to develop local-first apps, without having to codify write-path authorization logic into database rules. In fact, in many cases, you can just keep your existing API endpoints and you may not need to change any code at all. ### Encryption Electric syncs ciphertext as well as it syncs plaintext. 
You can encrypt data on and off the local client, i.e.: * *encrypt* it before it leaves the client * *decrypt* it when it comes into the client from the replication stream You can see an example of this in the [encryption example](/demos/encryption): <<< @../../examples/encryption/src/Example.tsx{tsx} #### Key management One of the challenges with encryption is key management. I.e.: choosing which data to encrypt with which keys and sharing the right keys with the right users. There are some good patterns here like using a key per resource, such as a tenant, workspace or group. You can then encrypt data within that resource using a specific key and share the key with user when they get access to the resource (e.g.: when added to the group). Electric is good at syncing keys. For example, you could define a shape like: ```ts const stream = new ShapeStream({ url: `${ELECTRIC_URL}/v1/shape`, params: { table: 'tenants', columns: ['keys'], where: `id in ('${user.tenant_ids.join(`', '`)}')`, }, }) ``` Either in your client or in your proxy. You could then put a denormalised `tenant_id` column on all of your rows and lookup the correct key to use when decrypting and encrypting the row. ### Filtering The [HTTP API](/docs/api/http) streams a log of change operations. You can intercept this at any level -- in your API, in a middleware proxy or when handling or materialising the log from a ShapeStream instance in the client. ## Using your existing tools Because Electric syncs over HTTP, it integrates with standard debugging, visibility and monitoring tools. ### Monitoring You can see Electric requests in your standard HTTP logs. You can catch errors and send them with request-specific context to systems like Sentry and AppSignal. You can debug on the command line [using `curl`](/docs/quickstart#http-api). ### Browser console One of the most important aspects of this is being able to see and easily introspect sync requests in the browser console. This allows you to see what data is being sent through when and also allows you to observe caching and offline behaviour. You don't need to implement custom tooling to get visibility in what's happening with Electric. It's not a black box when it comes to debugging in development and in production. ## Next steps This post has outlined how you can develop [local-first software](/sync) incrementally, using your existing API alongside [Electric](/products/postgres-sync) for read-path sync. To learn more and get started with Electric, see the [Quickstart](/docs/quickstart), [Documentation](/docs/intro) and source code on GitHub: --- # Source: https://electric-sql.com/blog/posts/2024-12-10-electric-beta-release.md --- url: /blog/posts/2024-12-10-electric-beta-release.md description: >- The Electric sync engine is now in BETA. If you haven't checked out Electric recently, it's a great time to take another look. --- With version [`1.0.0-beta.1`](https://github.com/electric-sql/electric/releases) the Electric sync engine is now in BETA! If you haven't checked out Electric recently, it's a great time to [take another look](/docs/intro). ## What is Electric? [Electric](/products/postgres-sync) is a Postgres sync engine. We do real-time [partial replication](/docs/guides/shapes) of Postgres data into local apps and services. Use Electric to swap out data *fetching* for [data *sync*](/sync). Build apps on instant, real-time, local data. Without having to roll your own sync engine or change your stack. 
We also develop [PGlite](/products/pglite), a lightweight WASM Postgres you can run in the browser. ## The path to BETA Six months ago, we [took on a clean re-write](/blog/2024/07/17/electric-next). [First commit](https://github.com/electric-sql/archived-electric-next/commit/fc406d77caca923d1fb595d921102f25c7ce3856) was on the 29th June 2024. [600 pull requests later](https://github.com/electric-sql/electric/pulls?q=is%3Apr+is%3Aclosed), we're ready for adoption into production apps. ## Production ready Electric and PGlite are being used in production by companies including [Google](https://firebase.google.com/docs/data-connect), [Supabase](https://database.build), [Trigger.dev](https://trigger.dev/launchweek/0/realtime), [Otto](https://ottogrid.ai) and [Doorboost](https://www.doorboost.com). > We use ElectricSQL to power [Trigger.dev Realtime](https://trigger.dev/launchweek/0/realtime), a core feature of our product. When we execute our users background tasks they get instant updates in their web apps. It's simple to operate since we already use Postgres, and it scales to millions of updates per day. > *— [Matt Aitken](https://www.linkedin.com/in/mattaitken1985), Founder & CEO, [Trigger.dev](https://trigger.dev)* > At [Otto](https://ottogrid.ai), we built a spreadsheet product where every cell operates as its own AI agent. ElectricSQL enables us to reliably stream agent updates to our spreadsheet in real-time and efficiently manage large spreadsheets at scale. It has dramatically simplified our architecture while delivering the performance we need for cell-level reactive updates. > *— [Sully Omar](https://x.com/SullyOmarr), Co-founder & CEO, [Otto](https://ottogrid.ai)* > At [Doorboost](https://www.doorboost.com) we aggregate millions of rows from a dozen platforms, all of which gets distilled down to a simple dashboard. With Electric we have been able to deliver this dashboard in milliseconds and update live. Moving forward, we will be building all our products using Electric. > *— [Vache Asatryan](https://am.linkedin.com/in/vacheasatryan), CTO, [Doorboost](https://doorboost.com)* ### Scalable So many real-time sync systems demo well but break under real load. Electric has been [engineered from the ground up](/docs/api/http) to handle high-throughput workloads, like [Trigger.dev](https://trigger.dev/launchweek/0/realtime), with low latency and flat resource use. You can stream real-time data to **millions of concurrent users** from a single commodity Postgres. The chart below is from our [cloud benchmarks](/docs/reference/benchmarks#cloud), testing Electric's memory usage and latency with a single Electric service scaling real-time sync from 100k to 1 million concurrent clients under a sustained load of 960 writes/minute. Both memory usage and latency are essentially flat: You can also see how large-scale apps built with Electric feel to use with our updated [ Linearlite](/demos/linearlite) demo. This is a [Linear](https://linear.app) clone that loads 100k issues and their comments through Electric into PGlite (~150mb of data). Once loaded, it's fully interactive and feels instant to use: ## Easy to adopt We've iterated a lot on our APIs to make them as simple and powerful as possible. There should be no breaking changes in minor or patch releases moving forward. 
We've updated our [Documentation](/docs/intro), with a new [Quickstart](/docs/quickstart) and guides for topics like:

* how to do [auth](/docs/guides/auth)
* how to handle [local writes](/docs/guides/writes)
* how to do [partial replication with Shapes](/docs/guides/shapes)
* how to [deploy Electric](/docs/guides/deployment)
* how to [write your own client](/docs/guides/client-development) for any language or environment

We have [client libraries](/docs/api/clients/typescript), [integration docs](/docs/integrations/react), [demo apps](/demos) and [technical examples](/demos#technical-examples) showing how to use Electric with different patterns and frameworks:

#### Interactive demos

### Incrementally

You can adopt Electric one component and one route at a time. Wherever you have code doing something like this:

```tsx
import React, { useState, useEffect } from 'react'

const MyComponent = () => {
  const [items, setItems] = useState([])

  useEffect(() => {
    const fetchItems = async () => {
      const response = await fetch('https://api.example.com/v1/items')
      const data = await response.json()

      setItems(data)
    }

    fetchItems()
  }, [])

  return
}
```

Swap it out for code like this (replacing the `fetch` in the `useEffect` with [`useShape`](/docs/integrations/react)):

```tsx
import { useShape } from '@electric-sql/react'

const MyComponent = () => {
  const { data: items } = useShape({
    url: 'https://electric.example.com/v1/shape',
    params: {
      table: 'items',
    },
  })

  return
}
```

This works with *any* Postgres [data model and host](/docs/guides/deployment), any data type, extension and Postgres feature. Including [pgvector](https://github.com/pgvector/pgvector), [PostGIS](https://postgis.net), sequential IDs, unique constraints, etc. You don't have to change your data model or your migrations to use Electric.

### With your existing API

Because Electric syncs [over HTTP](/docs/api/http), you can use it together [with your existing API](/blog/2024/11/21/local-first-with-your-existing-api).

This allows you to handle concerns like [auth](/docs/guides/auth) and [writes](/docs/guides/writes) with your existing code and web service integrations. You don't need to codify your auth logic into database rules. You don't need to replace your API endpoints and middleware stack.

## Take another look

With this BETA release, Electric is stable and ready for prime time use. If you haven't checked it out recently, it's a great time to take another look.

### Sign up for early access to Electric Cloud

We're also building [Electric Cloud](/cloud), which provides managed Electric hosting (for those that don't want to [host Electric themselves](/docs/guides/deployment)). If you're interested in using Electric Cloud, you can sign up for early access here:

---

# Source: https://electric-sql.com/blog/posts/2025-03-17-electricsql-1.0-released.md

---
url: /blog/posts/2025-03-17-electricsql-1.0-released.md
description: >-
  With version 1.0 Electric is now in GA. The APIs are stable and the sync
  engine is ready for mission critical, production apps.
---

With [version 1.0.0](https://github.com/electric-sql/electric/releases/tag/%40core%2Fsync-service%401.0.0), Electric is now in GA. The APIs are stable and the sync engine is ready for mission critical, production apps.

It's been a huge effort by the [whole team](/about/team). We've put our heart and soul into it. We know there are a lot of teams waiting for this milestone. We're really excited to see what you build with Electric now it's hit 1.0!

## What is Electric?

Sync makes apps awesome.
Electric solves sync. [Electric](/) is a Postgres sync engine. It handles the core concerns of [partial replication](/docs/guides/shapes), [fan out](/docs/api/http#caching), and [data delivery](/docs/reference/benchmarks#cloud). So you can build awesome software without rolling your own sync. ## The path to 1.0 In 2024 we [re-built Electric from scratch](/blog/2024/07/17/electric-next) to be simpler, faster, more reliable and more scalable. In December 2024, [we hit BETA](/blog/2024/12/10/electric-beta-release#the-path-to-beta) with production users, [proof of scalability](/docs/reference/benchmarks) and a raft of updated [docs](/docs/intro) and [demos](/demos). Since then, we've launched a [managed cloud platform](/cloud), run / supported a wide range of production workloads from both open-source and cloud users, tested with [Antithesis](https://www.antithesis.com) and merged 200 bug-fix and reliability PRs. ## Stable APIs With the 1.0 release, the core [Electric sync service APIs](/docs/intro) are now stable. Our policy is now no backwards-incompatible changes in patch or minor releases. You can now build on Electric without tracking the latest changes. ## Production ready Electric is stable, reliable and scales. It's been stress-tested in production for some time now by companies like [Trigger](https://trigger.dev), [Otto](https://ottogrid.ai) and [IP.world](https://ip.world). > We use ElectricSQL to power [Trigger.dev Realtime](https://trigger.dev/launchweek/0/realtime), a core feature of our product. When we execute our users background tasks they get instant updates in their web apps. It's simple to operate since we already use Postgres, and it scales to millions of updates per day. > *— [Matt Aitken](https://www.linkedin.com/in/mattaitken1985), Founder & CEO, [Trigger.dev](https://trigger.dev)* > At [Otto](https://ottogrid.ai), we built a spreadsheet product where every cell operates as its own AI agent. ElectricSQL enables us to reliably stream agent updates to our spreadsheet in real-time and efficiently manage large spreadsheets at scale. It has dramatically simplified our architecture while delivering the performance we need for cell-level reactive updates. > *— [Sully Omar](https://x.com/SullyOmarr), Co-founder & CEO, [Otto](https://ottogrid.ai)* We process millions of requests and transactions each day. With hundreds of thousands of active [shapes](/docs/guides/shapes) and application users. The chart below is from our [cloud benchmarks](/docs/reference/benchmarks#cloud), showing flat, low latency and memory use scaling sync to 1 million concurrent clients on a single commodity Postgres: ## Increasingly powerful We've been focused on making Electric small and stable. So it scales and just works. Running real workloads has been key to this, as it's given us a tight feedback loop and flushed out real world bugs and edge cases. At the same time, it's also given us a lot of insight into demand for what to build next. And we have some seriously cool stuff coming. From more expressive partial replication primitives to advanced stream processing, database sync and client-side state management. 
More on these soon but to give a sneak preview of some of the work in progress: * [electric-sql/d2ts](https://github.com/electric-sql/d2ts) differential dataflow in Typescript to allow for flexible, extensible stream processing in front of Electric (in the client or at the cloud edge) * [TanStack/optimistic](https://github.com/TanStack/optimistic) collaboration with [TanStack](https://tanstack.com/) to create a new library to simplify managing optimistic state in the client. This is early but the DX looks really promising * [electric-sql/phoenix\_sync](https://github.com/electric-sql/phoenix_sync) Phoenix.Sync library to add sync to the [Phoenix](https://www.phoenixframework.org) web framework * [LiveStore](https://livestore.dev/getting-started/react-web) highly performant reactive state management solution for web and mobile apps with first-class support for syncing with Electric As we build towards 2.0 and 3.0, Electric is only going to become more expressive, more powerful and easier to use. We're super excited for what's ahead and we hope you'll join us on the journey. ## Next steps [Sign up for Cloud](/cloud), dive into the [Quickstart](/docs/quickstart), join the [Discord](https://discord.electric-sql.com) and star us on [GitHub](https://github.com/electric-sql/electric). --- # Source: https://electric-sql.com/blog/posts/2025-04-07-electric-cloud-public-beta-release.md --- url: /blog/posts/2025-04-07-electric-cloud-public-beta-release.md description: >- Electric Cloud is now in public BETA! This means it's open to everyone for immediate access. --- [Electric Cloud](https://dashboard.electric-sql.cloud) is in public BETA! It's open to everyone for immediate access. You can [create your account here](https://dashboard.electric-sql.cloud) and start using it straight away to sync data and build apps. Use the new dashboard to connect and manage backing Postgres databases, and see system logs and service health and status. Electric Cloud is our managed service for our [open-source Postgres sync engine](https://electric-sql.com/). It solves the hard problems of sync for you, including [partial replication](/docs/guides/shapes), [fan-out](/docs/api/http#caching), and [data delivery](/docs/api/http). As well as being easy to [use](/docs/intro), [integrate](/blog/2024/11/21/local-first-with-your-existing-api) and [get-started with](/docs/quickstart), Electric Cloud is also [highly performant and scalable](/docs/reference/benchmarks#cloud), with an integrated CDN. Unlike other systems that demo well and fall over, you can build real-time apps on Electric Cloud and not worry that they're going to explode or fall over when you hit hockey stick growth. The chart below is from our [cloud benchmarks](/docs/reference/benchmarks#cloud), testing Electric's memory usage and latency with a single Electric service scaling real-time sync from 100k to 1 million concurrent clients under a sustained load of 960 writes/minute. Both memory usage and latency are essentially flat: ## Real-time features shouldn't be this hard Every major web app we’ve worked on, including [Gatsby Cloud](https://www.gatsbyjs.com/docs/reference/cloud/what-is-gatsby-cloud/), [Posterhaste](https://www.posterhaste.com/), [OpenIDEO](https://www.openideo.com/), and [Pantheon](https://pantheon.io/) has had critical real-time features. And they were the most fragile, frustrating parts of the app. 
The real-time plumbing was complex to build and operate (Redis pub/sub and websocket servers) and we could never get to 100% reliable event delivery: almost daily, we'd get support requests from customers hitting edge cases around race conditions or connectivity issues.

Our team has been building apps and doing research on sync problems for decades, and we have all felt this pain. Talking to other developers, we heard the same frustrations:

"Our app felt sluggish because every interaction required a network round trip."

"We spent weeks building and debugging our real-time infrastructure."

"Our state management code was 3x larger than our actual business logic."

Something is fundamentally broken with how we build modern apps.

### Especially when building AI apps

AI apps are inherently real-time, whether that's [token streaming](/blog/2025/04/09/building-ai-apps-on-sync#resumability) or [user <> agent collaboration](/blog/2025/04/09/building-ai-apps-on-sync#collaboration). [Keeping agents and users in sync](/blog/2025/04/09/building-ai-apps-on-sync) yourself with a combination of ad-hoc data fetching and notifications is a massive pain.

## Sync: the missing abstraction for simple, fast apps

The ElectricSQL team came together to build a proper abstraction for data synchronization. We asked ourselves: instead of manually orchestrating data fetching, caching, and real-time updates, what if developers could simply declare what data they need, and have it automatically stay in sync between the server and client?

That's why we built Electric — an open-source sync engine that works directly with Postgres. We had three core requirements:

1. **Zero assumptions about your stack**: It should work with any Postgres database, any data model, and any frontend framework.
2. **Simple to integrate**: It should be a thin layer that fits into existing architectures without requiring a rewrite.
3. **Infinitely scalable**: It should handle millions of concurrent users without breaking a sweat.

With Electric, you don't need to write imperative state transfer code. You don't need complex state management frameworks. You don't need to engineer for high uptime when your app naturally tolerates network issues. Instead, you get:

* **Dead-simple integration**: Sync data directly from your Postgres database
* **Instant responsiveness**: Data is locally available, making your app lightning fast
* **Offline support**: Local data reads keep working regardless of connectivity
* **Real-time by default**: Changes propagate automatically to all connected clients
* **Reduced cloud costs**: Move data and compute to the client, lowering your server load

We released version 1.0 of the open-source Electric sync engine a few weeks ago. And today, we're launching **Electric Cloud** — a managed platform that gives you all the benefits of sync in just 30 seconds.

## **Try Electric Cloud Today**

Getting started is dead simple:

1. Connect your existing Postgres database via a standard connection string
2. Specify what data you want to sync using our simple Shape API
3. Use our client libraries to bind that data directly to your UI

That's it. No complex infrastructure to set up or maintain. No opinionated frameworks to adopt. Just real-time sync, solved.

Companies like Trigger.dev are already using Electric in production, noting that "it's simple to operate as we already use Postgres, and it scales to millions of updates per day."
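To make steps 2 and 3 above concrete, here's a minimal sketch of binding a synced shape straight to a React component. The URL is a placeholder for wherever your Electric Cloud source is served, and any Cloud-specific credentials are omitted:

```tsx
import { useShape } from '@electric-sql/react'

// Step 2: declare the data you want to sync as a Shape.
// Step 3: bind it straight to your UI with the client library.
// The URL below is a placeholder for your Electric Cloud source.
const Items = () => {
  const { data: items } = useShape({
    url: 'https://your-electric-source.example.com/v1/shape',
    params: {
      table: 'items',
    },
  })

  return <pre>{JSON.stringify(items, null, 2)}</pre>
}

export default Items
```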
**Ready to add sync to your app in 30 seconds?** [Sign up for the Electric Cloud Public Beta](https://dashboard.electric-sql.cloud) today. We can't wait to see what you build with it 🚀 *** *Have questions? Join our[ Discord community](https://discord.gg/electric), check out our[ documentation](/docs/intro), or find us on[ GitHub](https://github.com/electric-sql/electric).* --- # Source: https://electric-sql.com/blog/posts/2025-04-09-building-ai-apps-on-sync.md --- url: /blog/posts/2025-04-09-building-ai-apps-on-sync.md description: >- AI apps are collaborative. Building them requires solving resumability, interruptibility, multi‑tab, multi‑device and multi‑user. --- AI apps are inherently collaborative. Building them requires solving [resumability](#resumability), [interruptibility](#interruptibility), [multi‑device](#multi-device) and [multi‑user](#multi-user). These are not edge-cases. They're core to [user <-> agent collaboration](#collaboration) and the new world of [multi‑step, task‑and‑review workflows](#multi-step-workflows). They're also [key growth hacks](#unlocking-adoption) for products looking to replace current-generation SaaS and enterprise software. As AI apps become more collaborative, with [multiple users interacting with the same AI session](#collaboration) and those sessions spawning [more and more agents](#swarms), these challenges are only going to get more important. Luckily, they're all [solved by sync](#sync-is-the-solution). > \[!Warning] ✨ Electric AI chat app > See the [electric-sql/electric-ai-chat](https://github.com/electric-sql/electric-ai-chat) repo for the example app accompanying this post. ## Resumability Most AI apps stream tokens into the front-end. That's how Claude and ChatGPT write out their response to you, one word at a time. If you stream directly from the agent to the UI, you have a fragile system. Your app breaks when the connection drops and when the user refreshes the page. For example, here's a video showing how ChatGPT behaves: If, instead, you stream tokens into a store and then subscribe to that store, you can build non-fragile, resilient apps where the data isn't lost when a connection drops. For example, here's our [Electric AI chat app](https://github.com/electric-sql/electric-ai-chat), streaming tokens via a store (in this case [a Postgres database](/docs/guides/deployment#_1-running-postgres)). It handles offline, patchy connectivity and page refreshes without a problem: The key to this behaviour is *resumability*: the ability to resume streaming from a known position in the stream. To do this, the app keeps track of the last position its seen. Then when re-connecting, it requests the stream from that position. This pattern is fiddly to wire up yourself (message delivery is a [distributed systems rabbit hole](https://jepsen.io/consistency/models)) but is *built in* to sync engines for you. For example, Electric's [sync protocol](/docs/api/http) is based on the client sending an `offset` parameter. This is usually abstracted away at a [higher-level](/docs/api/clients/typescript), e.g.: ```tsx import { ShapeStream } from '@electric-sql/client' const tokenStream = new ShapeStream({ params: { table: 'tokens', }, }) // tokenStream.subscribe(tokens => ...) ``` But under the hood, the sync protocol provides automatic resumability. So apps just work and users don't swear at your software when their vibes disappear. ## Multi-device You know another thing users do? They open multiple browser tabs and they flit in and out of your app. 
Talk to Claude, check your emails, talk to Claude, check Instagram, ...

So what do you do when they open your app in two tabs at the same time? They can't remember which tab they used last. They're just confused when their session isn't there. Where did my vibes go?!

Or worse, they kick off the same prompt twice because they think it's not running. Now they have two threads competing to do the same thing. Who are they going to blame? Your software.

So even just the possibility of multiple browser tabs means you need to split that stream and keep both tabs in sync.

But, of course, the world is not just about browser tabs. Agents do stuff in the background. What are the chances your user is going to grab their mobile, nip across to [Linea Coffee](https://lineacaffe.com) on Mariposa and check progress while waiting in the queue? When they do so, how do you keep the mobile app up-to-date with the session that was started in the browser?

This is exactly what sync does. It handles *fan out*, so you can (resiliently) stream changes to multiple places at the same time. For example, with Electric, you can just write changes to Postgres and then Electric takes care of fanning-out data delivery to as many clients as you like (you can literally scale to [millions of clients](/docs/reference/benchmarks#cloud) straight out of the box).

So whichever device your user grabs or tab they return to, it can be up-to-date and exactly in the state they're expecting:

## Multi-user

In an [Onion-style newsflash](https://theonion.com/area-man-accepts-burden-of-being-only-person-on-earth-w-1819579668/), it turns out that our brave user is not the only person in the world. They have work colleagues, friends and family members.

SaaS was designed around this. Work colleagues can collaborate on Figma designs. Friends and family members can plan holidays using Airbnb wishlists. Now that we have AI, collaboration-by-clicking-buttons is going to be replaced by interacting with agents.

That direct stream from the agent to the UI, it's single-user. It doesn't work for collaboration.

For multi-user, you need the same pattern as with resumability and multi-device. Stream through a store with fan-out. As long as you stream the right sessions to the right users.

That's what sync engines like Electric and [Figma's LiveGraph](https://www.figma.com/blog/livegraph-real-time-data-fetching-at-figma/) do. They handle resilient streaming and fan-out, with partial replication. So the right data syncs to the right users.

For example, with Electric, you can define partial replication using [Shapes](/docs/guides/shapes), filtering just the content you need using `where` clauses:

```tsx
const tokenStream = new ShapeStream({
  url: 'http://localhost:3000/v1/shape',
  params: {
    table: 'tokens',
    // Just sync the tokens for a given session.
    where: 'session_id = 1234',
  },
})
```

Which really changes the game for AI UX. Because it allows multiple users to collaborate on the same AI session.

### Collaboration

For example, here we show two users collaborating on the same task. The first user prompts the AI. The second user is watching in real-time. They see that the AI needs more context and upload a document to provide it. The AI sees this and generates a better response.

This is a simple example (just the tip of the iceberg of [things to come](#agents-are-users)). However, it already clearly illustrates how AI apps need to be built on real-time sync, in order to facilitate multi-user collaboration.
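As a rough sketch of how a collaborative view like this might be wired up (the component, table and column names here are illustrative, not taken from the example app), both users simply render the same synced Shape, so either user's changes show up for the other in real time:

```tsx
import { useShape } from '@electric-sql/react'

// Both users render this component for the same session.
// Electric fans the changes out, so each sees the other's
// updates (and the AI's tokens) as they land in Postgres.
function SessionMessages({ sessionId }: { sessionId: string }) {
  const { data: messages } = useShape({
    url: 'http://localhost:3000/v1/shape',
    params: {
      table: 'messages',
      where: `session_id = '${sessionId}'`,
    },
  })

  return (
    <ol>
      {messages.map((message) => (
        <li key={message.id}>{message.content}</li>
      ))}
    </ol>
  )
}
```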
### Interruptibility

Streaming tokens via a store also makes it simple to interrupt the stream for all users.

Rather than each user streaming from an agent, the agent streams into the store. Any user can then issue an instruction to interrupt, which aborts the token stream from the agent and stops it being written to the store. This naturally interrupts the session for all concurrent users. For example:

```ts
// Stream tokens from the OpenAI API.
const stream = await openai.chat.completions.create({
  model,
  messages,
  stream: true,
})

// Into Postgres (e.g. using a node-postgres client).
for await (const event of stream) {
  const token = event.choices[0]?.delta?.content

  if (token) {
    await pg.query('INSERT INTO tokens (content) VALUES ($1)', [token])
  }
}

// Until interrupted.
function interrupt() {
  stream.controller.abort()
}
```

This fixes the problem where the user is frantically clicking or saying "stop" but Claude just ignores it and carries on generating artifacts.

## Agents are users

Human users are not the only thing that can interrupt flows and update data.

An agent is not just an interface. An agent is an actor. Agents can [send notifications](https://modelcontextprotocol.io/docs/concepts/transports#notifications) and [update application state](https://modelcontextprotocol.io/docs/concepts/tools). So, as soon as you have a user interacting with an agent, you have a multi-user app.

Every conversation with an AI agent is inherently multi-user. It's at least you and the AI.

### Swarms

You're also not going to just have one agent. Soon, we're all going to have [swarms of agents](https://github.com/openai/openai-agents-python) running around in the background for us.

These are going to need to share context and have situational awareness. Tools like [LangGraph](https://www.langchain.com/langgraph) and [Mastra](https://mastra.ai/blog/mastra-storage) provide a shared data layer for agents. However, they don't solve the last mile problem of syncing into user-facing apps to also keep the human in the loop. State can't just be in the cloud. Users have agency too!

For example, imagine you're managing a project and you have an AI assistant. You tell it to "monitor the todo list and perform the tasks". You then fire up a new session with another agent to plan out the project and generate tasks.

These agents need to collaborate via shared state. In this example, the todo-list. They need to know when it's changed and react to the changes. And so do the users! They want to see the state too.
For example, this is the Electric code for the agents to monitor and react to the todo list (full example in [`tools/todo/process.ts`](https://github.com/electric-sql/electric-ai-chat/blob/main/packages/api/src/ai/tools/todo/process.ts)):

```ts
const listItemsStream = new ShapeStream({
  url: `${ELECTRIC_API_URL}/v1/shape`,
  params: {
    table: 'todo_items',
    where: `list_id = '${listId}'`,
  },
})
const listItemsShape = new Shape(listItemsStream)

async function processNextItem() {
  const item = listItemsShape.currentRows.find((item) => !item.done)

  if (item) {
    // Perform the task using the agent
  }
}

let processing = false
async function processItems() {
  if (processing) return

  processing = true
  while (listItemsShape.currentRows.some((item) => !item.done)) {
    await processNextItem()
  }
  processing = false
}

listItemsShape.subscribe(async () => {
  await processItems()
})
```

And this is the code that shows the same state to the user (full example in [`components/Todo.tsx`](https://github.com/electric-sql/electric-ai-chat/blob/main/packages/app/src/components/Todo.tsx)):

```tsx
function TodoListItems() {
  const { data: todoListItems } = useShape({
    url: `${ELECTRIC_API_URL}/v1/shape`,
    params: {
      table: 'todo_lists_items',
    },
  })

  return (
    <ul>
      {todoListItems.map((todoListItem) => (
        <li key={todoListItem.id}>
          {todoListItem.task} {todoListItem.done && <span>Done</span>}
        </li>
      ))}
    </ul>
  )
}
```

### Structure

So far, when discussing streaming, we've focused on tokens. But models are just as adept at returning structured data.

This is another major advantage of streaming through a store. That store can be a structured database. This allows agents to collaborate on different parts of a shared state, by working on different parts of a structured data model.

For example, one agent can be outlining the high-level structure of a Figma project whilst another agent fills in the details on each of the canvases.

### Chaos

When you call an API or function you typically know the "blast radius" of what data it can change. So you can know what to refetch.

When you interact with an AI agent (that has any kind of agency) you don't know what it's going to change. So you either need to constantly track and re-fetch everything. Or you need to monitor what data changes, so that you're automatically informed about it.

What you really need is a way of declaring the subset of the data that the app, agent or UI needs, in order to monitor it, stay up-to-date and respond to changes.

That's why Sunil Pai says that [AI agents are local-first clients](https://sunilpai.dev/posts/local-first-ai-agents/) and that's why Theo Brown is [searching for the ideal sync engine](https://youtu.be/3gVBjTMS8FE).

## Sync is the solution

Sync solves a range of practical challenges with AI UX. From resumability and interruptibility to multi-tab, multi-device and multi-user.

As AI agents become more collaborative and autonomous (and lots more of them are spawned), then sharing state, reviewing progress, reacting to changes and maintaining local data sets are all going to get more important.

### Unlocking adoption

One of the main opportunities for AI startups and teams building AI apps is to replace current-generation software with smarter, AI-powered systems. Particularly in B2B and enterprise software, which tends to be built around team-based collaboration, with support for multiple users with different roles.

This is where the ability to build multi-user, collaborative AI apps is key to adoption. Single-user AI sessions are not going to cut it. To replace incumbent systems and get wide adoption across the enterprise, AI apps need to support team-based collaboration.

As we've seen, that means keeping multiple users and agents in sync. Sync is a hard problem to solve yourself — and the last thing you want to be spending time on when you could be building your core product. That's why AI apps with ambition should be built on a sync engine, like [Electric](/), that solves sync for you.

### Let's jump in

This post is accompanied by a resilient, multi-user, multi-agent AI chat demo. The source code is on GitHub at [electric-sql/electric-ai-chat](https://github.com/electric-sql/electric-ai-chat) and the demo is deployed online at [electric-ai-chat.examples.electric-sql.com](https://electric-ai-chat.examples.electric-sql.com).

Start by cloning the repo:

```sh
git clone https://github.com/electric-sql/electric-ai-chat.git
cd electric-ai-chat
```

Make sure you have [Node](https://nodejs.org/en/download), [pnpm](https://pnpm.io/installation), [Docker](https://docs.docker.com/compose/install/) and an [OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key).
Install the dependencies:

```sh
pnpm install
```

Start Postgres and Electric using Docker:

```sh
docker compose up -d
```

Start the backend API:

```sh
export OPENAI_API_KEY=
pnpm dev:api
```

You can then run the demo app with:

```sh
pnpm dev:app
```

Open your browser at [localhost:5173](http://localhost:5173).

### More info

See the [Docs](/docs/intro) and [Demos](/demos), including the [TypeScript Client](/docs/api/clients/typescript) and [React bindings](/docs/integrations/react).

If you have any questions, [join the Discord](https://discord.electric-sql.com), where you can connect with the Electric team and other developers building on sync.

When you're ready to deploy, the easiest way to get up-and-running with sync in 30 seconds is to use the [Electric Cloud](https://dashboard.electric-sql.cloud).

---

# Source: https://electric-sql.com/blog/posts/2025-04-22-untangling-llm-spaghetti.md

---
url: /blog/posts/2025-04-22-untangling-llm-spaghetti.md
description: >-
  LLMs are generating code. That code is imperatively fetching data. That
  leads to a big ball of spaghetti.
---

LLMs are generating code. That code is imperatively fetching data. That leads to a big ball of spaghetti.

For example, [Bolt](https://bolt.new) and [Lovable](http://lovable.dev) use Supabase, generating code like this:

```js
const fetchTodos = async () => {
  try {
    setLoading(true)

    const { data, error } = await supabase
      .from('todos')
      .select('*')
      .order('created_at', { ascending: false })

    if (error) throw error

    setTodos(data || [])
  } catch (error) {
    console.error('Error fetching todos:', error)
  } finally {
    setLoading(false)
  }
}
```

This code is imperative. It's a function inlined into a React component that's called when the component mounts.

You see the problem already. The more components like this you generate, the more requests are made. More loading spinners, more waterfalls, more failure modes. More stale data when components don't re-fetch on update.

It's a big ball of spaghetti.

So what's the solution? Declarative data dependencies and a sync engine. So that the network requests, the actual data fetching, can be delegated to a system that optimises data transfer and placement for you.

Start with a logical data model. Your LLM understands that — it has no problem generating and evolving a schema for you. Then, instead of telling the LLM to generate code that fetches data, tell it to use a sync engine and generate code that *declares* the data that your component needs.

[GraphQL](https://relay.dev/docs/tutorial/fragments-1/) does it with fragments and an aggregated top-level fetch:

```js
export const TodoFragment = graphql`
  fragment TodoFragment on Todo {
    id
    text
    complete
    createdAt
    updatedAt
    relationship user { id name }
  }
`
```

[Zero](https://zero.rocicorp.dev/docs/reading-data) does it with code that's almost identical to the Supabase example but actually delegates fetching to a sync engine under the hood:

```js
function TodoList() {
  const z = useZero()

  let todoQuery = z.query.todo
    .related('user')
    .limit(100)

  const [todos, todosDetail] = useQuery(todoQuery)

  // ...
}
```

In general, local-first systems use a local store like [Valtio](https://valtio.dev) or an embedded database like [PGlite](https://pglite.dev) and a sync engine like [Electric](/) to keep the data in sync:

```js
const shape = await pg.electric.syncShapeToTable({
  shape: {
    url: 'http://localhost:3000/v1/shape',
    params: {
      table: 'todo',
    },
  },
  table: 'todo',
  primaryKey: ['id'],
})
```

Your components can then interface directly with the local store.
For example, with PGlite, you can use [live SQL queries](https://pglite.dev/docs/framework-hooks/react#uselivequery) to declare what data the component needs:

```js
function TodoList() {
  const todos = useLiveQuery(`SELECT * FROM todos;`, [])
}
```

This works perfectly with a platform like [Supabase](/docs/integrations/supabase) or [Neon](/docs/integrations/neon) powering the database hosting in the cloud. However, the network requests, the actual data fetching, are managed by the sync engine behind the scenes.

The LLM doesn't need to know how. It certainly doesn't need to be writing code that fires off fetch requests at all angles and stages of your rendering pipeline.

This has always been the [endgame for state transfer](/blog/2022/12/16/evolution-state-transfer) and the [next evolution of cloud programming](https://www.cidrdb.org/cidr2021/papers/cidr2021_paper16.pdf). A key ingredient of Andrej Karpathy's [Software 2.0](https://karpathy.medium.com/software-2-0-a64152b37c35) and Rich Hickey's [Simple Made Easy](https://youtu.be/SxdOUGdseq4). But it's even more important now that LLMs are writing the code.

Tell your LLM to stop writing code that does imperative data fetching. Tell it to start using declarative data bindings with a sync engine like [Electric](/) instead.

## Next steps

The simplest way to generate code with Electric is to add our [llms.txt](/llms.txt) to your project context and just tell your LLM to use Electric.

See our [AGENTS.md](/docs/agents) and [Building AI apps? You need sync](/blog/2025/04/09/building-ai-apps-on-sync) post.

---

# Source: https://electric-sql.com/blog/posts/2025-06-05-database-in-the-sandbox.md

---
url: /blog/posts/2025-06-05-database-in-the-sandbox.md
description: >-
  More play, less infra. With PGlite you can vibe code with a database built
  into the sandbox.
---

More play, less infra. With PGlite you can vibe code with a database in the sandbox.

AI app builders like [Bolt](https://bolt.new), [Lovable](https://lovable.dev) and [Replit](https://replit.com) can generate database-driven apps and run them in a sandboxed dev environment. However, to actually work, these apps need to connect to a database. This breaks the sandbox encapsulation and adds friction to the development experience.

[PGlite](https://pglite.dev) is a Postgres database that runs inside your dev environment. With it, you can one-shot database-driven apps that run without leaving the sandbox. So you can vibe code real apps without even thinking about infra.

> \[!Warning] ✨ Try it on Bolt.new
> Copy the one-shot [prompt examples](https://pglite.dev/docs/pglite-socket#llm-usage) from the PGlite docs. Or fork [this Bolt app](https://bolt.new/~/sb1-tgukxuwd).

## More play, less infra

AI app builders like [Bolt](https://bolt.new), [Lovable](https://lovable.dev) and [Replit](https://replit.com) are amazing tools for building apps. They're automating a lot of the drudge and opening up development to a whole new audience of [barefoot developers](https://www.youtube.com/watch?v=qo5m92-9_QI).

Apps tend to be backed by a database. Usually [Postgres](https://www.postgresql.org). So, when an AI app builder generates a new app, it needs to be connected to a database in order to actually work.

For example, here's Bolt.new prompted to create a "wish list" app using Vite and Node.js (about as standard a stack as you can get).
It one-shots the code fine but fails to run the app because it doesn't have a database connected:

Bolt literally prints the message:

> To get started, you'll need to have PostgreSQL installed and running with the connection details matching those in the `.env` file.

Which is kinda crazy, right? The whole point of the Bolt developer experience (and the same is true of other platforms like Lovable and Replit) is that it generates the code and runs it for you in a sandboxed development environment in the browser. Yet to make the most basic functional app work, you need to ... install system packages? Wire up external database connections?

This may be fairly simple stuff for experienced developers (but is friction nonetheless) and it presents a major barrier to the new audience of barefoot developers who don't know how this stuff works.

### Breaking encapsulation

Now ... there is a solution built into the platforms for this. That is to connect your [Supabase](https://supabase.com) or [Neon](https://neon.com) account, depending on which app builder you're using:

Once connected, you can create a database and then wire in the credentials. Sometimes the AI does this for you. In other cases, it writes unhelpful keys into your `.env` file and you have to debug getting the right connection string into your database driver.

So you can make this work. (And it's a [well-trodden path](https://x.com/kiwicopple/status/1862433123192955016)). However, what you *now* have is a sandboxed development environment that's tied to an external database resource. This creates *even more* friction and limits the flexibility of the app builder experience.

For example, using Bolt, you can click a button to fork, aka duplicate, your application:

Do you want the fork to connect to the same database instance? Or a different one?

If it's the same database, there's no isolation. Bugs in one version of the app will cause bugs in another. Schema changes in one will break the other.

If you're creating the fork to play around and then throw away, you probably want a clean database. But how do you bootstrap that with the same content? How do you clean up the database when you throw away the fork?

This stuff is meant to be simple and automated. But with an external database, it's complex and full of friction.

### Database in the sandbox

What if ... instead of connecting the app to an external database, you could just have the database inside the sandbox?

If you dig into a platform like Bolt, you'll see it runs the full development environment, with both front-end *and* back-end services, inside a [WebContainer](https://webcontainers.io). What if the database was *also* able to run inside the WebContainer?

Well, with PGlite, it can.

[PGlite](https://pglite.dev) is an embeddable Postgres database that's designed to run inside the web browser. With the recent addition of the new [PGlite Socket](https://pglite.dev/docs/pglite-socket) library, it can now also happily run inside a WebContainer in a way that's compatible with existing Postgres drivers.

The steps to adapt a standard app to use it are simple enough to [one-shot prompt](https://pglite.dev/docs/pglite-socket#llm-usage):

* install the `@electric-sql/pglite` and `@electric-sql/pglite-socket` libraries
* update the Node `package.json` to run the PGlite server (see the sketch below)
* configure the app to connect to it

With these steps in the prompt, the app just works:

The user does not need to think about infra. The database is self-contained inside the sandbox. The code runs first time.
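For a sense of what those steps amount to, here's a rough sketch of the kind of server script the app builder might generate. The `PGLiteSocketServer` options here are an assumption based on the PGlite Socket docs and the generated code may differ:

```ts
// db-server.ts — a hypothetical script wired into package.json,
// running Postgres-over-a-socket inside the sandbox.
import { PGlite } from '@electric-sql/pglite'
import { PGLiteSocketServer } from '@electric-sql/pglite-socket'

// An embedded Postgres instance, persisted to the sandbox filesystem.
const db = await PGlite.create({ dataDir: './data' })

// Expose it so a standard Postgres driver can connect.
const server = new PGLiteSocketServer({ db, host: '127.0.0.1', port: 5432 })
await server.start()

console.log('PGlite listening on 127.0.0.1:5432')
```

In development, the app's Postgres driver then points at the local instance; the conditional config shown below handles swapping in a hosted database for production.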
If they fork the app, it works. If they delete the app, the database is deleted with it.

There is no friction. There is no infra. It just works, out of the box.

### Pathway to production

There's nothing in this approach that prevents running against a hosted database in production. The prompt in the example above literally tells the AI to wrap the Postgres config in a conditional that looks a bit like this:

```ts
const sql = process.env.NODE_ENV === 'production'
  ? postgres(process.env.DATABASE_URL)
  : postgres({
      host: '/tmp/',
      username: 'postgres',
      password: 'postgres',
      database: 'postgres',
      max: 1,
      connect_timeout: 0,
    })
```

So if you hit "deploy" and run in production, the app automatically connects to a production database on a platform like Supabase or Neon. Which is when you *want* a proper, external database, because you *need* that database to be available and durable.

What you don't need is the friction from configuring and managing that kind of infra, before you've even run the code your AI app builder has generated for you.

### Without killing the vibes

When you're vibe-coding, you don't want to think about infra. You want to stay in the zone, iterating and expressing yourself.

That means having a database inside your sandbox. No glue, no friction, no external services, no free-tier limits. Just part of the runtime. Forkable, disposable, unlimited and zero cost. For the user, for the platform and for the infra provider.

This is the future of AI app building. Vibe coding with a database in the sandbox. Unlocked by [PGlite](https://pglite.dev).

> \[!Warning] ✨ Try it on Bolt.new
> Copy the one-shot [prompt examples](https://pglite.dev/docs/pglite-socket#llm-usage) from the PGlite docs. Or fork [this Bolt app](https://bolt.new/~/sb1-tgukxuwd).

---

# Source: https://electric-sql.com/blog/posts/2025-07-29-local-first-sync-with-tanstack-db.md

---
url: /blog/posts/2025-07-29-local-first-sync-with-tanstack-db.md
description: >-
  TanStack DB is a reactive client store for building super fast apps on sync.
  Paired with Electric, it provides an optimal end-to-end sync stack for
  local-first app development.
---

[TanStack DB](https://tanstack.com/db) is a reactive client store for [building super fast apps on sync](https://tanstack.com/blog/tanstack-db-0.1-the-embedded-client-database-for-tanstack-query). Paired with [Electric](/), it provides an optimal end-to-end sync stack for [local-first app development](/use-cases/local-first-software).

Type-safe, declarative, incrementally adoptable and insanely fast, it's the future of app development with Electric and the best way of [building AI apps and agentic systems](/blog/2025/04/09/building-ai-apps-on-sync).

> \[!Warning] ✨  TanStack DB <> Electric starters
> Fire up TanStack DB with Electric using the [TanStack Start starter](https://github.com/electric-sql/electric/tree/main/examples/tanstack-db-web-starter) and [Expo starter](https://github.com/electric-sql/electric/tree/main/examples/tanstack-db-expo-starter) templates.
>
> Docs are at [tanstack.com/db](https://tanstack.com/db) and there's an [example app](https://github.com/TanStack/db/tree/main/examples/react/todo) in the repo.
> \[!Info] ⚡  Interactive guide to TanStack DB
> [What TanStack DB is](https://frontendatscale.com/blog/tanstack-db), how it works and why it might change the way you build apps.

## The next frontier for front‑end

Front-end has long been about reactivity frameworks and client-side state management. However, the alpha in these is receding. The next frontier, with much bigger gains across UX, DX and AX, lies in [local-first, sync engine architecture](/use-cases/local-first-software).

Sync-based apps like [Linear](https://linear.app/blog/scaling-the-linear-sync-engine) and [Figma](https://www.figma.com/blog/how-figmas-multiplayer-technology-works) are instant to use and naturally collaborative. Eliminating stale data, loading spinners and manual data wiring.

It's the best way to keep [users and agents in sync](/blog/2025/04/09/building-ai-apps-on-sync) when building AI apps and agentic systems and it's the best way to keep [LLM‑code maintainable](/blog/2025/04/22/untangling-llm-spaghetti).

## Adding local-first sync to TanStack

[TanStack](https://tanstack.com) is a collection of TypeScript libraries for building web and mobile apps. Developed by an open collective, stewarded by [Tanner Linsley](https://github.com/tannerlinsley), it's one of the best and most popular ways to build modern apps.

Tanner has long wanted to add local-first sync to TanStack:

“I think ideally every developer would love to be able to interact with their APIs as if they were local-first. I have no doubt that that is what everybody wants.”

When Electric co-founder [Kyle Mathews](/about/team#kyle) approached Tanner to work on this, they immediately aligned on DX and a vision for incrementally adoptable local-first app development.

There was still one piece missing though: a reactive query engine fast enough to make the vision a reality.

Enter [Sam Willis'](/about/team#sam) work on [d2ts](https://github.com/electric-sql/d2ts), a TypeScript implementation of differential dataflow that can handle even the most complex reactive queries in microseconds.

Suddenly we had all the primitives: the stack, the DX, the sync engine and a query engine fast enough to make it possible. To understand how this then came together in TanStack DB, let's briefly refresh on how TanStack works.

### How TanStack works

TanStack grew out of React Query, now [TanStack Query](https://tanstack.com/query), a library that gives you managed queries, caching, mutation primitives, etc.

This is TanStack Query code to read data into a React component:

```tsx
import { useQuery } from '@tanstack/react-query'

function Todos() {
  const { data } = useQuery({
    queryFn: async () => await api.get('/todos'),
    queryKey: ['todos'],
  })

  // ...
}
```

You provide a `queryFn` that defines how you fetch your data. TanStack Query then [manages calling it, retries, caching](https://tanstack.com/query/latest/docs/framework/react/overview#motivation) etc.

For writes, you create a mutation with a `mutationFn` that defines how you actually send your data to the server (in this case by posting it to your API):

```tsx
import { useMutation, useQueryClient } from '@tanstack/react-query'

function Todos() {
  const queryClient = useQueryClient()

  const { mutate } = useMutation({
    mutationFn: (todo) => api.post('/todos', todo),
    onSettled: () => queryClient.invalidateQueries({
      queryKey: ['todos'],
    }),
  })

  // continues below ...
}
```

You can then use this mutation in your components to make instant local writes, with TanStack Query managing the [optimistic state lifecycle](https://tanstack.com/query/latest/docs/framework/react/guides/optimistic-updates) for you:

```tsx
function Todos() {
  // ... as above

  const addTodo = () => mutate({ title: 'Some Title' })

  return