# Urql

> `urql` is a highly customizable and versatile GraphQL client with which you add on features like normalized caching as you grow.

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/README.md
# Path: docs/README.md

---
title: Overview
order: 1
---

# Overview

`urql` is a highly customizable and versatile GraphQL client with which you add on features like normalized caching as you grow. It's built to be both easy to use for newcomers to GraphQL, and extensible, to grow to support dynamic single-page applications and highly customized GraphQL infrastructure. In short, `urql` prioritizes usability and adaptability.

As you're adopting GraphQL, `urql` becomes your primary data layer and can handle content-heavy pages through ["Document Caching"](./basics/document-caching.md) as well as dynamic and data-heavy apps through ["Normalized Caching"](./graphcache/normalized-caching.md).

`urql` can be understood as a collection of connected parts and packages. We only need to install a single package for our framework of choice, and we're then able to declaratively send GraphQL requests to our API. All framework packages — like `urql` (for React), `@urql/preact`, `@urql/svelte`, and `@urql/vue` — wrap the [core package, `@urql/core`](./basics/core.md), which we can imagine as the brain of `urql` with most of its logic. As we progress with implementing `urql` into our application, we're later able to extend it by adding ["addon packages", which we call _Exchanges_](./advanced/authoring-exchanges.md).

If at this point you're still unsure of whether to use `urql`, [have a look at the **Comparison** page](./comparison.md) and check whether `urql` supports all features you're looking for.

## Where to start

We have **Getting Started** guides for:

- [**React/Preact**](./basics/react-preact.md) covers how to work with the bindings for React/Preact.
- [**Vue**](./basics/vue.md) covers how to work with the bindings for Vue 3.
- [**Svelte**](./basics/svelte.md) covers how to work with the bindings for Svelte.
- [**Core Package**](./basics/core.md) covers the shared "core APIs" and how we can use them directly in Node.js or imperatively.

Each of these sections will walk you through the specific instructions for the framework bindings, including how to install and set them up, how to write queries, and how to send mutations.

## Following the Documentation

This documentation is split into groups or sections that cover different levels of usage or areas of interest.

- **Basics** is the section where we'll want to start learning about `urql` as it contains "Getting Started" guides for our framework of choice.
- **Architecture** then explains more about how `urql` functions, what it's made up of, and covers the main aspects of the `Client` and exchanges.
- **Advanced** covers all more uncommon use-cases and contains guides that we won't need immediately when we get started with `urql`.
- **Graphcache** documents one of the most important addons to `urql`, which adds ["Normalized Caching" support](./graphcache/normalized-caching.md) to the `Client` and enables more complex use-cases, smarter caching, and more dynamic apps to function.
- **Showcase** aims to list users of `urql`, third-party packages, and other helpful resources, like tutorials and guides.
- **API** contains detailed documentation on each package's APIs.
The documentation links to each of these as appropriate, but if we're unsure of how to use a utility or package, we can go here directly to look up how to use a specific API. We hope you grow to love `urql`! --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/README.md # Path: docs/advanced/README.md --- title: Advanced order: 4 --- # Advanced In this chapter we'll dive into various topics of "advanced" `urql` usage. This is admittedly a catch-all chapter of various use-cases that can only be covered after [the "Architecture" chapter.](../architecture.md) - [**Subscriptions**](./subscriptions.md) covers how to use `useSubscription` and how to set up GraphQL subscriptions with `urql`. - [**Persistence & Uploads**](./persistence-and-uploads.md) teaches us how to set up Automatic Persisted Queries and File Uploads using the two respective packages. - [**Server-side Rendering**](./server-side-rendering.md) guides us through how to set up server-side rendering and rehydration. - [**Debugging**](./debugging.md) shows us the [`urql` devtools](https://github.com/urql-graphql/urql-devtools/) and how to add our own debug events for its event view. - [**Retrying operations**](./retry-operations.md) shows the `retryExchange` which allows you to retry operations when they've failed. - [**Authentication**](./authentication.md) describes how to implement authentication using the `authExchange` - [**Testing**](./testing.md) covers how to test components that use `urql` particularly in React. - [**Authoring Exchanges**](./authoring-exchanges.md) describes how to implement exchanges from scratch and how they work internally. This is a good basis to understanding how some features in this section function. - [**Auto-populate Mutations**](./auto-populate-mutations.md) presents the `populateExchange` addon, which can make it easier to update normalized data after mutations. --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/authentication.md # Path: docs/advanced/authentication.md --- title: Authentication order: 6 --- # Authentication Most APIs include some type of authentication, usually in the form of an auth token that is sent with each request header. The purpose of the [`authExchange`](../api/auth-exchange.md) is to provide a flexible API that facilitates the typical JWT-based authentication flow. > **Note:** [You can find a code example for `@urql/exchange-auth` in an example in the `urql` repository.](https://github.com/urql-graphql/urql/tree/main/examples/with-refresh-auth) ## Typical Authentication Flow **Initial login** — the user opens the application and authenticates for the first time. They enter their credentials and receive an auth token. The token is saved to storage that is persisted though sessions, e.g. `localStorage` on the web or `AsyncStorage` in React Native. The token is added to each subsequent request in an auth header. **Resume** — the user opens the application after having authenticated in the past. In this case, we should already have the token in persisted storage. We fetch the token from storage and add to each request, usually as an auth header. **Forced log out due to invalid token** — the user's session could become invalid for a variety reasons: their token expired, they requested to be signed out of all devices, or their session was invalidated remotely. 
In this case, we would want to also log them out in the application, so they have the opportunity to log in again. To do this, we want to clear any persisted storage, and redirect them to the application home or login page.

**User initiated log out** — when the user chooses to log out of the application, we usually send a logout request to the API, then clear any tokens from persisted storage, and redirect them to the application home or login page.

**Refresh (optional)** — this is not always implemented; if your API supports it, the user will receive both an auth token and a refresh token. The auth token is usually valid for a shorter duration of time (e.g. 1 week) than the refresh token (e.g. 6 months), and the latter can be used to request a new auth token if the auth token has expired. The refresh logic is triggered either when the JWT is known to be invalid (e.g. by decoding it and inspecting the expiry date), or when an API request returns with an unauthorized response. For GraphQL APIs, this is usually an error code instead of a 401 HTTP response, but both can be supported. When the token has been successfully refreshed (this can be done as a mutation to the GraphQL API or a request to a different API endpoint, depending on implementation), we will save the new token in persisted storage, and retry the failed request with the new auth header. The user should be logged out and persisted storage cleared if the refresh fails, or if re-executing the query with the new token fails with an auth error for a second time.

## Installation & Setup

First, install `@urql/exchange-auth` alongside `urql`:

```sh
yarn add @urql/exchange-auth
# or
npm install --save @urql/exchange-auth
```

You'll then need to add the `authExchange`, which this package exposes, to your `Client`. The `authExchange` is an asynchronous exchange, so it must be placed in front of all `fetchExchange`s but after all other synchronous exchanges, like the `cacheExchange`.

```js
import { Client, cacheExchange, fetchExchange } from 'urql';
import { authExchange } from '@urql/exchange-auth';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [
    cacheExchange,
    authExchange(async utils => {
      return {
        /* config... */
      };
    }),
    fetchExchange,
  ],
});
```

You pass an initialization function to the `authExchange`. This function is called by the exchange when it first initializes. It receives an object of utilities, and you must return a (promisified) object of configuration options. Let's discuss each of the [configuration options](../api/auth-exchange.md#options) and how to use them in turn.

### Configuring the initializer function (initial load)

The initializer function must return a promise of a configuration object and hence also gives you an opportunity to fetch your authentication state from storage.

```js
async function initializeAuthState() {
  const token = localStorage.getItem('token');
  const refreshToken = localStorage.getItem('refreshToken');
  return { token, refreshToken };
}

authExchange(async utils => {
  // `initializeAuthState` is async, so we await its result
  let { token, refreshToken } = await initializeAuthState();
  return {
    /* config... */
  };
});
```

The first step here is to retrieve our tokens from storage, which may be asynchronous as well, as illustrated by `initializeAuthState`.

In React Native, this is very similar, but because persisted storage in React Native is always asynchronous and promisified, we would await our tokens. This works because the function the `authExchange` receives is async, i.e. it must return a `Promise`.
```js
async function initializeAuthState() {
  const token = await AsyncStorage.getItem(TOKEN_KEY);
  const refreshToken = await AsyncStorage.getItem(REFRESH_KEY);
  return { token, refreshToken };
}

authExchange(async utils => {
  // `initializeAuthState` is async here as well, so we await its result
  let { token, refreshToken } = await initializeAuthState();
  return {
    /* config... */
  };
});
```

### Configuring `addAuthToOperation`

The purpose of `addAuthToOperation` is to apply an auth state to each request. Here, we'll use the tokens we retrieved from storage and add them to our operations. In this example, we're using a utility we're passed, `appendHeaders`. This utility is simply a shortcut to quickly add HTTP headers via `fetchOptions` to an `Operation`; however, we could just as well edit the `Operation` context here using `makeOperation`.

```js
authExchange(async utils => {
  let token = await AsyncStorage.getItem(TOKEN_KEY);
  let refreshToken = await AsyncStorage.getItem(REFRESH_KEY);

  return {
    addAuthToOperation(operation) {
      if (!token) return operation;
      return utils.appendHeaders(operation, {
        Authorization: `Bearer ${token}`,
      });
    },
    // ...
  };
});
```

First, we check that we have a non-null `token`. Then we apply it to the request as an `Authorization` header using the `appendHeaders` utility. We could also be using `makeOperation` here to update the context in any other way, such as:

```js
import { makeOperation } from '@urql/core';

makeOperation(operation.kind, operation, {
  ...operation.context,
  someAuthThing: token,
});
```

### Configuring `didAuthError`

This function lets the `authExchange` know what counts as an authentication error for your API. `didAuthError` is called by the `authExchange` when it receives an `error` on an `OperationResult`, which is of type [`CombinedError`](../api/core.md#combinederror). We can for example check the error's `graphQLErrors` array in `CombinedError` to determine if an auth error has occurred.

While your API may implement this differently, an authentication error on an execution result may look a little like this if your API uses `extensions.code` on errors:

```js
{
  data: null,
  errors: [
    {
      message: 'Unauthorized: Token has expired',
      extensions: { code: 'FORBIDDEN' },
    }
  ]
}
```

If you're building a new API, using `extensions` on errors is the recommended approach to add metadata to your errors. We'll be able to determine whether any of the GraphQL errors were due to an unauthorized error code, which would indicate an auth failure:

```js
authExchange(async utils => {
  // ...
  return {
    // ...
    didAuthError(error, _operation) {
      return error.graphQLErrors.some(e => e.extensions?.code === 'FORBIDDEN');
    },
  };
});
```

For some GraphQL APIs, an authentication error is only communicated via a 401 HTTP status, as is common in RESTful APIs. This is suboptimal, but we can still write a check for it:

```js
authExchange(async utils => {
  // ...
  return {
    // ...
    didAuthError(error, _operation) {
      return error.response?.status === 401;
    },
  };
});
```

If `didAuthError` returns `true`, the `authExchange` will trigger its logic for asking for re-authentication via `refreshAuth`.

### Configuring `refreshAuth` (triggered after an auth error has occurred)

If the API doesn't support any sort of token refresh, this is where we could simply log the user out.

```js
authExchange(async utils => {
  // ...
  return {
    // ...
    async refreshAuth() {
      logout();
    },
  };
});
```

Here, `logout()` is a placeholder that is called when we get an auth error, so that we can redirect to a login page again and clear our tokens from local storage or otherwise; a rough sketch of such a helper follows below.
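For illustration only — the storage keys and redirect target here are hypothetical placeholders and not part of `@urql/exchange-auth` — such a `logout` helper might do little more than clear the persisted tokens and navigate back to a login route:

```js
// Hypothetical helper: clear persisted tokens and send the user back to a
// login page so they can re-authenticate.
function logout() {
  localStorage.removeItem('token');
  localStorage.removeItem('refreshToken');
  window.location.assign('/login');
}
```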
If we had a way to refresh our token using a refresh token, we can attempt to get a new token for the user first: ```js authExchange(async utils => { let token = localStorage.getItem('token'); let refreshToken = localStorage.getItem('refreshToken'); return { // ... async refreshAuth() { const result = await utils.mutate(REFRESH, { refreshToken }); if (result.data?.refreshLogin) { // Update our local variables and write to our storage token = result.data.refreshLogin.token; refreshToken = result.data.refreshLogin.refreshToken; localStorage.setItem('token', token); localStorage.setItem('refreshToken', refreshToken); } else { // This is where auth has gone wrong and we need to clean up and redirect to a login page localStorage.clear(); logout(); } }, }; }); ``` Here we use the special `mutate` utility method provided by the `authExchange` to do the token refresh. This is a useful method to use if your GraphQL API expects you to make a GraphQL mutation to update your authentication state. It will send the mutation and bypass all authentication and prior exchanges. If your authentication is not handled via GraphQL but a REST endpoint, you can use the `fetch` API here however instead of a mutation. All other requests will be paused while `refreshAuth` runs, so we won't have to deal with multiple authentication errors or refreshes at once. ### Configuring `willAuthError` `willAuthError` is an optional parameter and is run _before_ a request is made. We can use it to trigger an authentication error and let the `authExchange` run our `refreshAuth` function without the need to first let a request fail with an authentication error. For example, we can use this to predict an authentication error, for instance, because of expired JWT tokens. ```js authExchange(async utils => { // ... return { // ... willAuthError(_operation) { // Check whether `token` JWT is expired return false; }, }; }); ``` This can be really useful when we know when our authentication state is invalid and want to prevent even sending any operation that we know will fail with an authentication error. However, we have to be careful on how we define this function, if some queries or login mutations are sent to our API without being logged in. In these cases, it's better to either detect the mutations we'd like to allow or return `false` when a token isn't set in storage yet. If we'd like to detect a mutation that will never fail with an authentication error, we could for instance write the following logic: ```js authExchange(async utils => { // ... return { // ... willAuthError(operation) { if ( operation.kind === 'mutation' && // Here we find any mutation definition with the "login" field operation.query.definitions.some(definition => { return ( definition.kind === 'OperationDefinition' && definition.selectionSet.selections.some(node => { // The field name is just an example, since signup may also be an exception return node.kind === 'Field' && node.name.value === 'login'; }) ); }) ) { return false; } else if (false /* is JWT expired? */) { return true; } else { return false; } }, }; }); ``` Alternatively, you may decide to let all operations through if your token isn't set in storage, i.e. if you have no prior authentication state. ## Handling Logout by reacting to Errors We can also handle authentication errors in a `mapExchange` instead of the `authExchange`. To do this, we'll need to add the `mapExchange` to the exchanges array, _before_ the `authExchange`. 
The order is very important here:

```js
import { createClient, cacheExchange, fetchExchange, mapExchange } from 'urql';
import { authExchange } from '@urql/exchange-auth';

const client = createClient({
  url: 'http://localhost:3000/graphql',
  exchanges: [
    cacheExchange,
    mapExchange({
      onError(error, _operation) {
        const isAuthError = error.graphQLErrors.some(e => e.extensions?.code === 'FORBIDDEN');
        if (isAuthError) {
          logout();
        }
      },
    }),
    authExchange(async utils => {
      return {
        /* config */
      };
    }),
    fetchExchange,
  ],
});
```

The `mapExchange` will only receive an auth error when the auth exchange has already tried and failed to handle it. This means we have either failed to refresh the token, or there is no token refresh functionality. If we receive an auth error in the `mapExchange`'s `onError` function (as defined in the `didAuthError` configuration section above), then we can be confident that it is an authentication error that the `authExchange` isn't able to recover from, and the user should be logged out.

## Cache Invalidation on Logout

If we're dealing with changing authentication state, e.g. on logout, we need to ensure that the `Client` is reinitialized whenever the authentication state changes. Here's an example of how we may do this in React if necessary:

```jsx
import { useMemo } from 'react';
import { createClient, Provider } from 'urql';

const App = ({ isLoggedIn }: { isLoggedIn: boolean | null }) => {
  const client = useMemo(() => {
    if (isLoggedIn === null) {
      return null;
    }

    return createClient({
      /* config */
    });
  }, [isLoggedIn]);

  if (!client) {
    return null;
  }

  return <Provider value={client}>{/* app content */}</Provider>;
};
```

When the application launches, the first thing we do is check whether the user has any authentication tokens in persisted storage. This will tell us whether to show the user the logged in or logged out view. The `isLoggedIn` prop should always be updated based on authentication state changes. For instance, we may set it to `true` after the user has authenticated and their tokens have been added to storage, and set it to `false` once the user has been logged out and their tokens have been cleared. It's important to clear or add tokens to storage _before_ updating the prop in order for the auth exchange to work correctly.

This pattern of creating a new `Client` when changing authentication states is especially useful since it will also recreate our client-side cache and invalidate all cached data.

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/authoring-exchanges.md
# Path: docs/advanced/authoring-exchanges.md

---
title: Authoring Exchanges
order: 8
---

# Exchange Author Guide

As we've learned [on the "Architecture" page](../architecture.md), `urql`'s `Client` structures its data as an event hub. We have an input stream of operations, which are instructions for the `Client` to provide a result. These results then come from an output stream of operation results.

_Exchanges_ are responsible for performing the important transform from the operations (input) stream to the results stream. Exchanges are handler functions that deal with these input and output streams. They're one of `urql`'s key components, and are needed to implement vital pieces of logic such as caching, fetching, deduplicating requests, and more. In other words, Exchanges are handlers that fulfill our GraphQL requests and can change the stream of operations or results.

In this guide we'll learn more about how exchanges work and how we can write our own exchanges.
## An Exchange Signature

Exchanges are akin to [middleware in Redux](https://redux.js.org/advanced/middleware) due to the way that they apply transforms.

```ts
import { Client, Operation, OperationResult } from '@urql/core';
import { Source } from 'wonka';

type ExchangeInput = { forward: ExchangeIO; client: Client };
type Exchange = (input: ExchangeInput) => ExchangeIO;
type ExchangeIO = (ops$: Source<Operation>) => Source<OperationResult>;
```

An exchange receives an `ExchangeInput` object. Its `forward` function refers to the next exchange in the chain, and `client` is the `Client` being used. Exchanges always return an `ExchangeIO` function (this applies to the `forward` function as well), which accepts the source of [_Operations_](../api/core.md#operation) and returns a source of [_Operation Results_](../api/core.md#operationresult).

- [Read more about streams on the "Architecture" page.](../architecture.md#stream-patterns-in-urql)
- [Read more about the _Exchange_ type signature on the API docs.](../api/core.md#exchange)

## Using Exchanges

The `Client` accepts an `exchanges` option. Initially, we may choose to just set this to two very standard exchanges — the `cacheExchange` and the `fetchExchange`. In essence these exchanges build a pipeline that runs in the order they're passed; _Operations_ flow in from the start to the end, and _Results_ are returned through the chain in reverse.

Suppose we pass the `cacheExchange` and then the `fetchExchange` to `exchanges`.

**First,** operations are checked against the cache. Depending on the `requestPolicy`, cached results can be resolved from here instead, which would mean that the cache sends back the result, and the operation doesn't travel any further in the chain.

**Second,** operations are sent to the API, and the result is turned into an `OperationResult`.

**Lastly,** operation results then travel through the exchanges in _reverse order_, which is because exchanges are a pipeline where all operations travel forward deeper into the exchange chain, and then backwards. When these results pass through the cache, the `cacheExchange` stores the result.

```js
import { Client, fetchExchange, cacheExchange } from 'urql';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [cacheExchange, fetchExchange],
});
```

We can add more exchanges to this chain. For instance, we can add the `mapExchange`, which can call a callback whenever it sees [a `CombinedError`](../basics/errors.md) occur on a result.

```js
import { Client, fetchExchange, cacheExchange, mapExchange } from 'urql';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [
    cacheExchange,
    mapExchange({
      onError(error) {
        console.error(error);
      },
    }),
    fetchExchange,
  ],
});
```

This is an example of adding a synchronous exchange to the chain that only reacts to results. It doesn't add any special behavior for operations travelling through it. For an example of an asynchronous exchange that looks at both operations and results, [we may look at the `retryExchange`, which retries failed operations.](../advanced/retry-operations.md)

## The Rules of Exchanges

Before we can start writing some exchanges, there are a couple of consistent patterns and limitations that must be adhered to when writing an exchange. We call these the "rules of Exchanges", which also come in useful when trying to learn what Exchanges actually are.
For reference, this is a basic template for an exchange: ```js const noopExchange = ({ client, forward }) => { return operations$ => { // <-- The ExchangeIO function const operationResult$ = forward(operations$); return operationResult$; }; }; ``` This exchange does nothing else than forward all operations and return all results. Hence, it's called a `noopExchange` — an exchange that doesn't do anything. ### Forward and Return Composition When you create a `Client` and pass it an array of exchanges, `urql` composes them left-to-right. If we look at our previous `noopExchange` example in context, we can track what it does if it is located between the `cacheExchange` and the `fetchExchange`. ```js import { Client, cacheExchange, fetchExchange } from 'urql'; const noopExchange = ({ client, forward }) => { return operations$ => { // <-- The ExchangeIO function // We receive a stream of Operations from `cacheExchange` which // we can modify before... const forwardOperations$ = operations$; // ...calling `forward` with the modified stream. The `forward` // function is the next exchange's `ExchangeIO` function, in this // case `fetchExchange`. const operationResult$ = forward(operations$); // We get back `fetchExchange`'s stream of results, which we can // also change before returning, which is what `cacheExchange` // will receive when calling `forward`. return operationResult$; }; }; const client = new Client({ exchanges: [cacheExchange, noopExchange, fetchExchange], }); ``` ### How to Avoid Accidentally Dropping Operations Typically the `operations$` stream will send you `query`, `mutation`, `subscription`, and `teardown`. There is no constraint for new operations to be added later on or a custom exchange adding new operations altogether. This means that you have to take "unknown" operations into account and not `filter` operations too aggressively. ```js import { pipe, filter, merge } from 'wonka'; // DON'T: drop unknown operations ({ forward }) => operations$ => { // This doesn't handle operations that aren't queries const queries = pipe( operations$, filter(op => op.kind === 'query') ); return forward(queries); }; // DO: forward operations that you don't handle ({ forward }) => operations$ => { const queries = pipe( operations$, filter(op => op.kind === 'query') ); const rest = pipe( operations$, filter(op => op.kind !== 'query') ); return forward(merge([queries, rest])); }; ``` If operations are grouped and/or filtered by what the exchange is handling, then it's also important to make that any streams of operations not handled by the exchange should also be forwarded. ### Synchronous first, Asynchronous last By default exchanges and Wonka streams are as predictable as possible. Every operator in Wonka runs synchronously until asynchronicity is introduced. This may happen when using a timing utility from Wonka, like [`delay`](https://wonka.kitten.sh/api/operators#delay) or [`throttle`](https://wonka.kitten.sh/api/operators#throttle) This can also happen because the exchange inherently does something asynchronous, like fetching some data or using a promise. When writing exchanges, some will inevitably be asynchronous. For example if they're fetching results, performing authentication, or other tasks that you have to wait for. This can cause problems, because the behavior in `urql` is built to be _synchronous_ first. This is very helpful for suspense mode and allowing components receive cached data on their initial mount without rerendering. 
This is why **all exchanges should be ordered synchronous first and asynchronous last**.

The default setup that we repeat throughout these docs, for instance, is this:

```js
import { Client, cacheExchange, fetchExchange } from 'urql';

new Client({
  // ...
  exchanges: [cacheExchange, fetchExchange],
});
```

The `cacheExchange` is completely synchronous. The `fetchExchange` is asynchronous since it makes a `fetch` request and waits for a server response. If we put an asynchronous exchange in front of the `cacheExchange`, that would be unexpected, and since all results would then be delayed, nothing would ever be "cached" and instead always take some amount of time to be returned.

When you're adding more exchanges, it's often crucial to put them in a specific order. For instance, an authentication exchange will need to go before the `fetchExchange`, and a secondary cache will probably have to go in front of the default cache exchange.

To ensure the correct behavior of suspense mode and the initialization of our hooks, it's vital to order exchanges so that synchronous ones come before asynchronous ones.

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/auto-populate-mutations.md
# Path: docs/advanced/auto-populate-mutations.md

---
title: Auto-populate Mutations
order: 9
---

# Automatically populating Mutations

The `populateExchange` allows you to auto-populate selection sets in your mutations using the `@populate` directive. In combination with [Graphcache](../graphcache/README.md) this is a useful tool to update the data in your application automatically following a mutation, as your app grows and it becomes harder to track all fields that have been queried before.

> **NOTE:** The `populateExchange` is _experimental_! Certain patterns and usage paths
> like GraphQL field arguments aren't covered yet, and the exchange hasn't been extensively used
> yet.

## Installation and Setup

The `populateExchange` can be installed via the `@urql/exchange-populate` package.

```sh
yarn add @urql/exchange-populate
# or
npm install --save @urql/exchange-populate
```

Afterwards we can set the `populateExchange` up by adding it to our list of `exchanges` in the client options.

```ts
import { Client, cacheExchange, fetchExchange } from '@urql/core';
import { populateExchange } from '@urql/exchange-populate';

const client = new Client({
  // ...
  exchanges: [populateExchange({ schema }), cacheExchange, fetchExchange],
});
```

The `populateExchange` should be placed in front of the `cacheExchange`, especially if you're using [Graphcache](../graphcache/README.md), since it won't understand the `@populate` directive on its own. It should also be placed in front of the `cacheExchange` to avoid unnecessary work. Adding the `populateExchange` now enables us to use the `@populate` directive in our mutations.

The `schema` option is the introspection result for your backend GraphQL schema; more information about how to get your schema can be found [in the "Schema Awareness" page of the Graphcache documentation](../graphcache/schema-awareness.md#getting-your-schema).

## Example usage

Consider the following queries, which have been requested in other parts of your application:

```graphql
# Query 1
{
  todos {
    id
    name
  }
}

# Query 2
{
  todos {
    id
    createdAt
  }
}
```

Without the `populateExchange` you may write a mutation like the following, which returns a newly created todo item:

```graphql
# Without populate
mutation addTodo($id: ID!) {
  addTodo(id: $id) {
    id # To update Query 1 & 2
    name # To update Query 1
    createdAt # To update Query 2
  }
}
```

By using `populateExchange`, you no longer need to manually specify the selection set required to update your other queries. Instead you can just add the `@populate` directive.

```graphql
# With populate
mutation addTodo($id: ID!) {
  addTodo(id: $id) @populate
}
```

### Choosing when to populate

You may not want to populate your whole mutation response. To reduce your payload, pass populate lower in your query.

```graphql
mutation addTodo($id: ID!) {
  addTodo(id: $id) {
    id
    user @populate
  }
}
```

### Using aliases

If you find yourself using multiple queries with variables, it may be necessary to [use aliases](https://graphql.org/learn/queries/#aliases) to allow merging of queries.

> **Note:** This caveat may change in the future or this restriction may be lifted.

**Invalid usage**

```graphql
# Query 1
{
  todos(first: 10) {
    id
    name
  }
}

# Query 2
{
  todos(last: 20) {
    id
    createdAt
  }
}
```

**Usage with aliases**

```graphql
# Query 1
{
  firstTodos: todos(first: 10) {
    id
    name
  }
}

# Query 2
{
  lastTodos: todos(last: 20) {
    id
    createdAt
  }
}
```

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/debugging.md
# Path: docs/advanced/debugging.md

---
title: Debugging
order: 4
---

# Debugging

We've tried to make debugging in `urql` as seamless as possible by creating tools for users of `urql` and those creating their own exchanges.

## Devtools

It's easiest to debug `urql` with the [`urql` devtools.](https://github.com/urql-graphql/urql-devtools/) It offers tools to inspect internal ["Debug Events"](#debug-events) as they happen, to explore data as your app is seeing it, and to quickly trigger GraphQL queries.

[For instructions on how to set up the devtools, check out `@urql/devtools`'s readme in its repository.](https://github.com/urql-graphql/urql-devtools)

![Urql Devtools Timeline](../assets/devtools-timeline.png)

## Debug events

The "Debug Events" are internally what displays more information to the user on the devtools' "Events" tab than just [Operations](../api/core.md#operation) and [Operation Results](../api/core.md#operationresult). Events may be fired inside exchanges to add additional development logging to an exchange. The `fetchExchange` for instance will fire a `fetchRequest` event when a request is initiated and either a `fetchError` or `fetchSuccess` event when a result comes back from the GraphQL API.

The [Devtools](#devtools) aren't the only way to observe these internal events. Anyone can start listening to these events for debugging purposes by calling the [`Client`'s](../api/core.md#client) `client.subscribeToDebugTarget()` method. Unlike `Operation`s these events are fire-and-forget events that are only used for debugging. Hence, they shouldn't be used for anything but logging and not for messaging. **Debug events are also entirely disabled in production.**

### Subscribing to Debug Events

Internally the `devtoolsExchange` calls `client.subscribeToDebugTarget`, but if we're looking to build custom debugging tools, it's also possible to call this function directly and to replace the `devtoolsExchange`.
``` const { unsubscribe } = client.subscribeToDebugTarget(event => { if (event.source === 'cacheExchange') return; console.log(event); // { type, message, operation, data, source, timestamp } }); ``` As demonstrated above, the `client.subscribeToDebugTarget` accepts a callback function and returns a subscription with an `unsubscribe` method. We've seen this pattern in the prior ["Stream Patterns" section on the "Architecture" page.](../architecture.md) ## Adding your own Debug Events Debug events are a means of sharing implementation details to consumers of an exchange. If you're creating an exchange and want to share relevant information with the `devtools`, then you may want to start adding your own events. #### Dispatching an event [On the "Authoring Exchanges" page](./authoring-exchanges.md) we've learned about the [`ExchangeInput` object](../api/core.md#exchangeinput), which comes with a `client` and a `forward` property. It also contains a `dispatchDebug` property. It is called with an object containing the following properties: | Prop | Type | Description | | ----------- | ----------- | ------------------------------------------------------------------------------------- | | `type` | `string` | A unique type identifier for the Debug Event. | | `message` | `string` | A human readable description of the event. | | `operation` | `Operation` | The [`Operation`](../api/core.md#operation) that the event targets. | | `data` | `?object` | This is an optional payload to include any data that may become useful for debugging. | For instance, we may call `dispatchDebug` with our `fetchRequest` event. This is the event that the `fetchExchange` uses to notify us that a request has commenced: ```ts export const fetchExchange: Exchange = ({ forward, dispatchDebug }) => { // ... return ops$ => { return pipe( ops$, // ... mergeMap(operation => { dispatchDebug({ type: 'fetchRequest', message: 'A network request has been triggered', operation, data: { /* ... */ }, }); // ... }) ); }; }; ``` If we're adding new events that aren't included in the main `urql` repository and are using TypeScript, we may also declare a fixed type for the `data` property, so we can guarantee a consistent payload for our Debug Events. This also prevents accidental conflicts. ```ts // urql.d.ts import '@urql/core'; declare module '@urql/core' { interface DebugEventTypes { customEventType: { somePayload: string }; } } ``` Read more about extending types, like `urql`'s `DebugEventTypes` on the [TypeScript docs on declaration merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html). ### Tips Lastly, in summary, here are a few tips, that are important when we're adding new Debug Events to custom exchanges: - ✅ **Share internal details**: Frequent debug messages on key events inside your exchange are very useful when later inspecting them, e.g. in the `devtools`. - ✅ **Create unique event types** : Key events should be easily identifiable and have a unique names. - ❌ **Don't listen to debug events inside your exchange**: While it's possible to call `client.subscribeToDebugTarget` in an exchange it's only valuable when creating a debugging exchange, like the `devtoolsExchange`. - ❌ **Don't send warnings in debug events**: Informing your user about warnings isn't effective when the event isn't seen. You should still rely on `console.warn` so all users see your important warnings. 
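To see how these pieces fit together at runtime, here is a minimal, hypothetical sketch — `customExchange`, the `customEventType` event, and its `somePayload` field are the made-up names from the declaration above, and `client` is assumed to be an existing `Client` instance:

```js
import { pipe, onPush } from 'wonka';

// A made-up exchange that dispatches our custom, typed Debug Event for every
// operation it forwards.
const customExchange = ({ forward, dispatchDebug }) => ops$ =>
  forward(
    pipe(
      ops$,
      onPush(operation => {
        dispatchDebug({
          type: 'customEventType',
          message: 'An operation passed through customExchange',
          operation,
          data: { somePayload: operation.kind },
        });
      })
    )
  );

// In a custom debugging tool we can then filter for exactly this event type
const { unsubscribe } = client.subscribeToDebugTarget(event => {
  if (event.type === 'customEventType') {
    console.log(event.message, event.data.somePayload);
  }
});
```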
--- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/persistence-and-uploads.md # Path: docs/advanced/persistence-and-uploads.md --- title: Persistence & Uploads order: 1 --- # Persisted Queries and Uploads `urql` supports (Automatic) Persisted Queries, and File Uploads via GraphQL Multipart requests. For persisted queries to work, some setup work is needed, while File Upload support is built into `@urql/core@4`. ## Automatic Persisted Queries Persisted Queries allow us to send requests to the GraphQL API that can easily be cached on the fly, both by the GraphQL API itself and potential CDN caching layers. This is based on the unofficial [GraphQL Persisted Queries Spec](https://github.com/apollographql/apollo-link-persisted-queries#apollo-engine). With Automatic Persisted Queries the client hashes the GraphQL query and turns it into an SHA256 hash and sends this hash instead of the full query. If the server has seen this GraphQL query before it will recognise it by its hash and process the GraphQL API request as usual, otherwise it may respond using a `PersistedQueryNotFound` error. In that case the client is supposed to instead send the full GraphQL query, and the hash together, which will cause the query to be "registered" with the server. Additionally, we could also decide to send these hashed queries as GET requests instead of POST requests. If we only send the persisted queries with hashes as GET requests then they become a lot easier for a CDN to cache, as by default most caches would not cache POST requests automatically. In `urql`, we may use the `@urql/exchange-persisted` package's `persistedExchange` to enable support for Automatic Persisted Queries. This exchange works alongside other fetch or subscription exchanges by adding metadata for persisted queries to each GraphQL request by modifying the `extensions` object of operations. > **Note:** [You can find a code example for `@urql/exchange-persisted` in an example in the `urql` repository.](https://github.com/urql-graphql/urql/tree/main/examples/with-apq) ### Installation & Setup First install `@urql/exchange-persisted` alongside `urql`: ```sh yarn add @urql/exchange-persisted # or npm install --save @urql/exchange-persisted ``` You'll then need to add the `persistedExchange` function, that this package exposes, to your `exchanges`, in front of exchanges that communicate with the API: ```js import { Client, fetchExchange, cacheExchange } from 'urql'; import { persistedExchange } from '@urql/exchange-persisted'; const client = new Client({ url: 'http://localhost:1234/graphql', exchanges: [ cacheExchange, persistedExchange({ preferGetForPersistedQueries: true, }), fetchExchange, ], }); ``` As we can see, typically it's recommended to set `preferGetForPersistedQueries` to `true` to encourage persisted queries to use GET requests instead of POST so that CDNs can do their job. When set to `true` or `'within-url-limit'`, persisted queries will use GET requests if the resulting URL doesn't exceed the 2048 character limit. The `fetchExchange` can see the modifications that the `persistedExchange` is making to operations, and understands to leave out the `query` from any request as needed. The same should be happening to the `subscriptionExchange`, if you're using it for queries. ### Customizing Hashing The `persistedExchange` also accepts a `generateHash` option. This may be used to swap out the exchange's default method of generating SHA256 hashes. 
By default, the exchange will use the built-in [Web Crypto API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API) when it's available, and in Node.js it'll use the [Node Crypto Module](https://nodejs.org/api/crypto.html) instead. If you're using [the `graphql-persisted-document-loader` for Webpack](https://github.com/leoasis/graphql-persisted-document-loader), for instance, then you will already have a loader generating SHA256 hashes for you at compile time. In that case we could swap out the `generateHash` function with a much simpler one that uses the `generateHash` function's second argument, a GraphQL `DocumentNode` object. ```js persistedExchange({ generateHash: (_, document) => document.documentId, }); ``` If you're using **React Native** then you may not have access to the Web Crypto API, which means that you have to provide your own SHA256 function to the `persistedExchange`. Luckily, we can do so easily by using the first argument `generateHash` receives, a GraphQL query as a string. ```js import sha256 from 'hash.js/lib/hash/sha/256'; persistedExchange({ async generateHash(query) { return sha256().update(query).digest('hex'); }, }); ``` Additionally, if the API only expects persisted queries and not arbitrary ones and all queries are pre-registered against the API then the `persistedExchange` may be put into a **non-automatic** persisted queries mode by giving it the `enforcePersistedQueries: true` option. This disables any retry logic and assumes that persisted queries will be handled like regular GraphQL requests. ## File Uploads GraphQL server APIs commonly support the [GraphQL Multipart Request spec](https://github.com/jaydenseric/graphql-multipart-request-spec) to allow for File Uploads directly with a GraphQL API. If a GraphQL API supports this, we can pass a [`File`](https://developer.mozilla.org/en-US/docs/Web/API/File) or a [`Blob`](https://developer.mozilla.org/en-US/docs/Web/API/Blob) directly into our variables and define the corresponding scalar for our variable, which is often called `File` or `Upload`. In a browser, the `File` object may often be retrieved via a [file input](https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications), for example. > **Note:** If you are using your own version of `File` and `Blob` ensure you are properly extending the > so it can be properly identified as a file. The `@urql/core@4` package supports File Uploads natively, so we won't have to do any installation or setup work. When `urql` sees a `File` or a `Blob` anywhere in your `variables`, it switches to a `multipart/form-data` request, converts the request to a `FormData` object, according to the GraphQL Multipart Request specification, and sends it off to the API. > **Note:** Previously, this worked by installing the `@urql/multipart-fetch-exchange` package. > however, this package has been deprecated and file uploads are now built into `@urql/core@4`. [You can find a code example for file uploads in an example in the `urql` repository.](https://github.com/urql-graphql/urql/tree/main/examples/with-multipart) --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/retry-operations.md # Path: docs/advanced/retry-operations.md --- title: Retrying Operations order: 5 --- # Retrying Operations The `retryExchange` lets us retry specific operation, by default it will retry only network errors, but we can specify additional options to add functionality. 
> **Note:** [You can find a code example for `@urql/exchange-retry` in an example in the `urql` repository.](https://github.com/urql-graphql/urql/tree/main/examples/with-retry) ## Installation and Setup First install `@urql/exchange-retry` alongside `urql`: ```sh yarn add @urql/exchange-retry # or npm install --save @urql/exchange-retry ``` You'll then need to add the `retryExchange`, exposed by this package, to your `urql` Client: ```js import { Client, cacheExchange, fetchExchange } from 'urql'; import { retryExchange } from '@urql/exchange-retry'; // None of these options have to be added, these are the default values. const options = { initialDelayMs: 1000, maxDelayMs: 15000, randomDelay: true, maxNumberAttempts: 2, retryIf: err => err && err.networkError, }; // Note the position of the retryExchange - it should be placed prior to the // fetchExchange and after the cacheExchange for it to function correctly const client = new Client({ url: 'http://localhost:1234/graphql', exchanges: [ cacheExchange, retryExchange(options), // Use the retryExchange factory to add a new exchange fetchExchange, ], }); ``` We want to place the `retryExchange` before the `fetchExchange` so that retries are only performed _after_ the operation has passed through the cache and has attempted to fetch. ## The Options There are a set of optional options that allow for fine-grained control over the `retry` mechanism. We have the `initialDelayMs` to specify at what interval the `retrying` should start, this means that if we specify `1000` that when our `operation` fails we'll wait 1 second and then retry it. Next up is the `maxDelayMs`, our `retryExchange` will keep increasing the time between retries, so we don't spam our server with requests it can't complete, this option ensures we don't exceed a certain threshold. This time between requests will increase with a random `back-off` factor multiplied by the `initialDelayMs`, read more about the [thundering herd problem](https://en.wikipedia.org/wiki/Thundering_herd_problem). Talking about increasing the `delay` randomly, `randomDelay` allows us to disable this. When this option is set to `false` we'll only increase the time between attempts with the `initialDelayMs`. This means if we fail the first time we'll have 1 second wait, next fail we'll have 2 seconds and so on. We can declare how many times it should attempt the `operation` with `maxNumberAttempts`, otherwise, it defaults to 2. If you want it to retry indefinitely, you can simply pass in `Number.POSITIVE_INFINITY`. [For more information on the available options check out the API Docs.](../api/retry-exchange.md) ## Reacting to Different Errors We can introduce specific triggers for the `retryExchange` to start retrying operations, let's look at an example: ```js import { Client, cacheExchange, fetchExchange } from 'urql'; import { retryExchange } from '@urql/exchange-retry'; const client = new Client({ url: 'http://localhost:1234/graphql', exchanges: [ cacheExchange, retryExchange({ retryIf: error => { return !!(error.graphQLErrors.length > 0 || error.networkError); }, }), fetchExchange, ], }); ``` In the above example we'll retry when we have `graphQLErrors` or a `networkError`, we can go more granular and check for certain errors in `graphQLErrors`. ## Failover / Fallback In case of a network error, e.g., when part the infrastructure is down, but a fallback GraphQL endpoint is available, e.g., from a different provider on a different domain, the `retryWith` option allows for client-side failover. 
This could also be used in case of a `graphQLError`, for example, when APIs are deployed via a windowing strategy, i.e., a newer version at URL X, while an older one remains at Y. Note that finer granularity depending on custom requirements may be applicable, and that this does not allow for balancing load. ```js const fallbackUrl = 'http://localhost:1337/anotherGraphql'; const options = { initialDelayMs: 1000, maxDelayMs: 15000, randomDelay: true, maxNumberAttempts: 2, retryWith: (error, operation) => { if (error.networkError) { const context = { ...operation.context, url: fallbackUrl }; return { ...operation, context }; } return null; }, }; ``` --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/server-side-rendering.md # Path: docs/advanced/server-side-rendering.md --- title: Server-side Rendering order: 3 --- # Server-side Rendering In server-side rendered applications we often need to set our application up so that data will be fetched on the server-side and later sent down to the client for hydration. `urql` supports this through the `ssrExchange.` ## The SSR Exchange The `ssrExchange` has two functions. On the server-side it's able to gather all results as they're being fetched, which can then be serialized and sent to the client. On the client-side it's able to use these serialized results to rehydrate and render the application without refetching this data. To start out with the `ssrExchange` we have to add the exchange to our `Client`: ```js import { Client, cacheExchange, fetchExchange, ssrExchange } from '@urql/core'; const isServerSide = typeof window === 'undefined'; // The `ssrExchange` must be initialized with `isClient` and `initialState` const ssr = ssrExchange({ isClient: !isServerSide, initialState: !isServerSide ? window.__URQL_DATA__ : undefined, }); const client = new Client({ exchanges: [ cacheExchange, ssr, // Add `ssr` in front of the `fetchExchange` fetchExchange, ], }); ``` The `ssrExchange` must be initialized with the `isClient` and `initialState` options. The `isClient` option tells the exchange whether it's on the server- or client-side. In our example we use `typeof window` to determine this, but in Webpack environments you may also be able to use `process.browser`. Optionally, we may also choose to enable `staleWhileRevalidate`. When enabled this flag will ensure that although a result may have been rehydrated from our SSR result, another refetch `network-only` operation will be issued, to update stale data. This is useful for statically generated sites (SSG) that may ship stale data to our application initially. The `initialState` option should be set to the serialized data you retrieve on your server-side. This data may be retrieved using methods on `ssrExchange()`. You can retrieve the serialized data after server-side rendering using `ssr.extractData()`: ```js // Extract and serialise the data like so from the `ssr` instance // we've previously created by calling `ssrExchange()` const data = JSON.stringify(ssr.extractData()); const markup = ''; // The render code for our framework goes here const html = `
<html>
  <body>
    <div id="root">${markup}</div>
    <script>
      window.__URQL_DATA__ = ${data};
    </script>
  </body>
</html>
`;
```

This will provide `__URQL_DATA__` globally, which we've used in our first example to inject data into the `ssrExchange` on the client-side.

Alternatively, you can also call `restoreData` as long as this call happens synchronously before the `client` starts receiving queries.

```js
const isServerSide = typeof window === 'undefined';
const ssr = ssrExchange({ isClient: !isServerSide });

if (!isServerSide) {
  ssr.restoreData(window.__URQL_DATA__);
}
```

## Using `react-ssr-prepass`

In the previous examples we've set up the `ssrExchange`, however with React this still requires us to manually execute our queries before rendering a server-side React app [using `renderToString` or `renderToNodeStream`](https://reactjs.org/docs/react-dom-server.html#rendertostring).

For React, `urql` has a "Suspense mode" that [allows data fetching to interrupt rendering](https://reactjs.org/docs/concurrent-mode-suspense.html). However, Suspense is not supported by React during server-side rendering. Using [the `react-ssr-prepass` package](https://github.com/FormidableLabs/react-ssr-prepass) however, we can implement a prerendering step before we let React server-side render, which allows us to automatically fetch all data that the app requires with Suspense. This technique is commonly referred to as a "two-pass approach", since our React element is traversed twice.

To set this up, first we'll install `react-ssr-prepass`. It has a peer dependency on `react-is` and `react`.

```sh
yarn add react-ssr-prepass react-is react-dom
# or
npm install --save react-ssr-prepass react-is react-dom
```

Next, we'll modify our server-side code and add `react-ssr-prepass` in front of `renderToString`.

```jsx
import { renderToString } from 'react-dom/server';
import prepass from 'react-ssr-prepass';

import {
  Client,
  cacheExchange,
  fetchExchange,
  ssrExchange,
  Provider,
} from 'urql';

const handleRequest = async (req, res) => {
  // ...
  const ssr = ssrExchange({ isClient: false });

  const client = new Client({
    url: 'https://??',
    suspense: true, // This activates urql's Suspense mode on the server-side
    exchanges: [cacheExchange, ssr, fetchExchange],
  });

  // `App` stands in for our application's root component
  const element = (
    <Provider value={client}>
      <App />
    </Provider>
  );

  // Using `react-ssr-prepass` this prefetches all data
  await prepass(element);
  // This is the usual React SSR rendering code
  const markup = renderToString(element);
  // Extract the data after prepass and rendering
  const data = JSON.stringify(ssr.extractData());

  res.status(200).send(`
    <html>
      <body>
        <div id="root">${markup}</div>
        <script>
          window.__URQL_DATA__ = ${data};
        </script>
      </body>
    </html>
`); }; ``` It's important to set enable the `suspense` option on the `Client`, which switches it to support React suspense. ### With Preact If you're using Preact instead of React, there's a drop-in replacement package for `react-ssr-prepass`, which is called `preact-ssr-prepass`. It only has a peer dependency on Preact, and we can install it like so: ```sh yarn add preact-ssr-prepass preact # or npm install --save preact-ssr-prepass preact ``` All above examples for `react-ssr-prepass` will still be the same, except that instead of using the `urql` package we'll have to import from `@urql/preact`, and instead of `react-ssr-prepass` we'll have to import from. `preact-ssr-prepass`. ## Next.js If you're using [Next.js](https://nextjs.org/) you can save yourself a lot of work by using `@urql/next`. The `@urql/next` package is set to work with Next 13. To set up `@urql/next`, first we'll install `@urql/next` and `urql` as peer dependencies: ```sh yarn add @urql/next urql graphql # or npm install --save @urql/next urql graphql ``` We now have two ways to leverage `@urql/next`, one being part of a Server component or being part of the general `app/` folder. In a server component we will import from `@urql/next/rsc` ```ts // app/page.tsx import React from 'react'; import { cacheExchange, createClient, fetchExchange, gql } from '@urql/core'; import { registerUrql } from '@urql/next/rsc'; const makeClient = () => { return createClient({ url: 'https://trygql.formidable.dev/graphql/basic-pokedex', exchanges: [cacheExchange, fetchExchange], }); }; const { getClient } = registerUrql(makeClient); export default async function Home() { const result = await getClient().query(PokemonsQuery, {}); return (
    <main>
      <h1>This is rendered as part of an RSC</h1>
      <ul>
        {result.data.pokemons.map((x: any) => (
          <li key={x.name}>{x.name}</li>
        ))}
      </ul>
    </main>
  );
}
```

When we aren't leveraging server components, we need to do a bit more setup: we go to the `client` component's layout file and structure it as follows.

```tsx
// app/client/layout.tsx
'use client';

import { useMemo } from 'react';
import { UrqlProvider, ssrExchange, cacheExchange, fetchExchange, createClient } from '@urql/next';

export default function Layout({ children }: React.PropsWithChildren) {
  const [client, ssr] = useMemo(() => {
    const ssr = ssrExchange({
      isClient: typeof window !== 'undefined',
    });
    const client = createClient({
      url: 'https://trygql.formidable.dev/graphql/web-collections',
      exchanges: [cacheExchange, ssr, fetchExchange],
      suspense: true,
    });

    return [client, ssr];
  }, []);

  return (
    <UrqlProvider client={client} ssr={ssr}>
      {children}
    </UrqlProvider>
  );
}
```

It is important that we pass both the client and the `ssrExchange` to the `UrqlProvider`; this way we will be able to restore the data that Next streams to the client later on when we are hydrating.

The next step is to query data in your client components by means of the `useQuery` method defined in `@urql/next`.

```tsx
// app/client/page.tsx
'use client';

import Link from 'next/link';
import { Suspense } from 'react';
import { useQuery, gql } from '@urql/next';

export default function Page() {
  return (
    <Suspense>
      <Pokemons />
    </Suspense>
  );
}

const PokemonsQuery = gql`
  query {
    pokemons(limit: 10) {
      id
      name
    }
  }
`;

function Pokemons() {
  const [result] = useQuery({ query: PokemonsQuery });
  return (
    <main>
      <h1>This is rendered as part of SSR</h1>
      <ul>
        {result.data.pokemons.map((x: any) => (
          <li key={x.id}>{x.name}</li>
        ))}
      </ul>
    </main>
); } ``` The data queried in the above component will be rendered on the server and re-hydrated back on the client. When using multiple Suspense boundaries these will also get flushed as they complete and re-hydrated. > When data is used throughout the application we advise against > rendering this as part of a server-component so you can benefit > from the client-side cache. ### Invalidating data from a server-component When data is rendered by a server component but you dispatch a mutation from a client component the server won't automatically know that the server-component on the client needs refreshing. You can forcefully tell the server to do so by using the Next router and calling `.refresh()`. ```tsx import { useRouter } from 'next/navigation'; const Todo = () => { const router = useRouter(); const executeMutation = async () => { await updateTodo(); router.refresh(); }; }; ``` ### Disabling RSC fetch caching You can pass `fetchOptions: { cache: "no-store" }` to the `createClient` constructor to avoid running into cached fetches with server-components. ## Legacy Next.js (pages) If you're using [Next.js](https://nextjs.org/) with the classic `pages` you can instead use `next-urql`. To set up `next-urql`, first we'll install `next-urql` with `react-is` and `urql` as peer dependencies: ```sh yarn add next-urql react-is urql graphql # or npm install --save next-urql react-is urql graphql ``` The peer dependency on `react-is` is inherited from `react-ssr-prepass` requiring it. Note that if you are using Next before v9.4 you'll need to polyfill fetch, this can be done through [`isomorphic-unfetch`](https://www.npmjs.com/package/isomorphic-unfetch). We're now able to wrap any page or `_app.js` using the `withUrqlClient` higher-order component. If we wrap `_app.js` we won't have to wrap any individual page. ```js // pages/index.js import React from 'react'; import { useQuery } from 'urql'; import { withUrqlClient } from 'next-urql'; const Index = () => { const [result] = useQuery({ query: '{ test }', }); // ... }; export default withUrqlClient((_ssrExchange, ctx) => ({ // ...add your Client options here url: 'http://localhost:3000/graphql', }))(Index); ``` The `withUrqlClient` higher-order component function accepts the usual `Client` options as an argument. This may either just be an object, or a function that receives the Next.js' `getInitialProps` context. One added caveat is that these options may not include the `exchanges` option because `next-urql` injects the `ssrExchange` automatically at the right location. If you're setting up custom exchanges you'll need to instead provide them in the `exchanges` property of the returned client object. ```js import { cacheExchange, fetchExchange } from '@urql/core'; import { withUrqlClient } from 'next-urql'; export default withUrqlClient(ssrExchange => ({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, ssrExchange, fetchExchange], }))(Index); ``` Unless the component that is being wrapped already has a `getInitialProps` method, `next-urql` won't add its own SSR logic, which automatically fetches queries during server-side rendering. This can be explicitly enabled by passing the `{ ssr: true }` option as a second argument to `withUrqlClient`. When you are using `getStaticProps`, `getServerSideProps`, or `getStaticPaths`, you should opt-out of `Suspense` by setting the `neverSuspend` option to `true` in your `withUrqlClient` configuration. 
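For illustration, opting out of Suspense could look like the following sketch, where `Index` is a placeholder for the wrapped page component and the URL is assumed:

```js
import { withUrqlClient } from 'next-urql';

const Index = () => null; // placeholder page component

export default withUrqlClient(
  ssrExchange => ({
    url: 'http://localhost:3000/graphql',
  }),
  // Disables Suspense during the prepass, which is needed when the page
  // also uses getStaticProps, getServerSideProps, or getStaticPaths.
  { neverSuspend: true }
)(Index);
```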
During the prepass of your component tree `next-urql` can't know how these functions will alter the props passed to your page component. This injection could change the `variables` used in your `useQuery`. This will lead to error being thrown during the subsequent `toString` pass, which isn't supported in React 16. ### SSR with { ssr: true } The `withUrqlClient` only wraps our component tree with the context provider by default. To enable SSR, the easiest way is specifying the `{ ssr: true }` option as a second argument to `withUrqlClient`: ```js import { cacheExchange, fetchExchange } from '@urql/core'; import { withUrqlClient } from 'next-urql'; export default withUrqlClient( ssrExchange => ({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, ssrExchange, fetchExchange], }), { ssr: true } // Enables server-side rendering using `getInitialProps` )(Index); ``` Be aware that wrapping the `_app` component using `withUrqlClient` with the `{ ssr: true }` option disables Next's ["Automatic Static Optimization"](https://nextjs.org/docs/advanced-features/automatic-static-optimization) for **all our pages**. It is thus preferred to enable server-side rendering on a per-page basis. ### SSR with getStaticProps or getServerSideProps Enabling server-side rendering using `getStaticProps` and `getServerSideProps` is a little more involved, but has two major benefits: 1. allows **direct schema execution** for performance optimisation 2. allows performing extra operations in those functions To make the functions work with the `withUrqlClient` wrapper, return the `urqlState` prop with the extracted data from the `ssrExchange`: ```js import { withUrqlClient, initUrqlClient } from 'next-urql'; import { ssrExchange, cacheExchange, fetchExchange, useQuery } from 'urql'; const TODOS_QUERY = ` query { todos { id text } } `; function Todos() { const [res] = useQuery({ query: TODOS_QUERY }); return (
    <div>
      {res.data.todos.map(todo => (
        <div key={todo.id}>
          {todo.id} - {todo.text}
        </div>
      ))}
    </div>
  );
}

export async function getStaticProps(ctx) {
  const ssrCache = ssrExchange({ isClient: false });
  const client = initUrqlClient(
    {
      url: 'your-url',
      exchanges: [cacheExchange, ssrCache, fetchExchange],
    },
    false
  );

  // This query is used to populate the cache for the query
  // used on this page.
  await client.query(TODOS_QUERY).toPromise();

  return {
    props: {
      // urqlState is a keyword here so withUrqlClient can pick it up.
      urqlState: ssrCache.extractData(),
    },
    revalidate: 600,
  };
}

export default withUrqlClient(
  ssr => ({
    url: 'your-url',
  })
  // Do not enable `{ ssr: true }` here so we don't wrap our component in `getInitialProps`
)(Todos);
```

The above example will make sure the page is rendered as a static page. It's important that you fully pre-populate your cache: in our case we were only interested in getting our todos, but if there are child components relying on other data you'll have to make sure that data is fetched as well.

The `getServerSideProps` and `getStaticProps` functions only run on the **server-side** — any code used in them is automatically stripped away from the client-side bundle using the [next-code-elimination tool](https://next-code-elimination.vercel.app/). This allows **executing our schema directly** using `@urql/exchange-execute` if we have access to our GraphQL server:

```js
import { withUrqlClient, initUrqlClient } from 'next-urql';
import { ssrExchange, cacheExchange, fetchExchange, useQuery } from 'urql';
import { executeExchange } from '@urql/exchange-execute';

import { schema } from '@/server/graphql'; // our GraphQL server's executable schema

const TODOS_QUERY = `
  query { todos { id text } }
`;

function Todos() {
  const [res] = useQuery({ query: TODOS_QUERY });
  return (
    <div>
      {res.data.todos.map(todo => (
        <div key={todo.id}>
          {todo.id} - {todo.text}
        </div>
      ))}
    </div>
  );
}

export async function getServerSideProps(ctx) {
  const ssrCache = ssrExchange({ isClient: false });
  const client = initUrqlClient(
    {
      url: '', // not needed without `fetchExchange`
      exchanges: [
        cacheExchange,
        ssrCache,
        executeExchange({ schema }), // replaces `fetchExchange`
      ],
    },
    false
  );

  await client.query(TODOS_QUERY).toPromise();

  return {
    props: {
      urqlState: ssrCache.extractData(),
    },
  };
}

export default withUrqlClient(ssr => ({
  url: 'your-url',
}))(Todos);
```

Direct schema execution skips one network round trip by accessing your resolvers directly instead of performing a `fetch` API call.

### Stale While Revalidate

If we choose to use Next's static site generation (SSG or ISG) we may be embedding data in our initial payload that's stale on the client. In this case, we may want to update this data immediately after rehydration.

We can pass `staleWhileRevalidate: true` to `withUrqlClient`'s second option argument to switch it to a mode where it'll refresh its rehydrated data immediately by issuing another network request.

```js
export default withUrqlClient(
  ssr => ({
    url: 'your-url',
  }),
  { staleWhileRevalidate: true }
)(...);
```

Now, although on rehydration we'll receive the stale data from our `ssrExchange` first, it'll also immediately issue another `network-only` operation to update the data. During this revalidation our stale results will be marked using `result.stale`.

While this is similar to what we see with `cache-and-network` without server-side rendering, it isn't quite the same. Changing the request policy wouldn't actually refetch our data on rehydration, as the `ssrExchange` is simply a replacement for a full network request. Hence, this flag allows us to treat this case separately.

### Resetting the client instance

In rare scenarios you may have to reset the client instance (and with it all of its cache). This is an uncommon pattern that we consider "unsafe", so evaluate it carefully for yourself. When it does seem like the appropriate solution, any component wrapped with `withUrqlClient` will receive the `resetUrqlClient` property; when invoked, it creates a new top-level client and resets all prior operations.

## Vue Suspense

In Vue 3 a [new feature was introduced](https://vuedose.tips/go-async-in-vue-3-with-suspense/) that natively allows components to suspend while data is loading. This works universally on the server and on the client, where a parent renders a replacement loading template while data is loading.

We've previously seen how we can change our usage of `useQuery`'s `PromiseLike` result to [make use of Vue Suspense on the "Queries" page.](../basics/vue.md#vue-suspense) Any component's `setup()` function can be updated to instead be an `async setup()` function, in other words, to return a `Promise` instead of directly returning its data. This means that we can update any `setup()` function to make use of Suspense.

On the server-side we can then use `@vue/server-renderer`'s `renderToString`, which will return a `Promise` that resolves when all suspense-related loading is completed.
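As a rough sketch (the component name and query here are made up for illustration), a component that suspends while its data loads might look like this:

```js
import { gql, useQuery } from '@urql/vue';

export default {
  name: 'TodosList', // assumed component name
  async setup() {
    // Awaiting the PromiseLike result suspends the component until data is available
    const { data } = await useQuery({
      query: gql`
        {
          todos {
            id
            title
          }
        }
      `,
    });

    return { data };
  },
};
```

The server-side rendering code that goes along with this then looks as follows.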
```jsx
import { createSSRApp } from 'vue';
import { renderToString } from '@vue/server-renderer';

import urql, { createClient, cacheExchange, fetchExchange, ssrExchange } from '@urql/vue';

const handleRequest = async (req, res) => {
  // This is where we'll put our root component
  const app = createSSRApp(Root);

  // NOTE: All we care about here is that the SSR Exchange is included
  const ssr = ssrExchange({ isClient: false });
  app.use(urql, {
    exchanges: [cacheExchange, ssr, fetchExchange],
  });

  const markup = await renderToString(app);
  const data = JSON.stringify(ssr.extractData());

  res.status(200).send(`
    <div id="app">${markup}</div>
    <script>window.__URQL_DATA__ = ${data};</script>
  `);
};
```

This effectively renders our Vue app on the server-side and provides the client-side data for rehydration that we've set up in the above [SSR Exchange section](#the-ssr-exchange) to use.

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/subscriptions.md
# Path: docs/advanced/subscriptions.md

---
title: Subscriptions
order: 0
---

# Subscriptions

One feature of `urql` that was not mentioned in the ["Basics" sections](../basics/README.md) is its ability to handle GraphQL subscriptions.

## The Subscription Exchange

To add support for subscriptions we need to add the `subscriptionExchange` to our `Client`.

```js
import { Client, cacheExchange, fetchExchange, subscriptionExchange } from 'urql';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [
    cacheExchange,
    fetchExchange,
    subscriptionExchange({
      forwardSubscription,
    }),
  ],
});
```

Read more about Exchanges and how they work [on the "Authoring Exchanges" page](./authoring-exchanges.md), or what they are [on the "Architecture" page.](../architecture.md)

In the above example, we add the `subscriptionExchange` to the `Client` with the default exchanges added before it. The `subscriptionExchange` is a factory that accepts additional options and returns the actual `Exchange` function. It does not make any assumptions about the transport protocol and scheme that is used. Instead, we need to pass a `forwardSubscription` function.

The `forwardSubscription` function is called when the `subscriptionExchange` receives an `Operation`, so typically, when you’re executing a GraphQL subscription. This will call the `forwardSubscription` function with a GraphQL request body, in the same shape that a GraphQL HTTP API may receive it as JSON input.

If you’re using TypeScript, you may notice that the input that `forwardSubscription` receives has an optional `query` property. This is because of persisted query support. For some transports, the `query` property may have to be defaulted to an empty string, which matches the GraphQL over HTTP specification more closely.

When we define this function it must return an "Observable-like" object, which needs to follow the [Observable spec](https://github.com/tc39/proposal-observable), which comes down to having an object with a `.subscribe()` method accepting an observer.

### Setting up `graphql-ws`

For backends supporting `graphql-ws`, we recommend using the [graphql-ws](https://github.com/enisdenjo/graphql-ws) client.

```js
import { Client, cacheExchange, fetchExchange, subscriptionExchange } from 'urql';
import { createClient as createWSClient } from 'graphql-ws';

const wsClient = createWSClient({
  url: 'ws://localhost/graphql',
});

const client = new Client({
  url: '/graphql',
  exchanges: [
    cacheExchange,
    fetchExchange,
    subscriptionExchange({
      forwardSubscription(request) {
        const input = { ...request, query: request.query || '' };
        return {
          subscribe(sink) {
            const unsubscribe = wsClient.subscribe(input, sink);
            return { unsubscribe };
          },
        };
      },
    }),
  ],
});
```

In this example, we're creating a `graphql-ws` client with `createWSClient` and a WebSocket URL. Inside `forwardSubscription` we default the `query` property to an empty string and call the client's `subscribe` method, returning an Observable-like object that the `subscriptionExchange` can subscribe to.
[Read more on the `graphql-ws` README.](https://github.com/enisdenjo/graphql-ws/blob/master/README.md) ### Setting up `subscriptions-transport-ws` For backends supporting `subscriptions-transport-ws`, [Apollo's `subscriptions-transport-ws` package](https://github.com/apollographql/subscriptions-transport-ws) can be used. > The `subscriptions-transport-ws` package isn't actively maintained. If your API supports the new protocol or you can swap the package out, consider using [`graphql-ws`](#setting-up-graphql-ws) instead. ```js import { Client, cacheExchange, fetchExchange, subscriptionExchange } from 'urql'; import { SubscriptionClient } from 'subscriptions-transport-ws'; const subscriptionClient = new SubscriptionClient('ws://localhost/graphql', { reconnect: true }); const client = new Client({ url: '/graphql', exchanges: [ cacheExchange, fetchExchange, subscriptionExchange({ forwardSubscription: request => subscriptionClient.request(request), }), ], }); ``` In this example, we're creating a `SubscriptionClient`, are passing in a URL and some parameters, and are using the `SubscriptionClient`'s `request` method to create a Subscription Observable, which we return to the `subscriptionExchange` inside `forwardSubscription`. [Read more about `subscription-transport-ws` on its README.](https://github.com/apollographql/subscriptions-transport-ws/blob/master/README.md) ### Using `fetch` for subscriptions Some GraphQL backends (for example GraphQL Yoga) support built-in transport protocols that can execute subscriptions via a simple HTTP fetch call. In fact, this is how `@defer` and `@stream` directives are supported. These transports can also be used for subscriptions. ```js import { Client, cacheExchange, fetchExchange, subscriptionExchange } from 'urql'; const client = new Client({ url: '/graphql', fetchSubscriptions: true, exchanges: [cacheExchange, fetchExchange], }); ``` In this example, we only need to enable `fetchSubscriptions: true` on the `Client`, and the `fetchExchange` will be used to send subscriptions to the API. If your API supports this transport, it will stream results back to the `fetchExchange`. [You can find a code example of subscriptions via `fetch` in an example in the `urql` repository.](https://github.com/urql-graphql/urql/tree/main/examples/with-subscriptions-via-fetch) ## React & Preact The `useSubscription` hooks comes with a similar API to `useQuery`, which [we've learned about in the "Queries" page in the "Basics" section.](../basics/react-preact.md#queries) Its usage is extremely similar in that it accepts options, which may contain `query` and `variables`. However, it also accepts a second argument, which is a reducer function, similar to what you would pass to `Array.prototype.reduce`. It receives the previous set of data that this function has returned or `undefined`. As the second argument, it receives the event that has come in from the subscription. You can use this to accumulate the data over time, which is useful for a list for example. In the following example, we create a subscription that informs us of new messages. We will concatenate the incoming messages so that we can display all messages that have come in over the subscription across events. 
```js import React from 'react'; import { useSubscription } from 'urql'; const newMessages = ` subscription MessageSub { newMessages { id from text } } `; const handleSubscription = (messages = [], response) => { return [response.newMessages, ...messages]; }; const Messages = () => { const [res] = useSubscription({ query: newMessages }, handleSubscription); if (!res.data) { return
 <p>No new messages</p>;
  }

  return (
    <ul>
      {res.data.map(message => (
        <p key={message.id}>
          {message.from}: "{message.text}"
        </p>
      ))}
    </ul>
); }; ``` As we can see, the `res.data` is being updated and transformed by the `handleSubscription` function. This works over time, so as new messages come in, we will append them to the list of previous messages. [Read more about the `useSubscription` API in the API docs for it.](../api/urql.md#usesubscription) ## Svelte The `subscriptionStore` function in `@urql/svelte` comes with a similar API to `query`, which [we've learned about in the "Queries" page in the "Basics" section.](../basics/svelte.md#queries) Its usage is extremely similar in that it accepts an `operationStore`, which will typically contain our GraphQL subscription query. In the following example, we create a subscription that informs us of new messages. ```js {#if !$messages.data}
  <p>No new messages</p>
{:else}
  <ul>
    {#each $messages.data.newMessages as message}
      <li>{message.from}: "{message.text}"</li>
    {/each}
  </ul>
{/if}
```

As we can see, `$messages.data` is being updated and transformed by the `$messages` subscriptionStore. This works over time, so as new messages come in, we will append them to the list of previous messages.

`subscriptionStore` optionally accepts a second argument, a handler function, allowing custom update behavior from the subscription.

[Read more about the `subscription` API in the API docs for it.](../api/svelte.md#subscriptionstore)

## Vue

The `useSubscription` API is very similar to `useQuery`, which [we've learned about in the "Queries" page in the "Basics" section.](../basics/vue.md#queries)

Its usage is extremely similar in that it accepts options, which may contain `query` and `variables`. However, it also accepts a second argument, which is a reducer function, similar to what you would pass to `Array.prototype.reduce`.

It receives the previous set of data that this function has returned or `undefined`. As the second argument, it receives the event that has come in from the subscription. You can use this to accumulate the data over time, which is useful for a list for example.

In the following example, we create a subscription that informs us of new messages. We will concatenate the incoming messages so that we can display all messages that have come in over the subscription across events.

```js
import { gql, useSubscription } from '@urql/vue';

export default {
  setup() {
    const handleSubscription = (messages = [], response) => {
      return [response.newMessages, ...messages];
    };

    const result = useSubscription(
      {
        query: gql`
          subscription MessageSub {
            newMessages {
              id
              from
              text
            }
          }
        `,
      },
      handleSubscription
    );

    return { messages: result.data, error: result.error };
  },
};
```

As we can see, the `result.data` is being updated and transformed by the `handleSubscription` function. This works over time, so as new messages come in, we will append them to the list of previous messages.

[Read more about the `useSubscription` API in the API docs for it.](../api/vue.md#usesubscription)

## One-off Subscriptions

When you're using subscriptions directly without `urql`'s framework bindings, you can use the `Client`'s `subscription` method for one-off subscriptions. This method is similar to the ones for queries and mutations [that we've seen before on the "Core Package" page.](../basics/core.md)

This method will always [return a Wonka stream](../architecture.md#the-wonka-library) and doesn't have a `.toPromise()` shortcut method, since promises won't return the multiple values that a subscription may deliver.

Let's convert the above example to one without framework code, as we may use subscriptions in a Node.js environment.

```js
import { gql } from '@urql/core';

const MessageSub = gql`
  subscription MessageSub {
    newMessages {
      id
      from
      text
    }
  }
`;

const { unsubscribe } = client.subscription(MessageSub).subscribe(result => {
  console.log(result); // { data: ... }
});
```

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/advanced/testing.md
# Path: docs/advanced/testing.md

---
title: Testing
order: 7
---

# Testing

Testing with `urql` can be done in a multitude of ways. The most effective and straightforward method is to mock the `Client` to force your components into a fixed state during testing.

The following examples demonstrate this method of testing for React and the `urql` package only, however the pattern itself can be adapted for any framework bindings of `urql`.

## Mocking the client

For the most part, urql's hooks are just adapters for talking to the urql client. The way in which they do this is by making calls to the client via context.
- `useQuery` calls `executeQuery`
- `useMutation` calls `executeMutation`
- `useSubscription` calls `executeSubscription`

In the section ["Stream Patterns" on the "Architecture" page](../architecture.md) we've seen that all methods on the client operate with and return streams. These streams are created using [the Wonka library](../architecture.md#the-wonka-library), and we're able to create streams ourselves to mock the different states of our operations, e.g. fetching, errors, or success with data.

You'll probably use one of these utility functions to create streams:

- `never`: This stream doesn’t emit any values and never completes, which puts our `urql` code in a permanent `fetching: true` state.
- `fromValue`: This utility function accepts a value and emits it immediately, which we can use to mock a result from the server.
- `makeSubject`: Allows us to create a source and imperatively push responses, which is useful to test subscriptions and simulate changes, i.e. multiple states.

Creating a mock `Client` is pretty quick: we create an object that contains the `Client`'s methods that the React `urql` hooks use, mocking the appropriate `execute` functions for the set of hooks we need to test. After we've created the mock `Client` we can wrap components with the `Provider` from `urql` and pass the mock to it.

Here's an example client mock being used while testing a component.

```tsx
import { mount } from 'enzyme';
import { Provider } from 'urql';
import { never } from 'wonka';
import { MyComponent } from './MyComponent';

it('renders', () => {
  const mockClient = {
    executeQuery: jest.fn(() => never),
    executeMutation: jest.fn(() => never),
    executeSubscription: jest.fn(() => never),
  };

  const wrapper = mount(
    <Provider value={mockClient as any}>
      <MyComponent />
    </Provider>
  );
});
```

## Testing calls to the client

Once you have your mock set up, calls to the client can be tested.

```tsx
import { mount } from 'enzyme';
import { Provider } from 'urql';
import { MyComponent } from './MyComponent';

it('skips the query', () => {
  mount(
    // `skip` is assumed to be a prop of our own component that pauses its query
    <Provider value={mockClient as any}>
      <MyComponent skip={true} />
    </Provider>
  );
  expect(mockClient.executeQuery).toBeCalledTimes(0);
});
```

Testing mutations and subscriptions also works in a similar fashion.

```tsx
import { mount } from 'enzyme';
import { Provider } from 'urql';
import { MyComponent } from './MyComponent';

it('triggers a mutation', () => {
  const wrapper = mount(
    <Provider value={mockClient as any}>
      <MyComponent />
    </Provider>
  );
  const variables = { name: 'Carla' };

  wrapper.find('input').simulate('change', { currentTarget: { value: variables.name } });
  wrapper.find('button').simulate('click');

  expect(mockClient.executeMutation).toBeCalledTimes(1);
  expect(mockClient.executeMutation).toBeCalledWith(expect.objectContaining({ variables }), {});
});
```

## Forcing states

For testing render output, or creating fixtures, you may want to force the state of your components.

### Fetching

Fetching states can be simulated by returning a stream that never emits a value. Wonka provides a utility for this, aptly called `never`.

Here's a fixture, which stays in the _fetching_ state.

```tsx
import { Provider } from 'urql';
import { never } from 'wonka';
import { MyComponent } from './MyComponent';

const fetchingState = {
  executeQuery: () => never,
};

export default (
  <Provider value={fetchingState as any}>
    <MyComponent />
  </Provider>
);
```

### Response (success)

Response states are simulated by providing a stream, which contains a network response. For single responses, Wonka's `fromValue` function can do this for us.
**Example snapshot test of response state**

```tsx
import { mount } from 'enzyme';
import { Provider } from 'urql';
import { fromValue } from 'wonka';
import { MyComponent } from './MyComponent';

it('matches snapshot', () => {
  const responseState = {
    executeQuery: () =>
      fromValue({
        data: {
          posts: [
            { id: 1, title: 'Post title', content: 'This is a post' },
            { id: 3, title: 'Final post', content: 'Final post here' },
          ],
        },
      }),
  };

  const wrapper = mount(
    <Provider value={responseState as any}>
      <MyComponent />
    </Provider>
  );

  expect(wrapper).toMatchSnapshot();
});
```

### Response (error)

Error responses are similar to success responses, only the value in the stream is changed.

```tsx
import { Provider, CombinedError } from 'urql';
import { fromValue } from 'wonka';

const errorState = {
  executeQuery: () =>
    fromValue({
      error: new CombinedError({
        networkError: Error('something went wrong!'),
      }),
    }),
};
```

### Handling multiple hooks

Returning different values for many `useQuery` calls can be done by introducing conditionals into the mocked client functions.

```tsx
import { fromValue } from 'wonka';

let mockClient;

beforeEach(() => {
  mockClient = {
    executeQuery: ({ query }) => {
      if (query === GET_USERS) {
        return fromValue(usersResponse);
      }

      if (query === GET_POSTS) {
        return fromValue(postsResponse);
      }
    },
  };
});
```

The client mock we created earlier mocks all three operations — queries, mutations and subscriptions — to always remain in the `fetching: true` state. Generally, when we're _hoisting_ our mocked client and reuse it across multiple tests, we have to be mindful not to instantiate the mocks outside of Jest's lifecycle functions (like `it`, `beforeEach`, `beforeAll` and such) as it may otherwise reset our mocked functions' return values or implementation.

## Subscriptions

Testing subscriptions can be done by simulating the arrival of new data over time. To do this we may use the `interval` utility from Wonka, which emits values on a timer, and for each value we can map over the response that we'd like to mock.

If you prefer to have more control over when the new data arrives, you can use the `makeSubject` utility from Wonka. You can see more details in the next section.

Here's an example of testing a list component, which uses a subscription.

```tsx
import { OperationContext, makeOperation } from '@urql/core';
import { mount } from 'enzyme';
import { Provider } from 'urql';
import { pipe, interval, map } from 'wonka';
import { MyComponent } from './MyComponent';

it('should update the list', done => {
  const mockClient = {
    executeSubscription: jest.fn(query =>
      pipe(
        interval(200),
        map((i: number) => ({
          // To mock a full result, we need to pass a mock operation back as well
          operation: makeOperation('subscription', query, {} as OperationContext),
          data: { posts: { id: i, title: 'Post title', content: 'This is a post' } },
        }))
      )
    ),
  };

  let index = 0;
  const wrapper = mount(
    <Provider value={mockClient as any}>
      <MyComponent />
    </Provider>
  );

  setInterval(() => {
    expect(wrapper.find('.list').children()).toHaveLength(index + 1); // See how many items are in the list
    index++;
    if (index === 2) done();
  }, 200);
});
```

## Simulating changes

Simulating multiple responses can be useful, particularly testing `useEffect` calls dependent on changing query responses. For this, a _subject_ is the way to go. In short, it's a stream that you can push responses to. The `makeSubject` function from Wonka is what you'll want to use for this purpose.

Below is an example of simulating subsequent responses (such as a cache update/refetch) in a test.
```tsx
import { mount } from 'enzyme';
import { act } from 'react-dom/test-utils';
import { Provider } from 'urql';
import { makeSubject } from 'wonka';
import { MyComponent } from './MyComponent';

const { source: stream, next: pushResponse } = makeSubject();

it('shows notification on updated data', () => {
  const mockedClient = {
    executeQuery: jest.fn(() => stream),
  };

  const wrapper = mount(
    <Provider value={mockedClient as any}>
      <MyComponent />
    </Provider>
  );

  // First response
  act(() => {
    pushResponse({
      data: {
        posts: [{ id: 1, title: 'Post title', content: 'This is a post' }],
      },
    });
  });
  expect(wrapper.find('dialog').exists()).toBe(false);

  // Second response
  act(() => {
    pushResponse({
      data: {
        posts: [
          { id: 1, title: 'Post title', content: 'This is a post' },
          { id: 1, title: 'Post title', content: 'This is a post' },
        ],
      },
    });
  });
  expect(wrapper.find('dialog').exists()).toBe(true);
});
```

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/README.md
# Path: docs/api/README.md

---
title: API
order: 9
---

# API

`urql` is a collection of multiple packages. You'll likely be using one of the framework bindings packages or exchange packages, which are all listed in this section.

Most of these packages will refer to or use utilities and types from the `@urql/core` package. [Read more about the core package on the "Core" page.](../basics/core.md)

> **Note:** These API docs are deprecated as we now keep TSDocs in all published packages.
> You can view TSDocs while using these packages in your editor, as long as it supports the
> TypeScript Language Server.
> We're planning to replace these API docs with a separate web app soon.

- [`@urql/core` API docs](./core.md)
- [`urql` React API docs](./urql.md)
- [`@urql/preact` Preact API docs](./preact.md)
- [`@urql/svelte` Svelte API docs](./svelte.md)
- [`@urql/exchange-graphcache` API docs](./graphcache.md)
- [`@urql/exchange-retry` API docs](./retry-exchange.md)
- [`@urql/exchange-execute` API docs](./execute-exchange.md)
- [`@urql/exchange-request-policy` API docs](./request-policy-exchange.md)
- [`@urql/exchange-auth` API docs](./auth-exchange.md)
- [`@urql/exchange-refocus` API docs](./refocus-exchange.md)

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/auth-exchange.md
# Path: docs/api/auth-exchange.md

---
title: '@urql/exchange-auth'
order: 10
---

# Authentication Exchange

> **Note:** These API docs are deprecated as we now keep TSDocs in all published packages.
> You can view TSDocs while using these packages in your editor, as long as it supports the
> TypeScript Language Server.
> We're planning to replace these API docs with a separate web app soon.

The `@urql/exchange-auth` package contains an addon `authExchange` for `urql` that aims to make it easy to implement complex authentication and reauthentication flows, as they're typically found with JWT-based API authentication.

## Installation and Setup

First install `@urql/exchange-auth` alongside `urql`:

```sh
yarn add @urql/exchange-auth
# or
npm install --save @urql/exchange-auth
```

You'll then need to add the `authExchange` that this package exposes to your `Client`. The `authExchange` is an asynchronous exchange, so it must be placed in front of all `fetchExchange`s but after all other synchronous exchanges, like the `cacheExchange`.
```js import { createClient, cacheExchange, fetchExchange } from 'urql'; import { authExchange } from '@urql/exchange-auth'; const client = createClient({ url: 'http://localhost:3000/graphql', exchanges: [ cacheExchange, authExchange(async utils => { return { /* config... */ }; }), fetchExchange, ], }); ``` The `authExchange` accepts an initialization function. This function is called when your exchange and `Client` first start up, and must return an object of options wrapped in a `Promise`, which is used to configure how your authentication method works. You can use this function to first retrieve your authentication state from a kind of local storage, or to call your API to validate your authentication state first. The relevant configuration options, returned to the `authExchange`, then determine how the `authExchange` behaves: - `addAuthToOperation` must be provided to tell `authExchange` how to add authentication information to an operation, e.g. how to add the authentication state to an operation's fetch headers. - `willAuthError` may be provided to detect expired tokens or tell whether an operation will likely fail due to an authentication error. - `didAuthError` may be provided to let the `authExchange` detect authentication errors from the API on results. - `refreshAuth` is called when an authentication error occurs and gives you an opportunity to update your authentication state. Afterwards, the `authExchange` will retry your operation. [Read more examples in the documentation given here.](../advanced/authentication.md) --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/core.md # Path: docs/api/core.md --- title: '@urql/core' order: 0 --- # @urql/core > **Note:** These API docs are deprecated as we now keep TSDocs in all published packages. > You can view TSDocs while using these packages in your editor, as long as it supports the > TypeScript Language Server. > We're planning to replace these API docs with a separate web app soon. The `@urql/core` package is the basis of all framework bindings. Each bindings-package, like [`urql` for React](./urql.md) or [`@urql/preact`](./preact.md), will reuse the core logic and reexport all exports from `@urql/core`. Therefore if you're not accessing utilities directly, aren't in a Node.js environment, and are using framework bindings, you'll likely want to import from your framework bindings package directly. [Read more about `urql`'s core on the "Core Package" page.](../basics/core.md) ## Client The `Client` manages all operations and ongoing requests to the exchange pipeline. It accepts several options on creation. `@urql/core` also exposes `createClient()` that is just a convenient alternative to calling `new Client()`. 
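For instance, a minimal `Client` using some of the options below could be created like this (the URL and header values are placeholders):

```js
import { Client, cacheExchange, fetchExchange } from '@urql/core';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [cacheExchange, fetchExchange],
  requestPolicy: 'cache-first',
  fetchOptions: () => ({
    headers: { authorization: 'Bearer my-token' },
  }),
});
```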
| Input | Type | Description | | ----------------- | ------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `exchanges` | `Exchange[]` | An array of `Exchange`s that the client should use | | `url` | `string` | The GraphQL API URL as used by `fetchExchange` | | `fetchOptions` | `RequestInit \| () => RequestInit` | Additional `fetchOptions` that `fetch` in `fetchExchange` should use to make a request | | `fetch` | `typeof fetch` | An alternative implementation of `fetch` that will be used by the `fetchExchange` instead of `window.fetch` | | `suspense` | `?boolean` | Activates the experimental React suspense mode, which can be used during server-side rendering to prefetch data | | `requestPolicy` | `?RequestPolicy` | Changes the default request policy that will be used. By default, this will be `cache-first`. | | `preferGetMethod` | `?boolean \| 'force' \| 'within-url-limit'` | This is picked up by the `fetchExchange` and will force all queries (not mutations) to be sent using the HTTP GET method instead of POST if the length of the resulting URL doesn't exceed 2048 characters. When `'force'` is passed a GET request is always sent regardless of how long the resulting URL is. | ### client.executeQuery Accepts a [`GraphQLRequest`](#graphqlrequest) and optionally `Partial`, and returns a [`Source`](#operationresult) — a stream of query results that can be subscribed to. Internally, subscribing to the returned source will create an [`Operation`](#operation), with `kind` set to `'query'`, and dispatch it on the exchanges pipeline. If no subscribers are listening to this operation anymore and unsubscribe from the query sources, the `Client` will dispatch a "teardown" operation. - [Instead of using this method directly, you may want to use the `client.query` shortcut instead.](#clientquery) - [See `createRequest` for a utility that creates `GraphQLRequest` objects.](#createrequest) ### client.executeSubscription This is functionally the same as `client.executeQuery`, but creates operations for subscriptions instead, with `kind` set to `'subscription'`. ### client.executeMutation This is functionally the same as `client.executeQuery`, but creates operations for mutations instead, with `kind` set to `'mutation'`. A mutation source is always guaranteed to only respond with a single [`OperationResult`](#operationresult) and then complete. ### client.query This is a shorthand method for [`client.executeQuery`](#clientexecutequery), which accepts a query (`DocumentNode | string`) and variables separately and creates a [`GraphQLRequest`](#graphqlrequest) [`createRequest`](#createrequest) automatically. The returned `Source` will also have an added `toPromise` method, so the stream can be conveniently converted to a promise. 
```js import { pipe, subscribe } from 'wonka'; const { unsubscribe } = pipe( client.query('{ test }', { /* vars */ }), subscribe(result => { console.log(result); // OperationResult }) ); // or with toPromise, which also limits this to one result client .query('{ test }', { /* vars */ }) .toPromise() .then(result => { console.log(result); // OperationResult }); ``` [Read more about how to use this API on the "Core Package" page.](../basics/core.md#one-off-queries-and-mutations) ### client.mutation This is similar to [`client.query`](#clientquery), but dispatches mutations instead. [Read more about how to use this API on the "Core Package" page.](../basics/core.md#one-off-queries-and-mutations) ### client.subscription This is similar to [`client.query`](#clientquery), but does not provide a `toPromise()` helper method on the streams it returns. [Read more about how to use this API on the "Subscriptions" page.](../advanced/subscriptions.md) ### client.reexecuteOperation This method is commonly used in _Exchanges_ to reexecute an [`Operation`](#operation) on the `Client`. It will only reexecute when there are still subscribers for the given [`Operation`](#operation). For an example, this method is used by the `cacheExchange` when an [`OperationResult`](#operationresult) is invalidated in the cache and needs to be refetched. ### client.readQuery This method is typically used to read data synchronously from a cache. It returns an [`OperationResult`](#operationresult) if a value is returned immediately or `null` if no value is returned while cancelling all side effects. ## CombinedError The `CombinedError` is used in `urql` to normalize network errors and `GraphQLError`s if anything goes wrong during a GraphQL request. | Input | Type | Description | | --------------- | -------------------------------- | ---------------------------------------------------------------------------------- | | `networkError` | `?Error` | An unexpected error that might've occurred when trying to send the GraphQL request | | `graphQLErrors` | `?Array` | GraphQL Errors (if any) that were returned by the GraphQL API | | `response` | `?any` | The raw response object (if any) from the `fetch` call | [Read more about errors in `urql` on the "Error" page.](../basics/errors.md) ## Types ### GraphQLRequest This often comes up as the **input** for every GraphQL request. It consists of `query` and optionally `variables`. | Prop | Type | Description | | ----------- | -------------- | --------------------------------------------------------------------------------------------------------------------- | | `key` | `number` | A unique key that identifies this exact combination of `query` and `variables`, which is derived using a stable hash. | | `query` | `DocumentNode` | The query to be executed. Accepts as a plain string query or GraphQL DocumentNode. | | `variables` | `?object` | The variables to be used with the GraphQL request. | The `key` property is a hash of both the `query` and the `variables`, to uniquely identify the request. When `variables` are passed it is ensured that they're stably stringified so that the same variables in a different order will result in the same `key`, since variables are order-independent in GraphQL. [A `GraphQLRequest` may be manually created using the `createRequest` helper.](#createrequest) ### OperationType This determines what _kind of operation_ the exchanges need to perform. 
This is one of: - `'subscription'` - `'query'` - `'mutation'` - `'teardown'` The `'teardown'` operation is special in that it instructs exchanges to cancel any ongoing operations with the same key as the `'teardown'` operation that is received. ### Operation The input for every exchange that informs GraphQL requests. It extends the [`GraphQLRequest` type](#graphqlrequest) and contains these additional properties: | Prop | Type | Description | | --------- | ------------------ | --------------------------------------------- | | `kind` | `OperationType` | The type of GraphQL operation being executed. | | `context` | `OperationContext` | Additional metadata passed to exchange. | An `Operation` also contains the `operationName` property, which is a deprecated alias of the `kind` property and outputs a deprecation warning if it's used. ### RequestPolicy This determines the strategy that a cache exchange should use to fulfill an operation. When you implement a custom cache exchange it's recommended that these policies are handled. - `'cache-first'` (default) - `'cache-only'` - `'network-only'` - `'cache-and-network'` [Read more about request policies on the "Document Caching" page.](../basics/document-caching.md#request-policies) ### OperationContext The context often carries options or metadata for individual exchanges, but may also contain custom data that can be passed from almost all API methods in `urql` that deal with [`Operation`s](#operation). Some of these options are set when the `Client` is initialised, so in the following list of properties you'll likely see some options that exist on the `Client` as well. | Prop | Type | Description | | --------------------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | | `fetchOptions` | `?RequestInit \| (() => RequestInit)` | Additional `fetchOptions` that `fetch` in `fetchExchange` should use to make a request. | | `fetch` | `typeof fetch` | An alternative implementation of `fetch` that will be used by the `fetchExchange` instead of `window.fetch` | | `requestPolicy` | `RequestPolicy` | An optional [request policy](../basics/document-caching.md#request-policies) that should be used specifying the cache strategy. | | `url` | `string` | The GraphQL endpoint, when using GET you should use absolute url's | | `meta` | `?OperationDebugMeta` | Metadata that is only available in development for devtools. | | `suspense` | `?boolean` | Whether suspense is enabled. | | `preferGetMethod` | `?boolean \| 'force' \| 'within-url-limit'` | Instructs the `fetchExchange` to use HTTP GET for queries. | | `additionalTypenames` | `?string[]` | Allows you to tell the operation that it depends on certain typenames (used in document-cache.) | It also accepts additional, untyped parameters that can be used to send more information to custom exchanges. ### OperationResult The result of every GraphQL request, i.e. an `Operation`. It's very similar to what comes back from a typical GraphQL API, but slightly enriched and normalized. 
| Prop         | Type                    | Description                                                                                                                                        |
| ------------ | ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| `operation`  | `Operation`             | The operation that this is a result for                                                                                                            |
| `data`       | `?any`                  | Data returned by the specified query                                                                                                               |
| `error`      | `?CombinedError`        | A [`CombinedError`](#combinederror) instance that wraps network or `GraphQLError`s (if any)                                                        |
| `extensions` | `?Record<string, any>`  | Extensions that the GraphQL server may have returned.                                                                                              |
| `stale`      | `?boolean`              | A flag that may be set to `true` by exchanges to indicate that the `data` is incomplete or out-of-date, and that the result will be updated soon.  |

### ExchangeInput

This is the input that an [`Exchange`](#exchange) receives when it's initialized by the [`Client`](#client).

| Input     | Type         | Description                                                                                                              |
| --------- | ------------ | ------------------------------------------------------------------------------------------------------------------------ |
| `forward` | `ExchangeIO` | The function responsible for receiving an observable operation and returning a result                                   |
| `client`  | `Client`     | The urql application-wide client library. Each execute method starts a GraphQL request and returns a stream of results. |

### Exchange

An exchange represents abstractions of small chunks of logic in `urql`. They're small building blocks and similar to "middleware". [Read more about _Exchanges_ on the "Authoring Exchanges" page.](../advanced/authoring-exchanges.md)

An exchange is defined to be a function that receives [`ExchangeInput`](#exchangeinput) and returns an `ExchangeIO` function. The `ExchangeIO` function in turn will receive a stream of operations, and must return a stream of results. If the exchange is purely transforming data, like the `mapExchange` for instance, it'll call `forward`, which is the next Exchange's `ExchangeIO` function, to get a stream of results.

```js
type ExchangeIO = (Source<Operation>) => Source<OperationResult>;
type Exchange = ExchangeInput => ExchangeIO;
```

[If you haven't yet seen streams you can read more about "Stream Patterns" on the "Architecture" page.](../architecture.md)

## Exchanges

### cacheExchange

The `cacheExchange` as [described on the "Document Caching" page](../basics/document-caching.md). It's of type `Exchange`.

### subscriptionExchange

The `subscriptionExchange` as [described on the "Subscriptions" page](../advanced/subscriptions.md). It's of type `Options => Exchange`.

It accepts a single input: `{ forwardSubscription }`. This is a function that receives an enriched operation and must return an Observable-like object that streams `GraphQLResult`s with `data` and `errors`.

The `forwardSubscription` function is commonly connected to the [`subscriptions-transport-ws` package](https://github.com/apollographql/subscriptions-transport-ws).

### ssrExchange

The `ssrExchange` as [described on the "Server-side Rendering" page](../advanced/server-side-rendering.md). It's of type `Options => Exchange`.

It accepts three inputs: `initialState`, which is completely optional and populates the server-side rendered data with a rehydrated cache; `isClient`, which can be set to `true` or `false` to tell the `ssrExchange` whether to write to (server-side) or read from (client-side) the cache; and `staleWhileRevalidate`, which will treat rehydrated data as stale and refetch up-to-date data by reexecuting the operation using a `network-only` request policy.
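As a small sketch, these three options may be combined like so (`window.__URQL_DATA__` is an assumed global that a server-rendered page provides):

```js
import { ssrExchange } from '@urql/core';

const isClient = typeof window !== 'undefined';

const ssr = ssrExchange({
  initialState: isClient ? window.__URQL_DATA__ : undefined,
  isClient,
  staleWhileRevalidate: true,
});
```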
By default, `isClient` defaults to `true` when the `Client.suspense` mode is disabled and to `false` when the `Client.suspense` mode is enabled. This can be used to extract data that has been queried on the server-side, which is also described in the Basics section, and is also used on the client-side to restore server-side rendered data. When called, this function creates an `Exchange`, which also has two methods on it: - `.restoreData(data)` which can be used to inject data, typically on the client-side. - `.extractData()` which is typically used on the server-side to extract the server-side rendered data. Basically, the `ssrExchange` is a small cache that collects data during the server-side rendering pass, and allows you to populate the cache on the client-side with the same data. During React rehydration this cache will be emptied, and it will become inactive and won't change the results of queries after rehydration. It needs to be used _after_ other caching Exchanges like the `cacheExchange`, but before any _asynchronous_ Exchange like the `fetchExchange`. ### debugExchange An exchange that writes incoming `Operation`s to `console.log` and writes completed `OperationResult`s to `console.log`. This exchange is disabled in production and is based on the `mapExchange`. If you'd like to customise it, you can replace it with a custom `mapExchange`. ### fetchExchange The `fetchExchange` of type `Exchange` is responsible for sending operations of type `'query'` and `'mutation'` to a GraphQL API using `fetch`. ### mapExchange The `mapExchange` allows you to: - react to or replace operations with `onOperation`, - react to or replace results with `onResult`, - and; react to errors in results with `onError`. It can therefore be used to quickly react to the core events in the `Client` without writing a custom exchange, effectively allowing you to ship your own `debugExchange`. ```ts mapExchange({ onOperation(operation) { console.log('operation', operation); }, onResult(result) { console.log('result', result); }, }); ``` It can also be used to react only to errors, which is the same as checking for `result.error`: ```ts mapExchange({ onError(error, operation) { console.log(`The operation ${operation.key} has errored with:`, error); }, }); ``` Lastly, it can be used to map operations and results, which may be useful to update the `OperationContext` or perform other standard tasks that require you to wait for a result: ```ts import { mapExchange, makeOperation } from '@urql/core'; mapExchange({ async onOperation(operation) { // NOTE: This is only for illustration purposes return makeOperation(operation.kind, operation, { ...operation.context, test: true, }); }, async onResult(result) { // NOTE: This is only for illustration purposes if (result.data === undefined) result.data = null; return result; }, }); ``` ### errorExchange (deprecated) An exchange that lets you inspect errors. This can be useful for logging, or reacting to different types of errors (e.g. logging the user out in case of a permission error). In newer versions of `@urql/core`, it's identical to the `mapExchange` and its export has been replaced as the `mapExchange` also allows you to pass an `onError` function. ## Utilities ### gql This is a `gql` tagged template literal function, similar to the one that's also commonly known from `graphql-tag`. It can be used to write GraphQL documents in a tagged template literal and returns a parsed `DocumentNode` that's primed against the `createRequest`'s cache for `key`s. 
```js import { gql } from '@urql/core'; const SharedFragment = gql` fragment UserFrag on User { id name } `; gql` query { user ...UserFrag } ${SharedFragment} `; ``` Unlike `graphql-tag`, this function outputs a warning in development when names of fragments in the document are duplicated. It does not output warnings when fragment names were duplicated globally however. ### stringifyVariables This function is a variation of `JSON.stringify` that sorts any object's keys that is being stringified to ensure that two objects with a different order of keys will be stably stringified to the same string. ```js stringifyVariables({ a: 1, b: 2 }); // {"a":1,"b":2} stringifyVariables({ b: 2, a: 1 }); // {"a":1,"b":2} ``` ### createRequest This utility accepts a GraphQL query of type `string | DocumentNode` and optionally an object of variables, and returns a [`GraphQLRequest` object](#graphqlrequest). Since the [`client.executeQuery`](#clientexecutequery) and other execute methods only accept [`GraphQLRequest`s](#graphqlrequest), this helper is commonly used to create that request first. The [`client.query`](#clientquery) and [`client.mutation`](#clientmutation) methods use this helper as well to create requests. The helper takes care of creating a unique `key` for the `GraphQLRequest`. This is a hash of the `query` and `variables` if they're passed. The `variables` will be stringified using [`stringifyVariables`](#stringifyvariables), which outputs a stable JSON string. Additionally, this utility will ensure that the `query` reference will remain stable. This means that if the same `query` will be passed in as a string or as a fresh `DocumentNode`, then the output will always have the same `DocumentNode` reference. ### makeOperation This utility is used to either turn a [`GraphQLRequest` object](#graphqlrequest) into a new [`Operation` object](#operation) or to copy an `Operation`. It adds the `kind` property, and the `operationName` alias that outputs a deprecation warning. It accepts three arguments: - An `Operation`'s `kind` (See [`OperationType`](#operationtype) - A [`GraphQLRequest` object](#graphqlrequest) or another [`Operation`](#operation) that should be copied. - and; optionally a [partial `OperationContext` object.](#operationcontext). This argument may be left out if the context is to be copied from the operation that may be passed as a second argument. Hence some valid uses of the utility are: ```js // Create a new operation from scratch makeOperation('query', createRequest(query, variables), client.createOperationContext(opts)); // Turn an operation into a 'teardown' operation makeOperation('teardown', operation); // Copy an existing operation while modifying its context makeOperation(operation.kind, operation, { ...operation.context, preferGetMethod: true, }); ``` ### makeResult This is a helper function that converts a GraphQL API result to an [`OperationResult`](#operationresult). It accepts an [`Operation`](#operation), the API result, and optionally the original `FetchResponse` for debugging as arguments, in that order. ### makeErrorResult This is a helper function that creates an [`OperationResult`](#operationresult) for GraphQL API requests that failed with a generic or network error. It accepts an [`Operation`](#operation), the error, and optionally the original `FetchResponse` for debugging as arguments, in that order. 
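As a rough sketch of how these two helpers relate (the wrapper function here is made up for illustration), a custom exchange could convert a raw API outcome into an `OperationResult` like this:

```js
import { makeResult, makeErrorResult } from '@urql/core';

// Turns a raw fetch outcome into an OperationResult for the given operation.
function toOperationResult(operation, apiResult, error, response) {
  return error
    ? makeErrorResult(operation, error, response)
    : makeResult(operation, apiResult, response);
}
```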
### formatDocument This utility is used by the [`cacheExchange`](#cacheexchange) and by [Graphcache](../graphcache/README.md) to add `__typename` fields to GraphQL `DocumentNode`s. ### composeExchanges This utility accepts an array of `Exchange`s and composes them into a single one. It chains them in the order that they're given, left to right. ```js function composeExchanges(Exchange[]): Exchange; ``` This can be used to combine some exchanges and is also used by [`Client`](#client) to handle the `exchanges` input. --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/execute-exchange.md # Path: docs/api/execute-exchange.md --- title: '@urql/exchange-execute' order: 6 --- # Execute Exchange > **Note:** These API docs are deprecated as we now keep TSDocs in all published packages. > You can view TSDocs while using these packages in your editor, as long as it supports the > TypeScript Language Server. > We're planning to replace these API docs with a separate web app soon. The `@urql/exchange-execute` package contains an addon `executeExchange` for `urql` that may be used to execute queries against a local schema. It is therefore a drop-in replacement for the default _fetchExchange_ and useful for the server-side, debugging, or testing. ## Installation and Setup First install `@urql/exchange-execute` alongside `urql`: ```sh yarn add @urql/exchange-execute # or npm install --save @urql/exchange-execute ``` You'll then need to add the `executeExchange`, exposed by this package, to your `Client`. It'll typically replace the `fetchExchange` or similar exchanges and must be used last if possible, since it'll handle operations and return results. ```js import { createClient, cacheExchange } from 'urql'; import { executeExchange } from '@urql/exchange-execute'; const client = createClient({ url: 'http://localhost:3000/graphql', exchanges: [ cacheExchange, executeExchange({ /* config */ }), ], }); ``` The `executeExchange` accepts an object of options, which are all similar to the arguments that `graphql/execution/execute` accepts. Typically you'd pass it the `schema` option, some resolvers if your schema isn't already executable as `fieldResolver` / `typeResolver` / `rootValue`, and a `context` value or function. ## Options | Option | Description | | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `schema` | This is of type `GraphQLSchema` and accepts either a schema that is or isn't executable. This field is _required_ while all other fields are _optional_. | | `rootValue` | The root value that `graphql`'s `execute` will use when starting to execute the schema. | | `fieldResolver` | A given field resolver function. Creating an executable schema may be easier than providing this, but this resolver will be passed on to `execute` as expected. | | `typeResolver` | A given type resolver function. Creating an executable schema may be easier than providing this, but this resolver will be passed on to `execute` as expected. | | `context` | This may either be a function that receives an [`Operation`](./core.md#operation) and returns the context value, or just a plain context value. 
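As a sketch (assuming an executable schema built with `@graphql-tools/schema`, which is not part of this package), the configuration might look like this:

```js
import { createClient, cacheExchange } from 'urql';
import { executeExchange } from '@urql/exchange-execute';
import { makeExecutableSchema } from '@graphql-tools/schema';

// A minimal, assumed executable schema for illustration
const schema = makeExecutableSchema({
  typeDefs: `
    type Query {
      ping: String
    }
  `,
  resolvers: {
    Query: { ping: () => 'pong' },
  },
});

const client = createClient({
  url: '/graphql', // unused here, since the executeExchange replaces the fetchExchange
  exchanges: [
    cacheExchange,
    executeExchange({
      schema,
      // `context` may be a plain value or a function receiving the Operation
      context: () => ({ currentUser: null }),
    }),
  ],
});
```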
Similarly to a GraphQL server this is useful as all resolvers will have access to your `context` | --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/graphcache.md # Path: docs/api/graphcache.md --- title: '@urql/exchange-graphcache' order: 4 --- # @urql/exchange-graphcache > **Note:** These API docs are deprecated as we now keep TSDocs in all published packages. > You can view TSDocs while using these packages in your editor, as long as it supports the > TypeScript Language Server. > We're planning to replace these API docs with a separate web app soon. The `@urql/exchange-graphcache` package contains an addon `cacheExchange` for `urql` that may be used to replace the default [`cacheExchange`](./core.md#cacheexchange), which switches `urql` from using ["Document Caching"](../basics/document-caching.md) to ["Normalized Caching"](../graphcache/normalized-caching.md). [Read more about how to use and configure _Graphcache_ in the "Graphcache" section](../graphcache/README.md) ## cacheExchange The `cacheExchange` function, as exported by `@urql/exchange-graphcache`, accepts a single object of options and returns an [`Exchange`](./core.md#exchange). | Input | Description | | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `keys` | A mapping of key generator functions for types that are used to override the default key generation that _Graphcache_ uses to normalize data for given types. | | `resolvers` | A nested mapping of resolvers, which are used to override the record or entity that _Graphcache_ resolves for a given field for a type. | | `directives` | A mapping of directives, which are functions accepting directive arguments and returning a resolver, which can be referenced by `@localDirective` or `@_localDirective` in queries. | | `updates` | A nested mapping of updater functions for mutation and subscription fields, which may be used to add side-effects that update other parts of the cache when the given subscription or mutation field is written to the cache. | | `optimistic` | A mapping of mutation fields to resolvers that may be used to provide _Graphcache_ with an optimistic result for a given mutation field that should be applied to the cached data temporarily. | | `schema` | A serialized GraphQL schema that is used by _Graphcache_ to resolve partial data, interfaces, and enums. The schema also used to provide helpful warnings for [schema awareness](../graphcache/schema-awareness.md). | | `storage` | A persisted storage interface that may be provided to preserve cache data for [offline support](../graphcache/offline.md). | | `globalIDs` | A boolean or list of typenames that have globally unique ids, this changes how graphcache internally keys the entities. This can be useful for complex interface relationships. | | `logger` | A function that will be invoked for warning/debug/... logs | The `@urql/exchange-graphcache` package also exports the `offlineExchange`; which is identical to the `cacheExchange` but activates [offline support](../graphcache/offline.md) when the `storage` option is passed. ### `keys` option This is a mapping of typenames to `KeyGenerator` functions. 
```ts interface KeyingConfig { [typename: string]: (data: Data) => null | string; } ``` It may be used to alter how _Graphcache_ generates the key it uses for normalization for individual types. The key generator function may also always return `null` when a type should always be embedded. [Read more about how to set up `keys` in the "Key Generation" section of the "Normalized Caching" page.](../graphcache/normalized-caching.md#key-generation) ### `resolvers` option This configuration is a mapping of typenames to field names to `Resolver` functions. A resolver may be defined to override the entity or record that a given field on a type should resolve on the cache. ```ts interface ResolverConfig { [typeName: string]: { [fieldName: string]: Resolver; }; } ``` A `Resolver` receives four arguments when it's called: `parent`, `args`, `cache`, and `info`. | Argument | Type | Description | | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | | `parent` | `Data` | The parent entity that the given field is on. | | `args` | `object` | The arguments for the given field the updater is executed on. | | `cache` | `Cache` | The cache using which data can be read or written. [See `Cache`.](#cache) | | `info` | `Info` | Additional metadata and information about the current operation and the current field. [See `Info`.](#info) | We can use the arguments it receives to either return new data based on just the arguments and other cache information, but we may also read information about the parent and return new data for the current field. ```js { Todo: { createdAt(parent, args, cache) { // Read `createdAt` on the parent but return a Date instance const date = cache.resolve(parent, 'createdAt'); return new Date(date); } } } ``` [Read more about how to set up `resolvers` on the "Computed Queries" page.](../graphcache/local-resolvers.md) ### `updates` option The `updates` configuration is a mapping of `'Mutation' | 'Subscription'` to field names to `UpdateResolver` functions. An update resolver may be defined to add side-effects that run when a given mutation field or subscription field is written to the cache. These side-effects are helpful to update data in the cache that is implicitly changed on the GraphQL API, that _Graphcache_ can't know about automatically. ```ts interface UpdatesConfig { Mutation: { [fieldName: string]: UpdateResolver; }; Subscription: { [fieldName: string]: UpdateResolver; }; } ``` An `UpdateResolver` receives four arguments when it's called: `result`, `args`, `cache`, and `info`. | Argument | Type | Description | | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | | `result` | `any` | Always the entire `data` object from the mutation or subscription. | | `args` | `object` | The arguments for the given field the updater is executed on. | | `cache` | `Cache` | The cache using which data can be read or written. [See `Cache`.](#cache) | | `info` | `Info` | Additional metadata and information about the current operation and the current field. [See `Info`.](#info) | It's possible to derive more information about the current update using the `info` argument. For instance this metadata contains the current `fieldName` of the updater which may be used to make an updater function more reusable, along with `parentKey` and other key fields. 
It also contains `variables` and `fragments`, which remain the same for the entire write operation, and additionally it may have the `error` field set to describe whether the current field is `null` because the API encountered a `GraphQLError`.

[Read more about how to set up `updates` on the "Custom Updates" page.](../graphcache/cache-updates.md)

### `optimistic` option

The `optimistic` configuration is a mapping of Mutation field names to `OptimisticMutationResolver` functions, which return optimistic mutation results for given fields. These results are used by _Graphcache_ to optimistically update the cache data, which provides an immediate and temporary change to its data before a mutation completes.

```ts
interface OptimisticMutationConfig {
  [mutationFieldName: string]: OptimisticMutationResolver;
}
```

An `OptimisticMutationResolver` receives three arguments when it's called: `variables`, `cache`, and `info`.

| Argument    | Type     | Description                                                                                                   |
| ----------- | -------- | ------------------------------------------------------------------------------------------------------------ |
| `variables` | `object` | The arguments that the given mutation field received.                                                         |
| `cache`     | `Cache`  | The cache using which data can be read or written. [See `Cache`.](#cache)                                     |
| `info`      | `Info`   | Additional metadata and information about the current operation and the current field. [See `Info`.](#info)  |

[Read more about how to set up `optimistic` on the "Custom Updates" page.](../graphcache/cache-updates.md)

### `schema` option

The `schema` option may be used to pass `IntrospectionQuery` data to _Graphcache_; in other words, it's used to provide schema information to it. This schema is then used to resolve and return partial results when querying, which are results that the cache can partially resolve as long as no required fields are missing.

[Read more about how to use the `schema` option on the "Schema Awareness" page.](../graphcache/schema-awareness.md)

### `storage` option

The `storage` option is an interface of methods that are used by the `offlineExchange` to persist the cache's data to persistent storage on the user's device.

> **NOTE:** Offline Support is currently experimental! It hasn't been extensively tested yet and
> may not always behave as expected. Please try it out with caution!

| Method          | Type                                      | Description                                                                                                                                                                              |
| --------------- | ----------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `writeData`     | `(delta: SerializedEntries) => Promise`   | This provided method must be able to accept an object of key-value entries that will be persisted to the storage. This method is called as a batch of updated entries becomes ready.     |
| `readData`      | `() => Promise`                           | This provided method must be able to return a single combined object of previous key-value entries that have been previously preserved using `writeData`. It's only called on startup.   |
| `writeMetadata` | `(json: SerializedRequest[]) => void`     | This provided method must be able to persist metadata for the cache. For backwards compatibility it should be able to accept any JSON data.                                              |
| `readMetadata`  | `() => Promise`                           | This provided method must be able to read the persisted metadata that has previously been written using `writeMetadata`. It's only called on startup.                                    |
| `onOnline`        | `(cb: () => void) => void`                | This method must be able to accept a callback that is called when the user's device comes back online.                                                                                   |
| `onCacheHydrated` | `() => void`                              | This method will be called when the `cacheExchange` has finished hydrating the data coming from storage.                                                                                 |

These options are split into three parts:

- The `writeMetadata` and `readMetadata` methods are used to persist in-progress optimistic mutations to a storage so that they may be retried if the app has been closed while some optimistic mutations were still in progress.
- The `writeData` and `readData` methods are used to persist any cache data. This is the normalized data that _Graphcache_ usually keeps in memory. The `cacheExchange` will frequently call `writeData` with a partial object of its cache data, which `readData` must then be able to return in a single combined object on startup. We call the partial objects that `writeData` is called with "deltas".
- The `onOnline` method is only used to receive a trigger that determines whether the user's device has come back online, which is used to retry optimistic mutations that have previously failed due to being offline.

The `storage` option may also be used with the `cacheExchange` instead of the `offlineExchange`, but will then only use `readData` and `writeData` to persist its cache data. This is not full offline support, but rather "persistence support".

[Read more about how to use the `storage` option on the "Offline Support" page.](../graphcache/offline.md)

## Cache

An instance of the `Cache` interface is passed to every resolver and updater function. It may be used to read or write cached data, in combination with the [`cacheExchange` configuration](#cacheexchange), to alter the default behaviour of _Graphcache_.

### keyOfEntity

The `cache.keyOfEntity` method may be called with a partial `Data` object and will return the key for that object, or `null` if it's not keyable. An object may not be keyable if it's missing the `__typename` or `id` (which falls back to `_id`) fields. This method does take the [`keys` configuration](#keys-option) into account.

```js
cache.keyOfEntity({ __typename: 'Todo', id: 1 }); // 'Todo:1'
cache.keyOfEntity({ __typename: 'Query' }); // 'Query'
cache.keyOfEntity({ __typename: 'Unknown' }); // null
```

There's an alternative method, `cache.keyOfField`, which generates a key for a given field. This is only rarely needed but similar to `cache.keyOfEntity`. This method accepts a field name and optionally a field's arguments.

```js
cache.keyOfField('todo'); // 'todo'
cache.keyOfField('todo', { id: 1 }); // 'todo({"id":1})'
```

Internally, these are the keys that records and links are stored on per entity.

### resolve

This method retrieves a value or link for a given field, given a partially keyable `Data` object or entity, a field name, and optionally the field's arguments. Internally this method accesses the cache by using `cache.keyOfEntity` and `cache.keyOfField`.

```js
// This may resolve a link:
cache.resolve({ __typename: 'Query' }, 'todo', { id: 1 }); // 'Todo:1'

// This may also resolve records / scalar values:
cache.resolve({ __typename: 'Todo', id: 1 }, 'id'); // 1

// You can also chain multiple calls to `cache.resolve`!
cache.resolve(cache.resolve({ __typename: 'Query' }, 'todo', { id: 1 }), 'id'); // 1
```

As you can see in the last example of this code snippet, the `Data` object can also be replaced by an entity key, which makes it possible to pass a key from `cache.keyOfEntity` or another call to `cache.resolve` instead of the partial entity.

> **Note:** Because `cache.resolve` may return either a scalar value or another entity key, it may
> be dangerous to use in some cases. It's a good idea to make sure first whether the field you're
> reading will be a key or a value.

The `cache.resolve` method may also be called with a field key as generated by `cache.keyOfField`.

```js
cache.resolve({ __typename: 'Query' }, cache.keyOfField('todo', { id: 1 })); // 'Todo:1'
```

This specialized case is likely only going to be useful in combination with [`cache.inspectFields`](#inspectfields).

### inspectFields

The `cache.inspectFields` method may be used to interrogate the cache about all available fields on a specific entity. It accepts a partial entity or an entity key, like [`cache.resolve`](#resolve)'s first argument.

Calling the method returns an array of `FieldInfo` objects, one per field (including differing arguments) that is known to the cache. The `FieldInfo` interface has three properties: `fieldKey`, `fieldName`, and `arguments`:

| Argument    | Type             | Description                                                                      |
| ----------- | ---------------- | -------------------------------------------------------------------------------- |
| `fieldName` | `string`         | The field's name (without any arguments, just the name)                          |
| `arguments` | `object \| null` | The field's arguments, or `null` if the field doesn't have any arguments         |
| `fieldKey`  | `string`         | The field's cache key, which is similar to what `cache.keyOfField` would return  |

This works on any given entity. When calling this method the cache works in reverse on its data structure, by parsing the entity's individual field keys.

```js
cache.inspectFields({ __typename: 'Query' });

/*
  [
    { fieldName: 'todo', arguments: { id: 1 }, fieldKey: 'todo({"id":1})' },
    { fieldName: 'todo', arguments: { id: 2 }, fieldKey: 'todo({"id":2})' },
    ...
  ]
*/
```

### readFragment

`cache.readFragment` accepts a GraphQL `DocumentNode` as the first argument and a partial entity or an entity key as the second, like [`cache.resolve`](#resolve)'s first argument. The method will then attempt to read the entity according to the fragment entirely from the cached data. If any data is uncached and missing, it'll return `null`.

```js
import { gql } from '@urql/core';

cache.readFragment(
  gql`
    fragment _ on Todo {
      id
      text
    }
  `,
  { id: 1 }
); // Data or null
```

Note that the `__typename` may be left out on the partial entity if the fragment isn't on an interface or union type, since in that case the `__typename` is already present on the fragment itself.
If any fields on the fragment require variables, you can pass them as the third argument like so:

```js
import { gql } from '@urql/core';

cache.readFragment(
  gql`
    fragment _ on User {
      id
      permissions(byGroupId: $groupId)
    }
  `,
  { id: 1 }, // this identifies the fragment (User) entity
  { groupId: 5 } // any additional field variables
);
```

If you need a specific fragment in a document containing multiple, you can leverage the fourth argument like this:

```js
import { gql } from '@urql/core';

cache.readFragment(
  gql`
    fragment todoFields on Todo {
      id
    }

    fragment userFields on User {
      id
    }
  `,
  { id: 1 }, // this identifies the fragment (User) entity
  undefined,
  'userFields' // if not passed we take the first fragment, in this case todoFields
);
```

[Read more about using `readFragment` on the "Local Resolvers" page.](../graphcache/local-resolvers.md#reading-a-fragment)

### readQuery

The `cache.readQuery` method is similar to `cache.readFragment`, but instead of reading a fragment from cache, it reads an entire query. The only difference between how these two methods are used is `cache.readQuery`'s input, which is an object instead of two arguments.

The method accepts a `{ query, variables }` object as the first argument, where `query` may either be a `DocumentNode` or a `string` and `variables` may optionally be an object.

```js
cache.readQuery({
  query: `
    query ($id: ID!) {
      todo(id: $id) { id, text }
    }
  `,
  variables: { id: 1 },
}); // Data or null
```

[Read more about using `readQuery` on the "Local Resolvers" page.](../graphcache/local-resolvers.md#reading-a-query)

### link

Corresponding to [`cache.resolve`](#resolve), the `cache.link` method allows links in the cache to be updated. While the `cache.resolve` method reads both records and links from the cache, the `cache.link` method will only ever write links, as fragments (see [`cache.writeFragment`](#writefragment) below) are more suitable for updating scalar data in the cache.

The arguments for `cache.link` are identical to [`cache.resolve`](#resolve) and the field's arguments are optional. However, the last argument must always be a link, meaning `null`, an entity key, a keyable entity, or a list of these. In other words, `cache.link` accepts an entity to write to as its first argument, with the same arguments as `cache.keyOfEntity`. It then accepts one or two arguments that are passed to `cache.keyOfField` to get the targeted field key. And lastly, you may pass a list or a single entity (or an entity key).

```js
// Link Query.todo field to a todo item
cache.link({ __typename: 'Query' }, 'todo', { __typename: 'Todo', id: 1 });

// You may also pass arguments instead:
cache.link({ __typename: 'Query' }, 'todo', { id: 1 }, { __typename: 'Todo', id: 1 });

// Or use entity keys instead of the entities themselves:
cache.link('Query', 'todo', cache.keyOfEntity({ __typename: 'Todo', id: 1 }));
```

The method may [output a warning](../graphcache/errors.md#12-cant-generate-a-key-for-writefragment-or-link) when any of the entities were passed as objects but aren't keyable, which is useful when a scalar or a non-keyable object has been passed to `cache.link` accidentally.

### writeFragment

Corresponding to [`cache.readFragment`](#readfragment), the `cache.writeFragment` method allows data in the cache to be updated.
The arguments for `cache.writeFragment` are identical to [`cache.readFragment`](#readfragment); however, the second argument, `data`, should not only contain properties that are necessary to derive an entity key from the given data, but also the fields that will be written:

```js
import { gql } from '@urql/core';

cache.writeFragment(
  gql`
    fragment _ on Todo {
      text
    }
  `,
  { id: 1, text: 'New Todo Text' }
);
```

In this example, the `writeFragment` method returns `undefined`. Furthermore, we pass `id` in our `data` object so that an entity key can be derived, but the fragment itself doesn't have to include these fields.

If you need a specific fragment in a document containing multiple, you can leverage the fourth argument like this:

```js
import { gql } from '@urql/core';

cache.writeFragment(
  gql`
    fragment todoFields on Todo {
      id
      text
    }

    fragment userFields on User {
      id
      name
    }
  `,
  { id: 1, name: 'New Name' },
  undefined,
  'userFields' // if not passed we take the first fragment, in this case todoFields
);
```

[Read more about using `writeFragment` on the "Custom Updates" page.](../graphcache/cache-updates.md#cachewritefragment)

### updateQuery

Similarly to [`cache.writeFragment`](#writefragment), there's an analogous method for [`cache.readQuery`](#readquery) that may be used to update query data. The `cache.updateQuery` method accepts the same `{ query, variables }` object input as its first argument, which is the query we'd like to write to the cache.

As a second argument the method accepts an updater function. This function will be called with the query data that is already in the cache (which may be `null` if the data is uncached) and must return the new data that should be written to the cache.

```js
const TodoQuery = `
  query ($id: ID!) {
    todo(id: $id) { id, text }
  }
`;

cache.updateQuery({ query: TodoQuery, variables: { id: 1 } }, data => {
  if (!data) return null;
  data.todo.text = 'New Todo Text';
  return data;
});
```

As we can see, our updater may return `null` to cancel updating any data, which we do in case the query data is uncached. We can also see that data can simply be mutated and doesn't have to be altered immutably. This is because all data from the cache is already a deep copy, and hence we can do whatever we want to it.

[Read more about using `updateQuery` on the "Custom Updates" page.](../graphcache/cache-updates.md#cacheupdatequery)

### invalidate

The `cache.invalidate` method can be used to delete (i.e. "evict") an entity from the cache entirely. This will cause it to disappear from all queries in _Graphcache_.

Its arguments are identical to [`cache.resolve`](#resolve).

Since deleting an entity will lead to some queries containing missing and uncached data, calling `invalidate` may lead to additional GraphQL requests being sent, unless you're using [_Graphcache_'s "Schema Awareness" feature](../graphcache/schema-awareness.md), which takes optional fields into account.

This method accepts a partial entity or an entity key as its first argument, similar to [`cache.resolve`](#resolve)'s first argument.

```js
cache.invalidate({ __typename: 'Todo', id: 1 }); // Invalidates Todo:1
```

Additionally, `cache.invalidate` may be used to delete specific fields only, which can be useful when, for instance, a list is supposed to be evicted from the cache, where a full invalidation may be impossible. This is often the case when a field on the root `Query` needs to be deleted. This method therefore accepts two additional arguments, similar to [`cache.resolve`](#resolve).
```js // Invalidates `Query.todos` with the `first: 10` argument: cache.invalidate('Query', 'todos', { first: 10 }); ``` ## Info This is a metadata object that is passed to every resolver and updater function. It contains basic information about the current GraphQL document and query, and also some information on the current field that a given resolver or updater is called on. | Argument | Type | Description | | ---------------- | -------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `parent` | `Data` | The field's parent entity's data, as it was written or read up until now, which means it may be incomplete. [Use `cache.resolve`](#resolve) to read from it. | | `parentTypeName` | `string` | The field's parent entity's typename | | `parentKey` | `string` | The field's parent entity's cache key (if any) | | `parentFieldKey` | `string` | The current key's cache key, which is the parent entity's key combined with the current field's key (This is mostly obsolete) | | `fieldName` | `string` | The current field's name | | `fragments` | `{ [name: string]: FragmentDefinitionNode }` | A dictionary of fragments from the current GraphQL document | | `variables` | `object` | The current GraphQL operation's variables (may be an empty object) | | `error` | `GraphQLError \| undefined` | The current GraphQLError for a given field. This will always be `undefined` for resolvers and optimistic updaters, but may be present for updaters when the API has returned an error for a given field. | | `partial` | `?boolean` | This may be set to `true` at any point in time (by your custom resolver or by _Graphcache_) to indicate that some data is uncached and missing | | `optimistic` | `?boolean` | This is only `true` when an optimistic mutation update is running | > **Note:** Using `info` is regarded as a last resort. Please only use information from it if > there's no other solution to get to the metadata you need. We don't regard the `Info` API as > stable and may change it with a simple minor version bump. ## The `/extras` import The `extras` subpackage is published with _Graphcache_ and contains helpers and utilities that don't have to be included in every app or aren't needed by all users of _Graphcache_. All utilities from extras may be imported from `@urql/exchange-graphcache/extras`. Currently the `extras` subpackage only contains the [pagination resolvers that have been mentioned on the "Computed Queries" page.](../graphcache/local-resolvers.md#pagination) ### simplePagination Accepts a single object of optional options and returns a resolver that can be inserted into the [`cacheExchange`'s](#cacheexchange) [`resolvers` configuration.](#resolvers-option) | Argument | Type | Description | | ---------------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `offsetArgument` | `?string` | The field arguments' property, as passed to the resolver, that contains the current offset, i.e. the number of items to be skipped. Defaults to `'skip'`. | | `limitArgument` | `?string` | The field arguments' property, as passed to the resolver, that contains the current page size limit, i.e. the number of items on each page. Defaults to `'limit'`. 
| `mergeMode` | `'after' \| 'before'` | This option defines whether pages are merged before or after preceding ones when paginating. Defaults to `'after'`. |

Once set up, the resulting resolver automatically concatenates all pages of a given field. Queries to this resolver will from then on only return the infinite, combined list of all pages.

[Read more about `simplePagination` on the "Computed Queries" page.](../graphcache/local-resolvers.md#simple-pagination)

### relayPagination

Accepts a single object of optional options and returns a resolver that can be inserted into the [`cacheExchange`'s](#cacheexchange) [`resolvers` configuration.](#resolvers-option)

| Argument    | Type                      | Description                                                                                                                                                                                                                                                        |
| ----------- | ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `mergeMode` | `'outwards' \| 'inwards'` | With Relay pagination, pages can be queried forwards and backwards using `after` and `before` cursors. This option defines whether pages that have been queried backwards should be concatenated before (outwards) or after (inwards) all pages that have been queried forwards. |

Once set up, the resulting resolver automatically concatenates all pages of a given field. Queries to this resolver will from then on only return the infinite, combined list of all pages.

[Read more about `relayPagination` on the "Computed Queries" page.](../graphcache/local-resolvers.md#relay-pagination)

## The `/default-storage` import

The `default-storage` subpackage is published with _Graphcache_ and contains a default storage interface that may be used with the [`storage` option.](#storage-option)

It contains the `makeDefaultStorage` export, which is a factory function that accepts a few options and returns a full [storage interface](#storage-option). This storage by default persists to [IndexedDB](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API).

| Argument  | Type     | Description                                                                                                                                          |
| --------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `idbName` | `string` | The name of the IndexedDB database that is used and created if needed. By default this is set to `"graphcache-v3"`                                      |
| `maxAge`  | `number` | The maximum age of entries that the storage should use in whole days. By default the storage will discard entries that are older than seven days.       |

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/preact.md
# Path: docs/api/preact.md

---
title: '@urql/preact'
order: 2
---

# @urql/preact

> **Note:** These API docs are deprecated as we now keep TSDocs in all published packages.
> You can view TSDocs while using these packages in your editor, as long as it supports the
> TypeScript Language Server.
> We're planning to replace these API docs with a separate web app soon.

The `@urql/preact` API is the same as the React `urql` API. Please refer to [the "urql" API docs](./urql.md) for details on the Preact API.
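For a quick impression of what that means in practice, here's a minimal sketch. It assumes that `@urql/preact` exposes the same `createClient`, `Provider`, and hooks as the React bindings, and the endpoint URL is made up:

```js
import { createClient, Provider, useQuery } from '@urql/preact';
import { cacheExchange, fetchExchange } from '@urql/core';

// Set up a Client exactly as you would with the React bindings.
const client = createClient({
  url: 'http://localhost:3000/graphql', // hypothetical endpoint
  exchanges: [cacheExchange, fetchExchange],
});

// Wrap your app in `<Provider value={client}>` and then use the hooks
// inside Preact components just like their React counterparts, e.g.:
// const [result, reexecuteQuery] = useQuery({ query: TodosQuery });
```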
--- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/refocus-exchange.md # Path: docs/api/refocus-exchange.md --- title: '@urql/exchange-refocus' order: 11 --- # Refocus Exchange > **Note:** These API docs are deprecated as we now keep TSDocs in all published packages. > You can view TSDocs while using these packages in your editor, as long as it supports the > TypeScript Language Server. > We're planning to replace these API docs with a separate web app soon. `@urql/exchange-refocus` is an exchange for the `urql` that tracks currently active operations and redispatches them when the window regains focus ## Quick Start Guide First install `@urql/exchange-refocus` alongside `urql`: ```sh yarn add @urql/exchange-refocus # or npm install --save @urql/exchange-refocus ``` Then add it to your `Client`, preferably in front of your `cacheExchange` ```js import { createClient, cacheExchange, fetchExchange } from 'urql'; import { refocusExchange } from '@urql/exchange-refocus'; const client = createClient({ url: 'http://localhost:3000/graphql', exchanges: [refocusExchange(), cacheExchange, fetchExchange], }); ``` --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/request-policy-exchange.md # Path: docs/api/request-policy-exchange.md --- title: '@urql/exchange-request-policy' order: 9 --- # Request Policy Exchange > **Note:** These API docs are deprecated as we now keep TSDocs in all published packages. > You can view TSDocs while using these packages in your editor, as long as it supports the > TypeScript Language Server. > We're planning to replace these API docs with a separate web app soon. The `@urql/exchange-request-policy` package contains an addon `requestPolicyExchange` for `urql` that may be used to upgrade [Operations' Request Policies](./core.md#requestpolicy) on a time-to-live basis. [Read more about request policies on the "Document Caching" page.](../basics/document-caching.md#request-policies) This exchange will conditionally upgrade `cache-first` and `cache-only` operations to use `cache-and-network`, so that the client gets an opportunity to update its cached data, when the operation hasn't been seen within the given `ttl` time. This is often preferable to setting the default policy to `cache-and-network` to avoid an unnecessarily high amount of requests to be sent to the API when switching pages. ## Installation and Setup First install `@urql/exchange-request-policy` alongside `urql`: ```sh yarn add @urql/exchange-request-policy # or npm install --save @urql/exchange-request-policy ``` Then add it to your `Client`, preferably in front of the `cacheExchange` and in front of any asynchronous exchanges, like the `fetchExchange`: ```js import { createClient, cacheExchange, fetchExchange } from 'urql'; import { requestPolicyExchange } from '@urql/exchange-request-policy'; const client = createClient({ url: 'http://localhost:3000/graphql', exchanges: [ requestPolicyExchange({ /* config */ }), cacheExchange, fetchExchange, ], }); ``` ## Options | Option | Description | | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `ttl` | The "time-to-live" until an `Operation` will be upgraded to the `cache-and-network` policy in milliseconds. 
By default 5 minutes is set. | | `shouldUpgrade` | An optional function that receives an `Operation` as the only argument and may return `true` or `false` depending on whether an operation should be upgraded. This can be used to filter out operations that should never be upgraded to `cache-and-network`. | --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/retry-exchange.md # Path: docs/api/retry-exchange.md --- title: '@urql/exchange-retry' order: 5 --- # Retry Exchange > **Note:** These API docs are deprecated as we now keep TSDocs in all published packages. > You can view TSDocs while using these packages in your editor, as long as it supports the > TypeScript Language Server. > We're planning to replace these API docs with a separate web app soon. The `@urql/exchange-retry` package contains an addon `retryExchange` for `urql` that may be used to let failed operations be retried, typically when a previous operation has failed with a network error. [Read more about how to use and configure the `retryExchange` on the "Retry Operations" page.](../advanced/retry-operations.md) ## Options | Option | Description | | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `initialDelayMs` | Specify at what interval the `retrying` should start, this means that if we specify `1000` that when our `operation` fails we'll wait 1 second and then retry it. | | `maxDelayMs` | The maximum delay between retries. The `retryExchange` will keep increasing the time between retries so that the server doesn't receive simultaneous requests it can't complete. This time between requests will increase with a random `back-off` factor applied to the `initialDelayMs`, read more about the [thundering herd problem](https://en.wikipedia.org/wiki/Thundering_herd_problem). | | `randomDelay` | Allows the randomized delay described above to be disabled. When this option is set to `false` there will be exactly a `initialDelayMs` wait between each retry. | | `maxNumberAttempts` | Allows the max number of retries to be defined. | | `retryIf` | Apply a custom test to the returned error to determine whether it should be retried. | | `retryWith` | Apply a transform function allowing you to selectively replace a retried `Operation` or return a nullish value. This will act like `retryIf` where a truthy value retries (`retryIf` takes precedence and overrides this function.) | --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/svelte.md # Path: docs/api/svelte.md --- title: '@urql/svelte' order: 3 --- # Svelte API > **Note:** These API docs are deprecated as we now keep TSDocs in all published packages. > You can view TSDocs while using these packages in your editor, as long as it supports the > TypeScript Language Server. > We're planning to replace these API docs with a separate web app soon. ## queryStore The `queryStore` factory accepts properties as inputs and returns a Svelte pausable, readable store of results, with type `OperationResultStore & Pausable`. 
| Argument        | Type                     | Description                                                                                                |
| --------------- | ------------------------ | ------------------------------------------------------------------------------------------------------------ |
| `client`        | `Client`                 | The [`Client`](./core.md#Client) to use for the operation.                                                    |
| `query`         | `string \| DocumentNode` | The query to be executed. Accepts a plain string query or a GraphQL `DocumentNode`.                           |
| `variables`     | `?object`                | The variables to be used with the GraphQL request.                                                            |
| `requestPolicy` | `?RequestPolicy`         | An optional [request policy](./core.md#requestpolicy) that should be used specifying the cache strategy.      |
| `pause`         | `?boolean`               | A boolean flag instructing [execution to be paused](../basics/vue.md#pausing-usequery).                       |
| `context`       | `?object`                | Holds the contextual information for the query.                                                               |

This store is pausable, which means that the result has methods on it to `pause()` or `resume()` the subscription of the operation.

[Read more about how to use the `queryStore` API on the "Queries" page.](../basics/svelte.md#queries)

## mutationStore

The `mutationStore` factory accepts properties as inputs and returns a Svelte readable store of a result.

| Argument    | Type                     | Description                                                                          |
| ----------- | ------------------------ | ---------------------------------------------------------------------------------------- |
| `client`    | `Client`                 | The [`Client`](./core.md#Client) to use for the operation.                                |
| `query`     | `string \| DocumentNode` | The query to be executed. Accepts a plain string query or a GraphQL `DocumentNode`.       |
| `variables` | `?object`                | The variables to be used with the GraphQL request.                                        |
| `context`   | `?object`                | Holds the contextual information for the query.                                           |

[Read more about how to use the `mutation` API on the "Mutations" page.](../basics/svelte.md#mutations)

## subscriptionStore

The `subscriptionStore` utility function accepts the same inputs as `queryStore` does as its first argument, [see above](#querystore).

The function also optionally accepts a second argument, a `handler` function. This function has the following type signature:

```js
type SubscriptionHandler = (previousData: R | undefined, data: T) => R;
```

This function will be called with the previous data (or `undefined`) and the new data that's incoming from a subscription event, and may be used to "reduce" the data over time, altering the value of `result.data`.

[Read more about how to use the `subscription` API on the "Subscriptions" page.](../advanced/subscriptions.md#svelte)

## OperationResultStore

A Svelte readable store of an [`OperationResult`](./core.md#operationresult). This store will be updated as the incoming data changes.

| Prop         | Type                   | Description                                                                                                                                          |
| ------------ | ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data`       | `?any`                 | Data returned by the specified query                                                                                                                      |
| `error`      | `?CombinedError`       | A [`CombinedError`](./core.md#combinederror) instance that wraps network or `GraphQLError`s (if any)                                                      |
| `extensions` | `?Record`              | Extensions that the GraphQL server may have returned.                                                                                                     |
| `stale`      | `boolean`              | A flag that may be set to `true` by exchanges to indicate that the `data` is incomplete or out-of-date, and that the result will be updated soon.         |
| `fetching`   | `boolean`              | A flag that indicates whether the operation is currently in progress, which means that the `data` and `error` are out-of-date for the given inputs.       |
## Pausable

The `queryStore` and `subscriptionStore` stores are pausable. This means they inherit the following properties from the `Pausable` store.

| Prop        | Type                | Description                                                                                                                    |
| ----------- | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| `isPaused$` | `Readable`          | A Svelte readable store indicating whether the operation is currently paused. Essentially, this is equivalent to `!fetching`        |
| `pause()`   | `pause(): void`     | This method pauses the ongoing operation.                                                                                            |
| `resume()`  | `resume(): void`    | This method resumes the previously paused operation.                                                                                 |

## Context API

In `urql`'s Svelte bindings, the [`Client`](./core.md#client) is passed into the factories for the stores above manually. This allows for greater flexibility. However, for convenience's sake, instead of keeping a `Client` singleton, we may also use [Svelte's Context API](https://svelte.dev/tutorial/context-api).

`@urql/svelte` provides wrapper functions around Svelte's [`setContext`](https://svelte.dev/docs#run-time-svelte-setcontext) and [`getContext`](https://svelte.dev/docs#run-time-svelte-getcontext) functions:

- `setContextClient`
- `getContextClient`
- `initContextClient` (a shortcut for `createClient` + `setContextClient`)

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/urql.md
# Path: docs/api/urql.md

---
title: urql (React)
order: 1
---

# React API

> **Note:** These API docs are deprecated as we now keep TSDocs in all published packages.
> You can view TSDocs while using these packages in your editor, as long as it supports the
> TypeScript Language Server.
> We're planning to replace these API docs with a separate web app soon.

## useQuery

Accepts a single required options object as an input with the following properties:

| Prop            | Type                     | Description                                                                                                |
| --------------- | ------------------------ | ------------------------------------------------------------------------------------------------------------ |
| `query`         | `string \| DocumentNode` | The query to be executed. Accepts a plain string query or a GraphQL `DocumentNode`.                           |
| `variables`     | `?object`                | The variables to be used with the GraphQL request.                                                            |
| `requestPolicy` | `?RequestPolicy`         | An optional [request policy](./core.md#requestpolicy) that should be used specifying the cache strategy.      |
| `pause`         | `?boolean`               | A boolean flag instructing [execution to be paused](../basics/react-preact.md#pausing-usequery).              |
| `context`       | `?object`                | Holds the contextual information for the query.                                                               |

This hook returns a tuple of the shape `[result, executeQuery]`.

- The `result` is an object with the shape of an [`OperationResult`](./core.md#operationresult) with an added `fetching: boolean` property, indicating whether the query is being fetched.
- The `executeQuery` function optionally accepts a [`Partial<OperationContext>`](./core.md#operationcontext) and reexecutes the current query when it's called. When `pause` is set to `true` this executes the query, overriding the otherwise paused hook.

[Read more about how to use the `useQuery` API on the "Queries" page.](../basics/react-preact.md#queries)

## useMutation

Accepts a single `query` argument of type `string | DocumentNode` and returns a tuple of the shape `[result, executeMutation]`.
- The `result` is an object with the shape of an [`OperationResult`](./core.md#operationresult) with an added `fetching: boolean` property, indicating whether the mutation is being executed.
- The `executeMutation` function accepts variables and optionally a [`Partial<OperationContext>`](./core.md#operationcontext) and may be used to start executing a mutation. It returns a `Promise` resolving to an [`OperationResult`](./core.md#operationresult).

[Read more about how to use the `useMutation` API on the "Mutations" page.](../basics/react-preact.md#mutations)

## useSubscription

Accepts a single required options object as an input with the following properties:

| Prop        | Type                     | Description                                                                                        |
| ----------- | ------------------------ | ------------------------------------------------------------------------------------------------------ |
| `query`     | `string \| DocumentNode` | The query to be executed. Accepts a plain string query or a GraphQL `DocumentNode`.                     |
| `variables` | `?object`                | The variables to be used with the GraphQL request.                                                      |
| `pause`     | `?boolean`               | A boolean flag instructing [execution to be paused](../basics/react-preact.md#pausing-usequery).        |
| `context`   | `?object`                | Holds the contextual information for the query.                                                         |

The hook optionally accepts a second argument, which may be a handler function with a type signature of:

```js
type SubscriptionHandler = (previousData: R | undefined, data: T) => R;
```

This function will be called with the previous data (or `undefined`) and the new data that's incoming from a subscription event, and may be used to "reduce" the data over time, altering the value of `result.data`.

This hook returns a tuple of the shape `[result, executeSubscription]`.

- The `result` is an object with the shape of an [`OperationResult`](./core.md#operationresult).
- The `executeSubscription` function optionally accepts a [`Partial<OperationContext>`](./core.md#operationcontext) and restarts the current subscription when it's called. When `pause` is set to `true` this starts the subscription, overriding the otherwise paused hook.

The `fetching: boolean` property on the `result` may change to `false` when the server proactively ends the subscription.

By default, `urql` is unable to start subscriptions, since this requires some additional setup.

[Read more about how to use the `useSubscription` API on the "Subscriptions" page.](../advanced/subscriptions.md)

## Query Component

This component is a wrapper around [`useQuery`](#usequery), exposing a [render prop API](https://reactjs.org/docs/render-props.html) for cases where hooks aren't desirable.

The API of the `Query` component mirrors the API of [`useQuery`](#usequery). The props that `<Query>` accepts are the same as `useQuery`'s options object. A function callback must be passed to `children` that receives the query result and must return a React element. The second argument of the hook's tuple, `executeQuery`, is passed as an added property on the query result.

## Mutation Component

This component is a wrapper around [`useMutation`](#usemutation), exposing a [render prop API](https://reactjs.org/docs/render-props.html) for cases where hooks aren't desirable.

The `Mutation` component accepts a `query` prop, and a function callback must be passed to `children` that receives the mutation result and must return a React element. The second argument of `useMutation`'s returned tuple, `executeMutation`, is passed as an added property on the mutation result object.
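To make the shape of the `useMutation` tuple more concrete, here's a brief sketch; the `updateTodo` mutation and the wrapper hook around it are made up for illustration:

```js
import { gql, useMutation } from 'urql';

const UpdateTodo = gql`
  mutation ($id: ID!, $text: String!) {
    updateTodo(id: $id, text: $text) {
      id
      text
    }
  }
`;

// A small custom hook wrapping `useMutation`, so components only see what they need.
function useUpdateTodo() {
  const [result, executeMutation] = useMutation(UpdateTodo);
  // `executeMutation` takes the variables first and resolves to an `OperationResult`.
  const update = (id, text) => executeMutation({ id, text });
  return { updating: result.fetching, error: result.error, update };
}
```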
## Subscription Component This component is a wrapper around [`useSubscription`](#usesubscription), exposing a [render prop API](https://reactjs.org/docs/render-props.html) for cases where hooks aren't desirable. The API of the `Subscription` component mirrors the API of [`useSubscription`](#usesubscription). The props that `` accepts are the same as `useSubscription`'s options object, with an added, optional `handler` prop that may be passed, which for the `useSubscription` hook is instead the second argument. A function callback must be passed to `children` that receives the subscription result and must return a React element. The second argument of the hook's tuple, `executeSubscription` is passed as an added property on the subscription result. ## Context `urql` is used in React by adding a provider around where the [`Client`](./core.md#client) is supposed to be used. Internally this means that `urql` creates a [React Context](https://reactjs.org/docs/context.html). All created parts of this context are exported by `urql`, namely: - `Context` - `Provider` - `Consumer` To keep examples brief, `urql` creates a default client with the `url` set to `'/graphql'`. This client will be used when no `Provider` wraps any of `urql`'s hooks. However, to prevent this default client from being used accidentally, a warning is output in the console for the default client. ### useClient `urql` also exports a `useClient` hook, which is a convenience wrapper like the following: ```js import React from 'react'; import { Context } from 'urql'; const useClient = () => React.useContext(Context); ``` However, this hook is also responsible for outputting the default client warning that's mentioned above, and should thus be preferred over manually using `useContext` with `urql`'s `Context`. --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/api/vue.md # Path: docs/api/vue.md --- title: '@urql/vue' order: 3 --- # Vue API > **Note:** These API docs are deprecated as we now keep TSDocs in all published packages. > You can view TSDocs while using these packages in your editor, as long as it supports the > TypeScript Language Server. > We're planning to replace these API docs with a separate web app soon. ## useQuery Accepts a single required options object as an input with the following properties: | Prop | Type | Description | | --------------- | ------------------------ | -------------------------------------------------------------------------------------------------------- | | `query` | `string \| DocumentNode` | The query to be executed. Accepts as a plain string query or GraphQL DocumentNode. | | `variables` | `?object` | The variables to be used with the GraphQL request. | | `requestPolicy` | `?RequestPolicy` | An optional [request policy](./core.md#requestpolicy) that should be used specifying the cache strategy. | | `pause` | `?boolean` | A boolean flag instructing [execution to be paused](../basics/vue.md#pausing-usequery). | | `context` | `?object` | Holds the contextual information for the query. | Each of these inputs may also be [reactive](https://v3.vuejs.org/api/refs-api.html) (e.g. a `ref`) and are allowed to change over time which will issue a new query. 
This function returns an object with the shape of an [`OperationResult`](./core.md#operationresult) with an added `fetching` property, indicating whether the query is currently being fetched and an `isPaused` property which will indicate whether `useQuery` is currently paused and won't automatically start querying. All of the properties on this result object are also marked as [reactive](https://v3.vuejs.org/api/refs-api.html) using `ref` and will update accordingly as the query is executed. The result furthermore carries several utility methods: | Method | Description | | -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `pause()` | This will pause automatic querying, which is equivalent to setting `pause.value = true` | | `resume()` | This will resume a paused automatic querying, which is equivalent to setting `pause.value = false` | | `executeQuery(opts)` | This will execute a new query with the given partial [`Partial`](./core.md#operationcontext) regardless of whether the query is currently paused or not. This also returns the result object again for chaining. | Furthermore the returned result object of `useQuery` is also a `PromiseLike`, which allows you to take advantage of [Vue 3's experimental Suspense feature.](https://vuedose.tips/go-async-in-vue-3-with-suspense/) When the promise is used, e.g. you `await useQuery(...)` then the `PromiseLike` will only resolve once a result from the API is available. [Read more about how to use the `useQuery` API on the "Queries" page.](../basics/vue.md#queries) ## useMutation Accepts a single `query` argument of type `string | DocumentNode` and returns a result object with the shape of an [`OperationResult`](./core.md#operationresult) with an added `fetching` property. All of the properties on this result object are also marked as [reactive](https://v3.vuejs.org/api/refs-api.html) using `ref` and will update accordingly as the mutation is executed. The object also carries a special `executeMutation` method, which accepts variables and optionally a [`Partial`](./core.md#operationcontext) and may be used to start executing a mutation. It returns a `Promise` resolving to an [`OperationResult`](./core.md#operationresult) [Read more about how to use the `useMutation` API on the "Mutations" page.](../basics/vue.md#mutations) ## useSubscription Accepts a single required options object as an input with the following properties: | Prop | Type | Description | | ----------- | ------------------------ | --------------------------------------------------------------------------------------- | | `query` | `string \| DocumentNode` | The query to be executed. Accepts as a plain string query or GraphQL DocumentNode. | | `variables` | `?object` | The variables to be used with the GraphQL request. | | `pause` | `?boolean` | A boolean flag instructing [execution to be paused](../basics/vue.md#pausing-usequery). | | `context` | `?object` | Holds the contextual information for the subscription. | Each of these inputs may also be [reactive](https://v3.vuejs.org/api/refs-api.html) (e.g. a `ref`) and are allowed to change over time which will issue a new query. 
`useSubscription` also optionally accepts a second argument, which may be a handler function with a type signature of: ```js type SubscriptionHandler = (previousData: R | undefined, data: T) => R; ``` This function will be called with the previous data (or `undefined`) and the new data that's incoming from a subscription event, and may be used to "reduce" the data over time, altering the value of `result.data`. This function returns an object with the shape of an [`OperationResult`](./core.md#operationresult) with an added `fetching` property, indicating whether the subscription is currently running and an `isPaused` property which will indicate whether `useSubscription` is currently paused. All of the properties on this result object are also marked as [reactive](https://v3.vuejs.org/api/refs-api.html) using `ref` and will update accordingly as the query is executed. The result furthermore carries several utility methods: | Method | Description | | --------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `pause()` | This will pause the subscription, which is equivalent to setting `pause.value = true` | | `resume()` | This will resume the subscription, which is equivalent to setting `pause.value = false` | | `executeSubscription(opts)` | This will start a new subscription with the given partial [`Partial`](./core.md#operationcontext) regardless of whether the subscription is currently paused or not. This also returns the result object again for chaining. | [Read more about how to use the `useSubscription` API on the "Subscriptions" page.](../advanced/subscriptions.md#vue) ## useClientHandle The `useClientHandle()` function may, like the other `use*` functions, be called either in `setup()` or another lifecycle hook, and returns a so called "client handle". Using this `handle` we can access the [`Client`](./core.md#client) directly via the `client` property or call the other `use*` functions as methods, which will be directly bound to this `client`. This may be useful when chaining these methods inside an `async setup()` lifecycle function. | Method | Description | | ---------------------- | ------------------------------------------------------------------------------------------------------------------------- | | `client` | Contains the raw [`Client`](./core.md#client) reference, which allows the `Client` to be used directly. | | `useQuery(...)` | Accepts the same arguments as the `useQuery` function, but will always use the `Client` from the handle's context. | | `useMutation(...)` | Accepts the same arguments as the `useMutation` function, but will always use the `Client` from the handle's context. | | `useSubscription(...)` | Accepts the same arguments as the `useSubscription` function, but will always use the `Client` from the handle's context. | ## Context API In Vue the [`Client`](./core.md#client) is provided either to your app or to a parent component of a given subtree and is then subsequently injected whenever one of the above composition functions is used. You can provide the `Client` from any of your components using the `provideClient` function. Alternatively, `@urql/vue` also has a default export of a [Vue Plugin function](https://v3.vuejs.org/guide/plugins.html#using-a-plugin). 
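As a quick sketch of both approaches (the endpoint URL and the `App.vue` root component are made up for illustration):

```js
import { createApp } from 'vue';
import urql, { provideClient } from '@urql/vue';
import { createClient, cacheExchange, fetchExchange } from '@urql/core';
import App from './App.vue'; // hypothetical root component

const client = createClient({
  url: 'http://localhost:3000/graphql', // hypothetical endpoint
  exchanges: [cacheExchange, fetchExchange],
});

// Install the default-exported plugin on the whole app...
createApp(App).use(urql, client).mount('#app');

// ...or, inside a component's `setup()`, provide the `Client` to a subtree instead:
// provideClient(client);
```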
Both `provideClient` and the plugin function either accept an [instance of `Client`](./core.md#client) or the same options that `createClient` accepts as inputs.

---
# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/architecture.md
# Path: docs/architecture.md
---
title: Architecture
order: 3
---

# Architecture

`urql` is a highly customizable and flexible GraphQL client. As you use it in your app, it's split into three parts:

- Bindings — such as for React, Preact, Vue, or Svelte — which interact with `@urql/core`'s `Client`.
- The Client — as created [with the core `@urql/core` package](./basics/core.md), which interacts with "exchanges" to execute GraphQL operations, and which you can also use directly.
- Exchanges, which provide functionality like fetching or caching to the `Client`.

By default, `urql` aims to provide the minimal amount of features that allow us to build an app quickly. However, `urql` has also been designed to be a GraphQL Client that grows with our usage and demands. As we go from building our smallest or first GraphQL apps to utilising its full functionality, we have tools at our disposal to extend and customize `urql` to our liking.

## Using GraphQL Clients

You may have worked with a GraphQL API previously and noticed that using GraphQL in your app can be as straightforward as sending a plain HTTP request with your query to fetch some data. GraphQL also provides an opportunity to abstract away a lot of the manual work that goes with sending these queries and managing the data. Ultimately, this lets you focus on building your app without having to handle the technical details of state management in detail.

Specifically, `urql` simplifies three common aspects of using GraphQL:

- Sending queries and mutations and receiving results _declaratively_
- Abstracting _caching_ and state management internally
- Providing a central point of _extensibility_ and integration with your API

In the following sections we'll talk about the way that `urql` solves these three problems and how the logic is abstracted away internally.

## Requests and Operations on the Client

If `urql` were a train, it would make several stops before arriving at its terminus, our API. It starts with us defining queries or mutations in GraphQL's query language. Any GraphQL request can be abstracted into its query documents and its variables.

```js
import { gql, createRequest } from '@urql/core';

const query = gql`
  query ($name: String!) {
    helloWorld(name: $name)
  }
`;

const request = createRequest(query, {
  name: 'Urkel',
});
```

In `urql`, these GraphQL requests are treated as unique objects, and each GraphQL request will have a `key` generated for it. This `key` is a hash of the query document and the variables you provide and is set on the `key` property of a [`GraphQLRequest`](./api/core.md#graphqlrequest).

Whenever we decide to send our GraphQL requests to a GraphQL API we start by using `urql`'s [`Client`](./api/core.md#client). The `Client` accepts several options to configure its behaviour and the behaviour of exchanges, like the `fetchExchange`. For instance, we can pass it a `url` which the `fetchExchange` will use to make a `fetch` call to our GraphQL API.

```js
import { Client, cacheExchange, fetchExchange } from '@urql/core';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [cacheExchange, fetchExchange],
});
```

Above, we're defining a `Client` that is ready to accept our requests.
It will apply basic document caching and will send uncached requests to the `url` we pass it. The bindings that we've seen in [the "Basics" section](./basics/README.md), like `useQuery` for React for example, interact with [the `Client`](./api/core.md#client) directly and are a thin abstraction. Some methods can be called on it directly however, as seen [on the "Core Usage" page](./basics/core.md#one-off-queries-and-mutations). ```js // Given our request and client defined above, we can call const subscription = client.executeQuery(request).subscribe(result => { console.log(result.data); }); ``` As we've seen, `urql` defines our query documents and variables as [`GraphQLRequest`s](./api/core.md#graphqlrequest). However, since we have more metadata that is needed, like our `url` option on the `Client`, `urql` internally creates [`Operation`s](./api/core.md#operation) each time a request is executed. The operations are then forwarded to the exchanges, like the `cacheExchange` and `fetchExchange`. An "Operation" is an extension of `GraphQLRequest`s. Not only do they carry the `query`, `variables`, and a `key` property, they will also identify the `kind` of operation that is executed, like `"query"` or `"mutation"`, and they contain the `Client`'s options on `operation.context`. ![Operations and Results](./assets/urql-event-hub.png) This means, once we hand over a GraphQL request to the `Client`, it will create an `Operation`, and then hand it over to the exchanges until a result comes back. As shown in the diagram, each operation is like an event or signal for a GraphQL request to start, and the exchanges will eventually send back a corresponding result. However, because the cache can send updates to us whenever it detects a change, or you could cancel a GraphQL request before it finishes, a special "teardown" `Operation` also exists, which cancels ongoing requests. ## The Client and Exchanges To reiterate, when we use `urql`'s bindings for our framework of choice, methods are called on the `Client`, but we never see the operations that are created in the background from our bindings. We call a method like `client.executeQuery` (or it's called for us in the bindings), an operation is issued internally when we subscribe with a callback, and later, we're given results. ![Operations stream and results stream](./assets/urql-client-architecture.png) While we know that, for us, we're only interested in a single [`Operation`](./api/core.md#operation) and its [`OperationResult`s](./api/core.md#operationresult) at a time, the `Client` treats these as one big stream. The `Client` sees an incoming flow of all of our operations. As we've learned before, each operation carries a `key` and each result we receive carries the original `operation`. Because an `OperationResult` also carries an `operation` property the `Client` will always know which results correspond to an individual operation. However, internally, all of our operations are processed at the same time concurrently. However, from our perspective: - We subscribe to a "stream" and expect to get results on a callback - The `Client` issues the operation, and we'll receive some results back eventually as either the cache responds (synchronously), or the request gets sent to our API. - We eventually unsubscribe, and the `Client` issues a "teardown" operation with the same `key` as the original operation, which concludes our flow. The `Client` itself doesn't actually know what to do with operations. Instead, it sends them through "exchanges". 
Exchanges are akin to [middleware in Redux](https://redux.js.org/advanced/middleware) and have access to all operations and all results. Multiple exchanges are chained to process our operations and to execute logic on them, one of them being the `fetchExchange`, which as the name implies sends our requests to our API. ### How operations get to exchanges We now know how we get to operations and to the `Client`: - Any bindings or calls to the `Client` create an **operation** - This operation identifies itself as either a `"query"`, `"mutation"` or `"subscription"` and has a unique `key`. - This operation is sent into the **exchanges** and eventually ends up at the `fetchExchange` (or a similar exchange) - The operation is sent to the API and a **result** comes back, which is wrapped in an `OperationResult` - The `Client` filters the `OperationResult` by the `operation.key` and — via a callback — gives us a **stream of results**. To come back to our train analogy from earlier, an operation, like a train, travels from one end of the track to the terminus — our API. The results then come back on the same path as they're just travelling the same line in reverse. ### The Exchanges By default, the `Client` doesn't do anything with GraphQL requests. It contains only the logic to manage and differentiate between active and inactive requests and converts them to operations. To actually do something with our GraphQL requests, it needs _exchanges_, which are like plugins that you can pass to create a pipeline of how GraphQL operations are executed. By default, you may want to add the `cacheExchange` and the `fetchExchange` from `@urql/core`: - `cacheExchange`: Caches GraphQL results with ["Document Caching"](./basics/document-caching.md) - `fetchExchange`: Executes GraphQL requests with a `fetch` HTTP call ```js import { Client, cacheExchange, fetchExchange } from '@urql/core'; const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], }); ``` As we can tell, exchanges define not only how GraphQL requests are executed and handled, but also get control over caching. Exchanges can be used to change almost any behaviour in the `Client`, although internally they only handle incoming & outgoing requests and incoming & outgoing results. Some more exchanges that we can use with our `Client` are: - [`mapExchange`](./api/core.md#mapexchange): Allows changing and reacting to operations, results, and errors - [`ssrExchange`](./advanced/server-side-rendering.md): Allows for a server-side renderer to collect results for client-side rehydration. - [`retryExchange`](./advanced/retry-operations.md): Allows operations to be retried on errors - [`persistedExchange`](./advanced/persistence-and-uploads.md#automatic-persisted-queries): Provides support for Automatic Persisted Queries - [`authExchange`](./advanced/authentication.md): Allows refresh authentication to be implemented easily. - [`requestPolicyExchange`](./api/request-policy-exchange.md): Automatically refreshes results given a TTL. - `devtoolsExchange`: Provides the ability to use the [urql-devtools](https://github.com/urql-graphql/urql-devtools) We can even swap out our [document cache](./basics/document-caching.md), which is implemented by `@urql/core`'s `cacheExchange`, with `urql`'s [normalized cache, Graphcache](./graphcache/README.md). 
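As a minimal sketch of that swap, assuming the `@urql/exchange-graphcache` package is installed, only the default `cacheExchange` in the `exchanges` array changes:

```js
import { Client, fetchExchange } from '@urql/core';
// Graphcache ships its own cacheExchange, which is created with a config object
import { cacheExchange } from '@urql/exchange-graphcache';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [cacheExchange({}), fetchExchange],
});
```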
[Read more about exchanges and how to write them from scratch on the "Authoring Exchanges" page.](./advanced/authoring-exchanges.md) ## Stream Patterns in `urql` In the previous sections we've learned a lot about how the `Client` works, but we've always learned it in vague terms — for instance, we've learned that we get a "stream of results" or `urql` sees all operations as "one stream of operations" that it sends to the exchanges. But, **what are streams?** Generally we refer to _streams_ as abstractions that allow us to program with asynchronous events over time. Within the context of JavaScript we're specifically thinking in terms of [Observables](https://github.com/tc39/proposal-observable) and [Reactive Programming with Observables.](http://reactivex.io/documentation/observable.html) These concepts may sound intimidating, but from a high-level view what we're talking about can be thought of as a combination of promises and iterables (e.g. arrays). We're dealing with multiple events, but our callback is called over time. It's like calling `forEach` on an array but expecting the results to come in asynchronously. As a user, if we're using the one framework bindings that we've seen in [the "Basics" section](./basics/README.md), we may never see these streams in action or may never use them even, since the bindings internally use them for us. But if we [use the `Client` directly](./basics/core.md#one-off-queries-and-mutations) or write exchanges then we'll see streams and will have to deal with their API. ### Stream patterns with the client When we call methods on the `Client` like [`client.executeQuery`](./api/core.md#clientexecutequery) or [`client.query`](./api/core.md#clientquery) then these will return a "stream" of results. It's normal for GraphQL subscriptions to deliver multiple results, however, even GraphQL queries can give you multiple results in `urql`. This is because operations influence one another. When a cache invalidates a query, this query may refetch, and a new result is delivered to your application. Multiple results mean that once you subscribe to a GraphQL query via the `Client`, you may receive new results in the future. ```js import { gql } from '@urql/core'; const QUERY = gql` query Test($id: ID!) { getUser(id: $id) { id name } } `; client.query(QUERY, { id: 'test' }).subscribe(result => { console.log(result); // { data: ... } }); ``` Read more about the available APIs on the `Client` in the [Core API docs](./api/core.md). Internally, these streams and all exchanges are written using a library called [`wonka`](https://wonka.kitten.sh/basics/background), which is a tiny Observable-like library. It is used to write exchanges and when we interact with the `Client` it is used internally as well. --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/basics/README.md # Path: docs/basics/README.md --- title: Basics order: 2 --- # Basics In this chapter we'll explain the basics of `urql` and how to get started with using it without any prior knowledge. - [**React/Preact**](./react-preact.md) covers how to work with the bindings for React/Preact. - [**Vue**](./vue.md) covers how to work with the bindings for Vue 3. - [**Svelte**](./svelte.md) covers how to work with the bindings for Svelte. - [**Core Package**](./core.md) defines why a shared package exists that contains the main logic of `urql`, and how we can use it directly in Node.js. 
After reading the page for your bindings and the "Core" page you may want to the next two pages in this section of the documentation: - [**Document Caching**](./document-caching.md) explains the default cache mechanism of `urql`, as opposed to the opt-in [Normalized Cache](../graphcache/normalized-caching.md). - [**Errors**](../basics/errors.md) contains information on error handling in `urql`. - [**UI-Patterns**](../basics/ui-patterns.md) presents some common UI-patterns with `urql`. --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/basics/core.md # Path: docs/basics/core.md --- title: Core / Node.js order: 3 --- # Core and Node.js Usage The `@urql/core` package contains `urql`'s `Client`, some common utilities, and some default _Exchanges_. These are the shared, default parts of `urql` that we will be using no matter which framework we're interacting with. All framework bindings — meaning `urql`, `@urql/preact`, `@urql/svelte`, and `@urql/vue` — reexport all exports of our `@urql/core` core library. This means that if we want to use `urql`'s `Client` imperatively or with Node.js we'd use `@urql/core`'s utilities or the `Client` directly. In other words, if we're using framework bindings then writing `import { Client } from "@urql/vue"` for instance is the same as `import { Client } from "@urql/core"`. This means that we can use the core utilities and exports that are shared between all bindings directly or install `@urql/core` separately. We can even use `@urql/core` directly without any framework bindings. ## Installation As we said above, if we are using bindings then those will already have installed `@urql/core` as they depend on it. They also all re-export all exports from `@urql/core`, so we can use those regardless of which bindings we've installed. However, it's also possible to explicitly install `@urql/core` or use it standalone, e.g. in a Node.js environment. ```sh yarn add @urql/core # or npm install --save @urql/core ``` Since all bindings and all exchanges depend on `@urql/core`, we may sometimes run into problems where the package manager installs _two versions_ of `@urql/core`, which is a duplication problem. This can cause type errors in TypeScript or cause some parts of our application to bundle two different versions of the package or use slightly different utilities. We can fix this by deduplicating our dependencies. ```sh # npm npm dedupe # pnpm pnpm dedupe # yarn npx yarn-deduplicate && yarn ``` ## GraphQL Tags A notable utility function is the `gql` tagged template literal function, which is a drop-in replacement for `graphql-tag`, if you're coming from other GraphQL clients. Wherever `urql` accepts a query document, we can either pass a string or a `DocumentNode`. `gql` is a utility that allows a `DocumentNode` to be created directly, and others to be interpolated into it, which is useful for fragments for instance. This function will often also mark GraphQL documents for syntax highlighting in most code editors. In most examples we may have passed a string to define a query document, like so: ```js const TodosQuery = ` query { todos { id title } } `; ``` We may also use the `gql` tag function to create a `DocumentNode` directly: ```js import { gql } from '@urql/core'; const TodosQuery = gql` query { todos { id title } } `; ``` Since all framework bindings also re-export `@urql/core`, we may also import `gql` from `'urql'`, `'@urql/svelte'` and other bindings directly. 
We can also start interpolating other documents into the tag function. This is useful to compose fragment documents into a larger query, since it's common to define fragments across components of an app to spread out data dependencies. If we accidentally use a duplicate fragment name in a document, `gql` will log a warning, since GraphQL APIs won't accept duplicate names. ```js import { gql } from '@urql/core'; const TodoFragment = gql` fragment SmallTodo on Todo { id title } `; const TodosQuery = gql` query { todos { ...TodoFragment } } ${TodoFragment} `; ``` This usage will look familiar when coming from the `graphql-tag` package. The `gql` API is identical, and its output is approximately the same. The two packages are also intercompatible. However, one small change in `@urql/core`'s implementation is that your fragment names don't have to be globally unique, since it's possible to create some one-off fragments occasionally, especially for `@urql/exchange-graphcache`'s configuration. It also pre-generates a "hash key" for the `DocumentNode` which is what `urql` does anyway, thus avoiding some extra work compared to when the `graphql-tag` package is used with `urql`. ## Using the `urql` Client The `Client` is the main "hub" and store for everything that `urql` does. It is used by all framework bindings and from the other pages in the "Basics" section we can see that creating a `Client` comes up across all bindings and use-cases for `urql`. [Read more about the `Client` and `urql`'s architecture on the "Architecture" page.](../architecture.md) ### Setting up the `Client` The `@urql/core` package exports a `Client` class, which we can use to create the GraphQL client. This central `Client` manages all of our GraphQL requests and results. ```js import { Client, cacheExchange, fetchExchange } from '@urql/core'; const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], }); ``` At the bare minimum we'll need to pass an API's `url`, and the `fetchExchange`, when we create a `Client` to get started. Another common option is `fetchOptions`. This option allows us to customize the options that will be passed to `fetch` when a request is sent to the given API `url`. We may pass in an options object, or a function returning an options object. In the following example we'll add a token to each `fetch` request that our `Client` sends to our GraphQL API. ```js const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], fetchOptions: () => { const token = getToken(); return { headers: { authorization: token ? `Bearer ${token}` : '' }, }; }, }); ``` ### The `Client`s options As we've seen above, the most important options for the `Client` are `url` and `exchanges`. The `url` option is used by the `fetchExchange` to send GraphQL requests to an API. The `exchanges` option is of particular importance however because it tells the `Client` what to do with our GraphQL requests: ```js import { Client, cacheExchange, fetchExchange } from '@urql/core'; const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], }); ``` For instance, here, the `Client`'s caching and fetching features are only available because we're passing it exchanges. In the above example, the `Client` will try to first read a GraphQL request from a local cache, and if this request isn't cached it'll make an HTTP request. 
The caching in `urql` is also implemented as an exchange, so for instance, the behavior described on the ["Document Caching" page](./document-caching.md) is all contained within the `cacheExchange` above. Later, [in the "Advanced" section](../advanced/README.md) we'll see many more features that `urql` supports by adding new exchanges to this list. On [the "Architecture" page](../architecture.md) we'll also learn more about what exchanges are and why they exist. ### One-off Queries and Mutations When you're using `urql` to send one-off queries or mutations — rather than in full framework code, where updates are important — it's common to convert the streams that we get to promises. The `client.query` and `client.mutation` methods have a shortcut to do just that. ```js const QUERY = ` query Test($id: ID!) { getUser(id: $id) { id name } } `; client .query(QUERY, { id: 'test' }) .toPromise() .then(result => { console.log(result); // { data: ... } }); ``` In the above example we're executing a query on the client, are passing some variables and are calling the `toPromise()` method on the return value to execute the request immediately and get the result as a promise. This may be useful when we don't plan on cancelling queries, or we don't care about future updates to this data and are just looking to query a result once. This can also be written using async/await by simply awaiting the return value of `client.query`: ```js const QUERY = ` query Test($id: ID!) { getUser(id: $id) { id name } } `; async function query() { const result = await client.query(QUERY, { id: 'test' }); console.log(result); // { data: ... } } ``` The same can be done for mutations by calling the `client.mutation` method instead of the `client.query` method. It's worth noting that promisifying a query result will always only give us _one_ result, because we're not calling `subscribe`. This means that we'll never see cache updates when we're asking for a single result like we do above. #### Reading only cache data Similarly there's a way to read data from the cache synchronously, provided that the cache has received a result for a given query before. The `Client` has a `readQuery` method, which is a shortcut for just that. ```js const QUERY = ` query Test($id: ID!) { getUser(id: $id) { id name } } `; const result = client.readQuery(QUERY, { id: 'test' }); result; // null or { data: ... } ``` In the above example we call `readQuery` and receive a result immediately. This result will be `null` if the `cacheExchange` doesn't have any results cached for the given query. ### Subscribing to Results GraphQL Clients are by their nature "reactive", meaning that when we execute a query, we expect to get future results for this query. [On the "Document Caching" page](./document-caching.md) we'll learn how mutations can invalidate results in the cache. This process (and others just like it) can cause our query to be refetched. In essence, if we're subscribing to results rather than using a promise, like we've seen above, then we're able to see future changes for our query's results. If a mutation causes a query to be refetched from our API in the background then we'll see a new result. If we execute a query somewhere else then we'll get notified of the new API result as well, as long as we're subscribed. ```js const QUERY = ` query Test($id: ID!) { getUser(id: $id) { id name } } `; const { unsubscribe } = client.query(QUERY, { id: 'test' }).subscribe(result => { console.log(result); // { data: ... 
} }); ``` This code example is similar to the one before. However, instead of sending a one-off query, we're subscribing to the query. Internally, this causes the `Client` to do the same, but the subscription means that our callback may be called repeatedly. We may get future results as well as the first one. This also works synchronously. As we've seen before `client.readQuery` can give us a result immediately if our cache already has a result for the given query. The same principle applies here! Our callback will be called synchronously if the cache already has a result. Once we're not interested in any results anymore, we need to clean up after ourselves by calling `unsubscribe`. This stops the subscription and makes sure that the `Client` doesn't actively update the query anymore or refetches it. We can think of this pattern as being very similar to events or event hubs. We're using [the Wonka library for our streams](https://wonka.kitten.sh/basics/background), which we'll learn more about [on the "Architecture" page](../architecture.md). But we can think of this as React's effects being called over time, or as `window.addEventListener`. ## Common Utilities in Core The `@urql/core` package contains other utilities that are shared between multiple addon packages. This is a short but non-exhaustive list. It contains, - [`CombinedError`](../api/core.md#combinederror) - our abstraction to combine one or more `GraphQLError`(s) and a `NetworkError` - `makeResult` and `makeErrorResult` - utilities to create _Operation Results_ - [`createRequest`](../api/core.md#createrequest) - a utility function to create a request from a query, and some variables (which generate a stable _Operation Key_) There are other utilities not mentioned here. Read more about the `@urql/core` API in the [API docs](../api/core.md). ## Reading on This concludes the introduction for using `@urql/core` without any framework bindings. This showed just a couple of ways to use `gql` or the `Client`, however you may also want to learn more about [how to use `urql`'s streams](../architecture.md#stream-patterns-in-urql). Furthermore, apart from the framework binding introductions, there are some other pages that provide more information on how to get fully set up with `urql`: - [How does the default "document cache" work?](./document-caching.md) - [How are errors handled and represented?](./errors.md) - [A quick overview of `urql`'s architecture and structure.](../architecture.md) - [Setting up other features, like authentication, uploads, or persisted queries.](../advanced/README.md) --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/basics/document-caching.md # Path: docs/basics/document-caching.md --- title: Document Caching order: 4 --- # Document Caching By default, `urql` uses a concept called _Document Caching_. It will avoid sending the same requests to a GraphQL API repeatedly by caching the result of each query. This works like the cache in a browser. `urql` creates a key for each request that is sent based on a query and its variables. The default _document caching_ logic is implemented in the default `cacheExchange`. We'll learn more about ["Exchanges" on the "Architecture" page.](../architecture.md) ## Operation Keys ![Keys for GraphQL Requests](../assets/urql-operation-keys.png) Once a result comes in it's cached indefinitely by its key. This means that each unique request can have exactly one cached result. 
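To make the keys more concrete, here's a small sketch using `createRequest` from `@urql/core` with a hypothetical `TodosQuery` document; two requests created from the same query and variables share the same `key`, so the document cache treats them as the same request:

```js
import { gql, createRequest } from '@urql/core';

const TodosQuery = gql`
  query ($from: Int!, $limit: Int!) {
    todos(from: $from, limit: $limit) {
      id
      title
    }
  }
`;

// Identical query + variables produce an identical `key`
const a = createRequest(TodosQuery, { from: 0, limit: 10 });
const b = createRequest(TodosQuery, { from: 0, limit: 10 });
console.log(a.key === b.key); // true
```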
However, we also need to invalidate the cached results so that requests are sent again and updated, when we know that some results are out-of-date. With document caching we assume that a result may be invalidated by a mutation that executes on data that has been queried previously. In GraphQL the client can request additional type information by adding the `__typename` field to a query's _selection set_. This field returns the name of the type for an object in the results, and we use it to detect commonalities and data dependencies between queries and mutations. ![Document Caching](../assets/urql-document-caching.png) In short, when we send a mutation that contains types that another query's results contains as well, that query's result is removed from the cache. This is an aggressive form of cache invalidation. However, it works well for content-driven sites, while it doesn't deal with normalized data or IDs. ## Request Policies The _request policy_ that is defined will alter what the default document cache does. By default, the cache will prefer cached results and will otherwise send a request, which is called `cache-first`. In total there are four different policies that we can use: - `cache-first` (the default) prefers cached results and falls back to sending an API request when no prior result is cached. - `cache-and-network` returns cached results but also always sends an API request, which is perfect for displaying data quickly while keeping it up-to-date. - `network-only` will always send an API request and will ignore cached results. - `cache-only` will always return cached results or `null`. The `cache-and-network` policy is particularly useful, since it allows us to display data instantly if it has been cached, but also refreshes data in our cache in the background. This means though that `fetching` will be `false` for cached results although an API request may still be ongoing in the background. For this reason there's another field on results, `result.stale`, which indicates that the cached result is either outdated or that another request is being sent in the background. [Read more about which request policies are available in the API docs.](../api/core.md#requestpolicy-type) ## Document Cache Gotchas This cache has a small trade-off! If we request a list of data, and the API returns an empty list, then the cache won't be able to see the `__typename` of said list and invalidate it. There are two ways to fix this issue, supplying `additionalTypenames` to the context of your query or [switch to "Normalized Caching" instead](../graphcache/normalized-caching.md). ### Adding typenames This will elaborate about the first fix for empty lists, the `additionalTypenames`. Example where this would occur: ```js const query = `query { todos { id name } }`; const result = { todos: [] }; ``` At this point we don't know what types are possible for this query, so a best practice when using the default cache is to add `additionalTypenames` for this query. ```js // Keep the reference stable. const context = useMemo(() => ({ additionalTypenames: ['Todo'] }), []); const [result] = useQuery({ query, context }); ``` Now the cache will know when to invalidate this query even when the list is empty. We may also use this feature for mutations, since occasionally mutations must invalidate data that isn't directly connected to a mutation by a `__typename`. ```js const [result, execute] = useMutation(`mutation($name: String!) 
{ createUser(name: $name) }`); const onClick = () => { execute({ name: 'newName' }, { additionalTypenames: ['Wallet'] }); }; ``` Now our `mutation` knows that when it completes it has an additional type to invalidate. --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/basics/errors.md # Path: docs/basics/errors.md --- title: Errors order: 5 --- # Error handling When we use a GraphQL API there are two kinds of errors we may encounter: Network Errors and GraphQL Errors from the API. Since it's common to encounter either of them, there's a [`CombinedError`](../api/core.md#combinederror) class that can hold and abstract either. We may encounter a `CombinedError` when using `urql` wherever an `error` may be returned, typically in results from the API. The `CombinedError` can have one of two properties that describe what went wrong. - The `networkError` property will contain any error that stopped `urql` from making a network request. - The `graphQLErrors` property may be an array that contains [normalized `GraphQLError`s as they were received in the `errors` array from a GraphQL API.](https://graphql.org/graphql-js/error/) Additionally, the `message` of the error will be generated and combined from the errors for debugging purposes. ![Combined errors](../assets/urql-combined-error.png) It's worth noting that an `error` can coexist and be returned in a successful request alongside `data`. This is because in GraphQL a query can have partially failed but still contain some data. In that case `CombinedError` will be passed to us with `graphQLErrors`, while `data` may still be set. --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/basics/react-preact.md # Path: docs/basics/react-preact.md --- title: React/Preact Bindings order: 0 --- # React/Preact This guide covers how to install and setup `urql` and the `Client`, as well as query and mutate data, with React and Preact. Since the `urql` and `@urql/preact` packages share most of their API and are used in the same way, when reading the documentation on React, all examples are essentially the same, except that we'd want to use the `@urql/preact` package instead of the `urql` package. ## Getting started ### Installation Installing `urql` is as quick as you'd expect, and you won't need any other packages to get started with at first. We'll install the package with our package manager of choice. ```sh yarn add urql # or npm install --save urql ``` To use `urql` with Preact, we have to install `@urql/preact` instead of `urql` and import from that package instead. Otherwise all examples for Preact will be the same. Most libraries related to GraphQL also need the `graphql` package to be installed as a peer dependency, so that they can adapt to your specific versioning requirements. That's why we'll need to install `graphql` alongside `urql`. Both the `urql` and `graphql` packages follow [semantic versioning](https://semver.org) and all `urql` packages will define a range of compatible versions of `graphql`. Watch out for breaking changes in the future however, in which case your package manager may warn you about `graphql` being out of the defined peer dependency range. ### Setting up the `Client` The `urql` and `@urql/preact` packages export a `Client` class, which we can use to create the GraphQL client. This central `Client` manages all of our GraphQL requests and results. 
```js
import { Client, cacheExchange, fetchExchange } from 'urql';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [cacheExchange, fetchExchange],
});
```

At the bare minimum we'll need to pass an API's `url` and `exchanges` when we create a `Client` to get started.

Another common option is `fetchOptions`. This option allows us to customize the options that will be passed to `fetch` when a request is sent to the given API `url`. We may pass in an options object, or a function returning an options object.

In the following example we'll add a token to each `fetch` request that our `Client` sends to our GraphQL API.

```js
const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [cacheExchange, fetchExchange],
  fetchOptions: () => {
    const token = getToken();
    return {
      headers: { authorization: token ? `Bearer ${token}` : '' },
    };
  },
});
```

### Providing the `Client`

To make use of the `Client` in React & Preact we will have to provide it via the [Context API](https://reactjs.org/docs/context.html). This may be done with the help of the `Provider` export.

```jsx
import { Client, Provider, cacheExchange, fetchExchange } from 'urql';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [cacheExchange, fetchExchange],
});

const App = () => (
  <Provider value={client}>
    <Todos />
  </Provider>
);
```

Now every component and element inside and under the `Provider` can use GraphQL queries that will be sent to our API.

## Queries

Both libraries offer a `useQuery` hook and a `Query` component. The latter accepts the same parameters, but we won't cover it in this guide. [Look it up in the API docs if you prefer render-props components.](../api/urql.md#query-component)

### Run a first query

For the following examples, we'll imagine that we're querying data from a GraphQL API that contains todo items. Let's dive right into it!

```jsx
import { gql, useQuery } from 'urql';

const TodosQuery = gql`
  query {
    todos {
      id
      title
    }
  }
`;

const Todos = () => {
  const [result, reexecuteQuery] = useQuery({
    query: TodosQuery,
  });

  const { data, fetching, error } = result;

  if (fetching) return <p>Loading...</p>;
  if (error) return <p>Oh no... {error.message}</p>;

  return (
    <ul>
      {data.todos.map(todo => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
};
```

Here we have implemented our first GraphQL query to fetch todos. We see that `useQuery` accepts options and returns a tuple. In this case we've set the `query` option to our GraphQL query. The tuple we then get in return is an array that contains a result object and a re-execute function.

The result object contains several properties. The `fetching` field indicates whether the hook is loading data, `data` contains the actual `data` from the API's result, and `error` is set when either the request to the API has failed or when our API result contained some `GraphQLError`s, which we'll get into later on the ["Errors" page](./errors.md).

### Variables

Typically we'll also need to pass variables to our queries, for instance, if we are dealing with pagination. For this purpose the `useQuery` hook also accepts a `variables` option, which we can use to supply variables to our query.

```jsx
const TodosListQuery = gql`
  query ($from: Int!, $limit: Int!) {
    todos(from: $from, limit: $limit) {
      id
      title
    }
  }
`;

const Todos = ({ from, limit }) => {
  const [result, reexecuteQuery] = useQuery({
    query: TodosListQuery,
    variables: { from, limit },
  });

  // ...
};
```

As when we're sending GraphQL queries manually using `fetch`, the variables will be attached to the `POST` request body that is sent to our GraphQL API.

Whenever the `variables` (or the `query`) option on the `useQuery` hook changes, `fetching` will switch to `true`, and a new request will be sent to our API, unless a result has already been cached previously.

### Pausing `useQuery`

In some cases we may want `useQuery` to execute a query when a pre-condition has been met, and not execute the query otherwise. For instance, we may be building a form and want a validation query to only take place when a field has been filled out.

Since hooks in React can't just be commented out, the `useQuery` hook accepts a `pause` option that temporarily _freezes_ all changes and stops requests.

In the previous example we've defined a query with mandatory arguments. The `$from` and `$limit` variables have been defined to be non-nullable `Int!` values. Let's pause the query we've just written so that it doesn't execute when these variables are empty, to prevent the query from being executed with `null` variables. We can do this by setting the `pause` option to `true`:

```jsx
const Todos = ({ from, limit }) => {
  const shouldPause = from === undefined || from === null || limit === undefined || limit === null;

  const [result, reexecuteQuery] = useQuery({
    query: TodosListQuery,
    variables: { from, limit },
    pause: shouldPause,
  });

  // ...
};
```

Now whenever the mandatory `$from` or `$limit` variables aren't supplied the query won't be executed. This also means that `result.data` won't change, which means we'll still have access to our old data even though the variables may have changed.

### Request Policies

As has become clear in the previous sections of this page, the `useQuery` hook accepts more options than just `query` and `variables`. Another option we should touch on is `requestPolicy`.

The `requestPolicy` option determines how results are retrieved from our `Client`'s cache. By default, this is set to `cache-first`, which means that we prefer to get results from our cache, but are falling back to sending an API request.

Request policies aren't specific to `urql`'s React API, but are a common feature in its core.
[You can learn more about how the cache behaves given the four different policies on the "Document Caching" page.](../basics/document-caching.md) ```jsx const [result, reexecuteQuery] = useQuery({ query: TodosListQuery, variables: { from, limit }, requestPolicy: 'cache-and-network', }); ``` Specifically, a new request policy may be passed directly to the `useQuery` hook as an option. This policy is then used for this specific query. In this case, `cache-and-network` is used and the query will be refreshed from our API even after our cache has given us a cached result. Internally, the `requestPolicy` is just one of several "**context** options". The `context` provides metadata apart from the usual `query` and `variables` we may pass. This means that we may also change the `Client`'s default `requestPolicy` by passing it there. ```js import { Client, cacheExchange, fetchExchange } from 'urql'; const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], // every operation will by default use cache-and-network rather // than cache-first now: requestPolicy: 'cache-and-network', }); ``` ### Context Options As mentioned, the `requestPolicy` option on `useQuery` is a part of `urql`'s context options. In fact, there are several more built-in context options, and the `requestPolicy` option is one of them. Another option we've already seen is the `url` option, which determines our API's URL. These options aren't limited to the `Client` and may also be passed per query. ```jsx import { useMemo } from 'react'; import { useQuery } from 'urql'; const Todos = ({ from, limit }) => { const [result, reexecuteQuery] = useQuery({ query: TodosListQuery, variables: { from, limit }, context: useMemo( () => ({ requestPolicy: 'cache-and-network', url: 'http://localhost:3000/graphql?debug=true', }), [] ), }); // ... }; ``` As we can see, the `context` property for `useQuery` accepts any known `context` option and can be used to alter them per query rather than globally. The `Client` accepts a subset of `context` options, while the `useQuery` option does the same for a single query. [You can find a list of all `Context` options in the API docs.](../api/core.md#operationcontext) ### Reexecuting Queries The `useQuery` hook updates and executes queries whenever its inputs, like the `query` or `variables` change, but in some cases we may find that we need to programmatically trigger a new query. This is the purpose of the `reexecuteQuery` function, which is the second item in the tuple that `useQuery` returns. Triggering a query programmatically may be useful in a couple of cases. It can for instance be used to refresh the hook's data. In these cases we may also override the `requestPolicy` of our query just once and set it to `network-only` to skip the cache. ```jsx const Todos = ({ from, limit }) => { const [result, reexecuteQuery] = useQuery({ query: TodosListQuery, variables: { from, limit }, }); const refresh = () => { // Refetch the query and skip the cache reexecuteQuery({ requestPolicy: 'network-only' }); }; }; ``` Calling `refresh` in the above example will execute the query again forcefully, and will skip the cache, since we're passing `requestPolicy: 'network-only'`. Furthermore the `reexecuteQuery` function can also be used to programmatically start a query even when `pause` is set to `true`, which would usually stop all automatic queries. This can be used to perform one-off actions, or to set up polling. 
```jsx import { useEffect } from 'react'; import { useQuery } from 'urql'; const Todos = ({ from, limit }) => { const [result, reexecuteQuery] = useQuery({ query: TodosListQuery, variables: { from, limit }, pause: true, }); useEffect(() => { if (result.fetching) return; // Set up to refetch in one second, if the query is idle const timerId = setTimeout(() => { reexecuteQuery({ requestPolicy: 'network-only' }); }, 1000); return () => clearTimeout(timerId); }, [result.fetching, reexecuteQuery]); // ... }; ``` There are some more tricks we can use with `useQuery`. [Read more about its API in the API docs for it.](../api/urql.md#usequery) ## Mutations Both libraries offer a `useMutation` hook and a `Mutation` component. The latter accepts the same parameters, but we won't cover it in this guide. [Look it up in the API docs if you prefer render-props components.](../api/urql.md#mutation-component) ### Sending a mutation Let's again pick up an example with an imaginary GraphQL API for todo items, and dive into an example! We'll set up a mutation that _updates_ a todo item's title. ```jsx const UpdateTodo = ` mutation ($id: ID!, $title: String!) { updateTodo (id: $id, title: $title) { id title } } `; const Todo = ({ id, title }) => { const [updateTodoResult, updateTodo] = useMutation(UpdateTodo); }; ``` Similar to the `useQuery` output, `useMutation` returns a tuple. The first item in the tuple again contains `fetching`, `error`, and `data` — it's identical since this is a common pattern of how `urql` presents [operation results](../api/core.md#operationresult). Unlike the `useQuery` hook, the `useMutation` hook doesn't execute automatically. At this point in our example, no mutation will be performed. To execute our mutation we instead have to call the execute function — `updateTodo` in our example — which is the second item in the tuple. ### Using the mutation result When calling our `updateTodo` function we have two ways of getting to the result as it comes back from our API. We can either use the first value of the returned tuple, our `updateTodoResult`, or we can use the promise that `updateTodo` returns. ```jsx const Todo = ({ id, title }) => { const [updateTodoResult, updateTodo] = useMutation(UpdateTodo); const submit = newTitle => { const variables = { id, title: newTitle || '' }; updateTodo(variables).then(result => { // The result is almost identical to `updateTodoResult` with the exception // of `result.fetching` not being set. // It is an OperationResult. }); }; }; ``` The result is useful when your UI has to display progress on the mutation, and the returned promise is particularly useful when you're adding side effects that run after the mutation has completed. ### Handling mutation errors It's worth noting that the promise we receive when calling the execute function will never reject. Instead it will always return a promise that resolves to a result. If you're checking for errors, you should use `result.error` instead, which will be set to a `CombinedError` when any kind of errors occurred while executing your mutation. [Read more about errors on our "Errors" page.](./errors.md) ```jsx const Todo = ({ id, title }) => { const [updateTodoResult, updateTodo] = useMutation(UpdateTodo); const submit = newTitle => { const variables = { id, title: newTitle || '' }; updateTodo(variables).then(result => { if (result.error) { console.error('Oh no!', result.error); } }); }; }; ``` There are some more tricks we can use with `useMutation`.
[Read more about its API in the API docs for it.](../api/urql.md#usemutation) ## Reading on This concludes the introduction for using `urql` with React or Preact. The rest of the documentation is mostly framework-agnostic and will apply to either `urql` in general, or the `@urql/core` package, which is the same between all framework bindings. Hence, next we may want to learn more about one of the following to learn more about the internals: - [How does the default "document cache" work?](./document-caching.md) - [How are errors handled and represented?](./errors.md) - [A quick overview of `urql`'s architecture and structure.](../architecture.md) - [Setting up other features, like authentication, uploads, or persisted queries.](../advanced/README.md) --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/basics/svelte.md # Path: docs/basics/svelte.md --- title: Svelte Bindings order: 2 --- # Svelte ## Getting started This "Getting Started" guide covers how to install and set up `urql` and provide a `Client` for Svelte. The `@urql/svelte` package, which provides bindings for Svelte, doesn't fundamentally function differently from `@urql/preact` or `urql` and uses the same [Core Package and `Client`](./core.md). ### Installation Installing `@urql/svelte` is quick and no other packages are immediately necessary. ```sh yarn add @urql/svelte # or npm install --save @urql/svelte ``` Most libraries related to GraphQL also need the `graphql` package to be installed as a peer dependency, so that they can adapt to your specific versioning requirements. That's why we'll need to install `graphql` alongside `@urql/svelte`. Both the `@urql/svelte` and `graphql` packages follow [semantic versioning](https://semver.org) and all `@urql/svelte` packages will define a range of compatible versions of `graphql`. Watch out for breaking changes in the future however, in which case your package manager may warn you about `graphql` being out of the defined peer dependency range. Note: if using Vite as your bundler, you might stumble upon the error `Function called outside component initialization`, which will prevent the page from loading. To fix it, you must add `@urql/svelte` to Vite's configuration property [`optimizeDeps.exclude`](https://vitejs.dev/config/#dep-optimization-options): ```js { optimizeDeps: { exclude: ['@urql/svelte'], } // other properties } ``` ### Setting up the `Client` The `@urql/svelte` package exports a `Client` class, which we can use to create the GraphQL client. This central `Client` manages all of our GraphQL requests and results. ```js import { Client, cacheExchange, fetchExchange } from '@urql/svelte'; const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], }); ``` At the bare minimum we'll need to pass an API's `url` and `exchanges` when we create a `Client` to get started. Another common option is `fetchOptions`. This option allows us to customize the options that will be passed to `fetch` when a request is sent to the given API `url`. We may pass in an options object or a function returning an options object. In the following example we'll add a token to each `fetch` request that our `Client` sends to our GraphQL API. ```js const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], fetchOptions: () => { const token = getToken(); return { headers: { authorization: token ? 
`Bearer ${token}` : '' }, }; }, }); ```

### Providing the `Client`

To make use of the `Client` in Svelte we will have to provide it via the [Context API](https://svelte.dev/tutorial/context-api) from a parent component to its child components. This will share one `Client` with the rest of our app, if we for instance provide the `Client` in our topmost component:

```html
<script>
  // Create the Client and share it with child components via Svelte's context
  import { Client, cacheExchange, fetchExchange, setContextClient } from '@urql/svelte';

  const client = new Client({
    url: 'http://localhost:3000/graphql',
    exchanges: [cacheExchange, fetchExchange],
  });

  setContextClient(client);
</script>
```

The `setContextClient` method internally calls [Svelte's `setContext` function](https://svelte.dev/docs#run-time-svelte-setcontext). The `@urql/svelte` package also exposes a `getContextClient` function that uses [`getContext`](https://svelte.dev/docs#run-time-svelte-getcontext) to retrieve the `Client` in child components. This is used to input the client into `@urql/svelte`'s API.

## Queries

We'll implement queries using the `queryStore` function from `@urql/svelte`.

The `queryStore` function creates a [Svelte Writable store](https://svelte.dev/docs#writable). You can use it to initialise a data container in `urql`. This store holds on to our query inputs, like the GraphQL query and variables, which we can change to launch new queries. It also exposes the query's eventual result, which we can then observe.

### Run a first query

For the following examples, we'll imagine that we're querying data from a GraphQL API that contains todo items. Let's dive right into it!

```html
<script>
  // Create a query store using the Client from Svelte's context
  import { queryStore, gql, getContextClient } from '@urql/svelte';

  const todos = queryStore({
    client: getContextClient(),
    query: gql`
      query {
        todos {
          id
          title
        }
      }
    `,
  });
</script>

{#if $todos.fetching}
  <p>Loading...</p>
{:else if $todos.error}
  <p>Oh no... {$todos.error.message}</p>
{:else}
  <ul>
    {#each $todos.data.todos as todo}
      <li>{todo.title}</li>
    {/each}
  </ul>
{/if}
```

Here we have implemented our first GraphQL query to fetch todos. We're first creating a `queryStore` which will start our GraphQL query.

The `todos` store can now be used like any other Svelte store using a [reactive auto-subscription](https://svelte.dev/tutorial/auto-subscriptions) in Svelte. This means that we prefix `$todos` with a dollar symbol, which automatically subscribes us to its changes.

### Variables

Typically we'll also need to pass variables to our queries, for instance, if we are dealing with pagination. For this purpose the `queryStore` also accepts a `variables` argument, which we can use to supply variables to our query.

```js
...
```

> Note that we prefix the variable with `$` so Svelte knows that this store is reactive.

As when we're sending GraphQL queries manually using `fetch`, the variables will be attached to the `POST` request body that is sent to our GraphQL API.

The `queryStore` also supports being actively changed. This will hook into Svelte's reactivity model as well and cause the `queryStore` to start a new operation.

```js
```

### Pausing Queries

In some cases we may want our queries to not execute until a pre-condition has been met. Since the `query` operation exists for the entire component lifecycle however, it can't just be stopped and started at will. Instead, the `queryStore` accepts a key named `pause` that will tell the store that it starts out as paused. For instance, we may start out with a paused store and then unpause it once a callback is invoked:

```html
```

### Request Policies

The `queryStore` also accepts another key apart from `query` and `variables`. Optionally you may pass a `requestPolicy`.

The `requestPolicy` option determines how results are retrieved from our `Client`'s cache. By default, this is set to `cache-first`, which means that we prefer to get results from our cache, but are falling back to sending an API request.

Request policies aren't specific to `urql`'s Svelte bindings, but are a common feature in its core. [You can learn more about how the cache behaves given the four different policies on the "Document Caching" page.](../basics/document-caching.md)

```js
...
```

As we can see, the `requestPolicy` is easily changed by passing it directly as a "context option" when creating a `queryStore`.

Internally, the `requestPolicy` is just one of several "**context** options". The `context` provides metadata apart from the usual `query` and `variables` we may pass. This means that we may also change the `Client`'s default `requestPolicy` by passing it there.

```js
import { Client, cacheExchange, fetchExchange } from '@urql/svelte';

const client = new Client({
  url: 'http://localhost:3000/graphql',
  exchanges: [cacheExchange, fetchExchange],
  // every operation will by default use cache-and-network rather
  // than cache-first now:
  requestPolicy: 'cache-and-network',
});
```

### Context Options

As mentioned, the `requestPolicy` option that we're passing to the `queryStore` is a part of `urql`'s context options. In fact, there are several more built-in context options, and the `requestPolicy` option is one of them. Another option we've already seen is the `url` option, which determines our API's URL.

```js
...
```

As we can see, the `context` argument for `queryStore` accepts any known `context` option and can be used to alter them per query rather than globally. The `Client` accepts a subset of `context` options, while the `queryStore` argument does the same for a single query.
They're then merged for your operation and form a full `Context` object for each operation, which means that any given query is able to override them as needed. [You can find a list of all `Context` options in the API docs.](../api/core.md#operationcontext)

### Reexecuting queries

Sometimes we'll need to arbitrarily reexecute a query to check for new data on the server. This can be done as follows:

```jsx
```

We use the `requestPolicy` value `network-only` so that we skip our cache and dispatch a fresh request; if this updates the data, the `todos` store will be updated as well, due to our cache updating.

### Reading on

There are some more tricks we can use with `queryStore`. [Read more about its API in the API docs for it.](../api/svelte.md#queryStore)

## Mutations

The `mutationStore` function is similar to the `queryStore` function but is triggered manually and can accept a [`GraphQLRequest` object](../api/core.md#graphqlrequest).

### Sending a mutation

Let's again pick up an example with an imaginary GraphQL API for todo items, and dive into an example! We'll set up a mutation that _updates_ a todo item's title.

```html
```

This small call to `mutationStore` accepts a `query` property (besides the `variables` property) and returns the `OperationResult` as a store. Unlike a query, we don't want the mutation to start automatically, hence we enclose it in a function. The `result` will be updated with `fetching`, `data`, and so on, just as a normal query's result would be, which you can in turn use in your UI.

### Handling mutation errors

It's worth noting that the promise we receive when calling the execute function will never reject. Instead it will always return a promise that resolves to a `mutationStore`, even if the mutation has failed.

If you're checking for errors, you should use `mutationStore.error` instead, which will be set to a `CombinedError` when any kind of errors occurred while executing your mutation. [Read more about errors on our "Errors" page.](./errors.md)

```jsx
mutateTodo({ id, title: newTitle }).then(result => {
  if (result.error) {
    console.error('Oh no!', result.error);
  }
});
```

## Reading on

This concludes the introduction for using `urql` with Svelte. The rest of the documentation is mostly framework-agnostic and will apply to either `urql` in general, or the `@urql/core` package, which is the same between all framework bindings. Hence, next we may want to learn more about one of the following to learn more about the internals:

- [How does the default "document cache" work?](./document-caching.md)
- [How are errors handled and represented?](./errors.md)
- [A quick overview of `urql`'s architecture and structure.](../architecture.md)
- [Setting up other features, like authentication, uploads, or persisted queries.](../advanced/README.md)

---
# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/basics/typescript-integration.md
# Path: docs/basics/typescript-integration.md
---
title: TypeScript integration
order: 7
---

# URQL and TypeScript

URQL, with the help of [GraphQL Code Generator](https://www.the-guild.dev/graphql/codegen), can leverage the typed design of GraphQL schemas to generate TypeScript types on the fly.
## Getting started

### Installation

To get up and running, install the following packages:

```sh
yarn add -D graphql typescript @graphql-codegen/cli @graphql-codegen/client-preset
# or
npm install -D graphql typescript @graphql-codegen/cli @graphql-codegen/client-preset
```

Then, add the following script to your `package.json`:

```json
{
  "scripts": {
    "codegen": "graphql-codegen"
  }
}
```

Now, let's create a configuration file for our current framework setup:

### Configuration

#### React project configuration

Create the following `codegen.ts` configuration file:

```ts
import { CodegenConfig } from '@graphql-codegen/cli';

const config: CodegenConfig = {
  schema: '',
  documents: ['src/**/*.tsx'],
  ignoreNoDocuments: true, // for better experience with the watcher
  generates: {
    './src/gql/': {
      preset: 'client',
      plugins: [],
    },
  },
};

export default config;
```

#### Vue project configuration

Create the following `codegen.ts` configuration file:

```ts
import type { CodegenConfig } from '@graphql-codegen/cli';

const config: CodegenConfig = {
  schema: '',
  documents: ['src/**/*.vue'],
  ignoreNoDocuments: true, // for better experience with the watcher
  generates: {
    './src/gql/': {
      preset: 'client',
      config: {
        useTypeImports: true,
      },
      plugins: [],
    },
  },
};

export default config;
```

## Typing queries, mutations and subscriptions

Now that your project is properly configured, let's start codegen in watch mode:

```sh
yarn codegen
# or
npm run codegen
```

This will generate a `./src/gql` folder that exposes a `graphql()` function.

Let's use this `graphql()` function to write our GraphQL Queries, Mutations and Subscriptions. Here is an example with the React bindings; however, the usage remains the same for the Vue and Svelte bindings:

```tsx
import React from 'react';
import { useQuery } from 'urql';

import './App.css';
import Film from './Film';
import { graphql } from '../src/gql';

const allFilmsWithVariablesQueryDocument = graphql(/* GraphQL */ `
  query allFilmsWithVariablesQuery($first: Int!) {
    allFilms(first: $first) {
      edges {
        node {
          ...FilmItem
        }
      }
    }
  }
`);

function App() {
  // `data` is typed!
  const [{ data }] = useQuery({
    query: allFilmsWithVariablesQueryDocument,
    variables: { first: 10 },
  });

  return (
    <div className="App">
      {data && (
        <ul>
          {data.allFilms?.edges?.map(
            (e, i) => e?.node && <Film film={e?.node} key={`film-item-${i}`} />
          )}
        </ul>
      )}
    </div>
  );
}

export default App;
```

_Examples with Vue are available [in the GraphQL Code Generator repository](https://github.com/dotansimha/graphql-code-generator/tree/master/examples/vue/urql)_.

Using the generated `graphql()` function to write your GraphQL document results in instantly typed result and variables for queries, mutations and subscriptions!

Let's now see how to go further with GraphQL fragments.

## Getting further with Fragments

> Using GraphQL Fragments helps to explicitly declare the data dependencies of your UI components and to safely access only the data they need.

Our `<Film />` component relies on the `FilmItem` fragment definition, passed through the `film` prop:

```tsx
// ...

import Film from './Film';
import { graphql } from '../src/gql';

const allFilmsWithVariablesQueryDocument = graphql(/* GraphQL */ `
  query allFilmsWithVariablesQuery($first: Int!) {
    allFilms(first: $first) {
      edges {
        node {
          ...FilmItem
        }
      }
    }
  }
`);

function App() {
  // ...

  return (
    <div className="App">
      {data && (
        <ul>
          {data.allFilms?.edges?.map(
            (e, i) => e?.node && <Film film={e?.node} key={`film-item-${i}`} />
          )}
        </ul>
      )}
    </div>
  );
}

// ...
```

GraphQL Code Generator generates type helpers to type your component props based on fragments (for example, the `film` prop) and to retrieve your fragment's data (see example below).

Again, here is an example with the React bindings:

```tsx
import { FragmentType, useFragment } from './gql/fragment-masking';
import { graphql } from '../src/gql';

// again, we use the generated `graphql()` function to write GraphQL documents 👀
export const FilmFragment = graphql(/* GraphQL */ `
  fragment FilmItem on Film {
    id
    title
    releaseDate
    producers
  }
`);

const Film = (props: {
  // `film` property has the correct type 🎉
  film: FragmentType<typeof FilmFragment>;
}) => {
  // `film` is of type `FilmFragment`, with no extraneous properties ⚡️
  const film = useFragment(FilmFragment, props.film);
  return (

    <div>
      <h3>{film.title}</h3>
      <p>{film.releaseDate}</p>
    </div>

  );
};

export default Film;
```

_Examples with Vue are available [in the GraphQL Code Generator repository](https://github.com/dotansimha/graphql-code-generator/tree/master/examples/vue/urql)_.

You will notice that our `<Film />` component leverages two imports from our generated code (from `../src/gql`): the `FragmentType` type helper and the `useFragment()` function.

- we use `FragmentType` to get the corresponding Fragment TypeScript type
- later on, we use `useFragment()` to retrieve the properly typed `film` property

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/basics/ui-patterns.md
# Path: docs/basics/ui-patterns.md

---
title: UI-Patterns
order: 6
---

# UI Patterns

> This page is incomplete. You can help us expand it by suggesting more patterns or asking us about common problems you're facing on [GitHub Discussions](https://github.com/urql-graphql/urql/discussions).

Generally, `urql`'s API surface is small and compact. Some common problems that we're facing when building apps may look like they're not a built-in feature; however, there are several patterns that even a lean UI can support. This page is a collection of common UI patterns and problems we may face with GraphQL and how we can tackle them in `urql`. These examples will be written in React but apply to any other framework.

## Infinite scrolling

"Infinite Scrolling" is the approach of loading more data into a page's list without splitting that list up across multiple pages. There are a few ways of going about this. In our [normalized caching chapter on the topic](../graphcache/local-resolvers.md#pagination) we see an approach with `urql`'s normalized cache, which is suitable to get started quickly. However, this approach also requires some UI code to keep track of pages.

Let's have a look at how we can create a UI implementation that makes use of this normalized caching feature.

```js
import React, { useState } from 'react';
import { useQuery, gql } from 'urql';

const PageQuery = gql`
  query Page($first: Int!, $after: String) {
    todos(first: $first, after: $after) {
      nodes {
        id
        name
      }
      pageInfo {
        hasNextPage
        endCursor
      }
    }
  }
`;

const SearchResultPage = ({ variables, isLastPage, onLoadMore }) => {
  const [{ data, fetching, error }] = useQuery({ query: PageQuery, variables });
  const todos = data?.todos;

  return (
    <div>
      {error && <p>Oh no... {error.message}</p>}
      {fetching && <p>Loading...</p>}
      {todos && (
        <>
          {todos.nodes.map(todo => (
            <div key={todo.id}>
              {todo.id}: {todo.name}
            </div>
          ))}
          {isLastPage && todos.pageInfo.hasNextPage && (
            <button onClick={() => onLoadMore(todos.pageInfo.endCursor)}>
              load more
            </button>
          )}
        </>
      )}
    </div>
); }; const Search = () => { const [pageVariables, setPageVariables] = useState([ { first: 10, after: '', }, ]); return (
    <div>
      {pageVariables.map((variables, i) => (
        <SearchResultPage
          key={'' + variables.after}
          variables={variables}
          isLastPage={i === pageVariables.length - 1}
          onLoadMore={after => setPageVariables([...pageVariables, { after, first: 10 }])}
        />
      ))}
    </div>
  );
};
```

Here we keep an array of all `variables` we've encountered and use them to render their respective `result` page. This only rerenders the additional page rather than having a long list that constantly changes. [You can find a full code example of this pattern in our example folder on the topic of pagination.](https://github.com/urql-graphql/urql/tree/main/examples/with-pagination)

This code doesn't take changing variables into account, which will affect the cursors. For an example that takes full infinite scrolling into account, [you can find a full code example of an extended pattern in our example folder on the topic of infinite pagination.](https://github.com/urql-graphql/urql/tree/main/examples/with-infinite-pagination)

## Prefetching data

We sometimes find it necessary to load data for a new page before that page is opened, for instance while a JS bundle is still loading. We may do this with the help of the `Client`, by calling its methods directly without using the React bindings.

```js
import React from 'react';
import { useClient, gql } from 'urql';

const TodoQuery = gql`
  query Todo($id: ID!) {
    todo(id: $id) {
      id
      name
    }
  }
`;

const Component = () => {
  const client = useClient();
  const router = useRouter();

  const transitionPage = React.useCallback(async id => {
    const loadJSBundle = import('./page.js');
    const loadData = client.query(TodoQuery, { id }).toPromise();
    await Promise.all([loadJSBundle, loadData]);
    router.push(`/todo/${id}`);
  }, []);

  return <button onClick={() => transitionPage(1)}>Open todo</button>;
};
```

Here we're calling `client.query` to prepare a query when the transition begins. We then call `toPromise()` on this query, which activates it. Our `Client` and its cache share results, which means that we've already kicked off or even completed the query before we're on the new page.

## Lazy query

It's often required to "lazily" start a query, either at a later point or imperatively. This means that we don't start a query immediately when a new component is mounted. Parts of `urql` that start automatically, like the `useQuery` hook, have a concept of a [`pause` option.](./react-preact.md#pausing-usequery) This option is used to prevent the hook from automatically starting a new query.

```js
import React from 'react';
import { useQuery, gql } from 'urql';

const TodoQuery = gql`
  query Todos {
    todos {
      id
      name
    }
  }
`;

const Component = () => {
  const [result, fetch] = useQuery({ query: TodoQuery, pause: true });
  const router = useRouter();

  return <button onClick={() => fetch()}>Load todos</button>;
};
```

We can unpause the hook to start fetching, or, like in this example, call its returned function to manually kick off the query.

## Reacting to focus and stale time

In urql we leverage our extensibility pattern named "Exchanges" to manipulate the way data comes in and goes out of our client.

- [Stale time](https://github.com/urql-graphql/urql/tree/main/exchanges/request-policy)
- [Focus](https://github.com/urql-graphql/urql/tree/main/exchanges/refocus)

When we want to introduce one of these patterns, we add the package and add it to the `exchanges` property of our `Client`. In the case of these two exchanges, we'll have to add them before the cache, or else our requests will never get upgraded.

```js
import { Client, cacheExchange, fetchExchange } from 'urql';
import { refocusExchange } from '@urql/exchange-refocus';

const client = new Client({
  url: 'some-url',
  exchanges: [refocusExchange(), cacheExchange, fetchExchange],
});
```

That's all we need to do to react to these patterns.
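Similarly, as a sketch, the stale-time behaviour can be enabled by placing the `requestPolicyExchange` from `@urql/exchange-request-policy` in the same position before the cache (the `ttl` option shown here is an assumption for illustration):

```js
import { Client, cacheExchange, fetchExchange } from 'urql';
import { requestPolicyExchange } from '@urql/exchange-request-policy';

const client = new Client({
  url: 'some-url',
  exchanges: [
    // Assumed option: upgrade `cache-first` operations to `cache-and-network`
    // once their cached results are older than one minute.
    requestPolicyExchange({ ttl: 60 * 1000 }),
    cacheExchange,
    fetchExchange,
  ],
});
```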
--- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/basics/vue.md # Path: docs/basics/vue.md --- title: Vue Bindings order: 1 --- # Vue ## Getting started The `@urql/vue` bindings have been written with [Vue 3](https://github.com/vuejs/vue-next/releases/tag/v3.0.0) in mind and use Vue's newer [Composition API](https://v3.vuejs.org/guide/composition-api-introduction.html). This gives the `@urql/vue` bindings capabilities to be more easily integrated into your existing `setup()` functions. ### Installation Installing `@urql/vue` is quick and no other packages are immediately necessary. ```sh yarn add @urql/vue graphql # or npm install --save @urql/vue graphql ``` Most libraries related to GraphQL also need the `graphql` package to be installed as a peer dependency, so that they can adapt to your specific versioning requirements. That's why we'll need to install `graphql` alongside `@urql/vue`. Both the `@urql/vue` and `graphql` packages follow [semantic versioning](https://semver.org) and all `@urql/vue` packages will define a range of compatible versions of `graphql`. Watch out for breaking changes in the future however, in which case your package manager may warn you about `graphql` being out of the defined peer dependency range. ### Setting up the `Client` The `@urql/vue` package exports a `Client` class, which we can use to create the GraphQL client. This central `Client` manages all of our GraphQL requests and results. ```js import { Client, cacheExchange, fetchExchange } from '@urql/vue'; const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], }); ``` At the bare minimum we'll need to pass an API's `url` and `exchanges` when we create a `Client` to get started. Another common option is `fetchOptions`. This option allows us to customize the options that will be passed to `fetch` when a request is sent to the given API `url`. We may pass in an options object or a function returning an options object. In the following example we'll add a token to each `fetch` request that our `Client` sends to our GraphQL API. ```js const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], fetchOptions: () => { const token = getToken(); return { headers: { authorization: token ? `Bearer ${token}` : '' }, }; }, }); ``` ### Providing the `Client` To make use of the `Client` in Vue we will have to provide from a parent component to its child components. This will share one `Client` with the rest of our app. In `@urql/vue` there are two different ways to achieve this. The first method is to use `@urql/vue`'s `provideClient` function. This must be called in any of your parent components and accepts either a `Client` directly or just the options that you'd pass to `Client`. ```html ``` Alternatively we may use the exported `install` function and treat `@urql/vue` as a plugin by importing its default export and using it [as a plugin](https://v3.vuejs.org/guide/plugins.html#using-a-plugin). ```js import { createApp } from 'vue'; import Root from './App.vue'; import urql, { cacheExchange, fetchExchange } from '@urql/vue'; const app = createApp(Root); app.use(urql, { url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], }); app.mount('#app'); ``` The plugin also accepts `Client`'s options or a `Client` as its inputs. ## Queries We'll implement queries using the `useQuery` function from `@urql/vue`. 
### Run a first query For the following examples, we'll imagine that we're querying data from a GraphQL API that contains todo items. Let's dive right into it! ```jsx ``` Here we have implemented our first GraphQL query to fetch todos. We see that `useQuery` accepts options and returns a result object. In this case we've set the `query` option to our GraphQL query. The result object contains several properties. The `fetching` field indicates whether we're currently loading data, `data` contains the actual `data` from the API's result, and `error` is set when either the request to the API has failed or when our API result contained some `GraphQLError`s, which we'll get into later on the ["Errors" page](./errors.md). All of these properties on the result are derived from the [shape of `OperationResult`](../api/core.md#operationresult) and are marked as [reactive ](https://v3.vuejs.org/guide/reactivity-fundamentals.html), which means they may update while the query is running, which will automatically update your UI. ### Variables Typically we'll also need to pass variables to our queries, for instance, if we are dealing with pagination. For this purpose `useQuery` also accepts a `variables` input, which we can use to supply variables to our query. ```jsx ``` As when we're sending GraphQL queries manually using `fetch`, the variables will be attached to the `POST` request body that is sent to our GraphQL API. All inputs that are passed to `useQuery` may also be [reactive state](https://v3.vuejs.org/guide/reactivity-fundamentals.html). This means that both the inputs and outputs of `useQuery` are reactive and may change over time. ```jsx ``` ### Pausing `useQuery` In some cases we may want `useQuery` to execute a query when a pre-condition has been met, and not execute the query otherwise. For instance, we may be building a form and want a validation query to only take place when a field has been filled out. Since with Vue 3's Composition API we won't just conditionally call `useQuery` we can instead pass a reactive `pause` input to `useQuery`. In the previous example we've defined a query with mandatory arguments. The `$from` and `$limit` variables have been defined to be non-nullable `Int!` values. Let's pause the query we've just written to not execute when these variables are empty, to prevent `null` variables from being executed. We can do this by computing `pause` to become `true` whenever these variables are falsy: ```js import { reactive } from 'vue' import { gql, useQuery } from '@urql/vue'; export default { props: ['from', 'limit'], setup({ from, limit }) { const shouldPause = computed(() => from == null || limit == null); return useQuery({ query: gql` query ($from: Int!, $limit: Int!) { todos(from: $from, limit: $limit) { id title } } `, variables: { from, limit }, pause: shouldPause }); } }; ``` Now whenever the mandatory `$from` or `$limit` variables aren't supplied the query won't be executed. This also means that `result.data` won't change, which means we'll still have access to our old data even though the variables may have changed. It's worth noting that depending on whether `from` and `limit` are reactive or not you may have to change how `pause` is computed. But there's also an imperative alternative to this API. Not only does the result you get back from `useQuery` have an `isPaused` ref, it also has `pause()` and `resume()` methods. 
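As a sketch, reusing the todos query from before, these controls may be exposed from `setup()` and called from event handlers in the template:

```js
import { gql, useQuery } from '@urql/vue';

export default {
  setup() {
    const result = useQuery({
      query: gql`
        {
          todos {
            id
            title
          }
        }
      `,
    });

    return {
      data: result.data,
      isPaused: result.isPaused,
      // Imperatively stop and restart the query, e.g. from a button click:
      pause: result.pause,
      resume: result.resume,
    };
  },
};
```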
```jsx ``` This means that no matter whether you're in or outside of `setup()` or rather supplying the inputs to `useQuery` or using the outputs, you'll have access to ways to pause or unpause the query. ### Request Policies As has become clear in the previous sections of this page, the `useQuery` hook accepts more options than just `query` and `variables`. Another option we should touch on is `requestPolicy`. The `requestPolicy` option determines how results are retrieved from our `Client`'s cache. By default this is set to `cache-first`, which means that we prefer to get results from our cache, but are falling back to sending an API request. Request policies aren't specific to `urql`'s Vue bindings, but are a common feature in its core. [You can learn more about how the cache behaves given the four different policies on the "Document Caching" page.](../basics/document-caching.md) ```js import { useQuery } from '@urql/vue'; export default { setup() { return useQuery({ query: TodosQuery, requestPolicy: 'cache-and-network', }); }, }; ``` Specifically, a new request policy may be passed directly to `useQuery` as an option. This policy is then used for this specific query. In this case, `cache-and-network` is used and the query will be refreshed from our API even after our cache has given us a cached result. Internally, the `requestPolicy` is just one of several "**context** options". The `context` provides metadata apart from the usual `query` and `variables` we may pass. This means that we may also change the `Client`'s default `requestPolicy` by passing it there. ```js import { Client, cacheExchange, fetchExchange } from '@urql/vue'; const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange, fetchExchange], // every operation will by default use cache-and-network rather // than cache-first now: requestPolicy: 'cache-and-network', }); ``` ### Context Options As mentioned, the `requestPolicy` option on `useQuery` is a part of `urql`'s context options. In fact, there are several more built-in context options, and the `requestPolicy` option is one of them. Another option we've already seen is the `url` option, which determines our API's URL. These options aren't limited to the `Client` and may also be passed per query. ```jsx import { useQuery } from '@urql/vue'; export default { setup() { return useQuery({ query: TodosQuery, context: { requestPolicy: 'cache-and-network', url: 'http://localhost:3000/graphql?debug=true', }, }); }, }; ``` As we can see, the `context` property for `useQuery` accepts any known `context` option and can be used to alter them per query rather than globally. The `Client` accepts a subset of `context` options, while the `useQuery` option does the same for a single query. [You can find a list of all `Context` options in the API docs.](../api/core.md#operationcontext) ### Reexecuting Queries The `useQuery` hook updates and executes queries whenever its inputs, like the `query` or `variables` change, but in some cases we may find that we need to programmatically trigger a new query. This is the purpose of the `executeQuery` method which is a method on the result object that `useQuery` returns. Triggering a query programmatically may be useful in a couple of cases. It can for instance be used to refresh data that is currently being displayed. In these cases we may also override the `requestPolicy` of our query just once and set it to `network-only` to skip the cache. 
```js import { gql, useQuery } from '@urql/vue'; export default { setup() { const result = useQuery({ query: gql` { todos { id title } } `, }); return { data: result.data, fetching: result.fetching, error: result.error, refresh() { result.executeQuery({ requestPolicy: 'network-only', }); }, }; }, }; ``` Calling `refresh` in the above example will execute the query again forcefully, and will skip the cache, since we're passing `requestPolicy: 'network-only'`. Furthermore the `executeQuery` function can also be used to programmatically start a query even when `pause` is set to `true`, which would usually stop all automatic queries. This can be used to perform one-off actions, or to set up polling. ### Vue Suspense In Vue 3 a [new feature was introduced](https://vuedose.tips/go-async-in-vue-3-with-suspense/) that natively allows components to suspend while data is loading, which works universally on the server and on the client, where a replacement loading template is rendered on a parent while data is loading. Any component's `setup()` function can be updated to instead be an `async setup()` function, in other words, to return a `Promise` instead of directly returning its data. This means that we can update any `setup()` function to make use of Suspense. The `useQuery`'s returned result supports this, since it is a `PromiseLike`. We can update one of our examples to have a suspending component by changing our usage of `useQuery`: ```jsx ``` As we can see, `await useQuery(...)` here suspends the component and what we render will not have to handle the loading states of `useQuery` at all. Instead in Vue Suspense we'll have to wrap a parent component in a "Suspense boundary." This boundary is what switches a parent to a loading state while parts of its children are fetching data. The suspense promise is in essence "bubbling up" until it finds a "Suspense boundary". ``` ``` As long as any parent component is wrapping our component which uses `async setup()` in this boundary, we'll get Vue Suspense to work correctly and trigger this loading state. When a child suspends this component will switch to using its `#fallback` template rather than its `#default` template. ### Chaining calls in Vue Suspense As shown [above](#vue-suspense), in Vue Suspense the `async setup()` lifecycle function can be used to set up queries in advance, wait for them to have fetched some data, and then let the component render as usual. However, because the `async setup()` function can be used with `await`-ed promise calls, we may run into situations where we're trying to call functions like `useQuery()` after we've already awaited another promise and will be outside of the synchronous scope of the `setup()` lifecycle. This means that the `useQuery` (and `useSubscription` & `useMutation`) functions won't have access to the `Client` anymore that we'd have set up using `provideClient`. To prevent this, we can create something called a "client handle" using the `useClientHandle` function. ```js import { gql, useClientHandle } from '@urql/vue'; export default { async setup() { const handle = useClientHandle(); await Promise.resolve(); // NOTE: This could be any await call const result = await handle.useQuery({ query: gql` { todos { id title } } `, }); return { data: result.data }; }, }; ``` As we can see, when we use `handle.useQuery()` we're able to still create query results although we've interrupted the synchronous `setup()` lifecycle with a `Promise.resolve()` delay. 
This would also allow us to create chained queries by using [`computed`](https://v3.vuejs.org/guide/reactivity-computed-watchers.html#computed-values) to use an output from a preceding result in a next `handle.useQuery()` call. ### Reading on There are some more tricks we can use with `useQuery`. [Read more about its API in the API docs for it.](../api/vue.md#usequery) ## Mutations The `useMutation` function is similar to `useQuery` but is triggered manually and accepts only a `DocumentNode` or `string` as an input. ### Sending a mutation Let's again pick up an example with an imaginary GraphQL API for todo items, and dive into an example! We'll set up a mutation that _updates_ a todo item's title. ```js import { gql, useMutation } from '@urql/vue'; export default { setup() { const { executeMutation: updateTodo } = useMutation(gql` mutation ($id: ID!, $title: String!) { updateTodo(id: $id, title: $title) { id title } } `); return { updateTodo }; }, }; ``` Similar to the `useQuery` output, `useMutation` returns a result object, which reflects the data of an executed mutation. That means it'll contain the familiar `fetching`, `error`, and `data` properties — it's identical since this is a common pattern of how `urql` presents [operation results](../api/core.md#operationresult). Unlike the `useQuery` hook, the `useMutation` hook doesn't execute automatically. At this point in our example, no mutation will be performed. To execute our mutation we instead have to call the `executeMutation` method on the result with some variables. ### Using the mutation result When calling our `updateTodo` function we have two ways of getting to the result as it comes back from our API. We can either use the result itself, since all properties related to the last [operation result](../api/core.md#operationresult) are marked as [reactive ](https://v3.vuejs.org/guide/reactivity-fundamentals.html) — or we can use the promise that the `executeMutation` method returns when it's called: ```js import { gql, useMutation } from '@urql/vue'; export default { setup() { const updateTodoResult = useMutation(gql` mutation ($id: ID!, $title: String!) { updateTodo(id: $id, title: $title) { id title } } `); return { updateTodo(id, title) { const variables = { id, title: title || '' }; updateTodoResult.executeMutation(variables).then(result => { // The result is almost identical to `updateTodoResult` with the exception // of `result.fetching` not being set and its properties not being reactive. // It is an OperationResult. }); }, }; }, }; ``` The reactive result that `useMutation` returns is useful when your UI has to display progress or results on the mutation, and the returned promise is particularly useful when you're adding side-effects that run after the mutation has completed. ### Handling mutation errors It's worth noting that the promise we receive when calling the execute function will never reject. Instead it will always return a promise that resolves to a result. If you're checking for errors, you should use `result.error` instead, which will be set to a `CombinedError` when any kind of errors occurred while executing your mutation. [Read more about errors on our "Errors" page.](./errors.md) ```js import { gql, useMutation } from '@urql/vue'; export default { setup() { const updateTodoResult = useMutation(gql` mutation ($id: ID!, $title: String!) 
{ updateTodo(id: $id, title: $title) { id title } } `); return { updateTodo(id, title) { const variables = { id, title: title || '' }; updateTodoResult.executeMutation(variables).then(result => { if (result.error) { console.error('Oh no!', result.error); } }); }, }; }, }; ``` There are some more tricks we can use with `useMutation`.
[Read more about its API in the API docs for it.](../api/vue.md#usemutation) ## Reading on This concludes the introduction for using `urql` with Vue. The rest of the documentation is mostly framework-agnostic and will apply to either `urql` in general or the `@urql/core` package, which is the same between all framework bindings. Hence, next we may want to learn more about one of the following to learn more about the internals: - [How does the default "document cache" work?](./document-caching.md) - [How are errors handled and represented?](./errors.md) - [A quick overview of `urql`'s architecture and structure.](../architecture.md) - [Setting up other features, like authentication, uploads, or persisted queries.](../advanced/README.md) --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/comparison.md # Path: docs/comparison.md --- title: Comparison order: 8 --- # Comparison > This comparison page aims to be detailed, unbiased, and up-to-date. If you see any information that > may be inaccurate or could be improved otherwise, please feel free to suggest changes. The most common question that you may encounter with GraphQL is what client to choose when you are getting started. We aim to provide an unbiased and detailed comparison of several options on this page, so that you can make an **informed decision**. All options come with several drawbacks and advantages, and all of these clients have been around for a while now. A little known fact is that `urql` in its current form and architecture has already existed since February 2019, and its normalized cache has been around since September 2019. Overall, we would recommend to make your decision based on whether your required features are supported, which patterns you'll use (or restrictions thereof), and you may want to look into whether all the parts and features you're interested in are well maintained. ## Comparison by Features This section is a list of commonly used features of a GraphQL client and how it's either supported or not by our listed alternatives. We're using Relay and Apollo to compare against as the other most common choices of GraphQL clients. All features are marked to indicate the following: - ✅ Supported 1st-class and documented. - 🔶 Supported and documented, but requires custom user-code to implement. - 🟡 Supported, but as an unofficial 3rd-party library. (Provided it's commonly used) - 🛑 Not officially supported or documented. 
### Core Features | | urql | Apollo | Relay | | ------------------------------------------ | ---------------------------------------------- | ------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------- | | Extensible on a network level | ✅ Exchanges | ✅ Links | ✅ Network Layers | | Extensible on a cache / control flow level | ✅ Exchanges | 🛑 | 🛑 | | Base Bundle Size | **10kB** (11kB with bindings) | ~50kB (55kB with React hooks) | 45kB (66kB with bindings) | | Devtools | ✅ | ✅ | ✅ | | Subscriptions | 🔶 [Docs](./advanced/subscriptions.md) | 🔶 [Docs](https://www.apollographql.com/docs/react/data/subscriptions/#setting-up-the-transport) | 🔶 [Docs](https://relay.dev/docs/guided-tour/updating-data/graphql-subscriptions/#configuring-the-network-layer) | | Client-side Rehydration | ✅ [Docs](./advanced/server-side-rendering.md) | ✅ [Docs](https://www.apollographql.com/docs/react/performance/server-side-rendering) | 🛑 | | Polled Queries | 🔶 | ✅ | ✅ | | Lazy Queries | ✅ | ✅ | ✅ | | Stale while Revalidate / Cache and Network | ✅ | ✅ | ✅ | | Focus Refetching | ✅ `@urql/exchange-refocus` | 🛑 | 🛑 | | Stale Time Configuration | ✅ `@urql/exchange-request-policy` | ✅ | 🛑 | | Persisted Queries | ✅ `@urql/exchange-persisted` | ✅ `apollo-link-persisted-queries` | 🔶 | | Batched Queries | 🛑 | ✅ `apollo-link-batch-http` | 🟡 `react-relay-network-layer` | | Live Queries | ✅ (via Incremental Delivery) | 🛑 | ✅ | | Defer & Stream Directives | ✅ | ✅ / 🛑 (`@defer` is supported in >=3.7.0, `@stream` is not yet supported) | 🟡 (unreleased) | | Switching to `GET` method | ✅ | ✅ | 🟡 `react-relay-network-layer` | | File Uploads | ✅ | 🟡 `apollo-upload-client` | 🛑 | | Retrying Failed Queries | ✅ `@urql/exchange-retry` | ✅ `apollo-link-retry` | ✅ `DefaultNetworkLayer` | | Easy Authentication Flows | ✅ `@urql/exchange-auth` | 🛑 (no docs for refresh-based authentication) | 🟡 `react-relay-network-layer` | | Automatic Refetch after Mutation | ✅ (with document cache) | 🛑 | ✅ | Typically these are all additional addon features that you may expect from a GraphQL client, no matter which framework you use it with. It's worth mentioning that all three clients support some kind of extensibility API, which allows you to change when and how queries are sent to an API. These are easy to use primitives particularly in Apollo, with links, and in `urql` with exchanges. The major difference in `urql` is that all caching logic is abstracted in exchanges too, which makes it easy to swap the caching logic or other behavior out (and hence makes `urql` slightly more customizable.) A lot of the added exchanges for persisted queries, file uploads, retrying, and other features are implemented by the urql-team, while there are some cases where first-party support isn't provided in Relay or Apollo. This doesn't mean that these features can't be used with these clients, but that you'd have to lean on community libraries or maintaining/implementing them yourself. One thing of note is our lack of support for batched queries in `urql`. We explicitly decided not to support this in our [first-party packages](https://github.com/urql-graphql/urql/issues/800#issuecomment-626342821) as the benefits are not present anymore in most cases with HTTP/2 and established patterns by Relay that recommend hoisting all necessary data requirements to a page-wide query. 
### Framework Bindings | | urql | Apollo | Relay | | ------------------------------ | -------------- | ------------------- | ------------------ | | React Bindings | ✅ | ✅ | ✅ | | React Concurrent Hooks Support | ✅ | ✅ | ✅ | | React Suspense | ✅ | 🛑 | ✅ | | Next.js Integration | ✅ `next-urql` | 🟡 | 🔶 | | Preact Support | ✅ | 🔶 | 🔶 | | Svelte Bindings | ✅ | 🟡 `svelte-apollo` | 🟡 `svelte-relay` | | Vue Bindings | ✅ | 🟡 `vue-apollo` | 🟡 `vue-relay` | | Angular Bindings | 🛑 | 🟡 `apollo-angular` | 🟡 `relay-angular` | | Initial Data on mount | ✅ | ✅ | ✅ | ### Caching and State | | urql | Apollo | Relay | | ------------------------------------------------------- | --------------------------------------------------------------------- | ----------------------------------- | ---------------------------------------------- | | Caching Strategy | Document Caching, Normalized Caching with `@urql/exchange-graphcache` | Normalized Caching | Normalized Caching (schema restrictions apply) | | Added Bundle Size | +8kB (with Graphcache) | +0 (default) | +0 (default) | | Automatic Garbage Collection | ✅ | 🔶 | ✅ | | Local State Management | 🛑 | ✅ | ✅ | | Pagination Support | 🔶 | 🔶 | ✅ | | Optimistic Updates | ✅ | ✅ | ✅ | | Local Updates | ✅ | ✅ | ✅ | | Out-of-band Cache Updates | 🛑 (stays true to server data) | ✅ | ✅ | | Local Resolvers and Redirects | ✅ | ✅ | 🛑 | | Complex Resolvers (nested non-normalized return values) | ✅ | 🛑 | 🛑 | | Commutativity Guarantees | ✅ | 🛑 | ✅ | | Partial Results | ✅ | ✅ | 🛑 | | Safe Partial Results (schema-based) | ✅ | 🔶 (experimental via `useFragment`) | 🛑 | | Persistence Support | ✅ | ✅ `apollo-cache-persist` | 🟡 `@wora/relay-store` | | Offline Support | ✅ | 🛑 | 🟡 `@wora/relay-offline` | `urql` is the only of the three clients that doesn't pick [normalized caching](./graphcache/normalized-caching.md) as its default caching strategy. Typically this is seen by users as easier and quicker to get started with. All entries in this table for `urql` typically refer to the optional `@urql/exchange-graphcache` package. Once you need the same features that you'll find in Relay and Apollo, it's possible to migrate to Graphcache. Graphcache is also slightly different from Apollo's cache and more opinionated as it doesn't allow arbitrary cache updates to be made. Local state management is not provided by choice, but could be implemented as an exchange. For more details, [see discussion here](https://github.com/urql-graphql/urql/issues/323#issuecomment-756226783). `urql` is the only library that provides [Offline Support](./graphcache/offline.md) out of the box as part of Graphcache's feature set. There are a number of options for Apollo and Relay including writing your own logic for offline caching, which can be particularly successful in Relay, but for `@urql/exchange-graphcache` we chose to include it as a feature since it also strengthened other guarantees that the cache makes. Relay does in fact have similar guarantees as [`urql`'s Commutativity Guarantees](./graphcache/normalized-caching/#deterministic-cache-updates), which are more evident when applying list updates out of order under more complex network conditions. ## About Bundle Size `urql` is known and often cited as a "lightweight GraphQL client," which is one of its advantages but not its main goal. It manages to be this small by careful size management, just like other libraries like Preact. 
You may find that adding features like `@urql/exchange-persisted-fetch` and `@urql/exchange-graphcache` only slightly increases your bundle size as we're aiming to reduce bloat, but often this comparison is hard to make. When you start comparing bundle sizes of these three GraphQL clients you should keep in mind that: - Some dependencies may be external and the above sizes listed are total minified+gzipped sizes - `@urql/core` imports from `wonka` for stream utilities and `@0no-co/graphql.web` for GraphQL query language utilities - Other GraphQL clients may import other exernal dependencies. - All `urql` packages reuse parts of `@urql/core` and `wonka`, which means adding all their total sizes up doesn't give you a correct result of their total expected bundle size. --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/graphcache/README.md # Path: docs/graphcache/README.md --- title: Graphcache order: 5 --- # Graphcache In `urql`, caching is fully configurable via [exchanges](../architecture.md), and the default `cacheExchange` in `urql` offers a ["Document Cache"](../basics/document-caching.md), which is usually enough for sites that heavily rely on static content. However as an app grows more complex it's likely that the data and state that `urql` manages, will also grow more complex and introduce interdependencies between data. To solve this problem most GraphQL clients resort to caching data in a normalized format, similar to how [data is often structured in Redux.](https://redux.js.org/recipes/structuring-reducers/normalizing-state-shape/) In `urql`, normalized caching is an opt-in feature, which is provided by the `@urql/exchange-graphcache` package, _Graphcache_ for short. ## Features The following pages introduce different features in _Graphcache_, which together make it a compelling alternative to the standard [document cache](../basics/document-caching.md) that `urql` uses by default. - 🔁 [**Fully reactive, normalized caching.**](./normalized-caching.md) _Graphcache_ stores data in a normalized data structure. Query, mutation and subscription results may update one another if they share data, and the app will rerender or refetch data accordingly. This often allows your app to make fewer API requests, since data may already be in the cache. - 💾 [**Custom cache resolvers**](./local-resolvers.md) Since all queries are fully resolved in the cache before and after they're sent, you can add custom resolvers that enable you to format data, implement pagination, or implement cache redirects. - 💭 [**Subscription and Mutation updates**](./cache-updates.md) You can implement update functions that tell _Graphcache_ how to update its data after a mutation has been executed, or whenever a subscription sends a new event. This allows the cache to reactively update itself without queries having to perform a refetch. - 🏃 [**Optimistic mutation updates**](./cache-updates.md) When implemented, optimistic updates can provide the data that the GraphQL API is expected to send back before the request succeeds, which allows the app to instantly render an update while the GraphQL mutation is executed in the background. - 🧠 [**Opt-in schema awareness**](./schema-awareness.md) _Graphcache_ also optionally accepts your entire schema, which allows it to resolve _partial data_ before making a request to the GraphQL API, allowing an app to render everything that's cached before receiving all missing data. 
It also allows _Graphcache_ to output more helpful warnings and to handle interfaces and enums correctly without heuristics. - 📡 [**Offline support**](./offline.md) _Graphcache_ can persist and rehydrate its entire state, allowing an offline application to be built that is able to execute queries against the cache although the device is offline. - 🐛 [**Errors and warnings**](./errors.md). All potential errors are documented with information on how you may be able to fix them. ## Installation and Setup We can add _Graphcache_ by installing the `@urql/exchange-graphcache` package. Using the package won't increase your bundle size by as much as platforms like [Bundlephobia](https://bundlephobia.com/result?p=@urql/exchange-graphcache) may suggest, since it shares the dependency on `wonka` and `@urql/core` with the framework bindings package, e.g. `urql` or `@urql/preact`, that you're already using. ```sh yarn add @urql/exchange-graphcache # or npm install --save @urql/exchange-graphcache ``` The package exports the `cacheExchange` which replaces the default `cacheExchange` in `@urql/core`. This new `cacheExchange` must be instantiated using some options, which are used to customise _Graphcache_ as introduced in the ["Features" section above.](#features) However, you can get started without passing any options. ```js import { Client, fetchExchange } from 'urql'; import { cacheExchange } from '@urql/exchange-graphcache'; const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cacheExchange({}), fetchExchange], }); ``` This will automatically enable normalized caching, and you may find that in a lot of cases, _Graphcache_ already does what you'd expect it to do without any additional configuration. We'll explore how to customize and set up different parts of _Graphcache_ on the following pages. [Read more about "Normalized Caching" on the next page.](./normalized-caching.md) --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/graphcache/cache-updates.md # Path: docs/graphcache/cache-updates.md --- title: Cache Updates order: 4 --- # Cache Updates As we've learned [on the page on "Normalized Caching"](./normalized-caching.md#normalizing-relational-data), when Graphcache receives an API result it will traverse and store all its data to its cache in a normalized structure. Each entity that is found in a result will be stored under the entity's key. A query's result is represented as a graph, which can also be understood as a tree structure, starting from the root `Query` entity, which then connects to other entities via links, which are relations stored as keys, where each entity has records that store scalar values, which are the tree's leafs. On the previous page, on ["Local Resolvers"](./local-resolvers.md), we've seen how resolvers can be attached to fields to manually resolve other entities (or transform record fields). Local Resolvers passively _compute_ results and change how Graphcache traverses and sees its locally cached data, however, for **mutations** and **subscriptions** we cannot passively compute data. When Graphcache receives a mutation or subscription result it still traverses it using the query document as we've learned when reading about how Graphcache stores normalized data, [quote](./normalized-caching.md/#storing-normalized-data): > Any mutation or subscription can also be written to this data structure. 
Once Graphcache finds a > keyable entity in their results it's written to its relational table, which may update other > queries in our application. This means that mutations and subscriptions still write and update entities in the cache. These updates are then reflected on all active queries that our app uses. However, there are limitations to this. While resolvers can be used to passively change data for queries, for mutations and subscriptions we sometimes have to write **updaters** to update links and relations. This is often necessary when a given mutation or subscription deliver a result that is more granular than the cache needs to update all affected entities. Previously, we've learned about cache updates [on the "Normalized Caching" page](./normalized-caching.md#manual-cache-updates). The `updates` option on `cacheExchange` accepts a map for `Mutation` or `Subscription` keys on which we can add "updater functions" to react to mutation or subscription results. These `updates` functions look similar to ["Local Resolvers"](./local-resolvers.md) that we've seen in the last section and similar to [GraphQL.js' resolvers on the server-side](https://www.graphql-tools.com/docs/resolvers/). ```js cacheExchange({ updates: { Mutation: { mutationField: (result, args, cache, info) => { // ... }, }, Subscription: { subscriptionField: (result, args, cache, info) => { // ... }, }, }, }); ``` An "updater" may be attached to a `Mutation` or `Subscription` field and accepts four positional arguments, which are the same as [the resolvers' arguments](./local-resolvers.md): - `result`: The full API result that's being written to the cache. Typically we'd want to avoid coupling by only looking at the current field that the updater is attached to, but it's worth noting that we can access any part of the result. - `args`: The arguments that the field has been called with, which will be replaced with an empty object if the field hasn't been called with any arguments. - `cache`: The `cache` instance, which gives us access to methods allowing us to interact with the local cache. Its full API can be found [in the API docs](../api/graphcache.md#cache). On this page we use it frequently to read from and write to the cache. - `info`: This argument shouldn't be used frequently, but it contains running information about the traversal of the query document. It allows us to make resolvers reusable or to retrieve information about the entire query. Its full API can be found [in the API docs](../api/graphcache.md#info). The cache updaters return value is disregarded (and typed as `void` in TypeScript), which makes any method that they call on the `cache` instance a side effect, which may trigger additional cache changes and updates all affected queries as we modify them. ## Why do we need cache updates? When we’re designing a GraphQL schema well, we won’t need to write many cache updaters for Graphcache. For example, we may have a mutation to update a username on a `User`, which can trivially update the cache without us writing an updater because it resolves the `User`. ```graphql query User($id: ID!) { user(id: $id) { __typename # "User" id username } } mutation UpdateUsername($id: ID!, $username: String!) { updateUser(id: $id, username: $username) { __typename # "User" id username } } ``` In the above example, `Query.user` returns a `User`, which is then updated by a mutation on `Mutation.updateUser`. Since the mutation also queries the `User`, the updated username will automatically be applied by Graphcache. 
If the mutation field didn’t return a `User`, then this wouldn’t be possible, and while we can write an updater in Graphcache for it, we should consider this poor schema design. An updater instead becomes absolutely necessary when a mutation can’t reasonably return what has changed or when we can’t manually define a selection set that’d be even able to select all fields that may update. Some examples may include: - `Mutation.deleteUser`, since we’ll need to invalidate an entity - `Mutation.createUser`, since a list may now have to include a new entity - `Mutation.createBook`, since a given entity, e.g. `User` may have a field `User.books` that now needs to be updated. In short, we may need to write a cache updater for any **relation** (i.e. link) that we can’t query via our GraphQL mutation directly, since there’ll be changes to our data that Graphcache won’t be able to see and store. In a later section on this page, [we’ll learn about the `cache.link` method.](#writing-links-individually) This method is used to update a field to point at a different entity. In other words, `cache.link` is used to update a relation from one entity field to one or more other child entities. This is the most common update we’ll need and it’s preferable to always try to use `cache.link`, unless we need to update a scalar. ## Manually updating entities If a mutation field's result isn't returning the full entity it updates then it becomes impossible for Graphcache to update said entity automatically. For instance, we may have a mutation like the following: ```graphql mutation UpdateTodo($todoId: ID!, $date: String!) { updateTodoDate(id: $todoId, date: $date) } ``` In this hypothetical case instead of `Mutation.updateDate` resolving to the full `Todo` object type it instead results in a scalar. This could be fixed by changing the `Mutation` in our API's schema to instead return the full `Todo` entity, which would allow us to run the mutation as such, which updates the `Todo` in our cache automatically: ```graphql mutation UpdateTodo($todoId: ID!, $date: String!) { updateTodoDate(id: $todoId, date: $date) { ...Todo_date } } fragment Todo_date on Todo { id updatedAt } ``` However, if this isn't possible we can instead write an updater that updates our `Todo` entity manually by using the `cache.writeFragment` method: ```js import { gql } from '@urql/core'; cacheExchange({ updates: { Mutation: { updateTodoDate(_result, args, cache, _info) { const fragment = gql` fragment _ on Todo { id updatedAt } `; cache.writeFragment(fragment, { id: args.id, updatedAt: args.date }); }, }, }, }); ``` The `cache.writeFragment` method is similar to the `cache.readFragment` method that we've seen [on the "Local Resolvers" page before](./local-resolvers.md#reading-a-fragment). Instead of reading data for a given fragment it instead writes data to the cache. > **Note:** In the above example, we've used > [the `gql` tag function](../api/core.md#gql) because `writeFragment` only accepts > GraphQL `DocumentNode`s as inputs, and not strings. ### Cache Updates outside updaters Cache updates are **not** possible outside `updates`'s functions. If we attempt to store the `cache` in a variable and call its methods outside any `updates` functions (or functions, like `resolvers`) then Graphcache will throw an error. Methods like these cannot be called outside the `cacheExchange`'s `updates` functions, because all updates are isolated to be _reactive_ to mutations and subscription events. 
In Graphcache, out-of-band updates aren't permitted because the cache attempts to only represent the server's state. This limitation keeps the data of the cache true to the server data we receive from API results and makes its behaviour much more predictable. If we still manage to call any of the cache's methods outside its callbacks in its configuration, we will receive [a "(2) Invalid Cache Call" error](./errors.md#2-invalid-cache-call). ### Updaters on arbitrary types Cache updates **may** be configured for arbitrary types and not just for `Mutation` or `Subscription` fields. However, this can potentially be **dangerous** and is an easy trap to fall into. It is allowed though because it allows for some nice tricks and workarounds. Given an updater on an arbitrary type, e.g. `Todo.author`, we can chain updates onto this field whenever it’s written. The updater can then be triggerd by Graphcache during _any_ operation; mutations, queries, and subscriptions. When this update is triggered, it allows us to add more arbitrary updates onto this field. > **Note:** If you’re looking to use this because you’re nesting mutations onto other object types, > e.g. `Mutation.author.updateName`, please consider changing your schema first before using this. > Namespacing mutations is not recommended and changes the execution order to be concurrent rather > than sequential when you use multiple nested mutation fields. ## Updating lists or links Mutations that create new entities are pretty common, and it's not uncommon to attempt to update the cache when a mutation result for these "creation" mutations come back, since this avoids an additional roundtrip to our APIs. While it's possible for these mutations to return any affected entities that carry the lists as well, often these lists live on fields on or below the `Query` root type, which means that we'd be sending a rather large API result. For large amounts of pages this is especially infeasible. Instead, most schemas opt to instead just return the entity that's just been created: ```graphql mutation NewTodo($text: String!) { createTodo(id: $todoId, text: $text) { id text } } ``` If we have a corresponding field on `Query.todos` that contains all of our `Todo` entities then this means that we'll need to create an updater that automatically adds the `Todo` to our list: ```js cacheExchange({ updates: { Mutation: { createTodo(result, _args, cache, _info) { const TodoList = gql` { todos { id } } `; cache.updateQuery({ query: TodoList }, data => { return { ...data, todos: [...data.todos, result.createTodo], }; }); }, }, }, }); ``` Here we use the `cache.updateQuery` method, which is similar to the [`cache.readQuery` method](./local-resolvers.md#reading-a-query) that we've seen on the "Local Resolvers" page before. This method accepts a callback, which will give us the `data` of the query, as read from the locally cached data, and we may return an updated version of this data. While we may want to instinctively opt for immutably copying and modifying this data, we're actually allowed to mutate it directly, since it's just a copy of the data that's been read by the cache. This `data` may also be `null` if the cache doesn't actually have enough locally cached information to fulfil the query. This is important because resolvers aren't actually applied to cache methods in updaters. All resolvers are ignored, so it becomes impossible to accidentally commit transformed data to our cache. 
We could safely add a resolver for `Todo.createdAt` and wouldn't have to worry about an updater accidentally writing it to the cache's internal data structure. ### Writing links individually As long as we're only updating links (as in 'relations') then we may also use the [`cache.link` method](../api/graphcache.md#link). This method is the "write equivalent" of [the `cache.resolve` method, as seen on the "Local Resolvers" page before.](./local-resolvers.md#resolving-other-fields) We can use this method to update any relation in our cache, so the example above could also be rewritten to use `cache.link` and `cache.resolve` rather than `cache.updateQuery`. ```js cacheExchange({ updates: { Mutation: { createTodo(result, _args, cache, _info) { const todos = cache.resolve('Query', 'todos'); if (Array.isArray(todos)) { cache.link('Query', 'todos', [...todos, result.createTodo]); } }, }, }, }); ``` This method can be combined with more than just `cache.resolve`, for instance, it's a good fit with `cache.inspectFields`. However, when you're writing records (as in 'scalar' values) `cache.writeFragment` and `cache.updateQuery` are still the only methods that you can use. But since this kind of data is often written automatically by the normalized cache, often updating a link is the only modification we may want to make. ## Updating many unknown links In the previous section we've seen how to update data, like a list, when a mutation result enters the cache. However, we've used a rather simple example when we've looked at a single list on a known field. In many schemas pagination is quite common, and when we for instance delete a todo then knowing the lists to update becomes unknowable. We cannot know ahead of time how many pages (and its variables) we've already accessed. This knowledge in fact _shouldn't_ be available to Graphcache. Querying the `Client` is an entirely separate concern that's often colocated with some part of our UI code. ```graphql mutation RemoveTodo($id: ID!) { removeTodo(id: $id) } ``` Suppose we have the above mutation, which deletes a `Todo` entity by its ID. Our app may query a list of these items over many pages with separate queries being sent to our API, which makes it hard to know the fields that should be checked: ```graphql query PaginatedTodos($skip: Int) { todos(skip: $skip) { id text } } ``` Instead, we can **introspect an entity's fields** to find the fields we may want to update dynamically. This is possible thanks to [the `cache.inspectFields` method](../api/graphcache.md#inspectfields). This method accepts a key, or a keyable entity like the `cache.keyOfEntity` method that [we've seen on the "Local Resolvers" page](./local-resolvers.md#resolving-by-keys) or the `cache.resolve` method's first argument. ```js cacheExchange({ updates: { Mutation: { removeTodo(_result, args, cache, _info) { const TodoList = gql` query (skip: $skip) { todos(skip: $skip) { id } } `; const fields = cache .inspectFields('Query') .filter(field => field.fieldName === 'todos') .forEach(field => { cache.updateQuery( { query: TodoList, variables: { skip: field.arguments.skip }, }, data => { data.todos = data.todos.filter(todo => todo.id !== args.id); return data; } ); }); }, }, }, }); ``` To implement an updater for our example's `removeTodo` mutation field we may use the `cache.inspectFields('Query')` method to retrieve a list of all fields on the `Query` root entity. This list will contain all known fields on the `"Query"` entity. 
Each field is described as an object with three properties:

- `fieldName`: The field's name; in this case we're filtering for all `todos` listing fields.
- `arguments`: The arguments for the given field, since each field that accepts arguments can be accessed multiple times with different arguments. In this example we're looking at `arguments.skip` to find all unique pages.
- `fieldKey`: This is the field's key, which can come in useful to retrieve a field using `cache.resolve(entityKey, fieldKey)`, which prevents the arguments from having to be stringified repeatedly.

To summarise, in our example we filter the list of fields down to only the `todos` fields and, for each set of `arguments`, update the corresponding list to remove the deleted `Todo` from it.

### Inspecting arbitrary entities

We're not limited to inspecting fields on the `Query` root entity. Instead, we can inspect the fields on any entity by passing a different partial, keyable entity or key to `cache.inspectFields`. For instance, if we had a `Todo` entity and wanted to get all of its known fields then we could pass in a partial `Todo` entity just as well:

```js
cache.inspectFields({
  __typename: 'Todo',
  id: args.id,
});
```

## Invalidating Entities

Admittedly, it's sometimes almost impossible to write updaters for all mutations. It's often even hard to predict what our APIs may do when they receive a mutation. An update of an entity may change the sorting of a list, or remove an item from a list in a way we can't predict, since we don't have access to a full database to run the API locally.

In cases like these it may be advisable to trigger a refetch instead and let the cache update itself, by sending queries with invalidated data associated with them to our API again. This process is called **invalidation** since it removes data from Graphcache's locally cached data.

We may use the cache's [`cache.invalidate` method](../api/graphcache.md#invalidate) to either invalidate entire entities or individual fields. It has the same signature as [the `cache.resolve` method](../api/graphcache.md#resolve), which we've already seen [on the "Local Resolvers" page as well](./local-resolvers.md#resolving-other-fields). We can simplify the previous update we've written with a call to `cache.invalidate`:

```js
cacheExchange({
  updates: {
    Mutation: {
      removeTodo(_result, args, cache, _info) {
        cache.invalidate({
          __typename: 'Todo',
          id: args.id,
        });
      },
    },
  },
});
```

Like any other cache update, this will cause all queries that use this `Todo` entity to be updated against the cache. Since we've invalidated the `Todo` item they're using, these queries will be refetched and sent to our API. If we're using ["Schema Awareness"](./schema-awareness.md) then these queries' results may temporarily be updated with a partial result, but in general we should observe that queries with invalidated data will be refetched, as some of their data isn't cached anymore.

### Invalidating individual fields

We may also want to invalidate only individual fields, since maybe not all queries have to be updated immediately. We can pass a field (and optional arguments) to the `cache.invalidate` method as well, to invalidate only a single field. For instance, we can use this to invalidate our lists instead of invalidating the entity itself, which can be useful if we know that modifying an entity will cause our list to be sorted differently.
```js
cacheExchange({
  updates: {
    Mutation: {
      updateTodo(_result, _args, cache, _info) {
        const key = 'Query';
        cache
          .inspectFields(key)
          .filter(field => field.fieldName === 'todos')
          .forEach(field => {
            cache.invalidate(key, field.fieldKey);
            // or alternatively:
            // cache.invalidate(key, field.fieldName, field.arguments);
          });
      },
    },
  },
});
```

In this example we've attached an updater to the `Mutation.updateTodo` field. We react to this mutation by enumerating all `todos` listing fields using `cache.inspectFields` and invalidating exactly these fields, which causes all queries using these listing fields to be refetched.

### Invalidating a type

We can also invalidate all entities of a given type. This can be handy in the case of a list update, or when we aren't sure which entity is affected. It's done by passing only the relevant `__typename` to the `invalidate` function.

```js
cacheExchange({
  updates: {
    Mutation: {
      deleteTodo(_result, _args, cache, _info) {
        cache.invalidate('Todo');
      },
    },
  },
});
```

## Optimistic updates

If we know what result a mutation may return, why wait for the GraphQL API to fulfill our mutations? In addition to the `updates` configuration, we can also pass an `optimistic` option to the `cacheExchange`. This option is a factory function that allows us to create a "virtual" result for a mutation. This temporary result can be applied to the cache immediately, to give our users the illusion that mutations were executed instantly, which is a great method to reduce waiting time and to make our apps feel snappier.

This technique is often used with one-off mutations that are assumed to succeed, like starring a repository or liking a tweet. In such cases it's often desirable to make the interaction feel as instant as possible.

The `optimistic` configuration is similar to our `resolvers` or `updates` configuration, except that it only receives a single map of mutation fields. We can attach optimistic functions to any mutation field to make it generate an optimistic result, which is applied to the cache while the `Client` waits for a response from our API.

An "optimistic" function accepts three positional arguments. These are the same arguments that `updates` functions receive, except for `parent`, since we don't have any server data to work with:

- `args`: The arguments that the field has been called with, which will be replaced with an empty object if the field hasn't been called with any arguments.
- `cache`: The `cache` instance, which gives us access to methods allowing us to interact with the local cache. Its full API can be found [in the API docs](../api/graphcache.md#cache). On this page we use it frequently to read from and write to the cache.
- `info`: This argument shouldn't be used frequently, but it contains running information about the traversal of the query document. It allows us to make resolvers reusable or to retrieve information about the entire query. Its full API can be found [in the API docs](../api/graphcache.md#info).

The usual `parent` argument isn't present since optimistic functions don't have any server data to handle; instead, they create this data. When a mutation that contains one or more optimistic mutation fields is run, Graphcache picks these up and generates immediate changes, which it applies to the cache. Any `updates` functions also trigger, as if the results were real server results.
This modification is temporary. Once a result from the API comes back it's reverted, which leaves the cache in a state where it can apply the "real" result.

> Note: While optimistic mutations are waiting for results from the API, all queries that may alter
> our optimistic data are paused (or rather, queued up), and all optimistic mutations will be reverted
> at the same time. This means that optimistic results can stack but will never accidentally be
> confused with "real" data in your configuration.

In the following example we assume that we'd like to implement an optimistic result for a `favoriteTodo` mutation, like so:

```graphql
mutation FavoriteTodo($id: ID!) {
  favoriteTodo(id: $id) {
    id
    favorite
    updatedAt
  }
}
```

The mutation is rather simple and all we have to do is create a function that imitates the result that the API is assumed to send back:

```js
const cache = cacheExchange({
  optimistic: {
    favoriteTodo(args, cache, info) {
      return {
        __typename: 'Todo',
        id: args.id,
        favorite: true,
      };
    },
  },
});
```

This optimistic mutation will be applied to the cache. If any `updates` configuration exists for `Mutation.favoriteTodo` then it will be executed using the optimistic result. Once the mutation result comes back from our API this temporary change will be rolled back and discarded.

In the optimistic mutation function above we also see that `updatedAt` is not present in our optimistic return value. That's because we don't always have to (and sometimes can't) match our mutations' selection sets perfectly. Instead, Graphcache will skip over the fields we leave out and use their cached values. This even works on nested entities and fields. However, leaving out fields can sometimes prevent the optimistic update from applying, namely when doing so leaves a query that needs to update only partially cached. In other words, if our optimistic updates cause a cache miss, we won't see them being applied.

Sometimes we may need to apply optimistic updates to fields that accept arguments. For instance, our `favorite` field may have a date cut-off:

```graphql
mutation FavoriteTodo($id: ID!) {
  favoriteTodo(id: $id) {
    id
    favorite(since: ONE_MONTH_AGO)
    updatedAt
  }
}
```

To solve this, we can put a function on the optimistic result that our `optimistic` update function returns:

```js
const cache = cacheExchange({
  optimistic: {
    favoriteTodo(args, cache, info) {
      return {
        __typename: 'Todo',
        id: args.id,
        favorite(_args, _cache, _info) {
          return true;
        },
      };
    },
  },
});
```

The function signature and the arguments it receives are identical to those of the top-level optimistic function you define; it's basically a nested optimistic function.

### Variables for Optimistic Updates

Sometimes it's not possible to retrieve all the data that an optimistic update requires for its "fake result" from the cache or from the existing variables. For these scenarios Graphcache allows for a small escape hatch: we can access additional variables that we pass from our UI code to the mutation. For instance, given a mutation like the following, we may add more variables than the mutation specifies:

```graphql
mutation UpdateTodo($id: ID!, $text: String!) {
  updateTodo(id: $id, text: $text) {
    id
    text
  }
}
```

In the above mutation we've only defined an `$id` and a `$text` variable. Graphcache typically filters variables using our query document's definitions, which means that our API will never receive any variables other than the ones we've defined.
However, we're able to pass additional variables to our mutation, e.g. `{ extra }`, and since `$extra` isn't defined it will be filtered out when the mutation is sent to the API. An optimistic mutation (or an updater, as in the following example) will still be able to access this variable through `info.variables`, like so:

```js
cacheExchange({
  updates: {
    Mutation: {
      updateTodo(_result, _args, _cache, info) {
        const extraVariable = info.variables.extra;
      },
    },
  },
});
```

### Reading on

[On the next page we'll learn about "Schema Awareness".](./schema-awareness.md)

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/graphcache/errors.md
# Path: docs/graphcache/errors.md

---
title: Errors
order: 8
---

# Help!

**This document lists out all errors and warnings in `@urql/exchange-graphcache`.**

Any unexpected behaviour or condition will be marked by an error or warning in development. This will be output as a helpful little message. Sometimes, however, this message may not actually tell you everything that's going on. This is a supporting document that explains every error and attempts to give more information on how you may be able to fix some issues or avoid these errors/warnings.

## (1) Invalid GraphQL document

> Invalid GraphQL document: All GraphQL documents must contain an OperationDefinition
> node for a query, subscription or mutation.

There are multiple places where you're passing in GraphQL documents, either through methods on `Cache` (e.g. `cache.updateQuery`) or via `urql` using the `Client` or hooks like `useQuery`. Your queries must always contain a main operation: one of query, mutation, or subscription. This error occurs when the main operation is missing, because the `DocumentNode` may be empty or only contain fragments.

## (2) Invalid Cache call

> Invalid Cache call: The cache may only be accessed or mutated during
> operations like write or query, or as part of its resolvers, updaters,
> or optimistic configs.

If you're somehow accessing the `Cache` (an instance of `Store`) outside any of the usual operations then this error will be thrown. Please make sure that you're only calling methods on the `cache` as part of the configs that you pass to your `cacheExchange`. Outside these functions the cache must not be changed.

However, when you're not using the `cacheExchange` and are trying to use the `Store` on its own, you may run into issues where its global state wasn't initialised correctly. This is a safeguard to prevent any asynchronous work from taking place and to avoid mutating the cache outside of any normal operation.

## (3) Invalid Object type

> Invalid Object type: The type `???` is not an object in the defined schema,
> but the GraphQL document is traversing it.

When you're passing an introspected schema to the cache exchange, it is able to check whether all your queries are valid. This error occurs when an unknown type is found as part of a query or fragment. Check whether your schema is up-to-date or whether you're using an invalid typename somewhere, maybe due to a typo.

## (4) Invalid field

> Invalid field: The field `???` does not exist on `???`,
> but the GraphQL document expects it to exist.
> Traversal will continue, however this may lead to undefined behavior!

Similarly to the previous warning, when you're passing an introspected schema to the cache exchange, it is able to check whether all your queries are valid. This warning occurs when an unknown field is found in a selection set as part of a query or fragment. Check whether your schema is up-to-date or whether you're using an invalid field somewhere, maybe due to a typo. As the warning states, this won't cause any operation to abort or an error to be thrown!

## (5) Invalid Abstract type

> Invalid Abstract type: The type `???` is not an Interface or Union type
> in the defined schema, but a fragment in the GraphQL document is using it
> as a type condition.

When you're passing an introspected schema to the cache exchange, it becomes able to deterministically check whether an entity in the cache matches a fragment's type condition. This applies to full fragments (`fragment _ on Interface`) or inline fragments (`... on Interface`) that apply to interfaces instead of a concrete object typename. Check whether your schema is up-to-date or whether you're using an invalid type condition somewhere, maybe due to a typo.

## (6) readFragment(...) was called with an empty fragment

> readFragment(...) was called with an empty fragment.
> You have to call it with at least one fragment in your GraphQL document.

You probably have called `cache.readFragment` with a GraphQL document that doesn't contain a main fragment. This error occurs when no main fragment can be found, because the `DocumentNode` may be empty or does not contain fragments. When you're calling a fragment method, please ensure that you're only passing fragments in your GraphQL document. The first fragment will be used to read the data. This also occurs when you pass in a `fragmentName` but a fragment with the given name can't be found in the `DocumentNode`.

## (7) Can't generate a key for readFragment(...)

> Can't generate a key for readFragment(...).
> You have to pass an `id` or `_id` field or create a custom `keys` config for `???`.

You probably have called `cache.readFragment` with data that the cache can't generate a key for. This may happen because you're missing the `id` or `_id` field or some other fields required by your custom `keys` config. Please make sure that you include enough properties on your data so that `readFragment` can generate a key.

## (8) Invalid resolver data

> Invalid resolver value: The resolver at `???` returned an invalid typename that
> could not be reconciled with the cache.

This error may occur when you provide a cache resolver for a field using the `resolvers` config. The value that you return needs to contain a `__typename` field, and this field must match the `__typename` field that exists in the cache, if any. This is because it's not possible to return a different type for a single field. Please check your schema for the type that your resolver has to return, then add a `__typename` field to your returned resolver value that matches this type.

## (9) Invalid resolver value

> Invalid resolver value: The field at `???` is a scalar (number, boolean, etc),
> but the GraphQL query expects a selection set for this field.

The GraphQL query that has been walked contains a selection set at the place where your resolver is located. This means that a full entity object needs to be returned, but instead the cache received a number, boolean, or another scalar from your resolver.
Please check that your resolvers return scalars where there's no selection set, and entities where there is one.

## (10) writeOptimistic(...) was called with an operation that isn't a mutation

> writeOptimistic(...) was called with an operation that is not a mutation.
> This case is unsupported and should never occur.

This should never happen; please open an issue if it does. This occurs when `writeOptimistic` attempts to write an optimistic result for a query or subscription, instead of a mutation.

## (11) writeFragment(...) was called with an empty fragment

> writeFragment(...) was called with an empty fragment.
> You have to call it with at least one fragment in your GraphQL document.

You probably have called `cache.writeFragment` with a GraphQL document that doesn't contain a main fragment. This error occurs when no main fragment can be found, because the `DocumentNode` may be empty or does not contain fragments. When you're calling a fragment method, please ensure that you're only passing fragments in your GraphQL document. The first fragment will be used to start writing data. This also occurs when you pass in a `fragmentName` but a fragment with the given name can't be found in the `DocumentNode`.

## (12) Can't generate a key for writeFragment(...) or link(...)

> Can't generate a key for writeFragment(...) or link(...) data.
> You have to pass an `id` or `_id` field or create a custom `keys` config for `???`.

You probably have called `cache.writeFragment` or `cache.link` with data that the cache can't generate a key for. This may happen because you're missing the `id` or `_id` field or some other fields required by your custom `keys` config. Please make sure that you include enough properties on your data so that `writeFragment` or `cache.link` can generate a key. For `cache.link`, the entities must either be existing entity keys or keyable entities.

## (13) Invalid undefined

> Invalid undefined: The field at `???` is `undefined`, but the GraphQL query expects a
> scalar (number, boolean, etc) / selection set for this field.

As data is written to the cache, this warning is issued when `undefined` is encountered. GraphQL results should never contain an `undefined` value, so this warning will let you know which part of your result did contain `undefined`.

## (14) Couldn't find \_\_typename when writing.

> Couldn't find `__typename` when writing.
> If you're writing to the cache manually you have to pass a `__typename` property on each entity in your data.

You probably have called `cache.writeFragment` or `cache.updateQuery` with data that is missing a `__typename` field for an entity where your document contains a selection set. The cache won't be able to generate a key for entities that are missing the `__typename` field. Please make sure that you include enough properties on your data so that `write` can generate a key.

## (15) Invalid key

> Invalid key: The GraphQL query at the field at `???` has a selection set,
> but no key could be generated for the data at this field.
> You have to request `id` or `_id` fields for all selection sets or create a
> custom `keys` config for `???`.
> Entities without keys will be embedded directly on the parent entity.
> If this is intentional, create a `keys` config for `???` that always returns null.

This error occurs when the cache can't generate a key for an entity. The key would then effectively be `null`, and the entity won't be cached by a key.
Conceptually this means that the entity won't be normalized but will instead be embedded on the parent's key and field, which is displayed in the first part of the warning. This may mean that you forgot to include an `id` or `_id` field. But if your entity at that place doesn't have any `id` fields, then you may have to create a custom `keys` config. This `keys` function either needs to return a unique ID for your entity, or it needs to explicitly return `null` to silence this warning.

## (16) Heuristic Fragment Matching

> Heuristic Fragment Matching: A fragment is trying to match against the `???` type,
> but the type condition is `???`. Since GraphQL allows for interfaces `???` may be
> an interface.
> A schema needs to be defined for this match to be deterministic, otherwise
> the fragment will be matched heuristically!

This warning is issued on fragment matching. Fragment matching is the process of matching a fragment against a piece of data in the cache and that data's `__typename` field. When the `__typename` field doesn't match the fragment's type condition, we may be dealing with an interface or a union type. In such a case the fragment may _still match_ if it's referring to an interface (`... on Interface`).

Graphcache is supposed to be usable without much config, so what it does in this case is apply a heuristic match. In a heuristic fragment match we check whether all fields on the fragment are present in the cache, and if so the fragment is treated as matching. When you pass an introspected schema to the cache, this warning will never be displayed, as the cache can then do deterministic fragment matching using schema information.

## (17) Invalid type

> Invalid type: The type `???` is used with @populate but does not exist.

When you're using the populate exchange with an introspected schema and add the `@populate` directive to fields, it first checks whether the type is valid and exists on the schema. If the field does not have enough type information, because it doesn't exist on the schema or does not match expectations, then this warning is logged. Check whether your schema is up-to-date or whether you're using an invalid field somewhere, maybe due to a typo.

## (18) Invalid TypeInfo state

> Invalid TypeInfo state: Found no flat schema type when one was expected.

When you're using the populate exchange with an introspected schema, it will start collecting used fragments and selection sets on all of your queries. This error may occur if it hits unexpected or nonexistent types when doing so. Check whether your schema is up-to-date or whether you're using an invalid field somewhere, maybe due to a typo. Please open an issue if it happens on a query that you expect to be supported by the `populateExchange`.

## (19) Can't generate a key for invalidate(...)

> Can't generate a key for invalidate(...).
> You need to pass in a valid key (\_\_typename:id) or an object with the "\_\_typename" property and an "id" or "\_id" property.

You probably have called `cache.invalidate` with data that the cache can't generate a key for. This may happen because you're missing the `__typename` and `id` (or `_id`) fields, or, if those aren't applicable to this entity, a custom `keys` entry.

## (20) Invalid Object type

> Invalid Object type: The type `???` is not an object in the defined schema,
> but the `keys` option is referencing it.

When you're passing an introspected schema to the cache exchange, it is able to check whether your `opts.keys` is valid.
This error occurs when an unknown type is found in `opts.keys`. Check whether your schema is up-to-date, or whether you're using an invalid typename in `opts.keys`, maybe due to a typo. ## (21) Invalid updates type > Invalid updates field: The type `???` is not an object in the defined schema, > but the `updates` config is referencing it. When you're passing an introspected schema to the cache exchange, it is able to check whether your `opts.updates` config is valid. This error occurs when an unknown type is found in the `opts.updates` config. Check whether your schema is up-to-date, or whether you've got a typo in `opts.updates`. ## (22) Invalid updates field > Invalid updates field: `???` on `???` is not in the defined schema, > but the `updates` config is referencing it. When you're passing an introspected schema to the cache exchange, it is able to check whether your `opts.updates` config is valid. This error occurs when an unknown field is found in `opts.updates[typename]`. Check whether your schema is up-to-date, or whether you're using an invalid field name in `opts.updates`, maybe due to a typo. ## (23) Invalid resolver > Invalid resolver: `???` is not in the defined schema, but the `resolvers` > option is referencing it. When you're passing an introspected schema to the cache exchange, it is able to check whether your `opts.resolvers` is valid. This error occurs when an unknown query, type or field is found in `opts.resolvers`. Check whether your schema is up-to-date, or whether you've got a typo in `opts.resolvers`. ## (24) Invalid optimistic mutation > Invalid optimistic mutation field: `???` is not a mutation field in the defined schema, > but the `optimistic` option is referencing it. When you're passing an introspected schema to the cache exchange, it is able to check whether your `opts.optimistic` is valid. This error occurs when a field in `opts.optimistic` is not in the schema's `Mutation` fields. Check whether your schema is up-to-date, or whether you've got a typo in `Mutation` or `opts.optimistic`. ## (25) Invalid root traversal > Invalid root traversal: A selection was being read on `???` which is an uncached root type. > The `Mutation` and `Subscription` types are special Operation Root Types and cannot be read back > from the cache. In GraphQL every schema has three [Operation Root Types](https://spec.graphql.org/June2018/#sec-Root-Operation-Types). The `Query` type is the only one that is cached in Graphcache's normalized cache, since it's the root of all normalized cache data, i.e. all data is linked and connects back to the `Query` type. The `Subscription` and `Mutation` types are special and uncached; they may link to entities that will be updated in the normalized cache data, but are themselves not cached, since they're never directly queried. When your schema treats `Mutation` or `Subscription` like regular entity types you may get this warning. This may happen because you've used the default reserved names `Mutation` or `Subscription` for entities rather than as special Operation Root Types, and haven't specified this in the schema. Hence this issue can often be fixed by either enabling [Schema Awareness](./schema-awareness.md) or by adding a `schema` definition to your GraphQL Schema like so: ```graphql schema { query: Query mutation: YourMutation subscription: YourSubscription } ``` Where `YourMutation` and `YourSubscription` are your custom Operation Root Types, instead of relying on the default names `"Mutation"` and `"Subscription"`. 

## (26) Invalid abstract resolver

> Invalid resolver: `???` does not map to a concrete type in the schema,
> but the resolvers option is referencing it. Implement the resolver for the types that `???` maps to instead.

When you're passing an introspected schema to the cache exchange, it is able to check whether your `opts.resolvers` is valid. This error occurs when you are using an `interface` or `union` rather than an implemented type for these. Check the type mentioned and change it to one of the specific types.

## (27) Invalid Cache write

> Invalid Cache write: You may not write to the cache during cache reads.
> Accesses to `cache.writeFragment`, `cache.updateQuery`, and `cache.link` may
> not be made inside `resolvers` for instance.

If you're using the `Cache` inside your `cacheExchange` config, you receive it either inside callbacks that are called when the cache is queried (e.g. `resolvers`) or when data is written to the cache (e.g. `updates`). You may not write to the cache while it's being queried. Please make sure that you're not calling `cache.updateQuery`, `cache.writeFragment`, or `cache.link` inside `resolvers`.

## (28) Resolver and directive match the same field

When you have a resolver defined on a field, you shouldn't combine it with a directive, as the directive will apply and the resolver will be void.

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/graphcache/local-directives.md
# Path: docs/graphcache/local-directives.md

---
title: Local Directives
order: 3
---

# Local Directives

Previously, we've learned about local resolvers [on the "Normalized Caching" page](./normalized-caching.md#manually-resolving-entities) and [the "Local Resolvers" page](./local-resolvers.md). Resolvers allow us to change the data that Graphcache resolves for a given field on a given type. This, in turn, allows us to change which links and data are returned in a query’s result, which otherwise may not be cached or be returned in a different shape.

Resolvers are useful to globally change how a field behaves, for instance, to tell Graphcache that a `Query.item(id: $id)` field returns an item of type `Item` with the `id` field, or to transform a value before it’s used in the UI. However, resolvers can only change a field’s behaviour globally, not per query. This is why **local directives** exist.

## Adding client-only directives

Any directive in our GraphQL documents that’s prefixed with an underscore character (`_`) will be filtered out by `@urql/core`. This means that our GraphQL API never sees it and it becomes a “client-only directive”. Whether we prefix a directive or not, we can define local resolvers for directives in Graphcache’s configuration and thus create conditional local resolvers.

```js
cacheExchange({
  directives: {
    pagination(directiveArgs) {
      // This resolver is called for @_pagination directives
      return (parent, args, cache, info) => {
        return null;
      };
    },
  },
});
```

Once we define a directive on the `directives` configuration object, we can reference it in our GraphQL queries. As per the above example, if we now reference `@_pagination` in a query, the resolver that’s returned in the configuration will be applied to the field, just like a local resolver. We can also reference the directive using `@pagination`; however, this will mean that it’s also sent to the API, so this usually isn’t what we want.
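
To sketch how this looks in practice, assuming the `pagination` directive from the configuration above and a hypothetical `todos` field that accepts `skip` and `limit` arguments, a query can opt into the directive on a per-query basis:

```graphql
{
  todos(skip: 0, limit: 10) @_pagination {
    id
    text
  }
}
```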
## Client-controlled Nullability

Graphcache comes with two directives built in by default: the `optional` and `required` directives. These directives can be used as an alternative to [the Schema Awareness feature’s](./schema-awareness.md) ability to generate partial results.

If we write a query that contains `@_optional` on a field, that field is always allowed to be nullable: in case it’s not cached, Graphcache will replace it with a `null` value. Similarly, if we annotate a field with `@_required`, the value becomes required even if the cache knows it’s set to `null`; Graphcache will then either cascade up to the next parent field annotated with `@_optional`, or mark the query as a cache miss.

## Pagination

Previously, in [the “Local Resolvers” page’s Pagination section](./local-resolvers.md#pagination) we defined a local resolver to add infinite pagination to a given type’s field. If we add the `simplePagination` or `relayPagination` helpers as directives instead, we can still use our schema’s pagination field as normal and only opt into infinite pagination as required.

```js
import { simplePagination, relayPagination } from '@urql/exchange-graphcache/extras';

cacheExchange({
  directives: {
    simplePagination: options => simplePagination({ ...options }),
    relayPagination: options => relayPagination({ ...options }),
  },
});
```

Defining directives for our resolver factory functions means that we can now use them selectively.

```graphql
{
  todos(first: 10) @_relayPagination(mergeMode: "outwards") {
    id
    text
  }
}
```

### Reading on

[On the next page we'll learn about "Cache Updates".](./cache-updates.md)

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/graphcache/local-resolvers.md
# Path: docs/graphcache/local-resolvers.md

---
title: Local Resolvers
order: 2
---

# Local Resolvers

Previously, we've learned about local resolvers [on the "Normalized Caching" page](./normalized-caching.md#manually-resolving-entities). They allow us to change the data that Graphcache reads as it queries against its local cache, return links that would otherwise not be cached, or even transform scalar records on the fly.

The `resolvers` option on `cacheExchange` accepts a map of types with a nested map of fields, which means that we can add local resolvers to any field of any type. For example:

```js
cacheExchange({
  resolvers: {
    Todo: {
      updatedAt: parent => new Date(parent.updatedAt),
    },
  },
});
```

In the above example, we've defined what Graphcache does when it encounters the `updatedAt` field on `Todo` types: it transforms the cached value into a `Date` before returning it. Similarly to how Graphcache knows [how to generate keys](./normalized-caching.md#custom-keys-and-non-keyable-entities) and looks up our custom `keys` configuration functions per `__typename`, it also uses our `resolvers` configuration on each field it queries from its locally cached data.

A local resolver function in Graphcache has a similar signature to [GraphQL.js' resolvers on the server-side](https://www.graphql-tools.com/docs/resolvers/), so their shape should look familiar to us.

```js
{
  TypeName: {
    fieldName: (parent, args, cache, info) => {
      return null; // new value
    },
  },
}
```

A resolver may be attached to any type's field and accepts four positional arguments:

- `parent`: The object that the field will be added to, which contains the data as it's being queried. It will contain the current field's raw value if it's a scalar, which allows us to manipulate scalar values, like `parent.updatedAt` in the previous example.
- `args`: The arguments that the field is being called with, which will be replaced with an empty object if the field hasn't been called with any arguments. For example, if the field is queried as `name(capitalize: true)` then `args` would be `{ capitalize: true }`.
- `cache`: Unlike in GraphQL.js, this will not be the context but a `cache` instance, which gives us access to methods allowing us to interact with the local cache. Its full API can be found [in the API docs](../api/graphcache.md#cache).
- `info`: This argument shouldn't be used frequently, but it contains running information about the traversal of the query document. It allows us to make resolvers reusable or to retrieve information about the entire query. Its full API can be found [in the API docs](../api/graphcache.md#info).

The local resolvers may return any value that fits the query document's shape; however, we must ensure that what we return matches the types of our schema. It isn't possible, for instance, to turn a record field into a link, i.e. to replace a scalar with an entity. Instead, local resolvers are useful to transform records, like dates in our previous example, or to imitate server-side logic to allow Graphcache to retrieve more data from its cache without sending a query to our API.

Furthermore, while we see on this page that we get access to methods like `cache.resolve` and other methods to read from our cache, only ["Cache Updates"](./cache-updates.md) get to write and change the cache. If you call `cache.updateQuery`, `cache.writeFragment`, or `cache.link` in resolvers, you'll get an error, since it's not possible to update the cache while reading from it.

When writing a resolver you'll mostly use `cache.resolve`, which can be chained, to read field values from the cache. When a field points to another entity we may get a key, but resolvers are allowed to return keys or partial entities containing keys.

> **Note:** This essentially means that resolvers can return either scalar values for fields without
> selection sets, or partial entities or keys for fields with selection sets, i.e.
> links / relations. When we return `null`, this will be interpreted as the literal GraphQL `null` value,
> while returning `undefined` will cause a cache miss.

## Transforming Records

As we've explored in the ["Normalized Caching" page's section on records](./normalized-caching.md#storing-normalized-data), "records" are scalars and any fields in your query without selection sets. This could be a field with a string value, a number, or any other field that resolves to a [scalar type](https://graphql.org/learn/schema/#scalar-types) rather than another entity, i.e. an object type.

At the beginning of this page we've already seen an example of a local resolver attached to a record field, where we added a resolver to the `Todo.updatedAt` field:

```js
cacheExchange({
  resolvers: {
    Todo: {
      updatedAt: parent => new Date(parent.updatedAt),
    },
  },
});
```

A query that contains this field may look like `{ todo { updatedAt } }`, which clearly shows us that this field is a scalar since it doesn't have any selection set on the `updatedAt` field. In our example, we access this field's value and parse it as a `new Date()`. This shows us that for scalar fields it doesn't matter what kind of value we return. We may parse strings into more granular JS-native objects or replace values entirely.

We may also run into situations where we'd like to generalise the resolver and not make it dependent on the exact field it's being attached to.
In these cases, the [`info` object](../api/graphcache.md#info) can be very helpful as it provides us with information about the current query traversal and the part of the query document the cache is processing. The `info.fieldName` property is one of these properties and lets us know the field that the resolver is operating on. Hence, we can create a reusable resolver like so:

```js
const transformToDate = (parent, _args, _cache, info) => new Date(parent[info.fieldName]);

cacheExchange({
  resolvers: {
    Todo: { updatedAt: transformToDate },
  },
});
```

The resolver is now much more reusable, which is particularly handy if we're creating resolvers that we'd like to apply to multiple fields. The [`info` object has several more fields](../api/graphcache.md#info) that are all similarly useful for abstracting our resolvers.

We also haven't seen yet how to handle a field's arguments. If we have a field that accepts arguments we can use those as well, as they're passed to us with the second argument of a resolver:

```js
cacheExchange({
  resolvers: {
    Todo: {
      text: (parent, args) => {
        return args.capitalize && parent.text ? parent.text.toUpperCase() : parent.text;
      },
    },
  },
});
```

This is actually unlikely to be of use with records and scalar values, as our API has to be able to accept these arguments just as well. In other words, while you may be able to pass any arguments to a field in your query, your GraphQL API's schema must accept these arguments in the first place. However, this is still useful if we're trying to imitate what the API is doing, which will become more relevant in the following examples and sections.

## Resolving Entities

We've already briefly seen that resolvers can be used to replace a link in Graphcache's local data on the ["Normalized Caching" page](./normalized-caching.md#manually-resolving-entities). Given that Graphcache [stores entities in a normalized data structure](./normalized-caching.md#storing-normalized-data) there may be multiple fields on a given schema that can be used to get to the same entity. For instance, the schema may allow for the same entity to be looked up by an ID while this entity may also appear somewhere else in a list or on an entirely different field.

When links (or relations) like these are cached by Graphcache it is able to look up the entities automatically, e.g. if we've sent a `{ todo(id: 1) { id } }` query to our API once then Graphcache will have seen that this field leads to the entity it returns and can query it automatically from its cache. However, if we have a list like `{ todos { id } }` we may have seen and cached a specific entity, but as we browse the app and query for `{ todo(id: 1) { id } }`, Graphcache isn't able to automatically find this entity even if it has cached it already, and will send a request to our API.

In many cases we can create a local resolver to instead tell the cache where to look for a specific entity by returning partial information for it. Any resolver on a relational field, meaning any field that links to an object type (or a list of object types) in the schema, may return a partial entity that tells the cache how to resolve it. Hence, we're able to implement a resolver for the previously shown `todo(id: $id)` field like so:

```js
cacheExchange({
  resolvers: {
    Query: {
      todo: (_, args) => ({ __typename: 'Todo', id: args.id }),
    },
  },
});
```
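
With a resolver like this in place, a detail query can often be answered straight from the cache. As a hypothetical example, assuming `Todo:1` was already cached by an earlier list query and all of the selected fields below are cached, the following query no longer causes a network request on a cache hit:

```graphql
query Todo($id: ID!) {
  todo(id: $id) {
    id
    text
  }
}
```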
The `__typename` field is required: Graphcache will [use its keying logic](./normalized-caching.md#custom-keys-and-non-keyable-entities), together with your custom `keys` configuration, to generate a key for this entity and will then be able to look it up in its local cache. As with regular queries, the resolver is known to return a link since `todo(id: $id) { id }` is used with a selection set, querying fields on the entity.

### Resolving by keys

Resolvers can also directly return keys. We've previously learned [on the "Normalized Caching" page](./normalized-caching.md#custom-keys-and-non-keyable-entities) that the key for our example above would look something like `"Todo:1"` for `todo(id: 1)`. While it isn't advisable to create keys manually in your resolvers, if you returned a key directly this would still work. Essentially, returning `{ __typename, id }` may sometimes be the same as returning the key manually.

The `cache` that we receive as an argument on resolvers has a method for this logic, [the `cache.keyOfEntity` method](../api/graphcache.md#keyofentity). While it doesn't make much sense in this case, our example can be rewritten as:

```js
cacheExchange({
  resolvers: {
    Query: {
      todo: (_, args, cache) => cache.keyOfEntity({ __typename: 'Todo', id: args.id }),
    },
  },
});
```

And while it's not advisable to create keys ourselves, the resolvers' `cache` and `info` arguments give us ample opportunities to use and pass around keys. One example is the `info.parentKey` property. This property [on the `info` object](../api/graphcache.md#info) will always be set to the key of the entity that the resolver is currently run on. For instance, for the above resolver it may be `"Query"`, while for a resolver on `Todo.updatedAt` it may be `"Todo:1"`.

## Resolving other fields

In the above two examples we've seen how a resolver can replace Graphcache's logic, which usually reads links and records only from its locally cached data. We've seen how a resolver on a record field can use `parent[fieldName]` to access its cached record value and transform it, and how a resolver for a link can return a partial entity [or a key](#resolving-by-keys). However, sometimes we'll need to resolve data from other fields in our resolvers.

> **Note:** For records, if the other field is on the same `parent` entity, it may seem logical to access it on
> `parent[otherFieldName]` as well; however, the `parent` object will only be sparsely populated with
> fields that the cache has already queried prior to reaching the resolver.
> In the previous example, where we've created a resolver for `Todo.updatedAt` and accessed
> `parent.updatedAt` to transform its value, the `parent.updatedAt` field is essentially a shortcut
> that allows us to get to the record quickly.

Instead we can use [the `cache.resolve` method](../api/graphcache.md#resolve). This method allows us to access Graphcache's cached data directly. It is used to resolve records or links on any given entity and accepts three arguments:

- `entity`: This is the entity on which we'd like to access a field. We may either pass a keyable, partial entity, e.g. `{ __typename: 'Todo', id: 1 }`, or a key. It takes the same inputs as [the `cache.keyOfEntity` method](../api/graphcache.md#keyofentity), which we've seen earlier in the ["Resolving by keys" section](#resolving-by-keys). It also accepts `null`, which causes it to return `null`; this is useful for chaining multiple `resolve` calls when deeply accessing a field.
- `fieldName`: This is the name of the field we'd like to access.
If we're looking for the record on `Todo.updatedAt` we would pass `"updatedAt"` and would receive the record value for this field. If we pass a field that is a _link_ to another entity then we'd pass that field's name (e.g. `"author"` for `Todo.author`) and `cache.resolve` will return a key instead of a record value.
- `fieldArgs`: Optionally, as the third argument we may pass the field's arguments, e.g. `{ id: 1 }` if we're trying to access `todo(id: 1)`, for instance.

This means that we can rewrite our original `Todo.updatedAt` example as follows, if we'd like to avoid using the `parent[fieldName]` shortcut:

```js
cacheExchange({
  resolvers: {
    Todo: {
      updatedAt: (parent, _args, cache) => new Date(cache.resolve(parent, 'updatedAt')),
    },
  },
});
```

When we call `cache.resolve(parent, "updatedAt")`, the cache will look up the `"updatedAt"` field on the `parent` entity, i.e. on the current `Todo` entity.

> **Note:** We've also previously learned that `parent` may not contain all fields that the entity may have and
> may hence be missing its keyable fields, like `id`, so why does this work?
> It works because `cache.resolve(parent)` is a shortcut for `cache.resolve(info.parentKey)`.

Like the `info.fieldName` property, `info.parentKey` gives us information about the current state of Graphcache's query operation. In this case, `info.parentKey` tells us what the parent's key is. However, since `cache.resolve(parent)` is much more intuitive we can write that instead, as it's a supported shortcut.

It follows that we may also use `cache.resolve` to access other fields. Let's suppose we'd want `updatedAt` to default to the entity's `createdAt` field when it's actually `null`. In such a case we could write a resolver like so:

```js
cacheExchange({
  resolvers: {
    Todo: {
      updatedAt: (parent, _args, cache) => parent.updatedAt || cache.resolve(parent, 'createdAt'),
    },
  },
});
```

As we can see, we're effortlessly able to access other records from the cache, provided these fields are actually cached. If they aren't, `cache.resolve` will return `undefined` instead.

Beyond records, we're also able to resolve links and hence jump to records of another entity. Let's suppose we have an `author { id, createdAt }` field on the `Todo` and would like `Todo.createdAt` to simply copy the author's `createdAt` field. We can chain `cache.resolve` calls to get to this value:

```js
cacheExchange({
  resolvers: {
    Todo: {
      createdAt: (parent, _args, cache) =>
        cache.resolve(cache.resolve(parent, 'author') /* "Author:1" */, 'createdAt'),
    },
  },
});
```

The return value of `cache.resolve` changes depending on what data the cache has stored. While it may return records for fields without selection sets, in other cases it may give you the key of other entities ("links") instead. It can even give you arrays of keys or records when the field's value contains a list. When a value is not present in the cache, `cache.resolve` will instead return `undefined` to signal that a value is uncached. Similarly, a resolver may return `undefined` to tell Graphcache that the field isn't cached and that a call to the API is necessary.

`cache.resolve` is a pretty flexible method that allows us to access arbitrary values from our cache; however, we have to be careful about what value will be resolved by it, since the cache can't know itself what type of value it may return.
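
To make this concrete, here's a small sketch of the kinds of values `cache.resolve` may hand back inside a resolver or updater. The keys and values shown are illustrative and depend entirely on what has actually been cached:

```js
// Inside a resolver or updater, i.e. wherever we receive `cache`:
const text = cache.resolve('Todo:1', 'text'); // a record, e.g. 'implement graphcache'
const author = cache.resolve('Todo:1', 'author'); // a link, e.g. 'Author:1'
const todos = cache.resolve('Query', 'todos'); // possibly a list of keys, e.g. ['Todo:1', 'Todo:2']
const missing = cache.resolve('Todo:1', 'subtasks'); // `undefined` when nothing is cached for the field
```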
The last trick this method allows us to apply is to access arbitrary fields on the root `Query` type. If we call `cache.resolve("Query", ...)` then we're also able to access arbitrary fields starting from the root `Query` of the cached data. (If you're using [Schema Awareness](./schema-awareness.md) the name `"Query"` may vary for you depending on your schema.) We're not constrained to accessing fields on the `parent` of a resolver but can also attempt to break out and access fields on any other entity we know of.

## Resolving Partial Data

Local resolvers also allow for more advanced use-cases when it comes to links and object types. Previously we've seen how a resolver is able to link up a given field to an entity, which causes this field to resolve to an entity directly instead of being checked against any cached links:

```js
cacheExchange({
  resolvers: {
    Query: {
      todo: (_, args) => ({ __typename: 'Todo', id: args.id }),
    },
  },
});
```

In this example, while `__typename` and `id` are required to make this entity keyable, we're also able to add more fields to this object to override values later in our selection. For instance, we can write a resolver that links `Query.todo` directly to our `Todo` entity but also overrides the `createdAt` field in the same resolver, if the entity is indeed accessed via the `Query.todo` field:

```js
cacheExchange({
  resolvers: {
    Query: {
      todo: (_, args) => ({
        __typename: 'Todo',
        id: args.id,
        createdAt: new Date().toString(),
      }),
    },
  },
});
```

Here we've replaced the `createdAt` value of the `Todo` when it's accessed via this manual resolver. If it was accessed someplace else, for instance via a `Query.todos` listing field, this override wouldn't apply. We can even apply overrides to nested fields, which helps us create complex resolvers for other use-cases like pagination. [Read more on the topic of "Pagination" in the section below.](#pagination)

## Computed Queries

We've now seen how the `cache` has several powerful methods, like [the `cache.resolve` method](../api/graphcache.md#resolve), which allow us to access any data in the cache while writing resolvers for individual fields. Additionally, the cache has more methods that allow us to access more data at a time, like `cache.readQuery` and `cache.readFragment`.

### Reading a query

At any point, the `cache` allows us to read entirely separate queries, which starts a separate virtual operation. When we call `cache.readQuery` with a query and variables we can execute an entirely new GraphQL query against our cached data:

```js
import { gql } from '@urql/core';
import { cacheExchange } from '@urql/exchange-graphcache';

// The query we want to read from the cached data
const Todos = gql`
  query ($from: Int!, $limit: Int!) {
    todos(from: $from, limit: $limit) {
      id
      text
    }
  }
`;

const cache = cacheExchange({
  updates: {
    Mutation: {
      addTodo: (result, args, cache) => {
        const data = cache.readQuery({ query: Todos, variables: { from: 0, limit: 10 } });
      },
    },
  },
});
```

This way we'll get the stored data for the `Todos` query for the given `variables`. [Read more about `cache.readQuery` in the Graphcache API docs.](../api/graphcache.md#readquery)

### Reading a fragment

The store also allows us to read a fragment for any given entity. The `cache.readFragment` method accepts a `fragment` and an `id`. This looks like the following.
```js
import { gql } from '@urql/core';
import { cacheExchange } from '@urql/exchange-graphcache';

const cache = cacheExchange({
  resolvers: {
    Query: {
      Todo: (parent, args, cache) => {
        return cache.readFragment(
          gql`
            fragment _ on Todo {
              id
              text
            }
          `,
          { id: 1 }
        );
      },
    },
  },
});
```

> **Note:** In the above example, we've used
> [the `gql` tag function](../api/core.md#gql) because `readFragment` only accepts
> GraphQL `DocumentNode`s as inputs, and not strings.

This way we'll read the entire fragment that we've passed for the `Todo` with the given key, in this case `{ id: 1 }`. [Read more about `cache.readFragment` in the Graphcache API docs.](../api/graphcache.md#readfragment)

### Cache methods outside of `resolvers`

The cache's read methods can't be used outside of GraphQL operations. This means they're limited to the various Graphcache configuration functions, like `resolvers`, `updates`, and `optimistic`.

## Living with limitations of Local Resolvers

Local resolvers are powerful tools with which we can tell Graphcache how to handle a certain field beyond the results it's seen in prior API responses. However, their limitations stem from this very purpose. Resolvers are meant to augment Graphcache and teach it what to do with some fields. Sometimes this is trivial and simple (like most examples on this page), but other times fields are incredibly complex to reproduce and hence resolvers become more complex.

This section is not exhaustive, but documents some of the more commonly asked-for features of resolvers. However, beyond the cases listed below, resolvers are limited and:

- can't manipulate or see other fields on the current entity, or fields above it.
- can't update the cache (they're only “computations” and don't change the cache)
- can't change the query document that's sent to the API

### Writing reusable resolvers

As we've seen before in the ["Transforming Records" section above](#transforming-records), we can write generic resolvers by using the fourth argument that resolvers receive, the `ResolveInfo` object. This `info` object gives our resolvers some context on where they’re being executed and information about the current field and its surroundings.

For instance, while Graphcache has a convenience helper to access the current record on the parent object for scalar values, it doesn't have one for links. Hence, if we're trying to read relationships we have to use `cache.resolve`.

```js
cacheExchange({
  resolvers: {
    Todo: {
      // This works:
      updatedAt: parent => parent.updatedAt,
      // This won't work:
      author: parent => parent.author,
    },
  },
});
```

The `info` object gives us a couple of ways of accessing the originally cached field's value:

```js
const resolver = (parent, args, cache, info) => {
  // This is the full version
  const fromInfo = cache.resolve(info.parentKey, info.fieldName, args);
  // But we can replace `info.parentKey` with `parent` as a shortcut
  const fromParent = cache.resolve(parent, info.fieldName, args);
  // And we can also avoid re-passing the arguments by using `fieldKey`
  const fromFieldKey = cache.resolve(parent, info.fieldKey);
};
```

Apart from telling us how to access the originally cached field value, we can also get more information from `info` about our field.
For instance, we can:

- Read the current field's name using `info.fieldName`
- Read the current field's key using `info.parentFieldKey`
- Read the current parent entity's key using `info.parentKey`
- Read the current parent entity's typename using `info.parentTypename`
- Access the current operation's raw variables using `info.variables`
- Access the current operation's raw fragments using `info.fragments`

### Causing cache misses and partial misses

When we write resolvers we provide Graphcache with a value for the current field, or rather with "behaviour", that it will execute no matter whether this field is also cached or not. This means that, unless our resolver returns `undefined`, Graphcache will consider the field a cache hit and will, unless other cache misses occur, not make a network request.

> **Note:** An exception to this is [Schema Awareness](./schema-awareness.md), which can
> automatically cause partial cache misses.

However, sometimes we may want a resolver to return a result, while still sending a GraphQL API request in the background to update our resolver's values. To achieve this we can set the `info.partial` field.

```js
cacheExchange({
  resolvers: {
    Todo: {
      author(parent, args, cache, info) {
        const author = cache.resolve(parent, info.fieldKey);
        if (author === null) {
          info.partial = true;
        }
        return author;
      },
    },
  },
});
```

Suppose we have a field that our GraphQL schema _sometimes_ returns a `null` value for, but that may be updated with a value in the future. In the above example, we wrote a resolver that sets `info.partial = true` if the field's value is `null`. This causes Graphcache to consider the result "partial and stale", and it will make a background request to the API while still delivering the outdated result.

### Conditionally applying resolvers

We may not always want a resolver to be used. While sometimes this can be dangerous (if your resolver affects the shape and types of your fields), in other cases this is necessary. For instance, if your resolver handles infinite-scroll pagination, like the examples [in the next section](#pagination), then you may not always want to apply this resolver. For this reason, Graphcache also supports [“local directives”, which are introduced on the next docs page.](./local-directives.md)

## Pagination

`Graphcache` offers some preset `resolvers` to help us out with endless scrolling pagination, also known as "infinite pagination". It comes with two more advanced but generalised resolvers that can be applied to two specific pagination use-cases. They're not meant to implement infinite pagination for _any app_; instead, they're useful when we'd like to add infinite pagination to an app quickly to try it out, or if we're unable to replace it with separate components per page in environments like React Native, where a `FlatList` would require a flat, infinite list of items.

> **Note:** If you don't need a flat array of results, you can also achieve infinite pagination
> with only UI code.

[You can find a code example of UI infinite pagination in our example folder.](https://github.com/urql-graphql/urql/tree/main/examples/with-pagination)
[You can find a code example of infinite pagination with Graphcache in our example folder.](https://github.com/urql-graphql/urql/tree/main/examples/with-graphcache-pagination)

Please keep in mind that this pattern has some limitations when you're handling cache updates.
Deleting old pages from the cache selectively may be difficult, so the UI pattern in the above note is preferred.

### Simple Pagination

Given a schema that uses some form of `offset` and `limit` based pagination, we can use the `simplePagination` helper exported from `@urql/exchange-graphcache/extras` to achieve an endless scroller. This helper will concatenate all queries performed into one long data structure.

```js
import { cacheExchange } from '@urql/exchange-graphcache';
import { simplePagination } from '@urql/exchange-graphcache/extras';

const cache = cacheExchange({
  resolvers: {
    Query: {
      todos: simplePagination(),
    },
  },
});
```

This form of pagination accepts an object as an argument. In it we can specify two options, `limitArgument` and `offsetArgument`, which default to `limit` and `skip` respectively. This way we can use the argument names that appear in our queries.

We may also add the `mergeMode` option, which defaults to `'after'` and can otherwise be set to `'before'`. This determines the order in which pages are merged when paginating. The default `'after'` mode assumes that pages that come in last should be merged _after_ the first pages. The `'before'` mode assumes that pages that come in last should be merged _before_ the first pages, which can be helpful in a reverse endless scroller (e.g. a chat app).

Example series of requests:

```
// An example where mergeMode: after works better
skip: 0, limit: 3 => 1, 2, 3
skip: 3, limit: 3 => 4, 5, 6

mergeMode: after => 1, 2, 3, 4, 5, 6 ✔️
mergeMode: before => 4, 5, 6, 1, 2, 3

// An example where mergeMode: before works better
skip: 0, limit: 3 => 4, 5, 6
skip: 3, limit: 3 => 1, 2, 3

mergeMode: after => 4, 5, 6, 1, 2, 3
mergeMode: before => 1, 2, 3, 4, 5, 6 ✔️
```

### Relay Pagination

Given we have a [relay-compatible schema](https://facebook.github.io/relay/graphql/connections.htm) on our backend, we can enable endless data resolving. This means that when we fetch the next page, the data received in `useQuery` will also contain the previous pages, which is useful for endless scrolling. We can achieve this by importing `relayPagination` from `@urql/exchange-graphcache/extras`.

```js
import { cacheExchange } from '@urql/exchange-graphcache';
import { relayPagination } from '@urql/exchange-graphcache/extras';

const cache = cacheExchange({
  resolvers: {
    Query: {
      todos: relayPagination(),
    },
    // Or if the pagination happens in a nested field:
    User: {
      todos: relayPagination(),
    },
  },
});
```

`relayPagination` accepts an object of options. For now it offers a single option, `mergeMode`, which defaults to `'inwards'` and can otherwise be set to `'outwards'`. This determines how pages are merged when we paginate forwards and backwards at the same time. Outwards pagination assumes that pages that come in last should be merged before the first pages, so that the list grows outwards in both directions. The default inwards pagination assumes that the last pages are part of the same list and come after the first pages, so it merges pages such that they converge in the middle.

Example series of requests:

```
first: 1 => node 1, endCursor: a
first: 1, after: a => node 2, endCursor: b
...
last: 1 => node 99, startCursor: c
last: 1, before: c => node 89, startCursor: d
```

With inwards merging the nodes will be in this order: `[1, 2, ..., 89, 99]`
And with outwards merging: `[..., 89, 99, 1, 2, ...]`

The helper happily supports schemas that return nodes rather than individually-cursored edges.
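For instance, a query against such a resolver could select `nodes` directly rather than `edges`. The following is only a sketch; the `todos` connection field and its shape are assumptions about your schema:

```js
import { gql } from '@urql/core';

// A hypothetical relay-style connection query selecting `nodes` directly.
// With `relayPagination()` applied to `Query.todos`, re-running this query
// with a new `after` cursor returns the previous pages merged together.
const TodosQuery = gql`
  query Todos($first: Int!, $after: String) {
    todos(first: $first, after: $after) {
      nodes {
        id
        title
      }
      pageInfo {
        hasNextPage
        endCursor
      }
    }
  }
`;
```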
For each paginated type, we must either always request nodes or always request edges; otherwise the lists cannot be stitched together.

### Reading on

[On the next page we'll learn about "Cache Directives".](./local-directives.md)

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/graphcache/normalized-caching.md
# Path: docs/graphcache/normalized-caching.md

---
title: Normalized Caching
order: 1
---

# Normalized Caching

In GraphQL, as its name suggests, we create schemas that express the relational nature of our data. When we query against a `Query` type we walk a graph that starts at the root `Query` type and moves through relational types. Rather than querying for normalized data, in GraphQL our queries request a specific shape of denormalized data, a view into our relational data that can be re-normalized automatically.

As the GraphQL API walks our query documents it may read from a relational database, and _entities_ and scalar values are copied into a JSON document that matches our query document. The type information of our entities isn't lost, however. A query document may still ask the GraphQL API what entity it's dealing with using the `__typename` field, which dynamically introspects an entity's type.

This means that GraphQL clients can automatically re-normalize data as results come back from the API by using the `__typename` field and keyable fields like an `id` or `_id` field, which are already common conventions in GraphQL schemas. In other words, normalized caches can build up a relational database of tables in-memory for our application.

For our apps, normalized caches enable more sophisticated use-cases, where different API requests update data in other parts of the app and automatically update data in our cache as we query our GraphQL API. Normalized caches can essentially keep the UI of our applications up-to-date when relational data is detected across multiple queries, mutations, or subscriptions.

## Normalizing Relational Data

As previously mentioned, a GraphQL schema creates a tree of types where our application's data always starts from the `Query` root type and is modified by other data that's incoming from either a selection on `Mutation` or `Subscription`. All data that we query from the `Query` type will contain relations between "entities", hierarchical JSON objects. A normalized cache seeks to turn this denormalized JSON blob back into a relational data structure, which stores all entities by a key that can be looked up directly.

Since GraphQL documents give the API a strict specification on how it traverses a schema, the JSON data that the cache receives from the API will always match the GraphQL query document that has been used to query this data. A common misconception is that normalized caches in GraphQL somehow store data by the query document; however, the only thing a normalized cache cares about is that it can use our GraphQL query documents to walk the structure of the JSON data it received from the API.

```graphql
{
  __typename
  todo(id: 1) {
    __typename
    id
    title
    author {
      __typename
      id
      name
    }
  }
}
```

```json
{
  "__typename": "Query",
  "todo": {
    "__typename": "Todo",
    "id": 1,
    "title": "implement graphcache",
    "author": {
      "__typename": "Author",
      "id": 1,
      "name": "urql-team"
    }
  }
}
```

Above, we see an example of a GraphQL query document and a corresponding JSON result from a GraphQL API. In GraphQL, we never lose access to the underlying types of the data.
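To make this concrete, a normalized cache can derive a stable key from just these two pieces of information. The following is a simplified sketch of the convention described below, not Graphcache's actual implementation:

```js
// Simplified sketch: derive a cache key from `__typename` plus an `id`/`_id`
// field, producing keys such as "Todo:1" or "Author:1".
const keyOfEntity = entity => {
  if (!entity || !entity.__typename) return null;
  const id = entity.id != null ? entity.id : entity._id;
  return id != null ? `${entity.__typename}:${id}` : null;
};

keyOfEntity({ __typename: 'Todo', id: 1 }); // 'Todo:1'
```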
Normalized caches can ask for the `__typename` field in selection sets automatically and will find out which type a JSON object corresponds to. Generally, a normalized cache must do one of two things with a query document like the above:

- It must be able to walk the query document and the JSON data of the result and cache the data, normalizing it in the process and storing it in relational tables.
- It must later be able to walk the query document and recreate this JSON data just by reading entries from its in-memory relational tables.

While the normalized cache can't know the exact type of each field, thanks to the GraphQL query language it can make a couple of assumptions. The normalized cache can walk the query document: each field that has no selection set (like `title` in the above example) must be a "record", a field that may only be set to a scalar. Each field that does have a selection set must be another "entity" or a list of "entities". The latter fields with selection sets are our relations between entities, like foreign keys in relational databases. Furthermore, the normalized cache can then read the `__typename` field on related entities. This is called _Type Name Introspection_ and is how it finds out about the types of each entity.

From the above document we can assume the following relations:

- `Query.todo(id: 1)` → `Todo`
- `Todo.author` → `Author`

However, this isn't quite enough yet to store the relations from GraphQL results. The normalized cache must also generate primary keys for each entity so that it can store them in table-like data structures. This is for instance why [Relay enforces](https://relay.dev/docs/guides/graphql-server-specification/#object-identification) that each entity must have an `id` field, which allows it to assume that there's an obvious primary key for each entity it may query. Instead, `urql`'s Graphcache and Apollo assume that there _may_ be an `id` or `_id` field in a given selection set. If Graphcache can't find these two fields it'll issue a warning; however, a custom `keys` configuration may be used to generate custom keys for a given type.

With this logic the normalized cache will actually create the following "links" between its relational data:

- `"Query"`, `.todo(id: 1)` → `"Todo:1"`
- `"Todo:1"`, `.author` → `"Author:1"`

As we can see, the `Query` root type itself has a constant key of `"Query"`. All relational data originates here, since the GraphQL schema is a graph and, like a tree, all selections on a GraphQL query document originate from it. Internally, the normalized cache now stores field values on entities by their primary keys. The above links can also be described as:

- The `Query` entity's `todo` field with `{"id": 1}` arguments points to the `Todo:1` entity.
- The `Todo:1` entity's `author` field points to the `Author:1` entity.

In Graphcache, these "links" are stored in a nested structure per-entity. "Records" are kept separate from this relational data.

![Normalization is based on types, keys, and relations. This information can all be inferred from the query document.](../assets/query-document-info.png)

## Storing Normalized Data

At its core, normalizing data means that we take individual fields and store them in a table. In our case we store every field's value in a dictionary keyed by the entity's primary key, generated from an ID (or other keyable field) and the type name, together with the field's name and arguments, if it has any.
| Primary Key | Field | Value | | ---------------------- | ----------------------------------------------- | ------------------------ | | Type name and ID (Key) | Field name (not alias) and optionally arguments | Scalar value or relation | To reiterate we have three pieces of information that are stored in tables: - The entity's key can be derived from its type name via the `__typename` field and a keyable field. By default _Graphcache_ will check the `id` and `_id` fields, however this is configurable. - The field's name (like `todo`) and optional arguments. If the field has any arguments then we can normalize it by JSON stringifying the arguments, making sure that the JSON key is stable by sorting its keys. - Lastly, we may store relations as either `null`, a primary key that refers to another entity, or a list of such. For storing "records" we can store the scalars in a separate table. In _Graphcache_ the data structure for these tables looks a little like the following, where each entity has a record from fields to other entity keys: ```js { links: Map { 'Query': Record { 'todo({"id":1})': 'Todo:1' }, 'Todo:1': Record { 'author': 'Author:1' }, 'Author:1': Record { }, } } ``` We can see how the normalized cache is now able to traverse a GraphQL query by starting on the `Query` entity and retrieve relations for other fields. To retrieve "records" which are all fields with scalar values and no selection sets, _Graphcache_ keeps a second table around with an identical structure. This table only contains scalar values, which keeps our non-relational data away from our "links": ```js { records: Map { 'Query': Record { '__typename': 'Query' }, 'Todo:1': Record { '__typename': 'Todo', 'id': 1, 'title': 'implement graphcache' }, 'Author:1': Record { '__typename': 'Author', 'id': 1, 'name': 'urql-team' }, } } ``` This is very similar to how we'd go about creating a state management store manually, except that _Graphcache_ can use the GraphQL document to perform this normalization automatically. What we gain from this normalization is that we have a data structure that we can both read from and write to, to reproduce the API results for GraphQL query documents. Any mutation or subscription can also be written to this data structure. Once _Graphcache_ finds a keyable entity in their results it's written to its relational table which may update other queries in our application. Similarly queries may share data between one another which means that they effectively share entities using this approach and can update one another. In other words, once we have a primary key like `"Todo:1"` we may find this primary key again in other entities in other GraphQL results. ## Custom Keys and Non-Keyable Entities In the above introduction we've learned that while _Graphcache_ doesn't enforce `id` fields on each entity, it checks for the `id` and `_id` fields by default. There are many situations in which entities may either not have a key field or have different keys. As _Graphcache_ traverses JSON data and a GraphQL query document to write data to the cache you may see a warning from it along the lines of ["Invalid key: [...] No key could be generated for the data at this field."](./errors.md/#15-invalid-key) _Graphcache_ has many warnings like these that attempt to detect undesirable behaviour and helps us to update our configuration or queries accordingly. In the simplest cases, we may simply have forgotten to add the `id` field to the selection set of our GraphQL query document. 
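For example, a selection like the following (purely illustrative) would trigger that warning, because no keyable field is selected for the returned `Todo`:

```js
import { gql } from '@urql/core';

// No `id` (or `_id`) is selected on `todo`, so Graphcache can't generate a key
// for the entity and will warn about it; adding `id` to the selection fixes it.
const TodoQuery = gql`
  {
    todo(id: 1) {
      title
    }
  }
`;
```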
However, what if the field is instead called `uuid` and our query looks accordingly different?

```graphql
{
  item {
    uuid
  }
}
```

In the above selection set we have an `item` field whose result has a `uuid` field rather than an `id` field. This means that _Graphcache_ won't automatically be able to generate a primary key for this entity. Instead, we have to help it generate a key by passing it a custom `keys` config:

```js
cacheExchange({
  keys: {
    Item: data => data.uuid,
  },
});
```

We may add a function as an entry to the `keys` configuration. The property here, `"Item"`, must be the typename of the entity for which we're generating a key. The function may return an arbitrarily generated key. So for our `item` field, which in our example schema gives us an `Item` entity, we can create a `keys` configuration entry that creates a key from the `uuid` field rather than the `id` field.

This also raises a question: **what does _Graphcache_ do with unkeyable data by default? And what if my data has no key?**
This special case is what we call "embedded data". Not all types in a GraphQL schema will have keyable fields, and some types may just abstract data without themselves being relational. They may be "edges", entities with a field pointing to other entities that simply connect two entities, or data types like a `GeoJson` or `Image` type.

In these cases, where the normalized cache encounters unkeyable types, it will create an embedded key by combining the parent's primary key with the field key. This means that "embedded entities" are only reachable from a specific field on their parent entities. They're globally unique and aren't, strictly speaking, relational data.

```graphql
{
  __typename
  todo(id: 1) {
    id
    image {
      url
      width
      height
    }
  }
}
```

In the above example we're querying an `Image` type on a `Todo`. This imaginary `Image` type has no key because the image is embedded data and will only ever be associated with this `Todo`. In other words, the API's schema doesn't consider it necessary to have a primary key field for this type. Maybe it doesn't even have an ID in our backend's database. We _could_ assign this type an imaginary key (maybe based on the `url`), but if it's not shared data it wouldn't make much sense to do so.

When _Graphcache_ attempts to store this entity it will issue the previously mentioned warning. Internally, it'll then generate an embedded key for this entity based on the parent entity. If the parent entity's key is `Todo:1` then the embedded key for our `Image` will become `Todo:1.image`. This is also how this entity will be stored internally by _Graphcache_:

```js
{
  records: Map {
    'Todo:1.image': Record {
      '__typename': 'Image',
      'url': '...',
      'width': 1024,
      'height': 768
    },
  }
}
```

This doesn't, however, mute the warning that _Graphcache_ outputs, since it believes we may have made a mistake. The warning itself gives us advice on how to mute it:

> If this is intentional, create a keys config for `Image` that always returns null.

Meaning, we can add an entry to our `keys` config for our non-keyable type that explicitly returns `null`, which tells _Graphcache_ that the entity has no key:

```js
cacheExchange({
  keys: {
    Image: () => null,
  },
});
```

### Flexible Key Generation

In some cases, you may want to create a pattern for your key generation. For instance, you may want to say "create a special key for every type ending in `'Node'`". In such a case we recommend creating a small JS `Proxy` that takes care of key generation for you and makes the `keys` option dynamic.

```js
cacheExchange({
  keys: new Proxy(
    {
      Image: () => null,
    },
    {
      get(target, prop, receiver) {
        if (prop.endsWith('Node')) {
          return data => data.uid;
        }

        const fallback = data => data.uuid;
        return target[prop] || fallback;
      },
    }
  ),
});
```

In the above example, we dynamically change the key generator depending on the typename. When a typename ends in `'Node'`, we return a key generator that uses the `uid` field. We still fall back to an object of manual key generation functions, however. Lastly, when a type doesn't have a predefined key generator, we change the default behavior from using `id` and `_id` fields to using `uuid` fields.

## Non-Automatic Relations and Updates

While _Graphcache_ is able to store and update our entities in an in-memory relational data structure, which keeps the same entities in singular, unique locations, a GraphQL API may make a lot of implicit changes to the relations of data as it runs, or have trivial relations that our cache doesn't need to see to resolve.
Like with the `keys` config, we have two more configuration options to combat this: `resolvers` and `updates`. ### Manually resolving entities Some fields in our configuration can be resolved without checking the GraphQL API for relations. The `resolvers` config allows us to create a list of client-side resolvers where we can read from the cache directly as _Graphcache_ creates a local GraphQL result from its cached data. ```graphql { todo(id: 1) { id } } ``` Previously we've looked at the above query to illustrate how data from a GraphQL API may be written to _Graphcache_'s relational data structure to store the links and entities in a result against this GraphQL query document. However, it may be possible for another query to have already written this `Todo` entity to the cache. So, **how do we resolve a relation manually?** In such a case, _Graphcache_ may have seen and stored the `Todo` entity but isn't aware of the relation between `Query.todo({"id":1})` and the `Todo:1` entity. However, we can tell _Graphcache_ which entity it should look for when it accesses the `Query.todo` field by creating a resolver for it: ```js cacheExchange({ resolvers: { Query: { todo(parent, args, cache, info) { return { __typename: 'Todo', id: args.id }; }, }, }, }); ``` A resolver is a function that's similar to [GraphQL.js' resolvers on the server-side](https://www.graphql-tools.com/docs/resolvers/). They receive the parent data, the field's arguments, access to _Graphcache_'s cached data, and an `info` object. [The entire function signature and more explanations can be found in the API docs.](../api/graphcache.md#resolvers-option) Since it can access the field's arguments from the GraphQL query document, we can return a partial `Todo` entity. As long as this object is keyable, it will tell _Graphcache_ what the key of the returned entity is. In other words, we've told it how to get to a `Todo` from the `Query.todo` field. This mechanism is immensely more powerful than this example. We have other use-cases that resolvers may be used for: - Resolvers can be applied to fields with records, which means that it can be used to change or transform scalar values. For instance, we can update a string or parse a `Date` right inside a resolver. - Resolvers can return deeply nested results, which will be layered on top of the in-memory relational cached data of _Graphcache_, which means that it can emulate infinite pagination and other complex behaviour. - Resolvers can change when a cache miss or hit occurs. Returning `null` means that a field’s value is literally `null`, which will not cause a cache miss, while returning `undefined` will mean a field’s value is uncached. - Resolvers can return either partial entities or keys, so we can chain `cache.resolve` calls to read fields from the cache, even when a field is pointing at another entity, since we can return keys to the other entity directly. [Read more about resolvers on the following page about "Local Resolvers".](./local-resolvers.md) ### Manual cache updates While `resolvers`, as shown above, operate while _Graphcache_ is reading from its in-memory cache, `updates` are a configuration option that operate while _Graphcache_ is writing to its cached data. Specifically, these functions can be used to add more updates onto what a `Mutation` or `Subscription` may automatically update. As stated before, a GraphQL schema's data may undergo a lot of implicit changes when we send it a `Mutation` or `Subscription`. 
A new item that we create may for instance manipulate a completely different item or even a list. Often mutations and subscriptions alter relations that their selection sets wouldn't necessarily see. Since mutations and subscriptions operate on a different root type, rather than the `Query` root type, we often need to update links in the rest of our data when a mutation is executed. ```graphql query TodosList { todos { id title } } mutation AddTodo($title: String!) { addTodo(title: $title) { id title } } ``` In a simple example, like the one above, we have a list of todos in a query and create a new todo using the `Mutation.addTodo` mutation field. When the mutation is executed and we get the result back, _Graphcache_ already writes the `Todo` item to its normalized cache. However, we also want to add the new `Todo` item to the list on `Query.todos`: ```js import { gql } from '@urql/core'; cacheExchange({ updates: { Mutation: { addTodo(result, args, cache, info) { const query = gql` { todos { id } } `; cache.updateQuery({ query }, data => { data.todos.push(result.addTodo); return data; }); }, }, }, }); ``` In this code example we can first see that the signature of the `updates` entry is very similar to the one of `resolvers`. However, we're seeing the `cache` in use for the first time. The `cache` object (as [documented in the API docs](../api/graphcache.md#cache)) gives us access to _Graphcache_'s mechanisms directly. Not only can we resolve data using it, we can directly start sub-queries or sub-writes manually. These are full normalized cache runs inside other runs. In this case we're calling `cache.updateQuery` on a list of `Todo` items while the `Mutation` that added the `Todo` is already being written to the cache. As we can see, we may perform manual changes inside of `updates` functions, which can be used to affect other parts of the cache (like `Query.todos` here) beyond the automatic updates that a normalized cache is expected to perform. We get methods like `cache.updateQuery`, `cache.writeFragment`, and `cache.link` in our updater functions, which aren't available to us in local resolvers, and can only be used in these `updates` entries to change the data that the cache holds. [Read more about writing cache updates on the "Cache Updates" page.](./cache-updates.md) ## Deterministic Cache Updates Above, in [the "Storing Normalized Data" section](#storing-normalized-data), we've talked about how Graphcache is able to store normalized data. However, apart from storing this data there are a couple of caveats that many applications simply ignore, skip, or simplify when they implement a store to cache their data in. Amongst features like [Optimistic Updates](./cache-updates.md#optimistic-updates) and [Offline Support](./offline.md), Graphcache supports several features that allow our API results to be more unreliable. Essentially we don't expect API results to always come back in order or on time. However, we expect Graphcache to prevent us from making "indeterministic cache updates", meaning that we expect it to handle API results that come back in a random order and delayed gracefully. 
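For reference, an optimistic update is configured via the `optimistic` option. The following is only a minimal sketch (the `favoriteTodo` mutation field and its shape are hypothetical) of the kind of temporary result the mechanisms below reason about:

```js
import { cacheExchange } from '@urql/exchange-graphcache';

cacheExchange({
  optimistic: {
    // Hypothetical mutation field: the returned object is applied to the cache
    // immediately, on a temporary layer, until the real API result arrives.
    favoriteTodo: (variables, cache, info) => ({
      __typename: 'Todo',
      id: variables.id,
      favorite: true,
    }),
  },
});
```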
In terms of the ["Manual Cache Updates"](#manual-cache-updates) that we've talked about above and [Optimistic Updates](./cache-updates.md#optimistic-updates) the limitations are pretty simple at first and if we use Graphcache as usual we may not even notice them: - When we make an _optimistic_ change, we define what a mutation's result may look like once the API responds in the future and apply this temporary result immediately. We store this temporary data in a separate "layer". Once the real result comes back this layer can be deleted and the real API result can be applied as usual. - When multiple _optimistic updates_ are made at the same time, we never allow these layers to be deleted separately. Instead Graphcache waits for all mutations to complete before deleting the optimistic layers and applying the real API result. This means that a mutation update cannot accidentally commit optimistic data to the cache permanently. - While an _optimistic update_ has been applied, Graphcache stops refetching any queries that contain this optimistic data so that it doesn't "flip back" to its non-optimistic state without the optimistic update being applied. Otherwise we'd see a "flicker" in the UI. These three principles are the basic mechanisms we can expect from Graphcache. The summary is: **Graphcache groups optimistic mutations and pauses queries so that optimistic updates look as expected,** which is an implementation detail we can mostly ignore when using it. However, one implementation detail we cannot ignore is the last mechanism in Graphcache which is called **"Commutativity"**. As we can tell, "optimistic updates" need to store their normalized results on a separate layer. This means that the previous data structure we've seen in Graphcache is actually more like a list, with many tables of links and entities. Each layer may contain optimistic results and have an order of preference. However, this order also applies to queries. Since queries are run in one order but their API results can come back to us in a very different order, if we access enough pages in a random order things can sometimes look rather weird. We may see that in an application on a slow network connection the results may vary depending on when their results came back. ![Commutativity means that we store data in separate layers.](../assets/commutative-layers.png) Instead, Graphcache actually uses layers for any API result it receives. In case, an API result arrives out-of-order, it sorts them by precedence — or rather by when they've been requested. Overall, we don't have to worry about this, but Graphcache has mechanisms that keep our updates safe. ## Reading on This concludes the introduction to Graphcache with a short overview of how it works, what it supports, and some hidden mechanisms and internals. Next we may want to learn more about how to use it and more of its features: - [How do we write "Local Resolvers"?](./local-resolvers.md) - [How to set up "Cache Updates" and "Optimistic Updates"?](./cache-updates.md) - [What is Graphcache's "Schema Awareness" feature for?](./schema-awareness.md) - [How do I enable "Offline Support"?](./offline.md) --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/graphcache/offline.md # Path: docs/graphcache/offline.md --- title: Offline Support order: 7 --- # Offline Support _Graphcache_ allows you to build an offline-first app with built-in offline and persistence support, by adding a `storage` interface. 
In combination with its [Schema Awareness](./schema-awareness.md) support and [Optimistic Updates](./cache-updates.md#optimistic-updates) this can be used to build an application that serves cached data entirely from memory when a user's device is offline and still display optimistically executed mutations. ## Setup Everything that's needed to set up offline-support is already packaged in the `@urql/exchange-graphcache` package. We initially recommend setting up the [Schema Awareness](./schema-awareness.md). This adds our server-side schema information to the cache, which allows it to make decisions on what partial data complies with the schema. This is useful since the offline cache may often be lacking some data but may then be used to display the partial data we do have, as long as missing data is actually marked as optional in the schema. Furthermore, if we have any mutations that the user doesn't interact with after triggering them (for instance, "liking a post"), we can set up [Optimistic Updates](./cache-updates.md#optimistic-updates) for these mutations, which allows them to be reflected in our UI before sending a request to the API. To actually now set up offline support, we'll swap out the `cacheExchange` with the `offlineExchange` that's also exported by `@urql/exchange-graphcache`. ```js import { Client, fetchExchange } from 'urql'; import { offlineExchange } from '@urql/exchange-graphcache'; const cache = offlineExchange({ schema, updates: { /* ... */ }, optimistic: { /* ... */ }, }); const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cache, fetchExchange], }); ``` This activates offline support, however we'll also need to provide the `storage` option to the `offlineExchange`. The `storage` is an adapter that contains methods for storing cache data in a persisted storage interface on the user's device. By default, we can use the default storage option that `@urql/exchange-graphcache` comes with. This default storage uses [IndexedDB](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API) to persist the cache's data. We can use this default storage by importing the `makeDefaultStorage` function from `@urql/exchange-graphcache/default-storage`. ```js import { Client, fetchExchange } from 'urql'; import { offlineExchange } from '@urql/exchange-graphcache'; import { makeDefaultStorage } from '@urql/exchange-graphcache/default-storage'; const storage = makeDefaultStorage({ idbName: 'graphcache-v3', // The name of the IndexedDB database maxAge: 7, // The maximum age of the persisted data in days }); const cache = offlineExchange({ schema, storage, updates: { /* ... */ }, optimistic: { /* ... */ }, }); const client = new Client({ url: 'http://localhost:3000/graphql', exchanges: [cache, fetchExchange], }); ``` ## React Native For React Native, we can use the async storage package `@urql/storage-rn`. Before installing the [library](https://github.com/urql-graphql/urql/tree/main/packages/storage-rn), ensure you have installed the necessary peer dependencies: - NetInfo ([RN](https://github.com/react-native-netinfo/react-native-netinfo) | [Expo](https://docs.expo.dev/versions/latest/sdk/netinfo/)) and - AsyncStorage ([RN](https://react-native-async-storage.github.io/async-storage/docs/install) | [Expo](https://docs.expo.dev/versions/v42.0.0/sdk/async-storage/)). 
```sh
yarn add @urql/storage-rn
# or
npm install --save @urql/storage-rn
```

You can then create the custom storage and use it in the offline exchange:

```js
import { makeAsyncStorage } from '@urql/storage-rn';

const storage = makeAsyncStorage({
  dataKey: 'graphcache-data', // The AsyncStorage key used for the data (defaults to graphcache-data)
  metadataKey: 'graphcache-metadata', // The AsyncStorage key used for the metadata (defaults to graphcache-metadata)
  maxAge: 7, // How long to persist the data in storage (defaults to 7 days)
});
```

## Offline Behavior

_Graphcache_ applies several mechanisms that improve the consistency of the cache and how it behaves when it's used in highly cache-dependent scenarios, including when it's used with its offline support. We've previously read about some of these guarantees on the ["Normalized Caching" page.](./normalized-caching.md)

While the client is offline, _Graphcache_ will also apply some opinionated mechanisms to queries and mutations.

When a query fails with a network error, which indicates that the client is offline, the `offlineExchange` won't deliver the error for this query, to avoid it being surfaced to the user. This works particularly well in combination with ["Schema Awareness"](./schema-awareness.md), which will deliver as much of a partial query result as possible. In combination with the [`cache-and-network` request policy](../basics/document-caching.md#request-policies) we can now ensure that we display as much data as possible when the user is offline, while still keeping the cache up-to-date when the user is online.

A similar mechanism is applied to optimistic mutations when the user is offline. Normal non-optimistic mutations are executed as usual and may fail with a network error. Optimistic mutations however will be queued up and may be retried when the app is restarted or when the user comes back online.

If we wish to customize when an operation result from the API is deemed to have failed because the device is offline, we can pass a custom `isOfflineError` function to the `offlineExchange`, like so:

```js
const cache = offlineExchange({
  isOfflineError(error, _result) {
    return !!error.networkError;
  },
  // ...
});
```

However, this is optional, and the default function checks for common offline error messages and checks `navigator.onLine` for you.

## Custom Storages

In the [Setup section](#setup) we've learned how to use the default storage engine to store persisted cache data in IndexedDB. You can also write custom storage engines, if the default one doesn't align with your expectations or requirements. One limitation of our default storage engine, for instance, is that data is stored with a maximum age, which prevents the database from becoming too full; a custom storage engine may have different strategies for dealing with this.

[The API docs list the entire interface for the `storage` option.](../api/graphcache.md#storage-option) There we can see the methods we need to implement for a custom storage engine.

Following is an example of the simplest possible storage engine, which uses the browser's [Local Storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage). Initially we'll implement the basic persistence methods, `readData` and `writeData`.
```js const makeLocalStorage = () => { const cache = {}; return { writeData(delta) { return Promise.resolve().then(() => { Object.assign(cache, delta); localStorage.setItem('data', JSON.stringify(cache)); }); }, readData() { return Promise.resolve().then(() => { const local = localStorage.getItem('data') || null; Object.assign(cache, JSON.parse(local)); return cache; }); }, }; }; ``` As we can see, the `writeData` method only sends us "deltas", partial objects that only describe updated cache data rather than all cache data. The implementation of `writeMetadata` and `readMetadata` will however be even simpler, since it always sends us complete data. ```js const makeLocalStorage = () => { return { /* ... */ writeMetadata(data) { localStorage.setItem('metadata', JSON.stringify(data)); }, readMetadata() { return Promise.resolve().then(() => { const metadataJson = localStorage.getItem('metadata') || null; return JSON.parse(metadataJson); }); }, }; }; ``` Lastly, the `onOnline` method will likely always look the same, as long as your `storage` is intended to work for browsers only: ```js const makeLocalStorage = () => { return { /* ... */ onOnline(cb: () => void) { window.addEventListener('online', () => { cb(); }); }, }; }; ``` --- # URQL GraphQL Client Documentation # Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/graphcache/schema-awareness.md # Path: docs/graphcache/schema-awareness.md --- title: Schema Awareness order: 5 --- # Schema Awareness Previously, [on the "Normalized Caching" page](./normalized-caching.md) we've seen how Graphcache stores normalized data in its store and how it traverses GraphQL documents to do so. What we've seen is that just using the GraphQL document for traversal, and the `__typename` introspection field Graphcache is able to build a normalized caching structure that keeps our application up-to-date across API results, allows it to store data by entities and keys, and provides us configuration options to write [manual cache updates](./cache-updates.md) and [local resolvers](./local-resolvers.md). While this is all possible without any information about a GraphQL API's schema, the `schema` option on `cacheExchange` allows us to pass an introspected schema to Graphcache: ```js const introspectedSchema = { __schema: { queryType: { name: 'Query' }, mutationType: { name: 'Mutation' }, subscriptionType: { name: 'Subscription' }, }, }; cacheExchange({ schema: introspectedSchema }); ``` In GraphQL, [APIs allow for the entire schema to be "introspected"](https://graphql.org/learn/introspection/), which are special GraphQL queries that give us information on what the API supports. This information can either be retrieved from a GraphQL API directly or from the GraphQL.js Schema and contains a list of all types, the types' fields, scalars, and other information. In Graphcache we can pass this schema information to enable several features that aren't enabled if we don't pass any information to this option: - Fragments will be matched deterministically: A fragment can be written to be on an interface type or multiple fragments can be spread for separate union'ed types in a selection set. In many cases, if Graphcache doesn't have any schema information then it won't know what possible types a field can return and may sometimes make a guess and [issue a warning](./errors.md#16-heuristic-fragment-matching). If we pass Graphcache a `schema` then it'll be able to match fragments deterministically. 
- A schema may have non-default names for its root types; `Query`, `Mutation`, and `Subscription`. The names can be changed by passing `schema` information to `cacheExchange` which is important if the root type appears elsewhere in the schema, e.g. if the `Query` can be accessed on a `Mutation` field's result. - We may write a lot of configuration for our `cacheExchange` but if we pass a `schema` then it'll start checking whether any of the configuration options actually don't exist, maybe because we've typo'd them. This is a small detail but can make a large difference in a longer configuration. - Lastly; a schema contains information on **which fields are optional or required**. When Graphcache has a schema it knows optional fields that may be left out, and it'll be able to generate "partial results". ### Partial Results As we navigate an app that uses Graphcache we may be in states where some of our data is already cached while some aren't. Graphcache normalizes data and stores it in tables for links and records for each entity, which means that sometimes it can maybe even execute a query against its cache that it hasn't sent to the API before. [On the "Local Resolvers" page](./local-resolvers.md#resolving-entities) we've seen how to write resolvers that resolve entities without having to have seen a link from an API result before. If Graphcache uses these resolvers and previously cached data we often run into situations where a "partial result" could already be generated, which is what Graphcache does when it has `schema` information. ![A "partial result" is an incomplete result of information that Graphcache already had cached before it sent an API result.](../assets/partial-results.png) Without a `schema` and information on which fields are optional, Graphcache will consider a "partial result" as a cache miss. If we don't have all the information for a query then we can't execute it against the locally cached data after all. However, an API's schema contains information on which fields are required and optional, and if our apps are typed with this schema and TypeScript, can't we then use and handle these partial results before a request is sent to the API? This is the idea behind "Schema Awareness" and "Partial Results". When Graphcache has `schema` information it may give us partial results [with the `stale` flag set](../api/core.md#operationresult) while it fetches the full result from the API in the background. This allows our apps to show some information while more is loading. ## Getting your schema But how do you get an introspected `schema`? The process of introspecting a schema is running an introspection query on the GraphQL API, which will give us our `IntrospectionQuery` result. So an introspection is just another query we can run against our GraphQL APIs or schemas. As long as `introspection` is turned on and permitted, we can download an introspection schema by running a normal GraphQL query against the API and save the result in a JSON file. 
```js
import { getIntrospectionQuery } from 'graphql';
import fetch from 'node-fetch'; // or your preferred request in Node.js
import * as fs from 'fs';

fetch('http://localhost:3000/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    variables: {},
    query: getIntrospectionQuery({ descriptions: false }),
  }),
})
  .then(result => result.json())
  .then(({ data }) => {
    fs.writeFile('./schema.json', JSON.stringify(data), err => {
      if (err) {
        console.error('Writing failed:', err);
        return;
      }
      console.log('Schema written!');
    });
  });
```

Alternatively, if you're already using [GraphQL Code Generator](https://graphql-code-generator.com/) you can use [their `@graphql-codegen/introspection` plugin](https://graphql-code-generator.com/docs/plugins/introspection) to do the same automatically against a local schema. Furthermore, it's also possible to [`execute`](https://graphql.org/graphql-js/execution/#execute) the introspection query directly against your `GraphQLSchema`.

## Optimizing a schema

An `IntrospectionQuery` JSON blob from a GraphQL API can, without modification, become quite large. The shape of this data is `{ "__schema": ... }` and this _schema_ data will contain information on all directives, types, input objects, scalars, deprecation, enums, and more. This quickly adds up; one of the largest schemas, the GitHub GraphQL API's schema, has an introspection size of about 1.1MB, or about 50KB gzipped.

However, we can use the `@urql/introspection` package's `minifyIntrospectionQuery` helper to reduce the size of this introspection data. This helper strips out information on directives, scalars, input types, deprecation, enums, and redundant fields to only leave the information that _Graphcache_ actually requires. In the example of the GitHub GraphQL API this reduces the introspected data to around 20kB gzipped, which is much more acceptable.

### Installation & Setup

First, install the `@urql/introspection` package:

```sh
yarn add @urql/introspection
# or
npm install --save @urql/introspection
```

You'll then need to integrate it into your introspection script, or in another place where it can optimise the introspection data. For this example, we'll just add it to the fetching script from [above](#getting-your-schema).

```js
import { getIntrospectionQuery } from 'graphql';
import fetch from 'node-fetch'; // or your preferred request in Node.js
import * as fs from 'fs';

import { getIntrospectedSchema, minifyIntrospectionQuery } from '@urql/introspection';

fetch('http://localhost:3000/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    variables: {},
    query: getIntrospectionQuery({ descriptions: false }),
  }),
})
  .then(result => result.json())
  .then(({ data }) => {
    const minified = minifyIntrospectionQuery(getIntrospectedSchema(data));
    fs.writeFileSync('./schema.json', JSON.stringify(minified));
  });
```

The `getIntrospectedSchema` helper doesn't only accept `IntrospectionQuery` JSON data as input; it also allows you to pass a JSON string, a `GraphQLSchema`, or a GraphQL Schema SDL string. It's a convenience helper and not strictly needed in the above example.

## Integrating a schema

Once we have a schema saved to a JSON file, we can load it and pass it to the `cacheExchange`'s `schema` option:

```js
import schema from './schema.json';

const cache = cacheExchange({ schema });
```

It may be worth checking what your bundler or framework does when you import a JSON file.
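One option, for instance, is to make sure the minified schema reaches the client as a string and is parsed at runtime. This is only a sketch; the `./schema-string` module is a hypothetical build artifact that exports the introspection JSON as a plain string:

```js
import { cacheExchange } from '@urql/exchange-graphcache';

// Hypothetical module that exports the minified introspection result as a
// string, e.g. written by the build script above via JSON.stringify.
import schemaString from './schema-string';

const cache = cacheExchange({
  // For large payloads, parsing a string with JSON.parse is typically cheaper
  // than having the JS engine parse an equivalent inlined object literal.
  schema: JSON.parse(schemaString),
});
```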
Typically you can reduce the parsing time by making sure the schema is turned into a string and parsed using `JSON.parse`, as in the sketch above.

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/docs/showcase.md
# Path: docs/showcase.md

---
title: Showcase
order: 7
---

# Showcase

`urql` wouldn't be the same without our growing and loving community of users, maintainers, and supporters. This page is specifically dedicated to all of you!

## Used by folks at

TripAdvisor, GitHub, Egghead, Gatsby, The Atlantic, loveholidays, Swan, Open Social, Sturdy

## Articles & Tutorials

- [Egghead Course](https://egghead.io/lessons/graphql-set-up-an-urql-graphql-provider-in-react?pl=introduction-to-urql-a-react-graphql-client-faaa2bf5) by [Ian Jones](https://twitter.com/_jonesian).
- [How To GraphQL: React + urql](https://www.howtographql.com/react-urql/0-introduction/)

## Community packages

- [`reason-urql`](https://github.com/FormidableLabs/reason-urql): The official Reason bindings for `urql`.
- [`urql-persisted-queries`](https://github.com/Daniel15/urql-persisted-queries): Support for Apollo-style persisted queries.
- [`urql-computed-queries`](https://github.com/Drawbotics/urql-computed-exchange): An exchange to compute fields on-the-fly using the `@computed` directive.
- [`graphql-code-generator`](https://graphql-code-generator.com/docs/plugins/typescript-urql): A plugin that helps you make typesafe hooks/components with urql.
- [`urql-custom-scalars-exchange`](https://github.com/clentfort/urql-custom-scalars-exchange): An exchange to automatically convert scalars.
- [`@grafbase/urql-exchange`](https://github.com/grafbase/playground/tree/main/packages/grafbase-urql-exchange): An exchange for handling Server-Sent Events (SSE) with Grafbase GraphQL Live Queries.
- [`urql-rest-exchange`](https://github.com/iamsavani/urql-rest-exchange): A custom exchange for `urql` that supports GraphQL queries/mutations via REST endpoints.
- [`urql-exhaustive-additional-typenames-exchange`](https://github.com/route06/urql-exhaustive-additional-typenames-exchange): An exchange that adds all list fields of the operation to `additionalTypenames` to help document caching.

---

# URQL GraphQL Client Documentation
# Source: https://raw.githubusercontent.com/urql-graphql/urql/main/README.md
# Path: README.md
# urql

A highly customisable and versatile GraphQL client


## ✨ Features - 📦 **One package** to get a working GraphQL client in React, Preact, Vue, Solid and Svelte - ⚙️ Fully **customisable** behaviour [via "exchanges"](https://formidable.com/open-source/urql/docs/advanced/authoring-exchanges/) - 🗂 Logical but simple default behaviour and document caching - 🌱 Normalized caching via [`@urql/exchange-graphcache`](https://formidable.com/open-source/urql/docs/graphcache) - 🔬 Easy debugging with the [`urql` devtools browser extensions](https://formidable.com/open-source/urql/docs/advanced/debugging/) `urql` is a GraphQL client that exposes a set of helpers for several frameworks. It's built to be highly customisable and versatile so you can take it from getting started with your first GraphQL project all the way to building complex apps and experimenting with GraphQL clients. **📃 For more information, [check out the docs](https://formidable.com/open-source/urql/docs/).** ## 💙 [Sponsors](https://github.com/sponsors/urql-graphql)
- BigCommerce
- WunderGraph
- The Guild
- BeatGig
## 🙌 Contributing **The urql project was founded by [Formidable](https://formidable.com/) and is actively developed by the urql GraphQL team.** If you'd like to get involved, [check out our Contributor's guide.](https://github.com/urql-graphql/urql/blob/main/CONTRIBUTING.md) ## 📦 [Releases](https://github.com/urql-graphql/urql/releases) All new releases and updates are listed on GitHub with full changelogs. Each package in this repository further contains an independent `CHANGELOG.md` file with the historical changelog, for instance, [here’s `@urql/core`’s changelog](https://github.com/urql-graphql/urql/blob/main/packages/core/CHANGELOG.md). If you’re upgrading to v4, [check out our migration guide, posted as an issue.](https://github.com/urql-graphql/urql/issues/3114) New releases are prepared using [changesets](https://github.com/urql-graphql/urql/blob/main/CONTRIBUTING.md#how-do-i-document-a-change-for-the-changelog), which are changelog entries added to each PR, and we have “Version Packages” PRs that once merged will release new versions of `urql` packages. You can use `@canary` releases from `npm` if you’d like to get a preview of the merged changes. ## 📃 [Documentation](https://urql.dev/goto/docs) The documentation contains everything you need to know about `urql`, and contains several sections in order of importance when you first get started: - **[Basics](https://formidable.com/open-source/urql/docs/basics/)** — contains the ["Getting Started" guide](https://formidable.com/open-source/urql/docs/#where-to-start) and all you need to know when first using `urql`. - **[Architecture](https://formidable.com/open-source/urql/docs/architecture/)** — explains how `urql` functions and is built. - **[Advanced](https://formidable.com/open-source/urql/docs/advanced/)** — covers more uncommon use-cases and things you don't immediately need when getting started. - **[Graphcache](https://formidable.com/open-source/urql/docs/graphcache/)** — documents ["Normalized Caching" support](https://formidable.com/open-source/urql/docs/graphcache/normalized-caching/) which enables more complex apps and use-cases. - **[API](https://formidable.com/open-source/urql/docs/api/)** — the API documentation for each individual package. Furthermore, all APIs and packages are self-documented using TSDocs. If you’re using a language server for TypeScript, the documentation for each API should pop up in your editor when hovering `urql`’s code and APIs. _You can find the raw markdown files inside this repository's `docs` folder._