# Hypermode

---

# Source: https://docs.hypermode.com/dgraph/enterprise/access-control-lists.md

# Access Control Lists

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

This feature was introduced in [v1.1.0](https://github.com/hypermodeinc/dgraph/releases/tag/v1.1.0).

The `dgraph acl` command is deprecated and will be removed in a future release. ACL changes can be made by using the `/admin` GraphQL endpoint on any Alpha node.

Access Control Lists (ACL) provide access protection for your data stored in Dgraph. When the ACL feature is enabled, a client, e.g. [dgo](https://github.com/hypermodeinc/dgo) or [dgraph4j](https://github.com/hypermodeinc/dgraph4j), must authenticate with a username and password before executing any transactions, and is only allowed to access the data permitted by the ACL rules.

## Enable enterprise ACL feature

1. Generate a secret key that is 32 bytes (256 bits) long:

   ```sh
   tr -dc 'a-zA-Z0-9' < /dev/urandom | dd bs=1 count=32 of=enc_key_file
   ```

   On macOS you may have to use `LC_CTYPE=C; tr -dc 'a-zA-Z0-9' < /dev/urandom | dd bs=1 count=32 of=enc_key_file`.

2. To view the secret key value, use `cat enc_key_file`.

3. Create a plain text file named `hmac_secret_file`, and store the randomly generated secret key in it. The secret key is used by Dgraph Alpha nodes to sign JSON Web Tokens (JWT).

   ```sh
   echo '<secret-key>' > hmac_secret_file
   ```

4. Start all the Dgraph Alpha nodes in your cluster with the option `--acl secret-file="/path/to/secret"`, and make sure that they all use the same secret key file created in Step 3. Alternatively, you can [store the secret in HashiCorp Vault](#storing-acl-secret-in-hashicorp-vault).

   ```sh
   dgraph alpha --acl "secret-file=/path/to/secret" --security "whitelist=<permitted-ip-addresses>"
   ```

In addition to the command line flags `--acl secret-file="/path/to/secret"` and `--security "whitelist=<permitted-ip-addresses>"`, you can also configure Dgraph using a configuration file (`config.yaml`, `config.json`). You can also use environment variables such as `DGRAPH_ALPHA_ACL="secret-file=<path-to-secret>"` and `DGRAPH_ALPHA_SECURITY="whitelist=<permitted-ip-addresses>"`. See [Config](/dgraph/self-managed/config) for more information in general about configuring Dgraph.

### Example using Dgraph CLI

Here is an example that starts a Dgraph Zero node and a Dgraph Alpha node with the ACL feature turned on.
You can run these commands in separate terminal tabs:

```sh
## Create ACL secret key file with 32 ASCII characters
echo '<secret-key>' > hmac_secret_file

## Start Dgraph Zero in a different terminal tab or window
dgraph zero --my=localhost:5080 --replicas 1 --raft idx=1

## Start Dgraph Alpha in a different terminal tab or window
dgraph alpha --my=localhost:7080 --zero=localhost:5080 \
  --acl secret-file="./hmac_secret_file" \
  --security whitelist="10.0.0.0/8,172.0.0.0/8,192.168.0.0/16"
```

### Example using Docker Compose

If you are using [Docker Compose](https://docs.docker.com/compose/), you can set up a sample Dgraph cluster using this `docker-compose.yaml` configuration:

```yaml
version: "3.5"
services:
  alpha1:
    command: dgraph alpha --my=alpha1:7080 --zero=zero1:5080
    container_name: alpha1
    environment:
      DGRAPH_ALPHA_ACL: secret-file=/dgraph/acl/hmac_secret_file
      DGRAPH_ALPHA_SECURITY: whitelist=10.0.0.0/8,172.0.0.0/8,192.168.0.0/16
    image: dgraph/dgraph:latest
    ports:
      - "8080:8080"
    volumes:
      - ./hmac_secret_file:/dgraph/acl/hmac_secret_file
  zero1:
    command: dgraph zero --my=zero1:5080 --replicas 1 --raft idx=1
    container_name: zero1
    image: dgraph/dgraph:latest
```

You can run this with:

```sh
## Create ACL secret key file with 32 ASCII characters
echo '<secret-key>' > hmac_secret_file

## Start Docker Compose
docker-compose up
```

### Example using Kubernetes Helm Chart

If you deploy Dgraph on [Kubernetes](https://kubernetes.io/), you can configure the ACL feature using the [Dgraph Helm Chart](https://artifacthub.io/packages/helm/dgraph/dgraph). The first step is to encode the secret with base64:

```sh
## encode a secret without newline character and copy to the clipboard
printf '<secret-key>' | base64
```

Next, create a [Helm](https://helm.sh/) chart config values file, e.g. `dgraph_values.yaml`, and paste the base64-encoded secret into the `hmac_secret_file` value, like the example below:

```yaml
## dgraph_values.yaml
alpha:
  acl:
    enabled: true
    file:
      hmac_secret_file: <base64-encoded-secret>
  configFile:
    config.yaml: |
      acl:
        secret_file: /dgraph/acl/hmac_secret_file
      security:
        whitelist: 10.0.0.0/8,172.0.0.0/8,192.168.0.0/16
```

Now with the Helm chart config values created, we can deploy Dgraph:

```sh
helm repo add "dgraph" https://charts.dgraph.io
helm install "my-release" --values ./dgraph_values.yaml dgraph/dgraph
```
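With the cluster up (by any of the methods above), you can sanity-check that ACLs are enforced by logging in as the default `groot` user over the `/admin` endpoint. A minimal sketch with curl, assuming an Alpha reachable on `localhost:8080`:

```sh
## Log in as groot (default password: "password") and print the JWTs
curl --silent --request POST http://localhost:8080/admin \
  --header "Content-Type: application/graphql" \
  --data 'mutation { login(userId: "groot", password: "password") { response { accessJWT refreshJWT } } }'
```

A JSON payload containing `accessJWT` and `refreshJWT` confirms the ACL feature is active; see [Accessing secured Dgraph](#accessing-secured-dgraph) below for how to use these tokens.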
## Storing ACL secret in HashiCorp Vault

You can save the ACL secret on a [HashiCorp Vault](https://www.vaultproject.io/) server instead of saving the secret on the local file system.

### Configuring a HashiCorp Vault server

Do the following to set up the [HashiCorp Vault](https://www.vaultproject.io/) server for use with Dgraph:

1. Ensure that the Vault server is accessible from Dgraph Alpha and configured using URL `http://fqdn[ip]:port`.

2. Enable the [AppRole Auth method](https://www.vaultproject.io/docs/auth/approle) and enable the [KV Secrets Engine](https://www.vaultproject.io/docs/secrets/kv).

3. Save the 256-bit (32 ASCII characters) ACL secret in a KV Secret path ([K/V Version 1](https://www.vaultproject.io/docs/secrets/kv/kv-v1) or [K/V Version 2](https://www.vaultproject.io/docs/secrets/kv/kv-v2)). For example, you can upload the payload below to the KV Secrets Engine Version 2 path `secret/data/dgraph/alpha`:

   ```json
   {
     "options": { "cas": 0 },
     "data": { "hmac_secret_file": "<secret-key>" }
   }
   ```

4. Create or use a role with an attached policy that grants access to the secret. For example, the following policy would grant access to `secret/data/dgraph/alpha`:

   ```hcl
   path "secret/data/dgraph/*" {
     capabilities = [ "read", "update" ]
   }
   ```

5. Using the `role_id` generated from the previous step, create a corresponding `secret_id`, and copy the `role_id` and `secret_id` over to local files, like `./dgraph/vault/role_id` and `./dgraph/vault/secret_id`, that the Dgraph Alpha nodes will use.

The key format for the `acl-field` option can be defined using `acl-format` with the values `base64` (default) or `raw`.

### Example using Dgraph CLI with HashiCorp Vault configuration

Here is an example of using Dgraph with a Vault server that holds the secret key. Note that the `--vault` option value must be quoted so the shell doesn't interpret the semicolons as command separators:

```sh
## Start Dgraph Zero in a different terminal tab or window
dgraph zero --my=localhost:5080 --replicas 1 --raft "idx=1"

## Start Dgraph Alpha in a different terminal tab or window
dgraph alpha \
  --security whitelist="10.0.0.0/8,172.0.0.0/8,192.168.0.0/16" \
  --vault "addr=http://localhost:8200;acl-field=hmac_secret_file;acl-format=raw;path=secret/data/dgraph/alpha;role-id-file=./role_id;secret-id-file=./secret_id"
```

### Example using Docker Compose with HashiCorp Vault configuration

If you are using [Docker Compose](https://docs.docker.com/compose/), you can set up a sample Dgraph cluster using this `docker-compose.yaml` configuration:

```yaml
version: "3.5"
services:
  alpha1:
    command: dgraph alpha --my=alpha1:7080 --zero=zero1:5080
    container_name: alpha1
    environment:
      DGRAPH_ALPHA_VAULT: addr=http://vault:8200;acl-field=hmac_secret_file;acl-format=raw;path=secret/data/dgraph/alpha;role-id-file=/dgraph/vault/role_id;secret-id-file=/dgraph/vault/secret_id
      DGRAPH_ALPHA_SECURITY: whitelist=10.0.0.0/8,172.0.0.0/8,192.168.0.0/16
    image: dgraph/dgraph:latest
    ports:
      - "8080:8080"
    volumes:
      - ./role_id:/dgraph/vault/role_id
      - ./secret_id:/dgraph/vault/secret_id
  zero1:
    command: dgraph zero --my=zero1:5080 --replicas 1 --raft idx=1
    container_name: zero1
    image: dgraph/dgraph:latest
```

In this example, you also need to configure a [HashiCorp Vault](https://www.vaultproject.io/) service named `vault` in the above `docker-compose.yaml`, and then run through this sequence:

1. Launch the `vault` service: `docker-compose up --detach vault`
2. Unseal and configure `vault` with the required prerequisites (see [Configuring a HashiCorp Vault server](#configuring-a-hashicorp-vault-server)).
3. Save the role ID and secret ID as `./role_id` and `./secret_id`.
4. Launch Dgraph Zero and Alpha: `docker-compose up --detach`

### Example using Kubernetes Helm Chart with HashiCorp Vault configuration

If you deploy Dgraph on [Kubernetes](https://kubernetes.io/), you can configure the ACL feature using the [Dgraph Helm Chart](https://artifacthub.io/packages/helm/dgraph/dgraph). Create a [Helm](https://helm.sh/) chart config values file, such as `dgraph_values.yaml`:

```yaml
## dgraph_values.yaml
alpha:
  configFile:
    config.yaml: |
      vault:
        addr: http://vault-headless.default.svc.cluster.local:9200
        acl_field: hmac_secret_file
        acl_format: raw
        path: secret/data/dgraph/alpha
        role_id_file: /dgraph/vault/role_id
        secret_id_file: /dgraph/vault/secret_id
      security:
        whitelist: 10.0.0.0/8,172.0.0.0/8,192.168.0.0/16
```

To set up this chart, the [HashiCorp Vault](https://www.vaultproject.io/) service must be installed and available. You can use the [HashiCorp Vault Helm Chart](https://www.vaultproject.io/docs/platform/k8s/helm) and configure it to [auto unseal](https://learn.hashicorp.com/collections/vault/auto-unseal) so that the service is immediately available after deployment.
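Once the Vault service is unsealed and reachable, the prerequisites from [Configuring a HashiCorp Vault server](#configuring-a-hashicorp-vault-server) can be scripted. A rough sketch with the standard Vault CLI; the policy file name `dgraph_policy.hcl` and role name `dgraph` are illustrative assumptions:

```sh
## Enable AppRole auth (KV v2 is mounted at "secret/" by default in dev mode)
vault auth enable approle

## Store the 32-character ACL secret under the path used in the examples above
vault kv put secret/dgraph/alpha hmac_secret_file='<secret-key>'

## Attach a policy granting read access to the secret, then create a role
vault policy write dgraph dgraph_policy.hcl
vault write auth/approle/role/dgraph token_policies=dgraph

## Export the role_id and secret_id files that the Alpha nodes will read
vault read -field=role_id auth/approle/role/dgraph/role-id > ./role_id
vault write -f -field=secret_id auth/approle/role/dgraph/secret-id > ./secret_id
```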
## Accessing secured Dgraph

Before managing users and groups and configuring ACL rules, you need to log in to obtain a token, which is required to access Dgraph. You pass this token in the `X-Dgraph-AccessToken` header field.

### Logging in

To log in, send a POST request to `/admin` with the GraphQL mutation. For example, to log in as the root user `groot`:

```graphql
mutation {
  login(userId: "groot", password: "password") {
    response {
      accessJWT
      refreshJWT
    }
  }
}
```

Response:

```json
{
  "data": {
    "login": {
      "response": {
        "accessJWT": "<accessJWT>",
        "refreshJWT": "<refreshJWT>"
      }
    }
  }
}
```

#### Access Token

The response includes the access and refresh JWTs, which are used for the authentication itself and for refreshing the authentication token, respectively. Save the JWTs from the response for later HTTP requests.

You can run authenticated requests by passing the access JWT to a request via the `X-Dgraph-AccessToken` header. Add the header `X-Dgraph-AccessToken` with the `accessJWT` value from the login response in whichever GraphQL tool you're using to make the request. For example, if you were using the GraphQL Playground, you would add this in the headers section:

```json
{ "X-Dgraph-AccessToken": "<accessJWT>" }
```

And in the main code section, you can add a mutation, such as:

```graphql
mutation {
  addUser(input: [{ name: "alice", password: "whiterabbit" }]) {
    user {
      name
    }
  }
}
```

#### Refresh Token

The refresh token can be used in the `/admin` POST GraphQL mutation to receive new access and refresh JWTs, which is useful to renew the authenticated session once the ACL access TTL expires (controlled by Dgraph Alpha's `--acl_access_ttl` flag, which is set to `6h0m0s` by default).

```graphql
mutation {
  login(userId: "groot", password: "password", refreshToken: "<refreshJWT>") {
    response {
      accessJWT
      refreshJWT
    }
  }
}
```
### Login using a client

With ACL configured, you need to log in as a user to access data protected by ACL rules. You can do this using the client's `.login(USER_ID, USER_PASSWORD)` method. Here are some code samples using a client:

* **Go** ([dgo client](https://github.com/hypermodeinc/dgo)): example `acl_over_tls_test.go` ([here](https://github.com/hypermodeinc/dgraph/blob/main/tlstest/acl/acl_over_tls_test.go))
* **Java** ([dgraph4j](https://github.com/hypermodeinc/dgraph4j)): example `AclTest.java` ([here](https://github.com/hypermodeinc/dgraph4j/blob/main/src/test/java/io/dgraph/AclTest.java))

### Login using curl

If you are using `curl` from the command line, you can use the following with the above [login mutation](#logging-in) saved to `login.graphql`:

```sh
## Login and save results
JSON_RESULT=$(curl http://localhost:8080/admin --silent --request POST \
  --header "Content-Type: application/graphql" \
  --upload-file login.graphql
)

## Extracting a token using GNU grep, perl, the silver searcher, or jq
TOKEN=$(grep -oP '(?<=accessJWT":")[^"]*' <<< $JSON_RESULT)
TOKEN=$(perl -wln -e '/(?<=accessJWT":")[^"]*/ and print $&;' <<< $JSON_RESULT)
TOKEN=$(ag -o '(?<=accessJWT":")[^"]*' <<< $JSON_RESULT)
TOKEN=$(jq -r '.data.login.response.accessJWT' <<< $JSON_RESULT)

## Run a GraphQL query using the token
curl http://localhost:8080/admin --silent --request POST \
  --header "Content-Type: application/graphql" \
  --header "X-Dgraph-AccessToken: $TOKEN" \
  --upload-file some_other_query.graphql
```

Parsing JSON results on the command line can be challenging, so this snippet embeds several alternatives for extracting the desired data using popular tools, such as [the silver searcher](https://github.com/ggreer/the_silver_searcher) or the JSON query tool [jq](https://stedolan.github.io/jq).

## User and group administration

The default configuration comes with a user `groot`, whose password is `password`. The `groot` user is part of an administrative group called `guardians`, which has access to everything. You can add more users to the `guardians` group as needed.
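For example, to add `alice` to the `guardians` group, you can use the same `updateUser` mutation shown in [Assign a user to a group](#assign-a-user-to-a-group) below. A sketch with curl, reusing a guardian `$TOKEN` from the login step:

```sh
## Make alice a guardian (requires a guardian access token)
curl --silent --request POST http://localhost:8080/admin \
  --header "Content-Type: application/graphql" \
  --header "X-Dgraph-AccessToken: $TOKEN" \
  --data 'mutation { updateUser(input: { filter: { name: { eq: "alice" } } set: { groups: [{ name: "guardians" }] } }) { user { name groups { name } } } }'
```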
### Reset the root password

You can reset the root password like this example:

```graphql
mutation {
  updateUser(
    input: {
      filter: { name: { eq: "groot" } }
      set: { password: "$up3r$3cr3t1337p@$$w0rd" }
    }
  ) {
    user {
      name
    }
  }
}
```

### Create a regular user

To create a user `alice` with password `whiterabbit`, execute the following GraphQL mutation:

```graphql
mutation {
  addUser(input: [{ name: "alice", password: "whiterabbit" }]) {
    user {
      name
    }
  }
}
```

### Create a group

To create a group `dev`, execute:

```graphql
mutation {
  addGroup(input: [{ name: "dev" }]) {
    group {
      name
      users {
        name
      }
    }
  }
}
```

### Assign a user to a group

To assign the user `alice` to both the group `dev` and the group `sre`, the mutation should be:

```graphql
mutation {
  updateUser(
    input: {
      filter: { name: { eq: "alice" } }
      set: { groups: [{ name: "dev" }, { name: "sre" }] }
    }
  ) {
    user {
      name
      groups {
        name
      }
    }
  }
}
```

### Remove a user from a group

To remove `alice` from the `dev` group, the mutation should be:

```graphql
mutation {
  updateUser(
    input: {
      filter: { name: { eq: "alice" } }
      remove: { groups: [{ name: "dev" }] }
    }
  ) {
    user {
      name
      groups {
        name
      }
    }
  }
}
```

### Delete a User

To delete the user `alice`, execute:

```graphql
mutation {
  deleteUser(filter: { name: { eq: "alice" } }) {
    msg
    numUids
  }
}
```

### Delete a Group

To delete the group `sre`, the mutation should be:

```graphql
mutation {
  deleteGroup(filter: { name: { eq: "sre" } }) {
    msg
    numUids
  }
}
```

## ACL rules configuration

You can set up ACL rules using the Dgraph Ratel UI or by using a GraphQL tool, such as [Insomnia](https://insomnia.rest/), [GraphQL Playground](https://github.com/prisma/graphql-playground), [GraphiQL](https://github.com/skevy/graphiql-app), etc.

You can set the permissions on a predicate for the group using a pattern similar to the UNIX file permission conventions shown below:

| Permission                  | Value | Binary |
| --------------------------- | ----- | ------ |
| `READ`                      | `4`   | `100`  |
| `WRITE`                     | `2`   | `010`  |
| `MODIFY`                    | `1`   | `001`  |
| `READ` + `WRITE`            | `6`   | `110`  |
| `READ` + `WRITE` + `MODIFY` | `7`   | `111`  |

These permissions represent the following:

* `READ` - group has permission to read the predicate
* `WRITE` - group has permission to write or update the predicate
* `MODIFY` - group has permission to change the predicate's schema

The following examples grant full permissions on predicates to the group `dev`. If there are no rules for a predicate, the default behavior is to block all (`READ`, `WRITE` and `MODIFY`) operations.

### Assign predicate permissions to a group

Here we assign a permission rule for the `friend` predicate to the group:

```graphql
mutation {
  updateGroup(
    input: {
      filter: { name: { eq: "dev" } }
      set: { rules: [{ predicate: "friend", permission: 7 }] }
    }
  ) {
    group {
      name
      rules {
        permission
        predicate
      }
    }
  }
}
```

If you have [reverse edges](/dgraph/dql/schema#reverse-edges), they have to be given permission in the group as well:

```graphql
mutation {
  updateGroup(
    input: {
      filter: { name: { eq: "dev" } }
      set: { rules: [{ predicate: "~friend", permission: 7 }] }
    }
  ) {
    group {
      name
      rules {
        permission
        predicate
      }
    }
  }
}
```
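With rules in place, you can verify enforcement from a client. A sketch with curl, assuming an Alpha on `localhost:8080`, a recent Dgraph version that accepts `Content-Type: application/dql`, and `$TOKEN` holding the access JWT of a user in the `dev` group (a user outside the group would get no `friend` data back):

```sh
## Query the friend predicate as an ACL-authenticated user
curl --silent --request POST http://localhost:8080/query \
  --header "Content-Type: application/dql" \
  --header "X-Dgraph-AccessToken: $TOKEN" \
  --data '{ q(func: has(friend)) { uid friend { uid } } }'
```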
In some cases, it may be desirable to manage permissions for all the predicates together rather than individual ones. This can be achieved using the `dgraph.all` keyword. The following example provides `read+write` access to the `dev` group over all the predicates of a given namespace using the `dgraph.all` keyword.

```graphql
mutation {
  updateGroup(
    input: {
      filter: { name: { eq: "dev" } }
      set: { rules: [{ predicate: "dgraph.all", permission: 6 }] }
    }
  ) {
    group {
      name
      rules {
        permission
        predicate
      }
    }
  }
}
```

The permissions assigned to the group `dev` are the union of the permissions from `dgraph.all` and the permissions for a specific predicate `name`. So if the group is assigned `READ` permission for `dgraph.all` and `WRITE` permission for the predicate `name`, it has both `READ` and `WRITE` permissions for the `name` predicate, as a result of the union.

### Remove a rule from a group

To remove a rule or rules from the group `dev`, the mutation should be:

```graphql
mutation {
  updateGroup(
    input: {
      filter: { name: { eq: "dev" } }
      remove: { rules: ["friend", "~friend"] }
    }
  ) {
    group {
      name
      rules {
        predicate
        permission
      }
    }
  }
}
```

## Querying users and groups

You can query and get information for users and groups. The following sections show output for the user `alice` and the `dev` group, along with rules for the `friend` and `~friend` predicates.

### Query for users

Let's query for the user `alice`:

```graphql
query {
  queryUser(filter: { name: { eq: "alice" } }) {
    name
    groups {
      name
    }
  }
}
```

The output should show the groups that the user has been added to, e.g.

```json
{
  "data": {
    "queryUser": [
      {
        "name": "alice",
        "groups": [
          { "name": "dev" }
        ]
      }
    ]
  }
}
```

### Get user information

We can obtain information about a user with the following query:

```graphql
query {
  getUser(name: "alice") {
    name
    groups {
      name
    }
  }
}
```

The output should show the groups that the user has been added to, e.g.

```json
{
  "data": {
    "getUser": {
      "name": "alice",
      "groups": [
        { "name": "dev" }
      ]
    }
  }
}
```

### Query for groups

Let's query for the `dev` group:

```graphql
query {
  queryGroup(filter: { name: { eq: "dev" } }) {
    name
    users {
      name
    }
    rules {
      permission
      predicate
    }
  }
}
```

The output should include the users in the group as well as the group's ACL rules, e.g.

```json
{
  "data": {
    "queryGroup": [
      {
        "name": "dev",
        "users": [
          { "name": "alice" }
        ],
        "rules": [
          { "permission": 7, "predicate": "friend" },
          { "permission": 7, "predicate": "~friend" }
        ]
      }
    ]
  }
}
```

### Get group information

To check the `dev` group information:

```graphql
query {
  getGroup(name: "dev") {
    name
    users {
      name
    }
    rules {
      permission
      predicate
    }
  }
}
```

The output should include the users in the group as well as the group's ACL rules, e.g.

```json
{
  "data": {
    "getGroup": {
      "name": "dev",
      "users": [
        { "name": "alice" }
      ],
      "rules": [
        { "permission": 7, "predicate": "friend" },
        { "permission": 7, "predicate": "~friend" }
      ]
    }
  }
}
```
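The same queries work over HTTP against the `/admin` endpoint. For example, to fetch the `dev` group with curl (a sketch, reusing a guardian `$TOKEN` from the login section):

```sh
curl --silent --request POST http://localhost:8080/admin \
  --header "Content-Type: application/graphql" \
  --header "X-Dgraph-AccessToken: $TOKEN" \
  --data 'query { getGroup(name: "dev") { name users { name } rules { permission predicate } } }'
```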
## Reset Groot password

If you have forgotten the password of the `groot` user, you can reset the `groot` password (or the password for any user) by following these steps.

1. Stop Dgraph Alpha.
2. Turn off ACLs by removing the `--acl_hmac_secret` config flag in the Alpha config. This leaves the Alpha open with no ACL rules, so be sure to restrict access, including stopping request traffic to this Alpha.
3. Start Dgraph Alpha.
4. Connect to Dgraph Alpha using Ratel and run the following upsert mutation to update the `groot` password to `newpassword` (choose your own secure password):

   ```graphql
   upsert {
     query {
       groot as var(func: eq(dgraph.xid, "groot"))
     }
     mutation {
       set {
         uid(groot) <dgraph.password> "newpassword" .
       }
     }
   }
   ```

5. Restart Dgraph Alpha with ACLs turned on by setting the `--acl_hmac_secret` config flag.
6. Log in as `groot` with your new password.

---

# Source: https://docs.hypermode.com/dgraph/concepts/acl.md

# Access Control Lists

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

Access Control Lists (ACL) are a typical mechanism to list who can access what, specifying either users or roles and what they can access. ACLs help determine who is "authorized" to access what.

Dgraph Access Control Lists (ACLs) are sets of permissions for which `Relationships` a user may access. Recall that Dgraph is "predicate based", so all data is stored in and is implicit in relationships. This allows relationship-based controls to be very powerful in restricting a graph based on roles, known as Relationship-Based Access Control (RBAC).

Note that the Dgraph multi-tenancy feature relies on ACLs to ensure each tenant can only see their own data in one server.

Using ACLs requires a client to authenticate (log in) differently and specify credentials that drive which relationships are visible in their view of the graph database.

---

# Source: https://docs.hypermode.com/dgraph/graphql/mutation/add.md

# Add Mutations

> Add mutations allow you to add new objects of a particular type. Dgraph automatically generates input and return types in the schema for the add mutation.

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

Add mutations allow you to add new objects of a particular type. We use the following schema to demonstrate some examples.

**Schema**:

```graphql
type Author {
  id: ID!
  name: String! @search(by: [hash])
  dob: DateTime
  posts: [Post]
}

type Post {
  postID: ID!
  title: String! @search(by: [term, fulltext])
  text: String @search(by: [fulltext, term])
  datePublished: DateTime
}
```

Dgraph automatically generates input and return types in the schema for the `add` mutation, as shown below:

```graphql
addPost(input: [AddPostInput!]!): AddPostPayload

input AddPostInput {
  title: String!
  text: String
  datePublished: DateTime
}

type AddPostPayload {
  post(filter: PostFilter, order: PostOrder, first: Int, offset: Int): [Post]
  numUids: Int
}
```

**Example**: add mutation on single type with embedded value

```graphql
mutation {
  addAuthor(input: [{ name: "A.N. Author", posts: [] }]) {
    author {
      id
      name
    }
  }
}
```

**Example**: add mutation on single type using variables

```graphql
mutation addAuthor($author: [AddAuthorInput!]!) {
  addAuthor(input: $author) {
    author {
      id
      name
    }
  }
}
```

Variables:

```json
{
  "author": {
    "name": "A.N. Author",
    "dob": "2000-01-01",
    "posts": []
  }
}
```

You can convert an `add` mutation to an `upsert` mutation by setting the value of the input variable `upsert` to `true`. For more information, see [Upsert Mutations](./upsert).

## Examples

You can refer to the following [link](https://github.com/hypermodeinc/dgraph/blob/main/graphql/resolve/add_mutation_test.yaml) for more examples.
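Like all generated mutations, `addAuthor` can also be called over HTTP against the `/graphql` endpoint. A sketch with curl using the variables form shown above (assumes a Dgraph instance with this schema loaded, serving GraphQL on `localhost:8080`):

```sh
curl --silent --request POST http://localhost:8080/graphql \
  --header "Content-Type: application/json" \
  --data '{
    "query": "mutation addAuthor($author: [AddAuthorInput!]!) { addAuthor(input: $author) { author { id name } } }",
    "variables": { "author": [{ "name": "A.N. Author", "dob": "2000-01-01", "posts": [] }] }
  }'
```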
---

# Source: https://docs.hypermode.com/dgraph/guides/get-started-with-dgraph/advanced-text-search.md

# Get Started with Dgraph - Advanced Text Search

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

**Welcome to the sixth tutorial of getting started with Dgraph.**

In the [previous tutorial](./string-indicies), we learned about building social graphs in Dgraph by modeling tweets as an example. We queried the tweets using the `hash` and `exact` indices, and implemented a keyword-based search to find your favorite tweets using the `term` index and its functions.

In this tutorial, we'll continue from where we left off and learn about advanced text search features in Dgraph. Specifically, we'll focus on two advanced features:

* Searching for tweets using full-text search.
* Searching for hashtags using regular expression search.

The accompanying video of the tutorial will be out shortly, so stay tuned to [our YouTube channel](https://www.youtube.com/channel/UCghE41LR8nkKFlR3IFTRO4w).

***

Before we dive in, let's do a quick recap of how to model the tweets in Dgraph.

*(figure: tweet model)*

In the previous tutorial, we took three real tweets as a sample dataset and stored them in Dgraph using the above graph as a model. In case you haven't stored the tweets from the [previous tutorial](./string-indicies) into Dgraph, here's the sample dataset again. Copy the mutation below, go to the mutation tab, and click Run.

```json
{
  "set": [
    {
      "user_handle": "hackintoshrao",
      "user_name": "Karthic Rao",
      "uid": "_:hackintoshrao",
      "authored": [
        {
          "tweet": "Test tweet for the fifth episode of getting started series with @dgraphlabs. Wait for the video of the fourth one by @francesc the coming Wednesday!\n#GraphDB #GraphQL",
          "tagged_with": [
            { "uid": "_:graphql", "hashtag": "GraphQL" },
            { "uid": "_:graphdb", "hashtag": "GraphDB" }
          ],
          "mentioned": [
            { "uid": "_:francesc" },
            { "uid": "_:dgraphlabs" }
          ]
        }
      ]
    },
    {
      "user_handle": "francesc",
      "user_name": "Francesc Campoy",
      "uid": "_:francesc",
      "authored": [
        {
          "tweet": "So many good talks at #graphqlconf, next year I'll make sure to be *at least* in the audience!\nAlso huge thanks to the live tweeting by @dgraphlabs for alleviating the FOMO😊\n#GraphDB ♥️ #GraphQL",
          "tagged_with": [
            { "uid": "_:graphql" },
            { "uid": "_:graphdb" },
            { "hashtag": "graphqlconf" }
          ],
          "mentioned": [
            { "uid": "_:dgraphlabs" }
          ]
        }
      ]
    },
    {
      "user_handle": "dgraphlabs",
      "user_name": "Dgraph Labs",
      "uid": "_:dgraphlabs",
      "authored": [
        {
          "tweet": "Let's Go and catch @francesc at @Gopherpalooza today, as he scans into Go source code by building its Graph in Dgraph!\nBe there, as he Goes through analyzing Go source code, using a Go program, that stores data in the GraphDB built in Go!\n#golang #GraphDB #Databases #Dgraph ",
          "tagged_with": [
            { "hashtag": "golang" },
            { "uid": "_:graphdb" },
            { "hashtag": "Databases" },
            { "hashtag": "Dgraph" }
          ],
          "mentioned": [
            { "uid": "_:francesc" },
            { "uid": "_:dgraphlabs" }
          ]
        },
        {
          "uid": "_:gopherpalooza",
          "user_handle": "gopherpalooza",
          "user_name": "Gopherpalooza"
        }
      ]
    }
  ]
}
```

*Note: If you're new to Dgraph, and this is the first time you're running a mutation, we highly recommend reading the [first tutorial of the series before proceeding.](./introduction)*

Voilà! Now you have a graph with `tweets`, `users`, and `hashtags`. It is ready for us to explore.
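If you prefer the command line to Ratel's mutation tab, the same dataset can be loaded over HTTP. A sketch with curl, assuming the JSON above is saved to a file named `tweets.json` (an illustrative name) and an Alpha running on `localhost:8080`:

```sh
## Load the sample tweets and commit the transaction immediately
curl --silent --request POST "http://localhost:8080/mutate?commitNow=true" \
  --header "Content-Type: application/json" \
  --data @tweets.json
```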
*(figure: tweet graph)*

*Note: If you're curious to know how we modeled the tweets in Dgraph, refer to [the previous tutorial.](./string-indicies)*

Let's start by finding your favorite tweets using the full-text search feature first.

## Full-text search

Before we learn how to use the full-text search feature, it's important to understand when to use it.

The length and the number of words in a string predicate value vary based on what the predicates represent. Some string predicate values have only a few terms (words) in them. Predicates representing `names`, `hashtags`, `twitter handles`, or `city names` are a few good examples. These predicates are easy to query using their exact values. For instance, here is an example query: *Give me all the tweets where the user name is equal to `John Campbell`*. You can easily compose queries like these after adding either the `hash` or the `exact` index to the string predicates.

But some string predicates store sentences, sometimes even one or more paragraphs of text. Predicates representing a tweet, a bio, a blog post, a product description, or a movie review are just some examples. These predicates are relatively hard to query, and it is not practical to query them using the `hash` or `exact` string indices.

A keyword-based search using the `term` index is a good starting point for querying such predicates. We used it in our [previous tutorial](./string-indicies) to find the tweets with an exact match for keywords like `GraphQL`, `Graphs`, and `Go`. But for some use cases, a keyword-based search alone may not be sufficient. You might need a more powerful search capability, and that's when you should consider using full-text search.

Let's write some queries and understand Dgraph's full-text search capability in detail. To be able to do a full-text search, you need to first set a `fulltext` index on the `tweet` predicate. Creating a `fulltext` index on any string predicate is similar to creating any other string index.

*(figure: full text)*

*Note: Refer to the [previous tutorial](./string-indicies) if you're not sure about creating an index on a string predicate.*

Now, let's do a full-text search query to find tweets related to the following topic: `graph data and analyzing it in graphdb`. You can do so by using either of the built-in functions `alloftext` or `anyoftext`. Both functions take two arguments. The first argument is the predicate to search. The second argument is the space-separated string values to search for, and we call these the `search strings`.

```sh
- alloftext(predicate, "space-separated search strings")
- anyoftext(predicate, "space-separated search strings")
```

We'll look at the difference between these two functions later. For now, let's use the `alloftext` function. Go to the query tab, paste the query below, and click Run. Here is our search string: `graph data and analyze it in graphdb`.

```graphql
{
  search_tweet(func: alloftext(tweet, "graph data and analyze it in graphdb")) {
    tweet
  }
}
```

*(figure: tweet graph)*

Here's the matched tweet, which made it to the result:

```
Let's Go and catch @francesc at @Gopherpalooza today, as he scans into Go source code by building its Graph in Dgraph!
Be there, as he Goes through analyzing Go source code, using a Go program, that stores data in the GraphDB built in Go!
#golang #GraphDB #Databases #Dgraph pic.twitter.com/sK90DJ6rLs

— Dgraph Labs (@dgraphlabs) November 8, 2019
```
If you observe, you can see that some of the words from the search string are not present in the matched tweet, but the tweet has still made it to the result. To use the full-text search capability effectively, we must understand how it works. Let's understand it in detail.

Once you set a `fulltext` index on the tweets, internally, the tweets are processed and `fulltext` tokens are generated. These `fulltext` tokens are then indexed. The search string goes through the same processing pipeline, and `fulltext` tokens are generated for it too.

Here are the steps to generate the `fulltext` tokens:

* Split the tweets into chunks of words called tokens (tokenizing).
* Convert these tokens to lowercase.
* [Unicode-normalize](http://unicode.org/reports/tr15/#Norm_Forms) the tokens.
* Reduce the tokens to their root form; this is called [stemming](https://en.wikipedia.org/wiki/Stemming) (running to run, faster to fast, and so on).
* Remove the [stop words](https://en.wikipedia.org/wiki/Stop_words).

You would have seen in [the fourth tutorial](./multi-language-strings) that Dgraph allows you to build multi-lingual apps. However, stemming and stop-word removal are not supported for all languages. Here is [the link to the docs](/dgraph/dql/functions#full-text-search) that contains the list of languages and their support for stemming and stop-word removal.

Here is the table with the matched tweet and its search string in the first column. The second column contains their corresponding `fulltext` tokens generated by Dgraph.

| Actual text data | fulltext tokens generated by Dgraph |
| ---------------- | ----------------------------------- |
| Let's Go and catch @francesc at @Gopherpalooza today, as he scans into Go source code by building its Graph in Dgraph!\nBe there, as he Goes through analyzing Go source code, using a Go program, that stores data in the GraphDB built in Go!\n#golang #GraphDB #Databases #Dgraph | \[analyz build built catch code data databas dgraph francesc go goe golang gopherpalooza graph graphdb program scan sourc store todai us] |
| graph data and analyze it in graphdb | \[analyz data graph graphdb] |

From the table above, you can see that the tweets are reduced to an array of strings or tokens. Dgraph internally uses the [Bleve package](https://github.com/blevesearch/bleve) to do the stemming.

Here are the `fulltext` tokens generated for our search string: \[`analyz`, `data`, `graph`, `graphdb`]. As you can see from the table above, all of the `fulltext` tokens generated for the search string exist in the matched tweet. Hence, the `alloftext` function returns a positive match for the tweet. It would not have returned a positive match if even one of the search string's tokens were missing from the tweet. The `anyoftext` function, however, would've returned a positive match as long as the tweet and the search string have at least one of the tokens in common.
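You can see the difference by running the same kind of search with `anyoftext`. Because the tweet's token list contains `graphdb`, the query below still matches even though its other token doesn't appear in the tweet. Shown here over HTTP as a sketch (recent Dgraph versions accept `Content-Type: application/dql`; in Ratel, you can paste just the query into the query tab):

```sh
curl --silent --request POST http://localhost:8080/query \
  --header "Content-Type: application/dql" \
  --data '{ search_tweet(func: anyoftext(tweet, "graphdb kubernetes")) { tweet } }'
```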
If you'd like to see Dgraph's `fulltext` tokenizer in action, [here is the gist](https://gist.github.com/hackintoshrao/0e8d715d8739b12c67a804c7249146a3) with instructions for using it.

Dgraph generates the same `fulltext` tokens even if the words in a search string are ordered differently. Hence, reordering the words of a search string does not impact the query result. As you can see, all three queries below are the same for Dgraph:

```graphql
{
  search_tweet(func: alloftext(tweet, "graph analyze and it in graphdb data")) {
    tweet
  }
}
```

```graphql
{
  search_tweet(func: alloftext(tweet, "data and data analyze it graphdb in")) {
    tweet
  }
}
```

```graphql
{
  search_tweet(func: alloftext(tweet, "analyze data and it in graph graphdb")) {
    tweet
  }
}
```

Now, let's move on to the next advanced text search feature of Dgraph: regular expression based queries. Let's use them to find all the hashtags containing the following substring: `graph`.

## Regular expression search

[Regular expressions](https://www.geeksforgeeks.org/write-regular-expressions/) are powerful ways of expressing search patterns. Dgraph allows you to search string predicates using regular expressions. You need to set the `trigram` index on the string predicate to be able to perform regex-based queries.

Using regular expression based search, let's match all the hashtags that have this particular pattern: `Starts and ends with any characters of indefinite length, but with the substring graph in it`. Here is the regular expression we can use: `^.*graph.*$`

Check out [this tutorial](https://www.geeksforgeeks.org/write-regular-expressions/) if you're not familiar with writing a regular expression.

Let's first find all the hashtags in the database using the `has()` function:

```graphql
{
  hash_tags(func: has(hashtag)) {
    hashtag
  }
}
```

*(image: The hashtags)*

*If you're not familiar with using the `has()` function, refer to [the first tutorial](./introduction) of the series.*

You can see that we have six hashtags in total, and four of them have the substring `graph` in them: `Dgraph`, `GraphQL`, `graphqlconf`, `graphDB`.

Use the built-in function `regexp` to search for predicates using regular expressions. This function takes two arguments: the first is the name of the predicate, and the second is the regular expression. Here is the syntax of the `regexp` function: `regexp(predicate, /regular-expression/)`

Let's execute the following query to find the hashtags that have the substring `graph`. Go to the query tab, type in the query, and click Run.

```graphql
{
  reg_search(func: regexp(hashtag, /^.*graph.*$/)) {
    hashtag
  }
}
```

Oops! We have an error! It looks like we forgot to set the `trigram` index on the `hashtag` predicate.

*(image: The hashtags)*

Again, setting a `trigram` index is similar to setting any other string index; let's do that for the `hashtag` predicate.

*(image: The hashtags)*

*Note: Refer to the [previous tutorial](./string-indicies) if you're not sure about creating an index on a string predicate.*

Now, let's re-run the `regexp` query.

*(image: regex-1)*

*Note: Refer to [the first tutorial](./introduction) if you're not familiar with the query structure in general.*

Success! But we only have the following hashtags in the result: `Dgraph` and `graphqlconf`. That's because the `regexp` function is case-sensitive by default.
Add the character `i` at the end of the second argument of the `regexp` function to make it case-insensitive: `regexp(predicate, /regular-expression/i)`

*(image: regex-2)*

Now we have the four hashtags with the substring `graph` in them.

Let's modify the regular expression to match only the `hashtags` that have the prefix `graph`:

```graphql
{
  reg_search(func: regexp(hashtag, /^graph.*$/i)) {
    hashtag
  }
}
```

*(image: regex-3)*

## Summary

In this tutorial, we learned about full-text search and regular expression based search capabilities in Dgraph.

Did you know that Dgraph also offers fuzzy search capabilities, which can be used to power features like `product` search in an e-commerce store? Let's learn about fuzzy search in our next tutorial.

Sounds interesting? Check out our next tutorial of the getting started series [here](./fuzzy-search).

## Need Help

* Please use [discuss.hypermode.com](https://discuss.hypermode.com) for questions, feature requests, bugs, and discussions.

---

# Source: https://docs.hypermode.com/modus/sdk/go/agents.md
# Source: https://docs.hypermode.com/modus/sdk/assemblyscript/agents.md
# Source: https://docs.hypermode.com/modus/agents.md

# What's an Agent?

> Learn about stateful agents in Modus

## Agents in Modus

Agents in Modus are persistent background processes that maintain memory across interactions. Unlike stateless functions that lose everything when operations end, agents remember every detail, survive system failures, and never lose their operational context.
## Key characteristics

* **Stateful**: Maintains memory and context across interactions
* **Persistent**: Automatically saves and restores state
* **Resilient**: Graceful recovery from failures
* **Autonomous**: Can operate independently over extended periods
* **Actor-based**: Each agent instance runs in isolation
* **Event-driven**: Streams real-time updates and operational intelligence

## When to use agents

Agents are perfect for:

* **Multi-turn workflows** spanning multiple interactions
* **Long-running processes** that maintain context over time
* **Stateful operations** that need to remember previous actions
* **Complex coordination** between different system components
* **Persistent monitoring** that tracks changes over time
* **Real-time operations** requiring live status updates and event streaming

## Agent structure

Every agent starts with the essential framework:

```go
package main

import (
  "fmt"
  "strings"
  "time"

  "github.com/hypermodeinc/modus/sdk/go/pkg/agents"
  "github.com/hypermodeinc/modus/sdk/go/pkg/models"
  "github.com/hypermodeinc/modus/sdk/go/pkg/models/openai"
)

type IntelligenceAgent struct {
  agents.AgentBase

  // The rest of the fields make up the agent's state and can be customized per agent
  intelligenceReports []string      // Matrix surveillance data
  threatLevel         float64       // Current threat assessment
  lastContact         time.Time
  currentMission      *MissionPhase // Track long-running operations
  missionLog          []string      // Operational progress log
}

type MissionPhase struct {
  Name      string
  StartTime time.Time
  Duration  time.Duration
  Complete  bool
}

func (a *IntelligenceAgent) Name() string {
  return "IntelligenceAgent"
}
```

The agent embeds `agents.AgentBase`, which provides all the infrastructure for state management, secure communications, and persistence. Your app data (intelligence reports, threat assessments, contact logs) lives as fields in the struct, automatically preserved across all interactions.

## Creating agents through functions

Agents are created and managed through regular Modus functions that become part of your GraphQL API. These functions handle agent lifecycle operations:

```go
// Register your agent type during initialization
func init() {
  agents.Register(&IntelligenceAgent{})
}

// Create a new agent instance - this becomes a GraphQL mutation
func DeployAgent() (string, error) {
  agentInfo, err := agents.Start("IntelligenceAgent")
  if err != nil {
    return "", err
  }

  // Return the agent ID - clients must store this to communicate with the agent
  return agentInfo.Id, nil
}
```

When you call this function through GraphQL, it returns a unique agent ID:

```graphql
mutation {
  deployAgent
}
```

Response:

```json
{
  "data": {
    "deployAgent": "agent_neo_001"
  }
}
```

You can think of an Agent as a persistent server process with durable memory. Once created, you can reference your agent by its ID across sessions, page reloads, and even system restarts. The agent maintains its complete state and continues operating exactly where it left off.

**Agent builders and visual workflows:** We're actively developing Agent Builder tools and "eject to code" features that generate complete agent deployments from visual workflows. These tools automatically create the deployment functions and agent management code for complex multi-agent systems.
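Since `DeployAgent` is exposed as a GraphQL mutation, any GraphQL client can create an agent. A sketch with curl, assuming a locally running Modus app serving GraphQL at `http://localhost:8080/graphql` (the same endpoint used in the subscription example later on this page):

```sh
curl --silent --request POST http://localhost:8080/graphql \
  --header "Content-Type: application/json" \
  --data '{"query":"mutation { deployAgent }"}'
```

The returned ID is what you pass to every subsequent call that targets this agent instance.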
## Communicating with agents

Once created, you communicate with agents using their unique ID. Create functions that send messages to specific agent instances:

```go
func ImportActivity(agentId string, activityData string) (string, error) {
  result, err := agents.SendMessage(
    agentId,
    "matrix_surveillance",
    agents.WithData(activityData),
  )
  if err != nil {
    return "", err
  }
  if result == nil {
    return "", fmt.Errorf("no response from agent")
  }
  return *result, nil
}

func GetThreatStatus(agentId string) (string, error) {
  result, err := agents.SendMessage(agentId, "threat_assessment")
  if err != nil {
    return "", err
  }
  if result == nil {
    return "", fmt.Errorf("no response from agent")
  }
  return *result, nil
}
```

These functions become GraphQL operations that you can call with your agent's ID:

```graphql
mutation {
  importActivity(
    agentId: "agent_neo_001"
    activityData: "Anomalous Agent Smith replication detected in Sector 7"
  )
}
```

Response:

```json
{
  "data": {
    "importActivity": "Matrix surveillance complete. Agent Smith pattern matches previous incident in the Loop. Threat level: 0.89 based on 3 intelligence reports. Recommend immediate evasive protocols."
  }
}
```

```graphql
query {
  getThreatStatus(agentId: "agent_neo_001")
}
```

Response:

```json
{
  "data": {
    "getThreatStatus": "Current threat assessment: 3 intelligence reports analyzed. Threat level: 0.89. Agent operational in the Matrix."
  }
}
```

The agent receives the message, processes it using its internal state and AI reasoning, updates its intelligence database, and returns a response—all while maintaining persistent memory of every interaction.

## Agent message handling

Agents process requests through their message handling system:

```go
// The message payload is optional, so it arrives as a *string
func (a *IntelligenceAgent) OnReceiveMessage(
  msgName string,
  data *string,
) (*string, error) {
  switch msgName {
  case "matrix_surveillance":
    return a.analyzeMatrixActivity(data)
  case "background_reconnaissance":
    return a.performBackgroundRecon(data)
  case "threat_assessment":
    return a.getThreatAssessment()
  case "get_status":
    return a.getOperationalStatus()
  case "intelligence_history":
    return a.getIntelligenceHistory()
  default:
    return nil, fmt.Errorf("unrecognized directive: %s", msgName)
  }
}
```

Each message type triggers specific operations, with all data automatically maintained in the agent's persistent memory.

## Processing operations with AI intelligence

Here's how agents handle operations while maintaining persistent state and using AI models for analysis:

```go
func (a *IntelligenceAgent) analyzeMatrixActivity(data *string) (*string, error) {
  // Store new intelligence in persistent memory
  a.intelligenceReports = append(a.intelligenceReports, *data)
  a.lastContact = time.Now()

  // Build context from all accumulated intelligence
  accumulatedReports := strings.Join(a.intelligenceReports, "\n")

  // AI analysis using complete operational history
  model, err := models.GetModel[openai.ChatModel]("analyst-model")
  if err != nil {
    return nil, err
  }

  systemPrompt := `You are a resistance operative in the Matrix.
Analyze patterns from accumulated surveillance reports and provide
threat assessment for anomalous Agent behavior.`

  userPrompt := fmt.Sprintf(`All Matrix Intelligence:
%s

Provide threat assessment:`, accumulatedReports)

  input, err := model.CreateInput(
    openai.NewSystemMessage(systemPrompt),
    openai.NewUserMessage(userPrompt),
  )
  if err != nil {
    return nil, err
  }

  output, err := model.Invoke(input)
  if err != nil {
    return nil, err
  }

  analysis := output.Choices[0].Message.Content

  // Update threat level based on data volume and AI analysis
  a.threatLevel = float64(len(a.intelligenceReports)) / 10.0
  if a.threatLevel > 1.0 {
    a.threatLevel = 1.0
  }

  // Boost threat level for critical AI analysis
  if strings.Contains(strings.ToLower(analysis), "critical") ||
    strings.Contains(strings.ToLower(analysis), "agent smith") {
    a.threatLevel = math.Min(a.threatLevel+0.2, 1.0)
  }

  result := fmt.Sprintf(`Matrix surveillance complete: %s
(Threat level: %.2f based on %d intelligence reports)`,
    analysis, a.threatLevel, len(a.intelligenceReports))

  return &result, nil
}
```
This demonstrates how agents maintain state across complex operations while using AI models with the full context of accumulated intelligence.

## The power of intelligent persistence

This combination creates agents that:

**First Analysis:** "Anomalous activity detected. Limited context available. (Threat level: 0.10 based on 1 intelligence report)"

**After Multiple Reports:** "Pattern confirmed across 5 previous incidents. Agent Smith replication rate exceeding normal parameters. Immediate extraction recommended. (Threat level: 0.89 based on 8 intelligence reports)"

The agent doesn't just remember—it **learns and becomes more intelligent with every interaction**. AI models see the complete operational picture, enabling sophisticated pattern recognition impossible with stateless functions.

## State persistence

Agents automatically preserve their state through Modus's built-in persistence system:

```go
func (a *IntelligenceAgent) GetState() *string {
  // The reports are joined with "|", the same separator used between the
  // top-level fields, so this simple format assumes reports never contain "|"
  reportsData := strings.Join(a.intelligenceReports, "|")
  state := fmt.Sprintf("%.2f|%s|%d", a.threatLevel, reportsData, a.lastContact.Unix())
  return &state
}

func (a *IntelligenceAgent) SetState(data *string) {
  if data == nil {
    return
  }

  // Mirror GetState: the first field is the threat level, the last field is
  // the timestamp, and everything in between is the report list
  parts := strings.Split(*data, "|")
  if len(parts) >= 3 {
    a.threatLevel, _ = strconv.ParseFloat(parts[0], 64)
    reports := parts[1 : len(parts)-1]
    if len(reports) > 1 || reports[0] != "" {
      a.intelligenceReports = reports
    }
    timestamp, _ := strconv.ParseInt(parts[len(parts)-1], 10, 64)
    a.lastContact = time.Unix(timestamp, 0)
  }
}
```

## Agent lifecycle

Agents have built-in lifecycle management protocols:

```go
func (a *IntelligenceAgent) OnInitialize() error {
  // Called when agent is first created
  a.lastContact = time.Now()
  a.threatLevel = 0.0
  fmt.Printf(`Resistance Agent %s awakened and ready for Matrix surveillance`, a.Id())
  return nil
}

func (a *IntelligenceAgent) OnResume() error {
  // Called when agent reconnects with complete state intact
  fmt.Printf(`Agent back online in the Matrix.
%d intelligence reports processed. Threat level: %.2f`,
    len(a.intelligenceReports), a.threatLevel)
  return nil
}

func (a *IntelligenceAgent) OnSuspend() error {
  // Called before agent goes offline
  return nil
}

func (a *IntelligenceAgent) OnTerminate() error {
  // Called before final shutdown
  fmt.Printf(`Agent %s extracted from Matrix. Intelligence archive preserved.`, a.Id())
  return nil
}
```
## Asynchronous operations

For fire-and-forget operations where you don't need to wait for a response, agents support asynchronous messaging:

```go
func InitiateBackgroundRecon(agentId string, data string) error {
  // Send message asynchronously - agent processes in background
  err := agents.SendMessageAsync(
    agentId,
    "background_reconnaissance",
    agents.WithData(data),
  )
  if err != nil {
    return err
  }

  // Operation initiated - agent continues processing independently
  return nil
}
```

This enables agents to handle long-running operations like:

* Background Matrix monitoring with status updates
* Scheduled intelligence gathering
* Multi-phase operations that continue independently
* Autonomous surveillance with alert notifications

## Real-time agent event streaming

For monitoring live operations and receiving real-time intelligence updates, agents support event streaming through GraphQL subscriptions. This enables your clients to receive instant notifications about operational changes, mission progress, and critical alerts.

### Subscribing to agent events

Monitor your agent's real-time activities using the unified event subscription:

```graphql
subscription {
  agentEvent(agentId: "agent_neo_001") {
    name
    data
    timestamp
  }
}
```

Your agent streams various types of operational events:

```json
{
  "data": {
    "agentEvent": {
      "name": "mission_started",
      "data": {
        "missionName": "Deep Matrix Surveillance",
        "priority": "HIGH",
        "estimatedDuration": "180s"
      },
      "timestamp": "2025-06-04T14:30:00Z"
    }
  }
}
```

```json
{
  "data": {
    "agentEvent": {
      "name": "agent_threat_detected",
      "data": {
        "threatLevel": "CRITICAL",
        "confidence": 0.92,
        "indicators": ["agent_smith_replication", "unusual_code_patterns"],
        "recommendation": "immediate_extraction"
      },
      "timestamp": "2025-06-04T14:31:15Z"
    }
  }
}
```

```json
{
  "data": {
    "agentEvent": {
      "name": "surveillance_progress",
      "data": {
        "phase": "Processing Matrix surveillance data",
        "progress": 0.65,
        "reportsProcessed": 5,
        "totalReports": 8
      },
      "timestamp": "2025-06-04T14:32:00Z"
    }
  }
}
```

### Publishing events from your agent

Agents can broadcast real-time operational intelligence by publishing events during their operations. Use the `PublishEvent` method to emit custom events:

```go
// Custom event types implement the AgentEvent interface
type ThreatDetected struct {
  ThreatLevel string  `json:"threatLevel"`
  Confidence  float64 `json:"confidence"`
  Analysis    string  `json:"analysis"`
}

func (e ThreatDetected) EventName() string {
  return "threat_detected"
}

// Other event types can be defined similarly...

func (a *IntelligenceAgent) analyzeMatrixActivity(
  data *string,
) (*string, error) {
  // Emit mission start event
  err := a.PublishEvent(MissionStarted{
    MissionName:  "Matrix Surveillance Analysis",
    Priority:     "HIGH",
    ActivityData: len(*data),
  })
  if err != nil {
    return nil, err
  }

  // Store new intelligence in persistent memory
  a.intelligenceReports = append(a.intelligenceReports, *data)
  a.lastContact = time.Now()

  // Emit progress update
  a.PublishEvent(SurveillanceProgress{
    ReportsProcessed: len(a.intelligenceReports),
    Phase:            "Processing Matrix surveillance data",
    Progress:         0.3,
  })

  // Build context from all accumulated intelligence
  accumulatedReports := strings.Join(a.intelligenceReports, "\n")

  // AI analysis using complete operational history
  model, err := models.GetModel[openai.ChatModel]("analyst-model")
  if err != nil {
    return nil, err
  }

  systemPrompt := `You are a resistance operative in the Matrix.
Analyze patterns from accumulated surveillance reports and provide
threat assessment for anomalous Agent behavior.`

  userPrompt := fmt.Sprintf(`All Matrix Intelligence:
%s

Provide threat assessment:`, accumulatedReports)

  input, err := model.CreateInput(
    openai.NewSystemMessage(systemPrompt),
    openai.NewUserMessage(userPrompt),
  )
  if err != nil {
    return nil, err
  }

  // Emit AI processing event
  a.PublishEvent(AIAnalysisStarted{
    ModelName:   "analyst-model",
    ContextSize: len(accumulatedReports),
    ReportCount: len(a.intelligenceReports),
  })

  output, err := model.Invoke(input)
  if err != nil {
    return nil, err
  }

  analysis := output.Choices[0].Message.Content

  // Update threat level based on data volume and AI analysis
  a.threatLevel = float64(len(a.intelligenceReports)) / 10.0
  if a.threatLevel > 1.0 {
    a.threatLevel = 1.0
  }

  // Check for Agent threats and emit alerts
  if strings.Contains(strings.ToLower(analysis), "critical") ||
    strings.Contains(strings.ToLower(analysis), "agent smith") {
    a.threatLevel = math.Min(a.threatLevel+0.2, 1.0)

    a.PublishEvent(ThreatDetected{
      ThreatLevel: "HIGH",
      Confidence:  a.threatLevel,
      Analysis:    analysis,
    })
  }

  // Emit mission completion
  a.PublishEvent(MissionCompleted{
    MissionName:     "Matrix Surveillance Analysis",
    Confidence:      a.threatLevel,
    ReportsAnalyzed: len(a.intelligenceReports),
    Status:          "SUCCESS",
  })

  result := fmt.Sprintf(`Matrix surveillance complete: %s
(Threat level: %.2f based on %d intelligence reports)`,
    analysis, a.threatLevel, len(a.intelligenceReports))

  return &result, nil
}
```
### Event-driven operational patterns

This streaming capability enables sophisticated real-time operational patterns:

**Live Mission Dashboards**: build real-time command centers that show agent activities, mission progress, and threat alerts as they happen.

**Reactive Coordination**: other agents or systems can subscribe to events and automatically respond to operational changes—enabling true multi-agent coordination.

**Operational Intelligence**: stream events to monitoring systems, alerting platforms, or data lakes for real-time operational awareness and historical analysis.

**Progressive Enhancement**: update user interfaces progressively as agents work through complex, multi-phase operations without polling or manual refresh.

### Subscription protocol

Modus uses GraphQL subscriptions over Server-Sent Events (SSE) following the [GraphQL-SSE specification](https://the-guild.dev/graphql/sse). To consume these subscriptions:

1. **From a web browser**: Use the EventSource API or a GraphQL client that supports SSE subscriptions
2. **From Postman**: Set the Accept header to `text/event-stream` and make a POST request
3. **From curl**: Use the `-N` flag and appropriate headers for streaming
Example with curl:

```bash
curl -N -H "accept: text/event-stream" \
  -H "content-type: application/json" \
  -X POST http://localhost:8080/graphql \
  -d '{"query":"subscription { agentEvent(agentId: \"agent_neo_001\") { name data timestamp } }"}'
```

## Monitoring ongoing operations

You can also poll agent status directly through dedicated functions:

```go
func CheckMissionProgress(agentId string) (*MissionStatus, error) {
  result, err := agents.SendMessage(agentId, "get_status")
  if err != nil {
    return nil, err
  }
  if result == nil {
    return nil, fmt.Errorf("no response from agent")
  }

  var status MissionStatus
  err = json.Unmarshal([]byte(*result), &status)
  if err != nil {
    return nil, err
  }
  return &status, nil
}

type MissionStatus struct {
  Phase         string  `json:"phase"`
  Progress      float64 `json:"progress"`
  CurrentTask   string  `json:"current_task"`
  EstimatedTime int     `json:"estimated_time_remaining"`
  IsComplete    bool    `json:"is_complete"`
}
```

The agent tracks its operational status using the mission state we defined earlier:

```go
func (a *IntelligenceAgent) getOperationalStatus() (*string, error) {
  var status MissionStatus

  if a.currentMission == nil {
    status = MissionStatus{
      Phase:       "Standby",
      Progress:    1.0,
      CurrentTask: "Awaiting mission directives in the Matrix",
      IsComplete:  true,
    }
  } else {
    // Calculate progress based on mission log entries
    progress := float64(len(a.missionLog)) / 4.0 // 4 phases expected
    if progress > 1.0 {
      progress = 1.0
    }

    status = MissionStatus{
      Phase:    a.currentMission.Name,
      Progress: progress,
      // Latest entry (assumes the log is non-empty while a mission is active)
      CurrentTask: a.missionLog[len(a.missionLog)-1],
      IsComplete:  a.currentMission.Complete,
    }
  }

  statusJson, err := json.Marshal(status)
  if err != nil {
    return nil, err
  }

  result := string(statusJson)
  return &result, nil
}
```

Your client can either poll this status endpoint via GraphQL or subscribe to real-time events for instant updates:

```graphql
# Polling approach
query MonitorMission($agentId: String!) {
  checkMissionProgress(agentId: $agentId) {
    phase
    progress
    currentTask
    estimatedTimeRemaining
    isComplete
  }
}

# Real-time streaming approach (recommended)
subscription LiveAgentMonitoring($agentId: String!) {
  agentEvent(agentId: $agentId) {
    name
    data
    timestamp
  }
}
```
  agentEvent(agentId: $agentId) {
    name
    data
    timestamp
  }
}
```

The streaming approach provides superior operational intelligence:

* **Instant Updates**: Receive events the moment they occur, not on polling intervals
* **Rich Context**: Events include detailed payload data about operational state
* **Event Filtering**: Subscribe to specific agent IDs and filter event types client-side
* **Operational History**: Complete timeline of agent activities for audit and debugging
* **Scalable Monitoring**: Monitor multiple agents simultaneously with individual subscriptions

## Beyond simple operations

Agents enable sophisticated patterns impossible with stateless functions:

* **Operational continuity**: Maintain state across system failures and re-deployments
* **Intelligence building**: Accumulate understanding across multiple assignments through AI-powered analysis
* **Recovery protocols**: Resume operations from the last secure checkpoint instead of starting over
* **Network coordination**: Manage complex multi-agent operations with shared intelligence and real-time event coordination
* **Adaptive learning**: AI models become more effective as agents accumulate operational data
* **Real-time streaming**: Broadcast operational intelligence instantly to monitoring systems and coordinating agents
* **Event-driven coordination**: React to operational changes and mission updates through real-time event streams
* **Progressive operations**: Update user interfaces and trigger downstream processes as agents work through complex workflows

Agents represent the evolution from stateless functions to persistent background processes that maintain complete operational continuity, build intelligence over time, and provide real-time operational awareness. They're the foundation for building systems that never lose track of their work, become smarter with every interaction, and keep teams informed through live event streaming, no matter what happens in the infrastructure.

---

# Source: https://docs.hypermode.com/dgraph/graphql/query/aggregate.md

# Aggregate Queries

> Dgraph automatically generates aggregate queries for GraphQL schemas. These are compatible with the @auth directive

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

Dgraph automatically generates aggregate queries for GraphQL schemas. Aggregate queries fetch aggregate data, including the following:

* *Count queries* that let you count fields satisfying certain criteria specified using a filter.
* *Advanced aggregate queries* that let you calculate the maximum, minimum, sum, and average of specified fields.

Aggregate queries are compatible with the `@auth` directive and follow the same authorization rules as the `query` keyword. You can also use filters with aggregate queries, as shown in some of the examples provided below.

## Count queries at root

For every `type` defined in a GraphQL schema, Dgraph generates an aggregate query `aggregate<Type>` (for example, `aggregatePost` for type `Post`). This query includes a `count` field, as well as [advanced aggregate query fields](#advanced-aggregate-queries-at-root).

Example: fetch the total number of `posts`.

```graphql
query {
  aggregatePost {
    count
  }
}
```

Example: fetch the number of `posts` whose titles contain `GraphQL`.
```graphql
query {
  aggregatePost(filter: { title: { anyofterms: "GraphQL" } }) {
    count
  }
}
```

## Count queries for child nodes

Dgraph also defines `<field>Aggregate` fields for every field of type `List[Type/Interface]` inside `query` queries, allowing you to do a `count` on fields, or to use the [advanced aggregate queries](#advanced-aggregate-queries-for-child-nodes).

Example: fetch the number of `posts` for all authors along with their `name`.

```graphql
query {
  queryAuthor {
    name
    postsAggregate {
      count
    }
  }
}
```

Example: fetch the number of `posts` with a `score` greater than `10` for all authors, along with their `name`.

```graphql
query {
  queryAuthor {
    name
    postsAggregate(filter: { score: { gt: 10 } }) {
      count
    }
  }
}
```

## Advanced aggregate queries at root

For every `type` defined in the GraphQL schema, Dgraph generates an aggregate query `aggregate<Type>` that includes advanced aggregate query fields, and also includes a `count` field (see [Count queries at root](#count-queries-at-root)). Dgraph generates one or more advanced aggregate query fields (`Min`, `Max`, `Sum`, and `Avg`) for fields in the schema that are typed as `Int`, `Float`, `String`, and `Datetime`.

Advanced aggregate query fields are generated according to a field's type. Fields typed as `Int` and `Float` get the following query fields: `Max`, `Min`, `Sum`, and `Avg`. Fields typed as `String` and `Datetime` only get the `Max` and `Min` query fields.

Example: fetch the average number of `posts` written by authors:

```graphql
query {
  aggregateAuthor {
    numPostsAvg
  }
}
```

Example: fetch the total number of `posts` by all authors, and the maximum number of `posts` by any single `Author`:

```graphql
query {
  aggregateAuthor {
    numPostsSum
    numPostsMax
  }
}
```

Example: fetch the average number of `posts` for authors with more than 20 `friends`:

```graphql
query {
  aggregateAuthor(filter: { friends: { gt: 20 } }) {
    numPostsAvg
  }
}
```

## Advanced aggregate queries for child nodes

Dgraph also defines `<field>Aggregate` fields for child nodes within `query` queries. This is done for each field of type `List[Type/Interface]` inside `query` queries, letting you fetch minimums, maximums, averages, and sums for those fields.

Aggregate query fields are generated according to a field's type. Fields typed as `Int` and `Float` get the following query fields: `Max`, `Min`, `Sum`, and `Avg`. Fields typed as `String` and `Datetime` only get the `Max` and `Min` query fields.

Example: fetch the minimum, maximum, and average `score` of the `posts` for each `Author`, along with each author's `name`.

```graphql
query {
  queryAuthor {
    name
    postsAggregate {
      scoreMin
      scoreMax
      scoreAvg
    }
  }
}
```

Example: fetch the date of the most recent post with a `score` greater than `10` for all authors, along with the author's `name`.

```graphql
query {
  queryAuthor {
    name
    postsAggregate(filter: { score: { gt: 10 } }) {
      datePublishedMax
    }
  }
}
```

## Aggregate queries on null data

Aggregate queries against empty data return `null`. This is true for both the `<field>Aggregate` fields and the `aggregate<Type>` queries generated by Dgraph. So, in these examples, the following is true:

* If there are no nodes of type `Author`, the `aggregateAuthor` query returns null.
* If an `Author` hasn't written any posts, the field `postsAggregate` is null for that `Author`.

---

# Source: https://docs.hypermode.com/dgraph/dql/aggregation.md

# Aggregation

We're overhauling Dgraph's docs to make them clearer and more approachable.
If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

Syntax Example: `AG(val(varName))`, where `AG` is replaced with one of:

* `min`: select the minimum value in the value variable `varName`
* `max`: select the maximum value
* `sum`: sum all values in value variable `varName`
* `avg`: calculate the average of values in `varName`

Schema Types:

| Aggregation   | Schema Types                                    |
| :------------ | :---------------------------------------------- |
| `min` / `max` | `int`, `float`, `string`, `dateTime`, `default` |
| `sum` / `avg` | `int`, `float`                                  |

Aggregation can only be applied to [value variables](./variables#value-variables). An index is not required (the values have already been found and stored in the value variable mapping).

An aggregation is applied at the query block enclosing the variable definition. As opposed to query variables and value variables, which are global, aggregation is computed locally. For example:

```dql
A as predicateA {
  ...
  B as predicateB {
    x as ...some value...
  }
  min(val(x))
}
```

Here, `A` and `B` are the lists of all UIDs that match these blocks. Value variable `x` is a mapping from UIDs in `B` to values. The aggregation `min(val(x))`, however, is computed for each UID in `A`. That is, the semantics are: for each UID in `A`, take the slice of `x` that corresponds to `A`'s outgoing `predicateB` edges, and compute the aggregation for those values.

Aggregations can themselves be assigned to value variables, making a UID to aggregation map.

## Min

### Usage at root

Query Example: Get the min initial release date for any Harry Potter movie. The release date is assigned to a variable, then it's aggregated and fetched in an empty block.

```dql
{
  var(func: allofterms(name@en, "Harry Potter")) {
    d as initial_release_date
  }
  me() {
    min(val(d))
  }
}
```

### Usage at other levels

Query Example: Directors called Steven and the date of release of their first movie, in ascending order of first movie.

```dql
{
  stevens as var(func: allofterms(name@en, "steven")) {
    director.film {
      ird as initial_release_date
      # ird is a value variable mapping a film UID to its release date
    }
    minIRD as min(val(ird))
    # minIRD is a value variable mapping a director UID to their first release date
  }

  byIRD(func: uid(stevens), orderasc: val(minIRD)) {
    name@en
    firstRelease: val(minIRD)
  }
}
```

## Max

### Usage at root

Query Example: Get the max initial release date for any Harry Potter movie. The release date is assigned to a variable, then it's aggregated and fetched in an empty block.

```dql
{
  var(func: allofterms(name@en, "Harry Potter")) {
    d as initial_release_date
  }
  me() {
    max(val(d))
  }
}
```

### Usage at other levels

Query Example: Quentin Tarantino's movies and date of release of the most recent movie.

```dql
{
  director(func: allofterms(name@en, "Quentin Tarantino")) {
    director.film {
      name@en
      x as initial_release_date
    }
    max(val(x))
  }
}
```

## Sum and Avg

### Usage at root

Query Example: Get the sum and the average of the number of movies directed by people who have Steven or Tom in their name.

```dql
{
  var(func: anyofterms(name@en, "Steven Tom")) {
    a as count(director.film)
  }

  me() {
    avg(val(a))
    sum(val(a))
  }
}
```

### Usage at other levels

Query Example: Steven Spielberg's movies, with the number of recorded genres per movie, and the total number of genres and average genres per movie.
```dql
{
  director(func: eq(name@en, "Steven Spielberg")) {
    name@en
    director.film {
      name@en
      numGenres : g as count(genre)
    }
    totalGenres : sum(val(g))
    genresPerMovie : avg(val(g))
  }
}
```

## Aggregating Aggregates

Aggregations can be assigned to value variables, and so these variables can in turn be aggregated.

Query Example: For each actor in a Peter Jackson film, find the number of roles played in any movie. Sum these to find the total number of roles ever played by all actors in the movie. Then sum the lot to find the total number of roles ever played by actors who have appeared in Peter Jackson movies. Note that this demonstrates how to aggregate aggregates; the answer in this case isn't quite precise, though, because actors who have appeared in multiple Peter Jackson movies are counted more than once.

```dql
{
  PJ as var(func: allofterms(name@en, "Peter Jackson")) {
    director.film {
      starring {  # starring an actor
        performance.actor {
          movies as count(actor.film)  # number of roles for this actor
        }
        perf_total as sum(val(movies))
      }
      movie_total as sum(val(perf_total))  # total roles for all actors in this movie
    }
    gt as sum(val(movie_total))
  }

  PJmovies(func: uid(PJ)) {
    name@en
    director.film (orderdesc: val(movie_total), first: 5) {
      name@en
      totalRoles : val(movie_total)
    }
    grandTotal : val(gt)
  }
}
```

---

# Source: https://docs.hypermode.com/modus/ai-enabled-apps.md

# AI-Enabled Apps

> Add intelligence to your app with AI models

Modus makes it easy to incrementally add intelligence to your apps. Whether you're building an app with Modus or starting with your first AI feature, Modus' APIs give you a full palette to build a modern app from.

---

# Source: https://docs.hypermode.com/dgraph/dql/alias.md

# Aliases

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

Syntax Examples:

* `aliasName : predicate`
* `aliasName : predicate { ... }`
* `aliasName : varName as ...`
* `aliasName : count(predicate)`
* `aliasName : max(val(varName))`

An alias provides an alternate name in results. Predicates, variables, and aggregates can be aliased by prefixing with the alias name and `:`. Aliases do not have to be different from the original predicate name, but, within a block, an alias must be distinct from predicate names and other aliases returned in the same block. Aliases can be used to return the same predicate multiple times within a block.

Query Example: directors with `name` matching term `Steven`, their UID, English name, average number of actors per movie, total number of films, and the name of each film in English and French.

```dql
{
  ID as var(func: allofterms(name@en, "Steven")) @filter(has(director.film)) {
    director.film {
      num_actors as count(starring)
    }
    average as avg(val(num_actors))
  }

  films(func: uid(ID)) {
    director_id : uid
    english_name : name@en
    average_actors : val(average)
    num_films : count(director.film)
    films : director.film {
      name : name@en
      english_name : name@en
      french_name : name@fr
    }
  }
}
```

---

# Source: https://docs.hypermode.com/dgraph/graphql/query/and-or-not.md

# And, Or, and Not Operators in GraphQL

> Every GraphQL search filter can use AND, OR, and NOT operators.

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).
Every GraphQL search filter can use `and`, `or`, and `not` operators.

GraphQL syntax uses infix notation, so: "a and b" is `a, and: { b }`, "a or b or c" is `a, or: { b, or: c }`, and "not" is a prefix (`not:`).

The following example queries demonstrate the use of `and`, `or`, and `not` operators:

Example: *"Posts that don't have "GraphQL" in the title"*

```graphql
queryPost(filter: { not: { title: { allofterms: "GraphQL" } } }) {
  ...
}
```

Example: *"Posts that have "GraphQL" or "Dgraph" in the title"*

```graphql
queryPost(filter: {
  title: { allofterms: "GraphQL" },
  or: { title: { allofterms: "Dgraph" } }
}) {
  ...
}
```

Example: *"Posts that have "GraphQL" and "Dgraph" in the title"*

```graphql
queryPost(filter: {
  title: { allofterms: "GraphQL" },
  and: { title: { allofterms: "Dgraph" } }
}) {
  ...
}
```

The `and` operator is implicit for a single filter object if the fields don't overlap. In the previous example, the explicit `and` is required because `title` appears in both filters; in the query below, `and` isn't required because the fields differ.

```graphql
queryPost(filter: {
  title: { allofterms: "GraphQL" },
  datePublished: { ge: "2020-06-15" }
}) {
  ...
}
```

Example: *"Posts that have "GraphQL" in the title, or have the tag "GraphQL" and mention "Dgraph" in the title"*

```graphql
queryPost(filter: {
  title: { allofterms: "GraphQL" },
  or: {
    title: { allofterms: "Dgraph" },
    tags: { eq: "GraphQL" }
  }
}) {
  ...
}
```

The `and` and `or` filters both accept a list of filters. Per the GraphQL specification, non-list filters are coerced into a list. This provides backwards compatibility while allowing for more complex filters.

Example: *"Posts that have `GraphQL` in the title but lack the `GraphQL` tag, or that have `Dgraph` in the title but lack the `Dgraph` tag"*

```graphql
queryPost(filter: {
  or: [
    { and: [{ title: { allofterms: "GraphQL" } }, { not: { tags: { eq: "GraphQL" } } }] }
    { and: [{ title: { allofterms: "Dgraph" } }, { not: { tags: { eq: "Dgraph" } } }] }
  ]
}) {
  ...
}
```

### Nesting

Nested logic with the same `and`/`or` conjunction can be simplified into a single list. For example, the following complex query:

```graphql
queryPost(filter: {
  or: [
    { or: [{ foo: { eq: "A" } }, { bar: { eq: "B" } }] },
    { or: [{ baz: { eq: "C" } }, { quz: { eq: "D" } }] }
  ]
}) {
  ...
}
```

can be simplified into this equivalent query:

```graphql
queryPost(filter: {
  or: [
    { foo: { eq: "A" } },
    { bar: { eq: "B" } },
    { baz: { eq: "C" } },
    { quz: { eq: "D" } }
  ]
}) {
  ...
}
```

---

# Source: https://docs.hypermode.com/modus/api-generation.md

# API Generation

> Create the signature for your API

Modus automatically creates an external API based on the endpoints defined in your [app manifest](/modus/app-manifest#endpoints). Modus generates the API signature based on the functions you export from your app.

## Exporting functions

Modus uses the default conventions for each language.

In Go, a function whose name starts with a capital letter is public. Modus creates an external API for public functions from any file that belongs to the `main` package.

The functions below generate an API endpoint with the signature

```graphql
type Query {
  classifyText(text: String!, threshold: Float!): String!
}
```

Since the `classify` function isn't capitalized, Modus doesn't include it in the generated GraphQL API.
```go
package main

import (
  "fmt"

  "github.com/hypermodeinc/modus/sdk/go/models"
  "github.com/hypermodeinc/modus/sdk/go/models/experimental"
)

const modelName = "my-classifier"

// this function takes input text and a probability threshold, and returns the
// classification label determined by the model, if the confidence is above the
// threshold; otherwise, it returns an empty string
func ClassifyText(text string, threshold float32) (string, error) {
  predictions, err := classify(text)
  if err != nil {
    return "", err
  }

  prediction := predictions[0]
  if prediction.Confidence < threshold {
    return "", nil
  }

  return prediction.Label, nil
}

func classify(texts ...string) ([]experimental.ClassifierResult, error) {
  model, err := models.GetModel[experimental.ClassificationModel](modelName)
  if err != nil {
    return nil, err
  }

  input, err := model.CreateInput(texts...)
  if err != nil {
    return nil, err
  }

  output, err := model.Invoke(input)
  if err != nil {
    return nil, err
  }

  if len(output.Predictions) != len(texts) {
    word := "prediction"
    if len(texts) > 1 {
      word += "s"
    }
    return nil, fmt.Errorf("expected %d %s, got %d", len(texts), word, len(output.Predictions))
  }

  return output.Predictions, nil
}
```

Functions written in AssemblyScript use ES module-style `import` and `export` statements. With the default package configuration, Modus creates an external API for functions exported from the `index.ts` file located in the `functions/assembly` folder of your project.

The functions below generate an API endpoint with the signature

```graphql
type Query {
  classifyText(text: String!, threshold: Float!): String!
}
```

Since the `classify` function isn't exported from the module, Modus doesn't include it in the generated GraphQL API.

```ts
import { models } from "@hypermode/modus-sdk-as"
import {
  ClassificationModel,
  ClassifierResult,
} from "@hypermode/modus-sdk-as/models/experimental/classification"

const modelName: string = "my-classifier"

// this function takes input text and a probability threshold, and returns the
// classification label determined by the model, if the confidence is above the
// threshold; otherwise, it returns an empty string
export function classifyText(text: string, threshold: f32): string {
  const predictions = classify(text, threshold)
  const prediction = predictions[0]
  if (prediction.confidence < threshold) {
    return ""
  }
  return prediction.label
}

function classify(text: string, threshold: f32): ClassifierResult[] {
  const model = models.getModel<ClassificationModel>(modelName)
  const input = model.createInput([text])
  const output = model.invoke(input)
  return output.predictions
}
```

## Generating mutations

By default, all exported functions are generated as GraphQL **queries** unless they follow specific naming conventions that indicate they perform mutations (data modifications).

Functions are automatically classified as **mutations** when they start with these prefixes:

* `mutate`
* `post`, `patch`, `put`, `delete`
* `add`, `update`, `insert`, `upsert`
* `create`, `edit`, `save`, `remove`, `alter`, `modify`
* `start`, `stop`

For example:

* `getUserById` → Query
* `listProducts` → Query
* `addUser` → Mutation
* `updateProduct` → Mutation
* `deleteOrder` → Mutation

The prefix must match exactly as a word: `addPost` becomes a mutation, but `additionalPosts` remains a query since "additional" doesn't match the exact `add` prefix pattern.

---

# Source: https://docs.hypermode.com/dgraph/graphql/api.md

# API Endpoints

> This documentation presents the Admin API and explains how to run a Dgraph database with GraphQL.
We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

This article presents the Admin API and explains how to run a Dgraph database with GraphQL.

## Running Dgraph with GraphQL

The simplest way to start with Dgraph GraphQL is to run the all-in-one Docker image.

```sh
docker run -it -p 8080:8080 dgraph/standalone:%VERSION_HERE
```

That brings up GraphQL at `localhost:8080/graphql` and `localhost:8080/admin`, but is intended for quickstart and doesn't persist data.

## Advanced options

Once you've tried out Dgraph GraphQL, you'll need to move past the `dgraph/standalone` image and run and deploy your own Dgraph instances.

Dgraph is a distributed graph database. It can scale to huge datasets and shard that data across a cluster of Dgraph instances. GraphQL is built into Dgraph, in its Alpha nodes. To learn how to manage and deploy a Dgraph cluster, check our [deployment guide](/dgraph/self-managed/overview).

GraphQL schema introspection is enabled by default, but you can disable it by setting the `--graphql` superflag's `introspection` option to false (`--graphql introspection=false`) when starting the Dgraph Alpha nodes in your cluster.

## Dgraph's schema

Dgraph's GraphQL runs in Dgraph and presents a GraphQL schema where the queries and mutations are executed in the Dgraph cluster. So the GraphQL schema is backed by Dgraph's schema. This means that if you have a Dgraph instance and change its GraphQL schema, the schema of the underlying Dgraph will also be changed!

## Endpoints

When you start Dgraph with GraphQL, two GraphQL endpoints are served.

### /graphql

At `/graphql` you'll find the GraphQL API for the types you've added. That's what your app would access and is the GraphQL entry point to Dgraph. If you need to know more about this, see the [quick start](./quickstart) and [schema docs](./schema/overview).

### /admin

At `/admin` you'll find an admin API for administering your GraphQL instance. The admin API is a GraphQL API that serves POST and GET as well as compressed data, much like the `/graphql` endpoint.

Here are the important types, queries, and mutations from the `admin` schema.

```graphql
""" The Int64 scalar type represents a signed 64‐bit numeric non‐fractional value. Int64 can represent values in range [-(2^63),(2^63 - 1)]. """ scalar Int64 """ The UInt64 scalar type represents an unsigned 64‐bit numeric non‐fractional value. UInt64 can represent values in range [0,(2^64 - 1)]. """ scalar UInt64 """ The DateTime scalar type represents date and time as a string in RFC3339 format. For example: "1985-04-12T23:20:50.52Z" represents 20 minutes and 50.52 seconds after the 23rd hour of April 12th, 1985 in UTC. """ scalar DateTime """ Data about the GraphQL schema being served by Dgraph. """ type GQLSchema @dgraph(type: "dgraph.graphql") { id: ID! """ Input schema (GraphQL types) that was used in the latest schema update. """ schema: String! @dgraph(pred: "dgraph.graphql.schema") """ The GraphQL schema that was generated from the 'schema' field. This is the schema that is being served by Dgraph at /graphql. """ generatedSchema: String! } type Cors @dgraph(type: "dgraph.cors") { acceptedOrigins: [String] } """ A NodeState is the state of an individual node in the Dgraph cluster. """ type NodeState { """ Node type : either 'alpha' or 'zero'. """ instance: String """ Address of the node.
""" address: String """ Node health status : either 'healthy' or 'unhealthy'. """ status: String """ The group this node belongs to in the Dgraph cluster. See : https://docs.hypermode.com/dgraph/self-managed/cluster-setup/. """ group: String """ Version of the Dgraph binary. """ version: String """ Time in nanoseconds since the node started. """ uptime: Int64 """ Time in Unix epoch time that the node was last contacted by another Zero or Alpha node. """ lastEcho: Int64 """ List of ongoing operations in the background. """ ongoing: [String] """ List of predicates for which indexes are built in the background. """ indexing: [String] """ List of Enterprise Features that are enabled. """ ee_features: [String] } type MembershipState { counter: UInt64 groups: [ClusterGroup] zeros: [Member] maxUID: UInt64 maxNsID: UInt64 maxTxnTs: UInt64 maxRaftId: UInt64 removed: [Member] cid: String license: License """ Contains list of namespaces. Note that this is not stored in proto's MembershipState and computed at the time of query. """ namespaces: [UInt64] } type ClusterGroup { id: UInt64 members: [Member] tablets: [Tablet] snapshotTs: UInt64 checksum: UInt64 } type Member { id: UInt64 groupId: UInt64 addr: String leader: Boolean amDead: Boolean lastUpdate: UInt64 clusterInfoOnly: Boolean forceGroupId: Boolean } type Tablet { groupId: UInt64 predicate: String force: Boolean space: Int remove: Boolean readOnly: Boolean moveTs: UInt64 } type License { user: String maxNodes: UInt64 expiryTs: Int64 enabled: Boolean } directive @dgraph( type: String pred: String ) on OBJECT | INTERFACE | FIELD_DEFINITION directive @id on FIELD_DEFINITION directive @secret(field: String!, pred: String) on OBJECT | INTERFACE type UpdateGQLSchemaPayload { gqlSchema: GQLSchema } input UpdateGQLSchemaInput { set: GQLSchemaPatch! } input GQLSchemaPatch { schema: String! } input ExportInput { """ Data format for the export, e.g. "rdf" or "json" (default: "rdf") """ format: String """ Namespace for the export in multi-tenant cluster. Users from guardians of galaxy can export all namespaces by passing a negative value or specific namespaceId to export that namespace. """ namespace: Int """ Destination for the export: e.g. Minio or S3 bucket or /absolute/path """ destination: String """ Access key credential for the destination. """ accessKey: String """ Secret key credential for the destination. """ secretKey: String """ AWS session token, if required. """ sessionToken: String """ Set to true to allow backing up to S3 or Minio bucket that requires no credentials. """ anonymous: Boolean } input TaskInput { id: String! } type Response { code: String message: String } type ExportPayload { response: Response exportedFiles: [String] } type DrainingPayload { response: Response } type ShutdownPayload { response: Response } type TaskPayload { kind: TaskKind status: TaskStatus lastUpdated: DateTime } enum TaskStatus { Queued Running Failed Success Unknown } enum TaskKind { Backup Export Unknown } input ConfigInput { """ Estimated memory the caches can take. Actual usage by the process would be more than specified here. The caches will be updated according to the cache_percentage flag. """ cacheMb: Float """ True value of logRequest enables logging of all the requests coming to alphas. False value of logRequest disables above. """ logRequest: Boolean } type ConfigPayload { response: Response } type Config { cacheMb: Float } input RemoveNodeInput { """ ID of the node to be removed. """ nodeId: UInt64! 
""" ID of the group from which the node is to be removed. """ groupId: UInt64! } type RemoveNodePayload { response: Response } input MoveTabletInput { """ Namespace in which the predicate exists. """ namespace: UInt64 """ Name of the predicate to move. """ tablet: String! """ ID of the destination group where the predicate is to be moved. """ groupId: UInt64! } type MoveTabletPayload { response: Response } enum AssignKind { UID TIMESTAMP NAMESPACE_ID } input AssignInput { """ Choose what to assign: UID, TIMESTAMP or NAMESPACE_ID. """ what: AssignKind! """ How many to assign. """ num: UInt64! } type AssignedIds { """ The first UID, TIMESTAMP or NAMESPACE_ID assigned. """ startId: UInt64 """ The last UID, TIMESTAMP or NAMESPACE_ID assigned. """ endId: UInt64 """ TIMESTAMP for read-only transactions. """ readOnly: UInt64 } type AssignPayload { response: AssignedIds } input BackupInput { """ Destination for the backup: e.g. Minio or S3 bucket. """ destination: String! """ Access key credential for the destination. """ accessKey: String """ Secret key credential for the destination. """ secretKey: String """ AWS session token, if required. """ sessionToken: String """ Set to true to allow backing up to S3 or Minio bucket that requires no credentials. """ anonymous: Boolean """ Force a full backup instead of an incremental backup. """ forceFull: Boolean } type BackupPayload { response: Response taskId: String } input RestoreInput { """ Destination for the backup: e.g. Minio or S3 bucket. """ location: String! """ Backup ID of the backup series to restore. This ID is included in the manifest.json file. If missing, it defaults to the latest series. """ backupId: String """ Number of the backup within the backup series to be restored. Backups with a greater value will be ignored. If the value is zero or missing, the entire series will be restored. """ backupNum: Int """ Path to the key file needed to decrypt the backup. This file should be accessible by all alphas in the group. The backup will be written using the encryption key with which the cluster was started, which might be different than this key. """ encryptionKeyFile: String """ Vault server address where the key is stored. This server must be accessible by all alphas in the group. Default "http://localhost:8200". """ vaultAddr: String """ Path to the Vault RoleID file. """ vaultRoleIDFile: String """ Path to the Vault SecretID file. """ vaultSecretIDFile: String """ Vault kv store path where the key lives. Default "secret/data/dgraph". """ vaultPath: String """ Vault kv store field whose value is the key. Default "enc_key". """ vaultField: String """ Vault kv store field's format. Must be "base64" or "raw". Default "base64". """ vaultFormat: String """ Access key credential for the destination. """ accessKey: String """ Secret key credential for the destination. """ secretKey: String """ AWS session token, if required. """ sessionToken: String """ Set to true to allow backing up to S3 or Minio bucket that requires no credentials. """ anonymous: Boolean } type RestorePayload { """ A short string indicating whether the restore operation was successfully scheduled. """ code: String """ Includes the error message if the operation failed. """ message: String } input ListBackupsInput { """ Destination for the backup: e.g. Minio or S3 bucket. """ location: String! """ Access key credential for the destination. """ accessKey: String """ Secret key credential for the destination. """ secretKey: String """ AWS session token, if required. 
""" sessionToken: String """ Whether the destination doesn't require credentials (e.g. S3 public bucket). """ anonymous: Boolean } type BackupGroup { """ The ID of the cluster group. """ groupId: UInt64 """ List of predicates assigned to the group. """ predicates: [String] } type Manifest { """ Unique ID for the backup series. """ backupId: String """ Number of this backup within the backup series. The full backup always has a value of one. """ backupNum: UInt64 """ Whether this backup was encrypted. """ encrypted: Boolean """ List of groups and the predicates they store in this backup. """ groups: [BackupGroup] """ Path to the manifest file. """ path: String """ The timestamp at which this backup was taken. The next incremental backup will start from this timestamp. """ since: UInt64 """ The type of backup, either full or incremental. """ type: String } type LoginResponse { """ JWT token that should be used in future requests after this login. """ accessJWT: String """ Refresh token that can be used to re-login after accessJWT expires. """ refreshJWT: String } type LoginPayload { response: LoginResponse } type User @dgraph(type: "dgraph.type.User") @secret(field: "password", pred: "dgraph.password") { """ Username for the user. Dgraph ensures that usernames are unique. """ name: String! @id @dgraph(pred: "dgraph.xid") groups: [Group] @dgraph(pred: "dgraph.user.group") } type Group @dgraph(type: "dgraph.type.Group") { """ Name of the group. Dgraph ensures uniqueness of group names. """ name: String! @id @dgraph(pred: "dgraph.xid") users: [User] @dgraph(pred: "~dgraph.user.group") rules: [Rule] @dgraph(pred: "dgraph.acl.rule") } type Rule @dgraph(type: "dgraph.type.Rule") { """ Predicate to which the rule applies. """ predicate: String! @dgraph(pred: "dgraph.rule.predicate") """ Permissions that apply for the rule. Represented following the UNIX file permission convention. That is, 4 (binary 100) represents READ, 2 (binary 010) represents WRITE, and 1 (binary 001) represents MODIFY (the permission to change a predicate’s schema). The options are: * 1 (binary 001) : MODIFY * 2 (010) : WRITE * 3 (011) : WRITE+MODIFY * 4 (100) : READ * 5 (101) : READ+MODIFY * 6 (110) : READ+WRITE * 7 (111) : READ+WRITE+MODIFY Permission 0, which is equal to no permission for a predicate, blocks all read, write and modify operations. """ permission: Int! @dgraph(pred: "dgraph.rule.permission") } input StringHashFilter { eq: String } enum UserOrderable { name } enum GroupOrderable { name } input AddUserInput { name: String! password: String! groups: [GroupRef] } input AddGroupInput { name: String! rules: [RuleRef] } input UserRef { name: String! } input GroupRef { name: String! } input RuleRef { """ Predicate to which the rule applies. """ predicate: String! """ Permissions that apply for the rule. Represented following the UNIX file permission convention. That is, 4 (binary 100) represents READ, 2 (binary 010) represents WRITE, and 1 (binary 001) represents MODIFY (the permission to change a predicate’s schema). The options are: * 1 (binary 001) : MODIFY * 2 (010) : WRITE * 3 (011) : WRITE+MODIFY * 4 (100) : READ * 5 (101) : READ+MODIFY * 6 (110) : READ+WRITE * 7 (111) : READ+WRITE+MODIFY Permission 0, which is equal to no permission for a predicate, blocks all read, write and modify operations. """ permission: Int! 
} input UserFilter { name: StringHashFilter and: UserFilter or: UserFilter not: UserFilter } input UserOrder { asc: UserOrderable desc: UserOrderable then: UserOrder } input GroupOrder { asc: GroupOrderable desc: GroupOrderable then: GroupOrder } input UserPatch { password: String groups: [GroupRef] } input UpdateUserInput { filter: UserFilter! set: UserPatch remove: UserPatch } input GroupFilter { name: StringHashFilter and: UserFilter or: UserFilter not: UserFilter } input SetGroupPatch { rules: [RuleRef!]! } input RemoveGroupPatch { rules: [String!]! } input UpdateGroupInput { filter: GroupFilter! set: SetGroupPatch remove: RemoveGroupPatch } type AddUserPayload { user: [User] } type AddGroupPayload { group: [Group] } type DeleteUserPayload { msg: String numUids: Int } type DeleteGroupPayload { msg: String numUids: Int } input AddNamespaceInput { password: String } input DeleteNamespaceInput { namespaceId: Int! } type NamespacePayload { namespaceId: UInt64 message: String } input ResetPasswordInput { userId: String! password: String! namespace: Int! } type ResetPasswordPayload { userId: String message: String namespace: UInt64 } input EnterpriseLicenseInput { """ The contents of license file as a String. """ license: String! } type EnterpriseLicensePayload { response: Response } type Query { getGQLSchema: GQLSchema health: [NodeState] state: MembershipState config: Config task(input: TaskInput!): TaskPayload getUser(name: String!): User getGroup(name: String!): Group """ Get the currently logged in user. """ getCurrentUser: User queryUser( filter: UserFilter order: UserOrder first: Int offset: Int ): [User] queryGroup( filter: GroupFilter order: GroupOrder first: Int offset: Int ): [Group] """ Get the information about the backups at a given location. """ listBackups(input: ListBackupsInput!): [Manifest] } type Mutation { """ Update the Dgraph cluster to serve the input schema. This may change the GraphQL schema, the types and predicates in the Dgraph schema, and cause indexes to be recomputed. """ updateGQLSchema(input: UpdateGQLSchemaInput!): UpdateGQLSchemaPayload """ Starts an export of all data in the cluster. Export format should be 'rdf' (the default if no format is given), or 'json'. See : https://docs.hypermode.com/dgraph/admin/export """ export(input: ExportInput!): ExportPayload """ Set (or unset) the cluster draining mode. In draining mode no further requests are served. """ draining(enable: Boolean): DrainingPayload """ Shutdown this node. """ shutdown: ShutdownPayload """ Alter the node's config. """ config(input: ConfigInput!): ConfigPayload """ Remove a node from the cluster. """ removeNode(input: RemoveNodeInput!): RemoveNodePayload """ Move a predicate from one group to another. """ moveTablet(input: MoveTabletInput!): MoveTabletPayload """ Lease UIDs, Timestamps or Namespace IDs in advance. """ assign(input: AssignInput!): AssignPayload """ Start a binary backup. See : https://docs.hypermode.com/dgraph/enterprise/binary-backups/#create-a-backup """ backup(input: BackupInput!): BackupPayload """ Start restoring a binary backup. See : https://docs.hypermode.com/enterprise/binary-backups/#online-restore """ restore(input: RestoreInput!): RestorePayload """ Login to Dgraph. Successful login results in a JWT that can be used in future requests. If login is not successful an error is returned. """ login( userId: String password: String namespace: Int refreshToken: String ): LoginPayload """ Add a user. 
When linking to groups: if the group doesn't exist it's created; if the group exists, the new user is linked to the existing group. It is possible to both create new groups and link to existing groups in the one mutation. Dgraph ensures that usernames are unique, hence attempting to add an existing user results in an error. """ addUser(input: [AddUserInput!]!): AddUserPayload """ Add a new group and (optionally) set the rules for the group. """ addGroup(input: [AddGroupInput!]!): AddGroupPayload """ Update users, their passwords and groups. As with AddUser, when linking to groups: if the group doesn't exist it's created; if the group exists, the new user is linked to the existing group. If the filter doesn't match any users, the mutation has no effect. """ updateUser(input: UpdateUserInput!): AddUserPayload """ Add or remove rules for groups. If the filter doesn't match any groups, the mutation has no effect. """ updateGroup(input: UpdateGroupInput!): AddGroupPayload deleteGroup(filter: GroupFilter!): DeleteGroupPayload deleteUser(filter: UserFilter!): DeleteUserPayload """ Add a new namespace. """ addNamespace(input: AddNamespaceInput): NamespacePayload """ Delete a namespace. """ deleteNamespace(input: DeleteNamespaceInput!): NamespacePayload """ Reset password can only be used by the Guardians of the galaxy to reset password of any user in any namespace. """ resetPassword(input: ResetPasswordInput!): ResetPasswordPayload """ Apply enterprise license. """ enterpriseLicense(input: EnterpriseLicenseInput!): EnterpriseLicensePayload }
```

You'll notice that the `/admin` schema is very much the same as the schemas generated by Dgraph GraphQL.

* The `health` query lets you know if everything is connected and if there's a schema currently being served at `/graphql`.
* The `state` query returns the current state of the cluster and group membership information. For more information about `state` see [here](/dgraph/self-managed/dgraph-zero#more-about-the-%2Fstate-endpoint).
* The `config` query returns the configuration options of the cluster set at the time of starting it.
* The `getGQLSchema` query gets the current GraphQL schema served at `/graphql`, or returns null if there's no such schema.
* The `updateGQLSchema` mutation allows you to change the schema currently served at `/graphql`.

## Enterprise features

Enterprise features such as ACL and binary backups are also available through the GraphQL API at the `/admin` endpoint.

* [ACL](/dgraph/enterprise/access-control-lists#accessing-secured-dgraph)
* [Backups](/dgraph/enterprise/binary-backups#create-a-backup)
* [Restore](/dgraph/enterprise/binary-backups#online-restore)

## First start

On first starting with a blank database:

* There's no schema served at `/graphql`.
* Querying the `/admin` endpoint for `getGQLSchema` returns `"getGQLSchema": null`.
* Querying the `/admin` endpoint for `health` lets you know that no schema has been added.

## Validating a schema

You can validate a GraphQL schema before adding it to your database by sending your schema definition in an HTTP POST request to the `/admin/schema/validate` endpoint, as shown in the following example:

Request header:

```
path: /admin/schema/validate
method: POST
```

Request body:

```graphql
type Person {
  name: String
}
```

This endpoint returns a JSON response that indicates if the schema is valid or not, and provides an error if it isn't valid. In this case, the schema is valid, so the JSON response includes the following message: `Schema is valid`.
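As a quick sketch, assuming a local Alpha listening on port 8080 and a schema definition saved in a file named `schema.graphql` (both illustrative assumptions), you could call the validation endpoint with `curl` like this:

```sh
curl -X POST localhost:8080/admin/schema/validate --data-binary '@schema.graphql'
```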
## Modifying a schema

There are two ways you can modify a GraphQL schema:

* Using `/admin/schema`
* Using the `updateGQLSchema` mutation on `/admin`

While modifying the GraphQL schema, if you get errors like `errIndexingInProgress`, `another operation is already running` or `server is not ready`, please wait a moment and then retry the schema update.

### Using `/admin/schema`

The `/admin/schema` endpoint provides a simplified method to add and update schemas. To create a schema you only need to call the `/admin/schema` endpoint with the required schema definition. For example:

```graphql
type Person {
  name: String
}
```

If you have the schema definition stored in a `schema.graphql` file, you can use `curl` like this:

```sh
curl -X POST localhost:8080/admin/schema --data-binary '@schema.graphql'
```

On successful execution, the `/admin/schema` endpoint gives you a JSON response with a success code.

### Using `updateGQLSchema` to add or modify a schema

Another option to add or modify a GraphQL schema is the `updateGQLSchema` mutation. For example, to create a schema using `updateGQLSchema`, run this mutation on the `/admin` endpoint:

```graphql
mutation {
  updateGQLSchema(input: { set: { schema: "type Person { name: String }" } }) {
    gqlSchema {
      schema
      generatedSchema
    }
  }
}
```

## Initial schema

Regardless of the method used to upload the GraphQL schema, on a blank database, adding this schema

```graphql
type Person {
  name: String
}
```

would cause the following:

* The `/graphql` endpoint would refresh and serve the GraphQL schema generated from type `type Person { name: String }`.
* The schema of the underlying Dgraph instance would be altered to allow for the new `Person` type and `name` predicate.
* The `/admin` endpoint for `health` would return that a schema is being served.
* If you used `updateGQLSchema`, the mutation would return `"schema": "type Person { name: String }"` and the generated GraphQL schema for `generatedSchema` (this is the schema served at `/graphql`).
* Querying the `/admin` endpoint for `getGQLSchema` would return the new schema.

## Migrating a schema

Given an instance serving the GraphQL schema from the previous section, updating the schema to the following

```graphql
type Person {
  name: String @search(by: [regexp])
  dob: DateTime
}
```

would change the GraphQL definition of `Person` and result in the following:

* The `/graphql` endpoint would refresh and serve the GraphQL schema generated from the new type.
* The schema of the underlying Dgraph instance would be altered to allow for `dob` (predicate `Person.dob: datetime .` is added, and `Person.name` becomes `Person.name: string @index(regexp) .`), and indexes are rebuilt to allow the regexp search.
* The `health` is unchanged.
* Querying the `/admin` endpoint for `getGQLSchema` would return the updated schema.

## Removing indexes from a schema

Adding a schema through GraphQL doesn't remove existing data (it only removes indexes). For example, starting from the schema in the previous section and modifying it with the initial schema

```graphql
type Person {
  name: String
}
```

would have the following effects:

* The `/graphql` endpoint would refresh to serve the schema built from this type. Thus, the field `dob` would no longer be accessible, and there would be no search available on `name`.
* The search index on `name` in Dgraph would be removed.
* The predicate `dob` in Dgraph would be left untouched (the predicate remains and no data is deleted).
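To confirm which schema is being served after a change like this, you can query `getGQLSchema` on the `/admin` endpoint (this query appears in the admin schema shown earlier):

```graphql
query {
  getGQLSchema {
    schema
    generatedSchema
  }
}
```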
---

# Source: https://docs.hypermode.com/modus/app-manifest.md

# App Manifest

> Define the resources for your app

The manifest for your Modus app allows you to configure the exposure and resources for your functions at runtime. You define the manifest in the `modus.json` file in the root directory of your app.

## Structure

The manifest has three main sections:

* **Endpoints**: expose your functions for integration into your frontend or federated API
* **Connections**: establish connectivity for external endpoints and model hosts
* **Models**: define inference services for use in your functions

### Base manifest

A simple manifest, which exposes a single GraphQL endpoint with a bearer token for authentication, looks like this:

```json modus.json
{
  "$schema": "https://schema.hypermode.com/modus.json",
  "endpoints": {
    "default": {
      "type": "graphql",
      "path": "/graphql",
      "auth": "bearer-token"
    }
  }
}
```

## Endpoints

Endpoints make your functions available outside of your Modus app. The `endpoints` object in the app manifest allows you to define these endpoints for integration into your frontend or federated API.

Each endpoint requires a unique name, specified as a key containing only alphanumeric characters and hyphens. Only a GraphQL endpoint is available currently, but the modular design of Modus allows for the introduction of additional endpoint types in the future.

### GraphQL endpoint

This endpoint type supports the GraphQL protocol to communicate with external clients. You can use a GraphQL client, such as [urql](https://github.com/urql-graphql/urql) or [Apollo Client](https://github.com/apollographql/apollo-client), to interact with the endpoint.

**Example:**

```json modus.json
{
  "endpoints": {
    "default": {
      "type": "graphql",
      "path": "/graphql",
      "auth": "bearer-token"
    }
  }
}
```

* `type`: always set to `"graphql"` for this endpoint type.
* `path`: the path for the endpoint. Must start with a forward slash `/`.
* `auth`: the authentication method for the endpoint. Options are `"bearer-token"` or `"none"`. See [Authentication](/modus/authentication) for additional details.

## Connections

Connections establish connectivity and access to external services. They're used for HTTP and GraphQL APIs, database connections, and externally hosted AI models. The `connections` object in the app manifest allows you to define these hosts, for secure access from within a function.

Each connection requires a unique name, specified as a key containing only alphanumeric characters and hyphens. Each connection has a `type` property, which controls how it's used and which additional properties are available. The following table lists the available connection types:

| Type         | Purpose                          | Function Classes            |
| :----------- | :------------------------------- | :-------------------------- |
| `http`       | Connect to an HTTP(S) web server | `http`, `graphql`, `models` |
| `dgraph`     | Connect to a Dgraph database     | `dgraph`                    |
| `mysql`      | Connect to a MySQL database      | `mysql`                     |
| `neo4j`      | Connect to a Neo4j database      | `neo4j`                     |
| `postgresql` | Connect to a PostgreSQL database | `postgresql`                |

**Don't include secrets directly in the manifest!** If your connection requires authentication, you can include *placeholders* in connection properties which resolve to their respective secrets at runtime.

When developing locally, [set secrets using environment variables](/modus/run-locally#environment-secrets). When deployed on Hypermode, set the actual secrets via the Hypermode Console, where they're securely stored until needed.

### HTTP connection

This connection type supports the HTTP and HTTPS protocols to communicate with external hosts.
You can use the [HTTP APIs](/modus/sdk/assemblyscript/http) in the Modus SDK to interact with the host. This connection type is also used for [GraphQL APIs](/modus/sdk/assemblyscript/graphql) and to invoke externally hosted AI [models](/modus/sdk/assemblyscript/models).

**Example:**

```json modus.json
{
  "connections": {
    "openai": {
      "type": "http",
      "baseUrl": "https://api.openai.com/",
      "headers": {
        "Authorization": "Bearer {{API_KEY}}"
      }
    }
  }
}
```

* `type`: always set to `"http"` for this connection type.
* `baseUrl`: base URL for connections to the host. Must end with a trailing slash and may contain path segments if necessary. Example: `"https://api.example.com/v1/"`
* `endpoint`: full URL endpoint for connections to the host. Example: `"https://models.example.com/v1/classifier"`

You must include either a `baseUrl` or an `endpoint`, but not both.

* Use `baseUrl` for connections to a host with a common base URL.
* Use `endpoint` for connections to a specific URL.

Typically, you'll use the `baseUrl` field. However, some APIs, such as `graphql.execute`, require the full URL in the `endpoint` field.

* `headers`: if provided, requests on the connection include these headers. Each key-value pair is a header name and value. Values may include variables using the `{{VARIABLE}}` template syntax, which resolve at runtime to secrets provided for each connection, via the Hypermode Console.

This example specifies a header named `Authorization` that uses the `Bearer` scheme. A secret named `AUTH_TOKEN` provides the token:

```json
"headers": {
  "Authorization": "Bearer {{AUTH_TOKEN}}"
}
```

This example specifies a header named `X-API-Key` provided by a secret named `API_KEY`:

```json
"headers": {
  "X-API-Key": "{{API_KEY}}"
}
```

You can use a special syntax for connections that require [HTTP basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication). In this example, secrets named `USERNAME` and `PASSWORD` are combined and then base64-encoded to form a compliant `Authorization` header value:

```json
"headers": {
  "Authorization": "Basic {{base64(USERNAME:PASSWORD)}}"
}
```

* `queryParameters`: if provided, requests on the connection include these query parameters, appended to the URL. Each key-value pair is a parameter name and value. Values may include variables using the `{{VARIABLE}}` template syntax, which resolve at runtime to secrets provided for each connection, via the Hypermode Console.

This example specifies a query parameter named `key` provided by a secret named `API_KEY`:

```json
"queryParameters": {
  "key": "{{API_KEY}}"
}
```

### Dgraph connection

This connection type supports connecting to Dgraph databases. You can use the [Dgraph APIs](/modus/sdk/assemblyscript/dgraph) in the Modus SDK to interact with the database.

There are two ways to connect to Dgraph:

* [Using a connection string](#using-a-dgraph-connection-string) (preferred method)
* [Using a gRPC target](#using-a-dgraph-grpc-target) (older method)

You can use either approach in Modus, but not both.

#### Using a Dgraph connection string

This is the preferred method for connecting to Dgraph. It uses a simplified URI-based connection string to specify all options, including host, port, options, and authentication.

**Example:**

```json modus.json
{
  "connections": {
    "my-dgraph": {
      "type": "dgraph",
      "connString": "dgraph://example.hypermode.host:443?sslmode=verify-ca&bearertoken={{DGRAPH_API_KEY}}"
    }
  }
}
```

* `type`: always set to `"dgraph"` for this connection type.
* `connString`: the connection string for the Dgraph database, in URI format.
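The connection string can also carry username/password authentication for clusters with ACL enabled. As a sketch (the host and secret names here are illustrative, and the exact options depend on your deployment), such a connection might look like:

```json modus.json
{
  "connections": {
    "my-dgraph": {
      "type": "dgraph",
      "connString": "dgraph://{{DGRAPH_USER}}:{{DGRAPH_PASSWORD}}@example.hypermode.host:443?sslmode=verify-ca"
    }
  }
}
```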
#### Using a Dgraph gRPC target

This is the older method for connecting to Dgraph. It uses a gRPC target to specify the host and port, and a separate key for authentication. It automatically uses SSL mode (with full CA verification) for the connection, *except* when connecting to `localhost`.

Additional options such as username/password authentication aren't supported. If you need to use these options, use the connection string method instead.

**Example:**

```json modus.json
{
  "connections": {
    "my-dgraph": {
      "type": "dgraph",
      "grpcTarget": "example.grpc.region.aws.cloud.dgraph.io:443",
      "key": "{{DGRAPH_API_KEY}}"
    }
  }
}
```

* `type`: always set to `"dgraph"` for this connection type.
* `grpcTarget`: the gRPC target for the Dgraph database.
* `key`: the API key for the Dgraph database.

### MySQL connection

This connection type supports connecting to MySQL databases. You can use the [MySQL APIs](/modus/sdk/assemblyscript/mysql) in the Modus SDK to interact with the database.

**Example:**

```json modus.json
{
  "connections": {
    "my-database": {
      "type": "mysql",
      "connString": "mysql://{{USERNAME}}:{{PASSWORD}}@db.example.com:3306/dbname?tls=true"
    }
  }
}
```

* `type`: always set to `"mysql"` for this connection type.
* `connString`: the connection string for the MySQL database. Values may include variables using the `{{VARIABLE}}` template syntax, which resolve at runtime to secrets provided for each connection, via the Hypermode Console.

The connection string in the preceding example includes:

* A username and password provided by secrets named `USERNAME` & `PASSWORD`
* A host named `db.example.com` on port `3306`
* A database named `dbname`
* Encryption enabled via `tls=true`, which is highly recommended for secure connections

Set the connection string using a URI format [as described in the MySQL documentation](https://dev.mysql.com/doc/refman/8.4/en/connecting-using-uri-or-key-value-pairs.html#connecting-using-uri). However, any optional parameters provided should be in the form specified by the Go MySQL driver used by the Modus Runtime, [as described here](https://github.com/go-sql-driver/mysql/blob/master/README.md#parameters). For example, use `tls=true` to enable encryption (not `sslmode=require`).

### Neo4j connection

This connection type supports connecting to Neo4j databases. You can use the [Neo4j APIs](/modus/sdk/assemblyscript/neo4j) in the Modus SDK to interact with the database.

**Example:**

```json modus.json
{
  "connections": {
    "my-neo4j": {
      "type": "neo4j",
      "dbUri": "bolt://localhost:7687",
      "username": "neo4j",
      "password": "{{NEO4J_PASSWORD}}"
    }
  }
}
```

* `type`: always set to `"neo4j"` for this connection type.
* `dbUri`: the URI for the Neo4j database.
* `username`: the username for the Neo4j database.
* `password`: the password for the Neo4j database.

### PostgreSQL connection

This connection type supports connecting to PostgreSQL databases. You can use the [PostgreSQL APIs](/modus/sdk/assemblyscript/postgresql) in the Modus SDK to interact with the database.

**Example:**

```json modus.json
{
  "connections": {
    "my-database": {
      "type": "postgresql",
      "connString": "postgresql://{{PG_USER}}:{{PG_PASSWORD}}@db.example.com:5432/data?sslmode=require"
    }
  }
}
```

* `type`: always set to `"postgresql"` for this connection type.
* `connString`: the connection string for the PostgreSQL database. Values may include variables using the `{{VARIABLE}}` template syntax, which resolve at runtime to secrets provided for each connection, via the Hypermode Console.
The connection string in the preceding example includes:

* A username and password provided by secrets named `PG_USER` & `PG_PASSWORD`
* A host named `db.example.com` on port `5432`
* A database named `data`
* SSL mode set to `require`, which is highly recommended for secure connections

Refer to [the PostgreSQL documentation](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING) for more details on connection strings.

Managed PostgreSQL providers often provide a pre-made connection string for you to copy. Check your provider's documentation for details. For example, if using Neon, refer to the [Neon documentation](https://neon.tech/docs/connect/connect-from-any-app).

See [Running locally with secrets](/modus/run-locally#environment-secrets) for more details on how to set secrets for local development.

## Models

AI models are a core resource for inferencing. The `models` object in the app manifest allows you to easily define models, whether hosted by Hypermode or another host.

Each model requires a unique name, specified as a key, containing only alphanumeric characters and hyphens.

```json modus.json
{
  "models": {
    "text-generator": {
      "sourceModel": "meta-llama/Llama-3.2-3B-Instruct",
      "provider": "hugging-face",
      "connection": "hypermode"
    }
  }
}
```

* `sourceModel`: original relative path of the model within the provider's repository.
* `provider`: source provider of the model. If the `connection` value is `hypermode`, this field is mandatory. `hugging-face` is currently the only supported option.
* `connection`: connection for the model instance.
  * Specify `"hypermode"` for models that Hypermode hosts.
  * Otherwise, specify a name that matches a connection defined in the [`connections`](#connections) section of the manifest.

When using `hugging-face` as the `provider` and `hypermode` as the `connection`, Hypermode automatically facilitates the connection to a shared or dedicated instance of the model. Your project's functions securely access the hosted model, with no further configuration required.

---

# Source: https://docs.hypermode.com/modus/architecture.md

# Architecture

> Name the pieces and projects we're built on

---

# Source: https://docs.hypermode.com/agents/connections/attio.md

# Using Attio with Hypermode

> Connect your Hypermode agent to Attio for CRM operations
## Overview

Attio is a modern, highly customizable CRM platform that helps businesses manage customer relationships, track deals, and organize data in a flexible way. This guide walks you through connecting your Hypermode agent to Attio for automated CRM operations.

## Prerequisites

Before connecting Attio to Hypermode, you'll need:

1. An [Attio workspace](https://attio.com/)
2. Admin permissions to generate API credentials
3. A [Hypermode workspace](https://hypermode.com/)

## Getting started with Attio

### Step 1: Sign up for Attio

If you don't have an Attio account yet, you'll need to create one first. Visit the Attio homepage to get started:

Attio Homepage

Click "Sign up" to create your new Attio workspace. You'll need admin access to generate the API credentials required for the Hypermode integration.

### Step 2: Note your workspace domain

Your Attio workspace URL will be in the format `https://[workspace-name].attio.com`. Make note of your workspace name, as you'll authenticate through Attio when adding the connection to Hypermode.

## Creating your Attio agent

### Step 1: Create a new agent

From the Hypermode interface, create a new agent:

1. Click the agent dropdown menu
2. Select "Create new Agent"

Navigate to create agent

### Step 2: Configure agent settings

Use these recommended settings for your Attio CRM agent:

* **Agent Name**: CRMAgent
* **Agent Title**: Attio CRM Manager
* **Description**: Manages customer relationships and deal tracking in Attio CRM
* **Instructions**: You have a connection to Attio CRM. You can create and update companies and deals, search for existing records, manage deal pipelines, and track customer interactions. Always confirm data before making changes and provide clear summaries of actions taken.
* **Model**: GPT-4.1 (default); optionally, use Claude for best results

Create agent modal

### Step 3: View your agent profile

Once created, navigate to your agent's settings page:

Agent profile

## Connecting to Attio

### Step 1: Add the Attio connection

Navigate to the **Connections** tab and add Attio:

1. Click "Add connection"
2. Search for "Attio" in the available connections

Add Attio connection

### Step 2: Configure connection with OAuth

When you select Attio, you'll be prompted to authenticate via OAuth, which redirects you to Attio's authorization page:

Attio App Request

Follow the OAuth flow to grant Hypermode access to your Attio workspace. OAuth provides secure authentication without exposing your API credentials, which are never stored directly in Hypermode. You'll be redirected back to Hypermode once authorization is complete.

## Understanding Attio's data model

Attio uses a flexible data model that includes:

* **Companies**: Organizations and account details
* **Deals**: Sales opportunities and their progress through pipelines
* **Custom Objects**: Any custom data types you've created
* **Lists**: Collections of records with shared characteristics
* **Attributes**: Custom fields that can be added to any object type

This flexibility makes Attio perfect for:

* Complex sales pipeline management
* Detailed customer relationship tracking
* Custom workflow automation
* Advanced reporting and analytics

## Testing the connection

### Test 1: Search for existing companies

Start a new thread with your agent and test the connection:

```text
Can you show me the first 10 companies in our Attio CRM?
```
Search companies result

### Test 2: Create a new company

Try adding a new company to your CRM:

```text
Introspect the workspace and create a new company in Attio with the following details:

Name: Tech Solutions Inc
Website (domain): techsolutions.com
Industry/Category: Software
Employee Range: 50-100
Description: A leading provider of innovative tech solutions.
Primary Location: San Francisco, CA
```

Create company

### Test 3: Create and manage a deal

Create a sales opportunity and track its progress:

```text
Create a new deal in Attio:
- Deal name: "Q1 Enterprise Software License"
- Company: Tech Solutions Inc
- Value: $50,000
- Stage: Discovery
```

Create deal

### Test 4: Add a note to a deal

Record progress as a deal moves through your pipeline:

```text
Add a note about the "Enterprise Software License - TechCorp" deal that the demo completed yesterday.
```

Add note

## What you can do

With your Attio connection established, your agent can:

* **Manage companies**: Create, update, and search for organizations and account details
* **Track deals**: Create opportunities, update pipeline stages, and manage deal values
* **Organize data**: Use lists and custom attributes to categorize records
* **Search and filter**: Find records based on various criteria
* **Generate reports**: Analyze pipeline health and company data
* **Integrate workflows**: Combine CRM operations with other tools like email, calendar, and project management

The Attio connection provides access to a comprehensive set of tools for CRM management focused on companies and deals. The available tools may vary as we optimize the connection for the most commonly used operations.

## Troubleshooting

### Common issues

#### OAuth authentication failed

* Ensure you have admin permissions in your Attio workspace
* Try clearing your browser cache and cookies
* Make sure you're logged into the correct Attio workspace during the OAuth flow

#### "Workspace not found" error

* Confirm you completed the OAuth flow successfully
* Check that your workspace domain is spelled correctly
* Verify you have access to the workspace

#### Record creation failures

* Ensure required fields are provided for the object type
* Check that attribute names match exactly (case-sensitive)
* Verify that enum values are valid for dropdown fields

## Learn more

* [Attio Documentation](https://developers.attio.com/)
* [Attio API Reference](https://developers.attio.com/reference)
* [CRM Best Practices](https://attio.com/blog)

Combine Attio with other Hypermode connections to build powerful sales workflows. For example, use Gmail to automatically log email interactions, or Google Calendar to schedule follow-up meetings directly from deal records.
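One concrete way to try that: a thread prompt that chains two connections (this assumes you've also added the Gmail connection to the same agent, and reuses the sample deal created above):

```text
Find the "Q1 Enterprise Software License" deal in Attio, then draft a
follow-up email in Gmail to the primary contact summarizing the current
deal stage and next steps.
```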
---

# Source: https://docs.hypermode.com/dgraph/enterprise/audit-logs.md

# Audit Logging

> With an Enterprise license, Dgraph can generate audit logs that let you track and audit all requests (queries and mutations).

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

As a database administrator, you count on being able to audit access to your database. With a Dgraph [enterprise license](./license), you can enable audit logging so that all requests are tracked and available for use in security audits.

When audit logging is enabled, the following information is recorded about the queries and mutations (requests) sent to your database:

* Endpoint
* Logged-in user name
* Server host address
* Client host address
* Request body (truncated at 4 KB)
* Timestamp
* Namespace
* Query parameters (if provided)
* Response status

## Audit log scope

Most queries and mutations sent to Dgraph Alpha and Dgraph Zero are logged. Specifically, the following are logged:

* HTTP requests sent over Dgraph Zero's 6080 port and Dgraph Alpha's 8080 port (except as noted below)
* gRPC requests sent over Dgraph Zero's 5080 port and Dgraph Alpha's 9080 port (except the Raft, health and Dgraph Zero stream endpoints noted below)

The following aren't logged:

* Responses to queries and mutations
* HTTP requests to the `/health`, `/state` and `/jemalloc` endpoints
* gRPC requests to Raft endpoints (see [Raft](/dgraph/concepts/raft))
* gRPC requests to health endpoints (`Check` and `Watch`)
* gRPC requests to Dgraph Zero stream endpoints (`StreamMembership`, `UpdateMembership`, `Oracle`, `Timestamps`, `ShouldServe` and `Connect`)

## Audit log files

All audit logs are in JSON format. Dgraph has a rolling-file policy for audit logs: the current log file is used until it reaches a configurable size (default: 100 MB), and is then replaced by a new current audit log file. Older audit log files are retained for a configurable number of days (default: 10 days).

For example, by sending this query:

```graphql
{
  q(func: has(actor.film)){
    count(uid)
  }
}
```

You'll get the following JSON audit log entry:

```json
{
  "ts": "2021-03-22T15:03:19.165Z",
  "endpoint": "/query",
  "level": "AUDIT",
  "user": "",
  "namespace": 0,
  "server": "localhost:7080",
  "client": "[::1]:60118",
  "req_type": "Http",
  "req_body": "{\"query\":\"{\\n q(func: has(actor.film)){\\n count(uid)\\n }\\n}\",\"variables\":{}}",
  "query_param": { "timeout": ["20s"] },
  "status": "OK"
}
```

## Enable audit logging

You can enable audit logging on a Dgraph Alpha or Dgraph Zero node by using the `--audit` flag to specify semicolon-separated options for audit logging. When you enable audit logging, a few options are available for you to configure:

* `compress=true` tells Dgraph to use compression on older audit log files
* `days=20` tells Dgraph to retain older audit logs for 20 days, rather than the default of 10 days
* `output=/path/to/audit/logs` tells Dgraph which path to use for storing audit logs
* `encrypt-file=/encryption/key/path` tells Dgraph to encrypt older log files with the specified key
* `size=200` tells Dgraph to store audit logs in 200 MB files, rather than the default of 100 MB files

You can see how to use these options in the example commands below.

## Example commands

The commands in this section show you how to enable and configure audit logging.

### Enable audit logging

In the simplest scenario, you can enable audit logging by simply specifying the directory to store audit logs on a Dgraph Alpha node:

```sh
dgraph alpha --audit output=audit-log-dir
```

You could extend this command a bit to specify larger log files (200 MB, instead of 100 MB) and retain them for longer (15 days instead of 10 days):

```sh
dgraph alpha --audit "output=audit-log-dir;size=200;days=15"
```
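Dgraph Alpha flags can generally also be supplied through `DGRAPH_ALPHA_*` environment variables. A hedged sketch following that convention (the exact variable name is an assumption; confirm it against your Dgraph version):

```sh
# Assumption: --audit maps to DGRAPH_ALPHA_AUDIT, like other Alpha flags
export DGRAPH_ALPHA_AUDIT="output=audit-log-dir;size=200;days=15"
dgraph alpha
```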
### Enable audit logging with compression

In many cases you want to compress older audit logs to save storage space. You can do this with a command like the following:

```sh
dgraph alpha --audit "output=audit-log-dir;compress=true"
```

### Enable audit logging with encryption

You can also enable encryption of audit logs to protect sensitive information that might exist in logged requests. You can do this, along with compression, with a command like the following:

```sh
dgraph alpha --audit "output=audit-log-dir;compress=true;encrypt-file=/path/to/encrypt/key/file"
```

### Decrypt audit logs

To decrypt encrypted audit logs, you can use the `dgraph audit decrypt` command, as follows:

```sh
dgraph audit decrypt --encryption_key_file=/path/encrypt/key/file --in /path/to/encrypted/log/file --out /path/to/output/file
```

## Next steps

To learn more about the logging features of Dgraph, see [Logging](/dgraph/admin/logs).

---

# Source: https://docs.hypermode.com/dgraph/guides/to-do-app/auth-rules.md

# Authorization Rules

> Use the @auth directive to limit access to the user's to-dos. This step in the GraphQL tutorial walks you through authorization rules.

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

This is part 3 of [Building a To-Do List App](./introduction). In the current state of the app, we can view anyone's to-dos, but we want our to-dos to be private to us. Let's do that using the `@auth` directive.

We want to limit each user to their own to-dos, so we define a query rule in `@auth` that filters on the user's username. Let's update the schema to include that, and then understand what's happening there:

```graphql
type Task @auth(
  query: { rule: """
    query($USER: String!) {
      queryTask {
        user(filter: { username: { eq: $USER } }) {
          __typename
        }
      }
    }
  """ }
) {
  id: ID!
  title: String! @search(by: [fulltext])
  completed: Boolean! @search
  user: User!
}

type User {
  username: String! @id @search(by: [hash])
  name: String
  tasks: [Task] @hasInverse(field: user)
}
```

Resubmit the updated schema:

```
curl -X POST localhost:8080/admin/schema --data-binary '@schema.graphql'
```

Now let's see what the definition inside the `@auth` directive means. First, notice that this rule applies to `query`; we can similarly define rules for `add`, `update`, and so on (see the sketch after this explanation).

```graphql
query ($USER: String!) {
  queryTask {
    user(filter: { username: { eq: $USER } }) {
      __typename
    }
  }
}
```

The rule takes a parameter, `USER`, which we use to filter the to-dos by user. `queryTask` returns an array of tasks, each of which also contains its `user`; since we want to filter on that user, we compare the user's `username` with the `USER` value passed to the auth rule (the logged-in user).
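For illustration, here's what a matching `add` rule could look like, so that users can only create tasks attached to their own account. This sketch isn't part of the tutorial's schema; it reuses the same query pattern, which Dgraph evaluates against the newly added data:

```graphql
type Task @auth(
  add: { rule: """
    query($USER: String!) {
      queryTask {
        user(filter: { username: { eq: $USER } }) {
          __typename
        }
      }
    }
  """ }
) {
  id: ID!
  title: String!
  completed: Boolean!
  user: User!
}
```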
Now, you might wonder how we pass a value for the `USER` parameter in the auth rule, since it's not something you can call directly. The answer is simple: the value is extracted from a JWT that we pass to our GraphQL API as a header, and the rule is then executed with it. Let's see how to do that in the next step, using Auth0 as an example.

---

# Source: https://docs.hypermode.com/dgraph/graphql/security/auth-tips.md

# Authorization tips

> Given an authentication mechanism and a signed JSON Web Token (JWT), the `@auth` directive tells Dgraph how to apply authorization.

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

## Public data

Many apps have data that can be accessed by anyone, logged in or not. That also works nicely with Dgraph auth rules.

For example, in Twitter, StackOverflow, etc. you can see authors and posts without being signed in - but you'd need to be signed in to add a post. With Dgraph auth rules, if a type doesn't have, for example, a `query` auth rule, or the auth rule doesn't depend on a JWT value, then the data can be accessed without a signed JWT.

For example, the to-do app might allow anyone, logged in or not, to view any author, but not make any mutations unless logged in as the author or an admin. That would be achieved by rules like the following:

```graphql
type User @auth(
  # no query rule
  add: { rule: "{$ROLE: { eq: \"ADMIN\" } }" },
  update: ...
  delete: ...
) {
  username: String! @id
  todos: [Todo]
}
```

Maybe some to-dos can be marked as public, and users who aren't logged in can see those:

```graphql
type Todo @auth(
  query: { or: [
    # you are the author
    { rule: ... },
    # or, the todo is marked as public
    { rule: """query {
        queryTodo(filter: { isPublic: { eq: true } } ) {
          id
        }
      }"""}
  ]}
) {
  ...
  isPublic: Boolean
}
```

Because the rule doesn't depend on a JWT value, it can be successfully evaluated for users who aren't logged in.

Ensuring that requests carry an authenticated JWT, with no further restrictions, can be done by arranging for the JWT to contain a value like `"isAuthenticated": "true"`. For example,

```graphql
type User @auth(query: { rule: "{$isAuthenticated: { eq: \"true\" } }" }) {
  username: String! @id
  todos: [Todo]
}
```

specifies that only authenticated users can query other users.

### Blocking an operation for everyone

If the `ROLE` claim isn't present in a JWT, any rule that relies on `ROLE` simply evaluates to false. You can also disallow some queries and mutations entirely by using a condition on a non-existent claim. If you know that your JWTs never contain the claim `DENIED`, then a rule such as

```graphql
type User @auth(
  delete: { rule: "{$DENIED: { eq: \"DENIED\" } }"}
) {
  ...
}
```

blocks the delete operation for everyone.
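To check a rule end to end, you can send the same request with and without a token and compare results. A minimal sketch, assuming a Dgraph GraphQL endpoint on `localhost:8080`, a JWT in the `$JWT` shell variable, and `X-Auth-Token` as the header name configured in your `# Dgraph.Authorization` line:

```sh
# Authenticated request: rules that depend on JWT claims can pass
curl -s -X POST localhost:8080/graphql \
  -H 'Content-Type: application/json' \
  -H "X-Auth-Token: $JWT" \
  -d '{"query":"{ queryUser { username } }"}'

# Anonymous request: only rules that don't depend on JWT values can pass
curl -s -X POST localhost:8080/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ queryUser { username } }"}'
```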
---

# Source: https://docs.hypermode.com/dgraph/graphql/schema/directives/auth.md

# @auth

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

`@auth` allows you to define how to apply authorization rules on the queries/mutations for a type.

Refer to [graphql endpoint security](/dgraph/graphql/security/overview), [Role-Based Access Control (RBAC) rules](/dgraph/graphql/security/rbac-rules) and [Graph traversal rules](/dgraph/graphql/security/graphtraversal-rules) for details.

The `@auth` directive isn't supported on `union` and `@remote` types.

---

# Source: https://docs.hypermode.com/dgraph/guides/to-do-app/auth0-jwt.md

# Using Auth0

> Get an app running with Auth0. This step in the GraphQL tutorial walks you through using Auth0 in an example to-do app tutorial.

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

This is part 4 of [Building a To-Do List App](./introduction).

Let's start by going to our Auth0 dashboard, where we can see the app which we have already created and used in our frontend app.

Dashboard

Now we want to use the JWT that Auth0 generates, but we also need to add custom claims to that token, which will be used by our auth rules. We can use Auth0 "Rules" (left sidebar on the dashboard page, under "Auth Pipeline") to add custom claims to a token. Let's create a new empty rule.

Rule

Replace the content with the following:

```javascript
function (user, context, callback) {
  const namespace = "https://dgraph.io/jwt/claims";
  context.idToken[namespace] = {
    'USER': user.email,
  };
  return callback(null, user, context);
}
```

In the above function, we are just adding a custom claim to the token with a field named `USER`, which (as you may recall from the last step) is used in our auth rules, so it needs to match that name exactly.

Now let's go to the `Settings` of our Auth0 app, then scroll down to `Advanced Settings` to check the JWT signature algorithm (OAuth tab) and get the certificate (Certificates tab). We will be using `RS256` in this example, so make sure it's set to that, then copy the certificate, which we will use to get the public key. Use the download certificate button there to get the certificate in `PEM` format.

Certificate

Now let's run a command to get the public key from it, which we will add to our schema. Just change the `file_name` and run the command:

```
openssl x509 -pubkey -noout -in file_name.pem
```

Copy the public key and now let's add it to our schema. To do that, we add something like this to the bottom of our schema file:

```
# Dgraph.Authorization {"VerificationKey":"","Header":"X-Auth-Token","Namespace":"https://dgraph.io/jwt/claims","Algo":"RS256","Audience":[""]}
```

A quick explanation of each part: the line starts with `# Dgraph.Authorization`. Next is the `VerificationKey`: update `` with your public key, within the quotes, and make sure to have it on a single line, adding `\n` wherever needed (see the one-liner after this section). Then set `Header` to the name of the header (`X-Auth-Token` here, but it can be anything) which will be used to send the value of the JWT. Next is the `Namespace` name, `https://dgraph.io/jwt/claims` (again, this can be anything, it just needs to match the name specified in Auth0). Then comes `Algo`, which is `RS256`, the JWT signature algorithm (another option is `HS256`, but remember to use the same algorithm in Auth0). Finally, for the `Audience`, add your app's Auth0 client ID.

The updated schema will look something like this (update the public key with your key):

```graphql
type Task @auth(
  query: { rule: """
    query($USER: String!) {
      queryTask {
        user(filter: { username: { eq: $USER } }) {
          __typename
        }
      }
    }
  """ }
) {
  id: ID!
  title: String! @search(by: [fulltext])
  completed: Boolean! @search
  user: User!
}

type User {
  username: String! @id @search(by: [hash])
  name: String
  tasks: [Task] @hasInverse(field: user)
}

# Dgraph.Authorization {"VerificationKey":"","Header":"X-Auth-Token","Namespace":"https://dgraph.io/jwt/claims","Algo":"RS256","Audience":[""]}
```

Resubmit the updated schema:

```
curl -X POST localhost:8080/admin/schema --data-binary '@schema.graphql'
```
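As referenced above, the `VerificationKey` needs the PEM content on a single line with `\n` escapes. A hedged one-liner for that step (assuming the public key from the `openssl` command was saved to a hypothetical `public_key.pem`):

```sh
# Print each non-empty line followed by a literal \n, producing one long line
awk 'NF {printf "%s\\n", $0}' public_key.pem
```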
Let's get that token and see what it contains, then update the frontend accordingly. To do this, let's start our app again:

```
npm start
```

Now open a browser window, navigate to [http://localhost:3000](http://localhost:3000) and open the developer tools. Go to the `network` tab and find a call called `token` to get your JWT from its response JSON (field `id_token`).

Token

Now go to [jwt.io](https://jwt.io) and paste your token there.

jwt

The token also includes our custom claim, like below:

```json
{
  "https://dgraph.io/jwt/claims": {
    "USER": "vardhanapoorv"
  },
  ...
}
```

Now you can check whether the auth rule that we added is working as expected. Open a GraphQL tool (Insomnia, GraphQL Playground) and add the URL, along with the header `X-Auth-Token` and its value as the JWT. Let's try the query to see the to-dos; only the to-dos the logged-in user created should be visible:

```graphql
query {
  queryTask {
    title
    completed
    user {
      username
    }
  }
}
```

The above should give you only your to-dos, which verifies that our auth rule worked!

Now let's update our frontend app to include the `X-Auth-Token` header, with its value as the JWT from Auth0, when sending a request. To do this, we need to update the Apollo client setup to include the header while sending requests, and we need to get the JWT from Auth0. The value we want is in the field `idToken` from Auth0. We get that by quickly updating `react-auth0-spa.js` to get `idToken` and pass it as a prop to our `App`.

```javascript
...
const [popupOpen, setPopupOpen] = useState(false);
const [idToken, setIdToken] = useState("");
...
if (isAuthenticated) {
  const user = await auth0FromHook.getUser();
  setUser(user);
  const idTokenClaims = await auth0FromHook.getIdTokenClaims();
  setIdToken(idTokenClaims.__raw);
}
...
const user = await auth0Client.getUser();
const idTokenClaims = await auth0Client.getIdTokenClaims();
setIdToken(idTokenClaims.__raw);
...
{children}
...
```

Check the updated file [here](https://github.com/dgraph-io/graphql-sample-apps/blob/c94b6eb1cec051238b81482a049100b1cd15bbf7/todo-app-react/src/react-auth0-spa.js).

Now let's use that token while creating an Apollo client instance and give it to a header, `X-Auth-Token` in our case. Let's update our `src/App.js` file:

```javascript
...
import { useAuth0 } from "./react-auth0-spa";
import { setContext } from "apollo-link-context";

// Updated to take token
const createApolloClient = token => {
  const httpLink = createHttpLink({
    uri: config.graphqlUrl,
    options: {
      reconnect: true,
    },
  });

  // Add header
  const authLink = setContext((_, { headers }) => {
    // return the headers to the context so httpLink can read them
    return {
      headers: {
        ...headers,
        "X-Auth-Token": token,
      },
    };
  });

  // Include header
  return new ApolloClient({
    link: authLink.concat(httpLink),
    cache: new InMemoryCache(),
  });
};

// Get token from props and pass to function
const App = ({ idToken }) => {
  const { loading } = useAuth0();
  if (loading) {
    return <div>Loading...</div>;
  }

  const client = createApolloClient(idToken);
...
```

Check the updated file [here](https://github.com/dgraph-io/graphql-sample-apps/blob/c94b6eb1cec051238b81482a049100b1cd15bbf7/todo-app-react/src/App.js). Refer to this step in [GitHub](https://github.com/dgraph-io/graphql-sample-apps/commit/c94b6eb1cec051238b81482a049100b1cd15bbf7).

Let's now start the app:

```
npm start
```

Now you should have an app running with Auth0!

---

# Source: https://docs.hypermode.com/modus/authentication.md

# Authentication

> Protect your API

It is easy to secure your Modus app with authentication. Modus currently supports bearer token authentication, with additional authentication methods coming soon.

## Bearer tokens

Modus supports authentication via the `Authorization` header in HTTP requests. You can use the `Authorization` header to pass a bearer JSON Web Token (JWT) to your Modus app. The token authenticates the user and authorizes access to resources.

To use bearer token authentication for your Modus app, be sure to set the `auth` property on your endpoint to `"bearer-token"` in your [app manifest](/modus/app-manifest#endpoints).

### Setting verification keys

Once set, Modus verifies tokens passed in the `Authorization` header of incoming requests against the public keys you provide. To enable this verification, you must pass the public keys using the `MODUS_PEMS` or `MODUS_JWKS_ENDPOINTS` environment variable. The value of the variable should be a JSON object with the public keys as key-value pairs.

This is an example of how to set the `MODUS_PEMS` and `MODUS_JWKS_ENDPOINTS` environment variables:

```sh MODUS_PEMS
export MODUS_PEMS='{"key1":"-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwJ9z1z1z1z1z1z\n-----END PUBLIC KEY-----"}'
```

```sh MODUS_JWKS_ENDPOINTS
export MODUS_JWKS_ENDPOINTS='{"my-auth-provider":"https://myauthprovider.com/application/o/myappname/.well-known/jwks.json"}'
```

When deploying your Modus app on Hypermode, bearer token authentication is automatically set up.

### Verifying tokens

To verify the token, Modus uses the public keys passed via the `MODUS_PEMS` environment variable. If the token is verifiable with any of the verification keys provided, Modus decodes the JWT and passes the decoded claims as an environment variable.

### Accessing claims

The decoded claims are available through the `auth` API in the Modus SDK. To access the decoded claims, use the `getJWTClaims()` function. The function allows you to pass in a class to deserialize the claims into, and returns an instance of that class populated with the claims. This lets you access the claims in the token and use them to authenticate and authorize users in your Modus app.

```go Go
import "github.com/hypermodeinc/modus/sdk/go/pkg/auth"

type ExampleClaims struct {
  Sub string `json:"sub"`
  Exp int64  `json:"exp"`
  Iat int64  `json:"iat"`
}

func GetClaims() (*ExampleClaims, error) {
  return auth.GetJWTClaims[*ExampleClaims]()
}
```

```ts AssemblyScript
import { auth } from "@hypermode/modus-sdk-as"

@json
export class ExampleClaims {
  public sub!: string
  public exp!: i64
  public iat!: i64
}

export function getClaims(): ExampleClaims {
  return auth.getJWTClaims<ExampleClaims>()
}
```

---

# Source: https://docs.hypermode.com/agents/available-connections.md

# Available Connections

Below is a curated sample. Search the catalog of Connections in your Workspace or agent card for the full list.
Attio Attio
Customer relationship magic. Powerful, flexible, and data-driven, Attio makes it easy to build the exact CRM your business needs.
62 tools Details
Stripe Stripe
Stripe powers online and in-person payment processing and financial solutions for businesses of all sizes.
46 tools Details
Neo4j AuraDB Neo4j AuraDB
Fully managed graph database.
3 tools Details
MotherDuck MotherDuck
Serverless analytics platform powered by DuckDB
1 tool Details
Supabase Supabase
Supabase is the PostgreSQL development platform.
16 tools Details
MongoDB MongoDB
MongoDB is an open source NoSQL database management program.
8 tools Details
### Artificial Intelligence (AI)
OpenAI (ChatGPT) OpenAI (ChatGPT)
OpenAI is an AI research and deployment company with the mission to ensure that artificial general intelligence benefits all of humanity. They are the makers of popular models like ChatGPT, DALL-E, and Whisper.
Anthropic (Claude) Anthropic (Claude)
AI research and products that put safety at the frontier. Introducing Claude, a next-generation AI assistant for your tasks, no matter the scale.
Azure OpenAI Azure OpenAI
Apply large language models and generative AI to a variety of use cases through Microsoft's enterprise-grade platform.
wit.ai wit.ai
Natural Language for Developers - Build natural language interfaces and conversational experiences.
Algorithmia Algorithmia
Algorithmia is a community algorithm development platform for machine learning model deployment and management.
DataRobot DataRobot
Enterprise AI platform for automated machine learning and predictive analytics at scale.
Rev.ai Rev.ai
Accurate Speech-to-Text APIs for all of your speech recognition needs with high-precision transcription.
IBM Cloud - Speech to Text IBM Cloud - Speech to Text
Speech to Text service that converts spoken language into written text with industry-leading accuracy.
302.ai 302.ai
Enterprise AI App Platform for building and deploying AI-powered applications across your organization.
AgentQL AgentQL
Make the Web AI-Ready with intelligent web automation and data extraction capabilities.
### Business Management
IFTTT IFTTT
Every thing works better together - Automate workflows and connect services to streamline business operations.
Algolia Algolia
Algolia helps businesses across industries quickly create relevant, scalable, and lightning fast Search and Discovery experiences.
Microsoft Dynamics 365 Business Central API Microsoft Dynamics 365 Business Central API
Run your entire business with a single solution that integrates finance, operations, sales, and customer service.
ERPNext ERPNext
Free and open-source integrated Enterprise Resource Planning software for comprehensive business management.
You Need a Budget You Need a Budget
Money doesn't have to be messy. The YNAB budgeting app helps you organize finances, demolish debt, and reach financial goals faster.
Quipu Quipu
Online bookkeeping service for small businesses to manage accounting and financial records efficiently.
### CRM
Attio Attio
Customer relationship magic. Powerful, flexible, and data-driven, Attio makes it easy to build the exact CRM your business needs.
Salesforce Salesforce
Cloud-based customer relationship management (CRM) platform that helps businesses manage sales, marketing, customer support, and other business activities, ultimately aiming to improve customer relationships and streamline operations.
HubSpot HubSpot
HubSpot's CRM platform contains the marketing, sales, service, operations, and website-building software you need to grow your business.
Zoho CRM Zoho CRM
Zoho CRM is an online Sales CRM software that manages your sales, marketing, and support in one CRM platform.
Pipedrive Pipedrive
Pipedrive is the easy-to-use, #1 user-rated CRM tool. Get more qualified leads and grow your business with Pipedrive's sales CRM.
OneSignal (REST API) OneSignal (REST API)
Push messaging platform for engaging customers across mobile, web, and email channels with targeted notifications.
Adversus Adversus
Dialer Software for sales teams to manage outbound calling campaigns and customer interactions.
Contacts+ Contacts+
Cross-Platform Contacts App for managing and synchronizing contact information across devices.
Flexie Flexie
Flexible CRM software solutions and automation tools for modern business relationship management.
FullContact FullContact
Identity Resolution Platform for enriching customer data and building comprehensive contact profiles.
Lusha Lusha
B2B Lead Enrichment in a Click - Find contact information and business insights for prospects.
### Commerce
CoinMarketCap CoinMarketCap
CoinMarketCap is a website that provides cryptocurrency market cap rankings, charts, and more.
Stripe Stripe
Stripe powers online and in-person payment processing and financial solutions for businesses of all sizes.
Pinterest Pinterest
Pinterest is a visual discovery engine for finding ideas like recipes, home and style inspiration, and more.
Shopify Shopify
Shopify is a complete commerce platform that lets anyone start, manage, and grow a business. You can use Shopify to build an online store, manage sales, market to customers, and accept payments in digital and physical locations.
WooCommerce WooCommerce
WooCommerce is the open-source ecommerce platform for WordPress.
Coinbase Coinbase
Explore crypto like Bitcoin, Ethereum, and Dogecoin. Simply and securely buy, sell, and manage hundreds of cryptocurrencies.
ShipStation ShipStation
Import, manage and ship your orders with ShipStation for streamlined e-commerce fulfillment.
Xero Accounting Xero Accounting
Accounting Software for small businesses to manage finances, invoicing, and bookkeeping.
Zoho Books Zoho Books
Online accounting software for managing business finances, invoicing, and expense tracking.
QuickBooks QuickBooks
QuickBooks Online is designed to help you manage your business finances with ease.
Chargebee Chargebee
Automated Subscription Billing Software for recurring revenue businesses and subscription management.
PayPal PayPal
Send Money, Pay Online or Set Up a Merchant Account - Global payment processing solution.
Memberstack Memberstack
User login & payments for modern websites with membership and subscription management.
Gorgias Gorgias
Gorgias is the ecommerce helpdesk that turns your customer service into a profit center.
Invoice Ninja Invoice Ninja
Open-source online invoicing app for freelancers & businesses to manage billing and payments.
Shipengine Shipengine
Shipping API & multi carrier shipping system for e-commerce logistics and delivery management.
Chargify Chargify
Billing & Revenue Management for B2B SaaS companies with subscription billing automation.
Moneybird Moneybird
Accounting software for small businesses to manage finances and commerce operations.
BigCommerce BigCommerce
Ecommerce for a New Era - Enterprise e-commerce platform for growing businesses.
Printful Printful
Easy print-on-demand drop shipping and fulfillment warehouse services for custom products.
### Communication
Discord Discord
Create a Discord source that emits messages from your guild to a Hypermode workflow.
Gmail Gmail
Gmail offers private and secure email by Google at no cost, for business and consumer accounts.
Microsoft Outlook Microsoft Outlook
Microsoft Outlook lets you bring all your email accounts and calendars in one convenient spot.
Slack Slack
Slack is a messaging platform for team communication.
Telegram Telegram
Telegram is a cloud-based, cross-platform, encrypted instant messaging (IM) service.
Amazon SES Amazon SES
Amazon SES is a cloud-based email service provider that can integrate into any application for high volume email automation.
Microsoft Teams Microsoft Teams
Microsoft Teams has communities, events, chats, channels, meetings, storage, tasks, and calendars in one place.
Zoom Zoom
Zoom is the leader in modern enterprise video communications, with an easy, reliable cloud platform for video and audio conferencing, chat, and webinars.
Twilio Twilio
Twilio is a cloud communications platform for building SMS, Voice & Messaging applications on an API built for global scale.
Intercom Intercom
Intercom is the only solution that combines an AI chatbot, help desk, and proactive support—so you can keep costs low, support teams happy, and customers satisfied.
Line Line
Line is a communication app that connects people, services, and information.
Pushbullet Pushbullet
Pushbullet connects your devices, making them feel like one.
ClickSend SMS ClickSend SMS
Business Communications. Solved.
WhatsApp Business WhatsApp Business
WhatsApp Business products support businesses from large to small. Engage audiences, accelerate sales and drive better customer support outcomes on the platform with more than 2 billion users around the world.
Bird Bird
Business in a box. Grow, Manage, Automate your company. Everything you need in one app.
Waboxapp Waboxapp
API for WhatsApp and WhatsApp Business.
Zoho Mail Zoho Mail
Zoho Mail offers secure business email for your organization. Host your business email on a secure, encrypted, privacy-guaranteed, and ad-free email service, and add a professional touch to every email that goes out.
Infobip Infobip
Infobip is a multi-channel communications platform.
RingCentral RingCentral
Experience Intelligent Phone, Meetings, Contact Center, and AI Solutions with RingCentral, the complete cloud communications platform.
Drift Drift
The New Way Businesses Buy From Businesses.
Plivo Plivo
SMS API and Voice API platform.
Cisco Webex Cisco Webex
Video conferencing, online meetings, screen share, and webinars.
Textlocal Textlocal
Bulk SMS Marketing Service for Business | Send SMS messages at scale.
### Data analytics
Alpha Vantage Alpha Vantage
Free stock APIs in JSON & Excel.
Google Analytics Google Analytics
Measure and report on user activity across websites, apps, and devices.
People Data Labs People Data Labs
The source of truth for person data.
Segment Segment
Customer data platform.
Clearbit Clearbit
B2B Lead Data Enrichment, Qualification & Scoring.
RocketReach RocketReach
Accurate, up-to-date contact info.
Baremetrics Baremetrics
Subscription analytics and insights for growing businesses.
Accuranker Accuranker
World's fastest rank tracker.
Datawaves Datawaves
Customer-Facing Analytics.
MonkeyLearn MonkeyLearn
Text Analysis.
AccuWeather AccuWeather
Local, National, & Global Daily Weather Forecast.
Addressfinder Addressfinder
A reliably smart, reliably accurate data quality platform.
Adyntel Adyntel
Ad Intelligence: Gain insights, stay ahead, and optimize your strategy with our comprehensive ad intelligence API.
Akkio Akkio
AI Data Platform for Agencies.
Amplitude Amplitude
Build better products by turning your user data into meaningful insights, using Amplitude's digital analytics platform and experimentation tools.
Automatic Data Extraction Automatic Data Extraction
Instantly access web data with our patented AI-powered automated extraction API.
Axesso Data Service - Amazon Axesso Data Service - Amazon
Axesso is your real-time data API to collect structured information from various sources like Amazon, Walmart, Otto, Facebook, Instagram and many more.
Big Data Cloud Big Data Cloud
BigData Cloud provides the industry's most performant, scalable and flexible APIs. Built for eCommerce, ad agencies, financial institutions, SaaS, and CRM systems.
BigDataCorp BigDataCorp
The data platform for the digital age! The best data on the market in an ethical and transparent way.
BuiltWith BuiltWith
Find out what websites are Built With.
### Database
MotherDuck MotherDuck
Serverless analytics platform powered by DuckDB
1 tool Details
Neo4j AuraDB Neo4j AuraDB
Fully managed graph database.
3 tools Details
Supabase Supabase
Supabase is the PostgreSQL development platform.
16 tools Details
Weaviate Weaviate
Weaviate is an open-source vector database.
MongoDB MongoDB
MongoDB is an open source NoSQL database management program.
8 tools Details
### Development tools
AgentQL AgentQL
Make the Web AI-Ready.
Browserbase Browserbase
A web browser for AI agents & applications.
Exa Exa
Exa is an AI-powered search and retrieval platform.
GitHub GitHub
GitHub is a web-based Git repository hosting service.
Hyperbrowser Hyperbrowser
Cloud browsers for your AI agents.
Jira Jira
Jira is the #1 agile project management tool used by teams to plan, track, release, and support great software with confidence.
Ref
Ref is a service for finding references.
### Entertainment
Google Maps Google Maps (Places API)
Find what you need by getting the latest information on businesses and other important places with Google Maps.
Spotify Spotify
Spotify is a digital music service that gives you access to millions of songs.
Strava Strava
Designed by athletes, for athletes, Strava's mobile app and website connect millions of runners and cyclists through the sports they love.
### File storage
Box Box
Platform for secure content management, workflow, and collaboration.
Dropbox Dropbox
Dropbox gives you secure access to all your files and lets you collaborate from any device.
Google Docs Google Docs
Use Google Docs to create, edit and collaborate on online documents.
Google Drive Google Drive
Google Drive lets you store and synchronize files online and access them from anywhere.
### Infrastructure & cloud
Google Google
Internet-related services and products.
Cal.com Cal.com
Scheduling infrastructure for absolutely everyone.
Vercel Vercel
Vercel is a platform for frontend frameworks and static sites.
### Marketing
Ahrefs Ahrefs
SEO tools & resources.
AirOps AirOps
Build and scale LLM-powered workflows and chat assistants using AirOps Studio.
Facebook Pages Facebook Pages
Social media and social networking service.
LinkedIn LinkedIn
LinkedIn is a business and employment-focused social media platform. Manage your professional identity. Build and engage with your professional network. Access knowledge, insights, and opportunities.
Meetup Meetup
Whatever you're looking to do this year, Meetup can help.
Product Hunt Product Hunt
The best new products in tech.
Reddit Reddit
Reddit is a network of communities based on people's interests.
### Productivity
Airtable Airtable
Airtable is a low-code platform to build next-gen apps. Move beyond rigid tools, operationalize your critical data, and reimagine workflows with AI.
Asana Asana
Work anytime, anywhere with Asana. Keep remote and distributed teams, and your entire organization, focused on their goals, projects, and tasks with Asana.
Basecamp Basecamp
Project Management & Team Communication.
CompanyCam CompanyCam
The photo app every contractor needs.
Fireflies Fireflies
Fireflies.ai helps your team transcribe, summarize, search, and analyze voice conversations.
Google Calendar Google Calendar
Google Calendar is a service for creating, managing, and organizing schedules and events.
Google Sheets Google Sheets
Use Google Sheets to create and edit online spreadsheets. Get insights together with secure sharing in real-time and from any device.
iLovePDF iLovePDF
iLovePDF is an online service to work with PDF files completely free and easy to use. Merge PDF, split PDF, compress PDF, office to PDF, and more.
Linear Linear
Linear is an issue tracking tool for software teams.
Microsoft 365 Microsoft 365
Your productivity cloud across work and life.
Microsoft OneDrive Microsoft OneDrive
Microsoft OneDrive lets you store your personal files in one place, share them with others, and get to them from any device.
Microsoft Outlook Calendar Microsoft Outlook Calendar
The calendar and scheduling component of Outlook that's fully integrated with email, contacts, and other features.
Notion Notion
Notion is a service for notes, docs, tasks, and databases.
Teamwork Teamwork
Project management software.
### Sales
Attio Attio
Customer relationship magic. Powerful, flexible, and data-driven, Attio makes it easy to build the exact CRM your business needs.
62 tools Details
Google Contacts Google Contacts
Google Contacts is a contact management service developed by Google. This service is backed by the Google People API.
HubSpot HubSpot
HubSpot's CRM platform contains the marketing, sales, service, operations, and website-building software you need to grow your business.
Salesforce Salesforce
Cloud-based customer relationship management (CRM) platform that helps businesses manage sales, marketing, customer support, and other business activities, ultimately aiming to improve customer relationships and streamline operations.
Need an integration that isn't in the catalog? [Let us know](https://hypermode.com/).

---

# Source: https://docs.hypermode.com/dgraph/self-managed/aws.md

# AWS Deployment

> Deploy your self-hosted Dgraph cluster on Amazon Web Services using Elastic Kubernetes Service (EKS)

## AWS Deployment

Deploy your self-hosted Dgraph cluster on Amazon Web Services using Elastic Kubernetes Service (EKS).

```mermaid
graph TB
  subgraph "AWS Architecture"
    A[Application Load Balancer] --> B[EKS Cluster]
    B --> C[Dgraph Alpha Pods]
    B --> D[Dgraph Zero Pods]
    C --> E[EBS Volumes]
    D --> F[EBS Volumes]
    subgraph "EKS Cluster"
      C
      D
      G[Monitoring]
      H[Ingress Controller]
    end
    I[S3 Backup] --> C
    J[CloudWatch] --> G
  end
```

### 1. Infrastructure Setup

#### EKS Cluster Creation

```bash Create EKS Cluster
aws eks create-cluster \
  --name dgraph-cluster \
  --version 1.28 \
  --role-arn arn:aws:iam::ACCOUNT:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-12345,securityGroupIds=sg-12345
```

```bash Update Kubeconfig
aws eks update-kubeconfig --region us-west-2 --name dgraph-cluster
```

```bash Create Node Group
aws eks create-nodegroup \
  --cluster-name dgraph-cluster \
  --nodegroup-name dgraph-nodes \
  --instance-types t3.xlarge \
  --ami-type AL2_x86_64 \
  --capacity-type ON_DEMAND \
  --scaling-config minSize=3,maxSize=9,desiredSize=6 \
  --disk-size 100 \
  --node-role arn:aws:iam::ACCOUNT:role/NodeInstanceRole
```

#### Storage Class Configuration

```yaml aws-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dgraph-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

### 2. Dgraph Deployment on AWS

```bash
kubectl apply -f aws-storage-class.yaml
```

```bash
helm repo add dgraph https://charts.dgraph.io
helm repo update
```

```bash
kubectl create namespace dgraph
```

```bash
helm install dgraph dgraph/dgraph \
  --namespace dgraph \
  --set image.tag="v23.1.0" \
  --set alpha.persistence.storageClass="dgraph-storage" \
  --set alpha.persistence.size="500Gi" \
  --set zero.persistence.storageClass="dgraph-storage" \
  --set zero.persistence.size="100Gi" \
  --set alpha.replicaCount=3 \
  --set zero.replicaCount=3 \
  --set alpha.resources.requests.memory="8Gi" \
  --set alpha.resources.requests.cpu="2000m"
```

### 3. Load Balancer Configuration

```yaml aws-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dgraph-ingress
  namespace: dgraph
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:REGION:ACCOUNT:certificate/CERT-ID
spec:
  rules:
    - host: dgraph.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dgraph-dgraph-alpha
                port:
                  number: 8080
```
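After the Helm release is installed, a quick sanity check (standard kubectl, nothing AWS-specific) confirms that the pods and services came up:

```bash
# Expect 3 alpha and 3 zero pods in Running state, plus their services
kubectl get pods,svc -n dgraph
```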
---

# Source: https://docs.hypermode.com/dgraph/self-managed/azure.md

# Azure Deployment

> Deploy your self-hosted Dgraph cluster on Microsoft Azure using Azure Kubernetes Service (AKS)

## Azure Deployment

Deploy your self-hosted Dgraph cluster on Microsoft Azure using Azure Kubernetes Service (AKS).

```mermaid
graph TB
  subgraph "Azure Architecture"
    A[Application Gateway] --> B[AKS Cluster]
    B --> C[Dgraph Alpha Pods]
    B --> D[Dgraph Zero Pods]
    C --> E[Azure Disks]
    D --> F[Azure Disks]
    subgraph "AKS Cluster"
      C
      D
      G[Azure Monitor]
      H[Ingress Controller]
    end
    I[Azure Storage] --> C
    J[Azure Monitor] --> G
  end
```

### 1. AKS Cluster Creation

```bash Create Resource Group
az group create --name dgraph-rg --location eastus
```

```bash Create AKS Cluster
az aks create \
  --resource-group dgraph-rg \
  --name dgraph-cluster \
  --node-count 3 \
  --node-vm-size Standard_D4s_v3 \
  --node-osdisk-size 100 \
  --enable-addons monitoring \
  --generate-ssh-keys
```

```bash Get Credentials
az aks get-credentials --resource-group dgraph-rg --name dgraph-cluster
```

```bash Create Storage Class
kubectl apply -f - <<EOF
# Assumed manifest: the original was truncated here; this reconstruction
# mirrors the AWS storage class above, using the Azure Disk CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dgraph-storage
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```

### 2. Deploy Dgraph on AKS

```bash
# Create namespace
kubectl create namespace dgraph

# Deploy with Helm
helm install dgraph dgraph/dgraph \
  --namespace dgraph \
  --set alpha.persistence.storageClass="dgraph-storage" \
  --set zero.persistence.storageClass="dgraph-storage" \
  --set alpha.persistence.size="500Gi" \
  --set zero.persistence.size="100Gi"
```

---

# Source: https://docs.hypermode.com/dgraph/ratel/backups.md

# Backups

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

## Backup

Here you find options to back up your server. This backup option is an [Enterprise feature](/dgraph/enterprise/binary-backups).

Ratel Backup

### Creating a backup

Click `Create Backup`. On the dialog box, choose the destination details. After a successful backup, it's listed on the main panel.

---

# Source: https://docs.hypermode.com/dgraph/concepts/badger.md

# Badger

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

[Badger](/badger/overview) is a key-value store developed and maintained by Dgraph. It is also open source, and it's the backing store for Dgraph data.

It is largely transparent to users that Dgraph uses Badger to store data internally. Badger is packaged into the Dgraph binary, and is the persistence layer. However, various configuration settings and log messages may reference Badger, such as cache sizes.

Badger values are `Posting Lists` and indexes. Badger keys are formed by concatenating the predicate and the node UID.

---

# Source: https://docs.hypermode.com/modus/basic-functions.md

# Basic Functions

> Implement simple functions with Modus

We built Hypermode first to make the easy things easy. Here you'll find a collection of examples demonstrating how to implement basic functions using the Modus framework. We designed these examples to help you get started quickly and understand the core concepts of Modus.

## Set up

Before diving into the examples, make sure you have Modus installed and set up. If you haven't done this yet, please refer to the [quickstart guide](/modus/first-modus-agent).

## Basic function implementations

### Hello world

Learn how to create a simple "Hello World" function using Modus. This example covers the basics of setting up a function, deploying it, and invoking it.

### Data processing

Explore how to implement a function that processes data. This example demonstrates how to handle input data, perform operations, and return results.

### API integration

See how to integrate external APIs into your Modus functions. This example shows how to make API calls, handle responses, and use the data in your functions.

### Database operations

Understand how to perform database operations with Modus. This example covers connecting to a database, executing queries, and managing data.
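The pages above describe these examples without inline code. As a flavor of the simplest case, here is a minimal "Hello World" sketch for a Modus Go app (the function and parameter names are illustrative, not taken from the Modus examples):

```go
package main

import "fmt"

// SayHello is an exported function; Modus exposes exported functions
// from the app as callable API operations.
func SayHello(name string) string {
	return fmt.Sprintf("Hello, %s!", name)
}
```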
## Best practices

* **Modular Code**: Keep your code modular and organized to make it easier to maintain and extend.
* **Error Handling**: Implement robust error handling to ensure your functions can gracefully handle unexpected situations.
* **Logging**: Use logging to track the execution of your functions and troubleshoot issues.
* **Testing**: Write tests for your functions to ensure they work as expected and catch potential issues early.

## Additional resources

Once you've mastered the basics, explore adding intelligence to your app with our [AI-enabled examples](/modus/ai-enabled-apps). We hope these examples help you get started with Modus and inspire you to build amazing apps. Happy coding! If you have any questions or need further assistance, join the discussion on our [community forum](https://discord.hypermode.com).

---

# Source: https://docs.hypermode.com/dgraph/guides/get-started-with-dgraph/basic-operations.md

# Get Started with Dgraph - Basic Operations

We're overhauling Dgraph's docs to make them clearer and more approachable. If you notice any issues during this transition or have suggestions, please [let us know](https://github.com/hypermodeinc/docs/issues).

**Welcome to the second tutorial of getting started with Dgraph.**

In the [previous tutorial](./introduction) of getting started, we learned some of the basics of Dgraph, including how to run the database, add new nodes and predicates, and query them back.

Graph

In this tutorial, we'll build the graph shown above and learn more about operations using the UID (Universal Identifier) of the nodes. Specifically, we'll learn about:

* Querying and updating nodes, and deleting predicates, using their UIDs
* Adding an edge between existing nodes
* Adding a new predicate to an existing node
* Traversing the graph

You can see the accompanying video below.