Merge branch 'main' into paritosh/eng-1391
seanparkross authored Jan 23, 2025
2 parents a70a5c6 + 6881936 commit 71ed417
Showing 14 changed files with 331 additions and 4 deletions.
2 changes: 2 additions & 0 deletions Dockerfile
@@ -26,6 +26,8 @@ RUN corepack enable && corepack prepare yarn@stable --activate && yarn set versi
# Build static files
RUN yarn build

# Env vars are in the k8s manifest and ddn-docs secret

EXPOSE 8080

CMD ["yarn", "serve", "-p", "8080", "--host", "0.0.0.0"]
2 changes: 1 addition & 1 deletion docs/architecture/private/self-hosted.mdx
@@ -18,7 +18,7 @@ For customers with strict security and compliance requirements, Private DDN Self
Control Plane and Data Plane on your own infrastructure.

This is a premium offering where the Hasura team helps you set up the entire DDN Platform
organization-wide.
organization-wide. An enterprise license is required for this offering.

If you would like access to Private DDN Self-Hosted, please [contact sales](https://hasura.io/contact-us).

2 changes: 1 addition & 1 deletion docs/cli/commands/ddn_context_get.mdx
@@ -19,7 +19,7 @@ Get the value of a key in the context.
Get the value of a key in the context

```bash
ddn context get <key> (Allowed keys: selfHostedDataPlane, project, supergraph, subgraph, localEnvFile, cloudEnvFile) [flags]
ddn context get <key> (Allowed keys: supergraph, subgraph, localEnvFile, cloudEnvFile, selfHostedDataPlane, project) [flags]
```

## Examples
2 changes: 1 addition & 1 deletion docs/cli/commands/ddn_context_set.mdx
@@ -19,7 +19,7 @@ Set the value of a key in the context.
Set default value of keys to be used in DDN CLI commands

```bash
ddn context set <key> <value> (Allowed keys: project, supergraph, subgraph, localEnvFile, cloudEnvFile, selfHostedDataPlane) [flags]
ddn context set <key> <value> (Allowed keys: supergraph, subgraph, localEnvFile, cloudEnvFile, selfHostedDataPlane, project) [flags]
```

## Examples
2 changes: 1 addition & 1 deletion docs/cli/commands/ddn_context_unset.mdx
@@ -19,7 +19,7 @@ Unset the value of a key in the context.
Unset the value of a key in the context

```bash
ddn context unset <key> (Allowed keys: subgraph, localEnvFile, cloudEnvFile, selfHostedDataPlane, project, supergraph) [flags]
ddn context unset <key> (Allowed keys: supergraph, subgraph, localEnvFile, cloudEnvFile, selfHostedDataPlane, project) [flags]
```

## Examples
116 changes: 116 additions & 0 deletions docs/deployment/region-routing.mdx
@@ -0,0 +1,116 @@
---
title: Region Routing
sidebar_position: 10
description:
  "Understand how to set up region routing in Hasura DDN to achieve efficient data fetching and lower latency. Benefit
  from optimal database performance and global data access with Hasura."
keywords:
- multi-region routing
- region routing
- hasura
- postgreSQL
- data fetching
- latency
- geo-routing
- hasura data connector
- database optimization
seoFrontMatterUpdated: true
---

# Region Routing

## Introduction

With region routing, you can define the deployment configuration of your data connector for different regions.

For data connectors that connect to a data source, e.g. [PostgreSQL](/how-to-build-with-ddn/with-postgresql.mdx), it is
recommended to deploy the connector in the region closest to the data source to ensure efficient communication between
the connector and the data source.

For other data connectors, e.g. [TypeScript](https://hasura.io/connectors/nodejs), it is recommended to deploy the
connector in the region closest to the consumers of the API to ensure efficient communication between the connector and
the Hasura engine.

If you have a distributed data source, multi-region routing ensures that data is fetched from the data source closest
to the user, minimizing request latency, improving the performance of your application, and providing a better user
experience.

See the list of supported regions [below](#regions).

## Single-Region Routing

You can modify the `Connector` object as shown in the highlighted values in the example below to force the deployment
of your data connector to a specific region. If no region is specified, the connector is deployed to one of the
[supported regions](#regions), chosen at random.

```yaml title="For example, in my_subgraph/connector/my_connector/connector.yaml:"
kind: Connector
version: v2
definition:
  name: my_connector
  subgraph: my_subgraph
  source: hasura/connector_name:<version>
  context: .
  #highlight-start
  regionConfiguration:
    - region: <region from the list below>
      mode: ReadWrite
      envMapping:
        <CONNECTOR_ENV_VAR>: # e.g. CONNECTION_URI
          fromEnv: <CONNECTOR_ENV_VAR> # e.g. Env Var set as DB read write URL
  #highlight-end
```
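
As a concrete sketch, the following hypothetical `connector.yaml` pins a PostgreSQL connector to `gcp-us-east4`. The
connector name `my_pg` and the `APP_MY_PG_CONNECTION_URI` environment variable are assumptions for this illustration,
not prescribed names:

```yaml
kind: Connector
version: v2
definition:
  name: my_pg
  subgraph: my_subgraph
  source: hasura/postgres:<version>
  context: .
  regionConfiguration:
    # Deploy next to a primary database hosted in us-east4
    - region: gcp-us-east4
      mode: ReadWrite
      envMapping:
        CONNECTION_URI: # the env var the connector reads
          fromEnv: APP_MY_PG_CONNECTION_URI # the project env var holding the DB URL
```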

## Multi-Region Routing

You can modify the `Connector` object as shown in the highlighted values in the example below to define the deployment
configuration of your connector across multiple regions.

:::note Currently only supported for PostgreSQL

Multi-region routing is currently supported only for the
[PostgreSQL connector](/how-to-build-with-ddn/with-postgresql.mdx).

Support for other data connectors will be added soon.

:::

```yaml title="For example, in my_subgraph/connector/my_connector/connector.yaml:"
kind: Connector
version: v2
definition:
  name: my_connector
  subgraph: my_subgraph
  source: hasura/connector_name:<version>
  context: .
  #highlight-start
  regionConfiguration:
    - region: <region1: region from the list below>
      mode: ReadWrite
      envMapping:
        <CONNECTOR_ENV_VAR>: # e.g. CONNECTION_URI
          fromEnv: <CONNECTOR_ENV_VAR_REGION_1> # e.g. Env Var set as DB read write replica URL in region1
    - region: <region2: region from the list below>
      mode: ReadOnly
      envMapping:
        <CONNECTOR_ENV_VAR>: # e.g. CONNECTION_URI
          fromEnv: <CONNECTOR_ENV_VAR_REGION_2> # e.g. Env Var set as DB read only replica URL in region2
    - region: <region3: region from the list below>
      mode: ReadOnly
      envMapping:
        <CONNECTOR_ENV_VAR>: # e.g. CONNECTION_URI
          fromEnv: <CONNECTOR_ENV_VAR_REGION_3> # e.g. Env Var set as DB read only replica URL in region3
  #highlight-end
```
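
For instance, a hypothetical two-region setup with a read-write primary in the US and a read-only replica in Europe
could look like the following. The connector name `my_pg` and the two `APP_MY_PG_*` environment variables are
assumptions for this sketch:

```yaml
kind: Connector
version: v2
definition:
  name: my_pg
  subgraph: my_subgraph
  source: hasura/postgres:<version>
  context: .
  regionConfiguration:
    # Read-write primary, deployed next to the primary database
    - region: gcp-us-east4
      mode: ReadWrite
      envMapping:
        CONNECTION_URI:
          fromEnv: APP_MY_PG_CONNECTION_URI_US
    # Read-only replica, serving consumers in Europe
    - region: gcp-europe-west1
      mode: ReadOnly
      envMapping:
        CONNECTION_URI:
          fromEnv: APP_MY_PG_CONNECTION_URI_EU
```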

## Supported regions {#regions}

Currently, Hasura DDN supports the following GCP regions for region routing:

- `gcp-asia-south1`
- `gcp-asia-southeast1`
- `gcp-australia-southeast1`
- `gcp-europe-west1`
- `gcp-southamerica-east1`
- `gcp-us-east4`
- `gcp-us-west2`
44 changes: 44 additions & 0 deletions docs/graphql-api/response-size-limit.mdx
@@ -0,0 +1,44 @@
---
sidebar_position: 12
sidebar_label: Response Size Limit
description: "Hasura rejects responses from a connector when their size exceeds a certain limit"
keywords:
- size limit
- response size limit
- connector response
seoFrontMatterUpdated: true
---

# Connector Response Size Limit

The maximum size for a response from a connector is **30 MB**. Beyond this threshold, Hasura rejects the response to
ensure optimal performance and reliable data processing. Keep this constraint in mind when making queries through
Hasura's [GraphQL API](/graphql-api/overview/).

To avoid hitting the response size limit, API consumers are encouraged to use the
[limit argument](/graphql-api/queries/pagination#limit-results) in their queries so they don't over-fetch data from
sources via data connectors.

When a connector response exceeds this size limit, the request results in an
[internal error](/graphql-api/errors#internal-errors) API response:

```json
{
"data": null,
"errors": [
{
"message": "internal error"
}
]
}
```

Hasura users are advised to check the traces for more detailed error information, which includes the actual response
size from the connector.

:::note Response size limit increases

If you require an increase in the response size limit, please reach out to [Hasura support](https://hasura.io/help/) for
assistance.

:::
27 changes: 27 additions & 0 deletions docs/supergraph-modeling/compatibility-config.mdx
@@ -31,6 +31,33 @@ will be disabled. To enable these features, simply increase your `date` to a new
The following is a list of dates at which backwards-incompatible changes were added to Hasura DDN. Projects with
CompatibilityConfig `date`s prior to these dates will have these features disabled.

#### 2025-01-07

##### Disallow duplicate operator definitions for scalar type

A build error is now raised when a scalar type has multiple operator definitions with the same name. For example, if you
have a custom scalar type and define multiple operators with the same name in its boolean expression configuration, the
build will fail. This ensures that operator definitions for scalar types are unique and prevents ambiguity in boolean
expressions.
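
As a minimal sketch, assuming a hypothetical custom scalar `MyScalar` (unrelated fields such as
`dataConnectorOperatorMapping` and `graphql` are elided), a configuration like this now fails the build:

```yaml
kind: BooleanExpressionType
version: v1
definition:
  name: MyScalar_bool_exp
  operand:
    scalar:
      type: MyScalar
      comparisonOperators:
        - name: _eq
          argumentType: MyScalar!
        - name: _eq # duplicate operator name: the build now fails here
          argumentType: MyScalar!
```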

##### Disallow multidimensional arrays in boolean expressions

A build error is now raised when multidimensional arrays (arrays of arrays) are used in boolean expressions. This
restriction applies to both array comparison operators and array relationship fields within boolean expressions.
Previously, such configurations might have been allowed but could lead to runtime errors or undefined behavior. This
change ensures that only single-dimensional arrays can be used in boolean expressions, making the behavior more
predictable and preventing potential runtime issues.
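
For example, a field typed as an array of arrays, as in this hypothetical `ObjectType`, can no longer be referenced in
a boolean expression:

```yaml
kind: ObjectType
version: v1
definition:
  name: Board
  fields:
    - name: id
      type: Int!
    - name: grid
      type: "[[Int!]!]!" # multidimensional array: not allowed in boolean expressions
```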

##### Disallow duplicate names across types and expressions

A build error is now raised when there are duplicate names across types and expressions in your metadata. This includes
name conflicts between objects of the following kinds:

- `BooleanExpressionType`
- `OrderByExpression`
- `ScalarType`
- `ObjectType`

#### 2024-12-18

##### Disallow non-scalar fields in Model v1 `orderableFields`
2 changes: 2 additions & 0 deletions docusaurus.config.ts
@@ -77,6 +77,8 @@ const config: Config = {
    })(),
    hasuraVersion: 3,
    DEV_TOKEN: process.env.DEV_TOKEN,
    openReplayIngestPoint: process.env.OPENREPLAY_INGEST_POINT,
    openReplayProjectKey: process.env.OPENREPLAY_PROJECT_KEY,
  },

  presets: [
10 changes: 10 additions & 0 deletions k8s-manifest/k8s/deployment.yaml
@@ -30,6 +30,16 @@ spec:
          env:
            - name: PORT
              value: '8080'
            - name: OPENREPLAY_INGEST_POINT
              valueFrom:
                secretKeyRef:
                  name: ddn-docs
                  key: OPENREPLAY_INGEST_POINT
            - name: OPENREPLAY_PROJECT_KEY
              valueFrom:
                secretKeyRef:
                  name: ddn-docs
                  key: OPENREPLAY_PROJECT_KEY
          readinessProbe:
            tcpSocket:
              port: 8080
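
These `secretKeyRef` entries assume a Secret named `ddn-docs` exists in the same namespace with both keys. A minimal
sketch of such a Secret, with placeholder values, might look like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ddn-docs
type: Opaque
stringData:
  OPENREPLAY_INGEST_POINT: https://openreplay.example.com/ingest # placeholder
  OPENREPLAY_PROJECT_KEY: <project-key> # placeholder
```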
1 change: 1 addition & 0 deletions package.json
@@ -26,6 +26,7 @@
"@docusaurus/preset-classic": "3.4.0",
"@docusaurus/theme-mermaid": "3.4.0",
"@mdx-js/react": "^3.0.0",
"@openreplay/tracker": "^15.0.3",
"autoprefixer": "^10.4.16",
"clsx": "^1.2.1",
"dompurify": "^3.1.5",
42 changes: 42 additions & 0 deletions src/components/OpenReplay/OpenReplay.ts
@@ -0,0 +1,42 @@
import siteConfig from '@generated/docusaurus.config';
import type Tracker from '@openreplay/tracker';

const OPENREPLAY_SESSION_COOKIE = 'openReplaySessionHash';
const OPENREPLAY_INGEST_POINT = siteConfig.customFields.openReplayIngestPoint as string;
const OPENREPLAY_PROJECT_KEY = siteConfig.customFields.openReplayProjectKey as string;

let tracker: Tracker | null = null;

export const initOpenReplay = async () => {
  // Import the tracker lazily so it is only ever loaded in the browser
  const { default: Tracker } = await import('@openreplay/tracker');
  tracker = new Tracker({
    projectKey: OPENREPLAY_PROJECT_KEY,
    ingestPoint: OPENREPLAY_INGEST_POINT,
    __DISABLE_SECURE_MODE: true,
  });
};

export const startOpenReplayTracking = (userId?: string) => {
  if (tracker) {
    const cookies = document.cookie.split('; ');
    const cookie = cookies.find(c => c.startsWith(`${OPENREPLAY_SESSION_COOKIE}=`));
    const existingSessionHash = cookie ? cookie.split('=')[1] : null;

    if (existingSessionHash) {
      // Resume the existing session
      tracker.start({ sessionHash: existingSessionHash });
    } else {
      // Start a new session and persist its hash so later page loads can resume it
      tracker.start();
      const newSessionHash = tracker.getSessionToken();
      if (newSessionHash) {
        document.cookie = `${OPENREPLAY_SESSION_COOKIE}=${newSessionHash}; path=/`;
      }
    }

    // Set the user ID only when one is provided
    if (userId) {
      tracker.setUserID(userId);
    }
  } else {
    console.warn('OpenReplay tracker is not initialized');
  }
};
17 changes: 17 additions & 0 deletions src/theme/DocRoot/Layout/index.js
@@ -10,16 +10,27 @@ import BrowserOnly from '@docusaurus/BrowserOnly';
import { AiChatBot } from '@site/src/components/AiChatBot/AiChatBot';
import fetchUser from '@theme/DocRoot/Layout/FetchUser';
import posthog from 'posthog-js';
import { initOpenReplay, startOpenReplayTracking } from '@site/src/components/OpenReplay/OpenReplay';

export default function DocRootLayout({ children }) {
  const sidebar = useDocsSidebar();
  const location = useLocation();
  const isBrowser = useIsBrowser();
  const [hiddenSidebarContainer, setHiddenSidebarContainer] = useState(false);
  const [hasInitialized, setHasInitialized] = useState(false);
  const [hasInitializedOpenReplay, setHasInitializedOpenReplay] = useState(false);

  useEffect(() => {
    if (isBrowser && !hasInitialized) {
      (async () => {
        try {
          await initOpenReplay();
          setHasInitializedOpenReplay(true);
        } catch (error) {
          console.error('Failed to initialize OpenReplay:', error);
        }
      })();

      posthog.init('phc_MZpdcQLGf57lyfOUT0XA93R3jaCxGsqftVt4iI4MyUY', {
        api_host: 'https://analytics-posthog.hasura-app.io',
      });
@@ -28,6 +39,12 @@ export default function DocRootLayout({ children }) {
    }
  }, [isBrowser, hasInitialized]);

  useEffect(() => {
    // Start (or resume) the OpenReplay session once the tracker is ready
    if (isBrowser && hasInitializedOpenReplay) {
      startOpenReplayTracking();
    }
  }, [isBrowser, hasInitializedOpenReplay]);

  useEffect(() => {
    if (isBrowser && hasInitialized) {
      posthog.capture('$pageview');