
chore: release branch for TimescaleDB v2.18.0. #3764

Draft · wants to merge 25 commits into base: latest
Changes from 20 commits (25 commits total)
e65d89b
chore: milestone branch for TimescaleDB v2.18.0.
billy-the-fish Jan 27, 2025
47f0aa2
Update some API changes: (#3759)
fabriziomello Jan 28, 2025
e5140f9
Hypercore API ref (#3699)
billy-the-fish Feb 3, 2025
c1b22ef
Merge branch 'latest' into release-2.18.0-main
billy-the-fish Feb 3, 2025
9abfedd
262 docs rfccreate a page automate hypercore using jobs (#3728)
billy-the-fish Feb 4, 2025
28e8b39
chore: empty commit to kickstart the build.
billy-the-fish Feb 4, 2025
8109403
chore: remove secondary indexing stuff and other review updates.
billy-the-fish Feb 7, 2025
525565f
chore: remove secondary indexing stuff and other review updates.
billy-the-fish Feb 7, 2025
7313063
Data mode clarification (#3800)
atovpeko Feb 4, 2025
4c25758
Added the data mode part to IP allow list (#3797)
atovpeko Feb 5, 2025
ff4e215
Update hypertable_compression_stats.md (#3765)
mrksngl Feb 6, 2025
a516aaa
TimescaleDB v2.18 and SQL Assistant Improvements in Data Mode and Pop…
rahilsondhi Feb 6, 2025
91c0cac
Clarified refresh policy for CAGGs
atovpeko Feb 7, 2025
a37d76e
Update images in the changelog
atovpeko Feb 7, 2025
b247ea0
Fix links
atovpeko Feb 7, 2025
2e47554
Hypercore API ref (#3699)
billy-the-fish Feb 3, 2025
3b0ea67
Merge branch 'latest' of github.com:timescale/docs into release-2.18.…
billy-the-fish Feb 10, 2025
13552ae
fix: remove duplicate imports.
billy-the-fish Feb 10, 2025
ae1e554
Merge branch 'latest' into release-2.18.0-main
billy-the-fish Feb 11, 2025
c4acff8
Merge branch 'latest' into release-2.18.0-main
billy-the-fish Feb 11, 2025
4616ec1
Merge branch 'latest' into release-2.18.0-main
billy-the-fish Feb 13, 2025
cf522b7
Merge branch 'latest' into release-2.18.0-main
billy-the-fish Feb 13, 2025
5b9b8ec
Update api/hypercore/hypertable_columnstore_settings.md
billy-the-fish Feb 13, 2025
20b2ac8
fix: make compression more obvious.
billy-the-fish Feb 13, 2025
d6285fe
Merge branch 'latest' into release-2.18.0-main
billy-the-fish Feb 14, 2025
89 changes: 89 additions & 0 deletions _partials/_cloud_self_configuration.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,89 @@
import EarlyAccess from "versionContent/_partials/_early_access.mdx";

## Policies

### `timescaledb.max_background_workers (int)`

Max background worker processes allocated to TimescaleDB. Set to at least 1 +
the number of databases loaded with a TimescaleDB extension in a PostgreSQL
instance. Default value is 16.
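For example, on a self-hosted instance you might check and raise this setting as follows. This is a sketch; on $CLOUD_LONG, service parameters are managed for you:

```sql
-- Check the current value
SHOW timescaledb.max_background_workers;

-- Raise the limit; this setting requires a restart to take effect
ALTER SYSTEM SET timescaledb.max_background_workers = 16;
```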

### `timescaledb.enable_tiered_reads (bool)`

Enable [tiered reads][enabling-data-tiering] so that you can query your data normally when it is distributed across different storage tiers.
Your hypertable is spread across the tiers, so queries and `JOIN`s work and fetch the same data as usual.

By default, queries do not access tiered data. Querying tiered data can slow down query performance,
as the data is not stored locally on Timescale's high-performance storage tier.
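As a sketch, you can opt a single session in to tiered reads without changing the database-wide default:

```sql
-- Include tiered data in queries for this session only
SET timescaledb.enable_tiered_reads = true;

-- Revert to the default behavior
RESET timescaledb.enable_tiered_reads;
```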

## Hypercore features

### `timescaledb.default_hypercore_use_access_method (bool)`

The default value of the `hypercore_use_access_method` parameter for functions that accept it. This setting has `user` context, meaning that any user can set it for their session. The default value is `false`.

<EarlyAccess />
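Because this setting has `user` context, a session can override it. A minimal sketch:

```sql
-- Use the hypercore table access method by default in this session
SET timescaledb.default_hypercore_use_access_method = true;
```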

## $SERVICE_LONG tuning

### `timescaledb.disable_load (bool)`

Disable loading of the TimescaleDB extension.

### `timescaledb.enable_cagg_reorder_groupby (bool)`
Enable group by reordering

### `timescaledb.enable_chunk_append (bool)`
Enable chunk append node

### `timescaledb.enable_constraint_aware_append (bool)`
Enable constraint-aware append scans

### `timescaledb.enable_constraint_exclusion (bool)`
Enable constraint exclusion

### `timescaledb.enable_job_execution_logging (bool)`
Enable job execution logging

### `timescaledb.enable_optimizations (bool)`
Enable TimescaleDB query optimizations

### `timescaledb.enable_ordered_append (bool)`
Enable ordered append scans

### `timescaledb.enable_parallel_chunk_append (bool)`
Enable parallel chunk append node

### `timescaledb.enable_runtime_exclusion (bool)`
Enable runtime chunk exclusion

### `timescaledb.enable_tiered_reads (bool)`

Enable [tiered reads][enabling-data-tiering] so that you can query your data normally when it is distributed across different storage tiers.
Your hypertable is spread across the tiers, so queries and `JOIN`s work and fetch the same data as usual.

By default, queries do not access tiered data. Querying tiered data can slow down query performance,
as the data is not stored locally on Timescale's high-performance storage tier.


### `timescaledb.enable_transparent_decompression (bool)`
Enable transparent decompression


### `timescaledb.restoring (bool)`
Stop any background workers that could be performing tasks. This is especially useful when you
migrate data to your [$SERVICE_LONG][pg-dump-and-restore] or a [self-hosted database][migrate-entire].

### `timescaledb.max_cached_chunks_per_hypertable (int)`
Maximum cached chunks

### `timescaledb.max_open_chunks_per_insert (int)`
Maximum open chunks per insert

### `timescaledb.max_tuples_decompressed_per_dml_transaction (int)`

The maximum number of tuples that can be decompressed during an `INSERT`, `UPDATE`, or `DELETE` transaction.
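For example, to cap decompression work for large batch updates in the current session (the value shown is illustrative):

```sql
-- Allow up to 500,000 tuples to be decompressed by a single
-- INSERT, UPDATE, or DELETE transaction in this session
SET timescaledb.max_tuples_decompressed_per_dml_transaction = 500000;
```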

[enabling-data-tiering]: /use-timescale/:currentVersion:/data-tiering/enabling-data-tiering/
[pg-dump-and-restore]: /migrate/:currentVersion:/pg-dump-and-restore/
[migrate-entire]: /self-hosted/:currentVersion:/migration/entire-database/
1 change: 1 addition & 0 deletions _partials/_deprecated_2_18_0.md
@@ -0,0 +1 @@
<Tag variant="hollow">Old API from [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0)</Tag>
6 changes: 1 addition & 5 deletions _partials/_early_access.md
@@ -1,5 +1 @@
<Highlight type="important">
This feature is early access. Early access features might be subject to billing
changes in the future. If you have feedback, reach out to your customer success
manager, or [contact us](https://www.timescale.com/contact/).
</Highlight>
<Tag variant="hollow">Early access: TimescaleDB v2.18.0</Tag>
21 changes: 21 additions & 0 deletions _partials/_hypercore-conversion-overview.md
@@ -0,0 +1,21 @@
When you convert chunks from the rowstore to the columnstore, multiple records are grouped into a single row.
The columns of this row hold an array-like structure that stores all the data. For example, data in the following
rowstore chunk:

| Timestamp | Device ID | Device Type | CPU |Disk IO|
|---|---|---|---|---|
|12:00:01|A|SSD|70.11|13.4|
|12:00:01|B|HDD|69.70|20.5|
|12:00:02|A|SSD|70.12|13.2|
|12:00:02|B|HDD|69.69|23.4|
|12:00:03|A|SSD|70.14|13.0|
|12:00:03|B|HDD|69.70|25.2|

is converted and compressed into arrays in a single row in the columnstore:

|Timestamp|Device ID|Device Type|CPU|Disk IO|
|-|-|-|-|-|
|[12:00:01, 12:00:01, 12:00:02, 12:00:02, 12:00:03, 12:00:03]|[A, B, A, B, A, B]|[SSD, HDD, SSD, HDD, SSD, HDD]|[70.11, 69.70, 70.12, 69.69, 70.14, 69.70]|[13.4, 20.5, 13.2, 23.4, 13.0, 25.2]|

Because a single row takes up less disk space, you can reduce your chunk size by more than 90% and also
speed up your queries. This saves on storage costs and keeps your queries operating at lightning speed.
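To verify the savings after conversion, you can inspect the hypertable's columnstore statistics. A sketch, assuming a hypertable named `metrics` and that the `hypertable_columnstore_stats` function from the Hypercore API is available in your TimescaleDB version:

```sql
-- Compare before and after sizes for a converted hypertable
SELECT * FROM hypertable_columnstore_stats('metrics');
```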
44 changes: 44 additions & 0 deletions _partials/_hypercore_manual_workflow.md
@@ -0,0 +1,44 @@
import EarlyAccess from "versionContent/_partials/_early_access.mdx";

1. **Stop the jobs that are automatically adding chunks to the columnstore**

Retrieve the list of jobs from the [timescaledb_information.jobs][informational-views] view
to find the job you need to [alter_job][alter_job].

``` sql
SELECT alter_job(JOB_ID, scheduled => false);
```
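To find the `JOB_ID` placeholder used above, you can first query the jobs view. A sketch; columnstore policies run under the `policy_compression` proc name:

```sql
SELECT job_id, proc_name, hypertable_name
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';
```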

1. **Convert the chunk you want to update back to the rowstore**

``` sql
CALL convert_to_rowstore('_timescaledb_internal._hyper_2_2_chunk');
```

1. **Update the data in the chunk you moved to the rowstore**

Best practice is to structure your [INSERT][insert] statement to include appropriate
partition key values, such as the timestamp. TimescaleDB adds the data to the correct chunk:

``` sql
INSERT INTO metrics (time, value)
VALUES ('2025-01-01T00:00:00', 42);
```

1. **Convert the updated chunks back to the columnstore**

``` sql
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk');
```

1. **Restart the jobs that are automatically converting chunks to the columnstore**

``` sql
SELECT alter_job(JOB_ID, scheduled => true);
```

[alter_job]: /api/:currentVersion:/actions/alter_job/
[informational-views]: /api/:currentVersion:/informational-views/jobs/
[insert]: /use-timescale/:currentVersion:/write-data/insert/
[setup-hypercore]: /use-timescale/:currentVersion:/hypercore/real-time-analytics-in-hypercore/
[compression_alter-table]: /api/:currentVersion:/hypercore/alter_table/
96 changes: 96 additions & 0 deletions _partials/_hypercore_policy_workflow.md
@@ -0,0 +1,96 @@
import EarlyAccess from "versionContent/_partials/_early_access.mdx";

1. **Connect to your $SERVICE_LONG**

In [$CONSOLE][services-portal] open an [SQL editor][in-console-editors]. You can also connect to your service using [psql][connect-using-psql].

1. **Enable columnstore on a hypertable**

Create a [job][job] that automatically moves chunks in a hypertable to the columnstore at a specific time interval.
By default, your table is ordered by the time column. For efficient queries on columnstore data, remember to
set `segmentby` to the column you will use most often to filter your data:

* [Use `ALTER TABLE` for a hypertable][alter_table_hypercore]
```sql
ALTER TABLE stocks_real_time SET (
timescaledb.enable_columnstore = true,
timescaledb.segmentby = 'symbol');
```
* [Use ALTER MATERIALIZED VIEW for a continuous aggregate][compression_continuous-aggregate]
```sql
ALTER MATERIALIZED VIEW stock_candlestick_daily set (
timescaledb.enable_columnstore = true,
timescaledb.segmentby = 'symbol' );
```
This works because a continuous aggregate is a specialized hypertable.

1. **Add a policy to convert chunks to the columnstore at a specific time interval**

For example, 60 days after the data was added to the table:
``` sql
CALL add_columnstore_policy('older_stock_prices', after => INTERVAL '60d');
```
See [add_columnstore_policy][add_columnstore_policy].

1. **View the policies that you set or the policies that already exist**

``` sql
SELECT * FROM timescaledb_information.jobs
WHERE proc_name='policy_compression';
```
See [timescaledb_information.jobs][informational-views].

1. **Pause a columnstore policy**

If you need to modify or add a lot of data to a chunk in the columnstore, best practice is to stop any jobs moving
chunks to the columnstore, [convert the chunk back to the rowstore][convert_to_rowstore], then modify the data.
After the update, [convert the chunk to the columnstore][convert_to_columnstore] and restart the jobs.

``` sql
SELECT * FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression' AND relname = 'stocks_real_time';

-- Select the JOB_ID from the results

SELECT alter_job(JOB_ID, scheduled => false);
```
See [alter_job][alter_job].

1. **Restart a columnstore policy**

``` sql
SELECT alter_job(JOB_ID, scheduled => true);
```
See [alter_job][alter_job].

1. **Remove a columnstore policy**

``` sql
CALL remove_columnstore_policy('older_stock_prices');
```
See [remove_columnstore_policy][remove_columnstore_policy].

1. **Disable columnstore**

If your table has chunks in the columnstore, you must
[convert the chunks back to the rowstore][convert_to_rowstore] before you disable the columnstore.
``` sql
ALTER TABLE stocks_real_time SET (timescaledb.enable_columnstore = false);
```
See [alter_table_hypercore][alter_table_hypercore].


[job]: /api/:currentVersion:/actions/add_job/
[alter_table_hypercore]: /api/:currentVersion:/hypercore/alter_table/
[compression_continuous-aggregate]: /api/:currentVersion:/hypercore/alter_materialized_view/
[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/
[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/
[informational-views]: /api/:currentVersion:/informational-views/jobs/
[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
[hypercore_workflow]: /api/:currentVersion:/hypercore/#hypercore-workflow
[alter_job]: /api/:currentVersion:/actions/alter_job/
[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/
[in-console-editors]: /getting-started/:currentVersion:/run-queries-from-console/
[services-portal]: https://console.cloud.timescale.com/dashboard/services
[connect-using-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql#connect-to-your-service
[insert]: /use-timescale/:currentVersion:/write-data/insert/
3 changes: 2 additions & 1 deletion _partials/_multi-node-deprecation.md
@@ -1,9 +1,10 @@
<Highlight type="warning">

[Multi-node support is deprecated][multi-node-deprecation].
[Multi-node support is sunsetted][multi-node-deprecation].

TimescaleDB v2.13 is the last release that includes multi-node support for PostgreSQL
versions 13, 14, and 15.

</Highlight>

[multi-node-deprecation]: https://github.com/timescale/timescaledb/blob/main/docs/MultiNodeDeprecation.md
8 changes: 8 additions & 0 deletions _partials/_prereqs-cloud-and-self.md
@@ -0,0 +1,8 @@
To follow the procedure on this page, you need to:

* Create a [target $SERVICE_LONG][create-service]

This procedure also works for [self-hosted $TIMESCALE_DB][enable-timescaledb].

[create-service]: /getting-started/:currentVersion:/services/
[enable-timescaledb]: /self-hosted/:currentVersion:/install/
5 changes: 5 additions & 0 deletions _partials/_prereqs-cloud-only.md
@@ -0,0 +1,5 @@
To follow the procedure on this page, you need to:

* Create a [target $SERVICE_LONG][create-service]

[create-service]: /getting-started/:currentVersion:/services/
1 change: 1 addition & 0 deletions _partials/_since_2_18_0.md
@@ -0,0 +1 @@
<Tag variant="hollow">Since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0)</Tag>
4 changes: 2 additions & 2 deletions _partials/_usage-based-storage-intro.md
@@ -1,9 +1,9 @@
$CLOUD_LONG charges are based on the amount of storage you use. You don't pay for
fixed storage size, and you don't need to worry about scaling disk size as your
data grows; we handle it all for you. To reduce your data costs further,
use [compression][compression], a [data retention policy][data-retention], and
use [Hypercore][hypercore], a [data retention policy][data-retention], and
[tiered storage][data-tiering].

[compression]: /use-timescale/:currentVersion:/compression/about-compression
[hypercore]: /api/:currentVersion:/hypercore/
[data-retention]: /use-timescale/:currentVersion:/data-retention/
[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
2 changes: 1 addition & 1 deletion about/release-notes.md
@@ -16,7 +16,7 @@ notes about our downloadable products, see:
* [pgspot](https://github.com/timescale/pgspot/releases) - spot vulnerabilities in PostgreSQL extension scripts.
* [live-migration](https://hub.docker.com/r/timescale/live-migration/tags) - a Docker image to migrate data to a Timescale Cloud service.


This documentation is based on TimescaleDB v2.18.0 and compatible products.

<Highlight type="note">

6 changes: 4 additions & 2 deletions api/add_policies.md
@@ -25,7 +25,8 @@ timescaledb_experimental.add_policies(
refresh_start_offset "any" = NULL,
refresh_end_offset "any" = NULL,
compress_after "any" = NULL,
drop_after "any" = NULL
drop_after "any" = NULL,
hypercore_use_access_method BOOL = NULL
) RETURNS BOOL
```

@@ -52,14 +53,15 @@ If you would like to set this, add your policies manually (see [`add_continuous_a
|`refresh_end_offset`|`INTERVAL` or `INTEGER`|The end of the continuous aggregate refresh window, expressed as an offset from the policy run time. Must be greater than `refresh_start_offset`.|
|`compress_after`|`INTERVAL` or `INTEGER`|Continuous aggregate chunks are compressed if they exclusively contain data older than this interval.|
|`drop_after`|`INTERVAL` or `INTEGER`|Continuous aggregate chunks are dropped if they exclusively contain data older than this interval.|
|`hypercore_use_access_method`|`BOOLEAN`|Set to `true` to use the hypercore table access method. When set to `NULL`, the value of `timescaledb.default_hypercore_use_access_method` is used. Default is `NULL`.|

For arguments that could be either an `INTERVAL` or an `INTEGER`, use an
`INTERVAL` if your time bucket is based on timestamps. Use an `INTEGER` if your
time bucket is based on integers.
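For example, a hedged call that sets refresh, compression, and retention policies for a continuous aggregate and opts in to the hypercore access method. The view name and intervals are illustrative:

```sql
SELECT timescaledb_experimental.add_policies(
    'stock_candlestick_daily',
    refresh_start_offset => INTERVAL '30 days',
    refresh_end_offset   => INTERVAL '1 hour',
    compress_after       => INTERVAL '60 days',
    drop_after           => INTERVAL '1 year',
    hypercore_use_access_method => true
);
```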

## Returns

Returns true if successful.
Returns `true` if successful.

## Sample usage
