Mark the compression docs as oldapi #3820

Draft · wants to merge 18 commits into base: release-2.18.0-main
4 changes: 2 additions & 2 deletions _partials/_caggs-intro.md
@@ -21,7 +21,7 @@ means that you can get on with working your data instead of maintaining your
database.

Because continuous aggregates are based on hypertables, you can query them in
exactly the same way as your other tables, and enable [compression][compression]
exactly the same way as your other tables, and enable [Hypercore][hypercore]
or [tiered storage][data-tiering] on your continuous aggregates. You can even
create
[continuous aggregates on top of your continuous aggregates][hierarchical-caggs].
@@ -31,5 +31,5 @@ Pre-aggregated data from the materialized view is combined with recent data that
hasn't been aggregated yet. This gives you up-to-date results on every query.

[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
[compression]: /use-timescale/:currentVersion:/compression/
[hypercore]: /use-timescale/:currentVersion:/hypercore/
[hierarchical-caggs]: /use-timescale/:currentVersion:/continuous-aggregates/hierarchical-continuous-aggregates/
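
For reviewers, a minimal sketch of what the new [hypercore] link points readers toward: enabling the columnstore on a continuous aggregate. The view name is hypothetical, and the `timescaledb.enable_columnstore` option and `after` argument are taken from the Hypercore API pages this PR links to, so verify the exact names there.

```sql
-- Hypothetical continuous aggregate named conditions_daily.
ALTER MATERIALIZED VIEW conditions_daily
  SET (timescaledb.enable_columnstore = true);  -- assumed option name, see ALTER MATERIALIZED VIEW (Hypercore)

-- Move buckets older than 30 days into the columnstore automatically.
CALL add_columnstore_policy('conditions_daily', after => INTERVAL '30 days');  -- assumed signature
```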
2 changes: 1 addition & 1 deletion _partials/_deprecated_2_18_0.md
@@ -1 +1 @@
<Tag variant="hollow">Old API from [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0)</Tag>
<Tag variant="hollow">Old API since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0)</Tag>
4 changes: 2 additions & 2 deletions _partials/_migrate_awsrds_connect_intermediary.md
@@ -10,7 +10,7 @@
- **Key pair**: use an existing pair or create a new one that you will use to access the intermediary machine.
- **VPC**: by default, this is the same as the database instance.
- **Configure Storage**: adjust the volume to at least the size of RDS/Aurora PostgreSQL instance you are migrating from.
You can reduce the space used by your data on Timescale Cloud using [data compression][data-compression].
You can reduce the space used by your data on Timescale Cloud using [Hypercore][hypercore].
1. Click `Launch instance`. AWS creates your EC2 instance, then click `Connect to instance` > `SSH client`.
Follow the instructions to create the connection to your intermediary EC2 instance.

@@ -89,5 +89,5 @@
</Procedure>

[about-hypertables]: /use-timescale/:currentVersion:/hypertables/about-hypertables/
[data-compression]: /use-timescale/:currentVersion:/compression/about-compression/
[hypercore]: /use-timescale/:currentVersion:/hypercore/
[databases]: https://console.aws.amazon.com/rds/home#databases:
1 change: 0 additions & 1 deletion _partials/_migrate_install_psql_ec2_instance.md
@@ -71,4 +71,3 @@
</Procedure>

[about-hypertables]: /use-timescale/:currentVersion:/hypertables/about-hypertables/
[data-compression]: /use-timescale/:currentVersion:/compression/about-compression/
5 changes: 2 additions & 3 deletions _partials/_migrate_post_schema_caggs_etal.md
@@ -131,8 +131,7 @@ separately. Recreate them on your Timescale database.

1. Recreate each policy. For more information about recreating policies, see
the sections on [continuous-aggregate refresh policies][cagg-policy],
[retention policies][retention-policy], [compression
policies][compression-policy], and [reorder policies][reorder-policy].
[retention policies][retention-policy], [Hypercore policies][setup-hypercore], and [reorder policies][reorder-policy].

</Procedure>

@@ -160,7 +159,7 @@ accessed. Skipping them does not affect statistics on your data.

[analyze]: https://www.postgresql.org/docs/10/sql-analyze.html
[cagg-policy]: /use-timescale/:currentVersion:/continuous-aggregates/refresh-policies/
[compression-policy]: /use-timescale/:currentVersion:/compression/
[setup-hypercore]: /use-timescale/:currentVersion:/hypercore/real-time-analytics-in-hypercore/
[retention-policy]: /use-timescale/:currentVersion:/data-retention/create-a-retention-policy/
[reorder-policy]: /api/:currentVersion:/hypertable/add_reorder_policy/
[timescaledb-parallel-copy]: https://github.com/timescale/timescaledb-parallel-copy
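
As an illustration of the policy-recreation step above, the statements below sketch one of each policy type on hypothetical objects (`conditions`, `conditions_daily`, `conditions_time_idx`); the columnstore call follows the add_columnstore_policy page linked here, so treat its argument names as assumptions.

```sql
-- Continuous aggregate refresh policy.
SELECT add_continuous_aggregate_policy('conditions_daily',
  start_offset      => INTERVAL '3 days',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');

-- Data retention policy.
SELECT add_retention_policy('conditions', INTERVAL '90 days');

-- Reorder policy on an existing index.
SELECT add_reorder_policy('conditions', 'conditions_time_idx');

-- Hypercore (columnstore) policy; replaces the old add_compression_policy call.
CALL add_columnstore_policy('conditions', after => INTERVAL '7 days');  -- assumed signature
```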
6 changes: 3 additions & 3 deletions _partials/_migrate_validate_and_restart_app.md
@@ -11,12 +11,12 @@
1. Enable any Timescale Cloud features you want to use.

Migration from PostgreSQL moves the data only. Now manually enable Timescale Cloud features like
[hypertables][about-hypertables], [data compression][data-compression] or [data retention][data-retention]
[hypertables][about-hypertables], [Hypercore][hypercore] or [data retention][data-retention]
while your database is offline.

1. Reconfigure your app to use the target database, then restart it.


[about-hypertables]: /use-timescale/:currentVersion:/hypertables/about-hypertables/
[data-compression]: /use-timescale/:currentVersion:/compression/about-compression/
[data-retention]: /use-timescale/:currentVersion:/data-retention/about-data-retention/
[hypercore]: /use-timescale/:currentVersion:/hypercore/
[data-retention]: /use-timescale/:currentVersion:/data-retention/about-data-retention/
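
To make the "enable features after the copy" step concrete, a sketch with hypothetical table and column names; the `timescaledb.enable_columnstore`/`timescaledb.segmentby` options and the `after` argument are assumptions based on the Hypercore pages linked above.

```sql
-- Turn the migrated table into a hypertable.
SELECT create_hypertable('metrics', 'time', migrate_data => true);

-- Hypercore: keep recent chunks in the rowstore, move older ones to the columnstore.
ALTER TABLE metrics SET (
  timescaledb.enable_columnstore = true,   -- assumed option names
  timescaledb.segmentby = 'device_id'
);
CALL add_columnstore_policy('metrics', after => INTERVAL '7 days');  -- assumed signature

-- Data retention.
SELECT add_retention_policy('metrics', INTERVAL '1 year');
```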
4 changes: 2 additions & 2 deletions _troubleshooting/mst/low-disk-memory-cpu.md
@@ -34,11 +34,11 @@ for. In the `Overview` tab, locate the `Service Plan` section, and click
`Upgrade` to enable the additional resources.

If you run out of resources regularly, you might need to consider using your
resources more efficiently. Consider enabling [compression][howto-compression],
resources more efficiently. Consider enabling [Hypercore][setup-hypercore],
using [continuous aggregates][howto-caggs], or
[configuring data retention][howto-dataretention] to reduce the amount of
resources your database uses.

[howto-compression]: /use-timescale/:currentVersion:/compression
[setup-hypercore]: /use-timescale/:currentVersion:/hypercore/real-time-analytics-in-hypercore/
[howto-caggs]: /use-timescale/:currentVersion:/continuous-aggregates
[howto-dataretention]: /use-timescale/:currentVersion:/data-retention
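
For example, pre-aggregating with a continuous aggregate and dropping raw data you no longer query is often enough to bring resource usage down; the names below are illustrative.

```sql
-- Roll raw readings up into hourly buckets.
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;

-- Drop raw rows once they are older than 30 days.
SELECT add_retention_policy('conditions', INTERVAL '30 days');
```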
1 change: 0 additions & 1 deletion about/pricing-and-account-management.md
@@ -233,7 +233,6 @@ alt="Adding a payment method in Timescale"/>
- **Add-ons**: add `Production support` and improved database performance for mission critical workloads.

[cloud-login]: https://console.cloud.timescale.com/
[compression]: /use-timescale/:currentVersion:/compression/
[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
[cloud-billing]: https://console.cloud.timescale.com/dashboard/billing/details
[commercial-sla]: https://www.timescale.com/legal/timescale-cloud-terms-of-service
72 changes: 64 additions & 8 deletions about/timescaledb-editions.md
@@ -6,6 +6,9 @@ keywords: [Apache, community, license]
tags: [learn, contribute]
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";

# TimescaleDB Apache 2 and TimescaleDB Community Edition

There are two versions of TimescaleDB available:
@@ -162,40 +165,55 @@ You can access a hosted version of TimescaleDB Community Edition through
</tr>

<tr>
<td><strong>Compression</strong></td>
<td><strong>Hypercore</strong> <Since2180 /></td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/alter_table_compression/">ALTER TABLE (Compression)</a></td>
<td><a href="https://docs.timescale.com/api/latest/hypercore/alter_materialized_view/">ALTER MATERIALIZED VIEW (Hypercore)</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/add_compression_policy/#sample-usage">add_compression_policy</a></td>
<td><a href="https://docs.timescale.com/api/latest/hypercore/alter_table/">ALTER TABLE (Hypercore)</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/remove_compression_policy/">remove_compression_policy</a></td>
<td><a href="https://docs.timescale.com/api/latest/hypercore/add_columnstore_policy/">add_columnstore_policy</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/compress_chunk/">compress_chunk</a></td>
<td><a href="https://docs.timescale.com/api/latest/hypercore/remove_columnstore_policy/">remove_columnstore_policy</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/decompress_chunk/">decompress_chunk</a></td>
<td><a href="https://docs.timescale.com/api/latest/hypercore/convert_to_columnstore/">convert_to_columnstore</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/hypertable_compression_stats/">hypertable_compression_stats</a></td>
<td><a href="https://docs.timescale.com/api/latest/hypercore/convert_to_rowstore/">convert_to_rowstore</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/chunk_compression_stats/">chunk_compression_stats</a></td>
<td><a href="https://docs.timescale.com/api/latest/hypercore/hypertable_columnstore_settings/">hypertable_columnstore_settings</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/hypercore/hypertable_columnstore_stats/">hypertable_columnstore_stats</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/hypercore/chunk_columnstore_settings/">chunk_columnstore_settings</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/hypercore/chunk_columnstore_stats/">chunk_columnstore_stats</a></td>
<td>❌</td>
<td>✅</td>
</tr>
@@ -452,6 +470,44 @@
<td>✅</td>
<td>✅</td>
</tr>
<tr>
<td><strong>Compression</strong> <Deprecated2180 /> Replaced by Hypercore</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/alter_table_compression/">ALTER TABLE (Compression)</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/add_compression_policy/#sample-usage">add_compression_policy</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/remove_compression_policy/">remove_compression_policy</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/compress_chunk/">compress_chunk</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/decompress_chunk/">decompress_chunk</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/hypertable_compression_stats/">hypertable_compression_stats</a></td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td><a href="https://docs.timescale.com/api/latest/compression/chunk_compression_stats/">chunk_compression_stats</a></td>
<td>❌</td>
<td>✅</td>
</tr>
</table>
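
To make the old-versus-new split in the table concrete, here is roughly how a compression workflow maps onto the Hypercore calls; table and column names are illustrative, and the Hypercore option and argument names are assumptions taken from the linked API pages.

```sql
-- Deprecated compression API (listed at the end of the table above).
ALTER TABLE metrics SET (timescaledb.compress, timescaledb.compress_segmentby = 'device_id');
SELECT add_compression_policy('metrics', INTERVAL '7 days');
SELECT compress_chunk(c, true)
  FROM show_chunks('metrics', older_than => INTERVAL '7 days') AS c;

-- Hypercore replacements (TimescaleDB 2.18.0 and later; names assumed from the API pages).
ALTER TABLE metrics SET (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'device_id');
CALL add_columnstore_policy('metrics', after => INTERVAL '7 days');
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk');  -- chunk name is illustrative
```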

<!-- vale Google.Units = NO -->
2 changes: 1 addition & 1 deletion api/compression/index.md
@@ -7,7 +7,7 @@ tags: [hypertables]

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# Compression (Old API, use Hypercore) <Tag type="community">Community</Tag>
# Compression (Old API, replaced by Hypercore) <Tag type="community">Community</Tag>

<Deprecated2180 /> Replaced by <a href="https://docs.timescale.com/api/latest/hypercore/">Hypercore</a>.

10 changes: 5 additions & 5 deletions api/enable_chunk_skipping.md
@@ -20,7 +20,7 @@ referenced in the `WHERE` clauses in your queries.
TimescaleDB supports min/max range tracking for the `smallint`, `int`,
`bigint`, `serial`, `bigserial`, `date`, `timestamp`, and `timestamptz` data types. The
min/max ranges are calculated when a chunk belonging to
this hypertable is compressed using the [compress_chunk][compress_chunk] function.
this hypertable is compressed using the [convert_to_columnstore][convert_to_columnstore] function.
The range is stored in start (inclusive) and end (exclusive) form in the
`chunk_column_stats` catalog table.

@@ -34,8 +34,8 @@ A [DROP COLUMN](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-
on a column with statistics tracking enabled on it ends up removing all relevant entries
from the catalog table.

A [decompress_chunk][decompress_chunk] invocation on a compressed chunk resets its entries
from the `chunk_column_stats` catalog table since now it's available for DML and the
When you call [convert_to_rowstore][convert_to_rowstore] on a chunk in the columnstore, its entries
in the `chunk_column_stats` catalog table are reset. This is because the chunk becomes available for DML, and the
min/max range values can change on any further data manipulation in the chunk.

By default, this feature is disabled. To enable chunk skipping, set `timescaledb.enable_chunk_skipping = on` in
@@ -69,5 +69,5 @@ SELECT enable_chunk_skipping('conditions', 'device_id');
|`enabled`|BOOLEAN|Returns `true` when tracking is enabled, `if_not_exists` is `true`, and a new entry is not added|

[compress_chunk]: /api/:currentVersion:/compression/compress_chunk/
[decompress_chunk]: /api/:currentVersion:/compression/decompress_chunk/
[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/
[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/
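
Putting the updated references together, a sketch of the chunk-skipping flow with the Hypercore calls; the chunk name is illustrative and the `CALL` form of the convert procedures follows the pages linked above.

```sql
-- Enable the feature and start tracking min/max ranges for a column.
SET timescaledb.enable_chunk_skipping = on;   -- or set it in postgresql.conf
SELECT enable_chunk_skipping('conditions', 'device_id');

-- Ranges are recorded when a chunk is moved to the columnstore...
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk');

-- ...and reset again if the chunk is converted back to the rowstore.
CALL convert_to_rowstore('_timescaledb_internal._hyper_1_2_chunk');
```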
2 changes: 1 addition & 1 deletion api/page-index/page-index.js
@@ -569,7 +569,7 @@ module.exports = [
"An overview of what different tags represent in the API section of Timescale Documentation.",
},
{
title: "Compression (Old API, use Hypercore)",
title: "Compression (Old API, replaced by Hypercore)",
href: "compression",
description:
"We highly recommend reading the blog post and tutorial about compression before trying to set it up for the first time.",
4 changes: 3 additions & 1 deletion use-timescale/compression/about-compression.md
@@ -4,11 +4,13 @@ excerpt: How to compress hypertables
products: [self_hosted]
keywords: [compression, hypertables]
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
import CompressionIntro from 'versionContent/_partials/_compression-intro.mdx';

# About compression

<Deprecated2180 /> See <a href="https://docs.timescale.com/use-timescale/latest/hypercore/">Hypercore</a>.

<CompressionIntro />

This section explains how to enable native compression, and then goes into
5 changes: 5 additions & 0 deletions use-timescale/compression/compression-design.md
@@ -5,8 +5,13 @@ products: [cloud, mst, self_hosted]
keywords: [compression, schema, tables]
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# Designing for compression

<Deprecated2180 /> See <a href="https://docs.timescale.com/use-timescale/latest/hypercore/">Hypercore</a>.


Time-series data can be unique, in that it needs to handle both shallow and wide
queries, such as "What's happened across the deployment in the last 10 minutes,"
and deep and narrow, such as "What is the average CPU usage for this server
5 changes: 5 additions & 0 deletions use-timescale/compression/compression-methods.md
@@ -5,8 +5,13 @@ products: [cloud, mst, self_hosted]
keywords: [compression]
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# About compression methods

<Deprecated2180 /> See <a href="https://docs.timescale.com/use-timescale/latest/hypercore/">Hypercore</a>.


TimescaleDB uses different compression algorithms, depending on the data type
that is being compressed.

5 changes: 4 additions & 1 deletion use-timescale/compression/compression-policy.md
@@ -4,11 +4,14 @@ excerpt: Create a compression policy on a hypertable
products: [cloud, mst, self_hosted]
keywords: [compression, hypertables, policy]
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
import CompressionIntro from 'versionContent/_partials/_compression-intro.mdx';

# Compression policy

<Deprecated2180 /> See <a href="https://docs.timescale.com/use-timescale/latest/hypercore/real-time-analytics-in-hypercore/">Prepare your data for real-time analytics in Hypercore</a>.


You can enable compression on individual hypertables, by declaring which column
you want to segment by.
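
For reference while this page is being marked as the old API, the pattern it documents looks roughly like this (illustrative names; the Hypercore replacement is `add_columnstore_policy`):

```sql
-- Old API: declare the segment-by column, then add a policy.
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'time DESC'
);
SELECT add_compression_policy('metrics', INTERVAL '7 days');
```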

4 changes: 4 additions & 0 deletions use-timescale/compression/decompress-chunks.md
@@ -6,8 +6,12 @@ keywords: [compression, hypertables, backfilling]
tags: [decompression]
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# Decompression

<Deprecated2180 /> See <a href="https://docs.timescale.com/use-timescale/latest/hypercore/modify-data-in-hypercore">Modify your data in Hypercore</a>.

Timescale automatically supports `INSERT`s into compressed chunks. But if you
need to insert a lot of data, for example as part of a bulk backfilling
operation, you should first decompress the chunk. Inserting data into a
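
For reference, the old-API backfill pattern this page describes is roughly the following; the table name and interval are illustrative, and `convert_to_rowstore` is the Hypercore replacement the new link points to.

```sql
-- Decompress every compressed chunk the backfill touches, then insert.
SELECT decompress_chunk(c, true)   -- second argument: skip chunks that are not compressed
  FROM show_chunks('metrics', older_than => INTERVAL '3 weeks') AS c;

-- After backfilling, recompress manually or let the compression policy pick the chunks up again.
```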
9 changes: 7 additions & 2 deletions use-timescale/compression/index.md
@@ -4,10 +4,12 @@ excerpt: Learn how compression works in Timescale
products: [cloud, mst, self_hosted]
keywords: [compression, hypertables]
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
import UsageBasedStorage from "versionContent/_partials/_usage-based-storage-intro.mdx";

# Compression
# Compression (Replaced by [Hypercore][hypercore])

<Deprecated2180 /> Use <a href="https://docs.timescale.com/use-timescale/latest/hypercore/">Hypercore</a>.

Time-series data can be compressed to reduce the amount of storage required, and
increase the speed of some queries. This is a cornerstone feature of
@@ -17,3 +19,6 @@ data to the form of compressed columns. This occurs across chunks of Timescale
hypertables.

<UsageBasedStorage />


[hypercore]: /use-timescale/:currentVersion:/hypercore/
3 changes: 3 additions & 0 deletions use-timescale/compression/manual-compression.md
@@ -4,9 +4,12 @@ excerpt: Learn how to manually compress a hypertable
products: [self_hosted]
keywords: [compression, hypertables]
---
import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# Manually compress chunks

<Deprecated2180 /> See <a href="https://docs.timescale.com/use-timescale/latest/hypercore/modify-data-in-hypercore">Modify your data in Hypercore</a>.

In most cases, an [automated compression policy][add_compression_policy] is sufficient to automatically compress your
chunks. However, if you want more control over compression, you can also manually compress specific chunks.
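
For reference, manual compression with the old API looks roughly like this (names and interval are illustrative); `convert_to_columnstore` is the Hypercore equivalent the new links point to.

```sql
-- Compress every chunk older than a week by hand.
SELECT compress_chunk(c, true)   -- second argument: skip chunks that are already compressed
  FROM show_chunks('metrics', older_than => INTERVAL '7 days') AS c;
```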

4 changes: 4 additions & 0 deletions use-timescale/compression/modify-a-schema.md
@@ -5,8 +5,12 @@ products: [cloud, mst, self_hosted]
keywords: [compression, schemas, hypertables]
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# Schema modifications

<Deprecated2180 /> See <a href="https://docs.timescale.com/use-timescale/latest/hypercore/modify-data-in-hypercore">Modify your data in Hypercore</a>.

You can modify the schema of compressed hypertables in recent versions of
Timescale.
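
As a quick illustration of the schema changes this page covers, both of the statements below work on a compressed hypertable without decompressing it first; the table and column names are hypothetical, and the full list of supported operations is on the page itself.

```sql
-- Add a nullable column and rename it; neither requires decompressing chunks.
ALTER TABLE metrics ADD COLUMN firmware_version TEXT;
ALTER TABLE metrics RENAME COLUMN firmware_version TO fw_version;
```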

3 changes: 3 additions & 0 deletions use-timescale/compression/modify-compressed-data.md
@@ -4,9 +4,12 @@ excerpt: What happens when you try to modify data in a compressed hypertable
products: [cloud, mst, self_hosted]
keywords: [compression, backfilling, hypertables]
---
import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# Insert and modify compressed data

<Deprecated2180 /> See <a href="https://docs.timescale.com/use-timescale/latest/hypercore/modify-data-in-hypercore">Modify your data in Hypercore</a>.

In TimescaleDB&nbsp;2.11 and later, you can insert data into compressed chunks,
and modify data in compressed rows.
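
For context, the DML this page refers to is ordinary SQL against the compressed hypertable; the names and values below are illustrative.

```sql
-- Insert into, and update, rows that live in compressed chunks (TimescaleDB 2.11+).
INSERT INTO metrics (time, device_id, temperature)
VALUES ('2024-01-01 00:00:00+00', 17, 21.5);

UPDATE metrics
SET temperature = 21.7
WHERE device_id = 17 AND time = '2024-01-01 00:00:00+00';
```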
