Hello,
I’m following the Optimizing storage for historic time-series data guide and trying to do something very similar for one of our time-series tables.
I first created a “historic” table to test with, based on the actual table from which I’ll insert (please keep in mind that datetime is a column name):
CREATE TABLE doc.meterreading_historic (
    time_bucket TIMESTAMP WITH TIME ZONE NOT NULL,
    datetime ARRAY(TIMESTAMP WITH TIME ZONE) INDEX OFF,
    meterid ARRAY(VARCHAR(50)) INDEX OFF,
    consumption1 ARRAY(DOUBLE PRECISION) INDEX OFF,
    ...
    ...
) WITH (codec = 'best_compression');
This ran without problems.
Then I tried to insert the data:
INSERT INTO doc.meterreading_historic
SELECT DATE_TRUNC('day', datetime) AS time_bucket,
       ARRAY_AGG(datetime),
       ARRAY_AGG(meterid) AS meterid,
       ARRAY_AGG(consumption1) AS consumption1,
       ...
FROM doc.meterreading
WHERE DATE_TRUNC('month', datetime) = DATE_TRUNC('month', NOW()) - '1 day'::INTERVAL;
but I get:
io.crate.exceptions.SQLParseException: 'date_trunc('day', datetime)' must appear in the GROUP BY clause or be used in an aggregation function. Perhaps you grouped by an alias that clashes with a column in the relations
Any ideas how I can insert the data into my table so that it can be compressed?
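From the error message, my guess is that the SELECT needs an explicit GROUP BY, since ARRAY_AGG is an aggregation function. A minimal sketch of what I think might work, grouping by the DATE_TRUNC expression itself rather than the time_bucket alias (only the three columns shown above; the remaining ARRAY_AGG columns would follow the same pattern):

```sql
INSERT INTO doc.meterreading_historic
SELECT DATE_TRUNC('day', datetime) AS time_bucket,
       ARRAY_AGG(datetime),
       ARRAY_AGG(meterid) AS meterid,
       ARRAY_AGG(consumption1) AS consumption1
FROM doc.meterreading
WHERE DATE_TRUNC('month', datetime) = DATE_TRUNC('month', NOW()) - '1 day'::INTERVAL
-- group by the expression, not the alias, to avoid the clash the error hints at
GROUP BY DATE_TRUNC('day', datetime);
```

Is grouping by the expression the right way around the alias clash the error mentions, or is there a better approach?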