I am afraid there is probably no simple answer to that question, as it depends a bit on how you ingest/collect the data and how you want to query it.
By now we at least have a guide for time-series data, which you can find here
For Grafana, something like
CREATE TABLE table_1 (
ts TIMESTAMP,
id TEXT,
val DOUBLE
);
INSERT INTO table_1 (ts,id,val) VALUES (now(),'id1',1.0),(now(),'id2',2.0);
would also be fine if, e.g., id is selected as the metric in Grafana.
That would be 1440 * 1000 * 365 = 525,600,000, i.e. roughly 525 million records per year (assuming one reading per sensor per minute).
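For reference, a Grafana panel query against that narrow schema could look roughly like this (time range and filters are placeholders you would adapt to your data source, with id used as the metric):

SELECT ts, id, val
FROM table_1
WHERE ts >= now() - INTERVAL '1 day'
ORDER BY ts;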
Alternatives:
As you mentioned, you could use individual columns for every sensor.
This would, however, lead to 1000 indexes by default, since every column is indexed unless configured otherwise.
CREATE TABLE table_1 (
ts TIMESTAMP,
sensor01 DOUBLE,
...
sensor1000 DOUBLE
);
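Writing to and reading from that wide layout would then look along these lines (column list shortened, values made up):

INSERT INTO table_1 (ts, sensor01, sensor02) VALUES (now(), 1.0, 2.0); -- ... up to sensor1000
SELECT ts, sensor01, sensor123 FROM table_1;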
If the sensors are always read in bulk, you could use arrays.
However, this more or less implies that you always read all the sensors at once.
CREATE TABLE table_1 (
ts TIMESTAMP,
sensors ARRAY(DOUBLE)
);
INSERT INTO table_1 (ts,sensors) VALUES (now(),[1.0,2.0,3.1,...,...]);
and
SELECT ts, sensors[1], sensors[123] FROM table_1;
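To query a single sensor over time you would address it by its position, e.g. a rough sketch of an hourly average for sensor number 42 (array indexes start at 1):

SELECT date_trunc('hour', ts) AS hour, avg(sensors[42]) AS avg_sensor42
FROM table_1
GROUP BY date_trunc('hour', ts)
ORDER BY hour;

Keep in mind that which sensor maps to which array index has to be tracked on the application side.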
Another alternative would be using OBJECTs.
With dynamic objects this would be very similar to individual columns, as every key ends up as a column of its own.
CREATE TABLE table_1 (
ts TIMESTAMP,
sensors OBJECT(DYNAMIC)
);
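Writes and reads would then use object literals and subscripts, something like this (sensor names are made up):

INSERT INTO table_1 (ts, sensors) VALUES (now(), {sensor01 = 1.0, sensor02 = 2.0});
SELECT ts, sensors['sensor01'] FROM table_1;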
With ignored objects you probably save some space, since the values are not indexed, but performance is worse for large aggregations.
CREATE TABLE table_1 (
ts TIMESTAMP,
sensors OBJECT(IGNORED)
);
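Inserting looks the same as with the dynamic variant; when filtering or aggregating on values inside an OBJECT(IGNORED) column you will typically need an explicit cast, since the inner types are not part of the schema (again a rough sketch with made-up sensor names):

INSERT INTO table_1 (ts, sensors) VALUES (now(), {sensor01 = 1.0});
SELECT avg(sensors['sensor01']::DOUBLE) FROM table_1;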