Database setup for remote IoT application

Hello,
I have a central CrateDB instance running with a WINCC OA PLS, where all data is stored. I want to connect multiple Raspberry Pi PCs from remote locations, each of which should have a locally stored database for the local field data.
The connection is established over the public WAN, so we have to account for short network downtimes of about 3% per year in total.

What would be the best solution/setup to have the data mirrored from the local hard drive (PLC) to the PLS?
Running a local cluster would likely mean too much data for the Raspberry Pis if there are too many of them.

Is it possible to run a local instance of CrateDB that synchronises its tables with only the central PLS CrateDB instance?

Thanks in advance

Arisys

Hi @Arisys

Welcome to the CrateDB community!

Synchronisation/replication of tables is something we are currently looking into, but it is unfortunately not supported yet.

What kind of software are you using on the device? For how long do you expect the internet connection to be lost?

best regards
Georg

Hello @proddata,
I was planning to run a Node-RED instance as a gateway. The local database would be connected either as a client or through Node-RED.

The first idea was to save the local data in an SQLite database and simply send it on via the PostgreSQL node.
As for the downtime: like I said, it's a public network… so 1-3 h at worst, and about 3% total downtime per year overall, with a sample rate of 15-30 s per sensor at most.
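Roughly what I have in mind, sketched in plain Python rather than as a Node-RED flow; the table names, columns, and connection string are just placeholders:

```python
import sqlite3
import psycopg2

BUFFER_DB = "buffer.db"  # local SQLite file acting as the store-and-forward buffer
CRATE_DSN = "host=central.example port=5432 user=crate dbname=doc"  # placeholder central CrateDB

def buffer_reading(sensor_id, ts, value):
    """Always write the sample locally first, regardless of the network state."""
    with sqlite3.connect(BUFFER_DB) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS readings
                       (sensor_id TEXT, ts TEXT, value REAL, synced INTEGER DEFAULT 0)""")
        con.execute("INSERT INTO readings (sensor_id, ts, value) VALUES (?, ?, ?)",
                    (sensor_id, ts, value))

def flush_buffer():
    """Push all unsynced rows to the central CrateDB; keep them if the WAN is down."""
    with sqlite3.connect(BUFFER_DB) as con:
        rows = con.execute(
            "SELECT rowid, sensor_id, ts, value FROM readings WHERE synced = 0").fetchall()
        if not rows:
            return
        try:
            with psycopg2.connect(CRATE_DSN) as pg, pg.cursor() as cur:
                cur.executemany(
                    "INSERT INTO sensor_data (sensor_id, ts, value) VALUES (%s, %s, %s)",
                    [row[1:] for row in rows])
        except psycopg2.OperationalError:
            return  # network down, rows stay buffered for the next attempt
        # mark rows as synced only after the central insert succeeded
        con.executemany("UPDATE readings SET synced = 1 WHERE rowid = ?",
                        [(row[0],) for row in rows])
```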

Thanks
Arisys

Dear Arisys,

we implemented a scenario like the one you are describing the other day. For that, we used RabbitMQ with disk-persistent queues on the data acquisition device itself. That works fine as long as it is a Raspberry Pi or another SBC running Linux, where installing it is a no-brainer.

The RabbitMQ instance on the device then connects to another RabbitMQ instance at the central site and drains its queue as soon as it regains a network connection. In turn, messages consumed from this central queue are forwarded to the data historian for storage.
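To give you an idea, here is a minimal sketch of the device-side publisher in Python with pika, assuming a RabbitMQ broker running locally on the device; the queue name and payload are just placeholders. The actual relaying to the central broker is configured on the broker itself, for example with the Shovel plugin:

```python
import json
import pika

# connect to the RabbitMQ broker running locally on the device
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# durable queue: survives a broker restart; the message below is also marked persistent
channel.queue_declare(queue="telemetry", durable=True)

reading = {"sensor_id": "temp-01", "ts": "2021-06-01T12:00:00Z", "value": 21.7}
channel.basic_publish(
    exchange="",
    routing_key="telemetry",
    body=json.dumps(reading),
    properties=pika.BasicProperties(delivery_mode=2),  # 2 = persist the message to disk
)
connection.close()
```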

The devices in question were riding around in vehicles and would only occasionally get a GSM uplink. This whole setup worked pretty well.

With kind regards,
Andreas.

P.S.: Of course, any other messaging technology/broker can be used to implement a scenario like that; it does not have to be AMQP. However, having both disk-persistent queues and message acknowledgement mechanisms is absolutely crucial to make it a robust solution. With other technologies, one would have to emulate these intrinsic details of the communication path in one way or another, which is why we chose RabbitMQ right away. Another benefit of using messaging technologies is that the transport is completely agnostic of the data format and any updates to it.
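For illustration, a sketch of the central-side consumer with manual acknowledgements, again in Python with pika and psycopg2; connection details and the target table are assumptions:

```python
import json
import pika
import psycopg2

# placeholder connection settings for the central broker and the central CrateDB instance
pg = psycopg2.connect("host=localhost port=5432 user=crate dbname=doc")
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="telemetry", durable=True)

def on_message(ch, method, properties, body):
    reading = json.loads(body)
    with pg, pg.cursor() as cur:
        cur.execute(
            "INSERT INTO sensor_data (sensor_id, ts, value) VALUES (%s, %s, %s)",
            (reading["sensor_id"], reading["ts"], reading["value"]),
        )
    # acknowledge only after the insert has been committed, so a crash never loses a message
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="telemetry", on_message_callback=on_message, auto_ack=False)
channel.start_consuming()
```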


As @amotl said, a message queue would be a (good) solution :+1:

If you want to stick with a local database, you could persist the latest synced timestamp with Node-RED and sync the data between the instances on reconnection.
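A rough sketch of that catch-up step in Python (the equivalent would live in a Node-RED function/flow), assuming the local buffer is an SQLite database with a `readings` table and the last synced timestamp is kept in a small state table; all names are placeholders:

```python
import sqlite3
import psycopg2

def catch_up(buffer_db="buffer.db",
             crate_dsn="host=central.example port=5432 user=crate dbname=doc"):
    """Push everything newer than the last synced timestamp to the central CrateDB."""
    con = sqlite3.connect(buffer_db)
    con.execute("CREATE TABLE IF NOT EXISTS sync_state (last_ts TEXT)")
    row = con.execute("SELECT last_ts FROM sync_state").fetchone()
    last_ts = row[0] if row else "1970-01-01T00:00:00Z"

    pending = con.execute(
        "SELECT sensor_id, ts, value FROM readings WHERE ts > ? ORDER BY ts",
        (last_ts,)).fetchall()
    if not pending:
        con.close()
        return

    # only reached when the connection to the central instance can be established
    with psycopg2.connect(crate_dsn) as pg, pg.cursor() as cur:
        cur.executemany(
            "INSERT INTO sensor_data (sensor_id, ts, value) VALUES (%s, %s, %s)", pending)

    # remember the newest timestamp that was shipped successfully
    con.execute("DELETE FROM sync_state")
    con.execute("INSERT INTO sync_state (last_ts) VALUES (?)", (pending[-1][1],))
    con.commit()
    con.close()
```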
