I have a little init script consisting of three commands (CREATE REPOSITORY; RESTORE SNAPSHOT; CREATE VIEW;), and I would like my CrateDB cluster, provisioned in an Azure Kubernetes Service, to run them once the cluster is up and running.
Since I deploy with Terraform, it's easy to execute anything AFTER the cluster is deployed, but I'm having trouble finding a proper interface for my commands.
First I tried to POST the commands to the HTTP API with cURL, but I found no way to do this from Terraform and get the quotes in the JSON escaped correctly.
So I wanted to call crash directly on the first Crate node. I think the -c parameter might be what I want, but it only takes one command at a time. Is there a better way to make crash execute more than one SQL statement in a single call?
> First I tried to POST the commands to the HTTP API with cURL, but I found no way to do this from Terraform and get the quotes in the JSON escaped correctly.
Single quotes have to be escaped as '\''
For example, with the single quotes escaped correctly:
- curl -sS -H 'Content-Type: application/json' -X POST '127.0.0.1:4200/_sql' -d '{"stmt":"GRANT ALL TO xxx WITH (password = '\''yyy'\'');"}'
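To see the pattern in isolation, here is a minimal sketch (with a dummy password): the '\'' sequence closes the single-quoted string, emits an escaped literal quote, and reopens the string:

```shell
# Build the JSON payload for the _sql endpoint inside single quotes.
# Each literal ' in the SQL becomes '\'' (close quote, escaped ', reopen quote):
payload='{"stmt":"GRANT ALL TO xxx WITH (password = '\''yyy'\'');"}'
echo "$payload"
# → {"stmt":"GRANT ALL TO xxx WITH (password = 'yyy');"}
```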
Thanks, that clarifies how to escape the quotes properly, but I still got an error after the first semicolon. I think it has to do with my approach: reading a local file, replacing the storage account key in it (I don't want to put that into source control), then inserting the result into the payload of a curl call executed on a node inside the cluster, all via a Terraform local-exec (as the kubectl provider can't do exec on its own). There are just too many hops.
I figured out how to pipe the local SQL template into /tmp/init.sql on the pod and then run crash -c "\r /tmp/init.sql" there, which does the job pretty well and without any special encoding of my local DDL scripts.
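A minimal sketch of that flow, assuming a hypothetical local template _init.sql.tpl with <<account>>/<<key>> placeholders, a crate namespace, and a first pod named crate-0 (the kubectl steps are commented out since they need a live cluster):

```shell
# Hypothetical template kept in source control; placeholders instead of secrets:
printf "CREATE REPOSITORY backups TYPE azure WITH (account = '<<account>>', key = '<<key>>');\n" > /tmp/_init.sql.tpl

# Substitute real values locally (dummy ones here); the DDL quoting stays untouched:
sed -e 's/<<account>>/mystorageacct/' -e 's/<<key>>/s3cr3t/' /tmp/_init.sql.tpl > /tmp/init.sql
cat /tmp/init.sql

# Copy the file onto the pod and let crash execute it via \r:
#   kubectl -n crate exec -i crate-0 -- sh -c 'cat > /tmp/init.sql' < /tmp/init.sql
#   kubectl -n crate exec crate-0 -- crash -c "\r /tmp/init.sql"
```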
EDIT: But I am still open to better approaches for provisioning a bunch of SQL files under source control onto a freshly deployed K8s/Crate cluster.
I use the azurerm provider's azurerm_storage_account data source to read the storage account key directly in Terraform. With another local_file data source I can read the DDL files from my cloned repo. Merging one into the other is a matter of Terraform replace() and variable substitution, but I had a really hard time (the whole morning until now) trying a lot of things to get that script running on Crate... happy that it works now.
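As a possible simplification (an untested sketch, with assumed file names): Terraform's built-in templatefile() function could replace the nested replace() calls, provided the DDL template uses ${...} placeholders instead of <<...>>:

```hcl
locals {
  # _init.sql.tpl would contain ${account} and ${key} placeholders
  init_sql = templatefile("${path.module}/../path/to/some/ddl/_init.sql.tpl", {
    account = data.azurerm_storage_account.cratesnapshots.name
    key     = data.azurerm_storage_account.cratesnapshots.primary_access_key
  })
}
```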
EDIT: Here's a snippet in case somebody needs it:
data "azurerm_storage_account" "cratesnapshots" {
  name                = "<name of the storage account>"
  resource_group_name = "<resource group where the storage account sits>"
}
data "local_file" "ddl" {
  filename = "${path.module}/../path/to/some/ddl/_init.sql"
}
resource "null_resource" "restoredata" {
  depends_on = [kubectl_manifest.cratedb]

  provisioner "local-exec" {
    command = <<EOF
export KUBECONFIG=~/.kube/${azurerm_kubernetes_cluster.the_cluster.name}-config
kubectl -n crate exec -it crate-0 -- bash -c "echo '${replace(replace(data.local_file.ddl.content, "<<account>>", data.azurerm_storage_account.cratesnapshots.name), "<<key>>", data.azurerm_storage_account.cratesnapshots.primary_access_key)}' > /tmp/init.sql"
kubectl -n crate exec -it crate-0 -- crash -c "\r /tmp/init.sql"
kubectl -n crate exec -it crate-0 -- rm -f /tmp/init.sql
EOF
  }
}