Why does CrateDB allow access to everyone as the default user with all privileges? Possible security issue?

I am experiencing this behaviour with CrateDB, and I would like your help to understand whether I am doing something wrong or there is a security problem.
In a CrateDB cluster built with Docker, on three different hosts, anyone who can reach the subnet where the cluster runs can access CrateDB as the user ‘crate’ with full administrator privileges. Is there a way to disable this access, or to force CrateDB to request a password for the user ‘crate’? Is this a problem specific to clusters running on Docker? I could not verify whether the same thing happens when installing a cluster without Docker. As additional information, the Docker containers use the host network interface. Thank you for your help.

By default, CrateDB only allows access for ‘crate’ from localhost, following the HBA settings found here: https://github.com/crate/crate/blob/f85f383d609ec4524309351cd97ab0df38f3c989/app/src/main/dist/config/crate.yml#L91-L99
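For reference, the linked section of the default crate.yml sketches an HBA setup roughly like this (local trust for the ‘crate’ user, password authentication for everyone else; shown here uncommented):

```yaml
auth:
  host_based:
    enabled: true
    config:
      0:
        user: crate       # the built-in superuser ...
        address: _local_  # ... is trusted only on localhost
        method: trust
      99:
        method: password  # every other connection must authenticate with a password
```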

This doesn’t really work for Docker, as it would be impossible to even create any other user without opening a shell session inside the container. Therefore the defaults are different for Docker:
https://github.com/crate/docker-crate/blob/master/config/crate.yml
However, you can adjust the HBA settings to disallow any access for ‘crate’ as well.
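For example, a minimal sketch of such a lockdown (assuming you have already created a password-protected admin user): with only a password entry in the HBA config, the built-in ‘crate’ user, which has no password, can no longer log in from anywhere:

```yaml
auth:
  host_based:
    enabled: true
    config:
      99:
        method: password  # no trust entry at all: 'crate' has no password and is locked out
```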

Hi, just to elaborate on that with an example: you could initially deploy the containers with the default settings, being careful not to expose them on the network, then connect and create the appropriate users, and then redeploy the containers attaching the same /data volume but passing

- -Cauth.host_based.enabled=true
- -Cauth.host_based.config.99.method=password
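In a Compose file, those options end up in the service's command, e.g. (a sketch; the service name and volume path are placeholders):

```yaml
services:
  cratedb:
    image: crate:latest
    volumes:
      - /path/to/data:/data   # reattach the same /data volume from the first deployment
    command: ["crate",
              "-Cauth.host_based.enabled=true",
              "-Cauth.host_based.config.99.method=password"]
```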

Thank you for the answers.
It is not clear to me whether I can add the HBA settings to the configuration file /config/crate.yml (which now resides on a Docker volume) and restart all three containers (question: do I have to edit all three crate.yml files, or, being clustered, is the HBA configuration distributed between the nodes automatically?)

Or do I have to follow hernanc's example (and then recreate the containers again, keeping them disconnected from the network)?

In this case, do I have to start the container, enter it with docker exec -it …, and create the users inside the container?
If so, what should I do from inside the container?

When recreating the containers with the /data volume at a later stage, is there a risk of bringing back the previous configuration over the one just made in the new container?

Hi, but how can I do this with the cluster running?

  • The Docker image comes with the crash CLI included.
  • crate.yml is a per-node configuration file.
  • Authentication can differ per node; it is a node setting.
  • The cluster state in /data is not aware of the authentication method; however, the users and passwords you created before are part of it.
version: '3.8'
services:
  cratedb:
    image: crate:latest
    ports:
      - "4200:4200"
    volumes:
      - /tmp/crate/01:/data
    command: ["crate",
              "-Cauth.host_based.enabled=true",
              "-Cauth.host_based.config.99.method=password"]
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    environment:
      - CRATE_HEAP_SIZE=2g
% docker-compose -f docker_compose.yml up
% docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                                        NAMES
d80086af09f8   crate:latest   "/docker-entrypoint.…"   14 seconds ago   Up 14 seconds   4300/tcp, 0.0.0.0:4200->4200/tcp, 5432/tcp   docker-cratedb-1

% docker exec -it d80086af09f8 /bin/bash

[root@d80086af09f8 data]# crash
cr> CREATE USER "admin" WITH ("password" = 'supersecret');
CREATE OK, 1 row affected (0.115 sec)
cr> GRANT ALL TO "admin";
GRANT OK, 4 rows affected (0.028 sec)

Then log in with the user admin and the password supersecret.
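Once connected, you can double-check which users exist and what was granted by querying the system tables (a quick sketch; the exact column set may vary by CrateDB version):

```sql
-- list all users known to the cluster
SELECT name, superuser FROM sys.users ORDER BY name;

-- list the privileges granted to admin
SELECT grantee, type, class, state FROM sys.privileges WHERE grantee = 'admin';
```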

UPDATE: SOLVED. :grinning:

I destroyed the previous container and created a new one following your suggestions.

Thank you proddata.