Deploying YDB cluster with Ansible

This guide outlines the process of deploying a YDB cluster on a group of servers using Ansible. The recommended setup to get started is 3 servers, each with 3 disk drives for user data. For reliability, each server's infrastructure should be as independent as possible: ideally, the servers should be located in separate datacenters or availability zones, or at least in different server racks.

For large-scale setups, it is recommended to use at least 9 servers for highly available clusters (mirror-3-dc) or 8 servers for single-datacenter clusters (block-4-2). In these cases, each server can have a single disk drive for user data, but it is preferable to add a small separate drive for the operating system. You can learn about the redundancy models available in YDB from the YDB cluster topology article. During operation, the cluster can be expanded without suspending user access to the databases.

Note

Recommended server requirements:

  • 16 CPUs (calculated based on the utilization of 8 CPUs by the storage node and 8 CPUs by the dynamic node).
  • 16 GB RAM (recommended minimum RAM).
  • Additional SSD drives for data, at least 120 GB each.
  • SSH access.
  • Network connectivity between machines in the cluster.
  • OS: Ubuntu 18+, Debian 9+.
  • Internet access is needed to update repositories and download necessary packages.

Download the GitHub repository with YDB cluster installation examples: git clone https://github.com/ydb-platform/ydb-ansible-examples.git. This repository contains several installation templates for deploying YDB clusters in subfolders, as well as scripts for generating TLS certificates and requirements files for installing the necessary Python packages. In this article, we'll use the 3-nodes-mirror-3-dc subfolder for the simplest setup. Alternatively, you can use 8-nodes-block-4-2 or 9-nodes-mirror-3-dc in the same way if you have the required number of suitable servers.
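
For reference, the download step is a single clone, after which the later setup commands are run from the repository root:

git clone https://github.com/ydb-platform/ydb-ansible-examples.git
cd ydb-ansible-examples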

Repository Structure
├── 3-nodes-mirror-3-dc / 9-nodes-mirror-3-dc / 8-nodes-block-4-2
│   ├── ansible.cfg # An Ansible configuration file containing settings for connecting to servers and project structure options. It is essential for customizing Ansible's behavior and specifying default settings.
│   ├── ansible_vault_password_file # A file containing the password for decrypting encrypted data with Ansible Vault, such as sensitive variables or configuration details. This is crucial for securely managing secrets like the root user password.
│   ├── creds # A directory for environment variables that specify the username and password for YDB, facilitating secure access to the database.
│   ├── files
│   │   ├── config.yaml # A YDB configuration file, which contains settings for the database instances.
│   ├── inventory # A directory containing inventory files, which list and organize the servers Ansible will manage.
│   │   ├── 50-inventory.yaml # The main inventory file, specifying the hosts and groups for Ansible tasks.
│   │   └── 99-inventory-vault.yaml #  An encrypted inventory file storing sensitive information, such as the root user's password for YDB, using Ansible Vault.
├── README.md # A markdown file providing a description of the repository, including how to use it, prerequisites, and any other relevant information.
├── requirements.txt # A file listing Python package dependencies required for the virtual environment, ensuring all necessary tools and libraries are installed.
├── requirements.yaml # Specifies the Ansible collections needed, pointing to the latest versions or specific versions required for the project.
├── TLS # A directory intended for storing TLS (Transport Layer Security) certificates and keys for secure communication.
│   ├── ydb-ca-nodes.txt # Contains a list of Fully Qualified Domain Names (FQDNs) of the servers for which TLS certificates will be generated, ensuring secure connections to each node.
│   └── ydb-ca-update.sh # A script for generating TLS certificates from the ydb-ca-nodes.txt list, automating the process of securing communication within the cluster.

To work with the project on a local (intermediate or installation) machine, you will need Python 3 version 3.10+ and Ansible core version 2.15.2 or higher. Ansible can be installed and run globally (installed in the system) or in a virtual environment. If Ansible is already installed, you can move on to the step "Configure the Ansible project"; if Ansible is not yet installed, install it using one of the following methods:

Installing Ansible globally (system-wide, using apt):

  • Update the apt package list – sudo apt-get update.
  • Upgrade installed packages – sudo apt-get upgrade.
  • Install the software-properties-common package to manage your distribution's software sources – sudo apt install software-properties-common.
  • Add the Ansible PPA to apt – sudo add-apt-repository --yes --update ppa:ansible/ansible.
  • Install Ansible – sudo apt-get install ansible-core (note that installing just ansible will lead to an unsuitable outdated version).
  • Check the Ansible core version – ansible --version.

Installing Ansible in a Python virtual environment (a consolidated command sketch follows this list):

  • Update the apt package list – sudo apt-get update.
  • Install the venv package for Python 3 – sudo apt-get install python3-venv.
  • Create a directory where the virtual environment will be created and the playbooks will be downloaded. For example, mkdir venv-ansible.
  • Create a Python virtual environment – python3 -m venv venv-ansible.
  • Activate the virtual environment – source venv-ansible/bin/activate. All further actions with Ansible are performed inside the virtual environment; you can exit it with the command deactivate.
  • Install the recommended version of Ansible from the root directory of the downloaded repository – pip3 install -r requirements.txt.
  • Check the Ansible core version – ansible --version.
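
For convenience, the virtual-environment option above can be run as the following shell sequence (a sketch; it assumes the ydb-ansible-examples repository has already been cloned into the current directory):

sudo apt-get update
sudo apt-get install python3-venv

# create and activate a Python virtual environment (the directory is created by venv)
python3 -m venv venv-ansible
source venv-ansible/bin/activate

# install the recommended Ansible version from the repository root
cd ydb-ansible-examples
pip3 install -r requirements.txt
ansible --version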

Navigate to the root directory of the downloaded repository and execute the command ansible-galaxy install -r requirements.yaml – this will download the Ansible collections ydb_platform.ydb and community.general, which contain roles and plugins for installing YDB.
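
Run from the repository root, this step looks like:

cd ydb-ansible-examples
ansible-galaxy install -r requirements.yaml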

Configure the Ansible project

Edit the inventory files

Regardless of the chosen cluster topology (3-nodes-mirror-3-dc, 9-nodes-mirror-3-dc, or 8-nodes-block-4-2), the main parameters for installing and configuring YDB are contained in the inventory file 50-inventory.yaml, which is located in the inventory/ directory.

In the inventory file 50-inventory.yaml, you need to specify the current list of FQDNs of the servers where YDB will be installed. By default, the list appears as follows:

all:
  children:
    ydb:
      static-node-1.ydb-cluster.com:
      static-node-2.ydb-cluster.com:
      static-node-3.ydb-cluster.com:

Next, you need to make the following changes in the vars section of the inventory file:

  • ansible_user – specify the user for Ansible to connect via SSH.

  • ansible_ssh_common_args: "-o ProxyJump=<ansible_user>@<static-node-1-IP>" – an option for connecting Ansible to the target servers through an intermediate (ProxyJump) server specified by IP. It is used when installing YDB from a local machine that is not included in the private DNS zone.

  • ansible_ssh_private_key_file – replace the default private SSH key path with the actual one: "../<ssh-private-key-name>".

  • Choose one of the available options for deploying YDB executables:

    • ydb_version: automatically download one of the YDB official releases by version number. For example, 23.4.11.
    • ydb_git_version: automatically compile the YDB executables from the source code, downloaded from the official GitHub repository. The setting's value is a branch, tag, or commit name. For example, main.
    • ydb_archive: a local filesystem path for a YDB distribution archive downloaded or otherwise prepared in advance.
    • ydbd_binary and ydb_cli_binary: local filesystem paths for YDB server and client executables, downloaded or otherwise prepared in advance.
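
A minimal vars snippet illustrating these settings could look like the following sketch. It uses the same all/children/ydb layout as the inventory snippets elsewhere in this guide (adjust the nesting to match the vars section already present in 50-inventory.yaml); the user name, IP address, and key path are illustrative values, not repository defaults:

all:
  children:
    ydb:
      vars:
        ansible_user: ubuntu
        ansible_ssh_common_args: "-o ProxyJump=ubuntu@192.0.2.10"
        ansible_ssh_private_key_file: "../ydb_ssh_key"
        ydb_version: "23.4.11"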

Optional changes in the inventory files

Feel free to change these settings if needed, but it is not necessary in straightforward cases:

  • ydb_cores_static – set the number of CPU cores allocated to static nodes.

  • ydb_cores_dynamic – set the number of CPU cores allocated to dynamic nodes.

  • ydb_tls_dir – specify a local path to a folder with TLS certificates prepared in advance. It must contain the ca.crt file and subdirectories named after the node hostnames, each containing the certificates for that node (see the layout sketch below). If omitted, self-signed TLS certificates will be generated automatically for the whole YDB cluster.

  • ydb_brokers – list the FQDNs of the broker nodes. For example:

    ydb_brokers:
        - static-node-1.ydb-cluster.com
        - static-node-2.ydb-cluster.com
        - static-node-3.ydb-cluster.com
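
For the ydb_tls_dir option above, the expected directory layout is roughly as follows (a sketch based on the certificate files referenced later in this guide; the top-level directory name is arbitrary, and each node subdirectory is named after the node's hostname):

ydb-certs/
├── ca.crt
├── static-node-1.ydb-cluster.com/
│   ├── node.crt
│   ├── node.key
│   └── web.pem
├── static-node-2.ydb-cluster.com/
│   └── ...
└── static-node-3.ydb-cluster.com/
    └── ...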
    

The value of the ydb_database_groups variable in the vars section has a fixed value tied to the redundancy type and does not depend on the size of the cluster:

  • For the redundancy type block-4-2, the value of ydb_database_groups is seven.
  • For the redundancy type mirror-3-dc, the value of ydb_database_groups is eight.

The values of the system_timezone and system_ntp_servers variables depend on the infrastructure properties where the YDB cluster is being deployed. By default, system_ntp_servers includes a set of NTP servers without considering the geographical location of the infrastructure on which the YDB cluster will be deployed. We strongly recommend using a local NTP server for on-premise infrastructure and the following NTP servers for cloud providers:

  • AWS:
    • system_timezone: USA/<region_name>
    • system_ntp_servers: [169.254.169.123, time.aws.com]. Learn more about AWS NTP server settings.
  • Azure: you can read about how time synchronization is configured on Azure virtual machines in this article.
  • Alibaba Cloud: the specifics of connecting to NTP servers in Alibaba are described in this article.
  • Yandex Cloud:
    • system_timezone: Europe/Moscow
    • system_ntp_servers: [0.ru.pool.ntp.org, 1.ru.pool.ntp.org, ntp0.NL.net, ntp2.vniiftri.ru, ntp.ix.ru, ntps1-1.cs.tu-berlin.de]. Learn more about Yandex Cloud NTP server settings.

No changes to other sections of the 50-inventory.yaml configuration file are required.

Changing the root user password

Next, you can change the standard YDB root user password stored in the encrypted inventory file 99-inventory-vault.yaml and in the ansible_vault_password_file file. To change the password, specify the new password in ansible_vault_password_file and duplicate it in 99-inventory-vault.yaml in the following format:

all:
  children:
    ydb:
      vars:
        ydb_password: <new-password>

To encrypt 99-inventory-vault.yaml, execute the command ansible-vault encrypt inventory/99-inventory-vault.yaml.
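
A minimal sketch of this workflow, assuming the commands are run from the template directory (if ansible-vault does not pick up the vault password automatically, add --vault-password-file ansible_vault_password_file to the last command):

# 1. Write the new password into the vault password file.
echo '<new-password>' > ansible_vault_password_file

# 2. Set the same value as ydb_password in inventory/99-inventory-vault.yaml
#    (decrypt it first with `ansible-vault decrypt inventory/99-inventory-vault.yaml`
#    if the file is already encrypted).

# 3. Re-encrypt the inventory file.
ansible-vault encrypt inventory/99-inventory-vault.yaml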

After modifying the inventory files, you can proceed to prepare the YDB configuration file.

Prepare the YDB configuration file

The YDB configuration file contains the settings for YDB nodes and is located at files/config.yaml inside the template directory. A detailed description of the configuration file settings for YDB can be found in the article YDB cluster configuration.

The default YDB configuration file already includes almost all the necessary settings for deploying the cluster. You need to replace the standard FQDNs of hosts with the current FQDNs in the hosts and blob_storage_config sections:

  • hosts section:

    ...
    hosts:
    - host: static-node-1.ydb-cluster.com
      host_config_id: 1
      walle_location:
        body: 1
        data_center: 'zone-a'
        rack: '1'
    ...
    
  • blob_storage_config section:

    ...
    - fail_domains:
        - vdisk_locations:
          - node_id: static-node-1.ydb-cluster.com
            pdisk_category: SSD
            path: /dev/disk/by-partlabel/ydb_disk_1
    ...
    

The rest of the sections and settings in the configuration file can remain unchanged.
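
For a three-node mirror-3-dc layout, each static node is typically placed in its own data center. A sketch of a fully filled-in hosts section (the zone labels, host_config_id, body, and rack values here are illustrative, not repository defaults):

hosts:
- host: static-node-1.ydb-cluster.com
  host_config_id: 1
  walle_location:
    body: 1
    data_center: 'zone-a'
    rack: '1'
- host: static-node-2.ydb-cluster.com
  host_config_id: 1
  walle_location:
    body: 2
    data_center: 'zone-b'
    rack: '2'
- host: static-node-3.ydb-cluster.com
  host_config_id: 1
  walle_location:
    body: 3
    data_center: 'zone-c'
    rack: '3'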

Deploying the YDB cluster

Note

The minimum number of servers in a YDB cluster is eight servers for the block-4-2 redundancy model and nine servers for the mirror-3-dc redundancy model.

For mirror-3-dc, servers should be distributed across three availability zones or datacenters as evenly as possible.

The repository contains two ready sets of templates for deploying a YDB cluster of eight (redundancy model block-4-2) and nine servers (mirror-3-dc). Both options can be scaled to any required number of servers, considering a number of technical requirements.

To prepare your template, you can follow the instructions below (a consolidated command sketch follows the list):

  1. Create a copy of the directory with the ready example (3-nodes-mirror-3-dc, 9-nodes-mirror-3-dc, or 8-nodes-block-4-2).
  2. Specify the FQDNs of the servers in the file TLS/ydb-ca-nodes.txt and execute the script ydb-ca-update.sh to generate sets of TLS certificates.
  3. Change the template's inventory files according to the instructions.
  4. Make changes to the YDB configuration file according to the instructions.
  5. In the directory of the cloned template, execute the command ansible-playbook ydb_platform.ydb.initial_setup.
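
Put together, preparing and running the playbook from the repository root might look like the following sketch (the copied directory name my-ydb-cluster and the $EDITOR steps are placeholders for the instructions above):

# 1. Copy the ready example
cp -r 3-nodes-mirror-3-dc my-ydb-cluster

# 2. List the server FQDNs and generate the TLS certificates
$EDITOR TLS/ydb-ca-nodes.txt
cd TLS && ./ydb-ca-update.sh && cd ..

# 3. Edit the inventory and YDB configuration files as described above
$EDITOR my-ydb-cluster/inventory/50-inventory.yaml
$EDITOR my-ydb-cluster/files/config.yaml

# 4. Run the installation playbook from the template directory
cd my-ydb-cluster
ansible-playbook ydb_platform.ydb.initial_setup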

Installation script execution plan for YDB

The sequence of role executions and their brief descriptions:

  1. The packages role configures repositories, manages APT preferences and configurations, fixes unconfigured packages, and installs necessary software packages depending on the distribution version.
  2. The system role sets up system settings, including clock and timezone configuration, time synchronization via NTP with systemd-timesyncd, configuring systemd-journald for log management, kernel module loading configuration, kernel parameter optimization through sysctl, and CPU performance tuning using cpufrequtils.
  3. The ydbd role performs tasks related to checking necessary variables, installing base components and dependencies, setting up system users and groups, and deploying and configuring YDB, including managing TLS certificates and updating configuration files.
  4. The ydbd_static role prepares and launches static nodes of YDB, including checking necessary variables and secrets, formatting and preparing disks, creating and launching the systemd unit for the storage node, as well as initializing the storage and managing database access.
  5. The ydbd_dynamic role configures and manages dynamic nodes of YDB, including checking necessary variables, creating configuration and systemd unit files for each dynamic node, launching these nodes, obtaining a token for YDB access, and creating a database in YDB.

Detailed step-by-step installation process description

  1. Role packages. Tasks:
  • check dpkg audit – Verifies the dpkg state using the dpkg --audit command and saves the command results in the dpkg_audit_result variable. The task will terminate with an error if the dpkg_audit_result.rc command returns a value other than 0 or 1.
  • run the equivalent of "apt-get clean" as a separate step – Cleans the apt cache, similarly to the apt-get clean command.
  • run the equivalent of "apt-get update" as a separate step – Updates the apt cache, akin to the apt-get update command.
  • fix unconfigured packages – Fixes packages that are not configured using the dpkg --configure --pending command.
  • set vars_for_distribution_version variables – Sets variables for a specific Linux distribution version.
  • setup apt repositories – Configures apt repositories from a specified list.
  • setup apt preferences – Configures apt preferences (variable contents are specified in roles/packages/vars/distributions/<distributive name>/<version>/main.yaml).
  • setup apt configs – Configures apt settings.
  • flush handlers – Forcibly runs all accumulated handlers. In this context, it triggers a handler that updates the apt cache.
  • install packages – Installs apt packages considering specified parameters and cache validity.

Links to the lists of packages that will be installed for Ubuntu 22.04 or Astra Linux 1.7:

  • List of packages for Ubuntu 22.04;
  • List of packages for Astra Linux 1.7.
  2. Role system. Tasks:
  • configure clock – A block of tasks for setting up system clocks:

    • assert required variables are defined – Checks for the existence of the system_timezone variable. This check ensures that the necessary variable is available for the next task in the block.
    • set system timezone – Sets the system timezone. The timezone is determined by the value of the system_timezone variable, and the hardware clock (hwclock) is set to UTC. After completing the task, a notification is sent to restart the cron service.
    • flush handlers – Forces the execution of accumulated handlers using the meta directive. This will restart the following processes: timesyncd, journald, cron, cpufrequtils, and execute the sysctl -p command.
  • configure systemd-timesyncd – A task block for configuring systemd-timesyncd:

    • assert required variables are defined asserts that the number of NTP servers (system_ntp_servers) is more than one if the variable system_ntp_servers is defined. If the variable system_ntp_servers is not defined, the execution of the configure systemd-timesyncd task block will be skipped, including the check for the number of NTP servers and the configuration of systemd-timesyncd.
    • create conf.d directory for timesyncd - Creates the /etc/systemd/timesyncd.conf.d directory if the system_ntp_servers variable is defined.
    • configure systemd-timesyncd - Creates a configuration file /etc/systemd/timesyncd.conf.d/ydb.conf for the systemd-timesyncd service with primary and backup NTP servers. The task is executed if the system_ntp_servers variable is defined. After completing the task, a notification is sent to restart the timesyncd service.
    • flush handlers - Calls accumulated handlers. Executes the handler restart timesyncd, which restarts the systemd-timesyncd.service.
    • start timesyncd - Starts and enables the systemd-timesyncd.service. Subsequently, the service will start automatically at system boot.
  • configure systemd-journald – A block of tasks for configuring the systemd-journald service:

    • create conf.d directory for journald - Creates the /etc/systemd/journald.conf.d directory for storing systemd-journald configuration files.
    • configure systemd-journald - Creates a configuration file /etc/systemd/journald.conf.d/ydb.conf for systemd-journald, specifying a Journal section with the option ForwardToWall=no. The ForwardToWall=no setting in the systemd-journald configuration means that system log messages will not be forwarded as "wall" messages to all logged-in users. After completing the task, a notification is sent to restart the journald service.
    • flush handlers - Calls accumulated handlers. Executes the handler restart journald, which restarts the systemd-journald service.
    • start journald - Starts and enables the systemd-journald.service. Subsequently, the service will start automatically at system boot.
  • configure kernel – A block of tasks for kernel configuration:

    • configure /etc/modules-load.d dir - Creates the /etc/modules-load.d directory with owner and group permissions for the root user and 0755 permissions.
    • setup conntrack module - Copies the nf_conntrack line into the file /etc/modules-load.d/conntrack.conf to load the nf_conntrack module at system start.
    • load conntrack module - Loads the nf_conntrack module in the current session.
    • setup sysctl files - Applies templates to create configuration files in /etc/sysctl.d/ for various system settings (such as security, network, and filesystem settings). The list of files includes 10-console-messages.conf, 10-link-restrictions.conf, and others. After completing this task, a notification is sent to apply the kernel settings changes.
    • flush handlers - Calls accumulated handlers. Executes the handler apply kernel settings, which runs the sysctl -p command to apply the kernel parameters specified in /etc/sysctl.conf or in other files in the /etc/sysctl.d/ directory.
  • configure cpu governor – A block of tasks for configuring the CPU frequency management mode:

    • install cpufrequtils - Installs the cpufrequtils package from apt. The task is set with cache check parameters and a task timeout of 300 seconds to expedite task execution and avoid an infinite loop waiting for apt package updates.
    • use performance cpu governor - Creates the file /etc/default/cpufrequtils with content "GOVERNOR=performance", which sets the CPU governor mode to "performance" (disabling power-saving mode when CPU cores are idle). After completing the task, a notification is sent to restart the cpufrequtils service.
    • disable ondemand.service - Disables the ondemand.service if it is present in the system. The service is stopped, its automatic start is disabled, and it is masked (preventing its start). After completing the task, a notification is sent to restart cpufrequtils.
    • flush handlers - Calls accumulated handlers. Executes the handler restart cpufrequtils, which restarts the cpufrequtils service.
    • start cpufrequtils - Starts and enables the cpufrequtils.service. Subsequently, the service will start automatically at system boot.
  3. Role ydbd. Tasks:
  • check if required variables are defined – Checks that the variables ydb_archive, ydb_config, ydb_tls_dir are defined. If any of these are undefined, Ansible will display an appropriate error message and stop the playbook execution.

  • set vars_for_distribution variables – Sets variables from the specified file in the vars_for_distribution_file variable during playbook execution. This task manages a set of variables dependent on the specific Linux distribution.

  • ensure libaio is installed – Ensures that the libaio package is installed.

  • install custom libidn from archive – Installs a custom version of the libidn library from an archive.

  • create certs group – Creates a system group certs.

  • create ydb group – Creates a system group ydb.

  • create ydb user – Creates a system user ydb with a home directory.

  • install YDB server binary package from archive – Installs YDB from a downloaded archive.

  • create YDB audit directory – Creates an audit subdirectory in the YDB installation directory.

  • setup certificates – A block of tasks for setting up security certificates:

    • create YDB certs directory – Creates a certs subdirectory in the YDB installation directory.
    • copy the TLS ca.crt – Copies the root certificate ca.crt to the server.
    • copy the TLS node.crt – Copies the TLS certificate node.crt from the generated certificates directory.
    • copy the TLS node.key – Copies the TLS certificate node.key from the generated certificates directory.
    • copy the TLS web.pem – Copies the TLS pem key web.pem from the generated certificates directory.
  • copy configuration file – Copies the configuration file config.yaml to the server.

  • add configuration file updater script – Copies the update_config_file.sh script to the server.

  4. Role ydbd_static. Tasks:
  • check if required variables are defined – Checks that the variables ydb_cores_static, ydb_disks, ydb_domain, ydb_user are defined. If any of these variables are undefined, the task will fail and an appropriate error message will be displayed for each undefined variable.
  • check if required secrets are defined – Verifies that the secret variable ydb_password is defined. If this variable is undefined, the task will fail and an error message will be displayed.
  • create static node configuration file – Creates a static node configuration file by running the copied update_config_file.sh script with ydbd-config.yaml and ydbd-config-static.yaml configurations.
  • create static node systemd unit – Creates a ydbd-storage.service file for the static node based on a template. After completing the task, a notification is sent to restart the systemd service.
  • flush handlers – Executes accumulated handlers. Restarts all systemd services.
  • format drives confirmation block – A block of tasks for formatting disks and interrupting playbook execution in case the user declines confirmation. A confirmation request to format the connected disk will be displayed in the terminal. Response options: yes – to continue executing the playbook with disk formatting. Any other value will be interpreted as a refusal to format. By default, disks are formatted automatically without asking the user for permission, as the variables ydb_allow_format_drives and ydb_skip_data_loss_confirmation_prompt are set to true. If user confirmation is required, the value of the ydb_skip_data_loss_confirmation_prompt variable should be changed to false in the inventory file 50-inventory.yaml.
  • prepare drives – A task for formatting connected disks. Calls the drive_prepare plugin – a specially developed Ansible module for YDB installation, which is part of the YDB collection and is located in the directory .../.ansible/collections/ansible_collections/ydb_platform/ydb/plugins/action/drive_prepare.py. The module will format the connected disk specified in the ydb_disks variable if the ydb_allow_format_drives variable is set to true.
  • start storage node – Starts the storage node process using systemd. If any errors occur during service startup, playbook execution will be interrupted.
  • get ydb token – Requests a YDB token to perform the storage initialization command. The token is stored in the ydb_credentials variable. The task calls the get_token module from the directory .../.ansible/collections/ansible_collections/ydb_platform/ydb/plugins/modules. If any errors occur at this step, playbook execution will be interrupted.
  • wait for ydb discovery to start working locally – Calls the wait_discovery module, which performs a ListEndpoints request to YDB to check the operability of the cluster's basic subsystems. If the subsystems are working properly, storage initialization commands and database creation can be executed.
  • init YDB storage if not initialized – Initializes the storage if it has not already been created. The task calls the init_storage plugin, which performs the storage initialization command using a grpcs request to the static node on port 2135. The command result is stored in the init_storage variable.
  • wait for ydb healthcheck switch to "GOOD" status – Waits for the YDB healthcheck system to switch to a GOOD status. The task calls the wait_healthcheck plugin, which performs a health check command on YDB.
  • set cluster root password – Sets the password for the YDB root user. The task is executed by the set_user_password plugin, which performs a grpcs request to YDB and sets a pre-defined password for the YDB root user. The password is specified in the ydb_password variable in the inventory file /examples/9-nodes-mirror-3-dc/inventory/99-inventory-vault.yaml in an encrypted form.
  5. Role ydbd_dynamic. Tasks:
  • check if required variables are defined – Verifies the presence of required variables (ydb_domain, ydb_pool_kind, ydb_cores_dynamic, ydb_brokers, ydb_dbname, ydb_dynnodes) and displays an error if any variable is missing.
  • create dynamic node configuration file – Creates a configuration file for dynamic nodes.
  • create dynamic node systemd unit – Creates a systemd service for dynamic nodes. After completing the task, a notification is sent to restart the systemd service.
  • flush handlers – Executes accumulated handlers. This will restart systemd.
  • start dynamic nodes – Starts the process of dynamic nodes using systemd.
  • get ydb token – Obtains a token for creating a database.
  • create YDB database – Creates a database. The task is executed by the create_database plugin, which sends a request to YDB to create the database.
  • wait for ydb discovery to start working locally – Calls the wait_discovery module again to check the operability of the cluster's basic subsystems.

As a result of executing the playbook, a YDB cluster is created with a test database named database, a root user with maximum access rights, and the Embedded UI running on port 8765. To connect to the Embedded UI, you can set up SSH tunneling: execute the command ssh -L 8765:localhost:8765 -i <ssh private key> <user>@<first-ydb-static-node-ip> on your local machine. After the connection is successfully established, you can navigate to localhost:8765 in a browser:

(Screenshot: YDB Embedded UI)

Monitoring the cluster state

After successfully creating the YDB cluster, you can check its state using the Embedded UI – http://localhost:8765/monitoring/cluster/tenants:

(Screenshot: YDB cluster state check)

This section displays the following parameters of the YDB cluster, reflecting its state:

  • Tablets – a list of running tablets. All tablet state indicators should be green;
  • Nodes – the number and state of static and dynamic nodes launched in the cluster. The node state indicator should be green, and the number of launched nodes should match the number of created nodes. For example, 27/27 for a nine-node cluster.

The Load indicators (amount of RAM used) and Storage (amount of disk space used) should also be green.

You can check the state of the storage group in the storage section – http://localhost:8765/monitoring/cluster/storage:

(Screenshot: storage group state check)

The VDisks indicators should be green, and the state status (shown in the tooltip when hovering over a VDisk indicator) should be Ok. You can read more about the cluster state indicators and monitoring in the article YDB Monitoring.

Cluster Testing

You can test the cluster using the built-in load tests in YDB CLI. To do this, download YDB CLI version 2.5.0 to the machine where Ansible is installed. For example, using wget: wget https://storage.yandexcloud.net/yandexcloud-ydb/release/2.5.0/linux/amd64/ydb.

Make the downloaded binary file executable (chmod +x ydb) and execute the connection check command:

./ydb \
config profile create <profile name> \
-d /Root/database \
-e grpcs://<FQDN of a node>:2135 \
--ca-file <path to generated certs>/CA/certs/ca.crt \
--user root \
--password-file <path to vault password file>/ansible_vault_password_file

Command parameters and their values:

  • config profile create – This command is used to create a connection profile. You specify the profile name. More detailed information on how to create and modify profiles can be found in the article Creating and updating profiles.
  • -e – Endpoint, a string in the format protocol://host:port. You can specify the FQDN of any cluster node and omit the port. By default, port 2135 is used.
  • --ca-file – Path to the root certificate for connections to the database using grpcs. The certificate is created by the ydb-ca-update.sh script in the TLS directory and is located at the path TLS/CA/certs/ relative to the root of the ydb-ansible-examples repository.
  • --user – The user for connecting to the database. By default, the user root is created when executing the ydb_platform.ydb.initial_setup playbook.
  • --password-file – Path to the password file. In each folder with a YDB cluster deployment template, there is an ansible_vault_password_file that contains the password for the user root.
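
Filled in with the example hostnames and repository layout used in this guide, the command could look like this (the profile name cluster-test is arbitrary; the paths assume the command is run from the root of the ydb-ansible-examples repository):

./ydb \
config profile create cluster-test \
-d /Root/database \
-e grpcs://static-node-1.ydb-cluster.com:2135 \
--ca-file TLS/CA/certs/ca.crt \
--user root \
--password-file 3-nodes-mirror-3-dc/ansible_vault_password_file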

You can check if the profile has been created using the command ./ydb config profile list, which will display a list of profiles. After creating a profile, you need to activate it with the command ./ydb config profile activate <profile name>. To verify that the profile has been activated, you can rerun the command ./ydb config profile list – the active profile will have an (active) mark.

To execute a YQL query, you can use the command ./ydb yql -s 'select 1;', which returns the result of the select 1 query in table form in the terminal. After checking the connection, you can create a test table with the command ./ydb workload kv init --init-upserts 1000 --cols 4. This creates a test table kv_test consisting of 4 columns and 1000 rows. You can verify that the kv_test table was created and filled with test data by using the command ./ydb yql -s 'select * from kv_test limit 10;'.
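
The checks described above can be run back to back, for example (assuming the profile created earlier is named cluster-test):

# activate the connection profile
./ydb config profile activate cluster-test

# simple connectivity check
./ydb yql -s 'select 1;'

# create and fill the kv_test table, then read a few rows back
./ydb workload kv init --init-upserts 1000 --cols 4
./ydb yql -s 'select * from kv_test limit 10;'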

The terminal will display a table of 10 rows. Now you can perform cluster performance testing. The article Key-Value load describes 5 types of workloads (upsert, insert, select, read-rows, mixed) and the parameters for their execution. An example of running the upsert test workload with default parameters and the --print-timestamp option, which prints execution timestamps, is: ./ydb workload kv run upsert --print-timestamp.

A report of the following type will be displayed in the terminal:

Window Txs/Sec Retries Errors  p50(ms) p95(ms) p99(ms) pMax(ms)        Timestamp
1          727 0       0       11      27      71      116     2024-02-14T12:56:39Z
2          882 0       0       10      21      29      38      2024-02-14T12:56:40Z
3          848 0       0       10      22      30      105     2024-02-14T12:56:41Z
...

After completing the tests, the kv_test table can be deleted with the command ./ydb workload kv clean. More details on the options for creating a test table and running tests can be found in the article Key-Value load.