Deploying a YDB cluster with Ansible
This guide outlines the process of deploying a YDB cluster on a group of servers using Ansible. The recommended setup to get started is 3 servers with 3 disk drives for user data each. For reliability, each server should use infrastructure that is as independent as possible: ideally, each server should be located in a separate datacenter or availability zone, or at least in a different server rack.
For large-scale setups, it is recommended to use at least 9 servers for highly available clusters (mirror-3-dc) or 8 servers for single-datacenter clusters (block-4-2). In these cases, servers need only one disk drive for user data each, but it is advisable to also have a small separate drive for the operating system. You can learn about the redundancy models available in YDB from the YDB cluster topology article. During operation, the cluster can be expanded without suspending user access to the databases.
Note
Recommended server requirements:
- 16 CPUs (calculated based on the utilization of 8 CPUs by the storage node and 8 CPUs by the dynamic node).
- 16 GB RAM (recommended minimum RAM).
- Additional SSD drives for data, at least 120 GB each.
- SSH access.
- Network connectivity between machines in the cluster.
- OS: Ubuntu 18+, Debian 9+.
- Internet access is needed to update repositories and download necessary packages.
Download the GitHub repository with examples for installing a YDB cluster: git clone https://github.com/ydb-platform/ydb-ansible-examples.git. This repository contains installation templates for deploying YDB clusters in subfolders, as well as scripts for generating TLS certificates and requirements files for installing the necessary Python packages. In this article, we'll use the 3-nodes-mirror-3-dc subfolder for the simplest setup. Alternatively, you can use 8-nodes-block-4-2 or 9-nodes-mirror-3-dc in the same way if you have the necessary number of suitable servers.
Repository Structure
├── 3-nodes-mirror-3-dc / 9-nodes-mirror-3-dc / 8-nodes-block-4-2
│   ├── ansible.cfg                 # Ansible configuration file with connection settings and project structure options.
│   ├── ansible_vault_password_file # Password for decrypting data encrypted with Ansible Vault, such as the YDB root user password.
│   ├── creds                       # Directory with environment variables that set the YDB username and password.
│   ├── files
│   │   └── config.yaml             # YDB configuration file with settings for the database nodes.
│   └── inventory                   # Directory with inventory files listing the servers that Ansible will manage.
│       ├── 50-inventory.yaml       # Main inventory file specifying the hosts and groups for Ansible tasks.
│       └── 99-inventory-vault.yaml # Encrypted inventory file storing sensitive data, such as the YDB root user password.
├── README.md                       # Description of the repository, prerequisites, and usage instructions.
├── requirements.txt                # Python package dependencies for the virtual environment.
├── requirements.yaml               # Ansible collections required for the project.
└── TLS                             # Directory for TLS (Transport Layer Security) certificates and keys.
    ├── ydb-ca-nodes.txt            # List of server FQDNs for which TLS certificates will be generated.
    └── ydb-ca-update.sh            # Script that generates TLS certificates from the ydb-ca-nodes.txt list.
To work with the project on a local (intermediate or installation) machine, you will need Python 3 version 3.10+ and Ansible core version 2.15.2 or higher. Ansible can be installed and run globally (installed in the system) or in a virtual environment. If Ansible is already installed, you can move on to the step "Configure the Ansible project"; if Ansible is not yet installed, install it using one of the following methods:
Installing Ansible globally:
- Update the apt package list: sudo apt-get update.
- Upgrade the installed packages: sudo apt-get upgrade.
- Install the software-properties-common package to manage your distribution's software sources: sudo apt install software-properties-common.
- Add the Ansible PPA to apt: sudo add-apt-repository --yes --update ppa:ansible/ansible.
- Install Ansible: sudo apt-get install ansible-core (note that installing just ansible will install an unsuitable outdated version).
- Check the Ansible core version: ansible --version.
Installing Ansible in a Python virtual environment:
- Update the apt package list: sudo apt-get update.
- Install the venv package for Python 3: sudo apt-get install python3-venv.
- Create a directory where the virtual environment will be created and where the playbooks will be downloaded, for example: mkdir venv-ansible.
- Create a Python virtual environment: python3 -m venv venv-ansible.
- Activate the virtual environment: source venv-ansible/bin/activate. All further actions with Ansible are performed inside the virtual environment. You can exit it with the command deactivate.
- Install the recommended version of Ansible from the root directory of the downloaded repository: pip3 install -r requirements.txt.
- Check the Ansible core version: ansible --version.
Navigate to the root directory of the downloaded repository and execute the command ansible-galaxy install -r requirements.yaml – this will download the Ansible collections ydb_platform.ydb and community.general, which contain the roles and plugins for installing YDB.
Configure the Ansible project
Edit the inventory files
Regardless of the chosen cluster topology (3-nodes-mirror-3-dc, 9-nodes-mirror-3-dc, or 8-nodes-block-4-2), the main parameters for installing and configuring YDB are contained in the inventory file 50-inventory.yaml, which is located in the inventory/ directory.
In the inventory file 50-inventory.yaml, specify the current list of FQDNs of the servers where YDB will be installed. By default, the list appears as follows:

all:
  children:
    ydb:
      hosts:
        static-node-1.ydb-cluster.com:
        static-node-2.ydb-cluster.com:
        static-node-3.ydb-cluster.com:
Next, make the following changes in the vars section of the inventory file (a combined example follows this list):
- ansible_user – specify the user for Ansible to connect via SSH.
- ansible_ssh_common_args: "-o ProxyJump=<ansible_user>@<static-node-1-IP>" – an option for connecting Ansible via a jump server to a server by its IP address, from which YDB will be installed. It is used when installing YDB from a local machine that is not included in the private DNS zone.
- ansible_ssh_private_key_file – change the default private SSH key path to the actual one: "../<ssh-private-key-name>".
- Choose one of the available options for deploying the YDB executables:
  - ydb_version – automatically download one of the official YDB releases by version number, for example, 23.4.11.
  - ydb_git_version – automatically compile the YDB executables from source code downloaded from the official GitHub repository. The setting's value is a branch, tag, or commit name, for example, main.
  - ydb_archive – a local filesystem path to a YDB distribution archive downloaded or otherwise prepared in advance.
  - ydbd_binary and ydb_cli_binary – local filesystem paths to the YDB server and client executables, downloaded or otherwise prepared in advance.
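Putting these settings together, a filled-in inventory might look like the following sketch (the user name, key path, and proxy IP here are placeholders, and ydb_version is shown as just one of the mutually exclusive deployment options):

```yaml
all:
  children:
    ydb:
      hosts:
        static-node-1.ydb-cluster.com:
        static-node-2.ydb-cluster.com:
        static-node-3.ydb-cluster.com:
      vars:
        ansible_user: ubuntu                                        # SSH user
        ansible_ssh_common_args: "-o ProxyJump=ubuntu@203.0.113.10" # jump host, if needed
        ansible_ssh_private_key_file: "../ydb_deploy_key"           # path to the private key
        ydb_version: "23.4.11"                                      # official release to download
```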
Optional changes in the inventory files
Feel free to change these settings if needed, but it is not necessary in straightforward cases:
- ydb_cores_static – set the number of CPU cores allocated to static nodes.
- ydb_cores_dynamic – set the number of CPU cores allocated to dynamic nodes.
- ydb_tls_dir – specify a local path to a folder with TLS certificates prepared in advance. It must contain the ca.crt file and subdirectories with names matching the node hostnames, containing the certificates for each node. If omitted, self-signed TLS certificates will be generated automatically for the whole YDB cluster.
- ydb_brokers – list the FQDNs of the broker nodes, for example:

  ydb_brokers:
    - static-node-1.ydb-cluster.com
    - static-node-2.ydb-cluster.com
    - static-node-3.ydb-cluster.com
The value of the ydb_database_groups variable in the vars section is tied to the redundancy type and does not depend on the size of the cluster:
- For the block-4-2 redundancy type, the value of ydb_database_groups is 7.
- For the mirror-3-dc redundancy type, the value of ydb_database_groups is 8.
The values of the system_timezone and system_ntp_servers variables depend on the properties of the infrastructure where the YDB cluster is being deployed. By default, system_ntp_servers includes a set of NTP servers that does not take into account the geographical location of the infrastructure on which the YDB cluster will be deployed. We strongly recommend using a local NTP server for on-premise infrastructure and the following NTP servers for cloud providers:

AWS:
- system_timezone: America/<region_name>
- system_ntp_servers: [169.254.169.123, time.aws.com]. Learn more about AWS NTP server settings.

Azure:
- You can read about how time synchronization is configured on Azure virtual machines in this article.

Alibaba:
- The specifics of connecting to NTP servers in Alibaba are described in this article.

Yandex Cloud:
- system_timezone: Europe/Moscow
- system_ntp_servers: [0.ru.pool.ntp.org, 1.ru.pool.ntp.org, ntp0.NL.net, ntp2.vniiftri.ru, ntp.ix.ru, ntps1-1.cs.tu-berlin.de]. Learn more about Yandex Cloud NTP server settings.
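For example, for a deployment on Yandex Cloud infrastructure, the corresponding entries in the vars section (inside all.children.ydb) would be:

```yaml
vars:
  system_timezone: Europe/Moscow
  system_ntp_servers: [0.ru.pool.ntp.org, 1.ru.pool.ntp.org, ntp0.NL.net, ntp2.vniiftri.ru, ntp.ix.ru, ntps1-1.cs.tu-berlin.de]
```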
No changes to other sections of the 50-inventory.yaml configuration file are required.
Changing the root user password
Next, you can change the standard YDB root user password contained in the encrypted inventory file 99-inventory-vault.yaml and in the file ansible_vault_password_file. To change the password, specify the new password in the ansible_vault_password_file file and duplicate it in 99-inventory-vault.yaml in the following format:
all:
  children:
    ydb:
      vars:
        ydb_password: <new-password>
To encrypt 99-inventory-vault.yaml, execute the command ansible-vault encrypt inventory/99-inventory-vault.yaml.
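Besides encrypt, standard ansible-vault subcommands are handy for maintaining this file later (run them from the template directory so that ansible.cfg picks up the vault password file):

```bash
ansible-vault encrypt inventory/99-inventory-vault.yaml  # encrypt after editing the password
ansible-vault view inventory/99-inventory-vault.yaml     # print the decrypted contents
ansible-vault edit inventory/99-inventory-vault.yaml     # edit in place, re-encrypting on save
```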
After modifying the inventory files, you can proceed to prepare the YDB configuration file.
Prepare the YDB configuration file
The YDB configuration file contains the settings for the YDB nodes and is located at files/config.yaml inside the template directory. A detailed description of the YDB configuration file settings can be found in the article YDB cluster configuration.
The default YDB configuration file already includes almost all the necessary settings for deploying the cluster. You need to replace the standard host FQDNs with the current FQDNs in the hosts and blob_storage_config sections:

- hosts section:

  ...
  hosts:
  - host: static-node-1.ydb-cluster.com
    host_config_id: 1
    walle_location:
      body: 1
      data_center: 'zone-a'
      rack: '1'
  ...

- blob_storage_config section:

  ...
  - fail_domains:
    - vdisk_locations:
      - node_id: static-node-1.ydb-cluster.com
        pdisk_category: SSD
        path: /dev/disk/by-partlabel/ydb_disk_1
  ...
The rest of the sections and settings in the configuration file can remain unchanged.
Deploying the YDB cluster
Note
The minimum number of servers in a YDB cluster is eight for the block-4-2 redundancy model and nine for the mirror-3-dc redundancy model.

With mirror-3-dc, servers should be distributed across three availability zones or datacenters as evenly as possible.
The repository contains two ready-made sets of templates for deploying a YDB cluster of eight servers (block-4-2 redundancy model) or nine servers (mirror-3-dc). Both options can be scaled to any required number of servers, subject to a number of technical requirements.
To prepare your template, follow the instructions below:
- Create a copy of the directory with the ready-made example (3-nodes-mirror-3-dc, 9-nodes-mirror-3-dc, or 8-nodes-block-4-2).
- Specify the FQDNs of the servers in the file TLS/ydb-ca-nodes.txt and execute the script ydb-ca-update.sh to generate sets of TLS certificates.
- Change the template's inventory files according to the instructions above.
- Make changes to the YDB configuration file according to the instructions above.
- In the directory of the copied template, execute the command ansible-playbook ydb_platform.ydb.initial_setup.
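Assuming the 3-nodes-mirror-3-dc template and placeholder FQDNs, the whole sequence might look like the following sketch (the copy name and FQDNs are examples; whether ydb-ca-update.sh must be run from inside the TLS directory may depend on the script version):

```bash
# from the root of the cloned ydb-ansible-examples repository
cp -r 3-nodes-mirror-3-dc my-ydb-cluster                                 # copy a ready-made template

printf '%s\n' static-node-{1..3}.ydb-cluster.com > TLS/ydb-ca-nodes.txt  # list your server FQDNs
(cd TLS && ./ydb-ca-update.sh)                                           # generate the TLS certificate sets

# edit my-ydb-cluster/inventory/50-inventory.yaml and my-ydb-cluster/files/config.yaml, then:
cd my-ydb-cluster
ansible-playbook ydb_platform.ydb.initial_setup                          # run the installation playbook
```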
Installation script execution plan for YDB
The sequence of role executions and their brief descriptions:
- The packages role configures repositories, manages APT preferences and configurations, fixes unconfigured packages, and installs the necessary software packages depending on the distribution version.
- The system role sets up system settings, including clock and timezone configuration, time synchronization via NTP with systemd-timesyncd, configuring systemd-journald for log management, kernel module loading configuration, kernel parameter optimization through sysctl, and CPU performance tuning using cpufrequtils.
- The ydbd role performs tasks related to checking the necessary variables, installing base components and dependencies, setting up system users and groups, and deploying and configuring YDB, including managing TLS certificates and updating configuration files.
- The ydbd_static role prepares and launches the static nodes of YDB, including checking the necessary variables and secrets, formatting and preparing disks, creating and launching the systemd unit for the storage node, as well as initializing the storage and managing database access.
- The ydbd_dynamic role configures and manages the dynamic nodes of YDB, including checking the necessary variables, creating configuration and systemd unit files for each dynamic node, launching these nodes, obtaining a token for YDB access, and creating a database in YDB.
Detailed step-by-step installation process description
- Role packages. Tasks:
  - check dpkg audit – verifies the dpkg state using the dpkg --audit command and saves the result in the dpkg_audit_result variable. The task terminates with an error if dpkg_audit_result.rc returns a value other than 0 or 1.
  - run the equivalent of "apt-get clean" as a separate step – cleans the apt cache, similarly to the apt-get clean command.
  - run the equivalent of "apt-get update" as a separate step – updates the apt cache, akin to the apt-get update command.
  - fix unconfigured packages – fixes unconfigured packages using the dpkg --configure --pending command.
  - set vars_for_distribution_version variables – sets variables for a specific Linux distribution version.
  - setup apt repositories – configures apt repositories from a specified list.
  - setup apt preferences – configures apt preferences (the variable contents are specified in roles/packages/vars/distributions/<distribution name>/<version>/main.yaml).
  - setup apt configs – configures apt settings.
  - flush handlers – forcibly runs all accumulated handlers. In this context, it triggers a handler that updates the apt cache.
  - install packages – installs the apt packages, taking into account the specified parameters and cache validity.
Links to the lists of packages that will be installed for Ubuntu 22.04 or Astra Linux 1.7:
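For reference, the shell equivalents of the package-maintenance tasks listed above are roughly the following (running these manually is not required; the role performs them for you):

```bash
sudo dpkg --audit                # must exit with code 0 or 1, otherwise the play fails
sudo apt-get clean               # clean the apt cache
sudo apt-get update              # refresh the apt cache
sudo dpkg --configure --pending  # configure packages left unconfigured
```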
- Role system. Tasks:
  - configure clock – a block of tasks for setting up the system clock:
    - assert required variables are defined – checks that the system_timezone variable exists. This check ensures that the necessary variable is available for the next task in the block.
    - set system timezone – sets the system timezone. The timezone is determined by the value of the system_timezone variable, and the hardware clock (hwclock) is set to UTC. After completing the task, a notification is sent to restart the cron service.
    - flush handlers – forces the execution of accumulated handlers using the meta directive. This restarts the following services: timesyncd, journald, cron, cpufrequtils, and executes the sysctl -p command.
  - configure systemd-timesyncd – a block of tasks for configuring systemd-timesyncd (a sketch of the resulting drop-in file is shown after this block):
    - assert required variables are defined – asserts that the number of NTP servers (system_ntp_servers) is more than one if the system_ntp_servers variable is defined. If system_ntp_servers is not defined, the whole configure systemd-timesyncd task block is skipped, including the check of the number of NTP servers and the configuration of systemd-timesyncd.
    - create conf.d directory for timesyncd – creates the /etc/systemd/timesyncd.conf.d directory if the system_ntp_servers variable is defined.
    - configure systemd-timesyncd – creates the configuration file /etc/systemd/timesyncd.conf.d/ydb.conf for the systemd-timesyncd service with primary and backup NTP servers. The task is executed if the system_ntp_servers variable is defined. After completing the task, a notification is sent to restart the timesyncd service.
    - flush handlers – calls accumulated handlers. Executes the restart timesyncd handler, which restarts systemd-timesyncd.service.
    - start timesyncd – starts and enables systemd-timesyncd.service, so that the service starts automatically at system boot.
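A sketch of the drop-in file this block generates, assuming the first server from system_ntp_servers is used as the primary and the rest as backups (the exact split between NTP= and FallbackNTP= is an assumption about the role's template):

```
# /etc/systemd/timesyncd.conf.d/ydb.conf
[Time]
NTP=0.ru.pool.ntp.org
FallbackNTP=1.ru.pool.ntp.org ntp.ix.ru
```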
  - configure systemd-journald – a block of tasks for configuring the systemd-journald service (the resulting drop-in is shown after this block):
    - create conf.d directory for journald – creates the /etc/systemd/journald.conf.d directory for storing systemd-journald configuration files.
    - configure systemd-journald – creates the configuration file /etc/systemd/journald.conf.d/ydb.conf for systemd-journald, specifying a Journal section with the option ForwardToWall=no. The ForwardToWall=no setting means that system log messages will not be forwarded as "wall" messages to all logged-in users. After completing the task, a notification is sent to restart the journald service.
    - flush handlers – calls accumulated handlers. Executes the restart journald handler, which restarts the systemd-journald service.
    - start journald – starts and enables systemd-journald.service, so that the service starts automatically at system boot.
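The journald drop-in is small; based on the description above, it contains exactly:

```
# /etc/systemd/journald.conf.d/ydb.conf
[Journal]
ForwardToWall=no
```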
  - configure kernel – a block of tasks for kernel configuration (shell equivalents of the conntrack and sysctl steps are sketched after this block):
    - configure /etc/modules-load.d dir – creates the /etc/modules-load.d directory owned by the root user and group, with 0755 permissions.
    - setup conntrack module – copies the nf_conntrack line into the file /etc/modules-load.d/conntrack.conf so that the nf_conntrack module is loaded at system start.
    - load conntrack module – loads the nf_conntrack module in the current session.
    - setup sysctl files – applies templates to create configuration files in /etc/sysctl.d/ for various system settings (such as security, network, and filesystem settings). The list of files includes 10-console-messages.conf, 10-link-restrictions.conf, and others. After completing this task, a notification is sent to apply the kernel settings changes.
    - flush handlers – calls accumulated handlers. Executes the apply kernel settings handler, which runs the sysctl -p command to apply the kernel parameters specified in /etc/sysctl.conf or in other files in the /etc/sysctl.d/ directory.
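The manual equivalent of the conntrack and sysctl steps would be roughly the following (for illustration only; the role does this itself):

```bash
echo nf_conntrack | sudo tee /etc/modules-load.d/conntrack.conf  # load nf_conntrack at boot
sudo modprobe nf_conntrack                                       # load it in the current session
sudo sysctl -p                                                   # re-apply kernel parameters, as the role's handler does
```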
  - configure cpu governor – a block of tasks for configuring the CPU frequency governor (a shell sketch follows this block):
    - install cpufrequtils – installs the cpufrequtils package from apt. The task is set with cache check parameters and a 300-second timeout to speed up execution and avoid an infinite wait for apt package updates.
    - use performance cpu governor – creates the file /etc/default/cpufrequtils with the content "GOVERNOR=performance", which sets the CPU governor to "performance" mode (disabling power saving when CPU cores are idle). After completing the task, a notification is sent to restart the cpufrequtils service.
    - disable ondemand.service – disables ondemand.service if it is present in the system. The service is stopped, its automatic start is disabled, and it is masked (preventing it from starting). After completing the task, a notification is sent to restart cpufrequtils.
    - flush handlers – calls accumulated handlers. Executes the restart cpufrequtils handler, which restarts the cpufrequtils service.
    - start cpufrequtils – starts and enables cpufrequtils.service, so that the service starts automatically at system boot.
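In shell terms, the CPU governor block boils down to roughly this (again, performed by the role automatically):

```bash
sudo apt-get install cpufrequtils
echo 'GOVERNOR=performance' | sudo tee /etc/default/cpufrequtils  # disable power-saving mode
sudo systemctl mask --now ondemand.service                        # stop the service and prevent it from starting
sudo systemctl restart cpufrequtils
```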
- Role ydbd. Tasks:
  - check if required variables are defined – checks that the variables ydb_archive, ydb_config, and ydb_tls_dir are defined. If any of them is undefined, Ansible displays an appropriate error message and stops playbook execution.
  - set vars_for_distribution variables – sets variables from the file specified in the vars_for_distribution_file variable during playbook execution. This task manages a set of variables that depend on the specific Linux distribution.
  - ensure libaio is installed – ensures that the libaio package is installed.
  - install custom libidn from archive – installs a custom version of the libidn library from an archive.
  - create certs group – creates the certs system group.
  - create ydb group – creates the ydb system group.
  - create ydb user – creates the ydb system user with a home directory.
  - install YDB server binary package from archive – installs YDB from the downloaded archive.
  - create YDB audit directory – creates the audit subdirectory in the YDB installation directory.
  - setup certificates – a block of tasks for setting up security certificates:
    - create YDB certs directory – creates the certs subdirectory in the YDB installation directory.
    - copy the TLS ca.crt – copies the root certificate ca.crt to the server.
    - copy the TLS node.crt – copies the TLS certificate node.crt from the directory of generated certificates.
    - copy the TLS node.key – copies the TLS key node.key from the directory of generated certificates.
    - copy the TLS web.pem – copies the TLS PEM key web.pem from the directory of generated certificates.
  - copy configuration file – copies the configuration file config.yaml to the server.
  - add configuration file updater script – copies the update_config_file.sh script to the server.
- Role ydbd_static. Tasks:
  - check if required variables are defined – checks that the variables ydb_cores_static, ydb_disks, ydb_domain, and ydb_user are defined. If any of these variables is undefined, the task fails, and an appropriate error message is displayed for each undefined variable.
  - check if required secrets are defined – verifies that the secret variable ydb_password is defined. If it is undefined, the task fails and an error message is displayed.
  - create static node configuration file – creates the static node configuration file by running the copied update_config_file.sh script with the ydbd-config.yaml and ydbd-config-static.yaml configurations.
  - create static node systemd unit – creates the ydbd-storage.service file for the static node based on a template. After completing the task, a notification is sent to restart the systemd service.
  - flush handlers – executes accumulated handlers. Restarts all systemd services.
  - format drives confirmation block – a block of tasks that formats disks and interrupts playbook execution if the user declines confirmation. A request to confirm formatting of the connected disk is displayed in the terminal. Answer yes to continue the playbook with disk formatting; any other value is interpreted as a refusal to format. By default, disks are formatted automatically without asking the user for permission, as the variables ydb_allow_format_drives and ydb_skip_data_loss_confirmation_prompt are set to true. If user confirmation is required, change the value of ydb_skip_data_loss_confirmation_prompt to false in the inventory file 50-inventory.yaml (see the sketch after this list).
  - prepare drives – a task that formats the connected disks. It calls the drive_prepare plugin – a specially developed Ansible module for YDB installation, which is part of the YDB collection and is located at .../.ansible/collections/ansible_collections/ydb_platform/ydb/plugins/action/drive_prepare.py. The module formats the connected disk specified in the ydb_disks variable if the ydb_allow_format_drives variable is set to true.
  - start storage node – starts the storage node process using systemd. If any errors occur during service startup, playbook execution is interrupted.
  - get ydb token – requests a YDB token needed to perform the storage initialization command. The token is stored in the ydb_credentials variable. The task calls the get_token module from the directory .../.ansible/collections/ansible_collections/ydb_platform/ydb/plugins/modules. If any errors occur at this step, playbook execution is interrupted.
  - wait for ydb discovery to start working locally – calls the wait_discovery module, which performs a ListEndpoints request to YDB to check the operability of the cluster's basic subsystems. If the subsystems are working properly, the storage initialization command and database creation can be executed.
  - init YDB storage if not initialized – initializes the storage if it has not already been created. The task calls the init_storage plugin, which performs the storage initialization command via a grpcs request to the static node on port 2135. The result is stored in the init_storage variable.
  - wait for ydb healthcheck switch to "GOOD" status – waits for the YDB healthcheck system to report a GOOD status. The task calls the wait_healthcheck plugin, which performs a health check on YDB.
  - set cluster root password – sets the password for the YDB root user. The task is executed by the set_user_password plugin, which performs a grpcs request to YDB and sets the predefined password for the YDB root user. The password is specified in the ydb_password variable in the inventory file /examples/9-nodes-mirror-3-dc/inventory/99-inventory-vault.yaml in encrypted form.
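If you prefer an interactive confirmation before any disk is formatted, the two flags described above can be set in the vars section of 50-inventory.yaml; a sketch:

```yaml
all:
  children:
    ydb:
      vars:
        ydb_allow_format_drives: true
        ydb_skip_data_loss_confirmation_prompt: false  # prompt for "yes" before formatting
```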
- Role ydbd_dynamic. Tasks:
  - check if required variables are defined – verifies the presence of the required variables (ydb_domain, ydb_pool_kind, ydb_cores_dynamic, ydb_brokers, ydb_dbname, ydb_dynnodes) and displays an error if any of them is missing.
  - create dynamic node configuration file – creates the configuration file for the dynamic nodes.
  - create dynamic node systemd unit – creates a systemd service for the dynamic nodes. After completing the task, a notification is sent to restart the systemd service.
  - flush handlers – executes accumulated handlers. This restarts systemd.
  - start dynamic nodes – starts the dynamic node processes using systemd.
  - get ydb token – obtains a token for creating the database.
  - create YDB database – creates the database. The task is executed by the create_database plugin, which performs a request to YDB to create the database.
  - wait for ydb discovery to start working locally – calls the wait_discovery module again to check the operability of the cluster's basic subsystems.
As a result of executing the playbook, a YDB cluster will be created, with a test database named database, a root user with maximum access rights, and the Embedded UI running on port 8765. To connect to the Embedded UI, you can set up SSH tunneling: execute the command ssh -L 8765:localhost:8765 -i <ssh private key> <user>@<first-ydb-static-node-ip> on your local machine. After the connection is successfully established, you can navigate to the URL localhost:8765 in a browser.
Monitoring the cluster state
After successfully creating the YDB cluster, you can check its state using the Embedded UI at http://localhost:8765/monitoring/cluster/tenants. This section displays the following parameters of the YDB cluster, reflecting its state:
- Tablets – a list of running tablets. All tablet state indicators should be green.
- Nodes – the number and state of the static and dynamic nodes launched in the cluster. The node state indicator should be green, and the ratio of created to launched nodes should be equal, for example, 27/27 for a nine-node cluster.
The Load indicator (amount of RAM used) and the Storage indicator (amount of disk space used) should also be green.
You can check the state of the storage group in the Storage section at http://localhost:8765/monitoring/cluster/storage.
The VDisks indicators should be green, and the state status (shown in the tooltip when hovering over the VDisk indicator) should be Ok. More about the cluster state indicators and monitoring can be found in the article YDB Monitoring.
Cluster Testing
You can test the cluster using the built-in load tests in the YDB CLI. To do this, download YDB CLI version 2.5.0 to the machine where Ansible is installed, for example, using wget: wget https://storage.yandexcloud.net/yandexcloud-ydb/release/2.5.0/linux/amd64/ydb.
Make the downloaded binary executable – chmod +x ydb – and execute the connection check command:

./ydb \
  config profile create <profile name> \
  -d /Root/database \
  -e grpcs://<node FQDN>:2135 \
  --ca-file <path to generated certs>/CA/certs/ca.crt \
  --user root \
  --password-file <path to vault password file>/ansible_vault_password_file
Command parameters and their values:
- config profile create – creates a connection profile. You specify the profile name. More detailed information on how to create and modify profiles can be found in the article Creating and updating profiles.
- -e – the endpoint, a string in the format protocol://host:port. You can specify the FQDN of any cluster node and omit the port; by default, port 2135 is used.
- --ca-file – the path to the root certificate for grpcs connections to the database. The certificate is created by the ydb-ca-update.sh script in the TLS directory and is located at the path TLS/CA/certs/ relative to the root of the ydb-ansible-examples repository.
- --user – the user for connecting to the database. By default, the root user is created when executing the ydb_platform.ydb.initial_setup playbook.
- --password-file – the path to the password file. Each folder with a YDB cluster deployment template contains an ansible_vault_password_file with the password for the root user.
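A filled-in version of the connection check might look like this sketch (the profile name and node FQDN are placeholders; the paths assume the command is run from the root of the ydb-ansible-examples repository with the 3-nodes-mirror-3-dc template):

```bash
./ydb \
  config profile create my-cluster \
  -d /Root/database \
  -e grpcs://static-node-1.ydb-cluster.com:2135 \
  --ca-file TLS/CA/certs/ca.crt \
  --user root \
  --password-file 3-nodes-mirror-3-dc/ansible_vault_password_file
```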
You can check whether the profile has been created using the command ./ydb config profile list, which displays the list of profiles. After creating a profile, activate it with the command ./ydb config profile activate <profile name>. To verify that the profile has been activated, rerun ./ydb config profile list – the active profile will have an (active) mark.
To execute a YQL query, use the command ./ydb yql -s 'select 1;', which returns the result of the select 1 query in table form to the terminal. After checking the connection, you can create a test table with the command ./ydb workload kv init --init-upserts 1000 --cols 4. This creates a test table kv_test consisting of 4 columns and 1000 rows. You can verify that the kv_test table was created and filled with test data by using the command ./ydb yql -s 'select * from kv_test limit 10;'.
The terminal will display a table of 10 rows. Now you can test the cluster's performance. The article Key-Value load describes five types of workloads (upsert, insert, select, read-rows, mixed) and the parameters for running them. An example of running the upsert test workload with the --print-timestamp parameter (which prints the execution time) and default execution parameters: ./ydb workload kv run upsert --print-timestamp.
A report of the following type will be displayed in the terminal:
Window Txs/Sec Retries Errors p50(ms) p95(ms) p99(ms) pMax(ms) Timestamp
1 727 0 0 11 27 71 116 2024-02-14T12:56:39Z
2 882 0 0 10 21 29 38 2024-02-14T12:56:40Z
3 848 0 0 10 22 30 105 2024-02-14T12:56:41Z
...
After completing the tests, the kv_test table can be deleted with the command ./ydb workload kv clean. More details on the options for creating a test table and running tests can be found in the article Key-Value load.