1.Enter the server addresses for each component in the inventory.yml file.
Only the variables hyperbox_hosts, hyperbox_api_host, evparser_hosts, drweb_srv_hosts, dimas_hosts, linuxbox_hosts, and yara_hosts can take a set of server addresses as a value.
The components corresponding to those variables carry the main load during file analysis and support scale-out for processing large numbers of files uploaded to Dr.Web vxCube.
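For instance, a scaled-out inventory entry for one of these components might look like the following (the addresses are hypothetical; the structure mirrors the hyperbox_hosts example in step 2):

```yaml
evparser_hosts:
  hosts:
    192.168.1.20:
    192.168.1.21:
    192.168.1.22:
```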

|
To avoid freezing of the web UI, we recommend that you deploy vxcube_web_host and each individual analyzer (hyperbox_hosts, hyperbox_api_host, evparser_hosts, drweb_srv_hosts, dimas_hosts, yara_hosts) on separate nodes.
|
2.Specify the drives to be used for installation in the hyperbox_hosts variable in the inventory.yml file, for example:
hyperbox_hosts:
  hosts:
    192.168.1.10:
      hyperbox_ssds: [ "sda" ]
    192.168.1.11:
      hyperbox_ssds: [ "sda", "sdb", "sdc", "sdd" ]
|
3.To access the servers, specify a user name and the path to its private key in the ansible_user and ansible_ssh_private_key_file variables in the inventory.yml file.
This user must be able to run commands as the superuser without entering a password. To create such a user on multiple servers simultaneously, use the command:
Running the command creates the user on all servers specified in inventory.yml and saves the private SSH key to a file (default path: credentials/ssh/id_rsa).
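The passwordless-superuser requirement corresponds to a sudoers entry like the following (the user name here is the one from the example in step 7a; place such an entry in a file under /etc/sudoers.d/):

```
test_ansible_user ALL=(ALL) NOPASSWD:ALL
```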
4.For each node, place an individual OpenVPN certificate and key (.crt, .key), named after the node address, in the confs directory, for example: 192.168.1.10.crt, 192.168.1.20.crt, 192.168.2.10.key, 192.168.2.20.key.
5.In the vars-default.yml configuration file, set values for the variables openvpn_client_crt and openvpn_client_key, for example:
openvpn_client_crt: "{{ lookup('file', 'confs/' + inventory_hostname + '.crt') }}"
openvpn_client_key: "{{ lookup('file', 'confs/' + inventory_hostname + '.key') }}"
|
6.In the root directory of the installation archive, create the host_vars directory and, in it, create an individual YML file with deployment settings for each server, for example: 192.168.1.10.yml and 192.168.2.20.yml.
7.Enter the settings in the YML files:
a.Enter the Ansible user name and password, for example:
ansible_user: test_ansible_user
ansible_ssh_pass: test_ansible_user_pass
ansible_become_password: test_ansible_user_pass
|
b.Enter the directory to which VM images will be cloned, for example:
c.Enter the configurations of VMs, for example:
hyperbox_images:
  - vm_type: 6.1.7601.17514_x86
    code: Win7x86
    count: 2
    clone_threads: 2
    params:
      memory: 2112
      cores: 2
  - vm_type: 6.1.7601.17514_x64
    code: Win7x64
    count: 2
    clone_threads: 2
    params:
      memory: 2112
      cores: 2
linuxbox_images:
  - vm_type: intel64_astra_se_1.7.2
    code: intel64_astra_se_1.7.2
    count: 1
  - vm_type: intel64_astra_ce_2.12
    code: intel64_astra_ce_2.12
    count: 1
dimas_images:
  - vm_type: Android7.1
    code: Android7.1
    count: 3
    memory: 4072
    cores: 2
    clone_threads: 3
|
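As a quick sanity check of node sizing, the per-host RAM and CPU demand implied by image lists like the ones above can be totaled with a short script (a sketch, not part of the installer; the data mirrors the hyperbox_images example, and totals() is a name of our own):

```python
# Sketch: sum the RAM and CPU that all VM clones on one node will demand.
# The list below mirrors the hyperbox_images example from step 7c.
hyperbox_images = [
    {"code": "Win7x86", "count": 2, "params": {"memory": 2112, "cores": 2}},
    {"code": "Win7x64", "count": 2, "params": {"memory": 2112, "cores": 2}},
]

def totals(images):
    """Return (total MiB of RAM, total CPU cores) across all clones."""
    mem = sum(i["count"] * i["params"]["memory"] for i in images)
    cores = sum(i["count"] * i["params"]["cores"] for i in images)
    return mem, cores

mem, cores = totals(hyperbox_images)
print(f"{mem} MiB RAM, {cores} cores")  # 8448 MiB RAM, 8 cores
```

Comparing such totals against the physical resources of each node helps catch overcommitted configurations before deployment.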

|
Variables from vars-default.yml take priority during deployment over variables from the YML files in the host_vars directory. To redefine their values, comment out the matching variables in vars-default.yml.
For example, if you create the file host_vars/192.168.1.10.yml and redefine the hyperbox_ssds variable as hyperbox_ssds: [ "sda" ], you must comment this variable out in the vars-default.yml file.
|
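Putting the note above into practice, the override pair might look like the following (the value commented out in vars-default.yml is hypothetical; the host_vars value is taken from the example):

```yaml
# host_vars/192.168.1.10.yml
hyperbox_ssds: [ "sda" ]
```

```yaml
# vars-default.yml
# hyperbox_ssds: [ "sda", "sdb" ]   # commented out so that the host_vars value applies
```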
8.To start the Dr.Web vxCube installation using the inventory.yml file, run the command:
|