Creating the Scanning Cluster

In this section:

Introductory Remarks.

An Example of Creating a Scanning Cluster.

Configuring Cluster Nodes.

Verifying the Cluster Operability.

Introductory Remarks

To create a scanning cluster that performs distributed checks (while scanning files or other objects), you need a set of network nodes with the Dr.Web Network Checker component installed on each of them. For a cluster node not only to transmit data for scanning but also to receive it, the Dr.Web Scanning Engine scan engine must also be installed on that node. Thus, to create a scanning cluster node, at minimum the following components must be installed on the server (other components of Dr.Web for UNIX Mail Servers that are installed automatically to ensure the operation of the listed components are omitted):

1. Dr.Web Network Checker (the drweb-netcheck package): the component that provides network communication between nodes;

2. Dr.Web Scanning Engine (the drweb-se package): the scan engine needed for scanning data received over the network. This component may be absent; in that case, the node will only transmit data to be scanned to other scanning cluster nodes.

The nodes that constitute the scanning cluster form a peer-to-peer network: each node, depending on the settings defined for the Dr.Web Network Checker component on that node, can act either as a scanning client (transmitting data for scanning to other nodes) or as a scanning server (receiving data for scanning from other nodes). With the appropriate settings, a cluster node can be both a scanning client and a scanning server at the same time.

The Dr.Web Network Checker parameters related to scanning cluster configuration have names starting with LoadBalance.
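If the product is already installed, the current values of these parameters can be listed from the command line. A possible check is sketched below (it assumes the drweb-ctl utility is available and that its cfshow command accepts a configuration section name):

```shell
# Show the Dr.Web Network Checker settings and keep only the
# LoadBalance* parameters, which govern scanning cluster behavior
$ drweb-ctl cfshow NetCheck | grep LoadBalance
```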

An Example of Creating a Scanning Cluster

Consider the example of creating a scanning cluster shown in the figure below.

Figure 14. The scanning cluster structure

In this example, it is assumed that the cluster consists of three nodes (shown in the figure as node 1, node 2, and node 3). Node 1 and node 2 are servers with a full-fledged Dr.Web product for UNIX servers installed (for example, Dr.Web for UNIX File Servers or Dr.Web for UNIX Internet Gateways; the product type does not matter), while node 3 is used only to assist in scanning files transferred from nodes 1 and 2. Therefore, only the minimum required set of components (Dr.Web Network Checker and Dr.Web Scanning Engine) is installed on node 3; other components that are installed automatically to ensure the node's operability, such as Dr.Web ConfigD, are not shown in the figure. Nodes 1 and 2 can act as both scanning servers and scanning clients with respect to each other (mutually distributing the load associated with scanning), while node 3 acts only as a server, receiving tasks from nodes 1 and 2.

Data submitted for scanning by these components will be distributed between the locally installed Dr.Web Scanning Engine scan engine and the partner cluster nodes acting as scanning servers, depending on the load balance.

It is important to note that only components that scan data not represented as files in the local file system can act as scanning clients. This means that the scanning cluster cannot be used for distributed scanning of files by the SpIDer Guard file system monitor or by the Dr.Web File Checker component.

Configuring Cluster Nodes

To set up the cluster configuration described above, you need to change the Dr.Web Network Checker settings on all cluster nodes. All the settings below are given in the .ini file format (refer to the configuration file format description).

Node 1

[NetCheck]
InternalOnly = No
LoadBalanceUseSsl = No
LoadBalanceServerSocket = <Node 1 IP address>:<Node 1 port>
LoadBalanceAllowFrom = <Node 2 IP address>
LoadBalanceSourceAddress = <Node 1 IP address>
LoadBalanceTo = <Node 2 IP address>:<Node 2 port>
LoadBalanceTo = <Node 3 IP address>:<Node 3 port>

Node 2

[NetCheck]
InternalOnly = No
LoadBalanceUseSsl = No
LoadBalanceServerSocket = <Node 2 IP address>:<Node 2 port>
LoadBalanceAllowFrom = <Node 1 IP address>
LoadBalanceSourceAddress = <Node 2 IP address>
LoadBalanceTo = <Node 1 IP address>:<Node 1 port>
LoadBalanceTo = <Node 3 IP address>:<Node 3 port>

Node 3

[NetCheck]
InternalOnly = No
LoadBalanceUseSsl = No
LoadBalanceServerSocket = <Node 3 IP address>:<Node 3 port>
LoadBalanceAllowFrom = <Node 1 IP address>
LoadBalanceAllowFrom = <Node 2 IP address>

Notes:

The Dr.Web Network Checker parameters not mentioned here are left unchanged.

Replace the IP addresses and port numbers with real values.

In this example, the use of SSL for data exchange between nodes is disabled. If you need to use SSL, set the LoadBalanceUseSsl parameter to Yes and specify the needed values for the LoadBalanceSslCertificate, LoadBalanceSslKey, and LoadBalanceSslCa parameters.
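Instead of editing the configuration file directly, the same parameters can also be set with the drweb-ctl cfset command. A possible sketch for node 3 is given below; the addresses 192.0.2.1–192.0.2.3 and port 3000 are placeholder values for illustration only, and the -a option is assumed here to append a value to a list parameter such as LoadBalanceAllowFrom:

```shell
# Allow the component to accept connections from other hosts
$ drweb-ctl cfset NetCheck.InternalOnly No
# Disable SSL for data exchange between nodes (as in this example)
$ drweb-ctl cfset NetCheck.LoadBalanceUseSsl No
# Listen for scanning tasks on the node's own address and port
$ drweb-ctl cfset NetCheck.LoadBalanceServerSocket 192.0.2.3:3000
# Accept scanning tasks only from nodes 1 and 2
$ drweb-ctl cfset -a NetCheck.LoadBalanceAllowFrom 192.0.2.1
$ drweb-ctl cfset -a NetCheck.LoadBalanceAllowFrom 192.0.2.2
```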

Verifying the Cluster Operability

To check the operation of the cluster in data distribution mode, run the following command on nodes 1 and 2:

$ drweb-ctl netscan <path to file or directory>

After the command is run, files from the specified directory should be checked by Dr.Web Network Checker, which distributes the checks among the configured cluster nodes. To view the statistics of network checks on each node, start displaying the Dr.Web Network Checker statistics before scanning with the following command (press Ctrl+C to stop displaying the statistics):

$ drweb-ctl stat -n
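Putting the two commands together, a possible verification session on node 1 might look as follows; the directory path is a placeholder, and if the cluster is configured correctly, the statistics should show scanning activity on the partner nodes as well:

```shell
# In the first terminal: continuously display network scanning
# statistics (press Ctrl+C to stop)
$ drweb-ctl stat -n

# In the second terminal: scan a directory, distributing the
# load across the configured cluster nodes
$ drweb-ctl netscan /path/to/test/directory
```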