
Installing and Deploying the NetMRI Operations Center


The NetMRI Operations Center provides a superset of NetMRI discovery and device management that scales the distributed network management platform to larger networks and deployments. You dedicate satellite NetMRI appliances, called collectors, to the tasks of device discovery and device management, and you use a central Operations Center appliance to aggregate and view all data collected from the collector appliances, to view operating state, and to manage configurations for all discovered network infrastructure devices and discovered IP networks, including routers, firewalls, load balancers, Ethernet L2/L3 switches, end hosts, and end host networks. NetMRI Operations Center makes it easier to manage, control, and secure the enterprise network.
Installation of Operations Center Controller appliances changed in Release 6.9. For initial appliance setup, you run the following sequence of Admin Shell commands on the Operations Center appliance from the NetMRI command line:

configure server

license

configure server (run a second time, after the license is installed)

configure tunserver

For each Collector appliance in your deployment, the command sequence is as follows:

configure server

license

register

If you wish to immediately begin installing and deploying your NetMRI Operations Center appliances, see the following procedures:

1st Step: Configuring Basic Settings for the Operations Center Controller

2nd Step: Installing the Operations Center License on the Controller

3rd Step: Running Configure Tunserver on the Controller

4th Step: Installation for Operations Center Collectors

5th Step: Installing the Operations Center Collector License(s)

6th Step: Registering NetMRI Collectors

Configuring Network Interfaces for Operations Center

Operations Center Appliances and Requirements

Infoblox offers NetMRI appliances in several models:

  • NetMRI-1102-A (Discontinued; NetMRI-1102-A appliances may operate as collectors only)

NetMRI-1102-A appliances are equipped with two Ethernet ports, labeled MGMT and SCAN. The MGMT port may be used singly as a dedicated management port for the appliance or may operate as the only active port, carrying both management and network monitoring traffic. By default, the appliance is configured to use the MGMT port for both system administration and network analysis functions.

  • NetMRI NT-4000

The NT-4000 is a next-generation 2U appliance that supports a larger CPU, memory, and storage configuration, along with field-replaceable power supplies and disk drives in a RAID-10 array. The NT-4000 appliance may operate as an Operations Center and as a collector appliance. The appliance is equipped with two active Ethernet ports, labeled LAN1 and MGMT. MGMT connects the NT-4000 appliance to the management network and is used for managing the appliance. The LAN1 port is the primary connection to managed networks. (LAN1 may operate as the only active port, carrying both management and network monitoring traffic.) If activated, LAN2 also connects the appliance to managed networks.

  • NetMRI NT-1400

The NetMRI NT-1400 is designed for smaller enterprise deployments and for use as a collector for Operations Center deployments. The appliance is equipped with two active Ethernet ports, labeled LAN1 and MGMT. MGMT connects the NT-1400 appliance to the management network and is used for managing the appliance. The LAN1 port is the primary connection to managed networks. (LAN1 may operate as the only active port, carrying both management and network monitoring traffic.) If activated, LAN2 also connects the appliance to managed networks.

  • NetMRI NT-2200

The NetMRI NT-2200 is a higher-capacity, higher-speed appliance that may operate as an Operations Center appliance or as a collector. The appliance is equipped with two active Ethernet ports, labeled LAN1 and MGMT. MGMT connects the NT-2200 appliance to the management network and is used for managing the appliance. The LAN1 port is the primary connection to managed networks. (LAN1 may operate as the only active port, carrying both management and network monitoring traffic.) If activated, LAN2 also connects the appliance to managed networks.

  • NetMRI VM

A virtual machine version, installed on a VMware ESX server, provides greater flexibility for network monitoring and analysis. VMs are often used as collectors for an Operations Center deployment. A NetMRI VM can also operate as an Operations Center.


Note: In the Operations Center context, when an appliance acts as the Operations Center it uses only a single port, which is the MGMT port for the NT-1400, NT-2200 or NT-4000. Collectors may use multiple interfaces for network discovery and management, including 802.1q virtual scan interfaces. Typically, both the LAN1 and LAN2 ports are used in this manner on each Collector appliance. For more information, see Configuring Network Interfaces for Operations Center.


In this document, all hardware models are treated generically and referred to as a "NetMRI appliance." Any currently sold appliance model can operate as a NetMRI Operations Center central node.
Infoblox NetMRI appliances should always be supported by an uninterruptible power supply (UPS) to avoid data corruption problems in cases of power outage.

Special Considerations for Virtual Appliances

NetMRI products are high-performance appliances that demand substantial resources when deployed on a virtual platform. A NetMRI VM, whether a collector or an Operations Center, is highly demanding of the host's I/O processes. Before attempting to run a NetMRI VM as a standalone appliance, a collector, or an OC node, verify the prospective deployment with the Infoblox Benchmark Tool and the NetMRI VM Benchmark Validation Spreadsheet, both of which may be obtained through Infoblox Support. For related best practices, also see the subsection Virtual Machine Best Practices.
Raw system performance also bears a direct relationship to the licensing provided for the platform. When a VM host is benchmark tested, the results also determine the number of device licenses which the virtual machine can support. All proposed VM-based OC systems must be benchmark tested through Infoblox field and support personnel before a sale is made. For more information, contact your Infoblox Sales representative or Infoblox Support.

Access Using the Command Line SSH client

Initial connections to the NetMRI Administrative Shell using an SSH command line client to the IP address of the MGMT port require a username as one of the command line parameters, as shown in this example:

ssh -l admin <system>

where <system> is the hostname or IP address assigned to NetMRI. At that point, you are prompted for the admin account password, which is the same as that used for the browser interface.
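For example, assuming an OpenSSH client and a Controller whose MGMT port address is 10.120.25.212 (the address used later in this chapter), either of the following forms works; the hostname netmri-oc.corp100.com is hypothetical and must resolve in your DNS:

ssh -l admin 10.120.25.212
ssh admin@netmri-oc.corp100.com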

Operational and Deployment Best Practices

When you set up and deploy an Operations Center and its associated collectors, follow some best-practices guidelines to ensure a smooth and effective rollout.

  1. Keep device management levels below the licensed device limits on each collector appliance.
    • Though you have greater flexibility for network connectivity through using network views, multiple scan interfaces and virtual scan interfaces, these features do not influence the licensing limits and capacities of your appliances.
    • License limits should be defined to allow for organic and anticipated growth of the network. Consult with your Infoblox sales representative for a detailed assessment of your licensing needs.
    • License limits are enforced on each collector appliance in an OC deployment. Your OC design should avoid having excessive numbers of licenses on collectors, which can overwhelm the Operations Center and prevent timely operation.
    • New devices can 'bump' older previously-discovered devices from the license limit.
    • Devices in higher-ranked device groups will be prioritized for licensing. (You can change device group rankings in Settings icon –> Setup –> Collection and Groups –> Groups tab.)
    • Avoid using device licenses on devices in end-user network segments.
  2. During setup of a new deployment, use the default network view when you define your first discovery ranges to initially discover the network.
    • Initial setup for a new Operations Center deployment automatically creates a default network view, named Network 1. This network view is automatically assigned to the Collector appliance's LAN1 port before you perform discovery of the network.
    • When you create your initial discovery ranges, the Network 1 network view is already assigned to the LAN1 interface on the Collector. This network view represents the global routed network: the portion of the network that NetMRI discovers without relying on virtual networks to route traffic.
    • When you create your discovery ranges, static IP addresses and Seed Routers (in Settings icon –> Setup –> Discovery Settings –> Ranges/Static IPs/Seed Routers), each range provides a Network View drop-down menu. You select one network view for each discovery setting; however, a network view can work with multiple discovery ranges. A single network view can use all three discovery objects.
      You define network views (under Settings icon –> Setup –> Network Views) and can assign other networks to those views.
    • For VRF discovery, you do not need to define discovery ranges in the initial rollout. NetMRI will discover VRF-aware devices in its first discovery of the global enterprise network. The system then displays a System Health alert notifying you that unassigned VRFs have been discovered.

3. Avoid using too many device groups; aim for 50 or fewer Extended device groups. Platform Limits also influence the number of device groups allowed in your system.

    • Device groups govern summary data aggregations and other device processing within each group. Device groups are defined in two varieties: Basic device groups, which offer minimal functionality and simply provide a basic categorization for discovered devices such as end hosts; and Extended device groups, which allow the enabling and disabling of specialized group features based upon the type of devices in the group. For more information, see the sections beginning with Device Groups and Switch Port Management and Creating Device Groups.
    • You can define the required device groups for your deployment; delete those that you do not need. Also, avoid frequent group definition changes, additions and deletions.
    • Keep the Unknown and Name Only device groups; do not delete these device groups.
    • Also see Understanding Platform Limits for your Deployment.

4. Ensure reliable network connections between collectors and the Operations Center node.

    • Avoid disruption of network connections between the Operation Center and its associated collectors.
    • Also ensure that DNS resolution works correctly between all Collector appliances and the Operations Center Controller appliance, so that every Collector can consistently synchronize with the Controller. Successful registration alone does not guarantee this, because registration uses only the Controller IP address; a Collector placed in an enterprise DMZ, for example, may register but still fail to resolve the Controller by name. Use the show tunclient command on each Collector to verify DNS resolution of the Controller. If you see RESOLVE: Cannot resolve host address messages in the show tunclient output, add an entry for the Operations Center Controller to the Collector's /etc/hosts file, as shown in the sketch below.
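As a minimal sketch, assuming the Controller's management address is 10.1.21.2 (the address used in the registration example later in this chapter) and that its fully qualified name is the hypothetical netmri-oc.corp100.com, the entry follows the standard hosts-file format of address, canonical name, and optional aliases:

10.1.21.2    netmri-oc.corp100.com    netmri-oc

After adding the entry, run show tunclient again to confirm that the RESOLVE errors no longer appear.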

5. Use recommended methods to improve reporting performance for your Operations Center.

    • Filter down to the most important data, such as individual device groups, specific time windows and other Report settings.
    • Schedule large, complex reports to run during off-hours.
    • Avoid unnecessarily large reports. Example: Save out monthly reports instead of running multiple-month reports.
    • For reports that offer the option, disable details when they are not germane to the report.
    • If you run reports simultaneously, adjust the Concurrently Running Reports setting on the Settings icon –> General Settings –> Advanced Settings page.

6. Manage Syslog Traffic.

Virtual Machine Best Practices

Follow the points below to ensure efficient VM-based Collector operation:

  1. Disable or adjust VM performance monitoring systems for the product.
    • Because Operations Center VMs tend to be extremely I/O intensive, with continuous 100% CPU utilization, vSphere performance monitoring should be reduced or disabled.
  2. Avoid placing multiple NetMRI instances on the same host.

Operations Center/NetMRI instances present significant demands on I/O, particularly on virtual machine hosts. Avoid attempting to run Operations Center appliance instances on hosts with other VMs.

    • Avoid sharing storage with other virtual applications.
    • Use dedicated local storage if at all possible.
    • For network-based storage, assign dedicated spindles to the virtual machine.

3. In the host, use a high-quality RAID controller.

    • Operations Center and NetMRI virtual machines are sensitive to RAID controller quality. Avoid software RAID and RAID controllers integrated on the motherboard; these options perform worse than using no RAID at all.
    • Infoblox recommends an enterprise-grade controller with a battery-backed write cache.
    • Infoblox recommends use of RAID-10.

4. Enable Intel VT options in the BIOS of the host system.

The Operations Center validation spreadsheet provides additional VM tuning best practices.

Benchmarking for the Operations Center

A critical phase of Operations Center planning involves the NetMRI Platform Benchmark. If you plan to run one or more virtual machines as collectors in an Operations Center environment, you must verify that your virtual machine hosts and OC system can support the requested scale of network device licenses. You work with Infoblox Support to do so. For more information, refer to the VM Benchmarking Guide for Infoblox NetMRI Operations Center, which is available for download from the Infoblox Support site at https://support.infoblox.com.

Discovery Best Practices

Follow the points below to ensure effective discovery of the network:

For simplicity, perform discovery in phases.

  • Begin with a small set of devices and ensure your discovery ranges, defined credentials, and seed routers are all correct.
  • Ensure that firewall gateways for any networks to be discovered allow SNMP discovery traffic through TCP/UDP ports 161 and 162.
  • Ensure that your discovery ranges, static IPs and seed routers are associated with their correct network views. For initial discovery, your ranges and other discovery settings can simply be associated with the Network 1 network view, which is automatically created during appliance setup and is bound to the SCAN1 port on your Collector appliance. For more information, see Configuring Network Views.
  • Avoid defining large Discovery Ranges such as /8 or /16, and avoid defining more than 1000 ranges of any size. However, having a larger discovery range and seed routers is a more effective discovery technique than using hundreds of small ranges. (You define discovery ranges in Settings icon –> Setup –> Discovery Settings.) For more information, see Configuring Discovery Ranges.
  • For discovery using ping sweeps, avoid attempting ping sweeps of greater than /22 subnets. Ping sweeps use protocols other than ICMP and can incur delays in refreshing previously discovered devices. For information on Smart Subnet ping sweep, see Defining Group Data Collection Settings.

Include End-Host devices and Ethernet segments in discovery ranges.

  • Use the Exclude From Management setting on end-host segment discovery ranges to prevent unnecessary SNMP credential discovery against end hosts (Settings icon –> Setup –> Discovery Settings –> Ranges tab –> Discovery Mode menu).

Planning an Operations Center Deployment

A number of factors help decide what your Operations Center deployment will look like:

  • Define your goals for the network management system.

Are you planning to manage only switched Ethernet networks? Manage all routed and switched networks? A mix of routing, switching, and security devices? Will you manage virtual routing and forwarding (VRF) networks?
These factors bear upon the type of licensed feature set for the Operations Center, and how you will deploy it.

  • Estimate the size of the managed network.

Operations Center feature licensing is defined by the number of licensed devices (including but not limited to routers, switches, firewalls and servers). Each managed device occupies a device license under NetMRI. The size of the managed network helps define the level of licensing you will need for the Operation Center deployment.
Managed devices have a different licensing scheme from discovered devices. You allocate licenses based upon the infrastructure devices in your network that you want to manage; because the number of endpoint hosts may be far greater than the number of infrastructure devices, endpoints should be considered as part of the discovered devices category in most if not all cases. Discovered devices of this type should not be licensed unless considered necessary.
Conversely, unlicensed infrastructure devices can lead to incomplete network analysis issues such as topology holes and large collections of undiscovered endpoints.

  • Determine how many Collector appliances you need, and how many device licenses should be provided on each Collector.

Your decision on how many Collectors you need in your deployment is generally determined by the size and the topology of your network. For large deployments, contact Infoblox Support.
Licensing levels are enforced on each Collector and cumulatively add to the total number of licenses for the Operations Center. On the Controller, you also install a license generated by Infoblox, based directly on the features and device counts licensed for all Collectors in the full OC deployment.
Knowing to a fairly close margin what device counts each of your Collectors will manage, while allowing room for growth, helps determine how the Controller will be licensed.

  • What is the rate of growth for the managed network?

Plan for growth within the network when you define and set up the Operations Center deployment. A good rule of thumb is to plan for a minimum organic growth of 5% per year, but the figure depends entirely on the circumstances of each deployment and on any future plans for the managed network.

Understanding Platform Limits for your Deployment

NetMRI provides a detailed System Health feature set that helps enforce key evaluation elements such as Platform Limits, Licensing Limits and Effective Limits for a deployment. For more information, see the section Understanding Platform Limits, Licensing Limits and Effective Limits.
NetMRI Platform limits, Licensed limits, and Effective limits apply to all collector appliances and instances in an Operations Center. On the Operations Center, the Settings icon –> Setup –> Tunnels and Collectors page separately lists each collector's status and its associated device limits. For more information, see Checking and Viewing Operations Center Networks.

Installing Operations Center Platforms

Complete the following procedures to install and configure an Operations Center:

1st Step: Configuring Basic Settings for the Operations Center Controller

2nd Step: Installing the Operations Center License on the Controller

3rd Step: Running Configure Tunserver on the Controller

4th Step: Installation for Operations Center Collectors

5th Step: Installing the Operations Center Collector License(s)

6th Step: Registering NetMRI Collectors

An Operations Center deployment consists of a controller appliance and one or more collector appliances. The Controller aggregates data and analyzes results from the collectors to provide a consolidated view of the enterprise network within one user interface, which is hosted by the controller.
Communication between the Controller and its associated Collectors takes place over a set of Secure Sockets Layer Virtual Private Network (SSL VPN) tunnels across their designated management network. You monitor Operations Center VPN tunnels and basic collector communication from the Settings icon –> Setup –> Tunnels and Collectors page. Each VPN tunnel between the Operations Center and the associated Collectors appears in the list.
You begin installing an Operations Center platform by installing and configuring its Operations Center Controller, followed by installing and configuring its Collector appliances, whether physical or virtual.
After physically installing the Collector appliances, or deploying the virtual machines to their respective hosts, activating their instances under the hypervisor and installing their Infoblox NetMRI licenses, you need to run a brief series of NetMRI Admin Shell (CLI) commands to bring up each instance.
After finishing the basic Operations Center installation, you complete one of two options: importing data from a reference NetMRI instance, or configuring the Operations Center Controller with factory defaults. Both options are described later in this section.

Installing the Operations Center Controller

You perform Operations Center Controller appliance installation before the Collector appliances are deployed for discovering their respective networks.
Users connect to the NetMRI UI through the MGMT interface IP address on the Controller.


Note: Scan port configurations reside on each Collector appliance. The Operations Center Controller does no discovery or device management of its own; when you run configure server on the controller, you do not configure scan interfaces.


1st Step: Configuring Basic Settings for the Operations Center Controller

You will need the following information to begin setting up the Controller:

  • The Management IP address of the appliance (this IP will be assigned to the MGMT port of the appliance);
  • A name for the global network to which NetMRI will initially connect and discover;
  • The Default Gateway IP address for the management port;
  • A designated controller name, if the default is not correct;
  • The local domain name for the server network;
  • Time zone and region information;
  • DNS Server IP (and secondary DNS server IP if necessary);

There are two possibilities for basic IP configuration of the Controller:

  • Static IP addressing using the configure server command;
  • The appliance acts as a DHCP client, and the default values appear when you run the configure server command.

In this procedure, we assume use of a static IP address configuration for the Controller.

  1. Use a terminal program to connect to the management IP address of the Controller appliance.
  2. Log in using the default admin/admin username/password account.

Note: Values already configured on the appliance appear as the defaults in this series of steps. If your Operations Center is configured through DHCP, default values from that service appear here. Avoid overwriting DHCP-provided settings if this is the case.


3. At the Admin Shell prompt, enter configure server and press Enter.

admin-na206.corp100.com> configure server

4. Press Y to respond Yes to begin system setup.

You can clear a default value by typing a space and pressing Enter, or replace it by entering a new value.

5. Enter the new Database Name and press Enter.

Database Name is a descriptive name for this deployment. It is used in report titles, headers, and so on.

Recommended: Begin name with uppercase letter.
Database Name []: Corp100_west

6. For the first-time installation, you can choose to generate a new HTTPS certificate.

Do you want to generate a new HTTPS Certificate? (y/n) [n]: y

7. Enter the local domain name in which the controller resides. This value is used for truncating device names in NetMRI data sets throughout the system.

Domain Name 1 (e.g., example.com) []: corp100.com
Domain Name 2 (optional) []:

8. Enter the time server IP address if one is available or is necessary:

Time Server [us.pool.ntp.org]:

9. Enter the time zone region by typing in the suggested numeric value from the list:

Time Zone Regions
Choose your local region.

 0. Africa         1. Antarctica      2. Arctic          3. Asia
 4. Atlantic       5. Australia       6. Brazil          7. Canada
 8. CET            9. Chile          10. EET            11. GMT
12. GMT-1         13. GMT+1          14. GMT-2          15. GMT+2
16. GMT-3         17. GMT+3          18. GMT-4          19. GMT+4
20. GMT-5         21. GMT+5          22. GMT-6          23. GMT+6
24. GMT-7         25. GMT+7          26. GMT-8          27. GMT+8
28. GMT-9         29. GMT+9          30. GMT-10         31. GMT+10
32. GMT-11        33. GMT+11         34. GMT-12         35. GMT+12
36. Europe        37. Hongkong       38. Iceland        39. Indian
40. Israel        41. Mexico         42. NZ             43. NZ-CHAT
44. Pacific       45. US             46. UTC            47. WET

Enter choice (0-47) [0]: 45

10. Enter the time zone location by typing in the suggested numeric value from the list:

Choose a location within your time zone.

 0. Alaska          1. Aleutian         2. Arizona          3. Central
 4. East-Indiana    5. Eastern          6. Hawaii           7. Indiana-Starke
 8. Michigan        9. Mountain        10. Pacific         11. Samoa

Enter choice (0-11) [0]: 10

11. Follow the steps for configuring the management port IP settings:

+++ Configuring Management Port Settings
You must configure an IPv4 or IPv6 address/mask on the management port.
NetMRI can perform analysis from the management port or a separate scan port.

IP Address (optional) []: 10.120.25.212
Subnet Mask (optional) []: 255.255.255.0
IPv6 Address (optional):
IPv6 Prefix (optional):

You must provide either an IPv4 gateway, an IPv6 gateway, or both.

IPv4 Default Gateway (optional) []: 10.120.25.1
IPv6 Default Gateway (optional) []:

12. Enter n for No, or press Enter to accept the default, to skip the step for configuring the SCAN port on the Controller appliance:

Do you want to configure the Scan Port? (y/n) [n]: <enter>
You will not use the SCAN ports LAN1 and LAN2 on the Controller appliance in an OC deployment.

13. Enter the address(es) of the primary and secondary DNS server, if required:

DNS Server 1 (IP) []: 172.23.16.21
DNS Server 2 (optional) []:

14. The setup utility lists the configuration settings and queries whether you wish to edit them.

Edit these settings? (y/n) [n]:

15. Finally, the setup utility requests that you commit your settings. Press Enter to accept the Y (yes) default.

Configure the system with these settings? (y/n) [y]:


Configuring system ...
+++ Validating Interfaces ...
+++++ eth0 ... OK
+++++ eth1 ... OK
The controller appliance restarts.

16. Verify your settings by entering the following:

admin-na206.corp100.com> show settings
This command lists the complete config settings for the Operations Center.

For the controller appliance, continue to the next topic, 2nd Step: Installing the Operations Center License on the Controller.

2nd Step: Installing the Operations Center License on the Controller

You must install the cumulative feature license provided to you by Infoblox Sales & Support for the Controller to fully operate with all Collectors in the deployment. This license must contain the aggregate count of device licenses and feature entitlements that are provided for all Collectors expected to work with the OC Controller system.
When you receive the Controller appliance and physically install it, it does not automatically contain the licensed features and entitlements present on the Collectors, nor can those entitlements be transferred to the Controller. When you first set up the appliances that you are designating as Collectors in an OC deployment, they are simply operating as standalone NetMRI appliances, and each is licensed separately. The Controller has its own cumulative license file. You install that license in this step, followed by Step 2a, re-running configure server on the Controller appliance.

To install the Operations Center license for a NetMRI physical appliance, generate a license on your own using the license generate administrative shell command. For more information, see license generate command.

To install the Operations Center license for a NetMRI virtual appliance, do the following:

  1. Obtain the Operations Center license through the Infoblox Support Portal at http://support.infoblox.com.
  2. Upload the license file provided through the Infoblox Support Portal into the admin account's /Backup directory using WinSCP or a similar program.
  3. Log into the admin shell, enter the license <NameOfLicenseFile> command, and press Enter.
    admin-na206.corp100.com> license <license_file_name.gpg>
    The server restarts without rebooting the appliance. The server resumes operation after several minutes of processing.
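If you are not using WinSCP, a command-line copy also works. The following is a sketch only: it assumes a standard OpenSSH scp client, a Controller management address of 10.120.25.212, and a hypothetical license file named netmri_oc_license.gpg, uploaded to the /Backup directory described above and then installed with the license command:

scp netmri_oc_license.gpg admin@10.120.25.212:/Backup/
admin-na206.corp100.com> license netmri_oc_license.gpg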

Step 2a: Re-Run Configure Server

After you install the license for your Controller, you must run the configure server command a second time.

  1. After logging in to the Controller appliance, re-run the configure server command:
    admin-na206.corp100.com> configure server
  2. Press Y to respond Yes to continue system setup. You step through the settings you defined in your first run of the configure server command by pressing Enter at each prompt. (You do not need to change any settings unless changes are required for administrative reasons.) At the end of the configure server command sequence, decline the offer to edit the settings, then press Enter to commit the previously defined settings to the system.

    Edit these settings? (y/n) [n]:

    Configure the system with these settings? (y/n) [y]:

    Configuring system ...

    +++ Validating Interfaces ...

    +++++ eth0 ... OK

    +++++ eth1 ... OK

    The controller appliance restarts.

  3. Continue to the next topic, 3rd Step: Running Configure Tunserver on the Controller.

3rd Step: Running Configure Tunserver on the Controller

The configure tunserver command governs the core security settings for the Controller appliance, including certificate usage and the VPN tunnel server settings between the Controller and all collectors.
The command also offers the option to define a reference NetMRI appliance to use for importing the library of scripts, custom reports, custom jobs, policies and user account data from an existing appliance. For more information, see the following section, Importing Data From a Reference NetMRI Instance.

  1. Use a terminal program to connect to the management IP address of the Controller appliance.
  2. Log in using the default admin/admin username/password account.
  3. Execute the following Admin Shell CLI commands on a newly installed or reset Operations Center appliance:

NetworkAutomation-VM-8DD4-66925> configure tunserver

+++ Configuring CA Settings

CA key expiry in days [5475]:

CA key size in bits [2048]:

+++ Configuring Server Settings

Server key expiry in days [5475]:

Server key size in bits [1024]:

Server Public Name or IP address: 10.120.32.167

Protocol (tcp, udp, udp6) [tcp]:

Tunnel network base [5.0.0.0]:

Block cipher:

0. None (RSA auth)

1. Blowfish-CBC

2. AES-128-CBC

3. Triple DES

4. AES-256-CBC

Enter Choice [2]:

Use compression [y]:

You can optionally designate a NetMRI client system as a "reference" system

that will be used as a source of common settings.

Enter reference system serial number or RETURN to skip:

Use these settings? (y/n) [n]: y

The system will commit the settings and restart the software without rebooting the system.

To check operation of VPN tunnel connections with Collector appliances, go to Settings icon –> Setup –> Tunnels & Collectors on the Controller.
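If you prefer the command line, the show tunserver Admin Shell command (shown with sample output in the disaster recovery procedure later in this section) summarizes the tunnel server configuration and lists the client session for each registered Collector:

admin-na206.corp100.com> show tunserver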

4th Step: Installation for Operations Center Collectors

Scan port configurations reside on each Collector appliance. The Operations Center Controller does no discovery or device management of its own; when you run configure server on the controller, you do not configure the LAN1 port. The following procedure applies only to Collector appliances.
You will need the following information to begin setting up the Collector:

  • The Management IP address of the appliance (this IP will be assigned to the MGMT port of the appliance);
  • The customer name for the network to which the appliance will initially connect and discover;
  • The Default Gateway for the management port;
  • The local domain name for the server network;
  • Time zone and region information;
  • DNS Server IP (and secondary DNS server IP if necessary);

There are two possibilities for basic IP configuration of each Collector:

  • Static IP addressing using the configure server command;
  • Appliance uses DHCP, and the default values appear when you run the configure server command.
    In this procedure, we assume use of a static IP address on the management network for each Collector.
  1. Use a terminal program to connect to the management IP address of the Collector appliance.
  2. Log in using the default admin/admin username/password account.
  3. Review the following note about DHCP-provided defaults.
  4. Complete the following steps:

Note: If your Operations Center Collectors are configured through DHCP, default values from that service will appear here. Do not override DHCP settings while using the configure server command. In this procedure, we assume use of a static IP address on the management network for each Collector.


5. At the Admin Shell prompt, enter configure server and press Enter.

6. Press Y to respond Yes to begin system setup:

Do you want to start system setup now? (y/n) [n]: y


Default values, when available, are given within [].

You may clear defaults by typing a SPACE and pressing Enter.

+++ Configuring Network Identification Settings

7. Enter the new Database Name and press Enter.

Database Name is a descriptive name for this deployment. It is used in report titles, headers, and so on.
Recommended: Begin name with uppercase letter.


Database Name []: Corp100_west

8. Enter the new Server Name and press Enter.

The Server Name identifies this system in SNMP and HTTPS server certificates.
The installed HTTPS certificate contains the following subject:

subject= /CN=NetworkAutomation-2210201208100028/O=NetMRI


Server Name []: corp100_187

9. For the first-time installation, you can choose to generate a new HTTPS certificate.
Do you want to generate a new HTTPS Certificate? (y/n) [n]: y
10. Enter the local domain name in which the appliance resides. This value is used for truncating device names in NetMRI data sets throughout the system.
Domain Name 1 (e.g., example.com) []: corp100.com
Domain Name 2 (optional) []:
11. Enter the time server IP address if one is available:
Time Server [us.pool.ntp.org]:
12. Enter the time zone region by typing in the suggested numeric value from the list:
Time Zone Regions
Choose your local region.

 0. Africa         1. Antarctica      2. Arctic          3. Asia
 4. Atlantic       5. Australia       6. Brazil          7. Canada
 8. CET            9. Chile          10. EET            11. GMT
12. GMT-1         13. GMT+1          14. GMT-2          15. GMT+2
16. GMT-3         17. GMT+3          18. GMT-4          19. GMT+4
20. GMT-5         21. GMT+5          22. GMT-6          23. GMT+6
24. GMT-7         25. GMT+7          26. GMT-8          27. GMT+8
28. GMT-9         29. GMT+9          30. GMT-10         31. GMT+10
32. GMT-11        33. GMT+11         34. GMT-12         35. GMT+12
36. Europe        37. Hongkong       38. Iceland        39. Indian
40. Israel        41. Mexico         42. NZ             43. NZ-CHAT
44. Pacific       45. US             46. UTC            47. WET

Enter choice (0-47) [0]: 45

13. Enter the time zone location by typing in the suggested numeric value from the list:

Choose a location within your time zone.

 0. Alaska          1. Aleutian         2. Arizona          3. Central
 4. East-Indiana    5. Eastern          6. Hawaii           7. Indiana-Starke
 8. Michigan        9. Mountain        10. Pacific         11. Samoa

Enter choice (0-11) [0]: 10

You continue by configuring the management port settings. You define the IPv4 and IPv6 addresses and subnet masks, and the default gateway IP address, for the management port:

You must configure an IPv4 or IPv6 address/mask on the management port.

NetMRI can perform analysis from the management port or a separate scan port.

IPv4 Address (optional) []: 10.120.32.181

IPv4 Subnet Mask (optional) []: 255.255.255.0

IPv6 Address (optional):

IPv6 Prefix (optional):

IPv4 Default Gateway (optional) []: 10.120.32.1

IPv6 Default Gateway (optional) []:


Note: In the Operations Center, after you deploy the Controller and register the Collectors, all Collectors inherit Time Zone settings from the Controller. All systems will reboot on time zone updates, including the Controller and all Collectors.


14. Press Y (yes) to perform the step for configuring the LAN1 port on the collector appliance:

Do you want to configure the Scan Port? (y/n) [n]: y

You must configure an IPv4 or IPv6 address/mask on the scan port.

IP Address (optional) []: 10.0.60.181

Subnet Mask (optional) []: 255.255.255.0

IPv6 Address (optional):

IPv6 Prefix (optional):

You must provide either an IPv4 gateway, an IPv6 gateway, or both.

IPv4 Default Gateway (optional) []: 10.0.60.1

IPv6 Default Gateway (optional) []:

15. Enter the address(es) of the primary and secondary DNS server, if required:

DNS Servers are used to map hostnames to IP addresses.

You may enter up to 2 name servers below.

DNS Server 1 (IP) []: 172.23.16.21

DNS Server 2 (optional) []:

16. The setup utility lists the configuration settings and queries whether you wish to edit them.

Edit these settings? (y/n) [n]:

17. The setup utility requests that you commit your settings. Press Enter to accept the Y (yes) default.

Configure the system with these settings? (y/n) [y]:


Configuring system ...

+++ Validating Interfaces ...

+++++ eth0 ... OK

+++++ eth1 ... OK

The Collector appliance restarts.

You continue by installing the license for each Collector appliance. Continue to the next topic, 5th Step: Installing the Operations Center Collector License(s).

5th Step: Installing the Operations Center Collector License(s)

You will need the following information to correctly license all Collectors in the deployment:

  • Each Collector appliance's required feature licenses (Full NetMRI, or Automation Change Management (ACM));
  • The number of licensed devices that each Collector is expected to manage.

In an OC deployment, each NetMRI Collector license is provided from the Infoblox Support Portal. Once you bring the appliances up, they are simply operating as standalone NetMRI appliances.

  1. Obtain the Operations Center Collector license from Infoblox.
  2. Upload the license file provided by Infoblox into the admin account's /Backup directory using WinSCP or a similar program.
  3. Log into the admin shell and enter the license <NameOfLicenseFile> command, and press Enter.
    The NetMRI service restarts without rebooting the appliance. NetMRI resumes operation after several minutes of processing.
  4. Continue to the next section, 6th Step: Registering NetMRI Collectors.

6th Step: Registering NetMRI Collectors

To complete the basic Operations Center deployment, you run the register command in the Admin Shell on each Collector to register them with the newly configured Operations Center.


Note: Managed device licenses are enforced on each of the collectors for the Operations Center, not on the central Operations Center node. The Operations Center node must contain a license encompassing the scope of all licenses across all Collectors.


  1. Use a terminal program to connect to the management IP address of each Collector appliance.
  2. Log in using the default admin/admin username/password account.
  3. Execute the following Admin Shell CLI commands on each newly installed or reset Collector appliance:

admin-na206.corp100.com> register
NOTICE: The inactivity timeout is being disabled temporarily while this command is run.


+++ Configuring Tunnel Registration Settings

Registration Server/IP [e.g., example.com]: 10.1.21.2
Registration protocol (http|https) [https]:
Registration username: admin
Registration password:#$^%#*#$


Register this system? (y/n) [y]:y

4. Press Y to establish the secure communication link between the Collector and the Operations Center appliance.

You can migrate from a standalone NetMRI appliance to an Operations Center environment. This procedure is described in the following section, Importing Data From a Reference NetMRI Instance.

Importing Data From a Reference NetMRI Instance

The appliance designated as a Controller can import the library of scripts, custom reports, custom jobs, policies and user account data from an existing NetMRI appliance. The NetMRI appliance from which you are importing does not become the Controller itself.

  1. Choose the NetMRI instance as a reference system from which data will be copied.

Only information from the reference NetMRI can be imported into the Operations Center. When adding multiple NetMRI instances to an Operations Center environment, the scripts, policies and settings may differ between NetMRI instances. Therefore, any of the deltas you want imported into the Operations Center must either be manually added to the reference NetMRI, or imported into the Operations Center after the reference NetMRI is restored on the Operations Center.

2. Configure the Controller:

    1. Log in to the admin shell on the Operations Center Controller.
    2. At the command prompt, enter configure tunserver.
    3. When prompted to Enter the reference system serial number or RETURN to skip, type the serial number of the NetMRI reference system, then press ENTER.

Tip: In each prompt, defaults are shown in square brackets [ ]. To accept the default, simply press ENTER.


d. When prompted: Use these settings?, enter y.

e. When prompted to restart the Controller, enter y.

The complete package of scripts, policies and user data is downloaded by the Operations Center. You install the data in a following step.

3. Register the reference system with the Controller:

    1. Log in to the admin shell on the reference system.
    2. At the command prompt, enter register.
    3. When prompted to Register this system?, enter y.
    4. You are prompted to run restore-settings on the master server. Continue in step 4.

4. Define restore settings on the Controller: (This installs the uploaded reference data.)

    1. If needed, log in to the admin shell on the Controller.
    2. At the command prompt, enter restore-settings.
    3. At the Continue with import? prompt, enter y. (This installs the reference data on the Controller.)
    4. When prompted to restart the Controller, enter y.

5. Re-register the reference unit with the Controller.

    1. If needed, log in to the admin shell on the reference system.
    2. At the command prompt, enter register.
    3. When prompted to Register this system?, enter y.
    4. The appliance restarts. After restarting, the instance will be a collector in the Operations Center system.

Note: As part of the registration process, the admin password on each collector synchronizes with the password on the Operations Center Controller. After registration completes, the admin password for the collector may be different than the password you initially used to log in to the admin shell on that instance.




Note: After registration, the NetMRI GUI is not available on the reference NetMRI unit. All access to the unit takes place through the Controller.


Configuring the Operations Center Controller with Factory Defaults

This procedure describes the straightforward process of setting up an Operations Center Controller with factory defaults. No configuration data or network and device information is imported from any NetMRI reference system.

  1. Log in to the NetMRI admin shell.
  2. Enter configure tunserver.
  3. When prompted to Enter the reference system serial number or RETURN to skip, press ENTER.
  4. Proceed to build out the system, by following the procedures in the section Installing the Operations Center Controller.

Installing an Operations Center License onto an Existing NetMRI Appliance

Install an Operations Center license only on appliances that are qualified to operate as an Operations Center. Otherwise, the process is straightforward.

  1. Convert the NetMRI appliance to an Operations Center Controller:
    1. Obtain an Operations Center license from Infoblox.
    2. Upload the license into the admin account's /Backup directory using WinSCP or a similar program.
    3. Log into the admin shell and enter the license <NameOfLicenseFile> command.
  2. Log in to the admin shell and enter configure tunserver. Answer the prompts to set up the basic tunnel server settings, as described in the section 3rd Step: Running Configure Tunserver on the Controller.

Configuring Network Interfaces for Operations Center

NetMRI requires a connection to each network you wish to directly discover, manage or control. Scan Interfaces are the ports on NetMRI appliances and virtual appliances that perform this function. Physical scan interfaces are actual Ethernet ports on the appliance.
You can configure virtual scan interfaces on Collector appliances, which use 802.1Q VLAN tagging between NetMRI and its connecting device to exchange traffic for multiple networks across a single physical interface. To use virtual scan interfaces, you connect one of NetMRI's physical scan interfaces to a device interface configured to route the desired networks with 802.1Q VLAN tags.
You define physical scan interfaces and virtual scan interfaces on Operations Center Collectors. All scan interfaces of either type must be bound to a network view to enable network management across each interface.
Each network view also requires discovery settings to discover the network.
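For reference, the facing device must present the desired networks as 802.1Q-tagged VLANs on the port that connects to the Collector's scan interface. The following is a sketch only, assuming a Cisco IOS-style switch and hypothetical VLAN IDs 211 and 212, which would match the tag values you later enter when defining the virtual scan interfaces:

interface GigabitEthernet0/1
 description Trunk to NetMRI Collector LAN1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 211,212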

Network Views and Scan Interfaces


Note: You associate all network discovery settings, including discovery ranges, static IPs and seed routers, with each network view. For information about configuring your network views, see Configuring Network Views and Configuring Network Discovery Settings.


You use network views in combination with scan interfaces to separate and manage networks. If you plan to manage a number of networks of any kind (routed networks, virtual routing and forwarding (VRF) networks, and so on), network views, each tied to a scan interface, give you the flexibility to do so.
Network views provide the useful concept of isolation. Using network views, NetMRI enables you to manage networks that may have overlapping IP prefixes or address ranges, preventing addressing conflicts between separately managed networks. You manage every network in complete isolation from other networks.
In previous Operations Center software versions, a model termed the multi-tenancy multi-collector deployment enabled either of the following:

  •   Multiple collectors on a single network (for managing an exceptionally large network).

For an OC deployment of this type, you choose the network view-collector entry from the Network View list. You will see multiple entries in the pages under Settings icon –> Setup –> Discovery Settings for the Network View list. The entire network is assigned to a single network view; however, each network view entry is identified through the association of each Collector. This allows you to edit discovery settings for each Collector in the same network view. Examples:

corp100_west (NM35)

corp100_west (NM36)

Here each Collector, NM35 and NM36, is associated with the same network view, but discovery settings can be edited separately.

  • Multiple networks, each assigned to one collector.

This is the Multi-Network model. Each Collector is assigned to its own separate network view, which is bound to the scan interface on each Collector. Any Collector can also manage through multiple network views, each of which is considered a separate routing domain.

Through virtual scan interfaces and multiple scan ports, combined with network views, you may have multiple scan interfaces per collector, and therefore multiple networks per collector. This extends the multi-tenancy multi-collector model to allow each collector in the Operations Center to flexibly discover, catalogue and manage multiple networks.


Note: If you have multiple networks, particularly with overlapping IP address ranges, you can define virtual scan interfaces to tie NetMRI to each network without affecting the operation of those overlapping address spaces.


Configuring Physical Scan Interfaces on Collectors

Collector physical scan interface configuration varies depending on your Collector's physical configuration, and even whether the appliance is a VM.
To configure a physical scan interface, do the following:

  1. Go to Settings icon –> Setup –> Scan Interfaces.
    The Scan Interfaces Settings page appears, listing all device interfaces that may be used by the appliance. Depending on the hardware and system type, you will see one or more interfaces named MGMT and/or LANn (where n is the physical port number). In an Operations Center, the Collector Name is shown alongside the interfaces.
  2. Hover over the Action icon for any of the physical ports and select Edit from the menu.
  3. Choose from the Network View configuration section:
    1. Select Existing: Choose a network view from the list of existing ones that are defined on the system;
      • Select the view from the dropdown list;
      • Selecting Unassigned as the Network View leaves the interface in a disabled state.

–or–

b. Create New: Creates a new network view.

      • Enter the name for the new network view;
      • Enter a comment describing the view. These values can be edited at a later time.

4. Enter the IPv4 Address, IPv4 Subnet Mask and the IPv4 Default Gateway; or, enter the IPv6 Address, IPv6 Subnet Mask and the IPv6 Default Gateway values if the connection supports IPv6 in addition to, or instead of, IPv4.

5. Click Save to save the new physical scan interface configuration. On physical ports, you may Edit or Add Virtual Scan Interfaces.


Note: Though the MGMT port allows for the same scanning, discovery, and device control capabilities as the appliance's other physical ports, Infoblox recommends against managing enterprise networks through the MGMT port. Use it only for management access to the appliance's web, CLI, and tunnel interfaces, so those functions cannot be compromised by end-user traffic.


Configuring Virtual Scan Interfaces on Collectors

To define virtual scan interfaces, do the following:

  1. Go to Settings icon –> Setup –> Scan Interfaces.
    The Scan Interfaces page appears, listing all device interfaces that may be used by the appliance. Depending on the hardware and system type, you will see one or more interfaces named MGMT and/or LANn (where n is the physical port number). If any virtual scan interfaces are defined, they will have names like LAN1.211 (NM35) (the '211' is the 802.1Q tag defined for the virtual scan interface, and the 'NM35' is the Collector ID).
    The Scan Interfaces page lists all physical scan interfaces and virtual interfaces by each collector appliance.
  2. Hover over the Action icon for any of the physical ports and select Add Virtual Scan Interface from the menu.
  3. Choose from the Network View configuration section:
    1. Select Existing: Choose a network view from the list of existing network views that are defined in NetMRI;
      • Select the view from the dropdown list;
      • Selecting Unassigned as the Network View leaves the interface in a disabled state. The Network View field remains blank in the Scan Interfaces table.

–or–

b. Create New: Creates a new network view.

      • Enter the name for the new network view;
      • Enter a comment describing the view. These values can be edited at a later time.

4. In the Tag field, enter the 802.1Q tag value defined on the facing device that transits the trunk port or router port. You will need to know the tag value on the device.

5. Enter the IPv4 Address, IPv4 Subnet Mask and the IPv4 Default Gateway; or, enter the IPv6 Address, IPv6 Subnet Mask and the IPv6 Default Gateway values if the connection supports IPv6 in addition to, or instead of, IPv4.

6. Click Save to save the new virtual scan interface configuration. You may also Edit or Delete virtual scan interfaces.

Remember the following points about scan interface configuration:

  • You can assign a network view to a physical port on your appliance, such as LAN1. Doing so does not prevent the same port from supporting virtual scan interfaces, each of which supports their own network view;
  • You can define virtual scan interfaces and assign network views to them, but choose not to apply a network view to the physical port hosting those virtual scan interfaces (LAN1, for example);
  • You can create a virtual scan interface with a tagging value, but not immediately assign it to a network view. The virtual scan interface is effectively disabled and you can assign its network view at a later time;
  • On each Collector appliance, each network view can be associated with a single scan interface. Multiple Collectors can each access the same network view, each using separate discovery settings.

Operations Center Disaster Recovery Procedure


Note: Ensure that the standby Operations Center appliance and/or all standby collectors use the same NetMRI software release as those in production before continuing with this procedure.


This topic describes how to perform a disaster recovery from a Primary Operations Center to a Standby Operations Center. When you perform a disaster recovery, you first restore the database archive on the Standby Operations Center, and then migrate all collectors from the Primary Operations Center to the Standby Operations Center.


Note: To fully configure the Standby Operations Center, you will need a second product license for the disaster recovery system with the same licensing entitlements as the Primary Operations Center license. Contact your Infoblox sales representative for more information.


Complete the following to perform a disaster recovery:

  1. Log in to the Standby Operations Center command line via SSH using the admin/admin system credentials.
  2. Execute the following Admin Shell CLI commands on a newly installed or reset Standby Operations Center instance:
  • Define the management port IP configuration for the Standby Operations Center:
    admin-na206.corp100.com> configure server
  • Install the license for the Standby Operations Center:
    • For a physical appliance, generate a license by running the license generate command. For more information, see license generate command.
    • For a virtual appliance, run admin-na206.corp100.com> license <license filename>.gpg.
  • Define server settings for the Standby Operations Center:
    admin-na206.corp100.com> configure server

Make a note of your settings for Step 6 of this Procedure.


Note: The configure server command also generates a new self-signed certificate for the Standby Operations Center. In cases where a CA-signed certificate is used in the original Operations Center, the HTTPS certificates need to be configured using the procedures described in the topic NetMRI Security Settings in the Admin Guide and in the online Help.


3. Verify your settings by entering the following commands:

admin-na206.corp100.com> show settings
List the complete config settings for the Standby Operations Center.
admin-na206.corp100.com> show license
Show the installed license for the Standby Operations Center.

4. Via SCP, manually transfer the Primary Operations Center database archive to the Standby Operations Center.
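One way to perform the manual transfer is with a standard OpenSSH scp client. This is a sketch only: it assumes the archive file (for example, the ExampleNet_4050201203200004-20130221-641 archive restored in the next step) is available on the system you are copying from, and that the Standby's management address is 172.23.27.170, the address used later in this procedure; the destination is the admin account's /Backup directory:

scp ExampleNet_4050201203200004-20130221-641 admin@172.23.27.170:/Backup/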


Note: You can also configure the database backup for the Primary as an automated transfer, using the Settings –> Database Settings –> Scheduled Archive screen on the Primary Operations Center to archive the OC database to the system designated as the Standby. The backup directory in this case should be set as "Backup"; for more information, see Database Archiving Functions in the Admin Guide and in the online Help.




Note: When using the automated database backup, you must first log in to the Standby Operations Center through your web browser, and set the admin password to a value different from the "admin" factory default.
In this case, after the Standby OC system is activated as the Primary, you must also go to the Settings –> Database Settings –> Scheduled Archive tab and define another remote system to back up the new OC's database archive.


If you schedule the transfer to occur within six hours of the start of weekly maintenance, no new archive will be created. Instead, the archive generated by weekly maintenance will be used. For large deployments with a lot of data, configuring backups to occur more frequently than the weekly interval may affect overall system performance.
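For reference, a manual transfer from the system that currently holds the Primary's database archive might look like the following sketch. The destination host name, account, and directory are illustrative assumptions, not part of the procedure; the archive file name matches the one restored in Step 5:

# Destination host, account, and path are assumptions; substitute your Standby Operations Center values.
scp ExampleNet_4050201203200004-20130221-641 admin@standby-oc.corp100.com:Backup/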

5. Using the Admin Shell on the Standby Operations Center, restore the database archive on the Standby Operations Center. Restore time depends upon the size of the database, and may take several hours for a large system.

admin-na206.corp100.com> restore ExampleNet_4050201203200004-20130221-641


Note: The admin credentials (that default to admin/admin) are changed on the Standby Operations Center following the database restore operation. The Standby Operations Center will use the admin credentials that previously applied on the Primary Operations Center.


6. When the database restore task finishes on the Standby Operations Center, run configure server again to regenerate the Standby Operations Center's self-signed certificate for HTTPS access. Re-enter the settings you noted in Step 2 of this Procedure.

7. In the Admin Shell on the Standby Operations Center, configure the VPN tunnel server on the Standby Operations Center using the same VPN subnet and other settings as on the Primary. When asked for the Server Public Name or IP address, be sure to enter the correct value for the Standby Operations Center. Do not configure a reference collector. The following listing is a sample capture for an entire session:

admin-na206.corp100.com> configure tunserver
+++ Configuring CA Settings

CA key expiry in days [5475]:
CA key size in bits [1024]:

+++ Configuring Server Settings

Server key expiry in days [5475]:
Server key size in bits [1024]:

Server Public Name or IP address: 172.23.27.170 <new IP address for Standby>

Protocol (tcp, udp, udp6) [tcp]:
Tunnel network base [5.0.0.0]:
Block cipher:

0. None (RSA auth)

1. Blowfish-CBC

2. AES-128-CBC

3. Triple DES

4. AES-256-CBC

Enter Choice [2]:

Use compression [y]:

You can optionally designate a NetMRI client system as a "reference" system that will be used as a source of common settings.

Enter reference system serial number or RETURN to skip: <press Enter here>

Use these settings? (y/n) [n]: y

+++ Initializing CA (may take a minute) ...

+++ Creating Server Params and Keypair ...

Generating DH parameters, 1024 bit long safe prime, generator 2

This is going to take a long time

....++*++*++*

+++ Creating Server Config ...

Successfully configured Tunnel CA and Server

The server needs to be restarted for these changes to take effect.

Do you wish to restart the server now? (y/n) [y]: y

+++ Restarting Server ... OK

8. Check the Standby Operations Center's VPN tunnel server settings, which are used for communications between the Operations Center and its collectors, before proceeding:

example-oc> show tunserver

CA configured: Yes

Server configured: Yes

ServerPublicName: 172.23.27.170

Proto: tcp

Port: 443

KeySize: 1024

Network: 5.0.0.0

Cipher: AES-128-CBC

Compression: Yes

Service running: Yes

Reference NetMRI SN: N/A

Reference NetMRI Import: Skipped


Client Sessions:

UnitSerialNo: 1200201202100020

UnitName: oc-170-coll-1

UnitIPAddress: 5.0.0.15

Network: ExampleNet

UnitID: 1

Status: Offline: Last seen 2013-02-21 03:01:01

...

9. Using a Web browser, log in to the Standby Operations Center. Note that the admin password for the Standby Operations Center system will now be set to the password of the Primary Operations Center.

10. In Settings –> Setup –> Collection and Groups, re-enable all data collectors needed for the configuration.


Note: You must re-enable SNMP collection on this page, as it is automatically disabled on a restore.


11. In Settings –> Setup –> Tunnels and Collectors, verify that all collectors are listed.

12. Register the collectors to the Standby Operations Center by executing the following commands on each of the collectors. You use these commands to specify the Standby Operations Center IP address and new admin credentials:

admin-collector111.corp100.com> reset tunclient
admin-collector111.corp100.com> register
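For reference, the register prompts on a collector look similar to the following sketch. The prompts match the sample session shown later in Replacing a Collector; the address entered here assumes the Standby Operations Center public address configured in Step 7:

admin-collector111.corp100.com> register
NOTICE: The inactivity timeout is being disabled temporarily while this command is run.
+++ Configuring Tunnel Registration Settings

Registration Server/IP [e.g., example.com]: 172.23.27.170 <Standby Operations Center address; illustrative>
Registration protocol (http|https) [https]:
Registration username: admin
Registration password: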

13. Verify Operations Center collector registration and communication by entering the following:

example-oc> show tunclient
Client configured: Yes
Server: 172.23.27.182
Proto: tcp
Port: 443
Cipher: AES-128-CBC
Compression: On
Tunnel Server IP: 5.0.0.1
Tunnel Client IP: 5.0.0.10
Server reachable: Yes
Service running: Yes
Latest Service Log Entries:
Apr 10 17:02:51 localhost openvpn[20804]: VERIFY KU OK
Apr 10 17:02:51 localhost openvpn[20804]: Validating certificate extended key usage
Apr 10 17:02:51 localhost openvpn[20804]: ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
Apr 10 17:02:51 localhost openvpn[20804]: VERIFY EKU OK
Apr 10 17:02:51 localhost openvpn[20804]: VERIFY OK: depth=0, /C=US/ST=CA/L=Santa_Clara/O=Infoblox/OU=na_Operations_Center/CN=OC182/name=Tunnel-Server/emailAddress=support@infoblox.com
Apr 10 17:02:51 localhost openvpn[20804]: Data Channel Encrypt: Cipher 'AES-128-CBC' initialized with 128 bit key
Apr 10 17:02:51 localhost openvpn[20804]: Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Apr 10 17:02:51 localhost openvpn[20804]: Data Channel Decrypt: Cipher 'AES-128-CBC' initialized with 128 bit key
Apr 10 17:02:51 localhost openvpn[20804]: Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Apr 10 17:02:51 localhost openvpn[20804]: Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
example-oc>

14. Log back in to the Standby Operations Center UI. In Settings –> Setup –> Tunnels and Collectors, verify that all registered collectors are online. The Operations Center begins receiving data from the collectors as soon as their connections are established. Data processing and analysis catches up over a period roughly equal to the time the collectors were offline.

15. In Settings –> Database Settings –> Scheduled Archive, define the new archiving settings that you will need for the new Operations Center system, including enabling automatic archiving, defining the recurrence pattern, and defining the remote systems that will receive the periodic archives.

Replacing a Collector


Note: Database restoration is not supported on collectors. Collectors contain only a single day of data at any given time; this block of data cannot be restored. After you run the register command noted below, the Operations Center automatically pushes all previous collector settings to the replacement collector, and the collector begins its normal Discovery tasks.


To replace a collector in an Operations Center environment, complete the following:

  1. On the Operations Center, go to Settings –> Setup –> Tunnels and Collectors.
  2. Click the Action icon in the row for the collector you want to replace and choose Collector Replacement.
  3. Change the serial number of the existing collector to that of the replacement collector.
  4. Click OK.
  5. On the replacement collector, open a new Admin Shell session using SSH and complete the configuration commands for a basic setup.
  • Define the management port IP configuration for the replacement collector:
    admin-na240.corp100.com> configure server
  • Install the license for the replacement collector:
    • For a physical appliance, generate a license by running the license generate command. For more information, see license generate command.
    • For a virtual appliance, run admin-na240.corp100.com> license <license filename>.gpg.
  • Define server settings for the replacement collector:
    admin-na240.corp100.com> configure server
  • Register the replacement collector to the Operations Center IP Address, and define the Operations Center Network to which the collector belongs.

admin-na240.corp100.com> register
NOTICE: The inactivity timeout is being disabled temporarily while this command is run.
+++ Configuring Tunnel Registration Settings

Registration Server/IP [e.g., example.com]: controller.corp100.com
Registration protocol (http|https) [https]:
Registration username: admin
Registration password:

After executing the register command, the Operations Center automatically pushes all previous collector settings to the replacement collector.

6. Log in to the Operations Center and verify that the replacement collector status is Connected (Settings –> Setup –> Tunnels and Collectors, check Status column).

7. To view a listing of the OC system and its collectors, you can use SSH to log in to the Operations Center's Admin Shell. The show tunserver command shows each collector's status in its listing.
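For example, from the Operations Center Admin Shell (the prompt shown is illustrative):

example-oc> show tunserver

Each registered collector appears under Client Sessions with its UnitSerialNo, UnitName, and Status, as in the sample output shown in Step 8 of the disaster recovery procedure.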

Checking NetMRI Collectors Operation


Note: This information applies to NetMRI Operations Center installations only.


The Tunnels and Collectors page (Settings icon –> Setup –> Tunnels and Collectors) provides information about all collectors that are defined within a NetMRI Operations Center installation. A network consists of one or more Operations Center collectors. Each collector connects to the Operations Center through an encrypted tunnel.

The Status column in the table indicates a collector's status. For Operations Center collectors, the status you want to see is Connected.

You can change the data that appears in the Tunnels and Collectors table by choosing different columns. Useful columns include the following:

VPN Address

Lists the tunnel endpoint IP address assigned to each collector.

Network Name

The name of the network for the Operations Center.

Connected From

The administrative IP address of the collector: the actual endpoint IP address and TCP port number across which the collector tunnels to its associated VPN endpoint.

Bytes Received and Bytes Sent

Lists the total amount of data received by, and sent by, each collector during its entire active period in the OC.

Collectors also have licensing limits, corresponding to those described in the topic Understanding Platform Limits, Licensing Limits and Effective Limits. The Tunnels and Collectors page lists the following information:

Device Limit

Shows the maximum device license count for the collector–the number of devices the collector is licensed to manage. (This value does not apply to discovered device counts, which can be higher.) The value in this column corresponds to an Effective Device Limit for the collector.

Licensed Devices

The number of consumed device licenses for the listed collector. The difference between this value and the Device Limit represents the number of device licenses remaining available to the collector appliance.

To view a collector's event logs, do the following:

  1. Click the Actions icon and choose Log Messages.
  2. Click Yes to continue.
  3. When the log is available, it appears in a window.

To obtain and send technical data from any collector for troubleshooting purposes, do the following:

  1. Click the Actions icon and choose Send Support Bundle.
  2. Choose a Transfer Mode: Download to Client Workstation or Send to Infoblox Support Site.
  3. Click, CTRL+click or SHIFT+click to select one or more Data Categories. Sending technical data requires at least one category selection.
  4. Click Start and confirm the operation.

Checking and Viewing Operations Center Networks

The Networks page (Settings icon –> General Settings –> Networks) lists all defined Operations Center networks. In an enterprise-style deployment, this is usually just one network. In a managed service provider deployment, there is typically one network per customer.
In this context, a network is a single organizational domain in which any given IP address is unique. If devices in different Networks use the same IP address, the Operations Center keeps them separate and prevents any conflicts between them. Defining a Network identifier and adding one or more collectors to it ensures that the managed networks under each Network ID remain administratively isolated from one another.
Networks can be added automatically when a collector registers to a network through the configure tunclient command, or defined manually by an administrator through this page. To manually add a network, do the following:

  1. Click New in the lower right corner of the Networks page.
  2. In the Network dialog, enter a Name and Description, then click Save.
    To edit a network, click the Actions icon and choose Edit.

See the Operations Center Disaster Recovery Procedure for more information on replacing an Operations Center system, or a collector, with a new appliance.