
The NetMRI Operations Center provides a superset of NetMRI discovery and device management, scaling the distributed network management platform to larger networks and larger deployments. You dedicate satellite NetMRI appliances, called collectors, to the tasks of device discovery and device management, and you use a central Operations Center appliance to aggregate and view all data collected from the collector appliances, to view operating state, and to manage configurations for all discovered network infrastructure devices and IP networks, including routers, firewalls, load balancers, Ethernet L2/L3 switches, end hosts and end-host networks, and more. NetMRI Operations Center makes it easier to manage, control and secure the enterprise network.

Installation of Operations Center Controller appliances changes in Release 6.9. For initial appliance setup, you run the following sequence of Admin Shell commands on the Operations Center appliance from the NetMRI command line:

...

configure server
license
register

To install and deploy your NetMRI Operations Center appliances, see the following procedures:

Operations Center Appliances and Requirements

You can use the NetMRI Operations Center on physical or virtual appliances.

Infoblox offers NetMRI appliances in several models:

...

A virtual machine version of NetMRI, installed on a VMware ESX server, provides greater flexibility for network monitoring and analysis. VMs are often used as collectors in an Operations Center deployment; a NetMRI VM can also operate as the Operations Center itself. When deployed on a virtual platform, NetMRI demands substantial performance resources. For more information, see Special Considerations for Virtual Appliances.

...

Note: In the Operations Center context, when an appliance acts as the Operations Center it uses only a single port, which is the MGMT port for the NT-1400, NT-2200 or NT-4000. Collectors may use multiple interfaces for network discovery and management, including 802.1q virtual scan interfaces. Typically, both the LAN1 and LAN2 ports are used in this manner on each Collector appliance. For more information, see Configuring Network Interfaces for Operations Center.

...

In this document, all hardware models are treated generically and referred to as a "NetMRI appliance." Any currently sold appliance model can operate as a NetMRI Operations Center central node.

Infoblox NetMRI appliances should always be protected by an uninterruptible power supply (UPS) to avoid data corruption in the event of a power outage.

...

Access Using the Command Line SSH Client

Initial connections to the NetMRI Administrative Shell are made with an SSH command-line client to the IP address of the MGMT port, with a username supplied as a command-line parameter, as shown in this example:

ssh -l admin <system>

where <system> is the hostname or IP address assigned to NetMRI. At that point, you are prompted for the admin account password, which is the same as that used for the browser interface.
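With a standard OpenSSH client, the equivalent user@host form also works (the address here is illustrative):

ssh admin@192.0.2.10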

Operational and Deployment Best Practices

When you set up and deploy an Operations Center and its associated collectors, follow these best-practice guidelines to ensure a smooth and effective rollout.

  1. Keep device management levels below the licensed device limits on each collector appliance.
    • Though you have greater flexibility for network connectivity through using network views, multiple scan interfaces and virtual scan interfaces, these features do not influence the licensing limits and capacities of your appliances.
    • License limits should be defined to allow for organic and anticipated growth of the network. Consult with your Infoblox sales representative for a detailed assessment of your licensing needs.
    • License limits are enforced on each collector appliance in an OC deployment. Your OC design should avoid having excessive numbers of licenses on collectors, which can overwhelm the Operations Center and prevent timely operation.
    • New devices can 'bump' older, previously discovered devices out of the license limit.
    • Devices in higher-ranked device groups will be prioritized for licensing. (You can change device group rankings in Settings icon –> Setup –> Collection and Groups –> Groups tab.)
    • Avoid using device licenses on devices in end-user network segments.
  2. During setup of a new deployment, use the default network view when you define your first discovery ranges to initially discover the network.
    • Initial setup for a new Operations Center deployment automatically creates a default network view, named Network 1, as part of the procedure. This network view is automatically assigned to the Collector appliance's LAN1 port before you perform discovery of the network.
    • The Network 1 view represents the global routed network: the network that NetMRI discovers without relying on virtual networks to route traffic.
    • When you create your discovery ranges, static IP addresses and Seed Routers (in Settings icon –> Setup –> Discovery Settings –> Ranges/Static IPs/Seed Routers), each one provides a Network View drop-down menu. You select one network view for each discovery setting; a single network view, however, can serve multiple discovery ranges and can use all three discovery object types (see the example after this list item).
      You define network views (under Settings icon –> Setup –> Network Views) and can assign other networks to those views.
    • For VRF discovery, you do not need to define discovery ranges in the initial rollout. NetMRI will discover VRF-aware devices in its first discovery of the global enterprise network. The system then displays a System Health alert notifying you that unassigned VRFs have been discovered.
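As a hypothetical illustration of the guideline above (the addresses are invented for this example): you might assign an initial discovery range of 10.1.0.0/16 and a seed router of 10.1.0.1 to the default Network 1 view to discover the global routed network, then later define a second view named DMZ and assign the range 203.0.113.0/24 to it for a separately routed segment.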

3. Avoid using too many device groups. Target using 50 or fewer Extended device groups. Platform Limits also influence the number of device groups allowable in your system.

    • Device groups govern summary data aggregations and other device processing within each group. Device groups are defined in two varieties: Basic device groups, which offer minimal functionality and simply provide a basic categorization for discovered devices such as end hosts; and Extended device groups, which allow the enabling and disabling of specialized group features based upon the type of devices in the group. For more information, see the sections beginning with Device Groups and Switch Port Management and Creating Device Groups.
    • You can define the required device groups for your deployment; delete those that you do not need. Also, avoid frequent group definition changes, additions and deletions.
    • Keep the Unknown and Name Only device groups; do not delete these device groups.
    • Also see Understanding Platform Limits for your Deployment.

4. Ensure reliable network connections between collectors and the Operations Center node.

    • Avoid disruption of network connections between the Operations Center and its associated collectors.
    • Also ensure that DNS resolution works correctly between all Collector appliances and the Operations Center Controller appliance, so that every Collector can consistently synchronize with the Controller. Registering successfully with the Controller does not by itself guarantee this, because registration uses only the Controller IP address; DNS resolution can still fail if, for example, a Collector is placed in the DMZ of an enterprise network. Use the show tunclient command on each Collector to verify that the Collector can resolve the Controller. If you see RESOLVE: Cannot resolve host address messages in the show tunclient output, add an entry for the Operations Center Controller to the Collector's /etc/hosts file, for example:
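The entry uses the standard hosts-file format of an IP address followed by one or more names; the address and hostname here are illustrative:

192.0.2.50    oc-controller.example.com    oc-controller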

5. Use recommended methods to improve reporting performance for your Operations Center.

    • Filter down to the most important data, such as individual device groups, specific time windows and other Report settings.
    • Schedule large, complex reports to run during off-hours.
    • Avoid unnecessarily large reports. Example: Save out monthly reports instead of running multiple-month reports.
    • For reports that offer a details option, disable the details when they are not germane to the report.
    • If you run reports simultaneously, adjust the Concurrently Running Reports setting on the Settings icon –> General Settings –> Advanced Settings page.

6. Manage Syslog Traffic.

Special Considerations for Virtual Appliances

NetMRI products are high-performance appliances that demand substantial resources when deployed on a virtual platform. A NetMRI VM, whether a collector or an Operations Center, is highly demanding of the host's I/O subsystem. Before attempting to run a NetMRI VM as a standalone appliance, a collector or an OC node, verify the prospective deployment with the Infoblox Benchmark Tool and the NetMRI VM Benchmark Validation Spreadsheet, both of which are available through Infoblox Support. For related best-practice suggestions, also see the subsection Virtual Machine Best Practices.

Raw system performance also bears a direct relationship to the licensing provided for the platform. When a VM host is benchmark-tested, the results also determine the number of device licenses that the virtual machine can support. All proposed VM-based OC systems must be benchmark-tested by Infoblox field and support personnel before a sale is made. For more information, contact your Infoblox Sales representative or Infoblox Support.

Benchmarking for the Operations Center

A critical phase of Operations Center planning is the NetMRI Platform Benchmark. If you plan to run one or more virtual machines as collectors in an Operations Center environment, you must work with Infoblox Support to verify that your virtual machine hosts and OC system can support the requested scale of network device licenses. For more information, refer to the VM Benchmarking Guide for Infoblox NetMRI Operations Center, available for download from the Infoblox Support site at https://support.infoblox.com.

Virtual Machine Best Practices

Follow the points below to ensure efficient VM-based Collector operation:

...

    • Operations Center and NetMRI virtual machines are sensitive to RAID controller quality. Software RAID or a RAID controller integrated on the motherboard performs worse than using no RAID at all.
    • Infoblox recommends an enterprise-grade controller with a battery-backed write cache.
    • Infoblox recommends use of RAID-10.

4. Enable Intel VT options in the BIOS of the host system.

The Operations Center validation spreadsheet provides additional VM tuning best practices.
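As a quick sanity check on a Linux-based host, you can confirm that the CPU advertises hardware-virtualization extensions (Intel VT-x appears as the vmx flag, AMD-V as svm); a nonzero count means the extensions are present, although the corresponding BIOS option must still be enabled:

grep -c -E 'vmx|svm' /proc/cpuinfo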

...


Discovery Best Practices

Follow the points below to ensure effective discovery of the network:

  1. For simplicity, perform discovery in phases.
  • Begin with a small set of devices and ensure your discovery ranges, defined credentials, and seed routers are all correct.
  • Ensure that firewall gateways for any networks to be discovered allow SNMP discovery traffic through TCP/UDP ports 161 and 162.
  • Ensure that your discovery ranges, static IPs and seed routers are associated with their correct network views. For initial discovery, your ranges and other discovery settings can simply be associated with the Network 1 network view, which is automatically created during appliance setup and is bound to the SCAN1 port on your Collector appliance. For more information, see Configuring Network Views.
  • Avoid defining large discovery ranges such as /8 or /16, and avoid defining more than 1000 ranges of any size. However, one larger discovery range combined with seed routers is a more effective discovery technique than hundreds of small ranges. (You define discovery ranges in Settings icon –> Setup –> Discovery Settings.) For more information, see Configuring Discovery Ranges.
  • For discovery using ping sweeps, avoid sweeping subnets larger than /22 (see the size comparison below). Ping sweeps use protocols other than ICMP and can delay the refresh of previously discovered devices. For information on the Smart Subnet ping sweep, see Defining Group Data Collection Settings.
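For a sense of scale, each prefix length covers 2^(32 − prefix) addresses:

/8  = 16,777,216 addresses
/16 = 65,536 addresses
/22 = 1,024 addresses

so a single /16 range spans 64 times the address space of the largest recommended /22 ping sweep.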

...