The NetMRI Operations Center provides a superset of NetMRI discovery and device management capabilities, scaling the distributed network management platform to larger networks and larger deployments. You dedicate satellite NetMRI appliances, called collectors, to the tasks of device discovery and device management, and use a central Operations Center appliance to aggregate and view all data collected from the collector appliances, to view the operating state of, and manage configurations for, all discovered network infrastructure devices and discovered IP networks, including routers, firewalls, load balancers, Ethernet L2/L3 switches, end hosts, end-host networks, and much more. NetMRI Operations Center makes it easier to manage, control, and secure the enterprise network.
Installation of Operations Center Controller appliances changed in Release 6.9. For initial appliance setup, run the following sequence of Admin Shell commands on the Operations Center appliance from the NetMRI command line:
...
configure server
license
register
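The sketch below shows the intended order of this sequence in an Admin Shell session. The comments map each command to the corresponding step in the procedure list that follows; they are descriptive assumptions for orientation, not verbatim NetMRI output:

    # set this appliance to operate as the Operations Center Controller (interactive dialog)
    configure server
    # install the Operations Center license on the Controller
    license
    # register Collector appliances with the Controller
    register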
To begin installing and deploying your NetMRI Operations Center appliances, see the following procedures:
- 1st Step: Configuring Basic Settings for the Operations Center Controller
- 2nd Step: Installing the Operations Center License on the Controller
- 3rd Step: Running Configure Tunserver on the Controller
- 4th Step: Installation for Operations Center Collectors
- 5th Step: Installing the Operations Center Collector License(s)
- 6th Step: Registering NetMRI Collectors
- Configuring Network Interfaces for Operations Center
You can use the NetMRI Operations Center on physical or virtual NetMRI appliances.
Infoblox offers NetMRI appliances in several models:
...
A virtual machine version of NetMRI installed on a VMware ESX server provides greater flexibility for network monitoring and analysis. VMs are often used as collectors in a virtual infrastructure platform. You can use NetMRI VMs as collectors and as an Operations Center deployment; in either role, VM performance must be sufficient to handle the required number of devices. For more information on how to ensure this, see Benchmarking for the Operations Center.
...
Note: In the Operations Center context, when an appliance acts as the Operations Center it uses only a single port, which is the MGMT port for the NT-1400, NT-2200 or NT-4000. Collectors may use multiple interfaces for network discovery and management, including 802.1q virtual scan interfaces. Typically, both the LAN1 and LAN2 ports are used in this manner on each Collector appliance. For more information, see Configuring Network Interfaces for Operations Center.
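NetMRI defines virtual scan interfaces through its own settings rather than at the operating-system level, but for orientation, the generic Linux commands below illustrate what an 802.1q virtual (VLAN-tagged) interface is; the interface name and addressing are assumptions for the example only:

    # create a VLAN 100 subinterface on eth1 and bring it up (iproute2)
    ip link add link eth1 name eth1.100 type vlan id 100
    ip addr add 10.1.100.2/24 dev eth1.100
    ip link set dev eth1.100 up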
...
In this document, all hardware models are treated generically and referred to as a "NetMRI appliance." Any currently sold appliance model can operate as a NetMRI Operations Center central node.
Infoblox NetMRI appliances should always be supported by an uninterruptible power supply (UPS) to avoid data corruption in the event of a power outage.
...
Benchmarking for the Operations Center
A NetMRI VM, whether a collector or an Operations Center, is highly demanding of the host's I/O processes. Before attempting to run a NetMRI VM as a standalone appliance, a collector, or an OC node, verify the prospective deployment with the Infoblox Benchmark Tool and the NetMRI VM Benchmark Validation Spreadsheet, both of which are available through Infoblox Support.
Raw system performance also bears a direct relationship to the licensing provided for the platform: when a VM host is benchmark tested, the results also determine the number of device licenses that the virtual machine can support. All proposed VM-based OC systems must be benchmark tested by Infoblox field and support personnel before a sale is made. For more information, contact your Infoblox Sales representative or Infoblox Support.
A NetMRI VM is hardware intensive and requires high performance. If you run or plan to run virtual machines in an Operations Center environment, make sure that your virtual machine hosts can support the required number of network devices. To do so, download the Infoblox Benchmark tool from the Infoblox Support site at https://support.infoblox.com. For more information, also download the Benchmarking Guide for NetMRI 7.4.2 from the same location.
For related VM best practices, see Virtual Machine Best Practices.
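If you want a rough, preliminary read on a host's storage performance before the formal benchmark, a generic I/O load generator such as fio can help. The invocation below is a sketch only: it assumes fio is installed on a test VM on the same datastore, and the 4 KiB random-write profile is an assumed stand-in for NetMRI's I/O pattern, not a substitute for the Infoblox Benchmark Tool:

    # 60-second 4 KiB random-write test against a 1 GiB file
    fio --name=netmri-io-check --rw=randwrite --bs=4k --size=1g \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 \
        --time_based --group_reporting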
...
Follow the points below to ensure efficient VM-based Collector operation:
- Disable or adjust VM performance monitoring systems for the product.
- Because Operations Center VMs tend to be extremely I/O intensive, with continuous 100% CPU utilization, vSphere performance monitoring should be reduced or disabled.
- Avoid placing multiple NetMRI instances on the same host.
...
4. Enable Intel VT options in the BIOS of the host system.
The Operations Center validation spreadsheet provides additional VM tuning best practices.
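As a quick sanity check related to step 4, the generic Linux command below reports whether the CPU virtualization extensions (Intel VT-x or AMD-V) are visible to the operating system; it is an assumption about your environment, not a NetMRI or vSphere utility:

    # non-empty output means vmx (Intel VT-x) or svm (AMD-V) is exposed
    grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u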
...
Follow the points below to ensure effective discovery of the network:
- For simplicity, perform discovery in phases.
- Begin with a small set of devices and ensure your discovery ranges, defined credentials, and seed routers are all correct.
- Ensure that firewall gateways for any networks to be discovered permit SNMP traffic by opening TCP/UDP ports 161 and 162 (see the sketch after this list).
- Ensure that your discovery ranges, static IPs and seed routers are associated with their correct network views. For initial discovery, your ranges and other discovery settings can simply be associated with the Network 1 network view, which is automatically created during appliance setup and is bound to the SCAN1 port on your Collector appliance. For more information, see Configuring Network Views.
- Avoid defining large Discovery Ranges such as /8 (16,777,216 addresses) or /16 (65,536 addresses), and avoid defining more than 1000 ranges of any size. That said, one large discovery range with seed routers is a more effective discovery technique than hundreds of small ranges. (You can change device group rankings in Settings icon –> Setup –> Discovery Settings.) For more information, see Configuring Discovery Ranges.
- For discovery using ping sweeps, avoid attempting ping sweeps of greater than /22 subnets. Ping sweeps use protocols other than ICMP and can incur delays in refreshing previously discovered devices. For information on Smart Subnet ping sweep, see Defining Group Data Collection Settings.
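As referenced above, the iptables rules below sketch one way a Linux-based firewall gateway could pass SNMP discovery traffic on ports 161 and 162. The use of iptables and the FORWARD chain are assumptions for illustration; on a commercial firewall you would express the equivalent policy in that vendor's ACL syntax:

    # allow SNMP queries (161) and traps (162) through the forwarding path
    iptables -A FORWARD -p udp --dport 161 -j ACCEPT
    iptables -A FORWARD -p udp --dport 162 -j ACCEPT
    iptables -A FORWARD -p tcp --dport 161 -j ACCEPT
    iptables -A FORWARD -p tcp --dport 162 -j ACCEPT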
...