Installing vNIOS Virtual Appliance in the KVM Environment

To deploy the virtual appliance, complete the following steps:

  1. Set up the following networks for your KVM environment. You may need to manually modify certain configuration files to suit your environment; refer to the documentation for your KVM hypervisor.
    • MGMT_network
    • LAN1_network
    • LAN2_network
  2. Deploy the virtual appliance using one of the following methods:

Deploying the Virtual Appliance from the CLI

  1. Open the KVM console.
  2. Use the following command to deploy a virtual appliance:
    virt-install --name <name_of_the_vm> --ram=<memory_in_MBs> --vcpus=<no._of_cpu_cores> --disk path=<path_to_NIOS_qcow2_image>,size=<primary_disk_size_in_GBs> --boot hd --import --network network:<mgmt_network> --network network:<lan1_network> --network network:<ha_network> --network network:<lan2_network> --os-type=<operating_system_type> --os-variant=<operating_system_variant>

    Example:
    virt-install --name NIOS-IB-v926 --ram=32768 --vcpus=4 --disk path=/var/lib/libvirt/images/nios-9.0.5-fixed-500G.qcow2,size=500 --boot hd --import --network network:mgmt_eno2np0_mellanox --network network:lan_eno3np1_mellanox --network network:lan_eno3np1_mellanox --network network:lan2_eno3np2_mellanox --os-type=linux --os-variant=rhel9.3
  3. (Optional) If you are deploying a reporting appliance, use the following command:
    virt-install --name <name_of_the_vm> --ram=<memory_in_MBs> --vcpus=<no._of_cpu_cores> --disk path=<path_to_NIOS_qcow2_image>,size=<primary_disk_size_in_GBs> --disk size=<secondary_disk_size_in_GBs> --boot hd --import --network network:<mgmt_network> --network network:<lan1_network> --network network:<ha_network> --network network:<lan2_network> --os-type=<operating_system_type> --os-variant=<operating_system_variant>

    Example:
    virt-install --name NIOS-IB-v805 --ram=32768 --vcpus=4 --disk path=/var/lib/libvirt/images/nios-9.0.5-fixed-500G.qcow2,size=500 --disk size=250 --boot hd --import --network network:mgmt_eno2np0_mellanox --network network:lan_eno3np1_mellanox --network network:lan_eno3np1_mellanox --network network:lan2_eno3np2_mellanox --os-type=linux --os-variant=rhel9.3
  4. Configure the vNIOS instance as described in the Configuring the vNIOS Instance in the KVM Environment section.
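The virt-install invocation above can be assembled from shell variables and reviewed before it is run. This is a minimal sketch: the VM name, image path, and network names are placeholders taken from the examples above and must be changed to match your environment.

```shell
#!/bin/sh
# Placeholder values -- substitute the ones for your environment.
VM_NAME="NIOS-IB-v926"
RAM_MB=32768
VCPUS=4
DISK="/var/lib/libvirt/images/nios-9.0.5-fixed-500G.qcow2"
DISK_SIZE_GB=500

# Assemble the deployment command so it can be reviewed before running.
CMD="virt-install --name ${VM_NAME} --ram=${RAM_MB} --vcpus=${VCPUS} \
--disk path=${DISK},size=${DISK_SIZE_GB} --boot hd --import \
--network network:mgmt_net --network network:lan1_net \
--network network:ha_net --network network:lan2_net \
--os-type=linux --os-variant=rhel9.3"

echo "$CMD"
# Run it only after review:
# eval "$CMD"
```

Printing the command first makes it easy to spot a wrong image path or network name before libvirt creates the guest.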

Deploying the Virtual Appliance Using XML Files

  1. Create an XML file required to deploy the vNIOS instance in KVM. For details, see the Defining XML files for vNIOS Appliances section. You can download sample XML files from the Downloads page for specific vNIOS versions, available on the Infoblox Support web site. For supported appliance models, see vNIOS for KVM Virtual Appliance Models.
  2. Open the KVM console and run the following commands to define and start the vNIOS virtual appliance. You need to include the paths for the XML files if you save them in a different directory.
    virsh net-define <Name of the MGMT XML file>
    virsh net-define <Name of the LAN1 XML file>
    virsh define <Name of the vnios XML file>
    virsh start <VM Name>
  3. (Optional) If you are deploying a vNIOS reporting server, create an XML file as described in the Defining an XML File for Reporting Servers section. Then open the KVM console and run the following commands to define and start the vNIOS reporting instance:
    chown -R qemu.qemu <directory containing the uploaded qcow2 files> (For example, if you store the qcow2 files in /storage/vm/reporting800, enter chown -R qemu.qemu /storage/vm/reporting800.)
    virsh define <XML file> (You may need to include the path for the XML file if you save it in a different directory.)
    virsh start <VM Name>
  4. Configure the vNIOS instance as described in the Configuring the vNIOS Instance in the KVM Environment section.

Defining XML Files for vNIOS Appliances

Instead of deploying a vNIOS virtual appliance through the GUI, you can create the following XML files for the appliance and then execute commands to define and start the appliance in KVM.

  • XML file for defining the MGMT interface, as shown in the Sample XML File for MGMT section.
  • XML file for defining the LAN1 interface, as shown in the Sample XML File for LAN1 section.
  • XML file for defining vNIOS information such as the name of the VM, memory size, number of CPUs, location of the qcow2 files, file format, and others, as shown in the Sample XML File for a vNIOS Appliance section.

Sample XML File for MGMT

<network>
    <name>MGMT</name>
    <forward mode='bridge'/>
    <bridge name='virbr0' />
</network>

Sample XML File for LAN1

<network>
    <name>LAN1</name>
    <forward mode='bridge'/>
    <bridge name='virbr0' />
</network>
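Once saved (the filenames MGMT.xml and LAN1.xml below are illustrative), the two network definitions above are loaded with virsh net-define and activated with virsh net-start. In this sketch a small run() wrapper echoes each command so the sequence can be reviewed first; on a real host, change the wrapper body to "$@" to execute the commands.

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
# On a real KVM host, change the body to:  "$@"
run() { echo "+ $*"; }

run virsh net-define MGMT.xml   # load the MGMT bridge network
run virsh net-define LAN1.xml   # load the LAN1 bridge network
run virsh net-start MGMT        # activate by the <name> in each file
run virsh net-start LAN1
run virsh net-list --all        # both networks should now be listed
```

Note that net-start takes the network name from the XML's <name> element, not the filename.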

Sample XML File for a vNIOS Appliance

Following is a sample XML file for defining a vNIOS virtual appliance in KVM. Note that the VM name, memory, vCPU, and location of the qcow2 file may vary. You can change these parameters according to your deployment.
<domain type='kvm'>
  <name>Infoblox-TE-825</name>
  <memory unit='KiB'>2097152</memory>
  <vcpu placement='static'>2</vcpu>
  <os>
   <type arch='x86_64' machine='pc'>hvm</type>
   <boot dev='hd'/>
  </os>
<features>
  <acpi/>
  <apic/>
  <pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
   <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
     <driver name='qemu' type='qcow2' cache='none'/>
     <source file='/var/lib/libvirt/images/nios-7.3.2-316478-2016-02-17-19-34-52-55G-820-disk1.qcow2'/>
     <target dev='vda' bus='virtio'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
   <interface type='network'>
    <rom bar='off' />
    <source network='MGMT'/>
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
   </interface>
   <interface type='network'>
    <source network='LAN1'/>
    <rom bar='off' />
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
   </interface>
   <serial type='pty'>
    <target port='0'/>
   </serial>
   <console type='pty'>
    <target type='serial' port='0'/>
   </console>
   <memballoon model='virtio'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
   </memballoon>
  </devices>
</domain>
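Note that the <memory unit='KiB'> element above takes KiB, so 2097152 corresponds to 2 GiB. A quick shell helper makes the conversion explicit when you adjust the value for your deployment:

```shell
#!/bin/sh
# Convert GiB to the KiB units expected by libvirt's <memory unit='KiB'>.
gib_to_kib() {
    echo $(( $1 * 1024 * 1024 ))
}

gib_to_kib 2    # 2097152, the value in the sample vNIOS XML above
gib_to_kib 8    # 8388608, used by the reporting-server sample below
```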

Defining an XML File for Reporting Servers

If you are deploying a reporting server in KVM, you must define an XML file that includes information such as the name of the VM, the memory size, the number of CPUs, the location of the qcow2 files, and the file format before you can spin up the vNIOS instance.
Following is a sample XML file for defining a reporting server in KVM. The VM name, memory, vCPU count, and qcow2 file locations may vary; change these parameters according to your deployment. Depending on the KVM tool you are using to deploy the reporting server, use this sample as a reference to create your own XML file.
<domain type='kvm'>
 <name>reporting805</name>
 <memory unit='KiB'>8388608</memory>
 <vcpu placement='static'>2</vcpu>
 <os>
   <type arch='x86_64' machine='pc'>hvm</type>
   <boot dev='hd'/>
  </os>
  <features>
   <acpi/>
   <apic/>
   <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
   <emulator>/usr/bin/qemu-system-x86_64</emulator>
   <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/storage/vm/reporting800/nios-7.2.6-316673-2016-02-18-14-00-10-300G-800-disk1.qcow2'/>
    <target dev='vda' bus='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
   </disk>
   <disk type='file' device='disk'>
   <driver name='qemu' type='qcow2' cache='none'/>
   <source file='/storage/vm/reporting800/nios-7.2.6-316673-2016-02-18-14-00-10-300G-800-disk2.qcow2'/>
   <target dev='vdb' bus='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
  </disk>
  <interface type='network'>
   <rom bar='off' />
   <source network='MGMT_network'/>
   <model type='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </interface>
  <interface type='network'>
   <source network='LAN1_network'/>
   <rom bar='off' />
   <model type='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </interface>
  <interface type='network'>
   <source network='HA_network'/>
   <rom bar='off' />
   <model type='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
  </interface>
  <interface type='network'>
   <source network='LAN2_network'/>
   <rom bar='off' />
   <model type='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
  </interface>
  <serial type='pty'>
   <target port='0'/>
  </serial>
  <console type='pty'>
   <target type='serial' port='0'/>
  </console>
  <memballoon model='virtio'>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </memballoon>
</devices>
</domain>
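The reporting sample above attaches a second data disk (vdb). If that qcow2 file does not exist yet, it can be created on the host with qemu-img, which ships with QEMU. The path and size below are illustrative; the command is assembled and printed for review rather than executed.

```shell
#!/bin/sh
# Illustrative path and size for the secondary reporting disk.
DISK2="/storage/vm/reporting800/reporting-disk2.qcow2"
SIZE="250G"

# Assemble the command so it can be inspected before running it.
CREATE_CMD="qemu-img create -f qcow2 ${DISK2} ${SIZE}"
echo "$CREATE_CMD"
# On the KVM host, run it (and re-check ownership afterwards):
# $CREATE_CMD && chown -R qemu.qemu /storage/vm/reporting800
```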

Configuring the vNIOS Instance in the KVM Environment

To configure the vNIOS instance:

  1. From the KVM administration tool, such as Virtual Machine Manager, select the vNIOS instance.
  2. Open the KVM console.
  3. When the Infoblox login prompt appears, log in with the default user name and password.
    login: admin
    password: infoblox
    The Infoblox prompt appears: Infoblox >
  4. You must have valid licenses before you can configure the vNIOS appliance. To obtain permanent licenses, first use the Infoblox > show version command to obtain the serial number of the vNIOS appliance, and then visit the Infoblox Support web site at https://support.infoblox.com. Log in with the user ID and password you receive when you register your product online at http://www.infoblox.com/support/customer/evaluation-and-registration.
    If the vNIOS virtual appliance does not have the Infoblox licenses required to run NIOS services and to join a Grid, you can use the set temp_license command to generate and install a temporary 60-day license.
  5. From the list of licenses, select the Grid, NIOS, and other relevant licenses for your vNIOS virtual appliance. For a vNIOS reporting appliance, you must also select the Reporting license.
    Note that you must have both the Grid and NIOS licenses for the vNIOS virtual appliance to join a Grid.

  6. Use the CLI command set network to configure the network settings. For an HA Grid Master, ensure that you specify these settings for both the active and passive nodes.
    Infoblox > set network
    NOTICE: All HA configurations are performed from the GUI. This interface is used only to
    configure a standalone node or to join a Grid.
    Enter IP address: 10.1.1.22
    Enter netmask [Default: 255.255.255.0]: 255.255.255.0
    Enter gateway address [Default: 10.1.1.1]: 10.1.1.1
    Become Grid member? (y or n): n

    After you confirm your network settings, the Infoblox Grid Manager automatically restarts. You can then proceed to setting up a Grid, as described in Setting Up a Grid.

Managing vNIOS Instances

You can use the following virsh commands to manage vNIOS for KVM instances from the host command line, as an alternative to a GUI tool such as virt-manager:

  • virsh console <VM name>: Access the VM console.
  • virsh start <VM name>: Start a VM.
  • virsh list and virsh list --all: List the VMs defined on the host. The --all option also includes VMs that are currently shut off.
  • virsh undefine <VM name>: Remove the VM definition from the KVM environment. Use this command only after the VM has been shut down.
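Per the note above, a VM must be shut down before virsh undefine removes its definition. A cautious teardown sequence (VM name illustrative) could therefore look like this; the run() wrapper prints each command for review, and replacing its body with "$@" would execute them on a real host.

```shell
#!/bin/sh
# Dry-run wrapper: prints commands instead of executing them.
run() { echo "+ $*"; }

VM="NIOS-IB-v926"               # illustrative VM name

run virsh shutdown "$VM"        # ask the guest OS to power off cleanly
run virsh list --all            # confirm the VM is listed as shut off
run virsh undefine "$VM"        # only then remove the definition
```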