Known Limitations

vNIOS for KVM supports most of the features of the Infoblox NIOS appliances, with the following limitations:

  • vNIOS for KVM does not support the following features:
    • Configuration of port settings for MGMT, LAN, LAN2, and HA ports
    • The bloxTools environment
  • If you are using a NIOS version earlier than 8.6.x, both nodes in an HA pair must be NIOS virtual instances; you cannot configure a physical NIOS appliance and a NIOS virtual instance together in an HA pair.

  • vNIOS appliances run on virtual hardware. They do not have sensors to monitor the physical CPU temperature, fan speed, and system temperature.
  • Changing the vNIOS appliance settings through KVM may violate the terms of the vNIOS licensing and support models, and the appliance may not join the Grid or function properly.

The following known issues are specific to vNIOS for KVM deployed in the KVM-only environment:

  • A vNIOS instance may fail to start if it is deployed in a KVM-only environment with Linux bridged networking enabled. You may need to modify certain configuration files to fit your environment.
  • If you get the error message "error: Unable to read from monitor: Connection reset by peer" when starting a vNIOS instance on the KVM hypervisor, check the memory of the hypervisor, as shown in the example below; an instance may fail to start because the hypervisor does not have enough memory.
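    The following is a minimal memory check, assuming shell access to the KVM host. The free and virsh commands are standard Linux and libvirt tools, and the domain name vnios-gm is only a placeholder:
      # Show free and used memory on the KVM host.
      free -h
      # Show the total memory and CPUs that libvirt reports for the host.
      virsh nodeinfo
      # Compare against the memory allocated to the vNIOS guest
      # (replace vnios-gm with the name of your domain).
      virsh dominfo vnios-gm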

The following known issues are specific to vNIOS for KVM deployed in the OpenStack environment:

  • If the vNIOS instance fails to launch, the ports created for the instance are not deleted automatically. You must delete them manually; see the example commands at the end of this section.
  • When you start an FTP connection on OpenStack with the Listen on port field set to 2021, you must manually add a security group rule that allows port 2021 for the connection to succeed (see the example rules at the end of this section).
  • Connection to the FTP service might fail when the virtual appliance enters passive mode. To avoid this, do one of the following:
    • Use active mode instead of passive mode.
    • Modify the vnios-sec-group security group to open ports 1023 and above (see the example rules at the end of this section).
    • Use the FTP client inside the internal network.
  • vNIOS instances deployed in the OpenStack environment do not support SafeNet HSM groups.
  • IPv6 support in the OpenStack environment is limited to what the Juno release provides and has not been verified by Infoblox.
  • You cannot configure the reporting virtual appliance as an HA pair, a Grid Master, or a Grid Master Candidate. You can use it only as a dedicated reporting server in the Grid.
  • A power failure or an unexpected reboot of the VM host may corrupt the VM guest unless writes are correctly persisted. To ensure that writes are persisted, you must have an NVRAM-based disk controller on the host system and turn off disk caching; that is, you must set the disk_cachemodes parameter to writethrough. The method of setting the disk_cachemodes parameter varies for different hypervisors.
    The following example pertains to vNIOS for KVM deployed in OpenStack 16.2:
    • Log in to the system as a root user.
    • Edit the nova.conf and nova-cpu.conf files, for example by running the vim /etc/nova/nova.conf and vim /etc/nova/nova-cpu.conf commands.
    • Go to the [libvirt] section in both files.
    • On a new line, add disk_cachemodes="writethrough" (see the excerpt after these steps).
    • Restart the nova services.
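    A minimal excerpt of the [libvirt] section after this change, assuming the rest of your files keep their existing settings, might look like this:
      [libvirt]
      # Report guest writes as complete only after they reach the storage device.
      disk_cachemodes="writethrough"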
    The following example pertains to vNIOS for KVM deployed on the KVM hypervisor through virt-manager:
    • In KVM virt-manager, select writethrough from the disk cache mode drop-down list when deploying the instance.
    • Verify the setting by editing the guest XML file (for example, with virsh edit) and, if necessary, updating the cache attribute in the driver element to the required cache mode. For example, to set the cache attribute to writethrough:
      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2' cache='writethrough'/>
        <source file='/home/vms/mygm.qcow2' index='1'/>
        <backingStore/>
        <target dev='vda' bus='virtio'/>
        <alias name='virtio-disk0'/>
        <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </disk>
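
The following is a minimal cleanup sketch for ports left over from a failed vNIOS launch. It uses the standard OpenStack CLI; the port name shown is only a placeholder for the ports created for your instance:
    # List ports and identify the ones created for the failed vNIOS instance.
    openstack port list
    # Delete each leftover port by name or ID (the name below is a placeholder).
    openstack port delete vnios-lan1-port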
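
The following sketch shows how the FTP-related security group rules described above might be added with the standard OpenStack CLI. The vnios-sec-group name is the security group mentioned in this section, and the port ranges match the workarounds above:
    # Allow the custom FTP listening port (2021) to reach the vNIOS instance.
    openstack security group rule create --ingress --protocol tcp --dst-port 2021 vnios-sec-group
    # Allow ports 1023 and above for passive-mode FTP data connections.
    openstack security group rule create --ingress --protocol tcp --dst-port 1023:65535 vnios-sec-group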