vNIOS for KVM supports most of the features of the Infoblox NIOS appliances, with the following limitations:
- vNIOS for KVM does not support the following features:
- Configuration of port settings for MGMT, LAN, LAN2, and HA ports
- The bloxTools environment
- If you are using NIOS versions earlier than 8.6.x and you configure an HA pair, both nodes in the HA pair must be vNIOS virtual instances. You cannot configure a physical NIOS appliance and a vNIOS virtual instance in an HA pair in versions earlier than 8.6.x.
- vNIOS appliances run on virtual hardware. They do not have sensors to monitor the physical CPU temperature, fan speed, and system temperature.
- Changing the vNIOS appliance settings through KVM may violate the terms of the vNIOS licensing and support models. The vNIOS appliance may not join the Grid or function properly.
...
- If the vNIOS instance fails to launch, the ports created for the instance are not deleted automatically. You need to delete them manually.
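For example, assuming the OpenStack CLI is available and the ports were created with names containing the instance name (the name vnios-instance below is hypothetical), the leftover ports can be located and deleted as follows:
openstack port list | grep vnios-instance
openstack port delete port_id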
- When you start an FTP connection on OpenStack with the Listen on port field set to 2021, you must manually add a security group rule to allow port 2021; otherwise, the connection fails. An example follows.
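For example, using the OpenStack CLI, a rule for TCP port 2021 can be added to the vnios-sec-group security group referenced below (adjust the group name to your deployment):
openstack security group rule create --protocol tcp --dst-port 2021 --ingress vnios-sec-group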
- Connection to the FTP service might fail when the virtual appliance enters passive mode. To avoid this, do one of the following:
- Use active mode instead of passive mode.
- Modify the vnios-sec-group security group to open ports 1023 and above, as shown in the example after this list.
- Use the FTP client inside the internal network.
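For the second option, a sketch using the OpenStack CLI might look like the following; adjust the port range to match your FTP server's passive-mode configuration:
openstack security group rule create --protocol tcp --dst-port 1023:65535 --ingress vnios-sec-group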
- The vNIOS instances deployed in the OpenStack environment do not support HSM Safenet groups.
- IPv6 support in the OpenStack environment is limited by the Juno release and has not been verified by Infoblox.
- You cannot configure the reporting virtual appliance as an HA pair. You also cannot configure it as a Grid Master or Grid Master Candidate. You can use it only as a dedicated reporting server in the Grid.
- A power failure (or an unexpected reboot) of the VM host may corrupt the VM guest unless writes are correctly persisted. To avoid this, you must have an NVRAM-based disk controller on the host system and turn off guest disk caching. That is, you must set the disk_cachemodes parameter to the writethrough value. The method to set the disk_cachemodes parameter varies for different hypervisors. The following example pertains to vNIOS for KVM deployed in OpenStack 16.2:
- Sign in to the system as a root user.
- Edit the nova.conf files by running the following commands:
vi /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
vi /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf
- Go to the [libvirt] section in both files. On a new line, add:
disk_cachemodes = file=writethrough,block=writeback,network=writeback
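To confirm that the parameter is present in both files, you can, for example, run:
grep disk_cachemodes \
/var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf \
/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf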
- Save the changes and restart all containers related to Nova using the command:
podman restart container_id
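If you do not know the container IDs, a sketch that restarts all containers whose names contain nova (an assumption about your deployment's naming) is:
podman ps --filter name=nova --format "{{.ID}}" | xargs -r podman restart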
- The following example pertains to vNIOS for KVM Hypervisor:
- In KVM virt-manager, select writethrough from the disk cachemode drop-down list when deploying the instance.
- Alternatively, set the cache mode to writethrough by passing the cache=writethrough parameter in the virt-install command, as shown in the sketch below.
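A minimal virt-install sketch follows; the domain name, memory and CPU sizes, network bridge, and OS variant are hypothetical, and the disk path matches the XML example below:
virt-install --name mygm --memory 8192 --vcpus 4 \
--import --os-variant generic \
--disk path=/home/vms/mygm.qcow2,format=qcow2,bus=virtio,cache=writethrough \
--network bridge=br0,model=virtio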
- Validate this setting in the instance by editing the guest XML file as follows: configure the cache mode by updating the cache parameter inside the driver tag to specify a caching option. For example, to set the cache parameter to writethrough:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writethrough'/>
  <source file='/home/vms/mygm.qcow2' index='1'/>
  <backingStore/>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</disk>
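To open the guest XML for editing, you can use virsh; for example, assuming the domain is named mygm:
virsh edit mygm
virsh validates the XML when you save, and the change takes effect the next time the domain is started.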