This page outlines the connectivity requirements necessary for your configuration. The following connectivity requirements are covered:
- NIOS-X Port Usage for Server Connectivity
- Admin User Connectivity Requirements
- Port Usage for Infoblox Services
- Port Usage for Bare-Metal NIOS-X Servers
- Connectivity Rules for DNS Forwarding Proxy
- Forwarding DNS Traffic to Infoblox Platform
- Infoblox Geo-Based Anycast IPs for POPs
- Local DNS Request Processing Optimization
NIOS-X Port Usage for Server Connectivity
For NIOS-X server connectivity to function properly, ensure that the following are in place:
All destination domains, IPs, and ports listed in the following table must be allowed through your firewall.
Do not enable SSL inspection in your firewall for any of the destination domains and IPs.
Source | Destinations | Destination IPs | Protocol | Destination Port | Description |
---|---|---|---|---|---|
Universal DDI | US Region: dns.bloxone.infoblox.com, csp.infoblox.com, cp.noa.infoblox.com, grpc.csp.infoblox.com, app.noa.infoblox.com. EU Region: dns.bloxone.eu.infoblox.com, csp.eu.infoblox.com, cp.noa.eu.infoblox.com, app.noa.eu.infoblox.com | | TCP | 443 | Allow these destinations on the firewall for the NIOS-X servers to connect to the Infoblox Portal and to ensure that Universal DDI services function properly in the respective regions. |
NIOS-X Servers DNS Forwarding Proxy | threatdefense.bloxone.infoblox.com; threatdefense.infoblox.com (and all subdomains); ope.infobloxtd.com (and all subdomains; required only if you plan to use the “Local On-Prem resolution” feature). Note: Communication with these destinations bypasses any proxy server setting. If you configure a proxy, both the DNS forwarding proxy service (threatdefense.bloxone.infoblox.com:443) and the Universal DDI service destination (dns.bloxone.infoblox.com:443) bypass the proxy. | US and EU Regions: Anycast IPs (IPv4 and IPv6). For geo-specific IP addresses, refer to the Infoblox Geo-Based Anycast IPs for POPs table in Forwarding DNS Traffic to Infoblox Platform. | TCP | 443, 53 | Infoblox uses 52.119.40.100 as the default local resolver for all NIOS-X servers. However, you can use your own local resolver to resolve the destination domains. |
NIOS-X Servers | US Region: csp.infoblox.com. EU Region: csp.eu.infoblox.com | US Region: a complete list of the US region IP addresses is available in a JSON file by clicking this link. EU Region: a complete list of the EU region IP addresses is available in a JSON file by clicking this link. If the server type is NIOS, note the following: when an HTTP proxy is configured on the NIOS-X server, Data Connector can pull log data from the Infoblox cloud source through the configured proxy; however, the logs sent from Data Connector to the configured destination currently still bypass the proxy. | TCP | 443 | All listed IPs require TCP port 443 to be open when in use. |
End Client | N/A | Redirect IPs for IPv4 and IPv6 | TCP | 443 or 80 | For redirect purposes. A client/end user should connect to the redirect server. |
NIOS-X Servers | ntp.ubuntu.com (optional), pool.ntp.org (optional) | N/A | UDP | 123 | For NTP server synchronization. Needed only when ESXi time sync is disabled. This is optional. |
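To confirm that the firewall allows the required outbound connections, you can run a quick check from the network segment where the NIOS-X servers are (or will be) deployed. The following is a minimal sketch, assuming Python 3 with only the standard library; the hostnames are taken from the table above (US region shown) and can be adjusted for the EU region or additional destinations.

```python
# Minimal outbound-connectivity check for the NIOS-X destinations listed above.
# Assumptions: Python 3, standard library only, run from the network segment
# where the NIOS-X servers are (or will be) deployed.
import socket

# Hostnames taken from the table above (US region shown); substitute the EU
# region names if applicable. The NTP destinations use UDP 123 and cannot be
# verified with a simple TCP connect, so they are not included here.
DESTINATIONS = [
    ("csp.infoblox.com", 443),
    ("dns.bloxone.infoblox.com", 443),
    ("grpc.csp.infoblox.com", 443),
    ("cp.noa.infoblox.com", 443),
    ("app.noa.infoblox.com", 443),
    ("threatdefense.bloxone.infoblox.com", 443),
]

for host, port in DESTINATIONS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK      {host}:{port}")
    except OSError as exc:
        print(f"BLOCKED {host}:{port} ({exc})")
```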
Admin User Connectivity Requirements
Source | Destinations | Destination IPs (if applicable) | Protocol | Destination Port | Description |
---|---|---|---|---|---|
Infoblox admins | US Region: auth.infoblox.com, cdnjs.cloudflare.com. EU Region | N/A | TCP (TLS) | 443 | |
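Because SSL inspection must not be enabled for the Infoblox destination domains, it can be useful to verify from an admin workstation that TLS sessions to the Infoblox Portal terminate at Infoblox rather than at an inspection proxy. The following is a heuristic sketch, assuming Python 3 and outbound TCP 443 to csp.infoblox.com: if the reported certificate issuer is your corporate proxy CA, or verification fails against the public trust store, SSL inspection is likely active on that path.

```python
# Heuristic check for TLS interception on the path to the Infoblox Portal.
# Assumptions: Python 3, standard library only, csp.infoblox.com reachable on
# TCP 443 from the admin workstation.
import socket
import ssl

HOST = "csp.infoblox.com"   # portal domain from the tables above

context = ssl.create_default_context()
try:
    with socket.create_connection((HOST, 443), timeout=5) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            # getpeercert() returns the validated certificate; its issuer should
            # be a public CA. A corporate proxy CA here indicates SSL inspection.
            issuer = dict(pair[0] for pair in tls.getpeercert()["issuer"])
            print("Certificate issuer:", issuer.get("organizationName", issuer))
except ssl.SSLCertVerificationError as exc:
    # A verification failure can also indicate an interception proxy whose CA
    # is not in the local trust store.
    print("Certificate verification failed:", exc)
except OSError as exc:
    print(f"Could not connect to {HOST}:443 ({exc})")
```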
Port Usage for Infoblox Services
The following table lists the ports that must be available in your firewall for Infoblox services to function properly.
All ports listed below are outbound only, except for transferring logs from NIOS to Data Connector, which requires inbound communication.
Services | Protocol | Destination Port | Description |
---|---|---|---|
All Infoblox services | TCP | 443 | |
DNS Forwarding Proxy | TCP, UDP | 53 | The DNS forwarding proxy uses 52.119.40.100 as the default resolver. However, you can use your own local resolver to resolve the destination domains. |
DHCP server | UDP | 68 | N/A |
Infoblox DNS | TCP | 443 | For Universal DDI authoritative DNS cloud services. |
Sending peer of the DHCP HA (High Availability) | TCP | 647 | This is an incoming port for the HA (High Availability) feature. The receiving peer must be able to receive traffic on the port, and the sending peer must be able to send traffic to the port, generally from other random ports. |
Sending peer of the DHCP cluster | TCP | 647 or 847 | For DHCP cluster load balancing. The receiving peer must be able to receive traffic on the port, and the sending peer must be able to send traffic to the port, generally from other random ports. |
Data Connector | TCP | 22 | Open this port if you want to send data using SCP from the Infoblox NIOS appliance (if configured) to Data Connector. The NIOS UI provides a mechanism to filter the domains it sends to Data Connector. Because NIOS sends cache logs, when configuring NIOS for use with Data Connector, configure Data Connector to exclude internal corporate and authoritative domains (*.<corp>/Authoritative) so that internal traffic logs are not added. Required for incoming SCP data transfer from NIOS to Data Connector when deployed as a container. If you deploy Data Connector as a container, ensure that there are no SSH processes listening on port 22; you must terminate these SSH processes for Data Connector to collect data from NIOS. |
Data Connector | TCP | 514 | Open this port if you want to send syslog and secure syslog for RPZ from the Infoblox NIOS appliance (if configured) to Data Connector. Note: Port 514 is an insecure port. The NIOS UI provides a mechanism to filter the domains it sends to Data Connector. Because NIOS sends cache logs, when configuring NIOS for use with Data Connector, configure Data Connector to exclude internal corporate and authoritative domains (*.<corp>/Authoritative) so that internal traffic logs are not added. Required for Data Connector secure syslog for RPZ hits data. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
Data Connector | TCP | 6514 | Open this port if you want to send syslog and secure syslog for RPZ from the Infoblox NIOS appliance (if configured) to Data Connector. The NIOS UI provides a mechanism to filter the domains it sends to Data Connector. Because NIOS sends cache logs, when configuring NIOS for use with Data Connector, configure Data Connector to exclude internal corporate and authoritative domains (*.<corp>/Authoritative) so that internal traffic logs are not added. Used for transferring syslog data from NIOS to the Data Connector container. Port 6514 is the default secure port. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
A complete list of the IP addresses used is available in a JSON file by clicking this link. All listed IPs require TCP port 443 to be open when in use.
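Before deploying Data Connector as a container, you can confirm that the inbound ports listed above (22, 514, and 6514) are free on the host. The following is a minimal sketch, assuming Python 3 run with root privileges on the host (binding ports below 1024 requires them); a bind failure indicates that another process, such as sshd on port 22, is already using the port.

```python
# Check that the inbound Data Connector ports are not already in use on the
# host before deploying the container. Assumptions: Python 3, run as root.
import socket

PORTS = [22, 514, 6514]   # SCP, syslog, and secure syslog (see the rows above)

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("0.0.0.0", port))
        print(f"Port {port} is free")
    except PermissionError:
        print(f"Port {port}: run this check as root (ports below 1024 need privileges)")
    except OSError as exc:
        # Address-in-use here usually means another process (for example sshd
        # on port 22) is listening and must be stopped or moved first.
        print(f"Port {port} is NOT available: {exc}")
    finally:
        sock.close()
```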
For additional information on requirements for the Infoblox connectivity service, see the following:
Port Usage for Bare-Metal NIOS-X Servers
When deploying a bare-metal NIOS-X server, you must open applicable ports on the server to ensure that all services are functioning properly.
The following table lists the ports that need to be available on the bare-metal NIOS-X server, in addition to the port usage for firewalls, as described in NIOS-X Server Connectivity and Service Requirements.
IP Protocol | Port | Services using this port | Description |
---|---|---|---|
TCP | 22 | Data Connector | Required for incoming SCP data transfer from NIOS to Data Connector when deployed as a container. When you deploy Data Connector as a container, ensure that there are no SSH processes listening on port 22; you must terminate these SSH processes for Data Connector to collect data from NIOS. |
TCP | 53 | | Ensure that no other processes are using port 53 on the system on which your server will be deployed. For example, some Ubuntu systems running a local DNS cache (systemd-resolved) might occupy port 53, in which case your server might not function properly. |
TCP | 514 | Data Connector | Required for Data Connector secure syslog for RPZ hits data. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
TCP | 2222 | | Used by an internal service for remote monitoring. |
TCP | 6514 | Data Connector | Used for transferring syslog data from NIOS to the Data Connector container. Port 6514 is the default secure port. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
TCP | 8125 | | This is an internal port used for communications between containers. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
TCP | 8126 | | This is an internal port used for communications between containers. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
TCP | 50514 | | This is an internal port used for communications between containers. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
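The port 53 conflict noted above is common on Ubuntu hosts where systemd-resolved runs a stub listener on 127.0.0.53:53. The following is a minimal sketch, assuming Python 3 on the target host, that checks whether anything is already answering on the local port 53 before you deploy the bare-metal NIOS-X server.

```python
# Check whether a local DNS stub is already holding port 53 on the host where
# the bare-metal NIOS-X server will run. Assumptions: Python 3 on the target
# host; 127.0.0.53 is the stub listener address used by systemd-resolved.
import socket

for addr in ("127.0.0.53", "127.0.0.1"):
    try:
        with socket.create_connection((addr, 53), timeout=2):
            print(f"Something is listening on {addr}:53; free the port before deploying")
    except OSError:
        print(f"Nothing is listening on {addr}:53")
```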
Connectivity Rules for DNS Forwarding Proxy
The DFP makes its connection with Infoblox Platform based on the following rules and conditions:
- By default, the DFP is provisioned with the following four IPv4 global addresses. The DFP monitors the health status of these addresses and sends DNS requests to the first available, healthy address in the order listed below. In other words, if an address is reachable but has an unhealthy status, the DFP moves on to the next address in the list to establish a connection with Infoblox Platform, provided that address is reachable and healthy. Note that the DFP performs periodic health checks on these addresses. A minimal reachability sketch for these addresses follows this list.
  - 52.119.41.100
  - 52.119.40.100
  - 103.80.6.100
  - 103.80.5.100
- The 52.119.41.100 and 103.80.6.100 IP addresses are provisioned under AWS Anycast, so a DNS client can connect to the nearest AWS entry location. Once a connection is established, the client is routed via AWS to the nearest PoP (Point of Presence). If the nearest PoP is not reachable, the client is forwarded to another PoP based on the rules described in the first bullet, above.
- The 52.119.40.100 and 103.80.5.100 IP addresses are routed using Anycast only; they use a different architecture in which traffic is routed via third-party networks to a PoP. These addresses are considered legacy.
- The Local On-Prem resolution option in Security Policies for NIOS-X servers requires the NIOS-X server to have TCP 443 access to ope.infobloxtd.com at 52.119.41.120 and 103.80.6.120, because the lookups are done via API.
- If you have defined a PoP for the DFP, only the AWS addresses for that PoP are used, while everything else works as described in the previous bullets. This connection creates a fail-open architecture. For example, if the PoP in Tokyo is provisioned for the DFP and it is not available, traffic is automatically routed to the next PoP based on the user/DFP location.
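The following is a minimal reachability sketch for the four global addresses, assuming Python 3 and that outbound TCP 443 and TCP 53 are permitted. It only probes basic connectivity in the documented preference order; the actual health checks are performed internally by the DFP.

```python
# Probe the four global DFP addresses in the documented preference order and
# report the first one reachable on both TCP 443 and TCP 53.
# Assumptions: Python 3, standard library only; this is a connectivity probe,
# not a substitute for the DFP's own health checks.
import socket

ANYCAST = ["52.119.41.100", "52.119.40.100", "103.80.6.100", "103.80.5.100"]

def reachable(ip: str, port: int) -> bool:
    try:
        with socket.create_connection((ip, port), timeout=3):
            return True
    except OSError:
        return False

for ip in ANYCAST:
    status = {port: reachable(ip, port) for port in (443, 53)}
    print(ip, status)
    if all(status.values()):
        print(f"First fully reachable address in preference order: {ip}")
        break
```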
For more information, see the following:
Forwarding DNS Traffic to Infoblox Platform
To access Infoblox Platform DNS service, you must forward your DNS traffic (except for internal domain resolution) to the Infoblox Platform name server. In essence, a DNS forwarder is a name server to which all other name servers first send queries that they cannot resolve locally. The forwarder then sends these queries to DNS servers external to the network, which saves the other name servers in your network from having to send queries off site. A forwarder eventually builds up a cache of information and uses it to resolve queries, which reduces Internet traffic over the network and decreases response time to DNS clients.
Depending on your network configuration, you can forward DNS traffic while configuring the following network scopes for protection:
- DFP (DNS Forwarding Proxy), either standalone or running on NIOS
The manner in which you configure your DNS forwarders to use the Infoblox Threat Defense name server depends on your network configuration:
- If you have an on-prem Infoblox Grid, configure your Grid members (which act as DNS forwarders) to use the Infoblox Threat Defense name server.
- If you are using Unbound, BIND, or any other third-party DNS server as your DNS resolver, then, in your DNS configuration file, configure your DNS forwarders to use the Infoblox Threat Defense name server IP.
- You can also configure Microsoft servers to use DNS forwarders.
In corporate mode, Infoblox Endpoint supports transfer of metadata to Infoblox Platform when queries are resolved by DFP.
If you are forwarding DNS traffic to the Infoblox Threat Defense name servers using the External Networks configuration, without Infoblox Endpoint or DFP, you should provision the following DNS anycast addresses. Note that these IP addresses are considered legacy/backup, with limited availability at the regional (PoP) level.
- While all Anycast IP addresses are valid and provide redundancy, Infoblox recommends using the 52.119.41.100 and 103.80.6.100 addresses. The 52.119.40.100 and 103.80.5.100 addresses are considered legacy.
- The 52.119.41.100 and 103.80.6.100 addresses are provisioned under AWS Anycast, so a DNS client can connect to the nearest AWS entry location. Once a connection is established, the client is routed via AWS to the nearest PoP (Point of Presence). If the nearest PoP is not reachable, the client is forwarded to another PoP based on the rules described in the following bullet.
- By default, the DFP is provisioned with the following four IPv4 global addresses: 52.119.41.100, 52.119.40.100, 103.80.6.100, and 103.80.5.100. The DFP monitors the health status of these addresses and sends DNS requests to the first available, healthy address in that order; if an address is reachable but unhealthy, the DFP moves on to the next address in the list to establish a connection with Infoblox Platform, provided that address is reachable and healthy. The DFP performs periodic health checks on these addresses.
- The 52.119.40.100 and 103.80.5.100 addresses are routed using Anycast only; they use a different architecture in which traffic is routed via third-party networks to a PoP.
- IPv6 DNS anycast addresses: 2400:4840::100 and 2620:129:6000::100
For best practices when configuring DNS forwarding, see the following topics:
Infoblox Geo-Based Anycast IPs for POPs
Infoblox-provided anycast addresses (listed above) will route your DNS traffic to the appropriate PoPs.
If you want to direct DNS traffic to a specific location, you can use the geo-based anycast IPs listed in the following table.
Location | IPv4 Address | Secondary IPv4 Address | Server |
---|---|---|---|
California (USA) | 52.119.41.51 | 103.80.6.51 | us-west-1-geo.threatdefense.infoblox.com |
Virginia (USA) | 52.119.41.52 | 103.80.6.52 | us-east-1-geo.threatdefense.infoblox.com |
London (England) | 52.119.41.53 | 103.80.6.53 | eu-west-2-geo.threatdefense.infoblox.com |
Frankfurt (Germany) | 52.119.41.54 | 103.80.6.54 | eu-central-1-geo.threatdefense.infoblox.com |
Mumbai (India) | 52.119.41.55 | 103.80.6.55 | ap-south-1-geo.threatdefense.infoblox.com |
Tokyo (Japan) | 52.119.41.56 | 103.80.6.56 | ap-northeast-1-geo.threatdefense.infoblox.com |
Singapore | 52.119.41.57 | 103.80.6.57 | ap-southeast-1-geo.threatdefense.infoblox.com |
Toronto (Canada) | 52.119.41.58 | 103.80.6.58 | ca-central-1-geo.threatdefense.infoblox.com |
Sydney (Australia) | 52.119.41.59 | 103.80.6.59 | ap-southeast-2-geo.threatdefense.infoblox.com |
São Paulo (Brazil) | 52.119.41.60 | 103.80.6.60 | sa-east-1-geo.threatdefense.infoblox.com |
Bahrain | 52.119.41.61 | 103.80.6.61 | me-south-1-geo.threatdefense.infoblox.com |
Johannesburg (South Africa) | 52.119.41.62 | 103.80.6.62 | af-south-1-geo.threatdefense.infoblox.com |
Ohio (USA) | 52.119.41.63 | 103.80.6.63 | us-east-2-geo.threatdefense.infoblox.com |
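To confirm which address a geo-based hostname resolves to from your vantage point, you can compare the resolved A records against the table. The following is a minimal sketch, assuming Python 3; the two hostnames shown are examples taken from the table and can be replaced with the locations you plan to use.

```python
# Compare the A records returned for geo-based hostnames against the primary
# IPv4 addresses documented in the table above. Assumptions: Python 3 and a
# working local resolver.
import socket

GEO = {
    "us-east-1-geo.threatdefense.infoblox.com": "52.119.41.52",
    "eu-central-1-geo.threatdefense.infoblox.com": "52.119.41.54",
}

for host, documented in GEO.items():
    try:
        resolved = sorted({info[4][0] for info in socket.getaddrinfo(host, 53, socket.AF_INET)})
    except socket.gaierror as exc:
        print(f"{host}: resolution failed ({exc})")
        continue
    status = "matches" if documented in resolved else "does NOT match"
    print(f"{host}: resolves to {resolved} ({status} the documented {documented})")
```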
Warning
Before pointing your DNS to the Infoblox Threat Defense name server, ensure that your network and DNS server are properly configured to send DNS queries and receive responses. For more information, see Testing Network Configuration.
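Independently of that procedure, a quick way to confirm that DNS queries and responses can flow is to send a single test query directly to the recommended anycast resolver. The following is a minimal sketch, assuming Python 3 with only the standard library and that outbound UDP 53 to 52.119.41.100 is permitted; the query name csp.infoblox.com is used only as an example.

```python
# Minimal DNS probe: send one A-record query over UDP 53 to an Infoblox
# anycast resolver and report whether a response comes back.
# Assumptions: Python 3, standard library only, outbound UDP 53 allowed.
import random
import socket
import struct

RESOLVER = "52.119.41.100"   # recommended Infoblox anycast address (see above)
QNAME = "csp.infoblox.com"   # example name; any resolvable public name works

def build_query(name: str) -> bytes:
    txid = random.randint(0, 0xFFFF)
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD flag, 1 question
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)      # QTYPE=A, QCLASS=IN
    return header + question

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(build_query(QNAME), (RESOLVER, 53))
    try:
        data, _ = sock.recvfrom(512)
        answers = struct.unpack(">H", data[6:8])[0]   # ANCOUNT field of the DNS header
        print(f"Response received from {RESOLVER}: {answers} answer record(s)")
    except OSError as exc:
        print(f"No response from {RESOLVER} ({exc}): check firewall rules and routing")
```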
Local DNS Request Processing Optimization
To reduce the number of noise requests forwarded to the cloud and to avoid misconfiguration, DFP and Infoblox Endpoint will automatically forward all PTR requests for any private subnets (e.g. 10.0.0.0/8, 192.168.0.0/16, etc.) to local DNS servers. With this enhancement, you will not need to list such subnets in the internal domains or custom allow lists.
DFP will forward all private requests to a local DNS server by default when a local DNS server is provisioned on the DFP.
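As an illustration of this behavior, the following sketch, assuming Python 3, classifies reverse (PTR) lookups by whether the target address falls in a private range: such lookups are answered via local DNS servers, while the rest are forwarded to the cloud. The standard library's is_private test is a close approximation of the private subnets mentioned above.

```python
# Illustrative classification of which reverse (PTR) lookups DFP and Infoblox
# Endpoint send to local DNS servers rather than to the cloud.
# Assumptions: Python 3; is_private is used as an approximation of the
# private subnets described above.
import ipaddress

def handled_locally(address: str) -> bool:
    # is_private covers 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, loopback,
    # link-local, and their IPv6 equivalents.
    return ipaddress.ip_address(address).is_private

for ip in ("10.1.2.3", "192.168.0.10", "8.8.8.8"):
    target = "local DNS servers" if handled_locally(ip) else "the cloud"
    print(f"PTR lookup for {ip} -> {target}")
```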