Appendix E: Infoblox Connectivity Requirements

This page outlines the connectivity requirements for your configuration. The following topics are covered:

  • NIOS-X Port Usage for Server Connectivity

  • Admin User Connectivity Requirements

  • Port Usage for Infoblox Services

  • Port Usage for Bare-Metal NIOS-X Servers

  • Connectivity Rules for DNS Forwarding Proxy

  • Forwarding DNS Traffic to Infoblox Platform

  • Infoblox Geo-Based Anycast IPs for POPs

  • Local DNS Request Processing Optimization

  • Downloading Endpoint

NIOS-X Server Connectivity and Service Requirements

Port Usage for Bare-Metal NIOS-X Servers

When deploying a bare-metal NIOS-X server, you must open applicable ports on the server to ensure that all services are functioning properly.

The following table lists the ports that must be available on the bare-metal NIOS-X server, in addition to the firewall port usage described in NIOS-X Server Connectivity and Service Requirements.

| IP Protocol | Port | Services using this port | Description |
|---|---|---|---|
| TCP | 22 | Data Connector, NIOS | Required for incoming SCP data transfer from NIOS to Data Connector when Data Connector is deployed as a container. When you deploy Data Connector as a container, ensure that no SSH processes are listening on port 22. You must terminate these SSH processes for Data Connector to collect data from NIOS. |
| TCP | 53 | NIOS-X servers | Ensure that no other processes are using port 53 on the system on which your server will be deployed. For example, some Ubuntu systems running a local DNS cache (systemd-resolved) might occupy port 53, in which case your server might not function properly. |
| TCP | 514 | Data Connector | Required for Data Connector secure syslog for RPZ hits data. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
| TCP | 2222 | NIOS-X servers | Used by an internal service for remote monitoring. |
| TCP | 6514 | NIOS (SCP data transfer), Data Connector | Used for transferring syslog data from NIOS to the Data Connector container. Port 6514 is the default secure port. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
| TCP | 8125 | Data Connector | Internal port used for communications between containers. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
| TCP | 8126 | Data Connector | Internal port used for communications between containers. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
| TCP | 50514 | Data Connector | Internal port used for communications between containers. If you deploy Data Connector as a container, ensure that this port is not used by other processes. |
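
Before deployment, it can help to confirm that none of these ports are already claimed by another process on the host. The following is a minimal, illustrative Python sketch (not part of the product) that attempts to bind each TCP port from the table and reports any conflicts. Binding to ports below 1024 requires root privileges, and port 22 will normally be reported as in use while an SSH daemon is running.

```python
import socket

# TCP ports from the table above; adjust the list to the services you plan to run.
REQUIRED_TCP_PORTS = [22, 53, 514, 2222, 6514, 8125, 8126, 50514]

def port_is_free(port: int) -> bool:
    """Return True if nothing on this host is currently listening on the TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind(("0.0.0.0", port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for port in REQUIRED_TCP_PORTS:
        state = "free" if port_is_free(port) else "IN USE - identify and stop the conflicting process"
        print(f"TCP {port:>5}: {state}")
```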

Connectivity Rules for DNS Forwarding Proxy

The DNS Forwarding Proxy (DFP) connects to Infoblox Platform based on the following rules and conditions:

  • By default, the DFP is provisioned with the following four IPv4 global addresses. The DFP monitors the health status of these addresses and sends DNS requests to the first available and healthy address, in the order listed below. For example, if the first address is reachable but reports an unhealthy status, the DFP moves on to the next address in the list to establish a connection with Infoblox Platform, provided that address is reachable and healthy. The DFP performs periodic health checks on these addresses. (A minimal sketch of this selection logic appears after this list.)

  1. 52.119.41.100

  2. 52.119.40.100

  3. 103.80.6.100

  4. 103.80.5.100

  • The 52.119.41.100 and 103.80.6.100 IP addresses are provisioned under AWS Anycast, so a DNS client can connect to the nearest AWS entry location. Once a connection is established, the client is routed via AWS to the nearest PoP (Point of Presence). If the nearest PoP is not reachable, the client is forwarded to another PoP based on the rules described in the first bullet above.

  • The 52.119.40.100 and 103.80.5.100 IP addresses are routed using Anycast only and use a different architecture, so the traffic is routed via third-party networks to a PoP. These addresses are considered legacy.

  • The Local On-Prem resolution option in Security Policies for NIOS-X servers requires the NIOS-X server to have TCP 443 access to ope.infobloxtd.com at 52.119.41.120 and 103.80.6.120 as the lookups are done via API.

  • If you have defined a PoP for the DFP, only AWS addresses for that PoP are used while everything else works as described in the previous bullets. This connection creates a fail-open architecture. For example, if the PoP in Tokyo is provisioned for the DFP and it is not available, the traffic will be automatically routed to the next PoP based on the user/DFP location.
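
To make the selection order concrete, here is a minimal Python sketch of the behavior described above: it probes each global address in the listed order and uses the first one that is both reachable and answers a DNS query. It is an illustration only, not Infoblox code; it assumes the third-party dnspython package is installed, and the probe domain (example.com) is arbitrary.

```python
import socket
from typing import Optional

import dns.resolver  # third-party: dnspython (pip install dnspython)

# The four global IPv4 addresses, in the order listed above.
ANYCAST_ORDER = ["52.119.41.100", "52.119.40.100", "103.80.6.100", "103.80.5.100"]

def is_available_and_healthy(ip: str, probe_name: str = "example.com", timeout: float = 2.0) -> bool:
    """Available: TCP/53 accepts a connection. Healthy: the server answers a DNS query."""
    try:
        with socket.create_connection((ip, 53), timeout=timeout):
            pass
    except OSError:
        return False
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    resolver.lifetime = timeout
    try:
        resolver.resolve(probe_name, "A")
        return True
    except Exception:
        return False

def pick_dns_server() -> Optional[str]:
    """Return the first available and healthy address, mirroring the order described above."""
    for ip in ANYCAST_ORDER:
        if is_available_and_healthy(ip):
            return ip
    return None

if __name__ == "__main__":
    print("Selected address:", pick_dns_server())
```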


Forwarding DNS Traffic to Infoblox Platform

To access Infoblox Platform DNS service, you must forward your DNS traffic (except for internal domain resolution) to the Infoblox Platform name server. In essence, a DNS forwarder is a name server to which all other name servers first send queries that they cannot resolve locally. The forwarder then sends these queries to DNS servers external to the network, and this saves the other name servers in your network from having to send queries off site. A forwarder eventually builds up a cache of information and uses it to resolve queries. This reduces Internet traffic over the network and decreases the time taken to respond to DNS clients.

How you configure your DNS forwarders to use the Infoblox Threat Defense name server depends on your network configuration and on the network scopes you are protecting (a verification sketch follows this list):

  • If you have an on-prem Infoblox Grid, configure your Grid members (which act as DNS forwarders) to use the Infoblox Threat Defense name server.

  • If you are using Unbound, BIND, or any other third-party DNS server as your DNS resolver, then, in your DNS configuration file, configure your DNS forwarders to use the Infoblox Threat Defense name server IP.

  • You can also configure Microsoft servers to use DNS forwarders. 

  • In corporate mode, Infoblox Endpoint supports transfer of metadata to Infoblox Platform when queries are resolved by DFP.
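
Whichever forwarder you use, it can be worth confirming from the forwarder host that queries sent directly to the Infoblox Threat Defense name server resolve before you change the forwarder configuration. The sketch below is an illustrative check only: it assumes the dnspython package, uses the recommended 52.119.41.100 anycast address, and queries an arbitrary test domain. Depending on your setup, queries from unprovisioned source IPs may be refused.

```python
import dns.resolver  # third-party: dnspython (pip install dnspython)

THREAT_DEFENSE_NS = "52.119.41.100"   # recommended anycast address from the list below
TEST_DOMAIN = "infoblox.com"          # arbitrary test name

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [THREAT_DEFENSE_NS]
resolver.timeout = 3.0    # per-nameserver timeout
resolver.lifetime = 5.0   # overall query timeout

answer = resolver.resolve(TEST_DOMAIN, "A")
for record in answer:
    print(f"{TEST_DOMAIN} A -> {record.address}")
```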

If you are forwarding DNS traffic to the Infoblox Threat Defense name servers using the External Networks configuration, without Infoblox Endpoint or DFP, provision the following DNS anycast addresses. Note that these IP addresses are considered legacy/backup addresses with limited availability at the regional (PoP) level. A reachability sketch follows the list below.

  • While all Anycast IP addresses are valid and provide redundancy for our customers, Infoblox recommends using the 52.119.41.100 and 103.80.6.100 addresses. The 52.119.40.100 and 103.80.5.100 addresses are considered legacy.

    • The 52.119.41.100 and 103.80.6.100 addresses are provisioned under AWS Anycast, so a DNS client can connect to the nearest AWS entry location. Once a connection is established, the client is routed via AWS to the nearest PoP (Point of Presence). If the nearest PoP is not reachable, the client is forwarded to another PoP based on the health-check rules described in Connectivity Rules for DNS Forwarding Proxy, above.

    • The 52.119.40.100 and 103.80.5.100 addresses are routed using Anycast only and use a different architecture, so the traffic is routed via third-party networks to a PoP.

  • The IPv6 DNS anycast addresses are 2400:4840::100 and 2620:129:6000::100.
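
The short sketch below (illustrative only) checks basic TCP/53 reachability from your network to each of the IPv4 and IPv6 anycast addresses listed above. It does not confirm that your source IP has been provisioned as an external network.

```python
import socket

ANYCAST_ADDRESSES = [
    "52.119.41.100", "103.80.6.100",          # recommended (AWS Anycast)
    "52.119.40.100", "103.80.5.100",          # legacy
    "2400:4840::100", "2620:129:6000::100",   # IPv6
]

for address in ANYCAST_ADDRESSES:
    try:
        # create_connection() handles both IPv4 and IPv6 literals.
        with socket.create_connection((address, 53), timeout=3):
            print(f"{address}: TCP/53 reachable")
    except OSError as err:
        print(f"{address}: not reachable ({err})")
```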


Infoblox Geo-Based Anycast IPs for POPs

Infoblox-provided anycast addresses (listed above) will route your DNS traffic to the appropriate PoPs.

If you want to direct DNS traffic to a specific location, you can use the geo-based anycast IPs listed in the following table.

Infoblox Geo-based Anycast IPs for POPs

| Location | IPv4 Address | Secondary IPv4 Address | Server |
|---|---|---|---|
| California (USA) | 52.119.41.51 | 103.80.6.51 | us-west-1-geo.threatdefense.infoblox.com |
| Virginia (USA) | 52.119.41.52 | 103.80.6.52 | us-east-1-geo.threatdefense.infoblox.com |
| London (England) | 52.119.41.53 | 103.80.6.53 | eu-west-2-geo.threatdefense.infoblox.com |
| Frankfurt (Germany) | 52.119.41.54 | 103.80.6.54 | eu-central-1-geo.threatdefense.infoblox.com |
| Mumbai (India) | 52.119.41.55 | 103.80.6.55 | ap-south-1-geo.threatdefense.infoblox.com |
| Tokyo (Japan) | 52.119.41.56 | 103.80.6.56 | ap-northeast-1-geo.threatdefense.infoblox.com |
| Singapore | 52.119.41.57 | 103.80.6.57 | ap-southeast-1-geo.threatdefense.infoblox.com |
| Toronto (Canada) | 52.119.41.58 | 103.80.6.58 | ca-central-1-geo.threatdefense.infoblox.com |
| Sydney (Australia) | 52.119.41.59 | 103.80.6.59 | ap-southeast-2-geo.threatdefense.infoblox.com |
| São Paulo (Brazil) | 52.119.41.60 | 103.80.6.60 | sa-east-1-geo.threatdefense.infoblox.com |
| Bahrain | 52.119.41.61 | 103.80.6.61 | me-south-1-geo.threatdefense.infoblox.com |
| Johannesburg (South Africa) | 52.119.41.62 | 103.80.6.62 | af-south-1-geo.threatdefense.infoblox.com |
| Ohio (USA) | 52.119.41.63 | 103.80.6.63 | us-east-2-geo.threatdefense.infoblox.com |
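
As an illustration of how to use the table, the following sketch directs test queries to a specific PoP (Tokyo in this example) by pointing a resolver at that row's primary and secondary IPv4 addresses. It assumes the dnspython package; the test domain is arbitrary.

```python
import dns.resolver  # third-party: dnspython (pip install dnspython)

# From the table: Tokyo (Japan), ap-northeast-1-geo.threatdefense.infoblox.com
TOKYO_PRIMARY = "52.119.41.56"
TOKYO_SECONDARY = "103.80.6.56"

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [TOKYO_PRIMARY, TOKYO_SECONDARY]  # tried in order
resolver.lifetime = 5.0

answer = resolver.resolve("infoblox.com", "A")  # arbitrary test name
print([record.address for record in answer])
```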

Warning
Before pointing your DNS to the Infoblox Threat Defense name server, ensure that your network and DNS server are properly configured to send DNS queries and receive responses. For more information, see Testing Network Configuration.

Local DNS Request Processing Optimization

To reduce the number of noise requests forwarded to the cloud and to avoid misconfiguration, DFP and Infoblox Endpoint automatically forward all PTR requests for private subnets (for example, 10.0.0.0/8 and 192.168.0.0/16) to local DNS servers. With this enhancement, you do not need to list such subnets in internal domains or custom allow lists.

By default, when a local DNS server is provisioned on the DFP, the DFP forwards these private PTR requests to that local DNS server.
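
The following Python sketch illustrates the classification described above; it mirrors the described behavior for illustration only and is not the product code. Given a PTR query name, it decides whether the name maps to a private (non-global) address range and should therefore be answered locally rather than forwarded to the cloud.

```python
import ipaddress

def ptr_is_private(qname: str) -> bool:
    """Return True if a PTR query name maps to a private (non-global) address range."""
    qname = qname.rstrip(".").lower()
    if qname.endswith(".in-addr.arpa"):
        # IPv4 reverse names list the octets in reverse order.
        octets = qname[: -len(".in-addr.arpa")].split(".")
        addr = ipaddress.IPv4Address(".".join(reversed(octets)))
    elif qname.endswith(".ip6.arpa"):
        # IPv6 reverse names list 32 nibbles in reverse order.
        nibbles = qname[: -len(".ip6.arpa")].split(".")
        addr = ipaddress.IPv6Address(int("".join(reversed(nibbles)), 16))
    else:
        return False
    return addr.is_private

# PTR name for 10.0.0.1 -> answered by the local DNS server
print(ptr_is_private("1.0.0.10.in-addr.arpa"))        # True
# PTR name for 52.119.41.100 -> forwarded as usual
print(ptr_is_private("100.41.119.52.in-addr.arpa"))   # False
```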

Downloading Endpoint
