3-Node NiFi 2.7.2 Standalone Cluster Setup Guide

ODP NiFi 2.7.2

All hostnames, IPs, paths, and credentials shown below are placeholders. Replace them with the values for your own environment before running any command.

Item                 Example Used in this Guide
Node hostnames       nifi-node1.example.com, nifi-node2.example.com, nifi-node3.example.com
Node IPs             10.0.0.11, 10.0.0.12, 10.0.0.13
NiFi mirror          Index of /ODP/standalone/3.3.6.4-1/
SSH user             nifiuser
SSH key path         ~/.ssh/nifi-cluster-key
Admin credentials    admin / ChangeMe@123

Cluster Topology

Host name                 IP
nifi-node1.example.com    10.0.0.11
nifi-node2.example.com    10.0.0.12
nifi-node3.example.com    10.0.0.13

NiFi version: 2.7.2.3.3.6.4-1

Mirror: Index of /ODP/standalone/3.3.6.4-1/

Prerequisites (run on ALL 3 nodes)

1. Populate /etc/hosts

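A minimal sketch, assuming the placeholder IPs and hostnames from the table above:

```bash
# Append cluster name resolution to /etc/hosts (placeholder values)
sudo tee -a /etc/hosts <<'EOF'
10.0.0.11 nifi-node1.example.com nifi-node1
10.0.0.12 nifi-node2.example.com nifi-node2
10.0.0.13 nifi-node3.example.com nifi-node3
EOF
```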

2. Install JDK 21

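Assuming a RHEL-family host; package names vary by distribution:

```bash
# Install OpenJDK 21 (runtime + devel for keytool etc.)
sudo dnf install -y java-21-openjdk java-21-openjdk-devel
java -version   # should report version 21
```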

3. Set JAVA_HOME

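The JDK path below assumes the RHEL-family OpenJDK 21 package location; adjust for your install:

```bash
# Make JAVA_HOME available to all login shells
echo 'export JAVA_HOME=/usr/lib/jvm/java-21-openjdk' | sudo tee /etc/profile.d/java21.sh
source /etc/profile.d/java21.sh
echo "$JAVA_HOME"
```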

NiFi 2.x requires Java 21. Java 8 and 11 will not work.

Step 1 — Download and extract tarballs (ALL 3 nodes)

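A sketch, with <MIRROR_URL> standing in for the ODP mirror above; the exact tarball name is an assumption, so check the mirror listing first:

```bash
cd /opt
# Download from the ODP mirror (tarball name is illustrative)
sudo curl -O <MIRROR_URL>/nifi-2.7.2.3.3.6.4-1-bin.tar.gz
sudo tar -xzf nifi-2.7.2.3.3.6.4-1-bin.tar.gz
# Let the service user own the install
sudo chown -R nifiuser:nifiuser /opt/nifi-2.7.2.3.3.6.4-1
```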

Set NIFI_HOME for convenience (ALL 3 nodes):

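Assuming the extract path from the previous step:

```bash
export NIFI_HOME=/opt/nifi-2.7.2.3.3.6.4-1
# Persist it for future shells
echo "export NIFI_HOME=${NIFI_HOME}" | sudo tee /etc/profile.d/nifi.sh
```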

Step 2 — Set JAVA_HOME in bootstrap.conf (ALL 3 nodes)

Edit ${NIFI_HOME}/conf/bootstrap.conf and add the Java path as the first property:

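The entry looks like this (path assumes the RHEL-family JDK location):

```properties
# First property in conf/bootstrap.conf
java=/usr/lib/jvm/java-21-openjdk/bin/java
```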

Step 3 — Configure NiFi for clustering (ALL 3 nodes)

Edit ${NIFI_HOME}/conf/nifi.properties on each node.

3a. Sensitive properties key (SAME on all 3 nodes)

NiFi 2.x in cluster mode requires a shared sensitive properties key. Generate one once and use the same value on every node:

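A sketch of both steps, assuming OpenSSL is available; the sed pattern targets the stock nifi.properties entry:

```bash
# 1) Generate once, on any node
KEY=$(openssl rand -hex 16)
echo "$KEY"   # copy this value to the other nodes

# 2) On EVERY node, set the same value
sed -i "s|^nifi.sensitive.props.key=.*|nifi.sensitive.props.key=${KEY}|" \
  "${NIFI_HOME}/conf/nifi.properties"
```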

All cluster nodes must share the same nifi.sensitive.props.key. If they differ, nodes will fail to join.

3b. Web properties (different on each node)

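For example, on nifi-node1 (substitute each node's own FQDN):

```properties
nifi.web.https.host=nifi-node1.example.com
nifi.web.https.port=8443
```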

3c. Cluster properties

SAME on all 3 nodes:

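A typical set of shared cluster properties; the protocol port 11443 is a common choice, not mandated, as long as it matches on every node:

```properties
nifi.cluster.is.node=true
nifi.cluster.protocol.is.secure=true
nifi.cluster.node.protocol.port=11443
nifi.zookeeper.connect.string=nifi-node1.example.com:2181,nifi-node2.example.com:2181,nifi-node3.example.com:2181
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=3
```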

Per-node — set to each node's own FQDN:

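For example, on nifi-node1:

```properties
nifi.cluster.node.address=nifi-node1.example.com
```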

3d. Embedded ZooKeeper (SAME on all 3 nodes)

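Enable the embedded ZooKeeper server on every node:

```properties
nifi.state.management.embedded.zookeeper.start=true
```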

Step 4 — Configure embedded ZooKeeper (ALL 3 nodes)

4a. Edit ${NIFI_HOME}/conf/zookeeper.properties

Add the server list (SAME on all nodes):

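The standard ZooKeeper 3.5+ syntax, with peer ports 2888/3888 and client port 2181:

```properties
server.1=nifi-node1.example.com:2888:3888;2181
server.2=nifi-node2.example.com:2888:3888;2181
server.3=nifi-node3.example.com:2888:3888;2181
```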

4b. Create the ZooKeeper myid file

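For example, on node 1 (use 2 on node 2 and 3 on node 3):

```bash
# The dataDir for the embedded ZooKeeper defaults to state/zookeeper
mkdir -p "${NIFI_HOME}/state/zookeeper"
echo 1 > "${NIFI_HOME}/state/zookeeper/myid"
```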

The myid number must match the server.N entry for that node.

Step 5 — Configure State Management (ALL 3 nodes)

Edit ${NIFI_HOME}/conf/state-management.xml. Find the zk-provider cluster-provider section and set the connect string:

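Roughly the stock zk-provider block with the Connect String filled in; the other property values shown are the shipped defaults:

```xml
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">nifi-node1.example.com:2181,nifi-node2.example.com:2181,nifi-node3.example.com:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
```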

Step 6 — Cross-import TLS Certificates for Cluster Communication

NiFi 2.x auto-generates a self-signed certificate per node on first start. Since each node has its own CA, the nodes don't trust each other by default. You must export each node's certificate and import it into every node's truststore.

6a. Initial start to generate certificates (ALL 3 nodes)

Start NiFi once on each node so it generates keystore.p12 and truststore.p12:

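A sketch; the sleep is just a rough allowance for certificate generation:

```bash
"${NIFI_HOME}/bin/nifi.sh" start
# Give it a minute to write conf/keystore.p12 and conf/truststore.p12, then stop
sleep 60
"${NIFI_HOME}/bin/nifi.sh" stop
```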

Verify the certs were created:

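Both files should now exist under conf/:

```bash
ls -l "${NIFI_HOME}/conf/keystore.p12" "${NIFI_HOME}/conf/truststore.p12"
```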

6b. Export each node's certificate (run on each respective node)

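A sketch; the keystore password is auto-generated into nifi.properties, and the alias of the generated entry should be confirmed with keytool -list before exporting (the <ALIAS> placeholder below is an assumption):

```bash
# Read the auto-generated keystore password from nifi.properties
KS_PASS=$(grep '^nifi.security.keystorePasswd=' "${NIFI_HOME}/conf/nifi.properties" | cut -d= -f2-)

# Find the generated entry's alias
keytool -list -keystore "${NIFI_HOME}/conf/keystore.p12" -storetype PKCS12 -storepass "$KS_PASS"

# Export it, naming the file after this node (e.g. nifi-node1.der)
keytool -exportcert -keystore "${NIFI_HOME}/conf/keystore.p12" -storetype PKCS12 \
  -storepass "$KS_PASS" -alias <ALIAS> -file /tmp/$(hostname -s).der
```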

6c. Distribute certificates across all nodes

Run from NODE1 (assumes the SSH key at ~/.ssh/nifi-cluster-key is authorized for nifiuser on the other nodes):

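A sketch that pulls the other nodes' certs to node 1, then pushes the full set back out; it assumes the .der files were named after each node's short hostname as in step 6b:

```bash
KEY=~/.ssh/nifi-cluster-key

# Pull node2's and node3's certs onto node1
scp -i "$KEY" nifiuser@nifi-node2.example.com:/tmp/nifi-node2.der /tmp/
scp -i "$KEY" nifiuser@nifi-node3.example.com:/tmp/nifi-node3.der /tmp/

# Push all three certs to node2 and node3
for host in nifi-node2.example.com nifi-node3.example.com; do
  scp -i "$KEY" /tmp/nifi-node*.der nifiuser@${host}:/tmp/
done
```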

Verify all 3 .der files exist on each node:

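On every node you should see three files:

```bash
ls -l /tmp/nifi-node*.der
```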

6d. Import all Certificates into Each Node's Truststore (ALL 3 nodes)

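A sketch, again reading the auto-generated truststore password from nifi.properties:

```bash
TS_PASS=$(grep '^nifi.security.truststorePasswd=' "${NIFI_HOME}/conf/nifi.properties" | cut -d= -f2-)

# Import all three node certs under distinct aliases
for n in 1 2 3; do
  keytool -importcert -noprompt -storetype PKCS12 \
    -keystore "${NIFI_HOME}/conf/truststore.p12" -storepass "$TS_PASS" \
    -alias nifi-node${n} -file /tmp/nifi-node${n}.der
done
```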

Importing a node's own cert may warn about a duplicate — that's fine.

6e. Verify Truststores

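List the truststore contents on each node:

```bash
TS_PASS=$(grep '^nifi.security.truststorePasswd=' "${NIFI_HOME}/conf/nifi.properties" | cut -d= -f2-)
keytool -list -keystore "${NIFI_HOME}/conf/truststore.p12" -storetype PKCS12 -storepass "$TS_PASS"
```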

You should see entries for node1, node2, and node3 (plus the original generated entry).

Step 7 — Set login credentials (ALL 3 nodes)

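Using the built-in single-user credentials command with the placeholder credentials from this guide:

```bash
"${NIFI_HOME}/bin/nifi.sh" set-single-user-credentials admin 'ChangeMe@123'
```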

Use the same username/password on all nodes. In single-user mode with a cluster, credentials must match.

Step 8 — Start the NiFi cluster (ALL 3 nodes)

Start all three nodes around the same time so cluster election can proceed:

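On each node:

```bash
"${NIFI_HOME}/bin/nifi.sh" start
```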

Step 9 — Verify the cluster

Check NiFi status on each node:

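```bash
"${NIFI_HOME}/bin/nifi.sh" status
```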

Check port 8443 is listening:

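```bash
ss -tlnp | grep 8443
```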

Check logs for cluster join:

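A rough filter; the exact log wording varies between releases:

```bash
grep -iE 'cluster coordinator|connected' "${NIFI_HOME}/logs/nifi-app.log" | tail -n 20
```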

Look for messages like "Node connected" and "Cluster Coordinator elected".

Access the Web UI:

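Any of the three nodes serves the UI:

```
https://nifi-node1.example.com:8443/nifi
```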

Login with admin / ChangeMe@123. Once in, open the hamburger menu (top-left) → Cluster to confirm all 3 nodes appear with status CONNECTED.

Summary — what differs per node

Setting                                   Node 1                    Node 2                    Node 3
nifi.web.https.host                       nifi-node1.example.com    nifi-node2.example.com    nifi-node3.example.com
nifi.cluster.node.address                 nifi-node1.example.com    nifi-node2.example.com    nifi-node3.example.com
state/zookeeper/myid                      1                         2                         3
keystore.p12 / truststore.p12 passwords   auto-generated per node   auto-generated per node   auto-generated per node

Everything else (sensitive props key, ZK connect string, cluster settings, login credentials) is identical across all three nodes.

Troubleshooting

  • nifi.sensitive.props.key error. NiFi 2.x requires this in cluster mode. Generate with openssl rand -hex 16 and use the same value on all nodes.
  • certificate_unknown / PKIX path validation errors. Each node auto-generates its own self-signed cert. Cross-import all certs into each node's truststore (Step 6).
  • TLS toolkit missing. NiFi 2.x removed the TLS toolkit. Use the manual cert export/import approach above.
  • ZooKeeper issues. Check logs/nifi-app.log for ZK errors. Verify myid files match the server.N entries.
  • Flow election timeout. If nodes start at very different times, increase nifi.cluster.flow.election.max.wait.time (e.g. 5 mins).
  • "Already running" after a crash. Remove the PID file: rm -f ${NIFI_HOME}/run/nifi.pid, then start again.
  • Browser TLS warning. NiFi 2.x uses self-signed certs by default — accept the browser warning to proceed, or replace the auto-generated keystore with a CA-signed cert.