Deployment of Standalone Kafka
Get Kafka
Download the standalone binaries of Kafka 3.7.1 and extract them.
$ tar -xzf kafka_2.12-3.7.1.tgz
$ cd kafka_2.12-3.7.1
Start the Kafka Environment
Kafka and ZooKeeper require Java to run. Ensure that Java 8 or higher is installed by running the following command.
$ java -version
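If Java is missing, install a JDK before continuing. For a quick scriptable check, a minimal sketch that works in most POSIX shells:
# Print a warning if no java binary is on the PATH
$ command -v java >/dev/null 2>&1 || echo "Java not found: install JDK 8 or newer"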
Kafka 3.7.1 can run in either ZooKeeper mode or KRaft mode. To get started, follow one of the two sections below, but not both.
Kafka with ZooKeeper
Kafka uses ZooKeeper to manage and coordinate the Kafka brokers. Before starting the services, ensure that the ZooKeeper configuration is correctly set.
- Configure the ZooKeeper Properties: Open the ZooKeeper configuration file (config/zookeeper.properties); the default settings are sufficient for a single-node setup. Here's a basic configuration.
#config/zookeeper.properties
# Data storage location for ZooKeeper
dataDir=/tmp/zookeeper
# Port on which ZooKeeper will listen
clientPort=2181
- Start ZooKeeper Service: Once ZooKeeper is configured, start the ZooKeeper service. This service must be started first, as Kafka depends on ZooKeeper for broker management.
$ bin/zookeeper-server-start.sh config/zookeeper.properties
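Before moving on, you can confirm ZooKeeper is listening by sending it the srvr four-letter command (this assumes nc/netcat is available; srvr is whitelisted by default in recent ZooKeeper versions):
$ echo srvr | nc localhost 2181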
- Configure the Kafka Broker Properties: Open the Kafka server configuration file (config/server.properties) and adjust the key settings as needed:
- broker.id: Unique identifier for each Kafka broker.
- log.dirs: Directory where Kafka stores its logs (messages).
- zookeeper.connect: ZooKeeper connection string (usually localhost:2181 for single-node setups).
#config/server.properties
broker.id=0
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
- Start the Kafka Broker: Open a new terminal session and run the Kafka broker service using the following command.
# Start the Kafka broker service
$ bin/kafka-server-start.sh config/server.properties
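To verify the broker came up and is reachable on port 9092, you can use one of the bundled CLI tools, for example:
$ bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092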
Kafka with KRaft
KRaft mode is Kafka's newer architecture, which eliminates the dependency on ZooKeeper. In KRaft mode, Kafka manages metadata natively using its Raft-based consensus protocol.
- Configure the Kafka Controllers: Controllers in KRaft mode are responsible for managing metadata across the cluster. To configure a Kafka controller, follow these steps:
- Edit the Controller Configuration File: Edit the config/kraft/controller.properties file to define your controller configuration. Example configuration for a controller:
process.roles=controller
node.id=1
controller.quorum.voters=1@localhost:9093
# The controller's listener must be named in controller.listener.names
listeners=CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
log.dirs=/tmp/kraft-controller-logs
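A single controller is fine for local testing, but production clusters typically run three. A sketch of the quorum setting in that case (controller1 through controller3 are hypothetical host names; node.id must differ on each node):
# Hypothetical three-node controller quorum
controller.quorum.voters=1@controller1:9093,2@controller2:9093,3@controller3:9093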
- Configure the Kafka Broker: Once the controllers are set up, configure the brokers, which handle the actual message storage and processing.
- Edit the Broker Configuration File: Edit the config/kraft/broker.properties file to define your broker configuration. Example configuration for a broker:
process.roles=broker
node.id=2
listeners=PLAINTEXT://localhost:9092
controller.quorum.voters=1@localhost:9093
# Brokers must know the controller listener name and its security protocol
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
log.dirs=/tmp/kraft-broker-logs
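For a single-process test setup, Kafka also supports a combined mode in which one node acts as both broker and controller (the distribution ships a config/kraft/server.properties for this). A minimal sketch along those lines:
# Combined mode (sketch): one node is both broker and controller
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
log.dirs=/tmp/kraft-combined-logs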
- Initialize the Kafka Storage: In KRaft mode, Kafka uses a unique cluster.id to manage metadata. Before starting any services, you must generate a cluster ID and format the storage directories for the Kafka controllers and brokers.
# Generate a cluster UUID on any one of the nodes:
$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
# Format log directories for the controller:
$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/controller.properties
# Format log directories for the broker:
$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/broker.properties
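Formatting writes a meta.properties file (recording the cluster ID and node ID) into each log directory; you can inspect it to confirm the step succeeded (the path assumes the log.dirs value used above):
$ cat /tmp/kraft-controller-logs/meta.properties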
- Start the Kafka Controllers: Once the configuration is complete, start the Kafka controller service. In KRaft mode, the controllers must be started first, as they manage the metadata for the brokers.
$ bin/kafka-server-start.sh config/kraft/controller.properties
- Start the Broker: Open another terminal session and run the broker service.
$ bin/kafka-server-start.sh config/kraft/broker.properties
Create a Topic to Store your Events
A topic can be thought of as a folder in a file system, and the individual events are akin to files stored within that folder.
Before you can start producing and consuming events, you need to create a topic. To do so, open a new terminal and run the following command.
$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
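With no further arguments, the topic is created with the broker defaults of one partition and replication factor 1. You can also set these explicitly, for example (quickstart-events-3p is just an illustrative name):
$ bin/kafka-topics.sh --create --topic quickstart-events-3p --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092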
Kafka's command-line tools offer several options for various operations. You can get a list of these options by running the kafka-topics.sh command without any arguments. For instance, to get details about the newly created topic, such as its partition count, you can use the following.
$ bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
Topic: quickstart-events TopicId: NPmZHyhbR9y00wMglMH2sg PartitionCount: 1 ReplicationFactor: 1 Configs:
Topic: quickstart-events Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Write Some Events into the Topic
A Kafka client interacts with the Kafka brokers over the network to write or read events. When the brokers receive the events, they store them in a durable and fault-tolerant way, ensuring the data remains available for as long as necessary, even indefinitely if configured.
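How long events are kept is governed by retention settings: the broker-level default log.retention.hours is 168 (seven days), and individual topics can override it with retention.ms via kafka-configs.sh, for example:
# Set a seven-day retention (604800000 ms) on the quickstart topic
$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name quickstart-events --add-config retention.ms=604800000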
To write some events to your Kafka topic, you can use the Kafka console producer. Each line of input that you provide will be written as a separate event to the specified topic.
Run the following command to start the console producer client and write some events.
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
Now, you can start typing messages. For example:
>This is my first event
>This is my second event
Each line will be treated as an individual event and sent to the quickstart-events topic.
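The console producer can also send keyed events through its parse.key and key.separator properties; a sketch (the : separator is an arbitrary choice):
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092 --property parse.key=true --property key.separator=:
>user1:This event has a key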
To stop the producer client, simply press Ctrl-C at any time.
Read the Events
To read the events you've written to your Kafka topic, you can use the Kafka console consumer. This client connects to the Kafka brokers, retrieves the events, and displays them in the terminal.
Open a new terminal session and run the following command to start the console consumer.
$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
The console will output the events you previously produced.
This is my first event
This is my second event
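To also display each event's key and timestamp, pass formatter properties to the consumer, for example:
$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092 --property print.key=true --property print.timestamp=true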
Like the producer, you can stop the consumer client by pressing Ctrl-C at any time.
Terminate the Kafka Environment
Now that you have reached the end of the quick start, feel free to tear down the Kafka environment, or continue experimenting.
- Stop the producer and consumer clients with Ctrl-C, if you haven't done so already.
- Stop the Kafka broker with Ctrl-C.
- Lastly, if the Kafka with ZooKeeper section was followed, stop the ZooKeeper server with Ctrl-C.
If you also want to delete the data of your local Kafka environment, including any events you created along the way, run the following command:
$ rm -rf /tmp/kafka-logs /tmp/zookeeper /tmp/kraft-controller-logs /tmp/kraft-broker-logs