Deployment of Standalone Kafka

Get Kafka

Download the standalone binaries of Kafka 3.7.1 and extract them.

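Assuming the Scala 2.13 build and the Apache archive mirror, the download and extraction look like this:

```bash
# Download the Kafka 3.7.1 release (Scala 2.13 build) and unpack it
$ wget https://archive.apache.org/dist/kafka/3.7.1/kafka_2.13-3.7.1.tgz
$ tar -xzf kafka_2.13-3.7.1.tgz
$ cd kafka_2.13-3.7.1
```

All subsequent commands in this guide are run from the extracted directory.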

Start the Kafka Environment

Kafka and ZooKeeper require Java to run. Ensure that Java 8 or higher is installed by running the following command.

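For example:

```bash
$ java -version
```

If Java is installed, this prints the runtime version; otherwise, install a JDK (version 8 or higher) before continuing.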

Kafka 3.7.1 can run in either ZooKeeper mode or KRaft mode. To get started, follow one of the two sections below, but not both.

Kafka with ZooKeeper

Kafka uses ZooKeeper to manage and coordinate the Kafka brokers. Before starting the services, ensure that the ZooKeeper configuration is correctly set.

  1. Configure the ZooKeeper Properties: Open the ZooKeeper configuration file (config/zookeeper.properties); the default settings should be sufficient for a single-node setup. Here’s a basic configuration.
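A minimal configuration, matching the defaults shipped with Kafka (the dataDir path is only a suggestion):

```properties
# Directory where ZooKeeper stores its data
dataDir=/tmp/zookeeper
# Port on which clients (the Kafka broker) connect
clientPort=2181
# Disable the per-IP connection limit; fine for a non-production setup
maxClientCnxns=0
```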
  2. Start the ZooKeeper Service: Once ZooKeeper is configured, start it. This service must be started first, as Kafka depends on ZooKeeper for broker management.
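Using the helper script shipped in the bin directory:

```bash
$ bin/zookeeper-server-start.sh config/zookeeper.properties
```

Leave this terminal session running; ZooKeeper stays in the foreground.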
  3. Configure the Kafka Broker Properties: Open the Kafka server configuration file (config/server.properties) and review the key settings you may want to modify:
    1. broker.id: Unique identifier for each Kafka broker.
    2. log.dirs: Directory to store the Kafka logs (messages).
    3. zookeeper.connect: ZooKeeper connection string (usually localhost:2181 for single-node setups).
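A minimal single-node example (the log.dirs path is only a suggestion):

```properties
# Unique identifier for this broker
broker.id=0
# Where the broker stores its message data
log.dirs=/tmp/kafka-logs
# ZooKeeper connection string
zookeeper.connect=localhost:2181
```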
  4. Start the Kafka Broker: Open a new terminal session and run the Kafka broker service using the following command.
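For example:

```bash
$ bin/kafka-server-start.sh config/server.properties
```

Once both services are running, you have a basic single-node Kafka environment ready to use.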

Kafka with KRaft

The KRaft mode is Kafka’s newer architecture that eliminates the dependency on ZooKeeper. In the KRaft mode, Kafka manages metadata natively using its Raft-based consensus protocol.

  1. Configure the Kafka Controllers: Controllers in the KRaft mode are responsible for managing metadata across the cluster. To configure a Kafka controller, follow these steps:
    1. Edit the Controller Configuration File: Open the config/kraft/controller.properties file and define your controller configuration:
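A single-controller example, based on the stock KRaft controller configuration (node.id, the port, and the log.dirs path are assumptions you can change):

```properties
# This process acts as a controller only
process.roles=controller
# Unique id of this node within the cluster
node.id=1
# The controller quorum: id@host:port for each voting controller
controller.quorum.voters=1@localhost:9093
# Listener used for controller traffic
listeners=CONTROLLER://:9093
controller.listener.names=CONTROLLER
# Where the controller stores cluster metadata
log.dirs=/tmp/kraft-controller-logs
```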
  2. Configure the Kafka Broker: Once the controllers are set up, configure the brokers, which handle the actual message storage and processing.
    1. Edit the Broker Configuration File: Open the config/kraft/broker.properties file and define your broker configuration:
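A single-broker example, based on the stock KRaft broker configuration (node.id, the ports, and the log.dirs path are assumptions you can change; controller.quorum.voters must match the controller's settings):

```properties
# This process acts as a broker only
process.roles=broker
# Unique id of this node within the cluster
node.id=2
# Must point at the controller quorum configured above
controller.quorum.voters=1@localhost:9093
# Listener used by clients
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
controller.listener.names=CONTROLLER
# Where the broker stores its message data
log.dirs=/tmp/kraft-broker-logs
```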
  3. Initialize the Kafka Storage: In the KRaft mode, Kafka uses a unique cluster ID to manage metadata. Before starting any services, you must generate a cluster ID and format the storage directories for the Kafka controllers and brokers.
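For example, using the kafka-storage.sh tool to generate one cluster ID and format both storage directories with it (the property-file paths follow the controller and broker steps above):

```bash
# Generate a random cluster ID and keep it in a shell variable
$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
# Format the controller's and the broker's storage with the same cluster ID
$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/controller.properties
$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/broker.properties
```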
  4. Start the Kafka Controllers: Once the configuration is complete, start the Kafka controller services. In the KRaft mode, the controllers must be started first, as they manage the metadata for the brokers.
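For example:

```bash
$ bin/kafka-server-start.sh config/kraft/controller.properties
```

Leave this terminal session running; the controller stays in the foreground.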
  5. Start the Broker: Open another terminal session and run the broker service.
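For example:

```bash
$ bin/kafka-server-start.sh config/kraft/broker.properties
```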

Create a Topic to Store your Events

A topic can be thought of as a folder in a file system, and the individual events are akin to files stored within that folder.

Before you can start producing and consuming events, you need to create a topic. To do so, open a new terminal and run the following command.

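For example, to create a topic named quickstart-events on the local broker:

```bash
$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
```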

Kafka's command-line tools offer several options for various operations. You can get a list of these options by running the kafka-topics.sh command without any arguments. For instance, to get details about the newly created topic, such as its partition count, you can use the following.

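For example:

```bash
$ bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
```

The output includes the topic's partition count, replication factor, and per-partition leader assignments.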

Write Some Events into the Topic

A Kafka client interacts with the Kafka brokers over the network to write or read events. When the brokers receive the events, they store them in a durable and fault-tolerant way, ensuring the data remains available for as long as necessary, even indefinitely if configured.

To write some events to your Kafka topic, you can use the Kafka console producer. Each line of input that you provide will be written as a separate event to the specified topic.

Run the following command to start the console producer client and write some events.

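For example:

```bash
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
```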

Now, you can start typing messages. For example:

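The producer displays a > prompt for each new event; type one message per line:

```text
> This is my first event
> This is my second event
```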

Each line will be treated as an individual event and sent to the quickstart-events topic.

To stop the producer client, simply press Ctrl-C at any time.

Read the Events

To read the events you've written to your Kafka topic, you can use the Kafka console consumer. This client connects to the Kafka brokers, retrieves the events, and displays them in the terminal.

Open a new terminal session and run the following command to start the console consumer.

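For example (the --from-beginning flag makes the consumer read the topic from its first event rather than only new ones):

```bash
$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
```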

The console will output the events you previously produced.

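Assuming the sample messages from the producer step, the output looks like this:

```text
This is my first event
This is my second event
```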

Like the producer, you can stop the consumer client by pressing Ctrl-C at any time.

Terminate the Kafka Environment

Now that you have reached the end of the quick start, feel free to tear down the Kafka environment—or continue playing around.

  1. Stop the producer and consumer clients with Ctrl-C, if you haven't done so already.
  2. Stop the Kafka broker with Ctrl-C. If you are running in KRaft mode, also stop the Kafka controller with Ctrl-C.
  3. Lastly, if the Kafka with ZooKeeper section was followed, stop the ZooKeeper server with Ctrl-C.

If you also want to delete any data of your local Kafka environment, including any events you have created along the way, run the command:

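Assuming the data directories live under /tmp (adjust the paths to match your dataDir and log.dirs settings):

```bash
$ rm -rf /tmp/kafka-logs /tmp/zookeeper /tmp/kraft-controller-logs /tmp/kraft-broker-logs
```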