Podman: Running Kafka in WSL for Local Development
Podman has become my container runtime of choice, particularly when working in WSL on Windows. It offers a Docker-compatible CLI without requiring a long-running daemon, which makes it a good fit for local development, experimentation, and lightweight infrastructure testing.
In this post, I’ll walk through the exact steps I took to get a Kafka broker running inside a Podman container in WSL (Ubuntu). This setup is ideal for local testing, proof-of-concept work, or integration testing with tools such as Kafka Connect, TPT, or custom consumers and producers.
1. Installing Podman in WSL
First, install Podman using the Ubuntu package manager inside WSL:
sudo apt -y install podman
Once installed, you can verify everything is working with:
podman --version
Podman runs in rootless mode by default, which is one of its key advantages over Docker from a security and simplicity standpoint.
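You can confirm rootless operation directly from podman info. The sketch below is guarded so it degrades gracefully on a machine where Podman isn't installed:

```shell
# Print whether Podman is running rootless ("true" in the default
# WSL setup); fall back to a notice when podman isn't on PATH.
if command -v podman >/dev/null 2>&1; then
  podman info --format '{{.Host.Security.Rootless}}'
else
  echo "podman not installed"
fi
```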
2. Searching for a Kafka Image
Next, search the container registries for available Kafka images:
podman search kafka
This command queries multiple container registries (such as Docker Hub and Quay) and returns a list of matching images.
From the results, I selected:
spotify/kafka
I chose this image because it bundles both Kafka and Zookeeper in a single container, which keeps the setup simple for local development. While this isn’t how you’d run Kafka in production, it’s perfectly adequate for testing and learning.
3. Pulling the Kafka Image
To download the image locally:
podman pull spotify/kafka
Podman stores images in its local image cache, just like Docker, but without requiring a daemon process.
4. Running Kafka with Exposed Ports
Now we can start the Kafka container. In Podman (as in Docker), mapping container ports onto the host with -p is called publishing ports; the "advertised" host and port, by contrast, are Kafka settings that tell clients how to reach the broker.
podman run -d \
--name kafka \
-p 2181:2181 \
-p 9092:9092 \
-e ADVERTISED_HOST=localhost \
-e ADVERTISED_PORT=9092 \
docker.io/spotify/kafka
This command starts Kafka and Zookeeper and makes them accessible from your WSL host (and, by extension, from Windows if WSL networking is configured appropriately).
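For repeatability, the run command can live in a small script. This is a sketch with illustrative variable names; it assembles the command and echoes it, so you can review it before swapping the final echo for actual execution:

```shell
#!/bin/sh
# Sketch: keep the port and advertised-listener settings in one place.
# The variable names here are illustrative.
CONTAINER_NAME=kafka
ZK_PORT=2181
BROKER_PORT=9092
ADVERTISED_HOST=localhost

# Assemble the run command once; echo it so the script documents itself.
# Replace the final echo with eval "$run_cmd" to actually start it.
run_cmd="podman run -d --name $CONTAINER_NAME \
  -p $ZK_PORT:$ZK_PORT -p $BROKER_PORT:$BROKER_PORT \
  -e ADVERTISED_HOST=$ADVERTISED_HOST -e ADVERTISED_PORT=$BROKER_PORT \
  docker.io/spotify/kafka"

echo "$run_cmd"
```

Keeping the advertised port and the published port in a single variable avoids the classic mismatch where clients are told to connect to a port that was never published.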
5. Command Breakdown
Here’s what each part of the command is doing:
-d
Runs the container in the background (detached mode).

--name kafka
Assigns a friendly name to the container so it's easy to manage later.

-p 2181:2181
Exposes Zookeeper on port 2181.

-p 9092:9092
Exposes the Kafka broker on port 9092.

ADVERTISED_HOST=localhost
Ensures Kafka advertises localhost as the broker address. This is critical when connecting from outside the container, as Kafka clients rely on the advertised host rather than the container's internal hostname.

ADVERTISED_PORT=9092
Specifies the port Kafka advertises to clients.
Without the advertised host and port settings, Kafka clients often fail to connect because they receive an unreachable container hostname.
6. Managing the Container
To view running containers:
podman ps
To start and stop the Kafka container:
podman start kafka
podman stop kafka
To list all containers, including stopped ones:
podman ps -a

This is useful for checking container status or cleaning up old test containers.
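These management commands can be wrapped in a small helper. kafka_ctl is a hypothetical name, and with DRY_RUN=1 it only prints the podman command it would run, which is handy for scripting:

```shell
#!/bin/sh
# Hypothetical helper: kafka_ctl start|stop|status. With DRY_RUN=1
# it prints the podman command instead of executing it.
kafka_ctl() {
  case "$1" in
    start|stop) cmd="podman $1 kafka" ;;
    status)     cmd="podman ps -a --filter name=kafka" ;;
    *) echo "usage: kafka_ctl start|stop|status" >&2; return 1 ;;
  esac
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}

DRY_RUN=1 kafka_ctl start   # prints: podman start kafka
```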
7. Final Thoughts
This setup gives you a fully functional Kafka environment running locally inside WSL using Podman, with no Docker daemon required. It’s lightweight, repeatable, and works well for:
Kafka client development
Connector testing
Batch and streaming integration experiments
Learning and prototyping
For production or more realistic environments, you'd separate Kafka and Zookeeper (or move to KRaft mode, which removes Zookeeper entirely), but for local work this approach is fast and effective.
If you're already using Podman for other containers, adding Kafka to the mix is straightforward, and once you've set it up, spinning it up again takes seconds.
8. Testing Kafka with Built-in Client Tools
Once the Kafka container is running, the quickest way to verify everything is working is to use the Kafka command-line tools included inside the container. This avoids any local client installation and confirms that networking and advertised listeners are configured correctly.
8.1 Accessing the Kafka Container
First, open a shell inside the running container:
podman exec -it kafka /bin/bash
You should now be inside the container, typically at a prompt like:
root@<container-id>:/#
8.2 Creating a Test Topic
Create a simple test topic called test-topic:
/kafka/bin/kafka-topics.sh \
--create \
--topic test-topic \
--bootstrap-server localhost:9092 \
--partitions 1 \
--replication-factor 1
To verify the topic was created successfully:
/kafka/bin/kafka-topics.sh \
--list \
--bootstrap-server localhost:9092
You should see:
test-topic
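The same check can be run non-interactively from the WSL host via podman exec (the /kafka/bin path is taken from this image's layout, as used above). The fallbacks keep the command informative when the stack isn't up:

```shell
# List topics from the host without opening an interactive shell.
if command -v podman >/dev/null 2>&1; then
  podman exec kafka /kafka/bin/kafka-topics.sh \
    --list --bootstrap-server localhost:9092 \
    || echo "could not reach the kafka container"
else
  echo "podman not available on this host"
fi
```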
8.3 Producing Messages
Start a console producer and send a few test messages:
/kafka/bin/kafka-console-producer.sh \
--topic test-topic \
--bootstrap-server localhost:9092
Now type a few lines and press Enter after each one:
hello kafka
this is a test
podman + wsl + kafka
Each line is sent as a separate Kafka message.
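Producing can also be scripted from the WSL host by piping lines into the console producer via podman exec, using the same flags as the interactive session above. The trailing fallback prints a notice instead of failing hard when podman or the container isn't available:

```shell
# Non-interactive variant: pipe messages into the console producer.
printf 'hello kafka\nthis is a test\n' \
  | podman exec -i kafka /kafka/bin/kafka-console-producer.sh \
      --topic test-topic --bootstrap-server localhost:9092 \
  || echo "produce failed (is the kafka container running?)"
```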
8.4 Consuming Messages
Open a second terminal window on your WSL host and attach to the container again:
podman exec -it kafka /bin/bash
Start a console consumer:
/kafka/bin/kafka-console-consumer.sh \
--topic test-topic \
--bootstrap-server localhost:9092 \
--from-beginning
You should immediately see the messages you produced earlier:
hello kafka
this is a test
podman + wsl + kafka
This confirms that:
Kafka is running correctly
Topics can be created
Producers and consumers can connect via the advertised host and port
8.5 Testing from the WSL Host (Outside the Container)
Because ports are published to the WSL host, you can also connect from outside the container using any Kafka client configured with:
bootstrap.servers=localhost:9092
This is particularly useful when testing:
Kafka Connect
Custom producers/consumers
Integration tools (e.g. batch or streaming ingestion pipelines)
If clients fail to connect at this point, it’s almost always due to incorrect ADVERTISED_HOST or ADVERTISED_PORT settings.
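Before revisiting those settings, a quick TCP probe using bash's /dev/tcp pseudo-device tells you whether the published port is open at all, which separates networking problems from listener misconfiguration:

```shell
# Check only that something is listening on localhost:9092; this does
# not validate Kafka's protocol, just basic TCP reachability.
if timeout 2 bash -c 'cat < /dev/null > /dev/tcp/localhost/9092' 2>/dev/null; then
  echo "port 9092 is reachable"
else
  echo "port 9092 is not reachable"
fi
```

If the port is reachable but clients still can't connect, the problem is almost certainly on the Kafka side (advertised host/port); if it isn't reachable, look at the podman run port mappings or WSL networking first.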
9. Summary
At this stage, you have:
Kafka and Zookeeper running in Podman on WSL
Ports exposed and advertised correctly
A validated end-to-end Kafka flow using built-in client tools
This setup provides a solid local Kafka environment for experimentation and integration work without the overhead of a full cluster or Docker Desktop.
In future posts, I’ll build on this by:
Connecting Kafka to downstream systems
Using Kafka for batch-style ingestion patterns
Exploring Kafka integration with enterprise data platforms