Accessing Kafka on host machine from minikube pods

There are times when you want to access processes running on your host machine (e.g. databases) from your minikube Kubernetes cluster running in VirtualBox (or any other supported provider)
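
As a quick connectivity check, here is a minimal Go sketch, assuming the VirtualBox driver, where the host machine is typically reachable from the minikube VM (and hence from pods) at 10.0.2.2; the port is whatever your host process, e.g. a Kafka broker, listens on:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// assumption: minikube with the VirtualBox driver, where the host
	// is reachable from the VM at the NAT gateway address 10.0.2.2
	hostAddr := "10.0.2.2:9092"
	conn, err := net.DialTimeout("tcp", hostAddr, 5*time.Second)
	if err != nil {
		fmt.Println("cannot reach host process:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", hostAddr)
}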

For details, continue reading the blog post on Medium

Cheers!


Kafka Streams Interactive Queries

This blog post explores the Interactive Queries feature in Kafka Streams with the help of a practical example. It covers the DSL API and shows how state store information can be exposed via a REST service

For details, check out the blog post on Medium; the source code is available on GitHub

Everything is set up using Docker, including Kafka, Zookeeper, the stream processing services, as well as the producer app

Here is a bird’s-eye view of how the overall solution looks

Cheers!


Kafka Go client: no Producer error during connection?

Although it’s rare, there are times when you actually want to see errors in your code – more importantly, at the right time!

Kafka Go Producer behaviour

You need to be mindful of this while using the Kafka Go client producer. For example, if you were to supply an incorrect value for the Kafka broker …

package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	kafkaBroker := "foo:9092"
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": kafkaBroker})
	if err != nil {
		fmt.Println("producer creation failed ", err.Error())
		return
	}
	topic := "bar"
	partition := kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny}
	msg := &kafka.Message{TopicPartition: partition, Key: []byte("hello"), Value: []byte("world")}
	// Produce only returns an error if the message cannot be enqueued
	// on the internal librdkafka queue - not on connection failures
	err = p.Produce(msg, nil)
	fmt.Println("done…")
}

… assuming you don’t have a Kafka broker at foo:9092, you would expect the above code to respond with producer creation failed along with the specific error details. Instead, the flow carries on and ends by printing done. This is because Produce returns an error only when the message cannot be enqueued on the internal librdkafka queue

You should…

Hook into the producer delivery reports channel (using Producer.Events()) in order to catch this error – better late than never, right?

package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	kafkaBroker := "foo:9092"
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": kafkaBroker})
	if err != nil {
		fmt.Println("producer creation failed ", err.Error())
		return
	}
	topic := "bar"
	partition := kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny}
	msg := &kafka.Message{TopicPartition: partition, Key: []byte("hello"), Value: []byte("world")}
	err = p.Produce(msg, nil)
	fmt.Println("done…")

	// block on the Events channel - connection failures surface
	// here as kafka.Error events
	event := <-p.Events()
	switch e := event.(type) {
	case kafka.Error:
		fmt.Println("producer error", e.String())
	default:
		fmt.Println("Kafka producer event", e)
	}
}

Now, the error is duly reported – producer error foo:9092/bootstrap: Failed to resolve 'foo:9092': nodename nor servname provided, or not known (after 1543569582778ms in state INIT)

You can also provide your own channel to the Produce method to receive delivery events. In the snippet above, only the kafka.Error event is handled; the other possible event type, *kafka.Message, is ignored under the default case umbrella
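
To make that concrete, here is a minimal sketch (reusing the same hypothetical broker and topic as above) that passes a dedicated delivery channel to Produce and inspects the report. Note that for an unreachable broker, the report only arrives once the message times out (per message.timeout.ms):

package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "foo:9092"})
	if err != nil {
		fmt.Println("producer creation failed ", err.Error())
		return
	}
	topic := "bar"
	msg := &kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Key:            []byte("hello"),
		Value:          []byte("world"),
	}

	// pass a dedicated channel to Produce - the delivery report for
	// this particular message arrives on it as a *kafka.Message
	deliveryChan := make(chan kafka.Event, 1)
	if err := p.Produce(msg, deliveryChan); err != nil {
		fmt.Println("enqueue failed", err)
		return
	}
	if m, ok := (<-deliveryChan).(*kafka.Message); ok {
		if m.TopicPartition.Error != nil {
			fmt.Println("delivery failed:", m.TopicPartition.Error)
		} else {
			fmt.Println("delivered to", m.TopicPartition)
		}
	}
}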

Cheers!



Kafka Go client quick start

Here is a Docker-based example for Kafka using the Go client

Overview

  • Producer
    • uses the traditional function-based API for pushing messages to Kafka (a channel-based approach is another option)
    • receives delivery notifications on the (default) Events channel (a custom channel is another option, or you can choose not to get notified at all)
  • Consumer uses the Events channel – Poll()ing is another alternative (see the sketch after this list)
  • Docker Compose to bootstrap the sample app (producer and consumer) as a single unit
  • BYOK (bring your own Kafka) for trying this out
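
For illustration, here is a minimal sketch of the Poll()-based consumer alternative; the broker address, topic, and consumer group below are assumptions, not the sample app’s actual values:

package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092", // assumption: local broker
		"group.id":          "test-group",     // hypothetical group id
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		fmt.Println("consumer creation failed", err)
		return
	}
	defer c.Close()

	if err := c.Subscribe("bar", nil); err != nil {
		fmt.Println("subscribe failed", err)
		return
	}

	for {
		// Poll waits up to 100 ms for the next event
		switch e := c.Poll(100).(type) {
		case *kafka.Message:
			fmt.Printf("got message on %s: %s\n", e.TopicPartition, string(e.Value))
		case kafka.Error:
			fmt.Println("consumer error", e)
			return
		}
	}
}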

To run

Check out the README – it’s super easy to get going, thanks to Docker Compose!


Cheers!

 


Redis 5 – bootstrapping a Redis Cluster with Docker

With Redis 5 (in RC at the time of writing), the cluster creation utility is now available as part of redis-cli, which is easier compared to the previous Ruby-based way of doing it (using redis-trib)

Check out the release notes for more info

Here is the command list – docker run -i --rm redis:5.0-rc redis-cli --cluster help

(screenshot: output of redis-cli --cluster help)

It’s super simple to create a 6-node Redis Cluster (3 masters and corresponding slaves) on Docker


#------------ bootstrap the cluster nodes --------------------
start_cmd='redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --appendonly yes'
redis_image='redis:5.0-rc'
network_name='redis_cluster_net'
docker network create $network_name
echo $network_name " created"
for port in `seq 6379 6384`; do \
    docker run -d --name "redis-"$port -p $port:6379 --net $network_name $redis_image $start_cmd;
    echo "created redis cluster node redis-"$port
done
#------------ create the cluster ------------------------------
cluster_hosts=''
for port in `seq 6379 6384`; do \
    hostip=`docker inspect -f '{{(index .NetworkSettings.Networks "redis_cluster_net").IPAddress}}' "redis-"$port`;
    echo "IP for cluster node redis-"$port "is" $hostip
    cluster_hosts="$cluster_hosts$hostip:6379 ";
done
echo "cluster hosts "$cluster_hosts
echo "creating cluster…."
echo 'yes' | docker run -i --rm --net $network_name $redis_image redis-cli --cluster create $cluster_hosts --cluster-replicas 1;

Once executed, you should see output similar to this


redis_cluster_net created
f66517db305c3f270f756f1fbd40611d123fc69c040695147bb43cc153491093
created redis cluster node redis-6379
726672fad3d815e7a07ac892acd7bb412e91193b28527654c483cd5debbfd011
created redis cluster node redis-6380
75d8621153b3cbb7962fcf90a5a35108fe042a05e9a78b920c2655caa5a1f7ec
created redis cluster node redis-6381
d9df8c47ccaf21470bcbc2b89f485e1cd9f68917019fa6ba7ebbac59b792b41b
created redis cluster node redis-6382
21edb22225abc54140e437091ddb19cd452542e3d82e7839470a3bbf6a0ddbf7
created redis cluster node redis-6383
5cdd12537f954aa5df4bd0c173f0670dd126c4c925336c7e9054d147508c12d9
created redis cluster node redis-6384
IP for cluster node redis-6379 is 172.27.0.2
IP for cluster node redis-6380 is 172.27.0.3
IP for cluster node redis-6381 is 172.27.0.4
IP for cluster node redis-6382 is 172.27.0.5
IP for cluster node redis-6383 is 172.27.0.6
IP for cluster node redis-6384 is 172.27.0.7
cluster hosts 172.27.0.2:6379 172.27.0.3:6379 172.27.0.4:6379 172.27.0.5:6379 172.27.0.6:6379 172.27.0.7:6379
creating cluster….
>>> Performing hash slots allocation on 6 nodes…
Master[0] -> Slots 0 – 5460
Master[1] -> Slots 5461 – 10922
Master[2] -> Slots 10923 – 16383
Adding replica 172.27.0.5:6379 to 172.27.0.2:6379
Adding replica 172.27.0.6:6379 to 172.27.0.3:6379
Adding replica 172.27.0.7:6379 to 172.27.0.4:6379
M: 78da8a4a346c3e43d2a013cfc1f8b17201ecaaf9 172.27.0.2:6379
slots:[0-5460] (5461 slots) master
M: 43b4c865269d9d57b840ae6d90ef84d74678eee1 172.27.0.3:6379
slots:[5461-10922] (5462 slots) master
M: 69801d3672c7dbc24fb670601ca088d701ff0902 172.27.0.4:6379
slots:[10923-16383] (5461 slots) master
S: 9934e5d049ceb6088c756d2faa72b76196d8503a 172.27.0.5:6379
replicates 78da8a4a346c3e43d2a013cfc1f8b17201ecaaf9
S: 12cef6f9c1c212a209eaef742c5a7c91aec32c37 172.27.0.6:6379
replicates 43b4c865269d9d57b840ae6d90ef84d74678eee1
S: 73eb4b587145bb2ebeae65679591ea11736cd212 172.27.0.7:6379
replicates 69801d3672c7dbc24fb670601ca088d701ff0902
Can I set the above configuration? (type 'yes' to accept): >>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 172.27.0.2:6379)
M: 78da8a4a346c3e43d2a013cfc1f8b17201ecaaf9 172.27.0.2:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 69801d3672c7dbc24fb670601ca088d701ff0902 172.27.0.4:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 43b4c865269d9d57b840ae6d90ef84d74678eee1 172.27.0.3:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 73eb4b587145bb2ebeae65679591ea11736cd212 172.27.0.7:6379
slots: (0 slots) slave
replicates 69801d3672c7dbc24fb670601ca088d701ff0902
S: 12cef6f9c1c212a209eaef742c5a7c91aec32c37 172.27.0.6:6379
slots: (0 slots) slave
replicates 43b4c865269d9d57b840ae6d90ef84d74678eee1
S: 9934e5d049ceb6088c756d2faa72b76196d8503a 172.27.0.5:6379
slots: (0 slots) slave
replicates 78da8a4a346c3e43d2a013cfc1f8b17201ecaaf9
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.

To check the cluster, you can run

  • docker run -i --rm --net redis_cluster_net redis:5.0-rc redis-cli --cluster check localhost:6380
  • docker run -i --rm --net redis_cluster_net redis:5.0-rc redis-cli --cluster info localhost:6380

Cheers!


(eBook) Practical Redis: first few chapters released!

I am happy to announce that the first few chapters of Practical Redis are now available

About Practical Redis


It is a hands-on, code-driven guide to Redis where each chapter is based on an application (of simple to medium complexity) that demonstrates the usage of Redis and its capabilities (data structures, commands, etc.).

The applications in the book are based on Java and Golang. The Java-based Redis client libraries used in this book include Jedis, Redisson, and Lettuce, while go-redis is used as the client library for Golang. Docker and Docker Compose are used to deploy the application stack (along with Redis)

 

Here is a quick outline of the book contents

  • Hello Redis – quick tour of Redis capabilities including its versatile data structures
  • Redis: the basic data structures – intro to core Redis data structures with a Java-based news sharing app using the Jedis client
  • Extending Redis with Redis Modules – learn about the basics of Redis Modules and make use of ReBloom in a Go based recommendation service
  • Tweet analysis service – keep track of relevant tweets using a Java and Go based application which ingests tweets, consumes/processes them, and produces queryable info in real time using reliable queues (Redis LISTs), SETs and HASHes (see the sketch after this list)
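
To give a flavour of the reliable queue pattern that the tweet analysis chapter leans on, here is a minimal sketch using go-redis (v8-style API); the key names and address are hypothetical, not the book’s actual code:

package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// atomically move an item from the pending queue to a processing
	// list; it is removed from "processing" only after the work
	// succeeds, so a crashed worker leaves a trail that can be re-queued
	id, err := rdb.RPopLPush(ctx, "tweets:pending", "tweets:processing").Result()
	if err == redis.Nil {
		fmt.Println("queue is empty")
		return
	} else if err != nil {
		fmt.Println("queue error:", err)
		return
	}
	fmt.Println("processing tweet", id)
	// ... do the work, then acknowledge:
	rdb.LRem(ctx, "tweets:processing", 1, id)
}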

Coming soon …

The chapters below are a work in progress and will be made available in subsequent releases of this book

  • Pipelines and Transactions in Redis
  • Scalable chat application – use Redis Pub/Sub and WebSockets to create a chat service
  • Stream processing with Redis – watch Redis Streams in action (a new data structure in Redis 5.0)
  • Redis based tracking service – thanks to a combination of the Geo data type and the Lua scripting capability in Redis
  • Redis for distributed computing – explore interesting use cases made possible by Redisson
  • Data partitioning in Redis – practical examples highlighting data sharding strategies in Redis
  • Redis high availability

Go check it out & stay tuned for more updates

Cheers!


Redis geo.lua example using Go

I stumbled upon geo.lua, which seemed like an interesting library

It’s described as “… a Lua library containing miscellaneous geospatial helper routines for use with Redis”

Here is an example of using it with the Go Redis client (go-redis)
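
The gist of it: load the geo.lua source into Redis and invoke it as a script. Here is a minimal, generic sketch with go-redis; the exact keys/arguments each geo.lua routine expects are documented in its README, so the ones below are hypothetical:

package main

import (
	"context"
	"fmt"
	"io/ioutil"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// read the geo.lua source from disk; NewScript handles
	// SCRIPT LOAD / EVALSHA (with an EVAL fallback) under the hood
	src, err := ioutil.ReadFile("geo.lua")
	if err != nil {
		panic(err)
	}
	script := redis.NewScript(string(src))

	// hypothetical invocation - refer to the geo.lua README for the
	// actual keys and arguments of each helper routine
	res, err := script.Run(ctx, rdb, []string{"geoset"}, "arg1").Result()
	if err != nil {
		fmt.Println("script error:", err)
		return
	}
	fmt.Println("result:", res)
}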

To run it, refer to the README – it’s super simple

Cheers!


Certificate error with Go HTTP client in alpine Docker

Using the Go HTTP client from an alpine Docker image will result in this error – x509: failed to load system roots and no roots provided

Solution: alpine is a minimal image that does not bundle CA certificates, hence they need to be added explicitly. You can do that in the Dockerfile as per the example below

FROM alpine 
RUN apk add --no-cache ca-certificates 
COPY my-app .
CMD ["./my-app"]
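
For reference, here is a minimal program that triggers the error when run in a bare alpine image; any HTTPS request will do (the URL below is just an example):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// any HTTPS call fails with "x509: failed to load system roots
	// and no roots provided" if the image has no CA bundle
	resp, err := http.Get("https://example.com")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}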

Got this hint from the golang alpine Dockerfile

Cheers!

etcd Watch example using Go client

etcd is a distributed, highly available key-value store which serves as the persistent back end for Kubernetes


Here is an example to demonstrate its Watch feature. For details, check out the README (and the sample code) on GitHub
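
To give a flavour of the Watch API, here is a minimal sketch using the etcd clientv3 Go library; the endpoint and key name are assumptions, and the actual sample lives in the repo:

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // assumption: local etcd
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		fmt.Println("connect failed:", err)
		return
	}
	defer cli.Close()

	// watch a key; every PUT/DELETE on it arrives as an event
	for resp := range cli.Watch(context.Background(), "my-key") {
		for _, ev := range resp.Events {
			fmt.Printf("%s %q = %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}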


Cheers!


NATS on Kubernetes example

In a previous blog post, you saw an example of NATS along with a producer and consumer, based on Docker Compose. This post uses the same app, but runs it on Kubernetes


For how to run it, follow the README on GitHub

High level overview

  • uses the NATS Kubernetes operator
    • it seeds a CRD …
    • … using which we can spawn a NATS cluster on Kubernetes (with a single command!)
    • a Service object is created automatically – it uses ClusterIP by default

(screenshot: the NATS Kubernetes Service object)

  • The environment variable corresponding to the Service is used in the application logic to locate the NATS server. Note its usage here
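
For illustration, here is a minimal sketch of that lookup, assuming the NATS cluster’s Service is named nats-cluster (so Kubernetes injects NATS_CLUSTER_SERVICE_HOST into each pod); the subject name is hypothetical:

package main

import (
	"fmt"
	"os"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// assumption: a Service named "nats-cluster" exists, so Kubernetes
	// injects NATS_CLUSTER_SERVICE_HOST (and _PORT) into every pod
	natsURL := fmt.Sprintf("nats://%s:4222", os.Getenv("NATS_CLUSTER_SERVICE_HOST"))
	nc, err := nats.Connect(natsURL)
	if err != nil {
		fmt.Println("NATS connect failed:", err)
		return
	}
	defer nc.Close()

	// round-trip a message to confirm connectivity
	sub, err := nc.SubscribeSync("greetings")
	if err != nil {
		fmt.Println("subscribe failed:", err)
		return
	}
	nc.Publish("greetings", []byte("hello from Kubernetes"))
	msg, err := sub.NextMsg(2 * time.Second)
	if err != nil {
		fmt.Println("no message:", err)
		return
	}
	fmt.Println("got:", string(msg.Data))
}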

Cheers!

