{"id":4220,"date":"2020-04-03T06:37:34","date_gmt":"2020-04-03T05:37:34","guid":{"rendered":"http:\/\/www.ceyark.com\/?p=4220"},"modified":"2020-06-29T17:52:19","modified_gmt":"2020-06-29T16:52:19","slug":"cassandrapods","status":"publish","type":"post","link":"https:\/\/www.ceyark.com\/index.php\/2020\/04\/03\/cassandrapods\/","title":{"rendered":"Cassandra Pods in Kubernetes cluster"},"content":{"rendered":"<p>[vc_row][vc_column css=&#8221;.vc_custom_1513347309069{padding-top: 1px !important;}&#8221;][vc_column_text]<\/p>\n<div class=\"\">\n<div id=\":1f7\" class=\"ii gt\">\n<div id=\":1f6\" class=\"a3s aXjCH \">\n<div dir=\"ltr\">In this post, we describe the steps we followed to run Cassandra pods in a Kubernetes cluster using microk8s on Ubuntu 18.04.<\/p>\n<p>The first step is to enable the firewall in Ubuntu using ufw, if not already done. Let's start by installing microk8s.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ sudo snap install microk8s --classic<\/strong><br \/><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ sudo usermod -a -G microk8s $USER<\/strong><br \/><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ su - $USER<\/strong><\/p>\n<p>Now we configure the firewall to allow pod-to-pod and pod-to-internet communication.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ sudo ufw allow in on cni0 &amp;&amp; sudo ufw allow out on cni0<\/strong><br \/><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ sudo ufw default allow routed<\/strong><\/p>\n<p>Let's enable the add-ons for microk8s.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ microk8s.enable dashboard dns<\/strong><\/p>\n<p>Now start the cluster.<\/p>\n<p>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 <strong>$ microk8s.start<\/strong><br \/><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ microk8s.status<\/strong><\/p>\n<p>If no error message is observed, the cluster has started successfully and we can proceed to access the dashboard.<br \/>The following command will give us the 
cluster IP address for the dashboard service.<\/div>\n<div dir=\"ltr\">\u00a0<\/div>\n<div dir=\"ltr\"><strong>\u00a0\u00a0\u00a0\u00a0\u00a0 $ sudo microk8s.kubectl get services -n kube-system<\/strong><\/p>\n<p>Typically the service will be named kubernetes-dashboard and it listens on port 443.<\/p>\n<p>The dashboard requires us to log in using a token; to retrieve the token, use the two commands below.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0 $ token=$(microk8s.kubectl -n kube-system get secret | grep default-token | cut -d \" \" -f1)<\/strong><br \/><strong>\u00a0\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl -n kube-system describe secret $token<\/strong><\/p>\n<p>On successfully logging in to the dashboard, we can start to create the services.<\/p>\n<p>First we define a StorageClass. If you already have a StorageClass, you can use that as well. Our StorageClass definition is given below.<\/p>\n<p><em><strong>apiVersion: storage.k8s.io\/v1<\/strong><\/em><br \/><em><strong>kind: StorageClass<\/strong><\/em><br \/><em><strong>metadata:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0name: ceyarkvolume1<\/strong><\/em><br \/><em><strong>provisioner: microk8s.io\/hostpath<\/strong><\/em><br \/><em><strong>reclaimPolicy: Delete<\/strong><\/em><br \/><em><strong>volumeBindingMode: Immediate<\/strong><\/em><\/p>\n<p>We created the StorageClass through the command below.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl apply -f ceyarkvolume1-sc.yml<\/strong><\/p>\n<p>If the above command executes without any error, you can check the created StorageClass through<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ sudo microk8s.kubectl get sc<\/strong><\/p>\n<p>The second step is to create a PersistentVolumeClaim. 
Our claim definition is given below.<\/p>\n<p><em><strong>---<\/strong><\/em><br \/><em><strong>apiVersion: v1<\/strong><\/em><br \/><em><strong>kind: PersistentVolumeClaim<\/strong><\/em><br \/><em><strong>metadata:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0name: ceyark-volclaim1<\/strong><\/em><br \/><em><strong>spec:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0accessModes:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0- ReadWriteOnce<\/strong><\/em><br \/><em><strong>\u00a0\u00a0storageClassName: ceyarkvolume1<\/strong><\/em><br \/><em><strong>\u00a0\u00a0resources:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0requests:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0storage: 500M<\/strong><\/em><br \/><em><strong>---<\/strong><\/em><\/p>\n<p>The volume claim is created through the command below.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl apply -f persistantvolume-sc.yml<\/strong><\/p>\n<p>If the above command executes without any error, you can check the created persistent volume through<br \/><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ sudo microk8s.kubectl get persistentvolumes<\/strong><\/p>\n<p>The third step is to create a Cassandra StatefulSet. 
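<\/p>\n<p>Note that the StatefulSet refers to a headless Service named cassandra: both its serviceName field and the CASSANDRA_SEEDS address depend on it. That Service definition is not shown in our original setup, so the sketch below is our own minimal assumption for the default namespace.<\/p>\n<p><em><strong>---<\/strong><\/em><br \/><em><strong>apiVersion: v1<\/strong><\/em><br \/><em><strong>kind: Service<\/strong><\/em><br \/><em><strong>metadata:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0name: cassandra<\/strong><\/em><br \/><em><strong>\u00a0\u00a0labels:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0app: cassandra<\/strong><\/em><br \/><em><strong>spec:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0clusterIP: None # headless: gives each pod a stable DNS name<\/strong><\/em><br \/><em><strong>\u00a0\u00a0ports:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0- port: 9042<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0name: cql<\/strong><\/em><br \/><em><strong>\u00a0\u00a0selector:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0app: cassandra<\/strong><\/em><br \/><em><strong>---<\/strong><\/em><\/p>\n<p>With such a Service in place, pod cassandra-0 is resolvable as cassandra-0.cassandra.default.svc.cluster.local.<\/p>\n<p>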
Our definition is given below.<\/p>\n<p><em><strong>---<\/strong><\/em><br \/><em><strong>apiVersion: apps\/v1<\/strong><\/em><br \/><em><strong>kind: StatefulSet<\/strong><\/em><br \/><em><strong>metadata:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0name: cassandra<\/strong><\/em><br \/><em><strong>spec:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0serviceName: cassandra<\/strong><\/em><br \/><em><strong>\u00a0\u00a0replicas: 1<\/strong><\/em><br \/><em><strong>\u00a0\u00a0selector:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0matchLabels:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0app: cassandra<\/strong><\/em><br \/><em><strong>\u00a0\u00a0template:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0metadata:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0labels:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0app: cassandra<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0spec:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0volumes:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: cassandra-data<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0persistentVolumeClaim:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0claimName: ceyark-volclaim1<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0containers:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: cassandra<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0image: cassandra:3<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0imagePullPolicy: IfNotPresent<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ports:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- containerPort: 7000<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0name: intra-node<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- containerPort: 7001<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0name: tls-intra-node<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- containerPort: 7199<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0name: jmx<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- containerPort: 9042<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0name: cql<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0env:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: CASSANDRA_SEEDS<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0value: cassandra-0.cassandra.default.svc.cluster.local<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: MAX_HEAP_SIZE<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0value: 256M<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: HEAP_NEWSIZE<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0value: 100M<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: CASSANDRA_CLUSTER_NAME<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0value: \"Cassandra\"<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: CASSANDRA_DC<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0value: \"DC1\"<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: CASSANDRA_RACK<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0value: \"Rack1\"<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: CASSANDRA_ENDPOINT_SNITCH<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0value: GossipingPropertyFileSnitch<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0volumeMounts:<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: cassandra-data<\/strong><\/em><br \/><em><strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0mountPath: \/var\/lib\/cassandra\/data<\/strong><\/em><br \/><em><strong>---<\/strong><\/em><\/p>\n<p>The StatefulSet is created through the command below.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl apply -f cassandrastatefulset.yml<\/strong><\/p>\n<p>You can check the status of the Cassandra pods through the commands shown below.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl get pods --output=wide<\/strong><br \/><strong>\u00a0\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl exec -ti cassandra-0 -- nodetool status<\/strong><\/p>\n<p>You may want to test the database before you start using it. Execute the command below to check that you can access it.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl exec -ti cassandra-0 -- cqlsh<\/strong><\/div>\n<div dir=\"ltr\">On successful login:<br \/>cqlsh&gt; describe tables<\/p>\n<p>If you want to delete the pods and the claims, execute the commands below in sequence.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl delete service -l app=cassandra<\/strong><br \/><strong>\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl delete -f persistantvolume-sc.yml<\/strong><\/p>\n<p>To verify that the storage is reclaimed, we can execute the command below.<br \/><strong>\u00a0\u00a0\u00a0 $ sudo microk8s.kubectl get persistentvolumes<\/strong><\/p>\n<p>With the configuration files ready, we were able to create the Cassandra pods within the cluster very quickly. 
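<\/p>\n<p>Scaling the running StatefulSet is a single command; the example below (three replicas) is only a sketch, since the single PersistentVolumeClaim defined above would be shared by all replicas, and a real multi-replica setup would use volumeClaimTemplates so each pod gets its own volume.<\/p>\n<p><strong>\u00a0\u00a0\u00a0\u00a0\u00a0 $ microk8s.kubectl scale statefulset cassandra --replicas=3<\/strong><\/p>\n<p>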
We tried different options, including scaling the number of pods and taking backups of the data in the database. The possibilities are exciting.<\/p><\/div>\n<\/div>\n<\/div>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row]<\/p>\n","protected":false},"excerpt":{"rendered":"[vc_row][vc_column css=\".vc_custom_1513347309069{padding-top: 1px !important;}\"][vc_column_text] In this post, we describe the steps we followed to run Cassandra pods in a Kubernetes cluster using microk8s on Ubuntu 18.04. The first step is to enable the firewall in Ubuntu using ufw, if not already done. Let's [...]","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[138],"tags":[],"class_list":["post-4220","post","type-post","status-publish","format-standard","hentry","category-ceyarkblogs"],"_links":{"self":[{"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/posts\/4220"}],"collection":[{"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/comments?post=4220"}],"version-history":[{"count":5,"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/posts\/4220\/revisions"}],"predecessor-version":[{"id":4793,"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/posts\/4220\/revisions\/4793"}],"wp:attachment":[{"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/media?parent=4220"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/categories?post=4220"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ceyark.com\/index.php\/wp-json\/wp\/v2\/tags?post=4220"}]
,"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}