What is ELK?

The ELK Stack

ELK stands for Elasticsearch, Logstash, and Kibana, three technologies for creating visualizations from raw data.

Elasticsearch

Elasticsearch is a distributed, open source search and analytics engine, designed for horizontal scalability, reliability, and easy management. It combines the speed of search with the power of analytics via a sophisticated, developer-friendly query language covering structured, unstructured, and time-series data.
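
As a small taste of that query language, a full-text search can be issued over HTTP; a minimal sketch, assuming Elasticsearch listens on localhost:9200 and holds Logstash-style indices:

curl -s 'http://localhost:9200/logstash-*/_search?pretty' -d '
{
  "query": { "match": { "message": "error" } },
  "size": 5
}'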

Logstash

Logstash is a flexible, open source data collection, enrichment, and transportation pipeline. With connectors to common infrastructure for easy integration, Logstash is designed to efficiently process a growing list of log, event, and unstructured data sources for distribution into a variety of outputs, including Elasticsearch.
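
For flavor, here is a minimal Logstash pipeline sketch (the file path and Elasticsearch host are assumptions; the hosts => form matches Logstash 2.x and later):

input {
  file {
    path => "/var/log/messages"      # tail the syslog file
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]      # index events into Elasticsearch
  }
  stdout { codec => rubydebug }      # echo events for debugging
}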

Kibana

Kibana is an open source data visualization platform that allows you to interact with your data through stunning, powerful graphics. From histograms to geomaps, Kibana brings your data to life with visuals that can be combined into custom dashboards that help you share insights from your data far and wide.

Questions in mind:

  1. Why do I really care about data visualization? (I'm not a director who cares about KPI & uptime reporting for clients and needs a visual representation of the data to present in a PPT or any other reporting format.)
  2. What is the use of data visualization in my day-to-day job?

Answer:

Data visualization provides us with real-time operational intelligence. It's the easy, fast and secure way to search, analyze and visualize the massive streams of machine data generated by your IT systems and technology infrastructure: physical, virtual and in the cloud.

Troubleshoot application problems and investigate security incidents in minutes instead of hours or days, avoid service degradation or outages, deliver compliance at lower cost and gain new business insights.

What my data looks like on the ELK (Kibana) dashboard:

[Screenshot: App-Track Kibana dashboard]

That’s it for now – I’ll be posting all the installation & configuration steps in my next post very soon.

And Soon is now: https://amitvashist.wordpress.com/2015/08/08/getting-started-with-elk

Happy Learning 🙂 🙂

Cheers!!


OpenSSL

OpenSSL Project

The OpenSSL Project is a collaborative effort to develop a robust, commercial-grade, full-featured, and Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS) protocols as well as a full-strength general purpose cryptography library. The project is managed by a worldwide community of volunteers that use the Internet to communicate, plan, and develop the OpenSSL toolkit and its related documentation.  https://www.openssl.org/

Certificate authority

In cryptography, a certificate authority or certification authority (CA) is an entity that issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or on assertions made by the private key that corresponds to the certified public key. In this model of trust relationships, a CA is a trusted third party – trusted both by the subject (owner) of the certificate and by the party relying upon the certificate. Many public-key infrastructure (PKI) schemes feature CAs.

How to configure your own OpenSSL certificate authority on Linux.

In the example below, the /etc/pki/CA directory will be used to store all keys and certificates. The index.txt and serial files act as a kind of flat-file database to help you keep track of all your keys and certificates.

Note: ensure that your OpenSSL configuration file (/etc/pki/tls/openssl.cnf) specifies dir=/etc/pki/CA within the [ CA_default ] section.

[root@server101 ~]# cd /etc/pki/CA/
[root@server101 CA]# mkdir {certs,crl,newcerts} -p
[root@server101 CA]# ls
certs  crl  newcerts  private
[root@server101 CA]# touch index.txt
[root@server101 CA]# echo "01" > serial
[root@server101 CA]# echo "01" > crlnumber
[root@server101 CA]# cp -f /etc/pki/tls/openssl.cnf /etc/pki/CA/

Required modifications in openssl.cnf

[root@server101 CA]# cat  openssl.cnf | grep "#Vashist"
dir        = /etc/pki/CA        # Where everything is kept #Vashist
#dir        = ../../CA        # Where everything is kept  Default Value #Vashist
certificate    = $dir/certs/ca.crt       # The CA certificate  #Vashist
#certificate    = $dir/cacert.pem     # The CA certificate Default Value #Vashist
#private_key    = $dir/private/cakey.pem# The private key Default Value #Vashist
private_key    = $dir/private/ca.key   # The private key #Vashist
[root@server101 CA]# chmod 0600 openssl.cnf

Now generate your own CA certificate & respective key.

[root@server101 CA]# openssl req -config openssl.cnf -new -x509 -extensions v3_ca -keyout private/ca.key -out certs/ca.crt -days 3650
Generating a 1024 bit RSA private key
..........................++++++
....................++++++
writing new private key to 'private/ca.key'
Enter PEM pass phrase: secretPassword
Verifying - Enter PEM pass phrase: secretPassword
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:IN
State or Province Name (full name) [Berkshire]:Delhi     
Locality Name (eg, city) [Newbury]:Vashali
Organization Name (eg, company) [My Company Ltd]:Plentree Enterprise Ltd
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:Amit Vashist
Email Address []:plentree.ca@vashist.com
[root@server101 CA]#
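
With the CA in place, the typical next step is signing a server certificate with it. A minimal sketch (the server key name, 2048-bit size and output paths are assumptions; answer the DN prompts with a matching country/state/organization, since the default policy_match requires it):

[root@server101 CA]# openssl req -new -newkey rsa:2048 -nodes -keyout private/server.key -out server.csr
[root@server101 CA]# openssl ca -config openssl.cnf -in server.csr -out certs/server.crt

The openssl ca command records the signed certificate in index.txt and increments the serial file.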

 

 

Now you are good to go..!!!

Happy Learning 🙂 🙂

Cheers!!!


Deploy and Manage Your Docker Containers with Cockpit

Project Atomic integrates the tools and patterns of container-based application and service deployment with trusted operating system platforms to deliver an end-to-end hosting architecture that’s modern, reliable, and secure.

Cockpit

A remote manager for GNU/Linux servers

  • Cockpit is a server manager that makes it easy to administer your GNU/Linux servers via a web browser.
  • Cockpit makes it easy for any sysadmin to perform simple tasks, such as administering storage, inspecting journals and starting and stopping services.
  • Jumping between the terminal and the web tool is no problem. A service started via Cockpit can be stopped via the terminal. Likewise, if an error occurs in the terminal, it can be seen in the Cockpit journal interface.
  • You can monitor and administer several servers at the same time. Just add them with a single click and your machine will look after its buddies.

Cockpit and Docker

Cockpit also makes it easy to monitor and administer Docker containers running on Cockpit-managed servers such as Project Atomic hosts.

  • Monitor resources consumed by containers
  • Adjust resources available to containers
    • Resource limits enforced by the cgroup subsystem in the Linux kernel (see the CLI sketch after this list)
    • Adjust CPU shares
    • Assign memory limits
    • More CGroup policy controls to come
  • Stop, Start, Delete and Commit container instances
  • Run and Delete container images
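
For comparison, the same cgroup-backed limits can be applied from the Docker CLI when launching a container; a minimal sketch (the image name and limit values are arbitrary):

[root@fedora21 ~]# docker run -it --memory 512m --cpu-shares 512 fedora /bin/bash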

Starting and Using Cockpit

Cockpit is in beta/preview at this time, but you can still try it out and help test! A preview is included with the image. For more information, see the Cockpit project page.

  1. After starting your atomic host, you need to enable the cockpit service and socket:
    [root@fedora21 ~]# systemctl enable cockpit.socket
    [root@fedora21 ~]# systemctl start cockpit.socket
  2. You can now use the cockpit management interface at http://yourhost:9090

Content copied from the Project Atomic website: http://www.projectatomic.io/docs/cockpit/

Step 1: Log in to the web GUI with the regular root username & password.

Step 2: Host details/status; we can add multiple hosts.

Step 3: Host machine resource status report.

Step 4: System services status.

Step 5: System networking status.

Step 6: Managing resources by setting a resource limit for a container.

Step 7: Running a container with a /bin/bash shell.

Step 8: Starting a new container with a set of commands & CPU/memory limits.

Step 9: Monitoring the container's standard output on the /bin/bash prompt via the GUI.

Step 10: Removing a container from the host.

Happy Learning 🙂 🙂

Cheers!!!


Getting Started with Docker

How To Install and Use Docker

Docker is an open-source project that makes creating and managing Linux containers really easy. Containers are like extremely lightweight VMs: they allow code to run in isolation from other containers but safely share the machine's resources, all without the overhead of a hypervisor.


While a virtual machine is a whole other guest computer running on top of your host computer (sitting on top of a layer of virtualization), a Docker container is an isolated portion of the host computer, sharing the host kernel (OS) and even its binaries/libraries where appropriate. This makes it possible to get far more apps running on the same old servers, and it also makes it very easy to package and ship programs.

Containers vs. VMs

[Image: containers vs. VMs, from the official Docker website]

Prerequisites:

1. Physical machine or a VM running Fedora 21.
2. Yum should be configured.
3. Enough System resources.
4. Update all packages to the latest versions available.

In my scenario: 

I am using a MacBook Air as the physical machine, VMware Fusion for virtualization, and Fedora 21 as the guest OS on top of it, where I am about to play with Docker. You can see the beauty of virtualization: it always helps me try out new technologies with minimum hurdles & maximum outcome.

Getting Started with Docker.

Docker has a CLI interface which provides us more flexibility & agility to play with containers.

Step 1: Installation of Docker

[root@fedora21 ~]# yum remove docker -y 
[root@fedora21 ~]# yum install docker-io -y
[root@fedora21 ~]# systemctl start docker 
[root@fedora21 ~]# systemctl enable docker
[root@fedora21 ~]# systemctl status docker 
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Active: active (running) since Sun 2015-02-15 22:42:38 IST; 1h 14min ago
Docs: http://docs.docker.com
Main PID: 2187 (docker)
CGroup: /system.slice/docker.service
└─2187 /usr/bin/docker -d --selinux-enabled
Feb 15 23:38:52 localhost.localdomain docker[2187]: time="2015-02-15T23:38:52+05:30" level="info" msg="+job containers()"
Feb 15 23:38:52 localhost.localdomain docker[2187]: time="2015-02-15T23:38:52+05:30" level="info" msg="-job containers() = OK (0)"
Feb 15 23:39:14 localhost.localdomain docker[2187]: time="2015-02-15T23:39:14+05:30" level="info" msg="DELETE /v1.16/containers/2cf3d2fc409a"
Feb 15 23:39:14 localhost.localdomain docker[2187]: time="2015-02-15T23:39:14+05:30" level="info" msg="+job rm(2cf3d2fc409a)"
Feb 15 23:39:15 localhost.localdomain docker[2187]: time="2015-02-15T23:39:15+05:30" level="info" msg="+job log(destroy, 2cf3d2fc409ad00cf84fbfc41ae491a613de445aeebbe5f50354b22097ac4288, fedora:21)"
Feb 15 23:39:15 localhost.localdomain docker[2187]: time="2015-02-15T23:39:15+05:30" level="info" msg="-job log(destroy, 2cf3d2fc409ad00cf84fbfc41ae491a613de445aeebbe5f50354b22097ac4288, f...21) = OK (0)"
Feb 15 23:39:15 localhost.localdomain docker[2187]: time="2015-02-15T23:39:15+05:30" level="info" msg="-job rm(2cf3d2fc409a) = OK (0)"
Feb 15 23:39:17 localhost.localdomain docker[2187]: time="2015-02-15T23:39:17+05:30" level="info" msg="GET /v1.16/containers/json?all=1"
Feb 15 23:39:17 localhost.localdomain docker[2187]: time="2015-02-15T23:39:17+05:30" level="info" msg="+job containers()"
Feb 15 23:39:17 localhost.localdomain docker[2187]: time="2015-02-15T23:39:17+05:30" level="info" msg="-job containers() = OK (0)"
Hint: Some lines were ellipsized, use -l to show in full.
[root@fedora21 ~]#
[root@fedora21 ~]# docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8/1.4.1
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8/1.4.1
[root@fedora21 ~]#

 

Step 2:  Download a Docker Container or Launching a Container 

Launching a container is as simple as docker run + the image name you would like to run + the command to run within the container. If the image doesn't exist on your local machine, Docker will attempt to fetch it from the public image registry.

[root@fedora21 ~]#docker run -i -t ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
ubuntu:latest: The image you are pulling has been verified
27d47432a69b: Pulling fs layer 
27d47432a69b: Pull complete 
5f92234dcf1e: Pull complete 
51a9c7c1f8bb: Pull complete 
5ba9dab47459: Pull complete 
Status: Downloaded newer image for ubuntu:latest
root@b531b2c59e2a:/# 
root@b531b2c59e2a:/# uname -a 
Linux b531b2c59e2a 3.18.5-201.fc21.x86_64 #1 SMP Mon Feb 2 21:00:58 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root@b531b2c59e2a:/# cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"
NAME="Ubuntu"
VERSION="14.04.1 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.1 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
root@b531b2c59e2a:/# exit

Command Syntax

1. docker run - Run a container
2. -t - Allocate a (pseudo) tty
3. -i - Keep stdin open (so we can interact with it)
4. ubuntu/fedora - use the Ubuntu (or Fedora) base image
5. /bin/bash - Run the bash shell

Step 3: Running a container as per the requirement

Example 1:  Running a container in interactive mode. 

[root@fedora21 ~]#docker run -i -t fedora /bin/bash
bash-4.3# ls -ltr
total 52
drwxr-xr-x.   2 root root 4096 Aug 16  2014 srv
lrwxrwxrwx.   1 root root    8 Aug 16  2014 sbin -> usr/sbin
drwxr-xr-x.   2 root root 4096 Aug 16  2014 opt
drwxr-xr-x.   2 root root 4096 Aug 16  2014 mnt
drwxr-xr-x.   2 root root 4096 Aug 16  2014 media
lrwxrwxrwx.   1 root root    9 Aug 16  2014 lib64 -> usr/lib64
lrwxrwxrwx.   1 root root    7 Aug 16  2014 lib -> usr/lib
drwxr-xr-x.   2 root root 4096 Aug 16  2014 home
lrwxrwxrwx.   1 root root    7 Aug 16  2014 bin -> usr/bin
drwx------.   2 root root 4096 Dec  3 00:56 lost+found
drwxr-xr-x.   2 root root 4096 Dec  3 00:56 run
drwxr-xr-x.  12 root root 4096 Dec  3 00:56 usr
drwxr-xr-x.  18 root root 4096 Dec  3 00:56 var
dr-xr-xr-x.   3 root root 4096 Dec  3 00:56 boot
drwxrwxrwt.   7 root root 4096 Dec  3 00:58 tmp
dr-xr-x---.   2 root root 4096 Dec  3 00:58 root
dr-xr-xr-x.  13 root root    0 Feb 15 12:10 sys
drwxr-xr-x.  47 root root 4096 Feb 15 14:11 etc
dr-xr-xr-x. 149 root root    0 Feb 15 14:11 proc
drwxr-xr-x.   5 root root  380 Feb 15 14:11 dev
bash-4.3# echo "Hello Fedora Docker Container"  > /etc/motd
bash-4.3#

 Example 2:  Running a container in detached mode. 

[root@fedora21 ~]# docker run -d ubuntu /bin/ping 192.168.13.1
f74539d082e5721f46c389e27ac9a431af4fa529b180e9c09dec1a1302b859fb

To verify whether the ping command was working or not, I ran the following command on my base node; guess what, I got positive results:

[root@fedora21 ~]# tcpdump ip proto \\icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:18:22.551399 IP 172.17.0.13 > 192.168.13.1: ICMP echo request, id 1, seq 206, length 64
23:18:22.551909 IP 192.168.13.1 > 172.17.0.13: ICMP echo reply, id 1, seq 206, length 64
23:18:23.552167 IP 172.17.0.13 > 192.168.13.1: ICMP echo request, id 1, seq 207, length 64
23:18:23.552595 IP 192.168.13.1 > 172.17.0.13: ICMP echo reply, id 1, seq 207, length 64
23:18:24.552610 IP 172.17.0.13 > 192.168.13.1: ICMP echo request, id 1, seq 208, length 64
23:18:24.553088 IP 192.168.13.1 > 172.17.0.13: ICMP echo reply, id 1, seq 208, length 64

Step 4: Check the status of the running containers

[root@fedora21 ~]# docker ps 
CONTAINER ID        IMAGE                                          COMMAND                CREATED             STATUS              PORTS               NAMES
5928bf0b2f69        registry.amitvashist.com:5000/apache1:latest   "/bin/ping 192.168.1   36 seconds ago      Up 35 seconds                           stupefied_sinoussi   
2710f6001dc9        fedora/welcome_message_motd:latest             "/bin/bash"            15 minutes ago      Up 15 minutes                           clever_sinoussi      
[root@fedora21 ~]#

Step 5: List all the available container images.

[root@fedora21 ~]# docker images
REPOSITORY                 TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu                     latest              5ba9dab47459        2 weeks ago         192.7 MB
ubuntu                     14.04               8eaa4ff06b53        6 weeks ago         192.7 MB
fedora                     21                  834629358fe2        6 weeks ago         250.2 MB
fedora                     latest              834629358fe2        6 weeks ago         250.2 MB

Step 6: Committing  a container.

[root@fedora21 ~]# docker ps
CONTAINER ID        IMAGE                                          COMMAND                CREATED             STATUS              PORTS               NAMES
a27ccfca80ee        ubuntu:latest                                  "/bin/ping 192.168.1   4 minutes ago       Up 4 minutes                            kickass_lovelace     
2710f6001dc9        fedora/welcome_message_motd:latest             "/bin/bash"            23 minutes ago      Up 23 minutes                           clever_sinoussi      
[root@fedora21 ~]# docker commit a27ccfca80ee ubuntu/ping 
5eecea7c9b00e24ee2257a2816f929ddf720c0936d29301d3b7cbd4cdc1b2647
[root@fedora21 ~]#

Step 7: Tracking the changes in a container; it works like Subversion/Git/CVS for software change management.

[root@fedora21 ~]# docker ps 
CONTAINER ID        IMAGE                                          COMMAND                CREATED             STATUS              PORTS               NAMES
a27ccfca80ee        ubuntu:latest                                  "/bin/ping 192.168.1   8 minutes ago       Up 8 minutes                            kickass_lovelace      
2710f6001dc9        fedora/welcome_message_motd:latest             "/bin/bash"            28 minutes ago      Up 28 minutes                           clever_sinoussi
[root@fedora21 ~]# docker diff 2710f6001dc9
C /etc
C /etc/gshadow-
C /etc/group-
C /etc/gshadow
C /etc/group
C /etc/shadow-
C /etc/shadow
C /etc/passwd
C /home
A /home/amitvashist
A /home/amitvashist/.bash_logout
A /home/amitvashist/.bash_profile
A /home/amitvashist/.bashrc
C /var
C /var/log
C /var/log/lastlog
A /var/test1-usr
C /var/spool
C /var/spool/mail
A /var/spool/mail/amitvashist
A /amitvashist
A /amitvashist/test1
C /usr
A /usr/test1-usr
[root@fedora21 ~]#

Step 8: Attach to a running container.

[root@fedora21 ~]# docker ps 
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
a27ccfca80ee        ubuntu:latest       "/bin/ping 192.168.1   29 minutes ago      Up 29 minutes                           kickass_lovelace    
[root@fedora21 ~]# 

[root@fedora21 ~]# docker attach a27ccfca80ee
64 bytes from 192.168.13.1: icmp_seq=1891 ttl=63 time=0.296 ms
64 bytes from 192.168.13.1: icmp_seq=1892 ttl=63 time=0.293 ms
64 bytes from 192.168.13.1: icmp_seq=1893 ttl=63 time=0.299 ms
64 bytes from 192.168.13.1: icmp_seq=1894 ttl=63 time=0.319 ms

Step 9: To stop/start the container.

[root@fedora21 ~]# docker stop 9932068eace4

[root@fedora21 ~]# docker start 9932068eace4
[root@fedora21 ~]# docker ps 
CONTAINER ID        IMAGE                                COMMAND             CREATED             STATUS              PORTS               NAMES
9932068eace4        fedora/welcome_message_motd:latest   "/bin/bash"         5 minutes ago       Up 3 seconds                            adoring_leakey      
[root@fedora21 ~]#

Step 10: Check the history of an image

[root@fedora21 ~]# docker history ubuntu/ping
IMAGE               CREATED             CREATED BY                                      SIZE
943aad725b0e        7 minutes ago       /bin/ping 192.168.13.1                          0 B
5ba9dab47459        2 weeks ago         /bin/sh -c #(nop) CMD [/bin/bash]               0 B
51a9c7c1f8bb        2 weeks ago         /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/   1.895 kB
5f92234dcf1e        2 weeks ago         /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic   194.5 kB
27d47432a69b        2 weeks ago         /bin/sh -c #(nop) ADD file:62400a49cced0d7521   192.5 MB
511136ea3c5a        20 months ago                                                       0 B
[root@fedora21 ~]#

Step 11: Return low-level information on a container

[root@fedora21 ~]# docker inspect f3c24e745e58
[{
    "AppArmorProfile": "",
    "Args": [
        "192.168.13.1"
    ],
    "Config": {
        "AttachStderr": false,
        "AttachStdin": false,
        "AttachStdout": false,
        "Cmd": [
            "/bin/ping",
            "192.168.13.1"
        ],

Step 12: To remove a container

[root@fedora21 ~]# docker rm 81ba74620eb5
81ba74620eb5
[root@fedora21 ~]#

Step 13: To remove an image

[root@fedora21 ~]# docker rmi 5eecea7c9b00
Deleted: 5eecea7c9b00e24ee2257a2816f929ddf720c0936d29301d3b7cbd4cdc1b2647
[root@fedora21 ~]#

Step 14: To remove all the containers

[root@fedora21 ~]# docker rm $(docker ps -a -q)
Error response from daemon: You cannot remove a running container. Stop the container before attempting removal or use -f
de3b5b0e20d0
afef4470e5e9
9932068eace4
e807c9827932
67867ed15600
a3385cea9e77
e3ed676fbc6e
a27ccfca80ee
5928bf0b2f69
2710f6001dc9
1f480ed879a7
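
As the error above says, running containers must be stopped before removal (or force-removed with -f); a sketch:

[root@fedora21 ~]# docker stop $(docker ps -q)
[root@fedora21 ~]# docker rm $(docker ps -a -q)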

 Other useful docker commands:

attach    Attach to a running container
build     Build a container from a Dockerfile (see the sketch after this list)
commit    Create a new image from a container's changes
cp        Copy files/folders from the containers filesystem to the host path
diff      Inspect changes on a container's filesystem
events    Get real time events from the server
export    Stream the contents of a container as a tar archive
history   Show the history of an image
images    List images
import    Create a new filesystem image from the contents of a tarball
info      Display system-wide information
insert    Insert a file in an image
inspect   Return low-level information on a container
kill      Kill a running container
load      Load an image from a tar archive
login     Register or Login to the docker registry server
logs      Fetch the logs of a container
port      Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
ps        List containers
pull      Pull an image or a repository from the docker registry server
push      Push an image or a repository to the docker registry server
restart   Restart a running container
rm        Remove one or more containers
rmi       Remove one or more images
run       Run a command in a new container
save      Save an image to a tar archive
search    Search for an image in the docker index
start     Start a stopped container
stop      Stop a running container
tag       Tag an image into a repository
top       Lookup the running processes of a container
version   Show the docker version information
wait      Block until a container stops, then print its exit code
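
Of these, build is worth a quick illustration. A minimal sketch of a Dockerfile and its build (the image tag my/apache and the httpd choice are assumptions), run from the directory containing the Dockerfile:

# Dockerfile
FROM fedora:21
RUN yum install -y httpd                  # bake Apache into the image
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]   # run Apache in the foreground

[root@fedora21 ~]# docker build -t my/apache .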

The best way to understand Docker is to try it!

Online Tutorial Link : https://www.docker.com/tryit/

Happy learning 🙂 🙂

And please do share your comments 🙂

Cheers!!!


LXC Containers on Fedora 21

HOWTO: Configure a LXC Linux Container on Fedora 21

The first time I encountered a Linux container was somewhere in 2011 on SUSE 11, when my colleagues were all discussing Solaris Zones. So I decided to dig out something in Linux with the look & feel of Solaris Zones.

Concept: 

Containers provide lightweight virtualization that lets you isolate processes and resources without the need to provide instruction interpretation mechanisms and other complexities of full virtualization.

Prerequisites:

1. Physical machine or a VM running Fedora 21.
2. Yum should be configured.
3. Enough System resources.
4. Update all packages to the latest versions available.

In my scenario: 

I am using a MacBook Air as the physical machine, VMware Fusion for virtualization, and Fedora 21 as the guest OS on top of it, where I am about to play with LXC. You can see the beauty of virtualization: it always helps me try out new technologies with minimum hurdles & maximum outcome.


Install management libraries and utilities: 

Amits-MacBook-Air:~ amitvashist$ fed21
root@192.168.13.131's password:
Last login: Thu Jan 15 16:55:57 2015 from 192.168.13.1


[root@fedora21 ~]# yum history list 5
Failed to set locale, defaulting to C
Loaded plugins: langpacks
ID     | Command line             | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
 5 | install libvirt libvirt- | 2014-12-31 10:11 | Install        |    6   
history list
[root@fedora21 ~]#


[root@fedora21 ~]#yum -y install libvirt-daemon-lxc libvirt-daemon-config-network

Launch libvirtd via systemd and ensure that it always comes up on boot. This step will also adjust firewalld for your containers and ensure that dnsmasq is serving up IP addresses via DHCP on your default NAT network.

[root@fedora21 ~]# systemctl start libvirtd.service
[root@fedora21 ~]# systemctl enable libvirtd.service
[root@fedora21 ~]# systemctl status network.service
[root@fedora21 ~]# systemctl start network.service
[root@fedora21 ~]# systemctl enable network.service

Now we're ready to download and install the container's filesystem:

[root@fedora21 ~]# yum -y --installroot=/var/lib/libvirt/filesystems/fedora21 --releasever=21 --nogpg install systemd passwd yum fedora-release vim-minimal openssh-server procps-ng iproute net-tools dhclient

With the above step we have downloaded a filesystem with the necessary packages to run a Fedora 21 container. We now need to tell libvirt about the container we've just created.

[root@fedora21 ~]# virt-install --connect lxc:// --name MyTestFedora21 --ram 512 --filesystem /var/lib/libvirt/filesystems/fedora21/,/


Now your container is up & running and you are connected to its console. We need to adjust some configuration files within the container to use it properly. Detach from the console with CTRL-].

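For reference, the console can be re-attached at any time with virsh (same lxc:// URI as above):

[root@fedora21 ~]# virsh -c lxc:// console MyTestFedora21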

So for now let’s stop the container so we can make some adjustments.

[root@fedora21 ~]# virsh -c lxc:// list
setlocale: No such file or directory
Id    Name                           State
----------------------------------------------------
61488 MyTestFedora21                 running
[root@fedora21 ~]#
[root@fedora21 ~]# virsh -c lxc:// shutdown MyTestFedora21
setlocale: No such file or directory
Domain MyTestFedora21 is being shutdown
[root@fedora21 ~]#

Getting the container ready for production: 

  • Set SELinux to permissive mode just for the password modification; otherwise you get:
[root@fedora21 ~]# chroot /var/lib/libvirt/filesystems/fedora21 /bin/passwd root
Changing password for user root.
New password:
Retype new password:
passwd: Authentication token manipulation error
[root@fedora21 ~]#
  • Set up the root password


  • Set up the required network configuration
[root@fedora21 ~]# cat << EOF > /var/lib/libvirt/filesystems/fedora21/etc/sysconfig/network
NETWORKING=yes
EOF
[root@fedora21 ~]# cat << EOF > /var/lib/libvirt/filesystems/fedora21/etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=static
ONBOOT=yes
DEVICE=eth0
IPADDR=192.168.122.10
NETMASK=255.255.255.0
EOF

Launching the container, fingers crossed:

Launching sequence : 3 2 1 – Boom!!!

[root@fedora21 ~]# virsh -c lxc:// start MyTestFedora21
setlocale: No such file or directory
Domain MyTestFedora21 started
[root@fedora21 ~]#


Now log in to the container with the new root password.


Testing SSH Connectivity & it seems pretty good.


Happy learning 🙂 🙂

And please do share your comments 🙂

Cheers!!!


Linux Cluster Suite

Red Hat Cluster Suite Failover Cluster explained using HTTP as a failover service

Cluster: a group of two or more computers performing the same task. In this document we will explain the Red Hat Cluster implementation via the Conga project, which has two important services running on the base node and the cluster nodes respectively.

LUCI: luci is the service installed on a separate base node; it gives us complete functionality via an admin console to create/configure/manage the nodes in our cluster.

RICCI: ricci is the agent service installed on all the nodes in the cluster; it is through this service that the nodes are joined into the cluster from the luci admin console.

CMAN (Cluster Management): CMAN manages the quorum and cluster membership. It is a very important component (service) of Red Hat Cluster and hence mandatory to run on each of the nodes.

Fencing: fencing is a mechanism for disconnecting a node from the cluster in case the node has gone down faulty, in order to avoid data corruption and maintain data integrity.

Lock Management: Red Hat Cluster provides lock management via DLM (Distributed Lock Manager). GFS uses locks from the lock manager to synchronize access to shared file system metadata. CLVM uses locks from the lock manager to synchronize updates to LVM volumes and volume groups on shared storage.

Cluster Configuration

The cluster configuration file lies at /etc/cluster/cluster.conf and is an XML file. Cluster resources, such as IP addresses, scripts, and Red Hat Storage GFS2 file systems, are defined in it. The maximum number of nodes supported in a Red Hat cluster deployment of GFS/GFS2 is 16.
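
For orientation, a minimal cluster.conf skeleton might look like the sketch below (node names match the example that follows; the fence device attributes are placeholders, not a complete configuration):

<?xml version="1.0"?>
<cluster name="Cluster_01" config_version="1">
  <clusternodes>
    <clusternode name="station20.example.com" nodeid="1">
      <fence>
        <method name="1">
          <device name="ilo-station20"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="station30.example.com" nodeid="2"/>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ilo" name="ilo-station20" hostname="..." login="admin" passwd="redhat"/>
  </fencedevices>
  <rm>
    <!-- failover domains, resources and service groups go here -->
  </rm>
</cluster>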

This document explains the deployment of an Apache application in a Red Hat cluster.

Hardware Requirements:

Server: for setting up a 2-node cluster we may take 2 servers (quad-core, quad-CPU HP machines with a minimum of 4 GB RAM) and 1 base node (same specification) on which we will install luci.

Note: you can choose virtual nodes as well, but there are certain limitations with VM fencing.

Requirements

1. IP requirements:

1. Server 1 - 2 Local IP for (bonding) HTTP server  + 1 IP for Cluster fencing
2. Server 2 - 2 Local IP for (bonding) HTTP server + 1 IP  for Cluster fencing 
3. Virtual IP - 1 virtual IP for HTTP ( Cluster IP )

2. Storage requirements:

a. One 200 GB SAN LUN for Apache. (Depending upon requirement)

How to Configure Apache Cluster

Step 1: First, install RHEL 5.6 on both the servers with custom packages.

Step 2 : Configure Network Bonding

The steps for creating bonding are as follows. Create the bond interface file for the public network and save the file as:

# vim /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=192.168.5.20 [This will be actual network IP address]
NETMASK=255.255.255.0
GATEWAY=192.168.5.1
USERCTL=no
BOOTPROTO=static
ONBOOT=yes

 

After creating the bond0 file, modify the eth0 and eth1 files respectively.

# vim /etc/sysconfig/network-scripts/ifcfg-eth0

Make sure you remove HW Address / IP Address / Gateway Information from eth0 and eth1 and add 2 important lines under those file:

# vim /etc/sysconfig/network-scripts/ifcfg-eth0
Make sure the file reads as follows for the eth0 interface:
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
# vim /etc/sysconfig/network-scripts/ifcfg-eth1
Make sure the file reads as follows for the eth1 interface:
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Load bond driver/module

Edit /etc/modprobe.conf
Append the following two lines:
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
Save the file accordingly

Test configuration

First, load the bonding module, enter:
# modprobe bonding

Restart the networking service in order to bring up bond0 interface, enter:
# service network restart

Check with the below command whether Bonding is actually working or not.
# cat /proc/net/bonding/bond0

Step 3: Set the hostnames on the base node and the other 2 nodes, and add entries to /etc/hosts:

vim /etc/hosts
192.168.5.20 station20.example.com station20
192.168.5.10 station10.example.com station10
192.168.5.30 station30.example.com  station30

Step 4: Passwordless authentication should be set up between both the nodes, station20.example.com and station30.example.com.

Log in on station20 and run ssh-keygen, then:
#ssh-copy-id  -i  /root/.ssh/id_rsa.pub station30

Log in on station30 and run ssh-keygen, then:
#ssh-copy-id  -i /root/.ssh/id_rsa.pub station20

Step 5: Set up the yum repository on the base node [station10] and the other 2 servers.

Step 6: Make sure iptables and SELinux are disabled on all three machines.

Step 7: Log in on the base node and first install luci and the cluster packages:

#yum groupinstall "ClusterStorage" -y
#yum install luci*

Run the command: #luci_admin init

The above command will generate an SSL certificate and ask for a password for the user admin.

Assign the password and it will return to the # prompt, stating that we may log in from the URL

https://192.168.5.10:8084 with username admin and password redhat [assuming we have given the password as redhat]

#service luci restart && chkconfig luci on

Step 8: Log in on the other 2 nodes and install:

#yum groupinstall "ClusterStorage" -y 
#yum install ricci* -y 
#service ricci restart && chkconfig ricci on 

Once all the above steps are done, we can log in at https://192.168.5.10:8084 and start building our cluster.

Step 9: We will use ILO as the fencing device while building this cluster, so add the user ID and password under the ILO configuration option available in BIOS mode, and also set the IP address accordingly. This will be done on both the nodes.

Check whether fencing is working by logging in on each node:

Step 10: Log in on station20 and run:

#fence_ilo   -a  station30 -l admin  -p   redhat  -o reboot

Username as admin and Password as redhat which we have assigned inthe ILO configuration 

Log in on station30 and run:
#fence_ilo  -a  station20  -l  admin  -p  redhat  -o reboot 

Step 11: Assign the 200 GB storage (LUN) for Apache; the LUN should be visible on both servers, node1 and node2.

Step 12: Create LVM on the 200 GB LUN (a sketch follows Step 13).

Step 13: Edit /etc/lvm/lvm.conf and set locking_type=3, which makes LVM cluster-aware.
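
A minimal sketch of Steps 12-13 on the shared LUN (the device name /dev/sdb and the sizes are assumptions; -c y marks the volume group as clustered, which requires clvmd to be running):

# pvcreate /dev/sdb                 # initialise the shared LUN for LVM
# vgcreate -c y vg0 /dev/sdb        # create a clustered volume group
# lvcreate -n lv0 -L 190G vg0       # logical volume used below as /dev/vg0/lv0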

Step 14: Install Apache on both the servers, node1 and node2.

Step 15: Configure the cluster. Log in at https://192.168.5.10:8084.

The first step we need to do after logging in via the luci console is to create a cluster.

Click Cluster > Create a new cluster and add the node hostnames and passwords.

Step 16: Enter the cluster name as Cluster_01 and enter both node names with their respective passwords.

Step 17: Click on 'view SSL finger print' and it will verify the fingerprint.

Step 18: Once we click the submit button it will INSTALL / REBOOT / CONFIGURE and JOIN the nodes in the cluster.

Step 19: After the installation is successful we may log in on station20.example.com and station30.example.com and check our cluster status via the clustat & cman_tool commands:

#clustat
Cluster Status for Cluster_01 @ Fri Jun 1 15:56:08 2012
Member Status: Quorate
Member Name                                                   ID   Status
------ ----                                                   ---- ------
station30.example.com                                             1 Online
station20.example.com                                             2 Online, Local
#cman_tool status 
Version: 6.2.0
Config Version: 1
Cluster Name: Cluster_01
Cluster Id: 25517
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Quorum: 1
Active subsystems: 9
Flags: 2node Dirty
Ports Bound: 0 11 177
Node name: station20.example.com
Node ID: 2
Multicast addresses: 239.192.99.17
Node addresses: 192.168.5.20 

Step 20: The next step is to generate the fence key. Click on Cluster_01 and then click on the Fence option. Tick the fence daemon checkbox, enter the node IPs and click Apply. Once we do that it will create fence_xvm.key under the /etc/cluster folder.

Step 21: Now we need to add the fence device and mention that fence device under each node. Since the fence device we are adding is a non-shared fencing device, we create the fencing while adding a fence device under the node itself: click on the node, click on 'Manage Fence for this node', then click on 'Add a fence device to this node'. I am using HP ILO fencing here.

Step 22: A few settings are required on the ILO as well. To enter the ILO2 configuration, reboot the server, wait for the prompt and press F8. The first thing to configure is the IP address: go to Network -> DNS/DHCP and set DHCP Enabled to OFF. From the main screen select Network -> NIC and TCP/IP, set Network Interface Adapter to ON, configure the IP address, subnet mask and gateway, and press F10 to save the changes. The next step is to change/create user account settings: from the main screen go to User -> Add.

Step 23: Click on Cluster, then Failover Domain, and add a failover domain.

Step 24: Format the clustered LVM with the GFS2 file system.

/dev/vg0/lv0 is an existing LVM volume. Create a file system on /dev/vg0/lv0. Note that omitting the -t flag makes mkfs.gfs2 misparse the lock table name:
#mkfs.gfs2 -p lock_dlm Cluster_01:vg0 -j 3 /dev/vg0/lv0
mkfs.gfs2: More than one device specified (try -h for help)

[root@station30 ~]# mkfs.gfs2 -p lock_dlm -t Cluster_01:vg0 -j 3 /dev/vg0/lv0
This will destroy any data on /dev/vg0/lv0.
It appears to contain a gfs filesystem.
Are you sure you want to proceed? [y/n] y
Device:                    /dev/vg0/lv0
Blocksize:                 4096
Device Size                0.48 GB (126976 blocks)
Filesystem Size:           0.48 GB (126973 blocks)
Journals:                  3
Resource Groups:           2
Locking Protocol:          "lock_dlm"
Lock Table:                "Cluster_01:vg0"
UUID:                      A4599910-69AF-5814-8FA9-C1F382B7F5E5

#mount /dev/vg0/lv0 /var/www/html/
#gfs2_tool df /dev/mapper/vg0-lv0

Step 25: Now we need to add the resources

1. Click on Add Resource and then select the IP
2. Then we need to add the GFS File system
3. Now we need to add the script

Step 26: Now add a service group. Add resources in dependency order (IP > file system > script) to run the service successfully. Start the Webby service; to relocate it to a particular node:

#clusvcadm -r Webby -m station30.example.com

******* If you are interested in the qdisk concept then follow the steps below *******

Quorum Disk: if we have a 3-node cluster and two of the nodes go down, the cluster will not achieve quorum and hence will not start. In order to start the cluster on a single node we need extra votes for that node, and the quorum disk gives us that voting functionality.

# mkqdisk -c /dev/qdisk-vg/qdisk-lv -l qdisk

Next, set up the qdisk configuration in the cluster:

On all Nodes:
# /etc/init.d/qdiskd restart
# chkconfig qdiskd on


# cman_tool status

As a final test, to check whether the cluster is working as expected, I am going to power off station30, where my Webby application is currently running. Expected behavior: the Webby application should be relocated to another cluster node, i.e. station20 or station10. And now I am going to power off station20 as well, to check whether my qdisk configuration is working as expected. Fingers crossed 🙂 🙂 A final cman_tool status helps to understand the voting calculation.

Cheers!!!!


How To Kill Defunct Or Zombie Process?

A “defunct” process is also known as a “zombie” process. A zombie process is a dead process that is still residing in your system's process table even though it has finished executing. In one shot, we can say it is a dead process which is still in RAM. This process stays in your process table and consumes memory; having many defunct processes will consume memory, which in turn slows down your system. We have to clear defunct processes in order to free RAM and keep the system stable. Defunct processes are processes that have become corrupted in such a way that they no longer communicate (not really the right word, more like signal each other) with their parent or child process.
So kill the parent or child, and 99% of the time (around here at least) the defunct process will go away! No parent or child? You're out of luck, or look for a stuck automount.

 

Why are defunct processes created?
Ans: Whenever a process ends, all the memory used by that process is cleared and assigned to a new process, but due to programming errors/bugs some entries are still left in the process table. They are created when there is no proper communication between the parent process and the child process, i.e. the parent never reaps the child's exit status with wait().

[Image: Linux process life cycle]
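
As a quick shell illustration of how a zombie appears (durations are arbitrary): the subshell forks a short-lived child and then execs a long-running sleep, which never calls wait(), so the child lingers as <defunct> until its new parent exits:

[root@amitvashist ~]# (sleep 1 & exec sleep 60) &
[root@amitvashist ~]# sleep 2; ps -ef | grep defunct    # the sleep child now shows as <defunct>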

 

1. How to find a defunct process?
Ans: grep for defunct in the ps -ef output
#ps -ef | grep defunct

[root@amitvashist ~]# ps -ef | grep defunct | more
root      4801 29261  0 09:25 pts/5    00:00:00 grep defunct
root      6951     1  0 Dec30 ?        00:00:00 [bacula-sd] <defunct>

Or 

[root@amitvashist ~]# ps -el|grep Z
F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
0 Z  1000 10317     1 99  80   0 -     0 exit   ?        19:27:15 java <defunct>

2. How can I kill a defunct process?
Ans: just use the kill command
#kill defunct-pid

3. Still not able to kill?
Ans : Then use kill -9 to force kill that process
#kill -9 defunct-pid

[root@amitvashist ~]# kill -3 6951     {SIGQUIT: makes some processes dump diagnostic output, e.g. into a nohup log.}
[root@amitvashist ~]# kill -9 6951

4. Still have an issue in killing it?
Ans: then try to kill its parent PID and then the defunct PID.
#kill parent-id-of-defunct-pid
Then
#kill -9 parent-id-of-defunct-pid

[root@amitvashist ~]# kill -9 6951

5. Still having defunct?
Ans: if you still find defunct processes eating up RAM, then the last and final solution is to reboot your machine (this is not preferred on production boxes).

6. How to check all the files currently open by that process?
Ans: lsof on Linux & pfile on Solaris

[root@amitvashist ~]# lsof -p 6951        (on Solaris: pfile 6951)
COMMAND    PID USER   FD   TYPE  DEVICE     SIZE    NODE NAME
bacula-sd 6951 root  cwd    DIR   253,0     4096 3801089 /root
bacula-sd 6951 root  rtd    DIR   253,0     4096       2 /
bacula-sd 6951 root  txt    REG   253,0  2110599  368004 /usr/local/sbin/bacula-sd
bacula-sd 6951 root  mem    REG   253,0    75284  389867 /usr/lib/libz.so.1.2.3
bacula-sd 6951 root  mem    REG   253,0    46680 3604521 /lib/libnss_files-2.5.so
bacula-sd 6951 root  mem    REG   253,0   936908  369115 /usr/lib/libstdc++.so.6.0.8

 


VERITAS Cluster Suite on CentOS 5

Veritas Cluster

Cluster Information

Veritas Cluster 5.0 can have up to 32 nodes.

LLT (Low-Latency Transport)

Veritas uses a high-performance, low-latency protocol for cluster communications. LLT runs directly on top of the data link provider interface (DLPI) layer via Ethernet and has several major functions:

  • sending and receiving heartbeats
  • monitoring and transporting network traffic over multiple network links to every active system within the cluster
  • load-balancing traffic over multiple links
  • maintaining the state of communication
  • providing a nonroutable transport mechanism for cluster communications.

Group membership services/Atomic Broadcast (GAB)

GAB provides the following:

  • Group Membership Services – GAB maintains the overall cluster membership by way of its Group Membership Services function. Heartbeats are used to determine whether a system is an active member, joining, or leaving the cluster. GAB determines a system's position within the cluster.
  • Atomic Broadcast – cluster configuration and status information is distributed dynamically to all systems within the cluster using GAB's Atomic Broadcast feature. Atomic Broadcast ensures all active systems receive all messages, for every resource and service group in the cluster. Atomic means that all systems receive the update; if one fails, then the change is rolled back on all systems.

High Availability Daemon (HAD)

HAD tracks all changes within the cluster configuration and resource status by communicating with GAB. Think of HAD as the manager of the resource agents. A companion daemon called hashadow monitors HAD; if HAD fails, hashadow attempts to restart it, and likewise if the hashadow daemon dies, HAD will restart it. HAD maintains the cluster state information. HAD uses the main.cf file to build the cluster information in memory and is also responsible for updating the configuration in memory.

VCS architecture

So putting the above altogether we get:

  • Agents monitor resources on each system and provide status to HAD on the local system
  • HAD on each system send status information to GAB
  • GAB broadcasts configuration information to all cluster members
  • LLT transports all cluster communications to all cluster nodes
  • HAD on each node takes corrective action, such as failover, when necessary

Service Groups

There are three types of service groups:

  • Failover – The service group runs on one system at any one time.
  • Parallel – The service group can run simultaneously on more than one system at any time.
  • Hybrid – A hybrid service group is a combination of a failover service group and a parallel service group used in VCS 4.0 replicated data clusters, which are based on Veritas Volume Replicator.

When a service group appears to be suspended while being brought online you can flush the service group to enable corrective action. Flushing a service group stops VCS from attempting to bring resources online or take them offline and clears any internal wait states.

Resources

Resources are objects that related to hardware and software, VCS controls these resources through these actions:

  • Bringing resource online (starting)
  • Taking resource offline (stopping)
  • Monitoring a resource (probing)

When you link a parent resource to a child resource, the dependency becomes a component of the service group configuration. You can view the dependencies at the bottom of the main.cf file.

Proxy Resource

A proxy resource allows multiple service groups to monitor the same network interface. This reduces the network traffic that would result from having multiple NIC resources in different service groups monitoring the same interface.

Phantom Resource

The phantom resource is used to report the actual status of a service group that consists of only persistent resources. A service group shows an online status only when all of its nonpersistent resources are online. Therefore, if a service group has only persistent resources (such as a network interface), VCS considers the group offline, even if the persistent resources are running properly. By adding a phantom resource, the status of the service group is shown as online.

Veritas Cluster Cheat sheet

LLT and GAB Commands | Port Membership | Daemons | Log Files | Dynamic Configuration | Users | Resources | Resource Agents | Service Groups | Clusters | Cluster Status | System Operations | Service Group Operations | Resource Operations | Agent Operations | Starting and Stopping

LLT and GAB

VCS uses two components, LLT and GAB to share data over the private networks among systems.
These components provide the performance and reliability required by VCS.

LLT LLT (Low Latency Transport) provides fast, kernel-to-kernel comms and monitors network connections. The system admin configures the LLT by creating a configuration file (llttab) that describes the systems in the cluster and private network links among them. The LLT runs in layer 2 of the network stack
GAB GAB (Group membership and Atomic Broadcast) provides the global message order required to maintain a synchronised state among the systems, and monitors disk comms such as that required by the VCS heartbeat utility. The system admin configures GAB driver by creating a configuration file ( gabtab).

LLT and GAB files

/etc/llthosts The file is a database, containing one entry per system, that links the LLT system ID with the hosts name. The file is identical on each server in the cluster.
/etc/llttab The file contains information that is derived during installation and is used by the utility lltconfig.
/etc/gabtab The file contains the information needed to configure the GAB driver. This file is used by the gabconfig utility.
/etc/VRTSvcs/conf/config/main.cf The VCS configuration file. The file contains the information that defines the cluster and its systems.
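
As an illustrative sketch, the LLT files for a two-node cluster might look like this (node names, cluster ID and NIC names are assumptions):

# /etc/llthosts
0 station40
1 station50

# /etc/llttab
set-node station40
set-cluster 2
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -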

Gabtab Entries

/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 16 -S 1123
/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 144 -S 1124
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -s 1123
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 144 -p h -s 1124
/sbin/gabconfig -c -n2
gabdiskconf -i   Initialises the disk region
-s   Start Block
-S   Signature
gabdiskhb (heartbeat disks) -a   Add a gab disk heartbeat resource
-s   Start Block
-p   Port
-S   Signature
gabconfig -c   Configure the driver for use
-n   Number of systems in the cluster.

LLT and GAB Commands

Verifying that links are active for LLT lltstat -n
verbose output of the lltstat command lltstat -nvv | more
open ports for LLT lltstat -p
display the values of LLT configuration directives lltstat -c
lists information about each configured LLT link lltstat -l
List all MAC addresses in the cluster lltconfig -a list
stop the LLT running lltconfig -U
start the LLT lltconfig -c
verify that GAB is operating gabconfig -a (Note: port a indicates that GAB is communicating, port h indicates that VCS is started)
stop GAB running gabconfig -U
start the GAB gabconfig -c -n <number of nodes>
override the seed values in the gabtab file gabconfig -c -x

GAB Port Membership

List Membership gabconfig -a
Unregister port f /opt/VRTS/bin/fsclustadm cfsdeinit
Port Function a   gab driver
b   I/O fencing (designed to guarantee data integrity)
d   ODM (Oracle Disk Manager)
f   CFS (Cluster File System)
h   VCS (VERITAS Cluster Server: high availability daemon)
o   VCSMM driver (kernel module needed for Oracle and VCS interface)
q   QuickLog daemon
v   CVM (Cluster Volume Manager)
w   vxconfigd (module for cvm)

Cluster daemons

High Availability Daemon had
Companion Daemon hashadow
Resource Agent daemon <resource>Agent
Web Console cluster management daemon CmdServer

Cluster Log Files

Log Directory /var/VRTSvcs/log
primary log file (engine log file) /var/VRTSvcs/log/engine_A.log

Starting and Stopping the cluster

“-stale” instructs the engine to treat the local config as stale
“-force” instructs the engine to treat a stale config as a valid one
hastart [-stale|-force]
Bring the cluster into running mode from a stale state using the configuration file from a particular server hasys -force <server_name>
stop the cluster on the local server but leave the application/s running, do not failover the application/s hastop -local
stop cluster on local server but evacuate (failover) the application/s to another node within the cluster hastop -local -evacuate
stop the cluster on all nodes but leave the application/s running hastop -all -force

Cluster Status

display cluster summary hastatus -summary
continually monitor cluster hastatus
verify the cluster is operating hasys -display

Cluster Details

information about a cluster haclus -display
value for a specific cluster attribute haclus -value <attribute>
modify a cluster attribute haclus -modify <attribute name> <new>
Enable LinkMonitoring haclus -enable LinkMonitoring
Disable LinkMonitoring haclus -disable LinkMonitoring

Users

add a user hauser -add <username>
modify a user hauser -update <username>
delete a user hauser -delete <username>
display all users hauser -display

System Operations

add a system to the cluster hasys -add <sys>
delete a system from the cluster hasys -delete <sys>
Modify a system attributes hasys -modify <sys> <modify options>
list a system state hasys -state
Force a system to start hasys -force
Display the systems attributes hasys -display [-sys]
List all the systems in the cluster hasys -list
Change the load attribute of a system hasys -load <system> <value>
Display the value of a systems nodeid (/etc/llthosts) hasys -nodeid
Freeze a system (no offlining of the system, no onlining of groups) hasys -freeze [-persistent][-evacuate] (Note: main.cf must be in write mode)
Unfreeze a system (re-enable groups and resources back online) hasys -unfreeze [-persistent] (Note: main.cf must be in write mode)

Dynamic Configuration 

The VCS configuration must be in read/write mode in order to make changes. When in read/write mode the configuration becomes stale, a .stale file is created in $VCS_CONF/conf/config. When the configuration is put back into read only mode the .stale file is removed.

Change configuration to read/write mode haconf -makerw
Change configuration to read-only mode haconf -dump -makero
Check what mode cluster is running in haclus -display | grep -i 'readonly' (0 = write mode, 1 = read-only mode)
Check the configuration file hacf -verify /etc/VRTS/conf/config (Note: you can point to any directory as long as it has main.cf and types.cf)
convert a main.cf file into cluster commands hacf -cftocmd /etc/VRTS/conf/config -dest /tmp
convert a command file into a main.cf file hacf -cmdtocf /tmp -dest /etc/VRTS/conf/config

Service Groups

add a service group haconf -makerw
hagrp -add groupw
hagrp -modify groupw SystemList station40 1 station50 2
hagrp -autoenable groupw -sys station40
haconf -dump -makero
delete a service group haconf -makerw
hagrp -delete groupw
haconf -dump -makero
change a service group haconf -makerw
hagrp -modify groupw SystemList station40 1 station50 2 sun3 3
haconf -dump -makero (Note: use “hagrp -display <group>” to list attributes)
list the service groups hagrp -list
list the groups dependencies hagrp -dep <group>
list the parameters of a group hagrp -display <group>
display a service group’s resource hagrp -resources <group>
display the current state of the service group hagrp -state <group>
clear a faulted non-persistent resource in a specific grp hagrp -clear <group> [-sys] <host> <sys>
Change the system list in a cluster:
# remove the host
hagrp -modify grp_zlnrssd SystemList -delete <hostname>
# add the new host (don’t forget to state its position)
hagrp -modify grp_zlnrssd SystemList -add <hostname> 1
# update the autostart list
hagrp -modify grp_zlnrssd AutoStartList <host> <host>

Service Group Operations

Start a service group and bring its resources online hagrp -online <group> -sys <sys>
Stop a service group and takes its resources offline hagrp -offline <group> -sys <sys>
Switch a service group from system to another hagrp -switch <group> to <sys>
Enable all the resources in a group hagrp -enableresources <group>
Disable all the resources in a group hagrp -disableresources <group>
Freeze a service group (disable onlining and offlining) hagrp -freeze <group> [-persistent] Note: to check, use "hagrp -display <group> | grep TFrozen"
Unfreeze a service group (enable onlining and offlining) hagrp -unfreeze <group> [-persistent] Note: to check, use "hagrp -display <group> | grep TFrozen"
Enable a service group (only enabled groups can be brought online) haconf -makerw
hagrp -enable <group> [-sys]
haconf -dump -makero Note: to check, run "hagrp -display | grep Enabled"
Disable a service group (stop it from being brought online) haconf -makerw
hagrp -disable <group> [-sys]
haconf -dump -makero Note: to check, run "hagrp -display | grep Enabled"
Flush a service group and enable corrective action. hagrp -flush <group> -sys <system>

Resources

add a resource haconf -makerw
hares -add appDG DiskGroup groupw
hares -modify appDG Enabled 1
hares -modify appDG DiskGroup appdg
hares -modify appDG StartVolumes 0
haconf -dump -makero
delete a resource haconf -makerw
hares -delete <resource>
haconf -dump -makero
change a resource haconf -makerw
hares -modify appDG Enabled 1
haconf -dump -makero Note: list parameters with "hares -display <resource>"
make a resource attribute value global (the same on all systems) hares -global <resource> <attribute> <value>
make a resource attribute value local (per-system) hares -local <resource> <attribute> <value>
list the parameters of a resource hares -display <resource>
list the resources hares -list
list the resource dependencies hares -dep

Resource Operations

Online a resource hares -online <resource> [-sys]
Offline a resource hares -offline <resource> [-sys]
display the state of a resource (offline, online, etc.) hares -state
display the parameters of a resource hares -display <resource>
Offline a resource and propagate the command to its children hares -offprop <resource> -sys <sys>
Cause a resource agent to immediately monitor the resource hares -probe <resource> -sys <sys>
Clear a faulted resource (automatically initiates onlining) hares -clear <resource> [-sys]

Resource Types

Add a resource type hatype -add <type>
Remove a resource type hatype -delete <type>
List all resource types hatype -list
Display a resource type hatype -display <type>
List the resources of a particular type hatype -resources <type>
Display the value of a particular resource type attribute hatype -value <type> <attr>

 Resource Agents

add an agent pkgadd -d . <agent package>
remove an agent pkgrm <agent package>
change an agent n/a
list all HA agents haagent -list
Display an agent's run-time information, i.e. has it started, is it running? haagent -display <agent_name>
Display agents faults haagent -display |grep Faults

Resource Agent Operations

Start an agent haagent -start <agent_name>[-sys]
Stop an agent haagent -stop <agent_name>[-sys]

Build Veritas Cluster Server on CentOS 5 boxes.

Installation

Before you install VCS make sure you have the following prepared:

  • Cluster Name
  • Unique ID Number
  • Hostnames of the servers
  • Devices names of the network interfaces for the private networks
  • Root access
  • Able to perform remote shell from all systems (.rhosts file requires updating)
  • VCS software

To install VCS, follow the steps below; remember that all hosts must be able to SSH into each other as root without being prompted for a password:

Prerequisites:

  • CentOS 5.6+
  • Software: VRTS_SF_HA_Solutions_5.1_SP1_RHEL.tar.gz
  • Two or more nodes
  • Yum server configured { install Apache }
  • Shared storage { configure a Linux SCSI target / StarWind, or use VMAX storage }

Requirements:

NIC/IP detailed requirements:

  • Server 1: 2 NICs bonded { local IP } + 2 NICs { LLT heartbeat, no IP } + 1 backup
  • Server 2: 2 NICs bonded { local IP } + 2 NICs { LLT heartbeat, no IP } + 1 backup
  • Virtual IP: 1 virtual IP for HTTP (cluster IP)

Note: the exact IP and NIC requirements depend upon your environment; the above is the minimum needed to configure VCS.

How to Configure VCS Apache Cluster

Step 1: Install CentOS 5.6 on all the nodes with custom packages.

Step 2 : Configure Network Bonding

Create the bond interface file for the public network and save the file as

# vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.5.20 [This will be actual network IP address]
NETMASK=255.255.255.0
GATEWAY=192.168.5.1
USERCTL=no
BOOTPROTO=static
ONBOOT=yes

Note: make sure you remove the HWADDR / IP address / gateway information from the slave interfaces (eth1 and eth2 here) and add two important lines (MASTER and SLAVE) to those files:

# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

# vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Load bond driver/module
# vim /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

Test configuration
# modprobe bonding
# service network restart

Check with the below command whether Bonding is actually working or not.
# cat /proc/net/bonding/bond0
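If the bond is healthy, the output should look roughly like the following (illustrative only; the exact fields vary with kernel and driver version):

Bonding Mode: adaptive load balancing
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: eth1
MII Status: up

Slave Interface: eth2
MII Status: up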

Set up SSH passwordless authentication between all nodes
# ssh-keygen
# ssh-copy-id -i .ssh/id_rsa.pub station40
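With three nodes you can push the key to every peer in one go, for example (hostnames taken from the node list below):

# for node in station40 station50 station60; do ssh-copy-id -i ~/.ssh/id_rsa.pub $node; done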

Step 3: Installing VCS on the Below Mentioned Nodes

  • Station40.example.com: 192.168.5.40
  • Station50.example.com: 192.168.5.50
  • Station60.example.com: 192.168.5.60

Log in to one of the cluster nodes & run the installer program:

# cd /root
# tar -zxvf VRTS_SF_HA_Solutions_5.1_SP1_RHEL.tar.gz
# cd dvd1-redhatlinux/rhel5_x86_64/
#./installer

Varitas-1

For Cluster: Please choose option 2

Varatis-2

Accept EULA  Agreement:

Veritas-3

Please choose option 3 for all VCS RPMs:

Veritas-4

Please enter all the cluster node names:

Veritas-5

Installing VCS RPMs on all the cluster nodes:

Veratis-6

After the RPM installation completes:

Veritas-7

Set up a new & unique cluster name / ID:

VCS-8

Set up LLT heartbeat links on all the nodes:

vcs-9

Pushing the same configuration to all the cluster nodes:

vcs-10

The installation is now finished on all the nodes.

vcs-11

Now Configuring VCS:

Set the PATH variable on all nodes:
# vim .bash_profile
export PATH=$PATH:/sbin:/usr/sbin:/opt/VRTSvcs/bin:/etc/vx/bin:/usr/lib/vxvm/bin
# export PATH
# source .bash_profile

Log in & verify cluster-related information on any node:
# lltconfig
LLT is running
# lltstat -nvv |less

VCS-12

# gabconfig -a

vcs-13

# hastatus -sum

vcs-14
Create a Service Group

hagrp -add groupw
hagrp -modify groupw SystemList station40 1 station50 2
hagrp -autoenable groupw -sys station40
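Remember that these commands only take effect while the configuration is in read/write mode, so in practice the block is wrapped with the haconf commands from the cheat sheet above:

haconf -makerw
hagrp -add groupw
hagrp -modify groupw SystemList station40 1 station50 2
hagrp -autoenable groupw -sys station40
haconf -dump -makero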

Create disk group, volume, and filesystem resources

We have to create a disk group resource first; this will ensure that the disk group has been imported before we start any volumes
hares -add appDG DiskGroup groupw
hares -modify appDG Enabled 1
hares -modify appDG DiskGroup appdg
hares -modify appDG StartVolumes 0

Once the disk group resource has been created we can create the volume resource
hares -add appVOL Volume groupw
hares -modify appVOL Enabled 1
hares -modify appVOL Volume app01
hares -modify appVOL DiskGroup appdg

Now that the volume resource has been created we can create the filesystem mount resource
hares -add appMOUNT Mount groupw
hares -modify appMOUNT Enabled 1
hares -modify appMOUNT MountPoint /apps
hares -modify appMOUNT BlockDevice /dev/vx/dsk/appdg/app01
hares -modify appMOUNT FSType vxfs

To ensure that all resources are started in order, we create dependencies between them
hares -list
haconf -makerw
hares -link appVOL appDG
hares -link appMOUNT appVOL
hares -dep appVOL
haconf -dump -makero

Create an application resource

Once the filesystem resource has been created we can add an application resource; this will start, stop, and monitor the application.
hares -add webapp Application groupw
hares -modify webapp Enabled 1
hares -modify webapp User root
hares -modify webapp StartProgram “/etc/init.d/httpd start”
hares -modify webapp StopProgram “/etc/init.d/httpd stop”
hares -modify webapp PidFiles “/var/locks/httpd.pid”
hares -modify webapp MonitorProcesses “httpd -D”

Create a single virtual IP resource

create a single NIC resource
hares -add appNIC NIC groupw
hares -modify appNIC Enabled 1
hares -modify appNIC Device bond0

Create the single application IP resource
hares -add appIP IP groupw
hares -modify appIP Enabled 1
hares -modify appIP Device bond0
hares -modify appIP Address 192.168.5.100
hares -modify appIP NetMask 255.255.255.0
hares -modify appIP IfconfigTwice 1
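With the resources and dependencies in place, you can bring the whole group online on one node and watch it come up (both commands are covered earlier in this post):

hagrp -online groupw -sys station40
hastatus -sum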

Clear resource fault

# hastatus -sum

-- SYSTEM STATE
-- System       State      Frozen
A  station40    RUNNING    0
A  station50    RUNNING    0

-- GROUP STATE
-- Group     System      Probed   AutoDisabled   State
B  groupw    station40   Y        N              OFFLINE
B  groupw    station50   Y        N              STARTING|PARTIAL

-- RESOURCES ONLINING
-- Group     Type     Resource     System      IState
E  groupw    Mount    app02MOUNT   station50   W_ONLINE

# hares -clear app02MOUNT

Flush a group

# hastatus -sum

-- SYSTEM STATE
-- System       State      Frozen
A  station40    RUNNING    0
A  station50    RUNNING    0

-- GROUP STATE
-- Group     System      Probed   AutoDisabled   State
B  groupw    station40   Y        N              STOPPING|PARTIAL
B  groupw    station50   Y        N              OFFLINE|FAULTED

-- RESOURCES FAILED
-- Group     Type     Resource     System
C  groupw    Mount    app02MOUNT   station50

-- RESOURCES ONLINING
-- Group     Type     Resource     System      IState
E  groupw    Mount    app02MOUNT   station40   W_ONLINE_REVERSE_PROPAGATE

-- RESOURCES OFFLINING
-- Group     Type        Resource   System      IState
F  groupw    DiskGroup   appDG      station40   W_OFFLINE_PROPAGATE

# hagrp -flush groupw -sys station40

Getting Started with ELK

Setting Up ELK To Centralize & Visualize Logs

elk

Let's set up the key components below one by one:

  • Logstash: The server component of Logstash that processes incoming logs
  • Elasticsearch: Stores all of the logs
  • Kibana: Web interface for searching and visualizing logs
  • Logstash Forwarders: forward logs from clients to the central Logstash server, much like syslog-ng.

Prerequisites:

  • OS: Fedora 21
  • RAM: 2GB
  • CPU: 2
  • Java 7 or later

Install Elasticsearch

Run the following command to import the Elasticsearch public GPG key into rpm:

sudo rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

Create and edit a new yum repository file for Elasticsearch:

[root@base ~]# cat /etc/yum.repos.d/elasticsearch.repo 
[elasticsearch-1.4]
name=Elasticsearch repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
[root@base ~]#
yum -y install elasticsearch-1.4.4-1.noarch

Elasticsearch is now installed. Let’s edit the configuration:

vi /etc/elasticsearch/elasticsearch.yml

Find the lines that specify network.bind_host and network.host, uncomment them, and set them to your server's IP address so they look like this:

network.bind_host: 192.168.122.1
network.host: 192.168.122.1

Now start Elasticsearch:

service elasticsearch restart
/sbin/chkconfig --add elasticsearch
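Before moving on, it is worth confirming that Elasticsearch answers on its REST port (assuming the bind address configured above); a short JSON blob with the node name and version number means the service is healthy:

curl http://192.168.122.1:9200/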

Now that Elasticsearch is up and running, let’s install Kibana.

[root@base elasticsearch]# ps -ef | grep elastic
elastic+ 14097 1 0 Aug08 ? 00:03:42 /bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Delasticsearch -Des.pidfile=/var/run/elasticsearch/elasticsearch.pid -Des.path.home=/usr/share/elasticsearch -cp :/usr/share/elasticsearch/lib/elasticsearch-1.4.4.jar:/usr/share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/* -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch org.elasticsearch.bootstrap.Elasticsearch
root 24969 6488 0 03:26 pts/1 00:00:00 grep --color=auto elastic
[root@base elasticsearch]#

 

Install Kibana

Download Kibana to /opt with the following command:

cd /opt; curl -O https://download.elasticsearch.org/kibana/kibana/kibana-4.0.1-linux-x64.tar.gz

Extract the Kibana archive with tar & open the Kibana configuration file for editing:
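For example (moving the unpacked directory to /opt/kibana is an assumption that matches the paths used below):

cd /opt
tar -xzf kibana-4.0.1-linux-x64.tar.gz
mv kibana-4.0.1-linux-x64 kibana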

[root@base kibana]# cat /opt/kibana/config/kibana.yml | grep -v "^#" | grep -v "^$"
port: 5601
host: "base.vashist.com"
elasticsearch_url: "http://base.vashist.com:9200"
elasticsearch_preserve_host: true
kibana_index: ".kibana"
default_app_id: "discover"
request_timeout: 300000
shard_timeout: 0
verify_ssl: true
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index
[root@base kibana]#

In the Kibana configuration file, note that Kibana listens on port 5601 by default and points at Elasticsearch on port 9200. Since other web services are already running on port 80 on this host, we will put Nginx in front of Kibana as a reverse proxy to serve it.

Install Nginx:

yum -y install nginx
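The kibana.conf below references /etc/nginx/htpasswd.users for basic authentication. One way to create that file is with htpasswd from the httpd-tools package (the username kibanaadmin is just an example):

yum -y install httpd-tools
htpasswd -c /etc/nginx/htpasswd.users kibanaadmin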

Create a new Kibana Nginx config:

[root@base ~]# cat /etc/nginx/conf.d/kibana.conf 
server {
listen 8080;
server_name base.vashist.com;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://base.vashist.com:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
[root@base ~]#

Now restart Nginx & Kibana4 to put our changes into effect:
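Kibana 4 does not ship with a systemd unit, so the kibana4.service referenced below has to be created by hand. A minimal sketch, assuming Kibana was unpacked to /opt/kibana as above:

# cat /etc/systemd/system/kibana4.service
[Unit]
Description=Kibana 4

[Service]
ExecStart=/opt/kibana/bin/kibana
Restart=always

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl restart nginx.service kibana4.service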

[root@base ~]# systemctl status nginx.service 
● nginx.service - The nginx HTTP and reverse proxy server
 Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
 Active: active (running) since Sun 2015-08-02 23:27:18 IST; 6 days ago
 Main PID: 1250 (nginx)
 CGroup: /system.slice/nginx.service
 ├─1250 nginx: master process /usr/sbin/nginx
 └─1251 nginx: worker process
 Aug 02 23:27:18 base.vashist.com systemd[1]: Started The nginx HTTP and reverse proxy server.
[root@base ~]#
[root@base kibana]# systemctl status kibana4.service 
● kibana4.service
 Loaded: loaded (/etc/systemd/system/kibana4.service; enabled; vendor preset: disabled)
 Active: active (running) since Sun 2015-08-02 23:26:55 IST; 6 days ago
 Main PID: 820 (node)
 CGroup: /system.slice/kibana4.service
 └─820 /opt/kibana/bin/../node/bin/node /opt/kibana/bin/../src/bin/kibana.js
Aug 09 02:53:28 base.vashist.com kibana4[820]: {"@timestamp":"2015-08-08T21:23:28.158Z","level":"info","message":"POST /_msearch?timeout=0...0928791"
[root@base kibana]#

 

Install Logstash

 cd /opt; curl -O https://www.elastic.co/downloads/logstash/logstash-1.5.2.tar.gz

Extract the Logstash archive with tar & set up the configuration to suit your dataset:
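For example (the conf.d directory is created by hand here; it matches the path used for the config file later in this post):

cd /opt
tar -xzf logstash-1.5.2.tar.gz
mkdir /opt/logstash-1.5.2/conf.d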

Choosing a dataset

The first thing you need is some data which you want to analyze. As an example, I have used historical data of the Apple stock, which you can download from Yahoo’s historical stock database. Below you can see an excerpt of the raw data.

[avashist@base conf.d]$ tail -f /home/avashist/Downloads/yahoo_stock.csv
1980-12-26,35.500082,35.624961,35.500082,35.500082,13893600,0.541092
1980-12-24,32.50016,32.625039,32.50016,32.50016,12000800,0.495367
1980-12-23,30.875039,30.999918,30.875039,30.875039,11737600,0.470597
1980-12-22,29.625121,29.75,29.625121,29.625121,9340800,0.451546
1980-12-19,28.249759,28.375199,28.249759,28.249759,12157600,0.430582
1980-12-18,26.625201,26.75008,26.625201,26.625201,18362400,0.405821
1980-12-17,25.874799,26.00024,25.874799,25.874799,21610400,0.394383

Insert the data

Now we have to stream the data from the csv source file into Elasticsearch. With Logstash, we can also manipulate and clean the data on the fly. I am using a csv file in this example, but Logstash can deal with other input types as well.

Now create a new Logstash conf file:

[avashist@base ~]$ vi /opt/logstash-1.5.2/conf.d/stock_yahoo.conf

input { 
 file {
 path => "/home/avashist/Downloads/yahoo_stock.csv"
 start_position => "beginning" 
 }
}

Explanation:
With the input section of the configuration file, we are telling logstash to take the csv file as a datasource and start reading data at the beginning of the file.

Now as we have logstash reading the file, Logstash needs to know what to do with the data. Therefore, we are configuring the csv filter.

filter { 
 csv {
 separator => ","
 columns => ["Date","Open","High","Low","Close","Volume","Adj Close"]
 }
 mutate {convert => ["High", "float"]}
 mutate {convert => ["Open", "float"]}
 mutate {convert => ["Low", "float"]}
 mutate {convert => ["Close", "float"]}
 mutate {convert => ["Volume", "float"]}
}

Explanation:
The filter section tells Logstash which data format our dataset is in (in this case csv). We also give the names of the columns we want to keep in the output. We then convert all the fields containing numbers to float, so that Kibana knows how to deal with them.

The last thing is to tell Logstash where to stream the data. As we want to stream it directly to Elasticsearch, we are using the Elasticsearch output. You can also give multiple output adapters for streaming to different outputs. In this case, I have added the stdout output for seeing the output in the console. It is important to specify an index name for Elasticsearch. This index will be used later for configuring Kibana to visualize the dataset. Below, you can see the output section of our logstash.conf file.

output {  
    elasticsearch {
      host => "base.vashist.com" 
      index => "stock_indx"
        action => "index"
        workers => 1
    }
    stdout {}
}
[avashist@base ~]$

Explanation:
The output section is used to stream the input data to Elasticsearch. You also have to specify the name of the index which you want to use for the dataset.

The final step for inserting the data is to run logstash with the configuration file:

[root@base bin]# ./logstash -f ../conf.d/stock_yahoo.conf 
Aug 09, 2015 4:30:16 AM org.elasticsearch.node.internal.InternalNode <init>
INFO: [logstash-base.vashist.com-26823-13458] version[1.5.1], pid[26823], build[5e38401/2015-04-09T13:41:35Z]
INFO: [logstash-base.vashist.com-26823-13458] started
Logstash startup completed
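Once Logstash has finished reading the file, you can confirm the documents actually landed in the index; the count should match the number of rows in the csv (hostname as configured in the output section above):

curl 'http://base.vashist.com:9200/stock_indx/_count?pretty'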

 

Connect to Kibana

When you have finished setting up Logstash Forwarder on all of the servers whose logs you want to gather, let's look at Kibana, the web interface that we installed earlier.

In a web browser, go to the FQDN or public IP address of your Logstash Server. You should see a Kibana welcome page.

Click on Logstash Dashboard to go to the premade dashboard. You should see a histogram with log events, with log messages below (if you don’t see any events or messages, one of your four Logstash components is not configured properly).

Here, you can search and browse through your logs. You can also customize your dashboard. This is a sample of what your Kibana instance might look like:

Screenshot from 2015-08-09 04-48-46

GitHub Link : https://github.com/amitvashist7/ELK

Happy Learning 🙂 🙂

Cheers!!


Verify Certificate

How to verify a certificate & its contents with the help of the openssl family of commands.

1. Verify the subject and issuer of a certificate

[root@fedora101 CA]# openssl x509 -subject -issuer -enddate -noout -in /tmp/fedora101.crt 
subject= /C=IN/ST=UP/O=Plentree Enterprise Ltd/CN=Amit Vashist/emailAddress=plentree.ca@vashist.com
issuer= /C=IN/ST=UP/L=Meerut/O=Plentree Enterprise Ltd/CN=Amit Vashist/emailAddress=plentree.ca@vashist.com
notAfter=Apr 10 18:15:02 2016 GMT
[root@fedora101 CA]#

2. Verify all content of a certificate

[root@fedora101 CA]# openssl x509 -in /tmp/fedora101.crt -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=IN, ST=UP, L=Meerut, O=Plentree Enterprise Ltd, CN=Amit Vashist/emailAddress=plentree.ca@vashist.com
        Validity
            Not Before: Apr 11 18:15:02 2015 GMT
            Not After : Apr 10 18:15:02 2016 GMT
        Subject: C=IN, ST=UP, O=Plentree Enterprise Ltd, CN=Amit Vashist/emailAddress=plentree.ca@vashist.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c1:33:64:98:25:a1:72:5c:28:37:97:e1:b8:24:
                    f0:7b:5d:0e:45:d6:93:7d:d6:3f:33:3a:19:97:9b:
                    f3:5e:5c:d1:e2:47:37:e7:4b:35:4e:9f:45:bc:0b:
                    ad:0f:37:21:f1:40:aa:bd:3a:62:4c:ba:66:1b:36:
                    62:da:44:e6:53:25:09:f2:63:69:9a:35:50:f7:a2:
                    5d:68:88:de:5b:89:08:bc:0f:7b:6b:7e:a6:df:ab:
                    e2:0b:4e:97:b8:e3:62:a3:64:44:07:3f:07:1b:8e:
                    f5:bb:21:68:32:db:78:76:a3:f1:84:82:32:97:0a:
                    34:58:22:3c:28:fb:53:a3:d3:aa:e6:c6:34:65:8e:
                    25:2e:5b:f4:b4:b2:87:36:6d:75:3c:e7:bf:fa:0e:
                    db:cd:f1:99:d9:16:1a:3a:f3:3c:35:3d:b0:f7:76:
                    a2:7e:bc:d0:72:b9:0d:49:80:f4:89:be:0a:ff:3e:
                    70:cf:c2:79:be:d5:69:d7:7e:ff:0b:32:f6:d5:9b:
                    ab:b4:bd:44:a2:29:21:8a:d2:d6:0c:5f:45:c5:44:
                    6f:72:f7:17:2e:d5:a8:64:c4:e3:58:a9:70:4f:b8:
                    5d:8e:3f:25:07:0d:01:7a:97:a9:eb:df:ca:08:83:
                    55:b3:af:3b:6a:46:b2:51:70:3b:a2:12:e9:39:02:
                    24:29
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints: 
                CA:FALSE
            Netscape Comment: 
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier: 
                D6:6E:9E:60:23:85:D1:ED:21:33:22:59:1C:96:CE:B0:38:5C:37:39
            X509v3 Authority Key Identifier: 
                keyid:2A:FC:86:41:D9:84:9E:9C:B6:6A:0C:19:B1:8C:A8:A4:A1:A4:97:EA

    Signature Algorithm: sha256WithRSAEncryption
         6c:53:ec:27:a6:2e:b7:b0:ec:58:b2:40:71:f7:e7:68:6a:9a:
         d6:58:db:0a:ed:a1:10:15:b9:dd:1e:50:73:c3:8b:4d:bb:7b:
         d6:a9:24:24:29:b5:f2:f0:41:70:f5:8e:77:dd:c0:28:d4:a4:
         a7:4b:67:1d:4b:fc:46:7a:a2:c6:74:2b:85:a2:53:f3:53:3a:
         fb:45:30:ab:9b:7a:dd:66:0e:33:40:a5:3f:95:3a:07:4d:f0:
         ba:58:e5:a7:bf:16:ff:7d:ee:36:c7:00:d6:37:1f:15:ef:a4:
         75:d0:91:f2:27:7a:9d:0c:97:42:65:62:2c:f8:d7:34:e3:83:
         9e:2a:a7:b1:c2:0a:f1:65:37:79:73:ed:77:4e:c7:9d:b0:f3:
         51:f1:d7:39:cf:1c:e9:06:08:43:61:a3:fe:e1:18:4e:7e:00:
         bf:5b:29:22:ef:96:50:1e:d9:4d:d2:0f:41:b8:66:73:5a:0f:
         2e:49:b8:ee:de:b8:51:3c:57:ac:88:8f:6a:30:a5:ba:42:02:
         20:7e:0f:9b:5d:83:d9:66:5d:62:f1:8d:fe:29:c4:fd:4b:da:
         aa:81:a1:ed:8e:27:98:41:c7:14:4b:f7:b6:44:df:d4:7a:68:
         9f:dc:c9:5c:fb:e6:c0:5a:c2:21:bc:4b:bf:6a:6d:78:a3:57:
         c3:1b:8e:fd
[root@fedora101 CA]#

3. Verify that the certificate is valid for server authentication.

[root@fedora101 CA]# openssl verify -purpose sslserver -CAfile certs/ca.crt /tmp/fedora101.crt 
/tmp/fedora101.crt: OK
[root@fedora101 CA]#
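4. Verify that a private key matches a certificate

A common additional check is comparing the RSA modulus of the key and the certificate; if the two digests match, the pair belongs together (the key path assumes the fedora101.key generated in the Certificate Signing Request section below):

[root@fedora101 CA]# openssl x509 -noout -modulus -in /tmp/fedora101.crt | openssl md5
[root@fedora101 CA]# openssl rsa -noout -modulus -in /etc/pki/tls/fedora101.key | openssl md5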

Happy Learning 🙂 🙂

Cheers!!!


Certificate Signing Request

Certificate Signing Request

In public key infrastructure (PKI) systems, a certificate signing request (also CSR or certification request) is a message sent from an applicant to a certificate authority in order to apply for a digital identity certificate. The most common format for CSRs is the PKCS #10 specification; another is the Signed Public Key and Challenge (SPKAC) format generated by some web browsers.

Create a Certificate Request (CSR)

[root@fedora101 tls]# openssl req -config /etc/pki/tls/openssl.cnf -new -nodes -keyout fedora101.key -out fedora101.csr -days 100
Generating a 2048 bit RSA private key
....................................................................+++
..+++
writing new private key to 'fedora101.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:IN
State or Province Name (full name) []:UP
Locality Name (eg, city) [Default City]:Meerut
Organization Name (eg, company) [Default Company Ltd]:Plentree Enterprise Ltd
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:Amit Vashist
Email Address []:plentree.ca@vashist.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[root@fedora101 tls]#

Two files are created upon completion of these instructions, both in the current working directory (/etc/pki/tls here). fedora101.key is the private key, specific to the domain the certificate request was created for. fedora101.csr is the certificate request file, which can be used to generate a certificate specific to that domain.

[root@fedora101 tls]# ls
cert.pem  certs  fedora101.csr  fedora101.key  misc  openssl.cnf  private
[root@fedora101 tls]#

Now you can send the CSR file to the CA server in order to sign it & get the new CA-signed certificate:

[root@fedora101 CA]# openssl ca -config openssl.cnf -in /etc/pki/tls/fedora101.csr -out /tmp/fedora101.crt
Using configuration from openssl.cnf
Enter pass phrase for /etc/pki/CA/private/ca.key:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 1 (0x1)
        Validity
            Not Before: Apr 11 18:15:02 2015 GMT
            Not After : Apr 10 18:15:02 2016 GMT
        Subject:
            countryName               = IN
            stateOrProvinceName       = UP
            organizationName          = Plentree Enterprise Ltd
            commonName                = Amit Vashist
            emailAddress              = plentree.ca@vashist.com
        X509v3 extensions:
            X509v3 Basic Constraints: 
                CA:FALSE
            Netscape Comment: 
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier: 
                D6:6E:9E:60:23:85:D1:ED:21:33:22:59:1C:96:CE:B0:38:5C:37:39
            X509v3 Authority Key Identifier: 
                keyid:2A:FC:86:41:D9:84:9E:9C:B6:6A:0C:19:B1:8C:A8:A4:A1:A4:97:EA

Certificate is to be certified until Apr 10 18:15:02 2016 GMT (365 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
[root@fedora101 CA]#

To verify your certificate, run the following command on the CA server:

[root@fedora101 CA]# openssl x509 -subject -issuer -enddate -noout -in /tmp/fedora101.crt 
subject= /C=IN/ST=UP/O=Plentree Enterprise Ltd/CN=Amit Vashist/emailAddress=plentree.ca@vashist.com
issuer= /C=IN/ST=UP/L=Meerut/O=Plentree Enterprise Ltd/CN=Amit Vashist/emailAddress=plentree.ca@vashist.com
notAfter=Apr 10 18:15:02 2016 GMT
[root@fedora101 CA]#

Some Sample Errors:

[root@server101 CA]# openssl ca -config /etc/pki/CA/openssl.cnf -in /tmp/fedora101.csr -out /tmp/fedora101.crt
Using configuration from /etc/pki/CA/openssl.cnf
Enter pass phrase for ./private/ca.key:
Check that the request matches the signature
Signature ok
The countryName field needed to be the same in the
CA certificate (US) and the request (IN)
[root@server101 CA]#