20 Command Line Tools to Monitor Linux Performance

Monitoring and debugging Linux system performance is a tough job for any system or network administrator. After five years as a Linux administrator in the IT industry, I know how hard it is to keep systems up and running. For this reason, we've compiled a list of the 20 most frequently used command line monitoring tools, useful for every Linux/Unix system administrator. These commands are available under all flavors of Linux and can help you monitor a system and find the actual causes of a performance problem. This list should be enough for you to pick the tool that suits your monitoring scenario.

1. Top – Linux Process Monitoring

The Linux top command is a performance monitoring program that is used frequently by system administrators and is available on most Linux/Unix-like operating systems. It displays all running and active real-time processes in an ordered list and updates it regularly. It shows CPU usage, memory usage, swap memory, cache size, buffer size, process PID, user, command and much more, making it easy to spot processes with high memory or CPU utilization. This makes top very useful for monitoring a system and taking corrective action when required. Let's see the top command in action.
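A minimal session (the interactive keys below are from the common procps top; your build may differ slightly, and the username is illustrative):

$ top                 # start the interactive display
$ top -u tecmint      # show only processes owned by one user

Inside top, press 'P' to sort by CPU usage, 'M' to sort by memory usage and 'q' to quit.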

2. VmStat – Virtual Memory Statistics

The Linux vmstat command displays statistics of virtual memory, kernel threads, disks, system processes, I/O blocks, interrupts, CPU activity and much more. On most distributions vmstat ships with the procps (or procps-ng) package and is installed by default; if it is missing, install that package. The common usage of the command is shown below.
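A typical invocation samples at a fixed interval for a fixed count; here, every 2 seconds, 5 times:

$ vmstat 2 5

The first line of output reports averages since boot; the following lines report activity during each sampling interval.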

3. Lsof – List Open Files

The lsof command is used on many Linux/Unix-like systems to display a list of all open files and the processes that opened them. The open files include disk files, network sockets, pipes and devices. One of the main reasons for using this command is when a disk cannot be unmounted and the system displays the error that files are being used or opened. With this command you can easily identify which files are in use. The most common formats for this command are shown below.
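A few example invocations (the username and path are illustrative):

# lsof                # list every open file on the system
# lsof -u tecmint     # files opened by a specific user
# lsof /home          # processes holding files open on a mount point, handy before umount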

4. Tcpdump – Network Packet Analyzer

Tcpdump is one of the most widely used command-line network packet analyzers (packet sniffers). It captures or filters TCP/IP packets received or transmitted on a specific interface over a network, and it provides an option to save captured packets to a file for later analysis. tcpdump is available in almost all major Linux distributions.
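Typical usage (the interface name eth0 is an assumption; check yours with ip link):

# tcpdump -i eth0                    # capture live on an interface
# tcpdump -i eth0 -w capture.pcap    # write raw packets to a file
# tcpdump -r capture.pcap            # read a saved capture back for analysis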

5. Netstat – Network Statistics

Netstat is a command line tool for monitoring incoming and outgoing network packet statistics as well as interface statistics. It is a very useful tool for every system administrator to monitor network performance and troubleshoot network-related problems.
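Two common invocations (run the second as root so the owning process is shown):

$ netstat -a | more    # list all connections and listening sockets
# netstat -tulpn       # listening TCP/UDP sockets with owning PID/program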

6. Htop – Linux Process Monitoring

Htop is a much more advanced, interactive and real-time Linux process monitoring tool. It is quite similar to the Linux top command but has richer features, such as a user friendly interface for managing processes, shortcut keys, and vertical and horizontal views of the processes. Htop is a third party tool that isn't included in standard Linux installs; you need to install it with your distribution's package manager, such as yum or apt-get. For more information on installation read our article below.

7. Iotop – Monitor Linux Disk I/O

Iotop is also quite similar to the top command and the htop program, but it has an accounting function to monitor and display real-time disk I/O per process. This tool is very useful for finding the exact processes with high disk read/write usage.
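A quick sketch (iotop generally needs root):

# iotop       # interactive per-process I/O view
# iotop -o    # show only processes that are actually doing I/O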

8. Iostat – Input/Output Statistics

Iostat is a simple tool that collects and shows system input/output storage device statistics. It is often used to trace storage device performance issues, covering local disks as well as remote disks such as NFS mounts.
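For example, extended per-device statistics every 2 seconds, 5 reports (flags as in the sysstat version of iostat):

$ iostat -x 2 5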

9. IPTraf – Real Time IP LAN Monitoring

IPTraf is an open source, console-based, real-time network (IP LAN) monitoring utility for Linux. It collects a variety of information, such as the IP traffic passing over the network, including TCP flag information, ICMP details, TCP/UDP traffic breakdowns, and TCP connection packet and byte counts. It also gathers general and detailed interface statistics for TCP, UDP, IP, ICMP, non-IP, IP checksum errors, interface activity and so on.

10. Psacct or Acct – Monitor User Activity

The psacct or acct tools are very useful for monitoring each user's activity on the system. Both daemons run in the background, keeping a close watch on the overall activity of each user on the system and the resources being consumed.

These tools are very useful for system administrators to track each user's activity: what they are doing, what commands they issue, how many resources they use, how long they are active on the system and so on. A few example commands are shown below.
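A few of the utilities shipped with the package (process accounting must be enabled first, e.g. with accton; the package name varies between psacct and acct by distribution, and the username is illustrative):

# ac -d            # connect-time totals per day
# lastcomm root    # previously executed commands of a user
# sa               # summarize accounting information per command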

11. Monit – Linux Process and Services Monitoring

Monit is a free, open source and web based process supervision utility that automatically monitors and manages system processes, programs, files, directories, permissions, checksums and filesystems.

It monitors services like Apache, MySQL, Mail, FTP, ProFTP, Nginx, SSH and so on. The system status can be viewed from the command line or from its own web interface.

12. NetHogs – Monitor Per Process Network Bandwidth

NetHogs is a nice small open source program (similar to the Linux top command) that keeps a tab on each process's network activity on your system. It tracks the real-time network traffic bandwidth used by each program or application, as in the example below.
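Typical usage (the interface name is an assumption; run as root):

# nethogs eth0    # live per-process bandwidth on one interface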

13. iftop – Network Bandwidth Monitoring

iftop is another terminal-based, free, open source system monitoring utility that displays a frequently updated list of network bandwidth utilization (by source and destination host) passing through a network interface on your system. iftop does for network usage what top does for CPU usage: it is a 'top' family tool that monitors a selected interface and displays the current bandwidth usage between pairs of hosts.
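For example (the interface name is an assumption; iftop usually requires root):

# iftop -i eth0    # live bandwidth per host pair on eth0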

14. Monitorix – System and Network Monitoring

Monitorix is a free, lightweight utility designed to run on Linux/Unix servers and monitor as many system and network resources as possible. It has a built-in HTTP web server that regularly collects system and network information and displays it in graphs. It monitors system load average and usage, memory allocation, disk drive health, system services, network ports, mail statistics (Sendmail, Postfix, Dovecot, etc.), MySQL statistics and much more. It is designed to monitor overall system performance and helps in detecting failures, bottlenecks, abnormal activity and so on.

15. Arpwatch – Ethernet Activity Monitor

Arpwatch is a program designed to monitor Address Resolution (MAC and IP address changes) of Ethernet network traffic on a Linux network. It continuously watches Ethernet traffic and produces a log of IP and MAC address pairing changes along with timestamps. It can also send email alerts to the administrator when a pairing is added or changed. It is very useful for detecting ARP spoofing on a network.

16. Suricata – Network Security Monitoring

Suricata is a high performance, open source Network Security and Intrusion Detection and Prevention Monitoring System for Linux, FreeBSD and Windows. It was designed and is owned by the OISF (Open Information Security Foundation), a non-profit foundation.

17. VnStat PHP – Monitoring Network Bandwidth

VnStat PHP is a web based frontend application for the popular networking tool vnstat. VnStat PHP presents network traffic usage in a nice graphical mode. It displays total IN and OUT network traffic usage in hourly, daily and monthly views, plus a full summary report.

18. Nagios – Network/Server Monitoring

Nagios is a leading, powerful open source monitoring system that enables network/system administrators to identify and resolve server-related problems before they affect major business processes. With Nagios, administrators can monitor remote Linux, Windows, switches, routers and printers from a single window. It shows critical warnings and indicates when something goes wrong in your network or on a server, which helps you begin remediation before problems escalate.

19. Nmon: Monitor Linux Performance

Nmon (short for Nigel's performance Monitor) is a tool used to monitor Linux resources such as CPU, memory, disk usage, network, top processes, NFS, kernel and much more. The tool comes in two modes: Online Mode and Capture Mode.

Online Mode is used for real-time monitoring, while Capture Mode stores the output in CSV format for later processing, as sketched below.
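A capture-mode sketch (flag meanings: -f write to a file, -s interval in seconds, -c number of snapshots):

$ nmon                  # online (interactive) mode
$ nmon -f -s 30 -c 120  # capture a snapshot every 30s, 120 times, to a file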

20. Collectl: All-in-One Performance Monitoring Tool

Collectl is yet another powerful, feature-rich command line utility that gathers information about Linux system resources such as CPU usage, memory, network, inodes, processes, NFS, TCP, sockets and much more.
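A short example choosing subsystems with -s (letters: c = CPU, d = disk, m = memory):

$ collectl -scdm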

15 Practical Grep Command Examples In Linux / UNIX

In this article let us review 15 practical examples of Linux grep command that will be very useful to both newbies and experts.

First create the following demo_file that will be used in the examples below to demonstrate grep command.

$ cat demo_file
THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
this line is the 1st lower case line in this file.
This Line Has All Its First Character Of The Word With Upper Case.

Two lines above this line is empty.
And this is the last line.

1. Search for the given string in a single file

The basic usage of grep command is to search for a specific string in the specified file as shown below.

Syntax:
grep "literal_string" filename
$ grep "this" demo_file
this line is the 1st lower case line in this file.
Two lines above this line is empty.
And this is the last line.

2. Checking for the given string in multiple files

Syntax:
grep "string" FILE_PATTERN

This is also a basic usage of the grep command. For this example, let us copy demo_file to demo_file1. The grep output will include the file name in front of each matched line, as shown below. When the Linux shell sees the meta character (*), it expands it and gives all the matching files as input to grep.

$ cp demo_file demo_file1

$ grep "this" demo_*
demo_file:this line is the 1st lower case line in this file.
demo_file:Two lines above this line is empty.
demo_file:And this is the last line.
demo_file1:this line is the 1st lower case line in this file.
demo_file1:Two lines above this line is empty.
demo_file1:And this is the last line.

3. Case insensitive search using grep -i

Syntax:
grep -i "string" FILE


This is also a basic usage of grep. It searches for the given string/pattern case insensitively, so it matches words such as “the”, “THE” and “The”, as shown below.

$ grep -i "the" demo_file
THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
this line is the 1st lower case line in this file.
This Line Has All Its First Character Of The Word With Upper Case.
And this is the last line.

4. Match regular expression in files

Syntax:
grep "REGEX" filename


This is a very powerful feature, if you can use regular expressions effectively. In the following example, grep searches for all patterns that start with “lines” and end with “empty” with anything in between, i.e. it searches for “lines[anything in-between]empty” in demo_file.

$ grep "lines.*empty" demo_file
Two lines above this line is empty.

From the grep documentation, a regular expression may be followed by one of several repetition operators (an example follows the list):

  • ? The preceding item is optional and matched at most once.
  • * The preceding item will be matched zero or more times.
  • + The preceding item will be matched one or more times.
  • {n} The preceding item is matched exactly n times.
  • {n,} The preceding item is matched n or more times.
  • {,m} The preceding item is matched at most m times.
  • {n,m} The preceding item is matched at least n times, but not more than m times.
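A quick illustration of an interval operator, using grep -E (extended regular expressions) against the demo_file created above:

$ grep -E "l{2}" demo_file
This Line Has All Its First Character Of The Word With Upper Case.

With basic regular expressions the braces must be escaped, i.e. grep "l\{2\}" demo_file.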

5. Checking for full words, not for sub-strings using grep -w

If you want to search for a word and avoid matching substrings, use the -w option. A normal search will show all the lines containing the substring.

The following example is a regular grep searching for “is”. When you search for “is” without any option, it will match “is”, “his”, “this” and everything else that has the substring “is”.

$ grep -i "is" demo_file
THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
this line is the 1st lower case line in this file.
This Line Has All Its First Character Of The Word With Upper Case.
Two lines above this line is empty.
And this is the last line.


The following example is the word grep, searching only for the word “is”. Please note that this output does not contain the line “This Line Has All Its First Character Of The Word With Upper Case”, even though “is” appears inside “This”, because grep -w looks only for the word “is” and not for “this”.

$ grep -iw "is" demo_file
THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
this line is the 1st lower case line in this file.
Two lines above this line is empty.
And this is the last line.

6. Displaying lines before/after/around the match using grep -A, -B and -C

When grepping through a huge file, it can be useful to see some lines of context around the match. grep can show you not only the matching lines but also the lines after, before, or around the match.

Please create the following demo_text file for this example.

$ cat demo_text
4. Vim Word Navigation

You may want to do several navigation in relation to the words, such as:

 * e - go to the end of the current word.
 * E - go to the end of the current WORD.
 * b - go to the previous (before) word.
 * B - go to the previous (before) WORD.
 * w - go to the next word.
 * W - go to the next WORD.

WORD - WORD consists of a sequence of non-blank characters, separated with white space.
word - word consists of a sequence of letters, digits and underscores.

Example to show the difference between WORD and word

 * 192.168.1.1 - single WORD
 * 192.168.1.1 - seven words.

6.1 Display N lines after match

-A is the option which prints the specified N lines after the match as shown below.

Syntax:
grep -A <N> "string" FILENAME


The following example prints the matched line, along with the 3 lines after it.

$ grep -A 3 -i "example" demo_text
Example to show the difference between WORD and word

* 192.168.1.1 - single WORD
* 192.168.1.1 - seven words.

6.2 Display N lines before match

-B is the option which prints the specified N lines before the match.

Syntax:
grep -B <N> "string" FILENAME


Just as the -A option shows N lines after the match, the -B option shows N lines before it.

$ grep -B 2 "single WORD" demo_text
Example to show the difference between WORD and word

* 192.168.1.1 - single WORD

6.3 Display N lines around match

-C is the option which prints the specified N lines of context around the match. On some occasions you might want the match to appear with lines from both sides. This option shows N lines on both sides (before and after) of the match.

$ grep -C 2 "Example" demo_text
word - word consists of a sequence of letters, digits and underscores.

Example to show the difference between WORD and word

* 192.168.1.1 - single WORD

7. Highlighting the search using GREP_OPTIONS

grep prints the lines that match the pattern/string you give; if you want it to highlight which part of each line matched, do the following.

When you do the following export you will get highlighting of the matched searches. In the following example, it will highlight all occurrences of “this” once you set the GREP_OPTIONS environment variable as shown below. (Note that recent versions of GNU grep deprecate GREP_OPTIONS; a shell alias such as alias grep='grep --color=auto' achieves the same effect.)

$ export GREP_OPTIONS='--color=auto' GREP_COLOR='100;8'

$ grep this demo_file
this line is the 1st lower case line in this file.
Two lines above this line is empty.
And this is the last line.

8. Searching in all files recursively using grep -r

When you want to search all the files under the current directory and its subdirectories, the -r option is the one to use. The following example looks for the string “ramesh” in all files in the current directory and all its subdirectories.

$ grep -r "ramesh" *

9. Invert match using grep -v

You have seen options to show the matched lines, the lines before and after a match, and to highlight matches. Naturally there is also the -v option to invert the match.

When you want to display the lines which do not match the given string/pattern, use the option -v as shown below. This example displays all the lines that did not match the word “go”.

$ grep -v "go" demo_text
4. Vim Word Navigation

You may want to do several navigation in relation to the words, such as:

WORD - WORD consists of a sequence of non-blank characters, separated with white space.
word - word consists of a sequence of letters, digits and underscores.

Example to show the difference between WORD and word

* 192.168.1.1 - single WORD
* 192.168.1.1 - seven words.

10. Display the lines which do not match any of the given patterns

Syntax:
grep -v -e "pattern" -e "pattern"
$ cat test-file.txt
a
b
c
d

$ grep -v -e "a" -e "b" -e "c" test-file.txt
d

11. Counting the number of matches using grep -c

When you want to count how many lines match the given pattern/string, use the option -c.

Syntax:
grep -c "pattern" filename
$ grep -c "go" demo_text
6


To find out how many lines match the pattern:

$ grep -c this demo_file
3


To find out how many lines do not match the pattern:

$ grep -v -c this demo_file
4

12. Display only the file names which matches the given pattern using grep -l

If you want grep to print only the names of the files that match the given pattern, use the -l (lower-case L) option.

When you give multiple input files to grep, it displays the names of the files that contain text matching the pattern. This is very handy when you are trying to find notes somewhere in your whole directory structure.

$ grep -l this demo_*
demo_file
demo_file1

13. Show only the matched string

By default grep shows the whole line that matches the given pattern/string, but if you want grep to print only the matched part of the line, use the -o option.

It may not be that useful when you give a literal string, but it becomes very useful when you give a regex pattern and want to see what it actually matches, as shown below.

$ grep -o "is.*line" demo_file
is line is the 1st lower case line
is line
is is the last line

14. Show the position of match in the line

When you want grep to show the position where it matches the pattern in the file, use the following options:

Syntax:
grep -o -b "pattern" file
$ cat temp-file.txt
12345
12345

$ grep -o -b "3" temp-file.txt
2:3
8:3


Note: the output of the grep command above is not the position within the line, it is the byte offset from the start of the whole file.

15. Show line number while displaying the output using grep -n

To show the line number along with each matched line, use the -n option. It uses 1-based line numbering for each file.

$ grep -n "go" demo_text
5: * e - go to the end of the current word.
6: * E - go to the end of the current WORD.
7: * b - go to the previous (before) word.
8: * B - go to the previous (before) WORD.
9: * w - go to the next word.
10: * W - go to the next WORD.

35 Practical Examples of Linux Find Command

The Linux find command is one of the most important and frequently used commands on Linux systems. It is used to search and locate files and directories based on conditions you specify. find can be used in a variety of ways: you can find files by permissions, users, groups, file type, date, size and other criteria.

Linux Find Command

Through this article we are sharing our day-to-day Linux find command experience and its usage in the form of examples: the 35 most used find command examples in Linux. We have divided the article into five parts, from basic to advanced usage of the find command.

  1. Part I: Basic Find Commands for Finding Files with Names
  2. Part II: Find Files Based on their Permissions
  3. Part III: Search Files Based On Owners and Groups
  4. Part IV: Find Files and Directories Based on Date and Time
  5. Part V: Find Files and Directories Based on Size

Part I – Basic Find Commands for Finding Files with Names

1. Find Files Using Name in Current Directory

Find all the files whose name is tecmint.txt in a current working directory.

# find . -name tecmint.txt

./tecmint.txt

2. Find Files Under Home Directory

Find all the files under /home directory with name tecmint.txt.

# find /home -name tecmint.txt

/home/tecmint.txt

3. Find Files Using Name and Ignoring Case

Find all files named tecmint.txt under the /home directory, ignoring case (matching both capital and small letters).

# find /home -iname tecmint.txt

./tecmint.txt
./Tecmint.txt

4. Find Directories Using Name

Find all directories whose name is Tecmint in / directory.

# find / -type d -name Tecmint

/Tecmint

5. Find PHP Files Using Name

Find all php files whose name is tecmint.php in a current working directory.

# find . -type f -name tecmint.php

./tecmint.php

6. Find all PHP Files in Directory

Find all php files in a directory.

# find . -type f -name "*.php"

./tecmint.php
./login.php
./index.php

Part II – Find Files Based on their Permissions

7. Find Files With 777 Permissions

Find all the files whose permissions are 777.

# find . -type f -perm 0777 -print

8. Find Files Without 777 Permissions

Find all the files without permission 777.

# find / -type f ! -perm 777

9. Find SGID Files with 644 Permissions

Find all SGID-bit files whose permissions are set to 644.

# find / -perm 2644

10. Find Sticky Bit Files with 551 Permissions

Find all Sticky Bit set files whose permissions are 551.

# find / -perm 1551

11. Find SUID Files

Find all SUID set files.

# find / -perm /u=s

12. Find SGID Files

Find all SGID set files.

# find / -perm /g+s

13. Find Read Only Files

Find all Read Only files.

# find / -perm /u=r

14. Find Executable Files

Find all Executable files.

# find / -perm /a=x

15. Find Files with 777 Permissions and Chmod to 644

Find all 777 permission files and use chmod command to set permissions to 644.

# find / -type f -perm 0777 -print -exec chmod 644 {} \;

16. Find Directories with 777 Permissions and Chmod to 755

Find all 777 permission directories and use chmod command to set permissions to 755.

# find / -type d -perm 777 -print -exec chmod 755 {} \;

17. Find and remove single File

To find a single file called tecmint.txt and remove it.

# find . -type f -name "tecmint.txt" -exec rm -f {} \;

18. Find and remove Multiple File

To find and remove multiple files, such as .mp3 or .txt files, use:

# find . -type f -name "*.txt" -exec rm -f {} \;

OR

# find . -type f -name "*.mp3" -exec rm -f {} \;

19. Find all Empty Files

To find all empty files under a certain path.

# find /tmp -type f -empty

20. Find all Empty Directories

To find all empty directories under a certain path.

# find /tmp -type d -empty

21. Find all Hidden Files

To find all hidden files, use the command below.

# find /tmp -type f -name ".*"

Part III – Search Files Based On Owners and Groups

22. Find Single File Based on User

To find a single file called tecmint.txt under the / root directory, owned by user root.

# find / -user root -name tecmint.txt

23. Find all Files Based on User

To find all files that belong to user tecmint under the /home directory.

# find /home -user tecmint

24. Find all Files Based on Group

To find all files that belong to the group developer under the /home directory.

# find /home -group developer

25. Find Particular Files of User

To find all .txt files of user Tecmint under /home directory.

# find /home -user tecmint -iname "*.txt"

Part IV – Find Files and Directories Based on Date and Time

26. Find Last 50 Days Modified Files

To find all the files which were modified exactly 50 days ago.

# find / -mtime 50

27. Find Last 50 Days Accessed Files

To find all the files which were last accessed exactly 50 days ago.

# find / -atime 50

28. Find Last 50-100 Days Modified Files

To find all the files which were modified more than 50 days ago but less than 100 days ago.

# find / -mtime +50 -mtime -100

29. Find Changed Files in Last 1 Hour

To find all the files which were changed in the last hour.

# find / -cmin -60

30. Find Modified Files in Last 1 Hour

To find all the files which were modified in the last hour.

# find / -mmin -60

31. Find Accessed Files in Last 1 Hour

To find all the files which were accessed in the last hour.

# find / -amin -60

Part V – Find Files and Directories Based on Size

32. Find 50MB Files

To find all files of exactly 50MB, use:

# find / -size 50M

33. Find Size between 50MB – 100MB

To find all the files which are greater than 50MB and less than 100MB.

# find / -size +50M -size -100M

34. Find and Delete 100MB Files

To find all files larger than 100MB and delete them using a single command.

# find / -size +100M -exec rm -rf {} \;

35. Find Specific Files and Delete

Find all .mp3 files larger than 10MB and delete them using a single command.

# find / -type f -name "*.mp3" -size +10M -exec rm {} \;

That's it! We are ending this post here. In our next article we will discuss more Linux commands in depth with practical examples. Let us know your opinion on this article in the comment section.

Understanding and Using Systemd

Systemd components graphic

Image courtesy Wikimedia Commons, CC BY-SA 3.0

Like it or not, systemd is here to stay, so we might as well know what to do with it.

systemd is controversial for several reasons: It's a replacement for something that a lot of Linux users don't think needs to be replaced, and the antics of the systemd developers have not won hearts and minds, but rather the opposite, as evidenced in the famous LKML thread where Linus Torvalds banned systemd dev Kay Sievers from the Linux kernel.

It's tempting to let personalities get in the way. As fun as it is to rant and rail and emit colorful epithets, it's beside the point. For lo so many years Linux was content with SysVInit and BSD init. Then came add-on service managers like the service and chkconfig commands, which were supposed to make service management easier, but for me they were just more things to learn that didn't make the tasks any easier, just more cluttered.

Then came Upstart and systemd, with all kinds of convoluted addons to maintain SysVInit compatibility. Which is a nice thing to do, but good luck understanding it. Now Upstart is being retired in favor of systemd, probably in Ubuntu 14.10, and you’ll find a ton of systemd libs and tools in 14.04. Just for giggles, look at the list of files in the systemd-services package in Ubuntu 14.04:

$ dpkg -L systemd-services

Check out the man pages to see what all of this stuff does.

It's always scary when developers start monkeying around with key Linux subsystems, because we're pretty much stuck with whatever they foist on us. If we don't like a particular software application, desktop environment, or command, there are multiple alternatives and it is easy to use something else. But essential subsystems have deep hooks in the kernel, all manner of management scripts, and software package dependencies, so replacing one is not a trivial task.

So the moral is things change, computers are inevitably getting more complex, and it all works out in the end. Or not, but absent the ability to shape events to our own liking we have to deal with it.

First systemd Steps

Red Hat is the inventor and primary booster of systemd, so the best distros for playing with it are Red Hat Enterprise Linux, RHEL clones like CentOS and Scientific Linux, and of course good ole Fedora Linux, which always ships with the latest, greatest, and bleeding-edgiest. My examples are from CentOS 7.

Experienced RH users can still use service and chkconfig in RHEL 7, but it's long past time to dump them in favor of native systemd utilities. systemd has outpaced them, and service and chkconfig do not support native systemd services.

Our beloved /etc/inittab is no more. Instead, we have a /etc/systemd/system/ directory chock-full of symlinks to files in /usr/lib/systemd/system/. /usr/lib/systemd/system/ contains the init scripts; to start a service at boot it must be linked into /etc/systemd/system/. The systemctl command does this for you when you enable a new service, like this example for ClamAV:

# systemctl enable clamd@scan.service
ln -s '/usr/lib/systemd/system/clamd@scan.service' '/etc/systemd/system/multi-user.target.wants/clamd@scan.service'

How do you know the name of the init script, and where does it come from? On CentOS 7 they're broken out into separate packages. Many servers (for example Apache) have not caught up to systemd and do not have systemd init scripts. ClamAV offers both systemd and SysVInit init scripts, so you can install the one you prefer:

$ yum search clamav
clamav-server-sysvinit.noarch
clamav-server-systemd.noarch

So what’s inside these init scripts? We can see for ourselves:

$ less /usr/lib/systemd/system/clamd@scan.service
.include /lib/systemd/system/clamd@.service
[Unit]
Description = Generic clamav scanner daemon
[Install]
WantedBy = multi-user.target

Now you can see how systemctl knows where to install the symlink, and this init script also includes a dependency on another service, clamd@.service.

systemctl displays the status of all installed services that have init scripts:

$ systemctl list-unit-files --type=service
UNIT FILE              STATE
[...]
chronyd.service        enabled
clamd@.service         static
clamd@scan.service     disabled

There are three possible states for a service: enabled, disabled, and static. Enabled means it has a symlink in a .wants directory. Disabled means it does not. Static means the service is missing the [Install] section in its init script, so you cannot enable or disable it. Static services are usually dependencies of other services and are controlled automatically. You can see this in the ClamAV example, as clamd@.service is a dependency of clamd@scan.service, and it runs only when clamd@scan.service runs.
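You can also query a single unit's state directly, using the ClamAV unit from the earlier example:

$ systemctl is-enabled clamd@scan.service
disabled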

None of these states tell you if a service is running. The ps command will tell you, or use systemctl to get more detailed information:

$ systemctl status bluetooth.service
bluetooth.service - Bluetooth service
   Loaded: loaded (/usr/lib/systemd/system/bluetooth.service; enabled)
   Active: active (running) since Thu 2014-09-14 6:40:11 PDT
  Main PID: 4964 (bluetoothd)
   CGroup: /system.slice/bluetooth.service
           |_4964 /usr/bin/bluetoothd -n

systemctl tells you everything you want to know, if you know how to ask.

Cheatsheet

These are the commands you’re probably going to use the most:

# systemctl start [name.service]
# systemctl stop [name.service]
# systemctl restart [name.service]
# systemctl reload [name.service]
$ systemctl status [name.service]
# systemctl is-active [name.service]
$ systemctl list-units --type service --all

systemd has 12 unit types. .service is for system services, and when you're running any of the above commands you can leave off the .service extension, because systemd assumes a service unit if you don't specify something else. The other unit types are:

  • Target: group of units
  • Automount: filesystem auto-mountpoint
  • Device: kernel device names, which you can see in sysfs and udev
  • Mount: filesystem mountpoint
  • Path: file or directory
  • Scope: external processes not started by systemd
  • Slice: a management unit of processes
  • Snapshot: systemd saved state
  • Socket: IPC (inter-process communication) socket
  • Swap: swap file
  • Timer: systemd timer.

It is unlikely that you’ll ever need to do anything to these other units, but it’s good to know they exist and what they’re for. You can look at them:

$ systemctl list-units --type [unit name]

Blame Game

For whatever reason, it seems that the proponents of SysVInit replacements are obsessed with boot times. My systemd systems, like CentOS 7, don't boot up all that much faster than the others. It's not something I particularly care about in any case, since most boot speed measurements only measure reaching the login prompt, and not how long it takes for the system to completely start and be usable. Microsoft Windows has long been the champion offender in this regard, reaching a login prompt fairly quickly, and then taking several more minutes to load and run nagware, commercialware, spyware, and pretty much everything except what you want. (I swear if I see one more stupid Oracle Java updater nag screen I am going to turn violent.)

Even so, for anyone who does care about boot times you can run a command to see how long every program and service takes to start up:

$ systemd-analyze blame
  5.728s firewalld.service
  5.111s plymouth-quit-wait.service
  4.046s tuned.service
  3.550s accounts-daemon.service
  [...]

And several dozen more. Well, that's all for today, folks. systemd is already a hugely complex beast; we have only scratched the surface here.

Docker and DevOps: Why it Matters

Unless you have been living under a rock for the last year, you have probably heard about Docker. Docker describes itself as an open platform for distributed applications for developers and sysadmins. That sounds great, but why does it matter?

Wait, virtualization isn’t new!?

Virtualization technology has existed for more than a decade and in the early days revolutionized how the world managed server environments. The virtualization layer later became the basis for the modern cloud, with virtual servers being created and scaled on-demand. Traditionally, virtualization software was expensive and came with a lot of overhead. Linux cgroups have existed for a while, but recently Linux containers came along and added namespace support to provide isolated environments for applications. Vagrant + LXC + Chef/Puppet/Ansible have been a powerful combination for a while, so what does Docker bring to the table?

Virtualization isn’t new and neither are containers, so let’s discuss what makes Docker special.

The cloud made it easy to host complex and distributed applications, and therein lies the problem. Ten years ago applications looked straightforward and had few complex dependencies.


The reality is that application complexity has evolved significantly in the last five years, and even simple services are now extremely complex.


It has become a best practice to build large distributed applications using independent microservices. The model has changed from monolithic to distributed to now containerized microservices. Every microservice has its dependencies and unique deployment scenarios which makes managing operations even more difficult. The default is not a single stack being deployed to a single server, but rather loosely coupled components deployed to many servers.

Docker makes it easy to deploy any application on any platform.

The need for Docker

It is not just that applications are more complex; more importantly, the development model and culture have evolved. When I started engineering, developers had dedicated servers with their own builds if they were lucky. More often than not your team shared a development server, as it was too expensive and cumbersome for every developer to have their own environment. Times have changed significantly: the cultural norm nowadays is for every developer to be able to run complex applications off a virtual machine on their laptop (or a dev server in the cloud). With the cheap on-demand resources provided by cloud environments, it is common to have many application environments: dev, QA, production. Docker containers are isolated but share the same kernel and core operating system files, which makes them lightweight and extremely fast. Using Docker to manage containers makes it easier to build distributed systems by allowing applications to run on a single machine or across many virtual machines with ease.

Docker is both a great software project (Docker engine) and a vibrant community (DockerHub). Docker combines a portable, lightweight application runtime and packaging tool and a cloud service for sharing applications and automating workflows.

Docker makes it easy for developers and operations to collaborate

DevOps professionals appreciate Docker as it makes it extremely easy to manage the deployment of complex distributed applications. Docker also manages to unify the DevOps community whether you are a Chef fan, Puppet enthusiast, or Ansible aficionado. Docker is also supported by the major cloud platforms including Amazon Web Services and Microsoft Azure which means it’s easy to deploy to any platform. Ultimately, Docker provides flexibility and portability so applications can run on-premise on bare metal or in a public or private cloud.

DockerHub provides official language stacks and repos


The Docker community is built on a mature open source mentality with the corporate backing required to offer a polished experience. There is a vibrant and growing ecosystem brought together on DockerHub. This means official language stacks for the common app platforms, so the community has officially supported, high-quality Docker repos and therefore wider and better support.

Since Docker is so well supported, many companies offer support for Docker as a platform, with official repos on DockerHub.


What is Docker?

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

Why do developers like it?

With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere – colleagues’ OS X and Windows laptops, QA servers running Ubuntu in the cloud, and production data center VMs running Red Hat.

Developers can get going quickly by starting with one of the 13,000+ apps available on Docker Hub. Docker manages and tracks changes and dependencies, making it easier for sysadmins to understand how the apps that developers build work. And with Docker Hub, developers can automate their build pipeline and share artifacts with collaborators through public or private repositories.

Docker helps developers build and ship higher-quality applications, faster.
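As a tiny illustration of that workflow, pulling an official image from Docker Hub and starting a shell in a container takes two commands (the image name is just an example):

$ docker pull ubuntu
$ docker run -it ubuntu /bin/bash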

Why do sysadmins like it?

Sysadmins use Docker to provide standardized environments for their development, QA, and production teams, reducing “works on my machine” finger-pointing. By “Dockerizing” the app platform and its dependencies, sysadmins abstract away differences in OS distributions and underlying infrastructure.

In addition, standardizing on the Docker Engine as the unit of deployment gives sysadmins flexibility in where workloads run. Whether on-premise bare metal or data center VMs or public clouds, workload deployment is less constrained by infrastructure technology and is instead driven by business priorities and policies. Furthermore, the Docker Engine’s lightweight runtime enables rapid scale-up and scale-down in response to changes in demand.

Docker helps sysadmins deploy and run any app on any infrastructure, quickly and reliably.

How is this different from Virtual Machines?

Virtual Machines

Each virtualized application includes not only the application – which may be only 10s of MB – and the necessary binaries and libraries, but also an entire guest operating system – which may weigh 10s of GB.

Docker

The Docker Engine container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.

Ten Open Source Node.js Apps

In this article we are going to take a quick look at 10 open source Node.js applications. I want to shed some light on a few awesome Node.js projects for the sake of exposure, and I hope that you (just like me) will read and learn from their source code even if the applications themselves are of no interest to you.

Strider CD

Strider CD (GitHub: Strider-CD/strider, License: BSD) is a Continuous Deployment / Continuous Integration platform.

Getting started is very simple. Simply add your GitHub projects to Strider and relax: the tight integration means your tests will run on every commit. Check out the YouTube demo for a 4-minute introduction.

Popcorn Time

Popcorn Time (GitHub: popcorn-official/popcorn-app, License: GPL) allows any computer user to easily watch movies streamed from torrents, without any particular knowledge. It is, in my opinion, the most technologically interesting application in this list, with a good amount of controversy to boot.

It's a desktop app built on top of node-webkit (GitHub: rogerwang/node-webkit, License: MIT) that is able to stream and play a torrent movie practically in real time… and it's all JavaScript. Seriously cool stuff!

MediacenterJS

MediacenterJS (GitHub: jansmolders86/mediacenterjs, License: GPL) is a media center (like, for instance, XBMC) running completely from the comfort of your browser. The server application runs on Windows, Mac and Linux systems; the client runs in every modern browser.

KiwiIRC

KiwiIRC (GitHub: prawnsalad/KiwiIRC, License: AGPL) is a fully featured IRC client that can be extended to suit almost any need. Using the web application is extremely simple, even without any IRC knowledge, as all the common needs are built directly into the UI.

Slate

Slate (GitHub: slate/slate, License: MIT) is a modern, minimalistic IRC client, completely extensible through plugins and built with web technologies for OS X, Linux, and eventually Windows.

David

David (GitHub: alanshaw/david-www, License: MIT) is a web service that tells you when your project's npm dependencies are out of date.

Shields

Shields (GitHub: badges/shields, License: CC0) is a service that provides legible & concise status badges for third-party codebase services, like those that you see aplenty all over GitHub.

I Love Open Source

I Love Open Source (GitHub: codio/iloveopensource, License: MIT) is a way of encouraging users of Open Source code to express their gratitude through a simple acknowledgement page. Along the way, they are gently offered a chance to donate cash or just thanks.

browsenpm.org

browsenpm.org (GitHub: nodejitsu/browsenpm.org, License: MIT) allows you to browse packages, users, code, stats and more from the public npm registry, in style.

npmjs.org

npmjs.org (GitHub: npm/npm-www, License: BSD) is the source for npmjs.org, which you have probably seen many times before but might not have realized was open for anyone to see and contribute to.

Setting Up Rails 4 with MongoDB and Mongoid

If you’ve never checked out MongoDB before, I’d very much encourage you to. It’s a NoSQL document store that makes for a very interesting method of development compared to relational databases. We’ve been using it a lot lately for prototyping at Efeqdev and have thoroughly enjoyed it. Today we’re going to walk through installing it on Ubuntu and setting up Rails with MongoDB and Mongoid to replace ActiveRecord.

Installing MongoDB

First let’s install MongoDB from their official repository:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
sudo echo "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" | tee -a /etc/apt/sources.list.d/10gen.list
sudo apt-get -y update
sudo apt-get -y install mongodb-10gen

If you run mongo in your shell you should get an interactive database prompt:

MongoDB shell version: 2.4.6
connecting to: test
>

Rails 4 with Mongoid

Next you're going to need to generate your Rails application with rails new myapp --skip-active-record. The --skip-active-record flag is important because it leaves ActiveRecord out of the generated app. We then need to modify the Gemfile to remove sqlite3 and add Mongoid.

You’ll want to delete these lines from your Gemfile:

# Use sqlite3 as the database for Active Record
gem 'sqlite3'

And add these:

gem 'mongoid', '~> 4', github: 'mongoid/mongoid'
gem 'bson_ext'

The mongoid gem that supports Rails 4 hasn't been fully published yet, so we're using it directly from its GitHub repository. Now that these are added, you can run bundle to install the gems for your Rails app.

Now we need to generate the Mongoid configuration file, which is very much like the config/database.yml that you may be used to with ActiveRecord. We'll create the mongoid config file by running the following command:

rails g mongoid:config

This generates config/mongoid.yml which you can take a look at and make any configuration changes as necessary.
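From here on, the usual Rails generators should produce Mongoid documents instead of ActiveRecord models, since Mongoid registers itself as the app's ORM. A quick sanity check (the model name is just an example):

rails g model Post title:string body:text

The generated app/models/post.rb should contain include Mongoid::Document rather than inheriting from ActiveRecord::Base.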

Conclusion

And that's all there is to it! While it's a bit more work than you might expect, I imagine this will become a smoother process once the Rails 4 compatible gem has been released. Be sure to check out Mongoid's documentation for more information on how to use it.

IVR and its benefits

What Is IVR

IVR stands for Interactive Voice Response. It is the technology that enables interaction between a caller and a computer via the telephone. Callers can interact with IVR systems by pressing numbers on a telephone keypad or by speaking simple commands to answer the computer’s voice prompts. Common uses of IVR include account balance inquiries, finding store locations and simple caller identification and routing.

IVR Uses

IVR technology is most commonly found in the call centers of companies seeking to improve their customer service, reduce costs, and expand their call center operations. IVR can help:

  • handle high call volumes
  • service customers after normal business hours
  • improve customer service
  • lower call center costs
  • prioritize customers so urgent calls are handled quickly
  • automate an outbound call campaign

Key IVR Benefits

  • Intelligent call routing allows your customers to reach the right agent every time
  • Integrate your IVR system with internal applications to improve your call center operations
  • Expand your ability to get feedback from customers with surveys that populate your database from your IVR solution
  • Advanced call routing allows your team to be accessible via cell, land line, or even another IVR system
  • Reduce idle time in your call center with outbound campaign management

What to look for in an IVR solution

The best IVR solutions should do more than detect voice and keypad inputs. An IVR solution should be a critical application that can scale with your business needs. Below are just a few of the things you should look for in an IVR solution:

  • Simple management tools that make IVR changes in real-time
  • Scalability to handle a virtually unlimited number of calls
  • Robust, real-time monitoring and reporting
  • Integration with your back end applications and databases
  • Easy and quick deployment

Why Choose Hosted IVR

  • Traditional on-site systems require large upfront costs
  • Hosted IVR can be deployed in hours instead of months
  • There are no maintenance or support costs with our hosted IVR solution
  • No IVR hardware for your IT department to manage
  • Hosting provides instant scalability
  • Our IVR solution provides redundancy and failover

Switching to nginx and php-fpm, so LAMP becomes LEMP

About two and a half years ago I ordered a virtual server for private purposes at Server4you. That was quite a long time ago, and in the meantime you can get the same server for less money, or a better server for the same price ;-)
As migrating from an old product line to a new one isn't possible at Server4you, I decided to order a new server and migrate all my services to it, so that I can cancel the old machine afterwards.

And here we go:
LAMP (Linux, Apache, MySQL and PHP) is quite common, and I also used this setup (Apache with mod_php) for hosting some small websites on my old server. The planned migration made me think about alternatives, and I crawled through many blog posts about NGINX, a webserver which is gaining more and more popularity these days (see this survey for more details).
Therefore I decided to give it a try, and that's where LEMP comes from (NGINX is pronounced Engine-X).

Another new thing I wanted to try out is integrating PHP in a different way. A project called PHP-FPM (FastCGI Process Manager) adds some new features to the 'normal' FastCGI implementation. It's more or less a patch for the PHP source code, and it looks like it will go directly into the PHP core with PHP version 5.3.3.

After a bit of compilation, and after writing some scripts to automate this for future use, all the websites (mostly WordPress) now run on my LEMP setup. Getting SEO-friendly URLs with WordPress on nginx is a bit more tricky than before, because .htaccess files are not parsed by nginx, but the effort is really worth it.

Here is a related snippet from my nginx configuration file for this blog (slightly modified):

server {
    listen      80;
    server_name www.schmalenegger.com;

    access_log /logs/schmalenegger.com_access.log;
    error_log  /logs/schmalenegger.com_error.log;

    location / {
        root  /var/www/schmalenegger.com/;
        index index.php index.html;

        # Basic version of WordPress parameters, supporting nice permalinks.
        include /etc/nginx/wordpress_params.regular;
    }

    # pass the PHP scripts to the FastCGI server listening on 127.0.0.1:9999
    location ~ \.php$ {
        fastcgi_pass  127.0.0.1:9999;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
        include /etc/nginx/fastcgi_params.default;
        fastcgi_param SCRIPT_FILENAME /var/www/schmalenegger.com/$fastcgi_script_name;
    }
}

And this is the content from /etc/nginx/wordpress_params.regular:

# WordPress pretty URLs:
if (-f $request_filename) {
break;
}
if (-d $request_filename) {
break;
}
rewrite ^(.+)$ /index.php?q=$1 last;
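As a side note: on newer nginx versions the same pretty-permalink behaviour is usually written with try_files instead of if blocks. A sketch, not tested against this exact setup:

location / {
    root  /var/www/schmalenegger.com/;
    index index.php index.html;
    try_files $uri $uri/ /index.php?q=$uri&$args;
}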

To be honest, I didn't measure the performance difference between the two setups, but I can really feel that the webpages load much faster than before, and the memory consumption of nginx is absolutely awesome. So I'm quite happy with this new setup, even if there are still some pitfalls to deal with from time to time.