20 Command Line Tools to Monitor Linux Performance

It is a tough job for any system or network administrator to monitor and debug Linux performance problems every day. After five years as a Linux administrator, I have learned how hard it is to keep systems monitored, up and running. For that reason, we have compiled a list of the top 20 frequently used command line monitoring tools that should be useful to every Linux/Unix system administrator. These commands are available under all flavors of Linux and can help you find the actual cause of a performance problem. This list should be enough for you to pick the tool that suits your monitoring scenario.

1. Top – Linux Process Monitoring

The Linux top command is a performance monitoring program that is used frequently by system administrators and is available on most Linux/Unix-like operating systems. It displays all running and active real-time processes in an ordered list and updates the display regularly. It shows CPU usage, memory usage, swap memory, cache size, buffer size, process PID, user, command and much more, and it highlights processes with high memory and CPU utilization. That makes top very useful for spotting a problem and taking corrective action when required. Let’s see the top command in action.
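Beyond the interactive view, top has a batch mode that prints one snapshot and exits, which is handy for logging. A minimal sketch, assuming the standard procps top:

```shell
# One non-interactive snapshot (batch mode, single iteration), showing
# just the summary area; useful in scripts and cron-driven logging.
# Skips gracefully if top is not present.
command -v top >/dev/null 2>&1 || { echo "top not installed"; exit 0; }
top -b -n 1 | head -n 5
```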

2. VmStat – Virtual Memory Statistics

The Linux vmstat command displays statistics on virtual memory, kernel threads, disks, system processes, I/O blocks, interrupts, CPU activity and much more. On most distributions vmstat is provided by the procps (procps-ng) package and is installed by default; if it is missing, install procps with your package manager.
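A typical invocation, as a sketch (the interval and count here are arbitrary), prints a report every two seconds, five times:

```shell
# Print 5 virtual-memory reports at 2-second intervals.
# Note: the first report shows averages since boot, not current activity.
command -v vmstat >/dev/null 2>&1 || { echo "vmstat not installed"; exit 0; }
vmstat 2 5
```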

3. Lsof – List Open Files

The lsof command, available on many Linux/Unix-like systems, displays a list of all open files and the processes that opened them. Open files include disk files, network sockets, pipes and devices. One of the main reasons for using this command is when a disk cannot be unmounted and the system reports that files are in use; with lsof you can easily identify which files those are and which processes hold them.
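As an illustrative sketch (the /var/log path is just an example; substitute the mount point that refuses to unmount), you can list which processes hold files open under a directory:

```shell
# List processes with files open under /var/log; +D descends the
# directory tree. Output may be empty without sufficient permissions.
command -v lsof >/dev/null 2>&1 || { echo "lsof not installed"; exit 0; }
lsof +D /var/log 2>/dev/null | head -n 10
```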

4. Tcpdump – Network Packet Analyzer

Tcpdump is one of the most widely used command-line network packet analyzers (packet sniffers). It captures and filters TCP/IP packets received or transmitted on a specific interface, and it can save captured packets to a file for later analysis. tcpdump is available in almost all major Linux distributions.
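A typical capture session might look like the following sketch. Capturing requires root, and the interface and packet count are placeholders, so the script guards for both and uses timeout in case the link is idle:

```shell
# Capture up to 10 packets on any interface and save them to a file
# for later analysis, then replay the capture in human-readable form.
command -v tcpdump >/dev/null 2>&1 || { echo "tcpdump not installed"; exit 0; }
[ "$(id -u)" -eq 0 ] || { echo "packet capture requires root"; exit 0; }
timeout 10 tcpdump -i any -c 10 -w /tmp/capture.pcap 2>/dev/null || true
# Read the saved capture back (only if anything was captured):
[ -s /tmp/capture.pcap ] && tcpdump -r /tmp/capture.pcap || true
```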

5. Netstat – Network Statistics

Netstat is a command line tool for monitoring incoming and outgoing network packet statistics as well as interface statistics. It is a very useful tool for every system administrator to monitor network performance and troubleshoot network-related problems.
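For example, listing all listening TCP and UDP sockets. Since many modern distributions ship the ss tool from iproute2 instead of the legacy net-tools netstat, this sketch falls back to it:

```shell
# Show listening TCP (-t) and UDP (-u) sockets, numeric (-n), listening (-l).
if command -v netstat >/dev/null 2>&1; then
    netstat -tuln
elif command -v ss >/dev/null 2>&1; then
    ss -tuln       # modern replacement from the iproute2 package
else
    echo "neither netstat nor ss is installed"
fi
```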

6. Htop – Linux Process Monitoring

Htop is a more advanced, interactive, real-time Linux process monitoring tool. It is very similar to the Linux top command but offers richer features, such as a user-friendly interface for managing processes, shortcut keys, and vertical and horizontal views of the process list. Htop is a third-party tool and is not included in stock Linux installs; you need to install it with your package manager (for example, yum or apt). For more information on installation, read our article below.

7. Iotop – Monitor Linux Disk I/O

Iotop is also quite similar to the top command and the htop program, but it adds accounting to monitor and display real-time disk I/O per process. It is very useful for finding the exact processes responsible for heavy disk reads and writes.

8. Iostat – Input/Output Statistics

Iostat is a simple tool that collects and shows system input/output storage device statistics. It is often used to trace storage device performance issues on local disks as well as remote disks such as NFS mounts.

9. IPTraf – Real Time IP LAN Monitoring

IPTraf is an open source console-based real-time network (IP LAN) monitoring utility for Linux. It collects a variety of information about the IP traffic passing over the network, including TCP flag information, ICMP details, TCP/UDP traffic breakdowns, and TCP connection packet and byte counts. It also gathers general and detailed interface statistics for TCP, UDP, IP, ICMP and non-IP traffic, IP checksum errors, interface activity and so on.

10. Psacct or Acct – Monitor User Activity

The psacct and acct tools are very useful for monitoring each user’s activity on the system. Both run as background daemons and keep a close watch on the overall activity of each user, as well as on the resources being consumed.

These tools let system administrators track each user’s activity: what they are doing, which commands they issued, how many resources they used, how long they have been active on the system and so on.

11. Monit – Linux Process and Services Monitoring

Monit is a free, open source, web-based process supervision utility that automatically monitors and manages system processes, programs, files, directories, permissions, checksums and filesystems.

It monitors services such as Apache, MySQL, mail, FTP, ProFTPD, Nginx, SSH and so on. The system status can be viewed from the command line or through its own web interface.

12. NetHogs – Monitor Per Process Network Bandwidth

NetHogs is a nice small open source program (similar to the Linux top command) that keeps tabs on each process’s network activity on your system, tracking in real time the network traffic bandwidth used by each program or application.

13. iftop – Network Bandwidth Monitoring

iftop is another terminal-based free open source monitoring utility that displays a frequently updated list of network bandwidth utilization by source and destination host passing through the network interface on your system. iftop does for network usage what top does for CPU usage: it is a ‘top‘-family tool that monitors a selected interface and displays the current bandwidth used between pairs of hosts.

14. Monitorix – System and Network Monitoring

Monitorix is a free lightweight utility designed to monitor as many system and network resources as possible on Linux/Unix servers. It has a built-in HTTP web server that regularly collects system and network information and displays it in graphs. It monitors system load average and usage, memory allocation, disk drive health, system services, network ports, mail statistics (Sendmail, Postfix, Dovecot, etc.), MySQL statistics and much more. It is designed to monitor overall system performance and helps in detecting failures, bottlenecks, abnormal activity and so on.

15. Arpwatch – Ethernet Activity Monitor

Arpwatch is a program designed to monitor Address Resolution Protocol traffic (MAC and IP address pairings) on an Ethernet network. It continuously watches Ethernet traffic and produces a log of IP and MAC address pair changes along with timestamps. It can also send email alerts to the administrator when a pairing is added or changed. It is very useful for detecting ARP spoofing on a network.

16. Suricata – Network Security Monitoring

Suricata is a high-performance open source network security, intrusion detection and intrusion prevention monitoring system for Linux, FreeBSD and Windows. It was designed and is owned by the OISF (Open Information Security Foundation), a non-profit foundation.

17. VnStat PHP – Monitoring Network Bandwidth

VnStat PHP is a web-based frontend for the popular networking tool “vnstat“. VnStat PHP displays network traffic usage in a nice graphical format, showing total IN and OUT traffic in hourly, daily and monthly views plus a full summary report.

18. Nagios – Network/Server Monitoring

Nagios is a leading, powerful open source monitoring system that enables network and system administrators to identify and resolve server-related problems before they affect major business processes. With Nagios, administrators can monitor remote Linux and Windows hosts, switches, routers and printers from a single window. It shows critical warnings and indicates when something goes wrong in your network or on a server, which helps you begin remediation before problems escalate.

19. Nmon: Monitor Linux Performance

Nmon (short for Nigel’s performance Monitor) is a tool used to monitor Linux resources such as CPU, memory, disk usage, network, top processes, NFS, kernel and much more. The tool comes in two modes: online mode and capture mode.

Online mode is used for real-time monitoring; capture mode stores the output in CSV format for later processing.

20. Collectl: All-in-One Performance Monitoring Tool

Collectl is yet another powerful, feature-rich command line utility that gathers information about Linux system resources such as CPU usage, memory, network, inodes, processes, NFS, TCP, sockets and much more.

Examples of Awk Command in Unix

Awk is one of the most powerful tools in Unix for processing the rows and columns in a file. Awk has built-in string functions and associative arrays, and it supports most of the operators, conditional blocks and loops available in the C language.
One nice feature is that you can convert Awk scripts into Perl scripts using the a2p utility.

The basic syntax of AWK:

awk 'BEGIN {start_action} {action} END {stop_action}' filename

Here the actions in the begin block are performed before processing the file and the actions in the end block are performed after processing the file. The rest of the actions are performed while processing the file.
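The three blocks can be seen in action on a small input stream piped straight into awk:

```shell
# BEGIN runs once before any input, the middle block runs once per line,
# and END runs once after the last line has been processed.
printf 'alpha\nbeta\n' | awk 'BEGIN {print "start"} {print "line:", $0} END {print "done"}'
# -> start
#    line: alpha
#    line: beta
#    done
```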

Create a file input_file with the following data. This file can be easily created using the output of ls -l.

-rw-r--r-- 1 center center  0 Dec  8 21:39 p1
-rw-r--r-- 1 center center 17 Dec  8 21:15 t1
-rw-r--r-- 1 center center 26 Dec  8 21:38 t2
-rw-r--r-- 1 center center 25 Dec  8 21:38 t3
-rw-r--r-- 1 center center 43 Dec  8 21:39 t4
-rw-r--r-- 1 center center 48 Dec  8 21:39 t5

From the data, you can observe that this file has rows and columns. The rows are separated by a newline character and the columns are separated by space characters. We will use this file as the input for the examples discussed here.

1. awk ‘{print $1}’ input_file
Here $1, $2, $3… represent the first, second, third columns… of a row, respectively. This awk command prints the first column of each row, as shown below.


To print the 4th and 5th columns in a file, use awk ‘{print $4,$5}’ input_file

Here the Begin and End blocks are not used, so the print command executes for each row read from the file. The next example shows how to use the Begin and End blocks.

2. awk ‘BEGIN {sum=0} {sum=sum+$5} END {print sum}’ input_file

This prints the sum of the values in the 5th column. In the Begin block the variable sum is initialized to 0. In the main block the value of the 5th column is added to sum; this addition repeats for every row processed. When all rows have been processed, sum holds the total of the values in the 5th column, which is printed in the End block.

3. In this example we will see how to execute an awk script written in a file. Create a file sum_column and paste the script below into it:

#!/usr/bin/awk -f
BEGIN {sum=0}
{sum = sum + $5}
END {print sum}

Now execute the script using the awk command as

awk -f sum_column input_file

This runs the script in the sum_column file and displays the sum of the 5th column of input_file.

4. awk ‘{ if($9 == “t4”) print $0;}’ input_file
This awk command checks for the string “t4” in the 9th column and if it finds a match then it will print the entire line. The output of this awk command is

-rw-r--r-- 1 center center 43 Dec  8 21:39 t4

5. awk ‘BEGIN { for(i=1;i<=5;i++) print “square of”, i, “is”,i*i; }’
This will print the squares of the numbers from 1 to 5. The output of the command is

square of 1 is 1
square of 2 is 4
square of 3 is 9
square of 4 is 16
square of 5 is 25

Notice that the syntax of “if” and “for” are similar to the C language.

Awk Built in Variables:

You have already seen $0, $1, $2… which prints the entire line, first column, second column… respectively. Now we will see other built in variables with examples.

FS – Input field separator variable:

So far, we have seen fields separated by a space character. By default Awk assumes that fields in a file are separated by space characters. If the fields in the file are separated by any other character, we can use the FS variable to tell awk the delimiter.

6. awk ‘BEGIN {FS=”:”} {print $2}’ input_file
awk -F: ‘{print $2}’ input_file

Both commands print the result as

39 p1
15 t1
38 t2
38 t3
39 t4
39 t5

OFS – Output field separator variable:

By default, whenever we print fields using the print statement, the fields are displayed with a space character as the delimiter. For example

7. awk ‘{print $4,$5}’ input_file

The output of this command will be

center 0
center 17
center 26
center 25
center 43
center 48

We can change this default behavior using the OFS variable as

awk ‘BEGIN {OFS=”:”} {print $4,$5}’ input_file


Note: print $4,$5 and print $4$5 do not work the same way. The first displays the output with a space as the delimiter; the second displays the output without any delimiter.
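A quick demonstration of the difference, including OFS, on an inline sample line:

```shell
# The comma inserts OFS (a space unless changed); omitting it concatenates.
echo "a b c d e" | awk '{print $4,$5}'                  # -> d e
echo "a b c d e" | awk '{print $4$5}'                   # -> de
echo "a b c d e" | awk 'BEGIN {OFS=":"} {print $4,$5}'  # -> d:e
```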

NF – Number of fields variable:

The NF variable gives the number of fields in the current line.

8. awk ‘{print NF}’ input_file
This will display the number of columns in each row.

NR – Number of records variable:
The NR variable gives the current line number while processing, and in an END block the total count of lines in a file.

9. awk ‘{print NR}’ input_file
This will display the line numbers from 1.

10. awk ‘END {print NR}’ input_file
This will display the total number of lines in the file.
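NR and NF can be combined in one command to summarize each line of an input stream:

```shell
# NR is the current record (line) number; NF is the field count on that line.
printf 'one two three\nfour five\n' | awk '{print "line", NR, "has", NF, "fields"}'
# -> line 1 has 3 fields
#    line 2 has 2 fields
```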

String functions in Awk:
Some of the string functions in awk are: length, index, substr, split, sub, gsub, match, tolower, toupper and sprintf.
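For instance, a few of these functions applied to a sample word:

```shell
# A handful of awk's built-in string functions on one sample word.
echo "monitoring" | awk '{
    print length($0)          # number of characters: 10
    print toupper($0)         # MONITORING
    print substr($0, 1, 7)    # characters 1 through 7: monitor
    print index($0, "tor")    # position of the first "tor": 5
}'
```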


Advanced Examples:

1. Filtering lines using Awk split function

The awk split function splits a string into an array using the delimiter.

The syntax of split function is
split(string, array, delimiter)

Now we will see how to filter the lines using the split function with an example.

The input “file.txt” contains the data in the following format

1 U,N,UNIX,000
2 N,P,SHELL,111
3 I,M,UNIX,222
4 X,Y,BASH,333
5 P,R,SCRIPT,444

Required output: we have to print only those lines whose 2nd field, when split on the comma delimiter, contains the string “UNIX” as its 3rd sub-field.
The output is:

1 U,N,UNIX,000
3 I,M,UNIX,222

The awk command for getting the output is:

awk '{
        split($2, arr, ",");
        if(arr[3] == "UNIX")
        print $0
}' file.txt
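A self-contained variant of this filter, with the split call spelled out, can be run directly by piping the sample lines in instead of creating file.txt:

```shell
# split($2, arr, ",") breaks the comma-separated 2nd field into arr[1..4];
# arr[3] is then compared against "UNIX".
printf '1 U,N,UNIX,000\n2 N,P,SHELL,111\n3 I,M,UNIX,222\n' |
awk '{ split($2, arr, ","); if (arr[3] == "UNIX") print $0 }'
# -> 1 U,N,UNIX,000
#    3 I,M,UNIX,222
```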

10 Sed (Stream Editor) Command Examples

Sed is a stream editor for UNIX-like operating systems, used for filtering and transforming text. Sed derives from the basic line editor ‘ed’, an editor you will find on every Unix system but one that is rarely used today because of its difficult user interface.

How does sed work?

As a stream editor, sed does its work on a stream of data it receives from stdin, such as through a pipe, and writes its results as a stream of data on stdout (often the terminal screen). We can redirect this output to a file. Sed doesn’t typically modify the original input file; instead we send the contents of our file through a pipe to be processed by sed. This means we don’t need the data we want to change to be in a file on disk at all, which is particularly useful when the data comes from another process rather than a file.

Syntax of sed :

# sed [options] commands [input-file]

In this post we will discuss some practical examples of the sed command. We will be performing a lot of sed operations on the file ‘passwd’, so first copy the file ‘/etc/passwd’ to the /tmp folder.

root@nextstep4it:~# cp /etc/passwd /tmp/

Example:1 Deleting all the Lines with sed

root@nextstep4it:~# cat /tmp/passwd | sed 'd'

The above command sent the entire contents of the file /tmp/passwd through a pipe to sed. Keep in mind that /tmp/passwd itself was not altered at all: sed only read the contents of the file; we did not tell it to write to the file, only to read from it. The results of the editing commands on each line are printed to standard output. In this case, nothing was printed to the screen because we used the ‘d’ command to delete every line.

Example:2 Invoking sed with ‘-e’ option ( add the script to the commands to be executed )

Instead of invoking sed by sending a file to it through a pipe, we can instruct sed to read the data from a file, as shown in the example below.

root@nextstep4it:~# sed -e 'd' /tmp/passwd

Invoking sed in this manner explicitly defines the editing command as a sed script to be executed on the input file /tmp/passwd. The script here is a one-character editing command, but it could be much larger.

We can also redirect the standard output from the sed command into a file.

root@nextstep4it:~# sed -e 'd' /tmp/passwd > /tmp/new-passwd

Example:3 Printing lines using sed ( -n flag & p command)

The ‘-n’ flag disables automatic printing, so sed prints lines only when it is explicitly told to do so with the ‘p’ command.

root@nextstep4it:~# cat /tmp/passwd | sed 'p' | head -5

As we see above, if we specify the ‘p’ editing command without the ‘-n’ flag, sed prints each line twice. To display each line once, use the ‘-n’ flag together with the ‘p’ command, as shown below:

root@nextstep4it:~# cat /tmp/passwd | sed -n 'p' | head -5

Example:4 Editing the Source file by using ‘-i’ option

By default, sed does not edit the original (source) file, for our safety; with the ‘-i’ option the source file can be edited in place.

root@nextstep4it:~# sed -i '1d' /tmp/passwd

Above command will delete the first line of source file /tmp/passwd.

Example:5 Take backup of source file prior to editing

The ‘-i’ option is risky because it edits the source file directly, so it is better to take a backup of the source file before editing. An example is shown below.

root@nextstep4it:~# sed -i.bak '1d' /tmp/passwd

root@nextstep4it:~# ls -l /tmp/passwd*
-rw-r--r-- 1 root root 2229 Nov 24 22:36 /tmp/passwd
-rw-r--r-- 1 root root 2261 Nov 24 22:35 /tmp/passwd.bak

In the above sed command, the first line of /tmp/passwd is deleted, but before editing, sed saves a backup of /tmp/passwd as /tmp/passwd.bak.
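If you want to experiment without touching /tmp/passwd at all, the following sketch applies -i.bak to a throwaway file created with mktemp:

```shell
# Safe in-place editing practice: work on a temporary file and inspect
# both the edited file and the .bak backup sed leaves behind.
tmp=$(mktemp)
printf 'first\nsecond\nthird\n' > "$tmp"
sed -i.bak '1d' "$tmp"       # delete line 1 in place, keep a backup
cat "$tmp"                   # -> second, third
cat "$tmp.bak"               # -> first, second, third
rm -f "$tmp" "$tmp.bak"
```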

Example:6 Deleting lines by specifying a range

Deleting the first 5 lines of the /tmp/passwd file:

root@nextstep4it:~# cat /tmp/passwd | sed '1,5d'

Example:7 Delete the empty lines of a file

root@nextstep4it:~# cat /tmp/detail.txt 



In the detail.txt file we have two empty lines; use the command below to delete them.

root@nextstep4it:~# sed '/^$/d' /tmp/detail.txt
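The same pattern works on inline data, which makes it easy to try: ^ matches the start of a line and $ the end, so /^$/ matches lines with nothing in between.

```shell
# Delete empty lines from a stream; non-empty lines pass through untouched.
printf 'one\n\ntwo\n\nthree\n' | sed '/^$/d'
# -> one
#    two
#    three
```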

Example:8 Deleting lines containing a string

Suppose we want to delete the lines in the file /tmp/passwd which contain the word ‘games’.

root@nextstep4it:~# sed '/games/d' /tmp/passwd

Example:9 Search and Replace strings in the file.

Suppose you want to replace ‘root’ with ‘Admin’; an example is shown below:

root@nextstep4it:~# sed 's/root/Admin/' /tmp/passwd

It is very important to note that sed substitutes only the first occurrence on each line. If the string ‘root’ occurs more than once on a line, only the first match is replaced. To replace every occurrence in the file instead of just the first on each line, make the substitution global by adding the letter ‘g’ to the end of the command, as shown below:

root@nextstep4it:~# sed 's/root/Admin/g' /tmp/passwd
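Both forms side by side, on a sample line containing ‘root’ three times:

```shell
# First-match vs. global substitution on the same input line.
echo "root owns /root and root shell" | sed 's/root/Admin/'    # first match only
# -> Admin owns /root and root shell
echo "root owns /root and root shell" | sed 's/root/Admin/g'   # every match
# -> Admin owns /Admin and Admin shell
```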

Example:10 Multiple substitution using -e option

Suppose we want to replace the string ‘root’ with ‘Admin’ and the string ‘bash’ with ‘sh’. An example is shown below:

root@nextstep4it:~# cat /tmp/passwd | sed -e 's/root/Admin/g' -e 's/bash/sh/g'

The Database Timeline

1961 Development begins on the Integrated Data Store, or IDS, at General Electric. IDS is generally considered the first “proper” database. It was doing NoSQL and Big Data decades before today’s NoSQL databases.

1967 IBM develops Information Control System and Data Language/Interface (ICS/DL/I), a hierarchical database for the Apollo program. ICS later became Information Management System (IMS), which was included with IBM’s System/360 mainframes.

1970 IBM researcher Edgar Codd publishes his paper A Relational Model of Data for Large Shared Data Banks, establishing the mathematics used by relational databases.

1973 David R. Woolley develops PLATO Notes, which would later influence the creation of Lotus Notes.

1974 Development begins at IBM on System R, an implementation of Codd’s relational model and the first use of the structured query language (SQL). This later evolves into the commercial product IBM DB2. Inspired by Codd’s research, University of California, Berkeley researchers Michael Stonebraker and Eugene Wong begin development on INGRES, which became the basis for PostgreSQL, Sybase, and many other relational databases.

1979 The first publicly available version of Oracle is released.

1984 Ray Ozzie founds Iris Associates to create a PLATO-Notes-inspired groupware system.

1988 Lotus Agenda, powered by a document database, is released.

1989 Lotus Notes is released.

1990 Objectivity, Inc. releases its flagship object database.

1991 The key-value store Berkeley DB is developed.

2003 LiveJournal open sources the original version of Memcached.

2005 Damien Katz open sources CouchDB.

2006 Google publishes its BigTable paper.

2007 Amazon publishes its Dynamo paper. 10gen starts coding MongoDB. Powerset open sources its BigTable clone, HBase. Neo4j is released.

2008 Facebook open sources Cassandra.

2009 ReadWriteWeb asks: “Is the relational database doomed?” Redis is released. The first NoSQL meetup is held in San Francisco.

2010 Some of the leaders of the Memcached project, along with Zynga, open source Membase.

A Guide to International Payment Preferences

Global e-commerce promises huge opportunities for merchants but, as is often the case with buried treasure, there are many challenges to overcome and no clear map to follow. One of the most overlooked but important of these obstacles involves online payment methods.

The latest research indicates that 68 percent of consumers have abandoned an online retail site due to its payment process. Almost half of them chose not to complete a transaction because they weren’t offered their preferred payment option. This makes it clear how crucial it is that merchants provide prospective customers in each locality with their payment method of choice, and that they stay on top of new payment options as they become available.

Offering a variety of credit card schemes is not always enough, though; in some countries, credit cards aren’t the payment method of choice. Here, then, is a short guide to payment preferences around the world.

United States

In the U.S., buyers predominantly use credit cards, although eWallets are also very popular. One 2014 survey found that 79 percent of respondents had made payments using PayPal, and 40 percent through Google Wallet.


Europe

Europe is a diverse payment market. Credit card sales are becoming increasingly popular, but many customers still prefer real-time banking options through which they’re redirected to their online bank accounts to submit payment. Some payment methods are pan-European, but most are localized per country.

Localized European payment methods

Almost half of all online transactions in the U.K. are paid for by credit card. Debit cards account for some 35 percent of e-commerce payments. PayPal is the country’s third most popular online payment method. Although alternative payment methods are not yet widespread in the U.K., a new digital payment ecosystem called Zapp is expected to have a significant impact on online payments when it launches later this year. Zapp puts near real-time payments on buyers’ mobile phones through their existing mobile banking application, enabling secure payments between consumers and merchants.

In France, Carte Bleue debit cards account for 85 percent of all e-commerce transactions. Carte Bleue recently introduced a voice authorization security mechanism to ensure greater e-commerce cybersecurity. Other payment methods used in France include credit cards and PayPal.

In the Netherlands, iDEAL is a popular payment method in online stores. When checking out, the customer authorizes the pre-filled payment instruction. Once payment is authorized, the amount due is debited from the customer’s account and transferred to the merchant’s bank account.

In Finland and Sweden, real-time bank transactions account for up to 35 percent of the market share. Finland has 10 bank brands offering different real-time banking solutions, and Sweden has four.

Klarna is a major payment method offered by more than 15,000 e-stores in Sweden, Norway, Finland, Denmark, Germany, the Netherlands and Austria. About 20 percent of all e-commerce sales in Sweden go through Klarna.

Pan-European payment methods

SEPA (Single Euro Payments Area) is a European Union payment-integration initiative currently in the making. Its aim is to simplify bank transfers denominated in euros. A total of 33 European countries take part in SEPA: the 28 EU member states, the four countries of the European Free Trade Association and Monaco. SEPA, which doesn’t distinguish between national and cross-border transactions, will handle credit transfers and direct debits. Direct debit will enable creditors to collect funds from a debtor’s account, provided a signed mandate has been granted by the payer to the biller.

The SOFORT payment platform offers currency conversion and is used in 10 European countries (Germany, Austria, Switzerland, the UK, Italy, Spain, Poland, Hungary, Slovakia and the Czech Republic). This method doesn’t require a second account (wallet) or registration. A multi-level authentication process and one-time validity ensure secure transactions.


Japan

For the Japanese, payment method mistrust is a big issue when it comes to online shopping. Many customers prefer to pay for online goods with cash at convenience stores called Konbinis. After credit cards, Konbinis constitute 25 percent of the market. So if you’re selling in Japan, Konbini is a vital payment method.


China

Alipay dominates online payments in China, claiming 60 percent of market share. This platform recently launched a mobile wallet application, offering online-to-offline payments. PayEase is another popular payment service provider, enabling comprehensive payment services like mobile payments via SMS, internet banking, call centers and POS terminals.

Cash on delivery is also quite popular in China, and UnionPay credit cards play a central payment role for merchants entering the Chinese market.


Russia

The Russian Federation’s most widespread payment method is Qiwi, which offers self-service kiosks that are active around the clock. They are located in malls and on the streets, similar to ATMs. Payments can also be made on WIN PC terminals, which are widely used in mobile dealer shops.

Yandex is another widely used payment service that offers online stores a universal payment solution for accepting online payments. The platform enables merchants to accept the most popular payment methods in Russia and other CIS countries, including bank cards, credit cards and the Yandex.Money and WebMoney e-wallets. Currently, more than 65,000 online stores accept Yandex.Money and 22 percent of Russians regularly use it to make payments.


India

Internet bank payments are the preferred choice in India, but prepaid cards and cash payments are also widely used. Mobile payments are rapidly gaining popularity in this region.


Asia-Pacific

Mobile payment systems are on the increase in the Asia-Pacific, with more than two-thirds of those acquainted with the methods using digital wallets and SMS payments last year.

Latin America

The greatest cause of shopping cart abandonment in Mexico, Peru, Argentina and Colombia is the fear of security risks. As such, local and regional online payment sites are still the most trusted methods. DineroMail and MercadoPago specialize in the Latin American market.

As a rule, Brazilians have fairly low credit card limits, so almost half of online purchases are made via installment plans. Boleto Bancário is also popular; this payment process is comparable to wire transfer and cash payment methods. After receiving a pre-filled Boleto Bancário bank slip, the customer can pay for the online purchase using cash at any bank branch or via authorized processors like supermarkets or regular banking points.


Africa

In Africa, the mobile payment market has proven to be more popular than banking services. In fact, mobile payment users already outnumber bank account holders. M-Pesa is a widespread mobile-phone-based money transfer and micro-financing service that enables users to deposit, withdraw and transfer money using a mobile device. This system enables users to deposit money into an account stored on the user’s cell phone, send payments using PIN-secured SMS text messages and redeem deposits for cash.


So how can a retailer keep track of these and hundreds of other localized alternative payment methods? Many merchants have adopted data-driven, flexible payment platforms that enable them to offer optimal payment methods in every location. The result is a pleasurable shopping experience that buyers will be eager to repeat.

Docker and DevOps: Why it Matters

Unless you have been living under a rock for the last year, you have probably heard about Docker. Docker describes itself as an open platform for distributed applications for developers and sysadmins. That sounds great, but why does it matter?

Wait, virtualization isn’t new!?

Virtualization technology has existed for more than a decade, and in the early days it revolutionized how the world managed server environments. The virtualization layer later became the basis for the modern cloud, with virtual servers being created and scaled on demand. Traditionally, virtualization software was expensive and came with a lot of overhead. Linux cgroups have existed for a while, but more recently Linux containers came along and added namespace support to provide isolated environments for applications. Vagrant + LXC + Chef/Puppet/Ansible have been a powerful combination for a while, so what does Docker bring to the table?

Virtualization isn’t new and neither are containers, so let’s discuss what makes Docker special.

The cloud made it easy to host complex and distributed applications, and therein lies the problem. Ten years ago applications looked straightforward and had few complex dependencies.


The reality is that application complexity has evolved significantly in the last five years, and even simple services are now extremely complex.


It has become a best practice to build large distributed applications using independent microservices. The model has changed from monolithic, to distributed, to containerized microservices. Every microservice has its own dependencies and unique deployment scenarios, which makes managing operations even more difficult. The default is no longer a single stack deployed to a single server, but loosely coupled components deployed across many servers.

Docker makes it easy to deploy any application on any platform.

The need for Docker

It is not just that applications are more complex; more importantly, the development model and culture have evolved. When I started engineering, developers had dedicated servers with their own builds if they were lucky. More often than not your team shared a development server, as it was too expensive and cumbersome for every developer to have their own environment.

Times have changed significantly. The cultural norm nowadays is for every developer to be able to run complex applications off a virtual machine on their laptop (or a dev server in the cloud), and with the cheap on-demand resources provided by cloud environments, it is common to have many application environments: dev, QA, and production.

Docker containers are isolated but share the same kernel and core operating system files, which makes them lightweight and extremely fast. Using Docker to manage containers makes it easier to build distributed systems by allowing applications to run on a single machine or across many virtual machines with ease.

Docker is both a great software project (Docker engine) and a vibrant community (DockerHub). Docker combines a portable, lightweight application runtime and packaging tool and a cloud service for sharing applications and automating workflows.

Docker makes it easy for developers and operations to collaborate

DevOps professionals appreciate Docker as it makes it extremely easy to manage the deployment of complex distributed applications. Docker also manages to unify the DevOps community whether you are a Chef fan, Puppet enthusiast, or Ansible aficionado. Docker is also supported by the major cloud platforms including Amazon Web Services and Microsoft Azure which means it’s easy to deploy to any platform. Ultimately, Docker provides flexibility and portability so applications can run on-premise on bare metal or in a public or private cloud.

DockerHub provides official language stacks and repos


The Docker community is built on a mature open-source mentality, with the corporate backing required to offer a polished experience. A vibrant and growing ecosystem is brought together on DockerHub, which hosts official language stacks for the common app platforms. These officially supported, high-quality Docker repos mean wider and better support for the community.

Since Docker is so well supported, many companies offer support for Docker as a platform, with official repos on DockerHub.


What is Docker?

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.
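As a minimal illustration of the "build, ship, run" workflow, a simple service can be described in a Dockerfile. The base image, file names, and port below are our own assumptions, not details from this article:

```dockerfile
# Hypothetical Python web service; base image, file names, and port
# are illustrative assumptions.
FROM python:2.7

# Copy the application source into the image and install its
# dependencies at build time, so the image is self-contained.
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt

# Document the port the service listens on and the default command.
EXPOSE 8000
CMD ["python", "app.py"]
```

With `docker build -t myapp .` and `docker run -p 8000:8000 myapp`, the same image runs unchanged on a laptop, a QA server, or a production VM.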

Why do developers like it?

With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere – colleagues’ OS X and Windows laptops, QA servers running Ubuntu in the cloud, and production data center VMs running Red Hat.

Developers can get going quickly by starting with one of the 13,000+ apps available on Docker Hub. Docker manages and tracks changes and dependencies, making it easier for sysadmins to understand how the apps that developers build work. And with Docker Hub, developers can automate their build pipeline and share artifacts with collaborators through public or private repositories.

Docker helps developers build and ship higher-quality applications, faster.

Why do sysadmins like it?

Sysadmins use Docker to provide standardized environments for their development, QA, and production teams, reducing “works on my machine” finger-pointing. By “Dockerizing” the app platform and its dependencies, sysadmins abstract away differences in OS distributions and underlying infrastructure.

In addition, standardizing on the Docker Engine as the unit of deployment gives sysadmins flexibility in where workloads run. Whether on-premise bare metal or data center VMs or public clouds, workload deployment is less constrained by infrastructure technology and is instead driven by business priorities and policies. Furthermore, the Docker Engine’s lightweight runtime enables rapid scale-up and scale-down in response to changes in demand.

Docker helps sysadmins deploy and run any app on any infrastructure, quickly and reliably.

How is this different from Virtual Machines?

Virtual Machines

Each virtualized application includes not only the application – which may be only 10s of MB – and the necessary binaries and libraries, but also an entire guest operating system – which may weigh 10s of GB.


Containers

The Docker Engine container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.

10 Most Common App Security Mistakes

Why Mobile App Security?

App security for Android and iPhone is generally a low-priority area for mobile developers, mostly due to time pressure. It does not usually get the attention it deserves in project plans. Moreover, when a project team has no security owner, no one claims responsibility for it. That’s why mobile app security is often a matter left entirely to the developer’s initiative.

Security and usability are inversely related: highly secure solutions require additional processes and flows. However, most business units working directly with consumers don’t consider app security the first priority.

In practice, nobody raises a security concern unless something really goes wrong, i.e. the app gets “hacked”. Most application developers do not run specific Android and iPhone security tests (application security tests).

Always keep in mind the principle that no app is 100% safe!


What Should You Do for App Security?

Our purpose is to make your app more secure than others through quick and simple measures, hence discouraging hackers from messing with your mobile application. Make your app ready for mobile security. Here are the 10 most common app security mistakes:

1. Data Store Approach: First of all, avoid storing sensitive data on the device during runtime as much as possible. Data can be processed when needed and should be deleted immediately when no longer required. If data must be stored on the mobile device, it should be encrypted before being written to the documents folder. Passwords should be stored in the Keychain on iOS and the KeyStore on Android. This is also important for app store security checks.

2. Missing front-end validation: Missing data-entry validation causes both security and formatting issues. Examples include letting alphanumeric values into numeric fields, missing masking on formatted fields, and not checking for high-risk characters such as <>`”()|#. Such missing validations can cause security breaches by allowing remote code execution or unexpected responses.
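A minimal sketch of such checks in Python (function and pattern names are ours; adapt the character list to your own risk profile):

```python
import re

# High-risk characters often abused in injection attacks; this list
# mirrors the examples above and is illustrative, not exhaustive.
HIGH_RISK = re.compile(r"[<>`\"()|#]")

def is_safe_numeric_field(value: str) -> bool:
    """Accept only digits in a field declared numeric."""
    return value.isdigit()

def contains_high_risk_chars(value: str) -> bool:
    """Flag input containing characters such as < > ` \" ( ) | #."""
    return HIGH_RISK.search(value) is not None
```

The same rules should be enforced again on the server side, since front-end checks alone can be bypassed.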

3. Server-Side Controls: Application development is a client-side operation; the server side is where you should store and manage the data. Server-side checks should be applied regardless of channel (mobile, web, etc.) for data security and formatting. Note that we do not mean iCloud Keychain or a similar feature; this is about app-specific backend security. There are also security concerns around Apple iCloud, but making that more secure is Apple’s job!

4. SSL: HTTPS must be used for secure transmission of sensitive information. If possible, a custom certificate should be used instead of relying only on the built-in device certificate store. A certificate unique to the app and the server should be embedded inside the app (certificate pinning).
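A rough sketch of one pinning approach in Python (function names and the fingerprint scheme are our illustration, not a prescribed implementation): the app embeds the SHA-256 fingerprint of the server's certificate and refuses connections whose leaf certificate does not match.

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connection_is_pinned(host: str, port: int, pinned_fingerprint: str) -> bool:
    """Open a TLS connection and verify the server certificate matches the pin."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return cert_fingerprint(der_cert) == pinned_fingerprint
```

Remember that a pinned certificate must be rotated in the app before it expires on the server, or users will be locked out.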

5. Obfuscation: It is very important, especially for Android apps, to go through obfuscation. If script files are used in parts of the app, those files should be obfuscated as well.
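As a concrete illustration (assuming a Gradle-based Android build; the file names are the Android defaults), shrinking and obfuscation with ProGuard can be enabled in the release build type:

```groovy
android {
    buildTypes {
        release {
            // Enables code shrinking and obfuscation for release builds only,
            // so debug builds stay readable during development.
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                          'proguard-rules.pro'
        }
    }
}
```

Classes reached via reflection need explicit `-keep` rules in `proguard-rules.pro`, or they will break after obfuscation.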

6. Comment Lines: Explanatory comment lines may be viewed by others if the application is decompiled. This is a simple but common mistake for both Android and iOS apps. It does not mean you shouldn’t use comments during development; just don’t forget to strip them from the production version of your app.

7. Excessive Permissions: When editing permission preferences for Android apps, only the permissions that are absolutely needed should be requested. Permissions with access to personal information, such as “access to contacts”, should be avoided as much as possible. If anything goes wrong, there is then less chance of a data breach.

8. Encryption: The key used for encryption should itself be encrypted and kept in secure storage. The installation file should also be obfuscated. Another dangerous practice that should be avoided is downloading the encryption key from a server during runtime.

9. Rooted/Jailbroken Devices: It is not possible to store data in a totally secure way on rooted devices, as root permissions provide unlimited access to the device filesystem and memory. However, developers can check whether the device is rooted. This risk should be noted and evaluated against the project scope for all flows and processes.
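A minimal root-detection sketch (the paths below are common conventions on rooted Android devices, not an exhaustive or authoritative list; determined attackers can defeat such checks):

```python
import os

# Artifacts that typically exist only on rooted Android devices.
COMMON_SU_PATHS = [
    "/system/bin/su",
    "/system/xbin/su",
    "/sbin/su",
    "/system/app/Superuser.apk",
]

def looks_rooted(exists=os.path.exists, su_paths=COMMON_SU_PATHS) -> bool:
    """Return True if any well-known root artifact is present.

    The filesystem check is injectable so the logic can be tested
    without a real device.
    """
    return any(exists(path) for path in su_paths)
```

Treat the result as one risk signal among several, not as a guarantee either way.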

10. Application Tampering Protection: The application binary files (.apk or .ipa) can be altered and published in different markets by hackers. They may inject malicious code in your app or they may crack mobile apps to bypass license and payment restrictions. Nowadays it’s even possible to crack in-app purchases and license checks just by emulating app store responses. Therefore, application integrity and in-app purchases should be checked with a third-party trusted server to prevent such cases. For this purpose, there are some good solutions available in the market. Again, this is very critical for Google Play and iTunes App Store security.


If you are developing your app with a Platform-Based Approach (Obj-C, Swift, Java) the items above should be managed entirely by yourself or your team. However, if you’re using a cross-platform native framework approach, most of the items above are covered by the mobile development framework for mobile app security.

We especially suggest using cross-platform native frameworks for mobile security. You may have already encountered the well-known risks of cross-platform hybrid frameworks, such as DOM payload and scripting-engine scope issues. You can assess the security capabilities of Smartface.

(collected from Smartface)

CMS for eCommerce: Magento, PrestaShop, and Shopify

In this series of posts I will examine and compare different eCommerce CMSs. The three CMSs that are possibly most popular right now are Magento, PrestaShop, and Shopify.

These are three solutions which I know well and have worked with, and even though I may leave other promising systems out (I’m thinking of OpenCart), these three are, in my view, the three main platforms at this time.

In this post I will give a general overview, continue with an individual examination of each solution, and finally give my conclusions in my last post.

I have used Google Trends to provide a historical introduction to the birth and evolution of these three CMSs. I have opted to compare these three solutions with the previous reference in eCommerce CMS: osCommerce. This is simply a measurement of interest, based on the number of Google searches for these terms.

Magento started to take off in mid-2007, while PrestaShop did so one year later, and interest in Shopify wasn’t significant until 2011. The growth of Magento seems to have become stagnant since 2011, possibly due to the appearance of other solutions by other companies. Even though the level of interest in Shopify is still lower than the other two solutions, its growth curve is very promising.

While Magento and PrestaShop use a similar model, in which customers install the software on their own hosting, Shopify opts for a SaaS model that includes hosting. It should be pointed out that both Magento and Shopify have created SaaS solutions, although so far they don’t seem to be very popular.

As regards the technology used, both Magento and PrestaShop use PHP and MySQL. In the case of Shopify, even though it is implemented in Ruby on Rails, this is not relevant to users given its SaaS model. All three platforms provide an API that makes it possible to automate store processes; even though this may not seem relevant to store users, it allows for greater flexibility when it comes to creating extensions.

As mentioned before, one of the factors when choosing a system of this type is the availability of extensions created by the CMS makers or by the community around them. In this sense, all three systems have extension marketplaces where you can find modules that cover practically any need not included in the standard version.

The last comparison I will give in this post is pricing, as it does not require first explaining how each of the three CMSs works.


PrestaShop

The system is free and open-source (AFL 3.0 license), so you can download, install, and configure it on your server for free. However, PrestaShop offers an initial installation and configuration pack, plus support and customer services at different levels and prices:

  • Initial pack €1,995
  • Essential support €399
  • Premium support €699
  • Deluxe support €1,399


Magento

Magento comes in two versions: Community and Enterprise. The Community version is open-source (OSL 3.0 license); like PrestaShop, you can install it for free. The Enterprise version includes various exclusive improvements, and its price depends on the support level chosen:

  • Magento Enterprise: from $15,550
  • Magento Enterprise Premium: from $77,990


Shopify

Since this is a SaaS service, you cannot install Shopify on your own server; however, Shopify prices include hosting. Prices vary depending on the functionality and volume required by the online store, and the most affordable plans include a percentage of sales:

  • Basic $29 + 2% sales
  • Professional $79 + 1% sales
  • Unlimited $179
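Because the cheaper Shopify plans add a percentage of sales, the best plan depends on monthly sales volume. A quick comparison helper (prices taken from the list above; the function is our illustration):

```python
# Monthly fee and transaction percentage per plan, from the post above.
PLANS = {
    "Basic": (29, 0.02),
    "Professional": (79, 0.01),
    "Unlimited": (179, 0.0),
}

def monthly_cost(plan: str, monthly_sales: float) -> float:
    """Total monthly cost: flat fee plus the plan's cut of sales."""
    fee, pct = PLANS[plan]
    return fee + pct * monthly_sales

# Break-even between Basic and Professional:
# 29 + 0.02*s = 79 + 0.01*s  =>  s = 5000.
# Below $5,000/month in sales, Basic is cheapest; above it,
# Professional wins until Unlimited's flat $179 becomes cheaper.
```

For example, at $10,000 in monthly sales, Basic costs $229 while Professional and Unlimited both cost $179.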