How To Use Cron To Automate Tasks On a VPS

Introduction


One of the most standard ways to run tasks in the background on Linux machines is with cron jobs. They’re useful for scheduling tasks on the VPS and automating different maintenance-related jobs. “Cron” itself is a daemon (or program) that runs in the background. The schedule for the different jobs that are run is in a configuration file called “crontab.”

Installation


Almost all distros have a form of cron installed by default. However, if you’re using a system that doesn’t have it installed, you can install it with the following commands:

For Ubuntu/Debian:

sudo apt-get update
sudo apt-get install cron

For CentOS/Red Hat Linux:

sudo yum update
sudo yum install vixie-cron crontabs

You’ll need to make sure it runs in the background too:

sudo /sbin/chkconfig crond on
sudo /sbin/service crond start
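
Note: on newer distributions that use systemd rather than SysV init (an assumption about your particular release), the equivalent commands are typically the following; the service is usually named crond on CentOS/Red Hat and cron on Ubuntu/Debian:

# On CentOS/Red Hat (service is typically named crond)
sudo systemctl enable crond
sudo systemctl start crond
# On Ubuntu/Debian the service is typically named cron instead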

Syntax


Here is an example task we want to have run:

5 * * * * curl http://www.google.com

The syntax for the different jobs we’re going to place in the crontab might look intimidating. It’s actually very succinct and easy to parse once you know how to read it. Every command is broken down into:

  • Schedule
  • Command

The command can be virtually any command you would normally run on the command line. The schedule component is broken down into five fields, in the following order (see the annotated example after this list):

  • minute
  • hour
  • day of the month
  • month
  • day of the week
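
Putting the fields together, here is an annotated version of the example above (the comment line is just a label; cron ignores lines starting with #):

# minute (0-59)  hour (0-23)  day of month (1-31)  month (1-12)  day of week (0-6, Sunday = 0)
5 * * * * curl http://www.google.com
# i.e. run curl at minute 5 of every hour, every day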

Examples


Here is a list of examples for some common schedules you might encounter while configuring cron.

To run a command every minute:

* * * * *

To run a command at the 12th minute of every hour (00:12, 01:12, 02:12, and so on):

12 * * * *

You can also list multiple values for a field, separated by commas. To run a command every 15 minutes:

0,15,30,45 * * * *

To run a command every day at 4:00am, you’d use:

0 4 * * *

To run a command every Tuesday at 4:00am, you’d use:

0 4 * * 2

You can also use step values in your schedule. Instead of listing out 0,15,30,45, you could write */15. Combined with a range, that looks like the following:

*/15 2-6 * * *

Notice the “2-6” range. This syntax runs the command every 15 minutes between the hours of 2:00am and 6:00am.

The scheduling syntax is incredibly powerful and flexible. You can express just about every possible time imaginable.
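
For instance, here are a couple of combined schedules; the command path /path/to/command is just a placeholder:

# Every 10 minutes, 9am through 5pm, Monday through Friday
*/10 9-17 * * 1-5 /path/to/command

# At midnight on the 1st and 15th of every month
0 0 1,15 * * /path/to/command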

Configuration


Once you’ve settled on a schedule and you know the job you want to run, you’ll need a place to put it so your daemon can read it. There are a few different places, but the most common is the user’s crontab. If you’ll recall, this is the file that holds the schedule of jobs cron will run. The files for each user are located under /var/spool/cron (the exact path varies by distribution), but they are not supposed to be edited directly. Instead, it’s best to use the crontab command.

You can edit your crontab with the following command:

crontab -e

This will bring up a text editor where you can input your schedule with each job on a new line.

If you’d like to view your crontab, but not edit it, you can use the following command:

crontab -l

You can erase your crontab with the following command:

crontab -r

If you’re a privileged user, you can edit another user’s crontab by specifying it with crontab -u <user> -e

Output


For every cron job that gets executed, the output is emailed to the address associated with that user, unless it is redirected into a log file or to /dev/null. The email address can be specified explicitly with a “MAILTO” setting at the top of the crontab. You can also specify the shell you’d like to run, the PATH in which to search for commands, and the home directory, as in the following example:

First, let’s edit the crontab:

crontab -e

Then, we’ll edit it like so:

SHELL=/bin/bash
HOME=/
MAILTO="example@digitalocean.com"
#This is a comment
* * * * * echo 'Run this command every minute'

This particular job will output “Run this command every minute,” and that output will get emailed every minute to the “example@digitalocean.com” address specified above. Obviously, that might not be an ideal situation. As mentioned, we can instead redirect the output into a log file or discard it entirely to avoid the email.

To append to a log file, it’s as simple as:

* * * * * echo 'Run this command every minute' >> file.log

Note: “>>” appends to a file.

If you want to discard the output entirely, redirect it to /dev/null. Here is an example that runs a PHP script in the background:

* * * * * /usr/bin/php /var/www/domain.com/backup.php > /dev/null 2>&1
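
The trailing redirection deserves a quick note. Roughly, it behaves like this (using the same job as above):

# Discard standard output only; any errors would still be emailed
* * * * * /usr/bin/php /var/www/domain.com/backup.php > /dev/null

# Discard standard output and standard error; 2>&1 sends stderr to wherever stdout points
* * * * * /usr/bin/php /var/www/domain.com/backup.php > /dev/null 2>&1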

Restricting Access


Restricting access to cron is easy with the /etc/cron.allow and /etc/cron.deny files. In order to allow or deny a user, you just need to place their username in one of these files, depending on the access required. By default, most cron daemons will assume all users have access to cron unless one of these files exists. To deny access to all users and give access to the user tdurden, you would use the following command sequence:

echo ALL >>/etc/cron.deny
echo tdurden >>/etc/cron.allow

First, we lock out all users by appending “ALL” to the deny file. Then, by appending the username to the allow file, we give the user access to execute cron jobs.

Special Syntax


There are several shorthand entries you can use in your crontab file to make administering it a little easier. They are essentially shortcuts for the equivalent numeric schedules:

  • @hourly – Shorthand for 0 * * * *
  • @daily – Shorthand for 0 0 * * *
  • @weekly – Shorthand for 0 0 * * 0
  • @monthly – Shorthand for 0 0 1 * *
  • @yearly – Shorthand for 0 0 1 1 *

and @reboot, which runs the command once at startup.

Note: Not all cron daemons can parse this syntax (particularly older versions), so double-check it works before you rely on it.

To have a job that runs on start up, you would edit your crontab file (crontab -e) and place a line in the file similar to the following:

@reboot echo "System start up"

This command would be executed at startup, and its output emailed to the user specified in the crontab.

How to Read a File Line by Line in a Shell Script

There are many ways to handle any task on a Unix platform, but some techniques that are used to process a file waste a lot of CPU time. Most of the wasted time is spent on unnecessary variable assignment and on continuously opening and closing the same file over and over. Using a pipe also has a negative impact on the timing.

In this article I will explain various techniques for parsing a file line by line. Some techniques are very fast and some make you wait for half a day. The techniques used in this article are measurable, and I tested each one with the time command so that you can see which technique suits your needs.

I don't explain everything in depth, but if you know basic shell scripting, I hope you can follow along easily.

I extracted the last five lines from my /etc/passwd file and stored them in a file named "file_passwd".

[root@www blog]# tail -5 /etc/passwd > file_passwd
[root@www blog]# cat file_passwd
venu:x:500:500:venu madhav:/home/venu:/bin/bash
padmin:x:501:501:Project Admin:/home/project:/bin/bash
king:x:502:503:king:/home/project:/bin/bash
user1:x:503:501::/home/project/:/bin/bash
user2:x:504:501::/home/project/:/bin/bash
I use this file whenever a sample file is required.

Method 1: PIPED while-read loop


#!/bin/bash
# SCRIPT: method1.sh
# PURPOSE: Process a file line by line with a piped while-read loop.

FILENAME=$1
count=0

cat $FILENAME | while read LINE
do
        let count++
        echo "$count $LINE"
done

echo -e "\nTotal $count Lines read"

By catting the file and piping its output into a while-read loop, a single line of text is read into a variable named LINE on each loop iteration. The loop continues until all of the lines in the file have been processed, one at a time.

Bash runs each command of a pipeline in a subshell, so this piped "while-read" loop executes in a subshell. Any variable set within the loop is lost (unset) outside of the loop. That is why $count returns 0, its initialized value, outside the loop (a possible workaround is sketched after the output below).

Output:

[root@www blog]# sh method1.sh file_passwd
1 venu:x:500:500:venu madhav:/home/venu:/bin/bash
2 padmin:x:501:501:Project Admin:/home/project:/bin/bash
3 king:x:502:503:king:/home/project:/bin/bash
4 user1:x:503:501::/home/project/:/bin/bash
5 user2:x:504:501::/home/project/:/bin/bash

Total 0 Lines read
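
If you want to keep the pipe but still see the final count, newer versions of bash (4.2 and later) offer a possible workaround: the lastpipe option runs the last command of a pipeline in the current shell instead of a subshell. A minimal sketch, which only takes effect in a non-interactive script (job control must be off):

#!/bin/bash
# Sketch: piped while-read loop that preserves $count via lastpipe (bash 4.2+)
shopt -s lastpipe

FILENAME=$1
count=0

cat "$FILENAME" | while read LINE
do
        let count++
        echo "$count $LINE"
done

echo -e "\nTotal $count Lines read"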


Method 2: Redirected “while-read” loop


#!/bin/bash
#SCRIPT: method2.sh
#PURPOSE: Process a file line by line with redirected while-read loop.

FILENAME=$1
count=0

while read LINE
do
        let count++
        echo "$count $LINE"
done < $FILENAME

echo -e "\nTotal $count Lines read"

We still use the while read LINE syntax, but this time we feed the loop from the bottom (using file redirection) instead of using a pipe. You will find that this is one of the fastest ways to process each line of a file. The first time you see it, it looks a little unusual, but it works very well.

Unlike method 1, method 2 gives you the total number of lines outside of the loop.

Output:

[root@www blog]# sh method2.sh file_passwd
1 venu:x:500:500:venu madhav:/home/venu:/bin/bash
2 padmin:x:501:501:Project Admin:/home/project:/bin/bash
3 king:x:502:503:king:/home/project:/bin/bash
4 user1:x:503:501::/home/project/:/bin/bash
5 user2:x:504:501::/home/project/:/bin/bash

Total 5 Lines read

Note: In some older shells, the redirected loop also runs in a subshell.

Method 3: while read LINE Using File Descriptors

A file descriptor is simply a number that the operating system assigns to an open file to keep track of it. Consider it a simplified version of a file pointer; it is analogous to a file handle in C.

There are always three default "files" open: stdin (the keyboard), stdout (the screen), and stderr (error messages output to the screen). These, and any other open files, can be redirected. Redirection simply means capturing output from a file, command, program, script, or even a code block within a script and sending it as input to another file, command, program, or script.

Each open file gets assigned a file descriptor. The file descriptors for stdin, stdout, and stderr are 0, 1, and 2, respectively. For opening additional files, descriptors 3 through 9 remain available (the exact range may vary depending on the OS). It is sometimes useful to assign one of these additional file descriptors to stdin, stdout, or stderr as a temporary duplicate link. This simplifies restoration to normal after complex redirection and reshuffling.

There are two steps in the method we are going to use. The first step is to save the current stdin by duplicating file descriptor 0 to a new file descriptor, 3. We use the following syntax for this step:

  exec 3<&0

Now whatever was connected to stdin (normally the keyboard) is also reachable through file descriptor 3. The second step is to redirect our input file, specified by the variable $FILENAME, into file descriptor 0 (zero), which is standard input. This second step is done using the following syntax:

    exec 0<$FILENAME

At this point any command requiring input will receive it from the $FILENAME file. Now is a good time for an example.

#!/bin/bash
#SCRIPT: method3.sh
#PURPOSE: Process a file line by line with while read LINE Using
#File Descriptors

FILENAME=$1
count=0

exec 3<&0
exec 0< $FILENAME

while read LINE
do
        let count++
        echo "$count $LINE"
done

exec 0<&3
echo -e "\nTotal $count Lines read"

The while loop reads one line of text at a time, but the beginning of this script does a little file descriptor redirection. The first exec command duplicates stdin to file descriptor 3. The second exec command redirects the $FILENAME file into stdin, which is file descriptor 0. Now the while loop can execute without our having to worry about how we assign a line of text to the LINE variable. When the while loop exits, we restore the saved stdin from file descriptor 3 back to file descriptor 0:

 exec 0<&3

In other words, we set it back to the system's default value.
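
As a small optional addition (not part of the original script), you can also close file descriptor 3 once stdin has been restored, so the extra descriptor is not left open for the rest of the script:

exec 0<&3    # restore stdin from file descriptor 3
exec 3<&-    # close file descriptor 3, which is no longer needed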

Output:

[root@www tempdir]# sh method3.sh file_passwd 
1 venu:x:500:500:venu madhav:/home/venu:/bin/bash
2 padmin:x:501:501:Project Admin:/home/project:/bin/bash
3 king:x:502:503:king:/home/project:/bin/bash
4 user1:x:503:501::/home/project/:/bin/bash
5 user2:x:504:501::/home/project/:/bin/bash

Total 5 Lines read

Method 4: Process file line by line using awk

awk is a pattern scanning and text processing language. It is useful for manipulating data files and for text retrieval and processing, and it is good at manipulating and/or extracting fields (columns) in structured text files.

Its name comes from the surnames of its authors: Alfred Aho, Peter Weinberger, and Brian Kernighan.

I am not going to explain everything here. To learn more about awk, just Google it.

At the command line, enter the following command:

$ awk '{ print }' /etc/passwd 

You should see the contents of your /etc/passwd file appear before your eyes. Now, for an explanation of what awk did. When we called awk, we specified /etc/passwd as our input file. When we executed awk, it evaluated the print command for each line in /etc/passwd, in order. All output is sent to stdout, and we get a result identical to catting /etc/passwd. Now, for an explanation of the { print } code block. In awk, curly braces are used to group blocks of code together, similar to C. Inside our block of code, we have a single print command. In awk, when a print command appears by itself, the full contents of the current line are printed.

Here is another awk example that does exactly the same thing:

$ awk '{ print $0 }' /etc/passwd 

 In awk, the $0 variable represents the entire current line, so print
and print $0 do exactly the same thing. Now is a good time for an
example.

#!/bin/bash
#SCRIPT: method4.sh
#PURPOSE: Process a file line by line with awk

FILENAME=$1

awk '{kount++; print kount, $0}
END{print "\nTotal " kount " lines read"}' $FILENAME

Output:

[root@www blog]# sh method4.sh file_passwd
1 venu:x:500:500:venu madhav:/home/venu:/bin/bash
2 padmin:x:501:501:Project Admin:/home/project:/bin/bash
3 king:x:502:503:king:/home/project:/bin/bash
4 user1:x:503:501::/home/project/:/bin/bash
5 user2:x:504:501::/home/project/:/bin/bash

Total 5 lines read

Awk is really good at handling text that has been broken into multiple logical fields, and it allows you to effortlessly reference each individual field from inside your awk script. The following command will print out a list of all user accounts on your system:

awk -F":" '{ print $1 "\t " $3  }' /etc/passwd

Above, when we called awk, we used the -F option to specify ":" as the field separator. By default, whitespace (spaces and tabs) acts as the field separator; you can set a new field separator with the -F option. When awk processes the print $1 "\t " $3 command, it prints the first and third fields that appear on each line of the input file. "\t" separates the fields with a tab.
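
As a further illustration of working with fields (this example is mine, not from the original article), you can count how many accounts use each login shell; the seventh field of /etc/passwd is the shell:

awk -F":" '{ shells[$7]++ } END { for (s in shells) print s, shells[s] }' /etc/passwd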

Method 5: A little trick with the head and tail commands


#!/bin/bash
#SCRIPT: method5.sh
#PURPOSE: Process a file line by line with head and tail commands

FILENAME=$1
Lines=`wc -l < $FILENAME`

count=0

while [ $count -lt $Lines ]
do
        let count++
        LINE=`head -n $count $FILENAME | tail -1`
        echo "$count $LINE"
done

echo -e "\nTotal $count lines read"

On each iteration the head command extracts the top $count lines, and the tail command then extracts the bottom line from those lines. The file is reopened and re-read on every single iteration, which makes this a very wasteful method, but some people still use it.

Output:

[root@www blog]# sh method5.sh file_passwd
1 venu:x:500:500:venu madhav:/home/venu:/bin/bash
2 padmin:x:501:501:Project Admin:/home/project:/bin/bash
3 king:x:502:503:king:/home/project:/bin/bash
4 user1:x:503:501::/home/project/:/bin/bash
5 user2:x:504:501::/home/project/:/bin/bash

Total 5 lines read


Time Comparison for the Five Methods

Now take a deep breath; we are going to test each technique. Before you test each method of parsing a file line by line, create a large file with the exact number of lines that you want to process. Use the bigfile.sh script to create a large file.

$ sh bigfile.sh 900000

Running bigfile.sh with 900000 lines as the argument took more than two hours to generate bigfile.4227 (I don't know exactly how long). This file is extremely large for parsing line by line, but I needed a large file to get timing data greater than zero.

[root@www blog]# du -h bigfile.4227
70M bigfile.4227
[root@www blog]# wc -l bigfile.4227
900000 bigfile.4227

[root@www blog]# time ./method1.sh bigfile.4227 >/dev/null

real 6m2.911s
user 2m58.207s
sys 2m58.811s
[root@www blog]# time ./method2.sh bigfile.4227 > /dev/null

real 2m48.394s
user 2m39.714s
sys 0m8.089s
[root@www blog]# time ./method3.sh bigfile.4227 > /dev/null

real 2m48.218s
user 2m39.322s
sys 0m8.161s
[root@www blog]# time ./method4.sh bigfile.4227 > /dev/null

real 0m2.054s
user 0m1.924s
sys 0m0.120s
[root@www blog]# time ./method5.sh bigfile.4227 > /dev/null
I waited more than half a day and still didn't get a result, so I created a 10000-line file to test this method.
[root@www tempdir]# time ./method5.sh file.10000 > /dev/null

real 2m25.739s
user 0m21.857s
sys 1m12.705s

Method 4 came in first place; it took only 2.05 seconds. But we can't really compare Method 4 with the other methods, because awk is not just a command but a programming language in its own right.

Method 2 and method 3 are tied for second place; they produce essentially the same real execution time, at 2 minutes and 48 seconds. Method 1 came in third at 6 minutes and 2.9 seconds.

Method 5 would have taken more than half a day on the big file; it needed 2 minutes and 25 seconds just to process a 10000-line file, which shows how wasteful it is.

Note: If the file contains escape characters, use read -r instead of read. Then the backslash does not act as an escape character and is considered part of the line; in particular, a backslash-newline pair cannot be used as a line continuation.
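
For example, a safer variation of method 2 would read like this; the IFS= part additionally preserves leading and trailing whitespace (an extra precaution beyond what is described above):

while IFS= read -r LINE
do
        echo "$LINE"
done < "$FILENAME"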

How to execute a MySQL Command from a Linux Bash Shell?

Sometimes it is necessary to run a MySQL query directly from the Linux command line without actually going into the interactive MySQL prompt.

For example, when you want to schedule a backup of MySQL databases or automate the creation of MySQL databases and users with a Bash script.

Use one of the following commands to run a MySQL query from a Linux command line.

MySQL Command From a Bash Shell in One Line

Use the following command to quickly execute a MySQL query from a Linux Bash shell:
# mysql -u [user] -p[pass] -e "[mysql commands]"
Example:
# mysql -u root -pSeCrEt -e "show databases"
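
If you need the result inside a Bash script instead of on the screen, one common pattern is to capture it in a variable; the -B (batch) and -s (silent) options strip the table borders and column headers, and the variable name below is just an example:

DATABASES=$(mysql -u root -pSeCrEt -Bse "show databases")
for db in $DATABASES; do
    echo "Found database: $db"
done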

Run a MySQL Query From a Bash script using EOF

Use the following syntax in your Bash scripts for running MySQL commands :
mysql -u [user] -p[pass] << EOF
[mysql commands]
EOF

Example :

#!/bin/bash
mysql -u root -pSeCrEt << EOF
use mysql;
show tables;
EOF
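
Tying this back to the cron section above, a scheduled nightly backup might look like the following crontab entry; the database name, file paths, and credentials are placeholders, and note that the % character must be escaped as \% inside a crontab:

# Dump the database "mydb" every night at 2:00am
0 2 * * * /usr/bin/mysqldump -u root -pSeCrEt mydb > /var/backups/mydb-$(date +\%F).sql 2>> /var/log/mydb-backup.log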

Execute a MySQL Command Remotely

Use the -h option to specify the MySQL server's IP address:
# mysql -h [ip] -u [user] -p[pass] -e "[mysql commands]"
Example:
# mysql -h 192.168.1.10 -u root -pSeCrEt -e "show databases"

Specify a Database to Use

Use the -D option to specify the name of the MySQL database:
# mysql -D [db name] -u [user] -p[pass] -e "[mysql commands]"
Example:
# mysql -D clients -u root -pSeCrEt -e "show tables"
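
Putting the options together, here is a rough sketch of the kind of automation mentioned at the start of this section, creating a database and user from a Bash script; the host, names, and passwords are placeholders:

#!/bin/bash
# Create a database and an application user on a remote MySQL server
mysql -h 192.168.1.10 -u root -pSeCrEt << EOF
CREATE DATABASE IF NOT EXISTS clients;
CREATE USER 'appuser'@'%' IDENTIFIED BY 'AppPassword';
GRANT ALL PRIVILEGES ON clients.* TO 'appuser'@'%';
FLUSH PRIVILEGES;
EOF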

Install Ruby on Rails – Ubuntu Linux

Install Ruby on Rails 4.1 on Ubuntu Linux: up-to-date, detailed instructions for installing Rails 4.1, the newest version of Rails, on Ubuntu.

This in-depth installation guide is used by developers to configure their working environment for real-world Rails development. This guide doesn’t cover installation of Ruby on Rails for a production server.

To develop with Rails on Ubuntu, you’ll need Ruby (an interpreter for the Ruby programming language) plus gems (software libraries) containing the Rails web application development framework.

For an overview of what’s changed in each Rails release, see the Ruby on Rails Release History.

What is the RailsApps Project?

This is an article from the RailsApps project. The RailsApps project provides example applications that developers use as starter apps. Hundreds of developers use the apps, report problems as they arise, and propose solutions. Rails changes frequently; each application is known to work and serves as your personal “reference implementation.” Support for the project comes from subscribers. If this article is useful, please support us and join the RailsApps project.

Ruby on Rails on Ubuntu

Ubuntu is a popular platform for Rails development, as are other Unix-based operating systems such as Mac OS X. Installation is relatively easy and widespread help is available in the Rails developer community.

Use a Ruby Version Manager

You’ll need an easy way to switch between Ruby versions. Just as important, you’ll have a dependency mess if you install gems into the system environment. I recommend RVM to manage Ruby versions and gems because it is popular, well-supported, and full-featured. If you are an experienced Unix administrator, you can consider alternatives such as Chruby, Sam Stephenson’s rbenv, or others on this list.

Conveniently, you can use RVM to install Ruby.

Don’t Install Ruby from a Package

Ubuntu provides a package manager system for installing system software. You’ll use this to prepare your computer before installing Ruby. However, don’t use apt-get to install Ruby. The package manager will install an outdated version of Ruby. And it will install Ruby at the system level (for all users). It’s better to use RVM to install Ruby within your user environment.

Hosted Development

You can use Ruby on Rails without actually installing it on your computer. Hosted development, using a service such as Nitrous.io, means you get a computer “in the cloud” that you use from your web browser. Any computer can access the hosted development environment, though you’ll need a broadband connection. Nitrous.io is free for small projects.

Using a hosted environment means you are no longer dependent on the physical presence of a computer that stores all your files. If your computer crashes or is stolen, you can continue to use your hosted environment from any other computer. Likewise, if you frequently work on more than one computer, a hosted environment eliminates the difficulty of maintaining duplicate development environments. For these reasons some developers prefer to “work in the cloud” using Nitrous.io. For more on Nitrous.io, see the article Ruby on Rails with Nitrous.io. Nitrous.io is a good option if you have trouble installing Ruby on Rails on your computer.

Prepare Your System

You’ll need to prepare your computer with the required system software before installing Ruby on Rails.

You’ll need superuser (root) access to update the system software.

Update your package manager first:

$ sudo apt-get update

This must finish without error or the following step will fail.

Install Curl:

$ sudo apt-get install curl

You’ll use Curl for installing RVM.

Install Ruby Using RVM

Use RVM, the Ruby Version Manager, to install Ruby and manage your Rails versions.

If you have an older version of Ruby installed on your computer, there’s no need to remove it. RVM will leave your “system Ruby” untouched and use your shell to intercept any calls to Ruby. Any older Ruby versions will remain on your system and the RVM version will take precedence.

Ruby 2.1.1 was current when this was written. You can check for the current recommended version of Ruby. RVM will install the newest stable Ruby version.

The RVM website explains how to install RVM. Here’s the simplest way:

$ \curl -L https://get.rvm.io | bash -s stable --ruby

Note the backslash before “curl”; it bypasses any shell alias or function named curl so the real command is used.

The --ruby flag will install the newest version of Ruby.

RVM includes an “autolibs” option to identify and install system software needed for your operating system. See the article RVM Autolibs: Automatic Dependency Handling and Ruby 2.0 for more information.

If You Already Have RVM Installed

If you already have RVM installed, update it to the latest version and install Ruby:

$ rvm get stable --autolibs=enable
$ rvm install ruby
$ rvm --default use ruby-2.1.1

Installation Troubleshooting and Advice

RVM Troubleshooting

If you have trouble installing Ruby with RVM, you can get help directly from the RVM team using the IRC (Internet Relay Chat) channel #rvm on irc.freenode.net:

http://webchat.freenode.net/?channels=rvm

If you’ve never used IRC, it’s worthwhile to figure out how to use IRC because the RVM team is helpful and friendly. IRC on freenode requires registration (see how to register).

Install Node.js

Since Rails 3.1, a JavaScript runtime has been needed for development on Ubuntu Linux. The JavaScript runtime is required to compile code for the Rails asset pipeline. For development on Ubuntu Linux it is best to install the Node.js server-side JavaScript environment.

$ sudo apt-get install nodejs

and set it in your $PATH.

If you don’t install Node.js, you’ll need to add this to the Gemfile for each Rails application you build:

gem 'therubyracer'

Check the Gem Manager

RubyGems is the gem manager in Ruby.

Check the installed gem manager version:

$ gem -v
2.2.2

You should have version 2.2.2 or newer. Use gem update --system to upgrade the Ruby gem manager if necessary.

RVM Gemsets

Not all Rails developers use RVM to manage gems, but many recommend it.

Display a list of gemsets:

$ rvm gemset list

gemsets for ruby-2.1.1
=> (default)
   global

Only the “default” and “global” gemsets are pre-installed.

If you get an error “rvm is not a function,” close your console and open it again.

RVM’s Global Gemset

See what gems are installed in the “global” gemset:

$ rvm gemset use global
$ gem list

A trouble-free development environment requires the newest versions of the default gems.

Several gems are installed with Ruby or by the RVM default gemset.

To get a list of gems that are outdated:

$ gem outdated
### list not shown for brevity

To update all stale gems:

$ gem update
### list not shown for brevity

In particular, rake should be updated to version 10.2.1 or newer.

Faster Gem Installation

By default, when you install gems, documentation files will be installed. Developers seldom use gem documentation files (they’ll browse the web instead). Installing gem documentation files takes time, so many developers like to toggle the default so no documentation is installed.

Here’s how to speed up gem installation by disabling the documentation step:

$ echo "gem: --no-document" >> ~/.gemrc

This adds the line gem: --no-document to the hidden .gemrc file in your home directory.

Staying Informed

You can stay informed of new gem versions by creating an account at RubyGems.org and visiting your dashboard. Search for each gem you use and “subscribe” to see a feed of updates in the dashboard (an RSS feed is available from the dashboard).

After you’ve built an application and set up a GitHub repository, you can stay informed with Gemnasium or VersionEye. These services survey your GitHub repo and send email notifications when gem versions change. Gemnasium and VersionEye are free for public repositories with a premium plan for private repositories.

Rails Installation Options

Check for the current version of Rails. Rails 4.1.0.rc2 is the newest pre-release version of Rails. Rails 4.0.4 is the current stable release.

You can install Rails directly into the global gemset. However, many developers prefer to keep the global gemset sparse and install Rails into project-specific gemsets, so each project has the appropriate version of Rails.

If you install Rails at this point, you will install it into the global gemset.

Instead, make a gemset just for the pre-release version of Rails:

$ rvm use ruby-2.1.1@rails4.1 --create

Or, if you want to stay with the current stable release:

$ rvm use ruby-2.1.1@rails4.0 --create

Here are the options you have for installing Rails.

If you want the newest beta version or release candidate, you can install with --pre.

$ gem install rails --pre
$ rails -v

If you want the most recent stable release:

$ gem install rails
$ rails -v

Or you can get a specific version.

For example, if you want the Rails 3.2.17 release:

$ gem install rails --version=3.2.17
$ rails -v

Create a Workspace Folder

You’ll need a convenient folder to store your Rails projects. You can give it any name, such as code/ or projects/. For this tutorial, we’ll call it workspace/.

Create a projects folder and move into the folder:

$ mkdir workspace
$ cd workspace

This is where you’ll create your Rails applications.

New Rails Application

Here’s how to create a project-specific gemset, install Rails, and create a new application.

$ mkdir myapp
$ cd myapp
$ rvm use ruby-2.1.1@myapp --ruby-version --create
$ gem install rails --pre
$ rails new .

We’ll name the new application “myapp.” Obviously, you can give it any name you like.

With this workflow, you’ll first create a root directory for your application, then move into the new directory.

With one command you’ll create a new project-specific gemset. The --ruby-version option creates .ruby-version and .ruby-gemset files in the root directory. RVM recognizes these files in an application’s root directory and loads the required version of Ruby and the correct gemset whenever you enter the directory.

When we create the gemset, it will be empty (though it inherits use of all the gems in the global gemset). We immediately install Rails. The command gem install rails installs the most recent release of Rails.

Finally we run rails new . (note the trailing dot). We use the Unix “dot” convention to refer to the current directory. This assigns the name of the directory to the new application.

This approach is different from the way most beginners are taught to create a Rails application. Most instructions suggest using rails new myapp to generate a new application and then enter the directory to begin work. Our approach makes it easy to create a project-specific gemset and install Rails before the application is created.

The rails new command generates the default Rails starter app. If you wish, you can use the Rails Composer tool to generate a starter application with a choice of basic features and popular gems.

Quick Test

For a “smoke test” to see if everything runs, display a list of Rake tasks.

$ rake -T

There’s no need to run bundle exec rake instead of rake when you are using RVM (see RVM and bundler integration).

This concludes the instructions for installing Ruby and Rails. Read on for additional advice and tips.

Rails Starter Apps

The starter application you create with rails new is very basic.

Use the Rails Composer tool to build a full-featured Rails starter app.

You’ll get a choice of starter applications with basic features and popular gems.

Here’s how to generate a new Rails application using the Rails Composer tool:

Using the conventional approach:

$ rails new myapp -m https://raw.github.com/RailsApps/rails-composer/master/composer.rb

Or, first creating an empty application root directory:

$ mkdir myapp
$ cd myapp
$ rvm use ruby-2.1.1@myapp --ruby-version --create
$ gem install rails
$ rails new . -m https://raw.github.com/RailsApps/rails-composer/master/composer.rb

The -m option loads an application template that is hosted on GitHub.

You can add the -T flag to skip Test::Unit if you are using RSpec for testing.

You can add the -O flag to skip Active Record if you are using a NoSQL datastore such as MongoDB.

If you get an error “OpenSSL certificate verify failed” when you try to generate a new Rails app, see the article OpenSSL errors and Rails.

Rails Tutorials and Example Applications

The RailsApps project provides example apps that show how real-world Rails applications are built. Each example is known to work and can serve as your personal “reference implementation”. Each is an open source project. Dozens of developers use the apps, report problems as they arise, and propose solutions as GitHub issues. Purchasing a subscription for the tutorials gives the project financial support.

Example applications for Rails 4.1:

  • Rails and Bootstrap – starter app for Rails and Bootstrap
  • Rails and Foundation (Quickstart Guide) – starter app for Rails and Zurb Foundation
  • OmniAuth and Rails – OmniAuth for authentication
  • Devise and Rails – Devise for authentication
  • Devise and Pundit and Rails – Pundit for authorization

Example applications for Rails 4.0:

  • Learn Rails (Learn Ruby on Rails) – introduction to Rails for beginners

Adding a Gemset to an Existing Application

If you’ve already created an application with the command rails new myapp, you can still create a project-specific gemset. Here’s how to create a gemset for an application named “myapp” and create .ruby-version and .ruby-gemset files in the application’s root directory:

$ rvm use ruby-2.1.1@myapp --ruby-version --create

You’ll need to install Rails and the gems listed in your Gemfile into the new gemset by running:

$ gem install rails
$ bundle install

Specifying a Gemset for an Existing Application

If you have already created both an application and a gemset, but not .ruby-version and .ruby-gemset files, here’s how to add the files. For example, if you want to use an existing gemset named “ruby-2.1.1@myapp”:

$ echo "ruby-2.1.1" > .ruby-version
$ echo "myapp" > .ruby-gemset

Using .ruby-version and .ruby-gemset files means you’ll automatically be using the correct Rails and gem version when you switch to your application root directory on your local machine.

Databases for Rails

Rails uses the SQLite database by default. RVM installs SQLite and there’s nothing to configure.

Though SQLite is adequate for development (and even some production applications), a new Rails application can be configured for other databases. The command rails new myapp --database= will show you a list of supported databases.

Supported for preconfiguration are: mysql, oracle, postgresql, sqlite3, frontbase, ibm_db, sqlserver, jdbcmysql, jdbcsqlite3, jdbcpostgresql, jdbc.

For example, to create a new Rails application to use PostgreSQL:

$ rails new myapp --database=postgresql

The --database=postgresql parameter will add the pg database adapter gem to the Gemfile and create a suitable config/database.yml file.

Don’t use the --database= argument with the Rails Composer tool. You’ll select a database from a menu instead.

Deployment

If you wish to run your own servers, you can deploy a Rails application using Capistrano deployment scripts. However, unless system administration is a personal passion, it is much easier to deploy your application with a “platform as a service” provider such as Heroku.

Hosting

For easy deployment, use a “platform as a service” provider such as Heroku. For deployment on Heroku, see the RailsApps article on the topic.

Security

By design, Rails encourages practices that avoid common web application vulnerabilities. The Rails security team actively investigates and patches vulnerabilities. If you use the most current version of Rails, you will be protected from known vulnerabilities. See the Ruby On Rails Security Guide for an overview of potential issues and watch the Ruby on Rails Security Mailing List for announcements and discussion.

Your Application’s Secret Token

Rails uses a session store to provide persistence between page requests. The default session store uses cookies. To prevent decoding of cookie data and hijacking a session, Rails encrypts cookie data using a secret key. When you create a new Rails application using the rails new command, a unique secret key is generated. If you’ve used the Rails Composer tool to generate the application, the application’s secret token will be unique, just as with any Rails application generated with the rails new command.

In Rails 4.1, the file config/secrets.yml contains secret tokens for development and production.

In Rails 4.0, the config/initializers/secret_token.rb file contains the secret token.

Take care to hide the secret token you use in production. Don’t expose it in a public GitHub repo, or people could change their session information, and potentially access your site without permission. It’s best to set the secret token in a Unix shell variable.

If you need to create a new secret token:

$ rake secret

The command rake secret generates a new random secret you can use. The command won’t install the key; you have to copy the key from the console output to the appropriate file.
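
For example, here is a minimal sketch of keeping the production secret in a shell variable; SECRET_KEY_BASE is the environment variable the Rails 4.1 default secrets.yml reads for production, and the profile path is just one possibility:

# Run inside your Rails application directory
export SECRET_KEY_BASE="$(rake secret)"

# To make it persistent, add a line like this to your shell profile
echo 'export SECRET_KEY_BASE=paste-the-generated-secret-here' >> ~/.bashrc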

Where to Get Help

Your best source for help with problems is Stack Overflow. Your issue may have been encountered and addressed by others.

You can also try Rails Hotline, a free telephone hotline for Rails help staffed by volunteers.