How to install an .ipa file on an iPad or iPhone

The Developer .ipa file is intended to let you test the app on your iPad or iPhone before you send the .zip file to Apple for review. To test the .ipa file, you need to install it on your iPad/iPhone. Unfortunately, you can’t use AirDrop, Dropbox, or a similar service to install the file. Apple provides two ways to do this. One way is to add the .ipa file to your iTunes library on your computer, and then sync your iPad/iPhone with iTunes. But if you have a lot of content on your device, this process can be maddeningly slow, taking 5 or 10 minutes or more to complete.

A far faster, simpler way is to use Xcode. Xcode is Apple’s free development tool used to build apps for Mac and iOS. But you don’t need to know anything about Xcode or app development to use it to quickly and easily install an .ipa file on your iPad or iPhone. Here’s how to do it with Xcode 6.1.1.

1. Download and install Xcode.

2. Run Xcode. You’ll find it in your Applications folder.

3. Wire your iPad or iPhone to your Mac with a USB cable.

4. In Xcode, choose Window > Devices, or press command-shift-2. You should see your device displayed in the Devices window.

[Screenshot: the Devices window showing the connected device]

5. Either drag your .ipa file into the “Installed Apps” section, or click on the plus sign and select your .ipa file. This will install the development app on your device. There is a bug that sometimes causes the error message below to appear. If you get this message, and you have the correct device attached, just click the OK button and try again. It should work the second time.

[Screenshot: the error message]

Alternative Raspberry Pi Operating Systems

The original Raspberry Pi has always had a few different operating systems (OSs) available, albeit most of them based on Linux. With the release of the Raspberry Pi 2, a few more are starting to appear. The reason is that most Linux operating systems are written to run on the ARMv7 architecture, which the CPU at the centre of the Raspberry Pi 2 uses, whereas the original Pi’s CPU was based on ARMv6. It is therefore becoming much easier for operating systems to be ported to the Pi 2.

The OS of choice has always been Raspbian, and there are no plans for that to change. Raspbian is based on the Debian Linux distribution. A ‘distribution’ is the word often used to describe a flavour of Linux, and probably came about when users used to ‘distribute’ sets of CDs with the operating system and applications on them. The term has stuck.

Other distributions available for download directly from the Raspberry Pi website are:

  • OpenElec – this is a media centre that takes music, photos and videos served by other devices on your network, streamed channels or files from an attached drive, and allows you to play them back via your monitor or TV.  It works well on the original Pi, and is able to play back 1080p video, but the interface is far more responsive on the Pi2.
  • Pidora – This is another Linux distribution like Raspbian, but is based on the Fedora distribution.  It gives you a different look and feel to Raspbian.  The current build is for the ARMv6 architecture, and therefore will not run on the Pi2.

  • RISC OS – This operating system is different from the others in the fact that it is not based on Linux, but is instead a completely separate OS.  It was originally designed by Acorn in Cambridge and has links to the team that developed the original ARM microprocessors.

  • Snappy Ubuntu Core – With the advent of the ARMv7 in the Raspberry Pi2, a version of the Ubuntu Linux operating system has become available.  This is an early, alpha release, which means that it is not really intended for everyday users, but more for developers to start developing “snappy” apps for Ubuntu.


With over 5,000,000 Pis out there, it’s no surprise that some people are porting their favourite operating system over to the Pi. The Pi 2 has made this much easier because of the additional speed and memory. So let’s look at a few:
  • Android – This is the most popular smartphone operating system in the world and is run (mainly) on ARM based phones – although an Intel based version is now becoming popular with some manufacturers.  I cannot currently find a stable build of the OS for the Pi, but this video shows it in action.  I have no doubt that it will appear some time within the year.
  • Windows 10 – Microsoft announced that their new operating system would run on the Raspberry Pi. There is little information, and a lot of misinformation, about it, such as whether it will have a GUI or just be their embedded version (i.e. it will run without a display). I do not want to say anything that may be wrong, so I’m not going to provide any more information that may mislead people; we will all just have to wait for its release. What it does show, though, is that Microsoft are supporting machines like the Pi, and when the Compute Module 2 is released (which I am sure will happen) the humble Pi will help them with one of their goals of ‘a Windows PC in every room’.
  • Ubuntu MATE 15.04 – The full Ubuntu MATE distro has been built for the Pi 2 by Ryan Finnie and Sjoerd Simons, providing the whole desktop environment on our little friend.

[Screenshot: Ubuntu MATE 15.04]

  • Minibian – The default Raspbian image from the Raspberry Pi website contains most of the useful software for those who are starting out with the Raspberry Pi, but some people may not want all that software.  This is where Minibian comes in.  It is VERY small, can fit onto a 512MB SD card and runs on both the Pi and Pi2.  It is really aimed at those wanting to build embedded systems that use the least amount of resources, but it can also be used by those who want to start with a small distribution and add only the software they want to run, like a NAS, web server, or a robot that does not need the GUI and all the other software provided by the full OS.

  • Hypriot – This one is only for the hardy! It’s Raspbian with Docker enabled. What on earth (or the high seas) is that, you ask?  Well, Docker is a way of installing Docker ‘containers’ that contain a number of individual applications and libraries that one would otherwise have to install individually.  This makes it easy to, for example, install a web server with Apache, MySQL and PHP with a pre-determined set of add-ons and configuration; you only need to obtain that ‘container’ and Docker does all the rest.  Head over to Hypriot for more.
  • Arch Linux – Arch Linux is another distribution for more experienced users; the base OS is minimal, and additional packages need to be installed by the user to make it into a full environment. However, it has the reputation of being a good, stable distro.
  • PiPlay – PiPlay is a pre-built OS for gaming and emulation. It provides emulation for some of the most popular older gaming platforms, such as the PlayStation 1, Sega Genesis, Neo Geo, SNES, Game Boy and Game Boy Advance, Atari 2600, Commodore 64 and others.

This is by no means a full list of operating systems available for the Pi. There are many, many more, each with its own reason for being. One I have not mentioned is Volumio, which is based on Raspbian, and serves in a Raspberry Pi/IQaudIO/QAcoustics setup as my HQ media centre. Others are listed on the eLinux Raspberry Pi area. Support for some is great, and for others is a little sketchy. If you can’t find what you are looking for, why not build one of your own?

How I Develop Mobile Web Apps

For most projects, I perform two or more iterations with the following steps.

Requirements Gathering

As you would suspect, for this step I prefer to go to the people who know the most about what is needed, preferably the end users of the future app or software.

Development Environment Setup

This is where I set up my workstation to develop the project. It entails creating directories for source code, documentation and other resources; setting up source control (I normally use Bitbucket); and installing code libraries, frameworks and third-party software needed for the project.

Low Fidelity UI Prototyping

This is where I create low-fidelity prototypes of the user interface of the app. I use Balsamiq and Pencil as the tools for this step. I prefer short work sessions with the users or clients until they are satisfied with the prototypes.

High Fidelity UI Prototyping

I also make high-fidelity prototypes of the UI, based on the low-fidelity prototypes. I prefer to create these prototypes with jQuery Mobile due to the ease of use of this framework. As with the previous step, I work with users or clients until we agree on the prototypes.

UI Implementation with Mock Data

Once I have UI prototypes that my clients believe are very close to the final product, I start to implement the UI of the app. During this phase I use mock data to populate the screens. I also simulate network connections and other processes that need to be in place in the production app.

The goal here is to have a working UI that users can test on an emulator or physical device. I use emulators such as Ripple and Genymotion, as well as the iOS emulators provided with Xcode.

Once I finish this stage, I know that I can focus on implementing the controllers and services that will drive the UI, with the confidence that there will be few changes to the UI going forward.

Implementation of Client-Side Controllers and Services

The client-side controllers and services drive the app’s UI and perform tasks such as communications with the server and local data access. During this phase, I perform mini-iterations consisting of the following steps:

  • Create behavior-driven tests for the controllers and services layer.
  • Implement the controller and services layer functions.
  • Wire the UI to the controller and remove mock data.
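As an illustration of that mini-iteration (my real projects use JavaScript frameworks, but the shape is the same in any language): write the behaviour test first, then implement just enough of the service function to satisfy it. The function, data and field names below are invented for the example.

```python
def get_open_orders(all_orders):
    """Service-layer function: return only the orders still open,
    newest first -- exactly what the UI screen needs, nothing more."""
    open_orders = [o for o in all_orders if o["status"] == "open"]
    return sorted(open_orders, key=lambda o: o["created"], reverse=True)

# behaviour-driven test, written before the implementation,
# using the same mock data that populated the UI screens
mock_orders = [
    {"id": 1, "status": "open",   "created": "2015-01-02"},
    {"id": 2, "status": "closed", "created": "2015-01-03"},
    {"id": 3, "status": "open",   "created": "2015-01-04"},
]
result = get_open_orders(mock_orders)
assert [o["id"] for o in result] == [3, 1]  # only open orders, newest first
```

Once the test passes, the UI is wired to the real function and the mock data is removed.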

The test-first approach makes it easy to create lean controller and service methods that do what’s needed for the specific task, without code bloat.

It’s simple to create good behavior-driven tests once the UI is well defined. If I need to make changes to the UI down the road, the tests also make it easy to find the places where I need to change the controller or services to respond to the UI changes.

Implementation of Server-Side Endpoints and Services

The server-side endpoints and services store application data and run tasks on behalf of the mobile app. During this phase, I perform mini-iterations consisting of the following steps:

  • Create behavior-driven tests of endpoints with fake data, using Postman or similar tools.
  • Implement the endpoints.
  • Create behavior-driven tests for the services layer (authentication, authorization, data access, etc.).
  • Implement the services layer.

Tests on Emulator and Physical Devices

This phase is all about end-to-end tests of the application on different emulators and physical devices. I first perform functional tests that cover the features of the app; then I move on to non-functional tests, paying particular attention to usability and performance issues.

Here again I use emulators such as Ripple and Genymotion, as well as the iOS emulators provided with Xcode.


Packaging

The packaging step is where I package the application (if needed) so it can be deployed through an enterprise portal or app store.

Linking AdWords to Google Analytics & Webmaster Tools

Roughly 85 percent of Google queries are not new searches. The majority of searches are old favorites that are asked nearly every day.

The same is true at our Google AdWords trainings, where FAQs dominate Q&A. People tend to struggle with similar obstacles year after year, whether it be match types or account settings or whatever.

Somewhere just beyond Michael’s top 5 common PPC questions is linking Google Analytics and Google Webmaster Tools accounts, a question that we hear at most AdWords training sessions.

“Why can’t I see my AdWords data in my Google Analytics?”

This is an update to an outdated post that offered a step-by-step troubleshooting guide, complete with how-tos and screenshots. Linking your Google products together helps to provide context to your data, allowing you to draw deeper insights and hopefully, make more accurate conclusions!

Verify Appropriate Access Levels

Before we begin, please verify email addresses and access levels. You will need administrative access at the account level in AdWords and the “Manage Users/Edit” access at the property level in Analytics. Use the following screenshots to ensure that you hold the right rank. (You can click on screenshots for full-size images!)

Verifying AdWords Access


Verifying Analytics Access


Show AdWords data in Google Analytics

The goal in this section is to share information between accounts so (1) AdWords data can be used in Analytics and (2) Analytics data can be used in AdWords.

Step A: Login to Google Analytics and (1) select the Admin tab at the top of the page. You will then need to pick the appropriate ACCOUNT and (2) PROPERTY.


Step B: Under the PRODUCT LINKING header, (1) click on AdWords Linking and (2) verify the AdWords account. Note: Any trouble here might be a sign that you do not have an administrative access level.


Step C: Once the AdWords account has been selected, you will need to (1) add a name for the account link, (2) pick the views where you would like to make AdWords data available and (3) confirm the link by clicking the blue button.


Show Google Analytics Data in AdWords

Step D: Login to AdWords and (1) select the gear icon then (2) Account settings.


Step E: Navigate to (1) Linked accounts then (2) Google Analytics. You can add data from Google Analytics views by (3) clicking the Add button.


Step F: Add data from Google Analytics to AdWords by (1) selecting Customize columns under the Columns tab.


Step G: Find the (1) Google Analytics tab and (2) Add metric columns to display that data in AdWords reports.


If all dots were connected correctly, you should have an email in your inbox to acknowledge that your work here is done.

Linking AdWords to Google Webmaster Tools

A link between these accounts provides broader search data to AdWords, allowing marketers to compare the relationship between organic and paid search efforts. Stephen wrote a nice overview here on how to use the Paid & Organic Report.

Step H: In the Account settings section of AdWords that we discussed above, (1) select Linked accounts then (2) Webmaster Tools. That will allow you to (3) search for your domain in Google Webmaster Tools. Note: Access to the Webmaster Tools account is required.


Step I: You should see something like this screen once the account has been linked.


For more on the Paid & Organic Report, see Google’s guide here.

HMAC authentication in ASP.NET Web API

In this article I will explain the concepts behind HMAC authentication and will show how to write an example implementation for ASP.NET Web API using message handlers. The project will include both server and client side (using Web API’s HttpClient) bits.

HMAC based authentication

HMAC (hash-based message authentication code) provides a relatively simple way to authenticate HTTP messages using a secret that is known to both client and server. Unlike basic authentication it does not require transport-level encryption (HTTPS), which makes it an appealing choice in certain scenarios. Moreover, it guarantees message integrity (prevents malicious third parties from modifying the contents of the message).

On the other hand proper HMAC authentication implementation requires slightly more work than basic HTTP authentication and not all client platforms support it out of the box (most of them support cryptographic algorithms required to implement it though). My suggestion would be to use it only if HTTPS + basic authentication does not suit your requirements.

One prominent example of HMAC usage is Amazon S3 service.

The basic idea behind HMAC authentication in HTTP can be described as follows:

  • both client and server have access to a secret that will be used to generate HMAC – it can be a password (or preferably password hash) created by the user at the time of registration,
  • using the secret client generates a message signature using HMAC algorithm (the algorithm is provided by .NET ‘for free’),
  • signature is attached to the message (eg. as a header) and the message is sent,
  • the server receives the message and calculates its own version of the signature using the secret (both client and server use the same HMAC algorithm),
  • if the signature computed by the server matches the one attached to the message, the message is authorized.
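The flow described above can be sketched in a few lines of Python (the canonical string and the SHA-256 choice here mirror the article’s approach, but the sketch is illustrative, not the C# implementation):

```python
import base64
import hashlib
import hmac

# secret known to both client and server, e.g. a password hash shared at registration
SECRET = b"password-hash-shared-at-registration"

def sign(message: bytes, secret: bytes) -> str:
    """Client side: compute an HMAC-SHA256 signature and base64-encode it."""
    digest = hmac.new(secret, message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

def is_authorized(message: bytes, signature: str, secret: bytes) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    expected = sign(message, secret)
    return hmac.compare_digest(expected, signature)

# the client attaches the signature to the request (e.g. as a header)
sig = sign(b"GET\n/api/values", SECRET)
assert is_authorized(b"GET\n/api/values", sig, SECRET)     # untouched message
assert not is_authorized(b"GET\n/api/other", sig, SECRET)  # tampered message
```

Note the constant-time comparison on the server side; a naive `==` on the signatures can leak timing information.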

As you can see, the secret key (eg. a password hash) is only shared between client and server once (eg. during user registration). No one will be able to produce a valid signature without access to the secret; also, any modification of the message (eg. appending content) will result in the server calculating a different signature and refusing authorization.

Broadly speaking, to create an HMAC-authenticated client/server pair using ASP.NET Web API we need:

  • a method that returns a string representation of a given HTTP request,
  • a method that calculates the HMAC signature from the secret string and the message representation,
  • client side – a message handler that uses these methods to calculate the signature and attaches it to the request (as an HTTP header),
  • server side – a message handler that calculates the signature of the incoming request and compares it with the one contained in the header.

Web API client

Ok, so let’s start by writing the first piece.

public interface IBuildMessageRepresentation
{
    string BuildRequestRepresentation(HttpRequestMessage requestMessage);
}

public class CanonicalRepresentationBuilder : IBuildMessageRepresentation
{
    /// <summary>
    /// Builds the message representation as follows:
    /// HTTP METHOD\n +
    /// Content-MD5\n +
    /// Timestamp\n +
    /// Username\n +
    /// Request URI
    /// </summary>
    public string BuildRequestRepresentation(HttpRequestMessage requestMessage)
    {
        bool valid = IsRequestValid(requestMessage);
        if (!valid)
            return null;

        if (!requestMessage.Headers.Date.HasValue)
            return null;
        DateTime date = requestMessage.Headers.Date.Value.UtcDateTime;

        string md5 = requestMessage.Content == null ||
            requestMessage.Content.Headers.ContentMD5 == null ? ""
            : Convert.ToBase64String(requestMessage.Content.Headers.ContentMD5);

        string httpMethod = requestMessage.Method.Method;
        if (!requestMessage.Headers.Contains(Configuration.UsernameHeader))
            return null;
        string username = requestMessage.Headers
            .GetValues(Configuration.UsernameHeader).First();
        string uri = requestMessage.RequestUri.AbsolutePath.ToLower();
        // you may need to add more headers if that is required for security reasons
        string representation = String.Join("\n", httpMethod,
            md5, date.ToString(CultureInfo.InvariantCulture),
            username, uri);

        return representation;
    }

    private bool IsRequestValid(HttpRequestMessage requestMessage)
    {
        // for simplicity I am omitting the headers check
        // (all required headers should be present)
        return true;
    }
}
A couple of points worth mentioning:

  • we construct the message representation by concatenating the ‘important’ headers, the HTTP method and the URI,
  • instead of incorporating the content itself, we use its MD5 hash (base64 encoded),
  • all parts of the message (eg. headers) that can affect its meaning and have side effects on the server side should be included in the representation (otherwise an attacker would be able to modify them without changing the signature).
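As a rough Python analogue of that canonical representation (the field order mirrors the C# code above; the date string is just an example of the invariant-culture format):

```python
import base64
import hashlib

def build_representation(method: str, content: bytes, date: str,
                         username: str, uri: str) -> str:
    """Join the 'important' parts of the request with newlines,
    substituting a base64-encoded MD5 hash for the body itself."""
    md5 = (base64.b64encode(hashlib.md5(content).digest()).decode("ascii")
           if content else "")
    return "\n".join([method, md5, date, username, uri.lower()])

rep = build_representation("POST", b'"some content"',
                           "01/30/2013 14:00:00", "username", "/api/values")
```

Because every signed part appears in the joined string, changing any of them (method, body, timestamp, user, or path) changes the representation and therefore the HMAC.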

Now let’s look at the component that will calculate the authentication code (signature).

public interface ICalculteSignature
{
    string Signature(string secret, string value);
}

public class HmacSignatureCalculator : ICalculteSignature
{
    public string Signature(string secret, string value)
    {
        var secretBytes = Encoding.UTF8.GetBytes(secret);
        var valueBytes = Encoding.UTF8.GetBytes(value);
        string signature;

        using (var hmac = new HMACSHA256(secretBytes))
        {
            var hash = hmac.ComputeHash(valueBytes);
            signature = Convert.ToBase64String(hash);
        }
        return signature;
    }
}

The signature will be encoded using base64 so that we can pass it easily in a header. Which header, you may ask? Well, unfortunately there is no standard way of including message authentication codes in a message (just as there is no standard way of constructing the message representation). We will use the Authorization HTTP header for that purpose, providing a custom scheme (ApiAuth).

Authorization: ApiAuth HMAC_SIGNATURE

The HMAC will be calculated and attached to the request in a custom message handler.

public class HmacSigningHandler : HttpClientHandler
{
    private readonly ISecretRepository _secretRepository;
    private readonly IBuildMessageRepresentation _representationBuilder;
    private readonly ICalculteSignature _signatureCalculator;

    public string Username { get; set; }

    public HmacSigningHandler(ISecretRepository secretRepository,
                          IBuildMessageRepresentation representationBuilder,
                          ICalculteSignature signatureCalculator)
    {
        _secretRepository = secretRepository;
        _representationBuilder = representationBuilder;
        _signatureCalculator = signatureCalculator;
    }

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                                 System.Threading.CancellationToken cancellationToken)
    {
        if (!request.Headers.Contains(Configuration.UsernameHeader))
            request.Headers.Add(Configuration.UsernameHeader, Username);
        request.Headers.Date = new DateTimeOffset(DateTime.Now, DateTime.Now - DateTime.UtcNow);
        var representation = _representationBuilder.BuildRequestRepresentation(request);
        var secret = _secretRepository.GetSecretForUser(Username);
        string signature = _signatureCalculator.Signature(secret, representation);

        var header = new AuthenticationHeaderValue(Configuration.AuthenticationScheme, signature);
        request.Headers.Authorization = header;
        return base.SendAsync(request, cancellationToken);
    }
}

public class Configuration
{
    public const string UsernameHeader = "X-ApiAuth-Username";
    public const string AuthenticationScheme = "ApiAuth";
}

public class DummySecretRepository : ISecretRepository
{
    private readonly IDictionary<string, string> _userPasswords
        = new Dictionary<string, string>() { { "username", "password" } };

    public string GetSecretForUser(string username)
    {
        if (!_userPasswords.ContainsKey(username))
            return null;

        var userPassword = _userPasswords[username];
        var hashed = ComputeHash(userPassword, new SHA1CryptoServiceProvider());
        return hashed;
    }

    private string ComputeHash(string inputData, HashAlgorithm algorithm)
    {
        byte[] inputBytes = Encoding.UTF8.GetBytes(inputData);
        byte[] hashed = algorithm.ComputeHash(inputBytes);
        return Convert.ToBase64String(hashed);
    }
}

public interface ISecretRepository
{
    string GetSecretForUser(string username);
}

In a real-life scenario you could retrieve the hashed password from a persistent store (a database). If you remember how we constructed our message representation, you will notice that we also need to set the content MD5 header. We could do it in HmacSigningHandler, but to have separation of concerns, and because Web API allows us to combine handlers in a neat way, I moved it to a separate (dedicated) handler.

public class RequestContentMd5Handler : DelegatingHandler
{
    protected async override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                                       System.Threading.CancellationToken cancellationToken)
    {
        if (request.Content == null)
            return await base.SendAsync(request, cancellationToken);

        byte[] content = await request.Content.ReadAsByteArrayAsync();
        using (MD5 md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(content);
            request.Content.Headers.ContentMD5 = hash;
        }
        var response = await base.SendAsync(request, cancellationToken);
        return response;
    }
}

For simplicity the HMAC handler derives directly from HttpClientHandler. Here is how we would make a request:

static void Main(string[] args)
{
    var signingHandler = new HmacSigningHandler(new DummySecretRepository(),
                                            new CanonicalRepresentationBuilder(),
                                            new HmacSignatureCalculator());
    signingHandler.Username = "username";

    var client = new HttpClient(new RequestContentMd5Handler()
    {
        InnerHandler = signingHandler
    });
    client.PostAsJsonAsync("http://localhost:48564/api/values", "some content").Wait();
}

And that’s basically it as far as the HTTP client is concerned. Let’s have a look at the server part.

Web API service

The general idea is that we want to authenticate every incoming request (we could use per-route handlers to secure only one route, for example). Each request’s authentication code will be calculated using the very same IBuildMessageRepresentation and ICalculteSignature implementations. If the signature does not match (or the content MD5 hash is different from the value in the header), we will immediately return a 401 response.

public class HmacAuthenticationHandler : DelegatingHandler
{
    private const string UnauthorizedMessage = "Unauthorized request";

    private readonly ISecretRepository _secretRepository;
    private readonly IBuildMessageRepresentation _representationBuilder;
    private readonly ICalculteSignature _signatureCalculator;

    public HmacAuthenticationHandler(ISecretRepository secretRepository,
        IBuildMessageRepresentation representationBuilder,
        ICalculteSignature signatureCalculator)
    {
        _secretRepository = secretRepository;
        _representationBuilder = representationBuilder;
        _signatureCalculator = signatureCalculator;
    }

    protected async Task<bool> IsAuthenticated(HttpRequestMessage requestMessage)
    {
        if (!requestMessage.Headers.Contains(Configuration.UsernameHeader))
            return false;

        if (requestMessage.Headers.Authorization == null
            || requestMessage.Headers.Authorization.Scheme
                    != Configuration.AuthenticationScheme)
            return false;

        string username = requestMessage.Headers
            .GetValues(Configuration.UsernameHeader).First();
        var secret = _secretRepository.GetSecretForUser(username);
        if (secret == null)
            return false;

        var representation = _representationBuilder.BuildRequestRepresentation(requestMessage);
        if (representation == null)
            return false;

        if (requestMessage.Content.Headers.ContentMD5 != null
            && !await IsMd5Valid(requestMessage))
            return false;

        var signature = _signatureCalculator.Signature(secret, representation);

        var result = requestMessage.Headers.Authorization.Parameter == signature;

        return result;
    }

    protected async override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
           System.Threading.CancellationToken cancellationToken)
    {
        var isAuthenticated = await IsAuthenticated(request);

        if (!isAuthenticated)
        {
            var response = request
                .CreateErrorResponse(HttpStatusCode.Unauthorized, UnauthorizedMessage);
            response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue(
                Configuration.AuthenticationScheme));
            return response;
        }
        return await base.SendAsync(request, cancellationToken);
    }
}
The bulk of the work is done by the IsAuthenticated() method. Also, please note that we do not sign the response, meaning the client will not be able to verify the authenticity of the response (although response signing would be easy to do given the components we already have). I have omitted the IsMd5Valid() method for brevity; it basically compares the content hash with the MD5 header value (just remember not to compare byte[] arrays using the == operator).

The configuration part is simple and can look like this (a per-route handler):

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional },
    constraints: null,
    handler: new HmacAuthenticationHandler(new DummySecretRepository(),
        new CanonicalRepresentationBuilder(), new HmacSignatureCalculator())
    {
        InnerHandler = new HttpControllerDispatcher(config)
    });

Replay attack prevention

There is one very important flaw in the current approach. Imagine a malicious third party intercepts a valid (properly authenticated) HTTP request coming from a legitimate client (eg. using a sniffer). Such a message can be stored and resent to our server at any time, enabling the attacker to repeat operations performed previously by authenticated users. Please note that new messages still cannot be created, as the attacker does not know the secret nor has a way of retrieving it from the intercepted data.

To help us fix this issue, let’s make the following three observations/assumptions about the dates of requests in our system:

  • requests with different Date header values will have different signatures, so an attacker will not be able to modify the timestamp without invalidating the signature,
  • we assume identical, consecutive messages coming from a user will always have different timestamps – in other words, no client will want to send two or more identical messages at the same point in time,
  • we introduce a requirement that no HTTP request can be older than X (eg. 5) minutes – if for any reason the message is delayed for longer than that, it will have to be resent with a refreshed timestamp.
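Under those assumptions, the extra server-side checks amount to something like this Python sketch (a plain dict stands in for the cache of recently seen signatures; the five-minute window matches the text):

```python
from datetime import datetime, timedelta, timezone

VALIDITY = timedelta(minutes=5)
seen_signatures = {}  # signature -> expiry time; stand-in for a real cache

def is_date_valid(request_date: datetime, now: datetime) -> bool:
    """Reject requests whose timestamp is outside +/- the validity window."""
    return abs(now - request_date) < VALIDITY

def is_replay(signature: str, now: datetime) -> bool:
    """Reject a signature already seen inside the validity window;
    otherwise remember it until the window for it expires."""
    expiry = seen_signatures.get(signature)
    if expiry is not None and expiry > now:
        return True
    seen_signatures[signature] = now + VALIDITY
    return False

now = datetime(2013, 1, 30, 14, 0, tzinfo=timezone.utc)
assert is_date_valid(now - timedelta(minutes=3), now)        # fresh enough
assert not is_date_valid(now - timedelta(minutes=6), now)    # too old: resend
assert not is_replay("sig-1", now)   # first sighting: accepted
assert is_replay("sig-1", now)       # duplicate within window: rejected
```

The two checks together close the replay window: old messages fail the date check, and recent duplicates fail the signature-cache check.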

Knowing the above, we can introduce the following changes to the IsAuthenticated() method:

protected async Task<bool> IsAuthenticated(HttpRequestMessage requestMessage)
{
    // ... the checks and signature calculation shown earlier stay in place ...
    var isDateValid = IsDateValid(requestMessage);
    if (!isDateValid)
        return false;

    // disallow duplicate messages being sent within the validity window (5 mins)
    if (MemoryCache.Default.Contains(signature))
        return false;

    var result = requestMessage.Headers.Authorization.Parameter == signature;
    if (result == true)
    {
        MemoryCache.Default.Add(signature, username,
            DateTimeOffset.UtcNow.AddMinutes(Configuration.ValidityPeriodInMinutes));
    }
    return result;
}

private bool IsDateValid(HttpRequestMessage requestMessage)
{
    var utcNow = DateTime.UtcNow;
    var date = requestMessage.Headers.Date.Value.UtcDateTime;
    if (date >= utcNow.AddMinutes(Configuration.ValidityPeriodInMinutes)
        || date <= utcNow.AddMinutes(-Configuration.ValidityPeriodInMinutes))
    {
        return false;
    }
    return true;
}

For simplicity I didn’t test the example with the server and client residing in different time zones (although as long as we normalize the dates to UTC we should be safe here).

The Database Timeline

1961 Development begins on the Integrated Data Store, or IDS, at General Electric. IDS is generally considered the first “proper” database. It was doing NoSQL and Big Data decades before today’s NoSQL databases.

1967 IBM develops the Information Control System and Data Language/Interface (ICS/DL/I), a hierarchical database for the Apollo program. ICS later became the Information Management System (IMS), which was included with IBM’s System/360 mainframes.

1970 IBM researcher Edgar Codd publishes his paper A Relational Model of Data for Large Shared Data Banks, establishing the mathematics used by relational databases.

1973 David R. Woolley develops PLATO Notes, which would later influence the creation of Lotus Notes.

1974 Development begins at IBM on System R, an implementation of Codd’s relational model and the first use of the structured query language (SQL). This later evolves into the commercial product IBM DB2. Inspired by Codd’s research, Michael Stonebraker and Eugene Wong at the University of California, Berkeley begin development on INGRES, which became the basis for PostgreSQL, Sybase, and many other relational databases.

1979 The first publicly available version of Oracle is released.

1984 Ray Ozzie founds Iris Associates to create a PLATO-Notes-inspired groupware system.

1988 Lotus Agenda, powered by a document database, is released.

1989 Lotus Notes is released.

1990 Objectivity, Inc. releases its flagship object database.

1991 The key-value store Berkeley DB is developed.

2003 LiveJournal open sources the original version of Memcached.

2005 Damien Katz open sources CouchDB.

2006 Google publishes BigTable paper.

2007 Amazon publishes Dynamo paper. 10gen starts coding MongoDB. Powerset open sources its BigTable clone, HBase. Neo4j is released.

2008 Facebook open sources Cassandra.

2009 ReadWriteWeb asks: “Is the relational database doomed?” Redis released. First NoSQL meetup in San Francisco.

2010 Some of the leaders of the Memcached project, along with Zynga, open source Membase.

Switch vs Router vs Hub vs Bridge Vs Repeater Vs Wireless Access Point

The following analysis compares switches, routers, hubs, bridges, repeaters and wireless access points, and highlights the differences among them for various networks.

Comparison of the network layer at which each device operates

Device                   Network Layer
Hub                      1 (Physical)
Bridge                   2 (Data)
Switch                   2 (Data) or 3 (Network)
Router                   3 (Network)
Repeater                 1 (Physical) or 2 (Data)
Wireless Access Point    1 (Physical), 2 (Data) or 3 (Network)

Comparison and properties of a Hub


A hub is the simplest of the five devices compared here.

Hubs cannot filter data, so data packets are sent to all connected devices/computers; each device has to decide whether it needs the packet. This can slow down the network overall.

Hubs do not have the intelligence to find the best path for data packets, which leads to inefficiency and waste.

A hub pretty much repeats the signal coming in on one port out to all the others.

Hubs are used on small networks where the volume of data transmitted is not very high.

Comparison and properties of a Bridge

A bridge is more complex than a hub.

A bridge maintains a MAC address table for both LAN segments it is connected to.

A bridge has a single incoming and a single outgoing port.

A bridge filters traffic on the LAN by looking at MAC addresses.

Unlike a hub, a bridge looks at a packet’s destination before forwarding it, and does not transmit onto the other LAN segment if the destination is not found there.

Bridges are used to separate parts of a network that do not need to communicate regularly but still need to be connected.
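
To make the MAC-table idea concrete, here is a toy sketch in plain shell of how a bridge decides whether to filter or forward a frame. The host names, segment names and table entries are made up for illustration only; a real bridge learns its table from the source addresses of frames it sees.

```shell
# Toy model of a bridge's MAC address table: one shell variable per learned host.
mac_host1=segmentA      # learned: host1 lives on segment A
mac_host2=segmentB      # learned: host2 lives on segment B

# forward <destination-host> <segment-the-frame-arrived-from>
forward() {
  eval dst_segment=\$mac_$1
  if [ "$dst_segment" = "$2" ]; then
    echo "filter: $1 is on the same segment, no need to cross the bridge"
  else
    echo "forward: $1 is on $dst_segment"
  fi
}

forward host1 segmentA   # prints: filter: host1 is on the same segment, no need to cross the bridge
forward host2 segmentA   # prints: forward: host2 is on segmentB
```

The point of the lookup is exactly the property described above: traffic whose destination is on the segment it arrived from never crosses the bridge.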

Comparison and properties of a Switch


Compared to a bridge, a switch has multiple ports.

Switches can perform error checking before forwarding data.

Switches are very efficient: they do not forward packets that fail error checking, and they forward good packets selectively, to the correct devices only.

Switches can operate at layer 2 (based on MAC address) or layer 3 (based on IP address), depending on the type of switch.

Usually large networks use switches instead of hubs to connect computers within the same subnet.

Comparison and properties of a Router


A router, like a switch, forwards packets based on an address.

A router uses the IP address to forward packets, which allows the network to span different protocols.

Routers forward packets using software, while a switch (layer 3, for example) forwards them using hardware called ASICs (Application-Specific Integrated Circuits).

Routers support different WAN technologies but switches do not.

Wireless routers have an access point built in.

The most common home use for routers is to share a broadband internet connection. The router has a public IP address and that address is shared with the network. When data comes through the router it is forwarded to the correct computer.

Comparison and properties of a wireless access point

A wireless access point bridges wireless and wired traffic.

A wireless access point allows devices/computers to connect to the LAN wirelessly.

A wireless access point allows wired and wireless devices to communicate with each other.

Comparison and properties of a Repeater

Repeaters are often built into hubs and switches. A repeater cleans, amplifies and resends a signal that has been weakened by traveling a long distance over a cable.

Understanding and Using Systemd

Systemd components graphic

Image courtesy Wikimedia Commons, CC BY-SA 3.0

Like it or not, systemd is here to stay, so we might as well know what to do with it.

systemd is controversial for several reasons: It’s a replacement for something that a lot of Linux users don’t think needs to be replaced, and the antics of the systemd developers have not won hearts and minds, but rather the opposite, as evidenced in this famous LKML thread where Linus Torvalds banned systemd dev Kay Sievers from the Linux kernel.

It’s tempting to let personalities get in the way. As fun as it is to rant and rail and emit colorful epithets, it’s beside the point. For lo so many years Linux was content with SysVInit and BSD init. Then came add-on service managers like the service and chkconfig commands. Which were supposed to make service management easier, but for me were just more things to learn that didn’t make the tasks any easier, but rather more cluttery.

Then came Upstart and systemd, with all kinds of convoluted add-ons to maintain SysVInit compatibility. Which is a nice thing to do, but good luck understanding it. Now Upstart is being retired in favor of systemd, probably in Ubuntu 14.10, and you’ll find a ton of systemd libs and tools in 14.04. Just for giggles, look at the list of files in the systemd-services package in Ubuntu 14.04:

$ dpkg -L systemd-services

Check out the man pages to see what all of this stuff does.

It’s always scary when developers start monkeying around with key Linux subsystems, because we’re pretty much stuck with whatever they foist on us. If we don’t like a particular software application, or desktop environment, or command there are multiple alternatives and it is easy to use something else. But essential subsystems have deep hooks in the kernel, all manner of management scripts, and software package dependencies, so replacing one is not a trivial task.

So the moral is things change, computers are inevitably getting more complex, and it all works out in the end. Or not, but absent the ability to shape events to our own liking we have to deal with it.

First systemd Steps

Red Hat is the inventor and primary booster of systemd, so the best distros for playing with it are Red Hat Enterprise Linux, RHEL clones like CentOS and Scientific Linux, and of course good ole Fedora Linux, which always ships with the latest, greatest, and bleeding-edgiest. My examples are from CentOS 7.

Experienced RH users can still use service and chkconfig in RHEL 7, but it’s long past time to dump them in favor of native systemd utilities. systemd has outpaced them, and service and chkconfig do not support native systemd services.

Our beloved /etc/inittab is no more. Instead, we have a /etc/systemd/system/ directory chock-full of symlinks to files in /usr/lib/systemd/system/. /usr/lib/systemd/system/ contains the init scripts; to start a service at boot, it must be linked into /etc/systemd/system/. The systemctl command does this for you when you enable a new service, like this example for ClamAV:

# systemctl enable clamd@scan.service
ln -s '/usr/lib/systemd/system/clamd@scan.service' '/etc/systemd/system/'
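
Under the hood, enabling is just symlink management. You can see the equivalent operation in a scratch directory, so nothing on the real system is touched (the paths below are illustrative stand-ins for /usr/lib/systemd/system/ and /etc/systemd/system/):

```shell
# Sketch of what "systemctl enable" boils down to: a symlink from the
# packaged unit file into the directory systemd reads at boot.
mkdir -p /tmp/systemd-demo/usr-lib /tmp/systemd-demo/etc
touch /tmp/systemd-demo/usr-lib/clamd@scan.service        # the packaged unit file
ln -sf /tmp/systemd-demo/usr-lib/clamd@scan.service /tmp/systemd-demo/etc/
ls -l /tmp/systemd-demo/etc/                              # shows the "enabled" symlink
```

Disabling a service removes the symlink again; the unit file itself stays put.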

How do you know the name of the init script, and where does it come from? On CentOS 7 they’re broken out into separate packages. Many servers (for example Apache) have not caught up to systemd and do not have systemd init scripts. ClamAV offers both systemd and SysVInit init scripts, so you can install the one you prefer:

$ yum search clamav

So what’s inside these init scripts? We can see for ourselves:

$ less /usr/lib/systemd/system/clamd@scan.service
.include /lib/systemd/system/clamd@.service
[Unit]
Description = Generic clamav scanner daemon
[Install]
WantedBy = multi-user.target

Now you can see how systemctl knows where to install the symlink, and this init script also includes a dependency on another service, clamd@.service.

systemctl displays the status of all installed services that have init scripts:

$ systemctl list-unit-files --type=service
UNIT FILE              STATE
chronyd.service        enabled
clamd@.service         static
clamd@scan.service     disabled

There are three possible states for a service: enabled, disabled, and static. Enabled means it has a symlink in a .wants directory. Disabled means it does not. Static means the service is missing the [Install] section in its init script, so you cannot enable or disable it. Static services are usually dependencies of other services and are controlled automatically. You can see this in the ClamAV example, as clamd@.service is a dependency of clamd@scan.service, and it runs only when clamd@scan.service runs.

None of these states tell you if a service is running. The ps command will tell you, or use systemctl to get more detailed information:

$ systemctl status bluetooth.service
bluetooth.service - Bluetooth service
   Loaded: loaded (/usr/lib/systemd/system/bluetooth.service; enabled)
   Active: active (running) since Thu 2014-09-14 6:40:11 PDT
  Main PID: 4964 (bluetoothd)
   CGroup: /system.slice/bluetooth.service
           |_4964 /usr/bin/bluetoothd -n

systemctl tells you everything you want to know, if you know how to ask.


These are the commands you’re probably going to use the most:

# systemctl start [name.service]
# systemctl stop [name.service]
# systemctl restart [name.service]
# systemctl reload [name.service]
$ systemctl status [name.service]
# systemctl is-active [name.service]
$ systemctl list-units --type service --all

systemd has 12 unit types. .service units are system services, and when you’re running any of the above commands you can leave off the .service extension, because systemd assumes a service unit if you don’t specify something else. The other unit types are:

  • Target: group of units
  • Automount: filesystem auto-mountpoint
  • Device: kernel device names, which you can see in sysfs and udev
  • Mount: filesystem mountpoint
  • Path: file or directory
  • Scope: external processes not started by systemd
  • Slice: a management unit of processes
  • Snapshot: systemd saved state
  • Socket: IPC (inter-process communication) socket
  • Swap: swap file
  • Timer: systemd timer.

It is unlikely that you’ll ever need to do anything to these other units, but it’s good to know they exist and what they’re for. You can look at them:

$ systemctl list-units --type [unit name]

Blame Game

For whatever reason, it seems that the proponents of SysVInit replacements are obsessed with boot times. My systemd systems, like CentOS 7, don’t boot up all that much faster than the others. It’s not something I particularly care about in any case, since most boot speed measurements only measure reaching the login prompt, and not how long it takes for the system to completely start and be usable. Microsoft Windows has long been the champion offender in this regard, reaching a login prompt fairly quickly, and then taking several more minutes to load and run nagware, commercialware, spyware, and pretty much everything except what you want. (I swear if I see one more stupid Oracle Java updater nag screen I am going to turn violent.)

Even so, for anyone who does care about boot times you can run a command to see how long every program and service takes to start up:

$ systemd-analyze blame
  5.728s firewalld.service
  5.111s plymouth-quit-wait.service
  4.046s tuned.service
  3.550s accounts-daemon.service

And several dozen more. Well, that’s all for today, folks. systemd is already a hugely complex beast, and we have only scratched the surface.

A Guide to International Payment Preferences

Global e-commerce promises huge opportunities for merchants but, as is often the case with buried treasure, there are many challenges to overcome and no clear map to follow. One of the most overlooked but important of these obstacles involves online payment methods.

The latest research indicates that 68 percent of consumers have abandoned an online retail site due to its payment process. Almost half of them chose not to complete a transaction because they weren’t offered their preferred payment option. This makes it clear exactly how crucial it is that merchants provide prospective customers in each locality with their payment method of choice, as well as staying on top of new payment options as they become available.

Offering a variety of credit card schemes is not always enough though — in some countries, credit cards aren’t the payment method of choice. Here, a short guide to payment preferences around the world.

United States

In the U.S., buyers predominantly use credit cards, although eWallets are also very popular. One 2014 survey found that 79 percent of respondents had made payments using PayPal, and 40 percent through Google Wallet.


Europe

Europe is a diverse payment market. Credit card sales are becoming increasingly popular, but many customers still prefer real-time banking options through which they’re redirected to their online bank accounts to submit payment. Some payment methods are pan-European, but most are localized per country.

Localized European payment methods

Almost half of all online transactions in the U.K. are paid by credit card. Debit cards account for some 35 percent of e-commerce payment. PayPal is the country’s third most popular online payment method. Although alternative payment methods are not yet widespread in the U.K., a new digital payment ecosystem called Zapp is expected to have a significant impact on online payments when it’s launched later this year. Zapp puts near real-time payments on buyers’ mobile phones through their existing mobile banking application, enabling secure payments between consumers and merchants.

In France, Carte Bleue debit cards account for 85 percent of all e-commerce transactions. Carte Bleue recently introduced a voice-authorization security mechanism to ensure greater e-commerce cybersecurity. Other payment methods used in France include credit cards and PayPal.

In the Netherlands, iDEAL is a popular payment method in online stores. When checking out, the customer authorizes the pre-filled payment instruction. Once payment is authorized, the amount due is debited from the customer’s account and transferred to the merchant’s bank account.

In Finland and Sweden, real-time bank transactions account for up to 35 percent of the market share. Finland has 10 bank brands offering different real-time banking solutions, and Sweden has four.

Klarna is a major payment method offered by more than 15,000 e-stores in Sweden, Norway, Finland, Denmark, Germany, the Netherlands and Austria. About 20 percent of all e-commerce sales in Sweden go through Klarna.

Pan-European payment methods

SEPA (Single Euro Payments Area) is a European Union payment-integration initiative currently in the making. Its aim is to simplify bank transfers denominated in euros. A total of 33 European countries are taking part in SEPA: the 28 EU member states, the four countries of the European Free Trade Association and Monaco. SEPA, which doesn’t distinguish between national and cross-border transactions, will handle credit transfers and direct debits. Direct debit will enable creditors to collect funds from a debtor’s account if the payer has granted the biller a signed mandate.

The SOFORT payment platform offers currency conversion and is used in 10 European countries (Germany, Austria, Switzerland, the UK, Italy, Spain, Poland, Hungary, Slovakia and the Czech Republic). This method doesn’t require a second account (wallet) or registration. A multi-level authentication process and one-time validity ensure secure transactions.


Japan

For the Japanese, payment method mistrust is a big issue when it comes to online shopping. Many customers prefer to pay for online goods with cash at convenience stores called Konbinis. After credit cards, Konbinis constitute 25 percent of the market. So if you’re selling in Japan, Konbini is a vital payment method.


China

Alipay dominates online payments in China, claiming 60 percent of market share. This platform recently launched a mobile wallet application, offering online-to-offline payments. PayEase is another popular payment service provider, enabling comprehensive payment services like mobile payments via SMS, internet banking, call centers and POS terminals.

Cash on delivery is also quite popular in China, and UnionPay credit cards play a central payment role for merchants entering the Chinese market.


Russia

The Russian Federation’s most widespread payment method is Qiwi, which offers self-service kiosks that are active around the clock. They are located in malls and on the streets, similar to ATMs. Payments can also be made on WIN PC terminals, which are widely used in mobile dealer shops.

Yandex is another widely used payment service that offers online stores a universal payment solution for accepting online payments. The platform enables merchants to accept the most popular payment methods in Russia and other CIS countries, including bank cards, credit cards and the Yandex.Money and WebMoney e-wallets. Currently, more than 65,000 online stores accept Yandex.Money and 22 percent of Russians regularly use it to make payments.


India

Internet bank payments are the preferred choice in India, but prepaid cards and cash payments are also widely used. Mobile payments are rapidly gaining popularity in this region.


Asia-Pacific

Mobile payment systems are on the rise in the Asia-Pacific region, with more than two-thirds of consumers acquainted with these methods having used digital wallets and SMS payments last year.

Latin America

The greatest cause of shopping cart abandonment in Mexico, Peru, Argentina and Colombia is the fear of security risks. As such, local and regional online payment sites are still the most trusted methods. DineroMail and MercadoPago specialize in the Latin American market.

As a rule, Brazilians have fairly low credit card limits, so almost half of online purchases are made via installment plans. Boleto Bancário is also popular; this payment process is comparable to wire transfer and cash payment methods. After receiving a pre-filled Boleto Bancário bank slip, the customer can pay for the online purchase using cash at any bank branch or via authorized processors like supermarkets or regular banking points.


Africa

In Africa, the mobile payment market has proven to be more popular than banking services. In fact, mobile payment users already outnumber bank account holders. M-Pesa is a widespread mobile-phone-based money transfer and micro-financing service that enables users to deposit, withdraw and transfer money using a mobile device. This system enables users to deposit money into an account stored on the user’s cell phone, send payments using PIN-secured SMS text messages and redeem deposits for cash.


So how can a retailer keep track of these and hundreds of other localized alternative payment methods? Many merchants have adopted data-driven, flexible payment platforms that enable them to offer optimal payment methods in every location. The result is a pleasurable shopping experience that buyers will be eager to repeat.

Docker and DevOps: Why it Matters

Unless you have been living under a rock for the last year, you have probably heard about Docker. Docker describes itself as an open platform for distributed applications for developers and sysadmins. That sounds great, but why does it matter?

Wait, virtualization isn’t new!?

Virtualization technology has existed for more than a decade, and in the early days it revolutionized how the world managed server environments. The virtualization layer later became the basis for the modern cloud, with virtual servers being created and scaled on demand. Traditionally, virtualization software was expensive and came with a lot of overhead. Linux cgroups have existed for a while, but more recently Linux containers came along and added namespace support to provide isolated environments for applications. Vagrant + LXC + Chef/Puppet/Ansible have been a powerful combination for a while, so what does Docker bring to the table?

Virtualization isn’t new and neither are containers, so let’s discuss what makes Docker special.

The cloud made it easy to host complex and distributed applications, and therein lies the problem. Ten years ago applications looked straightforward and had few complex dependencies.


The reality is that application complexity has evolved significantly in the last five years, and even simple services are now extremely complex.


It has become a best practice to build large distributed applications using independent microservices. The model has changed from monolithic to distributed to now containerized microservices. Every microservice has its dependencies and unique deployment scenarios which makes managing operations even more difficult. The default is not a single stack being deployed to a single server, but rather loosely coupled components deployed to many servers.

Docker makes it easy to deploy any application on any platform.

The need for Docker

It is not just that applications are more complex; more importantly, the development model and culture have evolved. When I started engineering, developers had dedicated servers with their own builds if they were lucky. More often than not your team shared a development server, as it was too expensive and cumbersome for every developer to have their own environment. Times have changed significantly, as the cultural norm nowadays is for every developer to be able to run complex applications off of a virtual machine on their laptop (or a dev server in the cloud). With the cheap on-demand resources provided by cloud environments, it is common to have many application environments: dev, QA, and production. Docker containers are isolated but share the same kernel and core operating system files, which makes them lightweight and extremely fast. Using Docker to manage containers makes it easier to build distributed systems, since applications can run on a single machine or across many virtual machines with ease.

Docker is both a great software project (Docker engine) and a vibrant community (DockerHub). Docker combines a portable, lightweight application runtime and packaging tool and a cloud service for sharing applications and automating workflows.
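
To make the packaging side concrete, here is a minimal, hypothetical Dockerfile; the base image, package, and file names are illustrative only. Everything the service needs is declared in one file, which Docker builds into a portable image.

```dockerfile
# Hypothetical Dockerfile for a tiny Python service (names are illustrative).
FROM ubuntu:14.04                       # base OS layer
RUN apt-get update && \
    apt-get install -y python           # runtime dependency
COPY app.py /app.py                     # the application itself
CMD ["python", "/app.py"]               # what runs when the container starts
```

Building this (`docker build -t myapp .`) and running it (`docker run myapp`) gives the same environment on a laptop, a bare-metal server, or a cloud VM, which is the portability argument in a nutshell.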

Docker makes it easy for developers and operations to collaborate

DevOps professionals appreciate Docker as it makes it extremely easy to manage the deployment of complex distributed applications. Docker also manages to unify the DevOps community whether you are a Chef fan, Puppet enthusiast, or Ansible aficionado. Docker is also supported by the major cloud platforms including Amazon Web Services and Microsoft Azure which means it’s easy to deploy to any platform. Ultimately, Docker provides flexibility and portability so applications can run on-premise on bare metal or in a public or private cloud.

DockerHub provides official language stacks and repos


The Docker community is built on a mature open source mentality with the corporate backing required to offer a polished experience. There is a vibrant and growing ecosystem brought together on DockerHub. This means official language stacks for the common app platforms, so the community has officially supported, high-quality Docker repos, which in turn means wider and better support.

Since Docker is so well supported, many companies offer support for Docker as a platform, with official repos on DockerHub.
