
Linux Tutorial

If you have an account with AppNexus, this tutorial will walk you through launching an instance and setting up a basic web application.  If you do not yet have an account, please contact us at: sales@appnexus.com.

Overview

The building blocks of an AppNexus configuration are: the server, the instance, the image, and the pool.

Servers are actual physical machines sitting in datacenters with all of the usual components: CPUs, memory, disk, network cards, and so on.  On each server, AppNexus runs a Xen virtualization layer.  This virtualization layer allows you to install multiple operating systems that each think they're running on their own server.

Instances are individual operating systems that run on top of a virtualization layer.  Each instance has its own IP address (assigned from the customer's private VLAN), and gets a share of the CPU, memory, disk, and other resources of the underlying server.  You can install as few or as many instances on a server as you want.  The operating system won't know the difference.

Images are files that contain all of the information needed to start an instance: essentially a snapshot of a hard drive running a particular operating system.  AppNexus provides a base image for CentOS (based on Red Hat Enterprise Linux), or you can create your own.

Pools are sets of instances that share an external address for load balancing.  The load-balancing hardware will take incoming requests and hand them off to the instances in the pool based on criteria you set: round robin, least connections, and so on.  If an instance in the pool should go down for any reason (maintenance, hardware crash, software bug), the load balancer will remove it temporarily from the pool, sending more requests to the other instances.
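
Round-robin selection itself is simple to picture.  The following is a purely illustrative shell sketch (not AppNexus load-balancer code) that cycles through a pool of hypothetical node addresses, wrapping around when it reaches the end:

```shell
# Illustrative sketch only: round-robin selection over a pool of nodes.
# The addresses are hypothetical placeholders.
NODES="10.0.0.1 10.0.0.2 10.0.0.3"
RR=0
next_node() {
  # Word-split the pool into positional parameters, then advance
  # to the node at position (RR mod pool-size).
  set -- $NODES
  shift $(( RR % $# ))
  NODE=$1
  RR=$(( RR + 1 ))
}
```

Calling next_node repeatedly sets NODE to each address in turn, then wraps back to the first; a real load balancer layers health checks on top so that a down node is skipped.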

In this quick start guide, we will use the AppNexus command-line tools to reserve a server, launch some instances, and create a pool to load balance traffic between them.

Step 1: Log in to your management instance

Initially you filled out a customer questionnaire that included VLAN settings and a public key from a public-private key pair.  For more information on key pair authentication, please see the Key Pair Authentication page.
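
If you still need to generate such a key pair, OpenSSH's ssh-keygen works; the file name below is just an example:

```shell
# Generate an RSA key pair with OpenSSH (the file name is an arbitrary
# example; -N "" means no passphrase, for illustration only).
# The .pub half is what you submit on the questionnaire; the private
# half stays with you and is used for SSH authentication later.
ssh-keygen -t rsa -b 2048 -f ./appnexus_key -N ""
```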

Then you received a welcome email that included information about your management instance.  This management instance is your initial entry point into your AppNexus environment.  From this instance, you can upload images, launch instances, and set up your environment.  Please note that your management instance takes up minimal resources on a server.  We recommend you use the rest of the server's resources for other instances.  This will not affect your management instance.

To log in to your management instance, SSH to the address specified in your welcome email.  You will use your private key for authentication.

$ ssh root@8.19.XX.XX

The AppNexus command-line tools (CLI) have been installed in /usr/bin.  These have been configured with your ssh keys, so you can use them without any further configuration.

Step 2: Reserve a Server

Since each physical AppNexus server is dedicated to a particular customer, you need to reserve servers before launching instances.  Your management instance is running on a server that's already reserved for you.

To see what servers you already have reserved, use the list option on the manage-server command.

$ manage-server list --reserved --username <USERNAME>
.-----------------------------------------------------------------------------------------------------------------.
|                                                     Reserved Servers                                            |
+---------+---------------+----------+--------+-------+-------+-------------------+-----------------+-------------+
| id      | hostname      | status   | config | rack  | cores | avail_memory (MB) | avail_disk (GB) | description |
+---------+---------------+----------+--------+-------+-------+-------------------+-----------------+-------------+
| NYM1:39 | 029.webb.nym1 | reserved | webb   | 01-12 |     4 |              7192 |         630.000 |             |
'---------+---------------+----------+--------+-------+-------+-------------------+-----------------+-------------'

Note: You must authenticate yourself in order to use the API/CLI commands. You can authenticate either by specifying your login on the command line or by placing your credentials in the CLI configuration file 'rpc.cfg'. Refer to the API and CLI Documentation for details.

This says that we have already reserved the server NYM1:39 with the server specification "webb" in rack 01-12, with a quad-core CPU, 7192MB of memory, and 630GB of available disk space.  (Some resources are taken up by the management instance.)  You can get even more information about this server with the --verbose parameter: manage-server list --reserved --verbose.  This will, for example, include status, cpu_speed, total_memory, total_disk, and ip_address. Of special importance in the verbose listing is the lease_expires_on column, which shows the date when you will need to release the server back to the pool of available servers. If you would like to see where a rack is located in the datacenter, see our Datacenters page.

Let's see what servers are available to reserve at the moment.

$ manage-server list --available --username <USERNAME> | head -n 10
.-------------------------------------------------------------------------------------------------------.
|                                           Available servers                                           |
+----------+---------------+--------+-------+-------+-------------------+-----------------+-------------+
| id       | hostname      | config | rack  | cores | avail_memory (MB) | avail_disk (GB) | description |
+----------+---------------+--------+-------+-------+-------------------+-----------------+-------------+
| NYM1:7   | 007.webb.nym1 | webb   | 01-14 |     4 |              7448 |         650.000 |             |
| LAX1:8   | 012.weba.lax1 | weba   | 01-01 |     4 |              7448 |         650.000 |             |
| LAX1:51  | 006.dbb.lax1  | dbb    | 01-03 |     8 |             15360 |         481.000 |             |
| LAX1:68  | 057.webb.lax1 | webb   | 01-05 |     4 |              7448 |         650.000 |             |
| LAX1:71  | 060.webb.lax1 | webb   | 01-05 |     4 |              7448 |         650.000 |             |

As you can see, there are a variety of servers available.  Let's reserve a basic "webb" box.  To do so, choose the ID of a box from the list of available servers.  The ID takes the form <datacenter>:<server ID>

$ manage-server reserve --server-id=NYM1:7 --username <USERNAME>
Server 007.webb.nym1.appnexus.net (NYM1:7) has been reserved successfully

If we list our reserved servers again, we'll now see both our management server and the server we just reserved.


Step 3: Launch your first instances

The next step is to launch new instances on our servers.  First we need to choose an image to run.  For now, we'll use CentOS (an open-source rebuild of Red Hat Enterprise Linux).  For convenience, AppNexus provides a pre-built CentOS image in the images directory of the public Network Attached Storage share, which is mounted on your management instance.

When we start an instance, we can decide which of the server's resources to give it.  By assigning limited resources, we can share the server across multiple functions, which can be very cost-efficient.  Let's start a small instance on each of our servers.  We'll assign each one 10000MB of disk, 1GB of memory, and 1 CPU core.  For now we will let the API assign the first available IP address from our VLAN.

If you have more than one user and more than one key pair for authentication, or if you are launching an instance on behalf of another user, you may want to use the "--authorized-keys" option to add extra public keys when launching an instance.  If no keys are added to the instance in this step, the instance will not be accessible.  For more information, see Key Pair Authentication.

$ manage-instance launch --share-name=public --path=images/centos5-base/centos-current.fs.tgz \
                         --name=NYM1:first_instance --server-id=NYM1:39 --cpu-units=1 --memory=1024mb --disk=10000mb \
                         --username <USERNAME> --authorized-keys <AUTHORIZED_KEYS>
Instance starting:
	id: 158, IP: 8.12.73.92
$ manage-instance launch --share-name=public --path=images/centos5-base/centos-current.fs.tgz \
                         --name=LAX1:second_instance --server-id=LAX1:20 --cpu-units=1 --memory=1024mb --disk=10000mb \
                         --username <USERNAME> --authorized-keys <AUTHORIZED_KEYS>
Instance starting:
	id: 159, IP: 8.19.73.92

When an instance is launched, it takes a couple of minutes to copy over the image file and boot the operating system.  We can use the list command to watch the process:

$ manage-instance list --username <USERNAME>
.---------------------------------------------------------------------------------------------------------------------------.
|                                                         Instances                                                         |
+-----------+---------------------+-----------------+-----------+-------------+----------+-----------+----------+-----------+
| id        | name                | server_hostname | server_id | ip_address  | state    | cpu_units | memory   | disk      |
+-----------+---------------------+-----------------+-----------+-------------+----------+-----------+----------+-----------+
| NYM1:143  | management_instance | 029.webb.nym1   | NYM1:39   | 8.12.73.87  | running  |         1 | 1862 MB  | 80000 MB  |
| NYM1:158  | first_instance      | 029.webb.nym1   | NYM1:39   | 8.12.73.92  | starting |         1 | 1024 MB  | 10000 MB  |
| LAX1:159  | second_instance     | 005.webb.lax1   | LAX1:20   | 8.19.73.92  | starting |         1 | 1024 MB  | 10000 MB  |
'-----------+---------------------+-----------------+-----------+-------------+----------+-----------+----------+-----------'

After a few minutes, the state changes from "starting" to "running":

$ manage-instance list --username <USERNAME>
.---------------------------------------------------------------------------------------------------------------------------.
|                                                         Instances                                                         |
+-----------+---------------------+-----------------+-----------+-------------+----------+-----------+----------+-----------+
| id        | name                | server_hostname | server_id | ip_address  | state    | cpu_units | memory   | disk      |
+-----------+---------------------+-----------------+-----------+-------------+----------+-----------+----------+-----------+
| NYM1:143  | management_instance | 029.webb.nym1   | NYM1:39   | 8.12.73.87  | running  |         1 | 1862 MB  | 80000 MB  |
| NYM1:158  | first_instance      | 029.webb.nym1   | NYM1:39   | 8.12.73.92  | running  |         1 | 1024 MB  | 10000 MB  |
| LAX1:159  | second_instance     | 005.webb.lax1   | LAX1:20   | 8.19.73.92  | running  |         1 | 1024 MB  | 10000 MB  |
'-----------+---------------------+-----------------+-----------+-------------+----------+-----------+----------+-----------'
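
Rather than re-running the list command by hand, a small polling loop can wait for that transition.  This is only a sketch: the command and instance name come from the listings above, and the grep on the table output is an assumption about its format.

```shell
# Sketch: poll a listing command until the named instance reports "running".
# $1: a command that prints the instance table, $2: the instance name.
wait_for_running() {
  _try=0
  while [ "$_try" -lt 60 ]; do
    if eval "$1" | grep "$2" | grep -q running; then
      return 0             # the instance's row now contains "running"
    fi
    sleep 5
    _try=$(( _try + 1 ))
  done
  return 1                 # gave up after roughly five minutes
}
# e.g. wait_for_running 'manage-instance list --username <USERNAME>' first_instance
```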

Now all we need to do is log in to the instances!  Each instance has your base SSH key installed, so logging in is easy:

$ ssh -i /etc/appnexus/userkey.pem root@IP_ADDRESS
The authenticity of host '8.19.73.92 (8.19.73.92)' can't be established.
RSA key fingerprint is f0:56:c5:6b:b0:09:7e:2f:f2:f2:54:59:85:30:51:e0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '8.19.73.92' (RSA) to the list of known hosts.
Last login: Thu Nov 15 00:44:48 2007
[root@006 ~]#

Step 4: Configure your application

Now that the instance has started, let's poke around a bit, starting with the CPU:

[root@003 ~]# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Intel(R) Xeon(R) CPU           E5440  @ 2.83GHz
stepping        : 6
cpu MHz         : 2826.248
cache size      : 6144 KB
...

That's good: the operating system sees only one core.  Let's take a look at memory:

[root@003 ~]# cat /proc/meminfo
MemTotal:      1054476 kB
MemFree:        752180 kB
Buffers:         13060 kB
Cached:         203368 kB
SwapCached:          0 kB
Active:          87088 kB
Inactive:       157848 kB
SwapTotal:     2097144 kB
SwapFree:      2097144 kB
...

As expected, the operating system sees 1GB of RAM.  How about disk?

[root@003 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.7G  2.0G  7.2G  22% /
tmpfs                 515M     0  515M   0% /dev/shm
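
The three checks above can be rolled into one quick summary.  These are standard Linux /proc interfaces, nothing AppNexus-specific:

```shell
# Quick summary of the resources this instance sees, using standard
# Linux interfaces (/proc/cpuinfo, /proc/meminfo, df).
echo "cores : $(grep -c ^processor /proc/cpuinfo)"
echo "memory: $(awk '/^MemTotal/ {print $2, $3}' /proc/meminfo)"
df -h /
```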

Let's check out what packages are installed in our CentOS image:

[root@003 ~]# yum list
Loading "installonlyn" plugin
Setting up repositories
Reading repository metadata in from local files
Installed Packages
Deployment_Guide-en-US.noarch            5.0.0-19.el5.centos    installed
GConf2.x86_64                            2.14.0-9.el5           installed
GConf2.i386                              2.14.0-9.el5           installed
MAKEDEV.x86_64                           3.23-1.2               installed
NetworkManager.x86_64                    1:0.6.4-6.el5          installed
NetworkManager-glib.x86_64               1:0.6.4-6.el5          installed
...

OK, lots of packages.  Is Apache installed?

[root@003 ~]# yum info httpd
Loading "installonlyn" plugin
Setting up repositories
Reading repository metadata in from local files
Installed Packages
Name   : httpd
Arch   : x86_64
Version: 2.2.3
Release: 6.el5.centos.1
Size   : 2.9 M
Repo   : installed
Summary: Apache HTTP Server

If it isn't installed, you can install it with:

[root@003 ~]# yum install httpd

Great!  We're ready to launch a web site.  Let's start Apache and configure it to start automatically on reboot.

[root@003 ~]# chkconfig httpd on
[root@003 ~]# /sbin/service httpd start
Starting httpd: httpd: apr_sockaddr_info_get() failed for 003.cust003.lax1
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
[  OK  ]
[root@003 ~]# /sbin/service httpd status
httpd (pid 1921 1920 1919 1918 1917 1916 1915 1914 1912) is running...

We'll install a dummy web page on the server so that we know our installation worked:

[root@003 ~]# echo "Hello World, this is instance one" > /var/www/html/index.html
[root@003 ~]# curl http://localhost
Hello World, this is instance one

Excellent!  Let's repeat these steps on our other instance:

[root@003 ~]# logout
[root@001 api-tools]# ssh -i userkey.pem root@8.19.73.92
[root@004 ~]# chkconfig httpd on
[root@004 ~]# /sbin/service httpd start
Starting httpd: httpd: apr_sockaddr_info_get() failed for 004.cust003.lax1
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
[  OK  ]
[root@004 ~]# echo "Hello World, this is instance two" > /var/www/html/index.html
[root@004 ~]# logout

Let's make sure everything works from our management instance:

[root@001 api-tools]# curl http://8.12.73.92/index.html
Hello World, this is instance one
[root@001 api-tools]# curl http://8.19.73.92/index.html
Hello World, this is instance two

Success!
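
Before putting nodes behind a load balancer, it is worth scripting this check.  The helper below is our own sketch (the function name is not part of any AppNexus tooling); the curl flags are standard:

```shell
# Sketch: report whether a web node answers within 5 seconds.
# $1: the URL to probe.
check_node() {
  if curl -s --max-time 5 "$1" >/dev/null 2>&1; then
    echo "OK"
  else
    echo "FAILED"
  fi
}
# e.g.:
#   check_node http://8.12.73.92/index.html
#   check_node http://8.19.73.92/index.html
```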

Step 5: Load Balancing Our Instances

Before we can create a load-balancing pool, we need to reserve a virtual IP (VIP) to assign to the pool.  We use a VIP because a single IP can serve multiple LB pools, as long as each pool uses a separate port.

We reserve an LB VIP just as we reserve servers.  Make sure you use an IP located in the same datacenter as your load-balancing pool.

$ manage-lb-ip list --available --username <USERNAME> | head -n 10
.------------------------.
| Available IP addresses |
+------------------------+
| ip                     |
+------------------------+
| LAX1:8.19.72.153       |
| LAX1:8.19.72.154       |
| NYM1:8.19.72.155       |
| NYM1:8.19.72.156       |
| NYM1:8.19.72.157       |

$ manage-lb-ip reserve --ip=NYM1:8.19.72.155 --username <USERNAME>
IP address NYM1:8.19.72.155 has been reserved successfully

With a load-balancer VIP in hand, let's create our first pool.  Although there are various options for directing traffic, we are going to stick to the default, which is round robin.  Please see Configuring Local Load Balancing for details on more advanced options.

$ manage-lb-pool create --name=my-first-pool --ip=NYM1:8.19.72.155 --port=80 --username <USERNAME>
LB pool created:
        id: NYM1:87
        name: my-first-pool

Next we add our two instances to the pool.  If you've forgotten the IPs, just use the manage-instance command to look them up:

$ manage-instance list --username <USERNAME>
.---------------------------------------------------------------------------------------------------------------------------.
|                                                         Instances                                                         |
+-----------+---------------------+-----------------+-----------+-------------+----------+-----------+----------+-----------+
| id        | name                | server_hostname | server_id | ip_address  | state    | cpu_units | memory   | disk      |
+-----------+---------------------+-----------------+-----------+-------------+----------+-----------+----------+-----------+
| NYM1:143  | management_box      | 029.webb.nym1   | NYM1:39   | 8.12.73.87  | running  |         1 | 1862 MB  | 80000 MB  |
| NYM1:158  | first_instance      | 029.webb.nym1   | NYM1:39   | 8.12.73.92  | running  |         1 | 1024 MB  | 10000 MB  |
| LAX1:159  | second_instance     | 005.webb.lax1   | LAX1:20   | 8.19.73.92  | running  |         1 | 1024 MB  | 10000 MB  |
'-----------+---------------------+-----------------+-----------+-------------+----------+-----------+----------+-----------'

$ manage-lb-pool add-node --name={DATACENTERID}my-first-pool --node=8.12.73.92:80 --username <USERNAME>
LB pool 'my-first-pool (87)' with IP address 8.19.72.155
Node 8.12.73.92:80 added

Repeat with any other nodes you would like to add.  Let's add node 8.19.73.92.

Next, let's check whether the load balancer sees our instances as healthy by using the status command:

$ manage-lb-pool status --name={DATACENTERID}my-first-pool --username <USERNAME>
LB pool 'my-first-pool (87)' with IP address 8.19.72.155
SSL certificate not set
.---------------------------------------.
|                 Nodes                 |
+------------+------+-------------------+
| ip         | port | status            |
+------------+------+-------------------+
| 8.12.73.92 |   80 | MONITOR_STATUS_UP |
| 8.19.73.92 |   80 | MONITOR_STATUS_UP |
'------------+------+-------------------'

Let's check the LB VIP to see if we get a hello world:

$ curl 8.19.72.155
Hello World, this is instance two

And we're done!  We have successfully set up two load-balanced instances.  Note that when you curl the load balancer repeatedly, you may well keep hitting the same instance even though the selected method is "round robin."  Our load-balancing hardware reuses TCP connections to limit the number of open connections to the nodes.
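
To see the alternation despite that connection reuse, you can force a fresh TCP connection per request.  This is our own sketch using standard curl flags; the VIP is the one reserved above:

```shell
# Sketch: fetch a URL several times over fresh TCP connections so that
# round-robin alternation between the pool's nodes becomes visible.
# $1: the URL, $2: the number of requests.
probe_pool() {
  _i=0
  while [ "$_i" -lt "$2" ]; do
    curl -s --no-keepalive "$1"   # --no-keepalive disables connection reuse
    _i=$(( _i + 1 ))
  done
}
# e.g. probe_pool http://8.19.72.155/ 4
```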

Global Load Balancing

You can increase redundancy and availability for your applications by balancing your traffic globally between the Los Angeles and New York datacenters. You can find out more about DNS-based global server load balancing (GSLB) on the Global Load Balancing Documentation and Global Load Balancing wiki pages.

Further Reading

As always, please create a ticket at https://portal.appnexus.com/ or contact us at support@appnexus.com if you have any questions or concerns.