This page acts as an introduction to Atlantis from the perspective of a user of Atlantis: a developer looking to deploy applications. It is not intended as a guide for a developer working to improve Atlantis, though it may be a helpful overview of some of its features.
This guide will help you understand the workflow to deploy a new application through Atlantis. This guide is focused on the usage of the Atlantis Dashboard, a GUI-based application. If you prefer to use the command line, the atlantis-manager binary can also be used; see its help for more information.
Log into the Atlantis Dashboard on the default https port of the manager.
A superuser must give a team permissions for an application if LDAP permissions are enabled. At that point, any team member can register and work with the application.
We assume that you have prepared a manifest using TOML (manifest.toml), which must contain the following fields that you will use when registering and deploying your app:
For example:
name = "hello-go" description = "Hello World Go Server" internal = true image = "precise64" app_type = "go1.3" run_commands = [ "./hello" ] dependencies = [ "cmk" ] cpu_shares = 5 memory_limit = 128
You only need to register your app once.
To register an Atlantis app, select Register > Apps menu item in the Atlantis Dashboard.
Click the Register radio button if you are registering your app for the first time, or the Update radio button if you are updating your app's registration information.
Specify the following information, ensuring it matches the information contained in the app's manifest.toml file:
To register a non-Atlantis app, select the Non-Atlantis checkbox and specify:
If you would like to specify dependencies on other apps, you can specify such information in the Request Dependency section. See below for more information.
Click Perform Action.
To request dependencies on applications, begin by specifying those dependencies in the manifest.toml file submitted to Github with your application. In the following example, an external application called hello-proxy-go depends on an internal application called hello-go:
name = "hello-proxy-go" description = "A proxy for the Hello World Go Server" internal = false image = "precise64" app_type = "go1.2" run_commands = [ "./hello-proxy" ] dependencies = [ "hello-go" ] cpu_shares = 5 memory_limit = 128
To request a dependency when registering your app, select Register > Apps menu item in the Atlantis Dashboard. In the Request Dependency section, specify the following information:
Click Perform Action. An email request is sent to the owner email address of the app you specified in the Dependency field. Once approved, parameters are displayed on the app registration page indicating the dependency is allowed. You must request a dependency in each region and for each environment.
Select Manage > Environments menu item in the Atlantis Dashboard. The Resolve Dependencies section provides you with the address information for the dependencies you specify. Specify the following:
The Resolved Dependencies text area provides you with the address information for the specified environment. For example:
Resolved Dependencies for Environment 'us-east-1a':
{
  "hello-go": {
    "address": "internal-router.a.us-east-1.atlantis.example.com:49156"
  }
}
Go to the manager and select Register > Apps. In App Registration select Register and check Non-Atlantis. Enter the application name and an owner e-mail for dependency requests.
Once the app is registered, configure its information. In App Environments select Add to specify the information for your app in the appropriate environments. The convention for a simple hostname is to use a single "address" field containing the URL of the service:
{
"address": "http://app-load-balancer.us-east-1.example.com:8081/"
}
Other fields may be added as needed.
Fields can also be added on a per-application basis using the App Depender Environments section. For example, database credentials should be per-application.
Note also that when you click a link to approve a dependency, you'll be brought back to this page, so it is often convenient to fill in dependency information at that point.
Consider the example of blog, entry-server, and database. Let's say that blog is an external app deployed on Atlantis owned by the frontend team, entry-server is an internal app deployed on Atlantis owned by the backend team, and database is not deployed on Atlantis but is a dependency that is required by some Atlantis apps (owned by the DBA). blog depends on entry-server while entry-server depends on database. In order for this to work, database will be registered as a Non-Atlantis app. Here is how entry-server's database dependency will work:
1. The DBA registers database as a Non-Atlantis app.
2. The DBA edits the database environments to add the Host/Port/IP data.
3. The backend team registers entry-server as an internal Atlantis app.
4. The backend team requests entry-server as a database depender for the environments staging and prod.
5. The DBA edits the database dependers to add entry-server.
6. The DBA edits database's entry-server app information to add staging and prod as allowed environments.
7. The DBA edits database's entry-server app information to add the username/password he created for entry-server.
8. The backend team deploys entry-server in both staging and prod while depending on database.
And here is how blog's entry-server dependency will work:

1. entry-server is already registered as an internal Atlantis app (from the steps above).
2. The frontend team registers blog as an external Atlantis app.
3. The frontend team requests blog as an entry-server depender for the environments staging and prod.
4. The backend team edits entry-server's dependers to add blog.
5. The backend team edits entry-server's blog app information to add staging and prod as allowed environments.
6. The frontend team deploys blog in both staging and prod while depending on entry-server.

There are separate routers for internal and external apps, and the default routing for each is significantly different. You will normally specify the router configuration after you have deployed your app. This is typically the case when the routing defaults do not serve your purposes.
Internal and external apps run on the same supervisors. The routers may be in separate security groups, and internal applications do not normally communicate with the external router. Internal apps are used by other applications (not by people), and are assigned a unique port for each application/environment pair, by default in the 49000 range. This port is configured to point to the application's trie (application.environment) when the application is first deployed in the environment, and will not change unless one of the two is deleted (tearing down all containers is not sufficient; as long as the application and environment are still registered, the port will stay constant). This is normally the only routing necessary for internal applications, and no manual configuration is required.
For an internal application, typically no specific configuration is required. Once the app is deployed in a specific environment, that app/env will be assigned a port that will not change as long as the app and environment exist. Even if all instances are torn down, the port will be retained as long as the app and environment are registered.
External applications, on the other hand, are not automatically routed and must be configured manually. Atlantis's assumption is that the application will be available on port 80, with some application-specific routing (e.g., send www.example.com/blog to the blog app).
Select the Manage > Router menu item in the Atlantis Dashboard. The Router Config Management page appears, giving you tabs to configure routing for either an external or internal app.
One useful feature of Atlantis is to have multiple versions of an application running in production, with selection criteria to select which version to send a particular request to.
To configure the routing for an internal app, specify the ports, tries, and rules. The Ports area lets you specify the port and root trie, the Tries area lets you specify which existing rules to use with the root trie, and the Rules area lets you create new rules:
The Ports area provides you with a drop down list from which you can select a port. Once you have specified a port, you can test it by appending the specified port to the internal router URL. For example, if you specified port 49156, browse to http://internal-router.us-east-1.atlantis.example.com:49156 to check whether it is running on that port.
To see routing diagnostics, specify 8080/<port>. For example, http://internal-router.us-east-1.atlantis.example.com:8080/49156/ displays the port number, trie, rules, and pools for your application.
To view all the containers on the router, specify 8080/statusz. For example, http://internal-router.us-east-1.atlantis.example.com:8080/statusz/ .
Select the Root Trie in the drop down list. In the ports section, select by port (not by name). In this case, select port 80 to see that it maps to the root trie.
Select the trie matching the Root Trie in the Ports area. The trie contains rules that have been previously created, and indicates the pool for your app version (specified by the SHA). You can add more rules to the trie by clicking the + icon and selecting from the drop down list. For example, if you deploy another instance of your app with a different SHA to the environment you can specify that instance as one of the rules. In this example, if the first container is torn down, the trie will ensure that the next app version will be used.
Rules can match on various criteria:
For example, port 80 on an external router may map to a "root" trie; one rule on the root trie might send requests on the host "api.example.com" to the "api-host" trie. This trie could then send requests with the prefix "/blog/" to the blog.production trie (among other rules).
To create a new rule, specify the following information in the Rules area:
Rule Name: Specify the name in the text box to the right of New rule.
Type: Specifies how matching will be performed: Host, Header, Path prefix, Path suffix, Path regexp, Percentage, Static, Multi-Host. For example, if you select Percentage, you could specify that 10% of traffic is to be sent to the container selected from Send to pool.
Hostname: Specify the host name from the drop-down if you specified a Type of Host.
Continue to trie: Specifies the next trie.
Once you have specified the information for the new rule, click Create Rule. The new rule becomes available in the Tries drop down menu for adding rules.
An external request may go through various steps before hitting Atlantis - a CDN such as Akamai, SSL termination via nginx, or a load-balancer like Haproxy. These steps are independent of Atlantis; we assume that a request somehow gets to Atlantis, and examine how it is routed once it hits the external router.
This is the most common case - you want to route myapplication.example.com (or a variant) to point at your application. This takes two steps: creating a rule to route to your application, and adding it to the "root" trie. This assumes a root trie is configured to route requests on port 80, so requests coming into Atlantis with the standard configuration will go to this trie.
Log into the manager, and go to Manage->Router. Select the "Internal" or "External" tab to match your application. In the third box, "Rules":
Warning: In the standard configuration, this trie handles most of the domains routed through Atlantis. Ideally, we'd have permissions on this so that you can't accidentally break other teams' rules, but that's not currently the case. Be careful, but as long as you don't click randomly around the page, you should be fine.
On the same page (Manage Router), in the "Tries" box:
And you're done. Please note that no changes are saved until you explicitly save them; if you did something wrong, you can just reload the page and your changes will be reverted. But again, if you do save changes that you think may be incorrect, please contact appsplat-oncall immediately.
To check that this works, you can follow some of the relevant steps from Troubleshooting Deployed Atlantis Containers, specifically,
curl -H 'Host: myapplication.example.com' [router]/path
to test routing to your application, and
curl -H 'Host: myapplication.example.com' [router]:8080/80/path
to see how routing works for your hostname on port 80.
Once your application responds on the router, you need to get a CNAME that directs the hostname to the appropriate source, whether the Atlantis routers, haproxy, or Akamai. This is outside the scope of Atlantis, though we do have some support for Route53 which could be extended to handle this case.
Some hosts are shared among multiple services; e.g., www.example.com/blog might go to the blog, while the rest of www.example.com should go to web-primary. To handle this, create a new trie for the host (e.g., www-host.production) that matches the hostname. Create a rule that matches the hostname as above, but instead of pointing at the app directly, point to this intermediate trie. Then, create an additional rule matching a path prefix, and point that rule at the app/env trie. Finally, add the path prefix rule to the host trie, and test as above.
There may be cases where it makes sense to use a port rather than a hostname, similar to routing for internal apps. In this case, you can simply direct a port to the automatically-created trie for your app/environment. To do so, log into the manager, and go to Manage->Router. In the top box, "Ports", type in your port number, select the trie for your app/environment, and click "Create Port". This can be any port, though we recommend avoiding high-numbered ports, since they may conflict with randomly-chosen ports used for outgoing connections. Any port under 10,000 should be safe.
Once you have your rules and trie configured, you can test your application to see if you hit the new version. Atlantis also provides basic troubleshooting; if you go to port 8080 on the router, you can append a port number to get information on how the request passes through tries and rules. This is particularly useful with the -H option of curl to set the host header:
$ curl -H 'Host: www.example.com' router.us-east-1.atlantis.example.com:8080/80/blog
port 80
trie root
rule api-host F
rule www-host T
trie www-host.production
rule blog-production F
rule primary-production F
This shows that a request to www.example.com/blog (port 80 is the default) doesn't match the api-host rule but does match www-host, so it goes to the www-host.production trie. Then all remaining rules fail, resulting in a 502.
On further examination, the blog rule requires a trailing slash:
$ curl -H 'Host: www.example.com' router.us-east-1.atlantis.example.com:8080/80/blog/
port 80
trie root
rule api-host F
rule www-host T
trie www-host.production
rule blog-production T
trie blog.production
rule static-blog-cdcd439280c8f5ff3451ec6ace19342e1330c01b-production T
pool blog-cdcd439280c8f5ff3451ec6ace19342e1330c01b-production
Here the blog rule matches, going to the blog.production trie, which has a single static rule going to the current version of the app.
When you deploy an app:
To deploy your app select the Deploy menu item in the Atlantis Dashboard.
Specify the following information, ensuring it matches the information contained in the app's manifest.toml file:
Click the Deploy button. You will see a confirmation message indicating that a Jenkins Build for the Docker image is being triggered.
When the build is finished a More Details link appears. Click that link to see additional details, including:
Click the Status menu item in the Atlantis Dashboard. You can find information about your deployed application by searching for its container ID. The entry for your container includes a link to the host.
You will normally need to specify the router configuration after you have deployed your app. This is typically the case when the routing defaults do not serve your purposes. For more information see Specifying Routing.
To tear down an app deployment, select the Deploy menu item in the Atlantis Dashboard. In the Teardown area, check the Sha, Env, and Container boxes as appropriate, and make the corresponding selections from the drop-down menus to the right. For example, to tear down the hello-go app, specify:
Click the Teardown button. A verification message appears with a Teardown ID. You can confirm this by selecting the Status menu item in the Atlantis Dashboard and searching for the SHA. You can then return to the Router Config Management page (select the Manage > Router menu item in the Atlantis Dashboard). The Tries section will no longer have the rule specifying that app version. You can then adjust the Tries and Rules sections accordingly.
Once you have finished tearing down your app:
This guide will help you figure out why your deployed Atlantis container isn't working. It's split into two acts, which share several components that will be presented first. It assumes that you generally understand Atlantis and have the command line client installed.
Without further ado:
The request flow for an internal application is simple:
Container <- Internal Router <- Client
To check if the Container is running, you can try curling it directly. Use `atlantis get-container
[container id]` or the UI to get the host and port of the container. You can make requests directly against
the container; if they seem to be working, continue to troubleshooting the Internal Router.
If the request fails, you can ssh into the container, with the `atlantis ssh [container id]` command. (Do not
directly ssh using the "SSH Port" from get-container; your keys will not be on the container, so you will get
permission denied.)
In the container, you can examine logs in `/var/log/atlantis/app0/`; one of those files will likely have useful
information from your app telling you what's wrong.
You can also examine the dependencies passed into your app in `/etc/atlantis/config/config.json` (json_pp is
installed in containers for convenient json viewing), or do anything else you want to in your container. It's
just a Linux instance.
If your application is working, the next step in the chain is the internal router. Every deployed
application/environment is automatically assigned a port, typically in the 49000 range.
You can see where the application should be available with `atlantis get-app-env-port -a [application] -e
[environment]` or in the UI. Then curl the internal-router for your region[link] at that port. It should
pass through to your application. If it does, great! The internal router is working. Continue to
troubleshooting the client.
If you get a Bad Gateway, then the router thinks there are no running containers for your
application/environment. Check the router's statusz page at router-host:8080/statusz. Find the pool
representing your application/environment/sha. If the status isn't okay, then the router isn't getting the
right Server-Status from your containers; use curl -i to make sure that /healthz returns a "Server-Status: OK"
header.
If the client isn't working, it's probably the client's fault; just be aware that firewall rules may prevent
some connections from working.
The request flow for an external application is similar:
Container <- External Router <- Client
This is identical to Act 1.
The external router is similar to the internal router, but ports aren't automatically assigned. Instead, each
application is assigned a trie to handle versioning, but that trie must be manually added to a trie descended
from the root trie. All requests come in through port 80, and are sorted by hostname or route.
Similar to the internal router, you can check the router's[link] status page at [router]:8080/statusz. If
your service does not have status OK, troubleshoot it as in Act 1. If the app is up, check routing: if you
go to [router]:8080/[port]/path, you will see how [router]:[port]/path is routed; typically you'll want to use
port 80. Note also that most routing is host-based, so you need to connect to the router, but send the host
header for the service you want. This can be done with
curl -H 'Host: [service-host]' [router]:8080/[port]/path
This will tell you where the request is being routed. If it's not getting to your app, file a ticket to get
it set up.
Dependencies show up in /etc/atlantis/config/config.json. Note that dependencies must be both in your
manifest.toml *and* approved by the appropriate team for the environment. This duplication is so that we can
check at deploy time if all dependencies are available (based on the manifest.toml), but resolve them at
deploy time based on the environment.
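As a rough sketch, an app might read its resolved dependencies at startup like this. The dependency entry mirrors the Resolved Dependencies example earlier on this page, but the surrounding layout of config.json (here a hypothetical top-level "dependencies" key) should be confirmed by inspecting the file in a real container.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Dep mirrors the per-dependency data shown in the Resolved Dependencies example.
type Dep struct {
	Address string `json:"address"`
}

func main() {
	// Hypothetical layout: dependencies keyed by name under a "dependencies" field.
	// Inspect /etc/atlantis/config/config.json in a real container (json_pp is installed)
	// to confirm the actual structure before relying on it.
	var cfg struct {
		Dependencies map[string]Dep `json:"dependencies"`
	}

	f, err := os.Open("/etc/atlantis/config/config.json")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if err := json.NewDecoder(f).Decode(&cfg); err != nil {
		panic(err)
	}

	fmt.Println("hello-go is at", cfg.Dependencies["hello-go"].Address)
}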
Atlantis ingests, rotates, and backs up logs using the facilities provided by rsyslog. These logs are co-located between containers and their supervisors, such that when a container goes away, whether due to termination or error, all of its logs are still available through the supervisor. There, the logs are categorized by origin container ID, and then split up via the following hierarchy (from least to most granular):
Container ID
App Number (Position of run command in list. This will usually be app0)
Year
Month
Day
Log Type/Name
Since a container can be running multiple run commands, we separate these logs out when collected. These are distinguished using the various local facilities provided by syslog. Namely, the first run command, and usually the main app, is called "app0" (and subsequent apps will be app1, app2, etc), and logs to syslog facility local0. This is partially done automatically for the developer, but can also be used manually.
Every deployed app is expected to follow a few conventions:
The app must have a manifest.toml file.
The app is given its ports through the environment variables $HTTP_PORT and $SECONDARY_PORT1 through $SECONDARY_PORT5. These ports will be visible but will not be configurable within the router.
The app must respond to "/healthz" on its $HTTP_PORT. It must return the HTTP status 200 and the header Server-Status set to OK if the app's health is ok (DEGRADED for degraded, CRITICAL for critical, and MAINTENANCE for maintenance). Any other HTTP status will be considered MAINTENANCE, meaning the app will not receive any traffic.
$Atlantis will be set to "true" when your app is running within Atlantis.
The internal boolean in the manifest should be set to false if the app should be visible outside of Ooyala. Otherwise it should be true (and subsequently will not be visible outside of Ooyala).
If internal is set to false, the pool for the app will be created in the external router configs. If it is set to true, the pool for the app will be created in the internal router configs.

If internal is set to true, DNS aliases will automatically be created for the app. They will be in the following format:
<appname>.<environment>.<zone>.<region>.atlantis.services.ooyala.com # for zone-specific routing. environment will be empty
# if the name matches /^(prod|production)([_-]|$)/
<appname>.<environment>.<region>.atlantis.services.ooyala.com # for region-specific routing. environment will be empty
# if the name matches /^(prod|production)([_-]|$)/
Example:
hello-go.jigish.a.us-east-1.atlantis.services.ooyala.com
hello-go.jigish.us-east-1.atlantis.services.ooyala.com
Each run command gets its own facility and folder to which its stdout and stderr are automatically logged. If your app needs log files other than these two per run command, it is possible to specify up to 8 custom facilities/folders for log files, minus one for each run command, since run commands each use a single local facility (the first uses local0, and so on). To specify custom logging facilities, add a section to your manifest as below:
[logging]
[logging.local1]
name = "access"
info = "apache.log"
debug = "apache-debug.log"
[logging.local2]
name = "routes"
debug = "debug.log"
crit = "routes.log"
Note: This example assumes that this app has only one run command; otherwise, you would get an error that the local1 facility is already being used by your second run command. It may be a good practice to start with local7 for your first custom logging group and work your way down, in order to minimize the risk of collision in case you add run commands to your app in the future.
With this in your config, your app can log to local1.info, local1.debug, local2.debug, and local2.crit; in addition to /var/log/atlantis/app0/std*.log, rsyslog will create the respective files /var/log/atlantis/access/apache.log, /var/log/atlantis/access/apache-debug.log, /var/log/atlantis/routes/debug.log, and /var/log/atlantis/routes/routes.log when you write to these facilities/priorities using syslog.
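As a sketch of how a Go run command might write to these custom facilities using the standard log/syslog package (the tag "hello-go" is just an example, and log/syslog is Unix-only):

package main

import (
	"log"
	"log/syslog"
)

func main() {
	// Matches the [logging.local1] example above: facility local1, group name "access".
	w, err := syslog.New(syslog.LOG_LOCAL1|syslog.LOG_INFO, "hello-go")
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// With the config above, rsyslog should route these to
	// /var/log/atlantis/access/apache.log and apache-debug.log respectively.
	w.Info("GET / 200 12ms")
	w.Debug("cache miss for /index.html")
}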
go1.1.2, go1.2, and go1.3

When using the go1.1.2, go1.2, or go1.3 app types, the contract is as follows:
The app must have a Makefile in its root with the make target package.
make package should create a directory called package inside the app's root directory.
If a setup_command is specified, it will be run from within the app's root directory before make package is run.
The run_command will be run from within the package directory, but the package directory will be moved outside of the app root. Contents of the package directory should not depend on anything outside of the package directory. Everything outside of the package directory will be removed from the container.
Example Repo: hello-go
ruby1.9.3

When using the ruby1.9.3 and other upcoming ruby app types, the contract is as follows:
The app must have a Gemfile in its root. Bundler will be used to install the gems.
If a setup_command is specified, it will be run from within the app's root directory before bundle install is run.
The run_command will be run from within the app's root directory.

Example Repo: hello-ruby and hello-ruby2
python2.7.3

When using the python2.7.3 app type, the contract is as follows:
The app must have a requirements.txt in its root; pip will be used to install the eggs.
If a setup_command is specified, it will be run from within the app's root directory before pip install is run.
The run_command will be run from within the app's root directory.

java1.7-scala
When using the java1.7 app type and the scala java type, the contract is as follows:
The app must have a build.sbt in its root. sbt will be used to compile the code into a jar, which will be executed.
sbt assembly will run all tests before packaging and compiling.
If a setup_command is specified, it will be run from within the app's root directory after sbt assembly is run. Note that sbt assembly is run outside of the container, but setup_command is run within the container.
The run_command will be run from within the app's root directory.

java1.7-maven
When using the java1.7 app type and the maven java type, the contract is as follows:
The app must have a pom.xml in its root. mvn will be used to compile the code into a jar, which will be executed.
mvn build will run all tests before packaging and compiling.
If a setup_command is specified, it will be run from within the app's root directory after mvn build is run. Note that mvn build is run outside of the container, but setup_command is run within the container.
The run_command will be run from within the app's root directory.

Some information about the build can be found within the container in the /etc/atlantis/build folder.
/etc/atlantis/build/branch
/etc/atlantis/build/time
/etc/atlantis/build/revlist
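For example, a hedged sketch of reading this build information from inside the container (the files are plain text; their exact formatting is not guaranteed):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, f := range []string{"branch", "time", "revlist"} {
		b, err := os.ReadFile("/etc/atlantis/build/" + f)
		if err != nil {
			fmt.Println(f+":", "unavailable:", err)
			continue
		}
		fmt.Println(f+":", strings.TrimSpace(string(b)))
	}
}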