Building Custom Runtime

Basics: Extending Default Runtime

If you are reading this tutorial, you’ve probably failed to run your application using pre-built environments (see: Configuring Your Project). Don’t worry: you can build one yourself. There are two major scenarios: either none of the pre-built environments fits your project, or a chosen environment needs some tweaks. If there’s no environment that fits your project, you’ll need to build a custom runtime from scratch (see: Writing Your Custom Dockerfile).

However, oftentimes a pre-built environment is quite OK, and it fails only because of a missing Python or PHP module or config file, or simply because it uses a different approach to installing things. In such a case, you can extend this environment, i.e. use an existing recipe but complement it with a few additional steps.

What’s My Current Environment?

The first question you will ask yourself is “What is my current runtime?”. When importing your project, you went through configuration and chose a runtime environment. You may have forgotten which one, though. Besides, the runner name and description only indicate the versions of the installed technologies and software. You may take a look at our Dockerfiles project on GitHub to find the recipe your runner uses. Yet, the best way is to click the View Recipe icon on the Runner tab (it’s also available in the Run menu). The runner recipe will open in a new editor tab.

Extending Default Environment

Having found the default runtime recipe, you can now use it in your custom recipe. There are a couple of scenarios here: you may want to replace or amend a particular Docker instruction, or add a new one. Either way, use the base image that is used in the default runtime. Refer to the Writing a Custom Dockerfile section for more details.

There is one thing that you should pay attention to – mounting sources and build artifacts. In our pre-defined environments we do not ADD projects but mount them when starting a Docker container. So, if the default runtime had:

VOLUME ["/home/user/app"]

Your custom recipe should keep this instruction as is:

VOLUME ["/home/user/app"]

This way you will tell the system which directory contains your project sources.

Also, when viewing runtime recipes, all the variables acquire real values. Thus, where the viewed recipe had:

ADD web-spring-java-simple-1.0-SNAPSHOT.war_unpack /home/user/app/

the same instruction in your custom recipe should use the variable instead (adding a build artifact):

ADD $build$ /home/user/app/

Similarly,

ADD sources_5w1p_unpack /home/user/app/

in your custom recipe will turn into:

ADD $src$ /home/user/app/

Get more details about injecting source code and build artifacts.

Basics: Writing Your Custom Dockerfile

Before You Get Started

So, you have realized that none of Codenvy’s pre-built environments is a 100% fit for your project. Odds are that the environment itself is OK, but the sequence and nature of the commands used is not what you expect. For example, you’d love to use the Codenvy Gulp environment, but the Gulp task used there is not how you start your project. You may also want to install additional tools and software, perform manipulations with files, download external resources, etc. If you have faced such a situation, create your own Docker recipe and, optionally, reconfigure your project to use this environment as the default one. Find more info on how to create a custom Dockerfile in the Runtime section of our docs.


When writing a Dockerfile, you need to clearly understand the goal. Only with a clear goal in mind can you write down all the steps necessary to achieve it. Just imagine that a Dockerfile is a terminal on your local UNIX machine: same commands, same approaches, plus a little bit of Docker and Codenvy specifics.


There are several steps to take. Some of them are obligatory, some are optional – it depends on the environment you want to build, on your project, and on how you want to start it. Let’s build a custom environment with JDK 7 and Tomcat 8 that runs on a non-default port 8081, and explain what each step means:

Step 1: Inheriting From Base Image (Mandatory)

Every Dockerfile starts with a FROM instruction. This is the base image on top of which you will build a custom environment. All Codenvy pre-built environments inherit from Codenvy base images. You can learn more about the structure of Codenvy pre-built environments at this page. We recommend inheriting from a Codenvy image because:

  • the images are lightweight. They are all based on Debian Jessie and contain the bare minimum of tools and software
  • all Codenvy images inherit from codenvy/shellinabox, which is a web-based SSH terminal. If you inherit from a Codenvy base image, you will be able to connect to the container (Terminal tab on the Runner panel)
  • there are images for the most popular programming languages/frameworks: Java, Python, PHP, Ruby, Rails, and environments with pre-installed application servers like Tomcat, JBoss, GlassFish, Jetty. Some images have been specifically built to run GAE apps – Java and Python.

Browse our DockerHub repositories or visit GitHub project.

Of course, you may inherit from any image you can find on DockerHub. There’s just one restriction: it is temporarily impossible to pull private images. You should also bear in mind that you will have to take care of a Shellinabox (or equivalent) implementation yourself, in case you need access to the container.

If you choose to inherit from a non-Codenvy image, we recommend pulling lightweight images with the necessary tools and software. Choose something thin, elegant and self-sufficient. If you are an Ubuntu fan, inherit from the minimal distribution. A lightweight image will be loaded and processed faster than a cluttered one.

We’ll use codenvy/jdk7 as a base image:

FROM codenvy/jdk7

Step 2: Installing Software (Optional)

It’s dead simple here, especially for a seasoned Ubuntu, Debian or CentOS user. There are actually several ways to install software. You can use package managers:

apt-get install for Debian, Ubuntu and other Debian-based distributions
yum install for CentOS, RHEL and Fedora

Don’t forget to use the -y flag to automatically confirm choices during the installation process. You may also want to install minimal software packages, e.g. without docs, extras and ‘install recommends’. So, if you want to install Git, go ahead and:

apt-get install git -y
yum install git -y

Depending on the user permissions, you may or may not be required to use sudo for a range of operations and installs, including apt-get and yum. If you inherit from a Codenvy image, you have to install software with sudo, since all operations are performed by a non-root user.

You can also build software from source. This will require installing additional software and compilers.

You can also just download and unpack binaries. This is what we’ll do in our image. First, Tomcat 8 gets downloaded and unpacked, and the content of the webapps directory with the Welcome pages is deleted. We should also create a directory for Tomcat beforehand:

FROM codenvy/jdk7
RUN mkdir /home/user/tomcat8 && \
    wget -qO- "" | tar -zx --strip-components=1 -C /home/user/tomcat8 && \
    rm -rf /home/user/tomcat8/webapps/*
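The download-and-unpack pattern can be tried outside Docker. Here is a sketch that builds a small local tarball in place of the Tomcat download (the archive layout, with a single top-level directory, is an assumption for illustration – it is exactly what --strip-components=1 strips):

```shell
set -e
work=$(mktemp -d)

# Fake "upstream" archive with a top-level apache-tomcat-8.x directory inside.
mkdir -p "$work/apache-tomcat-8.0.0/webapps"
echo demo > "$work/apache-tomcat-8.0.0/webapps/index.html"
tar -zcf "$work/tomcat.tar.gz" -C "$work" apache-tomcat-8.0.0

# Same flags as in the Dockerfile: strip the top-level directory so the
# contents land directly in tomcat8/, then clear webapps/.
mkdir "$work/tomcat8"
tar -zx --strip-components=1 -C "$work/tomcat8" -f "$work/tomcat.tar.gz"
rm -rf "$work/tomcat8/webapps/"*

ls "$work/tomcat8"   # webapps/ is there (and empty); no apache-tomcat-8.0.0/
```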

Step 3: Configuration/Manipulation With Files (Optional)

You may need to configure the installed software and tools. Since there’s no file manager or editor to comfortably edit files, you may need to do it from the command line. Here are a few examples:

Unbinding MySQL from localhost by editing the bind address in my.cnf:

RUN sudo sed -i.bak 's/' /etc/mysql/my.cnf

Adding a JAVA_HOME export to the .bashrc file:

RUN echo "export JAVA_HOME=$JAVA_HOME" >> /home/user/.bashrc

You may also want to view the content of some files while your environment is being built:

RUN cat /home/user/.bashrc

You can copy, move or delete files, unzip archives and perform any other operations you would perform in a local terminal. The only difference is that you do it remotely, through a set of Docker instructions.

In our example, for some reason, we do not want Tomcat to run on the default 8080 port. Let’s change it to 8081, just to show the power of a Dockerfile. We’ll need to edit Tomcat’s conf/server.xml. Let’s use sed:

FROM codenvy/jdk7
RUN mkdir /home/user/tomcat8 && \
    wget -qO- "" | tar -zx --strip-components=1 -C /home/user/tomcat8 && \
    rm -rf /home/user/tomcat8/webapps/*
RUN sudo sed -i.bak 's/8080/8081/g' /home/user/tomcat8/conf/server.xml
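You can preview what this sed does against a minimal stand-in for Tomcat’s conf/server.xml (the XML fragment is a simplified example, not the full file):

```shell
set -e
conf=$(mktemp)
cat > "$conf" <<'XML'
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
XML

# Same substitution as in the Dockerfile: every 8080 becomes 8081;
# -i.bak keeps the original file for reference.
sed -i.bak 's/8080/8081/g' "$conf"

grep 'Connector port' "$conf"   # -> <Connector port="8081" ...
```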

Step 4: Exposing and Listening to Ports (Optional)

Well, it says optional, but in most cases it’s mandatory. Only if you are running a console app that just prints Hello World do you not need to expose any ports.

There are two things to keep in mind: you may need to expose ports, and you may need to listen to ports. Port exposure is a must if you run a web application, or a database you need to connect to using the Datasource plugin. Here’s a simple example. If your Tomcat runs on the default 8080 port, you have to expose this port in the Dockerfile:

EXPOSE 8080

You may expose more than one port. Just write them in a row, separated by spaces, with no commas – EXPOSE 8080 4200. Easy!

However, this is just part of the deal. Having exposed a port, you need to listen to it if it’s a web application. To be exact, you need to tell the system where to look for the application URL that will show up in the Runner panel. Here, you should use a special Codenvy environment variable, for example:


Let’s recap the rules:

No port exposure – nothing in the app URL.

No Codenvy APP port variable – no application URL in the Runner tab.

We recommend putting these two instructions together, just so as not to miss anything. We want to deploy a war in Tomcat on the non-default port 8081, so let’s add the port instructions to our Dockerfile to have no issues with ports:

FROM codenvy/jdk7
RUN mkdir /home/user/tomcat8 && \
    wget -qO- "" | tar -zx --strip-components=1 -C /home/user/tomcat8 && \
    rm -rf /home/user/tomcat8/webapps/*
RUN sudo sed -i.bak 's/8080/8081/g' /home/user/tomcat8/conf/server.xml
EXPOSE 8081

Step 5: Environment Variables (Optional)

These may be required when installing software like Java, Maven or Grails. You will need JAVA_HOME, M2_HOME and GRAILS_HOME exported and, of course, added to PATH.

There’s one additional requirement that is relevant if you use base Codenvy images or have your own Shellinabox implementation. Due to limitations in Shellinabox, environment variables are unavailable in the running environment unless they are also saved to .bashrc or .profile. Here’s an example of declaring environment variables for Java and adding them to PATH:

ENV JAVA_HOME /opt/jdk1.7.0_55
RUN echo "export JAVA_HOME=$JAVA_HOME" >> /home/user/.bashrc
RUN echo "export PATH=$PATH" >> /home/user/.bashrc

As you see, each ENV declaration is mirrored by writing the same values to .bashrc. Setting environment variables is a must when installing many tools, so don’t miss this step when cooking your custom environment.
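Why the mirroring matters can be seen locally: a fresh shell only picks up variables that were saved to the rc file it sources. A sketch with a throwaway rc file standing in for /home/user/.bashrc (the JDK path is just the example value from above):

```shell
set -e
rc=$(mktemp)
JAVA_HOME=/opt/jdk1.7.0_55

# Same append pattern as the RUN instructions above.
echo "export JAVA_HOME=$JAVA_HOME" >> "$rc"
echo "export PATH=$PATH:$JAVA_HOME/bin" >> "$rc"

# A fresh shell does not see the plain assignment above, but it does see
# the variables once it sources the rc file:
bash -c "source '$rc'; echo \$JAVA_HOME"   # -> /opt/jdk1.7.0_55
```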

In our example, there are no additional env variables to export.

Step 6: Injecting Project Sources (Mandatory)

This is a potentially tricky step. You are almost there, but you need to add the project sources/artifacts into the image, otherwise your beautiful environment just won’t make any sense. Don’t worry, it is not complicated once you get comfortable with using two variables and following several simple rules.

Mounting Sources

You can mount a directory with your project sources when starting a container. Mounting is preferred over adding files for interpreted languages: if you add sources to the image, you won’t be able to see changes live, whereas when mounting sources you can edit files in your workspace and see the changes in the running app.

How do you mount sources? There are two mandatory instructions: mount a volume, and tell the system to mount the project sources there:

VOLUME ["/dir/in/image"]

Adding Sources

If you want to inject project sources instead (for an interpreted language this is the only injection option, since there is no build artifact), use the $src$ variable. $src$ is a zipped archive of your project. You may unpack it or add it as is.

This is how an application is added to the /home/user/app directory. Pay attention to the slash – / – after app. If you don’t put this slash after the destination directory name, you will get an error message (see Common Mistakes below):

ADD $src$ /home/user/app/

You can also add an individual file, and choose its name in the destination directory if necessary:

ADD $src$/requirements.txt /home/tmp/requirements.txt

Adding individual files may be necessary, for instance, to install some libs before the app is started. It’s just one of the use cases. Needless to say, you should make sure the file exists in your project and that you have provided the right path.

Adding Build Artifacts

For compiled languages (C/C++ is an exception, since its build is performed in the Docker image, not on the builder instance) we use the $build$ variable. $build$ stands for the build artifact. The same rules apply here: you can inject it as is or unpack it if necessary.

This is how a build artifact is added as is. The artifact name depends on the settings in the build file:

ADD $build$ /home/user/$build$

It is possible to unpack the artifact as well. Mind the slash after the destination directory name or $build$:

ADD $build$ /home/user/tomcat8/webapps/

Often, you may need to rename the build artifact when injecting it. For instance, this is usually done to load your app in an application server without adding the war name to the URL. This is exactly what we’ll do in our Dockerfile:

FROM codenvy/jdk7
RUN mkdir /home/user/tomcat8 && \
    wget -qO- "" | tar -zx --strip-components=1 -C /home/user/tomcat8 && \
    rm -rf /home/user/tomcat8/webapps/*
RUN sudo sed -i.bak 's/8080/8081/g' /home/user/tomcat8/conf/server.xml
ADD $build$ /home/user/tomcat8/webapps/ROOT.war

Step 7: Access to Terminal

Terminal is a tab on the Runner panel that offers access to a running container. You may or may not need it. All Codenvy images inherit from the codenvy/shellinabox image, which installs and runs Shellinabox – a web-based terminal.

If you inherit from any of the Codenvy images, you should not worry about access to the Terminal – you’ll always have it. If, for some reason, you want to use a different base image and still want Terminal access, there’s some work to do:

  • install and run Shellinabox
  • expose and listen to port 4200

There’s one universal way to install Shellinabox that works for any Linux distribution – building it from source. Add the following lines to your Dockerfile, and you’ll have Terminal access:

FROM library/ubuntu
RUN apt-get update && \
    apt-get -y install sudo procps wget unzip gcc make
RUN echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
    useradd -u 5001 -G users,sudo -d /home/user --shell /bin/bash -m user && \
    echo "secret\nsecret" | passwd user
RUN mkdir /opt/shellinabox && \
    wget -qO- "" | tar -zx --strip-components=1 -C /opt/shellinabox
RUN cd /opt/shellinabox && \
    ./configure && \
USER user
ENV service /:user:users:/home/user:/bin/bash
CMD sudo /opt/shellinabox/shellinaboxd --no-beep --service $service

There’s a simpler way that has been tested on Ubuntu (it doesn’t work for Debian):

RUN apt-get update && apt-get -y install shellinabox && \
    echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
    useradd -u 5001 -G users,sudo -d /home/user --shell /bin/bash -m user && \
    echo "secret\nsecret" | passwd user
CMD shellinaboxd --no-beep --disable-ssl

For RHEL, CentOS and Fedora the following approach has been tested:

RUN yum install wget sudo -y && \
    wget && \
    rpm -ivh epel-release-6-8.noarch.rpm && \
    yum install openssl shellinabox -y
RUN echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
    adduser user -g users -d /home/user && \
    echo "codenvy" | passwd --stdin user && \
    sed -i 's/requiretty/!requiretty/g' /etc/sudoers && \
    sed -i 's/--disable-ssl-menu/--disable-ssl/g' /etc/sysconfig/shellinaboxd && \
    sed 's/^/#/g' -i /etc/pam.d/*

CMD sudo service shellinaboxd start --no-beep

Let’s recall the rule here: if you inherit from a Codenvy image, you don’t need to worry about Terminal access; if you inherit from a non-Codenvy image, you take care of the Shellinabox implementation yourself.

Step 8: Start Command (Mandatory)

You will need a command that starts your container. Usually, these are commands that launch servers, services, standalone apps, or bash scripts. You can combine several start commands in one instruction. Here’s an example that starts the MySQL and Apache servers and then follows the Apache logs:

CMD sudo service mysql start > /dev/null && \
    sudo service apache2 start && \
    sudo tail -f $APACHE_LOG_DIR/access.log -f $APACHE_LOG_DIR/error.log

Our goal is to start Tomcat 8. Let’s do it:

FROM codenvy/jdk7
RUN mkdir /home/user/tomcat8 && \
    wget -qO- "" | tar -zx --strip-components=1 -C /home/user/tomcat8 && \
    rm -rf /home/user/tomcat8/webapps/*
RUN sudo sed -i.bak 's/8080/8081/g' /home/user/tomcat8/conf/server.xml
ADD $build$ /home/user/tomcat8/webapps/ROOT.war
CMD /home/user/tomcat8/bin/ run

Test Success & Recap

This may seem a bit complicated for users unfamiliar with Docker and UNIX. However, after a few attempts, you’ll be a Dockerfile ninja. Let’s recall how we built our Dockerfile: we inherited from the codenvy/jdk7 base image, downloaded and unpacked Tomcat 8, cleared its webapps directory, switched the port from 8080 to 8081 in conf/server.xml, injected the build artifact as ROOT.war, and added a CMD that starts Tomcat.

Common Mistakes

These are unavoidable. If you are not familiar with Docker, you’ll probably run your application a few times before you are happy with the result. Mistakes in a Dockerfile may differ in nature, yet they all lead to one thing – Docker fails to execute a particular instruction. Look at the logs to see where the Dockerfile fails.

Fail to add/mount sources

We’re all human and may forget certain things. You may be so focused on creating a perfect environment for your application that you actually forget to inject the project sources or build artifact. Make sure you have an ADD instruction in your recipe.

Also, you should follow the rules from the Adding Sources section. Let’s look at common mistakes and the error messages that follow them.

ADD $src$ /home/user

While you expect to inject project sources into the container, you’ll see an error message:

[DOCKER] Step 1 : ADD /home/user
[DOCKER] [ERROR] lchown /var/lib/docker/devicemapper/mnt/0c477703e9475df9ebb53c673ff03b68bfdc2ed6f/rootfs/home/user/ not a directory

This means you neither unpack the sources nor add them as a zip. There are two solutions here:

Unpack the sources. Pay attention to the slash – /:

ADD $src$ /home/user/

or:

ADD $src$/ /home/user

Add the project as a zip (you can specify the zip name), unpack it and remove the zip:

ADD $app$ /home/user/
RUN cd /home/user && unzip -q && rm -r

Instead of a zip name you may use the $src$ variable. This way the zip you add will acquire the project name:

ADD $src$/ /home/user/$src$
RUN cd /home/user && unzip -q && rm -r

The same story applies to injecting a build artifact.

No Choice and Confirmation Flags

When installing software locally, you are often asked to confirm your choices. When installation happens in a Docker container, those choices must be confirmed in advance, non-interactively. This is where -y flags are used. See: Installing Software.

For example, when attempting to install Git this way:

sudo apt-get install git

You will get the following error:

[DOCKER] 0 upgraded, 32 newly installed, 0 to remove and 62 not upgraded.
Need to get 11.3 MB of archives.
After this operation, 40.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] 
[DOCKER] Abort.
[DOCKER] [ERROR] The command [/bin/sh -c sudo apt-get install git] returned a non-zero code: 1

Here’s an example of how to pass ‘yes’ and ‘no’ user choices when updating the Android SDK and creating an AVD:

RUN echo y | android update sdk --all --filter platform-tools,android-17,sys-img-armeabi-v7a-android-17 && \
    echo no | android create avd -n myandroid -t android-17

Insufficient Permissions

You may attempt to install software without sudo and get the following error:

Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

This is because the current user isn’t root, so the installation should be performed with sudo.

You may also have insufficient permissions on files and folders. If the error message says ‘permission denied’, chances are that the current user is not authorized to access this particular resource. Solutions? Of course, there are a few tricks.

Recursively change permissions for a directory:

sudo chmod a+rw -R /home/user/application/

or, specifying the exact user:

RUN sudo chown -R user:user /home/user/application

Bear in mind that if you are not root, you do not have permissions to modify resources added through an ADD instruction. Thus, if the application server or build system needs to modify something in the app sources injected into the container, you’ll need to grant the current user adequate access rights.

Missing Files

It sometimes happens that you start an application server or a script pointing to a non-existent file. Docker will respond with an error saying it cannot locate the specified file in the specified location.

For example, you want to launch a Python application:

CMD /env/bin/python /home/user/application/$$ runserver 2>&1

This is the default instruction for the pre-built Python environment. The command expects the startup script to be in the root of /home/user/application. You have previously added sources to this directory and think that everything is going to be just fine. However, if your file is located in a subdirectory rather than the project root, you should tell Docker where to find it. Thus, the start command should be:

CMD /env/bin/python /home/user/application/YourFolder/$$ runserver 2>&1

The same concerns any individual files that you add to a container and then use. Always double-check paths.

No CMD Command

If neither the base image nor your custom Docker recipe has a CMD command, the build will finish executing the instructions and the container will exit. Therefore, you should make sure there is a command that keeps the Docker container running. Of course, if you do not need your environment to keep running, CMD isn’t really necessary. This is the case with console apps, where the app output is piped into the Runner console.

Software That Requires Software

Neither Codenvy nor Docker is to blame here. Sad but true: some software requires other software to be installed and properly configured first. Docker will give you an error message with clues about what caused the failure to execute a particular recipe instruction.

For example, to install the psycopg2 Python library, one needs libpq-dev and python-dev installed as well. So, to actually install psycopg2 with pip, you first need to:

RUN sudo apt-get -y install libpq-dev python-dev

and then:

sudo /env/bin/pip install psycopg2

Always read error messages. They are usually informative and point to the root of the problem.

Start Services in CMD or ENTRYPOINT

If you need to start a service, for instance MySQL, PostgreSQL, Mongo, Apache or anything else, you should do it in your CMD or ENTRYPOINT command. To keep Docker containers running, you need to keep a process active in the foreground. You can’t have multiple CMD lines, so if you have several services to start or commands to execute, write them in one line. There are various ways to do it:

Starting Riak database and following logs:

CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1

Starting Apache and MySQL servers:

CMD sudo service mysql start > /dev/null && \
    sudo service apache2 start && \
    sudo tail -f $APACHE_LOG_DIR/access.log -f $APACHE_LOG_DIR/error.log

Starting MySQL with arguments:

CMD ["mysqld", "--datadir=/var/lib/mysql", "--user=mysql"]

If you have lots of things to start in your CMD command, you may want to write a nice little script that starts all the services one after another. Besides, you can make this script do certain checks, and create files and directories if necessary. Here’s an example of starting MySQL and executing a JAR:

CMD sudo /home/user/

The script looks like this:


#!/bin/bash
source /home/user/.mysqlrc

echo "Waiting for MySQL server to initialize..."
service mysql start > /dev/null

if [ $? -eq 0 ] ; then
    echo "MySQL server started."

    if [ -e $JAR ] ; then
        echo "Starting application."
        # launch the executable jar; $JAR is expected to be set in .mysqlrc
        java -jar $JAR
        echo "Done."
        echo "Executable jar application doesn't exist"
    echo "Failed to start MySQL server."

# keep docker container running after stopping of application
sleep 365d

Basics: Analyzing Error Logs

Errors are unavoidable. However, when Docker fails to build an image for you, it has good reasons to do so. As said above, always look at the error messages to understand what caused the failure. All messages from Docker come with [DOCKER] at the beginning. In the example below, we have purposely tried to install the non-existent package git1. Here’s the result:

[INFO] Starting Runner @ Sat Dec 13 14:39:20 UTC 2014
[DOCKER] Step 0 : FROM codenvy/shellinabox
[DOCKER]  ---> d374f2d64431
[DOCKER] Step 1 : RUN sudo apt-get install git1
[DOCKER]  ---> Running in 5f16e635cfdc
[DOCKER] Reading package lists...
[DOCKER] Building dependency tree...
[DOCKER] Reading state information...
[DOCKER] E: Unable to locate package git1

[DOCKER] [ERROR] The command [/bin/sh -c sudo apt-get install git1] returned a non-zero code: 100
[ERROR] We are having trouble starting the runner and deploying application phpmysql. Either necessary files are missing or a fundamental configuration has changed.
The command [/bin/sh -c sudo apt-get install git1] returned a non-zero code: 100

There is a standard message about trouble starting the runner. Every user will get it, no matter what Docker error caused it, so do not pay too much attention to it. Look at the two messages above it instead: they are the operating system’s response to sudo apt-get install git1. You’d get the same response if you ran this command locally in your UNIX terminal.

A few lines above the error message you can see the step that caused the problem – [DOCKER] Step 1 : RUN sudo apt-get install git1. Now you have all the information you need to fix the error.
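To spot the failing instruction quickly in a long log, filter it down to the Docker steps and error lines. A sketch over the sample output above:

```shell
set -e
log=$(mktemp)
cat > "$log" <<'EOF'
[DOCKER] Step 0 : FROM codenvy/shellinabox
[DOCKER] Step 1 : RUN sudo apt-get install git1
[DOCKER] E: Unable to locate package git1
[DOCKER] [ERROR] The command [/bin/sh -c sudo apt-get install git1] returned a non-zero code: 100
EOF

# Keep only the steps and the errors: the failing instruction is the last
# Step printed before the first [ERROR] line.
grep -nE 'Step |ERROR' "$log"
```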

If you don’t know how to fix it, copy the error message and google it. Chances are that you’ll find a few informative and helpful Stack Overflow threads.