The AWS ECR and the Tavve Dev Container
“A journey of a thousand miles begins with a single step.” — Lao Tzu
A leading provider of secure network optimization software and appliances, Tavve has been helping customers manage their on-premises networks for years. Their Software Development Life Cycle (SDLC) is battle-hardened and mirrors their customers’ on-premises network change management processes. As more of their customers make their way to public, private, and hybrid cloud deployment models, Tavve is refining and evolving the way they deliver software and manage their SDLC in the cloud.
Tavve has partnered with Defiance Digital, an AWS Consulting Partner, to begin their journey to the cloud. Defiance reviewed Tavve’s priorities and desired outcomes, the motivations behind them, and their initial requirements, and ultimately settled on a plan for the first step of that journey: a proof of concept to automate their development environment using Docker and AWS’s Elastic Container Registry (ECR) service. After the POC, Tavve and Defiance will be ready to create and walk the roadmap that ends with Tavve’s entire SDLC workload in AWS.
The Problem
Within Tavve’s strategy to move their SDLC workload to AWS, one of the biggest challenges in their SDLC process was onboarding a new developer and setting up the development environment. The process was overly manual and involved reading through and executing numerous wiki pages. Due to the complexity, it was often done by an existing developer who was familiar with the process, understood the importance of each step, and could execute it reliably. This could take three or more days, during which that (probably senior-level) developer was not directly contributing to the product.
This process doesn’t scale well. Defiance viewed it as a great place to run the initial AWS POC, with the added benefit of greatly reducing the onboarding time for new teammates. It would also empower new developers to complete the onboarding themselves instead of relying on an existing developer. These changes have greatly reduced the time to first commit.
Additionally, Tavve was forced to go remote in the spring of 2020 and discovered that their development machines had to be on Tavve’s internal network. This meant all their developers were using a VPN to connect back to the office and a VNC client to access their in-office machines, which often caused issues when network connectivity went down.
The POC
From here on, it’s we and us, not they and them! Great team stories are written together.
The first step in containerizing the development environment was taking all the file dependencies we had for a specific product and packaging them into their own Docker image. This included Java runtime environments and various third-party JARs the application leveraged. These make up the runtime dependencies of the products. Most of those JARs were stored on a network file share, and mounting that share was part of the initial setup of any new development environment.
Since these dependencies were slow-changing and quite large, we packaged them centrally and allowed developers to leverage them as a base image on their local machines. We chose a simple “<product>:<version>-<sub>” tagging scheme. A few hours to build Dockerfiles and push the images to AWS’s ECR and we were all set. We expect these images will also be used as we adopt Docker into our automation processes, like builds and testing, leveraging AWS’s Elastic Container Service.
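As a rough sketch of what that central packaging looks like (the repository name, tag, account ID, and region below are placeholders, and we assume the ECR repository already exists and the Docker client has already authenticated to it):

```bash
# Build the runtime dependency image and tag it with the <product>:<version>-<sub> scheme
docker build -t packetranger:1.0-runtime .

# Re-tag it with the ECR repository URI (account ID and region are placeholders)
docker tag packetranger:1.0-runtime \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/packetranger:1.0-runtime

# Push it so every developer (and, later, our build automation) pulls the same base
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/packetranger:1.0-runtime
```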
Then we started working on the Dockerfile for the development machines. Our new runtime dependency images are used as the base, and from there we add our system dependencies for development. Our `yum` commands came directly from wikis that had been maintained by the development teams:
```dockerfile
RUN yum -y update && yum clean all
RUN yum -y groupinstall "Fonts"
RUN yum -y install \
        yum-plugin-ovl iproute \
        epel-release \
        gtk3 \
        libstdc++-devel glibc-devel gcc gcc-c++ make \
        glibc-devel.i686 libstdc++-devel.i686 ncurses-devel.i686 \
        net-snmp net-snmp-utils \
        ntp ntpdate \
        rpm-build \
        sudo git \
    && \
    update-alternatives --auto java && \
    java -version
```
There was some additional user setup, guided again by the wikis:
```dockerfile
RUN mkdir -p /home/$USER && \
    echo "$USER:x:1000:1000:Developer,,,:/home/$USER:/bin/bash" >> /etc/passwd && \
    echo "eng:x:1000:" >> /etc/group && \
    chown $USER:eng -R /home/$USER
```
Locale setup was next; again, we were able to just copy and paste from the wikis.
```dockerfile
ENV LANG=en_US
RUN chmod 777 /usr/lib/locale/locale-archive
RUN localedef -c -f UTF-8 -i en_US en_US
```
If you want to sudo, you need a password, so we generate a random one if one wasn’t provided:
```dockerfile
ARG PASSWORD
RUN [ -z "$PASSWORD" ] && export PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1); \
    usermod --password $PASSWORD $USER
```
Next, we added our user to the sudoers file.
RUN echo "$USER ALL=(ALL) ALL" >> /etc/sudoers.d/user && chmod 0440 /etc/sudoers.d/user
And then finally we set up the PATH:
```dockerfile
ENV PATH=/home/$USER/IBM/TeamConcert/scmtools/eclipse:$PATH
```
We also originally included the installation of Eclipse, the team’s chosen IDE. However, that was removed when we upgraded to a version of Eclipse that doesn’t support a command-line installer (it’s all GUI now). With that, we took a step back and chose not to make this Docker image that opinionated about the IDE.
Now we had our Dockerfile all set up and just needed to build and run it. We began by providing some basic shell scripts to build and execute the image. After some feedback from the development team, we decided to invest in a tailor-built CLI instead. Thanks to frameworks like yargs, this was a trivial effort. Now the development team could simply install the CLI and execute a few commands:
```bash
tavve-dev login
tavve-dev build packetranger:1.0
tavve-dev run packetranger:1.0
```
The ‘login’ command uses the current AWS credentials to log in to ECR. The ‘build’ command builds the Docker image. Finally, the ‘run’ command runs the container.
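Under the hood, ‘login’ wraps the standard ECR authentication flow. A minimal sketch, assuming AWS CLI v2 and using a placeholder account ID and region:

```bash
# Exchange the current AWS credentials for an ECR auth token and feed it to Docker
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
```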
The ‘run’ command was where the tailor-built CLI came in really handy. Running a container was going to be subtly different for each developer. Each developer will choose to have different volumes mounted. Some will choose to run an IDE inside the container (and need the X11 socket mounted as a volume) while others may choose to use the IDE on their host machine. Thanks to yargs, all of this can be saved in an .rc file and easily customized by each developer.
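For a developer who runs the IDE inside the container, the command the CLI assembles ends up looking roughly like this; the mount paths, user home directory, and image name are illustrative, not the exact flags our CLI generates:

```bash
# Run the dev container with a source checkout mounted in, plus the X11 socket
# and DISPLAY passthrough for anyone running a GUI IDE inside the container
docker run -it --rm \
  -v "$HOME/workspace:/home/developer/workspace" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY="$DISPLAY" \
  packetranger:1.0 /bin/bash
```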
By containerizing the entire development process, we saw gains in multiple areas. First, and most importantly, we were able to shift the onboarding process from an existing developer to the new developer, since all the steps were standard and already well documented (like how to install Docker). Second, we were able to greatly reduce the overall time it took by removing complex steps like NFS mounts and user setup, things easily represented inside the container. What used to take days can now be done in less than a day. Finally, we were able to completely remove the need to be connected to the internal network, which greatly improved the development experience in this new remote-friendly world.
While we focused heavily on the containerization and reducing our time to first commit, we blazed a trail right into the cloud. Leveraging AWS ECR, we provided a clean, simple and elegant solution for automating our dev environment and enabling remote work for Tavve. With this win, the Defiance/Tavve partnership is ready for its next milestone in AWS and meeting Tavve’s customers at the intersection.