Infrastructure As Code: The ODIN Way

This post details how I accelerated the deployment of cloud environments by creating a web portal called ODIN (Optimal Deployment In Network). This is Part I of a continuing series.

Discussing The Problem

CI/CD is most often implemented by grabbing code, running tests, and deploying the code in question to a specific environment. I wanted to produce an application that could create infrastructure with a simple button click. I began brainstorming different ways of building the application, but first I needed a precise problem to solve.

It quickly became clear that a major pain point was the bottleneck in deploying environments for the development team, a process that took days and even weeks. Mitigating this pain point seemed like a good place to start because it would optimize an important part of the current workflow and also pave the way for implementing CI/CD in the future.

ODIN would automate the process and make dev/test environments available to developers on demand via an easy “one-click” process that takes minutes instead of days, optimizing and streamlining deployments. This process can also be extended in the future to be triggered automatically as part of a CI/CD pipeline.

Overview

I designed a solution for creating a self-service web portal that automates and accelerates the process of delivering cloud environments upon request. At a high level, the solution works as follows:

Steps

  1. Users access the self-service portal, a web app deployed on Azure App Service.
  2. Users are authenticated against Azure Active Directory using their work accounts.
  3. The web app requests a new deployment from the Azure Resource Manager using the chosen template and the Azure .NET SDK.
  4. Azure Resource Manager deploys the requested environment, which includes virtual machines running Puppet agents.
  5. The Puppet master serves configuration settings to the Puppet agents, which configure the virtual machines and perform the necessary software installations.

But Why?

Before we get started, we should quickly address the “what’s the point?” question.

Writing infrastructure as code is pretty nice, and I enjoy the declarative approach of defining what you want and letting the tooling take care of the how. Using ARM templates (or CloudFormation templates in AWS) allows a developer to create quick, precise environments every single time.

Below are three main practices that ODIN encourages:

  • Self-service environment. The solution as a whole implements the “self-service environment” DevOps practice because it allows users to trigger the deployment of new cloud environments in a fully automated manner with a simple button click.
  • Infrastructure as code. The use of Resource Manager templates allows a team to manage “infrastructure as code,” a DevOps practice that allows developers to define and configure infrastructure (such as cloud resources) consistently while using software tools such as source control to version, store, and manage these configurations.
  • Security through eradication. Over 250,000 new malicious programs are found every day. Antivirus says your systems are fine. Intrusion prevention systems say you are safe. Then you hire a professional to scan your network, and they conclude that a backdoor was previously installed. You have no idea how many systems have been compromised. Did I mention that it is a Saturday? Infrastructure as code lets you grab a template from source control, add some new firewall rules and parameters, and invoke it with ODIN. Within minutes you can have your entire infrastructure rebuilt without the infection. With ODIN, you can rebuild all of your servers, mount your original data, and continue with business as usual.

Divide and Conquer

To provide a proof-of-concept implementation, work was divided into three areas, each focusing on a different part of the solution:

  1. Implementing the self-service portal, code-named ODIN.
  2. Authoring a Resource Manager template to deploy as an example environment.
  3. Using Puppet to automate post-deployment configuration.

ODIN

The portal was implemented as an ASP.NET Core web application deployed to an Azure Web App. The application was connected to my personal Azure Active Directory for user authentication, and I used the Azure .NET SDK to access Azure Resource Manager for deploying environments.

User sign-in with Azure Active Directory

The web app was configured to authenticate as described in the article ASP.NET web app sign-in and sign-out with Azure AD. This lets users access the application with their existing work credentials and lets the web application easily retrieve user profiles.

In order to communicate with Azure, you must provide important information identifying your application. I chose to keep my keys in a JSON file. The information that you need is as follows:
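
Below is a minimal sketch of what that configuration might look like. The section name ("AzureAd") and property names match the values the code in this post reads, but the file name (appsettings.json) and the placeholder values are assumptions; substitute your own tenant and application IDs.

{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "TenantId": "<your-directory-tenant-id>",
    "ClientId": "<your-application-client-id>",
    "CallbackPath": "/signin-oidc"
  }
}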

Once you have the correct keys in place, we have to configure authentication for Azure AD in Startup.ConfigureServices:


public void ConfigureServices(IServiceCollection services)
{
    // Cookie-based authentication backed by Azure AD (OpenID Connect)
    services.AddAuthentication(sharedOptions =>
    {
        sharedOptions.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        sharedOptions.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddAzureAd(options => Configuration.Bind("AzureAd", options))
    .AddCookie();

    services.AddMvc();

    // Adding SignalR with a 10-second keep-alive
    services.AddSignalR(options =>
    {
        options.KeepAliveInterval = TimeSpan.FromSeconds(10);
    });

    services.AddOptions();

    // Dependency injection registrations
    services.Configure<AzureAdOptions>(Configuration.GetSection("AzureAd"));
    // Expose the bound options POCO for direct constructor injection
    services.AddSingleton(provider =>
        provider.GetRequiredService<IOptions<AzureAdOptions>>().Value);
    services.AddSingleton<IHelper, Helper>();
    services.AddSingleton<IDeploymentTemplateTask, DeploymentTemplateTask>();
}

Finally, it is helpful to create an extension method that binds the keys from the JSON config to a POCO class. This gives a developer the ability to inject the POCO directly into a class constructor via dependency injection.


public static class AzureAdAuthenticationBuilderExtensions
{
    public static AuthenticationBuilder AddAzureAd(this AuthenticationBuilder builder)
        => builder.AddAzureAd(_ => { });

    public static AuthenticationBuilder AddAzureAd(this AuthenticationBuilder builder, Action<AzureAdOptions> configureOptions)
    {
        // Bind the AzureAd config section and wire up OpenID Connect
        builder.Services.Configure(configureOptions);
        builder.Services.AddSingleton<IConfigureOptions<OpenIdConnectOptions>, ConfigureAzureOptions>();
        builder.AddOpenIdConnect();
        return builder;
    }

    private class ConfigureAzureOptions : IConfigureNamedOptions<OpenIdConnectOptions>
    {
        private readonly AzureAdOptions _azureOptions;

        public ConfigureAzureOptions(IOptions<AzureAdOptions> azureOptions)
        {
            _azureOptions = azureOptions.Value;
        }

        public void Configure(string name, OpenIdConnectOptions options)
        {
            options.ClientId = _azureOptions.ClientId;
            options.Authority = $"{_azureOptions.Instance}{_azureOptions.TenantId}";
            options.UseTokenLifetime = true;
            options.CallbackPath = _azureOptions.CallbackPath;
            // Acceptable for local development; leave metadata validation on in production
            options.RequireHttpsMetadata = false;
        }

        public void Configure(OpenIdConnectOptions options)
        {
            Configure(Options.DefaultName, options);
        }
    }
}
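
The AzureAdOptions POCO itself is not shown above; a minimal sketch, assuming only the properties the code actually reads, might look like this:

public class AzureAdOptions
{
    public string ClientId { get; set; }
    public string Instance { get; set; }
    public string TenantId { get; set; }
    public string CallbackPath { get; set; }
}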

Deploy an ARM Template

The web application needs to send requests to Azure Resource Manager to deploy new environments. For this purpose, the web application uses the Azure .NET SDK to programmatically communicate with Azure Resource Manager and deploy the requested Resource Manager JSON template. See the step-by-step tutorial on how to deploy a template using .NET. Below is the code that I use to deploy a virtual machine to Azure based on a well-formatted ARM template.


public async Task<DeploymentExtendedInner> CreateTemplateDeploymentAsync(TokenCredentials credential,
    string groupName, string deploymentName, string subscriptionId, string templatePath,
    string templateParametersPath)
{
    #region Fail Fast

    if (string.IsNullOrEmpty(templatePath))
        throw new ArgumentNullException(nameof(templatePath), "Template cannot be null!");

    if (string.IsNullOrEmpty(templateParametersPath))
        throw new ArgumentNullException(nameof(templateParametersPath), "Parameter template cannot be null!");

    #endregion

    // Read the template and its parameter file from disk as JSON
    var templateFileContents = GetJsonFileContents(templatePath);
    var parameterFileContents = GetJsonFileContents(templateParametersPath);

    var deployment = new Deployment
    {
        Properties = new DeploymentPropertiesInner
        {
            // Incremental mode leaves existing, unmodified resources in the group untouched
            Mode = DeploymentMode.Incremental,
            Template = templateFileContents,
            Parameters = parameterFileContents["parameters"].ToObject<JObject>()
        }
    };

    try
    {
        using (var resourceManagementClient = new ResourceManagementClient(credential))
        {
            resourceManagementClient.SubscriptionId = subscriptionId;
            return await resourceManagementClient.Deployments.CreateOrUpdateAsync(groupName, deploymentName,
                deployment.Properties, CancellationToken.None);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine(exception.Message);
        throw;
    }
}
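
For completeness, here is a rough sketch of how this method might be called. Acquiring the TokenCredentials is not shown in the original code; the ADAL-based approach below is one common way to do it, assuming a service principal (client ID and secret) with contributor rights on the subscription. All names and paths are placeholders.

// Assumes the Microsoft.IdentityModel.Clients.ActiveDirectory (ADAL) and Microsoft.Rest packages
public async Task DeployExampleAsync(string tenantId, string clientId, string clientSecret, string subscriptionId)
{
    var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{tenantId}");
    var token = await authContext.AcquireTokenAsync(
        "https://management.azure.com/",                 // Azure Resource Manager audience
        new ClientCredential(clientId, clientSecret));
    var credentials = new TokenCredentials(token.AccessToken);

    // Resource group, deployment name, and template paths below are placeholders
    var result = await CreateTemplateDeploymentAsync(credentials,
        "odin-dev-rg", "odin-deployment-001", subscriptionId,
        "Templates/vm.json", "Templates/vm.parameters.json");

    Console.WriteLine(result.Properties.ProvisioningState);
}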

Infrastructure as code using Resource Manager templates

Azure Resource Manager templates (ARM templates) are the preferred way of automating the deployment of resources to Azure Resource Manager. ARM templates are JavaScript Object Notation (JSON) files in which the resources you want to deploy are described declaratively. Because templates are plain JSON, they can be versioned in source control, which reinforces the idea of write once, deploy forever.
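
To make the shape of a template concrete, here is a minimal, illustrative skeleton that deploys a single storage account; it is not one of ODIN’s actual templates:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-01-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}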

Puppet Enterprise virtual machine extensions

As described earlier, I chose to use Puppet for automating post-deployment virtual machine configuration. This means that Puppet Enterprise agents need to be installed on the virtual machines defined in the Resource Manager templates. To make this process truly automatic, the agents need to be installed as soon as the virtual machines are created. This can be achieved by using Azure virtual machine extensions, which allow you to perform an array of post-deployment operations on Windows or Linux virtual machines.
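
The snippet below sketches how a Puppet agent extension might be attached to a VM inside an ARM template. The overall resource shape is standard, but the publisher, type, version, and settings keys shown here are assumptions; verify the current values for the Puppet agent extension (for example with az vm extension image list) before using them:

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/puppetAgent')]",
  "apiVersion": "2016-03-30",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Puppet",
    "type": "PuppetAgent",
    "typeHandlerVersion": "1.5",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "PUPPET_MASTER_SERVER": "[parameters('puppetMasterFqdn')]"
    }
  }
}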

Puppet

Once virtual machines are deployed for the requested environment, post-deployment virtual machine configuration is handled by Puppet. For more information on Puppet, visit the Puppet official website and the Puppet Enterprise overview.

Installing Puppet

To install Puppet Enterprise, see the official installation guide.

Alternatively, the Azure Marketplace offers a preconfigured Puppet Enterprise template that allows users to deploy a Puppet Enterprise environment (including the Puppet master server and UI console) within minutes.

Accepting Agents

For security reasons, a Puppet master needs to accept agents that attempt to connect to it. By default, this must be done manually using the console. To make the process truly automatic, the Puppet master needed to be configured to automatically accept certificates.

This can be achieved by adding the line autosign = true to the [master] block in the Puppet master configuration file /etc/puppetlabs/puppet/puppet.conf:
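
# /etc/puppetlabs/puppet/puppet.conf (only the relevant block shown)
[master]
autosign = true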

Note: This was done specifically for this POC. Do not automatically accept agents this way in a production environment.

Configuring Agents

For the purpose of the POC, I decided to showcase three Puppet post-deployment actions to be executed on the Windows Server 2012 R2 virtual machines (a manifest sketch follows the list):

  1. Installation of Chocolatey (a Windows package manager).
  2. Use of Chocolatey to install Firefox.
  3. Transfer of a text file to C:\.
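
A minimal Puppet manifest covering these three actions might look like the sketch below. It assumes the puppetlabs/chocolatey module is installed on the master; the node pattern, file path, and file contents are placeholders:

# site.pp - illustrative sketch only
node /^win/ {
  # 1. Install Chocolatey itself
  include chocolatey

  # 2. Use Chocolatey to install Firefox
  package { 'firefox':
    ensure   => installed,
    provider => 'chocolatey',
    require  => Class['chocolatey'],
  }

  # 3. Transfer a text file to C:\
  file { 'C:/odin.txt':
    ensure  => file,
    content => 'Provisioned by ODIN.',
  }
}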

Conclusion

This project has been a great opportunity to learn more about Azure and to strengthen DevOps best practices. I had no Azure experience before this, so learning how to set up a personal account and resource groups, create ARM templates, and communicate with Azure Resource Manager via REST was a blast!

At this point, the POC can showcase the following:

  1. User logs onto the self-service portal using corporate credentials.
  2. User chooses an environment to be deployed and includes parameters such as region and environment name.
  3. The web portal uses the associated Resource Manager template to programmatically trigger a cloud environment deployment.
  4. Virtual machines deployed in the new environment run Puppet agents that connect to the Puppet Enterprise master.
  5. Puppet agents perform the installations and configurations preconfigured on the master.

By automating the existing process, ODIN has managed to optimize and accelerate an important part of the dev/test process that used to take days and require manual work.

Looking forward, I have defined the following focus areas for extending the solution:

  • Externalizing template parameters to be configured by the user.
  • Adding additional templates to be available to users.
  • Grabbing templates from BLOB storage.
  • Connecting the implemented solution to the CI/CD process.
  • Re-branding the front end using React with TypeScript.
  • Settings that allow a user to use a hyperlink for a particular template.
  • Ability to upload a template directly to BLOB storage or a particular storage component.
  • ODIN admins can configure ODIN to use GitHub, Bitbucket, or various storage components for templates.
  • Integrating CloudFormation templates for AWS.
  • Integrating the ability to deploy Docker images and containers.

In Part II, we will integrate the consumption of ARM templates located in BLOB storage and make the application front end more user-friendly using React components. Thanks for reading!

Tools, Software, and More

Below are the tools that were used to create ODIN:

  • .NET Core v2.0
  • OpenID Connect
  • Docker
  • ARM Template Base with Parameter Template
  • Puppet
  • Visual Studio
  • Visual Studio Code
  • TypeScript
  • SignalR
  • Azure Admin Access

Additional Resources

Restricting guest access to a Microsoft Teams tab linked to a SharePoint document library

Guest access is a new feature in Microsoft Teams that allows different organizations to collaborate in a shared environment. Anyone with a business or consumer email account, such as Outlook, Gmail, or others, can participate as a guest in Teams with full access to team chats, meetings, and files.

With the Teams and SharePoint site integration of an Office 365 group, you can now restrict access to a certain document library in SharePoint and have these access restrictions replicate to the corresponding Teams tab. This blog post provides a walk-through of the steps for restricting a document library in SharePoint and creating a corresponding Teams tab linked to that document library.

Restricting access for SharePoint document library

  1. In your SharePoint site, navigate to the document library you wish to secure.
  2. Click on the gear icon at the top of the page, and click on ‘Library settings’.
  3. Click on ‘Stop Inheriting Permissions’ under the Permissions tab.
  4. Select all current permission groups and click ‘Remove User Permissions’.
  5. Click on ‘Grant Permissions’.
  6. Add the users you want to be able to access the document library.
  7. Click on ‘Show Options’.
  8. Select the appropriate permission level.
  9. Click ‘Share’.

Now only users you have explicitly given access to, along with site owners, will be able to access that document library.

Creating a tab in Teams linked to the document library

  1. Navigate to the appropriate Team.
  2. Click on the ‘+’ sign for the channel that you want the tab to be created on.
  3. Click on ‘Document Library’.
  4. Under ‘Relevant sites’, choose the SharePoint site that you created the document library in.
  5. Click ‘Next’.
  6. Pick the document library you want to add.
  7. Change the name of the tab that will be displayed in Teams.
  8. Click ‘Save’.

Now only the Office 365 Group members who have been explicitly given access to this document library will be able to view it in Teams. This allows different organizations to collaborate on projects while maintaining the ability to restrict access to sensitive content.

Securing an ASP.NET Core App on Ubuntu Using Nginx and Docker (Part I)

Typically, when you develop with ASP.NET you have the luxury of IIS Express taking care of SSL and hosting; however, IIS and IIS Express are exclusive to the Windows platform. ASP.NET Core 1.0 has decoupled the web server from the environment that hosts the application. This is great news for cross-platform developers, since web servers other than IIS, such as Apache and Nginx, may be set up on Linux and Mac machines.

This tutorial involves using Nginx as the web server to host a dockerized .NET Core web application with SSL termination on an Ubuntu machine. In this three-part tutorial, I’ll guide you through:

Part I (this post) – 1. Creating and publishing an ASP.NET Core web app using the new dotnet CLI tools, and 2. Installing and configuring PuTTY so we may SSH into and transfer files to our Ubuntu machine.

Part II – Setting up Docker and creating a Docker Image on Ubuntu 16.04

Part III – 1. Configuring Nginx for SSL termination and 2. Building and running the Docker image.

Continue reading “Securing an ASP.NET Core App on Ubuntu Using Nginx and Docker (Part I)”

AWS S3 Bucket Name Validation Regex

Amazon Web Services enforces a strict naming convention for buckets used for storing files. Amazon’s requirements for bucket names include:

  • A bucket’s name can be between 6 and 63 characters long, containing lowercase characters, numbers, periods, and dashes
  • Each label must start with a lowercase letter or number
  • Bucket names cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods
  • Lastly, the bucket name cannot be formatted as an IPv4 address (e.g. 255.255.255.255)

Continue reading “AWS S3 Bucket Name Validation Regex”

Rapid Development Using Online IDEs

One of the most important processes in software development is the Rapid Application Development (RAD) model. The RAD model promotes adaptability – it emphasizes that requirements should be able to change as more knowledge is gained during the project lifecycle. Not only does it offer a viable alternative to the conventional waterfall model, but it has also spawned the development of the Agile methodology, which you can learn more about here.

A core concept of the RAD model is that programmers should quickly develop prototypes while communicating with users and teammates. However, historically, this has been hard to do – when starting a project, you often need to decide which languages, libraries, APIs, and editors to use before you can begin. This takes the “rapid” out of rapid application development, and this was always a problem until online integrated development environments (IDEs) started popping up. Continue reading “Rapid Development Using Online IDEs”

Section 508 Coding Practices

So you’ve read our previous blog on what Section 508 Standards are and how to test for them, and then thought “Gosh, that’s nice, but how do I make a page 508 compliant?” Or you stumbled upon this blog from a quick search. Either way, if you’re looking for quick and easy tips on how to make your site more 508 compliant, you’re in the right place! We’ll cover a few common 508 Standards and give basic HTML examples of how to meet compliance.

Continue reading “Section 508 Coding Practices”

Sandbox Solutions vs. Farm Solutions

So you want to build a sand castle, but you’re using Sandbox Solutions. Next, the question may come up about Farm Solutions. You may ask yourself, “What are the differences?” In this blog post, I’ll cover the differences and advantages of Sandbox Solutions vs. Farm Solutions. The goal of this post is not only to introduce you to the idea of Sandbox Solutions and Farm Solutions but also to address the issues associated with each.

Continue reading “Sandbox Solutions vs. Farm Solutions”

Bootstrapping Windows Servers with AWS EC2

When working with Amazon Web Services (AWS) EC2 instances, bootstrapping refers to using scripts provided at launch to configure new EC2 instances (servers). Concerning Windows servers, there are several considerations when determining the best method of bootstrapping. Bootstrap scripts can be applied directly from the management console, but we will be looking at a programmatic method of using bootstrapping scripts through the AWS CLI. There are many alternative choices, and many can be quite elaborate; this approach is one with very few dependencies. The scope of this blog covers creating a batch file that will run at the command prompt using the AWS CLI. Continue reading “Bootstrapping Windows Servers with AWS EC2”

Best Practices for AWS EC2

There are several things to consider before clicking that “Launch” button in the AWS (Amazon Web Services) console. The more you plan and take into consideration ahead of time, the more you can save yourself a few headaches down the road. I will go over some fundamental best practices to consider before launching your EC2 instance. These topics will cover storage, security, backup/recovery, and finally management. Continue reading “Best Practices for AWS EC2”