Ten Things We Love About the NIST Privacy Framework

By Paul Grassi and Maria Vachino

As cybersecurity and identity professionals, you might think we nerd out all day long talking about authentication, identity proofing, and federation – and you’d be mostly right. We are passionate about cybersecurity and identity but, for the record, we’ve been longtime privacy superfans. Frankly, we think good privacy enables good identity, not the other way around. Given NIST’s recent release of Version 1.0 of the Privacy Framework, it’s only fitting that we follow up last year’s blog, 10 Things We Love About the Updated Federal Identity Guidance, written by our resident privacy nerd, Jamie Danker, with 10 things we love about the newly released NIST Privacy Framework.

  1. Risk-based – Security has long embraced this approach and we’re delighted to see privacy moving in this direction. Aiming for perfect privacy has the same pitfalls that aiming for perfect security does – it isn’t achievable, doesn’t help with prioritization, and can even interfere with mission critical objectives. The solution is a risk-based approach that considers both the likelihood and impact of problematic data actions, then weighs options for mitigation against available resources and organizational needs (but don’t confuse a risk-based approach as a substitute for compliance with current laws and regulations).
  2. Actionable – The Privacy Framework is a tool that can help deconstruct legal and policy requirements into system requirements – you know, those things that are actually needed to ensure compliance with laws!
  3. Explains the Security and Privacy Relationship – We think the Privacy Framework offers one of the best explanations of the overlap and distinctions between security and privacy. It certainly helps explain to the security community and other stakeholders that protecting privacy goes beyond concerns surrounding data breaches and extends more broadly into the privacy risks that arise from data processing. Understanding the two as distinct yet interrelated helps further the development of systems, products, and services that are both secure and privacy enhancing.
  4. Emphasizes Privacy Risk Assessment – We’ve been following NIST’s Privacy Engineering Program and are glad to see the emphasis on Privacy Risk. Everyone knows adding security to a system as an afterthought doesn’t work. Well, tacking on privacy at the end doesn’t work either! We hope the new framework helps to mature privacy impact assessment practices so they become part of initial system design.
  5. Communication Tool – Ever been in a cross-functional team meeting and had the feeling that you’re not all speaking the same language? When it comes to privacy, the Framework levels the playing field. We’ve seen how the Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework) has impacted how organizations communicate about cybersecurity with its 5 Functions: Identify, Protect, Detect, Respond, and Recover. Now for privacy, you’ll want to remember these five simple words: Identify-P, Govern-P, Control-P, Communicate-P, and Protect-P. To understand the meaning behind these words, check out the Privacy Framework Core.
  6. Flexible – One of the greatest features of the Privacy Framework is its flexibility. The framework is not something to be “complied with.” Rather we think proper use of the framework is to align privacy with systems development and with security accreditation processes. As the framework notes, “deriving benefits from data while simultaneously managing risks to individuals’ privacy is not well suited to one-size fits all solutions.” This is very familiar to any system owner that has achieved an Authority to Operate (ATO) where not every single security control is relevant to a particular risk environment.
  7. Eco(system)-Friendly – The Privacy Framework acknowledges that there are many roles within the data processing ecosystem. We’re a fan of LEGOs and we see the framework as a really cool set that can be custom built based on the particular role(s) your organization takes. Even cooler is that these can be reused and interoperate with others in the data processing ecosystem.
  8. Stakeholder Driven – The Privacy Framework development process was open and transparent with workshops, webinars, and comments periods galore! We’re excited to see the launch of a Resource Repository where organizations can share profiles, crosswalks, and tools to help advance the framework’s implementation.
  9. Recognizes Digital Identity as a Key Enabler – OK, OK, yes privacy is an enabler of identity, but it actually works both ways. We have a special place in our hearts for digital identity at Easy Dynamics, so we are excited to see that the Privacy Framework’s Protect-P Function includes the Identity Management, Authentication, and Access Control category. Best practices in identity and security, such as the use of federation, can substantially reduce privacy risk by reducing the replication of PII. And strong identity controls (see NIST SP 800-63 Rev. 3) are key to compliance with privacy laws and regulations that have individual access request requirements, such as the Privacy Act of 1974, the General Data Protection Regulation (GDPR), and the California Consumer Privacy Act.
  10. Agnostic – Organizations needn’t wait for federal or state privacy legislation – they can get started using the framework as a building block right away, on a VOLUNTARY basis. Since it’s agnostic to laws and regulations, getting started can only better position organizations to respond to changes in the legislative landscape.

So, let’s take a moment to reflect on what a huge milestone this is for privacy programs, solution engineers, and individuals alike and the enormous potential the framework has to advance the privacy conversation!

Ten Things We Love About the Updated Federal Identity Guidance 

Jamie Danker, Director of Privacy, Easy Dynamics Corporation

On May 21, 2019, OMB issued a new cybersecurity memorandum, M-19-17 – Enabling Mission Delivery through Improved Identity, Credential, and Access Management, setting forth a modernized policy for the federal government’s approach to Identity, Credential, and Access Management (ICAM). This long-awaited memo represents a major overhaul of federal identity policy and strategically points agencies to the risk-based approach detailed in the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-63-3, Digital Identity Guidelines. There’s a lot to unpack in this 13-page memo, and so much to love about it that we at Easy Dynamics can hardly contain ourselves! Here are ten things we love about the new guidance:

  1. Goodbye Levels of Assurance (LOA) – Shred your old copies of M-04-04, E-Authentication Guidance. The memo rescinds it, kills LOAs, and points agencies to distinct assurance levels for identity proofing, authenticators, and federation, as defined in SP 800-63.
  2. 800-63 as the Foundation of Digital Identity – Agencies are required to update their identity strategies to ensure they are following 800-63. This revision is a major update to old guidance and materially impacts legacy solutions based on prior revisions.
  3. Digital Identity Risk Management – Requires agencies to incorporate Digital Identity Risk Management, including consideration of privacy risk, into existing Federal processes as outlined in 800-63. Agencies are also asked to share feedback with the Federal CIO Council, Federal Privacy Council, and NIST to drive improvement to 800-63. This feedback could take the form of a “digital identity acceptance statement” that explains the rationale for implementing at an xAL that differs from what your risk assessment yielded.
  4. Federation First – Acknowledges that a smartcard-only approach to federation is no longer tenable by driving the federal government toward a federation-first approach in both government-to-government (G2G) and government-to-citizen (G2C) contexts. In several places the memo emphasizes leveraging federated solutions as a requirement rather than a mere suggestion. While Federal Information Processing Standard (FIPS) 201-2, Personal Identity Verification (PIV) of Federal Employees and Contractors, remains the government-wide standard for common identification as called for in Homeland Security Presidential Directive 12 (HSPD-12), the memo recognizes that the government must offer flexible solutions as technology evolves. Agencies have a fantastic opportunity to get creative and work with the Federal CIO Council, the Federal Privacy Council, and NIST to pilot additional solutions that meet the intent of HSPD-12. The memo also requires agencies to support cross-government identity federation and interoperability by identifying and resolving obstacles to accepting PIV assertions from other agencies.
  5. Zero Trust – Paul Grassi, some guy whose name can be found on the cover of 800-63 and who also happens to be our SVP of Cybersecurity, really loves the paradigm shift here to zero trust solutions and the concept of using identity as the underpinning for managing cyber-risk. Trust that architecture requirements are abundant in this guidance, requiring agencies to establish authoritative solutions for ICAM services, ensure that deployed ICAM capabilities are interchangeable, use commercially available products, and leverage open APIs and commercial standards to promote interoperability and manage the digital identity lifecycle of devices, non-person entities (NPEs), and automated technologies such as Robotic Process Automation tools and Artificial Intelligence.
  6. Bring Your Own Authenticator – Options, baby! Agencies can give the public more options by allowing them to bring non-government-furnished authenticators to their digital identity when they access digital services. This policy enables strong authentication to government services, reduces cost, and reduces the number of authenticators individuals use in their daily lives. This is a win-win for agencies and citizens.
  7. Governance – Everyone is invited, well, required actually, to be part of an integrated agency-wide ICAM governance structure to “include personnel from the offices of the Chief Information Officer, Chief Financial Officer, Human Resources, General Counsel, Chief Information Security Officer, Senior Agency Official for Privacy (yay!), Chief Acquisition Officer, Senior Official(s) responsible for Physical Security, and component organizations that manage ICAM programs and capabilities, including ICAM capabilities deployed through the CDM Program.” Chief Operating Officers or equivalents are required to ensure coordination on ICAM among agency leaders and agencies are required to define and maintain a single comprehensive ICAM policy process and technology solution roadmap.
  8. Agencies as Attribute Sources – The federal government has had longstanding challenges in implementing remote identity proofing, in part due to the availability of authoritative data sources. This guidance chips away at that challenge, for both the public and private sectors, by directing agencies that are authoritative sources for attributes (e.g., SSN) utilized in identity proofing events, as selected by OMB and permissible by law, to establish privacy-enhanced data validation APIs for public and private sector identity proofing services to consume. Imagine being able to remotely proof a user with a passport!
  9. Privacy Matters – Privacy always has a special place in my heart and clearly OMB feels the same way. The new guidance calls for agencies to limit the collection of Personally Identifiable Information (PII) and protect it commensurate with risk. With the push toward federation, it will be important to involve your agency’s privacy officials to ensure appropriate consent mechanisms and privacy protections are in place when using federally or commercially provided shared services. Whether or not you are a privacy professional, it’s important to understand that a privacy risk assessment must be part of your agency’s digital identity risk assessment process.
  10. Roadmaps for New Options – There are agency-specific responsibilities galore in here, but keep an eye on the Department of Commerce (NIST) roadmap and guidance requirements and expect new options for derived credentials! For example, NIST is required to 1) develop a roadmap within months for developing new and updating existing NIST guidance related to ICAM, 2) develop and issue guidance to promote the deployment of technology, and 3) develop guidance to facilitate deployment and use of derived credentials using authenticators that satisfy the security and privacy requirements in 800-63 while leveraging the PIV identity proofing process.

Finally, the government is adopting models that have proven successful in the private sector! Kudos OMB!

Supporting Multiple Environments in a single TeamCity Build Config

Having multiple build configs can clutter up the TeamCity dashboard and make maintenance bothersome. The use of inheritance and build templates can help with maintenance, but that doesn’t solve the issue of having a build config for each environment (DEV, TEST, PROD, etc).

Using TeamCity service messages and the TeamCity REST API can help solve this problem and keep all operations for a project under one build config. In some cases this can also reduce costs, since the number of build configs a TeamCity server allows is limited by its license.

The example in this article supports deployment to multiple environments; however, the same approach can be used for builds, testing, and so on.

By following this article, the TeamCity dashboard would go from looking like this (with an “Execute Deployment” build config under each project):


To this:


Selecting an environment now occurs when clicking “Run” on the new build config: the user is prompted to select an environment.


  1. For this to work correctly, all values that need to change per environment must be parameterized. You should also consider creating a TeamCity service account to be used for REST API calls.
  2. Devise a parameter naming convention to denote the environment that each parameter value belongs to. Avoid prefixes that interfere with TeamCity’s defaults (env., system., dep.).
    1. Ex:
      TargetServerAddress becomes defined per environment as DEV.TargetServerAddress, TEST.TargetServerAddress, PROD.TargetServerAddress, and so on, with each holding that environment’s value.
  3. Go through all the parameters and update them to follow the formatting specified above.
  4. On the TeamCity project that will have the environment switches, add a parameter “TargetEnvironment” which will prompt the user for a selection upon running a build.
    1. Select “Edit…” in the Spec section to modify the parameter to prompt the user
    2. Fill in the various environment identifiers (these are defined in step 2)
    3. The final result should look like this
  5. Create a build config that executes a script. The following examples use the PowerShell build runner.
    1. Create your auth headers.
      $pair = "$($TeamCityApiUserName):$($TeamCityApiPassword)"
      $encodedCreds = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($pair))
      $basicAuthValue = "Basic $encodedCreds"
      $tcHeaders = @{
          "Authorization" = $basicAuthValue;
          "Accept"        = "application/xml";
          "Content-Type"  = "application/xml";
      }
    2. Create a helper function to get all TeamCity parameters for a specified Project Id.
      Function GetTeamCityParameters($ProjectId) {
          try {
              $api = "$($TeamCityHost)httpAuth/app/rest/projects/id:$ProjectId/parameters"
              $response = Invoke-WebRequest -Uri $api -Headers $tcHeaders -Method GET
              return ([xml]$response.Content).properties.property
          } catch {
              Write-Host "Failed to retrieve parameters for project '$ProjectId': $_"
          }
      }
    3. Create a helper function to emit TeamCity parameters using TeamCity Service Messages.
      Function EmitTeamCityParameter($Key, $Value) {
          Write-Host "##teamcity[setParameter name='$Key' value='$Value']"
      }
    4. Loop through all matching parameters to emit, for example, the value of “DEV.TargetServerAddress” as the value for “TargetServerAddress” (when TargetEnvironment is DEV).
      ForEach ($param in (GetTeamCityParameters $ProjectId)) {
          if ($param.Name -and $param.Name.StartsWith("$TargetEnvironment.")) {
              EmitTeamCityParameter -Key ($param.Name).Replace("$TargetEnvironment.","") -Value "%%$($param.Name)%%"
          }
      }
  6. To make use of these freshly emitted environment parameters, whichever build config is being set up for multi-environment use (in my case, Execute Deployment) must have a Snapshot Dependency on the build config that emits the environment parameters.
    1. Every time an emitted parameter is to be used it must be referenced via a dependency parameter. For instance, to use the emitted value “TargetServerAddress” in a build config it must be referenced as %dep.<emitting build config ID>.TargetServerAddress%.
    2. This is because TeamCity Service Messages only update the value of the parameter for the single build that executed them. What this means is that after Emit Environment Parameter executes, the value of “TargetServerAddress” never gets persisted.


You may run into problems when attempting this with password parameters. This is because TeamCity masks all passwords as asterisks before executing the script, effectively resulting in the parameter value being updated as ******* rather than the password’s true value. In the next blog post, we will discuss using the TeamCity REST API to get around this issue with passwords.

Reducing Costs in the Cloud using Machine Learning

In the modern-day business environment, where organizations are consistently asked to do more with less, businesses of all sizes continue to move to the cloud en masse. The switch to a cloud-based platform over a traditional on-premises installation is driven largely by a desire to reduce costs, with added benefits such as optimized performance and higher levels of compliance.

A comparison of cloud-based platforms to on-premises installations

However, organizations that use the cloud for hosting purposes alone have barely scratched the surface of its functionality. While features such as resource pooling and automatic backups are commonly used, others, such as machine learning in the cloud, remain underutilized.

Machine learning can broadly be defined as the science of feeding computers data to autonomously improve their learning over time, and it is starting to explode in the cloud-sphere. Azure was one of the first cloud platforms to announce machine learning capabilities (back in late 2014) and Amazon followed suit in 2015. More recently, Google launched Google Cloud Machine Learning in 2017, while Amazon introduced a more advanced machine learning platform called SageMaker at re:Invent 2017.

Machine learning in the cloud, at its core, consists of three separate requirements:

  1. The first, and most important, part of the machine learning process is converting unstructured data into understandable, value-rich data. The core principle here is that without good data, there is no value to be derived – clean data is essential for machine learning purposes. Cloud providers make it easy to import your data – Amazon SageMaker natively integrates with S3, RDS, DynamoDB, and Redshift, and all three major providers allow you to bring in your own data as well.
  2. The second piece of the puzzle is the machine learning algorithm that the data runs through. SageMaker, for example, offers many built-in algorithms such as Linear Learner, XGBoost, and K-Means. It also allows you to use your own trained algorithms, which are packaged as Docker images (like the built-in ones). This gives you the flexibility to use almost any algorithm code, regardless of implementation language, dependent libraries, or frameworks.
  3. Lastly, machine learning scientists need a certain level of insight to be able to connect models and algorithms to business processes. Industry-standard processes for data mining (illustration below) show that understanding the core business and data is key to modeling algorithms. Without an understanding of what data can help make impactful business decisions, the output of any algorithm is useless.

An illustration of the Cross Industry Standard Process for Data Mining

Machine learning in the cloud really is as simple as finding the right dataset (or collecting the data using built-in cloud monitoring services), training the right algorithm, and continuing to fine-tune the machine learning system. The barrier to entry is not as large as organizations perceive it to be, and not every user of a cloud-based machine learning system must be a trained machine learning scientist.

At this point, you may be wondering – how can I use these systems to reduce costs and increase operational intelligence in my current cloud setup?

One of the most important ways any organization can reduce costs is by proactively predicting performance issues and remediating problems before they impact your business. Specifically, funneling a dataset of historical uptime/downtime metrics through a machine learning algorithm can help you identify when a server or system may go down in the future. Using this knowledge, you can avert work slowdowns or stoppages and, in the process, avoid a substantial amount of lost productivity.
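As a toy sketch of that idea – the metrics, labels, and history below are invented purely for illustration, not drawn from any real system – a simple k-nearest-neighbors classifier can flag a server whose current readings resemble past pre-failure patterns:

```python
import math

# Hypothetical historical metrics: (average CPU load, errors per hour),
# labeled 1 if the server went down within the next 24 hours, else 0.
history = [
    ((0.20, 0.1), 0), ((0.30, 0.2), 0), ((0.25, 0.0), 0),
    ((0.40, 0.5), 0), ((0.85, 4.0), 1), ((0.90, 3.5), 1),
    ((0.95, 5.0), 1), ((0.80, 2.8), 1), ((0.35, 0.3), 0),
]

def predict_downtime(sample, k=3):
    """Predict imminent downtime by majority vote of the k nearest neighbors."""
    dists = sorted(
        (math.dist(sample, features), label) for features, label in history
    )
    votes = [label for _, label in dists[:k]]
    return 1 if sum(votes) > k / 2 else 0

if __name__ == "__main__":
    print(predict_downtime((0.88, 4.2)))  # a heavily loaded, erroring server
    print(predict_downtime((0.22, 0.1)))  # a healthy server
```

In practice you would use a managed service (such as SageMaker or Azure Machine Learning) and far richer telemetry, but the shape of the problem is the same: label historical metrics, fit a model, and score current metrics against it.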


In addition to technical cost reduction, machine learning in the cloud is already being used to reduce costs by improving processes in many industries. One great use case is how Google Cloud augments typical help-desk scenarios with Cloud AI using an event-based serverless architecture. It also has the potential to completely revolutionize healthcare, as it can provide alerts from real-time patient data while also providing proactive health management for existing patients.

In summation, while cloud migrations are ubiquitous, the true power of the cloud remains untapped as long as services like machine learning stay underutilized. These tools, offered by all the major cloud providers, can help organizations by providing proactive analyses and minimizing operational waste.



Infrastructure As Code: The ODIN Way

This blog details how I accelerated the deployment of cloud environments through the creation of a web portal called ODIN (“Optimal Deployment In Network”). This article is Part I of a continuing series.

Discussing The Problem

CI/CD is most often implemented by grabbing code, running tests, and deploying the code in question to a specific environment. I wanted to produce an application that could be used to create infrastructure with a simple button click. I began brainstorming different ways of creating my application, but I needed a precise problem to solve.

It quickly became clear that a major pain point to address was a bottleneck in the deployment of environments for the development team, a process that took days and even weeks. Mitigating this pain point seemed like a good place to start because it could optimize an important part of the current flow and also pave the way for the implementation of CI/CD in the future.

ODIN would automate the process and make dev/test environments available to developers on demand via an easy “one-click” process that takes minutes instead of days, helping to optimize and streamline deployments. This process can also be extended in the future to be triggered automatically as part of a CI/CD pipeline.


I designed a solution for creating a self-service web portal that will automate and accelerate the process of delivering cloud environments upon request. The solution as a whole is illustrated in the following diagram:


  1. Users access the self-service portal, a web app deployed on Azure App Service.
  2. Users are authenticated using Azure Active Directory using their work account.
  3. The web app requests a new deployment from the Azure Resource Manager using the chosen template and the Azure .NET SDK.
  4. Azure Resource Manager deploys the requested environment, which includes virtual machines running Puppet agents.
  5. The Puppet master serves configuration settings to the Puppet agents, which configure the virtual machines and perform the necessary software installations.

But Why?

Before we get started, we should quickly discuss the ‘what’s the point’ argument.

Writing infrastructure as code is pretty nice, and I enjoy the declarative approach of defining what you want and letting the tooling take care of the how. Using ARM templates (or CloudFormation templates in AWS) allows a developer to create quick, precise environments every single time.

Below are three main practices that ODIN encourages:

  • Self-service environment. The solution as a whole implements the “self-service environment” DevOps practice because it allows users to trigger the deployment of new cloud environments in a fully automated manner with a simple button click.
  • Infrastructure as code. The use of Resource Manager templates allows a team to manage “infrastructure as code,” a DevOps practice that allows developers to define and configure infrastructure (such as cloud resources) consistently while using software tools such as source control to version, store, and manage these configurations.
  • Security through eradication. Over 250,000 new malicious programs are found every day. Antivirus says your systems are fine. Intrusion Prevention Systems say you are safe. You hire a professional to scan your network and he/she concludes that a backdoor was previously installed into your network. You have no idea how many systems have been compromised. Did I mention that it is a Saturday? Infrastructure as code allows you to easily grab a template from source control, add some new firewall rules and parameters, then invoke it with ODIN. Within minutes you can have your entire infrastructure rebuilt without the infection. With ODIN, you can rebuild all of your servers, mount your original data, then continue with business as usual.

Divide and Conquer

To provide a proof-of-concept implementation of the solution, work was divided into three areas, each focusing on a different part of the solution:

  1. Implementing the self-service portal code named ODIN.
  2. Authoring a Resource Manager template to deploy as an example environment.
  3. Using Puppet to automate post-deployment configuration.


Optimal Deployment In Network (ODIN)

The portal was implemented as an ASP.NET Core web application deployed on an Azure web app. The application was connected to my personal Azure Active Directory for user authentication and I used the Azure .NET SDK to access Azure Resource Manager for deploying environments.

User sign-in with Azure Active Directory

The web app was configured to authenticate, as described in the article ASP.NET web app sign-in and sign-out with Azure AD. This allows users to use their existing work credentials to access the application and the web application to retrieve user profiles easily.

In order to communicate with Azure, you must provide important information identifying your application. I chose to insert my keys into a JSON file. The information you need is the Azure AD instance URL, tenant ID, client (application) ID, and callback path.
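A minimal sketch of that JSON file (the key names follow the common AzureAd appsettings convention; the bracketed values are placeholders, not real keys):

```json
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "TenantId": "<your-directory-tenant-id>",
    "ClientId": "<your-application-client-id>",
    "CallbackPath": "/signin-oidc"
  }
}
```

These keys line up with the properties read out of AzureAdOptions when the configuration is bound.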

Once you have figured out your correct keys, configure authentication for Azure:

public void ConfigureServices(IServiceCollection services)
{
    // Adding Auth
    services.AddAuthentication(sharedOptions =>
    {
        sharedOptions.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        sharedOptions.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddAzureAd(options => Configuration.Bind("AzureAd", options))
    .AddCookie();

    // Adding SignalR
    services.AddSignalR(options =>
    {
        options.KeepAliveInterval = TimeSpan.FromSeconds(10);
    });

    // DI
    services.AddSingleton<IHelper, Helper>();
    services.AddSingleton<IDeploymentTemplateTask, DeploymentTemplateTask>();
}

Finally, it is helpful to create an extension method to bind the keys from the JSON config to a POCO class. This allows a developer to inject the POCO directly into a class constructor via Dependency Injection.

public static class AzureAdAuthenticationBuilderExtensions
{
    public static AuthenticationBuilder AddAzureAd(this AuthenticationBuilder builder)
        => builder.AddAzureAd(_ => { });

    public static AuthenticationBuilder AddAzureAd(this AuthenticationBuilder builder, Action<AzureAdOptions> configureOptions)
    {
        builder.Services.Configure(configureOptions);
        builder.Services.AddSingleton<IConfigureOptions<OpenIdConnectOptions>, ConfigureAzureOptions>();
        builder.AddOpenIdConnect();
        return builder;
    }

    private class ConfigureAzureOptions : IConfigureNamedOptions<OpenIdConnectOptions>
    {
        private readonly AzureAdOptions _azureOptions;

        public ConfigureAzureOptions(IOptions<AzureAdOptions> azureOptions)
        {
            _azureOptions = azureOptions.Value;
        }

        public void Configure(string name, OpenIdConnectOptions options)
        {
            options.ClientId = _azureOptions.ClientId;
            options.Authority = $"{_azureOptions.Instance}{_azureOptions.TenantId}";
            options.UseTokenLifetime = true;
            options.CallbackPath = _azureOptions.CallbackPath;
            options.RequireHttpsMetadata = false;
        }

        public void Configure(OpenIdConnectOptions options)
            => Configure(Options.DefaultName, options);
    }
}

Deploy an ARM Template

The web application needs to send requests to the Azure Resource Manager to deploy new environments. For this purpose, the web application makes use of the Azure .NET SDK to programmatically communicate with the Azure Resource Manager and deploy the requested Resource Manager JSON template. See a step-by-step tutorial on how to deploy a template using .NET. Below is the code I use to deploy a virtual machine to Azure based on a well-formatted ARM template.

public async Task<DeploymentExtendedInner> CreateTemplateDeploymentAsync(TokenCredentials credential,
    string groupName, string deploymentName, string subscriptionId, string templatePath,
    string templateParametersPath)
{
    #region Fail Fast

    if (string.IsNullOrEmpty(templatePath))
        throw new ArgumentNullException("Template cannot be null!");

    if (string.IsNullOrEmpty(templateParametersPath))
        throw new ArgumentNullException("Parameter template cannot be null!");

    #endregion

    var templateFileContents = GetJsonFileContents(templatePath);
    var parameterFileContents = GetJsonFileContents(templateParametersPath);

    var deployment = new Deployment
    {
        Properties = new DeploymentPropertiesInner
        {
            Mode = DeploymentMode.Incremental,
            Template = templateFileContents,
            Parameters = parameterFileContents["parameters"].ToObject<JObject>()
        }
    };

    try
    {
        using (var resourceManagementClient = new ResourceManagementClient(credential))
        {
            resourceManagementClient.SubscriptionId = subscriptionId;
            return await resourceManagementClient.Deployments.CreateOrUpdateAsync(groupName, deploymentName,
                deployment.Properties, CancellationToken.None);
        }
    }
    catch (Exception exception)
    {
        // Surface deployment failures to the caller with context
        throw new InvalidOperationException($"Template deployment '{deploymentName}' failed.", exception);
    }
}

Infrastructure as code using Resource Manager templates

Azure Resource Manager templates (ARM templates) are the preferred way of automating the deployment of resources to Azure Resource Manager (AzureRM). ARM templates are JavaScript Object Notation (JSON) files in which the resources you want to deploy are declaratively described. Because they are plain JSON, templates can be versioned in source control, reinforcing the idea of write once, deploy forever.
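As a simplified illustration – this is a generic sample, not one of the templates ODIN actually deploys – an ARM template that provisions a single storage account looks roughly like this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2019-06-01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Handing a file like this, plus a matching parameters file, to the deployment code shown earlier is all it takes to stand the resource up.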

Puppet Enterprise virtual machine extensions

As described earlier, I chose to use Puppet for automating post-deployment virtual machine configuration. This means that Puppet Enterprise agents need to be installed on virtual machines defined in the Resource Manager templates. To make this process truly automatic, the agents need to be installed automatically as soon as the virtual machines are created. This can be achieved by using Azure virtual machine extensions, which allow performing an array of post-deployment operations on Windows or Linux virtual machines.
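Inside a Resource Manager template, such an extension is declared as a child resource of the virtual machine. The snippet below is only a sketch; the publisher, type, version, and settings keys shown here are placeholders and should be taken from the extension's marketplace documentation:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/puppetAgent')]",
  "apiVersion": "2017-03-30",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Puppet",
    "type": "PuppetAgent",
    "typeHandlerVersion": "1.5",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "puppet_master_server": "[parameters('puppetMasterFqdn')]"
    }
  }
}
```

The dependsOn entry ensures the agent is installed only after the virtual machine itself has been created.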


Once virtual machines are deployed for the requested environment, post-deployment virtual machine configuration is handled by Puppet. For more information on Puppet, visit the Puppet official website and the Puppet Enterprise overview.

Installing Puppet

To install Puppet Enterprise, see the official installation guide.

Alternatively, the Azure marketplace offers a preconfigured Puppet Enterprise template that allows users to deploy a Puppet Enterprise environment (including Puppet master server and UI console) within minutes.

Accepting Agents

For security reasons, a Puppet master needs to accept agents that attempt to connect to it. By default, this needs to be done manually using the console. To make the process truly automatic, the Puppet master needs to be configured to automatically accept certificates.

This can be achieved by adding the line autosign = true to the [master] block in the Puppet master configuration file /etc/puppetlabs/puppet/puppet.conf.
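For reference, the relevant section of the configuration file then looks like this:

```ini
# /etc/puppetlabs/puppet/puppet.conf
[master]
autosign = true
```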

Note: This was done specifically for this POC. Do not automatically accept agents like this in a production environment.

Configure Agents

For the purpose of the POC, I decided to showcase three Puppet post-deployment actions to be executed on the Windows Server 2012 R2 virtual machines:

  1. Installation of Chocolatey (a Windows package manager).
  2. Use of Chocolatey to install Firefox.
  3. Transfer a text file to C:\
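A manifest implementing these three actions might look like the following sketch. It assumes the puppetlabs/chocolatey module is available on the master; the node regex, module name, and file paths are illustrative, not taken from the actual POC:

```puppet
# Applied to Windows agents whose certnames start with "win".
node /^win/ {
  # 1. Install Chocolatey itself.
  include chocolatey

  # 2. Use Chocolatey as the package provider to install Firefox.
  package { 'firefox':
    ensure   => installed,
    provider => 'chocolatey',
    require  => Class['chocolatey'],
  }

  # 3. Transfer a text file to C:\ from the master's file server.
  file { 'C:/readme.txt':
    ensure => file,
    source => 'puppet:///modules/odin/readme.txt',
  }
}
```

Once the agents check in, the master compiles this catalog and the agents converge the hosts to the declared state.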


This project has been a great opportunity to learn more about Azure and to strengthen DevOps best practices. I had no previous Azure experience, so learning how to set up a personal account and resource groups, create ARM templates, and communicate with Azure Resource Manager via REST was a blast!

At this point, the POC can showcase the following:

  1. User logs onto the self-service portal using corporate credentials.
  2. User chooses an environment to be deployed and includes parameters such as region and environment name.
  3. The web portal uses the associated Resource Manager template to programmatically trigger a cloud environment deployment.
  4. Virtual machines, deployed in the new environments, run Puppet agents that connect to the Puppet Enterprise master.
  5. Puppet agents perform installations and configurations on the hosts, as preconfigured on the master.

By automating the existing process, ODIN has managed to optimize and accelerate an important part of the dev/test process that used to take days and require manual work.

Looking forward, I have defined the following focus areas for extending the solution:

  • Externalizing template parameters to be configured by the user.
  • Adding additional templates to be available to users.
  • Fetching templates from BLOB storage.
  • Connecting the implemented solution to the CI/CD process.
  • Re-branding the front end using React with Typescript.
  • Settings that allow a user to use a hyperlink for a particular template.
  • Ability to upload a template directly to the BLOB or a particular storage component.
  • ODIN admins can configure ODIN to use Github, Bitbucket, or various storage components for templates.
  • Integrating CloudFormation templates for AWS.
  • Integrating the ability to deploy Docker images and containers.

In Part II, we will integrate the consumption of ARM templates located in BLOB storage as well as making the application front end more user friendly by using React components. Thanks for reading!

Tools, Software, and More

Below are the tools that were used to create ODIN:

  • .NET Core v2.0
  • OpenId Connect
  • Docker
  • ARM Template Base with Parameter Template
  • Puppet
  • Visual Studio
  • Visual Studio Code
  • Typescript
  • SignalR
  • Azure Admin Access

Additional Resources

Restricting guest access to a Microsoft Teams tab linked to a SharePoint document library

Guest access is a new feature in Microsoft Teams that allows different organizations to collaborate in a shared environment. Anyone with a business or consumer email account, such as Outlook, Gmail, or others, can participate as a guest in Teams with full access to team chats, meetings, and files.

With the Teams and SharePoint site integration of an Office 365 group, you can now restrict access to a certain document library in SharePoint and have these access restrictions replicate to the corresponding Teams tab. This blog post provides a walk-through of the steps required to restrict a document library in SharePoint and create a corresponding Teams tab linked to that document library.

Restricting access for SharePoint document library

  1. In your SharePoint site, navigate to the document library you wish to secure.
  2. Click on the gear icon at the top of the page, and click on ‘Library settings’.
  3. Click on ‘Stop Inheriting Permissions’ under the Permissions tab.
  4. Select all current permission groups and click ‘Remove User Permissions’.
  5. Click on ‘Grant Permissions’.
  6. Add the users you want to be able to access the document library.
  7. Click on ‘Show Options’.
  8. Select the appropriate permission level.
  9. Click ‘Share’.
  10. Now only users you have explicitly given access to, along with site owners, will be able to access that document library.

Creating a tab in Teams linked to the document library

  1. Navigate to the appropriate Team.
  2. Click on the ‘+’ sign for the channel that you want the tab to be created on.
  3. Click on ‘Document Library’.
  4. Under ‘Relevant sites’, choose the SharePoint site that you created the document library in.
  5. Click ‘Next’.
  6. Pick the document library you want to add.
  7. Change the name of the tab that will be displayed in Teams.
  8. Click ‘Save’.
  9. Now only the Office 365 Group members who you have explicitly given access to this document library will be able to view it in Teams. This allows different organizations to collaborate on projects while maintaining the ability to restrict access to sensitive content.

Securing an ASP.NET Core app on Ubuntu using Nginx and Docker – Part III

In Part I of this tutorial, we created a self-contained ASP.NET Core web application using the new dotnet CLI tools, configured PuTTY and PSCP to SSH and transfer files, and finally transferred the self-contained app from a Windows environment to the Ubuntu VM. Part II discussed setting up Docker, creating a Docker image, and running your application from a Docker container.

Continue reading “Securing an ASP.NET Core app on Ubuntu using Nginx and Docker – Part III”

Securing an ASP.NET Core App on Ubuntu Using Nginx and Docker (Part I)

Typically, when you develop with ASP.NET you have the luxury of IIS Express taking care of SSL and hosting; however, IIS and IIS Express are exclusive to the Windows platform. ASP.NET Core 1.0 has decoupled the web server from the environment that hosts the application. This is great news for cross-platform developers, since web servers other than IIS, such as Apache and Nginx, may be set up on Linux and Mac machines.

This tutorial involves using Nginx as the web server to host a dockerized .NET Core web application with SSL termination on an Ubuntu machine. In this three-part tutorial, I’ll guide you through:

Part I (this post) – 1. Creating and publishing an ASP.NET Core web app using the new dotnet CLI tools and 2. Installing and configuring PuTTY so we can SSH and transfer files to our Ubuntu machine

Part II – Setting up Docker and creating a Docker Image on Ubuntu 16.04

Part III – 1. Configuring Nginx for SSL termination and 2. Building and Running the Docker Image.
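As a preview of the SSL termination step in Part III, a minimal Nginx server block might look like the sketch below. The hostname, certificate paths, and container port are placeholders, not the values used later in the series:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Certificate and key used to terminate SSL at Nginx.
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Forward decrypted traffic to the Kestrel app in the Docker container.
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```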

Continue reading “Securing an ASP.NET Core App on Ubuntu Using Nginx and Docker (Part I)”

AWS S3 Bucket Name Validation Regex

By Scott Lanoue

Amazon Web Services enforces a strict naming convention for buckets used for storing files. Amazon’s requirements for bucket names include:

  • A bucket’s name can be between 6 and 63 characters long, containing lowercase characters, numbers, periods, and dashes
  • Each label must start with a lowercase letter or number
  • Bucket names cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods
  • Lastly, the bucket name cannot be formatted as an IPV4 address (e.g.
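These rules can be expressed as a small validator. The sketch below is a reader's translation of the bulleted requirements into Python; the exact regex the post itself builds may differ:

```python
import re

# "Labels" are the dot-separated segments of the bucket name. Each label
# must start and end with a lowercase letter or number, which also rules
# out underscores, trailing dashes, consecutive periods, and dashes
# adjacent to periods.
LABEL = r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"
BUCKET_RE = re.compile(rf"^{LABEL}(?:\.{LABEL})*$")
IPV4_RE = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$")

def is_valid_bucket_name(name: str) -> bool:
    if not 6 <= len(name) <= 63:      # length limits per the post
        return False
    if IPV4_RE.match(name):           # must not look like an IPv4 address
        return False
    return BUCKET_RE.fullmatch(name) is not None
```

For example, my-bucket-01 passes, while my..bucket, 192.168.5.4, and My_Bucket all fail one of the rules above.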

Continue reading “AWS S3 Bucket Name Validation Regex”