Supporting Multiple Environments in a Single TeamCity Build Config

Having multiple build configs can clutter up the TeamCity dashboard and make maintenance bothersome. The use of inheritance and build templates can help with maintenance, but that doesn’t solve the issue of having a build config for each environment (DEV, TEST, PROD, etc).

Using TeamCity service messages and the TeamCity REST API can help solve this problem and keep all operations for a project under one build config. In some cases this can also reduce costs, since the number of build configs TeamCity allows is limited by your license.

The example in this article supports deployment to multiple environments; however, the same approach can be applied to builds, testing, and so on.

By following this article, the TeamCity dashboard would go from looking like this (with an “Execute Deployment” build config under each project):

Before

To this:

After

Selecting an environment now happens when clicking “Run” on the new build config; the user is prompted to choose an environment.

Procedure

  1. For this to work correctly, all values that need to change per environment must be parameterized. You should also consider creating a TeamCity service account to be used for the REST API calls.
  2. Devise a parameter naming convention that denotes the environment each parameter value belongs to. Avoid prefixes that collide with TeamCity’s reserved prefixes (env., system., dep.).
    1. Example: TargetServerAddress becomes defined as:
      DEV.TargetServerAddress
      TEST.TargetServerAddress
      PROD.TargetServerAddress
  3. Go through all the parameters and update them to follow the formatting specified above.
  4. On the TeamCity project that will have the environment switches, add a parameter “TargetEnvironment”, which will prompt the user for a selection when running a build.
    1. Select “Edit…” in the Spec section to modify the parameter to prompt the user
    2. Fill in the various environment identifiers (these are defined in step 2)
    3. The final result should look like this
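      In text form, the resulting parameter spec looks roughly like this (a sketch; the label and option values are whatever you defined in step 2):
      select display='prompt' label='Target Environment' data_1='DEV' data_2='TEST' data_3='PROD'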
  5. Create a build config to execute a script. The following examples use the PowerShell build runner.
    1. Create your auth headers.
      # Build a Basic Authentication header from the service account credentials
      $pair = "$($TeamCityApiUserName):$($TeamCityApiPassword)"
      $encodedCreds = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($pair))
      $basicAuthValue = "Basic $encodedCreds"
      $tcHeaders = @{
          "Authorization" = $basicAuthValue;
          "Accept"        = "application/xml";
          "Content-Type"  = "application/xml";
      }
    2. Create a helper function to get all TeamCity parameters for a specified project ID.
      Function GetTeamCityParameters($ProjectId) {
          try {
              # Query the TeamCity REST API for every parameter defined on the project
              $api = "$($TeamCityHost)httpAuth/app/rest/projects/id:$ProjectId/parameters"
              $response = Invoke-WebRequest -Uri $api -Headers $tcHeaders -Method GET
              # The API returns XML; parse the body and return the individual <property> elements
              return ([xml]$response.Content).properties.property
          } catch {
              throw
          }
      }
      
    3. Create a helper function to emit TeamCity parameters using TeamCity service messages.
      Function EmitTeamCityParameter($Key, $Value) {
          # This service message tells TeamCity to set (or overwrite) a build parameter
          Write-Host "##teamcity[setParameter name='$Key' value='$Value']"
      }
    4. Loop through all parameters whose names start with the selected environment prefix and re-emit each one without that prefix. For example, when TargetEnvironment is DEV, the value of “DEV.TargetServerAddress” is emitted as “TargetServerAddress”.
      ForEach ($param in (GetTeamCityParameters $ProjectId)) {
          if ($param.Name -and $param.Name.StartsWith("$TargetEnvironment.")) {
              # Strip the environment prefix and emit the parameter under its generic name
              EmitTeamCityParameter -Key ($param.Name).Replace("$TargetEnvironment.","") -Value "%%$($param.Name)%%"
          }
      }
  6. To make use of these freshly emitted environment parameters, the build config being set up for multi-environment use (in my case, Execute Deployment) must have a snapshot dependency on the build config that emits them.
    1. Every emitted parameter must be referenced via a dependency parameter. For instance, to use the emitted value “TargetServerAddress” in a build config, it must be referenced like so:
      %dep.EmitEnvironmentParameters.TargetServerAddress%
    2. This is necessary because TeamCity service messages only update parameter values for the single build that executed them. After Emit Environment Parameters finishes, the value of “TargetServerAddress” is not persisted anywhere else; a short usage sketch follows.
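      For example, a PowerShell step in the Execute Deployment build config might consume the value like this (a sketch; your actual deployment script will differ):
      # TeamCity resolves the dependency parameter reference before the script runs
      $targetServer = "%dep.EmitEnvironmentParameters.TargetServerAddress%"
      Write-Host "Deploying to $targetServer"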

 

You may run into problems when attempting this with password parameters. TeamCity masks all password values as asterisks before executing the script, so the parameter value gets updated to ******* rather than the password’s true value. In the next blog post, we will discuss using the TeamCity REST API to work around this issue with passwords.

Reducing Costs in the Cloud using Machine Learning

In the modern-day business environment where organizations are consistently being asked to do more with less, businesses of all sizes are continuing to move to the cloud en masse. The switch to a cloud-based platform over a traditional on-premise installation is driven largely by a desire to reduce costs, with added benefits such as optimized performance and higher levels of compliance.

A comparison of cloud-based platforms to on-premises installations

However, organizations that use the cloud for just hosting purposes have barely scratched the surface in terms of functionality. While features such as resource pooling and automatic backups are utilized commonly, some features, such as machine learning in the cloud, continue to be underutilized by users.

Machine learning can broadly be defined as the science of feeding computers data to autonomously improve their learning over time, and it is starting to explode in the cloud-sphere. Azure was one of the first cloud platforms to announce machine learning capabilities (back in late 2014) and Amazon followed suit in 2015. More recently, Google launched Google Cloud Machine Learning in 2017, while Amazon introduced a more advanced machine learning platform called SageMaker at re:Invent 2017.

Machine learning in the cloud, at its core, is comprised of three separate requirements:

  1. The first, and most important, part of the machine learning process is the converting of unstructured data into understandable, value-rich data. The core principle here is that without good data, there is no value to be derived from it – clean data is essential for machine learning purposes. Cloud providers make it easy for you to import your data – Amazon SageMaker allows native integration with S3, RDS, DynamoDB, and Redshift, and all 3 major companies allow you to bring in your own data as well.
  2. The second piece of the puzzle is the machine learning algorithm that the data runs through. SageMaker, for example, offers many built-in algorithms such as Linear Learner, XGBoost, and K-Means. It also allows you to use your own trained algorithms, which are packaged as Docker images (like the built-in ones). This gives you the flexibility to use almost any algorithm code, regardless of implementation language, dependent libraries, or frameworks.
  3. Lastly, machine learning scientists need a certain level of insight to be able to connect models and algorithms to business processes. Industry-standard processes for data mining (illustration below) show that understanding the core business and data is key to modeling algorithms. Without an understanding of what data can help make impactful business decisions, the output of any algorithm is useless.
An illustration of the Cross Industry Standard Process for Data Mining

Machine learning in the cloud really is as simple as finding the right dataset (or collecting the data using built-in cloud monitoring services), training the right algorithm, and continuing to fine-tune the machine learning system. The barrier to entry is not as large as organizations perceive it to be, and not every user of a cloud-based machine learning system must be a trained machine learning scientist.

At this point, you may be wondering – how can I use these systems to reduce costs and increase operational intelligence in my current cloud setup?

One of the most important ways any organization can reduce costs is by proactively predicting performance issues and remediating problems before they impact your business. Specifically, funneling a dataset of historical uptime/downtime metrics through a machine learning algorithm can help you identify when a server or system may go down in the future. Using this knowledge, you can avert any work slowdowns or stoppages and, in the process, save a substantial amount of potentially lost wages.


In addition to technical cost reduction, machine learning in the cloud is already being used to reduce costs by improving processes across many industries. One great use case is how Google Cloud augments typical help-desk scenarios with Cloud AI using an event-based serverless architecture. It also has the potential to completely revolutionize healthcare, as it can provide alerts from real-time patient data while also providing proactive health management for existing patients.

In summation, while cloud migrations are ubiquitous, the true power of the cloud remains untapped as long as services like machine learning go underutilized. These tools, offered by all the major cloud providers, can help organizations by providing proactive analyses and minimizing operational waste.


Infrastructure As Code: The ODIN Way

This blog details how I accelerated the deployment of cloud environments through the creation of a web portal called ODIN (Optimal Deployment In Network). This is Part I of a continuing series.

Discussing The Problem

CI/CD is most often implemented by grabbing code, running tests, and deploying that code to a specific environment. I wanted to produce an application that could create the infrastructure itself with a simple button click. I began brainstorming different ways of building my application, but I needed a precise problem to solve.

It quickly became clear that a major pain point was the bottleneck in deploying environments for the development team, a process that took days and even weeks. Mitigating this pain point seemed like a good place to start because it could optimize an important part of the current flow and also pave the way for implementing CI/CD in the future.

ODIN would automate the process and make dev/test environments available to developers on demand via an easy “one-click” process that takes minutes instead of days, helping to optimize and streamline deployments. This process can also be extended in the future to be triggered automatically as part of a CI/CD pipeline.

Overview

I designed a solution for creating a self-service web portal that will automate and accelerate the process of delivering cloud environments upon request. The solution as a whole is illustrated in the following diagram:

Steps

  1. Users access the self-service portal, a web app deployed on Azure App Service.
  2. Users are authenticated using Azure Active Directory using their work account.
  3. The web app requests a new deployment from the Azure Resource Manager using the chosen template and the Azure .NET SDK.
  4. Azure Resource Manager deploys the requested environment, which includes virtual machines running Puppet agents.
  5. The Puppet master serves configuration settings to the Puppet agents, which configure the virtual machines and perform the necessary software installations.

But Why?

Before we get started, we should quickly discuss the ‘what’s the point’ argument.

Writing infrastructure as code is pretty nice, and I enjoy the declarative approach of defining what you want and letting the tooling take care of the how. Using ARM templates (or CloudFormation templates in AWS) allows a developer to create quick and precise environments every single time.

Below are three main practices that ODIN encourages:

  • Self-service environment. The solution as a whole implements the “self-service environment” DevOps practice because it allows users to trigger the deployment of new cloud environments in a fully automated manner with a simple button click.
  • Infrastructure as code. The use of Resource Manager templates allows a team to manage “infrastructure as code”, a DevOps practice in which developers define and configure infrastructure (such as cloud resources) consistently while using software tools such as source control to version, store, and manage these configurations.
  • Security through eradication. Over 250,000 new malicious programs are found every day. Antivirus says your systems are fine. Intrusion prevention systems say you are safe. Then you hire a professional to scan your network, and he/she concludes that a backdoor was previously installed on your network. You have no idea how many systems have been compromised. Did I mention that it is a Saturday? Infrastructure as code lets you grab a template from source control, add some new firewall rules and parameters, then invoke it with ODIN. Within minutes you can have your entire infrastructure rebuilt without the infection. With ODIN, you can rebuild all of your servers, mount your original data, and continue with business as usual.

Divide and Conquer

To provide a proof-of-concept implementation of the solution, work was divided into three areas, each focusing on a different part of the solution:

  1. Implementing the self-service portal, code-named ODIN.
  2. Authoring a Resource Manager template to deploy as an example environment.
  3. Using Puppet to automate post-deployment configuration.

ODIN

Optimal Deployment In Network (ODIN)

The portal was implemented as an ASP.NET Core web application deployed on an Azure web app. The application was connected to my personal Azure Active Directory for user authentication and I used the Azure .NET SDK to access Azure Resource Manager for deploying environments.

User sign-in with Azure Active Directory

The web app was configured to authenticate, as described in the article ASP.NET web app sign-in and sign-out with Azure AD. This allows users to use their existing work credentials to access the application and the web application to retrieve user profiles easily.

In order to communicate with Azure, you must provide information identifying your application. I chose to put these keys into a JSON configuration file. The information you need is as follows:
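In my case these keys lived in an “AzureAd” section of appsettings.json, matching the Configuration.Bind("AzureAd", options) call shown in ConfigureServices below. A minimal sketch with placeholder values looks like this:

{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "TenantId": "<your-azure-ad-tenant-id>",
    "ClientId": "<your-registered-application-id>",
    "CallbackPath": "/signin-oidc"
  }
}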

Once you have the correct keys, we can configure authentication for Azure AD.


public void ConfigureServices(IServiceCollection services)
{
    // Adding Auth
    services.AddAuthentication(sharedOptions =>
    {
        sharedOptions.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        sharedOptions.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddAzureAd(options => Configuration.Bind("AzureAd", options))
    .AddCookie();

    services.AddMvc();

    // Adding SignalR
    services.AddSignalR(options => {
        options.KeepAliveInterval = TimeSpan.FromSeconds(10);
    });

    services.AddOptions();

    // DI
    services.Configure<AzureAdOptions>(Configuration.GetSection("AzureAd"));
    services.AddSingleton<AzureAdOptions>();
    services.AddSingleton<IHelper, Helper>();
    services.AddSingleton<IDeploymentTemplateTask, DeploymentTemplateTask>();
}
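One thing not shown in the snippet above: registering the authentication schemes is only half of the story; the middleware also has to be added to the request pipeline in Configure. A minimal sketch (the SignalR hub mapping is omitted because the hub classes are not shown in this post) might look like this:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseStaticFiles();

    // Turns on the cookie + OpenID Connect schemes registered in ConfigureServices
    app.UseAuthentication();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}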

Finally, it is helpful to create an extension method that binds the keys from the JSON config to a POCO class. This allows a developer to inject the POCO directly into a class constructor via dependency injection.


public static class AzureAdAuthenticationBuilderExtensions
{
    public static AuthenticationBuilder AddAzureAd(this AuthenticationBuilder builder)
        => builder.AddAzureAd(_ => { });

    public static AuthenticationBuilder AddAzureAd(this AuthenticationBuilder builder, Action<AzureAdOptions> configureOptions)
    {
        builder.Services.Configure(configureOptions);
        builder.Services.AddSingleton<IConfigureOptions<OpenIdConnectOptions>, ConfigureAzureOptions>();
        builder.AddOpenIdConnect();
        return builder;
    }

    private class ConfigureAzureOptions : IConfigureNamedOptions<OpenIdConnectOptions>
    {
        private readonly AzureAdOptions _azureOptions;

        public ConfigureAzureOptions(IOptions<AzureAdOptions> azureOptions)
        {
            _azureOptions = azureOptions.Value;
        }

        public void Configure(string name, OpenIdConnectOptions options)
        {
            options.ClientId = _azureOptions.ClientId;
            options.Authority = $"{_azureOptions.Instance}{_azureOptions.TenantId}";
            options.UseTokenLifetime = true;
            options.CallbackPath = _azureOptions.CallbackPath;
            options.RequireHttpsMetadata = false;
        }

        public void Configure(OpenIdConnectOptions options)
        {
            Configure(Options.DefaultName, options);
        }
    }
}
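For reference, the AzureAdOptions class bound above is just a POCO exposing the same keys that live in the JSON config; a minimal sketch containing only the properties used in this post would be:

public class AzureAdOptions
{
    // Bound from the "AzureAd" section of the JSON configuration file
    public string ClientId { get; set; }
    public string Instance { get; set; }
    public string TenantId { get; set; }
    public string CallbackPath { get; set; }
}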

Deploy an ARM Template

The web application needs to send requests to Azure Resource Manager to deploy new environments. For this purpose, the web application uses the Azure .NET SDK to programmatically communicate with Azure Resource Manager and deploy the requested Resource Manager JSON template. See a step-by-step tutorial on how to deploy a template using .NET. Below is the code that I use to deploy a virtual machine to Azure based on a well-formatted ARM template.


public async Task<DeploymentExtendedInner> CreateTemplateDeploymentAsync(TokenCredentials credential,
    string groupName, string deploymentName, string subscriptionId, string templatePath,
    string templateParametersPath)
{
    #region Fail Fast

    if (string.IsNullOrEmpty(templatePath))
        throw new ArgumentNullException("Template cannot be null!");

    if (string.IsNullOrEmpty(templateParametersPath))
        throw new ArgumentNullException("Parameter template cannot be null!");

    #endregion

    var templateFileContents = GetJsonFileContents(templatePath);
    var parameterFileContents = GetJsonFileContents(templateParametersPath);

    var deployment = new Deployment
    {
        Properties = new DeploymentPropertiesInner
        {
            Mode = DeploymentMode.Incremental,
            Template = templateFileContents,
            Parameters = parameterFileContents["parameters"].ToObject<JObject>()
        }
    };

    try
    {
        using (var resourceManagementClient = new ResourceManagementClient(credential))
        {
            resourceManagementClient.SubscriptionId = subscriptionId;
            return await resourceManagementClient.Deployments.CreateOrUpdateAsync(groupName, deploymentName,
                deployment.Properties, CancellationToken.None);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine(exception.Message);
        throw;
    }
}
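The GetJsonFileContents helper is not shown above; it simply reads a file from disk and parses it into a JObject, roughly like this:

private static JObject GetJsonFileContents(string pathToJson)
{
    // Read the template (or parameter) file and parse it as JSON
    using (var reader = new StreamReader(pathToJson))
    {
        return JObject.Parse(reader.ReadToEnd());
    }
}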

Infrastructure as code using Resource Manager templates

Azure Resource Manager (ARM) templates are the preferred way of automating the deployment of resources to Azure Resource Manager. ARM templates are JavaScript Object Notation (JSON) files in which the resources you want to deploy are described declaratively. Because they are plain JSON, templates can be versioned in source control, which reinforces the idea of write once, deploy forever.
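Every ARM template follows the same basic JSON skeleton; the resources array is where the declarative descriptions go:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}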

Puppet Enterprise virtual machine extensions

As described earlier, I chose to use Puppet for automating post-deployment virtual machine configuration. This means that Puppet Enterprise agents need to be installed on virtual machines defined in the Resource Manager templates. To make this process truly automatic, the agents need to be installed automatically as soon as the virtual machines are created. This can be achieved by using Azure virtual machine extensions, which allow performing an array of post-deployment operations on Windows or Linux virtual machines.

Puppet

Once virtual machines are deployed for the requested environment, post-deployment virtual machine configuration is handled by Puppet. For more information on Puppet, visit the Puppet official website and the Puppet Enterprise overview.

Installing Puppet

To install Puppet Enterprise, see the official installation guide.

Alternatively, the Azure marketplace offers a preconfigured Puppet Enterprise template that allows users to deploy a Puppet Enterprise environment (including Puppet master server and UI console) within minutes.

Accepting Agents

For security reasons, a Puppet master needs to accept agents that attempt to connect to it. By default, this must be done manually using the console. To make the process truly automatic, the Puppet master needed to be configured to automatically accept certificates.

This can be achieved by adding the line autosign = true to the [master] block in the Puppet master configuration file /etc/puppetlabs/puppet/puppet.conf.
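The relevant portion of puppet.conf then looks like this:

[master]
autosign = true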

Note: This was done specifically for this POC. Do not accept agents automatically using this method in a production environment.

Configure Agents

For the purpose of the POC, I decided to showcase three Puppet post-deployment actions to be executed on the Windows Server 2012 R2 virtual machines (a sketch of a matching manifest follows the list):

  1. Installation of Chocolatey (a Windows package manager).
  2. Use of Chocolatey to install Firefox.
  3. Transfer of a text file to C:\.
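A Puppet manifest covering these three actions might look roughly like the following sketch (illustrative only, not the exact manifest used in the POC; it assumes the puppetlabs/chocolatey module is available on the master, and the class and file names are made up):

class odin_demo {
  # 1. Install Chocolatey itself
  include chocolatey

  # 2. Use Chocolatey as the package provider to install Firefox
  package { 'firefox':
    ensure   => installed,
    provider => 'chocolatey',
  }

  # 3. Drop a text file onto the C: drive
  file { 'C:/odin-readme.txt':
    ensure  => file,
    content => 'Provisioned by ODIN via Puppet',
  }
}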

Conclusion

This project has been a great opportunity to learn more about Azure and to strengthen DevOps best practices. I had no previous Azure experience, so learning how to set up a personal account and resource groups, create ARM templates, and communicate with Azure Resource Manager via REST was a blast!

At this point, the POC can showcase the following:

  1. User logs onto the self-service portal using corporate credentials.
  2. User chooses an environment to be deployed and includes parameters such as region and environment name.
  3. The web portal uses the associated Resource Manager template to programmatically trigger a cloud environment deployment.
  4. Virtual machines, deployed in the new environments, run Puppet agents that connect to the Puppet Enterprise master.
  5. Puppet agents perform installations and configurations on the hosts, preconfigured by the master.

By automating the existing process, ODIN has optimized and accelerated an important part of the dev/test process that used to take days and require manual work.

Looking forward, I have defined the following focus areas for extending the solution:

  • Externalizing template parameters to be configured by the user.
  • Adding additional templates to be available to users.
  • Grabbing templates from Blob storage.
  • Connecting the implemented solution to the CI/CD process.
  • Re-branding the front end using React with Typescript.
  • Settings that allow a user to use a hyperlink for a particular template.
  • Ability to upload a template directly to Blob storage or another storage component.
  • ODIN admins can configure ODIN to use GitHub, Bitbucket, or various storage components for templates.
  • Integrating CloudFormation templates for AWS.
  • Integrating the ability to deploy Docker images and containers.

In Part II, we will integrate the consumption of ARM templates located in Blob storage and make the application front end more user-friendly with React components. Thanks for reading!

Tools, Software, and More

Below are the tools that were used to create ODIN:

  • .NET Core v2.0
  • OpenId Connect
  • Docker
  • ARM Template Base with Parameter Template
  • Puppet
  • Visual Studio
  • Visual Studio Code
  • Typescript
  • SignalR
  • Azure Admin Access


Restricting guest access to a Microsoft Teams tab linked to a SharePoint document library

Guest access is a new feature in Microsoft Teams that allows different organizations to collaborate in a shared environment. Anyone with a business or consumer email account, such as Outlook, Gmail, or others, can participate as a guest in Teams with full access to team chats, meetings, and files.

With the Teams and SharePoint site integration of an Office 365 group, you can now restrict access to a certain document library in SharePoint and have those access restrictions replicate to the corresponding Teams tab. This blog post provides a walk-through of the steps required to restrict a document library in SharePoint and create a corresponding Teams tab linked to that document library.

Restricting access for SharePoint document library

  1. In your SharePoint site, navigate to the document library you wish to secure.
  2. Click on the gear icon at the top of the page, and click on ‘Library settings’.
  3. Click on ‘Stop Inheriting Permissions’ under the Permissions tab.
  4. Select all current permission groups and click ‘Remove User Permissions’.
  5. Click on ‘Grant Permissions’.
  6. Add the users you want to be able to access the document library.
  7. Click on ‘Show Options’.
  8. Select the appropriate permission level.
  9. Click ‘Share’.
  10. Now only users you have explicitly given access to, along with site owners, will be able to access that document library.

Creating a tab in Teams linked to the document library

  1. Navigate to the appropriate Team.
  2. Click on the ‘+’ sign for the channel that you want the tab to be created on.
  3. Click on ‘Document Library’.
  4. Under ‘Relevant sites’, choose the SharePoint site that you created the document library in.
  5. Click ‘Next’.
  6. Pick the document library you want to add.
  7. Change the name of the tab that will be displayed in Teams.
  8. Click ‘Save’.
  9. Now only the Office 365 Group members who you have explicitly given access to this document library will be able to view it in Teams. This allows different organizations to collaborate on projects while maintaining the ability to restrict access to sensitive content.

Securing an ASP.NET Core app on Ubuntu using Nginx and Docker – Part III


In Part I of this tutorial, we created a self-contained ASP.NET Core web application using the new dotnet CLI tools, configured PuTTY and PSCP to SSH and transfer files, and finally transferred the self-contained app from a Windows environment to the Ubuntu VM. Part II discussed setting up Docker, creating a Docker image, and running your application from a Docker container.

Continue reading “Securing an ASP.NET Core app on Ubuntu using Nginx and Docker – Part III”

Securing an ASP.NET Core App on Ubuntu Using Nginx and Docker (Part I)

Typically, when you develop with ASP.NET you have the luxury of IIS Express taking care of SSL and hosting; however, IIS and IIS Express are exclusive to the Windows platform. ASP.NET Core 1.0 has decoupled the web server from the environment that hosts the application. This is great news for cross-platform developers, since web servers other than IIS, such as Apache and Nginx, can be set up on Linux and Mac machines.

This tutorial involves using Nginx as the web server to host a dockerized .NET Core web application with SSL termination on an Ubuntu machine. In this three-part tutorial, I’ll guide you through:

Part I (this post) – 1. Creating and publishing an ASP.NET Core web app using the new dotnet CLI tools, and 2. Installing and configuring PuTTY so we can SSH and transfer files to our Ubuntu machine

Part II – Setting up Docker and creating a Docker Image on Ubuntu 16.04

Part III – 1. Configuring Nginx for SSL termination and 2. Building and Running the Docker Image.

Continue reading “Securing an ASP.NET Core App on Ubuntu Using Nginx and Docker (Part I)”

AWS S3 Bucket Name Validation Regex

Amazon Web Services enforces a strict naming convention for buckets used for storing files. Amazon’s requirements for bucket names include:

  • A Bucket’s name can be between 6 and 63 characters long, containing lowercase characters, numbers, periods, and dashes
  • Each label must start with a lowercase letter or number
  • Bucket names cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods
  • Lastly, the bucket name cannot be formatted as an IPV4 address (e.g. 255.255.255.255)
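Putting those rules together, a validation pattern might look roughly like the following sketch (length bounds follow the list above; the full post walks through the details):

^(?=.{6,63}$)(?!(\d{1,3}\.){3}\d{1,3}$)[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)*$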

Continue reading “AWS S3 Bucket Name Validation Regex”

Rapid Development Using Online IDEs

One of the most important processes in software development is the Rapid Application Development (RAD) model. The RAD model promotes adaptability – it emphasizes that requirements should be able to change as more knowledge is gained during the project lifecycle. Not only does it offer a viable alternative to the conventional waterfall model, but it has also spawned the development of the Agile methodology, which you can learn more about here.

A core concept of the RAD model is that programmers should quickly develop prototypes while communicating with users and teammates. However, historically, this has been hard to do – when starting a project, you often need to decide which languages, libraries, APIs, and editors to use before you can begin. This takes the “rapid” out of rapid application development, and this was always a problem until online integrated development environments (IDEs) started popping up.  Continue reading “Rapid Development Using Online IDEs”

Section 508 Coding Practices

So you’ve read our previous blog on what Section 508 Standards are and how to test for them, and next thought “Gosh, that’s nice, but how do I make a page 508 Compliant?” Or you stumbled upon this blog from a quick search. Either way, if you’re looking for quick and easy tips on how to make your site more 508 Compliant, you’re in the right place! We’ll cover a few common 508 Standards and give basic HTML examples of how to meet compliance.

Continue reading “Section 508 Coding Practices”