Reducing Costs in the Cloud using Machine Learning

In the modern-day business environment, where organizations are consistently asked to do more with less, businesses of all sizes continue to move to the cloud en masse. The switch to a cloud-based platform over a traditional on-premises installation is driven largely by a desire to reduce costs, with added benefits such as optimized performance and stronger compliance.

A comparison of cloud-based platforms to on-premises installations

However, organizations that use the cloud for hosting alone have barely scratched the surface of its functionality. While features such as resource pooling and automatic backups are commonly used, others, such as machine learning in the cloud, remain underutilized.

Machine learning can broadly be defined as the science of feeding computers data to autonomously improve their learning over time, and it is starting to explode in the cloud-sphere. Azure was one of the first cloud platforms to announce machine learning capabilities (back in late 2014) and Amazon followed suit in 2015. More recently, Google launched Google Cloud Machine Learning in 2017, while Amazon introduced a more advanced machine learning platform called SageMaker at re:Invent 2017.

Machine learning in the cloud, at its core, consists of three separate requirements:

  1. The first, and most important, part of the machine learning process is converting unstructured data into understandable, value-rich data. The core principle is that without good data there is no value to derive from it – clean data is essential for machine learning. Cloud providers make it easy to import your data – Amazon SageMaker integrates natively with S3, RDS, DynamoDB, and Redshift, and all three major providers let you bring in your own data as well.
  2. The second piece of the puzzle is the machine learning algorithm that the data runs through. SageMaker, for example, offers many built-in algorithms such as Linear Learner, XGBoost, and K-Means. It also allows you to use your own trained algorithms, which are packaged as Docker images (like the built-in ones). This gives you the flexibility to use almost any algorithm code, regardless of implementation language, dependent libraries, or frameworks.
  3. Lastly, machine learning scientists need a certain level of insight to be able to connect models and algorithms to business processes. Industry-standard processes for data mining (illustration below) show that understanding the core business and data is key to modeling algorithms. Without an understanding of what data can help make impactful business decisions, the output of any algorithm is useless.
An illustration of the Cross Industry Standard Process for Data Mining
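To make the first requirement concrete, here is a minimal sketch of the kind of cleaning pass you might run before uploading a dataset to S3. The column names and raw records are invented for illustration, and a real pipeline would likely use a library such as pandas; plain Python is enough to show the idea:

```python
import csv
import io

# Hypothetical raw export: some rows have missing or malformed fields.
raw = """timestamp,cpu_pct,latency_ms
2021-01-01T00:00,73,120
2021-01-01T01:00,,95
2021-01-01T02:00,61,not_a_number
2021-01-01T03:00,88,210
"""

def clean_rows(text):
    """Keep only rows where every field parses; cast numeric columns."""
    cleaned = []
    for row in csv.DictReader(io.StringIO(text)):
        try:
            cleaned.append({
                "timestamp": row["timestamp"],
                "cpu_pct": float(row["cpu_pct"]),
                "latency_ms": float(row["latency_ms"]),
            })
        except (ValueError, TypeError):
            continue  # discard rows that cannot be parsed
    return cleaned

rows = clean_rows(raw)
print(len(rows))  # 2 usable rows survive out of 4
```

Only the two fully parseable rows survive; in practice you might impute missing values instead of dropping rows, but the principle is the same – the algorithm should only ever see data you trust.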

Machine learning in the cloud really is as simple as finding the right dataset (or collecting the data using built-in cloud monitoring services), training the right algorithm, and continuing to fine-tune the machine learning system. The barrier to entry is not as large as organizations perceive it to be, and not every user of a cloud-based machine learning system must be a trained machine learning scientist.
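As a toy illustration of the "train the right algorithm" step, the sketch below implements a bare-bones one-dimensional k-means – a local stand-in for a managed offering like SageMaker's built-in K-Means. The latency numbers are invented, and a real workload would use the managed algorithm rather than this loop:

```python
def kmeans_1d(points, k, iters=20):
    """Toy 1-D k-means; assumes k >= 2 and non-empty points."""
    lo, hi = min(points), max(points)
    # Spread the initial centroids evenly across the data range.
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Hypothetical response times (ms) showing two behavior modes.
latencies = [12, 14, 13, 11, 220, 240, 230, 215]
centroids = kmeans_1d(latencies, k=2)
print(centroids)  # two cluster centers: normal vs. degraded responses
```

The algorithm separates the healthy responses from the degraded ones without any labels – exactly the kind of unsupervised grouping the built-in cloud algorithms automate at scale.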

At this point, you may be wondering – how can I use these systems to reduce costs and increase operational intelligence in my current cloud setup?

One of the most important ways any organization can reduce costs is by proactively predicting performance issues and remediating them before they impact the business. Specifically, running a dataset of historical uptime/downtime metrics through a machine learning algorithm can help you identify when a server or system is likely to fail. With that knowledge, you can avert work slowdowns or stoppages and, in the process, avoid substantial losses in productivity.
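A full predictive-maintenance pipeline would train a model on labeled failure history, but the underlying idea can be sketched in a few lines: compare a server's recent behavior against its own historical baseline. The error-rate figures below are invented for illustration:

```python
import statistics

def at_risk(history, recent, n_sigma=2.0):
    """Flag a server whose recent error rate sits well above its
    historical baseline -- a crude stand-in for a trained model."""
    mean = statistics.mean(history)
    sigma = statistics.pstdev(history)
    return recent > mean + n_sigma * sigma

baseline = [0.8, 1.1, 0.9, 1.0, 1.2, 0.9, 1.1]  # % errors per hour

print(at_risk(baseline, recent=1.15))  # False: within normal variation
print(at_risk(baseline, recent=4.0))   # True: spiking, schedule maintenance
```

A trained model replaces the hand-picked two-sigma threshold with one learned from actual failure outcomes, but the payoff is the same: the flag fires before the server goes down, not after.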


In addition to technical cost reduction, machine learning in the cloud is already being used to cut costs by improving processes across many industries. One great use case is how Google Cloud augments typical help-desk scenarios with Cloud AI using an event-based serverless architecture. It also has the potential to revolutionize healthcare, where it can raise alerts from real-time patient data while providing proactive health management for existing patients.

In summary, while cloud migrations are ubiquitous, much of the cloud's true power remains untapped as long as services like machine learning stay underutilized. These tools, offered by all the major cloud providers, can help organizations by providing proactive analyses and minimizing operational waste.


Rapid Development Using Online IDEs

One of the most important processes in software development is the Rapid Application Development (RAD) model. The RAD model promotes adaptability – it emphasizes that requirements should be able to change as more knowledge is gained during the project lifecycle. Not only does it offer a viable alternative to the conventional waterfall model, but it has also spawned the Agile methodology, which you can learn more about here.

A core concept of the RAD model is that programmers should quickly develop prototypes while communicating with users and teammates. Historically, however, this has been hard to do – when starting a project, you often need to decide which languages, libraries, APIs, and editors to use before you can begin. This takes the "rapid" out of rapid application development, and it remained a problem until online integrated development environments (IDEs) started appearing.