In this article, Sneh Pandya takes a look at the strategies that will help you manage multiple applications at scale.
Not all software development and delivery teams have embraced the DevOps approach. But it’s striking how many have, and it’s even more striking how successful many adopters of DevOps have become. While some of the practices and technologies associated with DevOps are still immature, small teams have nevertheless evolved from basic agile development techniques to a more comprehensive engagement with their company’s overall IT capabilities. Businesses are seeing happier customers, and they’re beginning to see bottom-line results.
When DevOps efforts are successful, it’s not uncommon for the people who achieved that success to get inundated with requests for new projects, applications and infrastructure.
Why do you need to configure your environment?
Here are the reasons you should use deployment tools to manage configuration and app settings that vary per environment:
More maintainable: These tools allow you to manage variables and secrets at multiple levels. Variables that are needed in many release definitions can be stored at the project level, and you can also define configuration variables scoped to a specific release definition or to a specific environment within a release definition. Tools such as Azure DevOps (formerly Team Services) provide an easy interface to manage these variables at each of these scopes.
Fewer locations to update: By storing your configuration and app settings in a deployment tool, if settings change at a given time, you only need to update the settings in the deployment tool rather than in every possible file that the setting is hard-coded in. Furthermore, by storing them in the right scope, you do not have to update all the release definitions or environments in which they are used.
Fewer mistakes: When the appropriate settings for each environment are stored in the environment itself in the deployment tool, you won’t accidentally point to the development connection string in production. The values are siloed per environment, making them hard to mix up.
Secure environment: When you have connection strings, passwords or any other settings stored in plaintext in a settings file, anyone who has access to the code can potentially access a database or machine. Restricted access roles and a rich permissions model for determining who can manage and use secrets at each scope are the way to go.
Reliable automation: By automating the process of transforming configurations through a deployment tool, you’ll be able to count on the transforms always happening during deployment and setting the appropriate values.
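As a concrete illustration, here is a minimal sketch of how a deployment tool such as Codemagic lets you scope variables per environment. The group names and deploy script are hypothetical — variable groups would be defined once in the tool itself, then referenced per workflow:

```yaml
# codemagic.yaml (sketch) – variable groups are defined once in the
# deployment tool and referenced per workflow/environment.
workflows:
  deploy-staging:
    name: Deploy to staging
    environment:
      groups:
        - staging        # hypothetical group holding the staging secrets
    scripts:
      - name: Deploy
        script: ./deploy.sh

  deploy-production:
    name: Deploy to production
    environment:
      groups:
        - production     # hypothetical group holding the production secrets
    scripts:
      - name: Deploy
        script: ./deploy.sh
```

Because each workflow pulls in only its own group, updating a production secret is a single change in one place, and a staging value can never leak into the production build.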
Now, let’s look at the strategies that will help you manage multiple applications at scale.
Determining if your application is scalable
The best way to find out if your application is scalable before it gets deployed to thousands of users is to test it in various ways. The main idea of scalability testing is to determine the major workloads and mitigate bottlenecks that can impede your app’s scalability. To make the application truly scalable, it is vital to regularly and rigorously test it for scalability issues. Even if customers love everything else about your app, they’ll be put off by basic scalability and performance problems.
In load testing, you set criteria to ensure releases of a product meet certain conditions, such as supporting a given number of users while delivering a specific response time. Performance testing is undertaken to analyze and improve your app’s performance. This type of test mainly focuses on optimizing resource consumption by analyzing data collected during testing.
Scalability testing is a form of performance testing centered on a detailed understanding of how an application scales. The goal is to find the breaking point where the application stops scaling and to identify the underlying issues.
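To make the load-testing criteria concrete, here is a minimal sketch assuming an Artillery-style configuration (classic v1 `ensure` syntax); the target endpoint and thresholds are placeholders:

```yaml
# load-test.yaml (sketch, Artillery-style) – fail the run if the
# release cannot support the target load within the response-time budget.
config:
  target: "https://api.example.com"   # hypothetical endpoint under test
  phases:
    - duration: 300       # sustain load for 5 minutes...
      arrivalRate: 50     # ...at 50 new virtual users per second
  ensure:
    p95: 500              # 95th-percentile response time under 500 ms
    maxErrorRate: 1       # no more than 1% of requests may fail
scenarios:
  - flow:
      - get:
          url: "/products"
```

Ratcheting up `arrivalRate` across runs until the `ensure` criteria fail is one simple way to locate the breaking point that scalability testing looks for.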
Scaling from the initial stage
Planning to scale your apps before actually scaling them is an excellent business strategy. Design your apps in a way that allows them to grow to the point where they are being used by thousands or millions of people. Here are some important questions to ask yourself and your team before getting started:
- On average, how many people will use the app in the coming months?
- What kind of data is involved? How do you plan to manage it?
- Roughly how long will a single server be able to accommodate all of your customers?
- How will you manage more customers or data than expected?
These initial phases of planning to scale involve both creating the technology needed to provide growth and informing business decisions. You need to take the past growth rate and expected growth rate into account and determine what potential technology limits will be encountered.
Managing application architecture
To get a better idea of why we need to move away from monoliths, let’s look at the example of NASA’s Apollo 13 mission. The Apollo 13 spacecraft consisted of two modules: the lunar module and the command module. In software terms, these were essentially two giant monoliths.
When an oxygen tank in the command module failed, the crew needed to repurpose the command module’s air filters for use in the lunar module. However, the two modules’ engineers had designed their filters differently, causing a major headache when the crew needed to reassemble and retrofit these parts.
There are multiple approaches available for creating reliable infrastructure for maintaining multiple applications at scale:
Microservices
Microservices are an approach to developing a single application as a suite of small services instead of one single giant app. Each small service runs in its own process instead of relying on one single process to run the entire app. A typical example is an e-commerce app that is split into microservices: one for payment, one for the front-end user interface, one for email notifications, one for managing comments and another one for recommendations.
While much of the discussion about microservices has revolved around architectural definitions and characteristics, their value can be more easily understood through fairly simple business and organizational benefits.
- Code can be updated more easily, and new features or functionality can be added without changing the entire application.
- Teams can use different stacks and different programming languages for different components.
- Components can be scaled independently of one another, reducing the cost and complexity associated with having to scale entire applications.
- Microservices compose a single application from many smaller, loosely coupled services, in contrast to the monolithic approach of a large, tightly coupled application.
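To make the e-commerce example above concrete, here is a minimal sketch of how such a split might be wired together with Docker Compose. The service names and images are hypothetical:

```yaml
# docker-compose.yaml (sketch) – each microservice runs in its own
# container and process and can be scaled independently.
services:
  frontend:
    image: example/shop-frontend:1.0     # hypothetical image
    ports:
      - "8080:8080"
  payments:
    image: example/payments:1.0
  notifications:
    image: example/email-notifications:1.0
  comments:
    image: example/comments:1.0
  recommendations:
    image: example/recommendations:1.0
    deploy:
      replicas: 3    # scale just this component, not the whole app
```

Note how only the recommendations service is scaled to three replicas — with a monolith, handling that one hotspot would mean scaling the entire application.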
Serverless computing
Serverless computing is a cloud computing model that provisions computing resources on demand and offloads all responsibility for common infrastructure management tasks, such as scaling, scheduling, patching and provisioning. The serverless model requires no management and operation of infrastructure, giving developers more time to optimize code and develop innovative new features and functionality.
There are most definitely servers in serverless computing. The term serverless persists because it describes the customer’s experience of servers: They’re essentially invisible because the customer doesn’t see them, manage them or interact with them in any way.
The most useful way to define and understand serverless is by focusing on the core attributes that distinguish serverless computing from other computing models, namely:
- It offloads infrastructure management and operations entirely to the cloud provider, freeing developers to focus on their code.
- Serverless computing runs code on demand only, typically in a stateless container on a per-request basis, and scales transparently with the number of requests being served.
- Serverless computing enables end users to pay only for resources being used, never paying for idle capacity.
- For certain workloads, such as those that require parallel processing, serverless computing can be both faster and more cost-effective than other forms of computing.
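As an illustration of these attributes, here is a minimal sketch of a single function defined with the Serverless Framework — the service name, handler and provider details are assumptions:

```yaml
# serverless.yml (sketch) – one stateless function, invoked per request,
# with scaling, patching and provisioning handled by the cloud provider.
service: email-notifications       # hypothetical service name
provider:
  name: aws
  runtime: nodejs18.x
functions:
  sendEmail:
    handler: handler.sendEmail     # hypothetical handler module
    events:
      - httpApi:
          path: /notify
          method: post
```

There is no server, cluster or capacity setting anywhere in the file: the function runs only when `/notify` is called, and billing stops when it returns.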
White label apps
A white label app is an app developed by a white label platform or private label development organization and then rebranded and resold by other organizations. This approach is used mainly for generic products and services that lend themselves to mass production.
White label applications do not require any significant initial expenditure. This can be advantageous for many organizations, notably startups and smaller organizations whose budgets do not allow for more substantial spending. The advantages of white label apps are:
- Customization – the app can be heavily customized to reflect the brand’s image. This is useful when a customer wants to sync a product or service with their website and social media networks.
- Reduced time to market – be it for a regional organization or a global enterprise, white label solutions that are already established in the market can simply be offered to the organization’s clients as an added benefit.
- Single codebase – in most cases, the organization only needs to maintain one codebase, with all of the configuration stored independently of the code. The configuration is injected while the builds are made, and the specific flavor is ready in no time (see the sketch below)!
- Read a case study on how Criton is building apps for 200+ hotels.
- Learn more about white label app CI/CD.
Managing infrastructure
When an organization wants to expand its DevOps capability across more – and potentially larger – projects, the first step is to create a process and workflow that can be repeated by multiple teams. Another key to expanding DevOps is to ensure that the early initiative proves the benefits in terms of time savings, cost savings, business results, etc. As a broader effort gets underway, teams will have clearer expectations with these established measures for effectiveness.
Your continuous delivery pipeline typically consists of multiple environments. You may want to deploy changes to a test or staging environment first before deploying to a production environment. Furthermore, your production environment may itself comprise multiple scale units, each of which you may deploy in parallel or one after the other for a gradual rollout.
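A multi-environment pipeline of this shape might look like the following sketch, assuming GitLab CI-style syntax; the script paths and manual production gate are assumptions:

```yaml
# .gitlab-ci.yml (sketch) – changes flow through test and staging
# before reaching production.
stages: [test, staging, production]

run-tests:
  stage: test
  script: ./run_tests.sh

deploy-staging:
  stage: staging
  environment: staging
  script: ./deploy.sh staging

deploy-production:
  stage: production
  environment: production
  when: manual              # gate the production rollout behind approval
  script: ./deploy.sh production
```

For a gradual rollout, the production stage could be split into one job per scale unit, run in parallel or sequenced one after the other.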
Containers and orchestration
There is quite a bit of complexity associated with using so many types of cloud computing simultaneously across an organization, and there is potential for these clouds to turn into a storm, with data and applications scattered across multiple cloud solutions.
Cloud orchestration is a process that can be used to manage these multiple workloads in an automated fashion across several cloud solutions, with the goal being to synthesize them into a single workflow. Similar to an orchestra, the goal is to get all the instruments to perform the same piece together. This makes cloud orchestration like the musical conductor that controls the performance and keeps it in sync. Popular examples include Kubernetes, Docker Swarm and Red Hat OpenShift.
Cloud orchestration involves the coordination of multiple tasks. It often requires less manual coding, since it builds on the coding already done for cloud automation, which prevents redundant work. Compared to cloud automation, cloud orchestration works at a higher level of coordination, as the underlying processes are already streamlined through automation.
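For a flavor of what declarative orchestration looks like in practice, here is a minimal Kubernetes sketch — the app name and image are hypothetical:

```yaml
# deployment.yaml (sketch) – the orchestrator keeps three replicas of
# the service running, restarting or rescheduling them as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendations
spec:
  replicas: 3
  selector:
    matchLabels:
      app: recommendations
  template:
    metadata:
      labels:
        app: recommendations
    spec:
      containers:
        - name: recommendations
          image: example/recommendations:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

You declare the desired state (three healthy replicas), and the orchestrator — the conductor in the analogy above — continuously works to keep reality in sync with it.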
Infrastructure as Code
Infrastructure as Code (IaC) uses a high-level descriptive language to automate the provisioning of IT infrastructure. This automation eliminates the need for developers to manually provision and manage servers, operating systems, database connections, storage and other infrastructure elements every time they want to develop, test or deploy a software application.
IaC is also an essential step in the DevOps lifecycle and is indispensable for competitively paced software delivery. It enables teams to rapidly create and version infrastructure in the same way they version source code and to track these versions so as to avoid inconsistency among IT environments, which can lead to serious issues during deployment.
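As one example of such a high-level descriptive language, here is a minimal Ansible-style sketch; the inventory group and package choice are assumptions:

```yaml
# playbook.yaml (sketch, Ansible-style) – the desired state of a server
# is described declaratively and versioned alongside the source code.
- name: Provision web servers
  hosts: webservers          # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because this file lives in version control, every environment provisioned from it is identical, and any drift can be traced and corrected through the file’s history.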
Managing delivery
Use a Container-Based Pipeline: If a pipeline is containerized, you can run it independently with different language versions. Shared libraries are not the solution, as they create version-specific conflicts and ultimately cause havoc. Using a Docker image is better than using shared libraries because users can self-serve images with whatever version they want. There are many images on Docker Hub that make it easier to perform common CI/CD steps, such as building a Docker image, cloning a Git repository, running unit tests, linting projects and security scanning. Using these images can help reduce wasted development effort.
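For instance, here is a minimal sketch of containerized pipeline jobs, again assuming GitLab CI-style syntax — the image tags are placeholders:

```yaml
# .gitlab-ci.yml (sketch) – each job runs in a self-served image, so
# two pipelines can pin different language versions without conflict.
unit-tests:
  image: node:20-alpine    # another team could pin node:18 instead
  script:
    - npm ci
    - npm test

lint:
  image: node:20-alpine
  script:
    - npm ci
    - npm run lint
```

Swapping the runtime version is a one-line change to the `image` field rather than an upgrade of a shared library that every pipeline depends on.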
Contextual Operation: Instead of managing thousands of pipelines for all your applications or microservices, use a minimal number of malleable pipelines for all integrations and deployments. This method relies on triggers that carry context, or metadata, allowing a pipeline to change its behavior accordingly. Triggers can be time based or centered around actions like a Git commit to a repo or pushing a new image. When a trigger fires, it brings in the relevant dependencies to perform an action, such as cloning a repository or codebase, and further steps can then consume these unique criteria.
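The shape of such a context-carrying trigger might look like the following sketch. The syntax here is purely illustrative — it is not a real tool’s schema:

```yaml
# trigger-context.yaml (illustrative sketch only – hypothetical schema)
trigger:
  type: image-push                # could also be git-commit or cron
  payload:
    registry: registry.example.com
    image: shop-frontend
    tag: "1.4.2"
pipeline:
  steps:
    - name: clone
      run: git clone "$REPO_URL"        # dependency pulled in by context
    - name: deploy
      run: ./deploy.sh "$IMAGE:$TAG"    # behavior driven by the payload
```

One generic pipeline can thus serve many services: the payload tells it what to clone, build and deploy, rather than each service owning a near-identical pipeline definition.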
Canary Testing: If an organization has tens of applications, hundreds of pipelines or thousands of microservices, spinning up each instance to run independent tests can quickly become cost-prohibitive. Testing early becomes less useful as infrastructure complexity grows. To solve this issue, teams can adopt a canary release strategy. Canary releasing is a practice in which new software is first released and tested among a small subgroup of users. It’s a good way to limit the blast radius of a change that goes wrong.
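As one concrete pattern, traffic can be split between a stable and a canary version at the routing layer. Here is a minimal sketch assuming Istio’s VirtualService API, with a hypothetical service name; the stable and canary subsets would be defined in an accompanying DestinationRule:

```yaml
# virtual-service.yaml (sketch, Istio-style) – route 10% of traffic to
# the canary release, keeping the blast radius small.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: shop-frontend            # hypothetical service
spec:
  hosts:
    - shop-frontend
  http:
    - route:
        - destination:
            host: shop-frontend
            subset: stable
          weight: 90
        - destination:
            host: shop-frontend
            subset: canary
          weight: 10             # small subgroup sees the new release
```

If the canary misbehaves, setting its weight back to 0 rolls every user onto the stable version without a redeploy.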
Centralized Management: As multiple solutions for installation and lifecycle management sprang up, companies seeking to adopt technologies such as Kubernetes had to figure out the right approach. With the open-source community standardizing on technologies like Cluster API for installation and declarative lifecycle management of multiple clusters, a new path toward consistency across clouds has emerged. Teams are shifting away from deploying one large workload cluster subdivided using namespaces. Instead, they are adopting a more resilient architecture: many workload clusters and an ephemeral, clusters-as-cattle mentality that proactively reduces business risk.
Group-Based Deployments: Organizations should also use group-based deployments, which enable administrators to target specific users with pre-defined bundles of applications that can be installed with a single click. This means leveraging content delivery networks (CDNs) that provide cloud-based, on-demand bandwidth capable of pushing apps to large groups of users without clogging up the network.
Business insights
DevOps pipelines: Codemagic
Codemagic provides a versatile infrastructure for running and building pipelines for all mobile use cases. From encrypting sensitive information in environment variables to using codemagic.yaml for custom workflows, this feature-rich platform makes it possible for teams to ship mobile apps 50% faster.
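A minimal codemagic.yaml workflow might look like the following sketch — the variable group, build script and recipient address are assumptions:

```yaml
# codemagic.yaml (sketch) – a custom workflow with encrypted variables,
# a build step, collected artifacts and a publishing step.
workflows:
  release:
    name: Release build
    environment:
      groups:
        - app_store_credentials   # hypothetical encrypted variable group
    scripts:
      - name: Build release
        script: ./gradlew assembleRelease
    artifacts:
      - app/build/outputs/**/*.apk
    publishing:
      email:
        recipients:
          - team@example.com      # hypothetical recipient
```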
Codemagic’s Professional plan offers concurrent builds, unlimited build minutes, in-app support and a pool of external third-party integrations to supercharge your workflows, as well as priority feature requests to address the special needs of your business use case.
IBM Cloud Orchestrator
Cloud orchestration is accomplished via a vendor platform, such as IBM Cloud Orchestrator, which supports public, private and hybrid clouds. Using such a platform promises benefits like reducing service delivery times by up to 90%. By fully automating previously manual workloads, common processes are implemented and costs are reduced. Innovation is also hastened on public cloud services, while business policies are consistently enforced and service-level agreements (SLAs) are met.
IBM Cloud Orchestrator is a customizable self-service portal. It can automate many IT processes and works with multiple cloud providers, including Amazon EC2, Microsoft Azure and IBM’s SoftLayer. The higher-level Enterprise tier adds instant health dashboards, multi-tenant cloud usage reporting and what-if capacity analysis.
Serverless and microservices: Microsoft Azure
Azure Static Web Apps is a turnkey service for modern full-stack web apps with pre-built and pre-rendered static front ends and serverless API back ends. You can use it to develop with popular front-end frameworks or static site generators, quickly build and test your apps locally, and deploy with a simple check-in. This enables you to focus on developing your app while Azure takes care of the deployment and infrastructure.
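Deployment is typically wired up through a checked-in workflow. Here is a trimmed sketch based on the GitHub Actions integration for Azure Static Web Apps, with placeholder paths:

```yaml
# .github/workflows/azure-swa.yml (sketch) – build and deploy on push.
name: deploy-static-web-app
on:
  push:
    branches: [main]
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          action: "upload"
          app_location: "/"         # placeholder front-end path
          api_location: "api"       # placeholder serverless API path
          output_location: "dist"   # placeholder build output folder
```

A push to main builds the static front end and deploys it along with the serverless API, with Azure handling the hosting infrastructure.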
Conclusion
Scalability is vital to your application’s success. An app that scales well supports growth, offers a good user experience to new users and gives you a better return on investment. If your target audience numbers in the thousands or millions, then proper planning and execution are required from the beginning. Additionally, ensure you have a suitable process in place for bringing new servers online as required.
Be ready for rising demand by keeping preconfigured virtual machine images on hand for each of the servers you run. This way, new servers come up with the latest configuration and are functional within your infrastructure in a fraction of the time. In fact, any seasoned mobile app development company will take care of scalability from day one. Do not wait until the app is straining under its current users – instead, begin testing from the initial phase with load tests, performance tests and scalability tests to achieve the best results.
Useful links and resources
- Here’s an article on how to maintain your automated build pipelines.
- Here’s an article on what Mobile DevOps is and why you should care.
- Here’s an article on DevOps testing tools in 2021.
- For discussions, learning and support, join the Codemagic Slack Community.
Sneh is a Senior Product Manager based in Baroda. He is a community organizer at Google Developers Group and co-host of NinjaTalks podcast. His passion for building meaningful products inspires him to write blogs, speak at conferences and mentor different talents. You can reach out to him over Twitter (@SnehPandya18) or via email (sneh.pandya1@gmail.com).