The Evolution of Enterprise Applications and Performance

Data centers and application design have evolved significantly over the last 25 years from decentralized and sprawled compute clusters focused on delivering client-server application architectures to today's cloud-based and stateless application design built for web-scale volumes.

In this paper, we examine the evolution of enterprise applications, the data centers in which these applications reside and the rising expectations of businesses and end-users for the Quality of Service their applications deliver. We also discuss Application Performance Monitoring (APM) and why this approach is no longer sufficient for today's complex data centers and application designs, nor able to meet end-user and business expectations.

Read more to learn about:

  • The evolution of enterprise applications, the data centers which house them and the new challenges in assuring their performance
  • Why assuring application performance is more important than ever
  • Common approaches to assuring application performance
  • How to bridge the gap between application and infrastructure performance

A Look Back: The Evolution of the Data Center & Enterprise Applications

During the 1960s, computers were large mainframes stored in rooms – what we would call a "data center" today. They were costly, and businesses rented space on a mainframe to run specific functions or applications. With Intel's release of the first commercial microprocessor in 1971 and the microcomputer boom of the 1980s, computers became widely used in the office. At the time, little thought was given to the specific environment or operating requirements of microcomputers – what we call servers today.

In the 1990s, the dot-com boom took hold and businesses needed a quick way to establish a presence on the Internet. With inexpensive networking equipment readily available, companies installed server rooms or data centers inside their own walls. At roughly the same time, in the early 1990s, the client-server computing model gained popularity, providing a distributed application structure that partitioned tasks or workloads between the providers of resources or services, called servers, and the service requesters, called clients. At the time, data centers were decentralized and sprawled.

Over the following twenty years, enterprise applications and the data centers providing the infrastructure to house them evolved significantly. This evolution has also made assuring application performance more complex. We have moved from a static world where applications had dedicated, wholly-owned infrastructure to a dynamic, shared world in which more than 71% of x86 servers are virtualized. Virtualization enables data center operations teams to configure, in real time, the access workloads have to compute, storage and network resources as demand fluctuates.

In 2006, with the launch of Amazon Web Services™ and the dawn of public cloud computing, data centers and applications entered a new phase in which workloads could run across multiple boundaries and were often short-lived, performing a discrete set of tasks before disappearing.

Figure 1: Evolution of the data center

The Rise of the Virtualized Data Center

In 2001, VMware® ESX® was launched: a bare-metal hypervisor that runs directly on server hardware without an additional underlying operating system. Virtualization dramatically increased infrastructure utilization, taking the ratio of workloads to servers from 1:1 to an average of 11:1.

Virtualization gives IT operators levers and control knobs to decide when a workload or virtual machine should be created, how much compute, storage or network to provide each workload and where each workload should reside. Use cases such as disaster-recovery, assuring high availability through N+1 policies or deploying new servers for new workloads become easier to execute.

Virtualization has also significantly increased the challenge of assuring application performance. The interdependencies and decision points required to assure that each workload—and, by extension, each application—has access to sufficient resources make the environment nearly impossible for an IT administrator to manage manually. Virtualization has introduced orders of magnitude of additional complexity: there are thousands of metrics and control knobs across the entire virtualized IT stack which a system administrator must consider when trying to maintain the environment and assure application performance.

Figure 2: Interdependencies in today's virtualized data center

Cloud-Based Services & Changing App Design

Today's data centers are shifting from an infrastructure, hardware and software ownership model toward a subscription and capacity-on-demand model. The data center industry is now changing thanks to consolidation, cost control and cloud support. Cloud computing, paired with today's data center, allows IT decisions about how resources are accessed to be made on a workload-by-workload basis.


Concurrently, application design is evolving, with born-in-the-cloud applications leading the way and migrated-to-the-cloud apps "playing catch up". New applications focused on a mobile end-user experience or NoSQL big-data use cases are often the "poster children" for cloud-native applications. While stateless application design—an approach that does not record information generated in one session—is preferred for cloud environments and can dramatically improve the portability and scalability of applications, most of today's enterprise applications are stateful, and significant resources are required to re-architect them.

Another key development is containers. As Linux has become more prevalent in the data center, containers are gaining popularity over VMs, although one downside of containers compared to VMs is that they cannot run multiple operating systems, such as Windows on top of Linux. In addition, "microservices"—an emerging application architecture in which discrete pieces of functionality can scale independently—is especially suited to containers. With every part of the application running as a single, self-contained entity, system administrators can more easily move these pieces around for redundancy/failure tolerance, capacity and feature testing.


So what does this mean for application performance? With traditional approaches to application performance management, monitoring and trending each component of a containerized, highly distributed application means another order of magnitude of increased complexity. Applications aren't necessarily confined to just one container. We might have an application that consists of ten containers running together. We could have 1,000 applications running across 10,000 containers. Or we might have a single big data job that involves multiple, interdependent applications.

Application Performance is King

There have been multiple articles and studies showing how customer-centric organizations outperform their peers. A recent study by Forrester highlights how we have entered a 20-year business cycle in which the most successful enterprises will reinvent themselves to systematically understand and serve increasingly powerful customers. What does this have to do with application performance? In a word, everything.

Customer interactions with businesses are becoming more and more digital (e.g. web, mobile, social media), and even the physical ones (e.g. call center, store front) are underpinned by internal business services and applications tuned to understand, connect with and serve customers at their moment of need. The amount of data collected, stored and analyzed on customer interactions and purchases is growing exponentially, as increasingly sophisticated business intelligence and analytical approaches are applied to identify trends and gain a competitive advantage through a better understanding of the customer, in order to acquire and retain them.

With this increasing 'consumerization' of IT, it's not only consumers raising the bar for consistent multi-channel, digital interaction with businesses and brands. Business users are also expecting the internal applications they use to serve their customers to provide the same user experience and performance they've grown accustomed to from the Apple iPad® or Google Docs™. Business users expect to pull up a web browser wherever they may be and quickly access customer data, purchase orders or support tickets with no latency.


Software and applications are the lifeblood of the customer-centric organization. As another Forrester study highlights, there are significant consequences to degraded application performance. The research highlighted the impact of poor application performance: 83% of organizations reported a loss of business user productivity, 52% saw an increase in customer dissatisfaction and 46% saw a loss of revenue.

Traditional APM Concepts

The traditional approach to assuring application performance is predicated on Application Performance Monitoring (APM). The basic approach is built around collecting data from as many log files and agents as possible and leveraging analytical techniques to describe what is going on as well as predict what may happen in the future. Gartner defines a mature APM approach as having five dimensions of functionality:4

  • End-user experience monitoring: capturing data on how end-to-end latency, execution correctness and quality appear to the real user of the application, with a secondary focus on application availability
  • Application topology discovery and visualization: the discovery of the software and hardware infrastructure components involved in application execution, and the array of possible paths across which these components communicate to deliver the application
  • User-defined transaction profiling: the tracing of user-grouped events, which comprise a transaction, as they occur within the application and as they interact with components in response to a user's request to the application
  • Application component deep dive: the fine-grained monitoring of resources consumed and events occurring within the components discovered through application topology discovery and visualization
  • IT Operations Analytics: the usage of multiple analytical techniques in order to further enable visibility and root cause analysis

Multiple software vendors provide tools to accomplish the above functionality. IBM, BMC Software, CA Technologies and HP were the traditional leaders in this space. However, over the last decade, and with the emergence of SaaS as a delivery model, New Relic, AppDynamics and Dynatrace (Compuware) are now viewed as the industry leaders according to Gartner.

Although these vendors have evolved to address the needs of more complex data centers and highly distributed application design, the fundamental premise has stayed the same. The expectation is that performance will be degraded and a human being will be presented with a set of dashboards or alerts and need to make a decision as to how to quickly restore service levels to end-user expectations.

What Decisions Assure Performance?

Virtualization has exposed multiple infrastructure control knobs which a system administrator can use to tune how much compute, storage or network a workload receives as well as where that workload runs, e.g. which host, which cluster or which data center. The same control knobs exist for Containers, as well as for key application components through modern application platforms such as Java®.

By automating the decisions on resource allocation and placement based on the demand of an application and its components, application performance can be assured while the utilization of the underlying infrastructure is maximized.

Let's look at Java application servers (e.g. IBM WebSphere®, Oracle WebLogic®, Apache Tomcat® and Red Hat JBoss®) as an example. These application servers provide application support professionals with the ability to define and configure key control knobs.

Auto-scaling: A feature popularized by cloud computing services (e.g. AWS) that automatically grows computational power as load on CPU, memory or disk storage increases. For example, as load increases on an application server instance, a new instance is spun up and a load balancer distributes traffic to it, assuring that performance is not degraded.
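
To make the mechanics concrete, the following is a minimal sketch in Java of the kind of threshold-driven scale-out/scale-in decision an auto-scaling policy encodes. The class name, thresholds and instance limits are illustrative assumptions, not any particular cloud provider's API; real services such as AWS Auto Scaling expose this logic as managed policies.

    // Hypothetical, simplified illustration of a threshold-driven auto-scaling decision.
    // Names and thresholds are invented for this example.
    public class AutoScalePolicy {
        private static final double SCALE_OUT_CPU = 0.75;  // add capacity above 75% average CPU
        private static final double SCALE_IN_CPU  = 0.30;  // remove capacity below 30% average CPU
        private final int minInstances;
        private final int maxInstances;

        public AutoScalePolicy(int minInstances, int maxInstances) {
            this.minInstances = minInstances;
            this.maxInstances = maxInstances;
        }

        // Returns the desired number of application server instances for the observed load.
        public int desiredInstances(double avgCpuUtilization, int currentInstances) {
            if (avgCpuUtilization > SCALE_OUT_CPU && currentInstances < maxInstances) {
                return currentInstances + 1;   // spin up one more instance behind the load balancer
            }
            if (avgCpuUtilization < SCALE_IN_CPU && currentInstances > minInstances) {
                return currentInstances - 1;   // retire an idle instance to save resources
            }
            return currentInstances;           // demand is within bounds; no change
        }
    }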

Java Virtual Machine (JVM) Heap: The area of memory used by the JVM. The heap is typically broken into the working set—the short-lived objects currently in use—and the space managed by garbage collection, which handles memory management by collecting and releasing the memory no longer occupied by objects. The size of the JVM heap can be tuned to improve performance: increased to prevent heap exhaustion or slow performance; decreased to prevent over-allocation of memory that may be required by other JVMs or applications.
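
Heap limits are typically set with the standard -Xms and -Xmx JVM flags. As a small illustration (not a tuning tool), the sketch below uses the standard java.lang.management API to report how close a running JVM is to heap exhaustion; the 90% warning threshold is an arbitrary example value.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    // Illustrative only: report heap utilization so an operator (or an automated platform)
    // can judge whether the -Xmx ceiling should be raised or lowered.
    public class HeapCheck {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            long usedMb = heap.getUsed() / (1024 * 1024);
            long maxMb  = heap.getMax()  / (1024 * 1024);   // the -Xmx limit
            double utilization = (double) heap.getUsed() / heap.getMax();

            System.out.printf("Heap: %d MB used of %d MB max (%.0f%%)%n", usedMb, maxMb, utilization * 100);
            if (utilization > 0.9) {   // example threshold, not a recommendation
                System.out.println("Warning: heap nearing exhaustion; consider a larger -Xmx.");
            }
        }
    }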


Figure 4: JVM Heap must be sized based on application demand to assure performance.

Thread Pools: A collection of threads—small sequences of programmed instructions that can be managed independently—created to execute a number of tasks. Tasks are held in a queue and are released for execution when a thread is available; the number of tasks typically exceeds the number of threads. The size of the thread pool can be tuned to improve performance: increase it to prevent thread waits and degraded performance; decrease it to reduce overhead.
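
As a concrete, deliberately simplified illustration, the standard java.util.concurrent API exposes the same knobs an application server tunes: the number of threads and the depth of the queue in which tasks wait for a free thread. The pool sizes and queue depth below are arbitrary example values.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Simplified illustration of thread-pool sizing; the values are examples, not recommendations.
    public class WorkerPool {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    10,                                  // core pool size kept alive at all times
                    50,                                  // extra threads created only when the queue is full
                    60, TimeUnit.SECONDS,                // idle threads above the core size are retired
                    new LinkedBlockingQueue<>(500));     // tasks wait here until a thread is available

            for (int i = 0; i < 100; i++) {
                final int taskId = i;
                pool.submit(() -> System.out.println("Handling request " + taskId));
            }
            pool.shutdown();
        }
    }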

Figure 5: Thread-pools must be sized based on application demand to assure performance.

Modern Java application servers provide access to these control knobs as well as some built-in decision automation. However, assuring performance is not as simple as increasing heap size when there is a threat of heap exhaustion, or auto-scaling when transaction volume dramatically increases. As discussed earlier, the multiple application components are distributed among multiple VMs and servers and consume compute, storage and network resources from multiple providers. Increasing heap may overtax the VM's vMem or the memory consumption on the underlying hosts, in turn affecting the performance of other VMs or components of the application.

Assuring application performance in today's virtualized and cloud environments with highly distributed applications requires a full understanding of the real-time demand of the applications and the ability to match this demand with the underlying supply of compute, storage and network resources. It requires an autonomic platform that makes placement, sizing, and provisioning decisions in real time, all the time.

How Do We Achieve Automated Decisions?

Turbonomic is an autonomic platform that solves the challenge of assuring application performance through control of the infrastructure resources while running the infrastructure as efficiently as possible.


The platform abstracts the managed environment as a market of buyers and sellers trading commodities. Everything in the data center, e.g., hosts, data stores, VMs, applications, containers, zones, etc., is a buyer and a seller. The commodities traded are compute resources, such as Memory, CPU, IO, Ready Queues, IOPS, Latency, Transactions Per Second, etc. For example, a host sells Memory, CPU, IO, Network, CPU Ready Queues, Ballooning, Swapping, etc. A Data Store sells IOPS, Latency, Storage Amount. A VM buys these resources and sells vMem, vCPU, vStorage, etc. An Application buys these resources and sells Transactions Per Second. This approach allows all workloads—at every layer of the stack—to self-manage, ensuring they get the resources they need to perform.
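
To make the abstraction concrete, the following is a minimal, purely illustrative sketch in Java—not Turbonomic's implementation; the class names and pricing formula are invented here—in which sellers price each commodity by utilization and a buyer shops for the least-constrained supplier.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative market sketch: sellers (e.g. hosts) price commodities by utilization,
    // and a buyer (e.g. a VM) is placed with the cheapest seller. Invented for this example.
    public class MarketSketch {
        static class Seller {
            final String name;
            final Map<String, Double> capacity = new HashMap<>();
            final Map<String, Double> used = new HashMap<>();

            Seller(String name, double cpu, double memGb) {
                this.name = name;
                capacity.put("CPU", cpu);   used.put("CPU", 0.0);
                capacity.put("MEM", memGb); used.put("MEM", 0.0);
            }

            // Price rises steeply as utilization approaches capacity.
            double price(String commodity, double demand) {
                double utilization = (used.get(commodity) + demand) / capacity.get(commodity);
                return utilization >= 1.0 ? Double.POSITIVE_INFINITY : 1.0 / (1.0 - utilization);
            }

            void sell(String commodity, double demand) {
                used.merge(commodity, demand, Double::sum);
            }
        }

        // The buyer shops for the seller with the lowest combined price for its demand.
        static Seller cheapest(List<Seller> sellers, double cpuDemand, double memDemand) {
            Seller best = null;
            double bestPrice = Double.POSITIVE_INFINITY;
            for (Seller s : sellers) {
                double p = s.price("CPU", cpuDemand) + s.price("MEM", memDemand);
                if (p < bestPrice) { bestPrice = p; best = s; }
            }
            return best;
        }

        public static void main(String[] args) {
            List<Seller> hosts = List.of(new Seller("host-a", 32, 128), new Seller("host-b", 16, 64));
            Seller placement = cheapest(hosts, 4, 16);   // a VM demanding 4 vCPU and 16 GB memory
            placement.sell("CPU", 4);
            placement.sell("MEM", 16);
            System.out.println("Workload placed on " + placement.name);
        }
    }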

Figure 6: Turbonomic abstracts the IT as a supply-chain of buyers (demand) and sellers (supply) that shop for the best placement and configuration to assure application performance

The platform also allows you to define performance or Quality of Service targets (transactions, response time). It understands demand at both the application (CPU, memory, IO) and end user level (transactions, response time), and controls applications to provision/decommission instances and resize resources.

Turbonomic enables you to:

  • Assure application Quality of Service levels by ensuring the right resources are available in the right place at the right time based on real-time demand, taking into account all of the dependencies in the virtualized/cloud environment rather than a single CPU threshold or a temporary increase in transaction volume
  • Reliably scale applications up or out by sizing applications appropriately based on current and future demand
  • Ensure that your infrastructure resources are used most efficiently by understanding application demand

Application performance is only as good as its weakest link. An autonomic platform that allows workloads to self-manage is the key to assuring performance at every layer of the stack and scaling virtualization and cloud management with today's distributed applications.