Gauging cloud app performance with Yahoo Cloud Serving Benchmark

Obtaining solid metrics on application performance is difficult, but a Yahoo framework can help enterprises compare how databases perform within the same cloud or across clouds.

Comparing cloud provider costs is easy; comparing application performance, however, is not. It is particularly challenging to compare performance among different types of databases running within the same cloud or across different clouds. The Yahoo Cloud Serving Benchmark (YCSB) is a framework designed to help you understand how well different cloud databases and data stores perform under realistic loads. The source code for the framework can be downloaded from GitHub.

The Yahoo Cloud Serving Benchmark tool includes two essential components -- a workload generator and a set of core workloads for the generator to execute -- plus interfaces for a number of NoSQL databases, including Cassandra, DynamoDB, HBase, MongoDB, Redis and Oracle NoSQL Database. It also includes a JDBC interface for relational databases.
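Each database interface plugs into the workload generator through a small set of CRUD-style operations. The sketch below is a simplified, hypothetical rendering of that contract -- the real framework defines an abstract class whose package name, method signatures and return types vary by YCSB version -- and is meant only to show the shape of the interface each binding implements.

```java
import java.util.Map;
import java.util.Set;
import java.util.Vector;

// Simplified, hypothetical sketch of the contract a YCSB database binding
// fulfills. The real framework uses an abstract DB class with similar
// operations; names and return types differ across versions.
public interface CloudDataStore {
    // Read a single record, optionally restricted to a subset of fields.
    int read(String table, String key, Set<String> fields, Map<String, String> result);

    // Scan a range of records starting at startKey.
    int scan(String table, String startKey, int recordCount,
             Set<String> fields, Vector<Map<String, String>> result);

    // Update, insert and delete a single record.
    int update(String table, String key, Map<String, String> values);
    int insert(String table, String key, Map<String, String> values);
    int delete(String table, String key);
}
```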

Database benchmarks such as YCSB are particularly useful when faced with application architecture decisions. An existing database application that no longer meets performance requirements, for example, may force you to consider scaling up the hardware or changing the underlying database. Scaling up is an appropriate option if additional hardware provides a nearly linear performance improvement (e.g., doubling the number of servers roughly doubles throughput). Scaling doesn't always deliver linear improvement, however, because bottlenecks within an application can prevent it from taking full advantage of the additional resources.

If you change your database, you will have many options -- especially if you switch from a relational to a NoSQL database. Although not obvious at first, some problems lend themselves to a specific type of NoSQL database. Social network analysis, for example, fits well with graph databases. Key-value databases, such as Cassandra and Amazon DynamoDB, may be a better option than a document store, such as MongoDB, if you do not need to support complex queries. Benchmarking can provide data to help you decide which database will best meet your needs.

Yahoo Cloud Serving Benchmark workload management

The first step in running YCSB is to determine the type of database you want to test and the workload to run on it. Once you have created your database, you must create a schema associated with your target workload. The implementation details of the schema will vary by database type: a table would be created in MySQL, for example, while a keyspace and column family would be created in Cassandra.
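As a concrete illustration, the sketch below creates the kind of table YCSB's JDBC binding expects in MySQL, assuming the default ten-field layout (a key column plus FIELD0 through FIELD9). The connection URL, credentials and field count are assumptions to adjust for your environment; for Cassandra, the equivalent step would be creating a keyspace and column family instead.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: create a YCSB-style table in MySQL via JDBC.
// The URL, credentials and ten-field layout are illustrative assumptions;
// match them to your own database and workload settings (fieldcount).
public class CreateYcsbSchema {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/ycsb";  // hypothetical database
        try (Connection conn = DriverManager.getConnection(url, "ycsb", "secret");
             Statement stmt = conn.createStatement()) {
            StringBuilder ddl = new StringBuilder(
                "CREATE TABLE IF NOT EXISTS usertable (YCSB_KEY VARCHAR(255) PRIMARY KEY");
            for (int i = 0; i < 10; i++) {
                ddl.append(", FIELD").append(i).append(" TEXT");
            }
            ddl.append(")");
            stmt.executeUpdate(ddl.toString());
        }
    }
}
```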

Benchmarking is a valuable tool for comparing database performance, but the workloads must be comparable to your actual production loads, and the results should be weighed along with other design considerations when selecting a database. You can run the core workloads provided with YCSB or create your own. The core workloads are a set of six workloads with varying read/write characteristics; some are update-heavy, while others test read performance. If you create your own workload, you must write a Java class that extends existing YCSB classes to generate data and perform read/write operations.
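A custom workload generally follows the pattern sketched below: a class that extends the framework's Workload base class, with one method driving the load phase and another driving the transaction phase. Package names, method signatures and return types differ across YCSB versions (older releases return integer status codes, newer ones a Status object), and the key generator here is a deliberately simplistic stand-in for YCSB's configurable request distributions, so treat this as a pattern rather than drop-in code.

```java
import java.util.HashMap;

import com.yahoo.ycsb.ByteIterator;
import com.yahoo.ycsb.DB;
import com.yahoo.ycsb.StringByteIterator;
import com.yahoo.ycsb.Workload;

// Pattern sketch of a custom workload; adjust imports and return-code checks
// to the YCSB version you build against.
public class OrderLookupWorkload extends Workload {

    // Load phase: insert one synthetic record per call.
    @Override
    public boolean doInsert(DB db, Object threadState) {
        HashMap<String, ByteIterator> values = new HashMap<String, ByteIterator>();
        values.put("field0", new StringByteIterator("synthetic-order-data"));
        return db.insert("usertable", nextKey(), values) == 0;  // 0 == success in older releases
    }

    // Transaction phase: issue a read that mimics a production lookup.
    @Override
    public boolean doTransaction(DB db, Object threadState) {
        HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();
        return db.read("usertable", nextKey(), null, result) == 0;
    }

    // Simplistic key generator; YCSB's core workload uses configurable
    // distributions (uniform, zipfian, latest) instead.
    private String nextKey() {
        return "user" + (long) (Math.random() * 1000000);
    }
}
```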

Workloads are parameterized, so you can test a workload under varying numbers of threads and operations per second. A benchmark run consists of two phases: the load phase, which creates the data, and the transaction phase, which executes the operations specified in the workload and then outputs a set of performance statistics. In addition to runtime and operations per second, the statistics include a number of latency measures: average, minimum, maximum, 95th percentile and 99th percentile latency.
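YCSB normally reads those parameters from a plain properties file passed on the command line; the sketch below simply writes such a file from Java so the common core-workload properties can be shown in one place. The property names follow the core workload conventions, the counts and proportions are arbitrary illustration values, and thread count and target throughput are typically supplied at run time rather than in the file.

```java
import java.io.FileWriter;
import java.util.Properties;

// Sketch: write a parameterized YCSB workload definition to a properties file.
// Values are illustrative; tune them to approximate your production load.
public class WriteWorkloadFile {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.setProperty("workload", "com.yahoo.ycsb.workloads.CoreWorkload");
        p.setProperty("recordcount", "1000000");     // records created during the load phase
        p.setProperty("operationcount", "500000");   // operations issued during the transaction phase
        p.setProperty("readproportion", "0.95");     // read-heavy mix
        p.setProperty("updateproportion", "0.05");
        p.setProperty("requestdistribution", "zipfian");  // skewed access pattern

        try (FileWriter out = new FileWriter("myworkload.properties")) {
            p.store(out, "Illustrative read-heavy YCSB workload");
        }
    }
}
```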

If you have an existing application, you may be able to collect data from performance monitoring tools to determine the mix of read, write and update operations in your application. If you use a relational database, pay particular attention to the most frequently run and longest-running queries. These are good candidates for use in a custom workload.
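If the existing application runs on MySQL 5.6 or later, for example, one way to find those candidate queries is the statement digest summary in performance_schema. The sketch below assumes that instrumentation is enabled and that the usual column names apply, so verify both against your own server; connection details are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: list the most time-consuming statement digests from MySQL's
// performance_schema as candidates for a custom YCSB workload.
// Connection details are placeholders; timer values are in picoseconds.
public class TopQueries {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/performance_schema";  // hypothetical URL
        String sql = "SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT "
                   + "FROM events_statements_summary_by_digest "
                   + "ORDER BY SUM_TIMER_WAIT DESC LIMIT 10";
        try (Connection conn = DriverManager.getConnection(url, "monitor", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%d runs, %d ps total: %s%n",
                        rs.getLong("COUNT_STAR"),
                        rs.getLong("SUM_TIMER_WAIT"),
                        rs.getString("DIGEST_TEXT"));
            }
        }
    }
}
```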

If you develop a new application, then you should run a range of benchmarks to assess a number of different possible production loads before putting that app into production. This can provide valuable information about how different databases perform under different conditions. If you expect moderate loads most of the time with occasional spikes in demand, then test for both. Ideally, the database you choose would perform well in all expected conditions.

You should also weigh performance against frequency of use. If one database performs well under the loads you expect most of the time but poorly under a rare or unexpected load, it may still be the right choice.

About the author:
Dan Sullivan holds a Master of Science degree and is an author, systems architect and consultant with more than 20 years of IT experience. He has had engagements in advanced analytics, systems architecture, database design, enterprise security and business intelligence, and worked in a broad range of industries, including financial services, manufacturing, pharmaceuticals, software development, government, retail and education. Dan has written extensively about topics that range from data warehousing, cloud computing and advanced analytics to security management, collaboration and text mining.

This was first published in January 2014
