Effective regression testing, and comparative application testing in general, depends largely on your ability to reproduce in a test environment a workload that closely mimics the production application workload of concern. Historically, the difficulty of reproducing a production workload in test has depended on the nature of that workload. Back when I started in IT (Ronald Reagan was in his first term as President of the USA), the focus of mainframe application performance testing efforts was often on batch workloads, and reproducing that type of production workload in test was not so tough: you had your input files, you had your batch jobs that were submitted in a certain sequence and with a certain timing through a job scheduling tool, you ran the jobs and measured the results against baseline data, and that was that.
Batch is still an important part of mainframe computing, but over time the emphasis at many DB2 for z/OS sites has shifted to transactional workloads; moreover, the nature of transactional workloads -- especially during the past 10 years or so -- has changed, with multi-tiered, network-attached, DRDA-using, client-server applications coming to the fore. On top of that, these modern transactional applications are much more dynamic than their forerunners. They are not driven by in-house staff performing clerical and data-entry functions via a "green screen" interface; instead, in many cases the end-users are external to the organization -- maybe consumers browsing online product catalogs and making purchase decisions, or perhaps employees of client companies in a business-to-business context, checking on orders or reviewing fulfillment history. If the end-users are internal to the organization, increasingly they are not clerical workers; rather, they are senior professionals, managers, and executives using analytics-oriented applications to improve speed and quality-of-outcome with respect to decision-making. The actions of these individuals, and the frequency and timing of their interactions with your DB2 for z/OS subsystem, are often hard to predict. For testing purposes, how do you get your arms around that?
And getting your arms around that kind of application testing scenario is becoming more and more important. If an environmental change (e.g., a new system software release) or an application modification is going to negatively impact performance from the end-user perspective, you REALLY want to catch that before the change goes into production. Elongated response time for in-house clerical staff is one thing, but poor performance affecting an external-to-the-organization end user can lead to lost business and, perhaps, long-term loss of customers (as the now-familiar adage goes, your competition is often just a click away). If performance degrades for an internal-use decision support application, likely as not it won't be a DBA getting calls from irate users -- it'll be directors and VPs and maybe your CIO getting calls from their peers on the business side of the organization.
In short, the challenge is tougher than it's been before, and the stakes are higher than they've been before. Gulp.
Fortunately, a recently announced and available IBM tool addresses this need very nicely. It's called IBM InfoSphere Optim Query Capture and Replay for DB2 on z/OS, and it came out just a couple of months ago. The fundamentals of what Optim Query Capture and Replay can do are spelled out in the product's name: it can capture a DDF application workload executing in your production DB2 for z/OS environment and enable the playing back of that captured workload in a test environment: the same SQL statements, with the same values, executed with the same volume and timing through the same number of connections to the DB2 subsystem.
This capture and replay capability by itself would come in very handy for things like regression testing, but that's not where the story ends. Suppose you want to see what would happen to response times if transaction volume were to increase by some amount. No problem: not only can Optim Query Capture and Replay play back a captured workload -- it can play it back at a higher speed; so, instead of, say, the 100 transactions per second seen in production for a client-server application workload, you could see how response times hold up at 150 transactions per second.
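To make the speed-up idea concrete, here is a minimal Python sketch of the underlying concept: replaying captured statements with their original inter-arrival times compressed by a speed-up factor. This is purely illustrative -- the CapturedStatement structure, the replay function, and the sample statements are assumptions made for the example, not Optim Query Capture and Replay's actual interface.

```python
import time
from dataclasses import dataclass

@dataclass
class CapturedStatement:
    offset_secs: float  # seconds after capture start at which the statement originally ran
    sql: str
    params: tuple

def replay(workload, speedup=1.5, execute=print):
    """Replay captured statements, compressing inter-arrival times by 'speedup'.

    A speedup of 1.5 turns a workload captured at roughly 100 transactions per
    second into roughly 150 transactions per second on replay.
    """
    start = time.monotonic()
    for stmt in workload:
        # Wait until the (scaled) point in time at which this statement originally ran.
        target = stmt.offset_secs / speedup
        delay = target - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        execute(stmt.sql, stmt.params)  # stand-in for sending the statement to DB2

# Example: three statements captured over two seconds, replayed 1.5x faster.
workload = [
    CapturedStatement(0.0, "SELECT STATUS FROM ORDERS WHERE ORDER_ID = ?", (1001,)),
    CapturedStatement(1.0, "SELECT STATUS FROM ORDERS WHERE ORDER_ID = ?", (1002,)),
    CapturedStatement(2.0, "UPDATE ORDERS SET STATUS = ? WHERE ORDER_ID = ?", ("SHIPPED", 1001)),
]
replay(workload, speedup=1.5)
```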
Speaking of response time, Optim Query Capture and Replay provides built-in reporting capabilities that help you to easily zero in on changes between baseline and replay test results.
What's more, Optim Query Capture and Replay can be used to invoke the IBM DB2 Cloning Tool to make a copy of a DB2 subsystem for testing purposes.
Oh, and I would be remiss if I failed to tell you that Optim Query Capture and Replay is not just about comparative workload testing. It's also a great tool for helping you to better understand a DB2 client-server application workload. Often, when it comes to these very dynamic, shape-shifting transactional applications, people want to get a better look at the trees in the forest. What SQL statements are being executed? What predicate values are being supplied by users? What columns are being retrieved? We are familiar with the idea of taking a "snapshot" of a database, but taking a snapshot (more accurately, a time slice) of a DDF workload seemed implausible -- until now. And why stop with just a better understanding of a client-server application workload? How about tuning it? The SQL statements in a workload captured by Optim Query Capture and Replay can be exported for analysis and tuning -- something you might do with a tool such as IBM's InfoSphere Optim Query Workload Tuner.
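As a rough illustration of the kind of analysis that exported statements make possible, the Python sketch below groups statements by text and collects the predicate values supplied for each. The summarize function and the sample data are hypothetical; a real analysis would work against whatever export format the tooling actually produces.

```python
from collections import Counter, defaultdict

def summarize(captured_statements):
    """Group exported statements by text and collect the predicate values seen.

    'captured_statements' is assumed to be an iterable of (sql_text, params)
    pairs -- for example, rows read from an exported workload file.
    """
    counts = Counter()
    values_seen = defaultdict(set)
    for sql, params in captured_statements:
        counts[sql] += 1
        values_seen[sql].update(params)
    return counts, values_seen

# Hypothetical sample of exported statements.
statements = [
    ("SELECT CUST_NAME FROM CUSTOMER WHERE CUST_ID = ?", (42,)),
    ("SELECT CUST_NAME FROM CUSTOMER WHERE CUST_ID = ?", (77,)),
    ("SELECT STATUS FROM ORDERS WHERE ORDER_ID = ?", (1001,)),
]
counts, values_seen = summarize(statements)
for sql, n in counts.most_common():
    print(f"{n:>4}  {sql}   values seen: {sorted(values_seen[sql])}")
```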
Now, all this good stuff would be less appealing if it came with too great a cost in terms of system overhead, so it's nice to know that Optim Query Capture and Replay has a pretty small footprint. This is true largely because the tool employs a "catch and throw" mechanism (more like "copy" than "catch") to send statements associated with a workload being captured to an external appliance, from which the workload can be replayed; thus, the tool does not rely on relatively expensive performance trace classes to get the statement-level data it records.
There you have it: a way to efficiently capture what may have appeared to you as an elusive workload, and to effectively use that captured workload for regression testing, "what if?" testing, and application SQL analysis. Check out Optim Query Capture and Replay, and get ready to go from, "My gut tells me..." to, "Here are the numbers."
This blog, authored by Robert Catterall, an IBM Information Management software specialist, highlights IBM tools that can help DB2 for z/OS people optimize the management, performance, and availability of mainframe DB2 systems. The opinions expressed herein are the author's, and should not be construed as reflecting official positions of the IBM Corporation.
Monday, January 7, 2013
Get Hands-On with DB2 Automation Tool in Texas
First, my apologies for having let so much time go by since last posting to this blog. The fourth quarter of 2012 was a very busy time for me. The pace is a little less frenetic now, and I should be able to resume blogging here on a fairly regular basis (I hope to post an entry here within the next couple of days on a new and very cool tool that can be a big help in the area of DB2 for z/OS application testing).
Second, for those of you in the GREAT state of Texas (where I was born and raised), and particularly for people in the heart of The Lone Star State, I want to make you aware of an opportunity to get some hands-on time with the DB2 Automation Tool for z/OS (about which I blogged last year). IBM has partnered with the Heart of Texas DB2 Users Group (aka HOTDUG -- one of my favorite regional DB2 user group acronyms) to provide a free half-day of DB2 Automation Tool education and training. Show up, and you'll not only get the expected overview presentation -- you'll also get a demonstration on how to use the tool to set up profiles that can drive automated and intelligent DB2 for z/OS utility execution. AND you'll get to participate in a hands-on lab that will give you the opportunity to put the DB2 Automation Tool through its paces. AND you'll get breakfast and lunch. Sounds like a half-day well spent, to me.
You can RSVP for this event (appreciated, for food planning purposes, but not required for attendance) by sending an e-mail to my colleague Bill Houston (houstonb@us.ibm.com).
More information:
Date: Tuesday, January 22, 2013
Time: 9:00 AM to 12:00 PM
Location:
IBM Executive Briefing Center
Rio Grande Room (Building 904)
11501 Burnet Road
Austin, Texas 78758
Check it out. Get some food. Get some knowledge. Get your hands on a keyboard.