Visit our Google Group Way 2 SAP BASIS

Performance Doc

Post by moiz7070

When trying to improve the performance of your R/3 system, the toughest job is deciding what to tune. In order to make this decision, everyone must first agree on the objectives, costs, and benefits of the tuning project. If you are asked to tune for performance reasons, you must first identify what is performing poorly, define what levels of performance are expected, and determine what it will cost to obtain the level of performance desired. Many performance problems will go away simply by following this process — some people who raised performance concerns will drop them when they realize the cost to fix them, and some will find that they can’t clearly define the problem at all.
How do you know if a problem is too expensive or time consuming? First, talk with managers, technical and business consultants, and the end users of the process you’re trying to improve. Once you have a general idea of the situation, review the system statistics and decide whether you need to tune the whole system, a particular program, or both. Doing so will give you an idea of the effort required to solve the problem.
But how do you determine whether the whole system or only a program needs to be tuned? In this article, I will briefly explain the process I use to minimize the “cost of discovery” (the time and money spent finding the cause of your problem) and discuss some tools for documenting and solving performance problems.
To find out what to tune, start by asking these questions:
• How many programs are slow? How often does the phone ring? Are the calls from several users with different transactions, several users with the same transaction, the same user with different transactions, or the same user with the same transaction?
• Which programs are slow? Are specific transactions (for example, VA01) or specific parts of a transaction (for example, VA01, but only when SAVE is pressed) slow? Does the problem occur in all programs that update or print, or only those that update or read particular tables? Does it occur in all custom-written programs, all interface programs or programs with RFCs, all batch (but not dialog), or all dialog (but not batch)?
• What are average response times? What response times do users measure by their wristwatches (not just “it takes forever”)? What are the response times according to the Computer Center Management System (CCMS)? Where is the response time being spent (wait, CPU, database request, load)? Are response times high for all dialogs? Do user-measured times agree with system-measured times? Which times are similar and which are different?
End users can often tell you whether problems are systemwide or not from their own experience. Trust their experience, ask a few of the questions above, and verify your theories with the following tools. This should be enough to corner most any performance problem. Let’s start with problems that seem to affect everyone.
When you suspect system-level problems, begin your analysis with the system’s basic layout and work your way from there. Unfortunately, you can’t always follow a specific path to the problem. No matter which path you take, you will want to cover a certain number of basic items before rendering an opinion on any tuning problem (which is why the introductory tuning class for R/3 — BC315 — lasts three days, and why an experienced analyst will spend a minimum of two to three days analyzing your system).
I will list the basics I always cover and let you develop a path that works best for you. Rule number one is, don’t attempt to tune production systems based on statistics from development systems or from any time periods when data conversion, transport, or other one-time only processes were executing.
Here are some base-level tuning tips grouped by the five most common tuning transactions. These base items must be in order before you can successfully and seriously tune a particular program or process. Once I’ve covered the basics for system-level tuning, I’ll explain how to tune individual programs.
1. Base configuration: servers, work processes, and load balancing. The Process Overview (SM50, SM51, and SM66) is often the best way to gain a high-level view of your R/3 system. With SM50 you can watch programs execute in real time and see cumulative processing times for work processes (see Figure 1).
Time statistics on work process activity will provide information on the types of jobs that are being processed and on the servers handling the heaviest loads, allowing you to quickly identify servers with too few or too many processes. For example, if your system has a total of 25 update processes and five show no activity after the system has been active for a complete normal business cycle, you can save a little memory by eliminating the unused work processes. Or, if the other work processes appear overloaded, you can simply convert the unused ones to something more useful.
When you see several servers with uneven distribution, review your logon load balancing schemes. Although I don’t believe in silver bullets, logon load balancing is the closest I’ve seen to one — and, for some strange reason, it’s often overlooked.
If you see work processes in PRIV mode, you need to analyze your memory management strategies and server profile parameters. Private mode is usually an indicator of serious or complex problems that should be entrusted to the SAP EarlyWatch team. This team is part of the TCC (Technical Core Competence) organization and specializes in performance and tuning. (For many customers, this service is free, so contact them at 1-800-677-7271.) Although eliminating private mode may appear simple, verifying the problem and ensuring the correct solution is rarely easy.
If the R/3 work dispatcher determines your program is too large for conventional memory allocation or requires static addressing, the work process will be placed in private mode. Additionally, programs with complex requirements or improper coding techniques can confuse the R/3 memory managers and invoke the private mode for relatively small tasks. Private mode can also result from systemic overloading of extended memory (that is, too many requests for similar allocations) and a host of other conditions.
Solutions range from running jobs in off-peak hours or simply allocating more extended memory, to complete analysis and reconfiguration of profile parameters. With separate profiles for each server and several hundred parameters to consider, this task can be quite cumbersome.
Work processes that stop or hang up will display a reason code and probably semaphores or other valuable system codes. For semaphore analysis, the SAP Online Support Services (OSS) Note 33873 provides a listing of codes and definitions. Semaphores are generally wait or conflict codes issued by the operating system. An occasional semaphore is quite normal, but you should investigate any codes that occur repeatedly or coincide with reported problems.
The action code shows the current mode for the work process, such as waiting or debug. You can get more detail by double clicking on the work process. Current statistics from the database and memory management interfaces will be displayed, along with R/3 buffer statistics for any active process. Keep in mind that you are interrupting many internal processes to generate these statistics; if you overuse these transactions, you can decrease performance significantly.
It’s easy to misinterpret actions and error codes. Don’t jump to conclusions about what might be happening until you analyze the entire system and related information. This is especially true in the area of memory management: Make sure you don’t confuse terminology or spend time searching for clues in the wrong area (for example, looking for operating system memory problems when R/3 buffers are to blame).
A colleague of mine likens performance analysis to peeling an onion. Although it’s true both can make you cry, his point is that you must approach performance analysis one layer at a time.
2. Hardware and operating system. Don’t forget that R/3 is just another application to the OS and the hardware. Although we hope nothing other than R/3 is running on the operating system, in reality processes are often contending for OS-level memory. These will have a dramatic effect on the R/3 System because SAP memory management is completely dependent on the resources allocated by the base operating system.
So tuning is even more complicated: You have to decide when to analyze the operating system and when to concentrate on R/3. And that’s not even accounting for hardware! I usually assume that vendors have set up the hardware and operating system properly. Of course, you should always make a pass through CCMS (transactions OS06, DB03, and so forth) to make sure, but I rarely approach a system expecting problems at that level. Unless you know of specific OSS Notes that apply to your environment, let R/3 tell you when to suspect the hardware and operating system.
These rough guidelines, and statistics from OS06/ST06, indicate reasons to suspect the hardware and operating system:
• CPU idle time for a single server that consistently falls below 30 percent.
• Page-out activity (“pages out” in ST06) that exceeds 10,000 pages per hour over a 24-hour period.
• Load average over the last 15 minutes that exceeds 3.00.
• Low values for “physical memory available.” (These values vary greatly by the operating system, R/3 release, and the server loads, but you should expect at least 1GB of memory to be available for R/3.)
• Extremely high values for “physical memory free.” (Such values generally indicate that you have not allocated available memory via profile parameters. Extremely low values indicate you have allocated all available memory, which is usually acceptable as long as the operating system has room to recover from errors and extend its swap space, if needed. You must also leave room for non-SAP processes such as OS-level monitors or backup processes.)
• Consistently low values for free swap space. (Swap space is also dependent on the operating system and the R/3 activities being executed. Development systems typically need three to four times the SAP configured memory for OS swap space. Production systems with stable environments and only occasional upgrades or transports often perform well with only 1.5 times the memory for OS swap space. Check with your hardware vendor’s competency center for details.)
Occasional spikes for all these items are normal. Always interview the customer to determine the types of processing being performed for the evaluated time periods. This will often explain your spikes or unusual values.
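The swap-space rules of thumb above reduce to simple arithmetic. The sketch below is an illustrative helper, not an SAP tool; the multipliers are the guideline values quoted above, and your hardware vendor’s competency center has the final word:

```python
def recommended_swap_gb(sap_configured_memory_gb, system_type):
    """Rule-of-thumb OS swap sizing from the guidelines above.

    Development systems: three to four times the SAP-configured memory
    (the conservative 4x is used here); stable production systems: 1.5x.
    """
    factor = 4.0 if system_type == "development" else 1.5
    return sap_configured_memory_gb * factor

print(recommended_swap_gb(8, "development"))  # 32.0
print(recommended_swap_gb(8, "production"))   # 12.0
```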
3. SAP R/3 WorkLoad Analysis. The WorkLoad Analysis statistics found via transaction ST03 are the most popular indicators of R/3 performance. WorkLoad Analysis will show average response times for the entire environment or for individual servers. It also provides detailed statistics for specific transactions and programs within a given time period. Transaction RZ03 will generate some of these same statistics, as well as buffer usage, in real time.
Remember that transaction ST03 provides average response times for specific time periods and can be skewed by long-running programs or inefficient programs that are executed repeatedly. For shorter time periods or tracking of individual jobs, you may need to review the detailed statistics and calculate mean or median times, rather than average times, to provide a more accurate view of system performance.
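The skew described above is easy to see with a toy set of dialog response times. One long-running step drags the average far above what typical users actually experience, while the median stays close to it (the numbers are invented for illustration):

```python
from statistics import mean, median

# Nine typical dialog steps plus one long-running step, in milliseconds.
response_times = [450, 500, 520, 480, 510, 470, 490, 530, 460, 60000]

avg = mean(response_times)    # 6441.0 -- skewed by the one long step
mid = median(response_times)  # 495.0  -- closer to the typical experience
print(f"average: {avg:.0f} ms, median: {mid:.0f} ms")
```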
Response times are highly individualized based on the jobs run and the technical environment. Certain values should tell you to investigate in detail when unusual business requirements or special processing can’t account for the “unusual” performance statistics. Consider system-level tuning or more detailed investigation when the ST03 conditions shown in Table 1 exist for dialog work processes. Remember that query programs for large tables may cause excessive roll ins or roll outs, especially when run as a dialog process.
Consider system-level tuning or more detailed investigation when the ST03 conditions in Table 2 exist for update work processes.
For background response times, you must determine acceptable performance statistics based on your particular jobs and technical environment. When you compare background to dialog response times, you’ll generally find that:
• Most background jobs are reported as a single dialog step and will have very high average response times.
• Query or report programs should generally show a lower percentage of CPU time and a higher percentage of database request time.
• Programs that perform complex calculations or routine scheduling (for example, Material Requirements Planning) should have a higher percentage of CPU time and a lower percentage of database request time.
• Programs with UPDATE LOCAL will have higher database request times.
• Custom programs that are not stored in the buffer and programs that generate code (for example, Batch Data Conversion programs [BDCs]) will have higher load times.
4. SAP R/3 buffer statistics. ST02 is used to tune system memory and buffers. Tuning memory and buffers is very complex, so don’t attempt to do it without extensive analysis and expertise. Additionally, memory must always be tuned as a unit, because changes to one parameter will always affect other areas. Several hundred profile parameters affect system performance, and the majority of these either directly affect memory or rely on proper memory management.
If you don’t have regularly scheduled EarlyWatch sessions, you should call the SAP TCC America group for tuning when your initial analysis shows the following conditions. (And remember, development systems typically need larger buffers and considerably more entries than stable, productive systems. Tuning a development system is much more complex and difficult than tuning most productive systems.)
• Hitratio: The hitratio is an indication of the buffers’ efficiency. When an SAP user or system function module needs data or additional objects, the system will generally look in the R/3 buffers first. When the system is initially started, the buffers are empty; as objects are requested, they are stored in the appropriate buffer. The buffers therefore start with a hitratio of zero and should improve over time toward the maximum possible 100 percent. A system must be active for a considerable length of time before the hitratios are stable enough for analysis. Avoid analyzing buffers during unusual processing periods or when significant system development is occurring. For a stable system, investigate buffers that consistently show hitratios below 95 percent or whose hitratios appear to be declining.
• Nametab, CUA, screen, and calendar: These buffers should generally see little growth in a production environment and can therefore be configured with less freespace. Most of the other buffers will need 20 percent freespace or more.
• Directory entries: All buffers need free directory entries. Memory space requirements for storing directory entries are minimal, so don’t hesitate to increase the number of directory entries for any buffer.
• Object swaps: A large number of object swaps is generally an indication of poor tuning; however, some swaps are unavoidable. Object swap totals are cumulative from system startup; therefore, you should expect to see these numbers grow over time. BDCs, the variant product configurator, and other functions that must generate code in real time may increase swaps in the program buffer. Swaps that occur for short periods of time (for example, during month-end close) should generally not pose problems and may not benefit from constant tuning efforts.
• Roll and paging area memory: Analyze roll and paging area memory in detail when the percentage of current use consistently exceeds 80 percent or when a large portion of processing occurs on disk rather than in real memory. Analyze extended memory in detail when current use consistently exceeds 75 percent.
• Heap memory: Batch programs can use large amounts of heap memory and, therefore, cause OS-level swap space requirements to increase. When you see large amounts of heap memory being used, make sure you have enough OS swap space to keep from freezing your operating system. Investigate non-background processes that use large amounts of heap memory.
• Call statistics: You can use call statistics to analyze table buffers and how effectively ABAP programs use them. Investigate low hitratios and high numbers of fails. Unusually high numbers for SELECT are often caused by programs performing large numbers of table scans, improper use or unavailability of appropriate indexes, or improper search techniques.
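The hitratio described above is simply hits as a percentage of all buffer requests. A minimal sketch of the calculation and the 95-percent rule of thumb (illustrative only; ST02 computes this for you):

```python
def hit_ratio(hits, misses):
    """Buffer hit ratio in percent; a freshly started, empty buffer is 0."""
    requests = hits + misses
    return 0.0 if requests == 0 else 100.0 * hits / requests

# A stable, long-running buffer should sit at 95 percent or better.
assert hit_ratio(0, 0) == 0.0           # just after system startup
assert hit_ratio(9_800, 200) == 98.0    # healthy
assert hit_ratio(9_000, 1_000) < 95.0   # investigate this buffer
```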
5. SAP R/3 table statistics. Use transaction ST10 to review table call statistics. ST10 will show the number of table changes, direct reads, and sequential reads and the number of calls and rows affected. These statistics can be invaluable when you’re considering buffering a table, adding an alternate index, or stripping data and tablespaces.
Be sure to note any unusual processes that may have occurred during the time being reviewed. Processes that may invalidate your analysis include upgrades, transports, table and tablespace reorganization or maintenance, data conversions, one-time-only processes, and interfaces from other systems.
If the original problem reported was a single program or group of related processes, and the base-level tuning is done, you’re now ready to analyze specific functions. Or maybe you finished tuning the system and still have some areas that aren’t performing well enough. What tools are available to analyze specific functions and programs? And what path do we follow to find and fix these programs?
For a specific program the process seems simple: run the transaction, pull down the System -> Status menu, and start surfing the ABAP Workbench. Of course, editing the code is going to reveal a list of “Includes” and function module calls longer than President Clinton’s list of court documents.
The following tools and techniques can be used to analyze performance problems with a little more exacting science:
• ABAP Runtime Analysis (SE30)
• ABAP Program Extended Syntax Check (SLIN)
• Process Overview (SM50)
• Performance WorkLoad Analysis (ST02, ST03)
• Performance WorkLoad Statistics Records (STAT)
• Miscellaneous database statistics (ST04, DB01, DB02)
• Trace Requests (ST05)
• ABAP Program Debugger (SE38).
These tools only supplement solid logic and the verification of procedures via walkthroughs. There is no better tool for developers or maintenance staff than appropriate walkthroughs. You don’t need full-blown meetings that demand five or six participants, scribes, moderators, and so forth unless you’re embarking on a large development effort involving several programs and modules. For a couple of lines of code, buy your colleague a cup of coffee and sit down for a quick review. The remainder of your walkthroughs will fall somewhere in between these two extremes. However, you should perform some sort of review for all changes to your system.
And remember, the tools don’t tell you what to do — they merely show you what’s happening. For example, the Runtime Analysis includes ABAP command comparisons to let you see how expensive particular coding techniques are and how alternative code might perform. The suggestions are based on a series of laboratory benchmarks and are often quick-and-easy ways to improve performance significantly.
The ABAP Program Extended Syntax Check (SLIN) is often overlooked, but it can:
• Locate unused data fields and tables
• Find dead code in programs
• Check authorizations
• Quickly check interfaces and the compatibility of calls to forms, modules, and external programs
• Locate type conversions
• Give details of errors or warnings.
The Process Overview, transaction SM50, as presented earlier, is often the best way to observe a program during execution. By watching the function modules being processed in real time, you can see which areas are the most time consuming and decide which types of tracing or debugging are most appropriate.
Figure 2 shows Performance WorkLoad Analysis statistics from transaction ST02. You can use ST02 and ST03 to generate a nearly endless variety of statistics to tune individual programs or the entire system. ABAPs with serious problems (or ones that are run often) will affect memory and database statistics. For example, high counts for Fails may indicate programming with incorrect or nonexistent indexes or searches for records that do not exist.
A high number of selects with few inserts, updates, and deletes is typical of programs that only read or display data. These statistics may help you locate programs that inadvertently perform full scans on files or look up large groups of data when only a single record is needed.
High average times associated with all I/O activity may be due to database or disk-related problems but are most often the result of bad logic or poor programming techniques. High times for only one category (for example, deletes averaging 200 milliseconds) could be due to poor indexing, inefficient table/tablespace layouts, or contention for hardware resources, but most often result from bad code.
You can use hitratios to evaluate the use of indexes and programming techniques for data retrieval and updates, especially during periods of inactivity when you can obtain a good before-and-after reading.
Performance WorkLoad Statistics Records will provide detail in a variety of search formats (by time, user, program, transaction, and so forth). The detail is similar to, and supports, the information shown in Figure 3, a sample of transaction ST03. You will want to use both to obtain specifics about the process you are analyzing. If you are still pondering general versus specific problems, compare a common transaction, such as Main Menu, with a function-specific transaction, such as VL04. High processing times for common transactions such as Main Menu suggest that tuning is needed at the system level. Low processing times for most common transactions, with only a few expensive modules, suggest tuning should focus on the specific modules with high times. These statistics will allow you to find patterns within groups of slow, or fast, programs or modules that may provide clues on where to look next. Some key questions you can research include:
• Do they share the same tables?
• Do they all have data on a particular physical or logical disk or tablespace?
• Do they perform or include common tasks, such as RFC, WorkFlow/ALE, and User-Exits?
• Do they have Business Warehouse or Information System (LIS/EIS) links?
• Are they all update or non-update programs?
• Are they just batch or dialog programs?
• Did they execute on a particular server (logical or physical), work process, network, and so forth?
• Are they from particular time periods or operation modes?
• Do they slow when repeatedly using the same data?
• Are they slow only the first time they execute or access certain data or modules?
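Working through the questions above amounts to tabulating which attributes the slow steps share. A sketch of that tabulation (the field names and values are hypothetical, not an SAP record format):

```python
from collections import Counter

# Hypothetical attribute extracts for three slow dialog steps.
slow_steps = [
    {"table": "VBAK", "server": "app1", "task": "UPD"},
    {"table": "VBAP", "server": "app1", "task": "DIA"},
    {"table": "VBAK", "server": "app1", "task": "UPD"},
]

# For each attribute, find the value shared by the most slow steps;
# values common to all of them are the first patterns to investigate.
for attr in ("table", "server", "task"):
    value, count = Counter(step[attr] for step in slow_steps).most_common(1)[0]
    print(f"{attr}: {value!r} appears in {count} of {len(slow_steps)} slow steps")
```

Here every slow step ran on the same server, which would point the investigation at that server before any individual program.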
Poor response times often can be traced to specific tasks, whose detail also can be shown via this transaction. In the example in Figure 3, the high wait time was ultimately attributed to the update task.
For custom code or programs with user exits, use the dialog step counts to determine roughly how modifications affect standard SAP code. Comparing statistics from code before and after modifications have been made will tell you a lot about how efficiently the code was written. Compare the execution times to the business models and make sure the amount of time spent in each area is what you expected.
For a more detailed breakdown of processing times for individual programs, see Top Time, By Task, and other displays within ST03. ST02 and ST03 are very complex transactions with a multitude of performance statistics. Don’t forget to look under the Goto menu for additional options. For most transactions, a simple double-click on the item in question will produce more than enough detail.
Database statistics can come from a variety of sources. Transaction DB01 can reveal exclusive lockwait situations, which can be used for both system-level and application tuning. (It can often explain why programs appear to be doing nothing.)
Transaction DB02 shows table and index statistics. You can check for missing indexes, files with too little or too much space, extent problems, and a host of related items. These numbers will come in handy when you begin stripping data and tablespaces.
Database transaction ST04 will provide high-level performance statistics. With ST04, you can make sure the database interface isn’t having problems servicing your program’s request for data.
For individual program tuning, try SQL, enqueue, and RFC tracing via transaction ST05 – Trace Requests. The traces will show step-by-step details of each and every function within your program. SQL trace will show database-level activity, as well as ABAP. For example, within the SQL trace you can use the Explain function to show the data that was loaded into record fields and the exact index path chosen by the database optimizer. The SQL trace will reveal full scans of tables, inappropriate index choice, any failure to load record keys, repetitive calls to the same data, and similar conditions that make tuning an application much easier.
The SQL trace is one of the easiest ways to determine when an alternate index or table buffering may be beneficial. You’ll also need to view SE11 (ABAP Workbench: Dictionary) to determine the technical setup and whether buffering or indexing is possible for the table in question. And, don’t forget to check DB02 to ensure your indexes are properly located in the data dictionary, your table has been allocated the proper amount of space, and so on.
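Repetitive calls to the same data, one of the conditions the SQL trace reveals, show up as duplicate statements in the trace list. A sketch of spotting them (the trace entries are hypothetical and simplified, not the ST05 output format):

```python
from collections import Counter

# Hypothetical SQL statements as they might appear in an ST05 trace list.
trace = [
    "SELECT * FROM VBAK WHERE VBELN = '0000001'",
    "SELECT * FROM VBAK WHERE VBELN = '0000001'",
    "SELECT * FROM VBAP WHERE VBELN = '0000001'",
    "SELECT * FROM VBAK WHERE VBELN = '0000001'",
]

# Statements issued more than once are candidates for buffering the
# result in the program instead of re-reading the database each time.
repeats = {stmt: n for stmt, n in Counter(trace).items() if n > 1}
for stmt, n in repeats.items():
    print(f"{n}x  {stmt}")
```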
Transaction ST10 (Table Call Statistics) shows table statistics for a variety of chosen time periods. Statistics for tables can show the total number of database accesses, the number of calls issued, and the number of rows affected. Additionally, you can see how many direct reads, sequential reads, and changes were performed. This information can be invaluable for both system- and program-level tuning, especially when considering the creation of an additional index.
One final tool available is the ABAP Program Debugger. This tool is best suited for program development and maintenance, but may be useful for tracing some difficult performance problems. The debugger allows you to watch particular sections of code or data fields during program execution. You can also generate lists of modules, calls, and other key commands to summarize program execution. You also have the ability to change data fields and processing routines in a real-time mode.
My tuning strategy follows advice my doctor gave me years ago: Study the books to make sure you know what a horse and a zebra look like. But when you’re in a field and hear hoofbeats, don’t look up and expect to see a zebra.
When tuning an R/3 system, many problems will present the same initial characteristics and symptoms. For some strange reason, everyone thinks their system is unique and those hoofbeats just have to be zebras. However, the percentages say it’s a horse (that is, a common problem). So follow a path that will lead you to the most common problems first and prevent you from getting sidetracked by thinking that your situation must be unique.
In my more than 20 years in this field, I’ve seen very few “silver bullets” in the world of performance and tuning. The reason many systems run poorly, and why it is increasingly difficult to find people with the expertise to tune them properly, is that tuning takes a lot of time, effort, and background knowledge. Poor performance is usually the result of several small errors, parameter or code changes, and misunderstandings that have gone unnoticed. Rarely will you find a single, risk-free parameter that you merely flip like a switch to solve your problems. So use all available resources, and ensure your solution is beneficial to the entire system rather than only the particular process you are reviewing.
Remember, you can make any process run faster, but the costs to do so may be prohibitive. Make sure you have a general idea of the problem, goals, possible solutions, and costs before chasing “better performance.”
Shaik Moiz Ahmed
Terrenos Software Technologies Pvt.Ltd
Posts: 14
Location: Hyderabad
