
Wednesday 29 January 2014

BW Query Performance

1. What kind of tools are available to monitor the overall Query Performance?

Answer:
1. BW Statistics
2. BW Workload Analysis in ST03N (Use Expert Mode!)
3. Content of table RSDDSTAT

2. Do I have to do something to enable such tools?

Answer:
Yes, you need to turn on the BW Statistics:
  RSA1, choose Tools -> BW statistics for InfoCubes
  (Choose OLAP and WHM for your relevant Cubes)
                                                                       
3. What kinds of tools are available to analyze a specific query in detail?

Answer:
1. Transaction RSRT
2. Transaction RSRTRACE

4. Do I have an overall query performance problem?

Answer:
Use ST03N -> BW System Load values to identify the problem. Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB or %Front-end shows a high value for all InfoCubes.
You need to run ST03N in expert mode to get these values.

5. What can I do if the database proportion is high for all queries?

Answer:
Check:
1. If the database statistics strategy is set up properly for your DB platform
  (above all for the BW-specific tables)
2. If the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
3. If buffers, I/O, CPU or memory on the database server are exhausted
4. If cube compression is used regularly
5. If database partitioning is used (not available on all DB platforms)

6. What can I do if the OLAP proportion is high for all queries?

Answer:
Check:
1. If the CPUs on the application server are exhausted
2. If the SAP R/3 memory setup is done properly (use TX ST02 to find bottlenecks)
3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)

7. What can I do if the client proportion is high for all queries?

Answer:
Check whether most of your clients are connected via a WAN connection and whether the amount of data transferred is rather high.

8. Where can I get specific run-time information for one query?

Answer:
1. Again you can use ST03N -> BW System Load.
2. Depending on the time frame you select, you get historical data or current data.
3. To get to a specific query you need to drill down using the InfoCube name.
4. Use the aggregation "Query" to get more runtime information about a single query. Use the tab "All data" to get to the details (DB, OLAP and front-end time, plus selected/transferred records, plus number of cells and formats).
                                                                     
9. What kind of query performance problems can I recognize using ST03N values for a specific query?

Answer:
(Use Details to get the runtime segments)
1. High Database Runtime
2. High OLAP Runtime
3. High Frontend Runtime

10. What can I do if a query has a high database runtime?

Answer:
1. Check if an aggregate is suitable (use "All data" to get the ratio of selected records to transferred records; a high number here is an indicator that an aggregate would improve query performance).
2. Check if the database statistics are up to date for the cube/aggregate; use the TX RSRV output (use the database check for statistics and indexes).
3. Check if the read mode of the query is unfavourable; recommended is (H).

11. What can I do if a query has a high OLAP runtime?

Answer:
1. Check if a high number of cells is transferred to the OLAP engine (use "All data" to get the value "No. of Cells").
2. Use the RSRT technical information to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of transferred records.
3. Check whether a user exit is involved in the OLAP runtime.
4. Check if large hierarchies are used and whether the entry hierarchy level is as deep as possible; this limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and the List of Values feature on the columns successor and predecessor to see which entry level of the hierarchy is used.
- Check if a proper index on the inclusion table exists

12. What can I do if a query has a high frontend runtime?

Answer:
1. Check if a very high number of cells and formats is transferred to the front end (use "All data" to get the value "No. of Cells"), which causes high network and front-end (processing) runtime.
2. Check if the front-end PCs are within the recommendations (RAM, CPU MHz).
3. Check if the bandwidth of the WAN connection is sufficient.

Saturday 25 January 2014

ABAP Programming Best practice for BW

Contents
Introduction
Aesthetics of programming
Variable Declaration
SELECT statements
General
1. Introduction
This document is intended to cover the basics of good ABAP programming for BW developers: developers who are proficient in BW but not so much in ABAP, as well as ABAP learners. In a normal BW environment we use ABAP to write code for a look-up in a transformation or to code function modules that extract data from source systems. This process generally involves declarations, selection of data, and reading the selected data to manipulate it or to perform calculations. I have taken into consideration only these basic steps for this document.
2. Aesthetics of programming
·      ALWAYS comment your code
·      Use comment blocks to explain a complete code flow and single line comments for individual statements wherever necessary
·      Your comments should explain the question of “why” and not the “what”. What is happening can be understood from the code
SELECT COMP_CODE GL_ACCOUNT SALES FROM /BIC/AZSALESDSO00
  INTO TABLE l_t_zsales
  WHERE DOC_TYPE = 'SP'.
* Selecting documents of type 'SP' from sales DSO
The WHAT is not very useful, is it? A WHY comment tells the reader more:
* Only Special Posting documents are selected for a valid look-up
·      Maintain a change log at the start of the program with the date, user ID and a short description of each change (see the sample block below)
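A minimal sketch of such a change log, kept as a comment block; the names and dates are only illustrative:

*----------------------------------------------------------------------*
* Change log
* Date        User ID     Description
* 29.01.2014  DEVUSER01   Initial version of the sales look-up routine
* 05.02.2014  DEVUSER02   Restricted selection to special postings
*----------------------------------------------------------------------*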
·      Maintain a certain style of format throughout the program
       a) Reserved words in upper case, variables in lower case
       b) E.g.: SELECT FROM mytable WHERE myfield = field
·      Always click the Pretty Printer button before saving and activating the code (whenever possible); this will do the proper formatting of the code (citing VinodBokkade's comment below)
3. Variable Declaration
·      Use only the required length when declaring a variable using elementary data types
       E.g.: lv_xxx TYPE i; lv_yyy TYPE c LENGTH 10
·      It is always better to define a variable using TYPE and referring to a DDIC object
       E.g.: lv_xxx TYPE mytable-field1.
·      Always follow a convention for declaring variable names. This will make it easier to understand the code and a lot easier when you debug the code.
E.g.: "var1 TYPE c LENGTH 4" does NOT say much about the purpose of the variable, but a declaration like "lv_user_date TYPE sy-datum" implies that the variable is a local variable used to store a user date in the system date format YYYYMMDD.
*       lv_<var>        - local variable
*       gv_<var>        - global variable
*       l_s_<structure> - structure definition
*       l_t_<table>     - internal table
*       wa_<area>       - work area

·      While declaring an internal table, make sure you define a structure first with only the fields that will be needed. It is not recommended to define the internal table on the structure of the database table unless you are going to consume all the fields in the table.
Doing this saves the space allocated for the internal table and also improves the speed of SELECTs and READs.
E.g.: "l_t_sales TYPE STANDARD TABLE OF /BIC/AZSALESDSO00" would slow down your code. Instead define a structure first:
TYPES: BEGIN OF l_s_sales,
         doc_no TYPE /bic/azsalesdso00-doc_no,
         amount TYPE /bic/azsalesdso00-amount,
       END OF l_s_sales.
DATA: l_t_sales TYPE STANDARD TABLE OF l_s_sales,
      wa_sales  TYPE l_s_sales.
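A SELECT that fills this lean table could then look like the following sketch; the field list matches the structure above, and the doc_type restriction is assumed from the earlier example:

SELECT doc_no amount
  FROM /bic/azsalesdso00
  INTO TABLE l_t_sales          " only the two fields the routine needs
  WHERE doc_type = 'SP'.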
4. SELECT Statements
·      Avoid using multiple SELECT statements on the same database table. Instead SELECT all the fields that will be needed into an internal table and then use a READ with the necessary restriction
·      Avoid using,
SELECT…ENDSELECT
SELECT statements inside LOOPs
Non-Key fields as search conditions in SELECT statements
·      When choosing records from a database table based on records in a different database table, use a JOIN in your SELECT statement instead of executing two separate SELECTs and looping through two internal tables (see the sketch below)
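A minimal sketch of such a JOIN; the second DSO /BIC/AZCUSTDSO00 and its field cust_name are hypothetical names invented for this example, while comp_code is taken from the earlier SELECT:

TYPES: BEGIN OF l_s_sales_cust,
         doc_no    TYPE /bic/azsalesdso00-doc_no,
         amount    TYPE /bic/azsalesdso00-amount,
         cust_name TYPE /bic/azcustdso00-cust_name,  " hypothetical DSO/field
       END OF l_s_sales_cust.
DATA: l_t_sales_cust TYPE STANDARD TABLE OF l_s_sales_cust.

* One round trip to the database instead of two SELECTs and nested loops
SELECT s~doc_no s~amount c~cust_name
  FROM /bic/azsalesdso00 AS s
  INNER JOIN /bic/azcustdso00 AS c
    ON s~comp_code = c~comp_code
  INTO TABLE l_t_sales_cust
  WHERE s~doc_type = 'SP'.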
·      Do not use 'SELECT *'. Instead define a structure with the fields you would be selecting and then SELECT only those fields into an internal table that uses this structure
·      In the context of transformation routines, use 'FOR ALL ENTRIES' when selecting records based on the source or result package. This restricts the number of records being selected instead of an open SELECT which would bring all records (see the sketch below)
Make sure you check that the source or result package is not empty before doing a 'FOR ALL ENTRIES' based SELECT
Also, try to eliminate duplicate entries from the internal table on which the FOR ALL ENTRIES selection is based
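A sketch of such a look-up in a start routine, with both safeguards applied. SOURCE_PACKAGE and the l_s_sales type are reused from the examples above; the field doc_no is assumed to exist in the source package:

* Work on a copy so SOURCE_PACKAGE itself stays untouched
DATA: l_t_src   LIKE SOURCE_PACKAGE,
      l_t_sales TYPE STANDARD TABLE OF l_s_sales.

l_t_src[] = SOURCE_PACKAGE[].

* Duplicate keys would only multiply the work on the database
SORT l_t_src BY doc_no.
DELETE ADJACENT DUPLICATES FROM l_t_src COMPARING doc_no.

* FOR ALL ENTRIES with an empty driver table selects ALL records -
* always guard against an empty package first
IF l_t_src IS NOT INITIAL.
  SELECT doc_no amount
    FROM /bic/azsalesdso00
    INTO TABLE l_t_sales
    FOR ALL ENTRIES IN l_t_src
    WHERE doc_no = l_t_src-doc_no.
ENDIF.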
·      Using an INTO CORRESPONDING FIELDS OF statement in your SELECT does not affect performance, but make sure the target internal table contains only the needed fields in the right order
5. General
·      Avoid using nested LOOP statements wherever possible
·      Observe the following for READ statements (see the sketch below)
Always SORT an internal table before you do a READ
It is always recommended to do a BINARY SEARCH in a READ statement, as long as the fields used for SORTing are the same as the WITH KEY fields used when READing
A BINARY SEARCH in a READ statement may pick no records if the SORT and READ statements use different fields
READ statements must always be followed by an 'IF sy-subrc = 0' check
READ with TRANSPORTING NO FIELDS can be used if the fields are not used for further processing
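A sketch combining these points; l_t_sales and wa_sales are reused from the declarations in section 3, and wa_source_package/wa_result_package are assumed from a transformation routine:

* SORT and READ must use the same field for BINARY SEARCH to work
SORT l_t_sales BY doc_no.

READ TABLE l_t_sales INTO wa_sales
     WITH KEY doc_no = wa_source_package-doc_no
     BINARY SEARCH.
IF sy-subrc = 0.
  " Record found - only now is wa_sales safe to use
  wa_result_package-amount = wa_sales-amount.
ENDIF.

* Existence check only - nothing is copied into a work area
READ TABLE l_t_sales TRANSPORTING NO FIELDS
     WITH KEY doc_no = wa_source_package-doc_no
     BINARY SEARCH.
IF sy-subrc = 0.
  " The document exists in the look-up table
ENDIF.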
·      Never hard-code a breakpoint in a program
There is always a chance that the program gets transported with the breakpoint still in it
·      In the context of transformations, when there is only a one-to-one mapping between the source and target, you might use a simple expert routine. The advantage is that the source package can be directly moved to the result package without having to go through individual field routines for each field mapping:
LOOP AT SOURCE_PACKAGE INTO wa_source_package.
  MOVE-CORRESPONDING wa_source_package TO wa_result_package.
  APPEND wa_result_package TO RESULT_PACKAGE.
ENDLOOP.
·      Calls to function modules should be followed by an 'IF sy-subrc = 0' check (see the sketch below)
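A sketch of such a check; Z_GET_SALES_DATA is a hypothetical function module invented for this example, together with its parameters and exceptions:

CALL FUNCTION 'Z_GET_SALES_DATA'   " hypothetical FM, for illustration only
  EXPORTING
    i_doc_type = 'SP'
  TABLES
    t_sales    = l_t_sales
  EXCEPTIONS
    no_data    = 1
    OTHERS     = 2.
IF sy-subrc <> 0.
  " Handle the error instead of silently continuing with an empty table
  MESSAGE 'Sales data could not be read' TYPE 'I'.
ENDIF.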
Remove any unused code or function module calls, even if commented out, before transporting to production

SAP BW Data packet processing issues - Second attempt of processing

This topic explains one of the data packet processing issues that we faced in our project. The data packet is locked, shows the message that the second attempt of data processing was successful, and remains active with a yellow status and a warning (in our case, for data packet 20).

In our case, a WBS element data load got stuck due to this second-attempt error, and the job remains active indefinitely unless we process this data load manually.


Before starting the job below, there are a few steps that need to be followed.
Go to the data load request and set its QM status to red.

Go to Edit -> Init Update -> Settings for further update


Data packet processing should be set to be processed in the background.


Once this setting is done, go to the data packet, right-click and select the manual update option. This option is only shown if the data packet is in yellow status. Once you start the manual further update, a job with the ID BI_BOOK* will start. You need to monitor this job in SM37 until it completes successfully.


Once this has completed successfully, check that the overall QM status of the request turns green. If it does not, you can set it to green manually.

The treatment of the warning can also be handled in the InfoPackage settings.


There are multiple reasons for this to happen.
·      This can occur when doing massive parallel updates of time-dependent master data into the target characteristic. Solution: reduce the number of data packets updating in parallel, effectively serializing the update by processing one data packet at a time.
·      This can also occur during the nightly load window when there is a lot of other cube-based activity in the system due to different process chains scheduled at the same time. Solution: re-arrange the nightly load window process chains to do only master data activities first and transaction data afterwards, not both at the same time.
·      You could also review the sequence in which the master data attributes are loaded. The system behaves more efficiently when you execute the master data loads in the order of their relational dependencies on each other. As you know, a characteristic has attributes, and each attribute is a separate characteristic in its own right. Solution: load the simpler, base characteristics first and then load the characteristics that have them as attributes. This helps to ensure the SIDs are all in place first and reduces the subsequent master data loading to only checking whether the values exist (which they now do).
·      Such a warning can also simply be tolerated: if it occurs, say, in a master data load, you can set the QM status to green as shown above, and further processing of the process chain will not be affected.

SAP BW Production Support Issues

1. DTP Failure

Select the step -> right-click and select "Display Messages" -> there we will get the message which gives the reason for the abend.

A DTP can fail due to the following reasons; in such cases we can go for restarting the job.
·         System Exception Error
·         Request Locked
·         ABAP runtime error
·         Duplicate records
·         Erroneous Records from PSA.

Duplicate records:

            In case of duplicate records, we can find this in the error message along with the InfoProvider's name. Before restarting the job after deleting the bad DTP request, we have to handle the duplicate records: go to the InfoProvider -> DTP -> Update tab -> check the handle-duplicate-records option -> activate -> execute the DTP. After successful completion of the job, uncheck the handle-duplicate-records option and activate again.

DTP Long Run:

·         If a DTP is taking longer than the regular run time without having the background job, then we have to turn the status of the DTP to red, delete the bad DTP request (if any) and repeat the step or restart the job.

·         Before restarting the job / repeating the DTP step, make sure about the reason for the failure.
·         If the failure is due to a space issue in the F fact table, engage the DBA team as well as the BASIS team and explain the issue to them. The table size needs to be increased before performing any action in BW; this is done by the DBA team. After increasing the space in the F fact table we can restart the job.
  
  
Erroneous Records from PSA:
  
     Whenever a DTP fails because of erroneous records while fetching data from the PSA into a data target, the data needs to be corrected in ECC. If that is not possible, then after getting approval from the business we can edit the erroneous records in the PSA and then run the DTP again.

Go to PSA -> select request -> select error records -> edit the records and save.
Then run the DTP.
  
2.      INFO PACKAGE FAILURE:
  
The following are the common reasons for InfoPackage failure and the related handling steps:
·         Source System Connection failure
·         tRFC/IDOC failure
·         Communication Issues
·         Processing the IDOC Manually in BI

·         Check the source system connection with the help of SAP BASIS; if it is not fine, ask them to rebuild the connection. After that, restart the job (InfoPackage).
Go to RSA1 -> select source system -> System -> Connection check.

·         In case of failed tRFCs/IDOCs, the error message will be something like "Error in writing the partition number DP2" or "Caller 01, 02 errors". In such cases, reprocess the tRFC/IDOC with the help of SAP BASIS, and the job will then finish successfully.
   
·         If the data is loading from the source system into the DSO directly, then delete the bad request in the PSA table and restart the job.

·         InfoPackage long run: if an InfoPackage is running long, check whether the job has finished in the source system. If it has finished, check with SAP BASIS whether any tRFC/IDOC is stuck or has failed. If the job is still in yellow status even after reprocessing the tRFC, turn the status to red, then restart/repeat the step and force-complete the job after it finishes.

·         Before turning the status to red/green, make sure whether the load is full or delta, and verify the time stamp properly.

·         Time Stamp Verification:

Select the InfoPackage -> Process Monitor -> Header -> select the request -> go to the source system (Header -> Source System) -> SM37 -> enter the request and check its status in the source system. If it is active, check whether there are any stuck or failed tRFCs/IDOCs.
If the request is in cancelled status in the source system, check the InfoPackage status in the BW system. If the IP status is also failed/cancelled, check the data load type (full or delta). If the load is full, turn the InfoPackage status to red and repeat/restart the InfoPackage/job. If the load is delta, go to RSA7 in the source system and compare the time stamp with the last updated time of the background job in SM37. If the time stamp in RSA7 matches, turn the InfoPackage status to red and restart the job; it will fetch the data in the next iteration.
If the time stamp is not updated in RSA7, turn the status to green and restart the job; it will fetch the data in the next iteration.

I/P status (source system) | I/P status (BW system) | RSA7 time stamp (source system)       | SM37 time stamp (source system)   | Action
Red (Cancelled)            | Active                 | Matches SM37 last updated time        | Matches RSA7 time stamp           | Turn the I/P status to Red and restart the job
Red (Cancelled)            | Cancelled              | Matches SM37 last updated time        | Matches RSA7 time stamp           | Turn the I/P status to Red and restart the job
Red (Cancelled)            | Active                 | Does not match SM37 last updated time | Does not match RSA7 time stamp    | Turn the I/P status to Green and restart the job
Red (Cancelled)            | Cancelled              | Does not match SM37 last updated time | Does not match RSA7 time stamp    | Turn the I/P status to Green and restart the job



·         Processing the IDOC Manually in BI:
  
When an IDOC is stuck in BW although the background job has completed successfully in the source system, we can process the IDOC manually in BW.

Go to the InfoPackage -> Process Monitor -> Details -> select the IDOC which is in yellow status (stuck) -> right-click -> process the IDOC manually -> it will take some time to get processed.
Note: we can process an IDOC manually in BW only when the background job has completed in the source system and the IDOC is stuck in BW only.



3.      DSO Activation Failure:

When there is a failure in the DSO activation step, check whether the data is loading into the DSO from the PSA or directly from the source system. If the data is loading into the DSO from the PSA, then activate the DSO manually as follows:

·         Right-click the DSO activation step -> Target Administration -> select the latest request in the DSO -> select Activate -> after the request has turned to green status, restart the job.

·         If the data is loading directly from the source system into the DSO, then delete the bad request in the PSA table and restart the job.

4.      Failure in Drop Index/ Compression step:

When there is a failure in the Drop Index / Compression step, check the error message. If it failed due to a lock issue, the job failed because of a parallel process or an action performed on that particular cube or object. Before restarting the job, make sure the object is unlocked.

The index step can also fail in case of TREX server issues. In such cases, engage the BASIS team, get the information regarding the TREX server, and repeat/restart the job once the server is fixed.

A compression job may fail when another job is trying to load data into or access data from the cube. In such a case the job fails with an error message like "Locked by ......". Before restarting the job, make sure the object is unlocked.


5. Roll Up failure:

A roll-up fails due to contention issues. When a master data load is in progress, there is a chance of roll-up failure due to resource contention. In such a case, before restarting the job/step, make sure the master data load has completed. Once the master data load finishes, restart the job.


6. Change Run – Job finishes with error RSM 756

When the attribute change run fails due to contention, we have to wait for the other attribute change run to complete; only one ACR can run in BW at a time. Once the other ACR job has completed, we can restart/repeat the job.

We can also run the ACR manually in case of any failures.
Go to RSA1 -> Tools -> Apply Hierarchy/Attribute Change Run -> select the appropriate request in the list for which we have to run the ACR -> Execute.

7. Transformation Inactive:

If changes are moved to production without being saved and activated properly, or if a modification is made to the transformation without reactivating it, there is a possibility of a load failure with the error message "Failure due to Transformation Inactive".

In such cases, we will have to activate the Transformation which is inactive.
Go to RSA1 -> select the transformation -> Activate

In case of no authorization for activating the transformation in the production system, we can do it by using the program RSDG_TRFN_ACTIVATE.

Try the following steps to use the program "RSDG_TRFN_ACTIVATE"; here you will need to enter certain details:
Transformation ID: the transformation's technical name (ID)
Object Status: ACT
Type of Source: the source object type
Source Name: the source's technical name
Type of Target: the target object type
Target Name: the target's technical name
Execute. The transformation status will be turned to active.
Then we can restart the job, and it will complete successfully.


     8. Process chain started from yesterday's failed step:

In a few instances, a process chain starts from the step which failed in the previous iteration instead of starting from the "Start" step.
In such cases we will have to delete the previous day's process chain log to start the chain from the beginning (from the start variant).
Go to ST13 -> select the process chain -> Log -> Delete.
Or we can use the function module RSPROCESS_LOG_DELETE for process chain log deletion.
Give the log ID of the process chain, which we can get from the ST13 screen.
Then we can restart the chain.
Turning the Process Chain Status using Function Module:

At times, when a process chain has been running for a long time without any progress, we will have to set the status of the entire chain or of a particular step by using a function module.
Function module: RSPC_PROCESS_FINISH
  
The function module "RSPC_PROCESS_FINISH" is used for setting the status of a particular process to finished.
To complete a DTP load which was running long, try the following steps to use "RSPC_PROCESS_FINISH"; here you need to enter the following details:

LOG ID: this ID is the ID of the parent chain.
CHAIN: here you will need to enter the name of the chain which has the failed process.
TYPE: the type of the failed step can be found by checking the table "RSPCPROCESSLOG" via "SE16" (or "ZSE16") using the variant and instance of the failed step. The table "RSPCPROCESSLOG" can be used to find out various details regarding a particular process.
INSTANCE & VARIANT: the instance and variant names can be found by right-clicking on the failed step, checking the "Display Messages" option of the failed step and then checking the Chain tab.
STATE: the state identifies the overall state of the process. The various states for a step are listed below (see the sketch after the list).
R - Ended with errors
G - Successfully completed
F - Completed
A - Active
X - Canceled
P - Planned
S - Skipped at restart
Q - Released
Y - Ready
(blank) - Undefined
J - Framework error upon completion (e.g. follow-on job missing)
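For reference, a hedged sketch of calling this function module from ABAP. The parameter names and DDIC types below are assumptions inferred from the inputs listed above; verify the actual signature in SE37 before using it:

* All names below are assumptions - check the real signature in SE37
DATA: lv_logid    TYPE rspc_logid,     " log ID of the parent chain
      lv_chain    TYPE rspc_chain,     " chain containing the failed process
      lv_type     TYPE rspc_type,      " step type from RSPCPROCESSLOG
      lv_variant  TYPE rspc_variant,   " variant of the failed step
      lv_instance TYPE rspc_instance.  " instance of the failed step

CALL FUNCTION 'RSPC_PROCESS_FINISH'
  EXPORTING
    i_logid    = lv_logid
    i_chain    = lv_chain
    i_type     = lv_type
    i_variant  = lv_variant
    i_instance = lv_instance
    i_state    = 'G'.                  " G = successfully completed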

9.     Hierarchy Save Failure:

When there is a failure in the Hierarchy Save step, we have to follow the process below.
If there is an issue with hierarchy saves, we will have to schedule the InfoPackages associated with the hierarchies manually and then run an attribute change run to update the changes to the associated targets. Please find the step-by-step process below.


ST13 -> select the failed process chain -> select the Hierarchy Save step -> right-click -> Display Variant -> select the InfoPackage in the hierarchy -> go to RSA1 -> run the InfoPackage manually -> Tools -> Apply Hierarchy/Attribute Change Run -> select the hierarchy list (here you can find the list of hierarchies) -> Execute.