Pages

Friday 30 August 2013

DSO Job Log and Activation Parameters

1. DataStore Objects in BW 7.x

In BI 7.x there are three different kinds of DataStore objects (DSO): standard, write-optimized, and direct update. A standard DSO consists of a new data table, an active data table, and a change log table, which records the changes. Write-optimized DSOs and DSOs for direct update consist of an active table only.

In BI 7.x the background process by which data in standard DataStore objects is activated has changed compared to BW 3.5 and earlier. In this article I will explain the DSO activation job log and the settings/parameters of transaction RSODSO_SETTINGS. I will describe how the parameters you can set in this transaction influence DSO activation performance. I will not describe the different activation types.

2. Manual activation of a request

If you have loaded a new request into your standard DSO with a Data Transfer Process (DTP), the data is written to the new data table. You can activate the request manually or within a process chain. If you activate requests manually, you get the following popup screen:


Picture 1: Manual DSO Activation


The button "Activate in Parallel" opens the settings for parallel activation. In this popup you select either dialog or background processing. For background processing you also select a job class and server. For both you define the number of jobs for parallel processing. By default it is set to '3'. This means you have two jobs that can be scheduled in parallel to activate your data, the BIBCTL* jobs. The third job is needed to control the activation process and schedule the other processes: that is the BI_ODSA* job.

3. BI_ODSA* and BIBCTL* Jobs

The main job for activating your data is "BI_ODSAxxxxxxxxxxxxxxxxxxxxxxxxx" with a unique 25-character GUID at the end. Let's have a look at the job log in SM37.


Picture 2: Job log for BI_ODSA* job

Activating data is done in three steps. First, the status of the request in the DSO is checked to see whether it can be activated (marked green in the log). If there is another yellow or red request before this request in the DSO, activation terminates. In the second step, the data is checked against archived data (marked blue in the log). In the third step, the actual activation of the data takes place (marked red in the log). During step 3 a number of sub-jobs "BIBCTL_xxxxxxxxxxxxxxxxxxxxxxxx", each with a unique GUID at the end, are scheduled. This is done to achieve higher parallelism and thus better performance.

But often the opposite seems to happen: you set a high degree of parallelism and start activation, yet even the activation of a few data records takes a long time. In Reference 3 you will find some general hints for DataStore performance.

4. Transaction for DSO settings

You can view and change these DSO settings with "Goto->Customizing DataStore" in the manage view of your DSO. You are now in transaction RSODSO_SETTINGS. In Reference 4 at the end of this document you can find a note about runtime parameters of DSOs.

Picture 3: RSODSO_SETTINGS

As you can see, you can make cross-DataStore settings or DataStore-specific settings. I choose cross-DataStore and "Change" for now. A new window opens, divided into three sections:

Picture 4: Parameters for cross-datastore settings

Section 1 for activation, section 2 for SID generation and section 3 for rollback. Let's have a look at the sections one after another.

5. Settings for activation

In the first section you can set the data package size for activation, the maximum wait time and change process parameters.

If you click on the button "Change process params" in part "Parameter for activation" a new popup window opens:


Picture 5: Job parameters

The parameters described here are your default '3' processes provided for parallel processing. By default, background activation is also chosen. You can now save and transport these settings to your QAS or productive system. But be careful: these settings are valid for every DSO in your system. As you can see from picture 4, the number of data records per data package is set to 20,000 and the wait time for a process is set to 300; it may differ in your system. What does this mean? It simply means that all records which have to be activated are split into smaller packages with a maximum of 20,000 records each. A new "BIBCTL*" job is scheduled for each data package.

The main activation job calculates the number of "BIBCTL*" jobs to be scheduled with this formula:
<Maximum number of jobs> = Max( <parallel processes for activation>, <number of records in new data> div <maximum package size> + 2 ).

It is the maximum of the number of available processes for activation and the integer part of the number of records divided by the maximum package size, plus 2: one process for the fractional part of the division and one control process for your BI_ODSA job. So what happens if you have only 10,000 records to activate and four processes for activation? The result of the formula Max(4, 2) is 4. Your BW will schedule 3 jobs and split your 10,000 records across the 3 processes, roughly 3,333 records each.
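To make the formula concrete, here is a minimal Python sketch of the job calculation (the function name and values are illustrative, not SAP code):

```python
# Sketch of the BI_ODSA* job-count formula described above.
def max_activation_jobs(parallel_processes, num_records, max_package_size):
    """Max(<parallel processes>, <records> div <package size> + 2)."""
    return max(parallel_processes, num_records // max_package_size + 2)

# Example from the text: 10,000 records, 4 processes, package size 20,000.
print(max_activation_jobs(4, 10000, 20000))  # 4: 3 BIBCTL* jobs + 1 control job
```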

The default setting of 20,000 records can be enough for a small DSO or a delta DSO where you expect only little data per load. But if you have mass data, like an initial load for CO-PA with millions of records, you should increase this parameter to speed up your CO-PA activation. Keep in mind this rule of thumb taken from Note 1392715:

<Number of records to activate at once> = <Number of requests> * <Sum of all records in these requests> / <Package size of the activation step> <= 100,000.

This rule is implemented by note 1157070.
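A hedged Python sketch of this rule of thumb, as quoted above, can help check whether a planned package size keeps a load within the recommended limit (names are illustrative):

```python
# Rule of thumb from Note 1392715 as quoted above:
# <requests> * <sum of records> / <package size> should stay <= 100,000.
def within_rule_of_thumb(num_requests, total_records, package_size):
    return num_requests * total_records / package_size <= 100_000

# A CO-PA style initial load: 1 request with 50 million records and the
# default package size of 20,000 gives 2,500 packages, well within the limit.
print(within_rule_of_thumb(1, 50_000_000, 20_000))  # True
```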

So if you have only a couple of records to activate, it can make sense to set the number of parallel processes to 2 instead of 3. For mass data with an expected volume of millions of records you can set the number of parallel processes to a higher value, for example 6, so that you have 5 processes for activation and 1 control process. This parameter depends on your machine size and memory.

In Note 1392715 you will also find some common SAP recommendations for your system and other useful performance hints:

Batch processes = (All Processes/2)+1

Wait Time in Sec. = 3 * rdisp/max_wprun_time

rdisp/max_wprun_time - RZ11 (Note 25528)
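These recommendations can be sketched in Python (a rough illustration only; the inputs come from your own system, and the value of rdisp/max_wprun_time should be checked in RZ11, where 600 seconds is a common default):

```python
# Sketch of the recommendations quoted above from Note 1392715.
def recommended_settings(all_processes, max_wprun_time_seconds):
    batch_processes = all_processes // 2 + 1  # Batch processes = (All/2)+1
    wait_time = 3 * max_wprun_time_seconds    # Wait time = 3 * rdisp/max_wprun_time
    return batch_processes, wait_time

# A system with 12 work processes and rdisp/max_wprun_time = 600 seconds:
print(recommended_settings(12, 600))  # (7, 1800)
```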

The maximum wait time for a process defines how long the job is allowed to run and activate data. If the activation job takes longer than defined, BW assumes system or database problems and aborts the job. If you have a high number of data packages you should set the wait time to a higher value; if you have few data packages, set a low value.

One little trick: if you simply count the number of your BIBCTL* jobs in the job log and multiply it by the maximum package size, you know how many records have been activated and how many records still have to be activated. As a further approximation, you can take the runtime of each BIBCTL* activation job and multiply it by the maximum number of jobs to estimate how long your activation will run.
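The trick above amounts to a back-of-the-envelope calculation; here is an illustrative Python sketch (all inputs are assumptions you read off SM37 and RSODSO_SETTINGS, and the estimate treats the remaining jobs sequentially, as the approximation in the text does):

```python
# Estimate activated records and remaining runtime from the BIBCTL* job log.
def activation_progress(finished_bibctl_jobs, max_package_size,
                        total_records, avg_job_seconds):
    activated = finished_bibctl_jobs * max_package_size
    remaining = max(total_records - activated, 0)
    remaining_jobs = -(-remaining // max_package_size)  # ceiling division
    eta_seconds = remaining_jobs * avg_job_seconds
    return activated, eta_seconds

# 10 finished jobs of 20,000 records each, 1 million records in total,
# 30 seconds per BIBCTL* job:
done, eta = activation_progress(10, 20000, 1_000_000, 30)
print(done, eta)  # 200000 1200
```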

You can change the parameters for data package size and maximum wait time for process and they will be used for the next load.

6. Settings for SID generation

In the SID generation section you can set parameters for maximum package size and wait time for processes, too. The button "Change process params" opens the popup described in picture 5. In this popup you define how many processes will be used for SID generation in parallel; it is again your default value. Minimum package size describes the minimum number of records that are bundled into one package for SID generation.

With the SAP Layered Scalable Architecture (LSA) in mind, you need SID generation for your DSOs only if you want to report on them and have queries built on them. Even if you have queries built on top of a DSO without SID generation, missing SIDs will be generated at query execution time, which slows down query execution. For more information on LSA you can find a really good webinar by Jürgen Haupt in the references at the end of the document. Unfortunately, SID generation is set as the default when you create a DSO. My recommendation is: switch off SID generation for any DSO! If you use the DataStore object as the consolidation level, SAP recommends that you use the write-optimized DataStore object instead. This makes it possible to provide data in the Data Warehouse layer 2 to 2.5 times faster than with a standard DataStore object with unique data records and without SID generation. See the performance tips for details.

The saving in runtime is influenced primarily by the SID determination. Other factors that have a favorable influence on the runtime are a low number of characteristics and a low number of disjointed characteristic attributes.

7. Settings for Rollback

Finally, the last section describes rollback. Here you set the maximum wait time for rollback processes, and with the button "Change process params" you set the number of processes available for rollback. If anything goes wrong during activation, e.g. your database runs out of table space or an error occurs during SID generation, a rollback is started and your data is reset to the state before activation. The most important parameter is the maximum wait time for rollback: if the time is exceeded, the rollback job is canceled. This could leave your DSO in an unstable state. My recommendation: set this parameter to a high value. If you have a large amount of data to activate, you should allow at least double the maximum wait time for activation as the rollback wait time. Give your database enough time to execute the rollback and reset your DSO to the state before activation started.

Button "Save" saves all your cross-datastore settings.

8. DataStore-specific settings

For a DataStore-specific setting you enter your DSO in the input field, as you can see in picture 3. With this local DSO setting you overwrite the global DSO settings for the selected DSO. Especially if you expect very large DSOs with lots of records, you can change your parameters here. If you press the button "Change process params", the same popup opens as under the global settings, see picture 5.

9. Activation in Process chains

I have explained the settings for manual activation of requests in a standard DSO. For process chains you have to create a variant for DSO activation as a step in your chain, see picture 6. In this variant you can set the number of parallel jobs for activation accordingly with the button "Parallel Processing".

Picture 6: Process variant DSO activation

Other parameters for your standard DSO are taken from the global or local DSO settings in transaction RSODSO_SETTINGS when the process chain runs and activation takes place.


Wednesday 28 August 2013

BEx Reporting on 12 Months Rolling Periods using Posting Period (0FISCPER3)


Introduction


Here is the scenario: your organization needs to report on the last year of data on a rolling-period basis. That means the user will input any month/year combination, and starting from that month/year you have to report on the previous 12 months. And it is not straightforward, because instead of fiscal year/period (0FISCPER), your InfoProvider contains fiscal year (0FISCYEAR) and posting period (0FISCPER3). Below are the steps to follow to successfully achieve the desired result.


Frontend Setting

First of all you need to design your query in such a way that it provides the right platform to achieve the desired output. The basic requirement for the report is that the user inputs at least one month/year combination. For this you will require two user entry variables. Create them as below.

Create variables for posting period (0FISCPER3)


Create a variable named ZPER_P0 for posting period to store the initial value input by the user, as per the below specifications, using the create variable wizard.
  1. Variable Type: Characteristic Value
  2. Processing Type: User Entry/Default Value
  3. Variable Entry: Mandatory
  4. Ready for Input selected
 
Now, create another 11 variables named ZPER_P1 to ZPER_P11 for posting period to store the remaining eleven values, calculated automatically, as per the below specifications using the create variable wizard.
  1. Variable Type: Characteristic Value
  2. Processing Type: Customer Exit
  3. Variable Entry: Mandatory
  4. Ready for Input NOT selected

Create variables for fiscal year (0FISCYEAR)

Create a variable named ZFYR_Y0 for fiscal year to store the initial value input by the user, as per the below specifications, using the create variable wizard.
  1. Variable Type: Characteristic Value
  2. Processing Type: User Entry/Default Value
  3. Variable Entry: Mandatory
  4. Ready for Input selected
 
Now, create another 11 variables named ZFYR_Y1 to ZFYR_Y11 for fiscal year to store the remaining eleven values, calculated automatically, as per the below specifications using the create variable wizard.
  1. Variable Type: Characteristic Value
  2. Processing Type: Customer Exit
  3. Variable Entry: Mandatory
  4. Ready for Input NOT selected
This finishes our front-end setting. Now it is time to fill those customer exit variables; follow the backend setting section below.

Backend Setting


Now we have created the necessary variables for reporting on rolling periods: one user entry variable each for posting period and fiscal year, and eleven customer exit variables each for posting period and fiscal year. To fill those customer exit variables, we now need to write code in the RSR00001 exit (Customer Exit Global Variables in Reporting). Basic knowledge of ABAP is required for this task. Follow the steps below to fill the values for the variables.
  1. Create a project in CMOD (if you do not have any)
  2. Assign the enhancement RSR00001 to the project
  3. Save and activate the project
  4. Go to SMOD transaction
  5. Select RSR00001 as enhancement
  6. Select components and click Display
  7. Double click on EXIT_SAPLRRS0_001 function module
  8. Double click on ZXRSRU01 include
 
A code editor screen will appear; click on Change button icon to enable the code editor for input.


Logic for filling Posting Period Values


Now we want to fill the remaining eleven posting period variables ZPER_P1 to ZPER_P11 in a rolling-back fashion. That means if the user inputs '003' (March) as the value of the posting period variable on the selection screen, then ZPER_P1 should have '002' (February), ZPER_P2 should have '001' (January), ZPER_P3 should have '012' (December of the previous year), and so on.
To accomplish the above task, the logic is given below.
Step 1: Read the value the user entered on the selection screen for the first posting period variable, i.e. ZPER_P0
Step 2: Subtract the index number of the variable currently being processed from the user input value (refer to the code)
Step 3: If the result is less than 1, add 12 to it
Step 4: Assign the result as the value of the posting period customer exit variable currently being processed

Sample code for ZPER_P1 posting period variable


data: per(3) type n,  "posting period in '003' format
      nper   type i.  "numeric period value

read table i_t_var_range into loc_var_range with key vnam = 'ZPER_P0'. "(Reference to step 1)
if sy-subrc = 0.
  per = loc_var_range-low.
endif.
nper = per.         "numeric value of the user's period
nper = nper - 1.    "(Reference to step 2, here 1 is subtracted because ZPER_P1)
if nper < 1.
  nper = nper + 12. "(Reference to step 3)
endif.
per = nper.         "back to three digits with leading zeros, e.g. '002'
l_s_range-sign = 'I'.
l_s_range-opt = 'EQ'.
l_s_range-low = per. "(Reference to step 4)
append l_s_range to e_t_range.


Logic for filling Fiscal Year Values

At this point we have filled the posting period values correctly, but what about the corresponding year values? We also need to fill these fiscal year values correctly. For example, if the month changes from January to December, then the year should also change from 2008 to 2007. In the following section we will see how to populate the fiscal year variables ZFYR_Y1 to ZFYR_Y11.
To accomplish the above task, the logic is given below.
Step 1: Read the value the user entered on the selection screen for the first fiscal year variable, i.e. ZFYR_Y0
Step 2: Read the value the user entered on the selection screen for the first posting period variable, i.e. ZPER_P0
Step 3: Subtract the index number of the variable currently being processed from the user input value of the posting period (refer to the code)
Step 4: If the result of the posting period subtraction is less than 1, subtract 1 from the fiscal year value (the fiscal year is always decreased by exactly 1, for all fiscal year variables)
Step 5: Assign the new fiscal year value to the fiscal year customer exit variable currently being processed

Sample code for the ZFYR_Y1 fiscal year variable

data: l_fiscyear(4) type n,  "fiscal year, e.g. '2008'
      l_per         type i.  "numeric posting period

read table i_t_var_range into loc_var_range with key vnam = 'ZFYR_Y0'. "(Reference to step 1)
if sy-subrc = 0.
  l_fiscyear = loc_var_range-low.
  read table i_t_var_range into l_posper_rng with key vnam = 'ZPER_P0'. "(Reference to step 2)
  if sy-subrc = 0.
    l_per = l_posper_rng-low.
    l_per = l_per - 1.   "(Reference to step 3, here 1 is subtracted because ZFYR_Y1)
    if l_per < 1.        "(Reference to step 4)
      l_fiscyear = l_fiscyear - 1.
    endif.
  endif.
endif.
l_s_range-sign = 'I'.
l_s_range-opt = 'EQ'.
l_s_range-low = l_fiscyear. "(Reference to step 5)
append l_s_range to e_t_range.


Create similar code for the rest of the posting period and fiscal year variables. Do not forget to keep in mind the index number of the posting period or fiscal year variable you are processing, e.g. the index number of variable ZPER_P3 is 3.
Save and activate the code and project and now you are ready to report on rolling periods.
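If you want to sanity-check the values the exit variables should return for a given user input, the whole rolling logic can be sketched in Python (illustrative only; the BW implementation stays in the ABAP exit above):

```python
# Generate the 12 (posting period, fiscal year) pairs, rolling back from
# the user's input, following the step-by-step logic described above.
def rolling_periods(period0, year0, count=12):
    result = []
    for index in range(count):
        period = period0 - index  # subtract the variable's index number
        year = year0
        if period < 1:            # rolled into the previous fiscal year
            period += 12
            year -= 1
        result.append((f"{period:03d}", year))  # '003'-style formatting
    return result

# User enters 003/2008: the 12 periods run from 003/2008 back to 004/2007.
print(rolling_periods(3, 2008)[:4])
# [('003', 2008), ('002', 2008), ('001', 2008), ('012', 2007)]
```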

Enhancements
  • You can also use a function module to avoid repeating blocks of logic, e.g. subtracting the value from the posting period variable
  • You can also use text variables to display the descriptive titles for month names like February, January etc. instead of pure numbers like 2, 1 etc. in the report output.

    Inventory Management (0IC_C03) Part:2


    Introduction:

    To monitor Inventory Management and to take decisions on Inventory Controlling,
    management/users need reports on Inventory Management; basically they need the complete status/details
    of all Materials, i.e. from Raw Materials to Finished Goods: the in-flow and out-flow of the materials in the organization.
    In the coming articles we will discuss Process Keys, Movement Types, and other details.

    DataSources:

    A DataSource is a set of fields that provide the data for a business unit for data transfer into BI System. From 
    a technical viewpoint, the DataSource is a set of logically-related fields that are provided to transfer data into 
    BI in a flat structure (the extraction structure), or in multiple flat structures (for hierarchies).
    There are two types of DataSources (Master Data DataSources are further split into three types):
    1. DataSource for Transaction data.
    2. DataSource for Master data.
    a) DataSource for Attributes.
    b) DataSource for Texts.
    c) DataSource for Hierarchies.
    To fulfill industry requirements, SAP provides the following transactional DataSources for Inventory
    Management/Materials Management.
    1. 2LIS_03_BX : Stock Initialization for Inventory Management.
    2. 2LIS_03_BF : Goods Movements From Inventory Management.
    3. 2LIS_03_UM : Revaluations.
    All the above DataSources are used to extract the Inventory Management/ Materials Management data from 
    ECC to BI/BW.

    About 2LIS_03_BX DataSource:

    This DataSource is used to extract stock data from MM Inventory Management for the initialization
    of a BW system, and to enable the functions for calculating the valuated sales order stock and the
    valuated project stock in BW.
    See the SAP help on 2LIS_03_BX DataSource.

    About 2LIS_03_BF DataSource:

    This DataSource is used to extract the material movement data from MM Inventory Management 
    (MM-IM) consistently to a BW system. A data record belonging to extractor 2LIS_03_BF is defined 
    in a unique way using the key fields Material Document Number (MBLNR), Material Document 
    Year (MJAHR), Item in Material Document (ZEILE), and Unique Identification of Document Line 
    (BWCOUNTER).
    See the SAP help on 2LIS_03_BF DataSource.

    About 2LIS_03_UM DataSource:

    This DataSource is used to extract the revaluation data from MM Inventory Management (MM-IM) 
    consistently to a BW system.
    See the SAP help on the 2LIS_03_UM DataSource.
    To use all the above DataSources, you first need to install them in ECC. See the steps given below.

    How to Install?

    In ECC, using Transaction Code RSA5, we can install the above DataSources. SAP delivers all DataSources in the form of a "Delivered Version (From this System)"; once you install one, its status changes to "Active Version".


    Select the DataSources one by one and Click on Activate DataSources.


    Post Process DataSources:

    Once we install the DataSources, they are available in Transaction Code RSA6.
    So go to RSA6 and do the post-processing steps that are required for the DataSources.


    Select a DataSource and then click on the Change Icon.



    Once you click on the Change Icon, you get to the DataSource Customer Version. Here you can hide unwanted fields and select the fields for Data Selection in the InfoPackage. After that, save and go back.


    Selection:

    Set this indicator to use a field from the DataSource as the selection field in the Scheduler for BW. Due to the properties of the DataSource, not every field can be used as a selection field.

    Hide Field:

    To exclude a field in the extraction structure from the data transfer, set this flag. The field is then no longer available in BW for determining transfer rules and therefore cannot be used for generating transfer structures.

    Inversion:

    Field Inverted When Cancelled (*-1)
    The field is inverted in the case of reverse posting. This means that it is multiplied by (-1). For this, the extractor has to support a delta record transfer process, in which the reverse posting is identified as such.
    If the option Field recognized only in Customer Exit is chosen for the field, this field is not inverted. You cannot activate or deactivate inversion if this option is set.


    Inventory Management (0IC_C03) Part:1

    What is Inventory Management?

    Inventory management is primarily about specifying the size and placement of stocked goods. Inventory
    management is required at different locations within a facility or within multiple locations of a supply network
    to protect the regular and planned course of production against the random disturbance of running out of
    materials or goods. The scope of inventory management also concerns the fine lines between replenishment
    lead time, carrying costs of inventory, asset management, inventory forecasting, inventory valuation,
    inventory visibility, future inventory price forecasting, physical inventory, available physical space for
    inventory, quality management, replenishment, returns and defective goods and demand forecasting.
    The Inventory Management system provides information to efficiently manage the flow of materials,
    effectively utilize people and equipment, coordinate internal activities, and communicate with customers.
    Inventory Management and the activities of Inventory Control do not make decisions or manage operations;
    they provide the information to Managers who make more accurate and timely decisions to manage their
    operations.

    Building blocks for the Inventory Management system:

    Sales Forecasting or Demand Management
    Sales and Operations Planning
    Production Planning
    Material Requirements Planning
    Inventory Reduction

    Inventory Management Process:

    This process allows users to manage all the Materials/items purchased, manufactured, sold, or kept in stock.
    For each Material/item, users enter the data relevant for a particular area in the system. This data is used
    automatically by the system for purchasing, sales, production, inventory management, accounting, and so on.
    It provides optimum support for business. Helps create orders, delivery notes, and outgoing invoices,
    automatically calculating prices, sales units, and gross profit. Enables complete control over stock quantities
    at all times and lets users analyze the financial aspects of stockholding at the same time. Allows users to
    control production on the basis of the items that are used for production and on the basis of the finished
    product and any by-products created.

    The Purpose of SAP - Inventory Management:

    Today most industries/companies have implemented SAP or are implementing it, because SAP
    gives a full-fledged solution for their business requirements and meets customer/client
    requirements without huge customization.
    For example, If you take FMCG industries, they want to know the status of their Inventory on
    Daily/Weekly/Monthly/Yearly basis. Because at the end of the day, Management wants to know the
    movements of the Goods/Products for their future planning and also to maintain the balance between
    Demand and Supply.
    The main components are:
    Management of material stocks on a quantity and value basis.
    Planning, Entry, and Documentation of all Goods Movements.
    Carrying out the Physical Inventory.
    Look into the following Business Process & Inventory Analyses diagram:


    Inventory Management System


    Integration with other Modules:

    Inventory Management is part of Materials Management and is fully integrated with the following other
    modules in SAP, so we can easily track all the information/transactions of
    Materials Management:
    Production Planning
    Sales & Distribution
    Quality Management
    Plant Maintenance
    Logistics Information System

    How to Monitor?

    In any organization, everyone from shop-floor executives to top managers needs good reports to monitor all
    Inventory Management activities. It is very difficult to produce such MM reports in ECC (an OLTP
    system), so we implement SAP BI/BW and generate the reports there (an OLAP system).
    Especially if you want to generate reports for all Material and Plant combinations with MTD
    and YTD calculations, this is very difficult in ECC (the OLTP server).

    Inventory Data Flow Diagram from ECC to BI/BW:

    To cover all kinds of Inventory Management reporting, SAP provides the 0IC_C03 InfoCube, which uses
    three DataSources; see the figure below.


    In the coming articles we are going to discuss the above data flow, i.e. configuring the DataSources in ECC, mapping to the InfoCube in BI/BW, data loads to the InfoCube, and reports.

    Some important Reports Details: 

    A few important report names are given below:
    Stock Ledger Report
    Stock Overview Report.
    Demand Supply Match Report.
    Valuated Stock Report.
    Inventory Aging Report.
    Inventory Turnover Report.
    Stock in Transit Report.
    Scrap Report.
    Blocked Stock Report.
    Days' Supply Report.
    Receipt and Issue Consignment Stock at Customer Report.

    Conclusion & Benefits:

    For optimal inventory management processes, we need robust functionality for managing our logistics
    facilities. Support for Inventory Management helps us record and keep track of each material on the basis of both
    quantity and value.
    We can reduce the cost for warehousing, transportation, order fulfillment, and material handling – while
    improving customer service. We can significantly improve inventory turns, optimize the flow of goods, and
    shorten routes within our warehouse or HUB/distribution center. Additional benefits of inventory management
    include improved cash flow, visibility, and fast and good decision making.
    Inventory management offers one of the largest opportunities in supply chain management. End-to-end
    inventory visibility increases buyer purchasing power, minimizes inventory levels, ensures product balance,
    and ultimately reduces warehousing costs.