Thursday, 26 December 2013

Data Transfer Process (DTP)

DTP determines the process for transfer of data between two persistent objects within BI.
As of SAP NetWeaver 7.0, an InfoPackage loads data from a source system only up to the PSA. It is the DTP that determines the further loading of data thereafter.
Use
  • Loading data from PSA to InfoProvider(s).
  • Transfer of data from one InfoProvider to another within BI.
  • Data distribution to a target outside the BI system, e.g. Open Hubs.
In the process of transferring data within BI, the Transformations define the mapping and the logic for updating data to the data targets, whereas the extraction mode and update mode are determined by the DTP.
NOTE: DTPs are used to load data within the BI system only, except in the scenario of Virtual InfoProviders, where a DTP can trigger a direct data fetch from the source system at run time.
Key Benefits of using a DTP over conventional IP loading
  1. DTP follows a one-to-one mechanism between a source and a target, i.e. one DTP feeds data to only one data target, whereas an InfoPackage loads data to all data targets at once. This is one of the major advantages over the InfoPackage method, as it enables many of the other benefits below.
  2. Isolation of Data loading from Source to BI system (PSA) and within BI system. This helps in scheduling data loads to InfoProviders at any time after loading data from the source.
  3. Better Error handling mechanism with the use of Temporary storage area, Semantic Keys and Error Stack.
Extraction
There are two types of Extraction modes for a DTP – Full and Delta.

Full:

Extraction mode Full works in the same way as update mode Full in an InfoPackage.
It selects all the data available in the source based on the Filter conditions mentioned in the DTP.
When the source of the data is any one of the InfoProviders below, only the Full extraction mode is available.
  • InfoObjects
  • InfoSets
  • DataStore Objects for Direct Update
Delta is not possible when the source is any one of the above.

Delta:
Unlike an InfoPackage, delta transfer using a DTP doesn’t require an explicit initialization. When a DTP is executed with extraction mode Delta for the first time, all requests existing in the source up to that point are retrieved and the delta is initialized automatically.
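For example, if the PSA already contains requests 1 to 5 when the delta DTP runs for the first time, all five requests are transferred to the target and the delta is thereby initialized; the next execution transfers only the requests loaded into the PSA after that point.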



The below 3 options are available for a DTP with Extraction Mode: Delta.
  • Only Get Delta Once.
  • Get All New Data Request By Request.
  • Retrieve Until No More New Data.
I. Only Get Delta Once:
If this indicator is set, a snapshot scenario is built. The Data available in the Target is an exact replica of the Source Data.
Scenario:
Let us consider a scenario wherein data is transferred from a flat file to an InfoCube. The target needs to contain the data from the latest flat file load only: each time a new request is loaded, any previous request loaded with the same selection criteria has to be removed from the InfoCube automatically. This is necessary whenever the source delivers only the last status of the key figures, i.e. a snapshot of the source data.
Solution – Only Get Delta Once
A DTP with a full load would satisfy the requirement. However, a full DTP is not recommended here, because it loads all the requests from the PSA regardless of whether they were already loaded to the target. So, in order to avoid duplication of data from the full loads, we would have to schedule a PSA deletion every time before the full DTP is triggered again.
‘Only Get Delta Once’ does this job in a much more efficient way, as it loads only the latest request (delta) from the PSA to the data target. The procedure is:
  1. Delete the previous Request from the data target.
  2. Load data up to PSA using a Full InfoPackage.
  3. Execute DTP in Extraction Mode: Delta with ‘Only Get Delta Once’ checked.
The above 3 steps can be incorporated in a Process Chain which avoids any manual intervention.
II. Get All New Data Request by Request:
If you set this indicator in combination with ‘Retrieve Until No More New Data’, a DTP gets data from one request in the source. When it completes processing, the DTP checks whether the source contains any further new requests. If the source contains more requests, a new DTP request is automatically generated and processed.
NOTE: If ‘Retrieve Until No More New Data’ is unchecked, the above option automatically changes to ‘Get One Request Only’. This would in turn get only one request from the source.
Also, once the DTP is activated, the option ‘Retrieve Until No More New Data’ no longer appears in the DTP maintenance.

Package Size

The number of Data records contained in one individual Data package is determined here.
Default value is 50,000.
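For example, with the default setting of 50,000, a source request containing 230,000 records would be split into five data packages: four packages of 50,000 records each and one package of 30,000 records.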

Filter

The selection criteria for fetching data from the source are determined / restricted by the filter.



We have the following options to restrict a value / range of values:
  • Multiple selections
  • OLAP variable
  • ABAP routine
A tick mark on the right of the Filter button indicates that filter selections exist for the DTP.
Semantic Groups
Choose Semantic Groups to specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, define key fields. Data records that have the same key are combined in a single data package.
This setting is only relevant for DataStore objects with data fields that are overwritten. This setting also defines the key fields for the error stack. By defining the key for the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected.
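An illustrative case: suppose two delta records for the same sales order arrive, first with status 'open' and then with status 'closed', and the first one turns out to be faulty. Because the order number is part of the semantic key, the second record is not updated on its own but is held back together with the faulty one in the error stack; after the correction, the error DTP updates both in their original order, so the overwrite in the DSO ends with the correct final status.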
A tick mark on the right side of the ‘Semantic Groups’ button indicates that semantic keys exist for the DTP.



Update

Error Handling

  • Deactivated:
If an error occurs, the error is reported at the package level and not at the data record level.
The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.
This results in faster processing.
  • No Update, No Reporting:
If errors occur, the system terminates the update of the entire data package. The request is not released for reporting. The incorrect record is highlighted so that the error can be assigned to the data record.
The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.
  • Valid Records Update, No Reporting (Request Red):
This option allows you to update valid data. This data is only released for reporting after the administrator checks the incorrect records that are not updated and manually releases the request (by a QM action, that is, setting the overall status on the Status tab page in the monitor).
The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.
  • Valid Records Update, Reporting Possible (Request Green):
Valid records can be reported immediately. Automatic follow-up actions, such as adjusting the aggregates, are also carried out.
The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.

Error DTP

Erroneous records in a DTP load are written to a stack called Error Stack.
The error stack is a request-based table (PSA table) into which erroneous data records from a data transfer process (DTP) are written. The error stack is based on the source of the DTP (PSA, DSO or InfoCube); that is, records from the source are written to the error stack.
In order to get this data into the data target, we need to correct the data records in the error stack and manually run the error DTP.
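For example, if a load of 100,000 records contains 25 faulty ones and error handling is set to one of the 'Valid Records Update' options, the 99,975 valid records reach the target, the 25 faulty records land in the error stack, and after correcting them there a run of the error DTP posts just those 25 records to the same data target.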
Execute



Processing Mode
Serial Extraction, Immediate Parallel Processing:
A request is processed in a background process when a DTP is started in a process chain or manually.

Serial in dialog process (for debugging):
A request is processed in a dialog process when it is started in debug mode from DTP maintenance.
This mode is ideal for simulating the DTP execution in Debugging mode. When this mode is selected, we have the option to activate or deactivate the session Break Points at various stages like – Extraction, Data Filtering, Error Handling, Transformation and Data Target updating.
You cannot start requests for real-time data acquisition in debug mode.
Debugging Tip:
When you want to debug a DTP, you cannot set a session breakpoint in the editor where you write the ABAP code (e.g. the DTP filter). You need to set the session breakpoint(s) in the generated program, as shown below:



No data transfer; delta status in source: fetched:
This processing mode is available only when the DTP is operated in delta mode. It is similar to a delta initialization without data transfer in an InfoPackage.
In this mode, the DTP executes directly in Dialog. The request generated would mark the data found from the source as fetched, but does not actually load any data to the target.
We can choose this mode even if the data has already been transferred previously using the DTP.
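For example, if the PSA already holds several requests that you do not want in the target, executing the DTP once in this mode marks all of them as fetched without transferring anything; the next regular delta execution then picks up only the requests that arrive afterwards.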

Delta DTP on a DSO
There are special extraction options when the data is sourced from a DSO to another data target.



  • Active Table (with Archive)
The data is read from the DSO active table and from the archived data.
  • Active Table (Without Archive)
    The data is only read from the active table of a DSO. If there is data in the archive or in near-line storage at the time of extraction, this data is not extracted.
  • Archive (Full Extraction Only)
    The data is only read from the archive data store. Data is not extracted from the active table.

  • Change Log
    The data is read from the change log and not the active table of the DSO.

Friday, 30 August 2013

DSO Job Log and Activation Parameters

1. DataStore Objects in BW 7.x

In BI 7.x you have three different kinds of DataStore objects (DSO): standard, write-optimized and direct update. A standard DSO consists of a new data table, an active data table and a change log table, which records the changes. Write-optimized DSOs and DSOs for direct update consist of an active table only.

In BI 7.x the background process by which data in standard DataStore objects is activated has changed in comparison to BW 3.5 or prior. In this article I will explain the DSO activation job log and the settings / parameters of transaction RSODSO_SETTINGS. I will describe how the parameters you can set in this transaction influence DSO activation performance. I will not describe the different activation types.

2. Manual activation of a request

If you load a new request into your standard DSO with a Data Transfer Process (DTP), the data is written to the new data table. You can activate the request manually or within a process chain. If you activate requests manually, you get the following popup screen:


Picture 1: Manual DSO Activation


The button "Activate in Parallel" opens the settings for parallel activation. In this popup you select either dialog or background processing; for background you also select the job class and server. For both you define the number of jobs for parallel processing. By default it is set to '3'. This means you have two jobs that can be scheduled in parallel to activate your data, the BIBCTL* jobs. The third job is needed for controlling the activation process and scheduling the processes: that's the BI_ODSA* job.

3. BI_ODSA* and BIBCTL* Jobs

The main job for activating your data is "BI_ODSAxxxxxxxxxxxxxxxxxxxxxxxxx" with a unique 25-character GUID at the end. Let's have a look at the job log in SM37.


Picture 2: Job log for BI_ODSA* job

Activating data is done in 3 steps. First, the status of the request in the DSO is checked to see whether it can be activated (marked green in the log); if there is another yellow or red request before this request in the DSO, activation terminates. In a second step the data is checked against archived data (marked blue in the log). In a third step the activation of the data takes place (marked red in the log). During step 3 a number of sub-jobs "BIBCTL_xxxxxxxxxxxxxxxxxxxxxxxx", each with a unique 25-character GUID at the end, are scheduled. This is done to get higher parallelism and thus better performance.

But often the opposite seems to happen: you set a high degree of parallelism and start activation, yet activating even a few data records takes a long time. In Reference 3 you will find some general hints for DataStore performance.


4. Transaction for DSO settings

You can view and change these DSO settings with "Goto -> Customizing DataStore" in your DSO manage view. You’re now in transaction RSODSO_SETTINGS. In Reference 4 at the end of this document you can find a note on the runtime parameters of a DSO.

Picture 3: RSODSO_SETTINGS

As you can see, you can make cross-DataStore settings or DataStore-specific settings. I choose cross-DataStore and click "Change" for now. A new window opens which is divided into three sections:

Picture 4: Parameters for cross-datastore settings

Section 1 for activation, section 2 for SID generation and section 3 for rollback. Let's have a look at the sections one after another.

5. Settings for activation

In the first section you can set the data package size for activation, the maximum wait time and change process parameters.

If you click on the button "Change process params" in part "Parameter for activation" a new popup window opens:


Picture 5: Job parameters

The parameters described here are the default '3' processes provided for parallel processing. By default, background activation is also chosen. You can now save and transport these settings to your QAS or productive system. But be careful: these settings are valid for every DSO in your system. As you can see from picture 4, the number of data records per data package is set to 20,000 and the wait time for process is set to 300; this may differ in your system. What does this mean? It simply means that all the records which have to be activated are split into smaller packages with a maximum of 20,000 records per package. A new "BIBCTL*" job is scheduled for each data package.

The main activation job calculates the number of "BIBCTL*" jobs to be scheduled with this formula:
<maximum number of jobs> = max( <parallel processes for activation>, <number of records in new data> div <maximum package size> + 2 )

It is the maximum of the number of processes available for activation and the integer part of the number of records divided by the maximum package size, plus 2: one process for the fractional part of the division and one control process for your BI_ODSA* job. So what happens if you have only 10,000 records to activate and four processes for activation? The result of the formula, max( 4, 2 ), is 4. Since one process acts as the control job, your BW schedules 3 BIBCTL* jobs and splits the 10,000 records across them, roughly 3,333 records each.
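A second, purely illustrative example: with 1,000,000 records in the new data table, the default package size of 20,000 and 6 parallel processes, the formula gives max( 6, 1,000,000 div 20,000 + 2 ) = max( 6, 52 ) = 52, i.e. up to 52 BIBCTL* jobs are scheduled over the course of the activation, with 5 of them running in parallel next to the controlling BI_ODSA* job.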

The default setting of 20,000 records can be enough for small DSOs or delta loads where you expect only little data per load. But if you have mass data, like in an initial load for CO-PA with millions of records, you should increase this parameter to speed up your CO-PA activation. Keep in mind this rule of thumb taken from Note 1392715:

<number of records to activate at once> = <number of requests> * <sum of all records in these requests> / <package size of the activation step> <= 100,000

This rule is implemented by note 1157070.
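Plugging illustrative numbers into the rule as stated: with 5 requests containing 400,000 records in total and an activation package size of 20,000, the value is 5 * 400,000 / 20,000 = 100, well below the 100,000 limit.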

So if you have only a couple of records to activate, it can make sense to set the number of parallel processes to 2 instead of 3. For mass data with an expected volume of millions of records you can set the number of parallel processes to a higher value, for example 6, so that you have 5 processes for activation and 1 control process. This parameter depends on your machine size and memory.

In note 1392715 you also find some common SAP recommendations for your system and other useful performance hints:

  • Batch processes = (all processes / 2) + 1
  • Wait time in sec. = 3 * rdisp/max_wprun_time
  • rdisp/max_wprun_time is maintained in RZ11 (see Note 25528)
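As a worked example: on an application server with 16 work processes, these recommendations give (16 / 2) + 1 = 9 batch processes, and with rdisp/max_wprun_time at its usual default of 600 seconds a wait time of 3 * 600 = 1,800 seconds.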

The maximum wait time for process defines how long a job is allowed to run and activate data. If the activation job takes longer than defined, BW assumes system or database problems and aborts the job. If you have a high number of data packages you should set the wait time to a higher value; if you have few data packages, set a lower value.

One little trick: if you simply count the number of your BIBCTL* jobs in the job log and multiply it by the maximum package size, you know how many records have already been activated and how many records still have to be activated. As a further approximation, if you take the time each BIBCTL* activation job runs and multiply it by the maximum number of jobs, you can estimate how long your activation will run.
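As an illustrative calculation: if 30 BIBCTL* jobs already appear in the job log and the maximum package size is 20,000, roughly 600,000 records have been activated so far; and if each BIBCTL* job runs for about 20 seconds and the formula above yields 52 jobs in total, this approximation puts the complete activation at around 52 * 20 ≈ 1,040 seconds.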

You can change the parameters for data package size and maximum wait time for process, and they will be used for the next load.

6. Settings for SID generation

In the section SID generation you can likewise set parameters for the maximum package size and the wait time for processes. With the button "Change process params" the popup described in picture 5 appears; in this popup you define how many processes will be used for SID generation in parallel, again with your default value. Minimum package size describes the minimum number of records that are bundled into one package for SID generation.

With the SAP Layered Scalable Architecture (LSA) in mind, you need SID generation for a DSO only if you want to report on it and have queries built on it. Even if you have queries built on top of a DSO without SID generation, the missing SIDs will be generated at query execution time, which slows down query execution. For more information on LSA you can find a really good webinar by Jürgen Haupt in the references at the end of the document.

Unfortunately, SID generation is set as the default when you create a DSO. My recommendation is: switch off SID generation for any DSO! If you use the DataStore object as the consolidation level, SAP recommends that you use the write-optimized DataStore object instead. This makes it possible to provide data in the Data Warehouse layer 2 to 2.5 times faster than with a standard DataStore object with unique data records and without SID generation! See the performance tips for details.


The saving in runtime is influenced primarily by the SID determination. Other factors that have a favorable influence on the runtime are a low number of characteristics and a low number of disjointed characteristic attributes.

7. Settings for Rollback

Finally, the last section describes rollback. Here you set the maximum wait time for rollback processes, and with the button "Change process params" you set the number of processes available for rollback. If anything goes wrong during activation, e.g. your database runs out of table space or an error occurs during SID generation, a rollback is started and your data is reset to the state before activation. The most important parameter is the maximum wait time for rollback: if the time is exceeded, the rollback job is cancelled, which could leave your DSO in an unstable state. My recommendation: set this parameter to a high value. If you have a large amount of data to activate, you should allow at least double the maximum wait time for activation as the rollback wait time, so that your database has enough time to execute the rollback and reset your DSO to the state before activation started.

Button "Save" saves all your cross-datastore settings.

8. DataStore-specific settings

For a DataStore-specific setting you enter your DSO in the input field, as you can see from picture 3. With this local DSO setting you overwrite the global DSO settings for the selected DSO. Especially if you expect to have very large DSOs with a lot of records, you can change your parameters here. If you press the button "Change process params", the same popup opens as under the global settings, see picture 5.

9. Activation in Process chains

I explained the settings for manual activation of requests in a standard DSO. For process chains you have to create a variant for DSO activation as a step in your chain, see picture 6. In this variant you can set the number of parallel jobs for activation accordingly with the button "Parallel Processing".

Picture 6: Process variant DSO activation

Other parameters for your standard DSO are taken from the global or local DSO settings in transaction RSODSO_SETTINGS during the run of the process chain and activation.


Wednesday, 28 August 2013

BEx Reporting on 12 Months Rolling Periods using Posting Period (0FISCPER3)


Introduction


Here is the description of the scenario. Your organization needs to report on the last one year of data, but on a rolling period basis. That means the user will input any month/year combination and, starting from that month/year, you will have to report on the previous 12 months. And it is not straightforward, because instead of fiscal year/period (0FISCPER), your InfoProvider contains fiscal year (0FISCYEAR) and posting period (0FISCPER3). So below are the steps to follow to successfully achieve the desired result.


Frontend Setting

First of all you need to design your query in such a way that it provides the right platform to achieve the desired output. The basic requirement for the report is that the user inputs at least one month/year combination. For this you will require two user-entry variables. Create them as below.

Create variables for posting period (0FISCPER3)


Create a variable named ZPER_P0 for posting period to store the initial value input by user as per below specifications using create variable wizard.
  1. Variable Type: Characteristic Value
  2. Processing Type: User Entry/Default Value
  3. Variable Entry: Mandatory
  4. Ready for Input selected
 
Now, create another 11 variables named ZPER_P1 to ZPER_P11 for posting period to store the rest of eleven values calculated automatically as per below specifications using create variable wizard.
  1. Variable Type: Characteristic Value
  2. Processing Type: Customer Exit
  3. Variable Entry: Mandatory
  4. Ready for Input NOT selected

Create variables for fiscal year (0FISCYEAR)

Create a variable named ZFYR_Y0 for fiscal year to store the initial value input by user as per below specifications using create variable wizard.
  1. Variable Type: Characteristic Value
  2. Processing Type: User Entry/Default Value
  3. Variable Entry: Mandatory
  4. Ready for Input selected
 
Now, create another 11 variables named ZFYR_Y1 to ZFYR_Y11 for fiscal year to store the rest of eleven values calculated automatically as per below specifications using create variable wizard.
  1. Variable Type: Characteristic Value
  2. Processing Type: Customer Exit
  3. Variable Entry: Mandatory
  4. Ready for Input NOT selected
This finishes our front-end setting. Now it is time to fill those customer exit variables; follow the backend setting section below.

Backend Setting


Now we have created the necessary variables to report on rolling periods: one user-entry variable each for posting period and fiscal year, and eleven customer exit variables each for posting period and fiscal year. To fill those customer exit variables, we now need to write code in the enhancement RSR00001 (Customer Exit Global Variables in Reporting). To accomplish this task, basic knowledge of ABAP is required. Follow the steps below to fill the values for the variables.
  1. Create a project in CMOD (if you do not have any)
  2. Assign the enhancement RSR00001 to the project
  3. Save and activate the project
  4. Go to SMOD transaction
  5. Select RSR00001 as enhancement
  6. Select components and click Display
  7. Double click on EXIT_SAPLRRS0_001 function module
  8. Double click on ZXRSRU01 include
 
A code editor screen will appear; click the Change button icon to enable the code editor for input.
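Before filling the individual variables, it helps to see the overall shape of the include. The following is only a minimal sketch (assuming the variable names created above, plus working variables such as nper, lastper, l_fiscyear and l_posper_rng that the sample snippets below rely on): each customer exit variable gets its own WHEN branch in ZXRSRU01, and the logic runs at i_step = 2, i.e. after the user has filled the ready-for-input variables ZPER_P0 and ZFYR_Y0.

data: loc_var_range like rrrangeexit,     "one row of i_t_var_range
      l_posper_rng  like rrrangeexit,
      l_s_range     type rsr_s_rangesid,  "one row of e_t_range
      nper          type i,               "working variable for the period arithmetic
      lastper(3)    type n,               "3-digit posting period, e.g. '002'
      l_fiscyear(4) type n.               "4-digit fiscal year, e.g. '2013'

case i_vnam.

  when 'ZPER_P1'.        "posting period rolled back by 1 month
    if i_step = 2.       "step 2 = after the ready-for-input variables are filled
*     <logic from section 'Sample code for ZPER_P1' goes here>
    endif.

  when 'ZFYR_Y1'.        "fiscal year belonging to ZPER_P1
    if i_step = 2.
*     <logic from section 'Sample code for ZFYR_Y1' goes here>
    endif.

* ... repeat for ZPER_P2 / ZFYR_Y2 up to ZPER_P11 / ZFYR_Y11 ...

endcase.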


Logic for filling Posting Period Values


Now we want to fill the remaining eleven posting period variables ZPER_P1 to ZPER_P11 in a rolling-back fashion. That means if the user inputs the value of the posting period variable at the selection screen as ‘003’ (March), then ZPER_P1 should hold ‘002’ (February), ZPER_P2 should hold ‘001’ (January), ZPER_P3 should hold ‘012’ (December of the previous year) and so on.
To accomplish the above task logic is given below.
Step 1: Read the value the user entered on the selection screen for the first posting period variable, i.e. ZPER_P0
Step 2: Subtract the index number of the variable currently being processed from the user input value (refer to the code)
Step 3: If the result is less than 1, then add 12 to the result
Step 4: Assign the result as the value of the customer exit posting period variable currently being processed

Sample code for ZPER_P1 posting period variable


read table i_t_var_range into loc_var_range with key vnam = 'ZPER_P0'. "(Reference to step 1)
if sy-subrc = 0.
  nper = loc_var_range-low.     "posting period entered by the user, e.g. '003'
endif.
nper = nper - 1.                "(Reference to step 2, 1 is subtracted because this is ZPER_P1)
if nper < 1.
  nper = nper + 12.             "(Reference to step 3)
endif.
lastper = nper.                 "back to the 3-digit format, e.g. '002'
l_s_range-sign = 'I'.
l_s_range-opt  = 'EQ'.
l_s_range-low  = lastper.       "(Reference to step 4)
append l_s_range to e_t_range.


Logic for filling Fiscal Year Values

At this point we have correctly filled the posting period values, but what about the corresponding year values? We also need to fill the fiscal year values correctly. For example, if the month changes from January to December, then the year should also change, e.g. from 2008 to 2007. In the following section we will see how to populate the fiscal year variables ZFYR_Y1 to ZFYR_Y11.
To accomplish the above task, logic is given below.
Step 1: Read the value the user entered on the selection screen for the first fiscal year variable, i.e. ZFYR_Y0
Step 2: Read the value the user entered on the selection screen for the first posting period variable, i.e. ZPER_P0
Step 3: Subtract the index number of the variable currently being processed from the user input value of the posting period (refer to the code)
Step 4: If the result of the posting period subtraction is less than 1, then reduce the fiscal year value by 1 (the fiscal year is only ever reduced by 1, for all variables created on fiscal year)
Step 5: Assign the new fiscal year value to the customer exit fiscal year variable currently being processed

Sample code for ZFYR_Y1 fiscal year variable

read table i_t_var_range into loc_var_range with key vnam = 'ZFYR_Y0'. "(Reference to step 1)
if sy-subrc = 0.
  l_fiscyear = loc_var_range-low.
  read table i_t_var_range into l_posper_rng with key vnam = 'ZPER_P0'. "(Reference to step 2)
  if sy-subrc = 0.
    nper = l_posper_rng-low.
    nper = nper - 1.               "(Reference to step 3, 1 is subtracted because this is ZFYR_Y1)
    if nper < 1.                   "(Reference to step 4)
      l_fiscyear = l_fiscyear - 1. "(Reference to step 5)
    endif.
  endif.
endif.
l_s_range-sign = 'I'.
l_s_range-opt  = 'EQ'.
l_s_range-low  = l_fiscyear.
append l_s_range to e_t_range.


Create similar code for the rest of the posting period and fiscal year variables. Do not forget the index number of the variable you are processing, e.g. the index number of variable ZPER_P3 is 3.
Save and activate the code and project and now you are ready to report on rolling periods.

Enhancements
  • You can also use a function module or subroutine to avoid repeating the block of logic, e.g. the subtraction from the posting period value (see the sketch after this list)
  • You can also use text variables to display the descriptive titles for month names like February, January etc. instead of pure numbers like 2, 1 etc. in the report output.
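As an illustration of the first enhancement above, the rolling-back arithmetic could be factored out into a small helper routine instead of repeating it in every WHEN branch. The subroutine below is only a hypothetical sketch (the name roll_back_period and its parameters are not part of the original solution); it could be implemented, for example, in a customer include of the exit's function group or as a small function module, and be called from both the posting period and the fiscal year branches.

* Hypothetical helper: roll the user-entered period/year back by i_index months.
* Assumes 12 posting periods per fiscal year, as in the logic above.
form roll_back_period using    i_index  type i
                      changing c_period type n    "3-digit posting period, e.g. '003'
                               c_year   type n.   "4-digit fiscal year, e.g. '2013'
  data l_per type i.
  l_per = c_period - i_index.
  if l_per < 1.
    l_per = l_per + 12.      "wrap around into the previous fiscal year
    c_year = c_year - 1.
  endif.
  c_period = l_per.
endform.

For the branch of ZPER_P3 / ZFYR_Y3, for example, you would read ZPER_P0 and ZFYR_Y0 into lastper and l_fiscyear and then call: perform roll_back_period using 3 changing lastper l_fiscyear.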
Inventory Management (0IC_C03) Part 2


Introduction:

To monitor Inventory Management and to take decisions on inventory controlling, management and users need reports on Inventory Management; basically they need the complete status/details of all materials, from raw materials to finished goods, i.e. the in-flow and out-flow of materials in the organization.
In the coming articles we will discuss Process Keys and Movement Types and other details.

DataSources:

A DataSource is a set of fields that provide the data for a business unit for data transfer into the BI system. From a technical viewpoint, the DataSource is a set of logically related fields that are provided to transfer data into BI in a flat structure (the extraction structure), or in multiple flat structures (for hierarchies).
There are two types of DataSource (Master Data DataSources are split into three types):
1. DataSource for Transaction data.
2. DataSource for Master data.
a) DataSource for Attributes.
b) DataSource for Texts.
c) DataSource for Hierarchies.
To fulfill industry requirements, SAP provides the following transactional DataSources for Inventory Management / Materials Management:
1. 2LIS_03_BX: Stock Initialization for Inventory Management.
2. 2LIS_03_BF: Goods Movements from Inventory Management.
3. 2LIS_03_UM: Revaluations.
All the above DataSources are used to extract the Inventory Management / Materials Management data from ECC to BI/BW.

About 2LIS_03_BX DataSource:

This DataSource is used to extract the stock data from MM Inventory Management for initialization to a BW system, to enable the functions for calculating the valuated sales order stock and the valuated project stock in BW.
See the SAP help on the 2LIS_03_BX DataSource.

About 2LIS_03_BF DataSource:

This DataSource is used to extract the material movement data from MM Inventory Management (MM-IM) consistently to a BW system. A data record belonging to extractor 2LIS_03_BF is defined in a unique way using the key fields Material Document Number (MBLNR), Material Document Year (MJAHR), Item in Material Document (ZEILE), and Unique Identification of Document Line (BWCOUNTER).
See the SAP help on the 2LIS_03_BF DataSource.

About 2LIS_03_UM DataSource:

This DataSource is used to extract the revaluation data from MM Inventory Management (MM-IM) consistently to a BW system.
See the SAP help on the 2LIS_03_UM DataSource.
To use all the above DataSources, you first need to install them in ECC. See the steps given below.

How to Install?

In ECC, using transaction code RSA5, we can install the above DataSources. SAP delivers all DataSources in the form "Delivered Version (From this System)"; once you install one, its status changes to "Active Version".


Select the DataSources one by one and click on Activate DataSources.


Post-Process DataSources:

Once we install the DataSources, they are available in RSA6. So go to transaction code RSA6 and then do the post-processing steps that are required for the DataSources.


Select a DataSource and then click on the Change icon.



Once you click on the Change icon, you go to the DataSource customer version; here you can hide unwanted fields and select the fields available for data selection in the InfoPackage. After that, save and come back.


Selection:

Set this indicator to use a field from the DataSource as the selection field in the Scheduler for BW. Due to the properties of the DataSource, not every field can be used as a selection field.

Hide Field:

To exclude a field in the extraction structure from the data transfer, set this flag. The field is then no longer available in BW for determining transfer rules and therefore cannot be used for generating transfer structures.

Inversion:

Field Inverted When Cancelled (*-1)
The field is inverted in the case of reverse posting. This means that it is multiplied by (-1). For this, the extractor has to support a delta record transfer process, in which the reverse posting is identified as such.
If the option Field recognized only in Customer Exit is chosen for the field, this field is not inverted. You cannot activate or deactivate inversion if this option is set.
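An illustrative example: if a goods movement of quantity 10 is posted and later cancelled, the extractor delivers a reversal record in which a quantity field flagged for inversion carries -10, so that the cumulated value in BW nets out to zero.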