Performance confirmation procedure
The wall you inevitably hit when building add-on programs is performance, and if a problem is found late, it can force major rework.
It is best to test execution with a data volume comparable to the production system as early as possible. When a performance problem is suspected, confirm it with runtime analysis (transaction code SE30, or its successor SAT). Enter the program or transaction name and execute it, and you get a detailed breakdown of the execution time.
The analysis shows the execution time of each SQL statement and of each FORM routine. If all processing is written directly in START-OF-SELECTION without FORM routines, you cannot tell where the time is going, so it is better to split the processing into FORMs.
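A minimal sketch of that structure (the report and FORM names here are hypothetical):

REPORT zperf_sketch.

START-OF-SELECTION.
  PERFORM select_data.
  PERFORM edit_data.
  PERFORM display_data.

* Each FORM now appears as its own line in the runtime analysis.
FORM select_data.
  "SELECT statements go here
ENDFORM.

FORM edit_data.
  "editing logic goes here
ENDFORM.

FORM display_data.
  "output goes here
ENDFORM.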
The analysis tells you whether the time is spent mostly in SQL or elsewhere, so let's look at what can be done in each case.
SQL execution time is too long!
- Does the SELECT clause fetch only the fields that are needed?
Using * (asterisk) is tempting because it saves effort, but list the required fields explicitly wherever possible to reduce the amount of data transferred.
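As a minimal sketch, using the accounting document header BKPF and a hypothetical selection value:

* Slow variant (shown for contrast): transfers every column.
* SELECT * FROM bkpf INTO TABLE lt_doc WHERE bukrs = '1000'.

* Better: fetch only the fields the program actually uses.
DATA: BEGIN OF ls_doc,
        bukrs TYPE bkpf-bukrs,
        belnr TYPE bkpf-belnr,
        gjahr TYPE bkpf-gjahr,
      END OF ls_doc,
      lt_doc LIKE STANDARD TABLE OF ls_doc.

SELECT bukrs belnr gjahr
  FROM bkpf
  INTO TABLE lt_doc
  WHERE bukrs = '1000'.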
- It's slow even when a key or index is specified!
Are other, non-indexed fields also in the WHERE clause? Check whether performance improves if you select using only the key or index fields, load the result into an internal table, and delete the unneeded rows there afterwards.
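A sketch of that pattern, again with hypothetical selection values:

DATA: BEGIN OF ls_doc,
        bukrs TYPE bkpf-bukrs,
        belnr TYPE bkpf-belnr,
        gjahr TYPE bkpf-gjahr,
        blart TYPE bkpf-blart,
      END OF ls_doc,
      lt_doc LIKE STANDARD TABLE OF ls_doc.

* The WHERE clause uses key fields only, so the index can be used.
SELECT bukrs belnr gjahr blart
  FROM bkpf
  INTO TABLE lt_doc
  WHERE bukrs = '1000'
    AND gjahr = '2023'.

* The non-indexed condition is applied in memory afterwards.
DELETE lt_doc WHERE blart <> 'SA'.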
- JOINs are slow! Or: a single-table SELECT is slow!
Both cases occur, so there is no single answer. If the SQL uses a JOIN, try removing the JOIN and using FOR ALL ENTRIES instead to see whether performance improves. Conversely, if the SQL has no JOIN, check whether there is a master table it could be joined with; this tends to work when the master can be joined on its key or an index using fields you are already selecting.
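Here is what the FOR ALL ENTRIES variant can look like, with BKPF as the driver table and BSEG as the item table. The empty-table check matters: with an empty driver table, FOR ALL ENTRIES would select every row.

DATA: BEGIN OF ls_doc,
        bukrs TYPE bkpf-bukrs,
        belnr TYPE bkpf-belnr,
        gjahr TYPE bkpf-gjahr,
      END OF ls_doc,
      lt_doc LIKE STANDARD TABLE OF ls_doc,
      BEGIN OF ls_item,
        bukrs TYPE bseg-bukrs,
        belnr TYPE bseg-belnr,
        gjahr TYPE bseg-gjahr,
        buzei TYPE bseg-buzei,
      END OF ls_item,
      lt_item LIKE STANDARD TABLE OF ls_item.

* lt_doc is assumed to be filled by a previous SELECT on BKPF.
IF lt_doc IS NOT INITIAL.
  SELECT bukrs belnr gjahr buzei
    FROM bseg
    INTO TABLE lt_item
    FOR ALL ENTRIES IN lt_doc
    WHERE bukrs = lt_doc-bukrs
      AND belnr = lt_doc-belnr
      AND gjahr = lt_doc-gjahr.
ENDIF.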
- Create an index
If an SAP Note or similar recommends the index, creating it is usually unproblematic. In other cases, first check whether the issue can be handled without adding an index, since every secondary index adds overhead to table updates.
Other processing takes too long!
Most problems here involve LOOP processing. Let's check a few points.
- Is the number of rows processed in the LOOP appropriate?
Check whether the loop is working through rows it does not need. If certain rows will be deleted in the end anyway, delete them all before the LOOP, as in the sketch below.
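A minimal sketch; the row type ty_data and its deleted flag are made up for illustration:

TYPES: BEGIN OF ty_data,
         belnr   TYPE belnr_d,
         deleted TYPE c LENGTH 1,
       END OF ty_data.
DATA: lt_data TYPE STANDARD TABLE OF ty_data,
      ls_data TYPE ty_data.

* Drop the rows that would be thrown away anyway, once, up front.
DELETE lt_data WHERE deleted = 'X'.

* The LOOP now touches only rows that are actually needed.
LOOP AT lt_data INTO ls_data.
  "processing
ENDLOOP.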
- Check whether a SELECT statement is written inside a LOOP.
Run the SQL once before the LOOP and check whether the result can be stored in an internal table in advance, then read from that table inside the loop.
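A sketch of both patterns, using the company code master T001 as the lookup:

DATA: BEGIN OF ls_doc,
        bukrs TYPE bkpf-bukrs,
      END OF ls_doc,
      lt_doc LIKE STANDARD TABLE OF ls_doc,
      BEGIN OF ls_t001,
        bukrs TYPE t001-bukrs,
        butxt TYPE t001-butxt,
      END OF ls_t001,
      lt_t001 LIKE STANDARD TABLE OF ls_t001.

* Anti-pattern: one database round trip per loop pass.
* LOOP AT lt_doc INTO ls_doc.
*   SELECT SINGLE butxt FROM t001 INTO ls_t001-butxt
*     WHERE bukrs = ls_doc-bukrs.
* ENDLOOP.

* Better: one SELECT before the loop, in-memory reads inside it.
SELECT bukrs butxt
  FROM t001
  INTO TABLE lt_t001.

LOOP AT lt_doc INTO ls_doc.
  READ TABLE lt_t001 INTO ls_t001 WITH KEY bukrs = ls_doc-bukrs.
ENDLOOP.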
- Reduce the number of READ TABLE and nested LOOP operations.
For example, when looping over BSEG items and reading header data from BKPF, the same header row is fetched many times. In such cases, check whether the header can be read only when the document key changes, as sketched below.
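In this sketch both internal tables are sorted by the document key first, so the READ can also use BINARY SEARCH:

DATA: BEGIN OF ls_bseg,
        bukrs TYPE bseg-bukrs,
        belnr TYPE bseg-belnr,
        gjahr TYPE bseg-gjahr,
        buzei TYPE bseg-buzei,
      END OF ls_bseg,
      lt_bseg LIKE STANDARD TABLE OF ls_bseg,
      ls_prev LIKE ls_bseg,
      BEGIN OF ls_bkpf,
        bukrs TYPE bkpf-bukrs,
        belnr TYPE bkpf-belnr,
        gjahr TYPE bkpf-gjahr,
        budat TYPE bkpf-budat,
      END OF ls_bkpf,
      lt_bkpf LIKE STANDARD TABLE OF ls_bkpf.

SORT: lt_bseg BY bukrs belnr gjahr,
      lt_bkpf BY bukrs belnr gjahr.

LOOP AT lt_bseg INTO ls_bseg.
  " Read the header only when the document key changes.
  IF ls_bseg-bukrs <> ls_prev-bukrs OR
     ls_bseg-belnr <> ls_prev-belnr OR
     ls_bseg-gjahr <> ls_prev-gjahr.
    READ TABLE lt_bkpf INTO ls_bkpf
      WITH KEY bukrs = ls_bseg-bukrs
               belnr = ls_bseg-belnr
               gjahr = ls_bseg-gjahr
      BINARY SEARCH.
    ls_prev = ls_bseg.
  ENDIF.
  " ls_bkpf now holds the header for the current item.
ENDLOOP.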
- When using READ TABLE, try BINARY SEARCH, or switch to a SORTED TABLE.
BINARY SEARCH pays off especially when the internal table being read contains many rows.
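Both variants in a sketch, with a pared-down document structure and hypothetical key values:

TYPES: BEGIN OF ty_bkpf,
         bukrs TYPE bkpf-bukrs,
         belnr TYPE bkpf-belnr,
         gjahr TYPE bkpf-gjahr,
       END OF ty_bkpf.
DATA: lt_bkpf TYPE STANDARD TABLE OF ty_bkpf,
      ls_bkpf TYPE ty_bkpf,
      " A SORTED TABLE keeps its key order automatically, and
      " READ ... WITH TABLE KEY searches it efficiently by itself.
      lt_bkpf_sorted TYPE SORTED TABLE OF ty_bkpf
                     WITH UNIQUE KEY bukrs belnr gjahr.

* Standard table: sort once, then every READ can use a binary search.
SORT lt_bkpf BY bukrs belnr gjahr.
READ TABLE lt_bkpf INTO ls_bkpf
  WITH KEY bukrs = '1000'
           belnr = '0100000001'
           gjahr = '2023'
  BINARY SEARCH.

* Sorted table: no explicit SORT or BINARY SEARCH needed.
READ TABLE lt_bkpf_sorted INTO ls_bkpf
  WITH TABLE KEY bukrs = '1000'
                 belnr = '0100000001'
                 gjahr = '2023'.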
- Use field symbols.
When the number of rows is large, it may well be worth it.
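Assigning rows via a field symbol avoids copying each row into a work area, and changes made through it land in the table directly, without MODIFY. A sketch with a hypothetical row type:

TYPES: BEGIN OF ty_row,
         wrbtr TYPE bseg-wrbtr,
         flag  TYPE c LENGTH 1,
       END OF ty_row.
DATA lt_rows TYPE STANDARD TABLE OF ty_row.
FIELD-SYMBOLS <ls_row> TYPE ty_row.

LOOP AT lt_rows ASSIGNING <ls_row>.
  " No row copy; the change takes effect in the table immediately.
  <ls_row>-flag = 'X'.
ENDLOOP.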