Channel: SCN : All Content - ABAP for SAP HANA

SAP HANA Explained - Again


I was fortunate to be invited to take part in the SAP Influencer Program at SAPPHIRE NOW and ASUG Annual Conference last week.


During his keynote, Hasso Plattner described how HANA provides the opportunity to completely redesign applications in a way that is much simpler than ever before. Some people I spoke with afterwards told me this was the moment it crystallised for them that HANA is more than just another database.

 

I encourage you to watch the replay of the keynote, but I think it is worth taking some time to go through the example Hasso provided again.

 

Hasso started with something we are all familiar with – SAP Business Suite. When we post the simplest invoice transaction we need to do a couple of things. We create the invoice document in the header table – known as BKPF. And we create three entries in the segment table – known as BSEG. One entry is for the customer, one for the profit account and one for the tax record.

Screenshot 2014-06-08 13.19.22.png

Sometime later we will need to do some querying and reporting that includes this transaction data – for example, getting the balance of one of the accounts it affects. With traditional databases it is impractical to calculate the balance at runtime by summing all transaction values. This means we build aggregates to hold the current balance, which we update each time a transaction is executed. So, after inserting the transaction data, we also need to update data in tables such as...


Table   Notional Description
KNC1    Customer Account Balance
LFC1    Vendor Account Balance
GLT0    GL Account Balance

 

We also need fast access to the transactional data from various other areas so we use some secondary indexes. This means inserting records in tables such as...


Table   Notional Description
BSIS    Secondary Index for G/L Accounts
BSID    Secondary Index for Customers
BSET    Tax Data Document Segment
BSIK    Secondary Index for Vendors
BSAK    Secondary Index for Vendors (Cleared Items)
BSAS    Secondary Index for G/L Accounts (Cleared Items)
BSAD    Secondary Index for Customers (Cleared Items)

 

So now our code to post the original transaction is getting a lot more complex. All this additional work takes time, so we have to use other techniques, like passing some processing off to the update task to run asynchronously, so we can give the end user acceptable response times. Our code for the simple invoice posting transaction is now very complex: it is spread across several different places, it has to choreograph loosely related steps, support complex error handling, and so on.

 

HANA provides us with the opportunity to radically simplify our invoice posting application. Because all these aggregates can be calculated on the fly the moment they are required, there is no need for the tables that hold this data at all. The same applies to the secondary index tables. We can simply get rid of them and replace them with identically named views. This means no change is required to any of the code that reads these tables, because whether you are reading from a table or a view the SQL syntax remains the same.
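To make the table-to-view swap concrete, here is a minimal sketch in Python with sqlite3. The schemas are invented stand-ins for illustration only, not the real BSEG/GLT0 structures:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Item (segment) table holding the raw transaction data (simplified stand-in).
cur.execute("CREATE TABLE bseg (account TEXT, amount REAL)")
cur.executemany("INSERT INTO bseg VALUES (?, ?)",
                [("4711", 100.0), ("4711", 50.0), ("0815", 75.0)])

# Traditional model: a materialized aggregate table that posting code
# must keep in sync with every insert.
cur.execute("CREATE TABLE glt0 (account TEXT PRIMARY KEY, balance REAL)")
cur.execute("INSERT INTO glt0 "
            "SELECT account, SUM(amount) FROM bseg GROUP BY account")

# Simplified model: drop the aggregate table and replace it with an
# identically named view that computes the balance on the fly.
cur.execute("DROP TABLE glt0")
cur.execute("CREATE VIEW glt0 AS "
            "SELECT account, SUM(amount) AS balance FROM bseg GROUP BY account")

# Reading code is unchanged: same name, same SQL.
balance = cur.execute(
    "SELECT balance FROM glt0 WHERE account = '4711'").fetchone()[0]
print(balance)  # 150.0
```

The read statement is identical before and after the swap, which is the whole point: only the write path gets simpler.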

 

But that is not where the real advantages are. The real advantage is that all the code that creates those aggregates and secondary indexes is no longer required. Instead of calculating and updating the many aggregates the invoice transaction affects, we simply insert the header and item data into their respective tables. There is no need to create secondary index entries either. That's it, job done. We have a simpler data model, simpler code, much less code, and therefore a significantly simpler application. It is easier to build, easier to test, easier to change, and much less error prone. In Hasso's words, “there is almost nothing that can go wrong”.

Screenshot 2014-06-08 13.20.00.png

By way of example, think about what would happen if a bug were introduced into the code that calculates one of those pesky aggregates. In the traditional model this would immediately introduce errors into the data stored in the database. So a fix to the code would have to include a suitable correction program to check all the aggregate data stored in the database and correct it to eliminate the error and its effects. In the new simplified design the algorithm that calculates the aggregate can be fixed, and the correct aggregates are then available immediately.
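The difference can be sketched in a few lines of illustrative Python. The amounts and the doubling bug are invented for the example:

```python
transactions = [100.0, 50.0, 75.0]

# Traditional model: a stored running total maintained by update code.
stored_balance = 0.0
for amount in transactions:
    # Buggy update logic (it doubles every amount) corrupts the stored value.
    # Fixing the code later does NOT repair data already written, so a
    # correction program is needed.
    stored_balance += amount * 2

# Simplified model: the balance is derived from the raw data on every read,
# so fixing the calculation fixes all results at once.
def balance(txns):
    return sum(txns)  # the corrected algorithm

print(stored_balance)          # 450.0 -- corrupted, needs a correction program
print(balance(transactions))   # 225.0 -- correct as soon as the code is fixed
```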

 

Pretty simple eh?


Suggestion needed for Starting a career in HANA Field.


I completed my B.Tech in Information Technology in 2012 and was enthusiastic to pursue a career in SAP. As a technical person, I selected the ABAP module to learn and went through ABAP training in Feb 2013. After the training I was waiting for a job in SAP ABAP, and since completing the course I have stayed connected with it to learn more and more, but I have not yet had an opportunity to apply my skills. My question is: can I go for the SAP HANA module? What prerequisites are needed for this?

What will be the job opportunity for ABAPers and HANA people ?

 

 

Awaiting a good response.

 

Thanks and Regards.

Error during creation of external views


Hello,

 

I am trying to create an analytic view modelled like below:

 

AnalyticView.jpg

 

When I do a data preview from the Modeler perspective, it works fine and I can see the data. But when I switch to the ABAP perspective, consume this analytic view as an external view and try a data preview, I get an error like below:

 

AnalyticView_ExternalError.jpg

 

Can someone help out as to what could be wrong here?

 

Thanks and regards,

Manjunath

Fine-Tune the Execution of SADL-Based Gateway Services


SADL (Service Adaptation Definition Language) enables fast read access to data for scenarios on mobile and desktop applications based on HANA, by means of a query push-down. As part of the query push-down, the user's input and the application parameters are collected through consumer APIs and used to configure the request for the database. The SADL engine adds the resulting restriction to the condition for the database select (WHERE-clause). This document provides a detailed guide on how to use the query options in order to parameterize and fine-tune the execution of your SADL-based SAP Gateway Services.

View this SAP How-to Guide

Enforce Authorizations for SADL-Based Applications


SADL (Service Adaptation Definition Language) enables fast read access to data for scenarios on mobile and desktop applications based on HANA by means of a query push-down. As part of the query push-down, all users' input is collected through consumer APIs and used to configure the request for the database. Specifically, the authorization enforcement in this process is interposed between query specification by application or end user and data retrieval from the database. This document provides a detailed guide on how to use the SAP authorizations in your SADL-based applications.

View this SAP How-to Guide

Model a Gateway Service based on Business Entities through SADL


SAP NetWeaver 7.40 SP5 now provides useful tools to create OData services for read-only scenarios via a model-based approach. This gives you a great opportunity to assemble a standard, optimized implementation of a Gateway service. The attached document provides a tutorial based on the Enterprise Procurement Model (EPM). In it you will find detailed guidance on how to join business entities in order to integrate them into a service via the Gateway Service Builder.

View this SAP How-to Guide

How to Use OData Analytics in SADL-Based Services.pdf


In SAP NetWeaver 7.40 SP07 and higher, you can configure a Gateway service to return aggregated values, for example a sum, a minimum value, or a maximum value. This guide describes in detail how to annotate SADL-based Gateway services in order to use analytical features. You will learn how to:

  1. Add a property for the generated key
  2. Map properties to elements
  3. Generate runtime artifacts
  4. Provide analytical annotations
  5. Provide aggregation behavior information

View this SAP How-to Guide

How to transport ABAP for SAP HANA applications with HTC

This document gives you a compact overview of the SAP HANA Transport Container (HTC) and demonstrates how to use it. You can also access a related video here.

What is the SAP HANA Transport Container all about?

With SAP NetWeaver 7.4, numerous SAP HANA related optimizations are provided which enable developers to easily leverage the power of SAP HANA in ABAP-based applications. ABAP for SAP HANA applications – meaning applications containing ABAP and HANA development entities – can now be easily developed, updated, corrected and enhanced (Access to diverse ABAP for SAP HANA tutorials).
As is usually done for reasons of quality assurance, the different ABAP and HANA development entities have to be transported through the system landscape; typically from the development system to the consolidation/quality system, and then to the productive system.
This is where the SAP HANA Transport Container (HTC) comes into action!
HTC is an ABAP development object which is required to integrate HANA repository content into the standard Change and Transport System (CTS). As of AS ABAP 7.4, HTC is seamlessly integrated into the Transport Organizer of AS ABAP, thereby integrating the HANA repository content into CTS. It ensures an efficient delivery process for applications built out of ABAP and HANA content by means of the proven ABAP transport mechanism.
In short, ABAP for SAP HANA applications are transported through the system landscape just like any classic ABAP-based application.

Let's see how it works!

Prerequisites

  • SAP HANA Studio is installed and running
  • ABAP Development Tools for SAP NetWeaver is installed and running
  • AS ABAP 7.4 runs on SAP HANA database
  • The connection to the ABAP backend is configured in the ABAP perspective
  • The connection to the SAP HANA database is configured in the Modeler perspective

Assumption

A well-structured package hierarchy (main packages and sub-packages) has been defined for your project. The SAP HANA entities are, or will be, contained in those packages.
Info: The package system-local is meant for local objects which are not intended to be transported. By definition, all its sub-packages are also not transportable and so cannot be attached to a delivery unit. Consequently, SAP HANA entities contained in a package below system-local cannot be transported between SAP systems.

Procedure Overview

The picture below shows the main steps involved in the whole process. The development of a demo application – ABAP and HANA content - is not part of this demonstration in order to plainly focus on HTC. (Access diverse ABAP for SAP HANA tutorials here).
HTC_ProcedureOverview.png

Step-by-Step Procedure

Step 1: Create a Delivery Unit (DU) and Assign the relevant packages

Info: SAP HANA Delivery Units are application-specific and are used to group and transport repository content. Find more information about DUs in the Help under menu path Help > Help contents.
  1. Start the SAP HANA Studio and go to the Modeler perspective by selecting menu path Window > Open Perspective > Others … and choosing Modeler from the opened dialog.
    Step_1.1.png
  2. Open the Quick Launch view of the Modeler perspective - if not already opened - by selecting menu path Help > Quick Launch
  3. Select the relevant system - if not yet selected - by clicking on the Select System… button located in the upper area of the view.
    Note that the system connection must have been previously added to the Modeler perspective.
    Step_1.3.png
  4. Now, click on the link Delivery Units… under the Setup section area
    Click on the Create… button in the upper right area of the Delivery Units dialog.
    Maintain the required information in the New Delivery Unit dialog and press on OK.
    Detailed information about the different DU properties is available in the Help. You can for example open the integrated Help window by clicking on the Help icon located bottom left on the dialog.
    Step_1.4.png
  5. Assign the relevant package(s) to the DU.
    In case the packages to be assigned to the DU already exist and are not yet assigned to another DU, then just select the relevant DU and click on the Add button in the lower right area.
    You can now select the relevant packages (zdemo in my case) and confirm the action.
    If the sub-packages of the selected node have to be selected too, then make sure the appropriate checkbox is checked.
    Step_1.5.png

    Other ways to assign a package to a DU:
    • (a) The package is not yet created: In this case you just have to select the appropriate value in the Delivery Unit dropdown field in the New Package dialog.
      Step_1.5a.png
    • (b) The package already exists and is already assigned to another DU: The easiest way to maintain the new DU name is to select the relevant package and change the DU value in its Properties pane.
    • (c) The package exists, but is not yet assigned to a DU (like in the present example): The previous handling is also applicable here
      Step_1.5b.png

Step 2: Create a HANA Transport Container (HTC)

  1. Now switch to the ABAP perspective by selecting menu path Window > Open Perspective > Others..., then ABAP and confirming the opened dialog.
  2. Select the package to which the DU belongs, open its context menu (by right-clicking on it), select New > Other ABAP Repository Objects, then select entry Others > HANA Transport Container (you can filter for ‘transport’) and press Next.
    The New HANA Transport Container dialog is now opened.
  3. Enter the name of the DU in the field HANA Delivery Unit Name and press on Next.
    The name of the delivery unit will automatically be assigned to the new HTC object (e.g. zepm_oia_demo in my case).
    Info: There is a one-to-one relationship between HANA Transport Container objects and Delivery Units.
    Tip: Whenever available, you can use the Content Assist functionality of a given field by setting the cursor in it and pressing Ctrl+Space
  4. Now select the appropriate Transport Request and press on Finish.
    Step_2.4.png
    The new HTC has now been successfully created and you can have a look at its content (so-called Snapshot) on the Content tab.
  5. Activate the new object
    Step_2.5.png
    Info: HANA Transport Container objects are not updated automatically. The update of the snapshot (content) of a given HTC must be triggered manually by developers anytime the content of the underlying delivery unit has been modified.
    This is done by executing the function Take a Snapshot and Save on the Overview pane of the relevant HANA Transport Container object.
    It is strongly recommended to execute this function before releasing a transport request.

Step 3: Release the Transport Request in AS ABAP

Assumption: you are ready with your development tasks.
  1. Open a SAPGUI window for the relevant project (Ctrl+6)
  2. Go to the Transport Organizer (SE09 or SE01) and display the transport request and its tasks.
  3. Check the request consistency and release the tasks and then the transport request.
  4. Check the Export result in the Logs.
    Step_3.3.png
    You can later also check whether the import in the follow-on SAP HANA system was successful, by checking the transport logs under the step 'Execution of programs after import (XPRA)'.

Step 4: Activate the HANA Repository Content in Target Systems

Info about the automatic activation of imported HANA content:
The automatic activation of imported HANA content is controlled in the AS ABAP via the entries in the database table SNHI_DUP_PREWORK, which contains two fields SOFTWARE_COMPONENT and PREWORK_DONE. A maintenance view is available for this table in transaction SM30.
The table is used to switch the automatic activation on/off. A DU – more precisely its content – is automatically activated after its import if the field PREWORK_DONE is set (‘X’) for the software component to which the corresponding HTC belongs.
The PREWORK_DONE field should only be set in a given system if the schema mapping has been configured in the underlying SAP HANA database.
Note that the entries in table SNHI_DUP_PREWORK are customizing data which can be maintained in each system directly or transported through your system landscape using a customizing request.
You need to trigger the activation manually if the PREWORK_DONE field is not set for the software component your HTC belongs to in your target system. How to proceed in this case is described below.
  1. Go to the Quick Launch view of the Modeler perspective and select the relevant target system. (Refer to Step 1, points 1 and 2.)
    The relevant target system connection must have previously been added to the Modeler perspective.
  2. Click on link Activate... located in the Content area of the view.
    A dialog will open showing the inactive objects available in the system.
  3. Now select the relevant repository objects and activate them.
    Step_4.png
Don't be confused if you do not see the objects that were used in steps 1 to 3 in the screenshot above. The process is the same as shown.

Additional Step: Update HANA Content already transported into Target systems

Once a HANA Transport Container has been transported into target systems, it is common for the content of the underlying delivery unit to be modified in the source system: new objects (e.g. packages and HANA entities) are added or existing ones are modified.
The main question here is: How to get these updates from the source system into the target systems?
In such a scenario, you do not have to create a new HANA Transport Container object - or delete and re-create one. You simply have to update the existing one and then transport it.
How do you do this?
Proceed as follows:
  1. Go to the ABAP perspective and open the relevant ABAP project
  2. Open the HANA Transport Container related to the modified Delivery Unit
    PS: Do not forget: Both have the same name.
  3. Execute the function Take a Snapshot and Save (press the link) on the Overview pane (refer to the screenshots in Step 2.5). The HTC content (aka Snapshot) will be updated.
  4. Assign a transport request and confirm
  5. Activate the updated object

The updated HTC is now ready to be transported. You can check the update on the Content pane.

 

As already mentioned in Step 2.5, do not forget that a HANA Transport Container is not updated automatically. In order to avoid inconsistencies in the target systems, it is strongly recommended to take a snapshot before releasing the transport request.

Summary

This was a short introduction to the SAP HANA Transport Container (HTC), showing how easy it is to transport applications built out of ABAP and HANA content – so-called ABAP for SAP HANA applications – between SAP systems!


ABAP on HANA guidance for Date/Time fields


In ABAP we have the following options for storing DATE/TIME values.

 

DATS: Date field (YYYYMMDD) stored as char(8)

TIMS: Time field (hhmmss), stored as char(6)

TIMESTAMP: DEC 15,0 Counter or amount field with comma and sign

TIMESTAMPL: DEC 21,7 Counter or amount field with comma and sign

 

 

Native HANA has the datetime types: DATE, TIME, SECONDDATE, TIMESTAMP

 

None of the ABAP date types above will create a native HANA date type (it would be interesting to know if that will come some time in the future).

 

In the meantime, I would be interested to know, for custom Z column store tables, whether the older date and time (two separate fields) would be a better fit for HANA with respect to storage and associated compression?

 

There may be performance benefits to using HANA's native date comparison functions, e.g. ADD_DAYS, ADD_MONTHS, ADD_SECONDS, ADD_YEARS, COALESCE, CURRENT_DATE, CURRENT_TIME, etc.; however, as of now these won't be available to ABAP-generated tables or for use in CDS views.

 

Ideally, we want to avoid the need to convert the data type in order to do a comparison. As expected, I can see that the DATS and TIMESTAMP equivalent fields in the HANA DB (NVARCHAR and DECIMAL respectively) just do a simple comparison using their own native data types when queried from ABAP.

 

Any help appreciated..

Install SAP ECC 6 on SAP HANA


Hi,

 

Can anybody kindly advise the path to download the SAP ECC 6.16 Export (1/11)? I can only find ECC 6.06 and ECC 6.17 on the Service Marketplace.

 

Thank you!

 

-Jack

How to keep cursor alive after the call of the proxy methods


Hi All,

 

I am using a cursor for fetching data from an external view in ABAP. The cursor becomes invalid after the call to one of the proxy methods.

 

Is there any way to store/retain the cursor value?

 

 

 

Regards,

Nidhi

Reg: ECC EHP 7 system sizing for Hana


Hi Team,

 

 

I am new to the HANA system environment. We are planning to implement ECC EHP7 on SAP HANA. Can you please guide me on how the sizing should be done for the HANA system?

 

 

 

 

Regards,

R.K

Regarding sap HANA consolidation


Hello Gurus,

 

     Recently my client started an SAP HANA consolidation and SLO (System Landscape Optimization) project covering the existing SAP solutions (SRM, BW, Portal, PPM, ECC and PI). I want to know the possible ways to perform this, as we are moving to one system, one client on the SAP HANA platform. Can someone guide me through the correct process for the above migration? The main project scope is:

  • All SAP Instances (SRM, BW, Portal, PPM, ECC, PI) to run on SAP HANA Platform.
  • ECC System Consolidation, Final Goal will be one system one client on SAP HANA Platform.

 

Please give me an idea of how the above process should be performed. Can someone provide better guidance on this?

 

Thanks

Gaurav Gautam

 

Unleash the power of SAP HANA from your ABAP Custom Code- Accelerate your custom reports like never before - Experience Sharing



Introduction

 

Custom code optimization is an important step for SAP customers, especially post migration to Suite on HANA. This is an exciting phase, the phase when the customer gets to unleash the potential of HANA directly in the most critical custom reports that day-to-day business heavily depends on.

In this blog series we wish to combine

  1. Experiences gained while working closely with some of the SAP customers in the custom code optimization phase and
  2. Some insights into the latest tools and techniques for custom code optimization available in our latest ABAP platform.


We will also offer some pointers which we feel will be highly beneficial for those who wish to venture into this space.

Before going further let us have a brief recap of why custom code optimization is important and relevant in the context of HANA.


  1. HANA is an in-memory database and supports column store tables (in addition to row store tables). This implies that while big queries processing millions of records on large data sets become much faster, poorly written ABAP code that issues many small queries in an un-optimized way might run much slower.
  2. Existing ABAP custom reports, which may have been developed and maintained by different consultants over the years, may not always comply with the Open SQL golden rules.
  3. With the advent of HANA there is a significant shift in the ABAP development paradigm, i.e. from "data-to-code" to "code-to-data".


The above points are just a precursor; the fact remains that developing ABAP for HANA is NOT completely different from how it used to be earlier. These points are made to set the context for introducing our ABAP development community to some cool tools and techniques that will help them transition smoothly into this new exciting world in an absolutely non-disruptive manner.
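For readers who want a concrete picture of the "data-to-code" versus "code-to-data" shift, here is a neutral sketch using Python and sqlite3 as stand-ins for ABAP and the database (table and column names are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("EMEA", 10.0), ("EMEA", 20.0), ("APJ", 5.0)])

# Data-to-code: transfer all rows to the application and aggregate there.
rows = con.execute("SELECT region, amount FROM sales").fetchall()
totals_app = {}
for region, amount in rows:
    totals_app[region] = totals_app.get(region, 0.0) + amount

# Code-to-data: push the aggregation down to the database; only the
# small result set crosses the interface.
totals_db = dict(con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))

print(totals_app == totals_db)  # True
```

Both approaches produce the same totals; the difference is where the work happens and how much data travels between the database and the application layer.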

In this blog series we will cover the different steps of a possible approach that a customer could follow in order to fully benefit from the custom code optimization phase.



Tools Required and Prerequisites

 

As effort estimations for adapting existing custom ABAP code are required before the start of the migration project, these tools are also provided for lower SAP NetWeaver releases. The following table summarizes the new or enhanced tools including their availability, and gives additional references for further information.

 

SQL Monitor (new) – Transaction: SQLM

Capabilities:

  • Capture aggregated SQL runtime data over a longer period in the productive system

Availability (Releases) – NetWeaver standard (SQLM):

  • NetWeaver 7.4 (SP2)
  • NetWeaver 7.03/7.31 (SP9)
  • NetWeaver 7.02 (SP14)

Below NetWeaver 7.4, Kernel 7.21 (PL > 118) is required.

ABAP Test Cockpit (enhanced) – Transaction: ATC, (SE80)

Capabilities:

  • New checks in the SAP Code Inspector for detecting potential functional issues during the transition
  • New performance checks for finding optimization potential

Availability (Releases): The ABAP Code Inspector infrastructure is available with NetWeaver 7.0 and above. The ABAP Test Cockpit is available in NetWeaver 7.02 and above. The enhanced code checks are available in:

  • NetWeaver 7.4 (SP2)
  • NetWeaver 7.03/7.31 (SP9)
  • NetWeaver 7.02 (SP14)

SQL Performance Tuning Worklist (new) – Transaction: SWLT

Capabilities:

  • Combine SQL Monitor data with the results of a static code analysis
  • Creation of a ranked worklist

Availability (Releases):

  • NetWeaver 7.4 (SP2)
  • NetWeaver 7.03/7.31 (SP9)
  • NetWeaver 7.02 (SP14)
 

Next Steps

 

This blog series will discuss the different phases of custom code management as follows.

  1. Functional Correctness - http://scn.sap.com/community/abap/hana/blog/2014/06/20/unleash-the-power-of-sap-hana-from-your-abap-custom-code-accelerate-your-custom-reports-like-never-before--functional-correctness
  2. Detect and Prioritize your Custom Code
  3. Optimize your Custom Code


  • Please note that the approach shared here is an iterative and flexible model; it is suggestive in nature rather than a rigid process.



Unleash the power of SAP HANA from your ABAP Custom Code- Accelerate your custom reports like never before - Functional Correctness


1. Functional Correctness

Prerequisites

 

Before you start reading this blog, it is good to read the introduction of this blog series:


  1. Introduction - Unleash the power of SAP HANA from your ABAP Custom Code - http://scn.sap.com/community/abap/hana/blog/2014/06/20/abap-custom-code-management--leverage-the-power-of-sap-hana-in-abap


What does Functional Correctness mean?

 

Before going for optimization, the existing custom code should behave as expected after the SAP HANA migration. In general, custom code will work as expected after migration unless:

  • The custom code contains DB-specific code or queries
    • Each DB has specific features and unique technical behavior.
    • DB-specific code may rely on these features of the database previously used.
  • The custom code relies on DB indexes defined in the earlier database
    • Because of the column-based architecture, secondary DB indexes, for example, are less important.
    • DB-specific code may rely on the existence/usage of certain DB indexes.
  • The custom code relies on cluster/pool-specific table reading (e.g. for BINARY SEARCH)
    • During the migration to SAP HANA most pool and cluster DB tables are transformed into transparent DB tables (de-pooling / de-clustering) so that the tables can be used in analytic scenarios.
    • DB-specific code may rely on the technical specifics of pool and cluster tables.

 

  • SAP Note 1785057 gives details about the above checks.


The above issues can be identified using functional checks. SAP provides tools to identify them, and the tools give suggestions for correcting the identified issues so that the code works as expected.


Example of DB Hint:

Image1.png

The above source code shows an example of the usage of a DB hint in an SQL query. The DB hint (for Oracle) forces the SQL query to use an index defined at the DB level, which means that post SAP HANA migration this index becomes invalid and leads to functional issues.


Example for Cluster/Pool table read:

 

Have a look at the code below, which reads data from the table BKPF, a cluster/pool table before migration. After the migration to SAP HANA this table becomes a transparent table. The statement following the SELECT reads the internal table IT_BKPF using BINARY SEARCH. A binary search expects the internal table to be sorted by the key attributes; if it is not, the search fails. The code works perfectly fine before migrating to SAP HANA, because the internal table IT_BKPF is sorted by default, which is implicit behavior for cluster/pool tables. Post migration to SAP HANA, this statement fails, as the internal table IT_BKPF is no longer sorted. Hence, before using the binary search on the internal table, an explicit SORT (based on at least the primary key) is needed to make sure the migrated report / custom code produces the same output.



SELECT awkey
       gjahr
       belnr
       xblnr
       bldat
  FROM bkpf " BYPASSING BUFFER
  INTO CORRESPONDING FIELDS OF TABLE it_bkpf
  FOR ALL ENTRIES IN lt_invoice_details
  WHERE gjahr = lt_invoice_details-gjahr AND
        xblnr = lt_invoice_details-xblnr AND
        awkey = lt_invoice_details-awkey.

" ... some calculations
READ TABLE it_bkpf WITH KEY gjahr = invoice_details-gjahr
                            xblnr = invoice_details-xblnr
                            awkey = invoice_details-awkey
     BINARY SEARCH
     TRANSPORTING belnr bldat.
IF sy-subrc = 0.
  " further calculations
ENDIF.
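The failure mode is easy to reproduce outside ABAP. The following illustrative Python sketch uses the standard bisect module as a stand-in for BINARY SEARCH; the key values are invented:

```python
import bisect

def binary_lookup(keys, key):
    """Return True if key is found via binary search (mimics BINARY SEARCH).
    Like BINARY SEARCH, it silently assumes keys is sorted."""
    i = bisect.bisect_left(keys, key)
    return i < len(keys) and keys[i] == key

# Before migration the data arrived implicitly sorted; afterwards it may not.
keys_unsorted = ["20140002", "20130001", "20140001"]

found_unsorted = binary_lookup(keys_unsorted, "20130001")  # False: silently missed

# Explicit SORT before BINARY SEARCH restores correctness.
keys_sorted = sorted(keys_unsorted)
found_sorted = binary_lookup(keys_sorted, "20130001")      # True: reliably found
```

The lookup on the unsorted list fails without raising any error, which is exactly why this class of bug is dangerous: the report simply produces wrong results.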



The recommendation / solution:

 

Image1.png

  • SAP Note 1622681 for the supported DBSL hints for SAP HANA.

 

How to find the functional issues:

 

SAP provides tools to identify such issues during the migration to SAP HANA. The ATC (ABAP Test Cockpit) tool helps to identify the functional issues.

  1. Start the transaction SATC
  2. Add your objects (into the object list)
  3. Use the variant “FUNCTIONAL_DB” which is preconfigured with necessary checks for identifying the functional issues.

 

The code inspector tool (ATC/SCI):

 

The Code Inspector tool helps to identify functional issues and potential performance issues. These are static checks based on the custom code. SAP has improved the Code Inspector with more checks to identify the functional issues which can occur after migrating to SAP HANA. For SAP HANA, ATC is the tool for preparing the custom code for functional correctness and for detecting custom code with optimization potential.


  • ATC availability starts with NW 702 SP12 / NW 731 SP5. In older releases the Code Inspector can be used.


The Code Inspector tool is available to find all the functional issues easily. The image below shows the new checks added to the tool and their purpose.

Image1.png

New checks have been added under the following categories:

  • Security Checks: Analyze native SQL and Open SQL carefully and find functional issues.
    • Critical Statements: Checks native SQL and DB hints in SQL statements.
    • Use of ADBC interface: Checks the ADBC classes for the SQL statements used.
  • Robust Programming: Analyzes problematic statements which can lead to wrong results for cluster/pool tables (transparent tables after migration).
    • Search problematic statements without ORDER BY clause:
      • Finds custom code which relies on implicit sorting
      • Searches statements with BINARY SEARCH or DELETE ADJACENT DUPLICATES for cluster/pool tables
      • Many of these findings are false positives
    • Depooling/Declustering - Search without ORDER BY clause:
      • Searches for statements on cluster/pool tables without an ORDER BY clause
      • Works only for cluster/pool table reads
      • Many of these findings are false positives
  • Search Functions: Analyzes function module calls for index-related usage.

 

  • With the ATC, SAP delivers a standard check variant for functional correctness named "FUNCTIONAL_DB", which is configured to identify all the functional issues discussed earlier.
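To illustrate the "implicit sorting" finding, here is a minimal sketch (table, field, and variable names are hypothetical): code that relied on a pool/cluster table being returned in key order must make the sort order explicit once the table becomes transparent, otherwise the BINARY SEARCH below it may miss rows.

```abap
" Hypothetical transparent table zfi_docs (formerly pooled/clustered).
DATA: lt_docs TYPE STANDARD TABLE OF zfi_docs,
      ls_doc  TYPE zfi_docs.

SELECT * FROM zfi_docs
  INTO TABLE lt_docs
  WHERE bukrs = p_bukrs
  ORDER BY PRIMARY KEY.        " explicit order - do not rely on implicit sorting

READ TABLE lt_docs INTO ls_doc
  WITH KEY belnr = lv_belnr
  BINARY SEARCH.               " safe only with a guaranteed sort order
```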



Next Steps

 

This blog series discusses the different phases of custom code management as follows.

  1. Functional Correctness - http://scn.sap.com/community/abap/hana/blog/2014/06/20/unleash-the-power-of-sap-hana-from-your-abap-custom-code-accelerate-your-custom-reports-like-never-before--functional-correctness
  2. Detect and Prioritize your Custom Code
  3. Optimize your Custom Code


  • Please note that the approach shared here is an iterative and flexible model; it is suggestive in nature rather than a rigid process.

Concat function or equivalent for CDS view?


I am trying to create a CDS view and am looking to combine multiple character fields (e.g. first_name + last_name) - is this possible? I am on CRM NW 7.4 SP5 on HANA.

 

Using an arithmetic expression gives me the following error.

 

Data type CHAR is currently not supported in an arithmetic expression

 

Looking at the official documentation here, it doesn't appear to be included?

 

ABAP Keyword Documentation

 

I can always create an AMDP or even a HANA native view; I'm just wondering if anyone has come across the same issue and found a solution within a CDS view?
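For what it's worth, the AMDP workaround mentioned above could look roughly like this (a sketch only; class and type names are made up, and the BUT000 fields are just an example):

```abap
CLASS zcl_name_concat DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.
    TYPES: BEGIN OF ty_name,
             partner   TYPE but000-partner,
             full_name TYPE c LENGTH 81,
           END OF ty_name,
           tt_name TYPE STANDARD TABLE OF ty_name WITH DEFAULT KEY.
    CLASS-METHODS get_full_names
      EXPORTING VALUE(et_names) TYPE tt_name.
ENDCLASS.

CLASS zcl_name_concat IMPLEMENTATION.
  METHOD get_full_names BY DATABASE PROCEDURE FOR HDB
                        LANGUAGE SQLSCRIPT
                        OPTIONS READ-ONLY
                        USING but000.
    -- string concatenation happens directly in the database via SQLScript
    et_names = SELECT partner,
                      name_first || ' ' || name_last AS full_name
                 FROM but000;
  ENDMETHOD.
ENDCLASS.
```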

 

Thanks in advance,
Sean.

Unstructured Information to Structured Information


Dear All,

My idea is to implement a system for converting unstructured information into structured information. These days we receive a large amount of information from many sources, but the problem is that we are not able to organize that data into structured information.

 

As we know, in order to reuse data, it must be stored in the database as structured information.

 

Based on the importance of the data, we work on the system and convert it into structured data. But whenever a different category of data arrives, we have to change the system again, so the process keeps repeating.

 

My idea is to implement a system that is capable of handling information from multiple sources and of different categories (unstructured/structured). We know our business, and we know the type of data we receive from any number of sources. The information changes all the time, but the business remains the same.

 

To move our business forward, the system should be built around the business concept itself; if we do that, I hope that in the future we will not struggle with different categories of information in a big data scenario.

 

 

Regards

 

Rajkumar Narasimman

External View - Restrictions regarding HANA Views?


Hi,

 

I am trying to use a Calculation View in an External View so that I can access it easily via Open SQL from my ABAP program using an IDA ALV.

 

My calculation views are stored in an XS project on HANA side.

 

It seems that the HANA view names

- must not be longer than 50 characters (including the package) and

- must not contain lower-case characters.

 

I searched the HANA guides for these restrictions but didn't find anything about them.

 

My question is whether these restrictions are valid or whether I made some other mistake.

 

Thx

Florian

Calculation View via External View in IDA ALV


Hi,

 

I am trying to display data (1 million records, for test purposes) delivered by a Calculation View in an IDA ALV, using an External View.

 

That works fine, but only for the first 200 entries. It seems that the paging does not work correctly. I did not find any note that handles this issue, nor any documentation about it. The External View restriction page also gives no hint that this is not supported.

 

The Data Preview in AiE (ABAP in Eclipse) works fine. I also get more data records if I set the maximum number of rows for the IDA ALV to 10,000. In both cases no paging is involved.
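For context, the setup being described presumably looks something like this (view name hypothetical; method names as recalled from the IDA ALV API, so please verify against your release):

```abap
" Display an external view (proxy for the calculation view) in an IDA ALV.
DATA(lo_alv) = cl_salv_gui_table_ida=>create( iv_table_name = 'ZEV_SALES_CALC' ).

" Raising this limit made more than 200 records appear,
" which points at paging rather than the view itself.
lo_alv->set_maximum_number_of_rows( iv_number_of_rows = 10000 ).

lo_alv->fullscreen( )->display( ).
```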

 

Has anyone had the same problem?

 

I work on HANA SP7 and NW 7.40 SP5.

 

Thx.

Florian

Unleash the power of SAP HANA from your ABAP Custom Code- Accelerate your custom reports like never before - Detect and Prioritize your Custom Code


2. Detect and Prioritize your Custom Code

 

Prerequisites

 

Before you start reading this blog, it is good to read the earlier parts of the series:


  1. Introduction - Unleash the power of SAP HANA from your ABAP Custom Code -http://scn.sap.com/community/abap/hana/blog/2014/06/20/abap-custom-code-management--leverage-the-power-of-sap-hana-in-abap
  2. Unleash the power of SAP HANA from your ABAP Custom Code- Accelerate your custom reports like never before - Functional Correctness - http://scn.sap.com/community/abap/hana/blog/2014/06/20/unleash-the-power-of-sap-hana-from-your-abap-custom-code-accelerate-your-custom-reports-like-never-before--functional-correctness

 

  • NOTE: This blog contains information collected from various sources. The objective is to give continuity with the previous blogs in the series, and to give an example of how we used the tools to find performance hot spots and how we prioritized them.

Introduction

 

After the functional correctness phase, the custom code is ensured to behave as expected after the migration to SAP HANA. In general, custom code already gets a default performance improvement from the in-memory capabilities. But custom code is often not written following the SAP standard guidelines, and performance potential is therefore left unused. This section explains the next steps for improving the performance of custom code.


Golden Rules and Their Priority Changes for SAP HANA

 

One should really know the basics of performance optimization. For that, you need a clear understanding of the "golden rules of SQL" and how their priorities change with respect to SAP HANA. The image below depicts the golden rules of SQL; with respect to SAP HANA, some of their priorities change.


golden rule.png


There are five golden rules that should be followed by any (SAP) application or report. Let's look at each rule before and after SAP HANA and its priority change.


  1. Keep the result set small: It is very important that the application does not load all, or irrelevant, data from the database layer into the application layer. This rule applies with the same priority for SAP HANA as well.
  2. Minimize the amount of transferred data: When reading from the database, the application should fetch only the data needed for further calculation. Conditions from the business logic should be pushed down as WHERE conditions or filters to reduce the amount of data fetched. With respect to SAP HANA, the priority of this rule increases.
  3. Minimize the number of database calls: The application should not make unnecessary database accesses, as they are costly operations. It should therefore use JOINs and FOR ALL ENTRIES (FAE) where possible to reduce the number of calls; SELECT SINGLE in a LOOP and nested SELECTs should be avoided. With respect to SAP HANA, this rule also gains priority.
  4. Minimize the search overhead: With respect to SAP HANA this rule takes lower priority, because SAP HANA has a very powerful search engine that the application can really make use of.
  5. Keep load away from the database: This rule also takes lower priority with respect to SAP HANA because of SAP HANA's in-memory capabilities. SAP recommends pushing data-intensive business logic down to the database layer as much as possible by means of HANA artifacts.
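A small sketch of rules 1-3 in Open SQL (table, field, and parameter names are hypothetical): the filter is pushed down as a WHERE condition, only the needed columns are read, and a single JOIN replaces a SELECT SINGLE inside a LOOP.

```abap
SELECT h~vbeln, h~erdat, i~matnr, i~netwr       " rule 2: only the needed columns
  FROM zsd_head AS h
  INNER JOIN zsd_item AS i ON i~vbeln = h~vbeln " rule 3: one call, not one SELECT per loop pass
  INTO TABLE @DATA(lt_result)
  WHERE h~erdat >= @p_date                      " rule 1: keep the result set small
    AND i~netwr >  @p_minval.                   " rule 2: push conditions to the database
```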



Detecting potential hot spots for optimization

 

When we talk about performance, customers ask one basic question: "How can I find ABAP code that should be optimized or that has potential for massive acceleration?" The answer: in general, no changes are necessary if your SQL code or custom code follows the golden Open SQL rules. The ATC or SCI checks can be used to find the SQL patterns that violate these rules. Add runtime performance data from the production system to rank the findings and to find potential for massive acceleration. The code check tools (ATC/SCI) have now been improved with additional checks to identify performance loopholes.


The image below shows the additional checks in the ATC tool, mapped to the golden rules to show their relevance.


rule mapping.png


There are additional checks in the section "Performance Checks", which identify performance loopholes in the code. These checks point to improvements regarding:

  • Unused data in SELECT statements
  • Conditions that can nullify the data retrieved
  • Insecure use of FAE. This check identifies whether the FAE driver table could be empty; if it is empty, the SELECT would effectively read the whole database table, which is very costly.
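The FAE finding refers to the well-known pitfall that an empty FOR ALL ENTRIES table removes the corresponding WHERE restriction. A minimal guard (names hypothetical):

```abap
" lt_orders is the FAE driver table; if it is empty, the SELECT below
" would read ALL rows of zsd_item - hence the explicit guard.
IF lt_orders IS NOT INITIAL.
  SELECT vbeln matnr
    FROM zsd_item
    INTO TABLE lt_items
    FOR ALL ENTRIES IN lt_orders
    WHERE vbeln = lt_orders-vbeln.
ENDIF.
```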


SQL Monitor Tool

 

The SQL Monitor tool is used to collect performance traces of each and every SQL statement executed on the ABAP server (generally in the production system). This tool can answer some basic optimization questions:

  • What are the most expensive statements in my ABAP code?
  • What are the most frequently executed statements in my ABAP code?
  • What are the biggest read/write operations in my ABAP code?

 

None of the standard performance analysis tools you might know is capable of answering any of these questions in a satisfying way. Let’s take a moment to figure out why.


Trace tools like ST05 (SQL Trace) or SAT (ABAP Runtime Analysis), on the one hand, are designed to trace a single process but not your entire system. Hence, even if activated only for a short period of time, the trace files would become way too big and the performance overhead would be unacceptable in a productive system. Monitoring tools such as STAD (Business Transaction Analysis) and ST03 (Workload Monitor), on the other hand, run system-wide and provide aggregated performance data on the process level. However, they don’t allow you to drill down in the data so there is no way to get the SQL profile of a process. Other monitoring utilities like ST04 (DB Performance Monitor) supply you with detailed information about every executed SQL statement but cannot provide a link to the driving business processes.


So how can you answer the questions stated above? This is where the new SQL Monitor kicks in by providing you with system-wide aggregated runtime data for each and every database access. You may think of it as an aggregated SQL trace that runs permanently and without user restriction. On top of that, the SQL Monitor also establishes a connection between the SQL statement and the driving business process. To be more precise, this tool not only provides you with the source code position of every executed SQL statement but also with the request’s entry point that is, for instance, the transaction code.


SQL monitor.png


SAP recommends activating the SQL Monitor in the production system for at least a week; in many cases it needs to be activated for two weeks. The SQL Monitor collects runtime traces of all SQL statements executed over the activation period. This data can be exported and uploaded as a snapshot to the development or quality system, where it can be analyzed further to detect performance potential. The diagram above shows a collection of SQL statements executed over the span of a week on the production system. It gives insight into:


  • The SQL statements with the highest number of executions
  • The SQL statements transferring the highest volume of data
  • The call stack, which indicates the location of the SQL statement and how it is executed

 

Prioritize the findings

 

The next step after collecting the runtime traces is to prioritize the findings. For this, SAP delivers a tool named SQL Performance Tuning Worklist (SWLT), which combines the results of the static checks with the runtime findings.


The SQL Performance Tuning Worklist tool (transaction SWLT) enables you to find ABAP SQL code that has potential for performance improvement in productive business processes. The tool combines the ABAP code scans (ABAP Test Cockpit or Code Inspector) with monitoring and analysis utilities (SQL Monitor and Coverage Analyzer) and automatically creates a condensed worklist. The resulting findings allow you to rank the worklist according to specific performance issues and your business relevance. Before analyzing static checks, the appropriate ABAP Test Cockpit runs must be performed in the checked systems and their results replicated into the relevant system.

 

The image below explains how the static and runtime findings are combined on one tool called SWLT.


swlt.png


The SWLT tool manages the SQLM snapshots taken from the production system and combines them with the ATC results; with this combined view of static and runtime findings it becomes easy to prioritize the optimizations.


Considerations to Prioritize

 

To prioritize the findings, one can consider the following inputs.

 

  1. The SQL Monitor results - the top SQL statements that consumed the most DB/execution (run)time.
  2. The ATC check results - the ATC findings that indicate performance loopholes in the code.
  3. The SWLT results - the SWLT results, which correlate the static check findings with the runtime findings (ATC + SQLM).

 

Apart from these three inputs, there are other inputs to be considered.

 

  1. The business impact - the business value gained by optimizing the top-listed SQLM or SWLT findings. This can be derived from customer input, mostly from the functional team on the customer side.
  2. Customer input - the customer may already have prepared a list of reports based on business value/need.


Next Steps

 

This blog series discusses the different phases of custom code management as follows.

  1. Functional Correctness - http://scn.sap.com/community/abap/hana/blog/2014/06/20/unleash-the-power-of-sap-hana-from-your-abap-custom-code-accelerate-your-custom-reports-like-never-before--functional-correctness
  2. Detect and Prioritize your Custom Code
  3. Optimize your Custom Code


  • Please note that the approach shared here is an iterative and flexible model; it is suggestive in nature rather than a rigid process.