How we can profit from Lessons Learned

By documenting Lessons Learned, both positive and negative insights from projects are written down in order to transform the knowledge gained into a body of experience for future projects.

Typical questions are:

  • Have we learnt anything from our previous experiences?
  • Have we critically reflected on our experiences and drawn the right conclusions from them?

Lessons Learned workshops are usually held at the end of the project. While there is nothing wrong with this approach, much of the knowledge gained during the project may already have been forgotten by then and thus never be recorded.

Hence, it is recommended to carry out Lessons Learned sessions after each completed project phase in order to make as many of the experiences gained as possible available for each following step in the project cycle. All findings should be discussed, and measures should be derived accordingly.

Lessons Learned documentation often disappears into someone’s drawer and is only rarely used for subsequent projects. To ensure that similar or even entirely new projects can benefit from projects that have already been completed, we at Inspiricon store our Lessons Learned documents centrally and make them accessible to all project managers and consultants in the company.


Lessons Learned from an upgrade project

However, we should not only investigate the negative experiences. There is also a lot to learn from our achievements, and we should not take our success factors for granted. Instead, successful projects should also be tracked and documented thoroughly in order to be able to reproduce a proven procedure at any time.

And although a Lessons Learned process might require some additional time, it is well worth the effort because the amount of time wasted on recurring errors is much higher.

At Inspiricon, we emphasize the documentation and consideration of lessons learned throughout each individual project phase. And with our best practices, we will gladly assist you in successfully implementing your projects.

Thomas Dietl Project Manager
Phone: +49 (0) 7031 714 660 0


Evolution of SAP BW

Hi everyone,

In the course of our BW/4HANA activities, we dived into the archives and set out to chart the history of SAP BW. The result is this infographic, spanning the beginnings of BW 1.2 up to today. We did not just research this journey – no, we walked it ourselves! We would very much like to discuss the evolution of SAP BW as well as the latest migration strategies with you.

History of SAP BW

Jörg Waldenmayer Lead Consultant
Phone: +49 (0) 7031 714 660 0

Performance Optimization of SAP BW Data Loading Processes Using Multidimensional Internal Tables

Long runtimes can occur when loading data targets in SAP BW if large amounts of data have to be processed, for example in end routines. Performance problems sometimes surface only after the start of production, due to growing data volumes that were not available during development and testing. In such cases, there are always risks involved when performance optimizations have to be made to a productive application. The approach described below represents one way of significantly speeding up the processing of large amounts of data under certain conditions — without having to make changes to the originally implemented logic or processing steps.

In Practice

A typical example in practice is the reporting on open purchase orders. One of our SAP BI implementation projects required the processing of these open purchase orders at the individual item level, along with their material classification and vendor acknowledgement, and any related receipts of inbound goods — all valuable information that had to be available in reports.

Lookup tables were used to provide the material classifications, the vendor confirmations, and the related receipts of inbound goods. These tables were defined as standard tables. The processing of these table entries was followed by the calculation of open, confirmed, and unconfirmed order quantities, and by the offsetting of goods receipts in the case of partial deliveries. The processing of these lookup tables took place in nested loops within an outer loop over the RESULT_PACKAGE. Because both the RESULT_PACKAGE and the lookup tables contained several tens of thousands of records, the overall performance of the data processing was correspondingly poor.

The lookup tables were queried using complex WHERE clauses within several nested LOOP – ENDLOOP blocks. Tables of TYPE HASHED TABLE could therefore not be used directly, since a hashed table only delivers its fast access when it is read with its complete unique key.

The total processing time for approximately 400,000 order items was over 90 minutes.

The use of sorted tables led to only minor performance improvements.
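To make the pattern concrete: the situation described above can be sketched in a few lines of Python, with plain lists standing in for ABAP standard tables (all names and values here are made up for illustration; this is not the project's actual code):

```python
# Conceptual sketch: plain lists play the role of ABAP standard tables.
result_package = [
    {"po": "4500000001", "item": "00010"},
    {"po": "4500000001", "item": "00020"},
    {"po": "4500000002", "item": "00010"},
]

confirmations = [  # "standard table": an unsorted list of rows
    {"po": "4500000001", "item": "00010", "qty": 5},
    {"po": "4500000002", "item": "00010", "qty": 7},
]

open_qty = {}
for row in result_package:          # outer LOOP AT RESULT_PACKAGE
    for conf in confirmations:      # inner LOOP ... WHERE po = ... AND item = ...
        if conf["po"] == row["po"] and conf["item"] == row["item"]:
            key = (row["po"], row["item"])
            open_qty[key] = open_qty.get(key, 0) + conf["qty"]

# Every result row scans the whole confirmations list: O(n * m) comparisons.
print(open_qty)
```

Because every row of the result package triggers a full scan of every lookup table, the runtime grows with the product of the table sizes – which is exactly what hurts with several tens of thousands of records on each side.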

Here is a code example from the data processing:

[Listing 1: the lookup tables are defined as standard tables and read in nested loops with complex WHERE clauses – code image not reproduced]

The problem of the long runtimes could only be solved with the help of multidimensional internal tables. Here we defined a HASHED TABLE whose UNIQUE KEY greatly improved read performance. In the case above, the key consisted of the purchase order number concatenated with the purchase order item for item detail. In addition to this key, the table contained a further internal table as an additional field, whose structure corresponded to the table in Listing 1.

[Code listing: definition of the multidimensional hashed table with the nested inner table – code image not reproduced]

Data Preparation

The filling of the original internal table remains unchanged and takes place with a FOR ALL ENTRIES statement using the RESULT_PACKAGE:

[Code listing: filling the lookup table with a FOR ALL ENTRIES selection on the RESULT_PACKAGE – code image not reproduced]

At the end of processing the data are transferred into the HASHED TABLE and the original internal table is deleted:

[Code listing: transferring the data into the hashed table and deleting the original internal table – code image not reproduced]

Optimizing Data Processing

Within the following data processing, the original internal table is filled only with the currently required data records:

[Code listing: filling the original internal table with only the currently required records – code image not reproduced]

The processing of the original internal table remains unchanged.

[Code listing: unchanged processing of the original internal table – code image not reproduced]

At this point, the table contains only the records required for this processing step – in our case, the material classifications, vendor confirmations, and inbound goods receipts. Since the internal table now contains only very few records compared to before, the processing inside the loop is much more efficient.
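Conceptually, the optimization described above corresponds to grouping the lookup rows once by their full key and then fetching only the small group needed in each loop pass. A rough Python sketch, with a dict standing in for the ABAP HASHED TABLE (illustrative names and values, not the original code):

```python
# One-time preparation: group the large lookup table into a "multidimensional"
# structure - a hash map whose key is order number + item, and whose value is
# the small inner table holding just the rows for that key.
lookup_rows = [
    {"po": "4500000001", "item": "00010", "qty": 5},
    {"po": "4500000001", "item": "00010", "qty": 3},
    {"po": "4500000002", "item": "00010", "qty": 7},
]

hashed = {}                        # plays the role of the HASHED TABLE
for row in lookup_rows:
    hashed.setdefault(row["po"] + row["item"], []).append(row)

result_package = [
    {"po": "4500000001", "item": "00010"},
    {"po": "4500000002", "item": "00010"},
]

totals = {}
for row in result_package:
    # O(1) read with the full unique key, instead of scanning all lookup rows
    small_table = hashed.get(row["po"] + row["item"], [])
    # the original processing logic now runs over very few records
    totals[(row["po"], row["item"])] = sum(r["qty"] for r in small_table)

print(totals)
```

The one-time grouping pass is linear in the lookup table size; after that, each pass of the main loop touches only the handful of rows belonging to the current order item.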

What is the benefit of this technique?

One advantage of this technique is that no changes need to be made to the existing internal tables or to the logic used in the WHERE clauses. The one-time cost of setting up the HASHED TABLE is insignificant within the overall processing. The solution can therefore be used at low risk to optimize the runtime of existing applications, since the core of the processing logic remains unchanged and only the data provisioning is optimized.

This particular technique is recommended, for example, for applications that have been used in production for a long time and are reaching their runtime limits due to the ever-increasing volume of data to be processed.

In the above practical example of processing open purchase order documents, the total data processing time was reduced from over 90 minutes to just under 15 minutes.

Source of image: Inspiricon AG

Oskar Glaser Lead Consultant BI Reporting
Phone: +49 (0) 7031 714 660 0

Classic DataStore Object vs. Advanced DataStore Object

There have been many architecture-level changes in SAP BW/4HANA. One of these changes concerns data modeling.

In this article we will walk through the various features and capabilities of ADSOs, as well as explore how these capabilities help to optimize various tasks in your SAP BW environment.

First, we will talk about the classic DSO and its features. After that, we will look at the differences between the classic DSO and the newly introduced ADSO.

DSO (Data Store Object)

What is DSO?

A DSO is a two-dimensional storage unit that mainly stores transaction data or master data at the lowest level of granularity, i.e. at the detailed level.

Types of DSO

When creating a DSO, you must choose the type:


When we create a DSO, the system sets the option ‘SIDs Generation upon Activation’ by default. This option can be found in the edit-mode settings of a DSO. If it is checked, the system checks the SID values for all characteristics in the DSO during activation and generates SIDs for any characteristic values that do not yet have one. Generating the SIDs during activation improves query runtime performance, because the system does not have to generate SIDs at query runtime. SID values are always stored in the SID table of an InfoObject; using the SID, the attributes and texts of a master data InfoObject are accessed. The SID table is connected to the associated master data tables via the characteristic key.
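The idea behind SIDs – replacing characteristic values with surrogate integer IDs at activation time so that query-time access is a cheap read – can be sketched as follows (a simplified illustration, not SAP's actual implementation):

```python
# Simplified SID table: maps each characteristic value to a surrogate integer ID.
sid_table = {}      # characteristic value -> SID
next_sid = [1]

def get_or_create_sid(value):
    """Return the SID for a characteristic value, generating it if missing -
    this is what 'SIDs Generation upon Activation' does ahead of query time."""
    if value not in sid_table:
        sid_table[value] = next_sid[0]
        next_sid[0] += 1
    return sid_table[value]

# During activation, SIDs are generated for all characteristic values ...
for material in ["MAT_A", "MAT_B", "MAT_A"]:
    get_or_create_sid(material)

# ... so that at query runtime the lookup is a cheap, read-only access.
print(sid_table)   # {'MAT_A': 1, 'MAT_B': 2}
```

Queries then join on the small integer SIDs rather than on the (potentially long) characteristic keys, which is exactly why generating them up front avoids work at query runtime.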

The following table shows the properties and architecture of the different DSO types:


ADSO (Advanced Data Store Object)

The Advanced DSO replaces all of these classic object types.


Before we create an ADSO, we should know that it comprises three main tables:

  1. Inbound Table
    • Activation queue table for classic DSO
    • Uncompressed fact table of non-SAP HANA optimized InfoCube
    • All records are stored with a technical key
  2. Table of Active Data
    • Same as for the classic DSO: contains the current values after activation. The key of the table is the DSO key (more about keys later)
    • Compressed fact table of non-SAP HANA optimized InfoCube
  3. Change Log
    • Same as classic DSO
    • Stores the difference between Inbound and Active-table
    • Needed for Delta-generation
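The interplay of these three tables can be modelled roughly as follows (a conceptual sketch with invented field names, not the actual BW/4HANA implementation; it shows how activation moves records from the inbound table into the active table and writes before/after images to the change log):

```python
inbound = [        # records arrive with a technical key (request, record number)
    {"req": 1, "rec": 1, "key": "PO1", "amount": 100},
    {"req": 1, "rec": 2, "key": "PO2", "amount": 50},
    {"req": 2, "rec": 1, "key": "PO1", "amount": 120},  # same semantic key as above
]
active = {}        # active data table: one row per semantic (DSO) key
change_log = []    # before/after images, needed for delta generation

def activate():
    """Move inbound records to the active table, logging before/after images."""
    for row in inbound:
        before = active.get(row["key"])
        after = {"key": row["key"], "amount": row["amount"]}
        change_log.append({"before": before, "after": after})
        active[row["key"]] = after      # a later record overwrites an earlier one
    inbound.clear()                     # inbound table is empty after activation

activate()
print(active["PO1"]["amount"])   # 120: the most recently loaded record wins
```

With two records sharing the semantic key PO1, the most recently loaded value wins (overwrite behaviour), and the change log retains the before/after images a delta extraction would need.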

Important Steps in Creating an ADSO

Like all new objects, an ADSO is created in the BW Modeling Tools (BWMT) in Eclipse. (In BW 7.5 you can still create the classic objects in SAP GUI; in BW/4HANA you can only create the new objects in the BWMT.)

In the General tab you can configure activation settings and other properties. First, enter a description. After that, you can choose a Model Template: an ADSO can behave like any one of the following objects from classic BW:


  • Acquisition Layer

In this layer you can create objects that cover the “write-optimized” use cases of the classic DSO. It is divided into three further templates:

  1. Data Acquisition Layer
    • Corresponds to a persistent staging area (PSA) and acts as an incoming storage area in BW for data from source systems
    • No use of Active Table, so activation is not needed
    • Requests will be loaded into and extracted from the inbound table
    • All the records in the Inbound Table contain a request transaction sequence number (TSN), a data packet number, and a data record number
    • The inbound table (formerly: new data / activation queue table) is accessed when executing a BEx query and for extraction
    • Data doesn’t get aggregated
  2. Corporate memory with compression feature
    • Requests will still be loaded into the inbound table
    • Old requests that are no longer needed on detailed level can be compressed into the active data table.
    • To save memory space, the CM – compression ADSO doesn’t use a Change Log table, only an Inbound Table and an Active Data Table.
    • As soon as a load request is activated, the system loads the data into the Active Table and deletes it from the Inbound Table
    • If there are two records with the same key, BW/4HANA overwrites all the characteristics of the record with those of the most recently loaded record.
  3. Corporate memory with reporting option
    • A difference between this template and the “Corporate memory with compression feature” template is that the system does not erase data from the Inbound Table. Instead, the data also remains in the Inbound Table so that none of the technical information is lost.
    • The CM reporting template has no Change Log though
    • Another difference is that the data is not extracted from the Active Table but from the Inbound Table
    • Because the data remain in the Inbound Table after activation, these ADSOs are a good solution for you when you want to store data but save space by not using a Change Log
  • Propagation Layer
    • Provides a basis for further distribution and reuse of data
    • Corresponds to a standard DataStore object (classic)
    • Requests will be loaded into the inbound table
    • For reporting the user must activate the loaded requests
    • The data is then transferred into the active data table and the delta is stored in the change log
    • The change log is also used to roll back requests that have already been activated
  • Reporting Layer
    • Used to perform queries for analysis
    • Corresponds to a standard InfoCube
    • The inbound table acts as “F”-table and the active data table as “E”-table
    • It does not have a Change Log. Without a change log table, no delta process is possible.
    • After activation, the Inbound Table is empty
    • The user reports on a union of the inbound table and the active data table
  • Planning Layer

It is split into two variants:

  1. Planning on Direct Update
    • Data is automatically loaded into the Active table, so no need for activation
    • It has no Change Log or Inbound Table
    • You can fill the Active table with an API
    • You can also load data into this type of ADSO using a DTP
    • Only offers an overwrite option; there is no summation of key figures as in the Planning on Cube-like ADSO
  2. Planning on Cube-like
    • Has an Inbound Table and an Active Table
    • All characteristic fields are marked as key fields in the Active Table, which is a necessary requirement to make it suitable for planning.
    • Only offers a summation option

Process of SID generation highly optimized for HANA

To optimize performance, BW/4HANA makes it possible to set this flag not only at InfoProvider level, but individually per characteristic of the DSO. The data integrity check is then only executed for the selected characteristics.


As a new feature, you can use fields with simple data types instead of InfoObjects. To do so, go to the Details tab and click the Add Field button. Under Identify, you can specify in the “With” dropdown menu whether you want to use an InfoObject or a field for the definition.


In BW you can choose whether to model with InfoObjects or with fields. Modeling with InfoObjects involves extra effort, but also brings a lot of advantages. Before choosing, you should weigh the advantages and disadvantages of both modeling options.

The following lists some of the advantages and disadvantages of modeling with fields:

Advantages when using fields:

  • If the query contains fields, it can be processed key-based in SAP HANA
  • Using fields can enhance the flexibility and range of the data warehouse, when the data volume is small.

Disadvantages when using fields

  • The services for InfoObjects (attributes and hierarchies for example) are not available for fields.
  • Validity characteristics for DataStore objects (advanced) with non-cumulative key figures must be InfoObjects.
  • InfoObject attributes must be InfoObjects
  • A field-based key figure cannot use exception aggregation
  • Planning queries on DataStore objects (advanced) are only supported with fields as read-only
  • If fields are used in the query, the InfoProviders can only be read sequentially
  • In the query on a CompositeProvider, not all data types for fields are supported (ex. maximum length for fields is 20 characters)

Defining Keys for an ADSO
In this tab, we also select which fields make up the key of our ADSO. To define a key, click the Manage Keys button.


Key Fields

There are two types of keys: primary and foreign keys.

Advantages of using Key fields:

  • Key fields uniquely identify a record in a table
  • Key fields cannot be NULL
  • Keys are used to link two tables
  • The main purpose of a foreign key is data validation
  • Read Master Data: using the input field value as a key, you can read the value of a characteristic attribute belonging to a specified characteristic
  • Read from advanced DataStore: using the input field value(s) as a (compounded) key, you can read the data fields of a specified Advanced DataStore Object (ADSO)
  • One thing to be aware of: if two records have the same key, BW/4HANA overwrites all the characteristics of the record with those of the most recently loaded record

Disadvantage of not using Key fields:

  • Records are not uniquely identified => duplicate records are allowed
  • Performance is affected

Benefits of using an ADSO instead of a classic DSO:

  • Simplification of object types
    • Can behave like 4 Objects from the classic BW
  • Flexibility in data modeling
    • Modeling your ADSO using the Reporting Layer settings
  • Performance of data loads and activation is optimized for HANA as ADSO is a HANA native object.

Source of images: SAP SE, Inspiricon AG

Roxana Hategan Associate
Phone: +49 (0) 7031 714 660 0

Comparison between the modelling in SAP BW and SAP BW/4HANA application

“We study the past to understand the present.” – William Lund

More and more customers approach us to learn more about BW 7.5 and BW/4HANA. All the more reason for us to start a new blog series to take a closer look at this subject. Let us start by examining the history of SAP BW and then move on to outlining the subject areas we will be covering over the coming weeks.

For this article, the past refers to the SAP BW modelling and the present to SAP BW/4HANA modelling.

Many organizations and individual users are still not sure about the differences between SAP BW (classic modelling) and SAP BW/4HANA. The purpose of this article is to put things into perspective and to provide you with a clear answer on this topic.

SAP BW History – a short overview


What are the differences between SAP BW and SAP BW/4 HANA?

One of SAP’s main goals is to simplify the system. Consequently, it bundles together objects and processes and reduces the number of steps involved.

1.  Modelling Objects

A quick comparison between the modelling objects accessible in the classic SAP BW application and those in SAP BW/4HANA may help illustrate the level of modelling simplification accomplished.


In the upcoming articles in our series we will introduce you to the new Providers, starting with ADSOs.

2. Data Flows

The central entry point for modelling in SAP BW/4HANA is the data flow. It defines which objects and processes are needed to transfer data from a source to SAP BW/4HANA and to cleanse, consolidate, and integrate the data so that it can be made available for analysis and reporting. SAP BW/4HANA uses a new integrated layer architecture (Layered Scalable Architecture – LSA++).

Classic SAP BW uses the LSA, the predecessor of LSA++. This architecture is more restrictive and less flexible in how data can be handled.


One of the major benefits of using LSA++ is the reduction in the number of persistence layers. This has two effects:

For one, it improves data processing performance: You spend far less time saving and activating!

Second, this reduces the data volume. Given that storage space was not considered a critical factor, redundancies used to be deliberately introduced in the BW system to improve read performance. But with the advent of HANA, things changed profoundly. Main memory is expensive, both in terms of hardware when compared to hard disk storage and in licensing terms, as HANA is licensed by main memory. Another benefit is that the reduction in “physical” layers allows for far more flexibility in system design.

3. Source Systems

SAP is also pursuing its simplification approach when it comes to the source systems.

SAP BW/4HANA offers flexible ways of integrating data from various sources. The data can be extracted from the source, transformed, and loaded into the SAP BW system; alternatively, the data can be accessed directly in the source for reporting and analysis purposes, without storing it physically in the Enterprise Data Warehouse.

a) SAP HANA Source System

  • This connectivity can be used for all other databases (e.g. Teradata, Sybase IQ, Sybase ASE).

b) SAP Operation Data Provisioning (ODP)

  • acts as the hub for all data flowing into BW from external sources
  • used exclusively with SAP Landscape Transformation (SLT), SAP ERP Extractor (SAP Business Suite), HANA Views and SAP BW.
  • The PSA no longer exists with the new ODP concept which provides a much faster extraction mechanism.

With those two connectivity types, data can be made available in batch mode, using real-time replication or direct access.

HANA views are automatically generated within the SAP HANA database after you activate the objects (e.g. ADSO, CompositeProvider).

4. Performance

As pointed out in connection with LSA++, data processing is much faster with HANA. While data flows were all about streamlining the architecture, there are also a number of tangible benefits in terms of technical performance:

In addition to the capabilities of classic SAP BW, SAP BW/4HANA offers in-memory data warehousing:

  • No Aggregates or Roll-up Processes
  • No Performance specific Objects
  • Fewer Indexes
  • Faster Loading and Processing

SAP is going in the same direction with the ability to move transformations directly to the database, the so-called push-down.

The performance that SAP BW/4HANA offers is ensured by this algorithm push-down.


This is one of the subjects that we will be discussing in one of our next articles.

Source of images: SAP SE, Inspiricon AG

Roxana Hategan Associate
Phone: +49 (0) 7031 714 660 0

SAP BusinessObjects – choosing the right client tool

Are you also wondering which BusinessObjects tool best suits your needs? This post provides a brief overview of the subject and gives you some tips to put you on the right path.

The range of SAP BusinessObjects front-end products

Over the past years, SAP has significantly simplified its range of BusinessObjects front-end tools, as the following chart illustrates:

Image 1: The range of SAP BusinessObjects front-end products

Today, it has become much easier to select the right tool. First of all, you can choose from three categories:

  1. Data Discovery & Applications
  2. Office Integration
  3. Reporting

The main difference between these is their degree of interactivity, standardization, and visualization.

The one thing that connects all three categories is their interoperability, meaning that the content you create can be reused within any of the individual tools. This includes:

  • The ability to add additional scripts to your Lumira 2.0 Discovery files (formerly Lumira 1.x) in Lumira 2.0 Designer (formerly Design Studio 1.x).
  • Or, vice versa, the ability to use Designer applications in Discovery, for example to create story boards.
  • Calling parameterized Crystal Reports or Web Intelligence reports from within the respective tools is still possible and is an option that remains widely used.
  • And much more.

But nevertheless, business users and management are asking themselves which front-end tool is best suited to perform their evaluations and analyses.

There are two main situations where you need to answer the question about which tool is right for you:

  1. If your current front-end production tools are no longer able to meet certain new requirements, forcing you to take a closer look at the remaining SAP BusinessObjects tools.
  2. If your company is planning to introduce SAP BusinessObjects for the first time.

Failing to select the appropriate tools is not only likely to negatively impact end user acceptance, it will usually also lead to longer implementation times. This makes choosing the right tool all the more important.

Selection stages

Ideally, the selection process has multiple stages. We have put together the following chart to illustrate them:


Image 2: Tool selection stages

In a nutshell: Start by determining who will actually be using the reports/applications. Managers usually have widely different expectations regarding content and visualization than, for example, business analysts and are often more interested in aggregated, static and visually more sophisticated data. What’s more, managers have less time to spend on analyzing the data in detail and want to automatically receive precalculated reports with report summaries.

The next step involves defining the different use cases that basically reflect the different requirements placed on the tools. The last stage is all about assessing and monitoring the tools once they have gone live.

Selection methods

The above-mentioned requirements are collected, allowing you to then transfer them to a decision tree. Your decision tree could look something like this:


Image 3: Decision tree for your tool selection

This approach is best suited for a small number of clearly distinguishable requirements – and a small number of end users. If, however, you have a much larger number of users and, with them, more diverse user requirements, a more efficient approach is to conduct a standardized user survey (interviews) and derive your requirements catalog from the answers. Here is how that could look:

  • Reports and analyses need to be available in the browser or in Microsoft Office
  • Users need to be able to create / add ad-hoc calculations
  • Users need to be able to work with hierarchies
  • Users need to be able to work with standard SAP BEx query structures
  • Reports and analyses need to be available online and offline
  • Users need to be able to send reports and analyses by email
  • Users need to be able to filter the data
  • Users need to be able to create their own reports or adapt existing reports
  • Users need to be able to navigate within reports
  • Users need to be provided with drill-down capabilities
  • Reports and analyses need to be highly formatted
  • The information in the reports needs to be highly aggregated
  • Reports and analyses need to meet demanding visual standards
  • and so on

After you have thoroughly gathered all relevant requirements you can then move on to comparing these to the functionality offered by the different SAP BusinessObjects tools. Mark any requirements the tool is able to meet in green and those it fails to meet in red.

The following illustrates this approach using the example of Crystal Reports:

  • Reports and analyses need to be available in the browser or in Microsoft Office
  • Users need to be able to create / add ad-hoc calculations
  • Users need to be able to work with hierarchies
  • Users need to be able to work with standard BEx query structures
  • Reports and analyses need to be available online and offline
  • Users need to be able to send reports and analyses by email
  • Users need to be able to filter the data
  • Users need to be able to create their own reports or adapt existing reports
  • Users need to be able to navigate within reports
  • Users need to be provided with drill-down capabilities
  • Reports and analyses need to be highly formatted
  • The information in the reports needs to be highly aggregated
  • Reports and analyses need to meet demanding visual standards

Once you have measured your requirements against each of the tools you can identify the tool with the most requirements marked in green.

This type of requirements catalog allows you to add more complexity and drill down even deeper, for example to the department or user group level, giving you a very exact breakdown of which tool is best suited for which target group within the company.

You can also translate this approach into a feature matrix (e.g. in MS Excel) to allow for a more comprehensive use. This will initially require more time and effort but used frequently will provide you with a standardized and effective means to accurately pick the tool that satisfies the most requirements.
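Expressed in code, such a feature matrix boils down to counting, per tool, how many requirements it marks green. A minimal Python sketch (the tools and their capability data here are invented for illustration, not an actual assessment):

```python
# Hypothetical requirements catalog and tool capabilities (illustrative only).
requirements = ["browser_or_office", "adhoc_calculations", "hierarchies",
                "high_formatting", "offline_use"]

tool_capabilities = {
    "Crystal Reports":     {"browser_or_office", "hierarchies", "high_formatting"},
    "Analysis for Office": {"browser_or_office", "adhoc_calculations",
                            "hierarchies", "offline_use"},
}

def score(tool):
    """Number of requirements marked 'green' for a tool."""
    return sum(1 for r in requirements if r in tool_capabilities[tool])

# The tool satisfying the most requirements wins.
best = max(tool_capabilities, key=score)
for tool in tool_capabilities:
    print(tool, score(tool))
print("Best fit:", best)
```

The same scoring can be refined by weighting requirements or by evaluating per department or user group, exactly as described above for the drill-down into target groups.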


Regardless of the selection method you choose, you will hardly ever achieve 100 % requirements coverage – there simply is no such thing as a one-size-fits-all solution. It can, however, be rather beneficial to have a strong mix of different SAP BusinessObjects tools as this will enable you to fully exploit the individual strengths of the different tools.

But keep in mind that the methods presented here are examples intended to give you impetus and represent more of a rough approach. When looked at in detail, a thorough analysis and comparison of your specific requirements can make a large contribution towards choosing the right tool.

If you’re interested in exploring the subject further or need a customized analysis, then don’t hesitate to contact us. We will be happy to assist you.

Artur Witzke Senior Consultant
Phone: +49 (0) 7031 714 660 0

Who needs SAP Vora?!

What is SAP Vora good for anyway?

SAP Vora allows you to analyze structured and semi-structured data within an existing Hadoop cluster in a modern interface and to combine these data types with one another as well as with data from SAP HANA.

From a technical standpoint, this is an extension of the Apache Spark execution framework, which has long been established in the world of Hadoop.

This way, SAP Vora gives you a distributed in-memory query engine for both structured and semi-structured data in a Hadoop cluster.

How can I use SAP Vora to my advantage?

There are several options for you to benefit from SAP Vora; it goes without saying that SAP would like to see you use it in the cloud – thus keeping in line with its own cloud strategy:


  • By downloading the Developer Edition from the SAP Development Center, which is available free of charge for SAP partners
  • By downloading the Production Edition from the SAP Support Portal


  • By using the Developer Edition, which is free of charge for SAP partners, through Amazon Web Services (AWS) or SAP Cloud Appliance Library (SAP CAL)
  • By using the paid Production Edition through AWS
  • By using it as a service through SAP Big Data Services
  • By using a bring your own license (BYOL) model (SAP Cloud Platform, AWS)

The SAP Vora Developer Edition in AWS provides complete functionality and with just a few clicks, the environment can be custom-configured according to pre-established parameters.

The underlying Hadoop cluster is a Hortonworks distribution (HDP) with the corresponding tools/software solutions such as Spark, Ambari, Zeppelin, etc. and has a maximum of 4 nodes.

The variant offered by SAP through the SAP Cloud Appliance Library (CAL) is delivered as a pre-configured appliance with functionality that is very similar to AWS. It is best suited for anyone already using SAP CAL.

The Production Edition differs only in terms of upward scalability of the cluster and, of course, in terms of cost.

How does SAP Vora work?

Once you have made your decision regarding a deployment model (on-premises or cloud) you then go on to – depending on your choice – installation and configuration.

The installation process involves three steps:

  1. Determining the number of nodes required for Vora in the Hadoop cluster depending on your
    • availability requirements
    • sizing requirements (CPU, disk vs. RAM, control nodes vs. compute nodes, different sizing for each specific Spark engine, etc.)
    • expected data growth
  2. Deploying SAP Vora Manager Services on the required cluster nodes
  3. Configuring and starting the SAP Vora Services on the cluster nodes using the SAP Vora Manager UI

Once you have successfully completed the installation and configuration in a Hadoop cluster (the HDP, Cloudera and MapR distributions are supported), you can start using SAP Vora. In addition to the above-mentioned SAP Vora Manager for the more administrative side of things, end users are provided with a central GUI by means of a set of tools known as the SAP Vora Tools.

The following tools are available in the GUI:

  • Data Browser: view the contents of tables and views
  • SQL Editor: create and execute SQL statements
  • Modeler: create and modify tables and views
  • User Management: manage user accounts and access to the SAP Vora Tools

The end users can leverage the SAP Vora Tools to analyze data that differs in structure and data type found in the Hadoop cluster. In the next section, we will take a closer look at the analytics options.
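To give you a flavor of the SQL Editor, here is a minimal sketch of how a CSV file in the cluster could be registered as a relational Vora table and queried. The table name, columns, and file path are hypothetical, and the exact `USING`/`OPTIONS` clauses may vary between Vora releases:

```sql
-- Hypothetical example: expose a CSV file in HDFS as a relational Vora table
CREATE TABLE sales (
  order_id INTEGER,
  region   VARCHAR(20),
  amount   DOUBLE
)
USING com.sap.spark.vora
OPTIONS (files "/user/vora/sales.csv");

-- Query it like any other relational table
SELECT region, SUM(amount) AS total
FROM sales
GROUP BY region;
```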

What can I analyze with SAP Vora?

Vora enables you to interpret JSON documents, conduct time series and graph analytics, and use SQL to analyze conventionally structured relational data as well.

In doing so, Vora uses a specific Spark engine with optimized processing for each of the different types of analytics.

The “Doc Store” – NoSQL analytics of semi-structured JSON documents

Starting with version 1.3, SAP introduced the “Doc Store”. With it, you can store semi-structured documents in schema-free tables, which in turn allows you to scale out and to add or delete document fields flexibly.

Once you have created a document store (= collection) in Vora from JSON documents that exist in the cluster, it serves as the basis for a view that can also be enhanced with the familiar JSON expressions. This view is then stored in Vora’s own Doc Store and can be processed in both table and JSON format.
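A collection and a view on top of it might be created along the following lines. This is an illustrative sketch only: the collection name, path, and fields are hypothetical, and the exact DDL depends on your Vora release:

```sql
-- Hypothetical sketch: create a collection from JSON documents in the cluster
CREATE COLLECTION customers
OPTIONS (files "/user/vora/customers.json");

-- A view on the collection, using JSON expressions to reach nested fields
CREATE VIEW customers_de AS
SELECT name, address.city
FROM customers
WHERE address.country = 'DE';
```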

Time series analytics – leveraging efficient compression and data distribution

The Spark engine available for time series analytics exhibits its full strength when the underlying data is spread across as many cluster nodes as possible and can be efficiently compressed.

Based on the time series data stored in the cluster, a “time series table” is created within Vora, for which a unique column with time ranges (= range type) must exist. Along with various other options, you can also specify equidistance properties and additional compression parameters.

In order to be able to analyze time series data, you also need to create a view that can be enhanced with specific table functions (e.g. cross/auto correlation).

With this, you can then conduct the corresponding analyses such as regression, binning, sampling, similarity, etc.
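As a rough illustration of the concepts just described, a time series table ties a range-typed time column to equidistance and compression properties. The clause names and parameters below are assumptions for illustration and will differ in detail from the official Vora DDL of your release:

```sql
-- Illustrative sketch only: a time series table with an equidistant
-- timestamp column (clause names and parameters are hypothetical)
CREATE TABLE sensor_readings (
  ts    TIMESTAMP,
  value DOUBLE,
  PERIOD FOR SERIES (ts)
) SERIES (
  EQUIDISTANT INCREMENT BY 60 SECONDS
  START TIMESTAMP '2017-01-01 00:00:00'
  END TIMESTAMP   '2017-12-31 23:59:59'
);
```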

Real-time graph analytics – analyzing very large graphs

Vora comes with its own in-memory graph database that was specifically developed for the real-time analysis of large graphs. Accordingly, the modelling of the graphs in the graphical metadata viewer is supported by a path expression editor.

Thanks to its in-memory engine, it can handle highly complex graph queries, and you can count on the visualization of the graphs to be state of the art.

The graph analytics engine is particularly suited for supply chain management applications or to visualize elaborate organizational and project hierarchies or business networks.

Relational engine – using SQL to analyze relations

Last but not least, Vora also lets you use SQL to represent and query structured data in the cluster in the form of relational, column-based tables. This approach also uses in-memory data compression.

For relational data that does not need to be kept in memory, Vora also comes with a disk engine. It stores the data in a file on the local node on which the engine runs. As with the dynamic tiering option in HANA, you can also easily join the column-based relational disk tables with the in-memory tables.

Also worth mentioning

  • Once you have completed the registration in the registry, Vora also allows you to use SAP HANA tables along with any views and tables created in Vora. From Vora, you can also write data to SAP HANA.
  • The creation of both level and parent-child hierarchies and the use of joined fact tables are supported.
  • You can use currency translation (standard or ERP) in tables and views.
  • Each engine offers specific partitioning functions and types (hash, block, range) that let you optimally distribute and partition the data structures created in Vora across the cluster.
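The partitioning mentioned in the last bullet can be sketched as follows. This is again a hypothetical example; check the Vora documentation for the exact partition function syntax of your release:

```sql
-- Hypothetical sketch: distribute a Vora table across cluster nodes
-- by hashing the ID column into four partitions
CREATE PARTITION FUNCTION pf_hash (id INTEGER) AS HASH (id) PARTITIONS 4;
CREATE PARTITION SCHEME ps_hash USING pf_hash;

CREATE TABLE orders (
  id     INTEGER,
  amount DOUBLE
)
PARTITION BY ps_hash (id)
USING com.sap.spark.vora
OPTIONS (files "/user/vora/orders.csv");
```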

What data sources and formats are currently supported?

With the SAP Vora Tools, you can process the following files in Vora:

  • .CSV
  • .ORC
  • .JSON
  • .JSG

In addition to the default HDFS storage and the ORC and PARQUET file formats (option (format “orc” / format “parquet”)), the following storage backends can also be specified in the “CREATE TABLE” statement in Vora:

  • Amazon S3 (option (storagebackend “s3”))
  • Swift Object (option (storagebackend “swift”))
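Combining the options quoted above, a table over ORC files in an S3 bucket could be declared roughly like this (bucket path and columns are hypothetical):

```sql
-- Sketch: load ORC data from Amazon S3 instead of HDFS
CREATE TABLE events (
  id      INTEGER,
  payload VARCHAR(200)
)
USING com.sap.spark.vora
OPTIONS (
  files "events/2017/*.orc",
  format "orc",
  storagebackend "s3"
);
```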

Conclusion and outlook

It is hardly surprising that SAP Vora’s main strength lies in the combination with SAP HANA, as this enables you to analyze relational data from HANA along with semi-structured data from your Hadoop cluster. What’s more, Vora gives you an array of analysis options (graphs, documents, etc.) combined into a single tool that would otherwise require you to rely on multiple tools (or different databases) from different Hadoop distributors or third-party vendors.

SAP is planning to support the transaction concept (ACID) in Vora to improve on its consistent data storage capabilities. For 2018, initial support for insert/update/delete statements is already in the works. SAP furthermore plans to add support for SQL Pass-through from SAP HANA to SAP Vora.

All friends of SAP BW will also be glad to hear that SAP plans to support DSOs beyond 2018.

If you’re an SAP partner, you can easily get started with the free Developer Edition to familiarize yourself with the subject – it’s the perfect place to learn more about its configuration and use cases.

Or you can just ask us – we’ll be happy to help!

Andreas Keller Associate Partner
Phone: +49 (0) 7031 714 660 0
Query Designer in Eclipse

How to successfully use the Query Designer in Eclipse

We are very happy to present to you our first guest author: Jürgen Noe. He is Managing Partner of Jürgen Noe Consulting UG (haftungsbeschränkt) in Mannheim. His article is about – as is his new book – the Query Designer in Eclipse. Many thanks again, Jürgen – and now enjoy the read!


Along with support for HANA, the introduction of SAP BW 7.4 also brought a silent interface revolution. Up until this SAP BW release, support in terms of software development environments (SDE) had been limited to the SAP GUI. With the development of the HANA database, however, the Hasso Plattner Institute relied on Eclipse as an SDE from the get-go. This leaves developers of SAP applications with two relevant development environments: when it comes to developing HANA database objects, HANA Studio is the go-to environment, while traditional ABAP applications still require the SAP GUI.

But SAP had another surprise in store for us: What started out with support for the HANA platform only was eventually expanded with other tools for the development and customization of applications on HANA in Eclipse.

One of these tools is BW-MT, short for BW Modelling Tools, which allows developers to move their typical BW customizing tasks entirely to Eclipse. The creation of InfoProviders – indeed, the entire ETL (extraction, transformation, load) process – can now be carried out from start to finish in BW-MT.

The logical consequence was to recreate the central tool for creating BW queries in the Modelling Tools as well. This renders the good old-fashioned Query Designer – a standalone application within the Business Explorer Suite (BEx) – obsolete in all releases starting from SAP BW 7.4.

A quick start to the Query Designer in Eclipse

Against this background, I wrote a book to describe the new functionalities offered by the Query Designer in Eclipse. The book titled “Schnelleinstieg in den Query Designer in Eclipse” and published by Espresso Tutorials in September of 2017 is available in German only.

Query Designer

Click here to purchase the book.

I would like to take this opportunity and use the following paragraphs to outline the book for you:

The book starts out with some basic information about SAP BW and Eclipse in general. In the Eclipse section of the book, I provide a short explanation of how Eclipse is structured and break down essential terms such as plug-in, view, and perspectives. Experienced Eclipse users can skip this chapter.

The third chapter summarizes the BW Modelling Tools. I explain how to call well-known transactions such as the Data Warehouse Workbench in Eclipse and how to create data flows, accompanied by an in-depth description of central views such as the Project Explorer view and the InfoProvider view.


Given the central role that the Project Explorer plays in Eclipse, the book includes a detailed walk-through of how to create a new project and work with it. After that, I explain how to navigate from the Project Explorer to the InfoProvider view, which is shown in figure 1:

Infoprovider View

Figure 1: InfoProvider view

This view allows you to create global, reusable objects such as restricted or calculated key figures, structures, filters, but also variables. You can find them under the Reusable Components folder in figure 1.

Chapter four then features a detailed description and many screenshots of how to create the different reusable elements and an overview of the various setting options along with their impact. The ability to create reusable components from a central location is one of the reasons why I think switching from the old BEx version to the new Query Designer in Eclipse is worth your while. Gone are the times when you had to click your way through multiple windows in the BEx Query Designer in order to, for example, create a formula variable. What’s more, I also noticed major improvements in navigation and usability.

There is yet another area where BW-MT demonstrates its full strength: It has never been easier to jump from one object to another, change it, and view the changes in the initial object right away. Here’s an example: You realize that you need an additional key figure in the query. It used to be that you first had to create it in the Data Warehouse Workbench, add and assign it in the MultiProvider, and restart the Query Designer for it to register the change before you could insert it into the query. Now, you no longer have to deal with the inconvenience of having to jump back and forth between different tools and transactions. With BW-MT, all you change are the views in Eclipse! You simply switch from the Query Designer view to the master data view, where you create your key figure, and go on to the InfoProvider view to add it to your data model in the MultiProvider. Once you have saved it, you can switch right back to the Query Designer view.

And you can do all of this in parallel in a single tool, using multiple windows, however you see fit!

With Eclipse, you can usually view the changes to the MultiProvider right away. And if not, simply hit refresh to have your new key figure available in your query. It has never been so easy!

A detailed look at the query properties

Surely, you are now asking yourself what the Query Designer view, which allows you to create, change, and delete queries, looks like. You can find the answer in figure 2:

Query Filter

Figure 2: Query definition (filter)

As you can see, the query definition is spread across multiple tabs. The General tab allows you to configure general query properties such as how to display repeated key values and much more.

Figure 2 shows the definition of a query filter. As with the BEx Query Designer, the fundamental query structure with columns, rows, and free characteristics stays the same. You can define this structure in the Sheet Definition tab. All of these configurations are carried out using the context menu, which lets you access all relevant functions in the respective views.

The Conditions tab allows you to specify conditions such as “show me all datasets with revenues of more than 1 million euros”.

Use the Exceptions tab to define any exceptions. These exceptions allow you to color code rows, columns or individual cells to highlight outliers or special circumstances.

I’m very fond of the Dependency Structure tab, which provides you with an overview of all other queries in which the variables used in the query at hand also appear.

The Runtime Properties tab lets you configure the performance properties of the query, for example whether to use the delta cache process and many other properties that you are already familiar with from the transaction RSRT.

Chapter five of the book includes many screenshots and examples that serve to explain the various options provided by the different tabs and their respective impact.

So, what does the query result look like?

Once you have created your query, you will want to test and execute it. With BW-MT, the query result is presented in a separate view, as shown in figure 3.


Figure 3: Query result view

You can navigate the query results freely, apply filters, and add or remove drilldowns, just like you did in the past. Once again, you will find everything you need in this view; there is no longer any need to have a Java server installed to produce web output or to switch to the BEx Analyzer to create Excel output.

For more complex queries, you may need two structures:
In the old BEx Query Designer, you had to work with the cell editor. The cell editor was completely overhauled with the new Query Designer and now includes useful options such as copy & paste. It also eliminates any annoying roundtrips to the server to check the entries, which makes working with the cell editor that much faster. Take a look at the cell editor in figure 4:

Cell editor

Figure 4: Cell editor

Last but not least: the variables

The last item on our list are the variables that add dynamics to your queries. The sixth chapter takes a closer look at variables and uses screenshots and simple examples to demonstrate how to create all typical variable types.

The advantages of the new Query Designer in Eclipse:

  • A modern, user-friendly and future-proof interface
  • Any existing BEx Query Designer functions can also be found in the new Query Designer in Eclipse
  • Seamless integration of BW data modelling in a single tool

My conclusion is a wholehearted recommendation to switch to the Query Designer in Eclipse along with BW-MT. It has never been so easy to create and test entire BW data models and queries. To me, the Query Designer in Eclipse is a big step towards the future!

Jürgen Noe Managing Partner Jürgen Noe Consulting UG (limited liability)
Phone: +49 (0) 621 72963337
5 Essential Stages of a Successful Testing Strategy in SAP BI Implementations


How important is testing in SAP BI anyway?

For those who are not yet aware, there are just 3 golden rules to ensuring the success of your SAP BI projects…

Testing, Testing, and more Testing!

Okay, seriously now, there are actually 5 essential stages of a successful test strategy in any SAP BI implementation. They can be summed up in these points:

  1. Unit Testing
  2. Functional Testing
  3. Data Migration and Integration Testing
  4. Performance Testing
  5. User Acceptance Testing (UAT) & Final Sign-Off

“If you think it´s too expensive to do thorough testing, see what it costs when you don’t do it!”

You can almost hear yourself asking, “So, what is all this testing going to cost?!”

Some customers tend to balk at the cost involved in the essential testing of an SAP BI implementation. They may reason that the “quality” should already be included – that is, if the BI Team has done a thorough job in developing all the reporting objects. They may reject allocating scarce budget to testing activities, in favor of realizing more developments before the Go-Live. They may be under pressure themselves to “get the most” out of the BI project team while they are still there – and all of that should fit within the fixed project scope of time-budget-quality.

Despite these forces that seem to support reduced investment in testing, SAP BI project experience demonstrates exactly the opposite. Of course, there is significant additional effort that gets pushed forward in the project timeline when you decide to go for more thorough testing – however, in the end, the overall costs of the implementation are typically reduced.

Studies have shown that the ultimate cost of a software defect can rise exponentially, depending on what point in time it is discovered.

For example, there’s the 1:10:100 rule:
a defect that costs just €1 to fix in the design and initial programming phase, would potentially cost €10 to fix during the UAT, and a whopping €100 if found after the Go-Live!

Our extensive BI project experience has proven, time and again, that uncovering and fixing bugs during initial testing phases saves overall costs – the earlier you start testing and the more you invest in testing up front, the more you stand to save in the overall project costs, over the short and long run!

Some BI project managers estimate these potential savings at 2-3 times the cost! If that is not enough to convince your project stakeholders of the value of more testing, then there’s this: the pressure and effort directly before the Go-Live can be significantly reduced — not to mention many nerves being calmed among the Business users — as the focus on testing starts to bear fruit. It will then be recognized as the most worthwhile investment in quality.

To Test or Not to Test, That Is (Not) the Question

Now that we agree that testing is essential to any SAP BI project, the questions you still need to answer are: what needs to be tested, when, how, and how much?

The basic rule of thumb is, the more you test, the lower your development costs overall.

Doing this testing in a highly structured manner provides additional benefits.
For example, effectively capturing and using the test results for immediate benefits improves the quality of the end product (your final BI reports) over each test cycle iteration and testing stage. This brings your BI project into a process of continuous improvement, which can then carry over to benefit your SAP BI system long after the initial implementation project has concluded.

What exactly does testing mean in SAP BI – what is involved?

Testing is a regular “Quality Check”, a chance to measure the quality of your implementation, at given points in the project timeline and across various aspects. All testing should be supported by tracking and documentation of both SAP BI test cases and defects in an appropriate testing tool. One commonly used testing tool is “HPQC” – HP ALM Quality Center, but of course many others are commercially available.

At Inspiricon, we recognize that each customer’s situation is unique. There is no “standard approach” that fits every scenario. However, there is an established framework that can be used to structure testing activities.
In our SAP BI projects, we focus our extensive experience and expertise from other projects to address your individual challenges, and we create a customized testing strategy and plan to ensure the highest quality for your SAP BI implementation and reduce costs overall.

The following testing activities are not intended to be an all-inclusive list, nor to represent a standard approach. They are simply examples of some of the most important Quality Checks that could be made within those 5 essential stages of testing in SAP BI:

1. Unit Testing:

Good unit testing is the basis for all that comes after it. This initial testing encompasses a solid check of the design work, followed by a step-by-step approach to documenting the data as it moves through the different layers of your BI architecture, in both persistent and non-persistent states. The more bugs that can be detected and (more easily) corrected at this point in testing, the more substantial the cost savings overall!

  • Tech Spec documentation and approval (this could include a description of report design and data flows in Word, a visual overview in PowerPoint, and an Excel with complete mappings for data model)
  • Business User meetings to approve report layouts and headers (in advance of test data availability)
  • IT Internal Testing by the BI developer, which includes a thorough check of data at each stage in the data flow and a direct comparison of source system data vs. SAP BW data vs. data in the final BI report.

2. Functional Testing:

The key to effective functional testing is having resources on your team who understand not only the Business’ requirements and the processes in the ERP source system, but also the data model, data flows, and report design in SAP BI, and how these two worlds are “bridged” or united.

  • Quality Assurance (QA) testing, especially by internal BI project colleagues with expert knowledge of previous reporting system and underlying business processes.
  • Documentation of test cases in testing software, such as HPQC.
  • Pre-UAT (Pre-User Acceptance Testing) testing by Business, including defect fixes and retesting, all documented and tracked in testing tool, e.g. HPQC.
  • IT Internal Testing by BI Team, coordinated by lead developers, and documentation in a central status list or “catalog” of all BI reports.

3. Data Migration and Integration Testing:

Here is where the data migration and the smooth and seamless integration with the source system(s) are verified. Data flows, including InfoPackages, DTPs, transformations, and ultimately the process chains that automate the complete daily data load, must be coordinated and tested for completeness, accuracy, correct error-handling, and stability. And of course, all BI objects that are transported to the Productive system must be verified before the Go-Live!

  • Data migrations verified
  • Full versus Delta data loads verified
  • Daily and periodic process chains optimized and tested
  • Metachains configured and tested
  • Automated BW “housekeeping”/cleanup tasks activated and verified
  • Transported BI objects verified in Productive system before the Go-Live

4. Performance Testing:

Here is where the performance of the system architecture and system sizing gets a thorough check. Initial refresh times and acceptable levels of performance are determined by the Business’ expectations, usually based on the performance benchmarks of their previous reporting systems.

Automated performance testing, provided by 3rd party software such as LoadRunner, covering such aspects as:

  • Peak loads of concurrent users
  • Initial load times for report structure and prompts
  • Refresh times for reports with data
  • Maximum data volumes returned by report queries
  • Storage volumes for saved reports and schedules

5. User Acceptance Testing (UAT) & Final Sign-Off:

Here’s where it really gets interesting! The End Users, who are the intended “recipients” of our final product, finally have a chance to test their reports! They can verify the quality of the report results by performing test cases for data checks and plausibility checks. In addition, they can log any defects and enhancements in the testing tool, and retest any fixes. The End Users have the last say on when a report has received the final “sign off” by Business.

  • UAT Data Checks by Business, including another round of defect fixes and retesting.
  • UAT Plausibility Checks by Business, to compare original reports with migrated/replicated SAP BI reports in terms of purpose, available columns, approximate results. This can be useful, despite increasing differences between the two pools of data: “ongoing” live source system data versus the “snapshot” of originally migrated data plus accumulated UAT test data.
  • Post-UAT phase of continued testing and validation by broader groups of testers with a larger set of data, perhaps more migrated data and more accumulated test data.
  • Final Business “Sign-Off” via test case “Passes” in the testing tool, e.g. HPQC.

Again, these are just some examples of recommended steps in a comprehensive testing strategy for BI implementations, and are not intended to represent a standard approach. Every customer’s implementation project is unique, and should be addressed with a custom testing strategy and plan, tailored to meet the needs of that specific customer and their individual SAP BI scenario.

Quality First!

To sum it all up, a comprehensive testing strategy, supported with effective structuring, tracking, and documentation in the appropriate testing tool, is the best investment you can make to ensure a smooth Go-Live for your End Users, and a high-quality BI system going forward!

The more effort and investment made “up front” in testing, the fewer issues that will surface at your Go-Live – and the lower the overall cost of your SAP BI project!

If you would like help in setting up your testing strategy for SAP BI, give me a call or send me an e-Mail today!

Andrea Taylor Senior Consultant SAP BI
Phone: +49 (0) 7031 714 660 0
Run Better Anywhere with SAP BusinessObjects Mobile


Today’s business users expect their data, reports, and dashboards to be available everywhere, all the time. Not just when they are sitting at their desks, but also on mobile devices such as mobile phones and tablets.

SAP BusinessObjects Mobile and the SAP BusinessObjects BI Platform can support this request.

However, the way a user interacts with a mobile dashboard differs a lot from using a dashboard on a laptop or desktop computer — the smaller mobile screen means less space for visualizations, filters, buttons, and other components.

How to Deploy a WebI Report on SAP BusinessObjects Mobile App

In this article we will focus on deploying a WebI report on the SAP BusinessObjects Mobile app. This is the preferred option, as the app provides a secure and easy way to connect to the BI platform and run WebI reports, Design Studio applications, and other BI documents such as Crystal Reports and SAP Lumira documents. Also important is the support for Lumira 2.0, which comes with the latest release of SAP BusinessObjects Mobile 6.6 for iOS.

iOS SAP Business Objects Mobile 6.6

Currently, iOS users enjoy a few more features and technical improvements than users of the Android version, but most capabilities that meet business users’ needs are available on both platforms.

Below you can see a WebI report in desktop use; at the end of this article, we will have the same report on mobile as the result (Fig 1.1):

BI Launchpad

Fig 1.1

Of course, developing a reliable and optimized mobile application requires a different approach than, for example, developing a dashboard for desktop use. Still, you can apply the following steps for deploying on SAP BusinessObjects Mobile.

4 Steps to Your App

>> Note: To use this app, downloaders must be users of the SAP BusinessObjects Business Intelligence platform, with mobile services enabled by the IT department.

  1. Download SAP BusinessObjects Mobile app from Apple Store/Google Play
  2. Configure the connection within the app to your BO Server

Let’s assume that you have already installed the app on your phone. The next step is to configure the connection to your BO Server (Fig 1.2).

Create new connection

Fig 1.2

We will choose the BOE connection type because our target is the BI Platform. To find the CMS name, we log in to the Central Management Console, go to Servers – Server List, and look up the host name of the Central Management Server (Fig 1.3).

Central Management Console

Fig 1.3

  3. Deploying the desired app/report from BO Launchpad

The next step is to log in to the BO Launchpad. With admin authorization, we have access to a folder called “Categories” where we can see all applications and reports that are currently available on mobile (Fig 2.1).

BI Launchpad

Fig 2.1

We browse to our folders, choose the report we want to deploy, right-click on it, and select “Categories” (Fig 2.2).

BI Launchpad

Fig 2.2

A pop-up with “Corporate Categories” appears; we expand it, select Mobile, and save (Fig 2.3).

BI Launchpad

Fig 2.3

  4. Accessing our mobile application, we can see the report that we have deployed (Fig 2.4).

Report deployed

Fig 2.4

What Is The Result Of These Steps?

A report that can be accessed anywhere and anytime from your mobile device, with the possibility to view it in landscape mode, too (Fig 3).


Fig 3

SAP BusinessObjects Mobile comes with a push notification feature that enables mobile users to receive notifications even when their user session is not active or the application is not running on the device. To enable this feature, you need to make a few changes in the settings, both on the server side and on the client side (Fig 4.1).

Push notification

Fig 4.1, Source: YouTube

Conclusion And Outlook

The ability to run smarter “anywhere” is about making informed decisions with instant access to targeted, personalized information wherever you are – in the board room, at a customer site, or on the shop floor – and about analyzing information without the need for additional training.

SAP BusinessObjects Mobile provides instant access to business intelligence (BI) reports, metrics, and right-time information. Of course, there are many more aspects of SAP BusinessObjects Mobile worth discussing, as well as the right approach to developing mobile apps – we will cover these in an upcoming article.

Cristian Moldovan Junior Consultant SAP BI
Phone: +49 (0) 7031 714 660 0