4 steps to create a universe – the Information Design Tool, part 2

To continue our journey toward a universe full of data, let's walk through the steps of this mission. The major steps in the process are:

  1. Create a new project in IDT (Information Design Tool).
  2. Create a connection between the HANA system and our project from IDT.
  3. Create a data foundation and a business layer.
  4. Publish to a repository.

First of all, we have to define some basic terms you need to know when working with universes and IDT.

  • Connection: defines how a universe connects to a relational or OLAP database. Local connections are stored as .CNX files, and secure connections are stored as .CNS files.
  • Data Foundation: a scheme that defines the relevant tables and joins from one or more relational databases. The designer enhances the data foundation with contexts, prompts, calculated columns and other SQL definitions. It can represent a basis for multiple business layers.
  • Business Layer: represents the first draft of a universe. When the business layer is complete, it is compiled together with the connection (or connection shortcuts) and the data foundation, then published and deployed as a universe.
  • Universe: the compiled file that includes all resources used in the definition of the metadata objects built in the design of the business layer.

Step 1: Create a new project in IDT

No matter if your plan to discover the universe is big or small, you need to think in terms of a whole project. Technically speaking, the journey to a universe starts with a new project: open IDT, go to File -> New, and select Project. The project is your box full of ideas, a workspace where you can build your universe brick by brick.

The bricks are represented by specific objects like data foundation or business layer.


Figure 1: How to create a project in IDT.


The project is the home for all resources used to build a universe; it manages the local objects needed for universe creation.

A project can contain multiple objects such as data foundations, business layers, and data source connections.

Figure 2: How to create a project: enter name.



We enter the project name (in this case, Test) and set the project location in our workspace.

You can also edit an existing project: go to File -> Open Project and pick it from the Local Projects area.

Another very useful feature is resource locking: it lets you inform other developers that you are working on a resource.

Step 2: Create a connection between the HANA system and our project in IDT.

Once the project is created, we have to assign a connection to be able to connect to a data source.

This connection defines how the source system provides data. There are two types of connections: relational and OLAP.

A Relational connection is used to connect to the database layer to import tables and joins.

An OLAP connection is used to connect to the multidimensional model like an Information View in SAP HANA.


Figure 3: How to create a relational connection.


To create the connection, we enter the system details, the user name, and the password.

After we have created the connection, we will test it and then we have to publish it to the repository to make it ready to use for our universe.


Figure 4: How to publish a connection.


To publish the connection, you have to provide the BO connection parameters (repository, user name, and password).

Step 3: Create a data foundation and a business layer.

Once we have an available connection to the repository, we can create a data foundation for our universe. The data foundation layer lets you import tables and joins from different relational databases.

In Information Design Tool, there are two types of Data Foundation: Single-source enabled or multi-source enabled.

A single-source data foundation supports exactly one relational connection, which can be local or secured, so a universe designed on it can be maintained locally or published to the repository.


Figure 5: How to create a Data Foundation.


Multi-source enabled Data Foundation supports one or more relational connections. Connections can be added when you design the Data Foundation or even later. Multi-source enabled Data Foundation is designed on secured connections published in a repository.

We have to define the data foundation's technical name and click Next.

An interesting option in IDT is that you can create a universe on multiple sources. In the next screen, you can see a list of available connections to create the universe (both .cnx and .cns).

A .cnx connection is used when we do not want to publish the universe to a central repository (i.e., for a local universe).

If you want to publish the universe to any repository (local or central), you have to use the .cns (secured) connection.
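The decision between the two connection types boils down to one question: will the universe be published to a repository? A minimal, hypothetical sketch (the function name is ours, not an IDT API):

```python
def connection_type(publish_universe_to_repository: bool) -> str:
    """Return the IDT connection file type needed for a deployment.

    .cnx = local (unsecured) connection; .cns = secured connection
    published in a repository.
    """
    if publish_universe_to_repository:
        # Universes published to a repository require a secured connection.
        return ".cns"
    # A purely local universe can work with a local connection.
    return ".cnx"

print(connection_type(True))   # secured
print(connection_type(False))  # local
```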

Once we click the Finish button, we have a data foundation.


Figure 6: What does a data foundation look like?


Congratulations, you are already halfway to your own universe!

So far we have a project, a connection, and a data foundation. Now we need the other half of the universe: the business layer.

To create it, go to the project menu, choose New, and select Business Layer. Here we select the connection as well as the data foundation on which the business layer should be built.


Figure 7: How to create a Business Layer.


The business layer contains all classes and objects; here you can also check the dimensions and measures defined in the universe. Publishing the business layer to the repository completes the creation of the universe.

Once you have created the business layer, you have to decide which fields serve as dimensions (attributes/characteristics) and which serve as measures (key figures).

To give the universe a better layout, we can create a folder for each dimension and one for measures. By default, measures are also treated as attributes.
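That sorting step can be sketched like this (field names and roles are invented examples, not IDT metadata):

```python
# Hypothetical sketch: sort business-layer fields into dimension and
# measure folders, the way a designer would in IDT.
fields = {
    "CUSTOMER": "dimension",   # attribute / characteristic
    "REGION":   "dimension",
    "REVENUE":  "measure",     # key figure
    "QUANTITY": "measure",
}

folders = {"Dimensions": [], "Measures": []}
for name, role in fields.items():
    folders["Measures" if role == "measure" else "Dimensions"].append(name)

print(folders)
```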

Step 4: Publish to a repository.

Our journey to a universe full of data approaches its final step. We have all the elements, and there is just one click left to the finish line. After an integrity check, we save and publish the universe to a repository.


Figure 8: How to publish the universe to a repository.

This article is inspired by tutorials provided by Tutorials Point.

Ionel-Calin Oltean Consultant SAP BI/BW
Phone: +49 (0) 7031 714 660 0

To the top with SAP S/4HANA – how to conduct business in real-time

The economic world is getting more and more complex and connected. Core processes have to be integrated, simplified, and made available in real-time for you to be ready for the challenges of digitalization. SAP S/4HANA offers every possibility to simplify business processes in a digital world, delivering immediate added value combined with maximum usability. The application is based on the in-memory platform SAP HANA. Together with SAP Fiori, it enables a user experience that was previously found only in consumer applications.

There are two options available for SAP S/4HANA: cloud and on-premise. Integrate your business processes and departments and step into the digitalized world – with S/4HANA you can create real added value.

Read more about SAP HANA in our blog.

Interested in SAP Fiori? Find out more on our website and in our blog.

SAP S/4HANA – the next generation

SAP S/4HANA is an in-memory ERP suite that supports the various technologies of the digital world, among them:

  • Big Data
  • Mobile
  • Real-time analyses
  • IoT (Internet of Things)
  • as well as solutions of other software providers and business networks.

The ERP suite gives you, as a business user, insights that can be acted on immediately. The integrated solution proposals go far beyond simple automation.

S/4HANA offers you several fundamental advantages

  • With S/4HANA you benefit from a powerful in-memory platform. Its special architecture allows analytical (OLAP) and transactional (OLTP) processes to run simultaneously – in real-time.
  • The data model has been radically simplified: aggregated and redundant data layers are eliminated. You benefit from maximum flexibility, a reduced memory footprint, and a multifold increase in performance.
  • The scope and complexity of the coding have been tremendously reduced with S/4HANA. The simplified database design reduces the data footprint by up to 90 percent, improving the speed of analyses as well as the performance of transactions and simulations – in real-time.
  • The intuitive user interface of SAP Fiori visualizes all sources which are brought together in real-time. This also applies for mobile scenarios – so that usability is guaranteed for all devices. SAP Fiori offers a personalized, role-based and uniform user guidance.
  • SAP S/4HANA paves the way for optimized and innovative core business processes. Among these are procurement, production, stock management, sales and logistics, quality control as well as finance.
  • Furthermore, it offers you the flexibility needed to transform your business processes.

Do you have any questions concerning SAP S/4HANA? We are looking forward to your call or your message.


Jörg Waldenmayer Team Lead Technology
Phone: +49 (0) 7031 714 660 0

Fresh news from SAP Planning Tools

Author: Gabriela López de Haro, Senior BI Consultant

Today we would like to talk about planning. In particular, we want to focus on what we can expect from the new HANA Planning Application Kit and on its differences from and advantages over standard Integrated Planning.

But before we start, let's first talk about the importance of planning:

Why planning?

Planning improves a company's decision-making process. With the help of current and past data, plus some predictive analysis, companies can set goals for the future and anticipate what might happen. Comparing actual and plan data then helps to improve the planning process itself.
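The actual-versus-plan comparison at the heart of this process can be illustrated with a tiny sketch (the quarterly figures are invented):

```python
# Invented example: compare plan and actual revenue per quarter and
# compute the absolute and relative variance used to refine the plan.
plan   = {"Q1": 100.0, "Q2": 120.0, "Q3": 110.0}
actual = {"Q1":  95.0, "Q2": 130.0, "Q3": 110.0}

for quarter in plan:
    diff = actual[quarter] - plan[quarter]
    pct = 100.0 * diff / plan[quarter]
    print(f"{quarter}: variance {diff:+.1f} ({pct:+.1f}%)")
```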

Overview on SAP BI Planning Tools

In order to plan, it is necessary to have the ability to manually enter data in the system (input-ready query). This data entry functionality can be achieved in several different ways and with different tools, both in BI as well as in the transactional system.

What differentiates planning tools from simple data entry is that they also offer the possibility to analyze data and build complex planning scenarios. This is achieved with planning functions that range from simple ones (copy, delete, repost, currency conversion) to more complex customer-defined functions. The SAP BI planning tools are:


Evolution of BI Planning

Business Planning and Simulation (BPS)

Unlike Integrated Planning, BPS is not fully integrated into the BI system. Even though it was replaced by Integrated Planning, some planning functions still run partially on BPS (e.g., cost center retraction).

  • Planning levels, planning functions and sequences are created in transaction BPS0
  • Manual entry layouts, filters, and variables are also created in transaction BPS0 and are not aligned with BEx queries, filters, or variables.
  • The frontend is also not integrated with the BI tools. Instead, it is necessary to create a web interface in transaction BPS_WB.

BI Integrated Planning

This solution is fully integrated into the BI system. Data providers are created in the Data Warehousing Workbench, input-ready queries are created with the BEx Query Designer, and all other planning-specific functionality (planning functions, planning sequences, data slices, and characteristic relationships) is defined in the Planning Modeler. It can be migrated completely to an SAP HANA database, with the benefit of improved (in-memory) read access.

  • Manual inputs are created directly in BEx Query Designer. Variables and filters correspond to BEx Variables and Filters.
  • Planning Functions, Sequences, Characteristic Relationships, Data Slices, etc. are created in transaction RSPLAN.
  • Migration to a SAP HANA Database can be achieved without adjustments, leading to a better response time and hence benefits on the user side. This is because data is now saved in the In-Memory Database. Nevertheless, aggregations and calculations continue to take place in the Application Layer.

SAP Planning Application Kit on HANA

This is the newest planning solution. The difference from the solution above is that the planning engine is in-memory optimized: the business logic moves from the application layer into the in-memory database, reducing data transfer between the layers and significantly improving performance.

  • BW-IP needs no changes to work on HANA; however, to benefit from the improved performance of the calculation engine, planning functions must be migrated.
  • In-memory execution of the calculations is enabled by activating the Planning Application Kit. SAP Note 1637199 explains the activation procedure.
  • Most planning function types are already available in the Planning Application Kit (Copy, Delete, Revaluate, etc.).
  • Still, not all planning functions can be executed in-memory, in particular planning functions based on ABAP exits. To execute those functions in-memory, it is necessary to write an SQLScript.
  • To build a planning function type based on SQLScript, you create a function type based on an ABAP class; the SQLScript is called from its Execute method.
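To make the idea concrete, here is what a simple "copy" planning function does logically. In PAK this logic would run in-memory as SQLScript; the Python below (with an invented record layout) is purely an illustration:

```python
# Sketch of the logic of a COPY planning function: duplicate all records
# of a source version into a target version.
records = [
    {"version": "ACTUAL", "cost_center": "CC10", "amount": 500},
    {"version": "ACTUAL", "cost_center": "CC20", "amount": 300},
]

def copy_version(data, source, target):
    copies = []
    for rec in data:
        if rec["version"] == source:
            new = dict(rec)          # copy the record...
            new["version"] = target  # ...and reassign the version
            copies.append(new)
    return data + copies

result = copy_version(records, "ACTUAL", "PLAN")
print(len(result))  # 2 actual records + 2 copied plan records
```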

Comparison old/new

In summary, with the new HANA Planning Application Kit we can expect the same planning functionality that was available in standard IP, but with improved performance thanks to the in-memory database.

Read more about planning and SAP HANA in our blog.

Finally, we would like to mention that today's post has not covered BPC. We will keep this subject for a future post.



Being successful with Inspiricon Best Practices: Upgrade SAP BW 7.3 to 7.4 at a global pharma group

Today, we present one of Inspiricon's upgrade projects: an SAP BW upgrade from 7.3 to 7.4, carried out in 2015 and 2016.

Our client's requirements were the following:

  • Release change to SAP BW
  • Increase of system stability
  • Integration of HANA-optimized data models

Please find a summary of our approach here:

Our initial position

Before the upgrade, our client had the release SAP BW 7.3 with different service packages. The upgrade to version 7.4 was necessary in order to

  • enhance system performance
  • develop mobile reporting and applications with Fiori
  • keep SAP systems up to date (bringing in the latest SAP information)

The challenge for Inspiricon

Because our client operates globally, our project team faced a heterogeneous IT landscape. The existing reporting is also used worldwide; the BI system therefore has to make data available globally at group level to ensure uniform reporting.

The following challenges had to be solved by the Inspiricon team in this project:

  • Connected systems and related dependencies had to be taken into account
  • The existing landscape is very complex
  • Many projects were running simultaneously on the system to be upgraded during the release change. Organizationally speaking, information exchange between the different projects had to be coordinated promptly across all time zones.
  • No interruption of ongoing operations (“silent upgrade”)
  • Test management: ongoing coverage of all existing system functionalities

The project team comprised up to 20 people, two of whom were nearshoring resources (BI-IT). The Inspiricon nearshoring team supported the onsite team not only with its long-standing experience and quality standards but also with a significant cost benefit. Read more on nearshoring here.


Inspiricon Project Approach


Successful project closing

Go-live was at the end of May 2016, followed by an intensive three-week hypercare phase. During this time, our Inspiricon team supported the client until the system ran as performant and stable as required.

Thanks to the release change, our client can now fully use the latest HANA objects and benefit from their performance advantage. The customer's requirements were fully met, and follow-up projects have already been commissioned.

Inspiricon Best Practices

We can learn a lot from successful projects, which is why we distilled this upgrade into best practices for future projects. They include the following and can be transferred to other projects:

  1. Upgrade process (Approach, Upgrade Project Timeline)
  2. Upgrade of technical plan (Pre-upgrade, Upgrade and post-upgrade checklists)
  3. Resource Matrix (Project Organigram, Roles and Responsibilities, Communication Matrix)
  4. Challenges and recommendations

In addition to this blog post, we recommend reading our best practice guide “Inspiricon Best Practices in use” (only available in German).

Interested in SAP BW? You are welcome to contact us directly – your contact person for SAP BW is Michael Schmer.


Istvan Boda SAP BI Consultant
Phone: +49 (0) 7031 714 660 0

Do you already know the SAP HANA Cloud Platform?

Big data, with its progressive interlinking and real-time processing, is changing companies tremendously. Information-driven services and products enable completely new user experiences. SAP HANA, as an in-memory platform, offers all the infrastructure components needed for such technology – no matter whether in the cloud or on-premise.

SAP HANA Cloud Platform introduces unforeseen possibilities

With the SAP HANA Cloud Platform (SAP HCP), SAP opened the door to cloud-based technologies. This PaaS (Platform as a Service) offering helps companies develop innovative applications and put them into operation. Because it is based on SAP HANA, companies can use numerous database and application services in the cloud, for example:

  • integrational and security-related functionalities,
  • support of several development languages as well as HTML5-based interfaces that are intuitive to operate,
  • search functionalities,
  • processing options of geo-referenced information,
  • analysis tools,
  • offline functionalities.

Added value of SAP HCP for users

SAP HCP offers users huge added value. Users can make their application environments fit for future requirements. The PaaS offering provides a powerful development platform and also serves as the runtime environment for individual extensions of IT products. Existing and new solutions are easy to integrate, whether they are SAP products or not. The SAP HANA Cloud Platform decouples the life cycle of customer-specific adjustments from that of the standard applications, which facilitates the switch to the business suite SAP S/4HANA:

  • Enhancements are no longer tied to the core application.
  • Default settings present in corporate solutions are not affected.
  • This protects existing business processes and improves agility.
  • Existing in-house developments can easily be moved to the SAP HANA Cloud Platform.

SAP HANA Cloud Platform: a huge improvement

The SAP HANA Cloud Platform is operated and maintained in SAP data centers. As a customer, you can therefore concentrate on your core competencies – SAP takes care of the rest.

You can test SAP HCP at any time. Once you are familiar with the offering, you can build your own applications. Companies can also fall back on customized service packages at different scales.

The SAP HANA Cloud Platform sets a new course for businesses. The digital transformation encompasses sales, service, marketing, logistics, and Industry 4.0. In the future, billions of devices will be connected and exchange data – keyword Internet of Things (IoT).

SAP HCP also offers functionalities for managing devices. Furthermore, device-based messaging is possible. IoT application enablement with data modelling is also integrated. Machine data collection and application data can be received, processed and evaluated in real-time in the cloud. With additional services, sensor-based information can be analyzed and integrated in applications in real-time. Thus, new processes, services and business models are possible. SAP HCP is a signpost that can lead to business success in a digitalized economy.

You want to know more? Or are you looking for someone who supports you to introduce SAP HCP? Get in touch and we will discuss our next steps together.


Jörg Waldenmayer Team Lead Technology
Phone: +49 (0) 7031 714 660 0

How to shrink your growing HANA Database: please meet Inspiricon HANA In-Memory Space Reduction

Businesses today are growing tremendously, and dealing with large amounts of data is becoming a challenge for the data warehouse. According to Gartner's survey on data center infrastructure trends and challenges, “Data growth is the largest data center hardware and infrastructure challenge for large enterprises”.

To face these challenges, many organizations have found relief in in-memory data platforms such as SAP HANA, whose hybrid structure processes both transactional and analytical workloads completely in-memory. With the new release of SAP HANA, SAP introduces the concept of multi-temperature data and data lifecycle management, storing data efficiently on different types of storage devices based on its temperature, as shown in Figure 1. Data that is accessed frequently (hot data) is stored in-memory in HANA, while warm data can be stored on an extended storage host (dynamic tiering). Cold data is moved to less expensive storage devices outside the HANA database for read-only access, known as nearline storage (NLS, read more here).
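The temperature classification can be sketched as a simple rule. The 30-day and 365-day thresholds below are invented for illustration; in practice they are business decisions, not SAP defaults:

```python
def data_temperature(days_since_last_access: int) -> str:
    """Classify data by temperature (illustrative thresholds only)."""
    if days_since_last_access <= 30:
        return "hot"    # kept in HANA memory
    if days_since_last_access <= 365:
        return "warm"   # candidate for extended storage / dynamic tiering
    return "cold"       # candidate for nearline storage (NLS)

print(data_temperature(5), data_temperature(90), data_temperature(800))
```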

Figure 1. Multi-Temperature Data Classification



One of our customer's project goals was to move less frequently accessed data to disk, where it would remain accessible, instead of keeping it in HANA memory. The customer wanted to optimize HANA memory consumption, thereby reducing license costs and improving system stability. In search of a suitable solution, we first analyzed and investigated a possible SAP offering:

  • Dynamic tiering unfortunately does not fit our customer's scenario, due to additional license costs and because many features and functions are not yet fully developed and will only become available in future releases.

Our research then focused on the Inspiricon HANA In-Memory Space Reduction project, in which we evaluated different approaches, such as:

  • Migration of the existing data models to HANA optimized ones (LSA++).
  • Identifying and deleting outdated data/objects that are not required for reporting any more.
  • Agreeing with the business which historical data to unload from HANA memory to HANA disk space.
  • HANA DB compression.

The approaches listed above exploit SAP HANA features, e.g. partitioning of Advanced DSOs. We have developed a methodology, based on Inspiricon best practices, that implies no additional license costs and does not increase the TCO (total cost of ownership).

As a result, we introduced data lifecycle management (DLM), which covers defining data retention needs together with business and IT, and the methods to apply to a given data model depending on its complexity and the business/IT requirements.

Figure 2. Inspiricon Methodology for DLM


Our project was based on a detailed system analysis identifying the biggest data objects that would be possible candidates for minimizing HANA memory consumption.

Figure 3. DLM approaches to reduce memory consumption



This solution helps to better organize the existing data model, either by deleting data that is stored multiple times in different InfoProviders or by eliminating BW objects in layers that become obsolete on HANA. As a consequence, HANA memory consumption is reduced.

Unload Application

This application was developed by Inspiricon specifically for this customer scenario. Its main purpose is to unload certain objects' tables from HANA memory to HANA disk space; the HANA database automatically loads them back into memory once the data is accessed again.


This approach decreases memory consumption by unloading only a number of predefined partitions and excluding the others, typically based on time characteristics such as calendar year/month or fiscal period. Partitioning is defined in BW, performed at the database level, and can be applied to existing DSOs (with restrictions) or to Advanced DSOs, depending on the customer's business requirements.
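The partition-selection rule can be sketched like this (partition names and the cutoff year are hypothetical):

```python
# Hypothetical sketch: pick which calendar-year partitions of an
# Advanced DSO to unload from HANA memory to disk. Only partitions
# older than the cutoff are unloaded; recent ones stay in memory.
partitions = [2013, 2014, 2015, 2016, 2017]
cutoff_year = 2016  # keep the current and previous year in memory

to_unload = [p for p in partitions if p < cutoff_year]
to_keep   = [p for p in partitions if p >= cutoff_year]

print("unload:", to_unload)  # older, rarely accessed partitions
print("keep:  ", to_keep)
```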


  • As a result, the Inspiricon HANA In-Memory Space Reduction project reduced our customer's HANA memory consumption by 20% without additional license costs or resources.
  • The main deliverables were better loading performance, improved system stability, and a leaner DW landscape – considerably decreasing the annual SAP HANA memory license costs.
  • The added value of our consultancy is delivering the solution that best matches our customer's needs, applying our know-how, effective consulting, and partnership.


Claudio Volk Member of the Management Board
Phone: +49 (0) 7031 714 660 0

Keep your data cold

We all know that information is a major strategic asset. But what type of data is more important and more valuable? Turning information into an effective, long-lasting asset is key to running your business efficiently. To achieve this, companies invest heavily in BI tools for analysis, reporting, and archiving. One of the most innovative archiving solutions, which we would like to introduce today, is nearline storage (NLS): it keeps your data “cold”, significantly reduces the BW database size and administration costs, and improves performance.

What is cold data?

Nearline storage (NLS) is a storage and archiving concept positioned between online storage and offline storage.

Types of data storage


This solution stores less frequently accessed (cold) data in compressed form with fewer backups, reducing its cost. The technology is based on information lifecycle management (ILM): backups are secured and data is stored on less expensive storage devices. The most frequently accessed data (hot data) is the most valuable for the company, so new data is stored on high-performance, more expensive storage devices. Over time, data becomes less important and less frequently accessed, and is moved to a cost-efficient nearline repository where it remains accessible for read-only purposes.

Hot and cold data


Is it worth it?

Choosing this solution brings considerable improvements, for example in query performance. To illustrate the advantages, we present a case study based on a real customer scenario showing the query performance you can expect when the NLS solution is applied. Two tests with different types of queries compare query runtimes under normal conditions with those achieved using the NLS solution. The results are impressive: faster access to the archived data (on average 5 times faster), reduced database administration, and improved performance.
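The "five times faster on average" figure is simply the mean ratio of the two runtimes. With invented runtimes (not the customer's measurements), the calculation looks like this:

```python
# Invented runtimes (seconds) before and after NLS archiving, to show
# how an average speedup factor is computed.
runtime_before = [50.0, 40.0, 60.0]
runtime_after  = [10.0,  8.0, 12.0]

speedups = [b / a for b, a in zip(runtime_before, runtime_after)]
avg_speedup = sum(speedups) / len(speedups)
print(f"average speedup: {avg_speedup:.1f}x")  # 5.0x with these numbers
```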

NLS Case Study 1


NLS Case Study 2


In addition, implementing nearline storage reduces the cost of managing growing data streams. As you can see below, the database grows bigger year by year. By archiving cold data with NLS, you can save up to 40% of your database storage – a prominent cost reduction. The benefits are clear and measurable: the NLS solution removes less frequently accessed data from the online database and compresses it by 90%, reducing storage costs while keeping, and even increasing, performance.
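To see how the two figures relate: if cold data makes up some share of the database and NLS compresses the archived copy by 90%, the total savings follow directly. The 45% cold-data share and 10 TB size below are invented examples, not customer figures:

```python
# Invented example: how NLS savings add up.
db_size_tb = 10.0
cold_share = 0.45          # hypothetical share of cold data
compression = 0.90         # NLS compresses archived data by 90%

cold_tb = db_size_tb * cold_share
online_after = db_size_tb - cold_tb           # cold data leaves the online DB
nls_footprint = cold_tb * (1 - compression)   # what remains on NLS storage

saved = db_size_tb - (online_after + nls_footprint)
print(f"online DB shrinks by {cold_share:.0%}; "
      f"net storage saved: {saved:.2f} TB")
```

With these assumed numbers, roughly 40% of the total storage is saved, which is in the same ballpark as the figure quoted above.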

Managing data growth


To wrap things up, I would like to emphasize the importance of NLS for keeping your BW fit. Even if your data grows fast, there is no reason to worry when you have NLS in your pocket: it significantly reduces your BW database size, hardware, and administration costs, and less effort is needed for backup and disaster recovery.

Additionally, when considering SAP HANA in the future, Nearline Storage helps the user to benefit from a leaner BW system, which might lower the TCO for the future HANA migration. The size of the working memory in use is reduced considerably – allowing huge cost savings for licenses and hardware for SAP BW on HANA.

Any questions regarding nearline storage? Contact us or make an appointment. You can also request a market research report with NLS case studies by sending us an e-mail with the subject “Market Research NLS”. We look forward to hearing from you!


Claudio Volk Member of the Management Board
Phone: +49 (0) 7031 714 660 0