Cognos Migration SAP BW on HANA

How to leverage a standardized SAP solution to harmonize processes across the enterprise and ensure comparability

Hey there:

Today we would like to share with you yet another successful project in our portfolio. For this project, we were tasked with migrating an existing Cognos reporting solution to SAP BW on HANA with SAP BusinessObjects as the frontend.

The customer that sought our support specializes in the manufacturing and distribution of electronic components. As part of an international group, it is one of several subsidiaries, all of which rely on different ERP and business intelligence solutions. Faced with the group’s continued growth and the ever-increasing complexity that goes along with it, the customer’s proprietary ERP solution was reaching its limits in terms of both functionality and reporting capabilities.

THE PROJECT

Working from a newly introduced and standardized SAP ERP solution with SAP BW, Inspiricon was able to replace the existing Cognos reporting solution.

There were a number of reasons that motivated this decision:

  • For one, it would have been too costly and time-consuming to keep developing the reporting solutions that, for the most part, had been developed in-house and customized to specific needs.
  • Also, maintenance and support had become almost unmanageable.
  • What’s more, the group was no longer able to maintain the consistency of KPI purpose and content across its different companies, resulting in the inability to compare their processes and results.
  • The sheer number of different interfaces with other systems drove up costs and significantly increased the error rate. Reporting to the parent company, in particular, had become a tedious and time-consuming endeavor. It required even more interfaces, because reporting was tailored exclusively to customers on the basis of a custom Cognos reporting solution that had been implemented on top of everything else.
  • Internal expertise regarding the existing solutions was limited to a small number of people inside the company. Losing this technical knowledge to employee turnover was an ever-present threat.

THE SOLUTION

Since another subsidiary was already using SAP products, the new ERP solution was already available in the group’s system landscape.

The project, however, was not just about bringing the ERP system up to speed; the group was also eager to take its BI system to the next level. And it wanted it to be an SAP solution, prompting the decision to replace the existing Cognos reporting solution with SAP Business Warehouse on HANA.

The project was launched on release SAP BW 7.4 on HANA; during the project, a migration to SAP BW 7.5 on HANA was carried out. Here too, SAP BW was already being used by another subsidiary and was therefore readily available.

Products in the SAP BusinessObjects family were to be used for the frontend as well. These adjustments were intended to bring about the following changes and benefits:

  • Standardized data models that can be leveraged by every company in the group, allowing its globally dispersed service centers to provide cost-optimized support.
  • Consistent KPIs that guarantee comparability.
  • A smooth transition for users of the Cognos solution by means of a like-to-like migration.
  • Elimination of unnecessary interfaces.
  • Improved and accelerated flow of information along the value chain.
  • Increased responsiveness for better corporate management, which also improves competitiveness.

Our solution consists of Web Intelligence reports based on SAP BW on HANA along with standardized data provisioning from the operational SAP ERP system. The following illustration outlines the project’s structure:

Illustration: Project Structure

By replacing its existing ERP and BI systems, our customer was able to measurably improve its business. Not only can users now access reports and data much faster, substantially reducing time to decision, but they can also customize their reports to a certain degree in a self-service BI context. Another big plus is a significantly improved security posture thanks to centrally managed access authorizations. Last but not least, we were able to reduce license costs and therefore improve cost-effectiveness for the customer.

The introduction and standardization of SAP ERP and BI solutions typically require significant investments. By having employees from our nearshoring location in Cluj-Napoca, Romania, contribute to the project, we were able to drastically cut these costs. At its peak, up to six colleagues from Cluj-Napoca were involved. We coordinated through daily briefings in which our team communicated remotely via video and audio.

Author
Claudio Volk Member of the Management Board
Phone: +49 (0) 7031 714 660 0
Email: info@inspiricon.de

4 steps to create a universe – the Information Design Tool, part 2

To continue our journey toward a universe full of data, let’s go through the steps of this mission. The major steps in the process are:

  1. Create a new project in IDT (Information Design Tool).
  2. Create a connection between the HANA system and our project in IDT.
  3. Create a data foundation and a business layer.
  4. Publish to a repository.

First of all, let’s define some basic terms that you need to know when working with universes and the IDT.

  • Connection: defines how a universe connects to a relational or OLAP database. Local connections are stored as .CNX files, and secure connections are stored as .CNS files.
  • Data Foundation: a schema that defines the relevant tables and joins from one or more relational databases. The designer enhances the data foundation with contexts, prompts, calculated columns, and other SQL definitions. It can serve as the basis for multiple business layers.
  • Business Layer: represents the first draft of a universe. When the business layer is complete, it is compiled with the connections or connection shortcuts as well as the data foundation, then published and deployed as a universe.
  • Universe: the compiled file that includes all resources used in the definition of the metadata objects built in the design of the business layer.

Step 1: Create a new project in IDT

No matter if your plan to discover the universe is big or small, you need to have a whole project in mind. Technically speaking, the journey to a universe starts with a new project. We can create one by opening the IDT, going to File -> New, and selecting Project. The project will be your box full of ideas, a workspace where you can build your universe brick by brick.

The bricks are represented by specific objects like data foundation or business layer.


Figure 1: How to create a project in IDT.

 

The project is the home for all resources used to build a universe and is used to manage the local objects for universe creation.

In a project, you can have multiple objects such as data foundations, business layers, and data source connections.


Figure 2: How to create a project: enter name.

 

We enter the project name (in this case, Test) and also set the project location in our workspace.

You also have the possibility to edit an existing project: go to File -> Open Project and pick it from the Local Projects area.

Another very useful piece of functionality in a project is resource locking – this way you can inform other developers that you are working on a resource.

Step 2: Create a connection between the HANA system and our project in IDT.

Once the project is created, we have to assign a connection to be able to connect to a data source.

This connection defines how the source system provides data. There are two types of connections: relational and OLAP.

A Relational connection is used to connect to the database layer to import tables and joins.

An OLAP connection is used to connect to a multidimensional model, such as an Information View in SAP HANA.
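To give you a feel for what such an OLAP source looks like on the database side: once activated, an information view is exposed as a column view in the _SYS_BIC schema and can be queried like a table. Here is a minimal sketch, assuming a hypothetical analytic view AN_SALES activated in a package named sales:

    -- Query a (hypothetical) activated information view; the aggregation
    -- is executed inside HANA.
    SELECT "COUNTRY",
           SUM("AMOUNT") AS "AMOUNT"
    FROM "_SYS_BIC"."sales/AN_SALES"
    GROUP BY "COUNTRY";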


Figure 3: How to create a relational connection.

 

In order to create the connection, we have to enter the system details, the user name, and the password.

After creating the connection, we test it and then publish it to the repository to make it ready for use in our universe.


Figure 4: How to publish a connection.

 

In order to publish the connection, you have to provide the BO connection parameters (repository, user name, and password).

Step 3: Create a data foundation and a business layer.

Once we have an available connection to the repository, we can proceed to create a data foundation for our universe. The data foundation layer gives you the opportunity to import tables and joins from different relational databases.

In the Information Design Tool, there are two types of data foundations: single-source enabled and multi-source enabled.

A single-source data foundation supports a single relational connection, which may be local or secured, so a universe designed on it can be maintained locally or published to the repository.


Figure 5: How to create a Data Foundation.

 

A multi-source enabled data foundation supports one or more relational connections. Connections can be added when you design the data foundation or later on. Multi-source enabled data foundations are designed on secured connections published in a repository.

We have to define the data foundation’s technical name and click Next.

An interesting option in IDT is that you can create a universe on multiple sources. In the next screen, you can see a list of available connections to create the universe (both .cnx and .cns).

A .cnx connection is used when we do not want to publish the universe to a central repository (it can be used for a local universe).

If you want to publish the universe to a repository (local or central), you have to use the .cns (secured) connection.

Once we hit the Finish button, we have a data foundation.


Figure 6: What a Data Foundation looks like.

 

Congratulations, you are already halfway to a universe of your own!

So far we have a project, a connection, and a data foundation. Now we need the other half of the universe: the business layer.

To create it, we go to the project, select New, and then Business Layer. Here we have to select the connection as well as the data foundation on which we want to create the business layer.


Figure 7: How to create a Business Layer.

 

The business layer contains all classes and objects; here you can also check the dimensions and measures defined in a universe. Publishing the business layer to the repository completes the creation of the universe.

Once you have created the business layer, you have to decide which fields should act as dimensions (attributes/characteristics) and which should function as measures (key figures).

To give the universe a better layout, we can create a folder for each dimension and one for the measures. By default, measures are also treated as attributes.
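To make the dimension/measure split more tangible, here is the kind of SQL a Web Intelligence query generates from the published universe – a hedged sketch with hypothetical schema, table, and column names: dimensions end up in the GROUP BY clause, while measures are wrapped in their aggregation function.

    SELECT "SALES"."COUNTRY",                         -- dimension (attribute)
           "SALES"."CALMONTH",                        -- dimension (attribute)
           SUM("SALES"."NET_AMOUNT") AS "Net Amount"  -- measure (key figure)
    FROM "MYSCHEMA"."SALES"
    GROUP BY "SALES"."COUNTRY", "SALES"."CALMONTH";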

Step 4: Publish to a repository.

Our journey to a universe full of data is approaching the final step. We have all the elements and only one click left to the finish line. After an integrity check, we have to save and publish the universe to a repository.


Figure 8: How to publish the universe to a repository.

This article is inspired by tutorials provided by Tutorials Point.

Author
Ionel-Calin Oltean Consultant SAP BI/BW
Phone: +49 (0) 7031 714 660 0
Email: cluj@inspiricon.de

When good advice is hard to come by: navigation attributes from SAP BW and how they are connected to the HANA views generated from them

A look at the current situation and two approaches to make the most of it

Hello everybody:

The combination of SAP BW and HANA has opened up a world of new modeling opportunities. And we have put them to good use in our most recent projects.

One of these modeling opportunities lets you generate HANA views from within the BW system and leverage them for HANA development. This approach is, for example, ideally suited for directly accessing the data in the HANA database without having to take the detour through the Query Designer.

There are, however, some weak spots in this way of directly accessing the data.

One of them is that, unfortunately, navigation attributes are not supported in the way we have come to expect from BW. Here, we are limited by the fact that compound objects (that is, objects with a concatenated key) are not supported. And even though this limitation is well documented, I just have to say: Come on SAP, are you serious?

Especially in large BW landscapes, it is very common to have all your objects compounded – for example, to a source system. And now you are telling me that none of these objects function properly anymore? After all, the ability to use navigation attributes has been one of the major strengths of BW over the past years.

Simply put, a navigation attribute – just like any other master data object – amounts to what is called a join, through which the master data is read in alongside the transaction data.

Typically, a Cube or a DSO holds the characteristics (dimensions, master data, keys of the other tables) along with the fact table (key figures, values). In the past, you could always access the attributes of these characteristics automatically and practically didn’t have to think twice about how it worked. It just did.
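To illustrate what this automatism does under the hood, here is a hedged sketch of the join that reading a navigation attribute boils down to – all schema, table, and field names are hypothetical stand-ins for the usual BW-generated tables:

    -- Transaction data (e.g., the active table of an ADSO) joined to the
    -- master data (P) table of the customer characteristic in order to
    -- read the navigation attribute "customer group":
    SELECT t."/BIC/ZCUSTOMER",
           m."/BIC/ZCUSTGRP" AS "CUSTOMER_GROUP",     -- navigation attribute
           SUM(t."AMOUNT")   AS "AMOUNT"
    FROM "SAPBW"."/BIC/AZSALES2" AS t
    LEFT OUTER JOIN "SAPBW"."/BIC/PZCUSTOMER" AS m
      ON  m."/BIC/ZCUSTOMER" = t."/BIC/ZCUSTOMER"
      AND m."OBJVERS" = 'A'                           -- active version only
    GROUP BY t."/BIC/ZCUSTOMER", m."/BIC/ZCUSTGRP";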

Naturally, SAP has issued a recommendation on how to solve the problem. And at Inspiricon, we too have come up with a solution for our project. Both solutions recreate the automatism mentioned earlier by hand. Now let’s have a closer look at what they do.

SAP’s solution

The CompositeProvider, as the successor to the MultiProvider, is meant to be the object through which reporting – or, in our case, data access from within HANA – is carried out. Underneath this object sit one or several Cubes, DSOs, and ADSOs. The CompositeProvider has significant advantages over the MultiProvider when it comes to joins: here, you can create the joins that you need for your master data. Typically, these are left outer joins.

What SAP’s solution does is make the required master data part of the CompositeProvider as well. This means it is not used as part of the ADSOs; rather, it is once again directly included in the CompositeProvider by join.

The Inspiricon solution

Since we ran into a few problems with the CompositeProviders during this particular project, we built a solution similar to SAP’s, but on a different level.

We didn’t build those joins in BW – and therefore not in the CompositeProvider – but in HANA: we took the generated views of the CompositeProviders and of the master data and joined them there.
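In SQL terms, the result looks roughly like the following sketch – the schema and view names are hypothetical stand-ins for the HANA views that BW generates for the CompositeProvider and for the characteristic’s master data:

    -- Join the generated view of the CompositeProvider with the generated
    -- master data view to compensate for the missing navigation attribute:
    CREATE VIEW "REPORTING"."ZCP_SALES_NAVATTR" AS
    SELECT cp.*,
           md."/BIC/ZCUSTGRP" AS "CUSTOMER_GROUP"
    FROM "BW2HANA"."ZCP_SALES" AS cp
    LEFT OUTER JOIN "BW2HANA"."ZCUSTOMER" AS md
      ON md."/BIC/ZCUSTOMER" = cp."/BIC/ZCUSTOMER";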

Conclusion

Summing it all up, I can safely say: it works. Both approaches get the job done.

As with most technical problems, there is indeed a solution. But it does involve a considerable amount of extra work, regardless of whether you prefer our approach or SAP’s. What bugs me the most is that this extra work isn’t over once the joins have been created; it has to be put in again for every subsequent modification, because these are manual steps that simply weren’t necessary in the past.

I hope that SAP will step up and solve this problem for good or at least give us an automatism.

But until then, we will weather the headwinds and continue to solve the problem with what we have.

Author
Jörg Waldenmayer Team Lead Technology
Phone: +49 (0) 7031 714 660 0
Email: info@inspiricon.de

Big data and artificial intelligence – a powerful duo that will shape our future

SAP makes artificial intelligence a reality

Using intelligent machines to analyze big data is no longer just wishful thinking. SAP has turned yesterday’s dreams into today’s reality. Based on SAP HANA, SAP has breathed life into multiple software components capable of thinking autonomously and analyzing vast amounts of data. As you are reading this, its software is recognizing faces and objects and carrying out large-scale analyses that would neither be possible nor conceivable if done manually.

With SAP, artificial intelligence (AI) has made the leap from vision to reality.

SAP Clea and SAP Leonardo: artificial intelligence is gaining momentum

SAP has released a new version of its HANA platform. Along with it, a number of AI and IoT (Internet of Things) services were introduced to the market.
SAP’s new SAP Clea software runs in the SAP cloud and is capable of autonomous learning without requiring any explicit programming. Its analytical intelligence is already being leveraged by companies like Munich Re and the European Space Agency. The insurance giant Munich Re, for example, continuously calculates the risk associated with forest fires using data on vegetation lines. These calculations are supported by intelligent software and its capability for large-scale big data analytics.

SAP Leonardo is the name of SAP’s IoT solution portfolio. Just like its namesake Leonardo da Vinci, SAP Leonardo takes a broad and interdisciplinary approach – a fundamental requirement for the Internet of Things. Information from across the company is taken into consideration, paving the way for the development of novel solutions and business models.

SAP Leonardo is designed to assist potential customers in crafting an IoT strategy and in identifying the solutions that will best meet their specific needs.

Because one thing is for certain: There is no such thing as the one piece of software for IoT and AI. It usually takes a combination of multiple applications.

The digital core: the SAP S/4HANA enterprise suite

S/4HANA is already being leveraged by 4,000 companies in 25 countries. It forms the digital core for the transformation and can be used for IoT and AI/machine learning applications. SAP has introduced several additions in the cloud, such as Connected Logistics, Connected Vehicles, Connected Manufacturing, Connected Assets, Connected Retail, and Future Cities. These allow companies to, for example, manage their fleets, control quality levels, and calculate routes.

In the field of artificial intelligence and machine learning services, SAP offers a range of services on the SAP Cloud Platform:

  • Resume Matching to streamline recruiting
  • Cash Application to analyze payment behavior
  • Social Media Customer Service
  • Brand Intelligence
  • Fraud Detection for insurers and banks

SAP Fiori – the new SAP UI

SAP Fiori is an initiative that aims to enhance usability (for more information, please refer to our website and our blog). With Fiori 2.0, SAP wants to harmonize the user experience for all SAP applications and has included a number of improvements in its visual design and usability.

This is yet another area where AI is leveraged. The user is assisted by a co-pilot that anticipates user actions and prepares them accordingly. Its built-in voice control, for example, streamlines maintenance and warehouse workflows. Artificial intelligence is used to analyze suppliers and categorize them according to a predefined requirements profile.

Inspired: straightforward big data analytics

The combination of artificial intelligence and big data supported by SAP Fiori makes for more effective automation and analysis. The software tries to foresee what actions the user wants to take, enhancing effectiveness and boosting speed. SAP Fiori enables companies to conduct large-scale analyses of big data and to automatically monitor important business metrics. Anomalies, trends, and patterns are automatically communicated to the responsible staff in an interface that puts the user first. The available data is analyzed in a central user interface that allows for intuitive operation. There is no longer a need for complex modifications of input and output parameters.

Artificial intelligence ensures that the algorithms deliver meaningful results without requiring input from the business departments.

The main objective of machine learning is to identify data patterns and relationships and to apply them to new sets of data. The underlying algorithms are based on statistics, the calculation of probabilities, and algebra. With SAP Application Intelligence, data is the fuel that powers machine learning.

Deep learning: where machines outperform humans

One discipline of machine learning is known as deep learning: here, neural networks are flooded with vast amounts of data. The intention is to enable the software to recognize faces, classify objects, and understand language – capabilities that are constantly being refined and increasingly used in robotics applications.

This also opens up new medical applications:
Soon, the data of individual cancer patients will be compared to millions of medical records to enable the customization of healthcare through precision medicine. Artificial intelligence works for the broad masses and delivers very promising results. The software is increasingly becoming an all-rounder that can be leveraged across the board. SAP Application Intelligence is capable of unearthing relationships that would otherwise stay hidden from the human eye. Working under time constraints, human employees often overlook crucial data that then remains undiscovered in day-to-day business.

Conclusion

Artificial intelligence has already found its way into business intelligence solutions, where it is used to control data analytics, for example in order to detect anomalies or automatically structure and interpret data. On top of that, AI algorithms are also used to monitor data streams.

The intelligent and integrative interaction between new SAP components such as Clea, Leonardo, and Fiori on the basis of SAP HANA Cloud and on-premises solutions is continuously inspiring Inspiricon to search for innovative services for our customers.

Are you curious to learn more and explore new business potential? Do not hesitate to contact us!

Author
Claudio Volk Member of the Management Board
Phone: +49 (0) 7031 714 660 0
Email: info@inspiricon.de

A universe full of data – the Information Design Tool

A short introduction to the Information Design Tool (IDT)

Nowadays, every big company takes part in globalization – not just in the transfer of products, but also in the global transfer of information. It is no big surprise for a German company to make a decision based on a report whose data comes from the other side of the planet. But how can we bring all this data and information together? Let’s create our own universe full of data! Yes, a new universe is the solution. But how can we manage a universe? Read on and you will discover how to become a master of universes.

All magic elements come from the heart

The heart of the HANA system:


Figure 1: Business Layer Structure – Source: Internal training – IDT Presentation

The first step in creating a universe is to create a view. This first brick in our “Great Wall” is laid with the SAP HANA Information Modeler, also known as the HANA Data Modeler, at the heart of the HANA system. It enables users to create modeling views on top of database tables and to implement business logic to create meaningful reports for analysis.

There are three types of information views, defined as follows:

Attribute View

  • Attribute views are dimensions, BW characteristics, or master data.
  • They are used to join to a dimension or another attribute view.
  • In most cases, they are used to model master data entities (like product, employee, and business partner).

Analytic View

  • Analytic views are star schemas: fact tables surrounded by dimensions, calculations, or restricted measures.
  • They are typically defined on at least one fact table that contains transactional data, along with a number of tables or attribute views.
  • They leverage the computing power of SAP HANA to calculate aggregated data, e.g., the number of bikes sold per country or the maximum power consumed per month.

Calculation View

  • Calculation views are composite views used on top of analytic and attribute views.
  • They can perform complex calculations that are not possible with other views.
  • They can be defined as either graphical or scripted views, depending on how they are created. Graphical views are modeled using the graphical modeling features of the SAP HANA Modeler; scripted views are created as sequences of SQL statements.

Figure 2: Short description of each type of SAP HANA view. Source: http://saphanatutorial.com/sap-hana-modeling/ (the entire content of the table is taken from this source)
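To give you a taste of the scripted variant mentioned in the table: the body of a scripted calculation view is SQLScript that assigns a SELECT statement to the output variable var_out. A minimal sketch with hypothetical schema, table, and column names:

    /* Body of a scripted calculation view: the result set assigned to
       var_out defines the output columns of the view. */
    var_out = SELECT "PRODUCT_ID",
                     "COUNTRY",
                     SUM("AMOUNT") AS "TOTAL_AMOUNT"
              FROM "MYSCHEMA"."SALES"
              GROUP BY "PRODUCT_ID", "COUNTRY";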

Once we have the first brick – the view – we can start conducting the concert. This role is played by the IDT (Information Design Tool).

According to the Information Design Tool User Guide published by SAP, the Information Design Tool is an SAP BusinessObjects metadata design environment that enables a designer to extract, define, and manipulate metadata from relational and OLAP sources to create and deploy SAP BusinessObjects universes.

The universe designer may be a database administrator, an applications manager or developer, a project manager, or a report creator who has acquired enough technical skills to create universes for other users. A security administrator also uses the Information Design Tool to define universe security profiles. There can be more than one universe designer in a company; the number depends on the company’s data requirements. For example, one universe designer could be appointed for each application, project, department, or functional area.

A universe is an organized collection of metadata objects that enable business users to analyze and report on corporate data in a non-technical language.

These objects include:

  • Dimensions
  • Measures
  • Hierarchies
  • Attributes
  • Pre-defined calculations
  • Functions
  • Queries

The metadata object layer, called the business layer, is built on a relational database schema or an OLAP cube, so the objects map directly to the database structures via SQL or MDX expressions. A universe includes connections identifying the data sources so queries can be run on the data.

The role of the universe

The role of the universe is to provide the business user with semantically understandable business objects. The user is free to analyze data and create reports using relevant business language, regardless of the underlying data sources and structures.


Figure 3: Screenshot of a universe – Source: Internal training – IDT Presentation Inspiricon SRL

Steps to create a universe:

  1. Create a new project in IDT
  2. Create a connection between the HANA system and our project in IDT
  3. Create a business layer
  4. Publish to a repository

For an easier understanding, please keep in mind that universes enable business users to freely and securely access, analyze, enrich, and share information using familiar business terms. Another important aspect is that universes do not store data from HANA or add any performance overhead.

Universes created using the Information Design Tool can be used by the following SAP BusinessObjects data analysis and reporting applications starting with version BI 4: SAP BusinessObjects Web Intelligence, SAP Crystal Reports for Enterprise, SAP BusinessObjects Explorer, and SAP BusinessObjects Dashboard Design.

You can find more details about the IDT and a description of every step of the universe creation process in the next article – stay tuned!

Source: https://help.sap.com/doc/businessobject_product_guides_boexir4_de_xi4sp8_info_design_tool_de_pdf/XI4.0.8/de-DE/xi4sp8_info_design_tool_de.pdf

Author
Ionel-Calin Oltean Consultant SAP BI/BW
Phone: +49 (0) 7031 714 660 0
Email: cluj@inspiricon.de

SAP HCP Mobile Services for SAP Fiori – new ways for mobile use

SAP directs its product strategy toward the future with keywords like “mobility first” and “device independence”. With SAP HCP mobile services for SAP Fiori, the technical basis for the development of applications has been extended immensely. And it was about time: in many cases, the potential of mobile applications lies fallow – particularly when it comes to cost reduction and efficiency.

SAP HCP and SAP Fiori

SAP HCP (HANA Cloud Platform) is an online service offered within SAP’s platform services (please read our blog article on SAP HCP: http://en.inspiricon.de/sap-hcp/). It is one of the most popular PaaS (Platform as a Service) offerings currently on the market. The cloud computing platform is aimed at software developers and comprises different services such as SAP HANA databases, IoT (Internet of Things) services, and integration services.

SAP Fiori stands for a set of applications for the SAP system built in SAPUI5, or rather HTML5 (read our blog on SAP Fiori: http://en.inspiricon.de/category/technology/fiori/). SAP Fiori enables the mobile use of SAP transactions. When it came to user experience and usability, SAP was not the first name that came to mind in the past – but it has made great progress in precisely these areas.

SAP HCP is the central extension and development platform for applications in the cloud. It is a real all-rounder that will probably soon be used by every company running SAP. SAP Fiori is optimized for mobile users by means of SAP HCP, which opens up completely new possibilities. The advantages and the simplicity of use are convincing in many ways.

What are the advantages of the new platform?

  1. By means of SAP HCP, SAP Fiori can be geared toward mobile use. Mobile users can open SAP Fiori applications in their internet browser with secure and seamless integration guaranteed. The SAP HANA Cloud Platform is an application service that enables the optimization of mobile activities.
  2. End-to-end solutions allow the customization, protection, and connection of Fiori apps – as well as the testing, distribution, and monitoring of applications.
  3. SAP Fiori applications work on portable devices as veritable mobile applications. Users benefit from high usability.
  4. Device functionalities like the barcode scanner, authorization, and data security can be implemented and designed with usability in mind – the same applies to branding.
  5. Users can make quick and comprehensive use of device functions: receiving push notifications and accessing local data are smooth processes.
  6. Companies benefit from high usability and data security. Extensive authorization methods increase security and allow authorizations to be allocated.

How to do it

The SAP HCP platform for SAP Fiori is quite easy to use. If you want to try it, you can create a free developer account: all you have to do is register and apply for an account, which is then activated via an e-mail link. The platform is designed in a clear way, and you will be able to use it intuitively. On the left-hand side you will find the most important features, such as the dashboard, HANA XS applications, database systems, HTML5 applications, services, and authorizations. As soon as you click on one of the categories, the corresponding area opens on the right-hand side – e.g., Internet of Things, device management, or Fiori Mobile. The following steps are self-explanatory and follow the principles of a modular design based on intuitive handling.

SAP HCP for mobile services for SAP Fiori

Using SAP HCP is quite simple, and you do not need any special prior knowledge. The advantages for mobile services are huge: there are tangible improvements, especially with regard to data and application security, usability, and interlinking. These user-friendly innovations create fundamental possibilities to improve usability and user experience and to design mobile applications in a forward-thinking, modern way.

Inspiricon Best Practice: How to develop a search for HTTP services with SAP Fiori and SAP HCP

We built a sector-independent and standardized solution for autosuggest with fuzzy matching. Read our best practice here: Inspiricon developed a search for HTTP services with SAP Fiori and SAP HCP (only available in German).
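For a sense of the mechanics behind such an autosuggest: SAP HANA exposes fuzzy search in plain SQL via the CONTAINS predicate and the FUZZY option, which a search service of this kind can build on. A hedged sketch with hypothetical schema, table, and column names:

    -- Return the 10 best fuzzy matches for a (possibly misspelled) search
    -- term; SCORE() exposes the match quality for ranking the suggestions.
    SELECT TOP 10 "CUSTOMER_NAME",
                  SCORE() AS "MATCH_SCORE"
    FROM "MYSCHEMA"."CUSTOMERS"
    WHERE CONTAINS("CUSTOMER_NAME", 'Müler', FUZZY(0.7))
    ORDER BY "MATCH_SCORE" DESC;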

 

Author
Andrei Vlad CEO Inspiricon SRL
Phone: +49 (0) 7031 714 660 0
Email: cluj@inspiricon.de

SAP HANA 2: the next generation of the SAP HANA platform

SAP HANA 2 adds new features to the in-memory platform SAP HANA, a platform designed to push the digital transformation forward. Via SAP YaaS (Hybris as a Service), SAP HANA 2 offers new microservices based on the SAP HANA Cloud Platform (read our last blog article: http://en.inspiricon.de/sap-hcp/). With these microservices, developers can create new innovations and provide modern applications with extensive information.

SAP HANA was introduced in 2010, and with it SAP took a pioneering role in the area of in-memory technology. SAP HANA offers innovations based on a stable master data platform, and SAP HANA 2 builds on this pioneering role to successfully shape the digital future. SAP HANA 2 will be available to customers from November 30, 2016, followed by an Express Edition designed to help companies advance new development projects. There will be extensions every six months to support agile development.

New functions and extensions

SAP HANA 2 will offer extensive enhancements. In database management, the focus is on high availability, workload management, and security. With this support, IT organizations are able to guarantee uninterrupted operation.

The Active/Active (read-enabled) option allows the use of secondary systems that until now were used only for system replication. This option serves to cope with read-intensive workloads – without any loss of performance.

In data management, customers profit from enhancements concerning data quality, data integration, business modeling, and tiered storage (please read our blog article on HANA in-memory: http://en.inspiricon.de/shrink-growing-hana-database/). With these enhancements, companies can access all their data – no matter where it is stored.

There will also be a new version of the SAP Enterprise Architecture Designer web application. With it, complex information architectures can be managed, and the potential impact of innovative technologies can be visualized before they are implemented.

SAP HANA 2 can do even more

SAP HANA 2 also improves analytical intelligence. Innovative engines allow the analytical processing of geodata, text, streaming data, and graph data. Developers will be able to embed comprehensive information in modern applications.

The Predictive Analysis Library in SAP HANA 2 comprises new algorithms for association, classification, regression, and time series analysis. Using these analyses, data scientists can uncover patterns in their data and enrich their own applications with machine learning.

In application development, users benefit from extended functions for the application server, development tools, and development languages. The principle “Bring Your Own Language” is supported; furthermore, additional third-party runtime environments and buildpacks are available, which can be used in the Extended Application Services model. With the new File Processor API, developers can extract and analyze text and metadata.

New microservices in the cloud

The new microservices are cloud-based and facilitate the integration of analyses into applications. This can be done from any development or language platform via APIs. The cloud microservices include the following:

  • Text Analysis Entity Extraction, Text Analysis Linguistic Analysis, and Text Analysis Fact Extraction: with these functions, text data can be processed in the cloud, including natural language processing.
  • Earth Observation Analysis: this service was developed in collaboration with the ESA and is based on the OGC’s WCS interface standard. Earth Observation Analysis uses ESA satellite data and processes geodata via SAP HANA Spatial, a database application in the cloud. The service provides historical data in real time; the calculations are based on spectral measurement data.

Do you want to learn more about SAP HANA 2? Contact us here. You can also follow us on Facebook or LinkedIn, and we will keep you posted on new trends and topics.

 

Author
Steffen Böhm Member of the Management Board
Phone: +49 (0) 7031 714 660 0
Email: info@inspiricon.de

Do you already know the SAP HANA Cloud Platform?

Big data, with its progressive interlinking and real-time processing, is changing companies tremendously. Information-driven services and products allow completely new user experiences. SAP HANA, as an in-memory platform, offers all the infrastructure components to enable such technological designs – whether in the cloud or on premise.

The SAP HANA Cloud Platform opens up entirely new possibilities

With the SAP HANA Cloud Platform (SAP HCP), SAP has opened the door to cloud-based technologies. This PaaS (Platform as a Service) offering helps companies develop innovative applications and then put them into operation. Because it is linked to SAP HANA, companies can use numerous database and application services in the cloud, for example:

  • integration and security functionalities,
  • support for several development languages as well as HTML5-based interfaces that are intuitive to operate,
  • search functionalities,
  • processing options for geo-referenced information,
  • analysis tools,
  • offline functionalities.

Added value of SAP HCP for users

SAP HCP offers users huge added value. Users can make their application environments fit for future requirements. The PaaS offering provides a powerful development platform and also functions as a runtime environment for individual extensions of IT products. It is very easy to integrate existing and new solutions, whether they are SAP products or not. The SAP HANA Cloud Platform decouples the software life cycle of customer-specific adjustments from that of the standard applications. The switch to the business suite SAP S/4HANA is facilitated in this way:

  • Enhancements are no longer tied to the core application.
  • Default settings present in the corporate solutions are not affected.
  • This protects existing business processes and improves agility.
  • Existing in-house developments can easily be moved to the SAP HANA Cloud Platform.

SAP HANA Cloud Platform: a huge improvement

The SAP HANA Cloud Platform is operated and maintained in SAP data centers. Thus, as a customer, you can concentrate on your core competencies – SAP takes care of the rest.

You can test SAP HCP at any time. As you get to know the offering, you can start building your own applications. Corporations can fall back on customized service packages at different scales.

The SAP HANA Cloud Platform sets a new course for businesses. The digital transformation encompasses sales, service, marketing, logistics, and Industry 4.0. In the future, billions of devices will be connected and will exchange data flows – buzzword: Internet of Things (IoT).

SAP HCP also offers functionalities for managing devices; furthermore, device-based messaging is possible, and IoT application enablement with data modeling is integrated. Machine data and application data can be received, processed, and evaluated in real time in the cloud. With additional services, sensor-based information can be analyzed and integrated into applications in real time. This makes new processes, services, and business models possible. SAP HCP is a signpost that can lead to business success in a digitalized economy.

Do you want to know more? Or are you looking for someone to support you in introducing SAP HCP? Get in touch and we will discuss the next steps together.

 

Author
Jörg Waldenmayer Team Lead Technology
Phone: +49 (0) 7031 714 660 0
Email: info@inspiricon.de

How to shrink your growing HANA Database: please meet Inspiricon HANA In-Memory Space Reduction

Businesses nowadays are growing tremendously, and dealing with large amounts of data is becoming a challenge for the data warehouse. According to Gartner’s survey on data center infrastructure trends and challenges, “Data growth is the largest data center hardware and infrastructure challenge for large enterprises”.

To face these challenges, many organizations have found relief in in-memory data platforms such as SAP HANA, with its hybrid structure for processing both transactional and analytical workloads completely in-memory. With the new release of SAP HANA, SAP introduces the concept of multi-temperature data and data lifecycle management, storing data efficiently on different types of storage devices based on its temperature, as shown in Figure 1. Data that is accessed frequently (hot data) is stored in-memory in HANA, while warm data can be stored on an extended storage host (dynamic tiering). Cold data is moved to less expensive storage devices outside the HANA database for read-only purposes, an approach known as Near-Line Storage (NLS, read more here).

Figure 1. Multi-Temperature Data Classification


One of our customer’s project goals was to move some of the less frequently accessed data to disk – where it would remain accessible – instead of keeping it in HANA memory. In search of a suitable solution for this customer, who was looking to optimize HANA memory consumption and thereby reduce license costs and improve system stability, we analyzed and investigated a possible SAP solution:

  • Dynamic Tiering unfortunately doesn’t fit our customer’s scenario due to additional license costs as well as many features and functions that are still underdeveloped and will only be available in future releases.

Our research then focused on developing the Inspiricon HANA In-Memory Space Reduction project, in which we evaluated different approaches such as:

  • Migration of the existing data models to HANA-optimized ones (LSA++).
  • Identifying and deleting outdated data/objects that are no longer required for reporting.
  • Defining historical data together with the business and unloading it from HANA memory to HANA disk space.
  • HANA DB compression.

The approaches listed above exploit SAP HANA features, e.g., the partitioning of AdvancedDSOs. We have developed a methodology, based on Inspiricon best practices, that entails no additional license costs and does not increase the TCO (total cost of ownership).

As a result, we introduced Data Lifecycle Management (DLM), which covers defining data retention needs together with business and IT as well as the methods to apply to a given data model depending on its complexity and the business/IT requirements.

Figure 2. Inspiricon Methodology for DLM


Our project was based on a detailed system analysis that identified the biggest data objects as possible candidates for minimizing HANA memory consumption.

Figure 3. DLM approaches to reduce memory consumption


Remodeling

This solution helps to better organize the existing data model by either deleting data that is stored multiple times in different InfoProviders or eliminating BW objects in layers that become obsolete on HANA. As a consequence, HANA memory consumption is reduced.

Unload Application

This application was developed by Inspiricon specifically for this customer scenario. Its main purpose is to unload certain objects’ tables from HANA memory to HANA disk space; the HANA DB automatically loads them back into memory as soon as the data is accessed in any way.
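Under the hood, this relies on HANA’s native unload mechanics, which can also be triggered in plain SQL. A hedged sketch with a hypothetical ADSO table name:

    -- Displace the table from memory; HANA reloads it from disk
    -- automatically on the next access.
    UNLOAD "SAPBW"."/BIC/AZSALES2";

    -- Optionally mark the table as an early candidate for displacement under
    -- memory pressure (0 = never unload automatically, 9 = unload first).
    ALTER TABLE "SAPBW"."/BIC/AZSALES2" UNLOAD PRIORITY 9;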

Partitioning

This approach helps to decrease memory consumption by unloading just a number of predefined partitions while keeping the others in memory, typically based on time characteristics like calendar year/month or fiscal period. Partitioning is defined in BW, performed at the database level, and can be applied to an existing DSO (with restrictions) or to an AdvancedDSO, depending on the customer’s business requirements.
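At the database level, the outcome corresponds to something like the following hedged sketch – in the project this is defined through BW rather than by hand, and the table name and partition boundaries are hypothetical:

    -- Range-partition the ADSO's active table by fiscal period so that
    -- old partitions can be unloaded while recent ones stay in memory:
    ALTER TABLE "SAPBW"."/BIC/AZSALES2"
      PARTITION BY RANGE ("FISCPER") (
        PARTITION '2015001' <= VALUES < '2016001',
        PARTITION '2016001' <= VALUES < '2017001',
        PARTITION OTHERS
      );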

Summary

  • As a result, the Inspiricon HANA In-Memory Space Reduction project saved 20% of our customer’s HANA memory consumption without additional license costs or resources.
  • The main deliverables were better loading performance, improved system stability, and a smoother DW landscape – and with this a considerable decrease in the annual SAP HANA memory license costs.
  • Our consultancy’s added value is delivering the solution that best matches our customers’ needs, drawing on our know-how, effective consulting, and partnership.

 

Author
Claudio Volk Member of the Management Board
Phone: +49 (0) 7031 714 660 0
Email: info@inspiricon.de

Keep your data cold

We all know that information is a big strategic asset. But what type of data is more important and more valuable? Making information effective and a long-standing asset is the key to running your business efficiently. To achieve this goal, companies invest heavily in BI tools for analysis and reporting as well as in archiving solutions. One of the most innovative archiving solutions, and the one we would like to introduce today, is Nearline Storage (NLS) – it keeps your data “cold”, significantly reduces the BW database size and administration costs, and improves performance.

What is cold data?

Nearline Storage (NLS) is a concept that defines a type of data storage and archiving between online storage and offline storage.

Types of data storage


This solution stores data in compressed form with fewer backups, reducing costs for data that is accessed less frequently (cold data). The technology is based on Information Lifecycle Management (ILM) to secure backups and store data on less expensive storage devices. The most frequently accessed data is the most valuable for the company (hot data); thus, new data is stored on high-performance – and more expensive – storage devices. Over time, the importance of the data decreases, it is accessed less often, and it is moved to a cost-efficient nearline repository where it remains accessible for read-only purposes.

Hot and cold data


Is it worth it?

Choosing this solution brings considerable improvements, for example in query performance. To illustrate the advantages, we present a case study based on a real customer scenario showing the expected query performance when the NLS solution is applied. Two tests with different types of queries compare query runtimes under normal conditions with those achieved using the NLS solution. The results are impressive: faster access to archived data (on average 5 times faster), reduced database administration, and improved performance.

NLS Case Study 1


NLS Case Study 2


In addition, implementing Nearline Storage decreases the cost of managing growing data volumes. As you can see below, the database gets bigger year by year – which means that by archiving cold data with NLS you can save up to 40% of your DB storage, a substantial cost reduction. The benefits are clear and measurable: the NLS solution removes less frequently used data from the online DB and compresses it by 90%, reducing storage costs while maintaining, and even increasing, performance.

Managing data growth


To wrap things up, I would like to emphasize the importance of NLS in keeping your BW fit day to day. Even if your data grows fast, there is no reason to worry when you have the NLS asset in your pocket: it helps significantly to reduce your BW database size as well as hardware and administration costs. Furthermore, less effort is needed for backup and disaster recovery.

Additionally, when considering SAP HANA in the future, Nearline Storage gives the user a leaner BW system, which can lower the TCO of a future HANA migration. The required working memory is reduced considerably – allowing huge cost savings on licenses and hardware for SAP BW on HANA.

Any questions regarding Nearline Storage? Contact us or make an appointment with us. You can also download a market research report with case studies on NLS by sending us an e-mail with the subject “Market Research NLS”. We look forward to hearing from you!

 

Author
Claudio Volk Member of the Management Board
Phone: +49 (0) 7031 714 660 0
Email: info@inspiricon.de