Monday, October 1, 2012

Change Universe in SAP BusinessObjects 3.1

"Some objects are no longer available in the universe. See your BusinessObjects administrator. (Error: WIS00001)(Error: INF)"

SAP BusinessObjects 3.1

When you want to replace the universe used in a Web Intelligence report with a new universe, you might get the error message "Some objects are no longer available in the universe. See your BusinessObjects administrator. (Error: WIS00001)(Error: INF)", even when all the dimensions and measures definitely exist in the new universe.
The reason is that Web Intelligence sometimes doesn't reference the dimensions and measures of a universe by their object names, but by its own internal object IDs. These object IDs are not visible to the user, and whenever a new object is created in the universe, a new object ID is generated automatically.
The only solution I have found to overcome the error after changing the universe is not to recreate the objects in the new universe, but to keep the existing universe with its existing object IDs and simply replace the SQL in the object properties. This way the object IDs remain stable and the Web Intelligence report can still reference the objects correctly.

Friday, August 3, 2012

Slowly Changing DWH (facts+dimensions)

How to enable a DWH for slowly changing facts
Datawarehouse Architecture, Oracle Database, SQL Server, SAP BusinessObjects

In the classic Kimball DWH we have fact tables and dimension tables, which can have different types of historization. Usually this approach satisfies most customer needs. Hence I was quite surprised when a project required me to track all changes of dimensions as well as of fact tables (SCD Type 2). These were, for example, estimated values that get updated frequently. Whether we can really call them facts is, in the end, a theoretical question; the fact is that the customer needs the possibility to view reports as they were at a given point in time, with the appropriate dimension and fact values. Of course, most of the time the customer wants to see the most current values, but from time to time previous report states are needed as well.
I want to break the design explanation down into the three points that were necessary to fulfill these requirements:

Change Detection
No question, the most convenient way to detect changes is CDC, which means that you only get the changed data rows. But this is not always an option, and sometimes it's also necessary to synchronise the DWH with the source system when changes are missing in the DWH, for example after structural changes in the source system. For small dimension tables, comparing the source table with the DWH table is not a big deal. But when we need to compare fact tables containing millions of rows, this can become a performance issue.
After some testing, I found the by far fastest approach to compare two tables here: On Injecting and Comparing
As we also need the updated values, I added an analytic function to partition the result set by the key columns.


select Key1, Key2, Attribute1, Attribute2,
       -- 'I' = only in source (insert), 'D' = only in DWH (delete),
       -- 'U' = new values of a changed key, 'O' = old values of a changed key
       case when cnt = 2 and tbl1 = 1 then 'U'
            when cnt = 1 and tbl1 = 1 then 'I'
            when cnt = 1 and tbl1 = 0 then 'D'
            else 'O'
       end flag
  from (select Key1, Key2, Attribute1, Attribute2,
               count1 tbl1,
               -- number of differing rows per business key: 2 = changed, 1 = new or deleted
               count(*) over (partition by Key1, Key2) cnt
          from (select Key1, Key2, Attribute1, Attribute2,
                       count(tbl1) count1,
                       count(tbl2) count2
                  from (select Key1, Key2, Attribute1, Attribute2,
                               1 tbl1,
                               to_number(null) tbl2
                          from source_table
                        union all
                        select Key1, Key2, Attribute1, Attribute2,
                               to_number(null) tbl1,
                               2 tbl2
                          from DWH_Table
                       )
                 group by Key1, Key2, Attribute1, Attribute2
                having count(tbl1) != count(tbl2)
               )
       )

The performance of this query is incredible compared to the other solutions I tested.
The query returns the inserted, deleted and updated rows (as well as the old values if you need them), which then serve as the source for a usual ETL process that loads the SCD2 data into the DWH (e.g. How to load a Slowly Changing Dimension Type 2 with one SQL Merge statement in Oracle); a simplified sketch follows below.
Please check out a more detailed explanation here: Change Detection
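
To give an idea of what such a load could look like, here is a minimal two-step sketch (the single-MERGE variant from the linked article is not reproduced here). The table names, the change_set source and the '12/31/9999' end date are illustrative assumptions; change_set is assumed to hold the result of the comparison query above, compared against the current DWH versions only.

-- Step 1: close the current version of every changed or deleted business key
UPDATE DWH_Table d
   SET d.VALID_TO = SYSDATE
 WHERE d.VALID_TO = TO_DATE('12/31/9999', 'mm/dd/yyyy')
   AND (d.Key1, d.Key2) IN (SELECT c.Key1, c.Key2
                              FROM change_set c
                             WHERE c.flag IN ('U', 'D'));

-- Step 2: insert a new open version for every new or changed business key
INSERT INTO DWH_Table (Key1, Key2, Attribute1, Attribute2, VALID_FROM, VALID_TO)
SELECT c.Key1, c.Key2, c.Attribute1, c.Attribute2,
       SYSDATE, TO_DATE('12/31/9999', 'mm/dd/yyyy')
  FROM change_set c
 WHERE c.flag IN ('I', 'U');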

Data Model
At the beginning this point was the trickiest one. Building a DWH with surrogate keys etc., as we know it, gets really complicated here. To get the right surrogate key of a dimension value into the fact table, we actually have to cut the fact data into pieces every time a fact changes. When a fact table references plenty of dimensions, we end up after some time with a really huge fact table, a complex ETL process and, due to the large fact table, probably poor performance. Each time we register a change in one of the referenced dimension tables, a new fact row with the new surrogate key of that dimension must be inserted into our fact table. When you consider that dimensions can also reference other dimensions, implementing an efficient ETL process for that requirement is no fun at all.
Here I have to thank this blog entry: Slowly Changing Facts, which brought me to the useful Kimball Design Tip #74. Even though this is not exactly what I was looking for, it's good to know that there is also a theoretical answer to the problem of slowly changing facts.
In the end we decided to use the business keys to reference the dimension values. Of course this gives you multiple corresponding rows in your join, so you have to ensure in your reporting tool that every table carries a filter condition with a date the user can specify in a prompt (or whatever it's called in your relational reporting tool). For example, in SAP BusinessObjects I used this expression:


WHERE TO_DATE(NVL(TRIM(@Prompt('Query Date','A',,MONO,FREE)),'12/31/9998 12:00:00 AM'),'mm/dd/yyyy HH:MI:SS AM') BETWEEN VALID_FROM AND VALID_TO


If the user doesn't enter a value or enters a space, the 'Query Date' is replaced by a default date like '12/31/9998', which returns the current state of the data. If the user enters a date in the past, the query returns exactly the data from that point in time.
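
For illustration, this is roughly the shape of SQL such a report then generates (the table and column names here are hypothetical; the point is that fact and dimension are joined on the business key and both carry the validity filter):

SELECT d.Attribute1, SUM(f.Measure1) AS Measure1
  FROM Fact_Table f
  JOIN Dim_Table d
    ON d.Business_Key = f.Business_Key
 WHERE TO_DATE('06/30/2012 12:00:00 AM', 'mm/dd/yyyy HH:MI:SS AM') BETWEEN f.VALID_FROM AND f.VALID_TO
   AND TO_DATE('06/30/2012 12:00:00 AM', 'mm/dd/yyyy HH:MI:SS AM') BETWEEN d.VALID_FROM AND d.VALID_TO
 GROUP BY d.Attribute1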

Performance Optimization
Usually the customer wants to see the most current state of the report, which should show up in a few seconds, while viewing a previous report state can take several minutes.
This requirement could be fulfilled perfectly using query-rewrite-enabled materialized views (in SQL Server terms: indexed views). For more information about query rewrite, there are plenty of useful sources on the internet (e.g. Query Rewrite).
To get a query-rewrite-enabled materialized view, the query must return deterministic results. That's why we need a constant value to get the results for the most current date. As in most SCD2 implementations, a date in the year '9999' in the 'valid_to' column represents the current data row.
We add the following filter condition to each table of the report query in our materialized view.

WHERE TO_DATE(NVL(TRIM(''),'12/31/9998 12:00:00 AM'),'mm/dd/yyyy HH:MI:SS AM') BETWEEN VALID_FROM AND VALID_TO

When the report user enters an empty string '' into the prompt dialog, the query is automatically rewritten and the precalculated materialized view is used instead of the detail tables. Using this technique, the most current report version shows up instantly. Only when a previous report state is needed and the user enters a past date into the prompt dialog do the detail tables have to be queried, and the report may take a bit longer to show up.

Note that some prerequisites must be fulfilled to enable query rewrite for materialized views; the most important points are:

  • Create the materialized view with the 'ENABLE QUERY REWRITE' option (see the sketch after this list)
  • Set query_rewrite_integrity, e.g. alter system set query_rewrite_integrity=stale_tolerated scope=spfile;
  • Set query_rewrite_enabled, e.g. alter system SET query_rewrite_enabled=FORCE scope=spfile;
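
As a sketch of the first point, such a materialized view over the report query might look like this (table and column names are again hypothetical; note the constant empty string in the filter, which keeps the query deterministic and therefore rewrite-eligible):

CREATE MATERIALIZED VIEW mv_current_report
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT d.Attribute1, SUM(f.Measure1) AS Measure1
  FROM Fact_Table f
  JOIN Dim_Table d
    ON d.Business_Key = f.Business_Key
 WHERE TO_DATE(NVL(TRIM(''),'12/31/9998 12:00:00 AM'),'mm/dd/yyyy HH:MI:SS AM') BETWEEN f.VALID_FROM AND f.VALID_TO
   AND TO_DATE(NVL(TRIM(''),'12/31/9998 12:00:00 AM'),'mm/dd/yyyy HH:MI:SS AM') BETWEEN d.VALID_FROM AND d.VALID_TO
 GROUP BY d.Attribute1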

Friday, April 13, 2012

Overcome the define ranges-restriction when using MS Excel as a data source in Informatica PowerCenter

Informatica PowerCenter

When you use an MS Excel spreadsheet as a data source in Informatica PowerCenter, there are, in my opinion, some really bad restrictions. One of them is the fact that you must have defined ranges in your spreadsheet so that PowerCenter can identify them as relational sources. That means if you frequently get new MS Excel files, you have to define the ranges manually every time.

In my last project I used a detour to connect from PowerCenter to MS Excel spreadsheets and overcome that restriction. The name of the detour is MS Access.

Just create a new MS Access database, name it something like "Import_Excel" and go to the "External Data" tab. Choose MS Excel as the data source; in the following window you can link an Excel table to the Access database. Note that you don't import the Excel data into your Access database, it's a link, so you always get the up-to-date Excel data in your MS Access database.



The good thing is that you can, but don't have to, use the defined ranges from the Excel spreadsheet. You can also use one of the sheets of your Excel file as a table.


Once you have created the link to the MS Excel file, you can read from the created table in your MS Access database just like from a regular database table. You could also define a query using SQL to preprocess the data in your MS Access database for Informatica PowerCenter, but that's up to you.
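
For example, a saved Access query like the following (the sheet name "Sheet1" and the columns are just placeholders) could do some simple cleansing before PowerCenter reads the data:

SELECT Trim(CustomerName) AS Customer,
       CDate(OrderDate)   AS OrderDt,
       Amount
FROM   Sheet1
WHERE  Amount IS NOT NULL;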

Now you can create an ODBC connection to your MS Access database, which you can then use as a data source in Informatica PowerCenter.


Another nice side effect is that you need just one single ODBC connection for multiple MS Excel files.

Again, let me redirect you to another blog with a detailed explanation of how to connect to an ODBC source from Informatica PowerCenter: http://www.clearpeaks.com/blog/etl/ms-excel-spreadsheets-as-a-data-source-in-informatica-powercenter

And don't forget to set the Default buffer block size to a smaller value like "8" instead of "Auto" if you get a "terminated unexpectedly" error.

Thursday, April 5, 2012

Informatica PowerCenter "terminated unexpectedly" when using ODBC

Informatica PowerCenter

To connect to an MS Excel spreadsheet or an MS Access database using ODBC, there are several resources available (e.g. http://www.clearpeaks.com/blog/etl/ms-excel-spreadsheets-as-a-data-source-in-informatica-powercenter).
I got the not exactly helpful "terminated unexpectedly" error message in the Workflow Monitor and couldn't find the cause. Finally I found out that a single session property solved my problem.

I set the "Default buffer block size" to a value of 8 (for example) instead of "Auto" and my workflow completed successfully.

Saturday, February 18, 2012

Engine-based ETL tool vs. code-generating ETL tool vs. stored procedures

In my DWH projects I have used several approaches to implement ETL processes. I think it also makes sense to distinguish engine-based ETL tools from code-generating ETL tools, a distinction I am missing in some discussions.
I want to describe my opinion based on several criteria. I don't claim that this entry is a complete comparison of ETL approaches; I am really interested in your experiences and opinions, so you are very welcome to add a comment.

Usability
ETL tools (engine-based as well as code-generating) provide nice GUIs and claim that this makes development easier. My personal experience is that it's true, they do have fancy GUIs, but that doesn't necessarily mean they are easier to maintain. There are many more SQL coders out there than people with experience in a specific ETL tool. I personally find an error much faster in PL/SQL code than by debugging mappings of ETL tools. But I really think it depends on the ETL process: the GUI gives you a nice overview when you have complex transformations, and it can also serve as documentation of the ETL process.

Development efficiency
In some projects I had to implement the same ETL logic for different source tables (e.g. SCD2). When you have hundreds of tables where you have to do exactly the same stuff, I find it more efficient to write a PL/SQL code generator, also in terms of maintenance. In software engineering it's a very bad practice to duplicate code; what you do there is write a function or a subprogram, so your logic is stored in just one place. I don't understand why modern ETL tools don't provide such an approach. I know that you can use reusable components or templates, but when you have a different table, with different columns, keys, data types, etc., I don't know how to implement a generic solution with ETL tools, or am I wrong? This is only possible with PL/SQL code generators (see the sketch below), or you write your own code generator for your ETL tool, but this is much more difficult.
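
To make the idea concrete, here is a minimal sketch of such a generator, assuming Oracle 11gR2 or later for LISTAGG; the 'STG_' staging prefix, the 'DWH_' target prefix and the generated plain INSERT are purely illustrative stand-ins for the real, more elaborate SCD2 logic:

DECLARE
  v_cols VARCHAR2(4000);
  v_sql  VARCHAR2(32767);
BEGIN
  -- loop over all staging tables and generate the same load statement for each of them
  FOR t IN (SELECT table_name FROM user_tables WHERE table_name LIKE 'STG\_%' ESCAPE '\') LOOP
    SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
      INTO v_cols
      FROM user_tab_columns
     WHERE table_name = t.table_name;

    -- a plain INSERT ... SELECT as a stand-in for the generated SCD2 logic
    v_sql := 'INSERT INTO DWH_' || SUBSTR(t.table_name, 5) || ' (' || v_cols || ') '
          || 'SELECT ' || v_cols || ' FROM ' || t.table_name;
    EXECUTE IMMEDIATE v_sql;
  END LOOP;
END;
/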

Flexibility
Most of the operations of modern ETL tools, like lookups, aggregations, branching-based inserts, conditions, expressions, string and date manipulations, are built into modern databases as well, and there they are even faster. So here I prefer code-generating ETL tools or stored procedures.
But flexibility also means database flexibility. This is a clear advantage of ETL tools, as they provide connectivity to almost all common source systems.

Performance
For me there is no doubt that an ELT approach will always outperform an ETL solution, as you have so many performance optimisation techniques in your database and you don't have to transfer your data to your ETL server and back to your DB. I really don't understand the advantage of engine-based ETL tools like Informatica, etc.; if someone can explain it to me, you are welcome! The only thing you can't do with ELT is implement an ETL process in a database-independent way, but this is the only point I can think of in favor of an engine-based ETL tool.

Logging
Monitoring, logging, etc. are easier with ETL tools (engine-based and code-generating), as they provide a standardized way to monitor your processes. When writing your own procedures, you have to implement all of that on your own.
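
For illustration, the do-it-yourself version usually boils down to something like this minimal sketch (hypothetical table and procedure names; a real framework would also track run IDs, row counts, errors and durations):

CREATE TABLE etl_log (
  log_time  DATE DEFAULT SYSDATE,
  proc_name VARCHAR2(100),
  message   VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_msg(p_proc_name VARCHAR2, p_message VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- keep the log entry even if the ETL transaction rolls back
BEGIN
  INSERT INTO etl_log (proc_name, message) VALUES (p_proc_name, p_message);
  COMMIT;
END;
/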

Conclusion
Of course it always depends on the specific requirements, so I can only tell from my own experiences in different projects. As I already mentioned, I really don't get why one should buy a really expensive engine-based ETL tool when all that functionality is already built into the database, where it is even faster. I like some code-generating ETL tools because they can support the development process by providing frequently used ETL logic, they give you a good overview of your workflows, and they provide mechanisms like monitoring and logging. In some cases it can also be a good solution to write your own ETL procedures, especially to automate the development process or to combine that approach with code-generating ETL tools.

I am really looking forward to your opinions on ETL development!