
Calculation Manager, BSO Planning, and ASO Planning combine for an awesome ASO Essbase procedural calculation hack -- Part 1


A bit of a tease

I’m not talking about me.  If anything I am more than a bit too straightforward.  I try not to think of all of the own goals I have scored in my Cameron-tells-it-like-it-is way when a bit of discretion would have been better employed.  At least I am learning as I enter my dotage.

Nope, what I’m talking about is this first part of a three part blog post on how to achieve awesomeness in ASO Planning and beyond.  I really wanted to do this all in one shot but it simply became overwhelming in its length and complexity, so I am reluctantly going to break it into three parts with this first one simply laying the ground work.

Here’s how I plan on putting it out over the next three weeks:
  • Part 1 – You’re reading it; an introduction to the CDFs you likely don’t know about
  • Part 2 – The genius of Joe Watkins
  • Part 3 – Tying it all together across three different Oracle EPM products with of course a glorious hack

What’s the mystery

As you read this, remember what’s in the title:  Calculation Manager, ASO Planning, ASO Essbase procedural calculations, and BSO Planning.  If you came to the Kscope14 session Tim German and I gave on ASO Planning I went a bit into this but there was simply no time to go into detail.  You won’t remember what I am about to show you because it got cut.  And, as I noted, this is just the introduction.  Hopefully that will keep you coming back for the next two installments because what I stumbled into is pretty damn cool.

Here are the clews

Beginning with, I think, Planning 11.1.2.2, standard Essbase Custom Defined Functions (CDFs) are installed with every copy of Essbase.  Surprised?  They’re definitely there in Essbase 11.1.2.3.500 and they actually differ between what’s available to Essbase and what is installed with each Planning application.  Even more surprised?  Oh the things we can find if we but search.  Of course as I am in the “Even a blind pig occasionally finds an acorn” set, I am putting this down to luck.

Two ways to find them

The first (and boring/safe/approved) way to figure out what these CDFs are is to be a good little boy (or girl) and use Calculation Manager to insert functions:

Create and insert

Create a rule, insert a script object, and then click on the Insert Function toolbar button.  It’s the very first button – it’s like Oracle wants you to use it or something crazy like that.

Waddayasee?

I hope you see all kinds of functions.  We’re going to be focusing on the Custom Defined Functions (CDFs).

NB – I am doing this in a BSO Planning application.  As we will see in a bit, there are nuances between what’s available to Planning Plan Types and pure Essbase applications.

Here’s the list of functions.  Nothing much new here except the fourth and third from the bottom although for the purposes of this post I will ignore Custom Defined Macros and leave that as an exploratory exercise for you, Gentle Reader.

What’s in the CDFs?

Lots and lots of interesting stuff is what.  Here’s most of what’s available to a Planning application.

Naming standards

I’m not going to cover most of these, but if you look at the above, you will note that there are CDFs that start with @CalcMgr and ones that start with @Hsp.  Telling you that if it starts with @CalcMgr it has to do with Calculation Manager and if it starts with @Hsp it has to do with Planning is not giving away the game.

So what does that mean for Essbase?

The @Hsp functions are only usable in Planning applications.  Don’t believe me?  Create a BSO Essbase Calculation Manager script and go through the same insert functionality.  What do you see?
This makes sense, right?  You’re not in Planning so the CDFs that Oracle wrote to support Planning aren’t there.

There’s more to Calculation Manager than Calculation Manager

Go into good old EAS and create a BSO calc script.  Now look over in the lower left hand side of the script window and select Categorical->User Defined Functions – they are the same @CalcMgr functions.  Here’s another hint (possibly a bit of a red herring) to the puzzle – those functions say @CalcMgr but they’re available in plain old calc scripts as well.
The plot thickens, eh?  :)  What oh what oh what would you do with all of this?

An alternate, and cooler, and faster way to get to all of the above

I know, I know, Cameron has been a good little boy and painted within the lines.  As Donald Fauntleroy Duck would say, Ah, Phooey!

A much faster, and hackier, and more informative, and thus more awesome way to do this is to simply go looking for essfunc.xml on your Essbase server.  Oh, the things you will see.  

Lookatthat

There’s lots of files with that name.  How oh how oh how do they differ?

Two differing properties

Size matters
Discounting the Windows shortcuts, note the spread in size from 7K to 10K.  What do you suppose is in the 7K one?
Nothing but @CalcMgr functions.

And the 9K one?
Nothing but @Hsp functions.

And the 10K one?
A mix of @CalcMgr and @Hsp CDFs.  Curiouser and curiouser.
Location, location, location
Where you live matters, and where essfunc.xml domiciles matters as well.  

The @CalcMgr functions that are available to Planning and Essbase live in c:\oracle\middleware\user_projects\epmsystem1\EssbaseServer\essbaseserver1\java\essfunc.xml.

Planning applications get (it appears) dynamically generated essfunc.xml files that are stored in each Plan Type’s Essbase application folder.  Even more interestingly, it looks like @CalcMgr functions are added to the Planning/Essbase application’s essfunc.xml file on an as-needed basis.  That accounts for the mix of @Hsp (always there in Planning) and @CalcMgr (but only a handful) CDFs in the 10K file.

Why essfunc.xml is better than Calculation Manager’s insert functions

Although you can get the parameters in both Calculation Manager and EAS’ calc script editor, I personally find many of the parameters hard to follow and the interface itself is kind of kludgy.  It turns out that the function dialog box in both tools simply reads the examples from essfunc.xml.
A difference in parameters
Planning
It’s probably a bug, but param1, param2, param3, etc. are not particularly helpful when it comes to figuring out what @CalcMgrExecuteEncryptMaxLFile does and how to call it.
Essbase
Ah, that’s a bit better.  Now we can see that the private key, maxlFileName, arguments, and asynchronous parameters are used with @CalcMgrExecuteEncryptMaxLFile although there is still no way to know what the arguments and asynchronous parameters are.

Wouldn’t it simply be easier to read the file?  Why yes it would.

Here’s the @CalcMgrExecuteEncryptMaxLFile function in essfunc.xml

   <function name="@CalcMgrExecuteEncryptMaxLFile" tssec="1378746290" tsmcs="206000" javaSpec="com.hyperion.calcmgr.common.cdf.MaxLFunctions.runMaxLEnFile(String,String,String[],String)">
     <flag value="RUNTIME"/>
     <spec>
       <![CDATA[@CalcMgrExecuteEnMaxLFile(privateKey, maxlFileName, arguments, asynchronous)]]>
     </spec>
     <comment>
       <![CDATA[Calc Manager CDF for running Encrypted MaxL file. RunJava  com.hyperion.calcmgr.common.cdf.MaxLFunctions true -D 2115028571,2505324337 c:/MaxL/maxl.mxl 906388712099924604712352658511 0893542980829559883146306518502837293210. First argument if false, will be an synchromous maxl task]]>
     </comment>
   </function>

Hmm, same ambiguous @CalcMgrExecuteEncryptMaxLFile syntax but there’s now something else here – a RUNJAVA command.  At least there’s an example, sort of, although it looks as though both the private key and the encrypted username and password need to be passed with this method.

Confusing, isn’t it?

Let’s end the confusion

Whilst I would love to tell you that I figured all of this out, I must confess that I reached out to the Calculation Manager product manager, Sree Menon, and begged for help.  Sree and his associate, Kim Reeve, were beyond helpful.  I promised them that I would share what they taught me.

A brief foray into the ACE program

Btw, lest you think I know Sree because I am an ACE Director, in fact Sree reached out to me way back when I wrote my second post on Calculation Manager.  I don’t think he knew (or cared) about my ACE status.  I am not in any way putting down OTN’s ACE program, I am merely noting that Oracle reaches out to everyone who is an advocate of their products.  That’s a long-winded way of saying if yr. obt. svt. can do it (it being evangelizing, working with Oracle, and maybe becoming an ACE), you most certainly can too and I very much encourage you to do so.  I am on a personal crusade as of this writing to get four people recognized as ACEs because they love Oracle’s products and thus do a ridiculous amount of evangelizing for Oracle.  It will take a long time and a lot of work all round but I’m convinced they will get in – they deserve it, at least in my opinion.

Sorry for the digression, but sometimes I think people think becoming an ACE is an impossible task.  It is a difficult task, and I might note that the first time I was nominated, I was rejected, but now here I am.  Again, if I can do it, you can too.  Don’t be intimidated by the process as making it is incredibly rewarding; probably the most rewarding thing I’ve achieved in my career.  Note that the evangelism works both ways cf. Sree reaching out to me so aim high and you might just make it.

Back to the code

It’s very simple MaxL code.  All that it is doing is logging in to Sample.Basic and running an MDX query.

encryptest.mshs

spool on to "c:\\tempdir\\encryptest.log" ;

login $key 9404266461315012977165999794704691034001 $key 0307596931591918242060507329306599979470 on localhost;

/*    The below settings are right out of Developing Essbase Applications    */
alter session set dml_output alias off ;
alter session set dml_output numerical_display fixed_decimal ;
alter session set dml_output precision 4 ;
set column_width 40 ;
set timestamp on ;

SELECT {[Measures].$3}
    ON COLUMNS,
{[Year].Levels(0).Members}
ON ROWS
FROM [Sample].[Basic]
WHERE ([Scenario].[Actual], [Product].[Product], [Market].[Market]) ;

spool off ;

exit ;

Note the $3 command line parameter – yes, you can pass parameters from a calc script to MaxL.  Get your creative juices flowing over that.  I’ll try to demonstrate a few bits of it.

Running the code in EAS via CDF3.csc

It’s actually quite straightforward, although it didn’t seem so when I was figuring it out.

The key syntax notes are as follows (a sketch putting them together follows the list):
  • Pass all strings in double quotes
  • Use forward slashes only in file paths
  • Wrap application, server, and any command line parameters in @LIST
  • Those arguments map to command line parameters, so server = $1, database = $2, the first parameter = $3, etc., etc.
  • Wrap any member names in @NAME
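
Putting those notes together, here is a hedged sketch only of what the calc script call can look like.  The key, file path, and members are illustrative placeholders, not the real values from this database, and exactly how the list elements map to $1, $2, and $3 follows the bullets above.

/*  Hedged sketch -- the key, path, and members are illustrative.  The single   */
/*  sparse combination in the FIX keeps this to one block, so the CDF fires once.  */
FIX ("100-10", "New York", "Actual")
    "Sales" (
        @CalcMgrExecuteEncryptMaxLFile(
            "9404266461315012977165999794704691034001",
            "c:/tempdir/encryptest.mshs",
            @LIST("localhost", "Sample", @NAME("Sales")),
            "false");
    )
ENDFIX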

Two block issues to be aware of

@CalcMgrExecuteEncryptMaxLFile runs as a member calculation block.  That means that if the FIX that contains the command defines more than one block, it will run that many times.  I only know this because I tried running the command without any FIX and it took (seemingly) forever.  That’s simply because it ran once for each block.  As my copy of Sample.Basic has 376 blocks, the code ran 376 times.  Imagine it in a real database.  Shudder.

Of course if there is no block then @CalcMgrExecuteEncryptMaxLFile won’t run.  Yep, block creation issues raise their ugly head once again.

So what happens when you run it?

Here’s encryptest.log’s output.  Note that $3 resolved to “Sales”.
MAXL> login $key 9404266461315012977165999794704691034001 $key 0307596931591918242060507329306599979470 on localhost;

OK/INFO - 1051034 - Logging in user [hypadmin@Native Directory].
OK/INFO - 1241001 - Logged in to Essbase.

MAXL> alter session set dml_output alias off ;

OK/INFO - 1056226 - Session altered for user [native://nvid=c9d9665628b8695a:6d29254d:1410456fa26:-7adf?USER].

MAXL> alter session set dml_output numerical_display fixed_decimal ;

OK/INFO - 1056226 - Session altered for user [native://nvid=c9d9665628b8695a:6d29254d:1410456fa26:-7adf?USER].

MAXL> alter session set dml_output precision 4 ;

OK/INFO - 1056226 - Session altered for user [native://nvid=c9d9665628b8695a:6d29254d:1410456fa26:-7adf?USER].


     essmsh timestamp: Mon Jul 14 16:50:59 2014

MAXL> SELECT {[Measures].Sales}
  2>     ON COLUMNS,
  3> {[Year].Levels(0).Members}
  4> ON ROWS
  5> FROM [Sample].[Basic]
  6> WHERE ([Scenario].[Actual], [Product].[Product], [Market].[Market]) ;

Axis-1                                  (Sales)                                
+---------------------------------------+---------------------------------------
(Jan)                                                                31538.0000
(Feb)                                                                32069.0000
(Mar)                                                                32213.0000
(Apr)                                                                32917.0000
(May)                                                                33674.0000
(Jun)                                                                35088.0000
(Jul)                                                                36134.0000
(Aug)                                                                36008.0000
(Sep)                                                                33073.0000
(Oct)                                                                32828.0000
(Nov)                                                                31971.0000
(Dec)                                                                33342.0000

OK/INFO - 1241150 - MDX Query execution completed.

     essmsh timestamp: Mon Jul 14 16:50:59 2014

So what about RUNJAVA, and why?

There’s an alternate way of executing this MaxL file, and it is actually there in the essfunc.xml file although, again, it is imperfectly documented.

The syntax (and even the properties) are totally different from @CalcMgrExecuteEncryptMaxLFile.

Start off RUNJAVA with the right object name, in this case com.hyperion.calcmgr.common.cdf.MaxLFunctions.

Then, per the above code, you must pass (see the sketch after this list):
  • The asynchronous flag of false
  • A -D to decrypt the MaxL script (just like off of a command line)
  • The public key for decryption
  • The already encrypted username
  • The already encrypted password
  • The server name
  • Do not pass the application and database
  • Any command line prompts
    • The encrypted username is command line parameter 1 (but you must still pass it this way)
    • The encrypted password is command line parameter 2
    • The server name is command line parameter 3
    • After that you can pass member names (or whatever) as command line parameters 4, 5, 6, etc.
  • Wrap all parameters in double quotes
  • @LIST and @NAME are not required
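
Here is a hedged sketch of what such a RUNJAVA call can look like.  Every value is illustrative, and because the bulleted list does not call out the MaxL file path explicitly, this sketch places it where the essfunc.xml example does, right after the public key.

/*  Hedged sketch -- every value below is illustrative  */
RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions
    "false"                                         /*  run synchronously              */
    "-D"                                            /*  decrypt the MaxL script        */
    "2115028571,2505324337"                         /*  public key                     */
    "c:/tempdir/encryptest.mshs"                    /*  MaxL file, forward slashes     */
    "9404266461315012977165999794704691034001"      /*  encrypted user name = $1       */
    "0307596931591918242060507329306599979470"      /*  encrypted password  = $2       */
    "localhost"                                     /*  server name         = $3       */
    "Sales";                                        /*  first real parameter = $4      */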

This is what the MaxL script looks like.  It’s the same except the $3 became a $4.
spool on to "c:\\tempdir\\encryptest.log" ;

login $key 9404266461315012977165999794704691034001 $key 0307596931591918242060507329306599979470 on localhost;

/*    The below settings are right out of Developing Essbase Applications    */
alter session set dml_output alias off ;
alter session set dml_output numerical_display fixed_decimal ;
alter session set dml_output precision 4 ;
set column_width 40 ;
set timestamp on ;

SELECT {[Measures].$4}
    ON COLUMNS,
{[Year].Levels(0).Members}
ON ROWS
FROM [Sample].[Basic]
WHERE ([Scenario].[Actual], [Product].[Product], [Market].[Market]) ;

spool off ;

exit ;

Two other things about RUNJAVA

Celvin Kattookaran wrote that he thought RUNJAVA was a better way to go than @CalcMgr*.  I’m not sure I agree with him because of the below.

OTOH, I am sure I love stalking him, just like Glenn Schwartzberg loves stalking me.  The cycle of abuse continues.  Celvin, find someone to pick on.  :)

Syntax checking is gone, gone, gone.  

This will syntax check:

For those of you old enough (sob) to remember taking typing drills in school (my grandmother was a business teacher, taught typing, and that meant when we visited her in the summer we could use a – ooooh, electric typewriter – bored geeks like yr. obt. svt. typed that again and again and again, and Grandma, I miss you), that’s from the man that invented touch typing and isn’t exactly part of Oracle EPM.

In other words, you had best be sure your syntax is right because neither Essbase nor Calculation Manager will tell you that you’re wrong.

Blocks, what blocks?

RUNJAVA, because it is not a calculation block but in essence a command line, couldn’t care less about a block existing, or not.  It’ll run just fine with no blocks, e.g.:
MAXL> login $key 9404266461315012977165999794704691034001 $key 0307596931591918242060507329306599979470 on localhost;

OK/INFO - 1051034 - Logging in user [hypadmin@Native Directory].
OK/INFO - 1241001 - Logged in to Essbase.

MAXL> alter session set dml_output alias off ;

OK/INFO - 1056226 - Session altered for user [native://nvid=c9d9665628b8695a:6d29254d:1410456fa26:-7adf?USER].

MAXL> alter session set dml_output numerical_display fixed_decimal ;

OK/INFO - 1056226 - Session altered for user [native://nvid=c9d9665628b8695a:6d29254d:1410456fa26:-7adf?USER].

MAXL> alter session set dml_output precision 4 ;

OK/INFO - 1056226 - Session altered for user [native://nvid=c9d9665628b8695a:6d29254d:1410456fa26:-7adf?USER].


     essmsh timestamp: Mon Jul 14 19:20:02 2014

MAXL> SELECT {[Measures].Sales}
  2>     ON COLUMNS,
  3> {[Year].Levels(0).Members}
  4> ON ROWS
  5> FROM [Sample].[Basic]
  6> WHERE ([Scenario].[Actual], [Product].[Product], [Market].[Market]) ;

Axis-1                                  (Sales)                            
+---------------------------------------+---------------------------------------
(Jan)                                                                  #Missing
(Feb)                                                                  #Missing
(Mar)                                                                  #Missing
(Apr)                                                                  #Missing
(May)                                                                  #Missing
(Jun)                                                                  #Missing
(Jul)                                                                  #Missing
(Aug)                                                                  #Missing
(Sep)                                                                  #Missing
(Oct)                                                                  #Missing
(Nov)                                                                  #Missing
(Dec)                                                                  #Missing

OK/INFO - 1241150 - MDX Query execution completed.

     essmsh timestamp: Mon Jul 14 19:20:02 2014

Again, think about some of the ramifications of the above.  They’re pretty heady, aren’t they?  I’ll come back to them in part three of this series.

That’s all for today

Has your interest been piqued?  We haven’t even gotten to the use case for this and it’s already been a huge amount of work.  No kidding, I spent almost two full man weeks trying to figure all of the above (and a bit more) out for Kscope14.  It was fun and painful, all at the same time.

Have you guessed where I’m taking this?  Send me your comments care of the comment section below.  C’mon, I’m lonely.

Hopefully that was enough for you – that’s over 16 pages and over 2,700 words.  Given that the above is 100% free, surely you’ve gotten value for money.

Be seeing you.

Enhanced Planning Validations: Part 1


Introduction


What you are about to read is not my work.  No, I didn’t steal it, but instead I somehow recruited Tyler Feddersen of Performance Architects to write about this really clever technique to enhance validations in Hyperion Planning.  Also, that will likely mean that the quality of what you are about to read is several levels of awesomeness above my usual drivel, so pay attention, this is good stuff.  

Does this mean I work for Performance Architects now?  Nope, I am still my one man band.  

I know Tyler (and Chuck Persky and Jonathan Etkin) because they were incredibly gracious, helpful, and just all around awesome when I was writing my Kscope14 ASO Planning presentation with Tim German and needed some real world feedback on what that tool is like.  Sometimes people think that consulting companies are ruthless competitors (okay, I am not exactly a competitor to that firm unless they’ve been fooling me and have just Tyler as an employee but in some sense we do compete) that don’t share.  The hour long conference call I had with Performance Architects and the multiple emails back and forth show me that they “get it”.  What a nice bunch of guys.  I’m sure they treat their customers with the same generosity of spirit and technical acumen, given that they spent all that time and energy with me for free.  You should hire them when you aren’t hiring me.   ;)

And with that, every single word below except the conclusion  is Tyler’s.  Tyler, thanks for writing this and I look forward to part two.

Tyler Feddersen, Performance Architects   (Big thanks as well to Mohan Chanila for making this readable)

What’s the big deal?

Data form validations offered in Hyperion Planning 11.1.2+ were a nice step in the right direction for easily identifying data entry errors. Additionally, a lesser known functionality that was introduced simultaneously was the ability to hook up these data form validations with Process Management. Unfortunately, there are a few caveats. First off, there are very few of us out there using Process Management. Secondly, even if you do…I’ve found the validation processing to be less than stellar if there are too many validations.

Facing similar issues, a fellow co-worker of mine, Alex Rodriguez, introduced the initial idea to use business rules as an alternative, which turned out to be quite efficient. Since then, I’ve expanded on the idea to create a full validation solution. I have to say…I like the outcome. In fact, I’ve used it in multiple recent Planning solutions. The result does not involve anything like code manipulation but rather is an innovative take on pieces of functionality that are already available. Part 1 of this solution will go through the creation of a process to identify erroneous data entry.

A Look Ahead

Below is an overview of what this blog will be attempting to accomplish. Some of the members may seem a little different since this was done on a PSPB application.

A correct data set.

After saving new data, an error is displayed due to an invalid entry. Note: The cell is highlighted with “red” by using the data form validations. If the data were to be saved correctly again, the form would again appear as the initial image.

Additionally, the error appears in a consolidated list of errors for administrators and/or users. 

How It Was Done

Step 1: The Validation Member

The first step is to create a new member specific to validations. The sole use of this member is going to be for storing validation “flags” (SmartList values). This validation member should be created in a SPARSE dimension that will not be part of the validation logic. For example, I used the Budget Item dimension (sparse) in my application because I wanted the errors to be stored at a specific Employee/Position/Entity combination, but I did not care to see errors by specific Budget Item members. By using a Sparse member, the validation intersection will be able to load much faster on a data form using “Suppress Missing Blocks” since it has its own block.

Note: In the images below, you will see that two members were actually used, one in the Budget Item dimension (Validate) and one in the Account dimension (AllocValidate). This was done for two reasons. One, it allowed me to assign the SmartList (coming up) to an Account and keep the Evaluation Settings untouched. Two, it allowed the form to display the error with a complete new column/row combination. However, similar functionality can be achieved by simply adding the single sparse member.


Hold up…is block suppression really THAT important?

Thanks for asking. Yes. The use of block suppression with this solution is incredibly important in getting an efficient final product. As we already know (or now you will), a block is needed for data in each sparse, stored member combination. If we create a validation member in the dense dimension and don’t trim down the possible sparse combinations, the validation form is going to need to process all blocks that exist in the entire system to return the validation errors. Meanwhile, by choosing one of the sparse dimensions and only using a single member within it (a validation member or other random member), the number of evaluated blocks is knocked down to only those affected by the actual validations. Using the “Suppress Missing Blocks” option, Planning will suppress all blocks that contain no data prior to sending the query to Essbase.

For example, the image below shows the processed cells when a sparse member was utilized. Only two entire cells were processed when the Planning form was opened, due to one validation error existing.


Now, the image below shows the number of processed cells when the sparse member strategy was not used. Notice that the number of cells increased exponentially, even though there was still only one validation error existing.


While 10,168 cells may not seem like much relative to larger data sets, it represents every block in a fairly small and sparse data set. In a heavier model, the differences could go from Planning crashing to the form opening in a matter of seconds.

Step 2: The SmartList of Errors

Next, we need to create a SmartList with entries for each error to be tested. “AllocValidate_Menu” has been created in the image below.  


An entry should be put in for each desired error. In the image below, there are 13 different errors that can be flagged. For example, the second entry is used when a specific assignment contains over a 100% allocation.


Lastly, assign the SmartList to the Validation member that was previously created. In the example below, it is being assigned to the AllocValidate member, which exists in the Account dimension.

Step 3: Create the Business Rule

This step can either be accomplished by creating a new rule or by adding to an existing script. In the example below, the script is meant to be run on a data form with the Scenario, Version, Organization, Position, and Employee in the POV or Page. The rule then checks the total Percentage Allocation for the specified intersection. If the rule confirms an error, the following intersection will be assigned the error flag of having a high percentage:

{Scenario}->{Version}->{Organization}->{Position_Dim}->{Employee}->”BegBalance”->”Unspecified Element”->”No Year”->”AllocValidate”->”Validate”
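
For readers without the screenshots, here is a minimal sketch of what such a rule might look like.  It is illustrative only: “Total Allocation” is a placeholder member name standing in for whatever account holds the total percentage allocation, and the assumption that the SmartList value 2 maps to the “over 100% allocation” entry is mine.

/*  Hedged sketch -- "Total Allocation" and the SmartList value of 2 are assumptions  */
FIX ({Scenario}, {Version}, {Organization}, {Position_Dim}, {Employee},
     "BegBalance", "No Year", "Unspecified Element", "Validate")
    "AllocValidate" (
        IF ("Total Allocation" > 1)
            "AllocValidate" = 2;    /*  flag the intersection with the "over 100%" error  */
        ENDIF
    )
ENDFIX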

Step 4: Create the Data Form

As with the business rule, this can be done by either creating a new form or adding to an existing one. In the examples provided at the beginning, both of these techniques were used.  The example below also shows the addition of a menu item to the form. The menu item “Run Validation” will kick off the rule that was created in the third step to populate the data form.


Voila! A fully functioning error handling process is at your disposal. The complexity of this error handling can go as far as Essbase/Planning will take you. Since all of the error handling has been merged with the rest of the Planning solution, there are plenty of ways to explore additional features to this solution. Tune in for Part 2 to see a few ways on how this process can be expanded further, including the creation of a “locking” procedure to prevent business rule execution and an integration with Process Management.  

Conclusion by Cameron

And there you have it – a really clever way to handle validations in Planning using the functionality built into the tool.  Thanks again Tyler for sharing this.

Be seeing you.

Stupid Programming Trick No. 18 -- Modifying the Period dimension in Planning


Introduction

The genesis of this Stupid Programming Trick comes both from this post over on Network54 and because this has bugged me for, oh, ever (as long as ever = how long Planning has been a product so 13-odd years at this point).  It’s not that adding summary time periods to a custom time period Planning application is hard, it’s just that it’s all a bit unintuitive and I’ve never seen it documented anywhere.

For those reasons, what you will see is how to manipulate custom periods in a Planning Period dimension to get quarter totals.  I will also show how to add custom Period members outside of YearTotal.

Create the application

Note that I am picking Custom as my base time period.

Do I have a custom time period Planning application?  It sure looks like it.  Let’s create the application.

Working…working…working

Glorious 12th +1


So now I have a custom Period Planning application.  How do I add quarters into my YearTotal hierarchy?

Let’s add those quarters

Don’t do this

If you want to add siblings to BegBalance and YearTotal, click on Add Child.  If you want to add children to BegBalance or YearTotal, don’t click on Add Child.  That makes sense, does it not?  No, I suppose not.

Do this

Click on the first level zero period under YearTotal and then click on Add Summary Time Period.

Nothing new to see here

Name it, define its storage property, and click on Save.

13 Periods in a quarter?

Not forever, but for now I have added the first summary time period under YearTotal.  I will need to do this three more times.

Now add the next one

Select the first child of what will become Q2.  If Q1 is P1 through P3, the first child of Q2 will be P4.

Easy peasy lemon squeezy

Nothing new to see here, folks.  Move along.

Here’s Q2

Weird enough for you?  Yet it works.  And it happens again in Q3 and Q4.

Do it again

You should know the drill by now.  Click on P7 and then Add Summary Time Period.
Ta da

One more time

Q4 is almost done.

It’s done

Why does Period behave this way?  Ask the Oracle Product Manager, but I wouldn’t blame him – it has been this way since I first laid eyes on Planning 1.5.

Adding a sibling to YearTotal

So that’s the weirdness when it comes to quarters.  What about other custom time periods?  More weirdness, of course.

Note how you do not click on YearTotal and Add Sibling to add, oh, a sibling to YearTotal as that would likely be too intuitive.  

Nope, instead you click on Period and then click on the toolbar button Add Child.

Put in a value.

Ta-da, now you have a sibling to YearTotal


Add a child to P1YTD.
Now you are (or at least I am) done.

Don’t do this at home, kids

Oooo, scary

All done

Conclusion

What exactly can we conclude from this?
  • Adding siblings and parents in Period is just…weird
  • It ain’t intuitive, but much of life is that way
  • You can add siblings and parents so long as you follow the heretofore undocumented steps
  • Yr. obt. svt. answers pleas for help even whilst he is on vacation.  Am I nuts?  Probably.

Be seeing you.

Calculation Manager, BSO Planning, and ASO Planning combine for an awesome ASO Essbase procedural calculation hack -- Part 2


Introduction

This is part two of a three part series on Calculation Manager, ASO Planning, its relationship with BSO Planning, and Essbase ASO procedural calculations.  Part one covered all sorts of interesting things you can do with, oddly enough, BSO + Calculation Manager.  There is a reason for introducing that first, and then this post on making ASO procedural calculations fast.  I’d encourage you to stick around for part three which ties them both together in a most unusual way.  With that, let’s get into the details of making ASO procedural calcs fast, fast, fast.

Joe Watkins’ genius

How often do you get called genius?  If you’re anything like yr. obt. svt. it isn’t all that regular of an occurrence unless that term is within the context of, “You have a genius for screwing things up” or “You’re a genius at prevarication” or “I’ve met geniuses and you are no genius”.  

Joe Watkins (with whom I have only ever “met” via email and Network54) is a genius, at least when it comes to ASO Essbase, because the approach Joe came up with is…genius.

Why do I use the word genius?  Simply because:
  1. His approach is 100% undocumented.
  2. It is not an intuitive solution at first glance, but on examination it is not only obvious, it’s !@#$ing awesome.
  3. It solves a problem you could drive an HGV double-articulated lorry through.
  4. It is fast, fast, fast, fast.
  5. It is a total hack.

And oh yes, it is part two of my three part series on Calculation Manager, BSO Planning, ASO Planning, and ASO Essbase procedural calculations.

While this blog post stands on its own for Essbase-only practitioners for the technique alone,  I will explain why you will at least want to combine it with the CDF information I gave in part one even if the words “Hyperion Planning” never cross your lips.  Hyperion Planning geeks will have to read all three parts to get all of this hack (and yes, I contributed something to this, so it isn’t all a case of steal from others to get a good solution).

The problem(s) with ASO Essbase procedural calcs

It’s really very simple and very devastating at the same time – ASO procedural calculations do not ignore empty level zero member intersections but instead consider all of them.  Where there’s constituent data, Essbase calculates the result; where there’s none, Essbase leaves the result blank.  For us BSO geeks, this is 100% not the way BSO Essbase works by default; in BSO, no blocks = no result unless we force Essbase to do so.  If only there were a way to make ASO Essbase behave the same way…

What this ASO Essbase behavior means is that procedural calculations, unless they are very tightly constrained in scope, can be agonizingly slow.  And even that tight targeting of the calculations can be a roadblock – do you always know where in a database a calculation should occur?  Maybe you could write a series of small scope calcs in a reporting application, but that would be very difficult to do if there is an element of budgeting to the application.

And in fact, I’ve understated the problem – even in a pretty small ASO database a procedural calculation can take a very, very, very long time.  Proof?  Read on.

The database

I stumbled (as is my usual wont) into this as part of a presentation.  I was trying to write a currency conversion calc to mimic, sort of, what happens in BSO Planning as part of an ASO Planning Kscope14 presentation I gave with Tim German.  I should also mention that Dan Pressman was a big help in the building of the dimensions.

The dimensions


By ASO standards it isn’t much.  Of course, by the lights of BSO, it’s huge – more on that in a moment.

The logic

Oddly, ASO Planning does not create all of the dimensions required for a multiple currency database.  They expect you to do so.  No problem, I thought, I’ll simply create similar dimensions to what exists in BSO Planning and go from there.  

Dimensions

Here’s how the fx relevant dimensions look when compared to a standard Planning-derived BSO fx plan type:
BSO                                     ASO
HSP_Rates                               Fx Rates
Account (HSP_RateType and children)     Account (Fx Rate Type and children)
Currency                                Currency
Product (Entity)                        Product (Entity)
N/A                                     Analytic

For you Planning geeks, the Product dimension is the Entity dimension, Currency was automatically created by Planning (although corresponding fx conversion logic was not), and Analytic and Fx Rates are custom dimensions I created to support ASO fx.
Account
Accounts stored the difference between Average and Ending rates.  This is just like BSO Planning.
FX Rates
I don’t have a HSP_Rates dimension but this is the same thing, mostly.
Currency
This dimension came directly from Planning itself.
Product (Entity)
The concept of tagging the members with UDAs is the same as BSO Planning.

BSO code

Want the code?  Create a multicurrency Planning app and then have Planning generate the calc script.  I’m just showing the screen shot to give you a feel for the logic.

All that the code is doing is:
  1. FIXing on level zero members
  2. Testing for UDAs assigned to the Entity dimension
  3. Multiplying the Local input value by the fx rate / by the USD rate (which is always 1)

As this is BSO, after the fx conversion, an aggregation is needed but not shown.  Of course ASO won’t require that aggregation code as it does that aggregation on the fly.
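
For readers who do not want to generate the script themselves, here is a hedged sketch of the shape of that logic.  The member and UDA names are illustrative stand-ins, not what Planning actually generates, and only one currency branch is shown.

/*  Hedged sketch of Planning-style BSO fx logic -- member and UDA names are illustrative  */
FIX (@LEVMBRS("Product", 0), @LEVMBRS("Year", 0))
    "USD" (
        IF (@ISUDA("Product", "Sterling"))
            /*  Local value times the Sterling rate, divided by the USD rate (always 1)  */
            "USD" = "Local" * ("Ending Rate"->"Sterling" / "Ending Rate"->"USD");
        ENDIF
    )
ENDFIX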

ASO first attempt

I wrote this in Calculation Manager (remember, I was trying to do this for Planning) but the logic is exactly the same in MaxL.
Execute allocation

Note the SourceRegion – in this case it’s all individual members because I was trying to calculate just one rate.  I would at least have to open up the fx member set if I were to calculate more than one.
fxtest.csc
This is a one line currency conversion formula.  
This whole approach is ugly and not easily expandable, but it serves to illustrate an almost literal translation from BSO to ASO logic.
Success, if you can call it that.  
After entering the rates (sent from a Smart View ad-hoc sheet into the Essbase database as ASO Planning doesn’t push rates the way BSO Planning does), I entered one data value to be converted from Sterling to Dollars as per the above code.  That is one as in a whole number, greater than zero, but less than two.  How long do you think it took to run?  A second?  Half a second?  Something else?

How about 6,000 seconds?  Let me repeat that in words, not numbers: six thousand seconds or sixty hundred seconds or one and two thirds hours.  To convert one data value.  See the problem with ASO procedural calculations?

How long did the same database (just about) with the same amount of data take to run in BSO?  The total elapsed calculation time was 0.025 seconds.  So much for the mighty power of ASO compared to poor old obsolete BSO.

The fix

The key to BSO’s speed is that BSO does not consider all of the possible level zero member intersections, it only considers the sparse member combinations that have data.  In ASO terms, it only calculates based on the non-empty member intersections.  There are NONEMPTYTUPLE and NONEMPTYMEMBER commands in MDX but unfortunately they are not part of the execute calculation and execute allocation (the two and only two ways to run ASO procedural calculations) grammar.

NB – Oracle say this is coming to Essbase but when is tbd.  That will be (some day) great for those of us who are on the latest release, not so much for everyone on 11.1.2.3.500 and before.

So how can we get NONEMPTY functionality in ASO if it’s not part of the commands?  Enter, finally, Joe Watkins’ technique again.

The problem with NONEMPTYTUPLE

NONEMPTYTUPLE (I used that instead of NONEMPTYMEMBER) can only be used in an outline member formula.  Member formulas are (quite naturally) in the outline.  The member formula fires at all levels, and in the case of an fx rate calculation, is only actually valid at level zero.

This is a bit of an impasse – we know the problem with procedural calcs is the NONEMPTY issue, fx rate calculations only make sense at level zero, MDX has a keyword to address this, but only in member formulas and member formulas fire at all levels, not just level zero.  What to do?

Back to the Genius of Joe Watkins

Instead of trying to fight Essbase, Joe came up with a really clever way of using existing ASO functionality.  I read about his approach to fx over on Network54 in the beginning of 2013 and, since I was doing BSO Planning (that’s all there was) at the time, filed it away for future reference.  I also thought he was nuts for saying things like, “This is the future of ASO.. BSO will be dead in 5 years.. (my prediction)....”  Now I’m not so sure.

What he did

Joe:
  1. Created a member formula that contained his fx logic
  2. Stuck a NONEMPTYTUPLE keyword at the top of the formula
  3. Ran an ASO procedural allocation (not calculation) that 100% copied his member formula to a stored member thus harnessing ASO’s fast non empty performance but keeping the data calculated only at level zero
  4. Enjoyed success, the good life, and that warm glow inside that only comes from helping others

I may be slightly exaggerating number four, but one through three are the truth.

NB – The example here is a fx calculation, but this approach works for any and all level zero calculations.

Here’s how I did it

Additional member

In the Analytic dimension, I created a calculate-only member called MTD USA.  It contains the member formula to calculate fx conversion.

MTD USA’s member formula

Note the NONEMPTYTUPLE command that makes the member formula only address non empty data.

The CASE statement is a MDX version of the BSO currency conversion calc script.
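
Since the formula itself lives in a screenshot, here is a hedged, illustrative reconstruction of what such a member formula can look like.  The rate member, the “No Product” intersection, and the UDA name are my assumptions; the point is simply the NONEMPTYTUPLE declaration sitting on top of a level zero CASE calculation.

/*  Hedged sketch of an MDX member formula on a calc-only member -- names are illustrative  */
NONEMPTYTUPLE ([MTD], [Local])
CASE
    WHEN IsUda([Product].CurrentMember, "Sterling")
        THEN ([MTD], [Local]) * ([Ending Rate], [Sterling], [No Product])
    ELSE ([MTD], [Local])
END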

Execute allocation

It’s all pretty simple from here on, thanks to Joe.  All I need to do is kick off an execute allocation in MaxL, set up my pov aka my FIX statement, identify the source (Local) and target (USD).  By not defining a spread range other than USD, Essbase copies everything from MTD USA in Local to MTD in USD.

Did you see the $5, $6, and $7 in the code?  If it’s MaxL, it can be driven through parameter variables.  Think about how you might use that in Planning given the last post’s review of @CalcMgrExecuteEncryptMaxLFile.
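
For those reading along without the screenshot, a rough, hedged sketch of such a statement, modeled on the documented execute allocation grammar with everything hard-coded, might look like the following.  The application, database, and member names are illustrative, and in the real script the movable pieces were driven by the $5, $6, and $7 positional parameters mentioned above.

/*  Hedged sketch only -- names are illustrative and the real statement is parameter driven  */
execute allocation process on database ASOPlan.Plan1 with
    pov "Crossjoin({[FY14]},
         Crossjoin({[Working]},
         Crossjoin({[MTD]},
          Descendants([Product], Levels([Product], 0)))))"
    amount "([MTD USA], [Local])"
    target ""
    range "{[USD]}"
    spread;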

How fast is it?

On my VM with a limited set of data (I have finally ordered that 1 TB SSD but have yet to install it so I am constrained for space) I observed the following calculation times:
Process      BSO      ASO     X Fast
Allocate     106      3       35
Fx           400      1.2     333
Aggregate    1,772    N/A     N/A
Total        2,278    4.2     542

The allocate and the aggregate times are interesting, but the biggest overall difference is in fx – it’s over 300 times as fast as the equivalent BSO outline and data.  Look at that, ASO is faster than BSO, if it only considers non empty data.  

And now, thanks to Joe’s technique, it can.  A hack, most assuredly, but a glorious one.  I have an upcoming ASO budgeting application that I was dreading because I couldn’t figure out how to quickly do its rate calculations (no fx involved).  Now I know how to do it, and quickly.

This technique is in a word, awesome.  Yeah, I take some stick for overusing that word, but 300 times the speed of BSO is sort of remarkable.  All of us who use ASO procedural calcs owe Joe a huge round of thanks.

So what’s left?

Part three of this series will bring together:
  1. Running MaxL from BSO while in an ASO Planning application with Calculation Manager
  2. The fast ASO procedural calcs you just read about
  3. How to use this in Planning (and even Essbase)

I know it’s all been a bit long but there’s a lot of information to impart and it took me freaking forever to figure out how to tie it all together – there’s no reason why explaining it should be any faster.

:)

Be seeing you.

Enhanced Planning Validations: Part 2


Introduction

At the end of July Tyler Feddersen very generously wrote part one of this two part series on data validation in Planning; you now are reading the concluding entry.  I’ll let Tyler do the explaining below but I want to make sure you, Gentle Reader, understand that other than this introduction and a super short concluding paragraph that everything is 100% Tyler’s.  I don’t have many guest bloggers (if you are interested send me a comment via this blog or a LinkedIn message) but I am always beyond happy to use my blog as a platform for sharing information.  

Thanks must also go out to Performance Architects – not many companies (in my experience, “not many” should read “just about none”) would allow one of their employees to write for something other than their blog.  PA are to be commended for their spirit of information sharing.  Chuck Persky, whom I know from the long ago days of ISA (and thus my connection to PA), will be presenting at Oracle Open World – if you are going to that event I encourage you to attend his session Oracle Planning and Budgeting Cloud Service: Oracle Hyperion Planning in the Cloud [UGF9091] on Sunday, 28 September.  I’m not presenting on that Sunday (or at all at OOW, for that matter – sanity occasionally comes my way), but I am the guy who solicited the ODTUG EPM user group sessions.  And lest unworthy thoughts enter your mind, I reached out to Chuck long before this blog post was written – there is no quid pro quo.

With that, enjoy.

Tyler Feddersen, Performance Architects

Let’s take it a step further


If you are reading this and have not read Part 1, I would definitely recommend stopping now and doing so. The rest of this blog will be taking the results of Part 1 and taking it a step further in creating a pretty good overall validation solution.

The validation process discussed in Part 1 was a simple way to be able to identify errors that would cause data integrity issues. However, it does not prevent them. This is the part of the process that Part 2 will attempt to accomplish.

A Look Ahead


Each individual step with this solution is fairly simple and direct. However, the process that combines them all starts to get a little confusing. Let’s take a glance at the general process flow that an end user would go through when working with this technique.

  1. The user completes all necessary data entries
  2. Following data entry, the user would change a lock status to “Locked”
    1. Done through a separate entry form
    2. A business rule checks for any errors prior to allowing the lock
    3. Causes a flag to show as “Locked”, which also prevents additional business rule execution
  3. If Process Management is used, the user would approve the entity, which would validate against the data form used in Step #2
    1. If approvals process finds that the entity has not gone through the initial locking process, an error will be displayed
    2. By successfully approving, the user is no longer the owner of the entity
  4. Following a successful approval, the user is no longer able to perform any data entry, which means that the initial lock in Step #2 cannot be reverted
  5. At this point, the user would need to have the entity sent back through Process Management in order to perform any further data modifications

The end result of the process allows for analysts and administrators to switch their focus to more business-related issues rather than focusing on locking down the system and having to check for all of the typical data entry errors. Administrators generally have to pick a universal end date to lock down the system in order to be assured that users are not making additional changes. With the process mentioned above, budgets and forecasts can be potentially analyzed as they are completed since a user’s ability to make any modifications is removed as soon as everything is approved.

The important part to remember is that this process is done completely using out-of-the-box functionality. If there is anything through the steps that you feel could be simplified….it probably could. But this will at least get us started.

Step 1: Dimensionality


If you recall from Part 1, I created two validation members: AllocValidate and Validate. AllocValidate was created in the dense Accounts dimension  while the Validate member was created within a sparse dimension, to give us block suppression capabilities. For this portion of the validation process, I created an additional Accounts member, LockFlag. This new member will be used in coordination with the previously created member, AllocValidate, to create a locking “flag” that all business rules can use to decide whether the rule should continue to process or not.

Additionally, I added a “NO_XXXX” for each dimension. The flag only needs to be stored at the Entity level, so each dimension outside of Accounts, Period, Scenario, Version, Year (although, I use No Year), and the dimension containing “Validate” will need the “NO_XXXX” member.  





Step 2: SmartLists


Building off the SmartList from Part 1, I added a few new entries. These entries will be used to display the status of an entity in the approval process, including: locked, open, and invalid. Additionally, I created a new SmartList to accompany the “LockFlag” member. This SmartList will be used by the users to lock and unlock their entity.  



Step 3: Create the Data Form


Create a data form with following parameters:

Rows: Entity
Columns: AllocValidate and LockFlag
POV: All other members where the flag will be stored (including Validate)

Additional Specifications:
  • Make the column for AllocValidate Read-Only. This column will be modified based on the user selection for LockFlag in coordination with a business rule that will run on save.
  • Add a Validation Rule for the AllocValidate column to show as invalid if the column does not appear as being locked.  



Step 4: Business Rules


First, create a rule to run on save that will check for the “LockFlag” selection and update the “AllocValidate” status accordingly. The script should initially check if an entity has any error messages. If not, then it will lock or unlock based on the LockFlag selection. Note that this rule actually runs for all entities each time the save occurs. However, there is very little to no potential conflict issue between users, as the rule runs very quickly. Alternatively, this rule can also be run as a Menu option to select a single entity at a time if desired. An example script is shown below.

The rule checks if any validation errors already exist. If so, it changes the AllocValidate flag to appear as invalid.


If the rule detects an invalid entry, the rest of the rule is not processed. If there was no error found, the rest of the rule will lock or unlock based on the LockFlag selection.

Next, update each business rule to take the “AllocValidate” flag into consideration. If the flag’s current status is “Locked”, then we no longer want to run the business rule if it will change the current data set. An example is shown below, using the @RETURN function as well.
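
As a hedged illustration of that guard (the screenshot has the real one), the check at the top of a rule might look something like the sketch below.  The cross-dimensional path to the flag and the assumption that the “Locked” SmartList entry stores a value of 1 are mine; substitute the actual “NO_XXXX” members and SmartList IDs from your application.

/*  Hedged sketch -- the flag intersection and the SmartList value of 1 ("Locked") are assumptions  */
FIX ({Scenario}, {Version}, {Entity})
    "LockFlag" (
        IF ("AllocValidate"->"BegBalance"->"No Year"->"Validate" == 1)
            @RETURN("This entity is locked.  Unlock it before running this rule.", ERROR);
        ENDIF
    )
ENDFIX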

Lastly, I’ve updated the rule that was created in Part 1 to include an automated re-assignment of the AllocValidate lock status to be invalid, if an error occurs. This is to prevent an issue where the user locks the entity but then submits invalid data, as data entries cannot be prevented through the use of business rules.  The example below would assign an error to a specific assignment in addition to assigning the “Invalid” flag for the whole entity.

Step 5: Process Management


Process Management is a fairly complex subject within Hyperion Planning, so I will not be going into a lot of detail on the specifics of setting everything up as that could be its own blog…or two. The assumption here is that the decision has been made to use Process Management, which allows it to incorporate this validation process. The important part is to make sure to select the checkbox “Do Not Promote” within the options for the Validation Rule set up in Step 3. This will cause approvals to fail if it detects an error within the data form.


Step 6: Example


If you’re confused at this point, it’s quite understandable. I am as well. Nonetheless, let’s put everything together and actually go through a sample process that a user might go through.

  1. The user has invalid data and is trying to lock department (entity) DP_0000. However, the rule has found a validation error.


  2. The user can navigate to a form with the list of errors affecting all of their entities. The example below was the form that was created in Part 1.


  3. The error is displayed on the form, and the user fixes the error. Note: My error flags were thrown off during the preparation for this demo, and the error doesn’t EXACTLY reflect the real error. We’ll pretend.


  4. After fixing the error, the user navigates back to the locking form and tries to lock again. The lock is successful this time.


  5. The user tries to run a rule to add a new position to the department that has been locked. Since this rule would change the data set that has been validated, an error is returned.


  6. The user goes back into the data set, entering a change to the data set that will cause an error to occur. The flag is automatically reset to be invalid.


  7. As an example for Process Management, we’ll pretend the user has approved the department successfully. In doing so, the department can no longer be unlocked unless the department is sent back, creating a completely locked down department.


Conclusion


By now, you probably get the gist of what is trying to be accomplished here. However, it all can get a little complex with how all of these pieces interact with each other. As mentioned earlier, all of these steps are just an example of how all of these pieces can work together to create a customized flow within any Planning application. They can be refined, simplified, or even expanded as needed. The best part about all of this is that it’s simply a new take on existing functionality, allowing developers and users to remain within the Planning technology. With the Hyperion functionality, we’re always attempting to achieve higher efficiency, and there is nothing more efficient than removing some redundant tasks from the administrator’s plate. Thanks to all for reading and a special thanks to Cameron as well for allowing me to hijack his blog for a few entries. Let me know if you have any questions or comments!

Cameron’s conclusion

So there you have it – a way to prevent planners from inputting incorrect data values.  It’s sort of the budget data quality Holy Grail for Planning administrators and consultants alike.  And oh yeah, business owners too.  This is awesome stuff and again my thanks to Tyler Feddersen for writing this and Performance Architects for allowing Tyler to write outside the walls of his company.  PA understands that sharing information is how we all grow and learn. Thanks for sharing, guys.

Be seeing you.

Stupid Programming Tricks No. 19 -- TRUNCATEing DATAEXPORT


An interesting statistic

The code I am about to show you, and the solution that I came up with, took me over 15 hours of research, cursing, testing, and finally triumph.  Multiple cups of coffee were also consumed as were a few cups of proper tea.  Why do you care about what someone (such as yr. obt. svt.)  does for free or how he imbibes caffeine?  Perhaps for the amusement factor, as the solution that I came up with consists of eleven lines of code.  Eleven lines of code in 15 hours, for those of you mathematically challenged, is 0.73 lines of code per hour, or 5.5 lines of code per calendar day.  Ouch.  But that is the price for:  having a “good” idea, venturing into technology areas one knows very little about, and not giving up.

I tend to agree (if only to salve my ego) with the first point by David Veksler of the Mises Economic Blog (What, you’re not all Hayek fans?).  I quote:
A programmer spends about 10-20% of his time writing code, and most programmers write about 10-12 lines of code per day that goes into the final product, regardless of their skill level. Good programmers spend much of the other 90% thinking, researching, and experimenting to find the best design.

Does that put me in the category of a good programmer?  It’s hard to say – I do spend an awful lot of time thinking, “What am I trying to do and conceptually, how would I do it?” and then later, much later, I spend even more time trying to figure out how I turn that concept into reality.  So is that being a good programmer or just someone flailing about in areas that he doesn’t know much about?  You decide.

Why oh why oh why am I prattling on about this?

There was a thread the other day over on Network54 that discussed how to use the headers that come from a BSO DATAEXPORT statement as they are, without a doubt, pants.  I suggested that SQL output and subsequent manipulation (I should note that Adam_M went and did it in a text file but no matter, this was the inspiration and based on what he wanted to do, I think SQL might have been a better fit for transforming the data – Read The Whole Thing and make up your own mind) was the answer to Adam_M’s issue, but when I did so I had in the back of my mind the nagging reminder that, “Yeah, he’ll write it to relational, but then he’ll do it two or three or more times as he tweaks the extract and will end up with many more records than he ought.  This is a bummer (and erroneous – I would personally rather have it error out) and cannot be resolved by Essbase.”  And that’s right; Essbase will not truncate the table before it writes.  You have to do this manually and that of course means you have some kind of SQL access which in the real world, particularly in production, you will most likely not have.

The goal aka the functional spec

It would be super nice if BSO’s DATAEXPORT calc script function did a TRUNCATE to clear the table, or at least had the option of doing a TRUNCATE, before writing to a SQL target.

Two paths not taken

Just too much of a hack, even for me

One approach would be to use the @CalcMgrExecute function I wrote about (and I still owe part three of that series to you, Gentle Reader), run a MaxL script, in that MaxL script use the shell grammar, run a SQL*Plus (if Oracle) or sqlcmd (if SQL Server) script that does the TRUNCATE and then return back to the calling MaxL script, end it, and do the export as the next step in the original calc script.  At least I am pretty sure that would work but it is just too clunky for words.  I would go down this path if I wasn’t allowed to make the modifications I am about to describe but I would hate myself in the morning.
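
Just to make the clunkiness concrete, here is a minimal sketch of what that MaxL shell step might look like – the server, database, and table names are illustrative assumptions on my part:

/* Sketch only: the rejected approach -- shell out from MaxL to sqlcmd to do the TRUNCATE */
shell 'sqlcmd -S MYSQLSERVER -d QueryTest -Q "TRUNCATE TABLE dbo.SampleBasicExport"';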

The way I would like to do it, but cannot

What occurred to me was that a SQL trigger that did a TRUNCATE before the INSERT from DATAEXPORT would do the trick.  Unfortunately, this cannot work because of the way Essbase writes to relational tables.

Would you believe…

That the default for Essbase is to do separate INSERTs for each and every line in the export?  I had heard this from My Man In California (MMIC) aka Glenn Schwartzberg but like oh so many words of wisdom he drops my way (MMIC is a very giving guy and I a forgetful one), I completely forgot.  Why does this inefficient approach to writing to relational matter?

It all goes KABOOM

It matters because the common SQL triggers of BEFORE INSERT TRIGGER (Oracle) and INSTEAD OF INSERT (SQL Server) would fire each and every time Essbase writes to the table.

Here’s what SQL Profiler showed me as the DATAEXPORT code ran:

There are 2,645 records to export out of Sample.Basic and that requires 2,645 INSERT statements.  Eeek.

It might be possible, although inefficient, to use these triggers to do the TRUNCATE and then copy the single record to yet another table, but that is an ugly approach, and one that I am not sure I have thought through completely as that target would need to be truncated on first write.  So maybe an INSERT trigger is not on.

Another approach

But then I again recalled one of MMIC’s pearls of wisdom (see, Glenn, I really do listen to you) – there is an Essbase.cfg file setting that can allow batch inserts if the database supports that functionality.

DATAEXPORTENABLEBATCHINSERT

Off I went to the Tech Ref and did my requisite RTM.  Oh goody, thought I, it’s going to work.  Just read the description:  “When DATAEXPORTENABLEBATCHINSERT is set to TRUE, Essbase determines whether the relational database and the ODBC driver permit batch insert.”  But I should have known that when Essbase giveth with one hand, it taketh away with the other, because the next sentence states, “If they do, Essbase uses the batch-insert method, and, thus, performance is optimized.  Essbase determines the batch size; however, you can control the number of rows (from 2 to 1000) that are inserted at one time by using the DEXPSQLROWSIZE configuration setting.”

Oh, bugger, all this setting does is increase the number of records per INSERT, so that INSERT trigger approach still fails.  I know that Sample.Basic has 2,645 rows and that means three INSERTs at the maximum DEXPSQLROWSIZE.  Bugger, again.  What to do?
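
For reference, turning on batch inserts is nothing more than a couple of essbase.cfg entries (plus an Essbase restart); a sketch, assuming you want the maximum row size:

DATAEXPORTENABLEBATCHINSERT TRUE
DEXPSQLROWSIZE 1000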

Is there a way out?  Why yes there is.

INSERT triggers are out, but is there another trappable event that could allow the TRUNCATE to occur?  I looked again at SQL Profiler and saw two interesting facts.

  1. The LoginName column reflects whatever username I defined in the ODBC System DSN I used.
  2. There is a login event, at least as trapped via SQL Profiler.

The simple solution, although figuring out how to do it took, oh, forever, and it isn’t enough

If I could write a trigger that intercepts the act of EssbaseLogin (or whatever SQL username I chose) connecting to SQL Server I could have the login trigger TRUNCATE the table right then and there.  This frees me from the multiple INSERT issue quite handily.  How oh how oh how to do it was the question.

Figuring it out

SQL Server has a trigger database operation keyword called FOR LOGON (the Oracle equivalent is AFTER LOGON).  All I needed to do was to create that trigger, test for the username, do the TRUNCATE, and, because I am a paranoid nut, log the fact that I did this, when I did it, and that the TRUNCATE succeeded.

A note for all of you SQL n00bs and experts – what you are about to see is the very first SQL trigger yr. obt. svt. ever wrote.  At least that’s the excuse I am sticking to as explanation why it took so long.  SQL n00bs – take heart, if I can do it, so can you; experts – everyone has to start somewhere, so stop the snickering at my plodding progress.

The target database

Here is what QueryTest.SampleBasicExport looks like – a simple table that sticks Product, Market, Measures, and Scenario into the rows and the 12 level zero months of Year as columns.
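
Since the screenshot doesn’t reproduce here, a minimal sketch of that table – the column names and data types are my assumptions based on the description above:

-- Sketch of the DATAEXPORT target table: sparse dimensions in rows, months as columns
CREATE TABLE QueryTest.dbo.SampleBasicExport (
    Product  VARCHAR(80),
    Market   VARCHAR(80),
    Measures VARCHAR(80),
    Scenario VARCHAR(80),
    Jan FLOAT, Feb FLOAT, Mar FLOAT, Apr FLOAT, May FLOAT, Jun FLOAT,
    Jul FLOAT, Aug FLOAT, Sep FLOAT, Oct FLOAT, Nov FLOAT, Dec FLOAT
);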

Approach #1

I stole the genesis for this code from somewhere on the web (I meant to keep the link, but forgot to do so).   Here’s what I ended up with:
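
The screenshot won’t travel, so here is a sketch of those six lines (the table name is assumed from the setup above):

-- Server-level LOGON trigger that clears the DATAEXPORT target on every connection
ALTER TRIGGER EssbaseDataExport ON ALL SERVER
FOR LOGON
AS
BEGIN
    TRUNCATE TABLE QueryTest.dbo.SampleBasicExport
END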

Line by line

As most of the readers of this blog are Essbase practitioners first, and SQL geeks second (or third, or fourth, etc.), I will explain each line of the code.

  1. The first line defines the trigger EssbaseDataExport – as I changed this about eleventy billion times the code shows ALTER instead of CREATE – and that this is a server-level trigger.
  2. The second line states that this is a LOGON trigger.
  3. Line three’s AS statement introduces the trigger’s body.
  4. The BEGIN statement on line four is the beginning block of the trigger logic.
  5. We finally get to the TRUNCATE logic on line five.
  6. Line six’s END marks the end of the trigger.

All I need to do is execute the code and ta-da, I now have the server-level trigger EssbaseDataExport.  You can see in SQL Server Management Studio that the trigger EssbaseDataExport is now there.

EssbaseDataExport  in action

Here’s what my calc script looks like:
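
The screenshot doesn’t reproduce here, so here is a sketch of the script – the export options and the DSN name are assumptions; the shape of the DATAEXPORT line is what matters:

/* Sketch: level zero export of Sample.Basic to the relational target via an ODBC DSN */
SET DATAEXPORTOPTIONS
{
    DataExportLevel "LEVEL0";
};
DATAEXPORT "DSN" "SampleBasicExport" "SampleBasicExport" "EssbaseLogin" "password";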
Note that the SQL username EssbaseLogin is used.  At this point this isn’t important but it will be in just a bit.

Let’s do a count of the rows in QueryTest.SampleBasicExport.

If I run the export  I expect to see 2,645 records.

What’s the count?

When I run it a second time I expect the same number of rows as I have done a TRUNCATE just after logon but before the INSERTs actually happen:

And check the count:

DATAEXPORT ran, the LOGON trigger did the TRUNCATE, and the table still has 2,645 records.  Success boil in bag!!!

Is this really working? If I disable the trigger, there will be no truncation on run and the record count will now be 2,645 + 2,645 or 5,290 records.  I hope.
SQL Server gives me confirmation that the trigger EssbaseDataExport is disabled.

Run the DATAEXPORT one more time (Do I have to show you yet another MaxL screen shot?  Hopefully not as I am not going to do so.)

What’s the count?  Why yes it is 5,290 records.  So that proves no TRUNCATE happens before Essbase writes when the trigger is turned off.  Huzzah!

With a quick re-enable of the trigger, what happens when I run the DATAEXPORT calc script?
It runs yet again.
And now I have a row count of…
2,645.  I have now proved that every time there’s a login to SQL Server, I can get a TRUNCATE.  

Approach #2

But there’s a problem with this approach and it’s really quite a big one.  Every connection to SQL Server is going to result in a TRUNCATE.  Don’t believe me?  I will disconnect from SQL Server Management Studio, and then reconnect.

Here’s my connect to SQL Server dialog box. I click on the Connect button and…

What’s my row count on QueryTest.SampleBasicExport?

Yup, it’s zero.  As Donald Fauntleroy Duck would say, “Aw, Nuts!”  I need to be able to test for the username so that only the username that does that particular write to relational forces the clear.  And I’m going to need to be very careful about what username does that clear – EssbaseLogin could do two different kinds of connects – once to write (the TRUNCATE makes sense) and then again to read (which would be a bummer because the trigger would blow away the table contents).

So what I really need to do is create another username just used for writes, and maybe even a username just used for writing to that particular table.  I don’t want a SQL Load Rule to force that TRUNCATE to occur because that too will fire the CONNECT trigger.

Setting up a different DSN

As I am admin on my box and my SQL Server instance, this is easy peasy lemon squeezy.

Create the SQL Server username

As I am a dba on my own server, I will create the username EssbaseDataExport_Writer.

And then give EssbaseDataExport_Writer db_owner access so it can do a TRUNCATE.
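
In T-SQL terms, that boils down to something like this sketch (the password is obviously a placeholder):

-- Create the login, map it into QueryTest, and grant db_owner so TRUNCATE works
CREATE LOGIN EssbaseDataExport_Writer WITH PASSWORD = 'SomethingSuitablyStrong';
USE QueryTest;
CREATE USER EssbaseDataExport_Writer FOR LOGIN EssbaseDataExport_Writer;
EXEC sp_addrolemember 'db_owner', 'EssbaseDataExport_Writer';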

Creating the system DSN

Create the ODBC system DSN with that username EssbaseDataExport_Writer.

Set the default database to QueryTest:

And confirm that the username can connect:

Modifying the trigger to test for a specific username

What the trigger needs to do is test for the username EssbaseDataExport_Writer and then, and only then, perform the TRUNCATE.  All other usernames will not perform the clear of the SampleBasicExport table.

Happily there is a database operation keyword called SYSTEM_USER that will return the connection username.  Stick that SYSTEM_USER into an IF statement and then do the TRUNCATE if true and we should be good.

Here’s the trigger with the IF test for SYSTEM_USER.
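
Again, in lieu of the screenshot, a sketch:

-- Only the dedicated export login triggers the TRUNCATE; everyone else connects untouched
ALTER TRIGGER EssbaseDataExport ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF SYSTEM_USER = 'EssbaseDataExport_Writer'
    BEGIN
        TRUNCATE TABLE QueryTest.dbo.SampleBasicExport
    END
END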

Proof of the pudding part 1

If I run the DataExp1 calc script using the non-tested username EssbaseLogin, SQL Server should not perform that TRUNCATE.  If there were already 2,645 records in the table SampleBasicExport, I should now have 5,290.

And so it is:

Proof of the pudding part 2

I created another calc script, this time pointing to the new DSN with the username EssbaseDataExport_Writer.  In my typically unimaginative/lazy way, I named it DataExp2.

What happens now when I run the calc script?

And the row count?

Yup, I am now testing for just the username EssbaseDataExport_Writer!  Whew, what a long journey, but now when Essbase writes to SampleBasicExport using the EssbaseDataExport_Writer DSN/username, it will never double, triple, quadruple, etc. count the data.  

More cool trigger functions

It’s great that the TRUNCATE works, but wouldn’t it also be nice to have a record of each and every time the export fired and that the target table was cleared out before the write?  Why yes it would.  All I need to do is create a table with that kind of information.

Over in QueryTest, I created a table called LoginAudit with three fields:  DateTime, SystemUser, and RowsBeforeInsert.

I then altered the EssbaseDataExport trigger to do an INSERT after the TRUNCATE.  SYSDATETIME() is a date/time stamp, SYSTEM_USER we’re already familiar with, and then I did a subquery to get the row count in SampleBasicExport which, if the TRUNCATE is successful, should always be zero.
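
Here is roughly what that looks like in T-SQL – the audit table’s data types are assumptions:

-- Simple audit table in QueryTest
CREATE TABLE QueryTest.dbo.LoginAudit (
    [DateTime]       DATETIME2,
    SystemUser       NVARCHAR(128),
    RowsBeforeInsert INT
);

-- Trigger now truncates and then logs who fired it, when, and the post-TRUNCATE row count
ALTER TRIGGER EssbaseDataExport ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF SYSTEM_USER = 'EssbaseDataExport_Writer'
    BEGIN
        TRUNCATE TABLE QueryTest.dbo.SampleBasicExport;
        INSERT INTO QueryTest.dbo.LoginAudit ([DateTime], SystemUser, RowsBeforeInsert)
        VALUES (SYSDATETIME(),
                SYSTEM_USER,
                (SELECT COUNT(*) FROM QueryTest.dbo.SampleBasicExport));
    END
END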

After I apply the trigger, I then run the calc script DataExp2 and…

This result is just what I hoped for.  I could stick that INSERT statement before the IF to log and track every login (this is actually a pretty common usage), I could extend the IF to an ELSEIF and test for other tables – in short, I could do all of the cool things that SQL allows me to do.

And that is why SQL should be used for ETL work

I have gone pretty far afield and it took me 19 pages in MS Word to get to the end, but a lot of that was spelling out each and every step in the process for us SQL beginners.

ETL in a Load Rule is, in my opinion, the work of the Devil Hisself because it isn’t transparent, is easy to get wrong, and has functions that cannot be undone, e.g. splitting a column.  Whatever the theological genesis of SQL is, it doesn’t suffer from any of the faults of Essbase Load Rules.  Yes, this is another Cameron rant about the Evils of Load Rules.  But I’m done.  :)

Thanks to the power of SQL, I’ve solved the issue of getting a TRUNCATE against an Essbase DATAEXPORT target table – something that is quite often very difficult to arrange in a production environment – through a very simple trigger.  See, SQL is awesome.

A caution, or at least a series of questions
One point about awesomeness -- I  reached out to my buddy Rich Magee, who knows SQL inside and out from a prior professional existence as a DBA, and asked him if there were any downsides to this approach.  Of course there were (life is often a combination of good and bad).  Here are his comments and they are food for thought:

"My understanding is that Logon triggers are typically used to monitor and control max number of logins, time outs and such. However, I could see no reason to not use them in your circumstance.
 
My questions (dba hat on) to you would be:
  • Why can you not simply schedule the stored proc to run?
  • What if the user logs on and off 5 times in a 5 minute period?
  • Would that not spawn unnecessary/redundant jobs all doing the same thing?
  • Could a risk be filling up the CPU or Disk with jobs/temp files that aren’t needed?"
So definitely some food for thought -- as always, you, Gentle Reader, must decide if what I hand out for free is genius or madness.  I tend to think it is the latter, not the former, but that is for you to decide.
The conclusion to the conclusion

Re the point about being a good (or bad) programmer based on approach  – I cannot say which category I fall into but I do know that I spent a lot of time figuring out what I wanted to do and then a lot more time figuring out how to do it with very little code at the end.  As before, you have to decide what class I (or you) fall into.  At least this is something you won’t have to figure out.

Be seeing you.

Essbase 11.1.2.3.502 is available for download


It’s out

I must give a hat tip to Steph who commented on my 11.1.2.4 post as I didn’t know about the patch release.

Two things that I found interesting

  1. No stated improvements to Hybrid BSO (although I could swear that Gabby said there were some)
  2. Fragmentation (storage engine not stated, but I am not aware of significant ASO .dat fragmentation) no longer matters
Here's the relevant quote with emphasis added:
Historically, fragmentation has been perceived as degrading performance. However, with advances in hardware, memory, and disk architectures, the correlation between fragmentation and performance is no longer significant. Furthermore, several enhancements have been made to algorithms within Essbase, making the older statistics pertaining to fragmentation less relevant. Oracle recommends the use of the latest efficient storage systems to store Essbase data files, such as Storage Area Network (SAN) or flash.

That’s going to blow up the rule of thumb “defrag for performance”.   <grin>  One thing that the documentation does not note is when this became true.  Presumably it’s in the .502 patch as that’s when this went into the ReadMe but sometimes documentation lags a bit, particularly when it doesn’t address a defect.

Oracle do go on to state that fragmentation is still somewhat important because it increases disk requirements:
The second implication of fragmentation is related to increase in the size of data files. Oracle recommends regular monitoring of the number of blocks and the data file size. If the size of the data files increases even though the number of data blocks remains the same, and available disk space is diminishing, consider taking steps to reduce fragmentation.

So Essbase continues to eat disk when it’s fragmented but that’s only a worry if the database is constrained on space.  Verrrrry interesting.

I’m not sure how one would test this – I suppose a series of benchmarks against a db when it’s 100% defragmented and then when it is nicely fragmented would do it although per their comment, if the statistics are no longer totally relevant, how will you know?  I look forward to someone other than myself doing the testing.  <even bigger grin>

There’s quite a bit more to the ReadMe so you should Read The Whole Thing (login to Oracle Support required).

Be seeing you.

Calculation Manager, BSO Planning, and ASO Planning combine for an awesome ASO Essbase procedural calculation hack -- Part 3


Introduction

This is the third and final installment of a three part series on ASO calculations, and specifically, ASO Planning calculations.  Thus far I’ve shown how to use the @CalcMgrExecuteEncryptMaxLFile CDF via Calculation Manager, which is pretty cool, and then how to make ASO procedural calculations in MaxL fast, fast, fast.  That’s all well and good, but how does that relate to ASO Planning?

I’m awfully glad you asked that, because these two hacks combine in ASO Planning to create ASO Planning procedural calculations that are both unbelievably fast and slick.  Read on, and all will be revealed.

The path not taken

Before I go any further, you are likely thinking, (Are you?  Really?  Really?  If so, you’re just as sad as I.  We both should seek help.) ‘arf a mo’, Cameron, why wouldn’t you use the ASO procedural calculation functionality in Calculation Manager?  Why indeed?

It isn’t as though ASO Calc Mgr procedural calculations aren’t available in ASO Planning 11.1.2.3.500 – they are.

But what is also there is a bug, and I have to say quite a reasonable one.  I like to think of myself as the kind of person that can break anything, if I try long enough.

A short review

The essence of fast procedural calculations in ASO Essbase is (or would be) to use a NONEMPTY modifier in the calc script.  Unfortunately, at this time that is not available although I understand it is somewhere on the product enhancement list.  What my prior post explained in great detail was the hack Joe Watkins came up with to use the ASO procedural allocation grammar to copy the results of a member formula to a stored member.  That member formula (dynamic, and in the case of currency conversion, only valid at level zero) can use the NONEMPTYTUPLE keyword to make Essbase only consider existing data and in turn it moves it to a stored member.

The next few paragraphs are a rip-and-read from that post but it’s short, explains everything, and I am too lazy to paraphrase all of it.

Additional member

In the Analytic dimension of my Planning app, I created a calculate-only member called MTD USA.  It contains the member formula to calculate fx conversion.

MTD USA’s member formula

Note the NONEMPTYTUPLE command that makes the member formula only address non empty data.

The CASE statement is a MDX version of the BSO currency conversion calc script.
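
I can’t reproduce the screenshot, but the shape of the formula is something like the sketch below – every member name here is an assumption on my part and your currency logic will differ:

NONEMPTYTUPLE ([MTD], [Local])
CASE
    /* only level zero combinations get converted, everything else stays empty */
    WHEN IsLeaf([Entity].CurrentMember)
    THEN ([MTD], [Local]) * ([End of Month Rate], [No Entity])
    ELSE MISSING
END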

Execute allocation

It’s all pretty simple from here on, thanks to Joe.  All I need to do is kick off an execute allocation in MaxL, set up my pov aka my FIX statement, identify the source (Local) and target (USD).  By not defining a spread range other than USD, Essbase copies everything from MTD USA in Local to MTD in USD.
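
The screenshot doesn’t reproduce, but the MaxL is along these lines – the application, database, and dimension member names are assumptions; the grammar is the standard execute allocation statement:

/* Sketch: copy the dynamic MTD USA/Local result to the stored USD member */
execute allocation process on database ASOPlan.Rev with
    pov "Crossjoin({[$5]}, Crossjoin({[$6]}, {[$7]}))"
    amount "([MTD USA], [Local])"
    target ""
    range "{[USD]}"
    spread;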

Did you see the $5, $6, and $7 in the code?  If it’s MaxL, it can be driven through parameter variables.  

Got it?  MTD member formula with NONEMPTYTUPLE + ASO procedural allocation that allocates 100% of that dynamic formula member to a stored member equals fast, fast, fast.

So what didn’t work?

I know that the Calc Mgr team is quite proactive and I suspect that this bug will be fixed soon, but in the meantime, and because this is a Most Excellent Hack with lots of possibilities outside of Planning, I’ll show how to get round it.

Specifically, what went KABOOM?

Oracle never thought anyone would allocate 100% of a level zero member to another.  And I can hardly blame them for thinking it.

Here’s the relevant screenshot in Calc Mgr.  It (again, quite reasonably) assumes that when you allocate a data value, you do it from an upper level member all the way down to the bottom.  And that is the normal way to do an allocation, except the fast ASO procedural calc hack doesn’t do that – it allocates a level zero member to a level zero member.  And that doesn’t work.

How I solved this

I found this defect as I was writing the joint presentation I gave with Tim German for Kscope14, and I wasn’t exactly doing it months before the conference.  I was stuck.

But I remembered seeing the @CalcMgr functions back in Essbase 11.1.2.2.  What if I could write a BSO Calc Mgr rule and drive an ASO procedural calc via MaxL?  

And it turns out that in fact there are a lot of ways to run a MaxL script from BSO:
  • @CalcMgrExecuteEncryptMaxLFile (privateKey, maxlFileName, arguments, asynchronous)
  • @CalcMgrExecuteMaxLEnScript (privateKey, maxlScripts, arguments, asynchronous)
  • @CalcMgrExecuteMaxLFile (user, password, maxlFileName, arguments, asynchronous)
  • @CalcMgrExecuteMaxLFile (user, password, maxlFileName, arguments)
  • @CalcMgrExecuteMaxLScript (user, password, maxlScripts, arguments, asynchronous)
  • @CalcMgrExecuteMaxLScript (user, password, maxlScript, arguments)
  • RUNJAVA

And it gets better

Once I realized this, it hit me that I could likely drive it off of ASO Planning forms and pass the Page, POV, and even the User Variable values on save into a BSO Calc Mgr rule and from there into a MaxL script that runs the allocation.  OMG, Essbase ASO procedural calc nirvana could ensue.  Or the end of the world.  If igniting the atmosphere side bets are good enough for Manhattan Project physicists during atom bomb tests, surely giving this a whack seems worthwhile.

The short story is that all of this somewhat amazingly works, and works quite well.  I’ll cover the straightforward setup and application of this and then go into some of the more interesting possibilities.

Doing it @CalcMgrExecuteEncryptMaxLFile style

You will remember from the first post that it is very important, if you only mean to run the ASO procedural calc once, to limit the scope of @CalcMgrExecuteMaxLFile to one and only one block.  And oh yes, that block must exist for this to work.  Here’s the code:

See part one for all this approach’s requirements beyond blocks.  You will note that this BSO script does not have any Calc Mgr variables but I could have easily used them.

RUNJAVA, RUNJAVA, RUN RUN RUN

Again, see part one for all of the rules.  Note that the FIX and the existing block requirements do not apply.  But what I want you to focus on is the {varYear}, {varBRUVAccount}, {varProduct}, and {varPostCode} Calc Mgr variables.

Here are the variables as defined in Calc Mgr.  NB – These are Calc Mgr variables passed from an ASO to a BSO Calc Mgr rule.  Coolness.  And awesomeness.  And a great hack.

Here’s the (again, BSO)  rule associated to the (again, ASO) form in Planning.  Note the Use Members on Form tag:

ASO procedural calc

Here’s the MaxL code containing the ASO allocation script:

And the output from that fx conversion.  Note how ASO Planning form values got passed to Calc Mgr variables and then to MaxL to run the ASO procedural calculation (see the pretty pastel highlight colors):

And now the demo

And here’s a very short movie showing it executing.  Please excuse the editing (with clock) at the end as I was trying to spare you all the trauma of me searching for the calculation time in the Essbase application log.  In any case, the time logged to MaxL (0.027 seconds) shows up in the application log as well.

Numbers don’t lie

Finally, you know from part two of this series how fast this can be.  The times you are seeing below are slower than what I demonstrated because they represent full database size (my database is just a fraction of the full dataset because of disk space constraints – trust me, these numbers are real):
Process      BSO      ASO      X Fast
Allocate     106      3        35
Fx           400      1.2      333
Aggregate    1,772    N/A      N/A
Total        2,278    4.2      542
Using this technique, the ASO fx is over 300 times as fast as the equivalent BSO outline and data.  A little slice of Essbase performance heaven, isn’t it?

Conclusion, or is it?

A combination of the Calc Mgr CDF that is in every copy of Planning (and Essbase, for that matter), the tried and true POV/page/and now row and column set passing to Calc Mgr, and a bit of creative ASO Essbase procedural calculation gives the Planning community access to an amazing amount of power and functionality.  
Cool, eh?  But this technique can be taken quite a bit further.

Where this starts getting really interesting

The demo you see above is from a Planning application that has BSO and ASO plan types that mirror one another.  As such, the dimensionality in the BSO application mostly matches ASO.  Is this required?  Absolutely not.

In fact, all that I need to run ASO procedural calculations in Planning is a BSO plan type with exactly one block of data (for @CalcMgrExecuteEncryptMaxLFile) or one that is completely empty (for RUNJAVA) and I can then address any ASO Planning plan type, even ones across multiple Planning applications or even servers.  The Calc Mgr functions call MaxL and MaxL can address any Essbase database to which it is provisioned, whether that be a Planning plan type, an ASO Essbase database, a BSO Essbase database, some combination of the above, etc., etc., etc.

Calc Mgr itself isn’t even required (or even Planning) if you wish – you could use this all in a pure Essbase database and use command line substitution variables to drive scope, or just hard code it all.  You can go absolutely wild, relatively speaking, with this approach and do just about anything with it.  It is a very powerful technique and one that I hope will be exploited.

I find this all oddly stimulating.  But I’m weird.

Now the real conclusion and a question for you

This is one of my longer posts – almost 30 pages in Word which equates to approximately 6,500 words in total.  Does it make sense to write multiple part posts like this or would the EPM community be better served with me trying to write things like this as a white paper?  Write care of this blog or just email me.
Be seeing you.

Oracle OpenWorld 2014 EPM/BI meetup


It’s coming

What is it?  It is nothing less than Oracle OpenWorld 2014.  OOW is an interesting kind of conference, especially compared to a very focused one like Kscope.

What oh what oh what is on the horizon?

OOW caters to all levels of participant, be he technical, functional, executive, or just (allegedly) there for the many parties.  There are sessions, lots and lots of sessions, in many, many, many buildings all over San Francisco.  There are meetings, at Oracle, in hotel conference rooms (if you haven’t already booked a place to sleep, good luck to you), in bars, wherever.  There are vendors in many, many, many exhibition halls (one half of Moscone, and that is just a fraction of the exhibition space at OOW, is bigger than every single session room and the exhibitor expo at Kscope combined).  There are client events, many, many, many of them where consulting companies of every stripe wine and dine their clients or potential clients.  There are sales events, many, many, many of them by Oracle sales reps and executives.  And then there is the humongous blowout on Treasure Island Wednesday night.

While all of this activity is great, it is all a bit exhausting for the average geek.  And can lead to strange personality changes.  

Transformations, the bad kind

Do you want to look like this?  I have, or at least felt like it.
http://1.bp.blogspot.com/_KW3S0od5s5U/TQJpbCLCECI/AAAAAAAAGEA/n4KmhewIetk/s1600/Animal+BoRhap.jpg

And sound like this?  

Transformations, the good kind

Or look like this?
http://upload.wikimedia.org/wikipedia/en/f/fc/Niceneasy.jpg

Ol’ Blue Eyes had it right – nice ‘n’ easy does it every time.

I cannot promise a transformative event that will make you look like Frankie Sinatra, but I can suggest an OOW event that will, at the least, let you relax.

Oracle Open World EPM/BI meetup

This is the (in)famous Tim and Cameron Essbase/EPM meetup rebranded for a new and expanded role:  EPM and BI.  Haven’t been?  For shame, it has been a lot of fun as documented here and here over the past two years.  I expect it will be the same again this year, but better.

Why better?

2014’s Most Excellent Tim and Cameron EPM/BI meetup will be better than years past simply because this meetup will now cater to both EPM and BI practitioners.  We’re practically the same on many levels except for a perfectly understandable love of Essbase on one side and a bizarre reliance on SQL on the other.  :)  I kid, I kid.

The important outcome from this audience expansion is that two separate Oracle communities are coming together as one.  Yes, that is exciting.  And besides, it isn’t as though the two product lines aren’t converging.  Now we can hang out together.  Let the cross-pollination begin!

Why is it special?

Speaking of Oracle, I hope (but of course cannot guarantee) that we will again attract Oracle product and development management employees.  Everyone (by that I mean non-Oracle attendees) has been very cool about not cornering them and whining (not that I could ever be accused of that) about whatever the latest pain point in an implementation might be and instead have used this event as a chance to socially meet the people that control our technological destinies.  There is a lot of geek talk, a lot of gossip, and a lot of fun, and all in an informal, relaxed, and low key way.  See, you can look like the Chairman Of The Board, too.   

The food is pretty good, too

There’s lots of good food in San Francisco, and the location of this meetup is at one of the best restaurants I’ve been to in that city.  For those of you who remember Specchio from years past, it has been renamed to Piattini, changed its menu, and in general been spruced up.  Gino Assaf is still in charge and based on the reviews here and here I expect the same good level of food.  Venetian + Northern Italian cuisine + a tapas menu = yum.

When is it, where is it, and how do you sign up?

Tim and I are holding the meetup on Tuesday, 30 September 2014, at 7 pm.  Closing time is whenever Gino can’t take any more of the geekiness and tosses us out the door.  Thinking back, that was pretty darn late but no one wanted to leave.

Piattini is located at 2331 Mission St. (between 19th St & 20th St), San Francisco, CA 94110.

If you haven’t figured out how to sign up via meetup.com you must be ignoring all of the links I’ve sprinkled through the text.  Just click on meetup.com's Oracle Open World EPM/BI meetup and off you go.

See you at the meetup!

Once more unto the ODTUG breach


I’m asking for your ODTUG support

Again I come to you (and for the last, term-limited time) asking to continue our work on the ODTUG Board of Directors.

As a Director, in the last two years I’ve:
  • Waved the ODTUG flag in New Zealand and Australia
  • Been heavily involved in EPM Kscope13 and Kscope14 content selection
  • Recruited volunteers to manage Kscope’s training lab Cloud infrastructure
  • As always, blogged, tweeted, and cheerleaded about the best Oracle user group there is
  • Set the vision for the EPM community initiatives, recruited volunteers, and begun the grassroots transformation

ODTUG’s community initiatives are your user group’s way to increase your participation, hear your voice, and help drive ODTUG in the direction you need.  I am leading the EPM community initiatives and with the help of some incredibly talented volunteers, people just like you, we will change ODTUG for the better.  That work has just begun but you will soon see the impact at Kscope15, in social media, and at local meetups all over the country, all with the goal of making ODTUG reflect your professional Oracle needs.

I’m asking you to return me to the ODTUG Board of Directors for the final time so I can finish the great work you and I have begun.

Biographical Sketch

In case you don’t know who I am (although I have to wonder about that given that you are reading this off of my blog), here is my professional summary.

I first worked with OLAP technology in the dinosaur days of Comshare’s System W and saw the Essbase light in 1993.  

Since that life-altering event, I:
  • Introduced what was then Arbor Software’s Essbase to Johnson & Johnson Corporate
  • Independently consulted since 1996, with a brief foray into working for consulting companies
  • Created solutions for customers using Oracle’s Essbase, Planning, and anything else that ties to those two products
  • With the help of 12 of my closest friends, wrote the only advanced Essbase book
  • Actively post on OTN’s and Network54’s Essbase board – sharing knowledge makes my day interesting
  • Presented at multiple conferences including Hyperion Solutions, Oracle Open World, and of course Kscope
  • Taught multiple formal classes and webinars
  • Served on the ODTUG Hyperion SIG
  • Have been an Oracle ACE since November 2010 and an ACE Director as of August 2012

Once more unto the breach, dear friends, once more

ODTUG is your user group.  The whole purpose of a user group is that it serves the users.  All of us on the ODTUG board of directors do that to the best of our abilities.   

I like to think that I’ve served you (I do style myself yr. obt. svt. on purpose) during my two terms through my advocacy, dedication, and passion for you, ODTUG’s members, in what is surely the best Oracle user group ever.  

I am asking for your vote to continue this work one more time.  The EPM community initiatives are just getting up a full head of steam and I think I am best suited to seeing that they are firmly established.  Please help me do so by reelecting me.

Be seeing you.



Oracle OpenWorld, day 1

Oracle OpenWorld 2014 Day 1

The craziness has begun

To be absolutely accurate, I’m actually on day three of OpenWorld 2014 as I attended the ACE Directors briefing (Ooh, name dropping.  All I can say is that there are ACE Directors and then there are ACE Directors.  I’m definitely in the how-on-earth-did-I-get-here? category and I think I’m not the only one who agrees with that.) on Thursday and Friday last.  

Part one

I am now sitting in the first session of the ODTUG symposium and am listening to Jon Rambeau of US Analytics presenting his session:  Why does EPM seem so challenging?  A good practices guide to EPM implementations

Part two

Ron Dimon is now presenting his session on The ROI of EPM as a Management Process.  Like Jon’s session, this is not a technical session but a functional one.  We EPMers have to walk both sides of the street; I tend towards the technical but I have to do functional as well.  This is all good stuff.

Part three

I’m now sitting in on Scott Leshinski’s session:  Managing Your Company’s Cash: Using EPM to Capitalize on Future Growth [UGF9092].  Don’t you wish you were here?  Why aren’t you?

How does ODTUG end up at OpenWorld?

Oracle quite graciously gives each major user group a room to present symposiums – we have fantastic content this year (every year, actually) and I’m really looking forward to the rest of the sessions today.  Thanks again, Oracle, for your openness and support of ODTUG.

What’s next?

I’m going to continue liveblogging today’s Hyperion EPM symposium and the rest of OpenWorld and keep you unbelievably jealous up to date on what’s happening at OpenWorld 2014.  Keep on checking this blog post for more information.

In addition to the symposium, I have a recording session for a podcast (we’ll be part of the great podcast series that Bob Rhubart puts out through the OTN ArchBeat podcast program) with Gurcan Orhan, Michael Rainey, and Christophe Dupupet on my very favorite data integration tool, Oracle Data Integrator.  I have some exciting news about the future of ODI and EPM that I’ll try to get out.  :)  

Alas and alack, I will miss a few sessions but again, the I-wish-I-had-a-clone-for-this-conference theme holds true.  

Be seeing you.

Oracle OpenWorld, day 2


The craziness continues

It really isn’t craziness, more like crazy busy.  But this is the nature of conferences, right?

The rest of yesterday

Probably the most interesting/embarrassing thing yesterday was me rushing to a 3 pm podcast on ODI.  Except of course…

OTOH, it was quite a popular tweet.  :)

I think I can make today’s meeting.  I hope.

Update -- I did.  Of course my ODI knowledge is quite a bit behind my co-panelists’ but that is true for all sorts of technology areas I work in.  I have decided to embrace my weakness(es).  :)  It isn't like I have much of a choice anyway.

Oracle ACE dinner

OTN hosts a dinner for ACEs every year, and yr. obt. svt. somehow got to attend.  The entertainment will be the Stuff Of Legend.  

Right now

CON2659  --  Oracle BI in the Cloud: Getting Started, Deployment Scenarios, and Best Practices

I’m sitting in on Mark Rittman’s session on Oracle BI in the cloud.  It’s standing room only (I had to annoy a few people in getting to one of the few open seats to allow me to type), and Mark is his usual brilliant self.

It’s still early days on the tool, and there are quite a few things that are not there compared to the on premises product.  For instance, there is no Essbase (gasp), but this is coming.

Oooh, someone just asked Mark a question on Essbase.  Mark doesn’t think it will be Essbase SaaS, but there might be an option to make Essbase the backend to options (persistence, Planning, etc.) within OBIEE SaaS.

And now Smart View – again, Mark thinks that because it has to be locally installed and makes chatty web service calls, it won’t be available in the near term.  But it might be.  I always enjoy answers like that but I suspect Mark (and maybe even Oracle product management) don’t know yet.

CON8424  --  Oracle Business Analytics Product and Technology Roadmap

Sitting in on a SRO (again) session by Paul Rodwick on what’s coming in Business Analytics.  Interesting stuff.

GEN8525  --  General Session: Executive Briefing on Oracle’s EPM Strategy and Roadmap

Balaji Yelamanchili is speaking on the future of EPM in a big room.

Here’s a snap from my BFF, Natalie Delemar.  Note that her circa 2013 phone has a rather better camera than my nineteen aught three, steam powered, all brass, powered by anthracite coal, phone.

Where is Oracle EPM going?

You’re going to get this as a set of bullet points.  I’m not going to focus on what’s already out, but what is on the roadmap – I just can’t type fast enough.

Focus
  • New apps
  • Apps in the cloud
  • Keep on being the best
  • Social and mobile to attract new EPM users
  • Do this for everything:  Close, Planning, and Reporting

Next 12 months
  • Financial consolidation & close
  • HFM 11.1.2.4
    • Lighter, faster, simpler, & portable
    • HFM on Exalytics
  • New supplemental data management module
  • New tax governance module
  • Mobile workflow for FCM

HFM 11.1.2.4

  • Platform independence
  • Significant performance (see below) improvements
  • Simplified deployment architecture
  • Multiple databases per instance
  • Streamlined integrations, Java API
  • Online monitoring (Exalytics only)
  • Easy install and upgrade
  • One click (Exalytics only)
  • Full LCM support

And now an HFM 11.1.2.4 demo.  I’ll do my best to describe this but I am not an accountant and this isn’t my area.
  • New UI is more Windows Explorer-ish with
    • filtering,
    • collapsing folders,
    • no more pop ups for grid options,
    • Direct link to Smart View
    • New form designer
    • Form legends to explain what cell shading/coloring means
    • Favorites
  • Oracle Financial Management Analytics
    • Link to OFMA directly from within HFM (pretty nice looking stuff, btw)
  • And of course…
    • Much better performance – 3x faster (at least in their example)
    • Some other examples
      • 2 hours to 6 minutes (2200 accounts, 2800 entities, 400 custom members) consolidation
      • 53 to 21 minutes (10K accounts, 15K entities, 3000 custom members), extract data from 1 hour to 12 minutes
      • Etc., etc., etc.  :)

Planning

Next 12 months
  • User-defined sandbox, grid improvements, instantaneous calculations (hellllllllllllooooo Hybrid Essbase)
  • Cloud innovations available for on premises
  • Planning Cloud will get the full suite of Planning modules

Planning demo now up – and yes, now I have a clue.  Barely.
  • Support for user sandboxes
    • Auto calcs, auto save on change
    • Cloud and on-premises
  • Supports large data sets
    • Rapid forms, quick cell navigation (so quite an improvement), scrolling in all grids
  • User-defined client side calcs using familiar Excel syntax in the grid itself
Formulas can reference dimension members and are persisted in Planning (I believe the data is persisted in Planning and the form calculations are persisted in Planning – this is not, I think, a way for users to create Planning members, although one could certainly argue that a custom, persisted set of calculations that writes back to Planning/Essbase is pretty nice)

Demo
  • Mobile interface
    • Looks nice, definitely not the familiar Workspace
  • Forms are quick, quick, quick.  No more of 11.1.2.2’s pain.
  • Sandboxes can be named, and creates a virtual Version for at least everything on the form.  Maybe more as well?  
    • Data that is saved into Sandbox can be compared to base Working, and then published when happy with the result back to the real Working Version.
      • Sandbox gets destroyed when it has been published back to Working
      • When working in Sandbox, the data is private
  • Excel integration
    • Showed the custom member formulas
    • Smart View grids can then be saved and opened up in the browser with the custom calculations
      • Can these formula grids be shared with other users?  Dunno.

EPM Cloud

  • Cloud, cloud, cloud, cloud.  Did I mention cloud?   :)
  • It’s all very exciting, and I think Oracle are finding uptake far better than they had hoped for – 150+ customers in six months.  Oracle is having a problem buying enough hardware to do all this.  This is bad news for infrastructure consultants (not a problem for me as I am, to be charitable, infrastructure-challenged) but great news for everyone else who just wants to do cool application stuff.  And oh yeah, hardware (Exa?) manufacturers are having a good time as well.
  • PBCS – now and see above
  • Financial Performance Reporting Cloud Service – in preview
    • Not Financial Reports
    • Think of it as a managed way to combine:  reporting, document management (collaboration, process management, document history, etc.), data narrative, versioning, auditing.
      • IOW, no more Excel hell with documents.  So FPRCS is to financial documents as Essbase is to data.  :)
    • It’s meant for people who produce high level briefing books or external reporting like a 10-Q.
  • Financial Consolidation and Close Cloud Service – in development
  • Note that customers do monthly patches – no more opatch pain.

Essbase

  • Parallel scripting
  • Scalability for concurrent query and calculation operations
  • Hybrid
    • See Dan Pressman and Tim German’s session on this Thursday
  • In-memory enhancements
    • Intriguing, and I have no idea what that means.  But I shall find out…

CON8526  --  What’s New and What’s Coming:  Oracle Hyperion Enterprise Financial Planning Suite

I popped late into this one because of the ODI podcast recording.  

It looks like most of what Shankar Viswanathan and Prasad Kulkarni are talking about (at least for someone who came in 30 minutes late) was discussed at Kscope14.  Yet another vote for the awesomeness of ODTUG and Kscope.

So far Prasad has talked about:
  • Faster grids
  • Sandboxing
  • Valid combinations
  • Excel formulas

What’s next?

Beyond my podcast on ODI (and I think I have some interesting news on the future of ODI, FDMEE, and EPM) today, I also have a book signing tomorrow at 3 pm, and of course the rest of OpenWorld.

Watch this blog for more information.

Be seeing you.

Oracle OpenWorld, Day 3


The craziness continues, part the third


For once I am  not in a session but instead at the Oracle Publishers Seminar.  Why oh why would I be interested in publishing?  Haven’t I already been there, done that?  All I can say is watch this space for some very exciting news.  :)  I don’t want to say anything more lest I jinx the whole thing.

Some fun stuff from yesterday

Here are some snaps to show what Oracle geeks get up to when they’re not taking notes/desperately trying to learn new stuff/networking like crazy.  Instead, we actually have a social life or at least a semblance of one.  See if you recognize some of the faces.

My Man In California, my Ride or Die Girl, and yr. obt. svt.

That’s Glenn Schwartzberg, Natalie Delemar, and again, yr. obt. svt.

I think it’s interesting that you’re looking at three competitors (okay, I am a bit of a stretch as a competitor as I am a mighty consulting company of just exactly one, but I beg your indulgence on this) and we’re all friends.  So, frenemies?  
ODTUG event at Le Charm
Above is John King to the right and two people whom I absolutely should know but alas do not.  Through the door you see Danny Bryant and Crystal Walton.

Two of the coauthors of Developing Essbase Applications

Again, that’s Natalie Delemar on the left and Gary Crisci at dinner.

And now today

CON7498  --  Oracle Essbase New Features and Roadmap Update

Whew, that was a bit of a run from the Westin St. Francis (where the Publishers Seminar is actually being held right now) to Moscone West, room 3007.

I’m listening to John Baker, Gabby Rubin, and Steve Liebermensch speak on my Very Favorite Essbase Database In The Whole Wide World aka VFEDTWWW.  Try saying that out loud.  Quite difficult, isn’t it?


Gabby Rubin is about to begin his overview on the state of Essbase today.

What did Essbase get in the last 12 months?

  • FIXPARALLEL
    • Overcomes restrictions of CALCPARALLEL
      • Forces parallelization when CALCPARALLEL failed
      • Enables it for DATAEXPORT and DATACOPY
        • Note to self – must benchmark this
  • Parallel processing in 11.1.2.3.500
    • There are huge differences in both commodity (yeah!) and Exalytics
    • Remember that the latest Exalytics release is optimized for Essbase at the hardware layer, so the performance boosts on that platform are larger
      • More parallelism is allowed, e.g., more than 8 parallel threads
      • Like 40 or even 60 parallel threads
      • Found that best performance was a mix of FIXPARALLEL and more cores
        • Sometimes CALCPARALLEL does just as well as FIXPARALLEL, and if so, keep it.
        • But if not, then use FIXPARALLEL
  • MDX improvements
    • Optimized AGGREGATE
      • Faster totals for multi-level hierarchies
        • Really a rewrite of the AGGREGATE function
      • Improvement is based on the query and dimension depth
    • MDX Sub Select (not yet via OBI)
      • Significant performance improvement for queries against large databases
      • The bigger the model, the better (relatively) it gets
      • Very much like a SQL subquery – wrap the thing in parentheses and the query from that
    • MDX optimizations for attributes
  • Essbase and Planning on Exalytics
    • Remember that improvements on Exalytics comes to the core product (usually)
    • Better concurrency
    • Patentable lockless algorithms
    • Up to 3.5x improvements from Exalytics v1
      • NB – This isn’t for a single calc, but for scalability to reflect real world concurrency
  • Reduce BSO fragmentation
    • This is Exalytics only for now
      • In-place block write
      • Slows fragmentation
    • Exalytics first optimization
      • It’s easier for Oracle to do this because it’s one platform as opposed to the many commodity hardware OSes
  • BSO/ASO Hybrid Aggregation v1
    • Combine BSO with ASO aggregation performance
    • 100% backwards compatible with all existing BSO databases – zero learning curve
    • Revolutionary (my words)
    • First release only handles simple aggregation
      • Not all the functions of BSO
        • But, you will always return the right result, although if it fires in BSO classic query processor it might very well be really slow
      • But if it fits within Hybrid/ASO, it’ll be fast
      • More functionality to come
    • Mix and match stored and dynamic hierarchies as required in calculations
    • Is ASO going away?
      • Nope, it’s the home for Really Big Databases
      • But the reporting cube use case is going to be obviated

The future

Steve Liebermensch is now going onwards.  The estimated release is subject to change – ETA is EPM PS4 – but of course there are no guarantees, so don’t blame Oracle if something slips.

One more time -- this is all discussed under Oracle’s Safe Harbor statement, which boils down to, “Oracle isn’t going to commit to anything, will deny that any Oracle employee ever uttered these words, and you likely shouldn’t rely on anything you’re about to hear.  Or maybe you should.  But we’re not telling.”  If Oracle didn’t do this, we’d never hear about what very well may be the future.  And then again may be not.  You decide.  

And with that disclaimer, here’s what Steve talked about:
ETA EPM PS4
  • Hybrid
    • Increase coverage for additional functions
    • Time Balance, Dynamic Time Series
    • Complex calculation semantics
      • Cross-dimensional references (yeah!)
      • Dimension references
    • Longer term:  Hybrid Mode in Calc Scripts
      • Upper level Hybrid members in a calc scripts, aka, allocations based on spread on % of total calculations
    • Faster because
      • It’s dynamic
      • The number of blocks are way less
      • IND file is smaller, as is PAG file
    • Having said that, depending on the query, stored BSO queries can be faster than Hybrid.  The thing to remember is that all of the pain around storage, calculation time, etc. is reduced, potentially quite dramatically.
  • @XRANGE within functions
    • Available in more calculation functions
    • Code against slice of data instead to single vector or dimension
  • @RELXRANGE
    • Bugger, I missed it.  :)
  • FOR LOOP
    • Two new variable types:  MEMBER and NUMBER
    • Syntax
FOR (mbr, mbrList)
Statement ;
ENDFOR
    • No more block creation – this is outline driven, not block driven
  • Renegade
    • Selected member that will provide a home for orphaned members
      • No more dataload.err
  • Batch outline editing
    • New API to allow mass operations on Essbase Outline
      • Death to Load Rules!
      • Huge performance
  • Improved resource management and CPU utilization
    • Thread management and thread based memory allocation and management
    • Fundamental improvement to Essbase infrastructure
  • Essbase R
    • Library to read and write Essbase databases from R
      • Connect
      • mdx2Array
      • writeBack
    • Future direction
      • Allow Essbase users to trigger R from Essbase
      • Embedded R in calc scripts
      • Generic capability to allow the extendibility of Essbase with 3rd party scripting languages
  • EAL financial intelligence moving into Essbase
    • Bringing EAL into Essbase by enhancing its core strength:  financial apps
    • Replicate EAL into Essbase
      • Financial Accounts dimension, member types, and behaviors
    • Integrated per plan
  • Post Load Processing
    • Post load script for data manipulation before it is stored in the cube
      • Think balances vs flow on periodicities
      • Any frequency submit, store in one
        • YTD, MTD, QTD comes in, periods get stored
        • Coming for Planning as well from forms
          • We don’t have to write calc scripts to do this
  • Dimension member properties
    • Text (!), arrgh, missed the rest
  • Cell status
    • How and when did a cell in the db get populated
      • Load, Calc, data entry, Dynamic
      • Transaction ID for the last update transaction
    • Available via calc scripts, API, and MaxL
    • Will be available across all engine types
  • In-memory
    • On Exalytics, Essbase will be a pure in memory engine so no wait for I/O
      • Remain Exalytics only because cannot rely on commodity servers having enough RAM to do this
    • In memory aggregate views for ASO will not need to be stored on disk

Whew, that was a lot.  Shouldn’t you come to OpenWorld and see all this stuff live and in person?

Book signing at the OpenWorld bookstore

John Wyzalek of CRC Press got us a book signing event at OpenWorld.  Here I am flanked by the One And Only Natalie Delemar and Dan Pressman at the Oracle bookstore.


What’s next?

See, I do have friends, even as odd as I am.  Perhaps I am the object of widespread pity?  If so, please do not tell me.

Tim and Cameron’s Most Excellent BI/EPM meetup

And I’ll have even more time with my putative friends tonight at the BI/EPM meetup Tim Tow and I are hosting. 

You hopefully know all about the meetup Tim Tow and I are hosting tonight:  Oracle Open World EPM/BI meetup

You don’t have to be an OpenWorld attendee to join us.  If you haven’t yet RSVP’d, I’d be obliged if you did so we have a handle on how many geeks are coming.

See you at 7:00 tonight at:

Watch this blog for more information.

Be seeing you.

Oracle OpenWorld 2014, day 4


The craziness continues, part the fourth

I am slowing down.  No, not because I am old and feeble (definitely something to look forward to but not just yet) but because I am not getting enough sleep.  I don’t even seem to be able to get enough energy to take pictures.  Or attend that many sessions.  For shame, but I’ve been at this a week already (remember I was here four days before OpenWorld actually began), I’m running out of clean clothes (possibly too much information?), and I’m just…well, tired.  See, the craziness continues bit wasn’t exaggeration on my part.

At the same time, OpenWorld is a great place to catch up with otherwise virtual friends, fly the ODTUG flag, and meet with key Oracle personnel.  

So no complaints on my side, other than my inability to discipline myself to go to bed early.  That’s hardly the fault of Oracle, but instead the fault of Cameron.

Yesterday

We had quite the blowout at the meetup Tim Tow and I hosted.  So fun, so much talking, so much networking that I forgot (gasp) to take photos or ask others to do the same.  So unfortunately, just the one picture I took at the beginning.  As always, it was nice to see familiar faces and meet in a relaxed forum.  You should join us next year.  :)  And take photos so yr. obt. svt. would have something to post.

And today

CON8532  --  Product Development Panel Q&A: Oracle Hyperion EPM Applications

Talk about more stars than there are in heaven, at least if heaven is defined as Oracle development management.

Left to right from your perspective, Gentle Reader, is:  Matt Bradley, Kash Mohammed, mystery HFM development manager (sorry, I am HFM-stupid or I would know who this is), Prasad Kulkarni, and Toufic Wakim.

CON7615  --  Oracle Exalytics In-Memory Machine: The Fast Path to In-Memory Analytics

I’m listening to Gabby Rubin speak right now re Exalytics.  There is a ton of true Business Intelligence offerings on Exalytics – it’s way more than just OBIEE.  Try Endeca, in-memory engine (both Essbase and Oracle database), TimesTen, InfiniBand, and more cores than you can shake a stick at.  Essbase has grown and grown and, unlike me, isn’t getting tired.

CON8546  --  Oracle Enterprise Performance Management on Mobile

Here’s Al Marciante talking about mobile, EPM, and cloud:

What I am very glad to hear is that my customers will not be Planning on their iPhone.  That was going to be ugly.  And tiny.

Financial Reports is coming (not soon, but it is coming) to mobile.

As is Smart View (yes) on Microsoft’s Surface Pro tablet (which is essentially a Windows 8 computer).  

Planning on mobile:
  • Interface for tablets
  • Full write-back (so tiny type?)
    • Forms
    • Reports
    • Calc Man rules
  • HTML5 based
  • Consistent interface with Fusion applications

The other thing that is interesting is Oracle Financial Management Analysis on mobile.  Cool mashup of HFM and OBIEE without requiring a PhD in Oracle Business Intelligence.  I cannot wait for the Planning version of this.  

I will note that I am a bit of an anomaly as I bang away at my laptop, as mobile devices are absolutely everywhere at OpenWorld.  I do have my much-maligned phone, but I am sort of a minimalist when it comes to using it, although that may be a case of cutting my suit to fit my cloth.

What’s next?

If someone would send me snaps of last night’s meetup, I’d be happy to update this blog with them.  Hint.

I am not going to the event tonight.  Remember that bit above about tiredness.  I will likely (hopefully) have a quiet dinner with a few of my friends and take some pictures this time to prove it.

Watch this blog for more information on the last (sob) day of OpenWorld.
Be seeing you.

Oracle OpenWorld, Day 5


The craziness continues, part the fifth

Hybrid Essbase:  Evolution or Revolution

Here I am, at the first session, watching Tim German and Dan Pressman present on Hybrid Essbase.  
Tim and I presented on this subject at ODTUG’s Kscope14 session.

Hybrid Essbase was released this year with Essbase 11.1.2.3.500 and really is a revolution in Essbase.  ASO on top of BSO (technically, I believe that it is really ASO-like, but it is so close in functionality that it’s functionally equivalent) obviates the need for ASO reporting cubes and addresses slow BSO calcs, BSO size, etc.  It really is going to change what we do with Essbase.  I am very, very, very excited about this tool.

Exalytics,  Essbase, and internals

Above are Kumar and Steve talking about some Really Cool Stuff.

You will see some of the Kumar Secret Essbase Sauce below as it relates to Exalytics.  Yes, I did ask if it was okay to blog this.

And remember that this is all safe harbored, so no promises, no dates, no nothing.  All of the below may happen but who knows.  Don’t hold them to any of this, and don’t make any plans (other than pondering it).
  • New X4-4
    • Chip is faster than anything you can buy as a commodity customer
    • X4-4, 2TB RAM, 60 cores max
    • The fewer the cores, the faster the clock
    • T5-8, 4TB RAM, Sparc
  • Oracle database
    • Run the Oracle database in RAM
    • 100x faster queries:  real-time analytics
    • Instantaneous query results using the database in memory in lieu of TimesTen
    • Cheap way of trying in memory db 12c
      • OBIEE 11.1.1.7 certification for 12c
      • Summary advisor
      • Aggregate persistence
      • Why use the db – skills, true relational query, good push to OBIEE, can be faster for simple reporting compared to Essbase
      • Can still be a normal data source
      • But, Essbase is still going to be faster when it comes to analytic processes
      • Think of it as an in-memory data mart
      • TimesTen still around
  • 2x increased transaction time, although Exalytics is not the target of transactional processing
  • HFM
    • Coming with PS4 (11.1.2.4)
    • Certified for Exalytics, both on the iron and VMs
    • No Linux HFM except on Exalytics
  • Essbase enhancements
    • Pure in-memory engine as calculation will no longer wait for I/O (background write)
      • Finish calc, allow user to use it, write happens in background later
      • Thread management and thread based memory allocation
      • Fundamental improvement, impacts resource consumption, stability, and performance
      • All aggregate views will stay in memory – once read from disk, it will never return to disk
        • This is only going to happen in Exalytics because they can know how much RAM the box has vs. commodity hardware with any amount of RAM
        • Where a particular task goes, from a CPU and memory perspective, will be handled at a hardware and software level
    • Leverage X4-4 capabilities by improving scalability
    • No really firm release dates on all of the above – some in PSU4, others later, sometime
    • Lots of work with the hardware engineering
    • Performance
      • X4 improvements in BSO MDX grows as threads increase
        • Read-only
        • Mixed load (like Planning in queries and calcs happening simultaneously)
          • Bigger improvement in this scenario than read-only although that too has an improvement
          • X3 tends to bog down as this increases, X4 doesn’t thrash
      • T5
        • Similar work with Sparc engineering to improve performance on the Exalytics platform
        • Performance improvements almost 50% in .504 release
    • Why Exalytics?
      • Uptime
      • Volume
      • Iterations by users
      • ???
      • Pretty impressive improvements but of course variable results depending on application – YMMV
    • X vs. T
      • X – Destroyer
        • Faster with less load, but less capacity
      • T – Battleship
        • Slower with equivalent load, but faster when capacity level requirements are higher
    • Development model
      • Exalytics first
        • First developed in Exalytics, but released later to commodity
        • Faster for Oracle to write and test based on known hardware target
        • E.g., in-place block writes
      • Restricted
        • Performance potential (more cores, for instance) greater on Exalytics, so less restrictions on Exalytics, more on commodity
        • E.g., FixParallel
      • Only
        • This is Oracle’s USP
        • In-memory aggregate views because they know the amount of memory and have no idea what commodity has
    • Challenges
      • Symmetric multi-processing (SMP) guarantees uniform latency for CPU, but overall RAM is limited
      • Multi socket boxes needed for large amounts of RAM, but memory latency is not uniform
      • Complexity surfaces in software, not hardware
      • As databases get bigger, harder to keep in RAM
      • Essbase was not written for huge databases and RAM, but instead designed for smaller databases
        • As memory went over socket RAM, the speed in the socket is faster than out of the socket
        • Non-Uniform Memory Access (NUMA)
        • THIS IS WHY RUNS GIVE DIFFERENT RUN TIMES
      • Two approaches
        • Use local socket RAM in critical parts of the software by using thread affinity
        • Use padding to avoid false sharing – align important memory structures to cache lines
        • This is all only on Exalytics – commodity hardware does not get the above approaches
      • False sharing
        • Go google for this – this is an additional problem that Oracle handles on Exalytics
      • High CPU core challenges
        • Many users, fast and small workloads
        • Small users or single user with large highly parallelizable workload
        • Large and legacy code that runs sequentially but requires faster CPU
        • Approaches (all part of 11.1.2.3.500)
          • Semaphores, mutual exclusion, and synchronization do not help
          • Any locking is bad and leads to poor CPU utilization
          • Lockless (for a block, for instance) algorithms based on Intel hardware instructions (compare and swap) were designed and implemented – chip specific instructions
          • Shared data structures, but cannot use typical semaphores – use lockless algorithms to reduce contention

CON7520  --  The Future of Oracle Business Intelligence and Oracle Essbase Integration

You see Mitch Campbell speaking, with the by-now-surely-well-known Steve Liebermensch and Gabby Rubin up on the dais.  Essbase is moving in new directions; it’s up to us whether we will invest the time and training we’ll need to follow.

CON5532  --  Which Reporting Tool Should I Use?

Sob, this is the last session of OpenWorld.  It has been quite a whirlwind with meetings, meetups, old and new friends, and just general activity.  I am both sorry that it is over and oh so glad all at the same time.

Glenn Schwartzberg, aka My Man In California aka the-older-brother-I-wish-I-had-but-the-sentiment-is-not-reciprocated is speaking on selecting a reporting tool for EPM.  As always, good stuff, even if he doesn’t think so.  :)

What’s next?

And that’s the end.  Whew, I have blogged, tweeted, talked, texted, and just in general social mediaed myself into a tizzy.  I am ready for a break.

But I’ll be back in a little bit with (hopefully) some interesting technical stuff.  The learning and fun never stop – it would be a bit boring if it did.
 
 Be seeing you.



Two Calc Man and ASO Essbase webinars in one


Use Calc Man and ASO?  Then you should watch us


What you’ll see is a fairly rare multiple consulting company (Ludovic and Paul work for TopDown Consulting, I work for me) webinar.  I like to think I inspire an ecumenical atmosphere amongst competing firms but probably I am a threat to no one, so companies allow their consultants to work with me.  See, weakness can be strength.

What you’ll also see, and why you’ll care about this webinar, is the distillation of our respective Kscope14 presentations on how to get the most out of Calc Man and ASO databases.  Ludovic and Paul presented on this at Kscope14 (alas, I got to attend exactly two presentations that weren’t my own and I think one of my sessions coincided with theirs); the bit I’m presenting is a part of the ASO Planning: Don’t Do That, Do This presentation I gave with Tim German of Qubix International at Kscope14.

The really interesting stuff

As I read through Ludovic and Paul’s slides, I realized that they came up with:
  • A good overview of Calc Man and ASO Essbase
  • A review of what’s right and wrong with Calc Man and ASO Essbase
  • A really cool hack to run Calc Man rules to get round these problems

I’m contributing my hack (what good is a presentation if you don’t get a few completely unsupported yet effective hacks?) on getting round the non-empty issue with ASO calculations.  I should mention that I learnt about this from Joe Watkins although it turns out that Steve Liebermensch has been spreading this technique since the Year Dot.  Which I have apparently missed since the Year Dot.

I have high hopes for a guest blog by Ludovic and Paul as they combine their hack with mine to really and truly get round ASO procedural calc issues.  I think it will be like peanut butter and chocolate. This webinar won't be half-bad either.

And how to get it

Click on the description below, click right here, just click and sign up.

Be seeing you tomorrow, 12 pm Eastern.

Time to submit that Essbase abstract


Time to submit

No, not to my awesome will, altho’ I must confess to having a slight Svengali-like effect (or is that repulsive-to-all-who-know-me effect?) when it comes to ODTUG’s Kscope15.

Time for what?  Time for Essbase abstract submission for you only have till 15th October 2014 to get those submissions in and Uncle Essbase Wants You to do so as soon as possible.

Why oh why oh why would you do this?

Because you want to present at Kscope15?

Because Kscope15 is the best Oracle users conference there ever has been or ever will be?

Because a chance to present at Kscope15 is the ne plus ultra of your speaking career?

Because Essbase is awesome, and Kscope15 is awesome, so obviously the two together is awesome²?

Those are the obvious reasons.  But there’s more, more, more on offer:
  • Your registration fee, which ranges from an ODTUG member Early Bird cost of $1,500 to an OMG-you-are-so-late $2,250, is waived if your presentation is accepted.
  • You have received much from the ODTUG EPM (ahem, we all know that really means Essbase) community; this is your chance to give back.
  • Want to build your personal EPM brand?  Showcase your firm’s consulting awesomeness?  Prove that customers really do it better than a bunch of soi-disant expert consultants?  Presenting at Kscope is the way to do that.

And what oh what oh what would you present?

Here are the subtopics.  If you can’t find something to submit under these areas, you don’t love Essbase.  So why are you reading this blog?
  • Administration - automation via scripting or other tools, configuration at the Essbase server or application level, monitoring and troubleshooting, backup and recovery, and security management.
  • Optimization - managing and improving Essbase performance - speed, concurrency, or storage. Also includes new features or techniques to solve well-known problems in a higher-performance way.
  • Design - metadata design and dimensional analysis, application and database structure, partitioning, calculation approaches, and engine (BSO, ASO, Hybrid) selection considerations.
  • MDX - multi-dimensional expressions against either ASO or BSO. ASO member formulas and calculation scripts can be involved, but consider whether the focus is the MDX itself or if the presentation belongs in ‘Calculation’ instead.
  • Calculation - ASO and BSO calculation scripts, member formulas, calculation Manager for Essbase, and new or interesting approaches to specific calculation problems.
  • Essbase-related technologies - integration between Essbase and Oracle or non-Oracle tools, interaction with other parts of the EPM stack, Essbase Studio, client tools, the APIs and Web Services, and Essbase Exalytics-related topics. 
  • Other Essbase - business problem-focused presentations, managing Essbase projects or support, Essbase ‘internals,' and session topics that do not fit into the other subcategories.

Surely that’s enough

ODTUG is asking you to submit an Essbase abstract and summary.  I am asking you.  What, you want me to get down on my knees and beg?

How about this for a plea, a call for action, and a rousing cheer for Essbase at Kscope15:  You have good ideas (otoh, you read this blog, so maybe not…), you have passion, you love Essbase, you are a Kscope devotee, in short, you should have already submitted an Essbase abstract.  If you haven’t, good grief man, why haven’t you submitted that abstract?

I am looking forward to your Essbase abstract.  

Be seeing you.

A different kind of currency conversion in Planning and Calculation Manager


Read me first

Note No. 1

Dear Oracle and Oracle’s competitors.  I’m about to use the straw man technique to illustrate currency conversion in Hyperion Planning.  All of my complaints are from my offended sense of elegant design, not actual functionality.

Note No. 2

NB – When you see fx, substitute the words “currency conversion”.  I am too lazy mentally and physically to type that out 100 times in this post.  

A rant (yeah, I’m good at them) about Planning’s currency conversion

We’re all familiar with Planning’s widely reviled (although somewhat unfairly if you will read past the rant) currency conversion functionality in multi-currency Planning applications.  It’s been a part of Planning since at least version 1.5 (perhaps 2.1 – for sure I implemented it there but I think it was also available in 1.5) and its functionality really hasn’t changed a bit.  I suppose the thought is that the functionality works, so why bother improving it?  But the opening sentence isn’t hyperbole – no one has a kind word for it.  Why?

I think a lot of the dislike of the native Planning currency conversion comes down to three reasons:
  1. Design
    1. It does odd things with data locations.  Writing rates to the tops of dimensions?  A sparse rate dimension that’s the first dimension in the outline?  It works, but there aren’t many Essbase developers who would design currency conversion that way.  Weird.
    2. I have to believe that Hyperion located the rates where they did because these are dimension points that are guaranteed to exist.  However, common practice in non-multiple currency applications is to set the tops of dimensions to label-only because planners cannot view that top of the dimension so there is no reason to store data points that only the administrator can see.  That particular security design decision I really can’t figure out but I’ll save that rant for another day.  

Here’s a partial snippet of this data in Essbase.  Note Version, Currency, Product, and PostCode at their dimension tops, but Scenario set to the specific value of Actual.  

Frustrating.
  2. Code
    1. An automatically (so this is good) generated calc script (a good thing for a guy that writes a blog called “Essbase hackers” but an odd choice for Planning – why no business rule?).  Calc script sourcing means the Planning administrator must copy and paste the code into a Calculation Manager script.  Manual.
    2. Speaking of maintenance, when the currency conversion calc script is generated, it is generated for all years that contain rates.  Only want to fx FY14 but your application has FY11 to FY14?  Edit that calc script or you will convert historical years.  Whoops.
    3. HspCRtB?  You mean I have to run it to get the send of rates to work?  And I need to rerun it for out years?  And this isn’t terribly well documented anywhere? And if you don’t run it, exchange rate refreshes will not work, with nary an error message.  Confused.
    4. The generated code isn’t hard to understand, but it is 100% undocumented.  As someone once told me when I was first starting out in IT (so we are talking 1990), “Good programmers don’t need documentation.  They just read the code.”  At the time, I was too young and callow to know any better and just took it, but I haven’t heard anyone else say that since.  Perhaps that programmer went on to work for Hyperion development in the late 1990s?  Sarcasm.   ;)
    5. There’s no automatically generated aggregation after the currency conversion.  But that is almost always the next step after fx.  Why couldn’t that get automatically generated?  Frustrating, again.
  3. Type of fx
    1. Planning assumes a currency conversion design where the Planner inputs data into the Currency member Local and, based on UDAs assigned to Entities via the Base Currency property, performs fx.  So long as fx is focused on an Account breaking cleanly across country-based Entities, all is well.  But what happens when there is Account activity across more than one currency for a single Entity?  How does Planning know that an expense or revenue item has US Dollar, Sterling, and Swiss Franc activity in that context?  It doesn’t, because that’s not how Planning fx is designed.  Bummer.

I feel much better.  There really isn’t anything quite as satisfying as venting one’s spleen.  

Just for the record
The fx script that Planning generates has three parts:
1) A copy of Local into USD.  This creates blocks more than anything else, as the Local-in-USD gets overwritten by the fx calculation.
2) A FIX that touches all level zero Accounts, Entities (Product), and custom dimensions (PostCode).
3) A member formula to do the actual rate conversion.  In this case, because I set up the exchange rate as Multiply in Planning, the calc script multiplies Local by the rate.
That’s it.  I just thought it would be nice if it was finally documented.  A rough sketch of that shape follows.
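
To make that concrete, here is a minimal illustrative sketch of a script with that shape.  To be clear, this is not the code Planning actually generates – the dimension names come from the sample application in this post, and “Average Rate” is a placeholder for wherever your rates really live – but the three parts line right up:

/* Part 1:  copy Local into USD, mostly to create the target blocks */
DATACOPY "Local" TO "USD";

/* Part 2:  restrict the calculation to level 0 of the other dimensions */
FIX (@RELATIVE("Account", 0), @RELATIVE("Product", 0), @RELATIVE("PostCode", 0), @RELATIVE("Period", 0))

   /* Part 3:  the conversion itself – a Multiply-type exchange rate */
   "USD" = "Local" * "Average Rate";

ENDFIX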



And thus ends the default Planning fx strawman.

Is there anything good about Planning fx?

Having just slagged off Planning fx design, I likely now have an army of current Oracle and former Hyperion developers and product managers gunning for my hide.  Is the above totally fair?  

The rant (you have to admit, it is a fairly epic one at that) covers what Essbase hackers find objectionable about Planning’s in-built fx.  Does any of that matter to Planning administrators or planners?  Actually, no, not a bit, because all of the whining is on the developer side.  

Why?

It’s easy

There are easy places to enter rates via a special web form:

It’s (mostly) automatic

Currency conversion code gets generated automatically.  So the first time round, there’s not a scintilla of code to write.  If currencies are added, a rerun of the fx calculation script generation picks up those new currencies.  Easy peasy, lemon squeezy.

Even with a bit of deleting of unneeded years and copying and pasting into a Calc Man business rule, it really isn’t that hard to manage.

It’s invisible to the user

And planners don’t know or care how the fx is calculated.  Why would they?  They enter local currency data in, the system generates USD out via attached calc scripts (unlikely) or Calc Man business rules (quite a bit more likely).  Who cares how the sausage is made?

Performance is acceptable

I have heard from lots of other consultants, “We roll our own fx and it’s way better than Planning’s.”  Really?  I’ll bet they didn’t benchmark it because the out of the box performance is actually pretty good.  I know this disparaging view of the default fx calc because I assumed the same, inflicted (ahem, implemented) Cameron’s-obviously-better-fx at multiple clients, and was generally quite pleased with myself.

For the record, I used a technique I learnt while I was at interRel – it looked an awful lot like the old Essbase currency partition, was easy to maintain in a separate Essbase database, used the cool ARRAY calc script function, and in general should have been the berries.  

Then, for a blog post that as you might imagine never got written, yr. obt. svt. decided to benchmark my approach and Planning’s in a like-for-like set of Planning databases.  And…

My code was slower.  Hubris.

If only, and I do mean only, I had tested instead of assumed, I could have been out there defending Planning’s built-in fx functionality.  I assumed that Planning’s code sucked eggs because I didn’t like the design.  Except that design is better.  As my buddy Natalie Delemar said to me at OpenWorld when I got something wrong, “You really aren’t infallible, are you?”  Nope, I am most definitely not.  Alas and alack.

Why Planning’s fx is good and why yr. obt. svt. can be an idiot

It turns out that the primary way to speed calculation is to reduce the scope of the data.  The old page and POV dimension technique works there, as does the user variable trick for rows and columns that I documented some time ago for row/column focused aggregations.  

My guess is that consultants who tout their fx approach have done just that:  compared the calc-everything-all-the-time approach of the auto-generated calc script to their focused custom code.  That ain’t faster code, that’s a smaller calculation scope and thus faster.  A rate calc is a rate calc and at the end of the day, that is exactly what fx is all about.
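
To see what “smaller scope” means in practice, compare these two sketches.  The member names are placeholders, and in a real Calculation Manager rule the focused members would normally come from runtime prompts or user variables rather than being hard coded:

/* Calc everything, all the time – the auto-generated approach */
FIX ("FY11":"FY14", @RELATIVE("Entity", 0), @RELATIVE("Product", 0), @RELATIVE("Period", 0))
   "USD" = "Local" * "Average Rate";
ENDFIX

/* Focused – only the year and entity the planner is actually working on */
/* ("My Entity" is a placeholder; a runtime prompt would usually go here) */
FIX ("FY14", "My Entity", @RELATIVE("Product", 0), @RELATIVE("Period", 0))
   "USD" = "Local" * "Average Rate";
ENDFIX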

Everything else is soi-disant Essbase geeks (like yr. obt. svt.) having their sense of design offended.  Think of the default Planning fx as an engineering problem.  If one defines engineering as the art of the possible given limited resources, then it follows that the Planning fx use case had a bunch of requirements (automatic, integrated with Planning, fast) that the Hyperion (it is that old) development team satisfied.  Ta da, that’s how a commercial product is written, Essbase hackers design sensibilities be damned.

Software engineering
As an aside, I come from a family of engineers and given my computer orientation I am the failure at family dinners because I couldn’t hack Differential Equations.  Oh the shame.  But I did pick up engineering’s weltanschauung that I try to apply to my design and code although I obviously fail that approach sometimes, cf. Cameron’s-obviously-better-fx.

Engineers are designers – we as Oracle EPM implementers should have the same philosophy when it comes to solving a problem – figure out the problem and then do the most with the least.  And that approach drives my questions over on Network54 where I ask why someone has gone down some unbelievably complex, unsustainable, and generally awful approach.  I note that very often these why questions of mine go completely unanswered.  Do I offend?  Am I so dense that a Rube Goldberg/Heath Robinson approach is the best way and I just can’t see it?  Could it be hubris on the part of the poster?  Do I just like whacking hornet’s nests with sticks?  You decide.

So where does fx go from here?

We have two views:  the strawman that Planning’s default fx is absolute pants, and the counter argument that Planning’s default fx is actually perfectly adequate.  As much as it pains me, I have to admit that the base functionality, with a bit of tweaking, is probably more than good enough.  Remember that comment about engineering and the art of the possible.  

Given that, is there any point in even talking about fx?  Absolutely, for the use case in rant point number three – fx that requires contributory currencies.  Planning fx can’t do that.  Consultants can, and do, write these kinds of fx conversions, but maybe there’s a better way to handle it.

Enter Calculation Manager

Remember the use case requirements of automatic code, integration with Planning, and acceptable speed?  They have raised their collective head again.

The Calc Man development team of Sree Menon and Kim Reeve looked at this issue, and as they so often do, came up with a really clever way of meeting those fx requirements in Calc Man via a System Template.

Not applicable to multi-currency Planning apps

Although this is a bit unintuitive, you cannot use the Calc Man fx system template in multi-currency applications.  This is easy to suss out by trying to find it in one of those multi-currency applications.  It isn’t there.

But when you look at a single currency application (I told you this wasn’t intuitive), that fx template is there.

Required dimensionality

Currency

A typical multi-currency Planning application has a Currency dimension that looks like this with one reporting currency:

The Calc Man fx template requires a very different looking Currency dimension based on the requirement of the contributory currencies.  Trust me, this structure will make sense in due time.
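
As a rough sketch (the currency names are the ones I use later in this post; yours will obviously differ), the shape the template wants looks something like this:

Currency
    Base                <- a parent holding the contributory (input) currencies
        USD
        GBP
        CHF
    Reporting           <- a parent holding the reporting currency
        USD Reporting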




NB – I believe that the Reporting hierarchy I show here is probably not necessary.  I will update this post as I hear back from Oracle.


Also, remember planners do not enter data into Local, but instead have to select the correct currency for a given Account/Entity/custom member combination.  And yes, that makes forms more complex, but that is the price of this kind of fx.

The Base and Reporting members will become important during the fx template wizard.  Take it as read this is required and the actual reasoning behind this hierarchy will be covered later in this post.

Account

More custom members must be created for the rate types.

As you know, periodic line items like income statement accounts use average rates and balance sheet accounts use end of month rates.  Both need an Account to live in.
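
Something along these lines does the trick.  fx_HSP_Average is the average rate member you will see again in the template code below; fx_HSP_Ending is simply my own naming convention for the ending rate member – name them whatever makes sense in your application:

Account
    ...
    fx_HSP_Average      <- average rate, for income statement (periodic) accounts
    fx_HSP_Ending       <- end of month rate, for balance sheet accounts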

Using the fx template

The fx template is a graphical object, so it’s a wizard like the other Calc Man graphical objects.  Let’s take a walk through the template and see how many mistakes I can make.

Before the beginning

Although the wizard will start on object drag, cancelling out of the wizard will show the below instructions.  The design I came up with above for the Currency and Account dimensions reflects the instructions below.


Again, the bit about, “A parent that contains the reporting currencies (USD Reporting, EUR Reporting, etc)” is in error, I think.  The template still works, but it only does one reporting currency.  This is a difference from the way Planning works with its optional multiple reporting currencies.

With that, let’s go back to actually creating this fx process.

Drag it into the rule


The drag and drop initiates the Wizard.

Set the Currency dimension

It starts off with questions about how currency should be defined.  In the case of a non-currency application, Currency (I could have named it taters and neeps – this is totally up to the developer) is a custom dimension; all custom dimensions show up in the dropdown control.

Set the reporting currency

Once you’ve selected a Currency dimension, you must then select the reporting currency.

Pick a base currency parent

Can you guess why I chose “Base” instead of USD, GBP, or CHF?  Read on, Gentle Reader.

Pick an account type to drive currency type

Exchange rate option screen filled out

NB – I used the wizard’s member selector to define member names.  You will see later that I got bored with this and decided to type in values on my own.  Beware.

One thing to note – the parent member that contains the currency members is used to drive the currencies in use -- the template is automatic in that it generates code for all children of the parent member.  Cool.

POV

Just as with the in-built Planning fx, you must select a Point of View for the fx.  This is going to be pretty straightforward as I will simply type in the functions for the level 0 members of YearTotal, Total, and Entity.  Can you spot one of the errors?

Set the location of the Average rate

Oh, the errors in this one.  Again, this is a quiz for those who, unlike me, closely look at what they type (hint).  But there is a different error here as well – again, can you spot it?

Setting the location of the ending rate

Oh the shame.  The same two errors.  Hint.  At least I am consistent.

And with that, we are done

When the ending rate is set, the fx template wizard is complete.  It really was kind of easy, wasn’t it?  We also get a nice summary of the selections.  It’s across two screens as SnagIt doesn’t play nice in scrolling windows on a VM.

Or are we?

Let’s have a look at the code by clicking on the Script tab in the rule, and then copying and pasting the code to EAS’ script editor.  Sorry, Oracle, but to understand what the template does, I have to read the code.

For those of you that do not know the trick about seeing the code behind the graphical object, see below:


Getting back to the code, some bits of this make perfect sense:
  1. There’s a FIX that matches the POV setting.
  2. There are two FIXes that select Average versus Ending Accounts based on Planning-derived UDAs.
  3. USD Reporting gets cleared in both fx types.
  4. There’s a rate calculation…wait, hmm, something (several somethings, actually) isn’t right.

Let’s take a closer look

Issue no. 1
@RELATIVE(“YearTotal”, 0)->fx_HSP_Average?  A cross dim and a function that returns a set of members?  Is that possible?  Er, no.

How did I manage to do this?  It was stupidity on my part after I set the POV to @RELATIVE(“YearTotal”, 0).  That’s fine for the POV, but not so fine for the FX_Average rate setting.  Whoops.

Here’s the culprit:

Remember that comment about my consistency in making mistakes?  I did it again with the ending rates.  Double whoops.

So get rid of the erroneous @RELATIVE(“YearTotal”, 0):
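
Roughly, the change looks like this (the exact rate member combination depends on what you entered in the wizard):

Before:   @RELATIVE("YearTotal", 0)->"fx_HSP_Average"     <- invalid: a member set function inside a cross-dim
After:    "fx_HSP_Average"                                <- a single member as the average rate location
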
Issue no. 2
Did I delimit member names with double quotes?  Especially the ones with spaces in the names?  Sometimes.

Bugger.
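
The fix is nothing more exotic than quoting – any member name with a space in it must be wrapped in double quotes wherever it lands in the script:

Wrong:   Total Geography        <- the parser stops at the space
Right:   "Total Geography"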

Now it’s fixed.

Consistency
I did it with Total Geography, then I did it with No Segment and then with No Entity.  Clever, aren’t I?  No.

What happens when I try to validate this in Calc Man?

Well, there’s definitely an error…

Sree tells me Oracle know this and are working on getting better errors.  Probably someone ought to use this template and tell Oracle that they are.  Squeaky wheels and all that.

So fix it per the above errors

Once I recover from my stupidity, all is well.

How does it work?

Quite nicely, actually.

Here are the rates:

Remember, as this is not a multi-currency application, the rates must be entered through a form.

And here are the results:

If you want to see this in action, with proof from Essbase and Excel calculations, see this video:

The end of this blog post

See, Hyperion Planning fx is actually pretty awesome, whether it is in native Planning or via Calc Man’s fx template.

Native Planning provides single Entity fx.  And it's automatic.  And it's fast.

Calc Man provides contributory Entity fx.  And it's wizard driven, and once wizard driven, automatic for all currencies under the Base Currency member.  And it's fast, too.

What's not to like? 

I am a huge fan of Calc Man’s fx template – it’s fast, it’s easy, even I can do it.  And, if I am so inclined, I can steal (ahem, “borrow the idea from”) it from the code and do what I want with it.

So cool.  Thanks Oracle for putting this out there.  And thanks Sree and Kim for answering my questions.

Be seeing you.

An Essbase ASO procedural calculation too screwy to be true


But it is

Tim German and I presented at Kscope14 on ASO Planning.  As part of the use case for that presentation, I wrote code that mimicked BSO Planning’s fx (I seem to have currency conversion on the brain, cf. my last post).  You can read all about the power of ASO procedural allocations here:  Calculation Manager, BSO Planning, and ASO Planning combine for an awesome ASO Essbase procedural calculation hack -- Part 3.

What I didn’t cover in that presentation is something I don’t really understand (although I have high hopes that this blog post will spur explanations):  increasing the scope of an ASO procedural execute allocation slows down the calc (so this is pretty self-evident), but decreasing the scope (so far, this makes sense) and then combining the multiple procedural allocations speeds up the calc (so not quite so self-evident).

There are Doubting Thomases out there, and I completely understand their skepticism given how odd this finding is.  I too was amazed, and would be equally doubtful given the claim.  I’m not from Missouri, but I completely believe in proving what I state.

The numbers

Given the same database, same data, and same general code with the only difference being the range of the Accounts dimension within the execute allocation POV, I get the following times.

Split code line

Single code line

The analysis

Just in case you aren’t following this, that’s the same set of data – 111,122 cells – generated in 32.944 seconds (2.197 + 14.45 + 16.297) by three execute allocations in a row versus 133.581 seconds by the single allocation.  That’s a difference of 100.637 seconds.  The repeated code is roughly four times as fast as the single code line.  Weird, eh?

The code

For those of you who don’t believe me (and hey, why would you?), here are the logs straight from MaxL:

Split code line

MAXL> execute allocation process on database T3_ASO.T3_ASO with
  2> pov
  3> "CROSSJOIN( {[FY07]},
  4> CROSSJOIN( {[Final]},
  5> CROSSJOIN( {([Actual])},
  6> CROSSJOIN( {([No fx])},
  7> CROSSJOIN( Descendants( PERIOD, PERIOD.Levels(0)),
  8> CROSSJOIN( { (Descendants( [Net Income], ACCOUNT.Levels(0)))},
  9> CROSSJOIN( Descendants( Product, Product.Levels(0)),
 10> ( Descendants( PostCode, PostCode.Levels(0)) ) ) ))))))"
 11> amount "([MTD USA])"
 12> amountcontext "([Local])"
 13> target "([MTD])"
 14> range "{([USD])}"
 15> spread;

OK/INFO - 1300006 - Essbase generated [61523] cells.
OK/INFO - 1013374 - The elapsed time of the allocation is [2.197] seconds.
OK/INFO - 1241188 - ASO Allocation Completed on Database ['T3_ASO'.'T3_ASO'].

     essmsh timestamp: Wed Oct 22 07:39:25 2014

Assets

     essmsh timestamp: Wed Oct 22 07:39:25 2014

MAXL> execute allocation process on database T3_ASO.T3_ASO with
  2> pov
  3> "CROSSJOIN( {[FY07]},
  4> CROSSJOIN( {[Final]},
  5> CROSSJOIN( {([Actual])},
  6> CROSSJOIN( {([No fx])},
  7> CROSSJOIN( Descendants( PERIOD, PERIOD.Levels(0)),
  8> CROSSJOIN( { (Descendants( [Assets], ACCOUNT.Levels(0)))},
  9> CROSSJOIN( Descendants( Product, Product.Levels(0)),
 10> ( Descendants( PostCode, PostCode.Levels(0)) ) ) ))))))"
 11> amount "([MTD USA])"
 12> amountcontext "([Local])"
 13> target "([MTD])"
 14> range "{([USD])}"
 15> spread;

OK/INFO - 1300006 - Essbase generated [23466] cells.
OK/INFO - 1013374 - The elapsed time of the allocation is [14.45] seconds.
OK/INFO - 1241188 - ASO Allocation Completed on Database ['T3_ASO'.'T3_ASO'].

     essmsh timestamp: Wed Oct 22 07:39:40 2014

Liabilities

     essmsh timestamp: Wed Oct 22 07:39:40 2014

MAXL> execute allocation process on database T3_ASO.T3_ASO with
  2> pov
  3> "CROSSJOIN( {[FY07]},
  4> CROSSJOIN( {[Final]},
  5> CROSSJOIN( {([Actual])},
  6> CROSSJOIN( {([No fx])},
  7> CROSSJOIN( Descendants( PERIOD, PERIOD.Levels(0)),
  8> CROSSJOIN( { (Descendants( [Liabilities], ACCOUNT.Levels(0)))},
  9> CROSSJOIN( Descendants( Product, Product.Levels(0)),
 10> ( Descendants( PostCode, PostCode.Levels(0)) ) ) ))))))"
 11> amount "([MTD USA])"
 12> amountcontext "([Local])"
 13> target "([MTD])"
 14> range "{([USD])}"
 15> spread;

OK/INFO - 1300006 - Essbase generated [26133] cells.
OK/INFO - 1013374 - The elapsed time of the allocation is [16.297] seconds.
OK/INFO - 1241188 - ASO Allocation Completed on Database ['T3_ASO'.'T3_ASO'].

Single code line

MAXL> execute allocation process on database T3_ASO.T3_ASO with
  2> pov
  3> "CROSSJOIN( {[FY07]},
  4> CROSSJOIN( {[Final]},
  5> CROSSJOIN( {([Actual])},
  6> CROSSJOIN( {([No fx])},
  7> CROSSJOIN( Descendants( PERIOD, PERIOD.Levels(0)),
  8> CROSSJOIN( { (Descendants( [Net Income], ACCOUNT.Levels(0))), (Descendants
( [Assets], ACCOUNT.Levels(0))), (Descendants( [Liabilities], ACCOUNT.Levels(0))
)},
  9> CROSSJOIN( Descendants( Product, Product.Levels(0)),
 10> ( Descendants( PostCode, PostCode.Levels(0)) ) ) ))))))"
 11> amount "([MTD USA])"
 12> amountcontext "([Local])"
 13> target "([MTD])"
 14> range "{([USD])}"
 15> spread;

OK/INFO - 1300006 - Essbase generated [111122] cells.
OK/INFO - 1013374 - The elapsed time of the allocation is [133.581] seconds.
OK/INFO - 1241188 - ASO Allocation Completed on Database ['T3_ASO'.'T3_ASO'].

Conclusion from the data

I have a few, although they are not satisfying:
  1. Those results are freaking weird.
  2. Something is going on within ASO Essbase that makes the multiple code lines faster.
  3. I wish I was smart enough to know the answer to point number two.
  4. Someone will be smart and knowledgeable enough to figure this out.
  5. General rejoicing will occur on the completion of point number four.  I will cheer the loudest.

A plea for help

I’m not enough of a scientist to delve into the why (I do try, somewhat, to have a life) nor am I smart enough to figure it out.  Dan Pressman is working on this and he is way smarter, and even more obsessive than me, so we all stand a very good chance of finding out why this is so.

Dan did address this in the Network54 thread, and wrote:


The allocation POV is required to be a symmetrical area of the cube. I know this because I tried some tricks with filters and nonemptytuple using <dimension>.currentmember. To understand this suppose I had a cube with the dimensions FloraOrFauna and Species (among others). Well we know that there will never be tuples such as (Flora, Canine) or (Fauna, ChristmasTree).

Looking at the whole we can not eliminate Canines just on the Flora side because that would be non-symmetrical. That is what using leaves and nonemptytuple is faced with. However if we split the allocation into a Flora allocation and a Fauna allocation then we are ok.
 
I believe that Dan’s explanation correctly states that smaller, more focused allocation POV ranges are faster than large ones.

What I do not understand (but am waiting with bated breath for) is an explanation as to why multiple small POVs that when combined equal the size of the full POV set are faster than a single full POV definition.  As you can see from my statistics, the number of addressed cells is the same.


I look forward to the explanation.  :)

Be seeing you.

2014 ODTUG board of directors election results


First of all, thank you

You have, perhaps a trifle unwisely, reelected me to the ODTUG board of directors.  Thank you and I will do my best to serve all ODTUG members to the best of my ability.

I have to say that given the intense campaigning this time round, I was pleasantly surprised to be reelected as this blog post is all that I did to announce my candidacy.  So again, thank you for your belief in me and I will do my best to not let you down.  I may even make you happy.  :)

Who got elected

Yr. obt. svt.

You are reading my blog, perhaps you even voted for me.  You know who I am for better or worse, and as I wrote before, thank you for voting for me.  

Tim Tow

Tim has been treasurer for as long as I’ve been on the board and was originally (I think I have this bit right) appointed to the board when ODTUG first welcomed what was then called Hyperion back in 2008; he subsequently ran for the board and is now on his third reelection.  Tim is also a personal hero of mine; anyone who knows him in the EPM and ODTUG community knows and respects him as well.  Tim and I are both reaching the end of our six year term limit so this will be it for him, at least for a while.

Mike Riley

Mike is a former ODTUG president (so slightly insane then, but nice) and Kscope conference chairman (again with the slightly out of his mind, but again thankfully someone took on that Herculean task).  He hit his six year term limit last year, took the mandatory year off and is back.  

Mike helped bring EPM to what was then called Kaleidoscope, so everyone who reads this blog owes a debt of gratitude to him.

I don’t think anyone can doubt his dedication to ODTUG and drive.  I look forward to serving with him again.

Sarah Zumbrum

Sarah is new to the ODTUG scene and has seemingly exploded out of nowhere.  That’s not actually true as she was a member of the former Hyperion SIG.  I’ve worked with Sarah on the EPM Community initiative and have been very pleased with her work ethic, determination, and ability to lead the other volunteers.

She will make a great contributor to ODTUG and I look forward to serving with her for the next two years.

What I’m going to do after this term

ODTUG board members are currently limited to three contiguous two year terms.  After that, they must take at least a one year break.  This upcoming two year term will therefore be my last as 2014 is the end of year four for me.

But it’s going to be more than just the end of my maximum three concurrent terms – the upcoming 2014 to 2016 term is it, there ain’t no more to give, I’m moving out, it’s Arrivederci, Auf Wiedersehen, Tot Ziens, and Cheerio after that.  

It’s not that I don’t love ODTUG, but I feel that six years is more than enough time to enact whatever vision I have (I am speaking for myself and no other board member) and with the grassroots EPM Communities initiatives I am managing (Ah, management:  I give very few general directions, which are largely ignored and rightfully so, and Jennifer Anderson, Courtney Foster, and Sarah Zumbrum along with a bunch of other people do all the work and somehow credit is accreted to me.  Why am I not a manager in real life?) I think I have found the task that strikes passion in my ODTUG heart.

So I have two more years to help get the EPM Community initiatives get their sea legs, do all of the other sundry tasks and requirements of ODTUG board members (some are quite exciting, others deadly boring, so just like a real job), and then get out of the way for the next group of volunteers.

Am I divorcing ODTUG?  Is this a community property state?  Who gets custody?

Absolutely not.  I owe a huge professional and personal debt to ODTUG – it has had all kinds of impacts on my career.  These include:

I am not going to forget or fail to repay that.  I’m not sure what kind of ODTUG role I will be able to carve out in future but even if it’s stuffing bags at Kscope17, I’ll be involved.

I’m not dead yet

I want to finish my work with the EPM community initiatives – you have elected me to do that.  Once that is done, it is time for someone else to come onto the board for the first time and go through their own process of learning and contributing.

Until that time, I look forward to serving every single member of ODTUG.  Thanks again for believing in me.

Be seeing you.