First, let me ask if I am correct in understanding that while fact tables can be partitioned, a drillthrough dimension cannot? Thus, a drillthrough dimension must span the entire fact table?
My users want one to two years of data in the cube, and they also want drillthrough. We have been able to persuade them to accept only a month or two of drillthrough.
Unfortunately, it appears that this is not doable.
>>First, let me ask if I am correct in understanding that while fact tables can be partitioned, a drillthrough dimension cannot? Thus, a drillthrough dimension must span the entire fact table?
That is correct. Drillthrough is often implemented by creating a fact dimension, and you cannot partition fact dimensions.
The solution is to define a view that spans all the partitions in your cube and define the fact dimension off that view.
This way you will get drillthrough data for the entire range of the cube.
Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
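For concreteness, a minimal sketch of the kind of view Edward describes, assuming the fact data is split across hypothetical monthly partition tables (your table and column names will differ):

CREATE VIEW dbo.vwFactSalesAll
AS
    SELECT InvoiceNumber, InvoiceLineNumber, DateKey, CustomerKey, Amount
    FROM dbo.FactSales_200701
    UNION ALL
    SELECT InvoiceNumber, InvoiceLineNumber, DateKey, CustomerKey, Amount
    FROM dbo.FactSales_200702;
    -- ...and so on, one SELECT per partition table

The fact dimension is then built on dbo.vwFactSalesAll rather than on any single partition table.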
Thanks, Edward.
But the reason I need to limit the drillthrough to just a month or so is that just six months' worth of data consists of about 120 million records.
|||In this case, limit your view to only the data you'd like to see in the drillthrough.
Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
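A sketch of that restricted view, again with hypothetical names; the WHERE clause keeps the fact dimension down to just the drillthrough window:

CREATE VIEW dbo.vwFactSalesDrillthrough
AS
    SELECT InvoiceNumber, InvoiceLineNumber, DateKey, CustomerKey, Amount
    FROM dbo.vwFactSalesAll
    WHERE DateKey >= 20070501;  -- hypothetical cutoff: the last month or two only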
Hey guys - I'm in exactly this boat and had trouble implementing this solution. The rows outside the window (i.e., more than one month old in your case) didn't connect to the fact table, so they dropped out of the cube entirely rather than just being unavailable for drillthrough.
I posted this as a separate thread at
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1532618&SiteID=1
Any help would be great! On large cubes, I would think this process has to be SOP...
|||BIGuy--we never found a solution.
|||Well that isn't good! Anyone with over 158 million rows has to have this problem, because that is the size at which the asstore file reaches its 4-gig limit if the fact-dim key is an int (132 million for bigint). So unless I'm missing something obvious about setup or architecture, this would mean that no cube with over 170 million rows can use drillthrough? Geesh, somehow we've got to deal with that...
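For rough intuition, taking the poster's figures at face value: 4 GB = 4,294,967,296 bytes, and 4,294,967,296 / 158,000,000 works out to about 27 bytes per record with an int key, versus about 33 bytes at the 132,000,000-record bigint limit -- roughly consistent with a bigint key costing 4 more bytes per record on top of some fixed per-record overhead.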
|||You can use a Reporting Services drillthrough action, where you can drill through to a specific set of records.
Keep in mind that cube drillthrough (MDX DRILLTHROUGH statements) can only be used against data in the cube.
For anything outside of the cube you need to use a Reporting Services action, for example.
I don't think the users would appreciate being turned loose on 158 million records.
When they drill through, they already know SPECIFICALLY which records they want to see in more detail.
So Reporting Services reports can be called through a drillthrough action defined in the cube.
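For illustration, the drillthrough report behind such an action might just run a parameterized relational query like this (table, column, and parameter names are hypothetical):

-- Report dataset query: return detail rows only for the coordinates the
-- user drilled through on, passed in as report parameters.
SELECT f.InvoiceNumber, f.InvoiceLineNumber, f.Quantity, f.Amount
FROM dbo.FactSales AS f
WHERE f.CustomerKey = @CustomerKey
  AND f.DateKey BETWEEN @StartDateKey AND @EndDateKey;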
|||Binh Cao--
Thanks for your suggestion. Where necessary we have done this, but we really find it to be a considerable increase in complexity and time involved. What was just an Analysis Services project becomes an Analysis Services/Reporting Services project. Passed parameters need to be translated from the form they take in the cube (and in the case of hierarchies, this can be a bit involved) to a form that can be used in a SQL statement against the underlying table (see the sketch after this post).
And ultimately, this puts the computational burden back on our SQL data warehouse--and our whole intent in developing cubes was to get users off the data warehouse.
BIGuy--
Agree with what you have said.
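As an illustration of the parameter translation described above, here is a hypothetical T-SQL fragment that pulls the member key out of an MDX unique name so it can be used in a relational WHERE clause:

-- Hypothetical: an SSRS parameter arrives as an MDX unique name such as
-- '[Date].[Calendar].[Month].&[20070601]'; extract the key between '&[' and ']'.
DECLARE @UniqueName nvarchar(200);
SET @UniqueName = N'[Date].[Calendar].[Month].&[20070601]';

SELECT SUBSTRING(@UniqueName,
                 CHARINDEX(N'&[', @UniqueName) + 2,
                 LEN(@UniqueName) - CHARINDEX(N'&[', @UniqueName) - 2) AS MonthKey;
-- returns 20070601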
|||How about if you still use a DRILLTHROUGH MDX statement in Reporting Services (which works with the OLE DB for OLAP provider), but add a suitable time member (like a month) to constrain the drillthrough coordinates that are passed?
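A sketch of such a constrained DRILLTHROUGH statement (cube, hierarchy, and member names are hypothetical); the month member in the tuple limits the coordinates, and MAXROWS caps the result:

DRILLTHROUGH MAXROWS 10000
SELECT ([Measures].[Sales Amount],
        [Date].[Calendar].[Month].&[200706],
        [Customer].[Customer].&[12345]) ON 0
FROM [Sales]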
|||Binh -
Yes - just so everyone understands, it's not that I want 158 million rows via drillthrough, it's that you can't **process** the cube at all if you have that many rows in it. As you say, the user will have a view on their screen and drill through to what they are interested in, and the result may be from 1 to 10k or so rows depending on their goal at that time. That is very funny, 158M rows for drillthrough.
But if you have more than 158 million rows in your cube, then it will fail during processing because the fact dim that contains the Invoice and Invoice Line Number has gotten too large (google "asstore 4 gig limit" to learn more).
And one might say use ROLAP for the fact dim, which is correct in theory (and should work for small cubes just fine), but in practice the query it generates to go get the rows takes over 20 minutes to resolve versus 5 seconds with a MOLAP fact dim.
We are using ProClarity, so on our side Reporting Services hacks/tricks won't help.