In many cases, OLAP cubes are huge and a Full Process takes a long time.
Therefore, it's important to know which changes, when the updated solution is deployed, move the cube to "Unprocessed" status and require a new full process. Dimension process status is just as critical: if a dimension becomes unprocessed, every cube connected to that dimension becomes unprocessed too.
Here is a detailed, tested list of different actions and how they affect the cube (tested on SSAS 2008; I don't recall any difference with SSAS 2005, but I haven't verified it).
We can summarize the results in one sentence: every action that affects data moves the cube or dimension status to unprocessed. Such changes are also called structural changes, meaning any change to the cube/dimension structure (a new element, an edit to an aggregation method, etc.) that therefore requires data to be loaded or reloaded.
One exception: adding or deleting a translation on a dimension unprocesses it when deployed. I don't know why.
And now to the detailed list:
Type | Action | Unprocesses cube/dimension? |
---- | ------ | --------------------------- |
Cube | New measure group | Yes |
Cube | New measure | Yes |
Cube | Edit measure aggregation method | Yes |
Cube | Measure format | No |
Cube | Measure name | No |
Cube | Measure display folder | No |
Cube | ErrorConfiguration edition | No |
Cube | Edit dimension usage | Yes |
Cube | Calculations | No |
Cube | Add, edit or delete KPI | No |
Cube | Add, edit or delete action | No |
Cube | Edit partition query | No; not applied until the next process. |
Cube | Add new partition | No; the new partition is unprocessed. |
Cube | Edit partition storage mode | No, but the data is empty. |
Cube | Create, edit and assign aggregations | No; not applied until the next process. |
Cube | Add, edit or delete perspective | No |
Cube | Add, edit or delete translation | No |
Dim | Add attribute to dim | Yes |
Dim | Edit attribute name | No |
Dim | OrderBy property of attribute | Yes |
Dim | AttributeHierarchyVisible property | No |
Dim | Edit attribute relationship | Yes |
Dim | Add or delete translation to dim | Yes |
Dim | Edit translation | No |
Other | Add, Edit, Delete Role | No |
Other | Edit data source | No; not applied until the next process. |
Other | Edit DSV | No |
Finally, three remarks:
· The BIDS Helper add-in gives you the option to deploy only the MDX script or the aggregations, so you can be sure nothing else is affected.
· Use the Deployment Wizard tool when you want to deploy without overwriting partitions or roles, in case they were edited directly in the OLAP DB.
· If your cube is down and you need one measure group/partition available without processing the entire cube (because a full process takes a long time), you can first process the cube with the "Process Structure" process mode. This moves the cube status to Processed but without any data. Then you can process the specific part you want and it will be available; a minimal XMLA sketch is shown below.
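As a rough sketch of that "Process Structure" trick (MyOlapDb and MyCube are hypothetical placeholder IDs; take the real IDs from your own solution), the XMLA command could look like this:

```xml
<!-- Sketch: move a cube to "Processed" status without loading any data.
     MyOlapDb and MyCube are hypothetical placeholder IDs. -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDb</DatabaseID>
    <CubeID>MyCube</CubeID>
  </Object>
  <Type>ProcessStructure</Type>
</Process>
```

After that, a ProcessFull on just the partition you need makes it queryable while the rest of the cube stays empty.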
Hello,
I have a Data Warehouse in SSAS 2008 configured with Dimensions (such as customers and products) and Measure Groups (such as inventory and orders) which are partitioned at the monthly level (e.g. Jan 2011, Feb 2011, etc.). Data is loaded on a daily basis via SSIS packages. When the data has completely loaded, a script runs to determine the partitions that need to be processed (i.e. if only orders for October 2011 were added, then only the Orders October 2011 partition is included in the XMLA script).
In order to save time (since it takes about 2 hours to process the cube due to the volume of data), is there a way to process only the dimensions and affected partitions? Currently an SSIS package executes two XMLA scripts: 1) to process all of the dimensions [which takes roughly 15 minutes to run] and 2) to process the affected partitions and cube [which takes roughly 2 hours to run]. However, if I don't include the directive to also process the cube (I am using ProcessUpdate), the cube ends up in an Unprocessed state.
Can you provide any insight as to whether it is possible to add data to a cube and process only the dimensions and affected partitions, without having to process the cube itself, while keeping the cube in a Processed state so it can be queried?
Thanks for your help!
Shannon
Hi Shannon,
I don't understand why your cube moves to unprocessed. If the only thing you do is a ProcessUpdate of the dimensions and then process the relevant partitions, it shouldn't happen.
Anyway, you can solve it by adding a step between the two above: Process Structure on the entire cube. It moves the status to Processed but without any data inside, and then you can process the relevant partitions.
Thanks for the follow up!
Upon further review of the XMLA script for processing the dimensions, I am setting the Type to ProcessFull (versus ProcessUpdate or ProcessAdd). Could that be the problem?
Otherwise I will try your suggestion of:
1. processing the dimensions
2. processing the cube as ProcessStructure
3. processing the partitions that have updated data
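For illustration, the three steps above could be combined into a single XMLA batch along these lines (a sketch only; MyOlapDb, Customer, MyCube, Orders and Orders_Oct2011 are hypothetical IDs that must match your own project):

```xml
<!-- Sketch of the suggested sequence: update dimensions, rebuild the cube
     structure, then fully process only the changed partition.
     All IDs below are hypothetical placeholders. -->
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <DimensionID>Customer</DimensionID>
    </Object>
    <Type>ProcessUpdate</Type>
  </Process>
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <CubeID>MyCube</CubeID>
    </Object>
    <Type>ProcessStructure</Type>
  </Process>
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <CubeID>MyCube</CubeID>
      <MeasureGroupID>Orders</MeasureGroupID>
      <PartitionID>Orders_Oct2011</PartitionID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>
```

Keeping the dimension step as ProcessUpdate (rather than ProcessFull) is what prevents the cube from dropping to Unprocessed, as discussed above.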
Hi,
I need to delete translations on one dimension. Do I need to do a Full Process of the dimension followed by the cube again, or is a Process Update enough?
Thank you!!!
Kiran
I need to add/update some records in two existing dimensions on which security is implemented. Do I need to reprocess the cube for those records to take effect in the cube?
Hi Niraj,
No, a Process Update of the dimension should be enough.
But will I have to process the dimension after adding/updating records in the dimension's source table?
Yes, with the "Process Update" option.
A Process Full of the dimension will lead to the cube going into unprocessed mode, so always prefer Process Update for your dimension processing.
I have a database that has two cubes in it, hourly and daily data. I need to split these two cubes into separate databases. If I were to edit the original data source view to remove all hourly-related fields (not changing anything for the daily one) and remove the hourly cube, would this database, the one that would become my daily one, have to be reprocessed? I would create a completely new database to hold the hourly data.
I want to avoid having to reprocess the daily database if possible because it contains so much data.
Yes. You can process partition by partition, collecting your partitions in the following XMLA:
'
#innerXML
'