Thursday, March 29, 2012
ETL / Data mart / Cube question
We are in the process of using the new Kimball toolkit book as our guide to create another DM. Since we only have one data source, we think we need to pull all the tables into the staging area. At first we wanted to pull in just the minimal data needed to populate the Dimension/Fact tables plus any information needed to derive columns.
However, the business area wants a complete snapshot of ALL THE data for this given subject matter. Their reason is that sometimes our CUBES don't have all the information they want, and they want to be able to query the data mart tables against the staging area tables using SQL or Access. Since our data mart load time is the first Saturday of the month, this argues in favor of that approach. If the load process ran every day or once a week, it might be better to just query across the data mart and the reporting (replicated production) database; with the 1st Saturday approach, the data mart tables would be older than the information they were being linked against on the replicated report DB.
Questions:
1. Is this the correct approach?
2. According to a Microsoft diagram, the load goes from Source to Staging to OLAP DB - which is really just another OLTP DB but the tables are modeled as Dimension/Fact tables, right?
3. Our existing data mart, which most people access through CUBES, has a big manual step. After all the ETL happens and the Dim/Fact tables have been updated, we have to open up SSAS and process the cubes on the first business day after the 1st Saturday to get them refreshed - can't this be automated within SSIS?
4. There have been some major production changes to the data source for our existing production data mart. All the ETL was done on SQL2k with stored proc's, and DTS was only used as the policeman to control the flow. All the SCD processes were done, and are working fine, using SP's. We now have to make changes and are faced with having two completely different systems to manage - one with an ETL using the old method and the 2nd using SSIS. Some of the CUBES from the original data mart are going to be combined with the new cubes we are creating in our new system. To try and make it clearer: we have, for example, a Client dimension and a Client fact table that are refreshed monthly with the old ETL, and our new subject matter is now also going to affect the Client dimension and Client fact table created in the first project. It appears the production changes may take many weeks to adjust for using the old ETL methods - so it begs the question: since all the ETL is working fine with DTS as flow master, is it perhaps better to get it all working with SSIS calling all the SP's, and incorporate all our new ETL within SSIS using the new features that SSIS can offer?
5. When adding a 3rd subject matter data mart - is it a 3rd ETL package or just enhancing the one we have?
____________________
Joe Horton
SQL 2005 Business Intelligence
Aging & Disability Services Administration
I think that for 1, 2 and 3 you're right - you can schedule cube
processing within SSIS (there is a specific task to do that).
For 4 and 5 it's hard to say - it depends on the cost of
implementation and the cost of maintenance... SSIS packages are easier
to maintain in the long term, but you could have a higher cost in the
short term for the initial design. Mixing SSIS and SP's is certainly
feasible, even if it makes the solution harder to manage, and
performance can suffer compared to a native SSIS solution. Anyway, it
all depends on other constraints (volume of data, development time
available, and so on).
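For illustration of the specific task mentioned above: an SSIS Analysis Services Processing Task (or an Analysis Services Execute DDL Task) can submit an XMLA Process command at the end of the ETL package, so nobody has to open SSAS manually after the 1st Saturday load. A minimal sketch, with placeholder database and cube IDs rather than anyone's real object names:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <!-- hypothetical IDs; substitute the real SSAS database and cube -->
        <DatabaseID>MyDataMart</DatabaseID>
        <CubeID>MyCube</CubeID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>

The same package can also call the existing stored procedures from Execute SQL Tasks, which is the mixed SSIS-plus-SP approach discussed for question 4.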
Marco Russo
http://www.sqlbi.eu
http://www.sqljunkies.com/weblog/sqlbi
|||Hi Joe,
I'm involved in a Data Warehouse project. The client is already invested in SQL
Server 2005.
Coming directly to the point: you must have set up server(s) for your data warehouse. I wanted to know
whether you were involved in Server Sizing / Data Sizing. If so, can you
please guide me as to which method / template you used to size your
data, and based on it what hardware (and configuration) you suggested /
decided on?
Please help - this is very critical and urgent!
Regards,
Bharat Tamhankar
"Joe" wrote:
>
> We are in process of using new Kimball toolkit book as our guide
> to create another DM. Since we only have one data source, we think we
> need to pull in all the tables into the staging area. At first we wanted
> to just pull in minimal data to populate the Dimension/Fact tables and any
> information needed to derive columns.
> However the business area wants a complete snapshot of ALL THE
> data for this given subject matter. Their reason is that sometimes our
> CUBES don't have all the information they want, and they want to be able
> to query the data mart tables against the staging area tables using SQL or
> Access. Since our data mart load times is first Saturday of the month -
> this argues in favor of this. If the load process was every day or once a
> week - it might be better to just query across the data mart and the
> reporting (replicated production) database - the the data mart tables
> would be older than the information they were linking against on the
> replicated report DB using the 1st Saturday approach.
>
> Questions:
> 1. Is this the correct approach?
> 2. According to a Microsoft diagram, the load goes from Source to
> Staging to OLAP DB - which is really just another OLTP DB but the tables
> are modeled as Dimension/Fact tables, right?
> 3. Our existing data mart which everyone mostly uses CUBES to
> access, has a big manual step. After all the ETL happens and the Dim/Fact
> tables have been updated - we are having to open up SSAS and process the
> cubes, the first business day after the 1st Saturday, to get them
> refreshed - can't this be automated within SSIS?
> 4. There have been some major production changes to our existing
> data source for our existing production data mart. All the ETL was done
> on SQL2k with stored proc's and DTS was only used as the policeman to
> control the flow. All the SCD processes were done and are working fine
> using SP's. We now have to make changes and are faced with having two
> complete different systems to manage - one with an ETL using the old
> method and the 2nd using SSIS. Some of the CUBES from the original data
> mart are going to be combined with our new cubes we are creating in our
> new system. To try and make it clearer - we have for example a Client
> dimension and a Client fact table that first is refreshed monthly with the
> old ETL, and now our new subject matter is now also going to affect the
> Client dimension and Client fact table that was created the first project.
> The old production changes it appears may be many weeks to adjust using
> the old ETL methods - so it begs the question, since all the ETL is
> working fine with DTS as flow master - is it perhaps better to get it all
> working with SSIS calling all the SP's and incorporate all our new ETL
> within SSIS using the new features that SSIS can offer?
> 5. When adding a 3rd subject matter data mart - is it a 3rd ETL
> package or just enhancing the one we have?
>
>
>
> ____________________
> Joe Horton
> SQL 2005 Business Intelligence
> Aging & Disability Services Administration
>
>
>
|||I'm sorry - the existing architecture was in place before I arrived, and as
my expertise is in coding, not hardware, I wouldn't have been much help in
that area anyhow.
"Bharat Tamhankar" <BharatTamhankar@.discussions.microsoft.com> wrote in
message news:2B8F4494-BADC-4F47-86BC-383C5C07F0E6@.microsoft.com...[vbcol=seagreen]
> Hi Joe,
> I'm involved in a Data Warehouse project. Client is already invested in
> SQL
> Server 2005.
> Directly coming to the point..
> You must have setup server(s) for your data warehouse. I wanted to know
> whether you were involved in Server Sizing / Data Sizing. If yes, then can
> you please guide me as to which method / template did you use to size your
> data and based on it what hardware (with configuration) did you suggest /
> decide?
> Please help this is very critical and urgent!
> Regards,
> Bharat Tamhankar
>
> "Joe" wrote:
Thursday, March 22, 2012
Error-the dimension is not referenced by a fact table?
Hi, all experts here,
Thank you very much for your kind attention.
I ran into a very strange problem during the creation of a cube. When I unchecked 'fact' for the dimension, it says the dimension (in my case, the time dimension) is not referenced by a fact table. But in the data source view I have already created relationships between the time dimension table and the other fact tables (in my case, due to the nature of the data, I set the fact tables up as dimensions as well). Why is that?
Hope my question is clear.
I am looking forward to hearing from you shortly and thanks a lot.
With best regards,
Yours sincerely,
Helen
Please also help me to build a dimension with Year, Quarter, and Month levels from a date column in a table.
|||The data source view (DSV) represents the relational source for the dimensions and measure groups. When you defined relationships between tables in the DSV, you helped the cube wizard guess which tables should become the source of dimensions and which should become the source of measure groups (later it is also used to build the SQL queries issued during processing). It makes its guess, which might not be what you want. But the outcome of the wizard must be a cube that can be built. Therefore, when you unchecked the offered fact table, the wizard shows you a message that it can't build a valid cube, because in a valid cube a dimension must be related to at least one measure group. To complete the wizard you need to select another fact table which you think is the correct source for the measure group.
You are not obligated to build cubes with the wizard. To complete the wizard you need to select a fact table; then do what you think is right in the designer. There you would create your measure groups and relate them to the dimensions on the Dimension Usage tab.
Hope my answer makes sense.
Wednesday, March 21, 2012
Errors while browsing the cube - "#Value"
While browsing the cube, I'm getting a "#Value" error, which I believe is due to the non-existence of certain attribute members.
For example, working on the Adventure Works database: assume there is no Product Category = 5 in the SSAS database, but we want to include this category in the calculation for future purposes, so the calculation would be as follows
(Measures.[Internet Sales Amount], [Product].[Category].&[5])
This calculation gives me a "#Value" error while browsing. To handle this kind of situation, I added exception handling: check whether the member exists and, if not, return 0.
IIF(
ISEMPTY(EXISTS([Product].[Category].[Category].MEMBERS, [Product].[Category].&[5]).ITEM(0)),
0,
(Measures.[Internet Sales Amount], [Product].[Category].&[5])
)
This seems to work fine in SQL Server Management Studio, but when it is added to the cube's Calculations and the cube is processed, I still get the same "#Value" error.
Is this approach to handling it right? Is there a different way to handle this?
-
Vivek
Doing the whole "IIF( ISEMPTY( EXISTS(...)))" is going to wreck your performance. I'm surprised that it appears to work in SSMS, as the EXISTS function will attempt to resolve both sets in order to evaluate them and should throw an exception trying to resolve [Product].[Category].&[5]; and IsEmpty is not checking whether the set is empty, but rather doing a value comparison against the default measure, which could lead to misleading results.
There are better ways to code a check like this, but they are still not a good idea. If you know you are going to have a Category 5, add that row to the dimension now and re-process the dimension, even though you don't yet have any facts for it. This way this script will be able to resolve the member reference and everything will work.
Adding references in the script to non-existent members and trying to trap errors is generally a bad thing.
NOTE: in SSMS you can hover over or double click on the #Value to see the text of the error message.
|||Thanks for the reply.
I guess you are right; I did some further searching on this and got the same inputs.
Books Online suggests using "IS" instead of "ISEMPTY()" to check whether dimension/attribute members exist. Even this works in SSMS (as an MDX query) but fails when ported to BIDS and browsed.
You did say there are ways to handle this kind of exception - can you share them? I know it is performance intensive, but I have to catch this kind of exception.
-
Vivek
|||Just to be entirely clear - you ONLY ever need this sort of check if you are taking user input for something like a parameter in SSRS or some other reporting engine. You should NEVER use the following type of code in your MDX Script.
The pattern in something like an SSRS report would look like the following, passing in the unique name as a string and using the StrToMember() function and testing with the IS operator.
eg.
IIF( StrToMember( @parameter ) IS NULL, ...
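For instance, a fuller sketch of that pattern as an SSRS dataset query might look like the following; the Adventure Works cube and measure are the ones discussed earlier in the thread, while the @CategoryUniqueName report parameter and the [Safe Category Sales] member name are illustrative assumptions:

WITH MEMBER Measures.[Safe Category Sales] AS
    IIF(
        // if the member named by the report parameter does not exist, return 0
        StrToMember(@CategoryUniqueName) IS NULL,
        0,
        ( Measures.[Internet Sales Amount], StrToMember(@CategoryUniqueName) )
    )
SELECT { Measures.[Safe Category Sales] } ON COLUMNS
FROM [Adventure Works]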
I know I am repeating myself, but I need to make sure this is clear - if you know that you are going to need a product category 5, then create it in the dimension member now. Never put references in your MDX script to members that do not yet exist and then try to wrap these references with code to handle the errors.
If you have a calc that works in SSMS but not in BIDS, it will be because of the way the browser control in BIDS is structuring the query. Start up a Profiler session, capture the queries that BIDS generates, and test them in SSMS. There will probably be multiple commands fired off, as BIDS quite often uses things like session-scoped sets in its queries.
|||Darren,
Thanks. Surely will not use this in BIDS, as it clearly doesn't work.
--
Vivek
Errors when setting up Cube from Project Server 2007 in Analysis Server
We are trying to set up a cube from Project Server in our (separate) Analysis Server.
Even though we are following all the instructions that can be found, and regardless of whether we set up the repository in the SQL 2005 Server or use the .mdb, we run into the following errors:
[26.07.2007 12:19 PM] Analysis Services session failed with the following error: Failed to connect to the Analysis Services server SERVER. Error: Your permissions on the server computer do not allow you to administer this Analysis server.
Error:
===== Process Completed =====
[26.07.2007 12:19 PM] Failed to build the OLAP cubes. Error: Analysis Services session failed with the following error: Failed to connect to the Analysis Services server SERVER. Error: Your permissions on the server computer do not allow you to administer this Analysis server.
The Project Server Queue Service user is a) a SQL sysadmin and b) a member of the SQL Server Admin group - at least it became that after everything else didn't work out.
The correct shares etc. at the file level are in place, and we have even checked that the Queue account can connect to the share from another computer, ...
So now I wonder whether someone else had this issue and can help out here?
Thanks in advance.
Hello! In order to set up an SSAS 2005 cube you will need to have administrator permissions on SSAS 2005.
I assume that SSAS 2005 and SQL Server 2005 run on the same server.
Connect to SSAS 2005 with Management Studio and right-click on the server. Choose Properties, go to the Security tab (server-wide security privileges), and then add the user who should have rights to deploy and process the cube.
HTH
Thomas Ivarsson
|||
I am having similar problems.
I have added the shared service provider to the AS box. I restarted AS and I am still getting the same error.
Any help would be greatly appreciated....
Here is the event log:
"Standard InformationSI Entry Point:
Project User: <domain\server name>
Correlation Id: 4f990c61-13e6-49b3-949e-2867e681c38b
PWA Site URL: <server name>
SSP Name: SharedServices1
PSError: CBSASConnectionFailure (17003)
Cube build failed to connect to the Analysis Services server. Verify the data source connection is valid. Error: Setting UID=00007829-4392-48b3-b533-5a5a4797e3c9 ASServerName=<IP Address> ASDBName=OLAPCUBE ASExtraNetAddress=<IP Address> RangeChoice=0 PastNum=1 PastUnit=0 NextNum=1 NextUnit=0 FromDate=08/01/2007 12:11:58 ToDate=08/01/2007 12:11:58 HighPriority=True
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
"
Monday, March 19, 2012
Errors processing SSAS 2005 Cube against SQL 2000 database
Hi,
I am receiving the following errors when I try to deploy/process my cube. The source db is SQL Server 2000 - could this be the issue? I have a connection created (I have tried both OLE DB providers - SQL Native Client and the MS OLE DB provider) and I am supplying a SQL Server user name and password (datareader permission in the db). The 2005 SSAS service is running under a local user account, so Windows authentication is not possible using that account. And I have tried various settings under "Impersonation Information" with no luck.
SQL 2K5 sp1 on Win XP sp2
Sql 2k (sp4) on Win2k server
Thanks!
Build started: Project: Aegis, Configuration: Development
Started Building Analysis Services project: Incremental ....
Build complete -- 0 errors, 0 warnings
Deploy started: Project: Aegis, Configuration: Development
Performing a full deployment of the 'Aegis' database to the 'JWILAMETZ\SQLSERVER2005_01' server.
Generating deployment script...
Add Database Aegis
Process Database Aegis
Done
Sending deployment script to the server...
Error -1056899072 : The following system error occurred: Logon failure: unknown user name or bad password. .
Error -1055784860 : Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'IBANK AEGIS PROD DM', Name of 'IBANK AEGIS PROD DM'.
Error -1054932980 : Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'CURRENCY DIM', Name of 'CURRENCY DIM' was being processed.
Error -1054932979 : Errors in the OLAP storage engine: An error occurred while the 'CURRENCY CD' attribute of the 'CURRENCY DIM' dimension from the 'Aegis' database was being processed.
Error -1056899072 : The following system error occurred: Logon failure: unknown user name or bad password. .
Error -1055784860 : Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'IBANK AEGIS PROD DM', Name of 'IBANK AEGIS PROD DM'.
Error -1054932980 : Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'CURRENCY DIM', Name of 'CURRENCY DIM' was being processed.
Error -1054932979 : Errors in the OLAP storage engine: An error occurred while the 'CURRENCY DESC' attribute of the 'CURRENCY DIM' dimension from the 'Aegis' database was being processed.
Error -1056899072 : The following system error occurred: Logon failure: unknown user name or bad password. .
Error -1055784860 : Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'IBANK AEGIS PROD DM', Name of 'IBANK AEGIS PROD DM'.
Error -1054932980 : Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'CURRENCY DIM', Name of 'CURRENCY DIM' was being processed.
Error -1054932979 : Errors in the OLAP storage engine: An error occurred while the 'DATE ENTERED' attribute of the 'CURRENCY DIM' dimension from the 'Aegis' database was being processed.
Error -1056899072 : The following system error occurred: Logon failure: unknown user name or bad password. .
Error -1055784860 : Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'IBANK AEGIS PROD DM', Name of 'IBANK AEGIS PROD DM'.
Error -1054932980 : Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'CURRENCY DIM', Name of 'CURRENCY DIM' was being processed.
Error -1054932979 : Errors in the OLAP storage engine: An error occurred while the 'EXCHANGE RATE' attribute of the 'CURRENCY DIM' dimension from the 'Aegis' database was being processed.
Deploy complete -- 16 errors, 0 warnings
========== Build: 1 succeeded or up-to-date, 0 failed, 0 skipped ==========
========== Deploy: 0 succeeded, 1 failed, 0 skipped ==========
First, try and see if you can connect to your SQL Server 2000 from the AS machine.
Log on to the AS machine under the credentials of your local user account and try connecting to your SQL Server machine.
The credential setting many customers choose for their Data Source object is ImpersonateServiceAccount.
Please note: once you have saved a user name and password as part of your connection string, you will have to re-submit the user name and password every time you update your data source.
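For reference, a SQL-authentication connection string for such a data source would look roughly like this sketch; the server, database, and login names are made up rather than taken from this thread:

Provider=SQLNCLI.1;Data Source=MySql2000Server;Initial Catalog=MySourceDB;User ID=olap_reader;Password=<re-enter each time the data source is edited>

Because the saved password is not scripted back out, it has to be typed in again whenever the data source is edited, as noted above.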
Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Willajo,
can you please let me know how you solved it? I am getting the same error.
Thanks a lot in advance...
errors processing cube
I'm facing some difficulties in trying to process a 3-dimensional cube.
Every time I try to process this cube it complains that "A member with key
"(something)" was found in the fact table but not found in the level
(dimension level's name) of the dimension (dimension name)".
Has anyone had a similar experience with this? I've checked that the fact table IDs
are all present in the dimension tables... what could be wrong? Please
help...
You need to reprocess your dimension. When a dimension is processed the data
in it is stored in a different data structure independent of the actual
dimension tables, and additions to the dimension tables won't be included in
the dimension unless you process it again.
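For illustration of what that reprocessing looks like (a sketch, not part of the original reply): on Analysis Services 2000 you would reprocess the dimension from Analysis Manager or a DTS Analysis Services Processing task, while on SSAS 2005 the same dimension refresh can be scripted as an XMLA command such as the one below, where the database and dimension IDs are placeholders:

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <!-- hypothetical IDs; use your own SSAS database and dimension -->
    <DatabaseID>MyOlapDb</DatabaseID>
    <DimensionID>MyDimension</DimensionID>
  </Object>
  <Type>ProcessUpdate</Type>
</Process>

ProcessUpdate picks up new and changed dimension members without requiring the cube's partitions to be fully reprocessed; the cube can then be processed to load the new fact rows.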
Jacco Schalkwijk
SQL Server MVP
"Nestor" <n3570r@.yahoo.com> wrote in message
news:ODIW9DN9EHA.2788@.TK2MSFTNGP15.phx.gbl...
> i'm facing some difficulties in trying to process a 3 dimensional cube.
> Everytime i try to process this cube it'll complain that "A member with
> key "(something)" was found in the fact table but not found in the level
> (dimension level's name) of the dimension (dimension name)
> anyone got similar experience in this? i've check that the fact table ids
> are all present in the dimension tables... what could be wrong? please
> help...
>
Errors in the OLAP storage engine:
I am new to Analysis Services. I created a cube using Analysis Services 2005 with 3 fact tables and five dimensions. All the dimensions processed successfully apart from one dimension table, which gave the following error: The attribute key cannot be found: Table: dbo_Severity, Column: SeverityCode, Value: 6.
When I try to deploy the cube, it also fails. Is it because of this dimension table in which the attribute key cannot be found?
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Object>
<DatabaseID>AnalysisServices</DatabaseID>
<CubeID>MSCPROJECTDSV</CubeID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
Processing Dimension 'Allergy' failed.
Start time: 6/14/2006 6:04:32 PM; End time: 6/14/2006 6:05:00 PM; Duration: 0:00:28
Processing Dimension Attribute '(All)' completed successfully.
Start time: 6/14/2006 6:04:32 PM; End time: 6/14/2006 6:04:32 PM; Duration: 0:00:00
Processing Dimension Attribute 'Allergy Name' completed successfully. 7 rows have been read.
Start time: 6/14/2006 6:04:32 PM; End time: 6/14/2006 6:04:41 PM; Duration: 0:00:09
SQL queries 1
SELECT
DISTINCT
[dbo_Allergy].[AllergyName] AS [dbo_AllergyAllergyName0_0]
FROM [dbo].[Allergy] AS [dbo_Allergy]
Processing Dimension Attribute 'Severity Name' completed successfully. 6 rows have been read.
Start time: 6/14/2006 6:04:32 PM; End time: 6/14/2006 6:04:41 PM; Duration: 0:00:09
SQL queries 1
SELECT
DISTINCT
[dbo_Severity].[SeverityName] AS [dbo_SeveritySeverityName0_0]
FROM [dbo].[Severity] AS [dbo_Severity]
Processing Dimension Attribute 'Severity' completed successfully. 6 rows have been read.
Start time: 6/14/2006 6:04:41 PM; End time: 6/14/2006 6:04:51 PM; Duration: 0:00:10
SQL queries 1
SELECT
DISTINCT
[dbo_Severity].[SeverityCode] AS [dbo_SeveritySeverityCode0_0],[dbo_Severity].[SeverityName] AS [dbo_SeveritySeverityName0_1]
FROM [dbo].[Severity] AS [dbo_Severity]
Processing Dimension Attribute 'Allergy' failed. 1 rows have been read.
Start time: 6/14/2006 6:04:51 PM; End time: 6/14/2006 6:05:00 PM; Duration: 0:00:09
SQL queries 1
SELECT
DISTINCT
[dbo_Allergy].[AllergyCode] AS [dbo_AllergyAllergyCode0_0],[dbo_Allergy].[AllergyName] AS [dbo_AllergyAllergyName0_1],[dbo_Allergy].[SeverityCode] AS [dbo_AllergySeverityCode0_2]
FROM [dbo].[Allergy] AS [dbo_Allergy]
Error Messages 2
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_Severity, Column: SeverityCode, Value: 6. Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: Allergy of Dimension: Allergy from Database: AnalysisServices, Record: 7.
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_Severity, Column: SeverityCode, Value: 6. Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: Allergy of Dimension: Allergy from Database: AnalysisServices, Record: 7. Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation. Errors in the OLAP storage engine: An error occurred while the 'Allergy' attribute of the 'Allergy' dimension from the 'AnalysisServices' database was being processed.
Errors and Warnings from Response
Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
Errors in the OLAP storage engine: An error occurred while the 'Allergy' attribute of the 'Allergy' dimension from the 'AnalysisServices' database was being processed.
Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_Severity, Column: SeverityCode, Value: 6.
Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: Allergy of Dimension: Allergy from Database: AnalysisServices, Record: 7.
Please help me out.
Regards,
Ronaldlee
Looks like the problem is in the relational database.
It looks like you have a record in the Allergy table with a value in the SeverityCode column that does not exist in the Severity table.
First try sending the SQL query AS sends for the Severity attribute.
Then send the SQL query for the Allergy attribute and see which value appears in the SeverityCode column but does not appear in the Severity table.
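Equivalently, a single query can list the orphaned keys directly. This is a sketch based on the table and column names shown in the processing log above (dbo.Allergy and dbo.Severity), not a query taken from the thread:

-- Allergy rows whose SeverityCode has no matching row in the Severity table
SELECT a.AllergyCode, a.AllergyName, a.SeverityCode
FROM dbo.Allergy AS a
LEFT JOIN dbo.Severity AS s
    ON s.SeverityCode = a.SeverityCode
WHERE s.SeverityCode IS NULL;

Any rows returned identify the missing attribute keys; either add the missing Severity rows or correct the Allergy rows, then reprocess the dimension.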
Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Thanx a Lot.
You are the man.
Ronald
|||I have made all the necessary changes.
I have three fact tables in the cube; two of them process successfully, but one does not execute successfully.
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Object>
<DatabaseID>AnalysisServices</DatabaseID>
<CubeID>MSCPROJECTDVS</CubeID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
Processing Cube 'MSCPROJECTDVS' failed.
Start time: 6/15/2006 12:31:24 PM; End time: 6/15/2006 12:31:25 PM; Duration: 0:00:01
Processing Measure Group 'Medication Allergy' completed successfully.
Start time: 6/15/2006 12:31:24 PM; End time: 6/15/2006 12:31:25 PM; Duration: 0:00:01
Processing Partition 'Medication Allergy' completed successfully.
Start time: 6/15/2006 12:31:25 PM; End time: 6/15/2006 12:31:25 PM; Duration: 0:00:00
SQL queries 1
SELECT [dbo_MedicationAllergy].[dbo_MedicationAllergy0_0] AS [dbo_MedicationAllergy0_0],[dbo_MedicationAllergy].[dbo_MedicationAllergyMedicineCode0_1] AS [dbo_MedicationAllergyMedicineCode0_1]
FROM
(
SELECT 1 AS [dbo_MedicationAllergy0_0],[MedicineCode] AS [dbo_MedicationAllergyMedicineCode0_1]
FROM [dbo].[MedicationAllergy]
)
AS [dbo_MedicationAllergy]
Processing Measure Group 'Prescription' failed.
Start time: 6/15/2006 12:31:24 PM; End time: 6/15/2006 12:31:25 PM; Duration: 0:00:01
Processing Partition 'Prescription' failed.
Start time: 6/15/2006 12:31:25 PM; End time: 6/15/2006 12:31:25 PM; Duration: 0:00:00
SQL queries 1
SELECT [dbo_Prescription].[Quantity] AS [dbo_PrescriptionQuantity0_0],[dbo_Prescription].[Price] AS [dbo_PrescriptionPrice0_1],[dbo_Prescription].[MedicineCode] AS [dbo_PrescriptionMedicineCode0_2],[dbo_Prescription].[PatientId] AS [dbo_PrescriptionPatientId0_3],[dbo_Prescription].[Date] AS [dbo_PrescriptionDate0_4]
FROM [dbo].[Prescription] AS [dbo_Prescription]
Error Messages 2
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_Prescription, Column: PatientId, Value: 10001; Table: dbo_Prescription, Column: Date, Value: 2/9/2000. Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: PatientId - Date of Dimension: Return Visit from Database: AnalysisServices, Cube: MSCPROJECTDVS, Measure Group: Prescription, Partition: Prescription, Record: 1.
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_Prescription, Column: PatientId, Value: 10001; Table: dbo_Prescription, Column: Date, Value: 2/9/2000. Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: PatientId - Date of Dimension: Return Visit from Database: AnalysisServices, Cube: MSCPROJECTDVS, Measure Group: Prescription, Partition: Prescription, Record: 1. Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation. Errors in the OLAP storage engine: An error occurred while processing the 'Prescription' partition of the 'Prescription' measure group for the 'MSCPROJECTDVS' cube from the AnalysisServices database.
Processing Measure Group 'Patient Diagnosis' completed successfully.
Start time: 6/15/2006 12:31:24 PM; End time: 6/15/2006 12:31:25 PM; Duration: 0:00:01
Processing Partition 'Patient Diagnosis' completed successfully.
Start time: 6/15/2006 12:31:25 PM; End time: 6/15/2006 12:31:25 PM; Duration: 0:00:00
SQL queries 1
SELECT [dbo_PatientDiagnosis].[dbo_PatientDiagnosis0_0] AS [dbo_PatientDiagnosis0_0],[dbo_PatientDiagnosis].[dbo_PatientDiagnosisDiagnosisCode0_1] AS [dbo_PatientDiagnosisDiagnosisCode0_1],[dbo_PatientDiagnosis].[dbo_PatientDiagnosisPatientId0_2] AS [dbo_PatientDiagnosisPatientId0_2],[dbo_Diagnosis_3].[MedicineCode] AS [dbo_DiagnosisMedicineCode2_0]
FROM
(
SELECT 1 AS [dbo_PatientDiagnosis0_0],[DiagnosisCode] AS [dbo_PatientDiagnosisDiagnosisCode0_1],[PatientId] AS [dbo_PatientDiagnosisPatientId0_2]
FROM [dbo].[PatientDiagnosis]
)
AS [dbo_PatientDiagnosis],[dbo].[Diagnosis] AS [dbo_Diagnosis_3]
WHERE
(
(
[dbo_PatientDiagnosis].[dbo_PatientDiagnosisDiagnosisCode0_1] = [dbo_Diagnosis_3].[DiagnosisCode]
)
)
Errors and Warnings from Response
Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
Errors in the OLAP storage engine: An error occurred while processing the 'Prescription' partition of the 'Prescription' measure group for the 'MSCPROJECTDVS' cube from the AnalysisServices database.
Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_Prescription, Column: PatientId, Value: 10001; Table: dbo_Prescription, Column: Date, Value: 2/9/2000.
Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: PatientId - Date of Dimension: Return Visit from Database: AnalysisServices, Cube: MSCPROJECTDVS, Measure Group: Prescription, Partition: Prescription, Record: 1.
Is this in any way connected to my attributes in the relational database tables?
|||I have solved the problem, though I am still getting some errors.
Now it is the MedicationAllergy Fact Table giving me errors.
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Object>
<DatabaseID>AnalysisServices</DatabaseID>
<CubeID>MSCPROJECTDVS</CubeID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
Processing Cube 'MSCPROJECTDVS' failed.
Start time: 6/15/2006 12:51:08 PM; End time: 6/15/2006 12:51:18 PM; Duration: 0:00:10
Processing Measure Group 'Prescription' completed successfully.
Start time: 6/15/2006 12:51:08 PM; End time: 6/15/2006 12:51:18 PM; Duration: 0:00:10
Processing Partition 'Prescription' completed successfully.
Start time: 6/15/2006 12:51:08 PM; End time: 6/15/2006 12:51:18 PM; Duration: 0:00:10
Processing Measure Group 'Medication Allergy' failed.
Start time: 6/15/2006 12:51:08 PM; End time: 6/15/2006 12:51:18 PM; Duration: 0:00:10
Processing Partition 'Medication Allergy' failed. 1 rows have been read.
Start time: 6/15/2006 12:51:08 PM; End time: 6/15/2006 12:51:18 PM; Duration: 0:00:10
SQL queries 1
SELECT [dbo_MedicationAllergy].[dbo_MedicationAllergy0_0] AS [dbo_MedicationAllergy0_0],[dbo_MedicationAllergy].[dbo_MedicationAllergyMedicineCode0_1] AS [dbo_MedicationAllergyMedicineCode0_1]
FROM
(
SELECT 1 AS [dbo_MedicationAllergy0_0],[MedicineCode] AS [dbo_MedicationAllergyMedicineCode0_1]
FROM [dbo].[MedicationAllergy]
)
AS [dbo_MedicationAllergy]
Error Messages 2
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_MedicationAllergy, Column: MedicineCode, Value: 7. Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: Allergy of Dimension: Allergy from Database: AnalysisServices, Cube: MSCPROJECTDVS, Measure Group: Medication Allergy, Partition: Medication Allergy, Record: 6.
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_MedicationAllergy, Column: MedicineCode, Value: 7. Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: Allergy of Dimension: Allergy from Database: AnalysisServices, Cube: MSCPROJECTDVS, Measure Group: Medication Allergy, Partition: Medication Allergy, Record: 6. Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation. Errors in the OLAP storage engine: An error occurred while processing the 'Medication Allergy' partition of the 'Medication Allergy' measure group for the 'MSCPROJECTDVS' cube from the AnalysisServices database.
Processing Measure Group 'Patient Diagnosis' completed successfully.
Start time: 6/15/2006 12:51:08 PM; End time: 6/15/2006 12:51:18 PM; Duration: 0:00:10
Processing Partition 'Patient Diagnosis' completed successfully.
Start time: 6/15/2006 12:51:08 PM; End time: 6/15/2006 12:51:18 PM; Duration: 0:00:10
Errors and Warnings from Response
Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
Errors in the OLAP storage engine: An error occurred while processing the 'Medication Allergy' partition of the 'Medication Allergy' measure group for the 'MSCPROJECTDVS' cube from the AnalysisServices database.
Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
Errors in the high-level relational engine. The database operation was cancelled because of an earlier failure.
Errors in the OLAP storage engine: An error occurred while processing the 'Patient Diagnosis' partition of the 'Patient Diagnosis' measure group for the 'MSCPROJECTDVS' cube from the AnalysisServices database.
Errors in the OLAP storage engine: An error occurred while processing the 'Prescription' partition of the 'Prescription' measure group for the 'MSCPROJECTDVS' cube from the AnalysisServices database.
Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_MedicationAllergy, Column: MedicineCode, Value: 7.
Errors in the OLAP storage engine: The record was skipped because the attribute key was not found. Attribute: Allergy of Dimension: Allergy from Database: AnalysisServices, Cube: MSCPROJECTDVS, Measure Group: Medication Allergy, Partition: Medication Allergy, Record: 6.
Ronald
|||In your fact table you have keys that don't appear in the dimension table.
I think the MedicineCode column in the dbo_MedicationAllergy fact table has keys that do not exist in the Allergy dimension.
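A quick way to see which fact keys are missing is a query along these lines. This is only a sketch: it assumes the Allergy dimension is sourced from a dbo.Allergy table keyed on MedicineCode, so adjust the names to your schema.
-- List MedicineCode values in the fact table that have no matching dimension member.
-- dbo.Allergy is an assumed dimension source table; substitute your actual one.
SELECT DISTINCT f.MedicineCode
FROM dbo.MedicationAllergy AS f
LEFT JOIN dbo.Allergy AS d
    ON d.MedicineCode = f.MedicineCode
WHERE d.MedicineCode IS NULL;
Any values returned (such as the 7 in the error above) need to be added to the dimension's source table, or filtered out of the fact table, before the measure group will process cleanly.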
Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Errors in the OLAP storage engine
Dear all
I have one table, 'Sales', used as a dimension in a cube for an analysis. That table contains a column 'DateOfOrder'. I want to break the analysis down by Year, Quarter and Month using the DateOfOrder column.
What do I have to do?
Please help ...
Hi,
You can create a DateOfOrder attribute with composite key columns Year, Quarter and Month, and define the key columns for this attribute accordingly (in this case, you could define the key columns as Year, Quarter and Month).
Regards,
|||Thanks a lot, Helen.
But please help me: how can I create an attribute for the order date in the 'Sales' dimension?
|||Go to the dimension table in the data source view (if you are using Analysis Services 2005) and add a new named calculation (right-click on the dimension table). You can use T-SQL date functions like YEAR(MyDateColumn), DATEPART(qq, MyDateColumn) and DATEPART(mm, MyDateColumn) to create these levels.
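For instance, the three named calculations added to the Sales table in the DSV would evaluate to something like the query below. This is only a sketch; the dbo schema is an assumption, and DateOfOrder is the column mentioned earlier in the thread.
-- Roughly what the named calculations compute for each Sales row:
SELECT  YEAR(DateOfOrder)         AS OrderYear,     -- e.g. 2006
        DATEPART(qq, DateOfOrder) AS OrderQuarter,  -- 1 to 4
        DATEPART(mm, DateOfOrder) AS OrderMonth     -- 1 to 12
FROM    dbo.Sales;
These three columns can then back the Year, Quarter and Month attributes of the dimension.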
HTH
Thomas Ivarsson
|||The name of the table, 'Sales', does not sound like a dimension table. Did you mean that 'Sales' is your fact table, i.e. the source of the measure group? Which other columns in the Sales table would be the source of attributes in the dimension?
Supposing my suspicion is right and Sales is your fact table, then...
The easiest way is to use a server-generated time dimension. Start creating a new dimension; the dimension wizard will show up. Select "Build the dimension without a data source" and navigate to the next page of the wizard. Select "Server time dimension". After that, follow the wizard to specify which attributes your dimension will have. Once you have your time dimension, you need to relate it to your measure group. Go to the Dimension Usage panel of the Cube Designer and click the cell at the intersection of your generated time dimension and the Sales measure group. In the dialog box that shows up, select a Regular relationship and select the DateOfOrder column as the measure group column.
If my guess is not right and you indeed need a dimension built from the Sales table, then follow the suggestions you already got.
|||Sir,
Many thanks to you, sir.
With regards
Polachan
|||Error 4
When I am deploying the project I get the following error. Please help me, sir.
Errors in the OLAP storage engine: An error occurred while processing the 'Product Tran Header' partition of the 'Product Tran Header' measure group for the 'Test' cube from the productreport database. 0 0
I did the following steps:
1. Added a new measure, selecting the new source table.
2. Selected one column, dateoforder.
3. Added a new dimension without using a data source.
4. Selected the server time dimension.
5. Selected Year/Month/Quarter/Date.
6. Selected fiscal year.
7. Selected Dimension Usage in the cube design.
8. Selected the time dimension with a regular relation and the granularity attribute set to Date.
9. Selected the measure group table 'Product tran header' for the new measure group.
10. Selected dateoforder as the measure group column.
After that, while deploying, the above-mentioned error comes up.
Please help
|||There are two things. Did I understand right that in step 2 you selected the [dateoforder] column as the source of the measure? What is your measure? Is it a count of transactions? If you measure something like [Order Amount], then you need to drag the [Order Amount] column onto the measure group tree view, not [dateoforder]. The [dateoforder] column will just be used later in the Dimension Usage step, which you did correctly in step 10.
If you open your server-generated dimension in the designer and select the tree node for the dimension itself, you can browse the properties of that dimension. One of the properties (I forget its exact name) describes the time range of the dimension, for example 1/1/2000 - 12/31/2005. Please make sure that this range covers all the instances of [dateoforder] in your fact table.
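A quick way to check that range is to look at the earliest and latest order dates in the fact table. This is only a sketch; dbo.ProductTranHeader is a hypothetical name standing in for whatever table feeds the 'Product Tran Header' measure group.
-- Find the date range that the server time dimension must cover.
-- Replace dbo.ProductTranHeader with the actual source table of your measure group.
SELECT MIN(dateoforder) AS FirstOrderDate,
       MAX(dateoforder) AS LastOrderDate
FROM   dbo.ProductTranHeader;
If either value falls outside the range defined on the server time dimension, extend the range before reprocessing.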
|||Dear Sir,
I will explain what I need.
I have one table, 'Transactioheader', with the following columns:
1. id
2. DateofTransaction
Another table with the following columns:
1. headerid
2. qty
3. productcode
4. areacode
I want to get the report from SQL Server Analysis Services in the following format:
Area      Quarter1 Qty   Quarter2 Qty   Quarter3 Qty
Omagh     200            10             100
Belfast   10             05             04
Quarter 1, Quarter 2 and Quarter 3 have to be derived from the DateofTransaction column by mapping it to the server time dimension. Please help me, sir: in this situation, what do I have to do to use the server time dimension?
Thanks a lot for your attention to my queries.
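For what it's worth, the quarter-by-area layout above corresponds to a plain T-SQL query like the sketch below. The names dbo.Transactioheader and dbo.TransactionDetail, and the id/headerid join, are assumptions based on the columns listed; only qty is aggregated.
-- Relational cross-check of the desired report: qty per area per quarter.
SELECT  d.areacode AS Area,
        SUM(CASE WHEN DATEPART(qq, h.DateofTransaction) = 1 THEN d.qty ELSE 0 END) AS Quarter1_Qty,
        SUM(CASE WHEN DATEPART(qq, h.DateofTransaction) = 2 THEN d.qty ELSE 0 END) AS Quarter2_Qty,
        SUM(CASE WHEN DATEPART(qq, h.DateofTransaction) = 3 THEN d.qty ELSE 0 END) AS Quarter3_Qty
FROM    dbo.Transactioheader AS h
JOIN    dbo.TransactionDetail AS d
        ON d.headerid = h.id
GROUP BY d.areacode;
In the cube, the same numbers come from relating DateofTransaction to the server time dimension (as described earlier) and browsing qty by Area and Quarter.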
Errors in the OLAP storage engine
One of the dimensions in my cube is Region. When I try to browse this cube with the Region dimension, I get the following error:
"Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation"
When I browse the Region dimension alone it displays data without any error, and when I browse the cube with other dimensions there are no errors either.
Does anybody have any idea about this?
Thanks,
Nasir
Errors in the metadata manager
Hello,
I'm trying to process a cube and I'm getting the following error message. I'm not working in Analysis Services Project 4; I created another project. I deleted Analysis Services Project 4, and for every project I try to execute I get this error message.
"Error 1 File system error: Error opening file; \\?\C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data\Analysis Services Project4.0.db\TBLE~MC.0.cub.xml is not a disk file or file is not accessible. Errors in the metadata manager. An error occurred when loading the TBLE~MC cube, from the file, '\\?\C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data\Analysis Services Project4.0.db\TBLE~MC.0.cub.xml'. File system error: The following error occurred while writing to the file 'MSSQLServerOLAPService': The event log file is full. . 0 0 "
Any ideas how to solve this problem?
Thank you in advance.
Looks like the data folder is corrupted. Could you please try the following steps:
1. Stop the Analysis Services service
2. Delete everything in your C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data\%YOUR_PROJECTNAME% folder
3. Restart the Analysis Services service
4. Re-deploy your project
Hope this helps,
Artur
|||I already tried this, but without stopping Analysis Services, and it didn't work.
I will try it again... thanks for the help :)
|||You should really stop Analysis Server :)
Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Sunday, March 11, 2012
Errors in the high-level relational engine
Hi members,
Could someone help me and explain why I am getting this kind of error? My cube processes successfully, but with errors.
Thanx
Ronald
Errors in the high-level relational engine. The data source view does not contain a definition for the 'dbo_Prescription' table or view. The Source property may not have been set.
See my reply to your previous post.
Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Errors in the high-level relational engine
I'm getting the following error back when trying to process a new measure group into an existing cube in 2005...
"Errors in the high-level relational engine. The data source view does not contain a definition for the 'dbo_view_FACT_CPR_Hours' table or view. The Source property may not have been set."
I'm in the process of migrating my 2000 objects to 2005. I've basically copied the 2000 relational db over to a 2005 SQL Server and am going through and recreating the cubes from scratch. I was having this problem with another fact table and was able to remedy it by creating a named query rather than hitting the view directly (even though the named query pulled the view straight). This hasn't worked for all of the tables giving me problems, though. I know it's not a permission issue because I can hit some, but not all, of the tables. The bizarre thing is, I can browse the data in the DSV, so I know it is recognizing the data source; it just isn't recognizing it in the cube.
When I try to go to the partition tab in the cube builder it throws an "object block not set" error and crashes, or just crashes outright, so I'm pretty sure it has something to do with this.
Is anyone else experiencing this problem? Help!
It looks like Analysis Services cannot find the DSV table 'dbo_view_FACT_CPR_Hours'. One of the Analysis Services objects is based on this table.
It is very possible that you can query tables in the DSV and see the data, but if one of the dimension attributes references an old table or view, you will get an error like the one you are seeing above.
There is not much you can do but carefully go through the cube design and clean up problem after problem. What can help you is this: script your database to XMLA and then search for the 'dbo_view_FACT_CPR_Hours' table. This way you should be able to find the error faster.
Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
That's the problem, though: there is a 'dbo_view_FACT_CPR_Hours' table in the DSV. I can't understand why AS can't find it when processing the cube. I built the measure group in the cube referencing that source in the DSV. The cube will process fine without that measure group, but as soon as I add it in again and try to process, it bombs.
I'll continue to investigate the code on my end. I'll let you know if I come up with something.
Thanks for your help.
errors in creating a cube by AMO
Hello
I am creating a cube in C# with AMO. I created a data source view with two tables and one relation; one table is the dimension and the other is the fact. Then I added a dimension to the database. This dimension has two attributes, one key and one regular. The key attribute has an AttributeRelationship to the other attribute. After that I updated the database and everything is OK. Then I want to add a cube. First I add it:
Cube cube = db.Cubes.Add("MyCube");
cube.Dimensions.Add("T Deal");
cube.Validate(errors);
At this point errors.Count is still 0. Then I create a MeasureGroup and add a Measure to it:
MeasureGroup measureGroup = cube.MeasureGroups.Add("Vw Cube Fact");
Microsoft.AnalysisServices.Measure measure = measureGroup.Measures.Add("Vw Cube Fact");
measure.Source = new DataItem("vw_CubeFact", "RES Notional");
measureGroup.Validate(errors);
but here there is an error:
No dimension relationships exist. The measure group 'Vw Cube Fact' is not related to any dimensions.
After that I try to add a dimension by
RegularMeasureGroupDimension rmgd = measureGroup.Dimensions.Add("T Deal");
rmgd.Validate(errors);
But the number of errors increased to two, and the second one is:
Granularity is not defined.
Can anybody please help me with how I can fix this?
Thanks in advance
Mohsen
I made a mistake in setting the AttributeRelationships.
A very good reference is http://msdn2.microsoft.com/en-us/library/ms345091.aspx
Wednesday, March 7, 2012
Error1Memory error: The operation cannot be completed because the memory quota estimate exceeds
Error 1 Memory error: The operation cannot be completed because the memory quota estimate exceeds the available system memory. 0 0
It runs on Microsoft Windows NT 5.2 (3790) with the June SQL 2005 Developer release. It has 4 GB of RAM.
This error comes up when trying to process cubes of all sizes.
Hi,
I am receiving the same error when I try to process a specific measure group. This measure group has 13 partitions, each partition containing approximately 26 million rows of data.
I have read this Microsoft article, http://support.microsoft.com/kb/914595/, which details the same error message for processing large dimensions; the solution there is to apply Service Pack 1. I've talked to our DBA and he confirmed that we have SP1 applied. My problem is slightly different since I am processing measure groups.
The exact error message that I am receiving is [Analysis Services Execute DDL Task] Error: Memory error: The operation cannot be completed because the memory quota estimate (17010MB) exceeds the available system memory (13308MB).
Any advice would be much appreciated.
Thanks.
Dev88
|||Try to search this forum for previous posts with similar errors reported. For instance:
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=334677&SiteID=1
The problem is that your machine doesn't have enough memory for Analysis Server to complete the processing operation. Before it starts processing, Analysis Server tries to estimate how much memory it needs to complete the operation. The error is raised if there is a shortage of memory.
Things to try:
1. Process objects one by one, not in parallel in a single transaction.
2. Try and see how much memory you have on your machine and whether you can free some of it for AS to complete the processing.
Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Hi Edward,
Thank you for your reply. I had already looked at that other post and tried to process the objects one by one, with no success. The error I received still stated that the memory quota estimate of 17 GB exceeded the available memory of 13 GB. I.e., regardless of whether I processed objects one at a time or in parallel, Analysis Services still had the same memory quota estimate of 17 GB.
In any case, we have a 64-bit machine with 20 GB of memory. In order to process these partitions I had to change some of the Analysis Services properties. I changed OLAP \ Process \ BufferMemoryLimit, OLAP \ Process Plan \ MemoryLimit, and Memory \ TotalMemoryLimit all to 90 percent. All of these properties have to do with the amount of memory that Analysis Services is allowed to use when processing dimensions and partitions, specifically MemoryLimit and BufferMemoryLimit. Only then was I able to process these partitions.
This solution from Microsoft, http://support.microsoft.com/kb/914595/, deals with the same error message for processing dimensions and describes how Analysis Services incorrectly estimates the amount of memory needed to process the dimension. Is this the same for processing measure groups? I have processed cubes with 4 partitions, each containing more than 40 million rows of data. The most recent cube that I processed contained 13 partitions with approximately 26 million rows per partition, and only then did I receive the error. As I said earlier, I increased the amount of memory available to AS and only then was I able to process these partitions. My concern is that when I begin to add more data to my cube, I will receive this error again.
So here are my questions:
1. Is this a problem with Analysis Services incorrectly estimating the memory quota?
2. Is there any way to change how AS estimates its memory usage?
3. Is the only solution to purchase more memory?
Any advice would be much appreciated.
Thanks.
|||Some of the properties you suggest changing would have little effect on processing. For some reason I cannot get to the KB article you refer to.
The one property that probably had the most effect in your situation was BufferMemoryLimit. You can read about it and about other properties in http://download.microsoft.com/download/4/f/8/4f8f2dc9-a9a7-4b68-98cb-163482c95e0b/SSASProperties.doc. I am not sure I would suggest using this option in your case.
As for the options you have:
Let me try to rephrase #1 in my list above: process the smallest possible objects one by one...
In your case, processing the entire measure group internally gets resolved into processing all of its partitions in parallel. Of course these partition processing jobs come at the same time and compete for memory.
You should try to create the processing command for the partitions explicitly and make sure you control how many partitions are processed in parallel.
Analysis Server is not perfect at estimating the amount of memory needed for processing, but its estimate is pretty close. And the estimation logic is there for a reason: it prevents you from processing for hours before running into a shortage of memory and aborting the processing.
In your case, I don't think you need more memory.
Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
|||The document that you suggested is the document that I read prior to modifying my AS properties. The property that actually made it possible to process the partition was an undocumented property, OLAP \ Process Plan \ MemoryLimit. I agree that increasing the amount of memory available was probably not the best solution, but it seemed to be the only one that worked. I'm not sure what you mean by 'create the processing command for the partitions explicitly to control the number of partitions being processed in parallel', but the steps I am following to control how many partitions are processed in parallel are manual.
In SQL Server Management Studio, I browse down to the partitions directory of my cube. I then right-click the partition which I would like to process and click Process. By processing this single partition, does AS perform its estimate as if all of the partitions would be processed in parallel? Or is there another method I can use to control how many partitions are being processed?
Once again I appreciate all your help and I look forward to hearing your response.
Thanks,
Dev88
|||Try the following:
Navigate in SQL Server Management Studio to the Partitions node in your measure group. The "Summary" page on the right will list all of your partitions.
Multi-select the partitions you would like to process and launch the Processing dialog using the right-click menu on the summary page. At this point you will see multiple partitions in the processing dialog window.
Click on the "Change Settings" button and you will see the dialog window where you can choose the processing order. You can select how many partitions you would like to process in parallel.
Closing the "Change Settings" dialog gets you back to the Processing dialog; here you click on the Script button above to script the processing command into an SSMS window.
Take a look at the generated script. It has all of your partitions listed. The MaxParallel attribute of the Parallel element specifies how many partitions will be processed in parallel.
Here is a whitepaper about processing: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/sql2k5_asprocarch.asp
Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Hello, I tried what you suggested. I also reviewed that white paper again, since I had previously read it a couple of months ago.
I selected only one partition. On the page after I click "Change Settings" I selected one partition to process in parallel. After closing that dialog to get back to the Processing dialog, I click on the Script button to open the XMLA script in SSMS. Here is the script:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<ErrorConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<KeyErrorLimit>-1</KeyErrorLimit>
<KeyNotFound>IgnoreError</KeyNotFound>
<NullKeyNotAllowed>IgnoreError</NullKeyNotAllowed>
</ErrorConfiguration>
<Parallel MaxParallel="1">
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Object>
<DatabaseID>Cube009</DatabaseID>
<CubeID>B Data</CubeID>
<MeasureGroupID>P Warehouse</MeasureGroupID>
<PartitionID>P Warehouse</PartitionID>
</Object>
<Type>ProcessFull</Type>
<WriteBackTableCreation>UseExisting</WriteBackTableCreation>
</Process>
</Parallel>
</Batch>
As you can see, MaxParallel="1" in the Parallel element. I am still receiving the error:
<return xmlns="urn:schemas-microsoft-com:xml-analysis">
<results xmlns="http://schemas.microsoft.com/analysisservices/2003/xmla-multipleresults">
<root xmlns="urn:schemas-microsoft-com:xml-analysis:empty">
<Exception xmlns="urn:schemas-microsoft-com:xml-analysis:exception" />
<Messages xmlns="urn:schemas-microsoft-com:xml-analysis:exception">
<Error ErrorCode="3238199299" Description="Memory error: The operation cannot be completed because the memory quota estimate (17017MB) exceeds the available system memory (13308MB). " Source="Microsoft SQL Server 2005 Analysis Services" HelpFile="" />
<Error ErrorCode="3240034325" Description="Errors in the OLAP storage engine: An error occurred while processing the indexes for the P Warehouse 1 partition of the P Warehouse measure group of the B Data cube from the Cube009 database." Source="Microsoft SQL Server 2005 Analysis Services" HelpFile="" />
</Messages>
</root>
</results>
</return>
Is there anything else that I can try?
Thanks.
|||At this moment I am thinking either your partition is too big. You have 20 GB of memory available, and I can hardly imagine a partition size that cannot be processed with that much memory, so try partitioning your data further.
It is also possible you've changed some server properties and now the server's estimation logic is skewed. Go back to the server defaults.
And lastly, it is possible there is a bug in Analysis Server, so please try contacting customer service.
Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.