Showing posts with label pull. Show all posts

Thursday, March 29, 2012

ETL / Data mart / Cube question

We are in the process of using the new Kimball toolkit book as our guide to creating another DM. Since we only have one data source, we think we need to pull all the tables into the staging area. At first we wanted to pull in only the minimal data needed to populate the Dimension/Fact tables, plus any information needed to derive columns.
However, the business area wants a complete snapshot of ALL the data for this given subject matter. Their reason is that sometimes our CUBES don't have all the information they want, and they want to be able to query the data mart tables against the staging area tables using SQL or Access. Since our data mart load time is the first Saturday of the month, this argues in favor of their request. If the load process ran every day or once a week, it might be better to just query across the data mart and the reporting (replicated production) database; with the 1st Saturday approach, the data mart tables would be older than the information they were linking against on the replicated report DB.
Questions:
1. Is this the correct approach?
2. According to a Microsoft diagram, the load goes from Source to Staging to the OLAP DB - which is really just another relational DB, but with the tables modeled as Dimension/Fact tables, right?
3. Our existing data mart, which everyone mostly accesses through CUBES, has a big manual step. After all the ETL happens and the Dim/Fact tables have been updated, we have to open up SSAS and process the cubes on the first business day after the 1st Saturday to get them refreshed - can't this be automated within SSIS?
4. There have been some major production changes to the data source for our existing production data mart. All that ETL was done on SQL2k with stored procs, and DTS was used only as the policeman to control the flow. All the SCD processes were done and are working fine using SPs. We now have to make changes and are faced with managing two completely different systems - one with the ETL using the old method and the second using SSIS. Some of the CUBES from the original data mart are going to be combined with the new cubes we are creating in our new system. To make it clearer: we have, for example, a Client dimension and a Client fact table that were first refreshed monthly with the old ETL, and our new subject matter is now also going to affect that same Client dimension and Client fact table created in the first project. It appears the old production changes may take many weeks to adjust using the old ETL methods - so it begs the question: since all the ETL is working fine with DTS as the flow master, is it perhaps better to get it all working with SSIS calling all the SPs, and incorporate all our new ETL within SSIS using the new features SSIS can offer?
5. When adding a 3rd subject-matter data mart, is it a 3rd ETL package or just an enhancement of the one we have?
____________________
Joe Horton
SQL 2005 Business Intelligence
Aging & Disability Services Administration
I think that for 1, 2 and 3 you're right - and you can schedule cube
processing within SSIS (there is a specific task to do that).
For 4 and 5 it's hard to say - it depends on the cost of
implementation and the cost of maintenance... SSIS packages are easier
to maintain in the long term, but you could have a higher cost in the
short term for the initial design. Mixing SSIS and SPs is certainly
feasible, even if it makes the solution harder to manage, and
performance can suffer compared to a native SSIS solution. Anyway, it
all depends on other constraints (volume of data, development time
available, and so on).
Marco Russo
http://www.sqlbi.eu
http://www.sqljunkies.com/weblog/sqlbi
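To make the "specific task" concrete: a hedged sketch of the XMLA Process command that the SSAS processing tasks in SSIS (or a SQL Agent job step of type SQL Server Analysis Services Command) submit. The DatabaseID and CubeID below are placeholders, not real object names from this thread.

```xml
<!-- Hypothetical IDs: substitute your own SSAS database and cube identifiers. -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyDataMart</DatabaseID>
    <CubeID>ClientCube</CubeID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```

Running something like this as the last step of the monthly SSIS package would remove the manual SSAS step described in question 3.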
On Feb 1, 6:16 pm, "Joe" <hortoris...@.gmail.dot.com> wrote:
|||Hi Joe,
I'm involved in a Data Warehouse project. The client is already invested in SQL
Server 2005.
Coming directly to the point: you must have set up server(s) for your data
warehouse. I wanted to know whether you were involved in Server Sizing / Data
Sizing. If yes, can you please guide me as to which method / template you used
to size your data and, based on it, what hardware (with configuration) you
suggested / decided on?
Please help - this is very critical and urgent!
Regards,
Bharat Tamhankar
"Joe" wrote:

|||I'm sorry - the existing architecture was in place before I arrived, and as
my expertise is in coding, not hardware, I wouldn't have been much help in
that area anyhow.
"Bharat Tamhankar" <BharatTamhankar@.discussions.microsoft.com> wrote in
message news:2B8F4494-BADC-4F47-86BC-383C5C07F0E6@.microsoft.com...[vbcol=seagreen]
> Hi Joe,
> I'm involved in a Data Warehouse project. Client is already invested in
> SQL
> Server 2005.
> Directly coming to the point..
> You must have setup server(s) for your data warehouse. I wanted to know
> whether you were involved in Server Sizing / Data Sizing. If yes, then can
> you please guide me as to which method / template did you use to size your
> data and based on it what hardware (with configuration) did you suggest /
> decide?
> Please help this is very critical and urgent!
> Regards,
> Bharat Tamhankar
>
> "Joe" wrote:

Sunday, February 26, 2012

Error: The precision must be between 1 and 38.

Hi All

I am trying to pull data from Oracle to SQL Server, but if I use an OLE DB Source then I get this error:

Error at Data Flow Task [DTS.Pipeline]: The "output column "CUST_ID" (590)" has a precision that is not valid. The precision must be between 1 and 38.


ADDITIONAL INFORMATION:

Exception from HRESULT: 0xC0204018 (Microsoft.SqlServer.DTSPipelineWrap)


The only solution I found is to use a DataReader Source.

But if I use a DataReader Source everything works fine - I mean I am able to see the records and convert them to the desired data type (using a Data Conversion component).

My question is what component I should use as the Destination, because if I use an OLE DB Destination I get a red cross on the component, although I can map all the columns.....

I believe you need to convert that column to have a precision of 1 or greater in the query that you use to get at the data. Search this forum for examples. You're not the first with this issue.|||

I fixed the error I was getting with the DataReader Source. Everything is OK at design time, but I get this error at the source at runtime:

[DataReader Source [1]] Error: System.Data.OleDb.OleDbException: Oracle error occurred, but error message could not be retrieved from Oracle.
   at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection)
   at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
   at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
   at System.Data.OleDb.OleDbConnection.Open()
   at Microsoft.SqlServer.Dts.Runtime.ManagedHelper.GetManagedConnection(String assemblyQualifiedName, String connStr, Object transaction)
   at Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSConnectionManager90.AcquireConnection(Object pTransaction)
   at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.AcquireConnections(Object transaction)
   at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostAcquireConnections(IDTSManagedComponentWrapper90 wrapper, Object transaction)

|||Try your query in SQL*Plus or some other Oracle client to see if it errors out there as well.|||

I've dealt a lot with this error. So far I've found two solutions:

1) Cast the column to NUMBER(precision, scale) in the Oracle select, e.g. select CAST(columnName as NUMBER(10, 3)) as columnName from ...

2) Cast to string in the Oracle select, e.g. select TO_CHAR(columnName) as columnName from ...

For more info, check out this thread: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1432727&SiteID=1

Another common problem when selecting from Oracle is getting the error "accessor is not a parameter accessor", which you get when the precision of a calculation in the select exceeds 38 - for example, when you divide two columns. ADO cannot handle the same amount of precision that the Oracle select can handle. To overcome this problem, you cast the column (or the division expression) to char (TO_CHAR(....)) and then specify precision and scale (preferred) when you do the insert, or substring the result to fit the target column.

Code (VB.NET): Dim para5 As New OleDb.OleDbParameter("myParamName", OleDb.OleDbType.Numeric)
para5.Value = myReader.Item("myColumn")
para5.Scale = CByte(10)
para5.Precision = CByte(22)

More about "accessor is not a parameter accessor" (only one Google hit, and it is cached):

http://www.google.com/search?q=cache:ZVTf6GErxyIJ:www.dotnetforums.com/archive/index.php/t-398.html+Conversion+failed+because+the+Decimal+data+value+overflowed+the+type+specified+for+the+Decimal+value+part+in+the+consumer's+buffer.&hl=sv&strip=1
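As a side note on the 38-digit limit itself (a generic Python sketch using the standard decimal module - nothing SSIS- or Oracle-specific): a fixed-precision decimal context behaves like a NUMERIC(38) buffer, so values within the declared precision round-trip unchanged, while wider values get silently rounded and can no longer be represented exactly.

```python
from decimal import Decimal, Context

# A context mimicking a 38-significant-digit NUMERIC buffer.
ctx = Context(prec=38)

fits = Decimal("123456789012345678901234567890123456")          # 36 digits
too_wide = Decimal("1234567890123456789012345678901234567890")  # 40 digits

# Within the declared precision, the value round-trips unchanged.
assert ctx.plus(fits) == fits

# Beyond it, the value is rounded to 38 significant digits, so the
# original is no longer representable - the analogue of a source column
# whose reported precision falls outside the pipeline's 1..38 range.
assert ctx.plus(too_wide) != too_wide
print(ctx.plus(too_wide))  # 1.2345678901234567890123456789012345679E+39
```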

|||

I don't know what's wrong - this is driving me crazy.... This package was working fine till last week, and now it is giving this error:

[Source-AP [1]] Error: The AcquireConnection method call to the connection manager "MS_OLEDB" failed with error code 0xC0202009.
[DTS.Pipeline] Error: component "Source-AP" (1) failed validation and returned error code 0xC020801C.

I am able to see the data preview, column mapping, etc.


Wednesday, February 15, 2012

Error: Subreport could not be shown

I have a data dictionary report. It lists table & column definitions, then
has a subreport in the group footer to pull up dependent objects (the only
way I could get more than the first dependent object to list). The report
has parameters where you can choose to look at All Tables or individual
tables in the DB. It passes this param (@tblName) to the subreport.
When I preview individual tables, the subreport works great! When I preview
"All Tables", the subreport works on some tables and then shows that error
after other tables. Even tables that work fine individually can show this
error message.
The data set for the subreport is called TableDepends and uses the following
code:
---
Declare @objid int

If @tblName <> 'All Tables'
Begin
    Set @objid = Object_ID(@tblName)
    If Exists (Select * From sysdepends Where depid = @objid)
    Begin
        Select Distinct
            'name' = (s.name + '.' + o.name),
            @tblName As TableName,
            type = substring(v.name, 5, 16)
        From sysobjects o, master.dbo.spt_values v, sysdepends d, sysusers s
        Where o.id = d.id
          And o.xtype = substring(v.name, 1, 2) collate database_default
          And v.type = 'O9T'
          And d.depid = @objid
          And o.uid = s.uid
          And deptype < 2
    End
End
---
It's weird that this code works in some parts of the main report when I set @tblName = 'All Tables' in the main report, but then doesn't parse in other parts of the main report. And I can't figure out how to do an ELSE clause that still lists all table dependencies, because of the whole Object_ID issue.
Any suggestions?
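One possible shape for that ELSE branch, sketched against the same SQL Server 2000-era system tables as the query above. This is untested and illustrative only: instead of filtering on a single Object_ID, it joins sysdepends back to a second sysobjects alias restricted to user tables, so every table's dependencies are listed:

```sql
Else
Begin
    -- 'All Tables': list dependents for every user table instead of one Object_ID
    Select Distinct
        'name' = (s.name + '.' + o.name),
        TableName = t.name,
        type = substring(v.name, 5, 16)
    From sysobjects o, master.dbo.spt_values v, sysdepends d,
         sysusers s, sysobjects t
    Where o.id = d.id
      And o.xtype = substring(v.name, 1, 2) collate database_default
      And v.type = 'O9T'
      And d.depid = t.id
      And t.xtype = 'U'
      And o.uid = s.uid
      And deptype < 2
End
```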
Thanks!
Did a little more looking around. It appears that this error renders in the Preview tab of Report Designer at page 6 (and above) of 52 pages. The first 5 pages render correctly. So, I guess my question is whether or not the Designer can handle all those re-renderings of the subreport.
On the first 5 pages, it doesn't matter if there's 1 dependency or many
dependencies, they all seem to render fine.
Thoughts?
Catadmin
--
MCDBA, MCSA
Random Thoughts: If a person is Microsoft Certified, does that mean that
Microsoft pays the bills for the funny white jackets that tie in the back?
@.=)
"Catadmin" wrote:
> I have a data dictionary report. [rest of the original post, quoted in full above, snipped]
>|||Further information:
Finally found the ReportServer Error Logs. Here are the error messages I'm
getting:
----
aspnet_wp!processing!12f4!5/12/2005-10:30:34:: w WARN: Data source
'APACS_BASE': Report processing has been aborted.
aspnet_wp!processing!12f4!5/12/2005-10:30:34:: e ERROR: Throwing
Microsoft.ReportingServices.ReportProcessing.ProcessingAbortedException: An
error has occurred during report processing., ;
Info:
Microsoft.ReportingServices.ReportProcessing.ProcessingAbortedException: An
error has occurred during report processing. -->
System.InvalidOperationException: Invalid attempt to read when no data is
present.
at System.Data.SqlClient.SqlDataReader.GetOrdinal(String name)
at
Microsoft.ReportingServices.DataExtensions.DataReaderWrapper.GetOrdinal(String fieldName)
at
Microsoft.ReportingServices.DataExtensions.MappingDataReader..ctor(String
dataSetName, IDataReader sourceReader, String[] aliases, String[] fieldNames)
at Microsoft.ReportingServices.ReportProcessing.h..ctor(String A_0,
IDataReader A_1, String[] A_2, String[] A_3)
at Microsoft.ReportingServices.ReportProcessing.a1.v()
at Microsoft.ReportingServices.ReportProcessing.ax.c()
at Microsoft.ReportingServices.ReportProcessing.a1.a(Boolean& A_0)
at Microsoft.ReportingServices.ReportProcessing.ax.b()
at Microsoft.ReportingServices.ReportProcessing.a1.a(Object A_0)
-- End of inner exception stack trace --
----
This error comes up in the log for every time the subreport tries to print
and can't. Anyone have any ideas how to get around it?
Thanks!
--
MCDBA, MCSA
Random Thoughts: If a person is Microsoft Certified, does that mean that
Microsoft pays the bills for the funny white jackets that tie in the back?
@.=)
"Catadmin" wrote:
> I have a data dictionary report. [rest of the original post, quoted in full above, snipped]

ERROR: Subreport cannot be shown

Hi,

I have a report (rdlc) which has a subreport, and I have specified which report to use as the subreport. I created a dataset to pull all data from two tables: one table goes with the main report, the other goes with the subreport. I then created two SqlDataSources, configured the first one to pull the data using a parameter to fill the main report, and configured the other to do the same for the subreport. Finally, I configured my report to use those two data source instances.

When I run my report, I get "ERROR: Subreport cannot be shown", but the main report section works fine. Also, if I run the subreport on its own as a standalone report, it works fine. I just can't get it to work as a subreport! PLEASE HELP! I am going insane!

I had the same problem, and found a solution. It appears that even if the parameters are named the same, the subreport will not get its parameter value set unless you connect it to the parameter that the main report is using.

Open the main report|||Thanks!|||

Hi Billy,

I have done the same thing you mentioned, but I am still not able to view the report; the main report seems to go into a loop. If I run the main report and the subreport separately, each runs perfectly fine. I have the same list of parameters on the main report as on the subreport.

Do I need to do something else besides this? I'd appreciate your help.

Thanks

Hitesh

|||

Hi, there

I got the same error as you. Just wondering, did you solve your problem? I just sent you an email.

Contact me at ste_chen@yahoo.com

thanks a lot !

steven

|||

Having the same parameter names will not cause the data to flow from the parent report to the sub report. Did you go through the bullet points above and map the parameter values from the parent report to the child report?
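In RDL terms, the parameter mapping described above ends up looking something like this inside the main report's definition. The element and parameter names here are illustrative, not taken from the original posts; the point is that the subreport's parameter must be explicitly given the parent report's parameter value:

```xml
<Subreport Name="TableDependsSub">
  <ReportName>TableDepends</ReportName>
  <Parameters>
    <Parameter Name="tblName">
      <!-- Connect the subreport parameter to the parent report's parameter -->
      <Value>=Parameters!tblName.Value</Value>
    </Parameter>
  </Parameters>
</Subreport>
```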

|||

You may need to handle the SubreportProcessing event in your code: build a DataSet for the table you want to use and add it to the SubreportProcessingEventArgs of the local report.

Private Sub ReportViewer_SubreportProcessing(ByVal sender As Object, ByVal e As SubreportProcessingEventArgs)
    ' Build the subreport's data set and register it with the local report
    Dim SubRptConn As New SqlConnection("Connection string")
    Dim SubRptSqlCmdText As String = "SELECT * FROM Table;"
    Dim SubRptSqlCmd As New SqlCommand(SubRptSqlCmdText, SubRptConn)
    Dim SubRptSqlDA As New SqlDataAdapter(SubRptSqlCmd)
    Dim SubRptSqlDS As New DataSet
    SubRptConn.Open()
    SubRptSqlDA.Fill(SubRptSqlDS)
    SubRptConn.Close()
    ' The data source name must match the dataset name in the subreport definition
    Dim SubRptDS As New ReportDataSource("Activities_V_LINCS_ItenEventsDetail", SubRptSqlDS.Tables(0))
    e.DataSources.Add(SubRptDS)
End Sub
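Note that a handler like the one above only runs if it has been attached to the LocalReport before the report renders. A hedged sketch of the wiring, assuming a WinForms ReportViewer control named ReportViewer1 (the control and form names are illustrative):

```vbnet
' In Form_Load (or similar), attach the handler before rendering.
AddHandler ReportViewer1.LocalReport.SubreportProcessing, _
    AddressOf ReportViewer_SubreportProcessing
ReportViewer1.RefreshReport()
```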
|||

Has anyone figured this out?

I have my main report and my subreport with a parameter called X. Then, on the subreport control placed on the main report, I have set the parameter to the same name and set its value to the main report's parameter (Parameters!X.Value).

This last post had code regarding ReportViewer_SubreportProcessing; where does this go? My subreport is already bound to a table adapter.
Please help, this is driving me nuts. I get the "report could not be shown" error.
Thanks.
-S

|||

I have the same error. The SubreportProcessing event handler fires and the query runs OK, but the subreport isn't shown.

I think it's because, in local report mode, the main report page doesn't refresh itself.