
Thursday, March 29, 2012

Estimation of Full Backup size

Has anyone come across a more accurate method than sp_spaceused to estimate the size of a full database backup for SQL Server 2000?

I have found it to have too great a variance (even after running DBCC UPDATEUSAGE) to rely on its accuracy. I have also looked at perhaps using the allocated pages indicated in the GAM pages, but this also seems to be pretty inaccurate.

I have a number of servers where space can be limited, and backups using Maintenance Plans have occasionally failed because they delete the old backups AFTER they take the latest one. I am writing a script which can check the space remaining and adjust the backup accordingly, but the variance I have observed so far with sp_spaceused is too great.

Any ideas welcomed.

Full backups copy all pages within allocated extents. So, there may be a few pages which are not allocated, but which are copied to the backup. Also, there is a small percentage of overhead in the backup format, but this should be predictable.

The most accurate estimate would be to run DBCC UPDATEUSAGE and then look at the reserved column of sp_spaceused.

What is the variance that you are seeing? It should be fairly small.

The real bottom line is that if you have servers which you know occasionally run out of backup space, they need to have storage added so that you can ensure the safety of your databases. You can minimize the space requirements by compressing the backups and/or copying them off the server as soon as they are created, but you need the space there.
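Putting those pieces together, a back-of-the-envelope estimate is the reserved space plus a small format overhead, plus some allowance for log activity. A sketch (Python, purely illustrative; the 1% overhead figure and the log allowance are assumed values, not measured ones):

```python
# Rough full-backup size estimate: reserved space (per sp_spaceused,
# after DBCC UPDATEUSAGE) plus an assumed backup-format overhead and
# an assumed allowance for log generated while the backup runs.

def estimate_backup_kb(reserved_kb, overhead_pct=1.0, log_during_backup_kb=0):
    """Return an estimated full-backup size in KB (all inputs assumed)."""
    return reserved_kb * (1 + overhead_pct / 100.0) + log_during_backup_kb

# Example: 10 GB reserved, 1% format overhead, 50 MB of log activity.
print(round(estimate_backup_kb(10 * 1024 * 1024, 1.0, 50 * 1024)))
```

If the estimate plus a safety margin exceeds the free space on the backup drive, a script like Martin's could delete the oldest backup first instead of afterwards.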

|||

Thanks for that Kevin

I need to do a few more tests to confirm the variance that I am seeing - it seemed to be inconsistent for a couple of databases (shooting up to 4 MB) after being within about 85-90 KB for all of the others, so there may have been something else happening.

You mention all pages within allocated extents - does that mean that I could use the GAM or SGAM pages to count the number of allocated extents and use this as an estimate? I already have a method of doing this for estimating differential backups using the DIFF_MAP pages.

Martin

|||

The other component of a backup is that the portion of the log created during the execution of the backup is also backed up. This is required in order to get a consistent view of the data at a single point in time.

Depending on the activity level and size of your database, this could easily account for this much variability and more.

Let's look at this from a different angle:

What are your business requirements around backups that are driving your backup plans? If we understand those, we can perhaps better optimize your strategy. I find that the best way to craft a backup strategy is to start from the recovery requirements and work backwards to come up with a plan which will support those needs.

What are the sizes of the databases you are backing up?

Estimating the Size of a Clustered Index

Dear All,
I am trying to calculate the size of a clustered index using the "Estimating the Size of a Clustered Index" section of Books Online. Can somebody tell me what the formula below means?
Calculate the number of pages in the index:
Num_Index_Pages = Σ (Index_Rows_Per_Page)^(Level − 1)
where 1 <= Level <= Levels

Shri.DBA wrote:
> Dear All,
> I am trying to calculate the size of a cluster index using the "Estimating
> the Size of a Clustered Index" section of Books online. Can some body tell me
> what the below formula means
> Calculate the number of pages in the index:
> Num_Index_Pages = Σ (Index_Rows_Per_Page)^(Level − 1)
> where 1 <= Level <= Levels
>
For each value of the variable Level between 1 and Levels (Levels is a variable calculated earlier):
calculate (Index_Rows_Per_Page)^(Level − 1)
and sum all the (Index_Rows_Per_Page)^(Level − 1) terms.
(I used ^ to mark the exponent.)
That means: let's say we have 3 levels, so Levels = 3. Calculate:
- for Level = 1: (Index_Rows_Per_Page)^(1 − 1) = (Index_Rows_Per_Page)^0 = 1
- for Level = 2: (Index_Rows_Per_Page)^(2 − 1) = (Index_Rows_Per_Page)^1 = Index_Rows_Per_Page
- for Level = 3: (Index_Rows_Per_Page)^(3 − 1) = (Index_Rows_Per_Page)^2 = Index_Rows_Per_Page * Index_Rows_Per_Page
Now sum all of these: 1 + Index_Rows_Per_Page + Index_Rows_Per_Page * Index_Rows_Per_Page
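That sum can be checked with a few lines (Python, purely for illustration; Index_Rows_Per_Page = 10 is an arbitrary example value):

```python
# Num_Index_Pages = sum over Level = 1..Levels of
# Index_Rows_Per_Page^(Level - 1), per the Books Online formula.
index_rows_per_page = 10
levels = 3

num_index_pages = sum(index_rows_per_page ** (level - 1)
                      for level in range(1, levels + 1))
print(num_index_pages)  # 1 + 10 + 100 = 111
```

Note this is a finite geometric series, so it also equals (Index_Rows_Per_Page^Levels − 1) / (Index_Rows_Per_Page − 1).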
OK?

|||

> Can somebody tell me
> what the below formula means
The calculation means that the number of non-leaf index pages required
for a clustered index is the sum of the number of pages needed at all levels.
The number of required pages at a given level is Index_Rows_Per_Page to the
power of (Level − 1). Below is the Transact-SQL equivalent.
DECLARE
    @Num_Index_Pages int,
    @Index_Rows_Per_Page int,
    @Level int

SELECT
    @Num_Index_Pages = 0,
    @Index_Rows_Per_Page = 10,
    @Level = 3 -- initialize to the number of levels

WHILE @Level > 0
BEGIN
    SET @Level = @Level - 1
    SET @Num_Index_Pages = @Num_Index_Pages + POWER(@Index_Rows_Per_Page, @Level)
END

SELECT @Num_Index_Pages AS Num_Index_Pages
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Shri.DBA" <ShriDBA@.discussions.microsoft.com> wrote in message
news:5634D0D1-7B1F-4D3A-8878-DE30960D7255@.microsoft.com...
> Dear All,
> I am trying to calculate the size of a cluster index using the "Estimating
> the Size of a Clustered Index" section of Books online. Can some body tell
> me
> what the below formula means
> Calculate the number of pages in the index:
> Num_Index_Pages = Σ (Index_Rows_Per_Page)^(Level − 1)
> where 1 <= Level <= Levels
>

|||

"Zarko Jovanovic" wrote:
----
Somehow I am not convinced by this formula for calculating the number of
index pages.
I created the table below and loaded 1,000,000 records. When I calculate the
number of index pages it gives me 387,507 pages. I don't understand why so
many index pages would be required to store a clustered index key of 4 bytes.
Regards
Balaji
USE [MyDB]
GO
/****** Object: Table [dbo].[Log] Script Date: 10/30/2007 17:39:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Log](
[LogID] [int] IDENTITY(1,1) NOT NULL,
[LogMessage] [varchar](50) NOT NULL,
[LogDateTime] [datetime] NOT NULL,
CONSTRAINT [PK_Log] PRIMARY KEY CLUSTERED
(
[LogID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF

|||

Balaji wrote:
did you read the whole article in BOL?

|||

Hi Dan Guzman,
Thanks very much for your help. I see there is a difference of 10-15% between the calculated value and the actual value. Any thoughts?
Regards
Balaji
"Dan Guzman" wrote:
|||

> Thanks very much for your help. I see there is a difference of 10-15% from
> the calculated value and original value. Any thoughts.
What original value are you referring to?
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Balaji" <Balaji@.discussions.microsoft.com> wrote in message
news:AA2312C8-5C72-43F3-A283-E4D7AE2EFE3A@.microsoft.com...
|||

Hi Dan,
The original value I am referring to here is the result of sp_spaceused.
Regards
Balaji.T
"Dan Guzman" wrote:
|||

> The original value I am referring to here is the result of sp_spaceused.
Perhaps extra space is required due to fragmentation.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Balaji" <Balaji@.discussions.microsoft.com> wrote in message
news:CE6A90B3-E3D1-4CD1-A67F-B6DA9F78D754@.microsoft.com...
|||

Ok, thank you very much for the reply.

Estimating the Size and Growth of a Database / Table

I have been looking for a copy of a tool called the 'data sizer' that could
be found in the Microsoft BackOffice 4.5 Resource Kit. I have had no luck
tracking it down.
What I would like is a tool / script / stored proc that would allow me to
estimate how large a database would be and what the growth potential may be.
If anyone has anything they could share I would appreciate it.
Thanks,
SniperX
> What I would like is a tool / script / stored proc that would allow me to
> estimate how large a database would be and what the growth potential may
be.
> If anyone has anything they could share I would appreciate it.
If you have Books Online installed, see these topics:
Estimating the Size of a Table with a Clustered Index
Estimating the Size of a Table Without a Clustered Index

estimating size of datatable / database

Are there any rules of thumb for estimating database size / datatable size? I don't seem to find any specifically for SQL Server 2005.

hi,

a good "start up" article on that can be found at http://www.sqlmag.com/Articles/ArticleID/50257/50257.html, by Kalen Delaney, where she digs into a lot of the new catalog views to see where data is stored in the SQL Server 2005 architecture... but it's available to SQL Server Magazine subscribers only...

regards

|||

Check out:

Estimating the Size of a Table and Estimating the Size of a Database on MSDN.

Mike

Estimating Size of a Table

There seems to be a problem with the formulas given in the SQL Server 2000 documentation.

The formulas for computing table size are:

RowsPerPage = 8096 / (RowSize + 2)
FreeRowsPerPage = 8096 * ((100 - FillFactor) / 100) / (RowSize + 2)
NumberOfPages = NumRows / (RowsPerPage - FreeRowsPerPage)
TableSize = 8192 * NumberOfPages
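To see how the formulas hang together, here is a sketch with sample numbers (Python; RowSize = 100, FillFactor = 80 and NumRows = 100,000 are assumed values, and a FillFactor of 0 is treated as 100, which is how the server interprets the default):

```python
import math

# Table-size estimate following the SQL Server 2000 Books Online formulas.
row_size = 100       # assumed maximum row size in bytes
fill_factor = 80     # 0 means "use the default", i.e. pack pages fully
num_rows = 100_000

if fill_factor == 0:
    fill_factor = 100  # sidesteps the divide-by-zero the question describes

rows_per_page = 8096 // (row_size + 2)                      # rounded down
free_rows_per_page = 8096 * (100 - fill_factor) // 100 // (row_size + 2)
number_of_pages = math.ceil(num_rows / (rows_per_page - free_rows_per_page))
table_size_bytes = 8192 * number_of_pages

print(rows_per_page, free_rows_per_page, number_of_pages, table_size_bytes)
```

Because the formula works from the maximum row size and a freshly built index, it will overestimate tables whose rows are mostly shorter than the maximum, which is one reason it can disagree badly with sp_spaceused.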

Question: what if the FillFactor is zero? Then NumberOfPages produces a divide-by-zero error...

Even if the FillFactor is greater than zero (e.g. 1 or 2), the computed TableSize is far too big compared with the output from sp_spaceused...

Can anyone help me on this?
Thanks.

The FillFactor is a bit of a fudge factor. It is the minimum percentage of the page that will be filled. If you supply a FillFactor of 1, and the average row takes up 75% of the page, then you get about the same actual result as if you had made the FillFactor 74.

One other thing to note is that fillfactor is only used at the creation of an index. As time goes by, and rows are updated, inserted, and deleted, the actual page usage can vary widely.

Hope this helps.
~Matt

|||

Thanks for the reply Matt.

I have a very large table (about 7M rows). When I created it, I didn't specify the FillFactor, so it defaulted to 0 as stated in the docs. So I thought I should use the same FillFactor in the formula, but then it gave me far too big a table size. From my understanding of your reply that actual page usage varies after a number of DML statements, does it mean that the formula may not apply anymore?

I'm creating a program to get the table size (and, from this value, the growth rate) of all my tables in the production db and put them into a table.

Is there another accurate way for me to get the table size other than sp_spaceused?

Btw, how accurate is sp_spaceused?

|||

That is correct: if the table has undergone significant data modifications (insert, update, delete), then the formula is not going to be a very good guide. sp_spaceused can also drift over time as the data is modified. I have seen import tables with large amounts of I/O (truncate and bcp) become negative in size, but DBCC UPDATEUSAGE will clear that up admirably.

sp_spaceused is fairly accurate at times. If you can manage to run DBCC UPDATEUSAGE(0) on the database before you run sp_spaceused, you will get almost exact results. At least, as exact as anything in MS SQL ;-).

|||

RE:
Q1 ...If you can manage to run DBCC UPDATEUSAGE(0) on the database before you run sp_spaceused, you will get almost exact results. At least, as exact as anything in MS SQL ;-).

A1 You may wish to consider running sp_SpaceUsed with the update option (for each object if that best meets the requirement). If sp_SpaceUsed is run frequently that will address the issue. For example:

Use Pubs
Go
-- Update usage for entire Pubs DB:
Exec sp_SpaceUsed
    @updateusage = 'True'
Go

-- Update usage for Authors table only:
Exec sp_SpaceUsed
    @objname = 'Authors',
    @updateusage = 'True'

RE:
Q2 ...Is there a another accurate way for me to get the tablesize other than the SP_SPACEUSED?

Q3 Btw, how accurate is the SP_SPACEUSED?

A3 The sp_SpaceUsed proc queries file page use data, file data, etc., there is not likely a more accurate method. (A2-->) However, nothing prevents one from using it as a starting point in an effort to make a more accurate "sp_MoreAccurateSpaceUsed".

Estimating needed size when replicating

Hi,
Can somebody tell me how to make an estimate of the size needed when creating a 'consult' DB with snapshot replication?
Thanks!

Five liters.

If you can be a bit more specific, I can be a bit more serious ;)

-PatP

|||

:rolleyes:
Sorry, I meant: how much extra space is needed besides the size of the database itself, e.g. for system tables and that kind of stuff...

|||

The overhead within the database is trivial. I'd allow a megabyte or two, tops.

Now the distribution server (both database and snapshot file space) can be a very different story!

-PatP

|||

There is no difference in size between the published database and the resulting snapshot on the subscriber. The subscriber's disk subsystem needs to be as fast as, if not faster than, the publisher's. Depending on the origin of invocation (I hate this style of writing myself, so don't laugh) of the distribution agent, the subscriber's CPU may take quite a bit of a hit. The network, interestingly, is very often the bottleneck. To minimize its effects, use an Alternate location with compression (the default location does not support the compression option).

|||

At least on my servers, I get a handful of tables in the publisher that track the replication details (like syspublications, sysarticles, etc). The overhead is trivial, but it is there.

I've never had problems with disk speed... The distributor always takes up the slack for me.

I never use compression with databases. Maybe that's because I got bit so badly years ago, but I've never been able to justify doing it since.

-PatP

|||

Pat, "years ago" alternate location and compression weren't supported, so I don't know what you're referring to ;)

estimating maximum row size

hi,
given a table definition, how can I estimate the maximum row size?
eg:
table (a varchar(8000),
b varchar(100))
the row size (from the column lengths) is 8100, but actually it is more than
that.
I know that because when I create the above table I get a warning:
Warning: The table 'size_test' has been created but its maximum row size
(8125) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE
of a row in this table will fail if the resulting row length exceeds 8060
bytes.
I would like to check against this limit; that is why I need the max row size.
thanks
--
Vivek T S
Member Technical Staff (Inucom)

Hello,
Check the following link
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_ts_8dbn.asp
which gives the max size per row as 8060 bytes.
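For what it's worth, the 8,125 bytes in the warning is the 8,100 bytes of declared column widths plus per-row overhead. A sketch of the Books Online row-size arithmetic (Python; this reproduces the shape of the calculation, though the server's exact overhead accounting comes out a few bytes higher):

```python
import math

# Maximum row size estimate for: a varchar(8000), b varchar(100)
num_cols = 2
num_variable_cols = 2
fixed_data_size = 0                # no fixed-length columns
max_var_size = 8000 + 100          # declared widths of the varchar columns

null_bitmap = 2 + math.ceil(num_cols / 8)                        # 3 bytes
variable_data_size = 2 + (num_variable_cols * 2) + max_var_size  # 8106 bytes
row_header = 4                                                   # status/length

max_row_size = fixed_data_size + variable_data_size + null_bitmap + row_header
print(max_row_size)  # 8113 by this formula; the server itself warns at 8125
```

Comparing the computed figure against the 8,060-byte limit gives the check Vivek wants before creating the table.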
I would suggest you change your varchar to something like a text or other large-object type. The reason is that they can store a lot more than 8060 bytes and still be on the same table as the other fields.
Peter
"Happiness is nothing more than good health and a bad
memory."
Albert Schweitzer
>--Original Message--

Estimating Log File Size

Dear All,
Yesterday I was given a sp to calculate the size of tables
and the overall size of data files.
Is there something similar for log files, i.e. an
algorithm to calculate the size now and say 2 years time
of a log file ?
Thanks
Jim
Jimbo
Look at sp_helpfile as well as sysfiles system table
It is hard to estimate what your log file will be in the next two years.
"Jimbo" <anonymous@.discussions.microsoft.com> wrote in message
news:33ff01c47ec6$197af750$a501280a@.phx.gbl...
> Dear All,
> Yesterday I was given a sp to calculate the size of tables
> and the overall size of data files.
> Is there something similar for log files, i.e. an
> algorithm to calculate the size now and say 2 years time
> of a log file ?
> Thanks
> Jim
|||

Probably the best way to estimate future sizes is to capture growth over
time and predict from that...
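That capture-and-extrapolate approach can be sketched in a few lines (Python; the weekly sample sizes below are invented for illustration):

```python
# Fit a linear trend to periodic log-size measurements and project forward.
sizes_mb = [100, 110, 121, 128, 142, 150]   # one sample per week, oldest first

n = len(sizes_mb)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(sizes_mb) / n

# Least-squares slope in MB per week.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sizes_mb))
         / sum((x - mean_x) ** 2 for x in xs))

weeks_ahead = 104                            # roughly two years
projected_mb = mean_y + slope * (weeks_ahead - mean_x)
print(round(slope, 2), round(projected_mb))
```

A linear fit is only as good as the workload staying steady; re-fit as new samples arrive, and treat the projection as a floor rather than a guarantee.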
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Jimbo" <anonymous@.discussions.microsoft.com> wrote in message
news:33ff01c47ec6$197af750$a501280a@.phx.gbl...
> Dear All,
> Yesterday I was given a sp to calculate the size of tables
> and the overall size of data files.
> Is there something similar for log files, i.e. an
> algorithm to calculate the size now and say 2 years time
> of a log file ?
> Thanks
> Jim
|||Hi,
You could also use the below command to get current log size and usage
dbcc sqlperf(logspace)
Estimating the log size for the next 2 years will be hard. That depends on
the volume of batch operations and the frequency with which
you perform transaction log backups. Normally it is not required to
project log size, because the log file will be cleared each time
you perform a transaction log backup.
Thanks
Hari
MCDBA
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:eFgFomsfEHA.3932@.TK2MSFTNGP09.phx.gbl...
> Jimbo
> Look at sp_helpfile as well as sysfiles system table
> It is hard to estimate what is your log file will be in the next two
years.
>
> "Jimbo" <anonymous@.discussions.microsoft.com> wrote in message
> news:33ff01c47ec6$197af750$a501280a@.phx.gbl...
>
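Wayne's suggestion (capture growth over time and predict from that) can be sketched as a simple linear extrapolation. The sample history below is invented for illustration:

```python
# Sketch of growth-based projection: record file size over time,
# fit a line, and extrapolate. The sample data is invented.
def project_size(samples, future_day):
    # samples: list of (day, size_mb); ordinary least-squares fit
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(s for _, s in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * s for d, s in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope * future_day + intercept

history = [(0, 100), (30, 130), (60, 160), (90, 190)]  # ~1 MB/day
print(project_size(history, 730))  # two years out -> 830.0 MB
```

As Hari notes, this is more meaningful for data files than for the log, since regular log backups keep the log from growing steadily.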


Estimating growth of mdf file size...

Hi, how can I estimate the mdf file size's growth as records are inserted?

For a rough calculation you can calculate the sum of bytes needed for one row, e.g. a table with three CHAR(200) columns will need 600 bytes + ~32 bytes of additional overhead. You can just sum up all the data types you have to get the row size in your table.

HTH, Jens K. Suessmeyer.


http://www.sqlserver2005.de

|||Thank you for replying.

Then, what about the growth if the table has an index column?|||You will have to add the size of the indexed columns as well.

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de
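Jens's rule of thumb can be turned into a quick calculation. The helper below is hypothetical, and the ~32-byte overhead is his rough figure, not an exact storage-engine value:

```python
# Rough-estimate sketch following the rule of thumb above: per-row size
# is the sum of the column byte widths plus ~32 bytes of overhead, and
# indexed columns are counted again for the index. All figures are
# approximations, not exact storage-engine values.
def estimate_growth_mb(col_bytes, indexed_col_bytes, rows):
    row = sum(col_bytes) + 32           # data row + assumed overhead
    idx = sum(indexed_col_bytes) + 32   # index row + assumed overhead
    return rows * (row + idx) / (1024 * 1024)

# three CHAR(200) columns, one of them indexed, a million inserts
print(round(estimate_growth_mb([200, 200, 200], [200], 1_000_000), 1))
```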

Estimating growth

hi
how can i find the below from database.
Average Record Size (KB)
Total Master Table Storage (GB)
Index Storage for 1 Record
Regards
Rah
Look up the chapter "Estimating the size of a database" in SQL Server 2000
Books Online.
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"Rah" <anonymous@.discussions.microsoft.com> wrote in message
news:811C9279-BEE1-43FA-9085-B3BD63A8FC61@.microsoft.com...
hi
how can i find the below from database.
Average Record Size (KB)
Total Master Table Storage (GB)
Index Storage for 1 Record
Regards
Rah


Tuesday, March 27, 2012

Estimating Disk Space

Thanks for your replies Tibor / John.
Is there any script/tool available to estimate the size of
the target database. I have the table structures and
estimated rows.
Thanks,
Hari
I believe that the book "Inside SQL Server 2000" from MS Press comes with such a tool. The book is a
"must" for any intermediate/advanced SQL Server person anyhow, IMO... :-)
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Hari Shankar" <anonymous@.discussions.microsoft.com> wrote in message
news:2429f01c45f3c$1f2b89c0$a401280a@.phx.gbl...
> Thanks for your replies Tibor / John.
> Is there any script/tool available to estimate the size of
> the target database. I have the table structures and
> estimated rows.
> Thanks,
> Hari

Estimating Database Size Issue

All,
I am following the guidelines laid out on the following page to
calculate an estimated size for a database.
http://msdn2.microsoft.com/en-us/library/ms178085.aspx.
However, I am noticing something weird. Because
Index_Rows_Per_Page is calculated by dividing the page size by
(Index_Row_Size + 2), the MORE variable and non-variable keys you add to your table, the
SMALLER the size of the database gets. Does this make sense to anyone
because it doesn't make sense to me.
Here's my database:
Num_Rows = 75,000,000
Num_Cols = 3
Fixed_Data_Size = 8
Num_Variable_Cols = 0
Max_Var_Size = 0
Num_Key_Cols = 2
Fixed_Key_Size = 4
Num_Variable_Key_Cols = 0
Max_Var_Key_Size = 0
Num_Nullable_Key_Cols = 0
Using this information, I get 438.07GB as the size of my
database.
If I change Fixed_Key_Size to 6, the size of the
database drops to 335.25GB.
Again, this doesn't make sense to me. I've quadruple checked my
math against what's in the URL above, and it appears to match what is
in the document. Is Microsoft's calculation wrong? Surely not. Maybe I
missed something? Are you guys getting the same results based off of
my info above?
Thanks,
Jeremy
I remember when I first read this article I felt pretty overwhelmed.
I've found the many scripts and articles from www.sqlservercentral.com
(and www.sql-server-performance.com I think) to be much more
intuitive. I know this doesn't really answer your question, its more
of a suggestion.
Erik|||Thanks Erik. I was looking further through the code I wrote, value by
value, and it turns out that it's either the LOG function or the
Summation function (or a combination of both) that is making the size
decrease when the byte size of indexed keys increases. But it still
doesn't make sense.
I'll take a look at those links though, thanks!
Jeremy
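The direction of the effect can be checked with a simplified sketch of the page math from the article. This is not the full BOL formula (the 7 bytes of per-row overhead below is an assumption), but it shows that a larger key should produce MORE index pages, not fewer, so a shrinking estimate points to a bug elsewhere in the calculation:

```python
import math

# Simplified sketch of the page math: rows per page is
# floor(8096 / (index_row_size + 2)), so a larger key means fewer rows
# per page and therefore more pages. The +7 overhead is an assumption,
# not the exact BOL figure.
def index_pages(num_rows, index_row_size):
    rows_per_page = 8096 // (index_row_size + 2)
    return math.ceil(num_rows / rows_per_page)

print(index_pages(75_000_000, 4 + 7))   # 4-byte key + assumed overhead
print(index_pages(75_000_000, 6 + 7))   # 6-byte key: more pages, as expected
```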


Estimate Snapshot Duration

Dear all,
I am going to set up snapshot replication. It will run nightly. The size of the
DB is around 2GB; does anyone know how long it will take to complete the
snapshot replication? Is there any way of estimating the duration? It doesn't have
to be very accurate, a rough figure will do.
Thanks,
Jas
Jas,
this is almost impossible, as your network bandwidth at the time of the
replication and the indexes you have on the tables are just two factors
that'll change the result radically. Perhaps you can 'calibrate' the system
yourself in a rough way by replicating a table of 100MB with representative
indexes to a dummy database and then multiplying the result by 20 - just a
finger in the air, but it will give you some idea.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
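Paul's calibration idea is just linear scaling; as he says, a finger-in-the-air figure. The sample timing below is invented:

```python
# Sketch of the 'calibrate' approach: time a representative 100 MB
# replication, then scale linearly to the full database size.
# Purely a rough estimate; throughput is rarely perfectly linear.
def estimate_minutes(sample_mb, sample_minutes, db_mb):
    return sample_minutes * (db_mb / sample_mb)

# e.g. if 100 MB took 3 minutes, a 2 GB database would take about:
print(estimate_minutes(100, 3, 2048))  # ~61 minutes
```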

estimate size of table based on number of rows

Hi everyone,
This might be a tough one, I don't know if this is doable.
Basically I want to create a stored procedure to return estimated size of a
table.
because I don't know how many rows it will have in the future, I want to
calculate how much disk space it takes to have one row,
then multiply by the number of rows I specified. (space calculation for index is
not necessary).
EXEC GetEstimateTableSize @TableName='Table1', @NumberOfRows='3000000'
it'll return the value in KB after I execute the proc. possible?
If you run:
EXEC dbo.sp_spaceused table_name
You get the current space usage. Divide it by the current number of rows,
and multiply by the projected one.
If the table is completely empty or you'd rather calculate theoretical size,
there are formulas you can use from BOL, or better yet, a lengthy discussion
on internal structures and row sizes in Inside SQL Server 2000.
BG, SQL Server MVP
www.SolidQualityLearning.com
"Britney" <britneychen_2001@.yahoo.com> wrote in message
news:uWKIMMBpFHA.3936@.TK2MSFTNGP10.phx.gbl...
> Hi everyone,
> This might be a tough one, I don't know if this is doable.
> Basically I want to create a stored procedure to return estimated size of
> a table.
> because I don't know how many rows it will have in the future, I want to
> calcuate how much disk space it takes to have one row,
> then multiply by number of rows I specified. (space calculation for index
> is not necessary).
>
> EXEC GetEstimateTableSize @.TableName='Table1', @.NumberOfRows ='3000000'
>
> it'll return value in KBs after I execute proc. possible?
>
>
>
>
>
>|||Have you looked in the Books Online for sp_spaceused? It'll get you
part of the way there because it returns the rows used and the current
size of the table. Here's a quick and dirty stab at it; obviously
it'll need polishing:
This is actually a pretty useful idea; I'm planning on using this
myself.
Stu
DECLARE @Table varchar(255)
DECLARE @NumberOfRows int
SET @Table = 'Splat'
SET @NumberOfRows = 1
CREATE TABLE #t (name varchar(255),
rows int,
reserved varchar(100),
data varchar(100),
index_size varchar(100),
unused varchar(100))
--how big is the table now?
exec sp_spaceused @Table
INSERT INTO #t
exec sp_spaceused @Table
--strip off the ' KB' from the data column,
--convert data and rows to decimal, and
--divide data by number of rows and multiply by number of anticipated rows
SELECT rows, data, (data/rows) * @NumberOfRows
FROM ( SELECT rows = CONVERT(decimal(32,3), rows),
data = CONVERT(decimal(32,3), LEFT(data, LEN(data)-3))
FROM #t) x
DROP TABLE #t
HTH,
Stu
|||The following may help:
http://www.microsoft.com/downloads/...&displaylang=en
If you need it in a SP, try to understand the formulas used in the
spreadsheet and translate them in T-SQL (using the data from syscolumns
and other system tables).
Razvan
|||I see 4 columns called reserved, index_size, unused, data.
I guess I need to add the 4 columns to get the total size, then divide by the number
of rows to find out how much disk space per row?
Then disk space per row * estimated number of rows to find the estimated size?
"Itzik Ben-Gan" <itzik@.REMOVETHIS.SolidQualityLearning.com> wrote in message
news:etjHceBpFHA.2888@.TK2MSFTNGP10.phx.gbl...
> If you run :
> EXEC dbo.sp_spaceused table_name
> You get the current space usage. Divide it by the current number of rows,
> and multiply by the projected one.
> If the table is completely empty or you'd rather calculate theoretical
> size, there are formulas you can use from BOL, or better yet, a lengthy
> discussion on internal structures and row sizes in Inside SQL Server 2000.
> --
> BG, SQL Server MVP
> www.SolidQualityLearning.com
>
> "Britney" <britneychen_2001@.yahoo.com> wrote in message
> news:uWKIMMBpFHA.3936@.TK2MSFTNGP10.phx.gbl...
>|||You need the data + index_size.
Reserved includes: data + index_size + unused.
BG, SQL Server MVP
www.SolidQualityLearning.com
"Britney" wrote:

> I see 4 columns called reserved , index_size, unused, data
> I guess I need to add 4 columns to get the total size, then divide by number
> of row to find out how much disk space per row?
> then disk space per row * estimate number of rows to find out estimate size?
> "Itzik Ben-Gan" <itzik@.REMOVETHIS.SolidQualityLearning.com> wrote in message
> news:etjHceBpFHA.2888@.TK2MSFTNGP10.phx.gbl...
>
>
|||but I have an NTEXT column,
I don't think it can calculate NTEXT.
"Razvan Socol" <rsocol@.gmail.com> wrote in message
news:1124386169.690259.289640@.z14g2000cwz.googlegroups.com...
> The following may help:
> http://www.microsoft.com/downloads/...&displaylang=en
> If you need it in a SP, try to understand the formulas used in the
> spreadsheet and translate them in T-SQL (using the data from syscolumns
> and other system tables).
> Razvan
>
|||Indeed, ntext columns are not covered by the spreadsheet.
In the DataSizer.doc file, they wrote:
The tool does not include the formula to estimate the size of a table
that has Text columns. Not NULL text values consume 16 bytes in the
data row and have a minimum size of 84 bytes on the text page. Text
values are packed onto text pages with the same algorithm as data
rows so it should be possible to estimate the size of text data
storage using the HEAP table spreadsheet if you know the average size
of your text values.
Razvan
|||if I use the sp_spaceused 'tablename' command,
if that table has a few NTEXT columns,
I think it will calculate the total space including disk space for the NTEXT
columns, correct?
"Razvan Socol" <rsocol@.gmail.com> wrote in message
news:1124483366.622335.202490@.g49g2000cwa.googlegroups.com...
> Indeed, ntext columns are not covered by the spreadsheed.
> In the DataSizer.doc file, they wrote:
> The tool does not include the formula to estimate the size of a table
> that has Text columns. Not NULL text values consume 16 bytes in the
> data row and have a minimum size of 84 bytes on the text page. Text
> values are packed onto text pages with the same algorithm as data
> rows so it should be possible to estimate the size of text data
> storage using the HEAP table spreadsheet if you know the average size
> of your text values.
> Razvan
>
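The projection Itzik describes (data + index_size, divided by current rows, times projected rows) is simple arithmetic. A sketch with invented sp_spaceused figures:

```python
# Sketch of the projection discussed in this thread: take the 'data'
# and 'index_size' strings sp_spaceused returns (e.g. '1264 KB'),
# strip the ' KB' suffix, divide the total by the current row count,
# and multiply by the projected row count. Sample values are invented.
def project_kb(data_str, index_str, current_rows, projected_rows):
    used_kb = int(data_str[:-3]) + int(index_str[:-3])  # drop ' KB'
    return used_kb / current_rows * projected_rows

print(project_kb('1264 KB', '336 KB', 10_000, 3_000_000))  # projected KB
```

Note that reserved = data + index_size + unused, so summing all four columns would double-count; per the thread, only data and index_size belong in the per-row figure.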