
Thursday, March 29, 2012

dropping performance on SQL server

I am in no way familiar with SQL Server (DB2/Oracle background), but since we have a purchased application running on this RDBMS and its performance is dropping off, I would like to do a little investigation into this new field.
The main problem is query time-outs and swapping during query execution. What can be done with the standard administration tools SQL Server provides? I am used to fancy tools such as TOAD for Oracle, but in this case I have no special tools at hand. Can someone suggest a strategy for handling such an investigation?

Example: I run a simple command line statement in DB2 to check whether reorganization is needed. What is the equivalent in SQL Server?
Query Analyzer is one tool. SQL Server Profiler is the other. Go to Start --> Programs --> Microsoft SQL Server --> Profiler and run it from there; you need to be sa in order to run it. Create a new trace on the server you want to investigate, save it to a file, and let it run for some time (maybe 2-3 hours during peak use). Stop the Profiler after the desired time, then click Tools --> Index Tuning Wizard and it will guide you to index suggestions.
There are many other ways to tune SQL Server: updating statistics (which you should be doing quite often), rebuilding or defragmenting indexes (I prefer defrag as it is an online operation), recompiling your stored procedures, and watching the box with Performance Monitor.
There is a tool available from netiq.com, called SQLCheck, which helps you monitor the SQL box.
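As a rough sketch of the statistics and defrag suggestions above, assuming SQL Server 2000 syntax and hypothetical database, table, and index names:
-- Refresh optimizer statistics for one table (name is a placeholder)
UPDATE STATISTICS dbo.Orders WITH FULLSCAN
-- Defragment a single index online (database, table, and index names are placeholders)
DBCC INDEXDEFRAG (MyDatabase, Orders, IX_Orders_OrderDate)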
Hope it helps.|||So many places to start from here.

1. I think the nearest equivalent tool to show you whether a reorg is needed is DBCC SHOWCONTIG (table_name). It will show you how fragmented a particular table is. A badly fragmented table can be re-org'd using DBCC DBREINDEX (table_name); use caution, as this can be a resource pig (see the sketch after this list). Check SQL BOL.

2. Check your hardware and especially your disk setup. Ideally you want RAID 1+0 (RAID 10) for your data and your log file partitions. You want your log files on a separate partition from your data. If you can afford it, get tempdb onto its own partition (this is a luxury).

3. In general, the Index Tuning Wizard mentioned above works quite well. It has stood me well in the past when I overlooked an index.

4. Try to have one clustered index per table (the limit is one, but try to have one).

5. Make sure SQL is running on a dedicated box (not concurrent with other services like IIS).

6. Boost priority to SQL Server (only if it's on a dedicated box). This will have only marginal effect.

7. If possible, SQL seems to perform very well with multiple processors. If you're running it on a single processor box, consider an upgrade.

8. Look at your memory and be sure you have enough. It's cheap and it may help.
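Here is the sketch referred to in item 1, offered only as an illustration; the table name is a placeholder and 90 is just a sample fill factor. Check SQL BOL for the full output and options.
-- Report fragmentation (scan density, logical fragmentation) for one table
DBCC SHOWCONTIG (Orders)
-- Rebuild all indexes on that table with a 90 percent fill factor
-- (resource intensive; schedule it for a quiet window)
DBCC DBREINDEX (Orders, '', 90)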

But most of the time your problems will lie with poorly tuned queries or indices which are notably absent.

There is much more to be said, of course, but those should do for starters.

HTH,

Hugh Scott

Originally posted by blom0344
I am in no way familiar with SQL Server (DB2/Oracle background), but since we have a purchased application running on this RDBMS and its performance is dropping off, I would like to do a little investigation into this new field.
The main problem is query time-outs and swapping during query execution. What can be done with the standard administration tools SQL Server provides? I am used to fancy tools such as TOAD for Oracle, but in this case I have no special tools at hand. Can someone suggest a strategy for handling such an investigation?

Example: I run a simple command line statement in DB2 to check whether reorganization is needed. What is the equivalent in SQL Server?

Tuesday, March 27, 2012

Dropping and re-creating an index kills performance

Hello all,
I'm new to some of the index operations, so hopefully this is a trivial
question. I attempted to increase the length of a database field and
got the message that an index that included the field would go above
900 bytes, and it failed. That's a large index to begin with, so my
boss thought we should delete the index and see if we took a
performance hit. The first execution of a relevant stored procedure
after the delete actually yielded faster results. However, every
attempt thereafter tanked. No big deal, right? Just re-add the index. I
did, just as it existed previously, and there was no change in the poor
performance. I get the same results with or without the index. Can
anyone tell me what might be causing this and what I can do to get the
index to behave as before?
Thanks in advance,
Shannon
In QA (Query Analyzer), run the Estimated Execution Plan and see if your query is
using the index. If it's not, you can always tell it to manually with an index hint.
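For example, a minimal sketch of forcing an index through a table hint; the table, column, and index names here are hypothetical:
-- Force the optimizer to use a specific index for this query (names are placeholders)
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (INDEX (IX_Orders_CustomerID))
WHERE CustomerID = 42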
|||Shannon Cayze wrote:
> so my boss thought we should delete the index and see if we
> took a performance hit.
This was the first mistake. I hope you made the case that blindly
removing a production index and hoping for the best was not the proper
protocol. At the very least an audit of all queries that used that
column in the index should have been undertaken to see if it's even
used, and if so, how. And all this should have been performed on a test
server, not in production. Scary.
Secondly, it would really help here to see the before and after
execution plans for those queries that were affected by the index
removal and subsequent re-add. That would require that you first
examined existing queries to see how they were using the index and
compare that to how the queries are using the index now. Since you have
no baseline, there's no way to tell what changed. Are you sure the index
was re-created with the same parameters (clustered, non-clustered,
unique, fill-factor, etc.)?
It's possible the procedure needs to be recompiled. You can use
sp_recompile [ @objname = ] 'object' to do this.
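As a minimal illustration with hypothetical object names, you could verify the index definition and then flag the procedure for recompilation like this:
-- Compare the re-created index against its old definition (keys, uniqueness, clustering)
EXEC sp_helpindex 'dbo.Orders'
-- Mark the stored procedure so it gets a fresh plan on its next execution
EXEC sp_recompile 'dbo.usp_OrderReport'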
David Gugick
Quest Software
www.imceda.com
www.quest.com
|||First of all, I never said it was production. In fact, it was the test
environment. Secondly, I do have the baseline execution plan and the
results of subsequent queries after altering the index. I didn't
include them because I wanted to keep the posting short and all I
wanted was a few general ideas on how to optimize an index when deleted
and re-added. However, thank you for the sp_recompile hint. Hopefully,
it will help.
|||Shannon Cayze wrote:
> First of all, I never said it was production. In fact, it was the test
> environment. Secondly, I do have the baseline execution plan and the
> results of subsequent queries after altering the index. I didn't
> include them because I wanted to keep the posting short and all I
> wanted was a few general ideas on how to optimize an index when
> deleted and re-added. However, thank you for the sp_recompile hint.
> Hopefully, it will help.
Post the baseline and new execution plans so we can see the differences.
Also post the query in question. I'm guessing a recompile will do it,
but if not, post all the related information and we'll see if we can
find a solution for you.
David Gugick
Quest Software
www.imceda.com
www.quest.com


Wednesday, March 7, 2012


Drop and recreate all indexes

Sql server 2005
There are a couple of administrators who have repeatedly said they
dropped and recreated indexes
and performance spiked after a migration and it would be nice to leave
no stone unturned in a bid to better performance.
Has anyone come across a script or has a way to do this
Your input as usual is greatly appreciated
Mike
There are many flavors out there... but this is a pretty common way to
reindex your tables.
The DBCC DBREINDEX statement is the key!
!UNTESTED SQL!
USE DatabaseName -- Enter the name of the database you want to reindex
DECLARE @TableName varchar(255)
-- Walk every user table and rebuild all of its indexes
DECLARE TableCursor CURSOR FOR
SELECT table_schema + '.' + table_name FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
-- ' ' as the index name means rebuild all indexes on the table; 90 is the fill factor
DBCC DBREINDEX(@TableName, ' ', 90)
FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
"Massa Batheli" <mngong@.gmail.com> wrote in message
news:1161357089.891287.7040@.e3g2000cwe.googlegroups.com...
> Sql server 2005
> There are a couple of administrators who have repeatedly said they
> dropped and recreated indexes
> and performance spiked after a migration and it would be nice to leave
> no stone unturned in a bid to better performance.
> Has anyone come across a script or has a way to do this
> Your input as usual is greatly appreciated
>
> Mike
>
|||Hi
Check out ALTER INDEX ALL in Books Online, example D in the topic
sys.dm_db_index_physical_stats shows you how you can call this for multiple
tables http://msdn2.microsoft.com/en-us/library/ms188917.aspx
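For instance, a minimal single-table sketch of that approach; the table name and fill factor are placeholders, not anything prescribed by Books Online:
-- Rebuild every index on one table (ONLINE = ON is an option on Enterprise Edition)
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (FILLFACTOR = 90)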
John
"Massa Batheli" wrote:
> Sql server 2005
> There are a couple of administrators who have repeatedly said they
> dropped and recreated indexes
> and performance spiked after a migration and it would be nice to leave
> no stone unturned in a bid to better performance.
> Has anyone come across a script or has a way to do this
> Your input as usual is greatly appreciated
>
> Mike
>
|||Thank you so much Immy.
Just ran something similar and the next step was to run a drop index
...
and recreate index ...
That is what is help is needed with ,again thank you for your time and
I appreciate more ideas
|||Hi
DBCC DBREINDEX may be removed in future versions of SQL Server, therefore if
you are writing production code you should consider using ALTER INDEX.
John
"Massa Batheli" wrote:
> Thank you so much Immy.
> Just ran something similar and the next step was to run a drop index
> ...
> and recreate index ...
> That is what is help is needed with ,again thank you for your time and
> I appreciate more ideas
>
|||As said earlier John the purpose is to completely drop and rebuild
indexes
Not sure why that has to be done but still looking for ways to do that
on instructions
Reason for this post
.....
John Bell wrote:
> Hi
> DBCC DBREINDEX may be removed in future versions of SQL Server, therefore if
> you are writing production code you should consider using ALTER INDEX.
> John
> "Massa Batheli" wrote:
> > Thank you so much Immy.
> > Just ran something similar and the next step was to run a drop index
> > ...
> > and recreate index ...
> > That is what is help is needed with ,again thank you for your time and
> > I appreciate more ideas
> >
> >
|||DBCC DBREINDEX and ALTER INDEX with the REBUILD option will execute the same code internally as if
you do DROP INDEX and then CREATE INDEX.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Massa Batheli" <mngong@.gmail.com> wrote in message
news:1161806390.411989.59800@.i42g2000cwa.googlegroups.com...
> As said earlier John the purpose is to completely drop and rebuild
> indexes
> Not sure why that has to be done but still looking for ways to do that
> on instructions
> Reason for this post
>
> .....
> John Bell wrote:
>> Hi
>> DBCC DBREINDEX may be removed in future versions of SQL Server, therefore if
>> you are writing production code you should consider using ALTER INDEX.
>> John
>> "Massa Batheli" wrote:
>> > Thank you so much Immy.
>> > Just ran something similar and the next step was to run a drop index
>> > ...
>> > and recreate index ...
>> > That is what is help is needed with ,again thank you for your time and
>> > I appreciate more ideas
>> >
>> >
>
|||Hi
Rebuilding indexes should certainly be done after upgrading to SQL 2005, and
you should periodically rebuild your indexes to remove fragmentation and make
sure they perform efficiently. You should also look at updating
statistics and usage. Check out the dynamic management function
sys.dm_db_index_physical_stats in Books Online, which gives an example script
for rebuilding indexes if they are fragmented beyond a certain amount.
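A minimal sketch of that kind of check; the 30 percent threshold and the LIMITED scan mode are arbitrary choices for illustration:
-- List indexes in the current database whose fragmentation exceeds 30 percent
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC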
John
"Massa Batheli" wrote:
> As said earlier John the purpose is to completely drop and rebuild
> indexes
> Not sure why that has to be done but still looking for ways to do that
> on instructions
> Reason for this post
>
> ......
> John Bell wrote:
> > Hi
> >
> > DBCC DBREINDEX may be removed in future versions of SQL Server, therefore if
> > you are writing production code you should consider using ALTER INDEX.
> >
> > John
> >
> > "Massa Batheli" wrote:
> >
> > > Thank you so much Immy.
> > > Just ran something similar and the next step was to run a drop index
> > > ...
> > > and recreate index ...
> > > That is what is help is needed with ,again thank you for your time and
> > > I appreciate more ideas
> > >
> > >
>


Friday, February 24, 2012

drive configuration for maximum performance

Using SS2000. We have 14 146G drives that are divided into 2 logical drives
of 7 physical drives each. I want maximum performance. Would a configuration
of 1 logical drive for all 14 physical drives of raid 10 be better? Or is
another configuration better?
Thanks,
Dan D.
I was told by a MS SQL guy at a conference once when I asked that question
this:
OS should be on a RAID 5 drive group of at least 3 drives.
DB should be on a RAID 5 drive group of at least 3 drives.
Logs should be on a RAID 1 mirrored group of two drives.
The two RAID 5 groups are because of the random nature of the data access
and the RAID 1 group because logs are written sequentially. The RAID 5
will give quicker random access and fault tolerance where the mirrored RAID
1 allows for fault tolerance and quicker sequential writes.
Regards,
John
"Dan D." <DanD@.discussions.microsoft.com> wrote in message
news:BFFE2FA1-F222-470E-AED6-0E6C25550F46@.microsoft.com...
> Using SS2000. We have 14 146G drives that are divided into 2 logical
> drives
> of 7 physical drives each. I want maximum performance. Would a
> configuration
> of 1 logical drive for all 14 physical drives of raid 10 be better? Or is
> another configuration better?
> Thanks,
> --
> Dan D.
|||From the research that I've done, using raid 5 versus raid 10 seems to be a
matter of whether you're doing a lot of writes or not. For a system that does
more than 10% writes, raid 10 is supposed to be better because it is faster
when doing writes. We do a lot of data manipulation for direct mail files so
we're doing a lot of writes. raid 10 also seems to be more fault tolerant
because you can lose more than one drive in the array and still recover. I
think you can only lose one drive in a raid 5. So, for our situation I'm
reasonably sure that raid 10 is better than raid 5 for the data drives.
I was thinking that putting all 14 drives into one logical drive would give
us a lot more drive heads to go find the data than dividing the drives up
into two logical drives and therefore would be faster.
Thanks for the feedback,
Dan D.
"John J. Hughes II" wrote:

> I was told by a MS SQL guy at a conference once when I asked that question
> this:
> OS should be on a RAID 5 drive group of at least 3 drives.
> DB should be on a RAID 5 drive group of at least 3 drives.
> Logs should be on a RAID 1 mirrored group of two drives.
> The two RAID 5 groups are because of the random nature of the data access
> and the RAID 1 group because logs are written sequentially. The RAID 5
> will give quicker random access and fault tolerance where the mirrored RAID
> 1 allows for fault tolerance and quicker sequential writes.
> Regards,
> John
> "Dan D." <DanD@.discussions.microsoft.com> wrote in message
> news:BFFE2FA1-F222-470E-AED6-0E6C25550F46@.microsoft.com...
>
>
|||RAID 10 is RAID 1 over RAID 0. The RAID 0 would give better throughput and
the RAID 1 mirror would give protection. I would not have a problem with
replacing the RAID 5 with a RAID 10 in the below configuration but I would
still suggest moving the OS and transaction log to two different RAID arrays
to increase performance.
But as they say it is your system, not mine and you are in a better position
to judge which configuration would give you better performance / usability
for your needs.
Regards,
John
"Dan D." <DanD@.discussions.microsoft.com> wrote in message
news:EDBD0576-C037-4EA4-82C1-07AFB8BAB668@.microsoft.com...
> From the research that I've done, using raid 5 versus raid 10 seems to be
> a
> matter of whether you're doing a lot of writes or not. For a system that does
> more than 10% writes, raid 10 is supposed to be better because it is
> faster
> when doing writes. We do a lot of data manipulation for direct mail files
> so
> we're doing a lot of writes. raid 10 also seems to be more fault tolerant
> because you can lose more than one drive in the array and still recover. I
> think you can only lose one drive in a raid 5. So, for our situation I'm
> reasonably sure that raid 10 is better than raid 5 for the data drives.
> I was thinking that putting all 14 drives into one logical drive would
> give
> us a lot more drive heads to go find the data than dividing the drives up
> into two logical drives and therefore would be faster.
> Thanks for the feedback,
> --
> Dan D.
>
> "John J. Hughes II" wrote:
|||I agree with you on that. I should have been more clear. We already have the
OS on a separate set of drives and the logs on another. I was only asking
about the data drives.
Thanks,
Dan D.
"John J. Hughes II" wrote:

> RAID 10 is RAID 1 over RAID 0. The RAID 0 would give better throughput and
> the RAID 1 mirror would give protection. I would not have a problem with
> replacing the RAID 5 with a RAID 10 in the below configuration but I would
> still suggest moving the OS and transaction log to two different RAID arrays
> to increase performance.
> But as they say it is your system, not mine and you are in a better position
> to judge which configuration would give you better performance / usability
> for your needs.
> Regards,
> John
>
> "Dan D." <DanD@.discussions.microsoft.com> wrote in message
> news:EDBD0576-C037-4EA4-82C1-07AFB8BAB668@.microsoft.com...
>
>


Tuesday, February 14, 2012

Drill Down Performance

I was wondering: when you click on the + in a drill-down report, does the report requery the database, or is the report just re-rendered without having to query again?

Depending on the answer, the performance considerations would be different.

The only requery is on the ReportServer db which has cached the results of the dataset. Your original query is not rerun.

Drill down matrix report slow performance

I have created OLAP cubes and we are creating reports on top of them. The reports take incredibly long to render (40 seconds to 5 minutes). The MDX for the reports runs in less than 15 seconds when executed as a query in Mgt. Studio. Browsing is also fast. The reports are drill down matrix-type reports and they do eventually display properly. Not that this should help with the mechanics of the issue but the report data source query will show how we drill across dimensions:


SELECT NON EMPTY { [Measures].[Number of Queries Run] } ON COLUMNS, NON EMPTY { ([ClientProjectDim].[Project ID].[Project ID].ALLMEMBERS * [UserDim].[UserName].[UserName].ALLMEMBERS * [QueryDim].[Status].[Status].ALLMEMBERS * [DateDim].[Date Hierarchy].[ClusteredHourOfDay].ALLMEMBERS ) } DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS FROM [Ad Hoc] CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS


The RS and security logs don't have anything meaningful. Since the MDX works well when executed as a query but not in the reports, I can't tell whether it's a rendering issue somewhere. Has anyone experienced this issue, and is there a resolution aside from restructuring the report?

I am experiencing similar issues but have found no answers; did you ever resolve yours?|||We did not resolve this, but rather restructured our reports. You might consider this: you can set up your reports with parameters for drill-through.|||Thanks, however I did parameterize this report. I will try to restructure and do it the long and laborious way. I noticed that in the AS browser and MDX query designer there is a nice drill-through tree for the parameters, but no drill-through in the rendered form. Is that due to a property setting I am missing?