Tuesday, March 27, 2012

Dropping and re-creating an index kills performance

Hello all,
I'm new to some of the index operations, so hopefully this is a trivial
question. I attempted to increase the length of a database field, and
the change failed with a message that an index including the field
would exceed the 900-byte index key limit. That's a large index to
begin with, so my
boss thought we should delete the index and see if we took a
performance hit. The first execution of a relevant stored procedure
after the delete actually yielded faster results. However, every
attempt thereafter tanked. No big deal, right? Just re-add the index. I
did, just as it existed previously, and there was no change in the poor
performance. I get the same results with or without the index. Can
anyone tell me what might be causing this and what I can do to get the
index to behave as before?
Thanks in advance,
Shannon

In Query Analyzer, run the Estimated Execution Plan and see if your
query is using the index. If it's not, you can always tell it to
manually with an index hint, as in the sketch below.
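Something like this forces a particular index; the table and index
names here are made up for illustration:

    -- Hypothetical names, just to show the table-hint syntax.
    -- WITH (INDEX (...)) forces the optimizer to use the named index:
    SELECT o.OrderID, o.OrderDate
    FROM dbo.Orders AS o WITH (INDEX (IX_Orders_CustomerRef))
    WHERE o.CustomerRef = 'ALFKI';

Check the plan first, though; forcing an index the optimizer chose to
skip can make things worse.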
Shannon Cayze wrote:
> so my boss thought we should delete the index and see if we
> took a performance hit.
This was the first mistake. I hope you made the case that blindly
removing a production index and hoping for the best was not the proper
protocol. At the very least an audit of all queries that used that
column in the index should have been undertaken to see if it's even
used, and if so, how. And all this should have been performed on a test
server, not in production. Scary.
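As a rough first pass, something like this shows which stored
procedures even mention the column ('YourColumn' is a placeholder, and
ROUTINE_DEFINITION is truncated, so treat it as a starting point, not
an exhaustive audit):

    -- Rough search for procedures that reference the column.
    -- 'YourColumn' is a placeholder; dynamic SQL and ad hoc
    -- queries from applications will not show up here.
    SELECT ROUTINE_SCHEMA, ROUTINE_NAME
    FROM INFORMATION_SCHEMA.ROUTINES
    WHERE ROUTINE_DEFINITION LIKE '%YourColumn%';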
Secondly, it would really help here to see the before and after
execution plans for those queries that were affected by the index
removal and subsequent re-add. That would require that you first
examined existing queries to see how they were using the index and
compare that to how the queries are using the index now. Since you have
no baseline, there's no way to tell what changed. Are you sure the index
was re-created with the same parameters (clustered, non-clustered,
unique, fill-factor, etc.)?
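A quick way to check is to compare the index's current definition and
properties against the old ones ('dbo.YourTable' and 'YourIndex' are
placeholder names):

    -- List the table's indexes, keys, and descriptions:
    EXEC sp_helpindex 'dbo.YourTable';

    -- Check individual properties of the re-created index:
    SELECT INDEXPROPERTY(OBJECT_ID('dbo.YourTable'), 'YourIndex', 'IsClustered') AS IsClustered,
           INDEXPROPERTY(OBJECT_ID('dbo.YourTable'), 'YourIndex', 'IsUnique') AS IsUnique,
           INDEXPROPERTY(OBJECT_ID('dbo.YourTable'), 'YourIndex', 'IndexFillFactor') AS Fill_Factor;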
It's possible the procedure needs to be recompiled. You can use
sp_recompile [ @objname = ] 'object' to do this.
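For example, if the procedure were named dbo.YourProcedure (a
placeholder name):

    -- Flag the procedure so its plan is rebuilt on next execution:
    EXEC sp_recompile @objname = 'dbo.YourProcedure';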
David Gugick
Quest Software
www.imceda.com
www.quest.com

First of all, I never said it was production. In fact, it was the test
environment. Secondly, I do have the baseline execution plan and the
results of subsequent queries after altering the index. I didn't
include them because I wanted to keep the posting short and all I
wanted was a few general ideas on how to get a deleted and re-added
index performing as before. However, thank you for the sp_recompile
hint. Hopefully,
it will help.

Shannon Cayze wrote:
> First of all, I never said it was production. In fact, it was the test
> environment. Secondly, I do have the baseline execution plan and the
> results of subsequent queries after altering the index. I didn't
> include them because I wanted to keep the posting short and all I
> wanted was a few general ideas on how to get a deleted and
> re-added index performing as before. However, thank you for the
> sp_recompile hint.
> Hopefully, it will help.
Post the baseline and new execution plans so we can see the differences.
Also post the query in question. I'm guessing a recompile will do it,
but if not, post all the related information and we'll see if we can
find a solution for you.
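If it helps, a text version of a plan is easy to capture and paste
(the SELECT below is just a stand-in for the affected query):

    -- SHOWPLAN_TEXT must be the only statement in its batch;
    -- while it is ON, queries are compiled but not executed.
    SET SHOWPLAN_TEXT ON;
    GO
    SELECT * FROM dbo.YourTable WHERE YourColumn = 'some value';
    GO
    SET SHOWPLAN_TEXT OFF;
    GO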
David Gugick
Quest Software
www.imceda.com
www.quest.com
