Thursday, March 29, 2012

Dropping indexes before bulk insert

Hi all,

I have a huge table, 170 GB in size. On that table we have 10 indexes,
around 12 GB in size.

The application is designed to bulk insert a file into SQL Server.
But we often get timeouts and some latching issues (as can be seen in
Activity Monitor).

So, would it be a good idea to drop those indexes and then recreate
them afterwards for better performance?

1) It's SQL Server 2005 Standard Edition SP1.

2) Databases are in SIMPLE Recovery mode.

3) Database is not OLTP.

Thanks.

//N

Hi Naj,

Do you see any blocking on the server while the bulk insert is running?

What do you mean by latching issues?

Could you please look at the wait times and wait types in the following view and let us know what you see there?

select * from sys.dm_os_waiting_tasks where session_id > 50

Jag


You should be better off dropping the indexes and, after the bulk insert, creating them offline. Indexes created offline have a much smaller footprint in the log and are much faster to build than indexes created online. As a trade-off, your table will be offline for a while; how long depends on the clustered key, since with a large table the clustered key plays a very important role when rebuilding indexes.
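A minimal sketch of that drop/load/recreate workflow. The table name, index name, column, and file path are placeholders for illustration only, not details from the thread:

-- Drop the nonclustered index before the load (illustrative name).
DROP INDEX IX_BigTable_Col1 ON dbo.BigTable;

-- Bulk insert the file (path and options are placeholders).
BULK INSERT dbo.BigTable
FROM 'D:\loads\bigtable.dat'
WITH (TABLOCK, BATCHSIZE = 100000);

-- Recreate the index offline once the load completes.
CREATE NONCLUSTERED INDEX IX_BigTable_Col1
ON dbo.BigTable (Col1)
WITH (ONLINE = OFF, SORT_IN_TEMPDB = ON);

Repeat the drop/create for each nonclustered index; the clustered index is usually left in place so the table does not have to be rebuilt as a heap.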

Another point to consider is switching off AUTO_UPDATE_STATISTICS while inserting rows into your table; inserting data and updating statistics at the same time will kill performance. There is also a newer option, AUTO_UPDATE_STATISTICS_ASYNC, which provides background statistics updates. Use it with extra care, as it can hinder performance significantly.
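A sketch of how that might look, assuming a database named MyDb and the same placeholder table as above (both illustrative); statistics are refreshed manually once the load finishes:

-- Turn off automatic statistics updates before the load (database name is a placeholder).
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS OFF;

-- ... run the bulk insert here ...

-- Re-enable automatic updates and refresh statistics on the loaded table.
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS ON;
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;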

See this article: http://www.mssqltips.com/tip.asp?tip=1193
