These days I'm really busy (my posts here have decreased) and one of
the main activities I've been working on in this period is optimizing
the performance of big ERP systems based on Microsoft Dynamics
NAV with SQL Server.
In recent weeks I've done a lot of work with a Microsoft internal resource in order to check
all the possible causes of performance problems on this type of Enterprise
Application (Dynamics NAV systems with a big number of concurrent users), and
here is a brief summary of the tips I recommend checking in order to optimize
performance and reduce the number of locks:
1) Check your code carefully and try to reorganize it in the best way:
often many people work on the same code over the years, and it can end up
neither optimized nor well written. Review it in order to have the
best code possible (avoid redundancy).
2) Use the right keys for queries: always select the best key that satisfies your
needs and use the fields of the key in the right order. For example, if you have
a key Document No.,Posting Date, apply the filters in that same order: first on
Document No., then on Posting Date.
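As a sketch in C/AL (the record variable and filter values here are just hypothetical examples), selecting the key and then filtering its fields in order could look like:

```
// Hypothetical: a record with a key on Document No.,Posting Date
SalesLine.SETCURRENTKEY("Document No.","Posting Date");
SalesLine.SETRANGE("Document No.",'ORD-001');          // first key field
SalesLine.SETRANGE("Posting Date",010107D,311207D);    // second key field
IF SalesLine.FINDFIRST THEN;
```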
3) Where possible, keep the locking order the same across all transactions
(always lock tables in the same sequence) in order to reduce the risk of deadlocks.
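A minimal C/AL sketch of the idea (the table names are just examples): if every transaction that touches both tables always locks them in the same order, two transactions cannot deadlock on that pair.

```
// Convention assumed here: always lock the header first, then the lines
SalesHeader.LOCKTABLE;
SalesLine.LOCKTABLE;
// ...perform the modifications on both tables...
```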
4) On every table (especially the biggest tables) delete the keys that
you're not using anymore and deactivate the keys that are used rarely (remember
that every key is an index on SQL Server that must be updated on every insertion
into the table, and maintaining the index structure costs a lot).
5) Check the MaintainSQLIndex property of a key.
Activate it on keys that you use frequently for big calculations or
filters, and deactivate it on keys that you use less. Remember
that sometimes SQL Server locks the index rather than the table, so setting
MaintainSQLIndex to FALSE can help in reducing locks (it will improve write
transactions, but will slow down read operations on that key).
6) Check the MaintainSIFTIndex property of a key
and activate it only on keys that must be used to calculate important FlowFields
(the SIFT structure maintained on SQL Server is complex). Remember that
sometimes SQL Server locks the SIFT table rather than the main table, so setting
MaintainSIFTIndex to FALSE can help in reducing locks (it will improve write
transactions, but slow down FlowField calculation).
7) Use MODIFYALL and DELETEALL to modify/delete large blocks of data
instead of looping, so replace loops like this:

IF Table.FIND('-') THEN
  REPEAT
    Table.Field := Value;
    Table.MODIFY;
  UNTIL Table.NEXT = 0;

with instructions like:

Table.MODIFYALL(Field,Value);

MODIFYALL/DELETEALL are performed on the server side, while the first code is
performed on the client side. In the first code SQL Server locks the current
record and the records before and after the current one, and it's much slower.
8) Starting from Dynamics NAV 4.0 SP1, three new instructions optimized for SQL Server
were introduced (and I've talked about
them in the past): FINDFIRST, FINDLAST and FINDSET. Replace your Table.FIND('-') code
with Table.FINDFIRST and your Table.FIND('+') code with Table.FINDLAST when you only
need one record. These instructions (if you check
the SQL code) are translated into a SELECT TOP 1 and return one record
(you don't have a full scan of the table).
These instructions are also optimized regarding cache usage: if you use
FIND for looping on a set of records, it takes 3 loops before Navision
starts using the cache for .NEXT. If you use FINDSET for the loop, the cache is used
from the first iteration instead.
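A quick C/AL sketch of the new instructions (the record variables and field names are hypothetical):

```
// Only one record needed: translated into SELECT TOP 1
IF Customer.FINDFIRST THEN
  MESSAGE(Customer."No.");
IF Customer.FINDLAST THEN
  MESSAGE(Customer."No.");

// Looping on a set: FINDSET uses the cache from the first iteration
IF SalesLine.FINDSET THEN
  REPEAT
    // ...process the record...
  UNTIL SalesLine.NEXT = 0;
```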
9) Check the clustered index on the tables (by default it is the primary key;
verify that this is the right choice for how the table is accessed).
10) Periodically rebuild and defragment your indexes directly from SQL Server:
DBCC DBREINDEX (or ALTER INDEX ... REBUILD in SQL Server 2005) to rebuild an index
DBCC INDEXDEFRAG (or ALTER INDEX ... REORGANIZE in SQL Server 2005) to defragment an index
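As a T-SQL sketch (the table name is just a placeholder for one of your NAV tables):

```
-- SQL Server 2000 syntax: rebuild all indexes on the table
DBCC DBREINDEX ('[CRONUS$Sales Line]');

-- SQL Server 2005 syntax
ALTER INDEX ALL ON [CRONUS$Sales Line] REBUILD;    -- full rebuild (offline by default)
ALTER INDEX ALL ON [CRONUS$Sales Line] REORGANIZE; -- lighter, online defragmentation
```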
11) Remember to update statistics directly from SQL Server:
UPDATE STATISTICS [table_name] to update statistics for a single table
EXEC sp_updatestats to update statistics for all the tables
12) Use the Index Hinting and Lock Granularity configurations on SQL Server. In
particular, enable lock granularity in order to let SQL Server
itself decide the best lock to apply. To do this:
- Create this table on SQL Server: CREATE TABLE
[$ndo$dbconfig](config VARCHAR(512) NOT NULL)
- Grant your users read permission on the table
- Insert the parameter into the config field
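The first two steps can be sketched in T-SQL like this (the exact config string to insert afterwards depends on your NAV version, so it is not reproduced here):

```
-- Table read by the NAV client at connection time
CREATE TABLE [$ndo$dbconfig] (config VARCHAR(512) NOT NULL);

-- Every NAV user must be able to read it
GRANT SELECT ON [$ndo$dbconfig] TO [public];
```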
These are tips that must be carefully analyzed in order to improve your
performance. Obviously, there's another aspect that I've not taken into
consideration here: the hardware configuration and how the database is
created on the server.
If you have a large database with lots of concurrent users, it's recommended
to have a good server (multiprocessor and with a big amount of RAM) and disks in a
RAID configuration; the best choice is to split the database across at least 3
disks (one for the primary data file, one for the secondary data file and one for
the transaction log).