We migrated our platform to new SQL Server 2016 instances a few days ago. Immediately after the migration we ran sp_Blitz again and noticed the following entry, which concerns the dedicated volume holding tempdb (the eight data files and the single log file all reside on that same dedicated volume). All database volumes use SSDs, so the warning is rather surprising:
Slow storage writes on drive T: Writes are averaging longer than 100ms
for at least one database on this drive. For specific database file
speeds, run the query from the information link.
No link was provided, but in the code we found that sp_Blitz uses sys.dm_io_virtual_file_stats and evaluates the following condition:

io_stall_write_ms / (1.0 + num_of_writes) > 100 AND num_of_writes > 100000
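For context, the condition above can be reproduced directly against the DMV. A minimal sketch (the join to sys.master_files and the output column names are our own choices; the threshold values are taken from the sp_Blitz logic quoted above):

```sql
-- Per-file average write stall, mirroring the sp_Blitz check.
-- Note: sys.dm_io_virtual_file_stats counters are cumulative
-- since the last instance restart.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / (1.0 + vfs.num_of_writes) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
WHERE vfs.num_of_writes > 100000
  AND vfs.io_stall_write_ms / (1.0 + vfs.num_of_writes) > 100
ORDER BY avg_write_stall_ms DESC;
```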
Running this logic ourselves, the check flags each of our eight tempdb data files, with average write stalls between 164 and 198 ms, suggesting the problem is serious.
To demonstrate the problem to management, we have since captured Performance Monitor traces of the "Avg. Disk sec/Write" counter for all volumes attached to the server, but the T volume did not appear any slower than the other volumes during the business hours we sampled.
- Are we using the wrong PerfMon counter to prove the problem? If so, which counter should we use?
- Does the figure above reflect the current write speed or the worst speed ever observed (i.e., should we ignore this warning unless it persists after the next server restart)?
- Can anyone offer additional information or advice?