I have some tables used in reporting, and some of them grow huge with the daily ETL, so I have implemented some jobs that delete rows more than x days old. For example, every day roughly 10% new data gets added, and another job deletes the oldest 10% of the rows.
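For reference, the purge jobs are essentially batched deletes along these lines (the table name, column name, and retention value here are illustrative, not my real schema):

```sql
-- Illustrative purge job: delete rows older than @days in small batches
-- so the transaction log and lock footprint stay manageable.
DECLARE @days int = 30;

DELETE TOP (10000)
FROM dbo.ReportStaging                              -- hypothetical table
WHERE LoadDate < DATEADD(DAY, -@days, SYSDATETIME());
-- The job repeats this statement in a loop until @@ROWCOUNT = 0.
```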
Do I need to do something for efficiency? Some tables are heaps with nonclustered indexes, and some also have clustered indexes. For example, do I need to rebuild the table (in the case of a heap) or rebuild the index (in the case of a clustered index)? If so, how often? Most tables are only used once per day; when all joins and calculations are done, the results get extracted for visualization in a non-live manner.
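To be concrete, these are the maintenance commands I am asking about (table and index names are illustrative):

```sql
-- For a heap: rebuild the table itself to reclaim the empty pages
-- left behind by deletes (a plain DELETE on a heap does not release them).
ALTER TABLE dbo.ReportHeap REBUILD;

-- For a table with a clustered index: rebuild the index ...
ALTER INDEX IX_Report_Clustered ON dbo.ReportTable REBUILD;

-- ... or the lighter-weight, always-online alternative:
ALTER INDEX IX_Report_Clustered ON dbo.ReportTable REORGANIZE;
```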
I know a little about index fragmentation and that one can query the fragmentation percentage. How much do I need to worry about it in the scenario above?
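This is the kind of fragmentation check I mean, using the `sys.dm_db_index_physical_stats` DMV (the table name is illustrative):

```sql
-- Average fragmentation and page count per index for one table
-- in the current database ('LIMITED' mode is the cheapest scan level).
SELECT i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.ReportTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```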