I am designing a database to replace a large number of CSV files that I currently use for storing data, which are starting to become messy and inconsistent. I'm using C# in Visual Studio. This is BrickLink/Lego data, and I'll just explain the section I need help with, simplifying the actual numbers:
The part of the database I'm going to focus on has 3 tables:
- Parts contains PartID and about 30 other fields (for example, mass, averageSalePrice). There are 50,000 parts.
- Store contains StoreID and about 10 other fields. There are 1,000 stores.
- StoreParts links the two in a many-to-many relationship. It contains PartID, StoreID, Date, Price, and Notes.
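For concreteness, here is a minimal sketch of how these three tables might look, using SQLite from Python purely as an illustration (the column lists are abbreviated to a couple of the fields mentioned above, and the key choices are assumptions, not something from the original design):

```python
import sqlite3

# In-memory database just for illustration
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Parts (
    PartID           TEXT PRIMARY KEY,
    Mass             REAL,
    AverageSalePrice REAL
    -- ... roughly 27 more fields in the real design
);

CREATE TABLE Store (
    StoreID TEXT PRIMARY KEY
    -- ... roughly 10 more fields in the real design
);

-- Junction table for the many-to-many relationship
CREATE TABLE StoreParts (
    PartID  TEXT NOT NULL REFERENCES Parts(PartID),
    StoreID TEXT NOT NULL REFERENCES Store(StoreID),
    Date    TEXT NOT NULL,
    Price   REAL,
    Notes   TEXT,
    PRIMARY KEY (StoreID, PartID, Date)
);
""")
```

The composite primary key on StoreParts is one plausible choice; it also gives the database an index whose leading column is StoreID, which matters for the query described below.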
Now here is my problem: each store stocks about 10,000 parts, so there would be about 10 million records in StoreParts (more if I save multiple dates). A query I'm likely to run will have to pull all the parts from a given store and compare their Price to the averageSalePrice in the Parts table. I'm worried this could be very slow, because it would have to go through 10 million StoreParts records just to find those 10,000 rows.
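For reference, the query described above is a single join, and an index on StoreParts(StoreID) lets the database jump straight to one store's rows instead of scanning all 10 million. A small sketch (SQLite from Python, with a few made-up rows and hypothetical part/store IDs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Parts (PartID TEXT PRIMARY KEY, AverageSalePrice REAL);
CREATE TABLE StoreParts (PartID TEXT, StoreID TEXT, Date TEXT, Price REAL);
-- The index is what avoids a full scan of StoreParts
CREATE INDEX idx_storeparts_store ON StoreParts(StoreID, Date);
""")
conn.executemany("INSERT INTO Parts VALUES (?, ?)",
                 [("3001", 0.10), ("3020", 0.07)])
conn.executemany("INSERT INTO StoreParts VALUES (?, ?, ?, ?)",
                 [("3001", "storeA", "2024-01-01", 0.12),
                  ("3020", "storeA", "2024-01-01", 0.05),
                  ("3001", "storeB", "2024-01-01", 0.08)])

# All parts in one store, with listed price vs. average sale price
rows = conn.execute("""
    SELECT sp.PartID, sp.Price, p.AverageSalePrice
    FROM StoreParts sp
    JOIN Parts p ON p.PartID = sp.PartID
    WHERE sp.StoreID = ?
""", ("storeA",)).fetchall()
```

With an index like this, the lookup cost scales with the 10,000 rows for that store, not the 10 million in the whole table, which is essentially the same access pattern as opening one CSV file per store.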
When I used CSV files to store the data, I had one file per store/date, so I only had to open the one file containing those 10,000 parts. I assume that would be more efficient than having to find one store's parts among 10 million or more rows.
Is it possible to set up my database to create a separate table for each store? I think lookups would be faster, but from what I've seen this goes against best practices in database design, since I'd end up with 1,000 store tables. And if I also store data for different dates (for example, a store's prices for all of its parts on 100 different dates), things could get unwieldy very quickly.
I'd love to get advice on this, because I want to do it right and stop keeping CSV files everywhere like I do now. Thank you.