@zareen said in CS403 GDB1 Solution and discussion:

Now, would you normalize the database or keep it in de-normalized form? A de-normalized schema can greatly improve performance under extreme read loads, but updates and inserts become complex because the data is duplicated and therefore has to be updated or inserted in more than one place. One clean way to solve this problem is through the use of triggers. For example, in our case, where the orders table also has a product_name column, an update to product_name can be handled as follows:

1. Set up a trigger on the products table that updates product_name in orders on any update to the products table.
2. Execute the update query on the products table.
3. The data in the orders table is updated automatically by the trigger.

However, when de-normalizing the schema, do take into consideration the number of times you would be updating records compared to the number of times you would be executing SELECTs. When mixing normalization and de-normalization, focus on de-normalizing tables that are read-intensive, and keep write-intensive tables normalized.
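
The trigger approach described above can be sketched with SQLite through Python's sqlite3 module. This is a minimal illustration, not a definitive implementation: the table and column names (products, orders, product_name, product_id) follow the post, while the trigger name `sync_product_name` and the sample data are assumptions made for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL
);

-- De-normalized: orders duplicates product_name for read performance.
CREATE TABLE orders (
    order_id     INTEGER PRIMARY KEY,
    product_id   INTEGER NOT NULL REFERENCES products(product_id),
    product_name TEXT NOT NULL
);

-- Trigger keeps the duplicated column in sync whenever
-- products.product_name is updated.
CREATE TRIGGER sync_product_name
AFTER UPDATE OF product_name ON products
BEGIN
    UPDATE orders
    SET product_name = NEW.product_name
    WHERE product_id = NEW.product_id;
END;
""")

conn.execute("INSERT INTO products VALUES (1, 'Widget')")
conn.execute("INSERT INTO orders VALUES (100, 1, 'Widget')")

# A single update to products propagates automatically to orders.
conn.execute("UPDATE products SET product_name = 'Gadget' WHERE product_id = 1")
name = conn.execute(
    "SELECT product_name FROM orders WHERE order_id = 100"
).fetchone()[0]
print(name)  # Gadget
```

Note that only the products table is touched by the application; the duplicated copy in orders is maintained entirely by the trigger, which is exactly the pattern the post recommends for read-intensive de-normalized tables.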