Normal Forms

Database Normalization

Database normalization is the process of organizing the columns and tables of a relational database to minimize data redundancy.
Normalization involves decomposing a table into less redundant tables without losing information, then defining foreign keys in the old table that reference the primary keys of the new ones. The objective is to isolate data so that additions, deletions, and modifications of an attribute can be made in just one table and then propagated through the rest of the database via the defined foreign keys.
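As a minimal sketch of this decomposition, consider a hypothetical orders table that repeats customer data on every row. The schema and values below (customers, orders, Ada, widget) are illustrative assumptions, not from the text; the point is that the customer attributes move into their own table, a foreign key replaces the repeated columns, and a join recovers the original rows without losing information.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the references below

# Normalized design: each customer attribute lives in exactly one place.
conn.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        city        TEXT NOT NULL
    )
""")
conn.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
        item        TEXT NOT NULL
    )
""")

conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'London')")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(10, 1, 'widget'), (11, 1, 'gadget')])

# A join along the foreign key reconstructs the original wide rows,
# so the decomposition is lossless.
rows = conn.execute("""
    SELECT o.order_id, c.name, c.city, o.item
    FROM orders o JOIN customers c USING (customer_id)
""").fetchall()
print(rows)
```

Changing Ada's city now touches one row in `customers`, and every joined order row reflects it automatically.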
Edgar F. Codd introduced the concept of normalization, and what we now know as first normal form (1NF), in 1970. He went on to define second normal form (2NF) and third normal form (3NF) in 1971, and together with Raymond F. Boyce he defined Boyce-Codd normal form (BCNF) in 1974.
A relational database table is often described as "normalized" if it meets third normal form (3NF). Most 3NF tables are free of insertion, update, and deletion anomalies. The relational model separates logical design from physical design: DBMS performance is a matter of
  1. physical design using indexes,
  2. view materialization,
  3. large buffers.
It is not a matter of changing the logical design.
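To make the update anomaly concrete, here is a small self-contained sketch; the table contents (Ada, London, widget) are hypothetical. In an unnormalized table the customer's city is repeated on every order row, so an update must touch every copy, and missing one row leaves the table contradicting itself.

```python
# One wide, unnormalized table: the customer's city is repeated per order.
wide = [
    {"order_id": 10, "name": "Ada", "city": "London", "item": "widget"},
    {"order_id": 11, "name": "Ada", "city": "London", "item": "gadget"},
]

# Update anomaly: changing Ada's city means editing every matching row.
# If this loop skipped a row, two rows would disagree about her city.
for row in wide:
    if row["name"] == "Ada":
        row["city"] = "Paris"

# Consistency holds only because all copies were updated together.
cities = {row["city"] for row in wide if row["name"] == "Ada"}
print(cities)
```

With the city stored once in a separate customers table, the loop collapses to a single-row update and the anomaly disappears by construction.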

Database Design and Theory
1) A novice database designer might attempt to store every piece of data in a single table

2) Using the rules of normalization and your knowledge of the business objects involved, you can split that large table into two smaller, more efficient tables

3) Often you can repeat the process to break those tables into still smaller tables

4) until you have broken the single table into a series of smaller tables that reflect the business objects represented in your database
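The steps above can be sketched without a DBMS at all; the column names and values below are assumptions for illustration. A single wide table is split along a business object (the customer), and a join check confirms the split loses nothing, which is exactly the test you would repeat after each further decomposition.

```python
# Step 1: one wide table holding everything (order_id, name, city, item).
wide = [
    (10, "Ada", "London", "widget"),
    (11, "Ada", "London", "gadget"),
    (12, "Bob", "Oslo",   "widget"),
]

# Step 2: split on the business objects "customer" and "order".
customers = {name: city for (_, name, city, _) in wide}             # name -> city
orders = [(oid, name, item) for (oid, name, _, item) in wide]       # keeps a key to customers

# Lossless check: joining the smaller tables rebuilds the wide table.
# Steps 3-4 would repeat this split/check cycle on any table that is
# still redundant, until each table maps to one business object.
rejoined = [(oid, name, customers[name], item) for (oid, name, item) in orders]
assert rejoined == wide
print(len(customers), len(orders))
```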