There is only one reason not to use the largest block size available in your environment for a new database: the inability to support a large number of users performing concurrent updates and inserts against rows in a single block.
Beyond that concern, increasing the database block size should improve the performance of almost every part of your application.
Larger database block sizes help keep indexes from splitting levels and help keep more data in memory longer.
To support many concurrent inserts and updates, increase the setting for the pctfree parameter at the object level.
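As a sketch of that advice, the hypothetical table below reserves extra free space in each block with PCTFREE; it also sets INITRANS, a related physical attribute (not discussed above) that preallocates transaction slots so many sessions can modify rows in the same block concurrently. The table name and values are illustrative only.

```sql
-- Hypothetical table for a high-concurrency workload:
-- PCTFREE 30 reserves 30% of each block for future row growth,
-- and INITRANS 8 preallocates eight transaction slots per block.
CREATE TABLE orders (
  order_id  NUMBER PRIMARY KEY,
  status    VARCHAR2(20),
  notes     VARCHAR2(400)
) PCTFREE 30 INITRANS 8;
```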
Store Data Efficiently at the Block Level
Oracle caches data blocks in memory, so it is in your best interest to make sure those blocks are as densely packed with data as possible.
If your data storage is inefficient at the block level, you will gain less benefit from the caches available in the database. If the rows in your application are not going to be updated, set pctfree as low as possible.
For partitioned tables, set the pctfree value for each partition to maximize the row storage within blocks.
By default, each partition will use the pctfree setting for the table. Set a low pctfree value for indexes.
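The partition-level and index-level settings described above might be applied as follows. The table, partition, and index names are hypothetical; the pctfree values are illustrative, contrasting a read-mostly partition with one that still receives updates.

```sql
-- Read-mostly historical partition: pack blocks tightly.
ALTER TABLE sales MODIFY PARTITION sales_2023 PCTFREE 5;

-- Current partition still receiving updates: reserve more free space.
ALTER TABLE sales MODIFY PARTITION sales_2024 PCTFREE 20;

-- Indexes need free space only for inserts into existing leaf blocks,
-- so a low PCTFREE is usually appropriate.
CREATE INDEX sales_cust_idx ON sales (customer_id) PCTFREE 5;
```

Note that changing pctfree on an existing partition affects only blocks formatted after the change; existing blocks keep their old free-space threshold.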
If the pctfree setting is too low, an update may force Oracle to move the row to another block, leaving behind a pointer to its new location (a migrated row).
In some cases row chaining is inevitable, such as when your row length is greater than your database block size.
When row chaining and migration occur, each access of an affected row requires reading multiple blocks, increasing the number of logical reads for each command. You can detect these conditions by analyzing the table and then checking its statistics:
the Chain_Cnt column will be populated with the number of chained and migrated rows when the table is analyzed.
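A minimal sketch of that check, assuming a hypothetical ORDERS table. The older ANALYZE command is used here because it populates Chain_Cnt; the DBMS_STATS package, the usual tool for optimizer statistics, does not.

```sql
-- Analyze the table so CHAIN_CNT is populated, then query it.
ANALYZE TABLE orders COMPUTE STATISTICS;

SELECT table_name, num_rows, chain_cnt
  FROM user_tables
 WHERE table_name = 'ORDERS';
```

To identify the individual rows involved, ANALYZE TABLE orders LIST CHAINED ROWS writes the affected rowids into a CHAINED_ROWS table (created beforehand with the utlchain.sql script).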