SQL Databases using Client/Server Architecture
Client/server in the domain of a database management system refers to how information is processed to fulfill a request.
With older, non-client/server systems, your computer retrieves the entire set of records from the database and then sorts through them locally to determine which information it needs to answer your request.
With client/server, your system, the client, creates the request. The request is sent to the server, which focuses on the best way to answer the request.
Once the information has been assembled to answer the request, only those results that correspond to the request are returned, ready to be used immediately by the client application.
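The contrast can be sketched in a few lines of Python using the standard-library `sqlite3` module (the table and data here are hypothetical, and an in-memory database stands in for the shared server):

```python
import sqlite3

# An in-memory database stands in for the shared server; the data is made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ada", "Engineering"), ("Grace", "Engineering"),
                  ("Linus", "Support")])

# Non-client/server style: pull every record, then filter locally.
all_rows = conn.execute("SELECT name, dept FROM employees").fetchall()
engineers_local = [name for name, dept in all_rows if dept == "Engineering"]

# Client/server style: the WHERE clause lets the server do the filtering,
# so only the matching rows cross the network.
engineers_server = [name for (name,) in conn.execute(
    "SELECT name FROM employees WHERE dept = ?", ("Engineering",))]

print(engineers_local)   # ['Ada', 'Grace']
print(engineers_server)  # ['Ada', 'Grace']
```

Both approaches yield the same answer; the difference is where the work happens and how much data travels to the client.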
The client/server concept is similar to realizing you need a file from a file cabinet down the hall. You have two choices:
You can have the file cabinet wheeled to your office where you can sort through the files manually, or you can simply request the specific file you need, then have someone retrieve only that single file and return it to you. This latter scenario is typical of a client/server situation.
The client/server model involves one or more shared computers, called servers, connected by a network to the workstations of the individual users, called clients.
Client/server computing arrived in the 1980s, riding a wave of marketing hype from hardware and software vendors the likes of which the IT industry had never seen.
The original model used is now called the two-tier client/server model, which later evolved into what we call the three-tier client/server model, and finally into the N-tier client/server model, also known as the Internet computing model.
Each of these is discussed in the following subsections.
Two-Tier Client/Server Model
The two-tier client/server model is almost the opposite of the centralized model in that all the business and presentation logic is placed on the client workstation, which typically is a high-powered personal computer system.
The only thing remaining on a centralized server is the database.
The two-tier model was intended to take advantage of the superior presentation and user-interface capabilities of the modern workstation.
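A minimal sketch of the two-tier split, again using the standard-library `sqlite3` module: only the database lives on the "server," while the business logic (a hypothetical discount rule) and the presentation both run on the client workstation. The table, data, and rule are all invented for illustration:

```python
import sqlite3

# The "server" tier: only the database lives here (in-memory for the sketch).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 120.0), (2, 80.0)])

# The "client" tier: business logic and presentation run on the workstation;
# only raw SQL and result rows cross the network.
def apply_discount(amount):
    # Hypothetical business rule: 10% off orders of 100 or more.
    return amount * 0.9 if amount >= 100 else amount

rows = db.execute("SELECT id, amount FROM orders").fetchall()
for order_id, amount in rows:
    print(f"Order {order_id}: billed {apply_discount(amount):.2f}")
```

Note that the discount calculation never reaches the server; in the three-tier model that follows, such rules move off the workstation to a middle tier.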
The marketing hype of the late 1980s and early 1990s promised faster development of better application systems at a lower cost. The promises were too good to be true, but the vendors appeared to be offering a solution, and the business managers of the day were far too willing to believe them.
The deception lay in the cost comparisons between mainframes and central servers on one side and workstations on the other. The vendors typically showed cost comparisons in dollars per MIPS (millions of instructions per second).
The problem was that a given instruction on the personal computers of the day did far less work than a given instruction on a mainframe or high-powered server.
So the comparison was really one of apples and oranges. Cynics of the day redefined MIPS as a "meaningless indicator of processor speed," and they were not far wrong.
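To see why dollars-per-MIPS misled, consider a worked example with entirely made-up figures (these are not actual period prices):

```python
# Hypothetical, illustrative figures only -- not real 1990s prices or specs.
mainframe_cost, mainframe_mips = 1_000_000, 50   # dollars, MIPS
pc_cost, pc_mips = 5_000, 10

print(mainframe_cost / mainframe_mips)  # 20000.0 dollars per MIPS
print(pc_cost / pc_mips)                # 500.0 dollars per MIPS

# The flaw: a PC "instruction" accomplished far less useful work than a
# mainframe instruction, so dollars per MIPS compared unlike units and
# made the workstation look far cheaper than it really was.
```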
The other factor that was largely ignored was that personal computers did not read from and write to their disks at anywhere near the rates achieved by mainframes and high-powered servers.
So although moving all the application programs (business logic) to the client workstations appeared to be a much less expensive solution, it was, in fact, a false economy.