Advanced Databases
MC9276 ADVANCED DATABASES
UNIT I -- PARALLEL AND DISTRIBUTED DATABASES
Database System Architectures: Centralized and Client-Server Architectures – Server System Architectures – Parallel Systems- Distributed Systems – Parallel Databases: I/O Parallelism – Inter and Intra Query Parallelism – Inter and Intra operation Parallelism – Distributed Database Concepts - Distributed Data Storage – Distributed Transactions – Commit Protocols – Concurrency Control – Distributed Query Processing – Three Tier Client Server Architecture- Case Studies.
1.1.Database System Architectures
The architecture of a database system is influenced by the underlying computer system on which it runs, in particular by such aspects of computer architecture as networking, parallelism and distribution.
Networking of computers allows some tasks to be executed on a server system and some tasks to be executed on client systems. This division of work has led to client-server database systems. Parallel processing within a computer system allows database system activities to be sped up, giving faster responses to transactions as well as more transactions per second. The need for parallel query processing has led to parallel database systems. Keeping multiple copies of the database across different sites is known as a distributed database; it allows large organizations to continue their database operations even when one site is affected by a natural disaster, such as a flood, fire or earthquake.
1.2.Centralized System Architecture
Centralized database systems are those that run on a single computer system and do not interact with other computer systems.
A modern general purpose computer system consists of one to a few processors and a number of device controllers that are connected through a common bus that provides access to shared memory.
The processors have local cache memories that store local copies of parts of memory, to speed up access to data. Each processor may have several independent cores, each of which can execute a separate instruction stream. Each device controller is in charge of a specific type of device.
We distinguish two ways in which computers are used:
a)Single-user systems
b)Multi-user systems
a)Single-user systems
Personal computers and workstations fall into this category. A single-user system is a desktop unit used by a single person, usually with only one processor and one or two hard disks, and with only one person using the machine at a time. It does not support concurrency control, and its provisions for crash recovery are either absent or primitive.
b)Multi-user systems
A multi-user system may have multiple processors, more disks, more memory and a multi-user OS. It serves a large number of users who are connected to the system remotely, and it supports concurrency control & full transactional features.
1.3.Client-Server Architecture
General structure of Client-server System
Server systems satisfy requests generated by client systems, following the general structure given above. In a client-server system, database functionality can be broadly divided into two parts:
a)Front-end
b)Back-end
a)Front-end
It consists of tools such as the SQL user interface, forms interfaces, report generation tools, data mining & analysis tools.
b)Back-end
It manages access structures, query evaluation and optimization, concurrency control and recovery.
The interface between the front-end and the back-end is through SQL or through an application program interface; standards such as ODBC & JDBC were developed to interface clients with servers. Certain application programs, such as spreadsheet and statistical analysis packages, use the client-server interface directly to access data from a back-end server.
Some transaction-processing systems provide a transactional remote procedure call interface to connect clients with a server. These calls appear like ordinary procedure calls to the programmer, but all the remote procedure calls from a client are enclosed in a single transaction at the server end.
1.4.Server System Architecture
Server system can be broadly categorized as
a)Transaction servers – widely used in relational database systems.
b)Data servers – used in object-oriented database systems.
c)Cloud servers
a)Transaction Servers
A transaction server system, also called a query server system or SQL server system, provides an interface to which clients can send requests to perform an action; in response, the server executes the action and sends the results back to the client. Usually, client machines send transactions to the server system, where those transactions are executed & results are sent back to the clients, which are in charge of displaying the data.
Requests are specified in SQL or through a specialized application program interface such as remote procedure calls, ODBC (a C-language application program interface standard from Microsoft for connecting to a server) or JDBC (similar to ODBC, for Java).
A typical transaction-server system consists of multiple processes accessing data in shared memory.
Server Processes:
Server processes receive user queries (transactions), execute them and send the results back. Queries may be submitted to the server processes from a user interface, from a user process running embedded SQL, or via JDBC, ODBC or similar protocols. Some databases use a separate process for each user session, and a few use a single database process for all user sessions, with multiple threads so that multiple queries can execute concurrently. Many database systems use a hybrid architecture, with multiple processes, each one running multiple threads.
Lock Manager Process:
Lock manager process implements lock manager functionality, which includes lock grant, lock release and deadlock detection.
Database Writer Process:
The database writer processes are one or more processes that output modified buffer blocks back to disk on a continuous basis.
Log Writer Process:
The log writer process outputs log records from the log record buffer to stable storage. Server processes simply add log records to the log record buffer in shared memory and, if a log force is required, request the log writer process to output the log records.
Checkpoint Process:
Checkpoint process performs periodic checkpoints.
Process Monitor Process:
Process monitor process monitors other processes and if any of them fails, it takes recovery actions for the process such as aborting any transaction being executed by the failed process, and then restarting the process.
Shared Memory:
It contains all shared data such as
1) Buffer Pool
2) Lock table
3) Log buffer, which contains log records waiting to be output to the log on stable storage.
4) Cached query plans, which can be reused if the same query is submitted again.
All database processes can access the data in shared memory.
To avoid the overhead of interprocess communication for lock requests and grants, each database process operates directly on the lock table instead of sending requests to the lock manager process. The lock manager process is still used for deadlock detection.
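The idea of server processes operating directly on a shared lock table can be sketched as follows. This is a minimal illustrative sketch, not a real DBMS component: the class and item names are invented, only compatible (shared) locks are modeled, and the short-term latch stands in for the mutual exclusion a real system uses on the lock table.

```python
# Minimal sketch: threads (standing in for server processes) update a
# shared lock table directly, guarded by a latch, rather than sending
# messages to a separate lock-manager process.
import threading

class LockTable:
    def __init__(self):
        self.latch = threading.Lock()   # short-term latch on the table
        self.holders = {}               # data item -> set of txn ids

    def lock(self, item, txn):
        # Only compatible (shared) locks are modeled here; a real lock
        # table also records lock modes and wait queues.
        with self.latch:
            self.holders.setdefault(item, set()).add(txn)

    def unlock(self, item, txn):
        with self.latch:
            self.holders[item].discard(txn)
            if not self.holders[item]:
                del self.holders[item]

table = LockTable()
table.lock("account:A-101", txn=1)   # grant without IPC to a lock manager
table.unlock("account:A-101", txn=1)
```

Deadlock detection would still be done by a separate process that periodically scans this table's wait information.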
b) Data Servers
1) Used on high-speed LANs, in cases where:
The clients are comparable in processing power to the server, and
The tasks to be executed are compute intensive.
2) Data are shipped to clients, where:
Processing is performed, and
Results are then shipped back to the server.
3) This architecture requires full back-end functionality at the clients.
4) Used in many object-oriented database systems.
5) Issues:
Page-Shipping versus Item-Shipping
Locking
Data Caching
Lock Caching
1.5.Parallel Systems
Parallel database systems consist of:
multiple processors and
multiple disks
connected by a fast interconnection network.
A coarse-grain parallel machine consists of:
a small number of powerful processors.
A massively parallel or fine-grain parallel machine utilizes:
thousands of smaller processors.
Two main performance measures:
Throughput:
the number of tasks that can be completed in a given time interval.
Response time:
the amount of time it takes to complete a single task, measured from the time it is submitted.
Speedup:
a fixed-size problem executing on a small system is:
given to a system which is N times larger.
Measured by:
speedup = small system elapsed time / large system elapsed time
Speedup is linear if the speedup equals N.
Scaleup:
increase the size of both the problem and the system:
an N-times larger system is used to perform an N-times larger job.
Measured by:
scaleup = small system small problem elapsed time / big system big problem elapsed time
Scaleup is linear if the scaleup equals 1.
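The two formulas above can be checked with a short sketch. All the timing numbers here are invented purely for illustration.

```python
# Sketch of the speedup and scaleup metrics defined above.

def speedup(small_sys_time, large_sys_time):
    # Same fixed-size problem, run on a system N times larger.
    return small_sys_time / large_sys_time

def scaleup(small_time, big_time):
    # Problem and system both made N times larger.
    return small_time / big_time

# Linear speedup: an N = 4 system finishes the same job 4x faster.
print(speedup(100.0, 25.0))   # 4.0

# Linear scaleup: an N-times larger job on an N-times larger system
# takes the same elapsed time as the original job on the original system.
print(scaleup(100.0, 100.0))  # 1.0
```

In practice, startup costs, interference and skew (discussed next) push both values below these linear ideals.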
Factors Limiting Speedup and Scaleup
Speedup and scaleup are often sublinear due to:
Startup costs:
Cost of starting up multiple processes may dominate computation time,
if the degree of parallelism is high.
Interference:
Processes accessing shared resources (e.g., the system bus, disks, or locks)
compete with each other,
thus spending time waiting on other processes
rather than performing useful work.
Skew:
Increasing the degree of parallelism
increases the variance in the service times of the tasks executing in parallel.
The overall execution time is determined by the slowest of the parallel tasks.
1.6.Parallel Database Architectures
Shared memory –
processors share a common memory
Shared disk –
processors share a common disk
Shared nothing –
processors share neither a common memory nor common disk
Hierarchical –
hybrid of the above architectures
Parallel Database Architectures
Shared Memory
Processors and disks have access to a common memory,
typically via a bus
or through an interconnection network.
Extremely efficient communication between processors —
data in shared memory can be accessed by any processor
without having to move it using software.
Downside – architecture is not scalable beyond 32 or 64 processors
since the bus or the interconnection network becomes a bottleneck
Widely used for lower degrees of parallelism (4 to 8).
Shared Disk
All processors can directly access all disks
via an interconnection network,
but the processors have private memories.
The memory bus is not a bottleneck
Architecture provides a degree of fault-tolerance —
if a processor fails, the other processors can take over its tasks
since the database is resident on disks
that are accessible from all processors.
Ex: IBM Sysplex and DEC clusters (now part of Compaq)
running Rdb (now Oracle Rdb) were early commercial users
Downside:
bottleneck now occurs at:
interconnection to the disk subsystem.
Shared-disk systems can scale to a somewhat larger number of processors,
but communication between processors is slower.
Shared Nothing
Node consists of a processor, memory, and one or more disks.
Processors at one node communicate with another processor at another node
using an interconnection network.
A node functions as the server for:
the data on the disk or disks the node owns.
Ex: Teradata, Tandem, Oracle on nCUBE
Data accessed from local disks (and local memory accesses)
do not pass through interconnection network,
thereby minimizing the interference of resource sharing.
Shared-nothing multiprocessors :
can be scaled up to thousands of processors
without interference.
Main drawback: cost of communication and non-local disk access;
sending data involves software interaction at both ends.
Hierarchical
Combines characteristics of :
shared-memory, shared-disk, and shared-nothing architectures.
Top level is a shared-nothing architecture –
nodes connected by an interconnection network, and
do not share disks or memory with each other.
Each node of the system could be:
a shared-memory system with a few processors.
Alternatively, each node could be:
a shared-disk system, and each of the systems sharing a set of disks
could be a shared-memory system.
The complexity of programming such systems can be reduced by:
distributed virtual-memory architectures,
also called non-uniform memory architecture (NUMA).
1.7.Distributed Systems
Data spread over multiple machines
(also referred to as sites or nodes).
A network interconnects the machines.
Data shared by users on multiple machines.
Homogeneous distributed databases
Same software/schema on all sites, data may be partitioned among sites
Goal: provide a view of a single database, hiding details of distribution
Heterogeneous distributed databases
Different software/schema on different sites
Goal: integrate existing databases to provide useful functionality
Differentiate between local and global transactions
A local transaction accesses data in the single site at which the transaction was initiated.
A global transaction either accesses data in a site different from the one at which the transaction was initiated or accesses data in several different sites.
Trade-offs in Distributed Systems
Sharing data – users at one site able to access the data residing at some other sites.
Autonomy – each site is able to retain a degree of control over data stored locally.
Higher system availability through redundancy — data can be replicated at remote sites, and system can function even if a site fails.
Disadvantage: added complexity required to ensure proper coordination among sites.
Software development cost.
Greater potential for bugs.
Increased processing overhead.
Implementation Issues for Distributed Databases
Atomicity needed even for transactions that update data at multiple sites
The two-phase commit protocol (2PC) is used to ensure atomicity
Basic idea: each site executes the transaction until just before commit, and
then leaves the final decision to a coordinator.
Each site must follow the decision of the coordinator,
even if there is a failure while waiting for the coordinator's decision.
2PC is not always appropriate:
other transaction models based on:
persistent messaging, and workflows, are also used.
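The 2PC decision logic described above can be sketched as follows. This is a hedged sketch, not a full protocol: the site names and `prepare` callables are hypothetical, and a real coordinator also writes each decision to stable storage and handles timeouts and site failures.

```python
# Sketch of two-phase commit (2PC) decision logic.
def two_phase_commit(sites):
    """Phase 1: collect votes; Phase 2: broadcast the decision.

    `sites` maps a site name to a prepare() callable that returns
    True (ready to commit) or False (must abort).
    """
    # Phase 1: ask every participant to prepare and vote.
    votes = {name: prepare() for name, prepare in sites.items()}
    # Phase 2: commit only if every site voted ready; otherwise abort.
    # Every site must then follow this decision.
    decision = "commit" if all(votes.values()) else "abort"
    return decision, votes

decision, _ = two_phase_commit({
    "S1": lambda: True,
    "S2": lambda: True,
    "S3": lambda: False,   # a single "no" vote forces a global abort
})
print(decision)  # abort
```

The key property this models is atomicity: no site commits unless every site has voted that it is able to commit.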
Distributed concurrency control (and deadlock detection) required
Data items may be replicated to improve data availability
Distributed Query Processing
•For centralized systems, the primary criterion for measuring the cost of a particular strategy is the number of disk accesses.
•In a distributed system, other issues must be taken into account:
–The cost of data transmission over the network.
–The potential gain in performance from having several sites process parts of the query in parallel.
Parallel and Distributed Query Processing
•The world of parallel and distributed query optimization
–Parallel world: invent parallel versions of well-known algorithms, mostly based on broadcasting tuples and dataflow-driven computations.
–Distributed world: use plan modification and coarse-grain processing, exchanging large chunks of data.
Transformation rules for distributed systems
•Primary horizontally fragmented table:
–Rule 9: Set union is commutative:
E1 ∪ E2 = E2 ∪ E1
–Rule 10: Set union is associative:
(E1 ∪ E2) ∪ E3 = E1 ∪ (E2 ∪ E3)
–Rule 12: The projection operation distributes over union:
ΠL(E1 ∪ E2) = ΠL(E1) ∪ ΠL(E2)
•Derived horizontally fragmented table:
–The join through foreign-key dependency is already reflected in the fragmentation criteria
•Vertically fragmented tables:
–Rules: see the projection rules above.
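Rules 9, 10 and 12 can be checked on small horizontal fragments modeled as Python sets of tuples. The fragment contents below are made up for illustration; the point is that each site can apply the projection to its own fragment before shipping.

```python
# Horizontal fragments of a relation, modeled as sets of
# (branch, balance) tuples. Contents are invented.
E1 = {("Chennai", 500), ("Madurai", 900)}
E2 = {("Trichy", 700)}
E3 = {("Salem", 300)}

def project(L, E):
    """Projection Pi_L: keep only the attribute positions listed in L."""
    return {tuple(t[i] for i in L) for t in E}

# Rule 9: union is commutative.
assert E1 | E2 == E2 | E1
# Rule 10: union is associative.
assert (E1 | E2) | E3 == E1 | (E2 | E3)
# Rule 12: projection distributes over union, so projecting each
# fragment locally and unioning gives the same result as unioning
# first and projecting afterwards.
assert project([0], E1 | E2) == project([0], E1) | project([0], E2)
```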
Simple Distributed Join Processing
•Consider the following relational algebra expression, in which the three relations are neither replicated nor fragmented:
account ⋈ depositor ⋈ branch
•account is stored at site S1
•depositor at S2
•branch at S3
•For a query issued at site SI, the system needs to produce the result at site SI
Possible Query Processing Strategies
•Ship copies of all three relations to site SI and choose a strategy for processing the entire query locally at site SI.
•Ship a copy of the account relation to site S2 and compute temp1 = account ⋈ depositor at S2. Ship temp1 from S2 to S3, and compute temp2 = temp1 ⋈ branch at S3. Ship the result temp2 to SI.
•Devise similar strategies, exchanging the roles of S1, S2, S3.
•Must consider following factors:
–amount of data being shipped
–cost of transmitting a data block between sites
–relative processing speed at each site
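The trade-off among these strategies can be illustrated with a back-of-the-envelope cost model. All relation sizes and the per-block transfer cost below are invented; which strategy wins depends entirely on such numbers, especially the sizes of the intermediate join results.

```python
# Toy cost model for the shipping strategies above (block counts and
# transfer cost are invented; only data shipped between sites is costed).
SIZE = {"account": 1000, "depositor": 400, "branch": 50}
TEMP1 = 200            # assumed size of account ⋈ depositor
TEMP2 = 100            # assumed size of temp1 ⋈ branch
COST_PER_BLOCK = 1.0   # uniform network cost between any two sites

# Strategy 1: ship all three relations to SI, join locally there.
strategy1 = COST_PER_BLOCK * (SIZE["account"] + SIZE["depositor"] + SIZE["branch"])

# Strategy 2: ship account to S2, temp1 to S3, then temp2 to SI.
strategy2 = COST_PER_BLOCK * (SIZE["account"] + TEMP1 + TEMP2)

print(strategy1, strategy2)  # 1450.0 1300.0
```

With these assumed sizes the pipelined strategy 2 ships less data; if the intermediate results were larger than the base relations, strategy 1 would win instead, which is why the optimizer must also weigh transmission cost and per-site processing speed.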
Three-tier Architecture
PREPARED BY S.PON SANGEETHA