Sybase Database Architecture
ASE’s multi-threaded kernel uses ASE’s own threading model rather than the
operating system’s. The kernel handles scheduling and dispatching of user
threads and internal database threads: instead of relying on the operating
system to schedule work on one or more CPUs, ASE takes this responsibility on
itself. Figure 1 provides a high-level view of ASE’s multi-threaded
architecture running on a uniprocessor (single-CPU) host. The diagram is
simplified, but it serves the purpose. Client programs (which could be direct
applications) connect to the server directly (two-tier architecture), or
connections can come from application servers (three-tier architecture). ASE
is running as process ID 7434 under the operating system. The ASE scheduler
schedules the tasks to run; in this scenario task 9 is running and task 8 is
in the run queue.
Note that all ASE processes (internal or user) are threads rather than processes at the
OS level. When ASE is started with a single engine, only one ASE executable is
visible at the operating system level. To see all the logged-in threads,
one has to log in to ASE itself and run a SQL query. An ASE engine is defined as
a process executing the ASE binary to perform database services on behalf of user
requests and system services.
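As an illustration, the tasks known to the server can be listed from inside ASE itself using the standard sysprocesses table or the sp_who procedure (the exact output columns vary by ASE version):

```sql
-- List all tasks (user and system) known to this ASE instance.
-- Each row is an ASE-level task (spid), not an OS process.
select spid, status, hostname, program_name, cmd
from master..sysprocesses
order by spid
go

-- The system procedure sp_who gives a similar, formatted view.
sp_who
go
```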
In a system with multiple CPUs, one can enhance ASE’s performance by configuring ASE to run multiple engines. Each engine is a single operating system process; this process is called a data server. ASE can be run with a single engine, as in the example shown in Figure 1. With multiple engine processes, ASE utilizes Symmetric Multi-Processing (SMP), as shown in Figure 2. Depending on the number of CPUs available and the demands placed on ASE, a good practice is to run at most one ASE engine per available CPU; ASE will not allow the server to be started with more engines than CPUs. When another engine is added to ASE, the parent data server process spawns (forks) another OS process.
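A sketch of the relevant engine configuration, using standard ASE system procedures (the value of 4 engines here is an illustrative assumption, not a recommendation):

```sql
-- Upper limit on engines the server may bring online (static parameter).
sp_configure "max online engines", 4
go

-- How many engines to start when the data server boots.
sp_configure "number of engines at startup", 4
go
```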
ASE Databases :
- When you install ASE, the system-specific databases master, model, sybsystemprocs, sybsystemdb and tempdb are created automatically; none of them can be dropped.
- ASE’s master database contains the server-wide metadata (system tables) and the dynamic monitoring (MDA) tables. When you start up ASE, the master database is the first database recovered and brought online.
- The system database model provides the template metadata required to create a user or temporary database. The model database is small, on the order of 2 MB. The system database sybsystemprocs contains the system stored procedures. A Sybase DBA can add user-defined stored procedures to this database for server-wide usage.
- In a nutshell, the master, model and sybsystemprocs databases are as vital to ASE as SYSTEM, SYSAUX and the control files are to Oracle.
- The system database sybsystemdb provides the intra-server two-phase commit and the inter-server distributed transaction coordination modules for ASE.
- ASE’s temporary work space is the tempdb database. tempdb is a system database used by ASE to store temporary tables and temporary stored procedures, and for sorting, subqueries, aggregates with GROUP BY and ORDER BY, cursors, and so on. This database contains only temporary objects. The contents of tempdb are rebuilt from the model database each time the server is restarted.
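The system databases described above can be listed from the master database (a minimal sketch; column availability may differ slightly between ASE versions):

```sql
-- List every database known to this server; the system databases
-- master, model, sybsystemprocs, sybsystemdb and tempdb appear here.
select name, dbid, crdate
from master..sysdatabases
order by dbid
go
```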
Storage Concept :
• The basic unit of storage in ASE is the data page. A data page is the
minimum amount of data transferred between disk and the cache. The supported
page sizes for ASE are 2K, 4K, 8K and 16K. When you create an ASE server, you
specify its default page size; once specified, it cannot be changed later. ASE
currently offers one page size per server instance, from which the structures
on disk are constructed; however, ASE allows multiples of this page size to be
used for I/O. These are referred to as buffer pools.
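On a running server, the configured page size can be checked directly via the ASE global variable @@maxpagesize, which reports the server’s logical page size in bytes:

```sql
-- Report this server's page size, in bytes (e.g. 2048 for a 2K server).
select @@maxpagesize
go
```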
• The extent is the next unit of storage in ASE. Extents are always allocated
to a table, index, or LOB structure. An ASE extent is fixed at eight
contiguous data pages, so the smallest amount of space that a table or index
can occupy is one extent. Extents are deallocated only when all the pages in
an extent are empty. The logical and physical storage components of ASE are
shown in Figure 4.
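The page and extent allocation behind a table can be inspected with the standard sp_spaceused procedure (the table name below is a hypothetical example):

```sql
-- Report reserved, data, and index space for a (hypothetical) user table.
-- The figures reflect the underlying page and extent allocation.
sp_spaceused my_table
go
```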
Memory Management :
ASE uses shared memory to hold the data cache, the procedure cache, the user
log caches and ASE kernel data structures such as the sleep queue, runnable
task queue, lock chains and pending I/O. The main pool of data buffers is the
default data cache. Additionally, ASE enables one to create user-defined
caches, called named caches, which are widely deployed for performance
reasons. ASE allows one to create multiple buffer pool sizes in each cache,
thus enabling the optimizer to use different I/O sizes in a query.
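A hedged sketch of creating a named cache and adding a large-I/O buffer pool to it, using ASE’s standard cache procedures (the cache name, sizes, and table binding are illustrative assumptions):

```sql
-- Create a 100 MB named cache for frequently accessed tables.
sp_cacheconfig "hot_cache", "100M"
go

-- Carve a 20 MB pool of 16K buffers out of that cache,
-- enabling larger I/Os than the server's base page size.
sp_poolconfig "hot_cache", "20M", "16K"
go

-- Bind a (hypothetical) table in database mydb to the named cache.
sp_bindcache "hot_cache", "mydb", "my_table"
go
```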
The optimizer is the component of an RDBMS that determines the best way to
execute each query. For example, on a server created with an 8K page size, one
can have 8K, 16K, 32K and 64K buffer pools. ASE uses two distinct cache
replacement strategies: the Least Recently Used (LRU) replacement strategy and
the Most Recently Used (MRU, or fetch-and-discard) replacement strategy. The
LRU strategy is used for pages that a query needs to access more than once, or
pages that must be updated. In contrast, the MRU (fetch-and-discard) strategy
is used for pages that a query needs to read only once.
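The optimizer normally chooses the strategy itself, but ASE’s T-SQL allows it to be forced per table in a query via the prefetch clause (the table and index names here are hypothetical):

```sql
-- Force fetch-and-discard (MRU) for a large one-pass scan,
-- so the scan does not flush hotter pages out of the cache.
select count(*)
from big_table (index big_table_idx prefetch 16 mru)
go
```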
ASE uses a dynamic parameter called max memory to set the total memory
available to ASE; every other memory allocation is taken from this total. As
long as the host has enough shared memory, ASE’s memory can be increased
dynamically by raising the max memory parameter, without rebooting the server.
ASE allows various memory constructs, such as the default data cache,
procedure cache, named caches and others, to be configured dynamically by the
user.
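A minimal sketch of raising max memory online (the value is expressed in 2K units regardless of the server’s page size, and the number below is purely illustrative):

```sql
-- Raise total server memory to ~4 GB (2097152 x 2K units),
-- without a reboot, provided the host has the shared memory.
sp_configure "max memory", 2097152
go
```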
Task Management :
- The client program establishes a network socket connection to Adaptive Server.
- Adaptive Server assigns a task from the pool of tasks, which are allocated at start-up time. The task is identified by the Adaptive Server process identifier, or spid, which is tracked in the sysprocesses system table.
- Adaptive Server transfers the context of the client request, including information such as permissions and the current database, to the task.
- Adaptive Server parses, optimizes, and compiles the request.
- If parallel query execution is enabled, Adaptive Server allocates subtasks to help perform the parallel query execution. The subtasks are called worker processes.
- Adaptive Server executes the task. If the query was executed in parallel, the task merges the results of the subtasks.
- The task returns the results to the client, using TDS packets.